World Community Grid Forums
Thread Status: Active | Total posts in this thread: 7
TimAndHedy
Senior Cruncher | Joined: Jan 27, 2009 | Post Count: 267 | Status: Offline
Why would you have single vs. double quorum? Is it only about speed?
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3716 | Status: Offline
No, speed (of the machine, I presume?) is not the reason.
----------------------------------------
Please see this FAQ for all details: Credit Method for DDD-T, FA@H and HFCC - Zero Redundancy distribution.
Cheers. Jean.
TimAndHedy
Senior Cruncher | Joined: Jan 27, 2009 | Post Count: 267 | Status: Offline
I just don't understand the "why". Did I miss it in there?
Does HFCC have enough randomness in it that redundancy checks are not as valuable as they are in HCC?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The choice of zero redundancy for several projects must have to do with the speed at which a given amount of science work can be done with a given number of systems. Less redundancy means you can complete the science more quickly with the same amount of computing power.

----------------------------------------
Apparently the science software used for HFCC, combined with the validation logic available on the server side, has (I assume) proved reliable enough to avoid false positives. Also, WCG only sends zero-redundancy jobs to computers which are deemed reliable, and even those get a double-quorum job now and then to check them.

Why isn't this zero redundancy used for all projects (e.g. HCC)? No idea. I can only guess it has to do with the kind of science work being done (HCC uses a completely different science core than HFCC), which means zero redundancy can't be applied there reliably.

[Edit 1 time, last edit by Former Member at Apr 30, 2009 4:35:59 AM]
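(To illustrate the scheme described above, here is a rough sketch of how a server might decide, per work unit, whether to send one copy or two. It is only a guess at the logic, not WCG's actual scheduler code; the function name, spot-check rate and reliability flag are invented for illustration.)

```python
# Hypothetical sketch of a zero-redundancy distribution decision:
# reliable hosts normally get single-copy work, a small random fraction
# is still sent out twice as a spot check, and unproven hosts always
# get a quorum of 2. Names and rates are assumptions, not WCG's values.
import random

SPOT_CHECK_RATE = 0.05   # assumed fraction of reliable-host work double-checked anyway

def choose_quorum(host_is_reliable: bool) -> int:
    """Return how many copies of a work unit to distribute."""
    if not host_is_reliable:
        return 2                      # unproven host: full redundancy
    if random.random() < SPOT_CHECK_RATE:
        return 2                      # occasional spot check of a reliable host
    return 1                          # zero redundancy: one copy is enough

# Example: a reliable host mostly receives quorum-1 work,
# an unproven host always receives quorum-2 work.
print(choose_quorum(host_is_reliable=True))
print(choose_quorum(host_is_reliable=False))
```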
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3716 | Status: Offline
Ideally all projects should be run without redundancy because it is obviously a waste of computing power to compute the same job twice (and in the past it was three times!).
----------------------------------------
However, WCG and the scientists need to be sure that the results are reliable, i.e. that they have been computed correctly, and for some projects like HCC and TCEP there is currently no other way to be sure than to use redundancy. The reason may be the nature of the work done (HCC) or the software used (TCEP).

DDDT, FAAH and HFCC all use the same software, Autodock, for which a way has been found to check the reliability of a result without having to duplicate the whole computation. To sketch it simply: each WU contains a mini-WU whose result is known, and if this result is OK and the device is known to be reliable, it is assumed that the rest of the work has been done seriously (see paragraph 4 of the post I referenced in my previous post). This way only the computation of the mini-WU is "wasted", not the whole runtime.

In addition, devices are checked randomly, or when a result seems unusual (e.g. much shorter or longer than the average for this project and this device), to make sure that they are still reliable. This is done by using redundancy for a number of WUs sent to that device.

NRW and HPF2 use another method for reducing duplicate work. Have a look at their respective posts in the thread I indicated earlier to see how the techs have implemented the distribution for these projects.

I hope I have not ended up confusing you even more. Jean.
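(A minimal sketch of the kind of check Jean describes for the Autodock-based projects: a small embedded job with a known answer, plus a runtime sanity check, with full redundancy used only when something looks wrong. The function, thresholds and return values are hypothetical illustrations, not WCG's real validator.)

```python
# Hypothetical sketch: each work unit carries an embedded "mini WU" whose
# correct answer is already known. If that answer comes back right from a
# device already deemed reliable, the full result is accepted without
# recomputing it; otherwise, or if the runtime looks unusual, redundancy
# is applied. All names and thresholds here are invented for illustration.

def validate_result(mini_wu_answer, known_answer,
                    runtime_hours, avg_runtime_hours,
                    device_is_reliable) -> str:
    # Known-answer check: only the small embedded job is verified.
    if mini_wu_answer != known_answer:
        return "reissue"              # result untrusted, duplicate the whole WU

    # Sanity check on runtime: a result much faster or slower than this
    # device's average for the project triggers a redundancy spot check.
    if not (0.5 * avg_runtime_hours < runtime_hours < 2.0 * avg_runtime_hours):
        return "spot-check"           # send a duplicate copy to re-verify the device

    return "accept" if device_is_reliable else "spot-check"

print(validate_result("X", "X", 6.0, 6.5, device_is_reliable=True))   # accept
print(validate_result("Y", "X", 6.0, 6.5, device_is_reliable=True))   # reissue
```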
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
The nutshell difference between a full quorum of 2 and the predictive sciences such as those running on AutoDock and Rosetta is that the results for HCC require absolute accuracy, since they become part of a reference database. The same was true, for instance, for the Genomic Comparison project. For HPF2, e.g., the ensemble of tens of thousands of results is examined to see whether there are patterns across the many simulations, and the scientists pick and choose from those the ones they think are the best.
----------------------------------------
WCG
Please help to make the Forums an enjoyable experience for All!
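(The contrast Sekerob draws can be pictured with a toy example: a reference-database project needs two matching copies of the same result before it is stored, while an ensemble project ranks many independent predictions and keeps only the best, so a single bad result is simply never selected. The data, scoring and function names below are invented for illustration.)

```python
# Hypothetical illustration of the two validation philosophies.

def accept_for_reference_db(result_a, result_b):
    # Quorum of 2: store the result only if both independent copies match exactly.
    return result_a if result_a == result_b else None

def select_from_ensemble(results, keep=3):
    # Zero redundancy: rank many independent predictions by their score
    # and keep the best few; outliers are ignored rather than re-verified.
    return sorted(results, key=lambda r: r["energy"])[:keep]

ensemble = [{"model": i, "energy": e} for i, e in enumerate([-5.2, -7.9, 3.0, -6.1])]
print(accept_for_reference_db("ABC123", "ABC123"))   # stored, copies agree
print(select_from_ensemble(ensemble))                # best (lowest-energy) models kept
```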
TimAndHedy
Senior Cruncher | Joined: Jan 27, 2009 | Post Count: 267 | Status: Offline
That seems reasonable.
I would expect that even the reliable computers have the occasional problem. I just worry about a single mistake causing them to miss something important. The quorum-of-2 system pretty much guarantees that won't happen, but if they are computing multiple variations of very similar sets of data, maybe it's not such an issue.