World Community Grid Forums
Thread Status: Active | Total posts in this thread: 87
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Language is great, is it not, Sgt.Joe? I translated what Jean said as there being a 0.25% bandwidth to either side, or, consulting MB quickly: "scattering of the values of a frequency distribution from an average". That's pretty darn close and should drive anyone to Nirvanean happiness.
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3716 | Status: Offline
Good translation Sek!
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3716 | Status: Offline
First set of comments on 11 results (all in single redundancy) on a Q6600 clocked at 2.88 GHz under XP32. This device was running DDDT exclusively before the change, so I also have a set of 14 "old" DDDT results for comparison.
1. WUs are noticeably longer now. The average was 3.68 hours for the old set, 4.67 currently.
2. Average claimed credit per hour is unchanged at 17.50.
3. Average granted credit per hour is now 18.33 vs 17.86 before. Not too different, and at least it is on the side I prefer. For the old set the total of granted credits was 2.10% more than the total of claimed credits; for the new ones it is 4.38% more.
4. Where I am more surprised is that I have as many discrepancies now as I had before. For the 14 old ones I had one granted 8.42% less than claimed, one at 12.15% more, and one at 17.40% more. But as you all know, that was "because the partner was so different in his claims". In the new set of 11 results I have one at -8.91%, one at +9.80%, one at +20.53% and one at +21.51%, and this time I don't know "whose fault it is". Since those last two WUs were the second and the third returned, I can also wonder whether it was some kind of teething problem which could be corrected in the following batches or through some magic of the algorithms.

That's all for this device, folks!

Jean.
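For anyone who wants to redo this kind of comparison from their own result log, here is a minimal sketch of the arithmetic (the figures in it are placeholders, not Jean's actual results):

```python
# Minimal sketch of the claimed-vs-granted comparison above.
# The tuples are made-up placeholders, not real WCG result data.
results = [
    # (cpu_hours, claimed_credit, granted_credit)
    (4.50, 78.8, 80.1),
    (4.80, 84.0, 76.5),
    (4.60, 80.5, 97.8),
]

total_hours = sum(h for h, _, _ in results)
total_claimed = sum(c for _, c, _ in results)
total_granted = sum(g for _, _, g in results)

print(f"claimed per hour: {total_claimed / total_hours:.2f}")
print(f"granted per hour: {total_granted / total_hours:.2f}")
print(f"granted vs claimed, whole set: {100 * (total_granted / total_claimed - 1):+.2f}%")

# per-result discrepancies, as listed in point 4
for hours, claimed, granted in results:
    print(f"{hours:.2f} h: {100 * (granted / claimed - 1):+.2f}%")
```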
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3716 | Status: Offline
The same quad as above (Q6600 clocked at 2.88 GHz) also has another life under 64-bit Ubuntu. The floating-point benchmark is the same as in 32-bit mode, but the fixed-point (integer) one is about 50% higher, which is normal.
This UB64 device usually runs HCC only, which is what it does best (less than 3 hours each on average). For testing I have crunched 17 new DDDT WUs on this device. Here it goes...

1. Average duration is 3.36 hours, ranging from 2.54 to 4.64 hours. I don't know how to split the difference with the XP32 set (4.67) between 64-bit mode and possibly very different batches.
2. The average claimed credit per hour is 22.00 (21.52 for HCC-64, 17.50 in XP32), consistent with the different benchmarks.
3. The average granted credit per hour is only 19.91, 9.52% below the claimed average and quite disappointing, although this is not unusual under Linux, unfortunately. However, for the HCC set of 66 WUs that I can still analyze, the average granted is 21.39, close to the claimed 21.52.
4. Here again the dispersion can really be called discrepancy, Sgt.Joe! Only 3 results were granted more than claimed, namely +0.13%, +0.14% and +3.98%. All others are below, most around -11%, with the "winners" at -16.61%, -19.64%, -22.10% and -30.24%!!!

This device is back to HCC for the time being, guess why?...

Cheers.
Jean.
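For context on point 2: older BOINC clients claimed credit from the host benchmarks, and as far as I recall the classic formula simply averaged the Whetstone (floating-point) and Dhrystone (integer) scores, so a higher 64-bit integer benchmark raises the claim even if the science code is no faster. A rough sketch, with made-up benchmark values and the constant treated as approximate:

```python
# Rough sketch of the classic benchmark-based BOINC credit claim:
#   claimed ~= cpu_days * 100 * (whetstone_gflops + dhrystone_gips) / 2
# The 100-credits-per-reference-day constant is approximate here.

def claimed_per_cpu_hour(whetstone_gflops, dhrystone_gips):
    """Claimed credit per CPU hour for a host with the given benchmark scores."""
    return 100.0 * (whetstone_gflops + dhrystone_gips) / 2.0 / 24.0

# Made-up benchmark values: same floating-point score in both modes,
# integer score about 50% higher under the 64-bit client, as observed above.
print(claimed_per_cpu_hour(2.4, 3.8))  # hypothetical 32-bit host
print(claimed_per_cpu_hour(2.4, 5.7))  # same host under 64-bit
```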
jonathandl
Advanced Cruncher | Joined: Nov 12, 2007 | Post Count: 106 | Status: Offline
I presume the mini-workunit is a workunit-within-a-workunit, and every singly-redundant workunit will have one? If so, is it calculated at the beginning of each workunit, or towards the end?
I would suggest calculating it near the end because this would be more likely to detect any conditions on the client computer that might have corrupted the computer's memory during the crunching of the real data. What do you think?
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
At the beginning, and my logic would say that if the mini result fails to compute correctly, why bother going for the next 7 hours?
I understand the mini test takes 10 minutes on a reference machine... yes, I can see some doing their abacus exercises: for 10 minutes we save an average of 7 hours per redundant job. The error rate will tell as data is collected. At the start 11.5% needed a second copy; I have not heard how it is now, after about 7 days on the new release.
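A back-of-the-envelope check of that saving, using only the figures quoted above (roughly 7 CPU hours per result, a 10-minute mini test, and 11.5% of results needing a second copy) and ignoring the random spot checks discussed later in the thread:

```python
# Back-of-the-envelope CPU cost per validated workunit, using the rough
# figures quoted above; these are not official WCG statistics.
full_wu_hours = 7.0            # average crunch time per result
mini_test_hours = 10.0 / 60.0  # mini test on a reference machine
resend_rate = 0.115            # share of results needing a second copy

double_redundancy = 2 * full_wu_hours  # every WU crunched twice
single_redundancy = (1 + resend_rate) * (full_wu_hours + mini_test_hours)

print(f"double redundancy: {double_redundancy:.1f} CPU hours per validated WU")
print(f"single redundancy: {single_redundancy:.1f} CPU hours per validated WU")
print(f"estimated saving : {100 * (1 - single_redundancy / double_redundancy):.0f}%")
```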
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hey Sek,
Your chart says the end date for this project is March 10. Will this need to be adjusted now that this is Single Redundancy?
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Hi there Brinkster,
That was adjusted previously, and now WCG has apparently slowed the project down a bit because the scientists can't keep up. This Zero Redundancy (ZR) has been in the planning for a long time. Word behind the scenes is that in fact there is work till 2013, but they'll assess in early 2009, based on what has been learned, where to narrow it down. Also, phase 2 has still to run; I've put that into the estimate. No word on whether that will be ZR or single redundancy. That phase will use CHARMM.

Crunch On
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Here are the details (pulled from the beta test):

Single validation is going to work in the following way: workunits are loaded into BOINC with a quorum size of 1. This means that 1 replica is created and it will only take 1 successfully run result in order for validation to be attempted on the result. However, there are a few checks in place:

2) When validation is attempted, the value for the host is checked again. If the value has fallen below the required level, then the result is marked INCONCLUSIVE and another result is sent.

3) Additionally, during validation, there is a certain random chance that the result will be flagged to be checked again. Any result picked in this case will be marked INCONCLUSIVE until the validation with the additional result occurs. All computers are subject to random checking.

OK, I'm back with another question. Is there a means, mechanism, or technique to differentiate an INCONCLUSIVE from step 2 versus one from step 3? I've turned out 2 INCONCLUSIVE results within an hour or so of each other, and now I'm curious whether the machine is behaving badly or whether these are just some of the normal statistical validation checks. Here's what I'm seeing:
[result listings for the three WUs]
I can only surmise the 3 WUs above fall into the bullet 2 category, but I can't imagine why that would be the case. How far does the "value for the host" have to fall for this to kick in? I'm presuming this is once again testing processor speed or something along those lines.
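Reading the quoted description literally, the decision could be sketched roughly as below. Every name, threshold and probability in the sketch is an illustrative assumption, not WCG's actual validator code; the point is only that, as described, both checks end in the same visible INCONCLUSIVE status, so the result page alone does not say which one fired.

```python
# Hypothetical sketch of the quorum-of-1 validation checks quoted above.
# Threshold, probability and names are assumptions for illustration only.
import random

RELIABILITY_THRESHOLD = 0.95     # assumed "required level" for the host
RANDOM_CHECK_PROBABILITY = 0.05  # assumed spot-check rate

def validate_single_result(host_reliability: float) -> str:
    # Check 2: the host's value is re-checked at validation time.
    if host_reliability < RELIABILITY_THRESHOLD:
        return "INCONCLUSIVE (host fell below the required level; extra result sent)"
    # Check 3: random spot check; every computer is subject to it.
    if random.random() < RANDOM_CHECK_PROBABILITY:
        return "INCONCLUSIVE (random check; extra result sent)"
    return "VALID"

print(validate_single_result(host_reliability=0.99))
```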