Total posts in this thread: 87
Posts: 87   Pages: 9   [ Previous Page | 1 2 3 4 5 6 7 8 9 | Next Page ]
This topic has been viewed 14603 times and has 86 replies.
Sekerob
Ace Cruncher
Joined: Jul 24, 2005
Post Count: 20043
Status: Offline
Re: DDDT Single Redundancy Change

Language is great, is it not, Sgt.Joe? I translated what Jean said as there being a 0.25% bandwidth to either side, or, consulting MB quickly: "scattering of the values of a frequency distribution from an average". That's pretty darn close and should drive anyone to Nirvanean happiness.
----------------------------------------
WCG Global & Research > Make Proposal Help: Start Here!
Please help to make the Forums an enjoyable experience for All!
[Aug 1, 2008 3:50:49 PM]
JmBoullier
Former Community Advisor
Normandy - France
Joined: Jan 26, 2007
Post Count: 3716
Status: Offline
Re: DDDT Single Redundancy Change

Good translation Sek!
----------------------------------------
Team--> Decrypthon -->Statistics/Join -->Thread
[Aug 1, 2008 4:29:23 PM]
JmBoullier
Former Community Advisor
Normandy - France
Joined: Jan 26, 2007
Post Count: 3716
Status: Offline
Re: DDDT Single Redundancy Change

First set of comments, on 11 results (all in single redundancy) from a Q6600 clocked at 2.88 GHz under XP32. This device was running DDDT exclusively before the change, so I also have a set of 14 "old" DDDT results for comparison.

1. WUs are noticeably longer now. Average was 3.68 hours for the old set, 4.67 currently.

2. Average claimed credit per hour is unchanged at 17.50.

3. Average granted credit per hour is now 18.33 vs 17.86 before. Not too much different, and at least it is on the side I prefer. smile
For the old set the total of granted credits was 2.10% more than the total of claimed credits.
For the new ones it is 4.38% more.

4. Where I am more surprised is that I have as many discrepancies now as I had before. For the 14 old ones I had one granted 8.42% less than claimed, one at 12.15% more, and one at 17.40% more. But as you all know, that was "because the partner was so different in his claims". smile
In the new set of 11 results I have one at -8.91%, one at +9.80%, one at +20.53% and one at +21.51%. And this time I don't know "whose fault it is". smile
Since those last two WUs were the second and third returned, I can also wonder if it was some kind of teething problem which could be corrected in the following batches, or through some magic of the algorithms.

That's all for this device, folks! Jean.
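For anyone who wants to redo this arithmetic on their own result lists, a minimal Python sketch (the helper names and sample figures are illustrative, not from WCG):

```python
def credit_per_hour(credits, hours):
    """Average credit per CPU hour over a set of results."""
    return sum(credits) / sum(hours)

def discrepancy_pct(claimed, granted):
    """Granted vs. claimed credit, as a signed percentage of the claim."""
    return (granted - claimed) / claimed * 100.0

# Illustrative results: (claimed credit, granted credit, CPU hours)
results = [(64.4, 59.0, 3.68), (81.7, 85.3, 4.67), (70.0, 84.1, 4.10)]
claimed = [c for c, _, _ in results]
granted = [g for _, g, _ in results]
hours   = [h for _, _, h in results]

print(round(credit_per_hour(claimed, hours), 2))   # average claimed per hour
print(round(credit_per_hour(granted, hours), 2))   # average granted per hour
print([round(discrepancy_pct(c, g), 1) for c, g in zip(claimed, granted)])
```

The same two helpers reproduce both the per-hour averages and the per-result discrepancy percentages quoted in the post.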
[Aug 1, 2008 5:03:15 PM]
JmBoullier
Former Community Advisor
Normandy - France
Joined: Jan 26, 2007
Post Count: 3716
Status: Offline
Re: DDDT Single Redundancy Change

The same quad as above (Q6600 clocked at 2.88 GHz) also has another life under Ubuntu 64. The floating-point benchmark is the same as in 32-bit mode but the fixed-point one is about 50% higher, which is normal.
This UB64 device usually runs HCC only, which is what it does best (less than 3 hours each on average). For testing I have crunched 17 new DDDT WUs on this device. Here it goes...

1. Average duration is 3.36 hours, ranging from 2.54 to 4.64 hours. I don't know how to split the difference with the XP32 set (4.67) between the 64 mode and possibly very different batches.

2. The average claimed credit per hour is 22.00 (21.52 for HCC-64, 17.50 in XP32), consistent with the different benchmarks.

3. The average granted credit is only 19.91, 9.52% below the claimed average, which is quite disappointing, although not unusual under Linux, unfortunately. However, for the HCC set of 66 WUs that I can still analyze, the average granted is 21.39, close to the claimed 21.52.

4. Here again the dispersion can really be called discrepancy, SgtJoe!
Only 3 results are granted more than claimed, namely +0.13%, +0.14% and +3.98%.
All others are below, most around -11%, with the "winners" at -16.61%, -19.64%, -22.10% and -30.24%!!!

This device is back to HCC for the time being, guess why?... smile
Cheers. Jean.
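The over/under split Jean describes can be tallied mechanically; a small sketch (plain Python, the function name is made up), using the extreme percentages quoted above:

```python
def dispersion_summary(pct_diffs):
    """Split granted-vs-claimed percentage differences into results granted
    more than claimed and results granted less, plus the worst case."""
    over  = sorted(d for d in pct_diffs if d >= 0)
    under = sorted(d for d in pct_diffs if d < 0)
    return {"granted_more": over, "granted_less": under, "worst": min(pct_diffs)}

# The extremes reported for the Ubuntu-64 DDDT set above
diffs = [0.13, 0.14, 3.98, -16.61, -19.64, -22.10, -30.24]
summary = dispersion_summary(diffs)
print(len(summary["granted_more"]), "results granted more than claimed")
print("worst case:", summary["worst"])
```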
[Aug 1, 2008 5:53:59 PM]
Sekerob
Ace Cruncher
Joined: Jul 24, 2005
Post Count: 20043
Status: Offline
Re: DDDT Single Redundancy Change


[Aug 1, 2008 6:07:36 PM]
jonathandl
Advanced Cruncher
Joined: Nov 12, 2007
Post Count: 106
Status: Offline
Re: DDDT Single Redundancy Change

I presume the mini-workunit is a workunit-within-a-workunit, and every singly-redundant workunit will have one? If so, is it calculated at the beginning of each workunit, or towards the end?

I would suggest calculating it near the end because this would be more likely to detect any conditions on the client computer that might have corrupted the computer's memory during the crunching of the real data. What do you think?
[Aug 6, 2008 3:21:11 PM]
Sekerob
Ace Cruncher
Joined: Jul 24, 2005
Post Count: 20043
Status: Offline
Re: DDDT Single Redundancy Change

At the beginning, and my logic would say that if the mini result fails to compute correctly, why bother going for the next 7 hours?

I understand the mini test takes 10 minutes on a reference machine... yes, I can see some doing their abacus exercises: for 10 minutes we save an average of 7 hours per redundant job.

The error rate will tell as data is collected. At the start 11.5% of results needed a second copy; I have not heard how it stands now, about 7 days into the new release.
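That abacus exercise, sketched in Python (the 10-minute mini-test and the 11.5% resend rate are the figures quoted above; the 7-hour job length is taken from the earlier post and used here as an assumption):

```python
def expected_hours_saved(full_hours, mini_minutes, resend_rate):
    """Rough expected CPU hours saved per workunit by dropping the redundant
    copy: the fraction of jobs that no longer need a second full run, minus
    the up-front mini-test overhead paid on every job."""
    avoided_copies = full_hours * (1.0 - resend_rate)  # second runs avoided
    mini_cost = mini_minutes / 60.0                    # mini-test overhead
    return avoided_copies - mini_cost

# Roughly six CPU hours saved per workunit under these assumptions
print(round(expected_hours_saved(7.0, 10.0, 0.115), 2))
```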
[Aug 6, 2008 4:07:20 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: DDDT Single Redundancy Change

Hey Sek,
Your chart says the end date for this project is March, 10. Will this need to be adjusted now that this is Single Redundancy?
[Aug 6, 2008 6:50:41 PM]
Sekerob
Ace Cruncher
Joined: Jul 24, 2005
Post Count: 20043
Status: Offline
Re: DDDT Zero Redundancy Change

Hi there Brinkster,

That was adjusted previously, and now WCG has apparently slowed the project down a bit because the scientists can't keep up. shock This Zero Redundancy (ZR) has been in the planning for a long time. Word behind the scenes is that there is in fact work till 2013, but they'll assess in early 2009, based on what's learned, where to narrow it down. Also, phase 2 still has to run; I've factored that into the estimate. No word on whether that will be ZR or single redundancy. That phase will use CHARMM.

Crunch On
[Aug 6, 2008 7:44:12 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: DDDT Single Redundancy Change

Here are the details (pulled from the beta test):

Single validation is going to work in the following way:

Workunits are loaded into BOINC with a quorum size of 1. This means that 1 replica is created and it will only take 1 successfully run result in order for validation to be attempted on the result.

However, there are a few checks in place:

2) When validation is attempted, the value for the host is checked again. If the value has fallen below the required level, then the result is marked INCONCLUSIVE and another result is sent.

3) Additionally, during validation, there is a certain random chance that the result will be flagged to be checked again. Any result picked in this case will be marked INCONCLUSIVE until the validation with the additional result occurs. All computers are subject to random checking.
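Read literally, those checks could be modeled like this. A sketch only, in Python: this is not WCG's actual validator code, and every name, threshold, and probability here is hypothetical:

```python
import random

def validate(result_ok, host_value, required_level, spot_check_prob,
             rng=random.random):
    """Illustrative quorum-1 validation decision built from the checks above."""
    if not result_ok:
        return "ERROR"            # a failed result never validates on its own
    if host_value < required_level:
        return "INCONCLUSIVE"     # check 2: host value has fallen too low
    if rng() < spot_check_prob:
        return "INCONCLUSIVE"     # check 3: random spot check, any computer
    return "VALID"                # single result accepted outright

# A reliable host, no spot check drawn: accepted on one result
print(validate(True, 0.95, 0.80, 0.05, rng=lambda: 0.99))
```

Either an INCONCLUSIVE path leads to a second copy being sent, which matches the "Initial Replication: 2" rows shown below.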


Ok... I'm back with another question.

Is there a means, mechanism, or technique to tell whether an INCONCLUSIVE came from check 2 or from check 3?

I've turned out 2 INCONCLUSIVE results within an hour or so of each other, and now I'm curious whether the machine is behaving badly or whether these are just some of the normal statistical validation checks.

Here's what I'm seeing:




Workunit Status

Project Name: Discovering Dengue Drugs - Together
Created: 08/06/2008 08:36:50
Name: dddt0602j0541_100456
Minimum Quorum: 1
Initial Replication: 2

Result Name             Status        Sent Time            Due/Return Time      CPU Time (h)  Claimed/Granted Credit
dddt0602j0541_100456_1  In Progress   08/07/2008 05:42:26  08/09/2008 15:18:26  0.00          0.0 / 0.0
dddt0602j0541_100456_0  Inconclusive  08/07/2008 01:10:03  08/07/2008 04:24:46  2.54          58.3 / 0.0



and



Workunit Status

Project Name: Discovering Dengue Drugs - Together
Created: 08/05/2008 21:18:31
Name: dddt0602i0535_100316
Minimum Quorum: 1
Initial Replication: 2

Result Name             Status        Sent Time            Due/Return Time      CPU Time (h)  Claimed/Granted Credit
dddt0602i0535_100316_1  In Progress   08/06/2008 21:53:09  08/09/2008 07:29:09  0.00          0.0 / 0.0
dddt0602i0535_100316_0  Inconclusive  08/06/2008 18:11:35  08/06/2008 21:47:49  3.04          69.6 / 0.0



and




Workunit Status

Project Name: Discovering Dengue Drugs - Together
Created: 08/05/2008 21:10:43
Name: dddt0602i0535_100707
Minimum Quorum: 1
Initial Replication: 2

Result Name             Status        Sent Time            Due/Return Time      CPU Time (h)  Claimed/Granted Credit
dddt0602i0535_100707_1  In Progress   08/06/2008 21:52:02  08/09/2008 21:27:18  0.00          0.0 / 0.0
dddt0602i0535_100707_0  Inconclusive  08/06/2008 17:06:42  08/06/2008 21:47:49  3.98          91.2 / 0.0



I can only surmise the 3 WUs above fall into the check-2 category... but I can't imagine why this would be the case. How far does the "value for the host" have to fall for this to kick in? I'm presuming this is once again testing processor speed or something along those lines.
[Aug 7, 2008 6:02:44 AM]