World Community Grid Forums
Thread Status: Active | Total posts in this thread: 12
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Can someone explain this to me?
----------------------------------------
I have two WU's that processed within 0.03 hrs of each other, or in other words, one took about 1.8 seconds longer than the other. What is confusing is that these 2 WU's processed at the same time on my Q6600, on the same machine, and completed within about 45 minutes of each other (they also started within about 45 minutes of each other). What's not clear is that the first link below shows a WU requiring more CPU (not much, mind you) than the second WU, yet it received less credit / points than the similar WU referenced by the second link. I have chased this kind of thing up and down all the documented computational algorithms for WCG I can locate and can't seem to make heads or tails of what appears to be a discrepancy in the claimed / granted results. Can someone explain this behaviour to me? TIA ---Barney

dddt0602i0524_100061: 4.04 hrs & 92.8 / 96.5 Claimed / Granted
and
dddt0602i0526_100200: 4.07 hrs & 93.3 / 92.5 Claimed / Granted

[Edit 1 times, last edit by Former Member at Aug 6, 2008 3:22:16 PM] |
Sekerob
Ace Cruncher Joined: Jul 24, 2005 Post Count: 20043 Status: Offline |
Partay Partay, Barney has just been promoted to the new title of Senior In-Questionneur.... 75 info requests in 2 weeks. Who betters him?
----------------------------------------
Kiddin'. Can't see your results, we're not allowed, no one is, so you need to do a copy / paste or wait on a WCG tech who has permission to look at the data.

The claim part is obvious: it should come directly from your computer (Device Benchmark values summed, divided by 480, multiplied by the run time). The grant part is described in a Support FAQ, which you found, but it is not spelled out in full, as it works off a pool of averages and also looks at the Mini Work Unit performance that sits at the beginning of each DDDT job. But with so few done, it is fairly certain your device has not built up enough 'credit' to have found the sweet spot. Mine has still to do its first 6.06 DDDT on Vista. ttyl
WCG
----------------------------------------
Please help to make the Forums an enjoyable experience for All!

[Edit 1 times, last edit by Sekerob at Aug 6, 2008 3:50:43 PM] |
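For readers who want to check this against their own results, the claimed-credit rule described above (Device Benchmark values summed, divided by 480, multiplied by the run time in hours) can be sketched in a few lines. This is only an illustration of the posted rule, not WCG's actual server code, and the benchmark figures below are nominal.

```python
# Sketch of the claimed-credit rule described in this thread.
# ASSUMPTIONS: run time is in hours; the two BOINC device benchmarks
# (Whetstone and Dhrystone) are in MFLOPS / MIPS; the divisor 480 is
# taken from the post above, not from WCG's server code.
def claimed_credit(whetstone, dhrystone, run_time_hours):
    """Benchmark values summed, divided by 480, times the run time."""
    return (whetstone + dhrystone) / 480 * run_time_hours

# A nominal 1000 MFLOPS + 1000 MIPS device crunching for 24 hours
# would claim (1000 + 1000) / 480 * 24, i.e. about 100 credits/day.
print(claimed_credit(1000, 1000, 24))
```

Note that this only reproduces the *claimed* side; the granted side, per the post above, depends on server-side averages and the Mini Work Unit, which a client-side formula cannot reproduce.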
JmBoullier
Former Community Advisor Normandy - France Joined: Jan 26, 2007 Post Count: 3715 Status: Offline |
"I have two WU's that processed within 0.03 hrs of each other or in other words, one took about 1.8 seconds longer than the other."

Barney, 0.03 hours is not 1.8 seconds but 1.8 minutes, or in other words 108 seconds. So the difference in the claimed values is fine. What is stranger (but common) is the difference between the granted values, and I have no explanation yet, only one guess: maybe it depends on how similar (or dissimilar) the real computation is to the computation of the mini-WU. I have asked the question, and maybe one of the techs will explain it for us.

Until yesterday my most striking cases were one WU credited 45% less than claimed and one credited 42% more. Both were in no-redundancy mode, i.e. without any partner influencing the crediting. But the majority of those new DDDT WUs are in the "few percents" range.

Cheers. Jean. |
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Edit - beaten to it by another CA but I am leaving this here so others can see the computation behind it.
----------------------------------------
One thing to notice is that there was not a 1.8-second difference between the two WUs. To get the true difference:

1 hour in seconds = 60 * 60 = 3600
Part of an hour (the difference) = 0.03 hours = 0.03 * 3600 = 108 seconds

So the total run time of WU1 (4.04 hrs) = (3600 * 4) + (0.04 * 3600) = 14400 + 144 = 14544 seconds.
The total run time of WU2 (4.07 hrs) = (3600 * 4) + (0.07 * 3600) = 14400 + 252 = 14652 seconds.

The difference in crunching time is 14652 - 14544 = 108 seconds (108 / 3600 = 0.03 hrs).

108 seconds is actually 1 min 48 s of a difference. Taking this together with what Sekerob has said should hopefully make it clearer.

[Edit 2 times, last edit by Former Member at Aug 6, 2008 4:27:25 PM] |
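The same hours-to-seconds conversion can be checked in a couple of lines (a sketch; the run times are the ones quoted in the thread):

```python
# Decimal hours to seconds, as worked out above.
def hours_to_seconds(hours):
    return hours * 3600  # 60 minutes * 60 seconds

wu1 = hours_to_seconds(4.04)   # 14544 seconds
wu2 = hours_to_seconds(4.07)   # 14652 seconds
diff = wu2 - wu1               # 108 seconds, i.e. 1 min 48 s
print(round(diff))
```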
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Sekerob, Stares, & JmBoullier,
Thanks for pointing out the errors of my ways... I knew that, honest I did, but I obviously got my fingers engaged before my complete thoughts had formulated... kinda like eating a runny, half-cooked egg!

Senior Questioner!!! Wooo WOoooo... Do I get a new badge for that???

OK, so here are 3 similar WU's:

4.04 Hrs to complete
4.07 Hrs to complete
4.16 Hrs to complete

So the strange thing is that the credits for what one would think is a CPU second don't seem to be very constant. They don't seem consistent with what the machine claims, nor with the computed value for the granted credit. I'm not trying to throw stones or cause any problems; I'm just really curious how these deviations can occur. I have not yet done any equivalence analysis on other kinds of WU's to ascertain whether the same behaviour is exhibited, and one of these afternoons I'll sit down and do that analysis just for my own curiosity.

This is exactly why I suggested building some kind of CPU coefficient table for various kinds of CPU's, adjusted for clock speed. From what I can see, the credit values would certainly become more proportionate using that kind of scheme.

Hmmmm... do you guys know which core of a multi-core processor a WU ran on? That might shed a little light on some of this... I dunno. Thanks for all your input / thoughts and guidance; I appreciate it, honest I do. Now lemme get outta here before the rotten tomatoes and eggs start flying my way!
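Since the linked result pages are not reproduced here, the credits-per-CPU-second comparison can be run on the two results quoted at the top of the thread (a sketch; the third result's credit values appeared only in the linked page, so it is left out):

```python
# Credits per CPU second for the two results quoted at the start of
# the thread (name, run time in hours, claimed credit, granted credit).
results = [
    ("dddt0602i0524_100061", 4.04, 92.8, 96.5),
    ("dddt0602i0526_100200", 4.07, 93.3, 92.5),
]
for name, hours, claimed, granted in results:
    secs = hours * 3600
    print(f"{name}: claimed/s = {claimed / secs:.5f}, "
          f"granted/s = {granted / secs:.5f}")
```

The claimed rates come out within about half a percent of each other (roughly 0.00638 credits per CPU second), while the granted rates differ by several percent, which is the asymmetry being described here.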
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Wow! I knew the granted credit was fairly stable these days, but I have to say, that's remarkably consistent. Kudos to the techs.
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
From what I can see, it's close out to the 3rd decimal place; beyond that it becomes a little wobbly. However, it's entirely possible that's because of how the CPU is sampled for its performance.

Further, since this is a multi-core CPU (this one is a quad), it's entirely possible each core has a slightly different clock... not much, mind you, but each core could be off in clock time by just a touch. So if, when the client code runs, it could determine which CPU it's on and report that, it might be interesting, and it could also explain some of the differences. Of course, if these tasks switch between processors as a normal part of the computational process, reporting which physical core is in use might be a moot point...

Thoughts? |
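As a hedged, Linux-only sketch of the idea above: the logical CPU a process last ran on can be read from /proc/self/stat (the "processor" field, number 39 per the proc(5) man page). Since the scheduler may migrate a process between cores at any moment, this is only a snapshot, which supports the "moot point" caveat.

```python
# Linux-only sketch: read which logical CPU this process last ran on
# from /proc/self/stat. Field 39 of that file is "processor" per
# proc(5). The scheduler can migrate the process between cores at any
# moment, so this is a snapshot, not a guarantee of where a work unit
# actually crunched.
def current_cpu():
    with open("/proc/self/stat") as f:
        stat = f.read()
    # Field 2 (the command name) can contain spaces, so split after
    # its closing parenthesis; the remainder starts at field 3.
    rest = stat.rsplit(")", 1)[1].split()
    return int(rest[36])  # field 39 overall = index 36 after field 3
```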
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
The difference to the 3rd decimal place is very good. Remember that there is a device benchmark (Whetstone and Dhrystone), and if *anything* else is happening on the core, it will skew the result.

The Mini Work Unit performance will also have a bearing. |
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Stares,
1) I agree with you that the data is pretty consistent to the 3rd decimal place. Where it gets a bit wobbly is the 4th decimal and beyond.

2) Because I was curious, I decided to work out another WU to see if the same points / second held true, and it appears, at least for my system, that it came out within the limits of what would have been expected. I suspect this is because of some other influences and normalization algorithms, likely based on some standard deviations or some memory-loss curve, but who knows. Here's the data from another work unit:
JmBoullier
Former Community Advisor Normandy - France Joined: Jan 26, 2007 Post Count: 3715 Status: Offline |
Hi Barney! The highlighted number is wrong. The correct one is 0.00638, which makes things even more remarkably constant as far as claimed credits are concerned.

By the way, if you expect such levels of precision (less than 1%), you should keep an eye on the benchmarks on each machine. You can search in (a copy of) the stdoutdae.txt file to find past ones, and you will probably see that the benchmarks themselves are not perfectly constant.

Cheers. Jean. |
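One quick way to review past benchmarks along the lines suggested above (a sketch: it assumes the usual BOINC client log wording, with "Whetstone" and "Dhrystone" appearing on the benchmark result lines; adjust the keywords if your client version logs them differently):

```python
# Pull benchmark-related lines out of a BOINC client log. The keyword
# match is an assumption about the log wording; the default path is
# the log file name mentioned above.
def benchmark_lines(path="stdoutdae.txt"):
    with open(path, encoding="utf-8", errors="replace") as f:
        return [line.rstrip() for line in f
                if "Whetstone" in line or "Dhrystone" in line]
```

Comparing these lines across a few days shows how much the benchmarks (and therefore the claimed credit) drift on a given machine.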