World Community Grid Forums
Category: Beta Testing > Forum: Beta Test Support Forum > Thread: Linux Only Beta Test
Thread Status: Locked | Total posts in this thread: 177
uplinger
Former World Community Grid Tech | Joined: May 23, 2005 | Post Count: 3952 | Status: Offline
What happens if I don't complete a WU in 8 hours?
The work units are designed to exit after 8 hours of computational time. Multiple calculations are sent with each work unit, the first being the most important to the researchers. The others help narrow their search and are extra fine tuning. So the extra work done by the more powerful machines is not discarded; it is actually sent to the researchers. There will be more on this in the future when the project launches, but right now this is all the info we can provide.
-Uplinger
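
A minimal sketch of how such a time-budgeted work unit might behave, assuming the application checks elapsed CPU time between sub-calculations; the names and structure are illustrative, not taken from the actual beta application:

import time

CPU_BUDGET_SECONDS = 8 * 60 * 60   # the 8-hour cutoff described above

def run_work_unit(sub_jobs):
    # sub_jobs: callables packaged into one WU, most important first.
    start = time.process_time()
    completed = []
    for job in sub_jobs:
        if time.process_time() - start >= CPU_BUDGET_SECONDS:
            break                      # stop cleanly once the budget is spent
        completed.append(job())        # slower hosts finish fewer sub-jobs
    return completed                   # everything finished is still reported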
uplinger
Former World Community Grid Tech | Joined: May 23, 2005 | Post Count: 3952 | Status: Offline
What I don't know is whether there will be 22,000 jobs with two copies, i.e. 44,000 WUs, or if the announced 22,000 already takes the 2 copies into account. Another way to look at how long it will run is by name. The betas seem to be coming in order. Prefix A.21 came yesterday and A.22 is coming out now. What's the final batch? Today I got a repair job for A.20. Was it the first batch?
There are no real batch identifiers. We have packaged multiple batches from the researchers into 2 batches randomly. We have actually stopped the 2nd batch from running, as we still have unsent work units to run in the first batch. We are analyzing what has been run so far and sending results back to the researchers as well.
-Uplinger
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
So there is a slow machine being cut off at 8:00 hours and a more powerful machine doing all 16. As observed, the credit is a simple 1/2 of the total claimed by both. I would have expected an approach similar to HCMD2 or RICE: the number of positions/seeds computed and proration, with the simple credit average applied only when both machines in a task quorum do equal jobs.
----------------------------------------
Results not being reissued where there is no true quorum, yet still used by the researchers, may warrant credit consideration. My 2 cents.
WCG Global & Research > Make Proposal Help: Start Here!
Please help to make the Forums an enjoyable experience for All!
Coleslaw
Veteran Cruncher | USA | Joined: Mar 29, 2007 | Post Count: 1343 | Status: Offline
What are the system requirements for these betas other than Linux? Is it still the same as CEP1?
hunterkasy
Senior Cruncher | USA | Joined: Dec 8, 2008 | Post Count: 300 | Status: Offline
What are the system requirements for these betas other than Linux? Is it still the same as CEP1?
I had one Linux machine that could not do the betas because it did not have enough memory. The message said I needed 750 MB of memory, and that was for one beta.
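
The 750 MB message suggests a per-task memory bound checked before a beta task is accepted. A rough sketch of that kind of gate, where the 750 MB figure comes from the post above and everything else (names, how free memory is obtained) is assumed:

TASK_MEMORY_BOUND_MB = 750   # per-task requirement reported in the message above

def can_accept_beta_task(free_memory_mb, running_beta_tasks):
    # Each concurrent beta task is assumed to need its own 750 MB.
    needed_mb = TASK_MEMORY_BOUND_MB * (running_beta_tasks + 1)
    return free_memory_mb >= needed_mb

print(can_accept_beta_task(free_memory_mb=512, running_beta_tasks=0))   # False
print(can_accept_beta_task(free_memory_mb=2048, running_beta_tasks=1))  # True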
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3715 | Status: Offline
And since there is a huge file to upload with each WU, I would not be surprised if the servers check the average upload speed of the requesting device (at least I think they should).
----------------------------------------
For my latest results the size of this file was close to 70 MB!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
So there is a slow machine being cut off at 8:00 hours and a more powerful machine doing all 16. As observed, the credit is a simple 1/2 of the total claimed by both. I would have expected an approach similar to HCMD2 or RICE: the number of positions/seeds computed and proration, with the simple credit average applied only when both machines in a task quorum do equal jobs.
I've been seeing credit proportional to the number of "jobs" completed, e.g.:

BETA_A.22.C17H12N2S2Si.2_1 -- 613 | Valid | 1/06/10 03:57:29 | 1/06/10 15:23:45 | 7.37 | 132.8 / 86.2
BETA_A.22.C17H12N2S2Si.2_0 -- 613 | Valid | 1/06/10 02:29:41 | 1/06/10 12:33:08 | 8.00 | 34.7 / 75.4 <-- mine

The faster machine completed, while mine was terminated during Job #13 (i.e. the 14th job) and was credited with exactly 14/16 of the total of the faster machine. It's a good approximation in the long run, but job duration varies widely, even within the one WU (one may take 20 times as long as another), so credit levels vary widely too.
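
A rough model of the proration described here, with the formula inferred from the figures in this post (16 sub-jobs per WU, credit scaled by the fraction completed) rather than from any official description:

def prorated_credit(full_wu_credit, jobs_completed, total_jobs=16):
    # Inferred from the result lines above: scale the full-WU grant
    # by the fraction of sub-jobs the cut-off host got through.
    return full_wu_credit * jobs_completed / total_jobs

# The slower host was stopped in its 14th job, so:
print(round(prorated_credit(86.2, 14), 1))   # 75.4, matching the granted value above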
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
And since there is a huge file to upload with each WU, I would not be surprised if the servers check the average upload speed of the requesting device (at least I think they should).
It seems that the upload speed is capped on the servers. I am not getting more than 80 kB/s (which could be 10 MB/s on some of my cores).
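
For scale, combining the ~70 MB result size mentioned a few posts up with the observed ~80 kB/s cap (both figures from this thread) gives roughly a quarter of an hour per upload:

result_size_mb = 70      # size of the result file reported earlier in the thread
upload_rate_kb_s = 80    # observed per-transfer cap
seconds = result_size_mb * 1024 / upload_rate_kb_s
print(f"{seconds / 60:.0f} minutes per upload")   # about 15 minutes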
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Might you have an old BOINC version which has not been tuned to at least match the Windows claim level? By what's seen here, your machine was treated as a severe low-claimer: its claim was dismissed (is this frequent?), the high claimer was knocked down by whatever logic, and then 14/16 was granted when really only 13 jobs were completed and a 14th was assumed. There are multiple algorithms in place, similar to what's seen for other sciences, and it is not yet worked out which one kicks in when.
----------------------------------------
WCG Global & Research > Make Proposal Help: Start Here!
Please help to make the Forums an enjoyable experience for All!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
By what's seen here, your machine was treated as a severe low-claimer: its claim was dismissed (is this frequent?), the high claimer was knocked down by whatever logic.
I don't see how that can be. If the claim was dismissed, surely the other machine should have received the full 132.8 claimed? It looks to me like it averaged the two. I hadn't noticed before, but for this beta that machine is regularly receiving well more than its claim. I guess this program likes that machine's architecture. When running HCMD2, it gets about 5% less than its claim. If it had a higher benchmark result on another version of the client, it would then be massively over-claiming on HCMD2 and other projects.
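
One way to reconcile the figures quoted earlier (claims of 132.8 and 34.7, grants of 86.2 and 75.4) is that the cut-off host's claim was first scaled up to a full-WU equivalent before averaging, and its grant was then prorated back down. This is only a guess that happens to fit those two results, not a description of the actual validator:

def guess_grants(full_claim, partial_claim, jobs_done, total_jobs=16):
    # Speculative reconstruction, not actual WCG/BOINC validator logic:
    # scale the cut-off host's claim to full-WU terms, average the two
    # claims, then prorate the cut-off host's grant by jobs completed.
    scaled_partial = partial_claim * total_jobs / jobs_done
    full_grant = (full_claim + scaled_partial) / 2
    partial_grant = full_grant * jobs_done / total_jobs
    return full_grant, partial_grant

full, partial = guess_grants(132.8, 34.7, 14)
print(f"{full:.2f} / {partial:.2f}")   # 86.23 / 75.45 -- close to the 86.2 / 75.4 granted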