World Community Grid Forums
Thread Status: Active | Total posts in this thread: 14
KLiK
Master Cruncher | Croatia | Joined: Nov 13, 2006 | Post Count: 3108 | Status: Offline
> There is an alternative way to do this - one which, depending upon your set-up, you may or may not wish to pursue... and that's to manage your WUs manually. Me, I've just got the one dual-core machine that I can access easily, and therefore I can closely monitor what's running at any particular time. Thus, as I'm "working" on 3 badges at the moment (CEP, HCC & HFCC), I endeavour to keep 1 core running on CEP whilst the other is on either of the other 2 cancer projects. In fact, I find it quite interesting to see if I can manually juggle the WUs so that none of them end up being returned 'too late', whilst still keeping enough in my 'awaiting processing' queue to allow me to do this juggling act.

You must be joking with that suggestion! Especially 'cause I have at least 5 dual-core or dual-CPU machines... so that is not an option!
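For anyone who does want to try this kind of manual juggling without clicking through the BOINC Manager, something along the following lines could be a starting point. It is only a sketch: the `get_tasks()` and `suspend_task()` helpers are hypothetical, it assumes the stock `boinccmd` tool is installed and talking to the local client, and the field labels in `--get_tasks` output can differ between client versions, so the parsing may need adjusting.

```python
# Rough sketch of manual WU juggling via boinccmd (not an official WCG tool).
# Assumptions: boinccmd is on PATH and can reach the local client; the
# "name", "fraction done" and "report deadline" labels match your client's
# --get_tasks output (they may vary between BOINC versions).
import subprocess

def get_tasks():
    """Parse `boinccmd --get_tasks` into a list of per-task dicts."""
    out = subprocess.run(["boinccmd", "--get_tasks"],
                         capture_output=True, text=True, check=True).stdout
    tasks, current = [], {}
    for raw in out.splitlines():
        line = raw.strip()
        if line.endswith("-----------"):      # "N) -----------" opens a task block
            if current:
                tasks.append(current)
            current = {}
        elif ": " in line:
            key, _, value = line.partition(": ")
            current[key] = value
    if current:
        tasks.append(current)
    return tasks

def suspend_task(project_url, task_name):
    """Suspend one WU so a task from another project can take its core."""
    subprocess.run(["boinccmd", "--task", project_url, task_name, "suspend"],
                   check=True)

if __name__ == "__main__":
    for t in get_tasks():
        print(t.get("name"), t.get("fraction done"), t.get("report deadline"))
```

Resuming is the same `--task` call with "resume" instead of "suspend", so keeping one core on CEP would just mean suspending the second CEP task whenever two of them end up running at once.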
gb009761
Master Cruncher | Scotland | Joined: Apr 6, 2005 | Post Count: 3010 | Status: Offline
Okay KLiK, it was only a suggestion - one that you may or may not wish to pursue (no need to be so vehemently against it...). Others may wish to rise to the challenge of running WUs this way... I personally "enjoy" attempting to beat BOINC at its own game - especially for periods when I'm not at my PC (i.e., when I'm asleep...).

Yes, getting BOINC updated so that it does this automatically would be the best way forward - although even if they took your suggestion on board and agreed to it tomorrow, it would take time to implement and filter down...
Rickjb
Veteran Cruncher | Australia | Joined: Sep 17, 2006 | Post Count: 666 | Status: Offline
What is the algorithm used by BOINC to determine the order of running ordinary-priority WUs?
I too am trying to manipulate the order of running WUs on my Athlon64 X2, to run 1 CEP and 1 DDDT simultaneously. I sort my work queue in descending order of Progress. WUs at 0% (Ready to start) generally appear in order received. WUs seem to run basically in this queue order, but sometimes they don't: WUs that have run for only a few seconds, but which still show 0%, have priority over most others that are Ready to start. There seems to be a bias against WUs fetched from further down the queue (by running them briefly), and sometimes a bias towards CEP. Can someone explain? I can't find any info in the BOINC wikis.

BTW, the A64 X2 seems to perform relatively better on CEP than on any other current WCG project. I haven't calculated actual average CPU time ratios over a number of WUs, but it beat the wingman in 12 of 20 recent CEPs, and was really thrashed only occasionally. On HPF2 or NRW, this proportion is closer to about 35%. On CEP it slightly underclaims credits, while on other projects it overclaims by at least 10%. Its credits awarded per hour are best on CEP, OK on the Autodock projects (DDDT/FAAH/HFCC), but abysmal on HCC.

It would help if the WCG work despatcher was better able to interleave WUs from the various projects in the selected mix.
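For what it's worth, the "sort by Progress" view described above can also be reproduced outside the Manager. A rough, standalone sketch (same caveats as the earlier one: `boinccmd` must be installed, and the field labels in its `--get_tasks` output may differ between client versions):

```python
# Standalone sketch: print the work queue sorted like the Manager's Progress
# column (descending fraction done; ties keep boinccmd's listing order, which
# is roughly order received).  Not an official tool; labels may vary.
import subprocess

out = subprocess.run(["boinccmd", "--get_tasks"],
                     capture_output=True, text=True, check=True).stdout
tasks, cur = [], {}
for raw in out.splitlines():
    line = raw.strip()
    if line.endswith("-----------"):          # "N) -----------" opens a task block
        if cur:
            tasks.append(cur)
        cur = {}
    elif ": " in line:
        key, _, val = line.partition(": ")
        cur[key] = val
if cur:
    tasks.append(cur)

for t in sorted(tasks, key=lambda t: float(t.get("fraction done", 0) or 0),
                reverse=True):
    print(f"{float(t.get('fraction done', 0) or 0):7.2%}  {t.get('name')}")
```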
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
You need a degree to understand it (and it changes all the time without notice), so I won't expand on it here - just RTFW, of which there are a few around. Basics:

1. Jobs are run in order of receipt per DCG (WCG/SIMAP/ROSETTA/MALARIACONTROL/DOCKING etc.).
2. Jobs are alternated between the DCGs of point 1 according to resource share/project weight, if the client is attached to 2 or more active DCGs.
3. Jobs that are under deadline threat are pushed AUTOMATICALLY ahead of the queue.
4. In the new alpha client 6.6.24, the system now remembers the past memory/VM peak use of each science version, to prevent the silly job hopping to find tasks that will fit together in memory on multi-core devices. With LAIM this had very bad effects on memory use.
5. Jobs will run uninterrupted until at least the first checkpoint, unless, for instance, a point 3 condition develops.

Those are just a few of the things kicking in, and supposedly the clients are tested on a simulator system to ensure the DCG switching etc. happens per design and theory. (A much-simplified sketch of points 1-3 follows below.)
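To make points 1-3 a bit more concrete, here is a toy model of that priority order. This is not the real client scheduler (which, as said, changes without notice); it is just an illustration, with made-up numbers, of how deadline threat trumps resource share, which in turn trumps order of receipt. The `Job` fields, the 0.9 "deadline threat" threshold and the `debt` figures are all invented for the example.

```python
# Toy model of the scheduling basics above -- NOT the actual BOINC client code.
from dataclasses import dataclass

@dataclass
class Job:
    project: str      # the DCG the job came from (WCG, SIMAP, ...)
    received: int     # order of receipt within that project
    deadline: float   # hours until the report deadline
    remaining: float  # estimated hours of crunching left

def pick_next(jobs, debt):
    """Pick the next job for a free core.

    `debt` says how far behind its resource share each project is; the most
    "owed" project runs next when nothing is in deadline trouble.
    """
    # 3. Deadline threat first: jobs that can only just make their deadline
    #    jump the whole queue (earliest deadline runs first).
    urgent = [j for j in jobs if j.remaining >= 0.9 * j.deadline]
    if urgent:
        return min(urgent, key=lambda j: j.deadline)
    # 2. Otherwise alternate projects according to resource share.
    project = max(debt, key=debt.get)
    candidates = [j for j in jobs if j.project == project] or jobs
    # 1. Within a project, jobs run in order of receipt.
    return min(candidates, key=lambda j: j.received)

jobs = [Job("CEP", 1, 240.0, 30.0),
        Job("HCC", 1, 48.0, 4.0),
        Job("HFCC", 1, 6.0, 5.8)]     # 5.8 h of work left, 6 h to deadline
debt = {"CEP": 1.0, "HCC": 0.2, "HFCC": 0.2}
print(pick_next(jobs, debt))          # -> the HFCC job, despite CEP's higher debt
```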
WCG
Please help to make the Forums an enjoyable experience for All! |