World Community Grid Forums
Thread Status: Active | Total posts in this thread: 15
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
It takes some effort and time, but one may want to try the manual method (a scripted sketch of these steps follows below):

1] Suspend all WUs at the WU level (not at the BOINC level).
2] Resume the WUs with the shortest deadlines and run them for some time. The shorter a WU's deadline, the more run time it should be given.
3] Resume all remaining WUs. That leaves the short-deadline WUs with a status of "Waiting to run", and the WUs with longer deadlines with their usual "Ready to start" status.

If you like coffee, now is the time to get some and enjoy -- for there is no way until thy Kingdom come that a longer-deadline WU will start ahead of a shorter-deadline one!

If BOINC is in panic mode, things will be messy. Try to get ahead of things before BOINC gets into that panic mode: apply the manual method as soon as WUs with short deadlines come filling up the stock.

In the future, 'dynamic scheduling' will take into account what has happened for the science thus far at the userLevel vis-a-vis the globalLevel, and only then assign a dynamic 'timeAllotted' (and not a static 'deadline') for doing the WU.
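For those who would rather not click through the Manager for every WU, here is a minimal sketch of the same three steps driven through the boinccmd command-line tool that ships with the BOINC client. The RUSH count and the deadline handling are assumptions to adapt locally, and the exact text format of the --get_tasks output varies by client version, so verify it before relying on the sort order.

import subprocess

def get_tasks():
    # Parse `boinccmd --get_tasks` into (project_url, task_name, deadline) triples.
    out = subprocess.run(["boinccmd", "--get_tasks"],
                         capture_output=True, text=True, check=True).stdout
    tasks, url, name = [], None, None
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("name:"):
            name = line.split(":", 1)[1].strip()
        elif line.startswith("project URL:"):
            url = line.split(":", 1)[1].strip()
        elif line.startswith("report deadline:"):
            # NOTE: the deadline text format varies by client version;
            # convert it to something sortable before trusting this order.
            tasks.append((url, name, line.split(":", 1)[1].strip()))
    return tasks

def set_state(url, name, op):  # op is "suspend" or "resume"
    subprocess.run(["boinccmd", "--task", url, name, op], check=True)

tasks = sorted(get_tasks(), key=lambda t: t[2])   # shortest deadline first
for url, name, _ in tasks:
    set_state(url, name, "suspend")               # step 1]
RUSH = 2                                          # assumption: how many WUs to rush
for url, name, _ in tasks[:RUSH]:
    set_state(url, name, "resume")                # step 2]
# ...wait while the short-deadline WUs accumulate run time, then:
for url, name, _ in tasks[RUSH:]:
    set_state(url, name, "resume")                # step 3]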
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Sure as heck, if you run your cache close to or over the repair deadlines and one or another task runs longer, inflating the whole cache, there's panic. As the FAQ states, if you micromanage and run a larger cache you'll be needing coffee (nothing at all to do with personal prefs, but I like good coffee anyhow ;-). Those that don't, and set a 'smart' cache level, never have to and can forget about running BOINC... its design purpose.

Dynamic deadlines... not new... WCG has been doing that for years for the devices that *need* extra time. If 7 or 10 days is a problem, then size down the cache... hear the mantra? For wanting to rush Beta tasks, to have a *remote* chance of getting another when one is completed/returned under the 1:1-per-core rule, this is understood; otherwise the WCG system really could not care less whether a repair/Beta comes back today or tomorrow, as long as it comes back before the deadline.

As for restof's observation, we went through that long ago in a ''hair-splitting discussion''. Who cares whether the first or the last of tasks due at the same time is processed? BOINC processes tasks in the order they were entered into client_state.xml (random when a batch assigns, e.g., 15 in one call), which may not be the order you see in the task view; when there's panic it will for a while try the last one, LIFO not FIFO, to find out how they run. Since the scheduling is constantly being developed, even from point release to point release, and the BOINC 7 client (numbered 6.13 in development) is again going to do it *very* differently, it is rather like trying to understand how the machine works without knowing which levers are being operated. Some asked for a trick to *automate* the short-deadline rush and they've got it. Those who want to do the scheduling themselves can do the scheduling themselves, but should not expect lucidity when arguing with a computer... you'll always lose.

Net: run a cache that is constantly over 2 days, even with the repairs being rushed, and eventually the client will be taken off the quick-return (reliable) list... no more repairs... all production work with at least a 7-day deadline (HCC currently having the shortest). For Beta it does not matter at all. The purpose of Beta is to test all devices, good or not so reliable, to find the bugs. :D --//--

P.S. andzgrid, have fun doing your suspend/start tricks... that's addictive crunching if done more than incidentally... maybe cut down on the coffee ;O)
PMH_UK
Veteran Cruncher | UK | Joined: Apr 26, 2007 | Post Count: 786 | Status: Offline
If BOINC gets into a panic with a high value such as 7200, you can ease this by reducing the value (locally or on the website), and it will take effect immediately. It does seem to go haywire, starting many tasks even when there appears to be plenty of time to do them all. I found lower values to be better with short caches; 2000 to 4000 is often enough.
----------------------------------------
Paul.
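For the record, a sketch of making that local change programmatically. One loudly flagged assumption from context (the post does not name the preference): the value in question is the "switch between applications" scheduling period, which maps to <cpu_scheduling_period_minutes> in global_prefs_override.xml. Note that this sketch overwrites any existing override file, so merge rather than replace if you already use one.

import subprocess
from pathlib import Path

OVERRIDE = Path("/var/lib/boinc-client/global_prefs_override.xml")  # Linux default

# Assumption: lowering the scheduling period from a high value like 7200
# down into Paul's 2000-4000 range.
OVERRIDE.write_text(
    "<global_preferences>\n"
    "  <cpu_scheduling_period_minutes>3000</cpu_scheduling_period_minutes>\n"
    "</global_preferences>\n"
)
# Tell the running client to re-read the override -- this is what makes the
# change take effect immediately, as Paul observes.
subprocess.run(["boinccmd", "--read_global_prefs_override"], check=True)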
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Indeed haywire; like you, I found lower values good too. It depends, for instance, on the sum of the cache and the switch time: if there is a 3-day cache and a task with a 4-day deadline is in it, a 1500-minute value will do the same, bumping up the WUs in question... but as stated in the FAQ, not all clients behave the same, so trial and error is needed to find the client's sweet spot.

Will add this good observation to the write-up... thanks for the reminder --//--
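A back-of-the-envelope check of the rule of thumb the post implies -- an assumed model for illustration, not documented BOINC behaviour: a task gets bumped up once cache size plus switch time reaches its deadline.

def bumps_task(cache_days: float, switch_minutes: float, deadline_days: float) -> bool:
    # Assumed rule from the post: a task is rushed once cache + switch
    # time (converted to days) reaches its deadline.
    return cache_days + switch_minutes / (60 * 24) >= deadline_days

# The post's example: 3-day cache, 4-day deadline, 1500-minute switch value.
print(bumps_task(3, 1500, 4))   # True  (1500 min ~ 1.04 days; 3 + 1.04 >= 4)
print(bumps_task(3, 1000, 4))   # False (1000 min ~ 0.69 days; 3.69 < 4)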
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
BOINC v6.10.58 sometimes exhibits this behavior whereby the WUs with a short(er) deadline are not executed first -- a behavior confirmed by a number of crunchers. Assuming that BOINC/the WCG server calculated correctly that the short(er)-deadline WUs will be done before their deadline, and that it does not matter to the WCG server when a WU is returned as long as its deadline is met, I remain convinced that, on the client side, the best policy is still to do the short-deadline WU first -- regardless of whether the cached WUs will eventually be done on time or not.

In any case, to enforce my personal do-the-shortest-deadline-WU-first-regardless-of-anything rule, I do the manual intervention every time BOINC deviates from it. In fact, I go one step further: I manually check the deadlines and apply the manual method almost every time my machines download WUs that include repair/rush jobs. My reward? I've never seen my BOINC go into panic mode. Another reward: it's my excuse to enjoy my coffee!
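A quick way to spot such a deviation without eyeballing the Manager -- a minimal sketch, again assuming the stock Linux client_state.xml location and the standard <active_task_set>/<active_task> structure, that flags any waiting task due before one that is currently running:

import xml.etree.ElementTree as ET

STATE_FILE = "/var/lib/boinc-client/client_state.xml"  # adjust per platform
root = ET.parse(STATE_FILE).getroot()

# Names of tasks the client is actively running.
running = {t.findtext("result_name") for t in root.iter("active_task")}
# Deadline (Unix epoch seconds) for every task in the cache.
deadlines = {r.findtext("name"): float(r.findtext("report_deadline"))
             for r in root.iter("result")}

if running:
    latest_running = max(deadlines[n] for n in running if n in deadlines)
    for name, dl in sorted(deadlines.items(), key=lambda kv: kv[1]):
        if name not in running and dl < latest_running:
            print(f"{name} is waiting but is due before a running task")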