World Community Grid Forums

Thread Status: Active | Total posts in this thread: 17
noderaser
Senior Cruncher, United States | Joined: Jun 6, 2006 | Post Count: 297 | Status: Offline
I would have a look in your event log for clues as to why you're not getting as much work as you'd like.
uplinger
Former World Community Grid Tech | Joined: May 23, 2005 | Post Count: 3952 | Status: Offline
Quote:
There is a limit of work in progress at 25 units per core. Don't know if I've seen it posted anywhere, but it definitely limits my downloads.

Wow, what a great memory! Currently the system does have a max limit per core, set on the server side. I will need to debate raising it, but since I don't remember why it was set to 25 in the first place, or how long ago it was set, I will need to think through worst-case scenarios so that history does not repeat itself.

FYI, the server setting in the config is this:

<max_wus_in_progress>25</max_wus_in_progress>

Thanks,
-Uplinger
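For anyone wanting to see what that cap implies in practice, here is a rough back-of-the-envelope sketch (not WCG code; the core count and the ~3.7-hour average FAAH task runtime are assumptions for illustration, not published figures):

```python
# Rough sketch of what <max_wus_in_progress>25</max_wus_in_progress>
# means for a client's buffer. The average runtime and core count
# below are illustrative assumptions, not WCG-published figures.
MAX_WUS_PER_CORE = 25      # server-side cap quoted above
cores = 8                  # e.g. a 4-core / 8-thread machine
avg_task_hours = 3.7       # assumed average FAAH task runtime

max_in_flight = MAX_WUS_PER_CORE * cores          # 200 tasks total
buffer_hours = MAX_WUS_PER_CORE * avg_task_hours  # ~92 hours per core

print(f"Max tasks in progress: {max_in_flight}")
print(f"Approximate buffer:    {buffer_hours:.0f} hours")
```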
KWSN - A Shrubbery
Master Cruncher | Joined: Jan 8, 2006 | Post Count: 1585 | Status: Offline
Seems to me this was set towards the end of HCMD2 because of the totally random lengths and run-times; many systems were overcommitting even with very small cache sizes.

As for the limit, I only rarely hit it, and that is when running only FAAH Vina. It doesn't seem to be too burdensome.
----------------------------------------
Distributed computing volunteer since September 27, 2000
Byteball_730a2960
Senior Cruncher | Joined: Oct 29, 2010 | Post Count: 318 | Status: Offline
KWSN - Thanks a lot for that. You are spot on: 8-thread machine, and I had 200 tasks.

A quick back-of-the-envelope calculation based on work done over the weekend (96% FAAH, 4% MCM) shows that I had roughly a 92-hour buffer. Due to my schedule, I have quite a few 3-day weekends during which the computer is left to crunch until I come back and connect it to the internet again. That is usually an 88-92 hour absence, which cuts it really fine. I have a 4, perhaps 5, day weekend coming up, meaning a max 136-hour window. Going from what you guys have said, there is no way I can extend this 88-92 hour buffer, is there? Unless I switch to MCM for that weekend?
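As a quick sanity check on those numbers, a minimal sketch of the coverage question (reusing the same illustrative ~3.7-hour average task runtime as above, which is an assumption, not an official figure):

```python
# Sketch: will the fixed 25-per-core buffer cover an offline window?
# avg_task_hours is an assumption for illustration only.
def buffer_covers(absence_hours, cap_per_core=25, avg_task_hours=3.7):
    buffer_hours = cap_per_core * avg_task_hours  # ~92.5 hours
    return buffer_hours >= absence_hours

print(buffer_covers(92))   # True: a 3-day weekend just barely fits
print(buffer_covers(136))  # False: a 5-day weekend runs the cache dry
```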
KWSN - A Shrubbery
Master Cruncher | Joined: Jan 8, 2006 | Post Count: 1585 | Status: Offline
Right.

Unless you want to load a bunch of tasks and abort all the Vina till you have enough Autodock, which should carry you through.
----------------------------------------
Distributed computing volunteer since September 27, 2000
Byteball_730a2960
Senior Cruncher | Joined: Oct 29, 2010 | Post Count: 318 | Status: Offline
Yup, that is what I was thinking I will have to do.

Probably easier to switch to MCM for that particular weekend, then. Thanks again.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
There's a conflict between how BOINC is designed to work, using the duration correction factor (DCF), and WCG having disabled this function remotely for the version 7 agents: the DCF is locked to 1.000000 for the project, i.e. for WCG overall. Some projects do this to prevent buffer overload caused by runtimes that vary heavily from workunit to workunit and batch to batch. The result is that devices processing faster than average rarely fill their buffer setting, while slower devices often have more work than wanted. Regrettably, this, and adjusting the buffer to compensate, also affects the work-request frequency (via the "maximum additional work buffer" setting) and the order of processing (earliest deadline first kicks in when the commitment rises over a certain factor relative to the deadlines). All in all, loading multiple days of work and only connecting once or twice a week is problematic. Everyone has to work out for themselves which settings best suit their needs; it is far from optimal.
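To make the mechanism concrete, here is a simplified sketch of the duration-correction-factor idea. The real BOINC client logic is more involved; the update rule below is an approximation of the documented behavior (raise quickly on underestimates, lower slowly on overestimates), not the exact client code:

```python
# Simplified sketch of BOINC's duration correction factor (DCF).
# The client multiplies a task's raw runtime estimate by the DCF,
# then nudges the DCF toward the observed actual/estimated ratio.
# This approximates the idea; it is not the exact client code.
def update_dcf(dcf, actual_hours, estimated_hours):
    ratio = actual_hours / estimated_hours
    if ratio > dcf:
        return ratio                   # raise quickly on underestimates
    return dcf + 0.1 * (ratio - dcf)   # lower slowly on overestimates

dcf = 1.0  # WCG pins this at 1.000000, so it never adapts
for actual, estimate in [(5.0, 2.0), (1.5, 2.0)]:
    dcf = update_dcf(dcf, actual, estimate)
    print(f"effective estimate: {estimate * dcf:.1f} h (dcf={dcf:.2f})")
```

With the factor pinned at 1.0, the effective estimate never adapts, so slow hosts fetch more work than they can finish before a long offline window, which is exactly the buffer behavior described above.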