World Community Grid Forums
Thread Status: Active | Total posts in this thread: 17
OldChap
Veteran Cruncher | UK | Joined: Jun 5, 2009 | Post Count: 978 | Status: Offline
I have a minimum work buffer of 3.5 days. Based on 32 threads that, in my book, is 112 days.

What this rig normally holds is less, at around 70 days, and increasing the minimum buffer size seems to change nothing. Is this another setting that doesn't work with FA@H?
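For illustration, the arithmetic behind those figures can be sketched as follows (a minimal sketch; the variable names are mine, and whether the client really scales the buffer by thread count like this is my reading of the post, not documented BOINC behavior):

```python
# Rough work-buffer arithmetic for a 32-thread rig (illustrative only).
threads = 32
min_buffer_days = 3.5  # the "minimum work buffer" preference

# If the buffer applies per thread, total cached work in CPU-days:
total_cpu_days = threads * min_buffer_days
print(total_cpu_days)  # 112.0
```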
Mgruben
Advanced Cruncher | Joined: May 26, 2013 | Post Count: 94 | Status: Offline
I've had my own problems with WCG scheduling, and found that increasing the value for "connect to network about every ___ days" will increase the amount of work that sits on your rig.

Increasing "cache ___ extra days of work" just increases the amount of time between connections to the WCG server.
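A simplified model of how I understand those two preferences to interact (a sketch under my own assumptions; the real BOINC client logic is more involved, and `low_water`/`high_water` are hypothetical names, not BOINC terminology):

```python
# Simplified low/high water-mark model of BOINC work fetch
# (my mental model, NOT the actual client code).
connect_every_days = 0.5   # "connect to network about every ___ days"
additional_days = 2.0      # "cache ___ extra days of work"

low_water = connect_every_days                      # fetch when cache dips below this
high_water = connect_every_days + additional_days   # fill back up to roughly this

def should_fetch(cached_days):
    """Request more work once cached work falls below the low-water mark."""
    return cached_days < low_water

print(should_fetch(0.3), high_water)  # True 2.5
```

Under this model, raising the "connect every" value lifts the low-water mark, which is why it pulls more work onto the rig.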
Sgt.Joe
Ace Cruncher | USA | Joined: Jul 4, 2006 | Post Count: 7846 | Status: Offline
I have both of my 8-core rigs set for a 1-day work cache. One is on FAAH and the other on MCM1. The 1-day work cache seems to work pretty well and, barring an extended outage by WCG, always keeps plenty of work in the hopper. The variability of the lengths of both the FAAH and FAHV units will throw off the number of units in the cache, but BOINC seems to adjust after a little while. I really don't worry much about it.
----------------------------------------
Cheers
Sgt. Joe
*Minnesota Crunchers*
Mgruben
Advanced Cruncher | Joined: May 26, 2013 | Post Count: 94 | Status: Offline
"The variability of the lengths of both the FAAH and FAHV units will throw off the number of units in the cache"

This is absolutely true, and it has bitten me several times on my dedicated CEP2 rig (which expects 18-hour runtimes and sometimes gets only 4-hour ones; even with a 2.5-day buffer it will start sucking air pretty soon unless you're connecting frequently enough).
Byteball_730a2960
Senior Cruncher | Joined: Oct 29, 2010 | Post Count: 318 | Status: Offline
This really annoyed me this week.

I have a 4-core/8-thread laptop that crunches 24/7 but is unattended over the weekends. I have my buffer set at 5.5 days, yet it has run out of work over the last 3-day weekend and also over the last normal weekend. Checking the logs, it ran out of work after about 60 hours last weekend, even though there were no tasks to upload or report. 60 hours = 2.5 days, which is not 5.5! Aaaarggh!
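The mismatch is easy to check with the numbers from the log (straightforward arithmetic, nothing BOINC-specific):

```python
# Hours of work actually delivered before the laptop ran dry,
# converted to days and compared with the configured buffer.
hours_before_dry = 60
buffer_days_set = 5.5

days_delivered = hours_before_dry / 24
print(days_delivered)                    # 2.5
print(buffer_days_set - days_delivered)  # 3.0 days of buffer missing
```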
KWSN - A Shrubbery
Master Cruncher | Joined: Jan 8, 2006 | Post Count: 1585 | Status: Offline
There is a limit on work in progress of 25 units per core.

I don't know if I've seen it posted anywhere, but it definitely limits my downloads.
----------------------------------------
Distributed computing volunteer since September 27, 2000
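If that 25-units-per-core cap is real, it translates into a hard ceiling on cache depth. A quick sketch (the cap is as reported above; the average runtime is a hypothetical figure, not measured):

```python
# Ceiling on cached work implied by a 25-units-per-core limit.
cores = 8
per_core_limit = 25     # the cap reported in this thread
avg_unit_hours = 4      # hypothetical average task runtime

max_units = cores * per_core_limit
print(max_units)  # 200

# Days of work that cap represents, per core:
days_per_core = per_core_limit * avg_unit_hours / 24
print(round(days_per_core, 2))  # 4.17
```

With short-running units, that cap bites well before a large day-based buffer setting does.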
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Back in 2004-2005, BOINC had a number of limit algorithms designed to outsmart the users and prevent them from overfilling their cache. We worked out all the details, but then we noticed that the scheduler was one of the most frequently changed pieces of software.

I don't know how the scheduler in BOINC 7.2.47 for Windows works, but I assume the volunteer developers still follow the old philosophy of protecting the users from themselves.

Lawrence
noderaser
Senior Cruncher | United States | Joined: Jun 6, 2006 | Post Count: 297 | Status: Offline
The "work buffer" setting works just fine for me... I set it to 2 days just to see what happens, and got 115 new WUs from Enigma! That was using 7.2.33, the BOINC distribution, not the WCG one. I typically leave it set at 0 unless there's a reason I need to cache extra work, usually towards the end of a WCG subproject when I want to be sure to hit a computing-time target and work is scarce. Most recently I did this with WCG when HFCC, HCC and CFSW all looked like they were going to finish within an inch of each other. Otherwise, I don't want to risk running over deadlines or depriving other people of work.

To the original poster: the buffer setting is based on calendar days. The fact that you have around 70 days total of work on your computer suggests that it's probably doing its best to keep your buffer filled. The projects can also limit the number of simultaneous WUs each host or user can have, or the WU generator may be lagging behind. At any rate, if you really want to keep a high work buffer for whatever reason, you may need to throw in some other projects or subprojects.
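To put numbers on that for the original poster's rig (simple arithmetic, under the assumption that the ~70 days reported is total CPU-days summed across cached tasks):

```python
# Converting total cached CPU-days back to calendar days on a 32-thread host.
total_cpu_days_cached = 70
threads = 32

calendar_days_covered = total_cpu_days_cached / threads
print(calendar_days_covered)  # 2.1875
```

So ~70 CPU-days of cached work covers only a bit over 2 calendar days of crunching on a 32-thread machine, even though it sounds like a lot.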
noderaser
Senior Cruncher | United States | Joined: Jun 6, 2006 | Post Count: 297 | Status: Offline
You might also check your event log to see if your client is actually requesting additional work. Sometimes it takes a few cycles of work to get everything dialed in on a new installation, a new project, or when the length of WUs changes a bit. Also, you may be hitting some other limits within BOINC Manager, such as disk space, or maybe there's a hard-coded maximum that only someone with a lot of cores would experience.

I just tried to cache additional work from WCG/FAAH and got the message "We are currently experiencing high load and are temporarily deferring your scheduler request. Your client will automatically try again later."
----------------------------------------
[Edit 1 times, last edit by noderaser at Apr 9, 2014 2:42:25 AM]
OldChap
Veteran Cruncher | UK | Joined: Jun 5, 2009 | Post Count: 978 | Status: Offline
My original post made no reference to why one might want a larger buffer, but in this instance I would like to set it as high as possible with a view to being disconnected from the net for some days (7-9).

My hope was that if MCM is limited to 7 days, then FA@H would extend this capability some.