World Community Grid Forums
Thread Status: Active. Total posts in this thread: 182.
BQL_FFM
Cruncher, Germany. Joined: Jun 16, 2016. Post Count: 15. Status: Offline
Quoting armstrdj: "Thanks for the suggestion. Prior to migration we will temporarily up this limit to 70. Thanks, armstrdj"

THX!
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
Unfortunately, members will start hitting the 1000 WU limit hardcoded in BOINC. It happens to me right now on my 32-thread machines, even with the 35 per core. Increasing it to 70 per core means 16-thread machines will hit the 1000 WU limit before getting 70 per core; anything over 70 will only help hosts with 8 threads or fewer. With short units like SCC and FAH1, most members will hit the 1000 WU limit long before they get 3 days' worth of work...

The smart thing might be to mix MCM in with the shorter units to get a 3-day queue. It's not easy, but you can run multiple clients on one host: it needs tweaking cc_config.xml, assigning a GUI RPC port to each client (31416, 31417, etc.), and of course pointing each client to a different data directory. Each client also needs a processor % (an excellent way to assign each client to a different profile and so stay in control of how much of each science is contributed to). Good material for a separate topic.

[Edit 1 times, last edit by Former Member at Apr 27, 2017 11:12:14 AM]
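For anyone curious what that multi-client setup might look like, here is a minimal sketch in Python, assuming the standard BOINC client flags --dir, --gui_rpc_port and --allow_remote_gui_rpc and the <ncpus> option in cc_config.xml. The data-directory paths, ports and CPU split below are example values only, not anything the post or WCG prescribes.

```python
#!/usr/bin/env python3
"""Sketch: launch two independent BOINC clients on one host.

Assumptions (not from the original post): the 'boinc' binary is on PATH,
it supports --dir / --gui_rpc_port / --allow_remote_gui_rpc, and the
data directories below are placeholders you would create yourself.
"""
import os
import subprocess

# Hypothetical per-client configuration: data dir, GUI RPC port, CPU count.
CLIENTS = [
    {"dir": "/var/lib/boinc-a", "port": 31416, "ncpus": 16},
    {"dir": "/var/lib/boinc-b", "port": 31417, "ncpus": 16},
]

CC_CONFIG_TEMPLATE = """<cc_config>
  <options>
    <ncpus>{ncpus}</ncpus>
  </options>
</cc_config>
"""

def launch(client):
    os.makedirs(client["dir"], exist_ok=True)
    # Give each instance its own cc_config.xml so the CPU budget is split
    # between the clients instead of both grabbing every core.
    with open(os.path.join(client["dir"], "cc_config.xml"), "w") as f:
        f.write(CC_CONFIG_TEMPLATE.format(ncpus=client["ncpus"]))
    # Each instance gets its own data directory and RPC port, so
    # BOINC Manager or boinccmd can talk to them separately.
    return subprocess.Popen([
        "boinc",
        "--dir", client["dir"],
        "--gui_rpc_port", str(client["port"]),
        "--allow_remote_gui_rpc",
    ])

if __name__ == "__main__":
    procs = [launch(c) for c in CLIENTS]
    for p in procs:
        p.wait()
```

Each instance can then be attached under its own device profile, which is how you control how much of each science the host contributes, as the post describes.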
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
Could not immediately remember the 1000 limit, but I do remember a BM performance discussion as the reason for such a limit.
KLiK
Master Cruncher, Croatia. Joined: Nov 13, 2006. Post Count: 3108. Status: Offline
Quote: "Will sufficient WUs be sent to not overflow the available work?"

Quote: "vep, can you clarify what you mean here? If you want to make sure you have enough work in your queue to cover the outage, you may need to modify your settings to increase the number of tasks you download. To accomplish this, set 'Cache n extra days of work' to 1 or 2 to be safe."

Quote: "It was said that the limit will be extended to 70/core."

Quote: "Doubling the limit to 70 will still leave the vast majority of my machines dead in the water soon after they start the shutdown. It needs to be done away with a day or so ahead of the shutdown, until it is over. Right now, the 35 WU per core limit gets me 15-26 hours of work with SCC, HSTB and FAAH selected, and that's better than normal. That 35 limit has been as little as 2-3 hours of work on these same machines (Xeon chips) when FAAH or SCC had the really short WUs. A 140 limit might work if they have absolutely no issues with the move, or right after it, but who can guarantee that? With a PLANNED two-day outage, I will want to have three days of work in each machine's queue at the start. With the 35 limit kicking back in when they come back up, my machines would simply not get any new work until they work back down under the limit in the hours after they come back up. The large majority of Linux machines will go idle during this outage, even with a 70 per core limit."

Add additional projects and then disable them after the shut-down! Don't wimp when they already doubled the number of WUs per core...
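If you would rather raise that cache from the machine itself than through the website preference, one possible approach is sketched below in Python: write a global_prefs_override.xml into the client's data directory and ask the running client to re-read it. The work_buf_min_days / work_buf_additional_days tags and the boinccmd --read_global_prefs_override call are standard BOINC local-preference mechanisms, but the data-directory path and the 3-day split are only example values for a planned 48-hour outage.

```python
#!/usr/bin/env python3
"""Sketch: bump the local work cache ahead of the outage.

Assumptions: BOINC honours global_prefs_override.xml in its data directory
and boinccmd supports --read_global_prefs_override; the path below is a
typical Linux default and may differ on your install.
"""
import subprocess

DATA_DIR = "/var/lib/boinc-client"  # example path, adjust to your host

# "Store at least N days" plus "store up to N additional days" gives
# roughly a 3-day buffer to ride out a planned 48-hour outage.
OVERRIDE = """<global_preferences>
  <work_buf_min_days>2.0</work_buf_min_days>
  <work_buf_additional_days>1.0</work_buf_additional_days>
</global_preferences>
"""

with open(f"{DATA_DIR}/global_prefs_override.xml", "w") as f:
    f.write(OVERRIDE)

# Tell the running client to pick up the new local preferences.
# Running from the data directory lets boinccmd find gui_rpc_auth.cfg.
subprocess.run(["boinccmd", "--read_global_prefs_override"],
               cwd=DATA_DIR, check=True)
```

Note that the per-core task limit discussed above still caps how much work the scheduler will actually send, whatever the cache setting asks for.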
SekeRob
Master Cruncher. Joined: Jan 7, 2013. Post Count: 2741. Status: Offline
Just to quote: "Don't mind him. He's usually like that!"

No one has mentioned statistics, but with "The migration will begin on May 15 and is expected to last approximately 48 hours, during which World Community Grid will be unavailable. This means that volunteers will not be able to access the website, fetch new research or return completed work during that time," what is the last stats period that will run before commencement of the cut-over, so that I can feed my global stats programs' performance charting and hunting tool? Is there a few hours' window in the morning (burning the midnight oil) to grab the numbers before everything goes off-line? MMTIA

[Edit 1 times, last edit by SekeRob* at Apr 28, 2017 1:57:45 PM]
NixChix
Veteran Cruncher, United States. Joined: Apr 29, 2007. Post Count: 1187. Status: Offline
Is the present WCG staff going to be affected by this? Is IBM laying off or transferring any of our beloved WCG staff? I hope that this just means they can focus on other things.
Cheers
cowtipperbs
Advanced Cruncher. Joined: Aug 24, 2009. Post Count: 78. Status: Offline
From how I read it, it's just a hardware change. I think WCG currently has its own hardware and is now moving to the "cloud".
NixChix
Veteran Cruncher, United States. Joined: Apr 29, 2007. Post Count: 1187. Status: Offline
Quote: "From how I read it, it's just a hardware change. I think WCG currently has its own hardware and is now moving to the 'cloud'."

Right. Who takes care of the hardware now? After moving operations to the cloud there wouldn't be hardware to take care of.

Cheers
KLiK
Master Cruncher, Croatia. Joined: Nov 13, 2006. Post Count: 3108. Status: Offline
Quote: "Right. Who takes care of the hardware now? After moving operations to the cloud there wouldn't be hardware to take care of."

And it also makes it possible to "scale up" or down the power as needed!
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
What is a WU?