Posts: 182   Pages: 19   [ Previous Page | 1 2 3 4 5 6 7 8 9 10 | Next Page ]
This topic has been viewed 306171 times and has 181 replies
BQL_FFM
Cruncher
Germany
Joined: Jun 16, 2016
Post Count: 15
Status: Offline
Re: World Community Grid Moves to IBM Cloud


Thanks for the suggestion. Prior to migration we will temporarily up this limit to 70.

Thanks,
armstrdj


THX smile
[Apr 27, 2017 4:36:49 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: World Community Grid Moves to IBM Cloud

Unfortunately members will start hitting the 1000 wu limit hardcoded in BOINC. It happens to me right now on my 32 thread machines even with the 35 per core. Increasing it to 70 per core means 16 thread machines will hit the 1000 WU limit before getting 70 per core. Anything over 70 will only help 8 threads and fewer. Relative to SCC and FAH1, most members will hit the 1000 WU limit long before they get 3 days worth of work... Smart thing might be to mix MCM in with the shorter units to get a 3 day queue.

Not easy, but you can run multiple clients on one host: it takes tweaking cc_config.xml, assigning a distinct RPC port to each client (31416, 31417, etc.), and of course pointing each client to a different data directory. Each client needs its own processor % (an excellent way to assign each client to a different profile and so control how much of each science is contributed to). Good topic for a separate discussion.
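As a rough sketch of the multi-client setup described above (the `--dir`, `--gui_rpc_port` and `--allow_multiple_clients` flags are standard BOINC client options; the directory names and the account key placeholder are hypothetical examples, not something from this thread):

```shell
# Two BOINC client instances on one host, each with its own data
# directory and its own GUI RPC port (directory names are examples only).
mkdir -p ~/boinc-a ~/boinc-b

boinc --dir ~/boinc-a --gui_rpc_port 31416 --allow_multiple_clients --daemon
boinc --dir ~/boinc-b --gui_rpc_port 31417 --allow_multiple_clients --daemon

# Attach each instance to the project, e.g. with boinccmd
# (YOUR_ACCOUNT_KEY is a placeholder):
boinccmd --host localhost:31416 --project_attach https://www.worldcommunitygrid.org/ YOUR_ACCOUNT_KEY
boinccmd --host localhost:31417 --project_attach https://www.worldcommunitygrid.org/ YOUR_ACCOUNT_KEY
```

Each instance can then be pointed at a different device profile on the website, or given its own local preference overrides, to split the processor % between them.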
----------------------------------------
[Edit 1 times, last edit by Former Member at Apr 27, 2017 11:12:14 AM]
[Apr 27, 2017 7:06:26 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: World Community Grid Moves to IBM Cloud

Could not immediately remember the 1000 limit, but I do remember a BM performance discussion as the reason for such a limit.
[Apr 27, 2017 7:10:28 AM]
KLiK
Master Cruncher
Croatia
Joined: Nov 13, 2006
Post Count: 3108
Status: Offline
Re: World Community Grid Moves to IBM Cloud

Will sufficient WUs be sent so that the available work doesn't run out?


vep, can you clarify what you mean here? If you want to make sure you have enough work in your queue to cover the outage you may need to modify your settings to increase the number of tasks you download. To accomplish this set the "Cache n extra days of work" to 1 or 2 to be safe.
with the 35 WU limit, most of my quad cores cannot even cache one day, never mind two or three

it was said that the limit will be extended to 70/core


Doubling the limit to 70 will still leave the vast majority of my machines dead in the water soon after they start the shutdown. It needs to be done away with a day or so ahead of the shutdown until it is over. Right now, the 35 wu per core limit gets me 15-26 hours of work with SCC, HSTB and FAAH selected and that's better than normal. That 35 limit has been as little as 2-3 hours of work on these same machines (Xeon chips) when FAAH or SCC had the real short WUs. A 140 limit might work if they have absolutely no issues with the move or right after it but who can guarantee that? With a PLANNED two day outage, I will want to have three days of work in each machine's queue at the start. With the 35 limit kicking back in when they come back up, my machines would simply not get any new work until they work back down under the limit in the hours after they come back up. The large majority of Linux machines will go idle during this outage, even with a 70 per core limit.

add additional projects & then disable them after the shut-down!

don't whine when they've already doubled the number of WUs per core...
wink
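For reference, the "Cache n extra days of work" setting discussed above corresponds to BOINC's work-buffer preferences; a local override can be sketched in a global_prefs_override.xml in the client's data directory (element names as in standard BOINC global preferences; the values here are examples only):

```xml
<global_preferences>
    <!-- keep at least 1 day of work on hand -->
    <work_buf_min_days>1.0</work_buf_min_days>
    <!-- plus up to 2 extra days, to cover the announced 48-hour outage -->
    <work_buf_additional_days>2.0</work_buf_additional_days>
</global_preferences>
```

Note that the server-side per-core task limit still caps what actually gets downloaded, as discussed above.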
----------------------------------------
oldies:UDgrid.org & PS3 Life@home


non-profit org. Play4Life in Zagreb, Croatia
[Apr 27, 2017 11:06:16 AM]
SekeRob
Master Cruncher
Joined: Jan 7, 2013
Post Count: 2741
Status: Offline
Re: World Community Grid Moves to IBM Cloud

just to quote: "Don't mind him. He's usually like that!" alien 2

No one has mentioned statistics, but with

"The migration will begin on May 15 and is expected to last approximately 48 hours, during which World Community Grid will be unavailable. This means that volunteers will not be able to access the website, fetch new research or return completed work during that time. "

What is the last stats period run before commencement of the cut-over, so that I can tune my global stats programs' performance charting and hunting tool? Will there be a few hours in the morning (burning the midnight oil) to grab the numbers before going off-line?

MMTIA
----------------------------------------
[Edit 1 times, last edit by SekeRob* at Apr 28, 2017 1:57:45 PM]
[Apr 28, 2017 1:33:28 PM]
NixChix
Veteran Cruncher
United States
Joined: Apr 29, 2007
Post Count: 1187
Status: Offline
Re: World Community Grid Moves to IBM Cloud

Is the present WCG staff going to be affected by this? Is IBM laying off or transferring any of our beloved WCG staff? I hope that this just means they can focus on other things.

Cheers coffee
----------------------------------------

[Apr 30, 2017 9:30:51 PM]
cowtipperbs
Advanced Cruncher
Joined: Aug 24, 2009
Post Count: 78
Status: Offline
Re: World Community Grid Moves to IBM Cloud

From how I read it, it's just a hardware change. I think WCG currently has its own hardware and is now moving to the "cloud".
----------------------------------------

[Apr 30, 2017 10:36:31 PM]
NixChix
Veteran Cruncher
United States
Joined: Apr 29, 2007
Post Count: 1187
Status: Offline
Re: World Community Grid Moves to IBM Cloud

From how I read it, it's just a hardware change. I think WCG currently has its own hardware and is now moving to the "cloud".

Right. Who takes care of the hardware now? After moving operations to the cloud there wouldn't be hardware to take care of.

Cheers coffee
----------------------------------------

[May 1, 2017 6:02:37 AM]
KLiK
Master Cruncher
Croatia
Joined: Nov 13, 2006
Post Count: 3108
Status: Offline
Re: World Community Grid Moves to IBM Cloud

From how I read it, it's just a hardware change. I think WCG currently has its own hardware and is now moving to the "cloud".

Right. Who takes care of the hardware now? After moving operations to the cloud there wouldn't be hardware to take care of.

Cheers coffee

& it also makes it possible to "scale up" or down the power as needed!
cool
----------------------------------------
oldies:UDgrid.org & PS3 Life@home


non-profit org. Play4Life in Zagreb, Croatia
[May 1, 2017 7:40:53 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: World Community Grid Moves to IBM Cloud

what is a WU?
[May 1, 2017 1:02:08 PM]