World Community Grid Forums
Thread Status: Active | Total posts in this thread: 352
RCC_Survivor
Veteran Cruncher | USA | Joined: Apr 28, 2007 | Post Count: 1337 | Status: Offline
Quoting an earlier post:
"Whatever the attribution (extended Italian holidays, economics, project choice/continuity, server problems), what is certain is that this is the deepest summer decline seen in WCG history. Let's work to ramp this thing back up to new records."

You talking to me? Everything I have is running 24/7/365. It has been that way for years. There is nothing I can do to improve the stats.
Be kinder than necessary, for everyone you meet is fighting some battle.
Please join the team The Survivors. Bilateral Renal, Melanoma, and Squamous Cell cancers.
astrolabe.
Senior Cruncher | Joined: May 9, 2011 | Post Count: 496 | Status: Offline
Quoting the exchange above:
"Let's work to ramp this thing back up to new records. You talking to me? Everything I have is running 24/7/365. It has been that way for years. There is nothing I can do to improve the stats."
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Been crunching 24/7/365 here also, but I have had to reduce due to abnormally high summer temps. That should be coming to an end next week, hopefully.
robertmiles
Senior Cruncher | US | Joined: Apr 16, 2008 | Post Count: 445 | Status: Offline
Is there currently a frequent problem with uploading the output files of workunits? My two faster computers both appear to be waiting to finish uploading the output files from a previous workunit before they request any more WCG workunits, and are waiting around 2 hours for the next retry at uploading. They're connected to several other BOINC projects, though, and getting an adequate supply of workunits from those.
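If it is only the client sitting out its transfer back-off (rather than a server-side fault), you can nudge it from the command line. A minimal sketch in Python, assuming boinccmd is on the PATH and the BOINC client is running on the same machine (add --passwd if your GUI RPC is password-protected). Note that --network_available only asks the client to retry deferred communication immediately; it does not fix anything on WCG's side.

```python
# Minimal sketch: list current transfers, then nudge a local BOINC client
# to retry deferred uploads instead of waiting out its ~2 hour back-off.
# Assumes boinccmd is on the PATH and the client runs on this machine.
import subprocess

def show_transfers():
    """Print the client's current file transfers (uploads and downloads)."""
    out = subprocess.run(["boinccmd", "--get_file_transfers"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)

def retry_deferred_transfers():
    """Tell the client the network is available, which retries deferred communication."""
    subprocess.run(["boinccmd", "--network_available"], check=True)

if __name__ == "__main__":
    show_transfers()
    retry_deferred_transfers()
```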
Dataman
Ace Cruncher | Joined: Nov 16, 2004 | Post Count: 4865 | Status: Offline
Quoting robertmiles:
"Is there currently a frequent problem with uploading the output files of workunits? [...]"

http://www.worldcommunitygrid.org/forums/wcg/viewthread_thread,33701
KWSN - A Shrubbery
Master Cruncher | Joined: Jan 8, 2006 | Post Count: 1585 | Status: Offline
And this is why I never let my cache drop below 1.8 days. Just too many occasions where something goes horribly wrong either on my end or theirs. That little extra keeps my machines humming along.
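For anyone wondering where that setting lives: it is the "store at least X days of work" / "store up to an additional Y days of work" pair in the computing preferences, which can also be set locally through global_prefs_override.xml. A minimal sketch, assuming a Linux-style BOINC data directory (the path varies by platform, so adjust it for your install); the 1.8/0.2 values simply mirror the cache described above.

```python
# Minimal sketch: set the BOINC work cache locally via global_prefs_override.xml
# and ask the running client to re-read it.
# The data directory is an assumption -- e.g. /var/lib/boinc-client on many
# Linux packages, C:\ProgramData\BOINC on Windows; adjust as needed.
import subprocess
from pathlib import Path

BOINC_DATA_DIR = Path("/var/lib/boinc-client")

OVERRIDE = """<global_preferences>
   <work_buf_min_days>1.8</work_buf_min_days>
   <work_buf_additional_days>0.2</work_buf_additional_days>
</global_preferences>
"""

def set_work_cache():
    # Write the local override, then tell the client to pick it up.
    (BOINC_DATA_DIR / "global_prefs_override.xml").write_text(OVERRIDE)
    subprocess.run(["boinccmd", "--read_global_prefs_override"], check=True)

if __name__ == "__main__":
    set_work_cache()
```

Keep in mind that a local override takes precedence over your web preferences until you remove the file.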
----------------------------------------
Distributed computing volunteer since September 27, 2000
branjo
Master Cruncher | Slovakia | Joined: Jun 29, 2012 | Post Count: 1892 | Status: Offline
Will follow your recommendation. I have a 1-day cache, but it seems not to be enough. 1.5 to 2 days is reasonable: it is enough to bridge such outages, and in the case of a computer crash it is not that big an amount of lost work for the sub-project(s).
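A quick back-of-the-envelope view of that trade-off; the numbers below are purely hypothetical and are only there to make the reasoning concrete.

```python
# Hypothetical numbers, just to illustrate the cache trade-off discussed above:
# a bigger cache bridges longer outages, but (in the worst case, e.g. a dead
# disk) it is also the most unreported work you could lose at once.
cores = 8                  # crunching threads on one machine (assumption)
hours_per_result = 4.0     # average workunit runtime per core (assumption)
cache_days = 2.0           # "store at least" setting

buffered_results = cores * 24 * cache_days / hours_per_result
print(f"Outage coverage: about {cache_days:.1f} days of work on hand")
print(f"Worst-case loss in a crash: roughly {buffered_results:.0f} unreported results")
```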
----------------------------------------
Cheers and NI!
Crunching@Home since January 13, 2000. Shrubbing@Home since January 5, 2006.
[Edit 2 times, last edit by branjo at Aug 31, 2012 8:05:22 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Have marked as resolved, happy days.
Thanks guys for your work on this.
Rickjb
Veteran Cruncher | Australia | Joined: Sep 17, 2006 | Post Count: 666 | Status: Offline
Quoting branjo: "I have a 1-day cache, but it seems not to be enough. 1.5 to 2 days is reasonable."
My "farm" got through the recent server outage with about 1 hour's work to spare on a cache setting of 1.4-1.5d. I can remember 1 other occasion of a long server outage, and that lasted several days. It was over the christmas holiday period in 2006 or 07, when the servers were located in Boulder, CO. There was a server crash during a blizzard and techs could not get physical access to the machines. You needed a cache setting of about 3 days to get through that one. I think it's interesting that WCG found limitations in IBM's commercial GPFS product. It's an indication of just how big "we" are. Better for this to happen in-house than with an external customer, I guess. I expect that the problems will be reported back to IBM HQ, who will make changes to their product. Meanwhile, WCG have not said that they're now running regular scans for large directory-files that have a high proportion of entries for deleted files. They've added more RAM to the servers, but unless there's a lid on the size of the problem they may hit the RAM limit again at some stage. Also, allowing large sparse directories probably increases the amount of CPU time spent searching them and may noticeably degrade system performance. Comments? |
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Surely you understand that WCG is also an R&D project for IBM... things learned here doubtless flow back into their knowledge base and into the quality of their products and services. Good for them.