World Community Grid Forums
Thread Status: Active. Total posts in this thread: 26
armstrdj
Former World Community Grid Tech | Joined: Oct 21, 2004 | Post Count: 695 | Status: Offline
We have increased the average runtime of all GO Fight Against Malaria workunits by about 90%.
Thanks, armstrdj
CandymanWCG
Senior Cruncher | Romania | Joined: Dec 20, 2010 | Post Count: 421 | Status: Offline
Thank you for the heads up, armstrdj!
----------------------------------------
Knowledge is limited. Imagination encircles the world! - Albert Einstein
rbotterb
Senior Cruncher | United States | Joined: Jul 21, 2005 | Post Count: 401 | Status: Offline
Are these new bigger GFAM WUs just in the queue, with some older smaller WUs still coming? I'm wondering because all the GFAM WUs I'm running on my laptop seem to take 7-9 hours each; if your change had hit my machine yet, I would expect my turnaround times on GFAM WUs to jump to 15+ hours. I've seen my laptop lately give estimates of around 15 hours or more, but once the crunching is done, they still come in around 7-9 hours (at least to date).
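For what it's worth, a quick back-of-the-envelope check (purely illustrative arithmetic, nothing from the WCG client; the 1.9x factor is armstrdj's "about 90%" figure and the 7-9 hour baseline is from the post above) suggests those two observations are consistent:

```python
# Back-of-the-envelope check: scale the current 7-9 hour GFAM runtimes
# by the ~90% increase that armstrdj announced. Purely illustrative.
SCALE = 1.90  # "increased the average runtime ... by about 90%"

for old_hours in (7.0, 8.0, 9.0):
    print(f"{old_hours:.0f} h -> {old_hours * SCALE:.1f} h")

# Prints roughly 13.3, 15.2 and 17.1 hours, which matches the "15+ hours"
# estimates above (and the 12-18 hour runtimes reported later in the thread).
```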
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
rbotterb wrote: Are these new bigger GFAM WUs just in the queue and some older smaller WUs still coming? [...] I've seen my laptop lately give estimates of around 15 hours or more, but once the crunching is done, they still come in around 7-9 hours (at least to date).

It depends on how much pre-loaded work there was before these longer ones came out of the hopper. Since GFAM is on low feed priority, they were bound to hit the feeder with a greater delay. The chart to check for the global state is http://bit.ly/WCGFAM (light blue line and legend, with a trend arrow); it suggests the distribution has been going on for a little while. The hint/observation I made in the FAAH thread was the belated cue to share the info, since the change affects the run times of 6 sciences. I have not yet seen an impact for HPF2 & DSFL.
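To put the "pre-loaded work" point in concrete terms, here is a minimal sketch of how long an already-downloaded cache of old-style WUs would take to drain before the longer ones start showing up on a host. Every number below (cache size, runtime, core count) is invented purely for illustration; substitute your own:

```python
# Hypothetical illustration: the longer GFAM WUs only reach a host after
# its pre-loaded buffer of old-style WUs has drained. Numbers are made up.
buffered_wus = 12        # old-style GFAM WUs already sitting in the cache
old_runtime_h = 8.0      # typical runtime of an old-style WU, in hours
concurrent_tasks = 4     # CPU cores crunching in parallel

drain_hours = buffered_wus * old_runtime_h / concurrent_tasks
print(f"~{drain_hours:.0f} h (~{drain_hours / 24:.1f} days) before the "
      f"longer WUs start appearing on this host")
```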
rbotterb
Senior Cruncher | United States | Joined: Jul 21, 2005 | Post Count: 401 | Status: Offline
Looks like the longer running GFAM WUs started popping up on Sunday. I've got four of them running on my laptop from Sunday's feeds, and it looks like they are all going to take 12-18 hours to complete at the pace they are going. I guess for as long as I run these going forward, I'll have to plan on each WU taking a good chunk of two workdays to complete.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
SPAM spotted - this post within this thread: https://secure.worldcommunitygrid.org/forums/wcg/viewpostinthread?post=389452
TKH
Former World Community Grid Admin | USA | Joined: Aug 13, 2005 | Post Count: 775 | Status: Offline
Thanks Moonian,
The post has been removed.
TKH
Rickjb
Veteran Cruncher | Australia | Joined: Sep 17, 2006 | Post Count: 666 | Status: Offline
I've just started crunching the longer GFAM WUs (late Tues 28th UTC, work queues of about 1.5 days).
I came looking here because there were large numbers of WUs "ready to report" on 2 machines, a sign that one or more WUs has exceeded its built-in estimate of CPU time. Did you increase the built-in estimates of crunch times to match the new WU lengths? [Edit]: It seems not, as the "To complete" times of GFAM WUs that are "Ready to start" are still about 10% less than for FAAH WUs, as they were before the change. [/Edit]

Over the last few hours, the proportion of GFAM WUs that my machines have fetched from WCG has increased. Is this just a blip associated with the change in runtimes, or has GFAM returned to normal priority? Of course, if the number of GFAM WUs being issued remains the same, their increased runtimes will increase the proportion of WCG CPU time devoted to GFAM.

[OT]: What is the current reason for GFAM WUs being issued at reduced priority? IIRC, the initial reason given was that a server failure at the Scripps Research Institute reduced their ability to handle the required quantity of data. Is this still the case, or are there other reasons, e.g. the scientists' pace of data analysis? [/OT]

Should the increase in GFAM WU runtimes also be announced in the Member News forum?

[Edit 3 times, last edit by Rickjb at Aug 29, 2012 9:00:22 AM]
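Rickjb's last point is easy to illustrate: if the number of GFAM WUs issued stays the same while each one takes ~1.9x as long, GFAM's slice of total crunch time grows. The WU counts and CPU-hour figures in this sketch are hypothetical, chosen only to show the shape of the effect:

```python
# Hypothetical illustration of the CPU-time-share effect. All figures invented.
gfam_wus_per_day = 1000            # assumed daily GFAM issue rate (unchanged)
gfam_old_runtime_h = 8.0           # assumed old per-WU runtime, in hours
other_cpu_hours_per_day = 50000.0  # assumed CPU hours/day for all other projects
SCALE = 1.90                       # armstrdj's ~90% runtime increase

for label, runtime_h in (("before", gfam_old_runtime_h),
                         ("after", gfam_old_runtime_h * SCALE)):
    gfam_cpu = gfam_wus_per_day * runtime_h
    share = gfam_cpu / (gfam_cpu + other_cpu_hours_per_day)
    print(f"{label}: GFAM share of total CPU time ~ {share:.1%}")

# before: ~13.8%, after: ~23.3% -- same number of WUs, a bigger slice of the grid.
```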
mgl_ALPerryman
FightAIDS@Home, GO Fight Against Malaria and OpenZika Scientist | USA | Joined: Aug 25, 2007 | Post Count: 283 | Status: Offline
Hello Rickjb and all my fellow crunchers,
During the 4th of July weekend, the server at TSRI that we use for both FightAIDS@Home and GO Fight Against Malaria died on us. It had multiple failed disks on each RAID partition, its three power supplies broke sequentially over time, and it had some old, faulty firmware. Consequently, after it died we had to substantially slow down the pace of GFAM, while we ran it from a temporary server (i.e., a desktop and an external hard drive).

But everything is now back to normal. Our server was fixed recently, and last week we asked the World Community Grid team at IBM to resume running our projects at full speed. The jobs are flowing out for both projects, and the results are pouring in. During the interim there might be a few hiccups, but GFAM should be running at its normal, full priority very soon (if it isn't already).

When our server died on us a couple years ago, we lost some of the earliest FightAIDS@Home data (i.e., experiments that were performed before I joined the team). But I learned from that experience. Due to my obsessive-compulsive behavior regarding backing up at least two copies of all data, when the server died this July, we did not lose any of the FAAH or GFAM data. The IBM team just had to re-send the most recent results to us again, once we had the temporary server running.

After a tough couple months, we should have smooth sailing from here on out.

Thank you very much for your interest and your continued support,
Alex L. Perryman, Ph.D.

[Edit 1 times, last edit by mgl_ALPerryman at Aug 29, 2012 5:40:45 PM]
bieberj
Senior Cruncher | United States | Joined: Dec 2, 2004 | Post Count: 406 | Status: Offline
Thanks for your post, Dr. Perryman, and for ensuring that you did not lose any data when the server died this time.