| World Community Grid Forums
Thread Status: Active | Total posts in this thread: 3
Baywhale
Advanced Cruncher | Joined: Apr 18, 2006 | Post Count: 88 | Status: Offline
This question is most relevant to HPF2 as there may be long periods of time between checkpoints.
Am I right in thinking that recent data is lost if the computer is shut down before the agent reaches the next checkpoint? If this is the case, then wouldn't it make sense to only release HPF2 work units to computers that run continuously (or to suggest this in the minimum requirements for this project)?
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
The nature of the HPF2 beast is: the longer it's up, the less time is lost in relative terms. Eight hours a day would lose X × 7 a week to checkpoint loss; 16 hours a day would potentially lose half as much. It depends very much on whether a shutdown comes just after a segment finishes or just before. This is the very reason WCG set minimum CPU requirements: so that a machine can be expected to complete one or more segments a day before it is shut down. Remember, it's meant to run in idle time, not to incite anyone to let it run 24/7; that's entirely a personal choice, though surely very much appreciated. Assigning this only to fast machines or 24/7 machines would, in my estimation, slow the overall HPF2 project down. The WCG grid is not large enough 'yet' to be managed like that!
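The arithmetic behind "the longer it's up, the less time is lost in relative terms" can be sketched as follows. This is an illustration only, not WCG code; the 30-minute checkpoint interval is a hypothetical number, and it assumes one shutdown per day landing at a uniformly random point between checkpoints.

```python
# Illustration (not WCG code): expected fraction of daily compute lost
# to a single shutdown, given a hypothetical checkpoint interval.

def fraction_lost(hours_per_day: float, checkpoint_minutes: float = 30) -> float:
    """Average fraction of a day's work lost to one shutdown.

    Assumes the shutdown lands uniformly at random between two
    checkpoints, so on average half an interval of work is discarded.
    """
    runtime_min = hours_per_day * 60
    avg_lost_min = checkpoint_minutes / 2  # uniform shutdown timing
    return avg_lost_min / runtime_min

for hours in (8, 16, 24):
    print(f"{hours:2d} h/day -> {fraction_lost(hours):.2%} of daily work lost")
```

The absolute loss per shutdown is the same regardless of uptime; only the relative cost shrinks as daily runtime grows, which is Sekerob's point.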
----------------------------------------
WCG
Please help to make the Forums an enjoyable experience for All!
Alther
Former World Community Grid Tech | United States of America | Joined: Sep 30, 2004 | Post Count: 414 | Status: Offline
This question is most relevant to HPF2 as there may be long periods of time between checkpoints. Am I right in thinking that recent data is lost if the computer is shut down before the agent reaches the next checkpoint? If this is the case, then wouldn't it make sense to only release HPF2 work units to computers that run continuously (or to suggest this in the minimum requirements for this project)?

Yes, data is lost, but how much depends on the application and how often it checkpoints. HPF2 actually checkpoints at a more consistent rate than either FAAH or HPF1. Just because you don't see the progress increasing doesn't mean it's not checkpointing. That doesn't mean HPF1 will not checkpoint sooner than HPF2; it's just that HPF1 varies wildly between checkpoints. It might be 5 minutes one time and 45 minutes the next. That's because HPF1 checkpoints only at the end of a successful structure prediction, while HPF2 checkpoints at the end of each attempted structure prediction.
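Why variable checkpoint spacing (HPF1-style) loses more work than steady spacing (HPF2-style) with the same average interval can be shown with a small simulation. The interval values here are hypothetical, chosen only to mirror the "5 minutes one time and 45 minutes the next" example above.

```python
import random

# Illustration (hypothetical intervals): average work lost when power is
# cut at a uniformly random moment. Only progress since the most recent
# checkpoint is lost, so a shutdown inside a long interval costs more.

def avg_lost(intervals, trials=100_000, seed=42):
    """Average minutes of work discarded at a random shutdown."""
    rng = random.Random(seed)
    total = sum(intervals)
    lost = 0.0
    for _ in range(trials):
        t = rng.uniform(0, total)
        for span in intervals:
            if t < span:
                lost += t  # shutdown lands inside this interval
                break
            t -= span
    return lost / trials

variable = [5, 45, 10, 40, 5, 45]  # HPF1-like: wildly varying spacing
steady = [25] * 6                  # HPF2-like: same mean, consistent

print(f"variable spacing: {avg_lost(variable):.1f} min lost on average")
print(f"steady spacing:   {avg_lost(steady):.1f} min lost on average")
```

A random shutdown is more likely to land inside a long interval, and losses there are larger, so the variable schedule discards more work on average even though both schedules checkpoint equally often overall.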
Rick Alther
Former World Community Grid Developer