World Community Grid Forums
Thread Status: Active | Total posts in this thread: 3
bieberj
Senior Cruncher | United States | Joined: Dec 2, 2004 | Post Count: 406 | Status: Offline
I am wondering if the algorithm used to compute the estimated time to completion could be made smarter.

Basically, suppose the computer has been producing steady estimates for quite some time, then FAAH tasks start taking longer than expected while tasks for other projects remain stable. Instead of changing the estimates for all projects, why not start by changing the estimated time to completion for FAAH only? At least the estimates for the other projects would remain stable while FAAH settles into its new estimate. Just a suggestion.

JB
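[Editor's note: for illustration only, here is a minimal Python sketch of the per-project correction factor this post suggests. The function names, the smoothing rule, and the numbers are hypothetical assumptions, not part of the actual BOINC client.]

```python
from collections import defaultdict

# Per-project runtime correction factors, starting neutral at 1.0.
correction = defaultdict(lambda: 1.0)

def record_result(project, estimated_seconds, actual_seconds, smoothing=0.1):
    """Nudge only the named project's factor toward the observed
    actual/estimated runtime ratio."""
    ratio = actual_seconds / estimated_seconds
    correction[project] += smoothing * (ratio - correction[project])

def estimate(project, base_seconds):
    """Scale a raw runtime estimate by that project's own factor."""
    return base_seconds * correction[project]

# A batch of long FAAH results only shifts FAAH's own estimates.
record_result("FightAIDS@Home", estimated_seconds=10_000, actual_seconds=15_000)
print(estimate("FightAIDS@Home", 10_000))          # ~10,500 (rises)
print(estimate("Human Proteome Folding", 10_000))  # 10,000 (unchanged)
```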
knreed
Former World Community Grid Tech | Joined: Nov 8, 2004 | Post Count: 4504 | Status: Offline
The way your client estimates work is based on two factors:

1) The estimated flops for a specific workunit, provided by the project.
2) An adjustment to the estimated flops called the 'Duration Correction Factor' (DCF), which is maintained by the client on your computer to track actual runtime against what was estimated.

The problem is that the DCF your client uses is applied to all of our projects, even if we are only having trouble with our estimates for a single research project.

We actually tried very hard for a long time to estimate #1 accurately. We finally gave up and started using the flops reported by recently returned results for a given research application as the basis for our estimated flops. In general this has worked really well, since we try to size our workunits consistently. Unfortunately, there are periods when this doesn't work so well (now is one of those times). The problem arises when the nature of the work changes slightly to include an easier or harder set of data (i.e. requiring less or more time to compute the outcome). We work hard with the researchers to make sure that they provide us with some information about the relative difficulty of a given batch. However, there are times when this doesn't work out and we generate workunits that are not of a consistent size.

We have resolved the current situation, and the FightAIDS@Home workunits being sent out now are back to a more normal length. We will continue to work with the researchers to minimize how often this happens. We of course apologize for the inconvenience that this causes our members.
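[Editor's note: as an illustration of the interaction described above, here is a minimal Python sketch, not the actual BOINC client code, of how a single shared DCF, updated from one project's long-running results, inflates the runtime estimates for every other project's workunits. The names, numbers, and smoothing rule are illustrative assumptions.]

```python
def estimated_runtime_seconds(estimated_flops, host_flops_per_sec, dcf):
    """Estimate = (work to do / host speed), scaled by the shared
    Duration Correction Factor."""
    return (estimated_flops / host_flops_per_sec) * dcf

def update_dcf(dcf, estimated_seconds, actual_seconds, smoothing=0.1):
    """Move the single client-wide DCF toward the observed ratio of
    actual to estimated runtime."""
    observed_ratio = actual_seconds / estimated_seconds
    return dcf + smoothing * (observed_ratio - dcf)

# A FightAIDS@Home result runs 50% longer than estimated...
dcf = update_dcf(dcf=1.0, estimated_seconds=10_000, actual_seconds=15_000)

# ...and because the DCF is shared, the estimate for a workunit from any
# other project is inflated by the same factor.
print(estimated_runtime_seconds(3.6e13, 4.0e9, dcf))  # ~9,450 s instead of 9,000 s
```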
bieberj
Senior Cruncher | United States | Joined: Dec 2, 2004 | Post Count: 406 | Status: Offline
OK, knreed, thanks for your feedback. From what you told me, it sounds like you are trying your best.

JB