World Community Grid Forums
Thread Status: Active. Total posts in this thread: 99
Sekerob
Ace Cruncher Joined: Jul 24, 2005 Post Count: 20043 Status: Offline
Here's a filter from BOINCview for CEP1 version 6.19. There were 10 done, averaging 18:23 hours. The Quad averages 16 hours, the C2D near 23 hours. All at stock clocks.
----------------------------------------
WCG
Please help to make the Forums an enjoyable experience for All!
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
Two different machines with the same specs took 1.53 and 40.34 hours respectively. Both were confirmed as valid.
knreed
Former World Community Grid Tech Joined: Nov 8, 2004 Post Count: 4504 Status: Offline
We are working on narrowing the range of how long the workunits last. Starting with batch 10 we have reduced the overall average length. We are hoping these changes bring it down to around 9 hours, from the 15 hours of the first batches.

You can identify the batch number from the workunit name by looking at the first part of the name. In this example: E000003_352A_00002700y_1_3, 'E000003' is the key part and means that this is part of batch 3.

However, there is larger variability in workunit duration than we have on our other projects, so there is going to be more variation than normal. We will see if we can improve our estimates, but that may not be possible. For the overall average estimate, our tool that estimates duration based on past results now has enough data, so it is accurate. This should improve the estimates (subject to the variation).

With the length reduction, we have also reduced the deadline to 7 days. Due to the size of the returned results, we need to minimize the amount of time that they are stored on our servers before being sent back to Harvard. For a given batch of data, it takes about 1.5*deadline for the batch to finish. Somewhere around 80% of the workunits in a batch are returned in the first 3 days, so those results have to be stored until the batch completes.

We would like to reduce the size of the returned data, but Harvard currently needs all of it. We will continue to explore the issue with them, but changes in this area are not likely.
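As a minimal sketch of the naming convention described above (nothing official; the function name and error handling are made up for illustration), the batch number can be read straight out of the leading 'E...' field of the workunit name:

```python
import re

def batch_number(workunit_name: str) -> int:
    """Pull the batch number out of a CEP1-style workunit name such as
    'E000003_352A_00002700y_1_3', where the leading 'E000003' field
    identifies batch 3 (per the convention described in the post above)."""
    match = re.match(r"E(\d+)_", workunit_name)
    if match is None:
        raise ValueError(f"unrecognised workunit name: {workunit_name}")
    return int(match.group(1))

print(batch_number("E000003_352A_00002700y_1_3"))  # -> 3
```

Using the figures quoted above, a 7-day deadline means a batch takes roughly 1.5 x 7, about 10.5 days, to finish, so results returned in the first 3 days can sit on the servers for about a week before the batch closes.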
Sirius B
Advanced Cruncher Joined: Dec 11, 2007 Post Count: 67 Status: Offline
Darn, signed up for this thing over a year ago... didn't realise it had moved here and gone live. **Edited for inappropriate language** TKH

Same here. All my rigs are crunching WCG, but I've yet to see a CEP wu.
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
The Linux/Mac versions haven't launched yet. Perhaps that explains it?
Sirius B
Advanced Cruncher Joined: Dec 11, 2007 Post Count: 67 Status: Offline
The Linux/Mac versions haven't launched yet. Perhaps that explains it?

No, all my rigs are running XP Pro/XP X64/Vista/Server 2003. Should have received 1/2 at least.
JmBoullier
Former Community Advisor Normandy - France Joined: Jan 26, 2007 Post Count: 3716 Status: Offline
Just to make sure, have you checked in the "Available Projects" section of your device profiles if "Please opt me in to new projects..." and/or "The Clean Energy Project" boxes are checked the way you think they are?
----------------------------------------
Cheers. Jean.
Soriak
Cruncher Joined: May 17, 2008 Post Count: 3 Status: Offline
With the length reduction, we have also reduced the deadline to 7 days. Due to the size of the returned results, we need to minimize the amount of time that they are stored on our servers before being sent back to Harvard. For a given batch of data, it takes about 1.5*deadline for the batch to finish. Somewhere around 80% of the workunits in a batch are returned in the first 3 days so those results have to be stored until it completes. We would like to reduce the size of the returned data, but Harvard currently needs all of it. We will continue to explore the issue with them but changes in this are not likely.

The people behind Folding@Home published a paper about "Storage@Home". The idea behind it is to use a torrent-like system to store data on multiple volunteer computers, instead of having to keep it in-house. Maybe that's something worth exploring for this? Broadband seems available enough; it might not be too hard to find a few hundred or thousand people willing to make available a couple dozen GBs each.
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
Many papers have been published on this subject.
It's not all that easy, though. Redundancy is very important in preventing data loss, and given the technical limitations of the BOINC system such a scheme simply isn't practical.

In the case of World Community Grid, such a system isn't even necessarily desirable. The results need to be kept on the servers for validation purposes, and after validation the redundant data can be discarded. Finally, World Community Grid needs to deliver the computed data to the researchers. What the researchers do with the data is a different problem; there, distributed storage using highly available desktops may be a very practical solution.
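As a back-of-the-envelope sketch of why the redundancy matters and what it costs, here is a rough calculation; the node count, per-node space, availability, and replication factor are made-up assumptions for illustration, not figures from WCG or Storage@Home:

```python
# Rough illustration only: all numbers below are assumptions, not project figures.
volunteers = 1000          # people offering space ("a few hundred/thousand")
space_per_node_gb = 24     # "a couple dozen GBs each"
node_availability = 0.80   # assumed chance a given volunteer node is reachable
replication = 3            # assumed number of copies kept of every chunk

raw_capacity_tb = volunteers * space_per_node_gb / 1000      # total donated space
usable_capacity_tb = raw_capacity_tb / replication           # space left after copies

# A chunk is unreachable only if every one of its replicas is offline at once.
p_chunk_unavailable = (1 - node_availability) ** replication

print(f"raw space: {raw_capacity_tb:.1f} TB, usable after replication: {usable_capacity_tb:.1f} TB")
print(f"chance a given chunk is unreachable at any moment: {p_chunk_unavailable:.3%}")
```

With these assumed numbers, two thirds of the donated space goes to redundant copies, which gives a feel for the overhead that keeping data safe on volunteer machines would impose.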
Soriak
Cruncher Joined: May 17, 2008 Post Count: 3 Status: Offline
That makes sense - thanks for the explanation.