World Community Grid Forums
Thread Status: Active · Total posts in this thread: 30
Former Member · Cruncher · Joined: May 22, 2018 · Post Count: 0
Famous movie: Le Souffle au Coeur
(Not one of your soufflés ;>)
--//--
pcwr · Ace Cruncher, England · Joined: Sep 17, 2005 · Post Count: 10903
> Anyway, please consider that currently it is the high vacation period in France. Cheers, Yves

Yes, this lasts until at least September. I was meant to be going to France for a show, but because the venue is "closed" for the whole of the summer holidays, the event won't be happening until the last weekend of the holidays, just when everyone has to be back home for the kids going back to school.

Patrick
Speedy51 · Veteran Cruncher, New Zealand · Joined: Nov 4, 2005 · Post Count: 1326
> 192 results were returned as of 8/10/11 00:05:33

I gather this work is still out on crunchers' PCs and not resent jobs. Please correct me if I'm mistaken.
Old Jim · Cruncher · Joined: Nov 10, 2006 · Post Count: 3
Anyone have an idea when the project will go back online for those of us who wish to crunch numbers for them?
Former Member · Cruncher · Joined: May 22, 2018 · Post Count: 0
> Anyone have an idea when the project will go back online for those of us who wish to crunch numbers for them?

Actually, we don't. August is the holiday month in France, and most of the research team is away from the office. We expect everyone back in a few weeks, and then we'll all know when the project will come back online. Thanks for asking.
Former Member · Cruncher · Joined: May 22, 2018 · Post Count: 0
Patience is a virtue:

Project Name: Help Cure Muscular Dystrophy - Phase 2
Created: 07/20/2011 16:56:47
Name: CMD2_2047-2JGB_A.clustersOccur-3C1X_A.clustersOccur_2_34173_37254
Minimum Quorum: 2 · Replication: 2

| Result Name | App Version | Status | Sent Time | Time Due / Return Time | CPU Time (hours) | Claimed / Granted BOINC Credit | Note |
|---|---|---|---|---|---|---|---|
| CMD2_2047-2JGB_A.clustersOccur-3C1X_A.clustersOccur_2_34173_37254_1 | 640 | Valid | 7/21/11 04:30:22 | 7/21/11 15:40:51 | 6.00 | 28.8 / 34.8 | Mine |
| CMD2_2047-2JGB_A.clustersOccur-3C1X_A.clustersOccur_2_34173_37254_0 | 640 | Valid | 7/21/11 04:30:17 | 8/13/11 10:51:23 | 6.00 | 25.2 / 21.5 | "No Reply" that returned the result 23 days late, and validated OK! |
| CMD2_2047-2JGB_A.clustersOccur-3C1X_A.clustersOccur_2_34173_37254_2 | - | Other | 1/1/70 00:00:00 | 1/1/70 00:00:00 | 0.00 | 0.0 / 0.0 | |
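To make the Minimum Quorum / Replication bookkeeping above concrete: with a quorum of 2, a workunit validates as soon as two returned copies agree, and even a copy that comes back well past its deadline can be one of them, as long as the workunit is still live on the server. A toy Python sketch of that rule (the class and field names are mine for illustration, not the real BOINC validator's):

```python
from dataclasses import dataclass

@dataclass
class ResultCopy:
    name: str
    status: str      # "Valid", "No Reply", "Other", ...
    returned: bool

def quorum_met(copies: list, min_quorum: int = 2) -> bool:
    """True once enough agreeing copies are back, late or not."""
    agreeing = [c for c in copies if c.returned and c.status == "Valid"]
    return len(agreeing) >= min_quorum

# The thread's example workunit: _1 on time, _0 twenty-three days late,
# _2 never dispatched (hence the 1/1/70 epoch timestamps).
copies = [
    ResultCopy("CMD2_..._1", "Valid", True),
    ResultCopy("CMD2_..._0", "Valid", True),
    ResultCopy("CMD2_..._2", "Other", False),
]
assert quorum_met(copies)   # quorum of 2 met, both returned copies earn credit
```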
gb009761 · Master Cruncher, Scotland · Joined: Apr 6, 2005 · Post Count: 3010
"No Reply" that returned the result 23 days late, and validated OK! Good grief... must be some sort of record...![]() |
Mysteron347 · Senior Cruncher, Australia · Joined: Apr 28, 2007 · Post Count: 179
"No Reply" that returned the result 23 days late, and validated OK! Hmm. Let's look at a relevant chart from Sekerob: The "Task CPU Hours" line shows a distinct dip when the project was turned off, then a sustained above-historical-average runtime which is actually INCREASING. It doesn't take a genius to figure out that this indicates slow processors reporting late. In normal circumstances with the project running, these late results would have been discarded and the job re-queued to another cruncher. It's reasonable in my view to conclude that the 'work' being done by these plodders is in fact NEGATIVE. The results are discarded and the work unit is effectively simply delayed by 10 days while the tortoise toils fruitlessly. So - what to do? The politicians' solution (quick, easy and an absence of thought) is to simply ban the slower processors. The smart volunteer-manager's solution is to allot the tiny repair jobs to the slower processors. I recall receiving a job that was either 1 or 3 positions. Didn't take long on my 3G2 quadcore - but that's not the point. It would have been better to send that unit to an old W98 Pentium (and they're still attached...) Even if it took an hour, it would have been useful work Which all leads back to the "Crunch 'til You Drop" debate. Gains were made by implementing the performance-matching system. I believe even greater gains could be made by using a simple per-installation moving-average: the crunch-speed given by the number of positions processed per unit of CPU time. Generate different workunit sizes depending on the distribution of available processors and pick units matched to the crunch-speed of the target processor - the very small units and repairs for the slower processors to keep them usefully employed and use the faster processors on larger chunks. And given that according to the project status, even given the Gallic holiday season, when WUs in the 2000s were being despatched in early July, batch 1785+ were still outstanding. I'd suggest the 10-day deadline needs a closer look if this published data is to be trusted. |
Former Member · Cruncher · Joined: May 22, 2018 · Post Count: 0
Hi,

there is a problem in Lyon with the reception of the data: the disk space is completely full. We need to wait until the end of the vacation period to solve it; we in Paris have no way to fix the matter ourselves.

For the rest, we are right now continuing the data analysis over the first WCG dataset, crossing JET (prediction analysis of interaction sites) with the dataset you provided. We hope to have a report written on this in a month or two, and to give you extra info on the prediction level we can hope for.

Yes, over the summer things do not stop completely in France :)

Alessandra
Former Member · Cruncher · Joined: May 22, 2018 · Post Count: 0
Mysteron347,

Good, but not original, thought. It's on the development path to shape and assign jobs based on power, so even where you'd have a large variation in FLOPS per job, you'd see the Hours per Task go much flatter. In a way, by peak-shaving at 6 hrs/60% and CPU matching, this is already partially achieved for this science. What we see now is typical project end-game behaviour: hours going up, credit per hour going down, brief hikes when the repairs on "No Reply" (NR) go out... the statistics of small numbers.

On NR: as long as those tasks are on the live system, they can come in, but if a make-up copy was sent even though the project is on hold [did they send copies?], then these 23-day-overdue returns are redundant. The server would have sent a message, but with older clients there would be no auto-cancellation, only a warning that "you might not get credit". The older the client, the less likely it is running in "managed" operation mode, but these returns are just as welcome. (A rough sketch of this decision logic follows below.) The future can be seen, but we can't touch it yet. Kind of.

--//--

P.S.

> I'd suggest the 10-day deadline needs a closer look if this published data is to be trusted.

Give us a good reason, other than the progress percent / remaining days, why it should not be trusted to a very large degree?

[Edit 2 times, last edit by Former Member at Aug 14, 2011 8:00:05 AM]
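To pin down the NR bookkeeping described above, here is a rough decision-table sketch in Python. The predicate names are mine and the actual WCG/BOINC server logic is not shown in this thread, so treat it as a reading aid rather than a reference implementation:

```python
def handle_overdue_report(workunit_live: bool,
                          makeup_copy_sent: bool,
                          client_supports_abort: bool) -> str:
    """What happens when a result is reported after its deadline."""
    if not workunit_live:
        # Workunit already retired from the live system: nothing to credit.
        return "discard: workunit no longer live"
    if makeup_copy_sent:
        # Redundant, though it can still validate if it beats the
        # make-up copy back to the server.
        outcome = "accept (redundant: make-up copy in flight)"
    else:
        outcome = "accept into the normal validation path"
    if client_supports_abort:
        note = "server can auto-cancel sibling tasks that are no longer needed"
    else:
        note = "older client only gets a 'you might not get credit' warning"
    return outcome + "; " + note

print(handle_overdue_report(True, True, False))
```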