World Community Grid Forums
Category: Completed Research | Forum: Help Fight Childhood Cancer Project Forum | Thread: Test batch for new HFCC target (target-7)
Thread Status: Active | Total posts in this thread: 85
Falconet
Master Cruncher | Portugal | Joined: Mar 9, 2009 | Post Count: 3295 | Status: Offline
Could be that your copy of these was classified as inconclusive and another was sent out:
----------------------------------------
HFCC_target-7_00180959_target-7_0000_1 -- 640 | Valid | 21.8.2011 23:51:41 | 22.8.2011 21:04:21 | 3.75 | 64.9 / 67.9
HFCC_target-7_00180959_target-7_0000_0 -- 640 | Valid | 21.8.2011 23:49:41 | 24.8.2011 19:01:39 | 4.02 | 70.9 / 67.9

But you didn't see it. And in the meantime these were sent:

HFCC_target-7_00184532_target-7_0000_1 -- - | In Progress | 21.8.2011 23:18:04 | 31.8.2011 23:18:04 | 0.00 | 0.0 / 0.0
HFCC_target-7_00184532_target-7_0000_0 -- 640 | Pending Validation | 21.8.2011 23:16:25 | 24.8.2011 19:01:39 | 3.66 | 64.4 / 0.0

Because until an inconclusive is validated, all other workunits that are received will be quorum 2.

AMD Ryzen 5 1600AF 6C/12T 3.2 GHz - 85W
AMD Ryzen 5 2500U 4C/8T 2.0 GHz - 28W
AMD Ryzen 7 7730U 8C/16T 3.0 GHz
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
snip
----------------------------------------
Because until an inconclusive is validated, all other workunits that are received will be quorum 2.

Edit: Maybe you were talking about a device that newly started on HFCC? Then yes, it can take 15-20 units before most jobs are allowed to run alone. The same general rule applies to FAAH and C4CW too.

cheers --//--

P.S. This thread is well and truly expired. HFCC is in full production, and by my spreadsheet calculations there are 166 days of work left at full production... it really took a bite out of the HCC production [and others too]. The days remaining actually climbed a bit for HCC :D

[Edit 1 times, last edit by Former Member at Aug 25, 2011 12:25:57 PM]
joeperry39@gmail.com
Advanced Cruncher | USA | Joined: Nov 22, 2006 | Post Count: 140 | Status: Offline
I'm receiving target-7 WUs on both of my computers and have noticed that the actual and projected times to completion are obscenely long, i.e. 50 hours or longer projected on both machines. Both are AMD: one an Athlon dual-core, the other a Phenom quad.
----------------------------------------
I have also noticed on the quad-core box, which is also running CEP2, that those WUs are taking much longer than when I first started running them. Any idea what might be happening? Are the WUs from these two projects a lot larger than the ones previously available? Neither computer is running any other work at a greater-than-normal rate that should interfere with WCG/BOINC. In fact, the dual-core machine is almost totally BOINC 24/7. Any thoughts or ideas will be most appreciated. Thanks in advance.

A small correction/addition: the HFCC units are now showing 25+ hours to completion, up from about 12-14 hrs previously; the CEP2 units are the ones showing 50+ hrs.

"Everything in moderation, including moderation" -- Mark Twain

[Edit 1 times, last edit by osugrad at Aug 25, 2011 5:15:35 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hello osugrad,
I can think of two possibilities that I have run into. The first thing to check is that each project is running at near 100% core efficiency, using Task Manager or another utility. The second and more likely possibility is that an outage has left your client's estimation algorithm producing mistaken time estimates. If this is the problem, the estimates will quickly return to normal as results complete. Lawrence
KerSamson
Master Cruncher | Switzerland | Joined: Jan 29, 2007 | Post Count: 1671 | Status: Offline
@osugrad
----------------------------------------
Some weeks ago, during a really hot period, I experienced a similar situation with a Phenom II based host running Ubuntu 10.04 x64. I cleaned the fan filters and rebooted the host. Afterwards everything was OK and the crunching performance was "normal" again. Cheers, Yves