World Community Grid Forums
Thread Status: Active | Total posts in this thread: 31
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I noticed that I usually get 6.12 (64-bit) tasks, but there are still some 6.11 (32-bit) tasks in between. Whenever I get a 6.11 task while my computer holds only 6.12 tasks, the 6.11 application files are downloaded again. So could it be that those application files are deleted once the last 6.11 task is reported, and have to be downloaded anew when the next one comes around?
Or is it just a local issue with my computer?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Per my 1:12:54 PM post, I was experimenting by pushing the 6.12 tasks ahead, and sure enough, the 6.11 ones are no longer coming... just 6.12. So the 6.11 app files get deleted when the last task has completed, and then you receive them again with the next 6.11 task... I don't know if that's by design; it sounds strange. What client are you running? I'm now on all 7.0.xx test clients.
--//--
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Running 6.12.34 (x64) on Win7, 7.0.8 (x64) on Vista, and some 5.10.45 on XP/Win7 as well. I did not have a closer look at all of them, but the mixing of 6.12 and 6.11 happened today on all of them.
----------------------------------------
But the vast majority is 6.12, i.e. 64-bit WUs.
[Edit 1 times, last edit by Former Member at Jun 16, 2012 7:17:51 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
At the moment I receive only WUs with an initial quorum of 2, and the first WU was issued some 4 hours ago. Maybe there was/is a problem with finding the appropriate pair of computers...
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
What happens is that the new *app* is sent with a quota limit. Once that limit is reached, old ones [6.11] will be sent until the client has returned enough of the 6.12 results. Really, it makes no sense at all that 6.12 and 6.11 are alternating, unless an actual performance test is being executed on your hosts. It is certainly quite contrary to what I see on my W7 and Linux hosts. Go to Results Status, filter on a host where you see the weirdness, select CFSW only and sort on "sent time", then copy the whole lot, from the first 6.12 through the very latest, so the order of assignment can be made out. You need to mark the tasks with the version numbers, as those are not shown on the summary pages. If you run BOINCTasks it's even easier, as the Tasks view includes the version... sort on Received date/time and copy/paste the CFSW tasks.
----------------------------------------
Edit: You need to tell us how many cores the device has.
Edit 2: You will receive quorum 2 until the first 5 of the 6.12 results have validated; after that, as long as sequential validation continues, quorum 1 is sent [except when the periodic re-verification is scheduled, or when your device is used as wingman to some other device]. If your device receives 4-day-deadline tasks, then the device has reached the highest rating for a specific science [in a nutshell].
--//--
[Edit 2 times, last edit by Former Member at Jun 16, 2012 7:44:19 PM]
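The behaviour described here can be modelled with a minimal sketch, under the assumption that the new app version is rationed by a per-host quota and that quorum drops from 2 to 1 once the first 5 new-version results validate. The function names, the quota value, and the threshold constant are all illustrative; this is not the actual WCG server code.

```python
# Illustrative model of the scheduler behaviour described in this post.
# NEW_APP_QUOTA and VALIDATIONS_NEEDED are assumed values, not server settings.

NEW_APP_QUOTA = 10       # assumed cap on unreturned 6.12 tasks per host
VALIDATIONS_NEEDED = 5   # per the post: quorum 2 until the first 5 validate

def pick_app_version(outstanding_new):
    """Send the new 6.12 app until its quota is used up, then fall back
    to the old 6.11 app until enough 6.12 results have been returned."""
    return "6.12" if outstanding_new < NEW_APP_QUOTA else "6.11"

def quorum_for_host(validated_new):
    """Initial quorum 2; zero-redundancy (quorum 1) once the host is trusted."""
    return 2 if validated_new < VALIDATIONS_NEEDED else 1
```

Pushing the 6.12 tasks ahead, as suggested earlier in the thread, speeds up the transition in this model: returned 6.12 results both free quota slots and count toward the 5 validations.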
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hmm, I don't use BOINCTasks and at the moment have 54 pages of results for this computer (AMD Phenom II 1100T, 6 cores, Win7, BOINC 7.0.8), and I don't want to click through each pending result this evening, so I'm copying only one block where it happened several times. If necessary I can look for examples on other machines tomorrow morning.
64bit  cfsw_4729_04729306_1  Gwynedd  In Progress  6/16/12 18:49:45  6/26/12 18:49:45  0.00  0.0 / 0.0
32bit  cfsw_4729_04729543_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729609_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729301_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729160_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729213_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729658_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729240_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729140_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729663_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
32bit  cfsw_4729_04729552_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729669_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729948_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729008_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
32bit  cfsw_4729_04729440_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729131_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
32bit  cfsw_4729_04729368_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
32bit  cfsw_4729_04729014_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
32bit  cfsw_4729_04729016_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729553_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729005_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729516_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729515_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729776_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729261_1  Gwynedd  In Progress  6/16/12 18:49:30  6/26/12 18:49:30  0.00  0.0 / 0.0
64bit  cfsw_4729_04729323_1  Gwynedd  In Progress  6/16/12 18:49:11  6/26/12 18:49:11  0.00  0.0 / 0.0
64bit  cfsw_4729_04729110_1  Gwynedd  In Progress  6/16/12 18:49:11  6/26/12 18:49:11  0.00  0.0 / 0.0
Not one is a repair unit (the first WU was sent about 14:02), although I have already received repair units for both applications and had some WUs with single redundancy. I guess it's no (big) problem, because it only causes some overhead and the calculations should be OK, but if you need more information, just tell me. I will not reply until tomorrow morning, though... Good night! :-)
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hi.
I'm getting resends for 6.11 as well as the ones for the new 6.12 (it keeps downloading the 6.11 app every time). Can't the app purge be turned off, if for nothing else than to save some bandwidth on both sides?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
You're the second to report this. When I had the mix of 6.12, I pushed the 6.12 tasks ahead to speed up validation, to
A) get past the initial quota-2 limitation, and
B) get the first 5 validated as soon as possible to allow running alone again.
Result: I have not received a single 6.11 since the first 6.12, and in fact I am already receiving 6.12 repair jobs. Since the last 6.11 is about to report [which is what you get by pushing the 6.12 ahead], I'll monitor whether my 7.0.27 client loses the 6.11 app info**. It's a repair-approved device, so chances are I'd get 6.11 resends if they're in the queue.
7.0 has the advantage of only fetching work when the MinB has been reached, at which point it back-fills in one big call to top up to the MaxAB (not hunting for the stray rare tasks of certain sciences). That reduces the chance of multiple downloads considerably, in this case of the 6.11 app with resends.
A suggestion to switch off the "app purge" during the first week [maybe 14 days] after an upgraded science app version is released may be something the techs would want to consider. The DB that comes with an initial download of this science is not exactly small (20 MB per the system requirements page).
** And indeed, same here: 6.11 instantly removed... maybe a slightly overzealous housekeeping rule (no doubt a default for server 700, as under server 601 these were everlasting, until doing a project detach/attach IIRC).
--//--
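The work-fetch rule described here can be sketched as a simple low-water/high-water check: no request while the cache is above the minimum buffer (MinB); once it drops below, back-fill in one call toward the minimum plus the maximum additional buffer (MaxAB). The function and variable names are illustrative assumptions, not the client's actual internals.

```python
# Illustrative model of the 7.0-client work-fetch rule described above.
# All names and the second-based units are assumptions for this sketch.

def work_request_seconds(buffered, min_buf, max_additional):
    """Seconds of work to request from the server (0 = no request)."""
    if buffered >= min_buf:
        return 0.0  # still above the low-water mark: make no request
    # One big top-up to the high-water mark (min buffer + additional buffer).
    return (min_buf + max_additional) - buffered
```

Because requests only happen on these infrequent big top-ups, the client rarely makes the small repeated calls that could each drag in a stray 6.11 resend with its app files.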
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
As another data point for you, I've just received a 6.11 repair job on a machine that now normally runs 6.12 at zero redundancy (plus 6.12 repairs), with no other 6.11 WUs in the cache:
----------------------------------------
Project Name: Computing for Sustainable Water
Created: 06/14/2012 02:26:47
Name: cfsw_4496_04496871
Minimum Quorum: 2
Replication: 2
cfsw_4496_04496871_3  -    In Progress         18/06/12 06:40:34  22/06/12 06:40:34  0.00  0.0 / 0.0
cfsw_4496_04496871_2  611  Pending Validation  15/06/12 14:48:43  16/06/12 18:36:54  2.00  19.5 / 0.0
cfsw_4496_04496871_1  611  Error               15/06/12 11:15:33  18/06/12 06:25:57  0.00  21.3 / 0.0
cfsw_4496_04496871_0  611  Error               15/06/12 11:13:15  15/06/12 14:08:50  0.00  0.0 / 0.0
Edit: now Valid:
cfsw_4496_04496871_3  611  Valid  18/06/12 06:40:34  18/06/12 07:38:54  0.72  21.2 / 20.4
cfsw_4496_04496871_2  611  Valid  15/06/12 14:48:43  16/06/12 18:36:54  2.00  19.5 / 20.4
Edit2: In a subsequent download of CfSW (all cfsw_4814), about 20% were 6.11, intermingled with 6.12. Since then, other downloads have been 6.12 only. This i5-750 (Win7 64-bit) runs 6.11 in 0.72 h and 6.12 in 0.64 h.
[Edit 2 times, last edit by Former Member at Jun 18, 2012 10:08:27 AM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Just another interesting job (well, at least for me...):
W64  cfsw_4715_04715350_1  612  Valid  16.06.12 11:39:42  18.06.12 11:07:00  0.69  20.3 / 20.2
W64  cfsw_4715_04715350_2  612  Valid  16.06.12 11:39:10  16.06.12 12:53:56  1.06  20.1 / 20.2
W32  cfsw_4715_04715350_0  611  Error  16.06.12 10:26:11  16.06.12 10:38:02  0.00  20.8 / 0.0
The WU must have started as W32, since the _0 job had app version 6.11. But it returned an error before the second job (minimum quorum 2) was sent, and the server then issued both the remaining job and the repair job with version 6.12 - maybe because the first job returned an error and therefore didn't need to be matched? Impressive...
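The speculation above can be written down as a tiny model: if the only earlier copy errored out before any wingman result was pending, the server is free to issue the remaining copies with the current app version; if a result of the old version is still pending, the resend must stay at that version so the pair remains comparable. This encodes the poster's guess only, not confirmed server logic, and the function name is invented for illustration.

```python
# Purely illustrative model of the resend-version speculation in this post.

def version_for_resend(pending_wingman_version, current_version):
    """Pick the app version for a re-issued copy of a workunit."""
    if pending_wingman_version is None:
        # Earlier copy errored before a wingman existed: nothing to match,
        # so the resend can use the current (upgraded) app version.
        return current_version
    # A result of the old version is still pending validation: the resend
    # must use the same version to stay comparable with the wingman.
    return pending_wingman_version
```

This model is also consistent with the earlier post in the thread where a repair copy of a workunit with a pending 6.11 result was itself sent as 6.11.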