| World Community Grid Forums
Thread Status: Active | Total posts in this thread: 39
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3716 | Status: Offline
During times when HCMD2 jobs are mostly children or grandchildren, these WUs are very good at pulling your DCF down, and when a parent, or a WU from another project, completes, the total estimated duration of your job queue will jump considerably, especially if it is as big as you say.

After several days since I switched from HCMD2-only to DDDT-only, estimated durations have stabilized pretty well: the average duration is quite stable now, which allowed me to observe that Ubuntu 64 is more efficient for DDDT as well, and by more than I would have guessed, i.e. 20% faster than XP 32. The average duration under Ubuntu 64 is a few minutes below 4 hours, while it is a few minutes below 5 hours for XP 32.

Cheers. Jean.
[Edit 1 times, last edit by JmBoullier at Jul 20, 2009 3:56:27 PM]
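The DCF swings described above can be sketched with a toy update rule (a hypothetical simplification with a made-up 10% downward step; the real client code differs in detail):

```python
def update_dcf(dcf: float, estimated_hours: float, actual_hours: float) -> float:
    """Toy model of BOINC's duration_correction_factor update.

    The asymmetry is the point: a task that runs longer than estimated
    pushes DCF up at once, while short tasks only pull it down a little
    at a time.
    """
    ratio = actual_hours / estimated_hours
    if ratio > dcf:
        return ratio                      # long task: DCF jumps up immediately
    return dcf + 0.1 * (ratio - dcf)      # short task: DCF drifts down slowly

# A run of short "child" WUs pulls DCF down step by step...
dcf = 1.7
for _ in range(20):
    dcf = update_dcf(dcf, estimated_hours=4.0, actual_hours=2.0)
# ...then one long "parent" WU makes it jump again at once.
dcf = update_dcf(dcf, estimated_hours=4.0, actual_hours=8.0)
```

This is why the total estimated duration of the queue jumps considerably when a parent completes: every queued task's estimate is multiplied by the new, higher DCF.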
Ingleside
Veteran Cruncher | Norway | Joined: Nov 19, 2005 | Post Count: 974 | Status: Offline
>> 2: Once tasks have been downloaded, the client immediately starts running them in "high-priority" mode. As long as WCG is in this mode, v6.2.xx will block all work requests to WCG, except for an idle CPU.

> This is a common belief but it is simply wrong. It seems to be true when the client switches to high-priority mode because the estimated duration of all queued jobs suddenly increased after an exceptionally long job completed. But in that case it is simply because the total estimated duration of all queued jobs jumped by several hours at once, so there is no reason to request new work for a while.

Hmm, I'm aware that v6.6.xx keeps asking for more work, but I was under the impression that v6.2.xx worked like v5.10.xx... A quick test with v5.10.45 shows there has been a small change since v5.8.xx that I wasn't aware of.

In v5.10.xx, if all CPUs are running in "high priority", work requests are blocked. If only some but not all are, work requests are still allowed. This was tested with cached work under 10 hours and deadlines over 5 days, so clearly the only reason for "high priority" was the "Connect about every N days" setting.

If v6.2.xx works like v5.10.xx, a "Connect..." setting of 10 days will work as my post indicated, but since I can't run v6.2.28, someone else will have to do this testing... (Just disable the network, increase "Connect..." until all cores run in high priority, re-enable the network, and see whether the client asks for more work or not.)

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
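The v5.10.xx rule described above boils down to a small predicate; a minimal sketch (assuming the simplified all-cores rule as described, not actual client source):

```python
def allows_work_fetch(cores_in_high_priority: int, total_cores: int) -> bool:
    """v5.10.x-style rule as described above: work requests are blocked
    only when EVERY core is busy with a high-priority task; a partially
    high-priority host may still ask for work."""
    return cores_in_high_priority < total_cores

# Quad-core examples: 3 of 4 cores in HP still fetches; 4 of 4 blocks.
print(allows_work_fetch(3, 4))  # True
print(allows_work_fetch(4, 4))  # False
```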
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Correct; otherwise there would be a risk of idle cores under some circumstances, so a total block does not occur until ALL cores are running HP.

WCG
Please help to make the Forums an enjoyable experience for All!
[Edit 1 times, last edit by Sekerob at Jul 20, 2009 7:51:26 PM]
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3716 | Status: Offline
Total block did not occur at all with all cores running HP, and this lasted as long as I kept the same 5-day setting for the extra work buffer. It is only when I set it down to 0.5 day that those three devices stopped fetching work, which is quite normal then.

And I started to play with the "connect every xx" parameter only after setting the extra work buffer lower, i.e. when those devices had already stopped fetching work because of the lower setting. Before then, two devices had the connect parameter set at 0.0004 and the third one at 0. In fact, it is because I knew that the clients would not fetch work for several days that I tried to force periodic reporting via the connect parameter, and it does not work this way.

I don't remember how it worked with 5.10.45 (I rarely raise the cache to even one day anyway) and I have no intention of installing it back to check.

One last surprising fact: the P4 HT is still crunching its last two DDDT WUs in high-priority mode although:
- they are due for July 25,
- one is almost complete (>97%),
- the network and extra parameters are back to 0.004 and 0.5 day respectively,
- only two CEP WUs are waiting to run now that I purged the other DDDT WUs a while ago.

Funny scheduler... Jean.
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Maybe I'm confusing something: the "Connect every X days" setting and the scheduled connect, in conjunction with the Activity menu's "network activity according to preferences". The latter has always worked for me perfectly, in 5.10.x, 6.2 and 6.6. I run it all the time with brief connect periods, because an interrupted internet connection continues to be a source of bad work, heartbeat issues / zero status, and also because I want to see at the end of the day what is going up and in what status.

As for clients, I currently have 5 on one machine and can alternate at will on the same work, using cc_config.xml to make sure they know where to look for the data directory. I even have one which is not called BOINC; so far it is perfectly compatible.

WCG
Please help to make the Forums an enjoyable experience for All!
Ingleside
Veteran Cruncher | Norway | Joined: Nov 19, 2005 | Post Count: 974 | Status: Offline
> Total block did not occur at all with all cores running HP, and this lasted as long as I kept the same 5-day setting for the extra work buffer. It is only when I set it down to 0.5 day that those three devices stopped fetching work, which is quite normal then. And I started to play with the "connect every xx" parameter only after setting the extra work buffer lower, i.e. when those devices had already stopped fetching work because of the lower setting. Before then, two devices had the connect parameter set at 0.0004 and the third one at 0. In fact, it is because I knew that the clients would not fetch work for several days that I tried to force periodic reporting via the connect parameter, and it does not work this way.

This behaviour was changed in v5.10.14 and later. For v5.8.xx through v5.10.13, reporting is done at the latest "Connect..." days after the result files are uploaded. v5.10.14 introduced a new rule instead: report at most 24 hours after upload. Also, <report_results_immediately> was introduced in cc_config.xml for the users that really "need" this. I don't remember at what point the rule of reporting finished work "Connect..." days before the deadline was introduced...

> I don't remember how it worked with 5.10.45 (I rarely raise the cache to even one day anyway) and I have no intention of installing it back to check.

Apart from the system running the Win7 RC, I'm still forced to run v5.10.45. Not running any of the v6 versions before v6.6.20 is a very long gap for me...

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
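For reference, the <report_results_immediately> flag mentioned above lives in the <options> section of cc_config.xml in the BOINC data directory; a minimal example:

```xml
<cc_config>
  <options>
    <!-- Report each result as soon as its files are uploaded,
         instead of waiting up to 24 hours (v5.10.14+ behaviour). -->
    <report_results_immediately>1</report_results_immediately>
  </options>
</cc_config>
```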
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3716 | Status: Offline
OK, we are drifting off topic.

The purpose of my post was to highlight the fact that current versions of the client do not necessarily stop fetching new work when all tasks are running in high-priority mode. If there is no real reason to stop fetching work (like genuinely exposed deadlines, or obviously more queued work than the user has set), it just goes on as usual.

In fact the confusion arises because BOINC flags normally running (i.e. not "rush") tasks as high priority much too early, and it seems that the display modules are not using the same criteria (or the same internal flag) as the scheduler for deciding high priority.

By the way, the last DDDT task running in high-priority mode on my P4 HT was still flagged as such one hour ago, with only one hour before completion and a due date of July 25.

Jean.
Ingleside
Veteran Cruncher | Norway | Joined: Nov 19, 2005 | Post Count: 974 | Status: Offline
> OK, we are drifting off topic. The purpose of my post was to highlight the fact that current versions of the client do not necessarily stop fetching new work when all tasks are running in high-priority mode. If there is no real reason to stop fetching work (like genuinely exposed deadlines, or obviously more queued work than the user has set), it just goes on as usual. In fact the confusion arises because BOINC flags normally running (i.e. not "rush") tasks as high priority much too early, and it seems that the display modules are not using the same criteria (or the same internal flag) as the scheduler for deciding high priority.

The scheduling server doesn't know there is anything called "high priority" at all; it only checks whether a task can be returned before the deadline or not. The client, on the other hand, runs a simulation and decides whether it needs to run one or more tasks in "high priority" mode or not.

> By the way, the last DDDT task running in high-priority mode on my P4 HT was still flagged as such one hour ago, with only one hour before completion and a due date of July 25.

Hmm, what are your on_frac, active_frac, cpu_efficiency and duration_correction_factor? If none of them is screwed up, there shouldn't be any reason to run "high priority", provided you're not also running other BOINC projects...

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
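How those four factors could feed the client's deadline simulation can be sketched as follows (a hypothetical simplification; the real BOINC scheduler simulation is considerably more involved):

```python
def needs_high_priority(remaining_cpu_hours: float, hours_to_deadline: float,
                        on_frac: float, active_frac: float,
                        cpu_efficiency: float, dcf: float) -> bool:
    """Sketch: remaining CPU time is scaled up by the duration correction
    factor, and available wall time is scaled down by the measured
    availability fractions; if the corrected work no longer fits before
    the deadline, the task would be run in high-priority (EDF) mode."""
    corrected_work = remaining_cpu_hours * dcf
    usable_hours = hours_to_deadline * on_frac * active_frac * cpu_efficiency
    return corrected_work > usable_hours

# Jean's numbers: ~1 CPU-hour left, roughly five days to the deadline.
print(needs_high_priority(1.0, 120.0,
                          on_frac=0.997695, active_frac=0.999859,
                          cpu_efficiency=0.988450, dcf=1.733994))  # False
```

With these values the task is nowhere near the threshold, which supports Jean's point that the "high priority" flag he sees looks like a display/scheduler mismatch rather than a genuine deadline emergency.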
JmBoullier
Former Community Advisor | Normandy - France | Joined: Jan 26, 2007 | Post Count: 3716 | Status: Offline
>> OK, we are drifting off topic. The purpose of my post was to highlight the fact that current versions of the client do not necessarily stop fetching new work when all tasks are running in high-priority mode. If there is no real reason to stop fetching work (like genuinely exposed deadlines, or obviously more queued work than the user has set), it just goes on as usual. In fact the confusion arises because BOINC flags normally running (i.e. not "rush") tasks as high priority much too early, and it seems that the display modules are not using the same criteria (or the same internal flag) as the scheduler for deciding high priority.

> The scheduling server doesn't know there is anything called "high priority" at all; it only checks whether a task can be returned before the deadline or not. The client, on the other hand, runs a simulation and decides whether it needs to run one or more tasks in "high priority" mode or not.

I hoped you would understand that I was talking about the scheduling routines of the client! The server obviously has nothing to do with this discussion.

>> By the way, the last DDDT task running in high-priority mode on my P4 HT was still flagged as such one hour ago, with only one hour before completion and a due date of July 25.

> Hmm, what are your on_frac, active_frac, cpu_efficiency and duration_correction_factor? If none of them is screwed up, there shouldn't be any reason to run "high priority", provided you're not also running other BOINC projects...

Nice try! Since you asked:

<on_frac>0.997695</on_frac>
<active_frac>0.999859</active_frac>
<cpu_efficiency>0.988450</cpu_efficiency>
<duration_correction_factor>1.733994</duration_correction_factor>

I am a WCG-only person, so no other external project is involved, and all my devices usually run a single WCG project at a time, precisely to avoid distressing the client's scheduler too much. The DCF is always around this value since the machine is a pseudo two-core (a P4 HT in fact), also when the client is not showing funny emergencies.

As I said and as you acknowledge, "there shouldn't be any reason to run high priority" in such situations, although I have seen it too often with several different versions.

Jean.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> ... I think you will also want to "suspend network activity" so it won't even try to connect, avoiding roaming charges, until you have returned. Steve

Is there somewhere else to implement or invoke that option besides choosing Activity -> Network Activity Suspended from the File/View/Tools menu in the Advanced View? I don't find access to that option anywhere in the Simple View. Thanks.