Ingleside
Veteran Cruncher
Norway
Joined: Nov 19, 2005
Post Count: 974
Re: Pre-set device profiles

Maybe that's why Rosetta@home seems to have gone to some effort to allow participants to select expected workunit lengths, so that at least the machines with fewer problems do not have to connect to the server as often.

The Rosetta implementation was largely done so users with monthly download caps could still run Rosetta, since downloading only one workunit per day per core instead of eight greatly reduces bandwidth usage. At the opposite end of the spectrum, some users wanted 1-hour workunits, and due to the nature of Rosetta@home both extremes are possible.
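(As a rough illustration, with made-up numbers: if each Rosetta download were on the order of 5 MB, eight 3-hour tasks per core per day would come to roughly 40 MB per core per day, while a single 24-hour task would be about 5 MB per core per day. The actual sizes vary, but it's the ratio that matters for a capped connection.)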

Now, since users control the run time of tasks, they also indirectly control how often the client asks for work, since the cache normally needs to be refilled every task or every few tasks (depending on how many cores the machine has). With 1-hour tasks the cache needs refilling much more frequently than with 24-hour tasks. But the size of tasks has nothing to do with either the "Connect..." or the "Additional..." setting, so in most projects there is no way to control how frequently the client needs to ask for more work.
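For anyone wondering where those two settings live on disk: in the local override file (global_prefs_override.xml in the BOINC data directory) they should be the two work-buffer tags, roughly as sketched below. The tag names are from memory, so double-check them against the BOINC preferences documentation before relying on them.

    <global_preferences>
        <!-- "Connect about every X days": minimum work buffer, in days -->
        <work_buf_min_days>0.1</work_buf_min_days>
        <!-- "Additional work buffer": extra days of work on top of the minimum -->
        <work_buf_additional_days_max>0.25</work_buf_additional_days_max>
    </global_preferences>

Neither tag changes how big an individual task is, which is the point above: they only set how much work gets buffered, not how long each task runs.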

According to what I've seen over at GPUGRID, that's not completely true for versions of BOINC more recent than about 6.6.20. These versions assume that use of the GPU is disabled unless some effort has been made to enable it. So preserving a setting to enable it, even when changing settings on a site that does not plan to offer workunits that use the GPU any time soon, should be sufficient.

The client default, and the global-preference default, is to only use the GPU when the computer is idle. Also, the project-specific preference of AFAIK all projects with GPU applications is to not use the GPU. But the client itself is still GPU-enabled, unless the user has either edited cc_config.xml or installed the client as a service on Vista or Win7.
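For reference, the only client-side switch I know of for this in that generation of clients is the cc_config.xml option that tells the client to ignore GPUs entirely; a minimal file would look something like the sketch below (option name from memory, so verify it against the BOINC client configuration docs).

    <cc_config>
        <options>
            <!-- 1 = ignore all GPUs; set 0 (or remove the file) to let the client use them again -->
            <no_gpus>1</no_gpus>
        </options>
    </cc_config>

The file goes in the BOINC data directory, and the client has to be restarted or told to re-read the config file before the change takes effect.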

It does not quite work that way for requests for GPU workunits sent to WCG, at least from BOINC 6.6.36 with GPU use enabled by the local preferences file. Such requests stay about as frequent as requests for CPU workunits, and often become frequent again if the cache size is decreased in the local preferences file.

Now, I don't have an Nvidia card but I do have an ATI, so I can't test how it works in v6.6.xx, only in v6.10.x. But at least in my experience, for ATI requests the DI starts at 1 minute and is doubled for each failure to get work, up to a maximum DI of 24 hours. The actual deferral is more or less exponential like all deferrals, and falls somewhere between zero and the current DI. Also, if the project doesn't have any CPU work either, the ATI DI and the CPU DI increase separately, and the work requests for the two are separate...
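In other words, the pattern is an ordinary exponential backoff with a randomized wait, kept per resource. A small sketch of the idea (my own illustration in Python, not actual BOINC code):

import random

START_DI = 60               # deferral interval starts at 1 minute (seconds)
MAX_DI = 24 * 60 * 60       # and is capped at 24 hours

def after_failed_request(current_di):
    """Double the DI on a failed work request; the real wait is random, up to the new DI."""
    new_di = min(current_di * 2, MAX_DI)
    actual_wait = random.uniform(0, new_di)
    return new_di, actual_wait

# separate backoff state for each resource, as described above
di = {"cpu": START_DI, "ati": START_DI}
di["ati"], wait = after_failed_request(di["ati"])   # an ATI work request came back empty
di["ati"] = START_DI                                # a success (or hitting "update") resets it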

So, isn't this also the behaviour in v6.6.xx? You'll see the DI (deferral interval) on the Projects tab, together with any currently active deferrals, if you select a project and hit "preferences".

BTW, please note that all DIs for a project are zeroed if the user hits "update"...

Also, the GPUGRID project seems to need some information on what to do at their end to reduce requests for CPU-only workunits, which they currently aren't offering for all the operating systems they offer GPU workunits for.

The option was added a week ago, so probably no project has yet upgraded their scheduling server and started to use it. Also, very few users are currently running the v6.10.6 alpha build, so for now it's not really an option. By the time v6.10.xx is ready for release, on the other hand, it will be time for GPUGRID to upgrade...
----------------------------------------


"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
----------------------------------------
[Edit 1 times, last edit by Ingleside at Sep 22, 2009 11:49:11 PM]
[Sep 22, 2009 11:47:22 PM]
robertmiles
Senior Cruncher
US
Joined: Apr 16, 2008
Post Count: 445
Re: Pre-set device profiles

According to what I've seen over at GPUGRID, that's not completely true for versions of BOINC more recent than about 6.6.20. These versions assume that use of the GPU is disabled unless some effort has been made to enable it. So preserving a setting to enable it, even when changing settings on a site that does not plan to offer workunits that use the GPU any time soon, should be sufficient.

The client default, and the global-preference default, is to only use the GPU when the computer is idle. Also, the project-specific preference of AFAIK all projects with GPU applications is to not use the GPU. But the client itself is still GPU-enabled, unless the user has either edited cc_config.xml or installed the client as a service on Vista or Win7.

In my experience, 6.6.36 and at least one version between 6.6.20 and 6.6.36 come without any cc_config.xml file, and in that case assume that use of the GPU by BOINC should be completely disabled unless the user creates a cc_config.xml file with the proper parameter or enables it in a local preferences file. GPUGRID doesn't seem to have found much reason to mention whether this changed again in some version after 6.6.36.

I found instructions for creating a cc_config.xml file there today, but so far without the line needed to enable the GPU for 6.6.36.

It does not quite work that way for requests for GPU workunits sent to WCG, at least from BOINC 6.6.36 with GPU use enabled by the local preferences file. Such requests stay about as frequent as requests for CPU workunits, and often become frequent again if the cache size is decreased in the local preferences file.

Now, I don't have an Nvidia card but I do have an ATI, so I can't test how it works in v6.6.xx, only in v6.10.x. But at least in my experience, for ATI requests the DI starts at 1 minute and is doubled for each failure to get work, up to a maximum DI of 24 hours. The actual deferral is more or less exponential like all deferrals, and falls somewhere between zero and the current DI. Also, if the project doesn't have any CPU work either, the ATI DI and the CPU DI increase separately, and the work requests for the two are separate...

So, isn't this also the behaviour in v6.6.xx? You'll see the DI (deferral interval) on the Projects tab, together with any currently active deferrals, if you select a project and hit "preferences".

BTW, please note that all DIs for a project are zeroed if the user hits "update"...


At least for 6.6.36, the behavior with GPU use enabled is to increase the GPU DI about as fast as the CPU DI and to reset them at the same time, even for projects that offer only one of these types of workunits, but sometimes to alternate the requests between GPU and CPU at the same time, GPU only, and CPU only. I've seen no sign that it makes ATI requests at all.

Also, the GPUGRID project seems to need some information on what to do at their end to reduce requests for CPU-only workunits, which they currently aren't offering for all the operating systems they offer GPU workunits for.

The option was added a week ago, so probably no project has yet upgraded their scheduling server and started to use it. Also, very few users are currently running the v6.10.6 alpha build, so for now it's not really an option. By the time v6.10.xx is ready for release, on the other hand, it will be time for GPUGRID to upgrade...


The GPUGRID project does ask its participants to upgrade their BOINC versions much more often than most other BOINC projects; apparently, they often feel a need for the latest GPU-related features.

From what I've seen, they now recommend 6.10.3 for at least one operating system, and have already started working on making use of this new capability for BOINC versions that can handle it, now that I've called it to their attention. No clear sign that they've finished, though.

I've already asked them if 6.6.36 is still the version they recommend for Vista.

A few users there have already mentioned the results of trying versions 6.10.6, 6.10.7, and even 6.10.9. They don't recommend 6.10.9 for multiple GPUs on the same machine.
----------------------------------------
[Edit 2 times, last edit by robertmiles at Sep 27, 2009 3:32:45 AM]
[Sep 27, 2009 3:04:54 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Re: Pre-set device profiles

...
1. I probably should have written profile-specific project settings instead. I haven't found any information on how to create profiles other than the official 4.
...


Here is a thread that tells how. Though currently it's somewhat broken if you also want to beta test (the boxes won't stay checked after the custom profiles under MyGrid/Beta Testing).

https://secure.worldcommunitygrid.org/ms/devi...ileConfiguration.do?name=

Essentially, you right-click and Copy Shortcut, then Paste that into your address bar, type in the preferred profile name after the equals sign, and hit Enter.
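For example, with a made-up profile name: after pasting, the address in the bar would end in ...ileConfiguration.do?name=laptops before you hit Enter, with "laptops" replaced by whatever name you want the new profile to have.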

Be sure to click Save at the bottom when you're done. Then go to MyGrid/Device Manager/Device Profiles to see your new profile listed there.
I think underscore is not a legal character in the profile name, and there might be an 8-character limit... I don't know either of those to be facts; just anomalies I seem to remember from creating some.
See the other thread linked above for how to actually implement them, because the website does not show them in the picklist on the Device Configuration page.
[Oct 5, 2009 8:27:25 PM]