World Community Grid Forums
Thread Status: Active | Total posts in this thread: 7
Rickjb
Veteran Cruncher | Australia | Joined: Sep 17, 2006 | Post Count: 666 | Status: Offline
I can't find any info on this feature in the BOINC documentation, but may have a use for it.
I think one would use that to start a second Manager to manage a second instance of the client (boinc.exe on Windows) that runs with its files in a second BOINC data directory. Confirm?

But how would one install such a second instance of BOINC? AFAIK, if one used the standard installer to try to install a second setup, it would delete the first. Could both instances of BOINC crunch WCG work simultaneously?

Let me explain why I'd want such a setup. The CPU on the machine with my GPU does not like core voltages as high as 1.24V, which I used to run and which caused it to degrade. Over 1-2 years it has proven to be happy at 100% load with 1.216V. But it sits in an old motherboard with poor CPU voltage regulation, which needs a BIOS setting of just under 1.30V to deliver 1.216V under load. When all 4 cores are used to feed the GPU, the CPU load fluctuates and that lets the CPU voltage rise too high at times. To counteract this I have reduced the speed and voltage settings, and I also assign 1 core to a CPU task and only 3 cores to feeding the GPU. The constant load of the CPU task stabilises the voltage. It would be a bonus if the CPU task continued in the event of a GPU crash.

But we tend to have a Le Tour problem: the GPU tasks form un peloton, and the ones at lower %progress slipstream the leaders, who can't get away from the pack because they need more CPU (Cofactor de Performance Undetectable) to break away & sprint over the finish line. It slows down the whole pack. Running under one BOINC client, the core on the CPU task is quarantined from helping out with the GPU tasks when they need a little help.

My plan is to assign all 4 cores to the GPU in client #1 and run 1 core on a CPU task in client #2. They will have to time-share, but my priority here is the GPU work, and BOINC runs GPU tasks at a higher priority ("below normal") than CPU tasks ("low"). If things work out, the GPU tasks will grab all 4 cores when they need them, and the CPU task will mostly run on the spare CPU time the GPU client doesn't need.

Is such a setup achievable, and do you think it would work?

A 2nd BOINC client might also be a way of scheduling GPU tasks to run at certain times and only CPU tasks at other times. I thought I'd want such a setup until I found that the energy used by my 1GHz HD7870 GPU crunching HCC1 is quite acceptable, at only about 60W.

[Edit 2 times, last edit by Rickjb at Jan 24, 2013 10:43:30 AM]
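A minimal sketch of how that split might be expressed, assuming the second client gets its own data directory: a cc_config.xml for client #2 that hides the GPU and advertises a single CPU, so only client #1 ever feeds the HD7870. The tag names are standard cc_config options, but the 4-core/1-core split is just the plan described above, not a tested recipe.

```xml
<!-- cc_config.xml in the SECOND client's data directory (sketch only).
     no_gpus keeps this instance away from the HD7870 entirely;
     ncpus limits it to the single CPU task described above. -->
<cc_config>
  <options>
    <no_gpus>1</no_gpus>
    <ncpus>1</ncpus>
  </options>
</cc_config>
```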
Crystal Pellet
Veteran Cruncher | Joined: May 21, 2008 | Post Count: 1323 | Status: Offline
Quoting Rickjb: "Is such a setup achievable, and do you think it would work?"

That's quite possible. AH is an i7 with 8 threads running 6 BOINC clients; FS is a dual core with 2 BOINC clients; and PC is a quad running 3 BOINC clients.

My purpose was to run more than the 1 default task of Test4Theory, and I did it without extra BOINC installs. I didn't use an extra BOINC Manager, because I'm using BoincTasks ;)

You'll find more information in the Test4Theory message boards: How to crunch multiple T4T tasks simultaneously
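As a rough illustration of the "no extra installs" approach (the paths and port numbers are assumptions, not Crystal Pellet's actual setup): the installed program files are shared, each client instance gets its own data directory, and each listens on its own GUI RPC port so that BoincTasks or a second Manager can address them individually.

```
C:\Program Files\BOINC\   <- one installed copy of boinc.exe / boincmgr.exe, shared
C:\ProgramData\BOINC\     <- data directory of client #1, GUI RPC port 31416 (default)
C:\BOINC_data2\           <- data directory of client #2, GUI RPC port 31417
C:\BOINC_data3\           <- data directory of client #3, GUI RPC port 31418
```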
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
How to run a second boinc.exe and/or boincmgr.exe actually is in the documentation, but it requires planning and methodology. Each of the two can be passed a host of parameters at launch, though for boincmgr I've not found an actual wiki ... force a launch error and a window pops up listing the parameters that can be passed, such as --namehost:xxx.xxx.xxx.xxx [name or IP of host] and another to pass the port: when you run multiple BOINC instances you need to tell each one to use a different port to communicate, default 31416, with many using 31417 for an additional instance.
Here's a post with all the boinc parameters: http://boinc.berkeley.edu/dev/forum_thread.php?id=7807&postid=45415 . The opening statement in that post is erroneous: boinccmd can be used to attach to projects... he did not read the documentation he posted.

edit: Running multiple hosts (client instances) won't earn you double time [as some will speculate, mouths watering]; the cycles just get divided over however many tasks run. Try running 3 on a hyper-threaded duo [2+2] and you'll find they go faster than when running 4, as hyper-threading is smart enough to use any unused cycles of an idle thread. In effect, when an HCC_GPU task on a HT machine is in its GPU phase, the actual spare cycles go toward the other CPU tasks. [Doubting Thomases in the room may prove me wrong... my octo + W7-64 does this.]

[Edit 1 times, last edit by Former Member at Jan 23, 2013 5:59:56 PM]
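To make those parameters concrete, here is an untested sketch of starting a second client on its own port and then talking to it, either from a second Manager or from boinccmd. The paths, port 31417, and the placeholder password/account key are assumptions, and the Manager switch names are the ones listed by recent clients' launch-error window, so verify them against your version.

```
rem Start client #2 against its own data directory and GUI RPC port
"C:\Program Files\BOINC\boinc.exe" --allow_multiple_clients --dir C:\BOINC_data2 --gui_rpc_port 31417

rem Point a second Manager at that instance (host name/IP and port, as described above)
"C:\Program Files\BOINC\boincmgr.exe" --multiple --namehost localhost --gui_rpc_port 31417

rem Or attach the new instance to WCG from the command line; the RPC password
rem is whatever sits in that instance's gui_rpc_auth.cfg
"C:\Program Files\BOINC\boinccmd.exe" --host localhost:31417 --passwd <rpc_password> --project_attach http://www.worldcommunitygrid.org/ <account_key>
```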
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
P.S. I've never tried it on Linux, but my duo laptop has 4 client [test] versions on it, each pointed to a different data directory. It's a question of installing, copying away, and tweaking the startup parameters and cc_config.xml, including the <data_dir> line. *Don't* install as a service: the Windows registry will throw a big spanner in the works if you're not careful.
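Since the extra instances are deliberately not installed as services, one way to have them come up at login (my assumption, not necessarily this poster's method) is a small batch file per instance in the Startup folder:

```
@echo off
rem start_boinc2.cmd - launch BOINC client #2 at login instead of as a service.
rem Path, data directory and port are illustrative; adjust to your own layout.
start "" "C:\Program Files\BOINC\boinc.exe" --allow_multiple_clients --dir C:\BOINC_data2 --gui_rpc_port 31417 --detach_console
```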
|
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Quoting Rickjb: "Let me explain why I'd want such a setup. The CPU on the machine with my GPU does not like core voltages as high as 1.24V ... The constant load of the CPU task stabilises the voltage. It would be a bonus if the CPU task continued in the event of a GPU crash."

I imagine it is difficult for anyone to mimic in software, or to compensate for the lack of, the good voltage regulation normally done in hardware for the hardware (the job of the load-line calibration feature, controlled via the load-line calibration parameter in a machine's BIOS), given the constraints imposed by concerns about hardware longevity vis-à-vis heat management.

Anyway, I tried the <ncpus>X</ncpus> tag in my cc_config.xml (with X = 8) for my AMD 1090T CPU (which has 6 cores). That did a better job of spreading the CPU cycles across all WU types crunched by my CPU, be they CPU WUs or GPU WUs, and hopefully that spreading would help stabilise your CPU's operating voltage too. As a bonus, I've seen crunch times for my GPU WUs improve compared to the case where I use X = 6. Another bonus is that I don't have to change the <cpu_usage>N</cpu_usage> in my app_config, where I leave N = 1.

[Edit 1 times, last edit by Former Member at Jan 26, 2013 10:21:18 AM]
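For reference, a sketch of the two files being described, assuming the World Community Grid project directory and an HCC GPU application name of "hcc1" (check client_state.xml for the real name before copying anything):

```xml
<!-- cc_config.xml: report 8 CPUs to the client even though the 1090T has 6 cores -->
<cc_config>
  <options>
    <ncpus>8</ncpus>
  </options>
</cc_config>

<!-- app_config.xml in the World Community Grid project directory.
     cpu_usage is left at 1 full core per GPU task, as in the post above;
     gpu_usage 0.5 (two tasks per GPU) is only an example value. -->
<app_config>
  <app>
    <name>hcc1</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```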
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Quoting Rickjb: "But we tend to have a Le Tour problem: the GPU tasks form un peloton and the ones at lower %progress slipstream the leaders ... It slows down the whole pack."

I have the same observation: GPU WUs being crunched tend to huddle into a pack, and once huddled like that, left to themselves, it is difficult for the huddle to break. While it remains in that huddle, the pack hits and leaves the processors nearly all at once. My understanding of the underlying processes led me to propose that the HCC GPU WUs be dovetailed, as a way to introduce a pattern into the HCC GPU workload. It's easier to maintain good voltage stability on any machine when workloads are shaped into a pattern (dovetailing is one example) so that GPU WUs do not, or can be made not to, hit the processors all at the same time.
Rickjb
Veteran Cruncher | Australia | Joined: Sep 17, 2006 | Post Count: 666 | Status: Offline
Thanks, all.
I'm currently trying <ncpus>5</ncpus> (non-HT Q9650 C2Q) as it's simplest. [Edit]: Have now deleted that setting. See below. [/Edit] I'd forgotten that option was there. Initial impression is that Task Manager shows the CPU usage of the CPU task not backing off enough when overall CPU usage hits 100%. I think GPU task execution times are somewhere between the values from allocating 4 cores for GPU vs 3 for GPU + 1 for CPU tasks. Running 9 x GPU tasks total. Will try a 2nd BOINC client later.

[Edit]: I've been thinking about how BOINC runs mixtures of GPU tasks and CPU tasks, and what could be different between the various ways of configuring 1 or 2 instances of BOINC to run them. The clue is that if you look at the OS process monitor (Task Manager or ps), all configurations give the same result if the same mix of tasks is running! Once they are in the OS task list, the tasks are under the short-term control of the OS, and BOINC does not influence the way they run. The only thing that could differ between BOINC setups is the priority levels of GPU vs CPU tasks, and I think the same priorities will be assigned under all configurations. So ... sorry I asked about this.

My conclusion: the loss of GPU throughput I get if I run 1 CPU task is of the order of 10-15%. Slowing the CPU speed a little and assigning all CPU cores to feeding the GPU is a more productive strategy. Slowing the CPU by only 1 more notch (0.025%, to 3.69GHz) let me reduce the core volts by another notch, and the voltage should now rise only to a value that is still safe when the CPU is idle. No discernible effect on throughput noticed so far. [/Edit]

[Edit 3 times, last edit by Rickjb at Jan 25, 2013 7:36:19 AM]