World Community Grid Forums
Thread Status: Active | Total posts in this thread: 23
mikey
Veteran Cruncher | Joined: May 10, 2009 | Post Count: 824 | Status: Offline
Quoting an earlier post:
"Can you enlighten a computer illiterate on what the benefits of GPU processing are and how to do it? Do I just go to preferences and click 'Use GPU while computer is in use'? Sorry. :)"

No, unfortunately there is more to it than that!

First, there is the brand of GPU you get. As SekeRob said, most new ones are OpenCL compatible, but not all are, and older ones can be hit and miss. If you can't find the information, go to the Nvidia or AMD website and check each card's specs before you buy.

Second, heat is a killer of PCs, and GPUs put out a TON of extra heat that basic box PCs just can't handle very well. When Dell/HP/whoever makes a PC, they do not expect the GPU to be used for crunching; they build a box for the average user, and it works fine for the most part. But when you start crunching you are going into the realm of 'geeks', and that is not what they think of as 'normal' users; we are more in the 'gamer' area of their thinking. Adding better or more fans might fix the heat, a better case WILL, and so could simply leaving the side panel off. But a PC with the side off is ugly, so if yours is in the living room that may not be the best idea.

Third, GPUs use power, lots of power. Most basic box PCs are built to make money for the vendor, which often means bare-bones parts that are good for what they do but not always good for 'geeks'. So upgrading the power supply is probably another expense you will have, though that really is normally just plug and play.

Fourth, all those extra workunits will require more onboard memory (RAM), which could mean a RAM upgrade for the machine, meaning even more expense.

Fifth, your electricity costs WILL go up if you use a GPU to crunch with!

The decision to go to GPU crunching should NOT be taken lightly, but once you go you will NEVER go back to plain old CPU crunching! It is kind of like dial-up: once you try cable you will never go back to dial-up again. I have a 6-core AMD machine crunching for WCG; it is doing a little over 3,000 RAC using all 6 CPU cores on SN2S. I also have 2 GPUs in the same machine, and together they are getting OVER 320,000 RAC on their project, and those GPUs are NOT the newest by any means!

[Edit 3 times, last edit by mikey159b at Mar 7, 2012 2:42:14 PM]
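For reference, the "Use GPU while computer is in use" checkbox the quoted post asks about corresponds to a tag in BOINC's local preferences override file. A minimal sketch, assuming a BOINC 6.10+ client where GPU activity is controlled separately from CPU activity (tag names are standard BOINC global preferences; the file lives in the BOINC data directory):

```xml
<!-- global_prefs_override.xml: local override of the web preferences -->
<global_preferences>
    <!-- 1 = keep CPU tasks running while you use the computer -->
    <run_if_user_active>1</run_if_user_active>
    <!-- 1 = keep the GPU crunching too while you use the computer -->
    <run_gpu_if_user_active>1</run_gpu_if_user_active>
</global_preferences>
```

After saving the file, have the client pick it up (BOINC Manager's advanced view has a "Read local prefs file" menu entry) or restart the client.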
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Bearcat, what I was trying to say is: we don't know [piles of assuming continue]. The prologue and epilogue could be using a full core. It's for a tech to fill us in on what the relative CPU load is and whether it allows a regular job to share that core [if that is possible without manually adjusting cc_config.xml and the other xml file that keeps popping up now and then]. Or we could just wait till the beta comes, play with the settings per the techs' instructions, and then sell the bearskin. [A few brown bears roam here in the National Park of Abruzzo and are extremely shy of people... very hard to "photo"-shoot.]
--//-- |
BSD
Senior Cruncher | Joined: Apr 27, 2011 | Post Count: 224 | Status: Offline
I haven't seen a comment about it yet, but I presume the time calculation for badges will be the same for GPU as it is for CPU: 24 hours of GPU crunching = 24 hours of CPU crunching. Not that I'm concerned about badges and points, just curious.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Yes, the CPU does the time counting, in the form of "elapsed" time; you are just packing lots more work and points into the same number of ticks on the clock. If you manage to get multiple GPU jobs running concurrently on the same card, or across multiple cards, the time accounting will presumably be different. We'll see when the beta arrives.
--//-- |
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline
FWIW, I have done some experimenting to get ready for the WCG GPU app by running a few other GPU projects. One thing you can do to save power and reduce heat is to downclock the memory on your graphics card. The default memory clock on the ATI 5830 I'm running is 1000 MHz; my final setting was 300 MHz, with no discernible loss of efficiency. It lowered power consumption by 25 watts and temperatures by 10°C when running POEM. It will be interesting to see the comparisons when WCG/GPU goes live.
----------------------------------------
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.
Richard Mitnick
Veteran Cruncher | USA | Joined: Feb 28, 2007 | Post Count: 583 | Status: Offline
I am just about 20 hours into GPU crunching on my handy-dandy new machine, running twin liquid-cooled Nvidia GTX 580s. Boy, this is the warmest room in the house.

Anyway, no hitches. I played audio from my library and also ran some video. Video is tough, so it makes a good test, especially a rich .mkv file. I am on MilkyWay, GPUGrid, and Einstein for GPU, with a line in the cc_config.xml file so that both units are used. This is a hyper-threaded six-core unit, so I am also on WCG (all projects), Rosetta, and a bunch of others. I just checked tasks, and both units are in use.

One cannot go into this lightly; one must plan, especially the budget. The GPU page in the BOINC Wiki is extremely helpful. I must say, I came to these forums with questions, and even though WCG is not into GPU just yet, I got great help.
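The cc_config.xml line Richard mentions is most likely BOINC's use_all_gpus option: by default the client schedules work only on the most capable GPU it finds, so a second card sits idle until this is enabled. A minimal sketch, assuming a standard BOINC 6.10+ client (the file goes in the BOINC data directory):

```xml
<!-- cc_config.xml: make the client schedule work on every GPU,
     not just the most capable one (the default behavior) -->
<cc_config>
    <options>
        <use_all_gpus>1</use_all_gpus>
    </options>
</cc_config>
```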
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
What WCG projects use GPU at this time?
KWSN - A Shrubbery
Master Cruncher | Joined: Jan 8, 2006 | Post Count: 1585 | Status: Offline
Quoting the previous post:
"What WCG projects use GPU at this time?"

None right now. HCC beta soon.
----------------------------------------
Distributed computing volunteer since September 27, 2000
Tlabs
Cruncher | China | Joined: Aug 24, 2007 | Post Count: 12 | Status: Offline
1
----------------------------------------
www.equn.com
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I still haven't understood why HCC can't let other tasks use the CPU when it doesn't need it itself.

At the moment HCC works like this: each WU reserves both a full GPU and a full CPU. E.g., on a dual core with one GPU, it is only possible to run 1 HCC task + 1 CPU-only task simultaneously.

Compare, for instance, Einstein@home: each WU reserves 0.5 CPUs + 1 GPU. On the same dual core it is possible to run 1 GPU task + 2 CPU-only tasks. The CPU-only tasks run at the lowest priority, and the combined GPU+CPU task runs at the second-lowest priority. So if HCC reserved only 0.5 CPUs, it would still run on the CPU at the beginning and end of each WU (at the highest possible speed, thanks to its higher priority), and whenever HCC needs only a few CPU cycles, a CPU-only task could kick in and use the otherwise-wasted cycles.
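For what it's worth, newer BOINC clients (7.0.40 and later, so after this thread was written) let users impose exactly this Einstein-style fractional reservation themselves, via an app_config.xml in the project directory. A minimal sketch; the app name "hcc1" here is a guess and must match the app name shown in client_state.xml:

```xml
<!-- app_config.xml in the World Community Grid project directory:
     budget half a CPU core per GPU task instead of a whole one.
     NOTE: "hcc1" is an assumed app name; verify against client_state.xml. -->
<app_config>
    <app>
        <name>hcc1</name>
        <gpu_versions>
            <gpu_usage>1</gpu_usage>
            <cpu_usage>0.5</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```

Note this only changes the scheduler's bookkeeping (how many CPU-only tasks run alongside the GPU task); the science app still takes whatever CPU it actually needs during its prologue and epilogue.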