World Community Grid Forums

Thread Status: Active | Total posts in this thread: 31
mikey
Veteran Cruncher | Joined: May 10, 2009 | Post Count: 826 | Status: Offline
> .....of course the other question is "can you have 2 types of card in at the same time, with the monitor plugged into just one of them?" Being a novice at GPU I have no idea!

As Hardnews said, YES YOU CAN, but it can be difficult to get it all working optimally! AMD and NVidia each have their own driver tweaks, and sometimes they just conflict. The BEST way to have two GPUs in one machine is for both to be identical, and even then in Windows you will need to load the drivers twice, once for each card. Yes, lots of people have mismatched GPUs and they work, but how well they work depends on the project.

The other thing is 'crossfire': in most cases crossfire is not as helpful for crunching as it is for gaming. In some projects, Moo for example, BOTH GPUs work on the same workunit at the same time, so if the cards are mismatched it can take longer, as one GPU sits idle waiting for the other card to do its thing. Projects like Collatz use each GPU separately, so if you have 4 GPUs you will crunch 4 units at once. WCG has given no indication, that I have seen, as to how theirs will work.

The BIGGEST thing about GPUs is the power they use, usually requiring the purchase of a bigger power supply to keep everything running! Fortunately power supplies are fairly generic, so installing a new one that is big enough is as simple as removing 4 screws and then making sure all the plugs go back in the right places. A few wire ties will help keep the air flow free and clear too. If you are unsure of yourself, ask a friend, or a local mom-and-pop computer store can easily do it for you.

[Edit 1 times, last edit by mikey159b at Dec 24, 2011 1:33:55 PM]
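Worth noting for anyone trying a multi-GPU box: by default the BOINC client only uses the most capable GPU it detects. A minimal cc_config.xml sketch (the file goes in the BOINC data directory) to make it use all of them, mismatched or not:

```xml
<!-- cc_config.xml: place in the BOINC data directory, then restart
     the client or pick "Read config file" in the BOINC Manager -->
<cc_config>
  <options>
    <!-- 0 (default): BOINC uses only the most capable GPU;
         1: use every detected GPU, even mismatched ones -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```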
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
....after due consideration I think I will stick to just one GPU.....which one? I will decide later, when I can monitor what happens on WCG. I am not really concerned about other projects; Milkyway is just one that I can test a couple of different cards on.

When we get GPU here I will test the same cards and see the results, listen to others who are also doing it here, then go for a new graphics card..... Thanks for all the help & advice, guys.....now awaiting progress here.

[Edit 1 times, last edit by Former Member at Dec 24, 2011 2:18:44 PM]
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
GPU crunching is a learning curve. Seek detailed advice on purchases. Often a single 'powerful' GPU will do as much as two 'mid-range' GPUs. Using AMD and NVidia GPUs together is not presently feasible for GPU-crunching newbies, but it probably will be when BOINC 7.x is released.

As for what GPU to get: if you only want to crunch here, wait and see. While you would miss all those Betas, you won't miss the system crashes ;P and who knows when they will Beta test for Mac. NVidia cards tend to work for all projects, but some projects such as MW use 'double precision', and AMD cards are generally better at dp than NVidia. The HCC GPU project will use OpenCL. This does not need dp, so that consideration is irrelevant here. Ditto for POEM.
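To illustrate why double precision matters for a project like MilkyWay, here is a small Python sketch (purely illustrative, nothing to do with any project's actual code) that emulates single precision by round-tripping values through a 32-bit float, then compares the accumulated error against ordinary double precision:

```python
import struct

def fp32(x: float) -> float:
    """Round-trip x through a 32-bit float to emulate single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Sum 0.1 a hundred thousand times; the exact answer is 10000.
acc_single = 0.0
acc_double = 0.0
for _ in range(100_000):
    acc_single = fp32(acc_single + fp32(0.1))  # rounds after every step
    acc_double += 0.1

err_single = abs(acc_single - 10000.0)
err_double = abs(acc_double - 10000.0)
print(f"single-precision error: {err_single}")
print(f"double-precision error: {err_double}")
# The single-precision error is orders of magnitude larger, which is
# why dp-heavy workloads punish hardware with weak dp throughput.
```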
mikey
Veteran Cruncher | Joined: May 10, 2009 | Post Count: 826 | Status: Offline
> GPU crunching is a learning curve. NVidia cards tend to work for all projects, but some projects such as MW use 'double precision', and AMD cards are generally better at dp than NVidia. The HCC GPU project will use OpenCL. This does not need dp, so that consideration is irrelevant here. Ditto for POEM.

Learning something NEW every day, that is what GPU crunching is all about! I just learned something, THANK YOU!!! I knew about MW and DP but not the rest!!!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Well, after about one week of experimenting, have a look at my signature and compare the recent RACs:

- WCG has the only CPU RAC
- Milkyway has the ATI RAC
- Collatz has the NVIDIA RAC

I played around with the GPUs, putting the top one on each of the two projects. There is a marked difference between the two. ATI on Milkyway takes about 5 minutes per WU, NVIDIA about 35 minutes; credits per WU are identical. NVIDIA on Collatz is about 23 minutes elapsed, ATI about the same, but CPU time for NVIDIA is about 84 seconds while CPU time for ATI is about 20 minutes. Points are the same.

When WCG has GPU I will perform the same test to see if one is better than the other, but it seems obvious that the other GPU projects 'tune' to a specific one......will WCG do the same?
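Put in throughput terms, and taking the rough per-WU times quoted above at face value (they are the poster's approximate observations, not benchmarks), the MilkyWay gap is large. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope throughput from the rough times quoted above.
MINUTES_PER_HOUR = 60

def wus_per_hour(minutes_per_wu: float) -> float:
    return MINUTES_PER_HOUR / minutes_per_wu

ati_mw = wus_per_hour(5)      # ATI on Milkyway: ~5 min/WU  -> 12 WU/h
nvidia_mw = wus_per_hour(35)  # NVIDIA on Milkyway: ~35 min/WU -> ~1.7 WU/h

# With identical credit per WU, relative credit rate equals relative
# throughput: the ATI card does about 7x the Milkyway work.
print(f"ATI does {ati_mw / nvidia_mw:.0f}x the Milkyway work of the NVIDIA")
```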
mikey
Veteran Cruncher | Joined: May 10, 2009 | Post Count: 826 | Status: Offline
> When WCG has GPU I will perform the same test to see if one is better than the other, but it seems obvious that the other GPU projects 'tune' to a specific one......will WCG do the same?

I was on PrimeGrid this morning and saw this:

"For the most part, all of the software that we use is highly optimized and hand-tuned for the particular hardware it's designed to run on. The best example of this is George Woltman's gwnum libraries, which is inside LLR and a lot of other software. It's hand-written assembly language code designed specifically for the various CPU improvements that have been created over the years. It squeezes every last drop of performance out of whatever CPU you have.

Opening up the CUDA architecture to more platforms makes it easy to write portable code. It won't necessarily make it easy to write fast code for all platforms. You can have portable code, or you can have fast code, but fast portable code is almost an oxymoron.

Take a look at our GPU sieving software, for example. On Nvidia GPUs, the software is written in CUDA, which is specifically designed for the Nvidia CPUs. It's blazingly fast. On ATI/AMD, it's written in OpenCL, which is portable and is not specifically designed for ATI's GPUs. It's pitifully slow by comparison to the Nvidia version of the software. The hardware itself is not inferior; it's the fact that a portable, cross-platform software architecture is being used that makes it so slow."

I am sure WCG is making the same hard choices!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
....well, if OpenCL is not really NVIDIA 'compatible', is it better on ATI?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> I was on PrimeGrid this morning and saw this:...

mikey159b, please provide us with a link to the relevant webpage at PrimeGrid. Thanks.
petehardy
Senior Cruncher, USA | Joined: May 4, 2007 | Post Count: 318 | Status: Offline
This is it:
http://www.primegrid.com/forum_thread.php?id=3879

Here's another interesting one:
http://www.primegrid.com/forum_thread.php?id=3672

"Patience is a virtue", I can't wait to learn it!
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
The WCG decided to opt for OpenCL. While the merits of this decision are debatable (CUDA probably being faster for NVidia, Stream for AMD/ATI, and C++ possibly for CPUs), the researchers wanted an app that was universal, or as close to it as possible. Unfortunately they still had to design 3 somewhat separate apps: for CPU, AMD and NVidia. There are however some positives; WCG can at least compare performances from a relatively level playing field when the apps are released and award credit accordingly, hopefully setting the standard BOINC-wide. It's not necessarily a bad thing to have a low-performing app either; lower GPU utilization reduces running costs and heat problems, making the app more affordable and compatible.

> "On Nvidia GPUs, the software is written in CUDA, which is specifically designed for the Nvidia CPUs. It's blazingly fast."

To nit-pick: CUDA is fast, low-level code designed specifically for NVIDIA GPUs. Perhaps the snippet was taken from a thread discussing the topical use of CUDA on CPUs, supported by recent NVidia tools? This just facilitates future uses of CUDA and app development; some CUDA code can be used now on CPUs (development/testing/early releases), and when at a later date a GPU is added, that CUDA code would move to the GPU and work much faster.

> "CAL ATI Radeon HD 3800 (RV670) (512MB) and NVIDIA GeForce GT 430 (961MB)"

I suspect your 3800 may not be of any use here. I don't think it supports OpenCL 1.0 (not 100% sure though). It's about 4 years old, but does have 320 stream processors, and it still works with MW's streaming ati14 app. ATI Radeon HD 4800s (RV770) on the other hand do support OpenCL 1.0, so from that up, probably, for here. Possibly 4700s too (RV740). I'm sure someone here will construct a list, prior to or during Beta testing (unlike elsewhere).

Your GT430 is somewhat shader (CUDA core) limited (96), so while it will work here, don't expect fantastic results from it. Until it can be used here you could use it at Einstein, and might be able to use it soon at POEM (a single-precision bio-science project).

The ATI Stream code used at MW is designed for ATI GPUs. It's more akin to CUDA on NVidia than OpenCL on NVidia, ATI, or a CPU. There's also a CUDA_OpenCL app for NVidia GPUs, but desktop/gaming NVidia GPUs are not designed with double precision in mind, and dp performance tends to be around 1/8th to 1/12th of sp performance for recent NVidia GPUs. Hence NVidia GPUs are poor at MW. OpenCL is NVIDIA 'compatible', as much as it is ATI compatible.

[Edit 1 times, last edit by skgiven at Dec 28, 2011 6:41:27 PM]
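To put that sp:dp gap in rough numbers (the 1/8th to 1/12th ratios above are the poster's estimates, and the peak figure below is a made-up illustrative value, not any real card's spec):

```python
def est_dp_gflops(sp_gflops: float, dp_ratio: float) -> float:
    """Estimate double-precision throughput from a single-precision peak."""
    return sp_gflops * dp_ratio

# Hypothetical card with a ~1000 GFLOPS single-precision peak:
for label, ratio in [("1/8 dp:sp", 1 / 8), ("1/12 dp:sp", 1 / 12)]:
    print(f"{label}: ~{est_dp_gflops(1000, ratio):.0f} dp GFLOPS")
```

So even a nominally fast gaming card may offer only a small fraction of its headline throughput on a dp-heavy project like MW.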