World Community Grid Forums
Thread Status: Active | Total posts in this thread: 43
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline

Quote:
As per the requirement for high-end graphics cards, I'm sure there are a few gamers here as well (like me). I remember when HCC had a GPU app; it produced roughly 10-15 times my normal output for about 100 watts more. Definitely worth it in terms of computing power... though I'm not sure any card is worth the extra electricity when you can have power-efficient chipsets that crunch without extra power draw!

IDK: When the HCC GPU app was running, my 7970 could crank out 1 million PPD all by itself. I don't remember how many tasks it took to get that much, but it was thousands per day.
In 1969 I took an oath to defend and protect the U.S. Constitution against all enemies, both foreign and domestic. There was no expiration date.
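The efficiency claim in the post above can be sanity-checked with quick arithmetic. This sketch uses only the figures quoted in the thread (1 million PPD from one HD 7970, ~250 W board power, and the "10-15x output for ~100 W more" estimate) — all assumed from the posts, not independently measured:

```python
def points_per_watt(ppd: float, watts: float) -> float:
    """Daily BOINC points produced per watt of power drawn."""
    return ppd / watts

# HD 7970 on the HCC GPU app, per the post above (assumed figures):
gpu_ppd = 1_000_000
gpu_watts = 250            # typical board power for an HD 7970
print(f"HD 7970: {points_per_watt(gpu_ppd, gpu_watts):.0f} PPD per watt")

# The post estimates 10-15x CPU-only output for ~100 W of extra draw.
# Even at the conservative 10x end, the marginal efficiency of adding
# the GPU dwarfs the CPU baseline:
cpu_ppd = gpu_ppd / 10             # implied CPU-only output at a 10x speedup
extra_ppd = gpu_ppd - cpu_ppd      # output gained by turning the GPU on
print(f"Marginal gain: {points_per_watt(extra_ppd, 100):.0f} extra PPD per extra watt")
```

Even with the conservative assumptions, the marginal points-per-watt of the GPU comes out more than twice its overall average, which is why the "worth the extra electricity" question kept coming up in this thread.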
KLiK
Master Cruncher | Croatia | Joined: Nov 13, 2006 | Post Count: 3108 | Status: Offline

Quote (nanoprobe):
When the HCC GPU app was running my 7970 could crank out 1 million PPD all by itself.

The power draw of a 7970 is an extra 250 W! While you get the same crunching power from 2x 750 Ti... with only 2x 60 W...
[Edited 1 time, last edit by KLiK at May 30, 2016 9:03:46 AM]
Mumak
Senior Cruncher | Joined: Dec 7, 2012 | Post Count: 477 | Status: Offline

KLiK - those are only the TDPs; real numbers are a bit different and depend on the application used.
For example, on Einstein@Home running BRP6, an HD 7950 consumes ~120-130 W, a Fury X ~150 W, and a 750 Ti about 30 W. Comparing the runtimes, running 2x 750 Ti gives slightly more output than the HD 7950 for about half the power (each 750 Ti draws roughly a quarter of the HD 7950's). But that is down to the application used - the CUDA 5.5 tasks there are just much better optimized. On other projects you might get different results, even the exact opposite when double precision is involved (e.g. MilkyWay@Home).
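Mumak's point - that TDP overstates real draw, and by different margins per card - can be sketched with the thread's own numbers. All figures are assumed from the posts above (rated TDPs plus the Einstein@Home BRP6 measurements), not re-measured:

```python
# Rated TDP vs. measured draw on Einstein@Home BRP6,
# per the figures quoted in this thread (assumed):
cards = {
    # name: (tdp_watts, measured_watts)
    "HD 7950": (200, 125),   # ~120-130 W measured under BRP6
    "750 Ti":  (60, 30),     # ~30 W measured under BRP6
}

for name, (tdp, measured) in cards.items():
    print(f"{name}: {measured} W measured = {measured / tdp:.0%} of its {tdp} W TDP")

# The 2x 750 Ti vs. 1x HD 7950 comparison from the post:
pair_watts = 2 * cards["750 Ti"][1]    # 60 W for the pair
single_watts = cards["HD 7950"][1]     # 125 W for the single card
print(f"2x 750 Ti draw {pair_watts} W vs {single_watts} W for the HD 7950")
```

The takeaway matches the post: judging efficiency by TDP alone (as in the 250 W claim above) can be off by a factor of two, and the gap varies by card and workload.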
KLiK
Master Cruncher | Croatia | Joined: Nov 13, 2006 | Post Count: 3108 | Status: Offline

I do know that... so on SETI@home I mostly run optimized GPUs that draw no additional power!
Power bills are the main issue... and heat, too! ;)
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline

Quote (KLiK):
The power draw of a 7970 is an extra 250 W! While you get the same crunching power from 2x 750 Ti... with only 2x 60 W...

There are several problems with your analogy.
#1 When the HCC GPU app was running here, it was OpenCL only.
#2 At that time, NVIDIA's OpenCL support was horrible, to say the least. It was so bad that an ATI 7770 could run circles around the best NVIDIA had to offer.
#3 The 750 Ti was not available when the HCC GPU app was running here, and it wouldn't have made any difference if it had been. The awful NVIDIA OpenCL support would have pretty much neutered its performance.
#4 How many of the best NVIDIA cards available at that time it would have taken to equal the performance of one 7970 is anyone's guess. My guess is at least 10, probably more.
BTW, I have 750 Ti cards and I like them very much.
KLiK
Master Cruncher | Croatia | Joined: Nov 13, 2006 | Post Count: 3108 | Status: Offline

Quote (nanoprobe):
The 750 Ti was not available when the HCC GPU app was running here, and it wouldn't have made any difference if it had been.

The GT 440 & GT 545 would be the best 2011 had to offer without extra power!
https://www.techpowerup.com/gpudb/261/geforce-gt-440-oem
https://www.techpowerup.com/gpudb/627/geforce-gt-545
In the 200 W area, there's another card, the 560 Ti:
https://www.techpowerup.com/gpudb/289/geforce-gtx-560-ti-448
Anyway... not to get into the speed argument! I'm simply looking at the cost of running those cards... 200 W while cooking that card is a lot! Too much, in fact!
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline

I thought our discussion was about the best-performing card for the HCC GPU app that was available to run here. Guess I was wrong.
SekeRob
Master Cruncher | Joined: Jan 7, 2013 | Post Count: 2741 | Status: Offline

For those not wanting to bet on a one-horse race, here is a 2016 Phoronix article on NVIDIA CUDA vs. OpenCL:
http://www.phoronix.com/scan.php?page=article...16-first-clcuda&num=1
The sum is simple... if a CUDA project were to come to WCG, a very large portion of the volunteers would be cut out. If OpenCL, yes, NVIDIA might not be up to par, but pretty much all modern cards from both camps could participate [and maybe even the Intel/AMD APUs].
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline

Quote (SekeRob):
The sum is simple... if a CUDA project would come to WCG, a very large portion of the volunteers gets cut out.

+1. NVIDIA's OpenCL support has gotten better since 2011, but why they still lag behind AMD's support is a mystery. I read somewhere a while ago that it was intentional, to protect CUDA. Multi-platform is the future IMHO, so it really makes no sense to me for NVIDIA to keep not fully supporting OpenCL. There will probably always be CUDA apps that can't be ported to OpenCL. JMHO.
KLiK
Master Cruncher | Croatia | Joined: Nov 13, 2006 | Post Count: 3108 | Status: Offline

Quote (nanoprobe):
There will probably always be CUDA apps that can't be ported to OpenCL.

Well, SETI@home just launched OpenCL versions of the SaH & SoG science apps... compared to CUDA jobs, they use a whole CPU core instead of 5-20% of it! So I would like to see a CUDA app for NVIDIA cards, while OpenCL can be made for Intel HD or ATI cards...