World Community Grid Forums
Thread Status: Active | Total posts in this thread: 73
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The link to the DNA site, and then stepping through several threads, leads to one of the developers putting a finger right on the spot where it hurts: can the process be parallelized, in single or double precision? Past posts at WCG by the various scientists have touched on this in the CEP2, FAAH, and NRW forums. Maybe in the HCC forum too, but I don't presently recollect.
At any rate, based on current throughput, HCC might be finished before the next college year is in full swing. Today the count is heading for 365k WUs validated. I think when we first discussed this science's completion time, there was partial panic amongst members about it taking 7 years or more :O --//--
anhhai
Veteran Cruncher | Joined: Mar 22, 2005 | Post Count: 839 | Status: Offline
Quoting Sekerob: "At any rate, based on current throughput, HCC might be finished before the next college year is in full swing. Today the count is heading for 365k WUs validated. I think when we first discussed this science's completion time, there was partial panic amongst members about it taking 7 years or more :O"

Sekerob, are you factoring in HFCC restarting? Or a new project? The next college year starts around the beginning of September, right? I project I need another 6 months at my current rate to reach my goal on HCC. If it really is going to finish that much faster, I will make the necessary adjustments so that I can reach 50 years on HCC (only 20 now).
Coleslaw
Veteran Cruncher | USA | Joined: Mar 29, 2007 | Post Count: 1343 | Status: Offline
http://www.differencebetween.net/technology/difference-between-cpu-and-gpu/
This pretty much says that GPUs can't replace the CPU altogether. They just aren't designed that way. I'm sure that as the big three continue to merge their GPU technology with the CPU, we will see them pretty much become the same chip. But until then, they each have their purposes. I don't have the links anymore, but I remember reading threads at other BOINC projects saying that their research COULDN'T be crunched on a GPU, or at least not on CUDA at that time. So I'm sure there are reasons for not using them here now too.
[Edit 1 times, last edit by Coleslaw at May 3, 2011 10:06:13 PM]
robertmiles
Senior Cruncher | US | Joined: Apr 16, 2008 | Post Count: 445 | Status: Offline
I've seen some reasons for stopping efforts for a GPU version at various BOINC projects, but no reason why it can never be done.
At Rosetta@Home, they decided that the algorithm of their current application is so serial in nature that they had two choices:
1. Put as many copies of that algorithm into the GPU as it can find graphics memory for. Since each copy of the algorithm currently requires about 600 MB of memory to run, only a fairly small fraction of the GPU boards available would have enough graphics memory to produce outputs faster than one CPU core, and then only for workunits where each decoy starts a new copy of the algorithm instead of waiting for the outputs of the previous decoy (a rough back-of-the-envelope on this is sketched below).
2. Start over with a new algorithm much better suited to running in parallel with less memory per GPU core, and write an entirely new application.
A few other BOINC projects with similarly high memory requirements probably have the same problem, but haven't stated it as clearly.
At GPUGRID, they're already using Nvidia-based GPU boards, but when they looked at the possibility of an AMD/ATI-based version, they found:
1. The AMD/ATI OpenCL libraries just aren't ready to supply an FFT routine that they need, and there are currently problems with getting boards older than the 5000 series to access enough graphics memory at an adequate speed.
2. There's some question about whether the versions of BOINC available so far can handle GPU applications compiled from OpenCL; perhaps only CPU applications.
3. Writing a new version of the application in the AMD/ATI-specific computer language that BOINC can already handle would just take too long.
Some other BOINC projects are simply too short of money to hire another developer who already knows OpenCL, and too short of their current developers' time to do it without one.
As for merging a CPU chip with a GPU chip, Intel already tried it with the Larrabee chip design, but had this problem: the CPU created enough heat that they couldn't put much of a GPU on the same chip without making it either very slow or very prone to overheating and shutting down. If they had ever released the chip, BOINC would not have been ready to interface to it.
[Edit 1 times, last edit by robertmiles at May 4, 2011 1:55:33 AM]
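To put rough numbers on that first choice, here's a minimal sketch. The 600 MB per copy is the figure from the post above; the per-copy speed factor and the card memory sizes are assumptions for illustration only, not measured values.

```python
# Back-of-the-envelope look at Rosetta's "option 1": run as many independent
# copies of the serial algorithm on the GPU as graphics memory allows.
# The 600 MB per copy comes from the post above; the per-copy speed and the
# card memory sizes below are illustrative assumptions, not measurements.

MB_PER_COPY = 600        # memory needed by one copy of the serial algorithm
PER_COPY_SPEED = 0.25    # assumed: one GPU-resident copy runs at 25% of a CPU core

def gpu_vs_one_cpu_core(gpu_memory_mb):
    """Return (copies that fit in graphics memory, throughput relative to one CPU core)."""
    copies = gpu_memory_mb // MB_PER_COPY
    return copies, copies * PER_COPY_SPEED

for mem in (512, 1024, 1536, 3072):          # MB of graphics memory
    copies, relative = gpu_vs_one_cpu_core(mem)
    verdict = "faster" if relative > 1 else "slower"
    print(f"{mem:>5} MB card: {copies} copies fit -> ~{relative:.2f}x one CPU core ({verdict})")
```

Under those assumptions only the large-memory cards come out ahead of a single CPU core, which is exactly why so few boards would qualify.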
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
astrolab, for the projects that can/did use both CPU and GPU, it's clear that the top GPU is at least 20 times as powerful as the top CPU, but not all projects can use a GPU and most are limited in their choice of GPU. Crunching with a GPU is really a horses-for-courses situation: double precision means you are limited to a small range of ATI cards; single precision, and it's most ATI and NVidia cards. If the calculations are complex then you probably need CUDA. If you want to include as many cards as possible then OpenCL sounds good, but it's slower than CUDA, not well supported by ATI for anything complex, and opens a portal straight to Bob's chambers when it comes to support/troubleshooting. But there are benefits to using a GPU: lots of work (hundreds/thousands of cores), and it's a lot easier to add a second, third or fourth GPU. You can even buy dual-GPU cards, replace a faulty one or just upgrade it without investing in a completely new system. When crunching with a GPU the CPU is less important, so you can keep a good system for a lot longer (a Sandy Bridge increases GPU performance over an i7-920 by less than 2% for projects using 1 GPU + 1 CPU).
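As a practical aside: if you want to check which side of that double-precision divide your own card falls on, something like this should do it (a minimal sketch only; it assumes the pyopencl package and a working OpenCL driver, and simply looks for the standard cl_khr_fp64 or AMD's cl_amd_fp64 extension string):

```python
# List OpenCL devices and whether they advertise double-precision support.
# Assumes the pyopencl package and a working OpenCL driver are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        extensions = device.extensions.split()
        has_fp64 = "cl_khr_fp64" in extensions or "cl_amd_fp64" in extensions
        print(f"{platform.name.strip()} / {device.name.strip()}: "
              f"double precision {'yes' if has_fp64 else 'no'}")
```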
It's quite hard to find comparisons of CPU and GPU, because anyone crunching for a project that can use a GPU does just that. An old quote: "In short, 7 hour WU's on the CPU compared to the 10 minute WU's when using the GPU" (bobsma at Boincstats). Slightly newer: "I'm getting 13 minutes per work unit on my gtx 295 (no overclocking or anything). ive turned in 10 GPU results for a 169 average per. CPU is still waiting to finish even one WU per core :> 16% at an hour for an I7 920 slightly OC'd". So that's 6.25 h for 8 CPU tasks, or 2 GPU tasks on one GTX 295 every 13 min: about 30 tasks a day on the CPU vs 222 tasks a day on the GPU, so the GPU does roughly 7.4 times the work of the i7-920 (the arithmetic is spelled out below). Although this is a dual GPU, it's from a previous-generation NVidia card, and the ATI cards are much faster at MilkyWay, especially the high-end new cards: an HD6950 takes 94 sec for one task! That's about 30 times the work of an i7-920.
anhhai, why don't you go for a more meaningful points tally?
[Edit 2 times, last edit by skgiven at May 4, 2011 12:07:24 PM]
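For anyone who wants to check that arithmetic, here it is spelled out. Only the figures quoted above are used; the 8 threads assume the i7-920 running with Hyper-Threading.

```python
# The CPU-vs-GPU arithmetic from the MilkyWay quotes above, spelled out.
HOURS_PER_CPU_TASK = 6.25    # 16% progress after 1 hour => ~6.25 h per task
CPU_THREADS = 8              # i7-920 with Hyper-Threading
MINUTES_PER_GPU_TASK = 13    # per the GTX 295 quote
GPU_CHIPS = 2                # the GTX 295 is a dual-GPU card

cpu_tasks_per_day = CPU_THREADS * 24 / HOURS_PER_CPU_TASK          # ~30.7
gpu_tasks_per_day = GPU_CHIPS * 24 * 60 / MINUTES_PER_GPU_TASK     # ~221.5

print(f"CPU: ~{cpu_tasks_per_day:.0f} tasks/day")
print(f"GPU: ~{gpu_tasks_per_day:.0f} tasks/day")
print(f"GPU does ~{gpu_tasks_per_day / cpu_tasks_per_day:.1f}x the CPU's work")
```

The 7.4 quoted above comes from rounding 30.7 CPU tasks a day down to 30; either way it's the same ballpark.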
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
So only SOME projects can even be considered for GPU processing, and if one is selected, it will only run on SOME GPU cards. Therefore the people who are left out will wail that their GPU needs to be supported.
Instead of writing software that everyone can run on a CPU, the requirement is that the application must be rewritten for a specific project to run on a specific GPU. Sounds like a very narrow market, so that when the GPU is retired, all the money used to develop and support a very specifically tuned application is wasted. Sounds like something I want to invest a few hundred thou into. There is no analysis as to whether running a GPU hard has any impact on its lifespan, but lots of "don't worry" opinions.
It's obvious why WCG is moving slowly. Only a portion of the pro-GPU community is going to be happy, leaving the majority to moan and wail. The GPU manufacturers can easily kill the application with a spec change. The developers would have to write a whole new application from scratch. The CAs are going to have a whole new set of problems supporting the GPUs that work and the GPUs that are not supported. The new project requires its own beta testing, and the server systems will have another project to manage and schedule.
But if it works, it will be 7 times faster, so we will need 7 times as many projects to keep our crunchers involved, upgrades to the server and disk systems to manage 7 times the number of WUs, 7 times the bandwidth to move the data to/from the server, and the scientists will have 7 times the volume of data to manage and analyze in the same time frame. Unless, of course, only relatively few people GPU crunch, in which case GPU has a limited impact.
Did I miss anything?
nasher
Veteran Cruncher | USA | Joined: Dec 2, 2005 | Post Count: 1423 | Status: Offline
Personally, I don't mind that they don't have a GPU version yet.
For me, I don't have a GPU that would work on any of my computers that are currently crunching. But if I did, I would have it running another BOINC project and getting credit over there. If I could afford a good GPU, I would probably just replace the power supply on my currently dead computer (it would run, but the fans in the power supply don't work, so I won't turn it on).
Right now they have 3 major versions of the project: Windows, Linux, and Mac; or 6 versions if you count 64-bit. Keeping all those versions up to date takes a lot of time and money, and you want them to add GPU processing for 2 brands of cards and who knows how many major types of chipsets, for 3 (or 6) versions...
Yes, GPU is great for some projects, it's OK for others, and it isn't running on the rest. Right now WCG does not have GPU, but other BOINC projects do.
Good luck and happy crunching
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline
"Did I miss anything?"
Yea! The cost of buying and running power-hungry GPUs for crunching. None of my current GPUs fit the bill, and I wouldn't spend the extra it would cost to run GPU-crunch-capable cards even if WCG offered it. My electric bill is already high enough, and the high cost of fuel being passed on to consumers isn't helping. JMHO.
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.
Coleslaw
Veteran Cruncher | USA | Joined: Mar 29, 2007 | Post Count: 1343 | Status: Offline
nanoprobe... I think you are misunderstanding the cost a bit. One high-end card at around $400 (cheaper than many i7 systems) that produces 7 times the results for roughly the same electricity eliminates the need for multiple systems (rough numbers are sketched after this post). Not to mention you could run it on a low-cost, high-efficiency Atom board with a PCI Express slot.
I think astrolab also underestimates how many of the big hitters have these cards en masse. Rewriting the code to get this kind of performance increase is well worth it. Worrying about WCG's ability to handle the traffic is dumb; it's the same as saying don't bring in more projects and expand. I personally only have a few older CUDA cards and a few borged ATI cards. They pretty much crunch PrimeGrid and Collatz due to their low performance. My GeForce 210 outperforms my Quad on a per-task basis. {Edited for typos}
[Edit 2 times, last edit by Coleslaw at May 4, 2011 9:23:32 PM]
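To put some admittedly assumed numbers on that cost argument: the $400 card price and the ~7x throughput figure come from this thread, while the wattages, the CPU system price and the electricity rate below are illustrative guesses only.

```python
# Rough cost comparison: adding one GPU card to an existing box versus buying
# extra CPU-only systems for the same throughput. The $400 card price and the
# ~7x throughput figure come from the thread; every other number here is an
# assumption for illustration only.

GPU_PRICE = 400           # USD, high-end card (from the post)
GPU_SPEEDUP = 7           # one GPU ~= 7 CPU systems' worth of results (from the thread)
CPU_SYSTEM_PRICE = 800    # assumed cost of one i7-class CPU-only system, USD
GPU_EXTRA_WATTS = 250     # assumed extra draw of the card under load
CPU_SYSTEM_WATTS = 200    # assumed draw of one CPU-only system under load
USD_PER_KWH = 0.12        # assumed electricity price
HOURS_PER_MONTH = 720

cpu_hw_cost = GPU_SPEEDUP * CPU_SYSTEM_PRICE
gpu_kwh = GPU_EXTRA_WATTS * HOURS_PER_MONTH / 1000
cpu_kwh = GPU_SPEEDUP * CPU_SYSTEM_WATTS * HOURS_PER_MONTH / 1000

print(f"Hardware for the same output: ${GPU_PRICE} (one GPU) vs ${cpu_hw_cost} (CPU systems)")
print(f"Electricity per month: {gpu_kwh:.0f} kWh (~${gpu_kwh * USD_PER_KWH:.0f}) "
      f"vs {cpu_kwh:.0f} kWh (~${cpu_kwh * USD_PER_KWH:.0f})")
```

Change the assumed numbers to suit your own hardware and tariff; the point is only that one card added to an existing box compares favourably with buying several whole systems.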
robertmiles
Senior Cruncher | US | Joined: Apr 16, 2008 | Post Count: 445 | Status: Offline
Quoting nanoprobe: "The cost of buying and running power hungry GPUs for crunching... My electric bill is already high enough and the high cost of fuel being passed on to consumers isn't helping."
I haven't seen anything on GPUs for the more common computers that will use more than 300 watts each without being overclocked, and that's often less than adding another CPU-based computer. I've had more of a problem with my computer room overheating, though, so I've had to limit the graphics card I use for GPU crunching to a GTS 450.
Would you prefer the high cost to consumers of medical problems that don't yet have adequate research?
[Edit 1 times, last edit by robertmiles at May 5, 2011 3:54:26 PM]