World Community Grid Forums
Thread Status: Active. Total posts in this thread: 22
sk..
Master Cruncher Joined: Mar 22, 2007 Post Count: 2324 Status: Offline
I would be very surprised to find a top NVIDIA GPU unable to work here, even if it is two years before a GPU project turns up. That said, you are right: there is no point buying a GPU for here now. GPU crunching is not plug and play for any of the present GPU projects elsewhere, so GPU crunching experience could be gained at other projects, should they interest you.
Last I heard, WCG is planning to launch an OpenCL project - one that would (in theory) facilitate the use of many GPU types, and presumably CPUs at the same time. As there have been three years or more of speculation about a GPU project for WCG, I don't think there is any point in planning for its arrival - not before the announcement, which I would expect to contain full details of participation requirements.
mikaok
Senior Cruncher Finland Joined: Aug 8, 2006 Post Count: 489 Status: Offline
> I would be very surprised to find a top NVidia GPU unable to work here, even if it's 2 years before a GPU project turns up. That said you are right; there is no point buying a GPU for here now. GPU crunching is not plug and play for any of the present GPU projects elsewhere, so GPU crunching experience could be gained at other projects, should they interest you.

GPU projects aren't problem free. You can have a four-year-old CPU and still get a fair amount of work done, but a three-year-old GPU can be too old. So don't look at future projects but at those that are currently available. Then browse their web forums, read what kind of gear people are using, and ask whether you would be able to accept those runtimes... Here's an example of what I mean: on my 8800 GTS, one small GPUGRID WU took 11 days to finish. I just find it uninteresting to upgrade my GPU every year or two just to be able to finish WUs in under 6 hours.
to infinity and beyond
Simplex0
Advanced Cruncher Sweden Joined: Aug 14, 2008 Post Count: 83 Status: Offline
> GPU projects aren't problem free. You can have a four year old CPU and gain fairly good amount of work done. But a three year old GPU can be too old. So don't look for the future projects but those that are currently available. Then browse their web forums and read what kind of gear people are using and will you be able to accept those runtimes... Here's an example what I mean: to my 8800gts one small gpugrid wu took 11 days to finish. I just find it uninteresting to update my GPU every one or two years just to be able to finish wus in <6 hours.

On the other hand, according to this:

> "Findings: The Nutritious Rice for the World project (NRW) on World Community Grid predicted, de novo, the structures of over 62,000 small proteins and protein domains, returning a total of 10 billion candidate structures. Clustering ensembles of structures on this scale requires calculation of large similarity matrices consisting of RMSDs between each pair of structures in the set. As a real-world test, we calculated the matrices for 6 different ensembles from NRW. The GPU method was 260 times faster than the fastest existing CPU-based method and over 500 times faster than the method that had been previously used."

a GPU cruncher can be more than 250 times faster than a high-end CPU cruncher. It is absolutely mind-blowing that GPUs have not been put to more use already. Maybe it is time to employ professional programmers, like Folding@home?
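To see why this workload invites GPU acceleration, here is a minimal pure-Python sketch of the kind of all-pairs RMSD matrix the quoted paper describes. The coordinates and ensemble below are made up for illustration, and real tools (including GPU-Q-J) first find the optimal superposition of each pair before computing the RMSD, which this sketch omits - it only shows where the O(m²) cost comes from.

```python
import math

def rmsd(a, b):
    """Root-mean-square deviation between two equal-length lists of
    (x, y, z) atom coordinates. NOTE: no optimal superposition is
    performed here; real structure-comparison codes align first."""
    n = len(a)
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(a, b))
    return math.sqrt(total / n)

def rmsd_matrix(ensemble):
    """All-pairs RMSD matrix: O(m^2) structure pairs, each costing O(n)
    in the number of atoms - the part a GPU can batch massively."""
    m = len(ensemble)
    mat = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            d = rmsd(ensemble[i], ensemble[j])
            mat[i][j] = mat[j][i] = d  # the matrix is symmetric
    return mat

# Tiny made-up "ensemble" of three two-atom structures:
ensemble = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)],
    [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)],
]
mat = rmsd_matrix(ensemble)
```

With 10 billion candidate structures grouped into large ensembles, the quadratic number of pairs is what makes even a modest GPU attractive for this step.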
KWSN - A Shrubbery
Master Cruncher Joined: Jan 8, 2006 Post Count: 1585 Status: Offline |
Yes, a GPU can be considerably faster and 250 times is not unheard of. Now for a reality check.
This cannot be done in the real world. It can be accomplished under ideal situations with a single or a few models of GPU.

Distributed computing is designed to take advantage of the widest possible range of hardware. As has been previously mentioned, CPUs stay compatible for many years; GPUs require highly modified programs, and few are compatible. I'm not going to make up numbers about the potential, but I can assure you there is a problem of diminishing returns when attempting to write an application for GPU crunching: the more cards you attempt to support, the more difficult the task becomes. Is it worthwhile to spend considerable resources for a handful of crunchers when there is already a backlog of resource demands?

Distributed computing volunteer since September 27, 2000
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Hello Simplex0,
> It is absolutely mind-blowing that GPU's has not been put to more use already. Maybe time to employ professional programmers like Folding@home?

Here is a report from The Register: "Deep inside AMD's master plan to topple Intel" http://www.theregister.co.uk/2011/07/07/amd_graphics_core_next/

AMD is planning to introduce a new GPU architecture in 2013 designed for stream processing - unlike current GPU architectures, which have some additional commands tacked onto an old-fashioned GPU design. There are many old-timers like me who would like to see some competition and evolution in stream-processing GPUs. Stanford (where Folding@home is located) announced Brook in 2004, and F@H spent years programming (with aid from NVIDIA and ATI) before their first program was released - with serious bugs. So it is still early days (for those daring young men in their flying machines). But don't worry. There should be a successful cross-Channel flight Real Soon Now!

Lawrence
Simplex0
Advanced Cruncher Sweden Joined: Aug 14, 2008 Post Count: 83 Status: Offline
> Yes, a GPU can be considerably faster and 250 times is not unheard of. Now for a reality check. This cannot be done in the real world.

That is not correct; it HAS been done. Read it again: http://www.biomedcentral.com/1756-0500/4/97

Or is it a false statement?
Simplex0
Advanced Cruncher Sweden Joined: Aug 14, 2008 Post Count: 83 Status: Offline
> Here is a report from The Register: "Deep inside AMD's master plan to topple Intel" http://www.theregister.co.uk/2011/07/07/amd_graphics_core_next/ AMD is planning to introduce a new GPU architecture in 2013 designed for stream-processing - unlike current GPU architectures that have some additional commands tacked on to an old-fashioned GPU architecture. There are many old timers like me that would like to see some competition and evolution in stream-processing GPUs. Stanford (where Folding@Home is located) announced Brook in 2004 and F@H spent years programming (with programming aid from Nvidia and ATI) before their first program was released - with serious bugs. So it is still early days (for those daring young men in their flying machines). But don't worry. There should be a successful cross-Channel flight Real Soon Now!

Yes, but it has also been done for years by those who are good at writing assembler. I have run both Folding@home and Milkyway@home for years and they usually work just fine. And you also have applications like password cracking. So the use of GPUs is no longer anything new, except for universities maybe ;) I guess it is a question of skill, effort and budget.

Just some more interesting stuff; this is taken from a more than two-year-old interview with Gipsel:

> "If you compare the beginning of the project with today's situation, you could claim a gain from 'one WU a day' on a single Core 2 processor @3GHz to almost 10,000 WUs a day with a HD4870 [this is a live testament to what code optimization can achieve - imagine if every application had such a dedicated code-optimizer]"

You can read the whole article here: http://www.brightsideofnews.com/news/2009/3/2...he-power-of-graphics.aspx

[Edit 2 times, last edit by Simplex0 at Jul 11, 2011 4:57:48 PM]
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Hi Simplex0,
We are both right. The hardware/software barriers to GPU computing will have been surmounted when corporations start specifying GPUs for their computers; after all, they pay their professional programmers a great deal. Both AMD and Intel are aware of this and have development programs for GPU programming. So there is great promise, but there are still problems. Almost by definition, devices that are sold mainly to enthusiasts have either limited utility or serious problems.

Lawrence
KWSN - A Shrubbery
Master Cruncher Joined: Jan 8, 2006 Post Count: 1585 Status: Offline |
> Yes, a GPU can be considerably faster and 250 times is not unheard of. Now for a reality check. This cannot be done in the real world.
>
> That is not correct; it HAS been done. Read it again: http://www.biomedcentral.com/1756-0500/4/97 Or is it a false statement?

I'm going to assume that you simply neglected to quote me in context due to over-zealousness. I would hesitate to accuse someone I don't know of intentional deception simply to win an internet argument. For completeness, allow me to reproduce that sentence in its entirety, to show that my point stands as originally stated:

> This cannot be done in the real world. It can be accomplished under ideal situations with a single or a few models of GPU.

I never questioned that a GPU can perform at these speeds; I suggested that in the general population of hardware available in a distributed computing setting, this type of performance is impossible. Only a few cards, or even a single card, will fit the optimized code. The rest will be considerably slower, if they are compatible at all. My argument was, and remains, that programming for a general-release GPU application is far more difficult than its proponents suggest.

Distributed computing volunteer since September 27, 2000
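The disagreement here is really about per-card peak speed versus aggregate throughput across a mixed volunteer fleet. A back-of-envelope sketch (all numbers below are hypothetical, chosen only to show the shape of the argument): even a 250x kernel speedup moves grid-wide throughput only in proportion to the fraction of hosts that can actually run the optimized code.

```python
def fleet_speedup(gpu_fraction, gpu_speedup):
    """Overall throughput multiplier for a volunteer grid in which only
    `gpu_fraction` of hosts run the GPU code at `gpu_speedup` times CPU
    speed, while the remaining hosts keep crunching on the CPU at 1x."""
    return (1 - gpu_fraction) * 1.0 + gpu_fraction * gpu_speedup

# Hypothetical numbers: if only 2% of hosts have a compatible card,
# a 250x kernel yields roughly a 6x grid-wide gain, not 250x.
few_compatible = fleet_speedup(0.02, 250)
many_compatible = fleet_speedup(0.50, 250)
```

This is why the compatible-hardware fraction, not the benchmark headline, dominates what a project actually gains from a GPU port.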
fablefox
Senior Cruncher Joined: May 31, 2010 Post Count: 161 Status: Offline
I'm not trying to win arguments here, but to quote the article:
"The test system that we used is a modest one and is exceeded by many home computers used for gaming or for high definition video."

I don't know why, but I have the feeling that this would be the next GPU-based WCG project. You know what, I'll just post the whole paragraph:

> "GPUs with the power of small supercomputers are becoming ubiquitous in consumer computing devices. The test system that we used is a modest one and is exceeded by many home computers used for gaming or for high definition video. These devices are being made accessible to scientific applications through community grids which link millions of volunteer nodes together. Although GPU-Q-J was developed for clustering large ensemble sets on our local servers, the method will form part of a new GPU-aware protein folding client that is in development. Such GPU-aware clients have already made an impact in projects such as Folding@home [20]. Effective GPU adaptations of routines for commonly used calculations such as the optimal superposition/RMSD are important if we are to fully utilize the enormous power being made available through the generosity of participants in projects such as NRW."

[Edit 1 times, last edit by fablefox at Jul 15, 2011 6:33:09 AM]