World Community Grid Forums
Thread Status: Active · Total posts in this thread: 19
RicktheBrick
Senior Cruncher · Joined: Sep 23, 2005 · Post Count: 206 · Status: Offline
There is a very interesting article about Stanford's Folding@home project: http://www.anandtech.com/video/showdoc.aspx?i=2849. They state that they can accelerate their program by 20 to 40 times. It is limited to ATI's X19xx GPUs, but the article says they are trying to support more of ATI's GPUs. I would think that since Stanford is publicly supported, their research would be available to IBM. I would buy a GPU if I thought it would make my computer twice as fast, and the article states it would increase speed by a factor of 20 to 40.
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
We have seen all sorts of outrageous (and conflicting) claims. It will be interesting to see what can really be done.
However, there are two major problems that prevent this being a panacea. First, not all grid projects can benefit from massive parallelism. Work units are already cut down into the smallest pieces that can run in parallel, and there is no benefit in splitting them further because of memory requirements. Some of our current projects may fail this test, others may pass; image filtering, for example, parallelises well with shared memory (see the sketch below).

The second problem is the variation in GPUs and the amount of effort it takes just to adapt one project to run on one GPU. Currently, there's not a snowball's chance in hell that WCG can do this for any of our current projects. But this may change: efforts are underway to abstract away much of the architectural differences and provide a uniform platform. Also, future projects may be delivered to WCG with science applications already designed to take advantage of this kind of processing. Time will tell, but WCG are unlikely to jump the gun on this one.
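To illustrate the image-filtering example: every output pixel depends only on a small, read-only neighbourhood of the input, so the image can be cut into bands and handed to workers that simply share memory. The sketch below uses plain C++ threads purely as an illustration (the names and structure are invented here, not anything from WCG or Folding@home); a GPU would apply the same decomposition with one thread per pixel.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// 3x3 box blur over rows [rowBegin, rowEnd) of a width x height greyscale
// image stored row-major. Each output pixel reads only a small, fixed
// neighbourhood of the shared, read-only input.
void boxBlurRows(const std::vector<float>& in, std::vector<float>& out,
                 int width, int height, int rowBegin, int rowEnd) {
    for (int y = rowBegin; y < rowEnd; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx;
                    int ny = y + dy;
                    if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                        sum += in[ny * width + nx];
                        ++count;
                    }
                }
            }
            out[y * width + x] = sum / count;
        }
    }
}

// Split the image into bands of rows, one band per thread. All threads share
// the input; each writes a disjoint slice of the output, so no locking is needed.
void boxBlurParallel(const std::vector<float>& in, std::vector<float>& out,
                     int width, int height, int numThreads) {
    std::vector<std::thread> workers;
    const int rowsPerThread = (height + numThreads - 1) / numThreads;
    for (int t = 0; t < numThreads; ++t) {
        const int begin = t * rowsPerThread;
        const int end = std::min(height, begin + rowsPerThread);
        if (begin >= end) break;
        workers.emplace_back(boxBlurRows, std::cref(in), std::ref(out),
                             width, height, begin, end);
    }
    for (auto& w : workers) w.join();
}
```

Work units that decompose this cleanly are the good candidates; ones whose pieces need to exchange lots of intermediate data, or need more memory per piece than a GPU offers, are not.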
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
We are aware that science apps optimized for certain GPUs would make things go faster, but as Didactylos points out, it's a lot of work to customize an app for anything more than one or two GPUs. We already spend the bulk of our time making sure the apps work consistently across just four variations (UD Windows, BOINC Windows, BOINC Linux, and BOINC Mac).
If GPU makers (or anyone else) could provide an abstraction layer that would let us program for many GPUs at once, we would consider using it. But for now, let's focus on getting more machines crunching to make the research go faster (i.e., tell your friends).
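For what it's worth, here is a very rough sketch of what such an abstraction layer might look like from the project side. Every name and the interface itself are invented for illustration (no such library existed at the time of this thread); the point is only that the science app would target one interface while per-vendor backends hide the GPU differences.

```cpp
#include <memory>
#include <string>
#include <vector>

// One interface the science application codes against; each GPU vendor (or a
// plain CPU fallback) would supply its own backend behind it.
class ComputeBackend {
public:
    virtual ~ComputeBackend() = default;
    virtual std::string name() const = 0;
    // Run the inner loop of one work unit on whatever hardware this wraps.
    virtual void runKernel(const std::vector<float>& input,
                           std::vector<float>& output) = 0;
};

// Fallback backend: always available, so a work unit can still be crunched on
// machines without a supported GPU.
class CpuBackend : public ComputeBackend {
public:
    std::string name() const override { return "cpu"; }
    void runKernel(const std::vector<float>& input,
                   std::vector<float>& output) override {
        output = input;  // placeholder for the real science code
    }
};

// The project app asks for the best available backend instead of talking to
// vendor-specific GPU APIs directly.
std::unique_ptr<ComputeBackend> selectBackend() {
    // A real implementation would probe installed drivers and cards here and
    // only fall back to the CPU if no supported GPU is found.
    return std::make_unique<CpuBackend>();
}
```

With something like this, porting to a new card would mean writing one new backend rather than reworking every science application.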
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
Quote: "it's a lot of work to customize the app for anything more than one or two GPUs."
Yes, any technology advance means a lot of work. My grandfather always told me that nothing big comes to us without big effort from us. I think 20x to 40x computing power is something serious enough to invest time in.

Quote: "If GPU makers (or anyone else) could make an abstraction layer that would allow us to program for many GPUs at once, we would consider doing that."
Nvidia has already announced that such a layer will be available for their G80 chips ;-)

Quote: "But for now, let's focus on getting more machines crunching to make the research go faster (ie: tell your friends)."
I think you are mistaken in your analysis here. You should not let this advance announced (and demonstrated) by the Stanford folding project go in one ear and out the other while you wait and see what happens. If WCG falls too far behind other projects and does not invest time in this new possibility, a lot of people may quit this project: they donate their time, but they want it used as well as possible. If Folding@home can process 20x faster thanks to GPU processing, then it can process more complex proteins in the same time, and the quality of the project improves. Just think about it.

PS: I know that for the moment not many crunchers have X1900-series cards, but things move fast these days, so if it takes some time, perhaps you should already be investing time and work in it.
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
Oh sorry, I did not see the date of the first post and the answers...
Now it's a reality (a GPU client for Folding@home) ;-) What's your position now?
Sekerob
Ace Cruncher · Joined: Jul 24, 2005 · Post Count: 20043 · Status: Offline
Some foreign-language sites discussed this and figured it will take about 3 years before it becomes more common / can go mainstream... The X1900 cards run at 90 °C, draw 125 W, put the fans in high gear, and have to be throttled down from 3D to 2D to get them into the 70s °C... and the CPU is still loaded at 50%. Is it head first or feet first for a client's computer?
BTW, some of these cards seem to cost more than an entire PC, and I wonder what running continuously at 90 °C does to the warranty.
----------------------------------------
WCG
Please help to make the Forums an enjoyable experience for All!
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
The difference between WCG and Folding@Home (actually, there are many, but one in particular interests us): Folding@Home is a single project, running its own grid with its own client and its own science application. WCG runs software provided by the institutions running the projects it hosts.
So, in the first instance, the responsibility for completely redesigning the software rests with those institutions. WCG helps by porting software to the platforms it supports, but the change involved in using a GPU is massive. And I wonder how many computers Folding@Home will blow up before they find the best way to do it?
Sgt.Joe
Ace Cruncher · USA · Joined: Jul 4, 2006 · Post Count: 7850 · Status: Offline
I think Sekerob is right. That is a lot of heat/energy to dissipate, which cannot be good for the rest of the components, much less the chip itself, unless you have some fancy cooling system. His point about the energy usage (125 W) is also a good one. Most of us do not live on the bleeding edge of technology, so the added expense of both the card and its energy use is significant, besides the fact that most older systems are not compatible with the new high-power cards. At any rate, I have yet to see the proof of concept in any real-world testing. Let us see some published results, then weigh the costs and time of the additional programming against the perceived benefits from some potential (probably small) number of cards. Would the amount of work required justify the return? This may be a viable alternative at some time in the future, but probably not until there is significant market penetration of the hardware (if the claims prove to be true). Beware of the hype.
----------------------------------------
Cheers,
Sgt. Joe
*Minnesota Crunchers*
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
Quote: "Some foreign-language sites discussed this and figured it will take about 3 years before it becomes more common / can go mainstream... The X1900 cards run at 90 °C, draw 125 W, put the fans in high gear, and have to be throttled down from 3D to 2D to get them into the 70s °C... and the CPU is still loaded at 50%. Is it head first or feet first for a client's computer? BTW, some of these cards seem to cost more than an entire PC, and I wonder what running continuously at 90 °C does to the warranty."
If you compare processing power per watt, then you are totally mistaken... These cards can run much hotter (up to 125 °C); they are designed for it ;-) Double the electrical consumption of a CPU, but for 20x the processing power...
[Edit 1 time, last edit by Former Member at Oct 6, 2006 3:38:30 PM]
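The performance-per-watt argument is easy to put into numbers. The sketch below simply reuses the claims in this post (roughly double the power draw for roughly 20x the throughput) as assumptions; the wattages are placeholders, not measurements.

```cpp
#include <iostream>

int main() {
    // Assumed figures, taken from the claims in this post rather than measured:
    const double cpuPowerWatts = 65.0;                   // placeholder CPU power draw
    const double gpuPowerWatts = 2.0 * cpuPowerWatts;    // "double electric consumption"
    const double cpuThroughput = 1.0;                    // normalised work per second
    const double gpuThroughput = 20.0 * cpuThroughput;   // "20x the processing power"

    const double cpuWorkPerJoule = cpuThroughput / cpuPowerWatts;
    const double gpuWorkPerJoule = gpuThroughput / gpuPowerWatts;

    // 20x the work for 2x the power is 10x the work per joule, if the claims hold.
    std::cout << "GPU vs CPU work per joule: "
              << gpuWorkPerJoule / cpuWorkPerJoule << "x\n";
    return 0;
}
```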
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
Read the information more carefully: certain operations can run 20x faster. However, that is not everything, and some things will run slower, bringing the average speed down to something much less impressive. I think they are actually claiming about 4x overall, which is still useful (a rough calculation below shows how a 20x kernel speedup can shrink to roughly 4x end to end).
But 125 °C? People are already complaining about overheating with normal CPUs. That sort of temperature effectively turns your computer into an oven.
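A quick back-of-the-envelope check of why "20x on some operations" can shrink to roughly 4x overall is Amdahl's law. The accelerated fraction below (80%) is an assumption chosen only to show how the arithmetic works, not a figure from Folding@home.

```cpp
#include <iostream>

// Amdahl's law: overall speedup when a fraction p of the runtime is
// accelerated by a factor s, and the remaining (1 - p) is unchanged.
double amdahlSpeedup(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

int main() {
    // Assume 80% of the runtime maps onto the GPU and runs 20x faster there.
    const double acceleratedFraction = 0.80;  // assumption for illustration
    const double kernelSpeedup = 20.0;        // the headline "20x" claim

    // (1 - 0.8) + 0.8 / 20 = 0.24, so the overall speedup is about 4.2x,
    // close to the ~4x figure mentioned above.
    std::cout << "Overall speedup: "
              << amdahlSpeedup(acceleratedFraction, kernelSpeedup) << "x\n";
    return 0;
}
```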