JmBoullier
Former Community Advisor
Normandy - France
Joined: Jan 26, 2007
Post Count: 3716
Status: Offline
Re: GPU computing?

ignoring the power they can give is insane.

Nobody is ignoring that here; it's just that WCG is not ready to use this technology as extensively as some members would like. But work will be done to see how it can reasonably be used here, as has already been said in many CUDA/GPU threads of this forum.

To put things in perspective, I recently did a search on another topic in this forum, and it went back to the early days of WCG (end of 2004). To my surprise, I found what might be the first "have you considered using the tremendous power of GPUs in WCG?" question. In 2004! Have you considered how much development, admin and support time would have been wasted by WCG staff and project scientists if they had wanted to be on the bleeding edge of the technology? Four years later, things are still in a pseudo-experimental state at a very few adventurous DC sites.

IBM certainly has several researchers in its labs working on this subject. But you should see WCG more as a production plant than as a research lab.

Cheers. Jean.
----------------------------------------
Team--> Decrypthon -->Statistics/Join -->Thread
[Jan 19, 2009 10:06:43 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: GPU computing?

Just adding a few cents' worth of future technology...

First, you're looking at GPU computing as specialized. It's not, thanks to the "standardization" of two (or three, if you count SGI) basic platforms. However, the real question is not "can we run a WCG work unit on a parallel architecture" but "can we EFFICIENTLY run a work unit" on such a platform. Some computations don't lend themselves to parallel processing (finite element analysis, for example); others, such as hydro/aero simulations, thrive on it.

It seems to me that a multiple-docking or molecular/atomic interaction algorithm can easily be modified to fit the parallel nature of a Cell processor, but you have to look at the right level in the code: the individual docking sequence, or the individual interaction sequence, so that each cell processes an entire sequence, comprising all of its steps, before returning to the pot for the next piece of work. Attempting to fragment at a lower level won't yield viable results, and breaking it at a higher level means lots of work units in parallel and long processing times (which may not be so bad either, and may be easier to implement).
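A minimal sketch of that coarse-grained split, using Python's multiprocessing pool as a stand-in for the Cell/GPU workers; run_docking_sequence and its step data are hypothetical placeholders, not WCG or GPUGRID code:

from multiprocessing import Pool

def run_docking_sequence(sequence):
    # Run every step of one docking sequence on a single worker before
    # asking the pool for the next piece of work.
    energy = 0.0
    for step in sequence:              # all steps of one sequence stay together
        energy += step["interaction"]  # placeholder for the real physics
    return energy

if __name__ == "__main__":
    # Each work unit is one whole docking sequence (a list of steps).
    work_units = [[{"interaction": 0.1}] * 1000 for _ in range(64)]
    with Pool() as pool:               # one worker per CPU core by default
        energies = pool.map(run_docking_sequence, work_units)
    print(min(energies))

Fragmenting below that level (one step per worker) is the case the post warns against, presumably because coordination overhead would swamp the useful work.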

The question should be expanded to include all parallel platforms, whether they use Cell processors (such as the new-generation IBM Linux boxes with hybrid compute cards) or the nVidia Tesla compute cards. I think we should stop calling it GPU processing and start calling it what it is: parallel processing. The difference from yesterday's parallel processing is that it's finally affordable. Not this year, maybe, and maybe not even next, depending on world economic conditions, but as soon as small businesses realize that they can model all sorts of fun things RIGHT AT THE ENGINEER'S DESK, there will be such demand that hybrid computation units will drop into the 100-400 USD range, and everyone will develop for them, so everyone will want one. You're right, Jean, it will be two years before it's mainstream. The chicken-and-egg rule applies: if no one supports it, why buy it? If no one buys it, why support it?

Where does that leave WCG? On the curve, or behind it? It's not up to us, the contributors. The question requires knowledge of the coding side of WCG that those of us who are mere contributors don't have access to.

It would be too bad for these researchers and these projects, though, if all that computing power went somewhere else, when there are volunteers here who want to do more. Trust me, I know just how much apathy is out there, and having an enthusiastic fan base is not an asset to ignore for too long.

Cheers
Fred
Team Captain, Homebrewers

(in my 'real' job, Principal Software Systems Architect, FLSmidth Krebs)
[Jan 20, 2009 10:16:57 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: GPU computing?

The difference from yesterday's parallel processing is that it's finally affordable. Not this year, maybe, and maybe not even next, depending on world economic conditions, <snip>


Apologies for quoting myself. However, I have run the numbers, and right now, this week, I could build a supercomputer cluster that would give the fastest machines in the world a solid run for their spot on the Top 500 list ( www.top500.org ) for under $20,000 US.

OK, it won't break into the top 400, but anything in the 450th-500th spot range would find itself challenged. That's IN THE WORLD, and the machine would fit nicely under an average-sized desk without needing special rooms, power, or cooling.

My prediction: by next fall, that same machine won't cost more than $10K US, and the desktop engineering "supercomputer" will retail for under $2,500. The catch: there will only be demand if there's useful software. Right now, major engineering simulation software packages are being adapted, and the top ones already work (Matlab, for instance).

FB
[Jan 21, 2009 1:25:53 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: GPU computing?

For CUDA computing, read Paul's GTX 295 Adventure at http://www.gpugrid.net/forum_thread.php?id=675
I had not realized that BOINC 6.4.5 was the earliest version that works.
[Jan 30, 2009 4:00:35 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: GPU computing?

Well, 6.4.5 is not the earliest working version. It is the most recent stable GPU client. And an improved stable client will make an appearance shortly.

BOINC GPU crunching is still in its infancy, and progress is being made on a stable client daily (it seems... ;) ). But there are also non-BOINC projects expanding the use and compatibility of GPUs (SETI, dnet and Einstein). And of course there is Folding.
----------------------------------------
[Edit 2 times, last edit by Former Member at Feb 25, 2009 6:12:46 PM]
[Feb 25, 2009 6:05:34 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: GPU computing?

Where, by "stable", you mean "recommended by Berkeley".

Here, we have more stringent requirements for stability. Having bugfix after bugfix is not "stable". And don't forget - if the developers hadn't virtually abandoned BOINC 6.4 in favour of working on BOINC 6.6, we would have seen more bugfixes for the 6.4 line.

This is how the WCG testing process works: http://wcg.wikia.com/wiki/BOINC_beta_testing
[Feb 25, 2009 6:15:19 PM]
Sekerob
Ace Cruncher
Joined: Jul 24, 2005
Post Count: 20043
Status: Offline
Re: GPU computing?

How is the GPU crunch time allocated/attributed? Does it count GPU time, the fraction of CPU time used, or both?
----------------------------------------
WCG Global & Research > Make Proposal Help: Start Here!
Please help to make the Forums an enjoyable experience for All!
[Feb 27, 2009 12:05:41 PM]
Dieter Matuschek
Advanced Cruncher
Germany
Joined: Aug 13, 2005
Post Count: 142
Status: Offline
Re: GPU computing?

How is the GPU crunch time allocated/attributed? Does it count GPU time, the fraction of CPU time used, or both?

From the GPUGRID project developer, in his post http://www.gpugrid.net/forum_thread.php?id=219:

The way we assign credits takes into account these facts.
First of all, we need to measure the floating-point performance of the application. We have built a performance model of our applications (CELLMD and ACEMD) by manually counting the number of flops per step. For a specific WU, we are able to compute how many floating-point operations are performed in total, on average, depending on the number of atoms, the number of steps and so on. For CELLMD it was also possible to verify that the estimated flops were within a few percent of the real value (multiplication, addition, subtraction, division and reciprocal square root are each counted as a single floating-point operation). In the case of the GPU, we can also use interpolating texture units instead of computing some expensive expression; in this case, as the CPU does not have anything similar, we use the number of floats of the equivalent expression. It is not easy to measure the number of integer operations, so we estimate the MIPS to be 2 times the number of floating-point operations (really, we reckon that it would be correct to assign up to a factor of 3, as in the example above). Therefore,

Credits = 0.5 × (MFLOP per WU + approx. MIPS per WU) / 864,000
(MFLOP = millions of floating-point operations)
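A quick worked example of that formula (the WU size below is invented purely for illustration; the two-times MIPS estimate follows the quoted post):

def gpugrid_credits(mflop_per_wu, mips_per_wu=None):
    # Credits = 0.5 * (MFLOP per WU + approx. MIPS per WU) / 864,000
    if mips_per_wu is None:
        # The quoted post estimates integer work at roughly twice the flop count.
        mips_per_wu = 2 * mflop_per_wu
    return 0.5 * (mflop_per_wu + mips_per_wu) / 864_000

# A hypothetical WU totalling 3.0e9 MFLOP would earn
# 0.5 * (3.0e9 + 6.0e9) / 864,000, i.e. about 5,208 credits.
print(round(gpugrid_credits(3.0e9)))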

----------------------------------------

Ask not what the world can do for you - ask what you can do for the world.
[Feb 28, 2009 9:01:18 AM]
Thomas515
Cruncher
Joined: Aug 7, 2006
Post Count: 22
Status: Offline
Re: GPU computing?

I got 2,450 points (BOINC) for 50 hours of GPUGRID on an 8600 GT; that is one WU for the project. The deadline is 4 days, which for me is normal to short. The time counted in BOINC is only the CPU time: a few minutes at the start and then about 3%. The "time remaining" estimate works poorly, and sometimes the system slows down. Not perfect at the moment.

Sorry for my English, and greetings from Germany.

Thomas
[Feb 28, 2009 11:43:36 AM]
Dieter Matuschek
Advanced Cruncher
Germany
Joined: Aug 13, 2005
Post Count: 142
Status: Offline
Re: GPU computing?

I got 2,450 points (BOINC) for 50 hours of GPUGRID on an 8600 GT; that is one WU for the project.
In addition: with a GTX 295 video card you get double those points in five hours. :-)
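For scale, the throughput gap implied by those two reports is roughly a factor of twenty; a back-of-the-envelope check (the only inputs are the figures quoted above):

pts_per_hour_8600gt = 2450 / 50      # about 49 points/hour on the 8600 GT
pts_per_hour_gtx295 = 2 * 2450 / 5   # about 980 points/hour on the GTX 295
print(pts_per_hour_gtx295 / pts_per_hour_8600gt)  # about 20x the throughput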
----------------------------------------

Ask not what the world can do for you - ask what you can do for the world.
----------------------------------------
[Edit 1 times, last edit by Dieter Matuschek at Feb 28, 2009 12:19:55 PM]
[Feb 28, 2009 12:18:55 PM]