Former Member (Cruncher)
Re: BOINC GPU processing?

Hi Movieman,
Actually, there aren't all that many CUDA-capable GPUs out there. So any decent-sized project is likely to last more than a year while gathering attention and publicity from a new group of users. And if we could get a project that ran on a console - wow!

We are definitely not discouraging project scientists from using these new technologies. But it is difficult to find somebody with a good science problem who also wants to take on the computer hassle. Most just want to adapt a current program that runs on Linux in their department to run on a big grid. For that matter, a small number of our projects use the same underlying program code. This happens when a project attracts attention among researchers who want to do the same thing on a different problem.

Lawrence
[Oct 17, 2008 8:01:15 AM]
JmBoullier (Former Community Advisor, Normandy - France)
Re: BOINC GPU processing?

Hi Movieman!
The idea in Sekerob's post is not to make projects artificially last 2 years; it is that WCG needs projects that last long enough even once they are GPU-enabled. So if/when WCG is able to give work to GPUs, and if there are many GPU clients in our community, projects will really have to be that big.

Given the time it takes from the first project submission to the actual launch date (your 3-4 months are far from reality) and WCG's wish to offer a reasonable choice of projects to members, it is neither affordable nor realistic to have to launch 60 (5 x 12) projects per year.

Cheers. Jean.
[Oct 17, 2008 12:49:37 PM]
Movieman (Veteran Cruncher)
Re: BOINC GPU processing?

Lawrence and Jean,
Thanks for the info; it puts a new light on the subject.
I do, however, think you'd be surprised at how many CUDA-enabled GPUs are out there. Just looking at what our XS FAH team has up and running is scary: guys running quads with two Nvidia 9800 GX2s or GTX 280s in SLI. Monster machines in FAH.
We also have quite a few WCG guys with those cards who run WCG on their CPUs and FAH on the GPU at the same time.
Thanks again for the info.
[Oct 17, 2008 6:51:17 PM]
JmBoullier (Former Community Advisor, Normandy - France)
Re: BOINC GPU processing?

When I say "if there are many GPU clients in our community, projects will have to be really that big", that does not mean I believe there would be only a handful.

I mean that if 11% of the processors crunching for WCG are GPUs, and if they are 10 times faster than normal processors, then the minimum size for a project to be acceptable to WCG is twice what it is today. E.g. a project which would last 6 months today would no longer be big enough; the entry point would become projects lasting one year in today's environment.

If GPUs are 100 times faster, as I sometimes read, then we have the same problem with only 1% GPUs in the picture, and we need projects 11 times bigger when GPUs reach 10% of the total!
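
For anyone who wants to check the arithmetic, here is a back-of-the-envelope sketch in Python (my own illustration with hypothetical numbers, not any official WCG planning model). The multiplier is how much faster the grid empties its queue, hence how much bigger a project must be to last as long as it does today:

    # Relative grid throughput when a fraction of the processors are GPUs.
    # Pure illustration of the arithmetic above, with hypothetical numbers.
    def throughput_multiplier(gpu_fraction, gpu_speedup):
        # CPUs contribute their share at speed 1, GPUs at gpu_speedup.
        return (1 - gpu_fraction) + gpu_fraction * gpu_speedup

    print(throughput_multiplier(0.11, 10))   # ~2.0  -> projects must be twice as big
    print(throughput_multiplier(0.01, 100))  # ~2.0  -> same problem with only 1% GPUs
    print(throughput_multiplier(0.10, 100))  # ~10.9 -> projects ~11 times bigger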

As you can see, GPUs, which are very interesting for endless projects (put the names you prefer here :-) ), can quickly make it difficult for WCG to find big enough projects to feed the grid.

That does not mean it should not be done, but it is not as simple or as wonderful as it looks at first. Jean.
[Oct 18, 2008 1:16:46 AM]
Eric-Montreal (Cruncher, Canada)
Re: BOINC GPU processing?

> Oh, I wish that were true.

Oh, I wish you would give a real and honest answer, but in all threads related to GPU use you've made your point clear: you consider it all hype and no substance. A few months ago that was debatable. Today, GPUs in grid computing are used by F@H on a large scale and have proved reliable, scalable, and effective. Lack of double-precision arithmetic was one of your best arguments; sorry to rain on your parade, but that one is gone with the new chips.
> However, converting a program to be massively parallel is not easy,

Did I claim it was easy?

Even without using GPUs, the need for some parallelism in the WCG applications is clear. In the current situation, when a quad core executes 4 work units at the same time, it uses 4 times the memory. With 6- and 8-core CPUs just around the corner, the problem will only get worse. Using all the available cores to solve one work unit faster would bring the memory requirement back to a more reasonable level and make better use of the shared L3 CPU cache. Keeping a small, unobtrusive footprint is among the most important requirements for broad acceptance of grid processing.
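
To make the memory point concrete, here is a minimal sketch (a toy computation, not a real WCG application; the point is the footprint, not Python's speed):

    # Toy illustration: one multithreaded work unit shares a single copy of
    # its input data, whereas four independent work units each load their own.
    from concurrent.futures import ThreadPoolExecutor

    shared_input = list(range(1_000_000))  # loaded once, read by every thread

    def crunch(indices):
        # Threads read the shared input in place; no per-core copy is made.
        return sum(shared_input[i] * shared_input[i] for i in indices)

    cores = 4
    chunks = [range(i, len(shared_input), cores) for i in range(cores)]
    with ThreadPoolExecutor(max_workers=cores) as pool:
        result = sum(pool.map(crunch, chunks))

    # Four single-threaded work units would hold four copies of the input;
    # one four-threaded work unit holds just one.
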
> and is not always even possible.

Check the facts. Nvidia is already sponsoring an AutoDock adaptation for CUDA and reports a 12- to 20-times speedup:
http://www.nvidia.com/object/io_1209593316409.html
http://www.siliconinformatics.com/

Still believe it's impossible, or not worth the trouble? Just look at what F@H is doing!

When the 8087 appeared, some people quickly understood how important it was, while others made the same kind of criticism, claiming that a floating-point coprocessor was a waste of silicon and that a faster CPU with software FP routines would be better...
[Oct 18, 2008 5:11:37 AM]
Former Member (Cruncher)
Re: BOINC GPU processing?

Hello Eric-Montreal,
Good information about AutoDock on a GPU! I had not seen that, though I knew there had been some contact with Nvidia. Nothing has been said about an upcoming project, but I know that the staff are interested in BOINC 6's ability to run work on GPUs.

I doubt that a GPU project will appear in 2009, since nothing has been said about it yet, but maybe . . . 2010? At least I can hope.

Lawrence
[Oct 18, 2008 5:33:53 AM]
mreuter80 (Advanced Cruncher)
Re: BOINC GPU processing?

> ... GPUs in grid computing are used by F@H on a large scale ...

Not quite correct. F@H is still providing the GPU clients as beta versions (I also crunch there), and the team there is still looking into some bugs and optimizations for the various GPU clients.
Yes, you are correct, the GPU clients work great - especially the NVIDIA one - and the team there is very excited, but please keep in mind that F@H does not use the BOINC client.

> Did I claim it was easy?
>
> Even without using GPUs, the need for some parallelism in the WCG applications is clear. In the current situation, when a quad core executes 4 work units at the same time, it uses 4 times the memory. With 6- and 8-core CPUs just around the corner, the problem will only get worse. Using all the available cores to solve one work unit faster would bring the memory requirement back to a more reasonable level and make better use of the shared L3 CPU cache. Keeping a small, unobtrusive footprint is among the most important requirements for broad acceptance of grid processing.
>
> and is not always even possible.
>
> Check the facts. Nvidia is already sponsoring an AutoDock adaptation for CUDA and reports a 12- to 20-times speedup:
> http://www.nvidia.com/object/io_1209593316409.html
> http://www.siliconinformatics.com/

WCG does not only run AutoDock, and not everyone has an Nvidia card (I use ATI, for example). Anyway, the team at WCG are not the ones who do the programming; you could address that to the scientists. How about that?

> Still believe it's impossible, or not worth the trouble? Just look at what F@H is doing!

I'm sure a lot of people are watching what F@H is doing (not only WCG) - the opportunities are great, no doubt. But you are jumping the gun: just because one project (or maybe two) uses the GPU, and only in beta, doesn't mean other projects must use it immediately as well. Please don't get me wrong, I would also like to see the WCG projects using the GPU, but give people some time to look into it.
If you want to help, and are able to, please contact the project teams.

Cheers,
[Oct 18, 2008 5:49:44 AM]
Former Member (Cruncher)
Re: BOINC GPU processing?

> Did I claim it was easy?

You certainly implied it, with your incorrect assertion that "The beauty of GPU programming being that the whole application does not need to be rewritten, only the usually tiny part where most processing time is spent."
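
And Amdahl's law puts a hard ceiling on the "only port the hot spot" approach; a quick sketch, with purely hypothetical fractions:

    # Amdahl's law: overall speedup is capped by the part left on the CPU,
    # no matter how fast the GPU runs the rest. Numbers are hypothetical.
    def amdahl_speedup(parallel_fraction, speedup):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / speedup)

    print(amdahl_speedup(0.95, 20))   # ~10.3x if 95% of the runtime moves to the GPU
    print(amdahl_speedup(0.80, 100))  # only ~4.8x when the "tiny part" is 80% of runtime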

I will remind you (and everyone in this thread) that World Community Grid do not write the science applications, nor do they have the resources to rewrite them completely even if they determined it was practical.

They are, however, looking for new projects that are positioned to take advantage of GPUs, and will be eager to partner with anyone adapting current science applications.

GPUs capable of this kind of processing remain expensive, rare, and the sole domain of hardcore gamers. This will change. I hope that World Community Grid will be ready for the next generation of average retail-grade GPUs, when such massively parallel technologies become ubiquitous.

Yes, the hype still irritates me. The strength of World Community Grid relies on the fact that it doesn't require specialised hardware. Anyone can contribute.
[Oct 18, 2008 5:49:44 AM]
Former Member (Cruncher)
Re: BOINC GPU processing?

Oh, this is priceless: "Multi-processing is a hard problem in computer science. It's been there for 30 years. It's not answered by software tools." That was Nvidia's Andy Keane, trying to downplay Larrabee last August. But a few months earlier, the very press release that Eric-Montreal pointed out talked of "new user-friendly programming languages that will allow developers to exploit parallelism automatically"...

What a joke. Andy Keane is right: parallelism is hard. Even great programmers struggle with it, and as for the hash made by lesser programmers - well, I have seen examples of multithreaded code that would make you weep. Programming tools can only go part of the way; the rest requires a different way of thinking.
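
If you want a taste, here is the classic lost-update race in miniature (a toy example, not from any real project; the sleep just widens the window so the bug fires every time):

    # Two threads both read the balance, then both write back a stale value:
    # one withdrawal silently disappears. Toy example of a lost-update race.
    import threading
    import time

    balance = 100

    def withdraw(amount):
        global balance
        current = balance            # read
        time.sleep(0.01)             # the other thread runs here
        balance = current - amount   # write back a stale value

    threads = [threading.Thread(target=withdraw, args=(30,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(balance)  # expected 40, but prints 70: one update was lost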

But, oh, the hypocrisy of Nvidia complaining about Intel's marketing!
[Oct 18, 2008 6:04:37 AM]