robertmiles
Senior Cruncher
US
Joined: Apr 16, 2008
Post Count: 445
Status: Offline
Re: What's with the laziness, why no GPU projects?!?

Robertmiles, you make an excellent point about using GPUGRID and Poem for the GPU work! I haven't used GPUGRID, but from my personal experience with Poem I can confidently say that it has been very reliable and plays well with other projects.


I agree. Both of my desktops now do most of their GPU work in Poem@Home, since GPUGRID's GPU requirements have moved beyond what my desktops can handle.
[Nov 22, 2015 3:08:08 PM]
branjo
Master Cruncher
Slovakia
Joined: Jun 29, 2012
Post Count: 1892
Status: Offline
Re: What's with the laziness, why no GPU projects?!?

nanoprobe wrote:
Come on, give it a break :-D Why so aggressive? :-P


Any fool who comes on here and disrespects the techs and all the hard work they do needs to be B slapped. If you can't be bothered to do even a little bit of research before making a post like that, then you should expect to get called out.


Thank you, nano, for voicing my thoughts as well.


----------------------------------------

Crunching@Home since January 13 2000. Shrubbing@Home since January 5 2006

[Nov 22, 2015 6:43:13 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: What's with the laziness, why no GPU projects?!?

I also found this Udacity course, which covers just CUDA. It's free, though:
https://www.udacity.com/course/intro-to-parallel-programming--cs344
[Jan 4, 2016 12:17:37 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: What's with the laziness, why no GPU projects?!?

Looks like the Coursera 'Heterogeneous Parallel Programming' course is starting up on 1/12.
https://www.coursera.org/course/hetero

Mostly CUDA, some OpenCL.
The certificate period is closed, but the course is still open.
[Jan 4, 2016 1:48:43 PM]
robertmiles
Senior Cruncher
US
Joined: Apr 16, 2008
Post Count: 445
Status: Offline
Re: What's with the laziness, why no GPU projects?!?

I also found this Udacity course, which covers just CUDA. It's free, though:
https://www.udacity.com/course/intro-to-parallel-programming--cs344


Looks like a course I tried taking, and found that it required software that wouldn't install on my computer.
[Jan 5, 2016 12:26:48 AM]
robertmiles
Senior Cruncher
US
Joined: Apr 16, 2008
Post Count: 445
Status: Offline
Re: What's with the laziness, why no GPU projects?!?

Looks like the Coursera 'Heterogeneous Parallel Programming' course is starting up on 1/12.
https://www.coursera.org/course/hetero

Mostly CUDA, some OpenCL.
The certificate period is closed, but the course is still open.


Looks like this year's version of a course I completed last year. It leaves out the details of how to get started on your own computer, but it offers a website that will run simple CUDA and OpenCL programs, and a CUDA emulator that will run under Cygwin (if you recompile it) and probably under some flavors of Linux as well.

Altera offers some OpenCL classes, but these appear to be targeted at using OpenCL only on Altera's own products, which do not include GPUs.

Doulos looks worth watching to see if they will offer any suitable courses.
----------------------------------------
[Edit 2 times, last edit by robertmiles at Jan 5, 2016 12:40:50 AM]
[Jan 5, 2016 12:32:11 AM]
Composer
Cruncher
Joined: May 28, 2014
Post Count: 29
Status: Offline
Re: What's with the laziness, why no GPU projects?!?

Because none of the current projects have the resources to develop a GPU app (CEP2, I think), and some just aren't suitable (FightAIDS@Home). No idea about MCM.

As branjo pointed out, there has been lots and lots of discussion about why there are no GPU projects at WCG.



So are you saying that CEP2 has the potential to run on a GPU, and they just don't have the resources to modify the code for it? I know that none of the projects are currently running on GPUs, and that there is no immediate indication that this will change, but I'm curious whether CEP2 uses an algorithm that is highly parallel. I'm sure the Harvard team already knows about CUDA, but from what I have seen, it looks like a relatively simple modification to run certain things in parallel. For example, instead of having a single for loop that goes through 10,000 iterations for 10,000 different values that do not depend on each other, a GPU with 10,000 CUDA cores could run 10,000 if statements simultaneously, or if it only had 1,000 cores, it could run the same for loop with 10 iterations instead of 10,000.
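
A rough sketch of that pattern, purely for illustration (the kernel body, names, and sizes below are invented and have nothing to do with the actual CEP2 code): each independent iteration becomes one GPU thread, and a grid-stride loop lets the same kernel cover all 10,000 elements whether the card provides 1,000 threads or 10,000.

#include <cuda_runtime.h>

// Placeholder for whatever per-iteration computation the science app would do;
// the only requirement is that iteration i does not depend on any other.
__global__ void independent_iterations(const float* in, float* out, int n)
{
    // Grid-stride loop: thread k handles elements k, k + totalThreads,
    // k + 2*totalThreads, ... so the kernel works no matter how many
    // threads the launch actually provides.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += gridDim.x * blockDim.x)
    {
        out[i] = in[i] * in[i] + 1.0f;   // stand-in for the real science
    }
}

int main()
{
    const int n = 10000;                 // the "10,000 iterations" above
    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));

    // 256 threads per block, enough blocks to cover all n elements.
    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    independent_iterations<<<blocks, threads>>>(d_in, d_out, n);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

The <<<blocks, threads>>> launch configuration decides how many of those iterations actually run at the same time; the loop body itself does not change.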
[Oct 22, 2016 6:38:26 PM]
robertmiles
Senior Cruncher
US
Joined: Apr 16, 2008
Post Count: 445
Status: Offline
Re: What's with the laziness, why no GPU projects?!?

Because none of the current projects have the resources to develop a GPU app (CEP2, I think), and some just aren't suitable (FightAIDS@Home). No idea about MCM.

As branjo pointed out, there has been lots and lots of discussion about why there are no GPU projects at WCG.


So are you saying that CEP2 has the potential to run on a GPU, and they just don't have the resources to modify the code for it? I know that none of the projects are currently running on GPUs, and that there is no immediate indication that this will change, but I'm curious whether CEP2 uses an algorithm that is highly parallel. I'm sure the Harvard team already knows about CUDA, but from what I have seen, it looks like a relatively simple modification to run certain things in parallel. For example, instead of having a single for loop that goes through 10,000 iterations for 10,000 different values that do not depend on each other, a GPU with 10,000 CUDA cores could run 10,000 if statements simultaneously, or if it only had 1,000 cores, it could run the same for loop with 10 iterations instead of 10,000.

I've studied CUDA enough that I might help them do it (for running under Windows only); however, I cannot do a corresponding OpenCL version yet.

One thing about the 10,000 if statements - if they're part of an if-then-else, there are strong restrictions on when the then branch and the else branch can run simultaneously.
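
A small illustration of that restriction (a sketch only, not code from any project): CUDA hardware runs threads in groups of 32 called warps, and when threads of the same warp take different sides of an if/else, the hardware serializes the two branches instead of running them at the same time.

#include <cuda_runtime.h>

// Threads whose condition is true run the "then" branch while the others in
// their warp sit idle; then the roles reverse for the "else" branch. Across
// different warps the branches can still overlap, but within one warp they
// cannot run simultaneously.
__global__ void divergent_branch(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] > 0.0f) {
        out[i] = in[i] * 2.0f;   // "then" branch
    } else {
        out[i] = -in[i];         // "else" branch, executed separately from the
                                 // "then" branch within the same warp
    }
}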

The last I read, the maximum number of cores you can currently find in a GPU was between 3000 and 4000.

Another restriction on how many cores can run at once - you must have enough graphics memory to handle all the threads currently trying to run, or many of them will fail to allocate the memory they need.
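
A host-side sketch of that memory check (the 64 KB per-thread figure and the 256 MB headroom are assumptions made up for illustration; a real science kernel could need far more or far less): query how much graphics memory is free and cap how many threads you try to give private scratch buffers.

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    // Assumed per-thread scratch requirement, purely for illustration.
    const size_t bytes_per_thread = 64 * 1024;        // 64 KB

    size_t free_bytes = 0, total_bytes = 0;
    cudaMemGetInfo(&free_bytes, &total_bytes);        // what's free right now

    // Leave some headroom for the display and driver (another assumption).
    const size_t headroom = 256ull << 20;             // 256 MB
    size_t usable = (free_bytes > headroom) ? free_bytes - headroom : 0;

    // The most threads we can give a private scratch buffer to at once;
    // launching more than this means some threads cannot get the memory they need.
    size_t max_threads = usable / bytes_per_thread;

    printf("%zu MB free of %zu MB -> room for about %zu threads\n",
           free_bytes >> 20, total_bytes >> 20, max_threads);
    return 0;
}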
----------------------------------------
[Edit 3 times, last edit by robertmiles at Oct 23, 2016 2:39:39 AM]
[Oct 23, 2016 2:25:27 AM]
KLiK
Master Cruncher
Croatia
Joined: Nov 13, 2006
Post Count: 3108
Status: Offline
Re: What's with the laziness, why no GPU projects?!?

Because none of the current projects have the resources to develop a GPU app (CEP2, I think), and some just aren't suitable (FightAIDS@Home). No idea about MCM.

As branjo pointed out, there has been lots and lots of discussion about why there are no GPU projects at WCG.


So are you saying that CEP2 has the potential to run on a GPU, and they just don't have the resources to modify the code for it? I know that none of the projects are currently running on GPUs, and that there is no immediate indication that this will change, but I'm curious whether CEP2 uses an algorithm that is highly parallel. I'm sure the Harvard team already knows about CUDA, but from what I have seen, it looks like a relatively simple modification to run certain things in parallel. For example, instead of having a single for loop that goes through 10,000 iterations for 10,000 different values that do not depend on each other, a GPU with 10,000 CUDA cores could run 10,000 if statements simultaneously, or if it only had 1,000 cores, it could run the same for loop with 10 iterations instead of 10,000.

I've studied CUDA enough that I might help them do it (for running under Windows only); however, I cannot do a corresponding OpenCL version yet.

One thing about the 10,000 if statements - if they're part of an if-then-else, there are strong restrictions on when the then branch and the else branch can run simultaneously.

The last I read, the maximum number of cores you can currently find in a GPU was between 3000 and 4000.

Another restriction on how many cores can run at once - you must have enough graphics memory to handle all the threads currently trying to run, or many of them will fail to allocate the memory they need.

Now, maybe WCG can pass your details on to the scientists...so you can also make some suggestions and help port the engine to CUDA!

About GPUs & cores:
1. There's no problem with the number of cores used...you can port the science to use anywhere from a minimum of 8/16 up to (for example) 1024 cores in the engine...if there are extra cores left over, that means graphics will still work great on a PC with the science running!
2. Every engine can have a memory limit of 256/512 MB (for example)...if there's more memory available, another WU can be run! I've been running SETI@home with CUDA32 to CUDA50 engines in a dual-WU config on my machines, which are not that powerful! ;-)

Hope these architectural ideas come in handy...
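
Point 2 above is basically a memory-budget decision. In BOINC the dual-WU setup is actually done through client configuration, not application code, but within a single program the same idea can be sketched with CUDA streams (the 512 MB cap and the kernel below are illustrative assumptions): check free graphics memory and start a second batch only if both fit.

#include <cuda_runtime.h>
#include <cstdio>

// Stand-in kernel for one work unit's worth of computation (illustrative).
__global__ void do_work(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 0.5f + 1.0f;
}

int main()
{
    const size_t per_wu_bytes = 512ull << 20;   // assumed 512 MB cap per WU
    size_t free_bytes = 0, total_bytes = 0;
    cudaMemGetInfo(&free_bytes, &total_bytes);

    // Start a second batch only if the card has room for both at once.
    const int batches = (free_bytes >= 2 * per_wu_bytes) ? 2 : 1;

    const int n = 1 << 20;
    float* d_data[2] = {nullptr, nullptr};
    cudaStream_t streams[2];

    for (int b = 0; b < batches; ++b) {
        cudaStreamCreate(&streams[b]);
        cudaMalloc(&d_data[b], n * sizeof(float));
        // Each batch gets its own stream, so the two can overlap on the GPU.
        do_work<<<(n + 255) / 256, 256, 0, streams[b]>>>(d_data[b], n);
    }

    cudaDeviceSynchronize();
    for (int b = 0; b < batches; ++b) {
        cudaFree(d_data[b]);
        cudaStreamDestroy(streams[b]);
    }
    printf("ran %d batch(es) based on %zu MB free\n", batches, free_bytes >> 20);
    return 0;
}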
----------------------------------------
oldies:UDgrid.org & PS3 Life@home


non-profit org. Play4Life in Zagreb, Croatia
[Oct 24, 2016 7:43:51 AM]
robertmiles
Senior Cruncher
US
Joined: Apr 16, 2008
Post Count: 445
Status: Offline
Re: What's with the laziness, why no GPU projects?!?

[snip]

Now, maybe WCG can pass your details on to the scientists...so you can also make some suggestions and help port the engine to CUDA!

About GPUs & cores:
1. There's no problem with the number of cores used...you can port the science to use anywhere from a minimum of 8/16 up to (for example) 1024 cores in the engine...if there are extra cores left over, that means graphics will still work great on a PC with the science running!
2. Every engine can have a memory limit of 256/512 MB (for example)...if there's more memory available, another WU can be run! I've been running SETI@home with CUDA32 to CUDA50 engines in a dual-WU config on my machines, which are not that powerful! ;-)

Hope these architectural ideas come in handy...

1. For me, ONLY if I'm familiar enough with the method used to access multiple CPU cores to translate it into using multiple GPU cores (see the sketch below for the kind of pattern that does translate easily). I've already had to turn down one such conversion because it used a multi-threading method I was not familiar with.

2. For which BOINC project? The one I've found enough information about to give an estimate for is Rosetta@Home, which typically requires 600 MB per workunit.
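
As mentioned above, here is the kind of CPU multi-threading pattern that does translate straightforwardly (everything below is illustrative; code built around a less conventional threading method is exactly the case that is hard to convert): on the CPU each thread takes a slice of the indices, while on the GPU the identical per-index body is handed one index per thread.

#include <cuda_runtime.h>
#include <algorithm>
#include <thread>
#include <vector>

// The per-index work, identical for both versions (illustrative placeholder).
__host__ __device__ inline float body(float x) { return x * x + 1.0f; }

// CPU version: each std::thread processes a contiguous slice of the array.
void cpu_slice(float* data, int begin, int end)
{
    for (int i = begin; i < end; ++i)
        data[i] = body(data[i]);
}

void run_on_cpu(float* data, int n, int n_threads)
{
    std::vector<std::thread> pool;
    const int chunk = (n + n_threads - 1) / n_threads;
    for (int t = 0; t < n_threads; ++t) {
        const int begin = t * chunk;
        const int end   = std::min(n, begin + chunk);
        if (begin < end)
            pool.emplace_back(cpu_slice, data, begin, end);
    }
    for (auto& th : pool) th.join();
}

// GPU translation: the body is unchanged; only the way indices are handed out
// differs (one element per GPU thread instead of one slice per CPU thread).
__global__ void run_on_gpu(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = body(data[i]);
}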
[Oct 25, 2016 2:42:37 AM]