Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Any way of using our GPU's?

I've noticed that each work unit takes upwards of 14 hours for a single core to compute. Is there any way a GPU application could be developed to cut this time down significantly?
[Dec 27, 2015 3:48:17 PM]
Falconet
Master Cruncher
Portugal
Joined: Mar 9, 2009
Post Count: 3294
Re: Any way of using our GPU's?

GPU app development is up to the scientists. Some projects simply aren't suited to GPUs, some scientists simply don't have the time or resources to develop a GPU app, etc.

Right now, as you noticed, there are no GPU apps. So far on WCG there has only been one GPU app, which ran during 2013 IIRC, for the finished Help Conquer Cancer project.

Again, it's up to the scientists, who for a variety of reasons may not be able to do it.
----------------------------------------


AMD Ryzen 5 1600AF 6C/12T 3.2 GHz - 85W
AMD Ryzen 5 2500U 4C/8T 2.0 GHz - 28W
AMD Ryzen 7 7730U 8C/16T 3.0 GHz
[Dec 27, 2015 8:20:57 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Re: Any way of using our GPU's?

Short answer: no.
Long answer: FAAH uses AutoDock Vina, which is not GPU-capable at all, so if the scientists wanted to use GPUs they would need to switch to a totally different set of software. Also, you need to realize that the project only uses a single core per task: if you have 16 physical cores, you run 16 WUs at once. Processing time per WU could be reduced by allowing more cores to work on a single task, but there is a drawback: the scaling is not linear. If a WU takes 16 hours on a single core, using 16 cores won't cut that down to an hour, so running 16 WUs concurrently is actually better for throughput than running a single WU across 16 cores.
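To make the throughput point concrete, here is a minimal sketch. The 90% parallel fraction is a made-up, illustrative number (nothing FAAH has published); the point is only that Amdahl's law caps the speedup from splitting one WU, while independent WUs scale with the core count.

# Minimal sketch (not FAAH/WCG code). The 90% parallel fraction is assumed;
# it shows why 16 concurrent single-core WUs beat one WU split across 16 cores.

def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only part of a task parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

cores = 16
hours_per_wu = 16.0   # single-core runtime quoted above
p = 0.90              # assumed parallel fraction (illustrative only)

# One WU spread over all cores: limited by Amdahl's law.
one_wu_hours = hours_per_wu / amdahl_speedup(p, cores)
split_throughput = 1.0 / one_wu_hours          # WUs finished per hour

# Sixteen independent WUs, one per core: each still takes 16 h, but 16 finish together.
concurrent_throughput = cores / hours_per_wu   # WUs finished per hour

print(f"1 WU on {cores} cores: done in {one_wu_hours:.1f} h, {split_throughput:.2f} WU/h")
print(f"{cores} WUs, one per core: {concurrent_throughput:.2f} WU/h")

With those assumed numbers, splitting one WU across 16 cores finishes it in about 2.5 hours but yields only 0.4 WU per hour overall, while 16 concurrent single-core WUs deliver 1.0 WU per hour.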

While GPUs are powerful, they are only powerful at certain tasks. One reason they are so powerful is that they have many cores: a single GPU core may not be as powerful as a single CPU core, but there are hundreds or thousands of them. There are other tasks that a CPU handles far better. Compare the Xeon Phi to a GPU: the GFLOPS figures are not all that different, especially if you factor in the logical core count. The GPU has over 10 times the number of logical cores yet achieves only about a third more performance, so the individual Xeon Phi cores are more powerful; the Phi just loses out to the sheer number of cores on the GPU side. For this project, don't expect to see GPU support. It is too far along and would take a lot of beta testing. The scientists for FAAH Phase 2 are using what they learned and used in FAAH Phase 1: they used AutoDock on that project and switched to AutoDock Vina partway through, which sped the project up.
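A quick back-of-the-envelope for that per-core comparison; the spec figures below are approximate, era-appropriate numbers I am assuming (roughly Xeon Phi 7120-class versus Tesla K40-class), not values taken from the post or from datasheets.

# Back-of-the-envelope only; the spec numbers below are approximate assumptions,
# not vendor figures, used to compare per-core throughput.

devices = {
    "Xeon Phi, 61 cores / 244 threads": {"gflops": 1200.0, "logical_cores": 244},
    "GPU, 2880 CUDA cores":             {"gflops": 1430.0, "logical_cores": 2880},
}

for name, spec in devices.items():
    per_core = spec["gflops"] / spec["logical_cores"]
    print(f"{name}: {spec['gflops']:.0f} GFLOPS total, {per_core:.2f} GFLOPS per logical core")

That works out to roughly 5 GFLOPS per logical core for the Phi versus about 0.5 for the GPU: each Phi core is around ten times stronger, and the GPU wins only on count.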
[Dec 28, 2015 2:04:06 AM]
KLiK
Master Cruncher
Croatia
Joined: Nov 13, 2006
Post Count: 3108
Re: Any way of using our GPU's?

(Quoting the previous reply:) "Short answer: no. Long answer: FAAH uses AutoDock Vina, which is not GPU-capable at all [...] running 16 WUs concurrently is actually better for throughput than running a single WU across 16 cores."

FAAH is closed for business!

And FAHB is on BEDAM: https://secure.worldcommunitygrid.org/research/fahb/overview.do

So getting BEDAM to run on a GPU is, again, up to the scientists! But we all have to think about two things (a rough numeric sketch follows below):
- If GPU porting takes more than a year, then porting the science to it is questionable. Why? There is not much to gain once a year of collective CPU time has already been spent.
- If GPU porting takes less than about three months, then it is viable to use it from the beginning of the project, so it can be put to full use until the estimated completion date (ECD).
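To put rough numbers on that rule of thumb, here is a minimal break-even sketch. The project length, GPU speedup, and porting times are all assumed values for illustration, not anything published for BEDAM or FAHB.

# Illustrative break-even sketch for "is a GPU port worth it?". All numbers
# (project length, GPU speedup, porting times) are assumptions. CPU-only
# throughput is normalized to 1 unit of science per month.

def total_output(project_months, porting_months, gpu_speedup):
    """Science produced if crunching stays on CPU while the port is written,
    then switches to GPU for the rest of the project."""
    cpu_phase = min(porting_months, project_months)
    gpu_phase = max(project_months - porting_months, 0)
    return cpu_phase * 1.0 + gpu_phase * gpu_speedup

project_months = 24   # assumed time to the project's ECD
gpu_speedup = 5.0     # assumed GPU-vs-CPU throughput ratio

cpu_only = project_months * 1.0
for porting_months in (3, 12, 20):
    ratio = total_output(project_months, porting_months, gpu_speedup) / cpu_only
    print(f"port ready after {porting_months:2d} months: {ratio:.1f}x the CPU-only output")

With those made-up numbers a three-month port delivers about 4.5 times the CPU-only output, while a port that only lands 20 months in barely reaches 1.7 times, which is the gist of the one-year versus three-month argument.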
----------------------------------------
oldies:UDgrid.org & PS3 Life@home


non-profit org. Play4Life in Zagreb, Croatia
[Dec 28, 2015 7:50:31 AM]
vlado101
Senior Cruncher
Joined: Jul 23, 2013
Post Count: 226
Re: Any way of using our GPU's?

If there were a project that used the GPU, would I have to give up one of my CPU cores? I am using a laptop and it gets a lot hotter when the GPU is running. I tried using it for other projects like GPUGRID or MilkyWay/SETI, but each time it seemed to take part of the processing power away from WCG and use it for the GPU work unit it was running.

Do any of you have good settings for GPU processing on a laptop? Unfortunately mine is not top of the line; I only have an NVIDIA NVS 5400.
----------------------------------------

[Dec 28, 2015 3:06:47 PM]
supdood
Senior Cruncher
USA
Joined: Aug 6, 2015
Post Count: 333
Re: Any way of using our GPU's?

vlado101 -- I've used a laptop GPU (NVIDIA GT 555M) over at Einstein@Home a little bit and have had success with the following settings:
- no CPU tasks running while the GPU is in use
- limit CPU time to ~70% (keeps the CPU from boosting and keeps it cooler overall, though it limits GPU performance somewhat)
- use a dual-fan cooling tray and prop up the front of the laptop for added airflow (the laptop's fan is on the side)
- only do this in the winter (no AC)

With these settings I've kept the GPU, CPU, and board relatively cool. I've hesitated to push this harder because I haven't been able to find a replacement heatsink and fan in case the current one goes. I'd rather be slow and cool on a laptop than need to replace components.

Hope this helps you out on other projects that use a GPU.
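For anyone who wants to apply the ~70% CPU-time limit from the list above through BOINC itself, here is a small sketch. <cpu_usage_limit> is the standard BOINC global-preference tag for "use at most X% of CPU time" as far as I know, but the data-directory path below is an assumption that varies by OS, and suspending CPU projects while the GPU runs still has to be done in the BOINC Manager.

# Sketch only: write a BOINC global_prefs_override.xml applying the ~70%
# CPU-time limit mentioned above. The path is an assumed Linux default;
# adjust it for your installation.

from pathlib import Path

override = """<global_preferences>
   <cpu_usage_limit>70</cpu_usage_limit>
</global_preferences>
"""

boinc_data_dir = Path("/var/lib/boinc-client")   # assumed BOINC data directory
(boinc_data_dir / "global_prefs_override.xml").write_text(override)
print("Override written; have the BOINC client re-read its preferences to apply it.")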
----------------------------------------
Crunch with BOINC team USA
www.boincusa.com

[Dec 28, 2015 4:23:03 PM]
vlado101
Senior Cruncher
Joined: Jul 23, 2013
Post Count: 226
Re: Any way of using our GPU's?

Hi supdood, thank you for the suggestion. When you say no CPU tasks running, you mean that you suspend them while doing work with the GPU, correct? Darn, then I guess this means I would have to decide whether I want to use this laptop for WCG or for a GPU project. I am guessing I will stick with WCG. What I am worried about is when WCG actually does get a GPU project; I am guessing I will have to download a batch of GPU work and run it for one day, and then use only CPU tasks another day.
----------------------------------------

[Dec 28, 2015 6:08:14 PM]
Jim1348
Veteran Cruncher
USA
Joined: Jul 13, 2009
Post Count: 1066
Re: Any way of using our GPU's?

There was a glimmer of hope earlier about BEDAM and GPUs:
http://www.compmolbiophysbc.org/project-updat...enmmagbnpbedammini-summit
But the winter is still dark.
[Dec 28, 2015 6:37:37 PM]
KLiK
Master Cruncher
Croatia
Joined: Nov 13, 2006
Post Count: 3108
Re: Any way of using our GPU's?

(Quoting vlado101:) "If there were a project that used the GPU, would I have to give up one of my CPU cores? [...] Do any of you have good settings for GPU processing on a laptop? Unfortunately mine is not top of the line; I only have an NVIDIA NVS 5400."

Actually, there are two things that help:
1. Know how the CPU, GPU, and heat pipe are arranged inside your machine. What does that have to do with it? A heat pipe has a liquid inside, and from thermodynamics heat only flows from the hotter side to the cooler side, so you set your temperature limits to work with that layout.
Example: my T60 can't crunch on its GPU because of the X1300 on the motherboard, but if I play video on it the GPU heats up and TThrottle steps in and throttles the CPU. If I could run GPU work on it, I'd set the GPU limit to 90°C and the CPU limit to only 80°C, keeping the heat flowing toward the coolest area of the laptop, the fan.
2. Use a program such as TThrottle to manage the computer's temperature levels.
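As a generic illustration of what a tool like TThrottle is doing (a sketch of the general idea only, not TThrottle's actual algorithm), a controller samples the temperature and nudges the CPU duty cycle to hold it under the limit:

# Generic sketch of temperature-based throttling: back off the CPU duty cycle
# when the measured temperature exceeds the limit, raise it again when there
# is headroom.

def adjust_duty_cycle(duty_pct, temp_c, limit_c, step_pct=5, floor_pct=20):
    """Return a new CPU duty-cycle percentage given the current temperature."""
    if temp_c > limit_c:
        return max(duty_pct - step_pct, floor_pct)   # too hot: back off
    if temp_c < limit_c - 5:
        return min(duty_pct + step_pct, 100)         # plenty of headroom: speed up
    return duty_pct                                  # close to the limit: hold

# Example run with an 80°C CPU limit (as in the 90/80 split above);
# the temperature readings are made up.
duty = 100
for temp in (70, 78, 83, 85, 82, 79, 76):
    duty = adjust_duty_cycle(duty, temp, limit_c=80)
    print(f"temp {temp} C -> duty cycle {duty}%")

Setting the GPU limit higher than the CPU limit, as in the 90°C / 80°C example above, just means the CPU gets reined in first when the shared heat pipe warms up.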

Edit:
Also, on my desktop I've got an X3230 with a GT 730 (last year's best PCIe x8 card), running 2 tasks on it (I made a setup for that) for SETI@home. Each task takes 0.108 of a CPU core to feed the GPU for the CUDA42 app (there are several variants of CUDA apps in SETI@home), and the GPU is crunching as I write this.

So no, you won't use up all of your CPU power, unless you pair an old CPU with a state-of-the-art new GPU!
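For scale, a quick check of how little CPU that reserves on a quad-core like the X3230 (the 0.108 per task is KLiK's figure; the rest is simple arithmetic):

# Quick arithmetic on that setup: two SETI@home GPU tasks, each reserving
# about 0.108 of a CPU core to feed the GPU, on a 4-core Xeon X3230.

cpu_cores = 4
gpu_tasks = 2
cpu_per_gpu_task = 0.108

reserved = gpu_tasks * cpu_per_gpu_task
print(f"CPU reserved for feeding the GPU: {reserved:.3f} cores")
print(f"Left for ordinary CPU work units: {cpu_cores - reserved:.3f} of {cpu_cores} cores")

That leaves roughly 3.8 of 4 cores free for ordinary CPU work units.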
----------------------------------------
oldies:UDgrid.org & PS3 Life@home


non-profit org. Play4Life in Zagreb, Croatia
----------------------------------------
[Edit 1 times, last edit by KLiK at Dec 28, 2015 8:08:28 PM]
[Dec 28, 2015 7:57:22 PM]
supdood
Senior Cruncher
USA
Joined: Aug 6, 2015
Post Count: 333
Re: Any way of using our GPU's?

(Quoting vlado101:) "When you say no CPU tasks running, you mean that you suspend them while doing work with the GPU, correct? [...] I guess this means I would have to decide whether I want to use this laptop for WCG [...]"

That's correct. Limits how much you can crunch, but takes care of heat problems.
----------------------------------------
Crunch with BOINC team USA
www.boincusa.com

[Dec 28, 2015 8:48:11 PM]