Posts: 15   Pages: 2   [ Previous Page | 1 2 ]
This topic has been viewed 5170 times and has 14 replies
Rickjb
Veteran Cruncher
Australia
Joined: Sep 17, 2006
Post Count: 666
Status: Offline
Re: Theory Crafting OPN CPU vs GPU

I think it's likely that only part of a WU can be performed on a GPU, with the remainder requiring the CPU. I also think it's very important that using a GPU does not compromise the accuracy of the calculations, e.g. by having to use 32-bit floating-point where 64-bit would be better, or by limiting the choice of algorithms.

Be careful that your GPU isn't a solution looking for a problem,
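A quick illustration of the 32-bit vs 64-bit concern above (a minimal sketch using Python's struct module to round values to single precision; no GPU required):

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

big = 1e8
# Near 1e8 the spacing between adjacent 32-bit floats is 8.0,
# so adding 1.0 is silently lost in single precision:
print(to_f32(to_f32(big) + 1.0) == to_f32(big))   # True: the 1.0 vanished
# In 64-bit precision the same addition is represented exactly:
print(big + 1.0 == big)                           # False: the 1.0 survives
```

This is the kind of silent accuracy loss that matters if a GPU port forced single precision where the science needs double.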
[Jul 11, 2020 1:07:59 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Theory Crafting OPN CPU vs GPU

I think it's likely that only part of a WU can be performed on a GPU, with the remainder requiring the CPU. I also think it's very important that using a GPU does not compromise the accuracy of the calculations, e.g. by having to use 32-bit floating-point where 64-bit would be better, or by limiting the choice of algorithms.


If I can run Folding@Home or Einstein@Home with one CPU thread and one GPU, I know of no reason why it would not be valuable here. In fact, @scrippsresearch is already working on porting this project to GPU.

Be careful that your GPU isn't a solution looking for a problem,


That is exactly what my computers are. I want my tools to be part of the solution to the world's problems; that is the only reason distributed computing projects exist at all, and if I have the right tools for the job, I want to use them. I agree that not every project can benefit from GPU processing, but I also know that many projects that could benefit from it don't use it. Science is complex, but it can be broken down into simple, repeatable steps, unlike the politics behind universities' budgets.
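The "one CPU thread and one GPU" arrangement mentioned above is normally expressed in BOINC through a per-project app_config.xml file; the structure below is the standard BOINC format, but the app name is a placeholder that would have to be replaced with the project's real application name:

```xml
<!-- app_config.xml, placed in the BOINC project's data directory.
     "example_gpu_app" is a placeholder app name. -->
<app_config>
  <app>
    <name>example_gpu_app</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>   <!-- run one task per GPU -->
      <cpu_usage>1.0</cpu_usage>   <!-- reserve one full CPU thread per task -->
    </gpu_versions>
  </app>
</app_config>
```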
----------------------------------------
[Jul 11, 2020 7:19:18 PM]
mikey
Veteran Cruncher
Joined: May 10, 2009
Post Count: 824
Status: Offline
Re: Theory Crafting OPN CPU vs GPU


That is exactly what my computers are. I want my tools to be part of the solution to the world's problems; that is the only reason distributed computing projects exist at all, and if I have the right tools for the job, I want to use them. I agree that not every project can benefit from GPU processing, but I also know that many projects that could benefit from it don't use it. Science is complex, but it can be broken down into simple, repeatable steps, unlike the politics behind universities' budgets.


The major problem in moving to GPU processing is the cost of hiring someone to write the app and then stay around to tweak it so that every brand and model of GPU can be used. The more GPUs that can be used the better; there is no sense writing an app that requires at least 12 GB of onboard memory if fewer than 0.01% of crunchers have such cards.

Good programmers are not cheap, and with most projects running on a shoestring budget (very few actually get grants), few projects can afford them. Programmers also do not usually move from project to project offering their services, so each project has to find a new one, which means a lot of on-the-job training. The GPU makers have mostly stopped helping BOINC projects, since we are not their primary market; gamers are, followed by supercomputers usually built by governments. That again means a lot of on-the-job training for anyone offering their services, which can mean a long lag time to get it right. Because of that lag time, it's not a case of "I'll pay you 10K to build me a GPU app": the app can't be built and tweaked in a couple of weeks, which again makes the costs prohibitive.
----------------------------------------


[Sep 3, 2020 2:50:45 PM]
Falconet
Master Cruncher
Portugal
Joined: Mar 9, 2009
Post Count: 3315
Status: Offline
Re: Theory Crafting OPN CPU vs GPU

Fortunately, the OPN research team has an OpenCL/CUDA programmer!
https://www.worldcommunitygrid.org/research/opn1/researchers.do
Actually, from their GitHub page, Nvidia helped develop the CUDA version:

"The Cuda version was developed in collaboration with Nvidia to run AutoDock-GPU on the Oak Ridge National Laboratory's (ORNL) Summit, and it included a batched ligand pipeline developed by Aaron Scheinberg from Jubilee Development."

https://github.com/ccsb-scripps/AutoDock-GPU


Hope we get an update on the Alpha/BETA testing of the app soon.
----------------------------------------


- AMD Ryzen 5 1600AF 6C/12T 3.2 GHz - 85W
- AMD Ryzen 5 2500U 4C/8T 2.0 GHz - 28W
- AMD Ryzen 7 7730U 8C/16T 3.0 GHz
----------------------------------------
[Sep 3, 2020 5:51:21 PM]