Posts: 40   Pages: 4   [ Previous Page | 1 2 3 4 ]
This topic has been viewed 218516 times and has 39 replies
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Re: Autodock & CUDA

Hello everyone,
Just a technical note.
Of course, everyone has the right to post in whichever forum he or she wants, but:

Should this post not be in the Suggestions and Feedback forum?

One more thing:
I wish the community advisers had the technical ability to transfer posts to the forum in which they should have been posted.

Of course, the member who posted in the wrong forum should be informed about the transfer of his or her post. This could be done with an automatic note in the forum where the post was originally made, for example: "Dear Member, the post you posted here three hours ago has been transferred to the xy forum. This note will disappear in x days."

The transfer notes should indeed disappear after a week, because otherwise they would clutter the forum.

All the best to everyone
Martin S
[Nov 5, 2010 3:05:39 PM]
BladeD
Ace Cruncher
USA
Joined: Nov 17, 2004
Post Count: 28976
Re: Autodock & CUDA

WCG will, for 99.999999999999999999999999999999% sure, only run research apps within the BOINC framework. Personally, I think the client is still not ready for the non-tech-savvy public. I continue to see alpha mailing list reports of scheduler issues... idling GPU cards... idling CPU cores, overscheduling and whatnot.

And when I briefly ran it to test, my case and CPU fans really started running in hoover-howling max mode. Not for wimps, and Saul's post shows so... not out of the box.

Looks like they have come a long way... here. The REAL shocker: they released the ATI client first!
[Edited once, last edit by BladeD at Nov 21, 2010 2:13:48 PM]
[Nov 21, 2010 2:11:16 PM]
Eric-Montreal
Cruncher
Canada
Joined: Nov 16, 2004
Post Count: 34
Re: Autodock & CUDA

Sekerob: The setup presented by Saul Luizaga is necessary when using several cards, and that, just like setting up a server farm, is for the tech-savvy. Their GPU client is not perfect, but it is stable.

For a regular machine, just download the client, register with F@H and it'll start crunching.
http://folding.stanford.edu/English/DownloadWinOther
The work they're doing is about protein folding (different, but in the same field as HPF2). WCG & F@H happily coexist on the same machine.

The only drawbacks are:
- higher fan noise
- the need to clean the card's fan a bit more often
- no side effects for most applications, but a marked slowdown for a few, such as Google Sketchup (at least on my machine); there is a "Pause Work" option on the tray icon for those cases.

Checking GPU temperature can be done with a simple utility such as :
http://www.techpowerup.com/downloads/1898/TechPowerUp_GPU-Z_v0.4.8.html
Temperatures around 80 degrees Centigrade are normal for the GPU itself. Check at least once every 3 months, or whenever the noise from the GPU fan increases (it needs a bit of cleanup).
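GPU-Z is a GUI tool; on NVidia cards the same core-temperature reading is also available from the command line via nvidia-smi, which makes periodic checks scriptable. A minimal sketch (the 90-degree alert threshold is my own assumption, not something GPU-Z or the driver defines):

```python
import subprocess

def gpu_temperatures() -> list[int]:
    """Query NVidia GPU core temperatures (deg C) via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [int(line) for line in out.split() if line.strip()]

def needs_attention(temps: list[int], limit: int = 90) -> bool:
    """Around 80 C under load is normal; flag anything well above that."""
    return any(t > limit for t in temps)
```

One could run this from a scheduled task every few weeks instead of remembering to open GPU-Z by hand.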

The CPU part of the application uses less than 3% of a core, so the CPUs remain essentially fully usable for WCG projects.

My main machine is a Win XP Q9550 + GTX275 and it's been crunching for both projects for about 2 years without incident.

djibeXX wrote:
"Thanks a lot. Nvidia and ATI will soon be leaders in supercomputers ... or I hope so."

At least on a symbolic level, it has already happened: the fastest supercomputer in the latest "TOP500" ranking is the Chinese Tianhe-1A, built using NVidia GPUs:
http://www.top500.org/lists/2010/11/press-release
[Nov 21, 2010 5:55:25 PM]
Ingleside
Veteran Cruncher
Norway
Joined: Nov 19, 2005
Post Count: 974
Re: Autodock & CUDA

Eric-Montreal wrote:
"Sekerob : The setup presented by Saul Luizaga is necessary when using several cards, and that, just like setting up a server farm, is for the tech savvy. Their GPU client is not perfect but stable.

For a regular machine, just download the client, register with F@H and it'll start crunching.
http://folding.stanford.edu/English/DownloadWinOther
The work they're doing is about protein folding (different, but in the same field as HPF2). WCG & F@H happily coexist on the same machine."

This is possibly the case for Nvidia cards, but not for ATI cards, where FAH's ATI client is old and has mediocre speed, and even if you fool around with environment settings, it still uses a large part of a CPU core to feed the GPU client. Also, at least for me, the EUE (early unit end) rate was too high...
----------------------------------------


"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
[Nov 21, 2010 9:39:31 PM]
Eric-Montreal
Cruncher
Canada
Joined: Nov 16, 2004
Post Count: 34
Re: Autodock & CUDA

Ingleside wrote:
This is possibly the case for Nvidia-cards, but not for Ati-cards

Saul Luizaga wrote that it used 5% of a quad core with ATI, and my experience is around 3% with NVidia. If you have a slower machine, the relative percentage will be higher; otherwise, there must be something wrong with your specific machine, and you might get better advice on their forum at http://foldingforum.org/

The point was that it would be better if WCG had some GPU projects to keep our machines warm, but since this is quite unlikely in the foreseeable future, there are other worthwhile projects out there that behave well alongside WCG.

OTOH, since F@H computation is so much faster and more power-efficient in GPU mode, it would be a better use of computing power if people currently crunching for F@H on CPU-only machines crunched for WCG instead.

Such cooperation between projects could bring huge benefits to both, since running their CPU client is highly inefficient at what they do (10% of their users are GPU users, yet they contribute 90% of the processing power).

Here is the average power per client (raw data: http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats):
NVidia: 335 GFLOPS
ATI: 149 GFLOPS
PS3: 59 GFLOPS
Windows CPU: 1.05 GFLOPS
Linux CPU: 2.7 GFLOPS

The total contribution is 812 TFLOPS from 470,000 active CPUs versus 9,447 TFLOPS from only 55,000 active GPUs.
On average, a single GPU contributes about as much as 100 CPUs.
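The roughly 100x figure follows directly from the quoted totals; a quick back-of-the-envelope check:

```python
# Totals quoted above: 812 TFLOPS from 470,000 CPUs vs 9,447 TFLOPS from 55,000 GPUs.
cpu_total_tflops, cpu_clients = 812, 470_000
gpu_total_tflops, gpu_clients = 9_447, 55_000

cpu_avg_gflops = cpu_total_tflops * 1000 / cpu_clients  # per-client CPU average
gpu_avg_gflops = gpu_total_tflops * 1000 / gpu_clients  # per-client GPU average

ratio = gpu_avg_gflops / cpu_avg_gflops  # roughly 100x
print(f"CPU: {cpu_avg_gflops:.2f} GFLOPS, GPU: {gpu_avg_gflops:.1f} GFLOPS, "
      f"ratio: {ratio:.0f}x")
```

Note that the per-client averages coming out of this (about 1.7 GFLOPS per CPU, about 172 GFLOPS per GPU) are in the same range as the per-platform list above, which is why the unit there must be GFLOPS rather than MFLOPS.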

To be fair, the very high numbers are the result of GPU power, a highly optimized algorithm, and a longer average daily run time for those machines.

An "exchange", where WCG users with an idle GPU would run the F@H GPU client while F@H CPU-only users would run WCG, where their machines would make a more valuable contribution, would bring huge benefits to both projects. In the current competition for users, though, this would require either cooperation from the project maintainers or a group of users from both projects willing to cross the border and maximize their global contribution.
[Nov 22, 2010 5:16:49 PM]
BladeD
Ace Cruncher
USA
Joined: Nov 17, 2004
Post Count: 28976
Re: Autodock & CUDA

Sounds good to me, but it would be so much easier if F@H could be attached to the BOINC manager.
[Nov 22, 2010 9:05:34 PM]
petnek
Advanced Cruncher
Czech Republic
Joined: Mar 17, 2008
Post Count: 89
Re: Autodock & CUDA

Here is a very interesting "client" for Folding@Home. It's easier for beginners on this project and has a lot of options. Check it out ;)
[Nov 22, 2010 11:04:04 PM]
Ingleside
Veteran Cruncher
Norway
Joined: Nov 19, 2005
Post Count: 974
Re: Autodock & CUDA

Eric-Montreal wrote:
"Saul Luizaga wrote it used 5% of a quad core with ATI, and my experience is around 3% with NVidia. If you have a slower machine, the relative percentage will be higher, else there must be something wrong with your specific machine and you might get better advice on their forum at http://foldingforum.org/"

Hmm, a quick test: using the settings Saul Luizaga recommends, with FLUSH_INTERVAL=128, the GPU is at roughly 90% and uses... 40%-50% of one CPU core.
Increasing FLUSH_INTERVAL to 256, GPU usage hits 99%, and one CPU core is at... 70%.

Oh, and just during the 10 minutes spent on this test, I had one crash in connection with stopping and re-starting the client to test a different FLUSH_INTERVAL...

This was on an ATI 5850 and an i7-920.
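For anyone wanting to repeat this kind of A/B test, a sketch of launching the client with FLUSH_INTERVAL set in its environment. The client path below is a placeholder, not the real install location, and the variable's effect (higher values raise GPU utilization at the cost of more CPU time feeding the card) is as observed above:

```python
import os
import subprocess

# Hypothetical path to the F@H ATI GPU client -- adjust for your install.
CLIENT = r"C:\FAH\fah_gpu_client.exe"

def make_env(flush_interval: int) -> dict:
    """Copy the current environment and set FLUSH_INTERVAL for the client."""
    env = dict(os.environ)
    env["FLUSH_INTERVAL"] = str(flush_interval)
    return env

def launch(flush_interval: int) -> subprocess.Popen:
    # e.g. launch(128) then launch(256), watching GPU and CPU usage for each.
    return subprocess.Popen([CLIENT], env=make_env(flush_interval))
```

This keeps the setting scoped to the client process instead of changing the system-wide environment between runs.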

Eric-Montreal wrote:
"OTOH, since F@H computation is so much faster & power efficient in GPU mode, it would be a better use of computing power if people currently crunching for F@H with a CPU only machine could crunch for WCG instead.

Such cooperation between projects could bring huge benefits to both, as running their CPU client is highly inefficient at what they do (10% of their users are GPU users, yet they contribute 90% of the processing power)"

You're overlooking one very important fact: while FAH's GPU client is much faster at some calculations, only a subset of WUs can take advantage of this huge speedup. For the majority of FAH WUs, a CPU is actually more efficient than a GPU.

You can look at their GPU client as an application that can significantly speed up crunching of DDDT2 A-type and B-type WUs, but that will run the C-type slower, and can't handle all the other types, like FightAIDS@Home, CEP2, and so on...

Another thing to consider is that at least the SMP client gets a points bonus for fast turnaround time. One effect of this bonus system is that the ATI client gives little or no increase in points/day, but a huge increase in power consumption.

So, after FAH added the new bonus system, many FAH users who previously ran on their ATI cards have stopped using them and now run CPU-only...
----------------------------------------


"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
[Nov 23, 2010 1:14:52 AM]
sk..
Master Cruncher
Joined: Mar 22, 2007
Post Count: 2324
Re: Autodock & CUDA

Ingleside wrote:
"You can look on their GPU-client as an application that can significantly speed-up crunching of DDDT2 A-type and B-type of wu's, but that will run the C-type slower, and can't handle all the other types like FightAIDS@home, CEP2 and so on..."

While it would make sense to run A- and B-type DDDT2 WUs on GPUs and C types on CPUs, I doubt that WCG and F@H would cooperate on this, and the infrastructure to do all of it is not in place at WCG.

OT: why people would want to move their ATIs to count pi and search for Spock is one for the psychologists.
[Jan 11, 2011 11:22:55 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Re: Autodock & CUDA

The A+B types amount to 108,000 tasks. Even if porting made the remotest sense at all, a batch that small would never be run on the grid.
[Jan 11, 2011 11:44:36 AM]