World Community Grid Forums
Thread Status: Active | Total posts in this thread: 32
Henri Tapani Heinonen
Cruncher | Joined: Jun 20, 2006 | Post Count: 24 | Status: Offline
Hi!

There is now a CUDA-based BOINC application for SETI@home:

http://setiathome.berkeley.edu/cuda.php
http://setiathome.berkeley.edu/cuda_faq.php

It will give the SETI@home project a lot more crunching power. Maybe we will see a CUDA version of FightAIDS@home someday?
----------------------------------------
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
We know. Thank you.

WCG's GPU plans are progressing.
----------------------------------------
logaritse
Advanced Cruncher | Indonesia | Joined: Apr 23, 2008 | Post Count: 104 | Status: Offline
"We know. Thank you. WCG's GPU plans are progressing."

Glad to hear that WCG has a plan to implement GPU/CUDA!
Simplicity meet classic
----------------------------------------
nasher
Veteran Cruncher | USA | Joined: Dec 2, 2005 | Post Count: 1423 | Status: Offline
Yes, it is amazing how many new graphics-processing projects are starting to get up and running. I would love to see all those untapped resources being used more.

Personally, I can't wait until they decide to bring other gaming systems besides the PS3 onto BOINC projects. I mean, I have a Wii at home that probably gets less than 10 hours of use a week, and I know lots of people with Xboxes and the like who would love to use them more.

Right now it's only main processors on WCG, and that's fine. I don't want them jumping the gun and having to redo work, so I will happily wait... at least for a while.
----------------------------------------
GIBA
Ace Cruncher | Joined: Apr 25, 2005 | Post Count: 5374 | Status: Offline
I just wish we could read a little more about the plans, and about when GPU support will be available on WCG... there have been a huge number of threads where people asked for it over the last few months, but so far, nothing.
----------------------------------------
Cheers! GIB@
Join the BRASIL - BRAZIL@GRID team and be very happy!
http://www.worldcommunitygrid.org/team/viewTeamInfo.do?teamId=DF99KT5DN1
----------------------------------------
Eric-Montreal
Cruncher | Canada | Joined: Nov 16, 2004 | Post Count: 34 | Status: Offline
And there is even more good news for FAAH (and Discovering Dengue Drugs), since both are based on AutoDock, and there is now a (commercial) version of it that uses CUDA:

http://www.siliconinformatics.com/products.html
http://www.nvidia.com/object/io_1209593316409.html

(I already posted those links a few months ago in another thread.)

With AutoDock 4.0 now being GPL, either the code for the GPU version should be released, as the licence requires, or (now that feasibility has been proved) the work could be duplicated.

BOINC now supports GPUs. AutoDock now supports GPUs, with an 8x to 12x speed improvement over the non-GPU version on a dual Xeon (see the sketch at the end of this post). I hope WCG will make an announcement soon...

Oh, and I'm glad 'Dydactilos' finally saw the light...
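To make the speedup plausible: docking workloads are a natural fit for CUDA because every candidate ligand pose can be scored independently of the others. Below is a minimal, purely illustrative sketch of that pattern, one thread per pose. The flat pose layout and the toy scoring function are assumptions made up for this example; this is not the actual AutoDock or Silicon Informatics code.

```c
#include <cuda_runtime.h>
#include <stdio.h>

#define N_POSES (1 << 18)          /* independent ligand poses to score */

/* Each thread scores one pose. The "energy" here is a toy stand-in
 * (squared distance from the origin); a real docking code would
 * evaluate a force field against a precomputed affinity grid. */
__global__ void score_poses(const float *poses, float *scores, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = poses[3 * i], y = poses[3 * i + 1], z = poses[3 * i + 2];
        scores[i] = x * x + y * y + z * z;
    }
}

int main(void)
{
    float *poses, *scores;
    cudaMallocManaged(&poses, 3 * N_POSES * sizeof(float));
    cudaMallocManaged(&scores, N_POSES * sizeof(float));
    for (int i = 0; i < 3 * N_POSES; i++)
        poses[i] = (float)(i % 100) * 0.01f;   /* placeholder coordinates */

    int threads = 256;
    int blocks = (N_POSES + threads - 1) / threads;
    score_poses<<<blocks, threads>>>(poses, scores, N_POSES);
    cudaDeviceSynchronize();

    printf("score[0] = %f\n", scores[0]);      /* keep the best poses here */
    cudaFree(poses);
    cudaFree(scores);
    return 0;
}
```

Since no pose depends on any other, the kernel can keep thousands of threads busy at once, which is where speedups of the order quoted above come from.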
----------------------------------------
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
We knew that, too. Thank you.

It's "Didactylos". I didn't need converting; I was always very clear that the technology wasn't quite there yet. BOINC is still having major problems coping with CUDA, and of course, before CUDA, GPU technology was incredibly hard to use for general-purpose computing.

I'm just very vociferous when it comes to deflating the hype surrounding GPUs. The graphics companies are very good at advertising, and less forthcoming about the problems.
----------------------------------------
Eric-Montreal
Cruncher | Canada | Joined: Nov 16, 2004 | Post Count: 34 | Status: Offline
"I'm just very vociferous when it comes to deflating the hype surrounding GPUs. The graphics companies are very good at advertising, and less forthcoming about the problems."

When someone has been proved wrong on so many occasions, over many months, on the same subject, he should at least refrain from being too 'vociferous'... No matter what your reasons are, treating anything related to GPUs as if you were a guard dog does not lead to anything productive.

If you dislike those products and consider them all hype and no substance, then fine. We have heard your point of view over and over again in each and every GPU-related thread. Now please let the people who, backed by facts and by the fast advances in this field, see GPUs as a way to significantly increase the speed of the calculations performed in projects such as FAAH and others hosted by WCG, discuss the subject without your constant vociferations.
----------------------------------------
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Proved wrong? How so?

It's amusing the way the figures keep coming down. First GPUs were 40 times faster than a CPU, then 12 times, then 7... We're getting a lot closer to reality now.

It's easy for people to get over-excited about the promise of a new(ish) technology without giving a thought to the complexities involved. And that's okay. But please don't be rude when I start discussing practicalities.
----------------------------------------
Eric-Montreal
Cruncher | Canada | Joined: Nov 16, 2004 | Post Count: 34 | Status: Offline
Didactylos wrote: "Proved wrong? How so?"

Here are some of the things you wrote in GPU-related threads:

Didactylos wrote: "GPUs capable of this kind of processing remain expensive, rare, and the sole domain of hardcore gamers."

This is simply false: these boards start around $100, as they did in October 2008 when you wrote it. Grid projects using GPUs show that a significant percentage of participants have them.

(This was in answer to a previous post about the possibility of using GPUs in projects such as those run by WCG.)

Didactylos wrote: "Oh, I wish that were true. However, converting a program to be massively parallel is not easy, and is not always even possible."

Now both BOINC and AutoDock run on GPUs. This fact seems to irritate you as much as it did when I first posted the link a few months ago.

Didactylos wrote: "As for the benefits of GPU computing.... cut away the hype, and what do you have? A very expensive way of computing a little faster."

In the case of AutoDock, "a little faster" means 8x to 12x the speed of the same machine without a GPU, for $150 to $250... Not bad.

Didactylos wrote: "So, it may work - kind of. But it's not ready. It's not reliable, and it's definitely not mature. It's not even finished."

News flash: all software has bugs. Projects that use GPUs consider them good enough to do the job, but you probably know better - kind of.

Didactylos wrote: "I suppose I could try to compile a list of known problems, but it would take me all day."

What kind of problems? Give us some examples that go beyond normal software bug fixes and show those alleged flaws in GPUs...

Didactylos wrote: "Once you strip away the hype, the performance is good, but nowhere near the claims that are made (and that includes Folding@Home's claims - they don't compare like with like)."

You accuse them of lying. Proof, please?

Didactylos wrote: "Obviously, they have advantages. Less obviously, they have disadvantages, too - those get talked about less."

Disadvantages like what? Tell us about the dark side of GPUs. What is so wrong with them?

Didactylos wrote: "Saying BOINC supports CUDA is a gross exaggeration. BOINC would like to support CUDA, but right now it's in such a mess that it is a miracle a few people have managed to get it working at all."

Anything to support that assertion?

Didactylos wrote: "Yes, it's fast. But it's also expensive and a power sink."

Most GPU cards are in the $100 to $300 range (those above it are not cost-effective, as they offer only slightly better performance), and their power consumption is between 100 W and 150 W, yet they can produce about 10x more calculations. Compared to ten machines that would cost anywhere between $5,000 and $15,000, consume over 2 kW of power, and use ten times the space, that seems like both a bargain and a power-efficient solution. (Doing the same work at roughly 150 W instead of 2 kW is about a 13x gain in performance per watt.)

Didactylos wrote: "Once you cut away the hype, I suspect it will prove comparable to a blade server"

The two are totally unrelated. Blade servers are regular servers in a small form factor, and they are not known for being cheap, to say the least. If one GPU does the work of ten blades, it's definitely a superior alternative for that application.

Didactylos wrote: "So, is the GPU really faster? No."

I guess an average 10x speedup is worth nothing, then.

Didactylos wrote: "It only gets more done if you have a very specific task, and are prepared to accept the limitations of the GPU, such as less precise mathematics."

For a start, not all calculations require ultimate precision; but when they do, the latest GPUs from both ATI and NVIDIA natively support double-precision (64-bit) floating-point calculations (a small illustration follows at the end of this post).

Didactylos wrote: "In short, the boundary between CPU and GPU is getting fainter."

This totally contradicts your previous "vociferations" about GPUs being so different and so narrowly specific that they would be almost useless.

Didactylos wrote: "Folding@Home's claims should be taken with a grain of salt. [...] i.e. they are totally bogus - it is not an equal comparison. Apples and oranges spring to mind. Further, the claims are inflated by Sony and ATI's marketing departments. In short, the comparison you are making is not valid."

How are they bogus? As usual, you offer little more than assertions without any proof. When the poster asked you to explain it, your answer was: "I (and others) have posted more detailed technical explanations before." Fine. Care to give any reference (link) to those technical explanations?

Didactylos wrote: "Yes, the hype still irritates me. The strength of World Community Grid relies on the fact that it doesn't require specialised hardware. Anyone can contribute."

Adding a GPU option does not mean it is *required*, and you know it perfectly well.

Didactylos wrote: "What a joke. Andy Keane is right, parallelism is hard. [...] But, oh, the hypocrisy of Nvidia complaining about Intel's marketing!"

Who cares what NVIDIA's or Intel's marketing says? Does it make the products any less capable?

Didactylos wrote: "CUDA, should'a, - didn't."

Very funny. Except it's false.

Didactylos wrote: "Unfortunately, that's not the way most computational biology seems to work. FORTRAN seems to be the language of choice, and most software has a long history and established codebase. Porting to an entirely new platform is a massive undertaking."

Maybe, but AutoDock, BOINC and many others were ported pretty quickly; the problem was not *that* huge. In each case, only a small part had to be rewritten.

Didactylos wrote: "It's amusing the way the figures keep coming down. First GPUs were 40 times faster than a CPU, then 12 times, then 7.... We're getting a lot closer to reality now."

False: the figures are not coming down over time. Some applications benefit more than others. AutoDock is accelerated by between 8.3x and 13x, according to the available numbers. Not too bad for a first version. If you have numbers that show it's all hype and that real-world results are lower than this, show them; otherwise we know where the hype comes from.

Didactylos wrote: "It's easy for people to get over-excited about the promise of a new(ish) technology, without giving a thought to the complexities involved."

I'm not "over-excited", Mr. "Vociferous"... just fed up with the kind of answers you sent both to the original poster and to me in this thread. If you want to act like a bully to close the threads you dislike, I guess I'll have some fun with you.

Didactylos wrote: "And that's okay. But please don't be rude"

Read the shut-up lines in your previous posts and you'll see you're not in a position to accuse others of being rude, Mr. "Vociferous"!

Didactylos wrote: "when I start discussing practicalities."

You're not discussing practicalities, you're spreading disinformation in the forums. I have yet to see your countless accusations against the use of GPUs backed by any references. Repeating the same line about the supposed 'hype' and shortcomings, without giving specific facts to counter said hype, won't make it one bit more credible.
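To illustrate the double-precision point: on CUDA hardware with native FP64 (NVIDIA's GT200 generation, compute capability 1.3, and later), a kernel can simply compute in double. The dot product below is a hedged, self-contained sketch, not code from any project discussed here; it uses present-day unified memory (cudaMallocManaged) for brevity, where period code would have used explicit cudaMalloc and cudaMemcpy.

```c
#include <cuda_runtime.h>
#include <stdio.h>

/* Block-wise dot product in native double precision. On GPUs older
 * than compute capability 1.3, the compiler silently demoted double
 * to float, which is the precision concern raised above. */
__global__ void dot_kernel(const double *a, const double *b,
                           double *partial, int n)
{
    __shared__ double cache[256];
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    double sum = 0.0;
    for (int i = tid; i < n; i += gridDim.x * blockDim.x)
        sum += a[i] * b[i];
    cache[threadIdx.x] = sum;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {  /* tree reduction */
        if (threadIdx.x < s)
            cache[threadIdx.x] += cache[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        partial[blockIdx.x] = cache[0];
}

int main(void)
{
    const int n = 1 << 20, threads = 256, blocks = 64;
    double *a, *b, *partial;
    cudaMallocManaged(&a, n * sizeof(double));
    cudaMallocManaged(&b, n * sizeof(double));
    cudaMallocManaged(&partial, blocks * sizeof(double));
    for (int i = 0; i < n; i++) { a[i] = 1.0; b[i] = 2.0; }

    dot_kernel<<<blocks, threads>>>(a, b, partial, n);
    cudaDeviceSynchronize();

    double dot = 0.0;                /* finish the reduction on the CPU */
    for (int i = 0; i < blocks; i++) dot += partial[i];
    printf("dot = %f (expect %f)\n", dot, 2.0 * n);

    cudaFree(a); cudaFree(b); cudaFree(partial);
    return 0;
}
```

On pre-1.3 hardware this would quietly run in single precision; native FP64 removes that objection, at some cost in throughput compared to single precision.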