World Community Grid Forums
Thread Status: Active. Total posts in this thread: 23
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hello WCG
Attention: WCG Forum Management / WCG Forum Policy Administrator

Greetings. On the premise that:

1] It is inevitable that the GPU will play a major and vital role in any DC effort;
2] WCG would like to be a key player (if not aspiring to take the lead) in that effort;
3] GPU-related discussion topics are expected to grow in volume commensurate with the GPU's role in the DC industry, necessitating their own space under their own heading;
4] The existing two (2) sub-headings under the "Support" heading, Website Support and BOINC Agent Support, do not cover the GPU area adequately or elegantly, either separately or together;

I propose that a new "GPU Support" sub-heading be added under the "Support" heading, to centralize all discussion that has anything to do with GPU-based crunching here at WCG. For the moment, while GPU-based crunching is not yet a reality at WCG, members could discuss the matter in a setting similar to the pre-launch of a new product, say a new car or a new OS.

For your consideration, please. Good day.
Hypernova
Master Cruncher | Audaces Fortuna Juvat! | Vaud - Switzerland | Joined: Dec 16, 2008 | Post Count: 1908 | Status: Offline
I fully support this request.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Thank you for requesting a GPU forum.
At this time we are going to deny this request. As you may know, we are interested in adding GPU support, but we have a rather large backlog of research projects to launch first. Adding a GPU forum implies all kinds of deadlines, and we're not ready to deal with that yet. Thanks!
fablefox
Senior Cruncher | Joined: May 31, 2010 | Post Count: 168 | Status: Offline
Yeah, agreed. And since NVidia has now come out with Tesla, which AFAIK can support double-precision floating point, there is no reason not to support it.
[Edit 1 times, last edit by fablefox at Jun 3, 2010 4:08:13 AM]
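As an aside on the double-precision point: a single-precision GPU keeps far fewer significant digits than a CPU's 64-bit doubles, which is easy to see by round-tripping a value through 32-bit storage. A minimal Python sketch (just an illustration, not WCG code):

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python float (64-bit) through 32-bit storage,
    mimicking what a single-precision-only GPU would keep."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 0.1
single = to_float32(x)
print(f"double precision: {x:.17f}")
print(f"single precision: {single:.17f}")
print(f"error: {abs(single - x):.2e}")   # error is ~1.5e-09
```

Values exactly representable in 32 bits (like 0.5) survive unchanged; most decimal fractions do not, which is why science apps that accumulate millions of operations care so much about double precision.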
nasher
Veteran Cruncher | USA | Joined: Dec 2, 2005 | Post Count: 1423 | Status: Offline
@fablefox: they said they were working on it.
Remember, WCG doesn't have thousands of people working there to get new stuff online. Right now they are working towards launching new research projects (the things that pay the bills) before working on things that increase speed. Yes, it would be nice to have GPU support, but until they get through all the thousands of things needed to get GPU crunching running for most of the current or upcoming projects, they don't want to start it. Realize that if they don't have GPU support, they will get people asking for it, and that's just one set of complaints, and they are working on it. But picture what happens if they add GPU support for even one project:

1) all the beta testing to make sure the project works
2) complaints that every project doesn't have GPU support
3) all the errors and lost time due to failed work units, and then tech support for problems with them
4) all the different GPUs out there that require testing, because they are like different operating systems
5) three or more additional types of work units: currently there are work units for Windows, Mac, and Linux machines, and they would need GPU versions for each operating system as well
6) complaints about waiting on their wingmen
7) cutting work units to size for GPUs (remember, they aim for about an 8-hour return on work)

And that's just what I can come up with in 2-3 minutes. They have told us they are working on GPU support, but it won't be any time soon. There are other BOINC projects out there that do support GPUs; if you must use your GPU, I hope you can find a project there you want to run while you wait for WCG to get through everything they will need to do to start GPU support.

Personally, I want to see:
1) DDT2 projects
2) the launch of phase 2 of projects like Flu and CEP
3) the other few projects out there they want to launch
Sid2
Senior Cruncher | USA | Joined: Jun 12, 2007 | Post Count: 259 | Status: Offline
Quoting nasher: "it would be nice to have GPU, but until they get through all the thousands of things needed to get GPU running for most of the current or upcoming projects, they don't want to start it."

64-bit apps would be a nice touch... processing with GPUs can come later.
fablefox
Senior Cruncher | Joined: May 31, 2010 | Post Count: 168 | Status: Offline
There are already 64-bit versions of BOINC; the FAQ section has a link. I don't know the technical aspects of it, but the FAQ says to use it "if you want science to run faster".
As for other GPU projects: I know that Folding@home is a GPU-capable, medicine-related project, for anyone who is interested. http://folding.stanford.edu/

[Edit 1 times, last edit by fablefox at Jun 6, 2010 11:47:48 AM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I think you can probably avoid a lot of double-coding (ATI vs. NVidia) by using either OpenCL or (alas, Windows-only) DirectCompute 10 or 11. For some of the image-compare jobs, even OpenGL might suffice. In short: avoid CUDA and ATI Stream, and go for the multi-platform-supported variants.
//Svein
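To illustrate Svein's point: with OpenCL, the kernel itself is plain C source handed to whichever vendor's driver is present, so one kernel covers both ATI and NVidia cards. The sketch below shows a hypothetical `saxpy` kernel string (the host-side OpenCL setup is omitted) alongside a pure-Python reference implementation of the same computation; none of this is actual WCG code.

```python
# A single OpenCL C kernel source (a plain string) can be compiled by any
# vendor's OpenCL driver (NVidia, ATI, ...), which is what avoids the
# CUDA-vs-ATI-Stream double-coding. Host setup code is omitted here.
KERNEL_SRC = """
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global float *y) {
    int i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}
"""

def saxpy_reference(a, x, y):
    """Pure-Python reference for the kernel above: y[i] = a*x[i] + y[i]."""
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy_reference(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))
# -> [12.0, 14.0, 16.0]
```

A project would validate the GPU result against exactly this kind of CPU reference during beta testing, which is one of the chores nasher's list alludes to.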
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I've read an argument against GPU support for DC projects here: http://boinc.bakerlab.org/rosetta/forum_thread.php?id=4755 It goes as follows:
When one has a hammer, everything looks like a nail, but the reality is that there is a proper tool for each type of job. While you can pound a screw in with a hammer with some success, a screwdriver works much better for that job. You can also use the handle of a screwdriver to pound in a nail, but a hammer is better suited for that task. And good luck cutting a 2"x4" board with either a hammer or a screwdriver!

Some projects will run better on a CPU and others are better suited for GPUs. Collatz runs well on GPUs because I wanted to learn about GPU programming and therefore chose an unsolved math problem that I knew would run well on a GPU. Collatz needs very little external data and does the same relatively small equation over and over and over again. Since none of the results affect the next iteration of results, it runs in parallel very well. In comparison, Rosetta uses rather large data files, and loading and unloading them from GPU memory would likely end up making the GPU app run as slow as, or even slower than, Rosetta's CPU apps. The key to fast GPU apps is to keep all those stream processors busy and not have them waiting for data to be copied to/from the GPU.

GPUs may have double precision capabilities, but they don't necessarily round the same way as a CPU does, and the number of digits of precision may or may not be the same. Collatz didn't have to worry about that because it uses only integer math (192-bit integers at present, but integers nonetheless). Collatz was also partly chosen so that people without the latest and greatest GPU hardware would still be able to utilize their GPUs while crunching with their CPUs on projects such as Rosetta.

Depending upon the project, a CPU may be the best tool for the job regardless of how highly skilled the project developers are. It sounds like that is the case here.
This argument is not the same in wording, but the message is basically the same from many CPU-only DC projects. I had this to add:

Total n00b here, so please be gentle when you impale me with my ignorance. I was just curious whether it isn't so much a right-tool-for-the-job issue as a question of going about the issue the right way. If the problem is that you bought a screw and a hammer, you knew, or ought to have known, that it wasn't a good idea before you even started hammering that screw. So if the research isn't planned in a way that suits the hardware, then it's not the hardware that is to blame. Now that the future is parallel, why fight it instead of embracing it? If the research isn't possible the way it's done now, how about finding a way to make it possible in the way that has the best potential?

If I wanted to go from A to B, I could take a car, or I could take a bus. I wanted to go from A to B, but so did a lot of others, unless B wasn't that interesting a place to go. If that were the case, maybe I should go where everybody else was going. All this is abstract and unspecific, but general rules do apply, even where specific tasks are involved. It would be nice if Rosetta could take the leap of faith, instead of coming up with reasons not to.

You could have a really nice car, but you're a cab driver, making the same trip 30 times in 30 different directions. Those you transport aren't that interested in sightseeing in your cab; they just want to get where they have to go, and that's the same place. If you were a bus driver, you could have taken all 30 in one go. If this example is relevant, then it's more a question of logistics and conservative programming. A bus has to have a predefined route, a fixed schedule, and other buses ready to make the same trip so that everybody doesn't depend only on you. It's not even a question of RISC vs. CISC.
But even if that were the case, the iPad is doing quite well, and Ubuntu also supports ARM. Not that you would use one for BOINC, but if ARM served the same role as Intel/AMD, it would have to. A math professor could outwit a room full of students, but if it were simple math and lots of it, the room full of students would overwhelm the professor, simply because it all has to be written down and the professor only has two hands. Even if he were the fastest cowboy math teacher in the West, he'd get tired.

[Edit 2 times, last edit by Former Member at Jun 23, 2010 3:27:00 PM]
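For what it's worth, the Collatz point about independent iterations is easy to sketch: each starting value can be iterated without any data from its neighbours, which is exactly the shape of work that keeps a GPU's stream processors busy. A small Python illustration (hypothetical, not the project's actual code):

```python
def collatz_steps(n: int) -> int:
    """Number of iterations for n to reach 1 under the Collatz rule:
    halve if even, otherwise compute 3n + 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Each starting value is independent of every other, so the work maps
# cleanly onto thousands of GPU stream processors (or a process pool);
# no result feeds into the next, and almost no data moves around.
results = {n: collatz_steps(n) for n in range(1, 10)}
print(results)
```

Contrast this with Rosetta-style jobs, where large shared data files would have to shuttle in and out of GPU memory: the loop above touches nothing but a single integer per work item.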
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
Welcome to these forums, Jay.
What you say makes plenty of sense. There is a fair chance we will see GPUs at WCG someday, but it looks like it will not be any time soon (perhaps a year, perhaps five).

There has been plenty happening on the GPU front over the last year or so. When the GF104s are released next month, the scientists will see all the cards in front of them. I thought there was too much about to happen over the last year, and too little maturity in the apps, for GPUs to be attractive here. However, we are coming close to knowing what the future holds for the next few years. We just need some of the apps to mature a bit and the next Fermis to be released; these things will happen next month, and then we will know what the future of NVidia is. Hopefully ATI will have more mature apps very soon too. Then scientists will have the tools and materials to use GPUs, safe in the knowledge that such cards will be available for at least a few years.

So GPU use is set to expand; of this I have no doubt. Whether they reconsider and start to use GPUs here any time soon, or on other BOINC projects, we will have to wait and see, but there are many good projects that could be using GPUs. These are the ones I would like to see: Rosetta, Ralph, Poem, Ibercivis, RNA World, Docking, and of course WCGPU.

Regarding ATI Stream, OpenCL, and CUDA: it is clear from the directions taken by other projects that it is either ATI Stream or CUDA. OpenCL is not well optimised, so people just won't accept it. I would prefer WCG started with ATI cards, because GPUGrid performs similar work using NVidia cards.

I hope the scientists here will keep an open mind about GPUs and their potential, and fully consider when the best time to start using them is. If one project outweighs another, then I hope they make the correct choice, whatever it is. However, until they start to dabble with GPUs, they are probably not going to be in a position to judge their use.
Perhaps some of the planned projects could easily have a GPU component. It may even be possible for existing projects to be adapted to add one. Presently there are good GPU people available, which is perhaps another noteworthy consideration, but I don't know what is going on behind the scenes at WCG or how secure the future is financially and structurally.

[Edit 2 times, last edit by skgiven at Jun 23, 2010 6:03:57 PM]