World Community Grid Forums
Thread Status: Active | Total posts in this thread: 47
mikey
Veteran Cruncher | Joined: May 10, 2009 | Post Count: 826 | Status: Offline
That's nice, but the thread that broken link is supposed to point to was started three weeks after I posted the question in this one. Apparently it's who you blow around here, so I have about as much chance of getting an honest answer to valid questions as I would by hopping in the DeLorean with Doc.

The main problem with a CUDA client is that CUDA only runs on Nvidia cards, NOTHING else. There are a TON of other video cards out there, and making something work ONLY on Nvidia cards, without Nvidia's support, is not very smart or a wise use of your time and resources. OpenCL is the new open standard for video card computing that is coming; it is out, but not wholly supported by everyone yet. When it is fully implemented and supported by all, most BOINC projects will settle down, exchange info on the hows and whats, and we will be crunching with our video cards too. Right now a lot of projects are just waiting for the dust to settle a bit instead of putting resources into something they know will be useless in a couple of years. Folding@Home made it work, but they have two totally separate versions and neither is BOINC supported.
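To illustrate the cross-vendor point, here is a minimal sketch (not from the original post) that enumerates OpenCL platforms and their GPU devices in C. It assumes an OpenCL SDK with CL/cl.h is installed and that the program is linked against the OpenCL library; the same binary reports whatever GPUs the installed drivers expose, whether from Nvidia, AMD, or any other vendor shipping an OpenCL driver.

```c
/* Minimal sketch: list every OpenCL platform and its GPU devices.
 * Assumes an OpenCL SDK is installed; build with e.g. gcc list_gpus.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }
    if (num_platforms > 8) num_platforms = 8;   /* only the first 8 were stored */

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char vendor[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_VENDOR, sizeof(vendor), vendor, NULL);
        printf("Platform %u: %s\n", p, vendor);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &num_devices) != CL_SUCCESS)
            continue;  /* this platform exposes no GPU devices */
        if (num_devices > 8) num_devices = 8;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("  GPU %u: %s\n", d, name);
        }
    }
    return 0;
}
```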
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
If I were WCG (thus IBM), I'd not want to be seen as endorsing a single video card product, so there's logic in looking at wider applicability with OpenCL. That's not to say that the first offering might not be an nVidia release with HCC, but if you digest the words of bbover3, if WCG publish a plan it will be one that is clear on the broad way forward, with no opportunity for a single card maker to advertise "we are the chosen". If WCG goes live, then I'll look at products and drivers for the chosen platform... something that does not scream 'burn' and draw 250 watts... it's still volunteer computing using idle/spare cycles... something that does not land inadvertently on the desktops of 1000+ partners and mess up things in the relationship.
----------------------------------------
WCG
Please help to make the Forums an enjoyable experience for All!
nasher
Veteran Cruncher | USA | Joined: Dec 2, 2005 | Post Count: 1423 | Status: Offline
Honestly, I am not that interested in CUDA... I am interested in results.

Last night, in fact, I lost one of my computers. Not sure why yet, but while booting up it stops sending signals to the monitor, keyboard and mouse. I am not blaming anyone for this failure; it's just another computer down after 3+ years of operating at 100% CPU with no breaks.
damir1978
Senior Cruncher | Joined: Apr 16, 2007 | Post Count: 397 | Status: Offline
Only 3 years? One of my computers is 6+ years old in a 24/7 operating environment (I have used it since Grid.org), with no break in activity for the hard drive or anything else, and it is still crunching nicely. I guess I'm lucky. The secret (everybody keeps telling me) is not to shut your computer down at all, if possible. Once it is rebooted, it might develop electrical issues.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Sekerob wrote:
"If I were WCG (thus IBM), I'd not want to be seen as endorsing a single video card product, so there's logic in looking at wider applicability with OpenCL. (...)"

That is, if you care more about branding than about getting work done.

Sekerob wrote:
"If WCG goes live, then I'll look at products and drivers for the chosen platform... something that does not scream 'burn' and draw 250 watts... it's still volunteer computing using idle/spare cycles... something that does not land inadvertently on the desktops of 1000+ partners and mess up things in the relationship."

If something "screams 'burn'" and gets you a whole lot of processing power, it should be considered. It is very simple to "opt out" everybody by default, so nobody could be upset that hardware they didn't want used was used. Sekerob, you don't have to defend IBM so passionately; we get it, they are the sponsors. But that doesn't mean that other hardware can't be used. If IBM said so, there could even be posts dissing the newly supported hardware, along the lines of: "sure, we got credits from X hardware, but are they reliable? We all need to buy more from Y to be sure."
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Just today I was reading an article in Computer Power User (CPU) examining CUDA applications. It mentioned Folding@Home but concentrated on video programs. One application ran using only 13% of a CPU, but most used at least 60%. Conclusion: it will take a while before commercial programmers learn how to make good use of CUDA.

My take on this subject: until HCC or another project has a CUDA version debugged and running well on a GPU, WCG will not be feeling any pressure. I expect that we will be able to move fast at that point. Until we reach it, we can prepare in a slow and deliberate manner, since speeding up will not accomplish anything without a project.
Lawrence |
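A hedged aside on the CPU-usage numbers above: one common reason a CUDA application still occupies most of a CPU core is that the runtime spin-waits during synchronization by default, and requesting blocking synchronization lets the host thread sleep instead. The sketch below only illustrates that flag under that assumption; it is not code from the article or from any WCG application.

```c
/* Sketch: request blocking (instead of spin-wait) synchronization so the
 * host thread does not burn a full CPU core while the GPU is working.
 * Assumes the CUDA toolkit is installed; link against the CUDA runtime. */
#include <stdio.h>
#include <cuda_runtime_api.h>

int main(void) {
    /* Must run before the CUDA context is created, i.e. before the first
     * kernel launch or device memory allocation. */
    cudaError_t err = cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaSetDeviceFlags failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    /* ... kernel launches would go here ... */

    /* With the flag above this wait sleeps; with the default spin policy
     * it can keep one CPU core at or near 100%. */
    cudaDeviceSynchronize();
    return 0;
}
```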
steffen_moeller
Cruncher | Joined: Dec 3, 2005 | Post Count: 44 | Status: Offline
While what you are saying is all correct, the owners of CUDA-capable graphics cards should use them for projects that support CUDA - the difference is too drastic. Everything else is a waste of resources. Conversely, those users who today crunch for projects that support CUDA but don't have such a card should move to projects that cannot support CUDA yet, where their wall-clock time is still valued.

The cost of the hardware is close to nil, but the development costs a lot of time, is tedious, and does not work for all projects. One should not do it in a rush, but one should always aim for application acceleration - by hardware or by other, more traditional optimisation strategies.
Steve WCG
Senior Cruncher | Joined: May 4, 2009 | Post Count: 216 | Status: Offline
I get that WCG is not currently supporting a CUDA-aware version of BOINC because it is not hosting any projects that currently use CUDA. The issue I see is that people are told they have the wrong version (sometimes even when that has nothing to do with the issue at hand) if they are running a 64-bit version, a CUDA-aware version, or anything other than exactly what matches WCG's vision of what the BOINC world should be. Not everyone who crunches at WCG crunches only for WCG.

P.S. You bet I would like to see CUDA science apps at WCG; I have a GTX 295 ready to crunch away.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The World Community Grid recommended BOINC version is the only version security-audited by IBM and featuring WCG-specific enhancements. It is also the only version to have been exhaustively beta-tested by WCG.

You are welcome to use other versions, but you must get support directly from Berkeley. Later BOINC versions have not even been considered for beta testing by World Community Grid. Hopefully, when Berkeley achieves a stable version, WCG will test it.

http://wcg.wikia.com/wiki/BOINC_beta_testing

BOINC 6.4 and 6.6 stalled at step 1.