World Community Grid Forums
Thread Status: Active. Total posts in this thread: 47
nasher
Veteran Cruncher, USA. Joined: Dec 2, 2005. Post Count: 1423. Status: Offline
On that note, why not give the other 99.92% of idle cycles to this project, where we know for sure we will find answers to cures and other tangible issues. You know, ET = Earthly Targets.

It is up to each individual what they choose to run, not myself. I have run SETI now and then, though I'm not currently running it. If people run CUDA projects and keep requesting (demanding) them, then more projects will start to support them. Personally, I am happy if I can get one person who doesn't use distributed computing to crunch one work unit (from any project). Sure, they might not decide to continue, but that's their choice, their computer, and their time and energy.

CUDA sounds like a nice idea to me... I also understand some of the reasons why WCG hasn't shifted to it as of yet. I just live and crunch with my 3 computers (hopefully this weekend I will resurrect a 4th). Please do not tell people they are wrong because they want to use their computer their way. Myself, I am crunching about 99% on WCG and Help Conquer Cancer right now (till I go blue). Honestly, I used to try to crunch at least one week on EVERY BOINC-supported project, just to see how they feel (again, my choice).

Happy crunching, and I hope you all stay crunching!
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
If people run CUDA projects and keep requesting (demanding) them, then more projects will start to support them.

See, there is the problem again. Crunchers who have no technical skill in how to CUDA-enable a project do not cause projects to be CUDA-enabled. It's exactly like a camel in the desert DEMANDING water: the thirsty camel does not make the water appear. All the impatient or nasty attitude the camel can muster will not make it rain.

It is the scientists who decide on using CUDA. To make that decision, they have to KNOW that ALL of the following are true:
1) the selected GPU chips are technically able to perform the required calculations,
2) the scientists have access to staff who can write the software,
3) the work required to complete the programming will shorten the elapsed time for the project,
4) the scientists have the funding for the software development,
5) WCG has trained staff in place to support the crunchers,
6) WCG has users with correctly configured hardware in sufficient quantities, and finally the BIG one:
7) the results returned by CUDA have the same reliability as the results calculated by the CPU-based software.

If you follow all that, you realize that the scientists have to fund and complete all the CUDA work before they can even evaluate whether 6 and 7 are true, and the CUDA port may still get tossed after all the money is spent. I just do not see any way to get a positive ROI.

Just let it go, people. If it happens, it happens, but until then keep yourselves busy doing something productive.
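Point 7 in the list above, result reliability, is the one that can be checked mechanically. A minimal sketch of the idea (hypothetical helper name, not WCG's actual validator): GPU floating-point math can differ slightly from CPU math due to different rounding and fused operations, so results are compared within a relative tolerance rather than exactly.

```python
import math

def results_agree(cpu_vals, gpu_vals, rel_tol=1e-5):
    """Hypothetical check: do GPU results match CPU results within tolerance?

    An exact comparison would be too strict, since GPU and CPU code
    legitimately round intermediate values differently.
    """
    if len(cpu_vals) != len(gpu_vals):
        return False
    return all(math.isclose(c, g, rel_tol=rel_tol)
               for c, g in zip(cpu_vals, gpu_vals))

# Tiny rounding differences pass; a real discrepancy fails.
print(results_agree([1.0, 2.5], [1.0000001, 2.5]))  # True
print(results_agree([1.0, 2.5], [1.1, 2.5]))        # False
```

Whether a tolerance like this is acceptable for a given science application is exactly the question the scientists would have to answer before trusting CUDA results.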
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
See, there is the problem again. Crunchers who have no technical skills on how to CUDA-enable a project do not cause projects to be CUDA-enabled. [...] Just let it go people. If it happens then it happens, but until then just keep yourself busy doing something productive.

I don't know why you think it's your job to tell people what to do here, but if you'll refer back to the question that started this thread:

Any idea when testing on a GPU-enabled version of BOINC might start here on WCG?

I fail to see where I demanded anything. Nobody is forcing anyone to use their GPU... it's extra processor power and VRAM that goes unused most of the time, so why not start testing a version of BOINC that meets WCG's criteria for support and also makes use of the GPUs available on more and more systems every day, since the 6.2.28 WCG build (released two years ago come November) does NOT make use of them at all?

There have not been any BOINC updates for almost 2 months. Why is it not time to start the vetting process mentioned in the unofficial (unsupported) wiki pointed to by the didactless one earlier in this thread?
Ingleside
Veteran Cruncher, Norway. Joined: Nov 19, 2005. Post Count: 974. Status: Offline
Nobody is forcing anyone to use their GPU [...] Why is it not time to start the vetting process mentioned in the unofficial (unsupported) wiki pointed to by the didactless one earlier in this thread?

v6.6.38 was released to alpha testing on 22.07, and v6.6.39 (or later) will follow "soon"; it includes a needed enhancement to the global project-backoff mechanism re-introduced with v6.6.38, and should also fix a bug introduced by the latest Nvidia drivers. With ATI/OpenCL support scheduled for v6.10.xx, expected "soon" after the special GridRepublic v6.8.xx client is released, waiting for this before WCG goes through the so-called "exhaustive" beta testing and upgrades the "recommended" version shouldn't be a big problem.

But in the meantime, it's still in WCG's best interest to support all users, even if they run a non-recommended v5.10.xx or v6.6.xx release version. Most bugs/features are either not specific to one BOINC client, or are very easy to remember, like v6.6.xx showing run time instead of CPU time. When CEP (or any other WCG sub-project) starts spitting out "Size too large" errors or similar, asking users to downgrade, or to report a clearly project-specific bug to BOINC, doesn't make any sense to me...

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
But in the meantime, it's still in WCG's best interest to support all users, even if they run a non-recommended v5.10.xx or v6.6.xx release version. [...]

I don't think it is wise for WCG to use their very limited resources to do that, especially since WCG isn't responsible for the errors/problems derived from those "bugs/features", however easily you claim they could be solved. Just because you are using a non-recommended version doesn't mean that WCG must troubleshoot the problems you've encountered; you use it at your own risk in the first place.

[Edit 1 time, last edit by Former Member at Aug 2, 2009 4:57:28 AM]
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
This thread is wavering between two subjects. One is CUDA, which is not used by any of our current projects. (One section of a program used by CEP was converted to CUDA by the program's authors as an experiment. HCC might be working on a CUDA version, but I have not heard anything in almost a year, which is a bad sign.) The other is whether we should support our members no matter which version of BOINC they use. That subject has only one possible answer: of course we must support our members. It is permissible for the supporters to groan about unnecessary BOINC upgrades, but that is not very important.

Lawrence
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
Whoa, OK... I might be misunderstanding how CUDA works. I don't yet have a CUDA-enabled card except for this old Quadro 570, and I don't even think it's working until I benchmark in BOINC. On average the results show 300+ extra flops and 700+ extra integer ops; uninstall the card, and the difference is obvious. Not only is there more demand on the CPU, but the benchmark scores do not show the same increases even after overclocking. The CUDA drivers and OCing produce 3x more than without... without the client being optimized to use CUDA. Thus, once installed, your GPU is already helping your CPU, just not in a highly optimized way, which further support from BOINC (not WCG) would provide. But it must be done in such a way as to not mess up the results, or cause glitches or buffer overruns, etc. This is highly complex, and much testing will need to be done, probably burning up several cards in the process trying to dial it in. Patience will pay off =D

[Edit 1 time, last edit by Former Member at Aug 2, 2009 7:59:25 AM]
mikey
Veteran Cruncher. Joined: May 10, 2009. Post Count: 826. Status: Offline
Whoa, OK... I might be misunderstanding how CUDA works. I don't yet have a CUDA-enabled card except for this old Quadro 570 [...] Patience will pay off =D

CUDA crunching uses your GPU primarily. The CPU feeds work to the GPU, not the other way around as you are describing. In cases where the GPU is usable (not all projects can use the GPU, due to its limitations), it can be extremely fast and efficient! Units that take hours on a CPU can take minutes on a GPU. As I said, though, there are limits to what a GPU can do. It can't calculate Pi to the millionth digit, for example; it is just not that good at high-precision calculations, although newer GPUs do a much better job of it than the older ones. Also, CUDA refers ONLY to Nvidia video cards; ATI cards are only supported by OpenGL. OpenGL is where the industry is headed, due to its open standards. Nvidia cards are supported by OpenGL now, but since Nvidia gave all the tech support in the beginning, CUDA was here first!
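The precision point above can be illustrated without any GPU at all: the older CUDA cards of this era computed only in single precision (float32), while science apps typically rely on double precision. A small sketch, round-tripping a value through IEEE-754 single precision via Python's struct module, shows what gets lost:

```python
import struct

def to_float32(x):
    # Round-trip through IEEE-754 single precision, mimicking how a
    # single-precision-only GPU would store the value.
    return struct.unpack('f', struct.pack('f', x))[0]

delta = 1e-8  # representable in double precision, below float32's ~1.2e-7 epsilon
print((1.0 + delta) == 1.0)             # False: the CPU's double keeps the increment
print(to_float32(1.0 + delta) == 1.0)   # True: single precision rounds it away
```

This is one concrete reason why GPU results had to be validated against CPU results before a project could trust them.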
Sekerob
Ace Cruncher. Joined: Jul 24, 2005. Post Count: 20043. Status: Offline
[...] Also, CUDA refers ONLY to Nvidia video cards; ATI cards are only supported by OpenGL. OpenGL is where the industry is headed, due to its open standards. [...]

OpenGL, or OpenCL, or both maybe?
CAs are clueless, I read on a team forum, where it was not unequivocally qualified in what department, so let's ask a clueless question to underline that ;>)

WCG
Please help to make the Forums an enjoyable experience for All!
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
But in the meantime, it's still in WCG's best interest to support all users, even if they run a non-recommended v5.10.xx or v6.6.xx release version.

I completely disagree. If you want HELP, it is in your best interest to use supported software. If you are not going to bother the WCG staff and CAs, feel free to do as you like.