World Community Grid Forums
Thread Status: Active | Total posts in this thread: 14
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I have a GeForce 8600. Can I let it join in the work with some software?
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Hi,
Fear not. To summarize all the different questions on specialized hardware, for which there is nothing on the horizon to date:

- CUDA-coded sciences - NO
- GPU processing - NO
- XBOX projects - NO
- PS3 projects - NO
- Game/entertainment consoles - NO
- Graphics card & Cell processor driven computing - NO

WCG will consider any of these only if:

A: Scientists have a viable humanitarian / life-science, grid-suited project and propose it to WCG.
B: There is sufficient work for the proposed project to last something like 6 months on a large number of devices.
C: It is fit to run on the targeted hardware.
D: It passes security auditing, so that WCG can say it's safe.
E: It does not inhibit the regular use of the devices, i.e. it is of a "set and forget" nature.

...and a bunch more requirements it needs to comply with. So far, only two such efforts that I know of have gone public - at F@H and PS3Grid - and they are still very much in the development and testing stages. Find a scientist who has a big, suitable job and send them over to the proposal page at http://www.worldcommunitygrid.org/projects_showcase/viewSubmitAProposal.do for WCG's consideration, and we might have it here one day. The next Project Proposal Review date is September 30, 2008.

cheers
WCG
Please help to make the Forums an enjoyable experience for All!
[Edit 3 times, last edit by Sekerob at Jul 13, 2008 9:16:11 AM]
twilyth
Master Cruncher (US) | Joined: Mar 30, 2007 | Post Count: 2130 | Status: Offline
With ATI's new 4800 series, there is going to be a lot of unused processing power out there. The 4870 can do 1.2 Tflops, and the 4870X2 will be significantly more - hopefully close to double. With 4 of those in a CrossFire config, that might be as much as 8 Tflops. For comparison, a stock Q6600 is less than 10 Gflops - so basically a difference of nearly 3 orders of magnitude. Somebody has to want to tap into that.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
twilyth wrote:
"With ATI's new 4800 series, there is going to be a lot of unused processing power out there. [...] Somebody has to want to tap into that."

Right now at Folding@Home, the nVidia cards are significantly outperforming the ATI cards... we'll see if the code development changes that.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hello twilyth, hello Sekerob,

"... Somebody has to want to tap into that." - twilyth

If my understanding of Sekerob's post (Jul 13, 2008 9:08:05 AM) is any indication of the prospects of GPU programming for DC purposes, it seems to me that the idea of tapping into a GPU's power is like the idea of tapping the power of the Sun to supply the entire power needs of all nations for centuries and beyond. The power is doubtless there, and everyone wants it. Could we use it? In the case of the GPU, well, you be the judge.

Not that the idea of a GPU crunching DC projects is undesirable in itself. The sticking point, it seems to me, is that many people dismiss the idea straightaway, equating desirability with the means to realize the object of the desire. To these people, if there is no current means to an end, then the desirability of that end is thrown out the window. For example, if there is no means or technology to go to the moon, then one shouldn't want to go to the moon, and those who harbor the idea risk being labeled with uncomplimentary words. In the same vein, if there is currently no way for a GPU to be used to crunch projects, then, the thinking goes, one shouldn't put effort into investigating approaches that would let a GPU do the work of a CPU in crunching projects.

Thankfully, there is work underway (at Folding@Home and possibly elsewhere) on GPU programming for crunching projects. They have tough and difficult work ahead of them before GPU crunching becomes as common as, shall we say, going to the moon!
twilyth
Master Cruncher (US) | Joined: Mar 30, 2007 | Post Count: 2130 | Status: Offline
I think there is a learning curve involved, and with BOINC being open source, there probably aren't a lot of programmers outside the gaming community who want to deal with a whole new way of coding. I don't think WCG has the resources for such a project, so I can't really blame them. Plus, nVidia is still the standard for video, but ATI is taking a big bite out of their performance lead with the 4800s, and in terms of price/performance it may be safe to say they will take the lead.

But when you stop to realize that one 4-way CrossFire 4870X2 machine can do the work of maybe 800 Q6600 quad-cores, you have to give serious consideration to at least trying to port the application. Of course, 4-way CrossFire will be the rare exception, but even one 1.2-Tflop 4870 will do the work of at least 100 Q6600s.

I think the main problem in the past has been that video cards tend to only do single-precision flops. But I think - not sure, but think - that newer cards can do double precision, just at half the speed. I don't really follow these things except casually, so hopefully some of the more technically inclined will chime in.
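To give a rough idea of what that "whole new way of coding" looks like, here is a minimal, untested sketch of a CUDA kernel doing a single-precision a*x + y over a large array - the kind of data-parallel arithmetic these cards are built for. It assumes nVidia's CUDA toolkit, and the names and sizes are purely illustrative; changing float to double is exactly where the half-speed penalty on double-precision-capable cards would show up.

// saxpy.cu - one array element per GPU thread; compile with nvcc.
// Sketch only: host-side data and error checking are omitted.
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];                     // one update per thread
}

int main(void)
{
    const int n = 1 << 20;                            // ~1 million elements
    float *d_x = 0, *d_y = 0;
    cudaMalloc((void **)&d_x, n * sizeof(float));     // buffers live on the card
    cudaMalloc((void **)&d_y, n * sizeof(float));
    // ... cudaMemcpy host data into d_x and d_y here ...
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y); // 256 threads per block
    cudaDeviceSynchronize();                          // wait for the GPU to finish
    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}

Every one of those million updates becomes its own GPU thread, which is why the programming model feels so different from ordinary CPU code.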
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The programming side of a GPU application is the aspect that interests me. Technically, BOINC 6 can run GPU programs (it reserves the whole GPU for BOINC), so the remaining practical difficulties are for us to check, support, and maintain a GPU program. Then there is the practical aspect: just how many members will run it on their GPU? This last point is very important. Folding@Home can expect to get everybody who wants to run distributed computing programs on their GPU; once there are multiple choices, that happy state of affairs is over.

I have not read much comment from GPU programmers, so I have to guess. My guess is that it is crucial to start with a program that is organized around data-parallel semantics; that is the crucial step. After that, it is "just" grinding out the code. Off the top of my head, it sounds as though the original (pre-GPU) program should be written in APL ( http://en.wikipedia.org/wiki/APL_(programming_language) ) or written to use the BLAS (Basic Linear Algebra Subprograms) library ( http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms ). This would organize the program so that it would be (relatively) easy to use SIMD instructions (SSE extensions) or to code for the PS3 or any GPU (see the P.S. below for a sketch).

Do current programming classes stress the importance of using the BLAS library? For a long time, parallel programming was an abstract computer science idea that was only implemented on special hardware. Now that it is starting to show up on standard real-world computers, have teachers adapted to this change?

WCG just on-boards the programs given to us to run on BOINC. The project scientists have to actually design and write the application program, and unless they write it from the start for parallel computing, it is likely to be a multi-year task to adapt it to a GPU.

Does anybody have some additional ideas about how to structure a program to run on parallel hardware?

Lawrence
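P.S. To illustrate the restructuring I mean, here is a rough, untested sketch in C (the update_serial and update_blas names are made up, and it assumes a CBLAS implementation such as Netlib BLAS, ATLAS, or Intel MKL is linked in): the same update written first as an ordinary serial loop, then as a single BLAS call that a vendor library could implement with SSE, multiple cores, or GPU code underneath.

// Sketch: expressing a computation through BLAS instead of a raw loop.
#include <cblas.h>

// Ordinary serial formulation: correct, but nothing tells the library or
// runtime that the iterations are independent and may run in parallel.
void update_serial(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

// The same computation as one named BLAS operation ("saxpy": y = a*x + y).
// Because the whole operation is named, the library underneath is free to
// implement it with SSE, multiple cores, or a GPU (nVidia ships a CUDA-backed
// BLAS called CUBLAS) without the scientist's source code changing.
void update_blas(int n, float a, const float *x, float *y)
{
    cblas_saxpy(n, a, x, 1, y, 1);
}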
twilyth
Master Cruncher (US) | Joined: Mar 30, 2007 | Post Count: 2130 | Status: Offline
So I guess the individual processors can't be accessed directly? I know that Intel's Larrabee will have 24-48 x86-compatible processors; maybe it would be easier to port to that sort of system? Personally, I don't see how they're going to match nVidia and ATI with so few processors, but they're claiming they will beat the current offerings, IIRC. DreamWorks has already signed on, so hopefully there is something to this claim.
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Not a good sign when, for a system with a multi-core CPU and one GPU, or even multiple GPUs, the instructions say:

"All of these are in principle possible, but not tested yet. You should be able to use 4 GPUs as long as you have a quad core CPU. If it does not work, we will fix it. The only known issue: you have to deactivate SLI support. CUDA works only without SLI."

So you have this power system on which you want to crunch with multiple cards, and then you are told to disable a key feature for which the hardware was procured in the first place. Who's doing that? And then it says you need Linux to run it... very mainstream and tinker-free crunching for the masses. A prediction for when we first see this on a main grid using Windows? Not venturing out on that limb. My nVidia accelerator is not even recognized by the current BOINC 6 developers' version. I'll see it when I see it.
WCG
Please help to make the Forums an enjoyable experience for All!
ALAIN_13013
Advanced Cruncher (France) | Joined: Nov 28, 2006 | Post Count: 83 | Status: Offline
Hello,
Have a look here - a couple of not-bad URLs about CUDA:

http://boinc.berkeley.edu/all_news.php#274
http://www.nvidia.com/object/cuda_learn_products.html

I live in Marseille.