World Community Grid Forums
Thread Status: Active | Total posts in this thread: 73
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
300 watts running 24/7 would add 850 Euro (1250+ USD) to our annual bill at the top-tier tariff. I had a GT 220 sitting in the PCIe x16 slot, bought for around 80 Euro (can't remember the exact price, but it was a bargain at the time). That's all it does presently... sitting.
--//--
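(For reference, the arithmetic behind that figure, as a minimal Python sketch: only the 300 W load and the 850 Euro/year come from the post above; the implied tariff simply falls out of the division rather than being an official rate.)

```python
# Back-of-the-envelope check of the running-cost claim above.
# Only the 300 W load and the 850 Euro/year figure come from the post;
# the implied tariff is derived from them.

POWER_W = 300                 # continuous draw of the crunching rig
HOURS_PER_YEAR = 24 * 365     # 24/7 operation

kwh_per_year = POWER_W / 1000 * HOURS_PER_YEAR   # 2628 kWh
implied_tariff = 850 / kwh_per_year              # ~0.32 EUR per kWh

print(f"Annual consumption: {kwh_per_year:.0f} kWh")
print(f"Implied tariff:     {implied_tariff:.2f} EUR/kWh")
```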
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
While a three-year-old stock GTX295 does 7.4 times the work of a slightly overclocked i7-920, an HD6950 does 30 times the work of an i7-920. On the meaningless maths projects the GPU/CPU difference is much greater.

The 7800GT, 8400GS and GT220 are not crunching cards. These are old and entry-level system cards that can just about manage to display a picture on the screen. To talk about them in the context of crunching shows a lack of knowledge in this area. It's like having a fine art discussion and being interrupted by "my crayons won't write on the wallpaper properly."

On cost: crunch off-peak, and replace one of your 200 or 300W systems with a single 250W card and a power-efficient PSU.

WRT the i7-2600K and GPU crunching, I have one and tested its influence on a GPU project that uses one full GPU and one full CPU; it increased performance by less than 2% over an i7-920, which is itself about 20 to 30% slower. The moral: get a good GPU and keep your decent CPU, not the other way round (I just needed a new system, or I would have at least waited until BD if not Q4). 22/28nm GPUs are also in development, though I think it will be next year before we see them. Unfortunately the reduced power requirement will be offset by the continuous increase in power prices and taxes.

To answer Atrolab's question about using x86/x64 with a GPU: this is being developed to some extent in a variety of ways, pretty much all having to work around ownership (patent) rights; the scourge of industry and development. Intel will never allow ATI or NVidia to use x86, or they would immediately lose control of their future and the markets. If AMD and NVidia could integrate this technology with their GPU architectures, then CPUs would quickly become a thing of the past. Of course AMD/ATI could use x64 and might well do so at some stage in the future, if they have not messed that opportunity up already. NVidia have no choice but to continue with discrete GPU systems, designed partly for CUDA and OpenCL.

High-end cards are basically for gaming and flag waving. These cards run CUDA and OpenCL not for scientific research (the market is still too small), but to accelerate architecture and video-editing software. This is really an add-on, so we are lucky they can be used for crunching at all. The lesser cards sometimes support CUDA/OpenCL because their designs are based on the high-end GPUs, but features are often stripped down or removed; double precision, CC1.3.

Perhaps in the future computer modeling researchers can organize themselves into a group with enough clout to talk to ATI and NVidia directly and ask for GPUs that are specifically designed for research.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
It's what we have, skgiven, not what is expected of members!
The 7800GT, 8400GS and GT220 are not crunching cards. These are old and entry-level system cards that can just about manage to display a picture on the screen. To talk about them in the context of crunching shows a lack of knowledge in this area. It's like having a fine art discussion and being interrupted by "my crayons won't write on the wallpaper properly." ...And thank you so much for contributing, as always, your unrestrained thoughts. --//--
robertmiles
Senior Cruncher (US) | Joined: Apr 16, 2008 | Post Count: 445 | Status: Offline
Perhaps in the future computer modeling researchers can organize themselves into a group with enough clout to talk to ATI and NVidia directly and ask for GPUs that are specifically designed for research. I'd expect Nvidia to answer that they already offer such a series, the Tesla series.
robertmiles
Senior Cruncher (US) | Joined: Apr 16, 2008 | Post Count: 445 | Status: Offline
Since this thread has drifted a little bit, I'd like to go OT for a question. My "newest" desktop is a Prescott P4 3.0 on an Asus P5GDC MoBo (based on the 915 chipset). It has an Nvidia 7800GT card that is not supported by BOINC (or any kind of GPU processing other than games). Any suggestions for a newer video card that could make this old system more useful, of course in another BOINC project for the time being and in WCG in a possible future? GPUGRID wants especially fast cards, such as the GTS 450 I have or higher (but Nvidia-based only). They used to recommend the GT 240 due to its high performance for the power required, but now say that they're planning to drop the ability to use GT 240 cards in a few months. I'd expect BOINC projects currently offering CPU workunits to adopt similar requirements more slowly, so a GT 240 is more likely to be a good choice for them.
robertmiles
Senior Cruncher (US) | Joined: Apr 16, 2008 | Post Count: 445 | Status: Offline
To answer Atrolab's question about using x86/x64 with a GPU: this is being developed to some extent in a variety of ways, pretty much all having to work around ownership (patent) rights; the scourge of industry and development. Intel will never allow ATI or NVidia to use x86, or they would immediately lose control of their future and the markets. If AMD and NVidia could integrate this technology with their GPU architectures, then CPUs would quickly become a thing of the past. Of course AMD/ATI could use x64 and might well do so at some stage in the future, if they have not messed that opportunity up already. NVidia have no choice but to continue with discrete GPU systems, designed partly for CUDA and OpenCL. Another idea to consider: persuade Intel to merge with or buy Nvidia, so that they get the GPU advantages and immediately have BOINC support available, plus better software support for GPU applications than AMD/ATI has yet. Intel MIGHT currently be trying to reduce the total value of Nvidia so they can buy it at a cheaper price later. Also look into whether AMD has suitable rights to use x86 and x64; since they now own ATI, can't they just extend any such rights they have to ATI? [Edit 3 times, last edit by robertmiles at May 6, 2011 2:13:41 PM]
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
The GT240 is a bit of an odd card. It's a sort of mid-range card sitting between the GeForce 9 series and the high-end GTX200 cards. On the one hand, CC1.2 means no double precision, so it was not usable at MW; on the other, it meant that at GPUGrid the 40nm architecture, combined with better relative performance (CC1.2) and stability than the CC1.1 cards, allowed this 96-shader card to outperform some previous-generation high-end cards (128 shaders). In terms of performance per watt (69W TDP and an idle power usage of 9W) it even outdid the high-end GTX200 series cards. So for one project it filled the entry-level card gap between the CC1.1 cards and the Fermi cards.

While its crunching days there are limited (it's been 18 months since it was released), it's still a decent desktop card for office/home use (low power, low profile, HDMI, VGA and DVI). I think it can be used with Einstein and some mathematical projects, if you are so inclined. Since it entered the market, two generations of Fermis have hit the shelves. The reason for it no longer being recommended is to stop people going out and buying one before the forthcoming move to CUDA 4, which for project reasons (to facilitate Fermi cards) means changes to the apps, which in turn will cause a performance drop on these cards, further hindered by the recent drivers that cause GPU clocks to default to their lowest performance state (e.g. 608MHz to 50MHz).

The Tesla series is more for servers, medical analysis systems (e.g. MRI) and well-funded internal research than for grid computing. Teslas are extremely expensive, so not many research projects can afford them. Relative to the overall cost of an MRI machine, several thousand pounds is neither here nor there, but to a small research project it's not money well spent. The Teslas also underperform slightly relative to their GTX counterparts, and they don't compete well against ATI for double precision. While they may last 5 years, GPUs move on quickly, so such a high investment is probably not that wise for researchers anyway. So while some of these can be used for research, they are not a practical choice for crunchers.

It's really how a card performs, and how much it costs to buy and run for a given project, that makes it recommended or not. Should HCC, NRW or any other project be facilitated at WCG, don't expect all cards to work well, or even to be worth the wattage. Expect different cards to perform differently for different projects. Hundreds of people will buy cards just to participate, so it will be essential that WCG make recommendations of which cards are good and which are not. Don't disappoint them; tell them up front what to get.

I'm sure Intel will not allow AMD to extend their x86 contract into GPUs, but an x64 system is becoming more and more likely as x64 operating systems and applications become more popular. Perhaps in 2 or 3 years. I think Intel and NVidia are unlikely to merge any time soon, if ever; too many past spats and a lack of co-operation. The financial situation is probably not right for it anyway, and the political opposition to such a merger might be too strong. Worse still for Intel, NVidia are working with ARM (Tegra). ARM is an emerging competitor of Intel and AMD, and a very big fish (more than a billion processors per year).
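(As a rough illustration of the "cost to run" point, here is what a single GT 240 would add to an annual bill. A sketch only: the 0.20 EUR/kWh tariff is an assumption for illustration, so substitute your own rate.)

```python
# Illustrative yearly electricity cost of one card crunching 24/7,
# using the GT 240 power figures quoted above. The tariff is assumed.

TARIFF_EUR_PER_KWH = 0.20      # assumption; use your own contract rate

def annual_cost(load_watts: float) -> float:
    """Yearly cost in EUR of a constant electrical load."""
    return load_watts / 1000 * 24 * 365 * TARIFF_EUR_PER_KWH

print(f"GT 240 crunching (69 W TDP): {annual_cost(69):5.0f} EUR/year")
print(f"GT 240 idle (9 W):           {annual_cost(9):5.0f} EUR/year")
```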
Coleslaw
Veteran Cruncher (USA) | Joined: Mar 29, 2007 | Post Count: 1343 | Status: Offline
Yeah... nVidia is more focused on integrating with ARM for the netbook/tablet market. Intel is unlikely to merge with nVidia not because of spats, but rather because of the US Justice Dept. and, as you said, other political concerns. It would be seen as anti-competitive, since Intel is currently regarded as number three in the video card market.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Yes, the idea of something like the GT240 is more or less what I have in mind. I'm not trying to build a monster cruncher, but rather making a modest attempt to pep up an old system. I have to keep in mind the limitations of my MoBo and PSU.
I'm probably leaning towards Nvidia, so I think I should look for one of the cards listed at the following link, in the "GeForce GTS and GT" desktop section: GeForce Graphics Processors
The GT240 is no longer listed on that page. Perhaps it was replaced by the GT440? Of course, nothing in there can help me choose the better option for BOINC crunching.
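(Before settling on a card, a quick PSU headroom check is worth doing for a box that old. A sketch only: the PSU rating and base system draw below are placeholder assumptions, and the TDPs are approximate; read your actual PSU label and the card's specification.)

```python
# Quick check of whether a candidate card fits an old system's PSU.
# All wattages here are assumptions/placeholders for illustration.

PSU_RATING_W = 350      # assumed rating of an older OEM power supply
SAFETY_MARGIN = 0.80    # avoid planning to run an aging PSU above ~80%
SYSTEM_BASE_W = 180     # rough draw of a Prescott-era box without the GPU

def card_fits(card_tdp_w: float) -> bool:
    """True if the card's TDP fits within the PSU's usable headroom."""
    return SYSTEM_BASE_W + card_tdp_w <= PSU_RATING_W * SAFETY_MARGIN

for name, tdp in [("GT 240", 69), ("GT 440", 65), ("GTS 450", 106)]:
    print(f"{name}: {'fits' if card_fits(tdp) else 'too tight'}")
```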
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
You're right about the limited info; NVidia doesn't even tell you the number of CUDA cores! You really do need to get a card specifically for a project.
I would expect a GeForce GT 540 to turn up some time later this year with 144 CUDA cores. In my opinion that would be an entry-level crunching card. No point getting an older-generation card. The shader access problem will probably still be there for some projects, though. The best place to ask what card to get is at the project you want to crunch at; there isn't one here.