World Community Grid Forums
Thread Status: Active · Total posts in this thread: 26
Coleslaw
Veteran Cruncher · USA · Joined: Mar 29, 2007 · Post Count: 1343 · Status: Offline
Another middle management paper pusher... here it is. People already have the low-end cards. People can afford the up-front cost of low and mid-range cards. People can typically absorb higher operational costs as time ticks away. Most don't set aside the pennies of electricity they saved to put towards a top-of-the-line efficient card. And most importantly, many of these low-end cards can still put out nearly as many results as some of today's CPUs, so it makes sense to use them if the option is there.

Does this mean you should go out and buy a ton of low-end cards for crunching? No. Simply put, we use what we can afford, and we typically want to use what we are already trying to get a return on investment from. If you say buy high-end now, you have to keep using it even when more efficient cards come out, and then you are not following your own advice, because you should stop using that card in favor of the next-generation one. If you do continue to always upgrade, you more than likely will never see the savings, because you are paying the absurd cost of top-of-the-line cards. It is a double-edged sword.
Edit: As far as helping these "clients" with their investments, look at alternatives. For example (and I use these two cards as examples often; both are truly outdated for any new purchase): try suggesting the 210 instead of the 8400GS. The 210 is almost identical in performance but uses a third of the power, and both cost pretty much the same. You will find that there are a lot of cards like this. However, the 8400GS was out long before the 210, so that choice did not exist back when it was top of the line. So, if you are advising upgrades so the servers perform better, you really need to understand that nobody ever truly sees the savings over the long term. As I mentioned before, nobody really sets aside the difference in operational cost each month after making the decision.

Edit 2 & 3: typo

[Edit 3 times, last edit by Coleslaw at Sep 8, 2012 9:58:02 PM]
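To put rough numbers on the "savings you never see" point: the sketch below estimates how long the electricity savings of a more efficient card would take to pay back its purchase price. Every figure in it (wattages, card price, electricity rate) is an assumption for illustration only, not a measured spec for the 210, the 8400GS, or any other card.

```python
# Back-of-the-envelope payback estimate for swapping an older card for a
# more power-efficient one. All numbers are illustrative assumptions.

old_card_watts = 45.0     # assumed draw of the older card while crunching
new_card_watts = 15.0     # assumed draw of the more efficient replacement
new_card_price = 40.0     # assumed purchase price of the replacement (USD)
rate_per_kwh = 0.12       # assumed electricity price (USD per kWh)

hours_per_day = 24        # card crunching around the clock
kwh_saved_per_day = (old_card_watts - new_card_watts) * hours_per_day / 1000.0
savings_per_day = kwh_saved_per_day * rate_per_kwh
payback_days = new_card_price / savings_per_day

print(f"Electricity saved: about ${savings_per_day:.2f} per day")
print(f"Purchase pays for itself in roughly {payback_days:.0f} days")
```

With these assumed figures the saving is under a dime a day and the payback takes well over a year, which is exactly the window in which the next generation of cards tends to arrive.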
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
Hello Coleslaw
Reference: Coleslaw [Sep 8, 2012 9:42:28 PM] post

Nowhere in my post have I used the phrase "high-end", and you may not have taken into account my use of the phrase "two or more generations old GPUs". The closest suspect words that you may have confused with "high-end" are "current-generation" and "new-generation". Allow me to clarify.

Any top-of-the-line card is necessarily the top-of-the-line performance card -- and by definition also the bottom-of-the-line power-efficiency card among its generation of GPUs. Any low-end card of a given generation is almost always low-cost to purchase, but not necessarily the most power-efficient in operation for that generation.

Take the AMD/ATI group of cards. The top-of-the-line card, and therefore a "high-end" performance card, is the HD7970, which stacks up at the bottom in terms of power-efficiency among its "new-generation" of GPUs. Over in the efficiency area lives what AMD markets as a "mainstream" card of the "new-generation": the HD7700 series, which has the highest power-efficiency among AMD's "current-generation" of cards as well as among the "new-generation" of AMD cards.

There is a situation, though, where cost or performance takes a back seat to a factor we all call capability. Enter OpenCL. This is where the older cards need to retire sooner than their owners may have otherwise wanted, regardless of any calculation of the economics or non-economics of doing so -- but only for GPU-computing. The older cards may still be usable, but guaranteed only for the older games. Hear the sound of progress moving forward, or is the sound of obsolescence retreating into history getting more attention?

The good news is that having a good OpenCL card now need not be a pain in the neck, and I described a situation where the economics is also there as a bonus for both the client using the card and the server providing the support.

;
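To make the performance-versus-efficiency distinction concrete, here is a minimal performance-per-watt comparison. The throughput and power figures are placeholders chosen only to show the shape of the argument; they are not official specifications for the HD7970, the HD7700 series, or any other card.

```python
# Performance-per-watt sketch: a faster card can still do less work per
# kilowatt-hour than a slower, more efficient one. Figures are placeholders.

cards = {
    # name: (assumed results per day, assumed watts while crunching)
    "top-of-the-line card": (2000.0, 250.0),
    "mainstream card":      (800.0,   80.0),
}

for name, (results_per_day, watts) in cards.items():
    kwh_per_day = watts * 24 / 1000.0
    print(f"{name}: {results_per_day / kwh_per_day:.0f} results per kWh")
```

With these placeholder numbers the top card produces more results per day, but the mainstream card produces more results per kilowatt-hour, which is the sense in which a "mainstream" card can be the efficiency leader of its generation.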
widdershins
Veteran Cruncher · Scotland · Joined: Apr 30, 2007 · Post Count: 677 · Status: Offline
Quoting the earlier post:

"I fail to see how any model of two or more generations-old GPUs can do better than even the low-end models of current-generation GPUs in terms of support cost to operate and to maintain at both the client-side and the server-side. Over the long term, clients who hang on to their old-generation GPUs not only pay more for the same or lower performance, but also say "No" to paying less for the same or higher performance. Over the long term, weighed down by having to also support clients who chose to hang on to their old-generation GPUs, servers cannot tune their resources to maximize the cost-per-performance advantages of new-generation GPUs."

Part of the problem is that many casual crunchers, and even some more hardcore ones, can't upgrade their graphics card easily. Some can't because the card came pre-assembled with their PC and they don't have the technical skills to replace it themselves, and they don't want the expense or hassle of taking it to a PC repairer when it's working fine. Another part of the problem is that many people nowadays have laptops; upgrading the graphics card in one of those is even harder! So for many, upgrading their graphics card means buying a new computer, and there will be a large number of crunchers with older graphics cards who simply won't upgrade because WCG isn't important enough to them to justify it.

Let's not forget that many who crunch for WCG do so part-time, and the vast majority will never or rarely visit these forums. By supporting the older cards you open up a large resource among this group of crunchers. There are a few thousand top crunchers who contribute large numbers of points per day and visit the forums regularly, but their combined points will be outweighed by the vast silent majority of hundreds of thousands of part-time crunchers, each dripping in a few hundred points per day.
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
Hello widdershins.
Reference: widdershins [Sep 9, 2012 7:00:06 AM] post

Once upon a time, there was the GPU, and it was good at rendering games. Everyone who contemplated buying a GPU did so for only one reason: to play games rich in visuals. GPU-computing was not yet born, and GPUs had no reason to believe the world was anything other than games.

Enter GPU-computing. Why the insistence on using a card that was not designed to do GPU-computing? It does not make sense, nor cents, to use a card that never had any idea of what GPU-computing is all about, let alone one that can do GPU-computing efficiently. No amount of support for legacy, non-GPU-computing-capable cards will ever make up for, much less overtake, the GPU-computing contributions of GPU-computing-capable cards.

Another thing to keep in mind is that companies are burdened with legacy hardware and software that they need to support well after they have gotten their use out of them. WCG is yet to launch a GPU-computing application. To have WCG be burdened with supporting legacy cards, and the legacy OSes behind them, right off the bat at launch is an interesting idea, don't you think so?

;
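For anyone wondering whether a particular card is OpenCL-capable at all, the quickest check is simply to enumerate the devices the driver exposes. A minimal sketch, assuming the pyopencl package and a vendor OpenCL runtime are installed; cards that predate OpenCL support will simply not show up in the list:

```python
# List every OpenCL platform and device the installed drivers expose.
# Requires pyopencl and a vendor OpenCL runtime; a card with no OpenCL
# support will not appear here, however good it was for its era of games.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}")
        print(f"    OpenCL version: {device.version}")
        print(f"    Compute units : {device.max_compute_units}")
        print(f"    Global memory : {device.global_mem_size // (1024 * 1024)} MiB")
```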
Coleslaw
Veteran Cruncher · USA · Joined: Mar 29, 2007 · Post Count: 1343 · Status: Offline
So by your argument we shouldn't use traditional GPUs but rather Tesla cards, which were ultimately designed for this purpose. Even nVidia is trying to push its consumers in this direction. Most crunchers will tell you about the cost benefits of using off-the-shelf hardware that matches or exceeds these cards without the expense. So, your argument there is lacking.
Supporting legacy cards out of the gate is actually pretty important. If your code and design account for them from scratch, it is easier to get them working later. Designing your app first and trying to patch in the old cards afterwards may leave the apps broken. Any project lead will tell you to keep the entire scope in mind when you make the road map. As you have described yourself, legacy hardware isn't disappearing any time soon.
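As a sketch of what "keeping the entire scope in mind" can look like in code: decide the fallback path at design time rather than bolting it on later. Everything below is hypothetical (the function name, the threshold, the use of pyopencl); it is not WCG's or BOINC's actual design, just one way an app can degrade gracefully on hosts without a capable GPU.

```python
# Hypothetical capability check: choose a compute path at startup instead
# of assuming every host has a modern GPU and patching in a CPU path later.
import pyopencl as cl

def pick_compute_path(min_compute_units=4):
    """Return ('gpu', device) if a usable OpenCL GPU exists, else ('cpu', None)."""
    try:
        for platform in cl.get_platforms():
            for device in platform.get_devices(device_type=cl.device_type.GPU):
                if device.max_compute_units >= min_compute_units:
                    return "gpu", device
    except cl.Error:
        pass  # no usable OpenCL runtime on this host; fall through to CPU
    return "cpu", None

path, device = pick_compute_path()
print(f"Using the {path} path" + (f" on {device.name}" if device else ""))
```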
Former Member
Cruncher · Joined: May 22, 2018 · Post Count: 0 · Status: Offline
The Tesla card is good at GPU-computing, but it is priced way beyond the reach of mere mortals. The AMD card option is just about right price-wise for the rest of us, for both gaming and GPU-computing of the OpenCL flavor.

;