World Community Grid Forums
Thread Status: Active | Total posts in this thread: 23
Hypernova
Master Cruncher | Audaces Fortuna Juvat! | Vaud, Switzerland | Joined: Dec 16, 2008 | Post Count: 1908 | Status: Offline
It is now four years since I started crunching for WCG.
When I decided to build my crunching farm in 2008/2009, I standardized on socket 1366 ASUS motherboards and slowly settled on the 980X: I had some 920s and 950s that I used at first, then gradually sold and replaced. I have both ATX and mATX formats. The mATX boards are all of the Asus Rampage II and III Gene type, and the ATX boards are also from Asus, of various types: P6TD Deluxe, P6T Deluxe V2, Maximus Formula, Rampage II and III Extreme. They all run flawlessly. I feel it has been a pretty good choice, though admittedly a somewhat expensive one.

Even today, for the same price, the 980X remains unbeatable. It is on par with the 990X (no visible difference) and with the i7-3970X, of which I have one running, so I can compare. Sure, the 3970X consumes a little less and can go a little higher in OC at equivalent core temperature, but all in all the 980X still remains to be clearly beaten. That will happen when Intel introduces a mainstream 8- or 10-core part, but I do not see it coming, and even when it arrives the questions will be: at what frequency and TDP will it be rated, and what OC capability will it have? If you have 10 cores at 2.40 GHz, that is roughly equivalent to a 980X (6 cores) at 4 GHz. Maybe the direction will be Xeon Phi-type coprocessors. Wait and see. The socket 1366/980X combination was a good choice for staying highly performant over the long term on machines that crunch 24/7.

With nearly 20 machines running (about 14 run 24/7), I have had very few problems. No CPU failures at all, and they run at 4 GHz. The Asus Triton 88 CPU coolers are slowly dying (the bearings and motor axle fail, and they are not worth repairing) and are being replaced with the Noctua NH-D14. These Noctuas could last as long as the CPU does: when a fan's ball bearing fails, you just replace the 120 or 140 mm fan and that's it. In addition, the Noctuas have much better cooling capability, so the CPUs run cooler (and will live longer) at the same frequency.
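The 10-cores-at-2.40 GHz versus 980X-at-4 GHz comparison is just aggregate clock throughput. A rough sketch of that arithmetic, assuming work scales linearly with cores times clock and ignoring IPC differences between generations (a simplification):

```python
def core_ghz(cores: int, ghz: float) -> float:
    """Aggregate clock throughput in core-GHz (linear scaling assumed)."""
    return cores * ghz

# Hypothetical mainstream 10-core at a stock 2.40 GHz
future_chip = core_ghz(10, 2.4)   # 24.0 core-GHz

# Six-core 980X overclocked to 4 GHz
x980 = core_ghz(6, 4.0)           # 24.0 core-GHz

assert future_chip == x980        # same aggregate throughput
```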
I have about 15 Noctuas in service now and they are perfect. A few 120 mm chassis fans failed and just had to be replaced; again, the cause was bearing issues. Considering that I have 4 to 5 chassis fans per unit, that makes around 90 chassis fans in total; having only a few fail over many years is really not a bad result. Again, choosing a good-quality chassis with a good air-cooling design is important.

The system disks have all been VelociRaptors of 150 GB or 300 GB for the rigs that also see personal use. One cruncher has been running flawlessly for a year on a Crucial M300 SSD, but the price difference was not worth fitting SSDs everywhere. This is slowly changing as prices drop dramatically; in terms of crunching performance, the difference is minimal.

The PSUs have been less reliable. I started with Bronze models and quickly jumped to the Gold standard to reduce losses. The initial PSUs were Cooler Master and Thermaltake models. Last year they started failing and becoming unstable, so I replaced them with a single standard model, the Corsair AX850, keeping future GPU crunching at WCG in mind. Some crunchers were already running Nvidia boards on GPU Grid, so those already had beefed-up, more modern PSUs. When I started GPU crunching 24/7 on WCG with high-end, overclocked boards like the ATI HD 7970, I had bad surprises with some Thermaltake power supplies that became unstable. They had a sufficient power rating at 850 W, but stability became an issue; when they were replaced by the Corsair AX850, everything was back to normal. I will not say the Thermaltakes are at fault: they were 2008 models, and there has been a lot of progress since. In fact, they ran pretty well 24/7 for a few years, nothing to complain about. The latest units I installed are the AXi 860 series, which are Platinum rated. Corsair has a 5-year warranty on the AX series and a 7-year warranty on the AXi series.
Excellent for long-term crunching. All in all, I do not regret the hardware choices, as I have had very little trouble. All these rigs will be able to crunch for a long time before they need a mobo/CPU replacement for being outdated or too inefficient.

On the software side, Linux would probably have been a better choice than Win7. First, because Linux has no license fee; even though I bought the Win7 licenses at OEM prices, since I install my own hardware, it is still a high cost when you multiply it out. Second, because crunching efficiency is much higher on Linux, by 20% or more. The problem is that I am totally illiterate in Linux and do not have the time to learn and become savvy in that world. I made a few tries, which ended in nervous breakdown.

[Edit 1 times, last edit by Hypernova at Jan 19, 2013 6:53:37 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
On a slippery slide, with the "brake"down, I'd get nervous too, but since I can do it, I reckon anybody can. Really, since I started with Ubuntu 2.5 years ago, it's become so much more slick and polished going from v8.04 to v10.10; it's near child's play [bet your kids can teach you more than you will accept ;P]
BTW, I would not dare state there to be a general 20% throughput uplift for Linux. Several sciences are exactly on par with W7-64, and some are even slower than on W7 [or plainly less efficient... CEP2]. Opposed to that, the VINA sciences blow past W7-64 by over 60%... and that is worth a little crash-cart operating when the heart rhythm goes wobbly. It just needs one host of your fleet to do the trialing... barely puts a dent in your totals. Fetch the latest ISO, run an installer to a USB memory drive [a Corsair 64 GB USB 3.0 races], choose the live-CD "try but don't install" option and play. Don't like it? Reboot, pull the USB stick, and you are back in Windows.
twilyth
Master Cruncher | US | Joined: Mar 30, 2007 | Post Count: 2130 | Status: Offline
I have to say that I have gone in a completely different direction, focusing this year on 2P Xeon server boards. I've focused mainly on ATX-format 1366 boards, since I could use existing cases, but as it turned out, none of those boards had PCI-E slots. However, they have been great for running hyperthreaded quad- and hex-core chips in the 2.5 to 3 GHz range, and in some cases have even allowed some minor overclocking.
I have two 2P hex-core rigs like that and two 2P quads, but one hex is down, since its chips have been moved to an in-progress SR-2 build, and the second 2P quad won't be built until we go back to CPU-only crunching. I have one socket 2011 rig, a true server/workstation setup that I mightily resisted building, but it was the only way I could put together the 2P E5-2687W octo-Xeon, and that has worked out to be my best choice in the long run, since the X9DAi board has slots for three PCI-E graphics boards.

Although I did spend a lot of money on these builds, it's less than you might expect if you look at list prices, since I bought all of the chips used, and all are engineering samples, most from China. So I think the grand total for all 5 servers is probably in the $5-7k range, and I could have done it for less if I had bought used motherboards and memory too.

The beautiful thing about the servers is that they require almost no attention, ever. They just run and run and run. OK, once in a while one will go down, but it's rare. Just make sure you have them hooked up to a UPS and you should be golden. Plus, they are very energy efficient, which is a concern for me. In the summer my marginal kWh rate is about 18 cents, although it drops to 11 cents or less in the winter. The 2P hex-core chips draw only between 100 and 150 watts but put out about 7-8k PPD; the 2P octo, when doing only CPU crunching, draws about 300 W and puts out about 15k PPD.

Of course, when the GPU-only HCC project came along, that required even more of an investment; between four 7950s, two 7870s and a 7750 (so far), I think that's been another couple grand - if you don't include the additional PSUs I had to buy, the new motherboards that could accommodate dual PCI-E cards, or the new SR-2 board. Then it becomes a bit more expensive, but still well worth it. Long term, though, it will be a hard decision as to which way to go.
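The wattage, PPD, and rate figures above can be turned into rough efficiency and running-cost numbers. A back-of-envelope sketch using the quoted values (the 125 W midpoint for the hex-core rig and 24/7 operation are assumptions):

```python
def ppd_per_watt(ppd: float, watts: float) -> float:
    """Crunching efficiency: points per day per watt drawn."""
    return ppd / watts

def daily_cost_usd(watts: float, usd_per_kwh: float) -> float:
    """Electricity cost per day for a rig running 24/7."""
    return watts / 1000 * 24 * usd_per_kwh

hex_2p  = ppd_per_watt(7500, 125)    # dual hex-core: ~60 PPD/W
octo_2p = ppd_per_watt(15000, 300)   # dual octo:     ~50 PPD/W

# The octo rig at the quoted 18-cent summer rate, about $1.30/day
summer = daily_cost_usd(300, 0.18)
```

By this measure, the lower-wattage hex-core rigs are somewhat more points-efficient per watt than the big octo, even though the octo produces more total PPD.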
If we are going to have continuous GPU projects, then I will simply build a few 4-way CrossFire boxes and be done with it. If we are going to have only intermittent GPU projects, that will be a much harder choice, which I do not relish.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
<comment>
Hypernova's and twilyth's posts above point towards the desirability of a WCG 'home' for members.
</comment>
andzgrid Post #810
OldChap
Veteran Cruncher | UK | Joined: Jun 5, 2009 | Post Count: 978 | Status: Offline
I am just a small-scale guy (I currently run 5 rigs) who started with a Q6600, jumped to an i7 920, then to a 2600K and a 3770K. I have tried a number of other setups, Opterons and such. I had just bought a couple of 8-core CPUs (not the fast ones) in preparation for a dualie running an Asus workstation board when GPU crunching started.
I had already suffered the worries of getting Linux running on my rigs before that point, and I have to say that you can set it up by rote; that is to say, you really don't need to know how it works to get Linux running if you have good support from the kind WCG folk and your team mates. GPU crunching made me move back to Windows: I have one card on Linux, but as an overclocker I am hobbled by the lack of good GPU support software in Linux.

In my early years doing FAH I killed a number of PSUs, including a PC P&C, but most were cheap Cooler Master variants. I moved to Seasonic, with which I have had no problems. I have had more luck with Asus than Gigabyte, but that is not to say the latter is necessarily bad.

I will complete the 16c/32t rig, even if only to get a feel for crunching without overclocking and to see what impact it has on my electricity bill. I chose 7950s for their price/performance; again, Asus is good to me, in that both cards I currently own will clock to 1250, which on Sandy and Ivy gets me on the order of 90 WUs an hour.

hypernova: you don't really need to learn Linux to get crunching with it; the learning can be done at your leisure.

twilyth: I think you did the right thing buying CPUs from those sources; I did that too. I further think that, for the purposes of crunching, you were probably right not to become a second user on motherboards.

I am most impressed with the HCC team for their efforts in getting GPU crunching running. I wonder now whether there might be some mileage in developing long work units without the midway break, or perhaps instead/as well a multi-threaded work unit.

[Edit 1 times, last edit by OldChap at Jan 19, 2013 11:41:21 PM]
RicktheBrick
Senior Cruncher | Joined: Sep 23, 2005 | Post Count: 206 | Status: Offline
I was a member of United Devices before I became a member of WCG. About 6 years ago, without any warning, UD decided to quit. WCG used UD software before moving to BOINC. With the cost of hardware going down and performance going up, there has to be a time when this site will no longer be cost effective. I have read about the upcoming 14 nm chips; when they get there, who knows how many cores they will be able to put on a chip? When they build a supercomputer that does a million trillion FLOPS a second, our efforts will amount to about a tenth of a percent of that supercomputer. I believe some of the equipment purchased today will still be working then. I am just cautioning anyone who is thinking about purchasing computer equipment: make sure you can use it for something else, as I would not depend on this site for too many more years.
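A "million trillion" FLOPS is 10^18 FLOPS, i.e. one exaFLOPS. The tenth-of-a-percent figure follows if the grid sustains on the order of a petaFLOPS, which is an assumption used here purely for illustration:

```python
# Scale comparison sketch. The 1-petaFLOPS grid throughput is an
# assumed round number, not a measured WCG figure.
exaflops_machine = 1e6 * 1e12   # "million trillion" = 1e18 FLOPS
grid_throughput  = 1e15         # assumed ~1 petaFLOPS sustained

fraction = grid_throughput / exaflops_machine   # 0.001, i.e. 0.1%
```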
twilyth
Master Cruncher | US | Joined: Mar 30, 2007 | Post Count: 2130 | Status: Offline
I'm not sure what you're getting at. What you're saying has been true since the invention of the abacus. Our ability to perform more computations faster will continue to progress - perhaps not exponentially, but probably at an accelerating rate, which is basically the same thing.
Also, I'm sure you realize that virtually all of the supercomputers in the world today are massively parallel machines with thousands, if not tens of thousands, of processors. Distributed computing simply emulates that architecture. Not only that, the chips used in those machines are (usually) the same as those available to us as enthusiasts. So while it's true that cutting-edge hardware today may not be worth the cost of the electricity to run it in, say, 4 or 5 years, I think everyone who does this as a hobby understands that. I certainly do.

[Edit 1 times, last edit by twilyth at Jan 20, 2013 5:44:55 AM]
RicktheBrick
Senior Cruncher | Joined: Sep 23, 2005 | Post Count: 206 | Status: Offline
When one asks for help, it had better be because one cannot do the work oneself. From what I see in this thread, spending a couple of thousand dollars would get one well over a hundred thousand results since GPU crunching started. The number of results from IBM has not changed much since then, either. That must mean IBM does not consider it worthwhile to purchase any GPUs. I have been doing this for over 13 years now (7 for WCG and 6 for UD), and I know it is addictive; I also believe it is addictive for a lot more people than just me. IBM must have the moral courage not to ask people to volunteer unless it cannot do the work itself, and I believe that with the amount of money IBM has, it should easily be able to afford to do this itself with just the computers available to it today.
BladeD
Ace Cruncher | USA | Joined: Nov 17, 2004 | Post Count: 28976 | Status: Offline
[Quoting RicktheBrick's post above]

IBM and most other companies with business PCs don't have GPUs that can be used. IBM is doing what it asks others to do: use what you have to crunch.
twilyth
Master Cruncher | US | Joined: Mar 30, 2007 | Post Count: 2130 | Status: Offline
GPUs, when not used for gaming, are (AFAIK) only used in HPC (high-performance computing). As I understand it, they work a lot like the math coprocessors of old. In other words, outside of recreational use, they're only seen in very specialized applications. So, as BladeD points out, there is most likely no need for the vast majority of the computers IBM uses to have high-end GPUs.
Just for the record, and I'm sure most are aware of this, IBM is primarily a services company. It long ago divested itself of most manufacturing operations; even its PC business now belongs to Lenovo, a Chinese company. And while IBM does engage in a lot of basic research, such as the Watson AI project (which has actually moved on to doing cancer research), I don't think it has much need for supercomputers in its daily operations. Of course, I'll be a little upset if I'm wrong about that.

[Edit 1 times, last edit by twilyth at Jan 21, 2013 5:17:40 AM]