World Community Grid Forums
Thread Status: Active | Total posts in this thread: 12
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Since I started crunching a couple of weeks ago, I have done a little research on computers. Not the usual research into selecting parts for a computer build, but a look into supercomputers and the like.

I was sort of shocked at what I found: I did not know that computers (and especially graphics cards) had advanced so much. I knew before that the price of hard drive storage and memory had drastically fallen over the years, and there are charts out there that show their prices from the 1960s until today. But I found that CPUs and GPUs have become much faster as well. Check out this Wikipedia link on FLOPS (floating-point operations per second); midway through the page is a chart of the cost per gigaflop. http://en.wikipedia.org/wiki/FLOPS The cost per gigaflop was about $100 in 2003, and now it's just 22 cents!

There is a supercomputer at my local university, built in late 2007, that runs at 26 teraflops and was ranked #47 in the world at the time. Now you could achieve 26 teraflops with a couple of computers running, say, 6-8 graphics cards!
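To put those numbers in perspective, here is a back-of-the-envelope sketch in Python. The 4 TFLOPS per card is only an assumed peak figure for a high-end consumer GPU of that era, not something taken from the post:

```python
import math

# Back-of-the-envelope FLOPS arithmetic based on the figures quoted above.
# The per-GPU peak is an assumption for illustration, not a measured value.

COST_PER_GFLOPS_2003 = 100.00  # dollars per GFLOPS, as quoted above
COST_PER_GFLOPS_2013 = 0.22    # dollars per GFLOPS, from the Wikipedia FLOPS chart

CLUSTER_TFLOPS = 26.0          # the 2007 university supercomputer mentioned above
GPU_TFLOPS = 4.0               # assumed single-precision peak of one high-end consumer GPU

price_drop = COST_PER_GFLOPS_2003 / COST_PER_GFLOPS_2013
cards_needed = math.ceil(CLUSTER_TFLOPS / GPU_TFLOPS)

print(f"Cost per GFLOPS fell by a factor of roughly {price_drop:.0f}x")
print(f"About {cards_needed} such GPUs would match the 26 TFLOPS machine")
```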
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hello Brandon Murray,
The last 10 years have been complicated as far as floating point goes. The modern floating-point format was formally proposed at Kansas City in the mid-1970s and adopted by a chip company named Intel, which electrified the community with the 8087 math coprocessor. I seem to recall it did about 40 Kflops double precision. Eventually the format was adopted as IEEE 754-1985. It takes a long time to adopt a standard.

Having started as a hero, Intel became everybody's favorite whipping boy when they never updated the 8087. They produced variations that would cooperate with newer CPU chips (80287 with 80286; 80387 with 80386) but were easily out-computed by every competing math coprocessor. This finally began to change with the Intel 80486 DX, which combined the CPU with the FPU (Floating Point Unit) on the same chip. With the Pentium, progress sped up, and by the turn of the century modern CPUs were delivering supercomputer speeds, beating the CDC supercomputers of the early 1970s. Once computers started to be measured in Gflops, we had reached the modern era that WCG is flourishing in, running science programs on home computers rather than on giant computers at national laboratories.

Progress since then has been, well, peculiar. Standard CPUs have slowly doubled and in some cases redoubled their speed while also adding cores per chip. Most people are oblivious to FPU speeds, but I have noticed that floating point is slower on some chips produced since 2011. It has become a commodity that sometimes plays second fiddle in CPU designs.

The joker is the GPU. Since 2007, general-purpose GPUs (GPGPUs) have been built which stream floating-point work on the GPU board. The problem is the storage and retrieval of the floating-point results, which requires peculiarly limited algorithms, although they can run superfast. There is a whole rainbow of evolutionary development across different types of chips and boards, to see which can find or develop a profitable market. I do not doubt that some architectures will be successful, and it will be interesting to see what sorts of algorithms and programs they can run.

Lawrence
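For anyone curious what the IEEE 754 double-precision layout mentioned above actually looks like, here is a small illustrative Python sketch. It is an editorial aside rather than anything from Lawrence's post, and the example value is arbitrary:

```python
import struct

def double_fields(x: float):
    """Split an IEEE 754 double into its sign, exponent and fraction bits."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))  # raw 64-bit pattern
    sign = bits >> 63                      # 1 sign bit
    exponent = (bits >> 52) & 0x7FF        # 11-bit biased exponent
    fraction = bits & ((1 << 52) - 1)      # 52-bit fraction (mantissa)
    return sign, exponent, fraction

sign, exp, frac = double_fields(-6.25)
print(f"sign={sign}, exponent={exp} (unbiased {exp - 1023}), fraction={frac:#014x}")
```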
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Thanks for your informative comments, Lawrence!

It seems to me like regular home computers are almost as good as they would ever need to be for the average user (though most people do not have SSDs, which are great because of how fast they are). That is probably part of why there are so many tablet and smartphone sales. People were saying "desktops are dead"; they are not dead, but they probably did reach peak sales. I am starting to think tablets will peak sometime soon too. I mean, people can only do so much Facebook, YouTube, Netflix, Angry Birds, etc., right? It will be interesting to see "what comes next".

Regarding GPUs, it would be nice if Nvidia made their cards work with the newest versions of OpenCL; they seem to be holding back on purpose so that people focus on using CUDA. I know that GPUs will continue to advance, and I hope the manufacturers also keep their scientific uses in mind, not just gaming. Maybe gaming will also soon reach a peak where a better graphics card does not help that much. I prefer console gaming to computer gaming myself (although I will play some Civilization 5 sometimes).
branjo
Master Cruncher | Slovakia | Joined: Jun 29, 2012 | Post Count: 1892 | Status: Offline
Brandon Murray wrote: [...] Now you could achieve 26 teraflops with a couple of computers running, say, 6-8 graphics cards!

The interesting thing is that our (WCG crunchers') current performance is 700 TeraFLOPS, and the peak (during the HCC1 GPGPU phase) was 1.5 PetaFLOPS.

Cheers

Crunching@Home since January 13 2000. Shrubbing@Home since January 5 2006

[Edited 1 time, last edit by branjo at Sep 22, 2013 6:15:09 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
It was a wise decision of Nvidia to develop CUDA.

The big customers for graphics cards in the future will be the scientific community rather than gamers. Therefore Nvidia is grabbing this opportunity by holding scientific conferences and providing support for developers.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I remember the days when I read about the Pentium 60 and 66 transistor count, back in 1994-95. It was 3.3 million, which fascinated me. This summer I came across an article about the Nvidia Titan; I remember something like 7 billion. Along with the structural changes, it's not surprising to see such leaps in the GFLOPS numbers.

I am crunching with an 8-core i7 laptop (on Ubuntu Linux). The performance is beyond my finest fantasies from when I started crunching back in 2007. It's a wonderful feeling inside, knowing that we let all this power eat away at the epidemic problems of humanity. Let's crunch on :)
branjo
Master Cruncher | Slovakia | Joined: Jun 29, 2012 | Post Count: 1892 | Status: Offline
It was a wise decision of Nvidia to develop CUDA. The big customers for graphics cards in the future will be the scientific community rather than gamers. Therefore Nvidia is grabbing this opportunity by holding scientific conferences and providing support for developers.

Maybe from the point of view of supercomputers. From the point of view of common crunchers and GPGPU-ing, the wisest decision was made by AMD, which left its CAL behind and put the emphasis on OpenCL.

Cheers

Crunching@Home since January 13 2000. Shrubbing@Home since January 5 2006
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
[ot] You're reacting to someone who returned 2 results, the last one 38,300+ hours ago [shortly after joining].
Hypernova
Master Cruncher | Audaces Fortuna Juvat! | Vaud - Switzerland | Joined: Dec 16, 2008 | Post Count: 1908 | Status: Offline
I really do not know if it was a wise decision to develop CUDA. They should have made a split between their supercomputer market and the more mainstream desktop computing market. I am really unhappy that they have not implemented OpenCL as effectively as AMD has. I will not buy a Tesla supercomputer, but I did buy 15 top AMD GPU boards. They could have been Nvidia boards if Nvidia's implementation of OpenCL were effective.

----------------------------------------
If I am not wrong, AMD also has its own specific GPU language. I hope Sek will accept my reacting to someone who posted 0 results in the last 3'395 hours.
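Since the thread keeps contrasting CUDA with OpenCL, here is a minimal sketch of what vendor-neutral OpenCL code looks like from Python. It assumes the third-party pyopencl package and at least one installed OpenCL runtime (AMD, NVIDIA or Intel), and it only lists devices; it is an illustration, not anything from WCG's own applications:

```python
# List every OpenCL platform and device visible on this machine, regardless
# of vendor. Assumes the third-party pyopencl package (pip install pyopencl)
# and a GPU driver that ships an OpenCL runtime.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.vendor})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}")
        print(f"    Compute units: {device.max_compute_units}, "
              f"max clock: {device.max_clock_frequency} MHz")
```

The same kernel source can then be built for whichever device turns up, which is the portability argument being made above for OpenCL over CUDA.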
branjo
Master Cruncher | Slovakia | Joined: Jun 29, 2012 | Post Count: 1892 | Status: Offline
[ot] You're reacting to someone who returned 2 results, the last one 38,300+ hours ago [shortly after joining].

Yes, I checked that prior to my reaction, but posted it anyway.

Cheers

Crunching@Home since January 13 2000. Shrubbing@Home since January 5 2006