World Community Grid Forums
Category: Completed Research | Forum: Help Conquer Cancer | Thread: HCC with GPU
Thread Status: Active Total posts in this thread: 486
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
One big difference between now and then: all the roadmaps for future chips, designs, etc. are nearing their end. We are at the point where silicon can barely withstand the amount of heat we put through the chips, whether CPU or GPU.
CPUs have not really gotten much faster in several years; new instruction sets will help, such as the new AVX2 on Haswell. When the roadmap ends, where do we go from there? The roadmaps will end before 16 years are up. In 16 years we will not have 10-petaflop GPUs. At best I would assume MAYBE 100 teraflops, and that's being generous.
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
They said the same thing, though, when the P4s hit a TDP of 135W.
They said it again when die sizes hit 32nm. Each time was supposedly the end of Moore's law, when computing gains would severely slow down or possibly even stop. We're now running computers at least an order of magnitude more powerful, on barely half that TDP. There's a possibility you're right this time, but if I had to bet on it, I'd say we'll still have roadmaps in 16 years' time. That said, I don't think we'd hit 10 petaflops in 16 years either. With GPUs currently cranking out 3-4 teraflops, assuming Moore's law (doubling every 2 years), we'd have roughly 0.8-1 petaflops (3-4 × 2^8 = 768-1024 teraflops), and I think that's definitely the optimistic end.
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
It will definitely be interesting to see what happens.
I've been pondering the "ending of the roadmap" paradox, if you will, and can't wait to see what happens. Still feel the future is all about GPUs. Can't beat the number of cores those babies have.
nanoprobe
Master Cruncher Classified Joined: Aug 29, 2008 Post Count: 2998 Status: Offline
Could be the wave of the future.
----------------------------------------
Carbon nanotubes - the next processor technology? Ten years from now, your PCs and servers could well be running on carbon nanotube-based technology. NEC has developed a method of positioning tiny tubes of carbon in a way that it reports will make circuits run faster and consume less power than the fastest and most powerful silicon chips. NEC said the process was an important step toward its goal of developing chips that run at 15GHz to 20GHz while consuming about the same power as today's Intel Pentium 4 processors. The company says it made a breakthrough in the way it makes carbon nanotubes, bringing them closer to being used in transistors in LSI (large-scale integrated circuit) chips.
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
Graphene's my guess. I like carbon nanotubes too, though. Now, convincing Intel to change its process technology may prove difficult.
mikey
Veteran Cruncher Joined: May 10, 2009 Post Count: 821 Status: Offline
It will definitely be interesting to see what happens. I've been pondering the "ending of the roadmap" paradox, if you will, and can't wait to see what happens. Still feel the future is all about GPUs. Can't beat the number of cores those babies have.

How did we get from where we were to where we are now? Basically, they made things faster and added more processing power to do it. Most GPUs now come with 1meg of memory on them, some come with 2meg, but what if they came with 4 or 8 meg? Wouldn't processing be faster? Yes, the card would need 'tweaking' to take advantage of the increased memory, but the CPU folks were able to figure it out! What about 64-bit cards, wouldn't that be twice as fast too? I think there is room for expansion, and those that get paid to 'dream of what could be' are busily at work figuring out how to make it happen!

Are there limits? Sure, there are limits to how fast my car can go, but if I get a different car it might go even faster! Better gas mileage too; I just need to better define my needs, and there is something available for it. Maybe that is how we get to the next step: stop being so generic and better define the goal and how to get there.

Many years ago they split the GPU off from the CPU, as it was faster that way; now Intel has put it back in. Is that faster? Maybe it is for the newer, smaller CPU architecture, but what about tomorrow's designs? Will the GPU come back out again for higher-end machines and stay in for the 'Joe Average' machine? Already the VERY high-end supercomputers are using GPUs to make their numbers; yes, they use CPUs too, but those are there to make the GPUs work better! They already have hybrid hard drives now, an SSD paired with a larger SATA drive, so your CPU writes to the SSD and then it writes to the SATA drive when it has the time. Speed AND efficiency, the best of both worlds!
mmstick
Senior Cruncher Joined: Aug 19, 2010 Post Count: 151 Status: Offline
It will definitely be interesting to see what happens. I've been pondering the "ending of the roadmap" paradox, if you will, and can't wait to see what happens. Still feel the future is all about GPUs. Can't beat the number of cores those babies have. How did we get from where we were to where we are now? Basically, they made things faster and added more processing power to do it. Most GPUs now come with 1meg of memory on them, some come with 2meg, but what if they came with 4 or 8 meg? Wouldn't processing be faster? Yes, the card would need 'tweaking' to take advantage of the increased memory, but the CPU folks were able to figure it out! What about 64-bit cards, wouldn't that be twice as fast too? I think there is room for expansion, and those that get paid to 'dream of what could be' are busily at work figuring out how to make it happen! Are there limits? Sure, there are limits to how fast my car can go, but if I get a different car it might go even faster! Better gas mileage too; I just need to better define my needs, and there is something available for it. Maybe that is how we get to the next step: stop being so generic and better define the goal and how to get there. Many years ago they split the GPU off from the CPU, as it was faster that way; now Intel has put it back in. Is that faster? Maybe it is for the newer, smaller CPU architecture, but what about tomorrow's designs? Will the GPU come back out again for higher-end machines and stay in for the 'Joe Average' machine? Already the VERY high-end supercomputers are using GPUs to make their numbers; yes, they use CPUs too, but those are there to make the GPUs work better! They already have hybrid hard drives now, an SSD paired with a larger SATA drive, so your CPU writes to the SSD and then it writes to the SATA drive when it has the time. Speed AND efficiency, the best of both worlds!

GPUs have at minimum 1GB of VRAM, not 1MB. The average GPU has 2GB of VRAM, and higher-end cards have 3-6GB. The increased memory size has no effect on how fast things are processed; it is the speed of the memory itself, measured as a frequency, that affects how fast data can be accessed.

There are no 64-bit cards in the sense you mean; 64-bit here is a matter of memory performance, and no GPU out there is "64-bit" as a whole. GPU memory buses range from as low as 128-bit to as high as 384-bit, but that by no means multiplies the speed of processing; it only multiplies the amount of data you can push through at any given time.

Intel has absolutely no relevance in the GPU industry. It would have been wiser to say that AMD has brought the GPU into the CPU with its APUs, and it works well; Intel is no different, having moved the onboard GPU that used to sit on the motherboard into the CPU because they have extra room there, room that they should have used to put more cores on. The dedicated GPU has not been fazed by the APU market; even many APU owners have to buy a dedicated card simply to get double or more the FPS in their games, which the APUs aren't fast enough to calculate or game smoothly with.

An SSD has little impact because all the calculations the CPU is actively working on are held in RAM; what goes to your SSD/HDD is a simple regular-interval backup that has no effect on processing speed, and we don't care how long it takes to write a little bit of data.

If you want to see tomorrow's designs, there are plenty: look at China's Universal Processing Unit, look at the fab that can now manufacture 3D chips, look at the quantum and nanotech research going on in the computer industry, solar, battery, etc.
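[The bus-width point above reduces to a simple formula: peak memory bandwidth scales with bus width times the effective transfer rate, without touching compute speed. A minimal sketch, with the function name and example clock my own (5.5 GT/s is a typical effective GDDR5 rate of the era, not a figure from the thread):]

```python
# Peak memory bandwidth from bus width and effective transfer rate:
#   bandwidth (GB/s) = transfer_rate (GT/s) * bus_width (bits) / 8
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_gts: float) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return transfer_rate_gts * bus_width_bits / 8

# A 384-bit card vs a 128-bit card at the same 5.5 GT/s effective rate:
print(peak_bandwidth_gbs(384, 5.5))  # 264.0 GB/s
print(peak_bandwidth_gbs(128, 5.5))  # 88.0 GB/s
```

[Tripling the bus width triples how much data moves per clock, which is exactly "more data through the tubes", not faster arithmetic.]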
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
I still say silicon BADLY needs to be replaced, though that still runs into the issue that VERY soon we won't be able to make things any smaller. You can't control an electron if you don't know where it is.
Heisenberg always has to "ruin" the fun. Maybe changing from electrons to photons would be a nice step. Harder to do, but I've read it's coming along.
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline
Problem is, if you look, it moves to be where you expected it to be, not where it should be!
I think materials other than silicon are at least 10 years down the track. If anything, I think we're actually going to see massive parallelism, and we'll just retrain all our programmers to think in parallel.
[VENETO] boboviz
Senior Cruncher Joined: Aug 17, 2008 Post Count: 183 Status: Offline
CPUs have not really gotten much faster in several years; new instruction sets will help, such as the new AVX2 on Haswell. Yes, but we've gone from single core to 8/16 cores...

In 16 years we will not have 10-petaflop GPUs. At best I would assume MAYBE 100 teraflops, and that's being generous. An example on the ATI side: HD 2900 XT (October 2007) - 475 GFLOPS; HD 5870 (September 2009) - 2720 GFLOPS; HD 7970 (January 2012) - 3788 GFLOPS. About 5 years, 8x the single-precision power. I don't know the future of GPU development, but I consider 1 teraflop an ENORMOUS amount of computational power.
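[The growth implied by those three data points is easy to check. The function below and the ~4.25-year span (October 2007 to January 2012) are my own back-of-envelope framing of the poster's figures:]

```python
import math

# Overall factor, annualized growth, and implied doubling time from
# the HD 2900 XT (475 GFLOPS) to the HD 7970 (3788 GFLOPS) figures above.
def growth_stats(start_gflops: float, end_gflops: float, years: float):
    factor = end_gflops / start_gflops            # total speedup
    annual = factor ** (1 / years) - 1            # compound annual growth rate
    doubling = years * math.log(2) / math.log(factor)  # years per doubling
    return factor, annual, doubling

factor, annual, doubling = growth_stats(475, 3788, 4.25)
print(f"{factor:.1f}x overall, ~{annual:.0%}/year, doubling every ~{doubling:.1f} years")
```

[That works out to roughly 8x overall, i.e. a doubling time a bit under the canonical two years, which is why extrapolating these single-precision numbers forward looks so generous.]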