World Community Grid Forums
Category: Completed Research | Forum: Help Conquer Cancer | Thread: HCC with GPU
Thread Status: Active | Total posts in this thread: 486
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline

"If the GPU processing will be implemented, will there be a separate badge for it?"

Good question. I am not sure whether I am for or against it. The thing is, GPU crunching is quite a different beast from CPU crunching. As it is now, an 8-thread CPU might turn out 8 "days" of credit each day, but a GPU does one job at a time MUCH faster than a CPU: a 10-hour CPU job might take 10 minutes on a GPU. That might be a bit of an exaggeration, but either way the difference is quite large. To me it is a tricky subject, and I am unsure how it should be handled. At any rate, I think that is something the WCG techs should talk over and consider carefully to decide what would be fair.
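The rough arithmetic behind the disparity described above can be sketched as follows; the 10-hour vs. 10-minute figures are the poster's own (self-described) exaggeration, used here purely for illustration.

```python
# Back-of-the-envelope comparison of daily work-unit (WU) output,
# using the illustrative figures from the post above.
cpu_threads = 8          # an 8-thread CPU
cpu_hours_per_wu = 10    # one WU takes 10 hours on one CPU thread
gpu_minutes_per_wu = 10  # the same WU takes 10 minutes on one GPU

# WUs completed per calendar day (integer math keeps this exact):
cpu_wus_per_day = cpu_threads * 24 // cpu_hours_per_wu   # 19
gpu_wus_per_day = 24 * 60 // gpu_minutes_per_wu          # 144

print(cpu_wus_per_day, gpu_wus_per_day)  # 19 144
```

Even with these made-up numbers, the GPU turns out roughly 7x the work units of the whole 8-thread CPU, which is the gap the badge discussion is wrestling with.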
mikey
Veteran Cruncher | Joined: May 10, 2009 | Post Count: 821 | Status: Offline | Project Badges:

"We could have finished all WCG projects in a couple of days if there were any support for ATI or nVidia cards. Shame on the developers."

"I don't think so, vitalidze. If scientists know what performance is available, they will tailor the project to make best use of it. Give them 1,000 times more crunching capability, and it is no problem to make the project need it. Fundamentally there is no limit: certain problems will need more power ad infinitum. The trick is to define the problem, and the question asked, so that it can be computed and analyzed in an acceptable period of time given the available crunching power. Take meteorological weather analysis: the most powerful supercomputers do that, and the mesh size for the simulations is many kilometers, and that is in inhabited regions; over the oceans it is much worse. Bring that down to meters in 3 dimensions across the whole planet, say for the first 30 km of atmosphere, and even a quantum supercomputer will not be nearly enough. The other element is the complexity and number of interactions that you take into account. The mesh may be gigantic, but at each point you can also compute many more variables. This is why these problems, which are similar in complexity and interactions to molecular simulations, will always ask for more, more, more... But they have us here, and we will give them more. Btw vitalidze, I have to stop here: I have to add another 12-core CPU."

"My limited experience with geophysics computing completely supports the above conclusion by Hypernova, that demand will always outstrip supply. Frustrating for suppliers, but all for the common good. Onward and upward!"

No matter how fast you can return the data, if no one looks at it, it is worthless! So sometimes just crunching faster is not helpful, nor the answer. At some point PEOPLE need to look at the data, and if someone can only spend 2 hours per week doing that, then a bazillion units being returned is not better!!

"Uh... OK, but I have no indication here or previously that the data are not being examined carefully (except at the old United Devices project -- not WCG). Are you suggesting that the scientists are not using our results? If I thought that was the case... that all the effort and expense were for naught, then I would spend my resources elsewhere."

NO, NO, NO, I am NOT suggesting that in any way, shape or form!!!! What I am saying is that there is a threshold beyond which too much data is coming in to be analyzed in a timely manner; it is simple math. Whether WCG can EVER get to that point is something I have no clue about, but I hope it NEVER happens. I do know it happened over at SETI, though: a while back they had so much data being returned that they just put it on tape and on the shelf for later, when they had more time. SETI has had MANY growing pains over the years, while WCG does not seem to be affected as much, probably through better planning and resources! I was just saying that the simple act of returning tons and tons of units has to be thought through before you open the tap and let them flow. MANY projects over the years have increased the amount of crunch time per unit so that we users don't flood the servers.
BladeD
Ace Cruncher | USA | Joined: Nov 17, 2004 | Post Count: 28976 | Status: Offline | Project Badges:

"If the GPU processing will be implemented, will there be a separate badge for it? ... To me it is a tricky subject, and I am unsure how it should be handled. At any rate, I think that is something the WCG techs should talk over and consider carefully to decide what would be fair."

They are Project Badges. So there should be a GPU badge only if there were a GPU-only project.
rilian
Veteran Cruncher | Ukraine - we rule! | Joined: Jun 17, 2007 | Post Count: 1452 | Status: Offline | Project Badges:

"I am against separate badges for GPU. But I am for additional badges, like for 5, 10, 25, 50, 75, 100, 250, 500, 1000 years of runtime. With GPUs, additional runtime would be added, and so reaching the higher runtime badges would become possible."

Agreed. I'd give away all my badges if it would bring the project results to the scientific community faster.
mikey
Veteran Cruncher | Joined: May 10, 2009 | Post Count: 821 | Status: Offline | Project Badges:

"I am against separate badges for GPU. But I am for additional badges, like for 5, 10, 25, 50, 75, 100, 250, 500, 1000 years of runtime. With GPUs, additional runtime would be added, and so reaching the higher runtime badges would become possible."

"But will one GPU count as one CPU? The computing power will be x times higher. I am also for adding additional badges."

That is why I am for separate badges... if a GPU can crunch a unit in 1/4 the time of a CPU, why should it get the same badge? Using the same old badges, you are comparing apples to oranges and trying not to make a salad. Give the GPU crunchers new, separate badges that they can compete for, and let the CPU folks continue on with their badges. Combining them into one badge is like putting me in a race with Usain Bolt and calling it fair: I am 59 years old, haven't seen my toes in over 10 years, AND I was NEVER fast even when I was younger!!! Turtles and I can compete, but NOT Usain Bolt and I; neither can CPUs and GPUs. Just one of my AMD 5770 GPUs has 800 stream processors on it; that is like 800 tiny CPUs all crunching at the same time, and I have 7 of them, btw! NO, a stream processor cannot do the things a regular CPU can, but what it can do, it does so much faster than a CPU, which is why we are even having this discussion. And there are 800 of them on just one GPU!!! The newer GPUs have LOTS more than 800!

[Edit 1 times, last edit by mikey159b at Dec 9, 2011 2:12:20 PM]
fablefox
Senior Cruncher | Joined: May 31, 2010 | Post Count: 161 | Status: Offline | Project Badges:

First, I have to say that life isn't fair, and not everything is black and white.

If an investment vehicle gives a 10% return, and a millionaire invests 10 million while you invest 10 thousand, should you tell the bank to give the millionaire only one thousand because you, too, only get one thousand? If your CPU can process 24 work units per day and you get credit for a day, why should a person who processes 240 work units per day also get just a day? Why don't YOU go out and get yourself a GPU?

Again, like I said, life isn't fair and not everything is black and white. In golf we have handicaps, we have different leagues and all that. But pulling everything down to your level does not solve anything. I will give no comment on this different-badge-or-not question. It's just that, at the basic level, it should be based on the number of work units they process. They produce the same thing, don't they?
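The investment analogy above works out as follows in numbers: a flat 10% return scales with the stake, just as the poster argues credit should scale with work units produced. (Integer math is used so the figures are exact.)

```python
# The investment analogy in numbers: the same 10% rate applied to
# different stakes yields proportionally different returns.
rate_percent = 10
millionaire_stake = 10_000_000
your_stake = 10_000

millionaire_return = millionaire_stake * rate_percent // 100  # 1,000,000
your_return = your_stake * rate_percent // 100                # 1,000

print(millionaire_return, your_return)  # 1000000 1000
```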
fablefox
Senior Cruncher | Joined: May 31, 2010 | Post Count: 161 | Status: Offline | Project Badges:

As a side note, what you are suggesting could be dangerous and unhelpful. First, these crunchers would feel they are being penalized for spending cash to buy GPUs. Second, GPU crunchers are serious crunchers (unless I'm missing something, especially those SLI folks). This is because the GPU controls your graphics, and it's not as simple as sharing the processor around like a CPU does: it really takes a toll on your user experience. (Again, I don't know how it affects SLI machines; maybe if they dedicate just one GPU to BOINC, their user experience doesn't suffer much.)

I once tried running a GPU project on my PC while using it. Not fun, unlike the CPU version, which runs in the background. So these GPU crunchers are probably machines truly dedicated to crunching, and penalizing these folks is generally not a good idea. They spend time and money to build rigs, just to get penalized. "What, you have trained for a year to run under 10 seconds? Okay, you must start 5 seconds later than the others." How does that make you feel?

[Edit 1 times, last edit by fablefox at Dec 9, 2011 4:18:30 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline

"Will one GPU count as one CPU? The computing power will be x times higher."

Time for me to express my own personal opinion on this badge issue... Let's start with what we have now. What is a CPU? It's not (necessarily) a chip: some machines have multiple sockets and so multiple chips, some chips have multiple CPUs on them, and some CPUs have dual instruction pipelines for a single execution unit ("hyperthreading"). At the end of the day, the CPU time we get credit for is the time it took each WU to process, irrespective of the execution device's architecture.

When we come to GPU processing, what counts as a single execution device? To be perfectly honest, I don't really know, because I haven't looked into this in any detail -- but it's fun to speculate. I would guess, and I think it actually turns out to be a very fair way of thinking, that the execution unit is a "card". Whether you have a card with 16 processors or 512, I think all those processors will work together to complete one WU. If you have a card with more processors, each WU will finish quicker, but you'll still do one card's worth of processing in a day. (This is just like the difference between an old P4 and a more modern 2600K: each thread still does a day's work in a day; it's just that the 2600K gets through a lot more WUs.) If you want more time (= badge) credit, then you'll need to get more cards.

If my speculation is correct, then nothing really changes. A bronze badge represents 15 days of WU processing time, irrespective of the architecture used to process it. If you have a zillion processors on a card, your WU-completed count will rocket through the roof, but your processing time will still clock up a day per day (if you don't do any CPU processing as well). You might think that's harsh if you've invested heavily in a top-of-the-line GPU card, but it seems reasonable to me.

WCG is mostly about using the facilities that people already have, but don't fully use, to do work that benefits humanity. It's great when people invest specially to help humanity, but I see no reason to change the reward system. Helping humanity is a reward in itself.
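The "one card's worth of processing in a day" model above can be sketched in a few lines. The 15-day bronze threshold comes from the post; the device figures (an 8-thread CPU at 10 hours per WU, one GPU card at 10 minutes per WU) are illustrative numbers borrowed from earlier in the thread, not WCG's actual parameters.

```python
# Sketch of the time-based badge model described above: badge credit is
# elapsed task run time, regardless of device, so WU throughput and
# badge progress diverge sharply between CPUs and GPUs.
from dataclasses import dataclass

BRONZE_DAYS = 15  # runtime needed for a bronze badge, per the post


@dataclass
class Device:
    name: str
    units: int            # execution units: CPU threads, or GPU cards
    hours_per_wu: float   # wall-clock time to finish one WU

    def wus_per_day(self) -> float:
        # Throughput: all units crunch in parallel, 24 hours a day.
        return self.units * 24 / self.hours_per_wu

    def credit_days_per_day(self) -> float:
        # One execution unit running 24/7 clocks exactly one day of
        # runtime credit per calendar day, however many WUs it finishes.
        return float(self.units)


cpu = Device("8-thread CPU", units=8, hours_per_wu=10)
gpu = Device("one GPU card", units=1, hours_per_wu=10 / 60)

for d in (cpu, gpu):
    print(f"{d.name}: {d.wus_per_day():.0f} WUs/day, "
          f"bronze in {BRONZE_DAYS / d.credit_days_per_day():.2f} days")
```

Under this model, the GPU finishes far more WUs per day than the whole CPU, yet reaches bronze much later, which is exactly the "harsh but reasonable" trade-off the poster describes.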
mikey
Veteran Cruncher | Joined: May 10, 2009 | Post Count: 821 | Status: Offline | Project Badges:

"First, I have to say that life isn't fair, and not everything is black and white. If an investment vehicle gives a 10% return, and a millionaire invests 10 million while you invest 10 thousand, should you tell the bank to give the millionaire only one thousand because you, too, only get one thousand? ... It's just that, at the basic level, it should be based on the number of work units they process. They produce the same thing, don't they?"

I don't think I made myself clear... I am saying add something to indicate that a GPU, as opposed to a CPU, crunched the units. If an army unit with rifles meets an opposing unit with machine guns, the rifle unit will lose every time. Since CPU crunching started a long time ago, it isn't fair to the folks who put their heart and soul into getting badges to just say "too bad, so sad, you lose" and move on. What incentive will anyone have to crunch with a CPU again? A GPU will eventually be used and pass all their stats anyway, even if it takes years!

And generally, NO, a CPU unit is not exactly the same as a GPU unit. A CPU is very good at things a GPU is not; a GPU is very good at things a CPU can do only fairly well. So NO, the units are not interchangeable, as in getting the exact same unit no matter what you crunch with: a GPU will only get GPU-based units, and a CPU will only get CPU-based units. That is the way all the other BOINC projects have done it; not being associated with WCG, I can only assume that is how it will be done here too. After all, they are ALL BOINC-based.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline

A card, if run 24/7, will register 24 "elapsed" hours. That's what GPU projects use to compute the amount of calculation performed, aka the credit/points base **. If somehow someone is able to subdivide a GPU so it can run multiple HCC GPU tasks ***, then, just as an HT-capable CPU subdivided into e.g. 8 threads reports 24 hours each instead of 4 times [the physical processors], I see no problem with a card reporting time in multiples for concurrently processed HCC-GPU assignments. Of course, whether that will fit into the memory banks of a GPU card is the secondary question. We're going to learn when the extended Beta arrives, in early 2012.

** Other projects don't seem to maintain any form of time-contributed statistics, AFAIHH.
*** Which I understand will have multiple targets enclosed, similar to what DDDT1 did, to get them to run long enough and not inundate the schedulers with avalanches of task requests and reports.
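The accounting described above can be sketched as follows. The idea is that each concurrently running task reports its own elapsed time, so a subdivided device multiplies its reported hours; the 4-way GPU split is a hypothetical, since the post itself leaves open whether HCC tasks can share a card.

```python
# Elapsed-time accounting per the post: a device running N tasks
# concurrently, 24/7, reports N x 24 task-hours per calendar day.
def daily_reported_hours(concurrent_tasks: int, hours_online: float = 24.0) -> float:
    """Total elapsed task-hours reported per day by one device."""
    return concurrent_tasks * hours_online


ht_cpu = daily_reported_hours(8)      # 8 HT threads -> 192 h/day
whole_gpu = daily_reported_hours(1)   # one task per card -> 24 h/day
split_gpu = daily_reported_hours(4)   # hypothetical 4-way split -> 96 h/day

print(ht_cpu, whole_gpu, split_gpu)   # 192.0 24.0 96.0
```

This is why the poster sees no fairness problem: the multi-task GPU is treated exactly like the multi-thread CPU already is.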