World Community Grid Forums
Thread Status: Active | Total posts in this thread: 30
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
An order of magnitude is a pretty significant difference. Estimates are fun and interesting, but obviously we can't rely on them very much.
Ranking supercomputers is interesting too. Here they are ranked by performance on the Linpack benchmark. By that measure, IBM's BlueGene/L seems to be the reigning champ. Here one of the BlueGene's competitors in 2004 questions the BlueGene's efficiency. In the end, what matters is how fast it crunches the data. If lots of CPU cycles are spent on system overhead, then the machine isn't really as fast as the Linpack benchmark indicates. But the BlueGene's benchmark is so much higher than its competitors' benchmarks that even if it were very inefficient, it might out-crunch its competitors anyway. I wonder what a BlueGene/L costs?
bugsan
Cruncher | Joined: Dec 30, 2005 | Post Count: 4 | Status: Offline
I agree.
My estimate was wrong; it was just for fun. But all we need is the average number of floating point operations needed to compute a task...
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The average number of FLOP needed to compute a task. Yep. That would allow a decent estimate. So where are we going to get that number?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Dang. Imagine that computational power if we were all on a terabit LAN using Beowulf clusters. *drools*
----------------------------------------
This post has been edited for inappropriate language - nelsoc
[Edit 1 times, last edit by Former Member at Jun 22, 2006 1:07:20 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Yes, Jackson. So many toys to choose from; so little cash.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I'm going to play the averages game. As you know, work units come in all different sizes. However, on average they take 9 hours to complete and, on average, they take the same number of operations. The average (i.e. predicted) size given to BOINC is 3e13 FLOP.

I tested this out by taking a couple of random work units that I had crunched and multiplying the benchmark FLOPS by the CPU time for each unit; the estimate was very close. Therefore, I'm assuming it is more than just a guess, and is instead the target work unit size.

The average number of results per day is 70,300. This goes back to the beginning of WCG, so it will be a safe underestimate. I'm taking yesterday's count as the peak: 130,000.

Doing the mathematics, we have an average performance of about 24.4 TFLOPS and a peak performance of 45.1 TFLOPS. This is the raw computing power of the grid. If we factor in the redundancy (for the sake of argument, we will ignore errors and take the basic quorum replication of three), then the peak WCG performance is 15 TFLOPS.
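For anyone who wants to reproduce the arithmetic, here is a minimal Python sketch. The 3e13 FLOP per result and the daily counts come straight from the post above; the quorum of three is inferred from the 45-to-15 reduction, so treat it as an assumption.

```python
SECONDS_PER_DAY = 86400
FLOP_PER_RESULT = 3e13   # predicted work unit size given to BOINC (from the post)

def grid_tflops(results_per_day):
    """Raw grid throughput in TFLOPS, given validated results per day."""
    return results_per_day * FLOP_PER_RESULT / SECONDS_PER_DAY / 1e12

QUORUM = 3  # assumed basic replication, inferred from 45.1 -> 15 above

print(grid_tflops(70300))            # ~24.4 TFLOPS, the long-run average
print(grid_tflops(130000))           # ~45.1 TFLOPS, yesterday's count
print(grid_tflops(130000) / QUORUM)  # ~15 TFLOPS of unique science
```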
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Thanks, Didactylos. 3e13 FLOP is the number we've been lacking, the number that makes a reasonable estimate possible. Repeating the first calculation using my computer's benchmark and CPU times for a few WUs indicates 3e13 is pretty close.
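In case anyone wants to repeat that cross-check, it is just the benchmark rating multiplied by the CPU time. A sketch with a made-up but era-typical benchmark figure (the 1 GFLOPS rating is an assumption; the 9-hour average comes from Didactylos's post):

```python
benchmark_flops = 1.0e9      # hypothetical ~1 GFLOPS Whetstone rating (assumed)
cpu_time_seconds = 9 * 3600  # the 9-hour average work unit from the thread

flop_per_wu = benchmark_flops * cpu_time_seconds
print(f"{flop_per_wu:.2e} FLOP per work unit")  # ~3.2e13, close to 3e13
```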
----------------------------------------
Since the 70,300 results per day is an average going back to the beginning of WCG, it doesn't reflect today's power accurately. The stats show that results per day have been over 100,000 almost every day since the end of November 2005, and there are lots of 150,000 days. It looks to me like 130,000 results per day is more like the average over the past 6 months rather than a peak.
[Edit 3 times, last edit by Former Member at Jun 23, 2006 7:21:22 AM]
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
So we've got 24, 45, and my 62 TFLOPS using the real daily time angle based on last month's real performance, and then the four-fold 230 TFLOPS. The time-per-WU average is, I think, the weak point. Mine does FAAH at 7h28m on average... let the votes begin ;>)
----------------------------------------
The redundancy, or slop as one put it, is not a factor to consider... we wanted to know the computational power. On toys: I'm going to order one of those pre-802.11n WiFis and shut down my neighbours in a 1-mile radius... got to get the whole bandwidth from the local switching station on my own
WCG
Please help to make the Forums an enjoyable experience for All!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Well, I'm tossing out bugsan's 230 estimate because it rests on very shaky assumptions.
I don't know how Sekerob's "real daily time angle based on last month's performance" works. Show me the formula and calculations and I might buy it.

Didactylos provided enough info in his post to see exactly how he calculated his 24 and 45. I assume he got the 3e13 number from reliable sources inside WCG, so I accept 3e13 FLOP as the size of the average WU. I'm tossing out his 24 because it's based on an average that goes all the way back to the start of WCG and does not accurately reflect current computing power.

Today I totalled the results for the past 30 days (May 24 to June 22), divided by 30, and got 130,167 average daily results, which rounds off to 130,000. That simply confirms what I estimated in my previous post, so I still maintain 130,000 results per day is not a peak; it's the current average daily production, which means 45 TFLOPS is a very reasonable estimate of current computing power.

Sekerob, show us how you calculated your 62, please.
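For the record, that 30-day check is simple arithmetic. A sketch (the daily counts below are placeholders standing in for the real stats-page figures):

```python
# Placeholder daily result counts; the real check used the 30 days
# from May 24 to June 22.
daily_results = [132000, 128500, 131000]  # ...one entry per day

avg_per_day = sum(daily_results) / len(daily_results)
tflops = avg_per_day * 3e13 / 86400 / 1e12  # same formula as Didactylos's
print(f"{avg_per_day:.0f} results/day -> {tflops:.1f} TFLOPS")
```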
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
David Autumns produced an estimate a few months ago, too. I didn't mention it because, again, I didn't quite follow how he worked it out. It involved calculating the power of the "average" host equivalent.
The 3e13 is in the work unit info, if you crack open client_state.xml. It hasn't been officially confirmed by anyone, but WCG are going to use the best estimate they have, since the time estimation relies on it (initially). Please don't start on the flaws in the time estimation.

I used the smaller average value because it provides a summation of the actual total WCG performance to date. I'm glad my "peak" value is closer to the current moving average. David Autumns watches spikes in the daily output, so he could perhaps provide a more reliable peak value. I know we have topped 140,000 in the past.
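If you want to dig the number out yourself, each workunit block in client_state.xml carries the estimate in its rsc_fpops_est field (the standard BOINC name for it). A rough Python sketch (assuming the file parses as well-formed XML, which old clients don't strictly guarantee):

```python
import xml.etree.ElementTree as ET

tree = ET.parse("client_state.xml")  # lives in the BOINC data directory
for wu in tree.getroot().iter("workunit"):
    name = wu.findtext("name", default="?")
    est = wu.findtext("rsc_fpops_est")  # estimated FLOP for this work unit
    if est is not None:
        print(f"{name}: {float(est):.3g} FLOP estimated")
```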