World Community Grid Forums
Thread Status: Active | Total posts in this thread: 4
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Does anyone have any idea how many instructions run per second, on average? I know this is hard to estimate, given that computers go up and down all the time, jobs vary in complexity, and work units sometimes get lost. It would be interesting to see whether our total pool of devices can compete with, or even outperform, the big supercomputers.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hi! Leave a message for David Autumns in our team thread and he will have your answer. Your Friend
Viktors
Former World Community Grid Tech | Joined: Sep 20, 2004 | Post Count: 653 | Status: Offline
The "Run time" statistics unfortunately don't give an accurate measure of how much cpu time was actually computed for grid work. It includes time spent by the member devices waiting for other applications, which take priority. So there are lots of variables here. If we took the optimistic view that all of the member devices computed at an average rate of 1 billion instructions per second during all of their "run time", then using the average 40 years of run time per day from our gloabal statistics we could calculate this.
[40 years of run time per day] * [365.25 days per year] * [1000000000 instructions per second] gives 14,610,000,000,000 instructions per second or about 14.6 tera-ops. Whether our average machines can do "floating point operations" anywhere as fast as the "instructions" we guestimated with, depends on a lot of things and that is why I said teraops rather than teraflops. |
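For anyone who wants to replay Viktors's arithmetic, here is a minimal sketch in Python. The 40 years/day and 1 GIPS figures are the assumptions stated in his post, not measurements:

```python
# Viktors's back-of-the-envelope estimate as a quick script.
# Assumptions from the post: 40 years of run time accumulated per
# calendar day, and 1 billion instructions per second per device.
RUN_TIME_YEARS_PER_DAY = 40
DAYS_PER_YEAR = 365.25
INSTRUCTIONS_PER_SECOND = 1_000_000_000  # 1 GIPS per device

# 40 years of run time per day is equivalent to this many devices
# crunching around the clock:
effective_devices = RUN_TIME_YEARS_PER_DAY * DAYS_PER_YEAR  # 14,610

total_ips = effective_devices * INSTRUCTIONS_PER_SECOND
print(f"{total_ips:,.0f} instructions per second")  # 14,610,000,000,000
print(f"about {total_ips / 1e12:.1f} tera-ops")     # about 14.6
```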
David Autumns
Ace Cruncher | UK | Joined: Nov 16, 2004 | Post Count: 11062 | Status: Offline
Graham, you knew I wouldn't be able to resist this one.
----------------------------------------
I think I've got a fairly standard Athlon XP 3200+ with your typical 512 MB of RAM, and I get 29.4 points per hour through this machine. This machine benchmarks at 3.447 Gflops.

On an average weekday, not scuppered by firewalls blocking the latest agent deployment, the WCG averages 13,000,000 points per day, or 541,666 points per hour. Dividing that by the 29.4 points per hour my machine gets gives us 18,424 machines like mine powering the WCG.

18,424 * 3.447 Gflops = 63,507 Gflops, or 63.5 Teraflops.

Any mathematicians out there, please jump in and correct me if I'm wide of the mark. Now, each time I make these bold claims I get shot out of the sky, so I have to surround the argument with a number of caveats. This is the raw number-crunching potential of the WCG, and it is up there with Blue Gene/L before its latest upgrade. However, Grid Computing has its inefficiencies:

1) 100% of your CPU's time is not spent number crunching; mine is currently talking to the internet and reading my keyboard.
2) During the time at the end of each work unit when your PC sends and collects info from the WCG, it is not number crunching.
3) Redundancy is built into the Work Units, as this is medical research that we are carrying out here. I am led to believe from other Grid Projects that each Work Unit is allocated 5 times to different WCG Members, so that each returned result can be sanity-checked against the other 4 and a consensus reached as to its content.

So as a conservative estimate I would divide our raw number-crunching power by 6 to give the true value of the workload we are achieving here: just over 10 Teraflops.

OK, hope this helps, as I've managed to bore even myself this time. Take care and keep on crunching.

Regards
Dave

[Edit 1 times, last edit by David Autumns at May 11, 2005 12:14:32 AM]
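And for anyone who wants to check Dave's arithmetic, here is a minimal sketch of the same estimate. All figures come from his post; the 6x divisor is his conservative guess covering the ~5x work-unit replication plus other overheads:

```python
# Dave's estimate of WCG's aggregate throughput as a quick script.
# All figures are taken from his post; none are measured here.
MY_POINTS_PER_HOUR = 29.4        # Athlon XP 3200+ throughput
MY_GFLOPS = 3.447                # benchmark of that machine
WCG_POINTS_PER_DAY = 13_000_000  # average weekday total
REDUNDANCY_FACTOR = 6            # ~5x work-unit replication plus overhead

wcg_points_per_hour = WCG_POINTS_PER_DAY / 24                   # ~541,666
machine_equivalents = wcg_points_per_hour / MY_POINTS_PER_HOUR  # ~18,424
raw_tflops = machine_equivalents * MY_GFLOPS / 1000             # ~63.5

print(f"Raw potential: {raw_tflops:.1f} Tflops")                   # ~63.5
print(f"Useful work:   {raw_tflops / REDUNDANCY_FACTOR:.1f} Tflops")  # ~10.6
```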