World Community Grid Forums
Thread Status: Active | Total posts in this thread: 5
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Good Morning,
I have been thinking about how to make real supercomputers accessible for BOINC. Many research institutions probably have time gaps between the projects they run on their supercomputers, gaps during which the machines sit unused. During those gaps they might be willing to install BOINC rather than leave the supercomputer idle. They would be even more willing to do so if instructions were available on how to install BOINC on such a giant machine. I am unsure whether the existing mass-installation instructions are suitable, because my topic is not installation on many identical computers but on one big machine. My hope is that:

1. The owner of such a giant machine reads this, decides to install BOINC, and writes down how he or she did it.
2. A supercomputer manufacturer tells us which type of supercomputer is currently the most common in the world, so that we can work out BOINC installation instructions for exactly that type. Such instructions would be most valuable.

Thank you for reading this carefully, and for any answers. If this were successful, we could make huge progress in crunching.

Greetings
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Hi Martin,
Yeah, I would love to see that, but how much spare time do those SCs really have? Maybe an "Invitation to implement BOINC on a supercomputer" kind of document could set the stage: a set of questions that will identify whether the target system is suitable, including a bandwidth section on how to get all that work through a single pipe. In practice, several current implementations spring to mind that are already contributing. One stores the BOINC instances away when the diskless processor systems are needed for "business" and then restores them dynamically and losslessly... *nix based. Most of this plays out behind the scenes. WCG does a lot of support outside our visibility and does bang on doors, so the first line, I think, is spreading the word... make the contact and support will assist.
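Just to sketch the flavour of such a setup (purely illustrative; nothing here is taken from an actual WCG deployment): one way to run BOINC on a single big *nix machine is to launch several independent client instances, each with its own data directory, so individual instances can be parked when the nodes are reclaimed for in-house work. The paths, instance count and client flags below are assumptions to check against the installed boinc client.

```python
# Hypothetical sketch only: one BOINC client per core group on a single large
# shared-memory machine, each with its own data directory so it can be parked
# and restored independently. Verify paths and flags against your `boinc` build.
import subprocess
from pathlib import Path

N_INSTANCES = 8                      # e.g. one client per NUMA node (assumption)
BASE = Path("/scratch/boinc")        # hypothetical scratch area

procs = []
for i in range(N_INSTANCES):
    data_dir = BASE / f"instance_{i}"
    data_dir.mkdir(parents=True, exist_ok=True)
    # Separate data dir and GUI RPC port per instance; stopping a process and
    # archiving its directory "parks" that instance until the nodes are free again.
    procs.append(subprocess.Popen(
        ["boinc", "--dir", str(data_dir), "--gui_rpc_port", str(31416 + i)],
        cwd=str(data_dir),
    ))

for p in procs:
    p.wait()
```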
WCG
Please help to make the Forums an enjoyable experience for All!
Hypernova
Master Cruncher | Audaces Fortuna Juvat! | Vaud - Switzerland | Joined: Dec 16, 2008 | Post Count: 1908 | Status: Offline
Quoting Sekerob:
"a set of questions that will identify whether the target system is suitable, including a bandwidth section on how to get all that work through a single pipe"

Just to get a hint of the problem, let's see. Take Red Sky, a supercomputer ranked tenth in the Top 500 world ranking. It has 42,440 cores of the Xeon X55xx series and an Rmax of 433 TFlops. Careful here: we at WCG are at more or less the same value, but "daily"; this supercomputer does it in one second. Its power requirement is about 2 MW. So if this machine ran for a day, its score would be 86,400 times that of the whole WCG grid.

I know that the Linpack benchmarks that rate those machines are not equivalent to our TFlop ratings. But even if we assume the performance is halved when crunching for WCG on such a monster, you still get the picture. We will use that half-performance assumption for the following cases, with approximately rounded values.

Let's suppose it runs for one minute here and there. One minute equals a factor of 30 (60 sec / 2), i.e. 30 days of WCG output. To do that you need to feed it 30 times the number of WUs that are distributed across the whole WCG grid each day. That is about 20 million WUs in one shot (with a cache of 0), just so it can crunch for one minute. And then the 20 million results have to flow back. If it keeps a one-day cache, you have to send over 25 billion WUs just to fill that cache.

And now the tough questions: What is the required bandwidth (download and upload)? The required HDD capacity to store the cache and the results? The necessary RAM? To help you: Red Sky has 22 terabytes of RAM; HDD, no idea. It uses InfiniBand, which is 10 to 40 Gbit/s of bandwidth depending on the configuration, or 1 to 4 GByte/s.

And this was only SC rank 10. The number 1 SC in the world, Cray's Jaguar, is simply four times more powerful. It has 224,000 cores, using 6-core Opterons, and the power supply line must just be able to handle 7 MW. Just talk to your utility, they will be happy to help; I am sure you get a rebate once they have finished building their new nuclear power station. In short this means 80 million WUs for one minute of crunching, etc. etc. etc.

OK, WCG Tech, to your marks. When ready, please make a sign.

[Edit 2 times, last edit by Hypernova at Oct 24, 2010 10:57:00 AM]
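For anyone who wants to check the figures, here is the same back-of-the-envelope arithmetic in a few lines of Python, using only this post's own assumptions (one Red Sky second equals one WCG day, halved for the Linpack/BOINC difference, and roughly 20 million WUs distributed grid-wide per 30 days). Note that the next reply corrects the underlying per-second vs. per-day comparison.

```python
# Back-of-the-envelope check of the numbers above, under this post's own
# assumptions (the next reply corrects the per-second vs. per-day premise).
SECONDS_PER_DAY = 86_400
wcg_days_per_sc_second = 0.5            # "1 second = 1 WCG day", halved for overhead
wcg_wus_per_day = 20e6 / 30             # ~667k WUs/day, from "20 million WUs in 30 days"

# WUs needed so the machine can crunch for one minute (cache of 0):
one_minute_wus = 60 * wcg_days_per_sc_second * wcg_wus_per_day
print(f"one minute of crunching: {one_minute_wus / 1e6:.0f} million WUs")   # ~20 million

# WUs needed to fill a one-day cache (the post rounds this to "over 25 billion"):
one_day_cache = SECONDS_PER_DAY * wcg_days_per_sc_second * wcg_wus_per_day
print(f"one-day cache: {one_day_cache / 1e9:.0f} billion WUs")              # ~29 billion
```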
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
[ot]I think that what I calculate is Rmax... per second, so on a sustained basis we are actually bigger than Red Sky. If it were true that they were 86,400 times more powerful, I would shut down WCG today. The proposed equivalency would have Red Sky delivering 28,512,000 CPU-years per day in WCG terms, while we are only going to hit 400,000 years of computing after 6 years.[/ot]
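A quick sanity check of that correction, in the same back-of-the-envelope spirit; the ~330 CPU-years/day figure is only inferred from the quoted 28,512,000 and is not an official WCG statistic.

```python
# Both Rmax and WCG's TFlops are sustained per-second rates, so the factor-86,400
# reading compares a rate to a daily total. If it were taken at face value, Red
# Sky's daily output in WCG terms would be WCG's own daily run time times 86,400.
SECONDS_PER_DAY = 86_400

wcg_cpu_years_per_day = 28_512_000 / SECONDS_PER_DAY      # ~330 CPU-years of results per day (inferred)
red_sky_if_86400x = wcg_cpu_years_per_day * SECONDS_PER_DAY
print(red_sky_if_86400x)           # 28,512,000 CPU-years per day: the implausible equivalency

# Compare with WCG's actual cumulative total of ~400,000 CPU-years after ~6 years:
print(400_000 / (6 * 365))         # ~183 CPU-years/day averaged over WCG's lifetime
```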
----------------------------------------
WCG
Please help to make the Forums an enjoyable experience for All!
[Edit 1 times, last edit by Sekerob at Oct 23, 2010 11:47:13 AM]
Hypernova
Master Cruncher | Audaces Fortuna Juvat! | Vaud - Switzerland | Joined: Dec 16, 2008 | Post Count: 1908 | Status: Offline
Oops! My big mistake, Sek.
But believe me, I feel much better now. It is the TFlop vs. TFlops issue again. It is probably also tiredness, ozone poisoning, hypoxia and all the other problems you get when you breathe too much 980X air.