World Community Grid Forums
Category: Completed Research
Forum: Help Cure Muscular Dystrophy - Phase 2 Forum
Thread: I Can't Get No.....
Thread Status: Active. Total posts in this thread: 24
BSD
Senior Cruncher. Joined: Apr 27, 2011. Post Count: 224. Status: Offline
Thanks for the update.
homeslice
Cruncher, USA. Joined: Apr 27, 2007. Post Count: 12. Status: Offline
Yes, thank you!
----------------------------------------
Mysteron347
Senior Cruncher, Australia. Joined: Apr 28, 2007. Post Count: 179. Status: Offline
Ah well, back to stomping Lieshmobugs then.
Crunch on, merry crew, crunch on!
KerSamson
Master Cruncher, Switzerland. Joined: Jan 29, 2007. Post Count: 1671. Status: Offline
Thank you for this update.
----------------------------------------
It is important to keep members informed in order not to demotivate them too much. However, even if this storage problem seems simple to solve (just order more space, i.e. more disks), it can cause serious trouble within the IT infrastructure, such as:
- available room
- electrical power
- cooling capacity
- network
- ...
I would be interested to know (if possible) how much data has been generated since the start of HCMD2!
Yves
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
(in compressed format): 18,185,294.8 MB through today.... Is that 18.2 terabytes?
--//--
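For anyone who wants to check the arithmetic, a quick back-of-the-envelope in Python, assuming the quoted figure is in plain decimal megabytes:

    # Unit check for the quoted figure; "MB" is assumed to mean decimal megabytes.
    data_mb = 18_185_294.8
    tb_decimal = data_mb / 1_000_000        # 1 TB = 10^6 MB
    tib_binary = data_mb / (1024 ** 2)      # 1 TiB = 2^20 MiB, if the figure were mebibytes
    print(f"{tb_decimal:.2f} TB (decimal)")   # ~18.19 TB
    print(f"{tib_binary:.2f} TiB (binary)")   # ~17.34 TiB

So yes, roughly 18.2 TB in decimal units, a little less if you count in binary units.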
Hypernova
Master Cruncher. Audaces Fortuna Juvat! Vaud - Switzerland. Joined: Dec 16, 2008. Post Count: 1908. Status: Offline
OK, let's say 18 TB.
----------------------------------------
That is a mere six 3.5" 3 TB HDDs. Retail cost here in Switzerland is 140 Euros per disk, so 840 Euros in total. It can't be that the whole project is stalled because of that.
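A minimal sketch of that arithmetic in Python, taking the 140 Euro retail price quoted above as given:

    # Back-of-the-envelope disk count and cost for 18 TB on 3 TB drives.
    import math
    data_tb = 18
    disk_tb = 3
    price_eur = 140
    disks_needed = math.ceil(data_tb / disk_tb)   # -> 6 disks
    total_cost = disks_needed * price_eur         # -> 840 EUR
    print(disks_needed, "disks,", total_cost, "EUR")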
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
Two things, Hypernova. First, from above:
"However, even if this storage problem seems simple to solve (just order more space, i.e. more disks), it can cause serious trouble within the IT infrastructure, such as: available room, electrical power, cooling capacity, network."
Additionally, it must be nice that you've never had to work within a bureaucratic system before. Nothing happens quickly. Add to that the economic situation (pick one: slow recovery; stall; double-dip recession), on a continent where practically EVERY government expenditure is being cut...
Look, they identified the problem, and they are fixing it. We seem to be forgetting that this project is theirs as much as, if not more than, it is ours. If we want to see this up and running, imagine how badly the scientists running it want to see it working.
Mysteron347
Senior Cruncher, Australia. Joined: Apr 28, 2007. Post Count: 179. Status: Offline
It's got to be WAY more complicated than simply finding a few more gigglebytes of storage. If it were that simple, engaged technicians would have "found" a few drives and used their replacements for their home machines. But then, I suppose in OZ what's in short supply is respect for authority.
The requirement for more storage would have been obvious for months before Lyons went off the air. The rate of data arrival would have been a dead giveaway, even if the requirement wasn't in the project plan. Obviously, the machinations were set in motion a long time before WCG had to pull the switch.
If it's simply a matter of storage, perhaps someone could have a little word with an ISP who may be prepared to loan a little for a while. You never know - maybe even a little filler for their newsletter, a banner on their homepage. Surely that'd be something they'd Wanadoo? Or one of the domain-name registries cum hosting-providers. Horreur des horreurs! They wouldn't even actually need to be in France.
We can't blame the scientists for this. In fact, let's not finger-point at all - that's totally negative. We've set ourselves the task of finding a way to combat Muscular Dystrophy. By comparison, finding a way to combat French bureaucracy may be a problem of grid-sapping dimensions.
So - where do we start? Would sending angry emails to the Directeur in Lyons help?
KerSamson
Master Cruncher, Switzerland. Joined: Jan 29, 2007. Post Count: 1671. Status: Offline
@Hypernova,
----------------------------------------
You seem to forget that these 18 TB of data are critical, both in terms of the time it took to generate them and of their availability for the result evaluation. If we are speaking of 18 TB net, you have to consider that at least RAID 5 (probably RAID 6) will be in place, and the data will surely be mirrored on top of that. Furthermore, access speed matters for the performance of the result evaluation. We are not talking about a couple of hard disks bought at the computer shop on the corner; we are talking about a SAN with high-performance disks, probably connected over fibre channel switched fabrics, with redundant storage controllers. You also have to consider that such data (even RAIDed and mirrored) need to be backed up. The resulting investment is far more than 1'000.- EUR.
Unluckily, I have to agree with Mysteron347: curing the bureaucracy is also a big challenge in Europe (not only in France), and no WCG project can help to find the appropriate therapy.
Yves
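To make that concrete, here is a rough Python sketch of how 18 TB net can balloon once redundancy is layered on. The RAID 6 group size, the mirroring and the backup policy are assumptions for illustration, not known details of the actual storage setup:

    # Illustrative capacity estimate; raid_group, mirroring and backup policy are assumptions.
    net_tb = 18.0
    raid_group = 12                                  # assumed disks per RAID 6 group
    raid6_factor = raid_group / (raid_group - 2)     # usable-to-raw factor for RAID 6
    raw_primary = net_tb * raid6_factor              # ~21.6 TB raw behind the SAN
    raw_mirrored = raw_primary * 2                   # add a mirrored copy
    with_backup = raw_mirrored + net_tb              # plus at least one full backup copy
    print(f"raw primary: {raw_primary:.1f} TB")
    print(f"with mirror: {raw_mirrored:.1f} TB")
    print(f"with backup: {with_backup:.1f} TB of capacity to provision")

Even under these modest assumptions, the capacity to provision is more than three times the net figure, before performance requirements are even considered.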
Hypernova
Master Cruncher. Audaces Fortuna Juvat! Vaud - Switzerland. Joined: Dec 16, 2008. Post Count: 1908. Status: Offline
Yves, OK, you've made your point, and I fully agree with your comment.
----------------------------------------
But my opinion is still that, when confronted with a problem of this type, you need good quick fixes to avoid stalling the project, let the scientists get on with their work, avoid increasing the project's fixed costs as much as possible, and keep the WCG crunchers' interest alive.
1) On one side you do all the paperwork, go through the red tape and bureaucracy, and take the time you need until you get the 10'000 Euro solution with all the bells and whistles. But you should ask for 11'000 Euros to cover the temporary solution: a 10% over-cost to avoid a five- or six-month stall in the project.
2) In parallel you buy six disks for about 1'000 Euros and you have 18 TB of capacity available. To allow for any data loss or failure, you get 9 TB of data from WCG, copy it onto three disks, and keep a spare copy on the other three disks. Once you have finished analysing those 9 TB, you get the other 9 TB from WCG. In this way, at any given time 9 TB of data remain on WCG's storage system. In terms of risk analysis, you have 9 TB secured at WCG and 9 TB at your facility, split across three disks that are backed up. The probability of two disks breaking down within a short period of time is low, and very low if you look at the reliability data for the disks. We are speaking of a transitory period of, say, five or six months. That two brand-new disks both fail for good in this timeframe, and that by additional bad luck those two disks are the ones holding the same data, has a very low probability (Murphy's law will say the opposite), and even if it happens you have risked 3 TB in all, that is about 16% of the data.
Now, the disks I am talking about are already pretty good stuff: the Hitachi Deskstar 7K3000, 3 TB at 7'200 rpm, with 64 MB cache and a SATA 3 6 Gb/s interface. This is already a good and fast enough HDD, and if you buy six of them in one shot you can surely get a small rebate too. Frankly, with the equivalent server-class HDD, the Hitachi Ultrastar, you do not gain much in speed; you have the same number of load/unload cycles at 600'000, but yes, you get an additional 2-million-hour MTBF rating that you do not have with the Deskstar.
Conclusion: if I were managing this project, the decision would be to go for this temporary solution that keeps everybody, crunchers and scientists, busy and the project running, maybe a little slower, but running. You could argue that the project may not even have the 1'000 Euros. Then I will tell you how I would act: if I were a scientist working on this research and it meant something very important to me, I would put the money up front. I would ask everybody on the team to contribute a share of this amount, and later, when the funds arrive, they would be paid back.
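A minimal sketch of the risk argument in point 2, in Python. The 3% annual failure rate is an assumed round number for illustration, not a figure from Hitachi's datasheet, and disk failures are treated as independent:

    # Rough probability that both disks of one 3 TB pair fail during the transition period.
    afr = 0.03                        # assumed annual failure rate per disk
    months = 6
    p_fail = afr * months / 12        # chance a single disk fails in the window
    pairs = 3                         # three (data disk + spare copy) pairs of 3 TB each
    p_pair_lost = p_fail ** 2         # both disks of one pair failing, assumed independent
    p_any_pair_lost = 1 - (1 - p_pair_lost) ** pairs
    print(f"per-pair loss probability: {p_pair_lost:.6f}")      # ~0.000225
    print(f"any-pair loss probability: {p_any_pair_lost:.6f}")  # ~0.000675
    print("data at risk if it happens: 3 TB of 18 TB (about 16%)")

Under those assumptions, the chance of losing any 3 TB chunk during a six-month transition is well under 0.1%, which is the sense in which the risk is "very low".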