World Community Grid Forums
Thread Status: Active | Total posts in this thread: 161
nanoprobe
Master Cruncher | Classified | Joined: Aug 29, 2008 | Post Count: 2998 | Status: Offline
I've got two computers running GPU-only with an app_info.xml file, and I've got to leave them unattended for a few days. Do you recommend pulling the app_info file to allow them to get other work? Just so you know: if you pull the app_info while tasks are still in your cache, they will be lost.
In 1969 I took an oath to defend and protect the U.S. Constitution against all enemies, both foreign and domestic. There was no expiration date.
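If you want to see how much work is still in the cache before pulling the file, a minimal sketch along these lines can help. It assumes boinccmd is on the PATH and that the --get_tasks output includes a "project URL:" line for each task (the exact field wording can vary between client versions), so treat the parsing as an assumption to verify against your own client's output:

```python
# count_wcg_tasks.py -- a rough sketch, not an official BOINC or WCG tool.
# Counts the tasks the local BOINC client still holds for World Community Grid,
# so you can tell whether pulling app_info.xml would throw cached work away.
import subprocess

# Substring matched against each task's "project URL:" line (assumed field label).
WCG_URL = "worldcommunitygrid.org"

def count_wcg_tasks() -> int:
    """Run `boinccmd --get_tasks` and count tasks belonging to WCG."""
    out = subprocess.run(
        ["boinccmd", "--get_tasks"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(
        1
        for line in out.splitlines()
        if line.strip().startswith("project URL:") and WCG_URL in line
    )

if __name__ == "__main__":
    n = count_wcg_tasks()
    print(f"{n} World Community Grid task(s) still cached -- "
          "finish or report them before removing app_info.xml.")
```

If the count is zero, the cache has drained and removing (or commenting out) app_info.xml should not cost any results.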
twilyth
Master Cruncher | US | Joined: Mar 30, 2007 | Post Count: 2130 | Status: Offline
. . . The input files alone for HCC1, for storing 100 days of work, would be about 5 TB of storage (my quick estimate) . . . -Uplinger

I don't understand what the big deal about 5 TB of storage is. I have at least 30 TB of my own between my machines, almost 20 of that in my main rig, and I'm just a lowly end user. This isn't even for a home business or anything.

I'm sure they need storage faster than what you have at home ;) and 30 TB isn't exactly low-end-user territory either. And looking a little further ahead, there are more projects on WCG that will need fast storage ;)

I believe that's why RAID was invented. I do, however, see your point, but the difference in cost is pretty nominal.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
18 more to GPU crunch, then back to full-time CPU.
BladeD
Ace Cruncher | USA | Joined: Nov 17, 2004 | Post Count: 28976 | Status: Offline
I can't decide if this is good news or bad. Thanks for keeping us updated.

I can.
uplinger
Former World Community Grid Tech | Joined: May 23, 2005 | Post Count: 3952 | Status: Offline
OK, the time estimate everyone is asking for is unknown. We currently do not have an estimate from the researchers as to how long it'll take for them to put more batches on their server for us to download and send to you. I know, not the answer everyone is looking for, but we will be checking things over the US Thanksgiving holiday and will get things back up and running as soon as we can.
. . . The input files alone for HCC1, for storing 100 days of work, would be about 5 TB of storage (my quick estimate) . . . -Uplinger

I don't understand what the big deal about 5 TB of storage is. I have at least 30 TB of my own between my machines, almost 20 of that in my main rig, and I'm just a lowly end user. This isn't even for a home business or anything.

On the surface, your comment would be correct, but our storage devices are not only used by this project but by all of them, and they need to store the results as well before those are sent back to the researchers and backed up. Also, I believe on average we are backing up over 1 TB of data each week. These backups go to slower devices outside of our hosting environment. The ones used for the server backend need to be high speed and redundant, to make sure that if one hard drive goes down, World Community Grid does not lose everything. Also, as you can see, the researchers have not put all the batches up on their server for us to download and send to the users. The researchers have storage limitations on their end as well. So under the surface, it's more complicated than it appears.

Thanks,
-Uplinger
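To put those figures in rough perspective, here is a small back-of-envelope sketch. The RAID-6 layout in it is purely an illustrative assumption about redundant server storage, not a description of World Community Grid's actual hardware:

```python
# Back-of-envelope arithmetic from the figures quoted above; illustration only.
input_100_days_tb = 5.0    # quick estimate: HCC1 input files for ~100 days of work
backup_per_week_tb = 1.0   # stated average backup volume per week

# Daily staging rate implied by the 100-day estimate.
input_per_day_gb = input_100_days_tb * 1000 / 100
print(f"HCC1 inputs alone: ~{input_per_day_gb:.0f} GB of new data staged per day")

# Redundant storage exposes less space than the raw drives provide:
# e.g. a hypothetical RAID-6 array of 8 drives yields 6 drives of usable capacity.
raid6_usable_fraction = 6 / 8
raw_tb_needed = input_100_days_tb / raid6_usable_fraction
print(f"~{raw_tb_needed:.1f} TB of raw RAID-6 capacity to hold {input_100_days_tb:.0f} TB of inputs")

# The weekly backups add up over a year as well.
print(f"Backups: ~{backup_per_week_tb * 52:.0f} TB per year at {backup_per_week_tb} TB/week")
```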
uplinger
Former World Community Grid Tech | Joined: May 23, 2005 | Post Count: 3952 | Status: Offline
I can't decide if this is good news or bad. Thanks for keeping us updated.

It's good news: it means we're processing lots of results for the researchers and helping them towards the next step in their process.

Thanks,
-Uplinger
twilyth
Master Cruncher | US | Joined: Mar 30, 2007 | Post Count: 2130 | Status: Offline
. . . On the surface, your comment would be correct, but our storage devices are not only used by this project but by all of them . . . So under the surface, it's more complicated than it appears. . . . -Uplinger

As someone who was a mainframe programmer in one of my three careers, I do understand SOME of the underlying complexities; I just think that in an era when areal densities increase almost monthly, one needs to be careful about appearing to blame things on a lack of storage. It tends to seem disingenuous. But thank you for trying to sketch out some of the underlying architecture.

Also, thank you for trying to address the timeline issue. At the risk of sounding coy, I guess we'll assume that the estimate for when we will have an estimate will be Friday at the earliest, and since even people at work aren't actually working on Friday, most likely Monday. In which case, it does indeed make sense to switch profiles and comment out the app_info files.

Thank you for your kind and consistent aid.
Bearcat
Master Cruncher | USA | Joined: Jan 6, 2007 | Post Count: 2803 | Status: Offline
Thanks, Uplinger. With turkey day fast approaching, it may be a good idea to crunch something else until the scientists tell us what's next. No reason to idle a computer when other projects can use it. The techs and scientists deserve a great turkey day too. Have a happy holiday!
----------------------------------------
Crunching for humanity since 2007!
Hypernova
Master Cruncher | Audaces Fortuna Juvat! | Vaud - Switzerland | Joined: Dec 16, 2008 | Post Count: 1908 | Status: Offline
. . . It's good news: it means we're processing lots of results for the researchers and helping them towards the next step in their process . . . -Uplinger

Thanks, uplinger, for your efforts. I agree with your comment. I think we have all been so enthusiastic about this GPU crunching opportunity that coming back to our old CPUs seems unimaginable. We touched the Nirvana of crunching, and now we must learn to come back to business as usual. Maybe that is why it was so exciting: if it becomes an everyday thing, it won't be Nirvana any more.
Hypernova
Master Cruncher | Audaces Fortuna Juvat! | Vaud - Switzerland | Joined: Dec 16, 2008 | Post Count: 1908 | Status: Offline
I've never seen so many members browsing the same thread at once. Wow, it is a hot topic.