World Community Grid Forums
Thread Status: Active. Total posts in this thread: 72.
Sgt.Joe
Ace Cruncher, USA. Joined: Jul 4, 2006. Post Count: 7850. Status: Offline
Thank you Keith for your hard work and nice explanations. I hope your upcoming weekend is uninterrupted.
----------------------------------------
Cheers
Sgt. Joe
*Minnesota Crunchers*
[CSF] Thomas Dupont
Veteran Cruncher. Joined: Aug 25, 2013. Post Count: 685. Status: Offline
> I have re-enabled MCM just now. Again, this will mean that only reliable hosts will be getting resent work for the time being. You may see messages like "No work available for Mapping Cancer Markers" or "Work is available but assigned to different host type". Also, we are in discussions with the researchers on how to prevent this from being a problem in the future. They will be taking a look at the batches that were larger than usual to see if they can prevent them from happening again. Thank you for your patience, -Uplinger

Thanks, Uplinger, for the heads-up! Good job!
Seoulpowergrid
Veteran Cruncher. Joined: Apr 12, 2013. Post Count: 823. Status: Offline
I understand that MCM1 WUs are no longer being sent out in order to reduce pressure on the server, but I keep (slowly) receiving new MCM1 files on my main machine. I am guessing these are resends of tasks rejected or not finished by others, but even so, if you really need to shut off MCM1, then I recommend pausing the resends as well.
----------------------------------------
Cheers.
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
MCM was turned back on last night; I am getting plenty of fresh ones!
Ingleside
Veteran Cruncher, Norway. Joined: Nov 19, 2005. Post Count: 974. Status: Offline
> I understand MCM1 WUs are no longer being sent out in order to remove pressure from the server but I keep (slowly) getting new MCM1 files to my main machine. I am guessing these are the ones rejected or not finished by others but even then if you really need to shut off MCM1s then I recommend pausing the resends as well.

Even though MCM1 has started issuing work again, I'll answer this. While disabling the sending of resends can initially look like a good idea, as long as disk space isn't completely full, continuing to send out resends can actually be an advantage, at least if the project needs 2 (or more) results for validation purposes.

Let's take a quick look at WU distribution, using completely arbitrary file sizes:

1: A new WU's 1 MB input file is put into the server's download directory.
2: Two tasks for this WU are generated in the database, ready to be sent out.
3: Two different computers download these tasks, and both download the 1 MB input file.
4: N seconds later, one of the computers finishes and uploads a 10 MB result file; this is put into the server's upload directory.
4a: At this point, total server-side disk usage is 11 MB.
5: There are multiple possibilities here, but let's say the 2nd computer finishes and uploads a 10 MB result file.
5a: At this point, total server-side disk usage is 21 MB.
6: Both computers report their finished tasks, and this is saved in the database.
7: Normally within 10 seconds after the last task is reported, the Transitioner triggers validation (tasks validated & credited), which triggers assimilation (results assimilated), which triggers file deletion: all corresponding files for the WU and its results are deleted from the upload and download directories.
7a: At this point, disk usage is back down to zero.
8: After a project-defined wait (24 hours or so), the DB purger archives and removes all traces of the WU and its results from the database.

As can be seen above, once 1 result for a WU has been returned, it will use (in this example) 10 MB on disk. Until the 2nd result is also returned and the WU can be validated, disk usage in the upload directory stays at 10 MB. For this reason, continuing to re-send tasks is often a good idea, since this is the only way to recover the already-used disk space. Especially if a task times out by reaching its deadline, a re-issue is a good idea, since in most instances there is already a 10 MB result file waiting for that 2nd result.

Now, if none of the tasks is returned, you'll still use 1 MB in the download directory, and this can block the generation of other types of work if too many WUs are in this state. Still, at least in this example where upload size >> download size, not sending out work for WUs that have never been sent out at all is a good idea. For re-issues, on the other hand, in many instances this is the only way to decrease server-side disk usage again.

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
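The disk-usage timeline in the steps above can be sketched in a few lines. This is only an illustration of the arithmetic in the post, using its arbitrary example sizes (1 MB input, 10 MB result, 2-result quorum), not real WCG figures:

```python
# Server-side disk usage for one work unit, per the example in the post.
# Sizes and the 2-result quorum are the post's arbitrary example values.

INPUT_MB = 1    # input file in the download directory
RESULT_MB = 10  # each uploaded result file
QUORUM = 2      # results needed for validation

def disk_usage(results_returned: int, validated: bool) -> int:
    """MB used on the server for this WU at a given point."""
    if validated:
        return 0  # step 7: file deleter removes input and result files
    return INPUT_MB + RESULT_MB * results_returned

# Steps 1 through 7a: issue, first return, second return, validation.
timeline = [disk_usage(n, validated=False) for n in range(QUORUM + 1)]
timeline.append(disk_usage(QUORUM, validated=True))

print(timeline)  # [1, 11, 21, 0]
```

The middle values show why re-issues matter: a WU stuck waiting for its 2nd result pins 11 MB, ten times the cost of a WU that was never sent out at all.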
uplinger
Former World Community Grid Tech. Joined: May 23, 2005. Post Count: 3952. Status: Offline
Ingleside,
Your comments are valid. But we have multiple storage devices, which handle different parts of each job. After assimilation, the results are sent to a larger, longer-term storage device. This step had a large backlog; clearing that backlog first makes room for incoming results instead of allowing the issue to compound itself. Also, in your steps above, once a work unit is validated, we delete all the result data except the canonical result. We are doing better on upload server space. Right now we are not technically loading any new work. There was a 24-hour buffer that was loaded when the system got full, but we are mainly allowing what is being sent out to be resends, just as you say. Except step 7 is a little bit different :) Thanks, -Uplinger
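A quick sketch of how this amended step 7 changes the accounting in the earlier example: after validation, only the canonical result survives, and it is freed once assimilation hands it off to the longer-term storage device. The staging model and the stage names here are my assumptions for illustration; the sizes are the same arbitrary example values from the thread:

```python
# Front-line server disk usage for one WU with the amended step 7:
# after validation only the canonical result is kept, freed at assimilation.
# Sizes are the thread's arbitrary example values, not real WCG figures.

INPUT_MB = 1
RESULT_MB = 10

def usage(stage: str) -> int:
    """MB used on the upload/download server at each (assumed) stage."""
    return {
        "issued":       INPUT_MB,                  # input file only
        "one_returned": INPUT_MB + RESULT_MB,      # waiting for 2nd result
        "two_returned": INPUT_MB + 2 * RESULT_MB,  # quorum reached
        "validated":    RESULT_MB,                 # canonical result kept
        "assimilated":  0,                         # moved to long-term storage
    }[stage]

stages = ["issued", "one_returned", "two_returned", "validated", "assimilated"]
print([usage(s) for s in stages])  # [1, 11, 21, 10, 0]
```

This also shows why an assimilation backlog compounds: every validated WU keeps its 10 MB canonical result on the front-line server until the handoff completes.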
S_MDC_PROJECT
Cruncher. Joined: Feb 17, 2005. Post Count: 16. Status: Offline
I have never encountered so much suck in a DC project as there is here now. DC work is voluntary and paid for by the cruncher, and for your info, I have probably been crunching longer than most people here. I started back in 1999 on the SETI project; that gives me 14 years of real-time crunching, during which I have worked on and off and clocked up 32+ years of crunching with WCG since they started, and from day one they have always tried to get participants to crunch the projects they want. Time to move back to F@H.
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
Goodbye; don't slam the door on the way out. We do not need your type of insulting posts!
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
Result Status: User Aborted
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline
I started DC work prior to 1999, so I have even more experience. I was also in the top 300 at F@H, and there was plenty of "suck" there when I left. I also got tired of too much computing time wasted on "diseases" that I didn't want to contribute to. Influenza rarely kills and is treatable; I would rather donate my computing time to something that would benefit mankind more than a cure for the flu. Their bonus structure was also unfair, and they seemed to prefer GPU projects only.
Good luck with F@H, you'll need it. When they had issues previously, they were slow to announce them. Oh, and even now I'm still in the top 900, and I haven't donated a single second of computing time in over a year!