World Community Grid Forums
Thread Status: Active. Total posts in this thread: 175
SekeRob
Master Cruncher Joined: Jan 7, 2013 Post Count: 2741 Status: Offline
Be prepared for the checkpoint interval to again be put up for discussion: "I can't participate because my system reboots on update... which one can't postpone on W10". There are only registry hacks to achieve that. One copy got to the 18 hour limit without being able to do its second checkpoint... a 4770K at 3 GHz, albeit hyper-threaded. Full credit was granted rather swiftly... valid declaration.

> I am aware that, at least in the Professional version of Windows 10, you can set it to let you choose when to download Windows updates, by setting a metered connection. Unfortunately this works only for Wi-Fi connections, not for Ethernet. If you want instructions on how to do this, I suggest using Google or your preferred search engine; there are several tutorials.

Thx, aware of this metered roundabout. I see you can now select a day with the restart time in Home... up to 6 days in the future, so the 14th at the latest. But it says "When a restart is scheduled, this option is available", i.e. there is nothing to set if no update reboot is scheduled, and with no update having been polled and nothing in the Action Center, it's hard to anticipate. There is only the 8 hour time frame in which you don't want the auto-reboot to happen... so a hack it is for me; I'm not going to shell out 159 Euro to go to Pro. /OT

[Edit 1 times, last edit by SekeRob* at Oct 8, 2016 9:15:59 AM]
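For reference, the "registry hacks" mentioned above are usually one (or both) of the following tweaks. This is a sketch of the commonly cited keys, not necessarily what was used here; editing the registry is at your own risk, and the DefaultMediaCost key requires taking ownership from TrustedInstaller before it can be written.

```reg
Windows Registry Editor Version 5.00

; Treat the wired Ethernet adapter as a metered connection so automatic
; update downloads are deferred (Windows interprets cost value 2 as "fixed").
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\DefaultMediaCost]
"Ethernet"=dword:00000002

; Widen the "active hours" window (here 06:00 to 23:00, hex 0x17) during
; which Windows 10 will not auto-restart for updates.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WindowsUpdate\UX\Settings]
"ActiveHoursStart"=dword:00000006
"ActiveHoursEnd"=dword:00000017
```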
widdershins
Veteran Cruncher Scotland Joined: Apr 30, 2007 Post Count: 677 Status: Offline
So far every Beta I've had from this test has slammed into the 18 hr wall before completing the second job, and this has been the case for my wingmen too. If these job sizes are going to be the norm when the project restarts, I'd suggest that consideration be given not only to raising the 18 hr limit, but also to sending jobs individually rather than as a set of jobs packaged together.
SekeRob
Master Cruncher Joined: Jan 7, 2013 Post Count: 2741 Status: Offline
Because they are interdependent: Job 1 builds on Job 0's result, Job 2 builds on Job 1's result, etc. Most unlikely to happen.
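A minimal sketch of the dependency described above (the function and value names are illustrative, not the actual CEP2 pipeline): each job consumes the output of the previous one, so splitting the chain across different volunteers would require a server round-trip between every pair of jobs.

```python
# Sketch of a chained work unit: each job's input is the previous job's
# output, so the jobs cannot be distributed independently.
# All names here are illustrative, not the real project code.

def run_job(index, input_data):
    """Stand-in for one job; derives its output from the previous result."""
    return f"{input_data}->job{index}"

def run_workunit(n_jobs):
    """Run jobs 0..n-1 sequentially, feeding each result into the next."""
    data = "seed"
    for i in range(n_jobs):
        data = run_job(i, data)  # job i depends on job i-1's result
    return data

print(run_workunit(3))  # seed->job0->job1->job2
```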
widdershins
Veteran Cruncher Scotland Joined: Apr 30, 2007 Post Count: 677 Status: Offline
But doesn't the TB project use the results of completed WUs as feedstock for the next ones? Just because it hasn't been done, for convenience, with the relatively short-running CEP WUs doesn't mean it couldn't be done if required to help longer-running units progress.
SekeRob
Master Cruncher Joined: Jan 7, 2013 Post Count: 2741 Status: Offline
For one, there would be 'homogeneity' issues with what you propose.
zdnko
Senior Cruncher Joined: Dec 1, 2005 Post Count: 235 Status: Offline
I finished a beta computation (the fourth in a few days) but I can't upload the result.
No problems with the other three or with the Zika result. Is anyone else in the same situation?
Crystal Pellet
Veteran Cruncher Joined: May 21, 2008 Post Count: 1403 Status: Offline
Of the 5 files to upload, the biggest ones, ending with _4, refuse to upload (probably to Harvard).
Already 5 results pending in upload.
pvh513
Senior Cruncher Joined: Feb 26, 2011 Post Count: 260 Status: Offline
I had a WU from this batch hit the 18 h CPU time limit on an Intel Xeon E5-2630 v3. That CPU was released less than two years ago, but is obviously already too slow to run this project. This is a sad day...

> It is not too slow to run this project. The scientists have said repeatedly that they get useful data from results that end at the 18 hour mark. Keep crunching this important project.

So far 3 WUs have finished. One crashed on a segfault; the other two hit the 18 h mark. One ran 50 m on job #0, then spent 17 h 10 m on job #1 before it was killed. The other spent 1 h 30 m on job #0, then 16 h 30 m on job #1 before it was killed. Somehow this distribution of CPU time between the jobs doesn't sound right. But assuming it is, how much of this would be useful for the scientists: just the 50 m + 1 h 30 m, or can they also do something useful with the aborted #1 jobs?
widdershins
Veteran Cruncher Scotland Joined: Apr 30, 2007 Post Count: 677 Status: Offline
Yep, I've got 4 sitting there, retrying the _4 file uploads every so often without success. I spotted one file uploading successfully just as that WU finished; I assume that was one of the Beta report files going to the WCG servers, as WUs for other projects are also uploading OK. I think it's the big ones going directly to the researchers that are failing to upload.
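The periodic retries described here follow the usual pattern of backing off between attempts rather than hammering an unreachable server. A rough sketch of that behaviour (hypothetical function names and timing constants, not the BOINC client's actual code):

```python
import random

def next_backoff(attempt, base=60.0, cap=4 * 3600.0):
    """Exponential backoff with jitter: wait roughly base * 2**attempt
    seconds before the next upload retry, capped at a few hours."""
    delay = min(cap, base * (2 ** attempt))
    return delay * random.uniform(0.5, 1.0)  # jitter spreads out retries

def upload_with_retries(try_upload, max_attempts=8):
    """Attempt an upload; on each failure, report the delay before the
    next try. `try_upload` returns True on success."""
    for attempt in range(max_attempts):
        if try_upload():
            return True
        print(f"retry {attempt + 1} in {next_backoff(attempt):.0f}s")
    return False
```

While the upload host stays down, every attempt fails and the client simply keeps doubling the wait (up to the cap) until the server comes back.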
Crystal Pellet
Veteran Cruncher Joined: May 21, 2008 Post Count: 1403 Status: Offline
Upload location not reachable: cep-boinc01.rc.fas.harvard.edu
|
||
|
|
|