World Community Grid Forums
Thread Status: Active | Total posts in this thread: 17
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
My Ubu-machine is now dry: no more HCC-CPU-WUs. The WCG-server remains tight-lipped and continues to refuse to upload more.
----------------------------------------
edit1_2012.10.14Su.0820 -- to use the right word: strike out 'download'; insert 'upload'.
[Edit 1 time, last edit by Former Member at Oct 14, 2012 8:19:59 AM]
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Based on the data and observations from my Linux and Windows machines, the limit seems to apply to HCC only [maybe even to Windows in general]. Before, I read it as applying to all sciences [see Amr Adam's post, who gets a "too many in progress" message for all sciences]. My Linux quad has 16 of the same science, and the last one arrived a few minutes ago. My Windows octo has 45 in its buffer, including 8 HCC.
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I just got 14 HCC-CPU-v6.56 WUs a few minutes ago after I clicked the 'Update' button under the 'Projects' tab of BOINC_v7.0.27*. I would have gone for other WCG projects, but the long runtimes of the WUs in those projects aren't exactly inviting.
Notes: *Ubuntu_v12.10 is scheduled for release on 2012.10.18Th, and I wonder which BOINC version will be bundled with it.
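For anyone who wants to trigger the same 'Update' without the Manager GUI, here is a minimal sketch using BOINC's boinccmd tool. It assumes boinccmd is installed and can reach the running client on the same machine, and that the project URL matches the one your client is attached under.

```python
# Minimal sketch: ask the local BOINC client to contact World Community Grid,
# i.e. the command-line equivalent of clicking 'Update' on the Projects tab.
# Assumes boinccmd is installed and the client is running on this machine.
import subprocess

# The URL must match the URL the project is attached under in your client.
WCG_URL = "http://www.worldcommunitygrid.org/"

# boinccmd talks to the running client over its GUI RPC interface.
subprocess.run(["boinccmd", "--project", WCG_URL, "update"], check=True)
```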
Ingleside | Veteran Cruncher | Norway | Joined: Nov 19, 2005 | Post Count: 974 | Status: Offline
Quote (Former Member):
Based on the data and observations from my Linux and Windows machines, the limit seems to apply to HCC only [maybe even to Windows in general]. Before, I read it as applying to all sciences [see Amr Adam's post, who gets a "too many in progress" message for all sciences]. My Linux quad has 16 of the same science, and the last one arrived a few minutes ago. My Windows octo has 45 in its buffer, including 8 HCC.

While it is possible to set a global limit per computer, normally all limits are per core or per GPU. A global limit of only 10 per computer would mean any machine with 12 or more cores would have idle cores, so it's unlikely WCG has chosen this option. Meaning, a quad-core can have 40 at a time, a hex-core can have 60, an octo-core can have 80 at a time, and so on. For GPU limits, these are normally multiplied by a GPU factor on top of the number of GPUs. Apparently GPU work is limited to 60 at a time, so it's possible the multiplication factor is 6.

Just to test the limits, I set the cache to 5 days, and this gave: cached, 23 SN2S, 26 DSFL, 20 GFAM; in progress, 3 GFAM, 4 DSFL, 1 HCC; waiting to run, 2 GFAM, 1 SN2S. The total is 80, and this fits nicely with this being an octo-core.

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
[Edit 1 time, last edit by Ingleside at Oct 14, 2012 12:30:01 PM]
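To spell out the arithmetic in that post, here is a minimal sketch assuming the figures inferred above: 10 tasks in progress per CPU core and a GPU factor of 6. Neither number is an official WCG setting; both are read off the observed behaviour.

```python
# Sketch of the per-device "in progress" limit described above.
# Both constants are inferred from observed behaviour, not official WCG values.
PER_CORE_LIMIT = 10   # tasks in progress allowed per CPU core (inferred)
GPU_FACTOR = 6        # extra multiplier per GPU (inferred from the 60-task GPU cap)

def max_in_progress(cpu_cores: int, gpus: int = 0) -> int:
    """Maximum tasks a host could hold at once under these assumptions."""
    return cpu_cores * PER_CORE_LIMIT + gpus * PER_CORE_LIMIT * GPU_FACTOR

print(max_in_progress(4))      # quad-core           -> 40
print(max_in_progress(6))      # hex-core            -> 60
print(max_in_progress(8))      # octo-core           -> 80 (matches the test above)
print(max_in_progress(0, 1))   # one GPU, no CPU WUs -> 60 (matches the GPU cap)
```

The octo-core total of 80 cached, in-progress, and waiting tasks reported in the test fits this per-core reading exactly.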
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Makes perfect sense, Ingleside. From that perspective, the only practical crunching limit is the 164 that was reported as a daily quota [for a single science app, I presume, since the per-core limit was previously already something like 80 or 120 per day].
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
On my computer, the download slowdown actually affects SN2S and GFAM.
The work in reserve is gradually diminishing, and work units are being downloaded very sparingly.
Former Member | Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Quote:
Read post by knreed [I did before invoking heck]: https://secure.worldcommunitygrid.org/forums/wcg/viewpostinthread?post=395403

It's not surprising that users, especially those who never run GPU work, haven't noticed a comment buried way down a thread we have no interest in, within a topic area that doesn't apply to us. It might as well be at the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'. A useful place for that post would have been Known Issues, as it affects every project.

Quote (Ingleside):
While it is possible to set a global limit per computer, normally all limits are per core or per GPU.

Looks right, Ingleside. I'm running HCMD2 on some dual-core Linux boxes and have been hitting a maximum of 20 for the last few days. Given that recent units have been completing in about 20 minutes, that makes the maximum possible cache around 7 hours!
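A minimal sketch of the cache arithmetic in that last paragraph, taking the 20-task cap and the roughly 20-minute runtimes from the post; the cap itself follows from the inferred 10-per-core limit on a dual-core box.

```python
# Sketch: how much HCMD2 work a dual-core host can hold when capped at
# 10 in-progress tasks per core and each unit takes roughly 20 minutes.
PER_CORE_LIMIT = 10      # inferred in-progress limit per core
cores = 2
runtime_min = 20         # observed runtime per unit, in minutes

max_tasks = cores * PER_CORE_LIMIT            # 20 tasks, as observed
total_work_h = max_tasks * runtime_min / 60   # ~6.7 hours of CPU work cached
wall_clock_h = total_work_h / cores           # ~3.3 hours until the cache runs dry

print(f"{max_tasks} tasks cached, ~{total_work_h:.1f} h of CPU work "
      f"(~{wall_clock_h:.1f} h wall clock with both cores busy)")
```

The "around 7 hours" above is the total CPU work held; with both cores crunching, the buffer would empty in roughly half that wall-clock time.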