World Community Grid Forums
Thread Status: Active | Total posts in this thread: 5
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
New library, new season, new testing of the limits:

Result Log
Result Name: E225838_782_S.300.C37H36N6.FUEUKHLJTUUKBU-UHFFFAOYSA-N.17_s1_14_1--
<core_client_version>7.4.22</core_client_version>
<![CDATA[
<message>
Maximum disk usage exceeded
</message>
<stderr_txt>

The allowed space settings per the Linux node event log: Disk: 30.29 GB total, 174.40 GB free. The task went out with 'exceeded disk limit: 2645.29MB > 2560.00MB'. The VM usage is almost always zero, with 3.00 GB virtual allowed. Switching projects until this is resolved.
seippel
Former World Community Grid Tech | Joined: Apr 16, 2009 | Post Count: 392 | Status: Offline
After some checking, only about four work units a day are failing because they hit the disk usage limit, which is reasonable since disk usage can only be estimated before a work unit is actually run. For these errors, Harvard is given the work unit numbers so they can run them locally.

Seippel
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
If you quantify by how much those four exceed the limit on average, you could raise the artificial limit by that much and not have to handle this chore again.

But while I previously thought the limit was a static setting (2560 MB, really 2.56 GB?), you imply the limit is re-estimated for each job. What would be the purpose of that? Comment: I would not want to run eight of these concurrently using a ramdisk.
seippel
Former World Community Grid Tech | Joined: Apr 16, 2009 | Post Count: 392 | Status: Offline
If the work space used by the work unit exceeds this limit, the work unit is aborted (this is the same type of limit we hit in the UGM1 beta). I didn't mean to imply the limit is estimated on a per-work-unit basis; rather, Harvard doesn't send the most complex molecules to WCG. There is a cutoff for complexity, and that part is a bit of an estimate of what can fit within our current limits.

Seippel
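The abort behaviour described here, where the task's actual disk usage is compared against a fixed bound and the task is killed once it exceeds it, can be sketched roughly as follows. This is a simplified illustration only, not the actual BOINC client code; the function and variable names are invented, but the message format matches the 'exceeded disk limit' line from the log above.

```python
import os
from typing import Optional

def slot_disk_usage(slot_dir: str) -> int:
    """Total bytes used by files in a task's working (slot) directory."""
    total = 0
    for root, _dirs, files in os.walk(slot_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def check_disk_bound(used_bytes: int, bound_bytes: int) -> Optional[str]:
    """Return an abort message if usage exceeds the bound, else None."""
    if used_bytes > bound_bytes:
        mb = 1024 * 1024
        return "exceeded disk limit: %.2fMB > %.2fMB" % (
            used_bytes / mb, bound_bytes / mb)
    return None

# With the numbers from the failed task (2645.29 MB used vs a 2560 MB bound),
# this reproduces the log message seen in the result above.
msg = check_disk_bound(int(2645.29 * 1024 * 1024), 2560 * 1024 * 1024)
```

Raising the bound, as suggested earlier in the thread, would simply mean passing a larger `bound_bytes` when the work unit is generated.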
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I've lost over 13 hours to this task; not happy.

<core_client_version>7.2.42</core_client_version>
<![CDATA[
<message>
Maximum disk usage exceeded
</message>
<stderr_txt>

I think I'll have to pull the plug on this project; too much wasted time.