World Community Grid Forums
Thread Status: Active | Total posts in this thread: 26
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Now that the exact minima were printed for most sciences, we have to see if these sciences will run concurrently in any pairing without invoking a "waiting for memory" state, which I fear is the weakness in the strategy. Yes, with low memory settings you'll successfully download the work, but you can't run on all the cores, since there is not enough memory for that. If rilian has a 16-core system, as his message at least indicates, I'd expect he'll need to set memory to at least 1 GB, even if it runs the smallest-memory WCG project, just to keep all cores loaded. Also, if it starts to hit the memory limit, there's a chance some tasks will be removed from memory regardless of LAIM being on, something that can lose a significant amount of time if there is a long time between checkpoints.

rilian wrote he has 512 MB for 16 cores; I'm not asking what/how that system is set up. At any rate, the smallest, HCMD2, does well in 5-7 MB each. Sticking to the 118 MB for the duo, I also forced a previously cached C4CW to start. Watching it with a memory use plotter, its RAM working set has not yet exceeded 83 MB **; the overflow into VM might increase. HCC is yet to start, but if 16 tasks run concurrently in 512 MB, I think that, except for the Water/Water pairing, any pairing of HCC/HCMD2/C4CW will run in the 118 MB.

On the unloading even when LAIM is on: I'm glad it does (testing on the 6.12.14 alpha, still). We've seen lots of discussions of BOINC trying to find WUs that fit in limited memory and then, whilst trying different tasks, locking up all the RAM... so I like it. For checkpointing I'm only concerned with CEP2, so "... can lose a significant amount of time ..." is a minimal concern, and for sure the CEP2 opt-in crunchers have enough options on hand to keep the concurrent number of these in check.

I think, though, as several have surmised, that all the strange events were after-effects. Saturday stats look very boisterous.

--//--

** Unsure why 384 MB is specified in the fetch log. Is this a legacy value from the old compile, with the new one on Windows being much more compact, or is it in anticipation of much bigger Clean Water jobs... maybe Uplinger answered this before?

edit 2: Attention all... I see MrKermit is on deck... did he flip the switch off again at his "crunching centre for hire"? ;0)

[Edit 2 times, last edit by Former Member at Mar 5, 2011 2:49:28 PM]
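For anyone wanting to try the same capping, a minimal sketch of where these limits live, assuming a standard BOINC client: the caps are percentages of host RAM, set in `global_prefs_override.xml` in the BOINC data directory. The values below are illustrative only; pick percentages that translate to the MB cap you want (e.g. 12.5% of 4 GB is roughly the 512 MB discussed above):

```xml
<!-- global_prefs_override.xml, in the BOINC data directory.
     Illustrative values only: choose percentages that map to your target MB cap. -->
<global_preferences>
    <!-- Cap on BOINC's RAM use while the computer is in use, and while idle -->
    <ram_max_used_busy_pct>12.5</ram_max_used_busy_pct>
    <ram_max_used_idle_pct>12.5</ram_max_used_idle_pct>
    <!-- LAIM: keep suspended applications in memory
         (as noted above, tasks may still be unloaded once the cap is hit) -->
    <leave_apps_in_memory>1</leave_apps_in_memory>
</global_preferences>
```

After saving, "Read local prefs file" in BOINC Manager (or a client restart) should pick the override up.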
MrKermit
Advanced Cruncher | Joined: Jun 13, 2009 | Post Count: 95 | Status: Offline
Well, we tried. I didn't realize there were so many site issues this week. We only had a brief window in which we could help, and most of it was lost to the project web site being down. Right now we're off-line; we may have a small set of machines (maybe 100?) again in the next week or two.

We were able to drain off the work we started, so we didn't strand any work that I know of.

Cheers!
MrKermit
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Bummer. Still, it's easy to follow whether you're "hooked up" at the moment, since your sig flashes the GFLOPS: a few days ago it said 9 GFLOPS, now 911.

Thanks for contributing any of your "spare" cycles.

--//--
rilian
Veteran Cruncher | Ukraine - we rule! | Joined: Jun 17, 2007 | Post Count: 1460 | Status: Offline
> rilian, you can trick the client a little. HCMD2 is also small, as is C4CW. By limiting the permitted BOINC RAM on the host for both "in use" and "idle", and setting the "if there is no work..." option, the client would go fetch only the small/lighter sciences, but not the biggies... so goes the theory. I'd be interested to know if that would work, so I'm volunteering you to test it ;P If the trick works I'll be porting this into an FAQ --//--
> PS: "load work from other machines" I've not yet found as an option, but we know what you meant ;o)

I'll try this (setting less RAM) on the new machine and will report results.
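A hedged pointer for that test: rather than a separate memory plotter, the client itself can log each running task's working set to the event log via the `mem_usage_debug` log flag in `cc_config.xml`, assuming the 6.x clients in use here already honour that flag:

```xml
<!-- cc_config.xml, in the BOINC data directory:
     periodically write per-task memory usage to the event log -->
<cc_config>
    <log_flags>
        <mem_usage_debug>1</mem_usage_debug>
    </log_flags>
</cc_config>
```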
knreed
Former World Community Grid Tech | Joined: Nov 8, 2004 | Post Count: 4504 | Status: Offline
Due to some of the issues last week, we got behind in loading new work into the system, which resulted in HCC1 and HFCC being out of work to distribute for a period of time on Friday/Saturday. Once we got some work loaded up for them, they started distributing it. We are back to having a full queue of work for all projects. It is even raining for DDDT2 at this time.
knreed
Former World Community Grid Tech | Joined: Nov 8, 2004 | Post Count: 4504 | Status: Offline
> And a suggestion for the techs: have the "if there is no work..." option only send work when the number of tasks "In Progress" assigned to the host sinks to or below 1 per core. This way, once preferred work is available again, those hosts would return in short order to the sciences the member has elected for the host(s)/profile(s). Would that work? I think this would increase members' willingness to select this "recommended" alternate work supply. --//--

Yes - this would work. I'll put it in the dev log. However, we will defer this change until after the server updates are complete. I hope to make more progress on that this week, since much of last week was spent fighting GPFS/kernel conflicts.
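This is not WCG's actual scheduler code, just a minimal sketch of the rule as agreed, with hypothetical names, to make the threshold concrete:

```python
def may_send_fallback_work(in_progress_tasks: int, cores: int) -> bool:
    """Proposed rule: only top a host up with the 'if there is no work...'
    fallback science once it holds at most one in-progress task per core,
    so the member's elected sciences reclaim the host quickly when their
    queues refill."""
    return in_progress_tasks <= cores

# Hypothetical example: a 16-core host still holding 20 tasks gets no
# fallback work; once it drains to 16 or fewer, it becomes eligible again.
assert not may_send_fallback_work(in_progress_tasks=20, cores=16)
assert may_send_fallback_work(in_progress_tasks=16, cores=16)
```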