World Community Grid Forums
Thread Status: Active | Total posts in this thread: 39
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I can't remember what project it was, but at one time I had over 8 pages of Pending Validations with just 4 CPU cores running in total. The project was the initial version of HCMD2 with the tiny WUs; I had over 130 pages of PV even though I only had a 0.1-day cache.
4WhQxmsSdepBpEBjB6rbNUMSgTfK
Veteran Cruncher | The Great State of Texas | Joined: Apr 27, 2007 | Post Count: 1053 | Status: Offline
Well, I'm in Cali. Hope my systems finish all the work I left them. I won't know for the next 3 days.
----------------------------------------
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
esoteric17 wrote exactly what I was thinking:
"One would think there should be some kind of server-side code to say 'Sorry, you are requesting more than the maximum allowed cached work of 7 days; sending 7 days of work.' If a client setting can have a negative impact on the grid as a whole, the server should ignore/override it."

Meanwhile, Sekerob wrote:

"If one knows these road trips to happen, briefly look in and hit the Update button again."

That's what I do before closing the lid. If I'm going to turn off a headless machine, I just press the power button and let Linux perform a shutdown. Having to log in from another machine and send boinc commands over the network doesn't take long... but it takes 1000 times longer than hitting the power button.
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Sure, you can hit the power button and be done with it [those folks would probably not read these forums or topics either], or Linux could probably be set to run a script on shutdown that executes the boinccmd tool's update command. Let's see...
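A shutdown hook along those lines might look roughly like this; a minimal sketch only, assuming boinccmd is on the PATH and using the stock WCG project URL. The build_update_cmd helper is hypothetical, not part of BOINC itself:

```python
# Hypothetical helper: build the boinccmd argv that asks the BOINC
# client to contact a project's scheduler immediately, which reports
# finished tasks and fetches new work.
def build_update_cmd(project_url, host=None, passwd=None):
    cmd = ["boinccmd"]
    if host:
        cmd += ["--host", host]      # remote client; GUI RPC must be enabled
    if passwd:
        cmd += ["--passwd", passwd]  # GUI RPC password
    cmd += ["--project", project_url, "update"]
    return cmd

# Wired into a shutdown script, one would run something like:
#   import subprocess
#   subprocess.run(build_update_cmd("http://www.worldcommunitygrid.org/"))
```

The same helper covers the headless case mentioned earlier: pass host/passwd to poke a remote client over the network instead of logging in to it.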
----------------------------------------
WCG
Please help to make the Forums an enjoyable experience for All!
[Edit 1 times, last edit by Sekerob at Apr 22, 2010 8:55:01 AM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hmmmm..... this thread sounds vaguely familiar... PV jail and all...
Sek, I know kreed was going to mull some of this kind of stuff over with his peers... are you privy to any of those observations / conclusions / possible implementation targets?
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Whilst the origin of the thread was intended as a friendly recycled prod [running Linux now on the quad, and that volunteer section is not much different], yes, the discussion is far from new; who knows, more ideas may come out, like the one in the post above yours. No, whatever I'm privy to, I could not share, but I think you just saw my first post of today on something that might be cooking.
----------------------------------------
It's strangely silent outside, not a plane to be heard either... May 1, so I think, with the sun out, I'll go for a push-bicycle ride and meet the neighbors :D
WCG
Please help to make the Forums an enjoyable experience for All!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Here's an idea for knowing whether a machine is likely to return a WU in time.
----------------------------------------
I'm sure someone else has had this crop up... Anyway, one thing that could be done would be to monitor (keep track of) how often a particular client responds and how many WUs are in its cache. If it has "10" days' worth of work (say, for argument's sake, 30 WUs) and the client is only returning 1 or 2 a day... it would be a safe bet that one or more of the WUs in that client's queue isn't going to complete on schedule. So why not just push out another copy, marked with the original target date rather than as an "emergency fix" job?
[Edit 1 times, last edit by Former Member at May 1, 2010 6:12:21 PM]
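That heuristic could be sketched roughly as follows; the function name and numbers are illustrative only, not actual WCG scheduler code:

```python
def wus_at_risk(queued_wus, returns_per_day, days_to_deadline):
    """Estimate how many cached WUs a host will fail to return on time,
    given its observed return rate. Purely an illustrative sketch."""
    if returns_per_day <= 0:
        return queued_wus  # host has gone silent; assume all are at risk
    completable = int(returns_per_day * days_to_deadline)
    return max(0, queued_wus - completable)

# The example from the post: 30 WUs cached, ~2 returned per day,
# 10-day deadline -> about 10 WUs would likely miss the deadline,
# so copies could be sent early with the original target date.
```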
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Sorry BarneyBadass, but I'd think it rather unacceptable to receive a non-repair task and then, because one client is slow and some statistics-based probability says so, have a 3rd copy sent out [with or without the original due date] and someone's bandwidth go to waste. The question then becomes, in this race state, who is first to return, and whether the other client actually communicates with WCG before the redundant task is started. If it were started, 3 copies would be completed... not such a safe bet. There are too many versions of clients around, with and without the server-abort capability; it has been proposed in the past to give the client a feature to tell the WCG server when a task has been started [more scheduler load]. Once or twice we do observe an "Other" state, where a last-minute retraction was made because the No Reply got in first.
----------------------------------------
At any rate, if a client is slow in returning, most probably because it's only on part time for WCG, the client takes care of fetching less work [mainly controlled by <on_frac>, the DCF, and the resource share], so it would still be able to complete the task based on its most recent crunching pattern. My tack is convincing volunteers that huge caches should only be set if there is a true need, whilst server- or client-side features are developed to come ever closer to an AI state.
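As a rough illustration of that client-side throttling: a host that is only on part of the time, shares the CPU with other projects, or underestimates runtimes (DCF > 1) ends up asking for less work. This is a simplification; the real work-fetch logic in the BOINC client is considerably more involved, and the function below is only a sketch:

```python
def work_request_seconds(cache_days, on_frac, resource_share_frac, dcf):
    """Scale a cache request by how often the host is on (<on_frac>),
    the project's share of this host, and the duration correction
    factor (DCF > 1 means tasks run longer than estimated, so the
    client should fetch correspondingly less)."""
    raw = cache_days * 86400.0                     # requested cache in seconds
    return raw * on_frac * resource_share_frac / dcf

# A 10-day cache on a host that's on half the time, gives WCG a 50%
# share, and has a DCF of 2 effectively requests 108000 s (~1.25 days).
```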
WCG
Please help to make the Forums an enjoyable experience for All!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Sekerob,
No worries, just a thought... I wasn't particularly fond of it in the first place, but you know how it is... toss out an idea and someone else monkeys with it a bit, salts and peppers it into something better... collaborative efforts.
I won't even think about AI... (I did some work in AI a long time ago and it's something that both fascinates me and scares the fool out of me at the same time.)
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
No idea ** what made my quad so deserving to get them, but a whole stack of rush jobs came in when I hit the update button to clear a dozen tasks waiting in the transfer tab for upload... I like that... same-day credit, some 20 of them, at a cache setting of 0.9 days. The great-grandchildren look to do that by design, to get the batches complete... they're normal quorum-2 distro and will probably have short run times... good for the result rankings too :D
----------------------------------------
HFCC_s2_02283732_s2_0001_1 -- 1234035 In Progress 6-5-10 09:00:26 10-5-10 09:00:26 0.00 0.0 / 0.0
X0000075850193200610151619_2 -- 1234035 In Progress 6-5-10 08:45:52 10-5-10 08:45:52 0.00 0.0 / 0.0
X0000075360722200609080838_2 -- 1234035 In Progress 6-5-10 08:30:39 10-5-10 08:30:39 0.00 0.0 / 0.0
X0000099530135200805231111_0 -- 1234035 In Progress 6-5-10 07:39:41 16-5-10 07:39:41 0.00 0.0 / 0.0
CMD2_0428-TPM3A.clustersOccur-2BOV_A.clustersOccur_3_101860_105208_104904_105208_105029_105208_0 -- 1234035 In Progress 6-5-10 05:46:14 10-5-10 05:46:14 0.00 0.0 / 0.0
CMD2_0431-TPM3A.clustersOccur-1YNS_A.clustersOccur_6_193244_196314_194785_195091_194886_195091_1 -- 1234035 In Progress 6-5-10 05:45:52 10-5-10 05:45:52 0.00 0.0 / 0.0
CMD2_0429-TPM3A.clustersOccur-1WUU_D.clustersOccur_3_42361_44172_43710_44172_44006_44090_0 -- 1234035 In Progress 6-5-10 05:45:28 10-5-10 05:45:28 0.00 0.0 / 0.0
CMD2_0430-TPM3A.clustersOccur-3CTZ_A.clustersOccur_10_68238_68873_68415_68506_68439_68506_1 -- 1234035 In Progress 6-5-10 05:45:28 10-5-10 05:45:28 0.00 0.0 / 0.0
CMD2_0431-TPM3A.clustersOccur-2J51_A.clustersOccur_2_58627_60961_59075_60018_59584_59801_0 -- 1234035 In Progress 6-5-10 05:45:28 10-5-10 05:45:28 0.00 0.0 / 0.0
CMD2_0431-TPM3A.clustersOccur-2O8J_D.clustersOccur_1_30721_32727_31865_32727_32271_32499_0 -- 1234035 In Progress 6-5-10 05:45:28 10-5-10 05:45:28 0.00 0.0 / 0.0
CMD2_0431-TPM3A.clustersOccur-3CEG_B.clustersOccur_1_18271_20051_18608_18968_18717_18968_0 -- 1234035 In Progress 6-5-10 05:45:28 10-5-10 05:45:28 0.00 0.0 / 0.0
CMD2_0431-TPM3A.clustersOccur-3D0G_A.clustersOccur_90_760812_761644_760980_761146_761079_761146_1 -- 1234035 In Progress 6-5-10 05:45:28 10-5-10 05:45:28 0.00 0.0 / 0.0
CMD2_0431-TPM3A.clustersOccur-1LM7_A.clustersOccur_5_149897_152403_150279_151341_150554_150816_0 -- 1234035 In Progress 6-5-10 05:45:28 10-5-10 05:45:28 0.00 0.0 / 0.0
CMD2_0406-AOC3.clustersOccur-2DYQ_A.clustersOccur_9_63451_64084_63657_63799_63704_63799_0 -- 1234035 In Progress 6-5-10 05:45:28 10-5-10 05:45:28 0.00 0.0 / 0.0
CMD2_0431-TPM3A.clustersOccur-1AVF_A.clustersOccur_10_219462_221528_220424_220608_220496_220608_1 -- 1234035 In Progress 6-5-10 05:45:27 10-5-10 05:45:27 0.00 0.0 / 0.0
CMD2_0428-TPM3A.clustersOccur-1OTZ_1.clustersOccur_5_266951_271942_269908_270925_270477_270925_1 -- 1234035 In Progress 6-5-10 05:45:27 10-5-10 05:45:27 0.00 0.0 / 0.0
CMD2_0431-TPM3A.clustersOccur-3D0G_A.clustersOccur_31_263440_264269_263696_263791_263741_263791_1 -- 1234035 In Progress 6-5-10 05:45:27 10-5-10 05:45:27 0.00 0.0 / 0.0
CMD2_0431-TPM3A.clustersOccur-3D0G_A.clustersOccur_62_527624_528443_528179_528267_528227_528267_0 -- 1234035 In Progress 6-5-10 05:45:27 10-5-10 05:45:27 0.00 0.0 / 0.0
CMD2_0430-TPM3A.clustersOccur-1RXS_C.clustersOccur_13_240410_242065_241375_242065_241764_242065_1 -- 1234035 In Progress 6-5-10 05:45:27 10-5-10 05:45:27 0.00 0.0 / 0.0
CMD2_0431-TPM3A.clustersOccur-1AVF_A.clustersOccur_26_551539_553621_553175_553621_553260_553380_1 -- 1234035 In Progress 6-5-10 05:45:27 10-5-10 05:45:27 0.00 0.0 / 0.0
** of course I do, but that's just to get minds whirring ;>)
WCG
Please help to make the Forums an enjoyable experience for All!