World Community Grid Forums
Thread Status: Active | Total posts in this thread: 11
wolf 359
Cruncher | Joined: Nov 4, 2008 | Post Count: 49
Elapsed time on tasks for this project has decreased by at least 90%.
----------------------------------------
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
Did it really ;>) http://bit.ly/WCGART + http://bit.ly/WCGOET1 (Thru Noon)
----------------------------------------
(Has been happening since the beginning of the project, up and down and up and down and...)
[Edit 1 time, last edit by Former Member at Jun 4, 2015 4:34:57 PM]
----------------------------------------
TPCBF
Master Cruncher | USA | Joined: Jan 2, 2011 | Post Count: 2173
> Elapsed time on tasks for this project has decreased by at least 90%.
There is no standard "size" to the runtime of OET tasks, and right now it seems that a whole storm of "shorties" is in the queue. The total # of WUs on my result page has more than tripled overnight, and will likely do so again shortly. It's just a pain wasting all the resources sending those things back and forth, but it surely increases the result stats for those people who are "into" that kind of stuff... Ralf
----------------------------------------
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
Is the result file size not proportional to the task's duration and complexity? For some projects it is: the longer they run, the bigger the upload, and CEP2 even specifies a range from 20 to 80 MB. Visit WUProp, which actually keeps track of many such parameters [for those participating].
----------------------------------------
Edit: Actually, WUProp reveals for OET1 that uploads vary from 17 to 128 kB: http://wuprop.boinc-af.org/results/projet.py?...n=Outsmart+Ebola+Together Basically, it's "wurst" [German: it doesn't matter], except for the server, which has to handle a bunch more entries... yesterday 223K results, today 722K [estim.]
Edit 2: Interesting param: 0.2 Mb data transfer per core per day... gob-smacking ;o)
[Edit 2 times, last edit by Former Member at Jun 4, 2015 5:37:09 PM]
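A rough cross-check, assuming the 0.2 Mb figure means about 0.2 MB of upload volume per core per day: divided by a mid-range upload of ~70 kB, that works out to roughly 3 OET1 result uploads per core per day. Both the unit reading and the midpoint are assumptions, so treat it as an order-of-magnitude estimate only.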
----------------------------------------
TPCBF
Master Cruncher | USA | Joined: Jan 2, 2011 | Post Count: 2173
> Is the result file size not proportional to the task's duration and complexity? For some projects it is: the longer they run, the bigger the upload, and CEP2 even specifies a range from 20 to 80 MB. Visit WUProp, which actually keeps track of many such parameters [for those participating].
Not sure any non-German speakers are understanding this?
> Edit: Actually, WUProp reveals for OET1 that uploads vary from 17 to 128 kB: http://wuprop.boinc-af.org/results/projet.py?...n=Outsmart+Ebola+Together Basically, it's "wurst", except for the server, which has to handle a bunch more entries... yesterday 223K results, today 722K [estim.]
> Edit 2: Interesting param: 0.2 Mb data transfer per core per day... gob-smacking ;o)
The (possible) problem is not the amount of actual data being transmitted; there is also the networking overhead of opening and closing connections far more often, on the WCG server side as well as on the client side. One firewall, at a location where four pretty quick hosts sit, actually flagged the server IP as suspicious due to the amount of traffic (again, not the amount of actual data, but the number of connections opened and closed)... Ralf
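If the connection churn itself bothers a firewall, one partial mitigation is to cap how many file transfers the client runs at once. A minimal sketch of a cc_config.xml using the standard BOINC client options (the values here are illustrative; note this limits concurrent connections, not the total number opened over a day, so it may only soften a firewall's rate heuristics):

<cc_config>
   <options>
      <!-- at most 4 simultaneous file transfers overall -->
      <max_file_xfers>4</max_file_xfers>
      <!-- at most 2 simultaneous transfers per project -->
      <max_file_xfers_per_project>2</max_file_xfers_per_project>
   </options>
</cc_config>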
----------------------------------------
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
Since Conchita, everyone knows Wurst.
----------------------------------------
As for your particular problem, it is quite easy to configure the v7 client so that the connects are greatly reduced.
[Edit 1 time, last edit by Former Member at Jun 4, 2015 7:43:28 PM]
----------------------------------------
l_mckeon
Senior Cruncher | Joined: Oct 20, 2007 | Post Count: 439
Sekerob says:
> Did it really ;>) http://bit.ly/WCGART + http://bit.ly/WCGOET1 (Thru Noon) (Has been happening since the beginning of the project, up and down and up and down and...)
----------------------------------------
Yes, but it's useful for those of us who download WUs in batches to know, so we don't run out of work in the middle of the night. FWIW, the 891 batch is really short as well.
----------------------------------------
TPCBF
Master Cruncher | USA | Joined: Jan 2, 2011 | Post Count: 2173
> Since Conchita, everyone knows Wurst.
Ok, now I have to barf...
> As for your particular problem, it is quite easy to configure the v7 client so that the connects are greatly reduced.
I don't like to mess with any settings. And no harm done: it is only flagged as a warning, and since I know what is going on, it's not a big deal. Other setups might be less relaxed... Ralf
----------------------------------------
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
It's hardly "messing": set the minimum work buffer low and the maximum additional work buffer to, for instance, 1 day. Any time the total buffer drops below the minimum, a bulk request is made to top up to min+max [limited to at most 35 tasks per core... yes, it's Scrooge in charge]. 70 would be better, tech [if you're reading]. That gives a maximum of 3 days even for my slowest PC... right now it's barely 24 hours, even on the 9-year-old.
Reporting (and good that you have not messed with it) will be grouped too, mostly combined with work requests [result file uploads, though, are always immediate, unless you set a scheduled connection window]. All in all, if you clear local prefs [if activated at all] and use the default website device profile, it's one change applying to all your nodes.
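For anyone who does prefer a local file over the website profile, a minimal sketch of those two settings in BOINC's global_prefs_override.xml (values illustrative; the same pair can be set in the website device profile, which is the route described above):

<global_preferences>
   <!-- refill whenever the queue drops below 0.1 days of work -->
   <work_buf_min_days>0.1</work_buf_min_days>
   <!-- then top up with about 1 day of additional work in one bulk request -->
   <work_buf_additional_days>1.0</work_buf_additional_days>
</global_preferences>

The client picks this up via the Manager's "Read local prefs file" menu item; fewer, larger top-ups are exactly what cuts down the connection count.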
----------------------------------------
TPCBF
Master Cruncher | USA | Joined: Jan 2, 2011 | Post Count: 2173
I like to set up any BOINC account as "fire and forget"; that's what I mean by not messing around. I might not even get to one of the hosts for a week...
Btw, FWIW, the average runtime of the OET jobs has increased significantly again, and the hosts I just checked are "filled to the brim" with OET jobs, probably enough to make it well into Sunday at least... (PST) Ralf