World Community Grid Forums

Thread Status: Active. Total posts in this thread: 63.
Former Member
Cruncher, Joined: May 22, 2018, Post Count: 0, Status: Offline
"FAH very small work units. Is this normal?"

In contrast to the short- and very short-running FAHV WUs that we've come to expect, today I've been getting ones that take about 2.5-3 hours on my fast desktop machines. I guess we are now crunching large ligands (potential drug candidates) or large target proteins, or both. Or perhaps the researchers have changed the way they formulate the WUs. Any comments from the techs or researchers?

I have only seen one of these so far, which took 7.36 hours on an i7 laptop. Other than that, mine are all in the 0.02-1.0 hour range.
Former Member
Cruncher, Joined: May 22, 2018, Post Count: 0, Status: Offline
I have one WU that says it will take over 5 hours! This will be a first for me in this project!
Former Member
Cruncher, Joined: May 22, 2018, Post Count: 0, Status: Offline
It finished:

FAHV_1000985_3j3y-5h-P4_Rigid_3276_3 -- M-09, Pending Validation, 1/23/17 17:48:20, 1/23/17 23:56:30, 5.66 / 5.93, 345.2 / 0.0
DCS1955
Veteran Cruncher, USA, Joined: May 24, 2016, Post Count: 668, Status: Offline
I have managed to execute 20 of the "Rigid" work units: average times of 3.5 hours on a fast desktop and up to 8 hours on older laptops. Having old laptops (with cracked/dead screens) does have some benefits in accumulating hours. ;)
Former Member
Cruncher, Joined: May 22, 2018, Post Count: 0, Status: Offline
Quoting uplinger: "Just a heads up, we are going to be attempting to size them for about 1 hour on average going forward. But this can change in the future. We are still getting lots of sizing information back from the current workunits and hope to have more normalized sizing soon. Basically, if we see a ligand for a new workunit being built that we do not have previous data for, then we assume it is the largest possible ligand ever. This is what causes very small work units, but prevents members from getting 50 jobs of all really large unsized ligands. We keep the data separate between flexible and rigid ligands. Thanks, -Uplinger"

Emphasis mine. When is this going to take effect? After 23 million results for this Capsid target, I'd venture that all but a rare few ligands have been seen already. (My 8-core has 260 FAHV "Rigid" WUs in its buffer, each said to take 14 minutes TTC, courtesy of you having disabled the DCF, except that they take 6+ hours apiece on this 4770K at 3 GHz, 99.7% efficiency. I want a one (1) day cache, NOT 8 days, and then to be dropped from the repair channel. So pardon me, but since I like crunching a mix as well, and who said you could monopolize(?), there's an abort button about to be operated.)
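For what it's worth, the "8 days" figure above checks out arithmetically. A quick sketch using only the numbers quoted in this post (260 buffered WUs, roughly 6 hours each, 8 cores); this is plain arithmetic, not anything from WCG's side:

```python
# Cache depth implied by the buffer described above.
buffered_wus = 260     # FAHV "Rigid" results sitting in the buffer
hours_per_wu = 6.0     # observed runtime, not the 14-minute estimate
cores = 8

days_of_work = buffered_wus * hours_per_wu / cores / 24
print(days_of_work)  # -> 8.125, i.e. the "8 days" quoted above
```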
andgra
Senior Cruncher, Sweden, Joined: Mar 15, 2014, Post Count: 195, Status: Offline
Totally agree with Rob!
----------------------------------------
FAHV is deselected here.
/andgra
Former Member
Cruncher, Joined: May 22, 2018, Post Count: 0, Status: Offline
Did what I said I would, with no wincing on my part, nor on WCG's, and got 268 Rigid WUs back in return, at an estimated 35 minutes apiece, the first two heading for, yes, 6+ hours. That's still 8 days per core worth of work, and 7 days of zero hope of getting a beta job [no work requesting], unless something wises up.
adriverhoef
Master Cruncher, The Netherlands, Joined: Apr 3, 2009, Post Count: 2346, Status: Offline
Quoting: "got 268 Rigid back in return, at 35 minutes apiece, the first two heading for, yes, 6+ hours. That's still 8 days/core worth, 7 days of zero hope to get a beta job [no work requesting], unless something wises up."

To get BETA jobs (if there are any left at the moment): first deselect all projects except perhaps HST1; then, in BOINC Manager, open Options → Computing preferences and set the buffer to store the maximum (10 days of work); then wait for BETA jobs (and perhaps HST1) to arrive. As soon as you're lucky enough to get any BETA jobs, restore your settings. You don't have to be a genius, but you do have to be creative and/or inventive.

I'm running three BETAs on Android at the moment:

$ wcgresults -dqq | grep -o 'android_[^ ]* beta24:[1-9][0-9]*' | sed 's/_[^ ]*//'

[Edit 4 times, last edit by adriverhoef at Jan 31, 2017 7:29:55 PM]
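(wcgresults appears to be the poster's own helper script, so the exact shape of its output is an assumption here. For anyone curious what the grep/sed step does, a minimal Python equivalent run against a made-up sample line:)

```python
import re

# Hypothetical wcgresults output line; the real format may differ.
sample = "device android_arm64_v8a beta24:3 uptime 11d"

# Equivalent of: grep -o 'android_[^ ]* beta24:[1-9][0-9]*'
match = re.search(r"android_[^ ]* beta24:[1-9][0-9]*", sample)

# Equivalent of: sed 's/_[^ ]*//'  (drop the first '_...' run,
# collapsing 'android_arm64_v8a' to just 'android')
cleaned = re.sub(r"_[^ ]*", "", match.group(0), count=1)
print(cleaned)  # -> android beta24:3
```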
uplinger
Former World Community Grid Tech, Joined: May 23, 2005, Post Count: 3952, Status: Offline
Quoting the previous post: "When is this going to take effect? After 23 million results for these Capsid, I'd venture to think all but a rare few ligands have been identified. [...] I want one (1) day cache, NOT 8 days [...]"

So, the sizing of the work units was never going to be exact. There are many variables to consider, as I have mentioned in the past about sizing work units. We do our best to juggle all of those, but unfortunately it is a reactive system because of unknowns in the work in progress. For example, we do not know how long a given target will run against a certain ligand with a search box that is 1.1 times larger and 6 torsions instead of 5. Now multiply that by millions of ligands and multiple targets. The estimation that we do is also reactive, based on the results we're getting back from the members. Another part of the equation is that we like to keep a queue of workunits ready to run on the grid; this keeps the flow running smoothly for the projects.

However, when we send work units out, it takes X days to get those results back, and only then do they feed into the values used by the estimator. So, if a target gets changed, it could take more than a few days for the estimator to adjust. In short, the estimator is just that: an estimation based on the values it knows at the time.

Thanks, -Uplinger
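None of the server-side code is public here, so the following is only a sketch of the conservative policy uplinger describes, with invented names and data structures: unknown ligands are booked at the largest runtime ever seen, rigid and flexible ligands keep separate histories, and estimates update reactively as results come back.

```python
# Hypothetical sketch of the sizing policy described above; names and
# structures are invented for illustration, not WCG's actual code.
class RuntimeEstimator:
    def __init__(self):
        # Separate history tables for rigid vs. flexible ligands.
        self.history = {"rigid": {}, "flexible": {}}
        self.max_seen = {"rigid": 1.0, "flexible": 1.0}  # hours

    def record_result(self, mode, ligand, hours):
        """Reactive update: fold a returned result into the history."""
        self.history[mode].setdefault(ligand, []).append(hours)
        self.max_seen[mode] = max(self.max_seen[mode], hours)

    def estimate(self, mode, ligand):
        """Known ligands get their historical average; unknown ones are
        assumed to be the largest ever seen, so batches stay small."""
        runs = self.history[mode].get(ligand)
        if runs:
            return sum(runs) / len(runs)
        return self.max_seen[mode]

est = RuntimeEstimator()
est.record_result("rigid", "lig_A", 0.5)
est.record_result("rigid", "lig_B", 6.0)
print(est.estimate("rigid", "lig_A"))  # -> 0.5 (known ligand: use history)
print(est.estimate("rigid", "lig_C"))  # -> 6.0 (unknown: assume largest)
```

This also shows why the policy produces "very small work units": once one 6-hour ligand is seen, every unsized ligand is budgeted at 6 hours, so fewer of them fit per workunit.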
Former Member
Cruncher, Joined: May 22, 2018, Post Count: 0, Status: Offline
Kindly re-enable the Duration Correction Factor, so my client, you know, the members' clients that are doing the work for you, can react in a more responsive manner than your system is able to; they're designed to respond to exactly that variability: action -> reaction. Your cap of 35 jobs per core should counter it well. If my client sees long jobs coming through, it backs off fetching really quickly and only slowly reduces the TTC, whereas if real-time jobs go long, the client responds very quickly. Now, if you do not want to action this, there's always the 'OPEN' source code: set that bit back so the DCF becomes functional again. It's not rocket science.

<duration_correction_factor>1.000000</duration_correction_factor>
<dont_use_dcf/>  <- this bit: https://github.com/BOINC/boinc/search?utf8=%E2%9C%93&q=%3Cdont_use_dcf%3E