World Community Grid Forums
Category: Completed Research | Forum: Drug Search for Leishmaniasis | Thread: Is the project going faster than expected?
Thread Status: Active | Total posts in this thread: 43
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Largely, the mix is influenced by what goes to the [leftover] pool. If FAAH is substantially an 'exclusive' project, as DSFL is, then that would be what you're observing: both are rather scarce in the pool, so they come down to you in fairer share. Per knreed, the DSFL share is still set low, so little is left for the pool. That said, Clean Water units are rather short, i.e. there are more of them going around; but even when that mix of CW and DSFL is chosen, no DSFL is currently coming through [observed over a 24-hour period, hence the micromanaging].
It's not as simple as it seems. The techs play fair share in terms of total CPU time: each week they look at the mean run times, then tell the distributor how many work units go into that X CPU-years per project. It's probably done manually to keep the system from going haywire when the mean times suddenly drop or skyrocket, throttling or flooding when there isn't enough on the shelves. In other words, it's a safety so that, going forward, the supply of any one science doesn't end up drained or clogged, which could leave 'exclusive' crunchers who haven't selected "send me something else when dry" sitting idle [or fetching work outside WCG] ;D Sort of.
[VOT] A thought that crossed my mind recently, though not yet proposed in the appropriate place: if we could select a quantity for every science, same as for CEP2, everyone would be able to get their mix and match; only when all those slots are filled and the cache is still not filled to the brim would the distributor send a random pick from the selected projects. It could have an impact on the scheduler, or maybe not; I don't know what is tracked in terms of mix for each client. I do think the number of visits to the device profiles would increase substantially, so a % per science is likely better suited, closer to set-and-forget. [/VOT]
--//--
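To make that bookkeeping concrete, here is a minimal sketch in Python of the weekly adjustment described above, assuming the techs set a CPU-time share per science and convert it to a work-unit quota via the observed mean runtime. Every share, runtime and total below is invented for illustration; this is not actual WCG feeder code.

# Hypothetical sketch of the weekly feeder adjustment described above:
# each science gets a share of total CPU time, and that week's mean
# runtime converts the share into a work-unit quota for the distributor.
# All figures are made up for illustration.

WEEKLY_CPU_HOURS = 1_000_000        # assumed total grid capacity per week

cpu_share = {                        # assumed per-science CPU-time shares
    "HFCC": 0.40,
    "CEP2": 0.25,
    "C4CW": 0.20,
    "FAAH": 0.10,
    "DSFL": 0.05,                    # "share still set low" => little left for the pool
}

mean_runtime_hours = {               # assumed mean runtimes observed this week
    "HFCC": 4.0,
    "CEP2": 10.0,
    "C4CW": 3.5,
    "FAAH": 6.0,
    "DSFL": 8.0,
}

def weekly_quota(science: str) -> int:
    """Work units handed to the distributor for one science this week."""
    hours = WEEKLY_CPU_HOURS * cpu_share[science]
    return int(hours / mean_runtime_hours[science])

for science in cpu_share:
    print(f"{science}: {weekly_quota(science):>7} work units this week")

If a science's mean runtime suddenly doubles, its quota halves, which is why checking this by hand each week acts as a safety against flooding or starving any one supply.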
nasher
Veteran Cruncher | USA | Joined: Dec 2, 2005 | Post Count: 1422 | Status: Offline
Also, right now a lot of people are badge hunting on DSFL, so they're grabbing the units up and there are far fewer left in the general pool for everyone else.
----------------------------------------
Myself, I go for gold on each new project and then decide if I want to do more. I have "send me something else when dry" set to make sure I always get work.
Bearcat
Master Cruncher | USA | Joined: Jan 6, 2007 | Post Count: 2803 | Status: Offline
Didn't know this project was having issues until I saw my stats showing low output, then read some of these posts. Moved one machine over to HFCC. Hope it's back at full throttle soon.
----------------------------------------
Crunching for humanity since 2007!
pcwr
Ace Cruncher | England | Joined: Sep 17, 2005 | Post Count: 10903 | Status: Offline
I'm now only getting re-sent WUs, unless I set my machines to process DSfL exclusively.
----------------------------------------
Slowly getting to silver. Patrick
krakatuk
Advanced Cruncher | Germany | Joined: Oct 3, 2008 | Post Count: 141 | Status: Offline
A very interesting conclusion can be drawn from SekeRob's explanation (please correct me if I'm wrong):

If you are doing more for one specific project, that means somebody else (who selects all projects) is doing less for it and more for all the other projects. So no matter which project you select to crunch for, you are always crunching for all of them. (There is only one exception, CEP2, because it is not automatically selected when you select all; there are not enough people with CEP2 selected, and even the highest feeder priority cannot help there.)

From this I would conclude that the speed of a particular (non-CEP2) project basically depends on the overall WCG speed and the feeder settings made by the techs. How many people crunch the project exclusively does not change much, as long as there are enough people crunching all projects. In that case it always makes sense to crunch the project that best fits your hardware (to get more work done), and not the one you personally prefer. Except CEP2, of course...
----------------------------------------
[Edit 1 times, last edit by krakatuk at Sep 23, 2011 3:52:13 PM]
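A quick way to see this displacement effect in numbers, assuming (hypothetically) that the feeder fixes a per-science quota each week and exclusive crunchers draw from it before the "all projects" pool does. All quotas and demand figures here are invented for the example.

# Invented quotas and demands, just to illustrate the displacement effect.
weekly_quota = {"DSFL": 6_000, "FAAH": 16_000, "HFCC": 100_000}
exclusive_demand = {"DSFL": 5_500, "FAAH": 4_000, "HFCC": 10_000}

for science, quota in weekly_quota.items():
    leftover = max(quota - exclusive_demand[science], 0)
    print(f"{science}: {leftover:>6} units left for the 'all projects' pool")

# However the exclusive demand shifts around, each science's total stays
# pinned at its quota (as long as the pool absorbs the leftovers), so
# selecting one project mostly changes *who* crunches it, not how fast
# it progresses -- CEP2 being the exception noted above.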
KWSN - A Shrubbery
Master Cruncher | Joined: Jan 8, 2006 | Post Count: 1585 | Status: Offline
That's the way I understand it to work, and an excellent explanation of how best to benefit the entire project.
----------------------------------------
Distributed computing volunteer since September 27, 2000
Mysteron347
Senior Cruncher | Australia | Joined: Apr 28, 2007 | Post Count: 179 | Status: Offline
krakatuk wrote:
"A very interesting conclusion can be drawn from SekeRob's explanation (please correct me if I'm wrong): So no matter which project you select to crunch for, you are always crunching for all of them."

Yes - the 'conscientious objector' idea: move those who object to one form of service into another. It would seem that no matter how 'popular' any project may be, the unseen hands can tweak the uncommitted processing power to achieve whatever project balance they desire. This defeats the entire concept of selecting projects to run - or to exclude.
Mysteron347
Senior Cruncher | Australia | Joined: Apr 28, 2007 | Post Count: 179 | Status: Offline
From an earlier post by seippel:
"The current number of targets is 5353 and there are 58 batches per target (0-57). The number of work units in a batch can vary, but will likely be around 1000. So there are plenty more targets remaining."

Since, AIUI, the numbering scheme is DSFL_target_batch_..., the highest target number I've seen is 17, and I've seen batches up to 57. Without further data, we appear to have crunched 17/5353 of the work after more than 3 weeks - so we'd have about 944 weeks, or 18 years, still to go. I can't see that any 'faster than expected' conclusion can be drawn - we're at about 5% of the expected pace on these figures...

The earlier targets would appear to be bigger, but there's no indication of by how much. I've had jobs that took ~24 hrs of CPU. Sure, we've had adjustments, and I'm well aware that determining the run-time per position is not an exact science. The legendary 6-hr job runtime against the 24 hrs experienced could mean that the project will take 4 years rather than the 1 estimated, but even that's vastly different from the 18 years I just calculated.

In short, we don't have enough information to draw any conclusions, AFAICS. Early days, though. Just keep crunching and wait for the light at the end of the tunnel. Of course, tunnels are where we mushrooms grow best....
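For anyone who wants to check the arithmetic, the 944-week figure falls out of a simple proportion. The target counts come from the post above; growth of the grid is ignored, which is exactly the objection raised in the next reply.

# Back-of-the-envelope re-run of the estimate above.
targets_total = 5353   # from seippel's post
targets_seen = 17      # highest target number observed so far
weeks_elapsed = 3      # roughly three weeks of crunching

total_weeks = targets_total / targets_seen * weeks_elapsed
print(f"Projected duration: {int(total_weeks)} weeks (~{total_weeks / 52:.0f} years)")

# And the "about 5% of expected" remark, measured against a one-year estimate:
print(f"Current pace is {52 / total_weeks:.1%} of what a 1-year finish needs")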
KWSN - A Shrubbery
Master Cruncher | Joined: Jan 8, 2006 | Post Count: 1585 | Status: Offline
You aren't the first person to come up with an inordinately long estimate based on a very small sample. Even ignoring the inherent bias in making predictions from a small sample, simple logic dictates that an 18-year run time would be impossible. Look at how quickly computer hardware advances in one year and extrapolate the power of the grid out 18 years from now. If you can't see from that why your estimate is unreasonable, then it's beyond me to explain it.
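One way to put a rough number on that argument: if the grid's total throughput grows each year, work that would take 18 years at today's speed finishes far sooner. The 25% annual growth rate below is purely an assumption for illustration, not a WCG statistic.

# Each year the grid delivers more "today-equivalent" years of work.
work_needed = 18.0   # years of work at today's throughput (from the estimate above)
growth = 0.25        # assumed annual growth in grid throughput

delivered, years = 0.0, 0
while delivered < work_needed:
    delivered += (1 + growth) ** years   # this year's output, in today-units
    years += 1

print(f"With {growth:.0%} yearly growth, the backlog clears in ~{years} years, not 18")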
----------------------------------------
Distributed computing volunteer since September 27, 2000
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Target 18 is recent.