World Community Grid Forums
Category: Completed Research | Forum: FightAIDS@Home | Thread: Adjust limit of tasks in progress?
Thread Status: Active | Total posts in this thread: 7
OldChap
Veteran Cruncher, UK | Joined: Jun 5, 2009 | Post Count: 978 | Status: Offline
I wonder.... What is it that induces a limit on tasks in progress of around 50 rig-hours when the cache setting is for 224 rig-hours, in preparation for some work on my LAN?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
The limit seems to be set as a number of tasks per active CPU core and GPGPU; nothing limits it server-side to hours of work. In practice, fast machines are never able to cache up lots of work in computing hours, except for CEP2 or when other sciences happen to be in a long-hours batch cycle.
Byteball_730a2960
Senior Cruncher | Joined: Oct 29, 2010 | Post Count: 318 | Status: Offline
The limit is 25 tasks per logical core.
I keep running into this issue with one of my machines, as it connects to the internet twice a week but runs 24/7. I sometimes run out of work, especially when the unit lengths are under 3 hours.
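A quick back-of-the-envelope check of the numbers in this post (a minimal sketch using the 25-tasks-per-logical-core cap and the ~3-hour units mentioned in the thread; the figures are the posters', the script is just illustrative):

    # Rough sketch: does a 25-task-per-core cache survive ~3.5 days offline?
    # Numbers are taken from this thread; nothing here is WCG policy.

    TASKS_PER_CORE = 25      # in-progress limit per logical core (per this thread)
    TASK_HOURS = 3.0         # unit length at which the poster starts running dry
    OFFLINE_DAYS = 3.5       # "connects to the internet twice a week"

    cached_hours_per_core = TASKS_PER_CORE * TASK_HOURS   # 75 h of buffered work
    offline_hours = OFFLINE_DAYS * 24                     # 84 h to bridge

    print(f"buffer: {cached_hours_per_core:.0f} h, gap: {offline_hours:.0f} h")
    # buffer: 75 h, gap: 84 h -> the machine runs out of work before it
    # reconnects, which matches the behaviour described above.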
KWSN-A Shrubbery
Senior Cruncher | Joined: Jan 8, 2006 | Post Count: 476 | Status: Offline
Yes, 25 per thread. This was imposed, IIRC, for HCMD2 as it was finishing up. Run times were all over the map and otherwise reasonable machines were over-caching. The limit has simply never been removed.
----------------------------------------
It worked to solve the problem at hand but as we can see it wasn't particularly elegant.
OldChap
Veteran Cruncher, UK | Joined: Jun 5, 2009 | Post Count: 978 | Status: Offline
Wow! So right now a 10-day cache could actually be under 10 hours on FA@H.
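As a rough illustration of that point (assuming the 25-tasks-per-logical-core cap and a fast host finishing an FAHV result in about 24 minutes, both figures taken from this thread):

    # Why a "10 day" cache setting can collapse to roughly 10 hours of FAAH work.

    TASKS_PER_CORE = 25              # per-core in-progress cap discussed above
    TASK_HOURS = 24 / 60             # ~24-minute FAHV results on a fast rig

    requested_hours = 10 * 24                          # 240 h the user asked for
    granted_hours = TASKS_PER_CORE * TASK_HOURS        # 10 h the cap allows

    print(requested_hours, round(granted_hours, 1))    # 240 10.0 (per core)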
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
That would then be 'ludicrously' fast, as 25 per core and 10 hours would suggest a speed of 24 minutes per result for FAHV and/or FAAH. Barely enough to bridge a bigger maintenance outage. The project average over the last 7 days, though, was 2.4 hours; it's not clear what target average WCG is aiming for.
----------------------------------------
Not getting that many tasks for a decent cache could be part of a 'hidden' technician agenda: improved return times, lower storage needs, higher server/result-status-page performance. The deadlines are being worked on too, with MCM and UGM at 7 days. If your device is only on sparingly, just a few hours at the weekend, you're not welcome. Or could we convince the technicians to give a larger 'per core' allowance? If a device is rated reliable, usually returning all results within 48 hours, maybe a two-tiered 'per active core' allowance could be set, slow and fast, but no, that would be yet another way to destabilize what's working.
[Edit 1 times, last edit by Former Member at Oct 29, 2014 12:25:50 PM]
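For what it's worth, the 'two tiered per active core' idea could look something like the sketch below. This is purely hypothetical; the function name, the 48-hour reliability test and the higher cap of 50 are invented for illustration and are not anything WCG actually implements.

    # Hypothetical sketch of a two-tiered per-core allowance (not a WCG feature).

    def per_core_allowance(returns_within_48h: bool) -> int:
        """Pick an in-progress task cap per logical core.

        A host that reliably reports results within 48 hours would get a larger
        buffer; all other hosts keep the existing 25-task cap from this thread.
        """
        RELIABLE_CAP = 50    # invented figure for the "reliable host" tier
        DEFAULT_CAP = 25     # the current limit discussed in this thread
        return RELIABLE_CAP if returns_within_48h else DEFAULT_CAP

    print(per_core_allowance(True), per_core_allowance(False))   # 50 25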
OldChap
Veteran Cruncher, UK | Joined: Jun 5, 2009 | Post Count: 978 | Status: Offline
This rig is only running at 3.1 GHz and completes an FAHV WU in 36 mins, so it is not a stretch to think a fast rig running at, say, 4.7 GHz on a newer architecture might get near 24 mins.
----------------------------------------
[Quoting KWSN-A Shrubbery:] Yes, 25 per thread. This was imposed, IIRC, for HCMD2 as it was finishing up. Run times were all over the map and otherwise reasonable machines were over-caching. The limit has simply never been removed. It worked to solve the problem at hand but as we can see it wasn't particularly elegant.
Thanks for the background on that.
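A quick sanity check of that estimate, assuming run time scales roughly inversely with clock speed (a simplification that ignores any IPC gains from the newer architecture):

    baseline_minutes = 36.0   # FAHV result on the 3.1 GHz rig above
    baseline_ghz = 3.1
    faster_ghz = 4.7

    estimate = baseline_minutes * baseline_ghz / faster_ghz
    print(f"{estimate:.1f} min")   # ~23.7 min, in line with the 24-minute figure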