World Community Grid Forums
Thread Status: Active | Total posts in this thread: 79
MStenholm
Advanced Cruncher, Denmark. Joined: Jan 7, 2010. Post Count: 97. Status: Offline.
Two cards, and still only a 600-WU max buffer. Is SLI/CrossFireX, or whatever the "merge cards" technology is called, enabled? These cards are not for gaming, so no CrossFire here ;) Edit: I restarted the PC in question and, presto, I now have 2x600 WUs. What gives? I did a Windows update a few days ago, including the restart, and that didn't change the numbers. Thank you all for participating in the discussion. I do, however, think that 600 WUs per GPU is on the low side.
[Edit 1 times, last edit by MStenholm at Jan 12, 2013 4:16:49 PM]
OldChap
Veteran Cruncher, UK. Joined: Jun 5, 2009. Post Count: 978. Status: Offline.
I'm not sure I could use CrossFire with such disparate cards, but no, these are running independently.

Buffers are set at 0.5 days and 0.7 days (a rough setting that just works, so I did not try anything else). My only interest in having a large cache is in keeping these running during "server maintenance", so if any of the folks who decide these limits are reading... could you please take a look at the downtime statistics and adjust the maximum cache accordingly? Pretty please.

EDIT: My assumption is that the fastest of cards runs at no more than 120 WUs an hour. Does anyone get more? (Info needed for setting this figure.)
[Edit 2 times, last edit by OldChap at Jan 12, 2013 4:33:22 PM]
deltavee
Ace Cruncher, Texas Hill Country. Joined: Nov 17, 2004. Post Count: 4894. Status: Offline.
> My assumption is that the fastest of cards runs at no more than 120 WUs an hour. Does anyone get more? (Info needed for setting this figure.)

The best my 7970s have averaged over a one-week period is 75.
Edit: Old data from shorter WUs.
[Edit 1 times, last edit by deltavee at Jan 12, 2013 5:28:54 PM]
dskagcommunity
Senior Cruncher, Austria. Joined: May 10, 2011. Post Count: 219. Status: Offline.
I have only a buffer of 90 WUs or so on a 7950, so be happy with your 300 per card ^^ I don't get more units.
OldChap
Veteran Cruncher, UK. Joined: Jun 5, 2009. Post Count: 978. Status: Offline.
dskagcommunity:
See what happens if you increase your Tools > Computing preferences > Network usage tab > minimum work buffer to maybe 0.4 or 0.5, and perhaps set the maximum additional work buffer to a little above 0 but below 0.5.
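For anyone unsure how those two settings interact, here is a rough sketch of the work-fetch rule a BOINC 7.x client applies. This is an illustrative model under my own simplified assumptions, not the actual client code: the client requests work once the buffered estimate drops below the minimum, then tops up to minimum plus additional.

```python
# Illustrative model of BOINC 7.x work fetch (not the real client code):
# fetch triggers only below the minimum buffer, then tops up to
# minimum + additional.

def should_fetch(buffered_days: float, min_buffer: float) -> bool:
    """Fetch work only when the buffer falls below the minimum setting."""
    return buffered_days < min_buffer

def fetch_amount(buffered_days: float, min_buffer: float, additional: float) -> float:
    """Days of work requested in one fetch (0 if no fetch is triggered)."""
    if not should_fetch(buffered_days, min_buffer):
        return 0.0
    return (min_buffer + additional) - buffered_days

# With the values suggested above (Minimum 0.5, Additional 0.4):
print(round(fetch_amount(0.2, 0.5, 0.4), 2))  # buffer at 0.2 days: top up by 0.7 days
print(fetch_amount(0.6, 0.5, 0.4))            # above the minimum: no fetch, 0.0
```

This is also why a near-zero minimum with a large additional buffer behaves oddly: once one big fetch lifts the buffer above the tiny minimum, no further fetches happen until it drains again.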
twilyth
Master Cruncher, US. Joined: Mar 30, 2007. Post Count: 2130. Status: Offline.
> > My assumption is that the fastest of cards runs at no more than 120 WUs an hour. Does anyone get more? (Info needed for setting this figure.)
>
> The best my 7970s have averaged over a one-week period is 75. Edit: Old data from shorter WUs.

On the 2P octo-core with dual 7950s, I seem to be doing one WU in about 9-13 minutes. So with 24 simultaneous GPU threads, I think that comes out to a bit more than 120/hour, since 12 min/WU across 24 threads works out to one completion every 30 seconds. My current 'ready to start' backlog is only 17 tasks, even though I have it set to a cache of 10 days for all of my GPU-only machines. So yeah, it would be very nice to see a change to how the cache is handled for high-throughput HCC-GPU rigs.
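The arithmetic in that estimate checks out, using the figures from the post (24 threads, roughly 12 minutes per WU):

```python
# Checking the throughput estimate above: 24 concurrent GPU tasks at
# roughly 12 minutes each means one task completes every 30 seconds,
# i.e. 120 WUs per hour for the whole host.

threads = 24
minutes_per_wu = 12.0

effective_seconds_per_wu = minutes_per_wu * 60 / threads  # gap between completions
wus_per_hour = 3600 / effective_seconds_per_wu

print(effective_seconds_per_wu)  # 30.0
print(wus_per_hour)              # 120.0
```

At that rate, a 17-task backlog is under nine minutes of GPU work, which is why the 10-day cache setting looks so ineffective here.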
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline.
As I noted a few days ago in another thread, setting a cache as high as the deadline is trouble. Setting a cache to 10 days ("Minimum Work Buffer" or "Connect about every"), when the deadline of HCC is 7 days, tells the client and the server that any task assigned is bound not to come back in time, so only as much work is given as keeps all cores busy. Set the work buffer to something sane, something above the longest outage we know of from the servers or the supply chain, and you'll be experiencing your hallelujah. I propose 1.0 days Minimum for starters, and 1.0 days additional buffer, and then in 24 hours tell us if you saw a change.

(By calculation, with 2 cards at a 600 "In Progress" allowance per card and the device doing 120 per hour, you'd be getting about 10 hours' worth of work for the GPUs.)
[Edit 1 times, last edit by Former Member at Jan 12, 2013 8:39:07 PM]
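The parenthetical calculation above, spelled out with the numbers from the post (two cards, 600 in-progress allowance each, 120 WUs/hour for the whole device):

```python
# Two cards with a 600 "In Progress" allowance each, consumed at
# 120 WUs/hour by the whole device, cover about 10 hours of work.

cards = 2
in_progress_limit = 600   # server-side cap per card
device_rate = 120         # WUs/hour for the whole host

hours_covered = cards * in_progress_limit / device_rate
print(hours_covered)  # 10.0
```

So even at the server-imposed maximum, this host would ride out an outage of about ten hours, which is the context for the request to raise the cap.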
MStenholm
Advanced Cruncher, Denmark. Joined: Jan 7, 2010. Post Count: 97. Status: Offline.
> As I noted a few days ago in another thread, setting a cache as high as the deadline is trouble. Setting a cache to 10 days ("Minimum Work Buffer" or "Connect about every"), when the deadline of HCC is 7 days, tells the client and the server that any task assigned is bound not to come back in time, so only as much work is given as keeps all cores busy. Set the work buffer to something sane, something above the longest outage we know of from the servers or the supply chain, and you'll be experiencing your hallelujah. I propose 1.0 days Minimum for starters, and 1.0 days additional buffer, and then in 24 hours tell us if you saw a change. (By calculation, with 2 cards at a 600 "In Progress" allowance per card and the device doing 120 per hour, you'd be getting about 10 hours' worth of work for the GPUs.)

I know I said my buffer went up to around 2x600 some hours ago, but now it is back to just above 600 again... what to do? Well, I'd better live with it, or go back to folding if the problem persists, so my workshop stays warm and some science gains.
twilyth
Master Cruncher, US. Joined: Mar 30, 2007. Post Count: 2130. Status: Offline.
Sek, what you're overlooking is the fact that it calculates the estimated completion time based on what it would take a CPU to complete the WU.

I'm looking at the estimated time in my 'ready to start' queue and they are all listed at 3 hrs 40 min. All of those will be done in under 60 seconds. Edit: 30 secs on average, but for any given WU, about 6-10 minutes.
[Edit 2 times, last edit by twilyth at Jan 12, 2013 8:48:33 PM]
Former Member
Cruncher. Joined: May 22, 2018. Post Count: 0. Status: Offline.
Well, things of fringe relevance will be overlooked (ignored). Even when your host "projects" those HCC-GPU jobs at 3.4 hours (a riddle, since my measly Q6600 does the CPU version in under 2 hours, and projects them at 2), and guessing you run a CPU/GPU mix, you'd still be able to cache a lot more if only you did what I proposed: set that cache/buffer to a saner level. A dual octo with one day should, at 3.4 hours per task, have about 112 tasks queued. Others are getting their max, so what else is different? What science mix are you running on that device?

Edit: Are these tasks running in High Priority? (They would be if either "Connect about every" or "Minimum Work Buffer" is set to 10 days.) And for client 7.0.xx, the Additional work buffer only works for one work fetch... As soon as the buffer is above the Minimum setting [if you have that at zero or very low], the client stops fetching until it drops below the minimum again. Saturday Night Live; tell us tomorrow what happened when you've decided to reduce the Minimum to the suggested 1.0 days and the additional buffer to e.g. a 0.50-day value.
[Edit 1 times, last edit by Former Member at Jan 12, 2013 9:08:26 PM]
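Where the "about 112 tasks" figure comes from, using the numbers stated in the post (a dual octo, i.e. 16 CPU cores, a 1.0-day buffer, and the host's 3.4-hour projection per task):

```python
# A dual octo (16 CPU cores) with a 1.0-day buffer, each task projected
# at 3.4 hours, queues roughly cores * buffer_days * 24 / hours_per_task.

cores = 16
buffer_days = 1.0
projected_hours_per_task = 3.4

queued = cores * buffer_days * 24 / projected_hours_per_task
print(int(queued))  # 112
```

Since the projection is a CPU-based estimate and the GPU actually finishes tasks in minutes, the real queue drains far faster than the client expects, which is the mismatch twilyth describes above.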