Thread Status: Active
Total posts in this thread: 47
This topic has been viewed 5777 times and has 46 replies.
branjo
Master Cruncher
Slovakia
Joined: Jun 29, 2012
Post Count: 1892
Status: Offline
Re: regarding DB2 maintenance notice

Maybe an ignorant question, twilyth: why do you need a 10-day cache (or 8)?

AFAIK, the maximum number of HCC1 GPU tasks "In progress" per GPU on one device is 1200, regardless of whether your cache is 3 days or 20 days.

The better solution, IMO, would be to start crunching with app_config instead of app_info, which will let you keep your CPU busy with other WCG projects during the outage if you run out of GPU tasks (or you can add some non-GPU WCG project to your app_info).

Cheers
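For anyone who hasn't tried it, a minimal app_config.xml along those lines might look like the sketch below. This is illustrative only: the app name "hcc1" and the usage fractions are assumptions, so check the actual app name in your client_state.xml before using it.

```xml
<!-- Illustrative sketch only: the app name "hcc1" and the usage values
     are assumptions; verify the real app name in client_state.xml.
     Place the file in the WCG project directory. -->
<app_config>
    <app>
        <name>hcc1</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>0.2</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```

Unlike a full app_info.xml, an app_config.xml only adjusts the named apps, so the client can still fetch CPU work from other WCG projects when GPU tasks run dry.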
----------------------------------------

Crunching@Home since January 13 2000. Shrubbing@Home since January 5 2006

----------------------------------------
[Edit 1 times, last edit by branjo at Jan 18, 2013 8:58:46 AM]
[Jan 18, 2013 8:48:35 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: regarding DB2 maintenance notice

No worries, you're quite lucid to me, twilyth... otherwise I'll just keep trying to plumb it, sometimes maybe in a slightly a-plump way ;>)
----------------------------------------
[Edit 1 times, last edit by Former Member at Jan 18, 2013 9:07:26 AM]
[Jan 18, 2013 9:05:10 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: regarding DB2 maintenance notice

Twilyth,

If you allow at least one CPU project on top of your GPU activity, you can at least buffer more work to keep the CPU busy in case the interruption lasts longer than whatever you manage to buffer for the GPU. I strongly recommend lowering your MinB, though, to e.g. 2 days at most [CPU and GPU buffering are quasi-separate].

Since the scheduler, at least in 7.0.xx, gives GPU tasks precedence over CPU tasks, the CPU tasks won't be started until CPU cores are freed up by the GPU. In your case, with high-powered devices at least, don't buffer heavily: there won't be more than 1200 tasks per GPU anyway, and you'd be getting multiple days of CPU tasks as well, which you probably won't want. You'd also be stuck with them for longer once the servers start providing GPU tasks again [and you get rid of enough "in progress" tasks to be allowed more]. There will be an upload/reporting crush when the servers come back.
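As a rough sanity check on how much GPU buffering is actually useful, here's a back-of-the-envelope sketch. The 15-minute runtime is an assumed figure for illustration only; real HCC1 GPU task runtimes vary widely by card.

```python
# Back-of-the-envelope check: how many hours a queue of GPU tasks bridges.
# The 15-minute runtime below is an ASSUMED figure for illustration only;
# real HCC1 GPU task runtimes vary widely by card.

def hours_of_work(tasks_in_progress, task_minutes, concurrent_tasks=1):
    """Hours a task queue lasts at the given per-task runtime and concurrency."""
    return tasks_in_progress * task_minutes / (60.0 * concurrent_tasks)

if __name__ == "__main__":
    # The 1200-per-GPU cap, at an assumed 15 minutes per task, one at a time:
    print(hours_of_work(1200, 15))   # 300.0 hours -- far beyond a 24 h outage
```

Even with runtimes several times shorter than assumed here, the per-GPU cap comfortably outlasts a one-day outage, which is the point about not needing a huge MinB.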
[Jan 18, 2013 9:42:26 AM]
knreed
Former World Community Grid Tech
Joined: Nov 8, 2004
Post Count: 4504
Status: Offline
Re: regarding DB2 maintenance notice

Hopefully this won't cause any unexpected issues, but I have increased the limits to the following:
    <daily_result_quota>500</daily_result_quota>
    <gpu_multiplier>15</gpu_multiplier>
    <initial_daily_result_quota>5</initial_daily_result_quota>
    <max_wus_to_send>50</max_wus_to_send>
    <max_wus_in_progress>500</max_wus_in_progress>
    <max_wus_in_progress_gpu>5000</max_wus_in_progress_gpu>

[Jan 18, 2013 7:54:57 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: regarding DB2 maintenance notice

On the client side, 5000 per GPU card "in progress" is calling for a party... that will bridge the 24-hour scheduled outage for 99.99% of the crunchers. There won't be resistance from the volunteers' side :D

A daily quota of 6500 per GPU card... I don't know how that meshes with 5000 in progress. Neither seems attainable in 24 hours at the moment (I can't remember anyone having a brag post up with that kind of value... more happy volunteer faces with these quotas).

I like the 50-per-fetch limit, much better than 15 or 30; at least the Maximum additional work buffer will lift the cache somewhat over the Minimum buffer level in the v7 clients [on the CPU-tasks side]. For GPU crunchers it does little.

edit: But please, make sure the work generator/feeders are ready to withstand that extra pre-outage suction. ;D
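For readers unfamiliar with the two v7 buffer settings mentioned above, here is a simplified sketch of how MinB and MaxAB interact. This is a simplification for illustration, not the client's exact work-fetch logic.

```python
# Simplified model of the v7 client's two buffer settings:
#   MinB  = "Minimum work buffer" (days)        -- fetch triggers at this level
#   MaxAB = "Maximum additional work buffer"    -- refill headroom above MinB
# This is a SIMPLIFICATION for illustration, not the client's exact logic.

def buffer_window_days(min_buffer, max_additional):
    """Return (fetch trigger, refill target) in days of queued work."""
    return min_buffer, min_buffer + max_additional

if __name__ == "__main__":
    low, high = buffer_window_days(2.0, 0.5)
    print(low, high)   # fetches when the queue drops to 2.0 days, refills toward 2.5
```

This is why a non-zero MaxAB "lifts the cache somewhat over the Minimum buffer level": refills aim above the trigger point instead of exactly at it.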
----------------------------------------
[Edit 1 times, last edit by Former Member at Jan 18, 2013 8:10:57 PM]
[Jan 18, 2013 8:09:19 PM]
coolstream
Senior Cruncher
SCOTLAND
Joined: Nov 8, 2005
Post Count: 475
Status: Offline
Re: regarding DB2 maintenance notice

I appreciate that it is too late to do anything about it now, but wouldn't it have been better to do the maintenance over the weekend, so as to have less impact on the servers?
----------------------------------------

Crunching in memory of my Mum PEGGY, cousin ROPPA and Aunt AUDREY.
[Jan 19, 2013 2:21:14 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: regarding DB2 maintenance notice

Kevin, re the new server distribution settings below:
Hopefully this won't cause any unexpected issues, but I have increased the limits to the following:
    <daily_result_quota>500</daily_result_quota>
    <gpu_multiplier>15</gpu_multiplier>
    <initial_daily_result_quota>5</initial_daily_result_quota>
    <max_wus_to_send>50</max_wus_to_send>
    <max_wus_in_progress>500</max_wus_in_progress>
    <max_wus_in_progress_gpu>5000</max_wus_in_progress_gpu>

I still don't understand this <max_wus_to_send>. I just tested by inflating my buffer setting on a quad and got 70 tasks in one call, which is fine by me, but what causes it... a bug, such as the client being too quick? There's no 11-second gap in the log, so it's not as if two requests got combined, or was there? At SETI@home they claimed that happens with "clients too quick": requests get lost on the way, timeouts pump out more requests, and the server then happily fulfils them all.

17365 World Community Grid 1/19/2013 4:41:08 PM [sched_op] Starting scheduler request
17366 World Community Grid 1/19/2013 4:41:08 PM Sending scheduler request: To fetch work.
17367 World Community Grid 1/19/2013 4:41:08 PM Requesting new tasks for CPU
17368 World Community Grid 1/19/2013 4:41:08 PM [sched_op] CPU work request: 1365955.63 seconds; 0.00 devices
17369 World Community Grid 1/19/2013 4:41:14 PM Scheduler request completed: got 70 new tasks

So far, and this dates back months, the biggest award I've seen is 86-89 tasks in one fetch. As said, fine by me, as long as it does not cause overbuffering [which it has not on my side].
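The "CPU work request: 1365955.63 seconds" line in the log above is the client's computed shortfall. A simplified sketch of that calculation follows; the real client also weights by resource share, on-fraction, and per-project backoffs, all omitted here.

```python
# Simplified sketch of the v7 client's CPU shortfall calculation behind the
# "[sched_op] CPU work request: N seconds" log line. The real client also
# accounts for resource share, on-fraction, etc. -- omitted here.

def cpu_work_request_seconds(buffer_days, buffered_seconds, ncpus):
    """Seconds of work needed to fill every core up to the buffer target."""
    target = buffer_days * 86400.0 * ncpus
    return max(0.0, target - buffered_seconds)

if __name__ == "__main__":
    # A quad-core asking to fill a 2-day buffer from an empty queue:
    print(cpu_work_request_seconds(2.0, 0.0, 4))   # 691200.0
```

An inflated buffer setting on a quad easily produces requests in the million-second range, which the server then translates into dozens of tasks per fetch.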
[Jan 19, 2013 3:48:54 PM]
OldChap
Veteran Cruncher
UK
Joined: Jun 5, 2009
Post Count: 978
Status: Offline
Re: regarding DB2 maintenance notice

I found this interesting, so I will repost here a comment from one of our team:

Was it ever mentioned that these recent cache increases require BOINC 7.0.4x to work? I had one computer still running 7.0.28 that only had 465 WUs in progress, so I upgraded to 7.0.44 and now have the larger cache, which matches my other computer running 7.0.42.


I would be interested to see if anyone else sees the same.
----------------------------------------

[Jan 19, 2013 4:11:53 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: regarding DB2 maintenance notice

To my knowledge there's no change between .28 and .42 in caching/buffering. I'm not aware that the app_config feature added in .42, in place of app_info, does anything additional; the former's info does not get fed back to the server [it will in a future client version]. So yes, I'd be interested to know whether this is reproducible without changing anything in the prefs.
[Jan 19, 2013 6:11:45 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: regarding DB2 maintenance notice

BTW, I've got a 7.0.39 client running on Linux and tried it with 5 days. After some whirring, BOINCTasks indicated that the client had just over 5 days' worth of work per core in queue. I'm only using the MinB setting, not MaxAB... which I've got at zero days.
[Jan 19, 2013 6:38:59 PM]