Thread Status: Active
Total posts in this thread: 23
Posts: 23   Pages: 3   [ 1 2 3 | Next Page ]
This topic has been viewed 5332 times and has 22 replies
Johnny Cool
Ace Cruncher
USA
Joined: Jul 28, 2005
Post Count: 8621
Status: Offline
Being flooded with GFAM work units!!!

Getting flooded with GFAM work units; easily over 300 of them (I have a 3-day cache, but I de-selected GFAM about 24 hours ago).

They are due 3-30-2013! Yikes! crying
----------------------------------------

Team Andrax Co-Captain
Free-DC Stats
Join Team Andrax at WCG
[Mar 28, 2013 7:58:36 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Being flooded with GFAM work units!!!

Something went awry [I think] with the deadline logic. I saw wingman repairs through this morning all with 10-day deadlines, i.e. the original Zero Redundancy copy was returned in 1.5 days, the task went serially to the PVer, and the wingman got another full 10 days; then it flipped back to 4 days on the last one returned, at 11:54 UTC. Monitoring closely to pin down the [perceived] issue.

A tech noted offline that user-aborted tasks [those that were not started, if I understood correctly] now also go to regular crunchers in addition to reliable hosts. Logically those replacements for user aborts are not really repairs; I'm just hoping they get a deadline equal to the original's remainder, meaning if a task sat 2 days before being aborted and had 10 days, the replacement gets 8 days, but I have not heard details on that. At any rate, this may reduce the "no work for..." messages when the repair queue is too big.
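The hoped-for deadline arithmetic above would look like this (a sketch of the guess only; nothing here is confirmed server behavior, and the function name is made up):

```python
from datetime import datetime, timedelta

def inherited_deadline(issued: datetime, original_days: float,
                       aborted: datetime) -> timedelta:
    """Time left for a replacement of a user-aborted task, assuming it
    inherits the original task's absolute deadline (a guess, not
    confirmed project behavior)."""
    deadline = issued + timedelta(days=original_days)
    return deadline - aborted

# The example from the post: a 10-day task aborted after sitting 2 days
# would leave 8 days for the replacement.
issued = datetime(2013, 3, 28)
left = inherited_deadline(issued, 10, issued + timedelta(days=2))
print(left.days)  # 8
```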

My hosts have not been inundated, running the normal DSFL/GFAM/HCC/CEP2 (1ATT) mix.

Edit: And though my host had a few very short GFAM tasks, that has no effect on TTC with the v7 client. For those clients the client-side DCF is disabled; TTC is server-controlled these days for the new client.
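The pre-v7 behavior this alludes to can be sketched roughly as follows (a simplified illustration of the duration correction factor idea only, NOT BOINC's exact update rule; function names are made up):

```python
# Simplified sketch of pre-v7 client-side duration correction factor (DCF)
# behavior. Illustration only, not BOINC's actual formula.

def update_dcf(dcf: float, estimated_secs: float, actual_secs: float) -> float:
    """Adjust DCF after a task finishes: runtimes shorter than estimated
    pull DCF down gradually; longer runtimes raise it at once."""
    ratio = actual_secs / estimated_secs
    if ratio >= dcf:
        return ratio                    # under-estimate: correct upward immediately
    return 0.9 * dcf + 0.1 * ratio      # over-estimate: drift downward slowly

def estimated_ttc(raw_estimate_secs: float, dcf: float) -> float:
    """Time-to-completion the client shows for a cached WU."""
    return raw_estimate_secs * dcf

# A very short task (1,000 s actual vs 10,000 s estimated) lowers DCF,
# which shrinks the TTC of every cached WU and can invite extra downloads.
dcf = update_dcf(1.0, 10_000, 1_000)    # short task pulls DCF below 1.0
print(estimated_ttc(10_000, dcf))       # TTC for cached WUs shrinks too
```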
----------------------------------------
[Edit 1 times, last edit by Former Member at Mar 28, 2013 8:21:45 PM]
[Mar 28, 2013 8:20:10 PM]
slakin
Advanced Cruncher
Joined: Jul 4, 2008
Post Count: 79
Status: Offline
Re: Being flooded with GFAM work units!!!

I occasionally have a problem with getting flooded with work units if I am getting both CPU and GPU tasks together. I am hoping the new release of BOINC due out shortly will resolve this. In the short term I have been hand-managing it by raising my CPU cache when I want tasks and then lowering it once a batch has been received. I continue to get GPU tasks even when the cache is set to 0 days.
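For reference, the cache raising and lowering described above corresponds to BOINC's work-buffer preferences; they can be pinned per host in global_prefs_override.xml in the BOINC data directory (the values below are illustrative, not a recommendation):

```xml
<!-- global_prefs_override.xml: overrides website preferences on this host.
     0.5 days minimum buffer plus 0.25 additional; lower both toward 0
     to stop the client from fetching extra work. -->
<global_preferences>
   <work_buf_min_days>0.5</work_buf_min_days>
   <work_buf_additional_days>0.25</work_buf_additional_days>
</global_preferences>
```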
[Mar 28, 2013 8:22:08 PM]
Johnny Cool
Ace Cruncher
USA
Joined: Jul 28, 2005
Post Count: 8621
Status: Offline
Re: Being flooded with GFAM work units!!!

Oh well, hoping for the best then!

Just running GPU work units, HCC at present (along with what GFAMs are in my cache), using 6.10.58. Everything had been OK until I saw this (all those GFAMs).

Reminds me of that game "Lemmings" on the late, great Amiga 500!!!!! laughing
----------------------------------------

Team Andrax Co-Captain
Free-DC Stats
Join Team Andrax at WCG
----------------------------------------
[Edit 1 times, last edit by Johnny Cool at Mar 28, 2013 8:28:24 PM]
[Mar 28, 2013 8:27:24 PM]
PecosRiverM
Veteran Cruncher
The Great State of Texas
Joined: Apr 27, 2007
Post Count: 1054
Status: Offline
Re: Being flooded with GFAM work units!!!

Just running GPU work units, HCC at present (along with what GFAMs are in my cache), using 6.10.58. Everything had been OK until I saw this (all those GFAMs).


I may be wrong, but I noticed that with 6.10.58, GPU WUs caused the estimated time on cached WUs to drop, causing extra WUs to download.

When I upgraded to 7.0.28 and then 7.0.42, the problem stopped.

Might be worth a try.

coffee
cowboy
----------------------------------------

[Mar 29, 2013 2:46:18 PM]
Johnny Cool
Ace Cruncher
USA
Joined: Jul 28, 2005
Post Count: 8621
Status: Offline
Re: Being flooded with GFAM work units!!!

Just running GPU work units, HCC at present (along with what GFAMs are in my cache), using 6.10.58. Everything had been OK until I saw this (all those GFAMs).


I may be wrong, but I noticed that with 6.10.58, GPU WUs caused the estimated time on cached WUs to drop, causing extra WUs to download.

When I upgraded to 7.0.28 and then 7.0.42, the problem stopped.

Might be worth a try.

coffee
cowboy


PecosRiverM, thanks for giving me an answer. Certainly something to really think about. So many confusing threads about a whack of BOINC versions (at least for me), and quite a few app_configs from a lot of folks here. What happened to me with this *GFAM flood*, along with some HCC GPU work units, was not good at all (to be kind).

Several days ago, I was running 20 to as many as 25 threads of GFAM and HCC concurrently. Now, because I had to clean up this mess, I am back to only 8 threads running HCC GPU. I'll figure it out via email or maybe here. Again, thanks! good luck
----------------------------------------

Team Andrax Co-Captain
Free-DC Stats
Join Team Andrax at WCG
[Mar 29, 2013 7:58:52 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Being flooded with GFAM work units!!!

I also noticed an increasing number of repair jobs with 10-day deadlines here. It seems those are becoming more and more common. There are quite a lot of replication-number _0 wingmen with a "detached" status.

I also received a repair job (10-day deadline) from a wingman who had been given a "too late" / "no reply" status only 50 minutes after he/she got the task. See the list below (**); the example marked there was even given the "no reply" status immediately.

These are the error logs from the wingman tasks I have on my results page. I have never seen an error in a GFAM task on my own machines. My machine is running a GFAM-only profile with BOINC 6.10.58 (64-bit) on Ubuntu. Maybe this helps:

Result Name: GFAM_x1AWQ_hCypA_0087023_0212_0--
<core_client_version>6.10.17</core_client_version>
<![CDATA[
<message>
too many exit(0)s
</message>
]]>

Result Name: GFAM_x1AWQ_hCypA_0087086_0171_0--
<core_client_version>7.0.28</core_client_version>
<![CDATA[
<message>
couldn't start CreateProcess() failed - A required privilege is not held by the client. (0x522): -148
</message>
]]>

Result Name: GFAM_x1BCK_hCypA_0087200_0042_0-- --> detached

Result Name: GFAM_x1CWJ_hCypA_0087786_0066_0--
<core_client_version>6.2.28</core_client_version>
<![CDATA[
<message>
too many exit(0)s
</message>
]]>

(**) GFAM_x1CWK_hCypA_0087926_0005_0-- - No Reply 3/29/13 11:35:23 3/29/13 11:35:43 0.00 0.0 / 0.0

The next 2 replications are from the same task:

Result Name: GFAM_x1FGL_hCypA_0088192_0065_0--
<core_client_version>6.10.58</core_client_version>
<![CDATA[
<message>
too many exit(0)s
</message>
]]>

Result Name: GFAM_x1FGL_hCypA_0088192_0065_2--
<core_client_version>7.0.28</core_client_version>
<![CDATA[
<message>
couldn't start Can't write init file: -108: -108
</message>
]]>

Apart from this, I'm not getting flooded with jobs or anything.
----------------------------------------
[Edit 1 times, last edit by Former Member at Mar 30, 2013 3:26:33 PM]
[Mar 30, 2013 3:24:27 PM]
Falconet
Master Cruncher
Portugal
Joined: Mar 9, 2009
Post Count: 3315
Status: Recently Active
Re: Being flooded with GFAM work units!!!

I see lots of these, herna:

Result Name: GFAM_x1AWQ_hCypA_0087023_0212_0--
<core_client_version>6.10.17</core_client_version>
<![CDATA[
<message>
too many exit(0)s
</message>
]]>

The 6.10.17 BOINC version is always there when this error happens to a wingman.
----------------------------------------


- AMD Ryzen 5 1600AF 6C/12T 3.2 GHz - 85W
- AMD Ryzen 5 2500U 4C/8T 2.0 GHz - 28W
- AMD Ryzen 7 7730U 8C/16T 3.0 GHz
[Mar 30, 2013 3:26:38 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Being flooded with GFAM work units!!!

'Always' is a big statement, but yes, 6.10.17 is one of those versions to quickly skip past... It's also quite likely a Linux distro [is it?] that is overdue for an upgrade, so the repo could then fetch a newer client as well.

A tech [armstrdj] has responded favorably to the suggestion of including more client info [CPU/platform] in the Result Log in future updates, so we won't have to second-guess, or ask again, what hardware/OS we're dealing with when offering help.

uplinger noted on behalf of knreed a few days ago that 'user aborts' no longer go only to the repair queue, but also to regular crunchers [he did not say what their deadlines were, but I saw they got the full 10 days from date of submission, not 10 days from the original date of copy _0]. The 'user aborts' from the v7 client still get mislabeled as 'error', so it could be those too.

And yes, there were a bunch of verifiers that got circulated with 10 days [I had a boatload]. I saw this on GFAM and reported it to the techs a few days ago too. Now they show, for me at least, with 4 days again... since sometime on the 29th.

The forums have been coming and going for me at the moment; I suspended client networking because of persistent upload/download failures, so I'm taking it slow for now.
[Mar 30, 2013 5:08:34 PM]
asdavid
Veteran Cruncher
FRANCE
Joined: Nov 18, 2004
Post Count: 521
Status: Offline
Re: Being flooded with GFAM work units!!!

Yesterday the PVer repairs came with 4 days for me also, but I got 2 GFAM this morning with 10 days again.
----------------------------------------
Anne-Sophie

[Mar 30, 2013 6:11:36 PM]