Posts: 24   Pages: 3   [ Previous Page | 1 2 3 ]
This topic has been viewed 5512 times and has 23 replies
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Pending Validated tasks held due Waiting to be Sent to Wingman


When I run with 0.15 day of extra work buffer I have more or less one day of work in PV, which is more or less what I have always had with other projects.

Because of the server maintenance I had raised the buffer to 0.5 day and within one day the number of PVs decreased by 25-30 %, which tends to show that these WUs are returning quite fast in general.

Cheers. Jean.


Gosh Jean,

I wish my PV queue was only about 1 day deep. But since I only get a new WU 3-7 minutes before the running WU is about to end, my PV queue is rather deep. At this point I'm once again over 50 WUs, and by the end of this evening I'll likely again be on my way toward 60 pending in PV purgatory.

As it is, sometimes I get the 0-- version, sometimes the 1-- version, and I've even had a few 2-- versions of a WU (children / aunts / uncles / grandparents / siblings ... whatever they are). Almost without exception, the WUs I get are returned before the others.

I sure wish there were counters on the results status page that presented the total hours of completed CPU time awaiting its wingman's result, as well as the total # of WUs in PV Jail, by project.

That might be an interesting metric for those at WCG to observe, and perhaps prompt them to reduce or eliminate the long / deep WU buffering some systems seem to think they need in this day and age.

The other thing I'm waiting for is for everyone to process all the WUs I have in PV purgatory and return them all within the same day, just to see what that does to the reporting! wink

Oh Well... I seem to recall something about, "If wishes were fishes..."
----------------------------------------
[Edit 1 times, last edit by Former Member at Jul 11, 2009 12:44:44 AM]
[Jul 11, 2009 12:37:30 AM]
JmBoullier
Former Community Advisor
Normandy - France
Joined: Jan 26, 2007
Post Count: 3716
Status: Offline
Re: Pending Validated tasks held due Waiting to be Sent to Wingman

At this point in time, I'm once again over 50+ WU's and by the end of this evening, I'll likely be again on my way towards the 60 pending in PV purgatory.

So, according to your latest daily results, Barney, this is less than two days of work, which is close to the average, I think.
Allow your WUs to spend half a day in your client's queue and your PV jail will be down to less than 1.5 days of work, simply because half a day will have been spent "In Progress" instead of "Pending Validation", or maybe even less if you better match the crunching habits of the majority.
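The arithmetic behind this is essentially Little's law: pending-validation depth is your return rate times the time each result waits for its wingman, and time spent in your own client's buffer comes off that wait. A minimal sketch, where every rate and turnaround time is an illustrative assumption, not a WCG measurement:

```python
# Little's law sketch: queue depth = return rate x time spent waiting.
# All the numbers below are illustrative assumptions, not WCG data.

def pv_depth(results_per_day, wingman_turnaround_days, client_buffer_days):
    """Estimated depth of the Pending Validation queue.

    A result waits on its wingman for the wingman's turnaround time,
    minus the time the result already sat in our own client's buffer
    ("In Progress" instead of "Pending Validation").
    """
    wait_days = max(wingman_turnaround_days - client_buffer_days, 0.0)
    return results_per_day * wait_days

rate = 30.0     # results returned per day (assumed)
wingman = 2.0   # average wingman turnaround, in days (assumed)

print(pv_depth(rate, wingman, 0.0))  # near-zero buffer: 60.0 pending
print(pv_depth(rate, wingman, 0.5))  # half-day buffer:  45.0 pending
```

Note that raising the buffer past the wingman's turnaround gains nothing further; the wait simply bottoms out at zero.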

Frankly, I have no problem with your choosing the settings you are using, but I really do not see what benefit you are expecting. If you like to always be the first one to return a WU, that's fine with me, but don't be surprised if your WUs then have to wait longer for others before being validated.

Personally, I run with what I already consider a short buffer (0.15 day), simply because I want to be able to change my project choices quickly if needed, AND because I can react quickly in case of feeding problems. If I left my clients to do their work alone in the background, as most members do, I would use a larger buffer. When I was absent for several days recently, I raised this setting to one day.

Cheers. Jean.
----------------------------------------
Team--> Decrypthon -->Statistics/Join -->Thread
----------------------------------------
[Edit 1 times, last edit by JmBoullier at Jul 11, 2009 2:14:30 AM]
[Jul 11, 2009 2:12:42 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Pending Validated tasks held due Waiting to be Sent to Wingman

Jean,

You have an interesting perspective.

For example, here's my PV queue; I'll repost it tomorrow at about the same time so we can see what it looks like then.


CMD2_ 0017-MYH3.clustersOccur-SYTCA.clustersOccur_ 369_ 330174_ 330476_ 0-- 7/10/09 23:17:27 7/11/09 03:05:30 2.40
CMD2_ 0017-MYH3.clustersOccur-THIMA.clustersOccur_ 161_ 284796_ 285201_ 0-- 7/10/09 22:26:26 7/11/09 01:46:00 2.22
CMD2_ 0017-2YU1_ A.clustersOccur-3BAS_ A.clustersOccur_ 3_ 65325_ 68289_ 1-- 7/10/09 22:07:10 7/11/09 01:46:00 1.86
CMD2_ 0017-MYH3.clustersOccur-PGM5A.clustersOccur_ 430_ 562020_ 562454_ 0-- 7/10/09 20:54:08 7/11/09 00:25:34 2.37
CMD2_ 0016-HSP7CA.clustersOccur-TPM2A.clustersOccur_ 67_ 383928_ 384485_ 384409_ 384485_ 384446_ 384485_ 1-- 7/10/09 20:03:26 7/10/09 21:54:19 0.88
CMD2_ 0017-MYH3.clustersOccur-SMAD4A.clustersOccur_ 290_ 581977_ 582570_ 0-- 7/10/09 19:55:15 7/10/09 22:26:26 1.96
CMD2_ 0017-MYH3.clustersOccur-SMAD4A.clustersOccur_ 309_ 620568_ 621239_ 1-- 7/10/09 19:39:41 7/10/09 23:17:27 2.43
CMD2_ 0017-HXK2A.clustersOccur-PGTBA.clustersOccur_ 14_ 1277_ 1295_ 0-- 7/10/09 17:42:18 7/10/09 20:27:32 2.09
CMD2_ 0017-MYH3.clustersOccur-TPM1A.clustersOccur_ 1639_ 5533973_ 5534999_ 1-- 7/10/09 17:36:57 7/10/09 20:27:32 2.26
CMD2_ 0017-MYH3.clustersOccur-TPM1A.clustersOccur_ 1607_ 5425929_ 5426999_ 1-- 7/10/09 17:35:36 7/10/09 20:54:08 2.52
CMD2_ 0017-MYH3.clustersOccur-UBC9.clustersOccur_ 57_ 185364_ 186701_ 0-- 7/10/09 16:11:56 7/10/09 19:55:15 3.02
CMD2_ 0017-MYH3.clustersOccur-Q01105-2A.clustersOccur_ 204_ 515647_ 516169_ 1-- 7/10/09 15:44:40 7/10/09 19:39:19 1.78
CMD2_ 0017-MYH3.clustersOccur-Q01105-2A.clustersOccur_ 131_ 332206_ 333035_ 0-- 7/10/09 14:57:04 7/10/09 19:39:19 2.33
CMD2_ 0017-MYH3.clustersOccur-RADIA.clustersOccur_ 2006_ 1996422_ 1996964_ 0-- 7/10/09 13:04:40 7/10/09 19:39:19 4.42
CMD2_ 0017-MYH3.clustersOccur-RADIA.clustersOccur_ 1671_ 1662886_ 1663137_ 1-- 7/10/09 12:49:35 7/10/09 15:44:40 1.98
CMD2_ 0017-MYH3.clustersOccur-RADIA.clustersOccur_ 456_ 453819_ 454117_ 1-- 7/10/09 12:14:58 7/10/09 15:44:40 2.23
CMD2_ 0017-MYH3.clustersOccur-MYH6.clustersOccur_ 5870_ 3616168_ 3616351_ 0-- 7/10/09 10:51:17 7/10/09 14:55:42 2.14
CMD2_ 0017-HXK2A.clustersOccur-PGTBA.clustersOccur_ 99_ 8955_ 8999_ 0-- 7/10/09 07:44:23 7/10/09 12:14:58 2.85
CMD2_ 0017-HXK2A.clustersOccur-SMAD4A.clustersOccur_ 252_ 38408_ 38455_ 0-- 7/10/09 07:13:43 7/10/09 10:47:56 1.19
CMD2_ 0016-ATPB.clustersOccur-MYH1.clustersOccur_ 121_ 234065_ 234483_ 1-- 7/10/09 03:34:38 7/10/09 07:13:37 2.86
CMD2_ 0015-MYH14.clustersOccur-MYH3.clustersOccur_ 1804_ 868051_ 868204_ 1-- 7/10/09 02:33:47 7/10/09 05:30:45 1.84
CMD2_ 0016-1433GA.clustersOccur-SAHH3A.clustersOccur_ 26_ 150114_ 151426_ 0-- 7/10/09 01:37:14 7/10/09 04:27:34 1.86
CMD2_ 0015-HSP7CA.clustersOccur-TPM1A.clustersOccur_ 101_ 721788_ 723179_ 1-- 7/9/09 23:02:31 7/10/09 04:27:34 4.16
CMD2_ 0017-MYH3.clustersOccur-SAHH3A.clustersOccur_ 714_ 655953_ 656369_ 0-- 7/9/09 20:10:02 7/9/09 23:02:30 2.31
CMD2_ 0017-2YU3_ A.clustersOccur-3BHX_ A.clustersOccur_ 46_ 302746_ 304263_ 1-- 7/9/09 19:04:40 7/9/09 22:40:29 2.14
CMD2_ 0016-1433GA.clustersOccur-MYH2A.clustersOccur_ 118_ 275321_ 276116_ 1-- 7/9/09 18:20:39 7/9/09 22:40:29 2.69
CMD2_ 0016-HSP7CA.clustersOccur-ITB5A.clustersOccur_ 1733_ 214950_ 214971_ 214956_ 214963_ 0-- 7/9/09 17:31:31 7/9/09 19:04:37 0.79
CMD2_ 0016-2YUS_ A.clustersOccur-3B7B_ A.clustersOccur_ 3_ 114040_ 117014_ 115298_ 115870_ 1-- 7/9/09 15:47:33 7/9/09 19:04:37 2.37
CMD2_ 0016-1433GA.clustersOccur-EZRIA.clustersOccur_ 20_ 65896_ 67198_ 67090_ 67198_ 0-- 7/9/09 15:17:52 7/9/09 17:31:31 1.15
CMD2_ 0017-MYH1.clustersOccur-PGM1A.clustersOccur_ 801_ 0-- 7/9/09 11:45:31 7/9/09 16:42:05 4.00
CMD2_ 0015-HSP7CA.clustersOccur-RADIA.clustersOccur_ 92_ 120074_ 120203_ 120187_ 120203_ 0-- 7/9/09 09:16:59 7/9/09 11:45:10 1.69
CMD2_ 0017-MYH1.clustersOccur-NALDLA.clustersOccur_ 773_ 0-- 7/9/09 05:40:03 7/9/09 09:51:22 4.00
CMD2_ 0017-MYH1.clustersOccur-VINCA.clustersOccur_ 259_ 0-- 7/9/09 05:14:41 7/9/09 09:37:06 4.00
CMD2_ 0017-MYH1.clustersOccur-TCPZ.clustersOccur_ 372_ 0-- 7/9/09 02:44:49 7/9/09 09:16:59 4.01
CMD2_ 0017-MYH1.clustersOccur-MYH1.clustersOccur_ 2056_ 1-- 7/9/09 00:41:53 7/9/09 07:21:04 4.00
CMD2_ 0017-MYH1.clustersOccur-MYH3.clustersOccur_ 4616_ 1-- 7/8/09 22:32:41 7/9/09 04:41:26 4.01
CMD2_ 0017-MYH1.clustersOccur-MYH3.clustersOccur_ 1780_ 0-- 7/8/09 20:27:50 7/9/09 01:09:19 4.03
CMD2_ 0017-MYH1.clustersOccur-MYH14.clustersOccur_ 1301_ 0-- 7/8/09 16:51:22 7/8/09 22:32:41 4.01
CMD2_ 0017-MYH1.clustersOccur-MYH14.clustersOccur_ 685_ 0-- 7/8/09 16:21:32 7/8/09 20:53:56 4.01
CMD2_ 0017-MYH1.clustersOccur-MYH2A.clustersOccur_ 2151_ 1-- 7/8/09 12:48:34 7/8/09 18:25:13 4.01
CMD2_ 0017-MYH1.clustersOccur-SOS1A.clustersOccur_ 867_ 0-- 7/8/09 09:03:51 7/8/09 16:21:32 5.58
CMD2_ 0017-MYH3.clustersOccur-MYH3.clustersOccur_ 3548_ 0-- 7/7/09 17:54:47 7/8/09 05:02:12 4.01
CMD2_ 0017-2YU1_ A.clustersOccur-3B7B_ A.clustersOccur_ 69_ 1-- 7/7/09 15:34:55 7/7/09 19:16:03 2.22
CMD2_ 0015-HSP7CA.clustersOccur-MOESA.clustersOccur_ 507_ 825873_ 826515_ 1-- 7/7/09 13:24:16 7/7/09 15:34:55 1.88
CMD2_ 0017-MYH3.clustersOccur-SMAD4A.clustersOccur_ 62_ 0-- 7/7/09 05:03:58 7/7/09 13:08:25 6.63
CMD2_ 0014-GRP75A.clustersOccur-SOS1A.clustersOccur_ 121_ 216015_ 216671_ 216400_ 216671_ 0-- 7/6/09 21:35:44 7/6/09 23:02:42 1.14
CMD2_ 0016-CUL3A.clustersOccur-PAK2.clustersOccur_ 11_ 28960_ 29827_ 1-- 7/6/09 12:05:57 7/6/09 15:31:18 2.57
CMD2_ 0017-MYH3.clustersOccur-MYH6.clustersOccur_ 3528_ 0-- 7/6/09 08:03:25 7/6/09 13:00:58 4.00
CMD2_ 0016-1433GA.clustersOccur-MYH3.clustersOccur_ 341_ 712495_ 713069_ 1-- 7/5/09 00:36:40 7/5/09 02:57:48 1.94
CMD2_ 0016-CUL3A.clustersOccur-XPO2A.clustersOccur_ 0_ 0-- 7/4/09 20:24:07 7/5/09 01:53:40 4.01
CMD2_ 0014-GRP75A.clustersOccur-ITB5A.clustersOccur_ 1810_ 231758_ 231782_ 0-- 7/3/09 06:32:14 7/3/09 12:27:49 2.33
CMD2_ 0015-HSP7CA.clustersOccur-NALDLA.clustersOccur_ 34_ 0-- 7/2/09 04:37:16 7/2/09 09:32:11 4.00
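A paste like the one above can be tallied mechanically into the per-status totals wished for earlier in the thread. A rough sketch, assuming each row's last field is the hour count and the token ending in "--" marks the result copy number:

```python
# Tally a pasted Pending Validation list: total hours and result
# counts, split by copy number (the "0--" / "1--" token).
# Row layout is assumed from the paste above: whitespace-separated
# fields, the copy marker ends in "--", the last field is hours.

from collections import Counter

def tally(rows):
    hours = Counter()
    counts = Counter()
    for row in rows:
        fields = row.split()
        if not fields:
            continue
        copy = next((f for f in fields if f.endswith("--")), "?")
        hours[copy] += float(fields[-1])  # e.g. "2.40"
        counts[copy] += 1
    return hours, counts

sample = [
    "CMD2_ 0017-MYH3.clustersOccur-SYTCA.clustersOccur_ 369_ 330174_ 330476_ 0-- 7/10/09 23:17:27 7/11/09 03:05:30 2.40",
    "CMD2_ 0016-HSP7CA.clustersOccur-TPM2A.clustersOccur_ 67_ 383928_ 384485_ 1-- 7/10/09 20:03:26 7/10/09 21:54:19 0.88",
]
h, c = tally(sample)
print(dict(h), dict(c))  # {'0--': 2.4, '1--': 0.88} {'0--': 1, '1--': 1}
```

Run over the full list, this gives exactly the "total hours in PV Jail" counter the status page lacks.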



From what I've continually observed, there appears to be a fairly high incidence of WUs that, for whatever reason, are not returned by their "Due Time".

A reasonable (although not the only) explanation is that someone requests a high buffer of WUs and then, for any number of reasons, is not capable of returning them within the allotted time. Power failures, system maintenance, or extended periods of using the machine for other activities, to speculate on just a few possible contributors.

There are a number of ways to address this; one was a queue for fast responders. Perhaps a better resolution is for WCG to reduce the maximum additional work buffer anyone can have to 24 hours or less, period.

We can be collaborators and agree to disagree, that's not an issue. We're still getting the job done.

But there are likely better ways for the additional buffer to be managed than letting users attempt to tweak these parameters themselves.

Surely WCG knows when it is likely to perform scheduled service, so it could load up your WU task queue with enough work to carry on without needing to contact the server during that period.

As for unscheduled downtime: again, having more than a 24-hour buffer appears to be simply absurd.

Just because it's "Always been this way" isn't a good enough reason not to consider changing it.

Where would we be with air flight if we had stuck with the accomplishments of the Wright Brothers, Louis Bleriot, Henri Farman, or even Clarence "Kelly" Johnson (of SR-71 Blackbird fame)?
----------------------------------------
[Edit 3 times, last edit by Former Member at Jul 11, 2009 4:04:05 AM]
[Jul 11, 2009 3:41:09 AM]
JmBoullier
Former Community Advisor
Normandy - France
Joined: Jan 26, 2007
Post Count: 3716
Status: Offline
Re: Pending Validated tasks held due Waiting to be Sent to Wingman

Barney,
Looking for the detailed post that knreed took the time to write in answer to your concerns, I realize that it is not in this thread but in yours, the well-named "Let's discuss this one more time", where many people have spent time explaining why one more day in PV has no meaningful impact on WCG's global efficiency.

That means that you are currently hijacking another thread to reactivate an older discussion, which some might consider trolling.

If you really have new elements (not based on speculation) to bring to that discussion, please go back to your thread and stop it here.

I really have nothing against academic discussions, but when the same wrong assumptions are pushed again and again I tend to feel a little bored.

Thank you for understanding. Jean.
----------------------------------------
Team--> Decrypthon -->Statistics/Join -->Thread
[Jul 11, 2009 4:59:43 AM]