Sekerob
Ace Cruncher
Joined: Jul 24, 2005
Post Count: 20043
Re: High Priority Lockout

I think it has been asked before. "For whatever reason", in effect such a function loads more repair/priority work onto fewer computers, including those attached to multiple grids that just happen to run with small caches. Is that OK?
[Sep 14, 2010 2:32:01 PM]
Dena
Cruncher
USA
Joined: Sep 9, 2006
Post Count: 13
Re: High Priority Lockout


Quote:
The reason for the use of reliable work is that we receive data in batches from the researchers. For many reasons, the batch is the fundamental unit of data transfer between us and them. Within each batch, the strong majority of workunits finish quite quickly, within 2-3 days of being sent out. However, a few stragglers prevent the batch as a whole from being completed. Without the 'reliable' mechanism in place, these stragglers were taking weeks to complete, delaying the completion of whole batches. Depending on the particular project, having more batches in progress at the same time consumes significantly more storage and grows the database; both have caused us issues on the backend in the past. With the reliable mechanism in place, batches finish in about 1.5x the deadline; without it, they were taking 3-4x the deadline.
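Those turnaround ratios follow from max-of-many statistics: a batch finishes only when its slowest unit does. Here is a minimal sketch of that effect (my own illustration, not WCG's scheduler; the deadline, runtimes, and distribution are all invented to make the tail visible):

    # Toy model: per-workunit completion times with an exponential spread.
    # A unit that misses DEADLINE is reissued as a repair unit, either to a
    # fast "reliable" host or to an arbitrary host that may straggle again.
    import random

    DEADLINE = 10.0            # days allowed per workunit (hypothetical)
    RELIABLE_TURNAROUND = 1.0  # days a fast reliable host needs for a repair

    def batch_turnaround(n_units, use_reliable):
        random.seed(1)  # same draws for both runs, so the comparison is fair
        finish = []
        for _ in range(n_units):
            t = random.expovariate(1 / 2.0)  # most units done in ~2-3 days
            if t > DEADLINE:                 # straggler: repair unit issued
                if use_reliable:
                    t = DEADLINE + RELIABLE_TURNAROUND
                else:
                    # repair goes to an arbitrary host; it may straggle too
                    t = DEADLINE + 3 * random.expovariate(1 / 2.0)
            finish.append(t)
        return max(finish)  # the batch is done only when its last unit is

    for flag in (False, True):
        ratio = batch_turnaround(5000, flag) / DEADLINE
        print(f"reliable={flag}: batch completes in {ratio:.1f} x deadline")

With these made-up constants the no-reliable run lands around 3x the deadline and the reliable run just over 1x, the same shape as the figures quoted above.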

I work as a real-time programmer who has to contend with data-flow problems, so I knew why priority processing was being triggered on repair work units before this was posted. I can agree with the idea of sending repair units to systems with short turnaround times. I only object to the short deadline, because it can cause other results to be late and force the creation of more repair units. In my case I receive so many repair units that they force my queue to run close to the deadline. Which is better: processing one repair unit a little slowly, or creating a bunch of repair units by giving repair units priority?
The fact that people figured out that a longer queue would avoid this problem indicates that others have hit it and worked around it. In distributed processing your best bet is to give important work to reliable processors, but you shouldn't try to take over their computers.
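That trade-off can be played out with a toy queue model (again my own sketch, not BOINC's actual scheduler; the runtimes and deadlines are made up to keep the arithmetic visible):

    # A single host's work queue: each entry is (runtime_days, deadline_days).
    # Count how many units finish past their deadline, i.e. how many NEW
    # repair units the server would have to create.
    def late_count(work):
        clock, late = 0.0, 0
        for runtime, deadline in work:
            clock += runtime
            if clock > deadline:
                late += 1
        return late

    queued  = [(1.0, d) for d in (2, 3, 4, 5, 6)]  # 1-day units, snug deadlines
    repairs = [(1.0, 3.0), (1.0, 3.0)]             # short-deadline repair units

    print("repairs jump the queue:", late_count(repairs + queued), "units late")
    print("repairs wait their turn:", late_count(queued + repairs), "units late")

With these numbers, letting the two repairs run late creates two re-issues, while letting them preempt pushes all five queued units late and creates five. How the balance falls obviously depends on how snug the queued deadlines are, which is exactly the objection above.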
[Sep 14, 2010 6:16:46 PM]