 

This topic has been viewed 2897 times and has 19 replies.
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Someone may make more sense out of this than I can.

I was just playing around with the global statistics and thought I would look at the average CPU time per result. The figure shows both the daily values and the cumulative average.

[Figure: average CPU time per result, daily values and cumulative average]

The mix of computations obviously has a lot to do with this. My first couple of results returned took between 4 and 30 CPU hours to complete.
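For anyone who wants to reproduce the figure, here is a minimal sketch of the calculation in Python; the file name and column names below are assumptions for illustration, not an actual WCG stats export.

    # Minimal sketch: daily and cumulative average CPU time per result.
    # "wcg_global_stats.csv" and its columns are hypothetical.
    import csv

    daily_avg = []                     # average CPU hours per result, per day
    cum_avg = []                       # running (cumulative) average
    cum_hours = cum_results = 0.0

    with open("wcg_global_stats.csv") as f:
        for row in csv.DictReader(f):
            hours = float(row["total_cpu_hours"])
            results = float(row["results_returned"])
            daily_avg.append(hours / results)
            cum_hours += hours
            cum_results += results
            cum_avg.append(cum_hours / cum_results)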
----------------------------------------
[Edited 1 time, last edit by Former Member at Dec 6, 2006 4:37:07 AM]
[Oct 30, 2006 5:45:20 PM]
David Autumns
Ace Cruncher
UK
Joined: Nov 16, 2004
Post Count: 11062
Status: Offline
Re: Someone may make more sense out of this than I can.

It was bad in the early days, with frequent attacks of the monster work units; 100+ hour runs were regular occurrences.

The IBM crew have got better at slicing and dicing the WUs with experience.

Now, with the shorter HDC units interspersed with the longer FAAH ones, the spread of average time per work unit has narrowed, as your graph beautifully describes.

It certainly adds some meat to Sek's suggestion on the contentious points front.

Thanks geopsychic

Dave
----------------------------------------

[Oct 30, 2006 7:41:37 PM]
David Autumns
Ace Cruncher
UK
Joined: Nov 16, 2004
Post Count: 11062
Status: Offline
Re: Someone may make more sense out of this than I can.

It also confirms Sek's other graph, which suggests that recently the WUs have got a bit longer to crunch.
----------------------------------------

[Oct 30, 2006 7:43:04 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Someone may make more sense out of this than I can.

Just for the heck of it, I will update this figure irregularly.

Happy New Year all!
[Dec 30, 2006 6:03:54 PM]
Sgt.Joe
Ace Cruncher
USA
Joined: Jul 4, 2006
Post Count: 7849
Status: Offline
Re: Someone may make more sense out of this than I can.

geopsychic

It is an interesting graph, but I think that without further background information one may draw erroneous conclusions. Mr. Autumns has, I'm sure, the correct explanation for the initial peak: there were far fewer computers on the project at that time, and with workunits in the 100+ hour range, the high average makes sense. As they got better at paring down the size of the work units, it makes sense that the graph would flatten out.
There are a number of factors in play which could affect the outcome of the average time:
1. The relative number of long workunits vs. short ones (FA@H vs. GC)
2. The ratio of speedy computers to slower ones
3. The completion of projects and the addition of projects over time.
4. The targeting of various projects to particular kinds of machines.

For instance, look at the stats for RCTC Grid and Xtreme Systems: Xtreme Systems is probably running all high-end fast machines, whereas RCTC Grid is probably running stock P4s. Xtreme Systems turns in many more results on less run time. If you knew the mix of WUs was identical for each, you would know Xtreme Systems had faster machines. However, if they were only specifying GC units and RCTC Grid were getting a mix of all the projects, the results-returned figures could also reflect this disparity.
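To make factor 1 concrete: the grid-wide average is just the mix-weighted mean of the per-project times, so shifting the mix moves the average even if no machine changes speed. The fractions and hours below are invented purely for illustration.

    # Toy numbers only: how the project mix alone moves the grid-wide
    # average CPU time per result.
    def grid_average(mix):
        """mix maps project -> (fraction of results, avg CPU hours/result)."""
        return sum(frac * hours for frac, hours in mix.values())

    early = {"FA@H": (0.9, 30.0), "GC": (0.1, 3.0)}   # long units dominate
    later = {"FA@H": (0.4, 30.0), "GC": (0.6, 3.0)}   # short units dominate

    print(grid_average(early))   # 27.3 hours/result
    print(grid_average(later))   # 13.8 hours/result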

Also, with the advent of GC and its lower requirements, I was able to put several machines into use which would otherwise have been idle. These are all lower-end PIIs and PIIIs with neither the memory nor the speed to attempt any other current project on BOINC. When I look at the results from my slowest machine under workunit status, I see that it is about 6 times slower than the fastest result, i.e. 12 hours vs. 2 hours. This addition of lower-end machines, if large enough, could have had the effect of raising the average daily time.

Gradually, as people upgrade their systems and retire older machines, the average daily time per result should decrease, provided the average size of the WUs remains about the same as it is currently.

It would be interesting to know the percentage of machines at various benchmark strata. If these figures were known, an efficiency coefficient could be calculated. If the mix of WU sizes remained constant, this would give an indication of the efficiency of the grid as a whole.
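A minimal sketch of what such an efficiency coefficient could look like, assuming the benchmark strata and machine fractions were published; every number below is invented.

    # Hypothetical benchmark strata: fraction of machines and speed
    # relative to a baseline (here, a stock P4 = 1.0).
    strata = {
        "PII/PIII class": (0.20, 0.3),
        "P4 class":       (0.55, 1.0),
        "C2D class":      (0.25, 2.0),
    }

    # Efficiency coefficient: grid throughput relative to an all-P4 grid.
    coefficient = sum(frac * speed for frac, speed in strata.values())
    print(coefficient)   # 0.20*0.3 + 0.55*1.0 + 0.25*2.0 = 1.11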

I am sure that someone with a statistical background could come up with all sorts of different ways to measure the work and would be able to give an explanation of what the implications for those figures might be.

You have provided food for more thought. Thanks

Cheers

Sgt.Joe

Minnesota Crunchers
----------------------------------------
Sgt. Joe
*Minnesota Crunchers*
[Dec 30, 2006 9:53:46 PM]
Sekerob
Ace Cruncher
Joined: Jul 24, 2005
Post Count: 20043
Status: Offline
Re: Someone may make more sense out of this than I can.

The explanation is simple. The average WU at the beginning of WCG was shorter than recently. Now, with the equally shorter GC and HCDM, the average time per WU is again decreasing. Add to that the ever-increasing component of fast(er) machines cited (or was it sighted ;>) by Sgt. Joe. E.g. my old P4 takes 7.5 to 8.5 hours on an FA@H versus the 4 to 6 hours average on my new C2D, and one can predict the ratio of WUs per CPU year to increase.

NB: FA@H has been by far the longest-running project, demanding the bulk of the total work capacity. With 39,000 out of almost 70,000 CPU years, its weight is felt.
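A quick back-of-envelope check of that prediction in Python, taking the midpoints of the timings quoted above:

    # Rough WUs-per-CPU-year comparison from the quoted FA@H timings.
    HOURS_PER_YEAR = 24 * 365

    p4_hours = (7.5 + 8.5) / 2         # midpoint of the old P4's range
    c2d_hours = (4.0 + 6.0) / 2        # midpoint of the new C2D's range

    print(HOURS_PER_YEAR / p4_hours)   # ~1095 WUs per CPU year
    print(HOURS_PER_YEAR / c2d_hours)  # ~1752 WUs per CPU year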
----------------------------------------
WCG Global & Research > Make Proposal Help: Start Here!
Please help to make the Forums an enjoyable experience for All!
----------------------------------------
[Edited 2 times, last edit by Sekerob at Dec 31, 2006 10:16:47 AM]
[Dec 31, 2006 10:12:34 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Someone may make more sense out of this than I can.

Hey
Did a small bit of deriving from the graph (thank you for the graph!), fumbled some coordinates, and did a snake's-butt regression analysis. My hand-to-screen skills being what they are, here goes a brief synopsis...
Robust marks show between 7/05 and the present, probably when the UD grid took a bad dump and WCG picked up crunchers. The lows, delineated by x's, are WCG downtimes; note the mid-month fall in 12/06 when BOINC was down. Any increases following these downs are just fair-weather sunshine, but the rise in late '06 is indicative of:
1) more projects
2) more CPUs online
3) more networked CPUs online
4) the advent of significant numbers of dual-core CPUs online, expanding the range of results in the domain of a narrower list of projects
It just might be that another marker for CPU time, maybe as a project-particular factor, can be considered for scoring or cutting pieces of pie! (A sketch of the fit follows below.)
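For the curious, a least-squares fit of the kind described here can be done by hand; the coordinates below are placeholders for values read off the graph, not the actual data.

    # Ordinary least-squares line fit; the (month, hours/result) points
    # are placeholders for hand-digitized graph coordinates.
    points = [(0, 10.0), (6, 8.5), (12, 7.8), (18, 8.9)]

    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)

    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    print(slope, intercept)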
[Dec 31, 2006 8:43:13 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Someone may make more sense out of this than I can.

Hello 7cures,
One factor to keep in mind is the throttle for applications running on the UD client. In late June 2006 the default throttle setting (http://www.worldcommunitygrid.org/forums/wcg/viewthread?thread=2683) was changed from 100% to 60%, which immediately increased the hours per result.
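Rough arithmetic on what that change alone would do, assuming the reported hours scale inversely with the throttle setting (the 10-hour baseline is illustrative only):

    # If a result needed 10 wall-clock hours at a 100% throttle, the same
    # result at 60% would take roughly 100/60 as long.
    hours_at_full = 10.0
    throttle_old, throttle_new = 1.00, 0.60

    hours_at_60 = hours_at_full * throttle_old / throttle_new
    print(hours_at_60)   # ~16.7 hours: a ~67% jump in hours per result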

Lawrence
[Dec 31, 2006 10:25:20 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Someone may make more sense out of this than I can.

Just to cloud/clarify the discussion, here are the CPU time, results, and points graphs.

There is a strong weekday vs. weekend modulation; no real surprise there, except possibly the amplitude.
[Figures: daily CPU time, results returned, and points]
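Measuring that modulation is straightforward once you have the daily series; the dates and values below are placeholders, not actual stats.

    # Group a daily results series by day of week to expose the
    # weekday/weekend modulation. The series values are placeholders.
    from collections import defaultdict
    from datetime import date

    series = {
        date(2006, 12, 25): 90_000,   # Monday
        date(2006, 12, 26): 95_000,   # Tuesday
        date(2006, 12, 30): 70_000,   # Saturday
        date(2006, 12, 31): 68_000,   # Sunday
    }

    by_weekday = defaultdict(list)
    for day, results in series.items():
        by_weekday[day.weekday()].append(results)   # 0 = Monday

    for wd, vals in sorted(by_weekday.items()):
        print(wd, sum(vals) / len(vals))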
[Jan 2, 2007 1:43:42 AM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: Someone may make more sense out of this than I can.

Heya
Nice graph, geopsychic! Thanks, Lawrence, for pointing that out. It's staring at me every time I open my agent. Why was the throttle reduced? To make the agent/grid user-friendly? It is a great selling point! I do seem to remember some problems before that happened. :)
[Jan 5, 2007 2:39:50 AM]