World Community Grid Forums
Thread Status: Locked Total posts in this thread: 70
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Slapshot, I can only give estimates, as I haven't the same CPU as you available to me.
On Windows those WUs would possibly take 8-10 hours to complete on a 3 GHz machine, scoring 60-65 BOINC points; multiply by 7 for WCG points. So you can see that you are completing WUs quicker yet being substantially underpaid... Is Linux the third world? As you can see from my previous post, an AMD X2 5000+ completes WUs on Linux in approx 3.4 hours. |
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
The problem is with the Windows benchmark. It has been compiled in such a way that part of the benchmark has been optimised away, and it provides an inflated score. This has been an issue for some years, and I am at a loss to explain why it hasn't been addressed properly before now.
It has been reported to the BOINC devs on at least three occasions. They finally seem to be moving on it. |
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
"We are in this for the long haul. Consider both over a period of time with lots of different projects. And remember, I am hypothesizing that benchmarks for each type of core will vary from compiler to compiler in a way that is not necessarily closely related to the true optimizing ability of that compiler."

Run a Beta project over a series of different machines/OSs with upcoming WUs... There are plenty who will volunteer. They can be paid at a later stage; it works on other projects. This gives you an average per WU, or whatever unit is scientifically better. Go over to Rosetta, they'll be glad to tell you how they worked it all out. I'm really not into the science side. What I'm trying to get across at present is that what we have is very discriminatory and unfair, and I feel it is holding back the project in a lot of ways. If I can crunch a WU in half the time yet choose not to because of points, something is wrong. It is to the benefit of the project to sort this out, and quickly; a quick fix for Linux has been suggested...

Unfortunately, the problem with the difference in benchmarking is unlikely to be addressed adequately by BOINC, even if they were inclined to do so. Casual observation of this issue on various projects and forums reveals that it is not consistent between projects. As has been previously stated, some do not have the problem at all and some have as much as an 800% difference. It seems to depend on the type of work undertaken and which processor resources are most utilized by that project. WCG has a consistent difference of 55%. This would need to be corrected by the formula proposed by Sek, "25 x 7 / 0.55 = 318 WCG points", i.e. BOINC points x 1.818 = corrected BOINC points, then apply the 7x multiplier for WCG points as is the norm.

The first part of this would need to be done by WCG, before the points were reported to BOINC statistics, in order to create a fair distribution of points between the different OSs for those who are interested in overall, cross-project BOINC scores. Cheers. ozylynx |
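Sek's correction can be sketched in a few lines. This is a hypothetical snippet, assuming only the 55% ratio and the 7x multiplier quoted above; the function and constant names are illustrative, not anything WCG actually uses:

```python
# Hypothetical sketch of the proposed correction (the 0.55 ratio and the 7x
# WCG multiplier come from the thread; all names here are illustrative).

LINUX_TO_WINDOWS_RATIO = 0.55   # Linux benchmarks score ~55% of Windows for equal work
WCG_MULTIPLIER = 7              # WCG points = BOINC points x 7, as is the norm

def corrected_wcg_points(linux_boinc_points: float) -> float:
    """Scale a Linux claim up to Windows parity, then apply the WCG multiplier."""
    return linux_boinc_points / LINUX_TO_WINDOWS_RATIO * WCG_MULTIPLIER

# Sek's worked example: 25 BOINC points -> 25 x 7 / 0.55, roughly 318 WCG points
print(round(corrected_wcg_points(25)))
```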
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
"The problem is with the Windows benchmark"

Seeing as we have to accept that Windows is the average system OS, we take that benchmark as the norm and bring other OSs in line with it. I have also emailed Mr. Anderson. |
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Didactylos follows the BOINC boards much more closely than I do. In another thread, he has just reminded me of the recent BOINC conference report on points: http://boinc.berkeley.edu/ws_06/credit.txt
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
carl.h, the benchmark doesn't use an arbitrary scale. It attempts to define a processor's capabilities as the number of operations it can perform per second. BOINC uses the well-known and fairly well-respected Dhrystone and Whetstone benchmarks.
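For reference, classic BOINC turns those two benchmark scores into a credit claim roughly like this. This is a simplified sketch, not the exact BOINC source; it assumes the reference rate of 100 credits per day for a host scoring 1000 MFLOPS Whetstone and 1000 MIPS Dhrystone:

```python
# Simplified sketch of classic benchmark-based credit claiming (not the exact
# BOINC code; constants reflect the assumed 100-credits/day reference host).

def claimed_credit(cpu_seconds: float, whetstone_mflops: float, dhrystone_mips: float) -> float:
    """Average the two benchmark scores and scale by CPU time spent."""
    days = cpu_seconds / 86400.0
    mean_benchmark = (whetstone_mflops + dhrystone_mips) / 2.0
    return days * (mean_benchmark / 1000.0) * 100.0

# The same 8-hour result claims more credit when the Dhrystone score is inflated,
# which is exactly the Windows-vs-Linux disparity being discussed:
honest = claimed_credit(8 * 3600, 1500.0, 2000.0)
inflated = claimed_credit(8 * 3600, 1500.0, 3000.0)
print(honest, inflated)
```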
Mr Anderson (I feel like I'm in the Matrix when I say that) has used your email as the basis for a request to the BOINC alpha testers to gather statistics on the issue. |
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Didactylos, thanks for that feedback, it means a lot!
As I say, I'm not up on the science end; I spend my life chasing MS errors and hardware errors... isn't that enough? I hope this has gone some way to at least help the process of finding some type of solution, and yes, I agree perfection is not about to happen. |
Sekerob
Ace Cruncher Joined: Jul 24, 2005 Post Count: 20043 Status: Offline |
Lawrence,
I started to run Rosetta a few months ago at 5%, just to see how it works against the theory. I've done 50 or so and it works very well. Each structure/decoy has a given value... the more structures one works through, the more credit. You can set a WU to run 2, 4, 6, 8, 24 hours or more. After the chosen time it will stop and send the result with however many decoys completed. Some WUs have been run tens of thousands of times from different seeds by different crunchers. They are thus extremely well known, so the prediction of how much credit should be awarded can be accurate.

Now on FAAH or HDC, I care to think that model may not work; I'm not technically informed enough to say if it is comparable. That said, FAAH has now been crunched 29 million times. There must be statistical data derivable from that to come to an award system. But per the table I put somewhere, we have single-structure WUs, 10-structure, even 26 and above, yet all seem to go in 7.5 to 8 hours on my machine... thus why not just give it the K.I.S.S. treatment and make them e.g. 60 BOINC credits, and whilst at it 25 for HDC. Period... kills all discussion on points, keeping all camps happy, ignoring the truth police. Oh, but only for 'valid' results, as I've now seen too many with excessively high claims that can, without fail, be predicted to be invalid. ciao
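The two award models in play here can be contrasted in a toy sketch. Every number below is an illustrative assumption, except the 60 (FAAH) and 25 (HDC) flat figures suggested above:

```python
# Toy contrast of the two schemes discussed: Rosetta-style per-decoy credit
# vs. a flat per-valid-result award. The per-decoy rate is an assumption;
# only the 60/25 flat figures come from the suggestion above.

CREDIT_PER_DECOY = 1.5                      # hypothetically calibrated from many prior runs
FLAT_AWARD = {"FAAH": 60.0, "HDC": 25.0}    # the K.I.S.S. proposal

def rosetta_style(decoys_completed: int) -> float:
    """Credit scales with verified work done, not with benchmark scores."""
    return decoys_completed * CREDIT_PER_DECOY

def flat(project: str, valid: bool) -> float:
    """Fixed award per valid result; invalid results earn nothing."""
    return FLAT_AWARD[project] if valid else 0.0

print(rosetta_style(40), flat("FAAH", True), flat("HDC", False))
```

Either way, the benchmark drops out of the formula entirely, which is the point.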
WCG
Please help to make the Forums an enjoyable experience for All! [Edit 1 times, last edit by Sekerob at Oct 30, 2006 11:23:29 AM] |
David Autumns
Ace Cruncher UK Joined: Nov 16, 2004 Post Count: 11062 Status: Offline |
Now, on a first pass that seems like the most sane, reasonable, workable suggestion we've had so far: a statistical average of work units on each project for the points conversion.
Faster machines, more work units, more points: ticks all the boxes. (Man, you can hear the "but" stampeding to the post.) I'm liking this idea, it's a good 'un, Sek. (It's almost upon us, quick, jump out of the way!) It's certainly the best so far. (Sorry, it's just arrived.) But...

Could we implement a scheme where points could go up as well as down, based on a statistical average for each project that would converge to an ever more accurate point? |
Movieman
Veteran Cruncher Joined: Sep 9, 2006 Post Count: 1042 Status: Offline |
Hello carl.h,

Rosetta now keeps to a pretty good average re BOINC points... A given machine will earn X an hour.

"I stopped reading most of the points arguments at Rosetta@home when they got so ferocious. How did it end up? I thought some people left the project because they felt so strongly against the proposed change? Lawrence

Added: Are we talking projects as in the BOINC sphere or the projects in WCG?

We are in this for the long haul. Consider both over a period of time with lots of different projects. And remember, I am hypothesizing that benchmarks for each type of core will vary from compiler to compiler in a way that is not necessarily closely related to the true optimizing ability of that compiler."

Hello Lawrence,

Contrary to what you may have heard, XS left Rosetta in force (some stayed) just before the change in the credit structure. This was an issue between us and David Baker. So did others from Teddies, Free DC and other teams, as well as individuals not affiliated with any teams. From what I saw after we left, the first weekend Rosetta had issues with bad WUs but sorted that out, and to the best of my understanding has a pretty reliable system in place that is also pretty fair to Linux users, but isn't to any Mac user. Let's face it, any system that uses an artificial benchmark is never going to be "fair", especially when it fails to make use of the capabilities of 70% (a guess) of today's computers. I'd love to see a system that is work-based and fair to all, no matter the platform or OS. I know, very easy to say, much more difficult to do.

That said, and this is my personal opinion only as I do not speak for XS, I'm here to stay whether you change or not. It's become a matter of principle to me now. So as much as I'd love to see some changes, it is not at the top of my list and you won't hear me passionately speaking on the topic. You have my best wishes no matter which direction you go.

In 2-3 weeks we'll be bringing in a few "surprises" that you will enjoy. Some of which will rank up there with the fastest air-cooled Intel-based systems on the planet. When they come online I'll give you a yell with the specs. Good luck! |