World Community Grid Forums
Thread Status: Locked | Total posts in this thread: 277
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
That statement is not legally binding in any country I know of. The wording of an agreement is often used to thwart legal responsibilities, but ultimately it must conform to the law.

WCG policy was to award credit in a way that we have known for a long time. This policy has changed, whether you like it or not; it is a fact.

pol·i·cy /ˈpɒləsi/ noun, plural -cies.
1. a definite course of action adopted for the sake of expediency, facility, etc.: We have a new company policy.
2. a course of action adopted and pursued by a government, ruler, political party, etc.: our nation's foreign policy.
3. action or procedure conforming to or considered with reference to prudence or expediency: It was good policy to consent.

Now you may like to quote an agreement as policy, but as the definition above clearly shows, this is not legally so. As I stated before, if you are unable to follow the discussion, do not join it. A policy of punishing has been implemented. THAT IS A FACT; therefore WCG policy has changed.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> rob725, you are not in possession of all the facts. By analysing all the claimed credit, WCG were able to see the exact extent of the problem. I believe one of the staff explained in detail how this impacts granted credit. I refer you to that discussion, since it was totally comprehensive.

I'm not sure where to look. I was referring to knreed's post explaining that 800 WUs were sent out and 5% had unreasonably high claims. I freely admit that my stats may be low, which is why I used his 5% figure to calculate the actual effect on quorum scores as opposed to claimed scores. Is there another discussion that addresses this?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Didy:

Has there been any progress on the older computer benchmarking problem yet? I thought it might have been RAM or FSB dependent, but I can't see evidence to support that theory here. Nor can I see CPU architecture being the problem, as my Celeron and PIII machines display a near straight-line deviation, seemingly dependent on CPU MHz rating only. 'Tis a puzzlement.

Has anybody worked out what the average and mean points for the 800-unit beta test were? This could give us an idea how much actual deviation we are looking at. What is a normal score? I'd guess between 62 and 68.

{edit} Here's the latest:

Result | Status | Sent | Returned | CPU hours | Claimed / Granted
faah0890_bdb174_mx4phv_07 | Valid | 11/09/2006 13:43:22 | 11/09/2006 23:27:37 | 5.22 | 45 / 56
faah0890_bdb174_mx4phv_07 | Invalid | 11/09/2006 13:32:28 | 11/11/2006 06:35:05 | 38.67 | 117 / 28
faah0890_bdb174_mx4phv_07 | Valid | 11/09/2006 13:23:03 | 11/10/2006 05:48:07 | 7.61 | 66 / 56

My poor old 598 MHz comp got caught in the trap after toiling for over 38 hours. I get the feeling that if the 45-point claim had been 52, we'd all have been happy with 59 points. It's becoming more important now to deal with both high and low outliers from legitimate clients. 45 really isn't very low. I'm surprised this doesn't happen more often. {end edit}

Cheers,
ozylynx

[Edit 1 times, last edit by Former Member at Nov 11, 2006 5:14:19 PM]
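The claimed/granted pairs in those results hint at how quorum-based granting smooths outliers. As a hedged sketch (the actual WCG/BOINC validator logic may differ, and the rule below is only one plausible reading of the numbers), averaging the claims of the results that validated reproduces the granted figure above:

```python
def granted_credit(claims, valid):
    """Hypothetical quorum rule: grant every valid result the average of
    the claims from results that validated, so an outlier claim that
    fails validation (like the 117-point one) doesn't inflate credit."""
    kept = [claim for claim, ok in zip(claims, valid) if ok]
    return sum(kept) / len(kept)

# The three results quoted above: 45 and 66 validated, 117 was marked invalid.
print(granted_credit([45, 117, 66], [True, False, True]))  # 55.5, near the 56 granted
```

Under such a rule the granted score is pulled toward the middle of the legitimate claims, which is why a single low (45) or high (117) claim shifts everyone's grant only modestly.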
zombie67 [MM]
Senior Cruncher | USA | Joined: May 26, 2006 | Post Count: 228 | Status: Offline
Optimized clients required for other projects? Which clients and projects would that be? I'm totally naive on that one.

Feel free to educate yourself. Google is your friend.

If you make a claim, provide the evidence. I have no interest in finding it for you.

It's not for me. I already know.

In a nutshell: because Linux machines have lower benchmarks for the *same hardware*, it is required to use a BOINC manager that increases the benchmark to make it even with the Windows machines.
zombie67 [MM]
Senior Cruncher | USA | Joined: May 26, 2006 | Post Count: 228 | Status: Offline
The issue is not that the policy changed. The issue is that insufficient implementation time was given (0 days). Longer implementation time would allow people to:

1) have an opportunity to see the announcement. We don't all live on the internet 24/7. A week would be reasonable, IMO; and

2) have an opportunity to make the necessary changes to comply with the policy changes. Some of us have many machines (hundreds), and/or machines that are not immediately reachable (at multiple sites), and/or other obligations in our lives that don't allow us to implement in 0 days.

Policy change is a fact of life. But the goal should be to implement policy change with as little disruption as possible. Clearly, that wasn't done in this case.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> In a nutshell: because Linux machines have lower benchmarks for the *same hardware*, it is required to use a BOINC manager that increases the benchmark to make it even with the Windows machines.

What you did is an unsupported option, not a requirement. This is a prime example of 'skewing' points so that problems are difficult to identify and hence difficult to fix. If everyone used the client in its standard form, this type of thing could be seen easily and fixed; this could have been done years ago. Your methods muddy the waters.

BTW, this problem was brought to the attention of WCG, tested as valid by them, and WCG contacted BOINC to correct it in BOINC 5.8. That information is contained in the first post of this forum.

The more obvious answer was given to Dagorath when he made the same claim, but its content was removed as being offensive, so I won't repeat it here.

Cheers,
ozylynx
zombie67 [MM]
Senior Cruncher | USA | Joined: May 26, 2006 | Post Count: 228 | Status: Offline
> What you did is an unsupported option, not a requirement.

? I don't recall ever saying I used an optimized BOINC client.

> This is a prime example of 'skewing' points so that problems are difficult to identify and hence difficult to fix. If everyone used the client in its standard form this type of thing could be seen easily and fixed. Your methods muddy the waters, this could have been done years ago.

Until a fix is made for the benchmark flaw, users must take appropriate action to keep things even. Several projects (SETI, Rosetta, SIMAP) have finally solved the problem and made the benchmark method obsolete.

The point of this part of the discussion is that there *is* a legitimate reason to use an optimized BOINC client. You said there wasn't and asked for proof. Now you have it.

But again, the change in policy is not the issue here. Projects have the right to do whatever they like. The problem is with implementation and the 0-day notification. People need time to 1) learn of the change in policy, and 2) implement changes to comply with the new policy.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hello zombie67,

I understand your points here. In fact there were 6 days' notice given. Not 7, but not 0 either. Please check the times and dates of the two forum announcements. The forum is the official vehicle for such things, and it is our responsibility to check these things out. This particular thread, containing 'copies' of the official news notice, was kept on the front page of the 'recent threads' menu for the vast majority of those 6 days.

{Edit} Just saw your latest post. Please type "KISS method" into the search box and read my thread on benchmarking. Note the dates too.

To save another post, further to comments on slow machine benchmarks... I might be on to it!!

The formula used for benchmark calculation gives equal weight to Gflops and Giops. The science, I think, is more dependent on floating-point than integer operations. On my machines the Gflops fairly closely represent the clock speed of the CPU as a proportion; the integer ops do not. So a legitimate benchmark is obtained that gives too much weight to the integer ops, which are not as critical to the completion of the WU. Thus the older, slower machine takes longer than its indicated benchmark suggests to complete a WU, and claims more points. I have found that a benchmark calculation using Gflops scores only will in fact produce a proportion which is much more accurate.

Hope this helps.

Cheers,
ozylynx

[Edit 1 times, last edit by Former Member at Nov 11, 2006 6:36:15 PM]
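The effect described here can be illustrated with a small sketch. The credit scale below is only an approximation of the classic BOINC "cobblestone" scheme (roughly 100 credits per benchmark-GFLOP-day), and the benchmark figures for the two hosts are invented for illustration; the real client's constants differ:

```python
SECONDS_PER_DAY = 86400
CREDIT_PER_GFLOP_DAY = 100  # rough classic "cobblestone" scale (assumption)

def claimed_credit(cpu_seconds, gflops, giops, flops_only=False):
    """Claimed credit from host benchmarks. The stock scheme averages the
    floating-point and integer benchmarks; ozylynx's suggestion weights
    floating point only."""
    rate = gflops if flops_only else (gflops + giops) / 2.0
    return cpu_seconds * rate * CREDIT_PER_GFLOP_DAY / SECONDS_PER_DAY

# Invented benchmark figures: an old 598 MHz machine whose integer score
# is high relative to its float score, and a newer box.
old = dict(gflops=0.4, giops=1.0)
new = dict(gflops=2.0, giops=2.4)

# Same work unit, roughly the CPU times quoted earlier (hours -> seconds).
print(claimed_credit(38.67 * 3600, **old))                   # averaged benchmarks
print(claimed_credit(7.61 * 3600, **new))
print(claimed_credit(38.67 * 3600, flops_only=True, **old))  # Gflops only
print(claimed_credit(7.61 * 3600, flops_only=True, **new))
```

With averaged benchmarks the slow host claims far more for the same work unit; weighting Gflops alone brings the two claims close together, which is the effect described above.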
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> The forum is the official vehicle for such things and it is our responsibility to check these things out.

How often, then, should a member be MADE to visit the forum?
staffann
Cruncher | Joined: Nov 19, 2005 | Post Count: 26 | Status: Offline
I have always thought the way points are awarded in most BOINC projects is flawed. The problem is thinking that one benchmark can reflect a computer's capabilities for any project. ozylynx gives one example of how the science application can differ from the benchmark routine.

If one wants a really fair points system, the benchmark should be individual for each science application and sent out by the science project. It could be the time to crunch a kind of "mini work-unit", thus reflecting the true speed of the science app. An alternative for some projects could be to ignore benchmarks altogether and do it the CPDN way. That could be a problem for projects where the computing time can vary a lot between work-units, though.

Optimised science applications are a problem in the context of awarding points (although they are good for the science; more work will be done). If benchmarks are unchanged, people with optimised science apps will claim lower points. That may affect people using a standard science app, leaving them with too few points. OTOH, people with optimised benchmarks (BOINC clients) will claim too many points if they are using a standard science app. Now, if I use an optimised science app in one project and not in another, there really is no solution with the current design of one generic benchmark.

I myself was for a while running WCG with an optimised BOINC client. That was because I started out with SETI and optimised science apps. I then started crunching CPDN, which was no problem since they ignore the benchmark results and just award points per trickle. Then, however, I also added WCG, and it became a problem. I kept the optimised BOINC client, though, because I crunched less WCG than SETI, and I reckoned that I did the least damage that way. Now I have left SETI (I think there are more important projects to work on) and am therefore running a standard BOINC client.

[Edit 1 times, last edit by staffann at Nov 11, 2006 7:10:13 PM]
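The "mini work-unit" idea can be sketched as follows. This is purely a hypothetical illustration: the kernel, the reference time, and the function names are all invented, and nothing like this ships in BOINC itself:

```python
import time

def mini_workunit():
    # Stand-in for a tiny slice of a project's real science code: a
    # floating-point-heavy loop, so the calibration reflects the kind
    # of operations the science app actually performs.
    x = 1.0001
    for _ in range(200_000):
        x = (x * x) % 1.7
    return x

def relative_speed(kernel, reference_seconds):
    """Time the mini work-unit on this host and express its speed
    relative to a reference machine (>1.0 means faster)."""
    start = time.perf_counter()
    kernel()
    elapsed = time.perf_counter() - start
    return reference_seconds / elapsed

# A project could then scale its per-WU reference credit by this factor
# instead of relying on one generic whole-client benchmark.
speed = relative_speed(mini_workunit, reference_seconds=0.05)
```

Because the calibration runs the project's own code, an optimised science app would automatically show up as a faster host rather than as a mismatch between benchmark and runtime.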