World Community Grid Forums
Thread Status: Active | Total posts in this thread: 15
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
> *sigh* You ignored everything I said, didn't you? Well, you're welcome to do that. Just don't be insulted if your "solution" is ignored.

I'm sorry that you feel that way. I simply stated my belief, based on a previous post by another respected member of this community, and am open to any and all suggestions. No, I won't be insulted or upset if this idea isn't used. It is merely a topic of discussion and, in fact, not my idea at all but a system already being used successfully elsewhere.

I do have an idea, though. It would seem, as Sekerob has stated, that the benchmarking system in its current form is a problem. What if the WUs were benchmarked instead of the computers? Two possible ways of doing this would be either in-house, where WCG tests a number of the new batch of WUs prior to distribution, thus creating a universal benchmark for that batch, or a number of "trusted" community members with stable, known machines do the testing, in a similar way to the Beta testing already used. Universal benchmarks could then be extrapolated from those results, and the appropriate points distributed to the testers after this phase is complete.

I am aware that this doesn't cover the anomaly situation, but excuse my naivete just once more: how often does this happen, and why? If, as I suspect, it is because of a significant energy variation within the experiment, then my understanding is that one is, most often, working on a highly significant experiment. I know that if this situation is explained to donors they will invariably accept an unusually slow WU. In fact, some degree of self-satisfaction is achieved from just that.

Cheers.
ozylynx
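To make the batch-benchmarking idea a little more concrete, here is a rough sketch of how per-batch baselining could work. The numbers, the median rule, and the function names are made-up assumptions for illustration only; this is not anything WCG or BOINC actually implements.

```python
# Rough sketch of per-batch work-unit baselining -- an illustration only,
# not how WCG or BOINC actually assigns credit.
#
# Idea: trusted reference hosts crunch a sample of the new batch before
# general distribution. Their runtimes define the batch's "universal
# benchmark", and each WU is then worth credit in proportion to its size
# as measured on those reference hosts, independent of the reporting host.

from statistics import median

# Assumed inputs (made-up numbers):
reference_runtimes = [4100, 3950, 4230, 4080]  # seconds per sample WU on reference hosts
CREDIT_PER_BASELINE_WU = 60.0                  # project-chosen credit for a baseline-sized WU

baseline_seconds = median(reference_runtimes)

def credit_for_wu(reference_seconds: float) -> float:
    """Credit for a WU, scaled by its size relative to the batch baseline,
    as measured on the reference hosts. Every host returning this WU is
    granted the same amount, regardless of its own benchmark."""
    return CREDIT_PER_BASELINE_WU * reference_seconds / baseline_seconds

# A WU the reference hosts measured at 1.5x the baseline size:
print(round(credit_for_wu(1.5 * baseline_seconds), 2))  # 90.0
```

The point of the scheme is that credit depends only on how big the WU measured on the reference machines, never on the reporting host's own benchmark.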
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Bear in mind that we also have variation between the different projects running on WCG. It is possible that in the future we may see a project with work units of a uniform size. Equally, we may see a project with even more variation than the current norm.
I have found that the most reassuring thing about a large work unit is that the member running it will get a lot of points for it in the end. If size is ignored, the danger is that unscrupulous members will abort work units that seem to be taking a long time.

There's a lot to take into account. Personally, I feel that instrumented, FLOP-estimated work units would be the ideal solution. Failing that, baselining batches is a possibility. As Sekerob says, if we stick with the benchmark system, improving the benchmark to give more consistent results is a priority. You may also be able to get a more stable benchmark and point claim by using a calibrated client.

I, too, await detailed discussion of the 2nd Pan-Galactic BOINC Thingy with bated breath.
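To illustrate the difference between benchmark-based claims and instrumented, FLOP-counted claims, here is a small sketch using the published cobblestone definition (one credit is 1/200 of a day on a 1 GFLOPS reference machine). The hosts, runtimes, and operation counts are invented for illustration; this is not BOINC's actual code.

```python
# Sketch contrasting benchmark-based credit claims with FLOP-counted claims.
# Uses the BOINC cobblestone definition (1 credit = 1/200 of a day on a
# 1 GFLOPS reference machine); everything else here is illustrative only.

SECONDS_PER_DAY = 86_400
REF_FLOPS = 1e9            # 1 GFLOPS reference machine
CREDITS_PER_REF_DAY = 200  # cobblestone definition

def benchmark_claim(cpu_seconds: float, host_benchmark_flops: float) -> float:
    """Classic-style claim: CPU time weighted by the host's own benchmark.
    Identical WUs can yield different claims on differently-benchmarked hosts."""
    gflops_days = cpu_seconds * host_benchmark_flops / (REF_FLOPS * SECONDS_PER_DAY)
    return CREDITS_PER_REF_DAY * gflops_days

def flop_counted_claim(counted_fpops: float) -> float:
    """Instrumented claim: the application counts (or estimates) its own
    floating-point operations, so every host claims the same for the same WU."""
    return CREDITS_PER_REF_DAY * counted_fpops / (REF_FLOPS * SECONDS_PER_DAY)

# The same hypothetical WU (~4e13 operations) on two different hosts:
print(benchmark_claim(20_000, 2.0e9))   # fast host, optimistic benchmark  (~92.6)
print(benchmark_claim(55_000, 0.7e9))   # slow host, pessimistic benchmark (~89.1)
print(flop_counted_claim(4.0e13))       # identical claim regardless of host (~92.6)
```

With counted operations, both hosts claim the same credit for the same WU; with benchmark-based claims, the spread comes entirely from how optimistic or pessimistic each host's benchmark happens to be.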
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Some info on the proceedings at the 2nd Annual Pangalactic BOINC Workshop here: http://boinc.berkeley.edu/ws_06.php
...I do believe that may be our very own Kevin Reed pictured in the back row, second from the left.
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
Not much there in terms of a short-term cross-project credit parity solution, but this did catch my eye in the notes... so much for the idle CPU ticks utilisation... they (whoever 'they' are) want more!

> Several goals were identified:
> 1) make the credit system more "fair": the claimed credit should be proportional to the work actually done. In particular, reduce the variation in claimed credit for identical work.
> 2) social engineering: use the credit system to steer people's PC purchasing decisions in a way that maximally benefits science projects, and to steer people towards the project that can best use their particular resource.
> 3) simplicity: make it easy for users to understand
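As a quick numeric illustration of goal (1), here is a toy example of how much benchmark-based claims for the same work unit can spread between hosts, and how a hypothetical quorum rule (drop the high and low claim, average the rest) damps it. The numbers and the rule are made up; real validators differ from project to project.

```python
# Toy look at workshop goal (1): claims for *identical* work should not vary
# much between hosts. Several hosts return the same WU with benchmark-based
# claims; a hypothetical quorum rule grants everyone the trimmed mean.

claims_for_same_wu = [71.2, 88.5, 94.0, 139.7, 90.3]   # claimed credit per host

def quorum_grant(claims: list[float]) -> float:
    """Drop the highest and lowest claim, grant the mean of the rest."""
    if len(claims) <= 2:
        return min(claims)
    trimmed = sorted(claims)[1:-1]
    return sum(trimmed) / len(trimmed)

spread = max(claims_for_same_wu) - min(claims_for_same_wu)
print(f"raw claims spread: {spread:.1f} credits")                      # 68.5
print(f"granted to everyone: {quorum_grant(claims_for_same_wu):.1f}")  # 90.9
```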
----------------------------------------
WCG
Please help to make the Forums an enjoyable experience for All!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Very interesting indeed.
It goes without further comment that I'm in total agreement with the points goals.

I do feel some concern about the concept of "steering" participants and like comments, as underlined. It is my experience that when projects forget that they are dealing with the generosity of donors and begin using tactical manipulation instead of advice and good communication skills... well, I've already left one such project. Nuff said.

It'll all be good if they remember that you catch more flies with honey than with vinegar.

Cheers.
ozylynx