World Community Grid Forums
Category: Support | Forum: Suggestions / Feedback | Thread: How about having a benchmark?
Thread Status: Active | Total posts in this thread: 11
OldChap
Veteran Cruncher, UK | Joined: Jun 5, 2009 | Post Count: 978
Thinking about WCG having the ability for users to download a benchmark/load test à la Prime95.
Something we could use to optimise machines until Points is fixed.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
Was there not one WCG project that had a little benchmark/reference job at the front? That would be the way: no project comparison, no rolling statistics, and automatic compensation if the device is hot and clocked down, or supercooled and running turbo. The reference job simply has x FLOPS. Say it runs in 100 seconds and is always worth 0.2 credit; then if a job takes 100,000 seconds, your credit would be 100000 / 100 * 0.2 = 200 credit. Of course, there would be a little dynamic CRC control on top of this for the cheaters.
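A minimal sketch of that scaling rule in Python, assuming the 100-second reference job worth 0.2 credit from the example above (the names are illustrative, not any actual WCG/BOINC code):

```python
# Hypothetical reference-job credit scaling, per the post above.
REFERENCE_SECONDS = 100   # nominal runtime of the reference job
REFERENCE_CREDIT = 0.2    # credit the reference job is always worth

def claimed_credit(job_seconds: float) -> float:
    """Scale credit linearly from the reference job's known cost."""
    return job_seconds / REFERENCE_SECONDS * REFERENCE_CREDIT

# The example from the post: a 100,000-second job claims 200 credit.
assert claimed_credit(100_000) == 200.0
```

Because the reference job runs on the same device under the same conditions as the real job, thermal throttling or turbo clocks affect both alike, which is where the automatic compensation comes from.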
noderaser
Senior Cruncher, United States | Joined: Jun 6, 2006 | Post Count: 297
BOINC has built-in CPU benchmarks that are run periodically, and are used by other projects to determine if a job will run in time. Not sure if WCG makes use of that, since they have such a kludgy custom system.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
It is BOINC that runs the benchmark, not WCG; it happens if you have been offline for a period, or at regular intervals....
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
From a site search for "benchmark":

"How are points calculated? Points are calculated in a two-step process which attempts to give a consistent number of points for similar amounts of research computation. First, the computational power/speed of the computer is determined by periodically running a benchmark calculation. Then, based on the central processing unit (CPU) time spent computing the research result for a work unit, the benchmark result is used to convert the time spent on a work unit into points. This adjusts the point value so that a slow computer or a fast computer would produce about the same number of points for calculating the research result for the same work unit. This value is the number of point credits 'claimed' by the client. More information about that formula is available here."
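For illustration, a rough sketch of that two-step scheme in Python. The constant roughly follows BOINC's cobblestone scale (about 200 credits per day for a 1 GFLOPS host), but treat it and the function names as assumptions rather than WCG's actual formula:

```python
POINTS_PER_GFLOP_DAY = 200.0  # assumed scaling constant (cobblestone-like)

def claimed_points(cpu_seconds: float, benchmark_gflops: float) -> float:
    """Step 2: convert CPU time into points using the host's benchmark
    score (step 1), so slow and fast hosts claim similar points for the
    same work unit."""
    gflop_done = cpu_seconds * benchmark_gflops        # GFLOPS * s = GFLOP
    return gflop_done / 86_400 * POINTS_PER_GFLOP_DAY  # scale to points

# A host twice as fast finishes in half the time yet claims the same points:
assert claimed_points(3_600, 2.0) == claimed_points(7_200, 1.0)
```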
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
noderaser wrote: "BOINC has built-in CPU benchmarks that are run periodically, and are used by other projects to determine if a job will run in time. Not sure if WCG makes use of that, since they have such a kludgy custom system."

It is because the agent benchmark is screwed with by the 'cheaters' that credit_new was implemented, by and large eliminating the benchmark's weight; it has become only a portion of the points system. Running a mini-test with each task would eliminate the need for the borked server-maintained credit system.

How to screw with the benchmark: stop the agent, go into client_state and change the fpops/mips values, then set skip_cpu_benchmarks in cc_config, and you have a manipulated benchmark value forever. It could not be any easier to cheat. What people do to get their points up is mind-boggling.

The benchmark values are still part of the computation of the buffered-work completion times, but since many projects have implemented dcf=1, actual computation times no longer adjust buffered work estimates; that is controlled by the server, which has great latency in adjusting to actual completion times. Basically, the current credit system and time projection adjust so slowly to variable run-times that you get a pittance of credit when very long jobs arrive, like the fahv ones we had, and then totally ballooned credit when they shorten again. Someone wrote about getting 1.9 million per day or something like that. Ahem. And for the coming ugm, it smells like we're in for another variable-runtime project: credit chaos encore. I had 6 that completed in a range from 2:43 to 8:22 hours, all of the 0025 batch.

PS: as for why the 'how are points calculated' policy is still being quoted when it has not been in use since WCG adopted credit_new, pass.
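To make the buffer-estimate point concrete, here is a rough sketch (not the client's actual code) of the estimate described above; the parameter names mirror BOINC's client_state.xml fields (rsc_fpops_est, p_fpops), but the arithmetic is a simplification:

```python
def estimated_runtime(rsc_fpops_est: float, p_fpops: float, dcf: float = 1.0) -> float:
    """Seconds the client expects a task to take: estimated FLOPs divided
    by the benchmarked speed, scaled by the duration correction factor."""
    return rsc_fpops_est / p_fpops * dcf

# Honest host: 1e13 FLOPs of work on a 2 GFLOPS benchmark -> 5000 s.
print(estimated_runtime(1e13, 2e9))    # 5000.0

# With fpops inflated 100x as described above, the estimate collapses,
# so the client buffers far more work than it can actually complete:
print(estimated_runtime(1e13, 2e11))   # 50.0
```

With dcf pinned to 1, nothing on the client side corrects these estimates from actual completion times, which is the server-latency problem the post describes.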
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
...and should you be showing how to screw with it?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0
Reminds me of one person who commented 'too simple'; this is why credit_new was implemented.
OldChap
Veteran Cruncher, UK | Joined: Jun 5, 2009 | Post Count: 978
I was thinking more along the lines of a cross between Prime95, where you get an option to run the science or just run the load test, and SuperPi, where the result is known and recorded, allowing the process to be validated; the time it took to run then becomes the useful metric.
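A minimal sketch of that hybrid in Python: a fixed load with a known answer (a Leibniz series for pi, chosen purely for illustration), validated before the elapsed time is trusted as the metric:

```python
import math
import time

ITERATIONS = 10_000_000  # fixed workload size, identical for every run

def load_test() -> tuple[float, float]:
    """Run the fixed reference workload; return (result, elapsed_seconds)."""
    start = time.perf_counter()
    total = 0.0
    for k in range(ITERATIONS):
        total += (-1.0) ** k / (2 * k + 1)
    return 4 * total, time.perf_counter() - start

result, elapsed = load_test()
# SuperPi-style validation: the alternating series is within 2/N of pi,
# so a result outside that bound means the run is invalid.
if abs(result - math.pi) < 2 / ITERATIONS:
    print(f"valid result, runtime = {elapsed:.2f} s")
else:
    print("result mismatch, run is invalid")
```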
rembertw
Senior Cruncher, Belgium | Joined: Nov 21, 2005 | Post Count: 275
I seem to remember from earlier discussions that a "general benchmark" in WCG is not really useful because of the different types of projects, so a single "general WCG benchmark" for everybody would not work.

If I remember correctly, to make it useful for optimising machine(s), the benchmark would have to let users individualise it, taking into account at least these user options (a sketch follows after this post):
- run with only selected projects
- set the % project mix as chosen by the user, in case the user chooses not to run every active project
- import the % project mix as delivered by WCG to your machine
- and, separately, import the % project mix as delivered by WCG to the individual machines of every user

From WCG's side, it would also need to be updated regularly to take into account:
- new projects
- intermittent projects
- Beta, with hugely fluctuating demands

Personally, in the past I kept WCG in mind when buying new machines, so I would buy more powerful machines than strictly necessary, or even more machines than necessary. Now I am at a point where I only buy the machines that I really need, with the strength that I really need. Crisis, you know. This fits much better with the original idea of Distributed Computing, where the crunching happens with the spare cycles: spare in raw computer power as well as in the financial aspect.

Keeping all this in mind, I see the benchmarking more as a "hobby project" that could maybe be set up by some people with much time on their hands, who are willing to consider it an eternal Beta.
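As a hobby-project starting point, here is a minimal sketch of such an individualised score: per-project benchmark results weighted by the user's % project mix. The project names and numbers below are made up for illustration:

```python
def weighted_score(scores: dict[str, float], mix: dict[str, float]) -> float:
    """Combine per-project benchmark scores using the user's % project mix."""
    total_weight = sum(mix.values())
    return sum(scores[p] * w for p, w in mix.items()) / total_weight

scores = {"MCM": 120.0, "OPN": 95.0, "ARP": 60.0}  # hypothetical per-project scores
mix = {"MCM": 50, "OPN": 30, "ARP": 20}            # user's chosen % mix
print(weighted_score(scores, mix))                  # 100.5
```

Importing the mix actually delivered by WCG (the last two options in the list above) would simply mean filling the mix dict from the machine's task history instead of from user settings.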