World Community Grid Forums
Thread Status: Active | Total posts in this thread: 21
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
Using "overclaimer" to describe a faster system is masking the problem.

The points system is fundamentally flawed because it is based on a theoretical measurement that makes no allowance for the operating system version. It is only a theoretical measurement of the CPU's potential, and a CPU is only one part of a system of physical components and software. It therefore fails to accurately reflect the ability of the system that is actually crunching, and as a result it awards credit incorrectly. While limited workarounds may exist, getting +200 points rather than +600 is not a good thing; it is merely a less-bad situation. Such a workaround is not particularly recommendable, and its use highlights the need for a systemic correction. Pairing Linux systems with other Linux systems might ease the situation, but a quick fix like that is not the best solution. The best solution would be a real "system" benchmark with a system variable that took account of the different operating systems and included x86 and x64 variations.
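(For readers who have not seen it spelled out: the "theoretical measurement" at issue is the classic benchmark-based claim. Below is a minimal Python sketch using the commonly documented pre-CreditNew BOINC formula, claim = CPU seconds * (Whetstone + Dhrystone) / 1,728,000 with both benchmarks in million ops per second, plus the x7 conversion to WCG points mentioned later in this thread. The host benchmark figures are invented for illustration.)

```python
def claimed_boinc_credit(cpu_seconds, whetstone_mflops, dhrystone_mips):
    """Classic benchmark-based claim: the reference host (1000 MFLOPS
    Whetstone + 1000 MIPS Dhrystone) earns 100 cobblestones per CPU-day."""
    return cpu_seconds * (whetstone_mflops + dhrystone_mips) / 1_728_000

def wcg_points(boinc_credit):
    # WCG points are the BOINC credit multiplied by 7 (see anhhai's post below).
    return boinc_credit * 7

# Example: 4 CPU-hours on a host benchmarking 2800 MFLOPS / 5600 MIPS.
# Note that nothing in the formula sees the OS, the client build (x86/x64),
# or how much work was actually returned -- which is skgiven's point.
print(wcg_points(claimed_boinc_credit(4 * 3600, 2800, 5600)))  # 490.0
```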
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
skgiven,

So the topic was discussing the differential between Linux and Windows credit, and here we are again spinning the same turd around that is discussed in one or another current thread, such as here: http://www.worldcommunitygrid.org/forums/wcg/viewthread_thread,28035. Tell us something new about it being fundamentally broken. We have previously alluded to the fact that Berkeley is developing a new scheme, and until then WCG is very unlikely to do anything substantial in the form of customizations to pull out the million wrinkles that Berkeley allowed to be put in, which have become a carbuncle that everyone, no one excluded, would like to see removed with all its bad seeds and loonesies. It's an inter-project train wreck.

-- SekeRob
edit: insert "credit" in first line.
----------------------------------------
Please help to make the Forums an enjoyable experience for All!
[Edit 2 times, last edit by Sekerob at Sep 4, 2010 6:59:23 PM]
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
We are not discussing "BOINC credits vs WCG"; we are discussing the discrepancies in the credit system with respect to Linux vs Windows, and to some extent x64 systems.

I'm not the dog that left you a present on the mat; I'm just pointing it out and saying that I don't like it, so don't blame me for the smell. If you/WCG want to wait for Berkeley to clean it up, hold your nose perhaps, but not your breath.

PS. I was specifically asked to start a new thread on this topic by a Jean, as I raised it within another somewhat dead thread. So I removed it from the other thread and started a new one, after looking for similarly named threads! If the same subject was raised within one or more other misnamed threads, do a group huddle with the other CAs and formulate a mutual thread-management plan rather than berating me for doing what I was asked.

[Edit 2 times, last edit by skgiven at Sep 4, 2010 7:33:02 PM]
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hello skgiven,

Reference: skgiven's [Sep 4, 2010 7:25:04 PM] post. Greetings.

skgiven wrote: "We are not discussing 'BOINC credits vs WCG'; we are discussing the discrepancies in the credit system with respect to Linux vs Windows, and to some extent x64 systems."

The thing is, the topic of the credit system for Linux versus Windows as implemented at WCG cannot be discussed in isolation from the credit system under BOINC. There needs to be a consistent and coherent BOINC credit system before any specific implementation, say for WCG purposes, can be fashioned. Throw other aspects, say x64/x32 systems, into the works, and the inter-dependencies manifest themselves. One cannot talk about doing heart surgery without talking about anesthesia and other areas, for example; so if someone drops by and insists that he/she is talking about anesthesia in isolation from the heart-surgery procedures, or the other way around, that is, well, "out of context" to say the least.

skgiven wrote: "I'm not the dog that left you a present on the mat; I'm just pointing it out and saying that I don't like it, so don't blame me for the smell."

Of course you are not the dog that did anything, let alone left a present on someone's mat. But I hope that does not mean you would always point to someone else as cover/excuse/justification for the things you actually end up doing. Also, I don't think you are being "berated"; it's more that your ideas are being challenged. Please do not take it personally.

skgiven wrote: "PS. I was specifically asked to start a new thread on this topic by a Jean [...] rather than berating me for doing what I was asked. If you/WCG want to wait for Berkeley to clean it up, hold your nose perhaps, but not your breath."

The bottom line for WCG, as I see it, is to keep the science calculations moving. Allowing the perfecting of the credit system to slow down the science calculations will never happen. That does not mean you do not have a good idea. The good news is that there is acknowledgment that the BOINC credit system has plenty of ground to cover. It is just that the science calculations cannot be held hostage while the credit system is being perfected.

Good day.
TimAndHedy
Senior Cruncher | Joined: Jan 27, 2009 | Post Count: 267 | Status: Offline
Sekerob wrote: "WCG is very unlikely to do anything substantial in the form of customizations to pull out the million wrinkles that Berkeley allowed to be put in [...]"

Not being an expert in the scoring system: is it the Berkeley software or WCG that is reducing the grant in the single-redundancy cases under Linux?

Result Name -- Device | Status | Sent | Returned | CPU Hours | Claimed / Granted
HFCC_n1_01207671_n1_0001_1 -- HexComputer | Valid | 9/4/10 03:47:52 | 9/4/10 15:02:16 | 4.60 | 144.1 / 107.1
HFCC_n1_01149092_n1_0000_1 -- HexComputer | Valid | 9/4/10 03:41:26 | 9/4/10 14:07:13 | 3.51 | 109.3 / 80.8
HFCC_n1_01203544_n1_0001_0 -- HexComputer | Valid | 9/4/10 03:03:45 | 9/4/10 14:07:13 | 4.53 | 141.0 / 104.5
HFCC_n1_01198379_n1_0000_0 -- HexComputer | Valid | 9/3/10 20:51:45 | 9/4/10 07:54:38 | 3.43 | 108.0 / 85.6
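(To put numbers on the question, here is a quick, purely illustrative calculation of the granted-to-claimed ratio for the four results listed above.)

```python
# (claimed, granted) pairs copied from the result list above
results = [(144.1, 107.1), (109.3, 80.8), (141.0, 104.5), (108.0, 85.6)]

for claimed, granted in results:
    print(f"granted/claimed = {granted / claimed:.2f}")
# Ratios come out around 0.74-0.79, i.e. the grant sits roughly a quarter
# below the benchmark-based claim on this Linux host.
```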
KerSamson
Master Cruncher | Switzerland | Joined: Jan 29, 2007 | Post Count: 1684 | Status: Offline
After reading the last comments, I would like to come back to my previous PS:

In other words, the computational overhead of Windows seems to be "honored" just like the effective computation.

The fundamental question is really: is it acceptable that more efficient systems receive more or less the same credit per hour (or per day) as less efficient systems, just because they have a similar theoretical computational power (see the Whetstone and Dhrystone values)? We are facing today the simple reality that the Windows overhead is counted as part of the crunching effort and honored as such. I share Jean's opinion: I don't think this situation is OK. Only the real computational contribution should be considered, and in fact systems delivering more results should receive more credit.

Even if it would look like a workaround (because it would be one), I can imagine that WCG could weight the credit claims based on speed considerations. Such weighting can only be based on an "experience/knowledge database", which would need to be designed, challenged, and implemented, as well as integrated within the result validation process! ... Not easy, and probably less important for the WCG tech team than supporting new and existing projects.

Cheers,
Yves
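(Purely as an illustration of the weighting idea, and not anything WCG has implemented: a throughput-based correction could look roughly like the sketch below. The function name and the throughput figures are hypothetical; the per-host numbers would have to come from the kind of experience/knowledge database Yves describes.)

```python
def weighted_claim(raw_claim, host_results_per_day, reference_results_per_day):
    """Scale a benchmark-based claim by measured throughput, so a host that
    actually returns more validated results per day claims proportionally more.
    Both throughput figures would come from a per-host history database."""
    weight = host_results_per_day / reference_results_per_day
    return raw_claim * weight

# Example: a Linux host that returns 20% more results per day than the
# reference host gets its (otherwise lower) claim scaled up accordingly.
print(weighted_claim(42.5, 12.0, 10.0))  # 51.0
```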
BobCat13
Senior Cruncher | Joined: Oct 29, 2005 | Post Count: 295 | Status: Offline
What I would like to know is how the credit for C4CW is calculated. It appears to be an ever-changing amount despite the fact that each target is very steady in its runtime. The example below shows that with each group of tasks my Linux box has reported, the granted credit has been reduced each time. It would seem that if the CPU time stays consistent, then the credit granted should stay consistent.
Result Name -- Device | Status | Sent | Returned | CPU Hours | Claimed / Granted
c4cw_target02_000130022_0 -- quad-linux | Valid | 9/2/10 15:47:26 | 9/3/10 20:16:32 | 1.85 | 47.2 / 47.5
c4cw_target02_000127638_0 -- quad-linux | Valid | 9/2/10 15:47:27 | 9/3/10 23:45:18 | 1.85 | 47.3 / 46.8
c4cw_target02_000349583_0 -- quad-linux | Valid | 9/2/10 21:52:56 | 9/4/10 02:09:09 | 1.86 | 47.5 / 45.7
c4cw_target02_000343176_0 -- quad-linux | Valid | 9/2/10 21:52:56 | 9/4/10 22:03:13 | 1.86 | 47.5 / 45.5
c4cw_target02_000342656_0 -- quad-linux | Valid | 9/2/10 21:52:56 | 9/4/10 23:40:53 | 1.85 | 47.4 / 45.0
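(The drift is easier to see as a ratio; a quick calculation over the five rows above.)

```python
# (claimed, granted) for the five c4cw_target02 results above, in reporting order
rows = [(47.2, 47.5), (47.3, 46.8), (47.5, 45.7), (47.5, 45.5), (47.4, 45.0)]
for claimed, granted in rows:
    print(f"{granted / claimed:.3f}")
# 1.006, 0.989, 0.962, 0.958, 0.949 -- the grant slides from just above the
# claim to about 5% below it while CPU time and claim stay flat.
```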
anhhai
Veteran Cruncher | Joined: Mar 22, 2005 | Post Count: 839 | Status: Offline
TimAndHedy:
BOINC grants all credits; WCG just multiplies them by 7.

KerSamson:
I personally don't like the fact that you get fewer points per equivalent WU in Linux than in Windows, but honestly the really annoying thing is that if you do a WU that is not zero-redundancy, you will get even fewer points per hour of work. If you run the benchmark test on a system in Linux and on the same system with Windows, you will notice a big difference in the results. I don't remember the exact numbers, but for a fairly recent system, 32-bit BOINC on Windows and 64-bit BOINC on Windows give about the same numbers. However, 64-bit BOINC on Linux benchmarks about 20% lower. Worse yet, 32-bit BOINC on Linux will get you about 40% less. I didn't actually see any real difference in the performance of the WUs, but the number of points for each was different.

BobCat:
I am guessing that those WUs of yours didn't need to be validated; if so, this is how the crediting works: there is a small WU within the WU that gets processed to figure out how many points you get per hour of work. It is kind of strange that your credit varies by that much, because most of mine are the same or off by no more than 0.2 pts.

Edit: I meant, I don't see any performance difference between 64-bit and 32-bit on Linux.

[Edit 1 times, last edit by anhhai at Sep 5, 2010 2:37:15 AM]
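(A rough illustration of the benchmark gap anhhai describes: the percentages are his, while the absolute benchmark numbers are made up. Because the classic claim scales linearly with the benchmark figures, a client that benchmarks 40% lower claims 40% less for the same CPU time.)

```python
def claim(cpu_seconds, whetstone_mflops, dhrystone_mips):
    # Classic benchmark-based claim (see the sketch earlier in the thread).
    return cpu_seconds * (whetstone_mflops + dhrystone_mips) / 1_728_000

cpu = 3 * 3600                          # identical 3 CPU-hours on the same hardware
windows = claim(cpu, 2500, 5000)        # hypothetical Windows benchmark
linux64 = claim(cpu, 2000, 4000)        # ~20% lower, per anhhai
linux32 = claim(cpu, 1500, 3000)        # ~40% lower, per anhhai
print(windows, linux64, linux32)        # 46.875, 37.5, 28.125 (BOINC credits; WCG points are 7x)
```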
TimAndHedy
Senior Cruncher | Joined: Jan 27, 2009 | Post Count: 267 | Status: Offline
anhhai wrote: "BOINC grants all credits; WCG just multiplies them by 7."

I think you may have misunderstood my question, although the root of it is in your next explanation.

anhhai wrote: "There is a small WU within the WU that gets processed to figure out how many points you get per hour of work."

My question is: who designs the WU score? Is it generic BOINC functionality, or is it designed by WCG for each project? Also, is the small work unit involved with the requested credit or the granted credit? Is it really just standard BOINC functionality that is responsible?

[Edit 1 times, last edit by TimAndHedy at Sep 5, 2010 2:06:36 AM]
Sekerob
Ace Cruncher | Joined: Jul 24, 2005 | Post Count: 20043 | Status: Offline
For good order: except for HPF2, all current sciences at WCG have wingmen of the same platform, i.e. Linux-Linux matching (a homogeneity requirement). There are no Windows-Linux quorum distortions happening. The variety of benchmarks and compiles for Linux is much greater... an old client that classically benchmarks only 55% of Windows is enough to screw things up... a 32-bit client meeting a 64-bit client is another spanner, and there are umpteen more. So, as said, it's broken; the HFCC/FAAH mini benchmark is broken (WiFi will do that, e.g. during the start and end phase with concurrent up/downloading)... the whole system is a train where each wagon has only 3 wheels.

C4CW... I don't know if there's a mini QA benchmark inside; the techs never told us. The servers slowly learn for each device (from the last 15-20 units per science, per device) how many fpops per second are processed and compare that to a reference set, probably what was learned during beta. I've seen the same fine-tuning of the claim for ZR runs as BobCat13: the grant started high, about 10% above the claim, and now sits steady at about 1 point below the claim, which happened once enough tasks had been returned to establish a good claim mean (until the next time your BOINC benchmark changes by hundreds of points, at which point it will recalibrate again). It's still more than what Windows gives per hour. Though I understand that, for example, 2 HCC units processed on Linux in the same time as 1 on Windows, yet with no double credit per hour, does not look right, we do not know whether it's more cycles going to actual computing per unit of time or fewer cycles lost to loads of wait states (if you catch my drift). The machine does not have one cycle more per hour; that much we all know. I'm patiently waiting for the final solution (it's in beta on SETI) and until then I use the best-suited client for my setup (the .sh compile for Ubuntu 64), which gives 'the' benchmark for the best balance.
----------------------------------------
Please help to make the Forums an enjoyable experience for All!
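(What Sekerob describes, servers tracking each device's recent processing rate per science and steering the grant toward it, could be sketched roughly as below. This is a guess at the mechanism, not WCG's actual code; the starting estimate, the learning rate, and the update rule are all assumptions, since, as he says, the techs never published the details.)

```python
class DeviceCalibration:
    """Hypothetical sketch: the server starts from a prior for a device's
    effective speed and nudges it toward the speed observed on the device's
    recent results; the grant is then based on that learned speed rather
    than on the client's benchmark-based claim."""
    def __init__(self, prior_mops, learning_rate=0.2):
        self.est_mops = prior_mops          # effective (Whetstone+Dhrystone)-style rate
        self.lr = learning_rate

    def grant(self, cpu_seconds, observed_mops):
        # Nudge the estimate toward what this result actually showed.
        self.est_mops += self.lr * (observed_mops - self.est_mops)
        # Same scale as the classic claim formula quoted earlier in the thread.
        return self.est_mops * cpu_seconds / 1_728_000

cal = DeviceCalibration(prior_mops=7500)           # optimistic starting estimate
for _ in range(5):
    print(round(cal.grant(1.85 * 3600, 6500), 1))  # device's measured rate is lower
# 28.1, 27.5, 27.0, 26.6, 26.3 -- grants start high and settle toward the
# device's real rate, the kind of slide BobCat13's C4CW list shows.
```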