World Community Grid Forums
Category: Support | Forum: Suggestions / Feedback | Thread: pseudo projects with a constant typical WU of corresponding real project
Thread Status: Active | Total posts in this thread: 13
ttt67
Cruncher | Joined: Nov 6, 2010 | Post Count: 7 | Status: Offline |
For optimization purposes I'd like to choose a project that sends the same work unit over and over. I could then change parameters (RAM, operating system, cores, hyperthreading, power limit, ...) and compare the effects (points per hour, points per watt) without the noise that comes from comparing runs over different WUs.
Each real project would provide its own pseudo project: no validation, no points, no run-time credit, just a single typical WU of the project. Perhaps not too hard to implement and set up ... Supporters of the suggestion? |
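As an aside, the core of what a constant-WU benchmark would buy can be sketched locally: time one fixed, deterministic workload several times per configuration and compare the means. A minimal Python sketch (the workload below is a made-up stand-in, not a real WCG work unit):

```python
import statistics
import time

def fixed_workload(n: int = 200_000) -> int:
    # Deterministic CPU-bound stand-in for "the same WU every time".
    total = 0
    for i in range(2, n):
        total += (i * i) % 97
    return total

def benchmark(runs: int = 5):
    # Wall-clock each run of the identical workload; with a truly
    # constant input, the spread (stdev) reflects only system noise.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fixed_workload()
        times.append(time.perf_counter() - start)
    return statistics.mean(times), statistics.stdev(times)

mean_s, stdev_s = benchmark()
print(f"mean {mean_s:.4f} s, stdev {stdev_s:.4f} s over identical inputs")
```

Change one knob (core count, power limit, OS) between `benchmark()` calls, and any difference in the means is attributable to that knob, which is exactly the point of the suggested pseudo project.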
||
|
BladeD
Ace Cruncher | USA | Joined: Nov 17, 2004 | Post Count: 28976 | Status: Offline |
> Perhaps not too hard to implement and set up ...

Oh, I don't think so. |
||
|
Sgt.Joe
Ace Cruncher | USA | Joined: Jul 4, 2006 | Post Count: 7545 | Status: Offline |
> Each real project would provide its pseudo project - no validation, no points, no run time credits, just a single typical WU of the project. Perhaps not too hard to implement and set up ... Supporters of suggestion?

If the scientists/researchers thought it was necessary, I could see where they might incorporate this in a limited fashion in a beta run. Cheers
Sgt. Joe
*Minnesota Crunchers* |
||
|
Falconet
Master Cruncher | Portugal | Joined: Mar 9, 2009 | Post Count: 3294 | Status: Offline |
While I understand your point, that's not going to happen at WCG or most other BOINC projects.
Ultimately, most of your questions already have answers. Hyperthreading increases the runtime of each task, but the total number of results returned is still higher. Operating system matters for *some* projects. Right now, Linux and Windows perform basically the same on every WCG project, but if the SCC project ever comes back, feel free to fire up that Linux OS if you want a speed-up of around 50% per task. As for points, I don't remember, since I really don't care, but there have been some differences between OSes. Faster RAM hasn't made any significant difference in project runtimes, IIRC.

I think TN-Grid sort of offers what you are looking for, but I simply have no idea how to use their "validation suite". Again, what you are looking for is also known there: Linux runs about 20% faster than Windows.

AMD Ryzen 5 1600AF 6C/12T 3.2 GHz - 85W | AMD Ryzen 5 2500U 4C/8T 2.0 GHz - 28W | AMD Ryzen 7 7730U 8C/16T 3.0 GHz |
||
|
Sgt.Joe
Ace Cruncher | USA | Joined: Jul 4, 2006 | Post Count: 7545 | Status: Offline |
> Operating system matters for *some* projects.

Linux does run faster on any of the projects which use the "VINA" docking program, like SCC1. At the moment Windows runs quicker on MCM. I have two similarly sized systems, an i7-3770 (Windows 7) and an i7-2600K (Linux Mint 17?). The Windows system runs MCM in about 2 to 2.5 hours, while the 2600K takes about 3 to 3.5 hours. Other projects have run quicker on the 2600K. Cheers
Sgt. Joe
*Minnesota Crunchers* |
||
|
Falconet
Master Cruncher | Portugal | Joined: Mar 9, 2009 | Post Count: 3294 | Status: Offline |
I actually switched a laptop from Windows 10 to Linux Mint 20.2 less than an hour ago, so I'll see if I can track the difference. It's an old AMD A8-4500M which ran 4 MCM work units a few days ago: 3 took around 6.2 hours each and another ran for 8.44 hours. Small sample, I know.
AMD Ryzen 5 1600AF 6C/12T 3.2 GHz - 85W | AMD Ryzen 5 2500U 4C/8T 2.0 GHz - 28W | AMD Ryzen 7 7730U 8C/16T 3.0 GHz |
||
|
alanb1951
Veteran Cruncher | Joined: Jan 20, 2006 | Post Count: 858 | Status: Offline |
> Operating system matters for *some* projects. Linux does run faster on any of the projects which use "VINA" docking program like SCC1. At the moment Windows run quicker on MCM. I have two similar sized systems an I7-3770 (Windows 7) and I7 2600K (Linux Mint 17?). The Windows system runs MCM in about 2 to 2.5 hours while the 2600k takes about 3 to 3.5 hours. Other projects have run quicker on the 2600K. Cheers

I only have Linux machines, so the below is a Linux-only observation...

When they ran the last MCM1 Beta, the long-running (VMethod=NFCV) tasks were running a lot faster than their production equivalents, and the shorter (VMethod=LOO) tasks were running a bit faster. A new compiler and/or newer compiled-in libraries can make a big difference in performance, and the current production MCM1 program build on Linux seems to date from June 2018, so the Beta would have been built with later versions of things, better default optimizations, et cetera! Of course, I have no idea whether the MCM1 Beta also improved times on Windows or not...

In general, I suspect the key is how the executables are built rather than the actual operating systems on which they run :-) Even with equivalent software on identical data, there's a chance one is learning more about the compiler tool chains than about the benefits of the actual operating system!

(And ideally one should do any tests on identical hardware, either stand-alone or under identical workloads - otherwise aspects such as general hardware performance (instruction pipelining?) and L3 cache usage will make a difference... Not that MCM1 troubles cache much, but lots of other applications do!)

Only an opinion, of course, and [in the specific case of MCM1 Beta] based on a relatively small dataset...

Cheers - Al. |
||
|
ttt67
Cruncher | Joined: Nov 6, 2010 | Post Count: 7 | Status: Offline |
> While I understand your point, that's not going to happen at WCG

I assume it won't happen because it is not of interest to most contributors...

> Ultimately, most of your questions already have an answer.

I appreciate your sharing of experiences below. Actually, those were not questions but (incomplete) examples of what could be tested.

> Hyperthreading increases the runtime of a task but ultimately the number of results returned is still higher.

That statement is probably based on comparing runs of WU sets, including the noise of the sets not being identical - which is exactly what the suggestion would improve... In theory HT could hamper overall performance: if L3 cache is much faster than RAM, and the full (HT) number of threads invalidates the cache too fast while a reduced number could run within L3, the reduced count would avoid high-latency RAM access... Needs to be tested - in Germany people say "Versuch macht kluch": experiments rule.

> Operating system matters for *some* projects. ...

Probably also based on comparing runs including noise...

> As for points, I don't remember since I really don't care

I saw points as a measurement of how valuable the result of a WU is - wrong?

> Faster RAM hasn't made any significant differences

Probably also based on comparing runs including noise...

I'd like to have a tool that allows me to compare settings or hardware - more precisely, and with a single WU instead of sets for averaging. And as I don't know the implementation of the project system, it might be possible to have a pseudo project with low effort: setting up projects is known (and easy - hopefully), WU generation is just copying (hard-linking) the same typical (weakness!) WU into the pool under a new name, and validation is trivial: always valid, 0 points, 0 s run time. Without knowledge, things are that easy. I'd like to have it so much that I write lines and lines and... |
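The "WU generation is just hard links" part of that sketch is indeed cheap; a minimal illustration (file and directory names are invented for the example, and this is of course not WCG's actual work-unit pipeline):

```python
import os
import tempfile

def fill_pool(template: str, pool_dir: str, count: int) -> list:
    """Populate a WU pool by hard-linking one template file under
    fresh names - same bytes on disk, no copying, near-zero cost."""
    names = []
    for i in range(count):
        wu_path = os.path.join(pool_dir, f"pseudo_wu_{i:05d}.dat")
        os.link(template, wu_path)
        names.append(wu_path)
    return names

with tempfile.TemporaryDirectory() as d:
    template = os.path.join(d, "typical_wu.dat")
    with open(template, "wb") as f:
        f.write(b"example work unit payload")
    pool = fill_pool(template, d, 3)
    # All pool entries share the template's inode (one physical copy).
    print(len(pool), all(os.path.samefile(p, template) for p in pool))
```

Validation in such a scheme really would be a no-op (always valid, 0 points, 0 s credit), so the server-side cost is dominated by plain distribution.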
||
|
BladeD
Ace Cruncher | USA | Joined: Nov 17, 2004 | Post Count: 28976 | Status: Offline |
> I'd like to have a tool that allows me to compare settings or hardware - more precise and with a single WU instead of sets for averaging.

Well, you should start working on that. |
||
|
ttt67
Cruncher | Joined: Nov 6, 2010 | Post Count: 7 | Status: Offline |
> I actually switched a laptop from Windows 10 to Linux Mint 20.2 less than an hour ago so I'll see if I can track the difference. It's an old AMD A8-4500M which ran 4 MCM workunits a few days ago. 3 took around 6.2 hours and another ran for 8.44. Small sample, I know.

That quote is a good example of where the suggestion wins: in the next set of 4 you may get 3 @ 8.44 h and 1 @ 6.2 h, for a sum of 31.5 h, whereas the first set's sum was 27 h - a difference of roughly 15%. Does it get better with larger samples? How large? BTW: OPNG run times can differ by a factor of 8! https://www.worldcommunitygrid.org/forums/wcg...ad,43752_offset,20#666040 |
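For what it's worth, those sums can be checked directly; the gap comes out at roughly 17% or 14% depending on which sum is taken as the base (the ~15% quoted above sits between the two):

```python
first_set = [6.2, 6.2, 6.2, 8.44]     # observed: three ~6.2 h runs, one 8.44 h
worst_case = [8.44, 8.44, 8.44, 6.2]  # hypothetical unlucky next set

a, b = sum(first_set), sum(worst_case)
gap_vs_first = 100 * (b - a) / a   # relative to the smaller sum
gap_vs_worst = 100 * (b - a) / b   # relative to the larger sum
print(f"{a:.1f} h vs {b:.1f} h: +{gap_vs_first:.0f}% (or {gap_vs_worst:.0f}% of the larger sum)")
```

Either way, with only four WUs the run-time mix dominates the comparison, which is the noise the constant-WU suggestion is meant to remove.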
||