World Community Grid Forums

Thread Status: Active | Total posts in this thread: 8
Mysteron347
Senior Cruncher | Australia | Joined: Apr 28, 2007 | Post Count: 179 | Status: Offline
I believe it would be useful to be able to call for a dummy test unit for a device.

What I have in mind is a page where you specify your destination device name and push a button to queue a short workunit for that device from any of the applications, crunching just a few positions on a very short deadline. When the device asks for work, BOINC should prioritise this unit, but it would run in a very short time. The returned result would be validated and discarded.

The point would be to test that a real unit for the application in question should (theoretically) be processed and returned correctly, without changing your individual project selection or relying on the distribution randomiser. You could be sure of receiving a unit for the project in question, with a total processing time that would be reasonably predictable and manageable.

Why? To test that your latest hardware and software changes don't have unintended consequences, and that the new graphics card you've acquired actually does things, without waiting for a beta GPU unit to appear. Perhaps even a 'standard test' button and a 'real hard' button that sends a unit containing selected data to give the silicon a real work-out.

I know there's a lot going on at present, so next week will do fine.
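The request flow described above can be sketched in a few lines of Python. Everything here (`TestUnit`, `queue_test_unit`, the priority value, the position counts) is invented purely for illustration; no such WCG or BOINC API exists:

```python
# Hypothetical sketch of the proposed "dummy test unit" request flow.
# None of these names exist in BOINC or WCG; they only illustrate the idea.
from dataclasses import dataclass

@dataclass
class TestUnit:
    device_name: str   # the destination device named on the request page
    app: str           # which application's test data to send
    positions: int     # just a few positions, so the run is short
    priority: int      # high, so the BOINC client fetches it first

def queue_test_unit(queue, device_name, app, hard=False):
    """Queue a short test WU for one device; 'hard' picks the stress data set."""
    unit = TestUnit(device_name=device_name, app=app,
                    positions=1000 if hard else 10, priority=100)
    queue.append(unit)
    return unit

pending = []
unit = queue_test_unit(pending, "office-gpu-box", "beta-gpu", hard=True)
print(unit.positions)  # the 'real hard' button maps to the larger data set
```

The `hard` flag stands in for the suggested 'real hard' button; the default path is the 'standard test'.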
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I'm just interjecting my extraneous opinion; I don't have anything directly to do with this subject. I think WCG has always used other people's software (UD, BOINC) so that it can minimise programming time on issues like this. BOINC is working on so much, I doubt they would jump on this either. Just another kibitzer - Lawrence
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
I suggested this at another project. It's something that would benefit many projects, especially those that only let you fail a handful of tasks before blocking you for 24 hours, and those that frequently test new work units. As WCG has many projects, it would be more useful here than elsewhere. It would also help BOINC compare CPU and GPU tasks in order to calibrate its software.
----------------------------------------
[Edited 1 time; last edit by skgiven at Mar 17, 2012 7:37:49 AM]
mikey
Veteran Cruncher | Joined: May 10, 2009 | Post Count: 826 | Status: Offline
Quote (Mysteron347): "I believe that it would be useful to be able to call for a dummy test unit for a device. [...]"

Many, MANY years ago this was suggested at SETI, the writers and still the maintainers of BOINC, but it was rejected for many reasons. One was: do they give you credits for crunching this 'test' unit? If so, why? It didn't DO anything but prove your hardware worked, and if it doesn't work, the project just resends the unit anyway. They would have to send it to you to test it, so why not just wait for some PC to not return it on time and resend it then? And who would crunch for no credits? What is the point from the user's perspective? If it works you get credits; if it doesn't, what did you lose? In the end it was decided not to do it, for these and many other reasons, so they run a weekly set of benchmarks instead.
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
'This' is not exactly the same suggestion. The purpose would be to test the system setup, which is especially needed for GPU projects. The task/WU would just be a test one, with no credits. The only time people would crunch these tasks would be when testing their setup (system, GPU clocks, drivers, BOINC connectivity...). SETI's rejection 5 or 10 years ago came down to using a benchmark instead, so it was a benchmark-vs-test-WU decision. As there isn't a GPU benchmark, that argument is irrelevant now. And since different projects award credit as they see fit, SETI's decision is very debatable anyway.
widdershins
Veteran Cruncher | Scotland | Joined: Apr 30, 2007 | Post Count: 677 | Status: Offline
I'd say the suggestion would be very easy to implement in a simple way. Find an already-crunched and validated WU of short duration, say one hour, for each active project. Set up a special 'system testing' project with a check box similar to the beta-testing opt-in. Set the replication of each WU to, say, 10,000,000 copies, load the feeders, and load one of the already-validated results into the validator. Limit each machine to 2 copies of each WU type per core per day.

A person wishing to test their new set-up ticks the check box for the test WUs, and a suite of units from all the active projects is downloaded and processed. When a result is uploaded, since there is already a valid result sitting in the validator, an immediate pass/fail of the user's results would be available for each project type. A one-hour test per unit would be long enough for any problems to show up (e.g. overheating).
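The two server-side mechanics in this scheme (immediate validation against a preloaded result, and a per-core daily quota) can be sketched as follows. The WU name, hash values, and quota constant are all made up; this is not real WCG server code:

```python
# Minimal sketch of the scheme above: immediate pass/fail against an
# already-validated canonical result, plus the 2-copies-per-core daily limit.
from collections import defaultdict

CANONICAL = {"testproj-1hr-wu": "9f2ab8"}   # hash of the known-good result
DAILY_LIMIT_PER_CORE = 2

sent_today = defaultdict(int)               # (host, wu_type) -> copies sent

def may_send(host, wu_type, cores):
    """True while the host is under its daily quota for this WU type."""
    if sent_today[(host, wu_type)] >= DAILY_LIMIT_PER_CORE * cores:
        return False
    sent_today[(host, wu_type)] += 1
    return True

def validate(wu_type, result_hash):
    """Immediate pass/fail: compare the upload to the stored valid result."""
    return CANONICAL.get(wu_type) == result_hash

print(validate("testproj-1hr-wu", "9f2ab8"))  # a matching upload passes at once
```

Because the validator already holds a valid result, no wingman is needed and the pass/fail verdict is available the moment the upload arrives.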
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I second the idea of a test facility/functionality, with the comment that the benchmarks fulfil a different need and answer a different set of questions than testing a machine via a test WU does. Actually building it is of course another question, and a big part of that question is whether it is BOINC's job, or WCG's, or both in some combination. BOINC would need to set the direction first, and grids would fill in the details particular to their respective requirements. Not much good news here though, for "BOINC is working on so much, I doubt that they would jump on this either", as CA lawrencehardin commented.
Bearcat
Master Cruncher | USA | Joined: Jan 6, 2007 | Post Count: 2803 | Status: Offline
Why not send just one WU for a new project the computer asks for, if it has never validated a WU for that project before? If it validates, the system can then send whatever the preferences are for that particular computer. If it doesn't validate, send a generated message informing the user that the particular computer failed, and to either select another project or adjust the computer. This would stop someone asking for 5 (or whatever) days of extra work at the beginning, only to find it all errors out later. Since certain projects are computed differently from others, one WU per project to validate would be more precise than a generic WU. Just a suggestion.
----------------------------------------
Crunching for humanity since 2007!
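Bearcat's gate can be sketched with a simple per-project validation flag. The function names, project keys, and message text below are invented; nothing here reflects the real WCG scheduler:

```python
# Hypothetical sketch of the probe-first gate: one WU per project until the
# host has validated something for it, then normal scheduling resumes.

def units_to_send(host_validated, project, requested):
    """Send a single probe WU until this host has validated one for the project."""
    if not host_validated.get(project):
        return 1
    return requested

def on_result(host_validated, project, passed):
    """Open the gate on success; otherwise tell the user what to fix."""
    if passed:
        host_validated[project] = True
        return "validated"
    return ("This computer failed validation for " + project +
            "; select another project or adjust the machine.")

state = {}
print(units_to_send(state, "hcc", 40))   # 1 -- probe first, ignoring the cache request
on_result(state, "hcc", True)
print(units_to_send(state, "hcc", 40))   # 40 -- gate is open after one validated WU
```

The point, as the post argues, is that a host asking for days of cache up front gets exactly one unit per project until it has proven it can return a valid result.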