World Community Grid Forums
Category: Beta Testing Forum: Beta Test Support Forum Thread: Beta: What does one expect of you? |
Thread Status: Active Total posts in this thread: 27
|
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
[warning: some abrasives could have slipped in ;-)] There's the Beta enrollment page and several Help and Start Here items on the topic of beta testing. There's also the last Beta testing announcement of "Please let us know if you encounter *any* strange behaviors with this Beta within the Beta Support Forum." and a member inquiring what the objective of the test was. I think most everyone knows: break them in any conceivable way, in scenarios that would mostly occur during your daily operation. We all do that, of course!
For those who like making the extra effort, and accept that things will not always work as expected (points issues being one item *not* on the list), here is an attempt at capturing the most looked-for items, in no particular order at this time:
1) Resuming after suspend, with LAIM (Leave Application In Memory when suspended) on and off, without hanging or crashing.
2) Restarting the system to test whether results resume properly, as per 1):
..a) they do not return to the beginning (zero progress when, for instance, the task was at 40%);
..b) they start running without needing manual intervention, beyond what the client would schedule in order when there are other tasks in the "waiting to run" state.
3) Progress is recorded in the client, and sticks after any of the above. ... (Does progress oscillate: retreating, progressing, retreating...?)
4) Whether checkpoints are recorded at the intervals set in the client (the "no more than" frequency rule).
5) Whether tasks run much longer than expected, never ending.
6) Whether [average] CPU time use follows the client settings (e.g. 100% of spare cycles).
7) Whether the tasks in any way impair use of the computer (brand new "anonymous" betas in particular).
8) Whether the graphics window comes up (Beta does not have the full production graphics, though).
9) Whether progress shows in 8).
10) Whether the graphics window closes correctly.
11) Any very extraneous RAM/VM/swap file memory use, not warned of by the technicians.
12) Any extraneous disk I/O, not warned of by the technicians. ... (Operators of e.g. Process Explorer: look for large page-fault deltas.)
13) Any errors (not warnings) logged in the event/message log.
14) Any result log errors, not the benign warnings that show in every task result output. ... (Close-up watchers can monitor the stderr.txt file in the task slots\xx for up-to-the-minute detail.)
15) ...
Now you're invited to copy the list in a reply and add more item(s) that bear concern.
When exhausted, a sort and clean-up can be undertaken and the Help or Start Here can be expanded, so there is no question what is expected of the beta participant and what the techs may want to hear of; broadly, as it was put, "Please let us know if you encounter any strange behaviors with this Beta within the Beta Support Forum." On topic please: generating a list of items, not your "how it should be". Thanks in advance --//-- |
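For item 14, the stderr.txt watching can be automated. A minimal Python sketch that prints the last few stderr.txt lines from each active task slot; the BOINC data directory path is an assumption (it varies by platform and install, e.g. /var/lib/boinc-client on Linux, C:\ProgramData\BOINC on Windows), so pass your own:

```python
import os
import sys

# Assumed default BOINC data directory; adjust for your install,
# or pass the path as the first command-line argument.
BOINC_DIR = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/boinc-client"

def tail_stderr(boinc_dir, lines=5):
    """Print the last few stderr.txt lines for every active task slot."""
    slots = os.path.join(boinc_dir, "slots")
    if not os.path.isdir(slots):
        print(f"no slots directory under {boinc_dir}")
        return
    for slot in sorted(os.listdir(slots)):
        path = os.path.join(slots, slot, "stderr.txt")
        if not os.path.isfile(path):
            continue  # this slot has no task output yet
        with open(path, errors="replace") as fh:
            tail = fh.readlines()[-lines:]
        print(f"--- slot {slot} ---")
        for line in tail:
            print(line.rstrip())

if __name__ == "__main__":
    tail_stderr(BOINC_DIR)
```

Run it periodically (or under `watch`) while a beta task is crunching to catch errors as they appear.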
||
|
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
Hello SekeRob,
Reference: SekeRob [May 10, 2012 10:34:35 AM] post. There are many ways to skin a cat. What way of skinning a cat are we talking about for any given beta test? Your method goes from the general in to the specific, whereas my method goes from the specific out to the general. Your method will require a prior shotgun list of items to carry on a mission; my method will have a list of items generated as needed, depending on what the objective of a particular mission is. I propose that every beta test be predicated on a specific (and not general) context, and that context will be the rationale, objective, and motivation that caused the need to come up with that particular beta test. We are doing things with a specific rationale, a specific objective, and a specific motivation in mind, aren't we? What is wrong with having those explicitly stated (and not implied from any other context) in any beta test? Good day. |
||
|
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
The specific rationale is to test how a program runs on YOUR computer, which probably represents a unique software environment, or at least one unlike any other system the program has ever run on. Lawrence |
||
|
Dataman
Ace Cruncher Joined: Nov 16, 2004 Post Count: 4865 Status: Offline Project Badges: |
[quoting SekeRob's checklist post above in full] Nice post Seke. I remember back in the day when few wanted to beta test. Then came badges and EVERYONE suddenly was a beta tester even though they had no idea what they were doing. |
||
|
roundup
Veteran Cruncher Switzerland Joined: Jul 25, 2006 Post Count: 831 Status: Offline Project Badges: |
SekeRob and Dataman, sooo true.
I was really looking for the "like" button next to your postings. |
||
|
yoro42
Ace Cruncher United States Joined: Feb 19, 2011 Post Count: 8976 Status: Offline Project Badges: |
SekeRob,
Thank you for the checklist. I hope to cover some of them. |
||
|
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
[copying the list above, with one addition] 15) Check if the screensaver operates correctly. |
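Item 4 on the list (checkpoint frequency) can also be observed from outside the task. A rough Python sketch, assuming checkpoints show up as rewrites of files inside the task's slot directory (exact file names vary per science app, so this only gives a coarse signal, not a definitive measurement):

```python
import os
import time

def watch_checkpoints(slot_dir, poll=2.0, duration=60.0):
    """Poll a BOINC slot directory and report the seconds between
    modifications of any file in it (a rough checkpoint signal)."""
    last = {}  # file name -> last seen modification time
    deadline = time.time() + duration
    while time.time() < deadline:
        for name in os.listdir(slot_dir):
            path = os.path.join(slot_dir, name)
            if not os.path.isfile(path):
                continue
            mtime = os.path.getmtime(path)
            prev = last.get(name)
            if prev is not None and mtime > prev:
                print(f"{name}: rewritten after {mtime - prev:.1f}s")
            last[name] = mtime
        time.sleep(poll)
```

Point it at one `slots/NN` directory while a beta task runs and compare the reported intervals against the "write to disk at most every N seconds" preference set in the client.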
||
|
Former Member
Cruncher Joined: May 22, 2018 Post Count: 0 Status: Offline |
I have to say, good post Seke. The purpose of beta testing is to find and work out bugs of ANY sort in the pieces etc. being sent. For the point whores who just want to grab the beta points for their next badge but not give one letter of input: thanks, but no thanks.
Beta tests have a purpose BESIDES letting you brag about another badge in the forums; they are to debug apps prior to sending them to the mass public. For those who don't want to use their machine as a test platform, or, more specifically, those who want to gobble up all the beta tasks they can but never give any input on how they ran: please go elsewhere. There, I said it. Banish me to AOL. Aaron |
||
|
Dataman
Ace Cruncher Joined: Nov 16, 2004 Post Count: 4865 Status: Offline Project Badges: |
For those of you who want to be beta TESTERS and not just beta COLLECTORS, a good place to start is the BOINC beta test script. I don't have the link right now, but it is out there in the Bezerkly Land beta test forum. Over the years I have added things to it that pertain to science apps. I'd share, but it is really not ready for prime-time viewing.
I hope some more of you will help us perform more thorough testing. It doesn't hurt and it is non-fattening. [Edit 1 times, last edit by Dataman at May 12, 2012 12:04:21 AM] |
||