This is a process developed by Cem Kaner and Brian Lawrence for technical information-sharing across different groups. It's not very new; we adapted it from academic models and from models used in other industries. Our goal is to facilitate sharing of information in depth. We don't see enough of that happening today.
Instead, what we see today is that, in GUI test automation, ideas presented as state of the art at some conferences had been in quiet use for years in some companies, or had consistently failed or been found false in practice. The level of snake oil in test automation presentations was unacceptable. Soon after Kaner started to do on-the-road consulting, he realized that many of his clients had strong solutions to isolated pieces of the test automation puzzle, but no strategy for filling in the rest. The state of the art was much higher than what the conferences showed, but it wasn't getting across to the rest of us. Kaner hosted the first Los Altos Workshop with the goal of creating a situation that allowed for transfer of information about test automation in depth, without the BS. A meeting like this has to be well managed. As you'll see, there are no structured presentations; this is two days of probing discussion. Lawrence volunteered to facilitate the meeting. Various attendees worked for a few hours at a time as reporters (public note-takers).
The workshops (we’re now planning our third) are structured as follows:
OBJECTIVE: Develop a list of 3 or 4 practical architectural strategies for GUI-level automated testing using such tools as QA Partner and Visual Test. In particular, we want to understand how to develop a body of test cases that meets the following criteria:
- with knowledge of what testing of the program is actually being done (i.e., what is actually being covered by this testing);
- with the ability to tell whether the program has passed, failed, or punted each particular test case.
(a) war stories to provide context. Up to 5 volunteers describe a situation in which they were personally involved. The rest of us ask the storyteller questions, in order to determine what “really” happened or what details were important. Generally, stories are success stories (“we tried this and it worked because”) rather than dismal failure stories, but instructive failures are welcome. No one screens the stories in advance.
(b) general discussion
(c) boil down some apparent points of agreement or lessons into short statements and then vote on each one. Discussion is allowed. The list of statements is a group deliverable, which will probably be published.
We agreed that any of us can publish the results as we see them; no one is the official reporter of the meeting. We agreed that any materials presented to the meeting or developed at the meeting could be posted to any of our web sites. If one of us writes a paper to present at the meeting, everyone else can put it up on their sites, and similarly for flipchart notes and other materials. No one has exclusive control over the workshop material. We also agreed that any publications from the meeting would list all attendees as contributors to the ideas published. The following people have attended the first and/or second workshops:
Chris Agruss (Autodesk), Tom Arnold (ST Labs), James Bach (ST Labs), Dick Bender (Richard Bender & Associates), Jim Brooks (Adobe Systems, Inc.), Elisabeth Hendrickson (Quality Tree Consulting), Doug Hoffman (Software Quality Methods), Cem Kaner (kaner.com), Brian Lawrence (Coyote Valley Software Consulting), Tom Lindemuth (Adobe Systems, Inc.), Brian Marick (Testing Foundations), Noel Nyman (Microsoft), Bret Pettichord (Unison), Drew Pritsker (Pritsker Consulting), and Melora Svoboda (Electric Communities). Organizational affiliations are given for identification purposes only. Participants’ views are their own, and do not necessarily reflect the views of the companies listed.
HANDOUT: Paper published at Software Quality Week and in Software QA. An earlier version appeared in IEEE Computer.