WTST 2012: Workshop on Teaching Software Testing

November 27th, 2011

TEACHING SECURITY-RELATED SOFTWARE TESTING

WTST 2012: The 11th Annual Workshop on Teaching Software Testing

January 27-29, 2012

at the Harris Institute for Assured Information

Florida Institute of Technology, Melbourne, Florida

www.wtst.org.

Software testing is often described as a central part of software security, but it plays a surprisingly small role in security-related curricula. Over the next five years, we hope to change this. If there is sufficient interest, we plan to focus WTST 2012-2016 on instructional support for teaching security-related testing.

OUR GOALS FOR WTST 2012

  • Survey the domain: What should we consider as part of “security-related software testing”?
  • Cluster the domain: What areas of security-related testing would fit well together in the same course?
  • Characterize some of the key tasks:
    • Some types of work are (or should be) routine. To do them well, an organization needs a clearly defined, repeatable process that is easy to delegate.
    • Other types are cognitively complex. Their broader goals might stay stable, but the details constantly change as circumstances and threats evolve.
    • Still other types center on creating, maintaining, and extending technology, such as tools to support testing.
  • Publish this overview (survey / clustering / characterization).
  • Apply for instructional development grants. We (CSTER) intend to apply for funding. We hope to collaborate with other institutions and practitioners, and to foster other collaborations that lead to proposals independent of CSTER.

UNDERLYING VIEWPOINT

The Workshop on Teaching Software Testing is concerned with the practical aspects of teaching university-caliber software testing courses to academic or commercial students.

We see software testing as a cognitively complex activity, an active search for quality-related information rather than a tedious collection of routines. We see it as more practical than theoretical, more creative than prescriptive, more focused on investigation than assurance (you can’t prove a system is secure by testing it), more technical than managerial, and more interested in exploring risks than defining processes.

We think testing is too broad an area to cover fully in a single course. A course that tries to teach too much will be too superficial to have any real value. Rather than designing a single course to serve as a comprehensive model, we think the field is better served with several designs for several courses.

We are particularly interested in online courses that promote deeper knowledge and skill. You can see our work on software testing at www.testingeducation.org/BBST. Online courses and courseware, especially Creative Commons courseware, make it possible for students to learn multiple perspectives and to study new topics and learn new skills on a schedule that works for them.

WHO SHOULD ATTEND

We invite participation by:

  • academics who have experience teaching courses on testing or security
  • practitioners who teach professional seminars on software testing or security
  • one or two graduate students
  • a few seasoned teachers or testers who are beginning to build their strengths in teaching software testing or security.

There is no fee to attend this meeting. You pay for your seat through the value of your participation. Participation in the workshop is by invitation based on a proposal. We expect to accept 15 participants with an absolute upper bound of 22.

TO APPLY TO ATTEND

Send an email to Cem Kaner (kaner@cs.fit.edu) by December 20, 2011.

Your email should describe your background and interest in teaching software testing or security. What skills or knowledge do you bring to the meeting that would be of interest to the other participants?

If you are willing to make a presentation, send an abstract. Along with describing the proposed concepts and/or activities, tell us how long the presentation will take, any special equipment needs, and what written materials you will provide. Along with traditional presentations, we will gladly consider proposed activities and interactive demonstrations.

We will begin reviewing proposals on December 1. We encourage early submissions. It is unlikely but possible that we will have accepted a full set of presentation proposals by December 20.

Proposals should be between two and four pages long, in PDF format. We will post accepted proposals to www.wtst.org.

We review proposals in terms of their contribution to knowledge of HOW TO TEACH software testing and security. We will not accept proposals that present a theoretical advance with weak ties to teaching and application. Presentations that reiterate materials you have presented elsewhere might be welcome, but it is imperative that you identify the publication history of such work.

By submitting your proposal, you agree that, if we accept your proposal, you will submit a scholarly paper for discussion at the workshop by January 15, 2012. Workshop papers may be of any length and follow any standard scholarly style. We will post these at www.wtst.org as they are received, for workshop participants to review before the workshop.

HOW THE MEETING WILL WORK

WTST is a workshop, not a typical conference.

  • We will have a few presentations, but the intent of these is to drive discussion rather than to create an archivable publication.
    • We are glad to start from already-published papers, if they are presented by the author and they would serve as a strong focus for valuable discussion.
    • We are glad to work from slides, mindmaps, or diagrams.
  • Some of our sessions will be activities, such as brainstorming sessions, collaborative searching for information, creating examples, evaluating ideas or work products, and lightning presentations (presentations limited to 5 minutes, plus discussion).
  • In a typical presentation, the presenter speaks 10 to 90 minutes, followed by discussion. There is no fixed time for discussion. Past sessions’ discussions have run from 1 minute to 4 hours. During the discussion, a participant might ask the presenter simple or detailed questions, describe consistent or contrary experiences or data, present a different approach to the same problem, or (respectfully and collegially) argue with the presenter.

Our agenda will evolve during the workshop. If we start making significant progress on something, we are likely to stick with it even if that means cutting or timeboxing some other activities or presentations.

Presenters must provide materials that they share with the workshop under a Creative Commons license, allowing reuse by other teachers. Such materials will be posted at www.wtst.org.

HOSTS

The hosts of the meeting are:

  • Cem Kaner (www.kaner.com and www.testingeducation.org)
  • Rebecca Fiedler (www.beckyfiedler.com)
  • Richard Ford (harris-institute.fit.edu)
  • Michael D. Kelly (michaeldkelly.com) (Meeting Facilitator)

LOCATION AND TRAVEL INFORMATION

We will hold the meetings at

Harris Center for Assured Information, Room 327

Florida Tech, 150 W University Blvd,

Melbourne, FL

Airport

Melbourne International Airport is 3 miles from the hotel and the meeting site. It is served by Delta Airlines and US Airways. Alternatively, the Orlando International Airport offers more flights and more non-stops but is 65 miles from the meeting location.

Hotel

We recommend the Courtyard by Marriott – West Melbourne, located at 2101 W. New Haven Avenue in Melbourne, FL.

Please call 1-800-321-2211 or 321-724-6400 to book your room by January 2. Be sure to ask for the special WTST rate of $89 per night. Tax is an additional 11%.

All reservations must be guaranteed with a credit card by January 2, 2012 at 6:00 pm. If rooms are not reserved, they will be released for general sale. After that date, reservations can be made only on an availability basis.

For additional hotel information, please visit www.wtst.org or the hotel website at www.marriott.com/hotels/travel/mlbch-courtyard-melbourne-west/

OUR INTELLECTUAL PROPERTY AGREEMENT

We expect to publish some outcomes of this meeting. Each of us will probably have our own take on what was learned. Participants (all people in the room) agree to the following:

  • Any of us can publish the results as we see them. None of us is the official reporter of the meeting unless we decide at the meeting that we want a reporter.
  • Any materials initially presented at the meeting or developed at the meeting may be posted to any of our web sites or quoted in any other of our publications, without further permission. That is, if I write a paper, you can put it on your web site. If you write a paper, I can put it on my web site. If we make flipchart notes, those can go up on the web sites too. None of us has exclusive control over this material. Restrictions of rights must be identified on the paper itself.
    • NOTE: Some of the papers circulated are already published or are headed to another publisher. If you want to limit republication of a paper or slide set, please note the rights you are reserving on your document. The shared license to republish is our default rule; it applies in the absence of an asserted restriction.
  • The usual rules of attribution apply. If you write a paper or develop an idea or method, anyone who quotes or summarizes your work should attribute it to you. However, many ideas will develop in discussion and will be hard (and not necessary) to attribute to one person.
  • Any publication of material from this meeting will list all attendees, as well as the hosting organization, as contributors to the ideas published.
  • When possible, articles should be circulated to WTST 2012 attendees before being published. At a minimum, notification of publication will be circulated.
  • Any attendee may request that his or her name be removed from the list of attendees identified on a specific paper.
  • If you have information that you consider proprietary or that otherwise shouldn’t be disclosed under these publication rules, please do not reveal it to the group.

ACKNOWLEDGEMENTS

Support for this meeting comes from the Harris Institute for Assured Information at the Florida Institute of Technology, and Kaner, Fiedler & Associates, LLC.

Funding for WTST 1-5 came primarily from the National Science Foundation, under grant EIA-0113539 ITR/SY+PE “Improving the Education of Software Testers.” Partial funding for the Advisory Board meetings in WTST 6-10 came from the National Science Foundation, under grant CCLI-0717613 “Adaptation & Implementation of an Activity-Based Online or Hybrid Course in Software Testing”.

Opinions expressed at WTST or published in connection with WTST do not necessarily reflect the views of NSF.

WTST is a peer conference in the tradition of the Los Altos Workshops on Software Testing.

 


Please update your links to this blog

November 20th, 2011

A few new posts will be coming soon, and I don’t want you to miss them.

I moved my blog to kaner.com over a year ago. If you’re still linking to the old site, please update your links.

Thanks!


A welcome addition to the scholarship of exploratory software testing

November 16th, 2011

Juha Itkonen will be defending his dissertation on “Empirical Studies on Exploratory Software Testing” this Friday. I haven’t read the entire document, but what I have read looks very interesting.

Juha has been studying real-world approaches to software testing for about a decade (maybe longer; when I met him almost a decade ago, his knowledge of the field was quite sophisticated). I’m delighted to see this quality of academic work and wish him well in his final oral exam.

For a list of soon-to-be-Dr. Itkonen’s publications, see https://wiki.aalto.fi/display/~jitkonen@aalto.fi/Full+list+of+publications.


Emphasis & Objectives of the Test Design Course

October 8th, 2011

Becky and I are getting closer to rolling out Test Design. Here’s our current summary of the course:

Learning Objectives for Test Design

This is an introductory survey of test design. The course introduces students to:

  • Many (over 100) test techniques at a superficial level (what the technique is).
  • A detailed level of familiarity with a few techniques:
    • function testing
    • testing tours
    • risk-based testing
    • specification-based testing
    • scenario testing
    • domain testing
    • combination testing.
  • Ways to compare strengths of different techniques and select complementary techniques to form an effective testing strategy
  • Using the Heuristic Test Strategy Model for specification analysis and risk analysis
  • Using concept mapping tools for test planning.

I’m still spending about 6 hours per night on video edits, but our most important work is on the assessments. To a very large degree, my course designs are driven by my assessments. That’s because there’s such a strong conviction in the education community, which I share, that students learn much more from the assessments (from all the activities that demand that they generate stuff and get feedback on it) than from lectures or informal discussions. The lectures and slides are an enabling backdrop for the students’ activities, rather than the core of the course.

In terms of design decisions, deciding what I will hold my students accountable for knowing requires me to decide what I will hold myself accountable for teaching well.

If you’re intrigued by that way of thinking about course design, check out:

  • Angelo & Cross (1993, 2nd Ed.), Classroom Assessment Techniques: A Handbook for College Teachers. Jossey-Bass.
  • Wiggins & McTighe (2005, 2nd Ed.), Understanding by Design. Prentice Hall.

I tested the course’s two main assignments in university classrooms several times before starting on the course slides (and wrote 104 first-draft multiple-guess questions and maybe 200 essay questions). But now that the course content is almost complete, we’re revisiting (and of course rewriting) these materials. In the process, we’ve been gaining perspective.

I think the most striking feature of the new course is its emphasis on content.

Let me draw the contrast with a chart that compares the BBST courses (Foundations, Bug Advocacy, and Test Design) and some other courses still on the drawing boards:

[Chart: relative emphasis of each BBST course, and of some planned courses, on the skill and knowledge categories defined below]

A few definitions:

  • Course Skills: How to be an effective student. Working effectively in online courses. Taking tests. Managing your time.
  • Social Skills: Working together in groups. Peer reviews. Using collaboration tools (e.g. wikis).
  • Learning Skills: How to gather, understand, organize and be able to apply new information. Using lectures, slides, and readings effectively. Searching for supplementary information. Using these materials to form and defend your own opinion.
  • Testing Knowledge: Definitions. Facts and concepts of testing. Structures for organizing testing knowledge.
  • Testing Skills: How to actually do things. Getting better (through practice and feedback) at actually doing them.
  • Computing Fundamentals: Facts and concepts of computer science and computing-relevant discrete mathematics.

As we designed the early courses, Becky Fiedler and I placed a high emphasis on course skills and learning skills. Students needed to (re)learn how to get value from online video instruction, how to take tests, how to give peer-to-peer feedback, etc.

The second course, Bug Advocacy, emphasizes specific testing skills, but the specific skills are the ones involved in bug investigation and reporting. Even though these critical thinking, research, and communication skills have strong application to testing, they are foundational for knowledge-related work.

Test Design is much more about the content (testing knowledge). We survey (depending on how you count) 70 to 150 test techniques. We look for ways to compare and contrast them. We consider how to organize projects around combinations of a few techniques that complement each other (make up for each other’s weaknesses and blindnesses). The learning-skills component is active reading. This is certainly useful in general, but here its context and application is specification analysis.

Test Design is more like the traditional software testing course firehose: way too much material in way too little time, with lots of reference material to help students explore the underemphasized parts of the course when they need them on the job.

The difference is that we are relying on the students’ improved learning skills. The assignments are challenging. The labs are still works-in-progress and might not be polished until the third iteration of the course, but labs-plus-assignments bring home a bunch of lessons.

Whether the students’ skills are advanced enough to process over 500 slides efficiently, integrate material across sections, integrate required readings, and apply them to software — all within the course’s 4-week timeframe — remains to be seen.


Learning Objectives of the BBST Courses

September 19th, 2011

As I finish up the post-production and assessment-design for the Test Design course, I’m writing these articles as a set of retrospectives on the instructional design of the series.

For me, this is a transition point. The planned BBST series is complete, with lessons to harvest as we create courseware on software metrics, development of skills with specific test techniques, computer-related law/ethics (in progress), cybersecurity, research methods and instrumentation applied to quantitative finance, qualitative research methods, and analysis of requirements.

Instructional Design

I think course design is a multidimensional challenge, focused around seven core questions:

    1. Content: What information, or types of information do I want the students to learn?
    2. Skills: What skills do I want the students to develop?
    3. Level of Learning: At what level of depth do I want students to learn this material?
    4. Learning activities: How will the course’s activities (including assessments such as assignments and exams) support the students’ learning?
    5. Instructional Technologies: What technologies will I use to support the course?
    6. Assessment: How will I assess the course? How will I find out what the students have learned, and at what level?
    7. Improvement: How will I use the assessment results to guide my improvement of the course?

This collection of articles will probably skip around in these questions as I take inventory of my last 7 years of notes on online and hybrid course development.

Objectives of the BBST Courses

The objectives of the BBST courses have changed over time.

Courses differ in more than content. They differ in the other things you hope students learn along with the content.

The BBST courses include a variety of types of assessment: quizzes, labs, assignments and exams. For instructional designers, the advantage of assessments is that we can find out what the students know (and don’t know) and what they can apply.


The BBST courses have gone through several iterations. Becky Fiedler and I used the performance data to guide their evolution.

Based on what we learned, we place a higher emphasis in the early courses on course skills and learning skills and a greater emphasis in the later courses on testing skills.

A few definitions:

  • Course Skills: How to be an effective student. Working effectively in online courses. Taking tests. Managing your time.
  • Social Skills: Working together in groups. Peer reviews. Using collaboration tools (e.g. wikis).
  • Learning Skills: How to gather, understand, organize and be able to apply new information. Using lectures, slides, and readings effectively. Searching for supplementary information. Using these materials to form and defend your own opinion.
  • Testing Knowledge: Definitions. Facts and concepts of testing. Structures for organizing testing knowledge.
  • Testing Skills: How to actually do things. Getting better (through practice and feedback) at actually doing them.
  • Computing Fundamentals: Facts and concepts of computer science and computing-relevant discrete mathematics.

You can see the evolution of emphasis in the courses’ specific learning objectives.

Learning Objectives of the 3-Course BBST Set

  • Understand key testing challenges that demand thoughtful tradeoffs by test designers and managers.
  • Develop skills with several test techniques.
  • Choose effective techniques for a given objective under your constraints.
  • Improve the critical thinking and rapid learning skills that underlie good testing.
  • Communicate your findings effectively.
  • Work effectively online with remote collaborators.
  • Plan investments (in documentation, tools, and process improvement) to meet your actual needs.
  • Create work products that you can use in job interviews to demonstrate testing skill.

Learning Objectives for the First Course (Foundations)

This is the first of the BBST series. We address:

  • How to succeed in online classes
  • Fundamental concepts and definitions
  • Fundamental challenges in software testing

Improve academic skills

  • Work with online collaboration tools
    • Forums
    • Wikis
  • Improve students’ precision in reading
  • Create clear, well-structured communication
  • Provide (and accept) effective peer reviews
  • Cope calmly and effectively with formative assessments (such as tests designed to help students learn).

Learn about testing

  • Key challenges of testing
    • Information objectives drive the testing mission and strategy
    • Oracles are heuristic
    • Coverage is multidimensional
    • Complete testing is impossible
    • Measurement is important, but hard
  • Introduce you to:
    • Basic vocabulary of the field
    • Basic facts of data storage and manipulation in computing
    • Diversity of viewpoints
    • Viewpoints drive vocabulary

Learning Objectives for the Second Course (Bug Advocacy)

Bug reports are not just neutral technical reports. They are persuasive documents. The key goal of the bug report author is to provide high-quality information, well written, to help stakeholders make wise decisions about which bugs to fix.

Key aspects of the content of this course include:

  • Defining key concepts (such as software error, quality, and the bug processing workflow)
  • The scope of bug reporting (what to report as bugs, and what information to include)
  • Bug reporting as persuasive writing
  • Bug investigation to discover harsher failures and simpler replication conditions
  • Excuses and reasons for not fixing bugs
  • Making bugs reproducible
  • Lessons from the psychology of decision-making: bug-handling as a multiple-decision process dominated by heuristics and biases.
  • Style and structure of well-written reports

Our learning objectives include this content, plus improving your abilities / skills to:

  • Evaluate bug reports written by others
  • Revise / strengthen reports written by others
  • Write more persuasively (considering the interests and concerns of your audience)
  • Participate effectively in distributed, multinational workgroup projects that are slightly more complex than the one in Foundations

Learning Objectives for the Third Course (Test Design)

This is an introductory survey of test design. The course introduces students to:

  • Many (nearly 200) test techniques at a superficial level (what the technique is).
  • A detailed level of familiarity with a few techniques:
    • function testing
    • testing tours
    • risk-based testing
    • specification-based testing
    • scenario testing
    • domain testing
    • combination testing.
  • Ways to compare strengths of different techniques and select complementary techniques to form an effective testing strategy
  • Using the Heuristic Test Strategy Model for specification analysis and risk analysis
  • Using concept mapping tools for test planning.

 


A New Course on Test Design: The Bibliography

September 13th, 2011

Back in 2004, I started developing course videos on software testing and computer-related law/ethics. Originally, these were for my courses at Florida Tech, but I published them under a Creative Commons license so that people could incorporate the materials in their own courses.

Soon after that, a group of us (mainly, I think, Scott Barber, Doug Hoffman, Mike Kelly, Pat McGee, Hung Nguyen, Andy Tinkham, and Ben Simo) started planning the repurposing of the academic course videos for professional development. I put together some (failing) prototypes and Becky Fiedler took over the instructional design.

  • We published the first version of BBST-Foundations and taught the courses through AST (Association for Software Testing). It had a lot of rough edges, but people liked it a lot.
  • So Becky and I created course #2, Bug Advocacy, with a lot of help from Scott Barber (and many other colleagues). This was new material, a more professional effort than Foundations, but it took a lot of time.

That took us to a fork in the road.

  • I was working with students on developing skills with specific techniques (I worked with Giri Vijayaraghavan and Ajay Jha on risk-based testing; Sowmya Padmanabhan on domain testing; and several students with not-quite-successful efforts on scenario testing). Sowmya and I proved (not that we were trying to) that developing students’ testing skills was more complex than I’d been thinking. So, Becky and I were rethinking our skills-development-course designs.
  • On the other hand, AST’s Board wanted to pull together a “complete” introductory series in black box testing. Ultimately, we went that way.

The goal was a three-part series:

  1. A reworked Foundations that fixed many of the weaknesses of Version 1. We completed that one about a year ago (Becky and Doug Hoffman were my main co-creators, with a lot of design guidance from Scott Barber).
  2. Bug Advocacy, and
  3. a new course in Test Design.

Test Design is (finally) almost done (many thanks to Michael Bolton and Doug Hoffman). I’ll publish the lectures as we finish post-production on the videos. Lecture 1 should be up around Saturday.

Test Design is a survey course. We cover a lot of ground. And we rely heavily on references, because we sure don’t know everything there is to know about all these techniques.

To support the release of the videos, I’m publishing our references now. (The final course slides will have the references too, but those won’t be done until we complete editing the last video.) A few notes before the list:

  • As always, it has been tremendously valuable reading books and papers suggested by colleagues and rereading stuff I’ve read before. A lot of careful thinking has gone into the development and analysis of these techniques.
  • As always, I’ve learned a lot from people whose views differ strongly from my own. Looking for the correctness in their views–what makes them right, within their perspective and analysis, even if I disagree with that perspective–is something I see as a basic professional skill.
  • And as always, I’ve not only learned new things: I’ve discovered that several things I thought I knew were outdated or wrong. I can be confident that the videos are packed with errors, but far fewer than there would have been a year ago, and none that I know about now.

So… here’s the reference list. Video editing will take a few weeks to complete. If you think we should include some other sources, please let me know. I’ll read them and, if appropriate, I’ll gladly include them in the list.

Active reading (see also Specification-based testing and Concept mapping)

  • Adler, M. (1940). How to mark a book. academics.keene.edu/tmendham/documents/AdlerMortimerHowToMarkABook_20060802.pdf
  • Adler, M., & Van Doren, C. (1972). How to Read a Book. Touchstone.
  • Beacon Learning Center. (Undated). Just Read Now. www.justreadnow.com/strategies/active.htm
  • Active reading (summarizing Bean, J. Engaging Ideas). (Undated). titan.iwu.edu/~writcent/Active_Reading.htm
  • Gause, D.C., & Weinberg, G.M. (1989). Exploring Requirements: Quality Before Design. Dorset House.
  • Hurley, W.D. (1989). A generative taxonomy of application domains based on interaction semantics. dl.acm.org/citation.cfm?id=75960. (See also Jha’s and Vijayaraghavan’s writing on generative taxonomy in the section on Failure Mode Analysis.)
  • PLAN: Predict/Locate/Add/Note. (Undated). www.somers.k12.ny.us/intranet/reading/PLAN.html
  • MindTools. (Undated). Essential skills for an excellent career. www.mindtools.com
  • Penn State University Learning Centers. (Undated). Active Reading. istudy.psu.edu/FirstYearModules/Reading/Materials.html
  • Weill, P. (Undated). Reading Strategies for Content Areas: Part 1 After Reading. www.douglasesd.k12.or.us/LinkClick.aspx?fileticket=6FrvrocjBnk%3d&tabid=1573

All-pairs testing

See www.pairwise.org/ for more references generally and www.pairwise.org/tools.asp for a list of tools.

  • Bach, J., & Schroeder, P. (2004). Pairwise Testing: A Best Practice that Isn’t. Proceedings of the 22nd Pacific Northwest Software Quality Conference, 180–196. www.testingeducation.org/wtst5/PairwisePNSQC2004.pdf. For Bach’s tool, see www.satisfice.com/tools/pairs.zip
  • Bolton, M. (2007). Pairwise testing. www.developsense.com/pairwiseTesting.html
  • Chateauneuf, M. (2000). Covering Arrays. Ph.D. Dissertation (Mathematics). Michigan Technological University.
  • Cohen, D. M., Dalal, S. R., Fredman, M. L., & Patton, G. C. (1997). The AETG system: An approach to testing based on combinatorial design. IEEE Transactions on Software Engineering, 23(7). aetgweb.argreenhouse.com/papers/1997-tse.html. For more references, see aetgweb.argreenhouse.com/papers.shtml
  • Czerwonka, J. (2008). Pairwise testing in the real world: Practical extensions to test-case scenarios. msdn.microsoft.com/en-us/library/cc150619.aspx. Microsoft’s PICT tool is at download.microsoft.com/download/f/5/5/f55484df-8494-48fa-8dbd-8c6f76cc014b/pict33.msi
  • Jorgensen, P. (2008, 3rd Ed.). Software Testing: A Craftsman’s Approach. Auerbach Publications.
  • Kuhn, D. R. & Okun, V. (2006). Pseudo-exhaustive testing for software. 30th Annual IEEE/NASA Software Engineering Workshop. csrc.nist.gov/acts/PID258305.pdf
  • Zimmerer, P. (2004). Combinatorial testing experiences, tools, and solutions. International Conference on Software Testing, Analysis & Review (STAR West). www.stickyminds.com
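
The references above focus on tools and theory. As a rough illustration of the underlying idea (rather than any particular tool’s algorithm), here is a minimal, brute-force sketch in Python: choose a small set of test configurations so that every pair of values from every two parameters appears together in at least one test. The greedy strategy and the example parameters below are my own simplification; real generators such as PICT or AETG use far more efficient covering-array constructions.

```python
# Minimal all-pairs (pairwise) sketch: greedy, brute-force, for small spaces only.
from itertools import combinations, product

def allpairs(parameters):
    """parameters: dict of parameter name -> list of values.
    Returns test cases (dicts) such that every pair of values from every two
    parameters appears together in at least one test case."""
    names = list(parameters)

    # Enumerate every value pair that must be covered.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va, vb in product(parameters[a], parameters[b])
    }

    tests = []
    while uncovered:
        # Greedy choice: the full combination that covers the most still-uncovered
        # pairs. (Exponential in the number of parameters; fine as an illustration.)
        best_case, best_hits = None, -1
        for values in product(*(parameters[n] for n in names)):
            case = dict(zip(names, values))
            hits = sum(
                ((a, case[a]), (b, case[b])) in uncovered
                for a, b in combinations(names, 2)
            )
            if hits > best_hits:
                best_case, best_hits = case, hits
        tests.append(best_case)
        uncovered -= {
            ((a, best_case[a]), (b, best_case[b]))
            for a, b in combinations(names, 2)
        }
    return tests

# Hypothetical example parameters, not taken from any of the cited papers.
params = {
    "os": ["Windows", "Linux", "OS X"],
    "browser": ["IE", "Firefox", "Chrome"],
    "connection": ["dial-up", "broadband"],
}
suite = allpairs(params)
print(len(suite), "tests instead of", 3 * 3 * 2, "full combinations")
for test in suite:
    print(test)
```

For these example parameters, the greedy sketch typically finds a suite of about nine tests, compared with the 18 rows of the full Cartesian product, while still pairing every value of every parameter with every value of every other parameter at least once.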

Alpha testing

See references on tests by programmers of their own code, or on relatively early testing by development groups. For a good overview from the viewpoint of the test group, see Schultz, C.P., Bryant, R., & Langdell, T. (2005). Game Testing All in One. Thomson Press.

Ambiguity analysis (See also specification-based testing)

  • Bender, R. (Undated). The Ambiguity Review Process, benderrbt.com/Ambiguityprocess.pdf
  • Berry, D.M., Kamisties, E., & Krieger, M.M. (2003) From contract drafting to software specification: Linguistic sources of ambiguity. se.uwaterloo.ca/~dberry/handbook/ambiguityHandbook.pdf
  • Fabbrini, F., Fusani, M., Gnesi, S., & Lami, G. (2000) Quality evaluation of software requirements specifications. Thirteenth International Software & Internet Quality Week. citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.4333&rep=rep1&type=pdf
  • Spector, C.C. (1997) Saying One Thing, Meaning Another, Eau Claire, WI: Thinking Publications (reprint at www.superduperinc.com/products/view.aspx?pid=tpx12901)
  • Spector, C.C. (2001). As Far As Words Go: Activities for Understanding Ambiguous Language and Humor, Baltimore: Brookes Publishing.

Best representative testing (See domain testing)

Beta testing

  • Bolton, M. (2001). Effective beta testing. www.developsense.com/EffectiveBetaTesting.html
  • Fine, M.R. (2002). Beta Testing for Better Software. Wiley.
  • Schultz, C.P., Bryant, R., & Langdell, T. (2005). Game Testing All in One. Thomson Press.
  • Spolsky, J. (2004). Top twelve tips for running a beta test. www.joelonsoftware.com/articles/BetaTest.html
  • Wilson, R. (2005). Expert user validation testing. Unpublished manuscript.

Boundary testing (See domain testing)

Bug bashes

  • Berkun, S. (2008). How to run a bug bash. www.scottberkun.com/blog/2008/how-to-run-a-bug-bash/
  • Powell, C. (2009). Bug bash. blog.abakas.com/2009/01/bug-bash.html
  • Schroeder, P.J. & Rothe, D. (2007). Lessons learned at the stomp. Conference of the Association for Software Testing, www.associationforsoftwaretesting.org/?dl_name=Lessons_Learned_at_the_Stomp.pdf

Build verification

  • Guckenheimer, S. & Perez, J. (2006). Software Engineering with Microsoft Visual Studio Team System. Addison Wesley.
  • Page, A., Johnston, K., & Rollison, B.J. (2009). How We Test Software at Microsoft. Microsoft Press.
  • Raj, S. (2009). Maximize your investment in automation tools. Software Testing Analysis & Review. www.stickyminds.com

Calculations

Note: There is a significant, relevant field: Numerical Analysis. The list here merely points you to a few sources I have personally found helpful, not necessarily to the top references in the field.

  • Arsham, H. (2010) Solving system of linear equations with application to matrix inversion. home.ubalt.edu/ntsbarsh/business-stat/otherapplets/SysEq.htm
  • Boisvert, R.F., Pozo, R., Remington, K., Barrett, R.F., & Dongarra, J.J. (1997) Matrix Market: A web resource for test matrix collections. In Boisvert, R.F. (1997) (Ed.) Quality of Numerical Software: Assessment and Enhancement. Chapman & Hall.
  • Einarsson, B. (2005). Accuracy and Reliability in Scientific Computing. Society for Industrial and Applied Mathematics (SIAM).
  • Gregory, R.T. & Karney, D.L. (1969). A Collection of Matrices for Testing Computational Algorithms. Wiley.
  • Kaw, A.K. (2008), Introduction to Matrix Algebra. Available from www.lulu.com/browse/preview.php?fCID=2143570. Chapter 9, Adequacy of Solutions, numericalmethods.eng.usf.edu/mws/gen/04sle/mws_gen_sle_spe_adequacy.pdf
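
These sources explain why matrices with known properties make good test inputs for numerical code. As a small, hedged sketch of that idea (Python with NumPy is my choice here, not something prescribed by the references), the snippet below treats a linear-equation solver as the system under test, checks it against a system with a known solution, and then against an ill-conditioned Hilbert matrix, where a tiny residual can hide a much larger error in the computed solution.

```python
# Sketch: using systems with known solutions as oracles for a linear solver.
# np.linalg.solve stands in for the "solver under test".
import numpy as np

def check_solver(solve, n=8, tol=1e-8):
    rng = np.random.default_rng(0)

    # Easy case: a diagonally dominant (well-conditioned) system with a known
    # solution. Both the forward error and the residual should be tiny.
    A = rng.standard_normal((n, n)) + n * np.eye(n)
    x_true = rng.standard_normal(n)
    b = A @ x_true
    x = solve(A, b)
    assert np.linalg.norm(x - x_true) < tol, "forward error too large"
    assert np.linalg.norm(A @ x - b) < tol, "residual too large"

    # Hard case: the Hilbert matrix, a classic ill-conditioned test input.
    # Here a small residual can coexist with a much larger forward error.
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    x_true = np.ones(n)
    b = H @ x_true
    x = solve(H, b)
    print("condition number:", np.linalg.cond(H))
    print("residual:        ", np.linalg.norm(H @ x - b))
    print("forward error:   ", np.linalg.norm(x - x_true))

check_solver(np.linalg.solve)
```

Running it typically shows the contrast the numerical-analysis literature warns about: on the Hilbert system the residual stays near machine precision while the forward error is many orders of magnitude larger.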

Combinatorial testing (See All-pairs testing)

Concept mapping

  • Hyerle, D.N. (2008, 2nd Ed.). Visual Tools for Transforming Information into Knowledge, Corwin.
  • Margulies, N., & Maal, N. (2001, 2nd Ed.) Mapping Inner Space: Learning and Teaching Visual Mapping. Corwin.
  • McMillan, D. (2010). Tales from the trenches: Lean test case design. www.bettertesting.co.uk/content/?p=253
  • McMillan, D. (2011). Mind Mapping 101. www.bettertesting.co.uk/content/?p=956
  • Moon, B.M., Hoffman, R.R., Novak, J.D., & Canas, A.J. (Eds., 2011). Applied Concept Mapping: Capturing, Analyzing, and Organizing Knowledge. CRC Press.
  • Nast, J. (2006). Idea Mapping: How to A