Wednesday, 18 May 2011

Agile Testing

Historically, software testing has been about finding where the software breaks. In fact, when we talk about the qualities of a good software tester, we quote anecdotes about how "she was able to break my program in 3 minutes" or "he found 27 bugs when running on Linux with Apache". In other words, software testing was about finding bugs, and software testers were rewarded for, and took satisfaction from, finding them.

Agile testing, by contrast, is more about keeping bugs out than finding the ones that are already in there. The (automated) tests written in agile testing are more about showing that the software works the way we expect than about finding where it breaks.

This doesn’t mean that agile testers should not attempt to find bugs nor that pre-agile testers never write tests to show what works. It means agile thinking has introduced a change in emphasis from “fault-finding” to “working software”.

In pre-agile days, developers would bang away on the code until they considered it done. Then they would throw it over the wall to QA to find the bugs. QA's first task would be to find the places where it would break. They'd submit a whole bunch of bugs, which the developers would fix, and the cycle would repeat until QA couldn't find any more.

Software testing now happens closer to the developer in both space and time. Testers design tests to show that the software works as expected, and developers and testers work together in one room to expand the set of cases that work. The goal is that bugs never get introduced in the first place.

In the old days the conversation might have gone something like this:

=========================================================
Software Developer: The baby has been delivered.
Software Tester: Your baby is early.
Software Developer: Everything works.
Software Tester: C is broken.
Software Developer: Fixed C.
Software Tester: E doesn't work with X.
Software Developer: Fixed E with X.
...and so on until the tester can't find any more bugs.
=========================================================

In an agile environment, it would go more like this:

=========================================================
Software Developer: A is done.
Software Tester: A works.
Software Developer: B is done.
Software Tester: B works.
...and so on until every case works.
=========================================================

Nature of Agile Testing:
The biggest difference between agile projects and most ‘traditional’ software development projects is the concept of test-driven development. With agile testing, every chunk of code is covered by unit tests, which must all pass all the time. The absence of unit-level and regression bugs means that testers actually get to focus on their job: making sure the code does what the customer wanted. The acceptance tests define the level of quality the customer has specified (and paid for!).
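The test-first rhythm can be sketched in plain Java. `ShoppingCart` and its behavior are hypothetical examples invented for illustration, and the bare assertions stand in for what would normally be JUnit test methods (run with `java -ea` to enable them):

```java
// TDD-style sketch: the assertions below were (conceptually) written first,
// and ShoppingCart was then written to make them pass.
// ShoppingCart is a hypothetical example class, not from any real project.
class ShoppingCart {
    private int totalCents = 0;

    void add(int priceCents) {
        totalCents += priceCents;
    }

    int totalCents() {
        return totalCents;
    }
}

public class ShoppingCartTest {
    public static void main(String[] args) {
        // Test written before the production code: a new cart totals zero.
        ShoppingCart cart = new ShoppingCart();
        assert cart.totalCents() == 0 : "new cart should be empty";

        // Adding two items accumulates the total.
        cart.add(250);
        cart.add(199);
        assert cart.totalCents() == 449 : "totals should accumulate";

        System.out.println("all tests pass");
    }
}
```

In real TDD the first assertion would be written, watched to fail, and only then would `ShoppingCart` be implemented to make it pass.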

Testers who are new to agile testing should keep in mind the agile values: communication, simplicity, feedback and courage.

Difference between Traditional Software Testing and Agile Testing:

Agile testing differs in many ways from ‘traditional’ software testing. The biggest difference is that on an agile project, the entire development team takes responsibility for quality. This means the whole team is responsible for all software testing tasks, including acceptance test automation. When software testers and programmers work together, the approaches to test automation can be pretty creative. Testing is integrated into software development, and having everyone in one room speeds up communication by an astonishing amount. Questions that might otherwise take a couple of days to be answered over email are answered within a couple of minutes, and hours of work can be saved simply by overhearing a conversation in the team room.

On traditional projects, folks with Quality somewhere in their title (Quality Assurance, Quality Engineers, et al) perform Independent Verification and Validation (IV&V) activities to assess the quality of the system. Often these teams also review design artifacts. Sometimes they also have a hand in defining and/or enforcing the process by which the software is made.

Agile project teams generally reject the notion that they need an independent group to assess their work products or enforce their process. They value the information that testing provides and they value testing activities highly. Indeed they value testing so much, they practice Test Driven Development (TDD), writing and executing test code before writing the production code to pass the tests. However, even though agile teams value testing, they don't always value independent testers. And they're particularly allergic to the auditing or policing aspects of heavyweight, formal QA.

So how can testers make themselves useful on a team that does not see much use in traditional, formal QA methodologies? Here's what they can do:

1. Supporting Programmer Testing (Technology facing):

When supporting programmer testing, testers help the programmers create the software. They sit with the programmers and work in close synergy with them. If the programmers are practicing agile, it's a given that they have an extensive set of unit tests. The software tester's role is not to do the programmers' unit testing for them. Instead, most of their work involves manual exploratory testing to discover important information about the software that the unit tests failed to reveal.

In order to do this, they have to:

- Get and build the latest source code.
- Run all the unit tests to verify they are starting from a "known good" place.
- Run the application (usually locally from the IDE).
- Minimize documentation: test documentation can account for a large percentage of the test effort. An informal poll among 135 testers across 57 organizations revealed that testers spend about one third of their time just documenting test cases.
- Occasionally add to the automated unit or acceptance test suites.
- Ask “what if” questions of the programmers in the planning game. For example, “What if the migrated data has null values?” Analyze risks and provide information early.
- Offer information about external dependencies or requirements that the team might not otherwise know about.

Unlike unit tests, which use mocks and stubs to isolate the code under test, this exploratory testing is as end-to-end as possible. As a result, testers are usually able to find issues and risks that the unit tests don't reveal, and to find them sooner than they would if they waited for the customer to try things end-to-end. (Early feedback is good.) In doing so, testers not only help the programmers improve the software, they also help them improve their unit tests.
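The mocks-and-stubs isolation contrasted above with end-to-end exploration can be sketched with a hand-rolled stub. `PaymentGateway` and `OrderService` are hypothetical names invented for illustration (run with `java -ea`):

```java
// Sketch of isolating code under test with a stub:
// the unit test never touches a real payment system.
// PaymentGateway and OrderService are hypothetical, for illustration only.
interface PaymentGateway {
    boolean charge(int amountCents);
}

class OrderService {
    private final PaymentGateway gateway;

    OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    String placeOrder(int amountCents) {
        return gateway.charge(amountCents) ? "CONFIRMED" : "DECLINED";
    }
}

public class OrderServiceTest {
    public static void main(String[] args) {
        // Stub that always approves: exercises the success path in isolation.
        PaymentGateway approveAll = amount -> true;
        assert new OrderService(approveAll).placeOrder(500).equals("CONFIRMED");

        // Stub that always declines: exercises the failure path.
        PaymentGateway declineAll = amount -> false;
        assert new OrderService(declineAll).placeOrder(500).equals("DECLINED");

        System.out.println("all tests pass");
    }
}
```

Exploratory testing would instead run the whole system against a real (test) gateway, which is exactly why it surfaces integration issues these isolated tests cannot.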

To support programmer testing, you're going to have to be comfortable mucking about in source code. If you haven't coded in a long time (or ever) that means more than just learning Java or whatever language your organization uses. You'll also have to be comfortable:

- Working in the development environment the developers are using, whether that's Visual Studio, Eclipse, IntelliJ, or something else.
- Fetching and building the latest code from the source control system.
- Using the test frameworks the developers are using, such as JUnit or NUnit.
- Configuring your own system and setting up your own data, which may mean learning more about operating systems, networks, and databases.

In short, technology-facing testers become members of the development team. You'll need to grow your technical skills accordingly.

2. Supporting Customer Testing (Business Facing):

The word "Customer" is used in the sense: The set of people who represent the business-facing stakeholders on a project, and not referring to the customers who pay for the software. When supplementing customer testing, they support them by helping them define and execute acceptance tests. Specifically, they:

- Walk through the existing software with them, using an interview-style conversation to surface assumptions and expectations.
- Use the information from walkthroughs and other sources to design and articulate acceptance tests.
- Use a variety of analysis techniques to discover risks and implications of decisions.
- Help the customers and programmers define what "good enough" means for their context.
- Execute acceptance tests manually where needed.
- Automate acceptance tests (either by automating them themselves or by working with a programmer or existing automation engineer).
- Provide metrics or other high-level data as needed to help the Customer satisfy their management's need for numbers.
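An automated acceptance test reads best when phrased in the customer's own terms. Here is a minimal given/when/then sketch, assuming an invented "no overdrafts" business rule and a hypothetical `Account` class (run with `java -ea`):

```java
// Sketch of an automated acceptance test phrased in the customer's terms.
// The account-transfer rules here are invented for illustration.
class Account {
    private int balanceCents;

    Account(int openingBalanceCents) {
        this.balanceCents = openingBalanceCents;
    }

    boolean transferTo(Account other, int amountCents) {
        if (amountCents > balanceCents) {
            return false; // hypothetical business rule: no overdrafts
        }
        balanceCents -= amountCents;
        other.balanceCents += amountCents;
        return true;
    }

    int balanceCents() {
        return balanceCents;
    }
}

public class TransferAcceptanceTest {
    public static void main(String[] args) {
        // Given a customer with $10.00 in checking and an empty savings account
        Account checking = new Account(1000);
        Account savings = new Account(0);

        // When they transfer $4.00
        boolean ok = checking.transferTo(savings, 400);

        // Then the transfer succeeds and both balances reflect it
        assert ok;
        assert checking.balanceCents() == 600;
        assert savings.balanceCents() == 400;

        // And a transfer beyond the balance is refused, leaving balances unchanged
        assert !checking.transferTo(savings, 9999);
        assert checking.balanceCents() == 600;

        System.out.println("acceptance tests pass");
    }
}
```

The given/when/then comments are the part the customer reviews; tools like FIT served the same purpose with tables instead of code.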

In this second role, software testers spend as much of their time facilitating communication and clarifying expectations as they do executing tests.

To help business stakeholders articulate their needs, you're going to end up doing a whole lot of requirements extraction. Business-facing testing is as much about distilling requirements and testing assumptions as it is about testing software. There's an entire body of knowledge on requirements analysis, design analysis, and modeling that can help.

- Consider brushing up on UML.
- Learn how to be an effective interviewer and facilitator.
- Grow both your analysis skills and soft skills.
