Thursday 31 October 2013

If information is king, then coverage metrics must be the joker?!


Problem: Test coverage reporting can easily lead to a false sense of security.

One of my peers asked me to review an acceptance test summary report before it was sent to his customer. It was an excellent report, with loads of information and a nice management summary that suggested exactly what to do. The test coverage section did, however, catch my eye: it was a table showing the number of test cases per use case and their execution status.

The table looked something like this (simplified). Just looking at the % in the right column would suggest that everything is good…
 
Test coverage and execution status per use case:

Use Case          Test cases   Execution status                 % Run
UC1 – [Title]     17           12 Passed, 5 Failed              100%
UC2 – [Title]     11           11 Passed                        100%
UC3 – [Title]     14           12 Run, 2 No Run                 86%
Total             42           35 Passed, 5 Failed, 2 No Run    95%
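
To make the problem concrete, here is a quick sketch in Python using the simplified numbers from the table above; it shows how different the picture looks once you compute % Passed instead of % Run:

# Simplified numbers from the table above.
use_cases = {
    "UC1": {"total": 17, "passed": 12, "no_run": 0},
    "UC2": {"total": 11, "passed": 11, "no_run": 0},
    "UC3": {"total": 14, "passed": 12, "no_run": 2},
}

for name, uc in use_cases.items():
    pct_run = 100 * (uc["total"] - uc["no_run"]) / uc["total"]
    pct_passed = 100 * uc["passed"] / uc["total"]
    print(f"{name}: {pct_run:.0f}% run, {pct_passed:.0f}% passed")

# UC1 prints "100% run, 71% passed" -- the 100% in the report hides 5 failed cases.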

Solution: Be VERY careful when reporting on coverage, and make sure to explain the meaning.

The first problem when reporting coverage is setting the level of measurement. I like exact numbers like code coverage, but in some cases they are impossible to get. The report I reviewed covered acceptance testing of two deliveries from 3rd party vendors, making code-related metrics impossible to obtain.

Basing coverage on functionality is like navigating a minefield wearing flippers. It raises two problems: how do you measure whether a function is covered, and how do you measure whether alternative scenarios are sufficiently covered?

I would base my coverage measurements on the acceptance criteria for the user stories to see functional coverage. If the user stories are broken down into acceptance criteria, then the customer will have a very clear idea of which features have been tested. This means drilling down into the use case specifics and shifting the focus from run cases to passed cases.
 
Use Case / Acceptance criteria   Test cases   Execution status     % Passed
UC1 – [Title]
  AC 1                           8            7 Passed             88%
  AC 2                           5            3 Failed             0%
  AC 3                           4            1 Passed, 2 Failed   25%

There are shortcomings in reporting like this, but when the code is a black box you have to take what you can get. With that in mind, there are things that must be communicated as part of the report:
• Functional test coverage gives an indication of which features are done, in that they satisfy the acceptance criteria.
• Many test cases per acceptance criterion do not automatically mean better coverage than a few.
• This approach requires the acceptance criteria to be very crisp; a missing acceptance criterion matters a lot.
For each of these bullets, state the risks you see and what they mean for the project.
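
As a minimal sketch of how such per-criterion numbers could be produced, assuming your test results can be exported as simple (use case, acceptance criterion, status) records – the records below are illustrative, not the actual report data:

from collections import Counter, defaultdict

# Illustrative result records: one per test case, including "No Run" ones.
results = [
    ("UC1", "AC 1", "Passed"), ("UC1", "AC 1", "Failed"),
    ("UC1", "AC 2", "Failed"), ("UC1", "AC 2", "No Run"),
    # ... one record per test case in the run
]

per_criterion = defaultdict(Counter)
for use_case, criterion, status in results:
    per_criterion[(use_case, criterion)][status] += 1

for (use_case, criterion), counts in sorted(per_criterion.items()):
    total = sum(counts.values())              # all planned cases, run or not
    pct_passed = 100 * counts["Passed"] / total
    print(f"{use_case} / {criterion}: {total} cases, {pct_passed:.0f}% passed")

Note that the denominator is all planned cases, so cases that were never run drag % Passed down instead of silently disappearing.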

Nonetheless, if you decide to put coverage numbers into your reports, make sure to tell the reader what they mean. In my opinion you need both code and functional coverage numbers for a complete coverage report, but you can live with one if you are in a tight spot.

Happy testing!

/Nicolai

Friday 25 October 2013

Parallel testing as part of a platform upgrade

Problem: Testing an application after platform upgrades can be tricky.

We are about to start a project where an application is moving from an old platform to a new one, and this calls for testing. Unfortunately, this old application is not well documented, neither in requirements nor in test cases. This makes the test tricky, because the expected results cannot be determined from existing documentation.

Solution: Deploy parallel testing techniques!

We do in fact have the expected results in a well documented manner – we have a running system in production. The production deployment has been running for years, and no bugs are raised against it. The system is a number-cruncher based on a huge order database, making it perfect for some parallel testing. This is the premise for our test:

We assume that the results produced in production on the old platform are valid, and hence equal to the expected results of the test cases to be run against the application after deployment to the new platform. Our test cases are the functions that can be invoked in the production environment, and the input data is the datasets from the database.

This means that we need the following setup to run our test:
Two test environments, one running the old platform (same as in production) and one running the new platform (the production-to-be), both pointing at the same test data. We can use one database for input data, as the application performs calculations on data rather than changing it. This means that we will copy data from production and use that as the foundation for our test.

This is how we will create our test cases:
Reverse engineering of the production system. For each screen we will list all functions and break them down into steps. From production we get the test scenarios for each test case from business examples, and that will dictate what test data we need for the test. On top of that, there are some batch jobs and other ‘hidden’ functions that will require attention.

This is how we will run the test:
First we run a case in the old environment, then do the same in the new environment; if the results are the same, we move on to the next one.
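
A sketch of what that comparison loop could look like – assuming, purely for illustration, that both environments expose the calculation functions over HTTP against the shared test database (the hostnames, function name and order ids below are made up):

import json
from urllib.request import urlopen

# Hypothetical base URLs for the two environments, both pointing
# at the same copy of production data.
OLD_PLATFORM = "http://test-old.example.com"
NEW_PLATFORM = "http://test-new.example.com"

# Test cases: functions found by reverse engineering the production
# screens, paired with order ids picked from the copied database.
test_cases = [("calculate_totals", 1001), ("calculate_totals", 1002)]

def run_case(base_url, function, order_id):
    with urlopen(f"{base_url}/{function}?order={order_id}") as response:
        return json.load(response)

for function, order_id in test_cases:
    expected = run_case(OLD_PLATFORM, function, order_id)  # old platform acts as the oracle
    actual = run_case(NEW_PLATFORM, function, order_id)    # new platform under test
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"{verdict}: {function}(order={order_id})")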

The cool thing about doing it this way is that we now have documented test cases and a very good foundation for regression testing the application in the following releases.

Happy testing & Have a nice weekend!

/Nicolai

Friday 18 October 2013

Business-driven test vs. application

Problem: Users/business reps can have a hard time applying their business knowledge in a new system context.

Test sessions and workshops that involve users/business reps (acceptance, prototype etc.) rely heavily on the participants’ business knowledge and their ability to turn it into meaningful test cases and scenarios. Seeing (and understanding) an application for the first time, combined with a request to merge new impressions with business knowledge, can be a tough cookie for some.

Solution: Roleplay your way through the business scenarios.

I have had the pleasure of running quite a few test sessions involving real users, either with the purpose of writing test scenarios or of running exploratory tests against a release or prototype. Common to these sessions is that they involve clever people who know the business, but not necessarily anything about the new application they are about to test. To unlock the business knowledge in the system context, I have found that roleplaying with the business reps is very efficient.
 
 
This is how I would structure a business test workshop:
  • Welcome, meet & greet + expectations
  • Demo of application, showing a standard flow through the application
  • Go play session, where participants get some time with hands on the application
  • Roleplaying, explanation of the rules and concept
  • Roleplaying session 1
  • Recap of findings and recording of additional scenarios that might have been discovered.
  • Roleplaying session 2
  • Recap of findings & new scenarios
  • Repeat until all scenarios have been played out / recorded

The rules are simple
Everybody gets to play a role: business rep, customer, or any other applicable role. Participants who usually deal with the customers are excellent for the customer role, as their first-hand knowledge of customer requests will come in handy.
The test lead acts as gamemaster; he makes sure that the scope of the scenario is not creeping, keeps the test flowing, and records spin-off scenarios and defects for the recap session that follows the scenario.


Sessions consist of a scenario, defined by a headline and the roles involved. If you have use cases, start by playing those, but refrain from giving the steps to the users, as that will kill their creativity. A word of advice here – keep it simple, as scenario execution becomes time-consuming with complexity.

Preparation for a session is required, much like for any other test activity: systems, data, access, scenarios etc. need to be planned up front. If you are running this without a working prototype, you need a mockup of the central parts of the system in order to facilitate the roleplaying sessions.

A spin-off that you get from doing this: usability, or lack of same, will show instantly when the users get their hands on the application for the first time – make sure to take a lot of notes when users get stuck, as that is where usability bugs will be hiding.

Happy roleplaying & Have a nice weekend!

/Nicolai

Sunday 6 October 2013

We do not have time to…

Problem: There is not enough time to do everything by the book

One of my peers told me that his clients said they did not have time to attend sprint demos for sprints that were not directly linked to a release. That made me think of all the projects I have seen where someone did not have the time to do something that they should have done.

There can be a lot of reasons for not doing various tasks, but the argument that there is not enough time is rarely a good sign.

Solution: Be aware of, and communicate, the consequences of not doing something.

Let us look at the consequences of some of the “We don’t have time to…” statements:

“We do not have time to attend sprint demos” Feedback is needed in order to ensure that the solution meets the requirements – the longer the feedback loop in your project, the more rework will be needed and the greater the impact of misunderstandings.

“We do not have time for reviewing our documentation” Reviewing for grammar and spelling mistakes adds little value, but missing reviews for code and testability can soon become expensive. Think cost escalation here: the later the discovery, the more expensive the fix. Some argue that static testing, or a review, is actually one of the activities with the biggest RoI in your projects.

“We do not have time to test” Not testing allows defects to go undetected into production, leaving little chance of success when implementing the application. Not testing is a huge risk: you risk not only application failure but also business failure, which equals loss of money and prestige.

“We do not have time to write unit tests” Some less mature projects I have seen shipped code for test if it could compile, skipping all developer-driven testing. Unit tests allow early defect detection and reduce turnaround in projects; skipping them means that costs will escalate. On top of this, such a strategy puts more pressure on the test team, who are often already under pressure when the release approaches.

Common to all the above is the fact that they are investments that someone might not have time to undertake. Some investments are not a necessity, but postponing investments in quality is likely to drive cost up and customer satisfaction down.

I will end this post with a quote from something I read recently: “Postponing investments in software quality entails risks for the business. When investments are delayed too long, business continuity can be put at risk - sometimes sooner than expected.” – from “You can’t avoid investing in your software forever” by Rick Klompé

Happy testing & investing :)

/Nicolai

Wednesday 2 October 2013

To test in Production, or not...


Test or verification in production? This discussion emerges from time to time, usually following the statement “We will verify in production!”

I came across this example of test in production on YouTube: http://www.youtube.com/watch?v=URJmiAPNMmg
It is Matt Davis from Armor Express testing a bulletproof vest while wearing it. I sincerely hope that this is not a real test, but rather a demonstration of a well-tested product…

I really enjoy atsay714's comment: "This is a test of his balls, not his vest." From a QA and test perspective this comment really nails the concept of testing in production: it is a test of the guts of the system owner, as a HUGE risk is accepted while testing in production.

Remember to tell your stakeholders what kind of risk they are accepting if they choose to test in production.

Happy testing!

/Nicolai

Tuesday 1 October 2013

Use tools to help you gather information


Problem: Writing and documenting bugs can be time-consuming.

A lot of the test execution we do these days is exploratory. This means that we have little information from scripted test cases that can be copy-pasted into bug reports and scenario descriptions. In order to transfer bugs and other information to development, there is a considerable workload in writing everything up as steps and do-this-do-that descriptions. This poses two problems: it is boring to describe everything in detail, and details are easily forgotten in the process.

Solution: Let snipping and recording tools ease your work

Snipping tools
“A picture is worth a thousand words” – this goes for defect reports as well. The better the picture, the less explanation is needed when pointing development towards the problem. In my experience a good picture on a bug report is a screenshot with some pointers and maybe a little text pointing out areas of attention.

Capturing and sharing pictures can be done easily with a screen capture tool. I use the Snipping Tool already built into Windows, primarily because it is free and easily available. Snipping Tool in Windows: http://windows.microsoft.com/en-US/windows7/products/features/snipping-tool

More sophisticated screen capture tools are on the market, and I suggest that you check some of them out, as they might be a shortcut to faster feedback to your peers. A colleague of mine demonstrated Snagit a while back, if you are after a more feature-rich tool than the Windows Snipping Tool: http://www.techsmith.com/snagit.html

Recording tools
Problem Steps Recorder, included in Windows 7 and 8, allows you to record and share scenarios with step descriptions and screenshots. Before we started using MS Test Manager for exploratory testing, this was frequently used to record repro steps for a bug. Check it out: http://windows.microsoft.com/en-us/windows7/how-do-i-use-problem-steps-recorder

MS Test Manager takes recording to a new level with the possibility of recording both bugs and test cases using the features built into the tool. http://msdn.microsoft.com/en-us/library/vstudio/hh191621.aspx

There are lots of other recording tools, but if you go for one, I suggest that you select one that is more than a plain video recording of the screen.

One last thing to remember: “A fool with a tool is still a fool” – tools are not a silver bullet, but a way to increase productivity and information flow in your organization. Evaluate a new tool over a short period of time, and scrap it if it does not give you the results you expect.
 
Have a nice day & Happy testing!
/Nicolai