Ten months ago my company didn’t test its own development. Sure, the developers would debug their code a number of times to make sure it “worked”*, and then the customer would perform “Acceptance Testing”. I don’t think I really need to describe the problems with this approach, but suffice it to say most development work overran and there was a lot of free-of-charge work to fix issues!
*For “worked”, read what Jerry Weinberg said about people saying “It works”: “… immediately translate that into, ‘We haven’t tried very hard to make it fail, and we haven’t been running it very long or under very diverse conditions, but so far we haven’t seen any failures, though we haven’t been looking too closely, either.’ (Zero or more successes)”
I joined as the first software tester ten months ago and kicked off the testing journey; this post takes stock of where we are today.
I’m part of a Research and Development team specialising in CRM, SharePoint, ERP and accounting product development. We are currently working on three core products and a vast number of new and legacy projects across our supported technology stack.
The development pods work through a prioritised backlog of work items within sprints that last between three and five weeks. We have four development pods working on independent sprints. Sprint kick-offs always happen on a Monday and sprint retrospectives always happen on a Friday, but not necessarily in the same week.
The test team (I apply this term loosely as there are currently only two of us, soon to be three) don’t work in sprints – instead we support all development tasks across all sprints. In a nutshell there is a lot to test with not much time to do it – sound familiar?
Our test process:
We use Confluence and Jira to support our test management process. We could easily use a different toolset but we don’t, so feel free to substitute references to these systems with your own as you read this.
The current process has been influenced by the Rapid Software Testing course taught by James Bach (http://www.satisfice.com/info_rst.shtml) and Michael Bolton (http://www.developsense.com/courses.html), session-based test management (http://www.satisfice.com/sbtm/) and thread-based test management (http://www.satisfice.com/blog/archives/503).
Our testing sits at the exploratory end of the scale, not the pre-scripted end (a continuum originally described by Jon Bach and included in the Rapid Software Testing material). Instead of writing lots of large documents à la ISO 29119, we create:
A test strategy page in Confluence for the collection of work items allocated to a sprint, as per the sprint kick-off meeting and subsequent document. As a rule of thumb each sprint gets at least one test strategy page in Confluence, but it may get more if it makes sense to split the development work up (e.g. if the sprint covers some new features for a product and a series of support tickets for legacy projects). The scope of the test strategy is outlined, along with a bullet-point list of significant risks, issues, assumptions and constraints as we think of them. This is followed by the fundamental part of the test strategy: the proposed test approach, often in the form of a mind map (influenced by the Heuristic Test Strategy Model http://www.satisfice.com/tools/htsm.pdf). Finally there is a set of test ideas, which we define as the proposed testing missions (similar to charters from SBTM). We call them test ideas to discourage the notion that they are set in stone once written: a test idea can be created at any time and can become obsolete at any time if we discover it is no longer relevant. We’ve created an “issue type” in Jira for test ideas so that we can maintain a repository of them; we use the close integration with Confluence to provide a view of the relevant test ideas on the test strategy page.
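That view on the strategy page is ultimately just a JQL query against the test idea repository, and the same query can be run through Jira’s standard REST search endpoint. Here is a rough sketch of what that might look like; the server address, credentials, label scheme and issue type name are placeholders rather than our actual configuration:

```python
import requests

JIRA_URL = "https://jira.example.com"  # placeholder server address
AUTH = ("tester", "secret")            # placeholder credentials

# JQL matching a custom "Test Idea" issue type; the one-label-per-sprint
# tagging scheme here is illustrative, not prescriptive.
jql = 'issuetype = "Test Idea" AND labels = "sprint-12" ORDER BY status'

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": jql, "fields": "summary,status"},
    auth=AUTH,
)
resp.raise_for_status()

# List each test idea with its current workflow status.
for issue in resp.json()["issues"]:
    fields = issue["fields"]
    print(issue["key"], fields["status"]["name"], "-", fields["summary"])
```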
For each test strategy page we create, we also create a test report page. This is another living Confluence page that is updated dynamically because it contains a series of views from Jira; it can therefore be viewed at any given point for the latest snapshot. Our test reports provide a link to the bugs we have raised against the sprint, but we don’t provide any bug-related metrics as we view these as misleading. There is a summary of the bugs that are still outstanding, which serves to prompt further discussions with the Product Managers about whether they should be addressed. There is also a summary of the test ideas we feel have been sufficiently covered and those that require more work or that we never started; again, the objective is to prompt further discussions with the Product Managers about the information they require. We add our “testing story” (see Rapid Software Testing) to the page and share it with our colleagues.

Jira and Confluence require user credentials to log in and view our content, so we have a limited audience; this allows us to take a much more social approach, as we know who consumes our information and we can supplement it with conversations. We haven’t covered external test reports to date; only a handful of customers have asked about the testing, and we’ve normally been able to deal with it via conversations rather than reports and metrics. I’m keen to provide information to our customers about our testing, but I need to be mindful of what my company wishes to share and what I think is useful to share.
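The Jira views on a report page boil down to saved JQL queries rendered by Confluence’s Jira macro. Purely for illustration (the sprint label is a placeholder for whatever tagging scheme you use), the two bug views might be driven by queries along these lines:

```python
# Illustrative JQL behind the two bug views on a test report page.
# "resolution = EMPTY" is the standard JQL idiom for unresolved issues.
BUGS_RAISED = 'issuetype = Bug AND labels = "sprint-12" ORDER BY created DESC'
BUGS_OUTSTANDING = 'issuetype = Bug AND labels = "sprint-12" AND resolution = EMPTY'
```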
Our initial approach to managing the day-to-day testing was aligned with SBTM, but in our context we found it hard to dedicate uninterrupted sessions to testing just one test idea (or charter) at a time. We therefore adapted to a more thread-based approach, which gives us much more flexibility to work with the development pods when we need to. We can pick up multiple threads (or test ideas) in a day across different sprints, and if we get interrupted we simply park what we’re doing and kick off some other test activity. This is where having test ideas held in Jira as a repository really became useful: we were able to create a basic workflow that keeps track of our progress (and I mean basic – we weren’t interested in creating a bureaucratic nightmare). We are also able to query the repository for information in many different places, e.g. dashboards, ad-hoc searches and, as mentioned previously, the view of linked test ideas on the test strategy itself. We make use of labels to tag our test ideas and make searching for things easier.
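To give a feel for how lightweight the workflow is, here is a sketch of the kind of state model involved. The status names below are indicative rather than a faithful copy of our Jira configuration (real Jira workflows are set up in the admin UI, not in code); the point is simply that the model stays small:

```python
# Indicative statuses for a lightweight test idea workflow -- the exact
# names matter far less than keeping the state model small.
TEST_IDEA_TRANSITIONS = {
    "Open":        ["In Progress", "Obsolete"],
    "In Progress": ["Parked", "Covered", "Obsolete"],  # park when interrupted
    "Parked":      ["In Progress", "Obsolete"],        # pick the thread back up
    "Covered":     [],  # enough testing done for now
    "Obsolete":    [],  # the idea is no longer relevant
}

def can_move(current: str, target: str) -> bool:
    """Return True if the transition is allowed in this sketch."""
    return target in TEST_IDEA_TRANSITIONS.get(current, [])
```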
Supporting the test strategies, test reports and test ideas are the test notes that we take on a daily basis. Because we no longer work within defined sessions, we each create a notes page for the day’s testing, into which we insert references to the test ideas we have worked on and the bugs we have raised during the day. This gives us some form of traceability when looking back through our notes. It also means we don’t need to duplicate effort by writing things up twice. For example, I was doing some bug retesting today and wanted to describe my testing to the developer so that he could see why I wasn’t closing the bug; I captured this against the bug so that he would be sent the update, and because I inserted a link in my test notes I didn’t need to duplicate the detail there. We also attach any additional material we’ve created, or that the application we’re testing has produced, so that it is linked to where we reference it.
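Creating the daily notes page is a thirty-second manual job, but if you wanted to automate the stub it could be done through Confluence’s standard content REST endpoint. This is just a sketch; the server address, credentials, space key and title convention are all placeholders:

```python
import datetime
import requests

CONFLUENCE_URL = "https://confluence.example.com"  # placeholder
AUTH = ("tester", "secret")                        # placeholder

# A stub notes page for today's testing, created via Confluence's
# standard content endpoint; space key and title format are placeholders.
page = {
    "type": "page",
    "title": f"Test notes {datetime.date.today().isoformat()}",
    "space": {"key": "TEST"},
    "body": {
        "storage": {
            "value": "<p>Links to today's test ideas and bugs go here.</p>",
            "representation": "storage",
        }
    },
}

resp = requests.post(f"{CONFLUENCE_URL}/rest/api/content", json=page, auth=AUTH)
resp.raise_for_status()
print("Created", resp.json()["_links"]["webui"])
```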
The test strategies, test reports, bug reports, test notes and any data or other files created during testing are all instantly available to the rest of the team, and are discussed at least in the sprint retrospectives, if not more often.
This is the test management process that we currently follow. We know there are still plenty of improvements to make, but we have succeeded in engaging our developers (who were sceptical of testers at first) and in reducing the number of bugs that our customers report, and in that sense our testing is good enough for today!
Our company recently applied for ISO 9001 certification, and as part of the certification process I was grilled by the auditor (and I mean grilled – he was a contractor to the auditing firm and didn’t want to be seen as being too lenient! The company even made a complaint about his approach when he audited other areas of the business). Under this scrutiny our exploratory testing, supported by our test management process, was deemed satisfactory and passed the audit. So the next time someone tries the BS excuse that you must script every test, count defects, provide dangerous metrics or follow any other crazy-sounding idea “because of auditing purposes”, this proves them wrong!
How do you manage your testing? I would love to hear the different ways in which expert testers approach test management.