Testing Thoughts

Thoughts on software testing and lessons that I have learnt along the way

BBST and The Testing Planet

I’ve been busy learning over the past few weeks, studying for the BBST Foundations course. If you’ve done it, you’ll know how much work is involved; if you’ve not done it, I seriously recommend that you check it out.

I did manage to write up a post that has been on my to-do list for a while, though. It is a follow-up to my 99-second talk at Test Bash and was posted on The Testing Planet last week under the title “How we ran a bug hunt”.

Check it out and let me know what you think.

Shake and Bake test sessions

Our testing approach includes an activity that we call a shake and bake. These sessions are similar to paired exploratory surveys, but the developer is the “driver” of the session and the tester is the “navigator”.

There are several reasons why we do shake and bakes; these include, but are not limited to:

1. The development work to test is a small feature request or a bug fix to a legacy application that the test team have no prior exposure to.
2. The effort to set up a test environment exceeds a sensible proportion of the quoted development and test effort.
3. The tester wants to rapidly learn about the development work so that further test planning can occur.
4. The developer wants to walk someone through their work to help them defocus and step out of the detail briefly.
5. The developer can’t (or doesn’t know how to) test their code.
6. The effort sold to the customer doesn’t contain much room for testing, but we want to perform some high-level tests that are more than a simple demo of things “working”.

They usually take place at the developer’s desk, except when the developer and tester aren’t in the same location, in which case they happen via VoIP and screen sharing (we use Lync).

Shake and bakes take very little time to set up and can therefore be scheduled at short notice. Sometimes the developer will request one if they have hit a blocker or are running into issues while debugging; the tester may also request one to quickly learn about something they are about to test.

The effectiveness of a shake and bake depends on a number of variables. A major factor is the skill of the tester, shown in the questions they ask and the suggestions they make during the session. Equally important is the role of the developer: the most effective shake and bakes I’ve been part of are the ones where the developer is really engaged and has their own suggestions for test ideas, which facilitates creativity in the testing.

We’ve really benefited from using shake and bakes in our testing approach; I’ve been surprised at the number of problems we’ve found using them as part of our test strategies. The benefits are not limited to finding problems, though: they are also effective for rapid learning and discovery, and for building better rapport with the developers.

As with all good testing approaches, it is important to recognise that shake and bakes are fallible. We use them as part of a test strategy; they are not a silver bullet (or best practice). We use the information gathered from these sessions to drive further testing. There are situations where a shake and bake is the only testing activity that happens for a particular piece of development work, but these are likely to be situations where testing would previously have been skipped altogether, or would have been a series of basic checks that were unlikely to find problems.

Do you do something similar in your test process? Let me know what you think.

How we manage our software testing

Ten months ago my company didn’t test its own development. Sure, the developers would debug their code a number of times to make sure it “worked*”, and then the customer would perform “Acceptance Testing”. I don’t think I really need to describe the problems with this approach, but suffice it to say most development work overran and there was a lot of free-of-charge work to fix issues!

*For “worked”, read what Jerry Weinberg said about people saying “It works”: “…immediately translate that into, ‘We haven’t tried very hard to make it fail, and we haven’t been running it very long or under very diverse conditions, but so far we haven’t seen any failures, though we haven’t been looking too closely, either.’ (Zero or more successes)”

I joined as the first software tester ten months ago and set the testing journey in motion; this post is about taking stock of where we are today.

The context:

I’m part of a Research and Development Team specialising in CRM, SharePoint, ERP and Accounting product development. We are currently working on three core products and a vast number of new and legacy projects across our supported technology stack.

The development pods work through a prioritised backlog of work items within sprints that last between three and five weeks. We have four development pods working on independent sprints. Sprint kick-offs always happen on a Monday and sprint retrospectives always happen on a Friday, but not necessarily in the same week.

The test team (I apply this term loosely, as there are currently only two of us, soon to be three) don’t work in sprints – instead we support all development tasks across all sprints. In a nutshell, there is a lot to test and not much time to do it – sound familiar?

Our test process:

We use Confluence and Jira to support our test management process. We could easily use a different toolset, but we don’t, so feel free to substitute references to these systems with your own as you read.

The current process has been influenced by the Rapid Software Testing course taught by James Bach (http://www.satisfice.com/info_rst.shtml) and Michael Bolton (http://www.developsense.com/courses.html), by session-based test management (http://www.satisfice.com/sbtm/) and by thread-based test management (http://www.satisfice.com/blog/archives/503).

Our testing sits at the exploratory end of the scale, not the pre-scripted end (a scale originally described by Jon Bach and included in the Rapid Software Testing material). Instead of writing lots of large documents à la ISO 29119, we create:

A test strategy page in Confluence for the collection of work items allocated to a sprint, as per the sprint kick-off meeting and subsequent document. As a rule of thumb each sprint gets at least one test strategy page in Confluence, but may get more if it makes sense to split the development work up (e.g. if the sprint covers new features for a product plus a series of support tickets for legacy projects). The scope of the test strategy is outlined, along with a bullet-point list of significant risks, issues, assumptions and constraints as we think of them. This is followed by the fundamental part of the test strategy: the proposed test approach, which often takes the form of a mind map (influenced by the heuristic test strategy model, http://www.satisfice.com/tools/htsm.pdf). Finally there is a set of test ideas, which we define as the proposed testing missions (similar to charters from SBTM). We call them test ideas to discourage the notion that they are set in stone once written: a test idea can be created at any time and can become obsolete at any time if we discover it is no longer relevant. We’ve created an “issue type” in Jira for test ideas so that we can maintain a repository of them, and we use the close integration with Confluence to provide a view of the relevant test ideas on the test strategy page.
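
As an aside, a test idea repository like this lends itself to light automation. Below is a minimal sketch of how a test idea could be raised against a custom Jira issue type using Jira’s standard REST API; the instance URL, credentials, project key and field values are hypothetical placeholders, not our actual configuration.

    import requests

    JIRA_URL = "https://jira.example.com"  # hypothetical Jira instance
    AUTH = ("tester", "secret")            # placeholder credentials

    # Raise a new "Test Idea" (a custom issue type) in the repository.
    new_idea = {
        "fields": {
            "project": {"key": "SPR"},            # hypothetical project key
            "summary": "Explore the importer with malformed CSV files",
            "description": "Mission: probe the importer's error handling.",
            "issuetype": {"name": "Test Idea"},
            "labels": ["sprint-12", "importer"],  # labels make searching easier
        }
    }

    response = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=new_idea, auth=AUTH)
    response.raise_for_status()
    print("Created", response.json()["key"])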

For each test strategy page we also create a test report page. This is another living Confluence page that is updated dynamically because it contains a series of views from Jira; it can therefore be consulted at any point for the latest snapshot. Our test reports provide a link to the bugs we have raised against the sprint, but we don’t provide any bug-related metrics, as we view these as misleading. There is a summary of the bugs that are still outstanding, which serves to prompt further discussions with the product managers about whether they should be addressed. There is also a summary of the test ideas that we feel have been sufficiently covered, and of those that we feel need more work or that we didn’t even start; again, the objective is to prompt further discussions with the product managers about the information they require. We add our “testing story” (see Rapid Software Testing) to the page and share it with our colleagues. Jira and Confluence require user credentials to log in and view our content, so we have a limited audience; this allows us to take a much more social approach, as we know who can consume our information and can supplement it with conversations. We haven’t covered external test reports to date: only a handful of customers have asked about the testing, and we’ve normally been able to deal with it through conversations rather than reports and metrics. I’m keen to provide information to our customers about our testing, but I need to be mindful of what my company wishes to share and what I think is useful to share.
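
For a flavour of the kind of Jira views such a report page can embed, here is a sketch of two JQL filters of the sort that could sit behind them; the project key, issue type name, label and status values are assumptions for illustration, not our exact configuration.

    # Hypothetical JQL behind two of the report page's embedded Jira views.
    REPORT_FILTERS = {
        # Bugs raised against the sprint that nobody has resolved yet.
        "outstanding_bugs": (
            'project = SPR AND issuetype = Bug '
            'AND labels = "sprint-12" AND resolution = Unresolved'
        ),
        # Test ideas never started, to prompt discussion with product managers.
        "untouched_test_ideas": (
            'project = SPR AND issuetype = "Test Idea" '
            'AND labels = "sprint-12" AND status = Open'
        ),
    }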

Our initial approach to managing the day-to-day testing was aligned with SBTM, but in our context we found it hard to dedicate uninterrupted sessions to testing just one test idea (or charter) at a time. We therefore moved to a more thread-based approach, which gave us much more flexibility to work with the development pods when we needed to. We can pick up multiple threads (or test ideas) in a day across different sprints, and if we get interrupted we simply park what we’re doing and kick off some other test activity. This is where holding test ideas in Jira as a repository really became useful: we were able to create a basic workflow that keeps track of our progress (and I mean basic – we weren’t interested in creating a bureaucratic nightmare). We are also able to query the repository for information in many different places, e.g. dashboards and ad-hoc searches, and, as I mentioned previously, the test strategy page itself has a view of the linked test ideas. We make use of labels to tag our test ideas and make searching for things easier.
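
To make “querying the repository” concrete, the snippet below is a rough sketch that pulls a snapshot of in-progress test ideas through Jira’s standard search endpoint; as before, the URL, credentials and JQL are illustrative assumptions rather than our exact setup.

    import requests

    JIRA_URL = "https://jira.example.com"  # hypothetical Jira instance
    AUTH = ("tester", "secret")            # placeholder credentials

    # Snapshot of every in-progress test idea (thread) across all sprints.
    jql = 'issuetype = "Test Idea" AND status = "In Progress" ORDER BY updated DESC'
    response = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "fields": "summary,status,labels"},
        auth=AUTH,
    )
    response.raise_for_status()

    for issue in response.json()["issues"]:
        fields = issue["fields"]
        print(issue["key"], "|", fields["status"]["name"], "|", fields["summary"])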

Supporting the test strategies, test reports and test ideas are the test notes that we take on a daily basis. Because we no longer work within defined sessions, we each create a notes page for the day’s testing; we can then insert references to the test ideas we have worked on and the bugs we have raised during the day. This gives us a degree of traceability when looking back through our notes. It also means we don’t need to duplicate effort by writing things up twice. For example, I was retesting a bug today and wanted to describe my testing to the developer so that he could see why I wasn’t closing it; I was able to capture this against the bug so that he would be sent the update, and because I inserted a link in my test notes I didn’t need to repeat the detail there. We also attach any additional material we’ve created, or that the application under test has produced, so that it is linked from where we reference it.
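
Because each day starts with a fresh notes page, that step is easy to script. Here is a minimal sketch that creates the day’s page through Confluence’s standard content REST API; the space key, parent page id and skeleton headings are hypothetical, not our real structure.

    import datetime
    import requests

    CONFLUENCE_URL = "https://confluence.example.com"  # hypothetical instance
    AUTH = ("tester", "secret")                        # placeholder credentials

    today = datetime.date.today().isoformat()
    page = {
        "type": "page",
        "title": f"Test notes - {today}",
        "space": {"key": "TEST"},       # hypothetical space key
        "ancestors": [{"id": 123456}],  # hypothetical parent "Test notes" page
        "body": {
            "storage": {
                # Skeleton filled in as the day's threads are picked up.
                "value": ("<h2>Test ideas worked on</h2>"
                          "<h2>Bugs raised</h2>"
                          "<h2>Attachments</h2>"),
                "representation": "storage",
            }
        },
    }

    response = requests.post(f"{CONFLUENCE_URL}/rest/api/content", json=page, auth=AUTH)
    response.raise_for_status()
    print("Created page id", response.json()["id"])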

The test strategies, test reports, bug reports, test notes and any data or other files created during testing are all instantly available to the rest of the team, and are discussed at least in the sprint retrospectives, if not more often.

Final Words

This is the test management process that we currently follow. We know that there are still plenty of improvements to make, but we have succeeded in engaging our developers (who were sceptical of testers at first) and in reducing the number of bugs that our customers report; in that sense our testing is good enough for today!

Our company recently applied for ISO 9001 certification, and as part of the certification process I was grilled by the auditor (and I mean grilled – he was a contractor to the auditing firm and didn’t want to be seen as too lenient! The company even made a complaint about his approach when he audited other areas of the business). Under this scrutiny our exploratory testing, supported by our test management process, was deemed satisfactory and passed the audit. So the next time someone tries the BS excuse that you must script every test, count defects, provide dangerous metrics or follow some other crazy-sounding idea “for auditing purposes”, this proves them wrong!

How do you manage your testing? I would love to hear about the different ways in which expert testers approach test management.

Q&A With Pradeep Soundararajan from Moolya

I’ve lost my way a bit recently in terms of my online presence, both on my blog and on Twitter. I am changing jobs soon, so I should have a lot more time for such activities!

I was lucky enough to attend RST with James Bach in March and rounded the week off with Test Bash. At the pre-Test Bash drinks I also met Michael Bolton and Keith Klain, as well as some other very interesting testers. On my way home I emailed some of the directors of my current company to spread my enthusiasm about Context Driven Testing, and one of them asked me to write a blog article about what I had learnt. The article is currently in the QA stage and should be published shortly. While writing it I decided to ask Moolya some questions, as I had read a lot about their successes and wanted to learn more about their journey. Pradeep Soundararajan answered my questions the very next day!

I didn’t expect my company to want to publish a Q&A session with another consultancy (even though it is unlikely they will ever compete for the same clients, given that Moolya are in India and we’re in the UK), so I thought I would publish it here!

LB: What made you decide to follow a context driven approach?

PS: Who we are determines what we do. This is an approach that suits creative and value driven testing. We want to be highly valuable to our customers and having been aware of how other approaches are a disservice to customers, in our humble-less opinion, we made context driven approach our own. We own it.

LB: What are the benefits to you as a consultancy?

PS: We are a services company. The benefits to us – we are being of value to our customers. A value that our customer sees which they can’t get from anybody else. We help in solving their business problems through testing. We work with great pride, we are happy as human beings, our testers smile more, their brains are more agile and creative. We are laying foundation for what we will be doing in the future – change the way the world tests and result in a change of how value from testing and testers are perceived.

LB: What benefits do your clients get?

PS: They get a lot of stuff. Most importantly, they smile more. They smile because – they see us as partners in helping them achieve their goals of revenues, sales, happy customers, being aware of risks, having greater test coverage, having more indepth information about the quality of their product, they see us celebrating their success, they see us wanting to do more for them. They see us as helping them on what they should ask from every other vendor. So, we are setting the bar high for other vendors to perform.

LB: What is the most important piece of advice for a company wanting to take a similar approach?

PS: Oh, general advice? So here it is – if you really work to solve your customers business and technical problems and then look back at what you did – you would have taken this approach.

LB: What is your vision for the future of testing?

PS: The vision is if Moolya continues to grow, the world at some point is going to stop doing what they are doing and going to start paying serious attention to how we do things and maybe we can influence the way the world would test and perceive. The vision that we have is we will be able to make a significant change to

The way testers are hired, coached, mentored
The way customers ask, get and see the testing value
The way companies serve, chose not to serve and make money with testing as a business
The way the world will perceive testing, understand and have better opinions
The way media would project testing
The way experts consult
The way millions of testing institutes in India and across the world will offer their training programs

That simple!

I would like to say a big thank you to Pradeep and Moolya for their prompt answers.

Have you tried testing lately?

I’ve just got back from the ACCU event in London entitled “Modern Testing”. The speakers all focused on test automation, which is to be expected given that the ACCU is an association for programmers. In the introduction the term “robot tester” was used to describe a tester who focuses on running repetitive, non-challenging, no-brainer tests, with automating tests offered as a way of avoiding this fate. To be fair, there was a balanced view on the use of test automation (some may call it checking).

Listening to the talks prompted me to write a post that I’ve been meaning to write for a while.

Recently I’ve heard a number of (fairly junior) testers express a desire to move out of testing, or at least to be promoted or given more responsibility, because they find their job boring. I wasn’t too surprised that some testers find their jobs boring, so I asked them: “Have you tried testing recently?”

The trouble is that too many companies define testing as a very mechanical, process-based, one-size-fits-all, no-thinking-necessary activity – aka robot testing. They advocate writing test scripts down to such a level of detail that the poor tester writing and executing them looks up to a robot as something to aspire to! It is unlikely that they are going to find many bugs, and if they do, it is most likely that the bugs are so apparent (and uninteresting) that the tester could have found them with their eyes closed! For me there are two fundamental problems that this sort of approach poses for the tester: 1) it doesn’t mentally stimulate their brain; 2) it is very hard to see what value your effort is delivering. There is a reason that these boring tasks are normally given to the junior testers – the more experienced testers have escaped them and don’t want to do them again!

The mission of testing has been miscommunicated somewhere along the line – it must have been, because I very much doubt an organisation’s testing mission specifically calls for producing thousands of detailed test scripts. It’s more likely that their mission for testing is something akin to: find as many bugs as possible, in the shortest time possible, spending the least amount possible. I’m putting this in the context of my experiences working on projects; I’m not saying it is valid for all projects within all organisations, and there are obviously other motives for an organisation to carry out testing. So does writing x number of scripts, and then applying arbitrary risk values to determine which must run and which can be dropped when time runs out, really fit this mission? Does creating test scripts guarantee that we will find the majority of the bugs?

I don’t see a great deal of value in robot testing, but this is what most organisations seem to get. So how do we avoid becoming robot testers?

There are lots of ways to avoid the mundane life of a robot tester; these may include:

  1. Reading more blogs to see what other people are doing
  2. Asking more questions – don’t be afraid to ask yourself, your peers and those more experienced what value you’re adding by the testing you’re doing and if it fits the mission
  3. Learning the technical skills required for test automation and then automating the mundane checks that don’t require human interpretation, thus freeing yourself to investigate the application more thoroughly (see the short sketch after this list)
  4. Attending conferences, joining communities, starting your own blog, discussing your opinions, getting feedback on your ideas
  5. Going on a course, e.g. Rapid Software Testing
  6. I’m sure you can come up with more in the comments section
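
On point 3 above, here is a minimal sketch of what automating a mundane check might look like in Python with pytest; the application URL and the check itself are hypothetical, and the point is simply that a machine can repeat this while you investigate the product properly.

    import requests

    BASE_URL = "https://app.example.com"  # hypothetical application under test

    def test_login_page_is_reachable():
        # A dull, repeatable check: the login page responds and returns HTML.
        response = requests.get(f"{BASE_URL}/login", timeout=10)
        assert response.status_code == 200
        assert "text/html" in response.headers.get("Content-Type", "")

    # Run with: pytest test_smoke.py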

These are the things I do to avoid becoming a robot tester, and I pledge to do more to help prevent the proliferation of robot testers by spreading my passion for the true skills of testing. I’ve realised that you don’t have to be a James Bach, Michael Bolton, Keith Klain, Rosie Sherry, or any of the other well-respected thought leaders to achieve this and to contribute to better software testing; I’m of the opinion that the more of the community that contributes, the richer the community becomes. I’ve also heeded some of my own advice recently, which is to ask yourself the question, “what’s the worst that can happen?”

I decided that the answer for me was this: if I voice my opinion, people could disagree with me, or I could be wrong, and people might form a lower opinion of me. But for every yin there is a yang. The flip side of my concerns is that it could spark a debate about why people disagree, or why they think I’m wrong, and I may learn something new! And if I don’t publish my thoughts, most people won’t ever know who I am; an opinion (good or bad) is at least an opinion.

So I’ll ask again, “Have you tried testing recently?”

Helping clients to succeed with testing

I’ve recently been reading / listening to the audiobook of Let’s Get Real or Let’s Not Play by Mahan Khalsa. Although this book is on the reading list for the company’s sales staff, the book’s blurb talks about helping clients succeed; in my role as a test consultant I have the same goal, so I figured it would be a useful way to spend my twice-daily commute to and from work.

In the audiobook Mahan starts off by introducing ORDER, his acronym for helping clients to succeed: Opportunity, Resources, Decision Process, Exact Solution and Relationship. I have listened to the Opportunity chapters a number of times, as I keep drifting into thinking about how much of this resonates with approaches to software testing. In Opportunity, Mahan discusses the need to “first take the solution off of the table”, and I started thinking about how many companies already have a solution in place before talking to the client – be it a pre-defined framework, a preferred model (e.g. the V-Model) or a number of “best practices” that they follow – and how they look to apply it before they’ve really heard the client’s needs and established a context.

Mahan frequently talks about a golden rule of “no guessing”, and again my mind turns to testing, a recent assignment, and in particular the assumptions we make on a daily basis. If a client asks for a test manager for a UAT phase of a project, how often do we dig deeper for context? I would estimate that the majority of people hear these keywords, assume the client takes the terms to mean the same as we do, and rush to provide a test manager with x years of experience managing UAT. I heard a radio interview with the producer of QI talking about their latest book of facts and how they researched it. They talked about how, as children, we possess an unquenchable curiosity, bombarding our parents with question after question; as we get older we seem to lose this curiosity – or is it that we become too scared to ask for clarity in case it is somehow perceived as a weakness? I’ve seen people guessing on many projects and, if I’m honest, I’ve been guilty of it in the past too. Sometimes this manifests itself as “well, the client always does it this way, so…” or “it has to be done like this because of such-and-such regulation…”. But when you ask if the client specifically said that, you get a shrug: “well, no, they didn’t, but…”

Although the book is intended to be applied to selling, wanting our clients to succeed fits pretty well with testing. As I continue listening, Mahan moves on to questioning the client in order to establish the results they want to achieve or the problems they want to solve by implementing the solution; after all, if it doesn’t fix something or try to improve on something, it isn’t really a solution. Once the client can list a few of the issues, the consultant probes further until we have them all, and then we need to prioritise them. In the context of testing this is often achieved by analysing documentation (business requirements document, functional specification, etc.) and maybe talking to a business analyst or subject matter expert. However, I see a problem with this approach: if testing is based solely upon documentation (or claims made about a product), then we are in danger of assuming that these documents are complete. I say danger because I’ve yet to see a document or set of documents that could be classed as complete; testers readily accept that exhaustive testing is not possible, yet seem to believe that exhaustive documentation is. So are we violating a key principle of helping our clients succeed by making this assumption?

Once we have established a prioritised list of the results the client would like to achieve, or the problems they would like to resolve, Mahan describes how to dig deeper, establishing context through critical thinking and questioning. If the client is looking to achieve a specific result, how will they know it has been achieved? If they are trying to improve a specific problem, how will they know it has been fixed? Mahan talks about his “measurable alert”: he is looking for something tangible, trying to establish how the client is going to judge success or gain a return on investment. If we relate this back to testing, we are trying to establish success criteria and the key measures the client will use to assess against them. It is important to note that I’m not suggesting we use prescribed metrics that tell us how many tests we have run today or what percentage of tests have passed or failed; I’m suggesting we get to the root of what the stakeholders need to know in order to make a decision, and build an appropriate solution based upon those criteria. After all, the job of a tester is to provide the stakeholders with as much information as possible to make critical decisions about the product.

The chapters on Opportunity really got me drawing comparisons with how to approach testing, but the subsequent components of the ORDER acronym can also be applied to helping the client succeed through testing. The R stands for Resources, which include people, time and budget. This, I would suggest, is where most of the guessing happens (I realise the irony that I am in fact guessing here!). For testing, this should be a key factor in the approach we propose: who makes up the project team? What skills can they offer to the testing? What are their requirements for our testing? What skills does the project team lack? How do we fill the gaps? What are the timescales for the project? What is driving these deadlines? Can testing be completed within these timescales? Does testing have a budget? What is it based on? What does it cover? Who is responsible for it? The information gathered during this stage is all about understanding the project environment, which is critical for determining the test techniques that will be required to assess the quality criteria we’ve established.

D is for Decision Process. For testing, we need to find all of the decision makers – that is, anyone (or anything) that could be impacted by a lack of quality. Don’t forget that other systems could have a vested interest: who makes decisions about quality on their behalf? By understanding the decision process that will be followed, testing can serve the stakeholders by providing relevant and timely information. Another key consideration is whether you need to influence and shape the test approach away from preconceived ideas; if you’ve taken the time to establish who really makes the decisions and what their criteria for those decisions are, you can demonstrate a much better business case for following your approach.

E is for Exact Solution. We are now quite a way into the process of helping our clients to succeed, and only now are we proposing the solution. In the context of testing this probably relates to the test strategy, the test plan and any other derivative you decide is necessary at this point. But notice that before the word “solution” sits the word “exact”. Clearly it wouldn’t make a good acronym without the E, but let’s not gloss over it as mere acronym-making: how many projects have you worked on where large parts of the documentation were generic, maybe even cut and pasted from previous projects? How many projects follow the V-Model just because it is so-called “best practice” or an “industry standard”? If the main justification for the solution is that it follows “best practices” or “industry standards”, then I would suggest the solution is not exact for the context of the project. If you have done all the hard work up front, the solution should have pretty much written itself and should be quite easy to document and to win stakeholder buy-in for. The test strategy can be a simple outline of the intended approach to testing, summarising the information gathered so far; see the Heuristic Test Strategy Model written by James Bach for an example of what a test strategy could look like.

Finally, R is for Relationship. I’m not going to go into too much detail here, but if you’ve helped the client to succeed by truly understanding their needs, and stopped some critical bugs going into production in the process, the chances are you will have built a good relationship and will be well placed to win future work, if that is what you desire.

During this comparison I have mentioned “context” a number of times. That is because, in order to help our clients succeed, we need first to truly understand their needs, the context in which they operate and the specific drivers for the project. Maybe the frameworks and approaches built upon ISEB/ISTQB, IEEE, etc. can accommodate this, but so far my experience of these approaches is that they are about finding a silver bullet: the one size that fits all, the best practice, the industry standard, the magic formula that makes the consultancies lots of money and leaves the clients out of pocket. If these approaches really worked, there should be no reports of production defects, projects overrunning, or deliveries with a fraction of the promised scope – because, according to recruiters, most companies in the UK want ISTQB-certified testers, which must mean they are all following the ISTQB approach?

For those of us working on normal projects – you know, the ones that don’t quite fit the perfect mould we have learnt about: maybe the documentation is poor or out of date; you’ve just found out about a must-have feature from a stakeholder you didn’t realise had a stake in the project; you’ve been told “even though it doesn’t make sense, can you just do it this time anyway”; or maybe you’ve heard something like this said to a junior colleague: “so you want to learn about testing? OK, let’s send you on the ISTQB Foundation.”

If that rings true – or even if you are happy with your beliefs about testing best practices and the like – and you haven’t heard of the context-driven school of testing, I urge you to search for it, read about it and see what you think. I also suggest that you read about topics unrelated to testing and see how they might improve your testing.

Please leave your comments; I’m particularly interested in your views on this post, your experiences of good and bad testing, and suggestions of non-testing topics that might enhance approaches to testing. Thanks for reading.