Founder and CEO of @TentamenHr. Organizer of Zagreb Software Testing Club meetups.

All testing is exploratory: change my mind


I’ve recently returned to Australia after several weeks in Europe, mainly for pleasure with a small amount of work along the way. Catching up on some of the testing-related chatter on my return, I spotted that Rex Black repeated his “Myths of Exploratory Testing” webinar in September. I respect the fact that he shares his free webinar content every month and, even though I often find myself disagreeing with his opinions, hearing what others think about software testing helps me to both question and cement my own thoughts and refine my arguments about what I believe good testing looks like.

Rex started off with his definition of exploratory testing (ET), viz.

A technique that uses knowledge, experience and skills to test software in a non-linear and investigatory fashion

He claimed that this is a “pretty widely shared definition of ET” but I don’t agree. The ISTQB Glossary uses the following definition:

An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.

The definition I hear most often is something like the following James Bach/Michael Bolton effort (which they used until 2015):

An approach to software testing that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work by treating learning, test design and test execution as mutually supportive activities that run in parallel throughout the project

They have since deprecated the term “exploratory testing” in favour of simply “testing” (from 2015), defining testing as:

Evaluating a product by learning about it through exploration and experimentation, including to some degree: questioning, study, modeling, observation, inference, etc.

Rex went on to say that the test basis and test oracles in ET “are primarily skills, knowledge and experience” and any such testing is referred to as “experience-based testing” (per the ISTQB definition, viz. “Testing based on the tester’s experience, knowledge and intuition.”). Experience-based testing that is investigatory is then deemed to be exploratory. I have several issues with this. There is an implication here that ET involves testing without using a range of oracles that might include specifications, user stories, or other more “formal” sources of what the software is meant to do. Rex reinforces this when he goes on to say that ET is a form of validation and “may tell us little or nothing about conformance to specification because the specification may not even be consulted by the tester”. Also, I can’t imagine any valuable testing that doesn’t rely on the tester’s skills, knowledge and experience so it seems to me that all testing would fall under this “experience-based testing” banner.

The first myth Rex discussed was the "origin myth", that ET was invented in the 1990s in Silicon Valley or at least that was when a "name got hung on it" (e.g. Cem Kaner). He argued instead that it was invented by whoever wrote the first program, that IBM were doing it in the 1960s, that the independent test teams in Fred Brooks's 1975 book The Mythical Man-Month were using ET, and that "error guessing" as introduced by Glenford Myers in the classic book The Art of Software Testing sounds "a whole lot like a form of ET". The History of Definitions of ET on James Bach's blog is a good reference in this regard, in my opinion. While I agree that programmers have been performing some kind of investigatory or unscripted testing in their development and debugging activities as long as programming has been a thing, it's important that we define our testing activities in a way that makes the way we talk about what we do both accurate and credible. I see the argument for suggesting that error guessing is a form of ET, but it's just one tactic that might be employed by a tester skilled in the much broader approach that is ET.

The next myth Rex discussed was the "completeness myth", that "playing around" with the software is sufficient to test it. He mentioned that there is little education around testing in Software Engineering degrees, so people don't understand what testing can and cannot do, which leads to myths like this. I agree that there is a general lack of understanding in our industry of how important structured ET is as part of a testing strategy, though I haven't personally heard this myth being espoused anywhere recently.

Next up was the "sufficiency myth", that some teams bring in a "mighty Jedi warrior of ET & this person has helped [them] to find every bug that can matter". He mentioned a study from Microsoft where they split their testing groups for the same application, with one using ET (and other reactive strategies) only, while the other used pre-designed tests (including automated tests) only. The sets of bugs found by these two teams were partially but not fully overlapping, hence proving that ET alone is not sufficient. I'm confident that even if the groups had been divided up and did the same kind of testing (be it ET or pre-designed), then the sets of bugs from the two teams would also have been partially but not fully overlapping (there is some evidence to support this, albeit from a one-off small case study, from Aaron Hodder & James Bach in their article Test Cases Are Not Testing)! I'm not sure where this myth comes from; I've not heard it from anyone in the testing industry and haven't seen a testing strategy that relies solely on ET. I do find that using ET as an approach can really help in focusing on finding bugs that matter, though, and that seems like a good thing to me.

Rex continued with the “irrelevance myth”, that we don’t have to worry about ET (or, indeed, any validation testing at all) because of the use of ATDD, BDD, or TDD. He argued that all of these approaches are verification rather than validation, so some validation is still relevant (and necessary). I’ve seen this particular myth and, if anything, it seems to be more prevalent over time especially in the CI/CD/DevOps world where automated checks (of various kinds) are viewed as sufficient gates to production deployment. Again, I see this as a lack of understanding of what value ET can add and that’s on us as a testing community to help people understand that value (and explain where ET fits into these newer, faster deployment approaches).

The final myth that Rex brought up was the “ET is not manageable myth”. In dispelling this myth, he mentioned the Rapid Reporter tool, timeboxed sessions, and scoping using charters (where a “charter is a set of one or more test conditions”). This was all quite reasonable, basically referring to session-based test management (SBTM) without using that term. One of his recommendations seemed odd, though: “record planned session time versus actual [session] time” – sessions are strictly timeboxed in an SBTM situation so planned and actual time are always the same. While this seems to be one of the more difficult aspects of SBTM at least initially for testers in my experience, sticking to the timebox is critical if ET is to be truly manageable.
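The mechanics of an SBTM session (a charter, a strict timebox, and a debrief that accounts for how the time was spent) can be sketched as a small data structure. This is purely an illustrative sketch, not code from any real SBTM tool; the field names and the test/bug/setup breakdown are my own rendering of the session-sheet idea:

```python
# A minimal, hypothetical sketch of an SBTM session record.
# Field names are illustrative, not taken from any specific tool.
from dataclasses import dataclass, field


@dataclass
class Session:
    charter: str            # mission: "Explore X with Y to discover Z"
    timebox_minutes: int    # strictly timeboxed, e.g. 60, 90 or 120
    test_pct: int           # % of session spent on charter-focused testing
    bug_pct: int            # % spent investigating and reporting bugs
    setup_pct: int          # % spent on setup and session admin
    bugs_found: list[str] = field(default_factory=list)

    def validate(self) -> None:
        # Because the session is strictly timeboxed, the debrief
        # breakdown should account for the whole timebox.
        if self.test_pct + self.bug_pct + self.setup_pct != 100:
            raise ValueError("T/B/S percentages must sum to 100")


session = Session(
    charter="Explore the checkout flow with invalid coupon codes "
            "to discover error-handling bugs",
    timebox_minutes=90,
    test_pct=60, bug_pct=25, setup_pct=15,
    bugs_found=["Coupon field accepts 500-character input"],
)
session.validate()  # no exception: the breakdown is consistent
```

Note that the timebox is an input to the session, not an output of it: in this model there is no "actual time" field to record, which is exactly the point about planned versus actual time above.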

Moving on from the myths, Rex talked about “reactive strategies” in general, suggesting they were suitable in agile environments but that we also need risk-based strategies and automation in addition to ET. He said that the reliance on skills and experience when using ET (in terms of the test basis and test oracle) mean that heuristics are a good way of triggering test ideas and he made the excellent point that all of our “traditional” test techniques still apply when using ET.

Rex’s conclusion was also sound, “I consider (the best practice of) ET to be essential but not sufficient by itself” and I have no issue with that (well, apart from his use of the term “best practice”) – and again don’t see any credible voices in the testing community arguing otherwise.

The last twenty minutes of the webinar was devoted to Q&A from both the online and live audience (the webinar was delivered in person at the STPCon conference). An interesting question from the live audience was “Has ET finally become embedded in the software testing lifecycle?” Rex responded that the “religious warfare… in the late 2000s/early 2010s has abated, some of the more obstreperous voices of that era have kinda taken their show off the road for various reasons and aren’t off stirring the pot as much”. This was presumably in reference to the somewhat heated debate going on in the context-driven testing community in that timeframe, some of which was unhelpful but much of which helped to shape much clearer thinking around ET, SBTM and CDT in general in my opinion. I wouldn’t describe it as “religious warfare”, though.

Rex also mentioned in response to this question that he actually now sees the opposite problem in the DevOps world, with "people running around saying automate everything" and the belief that automated tests by themselves are sufficient to decide when software is worthy of deployment to production. In another reference to Bolton/Bach, he argued that the "checking" and "testing" distinction was counterproductive in pointing out the fallacy of "automate everything". I found this a little ironic since Rex constantly seeks to make the distinction between validation and verification, which is very close to the distinction that testing and checking seeks to draw (albeit in much more lay terms as far as I'm concerned). I've actually found the "checking" and "testing" terminology extremely helpful in making exactly the point that there is "testing" (as commonly understood by those outside of our profession) that cannot be automated; it's a great conversation starter in this area for me.

One of Rex’s closing comments was again directed to the “schism” of the past with the CDT community, “I’m relieved that we aren’t still stuck in these incredibly tedious religious wars we had for that ten year period of time”.

There was a lot of good content in Rex's webinar and nothing too controversial. His way of talking about ET (even the definition he chooses to use) is different to what I'm more familiar with from the CDT community, but it's good to hear him referring to ET as an essential part of a testing strategy. I've certainly seen an increased willingness to use ET as the mainstay of so-called "manual" testing efforts, and putting structure around it using SBTM adds a lot of credibility. For the most part in my teams across Quest, we now consider test efforts to be ET only if they are performed within the framework of SBTM, so that we have that accountability and structure in place for the various stakeholders to treat this approach as credible and worthy of their investment.

So, finally getting to the reason for the title of this post: whether by Rex's (I would argue unusual) definition, by the ISTQB's definition, or by what I would argue is the more widely accepted definition (Bach/Bolton above), it seems to me that all testing is exploratory. I'm open to your arguments to change my mind!

(For reference, Rex publishes all his webinars on the RBCS website at http://rbcs-us.com/resources/webinars/. The one I refer to in this blog post has not appeared there as yet, but the audio is available via https://rbcs-us.com/resources/podcast/)

Read the whole story
karlosmid
1 day ago
reply
Zagreb
Share this story
Delete

BDD Addict Newsletter September 2019


Sorry for not writing earlier, but this does not mean that there is nothing to say! Fortunately there have been plenty of interesting posts and articles about BDD, agile testing and test automation...

The post BDD Addict Newsletter September 2019 appeared first on Gáspár Nagy on software.


Kiss Goodbye To Your Comfort Zone – Big Ideas from TestBash Manchester 2019


Another year, another series of three posts highlighting my threads and takeaways from this year’s TestBash Manchester. A big thanks to the speakers, attendees and of course to the Ministry of Testing for laying on such a great conference!

For the third and final post in this series of three reflecting on my personal takeaways from this year's TestBash Manchester, I knew I wanted to focus on Lisi Hocke's closing talk "A Code Challenge of Confidence" – well, 'talk' may be a bit weak given the actual demonstration of her topic live in the session! – but I found there were complementary threads in what Lisi talked about which chimed with several other talks as well.

Each year, Lisi challenges herself to learn a new skill. Something she wants to be able to do but can't, and something outside of her comfort zone. She commits to doing this each year with a friend, who takes on a personal challenge of their own. One year, for example, Lisi's challenge was public speaking (something she clearly overcame!). This year, she's challenged herself not just to learn to code, but to become "code confident". Lisi took us through her progress, and her process. It's always fascinating to me to hear how others approach problems; we are all involved in a problem-solving profession which requires creativity, deep awareness and clarity of thought. Lisi is definitely not the same as me, and I have no desire to learn to code myself, but I found much to reflect on in the differences in how we thought about the same kinds of problems and challenges.

Lisi gave some great advice for the whole process of achieving one’s challenge, from start to finish. She suggests that when you face a personal challenge, it helps to set a clear goal, and also to make it public so you are accountable. In Lisi’s case, this was to produce a working application, which she’d post on GitHub, as well as periodic updates and code snippets to demonstrate her progress on her blog.

It’s important to set yourself up for success by thinking about how you do your best work and figuring out how that maps to the work of your challenge. In Lisi’s case, this meant pairing on her challenge, but for others it might mean any variety of different styles and approaches – for example I like to use Trello boards and lists to get things done, because my thoughts are often highly chaotic and stuff easily gets missed or forgotten! Think about what’s worked for you before, and let that guide your approach. I like the idea here that you can take a lead from one part of your life (say, your career) and use it to show you how to approach problems in another area (e.g. your personal life).

Lisi advised us not to let a personal challenge become a chore, or overwhelm our personal lives. She had the great idea of choosing some markers in her personal life which would let her know if she was going overboard; in Lisi’s case, if she hadn’t played computer games in a week, she knew she was letting her challenge eat into the rest of her life.

Lisi showed us her progress, painstakingly working towards her first goal. A great observation was that confidence comes well before mastery – there can be a ton left to learn and you can still be achieving your aim. Lisi is confident now, but there’s still a ton to learn. Knowing that – knowing you won’t need to learn everything to feel “OK” at something – can help when you’re not yet at that point of the journey!

The final part of Lisi’s talk involved the full-on challenge of coding live on stage in front of a conference hall of testers! Suffice it to say Lisi did great – and even had she not, success or failure was not really the point: daring was! As Lisi put it, the difference is daring: daring to try. That is what confidence looks like!

Lisi peppered her talk with some inspiring faces and tweets which lit the way ahead of her.

After Lisi’s talk I found myself thinking back to Kwesi Peterson’s talk earlier in the day (“How I Learned to Be a Better Tester Through Practising “Humble Inquiry””), specifically his call on us to be vulnerable. Vulnerability is a key to opening up new avenues – by exposing ourselves to the risk of failure, we open ourselves to the possibility of growth (and vice versa!). Vulnerability takes many forms; Kwesi focused on our conversations and questions, but I think this common thread permeated Lisi’s talk too – how much more vulnerable can you get than coding live, with limited experience, in front of a room of software testers?! I definitely give Lisi points for vulnerability there.

But I also found myself reflecting on what was arguably the day’s most technical talk, from Saskia Coplans – “Threat Modelling: How Software Survives in a Hacker’s Universe“. I am the least technical tester you’re likely to meet* and as such I appreciated the lengths Saskia, someone very technical indeed, must have gone to in explaining how modelling security vulnerabilities in a system is a tactic hackers of all hats can use. She used the well-known model of the Death Star to break us into her world, and took us through the various styles of attack in ways we could relate to.

Two comments on comfort zones here: Saskia slowly expanded our existing comfort zones by introducing a familiar model and using straightforward descriptions of her concepts and work. But I can’t help imagining that approaching a non-technical crowd was also outside her own immediate comfort zone of being highly technical – she would have had to put herself into a non-technical person’s shoes to plan a talk like this, digestible to a room full of people with varying technical knowledge. Given both of these things, it was a great talk which meaningfully increased my knowledge of an area I’ve had limited direct contact with before now – so it expanded my own comfort zone!

Lisi’s talk reminded me that by starting, and then taking small steps, we can travel a really great distance in a year. Her advice to be mindful of our other commitments and establish strong behaviours to protect them (and, therefore, the rest of our lives) spoke volumes to me. I was genuinely inspired to see her face her fears live on stage, and I know many of the other attendees felt the same – including a friend of mine who immediately reached out to Lisi to thank her. All in all, this was a great wakeup call, full of practical tips.


What’s holding you back? What are you afraid of doing, but deep down really want to? Let me know in the comments!


Here’s my wife and I getting outside of our comfort zone in New Zealand earlier this year, on our honeymoon! Turns out when we told the jump people she wasn’t pregnant, we were mistaken…


Part 1 of this round-up published on Monday, read it here
Part 2 of this round-up published yesterday, read it here

* I actively cultivate and protect this aspect of what I have to offer as my USP – I believe that by maintaining a user’s non-technical perspective on things, I don’t get bogged down in technicality in the same way the majority of people I’m surrounded by can. This requires strong facilitation skills on my part, getting layman’s answers to technical questions from my peers – but in the right environment, it adds a ton of value. I certainly respect my more technical colleagues in testing, though!

My Other Passion Project - ENSIGN RED

This is about as indulgent as they come but I don't care. While this blog is mostly devoted to software testing, it also is the repository for many of my thoughts, ideas and half-baked schemes that I want to record and tell the world about in some way.

There's a running joke in the software testing world about the "rock star tester" and the fact that, at one point in my life, the first two words better described me than the third word did. In truth, I was never comfortable with the term "rock star", mainly because even at the most celebrated our material ever became, any "stardom" was a very local and niche kind of thing. I've actually been OK with that. I got to do all of the things I set out to do as a musician (well, most of them, in any event) without having to experience too many of the downsides. At the age of twenty-five, I chose to "retire" and live a more normal kind of life, one which involved a more standard kind of career and a family. Still, the desire to create, to write, to express myself in what is basically bad poetry, and to sing on a stage has never been far from my thoughts. Now, I'm actively enjoying that process once again and I'd like to tell you all about it.

I am the lead singer of a hard rock/heavy metal band called Ensign Red!



First, a disclaimer. I have no idea what any one person's definition of hard rock or heavy metal is. If you hear us and consider us metal, great. If you don't, that's fine too :).


Second, we are probably never going to win this battle of pronunciation but I'm going to state it anyway. Our name is "En-SIGN Red"! Think of a flag. An Ensign. Specifically, we are named after the red flag pirates would raise that meant "no quarter given" (more appropriately referred to as "The Red Ensign"). We switched the words because we felt it sounded more interesting. We've also heard people refer to us as "EN-sin Red", as though it were the formal name of a person holding the military rank. The fact is, many people find it easier to say the latter, so we're cool with either. It's all good :).

Third, this is a project that takes up a fairly large amount of my time and attention and I want a place to talk about it. I want to talk about the creative process of songwriting, of oddball source materials, of inspirations that make their way into our songs. Some things will be kept secret (so to speak) but otherwise, any thoughts or questions about being a later-in-life band member trying to exercise a bit of creativity are totally open.

I may even drop in some software testing relevant content with these posts from time to time. Who knows. In any event, here's to future days :).



A Sympathetic Sceptic


X: What's your dream job title?

Me: Sympathetic Sceptic.

X: ?

Me: I want to help people to get to the best thing they can, given their constraints, for their definition of best, at this time.

X: ...

Me: I want to help them by probing the idea of the thing, and the motivation for it, and the way it is made, and any other factors that are relevant and important to them right now.

X: ... 

Me: And I want to offer to propose potential factors too, for as long as they feel it's helpful.

X: ...

Me: And I want to suggest alternative perspectives, with varying degrees of viability, to compare their thing to, until they are happy enough with a version of their thing for their current purpose and costs.

X: ...

Me: And I want to do that with humility, remembering that it's their thing not mine, and they can do whatever they want with whatever I have to say.

X: ...

Me: Not that I've given it a lot of thought, you understand.
Image: https://flic.kr/p/krEfv

Regression Testing in Agile – All you need to know


Overview

In modern software development, regression testing plays a vital role in sustaining product behaviour as a whole. In this article, we will walk you through some concepts of regression testing in an Agile development approach.

What is regression testing?

Regression testing is a testing practice that helps ensure the application behaves as expected after any code changes, updates, or modifications. In a complex software system, even a mild code alteration can lead to dependencies, defects, or malfunctions. Regression testing emerges as an ultimate solution to mitigate these risks. Generally, the software has to pass multiple tests before the changes are integrated into the main development branch. Regression testing is the final step, ensuring the overall stability and functionality of the application.

What are typical circumstances to apply regression testing?

Regression testing is executed under these circumstances:
- A new requirement is added to an existing feature
- A new feature or functionality is added
- The codebase is fixed to solve defects
- The source code is optimized to improve performance
- Patch fixes are added
- Changes are made to configuration

What are the benefits of regression testing?

Regression testing is vital in software development because of these benefits: Regression testing locates
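The core idea in the excerpt above – pinning a previously fixed defect with a test that is re-run on every subsequent change – can be shown with a small sketch. This is an illustrative example in Python using pytest-style test functions; the function and the bug it guards are hypothetical, not taken from the article:

```python
# A minimal regression-test sketch. The function and the historical
# bug it guards are hypothetical, purely for illustration.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; the result is never negative."""
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)


def test_discount_never_negative():
    # Regression test pinned to a past defect: discounts over 100%
    # once produced negative prices. Re-running this on every code
    # change ensures the fix cannot silently regress.
    assert apply_discount(10.0, 150.0) == 0.0


def test_normal_discount_unchanged():
    # Guard existing behaviour too, not just the fixed bug.
    assert apply_discount(100.0, 25.0) == 75.0
```

In a CI pipeline these tests would run automatically on each of the circumstances listed above (new features, defect fixes, optimisations, configuration changes), which is what makes the suite a regression suite rather than a one-off check.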

The post Regression Testing in Agile – All you need to know appeared first on Abode QA.
