Founder and CEO of @TentamenHr. Organizer of Zagreb Software Testing Club meetups.

If tradition trumps common sense

It’s uncanny how easily tradition trumps common sense. Just recently I had a chat with a friend. He told me about how he had started out in the telecoms industry, working in a factory that made infrastructure devices.

One of their customers required a process called ‘burn-in’ for a specific circuit board. The units were powered up and placed in a large thermal chamber, where they went through a 24-hour cycle of heating and cooling. Each unit was tested before and after this process before it could be delivered to the customer. The idea was to stress-test all of the solder joints and reveal defects before the units shipped.

This burn-in phase took up a huge amount of resources and added over 24 hours to the production time. It was a profound productivity trap. But what could they do? The customer is always right, and the customer demanded it. This was the way it had always been done, so it must be done that way in the future too. Right?

Tradition trumps common sense.

Luckily, my friend was one of the few renegades who wasn’t convinced. Something had to be done, so they set out to demonstrate that the burn-in test was a relic of the previous century. They secretly put in the extra hours to collect test data from 200 units, both before and after the burn-in.

It turned out that the results barely changed in the process. The burn-in tests were completely unnecessary.
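A minimal sketch of the kind of before/after comparison that settles the question, with invented unit IDs and results standing in for the factory’s actual test records:

```python
# Hypothetical before/after pass-fail records for 200 units. A burn-in
# that catches real defects should flip some units from passing to
# failing; if (almost) nothing flips, the 24-hour phase adds time
# but no information.

def burn_in_catches(before: dict[str, bool], after: dict[str, bool]) -> int:
    """Count units that passed the incoming test but failed after burn-in."""
    return sum(1 for unit, passed in before.items() if passed and not after[unit])

before = {f"unit-{i:03d}": True for i in range(200)}  # all 200 passed beforehand
after = dict(before)                                  # results barely changed...
after["unit-042"] = False                             # ...say one marginal unit flipped

caught = burn_in_catches(before, after)
print(f"burn-in caught {caught} of {len(before)} units ({caught / len(before):.1%})")
```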

By going the extra mile and questioning the status quo, the team shaved more than 24 hours off the production line’s lead time. That kind of time-saver can easily be worth millions of dollars in a factory.

In software testing, I often see a similar pattern. There are test sets and testing phases that resemble the burn-in process, and it hasn’t occurred to the team to question the status quo. Tradition trumps common sense.

Only twice have I met a software testing team with seemingly endless resources. One worked in aviation; the other in medical devices. Human lives were at stake. But for the rest of us, there will always be a limited amount of resources. And in testing, the resources are usually minimal at best.

The only way to make good use of scarce resources is to constantly question which phases of the process are burn-in and which are an absolute must. If you need a good rule of thumb for questioning and prioritizing, the Pareto principle is a perfect starting point.

20% of the activities contribute 80% of the results. My approach is to identify that most important fifth of the activities and then double down on them.
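As a rough illustration, assuming you can attach a value score to each testing activity (bugs found, say, or risks covered; the names and numbers below are invented), picking the vital fifth is a simple sort-and-slice:

```python
# Hypothetical (activity, value) pairs -- e.g. significant bugs found last quarter.
activities = [
    ("exploratory sessions", 40), ("smoke suite", 35), ("API checks", 8),
    ("perf sanity", 5), ("install tests", 4), ("legacy UI suite", 3),
    ("burn-in regression", 2), ("doc review", 1), ("cert suite", 1), ("misc", 1),
]

ranked = sorted(activities, key=lambda a: a[1], reverse=True)
top_fifth = ranked[:max(1, len(ranked) // 5)]          # the vital 20%

total = sum(value for _, value in activities)
share = sum(value for _, value in top_fifth) / total
print([name for name, _ in top_fifth], f"-> {share:.0%} of the value")
```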

Don’t let tradition trump common sense in your team. Don’t insist on doing things right; instead, focus first on doing the right things!

Collaboration and roles, learning from Rugby union

I keep hearing teammates say things like

“it’s not my job to test, I am a <insert_role>” or “It’s not my job to design the product, I am a <insert_role>”

and I am quite tired of the behaviours this message causes when it is left unchecked.

A team is more than the sum of its parts; a team has the power of collaboration.

When I was young I used to play Rugby (union).

Rugby union is a highly specialised sport. In fact, the 15 players on the pitch are divided into two main silos, “forwards” and “backs”, and within those silos there are the following roles:

Forwards: 1. Loose-head prop, 2. Hooker, 3. Tight-head prop, 4 and 5. Lock, 6. Blind-side flanker, 7. Open-side flanker, 8. Number eight

Backs: 9. Scrum half, 10. Fly half, 11 and 14. Wing, 12. Inside centre, 13. Outside centre, 15. Fullback

Wow, 13 different roles for 15 people in the same team, more than the usual PO/BA/DEV/TEST/UX roles we find in modern agile teams. How come they are able to collaborate so effectively?

The difference is that nobody in a rugby team will ever use a sentence of this type:

“I am not doing X because my role is Y”

In fact, the very best rugby players are not the super-specialists but the ones who are good at every skill and activity required to play rugby. (Research the case of Brian O’Driscoll, to me the synthesis of excellence in collaboration skills.)

When there is a ruck two metres from the goal line, you will see the ten-stone (63 kg) scrum half stick his head in and push 15-plus-stone (95 kg+) players away from his goal line.

He won’t say, “I’m a scrum half, I don’t do rucks.” I guarantee he won’t, because if he did he would lose the respect of his teammates, his coach, and his fans, and never play the game again.

Why do rugby players collaborate so well even though they are such a specialised group? Because they have one clear goal: to score more points than the opponents. They all get that, and they do their utmost to help their teammates achieve it.

Why are agile teams not collaborating like rugby players?

One of the reasons is that they don’t see a common goal in the customer value to be delivered; instead they see the beauty of the “elegant code”, the “smart test strategy”, the “beautiful solution”, the outstanding “user experience”, and so on.

So if you want your team to collaborate better, you have to give them a common cause to fight for. And just to save you time: it is not lines of code, story points, tests passed, or the number of bugs (or lack thereof). It is something bigger and more important.

Discover what it is together with your team.

Lessons From Bob Ross

Recently my girlfriend has been watching a lot of Bob Ross. I never really knew much about him. I vaguely remember him being on the TV when I was a kid, along with some references to him in shows like Family Guy.

But I had never really watched him. Watching how he paints is incredibly calming. I was curious about his attitude to life, as he uses expressions like, “We don’t make mistakes, just happy little accidents.” Or, “Talent is a pursued interest. Anything that you’re willing to practice, you can do.”

So I did a little research on him. It turns out he was in the military before painting, and used to have to shout at people all day. He decided that when he left the military, he would never raise his voice again.

So years of shouting at people to do their jobs turned Bob Ross into someone people now watch to relax, because he is so calm. What can we learn from this?

Well, maybe being calm is infectious? People have actually done studies into Bob Ross and his recent resurgence in fame. He has become very famous online; he was recently one of the most-streamed artists on Twitch, and folks have been using his videos to help with their anxiety. Bob realised that being calm and helping others would improve his own life.

Could we do this in our day-to-day lives? On the team you work in, is it easier to blame others or to try and help? Personally, I would say it is much easier to blame someone else and pass the problem along. Do you ever catch yourself saying, “That’s above my pay grade,” or “That’s another team’s problem”?

I have definitely used that last excuse before. But it doesn’t fix anything, and it also doesn’t make me feel any better. I don’t get home from work feeling like I have done a good job.

It also doesn’t allow for collaboration within a company. So instead of quickly getting annoyed and blaming someone else, maybe we could try Bob’s approach: calmly turn the mistake into a happy little accident, and use empathy to help solve the problem.

Instead of passing the problem along, try to fix it by working with the other team. Maybe it’s a problem between test and dev: can we pair together and see whether understanding each other’s viewpoint helps? Or is it a problem between the front-end team and the back-end team: could we mix up the teams’ skills for a week and see what happens when we work together to fix the problem?

Yes, this is much easier said than done. But I think if we start small and try to help others, we will find our jobs much more rewarding.

I will leave you with some wise words from Bob Ross: “Didn’t you know you had that much power? You can move mountains. You can do anything.”

The Value in Values

The testers at Linguamatics decided to explore the adoption of a set of team values and this short series of posts describes how we got to them through extended and open discussion.

If you find the posts read like one of those "what I did in the holidays" essays you used to be forced to write at school then I'll have achieved my aim. I don't have a recipe to be followed here, only the story of what we did, in the order we did it, with a little commentary and hindsight.
  • Introduction
  • Why?
  • Teasing Them Out
  • Living With Them
  • Reflection
--00--

Our team provides testing services to other teams in the company, in their contexts. That means we cover a selection of products, domains, and technologies across several development groups, operations, professional services projects, our internal compliance process, and more.

In terms of methodology, we are in permanent Scrum teams, we join time-bounded projects set up to implement a particular feature or satisfy a particular customer need, and we work day-to-day with groups whose priorities and resources are subject to change at very short notice.

In spite of the varied nature of our assignments it's historically been our desire to maintain strong team bonds and an information-sharing culture and so we've engineered some formal and informal opportunities to do that.

Amongst other things, each week we have a catch-up with some kind of presentation (such as a feature, a tool, an approach), we have a daily stand up (roughly: prefer outcomes over outputs, share issues, ask for data or help), and we have a tradition of optional, opportunistic, 5-10 minute overviews on topics that are potentially interesting right now but too deep for stand up.

We also have a regular team retrospective in which we allow ourselves to discuss pretty much anything about the way we work. It tends to stay out of project specifics — because they'll be discussed within the projects — but recent topics have included dedicating time to shortening the run time of a particular test suite to enable developers to get faster feedback from it, creating a specific type of virtual machine for us to share, and reviewing how we schedule work.

At the start of 2018, a retro topic that I had proposed after hearing Keith Klain speak at Quality Jam 2017 was voted up. In his talk, Keith said that one of the things he likes to see in a team is a shared understanding of the important factors that frame how they work. Based on that, I asked: should we establish a set of team values, principles, or a mission statement?

The resulting discussion generated enthusiasm. And questions, naturally. They included:
  • What do we want to create: a defined mission? principles? values?
  • ... and how do these things relate to one another?
  • It shouldn't be be too low-level; it should apply across teams, projects, and so on.
  • It shouldn't be restrictive or prescriptive; there should be flexibility.
  • It should be a framework for decision-making, not a decision-maker.
  • Do we really need anything different to the company values?
  • Do we want it to change the way we work, or encapsulate the way we work?
  • Do we want others in the company to see it?
  • ... and might it change how others see us?

None of us had ever tried to externalise group values before, so we began by researching what others had done. Here are a few examples from within the testing space:

Some of these were published after we started, so they didn't have as much chance to influence what we did. Iain McCowatt's Principles Not Rules was inspiring to me, but is unavailable as I write this. It's such strong material that I've left the links in the list above in the hope that it'll come back. Small comfort: I saw his talk on the same topic at EuroSTAR 2015 and a handful of my notes are here.

Outside of testing, in development and more generally, we looked at pieces like these:

Closer to home, we observed that our company has some useful data to contribute: our corporate values published on the internal wiki, and a set of informal values that are regularly called out verbally at all-hands meetings.

Finally, we looked to see whether values are encoded implicitly in our tester job adverts, which include lines like these:
  • We strive to provide relevant information to stakeholders and we're flexible about how we do it.
  • We use and we eagerly solicit peer review, we’re open to new ideas, and we perform regular retrospectives to help us improve ourselves and our work.
  • Our company respects what we do, and we’re a core part of our company’s work and culture.
  • Linguamatics is active in the local testing community, regularly hosting meetups and Lean Coffee.
  • We have regular in-house training in testing and other skills.
  • If you get the job, you will be expected to
  • ... take responsibility for your work,
  • ... apply intelligence and judgement at all times,
  • ... be able to justify your position and be prepared to discuss alternatives,
  • ... look for ways to improve yourself, your work, the team and the company.

To summarise how we started down this road, then:
  • We wondered if we should think about making our implicit shared values explicit.
  • We discussed it, and decided that we'd give it a go.
  • We did some research to see what was out there, and what we already had.

In the next few posts I'll describe how we moved from this point to a set of values that we can agree on as a team.
Image: https://flic.kr/p/oGMUQ

Breaking the Test Case Addiction (Part 2)

Last time out, I was responding to a coaching client, a tester who was working in an organization fixated on test cases. Here, I’ll call her Frieda. She had some more questions about how to respond to her managers.

What if they want another tester to do your tests if you are not available?

“‘Your tests’, or ‘your testing’?”, I asked.

From what I’ve heard, your tests. I don’t agree with this, but I’m trying to see it from their point of view, said Frieda.

I wonder what would happen if we asked them “What happens when you want another manager to do your managing if you are not available?” Or “What happens when you want another programmer to do your programming if the programmer is not available?” It seems to me that the last thing they would suggest would be a set of management cases, or programming cases. So why the fixation on test cases?

Fixation is excessive, obsessive focus on something to the exclusion of all else. Fixation on test cases displaces people’s attention from other important things: understanding of how the testing maps to the mission; whether the testers have sufficient skill to understand and perform the testing; the learning that comes from testing and feeds back into more testing; whether formalization is premature or even necessary…

A big problem, as I suggested last time, is a lack of managers’ awareness of alternatives to test cases. That lack of awareness feeds into a lack of imagination, and then loops back into a lack of awareness. What’s worse is that many testers suffer from the same problem, and therefore can’t help to break the loop. Why do managers keep asking for test cases? Because testers keep providing them. Why do testers keep providing them? Because managers keep asking for them, because testers keep providing them…, and the cycle continues.

That cycle also continues because there’s an attractive, even seductive, aspect to test cases: they can make testing appear legible. Legibility, as Venkatesh Rao puts it beautifully here, “quells the anxieties evoked by apparent chaos”.

Test cases help to make the messy, complex, volatile landscape of development and testing seem legible, readable, comprehensible, quantifiable. A test case either fails (problem!) or passes (no problem!). A test case makes the tester’s behaviours seem predictable and clear, so clear that the tester could even be replaced by a machine. At the beginning of the project, we develop 782 test cases. When we’ve completed 527 of them, the testing is 67.39% done!
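That kind of dashboard arithmetic is seductive precisely because it is so easy to compute. A sketch of the calculation, using the numbers from the parody above, with the assumptions it quietly makes spelled out:

```python
# The legible-but-misleading metric: completed test cases over planned.
planned, completed = 782, 527
print(f"Testing is {completed / planned:.2%} done")  # -> "Testing is 67.39% done"

# Hidden assumptions: every case is equally valuable, the 782 cases cover
# everything worth knowing, and "done" means "nothing left to learn".
# None of these holds in a real investigation.
```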

Many people see testing as rote, step-by-step, repetitive, mechanical keypressing to demonstrate that the product can work. That gets emphasized by the domain we’re in: one that values the writing of programs. If you think keypressing is all there is to it, it makes a certain kind of sense to write programs for a human to follow so that you can control the testing.

Those programs become “your tests”. We would call them “your checks”, where checking is the mechanistic process of applying decision rules to observations of the software.
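A minimal sketch of what a check looks like in code; the product, the names, and the decision rule here are all invented for illustration:

```python
# A check: apply a decision rule to an observation of the product and
# emit a bit. Everything below runs without human judgement, which is
# exactly what makes it checking rather than testing.

def observed_total(cart_items: list[float]) -> float:
    """Stand-in for observing the product: the cart total it displays."""
    return round(sum(cart_items), 2)

def check_total(cart_items: list[float], expected: float) -> bool:
    """Decision rule: the observed total equals the expected value."""
    return observed_total(cart_items) == expected

assert check_total([19.99, 5.01], 25.00)  # passes -- as far as this rule can see
```

The check is blind to anything its rule doesn’t encode: a garbled label beside the total, a ten-second delay, a crash on the next screen. Noticing such things is part of the testing that surrounds the check.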

On the other hand, if you are willing to recognize and accept testing as a complex, cognitive investigation of products, problems, and risks, your testing is a performance. No one else can do it just as you do. No one can do again just what you’ve done before. You yourself will never do it the same way twice. If managers want people to do “your testing” when you’re not available, it might be more practical and powerful to think of it as “performing their investigation on something you’ve been investigating”.

Investigation is structured and can be guided, but good investigation can’t be scripted. That’s because in the course of a real investigation, you can’t be sure of what you’re going to find and how you’re going to respond to it. Checking can be algorithmic; the testing that surrounds and contains checking cannot.

Investigation can be influenced or guided by plenty of things that are alternatives to test cases:

Last time out, I mentioned almost all of these as things that testers could develop while learning about the product or feature. That’s not a coincidence. Testing happens in tangled loops and spirals of learning, analysis, exploration, experimentation, discovery, and investigation, all feeding back into each other. As testing proceeds, these artifacts and—more importantly—the learning they represent can be further developed, expanded, refined, overproduced, put aside, abandoned, recovered, revisited…

Testers can use artifacts of these kinds as evidence of testing that has been done, problems that have been found, and learning that has happened. Testers can include these artifacts in test reports, too.

But what if you’re in an environment where you have to produce test cases for auditors or regulators?

Good question. We’ll talk about that next time.

Breaking the Test Case Addiction (Part 1)

Recently, during a coaching session, a tester was wrestling with something that was a mystery to her. She asked:

Why do some tech leaders (for example, CTOs, development managers, test managers, and test leads) jump straight to test cases when they want to provide traceability, share testing efforts with stakeholders, and share feature knowledge with testers?

I’m not sure. I fear that most of the time, fixation on test cases is simply due to ignorance. Many people literally don’t know any other way to think about testing, and have never bothered to try. Alarmingly, that seems to apply not only to leaders, but to testers, too. Much of the business of testing seems to limp along on mythology, folklore, and inertia.

Testing, as we’ve pointed out (many times), is not test cases; testing is a performance. Testing, as we’ve pointed out, is the process of learning about a product through exploration and experimentation, which includes to some degree questioning, studying, modeling, observation, inference, etc. You don’t need test cases for that.

The obsession with procedurally scripted test cases is painful to see, because a mandate to follow a script removes agency, turning the tester into a robot instead of an investigator. Overly formalized procedures run a serious risk of over-focusing testing and testers alike. As James Bach has said, “testing shouldn’t be too focused… unless you want to miss lots of bugs.”

There may be specific conditions, elements of the product, notions of quality, interactions with other products, that we’d like to examine during a test, or that might change the outcome of a test. Keeping track of these could be very important. Is a procedurally scripted test case the only way to keep track? To guide the testing? The best way? A good way, even?

Let’s look at alternatives for addressing the leaders’ desires (traceability, shared knowledge of testing effort, shared feature knowledge).

Traceability. It seems to me that the usual goal of traceability is to be able to narrate and justify your testing by connecting test cases to requirements. From a positive perspective, it’s a good thing to make those connections to make sure that the tester isn’t wasting time on unimportant stuff.

On the other hand, testing isn’t only about confirming that the product is consistent with the requirements documents. Testing is about finding problems that matter to people. Among other things, that requires us to learn about things that the requirements documents get wrong or don’t discuss at all. If the requirements documents are incorrect or silent on a given point, “traceable” test cases won’t reveal problems reliably.

For that reason, we’ve proposed a more powerful alternative to traceability: test framing, which is the process of establishing and describing the logical connections between the outcome of the test at the bottom and the overarching mission of testing at the top.

Requirements documents and test cases may or may not appear in the chain of connections. That’s okay, as long as the tester is able to link the test with the testing mission explicitly. In a reasonable working environment, much of the time, the framing will be tacit. If you don’t believe that, pause for a moment and note how often test cases provide a set of instructions for the tester to follow, but don’t describe the motivation for the test, or the risk that informs it.
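To make the shape of a framing chain concrete, here is one hypothetical example (every detail invented), linking a single test outcome at the bottom to the mission at the top:

```python
# A framing chain, top to bottom. Each entry answers "why?" for the one below it.
framing = [
    "Mission: find problems that could block the enterprise release",
    "Risk: mixed-locale number formats could corrupt invoice totals",
    "Strategy: probe the currency fields with malformed locale input",
    "Test: submit '1.234,56' while the account locale is set to en_US",
    "Outcome: the total renders as $1.23, inconsistent with user intent",
]
for above, below in zip(framing, framing[1:]):
    print(f"{below}\n    ...because: {above}")
```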

Some testers may not have sufficient skill to describe their test framing. If that’s so, giving test cases to those testers papers over that problem in an unhelpful and unsustainable way. A much better way to address the problem, I believe, would be to train and supervise the testers to be powerful, independent, reliable agents, with the freedom to design their work and the responsibility to negotiate it and account for it.

Sharing efforts with stakeholders. One key responsibility for a tester is to describe the testing work. Again, using procedurally scripted test cases seems to be a peculiar and limited means for describing what a tester does. The most important things that testers do happen inside their heads: modeling the product, studying it, observing it, making conjectures about it, analyzing risk, designing experiments… A collection of test cases, and an assertion that someone has completed them, don’t represent the thinking part of testing very well.

A test case doesn’t tell people much about your modeling and evaluation of risk. A suite of test cases doesn’t either, and typical test cases certainly don’t do so efficiently. A conversation, a list, an outline, a mind map, or a report would tend to be more fitting ways of talking about your risk models, or the processes by which you developed them.

Perhaps the worst aspect of using test cases to describe effort is that tests—performances of testing activity—become reified, turned into things, widgets, testburgers. Effort becomes recast in terms of counting test cases, which leads to no end of mischief.

If you want people to know what you’ve done, record and report on what you’ve done. Tell the testing story, which is not only about the status of the product, but also about how you performed the work, and what made it more and less valuable; harder or easier; slower or faster.

Sharing feature knowledge with testers. There are lots of ways for testers to learn about the product, and almost all of them would foster learning better than procedurally scripted test cases. Giving a tester a script tends to focus the tester on following the script, rather than learning about the product, how people might value it, and how value might be threatened.

If you want a tester to learn about a product (or feature) quickly, provide the tester with something to examine or interact with, and give the tester a mission. Try putting the tester in front of

  • the product to be tested (if that’s available)
  • an old version of the product (while you’re waiting for a newer one)
  • a prototype of the product (if there is one)
  • a comparable or competitive product or feature (if there is one)
  • a specification to be analyzed (or compared with the product, if it’s available)
  • a requirements document to be studied
  • a standard to review
  • a user story to be expanded upon
  • a tutorial to walk through
  • a user manual to digest
  • a diagram to be interpreted
  • a product manager to be interviewed
  • another tester to pair with
  • a domain expert to outline a business process

Give the tester the mission to learn something based on one or more of these things. Require the tester to take notes, and then to provide some additional evidence of what he or she learned.

(What if none of the listed items is available? If none of that is available, is any development work going on at all? If so, what is guiding the developers? Hint: it won’t be development cases!)

Perhaps some people are concerned not that there’s too little information, but that there’s too much. A corresponding worry might be that the available information is inconsistent. When important information about the product is missing, or unclear, or inconsistent, that’s a test result with important information about the project. Bugs breed in those omissions or inconsistencies.

What could be used as evidence that the tester learned something? Supplemented by the tester’s notes, the tester could

  • have a conversation with a test lead or test manager
  • provide a report on the activities the tester performed, and what the tester learned (that is, a test report)
  • produce a description of the product or feature, bugs and all (see The Honest Manual Writer Heuristic)
  • offer proposed revisions, expansions, or refinements of any of the artifacts listed above
  • identify a list of problems about the product that the tester encountered
  • develop a list of ways in which testers might identify inconsistencies between the product and something desirable (that is, a list of useful oracles)
  • report on a list of problems that the tester had in fulfilling the information mission
  • in a mind map, outline a set of ideas about how the tester might learn more about the product (that is, a test strategy)
  • list out a set of ideas about potential problems in the product (that is, a risk list)
  • develop a set of ideas about where to look for problems in the product (that is, a product coverage outline)

Then review the tester’s work. Provide feedback, coaching and mentoring. Offer praise where the tester has learned something well; course correction where the tester hasn’t. Testers will get a lot more from this interactive process than from following step-by-step instructions in a test case.

My coaching client had some more questions about test cases. We’ll get to those next time.
