152: Lisa Crispin

 


Joe Krebs speaks with Lisa Crispin about Agile Testing and product quality in general. Together with Janet Gregory, she co-authored the books Agile Testing, More Agile Testing, and Agile Testing Condensed, which many of you have probably heard of.

 

Transcript:

Agile FM radio for the agile community.

[00:00:04] Joe Krebs: In today's episode of Agile FM, I have Lisa Crispin with me. She can be reached at the very easy to remember lisacrispin.com. Lisa is an author of a total of five books. There are three I want to highlight here, or four actually. Obviously, a lot of people have talked about Agile Testing: A Practical Guide for Testers and Agile Teams, which came out in 2009.

Then, following that, More Agile Testing, right? Then I thought it would be Most Agile Testing, but it turned into Agile Testing Condensed in 2019, and just very recently a downloadable mini book, Holistic Testing. Welcome to the podcast, Lisa.

[00:00:47] Lisa Crispin: Thank you for inviting me. I'm honored to be part of the podcast.

You've had so many amazing people on so many episodes. So it's great.

[00:00:54] Joe Krebs: Thank you. And now it's one more with you. So thank you for joining. And we will be talking about a totally different topic than maybe the last 20 episodes. Way back I did some testing topics, but not in the last 20 episodes or so, as far as I can recall.

So we're now about testing, a super important topic. I would not consider myself an expert in it, and I don't know if the audience who has been listening to maybe the last 20 episodes is very familiar with agile testing. Maybe everybody has a feeling about what testing means when they hear the word, but there is a huge difference between agile testing

and, let's say, traditional testing methods. If you just want to summarize very briefly, I know a lot of people are familiar with some of those things, but if somebody asks what agile testing is, how is it different from traditional testing methods?

[00:01:47] Lisa Crispin: Yeah. I think that there are a couple of big differences.

One is that testing, and this is just a truth, not necessarily something to do with agile, is really just part of software development. So many people think of it as a phase that happens after you write code, but in modern software development we're testing all the time, all the way around that whole DevOps loop, really.

And so the whole team's getting engaged in it through the whole lifecycle, and the focus is on bug prevention rather than bug detection. Of course, we want to detect the bugs that make it out to production so we can fix them quickly. But really what we want to do is prevent those bugs from happening in the first place.

So there are all these great practices that were popularized by extreme programming and agile, things like test-driven development, continuous integration, test automation, and all the things that go into planning workshops, where we talk about our new features, break them into stories, and figure out what's going to be valuable to customers, having those early conversations, getting that shared understanding. Things like behavior-driven development, where we think about what we're going to code before we code it.

That's all really different from, I guess I would say, a more traditional software development approach where, oh, we focus on these requirements, and a lot of people think about testing as just making sure it met the requirements. But there's so much more to it. We've got all these quality attributes, like security and performance, and all the other things that we also need to test.

So it's a huge area, but it's woven into software development, just like coding, just like design, just like architecture, just like monitoring and observability. It's all part of the process.

[00:03:31] Joe Krebs: Yeah. It's like QA baked in, if you want to see it this way. And then also the automation of all that, right?

So automating everything you just said is probably also a concern. Not that that's necessarily new to agile, but that's a focus as well now. I don't necessarily have data points around this, but I have worked with a lot of Scrum teams and Agile teams in my career.

And it seems, if somebody were to ask what the challenges within these teams are, one that you can almost always highlight, and I say almost purposely because there are good exceptions, is building an increment of work once per sprint. A lot of teams do not accomplish that, and it's often related to testing activities.

Why is that, in your opinion? When we're seeing these teams struggle to put out an increment of work, or a piece of the product, or whatever you want to call it if you don't necessarily use Scrum, they're struggling to build something that could potentially go out, something that meets the quality standards for going out. What are the struggles out there for teams, especially on the testing side?

I see that, as you just said, testing always happens, or often happens, at the end rather than at the front.

[00:04:46] Lisa Crispin: Yes. Unfortunately, I still see a whole lot of Scrum teams and other agile teams doing a mini waterfall, where they have testers on the cross-functional team, but the testers are not being involved in the whole process, and the developers aren't taking up practices like test-driven development, because those things are hard to learn.

And a lot of places don't enable the non-testers to learn testing skills, because they don't put those skills into the skills matrix that those people need to advance their careers. In the places I've worked where we succeeded with this sort of whole-team, holistic approach, everybody had testing skills in their skills matrix.

And we all had to learn from each other, and testers had other skills in their skills matrix, like database skills and at least the ability to read code and be able to pair or ensemble with somebody. So that's part of it. And I just think people don't focus enough on that early process: the business stakeholder has brought us a new feature.

We need to test that idea before we do anything. Is this really giving value? What's the purpose of the feature? What value is it going to give to the customer and to the business? A lot of times we don't ask those questions up front, and the stakeholders don't ask themselves, and then you deliver the feature and it's something the customers didn't even want.

[00:06:11] Joe Krebs: Lisa, we need to code. We need to get software. Why would we talk about that? Why would we not just code? I'm kidding.

[00:06:18] Lisa Crispin: Yeah. Yeah. And that's also required; that's why the whole concept of a self-organizing team works really well. When you really let the teams be autonomous, they can think about how we can best accomplish this. They can say:

Let's do some risk storming before we try to slice this into stories, and let's use good practices to slice that feature into small, consistently sized stories that give us a reliable, predictable cadence the business can plan around. Take those risks that we identified, get concrete examples from the business stakeholders of how this should behave, and turn those into tests that guide development.

Then we can automate those tests, and now we have regression tests to provide a safety net. So that all fits together. And of course, these days we also need to put effort into the right side of the DevOps loop. We're not going to prevent all the bugs. We're not going to know about all the unknown unknowns, no matter how hard we try.

And these cloud architectures are very complex. Our test environments never look like production, so there's always something unexpected that happens. So we have to really do a good job with the telemetry for our code, gathering all the data, all the events, all the logging data for monitoring, for alerting, and also for observability, in case something happens that we didn't anticipate, so it wasn't on our dashboard.

We didn't have an alert for it. We need to be able to quickly diagnose that problem and know what to do. And if we don't have enough telemetry for diagnosing that problem without having to go back, add more logging, and redeploy to production just to figure it out... oh, how many times has my team done that?

That's all part of it. And then learning from production using those tools. We've got fantastic analytics tools these days. Learning from those: what do the customers do? What was most valuable to them? What did they do when they... especially, I've mostly worked on web applications.

What did they do when we released this new feature in the UI? How did they use it? We can learn that; we know that stuff now. So that feeds back into what changes we should make next.
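A minimal sketch of what that telemetry might look like in practice, assuming a Python service that emits structured JSON log lines for monitoring, alerting, and diagnosis; the logger name, events, and fields here are hypothetical illustrations, not anything mentioned in the episode.

```python
import json
import logging
import time
import uuid

# Each event is one JSON line that a monitoring/observability pipeline
# could index for dashboards, alerts, and later diagnosis.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")

def emit(event: str, **fields):
    # Attach a trace id and timestamp so one user action can be followed
    # across services when an unanticipated problem needs diagnosing.
    record = {
        "event": event,
        "ts": time.time(),
        "trace_id": fields.pop("trace_id", str(uuid.uuid4())),
        **fields,
    }
    log.info(json.dumps(record))

trace = str(uuid.uuid4())
emit("payment_started", trace_id=trace, amount_cents=4999, currency="USD")
emit("payment_failed", trace_id=trace, reason="card_declined", retry=False)
```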

[00:08:29] Joe Krebs: All right. So it comes full circle, right? What's interesting is there's this company that's all over the news.

It's Boeing, right? We're recording this in 2024, with quality-related issues in the news. Now, that is an extreme example, obviously, but we do have these kinds of aha and wake-up moments in software development too, right? We're shipping products, and I remember times when testing, and I purposely call it testing and not QA, when testing personnel was outsourced.

That was many years ago. We actually felt, oh, this activity can be outsourced somewhere else. And you just made a point about how important it is if we have self-organizing teams, where we're starting with testing and feeding the end of the loop back into the development efforts. How we treated these activities in the past, and what we thought of them, is shocking now looking back in 2024, isn't it?

[00:09:23] Lisa Crispin: Yeah, it just became so much a part of our lives to run into that. And the inevitable happened: it generally didn't work very well. I've actually known somebody who led an outsourced test team in India and was working with companies in the UK and Europe.

They actually were able to take an agile approach and keep the testers involved through the whole loop. They had to work really hard to do that, and there were a lot of good practices they embraced to make that work. But you have to be very conscious of it, and both sides have to be willing to do that extra work.

[00:09:56] Joe Krebs: You just mentioned that there are some really cool analytics tools. I don't know if you want to share any of those, because you seem very excited about this.

[00:10:05] Lisa Crispin: The one that I found the most useful, and a couple of different places I worked at used it,

is called FullStory. It actually captures all the events that are happening in the user interface and plays them back for you as a screencast. Now, it does block anything they type in; it keeps it anonymized. But you can see the cursor. And I can remember one time, on a team I was on, we put a whole new page in our UI, a new feature.

We thought people would really love it, and we worked really hard on it. We tried to do a minimum viable version of it, but we still put some effort into it, and we put it out there. And then we looked at the analytics in FullStory, and we could see that people got to the page, their cursor moved around, and then they navigated off the page.

So either it wasn't clear what that page was for, or they just couldn't figure it out. So that was really valuable. It was like, okay, can we come up with a new design for this page, if we think that's what the problem is? Or, okay, that was a good learning opportunity. But as a tester especially, it helps because sometimes we can't reproduce problems: we know there's a problem in production, but we can't reproduce it.

But if we go and watch a session where somebody had the problem, we can see it. And there are other tools: Mixpanel won't play it back for you, but you can see every step that the person did. And even observability tools like Honeycomb and LightStep can trace the whole path of what the user did.

And that really helps us not only understand the production problem, but also see, oh, there's a whole scenario we didn't even think about testing. There's so much we can learn, because we're so bound by our cognitive biases, our unconscious biases: we know how we wanted it to work.

[00:11:54] Joe Krebs: Yeah.

[00:11:55] Lisa Crispin: And it's really hard to think outside the box and get away from your biases and really approach it like a customer who never saw it before would do.

[00:12:03] Joe Krebs: Yeah. This is the typical thing, right? A software engineer demonstrates their own software that they produced, and it's like, works on my machine. I'm sure you have heard that.

And it's obvious that you would do this, right? It's just not necessarily obvious for somebody else. But if you're sitting in front of a screen developing something for a long time, it just becomes natural that you would be working like this. I myself have engineered software and fell into that trap, right?

It's, oh my God, an eye-opening event when somebody else looks at it. Yeah.

[00:12:33] Lisa Crispin: Even when you have different people look at it, you can miss things. I can remember an occasion, on a team I was on, again with a web application, where I had just made a change in the UI, just adding something, and I tested it. My manager tested it.

One of the product owners tested it. And we all thought it looked great, and it did look great. We didn't notice the other thing we had broken on the screen until we put it in production and customers were like, hey! So I really do think things like pair programming, pair testing, and working in ensembles for both programming and testing help with doing all the work together.

Getting those diverse viewpoints does help hugely with that. My theory is we all have different unconscious biases, so maybe if we're all together, somebody will notice a problem. I don't have any science to back that up, but that's why those kinds of practices are especially important.

[00:13:28] Joe Krebs: Yeah.

[00:13:28] Lisa Crispin: To catch as many things as we can.

[00:13:30] Joe Krebs: Yeah. So neither of us has any science to back this up, but let's talk a little bit about science, okay? Because metrics, data points, evidence. What are some of the KPIs for somebody who listens to this and says, oh, that sounds interesting, and we definitely have shortcomings in testing activities within our Agile teams?

Obviously there's the traditional way of testing, which uses very different data points. I have used some in the past, and I just want to verify those with you too, whether they're even useful and still up to date. What would be some good KPIs? When somebody approaches you, what do you tell them they've got to have on their dashboard?

[00:14:08] Lisa Crispin: I actually think one of my favorite metrics to use is cycle time, although that encompasses so many things, but just watching trends in cycle time. For example, if you've got good test coverage with your automated regression tests, you're going to be able to make changes really quickly and confidently.

And if you have a good deployment pipeline, you're going to deliver faster. Again, there's a lot of testing that goes into making sure your infrastructure is good and your pipeline is performing as it should, because it's all code too. So cycle time reflects a whole lot of things. It's hard to isolate one thing in your cycle time, but what counts is: how consistent are we at being able to frequently deliver small changes?

So I think that's an important one. And in terms of whether we caught all the problems, I think it gets really dangerous to do things like, oh, let's count how many bugs got into production, because all measures can be gamed, and that's a really easy one to game. But things like: how many times did we have to roll back or revert a change in production?

Because there was something we didn't catch, and hopefully we detected it ourselves with an alert or with monitoring before the customers saw it. And now we have so many release strategies, like canary releases or blue-green deploys, so that we can do testing in production safely.

But still, how many times did we have to roll back? How many times did we get to that point and realize we didn't catch everything? That can be a good thing to track, depending on what caused it. If we had a production failure because somebody pulled the plug of the server out of the wall,

that's not on us; that's just something that happened. But if it is something where the team's process failed in some way, we want to know about that. We want to improve it. And just how frequently can we deploy? With continuous delivery, so many teams are trying to practice that, and you're not going to succeed at it

if you're not preventing defects, and if you don't have good test automation, good automation the whole way through.

[00:16:08] Joe Krebs: Yeah.

[00:16:08] Lisa Crispin: And I think deployment frequency, that's another one of the DORA key metrics. That's one we know correlates with high-performing teams.

And of course we shouldn't ignore how people feel: are people burned out, or do they feel happy about their jobs? That's a little harder metric to get. At my last full-time job, I was on a team where we really focused on cycle time as a metric, and we didn't really have that many problems in production.

So we didn't really bother to track how many times we had to revert, because we were doing a good job there. But how frequently were we delivering? What was our cycle time? We also did a little developer joy survey: once a week, we sent out a little five-question survey based on Amy Edmondson's work.

Now I would also use Nicole Forsgren's SPACE model, but this was just a little before that came out. We were just asking a few questions, rated from one to five: how did you feel about this? And it was really interesting, because over time, when cycle time was longer, developer joy was down.

So there's something happening here: people are not happy, and something's going wrong that's affecting our cycle time. And the reverse was true: when our cycle time was shorter, joy went up. So I think it's important, and you don't have to get real fancy with your measurements; just start. I think you should first focus on what you are trying to improve and then find a metric to measure that.
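A minimal sketch of tracking the cycle-time trend described here, assuming cycle time is measured from when work starts to when it is deployed; the work items and timestamps are made up for illustration.

```python
from datetime import datetime
from statistics import median

fmt = "%Y-%m-%d %H:%M"

# Hypothetical work items: (work started, deployed to production).
items = [
    ("2024-03-01 09:00", "2024-03-03 16:00"),
    ("2024-03-04 10:00", "2024-03-05 11:30"),
    ("2024-03-05 09:15", "2024-03-08 17:45"),
]

# Cycle time in days for each item.
cycle_times_days = [
    (datetime.strptime(done, fmt) - datetime.strptime(start, fmt)).total_seconds() / 86400
    for start, done in items
]

# The trend matters more than any single number: a rising median over
# several sprints is the signal to dig into what changed.
print(f"median cycle time: {median(cycle_times_days):.1f} days")
```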

[00:17:41] Joe Krebs: I'm glad you mentioned code coverage. That's one of those I mentioned earlier; I've been working with it quite a bit, and with cycle time. Very powerful stuff. Now, you're somebody who has written and published about agile testing extensively, and here we are in 2024, with the years ahead.

There are agile conferences; there is a lot going on. What are the trends you currently see in the testing world? What's happening right now, and what do you think is influencing tomorrow and the days to come? I know you have holistic testing yourself, so maybe that is one, but I just want to hear: what do you see happening in agile testing?

[00:18:24] Lisa Crispin: Oh, all of software development is definitely evolving. I think one of the things is that we're starting to be more realistic and realize that executives don't care about testing. They care about how well their product sells.

How much money is the company making? We know that product quality obviously affects that, and that's from a customer point of view; it's the customer who defines quality. Back in the nineties, we testers thought we were defining quality, so that's a change that's occurred over time. It's really thinking about that, and also knowing that our process quality has a huge impact on product quality.

So what are the most important practices we can be doing? Janet Gregory, who is my coauthor on four of those books, and Selena Delesie have been consultants for years and have helped so many companies, even big companies, through an agile transformation. And they've distilled their magic into what they call a quality practices assessment model.

They identified ten quality practices that they feel are the most important, things like feedback loops, things like communication, right? And the model helps you ask questions and see where the team is in these different aspects of practice that would give them a quality process, which would help them have a quality product.

And it gives teams a kind of roadmap: here's where we are now; what do we need to improve? Oh, we really need to get to continuous delivery, and these things are in our way, things like that. So I think that's one realization, and it ties back to the idea that testing is just part of software development. It's a question we testers have had for years:

How can I make the president of this company understand that we testers are so important? We're not. But it is important that the team builds that quality in.

[00:20:29] Joe Krebs: But you could also argue that maybe the CEO of a company, or the leadership team, would say: we also don't care whether this is expressed in one line of code or two lines of code.

So it's not necessarily specific to testing. I think they're just saying, here's our product. But I think what has changed is that your competition is just one mouse click away. Quality is a determining factor. Now, let's take this hypothetical CEO out there right now, listening to our conversation and saying: I do want to start to embrace agile testing, and agile in general, and more of those things you just mentioned. What would be a good starting point for them?

Obviously there's a lot of information out there, keywords and buzzwords we just shared today. What would be a good starting point for them on that journey? Because that is obviously not something that comes overnight.

[00:21:20] Lisa Crispin: I think one of the most important things that leadership can do is to enable the whole team to learn the testing skills that will help them build quality in.

And that means making it part of their job descriptions, making it part of their skills matrix for career advancement, because that gives them time. If developers are paid to write lines of code, that's what they're going to do. But if instead it's: okay, you're an autonomous team,

you decide what practices you think will work for you, and we're going to support you, even though it's going to slow things down at first. I was on a team in 2003 that was given this mission: do what you think you need to do. First, we decided what level of quality we wanted, of course.

We wanted to write code that we would take home and show our moms and put on our refrigerators, and we all committed to that level of quality. How could we achieve that? We had seen that test-driven development worked really well for a lot of teams, so we said, let's do test-driven development.

It's really not that easy to learn, but when you have leadership that gives you that time to learn and supports you, it pays off in the long run, because eventually you're a lot more confident. You've got all these regression tests, you can go faster, and there are things like continuous integration, refactoring, all these practices that we know are good.

And we were empowered to adopt those; it was part of all of our job descriptions. And so we became a high-performing team, not overnight, but within a few years. Part of what we did was spend a lot of time learning the business domain. It was a very complicated business domain.

And so when the stakeholders came and said, we want this feature, we asked them why they wanted it. What was it supposed to do? What was the value? We could usually cut out half of what they thought they wanted. We could say, okay, if we did all of this, we think it's going to be this much effort, but we could do 80 percent of it for half the cost.

How's that? Oh yeah. Nobody ever turned us down on that one. So that's another way you go fast: you eliminate things that customers don't want or need.

And so, yeah, it's the unicorn magic of a true self-organizing team.

[00:23:30] Joe Krebs: Yeah. But one thing you said just stood out to me: it is an investment, an investment into the future.

It's a really good feeling to have, later on, the capability of releasing software whenever you want, when that is not a massive burden where the whole company needs to come together for all-nighters to get a piece of software out the door. Now, you're not only an author, you're also a practitioner.

You also work with teams, and I just want to come back to the business case for agile testing one more time. Do you have an example from a client, recent or further back, that stands out, or an easy one you remember, where agile testing made a huge difference for an organization?

I'm sure there are tons where you would say there was a significant impact for them from introducing agile testing practices.

[00:24:29] Lisa Crispin: Certainly. Especially early on in the extreme programming and Agile adoption, there were a few occasions where I joined a team that had never had testers.

They were doing the extreme programming practices, and you may recall that the original extreme programming publications did not mention testers. They were all about testing and quality, but they didn't mention testers. So these teams were doing test-driven development and continuous integration. They were writing really good code, and they were doing their two-week sprints, and maybe it took them three sprints to develop what the customer wanted. Then they'd give it to the customer, and the customer would say, but that's not what I wanted.

So they thought, maybe we need a tester, and they hired me. And I was like, okay, we're going to do some new features: let's have some brainstorming sessions. What is this new feature for? How are we going to implement it?

What are the risks? Let's start doing risk assessments. How are we going to mitigate those risks? Are we going to do it through testing? Are we going to do it through monitoring? Just asking those what-if questions. What's the worst thing that could happen when we release this? That's my favorite question.

And could we deploy this feature to production and have it not solve the customer's problem? Anyone could ask those questions; it doesn't have to be a tester. But I find that on teams that don't have professional testers, specialists, nobody else thinks of those questions. They could, but testing is a big area.

It is a big set of skills. I know lots of developers who have those skills, but not every team has a developer like that. Other specialists, like business analysts, could also help, but there were even fewer business analysts back in the day than there were testers.

One team I joined early on, they said, okay, Lisa, you can be our tester, but you can't come to the planning meetings and you can't come to the standups.

That's a little weird. I did the best I could without being involved in any of the planning.

And at the end of the two weeks, they weren't finished; nothing was really working. And I said, hey, can we try it my way? Let me be involved in those early planning discussions, let me be part of the standups. And, amazing: the next time we met our target. I couldn't support everything, there were 30 developers and one tester, but we agreed that one or two other people would wear the testing hat along with me every sprint, or at least on a daily basis.

And so they all started to get those testing skills. Like I say, testing is a big area, and you don't know what you don't know. I see teams even today that don't have any testers, because years ago they were told they didn't need them if they did these extreme programming practices. And they are doing test-driven development, they're doing continuous integration.

They're maybe even doing a little exploratory testing. They're doing pair programming, even some ensemble or mob programming. They're doing great stuff, but they're missing all that work at the beginning to get the shared understanding with the stakeholders of what to build.

[00:27:43] Joe Krebs: All those lines of code that weren't needed wouldn't need to be tested.

[00:27:48] Lisa Crispin: And so they release the feature, and bugs come in. And really, there are missing features; it's not what the customer needed. Too many gaps.

And of course, I want to say those aren't really bugs, but they're bad. Yeah. And it could have been avoided if you'd had a risk storming session, if you'd had the right planning sessions.

Example mapping sessions, for example, where you get the business rules for the story and concrete examples for each business rule, and then you turn those into tests to guide your development with behavior-driven development. That would have solved the problem, but they didn't know to do that. Anybody could have learned those things, but we can't all know everything.
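A minimal sketch of turning concrete examples from an example mapping session into tests that guide development, in the spirit of behavior-driven development; the business rule, the examples, and the free_shipping function are hypothetical, and pytest is assumed as the test runner.

```python
import pytest

# Hypothetical business rule from an example mapping session:
# "Orders of $50 or more ship free, unless the order contains an oversized item."
# The concrete examples below would be agreed with the stakeholder first and
# written as tests; the implementation is then written to make them pass.

def free_shipping(total_dollars: float, has_oversized_item: bool) -> bool:
    # Implementation written after the examples, to satisfy them.
    return total_dollars >= 50 and not has_oversized_item

@pytest.mark.parametrize(
    "total, oversized, expected",
    [
        (49.99, False, False),   # just under the threshold
        (50.00, False, True),    # exactly at the threshold
        (120.00, True, False),   # oversized items never ship free
    ],
)
def test_free_shipping_rule(total, oversized, expected):
    assert free_shipping(total, oversized) == expected
```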

[00:28:25] Joe Krebs: Yeah. We're almost out of time.

But there's one question I wanted to ask you, and it might be a short answer; I hope you can condense it a little bit. When somebody gets to your LinkedIn page, Lisa Crispin, there is a picture of you plus a donkey. And you have donkeys yourself.

How does this relate to your work, if it does? What do you find inspirational about donkeys? And why did you even make that your LinkedIn profile picture? There has to be a story behind it.

[00:28:55] Lisa Crispin: It's interesting. A few years ago at the European Testing Conference, we had an open space, and somebody said, oh, let's have an open space session on Lisa's donkeys.

And then we got to talking about it, and I realized I've actually learned a lot about Agile from my donkeys. I think the biggest thing is trust. Donkeys work on trust. With horses, and I've ridden horses all my life and had horses all my life as well, you can bribe or bully a horse into doing something; they're just different from donkeys.

If you reward them enough, okay they'll go along with you. If you kick them hard enough, maybe they'll go. Donkeys are not that way. They're looking out for number one. They're looking out for their own safety. And if they think you might be getting them into a situation that's bad for them, they just flat won't do it.

So that's how they get the reputation of being stubborn. You could beat them bloody, you could offer them any bribe you want; they're not doing it. So I learned I had to earn my donkeys' trust. That's so true of teams, too. We all have to trust each other, and when we don't trust each other, we can't make progress. The teams I've been on that were high-performing teams had that trust, so we could have discussions where we had different opinions. We could express our opinions without anyone taking it personally, because we knew that we were all in it together and it was okay. Anybody could feel safe to ask a question, anybody could feel safe to fail, because you have that trust that nothing bad is going to happen.

And so I can bring my donkey right in the door of the house. I've taken them into schools; I've taken them to senior centers, because they trust me. And if they came to harm while in my care, let's say I was driving the cart and the collar rubbed a big sore on them, that would destroy the trust.

And it would be really hard to build it back. And so we always need to be conscious of how we're treating each other in our software teams.

[00:30:55] Joe Krebs: Yeah, wonderful. I had heard the rumor about donkeys being stubborn, but I also always knew that donkeys are hardworking animals.

[00:31:02] Lisa Crispin: They love to work hard.

Yeah.

[00:31:05] Joe Krebs: Awesome. Lisa, what a great ending. I'm glad we had time to even touch on that. That was a great insight. Thank you so much for all your insights around testing, but also at the end about donkeys. Thank you so much, Lisa.

[00:31:17] Lisa Crispin: Oh, it's my pleasure.
