JetBlue is a brand well known for its outstanding customer service. Creating a digital experience that matches that in-person experience is a top priority at JetBlue, and one of the keys to its success has been elevating the importance of quality and testing in the development process. In this session, learn how JetBlue is adopting quality engineering and the strategies that helped it build a long-lasting culture of quality that enables faster innovation.
Transcript
Kuhu Singh
Welcome, everyone. We're happy to have you attending Experience, and I'm happy to introduce this session: How JetBlue Delivers a High-Quality Digital Experience. Reed, could you please change the slide? Thank you. Before we get started, a few housekeeping items. If you have any questions during the presentation, please leave them in the Q&A panel on the right side of your screen. For comments and discussion, you can use the chat feature when the video is minimized. You'll find both of these on the right side of your session page. We'll have time at the end of the presentation for Q&A. With that, I'll hand it off to Vince. Take it away.
Vincent Esquilin
Hey, everyone. Hey, Reed, thanks for joining. Just to start us out: Reed, do you want to go first?
Reed Porter
Sorry, I just had to unmute. Welcome, everybody. My name is Reed. I'm head of the account management team here at mabl, and I was actually Vince's CSM at mabl for a number of years, working with him and JetBlue, so we're pretty familiar with each other and have been working together for a while. I've been in client-facing roles in the tech sector for about a decade now, between sales and CS. I was previously in San Francisco but am now based in France. That's a little bit about me, but I'm sure people are here to learn more about you than me.
Vincent Esquilin
Hey, everyone. My name is Vincent Esquilin. I've been in IT for over 20 years, with most of that time focused on quality. A long time back, I was in a role similar to Reed's, but I was doing it for Mercury, for those who know Mercury Interactive. For those of you who go way back, those were the LoadRunners, the QTPs, the TestDirectors, and if you've been in the business a really long time, WinRunner. So I have quite a bit of history with automation tools. My experience in QA has always been in automation, which is a little bit different of a track; a lot of times you hear about folks who start as manual testers and then venture into automation. My focus has always been on automation, and specifically on performance testing, and later I transitioned into more functional testing and test automation.
Vincent Esquilin
A little bit about JetBlue. I came to JetBlue 15 years ago, in 2006, and the company has grown significantly. When I first came here, we had a startup mentality and we purchased a lot of software. At that point, we had already won some JD Power awards, and we had kind of broken the airline model. A lot of airlines were a little dismissive of us, a small airline starting in the Northeast, which is a challenging place to run an airline. But we did pretty well, and we did that by serving many customers with very high customer satisfaction. At that time, like I said, it was very much a startup mentality in IT. I don't know a lot of companies at that time that were doing scrum or agile, so we were very much in the waterfall mentality, especially because we were buying a lot of software and then incorporating it into our ecosystem. Later, we did move toward an agile methodology. What you'll see here is a representation of the way my teams have been broken up into different areas. At the top, you have the commercial area; those are the customer-facing things you would expect. Everybody who has ever flown on JetBlue has probably gone to jetblue.com, found their seats, picked them, and decided on their dates and what kind of amenities they wanted. Circling to the right, you see the lifecycle, the user journey that our customers typically go through.
At JetBlue, the way we're organized is around many different products. Whether you realize it or not, jetblue.com is a portal. For us, it starts with the static content, the pictures and the destinations where we fly, then you can go into our booking site, then into check-in, and then view other things like flight disruptions and status. So we're organized into many different products, and the products flow into each other. What I noticed when I became general manager at JetBlue a few years back is that we spent a lot of time executing. We did have automation, but our automation at that time was very much on that legacy Mercury framework that became Micro Focus UFT, Unified Functional Testing, and we struggled with it. Quite honestly, a few different things made us struggle, but mainly, you had to be kind of a black belt to run or change the automation, and that locked out a lot of our testers. So we wanted something that would take the learning curve off of automation and get more of the team involved.
And I think mabl was very important in that journey for a lot of my team. We didn't have to worry as much about licenses. With mabl's platform, we don't have to worry about the number of users logged into our system, or accounts being allocated to testers versus developers and who's going to run what; that whole model didn't exist, and that was a relief for JetBlue. The other thing that was really a boon for us was concurrency. In our previous model, we had very sequential tests. A run would start on something like jetblue.com, go into our booking, and then later get over to check-in and further down the user journey. If one test failed early in the flow, it disrupted everything behind it. That was painful, because not only did the run take a long time, but sometimes these tests would run for a long time and then fail in the middle, and you'd have to basically start at the beginning. That was most of our challenge with our legacy framework.
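To make the concurrency point concrete, here's a minimal sketch of the difference between the chained model Vince describes and independent tests that can run in parallel. The test functions are hypothetical stand-ins, not JetBlue's actual suite.

```python
# Contrast: chained sequential tests, where one early failure blocks
# everything downstream, versus independent tests run concurrently.

from concurrent.futures import ThreadPoolExecutor


def test_booking() -> bool: return True   # hypothetical stand-in
def test_checkin() -> bool: return True   # hypothetical stand-in
def test_status() -> bool: return True    # hypothetical stand-in

TESTS = [test_booking, test_checkin, test_status]


def run_sequential() -> None:
    """Old model: a failure at step 1 means steps 2 and 3 never run."""
    for test in TESTS:
        if not test():
            raise RuntimeError(f"{test.__name__} failed; downstream tests never ran")


def run_concurrent() -> dict[str, bool]:
    """New model: each test owns its own data, so all run independently
    and one failure doesn't block the others."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda t: (t.__name__, t()), TESTS)
    return dict(results)


if __name__ == "__main__":
    print(run_concurrent())
```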
The other thing we can't easily do at JetBlue is create data. It may come as a surprise, but we don't have the ability to generate datasets, so we have to create a lot of our own test data; the check-in team has to create their own bookings in order to check them in later in the journey. That was one of the things we wanted to focus on first, and that's what's depicted here: we wanted to automate the prerequisite data first. Creating accounts and creating bookings is the foundation for the functionality of some of these products, and that data then flows to the next product team. As an example, a test purchases seats on jetblue.com, which gets us a confirmation; later, the check-in team uses that confirmation. Once that was automated, we saw huge time savings for the check-in team, because they didn't have to go in and manually create that data as they had in the past, relying on a different team to have those accounts and bookings created for them. That data would flow in, and then they would validate the check-in app. So we saw pretty significant time savings there. Reed, I know we've gotten to the point where we wanted to talk through the milestones.
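Here's a minimal sketch of that prerequisite-data pattern: a setup step creates a booking and hands the confirmation code to the downstream check-in test. All endpoints and field names are hypothetical stand-ins, not JetBlue's real services.

```python
# Prerequisite-data pattern: a setup job creates bookings up front, and
# downstream check-in tests consume the confirmation codes instead of
# creating data manually. All URLs and payload fields are hypothetical.

import requests

BOOKING_API = "https://test-env.example.com/api/bookings"  # hypothetical
CHECKIN_API = "https://test-env.example.com/api/checkin"   # hypothetical


def create_prerequisite_booking(origin: str, destination: str, date: str) -> str:
    """Create a test booking and return its confirmation code."""
    resp = requests.post(
        BOOKING_API,
        json={"origin": origin, "destination": destination, "date": date},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["confirmation_code"]


def check_in(confirmation_code: str) -> bool:
    """Run the check-in flow against a booking created during setup."""
    resp = requests.post(CHECKIN_API, json={"confirmation": confirmation_code}, timeout=30)
    return resp.ok


if __name__ == "__main__":
    # Setup can run offline (e.g., overnight), so the check-in team
    # starts the day with data already in place.
    code = create_prerequisite_booking("JFK", "BOS", "2024-06-01")
    assert check_in(code), f"Check-in failed for {code}"
```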
Reed Porter
Did you want to walk through the milestones?
Vincent Esquilin
Oh, yeah. Thank you. These are the steps we took to get there. We assessed what our test cases were. The first part was looking at coverage. The second part was identifying which tests are most commonly run and starting from there: the more you use a test, the better the return on investment for automating it. So that's where we started. Once we had tests, we wanted to identify which ones were flaky; anybody who's been doing automation for a while knows about flaky tests. To identify those, we would run each test multiple times, and if it failed maybe one or two times out of ten, we would move that test off to a separate place where we could look for anomalies in why that was happening, and then get very comfortable with our regression set. That was part of our automation, and it helped us move more quickly.
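A minimal sketch of that flaky-test triage, assuming a hypothetical callable stands in for running a real test: run each test several times and quarantine anything that fails intermittently.

```python
# Flaky-test triage: run each test N times and classify it by how
# consistently it passes. Intermittent failures get quarantined for
# investigation rather than polluting the regression set.

import random
from typing import Callable

RUNS = 10


def classify(test: Callable[[], bool], runs: int = RUNS) -> str:
    """Label a test as stable, broken, or flaky based on repeated runs."""
    failures = sum(1 for _ in range(runs) if not test())
    if failures == 0:
        return "stable"  # safe to keep in the regression set
    if failures == runs:
        return "broken"  # consistent failure: a real defect or a bad test
    return "flaky"       # intermittent: quarantine and look for anomalies


if __name__ == "__main__":
    flaky_example = lambda: random.random() > 0.2  # fails roughly 2 times in 10
    print(classify(flaky_example))
```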
Reed Porter
All right. Wow. Did I hear that right? That was a 90% reduction in test execution time?
Vince Esquilin
Yes, some teams definitely had some really big wins. Like I said, it depends on the team: if you're further down the user journey, with the automation and the prerequisite data being created for you and run, sometimes offline at night, those testers were coming into a pretty minimal number of tests that needed to be run during the day. A lot of times, they were just reviewing the test results. So that meant significant savings for them.
Reed Porter
That's awesome. With that much time savings, I imagine the teams' day-to-day roles and responsibilities changed pretty significantly as well. Did anything change about their goals or their responsibilities in relation to testing and their jobs?
Vince Esquilin
Yeah, I would say the team had much more time to focus on quality. There have always been deadlines. There are still pieces we want to take on with mabl; not to say we've won every battle and gotten everything automated. We're not there yet, and I don't think we'll ever fully get there. But with the teams that are highly automated, what we're seeing is that they're less focused on execution and more focused on the defects that come out of the automation. There are more conversations happening with development on the follow-ups, and more checking that each defect we caught in automation is reproducible and that we can explain exactly what happened to the developer in that particular case. And one of the things I like about mabl is that you can have the developers in the environment with you; they can review the results, and it doesn't cost us any additional licenses. So that's another win for us.
Reed Porter
It's pretty interesting. So it sounds like, just as a result of implementing automation successfully and having this reduction in test time, the QA engineers are able to collaborate a lot more with the developers than they were before, spending less time executing tests and more time triaging and understanding what took place.
Vince Esquilin
Yeah, we're finding defects earlier in the cycle; the defects are coming out earlier in the sprint. What it looked like in the beginning was that we were finding more defects. I don't think our developers started writing worse code; what we discovered is that we were finding more defects probably because they were going undetected before. So there was an uptick in the number of defects we found, and depending on the team, it got kind of busy. It was a weird dynamic: we didn't actually dip in quality, but we were detecting defects at a higher rate, basically because we were able to test more often using the new automation.
Reed Porter
That's pretty interesting, and it leads to my next question: after implementing automation and seeing these changes take place, how did you measure your success with the tool? What sort of results was it getting you once you were able to fully implement automation?
Vince Esquilin
When we first started looking at automation, we had a plan: we knew a particular app had 100 test cases, or whatever the number was, and we wanted to get to a certain amount of coverage, say 80%. I don't think 100% is realistic; you're probably not creating enough tests if you can get to 100% automation. But if you can, tip of the hat to you. What we measured in the beginning was test coverage, and later it became a reduction in test time. Once we were successful in getting some automation, we would run it, and okay, now we have these defects, and development would take those defects and work on them. Then the question was: could we squeeze in another cycle before the end of the sprint? So in the beginning we were focused on test coverage, and later it became how many test executions we could get through in a cycle. The development team was coming up with their fixes, and in the past we would have said we need X amount of time; since that had been reduced, we would just rerun the automation and see if we could reduce the mountain of defects and continue.
Reed Porter
Interesting. I want to ask you a question that I think is top of mind for a lot of the QA directors and managers of quality engineering groups in the audience. Quite often, we see teams implement a solution or implement automation but never really find the time to fully adopt it and make the transition from manual to automation, or whatever it may be. How do you ensure your team's success, making sure they have the time to come in, learn the tool, get the tests up and going, set up the integrations, and see the benefits that you saw?
Vince Esquilin
Yeah, well, in the beginning it is a struggle, especially if somebody is very used to manual testing. But for the managers: once mabl is in and you can start building tests, it's almost like taking your vitamins. You have to do it on a daily basis, or it's like cleaning your room: if you wait too long, it gets to a point where it's staggering and you've got a mess. You have to do it every single day, otherwise it becomes too much. My team knows I'm really passionate about automation, so if they ever tell me they don't have time, I almost instantly retort: you don't have time because you haven't automated. If the conversation is that you don't have time for automation, I think they already know my answer. In the agile methodology, once something is stable and marked as done, it really can, and should, be considered a candidate for automation, unless there's a compelling reason it can't be automated. We should probably do it in the next sprint and allocate story points to that automation.
Reed Porter
Yeah, and I recall from when I was working with you as your CSM in years past that you also transitioned to an automation-first approach, something like that. Could you share a little bit more about that as well?
Vince Esquilin
Yeah. Well, as we said, some of these applications already had legacy frameworks, so we needed to move off of those. We had kind of a delta team, some specialists who came in and set up the core functionality, and at that point we started to incorporate the subject matter experts from those product teams to supplement them. Then it was a matter of getting everyone comfortable with execution. You can look at it from this perspective: if somebody isn't familiar with automation at all, they're a manual tester trying to move to an automated tool, maybe the first thing you do is introduce them to the platform and show them how to run tests and how to review results. Later, once they start reviewing those tests, they're going to want to know how to fix them, because they're going to see, oh, this is actually working manually, but it's not working in automation, or vice versa. There's almost a natural desire to want to fix it. So then they start to work with the other subject matter experts. But it does have to be automation first; it really has to be a focus. As a leader, you're going to hear a lot of different reasons why not, but you have to stick to it. And like I just said, it's something you have to do consistently: live and breathe automation every single day.
Reed Porter
Vince, you have the results to show for it, so obviously you made some good decisions.
Vince Esquilin
All credit goes to the team; I have a great team. Like me, I've got some fanatics who are really crazy about automation. For anybody on the call who has gone from open-source automation to getting started with mabl, and there are probably a lot of you, I've had some testers take that journey from open source to mabl, and they love it. They're talented, and it's like a hot knife through butter: they're able to do so much more with the platform than they could with open source, because a lot of what gets built in open source is fragile. There's a lot of rework, and that can be costly and obviously takes time. So for those experienced testers, the people who have spent time in open source, once they get familiar with mabl and understand how to do what they want to do, there's just a general excitement to come in and get things done with automation.
Reed Porter
One of my last questions for you: for a QA leader out there who is trying to get buy-in from the organization or budget approval to purchase and implement automation tooling, what advice would you have for those folks?
Vince Esquilin
Yeah, it's a good question, and it's something that probably needs to be handled differently at every organization. Notwithstanding that, I would tell the general manager, director, or manager in that case to really look at all the capital you have. If you are an open-source shop, while those tools are free to use, the people on your team generally have to be more talented, and they run at a more expensive rate. So you could probably find the cost or the capital you need somewhere in that budget; it might just be allocated to consulting as opposed to tools. If you look at it from that perspective, you might be able to find the money dollar for dollar. I'm very happy with what we've done with mabl. With the way the platform is structured, I have a lot more people who are now part of the automation process, whether executing or building tests, than there could be if we were doing open source. Open source also feels a little disconnected. Having a platform where you can just go into your workspace, see all the tests that have already been built, and then call on those tests using reusable flows and snippets, that's key, and it's a little more difficult in a Selenium type of environment. I like the way we have it set up in our workspace, and I think it takes the technical edge off of automation.
Reed Porter
Cool. I know we also want to open it up to the audience and see if anyone has questions for us. Kuhu, do we have any questions?
Kuhu Singh
Yes. Thank you, Reed and Vince. What questions does everybody have? Please remember to put them in the Q&A panel on the right side of your screen. We already have a couple of questions lined up, so let me get started with them.
First question: what other tools are used and integrated with mabl, for example test case management, defect management, CI/CD, and so on, that worked well for JetBlue?
Vince Esquilin
All right. Could you ask me that one more time?
Kuhu Singh
Yeah, okay, I'll go slow. What other tools are used and integrated with mabl, for example test case management, defect management, CI/CD, and so on, that worked well for JetBlue?
Vince Esquilin
So for defect management, we have JIRA here; we're a JIRA shop. We also have qTest for test management, and that does some JIRA integrations. We have a little CI/CD going on, with some Jenkins in the background depending on the team. We also have Postman scripts that have been built over the past couple of years, and we're looking to integrate those into mabl as well. That's the majority of it.
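As one illustration of the CI/CD piece, a Jenkins stage could trigger a mabl run by posting a deployment event to mabl's API. The endpoint and payload below follow mabl's published deployment-events API as best it can be reconstructed here and should be verified against the current docs; the key and IDs are placeholders.

```python
# Kick off a mabl test run from a CI step (e.g., a Jenkins post-deploy
# stage) via mabl's deployment events API. Verify the endpoint and
# payload shape against mabl's current documentation.

import os
import requests

API_KEY = os.environ["MABL_API_KEY"]  # CI secret, never hard-coded

resp = requests.post(
    "https://api.mabl.com/events/deployment",
    auth=("key", API_KEY),             # basic auth: user "key", password = API key
    json={
        "application_id": "your-app-id",  # placeholder
        "environment_id": "your-env-id",  # placeholder
    },
    timeout=30,
)
resp.raise_for_status()
print("Deployment event created:", resp.json().get("id"))
```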
Kuhu Singh
Another question: have you got end-to-end automation flows, or is automation different for the UI and the back end?
Vince Esquilin
Are there end-to-end tests for the front end and the back end?
Kuhu Singh
Yes.
Vince Esquilin
We do have different tiers of testing. We have a data testing team, and obviously we have UI testing. One of the things we've done in small spots, in order to check and validate the data, is actually request that APIs be built specifically for test purposes. That way, we can validate that the data is being correctly stored in the back end. It's almost building just for testing: an API that retrieves certain items we're validating against in the UI. So we do have data testers and API testers, but there are certain instances where we've asked development to create an API for us so that we can validate things that are not so easily validated at the database or data level.
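A minimal sketch of that test-only API pattern: after a UI step submits data, the test calls an internal, testing-only endpoint to confirm what the back end actually stored. The URL and field names are hypothetical stand-ins.

```python
# Test-only API pattern: a UI test submits data, then an internal API
# built specifically for testing retrieves the stored record so the
# test can confirm the back end persisted it correctly.

import requests

TEST_API = "https://test-env.example.com/internal/test-api"  # hypothetical


def verify_booking_persisted(confirmation_code: str, expected_seats: list[str]) -> None:
    """Fetch the stored booking via the test-only API and compare it
    with what the UI test just submitted."""
    resp = requests.get(f"{TEST_API}/bookings/{confirmation_code}", timeout=30)
    resp.raise_for_status()
    stored = resp.json()
    assert stored["seats"] == expected_seats, (
        f"UI showed {expected_seats}, back end stored {stored['seats']}"
    )


if __name__ == "__main__":
    # A UI test would produce these values; here they are stubbed in.
    verify_booking_persisted("ABC123", ["12A", "12B"])
```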
Kuhu Singh
Now the next question is, what does the process of reviewing the test results look like in your team? How often does it happen and by whom?
Vince Esquilin
Oh, that's a good one. It's usually done by the designated lead. There's always a morning scrum or some type of morning meeting where everyone gets together and looks at what was executed recently. If there are any failures, there's either a replay or a manual execution of that test case. That usually happens through the leads at JetBlue, and then we'll make a determination: is this a defect? If it is, we'll work with the product owners to prioritize it and communicate what's going on with that defect.
Kuhu Singh
Last question, how do your devs collaborate with QA now that QA has more time to dig into the data?
Vince Esquilin
Again, I think that probably varies from team to team. We have some developers who will ask for things like HAR files, which is kind of technical, and there are other instances where developers just want the steps because they like to try to reproduce it themselves. We usually have a defect call every day during the execution phase, and we either tell them what's going on or send them the steps and the results. I think every organization will probably go through a point where developers almost don't trust automation. I've seen this in the past: they want to see it for themselves, so they'll ask a lot for snapshots and replays. Then once you get a couple of runs in, or you've been doing it for a couple of months, they tend to trust it and just ask for the steps.
Kuhu Singh
With that, we're unfortunately running out of time. If you have any questions we couldn't get to, you can connect with our speakers at the conference. Thank you so much for joining, and see you at our next session at Experience. Cheers, everyone.