Data is essential to making informed decisions about our quality strategies, and there is an incredible amount of it at our fingertips. In this session, learn how to harness that test data to make data-driven decisions. You'll learn how to set up your workspace to capture the data you need, and how to leverage BigQuery to structure your data in an actionable way.
Transcription
Lauren Clayberg
Hello, everybody. Welcome to Data is a Superpower: How to Become Data-Driven. Data is a really important part of making informed decisions about your team's quality strategies, and there's a lot of it available to you right at your fingertips. In this session, I'm hoping to show you how you can leverage your mabl data, using the existing data-driven testing features within mabl as well as the BigQuery integration, so that you can make the best decisions for your team.
But first, let me introduce myself. My name is Lauren Clayberg and I am one of the software engineers on the team. I've been at mabl for about two and a half years, and I come from a machine learning background. In my time here, I've worked primarily on our reporting-related features as well as performance-related features. Today I'm going to cover four topics: first, the importance of making data-driven decisions; then, in more detail, the data-driven testing features built into mabl; then our BigQuery integration; and finally, how to set up your workspace so that you can make the most of these data-driven features.
To start off with the importance of data-driven decisions: if you've been able to attend some of the other sessions, you will have heard people mention data as one of the pillars of quality engineering, because it allows you to make informed decisions about your application quality and your quality strategies.
There are two main ways that you can use data to do this. The first is measuring changes. This is really important so that you understand the impact your team is making, and how the different things you are doing affect your application quality. For example, if you make a lot of small improvements, it might be hard to see each one being impactful, but over time they can add up to a really big difference. You want to be able to use data to track those changes and set concrete goals for your team. The same can happen with a lot of small regressions building up over time; you want to make sure you're tracking those so that your team keeps going in the right direction with its quality assurance testing.
The second is making objective decisions. You don't have to rely as much on intuition or a feeling that you're meeting specific application quality standards; you can prioritize with confidence, because you can measure exactly what's going on in your application and in your testing strategies. You can also use data to decide which areas to focus your team's efforts on. For example, you could discover quality gaps that your team wants to fix, such as types of regressions that would be difficult to see if you weren't leveraging all of your data within mabl.
Here's an example: the average app load time over a period of 60 days. As you can see, for this application the average app load time is 0.2 seconds, but at one point around the beginning of September it jumped up to a little over 0.3 seconds. That might be difficult to notice without concrete data, but here on this graph it's very noticeable. That's something we would want to fix as soon as we noticed it; then we could track that the change had actually been fixed and feel more confident in the quality of the app.
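To make that concrete, here is a minimal sketch, not a mabl feature, of how you might flag a jump like that programmatically. The function name, threshold, and numbers are all illustrative; in practice you might feed it daily averages pulled from the mabl BigQuery export.

```python
# Illustrative sketch: flag days where average app load time jumps well
# above the running average of all previous days.

def flag_load_time_jumps(daily_avg_seconds, threshold=1.25):
    """Return day indices where load time rose more than `threshold`x
    above the running average of the preceding days."""
    flagged = []
    running_sum = 0.0
    for day, value in enumerate(daily_avg_seconds):
        if day > 0 and value > threshold * (running_sum / day):
            flagged.append(day)
        running_sum += value
    return flagged

# A 0.2s baseline jumping to ~0.3s (a 50% increase) gets flagged:
series = [0.20, 0.21, 0.19, 0.20, 0.32, 0.31]
print(flag_load_time_jumps(series))  # -> [4, 5]
```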
Another example is making sure that you can prioritize things with confidence. One of the things that previous talks have discussed is how testing is one of the biggest pain points for a lot of teams, and how teams with really high-quality testing generally report that their customers are much happier with their products. So you want to make sure that you are making the right prioritization decisions when your team is focusing on testing, and data is a great way to make sure that you are tackling the areas that will make the biggest difference for your customers.
So what are some of the data-driven testing features you can leverage to make these informed decisions about the quality of your application? The first one I want to talk about is release coverage. Think of the scenario where you have been working really hard on a project with your team, and you've been testing it, maybe mostly in the development or staging environment, but you want to see if you're ready to actually send it out to your customers. If I were making that decision, I would come to release coverage and look for a few specific things within the dashboard.
The first thing I would look at is the chart called cumulative tests run, to make sure that all of the test coverage I've been adding throughout the project is actually being run against my application. Here, for example, you can see 57 out of 60 tests were run. So before I sent the feature to production, I would want to run those other three tests and make sure I wasn't missing any testing I thought was happening. The same goes for the latest pass rate of my tests: it's great if you run all of your tests, but if they're all failing, that's not good either. Here, for example, I can see that one of the tests did not pass the most recent time it was run. That's another thing I would want to look into before I actually send this feature to production.
Some other charts I would look at on the release coverage dashboard are test run history, to make sure we've had a consistent or increasing number of test runs throughout the time period we were working on the feature, as well as a low number of failing tests. If I saw a lot of failures here, that's something I would want to look into, and it might indicate that our feature isn't ready yet. The same goes for the average app load time chart that I discussed previously; here I would be looking for any trend suggesting the application is getting slower. That would be something I would want to look into and potentially resolve before sending out the feature. As for looking into the tests that were failing, I would jump into the test statuses table, also within the release dashboard. There I can jump quickly to the tests that failed most recently and see more information about them.
Here, I can see that the latest run of this test was actually earlier in September, rather than in October like a lot of the other tests. That's an indication that I probably want to rerun it to make sure it's working before I send off this feature. It also has a lower pass rate, so that's something I would want to be more cautious about, and I would verify in a second way that the feature is working the way I expect it to.
There are some other data-driven features that can be very helpful. Page coverage is another one. I think the most straightforward reason to come to page coverage is to help your team prioritize where to add new tests for your application. If you have pages that haven't been covered by any tests, that's a great indication you might want to add test coverage there, so that if your team makes changes to those pages, any regressions can be caught. Another reason I would look at this feature: suppose a regression did slip through to production, and I want to make sure something similar doesn't happen again. One thing you can do with this dashboard is look for the specific page the error occurred on, and then click into the tests to see which tests have been hitting that page. If no tests were hitting that page, that's an indication I would want to create one to catch the regression if it were to happen in the future. But if there are tests hitting that page, I should probably look into them, reevaluate what they're doing, and maybe add to them so that we're checking more of the functionality and improving our test coverage.
Another reason I might look at the performance chart: maybe I was making a lot of changes to one particular customer flow, and I want to make sure that customer journey isn't getting any slower and that we don't have any regressions in the experience for the customer. So I can go to the performance chart within the test details of the specific tests covering that user journey and make sure I don't see any bad trends there. As I mentioned before, a lot of small changes can build up over time. If you made 20 or 30 small changes, each one might not have caused much of an issue on its own, but together they could cause a bigger problem. So I would want to verify here that the performance still looks good before sending off my feature.
Another way to determine whether my development environment is okay to send those features to production is to take a look at Insights. Insights is another great place to get an idea of the overall application quality and the quality within your workspace. You can use things like auto-heals and visual changes to understand exactly how your app has been changing recently. You can also see things like broken links and JavaScript errors as another indication of whether you feel confident in the quality of your application before you let your customers use it. So those are some of the built-in data-driven testing features within mabl.
Next, I would like to talk about using the BigQuery integration. This is a great way to customize your views into your mabl data. So what does the BigQuery integration offer? First, you can get detailed information about your plan runs, with a lot of extra context like which plan it was, the application, environment, deployment, labels, and more. Second, you can get your test run information, filterable by similar things, but also including details like what the status was, which browser it was run on, the test's labels, and so on. The third thing you can get is failure categorizations by test. That comes from another feature we offer within mabl: you can categorize the failure any time one of your tests fails. This is a great way to improve communication across your team, understand what is going on in terms of quality within your app, and see trends, like whether there are more or fewer regressions, and things of that nature.
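As a rough illustration of what you can do once that data is in BigQuery, here is a small Python sketch using Google's official BigQuery client library. The dataset, table, and column names (`your-project.mabl_export.plan_runs` and so on) are hypothetical placeholders, not the real export schema; check the mabl help docs for the actual field names.

```python
# Sketch: list recent plan runs from the mabl BigQuery export.
# Requires: pip install google-cloud-bigquery (plus GCP credentials).
# NOTE: the table and column names below are hypothetical placeholders;
# substitute the actual schema from your own export.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT plan_name, environment, deployment_id, status, start_time
    FROM `your-project.mabl_export.plan_runs`
    ORDER BY start_time DESC
    LIMIT 20
"""

for row in client.query(query).result():
    print(row.plan_name, row.environment, row.status, row.start_time)
```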
The BigQuery integration is pretty quick to set up. I've included some of the key steps here, and you can find more detailed information in the mabl help docs. It's not too many steps, and once you have your data flowing to your paid GCP BigQuery account, you can use whatever dashboarding software you feel comfortable with to create your own custom dashboards, if the ones built into mabl don't fit all of your needs.

All of the members of your team play really important roles in creating a culture of quality across your organization, and each team member has a different role to play in that culture. Different team members might care about different quality assurance metrics, both for measuring quality and for helping to improve it. The roles I'm going to discuss are general, and every team is different, but these are some of the broad categories that team members might fall into and the types of metrics they might be interested in. The first is QA engineers. These members of the team might be focused day-to-day on creating and updating tests, monitoring them, understanding test performance and test failures, and making sure tests are stable and reliable.
Some example metrics they might be interested in are things like tests with the lowest pass rates, or the lowest pass rates for specific browsers, as well as counts of new tests added for different features; a lot of people use labels to identify features, so that's one way you can break down that data. Then there are QA managers, who tend to own the quality of a specific product area and prioritize the quality engineering efforts within their team. Example metrics that might be useful for QA managers are things like unique test runs, or unique tests being created per feature; counts of failing runs, in development and also in production, to see where the team might need to devote more resources; and perhaps pass rates per browser, if the team wants to make sure its product areas have high quality across all the different browsers.
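For example, the "tests with the lowest pass rates" metric could be computed with a query along these lines. Again, this is a sketch: the table and column names are hypothetical placeholders for whatever your export's schema actually uses.

```python
# Sketch: the ten tests with the lowest pass rates over the last 30 days.
# Hypothetical schema; adapt table/column names to your mabl export.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT
      test_name,
      COUNTIF(status = 'passed') / COUNT(*) AS pass_rate,
      COUNT(*) AS total_runs
    FROM `your-project.mabl_export.test_runs`
    WHERE start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY test_name
    HAVING total_runs >= 5  -- skip rarely-run tests
    ORDER BY pass_rate ASC
    LIMIT 10
"""

for row in client.query(query).result():
    print(f"{row.test_name}: {row.pass_rate:.0%} across {row.total_runs} runs")
```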
At an even higher level, you have executive teams. A lot of the time, executive teams will be focused more broadly on understanding how mabl is used across multiple teams, as well as tracking progress for multiple teams over time. I've included some example metrics here, but one of the really cool things you can do with the BigQuery integration is look at your mabl data across different workspaces. Then you can see how different teams are doing, and maybe where to devote more resources for teams that are starting to use mabl a lot more. You can also see how product areas are doing in terms of the number of automated tests running and the time spent testing. These are just some of the things that executive team members might be interested in.
Developers have an important role in this culture of quality as well. It's important that developers focus on creating high-quality software, minimizing regressions, and minimizing added tech debt. So some things they might be interested in are pass rates for their branch, or by browser or feature, as well as counts of failed runs due to regressions over a period of time, specifically ones that made it to production. That's another reason it's important to label your test failures: that data can be very important for some members of the team. So these are a lot of different examples of the data you can get from mabl, and some of the ways you can inform your quality strategies through these data-driven testing features.
But how can you get the most out of these data-driven features? There are a few ways you can set up your workspace to do that. You may have attended some of the other sessions that talk about workspace best practices; this advice is specific to data-driven testing. The first is making sure that you are running your tests with plans and deployments. One reason this is really great is that it allows you to maintain a consistent level of test coverage across your application, and it makes your team a lot more efficient at maintaining that coverage. When you run a lot more tests, you also generate a lot more data, so you can have a more realistic view of your application quality over time. This is an example of the mabl team using deployments within our own pull requests: we run a deployment whenever we make changes to the user interface, and we run a lot of mabl tests there. You can see an example output of one of the pull requests that we run mabl tests against.
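For reference, here is a hedged sketch of what triggering those plan runs from CI might look like via mabl's Deployment Events API. The endpoint and payload shape are based on my reading of the mabl help docs, so treat the field names as assumptions and verify them against the current API reference; the environment variables are placeholders you would set in your CI system.

```python
# Sketch: send a deployment event to mabl from a CI pipeline so that the
# plans associated with this environment/application run against the new
# build. Endpoint and fields are per the mabl help docs as I recall them;
# verify against the current API reference before relying on this.
import os
import requests

response = requests.post(
    "https://api.mabl.com/events/deployment",
    auth=("key", os.environ["MABL_API_KEY"]),  # basic auth with your mabl API key
    json={
        "environment_id": os.environ["MABL_ENVIRONMENT_ID"],  # placeholder env var
        "application_id": os.environ["MABL_APPLICATION_ID"],  # placeholder env var
    },
    timeout=30,
)
response.raise_for_status()
print("Deployment event created:", response.json().get("id"))
```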
Another thing you want to make sure you're doing in your workspace is labeling your features. A lot of the data-driven features within mabl that I showed you rely on these labels as one of the key ways to narrow down your data. For example, release coverage uses test labels, and these labels also appear in your BigQuery integration. So this is another way to make sure your tests are organized within your workspace and it's easy to find the exact data you need.
You should also make sure that you have clean environment and application configurations. This will help your team collaborate better and understand exactly where to run different quality engineering tests. Some of the insights depend on these two values, a lot of the performance charts can be narrowed down by environment, and page coverage is an important one that you can narrow down by application. So keeping environments and applications clean is important for getting the most out of those features. Also make sure you're setting up the link crawler and the Segment integration, if that's relevant to your organization, because they will help you use page coverage to make those prioritization decisions and track down regressions, like I mentioned previously, as well as understand what broken links you might have throughout your application, so you have an idea of things you might want to fix to improve the quality of your app.
You should also make sure that you are labeling your failures. I know I've said this a few times, but there are a lot of benefits to it, including improving communication across the team, understanding where to spend developer and QA engineer time and energy, and finding quality trends in general for your application. You can see a chart of test failure categories over time within the release coverage dashboard, so that would be a great place to look for trends, for example to see if you have more regressions than usual.
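If you wanted the same trend view outside the dashboard, a query like this sketch would break failures down by category per week in the BigQuery export; the table and column names are, once more, hypothetical placeholders.

```python
# Sketch: weekly counts of test failures by categorization, for trend
# spotting. Table/column names are hypothetical; adapt to your export.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT
      TIMESTAMP_TRUNC(start_time, WEEK) AS week,
      failure_category,
      COUNT(*) AS failures
    FROM `your-project.mabl_export.test_run_failures`
    GROUP BY week, failure_category
    ORDER BY week, failures DESC
"""

for row in client.query(query).result():
    print(row.week.date(), row.failure_category, row.failures)
```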
To wrap things up: I want to make sure your team is using data to track its progress over time and to make informed decisions. Those are two very important uses of data, and using data to inform your decisions will definitely help you improve the quality of your application, along with making use of our built-in data-driven features and integrating with BigQuery so that you can customize your own views of your mabl data. And with that, I would like to thank you for joining me today and open it up to questions.
Katie Staveley
Perfect. Thank you, Lauren. And as a reminder, if you have any questions, feel free to put those into the Q&A tab on the platform. We did have a couple of questions come in, as you were going through your session, Lauren. So why don't we start with this one? How does mabl know if you have a page that does not have a test?
Lauren Clayberg
One thing we do is look at all of the pages that we see from either the link crawler or your Segment integration, and then we look at which pages are hit during your test runs. If you have a test that actually goes to one of those pages during a test run, that test will show up in the page coverage report. But if no tests hit that page in the last 14 days, we will say that no tests have been covering it.
Katie Staveley
Great. This next one, hopefully, will be pretty easy for you to answer. As an engineer, how has data helped you in your role?
Lauren Clayberg
I actually have a pretty good example of this from recently. There was an API endpoint I was working on; it had existed before, but I was making some changes to it. One thing I was doing was performance testing, to make sure the performance of the endpoint was good. When I was running the tests, it seemed very quick, and if I hadn't been using data in that decision, I would probably have thought it was fine. But when I compared it to the speed the endpoint had previously, the old version was actually much, much faster. It was a difference of 50 milliseconds versus 300 milliseconds. While 300 milliseconds isn't super noticeable to me as a person, using the actual data and doing that performance testing made me realize there was a lot I could improve before making the change, because of that big difference. Even though it wasn't noticeable to me, those kinds of changes can build up over time, and it was something I definitely wouldn't have noticed without using data for that analysis.
Katie Staveley
Awesome. The next question we have here: is BigQuery a part of mabl, or an extra product?
Lauren Clayberg
BigQuery is a part of the Google Cloud Platform; it's one of the data warehouse solutions there. It's a great way to store a lot of your data and easily get a lot of your data out of mabl. You do have to sign up for a GCP account to be able to use that feature.
Katie Staveley
Great, I know that we use that quite a bit here at mabl and a bunch of other customers are using it as well. So it can be a really valuable tool. Here's another question. Can you explain what auto-heal is?
Lauren Clayberg
Yeah, so whenever you're making changes to your application, there are a lot of subtle things that might change; for example, something moving a few pixels to the left, or the text of something changing. One thing mabl tries to do when running your tests is minimize how much you have to maintain your own tests. So we will auto-heal your tests, which means figuring out which element you wanted us to click on or assert on, even when it has changed slightly, so that you don't have to update your tests every single time your application changes. We try to make those changes for you, and we call them auto-heals.
Katie Staveley
Awesome, thanks, Lauren. The next question: can you define what a pass rate is?
Lauren Clayberg
Yeah, the way we use pass rate is: over the time period and environment that you selected, how often did that test pass? So if the test was run 10 times and it passed seven out of those 10 times, we would say the pass rate of that test was 70 percent.
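In code, that definition is just the ratio of passing runs to total runs:

```python
# Pass rate = passing runs / total runs for the selected period/environment.
def pass_rate(passed: int, total: int) -> float:
    return passed / total

print(f"{pass_rate(7, 10):.0%}")  # -> 70%
```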
Katie Staveley
All right, this next one should be another pretty easy one for you: is release coverage a new feature, and where in mabl do I access it?
Lauren Clayberg
Yes, release coverage is a new feature. We're really excited about it, and it's under the coverage tab. If you go to the left side menu within the mabl app, you can go to the coverage tab. Previously that tab held page coverage, but now it holds both release coverage and page coverage. So that is where you will find the new feature.
Katie Staveley
It looks like we have just one more question here, Lauren. Are there any resources published on the BigQuery integration?
Lauren Clayberg
Yeah. You can check out the mabl help docs; there are some great resources there to help you set it up, and if you search for the BigQuery integration for mabl, it should be pretty easy to find. I've also linked it within my slides, so that's another way you could get to it if you would like.