Trilogy builds software that connects the world. Serving the automotive, insurance, and telecom industries, its business management solutions unlock innovation at the speed of modern development. As Trilogy transitioned to automated pipelines, testing was key to ensuring the company could innovate quickly and with confidence. In this session, Milos shares how his team moved from a manual testing strategy to automation, how they embedded testing activities into their CI pipeline, and how elevating testing helped his team innovate faster.
Transcript
Kuhu Singh
Hello, everyone. Welcome. We are happy to have you attending Experience, and I'm excited to introduce this session, The Journey from Manual to Integrated Pipelines. Before we get started, we have a few housekeeping items. If you have any questions during the presentation, please feel free to leave them in the Q&A panel on the right side of your screen. For comments and discussion, you can use the chat feature; you will find both on the right side of the session page when the video is minimized. We will leave time at the end of the presentation for Q&A. With that, I will hand it off to Milos. Take it away.
Milos Stretin
Thank you. Good morning, good afternoon, or good evening, wherever you are in the world. Thanks for joining. I would like to share my teams' experience with, as already mentioned, the journey from manual to automated pipelines. First of all, I'm Milos Stretin. I work as Vice President of Engineering for Trilogy, and I also serve as a chief executive officer.
To start, I would like to go through the whole experience of restructuring a team's approach to testing: from the common state you would usually find in a company, up to where we are now. The point of the whole exercise was improvement: how to ship product faster while maintaining high quality.
So let's start with the common state you would find in a company. The product team is responsible for requirements. Those usually go to UX/UI design, who need to get familiar with the requirements and create designs, mockups, and high-fidelity designs, and then ship everything to engineering, where again we need to understand the requirements, develop the feature, and then move to QA. As you may notice, we have three steps where everybody needs to understand the requirements, even before we get to release.
To me, that was a really slow process. The things I wanted to fix were the slow process, the low maintainability, the fragile development (which I will explain in a moment), and the lack of ownership; it was also an unstructured process. I say low maintainability, fragile development, and no sense of ownership because developers tend to rely on the QA team, knowing that before release QA will find whatever is missing. That is what I wanted to shift left, giving more responsibility to engineering.

Once we did that, the next question was: what do we test? Now we are within engineering, so what do we need to test? If you go by common standards, it's: should we do 70%, 80%, 100% coverage? The happy path, the edge cases, what exactly do we need to test? We concluded that the purpose of a test is to catch bugs and prevent them from reaching production; that's the whole point of tests, the main point. So we gave that responsibility to the chief architect responsible for building the feature, not to QA (I will explain how we use the QA team shortly). The chief architect is the person who needs to understand the requirements, identify the important elements of the feature we are building, and come up with the important tests to ensure it can go to production working as intended. That's why we started organizing tests around writing the 20% of tests that catch 80% of the bugs.
After that, we were still on a path of using Protractor or Cypress, depending on the framework, and we were looking for a better solution: how to ship faster and how to improve maintainability and delivery. Both Protractor and Cypress were the standard tools at the time: community supported, with plenty of resources, and you can easily find a developer who knows them. But the cons are a slower process and a lot of coding; even for a small change, you need to modify test code, as in the sketch below. That's why we went no-code and started using mabl, which looked like a really, really good tool.
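To make that coding overhead concrete, here is a minimal sketch of what an end-to-end test looks like in a code-based tool such as Cypress. The application URL, selectors, and expected data are hypothetical, not Trilogy's; the point is that even a small UI change typically forces an edit to code like this.

```typescript
// Minimal Cypress end-to-end test (hypothetical app, URL, and selectors).
// Even a small UI change, like a renamed label or a moved field, typically
// means editing code like this: the maintenance cost described above.
describe('policy list', () => {
  it('filters policies by status', () => {
    cy.visit('https://app.example.com/policies'); // hypothetical URL
    cy.get('[data-test="status-filter"]').select('Active'); // hypothetical selector
    cy.get('[data-test="policy-row"]')
      .should('have.length.greaterThan', 0)
      .each(($row) => {
        // every visible row should reflect the selected status
        cy.wrap($row).should('contain.text', 'Active');
      });
  });
});
```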
And mabl still is a good tool. But then we decided: let's do this properly; let's see how to configure it so that we can ship fast. One of the most important things is to get feedback as early as possible, so we see what we missed and can fix it. That's why we started thinking about how to structure the whole application: how to split our architecture into different components, knowing which features we are building, then create a test map, one-to-one to those features, and run only what's actually required. This is important because, as I already mentioned, we wanted fast feedback so we could fail fast, and be able to say: for this change, we don't need to run the whole test suite for the whole application. If we modified only the grid list, as in this example, which may be used by three or four screens, why would we rebuild and retest the whole application?
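As a sketch of the test map idea: a simple lookup from shared components to the feature suites that depend on them is enough to decide what to rerun. The component paths and suite names below are hypothetical, assuming a file-prefix convention for identifying components.

```typescript
// Hypothetical map from shared components to the feature test suites
// that exercise them. A change to a component triggers only its suites.
const testMap: Record<string, string[]> = {
  'src/components/grid-list': ['policies-screen', 'claims-screen', 'billing-screen'],
  'src/components/date-picker': ['claims-screen'],
  'src/features/billing': ['billing-screen'],
};

// Given the files changed in a commit, return only the suites to run.
function suitesFor(changedFiles: string[]): Set<string> {
  const suites = new Set<string>();
  for (const file of changedFiles) {
    for (const [component, mapped] of Object.entries(testMap)) {
      if (file.startsWith(component)) mapped.forEach((s) => suites.add(s));
    }
  }
  return suites;
}

// A change to the grid list reruns only the screens that use it,
// not the whole application's test suite.
console.log(suitesFor(['src/components/grid-list/grid-list.ts']));
// Set { 'policies-screen', 'claims-screen', 'billing-screen' }
```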
So that's where we started mapping tests properly and experimenting with how to split them. As you can see on the pros side, at the end we have well-organized test suites, faster CI/CD, faster feedback, and tests focused on what is important, which ultimately means going to production faster. Then, once we realized this architecture works as expected, the idea was to structure the work so that everybody on the team works the same way, which we call the one best way. That's where the playbooks come in, really defining how we should create a test: clear steps for everybody, whether it's QA or the chief architect who, as we mentioned, is responsible. What are the steps that make a high-quality test? We need to analyze, plan, create, test, and clean up, all explained in those playbooks, together with a clear explanation of the team standards everybody must follow, down to the naming convention.
What is important, though, is that we weren't focused on testing a particular feature here; it's more the process that we were testing. If we realize that something is wrong, then instead of fixing that particular thing, we fix it in the process so that it doesn't happen again. That's why, as I mentioned, it is extremely important that the whole team works exactly the same way. If someone believes there is another approach, we need to discuss it as a team and experiment with it together, following the same process.
If we want continuous improvement, alongside that we also have the quality bars I mentioned: there are steps that define what a high-quality test is, and to create high-quality tests we also needed a sort of checklist. What makes a high-quality test? Which items do we need to check, what do we check, why do we check it, and how do we check it?
It's important that those checks are binary, because anybody reviewing whether a test is correct should be able to easily say: yes, it passes our quality bar, or no, it doesn't. As I mentioned on the previous slide where we discussed 70% or 80% coverage, meeting those metrics is, I wouldn't say worthless, but not that important. Why would 70% versus 80% mean low quality? I'm not sure it does. As already mentioned, the main point of having tests is to prevent bugs from going into production, which is why we slightly modified our measures. What we monitor is: can we prevent bugs with these tests? If a test cannot, then what's the point of it? We don't need that test; we only need tests that can actually catch bugs. And can we keep high maintainability when updating a feature, so that it is easier and faster to go to production?
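A minimal sketch of what such a binary quality bar could look like in code; the checklist items here are illustrative examples, not Trilogy's actual bar.

```typescript
// Illustrative quality bar: every item is a yes/no question, so any
// reviewer can declare pass or fail without a judgment call.
interface QualityBarItem {
  question: string;
  passes: boolean;
}

// The bar is all-or-nothing: a single failed item fails the review.
function meetsQualityBar(items: QualityBarItem[]): boolean {
  return items.every((item) => item.passes);
}

const review: QualityBarItem[] = [
  { question: 'Would this test fail if the feature broke?', passes: true },
  { question: 'Does it follow the team naming convention?', passes: true },
  { question: 'Does it clean up the test data it creates?', passes: false },
];

console.log(meetsQualityBar(review)); // false: does not pass the bar
```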
And of course, we measure the number of bugs that go to production, which helps us understand what we missed. That takes us back to the initial playbook defining what to test and how to test: if we missed something, we revisit that step and say, okay, next time this shouldn't get through; there is some pattern we need to check for in order to prevent this kind of bug.
Now, with all that said, let's go back to the initial slide that I showed, with all the teams. According to our process, after the UX design team, let's focus on engineering. Only the chief architects and engineers need to fully understand the requirements in order to build the feature and test it, and then we can go straight to production, because we are sure that all the important things required for production are tested. Once we are in production, the QA team will further extend those tests, find what we may have missed, and report bugs. It's important that they report back so that we can continue that loop and improve our testing; the QA team here actually acts as a collaborator that can further improve the testing playbooks we discussed.
And ultimately, as stated on the slide, the important point is that QA should not be biased. In our team, QA acts as a black box: they don't know the requirements; they are the real users. They act as real users, testing what might have been missed, and then report back. But they are not only real users; they can also collaborate on the process and its further improvement. They have access to those playbooks and can say: okay, I caught this bug; let me go back, see what the chief architect was doing and testing, and come up with ideas for improving those tests. So the next time we go to release, even before reaching QA, we go with higher quality. With all of this, the main impact is that we can go to production with much higher confidence, and much faster. Compared to a couple of years ago, when our teams were shipping every week, we can now ship multiple releases a day. For literally every change, we can go directly to production, which is, in my opinion, a really great improvement. And that would be it for this presentation, so we're open to questions.
Kuhu Singh
Yes, thank you, Milos. What questions does everybody have? Please remember to submit those in the Q&A panel. We already have a couple of questions lined up, so I'm going to get started. First question: how is your team organized? Do you have a QA professional, or just engineers who are creating and running tests?
Milos Stretin
Well, actually, as mentioned already, I'm working at two companies, and there is a difference, but technically, yes, I have both. As I already mentioned, the QA team I have acts more like a collaborator that can further improve the tests; they test after we release to production. But I'm also running teams where I'm experimenting to see what happens if we exclude QA, or if we move everything to QA. There are experiments in parallel, but I would say that, in general, there should be a QA team acting as a collaborator, constantly working on those improvements to the process.
Depending on the team, that might be right before the release or after the release. In my opinion, it should be after release, because you need to focus on shipping faster and on what exactly you need to test before going to release. Anytime I see work switch from one person to another, there is a certain delay: if engineering finishes something and says, okay, we are done, we are waiting for QA, that adds delay. So what I wanted was to reduce that, to say: let's see how far we can go before QA, but let's still keep QA there to give us additional feedback. Does that answer your question?
Kuhu Singh
Totally. Another question: when should we start automating test scripts in the SDLC?
Milos Stretin
Well, automating test scripts: what we are doing currently is trying to reduce the amount of script writing, which is why we switched to mabl. But whenever we need to automate something, it's up to the chief architect: what is the feature we need to test? What is the thing without which we are not confident to go to production?
Kuhu Singh
Another question: how do you ensure developers and QA have a shared understanding of the purpose of a test as it's promoted through the pipeline?
Milos Stretin
Again, that goes back to what I mentioned earlier: I'm not trying to do that. The main point I made on the first slide is that if everybody needs to understand the requirements, then we have certain delays. So what I tried here is that only engineering needs to understand the requirements, and they build those tests. QA should act as a black box, as people who behave like users and just use the application. They shouldn't be biased and say, oh, I know what this needs to do, so let me write some tests; they need to come at it like a real user: I'm a real user, and I noticed this bug, so let's go back and see what we missed, and why, before it reached production. That being said, I don't believe QA testers need all the product documentation; they should act as, let's say, beta testers.
Kuhu Singh
Next question, what percentage of regression bugs are identified by automation?
Milos Stretin
That actually fluctuates. As we improve our process, we keep getting more and more tests in there. And it's not only this part of the process that we are improving; we have other practices aimed at getting development right the first time, not just relying on QA. But I would say roughly 15 to 20%.
Kuhu Singh
How do you test without knowing the actual requirements?
Milos Stretin
I never said that we test without knowing the actual requirements. There are two kinds of testing; well, all of it is within the company, but within the engineering team, they know the requirements. The engineer who builds the feature knows the requirements and tests what's important. When it comes to the QA team, they act as users, as beta testers. It's not that they don't know the requirements at all: they know what the application should do and what its features are, and they act as regular users, which gives them the real ability to test what is important for the users.
Otherwise, if QA knows all the requirements, first of all they will spend time understanding those requirements, and then there is a possibility they might focus on something less important. Acting as a real user, they really use the application the way a real user would. But where a real user would just report a bug, they go beyond that and collaborate on the improvement: okay, I am not just a user, I am the expert, I'm QA; let me go back and see why we missed this, so that next time we don't, so that we have that continuous improvement loop.
Kuhu Singh
Okay, next question: how can QA do proper testing, putting themselves in a customer's place, without being involved in requirements gathering?
Milos Stretin
It's a similar question to the previous one. At least in our team, they are not involved in gathering the requirements, but they do need some simplified documentation of how to use the application: what is this application, what does it do, and how should we use it? But really as a black box: you don't need to know all the tiny pieces of what happens under the hood; you need to know some requirements, but just as a user would.
Kuhu Singh
Okay, another question. You say QA must be expert in creating tests but does not need to create tests; what does that mean? Does your organization not document and execute test plans?
Milos Stretin
No, that's not true; we do execute test plans. But as I mentioned before, I shifted that to the left, where engineering is responsible for owning the feature: for understanding the requirements, building the feature, and testing what is important so that we can go to production. When it comes to QA, I said they must be experts too, because even though they act as real users, they are not just real users; they need to collaborate on further improvements. The reason I say they are not writing tests is that I want every feature to be done right the first time. So if they spot an issue and realize there is something we missed in the tests and there is a bug, instead of just fixing it by writing or creating those tests themselves, they do what I mentioned: we focus on the process, not on a single unit. They should step back, see what we missed, and review that test. As an expert they can say: I see how you did it and what you did, but here is what you missed; if you just add this, then this bug wouldn't happen and we wouldn't be catching it now in production. That way we improve the process, and next time something like that can be caught earlier. Otherwise, we would never improve the process: engineering would develop the feature, maybe miss something, and carry on knowing, yeah, okay, the QA team will catch it and fix it. Even though that works, and I'm not saying it's wrong, I want to shift left and do it right the first time. Then, if we miss something, QA, as the next person in line, will tell us what we missed and how to avoid it in the future.
Kuhu Singh
Another question: do you utilize other testing phases, such as UAT, beta sites, or early adopters?
Milos Stretin
Not at the moment.
Kuhu Singh
If test coverage isn’t a helpful metric for you, what are the metrics that are most useful to your team?
Milos Stretin
That's a good question. As mentioned before, test coverage, at least to me, is not that bad a metric; of course you can have it. But it doesn't give me a good signal of whether we are high quality or not. Going back to what I mentioned, 70% versus 80%: does that mean we're lowering the quality? No, I don't believe so. What matters to me is ensuring that the tests we write can actually catch bugs. So the first thing we measure is how many features were prevented from going to production by failed tests, so that we can say: roughly 20% of changes, like I mentioned before, were blocked before going into production and had to be fixed. That's a good metric to me, because without those tests, that 20% would have gone to production, and that's not what we're looking for.
So that is one metric, and the other is, of course, the number of bugs reported. Every bug that gets reported is a potential miss by our QA process, so we do a root cause analysis with the QA team and the engineering team to understand what we missed, so that next time we can do better. So the two metrics I would highlight are, first, the number of failed tests that actually blocked something from going into production, and second, the number of bugs, which tells us what we missed.
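As a sketch with made-up numbers: both signals reduce to simple ratios over releases. The figures below only echo the rough percentages mentioned in the talk and are not real data.

```typescript
// Hypothetical quarter: compute the two signals described above.
const releases = 100;
const blockedByFailingTests = 20; // releases stopped before production
const productionBugs = 3;         // bugs reported from production

// Share of releases where tests caught a problem before production.
const catchRate = blockedByFailingTests / releases; // 0.20
// Share of releases where a bug escaped to production (target < 0.01).
const escapeRate = productionBugs / releases;       // 0.03

console.log({ catchRate, escapeRate });
```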
Kuhu Singh
You mentioned well-organized test suites. Do you have any tips for achieving this?
Milos Stretin
Ah, this is something we're still working on, but yes, I have some tips. As mentioned on one of the previous slides, we re-architected our application so that we could split it into different features. What I would say is this: if you split the application into clear features, and you have tests that map one-to-one to each feature, that is, in my opinion, what we should aim for. First of all, you get well-organized test suites, because you know exactly how tests map to features.
And second, we can configure our CI/CD system to run only the tests for the affected features. Again, this is important because it reduces the time required for CI/CD and gives us faster feedback. Ultimately, it's just clear how the tests are structured and how they map to the features.
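One way this could be wired into a CI step, sketched under assumptions: use git to find the changed files, map them to affected suites with a test map like the earlier sketch, and invoke only those suites. The runner command at the end is a placeholder; the talk doesn't describe mabl's actual trigger mechanism.

```typescript
// Hypothetical CI step (Node + TypeScript): run only affected suites.
import { execSync } from 'node:child_process';

// Inline map for the sketch; in practice this is the shared test map.
const testMap: Record<string, string[]> = {
  'src/components/grid-list': ['policies-screen', 'claims-screen'],
  'src/features/billing': ['billing-screen'],
};

// Files changed on this branch relative to the main branch.
const changedFiles = execSync('git diff --name-only origin/main...HEAD')
  .toString()
  .trim()
  .split('\n')
  .filter(Boolean);

const affected = new Set<string>();
for (const file of changedFiles) {
  for (const [prefix, suites] of Object.entries(testMap)) {
    if (file.startsWith(prefix)) suites.forEach((s) => affected.add(s));
  }
}

// Placeholder runner command; the real invocation (for example, a mabl
// trigger) would replace this npm script.
for (const suite of affected) {
  execSync(`npm run e2e -- --suite ${suite}`, { stdio: 'inherit' });
}
```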
Kuhu Singh
Um, last question for today: if I heard correctly, your automation catches 15 to 20% of regressions. Is this acceptable? What do you think is a realistic goal percentage?
Milos Stretin
Well, actually, right now it still isn't acceptable. If I remember correctly, last time I checked, for the full quarter we had around 2% to 3% of bugs going to production, and to me it should really be less than 1%; of course, something can always happen. Compared to two years ago, when we had around 10% of bugs going into production, we are now at 2% to 3%, but I would still say it needs to get below 1%.
Kuhu Singh
We have time for one more question. Last one: how do you onboard new team members?
Milos Stretin
Great question, which is actually related to the playbooks I was mentioning before. Again, it's important to know how to do tests properly: what the steps of a proper test are, and what the quality checklist, the quality bar, whatever you want to call it, says makes a high-quality test. With that definition, we can give any new developer examples and say: first, read our process; read how we do things. Once you understand it, we'll give you a couple of example tasks: here is a feature; you will need to go through the requirements, understand it, and test it. And we can give you fast feedback on whether you are doing it well or not. With all of this structuring of the work, we are able to onboard new engineers and new QA testers onto any of our products in less than a week. One of the reasons is the well-organized structure I mentioned: if you know the features, you don't need to know the whole product; you only need to know the module you are testing. I say, okay, I need to test this module; I'm not worried about the whole product, so let me understand this, and then I will be able to write the tests following the process that the whole team is using.
Kuhu Singh
Thank you, Milos. With that, we are now running out of time, unfortunately. If you had any questions we could not get to in the session, we will connect you with our speakers after the conference. Thank you so much for joining us today, and see you at our next session at Experience.
Milos Stretin
Thank you. Have a nice day.