DevOps and digital transformation are making waves in testing. No matter where your team is on its quality engineering journey, mabl is on a mission to enable high-velocity organizations to create great experiences for their users. Join this session to learn how we've been working closely with our customers over the past year to deliver on this mission and enable your QE journey, and get a sneak peek into what we're planning for the future.
Transcript
Fernando Mattos
Hi, everyone, welcome to day 2 of mabl experience. We're so excited to welcome you back for another day of elevating testing. I'm Fernando Mattos, I have the pleasure of leading product marketing here at mabl, and I'm going to be your moderator for today's session.
Before we jump into the keynote, I'd like to take a moment to thank the amazing group of speakers who presented yesterday. We had a diverse group of both customers and mablers who talked about the momentum of the community and the success they've achieved adopting quality engineering, and then we closed with Debbie keeping us real and making us focus on the customer's definition of quality and done, not our own. We know it takes a lot of effort to participate in these events, and we appreciate your dedication yesterday.
I'd also like to thank the audience: we loved the engagement and participation yesterday - they made the event particularly special. We'd also love to hear from you now via chat about one key takeaway from yesterday's sessions: it can be something you learned, something new, a memorable quote, or something funny you heard during the sessions - so please feel free to drop those in the chat as I continue to speak here.
As for today, Day 2 of mabl experience, we're going to have another set of amazing sessions. Right after the keynote you'll hear from Bharti, who is the VP of Engineering at its, on how they optimize their functional coverage to ensure that their travel booking process is simple and delightful for customers. Right after that, you're going to have two options for sessions running in parallel at 12:10. One is from Aniket from Community Brands, where he'll talk about the process of migrating to local test automation, and the other is from Milos from Trilogy, who'll share how they transitioned from manual testing to automated tests integrated into their CI pipeline. We'll then have a second virtual piano bar with Dan. At 1pm Jess from Kintent will share strategies for breaking down those silos that we all have and help you build a quality mindset; she has tons of experience doing just that. She will be followed by Adeeb from Sensormatic on scaling quality strategies across teams. And then we're going to have our own Juliette and Eva, who will explain how non-functional testing can help you achieve great customer experience. And then finally, don't miss the closing session on elevating your career in quality, which will be hosted by Dennis - our customer education lead.
So with that, let's move on to today's keynote. Just remember, if you have any questions, please use the Q&A panel on the right side of the screen. For comments and discussions, feel free to use the main chat. And with that, I'd like to welcome Owen Hendley, who is one of our amazing product managers. He's going to kick off the day with how mabl is delivering on quality engineering. Owen - take it away.
03:13
Thank you, Fernando. Great to be chatting with everyone. My name is Owen Hendley. I am a product manager at mabl. I joined the team earlier this summer. And in particular, I work on our reporting and insights capabilities, as well as some of the new features that we're in the early phases of building for performance testing.
So in terms of the agenda here, we'll start by reviewing our role in your quality engineering progression. I'll then spend most of the session talking about some highlighted recent releases, as well as provide some context on where we're going next. I'll finish up with some insights on how we are evolving mabl as a company based on our customers' needs, and then we'll have some time at the end for questions.
So if we look at our mission here, it's one that I'm particularly passionate about and what got me excited about mabl, since in my prior experience working in product management at various other technology companies, quality has always seemed to be a bottleneck. But at mabl we're really focused on helping teams adopt quality engineering and deliver software at DevOps speeds.
So in yesterday's keynote, we explored the many different types of transformations that are taking place in the software industry, but quality is really at the center, and delivering on quality drives better business outcomes by accelerating time to market and improving the overall customer experience.
Yesterday we also heard Darrel and Gev speak about this quality engineering maturity model, where companies are moving from manual testing to incorporating automation for their functional tests, shifting left and adopting automated testing in their development pipelines, validating non-functional quality such as accessibility and performance, or optimizing an otherwise already well-oiled quality engineering process by making data-driven decisions through quality metrics. This is by no means a prescriptive order of steps, but rather a general framework for how we think about the adoption of quality engineering. And so we'll use this framework as the lens through which we'll review the investments that we've made this past year to help our customers mature their quality engineering practices.
So in 2022, our product focus includes deepening our functional testing capabilities, building features that make incorporating automated testing in your development processes easier, as well as opening up avenues for validating non-functional quality. Each investment that we shipped this year brought us a step closer to our vision of providing the easiest-to-use unified solution for all things quality engineering. We'll go through each of these highlighted investments in more detail, but let's start by looking at the capabilities that we launched to help users gain high coverage of functional test automation.
So APIs, as we know, are critical enablers of digital transformation, and businesses today are increasingly leveraging APIs as a core pillar in their applications. Testing APIs to determine whether they're functioning appropriately is of course important for mitigating any sort of negative business impact, but APIs can also be a faster way to identify defects during development, given how much faster you can run an API test relative to a browser test. So in mabl we offer the ability to run API and browser tests side by side with centralized reporting, and getting started with API tests is as easy as importing a collection from Postman.
So our primary focus this past year was maturing our API testing capabilities in a couple of main ways based on customer feedback. First, we've added support for console logs. Developers typically use console logs to better understand the behavior of code during runtime by injecting human-readable output messages, so by adding support for console logs, it's easier for testers to understand errors, and troubleshooting generally becomes more efficient. We also enabled static IPs for test traffic, which helps customers test APIs that are behind a firewall or otherwise not exposed to the public Internet. And then we added file upload capabilities for API tests as well. This enables users to test API requests that contain data such as PNG images or zip files, as well as populate the test files within an environment. And this functionality in particular, we heard, was critical for customers who were moving their existing API tests with file uploads from Postman into mabl.
Now, the use of Shadow DOM has also become increasingly popular in modern applications, where you want to ensure visual consistency by encapsulating styles for web components, and it's commonly used for embedding third-party components like dialog boxes and live chat bots, as well as Salesforce applications. With the introduction of support for Shadow DOM, the mabl trainer can now identify, train, and run tests against Shadow DOM elements. When you're in the trainer, you just train your steps as you normally would, and if there are elements that are within the Shadow DOM, you'll see an icon indicating that. Providing this support for Shadow DOM has really extended our functional UI testing and improved test coverage across some of those more complex user journeys that include isolated web components and third-party apps like Salesforce.
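To make the Shadow DOM challenge concrete, here is a minimal, generic browser-side sketch (not mabl internals): a web component renders its markup inside a shadow root, so an ordinary document-level query can't reach it, and a test tool has to explicitly handle the shadow boundary when finding elements.

```typescript
// Generic illustration of shadow DOM encapsulation (not mabl internals).
// A custom element attaches a shadow root and renders a button inside it.
class ChatWidget extends HTMLElement {
  connectedCallback(): void {
    const root = this.attachShadow({ mode: "open" }); // encapsulated subtree
    root.innerHTML = `<button id="start-chat">Start chat</button>`;
  }
}
customElements.define("chat-widget", ChatWidget);
document.body.innerHTML = `<chat-widget></chat-widget>`;

// A normal document query cannot see inside the shadow root...
console.log(document.querySelector("#start-chat")); // null

// ...the button is only reachable through the host's shadowRoot, which is why
// a testing tool needs explicit support for piercing the shadow boundary.
const host = document.querySelector("chat-widget");
console.log(host?.shadowRoot?.querySelector("#start-chat")); // <button id="start-chat">
```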
Now, cross-browser testing is also one of the most critical and most challenging aspects of a thorough test automation strategy that accurately reflects the needs of your users. Microsoft's plans around the deprecation of Internet Explorer really cemented the importance of Edge support for enterprise users, and providing that support in mabl allows quality teams to expand their cross-browser test coverage at scale.
And then a final big feature that we wanted to highlight in the area of functional testing is Intelligent Wait. This is an enhancement to our find strategies for test execution. Here we're using a machine learning approach to solve a really core challenge: identifying when an application has reached that expected or actionable state before you execute a step. By learning about the timing of an application through analyzing historic test run data in the cloud, mabl can now determine how long to wait before interacting with an element. This has a number of benefits. It certainly helps to reduce the manual burden of configuring a bunch of waits or find steps when you're building a test, but it can also drive a lot of efficiency in test duration. So, for example, instead of waiting a couple of seconds by default, mabl can adjust over time to wait only as long as necessary, and when you add up those time savings across the many steps in a test, they can become significant. And on the other hand, mabl can automatically increase wait times when needed for higher reliability, to reduce unnecessary test failures.
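mabl hasn't published the details of this model, but the core idea of deriving a wait from historic run data can be sketched roughly as follows (an illustrative, assumption-based example; the percentile heuristic, safety margin, and names are not mabl's actual implementation):

```typescript
// Illustrative sketch only: derive an adaptive wait from historic timings.
// The percentile heuristic, safety margin, and names are assumptions, not mabl's model.

// Milliseconds it took for a given element to become actionable in past cloud runs.
const historicReadyTimesMs = [310, 295, 420, 380, 1250, 340, 360, 300, 410, 330];

// Pick a high percentile so the wait covers nearly all observed runs.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil(p * sorted.length) - 1);
  return sorted[index];
}

// Adaptive wait: long enough for ~95% of historic runs, plus a safety margin,
// falling back to a fixed default when there is no history yet.
function adaptiveWaitMs(samples: number[]): number {
  const fallbackMs = 5_000;
  if (samples.length === 0) return fallbackMs;
  return Math.round(percentile(samples, 0.95) * 1.2);
}

console.log(adaptiveWaitMs(historicReadyTimesMs)); // 1500, instead of a fixed default
```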
So next, let's look at the capabilities that we launched to help customers shift left and incorporate automated testing into their development processes. In order for automated testing to be successfully incorporated in development processes, engineering teams typically adopt techniques that improve communication and collaboration and ultimately improve the quality of tests. And later today, as Fernando mentioned, we'll hear from Jess at Kintent and Adeeb at Sensormatic about their approaches to collaboration and scalability across teams. Branching is a technique that's commonly used in software development to promote that collaboration and increase efficiency, and although branching is not new to mabl, we've expanded our capabilities here to include a much more robust workflow for conflict resolution, allowing users to easily resolve step and variable changes between test versions.
Another enhancement that we made to our branching workflow is the ability to view differences across versions of a test, commonly known as diffing. This helps users more easily track test changes over time and gives a much more explicit view into the differences between versions of a test.
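As a rough illustration of the concept (not mabl's diff engine, which operates on trained test steps and variables), diffing two versions of a test amounts to comparing their step lists and reporting what was added or removed:

```typescript
// Rough illustration (not mabl's diff engine): compare two versions of a test
// as ordered step lists and report what was added or removed.
type TestStep = string; // e.g. 'Click "Checkout"', 'Assert order total is visible'

function diffSteps(base: TestStep[], updated: TestStep[]) {
  return {
    added: updated.filter((step) => !base.includes(step)),
    removed: base.filter((step) => !updated.includes(step)),
  };
}

const mainVersion: TestStep[] = ['Visit /cart', 'Click "Checkout"', 'Assert order total is visible'];
const branchVersion: TestStep[] = ['Visit /cart', 'Apply coupon "SAVE10"', 'Click "Checkout"', 'Assert order total is visible'];

console.log(diffSteps(mainVersion, branchVersion));
// { added: [ 'Apply coupon "SAVE10"' ], removed: [] }
```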
And then another strategy for improving communication on development teams is adopting a test review process, where fellow team members can leave comments on work in progress before it gets merged to production. With the release of our new commenting feature, users can now more easily collaborate when they work on tests by providing feedback on specific test versions, which ultimately helps to improve test reliability and the speed at which you can evolve your test suite.
This past year, we've also made large strides in the area of non-functional quality, with the release of our accessibility testing and enhancements to our performance reporting. Building applications that are accessible to all is imperative for providing inclusive and equitable experiences. We know that an estimated 15% of the global population lives with some form of disability, while vision impairment alone is one of the most common health issues in the United States. This really makes it clear that accessible digital experiences need to be the norm and not the exception, and companies risk damaging their brand, undermining the trust of their users, or potentially even facing lawsuits when accessibility issues are resolved reactively in production rather than proactively further upstream. And so mabl now offers the ability to embed accessibility tests in your delivery pipeline with our low-code accessibility checks against specific pages or elements, as well as an overview dashboard to help you identify and resolve issues across your entire application.
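To give a flavor of what an automated accessibility rule checks (a generic illustration, not mabl's low-code checks), a simple rule might scan a page or element for images missing alternative text and fail the pipeline if any are found:

```typescript
// Generic illustration of an automated accessibility rule (not mabl's implementation):
// flag <img> elements that have no meaningful alternative text.
interface A11yViolation {
  rule: string;
  element: string; // short snippet of the offending markup
}

function checkImageAltText(root: ParentNode): A11yViolation[] {
  const violations: A11yViolation[] = [];
  root.querySelectorAll("img").forEach((img) => {
    const alt = img.getAttribute("alt");
    if (alt === null || alt.trim() === "") {
      violations.push({ rule: "image-alt", element: img.outerHTML.slice(0, 80) });
    }
  });
  return violations;
}

// In a delivery pipeline, a non-empty result would fail the check before release.
const violations = checkImageAltText(document);
if (violations.length > 0) {
  console.error(`${violations.length} accessibility violation(s) found`, violations);
}
```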
Performance is another aspect of non-functional quality that's essential to ensuring a great end-user experience, as we all know, and this year we've expanded the availability of the performance data that we collect from functional tests to provide better insight into the speed of page loads and the responsiveness of API endpoints. Since we already collect this performance data from cloud runs, it's a very low lift to take advantage of these features - you don't have to do anything other than run your test suite as you normally do. And actually, a few weeks ago, we added the ability to view trends in API response time across an entire test, which is the screenshot here that you're looking at, as well as response times within the individual steps of a test, so more at the endpoint level, which is a level deeper than what we're looking at here. These enhancements really help you identify and troubleshoot potential regressions in API performance as you're making updates to your application.
Now, just this week - on Monday, actually - we launched the ability to identify performance regressions at the release level, across both browser and API tests. We've added a new widget to the release coverage page, which you can see on the right side of this screenshot, where we highlight the tests whose underlying pages and endpoints had the biggest slowdowns in performance. This is to help users focus their attention on the most problematic areas of their application when they're evaluating the health of a release. These enhancements are also a preview of what's to come with many more performance testing features once we expand into full performance testing, as well as continuing to build out the unified platform across all aspects of quality in the software development lifecycle.
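The comparison behind a widget like this can be sketched roughly as follows (an illustrative example with assumed endpoint names and a simple median heuristic, not mabl's reporting code): take the typical response time per endpoint in two releases and surface the largest slowdowns.

```typescript
// Illustrative sketch (not mabl's reporting code): flag the endpoints whose typical
// response time regressed the most between two releases.
type ResponseTimesMs = Record<string, number[]>; // endpoint -> samples from test runs

function median(samples: number[]): number {
  const s = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function biggestSlowdowns(previous: ResponseTimesMs, current: ResponseTimesMs, top = 3) {
  return Object.keys(current)
    .filter((endpoint) => endpoint in previous)
    .map((endpoint) => ({
      endpoint,
      deltaMs: median(current[endpoint]) - median(previous[endpoint]),
    }))
    .sort((a, b) => b.deltaMs - a.deltaMs)
    .slice(0, top);
}

// Hypothetical data for two releases of a travel-booking API:
const previousRelease: ResponseTimesMs = { "GET /api/bookings": [120, 135, 128] };
const currentRelease: ResponseTimesMs = { "GET /api/bookings": [310, 295, 330] };
console.log(biggestSlowdowns(previousRelease, currentRelease));
// [{ endpoint: "GET /api/bookings", deltaMs: 182 }]
```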
So as you can see, it's been a busy year building features that support you on your quality engineering journey. And now let's take a look at what's on the horizon for mabl.
So we couldn't be more excited for where we're headed with mabl, and we plan to continue making progress towards our goal of being the easiest-to-use unified platform for quality engineering. One horizon where we're continuing to invest is improving our existing capabilities. There's a much longer list of improvements than what we could fit here on the slide, but just to highlight a couple: we're making improvements to data tables and variables to make troubleshooting and validating test cases easier. We're working on making mabl APIs available to customers to make it easier to interact with mabl programmatically - for example, exposing a reporting API to make it easy to export and analyze test results in the tool of your choosing, as well as exposing a data table API to more easily manage your test data. And we'd love to hear from you on additional APIs that might be valuable for integrating mabl more completely into your organization. On the Unified Runner front, we're working on adding Firefox to the unified runner to ensure a consistent and performant cross-browser testing experience, as well as support for WebKit to mature our Safari experience. We're working on static email addresses for mabl mailbox, which will enable users to effectively test workflows like password recovery and multi-factor authentication. And we're working on version control for JavaScript snippets, which will round out our branching capabilities and help increase efficiency for troubleshooting and test authoring.
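Since the reporting API mentioned above is still being worked on, its shape isn't public; the sketch below is purely hypothetical (the URL, response fields, and auth scheme are assumptions) and only illustrates the kind of programmatic export such an API would enable.

```typescript
// Purely hypothetical sketch: the endpoint path, response shape, and auth scheme
// are assumptions for illustration, not a released mabl API.
interface TestResultSummary {
  testName: string;
  status: "passed" | "failed";
  durationMs: number;
}

async function exportResults(apiKey: string): Promise<TestResultSummary[]> {
  const response = await fetch("https://api.example.com/v1/test-results", {
    headers: { Authorization: `Bearer ${apiKey}` }, // assumed auth scheme
  });
  if (!response.ok) {
    throw new Error(`Export failed with status ${response.status}`);
  }
  return (await response.json()) as TestResultSummary[];
}

// Results could then be pushed into whichever reporting or BI tool a team already uses.
exportResults("<your-api-key>").then((results) =>
  console.log(`Exported ${results.length} test results`)
);
```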
As we continue to work on expanding and polishing our existing capabilities, we're also committed to finding new domains where we can continue to meet your quality engineering needs. Some of the exciting areas that we're exploring include performance testing, as I mentioned, which is a crucial input to understanding how your system responds under high user load. Here we aim to provide efficiency by enabling you to reuse your functional tests in a performance test, and through our low-code approach we hope to make the practice of performance testing more accessible to non-technical team members. Additionally, we're exploring options for integrating native mobile testing into mabl. This is one of our most frequently requested features, and based on the customer conversations we've had, we know that this would be a groundbreaking addition to the unified platform. Specifically, we're investigating test creation against iOS and Android applications with execution in the cloud, alongside insightful results and diagnostic data, all in one place.
So we've also made big strides this year in evolving mabl as an organization to better support our customers. We love learning from our customers and know that you all are incredibly knowledgeable about the product and provide really insightful feedback that helps to shape the future of mabl. So we've made a number of operational improvements to align mabl even closer to our customers. At mabl, our mission is to transform quality engineering around the world, and so by definition we aim to empower anyone, regardless of where they're located on the globe, to deliver high-quality software. As part of this mission, we're thrilled to address the needs of our Japanese customers by offering our platform and documentation in Japanese, and we look forward to the prospect of supporting other geographic areas like this in the future. So now when Japanese users log in, they can choose their language of choice, and we've translated the majority of the static content in the application as well as 95% of our user documentation, which was a big milestone for us.
Now, we've also matured our product feedback process, and one of the big changes there has been upgrading our customer portal to a more robust system in a tool called Productboard. Customers have access to our product portal, where you can browse our current and upcoming investments, vote on specific feature ideas, and submit new ideas for consideration. So whether you're curious about what's on the horizon for mabl or you just want to share some feedback on how to improve the product, the product portal is the place to look, and you can find that resource at productportal.mabl.com. Internally, within mabl, we use this platform as a centralized repository of customer feedback. We have it connected to many different channels, including Slack, Intercom, and Salesforce, so any feedback, regardless of where it comes from, is immediately sent to the product team. Some of you who are active in the Friends of mabl Slack channel may have noticed that some of your messages there get tagged by the Productboard Slack bot; those messages are getting sent into this tool so the product team can take a look. The focus here is to pull all insights into a single place to make sure that we as a product team have full visibility into customer feedback as we're making prioritization decisions. And personally, I've found it really helpful with the research that we've been doing on performance testing, where we can very easily see comments from all the customers who requested that feature.
We've also introduced mabl labs, where customers have the ability to opt into early access features and provide feedback on a feature before it goes to general availability. This helps us get features in front of you faster, and the feedback that you provide through this mechanism helps to ensure that we end up delivering the best possible product. Microsoft Edge, as an example, was a feature that we put through the labs program.
Now, we know there's a lot of information to digest and skills to develop when it comes to using automation tools for quality. So we're excited to have launched our mabl University resource, where teams can come to get on-demand training for topics that are relevant to their needs. You can browse how-to videos - I think last time I counted there were over 30 up there - and you can follow a guided curriculum to get up to speed as fast as possible. That resource, if you haven't checked it out yet, is university.mabl.com.
And then lastly, we've also been maturing our product discovery process for how we research, design, and get feedback on new features. This year, we've more than doubled the size of our product management and UX design team, and that's really helped us perform deeper research and validation across a broader array of potential new features. When you combine that with the new customer portal, which better tracks which customers are asking for which features, it means we're able to take a much more targeted and thorough approach to learning about the problems that customers are looking to solve, designing solutions to meet those needs, and then working with customers to refine designs, test prototypes, and roll out beta and early access programs through mabl labs prior to making a feature generally available. So we really appreciate the deep engagement and partnership that we've had with many of the customers we've worked with to develop new features, and I really can't emphasize enough how helpful it is to the product management team to have such great customer partners providing insight and helping guide our thought process as we continue to evolve mabl.
So finally, we'd love to hear from you. Please reach out about what's on your mind: you can comment in the product portal or reach out to your customer success manager. And now I think we have a few minutes to take some questions.
That's great. Thank you, Owen - that's an impressive list of enhancements that you guys have pushed through over the last year. I can see here we have several questions already in the Q&A. Let's see if we can get through as many of these as possible.
The one with the most votes is: you mentioned all the API testing enhancements - how does that compare to Postman?
Yeah, so Postman is a great API development and documentation tool, whereas mabl is great for end-to-end API testing and monitoring across public and private environments. So I'd say that the key difference there is mabl as the unified platform, where you can combine those UI tests and API tests side by side. But all in all, I'd say that mabl and Postman complement each other nicely to help you develop, document, test, and monitor your web services.
That's great. Thanks. So next one here, you're gonna love this, any timeline that can be shared for native mobile?
Great question. It's a large, large engineering and product and design effort. We have gone deep on phase one of research and understanding customer needs, and are now continuing the discovery process. We have a mabl labs team working hard on native mobile, but it's tough to give an exact time there, because there are a lot of unknowns and overall it is quite a large project. But we're working hard on it, and we'd love to hear from you. If you haven't already indicated interest in native mobile through the product portal, then please send us a note - we'd love to take your use case into account.
That's great. Thanks. So we have one from Cindy here - I don't know if you know how to answer this one: I have several tests that need to loop. Each loop is considered a test and thus counts towards the total number of tests per month. How will API testing work in this regard? The same issue, I'm assuming, right, Owen?
Yeah, I believe so. Okay, so Cindy, it's the same way, right: it will be able to loop through them, and each loop will count towards the counter.
And then another one, when can we expect performance testing? Is there a beta release planned?
Yeah, so that's one where we're currently designing a prototype, and you saw yesterday, from Joe's demo, that there's a lot of activity happening on the backend side there. We've done a bunch of classic product research, and now we're actually in the build phase, where we're both building a lot of the back end on the engineering side and testing designs with customers on the UX side. So if you haven't already interacted with us on performance testing, we'd love to get in contact with you and keep you updated as these activities progress. The plan is to get to final designs, at least for a beta program, and then, through mabl labs, go through that process of beta and early access. So timelines there are a bit dependent on what we hear back from our customers and any changes that we might need to make before going to general availability. But that specifically is a project that I'm working on, so please also reach out to me directly at owen@mabl.com if you have specific interest there - we'd love to get on a call with you.
Thanks. Alright, I think we're almost out of time, but we have a moment to get just one more in. Can you give more details on what improvements are coming for data tables?
Data tables - yeah, so we've recently released a new trainer launch modal that enables you to specify data table scenarios when you're launching a new trainer session. We're actively working on adding the ability to change these session configurations while you're inside of a trainer session. And then next, we'll support data table scenarios for ad hoc test runs, as well as working towards support at the plan level. All right.
I think that's it - we're running out of time here. If we didn't get to your question, I'll personally get Owen to answer it right after this.
Thanks, everyone, for joining. Don't forget to join the next session at 11:35 Eastern - Bharti at its is going to be talking about optimizing functional coverage to build better traveler experiences.
Thanks everyone. Bye bye.