Cross-Platform Testing in a Dash
One of the perks of my job is that we have lunch brought in a couple of times a week. On those days, we’re all eagerly awaiting an email from our office manager with the link to a DoorDash order form. We have less than an hour to get our orders in, so everyone stops whatever they are doing to make their selections before it closes. And of course, we’re all doing different things.
I’m a late bird, so I’m usually in transit to the office during this time, which means I place the order from my Android phone. And on some days, I’m watching videos when the pleasant interruption comes through, so my phone is in landscape mode.
Another coworker may be at their desk — plugged into a huge desktop monitor.
Someone else may be in a meeting and need to use their laptop to place their order using Chrome. There will be others in that same meeting who may be using Firefox or Safari. Then of course, there will be someone in the meeting who didn’t bring their laptop, so they’ll need to order from their iPhone.
Being a tester, I got to thinking, “Wow, that sure is a lot of configurations DoorDash has to test.” All of us are accessing the same form from different browsers, devices, and viewports, and yet we all have the expectation of a flawless experience. Our lunch depends on it!
How do I test for this?
As I enjoyed the chicken chili that eventually arrived from The Cheesecake Factory, I pondered this some more: “As an automation engineer, what would be the most efficient way to verify this?”
I could write my test once and have it executed across all of these configurations. That would certainly work, but I dread writing cross-platform test automation. I have to account for responsive changes of the app within my code.
For example, in this web version of the site, notice the header has the delivery address in the left corner next to the menu icon, the word “DOORDASH”, a search field, and the quantity specified on the cart.
However, on this mobile version, the header is different. The delivery address has been moved under a new horizontal rule, the name of the app is now gone, the search field has been replaced with a simple icon, and the quantity count has been removed from the shopping cart.
From an automation standpoint, this is a pain. I have to either write separate tests for each of the various configurations or account for these variations within my test code by using conditional statements.
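To make that concrete, here is a minimal sketch of what those conditional statements look like in practice. The breakpoint and selectors below are hypothetical stand-ins, not DoorDash’s actual markup:

```python
# Illustrative sketch: hypothetical selectors, not DoorDash's real markup.
# Cross-platform test code ends up branching on viewport size like this.

MOBILE_BREAKPOINT = 768  # assumed responsive breakpoint, in CSS pixels


def search_locator(viewport_width: int) -> str:
    """Return the CSS selector for the search control at a given width."""
    if viewport_width < MOBILE_BREAKPOINT:
        return "button.search-icon"   # mobile: search collapses to an icon
    return "input.search-field"       # desktop: full search field


def cart_badge_visible(viewport_width: int) -> bool:
    """The cart quantity badge is hidden on mobile layouts."""
    return viewport_width >= MOBILE_BREAKPOINT


# Every test that touches the header now needs both branches:
assert search_locator(375) == "button.search-icon"    # phone in portrait
assert search_locator(1440) == "input.search-field"   # desktop monitor
assert not cart_badge_visible(375)
```

Multiply that branching by every responsive element in the app, and the test code quickly becomes harder to maintain than the app itself.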
Is there a better way?
When the mobile space exploded in popularity, it became clear that we needed to test on every device and configuration. My DoorDash scenario illustrates how prevalent mobile technology is in our everyday lives. However, as I’ve shown above, it’s a real pain to test all of these configurations, especially for applications where even the Android and iOS apps differ in appearance. Not only is it a pain to write these automated tests, it’s also a pain to run them. Managing your own device lab is no easy feat. Fortunately, there are cloud providers to help manage this, but many companies are starting to wonder whether cross-platform testing is even worth it.
Most cross-platform bugs that are found are not functional bugs. In this day and age, browsers have gotten to a point where it’s rare to find a feature that works on, say, Chrome but not Safari. Today, most cross-platform bugs lie in how the layout responds to changes in viewport size.
Here’s a browser-view of an airline’s website on a mobile device. Imagine the annoyance of trying to check on a flight while heading to the airport and receiving this.
Here’s an app asking a user to activate their account. We all know how annoying it is to have extra steps when signing up for an account, especially when having to do this on a mobile view, but it is multitudes more annoying when there are also bugs like the one here where you can’t even see what you’re typing.
The interesting thing about these examples is that our functional tests would never catch such errors. In the airline example, our test code would verify that the expected text exists. Well, the text does exist, but so does other text as well. And it’s all overlapping and making it unreadable for the user. Yet, the test would pass.
In the account activation example, the functional test would make sure the user can type in the activation code. Well, sure, they can, but there’s clearly an issue here which could lead to sign-up abandonments. Yet, the test would pass.
These are not functional bugs; they are visual bugs. So even if your test automation runs cross-platform, you’d still miss these types of bugs if you don’t have visual validation.
Visual validation certainly solves the problem of catching the real cross-platform bugs and providing a return on your automation investment. However, is this alone the most efficient approach?
Run cross-platform, but visually
If the bugs that we’re finding are visual ones, does it really make sense to automate all of these conditional cross-platform routes and then execute functional steps across all of these viewports?
Probably not. UI tests tend to be fragile by nature. Running these types of tests multiple times across various browsers and viewports increases the chances of instability. For example, if, due to infrastructure issues, your tests fail 2% of the time on average, then running them across 10 more configurations raises the odds of at least one flaky failure roughly tenfold, from 2% to about 20%.
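The arithmetic behind that claim is worth a quick sketch, assuming each run fails independently at the same base rate:

```python
# Chance that at least one of n independent runs fails, given a
# per-run failure rate. Assumes failures are independent across runs.

def flake_probability(per_run_failure_rate: float, num_runs: int) -> float:
    return 1 - (1 - per_run_failure_rate) ** num_runs

# One configuration vs. the original plus 10 more:
print(round(flake_probability(0.02, 1), 3))    # 0.02
print(round(flake_probability(0.02, 11), 3))   # 0.199, roughly tenfold
```

So a suite that flakes once in fifty runs on one configuration will flake on roughly one in five full cross-platform runs.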
Fortunately, there’s a new, more efficient way to do this…via Applitools Visual Grid!
The Visual Grid runs visual checks across all of the configurations that you’d like. Write your test for one configuration, specify all of your other configurations, then insert visual checks at the points where you’d like cross-platform checks executed.
The Visual Grid will then extract the DOM, CSS, and all other pertinent assets and render them across all of the configurations specified. The visual checks are all done in parallel, making this lightning fast!
To gain a better understanding, let’s consider this in terms of the DoorDash scenario.
As opposed to running the following steps across every configuration:
- opening the link
- specifying my name
- choosing items to add to my order
- finalizing my order
I’d, instead, write and execute this test for one configuration and specify an array of configurations that I’d like Applitools to run against. Within my scenario, I’d make calls to Applitools at any point where I need something visually validated.
Let’s say one of the key steps is adding an item to the order. I call Applitools and say, “Hey, verify this across all my configurations.” With the Visual Grid, there’s no need to execute the steps before this, or even this step, on every configuration. The Visual Grid will instead capture the current state of the app and render that state across all of the other specified configurations — essentially showing what this app looks like when something is added to the cart, and validating that everything is displayed the way it’s intended for every configuration specified!
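Conceptually, the flow looks something like the sketch below. The helper names and configurations here are hypothetical illustrations of the idea, not the actual Applitools SDK API:

```python
# Illustrative sketch of the write-once, render-everywhere flow.
# visual_check() and the config list are hypothetical stand-ins,
# not the real Applitools SDK.

CONFIGS = [
    ("chrome", 1440, 900),
    ("firefox", 1366, 768),
    ("safari", 1280, 800),
    ("mobile-chrome", 393, 851),   # Android phone, portrait
    ("mobile-safari", 390, 844),   # iPhone, portrait
]

def place_order():
    checkpoints = []

    def visual_check(tag):
        # One DOM/CSS snapshot here fans out to every configuration
        # in parallel; the functional steps themselves run only once.
        checkpoints.append([(tag, cfg) for cfg in CONFIGS])

    # Functional steps execute a single time, on one configuration:
    # open the link, enter a name, add an item to the order...
    visual_check("item added to cart")
    visual_check("order finalized")
    return checkpoints

renders = place_order()
# 2 checkpoints x 5 configs = 10 rendered validations, 1 functional run
print(sum(len(checkpoint) for checkpoint in renders))  # 10
```

Each checkpoint multiplies across configurations at render time, while the functional steps stay a single pass.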
And because we are able to skip the irrelevant functional steps that are only executed to get the app in a given state, the parallel tests run in seconds rather than several minutes or even hours — making it perfect for continuous integration/deployment where fast feedback is key.
What are the drawbacks?
Again, these are visual checks, not functional ones. So, if your app’s functionality varies across different configurations, then it’s still wise to actually execute that functionality across said configs.
Also, the Visual Grid runs across emulators, not real devices. This makes sense when you’re interested in how something looks at a given viewport. However, if you’re testing something specific to the device itself, such as native gestures (pinching, swiping, etc.), then an actual device is more suitable. That being said, the Visual Grid runs the exact same browsers that are used on the real devices, so your tests will be executed with the correct size, user agent, and pixel density.
Where do I sign up?!
The Visual Grid proves that test automation doesn’t have to be a slow bottleneck for our deployment initiatives. The ability to run UI tests across a multitude of configurations within seconds is a game changer! In fact, I was able to run hundreds of UI tests across ten configurations before I could even get the lid off of the cheesecake I ordered with my lunch. So much for dessert…but that’s ok, the Visual Grid is sweeter than my Dulce de Leche could ever be.
Originally published at Applitools Blog.