We are thrilled to announce the Visual AI Rockstar Hackathon winners. We think the people who participated and won the Hackathon are not just some of the top QA engineers but also trailblazers who are leading the way in pushing the QA industry forward. Congrats to all of them!
You can find all the Hackathon results here.
In this blog, we provide a summary of the Hackathon concept, explain how we designed it, and share some of the things we learned.
The Hackathon idea
Our idea for the Hackathon started with a question: what incentive would get an engineer to try a new approach to functional testing? Our customers had described how they use Applitools to accelerate the development of automated functional tests. We hoped engineers who weren't regular Applitools users could have similar experiences and get comparable value from Visual AI. The Hackathon seemed like a way to give engineers like you an incentive to compare traditional functional test code with Visual AI.
When you think about it, you realize that all user-observable functionality has an associated UI change. So if you simply take a screenshot after the function has run and the UI didn't change as expected, you have found a functional bug. And if the functionality worked but something else in the UI broke, you have found a visual bug. Since the screenshot captures both, you can easily do both visual and functional testing through our Visual AI.
Visual validation overcomes many functional test coding challenges. For example, many apps include tables, such as a comparison list of purchase options. If you let your customers sort the table by price, how do you validate that the sorted output appears in the correct order, and that all the row contents beyond the sort column behave as expected? Or what happens when you depend on a graphics library that must behave correctly, but you can only write HTML checks? For example, your app draws bar charts using the HTML Canvas element. How do you automate the validation of the Canvas output?
Since we simply take screenshots after the functionality runs, we capture everything. That simplifies all your validation tasks.
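To make the table example concrete, here is a rough sketch, in plain Python, of the kind of assertion code the traditional approach demands for a single sort check. The row data and the `validate_sorted_by_price` helper are hypothetical stand-ins; in a real test the rows would be scraped from the page with a browser automation tool.

```python
# Hypothetical sketch of traditional, code-based validation for a table
# sorted by price. Assumes the rows were already scraped from the page
# into dictionaries; in a real test they would come from element locators.

def validate_sorted_by_price(rows):
    """Return a list of problems found after sorting a table by price."""
    problems = []

    # 1. Check the sort order itself.
    prices = [row["price"] for row in rows]
    if prices != sorted(prices):
        problems.append("prices are not in ascending order")

    # 2. Check that each row's other columns stayed attached to its price,
    #    i.e. that the sort didn't shuffle names independently of prices.
    #    Here we compare against a known-good mapping captured before sorting.
    expected = {"Widget": 5, "Gadget": 10, "Gizmo": 20}
    for row in rows:
        if expected.get(row["name"]) != row["price"]:
            problems.append(f"row {row['name']!r} lost its price after sort")

    return problems

# A correctly sorted table passes...
good = [{"name": "Widget", "price": 5},
        {"name": "Gadget", "price": 10},
        {"name": "Gizmo", "price": 20}]
print(validate_sorted_by_price(good))  # []

# ...while a table whose names were shuffled away from their prices fails,
# even though the price column alone still looks sorted.
bad = [{"name": "Gizmo", "price": 5},
       {"name": "Gadget", "price": 10},
       {"name": "Widget", "price": 20}]
print(validate_sorted_by_price(bad))
```

And this sketch only covers two columns of one table. With visual validation, a single screenshot comparison replaces all of these hand-written checks, because any row that moves or loses data shows up as a visual difference.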
Creating the Hackathon Tests
Realistically, we know that free offerings still have real costs. To use our free Applitools account, you need to take the time to learn Visual AI and try it out. While you might be willing to try the free account, would your selected tests highlight the value of Applitools's Visual AI? We were confident that the right experience would make the value of Applitools easy to see. So, we built a test environment for the Hackathon in which you could run your tests.
We built a sample Hackathon app. Next, we designed five common use cases where Applitools and Visual AI result in simpler functional test code or make test automation possible at all. Finally, we ran the Hackathon and gave people like you the chance to compare writing tests using the traditional approach versus using Applitools. Engineers who tried the Hackathon generated many surprising and valuable experiences. Cumulatively, those experiences show the value of Visual AI in the workflows of many app development teams.
How We Judged The Hackathon Winners
We graded on a scale of 1–100 points, divided across all five use cases. Within each use case, we scored the Visual AI and traditional-approach submissions separately. What mattered included:
- Coverage completeness on each test case with traditional functional test code
- Coverage completeness using Visual AI
- Economy of coding in both the traditional-approach and Visual AI submissions
Our judges spent weeks looking through the submissions and judged every one carefully. The winners scored anywhere from 79 points all the way to a perfect 100!
Summary Of The Results
Part of our test design included building on-page functionality that required thoughtful test engineering. Hackathon winners needed to cover the relevant test cases as well as verify page functionality. Generally speaking, the people who scored highest wrote a lot of code and spent a lot of time on the traditional approach. Even with economical coding, the winners wrote many lines of code to validate a large number of on-page changes, such as sorting a table.
Unfortunately, we also found that many participants struggled to write proper tests in the traditional approach. Some struggled with test design, some with validating a given page structure, and some with other technical limitations of the code-based approach.
While participants either struggled or succeeded with traditional test code, pretty much every participant excelled at using Visual AI. Many of them succeeded on their first try! We found this to be very gratifying.
We plan to discuss this more in a future webinar, so stay tuned. In the meantime, check out the Hackathon winners below.
Additional Hackathon Winners
After judging, we found that some participants had identical scores or were very close calls. So we decided to award an additional 9 people a $200 prize each! Instead of 80 $200 winners, we'll have 89!
What The Hackathon Winners Say
We wanted to leave you with quotes from the Hackathon winners. We are glad to recognize them for their achievements and are pleased with their success.
About the Hackathon
- “This hackathon was a great way to get introduced to Applitools and the power of Visual AI testing. I will definitely be using it in my next automation project.” — Gavin Samuels, Lead Consultant
- “This is the most interesting and useful event of the year in the field of testing automation. This allows you to take a look at test automation from a different point of view and gives an opportunity to radically improve your existing approaches.” — Viktar Silakou, Lead QA Automation Engineer
- “This hackathon was a fun and challenging way of getting to know Applitools. It made great use of common day-to-day problems to show where Applitools clearly outperforms traditional approaches in speed, simplicity, and coverage.” — Arjan Blok, QA lead
- “Completing the Applitools Hackathon was a keystone achievement in my career! I learned more by participating in this hackathon than any other automation instruction I’ve taken in the past number of years. I’m now 100% convinced that visual AI testing is an essential tool for efficiently validating web and mobile software applications.” — Tracy Mazelin, QA Engineer
- “Solid Hackathon by Applitools! Provided a great experience to showcase the power of Visual AI Testing and how they are a leader in this field with functionality that their competitors do not have.” — Hung Hau, Sr. QA Automation Engineer
- “I liked that the hackathon had practical applications and links with examples to work on. It was a pragmatic approach. It provided a good way of practicing and comparing it with the traditional method. Applitools provided an alternative to UI testing that was easy to learn and fast to set up. It’s interesting to explore further on its applicability.” — Adina Nicolae, QA Team Leader
About Visual AI
- “Tool will be a game-changer in the near future if it hasn’t already.” — Oluseun Orebajo, Lead Test Practitioner
- “This challenge propelled me into digging for alternative ways to traditional testing. While solving the challenge, I realized that a tool like Applitools will save time on the proposed scenarios while still delivering the same value as other traditional frameworks. Congratulations for the initiative and the elegant manner chosen for making ‘the rockstars’ understand how powerful and awesome Applitools is.” — Corina Zaharia, Test Engineer
- “Although I prefer making apps accessible to screen readers so that they can also be tested, the importance of visual testing cannot be understated. Applitools made it so much easier to check if things are properly aligned, icons stay intact, and visualizations look correct, with just one line of code.” — Thai Pangsakulyanont, Frontend Architect
- “It was a fun way to discover the limitations of current “traditional way” versus what AppliTools can provide: from simple image comparisons, broken layout, intended changes, broken sort algorithms, dynamic content, and JIRA integration. Bottom line: you still need the screenshot anyway (for bug reporting or discussion around the topic) !” — Ioan Cimpean, Senior QA Automation Engineer
For More Information
Blogs about chapters in Raja Rao DV’s series on Modern Functional Testing:
- Modernize Your Functional Testing — Chapter 1 of Raja’s course.
- Advanced Tools for Testing Tables — Chapter 2 of Raja’s course.
- Data-Driven Testing with Visual AI — Chapter 3 of Raja’s course.
- How Do You Test Dynamic Content? — Chapter 4 of Raja’s course.
- Testing iFrames with Visual AI — Chapter 5 of Raja’s course.
- Complex Functional Testing Simplified — Chapter 6 of Raja’s course.
- Dynamic Data with Visual Validation — Chapter 7 of Raja’s course.
- Modern Functional Testing — Caveats and Conclusions — Chapter 8 of Raja’s course.
Actions you can take today:
Originally published at https://applitools.com.