
Gravity: From logs to a fresh new start


“Not all those who wander are lost” 

– J.R.R. Tolkien, The Fellowship of the Ring

A few months ago, if you visited our homepage, you would have read the following tagline: “From logs to tests – An innovative web app that helps you create functional tests based on your user’s application flows, assisted by AI.”

The (former) Gravity home page

Very descriptive, and accurate to what our platform was back then. Gravity expected logs from a production and a testing environment, used them to figure out the main user activity patterns in the application under test, evaluated how well a test suite covered those patterns, and generated automated test scripts.

This is basically the concept we developed and then showcased to a bunch of software development and QA teams. Some of them even tried using it and got some encouraging results. 

Except it didn’t work out that well.

Framing the problem 

Let’s soften the dramatic effect of that last sentence. A few things actually did work, starting with the validation of the problem we were trying to solve with our product: scoping high-level test suites to only what’s necessary, without compromising on quality.

In the era of Agile and DevOps practices, software development teams are able to continuously deliver value to their customers, up to multiple times a day. And they are doing it while keeping the quality checking process as targeted and short as possible. 

New test automation tools and processes (frameworks, continuous integration systems, reporting…) are released on a daily basis to help these teams improve their process and go even faster. Even so, the checking process is still considered a bottleneck, particularly when it comes to functional, high-level tests, which are notoriously expensive to maintain.

We talked with many people involved in designing quality assurance processes, and we discovered common challenges and requirements.

Assumption-driven testing

When you design tests during the development of a new feature, you rely on assumptions about how your users will interact with that feature (the path they take, the data they enter, the default choices they make). But users are human beings, and human beings are prone to chaotic behavior. They will never take exactly the nice nominal path you just designed and wrote an automated script for. And so, they may encounter issues you did not anticipate.

Over-testing (the product manager’s pet feature)

When considering your test suite, you might have the hunch that a big part of your high-level (end-to-end) tests are over-checking features that are not important to your users, and that you are losing a lot of time executing and maintaining them.

The imitation game

When an issue occurs in production, it is always really complicated to reproduce the path that led to it. It can become a challenging investigation involving:

  • Ops who can access technical metrics and errors monitored in observability tools
  • Product people who have the knowledge of how the features are supposed to work
  • Developers who will be able to understand cryptic errors and target the piece of code responsible for the issue

(and of course, “It does work well on my machine” is not an acceptable answer)

“From logs to tests” – a first step to “Usage-Driven Testing”

These problems motivated us to start building a “Usage-Driven Testing” platform. We think that learning from how an application is actually used is key to making a test suite as focused as possible. We think behavioral data can be leveraged to increase the quality of our software and accelerate delivery:

  • By discovering the actual paths our users take, we can update existing functional tests to fit reality and reduce the risk of regressions on key features
  • By comparing the sequences from tests with user paths, we can evaluate which tests are not “relevant” to users’ behavior and redesign them (move them to a lower level of implementation, for instance) or even remove them, saving test suite execution time and shortening the feedback loop
  • By reproducing more easily the paths that led users to regressions
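The second point can be pictured as a simple comparison: treat each test and each user path as a sequence of steps, and flag the tests that match no observed path. This is an illustrative sketch, not Gravity’s actual matching logic; all names are invented.

```javascript
// True if `needle` occurs as an ordered (not necessarily contiguous)
// subsequence of `haystack`.
function isSubsequence(needle, haystack) {
  let i = 0;
  for (const step of haystack) {
    if (i < needle.length && step === needle[i]) i++;
  }
  return i === needle.length;
}

// Return the tests whose step sequence matches no observed user path:
// candidates for redesign, demotion to a lower test level, or removal.
function findIrrelevantTests(tests, userPaths) {
  return tests.filter(
    (test) => !userPaths.some((path) => isSubsequence(test.steps, path))
  );
}
```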

Technical logs

We started building Gravity using technical logs as input data. At first, this approach had many advantages:

  • No need to add tracking code (unlike product analytics tools)
  • Most teams rely on logs to measure performance and track issues, so there is already plenty of available data
  • Good fit for API testing (same technical level)

We built the first Minimum Viable Product of the platform. It allowed users to import logs from many providers (Datadog, ElasticSearch, Dynatrace…), digest them into business-readable user sessions (provided the logs held “business-readable” data), and create functional tests from them (Postman and Cucumber). 
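Conceptually, the “digest” step groups log entries by session and orders them in time. Here is a minimal sketch under the (optimistic) assumption that each log entry already carries a session id, a timestamp, and a business-readable action — the field names are hypothetical, and real logs rarely look this clean, which, as described below, turned out to be the problem:

```javascript
// Group raw log entries into per-session, chronologically ordered
// lists of business-readable actions.
function digestLogsIntoSessions(logEntries) {
  const sessions = new Map();
  for (const entry of logEntries) {
    if (!sessions.has(entry.sessionId)) sessions.set(entry.sessionId, []);
    sessions.get(entry.sessionId).push(entry);
  }
  return [...sessions.entries()].map(([sessionId, entries]) => ({
    sessionId,
    actions: entries
      .sort((a, b) => a.timestamp - b.timestamp)
      .map((e) => e.action),
  }));
}
```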

Minimal viable product gravity

We demoed it and got signs of interest in the concept. We talked with a lot of QA people and found out that some of them were already implementing “Usage-Driven Testing” their own way (by building homemade, testing-oriented analytics frameworks, for example). We even had a couple of early adopters who managed to leverage their logs to generate tests.

But at the end of last spring, after a year of work, we decided to pause and take a step back. Why?

Because we learned we wouldn’t be able to release a commercial product with this solution.

Limitation of technical logs

Logs are technical. They:

  • are made for engineers, to help them monitor their applications, measure performance, and track issues
  • usually don’t contain any business information that could help easily reproduce the course of a user session
  • can’t easily be modified to add that information, and their non-standard nature makes them hard for an external tool like ours to consume
  • and, for obvious security reasons, companies are generally reluctant to let them leave their servers

When an organization wanted to try Gravity, we had to spend significant time auditing the logs of the application they wanted to test, then wait for their technical teams to integrate our feedback. Most of the time, we never got there and just gave up. It was a major usability issue (people could not start using the product despite being interested in the promise) and a viability issue (we could not afford to dedicate a team just to bootstrap new projects).

The problem we were trying to tackle seemed to be the right one; the “Usage-Driven Testing” approach resonated with the people we talked to. But the solution was not. 

“Usage-Driven Testing” – Season 2

“Obsess about the problem, not the solution”

– Richard Banfield, Product Leadership

We gathered the whole team and brainstormed. In what ways might we feed Gravity with usage data from which we could create functional tests? 

We decided to test the idea of collecting the data ourselves, by providing a JavaScript library to include in the web app or website to be monitored. We then quickly prototyped (in a week) a “new” Gravity app for demonstration purposes. We wanted to validate this new concept before investing as much time and energy as we had in the “log version”, and to address some risks early (“Will teams accept adding another script to their app? Will they worry about their users’ privacy?”). 

We showcased this prototype and, within a week, we hit the “5 teams want to install Gravity to test their application” goal we had set to decide whether to go further in this direction. 

Which we did. With the whole team, and armed with the early feedback from our discussions with the QA engineers, developers, and project managers who saw our prototype, we defined a new MVP and timeboxed its execution to a quarter. We wanted our first users to actually play with a production-ready application by mid-September.

And here we are.

Gravity’s first release scope

With this first release, we wanted to build a minimal set of features that would enable Agile/DevOps teams to design the most targeted high-level test suite possible, with the help of usage data.

Collect data

We released the first version of our data collection JavaScript library. It is published on NPM and served from a CDN, and it is compatible with browsers only. All you have to do is initialize it in your JS code, and it will automatically record your users’ interactions, in a GDPR-compliant way. It collects click and form interaction events, and obfuscates the data on the client side before anything is sent to us. 
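To illustrate the client-side obfuscation idea (this is a sketch of the principle, not the collector’s actual implementation), here is a minimal function that masks the content of a form value while keeping its shape, so sessions stay replayable without leaking user data:

```javascript
// Mask what the user typed before the event leaves the browser.
// Letters become "a" and digits become "0", preserving the value's
// length and structure but not its content.
function obfuscateFormEvent(event) {
  return {
    type: event.type,     // e.g. "change"
    target: event.target, // e.g. a CSS selector for the field
    value: event.value.replace(/[a-z]/gi, "a").replace(/\d/g, "0"),
  };
}
```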

We decided to make it open-source, so it can be easily reviewed by engineering teams. 

The repository is on GitHub. Feel free to contribute!

Explore user sessions

Once a few sessions have been recorded, you will need to browse them and find the ones that could be interesting for you.

In order to do that, we added some basic filtering capabilities, so you can search for sessions that contain a given sequence of pages:
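This filter can be pictured as an ordered match: a session qualifies if the queried pages appear in its visit history in order, possibly with other pages in between. A minimal sketch with invented names, not Gravity’s actual query engine:

```javascript
// Keep the sessions whose visited pages contain `pageSequence`
// in order (other pages may occur in between).
function filterSessionsByPageSequence(sessions, pageSequence) {
  return sessions.filter((session) => {
    let i = 0;
    for (const page of session.pages) {
      if (i < pageSequence.length && page === pageSequence[i]) i++;
    }
    return i === pageSequence.length;
  });
}
```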

Session page, Gravity

On a session page, you will find all the information needed to reproduce the user path across your application:

Session page details, Gravity


Evaluate test coverage against actual usage

You can define a new usage from a session page:



Save it, then find it in the “Usages” section to track usage and test coverage.

Generate Cypress tests

Did you spot an important usage not covered by any test? We added the capability to export a user session as a Cypress test, ready for execution.

Export test, Gravity

Two exports are available:

  • Download a full bootstrapped project (the test with all the JS dependencies)
  • View the script that you can just copy and paste into your code

All you have to do is add assertions and test data to the script to customize it.
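For illustration, an exported script might look roughly like the following Cypress test. The URL, selectors, and steps here are invented, and the TODO comments mark where your own assertions and test data would go:

```javascript
// Illustrative shape of an exported session replay; not an actual
// Gravity export.
describe("Recorded user session", () => {
  it("replays the captured path", () => {
    cy.visit("/login");
    cy.get("#email").type("user@example.com"); // TODO: replace with test data
    cy.get("#password").type("********");
    cy.get("form").submit();
    cy.get("nav .cart-link").click();
    // TODO: add assertions on what the user should see at each step
    cy.url().should("include", "/cart");
  });
});
```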


What’s next?

Our goal for the last quarter was to build a new Minimum Viable Product more consistent with what we learned during the first half of the year. The product team showed courage by discarding their previous work and starting anew. We hope the new Gravity version will enable Agile and DevOps teams to deliver faster without compromising quality.

All we want is feedback. Feedback about the value of our product and its usability (and of course, about all the issues we surely have :-)). And that’s why we need you.

If you are interested in Gravity, I invite you to book a demo here so we can walk you through all the features and our progress.

We hope to see you soon on Gravity.

Stay tuned!
