The Future of Test Management
Testing in the V-Model: only for mountaineers
I like the V-model. Not necessarily the procedure, but the visualization as a V. I think the model is honest because it describes our situation in software testing quite well. To the left, on the design side, it goes steeply downhill, and one is well advised to take the stairs. But even if you miss a step, you always go down, just not always as controlled as you would like.
On the test side to the right, however, you have to laboriously climb back up. Having built steps on the way down is essential for success. The ascent is all the more difficult if you sprained your foot during the descent on the left. That's exactly what testing is like: slower and more tedious than you would wish, and often hampered by unclear specifications.
Scrum: Between rollercoaster and chaos
In contrast, I find the usual depiction of Scrum as a rollercoaster (see Fig. 1) illogical. Okay, it symbolizes the iterative process, but where does the momentum to overcome the first incline actually come from? And where does testing fit in? After all, not every company relies on Test-Driven Development, and even if they do, we're talking mainly about component testing. Are system and end-to-end tests in Scrum carried out head over heels at the end? At least, that's what it looks like.

Fig. 1: Typical representation of Scrum (decorations not included in product)
If I had to depict Scrum, the image would look more like the left part of Fig. 2. At the beginning of an iteration, a little forward planning is done in sprint planning. Then the team returns to the core of the sprint and eagerly dives into development. This momentum carries them through the testing activities in the second half. In between, some time is spent on backlog refinement. At the end, the team looks back and compares the result with the plan in the sprint review. Do you recognize the similarity to the Mandelbrot set? I think this explains why agile projects occasionally seem chaotic, but it is still simply beautiful to work in an agile way…

Fig. 2: A different representation of Scrum
(Mandelbrot set: Pixabay-ID 2512689, christianpackenius)
The dream of complete test automation
One question remains open even in my depiction of Scrum: what about system and end-to-end tests? In an agile environment, ideally everything should be automated, including the higher test levels. But this is usually not that easy and sometimes simply not sensible. Common arguments against automation are that the necessary implementations are not yet available, that the test object is not yet stable enough, and that automated verification of the results is not always that simple.
The decision between manual and automated testing is not necessarily binary. Often, it makes sense to test manually first and later add selected tests to the automated regression suite. Remember, I'm still talking about system and E2E tests! But whatever test managers decide, time is and remains the most pressing problem. If they opt for automation, there is usually a lack of personnel: repeated changes to the system under test or the test environment cause automated tests to fail, and then employees with programming skills are needed, but they are already scheduled elsewhere. If they stick with manual testing, every change means that the manual tests have to be repeated, and manual testers work overtime so as not to be blamed for missing the milestone.
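To see why scripted tests are so sensitive to such changes, consider a minimal sketch of a typical UI regression test, here in Python with Playwright. The URL and all selectors are invented for illustration:

```python
# Minimal sketch of a scripted UI regression test (Playwright, Python).
# The URL and all selectors are hypothetical.
from playwright.sync_api import sync_playwright, expect

def test_login():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/login")   # hypothetical URL
        page.fill("#username", "alice")          # breaks if the field id changes
        page.fill("#password", "secret")
        page.click("#login-btn")                 # breaks if the button is renamed
        expect(page.locator(".welcome")).to_contain_text("Welcome")
        browser.close()
```

Rename the button id or restructure the form, and the test fails even though the application still works. Repairing it is exactly the kind of programming work that is chronically under-resourced.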
Beacon of hope “Generative AI”
Luckily, we now have generative AI! With ChatGPT, Copilot & Co, we can automate much faster and, above all, much more. This applies not only to test case or test script creation, but to the test process in general. Look at the ads touting tools and their promises: test case creation at the push of a button, self-healing scripts, intelligent defect analysis… All these tools are certainly helpful. I also praise many use cases of generative AI in the test process in our training course on “Testing with Generative AI”. ChatGPT & Co definitely make us more productive, but they don’t really solve the underlying problem. We still need developers who have time to write test scripts with Copilot and manual testers who think outside the box. After all, exploratory testing is considered a highly successful testing method. But you have to have the time for that.
Lynqa, the AI agent for manual test execution
With Lynqa, a whole new strategy opens up for test managers. Lynqa takes natural-language specifications for manual tests, without further additions or adaptations, and executes them on its own. The agent recognizes the intention behind the test case and looks for ways to carry out the test steps. The expected result is examined in detail and documented point by point. Screenshots make the process transparent and the conclusions verifiable.
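To make this concrete, here is what such a specification might look like. The test case below is purely hypothetical; the point is that it is written exactly as a human tester would read it, with no script syntax or tool-specific annotations:

```
Test case: Change the default delivery address
Preconditions: User "alice" is registered and logged in.
Step 1: Open the account settings.
Step 2: Select "Addresses" and click "Edit" on the default delivery address.
Step 3: Change the street to "Main Street 12" and save.
Expected result: A confirmation message appears, and the address list
shows "Main Street 12" as the default delivery address.
```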
Unlike an automated test script, Lynqa is able to react to unforeseen events. An unexpected cookie or advertising popup is detected and dismissed. Even missing test data can be supplemented by Lynqa, as shown in Fig. 3.
Fig. 3: Test data generation “on-the-fly”
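For comparison, a conventional script can only cope with surprises its author anticipated. The following minimal sketch (Python with Playwright, hypothetical selectors) shows the hard-coded popup handling a script needs, and why anything not on the list still breaks the run:

```python
# Sketch of explicit popup handling in a scripted test (Playwright, Python).
# All selectors are hypothetical. The script dismisses only the popups its
# author foresaw; any new overlay still makes the test fail.
from playwright.sync_api import Page

def dismiss_known_popups(page: Page) -> None:
    cookie_button = page.locator("#accept-cookies")
    if cookie_button.is_visible():
        cookie_button.click()
    ad_close = page.locator(".ad-overlay .close")
    if ad_close.is_visible():
        ad_close.click()
```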
New possibilities for test managers
With Lynqa, test managers can implement a much more flexible test strategy than was previously possible. While test design remains firmly in the hands of manual testers, the execution of these tests can now be scheduled overnight, on weekends, or simply in the background, parallel to other activities. Certainly, there is some effort involved in supervising the agent's work, but thanks to its detailed reporting, outliers can be identified quickly.
Test automation remains unaffected: it still makes sense to automate frequently performed regression tests. The test strategy of the future therefore provides for three execution methods:
- Manual tests without Lynqa, e.g., for exploratory tests or pen tests;
- Manual tests with Lynqa for systematic, specification-based tests, e.g., for new functions of the system under test;
- Automated regression tests, e.g., to safeguard critical functions or for load and performance tests.
In other words: with Lynqa, a completely new form of automated test execution becomes possible.
New tasks for test managers
New test processes and new tools also mean new tasks for test managers. These include:
- the adaptation of process specifications (SOPs, test strategies) to the new approach;
- more attention to “testing the test”, i.e., to the quality assurance of the generated execution reports;
- systematic training of employees to ensure their AI competence;
- especially in safety-critical industries, the qualification/validation of the AI agent, and
- generally, close monitoring of the agent’s performance through metrics, ideally collected automatically (see the sketch after this list).
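As a minimal sketch of that last point, the following Python snippet aggregates verdicts from the agent’s execution reports into simple monitoring metrics. The JSON report format, the “verdict” field, and the directory name are assumptions for illustration; adapt them to whatever your tooling actually exports:

```python
# Minimal sketch of automated metric collection over execution reports.
# The report format (one JSON file per test with a "verdict" field) and
# the directory name are hypothetical.
import json
from pathlib import Path

def collect_metrics(report_dir: str) -> dict:
    verdicts = []
    for report_file in Path(report_dir).glob("*.json"):
        report = json.loads(report_file.read_text())
        verdicts.append(report["verdict"])       # e.g. "pass", "fail", "blocked"
    total = len(verdicts)
    return {
        "executed": total,
        "pass_rate": verdicts.count("pass") / total if total else 0.0,
        "blocked_rate": verdicts.count("blocked") / total if total else 0.0,
    }

print(collect_metrics("./lynqa-reports"))        # hypothetical report directory
```

Tracked over time, even simple rates like these reveal whether the agent’s reliability drifts after changes to the system under test.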
The last two points sound worse than they are; we will go into more detail in a separate article. For now, just this much: Lynqa integrates seamlessly into a risk-based test strategy and contributes significantly to test process optimization.
Conclusion
Especially in an agile environment, test managers often find it difficult to integrate system and end-to-end tests into the iteration flow. In line with the test pyramid, these tests are primarily designed for manual execution. Agile principles, on the other hand, call for the most complete automation possible.
Lynqa opens up a new form of automation that requires neither programming nor a framework. The AI agent for performing manual tests reads manual test specifications and executes the test steps directly. This relieves manual testers and allows them to devote themselves to more value-adding tasks.

