Collaborating with a Generative AI: Dream or Reality?

Since the rise of generative AI in our daily lives, software companies have been relentlessly exploring concrete use cases to integrate it into their processes. The promises are many: increased productivity, cost reduction, smart automation… And behind the scenes, a fascinating idea is emerging: what if our next colleague was an AI? So, without further ado, let’s find out if collaborating with a generative AI is truly possible…
A disappointing first experience… but full of lessons
My first try with ChatGPT was a cold shower. I was trying to write a conference proposal (in English) and thought AI would save me time. The result: a soulless text. The content was flat, shallow, and lacked the personal touch I love (puns, analogies, vivid expressions). Even worse, the original ideas I had sketched out were lost in a soup of clichés.
So I quickly concluded that the technology wasn’t ready.
With hindsight, I now understand my expectations were biased. I didn’t understand the principles of generative AI or even the basics of prompt engineering. My disappointment wasn’t due to the tool, but to my lack of preparation.
Challenges of generative AI in professional use
1. A non-deterministic… and confusing behavior
Unlike traditional software, generative AI is not deterministic: sampling randomness deliberately varies its responses to make them sound more natural. The same prompt can yield different results depending on timing, tone, or phrasing.
As testers or QA analysts, this variability is confusing. We are trained to look for consistency and reproducibility. Yet AI won’t guess our need for rigor unless we make it explicit.
Lesson learned: the clearer the context, the more reliable the AI’s output. Prompt engineering isn’t a luxury, it’s a necessity.
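One practical way to make that context explicit is to stop typing prompts free-form and assemble them from named parts: the task, the hard constraints, and the expected output format. Here is a minimal sketch of that idea (the `build_prompt` helper and its parameters are mine, not a standard API); many model APIs also expose a temperature setting that reduces variability further.

```python
def build_prompt(task: str, constraints: list[str], output_format: str) -> str:
    """Assemble a prompt that states the task, the hard constraints,
    and the expected output format explicitly, instead of leaving
    the model to guess our need for rigor."""
    lines = [f"Task: {task}", "Hard constraints (do not violate):"]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Expected output format: {output_format}")
    return "\n".join(lines)

prompt = build_prompt(
    task="Rewrite these acceptance criteria in Given/When/Then style.",
    constraints=["Keep exactly 12 criteria", "Preserve their original order"],
    output_format="A numbered list, one criterion per line.",
)
print(prompt)
```

The point is not the helper itself but the habit: every expectation we would silently hold a human colleague to gets written down for the machine.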
2. The “dementia” effect: when AI forgets
Another common frustration: the AI sometimes “forgets” what it just generated. Ask it to rework a list of 12 acceptance criteria and it may return only 9… or 11 after apologizing.
This stems from context-window limits (even though windows have grown considerably lately) and from the model’s probabilistic nature: it works on likelihoods, not strict logic, so it is not a flawless partner.
Paradoxically, this is almost human. And perhaps that’s why these mistakes are so annoying.
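Since the model cannot be trusted to count, we can count for it. A cheap safeguard, in the tester’s spirit of this article, is to verify the output mechanically before accepting it; the sketch below (the function name is mine) counts numbered items in a reply and flags a mismatch so we know to re-prompt.

```python
import re

def count_numbered_items(response: str) -> int:
    """Count lines in the model's reply that start with '1.', '2.', etc."""
    return len(re.findall(r"^\s*\d+\.", response, flags=re.MULTILINE))

reply = "1. Login works\n2. Logout works\n3. Session expires after 30 min"
expected = 12
actual = count_numbered_items(reply)
if actual != expected:
    print(f"Warning: asked for {expected} criteria, got {actual} - re-prompt.")
```

Trivial as it is, this turns a silent omission into a visible failure, which is exactly what we do for any other unreliable dependency.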
3. A kind of algorithmic laziness?
Some interactions make it feel like the AI… doesn’t want to work. Example: I ask for an HTML dashboard. The AI gives generic advice, then asks: “Would you like me to generate it?” I confirm. The result: incomplete code, no downloadable file, and yet another question in return.
I don’t know if it’s technical limitations, server load control, or a safety mechanism. But the time savings become questionable. In such cases, collaborating with a generative AI is frustrating.
When AI becomes a real support tool
Fortunately, some features are truly game-changing. I never retype text manually anymore: a simple screenshot is enough, and the AI retrieves the content accurately. For documenting requirements or data processing, it’s a huge time-saver.
Even better, I’ve integrated ChatGPT into my conference prep—not as a writer, but as an English teacher. Here’s the prompt I use:
“You are my English teacher. I need to write a conference proposal. Analyze my text for spelling, grammar, style, and structure. Score each point out of 20 and explain your rating.”
And it works perfectly. The AI doesn’t replace my creativity, but it helps improve my skills.
Toward a new type of human–machine collaboration
Should we say “please” to an AI?
At first, I was very polite with ChatGPT. We read that politeness can improve responses, and the conversation sometimes feels so human that we project emotional expectations onto the machine.
But beware: this blurs our understanding. We expect AI to “remember” what it said two messages ago. And when it doesn’t, we’re disappointed or even annoyed.
I’ve found a balance: I stay polite when it feels natural, but I remind myself I’m talking to a tool, not a person.
The future of working with digital colleagues
It’s clear that working alongside AI will become common. For well-defined tasks (translation, summarizing, idea generation), AI is incredibly efficient. But it’s not yet ready to think for us or manage complex projects alone.
Still, evolution is fast.
OpenAI’s five-phase vision of AI evolution
- Chatbots and copilots: already here
- Reasoning models: our current stage
- Autonomous agents: AI handling complex tasks solo
- Innovative AI: challenging requests and offering better options
- AGI (Artificial General Intelligence): AI capable of running companies
Conclusion: a collaborative future… to embrace
I’m convinced LLMs will become true colleagues. Collaborating with a generative AI will feel more natural and widespread. All that’s missing is a smooth voice interface and an empathetic avatar to make the experience fully immersive.
Welcome to this new world, where human–AI collaboration is just beginning. There’s still time to learn how to work well with our future digital colleague.