Depending on the field, these tools can be virtual assistants, data analysis facilitators, or even catalysts for innovation. In any case, they’re tools that are hard to ignore today. Behind every relevant answer often lies a good prompt — that precise instruction guiding AI to deliver the expected result. But crafting a good prompt takes time, requires adjustments, and often relies on personal experience. As a result, everyone works in isolation, repeats the same tests… and collective efficiency suffers.
In this article, we explore how to move from an individual to a collaborative approach to AI, by sharing our best practices and implementing our prompts collectively to boost productivity, inspire team members, and improve workflow and output quality.
Loss of efficiency
We examined a problem we’ve been facing for several months. Each person using generative AI tools works independently, based on their own experience. As a result, a considerable loss of efficiency can occur.
Anyone who has worked with generative AI knows that the first prompt is rarely optimal. Not all specificities are accounted for initially — it’s an iterative process. A first prompt is written, tested, adjusted, and tested again. This cycle of writing and testing can take a significant amount of time, to the point where one might wonder if it wouldn’t have been faster to write the desired result manually.
The most effective users have learned to record the prompts that deliver high added value, along with the components that lead to good results.
Where human collaboration comes in
Given the effort required to craft a good prompt, we asked ourselves how best to share these practices.
First, we set up dedicated communication channels to discuss everything related to AI integration.
Beyond that, we now have a shared catalog among all employees — a workspace where people can add their most frequently used prompts and refer back to them later.
This catalog serves two purposes. First, it allows everyone to get inspired by the capabilities of different AI tools (ChatGPT, Gemini, Grok, etc.). In other words, by seeing the prompts others use, team members can identify ways to use AI to make their own processes more efficient. Second, since writing a good prompt requires considerable effort, it makes sense to let colleagues benefit from the results of that iterative work.
Just like sharing any reusable internal work document, it’s logical to adopt the same reflex when it comes to prompts.
In practice
At Libéo, our prompt catalog is part of our internal knowledge base, equipped with categories, filters, and keyword search. This allows everyone to quickly find the right tool for the right situation.
For a team just starting out, a simple shared Excel or Google Sheet can do the job. The key is to make the content easily accessible and collaborative.
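To make the idea concrete, here is a minimal sketch of what such a catalog could look like in code. The `Prompt` fields and the `PromptCatalog` class are hypothetical, not part of any tool the article describes; they simply illustrate the kind of categories, tags, and keyword search mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class Prompt:
    """One shared prompt entry, with metadata for browsing and filtering."""
    title: str
    body: str
    tool: str                       # e.g. "ChatGPT", "Gemini", "Grok"
    tags: list[str] = field(default_factory=list)

class PromptCatalog:
    """A tiny in-memory catalog with keyword search across all fields."""
    def __init__(self) -> None:
        self.prompts: list[Prompt] = []

    def add(self, prompt: Prompt) -> None:
        self.prompts.append(prompt)

    def search(self, keyword: str) -> list[Prompt]:
        kw = keyword.lower()
        return [
            p for p in self.prompts
            if kw in p.title.lower()
            or kw in p.body.lower()
            or any(kw in tag.lower() for tag in p.tags)
        ]

# Example usage: store a prompt once, let the whole team find it later.
catalog = PromptCatalog()
catalog.add(Prompt(
    title="Unit test generator",
    body="Generate a single unit test, mocking database and API calls.",
    tool="ChatGPT",
    tags=["testing", "mocking"],
))
print([p.title for p in catalog.search("mock")])
```

A shared spreadsheet gives you the same three things with zero code: one row per prompt, one column per metadata field, and the built-in filter for search.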
For example, we use generative AI to help us write test suites that ensure the quality of our applications.
Concrete example of such a prompt: "Generate a single unit test, using the createMock method where needed to mock methods that make database and API requests."
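The kind of test that prompt is meant to produce looks roughly like this. The article's `createMock` method is from its own stack; this sketch uses Python's standard `unittest.mock` as an analogue, and `UserService`, `find_user`, and `fetch_details` are hypothetical names invented for the illustration.

```python
import unittest
from unittest.mock import Mock

# Hypothetical service under test: reads a user from the database,
# then enriches it with data from an external API.
class UserService:
    def __init__(self, db, api_client):
        self.db = db
        self.api_client = api_client

    def get_profile(self, user_id):
        user = self.db.find_user(user_id)
        extra = self.api_client.fetch_details(user_id)
        return {**user, **extra}

class TestUserService(unittest.TestCase):
    def test_get_profile_merges_db_and_api_data(self):
        # Mock the database and API boundaries so the test stays fast
        # and deterministic, as the prompt above requests.
        db = Mock()
        db.find_user.return_value = {"id": 1, "name": "Alice"}
        api = Mock()
        api.fetch_details.return_value = {"plan": "pro"}

        service = UserService(db, api)
        profile = service.get_profile(1)

        self.assertEqual(profile, {"id": 1, "name": "Alice", "plan": "pro"})
        db.find_user.assert_called_once_with(1)

if __name__ == "__main__":
    unittest.main()
```

Storing the prompt rather than the test itself is the point: the same instruction can regenerate tests for any service in the codebase.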
Whatever the format, one must keep in mind that these practices don’t eliminate bias, hallucinations, or possible errors generated by AI tools. They simply help increase efficiency and share learnings.
In summary
To collaborate effectively with artificial intelligence, it’s not just about knowing how to write good prompts — it’s about learning to build collectively on that work.
As we’ve seen, a good prompt takes time, iteration, and experience. Without teamwork, everyone repeats the same process alone, at the expense of productivity. By centralizing our best prompts in a shared database — whether sophisticated or as simple as a shared file — we save time, inspire one another, and expand our range of possible uses.
The key is to view AI integration as a collective field of experimentation rather than an individual tool. By sharing best practices, we not only improve the quality of results but also our capacity for innovation. And that’s something no algorithm can do for us.