How to Engage with Generative AI
- Andrew Patricio
- Mar 17
- 5 min read

Gen AI seems to be everywhere these days, but when it comes down to it, very few organizations really understand what to do with it beyond off-the-shelf tools such as Copilot and ChatGPT, or the occasional custom chatbot.
Almost uniquely in the world of AI, engaging with Gen AI means using a model without having to train it yourself. This makes it orders of magnitude cheaper: while it costs hundreds of millions of dollars to train a Large Language Model (LLM), it takes mere hundreds of dollars to run one.
Which means the barrier to entry for Gen AI is very low. Add the fact that there are many foundational LLMs to choose from, and the best way to start working with Gen AI in your business is simply to start: experiment rather than plan.
The question then becomes how exactly does one experiment?
Gen AI as Word Association Game
Key to figuring out the best use cases for your business is to understand what Gen AI is and what it is not. There is a real revolution but there is also a lot of hype.
The first thing to understand is that Gen AI is not a model of knowledge, it is a model of language. At a very simplistic level, it’s really nothing more than a word association game.
The reason the input to a Gen AI tool is called a “prompt” and not a “query” is that the former is a more accurate description of what you are doing: you are not asking it for information, you are prompting it to guess the next word based on the words you typed in.
Want to understand what Gen AI does? Think of it this way: “Fill in the blank: A children’s book about a cat is The Cat in the ___.” If you grew up in the US, the obvious answer is “hat.” Boom. Welcome to Gen AI.
While that may sound trivial, it’s actually a very complex task because the prompt could be anything. This is why these models require billions of parameters in order to make a good attempt at capturing all the different meanings and relationships between words.
At a very basic level, when you enter a prompt the LLM reaches down through its massive model of how words relate to each other and determines the most likely word to appear in response to that prompt.
Then it takes that word plus the original prompt, feeds it all back into itself, and guesses the next expected word. Over and over, until the entire response is complete.
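The loop described above can be sketched with a toy word-association table. This is a hedged illustration of the feed-the-output-back-in idea only, not how a production LLM works: real models use neural networks over tokens and sample from a probability distribution over a huge vocabulary, not a word-count table built from one sentence.

```python
from collections import Counter

# A tiny corpus standing in for the billions of examples a real LLM is
# trained on (made-up toy data, purely for illustration).
corpus = "the cat in the hat is a children's book about a cat in a hat".split()

# Build a bigram table: for each word, the words seen following it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(prompt, length=5):
    """Autoregressive loop: guess the most likely next word, append it,
    feed the result back in, and repeat until the response is complete."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no known continuation for this word
        # Take the most frequent follower. A real LLM instead samples
        # from probabilities over its entire vocabulary.
        out.append(Counter(candidates).most_common(1)[0][0])
    return " ".join(out)

print(generate("hat"))
```

With such a tiny table the output quickly loops back on itself, which is exactly why real models need billions of parameters: the "table" has to cover any prompt at all.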

So when you use a Gen AI tool you are “prompting” it to start a series of word association games; you are not “asking” it to answer a question. Where this moves from cool toy to useful tool is when the prompt is far more ambiguous.
Distillation not Creation
What does this mean for your business? One way to figure this out is to recognize that Gen AI doesn't really generate anything from scratch. It pulls from the huge set of data it has been trained on.
Which means a better description of what Gen AI does is distillation.
It is not actually creating anything new; it is taking a very complete model of the relationships between words and distilling that complexity into a smaller set of words based on the prompt entered.
So when you first engage with Gen AI, the best way to think about it is NOT as an actual intelligence (even for agentic systems). It's not HAL or C-3PO, or even Skynet for that matter. Instead, treat it as an information assistant: very powerful, but not a peer to human work.
A good analogy for Gen AI is Supreme Court clerks. The clerks do not replace the justices, but they allow them to do their work much more effectively.
The justice comes up with the overall structure of the argument and then asks the clerks to help flesh it out: find a precedent for certain aspects, or look into the nuances of particular laws. It's a conversation between the justices and their clerks.

In the same way as a clerk, a Gen AI tool cannot create anything brand new, but it is very effective at finding and summarizing relevant information from what it's been trained on and the documents it's been fed along with the prompt.
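The "documents fed along with the prompt" pattern can be sketched as nothing more than prompt assembly. Everything here is hypothetical — the function name, the prompt wording, and the placeholder documents — and the resulting string would be handed to whatever LLM client your stack actually provides:

```python
# A minimal sketch of the "clerk" pattern: bundle your own reference
# documents with the request and ask the model to distill from them,
# leaving verification to the human expert.

def build_clerk_prompt(documents: list[str], request: str) -> str:
    """Assemble a prompt that pairs the request with numbered documents
    and instructs the model to answer only from that material."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        "Using ONLY the documents below, "
        f"{request}\n\n{context}\n\n"
        "Cite the document number for each point so a human can verify it."
    )

prompt = build_clerk_prompt(
    ["Precedent A: ...", "Statute B: ..."],  # placeholder documents
    "summarize the precedents relevant to this case.",
)
print(prompt)
```

Asking for document numbers in the response is one simple way to keep the human "in charge": every claim the tool makes points back to material an expert can check.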
Gen AI only knows what it has seen
LLMs are able to perform well in this situation because they have been trained on billions of examples of high-quality text and images. But despite that, these Gen AI "clerks" only know what they've been told, and sometimes the responses can get a little squirrelly.
The best example by far of how this can go awry is to ask a tool like ChatGPT to create an image of a clock displaying a particular time, say 7:45. No matter what time you ask for, it will almost always display a clock showing 10:10.
This is because the overwhelming majority of clock images on the internet, and thus in its training set, show the hands in that position, because it is pleasingly symmetrical.

A wrong clock is obvious; in a more particular domain of knowledge, the errors are subtler. This means that, just as with a Supreme Court justice, the Gen AI response must be vetted and edited into the final product by the person with expertise. It's not so much human in the loop as human in charge.
Human + Gen AI
Lacking this, you will often still get some value, but inevitably nuance will be lost. And in a world where everyone is using Gen AI, the human component will become the differentiator: not merely expertise itself, but knowing how to use Gen AI to effectively augment that expertise.
So when you think about engaging with Gen AI in your business, start with your domain subject matter experts and give them the freedom to experiment with how it can best be deployed. Only by developing your business's expertise in using Gen AI at the same time as you develop the actual tools will you meet with sustainable success.