mPrompto

The other day, I was watching my daughter learn about word associations from my wife, Aditi. Aditi was explaining the concept of “friends of letters,” where each letter is associated with words that start with it, much like “H” has friends in “horse” and “helicopter.”

This got me thinking about the world of generative AI, a field I find myself deeply immersed in. Just like how my daughter is taught to associate letters with words, generative AI operates on a similar principle, predicting the next word or phrase in a sequence based on associations and patterns it has learned from vast datasets.

Now, the analogy of letter friends is a great starting point to help someone grasp the basics of how generative AI works. However, it’s a bit simplistic when we consider the full complexity of these models. Generative AI doesn’t just consider the “friends” or associations of a word. It also takes into account the context of the entire sentence, paragraph, or even document to generate text that is coherent and contextually relevant. It’s like taking the concept of word association to a whole new level. So, when the AI sees the word “beach,” it doesn’t just think of “sand,” “waves,” and “sun” as its friends. It also considers the entire context of the sentence or paragraph to understand the specific relationship between “beach” and its friends in that situation.

Let’s take a moment to delve deeper into how a Large Language Model (LLM) like GPT-3 works. When you input a phrase or sentence, the model processes each word and its relation to the others in the sequence. (Strictly speaking, it works with tokens, which are fragments of words, but “words” is close enough for our purposes.) It analyzes the patterns and associations it has learned from the vast amounts of text in its training data. Then, using complex algorithms, it predicts what the next word or phrase should be. It’s a bit like the predictive text on your phone, but on steroids.
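To make the idea concrete, here is a minimal sketch of next-word prediction using a bigram model, that is, simply counting which word tends to follow which. A real LLM learns far richer, context-wide patterns over billions of documents, but the core loop of “look at what came before, predict what comes next” is the same. The tiny corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for an LLM's training data (purely illustrative).
corpus = (
    "our innovative solutions are designed to meet your needs "
    "our innovative solutions are designed to enhance efficiency "
    "innovative solutions are designed to meet your goals"
).split()

# Count which word follows each word -- a bigram model, a vastly
# simplified stand-in for the associations an LLM learns at scale.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("designed"))  # "to" -- the only word ever seen after it
print(predict_next("to"))        # "meet" -- seen more often than "enhance"
```

Notice that the model’s “guess” is just the statistically most common continuation, which is exactly why, without more context, its suggestions can come out generic.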

But here’s where it gets tricky. These models are, at the end of the day, making educated guesses, and sometimes, those guesses can be off the mark. We’ve all had our share of amusing auto-correct fails, haven’t we?

Let me try to explain it with an example in a B2B scenario, particularly in drafting emails or reports, which is a common task in the business world. Imagine you’re a sales professional at a B2B company and you’re tasked with writing a tailored business proposal to a potential client. You start by typing the opening line of the proposal into a text editor that’s integrated with an LLM like GPT-3.

As you begin typing “Our innovative solutions are designed to…” the LLM predicts and suggests completing the sentence with “…meet your company’s specific needs and enhance operational efficiency.” The model has learned from a myriad of business documents that such phrases are commonly associated with the words “innovative solutions” in a sales context.

However, the trickiness comes into play when the LLM might not have enough industry-specific knowledge about the potential client’s niche market or the technical specifics of your product. If you just accept the LLM’s suggestions without customization, the result might be a generic statement that doesn’t resonate with the client’s unique challenges or objectives.

For instance, if the client is in the renewable energy sector and your product is a software that optimizes energy storage, the LLM might not automatically include the relevant jargon or address industry-specific pain points unless it has been fine-tuned on such data.

So, while the LLM can give you a head start, you need to guide it by adding specifics:

“Our innovative solutions are designed to optimize energy storage and distribution, allowing your renewable energy company to maximize ROI and reduce waste.”

Here, everything after “designed to” reflects the necessary B2B customization that an educated guess from an LLM might not get right on its own.
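That “guide it with specifics” step can be sketched as a simple prompt-assembly function: keep the model’s generic scaffolding, but swap in the client detail that only you know. The function and field names below are hypothetical, just a minimal illustration of the workflow rather than any particular product’s API.

```python
def tailor_opening(stem, client_specifics):
    """Combine a generic LLM-suggested stem with client-specific detail.

    `client_specifics` is the niche knowledge (industry jargon, pain
    points) that an out-of-the-box model tends to miss.
    """
    return f"{stem} {client_specifics}"

# The LLM's generic suggestion, trimmed to its reusable stem:
stem = "Our innovative solutions are designed to"

# The human-supplied specifics for a renewable-energy client:
specifics = ("optimize energy storage and distribution, allowing your "
             "renewable energy company to maximize ROI and reduce waste.")

print(tailor_opening(stem, specifics))
```

The design point is deliberate: the model supplies fluency, while the human supplies the domain knowledge it lacks, and the final text is assembled from both.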

This illustrates the importance of human oversight in using LLMs for B2B applications. While these models can enhance productivity and efficiency, they still require human expertise to ensure that the output meets the high accuracy and relevancy standards expected in the business-to-business arena. My next blog will cover just that and help you understand how a ‘Responsible Human in the Loop’ makes all the difference for an organization while working with some use cases of generative AI.

I’m really looking forward to diving deeper into the realm of human oversight in using Generative AI in my subsequent blog post, where we’ll explore how a responsible human in the loop adds a layer of safety and relevance to the responses generated by LLMs. So, stay tuned!

If this blog helped you understand Generative AI a little better, do drop me a line. I always look forward to a healthy conversation around Generative AI.

Remember, the world of AI is constantly evolving, and there’s always something new and exciting just around the corner. Let’s explore it together!

Author – Ketan Kasabe, Co-founder: mPrompto