Sherlock Holmes of Generative AI
Sherlock Holmes meets a stranger. Within moments, he begins his deductions, speaking with confidence and precision. “You’re a surgeon,” he declares, noting the gentleman’s precise yet calloused hands, a tell-tale sign of frequent surgical work. He observes a faint iodine stain on the man’s shirt cuff, common in medical settings, and spots a ticket stub for a medical lecture poking out of his pocket. Holmes continues, “Recently returned from abroad.” The uneven suntan indicates the man has been in a sunny region and wore a hat most of the time, suggesting not a holiday but work under the sun. The type of hat? A pith helmet, typical of those worn by Europeans in tropical regions at the time.

Just as Sherlock Holmes unravels the intricate details of a stranger’s life with his sharp observations and deductions, Generative AI, through its advanced algorithms, embarks on a similar journey of piecing together information to generate insightful responses. This process, often referred to as “Chain of Thought,” mirrors Holmes’ methodical approach. In the world of AI, it involves the model sifting through vast amounts of data, identifying relevant patterns, and logically connecting them to address complex queries.

When Holmes observes the surgeon’s calloused hands or the uneven suntan, he is not just seeing isolated facts; he is linking them to a broader narrative. Similarly, when tasked with a question, an LLM (Large Language Model) doesn’t merely spit out pre-programmed responses. Instead, it navigates through a multitude of data points, much like stepping stones, to reach a conclusion. Each step in this process is akin to Holmes deducing the surgeon’s profession or his recent journey abroad – a calculated, sequential progression towards understanding.
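The “stepping stones” idea can be made concrete with a small sketch of how a chain-of-thought instruction might be framed. This is purely illustrative: the helper name and the instruction wording are my own assumptions, and in practice the resulting prompt would be sent to a model such as ChatGPT rather than printed.

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in an instruction that asks the model to lay out
    its observations and deductions before stating a final answer."""
    return (
        "Answer the question below. First list each observation, then the "
        "deduction it supports, and only then state your final answer.\n\n"
        f"Question: {question}\n"
        "Let's think step by step:"
    )

# Example: nudging a model to reason like Holmes instead of answering outright.
prompt = chain_of_thought_prompt(
    "A customer has emailed twice this week about delayed shipments. "
    "What underlying concern should support address?"
)
print(prompt)
```

The only change from a plain question is the framing around it, yet that framing is what invites the model to walk its stepping stones out loud.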
Thus, the LLM’s ‘thinking’ – its ability to process and connect information in a coherent, step-by-step manner – is not unlike the legendary detective’s famed deductive reasoning, offering a glimpse into how artificial intelligence can mimic sophisticated human thought processes.

The Chain of Thought capability in AI opens a treasure trove of opportunities for businesses and individuals seeking innovation and efficiency. For businesses, it can be a game-changer in areas like customer service, where AI can not only respond to queries but also anticipate and address underlying concerns, leading to a more intuitive and satisfying customer experience. In decision-making, executives can use AI to simulate various scenarios and outcomes, with a detailed analysis of each step in the decision chain, thereby enhancing strategic planning with data-driven insights. For creatives, this aspect of AI can inspire new directions in projects, offering a fresh perspective by logically connecting disparate ideas.

On an individual level, AI’s Chain of Thought can aid in personal development and learning, tailoring educational content to the individual’s learning style and progression. It can also assist in daily tasks, like budgeting or scheduling, by understanding patterns in behavior and preferences, thus offering optimized, personalized solutions. By harnessing this sophisticated aspect of AI, both businesses and individuals can streamline operations and foster a culture of innovation, making strides towards a more efficient, data-informed future.

Generative AI, empowered by its chain of reasoning, is making significant strides in solving complex problems across diverse sectors. In healthcare, AI models are revolutionizing diagnostic processes.
For instance, AI systems can analyze medical images, using a series of logical steps to identify patterns indicative of diseases like cancer, often with greater accuracy and speed than human specialists [1]. This method of reasoning not only improves diagnosis but also helps in personalizing treatment plans based on a patient’s unique medical history. In finance, AI is used for risk assessment and fraud detection. By logically analyzing spending patterns and account behavior, AI can flag anomalies that might indicate fraudulent activity, thereby enhancing the security and efficiency of financial transactions. In the creative industries, AI’s chain of reasoning is being harnessed for tasks like scriptwriting and music composition. By understanding and connecting various elements of storytelling or music theory, AI can assist artists in generating novel and intricate works, opening new avenues for creativity. These examples underscore how AI’s advanced reasoning capabilities are not just automating tasks but also providing deeper insights and innovations, transforming the landscape of these fields.

I invite you to embark on a detective journey akin to Sherlock Holmes’ adventures by engaging with an LLM like ChatGPT. Ask it to reveal its ‘Chain of Thought’ while responding to your queries or crafting an email. This experience will let you observe how Generative AI models ‘think’ and piece together information, much as Holmes unravels a mystery. If this glimpse into the AI’s deductive prowess intrigues you, do share your experience with a friend. For any inquiries or discussions about the fascinating world of Generative AI and its ‘Sherlockian’ reasoning abilities, feel free to reach out. I am always keen to investigate these modern-day mysteries with curious minds.
Author – Ketan Kasabe, Co-founder: mPrompto

Reference:
[1] https://openmedscience.com/revolutionising-medical-imaging-with-ai-and-big-data-analytics/#:~:text=AI%20and%20Deep%20Learning%20for%20Medical%20Image%20Analysis,-Artificial%20intelligence%20(AI&text=AI%20algorithms%20can%20detect%20early,function%20and%20diagnose%20heart%20disease.
Role of Subject Matter Experts in B2B Generative AI Applications
This blog is an attempt to highlight the ever-evolving landscape of generative AI in the business-to-business (B2B) sector and the crucial role humans will play in keeping it relevant. I still remember, from my time working at Rediff, watching great editors give a story the apt title (only after verifying it from three independent sources) and highlight what “matters” to make it interesting to read. The otherwise standard ANI or PTI stories were boring.

Drawing a parallel, I can see that AI-driven responses mirror a time-tested practice in the news industry, where an expert editor crafts a story’s impact through a well-chosen title and slug. In the same manner, when deploying generative AI applications like ChatGPT in the B2B space, the focus shifts to editing and templatizing responses, ensuring relevance and accuracy – a job only for “responsible” humans. This approach is essential because, while AI can generate vast amounts of content, it lacks the understanding of a subject matter expert (SME).

An SME is not just any human in the loop; they are the responsible human in the loop. This distinction is crucial because an SME brings a depth of knowledge and context-specific insight that a general overseer cannot. In the context of generative AI, such as the GPT models, this expertise becomes invaluable in addressing one of the models’ notable weaknesses: hallucinations, the generation of plausible but incorrect or nonsensical information that can lead readers down the wrong path.

Having an SME as part of the process is more than a quality check. It is about understanding the nuances of the industry, the specific needs of the business, and the balance between finesse of language and accuracy of information that AI alone might miss. This level of involvement ensures that the responses generated by AI are not just accurate but also relevant and tailored to the specific context of the B2B environment.
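One way to picture the SME’s role is as a review gate between the model and the reader. The sketch below is my own illustration, not a description of any real product: the class and function names are assumptions, and a production pipeline would connect to an actual model and review queue. The key idea is that every SME correction is captured as feedback.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An AI-generated draft awaiting SME sign-off."""
    ai_text: str
    sme_text: str = ""
    approved: bool = False

@dataclass
class FeedbackLog:
    """Corrections collected here can later inform fine-tuning (RLHF-style)."""
    corrections: list = field(default_factory=list)

def sme_review(draft: Draft, sme_edit: str, log: FeedbackLog) -> str:
    """The SME either approves the draft as-is (empty edit) or rewrites it;
    every rewrite is logged as a (before, after) feedback pair."""
    if sme_edit and sme_edit != draft.ai_text:
        log.corrections.append((draft.ai_text, sme_edit))
        draft.sme_text = sme_edit
    draft.approved = True
    return draft.sme_text or draft.ai_text

log = FeedbackLog()
draft = Draft(ai_text="Our platform guarantees 100% accurate forecasts.")
published = sme_review(
    draft,
    "Our platform improves forecast accuracy; results vary by dataset.",
    log,
)
print(published)             # the SME's corrected wording is what ships
print(len(log.corrections))  # 1 -- one correction captured for future training
```

The correction log is what turns a one-off edit into a lasting improvement signal for the model.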
Furthermore, incorporating SMEs into the AI loop acts as a form of Reinforcement Learning from Human Feedback (RLHF). This methodology allows for the continuous improvement of AI models based on this “responsible” human input. By identifying and correcting errors or shortcomings in AI-generated content, SMEs highlight areas for future work and development in these models. Their insights help fine-tune the AI’s output, making it more reliable and effective for B2B applications and truly putting blended Human + Computer interaction into effect.

To conclude, while generative AI holds immense potential for the B2B sector, its successful deployment hinges on the integration of SMEs into the process. They bring a level of scrutiny, understanding, and context that goes beyond what AI can achieve alone. By acting as a bridge between AI capabilities and real-world knowledge, SMEs ensure that the technology is not just a tool for generating content but a reliable partner in the B2B narrative. However, too much reliance on a well-trained AI model could lead SMEs to stop thinking outside the box, hampering creativity even as creation continues – and time will tell whether the danger is real, whether the machine will start being creative, and how humans will evolve!

Author – Sumit Rajwade, Co-founder: mPrompto
Friends like Horses and Helicopters
The other day, I was watching my daughter being taught word associations by my wife, Aditi. Aditi was explaining the concept of “friends of letters,” where each letter is associated with words that start with it, much like “H” has friends in “horse” and “helicopter.” This got me thinking about the world of generative AI, a field I find myself deeply immersed in. Just as my daughter is taught to associate letters with words, generative AI operates on a similar principle, predicting the next word or phrase in a sequence based on associations and patterns it has learned from vast datasets.

Now, the analogy of letter friends is a great starting point to help someone grasp the basics of how generative AI works. However, it’s a bit simplistic when we consider the full complexity of these models. Generative AI doesn’t just consider the “friends” or associations of a word. It also takes into account the context of the entire sentence, paragraph, or even document to generate text that is coherent and contextually relevant. It’s like taking word association to a whole new level. So, when the AI sees the word “beach,” it doesn’t just think of “sand,” “waves,” and “sun” as its friends. It also considers the entire context of the sentence or paragraph to understand the specific relationship between “beach” and its friends in that situation.

Let’s take a moment to delve deeper into how a Large Language Model (LLM) like GPT-3 works. When you input a phrase or sentence, the model processes each word and its relation to the others in the sequence. It analyzes the patterns and associations it has learned from the vast amounts of text in its training data. Then, using complex algorithms, it predicts what the next word or phrase should be. It’s a bit like a super-advanced version of the predictive text on your phone. But here’s where it gets tricky.
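The “friends of words” idea can be made concrete with a toy next-word predictor built from bigram counts. This is a deliberate simplification under my own assumptions: real LLMs learn probabilities over entire contexts with neural networks, not raw adjacency counts, but the toy shows the association principle at work.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which: each word's 'friends' and how
    often they appear right after it."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "the beach has warm sand",
    "the beach has big waves",
    "the beach has bright sun",
]
model = train_bigrams(corpus)
print(predict_next(model, "beach"))  # prints "has", its most common friend
```

An LLM replaces these raw counts with learned probabilities conditioned on the whole passage, which is also why its suggestions remain educated guesses rather than certainties.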
These models are, at the end of the day, making educated guesses, and sometimes those guesses can be off the mark. We’ve all had our share of amusing auto-correct fails, haven’t we? Let me explain with an example from a B2B scenario, particularly drafting emails or reports, a common task in the business world.

Imagine you’re a sales professional at a B2B company, tasked with writing a tailored business proposal for a potential client. You start by typing the opening line of the proposal into a text editor that’s integrated with an LLM like GPT-3. As you begin typing “Our innovative solutions are designed to…” the LLM predicts and suggests completing the sentence with “…meet your company’s specific needs and enhance operational efficiency.” The model has learned from a myriad of business documents that such phrases are commonly associated with the words “innovative solutions” in a sales context.

However, the trickiness comes into play when the LLM doesn’t have enough industry-specific knowledge about the potential client’s niche market or the technical specifics of your product. If you simply accept the LLM’s suggestions without customization, the result might be a generic statement that doesn’t resonate with the client’s unique challenges or objectives. For instance, if the client is in the renewable energy sector and your product is software that optimizes energy storage, the LLM might not automatically include the relevant jargon or address industry-specific pain points unless it has been fine-tuned on such data.

So, while the LLM can give you a head start, you need to guide it by adding specifics: “Our innovative solutions are designed to optimize energy storage and distribution, allowing your renewable energy company to maximize ROI and reduce waste.” Here, the second half of the sentence is the B2B customization that an educated guess from an LLM might not get right on its own.
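That customization step can be sketched as a simple substitution pass over the model’s generic suggestion. The helper and the replacement map are my own illustrative assumptions; in practice the salesperson or SME would edit the text directly, or the model would be fine-tuned on industry data.

```python
def customize(llm_suggestion: str, replacements: dict) -> str:
    """Swap generic LLM phrasing for client-specific wording."""
    text = llm_suggestion
    for generic, specific in replacements.items():
        text = text.replace(generic, specific)
    return text

# The generic completion an LLM might suggest for a proposal opener.
suggestion = ("Our innovative solutions are designed to meet your company's "
              "specific needs and enhance operational efficiency.")

# Splice in the renewable-energy specifics the model couldn't know.
tailored = customize(suggestion, {
    "meet your company's specific needs":
        "optimize energy storage and distribution",
    "enhance operational efficiency":
        "help your renewable energy company maximize ROI and reduce waste",
})
print(tailored)
```

The mechanical substitution is trivial; the value lies in a human knowing which generic phrases to replace and with what.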
This illustrates the importance of human oversight in using LLMs for B2B applications. While these models can enhance productivity and efficiency, they still require human expertise to ensure that the output meets the high accuracy and relevancy standards expected in the business-to-business arena. My next blog will cover just that and help you understand how a ‘Responsible Human in the Loop’ makes all the difference for an organization working with generative AI use cases.

I’m really looking forward to diving deeper into the realm of human involvement with Generative AI in my subsequent blog post, where we’ll explore how people add a layer of safety and relevance to the responses generated by LLMs. So, stay tuned! If you feel you have understood Generative AI better through this blog, do drop me a line. I always look forward to a healthy conversation around Generative AI. Remember, the world of AI is constantly evolving, and there’s always something new and exciting just around the corner. Let’s explore it together!

Author – Ketan Kasabe, Co-founder: mPrompto