
04/18/2023

Leveraging Predictive ML to Maximize the ROI of Every Customer Chat

Author: Tin-Yun Ho

TL;DR

The explosive emergence of OpenAI’s ChatGPT has generated a wave of intense interest among enterprises of all sizes and industries in leveraging Large Language Models (LLMs) to create chat-based interfaces for their end users. However, while LLMs on their own are already surprisingly powerful, there is still significant untapped potential for further enhancing the ROI of every customer chat by complementing LLMs with the power of predictive Machine Learning (ML). 

In this blog post, we will explore how predictive ML can turn every step of your LLM-powered customer chat into a revenue-generating opportunity.

LLM-Based Chat System Status Quo

LLMs are experts at generating plausible-sounding text: given a piece of text as a prompt, they are trained to predict the next tokens in an autoregressive way. A well-documented risk of LLMs designed this way, however, is their tendency to hallucinate plausible text that is factually inaccurate, sometimes dangerously so. 

To prevent this, the current generation of LLM-based chat systems typically ‘ground’ their LLMs by retrieving relevant facts and incorporating them into the input prompt, better ensuring that the generated text reflects the retrieved facts. As an example of this kind of architecture, OpenAI recently released a ChatGPT Retrieval Plugin, which enables GPT-based chat systems to automatically query a vector database (such as Pinecone, Weaviate, or Redis) of your enterprise’s documents and other content, retrieving relevant snippets to use as prompt input.

Currently, this type of query is usually based only on textual similarity, which enables LLM-based chat systems to provide factually accurate responses to customer inquiries and service requests. However, the approach does not connect these chats to potential business value or to any downstream decision-making based on the chat session.
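
To make the grounding flow concrete, here is a minimal Python sketch of the retrieval-augmented pattern described above. It uses a toy in-memory index with a bag-of-words embed() function standing in for a real embedding model and vector database (Pinecone, Weaviate, Redis, etc.); all names and the prompt template are illustrative assumptions, not any vendor’s API.

```python
import math
from collections import Counter

# Toy embedding: bag-of-words counts. In production this would be an
# embedding model plus a vector database (e.g. Pinecone, Weaviate, Redis).
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

DOCS = [
    "Returns are accepted within 30 days with the original receipt.",
    "Standard shipping takes 3-5 business days; express takes 1-2 days.",
    "Warranty claims require proof of purchase and a product serial number.",
]
INDEX = [(doc, embed(doc)) for doc in DOCS]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by similarity to the customer's question.
    q = embed(query)
    ranked = sorted(INDEX, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def grounded_prompt(question: str) -> str:
    # Prepend the retrieved facts so the LLM answers from them.
    facts = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer the customer using only the facts below.\n"
        f"Facts:\n{facts}\n"
        f"Customer: {question}\nAgent:"
    )

print(grounded_prompt("How long do I have to return an item?"))
```

The string returned by grounded_prompt() would then be passed to the LLM, keeping the generated answer anchored to the retrieved facts.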

The Missing Link: Predictive ML

Predictive ML leverages patterns and relationships in your historical data to make predictions about the future. It has been successfully applied across industries to improve customer experiences, make more profitable decisions, and streamline processes.

In the context of LLM-based chat systems, seamlessly integrating predictive ML outputs into the LLM input prompts can unlock even greater potential. By providing valuable context about each customer and their predicted behavior and preferences, chatbots can go beyond answering the customer’s question with factually accurate responses and take advantage of in-session opportunities for personalization, upselling, cross-selling, retention, and more.
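
As a minimal sketch of this integration, the snippet below extends a grounded prompt with hypothetical model predictions (a churn-risk score, product-theme affinities, and predicted lifetime value). The customer_context() helper, its field names, and the values are illustrative assumptions, not a specific product API.

```python
# Hypothetical predictive-ML outputs for the customer in this chat session.
# In practice these would come from deployed models (churn, affinity, LTV, etc.).
def customer_context(customer_id: str) -> dict:
    return {
        "churn_risk": 0.72,                        # predicted probability of churn
        "top_affinities": ["Pokemon", "outdoor"],  # predicted product themes
        "predicted_ltv": 430.0,                    # predicted lifetime value, USD
    }

def enriched_prompt(customer_id: str, question: str, facts: list[str]) -> str:
    # Combine retrieved facts with per-customer predictions in one prompt.
    ctx = customer_context(customer_id)
    fact_block = "\n".join(f"- {f}" for f in facts)
    return (
        "You are a helpful, sales-aware support agent.\n"
        f"Customer profile (model predictions): churn risk {ctx['churn_risk']:.0%}, "
        f"predicted lifetime value ${ctx['predicted_ltv']:.0f}, "
        f"likes {', '.join(ctx['top_affinities'])} themes.\n"
        f"Facts:\n{fact_block}\n"
        "Answer accurately, and where natural, personalize and suggest relevant "
        "products or retention offers.\n"
        f"Customer: {question}\nAgent:"
    )

print(enriched_prompt(
    "cust_42",
    "Do you have children's sunglasses?",
    ["Children's sunglasses ship free on orders over $25."],
))
```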

Now let’s explore how predictive ML can be used to turn every step of your LLM-powered customer chat into a revenue-generating opportunity:

  • Personalized search: Predictive ML can enable chatbots to provide search results that are not just a textual match, but also optimized to increase the likelihood of a sale. For example, if a customer recently bought a Pokemon-themed children’s hat, then when they ask the chatbot for children’s sunglasses suggestions, instead of just returning the most popular pair of sunglasses, the chatbot could return Pokemon-themed sunglasses (or another children’s anime theme with high predicted affinity) as well; see the re-ranking sketch after this list. This is especially important in the chat context, since the chat box likely only has room for 1-3 results!
  • Cross-selling and upselling: Predictive ML can be used to identify which additional products, services, upgrades, etc. a customer is likely to be interested in, and then weave these suggestions into the conversation in a context-aware and personalized manner. For example, if a customer is in the middle of purchasing a laptop, the chatbot could slip in recommendations for relevant accessories that are commonly purchased with that specific laptop, such as a carrying case, wireless mouse, or warranty, thus increasing the order size.
  • Customer segmentation: Predictive ML can be used to group customers based on their predicted preferences and needs. With this information, the chatbot can tailor its responses and offer personalized marketing strategies during the chat session. For example, based on a customer’s predicted affinity for eco-friendly themes, the chatbot could recommend sustainable alternatives, promotions, or informative content related to those preferences, increasing customer satisfaction and engagement.
  • Customer retention: Predictive ML can be used to identify which customers are likely to churn, the underlying reasons, and which go-to-market actions (discounts, trial extensions, a support call, etc.) might turn them around. This context can be provided as input to the chatbot, enabling it to strategically slip in the right remedial actions at the right time, thus reducing churn and improving customer loyalty.
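
As a sketch of the personalized-search idea referenced above, the snippet below re-ranks textually matched products by blending text-match scores with a hypothetical model-predicted purchase probability; the candidate scores and the blending weight are illustrative assumptions, not outputs of any particular system.

```python
# Candidate products already matched by text search, each with a text-match
# score and a (hypothetical) predicted purchase probability for this customer.
candidates = [
    {"name": "Best-selling aviator kids sunglasses", "text_score": 0.91, "p_buy": 0.04},
    {"name": "Pokemon Pikachu kids sunglasses",      "text_score": 0.85, "p_buy": 0.31},
    {"name": "Plain blue kids sunglasses",           "text_score": 0.88, "p_buy": 0.06},
]

ALPHA = 0.5  # illustrative weight balancing text relevance vs. predicted purchase

def rerank(items: list[dict], k: int = 2) -> list[dict]:
    # Blend text relevance with predicted purchase probability, keep top-k,
    # since the chat box only has room for a result or two.
    scored = sorted(
        items,
        key=lambda it: ALPHA * it["text_score"] + (1 - ALPHA) * it["p_buy"],
        reverse=True,
    )
    return scored[:k]

for item in rerank(candidates):
    print(item["name"])
```

With these example scores, the Pokemon-themed sunglasses rise to the top even though a generic best-seller has the higher text-match score.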

Kumo Bridges the Gap Between Predictive ML and LLM-based Chat 

Kumo.ai enables users across the enterprise, regardless of ML background, to rapidly build and deploy predictive ML in production with best-in-class accuracy in hours instead of months. It automates all the significant steps in a typical ML pipeline, such as target label engineering, feature engineering, model architecture selection, hyperparameter search, and model deployment. This is done through a declarative interface for defining Predictive Queries, optimized for the Graph Neural Network (GNN) based approaches that we discuss here.

By leveraging Kumo.ai’s powerful platform, enterprises can quickly and efficiently implement all the predictive ML strategies discussed in this blog post. The result is potentially a dramatic improvement in the ROI of LLM-based chat systems, transforming every customer interaction into a revenue-generating opportunity while enhancing customer satisfaction and engagement.

To learn more, reach out to us for a demo!