AI for Retail Execution. A Best Practice Guide for CPGs. Part Two: Increasing CPG sales with private LLM integration

Welcome to the second part of our comprehensive five-part best practice guide, where we explore the transformative potential of private large language models to turn consumer packaged goods companies' data into expertise.


In 2024, large language models (LLMs) are the fastest-growing topic discussed by CEOs. Mentions of the topic in Q2 2024 earnings calls of US-listed companies increased roughly 1.5 times compared to Q1, according to an IoT Analytics report. The trend is backed by spending: budgets for LLM implementation grew 2.5-fold in 2024 compared to the previous year. Companies are weighing both open-source and proprietary models, basing decisions on a combination of cost, performance, and customization capabilities. However, no one is willing to compromise on data security or on the reliability of results, and these can only be guaranteed by private large language models.

What is the difference between public and private LLMs?

The explosion of interest in generative AI and large language models is a result of the availability of public LLMs. The accessibility of models like ChatGPT or Gemini led to 75% of knowledge workers using AI at work by May 2024. Public LLMs are trained on publicly available data, easily accessible, and cost-effective. They are constantly improving because of widespread use. However, alongside these advantages, they create potential risks of data leakage or exposure of sensitive information when used for business purposes.

In contrast, private LLMs are trained on a company's proprietary data and kept within the organization's infrastructure, ensuring data privacy and security. There are also other important differences, such as customization capabilities and control over results. The most important differences between the models are shown in the following table.

Aspect                      | Private LLMs                               | Public LLMs
Access and Availability     | Limited to the organization                | Widely available to anyone
Data Privacy and Security   | High - data stays within the organization  | Low - potential for data leakage
Customization               | Highly customizable to specific needs      | Limited customization options
Results Control             | Full control over outputs                  | Limited control, may produce unexpected results
Cost                        | High - development and maintenance costs   | Low - often free or pay-per-use models

Importantly, a private LLM developed for an organization's unique needs and specific knowledge can become a source of competitive advantage, something public LLMs cannot provide.

Private LLMs turn data into expertise

Every organization accumulates experience, knowledge, and expertise in its industry, and these are among a business's most valuable assets. Preserving and disseminating this knowledge among employees, applying best practices, scaling successful innovations, and avoiding past mistakes are some of the most challenging tasks facing businesses. This is where private LLMs can help.

Solutions based on private LLMs can independently monitor and analyze all processes, conduct root cause analysis, and generate deep, meaningful insights that have the greatest impact on the business. Most importantly, such solutions can recommend the most effective strategies and tactics in response to each situation, evaluate their effectiveness, and repeat the cycle. This creates a process of continuous learning and continuous improvement of the applied strategies. A company with the most powerful and sophisticated model gains an advantage over competitors by making a higher share of correct decisions in less time.
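To make the loop concrete, here is a minimal, self-contained Python sketch of the recommend-evaluate-repeat cycle. The playbooks, success rates, and scoring rule are invented for illustration only; in a real deployment the private LLM would generate the insights and recommendations, and measured field results would close the loop.

```python
import random

random.seed(42)

# Hypothetical playbooks and their (hidden) true success rates in the field.
TRUE_RATES = {
    "extend credit terms": 0.55,
    "re-run promotion": 0.40,
    "add adjacent SKUs": 0.70,
}

history = {name: [] for name in TRUE_RATES}  # observed outcomes per playbook

def effectiveness(name: str) -> float:
    """Learned success rate; optimistic 1.0 until a playbook has been tried."""
    outcomes = history[name]
    return sum(outcomes) / len(outcomes) if outcomes else 1.0

for cycle in range(100):
    # Recommend the playbook currently believed to be most effective.
    choice = max(TRUE_RATES, key=effectiveness)
    # Evaluate: simulate whether it worked when executed in the field.
    worked = random.random() < TRUE_RATES[choice]
    # Learn: record the outcome so the next recommendation improves.
    history[choice].append(worked)

# After many cycles the learned scores approach the true rates for the
# playbooks that keep getting recommended.
print({name: round(effectiveness(name), 2) for name in TRUE_RATES})
```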

Private LLM in consumer packaged goods (CPG)

Generative AI was adopted in the CPG industry almost immediately, but the number of practical use cases was very limited at first. Initially, it was mainly used for creating marketing content, improving customer and consumer service, and attempting to develop new products and packaging. Today, both CPG companies and vendors are actively seeking new opportunities to generate additional value with generative AI. According to a McKinsey report, this primarily concerns consumer insights and demand creation, as well as channel and customer management, which can generate one-third of the overall impact of digital and AI products. In addition, 60% of businesses intend to use LLMs for enterprise knowledge management.

Spring Global has implemented an LLM for turning CPG sales data into expertise. The AI Performance Engine is an LLM-based solution capable of independently analyzing data from various sources, benchmarking points of sale and field team employees, identifying growth opportunities, and suggesting the best playbooks. The solution can also correlate team performance data to propose individual skills development plans, build sales forecasts, and optimize promotions.

[Image: AI Performance Engine private LLM]

Private models in CPG retail execution

Private LLMs unlock a wealth of opportunities for leveraging company data and knowledge assets. Here are just two possible examples:

Decision automation for CPG sales leaders

Machine learning (ML) models transform the work of CPG sales leaders by eliminating the need for manual data analysis and opportunity identification. The system independently analyzes data from all sources, providing the five most critical insights for managers to focus on.

Furthermore, the private LLM selects the most relevant playbooks for each insight. For example, it might recommend "Update credit terms to 15 days" or "Ensure promotion activation is back on schedule," leaving the execution of those recommendations to field sales teams.

[Image: AI Command Center displaying the five most critical insights]

This approach frees managers from reading reports, allowing them to concentrate on data-driven decisions. This not only increases the accuracy and efficiency of decision-making, but also enables the scaling of effective playbooks and best practices across the organization.
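As an illustration of this decision-automation flow, here is a minimal, self-contained Python sketch that ranks hypothetical insights by estimated impact and pairs each of the top five with a playbook. The data, scoring, and playbook texts are assumptions made for this example and do not reflect Spring Global's actual model or API; in practice, the private LLM would perform the analysis and recommendation.

```python
from dataclasses import dataclass

@dataclass
class Insight:
    account: str
    issue: str
    impact: float  # estimated monthly revenue at risk, in dollars

# Hypothetical insights produced by upstream analysis of sales, credit,
# and promotion data.
insights = [
    Insight("Store 114", "overdue receivables slowing reorders", 12_400),
    Insight("Store 078", "promotion activation behind schedule", 9_800),
    Insight("Store 231", "out-of-stock on a top-selling SKU", 7_250),
    Insight("Store 042", "shelf share below contracted level", 5_600),
    Insight("Store 190", "declining order frequency", 4_100),
    Insight("Store 055", "pricing not updated after list change", 2_300),
]

# Illustrative playbook library keyed by issue pattern.
playbooks = {
    "overdue receivables": "Update credit terms to 15 days.",
    "promotion activation": "Ensure promotion activation is back on schedule.",
    "out-of-stock": "Trigger an emergency replenishment order.",
    "shelf share": "Schedule a merchandising visit to restore the planogram.",
    "order frequency": "Offer a volume incentive on the next order.",
    "pricing": "Push the updated price list to the store's SFA app.",
}

def match_playbook(insight: Insight) -> str:
    """Pick the first playbook whose issue pattern appears in the insight."""
    for pattern, action in playbooks.items():
        if pattern in insight.issue:
            return action
    return "Escalate to the sales leader for manual review."

# Surface the five most critical insights, each paired with a recommended action.
top_five = sorted(insights, key=lambda i: i.impact, reverse=True)[:5]
for insight in top_five:
    print(f"{insight.account}: {insight.issue} (${insight.impact:,.0f}/month at risk)")
    print(f"  Recommended playbook: {match_playbook(insight)}")
```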

Transformation of field sales rep visits from transactional to consultative

A private LLM can provide field sales representatives with store-specific information during each visit through their mobile salesforce automation (SFA) application. The model can offer important insights, such as "The revenue in this store is below average compared to stores with the same profile in the same region," or "There is an opportunity to sell SKUs X, Y, and Z, as they typically sell well alongside the existing assortment."

Armed with this data, sales reps can discuss the revenue gap with the store manager and suggest adding new products, demonstrating their popularity and potential impact on revenue. In this way, custom models fundamentally change the role of the sales reps, turning them into both a consultant for the client and a revenue generator for the company.
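The peer benchmarking behind an insight like "revenue below average for stores with the same profile" can be sketched in a few lines. The store data and profile groupings below are invented for illustration; a private LLM would generate and phrase the actual talking points from the company's own data.

```python
from statistics import mean

# Hypothetical monthly revenue per store, tagged with profile and region.
stores = [
    {"id": "S-101", "profile": "convenience", "region": "North", "revenue": 18_500},
    {"id": "S-102", "profile": "convenience", "region": "North", "revenue": 22_300},
    {"id": "S-103", "profile": "convenience", "region": "North", "revenue": 14_100},
    {"id": "S-104", "profile": "convenience", "region": "North", "revenue": 21_700},
]

def visit_insight(store_id: str) -> str:
    """Compare a store's revenue to the average of its peer group."""
    target = next(s for s in stores if s["id"] == store_id)
    peers = [
        s for s in stores
        if s["profile"] == target["profile"]
        and s["region"] == target["region"]
        and s["id"] != store_id
    ]
    peer_avg = mean(s["revenue"] for s in peers)
    gap = peer_avg - target["revenue"]
    if gap > 0:
        return (
            f"The revenue in this store is {gap / peer_avg:.0%} below the average "
            f"of {target['profile']} stores in the {target['region']} region."
        )
    return "This store is at or above the average for its peer group."

print(visit_insight("S-103"))
```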

Deployment of private LLMs in CPG: Options and challenges

When considering the implementation of private LLMs in the CPG industry, companies have several deployment options to choose from. These range from creating an in-house system to partnering with specialized AI providers. Each option has its own set of advantages and challenges that require careful consideration.

Regardless of the chosen approach, CPG companies must address several common challenges:

  • Data Management: Ensuring the quality and relevance of data used to train these models is crucial for their effectiveness. (For more insights, explore Best Practice Guide for CPG: a Data-Centric AI Approach.)
  • Resource Allocation: Implementing and maintaining LLMs can be computationally intensive, potentially requiring significant investment in new IT infrastructure.
  • Talent Acquisition: There is often a shortage of professionals with the specialized skills needed to work with advanced AI technologies.
  • System Integration: Incorporating LLMs into existing business processes and IT systems can be complex and time-consuming.
  • Ongoing Optimization: The rapidly evolving nature of the CPG industry means these AI models will need regular updates to remain relevant and accurate.

Successfully implementing private LLMs in a CPG context requires a strategic approach that balances technological capabilities with business needs. Crucially, it must also consider ethical responsibility. The role of ethics in AI implementation will significantly influence the use of artificial intelligence in general, and LLM-based tools in particular, over the coming years.

Balancing tactical effectiveness and ethical responsibility in LLM deployment

Spring Global's approach to AI implementation for CPG strikes a balance between utilitarian and ethical considerations. The AI Performance Engine represents a shift from theoretical to tactical AI, addressing a crucial industry trend: the gap between AI investments and realized ROI. By executing AI-driven insights in the field and creating feedback loops to improve models, Spring Global ensures tangible value from AI, boosting solution adoption and bridging the gap between theoretical optimization and real-world revenue impact.

Spring Global also recognizes the importance of ethical responsibility in AI implementation. The company emphasizes ethical AI through responsible AI committees as part of project governance, ensuring that ethical guidelines are integrated into products and outputs from the outset. This mitigates potential risks and embarrassments associated with AI deployment. While large language models are powerful tools, they lack social skills and require careful consideration in their application. Spring Global addresses this challenge by deploying private models configured to comply with internal policies and organizational requirements, thus setting a new standard for responsible and effective use of AI in the CPG sector.