Reimagining Recommendations: LLMs as a New Frontier (LLM Part 12)

Chris Shayan
9 min read · Nov 11, 2024


The postings on this site are my own and do not necessarily represent the postings, strategies or opinions of my employer.

Recommendation systems have become an integral part of our digital lives, influencing our choices from movies and music to products and services. These systems aim to provide personalized recommendations by analyzing user behavior and preferences.

Traditional recommendation systems, such as collaborative filtering and content-based filtering, often struggle to provide highly personalized and relevant recommendations. These methods rely on historical data and explicit user preferences, which can limit their ability to capture complex user behaviors and preferences.

LLMs, on the other hand, offer a powerful solution to these limitations. By leveraging their ability to understand and generate human language, LLMs can provide more accurate and personalized recommendations. They can consider a wide range of factors, including user demographics, past behavior, and contextual information, to deliver tailored recommendations. Additionally, LLMs can effectively address the cold-start problem, where users have limited interaction history, by leveraging knowledge from other users and external information sources.

Types of AI-powered Recommendation Systems

AI-driven recommendation systems have become ubiquitous in our digital lives, shaping our experiences on various platforms. These systems leverage sophisticated algorithms to analyze user behavior and preferences, delivering personalized recommendations that drive engagement and satisfaction.

Collaborative Filtering Systems

Collaborative filtering systems recommend items based on the preferences of similar users. By analyzing past user interactions, these systems identify patterns and correlations to predict future preferences. For example, if two users have similar taste in movies, the system might recommend a movie that one user has liked to the other.
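To make this concrete, here is a minimal user-based collaborative filtering sketch: unseen items are scored by the similarity-weighted ratings of other users. The users, items, and ratings are illustrative toy data, not from any real dataset.

```python
from math import sqrt

# Toy ratings matrix: user -> {item: rating}. All names are illustrative.
ratings = {
    "alice": {"Inception": 5, "Heat": 4, "Up": 1},
    "bob":   {"Inception": 5, "Heat": 5, "Amelie": 2},
    "carol": {"Up": 5, "Amelie": 4, "Heat": 1},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    nu = sqrt(sum(u[i] ** 2 for i in shared))
    nv = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (nu * nv)

def recommend(user, k=1):
    """Score items the user hasn't seen by similarity-weighted ratings of others."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for item, r in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # -> ['Amelie']
```

Production systems replace this brute-force neighbor scan with matrix factorization or approximate nearest-neighbor search, but the core idea is the same.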

Content-Based Filtering Systems

Content-based systems recommend items based on their attributes or features. These systems analyze the characteristics of items a user has interacted with in the past and suggest similar items. For instance, if a user frequently listens to rock music, a content-based system might recommend other rock artists or albums.
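A content-based recommender can be sketched in a few lines with attribute overlap: build a profile from the attributes of liked items, then rank the rest of the catalog by Jaccard similarity. The catalog and attribute tags are illustrative.

```python
# Toy catalog: item -> set of attributes. Illustrative data only.
catalog = {
    "Nevermind":    {"rock", "grunge", "90s"},
    "OK Computer":  {"rock", "alternative", "90s"},
    "Kind of Blue": {"jazz", "modal", "50s"},
}

def jaccard(a, b):
    """Set overlap: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def similar_items(liked, k=1):
    """Rank unseen catalog items by attribute overlap with the user's liked items."""
    profile = set().union(*(catalog[i] for i in liked))
    candidates = {i: jaccard(profile, attrs)
                  for i, attrs in catalog.items() if i not in liked}
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

print(similar_items({"Nevermind"}))  # -> ['OK Computer']
```

Real systems swap the hand-tagged attributes for learned features such as TF-IDF vectors or embeddings, but the ranking logic carries over.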

Hybrid Recommendation Systems

Hybrid recommendation systems combine the strengths of collaborative filtering and content-based systems. By leveraging both user-based and item-based approaches, these systems can provide more accurate and personalized recommendations. For example, Netflix employs a hybrid approach to recommend movies and TV shows based on a user’s viewing history and the content’s attributes.
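One simple hybridization strategy is a weighted blend of the two score sources. The sketch below assumes both score dictionaries are already normalized to the same range; the items and weights are illustrative.

```python
def blend(cf_scores, content_scores, alpha=0.7):
    """Weighted hybrid: alpha on collaborative scores, (1 - alpha) on content scores."""
    items = set(cf_scores) | set(content_scores)
    return {i: alpha * cf_scores.get(i, 0.0)
               + (1 - alpha) * content_scores.get(i, 0.0)
            for i in items}

cf = {"Stranger Things": 0.9, "The Crown": 0.4}   # from collaborative filtering
cb = {"Stranger Things": 0.5, "Dark": 0.8}        # from content similarity
scores = blend(cf, cb)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # -> Stranger Things
```

Items supported by both signals naturally rise to the top, which is one reason hybrids tend to outperform either component alone.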

Knowledge-Based Systems

Knowledge-based systems utilize domain knowledge and expert rules to generate recommendations. These systems can consider factors such as user demographics, preferences, and contextual information to provide tailored recommendations. For instance, a knowledge-based system might recommend a specific financial product to a user based on their income level, risk tolerance, and financial goals.
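The financial-product example above amounts to a small set of expert rules over a user profile. A minimal sketch, with entirely illustrative thresholds and product names:

```python
def recommend_product(income, risk_tolerance, goal):
    """Expert rules mapping a user profile to a product. Rules are illustrative."""
    if goal == "retirement" and risk_tolerance == "low":
        return "index-linked pension fund"
    if risk_tolerance == "high" and income > 100_000:
        return "equity growth portfolio"
    if goal == "savings":
        return "high-yield savings account"
    return "balanced mutual fund"  # safe default when no rule fires

print(recommend_product(120_000, "high", "growth"))  # -> equity growth portfolio
```

Unlike collaborative filtering, this needs no interaction history at all, which is why knowledge-based systems are common in high-stakes, low-frequency domains like financial products.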

LLMs: The Next Frontier for Recommendation Systems

While traditional recommendation systems have proven effective, they often face limitations in terms of scalability, cold-start problems, and the ability to capture complex user preferences. Large Language Models (LLMs) offer a promising solution to these challenges.

LLMs can enhance recommendation systems in several ways:

  1. Contextual Understanding: LLMs can analyze vast amounts of text data to understand the context of user queries and preferences. This enables them to provide more relevant and personalized recommendations.
  2. Improved Personalization: By leveraging LLMs, recommendation systems can go beyond simple user-item interactions. They can analyze user demographics, browsing history, and social media activity to create detailed user profiles. This enables the system to provide highly personalized recommendations that cater to individual needs and preferences.
  3. Enhanced Explanations: LLMs can generate human-readable explanations for their recommendations, improving user trust and satisfaction. For example, an LLM-powered recommendation system might explain why a particular product or service is recommended based on the user’s past behavior and preferences.
  4. Cold-Start Problem Mitigation: LLMs can help address the cold-start problem by leveraging external knowledge sources and generating recommendations based on general trends and popular items.
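In practice, points 1-3 above often come down to prompt construction: user context goes into the prompt, and the model is asked for both recommendations and a rationale. A minimal sketch of such a prompt builder (the profile fields and items are hypothetical):

```python
def build_rec_prompt(profile, history, candidates, n=3):
    """Assemble an LLM prompt asking for recommendations plus an explanation each."""
    return (
        f"You are a recommender system. User profile: {profile}.\n"
        f"Recently interacted with: {', '.join(history)}.\n"
        f"Candidate items: {', '.join(candidates)}.\n"
        f"Recommend {n} items and explain each choice in one sentence."
    )

prompt = build_rec_prompt(
    profile={"age_band": "25-34", "interests": ["sci-fi"]},
    history=["Dune", "Arrival"],
    candidates=["Blade Runner 2049", "Notting Hill", "Interstellar"],
)
print(prompt)
```

The explanation request is what enables point 3: because the model sees the user context it reasons over, it can articulate why each item was chosen.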
Source: LeewayHertz, “How to use LLMs for creating a content-based recommendation system for entertainment platforms?”

By integrating LLMs into recommendation systems, businesses can deliver more accurate, personalized, and engaging experiences for their customers, driving increased satisfaction and loyalty. Here’s how:

1. Enhanced Content Understanding:

  • Semantic Understanding: LLMs can delve deeper into the semantic meaning of content, going beyond simple keyword matching. This allows for more accurate recommendations based on themes, genres, and underlying concepts.
  • Contextual Awareness: LLMs can consider the context of a user’s viewing history and preferences to provide highly relevant recommendations. For example, if a user has watched a sci-fi movie, the LLM might recommend other sci-fi movies or shows with similar themes or genres.
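Semantic matching is what distinguishes this from keyword search: two titles that share no words can still sit close together in embedding space. The sketch below uses hand-assigned 3-d toy vectors as stand-ins for real model embeddings, purely for illustration.

```python
from math import sqrt

# Toy 3-d "embeddings" standing in for real model outputs (illustrative values);
# the dimensions might loosely encode (space, drama, humor).
vecs = {
    "Interstellar": (0.9, 0.6, 0.1),
    "The Martian":  (0.8, 0.4, 0.5),
    "The Office":   (0.0, 0.3, 0.9),
}

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

query = vecs["Interstellar"]
# "The Martian" shares no title keywords with "Interstellar", yet its vector
# is close; this is exactly what semantic matching captures.
best = max((i for i in vecs if i != "Interstellar"), key=lambda i: cos(query, vecs[i]))
print(best)  # -> The Martian
```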

2. Personalized Recommendations:

  • User Profiling: LLMs can analyze user behavior, preferences, and demographics to create detailed user profiles. This enables more precise targeting of content recommendations.
  • Dynamic Recommendations: LLMs can adapt recommendations in real-time based on user interactions, such as pausing, fast-forwarding, or rewinding. This allows for a more dynamic and personalized viewing experience.

3. Improved Cold-Start Recommendations:

  • Leveraging External Knowledge: LLMs can utilize external knowledge sources, such as movie reviews, critic ratings, and social media discussions, to provide recommendations for new users with limited viewing history.
  • Content-Based Recommendations: By analyzing the content itself, LLMs can suggest similar items to new users, even without a personalized history.

4. Enhanced User Experience:

  • Personalized Recommendations: LLMs can provide tailored recommendations for each user, increasing engagement and satisfaction.
  • Explanatory Recommendations: LLMs can generate explanations for their recommendations, helping users understand the rationale behind the suggestions and building trust.
  • Interactive Recommendations: LLMs can enable interactive recommendation experiences, where users can provide feedback and refine their preferences to get even more accurate recommendations.
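The interactive loop above can be sketched as a preference profile that each piece of feedback nudges up or down. The attribute tags and learning rate are illustrative.

```python
def refine(prefs, item_attrs, feedback, lr=0.5):
    """Nudge attribute weights up on a like, down on a dislike."""
    delta = lr if feedback == "like" else -lr
    for attr in item_attrs:
        prefs[attr] = prefs.get(attr, 0.0) + delta
    return prefs

prefs = {}
prefs = refine(prefs, {"sci-fi", "space"}, "like")
prefs = refine(prefs, {"romance"}, "dislike")
print(prefs)
```

In an LLM-powered system the same feedback would typically be folded into the conversation or the prompt context, but the principle of iterative refinement is identical.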

LLMs for Recommender Systems

LLMs’ ability to understand natural language empowers them to generate insightful recommendations, often without relying solely on explicit user behavior data. Imagine an LLM seamlessly recommending a Thanksgiving turkey even without purchase history — that’s the power we’re unlocking.

Research is exploding as experts explore how LLMs can be applied to recommender systems. By reframing recommendation tasks as language comprehension or generation challenges, they’re pushing the boundaries of what’s possible. Let’s delve into the key strengths of LLMs in this domain:

  • Contextual Understanding: LLMs excel at integrating user behavior data into prompts. Combining this with their vast knowledge base, they can craft highly personalized recommendations tailored to individual needs.
  • Adaptability: LLMs demonstrate impressive robustness when transitioning to new domains with limited data (zero-shot or few-shot learning). This flexibility empowers businesses, even startups, to explore novel applications for their recommendation tools.
  • Unified Approach: Traditional recommendation engines often require complex, multi-layered processes. LLMs offer a streamlined solution. They can handle tasks like bias mitigation, traditionally spread across different stages, within a single model. This centralized approach also reduces the environmental impact by eliminating the need for separate training for each recommendation task.
  • Holistic Learning: Many recommendation tasks share a common user-item pool and operate in similar contexts. LLMs leverage this overlap through unified learning, leading to improved predictions for unforeseen tasks and optimized use of data.
  • Transparency & Interactivity: LLMs can explain their reasoning behind a recommendation, improving system clarity. This allows users to understand the logic behind suggested choices and make more informed decisions.
  • Iterative Refinement: By incorporating user feedback, LLM-powered recommenders can continuously learn and refine their suggestions, leading to more accurate and enjoyable user experiences.
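Reframing recommendation as a language task, as described above, often means serializing the user's interaction sequence into a prompt and asking the model to complete it. A minimal sketch (the shows and actions are illustrative):

```python
def sequence_to_prompt(interactions):
    """Reframe next-item recommendation as text completion over an interaction sequence."""
    history = " -> ".join(f"{item} ({action})" for item, action in interactions)
    return ("Given a user's interaction sequence, predict the next item they will enjoy.\n"
            f"Sequence: {history} -> ?")

p = sequence_to_prompt([
    ("Breaking Bad", "finished"),
    ("Ozark", "finished"),
    ("Narcos", "started"),
])
print(p)
```

Because the sequence is plain text, the same formulation transfers across domains with no architectural change, which is what enables the zero-shot and few-shot adaptability noted above.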

Sumit’s Diary rightly highlights the benefits of LLM-driven recommendations, but it’s worth exploring them further:

  • Conquering Data Scarcity: LLMs excel in situations with limited data or when dealing with “cold-start” scenarios (new users with minimal interaction history). Their vast parameter set empowers them to overcome data sparsity.
  • Dynamic Adaptability: LLMs can readily adapt to new data streams without requiring major architectural changes or retraining. This allows them to stay up-to-date with evolving customer preferences and market trends.
  • User-Centric Design: LLMs pave the way for user expression in natural language, often through conversational interfaces. This more active role in the recommendation process translates to more personalized and relevant suggestions.
  • Versatility Beyond Tradition: Unlike traditional algorithms tailored for specific tasks, LLMs can handle user interactions as sequences, integrating them with their extensive knowledge base. This versatility opens up new possibilities for personalized experiences.
  • Data Efficiency: LLMs, armed with inherent world knowledge, require less data compared to techniques like collaborative filtering, which rely heavily on large training datasets.
  • Streamlined Features: LLMs eliminate the need for complex feature engineering, a hallmark of traditional methods. The prompt-based approach simplifies and refines the recommendation process.
  • Recommendation Rationale: LLMs can explain their recommendations. They can articulate the reasoning behind their prompts and suggestions in clear natural language, boosting system transparency and trust.

While LLMs offer immense potential, they also come with challenges, such as inference latency and cost, the risk of hallucinated item suggestions, and the difficulty of keeping recommendations grounded in a live, changing catalog.

The future of recommendation systems is intertwined with the advancements in LLMs. As research progresses and these challenges are addressed, we can expect even more powerful and personalized recommendation experiences across various industries.

Fundamentally, LLMs function as the brain, while recommendation models serve as tools that supply domain-specific knowledge.
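This "brain plus tools" pattern can be sketched as a tiny agent loop: the LLM plans which recommendation tool to call, and the tool supplies the domain-specific ranking. To keep the sketch runnable offline, the LLM planner is stubbed with a rule; the tool names and items are hypothetical.

```python
# Minimal sketch of the "LLM as brain, recommender models as tools" pattern.
# The LLM call is stubbed out with a keyword rule so the control flow runs offline.
TOOLS = {
    "rank_by_popularity": lambda user: ["Item A", "Item B"],
    "rank_by_history":    lambda user: ["Item C"],
}

def plan(user_msg):
    """Stand-in for the LLM planner: pick a tool from the user's request."""
    return "rank_by_history" if "like before" in user_msg else "rank_by_popularity"

def agent(user_msg):
    tool = plan(user_msg)           # the "brain" chooses a domain tool
    items = TOOLS[tool]("user-42")  # the tool supplies domain-specific ranking
    return tool, items

tool, items = agent("show me something like before")
print(tool, items)
```

In a real agent the planner would be an LLM call with tool descriptions in the prompt, and the tools would be trained recommender models or retrieval indexes.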

Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations, by Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, and Xing Xie.
import pandas as pd
import tiktoken
import lancedb
from openai import OpenAI
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain_community.callbacks import get_openai_callback
from langchain_community.vectorstores import LanceDB

openai_api_key = ""
client = OpenAI(api_key=openai_api_key)

anime = pd.read_csv('data/anime_with_synopsis.csv')

# Note: 'sypnopsis' is the (misspelled) column name in the source dataset.
anime['combined_info'] = anime.apply(
    lambda row: f"Title: {row['Name']}. Overview: {row['sypnopsis']} Genres: {row['Genres']}",
    axis=1)

embedding_model = "text-embedding-ada-002"
embedding_encoding = "cl100k_base"  # the encoding for text-embedding-ada-002
max_tokens = 8000  # the maximum for text-embedding-ada-002 is 8191

encoding = tiktoken.get_encoding(embedding_encoding)

# Omit descriptions that are too long to embed
anime["n_tokens"] = anime.combined_info.apply(lambda x: len(encoding.encode(x)))
anime = anime[anime.n_tokens <= max_tokens]

def get_embedding(text, model=embedding_model):
    text = text.replace("\n", " ")
    return client.embeddings.create(input=[text], model=model).data[0].embedding

anime["embedding"] = anime.combined_info.apply(
    lambda x: get_embedding(x, model=embedding_model))

anime.rename(columns={'embedding': 'vector', 'combined_info': 'text'}, inplace=True)
anime.to_pickle('data/anime.pkl')

uri = "dataset/sample-anime-lancedb"
db = lancedb.connect(uri)
table = db.create_table("anime", anime)

embeddings = OpenAIEmbeddings(
    model="text-embedding-ada-002",
    show_progress_bar=True,
    openai_api_key=openai_api_key)
docsearch = LanceDB(connection=table, embedding=embeddings)

# Simple similarity search:
# query = "I'm looking for an animated action movie. What could you suggest to me?"
# docs = docsearch.similarity_search(query, k=1)

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo-1106",
    temperature=0,
    api_key=openai_api_key)

# To restrict retrieval to anime tagged as "Action", pre-filter the frame:
# df_filtered = anime[anime['Genres'].apply(lambda x: 'Action' in x)]

# Custom prompt for the retrieval chain
template = """You are a movie recommender system that helps users find anime that match their preferences.
Use the following pieces of context to answer the question at the end.
For each question, suggest three anime, with a short description of the plot and the reason why the user might like it.
If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Your response:"""

PROMPT = PromptTemplate(template=template, input_variables=["context", "question"])

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    return_source_documents=True,
    chain_type_kwargs={"prompt": PROMPT})

query = "I'm looking for an action anime. What could you suggest to me?"

with get_openai_callback() as cb:
    result = qa_chain({"query": query})

print(result['result'])

Originally published at https://www.ChrisShayan.com


Written by Chris Shayan

Head of AI at Backbase.
