This blog post, part 6 of the LLM Series, dives into containerizing Ollama and LangChain web applications for deployment. We’ll explore the benefits of containerization and provide a step-by-step guide using a real-world example: WillowPath.ai, an LLM-powered career coach.

What is WillowPath.ai?

Feeling lost in the labyrinth of career choices? Unsure of the skills and knowledge needed to land your dream job? WillowPath.ai is your friendly LLM (Large Language Model) career coach, here to guide you through the ever-evolving job market. Whether you’re seeking a complete career shift or looking to refine your current skillset, WillowPath.ai offers a personalized roadmap to success.

This innovative platform leverages the power of artificial intelligence to analyze your goals and aspirations, recommending tailored learning paths, books, and resources to bridge the gap between where you are and where you want to be. WillowPath.ai doesn’t stop there.

It also empowers you with valuable insights into potential job opportunities, including salary ranges, helping you make informed career decisions with confidence. So, take the first step towards your dream career and explore the possibilities with WillowPath.ai!

Dockerize your LangChain App

Step 0. LangSmith

Make sure you set up LangSmith as well:

import os

os.environ['LANGCHAIN_TRACING_V2'] = "true"
os.environ['LANGCHAIN_ENDPOINT'] = "https://api.smith.langchain.com"
os.environ['LANGCHAIN_API_KEY'] = ""  # your LangSmith API key
os.environ['LANGCHAIN_PROJECT'] = ""  # your LangSmith project name

If you also integrate streamlit-feedback with it, you get useful traceability: user ratings land right next to the traced runs in LangSmith.
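A minimal sketch of that wiring, assuming Ollama is reachable locally and the environment variables above are set; the model name and the "user-score" feedback key are placeholders of my own:

import streamlit as st
from langchain_community.llms import Ollama
from langchain_core.tracers.context import collect_runs
from langsmith import Client
from streamlit_feedback import streamlit_feedback

client = Client()             # reads LANGCHAIN_API_KEY from the environment
llm = Ollama(model="llama2")  # placeholder model

if prompt := st.chat_input("Ask your career question"):
    with collect_runs() as cb:  # captures the run traced to LangSmith
        st.write(llm.invoke(prompt))
        st.session_state["run_id"] = cb.traced_runs[0].id

if st.session_state.get("run_id"):
    fb = streamlit_feedback(feedback_type="thumbs", key="fb")
    if fb:  # e.g. {"type": "thumbs", "score": "👍"}
        client.create_feedback(
            run_id=st.session_state["run_id"],
            key="user-score",
            score=1 if fb["score"] == "👍" else 0,
        )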

Step 1. Dockerfile

FROM python:3.12-slim
LABEL authors="chrisshayan"

WORKDIR /app/

RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    software-properties-common \
    git \
    gh \
    && rm -rf /var/lib/apt/lists/*

# Ensure your GITHUB_TOKEN is set if you want to use gh
# (pass it in with --build-arg; see Step 2)
ARG GITHUB_TOKEN
RUN gh repo clone chrisshayan/hr-coach .

RUN pip install -r requirements.txt

EXPOSE 80

HEALTHCHECK CMD curl --fail http://localhost:80/_stcore/health

ENTRYPOINT ["streamlit", "run", "main.py", "--server.port=80", "--server.address=0.0.0.0"]

Step 2. Build Your Image

docker build -t hr-coach .
docker images   # verify your image is there
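One caveat: docker build does not see your shell’s environment, so if the Dockerfile clones via gh you have to pass the token into the build explicitly. A minimal sketch using a build argument (build args are recorded in image history, so prefer BuildKit’s --secret for anything beyond experiments):

docker build -t hr-coach --build-arg GITHUB_TOKEN=$GITHUB_TOKEN .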

Step 3. Pull your Ollama Docker Image

You can find the details here.
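In short, the CPU-only image comes down to a single pull:

docker pull ollama/ollama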

Step 4. Build your docker-compose.yml

Let’s now build a docker-compose configuration file that defines a shared network for the hr-coach application and the Ollama container, so that they can talk to each other.

version: '3'
services:
  ollama-container:
    image: ollama/ollama
    volumes:
      - ./data/ollama:/root/.ollama
    ports:
      - 11434:11434
  streamlit-app:
    image: hr-coach:latest
    ports:
      - 80:80
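Note that inside this Compose network, the containers reach each other by service name rather than localhost. A minimal sketch of how the app would point at Ollama, assuming it uses LangChain’s community wrapper (the model name is a placeholder):

from langchain_community.llms import Ollama

llm = Ollama(
    model="llama2",  # placeholder: whichever model you pull in Step 4
    base_url="http://ollama-container:11434",  # service name from docker-compose.yml
)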

If you are using an Nvidia GPU, first verify that the driver and GPU are visible:

$ nvidia-smi
[Screenshot: output of nvidia-smi]

Then add a GPU reservation to your compose file.
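Per Docker’s Compose GPU support documentation, a typical reservation for the ollama-container service defined above looks like this:

services:
  ollama-container:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]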

You also need to pull the model your app uses:

docker exec -it hr-coach_ollama-container_1 ollama run <modelname>
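For example, to pull and run Llama 2 (substitute whichever model your app expects; newer Compose versions name containers with hyphens, e.g. hr-coach-ollama-container-1, so check docker ps for the exact name):

docker exec -it hr-coach_ollama-container_1 ollama run llama2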

Step 5. Bring It All Up

Simply run:

docker-compose up
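Add the -d flag to run everything in the background. Once both containers are up, the app should be reachable at http://localhost (port 80), and the Dockerfile’s health check will surface in docker ps:

docker-compose up -d
docker ps   # both containers listed, streamlit-app reported as healthy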


Chris Shayan

Head of AI at Backbase. The postings on this site are my own and do not necessarily represent the postings, strategies, or opinions of my employer.