14 April 2025

LangChain Overview

by Carson Kempf

LangChain



Introduction

Installation & Setup

  1. Install package
    pip install -qU "langchain[openai]"
    
  2. Get Environment Variables and Chat Model
import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
  os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain.chat_models import init_chat_model

model = init_chat_model("gpt-4o-mini", model_provider="openai")
  3. First API Call
    model.invoke("Hello, world!")
    

Architecture

Some Open Source Libraries

langchain-core

Tutorials

Simple LLM with Chat Model and Prompt Templates

  1. Installation
    pip install langchain
    

Note: Use LangSmith!

export LANGSMITH_TRACING="true"
export LANGSMITH_API_KEY="..."
export LANGSMITH_PROJECT="default" # or any other project name
  2. Simple Call to a Chat Model
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage("Translate the following from English into Italian"),
    HumanMessage("hi!"),
]

model.invoke(messages)

Message Objects

A chat model accepts a plain string, OpenAI-format dicts, or LangChain message objects; the following three calls are equivalent:

model.invoke("Hello")

model.invoke([{"role": "user", "content": "Hello"}])

model.invoke([HumanMessage("Hello")])

Streaming
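
Chat models also support token-by-token output via `.stream()`, which yields message chunks as they arrive instead of waiting for the full response. A minimal sketch of the consumption pattern — using a stand-in generator here so the snippet runs without an API key; with a real model you would iterate over `model.stream(...)` the same way:

```python
# Sketch of LangChain's streaming pattern. With a real chat model:
#   for chunk in model.stream("Hello, world!"):
#       print(chunk.content, end="")
# Below, a stand-in generator plays the role of model.stream() so this runs offline.

def fake_stream(text):
    """Yield the response one piece at a time, like model.stream() yields chunks."""
    for word in text.split():
        yield word + " "

pieces = []
for chunk in fake_stream("Ciao, mondo!"):
    pieces.append(chunk)  # in real code you might print(chunk, end="") instead

print("".join(pieces).strip())
```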

Prompt Templates

from langchain_core.prompts import ChatPromptTemplate

system_template = "Translate the following from English into {language}"

prompt_template = ChatPromptTemplate.from_messages(
    [("system", system_template), ("user", "{text}")]
)

language

* The language to translate text into

text

* The text to be translated

* Note:
  * ChatPromptTemplate supports multiple message roles
  * In our example we formatted the language variable as a system message
  * We formatted the user text into a user message
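
Conceptually, invoking the template just substitutes the supplied variables into each `(role, template)` pair to produce the final message list. A rough stdlib-only sketch of that idea (illustrative only — not LangChain's actual implementation):

```python
# Illustrative sketch of what ChatPromptTemplate.from_messages(...).invoke(...)
# does conceptually: fill each (role, template) pair with the supplied variables.

def format_messages(message_templates, variables):
    """Substitute variables into each message template (hypothetical helper)."""
    return [
        {"role": role, "content": template.format(**variables)}
        for role, template in message_templates
    ]

templates = [
    ("system", "Translate the following from English into {language}"),
    ("user", "{text}"),
]

messages = format_messages(templates, {"language": "Italian", "text": "hi!"})
print(messages)
```

The real `ChatPromptTemplate` returns message objects rather than dicts, but the substitution step works the same way.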

* How to Invoke a Template with Correct Parameters

prompt = prompt_template.invoke({"language": "Italian", "text": "hi!"})

prompt


* What **prompt** Returns:

ChatPromptValue(messages=[SystemMessage(content='Translate the following from English into Italian', additional_kwargs={}, response_metadata={}), HumanMessage(content='hi!', additional_kwargs={}, response_metadata={})])

* What **prompt.to_messages()** Returns:

[SystemMessage(content='Translate the following from English into Italian', additional_kwargs={}, response_metadata={}), HumanMessage(content='hi!', additional_kwargs={}, response_metadata={})]

* How to Actually Get a Response From our Invocation:

response = model.invoke(prompt)
print(response.content)

Topics To Explore Next