Build, learn, and ship production-ready LLM applications — from your first prompt to full-scale RAG pipelines and autonomous AI agents.
⭐ If this repo helps you, please give it a star — it helps others discover it too! ⭐
Most LangChain resources are scattered across tutorials, notebooks, and outdated blog posts. This repository brings everything into one place — structured, runnable, and ready for production.
| 🎯 | What You Get |
|---|---|
| 📚 | Structured learning path — progress from zero to LLM expert in a logical sequence |
| 💻 | 100% runnable code — every concept has a working example you can run immediately |
| 🔍 | RAG pipelines — full retrieval-augmented generation with Chroma, FAISS, and more |
| 🤖 | AI Agents — ReAct-pattern agents with custom tools and LangGraph workflows |
| 🏠 | Local LLM support — run everything offline with Ollama (zero API costs) |
| 🌐 | Multi-provider — OpenAI, Google Gemini, Hugging Face, and Ollama side-by-side |
| 📝 | Rich notes & docs — detailed markdown notes for every topic covered |
| 🔧 | Production patterns — LCEL, tool calling, structured output, and memory |
- Why This Repository?
- Learning Roadmap
- Repository Structure
- Notes Index
- Getting Started
- Local LLM Support (Ollama)
- Tech Stack
- Security
- Future Enhancements
- Topics & Keywords
- Contributing
- Connect
The content is organized to be followed in order:
| # | Topic | Description |
|---|---|---|
| 1 | Chat Models | LLM initialization with Ollama, Gemini, and OpenAI (chatmodel/) |
| 2 | Messages | SystemMessage, HumanMessage, AIMessage, and chat history (Messages/) |
| 3 | Prompt Engineering | Prompt templates, dynamic generation, and a Streamlit prompt UI (Prompts/) |
| 4 | Structured Output | Enforcing JSON schema, Pydantic, and TypedDict outputs (Structured Output/) |
| 5 | Output Parsers | JSON, Pydantic, string, and structured output parsers (Output_Parsers/) |
| 6 | Chains | Sequential, parallel, and conditional chains (Chain/) |
| 7 | Runnables (LCEL) | LangChain Expression Language — sequences, parallelism, branching, lambdas, passthrough (Runnables/) |
| 8 | Embeddings | Document similarity using embedding models (EmbeddingModel/) |
| 9 | RAG Components | Document loaders, text splitters, vector stores (Chroma), and retrievers (Rag Components/) |
| 10 | Tools & Agents | Custom tool creation, tool calling, built-in tools, and agent workflows (Tools/) |
| 11 | Mini Projects | Applied projects — AI agents, currency conversion, agent templates (Mini_projects/) |
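Topics 6 and 7 revolve around composing steps with the `|` operator. As a rough, framework-free sketch of the idea (a hypothetical `Step` class, not LangChain's actual `Runnable` implementation):

```python
# Conceptual sketch of LCEL-style piping: each step wraps a function,
# and `a | b` builds a new step that feeds a's output into b.
# This mimics the idea behind LangChain Runnables; it is NOT the real API.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):  # enables `step1 | step2`
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Toy "prompt | model | parser" pipeline
prompt = Step(lambda topic: f"Tell me about {topic}")
model = Step(lambda text: text.upper())       # stands in for an LLM call
parser = Step(lambda text: {"answer": text})  # stands in for an output parser

chain = prompt | model | parser
print(chain.invoke("RAG"))  # {'answer': 'TELL ME ABOUT RAG'}
```

The real LCEL examples in Runnables/ use the same shape, with actual prompt templates, chat models, and output parsers in place of these toy functions.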
langchain-playground--for-llms-/
├── Chain/ # Sequential, parallel, and conditional chain examples
├── chatmodel/ # LLM initialization (Ollama, Gemini)
├── chroma_db/ # Persisted Chroma vector database
├── EmbeddingModel/ # Document similarity with embeddings
├── Messages/ # Chat message types and prompt templates
├── Mini_projects/ # End-to-end LLM applications
├── Output_Parsers/ # JSON, Pydantic, string, and structured parsers
├── Prompts/ # Prompt templates and Streamlit UI
├── Rag Components/ # RAG pipeline modules
│ ├── Document Loaders/ # PDF, CSV, text, web, and directory loaders
│ ├── Retrievers/ # MultiQuery, contextual compression, MMR
│ ├── Text Splitters/ # Length, markdown, code, and semantic splitting
│ └── Vectors Stores/ # Chroma vector database integration
├── Runnables/ # LCEL runnables (sequence, parallel, branch, lambda)
├── Sample Data/ # Reference PDFs for RAG examples
├── Structured Output/ # JSON schema, Pydantic, and TypedDict outputs
├── Tools/ # Custom and built-in tool calling, agent toolkits
├── notes.md # Core LangChain concept notes (Chat Models, Prompts, Messages)
├── requirements.txt # Python dependencies
├── test.py # Quick-start test script
├── sample.pdf # Sample document for loader testing
├── sample_data.md # Sample text data for RAG testing
├── Titanic-Dataset.csv # Sample dataset for data-loading examples
└── README.MD
All concept notes are organized by topic. Click any link to jump directly to the notes.
| Topic | Notes File |
|---|---|
| Chat Models, Prompts & Messages | notes.md |
| Chat History | Messages/chat_history.md |
| Topic | Notes File |
|---|---|
| Simple & Sequential Chains | Chain/chain.md |
| Parallel Chain | Chain/parallel_chain.md |
| Conditional Chain (IF/ELSE routing) | Chain/conditional_Chain.md |
| Topic | Notes File |
|---|---|
| Runnables, LCEL, Primitive vs Task-Specific | Runnables/Runnables.md |
| Topic | Notes File |
|---|---|
| Output Parsers Overview & Ollama | Output_Parsers/notes1.md |
| String Output Parser | Output_Parsers/stringoutputParser.md |
| JSON Output Parser | Output_Parsers/jsonoutput_parser.md |
| Pydantic Output Parser | Output_Parsers/PydanticOutputParser.md |
| Structured Output Parser (legacy) | Output_Parsers/Structured Output Parser.md |
| TypedDict vs Pydantic vs JSON Schema | Structured Output/structured.md |
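To make the parser notes above concrete, here is a minimal stdlib-only sketch of what a JSON output parser does under the hood: pull the JSON payload out of raw model text, load it, and validate the expected fields (the `name`/`summary` schema here is hypothetical; LangChain's JsonOutputParser and PydanticOutputParser handle this for you):

```python
# Minimal sketch of JSON output parsing: extract the JSON object from the
# model's free-form reply, parse it, and check required fields.
import json
import re

def parse_json_output(raw: str) -> dict:
    """Extract and validate a JSON object from an LLM's text response."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)  # tolerate surrounding prose
    if match is None:
        raise ValueError("no JSON object found in model output")
    data = json.loads(match.group(0))
    for field in ("name", "summary"):  # hypothetical schema
        if field not in data:
            raise ValueError(f"missing field: {field}")
    return data

raw_reply = 'Sure! Here you go: {"name": "RAG", "summary": "retrieval + generation"}'
print(parse_json_output(raw_reply)["name"])  # RAG
```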
| Topic | Notes File |
|---|---|
| Document Loaders (PDF, Web, CSV, Directory) | Rag Components/rag_notes.md |
| Text Splitters | Rag Components/text_splitter_langchain_notes.md |
| Vector Stores Overview | Rag Components/Vectors Stores/vector_store_notes.md |
| Chroma DB | Rag Components/Vectors Stores/chroma_db_notes.md |
| Chroma Internal Architecture | Rag Components/Vectors Stores/chroma_internal_architecture.md |
| FAISS vs Chroma vs Pinecone Comparison | Rag Components/Vectors Stores/vector_db_architecture_comparison.md |
| Retrievers Overview | Rag Components/Retrievers/retriver.md |
| MMR Retriever | Rag Components/Retrievers/mmr.md |
| MultiQuery Retriever | Rag Components/Retrievers/MultiQuery_Retriever_Notes.md |
| Contextual Compression Retriever | Rag Components/Retrievers/Contextual_Compression_Retriever_Notes.md |
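The MMR retriever in the notes above balances relevance against redundancy. A tiny, self-contained illustration of the selection rule over toy 2-D vectors (not the LangChain implementation, which runs over real embeddings):

```python
# Toy Maximal Marginal Relevance (MMR): greedily pick documents that are
# relevant to the query but dissimilar to documents already selected.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def mmr(query, docs, k=2, lam=0.3):
    """Return indices of k docs chosen by MMR; lam trades relevance vs diversity."""
    selected, candidates = [], list(range(len(docs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(query, docs[i])
            redundancy = max((cosine(docs[i], docs[j]) for j in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

query = (1.0, 0.0)
docs = [(0.9, 0.1), (0.95, 0.05), (0.6, 0.8)]  # docs 0 and 1 are near-duplicates
print(mmr(query, docs, k=2, lam=0.3))  # [1, 2]: one duplicate, then the diverse doc
```

With a low `lam` the redundancy penalty dominates, so after picking the most relevant document the near-duplicate is skipped in favor of the more diverse one.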
| Topic | Notes File |
|---|---|
| Tools Overview | Tools/tools.md |
| Tool Binding | Tools/tool_binding.md |
| Tool Calling | Tools/tools_calling .md |
| Custom Tools | Tools/custom_tools.md |
| Toolkits | Tools/toolkit.md |
| AI Agents (ReAct Pattern) | Mini_projects/agent_notes.md |
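The agent notes above follow the ReAct pattern: the model alternates between proposing an action, having the runtime execute a tool, and reading the observation back. A stripped-down, framework-free sketch of that loop, with a scripted function standing in for the LLM (in the actual projects, LangChain/LangGraph drives a real model):

```python
# Skeleton of a ReAct-style agent loop: the model proposes an action,
# the runtime runs the matching tool, and the observation is fed back
# until the model emits a final answer. The "model" here is scripted.

TOOLS = {
    "multiply": lambda a, b: a * b,  # a toy custom tool
}

def scripted_model(history):
    """Stands in for the LLM: decide the next step from the transcript."""
    if not any(line.startswith("Observation:") for line in history):
        return {"action": "multiply", "args": (6, 7)}
    return {"final": f"The answer is {history[-1].split(': ')[1]}"}

def react_loop(question, max_steps=5):
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        step = scripted_model(history)
        if "final" in step:
            return step["final"]
        result = TOOLS[step["action"]](*step["args"])
        history.append(f"Observation: {result}")
    raise RuntimeError("agent did not finish")

print(react_loop("What is 6 x 7?"))  # The answer is 42
```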
| File | Description |
|---|---|
| sample_data.md | AI overview text for RAG testing |
| sample.pdf | Sample PDF for document loader testing |
| Titanic-Dataset.csv | Sample CSV for data loading examples |
- Python 3.10 or higher
- Ollama (optional, for local LLM inference)
git clone https://github.com/himanshu231204/langchain-playground--for-llms-.git
cd langchain-playground--for-llms-
python -m venv venv
source venv/bin/activate # macOS / Linux
venv\Scripts\activate     # Windows
pip install -r requirements.txt
Create a .env file in the project root:
OPENAI_API_KEY=your_openai_key
GOOGLE_API_KEY=your_google_api_key
HUGGINGFACEHUB_API_TOKEN=your_hf_token
Note: The .env file is listed in .gitignore and should never be committed.
This repository supports local LLM inference via Ollama, allowing experimentation without API costs or internet access.
# Install Ollama from https://ollama.com, then:
ollama pull llama3
ollama run llama3
Then use the local model from Python:
from langchain_ollama import ChatOllama
llm = ChatOllama(model="llama3")
response = llm.invoke("Explain Retrieval Augmented Generation in simple words")
print(response.content)
Benefits of local inference:
- No API costs
- Full data privacy
- Offline development
- Faster iteration for prototyping
- API keys are managed through environment variables (.env)
- .env is excluded from version control via .gitignore
- No secrets are hard-coded in source files
| Layer | Technologies |
|---|---|
| LLM Providers | OpenAI GPT-4, Google Gemini, Ollama (Llama 3, Mistral), Hugging Face |
| Frameworks | LangChain v1.0, LangGraph v1.0, LangChain Expression Language (LCEL) |
| Vector Stores | Chroma DB, FAISS |
| Embeddings | OpenAI Embeddings, HuggingFace Sentence Transformers |
| Output Validation | Pydantic v2, TypedDict, JSON Schema |
| UI | Streamlit |
| Data | Pandas, CSV, PDF, Markdown document loaders |
| Language | Python 3.10+ |
- Multi-agent systems with LangGraph
- Conversational memory integration
- Evaluation and monitoring pipelines
- Deployment-ready GenAI applications
- Streamlit-based interactive demos
This repository covers the following areas of Generative AI and LLM development:
langchain llm generative-ai rag retrieval-augmented-generation
langchain-python ai-agents openai google-gemini ollama
vector-database chromadb pydantic lcel langchain-expression-language
huggingface streamlit python machine-learning nlp
large-language-models prompt-engineering ai deep-learning chatbot
langsmith langgraph embeddings transformers tutorial
Contributions are welcome. To get started:
- Fork the repository
- Create a feature branch (git checkout -b feature/my-feature)
- Commit your changes (git commit -m "Add my feature")
- Push to the branch (git push origin feature/my-feature)
- Open a pull request
Made with ❤️ by Himanshu — Happy Learning! 🚀