AI agents for IBM i system administration and monitoring, built with the LangChain/LangGraph framework and Model Context Protocol (MCP) tools. This project provides intelligent agents that can analyze IBM i system performance, manage security, and assist with administrative tasks.
The IBM i LangChain Agents project provides Python-based intelligent agents that leverage MCP tools to perform system administration tasks on IBM i systems. Built on LangGraph, these agents feature advanced capabilities like human-in-the-loop approval for security operations and persistent conversation memory.
- Five Specialized Agents: Purpose-built agents for different IBM i tasks
- Multi-Model Support: Works with OpenAI, Anthropic Claude, and local Ollama models
- MCP Integration: Connects to the IBM i MCP Server for system operations
- Human-in-the-Loop: Write operations in the security agent require manual approval
- Persistent Memory: Agents maintain conversation context across turns using in-memory checkpointing (see the sketch after this list)
- Rich Logging: Detailed tool call and response logging for debugging
- Interactive & Batch Modes: CLI for interactive chat or automated testing
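For example, the persistent-memory feature means that calls sharing the same thread_id continue one conversation. A minimal sketch using the chat_with_agent helper documented in the Python API section below (agent creation is shown there):

```python
# Minimal sketch: two turns that share a thread_id continue the same
# conversation because the agent checkpoints its state per thread.
from ibmi_agents import chat_with_agent

async def two_turns(agent):
    await chat_with_agent(agent, "What is my system status?", thread_id="demo-1")
    # The follow-up can refer to the previous answer because the thread is the same
    return await chat_with_agent(agent, "Which of those metrics looks worst?", thread_id="demo-1")
```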
- Performance Agent - Monitor and analyze system performance metrics (CPU, memory, I/O) - 12 tools
- Discovery Agent - High-level system discovery, inventory, and service summaries - 5 tools
- Browse Agent - Detailed exploration of system services by category or schema - 4 tools
- Search Agent - Find specific services, programs, or system resources - 5 tools
- Security Agent - Security vulnerability assessment and remediation - 25 tools with human-in-the-loop
- Python 3.13+ - The project requires Python 3.13 or newer
- uv - Python package manager for installing dependencies and managing virtual environments (Install uv)
- IBM i MCP Server - Must be installed and running on your system
- API Keys - For your chosen LLM provider (OpenAI or Anthropic; local Ollama models need no key)
Follow these step-by-step instructions to set up and run the IBM i LangChain MCP Agents.
1.1 Install Python 3.13+
# Check your Python version
python --version # or python3 --version
# If you need to install Python 3.13+, visit:
# https://www.python.org/downloads/
1.2 Install uv (Python package manager)
# On macOS and Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
# On Windows (PowerShell):
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
# Alternative: Install via pip
pip install uv
Ensure you have the IBM i MCP Server installed and running.
Note
Follow the MCP Server installation guide → Quickstart Guide
Configure the server → Server Configuration Guide
2.1 Install dependencies and build the server:
cd ibmi-mcp-server
npm install
npm run build
2.2 Start the MCP server:
npm run dev
The server will start on http://127.0.0.1:3010/mcp by default.
2.3 Verify the server is running:
curl http://127.0.0.1:3010/mcp
Expected response:
{"status":"ok","server":{"name":"ibmi-mcp-server","version":"1.9.1",...}}3.1 Install the ibmi-agent-sdk package:
cd agents/packages/ibmi-agent-sdk
uv pip install -e .
3.2 Install the LangChain agents package:
cd ../../frameworks/langchain
uv pip install -e .
Create a .env file in the agents/frameworks/langchain directory:
cd agents/frameworks/langchain
cp .env.example .env
4.1 Edit the .env file with your configuration:
# Choose your default model (Ollama recommended for local development)
DEFAULT_MODEL=ollama:llama3.2
# Add API keys only for providers you plan to use:
# OpenAI (for GPT-4, GPT-4o models)
OPENAI_API_KEY=
# Anthropic (for Claude models)
ANTHROPIC_API_KEY=
# MCP Server Configuration (usually defaults are fine)
MCP_URL=http://127.0.0.1:3010/mcp
MCP_TRANSPORT=streamable_http
# Logging (set to false for minimal output)
VERBOSE_LOGGING=true
# Security (enable human-in-the-loop for write operations)
ENABLE_HUMAN_IN_LOOP=true
4.2 For Ollama (local models - no API key needed):
# Install Ollama from: https://ollama.ai
# Pull a model:
ollama pull llama3.2
Note: You only need API keys for the providers you plan to use.
Run a quick test to verify all agents can be created:
uv run src/ibmi_agents/agents/test_agents.py --quick
Expected output:
================================================================================
Quick Agent Creation Test
================================================================================
Model: ollama:llama3.2
Creating performance agent... ✅ IBM i Performance Monitor
Creating discovery agent... ✅ IBM i SysAdmin Discovery
Creating browse agent... ✅ IBM i SysAdmin Browser
Creating search agent... ✅ IBM i SysAdmin Search
Creating security agent... ✅ IBM i Security Operations
🔒 Human-in-the-loop enabled for 2 non-readonly tools:
- repopulate_special_authority_detail
- execute_impersonation_lockdown
────────────────────────────────────────────────────────────────────────────────
Result: 5/5 agents created successfully
================================================================================
# Test performance agent with sample query
uv run src/ibmi_agents/agents/test_agents.py --agent performance
# Test with quiet mode (minimal output)
uv run src/ibmi_agents/agents/test_agents.py --agent performance --quiet
# Chat interactively with an agent
uv run src/ibmi_agents/agents/test_agents.py --agent performance --interactive
Example session:
👤 You: What is my system status?
🤖 IBM i Performance Monitor:
### Quick‑look snapshot
| Metric | Value | Why it matters |
|--------|-------|----------------|
| **Configured CPUs** | 3 | The number of logical CPUs the system is allowed to use. |
| **Average CPU utilization** | **0 %** | Indicates the system is essentially idle. |
| **Main storage size** | **133 MB** | Total RAM allocated to the system. |
| **System ASP used** | 56 % | How much of the ASP's storage is in use. |
> **Bottom line:** CPU is not a bottleneck right now, and memory usage is comfortably below capacity.
────────────────────────────────────────────────────────────────────────────────
👤 You: Show me memory pool information
🤖 IBM i Performance Monitor: [analyzes memory pools]
👤 You: quit
👋 Goodbye!
# Test only vulnerability assessment tools
uv run src/ibmi_agents/agents/test_agents.py --agent security --category vulnerability-assessment
# Available categories:
# - vulnerability-assessment: Identify security vulnerabilities
# - audit: Audit security configurations
# - remediation: Generate and execute security fixes
# - user-management: Manage user capabilities and permissions
# Use OpenAI GPT-4o
uv run src/ibmi_agents/agents/test_agents.py --agent performance --model openai:gpt-4o
# Use Anthropic Claude
uv run src/ibmi_agents/agents/test_agents.py --agent discovery --model anthropic:claude-3-7-sonnet-20250219
# Use local Ollama model
uv run src/ibmi_agents/agents/test_agents.py --agent search --model ollama:llama3.2
# Run comprehensive test suite
uv run src/ibmi_agents/agents/test_agents.py
Purpose: Monitor and analyze IBM i system performance
Tools Available: 12 performance monitoring tools
Example Questions:
- "What is my system status?"
- "Show me memory pool utilization"
- "Which jobs are consuming the most CPU?"
- "What is the current system activity level?"
Sample Output:
### Quick‑look snapshot
| Metric | Value | Why it matters |
|--------|-------|----------------|
| **Configured CPUs** | 3 | The number of logical CPUs the system is allowed to use. |
| **Current CPU capacity** | 3 | How many CPUs are actually available right now. |
| **Average CPU utilization** | **0 %** | Indicates the system is essentially idle. |
| **Main storage size** | **133 MB** | Total RAM allocated to the system. |
| **System ASP used** | 56 % | How much of the ASP's storage is in use. |
> **Bottom line:** CPU is not a bottleneck right now, and memory usage is comfortably below capacity.
Purpose: High-level system discovery and summarization
Tools Available: 5 discovery tools
Example Questions:
- "Give me an overview of available system services"
- "How many services are in each schema?"
- "What types of SQL objects are available?"
Purpose: Detailed system browsing and exploration
Tools Available: 4 browse tools
Example Questions:
- "Show me services in the QSYS2 schema"
- "List all views related to system monitoring"
- "What procedures are available for job management?"
Purpose: Find specific services and usage information
Tools Available: 5 search tools
Example Questions:
- "Search for services related to system status"
- "Find examples of using ACTIVE_JOB_INFO"
- "Where is the SYSTEM_STATUS service located?"
Purpose: Security vulnerability assessment and remediation
Tools Available: 25 security tools
Special Feature: 🔒 Human-in-the-Loop Approval
The security agent automatically enables human-in-the-loop approval for non-readonly operations:
- Read-only tools (vulnerability assessment, auditing) execute immediately
- Write operations (remediation, configuration changes) require manual approval
How it works:
- The agent identifies tools marked with readOnly: false in their metadata
- Before executing these tools, the agent pauses and requests approval
- You can approve or reject the operation
- This prevents accidental system changes during security assessments
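A rough caller-side sketch of this approval flow, assuming the security agent is a LangGraph graph that pauses (interrupts) before non-readonly tools. The "security" agent-type string and the resume payload below are illustrative; the project's actual interrupt payloads may differ:

```python
import asyncio
from langgraph.types import Command
from ibmi_agents import create_ibmi_agent

async def main():
    # Agent-type string assumed to mirror the CLI's --agent security flag
    ctx = await create_ibmi_agent("security", model_id="ollama:llama3.2")
    async with ctx as (agent, session):
        config = {"configurable": {"thread_id": "security-review-1"}}

        # First pass: the agent pauses before executing a non-readonly tool
        await agent.ainvoke(
            {"messages": [("user", "Lock down the impersonation vulnerabilities")]},
            config,
        )

        # A paused run still has a pending step in its saved checkpoint
        snapshot = await agent.aget_state(config)
        if snapshot.next:
            # Approve so the write tool can run (resume with a rejection to skip it)
            await agent.ainvoke(Command(resume="approve"), config)

asyncio.run(main())
```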
Tools requiring approval:
- repopulate_special_authority_detail - Refreshes security audit data
- execute_impersonation_lockdown - Applies security remediation commands
Example Questions:
- "Check for user profiles vulnerable to impersonation attacks"
- "List database files with public write access"
- "Analyze library list security vulnerabilities"
- "Generate commands to lock down impersonation vulnerabilities"
Security Categories (use with --category flag):
- vulnerability-assessment: Identify security vulnerabilities
- audit: Audit security configurations
- remediation: Generate and execute security fixes
- user-management: Manage user capabilities and permissions
import asyncio
from ibmi_agents import create_ibmi_agent, chat_with_agent
async def main():
    # Create a performance agent
    ctx = await create_ibmi_agent("performance", model_id="ollama:llama3.2")

    # Use the agent within its context
    async with ctx as (agent, session):
        # Send a query
        response = await chat_with_agent(
            agent,
            "What is my system status? Give me CPU and memory metrics.",
            thread_id="my-session-1"
        )
        print(response)

if __name__ == "__main__":
    asyncio.run(main())
The ibmi_agents package also exports these helpers:
from ibmi_agents import (
    create_ibmi_agent,                # Create any agent by type
    create_performance_agent,         # Create performance agent
    create_sysadmin_discovery_agent,  # Create discovery agent
    create_sysadmin_browse_agent,     # Create browse agent
    create_sysadmin_search_agent,     # Create search agent
    create_security_ops_agent,        # Create security agent
    chat_with_agent,                  # Send message to agent
    list_available_agents,            # Get agent information
    set_verbose_logging,              # Control logging verbosity
    get_verbose_logging,              # Check logging status
)
Enable verbose logging to see detailed tool calls and responses:
# Via command line
uv run src/ibmi_agents/agents/test_agents.py --agent performance --verbose
# Or in .env
VERBOSE_LOGGING=true
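Logging can also be toggled programmatically with the helpers exported by the package (a small sketch; the exact signatures are assumed from the import list above):

```python
from ibmi_agents import set_verbose_logging, get_verbose_logging, list_available_agents

set_verbose_logging(True)        # log full tool calls and responses
print(get_verbose_logging())     # check the current setting
print(list_available_agents())   # inspect the available agent types
```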
For advanced debugging, enable LangSmith tracing:
# In .env
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your_langsmith_key_here
LANGCHAIN_PROJECT=ibmi-agents
Then view traces at: https://smith.langchain.com/
- Agent Selection: You choose an agent specialized for a specific task (performance, discovery, security, etc.)
- MCP Connection: The agent connects to the IBM i MCP Server via HTTP
- Tool Filtering: Each agent only has access to relevant tools (e.g., performance agent gets 12 performance tools)
- Model Execution: Your chosen LLM model processes requests and generates tool calls
- Human-in-the-Loop: Security operations pause for approval before executing write operations
- Persistent Memory: Agent sessions maintain context using in-memory checkpointing
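As a rough sketch of this wiring (not the SDK's actual implementation), an agent along these lines could be assembled by hand with the langchain-mcp-adapters client and LangGraph's prebuilt ReAct agent; the tool-name filter below is purely illustrative:

```python
import asyncio
from langchain.chat_models import init_chat_model
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

async def build_agent():
    # 1. Connect to the IBM i MCP Server over streamable HTTP
    client = MultiServerMCPClient({
        "ibmi": {"url": "http://127.0.0.1:3010/mcp", "transport": "streamable_http"},
    })
    tools = await client.get_tools()

    # 2. Filter to the tools this agent should see (name match is illustrative)
    perf_tools = [t for t in tools if "performance" in t.name.lower()]

    # 3. ReAct-style agent with in-memory checkpointing for session memory
    model = init_chat_model("ollama:llama3.2")
    return create_react_agent(model, perf_tools, checkpointer=MemorySaver())

agent = asyncio.run(build_agent())
```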
Error: Connection refused to http://127.0.0.1:3010/mcp
Solution: Start the IBM i MCP Server
cd /path/to/ibmi-mcp-server
npm run dev
Error: model 'llama3.2' not found
Solution: Pull the model
ollama pull llama3.2
Error: OpenAI API key not found
Solution: Set the API key in .env
OPENAI_API_KEY=sk-...
ModuleNotFoundError: No module named 'ibmi_agent_sdk'
Solution: Install the SDK package
cd ../../packages/ibmi-agent-sdk
uv pip install -e .
- IBM i MCP Server Documentation: See main project README
- LangChain Documentation: https://python.langchain.com/
- LangGraph Documentation: https://langchain-ai.github.io/langgraph/
- Ollama Documentation: https://ollama.ai/docs