An intelligent assistant for ReportPortal that syncs test data to a vector database and enables natural language querying using Large Language Models (LLMs). This tool helps you analyze test failures, identify patterns, and get insights from your ReportPortal data using simple English questions.
## Features

- **Data Synchronization**: Sync ReportPortal data to a ChromaDB vector database
- **Natural Language Queries**: Ask questions in plain English about your test data
- **Multiple LLM Support**: OpenAI GPT-4, Anthropic Claude, and local Ollama models
- **Full & Incremental Sync**: Efficient data synchronization strategies
- **Smart Search**: Vector-based semantic search for relevant test data
- **CLI Interface**: Easy-to-use command-line interface
- **Docker Support**: Containerized deployment options
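The "Smart Search" feature above relies on vector similarity rather than keyword matching. As a toy illustration of the underlying idea only (the documents and vectors below are made up; the real tool uses an embedding model and ChromaDB):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "embeddings" -- in practice these come from an embedding model.
docs = {
    "timeout in login test": [0.9, 0.1, 0.3],
    "assertion error in checkout": [0.1, 0.8, 0.2],
    "connection timeout in payment api": [0.8, 0.2, 0.4],
}
query = [0.85, 0.15, 0.35]  # pretend embedding of "find timeout failures"

# Rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda d: cosine_similarity(docs[d], query), reverse=True)
print(ranked)
```

Semantically related failures rank highest even when they share no exact keywords with the query.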
## Prerequisites

- Python 3.9 or higher
- Poetry for dependency management
- Access to a ReportPortal instance
- API token for ReportPortal
- LLM API key (OpenAI, Anthropic) or local Ollama setup
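A quick way to confirm the Python prerequisite before installing (purely illustrative):

```python
import sys

# The tool requires Python 3.9 or higher.
assert sys.version_info >= (3, 9), "Python 3.9+ is required"
print("Python", ".".join(map(str, sys.version_info[:3])), "OK")
```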
## Installation

```bash
git clone <repository-url>
cd test_insights-assistant
poetry install
```

Copy the example environment file and configure your settings:

```bash
cp .env.example .env
```

Edit the `.env` file with your configuration:
```env
# ReportPortal Configuration
REPORTPORTAL_URL=https://your-reportportal-instance.com
REPORTPORTAL_API_TOKEN=your_api_token_here
REPORTPORTAL_PROJECT=your_default_project_name

# ChromaDB Configuration
CHROMA_PERSIST_DIRECTORY=./chroma_db
CHROMA_COLLECTION_NAME=reportportal_data

# LLM Provider (choose one)
LLM_PROVIDER=openai  # or anthropic, ollama

# OpenAI Configuration (if using OpenAI)
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4-turbo-preview

# Anthropic Configuration (if using Claude)
ANTHROPIC_API_KEY=your_anthropic_api_key
ANTHROPIC_MODEL=claude-3-opus-20240229

# Ollama Configuration (if using local LLM)
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama2

# Sync Configuration
SYNC_BATCH_SIZE=100
SYNC_RATE_LIMIT=10
```

Run the configuration wizard:

```bash
poetry run test_insights config init
```

Check your configuration:

```bash
poetry run test_insights config show
```

## Usage

Sync all data from ReportPortal (replaces existing data):
```bash
# Sync all projects and entity types
poetry run test_insights sync run --full

# Sync specific project
poetry run test_insights sync run --full --project YOUR_PROJECT_NAME

# Sync specific entity types
poetry run test_insights sync run --full --project YOUR_PROJECT -e launch -e test_item -e log

# Sync multiple projects
poetry run test_insights sync run --full -p project1 -p project2
```

Sync only recent changes (default behavior):
```bash
# Incremental sync (last 7 days by default)
poetry run test_insights sync run

# Incremental sync for specific project
poetry run test_insights sync run --project YOUR_PROJECT_NAME

# Multiple projects incremental sync
poetry run test_insights sync run -p project1 -p project2
```

```bash
# Check sync status
poetry run test_insights sync status

# View storage statistics
poetry run test_insights storage search "test" --json-output
```

Configure your preferred LLM provider:
```bash
# OpenAI setup
poetry run test_insights query configure --openai-key YOUR_KEY

# Anthropic setup
poetry run test_insights query configure --anthropic-key YOUR_KEY

# Ollama setup (requires Ollama running locally)
poetry run test_insights query configure --ollama-url http://localhost:11434
```
Ask questions from the command line:

```bash
# Simple question
poetry run test_insights query ask "What tests failed today?"

# With source documents shown
poetry run test_insights query ask "Find timeout errors in tier1 tests" --show-sources

# Streaming response
poetry run test_insights query ask "Analyze test failure trends this week" --stream

# Use specific provider
poetry run test_insights query ask "Why are encryption tests failing?" --provider anthropic
```
Find failures and error patterns:

```bash
# Find recent failures
poetry run test_insights query ask "Show me all failed tests from the last 24 hours"

# Specific error types
poetry run test_insights query ask "Find tests that failed with timeout errors"

# Component-specific failures
poetry run test_insights query ask "What tests failed in the tier1 marker?"

# Pattern analysis
poetry run test_insights query ask "What are the most common error messages in failed tests?"
```
Investigate root causes:

```bash
# Why questions
poetry run test_insights query ask "Why did the test_add_capacity_ui.py tests fail yesterday?"

# Deep analysis
poetry run test_insights query ask "What's causing the functional test failures? Analyze the error patterns"

# Infrastructure issues
poetry run test_insights query ask "Are there any network connection errors in the failed tests?"

# Environment-specific issues
poetry run test_insights query ask "Compare failures between versions 4.18 and 1.19"
```
Pull metrics and statistics:

```bash
# Success rates
poetry run test_insights query ask "What's the success rate for tier1 tests this month?"

# Failure trends
poetry run test_insights query ask "What's the test failure trend over the last week?"

# Counts and totals
poetry run test_insights query ask "How many tests passed vs failed today?"

# Performance metrics
poetry run test_insights query ask "Which tests are taking the longest to run?"
```
Compare results across time, components, releases, and environments:

```bash
# Time-based comparisons
poetry run test_insights query ask "Compare test results between this week and last week"

# Component comparisons
poetry run test_insights query ask "Compare the failure rates between tier1 and tier2 tests"

# Release comparisons
poetry run test_insights query ask "How do test results compare between version 1.2 and 1.3?"

# Environment comparisons
poetry run test_insights query ask "What's the difference in failure rates between aws and baremetal environments?"
```
Analyze trends and stability:

```bash
# Historical analysis
poetry run test_insights query ask "Show me the history of the encryption tests"

# Stability analysis
poetry run test_insights query ask "Identify tests that pass and fail intermittently"

# Regression detection
poetry run test_insights query ask "Which tests started failing after the latest deployment?"

# Long-term trends
poetry run test_insights query ask "How has our overall test stability changed over the last month?"
```
Dive into specific tests, suites, and logs:

```bash
# Individual test analysis
poetry run test_insights query ask "Tell me about 'test_create_pool_block_pool.py' - when did it last pass?"

# Test suite analysis
poetry run test_insights query ask "Analyze all tests in the performance suite"

# Error log analysis
poetry run test_insights query ask "Show me the error logs for failed integration tests"

# Stack trace analysis
poetry run test_insights query ask "Find all tests with CommandFailed assertion errors"
```
Work with the vector store directly:

```bash
# Search vector database directly
poetry run test_insights storage search "failed test timeout" --limit 10

# Search with entity type filters
poetry run test_insights storage search "error" -e log -e test_item

# Clear all data
poetry run test_insights storage clear
```

## Python API

The sync and query pipelines can also be used programmatically:

```python
import asyncio

from test_insights import SyncOrchestrator, RAGPipeline
from test_insights.llm.providers.openai_provider import OpenAIProvider


async def main():
    # Sync data
    orchestrator = SyncOrchestrator()
    stats = await orchestrator.sync(
        sync_type="incremental",
        project_names=["my-project"],
    )

    # Query with LLM
    llm = OpenAIProvider()
    rag = RAGPipeline(llm)
    result = await rag.query("What tests failed today?")
    print(result["response"])


asyncio.run(main())
```

## Docker

```bash
# Build and run with Docker Compose
docker-compose up -d

# Run sync in container
docker-compose exec test_insights poetry run test_insights sync run

# Run queries in container
docker-compose exec test_insights poetry run test_insights query ask "What tests failed?"
```
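The compose commands above assume a `docker-compose.yml` roughly like the following (a minimal sketch; the build context, volume paths, and `command` are assumptions — only the service name `test_insights` appears in the commands above):

```yaml
services:
  test_insights:
    build: .
    env_file: .env                  # ReportPortal and LLM settings from the configuration above
    volumes:
      - ./chroma_db:/app/chroma_db  # persist the ChromaDB data between runs
    command: sleep infinity         # keep the container alive for `docker-compose exec`
```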
## Query Tips

- **Be Specific**: Include test names, error types, or time ranges
  - ✅ "Find timeout errors in payment API tests from last 3 days"
  - ❌ "Show errors"
- **Use Natural Language**: Ask as you would ask a colleague
  - ✅ "Why are the login tests failing after the latest deployment?"
  - ✅ "What's the pattern in database connection failures?"
- **Include Context**: Mention specific modules, environments, or time periods
  - ✅ "Compare UI test stability between staging and production this week"
  - ✅ "Analyze error patterns in the checkout flow since Monday"
- **Request Metrics**: Ask for specific measurements
  - ✅ "What's the failure rate for integration tests this month?"
  - ✅ "How many tests passed vs failed in the last build?"
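Well-formed questions like these can also be scripted in bulk; a hypothetical wrapper (the `ask` helper and question list are illustrative — only the `query ask` subcommand and `--provider` flag come from this document):

```python
import subprocess  # used if you uncomment the run() call below
from typing import List, Optional

def ask(question: str, provider: Optional[str] = None) -> List[str]:
    """Build (but do not run) the CLI invocation for one question."""
    cmd = ["poetry", "run", "test_insights", "query", "ask", question]
    if provider:
        cmd += ["--provider", provider]
    return cmd

questions = [
    "What's the failure rate for integration tests this month?",
    "Find timeout errors in payment API tests from last 3 days",
]

for q in questions:
    cmd = ask(q)
    print(cmd)
    # subprocess.run(cmd, check=True)  # uncomment to actually invoke the CLI
```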
## Troubleshooting

**No data found:**

```bash
# Check if data is synced
poetry run test_insights sync status

# Try broader search terms
poetry run test_insights query ask "Show me any test results from today"
```

**LLM API errors:**

```bash
# Verify API keys
poetry run test_insights config show

# Test with different provider
poetry run test_insights query ask "test query" --provider ollama
```

**Sync issues:**

```bash
# Check ReportPortal connection
curl -H "Authorization: Bearer YOUR_TOKEN" "YOUR_REPORTPORTAL_URL/api/v1/project/list"

# Try smaller batch size
export SYNC_BATCH_SIZE=10
poetry run test_insights sync run
```

## Development

```bash
# Run the test suite
poetry run pytest

# Format code
poetry run black src tests

# Type checking
poetry run mypy src

# Linting
poetry run flake8 src tests
```

## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Support

- Create an issue for bugs or feature requests
- Check existing issues for solutions
- Refer to ReportPortal documentation for API details