
Add ollama for local llms to handover documentation tool #69

Open
thammel wants to merge 1 commit into rwth-iat:develop from thammel:add/local-llms

Conversation


@thammel thammel commented Mar 24, 2026

This pull request adds support for using Ollama (local LLMs and embeddings) in the Handover Documentation Tool, and introduces a more flexible and user-friendly way to select and configure both LLM and embedding providers in the UI. The documentation and configuration logic have been updated accordingly, and users can now choose between HuggingFace, OpenAI, and Ollama for embeddings, with provider-specific options surfaced in the UI.

The most important changes are:

Ollama Integration:

  • Added support for Ollama as both an LLM and embedding provider, including configuration of custom models and base URLs. This enables fully local document processing using models such as llama3.2 and nomic-embed-text for RAG. (aas_editor/tools/handover_doc_llm/config.py, aas_editor/tools/handover_doc_llm/handover_documentation_tool.py, pyproject.toml)

UI Enhancements for Provider Selection:

  • The tool’s dialog now allows users to select both LLM and embedding providers from dropdowns, and dynamically shows/hides relevant fields (API key, base URL, model) based on the selected provider. Provider-specific hints and placeholders are shown for better usability. (aas_editor/tools/handover_doc_llm/handover_documentation_tool.py)
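The show/hide rule can be expressed as plain data, which keeps the dialog logic easy to follow and test. This is a sketch of the idea only: the actual dialog toggles Qt widgets in handover_documentation_tool.py, and the exact field sets per provider are an assumption based on the PR description.

```python
# Which input fields are relevant for each provider (assumed mapping):
# OpenAI needs an API key, Ollama needs a reachable base URL, and all
# three let the user pick a model.
FIELDS_BY_PROVIDER = {
    "openai":      {"api_key", "model"},
    "huggingface": {"model"},
    "ollama":      {"base_url", "model"},
}

def visible_fields(provider: str) -> set[str]:
    """Return the set of input fields the dialog should show."""
    try:
        return FIELDS_BY_PROVIDER[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}") from None
```

In the dialog, a provider dropdown's change signal would call something like `widget.setVisible(name in visible_fields(provider))` for each field widget.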

Configuration and Backend Refactoring:

  • Refactored the initialization logic for embeddings and LLMs to support the new provider structure, passing provider/model/api_key as needed and defaulting sensibly. (aas_editor/tools/handover_doc_llm/config.py, aas_editor/tools/handover_doc_llm/handover_documentation_tool.py)

Documentation Update:

  • The README.md has been rewritten to reflect the new provider options, explain how to use Ollama, and clarify the workflow for both LLM and embedding selection. Tables for supported providers and step-by-step instructions for Ollama are included. (aas_editor/tools/handover_doc_llm/README.md)

Dependency Update:

  • Added langchain-ollama to the project dependencies to support Ollama integration. (pyproject.toml)

