Introduces the LightRAG Retrieval-Augmented Generation framework as an Apolo app, including input/output schemas, types, and processors.
Adds Helm chart value processing, environment and persistence configurations, and output service discovery for deployment.
Includes scripts for generating type schemas and testing support, along with CI and linting setup tailored for the new app.
Provides a documentation loader script to ingest markdown files into LightRAG with flexible referencing modes.
Relates to MLO-469
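The documentation loader mentioned above can be sketched roughly as follows. This is a minimal illustration, not the shipped script: the function name, the reference-mode names (`relative`, `absolute`, `none`), and the final ingestion step are assumptions.

```python
from pathlib import Path

def load_markdown_docs(docs_dir: str, reference_mode: str = "relative") -> list[str]:
    """Collect markdown files under docs_dir, prefixing each with a source reference.

    reference_mode controls how each file is cited in the ingested text:
    "relative" (path relative to docs_dir), "absolute" (full filesystem path),
    or "none" (no reference header). These mode names are illustrative.
    """
    root = Path(docs_dir)
    documents = []
    for md_file in sorted(root.rglob("*.md")):
        text = md_file.read_text(encoding="utf-8")
        if reference_mode == "relative":
            header = f"Source: {md_file.relative_to(root)}\n\n"
        elif reference_mode == "absolute":
            header = f"Source: {md_file.resolve()}\n\n"
        else:
            header = ""
        documents.append(header + text)
    # In the real script the collected documents would then be handed to a
    # LightRAG instance for ingestion (hypothetical call, e.g. rag.insert(documents)).
    return documents
```

The reference header lets retrieved chunks be traced back to the originating markdown file, which is what the "flexible referencing modes" above provide.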
* Updates LLM and embedding configurations to use OpenRouter and Gemini
* Renames and significantly expands environment configuration template
Renames the environment example file to a standard hidden env template to align with common conventions.
Extensively updates and reorganizes configuration options, adding detailed setup for LLM, embedding, storage backends, PostgreSQL, and overall LightRAG processing parameters.
Comments out some legacy and optional configuration lines to streamline initial setup and clarify default recommended values.
Updates gitignore to exclude various env-related files to protect sensitive keys and improve environment management.
* Updates default config with improved LLM and processing settings
* Adds openai-compatible environment file to .gitignore
* Adds new environment files to ignore list
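A template of the kind described above might look like the following fragment. The variable names follow common LightRAG server conventions, but they are illustrative here and should be checked against the actual shipped `.env` template; the values are placeholders, not recommended defaults.

```shell
### LLM configuration (OpenRouter-style OpenAI-compatible endpoint)
LLM_BINDING=openai
LLM_MODEL=your-llm-model
LLM_BINDING_HOST=https://openrouter.ai/api/v1
LLM_BINDING_API_KEY=your-api-key

### Embedding configuration
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=your-embedding-model
EMBEDDING_DIM=1536

### Storage backends (PostgreSQL-backed example)
LIGHTRAG_KV_STORAGE=PGKVStorage
LIGHTRAG_VECTOR_STORAGE=PGVectorStorage
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=lightrag
POSTGRES_PASSWORD=change-me
POSTGRES_DATABASE=lightrag
```

Keeping the populated copy of this file out of version control (per the .gitignore changes above) prevents API keys and database credentials from leaking.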
- Add top-k and cosine-threshold params for the API server
- Update .env and CLI parameter handling with the new parameters
- Improve splash screen display
- Update base and storage classes to read the new parameters from the .env file.
- Remove model parameter from azure_openai_complete (all LLM complete functions must have the same parameter structure)
- Use LLM_MODEL env var in Azure OpenAI function
- Comment out Lollms example in .env.example (duplication with Ollama example)
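The parameter handling described above amounts to a precedence scheme: CLI flags override `.env` values, which override built-in defaults. A minimal Python sketch of that scheme (the variable names `TOP_K`/`COSINE_THRESHOLD` match the bullets above, but the defaults and helper are illustrative, not the server's actual values):

```python
import argparse
import os

def get_env_value(name: str, default, value_type=str):
    """Read a parameter from the environment, falling back to a default.

    Malformed values (e.g. TOP_K="abc") fall back to the default rather
    than crashing the server at startup.
    """
    raw = os.environ.get(name)
    if raw is None:
        return default
    try:
        return value_type(raw)
    except ValueError:
        return default

def parse_args(argv=None):
    # Defaults come from the environment, so a CLI flag always wins when
    # given, and the .env value wins over the hard-coded fallback otherwise.
    parser = argparse.ArgumentParser(description="LightRAG API server (sketch)")
    parser.add_argument("--top-k", type=int,
                        default=get_env_value("TOP_K", 50, int),
                        help="Number of top items to retrieve")
    parser.add_argument("--cosine-threshold", type=float,
                        default=get_env_value("COSINE_THRESHOLD", 0.4, float),
                        help="Minimum cosine similarity for retrieval")
    return parser.parse_args(argv)
```

With this shape, loading the `.env` file into the process environment before `parse_args` runs (e.g. via python-dotenv) is all the storage and base classes need to see consistent values.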