This commit is contained in:
Raphaël MANSUY 2025-12-04 19:19:24 +08:00
parent 7a9ebbedb7
commit 3f70dd04da

@@ -50,6 +50,7 @@ LightRAG necessitates the integration of both an LLM (Large Language Model) and
* openai or openai compatible
* azure_openai
* aws_bedrock
* gemini
It is recommended to use environment variables to configure the LightRAG Server. An example environment variable file named `env.example` is provided in the root directory of the project. Copy this file to the startup directory and rename it to `.env`, then modify the parameters related to the LLM and Embedding models in the `.env` file. Note that the LightRAG Server loads the environment variables from `.env` into the system environment each time it starts. **LightRAG Server prioritizes settings already present in the system environment variables over those in the `.env` file**.
@@ -72,6 +73,8 @@ EMBEDDING_DIM=1024
# EMBEDDING_BINDING_API_KEY=your_api_key
```
> When targeting Google Gemini, set `LLM_BINDING=gemini`, choose a model such as `LLM_MODEL=gemini-flash-latest`, and provide your Gemini key via `LLM_BINDING_API_KEY` (or `GEMINI_API_KEY`).
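> For example, a minimal `.env` fragment for the Gemini binding might look like the sketch below. The `LLM_BINDING`, `LLM_MODEL`, and API-key variables come from the note above; the commented `GEMINI_API_KEY` line is the alternative key variable it mentions, and the placeholder value is illustrative:
> ```
> # LLM configuration for the Gemini binding
> LLM_BINDING=gemini
> LLM_MODEL=gemini-flash-latest
> LLM_BINDING_API_KEY=your_gemini_api_key
> # Alternatively, the key can be supplied as:
> # GEMINI_API_KEY=your_gemini_api_key
> ```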
* Ollama LLM + Ollama Embedding:
```