Update readme
parent 0216325e0f
commit 831e658ed8
2 changed files with 3 additions and 2 deletions
@@ -132,8 +132,6 @@ cp env.example .env # Update the .env with your LLM and embedding configuration
docker compose up
```
> Tip: When targeting Google Gemini, set `LLM_BINDING=gemini`, choose a model such as `LLM_MODEL=gemini-flash-latest`, and provide your Gemini key via `LLM_BINDING_API_KEY` (or `GEMINI_API_KEY`). The server now understands this binding out of the box.
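Putting the tip above together, a minimal `.env` fragment for the Gemini binding might look like the following (the values are illustrative placeholders; substitute your own model and key):

```shell
# .env — Gemini binding (illustrative values)
LLM_BINDING=gemini
LLM_MODEL=gemini-flash-latest
LLM_BINDING_API_KEY=your_gemini_api_key
# GEMINI_API_KEY=your_gemini_api_key  # alternative to LLM_BINDING_API_KEY
```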
> Historical versions of LightRAG docker images can be found here: [LightRAG Docker Images]( https://github.com/HKUDS/LightRAG/pkgs/container/lightrag)
### Install LightRAG Core
@@ -50,6 +50,7 @@ LightRAG necessitates the integration of both an LLM (Large Language Model) and
* openai or openai-compatible
* azure_openai
* aws_bedrock
* gemini
It is recommended to use environment variables to configure the LightRAG Server. An example environment variable file named `env.example` is provided in the root directory of the project. Copy this file to the startup directory and rename it to `.env`, then adjust the LLM- and embedding-related parameters in the `.env` file. Note that the LightRAG Server loads the environment variables from `.env` into the system environment each time it starts. **LightRAG Server prioritizes settings already present in the system environment variables over those in the `.env` file**.
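The precedence rule can be sketched in shell terms: a variable already exported in the system environment is not overridden by the `.env` entry when the server loads it (the file path and variable value below are hypothetical, chosen only to demonstrate the behavior):

```shell
# Sketch of the precedence rule: a pre-existing system variable wins over .env.
echo 'LLM_MODEL=from_dotenv' > /tmp/example.env

# The variable is already set in the system environment...
export LLM_MODEL=from_system

# ...so a loader that respects existing variables leaves it untouched.
while IFS='=' read -r key val; do
  if [ -z "${!key:-}" ]; then
    export "$key=$val"
  fi
done < /tmp/example.env

echo "$LLM_MODEL"   # prints: from_system
```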
@@ -72,6 +73,8 @@ EMBEDDING_DIM=1024
# EMBEDDING_BINDING_API_KEY=your_api_key
```
> When targeting Google Gemini, set `LLM_BINDING=gemini`, choose a model such as `LLM_MODEL=gemini-flash-latest`, and provide your Gemini key via `LLM_BINDING_API_KEY` (or `GEMINI_API_KEY`).
* Ollama LLM + Ollama Embedding:
```