diff --git a/README.md b/README.md
index b2941b2e..7a366c05 100644
--- a/README.md
+++ b/README.md
@@ -132,8 +132,6 @@
 cp env.example .env
 # Update the .env with your LLM and embedding configuration
 docker compose up
 ```
-> Tip: When targeting Google Gemini, set `LLM_BINDING=gemini`, choose a model such as `LLM_MODEL=gemini-flash-latest`, and provide your Gemini key via `LLM_BINDING_API_KEY` (or `GEMINI_API_KEY`). The server now understands this binding out of the box.
-
 > Historical versions of LightRAG docker images can be found here: [LightRAG Docker Images]( https://github.com/HKUDS/LightRAG/pkgs/container/lightrag)
 ### Install LightRAG Core
diff --git a/lightrag/api/README.md b/lightrag/api/README.md
index bc21fac4..f62e24d3 100644
--- a/lightrag/api/README.md
+++ b/lightrag/api/README.md
@@ -50,6 +50,7 @@ LightRAG necessitates the integration of both an LLM (Large Language Model) and
 * openai or openai compatible
 * azure_openai
 * aws_bedrock
+* gemini
 
 It is recommended to use environment variables to configure the LightRAG Server. There is an example environment variable file named `env.example` in the root directory of the project. Please copy this file to the startup directory and rename it to `.env`. After that, you can modify the parameters related to the LLM and Embedding models in the `.env` file. It is important to note that the LightRAG Server will load the environment variables from `.env` into the system environment variables each time it starts. **LightRAG Server will prioritize the settings in the system environment variables to .env file**.
 
@@ -72,6 +73,8 @@ EMBEDDING_DIM=1024
 # EMBEDDING_BINDING_API_KEY=your_api_key
 ```
 
+> When targeting Google Gemini, set `LLM_BINDING=gemini`, choose a model such as `LLM_MODEL=gemini-flash-latest`, and provide your Gemini key via `LLM_BINDING_API_KEY` (or `GEMINI_API_KEY`).
+
 * Ollama LLM + Ollama Embedding:
 
 ```
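
For quick reference, a minimal `.env` sketch matching the Gemini tip this patch moves into `lightrag/api/README.md`; the key value is an illustrative placeholder, and embedding settings are left as configured elsewhere in the file:

```
# Minimal Gemini LLM configuration (embedding settings unchanged)
LLM_BINDING=gemini
LLM_MODEL=gemini-flash-latest
# Provide the key via LLM_BINDING_API_KEY, or set GEMINI_API_KEY instead
LLM_BINDING_API_KEY=your_gemini_api_key
```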