diff --git a/docs/docs/core-components/ingestion.mdx b/docs/docs/core-components/ingestion.mdx
index 9491652d..7e5afb20 100644
--- a/docs/docs/core-components/ingestion.mdx
+++ b/docs/docs/core-components/ingestion.mdx
@@ -22,7 +22,7 @@ These settings configure the Docling ingestion parameters.
 OpenRAG will warn you if `docling-serve` is not running. To start or stop `docling-serve` or any other native services, in the TUI main menu, click **Start Native Services** or **Stop Native Services**.
 
-**Embedding model** determines which AI model is used to create vector embeddings. The default is `text-embedding-3-small`. `
+**Embedding model** determines which AI model is used to create vector embeddings. The default is `text-embedding-3-small`.
 
 **Chunk size** determines how large each text chunk is in number of characters. Larger chunks yield more context per chunk, but may include irrelevant information. Smaller chunks yield more precise semantic search, but may lack context.
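
The **Chunk size** paragraph in the diff describes splitting documents into fixed-size character chunks before embedding. A minimal sketch of that idea (this is illustrative only, not OpenRAG's actual chunker; the `overlap` parameter is an assumption, since overlap is not mentioned in the docs text):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into chunks of at most `chunk_size` characters.

    Consecutive chunks share `overlap` characters so that a sentence cut
    at a chunk boundary still appears intact in one of the two chunks.
    (Overlap is a common convention, assumed here for illustration.)
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

A larger `chunk_size` gives the retriever more context per chunk but dilutes each embedding with possibly irrelevant text; a smaller one makes semantic search more precise at the cost of context, which matches the trade-off the docs describe.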