From 7370ad3bf71107e78b44f8ddc0a0235f3129cdff Mon Sep 17 00:00:00 2001
From: Mendon Kissling <59585235+mendonk@users.noreply.github.com>
Date: Wed, 1 Oct 2025 11:07:40 -0400
Subject: [PATCH] Apply suggestion from @mfortman11

Co-authored-by: Mike Fortman
---
 docs/docs/core-components/ingestion.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/core-components/ingestion.mdx b/docs/docs/core-components/ingestion.mdx
index 9491652d..7e5afb20 100644
--- a/docs/docs/core-components/ingestion.mdx
+++ b/docs/docs/core-components/ingestion.mdx
@@ -22,7 +22,7 @@ These settings configure the Docling ingestion parameters.
 
 OpenRAG will warn you if `docling-serve` is not running. To start or stop `docling-serve` or any other native services, in the TUI main menu, click **Start Native Services** or **Stop Native Services**.
 
-**Embedding model** determines which AI model is used to create vector embeddings. The default is `text-embedding-3-small`. `
+**Embedding model** determines which AI model is used to create vector embeddings. The default is `text-embedding-3-small`.
 
 **Chunk size** determines how large each text chunk is in number of characters. Larger chunks yield more context per chunk, but may include irrelevant information. Smaller chunks yield more precise semantic search, but may lack context.