yangdx
5311083f43
Rename "Process" entity type to "Method" across all components
2025-09-14 02:30:05 +08:00
yangdx
7060cf17f0
Add Process and Data entity types to LLM extraction system
...
• Add Process and Data to default types
• Update env.example configuration
• Add translations for new entities
• Support 5 languages (en/zh/fr/ar/tw)
2025-09-14 01:14:47 +08:00
yangdx
2686fc526e
Change entity type from CreativeWork to Content and update delimiter
...
• Replace CreativeWork with Content type
• Improve LLM output error messages
• Update prompt for binary relationships
• Fix delimiter corruption examples
2025-09-14 00:55:15 +08:00
yangdx
41cdeaeaad
Add Concept and NaturalObject to default entity types
2025-09-13 15:37:11 +08:00
yangdx
f7aa108cc2
Update env.example
2025-09-13 11:27:02 +08:00
yangdx
87f1b47218
Update env.example
2025-09-11 15:50:16 +08:00
yangdx
4a21b7f53f
Update OpenAI API config docs for max_tokens and max_completion_tokens
...
• Clarify max_tokens vs max_completion_tokens
• Add Gemini exception note
• Update parameter descriptions
• Add new completion tokens option
2025-09-10 16:23:10 +08:00
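A minimal sketch of the max_tokens vs max_completion_tokens distinction this commit documents, assuming a standard OpenAI-compatible chat endpoint; the model name and limit value are illustrative, not taken from the repository:

```python
import os
from openai import OpenAI

# Illustrative only: newer OpenAI models expect max_completion_tokens,
# while many OpenAI-compatible backends still accept the legacy max_tokens.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def chat(prompt: str, model: str, use_completion_tokens: bool, limit: int = 512) -> str:
    kwargs = {"max_completion_tokens": limit} if use_completion_tokens else {"max_tokens": limit}
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        **kwargs,
    )
    return response.choices[0].message.content
```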
Daniel.y
298037d8f7
Merge pull request #2076 from danielaskdd/prompt-refactor
...
refactor: Optimize Entity Extraction for Small Parameter LLMs with Enhanced Prompt Caching
2025-09-08 15:40:13 +08:00
yangdx
d218f15a62
Refactor entity extraction with system prompts and output limits
...
- Add system/user prompt separation
- Set max tokens to fix endless output
- Improve extraction error logging
- Update cache type from extract to summary
2025-09-08 15:20:45 +08:00
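A hedged sketch of the system/user prompt separation described above; the function name, system prompt text, and token limit are hypothetical and only illustrate moving extraction instructions into a system message with a bounded completion length:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical illustration: extraction instructions go into the system role,
# the chunk text into the user role, and max_tokens bounds runaway output.
EXTRACTION_SYSTEM_PROMPT = "Extract entities and relationships from the given text."

def extract_entities(chunk_text: str, model: str = "gpt-4o-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": EXTRACTION_SYSTEM_PROMPT},
            {"role": "user", "content": chunk_text},
        ],
        max_tokens=2048,  # guards against the "endless output" failure mode
    )
    return response.choices[0].message.content
```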
Shlomi
8d7ef07bbf
fix env file example
2025-09-07 15:22:24 +03:00
yangdx
64bbe7233b
Update env.example
2025-09-06 01:24:12 +08:00
yangdx
2db7e4a3e8
Update env.example
2025-09-05 17:13:29 +08:00
yangdx
c903b14849
Bump API version to 0214 and update env.example
2025-09-04 12:04:50 +08:00
yangdx
78abb397bf
Reorder entity types and add Document type to extraction
2025-09-03 12:44:40 +08:00
yangdx
c86f863fa4
feat: optimize entity extraction for smaller LLMs
...
Simplify entity relationship extraction process to improve compatibility
and performance with smaller, less capable language models.
Changes:
- Remove iterative gleaning loop with LLM-based continuation decisions
- Simplify to single gleaning pass when entity_extract_max_gleaning > 0
- Streamline entity extraction prompts with clearer instructions
- Add explicit completion delimiter signals in all examples
2025-09-03 10:33:01 +08:00
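A rough sketch of the simplified flow, assuming a generic llm_func callable; the prompts and names are placeholders rather than the project's actual implementation, and only illustrate replacing the iterative loop with a single optional gleaning pass:

```python
from typing import Callable

def extract_with_single_gleaning(
    chunk: str,
    llm_func: Callable[[str], str],
    entity_extract_max_gleaning: int = 1,
) -> str:
    # First extraction pass over the chunk.
    result = llm_func(f"Extract entities and relationships:\n{chunk}")

    # Instead of looping until the LLM decides it is done, run at most one
    # additional gleaning pass when gleaning is enabled.
    if entity_extract_max_gleaning > 0:
        result += llm_func(
            "Some entities may have been missed. Add any remaining ones:\n" + result
        )
    return result
```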
yangdx
9d81cd724a
Fix typo: change "Equiment" to "Equipment" in entity types
2025-09-02 03:19:31 +08:00
yangdx
c8c59c38b0
Fix entity types configuration to support JSON list parsing
...
- Add JSON parsing for list env vars
- Update entity types example format
- Add list type support to get_env_value
2025-09-01 00:14:57 +08:00
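A minimal sketch of the JSON list parsing this commit describes; the helper plays the role of get_env_value for list types, but its signature and comma-separated fallback are assumptions:

```python
import json
import os

def get_list_env(name: str, default: list[str]) -> list[str]:
    """Parse a list-valued env var, e.g. ENTITY_TYPES='["Person", "Organization"]'."""
    raw = os.getenv(name)
    if raw is None:
        return default
    try:
        value = json.loads(raw)
        if isinstance(value, list):
            return [str(item) for item in value]
    except json.JSONDecodeError:
        pass
    # Assumed fallback: treat the value as a comma-separated string.
    return [item.strip() for item in raw.split(",") if item.strip()]

print(get_list_env("ENTITY_TYPES", ["Person", "Organization", "Location"]))
```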
yangdx
57fe1403c3
Update default entity types in env.example configuration
2025-08-31 22:33:34 +08:00
yangdx
d9aa021682
Update env.example
2025-08-30 11:02:53 +08:00
Pedro Fernandes Steimbruch
8430e1a051
fix: adjust the EMBEDDING_BINDING_HOST for openai in the env.example
2025-08-29 09:48:42 -03:00
yangdx
d39afcb831
Add temperature guidance for Qwen3 models in env example
2025-08-29 15:13:52 +08:00
yangdx
925e631a9a
refac: Add robust timeout handling for LLM requests
2025-08-29 13:50:35 +08:00
yangdx
ac2db35160
Update env.example
2025-08-29 10:18:12 +08:00
Sandmeyer
1cd27dc048
docs(config): fix typo in .env comments
2025-08-28 20:23:51 +08:00
yangdx
0be4f0144b
Merge branch 'entityTypesServerSupport'
2025-08-27 12:23:58 +08:00
yangdx
ff0a18e08c
Unify SUMMARY_LANGUAGE and ENTITY_TYPES implementation method
2025-08-27 12:23:22 +08:00
yangdx
cb0a035076
Update env.example
2025-08-27 11:12:52 +08:00
Thibo Rosemplatt
c3aabfc251
Merge branch 'main' into entityTypesServerSupport
2025-08-26 21:48:20 +02:00
yangdx
6bcfe696ee
feat: add output length recommendation and description type to LLM summary
...
- Add SUMMARY_LENGTH_RECOMMENDED parameter (600 tokens)
- Optimize prompt template for LLM summary
2025-08-26 14:41:12 +08:00
yangdx
84416d104d
Increase default LLM summary merge threshold from 4 to 8 to reduce summary trigger frequency
2025-08-26 03:57:35 +08:00
yangdx
de2daf6565
refac: Rename summary_max_tokens to summary_context_size and add comprehensive parameter validation for summary configuration
...
- Update algorithm logic in operate.py for better token management
- Fix health endpoint to use correct parameter names
2025-08-26 01:35:50 +08:00
Thibo Rosemplatt
d054ec5d00
Added entity_types as a user defined variable (via .env)
2025-08-23 20:16:11 +02:00
yangdx
3d5e6226a9
Refactored rerank_example to use the updated rerank function
2025-08-23 22:51:41 +08:00
yangdx
9bc349ddd6
Improve Empty Keyword Handling logic
2025-08-23 11:50:58 +08:00
yangdx
1be9a54c8d
Rename ENABLE_RERANK to RERANK_BY_DEFAULT and update default to true
2025-08-23 09:46:51 +08:00
yangdx
47485b130d
refac(ui): Show rerank binding info on status card
...
- Remove separate ENABLE_RERANK flag in favor of rerank_binding="null"
- Change default rerank binding from "cohere" to "null" (disabled)
- Update UI to display both rerank binding and model information
2025-08-23 02:04:14 +08:00
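A small sketch of the binding-based switch described above, assuming RERANK_BINDING is the controlling env var; the model env var name and defaults are illustrative assumptions:

```python
import os

# Illustrative: with no separate enable flag, a binding of "null" means
# reranking is disabled, while any other value selects a provider.
rerank_binding = os.getenv("RERANK_BINDING", "null")
rerank_model = os.getenv("RERANK_MODEL", "")  # env var name assumed for illustration

rerank_enabled = rerank_binding.lower() != "null"
print(f"rerank binding: {rerank_binding}, model: {rerank_model or 'n/a'}, enabled: {rerank_enabled}")
```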
yangdx
580cb7906c
feat: Add multiple rerank provider support to LightRAG Server via new env vars and CLI params
...
- Add --enable-rerank CLI argument and ENABLE_RERANK env var
- Simplify rerank configuration logic to only check enable flag and binding
- Update health endpoint to show enable_rerank and rerank_configured status
- Improve logging messages for rerank enable/disable states
- Maintain backward compatibility with default value True
2025-08-22 19:29:45 +08:00
yangdx
16a1ef1178
Update summary_max_tokens default from 10k to 30k tokens
2025-08-21 23:16:07 +08:00
yangdx
718025dbea
Update embedding configuration docs and add aws_bedrock option
2025-08-21 17:55:04 +08:00
yangdx
4b2ef71c25
feat: Add extra_body parameter support for OpenRouter/vLLM compatibility
...
- Enhanced add_args function to handle dict types with JSON parsing
- Added reasoning and extra_body parameters for OpenRouter/vLLM compatibility
- Updated env.example with OpenRouter/vLLM parameter examples
2025-08-21 13:06:28 +08:00
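A hedged example of forwarding extra_body through the OpenAI SDK, which is the usual way to pass OpenRouter/vLLM-specific options; the base URL, model, and option keys are illustrative, not the project's settings:

```python
from openai import OpenAI

# Illustrative OpenRouter-style setup; base_url, api_key, and model are placeholders.
client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-...")

response = client.chat.completions.create(
    model="qwen/qwen3-32b",
    messages=[{"role": "user", "content": "Hello"}],
    # extra_body forwards provider-specific fields that are not part of the
    # standard OpenAI schema (the keys here are examples, not project settings).
    extra_body={"reasoning": {"effort": "low"}},
)
print(response.choices[0].message.content)
```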
yangdx
5d34007f2c
Add presence penalty config option for smaller models
...
- Add OPENAI_LLM_PRESENCE_PENALTY setting
- Recommend 1.5 for Qwen3 <32B params
- Update max completion tokens comment
2025-08-21 11:35:23 +08:00
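A short sketch of where a presence penalty read from OPENAI_LLM_PRESENCE_PENALTY might be applied; reading it from the env var and the model name here are illustrative assumptions:

```python
import os
from openai import OpenAI

client = OpenAI()

# The commit recommends a penalty of 1.5 for Qwen3 models under 32B parameters;
# wiring it through this env var read is an illustrative assumption.
presence_penalty = float(os.getenv("OPENAI_LLM_PRESENCE_PENALTY", "0.0"))

response = client.chat.completions.create(
    model="qwen3-14b",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize the document."}],
    presence_penalty=presence_penalty,
)
print(response.choices[0].message.content)
```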
yangdx
0dd245e847
Add OpenAI reasoning effort and max completion tokens config options
2025-08-21 11:04:06 +08:00
yangdx
0e67ead8fa
Rename MAX_TOKENS to SUMMARY_MAX_TOKENS for clarity
2025-08-21 10:15:20 +08:00
yangdx
aa22772721
Refactor LLM temperature handling to be provider-specific
...
• Remove global temperature parameter
• Add provider-specific temp configs
• Update env example with new settings
• Fix Bedrock temperature handling
• Clean up splash screen display
2025-08-20 23:52:33 +08:00
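A hedged sketch of provider-specific temperature handling; the env var names follow the OPENAI_LLM_* pattern seen elsewhere in this log but are otherwise assumptions:

```python
import os

# Illustrative mapping from LLM binding to its own temperature env var,
# replacing a single global temperature setting.
TEMPERATURE_ENV_VARS = {
    "openai": "OPENAI_LLM_TEMPERATURE",
    "bedrock": "BEDROCK_LLM_TEMPERATURE",
    "ollama": "OLLAMA_LLM_TEMPERATURE",
}

def get_temperature(binding: str, default: float = 1.0) -> float:
    env_var = TEMPERATURE_ENV_VARS.get(binding)
    if env_var and os.getenv(env_var) is not None:
        return float(os.environ[env_var])
    return default

print(get_temperature("openai"))
```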
yangdx
df7bcb1e3d
Add LLM_TIMEOUT configuration for all LLM providers
...
- Add LLM_TIMEOUT env variable
- Apply timeout to all LLM bindings
2025-08-20 23:50:57 +08:00
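A minimal illustration of applying a shared LLM_TIMEOUT to an OpenAI-compatible client; only the env var name comes from the commit, the default value and client wiring are assumptions:

```python
import os
from openai import OpenAI

# LLM_TIMEOUT (seconds) is read once and applied when building the client;
# the default of 180 here is illustrative, not the project's value.
llm_timeout = float(os.getenv("LLM_TIMEOUT", "180"))

client = OpenAI(timeout=llm_timeout)
```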
yangdx
4c556d8aae
Set default TIMEOUT value to 150, and gunicorn timeout to TIMEOUT+30
2025-08-20 22:04:32 +08:00
yangdx
d5e8f1e860
Update default query parameters for better performance
...
- Increase chunk_top_k from 10 to 20
- Reduce max_entity_tokens to 6000
- Reduce max_relation_tokens to 8000
- Update web UI default values
- Fix max_total_tokens to 30000
2025-08-18 19:32:11 +08:00
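The new defaults from this commit, collected into an illustrative settings dict; the key names are descriptive and not necessarily the project's field names:

```python
# Defaults after this change, per the commit body above.
DEFAULT_QUERY_PARAMS = {
    "chunk_top_k": 20,           # increased from 10
    "max_entity_tokens": 6000,   # reduced
    "max_relation_tokens": 8000, # reduced
    "max_total_tokens": 30000,
}
```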
yangdx
da7e4b79e5
Update documentation in README files
2025-08-17 02:23:14 +08:00
yangdx
2a781dfb91
Update Neo4j database naming in env.example
2025-08-15 19:14:38 +08:00
yangdx
6cab68bb47
Improve KG chunk selection documentation and configuration clarity
2025-08-15 10:09:44 +08:00