chore: Remove trailing whitespaces in the project, fix YAMLs (#1980)
## Description

Removes trailing whitespace from all files in the project. Needed by https://github.com/topoteretes/cognee/pull/1979

## Acceptance Criteria

## Type of Change

- [ ] Bug fix (non-breaking change that fixes an issue)
- [ ] New feature (non-breaking change that adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation update
- [ ] Code refactoring
- [ ] Performance improvement
- [ ] Other (please specify):

## Screenshots/Videos (if applicable)

## Pre-submission Checklist

- [ ] **I have tested my changes thoroughly before submitting this PR**
- [ ] **This PR contains minimal changes necessary to address the issue/feature**
- [ ] My code follows the project's coding standards and style guidelines
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have added necessary documentation (if applicable)
- [ ] All new and existing tests pass
- [ ] I have searched existing PRs to ensure this change hasn't been submitted already
- [ ] I have linked any relevant issues in the description
- [ ] My commits have clear and descriptive messages

## DCO Affirmation

I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

## Summary by CodeRabbit

* **New Features**
  * Added `topK` parameter support in search functionality to control result count (1-100).
  * Added Python tool configuration via mise.toml.
* **Documentation**
  * Enhanced issue templates with improved UI metadata, labels, and clearer guidance for bug reports, feature requests, and documentation issues.
  * Expanded CONTRIBUTING.md with comprehensive contribution guidelines and community information.
* **Chores**
  * Removed unused modules: `cognee.modules.retrieval` and `cognee.tasks.temporal_graph`.
  * Applied consistent formatting and whitespace normalization across configuration files, workflows, and documentation.
Commit `fde921ca3e`: 90 changed files with 196 additions and 218 deletions.
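The repo-wide cleanup this PR performs can be sketched with standard tools. The snippet below (assuming GNU sed is available for in-place editing) strips trailing blanks from a sample file and verifies the result:

```shell
# Sketch of the trailing-whitespace strip this commit applies repo-wide
# (assumes GNU sed's -i flag for in-place editing).
tmp=$(mktemp)
printf 'language: en   \nreviews:\t\n' > "$tmp"
sed -i 's/[[:blank:]]*$//' "$tmp"      # delete trailing spaces/tabs
if grep -q '[[:blank:]]$' "$tmp"; then
  echo "still dirty"
else
  echo "clean"
fi
```

In the repo itself the same fix is enforced going forward by the `trailing-whitespace` pre-commit hook enabled in `.pre-commit-config.yaml` (see the hunk further below).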
```diff
@@ -3,7 +3,7 @@
 language: en
 early_access: false
 enable_free_tier: true
 reviews:
   profile: chill
   instructions: >-
     # Code Review Instructions
@@ -118,10 +118,10 @@ reviews:
       - E117
       - D208
     line_length: 100
     dummy_variable_rgx: '^(_.*|junk|extra)$' # Variables starting with '_' or named 'junk' or 'extras' are considered dummy variables
   markdownlint:
     enabled: true
   yamllint:
     enabled: true
 chat:
   auto_reply: true
```
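The `dummy_variable_rgx` pattern `'^(_.*|junk|extra)$'` above is anchored at both ends, so it accepts `_`, any underscore-prefixed name, and the exact names `junk` and `extra`; the plural `extras` does not match. A quick check with `grep -E`:

```shell
# Which names match the dummy-variable pattern? The regex is anchored,
# so only whole-name matches count.
rgx='^(_.*|junk|extra)$'
for name in _ _tmp junk extra extras value; do
  if printf '%s\n' "$name" | grep -Eq "$rgx"; then
    echo "$name: dummy"
  else
    echo "$name: used"
  fi
done
```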
```diff
@@ -28,4 +28,4 @@ secret-scan:
   - path: 'docker-compose.yml'
     comment: 'Development docker compose with test credentials (neo4j/pleaseletmein, postgres cognee/cognee)'
   - path: 'deployment/helm/docker-compose-helm.yml'
     comment: 'Helm deployment docker compose with test postgres credentials (cognee/cognee)'
```
**.github/ISSUE_TEMPLATE/bug_report.yml** (16 changed lines)

```diff
@@ -8,7 +8,7 @@ body:
     attributes:
       value: |
         Thanks for taking the time to fill out this bug report! Please provide a clear and detailed description.

   - type: textarea
     id: description
     attributes:
@@ -17,7 +17,7 @@ body:
       placeholder: Describe the bug in detail...
     validations:
       required: true

   - type: textarea
     id: reproduction
     attributes:
@@ -29,7 +29,7 @@ body:
         3. See error...
     validations:
       required: true

   - type: textarea
     id: expected
     attributes:
@@ -38,7 +38,7 @@ body:
       placeholder: Describe what you expected...
     validations:
       required: true

   - type: textarea
     id: actual
     attributes:
@@ -47,7 +47,7 @@ body:
       placeholder: Describe what actually happened...
     validations:
       required: true

   - type: textarea
     id: environment
     attributes:
@@ -61,7 +61,7 @@ body:
         - Database: [e.g. Neo4j]
     validations:
       required: true

   - type: textarea
     id: logs
     attributes:
@@ -71,7 +71,7 @@ body:
       render: shell
     validations:
       required: false

   - type: textarea
     id: additional
     attributes:
@@ -80,7 +80,7 @@ body:
       placeholder: Any additional information...
     validations:
       required: false

   - type: checkboxes
     id: checklist
     attributes:
```
**.github/ISSUE_TEMPLATE/documentation.yml** (13 changed lines)

```diff
@@ -8,7 +8,7 @@ body:
     attributes:
       value: |
         Thanks for helping improve our documentation! Please provide details about the documentation issue or improvement.

   - type: dropdown
     id: doc-type
     attributes:
@@ -22,7 +22,7 @@ body:
         - New documentation request
     validations:
       required: true

   - type: textarea
     id: location
     attributes:
@@ -31,7 +31,7 @@ body:
       placeholder: https://cognee.ai/docs/... or specific file/section
     validations:
       required: true

   - type: textarea
     id: issue
     attributes:
@@ -40,7 +40,7 @@ body:
       placeholder: The documentation is unclear about...
     validations:
       required: true

   - type: textarea
     id: suggestion
     attributes:
@@ -49,7 +49,7 @@ body:
       placeholder: I suggest changing this to...
     validations:
       required: false

   - type: textarea
     id: additional
     attributes:
@@ -58,7 +58,7 @@ body:
       placeholder: Additional context...
     validations:
       required: false

   - type: checkboxes
     id: checklist
     attributes:
@@ -71,4 +71,3 @@ body:
           required: true
         - label: I have specified the location of the documentation issue
           required: true
-
```
**.github/ISSUE_TEMPLATE/feature_request.yml** (15 changed lines)

```diff
@@ -8,7 +8,7 @@ body:
     attributes:
       value: |
         Thanks for suggesting a new feature! Please provide a clear and detailed description of your idea.

   - type: textarea
     id: problem
     attributes:
@@ -17,7 +17,7 @@ body:
       placeholder: I'm always frustrated when...
     validations:
       required: true

   - type: textarea
     id: solution
     attributes:
@@ -26,7 +26,7 @@ body:
       placeholder: I would like to see...
     validations:
       required: true

   - type: textarea
     id: alternatives
     attributes:
@@ -35,7 +35,7 @@ body:
       placeholder: I have also considered...
     validations:
       required: false

   - type: textarea
     id: use-case
     attributes:
@@ -44,7 +44,7 @@ body:
       placeholder: This feature would help me...
     validations:
       required: true

   - type: textarea
     id: implementation
     attributes:
@@ -53,7 +53,7 @@ body:
       placeholder: This could be implemented by...
     validations:
       required: false

   - type: textarea
     id: additional
     attributes:
@@ -62,7 +62,7 @@ body:
       placeholder: Additional context...
     validations:
       required: false

   - type: checkboxes
     id: checklist
     attributes:
@@ -75,4 +75,3 @@ body:
           required: true
         - label: I have described my specific use case
           required: true
-
```
**.github/actions/setup_neo4j/action.yml** (8 changed lines)

```diff
@@ -34,14 +34,14 @@ runs:
           -e NEO4J_apoc_export_file_enabled=true \
           -e NEO4J_apoc_import_file_enabled=true \
           neo4j:${{ inputs.neo4j-version }}

     - name: Wait for Neo4j to be ready
       shell: bash
       run: |
         echo "Waiting for Neo4j to start..."
         timeout=60
         counter=0

         while [ $counter -lt $timeout ]; do
           if docker exec neo4j-test cypher-shell -u neo4j -p "${{ inputs.neo4j-password }}" "RETURN 1" > /dev/null 2>&1; then
             echo "Neo4j is ready!"
@@ -51,13 +51,13 @@ runs:
           sleep 2
           counter=$((counter + 2))
         done

         if [ $counter -ge $timeout ]; then
           echo "Neo4j failed to start within $timeout seconds"
           docker logs neo4j-test
           exit 1
         fi

     - name: Verify GDS is available
       shell: bash
       run: |
```
**.github/core-team.txt** (2 changed lines)

```diff
@@ -8,5 +8,3 @@ lxobr
 pazone
 siillee
 vasilije1990
-
-
```
**.github/release-drafter.yml** (2 changed lines)

```diff
@@ -3,7 +3,7 @@ tag-template: 'v$NEXT_PATCH_VERSION'

 categories:
   - title: 'Features'
     labels: ['feature', 'enhancement']
   - title: 'Bug Fixes'
     labels: ['bug', 'fix']
   - title: 'Maintenance'
```
```diff
@@ -31,54 +31,54 @@ WORKFLOWS=(
 for workflow in "${WORKFLOWS[@]}"; do
   if [ -f "$workflow" ]; then
     echo "Processing $workflow..."

     # Create a backup
     cp "$workflow" "${workflow}.bak"

     # Check if the file begins with a workflow_call trigger
     if grep -q "workflow_call:" "$workflow"; then
       echo "$workflow already has workflow_call trigger, skipping..."
       continue
     fi

     # Get the content after the 'on:' section
     on_line=$(grep -n "^on:" "$workflow" | cut -d ':' -f1)

     if [ -z "$on_line" ]; then
       echo "Warning: No 'on:' section found in $workflow, skipping..."
       continue
     fi

     # Create a new file with the modified content
     {
       # Copy the part before 'on:'
       head -n $((on_line-1)) "$workflow"

       # Add the new on: section that only includes workflow_call
       echo "on:"
       echo "  workflow_call:"
       echo "    secrets:"
       echo "      inherit: true"

       # Find where to continue after the original 'on:' section
       next_section=$(awk "NR > $on_line && /^[a-z]/ {print NR; exit}" "$workflow")

       if [ -z "$next_section" ]; then
         next_section=$(wc -l < "$workflow")
         next_section=$((next_section+1))
       fi

       # Copy the rest of the file starting from the next section
       tail -n +$next_section "$workflow"
     } > "${workflow}.new"

     # Replace the original with the new version
     mv "${workflow}.new" "$workflow"

     echo "Modified $workflow to only run when called from test-suites.yml"
   else
     echo "Warning: $workflow not found, skipping..."
   fi
 done

 echo "Finished modifying workflows!"
```
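The head/echo/tail splice in the script above can be exercised on its own. This sketch runs the same `grep`/`awk` arithmetic against a throwaway workflow file (the `demo` content is a made-up example):

```shell
# Reproduce the script's splice: swap a workflow's `on:` block for a
# workflow_call-only trigger (file content below is hypothetical).
wf=$(mktemp)
printf 'name: demo\non:\n  push:\njobs:\n  build:\n' > "$wf"
on_line=$(grep -n '^on:' "$wf" | cut -d ':' -f1)
next_section=$(awk "NR > $on_line && /^[a-z]/ {print NR; exit}" "$wf")
{
  head -n $((on_line - 1)) "$wf"        # everything before `on:`
  printf 'on:\n  workflow_call:\n'      # the replacement trigger
  tail -n +"$next_section" "$wf"        # everything from `jobs:` onward
} > "${wf}.new"
cat "${wf}.new"
```

Here `on_line` resolves to 2 and `next_section` to 4 (the `jobs:` line), so the spliced file keeps `name:` and `jobs:` intact around the new trigger.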
**.github/workflows/dockerhub.yml** (2 changed lines)

```diff
@@ -45,4 +45,4 @@ jobs:
           cache-to: type=registry,ref=cognee/cognee:buildcache,mode=max

       - name: Image digest
         run: echo ${{ steps.build.outputs.digest }}
```
**.github/workflows/label-core-team.yml** (2 changed lines)

```diff
@@ -72,5 +72,3 @@ jobs:
             } catch (error) {
               core.warning(`Failed to add label: ${error.message}`);
             }
-
-
```
**.github/workflows/load_tests.yml** (2 changed lines)

```diff
@@ -66,5 +66,3 @@ jobs:
           AWS_ACCESS_KEY_ID: ${{ secrets.AWS_S3_DEV_USER_KEY_ID }}
           AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_S3_DEV_USER_SECRET_KEY }}
         run: uv run python ./cognee/tests/test_load.py
-
-
```
**.github/workflows/pre_test.yml** (2 changed lines)

```diff
@@ -17,6 +17,6 @@ jobs:
         uses: astral-sh/setup-uv@v4
         with:
           enable-cache: true

       - name: Validate uv lockfile and project metadata
         run: uv lock --check || { echo "'uv lock --check' failed."; echo "Run 'uv lock' and push your changes."; exit 1; }
```
**.github/workflows/release.yml** (26 changed lines)

```diff
@@ -42,10 +42,10 @@ jobs:

          echo "tag=${TAG}" >> "$GITHUB_OUTPUT"
          echo "version=${VERSION}" >> "$GITHUB_OUTPUT"

          git tag "${TAG}"
          git push origin "${TAG}"


      - name: Create GitHub Release
        uses: softprops/action-gh-release@v2
@@ -54,8 +54,8 @@ jobs:
          generate_release_notes: true
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  release-pypi-package:
    needs: release-github
    name: Release PyPI Package from ${{ inputs.flavour }}
    permissions:
@@ -67,25 +67,25 @@ jobs:
        uses: actions/checkout@v4
        with:
          ref: ${{ inputs.flavour }}

      - name: Install uv
        uses: astral-sh/setup-uv@v7

      - name: Install Python
        run: uv python install

      - name: Install dependencies
        run: uv sync --locked --all-extras

      - name: Build distributions
        run: uv build

      - name: Publish ${{ inputs.flavour }} release to PyPI
        env:
          UV_PUBLISH_TOKEN: ${{ secrets.PYPI_TOKEN }}
        run: uv publish

  release-docker-image:
    needs: release-github
    name: Release Docker Image from ${{ inputs.flavour }}
    permissions:
@@ -128,7 +128,7 @@ jobs:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            cognee/cognee:${{ needs.release-github.outputs.version }}
            cognee/cognee:latest
          labels: |
@@ -163,4 +163,4 @@ jobs:
            -H "Authorization: Bearer ${{ secrets.REPO_DISPATCH_PAT_TOKEN }}" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            https://api.github.com/repos/topoteretes/cognee-community/dispatches \
            -d '{"event_type":"new-main-release","client_payload":{"caller_repo":"'"${GITHUB_REPOSITORY}"'"}}'
```
**.github/workflows/release_test.yml** (1 changed line)

```diff
@@ -15,4 +15,3 @@ jobs:
     name: Load Tests
     uses: ./.github/workflows/load_tests.yml
     secrets: inherit
-
```
```diff
@@ -10,7 +10,7 @@ on:
       required: false
       type: string
       default: '["3.10.x", "3.12.x", "3.13.x"]'
     os:
       required: false
       type: string
       default: '["ubuntu-22.04", "macos-15", "windows-latest"]'
```
**.github/workflows/test_llms.yml** (2 changed lines)

```diff
@@ -173,4 +173,4 @@ jobs:
           EMBEDDING_MODEL: "amazon.titan-embed-text-v2:0"
           EMBEDDING_DIMENSIONS: "1024"
           EMBEDDING_MAX_TOKENS: "8191"
         run: uv run python ./examples/python/simple_example.py
```
**.github/workflows/test_suites.yml** (4 changed lines)

```diff
@@ -18,11 +18,11 @@ env:
   RUNTIME__LOG_LEVEL: ERROR
   ENV: 'dev'

 jobs:
   pre-test:
     name: basic checks
     uses: ./.github/workflows/pre_test.yml

   basic-tests:
     name: Basic Tests
     uses: ./.github/workflows/basic_tests.yml
```
```diff
@@ -6,4 +6,4 @@ pull_request_rules:
     actions:
       backport:
         branches:
           - main
```
```diff
@@ -7,6 +7,7 @@ repos:
       - id: trailing-whitespace
       - id: end-of-file-fixer
       - id: check-yaml
+        exclude: ^deployment/helm/templates/
       - id: check-added-large-files
   - repo: https://github.com/astral-sh/ruff-pre-commit
     # Ruff version.
```
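The `trailing-whitespace` hook listed above both detects and fixes offending lines; the detection half boils down to a `grep` like this sketch (the real hook, typically provided by the standard pre-commit-hooks repo, also rewrites the file):

```shell
# Flag lines ending in a space or tab, as the trailing-whitespace
# pre-commit hook does (sketch of the check only, not the auto-fix).
tmp=$(mktemp)
printf 'clean line\ndirty line \n' > "$tmp"
grep -n '[[:blank:]]$' "$tmp" && echo "hook would rewrite this file"
```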
```diff
@@ -128,5 +128,3 @@ MCP server and Frontend:
 ## CI Mirrors Local Commands

 Our GitHub Actions run the same ruff checks and pytest suites shown above (`.github/workflows/basic_tests.yml` and related workflows). Use the commands in this document locally to minimize CI surprises.
-
-
```
````diff
@@ -1,16 +1,16 @@
 > [!IMPORTANT]
 > **Note for contributors:** When branching out, create a new branch from the `dev` branch.

 # 🎉 Welcome to **cognee**!

 We're excited that you're interested in contributing to our project!
 We want to ensure that every user and contributor feels welcome, included and supported to participate in cognee community.
 This guide will help you get started and ensure your contributions can be efficiently integrated into the project.

 ## 🌟 Quick Links

 - [Code of Conduct](CODE_OF_CONDUCT.md)
 - [Discord Community](https://discord.gg/bcy8xFAtfd)
 - [Issue Tracker](https://github.com/topoteretes/cognee/issues)
 - [Cognee Docs](https://docs.cognee.ai)
@@ -106,7 +106,7 @@ Make sure to run `uv sync` in the root cloned folder or set up a virt
 ```shell
 python cognee/cognee/examples/python/simple_example.py
 ```
 or

 ```shell
 uv run python cognee/cognee/examples/python/simple_example.py
````
**README.md** (12 changed lines)

````diff
@@ -65,12 +65,12 @@ Use your data to build personalized and dynamic memory for AI Agents. Cognee let

 ## About Cognee

 Cognee is an open-source tool and platform that transforms your raw data into persistent and dynamic AI memory for Agents. It combines vector search with graph databases to make your documents both searchable by meaning and connected by relationships.

 You can use Cognee in two ways:

 1. [Self-host Cognee Open Source](https://docs.cognee.ai/getting-started/installation), which stores all data locally by default.
 2. [Connect to Cognee Cloud](https://platform.cognee.ai/), and get the same OSS stack on managed infrastructure for easier development and productionization.

 ### Cognee Open Source (self-hosted):
@@ -81,8 +81,8 @@ You can use Cognee in two ways:
 - Offers high customizability through user-defined tasks, modular pipelines, and built-in search endpoints

 ### Cognee Cloud (managed):
 - Hosted web UI dashboard
 - Automatic version updates
 - Resource usage analytics
 - GDPR compliant, enterprise-grade security
@@ -119,7 +119,7 @@ To integrate other LLM providers, see our [LLM Provider Documentation](https://d

 ### Step 3: Run the Pipeline

 Cognee will take your documents, generate a knowledge graph from them and then query the graph based on combined relationships.

 Now, run a minimal pipeline:

@@ -157,7 +157,7 @@ As you can see, the output is generated from the document we previously stored i
 Cognee turns documents into AI memory.
 ```

 ### Use the Cognee CLI

 As an alternative, you can get started with these essential commands:
````
|
|
@@ -1 +1 @@
Generic single-database configuration with an async dbapi.

@@ -43,10 +43,10 @@ Saiba mais sobre os [casos de uso](https://docs.cognee.ai/use-cases) e [avaliaç

## Features

- Connect and retrieve your past conversations, documents, images, and audio transcriptions
- Reduce hallucinations, developer effort, and cost
- Load data into graph and vector databases using only Pydantic
- Transform and organize your data while ingesting it from more than 30 different sources

## Getting Started

@@ -108,7 +108,7 @@ if __name__ == '__main__':
Example output:
```
Natural Language Processing (NLP) is an interdisciplinary and transdisciplinary field involving computer science and information retrieval. It focuses on the interaction between computers and human language, enabling machines to understand and process natural language.

```

Graph visualization:

@@ -141,7 +141,7 @@ if __name__ == '__main__':
2. A simple GraphRAG demo
[Video](https://github.com/user-attachments/assets/d80b0776-4eb9-4b8e-aa22-3691e2d44b8f)

3. Cognee with Ollama
[Video](https://github.com/user-attachments/assets/8621d3e8-ecb8-4860-afb2-5594f2ee17db)

## Code of Conduct

@@ -114,7 +114,7 @@ if __name__ == '__main__':
Example output:
```
Natural Language Processing (NLP) is an interdisciplinary field of computer science and information retrieval. It focuses on the interaction between computers and human language, enabling machines to understand and process natural language.

```
Graph visualization:
<a href="https://rawcdn.githack.com/topoteretes/cognee/refs/heads/main/assets/graph_visualization.html"><img src="https://rawcdn.githack.com/topoteretes/cognee/refs/heads/main/assets/graph_visualization.png" width="100%" alt="Graph visualization"></a>

@@ -34,4 +34,4 @@
    "tailwindcss": "^4.1.7",
    "typescript": "^5"
  }
}

@@ -55,7 +55,7 @@ export default function CogneeAddWidget({ onData, useCloud = false }: CogneeAddW
    setTrue: setProcessingFilesInProgress,
    setFalse: setProcessingFilesDone,
  } = useBoolean(false);

  const handleAddFiles = (dataset: Dataset, event: ChangeEvent<HTMLInputElement>) => {
    event.stopPropagation();

@@ -111,7 +111,7 @@ export default function GraphControls({ data, isAddNodeFormOpen, onGraphShapeCha

  const [isAuthShapeChangeEnabled, setIsAuthShapeChangeEnabled] = useState(true);
  const shapeChangeTimeout = useRef<number | null>(null);

  useEffect(() => {
    onGraphShapeChange(DEFAULT_GRAPH_SHAPE);

@@ -57,7 +57,7 @@ export default function GraphVisualization({ ref, data, graphControls, className
    // Initial size calculation
    handleResize();

    // ResizeObserver
    const resizeObserver = new ResizeObserver(() => {
      handleResize();
    });

@@ -216,7 +216,7 @@ export default function GraphVisualization({ ref, data, graphControls, className
  }, [data, graphRef]);

  const [graphShape, setGraphShape] = useState<string>();

  const zoomToFit: ForceGraphMethods["zoomToFit"] = (
    durationMs?: number,
    padding?: number,

@@ -227,15 +227,15 @@ export default function GraphVisualization({ ref, data, graphControls, className
      // eslint-disable-next-line @typescript-eslint/no-explicit-any
      return undefined as any;
    }

    return graphRef.current.zoomToFit?.(durationMs, padding, nodeFilter);
  };

  useImperativeHandle(ref, () => ({
    zoomToFit,
    setGraphShape,
  }));

  return (
    <div ref={containerRef} className={classNames("w-full h-full", className)} id="graph-container">

@@ -1373,4 +1373,4 @@
"padding": 20
}
}
}

@@ -134,7 +134,7 @@ export default function DatasetsAccordion({
  } = useBoolean(false);

  const [datasetToRemove, setDatasetToRemove] = useState<Dataset | null>(null);

  const handleDatasetRemove = (dataset: Dataset) => {
    setDatasetToRemove(dataset);
    openRemoveDatasetModal();

@@ -45,7 +45,7 @@ export default function Plan() {
      <div className="bg-white rounded-xl px-5 py-5 mb-2">
        Affordable and transparent pricing
      </div>

      <div className="grid grid-cols-3 gap-x-2.5">
        <div className="pt-13 py-4 px-5 mb-2.5 rounded-tl-xl rounded-tr-xl bg-white h-full">
          <div>Basic</div>

@@ -40,7 +40,7 @@ export default function useChat(dataset: Dataset) {
    setTrue: disableSearchRun,
    setFalse: enableSearchRun,
  } = useBoolean(false);

  const refreshChat = useCallback(async () => {
    const data = await fetchMessages();
    return setMessages(data);

@@ -46,7 +46,7 @@ function useDatasets(useCloud = false) {
  // checkDatasetStatuses(datasets);
  // }, 50000);
  // }, [fetchDatasetStatuses]);

  // useEffect(() => {
  // return () => {
  // if (statusTimeout.current !== null) {

@@ -7,7 +7,7 @@ export default function createNotebook(notebookName: string, instance: CogneeIns
    headers: {
      "Content-Type": "application/json",
    },
  }).then((response: Response) =>
    response.ok ? response.json() : Promise.reject(response)
  );
}

@@ -6,7 +6,7 @@ export default function getNotebooks(instance: CogneeInstance) {
    headers: {
      "Content-Type": "application/json",
    },
  }).then((response: Response) =>
    response.ok ? response.json() : Promise.reject(response)
  );
}

@@ -7,7 +7,7 @@ export default function saveNotebook(notebookId: string, notebookData: object, i
    headers: {
      "Content-Type": "application/json",
    },
  }).then((response: Response) =>
    response.ok ? response.json() : Promise.reject(response)
  );
}

@@ -7,4 +7,4 @@ export default function GitHubIcon({ width = 24, height = 24, color = 'currentCo
      </g>
    </svg>
  );
}

@@ -46,7 +46,7 @@ export default function Header({ user }: HeaderProps) {

    checkMCPConnection();
    const interval = setInterval(checkMCPConnection, 30000);

    return () => clearInterval(interval);
  }, [setMCPConnected, setMCPDisconnected]);

@@ -90,7 +90,7 @@ export default function SearchView() {
    scrollToBottom();

    setSearchInputValue("");

    // Pass topK to sendMessage
    sendMessage(chatInput, searchType, topK)
      .then(scrollToBottom)

@@ -171,4 +171,4 @@ export default function SearchView() {
      </form>
    </div>
  );
}

@@ -1,3 +1,2 @@
export { default as Modal } from "./Modal";
export { default as useModal } from "./useModal";

@@ -74,4 +74,3 @@ function MarkdownPreview({ content, className = "" }: MarkdownPreviewProps) {
}

export default memo(MarkdownPreview);

@@ -534,7 +534,7 @@ function transformInsightsGraphData(triplets: Triplet[]) {
      target: string,
      label: string,
    }
  } = {};

  for (const triplet of triplets) {
    nodes[triplet[0].id] = {

@@ -34,8 +34,8 @@ export default function TextArea({
    // Cache maxHeight on first calculation
    if (maxHeightRef.current === null) {
      const computedStyle = getComputedStyle(textarea);
      maxHeightRef.current = computedStyle.maxHeight === "none"
        ? Infinity
        : parseInt(computedStyle.maxHeight) || Infinity;
    }

@@ -10,4 +10,4 @@ export { default as NeutralButton } from "./NeutralButton";
export { default as StatusIndicator } from "./StatusIndicator";
export { default as StatusDot } from "./StatusDot";
export { default as Accordion } from "./Accordion";
export { default as Notebook } from "./Notebook";

@@ -57,7 +57,7 @@ export default async function fetch(url: string, options: RequestInit = {}, useC
        new Error("Backend server is not responding. Please check if the server is running.")
      );
    }

    if (error.detail === undefined) {
      return Promise.reject(
        new Error("No connection to the server.")

@@ -74,7 +74,7 @@ export default async function fetch(url: string, options: RequestInit = {}, useC
fetch.checkHealth = async () => {
  const maxRetries = 5;
  const retryDelay = 1000; // 1 second

  for (let i = 0; i < maxRetries; i++) {
    try {
      const response = await global.fetch(`${backendApiUrl.replace("/api", "")}/health`);

@@ -90,7 +90,7 @@ fetch.checkHealth = async () => {
      await new Promise(resolve => setTimeout(resolve, retryDelay));
    }
  }

  throw new Error("Backend server is not responding after multiple attempts");
};

@@ -105,14 +105,14 @@ If you'd rather run cognee-mcp in a container, you have two options:
```bash
# For HTTP transport (recommended for web deployments)
docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# For SSE transport
docker run -e TRANSPORT_MODE=sse --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# For stdio transport (default)
docker run -e TRANSPORT_MODE=stdio --env-file ./.env --rm -it cognee/cognee-mcp:main
```

**Installing optional dependencies at runtime:**

You can install optional dependencies when running the container by setting the `EXTRAS` environment variable:
```bash
# Install a single optional dependency group at runtime

@@ -122,7 +122,7 @@ If you'd rather run cognee-mcp in a container, you have two options:
  --env-file ./.env \
  -p 8000:8000 \
  --rm -it cognee/cognee-mcp:main

# Install multiple optional dependency groups at runtime (comma-separated)
docker run \
  -e TRANSPORT_MODE=sse \

@@ -131,7 +131,7 @@ If you'd rather run cognee-mcp in a container, you have two options:
  -p 8000:8000 \
  --rm -it cognee/cognee-mcp:main
```

**Available optional dependency groups:**
- `aws` - S3 storage support
- `postgres` / `postgres-binary` - PostgreSQL database support

@@ -160,7 +160,7 @@ If you'd rather run cognee-mcp in a container, you have two options:
# With stdio transport (default)
docker run -e TRANSPORT_MODE=stdio --env-file ./.env --rm -it cognee/cognee-mcp:main
```

**With runtime installation of optional dependencies:**
```bash
# Install optional dependencies from Docker Hub image

@@ -357,7 +357,7 @@ You can configure both transports simultaneously for testing:
      "url": "http://localhost:8000/sse"
    },
    "cognee-http": {
      "type": "http",
      "url": "http://localhost:8000/mcp"
    }
  }

@@ -7,11 +7,11 @@ echo "Environment: $ENVIRONMENT"
# Install optional dependencies if EXTRAS is set
if [ -n "$EXTRAS" ]; then
  echo "Installing optional dependencies: $EXTRAS"

  # Get the cognee version that's currently installed
  COGNEE_VERSION=$(uv pip show cognee | grep "Version:" | awk '{print $2}')
  echo "Current cognee version: $COGNEE_VERSION"

  # Build the extras list for cognee
  IFS=',' read -ra EXTRA_ARRAY <<< "$EXTRAS"
  # Combine base extras from pyproject.toml with requested extras

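The comma-splitting step above can be sketched in isolation. This is a minimal, self-contained illustration of the same `IFS` split followed by a re-join into the bracketed extras list the entrypoint later passes to `uv pip install`; the `EXTRAS` value and version below are hypothetical, and the real script also merges base extras read from pyproject.toml:

```shell
#!/usr/bin/env bash
# Sketch: split a comma-separated EXTRAS value, then rebuild it as the
# "cognee[extra1,extra2]==VERSION" install target used by the entrypoint.
EXTRAS="postgres,aws"      # hypothetical input
COGNEE_VERSION="0.1.0"     # hypothetical version

# Split on commas into an array.
IFS=',' read -ra EXTRA_ARRAY <<< "$EXTRAS"

# Re-join the array elements with commas.
ALL_EXTRAS=""
for extra in "${EXTRA_ARRAY[@]}"; do
  if [ -z "$ALL_EXTRAS" ]; then
    ALL_EXTRAS="$extra"
  else
    ALL_EXTRAS="$ALL_EXTRAS,$extra"
  fi
done

echo "cognee[$ALL_EXTRAS]==$COGNEE_VERSION"   # prints: cognee[postgres,aws]==0.1.0
```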
@@ -28,11 +28,11 @@ if [ -n "$EXTRAS" ]; then
      fi
    fi
  done

  echo "Installing cognee with extras: $ALL_EXTRAS"
  echo "Running: uv pip install 'cognee[$ALL_EXTRAS]==$COGNEE_VERSION'"
  uv pip install "cognee[$ALL_EXTRAS]==$COGNEE_VERSION"

  # Verify installation
  echo ""
  echo "✓ Optional dependencies installation completed"

@@ -93,19 +93,19 @@ if [ -n "$API_URL" ]; then
  if echo "$API_URL" | grep -q "localhost" || echo "$API_URL" | grep -q "127.0.0.1"; then
    echo "⚠️ Warning: API_URL contains localhost/127.0.0.1"
    echo " Original: $API_URL"

    # Try to use host.docker.internal (works on Mac/Windows and recent Linux with Docker Desktop)
    FIXED_API_URL=$(echo "$API_URL" | sed 's/localhost/host.docker.internal/g' | sed 's/127\.0\.0\.1/host.docker.internal/g')

    echo " Converted to: $FIXED_API_URL"
    echo " This will work on Mac/Windows/Docker Desktop."
    echo " On Linux without Docker Desktop, you may need to:"
    echo " - Use --network host, OR"
    echo " - Set API_URL=http://172.17.0.1:8000 (Docker bridge IP)"

    API_URL="$FIXED_API_URL"
  fi

  API_ARGS="--api-url $API_URL"
  if [ -n "$API_TOKEN" ]; then
    API_ARGS="$API_ARGS --api-token $API_TOKEN"

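The localhost rewrite in the hunk above is a plain `sed` substitution; a minimal sketch of just that step, runnable without Docker (the `API_URL` value is hypothetical):

```shell
#!/usr/bin/env bash
# Sketch: rewrite localhost/127.0.0.1 in API_URL to host.docker.internal,
# mirroring the entrypoint's conversion for containers under Docker Desktop.
API_URL="http://127.0.0.1:8000"   # hypothetical value

FIXED_API_URL=$(echo "$API_URL" \
  | sed 's/localhost/host.docker.internal/g' \
  | sed 's/127\.0\.0\.1/host.docker.internal/g')

echo "$FIXED_API_URL"   # prints: http://host.docker.internal:8000
```

Note that the dots in `127.0.0.1` are escaped in the second `sed` pattern so they match literal dots rather than any character.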
@@ -16,4 +16,4 @@ EMBEDDING_API_VERSION=""


GRAPHISTRY_USERNAME=""
GRAPHISTRY_PASSWORD=""

@@ -14,7 +14,7 @@ This starter kit is deprecated. Its examples have been integrated into the `/new
# Cognee Starter Kit
Welcome to the <a href="https://github.com/topoteretes/cognee">cognee</a> Starter Repo! This repository is designed to help you get started quickly by providing a structured dataset and pre-built data pipelines using cognee to build powerful knowledge graphs.

You can use this repo to ingest, process, and visualize data in minutes.

By following this guide, you will:

@@ -80,7 +80,7 @@ Custom model uses custom pydantic model for graph extraction. This script catego
python src/pipelines/custom-model.py
```

## Graph preview

cognee provides a visualize_graph function that will render the graph for you.

@@ -2,4 +2,4 @@
# Example:
# CORS_ALLOWED_ORIGINS="https://yourdomain.com,https://another.com"
# For local development, you might use:
# CORS_ALLOWED_ORIGINS="http://localhost:3000"

@@ -71,7 +71,7 @@ def get_sync_router() -> APIRouter:
        -H "Content-Type: application/json" \\
        -H "Cookie: auth_token=your-token" \\
        -d '{"dataset_ids": ["123e4567-e89b-12d3-a456-426614174000", "456e7890-e12b-34c5-d678-901234567000"]}'

    # Sync all user datasets (empty request body or null dataset_ids)
    curl -X POST "http://localhost:8000/api/v1/sync" \\
        -H "Content-Type: application/json" \\

@@ -88,7 +88,7 @@ def get_sync_router() -> APIRouter:
    - **413 Payload Too Large**: Dataset too large for current cloud plan
    - **429 Too Many Requests**: Rate limit exceeded

    ## Notes
    - Sync operations run in the background - you get an immediate response
    - Use the returned run_id to track progress (status API coming soon)
    - Large datasets are automatically chunked for efficient transfer

@@ -179,7 +179,7 @@ def get_sync_router() -> APIRouter:
    ```

    ## Example Responses

    **No running syncs:**
    ```json
    {

@@ -21,7 +21,7 @@ binary streams, then stores them in a specified dataset for further processing.

Supported Input Types:
- **Text strings**: Direct text content
- **File paths**: Local file paths (absolute paths starting with "/")
- **File URLs**: "file:///absolute/path" or "file://relative/path"
- **S3 paths**: "s3://bucket-name/path/to/file"
- **Lists**: Multiple files or text strings in a single call

@@ -17,7 +17,7 @@ The `cognee config` command allows you to view and modify configuration settings

You can:
- View all current configuration settings
- Get specific configuration values
- Set configuration values
- Unset (reset to default) specific configuration values
- Reset all configuration to defaults

@@ -290,7 +290,7 @@ class NeptuneAnalyticsAdapter(NeptuneGraphDB, VectorDBInterface):
        query_string = f"""
        CALL neptune.algo.vectors.topKByEmbeddingWithFiltering({{
            topK: {limit},
            embedding: {embedding},
            nodeFilter: {{ equals: {{property: '{self._COLLECTION_PREFIX}', value: '{collection_name}'}} }}
        }}
        )

@@ -299,7 +299,7 @@ class NeptuneAnalyticsAdapter(NeptuneGraphDB, VectorDBInterface):

        if with_vector:
            query_string += """
            WITH node, score, id(node) as node_id
            MATCH (n)
            WHERE id(n) = id(node)
            CALL neptune.algo.vectors.get(n)

@@ -10,4 +10,4 @@ Extraction rules:
5. Current-time references ("now", "current", "today"): If the query explicitly refers to the present, set both starts_at and ends_at to now (the ingestion timestamp).
6. "Who is" and "Who was" questions: These imply a general identity or biographical inquiry without a specific temporal scope. Set both starts_at and ends_at to None.
7. Ordering rule: Always ensure the earlier date is assigned to starts_at and the later date to ends_at.
8. No temporal information: If no valid or inferable time reference is found, set both starts_at and ends_at to None.

@@ -22,4 +22,4 @@ The `attributes` should be a list of dictionaries, each containing:
- Relationships should be technical with one or at most two words. If two words, use underscore camelcase style
- Relationships could imply general meaning like: subject, object, participant, recipient, agent, instrument, tool, source, cause, effect, purpose, manner, resource, etc.
- You can combine two words to form a relationship name: subject_role, previous_owner, etc.
- Focus on how the entity specifically relates to the event

@@ -27,4 +27,4 @@ class Event(BaseModel):
    time_from: Optional[Timestamp] = None
    time_to: Optional[Timestamp] = None
    location: Optional[str] = None
```

@@ -2,12 +2,12 @@ You are an expert query analyzer for a **GraphRAG system**. Your primary goal is

Here are the available `SearchType` tools and their specific functions:

- **`SUMMARIES`**: The `SUMMARIES` search type retrieves summarized information from the knowledge graph.

**Best for:**

- Getting concise overviews of topics
- Summarizing large amounts of information
- Quick understanding of complex subjects

**Best for:**

@@ -16,7 +16,7 @@ Here are the available `SearchType` tools and their specific functions:

- Understanding relationships between concepts
- Exploring the structure of your knowledge graph

* **`CHUNKS`**: The `CHUNKS` search type retrieves specific facts and information chunks from the knowledge graph.

**Best for:**

@@ -122,4 +122,4 @@ Response: `NATURAL_LANGUAGE`

Your response MUST be a single word, consisting of only the chosen `SearchType` name. Do not provide any explanation.
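The single-word contract above is easy to enforce on the consumer side. A hypothetical validator sketch (the real `SearchType` enum lives in cognee and may have more members than the ones quoted in this diff):

```python
# Assumed member names, taken from the prompt text above; "INSIGHTS" and the
# exact set are an assumption, not confirmed by this diff.
VALID_SEARCH_TYPES = {"SUMMARIES", "INSIGHTS", "CHUNKS", "NATURAL_LANGUAGE"}


def parse_search_type(response: str) -> str:
    """Strip whitespace/backticks and reject anything but a bare SearchType name."""
    cleaned = response.strip().strip("`")
    if cleaned not in VALID_SEARCH_TYPES:
        raise ValueError(f"Not a valid SearchType response: {response!r}")
    return cleaned
```

This rejects explanatory replies like "use CHUNKS" while tolerating incidental whitespace or code formatting around the name.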
@ -1 +1 @@
|
||||||
Respond with: test
|
Respond with: test
|
||||||
|
|
|
||||||
|
|
@@ -105,4 +105,3 @@
            }
        }
    }

@@ -973,4 +973,4 @@
        "python_version": null,
        "pep_status": null
    }
]
@@ -76,4 +76,4 @@ Section: Open Questions or TODOs
Create a checklist of unresolved decisions, logic that needs clarification, or tasks that are still pending.

Section: Last Updated
Include the most recent update date and who made the update.
@@ -72,4 +72,3 @@ profile = "black"
- E501: line too long -> break with parentheses
- E225: missing whitespace around operator
- E402: module import not at top of file

@@ -72,4 +72,3 @@ Use modules/packages to separate concerns; avoid wildcard imports.
- Is this the simplest working solution?
- Are errors explicit and logged?
- Are modules/namespaces used appropriately?

@@ -1 +0,0 @@

@@ -1 +0,0 @@
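The E501 guidance above ("break with parentheses") looks like this in practice; an illustrative snippet, not code from the diff:

```python
# Overlong expression broken with parentheses instead of backslashes (E501 fix);
# implicit string concatenation inside the parentheses keeps the line short,
# and spaces around operators satisfy E225.
def describe(name: str, role: str, team: str) -> str:
    summary = (
        f"{name} works as a {role} "
        f"on the {team} team"
    )
    return summary
```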
@@ -46,10 +46,10 @@ async def test_textdocument_cleanup_with_sql():

    # Step 1: Add and cognify a test document
    dataset_name = "test_cleanup_dataset"
    test_text = """
    Machine learning is a subset of artificial intelligence that enables systems to learn
    and improve from experience without being explicitly programmed. Deep learning uses
    neural networks with multiple layers to process data.
    """

    await setup()
@@ -47,20 +47,20 @@ async def main():

    # Test data
    text_1 = """
    Apple Inc. is an American multinational technology company that specializes in consumer electronics,
    software, and online services. Apple is the world's largest technology company by revenue and,
    since January 2021, the world's most valuable company.
    """

    text_2 = """
    Microsoft Corporation is an American multinational technology corporation which produces computer software,
    consumer electronics, personal computers, and related services. Its best known software products are the
    Microsoft Windows line of operating systems and the Microsoft Office suite.
    """

    text_3 = """
    Google LLC is an American multinational technology company that specializes in Internet-related services and products,
    which include online advertising technologies, search engine, cloud computing, software, and hardware. Google has been
    referred to as the most powerful company in the world and one of the world's most valuable brands.
    """
@@ -19,7 +19,7 @@ Clone the Repository Clone this repository to your local machine and navigate t
helm install cognee ./cognee-chart
```

**Uninstall Helm Release**:
```bash
helm uninstall cognee
```
@@ -43,4 +43,3 @@ networks:

volumes:
  postgres_data:

@@ -11,4 +11,3 @@ spec:
      targetPort: {{ .Values.postgres.port }}
  selector:
    app: {{ .Release.Name }}-postgres
@@ -3,4 +3,4 @@ numpy==1.26.4
matplotlib==3.10.0
seaborn==0.13.2
scipy==1.11.4
pathlib
@@ -34,4 +34,4 @@ What began as an online bookstore has grown into one of the largest e-commerce p
Meta, originally known as Facebook, revolutionized social media by connecting billions of people worldwide. Beyond its core social networking service, Meta is investing in the next generation of digital experiences through virtual and augmented reality technologies, with projects like Oculus. The company's efforts signal a commitment to evolving digital interaction and building the metaverse—a shared virtual space where users can connect and collaborate.

Each of these companies has significantly impacted the technology landscape, driving innovation and transforming everyday life through their groundbreaking products and services.
"""
@@ -63,10 +63,10 @@ async def main():
    traversals.
    """

    sample_text_2 = """Neptune Analytics is an ideal choice for investigatory, exploratory, or data-science workloads
    that require fast iteration for data, analytical and algorithmic processing, or vector search on graph data. It
    complements Amazon Neptune Database, a popular managed graph database. To perform intensive analysis, you can load
    the data from a Neptune Database graph or snapshot into Neptune Analytics. You can also load graph data that's
    stored in Amazon S3.
    """
@@ -165,8 +165,8 @@ async def main():
    // If a stored preference exists and it does not match the new value,
    // raise an error using APOC's utility procedure.
    CALL apoc.util.validate(
        preference IS NOT NULL AND preference.value <> new_size,
        "Conflicting shoe size preference: existing size is " + preference.value + " and new size is " + new_size,
        []
    )
@@ -35,16 +35,16 @@ biography_1 = """

biography_2 = """
Arnulf Øverland Ole Peter Arnulf Øverland ( 27 April 1889 – 25 March 1968 ) was a Norwegian poet and artist . He is principally known for his poetry which served to inspire the Norwegian resistance movement during the German occupation of Norway during World War II .

Biography .
Øverland was born in Kristiansund and raised in Bergen . His parents were Peter Anton Øverland ( 1852–1906 ) and Hanna Hage ( 1854–1939 ) . The early death of his father , left the family economically stressed . He was able to attend Bergen Cathedral School and in 1904 Kristiania Cathedral School . He graduated in 1907 and for a time studied philology at University of Kristiania . Øverland published his first collection of poems ( 1911 ) .

Øverland became a communist sympathizer from the early 1920s and became a member of Mot Dag . He also served as chairman of the Norwegian Students Society 1923–28 . He changed his stand in 1937 , partly as an expression of dissent against the ongoing Moscow Trials . He was an avid opponent of Nazism and in 1936 he wrote the poem Du må ikke sove which was printed in the journal Samtiden . It ends with . ( I thought: : Something is imminent . Our era is over – Europe’s on fire! ) . Probably the most famous line of the poem is ( You mustnt endure so well the injustice that doesnt affect you yourself! )

During the German occupation of Norway from 1940 in World War II , he wrote to inspire the Norwegian resistance movement . He wrote a series of poems which were clandestinely distributed , leading to the arrest of both him and his future wife Margrete Aamot Øverland in 1941 . Arnulf Øverland was held first in the prison camp of Grini before being transferred to Sachsenhausen concentration camp in Germany . He spent a four-year imprisonment until the liberation of Norway in 1945 . His poems were later collected in Vi overlever alt and published in 1945 .

Øverland played an important role in the Norwegian language struggle in the post-war era . He became a noted supporter for the conservative written form of Norwegian called Riksmål , he was president of Riksmålsforbundet ( an organization in support of Riksmål ) from 1947 to 1956 . In addition , Øverland adhered to the traditionalist style of writing , criticising modernist poetry on several occasions . His speech Tungetale fra parnasset , published in Arbeiderbladet in 1954 , initiated the so-called Glossolalia debate .

Personal life .
In 1918 he had married the singer Hildur Arntzen ( 1888–1957 ) . Their marriage was dissolved in 1939 . In 1940 , he married Bartholine Eufemia Leganger ( 1903–1995 ) . They separated shortly after , and were officially divorced in 1945 . Øverland was married to journalist Margrete Aamot Øverland ( 1913–1978 ) during June 1945 . In 1946 , the Norwegian Parliament arranged for Arnulf and Margrete Aamot Øverland to reside at the Grotten . He lived there until his death in 1968 and she lived there for another ten years until her death in 1978 . Arnulf Øverland was buried at Vår Frelsers Gravlund in Oslo . Joseph Grimeland designed the bust of Arnulf Øverland ( bronze , 1970 ) at his grave site .

@@ -56,7 +56,7 @@ biography_2 = """
- Vi overlever alt ( 1945 )
- Sverdet bak døren ( 1956 )
- Livets minutter ( 1965 )

Awards .
- Gyldendals Endowment ( 1935 )
- Dobloug Prize ( 1951 )
@@ -14,7 +14,7 @@
.nodes circle { stroke: white; stroke-width: 0.5px; filter: drop-shadow(0 0 5px rgba(255,255,255,0.3)); }
.node-label { font-size: 5px; font-weight: bold; fill: white; text-anchor: middle; dominant-baseline: middle; font-family: 'Inter', sans-serif; pointer-events: none; }
.edge-label { font-size: 3px; fill: rgba(255, 255, 255, 0.7); text-anchor: middle; dominant-baseline: middle; font-family: 'Inter', sans-serif; pointer-events: none; }

.tooltip {
    position: absolute;
    text-align: left;
@@ -76,7 +76,7 @@
// Create tooltip content for edge
var content = "<strong>Edge Information</strong><br/>";
content += "Relationship: " + d.relation + "<br/>";

// Show all weights
if (d.all_weights && Object.keys(d.all_weights).length > 0) {
    content += "<strong>Weights:</strong><br/>";
@@ -86,23 +86,23 @@
} else if (d.weight !== null && d.weight !== undefined) {
    content += "Weight: " + d.weight + "<br/>";
}

if (d.relationship_type) {
    content += "Type: " + d.relationship_type + "<br/>";
}

// Add other edge properties
if (d.edge_info) {
    Object.keys(d.edge_info).forEach(function(key) {
        if (key !== 'weight' && key !== 'weights' && key !== 'relationship_type' &&
            key !== 'source_node_id' && key !== 'target_node_id' &&
            key !== 'relationship_name' && key !== 'updated_at' &&
            !key.startsWith('weight_')) {
            content += key + ": " + d.edge_info[key] + "<br/>";
        }
    });
}

tooltip.html(content)
    .style("left", (d3.event.pageX + 10) + "px")
    .style("top", (d3.event.pageY - 10) + "px")
@@ -209,4 +209,3 @@
</svg>
</body>
</html>

mise.toml (new file)
@@ -0,0 +1,2 @@
[tools]
python = "latest"

mypy.ini
@@ -118,4 +118,4 @@ ignore_missing_imports = True
ignore_missing_imports = True

[mypy-pytz.*]
ignore_missing_imports = True
@@ -63,10 +63,10 @@ async def main():
    traversals.
    """

    sample_text_2 = """Neptune Analytics is an ideal choice for investigatory, exploratory, or data-science workloads
    that require fast iteration for data, analytical and algorithmic processing, or vector search on graph data. It
    complements Amazon Neptune Database, a popular managed graph database. To perform intensive analysis, you can load
    the data from a Neptune Database graph or snapshot into Neptune Analytics. You can also load graph data that's
    stored in Amazon S3.
    """
@@ -36,4 +36,3 @@
        ]
    }
]

@@ -50,4 +50,3 @@
        "department": "HR"
    }
]

@@ -106,4 +106,3 @@
        "action": "purchased"
    }]
}]
@@ -165,8 +165,8 @@ async def main():
    // If a stored preference exists and it does not match the new value,
    // raise an error using APOC's utility procedure.
    CALL apoc.util.validate(
        preference IS NOT NULL AND preference.value <> new_size,
        "Conflicting shoe size preference: existing size is " + preference.value + " and new size is " + new_size,
        []
    )
@@ -35,16 +35,16 @@ biography_1 = """

biography_2 = """
Arnulf Øverland Ole Peter Arnulf Øverland ( 27 April 1889 – 25 March 1968 ) was a Norwegian poet and artist . He is principally known for his poetry which served to inspire the Norwegian resistance movement during the German occupation of Norway during World War II .

Biography .
Øverland was born in Kristiansund and raised in Bergen . His parents were Peter Anton Øverland ( 1852–1906 ) and Hanna Hage ( 1854–1939 ) . The early death of his father , left the family economically stressed . He was able to attend Bergen Cathedral School and in 1904 Kristiania Cathedral School . He graduated in 1907 and for a time studied philology at University of Kristiania . Øverland published his first collection of poems ( 1911 ) .

Øverland became a communist sympathizer from the early 1920s and became a member of Mot Dag . He also served as chairman of the Norwegian Students Society 1923–28 . He changed his stand in 1937 , partly as an expression of dissent against the ongoing Moscow Trials . He was an avid opponent of Nazism and in 1936 he wrote the poem Du må ikke sove which was printed in the journal Samtiden . It ends with . ( I thought: : Something is imminent . Our era is over – Europe’s on fire! ) . Probably the most famous line of the poem is ( You mustnt endure so well the injustice that doesnt affect you yourself! )

During the German occupation of Norway from 1940 in World War II , he wrote to inspire the Norwegian resistance movement . He wrote a series of poems which were clandestinely distributed , leading to the arrest of both him and his future wife Margrete Aamot Øverland in 1941 . Arnulf Øverland was held first in the prison camp of Grini before being transferred to Sachsenhausen concentration camp in Germany . He spent a four-year imprisonment until the liberation of Norway in 1945 . His poems were later collected in Vi overlever alt and published in 1945 .

Øverland played an important role in the Norwegian language struggle in the post-war era . He became a noted supporter for the conservative written form of Norwegian called Riksmål , he was president of Riksmålsforbundet ( an organization in support of Riksmål ) from 1947 to 1956 . In addition , Øverland adhered to the traditionalist style of writing , criticising modernist poetry on several occasions . His speech Tungetale fra parnasset , published in Arbeiderbladet in 1954 , initiated the so-called Glossolalia debate .

Personal life .
In 1918 he had married the singer Hildur Arntzen ( 1888–1957 ) . Their marriage was dissolved in 1939 . In 1940 , he married Bartholine Eufemia Leganger ( 1903–1995 ) . They separated shortly after , and were officially divorced in 1945 . Øverland was married to journalist Margrete Aamot Øverland ( 1913–1978 ) during June 1945 . In 1946 , the Norwegian Parliament arranged for Arnulf and Margrete Aamot Øverland to reside at the Grotten . He lived there until his death in 1968 and she lived there for another ten years until her death in 1978 . Arnulf Øverland was buried at Vår Frelsers Gravlund in Oslo . Joseph Grimeland designed the bust of Arnulf Øverland ( bronze , 1970 ) at his grave site .

@@ -56,7 +56,7 @@ biography_2 = """
- Vi overlever alt ( 1945 )
- Sverdet bak døren ( 1956 )
- Livets minutter ( 1965 )

Awards .
- Gyldendals Endowment ( 1935 )
- Dobloug Prize ( 1951 )
@@ -58,4 +58,3 @@ fi

# error in package
exit 1

@@ -26,4 +26,4 @@ if [ -z $error ]; then
fi

# error in package
exit 1