Compare commits
3 commits: main...browser-de

Commits: c69a12ccba, f12bfd6a5f, 4d886fb929

46 changed files with 516 additions and 910 deletions
.github/workflows/daily_issue_maintenance.yml (vendored, new file, 123 lines)

@@ -0,0 +1,123 @@
```yaml
name: Daily Issue Maintenance
on:
  schedule:
    - cron: "0 0 * * *" # Every day at midnight
  workflow_dispatch: # Manual trigger option

jobs:
  find-legacy-duplicates:
    runs-on: ubuntu-latest
    if: github.event_name == 'workflow_dispatch'
    permissions:
      contents: read
      issues: write
      id-token: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            REPO: ${{ github.repository }}

            Find potential duplicate issues in the repository:

            1. Use `gh issue list --state open --limit 1000 --json number,title,body,createdAt` to get all open issues
            2. For each issue, search for potential duplicates using `gh search issues` with keywords from the title and body
            3. Compare issues to identify true duplicates using these criteria:
               - Same bug or error being reported
               - Same feature request (even if worded differently)
               - Same question being asked
               - Issues describing the same root problem

            For each duplicate found:
            - Add a comment linking to the original issue
            - Apply the "duplicate" label using `gh issue edit`
            - Be polite and explain why it's a duplicate

            Focus on finding true duplicates, not just similar issues.
          claude_args: |
            --allowedTools "Bash(gh issue:*),Bash(gh search:*)"
            --model claude-sonnet-4-5-20250929

  check-stale-issues:
    runs-on: ubuntu-latest
    if: github.event_name == 'schedule'
    permissions:
      contents: read
      issues: write
      id-token: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            REPO: ${{ github.repository }}

            Review stale issues and request confirmation:

            1. Use `gh issue list --state open --limit 1000 --json number,title,updatedAt,comments` to get all open issues
            2. Identify issues that are:
               - Older than 60 days (based on updatedAt)
               - Have no comments with "stale-check" label
               - Are not labeled as "enhancement" or "documentation"
            3. For each stale issue:
               - Add a polite comment asking the issue originator if this is still relevant
               - Apply a "stale-check" label to track that we've asked
               - Use format: "@{author} Is this still an issue? Please confirm within 14 days or this issue will be closed."

            Use:
            - `gh issue view` to check issue details and labels
            - `gh issue comment` to add comments
            - `gh issue edit` to add the "stale-check" label
          claude_args: |
            --allowedTools "Bash(gh issue:*)"
            --model claude-sonnet-4-5-20250929

  close-unconfirmed-issues:
    runs-on: ubuntu-latest
    if: github.event_name == 'schedule'
    needs: check-stale-issues
    permissions:
      contents: read
      issues: write
      id-token: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            REPO: ${{ github.repository }}

            Close unconfirmed stale issues:

            1. Use `gh issue list --state open --label "stale-check" --limit 1000 --json number,title,comments,updatedAt` to get issues with stale-check label
            2. For each issue, check if:
               - The "stale-check" comment was added 14+ days ago
               - There has been no response from the issue author or activity since the comment
            3. For issues meeting the criteria:
               - Add a polite closing comment
               - Close the issue using `gh issue close`
               - Use format: "Closing due to inactivity. Feel free to reopen if this is still relevant."

            Use:
            - `gh issue view` to check issue comments and activity
            - `gh issue comment` to add closing comment
            - `gh issue close` to close the issue
          claude_args: |
            --allowedTools "Bash(gh issue:*)"
            --model claude-sonnet-4-5-20250929
```
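The `check-stale-issues` prompt delegates the date math to the agent; the same 60-day filter can be sketched deterministically. This is an illustrative helper, not part of the workflow — `find_stale` is hypothetical and assumes the JSON shape that `gh issue list --json number,updatedAt,labels` emits:

```python
import json
from datetime import datetime, timedelta, timezone


def find_stale(issues_json: str, now: datetime, days: int = 60) -> list[int]:
    """Return numbers of issues not updated in `days` days, skipping
    enhancement/documentation issues and ones already marked stale-check."""
    skip = {"enhancement", "documentation", "stale-check"}
    cutoff = now - timedelta(days=days)
    stale = []
    for issue in json.loads(issues_json):
        labels = {label["name"] for label in issue.get("labels", [])}
        updated = datetime.fromisoformat(issue["updatedAt"].replace("Z", "+00:00"))
        if updated < cutoff and not (labels & skip):
            stale.append(issue["number"])
    return stale


# Mock of `gh issue list` output for illustration
mock = json.dumps([
    {"number": 1, "updatedAt": "2025-01-01T00:00:00Z", "labels": []},
    {"number": 2, "updatedAt": "2025-06-01T00:00:00Z", "labels": []},
    {"number": 3, "updatedAt": "2025-01-01T00:00:00Z", "labels": [{"name": "enhancement"}]},
])
print(find_stale(mock, datetime(2025, 6, 15, tzinfo=timezone.utc)))  # → [1]
```

In the workflow itself this judgment is left to the model, which can also weigh comment content, not just timestamps.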
.github/workflows/issue-triage.yml (vendored, new file, 141 lines)

@@ -0,0 +1,141 @@
```yaml
name: Issue Triage and Deduplication
on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    permissions:
      contents: read
      issues: write
      id-token: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Run Claude Code for Issue Triage
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          allowed_non_write_users: "*"
          github_token: ${{ secrets.GITHUB_TOKEN }}
          prompt: |
            You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.

            IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels. DO NOT check for duplicates - that's handled by a separate job.

            Issue Information:
            - REPO: ${{ github.repository }}
            - ISSUE_NUMBER: ${{ github.event.issue.number }}

            TASK OVERVIEW:

            1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.

            2. Next, use gh commands to get context about the issue:
               - Use `gh issue view ${{ github.event.issue.number }}` to retrieve the current issue's details
               - Use `gh search issues` to find similar issues that might provide context for proper categorization
               - You have access to these Bash commands:
                 - Bash(gh label list:*) - to get available labels
                 - Bash(gh issue view:*) - to view issue details
                 - Bash(gh issue edit:*) - to apply labels to the issue
                 - Bash(gh search:*) - to search for similar issues

            3. Analyze the issue content, considering:
               - The issue title and description
               - The type of issue (bug report, feature request, question, etc.)
               - Technical areas mentioned
               - Database mentions (neo4j, falkordb, neptune, etc.)
               - LLM providers mentioned (openai, anthropic, gemini, groq, etc.)
               - Components affected (embeddings, search, prompts, server, mcp, etc.)

            4. Select appropriate labels from the available labels list:
               - Choose labels that accurately reflect the issue's nature
               - Be specific but comprehensive
               - Add database-specific labels if mentioned: neo4j, falkordb, neptune
               - Add component labels if applicable
               - DO NOT add priority labels (P1, P2, P3)
               - DO NOT add duplicate label - that's handled by the deduplication job

            5. Apply the selected labels:
               - Use `gh issue edit ${{ github.event.issue.number }} --add-label "label1,label2,label3"` to apply your selected labels
               - DO NOT post any comments explaining your decision
               - DO NOT communicate directly with users
               - If no labels are clearly applicable, do not apply any labels

            IMPORTANT GUIDELINES:
            - Be thorough in your analysis
            - Only select labels from the provided list
            - DO NOT post any comments to the issue
            - Your ONLY action should be to apply labels using gh issue edit
            - It's okay to not add any labels if none are clearly applicable
            - DO NOT check for duplicates
          claude_args: |
            --allowedTools "Bash(gh label list:*),Bash(gh issue view:*),Bash(gh issue edit:*),Bash(gh search:*)"
            --model claude-sonnet-4-5-20250929

  deduplicate:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    needs: triage
    permissions:
      contents: read
      issues: write
      id-token: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Check for duplicate issues
        uses: anthropics/claude-code-action@v1
        with:
          allowed_non_write_users: "*"
          prompt: |
            Analyze this new issue and check if it's a duplicate of existing issues in the repository.

            Issue: #${{ github.event.issue.number }}
            Repository: ${{ github.repository }}

            Your task:
            1. Use mcp__github__get_issue to get details of the current issue (#${{ github.event.issue.number }})
            2. Search for similar existing OPEN issues using mcp__github__search_issues with relevant keywords from the issue title and body
            3. Compare the new issue with existing ones to identify potential duplicates

            Criteria for duplicates:
            - Same bug or error being reported
            - Same feature request (even if worded differently)
            - Same question being asked
            - Issues describing the same root problem

            If you find duplicates:
            - Add a comment on the new issue linking to the original issue(s)
            - Apply the "duplicate" label to the new issue
            - Be polite and explain why it's a duplicate
            - Suggest the user follow the original issue for updates

            If it's NOT a duplicate:
            - Don't add any comments
            - Don't modify labels

            Use these tools:
            - mcp__github__get_issue: Get issue details
            - mcp__github__search_issues: Search for similar issues (use state:open)
            - mcp__github__list_issues: List recent issues if needed
            - mcp__github__create_issue_comment: Add a comment if duplicate found
            - mcp__github__update_issue: Add "duplicate" label

            Be thorough but efficient. Focus on finding true duplicates, not just similar issues.
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --allowedTools "mcp__github__get_issue,mcp__github__search_issues,mcp__github__list_issues,mcp__github__create_issue_comment,mcp__github__update_issue,mcp__github__get_issue_comments"
            --model claude-sonnet-4-5-20250929
```
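Step 2 of the deduplication prompt leaves keyword selection to the model. A deterministic sketch of turning an issue title into a `gh search issues` query could look like the following — `search_query` and its stopword list are illustrative assumptions, not part of the workflow:

```python
import re

# Words too generic to be useful search terms (illustrative, not exhaustive)
STOPWORDS = {"the", "a", "an", "in", "on", "when", "with", "to", "of", "and", "is", "for", "not"}


def search_query(title: str, max_terms: int = 5) -> str:
    """Pick distinctive keywords from an issue title for a search query."""
    words = re.findall(r"[a-z0-9_.:/-]+", title.lower())
    terms = [w for w in words if w not in STOPWORDS and len(w) > 2]
    # Deduplicate while preserving order, then cap the term count
    return " ".join(list(dict.fromkeys(terms))[:max_terms])


print(search_query("Error when connecting to Neo4j with bolt:// URI"))
# → error connecting neo4j bolt:// uri
```

The resulting string could then be passed along the lines of `gh search issues "<query>" --state open`; in practice the agent can do better by also weighing the issue body.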
README.md (87 lines changed)

```diff
@@ -81,7 +81,7 @@ We're excited to open-source Graphiti, believing its potential reaches far beyond
 
 | Aspect | Zep | Graphiti |
 |--------|-----|----------|
-| **What they are** | Fully managed platform for context engineering and AI memory | Open-source graph framework |
+| **What they are** | Complete managed platform for AI memory | Open-source graph framework |
 | **User & conversation management** | Built-in users, threads, and message storage | Build your own |
 | **Retrieval & performance** | Pre-configured, production-ready retrieval with sub-200ms performance at scale | Custom implementation required; performance depends on your setup |
 | **Developer tools** | Dashboard with graph visualization, debug logs, API logs; SDKs for Python, TypeScript, and Go | Build your own tools |
```
````diff
@@ -388,32 +388,49 @@ graphiti = Graphiti(graph_driver=driver)
 
 ## Using Graphiti with Azure OpenAI
 
-Graphiti supports Azure OpenAI for both LLM inference and embeddings using Azure's OpenAI v1 API compatibility layer.
+Graphiti supports Azure OpenAI for both LLM inference and embeddings. Azure deployments often require different
+endpoints for LLM and embedding services, and separate deployments for default and small models.
 
-### Quick Start
-> [!IMPORTANT]
-> **Azure OpenAI v1 API Opt-in Required for Structured Outputs**
->
-> Graphiti uses structured outputs via the `client.beta.chat.completions.parse()` method, which requires Azure OpenAI
-> deployments to opt into the v1 API. Without this opt-in, you'll encounter 404 Resource not found errors during episode
-> ingestion.
->
-> To enable v1 API support in your Azure OpenAI deployment, follow Microsoft's
-> guide: [Azure OpenAI API version lifecycle](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/api-version-lifecycle?tabs=key#api-evolution).
 
 ```python
-from openai import AsyncOpenAI
+from openai import AsyncAzureOpenAI
 from graphiti_core import Graphiti
-from graphiti_core.llm_client.azure_openai_client import AzureOpenAILLMClient
-from graphiti_core.llm_client.config import LLMConfig
-from graphiti_core.embedder.azure_openai import AzureOpenAIEmbedderClient
+from graphiti_core.llm_client import LLMConfig, OpenAIClient
+from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
+from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient
 
-# Initialize Azure OpenAI client using the standard OpenAI client
-# with Azure's v1 API endpoint
-azure_client = AsyncOpenAI(
-    base_url="https://your-resource-name.openai.azure.com/openai/v1/",
-    api_key="your-api-key",
-)
+# Azure OpenAI configuration - use separate endpoints for different services
+api_key = "<your-api-key>"
+api_version = "<your-api-version>"
+llm_endpoint = "<your-llm-endpoint>"  # e.g., "https://your-llm-resource.openai.azure.com/"
+embedding_endpoint = "<your-embedding-endpoint>"  # e.g., "https://your-embedding-resource.openai.azure.com/"
 
-# Create LLM and Embedder clients
-llm_client = AzureOpenAILLMClient(
-    azure_client=azure_client,
-    config=LLMConfig(model="gpt-5-mini", small_model="gpt-5-mini")  # Your Azure deployment name
-)
-embedder_client = AzureOpenAIEmbedderClient(
-    azure_client=azure_client,
-    model="text-embedding-3-small"  # Your Azure embedding deployment name
-)
+# Create separate Azure OpenAI clients for different services
+llm_client_azure = AsyncAzureOpenAI(
+    api_key=api_key,
+    api_version=api_version,
+    azure_endpoint=llm_endpoint
+)
+
+embedding_client_azure = AsyncAzureOpenAI(
+    api_key=api_key,
+    api_version=api_version,
+    azure_endpoint=embedding_endpoint
+)
+
+# Create LLM Config with your Azure deployment names
+azure_llm_config = LLMConfig(
+    small_model="gpt-4.1-nano",
+    model="gpt-4.1-mini",
+)
 
 # Initialize Graphiti with Azure OpenAI clients
````
````diff
@@ -421,19 +438,29 @@ graphiti = Graphiti(
     "bolt://localhost:7687",
     "neo4j",
     "password",
-    llm_client=llm_client,
-    embedder=embedder_client,
+    llm_client=OpenAIClient(
+        config=azure_llm_config,
+        client=llm_client_azure
+    ),
+    embedder=OpenAIEmbedder(
+        config=OpenAIEmbedderConfig(
+            embedding_model="text-embedding-3-small-deployment"  # Your Azure embedding deployment name
+        ),
+        client=embedding_client_azure
+    ),
+    cross_encoder=OpenAIRerankerClient(
+        config=LLMConfig(
+            model=azure_llm_config.small_model  # Use small model for reranking
+        ),
+        client=llm_client_azure
+    )
 )
 
 # Now you can use Graphiti with Azure OpenAI
 ```
 
-**Key Points:**
-- Use the standard `AsyncOpenAI` client with Azure's v1 API endpoint format: `https://your-resource-name.openai.azure.com/openai/v1/`
-- The deployment names (e.g., `gpt-5-mini`, `text-embedding-3-small`) should match your Azure OpenAI deployment names
-- See `examples/azure-openai/` for a complete working example
-
-Make sure to replace the placeholder values with your actual Azure OpenAI credentials and deployment names.
+Make sure to replace the placeholder values with your actual Azure OpenAI credentials and deployment names that match
+your Azure OpenAI service configuration.
 
 ## Using Graphiti with Google Gemini
````
```diff
@@ -479,7 +506,7 @@ graphiti = Graphiti(
     cross_encoder=GeminiRerankerClient(
         config=LLMConfig(
             api_key=api_key,
-            model="gemini-2.5-flash-lite"
+            model="gemini-2.5-flash-lite-preview-06-17"
         )
     )
 )
```
````diff
@@ -487,7 +514,7 @@ graphiti = Graphiti(
 # Now you can use Graphiti with Google Gemini for all components
 ```
 
-The Gemini reranker uses the `gemini-2.5-flash-lite` model by default, which is optimized for
+The Gemini reranker uses the `gemini-2.5-flash-lite-preview-06-17` model by default, which is optimized for
 cost-effective and low-latency classification tasks. It uses the same boolean classification approach as the OpenAI
 reranker, leveraging Gemini's log probabilities feature to rank passage relevance.

@@ -496,8 +523,6 @@ reranker, leveraging Gemini's log probabilities feature to rank passage relevance
 Graphiti supports Ollama for running local LLMs and embedding models via Ollama's OpenAI-compatible API. This is ideal
 for privacy-focused applications or when you want to avoid API costs.
 
-**Note:** Use `OpenAIGenericClient` (not `OpenAIClient`) for Ollama and other OpenAI-compatible providers like LM Studio. The `OpenAIGenericClient` is optimized for local models with a higher default max token limit (16K vs 8K) and full support for structured outputs.
-
 Install the models:
 
 ```bash
````
```diff
@@ -34,7 +34,7 @@ You are not expected to provide support for Your Contributions, except to the ex
 
 ## Third-Party Submissions
 
-Should You wish to submit work that is not Your original creation, You may submit it to Zep separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as "Submitted on behalf of a third party: [named here]".
+Should You wish to submit work that is not Your original creation, You may submit it to Zep separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as "Submitted on behalf of a third-party: [named here]".
 
 ## Notifications
```
```diff
@@ -22,8 +22,8 @@ services:
     environment:
       - OPENAI_API_KEY=${OPENAI_API_KEY}
       - NEO4J_URI=bolt://neo4j:${NEO4J_PORT:-7687}
-      - NEO4J_USER=${NEO4J_USER:-neo4j}
-      - NEO4J_PASSWORD=${NEO4J_PASSWORD:-password}
+      - NEO4J_USER=${NEO4J_USER}
+      - NEO4J_PASSWORD=${NEO4J_PASSWORD}
       - PORT=8000
       - db_backend=neo4j
   neo4j:

@@ -45,7 +45,7 @@ services:
     volumes:
       - neo4j_data:/data
     environment:
-      - NEO4J_AUTH=${NEO4J_USER:-neo4j}/${NEO4J_PASSWORD:-password}
+      - NEO4J_AUTH=${NEO4J_USER}/${NEO4J_PASSWORD}
 
   falkordb:
     image: falkordb/falkordb:latest
```
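Dropping the `:-` forms changes failure behavior: with them, an unset variable falls back to a default; without them, it interpolates as an empty string. A quick shell sketch of the two expansions (Compose interpolation follows the same syntax for these forms):

```shell
# `:-` supplies a fallback when the variable is unset or empty;
# the bare form silently expands to nothing.
unset NEO4J_USER
echo "with default:    ${NEO4J_USER:-neo4j}/password"   # → with default:    neo4j/password
echo "without default: ${NEO4J_USER}/password"          # → without default: /password
```

So after this change, forgetting to set `NEO4J_USER`/`NEO4J_PASSWORD` yields an empty `NEO4J_AUTH` like `/` rather than the old `neo4j/password` fallback.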
```diff
@@ -1,10 +0,0 @@
-# Neo4j connection settings
-NEO4J_URI=bolt://localhost:7687
-NEO4J_USER=neo4j
-NEO4J_PASSWORD=password
-
-# Azure OpenAI settings
-AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com
-AZURE_OPENAI_API_KEY=your-api-key-here
-AZURE_OPENAI_DEPLOYMENT=gpt-5-mini
-AZURE_OPENAI_EMBEDDING_DEPLOYMENT=text-embedding-3-small
```
@@ -1,154 +0,0 @@

The entire file below was removed:

````markdown
# Azure OpenAI with Neo4j Example

This example demonstrates how to use Graphiti with Azure OpenAI and Neo4j to build a knowledge graph.

## Prerequisites

- Python 3.10+
- Neo4j database (running locally or remotely)
- Azure OpenAI subscription with deployed models

## Setup

### 1. Install Dependencies

```bash
uv sync
```

### 2. Configure Environment Variables

Copy the `.env.example` file to `.env` and fill in your credentials:

```bash
cd examples/azure-openai
cp .env.example .env
```

Edit `.env` with your actual values:

```env
# Neo4j connection settings
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your-password

# Azure OpenAI settings
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com
AZURE_OPENAI_API_KEY=your-api-key-here
AZURE_OPENAI_DEPLOYMENT=gpt-5-mini
AZURE_OPENAI_EMBEDDING_DEPLOYMENT=text-embedding-3-small
```

### 3. Azure OpenAI Model Deployments

This example requires two Azure OpenAI model deployments:

1. **Chat Completion Model**: Used for entity extraction and relationship analysis
   - Set the deployment name in `AZURE_OPENAI_DEPLOYMENT`

2. **Embedding Model**: Used for semantic search
   - Set the deployment name in `AZURE_OPENAI_EMBEDDING_DEPLOYMENT`

### 4. Neo4j Setup

Make sure Neo4j is running and accessible at the URI specified in your `.env` file.

For local development:
- Download and install [Neo4j Desktop](https://neo4j.com/download/)
- Create a new database
- Start the database
- Use the credentials in your `.env` file

## Running the Example

```bash
cd examples/azure-openai
uv run azure_openai_neo4j.py
```

## What This Example Does

1. **Initialization**: Sets up connections to Neo4j and Azure OpenAI
2. **Adding Episodes**: Ingests text and JSON data about California politics
3. **Basic Search**: Performs hybrid search combining semantic similarity and BM25 retrieval
4. **Center Node Search**: Reranks results based on graph distance to a specific node
5. **Cleanup**: Properly closes database connections

## Key Concepts

### Azure OpenAI Integration

The example shows how to configure Graphiti to use Azure OpenAI with the OpenAI v1 API:

```python
# Initialize Azure OpenAI client using the standard OpenAI client
# with Azure's v1 API endpoint
azure_client = AsyncOpenAI(
    base_url=f"{azure_endpoint}/openai/v1/",
    api_key=azure_api_key,
)

# Create LLM and Embedder clients
llm_client = AzureOpenAILLMClient(
    azure_client=azure_client,
    config=LLMConfig(model=azure_deployment, small_model=azure_deployment)
)
embedder_client = AzureOpenAIEmbedderClient(
    azure_client=azure_client,
    model=azure_embedding_deployment
)

# Initialize Graphiti with custom clients
graphiti = Graphiti(
    neo4j_uri,
    neo4j_user,
    neo4j_password,
    llm_client=llm_client,
    embedder=embedder_client,
)
```

**Note**: This example uses Azure OpenAI's v1 API compatibility layer, which allows using the standard `AsyncOpenAI` client. The endpoint format is `https://your-resource-name.openai.azure.com/openai/v1/`.

### Episodes

Episodes are the primary units of information in Graphiti. They can be:
- **Text**: Raw text content (e.g., transcripts, documents)
- **JSON**: Structured data with key-value pairs

### Hybrid Search

Graphiti combines multiple search strategies:
- **Semantic Search**: Uses embeddings to find semantically similar content
- **BM25**: Keyword-based text retrieval
- **Graph Traversal**: Leverages relationships between entities

## Troubleshooting

### Azure OpenAI API Errors

- Verify your endpoint URL is correct (should end in `.openai.azure.com`)
- Check that your API key is valid
- Ensure your deployment names match actual deployments in Azure
- Verify API version is supported by your deployment

### Neo4j Connection Issues

- Ensure Neo4j is running
- Check firewall settings
- Verify credentials are correct
- Check URI format (should be `bolt://` or `neo4j://`)

## Next Steps

- Explore other search recipes in `graphiti_core/search/search_config_recipes.py`
- Try different episode types and content
- Experiment with custom entity definitions
- Add more episodes to build a larger knowledge graph

## Related Examples

- `examples/quickstart/` - Basic Graphiti usage with OpenAI
- `examples/podcast/` - Processing longer content
- `examples/ecommerce/` - Domain-specific knowledge graphs
````
@@ -1,225 +0,0 @@

The entire file below was removed (the diff is truncated here in the source):

```python
"""
Copyright 2025, Zep Software, Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

import asyncio
import json
import logging
import os
from datetime import datetime, timezone
from logging import INFO

from dotenv import load_dotenv
from openai import AsyncOpenAI

from graphiti_core import Graphiti
from graphiti_core.embedder.azure_openai import AzureOpenAIEmbedderClient
from graphiti_core.llm_client.azure_openai_client import AzureOpenAILLMClient
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.nodes import EpisodeType

#################################################
# CONFIGURATION
#################################################
# Set up logging and environment variables for
# connecting to Neo4j database and Azure OpenAI
#################################################

# Configure logging
logging.basicConfig(
    level=INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
)
logger = logging.getLogger(__name__)

load_dotenv()

# Neo4j connection parameters
# Make sure Neo4j Desktop is running with a local DBMS started
neo4j_uri = os.environ.get('NEO4J_URI', 'bolt://localhost:7687')
neo4j_user = os.environ.get('NEO4J_USER', 'neo4j')
neo4j_password = os.environ.get('NEO4J_PASSWORD', 'password')

# Azure OpenAI connection parameters
azure_endpoint = os.environ.get('AZURE_OPENAI_ENDPOINT')
azure_api_key = os.environ.get('AZURE_OPENAI_API_KEY')
azure_deployment = os.environ.get('AZURE_OPENAI_DEPLOYMENT', 'gpt-4.1')
azure_embedding_deployment = os.environ.get(
    'AZURE_OPENAI_EMBEDDING_DEPLOYMENT', 'text-embedding-3-small'
)

if not azure_endpoint or not azure_api_key:
    raise ValueError('AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY must be set')


async def main():
    #################################################
    # INITIALIZATION
    #################################################
    # Connect to Neo4j and Azure OpenAI, then set up
    # Graphiti indices. This is required before using
    # other Graphiti functionality
    #################################################

    # Initialize Azure OpenAI client
    azure_client = AsyncOpenAI(
        base_url=f'{azure_endpoint}/openai/v1/',
        api_key=azure_api_key,
    )

    # Create LLM and Embedder clients
    llm_client = AzureOpenAILLMClient(
        azure_client=azure_client,
        config=LLMConfig(model=azure_deployment, small_model=azure_deployment),
    )
    embedder_client = AzureOpenAIEmbedderClient(
        azure_client=azure_client, model=azure_embedding_deployment
    )

    # Initialize Graphiti with Neo4j connection and Azure OpenAI clients
    graphiti = Graphiti(
        neo4j_uri,
        neo4j_user,
        neo4j_password,
        llm_client=llm_client,
        embedder=embedder_client,
    )

    try:
        #################################################
        # ADDING EPISODES
        #################################################
        # Episodes are the primary units of information
        # in Graphiti. They can be text or structured JSON
        # and are automatically processed to extract entities
        # and relationships.
        #################################################

        # Example: Add Episodes
        # Episodes list containing both text and JSON episodes
        episodes = [
            {
                'content': 'Kamala Harris is the Attorney General of California. She was previously '
                'the district attorney for San Francisco.',
                'type': EpisodeType.text,
                'description': 'podcast transcript',
            },
            {
                'content': 'As AG, Harris was in office from January 3, 2011 – January 3, 2017',
                'type': EpisodeType.text,
                'description': 'podcast transcript',
            },
            {
                'content': {
                    'name': 'Gavin Newsom',
                    'position': 'Governor',
                    'state': 'California',
                    'previous_role': 'Lieutenant Governor',
                    'previous_location': 'San Francisco',
                },
                'type': EpisodeType.json,
                'description': 'podcast metadata',
            },
        ]

        # Add episodes to the graph
        for i, episode in enumerate(episodes):
            await graphiti.add_episode(
                name=f'California Politics {i}',
                episode_body=(
                    episode['content']
                    if isinstance(episode['content'], str)
                    else json.dumps(episode['content'])
                ),
                source=episode['type'],
                source_description=episode['description'],
                reference_time=datetime.now(timezone.utc),
            )
            print(f'Added episode: California Politics {i} ({episode["type"].value})')

        #################################################
        # BASIC SEARCH
        #################################################
        # The simplest way to retrieve relationships (edges)
        # from Graphiti is using the search method, which
        # performs a hybrid search combining semantic
        # similarity and BM25 text retrieval.
        #################################################

        # Perform a hybrid search combining semantic similarity and BM25 retrieval
        print("\nSearching for: 'Who was the California Attorney General?'")
        results = await graphiti.search('Who was the California Attorney General?')

        # Print search results
        print('\nSearch Results:')
        for result in results:
            print(f'UUID: {result.uuid}')
            print(f'Fact: {result.fact}')
            if hasattr(result, 'valid_at') and result.valid_at:
                print(f'Valid from: {result.valid_at}')
            if hasattr(result, 'invalid_at') and result.invalid_at:
                print(f'Valid until: {result.invalid_at}')
            print('---')

        #################################################
        # CENTER NODE SEARCH
        #################################################
        # For more contextually relevant results, you can
        # use a center node to rerank search results based
        # on their graph distance to a specific node
        #################################################

        # Use the top search result's UUID as the center node for reranking
        if results and len(results) > 0:
            # Get the source node UUID from the top result
            center_node_uuid = results[0].source_node_uuid

            print('\nReranking search results based on graph distance:')
            print(f'Using center node UUID: {center_node_uuid}')

            reranked_results = await graphiti.search(
                'Who was the California Attorney General?',
                center_node_uuid=center_node_uuid,
            )

            # Print reranked search results
            print('\nReranked Search Results:')
            for result in reranked_results:
                print(f'UUID: {result.uuid}')
                print(f'Fact: {result.fact}')
                if hasattr(result, 'valid_at') and result.valid_at:
                    print(f'Valid from: {result.valid_at}')
                if hasattr(result, 'invalid_at') and result.invalid_at:
                    print(f'Valid until: {result.invalid_at}')
                print('---')
        else:
            print('No results found in the initial search to use as center node.')

    finally:
        #################################################
        # CLEANUP
        #################################################
```

(remainder of the deleted file not shown in the source)
# Always close the connection to Neo4j when
|
||||
# finished to properly release resources
|
||||
#################################################
|
||||
|
||||
# Close the connection
|
||||
await graphiti.close()
|
||||
print('\nConnection closed')
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
asyncio.run(main())
|
||||
|
|
@@ -60,7 +60,7 @@ def setup_logging():

shoe_conversation = [
    "SalesBot: Hi, I'm Allbirds Assistant! How can I help you today?",
    "John: Hi, I'm looking for a new pair of shoes.",
    'SalesBot: Of course! What kind of material are you looking for?',
    'SalesBot: Of course! What kinde of material are you looking for?',
    "John: I'm looking for shoes made out of wool",
    """SalesBot: We have just what you are looking for, how do you like our Men's SuperLight Wool Runners
    - Dark Grey (Medium Grey Sole)? They use the SuperLight Foam technology.""",

@@ -540,7 +540,7 @@

    "submit_button = widgets.Button(description='Send')\n",
    "submit_button.on_click(on_submit)\n",
    "\n",
    "conversation_output.append_stdout('Assistant: Hello, how can I help you find shoes today?')\n",
    "conversation_output.append_stdout('Asssistant: Hello, how can I help you find shoes today?')\n",
    "\n",
    "display(widgets.VBox([input_box, submit_button, conversation_output]))"
]

@@ -37,7 +37,7 @@ else:

logger = logging.getLogger(__name__)

DEFAULT_MODEL = 'gemini-2.5-flash-lite'
DEFAULT_MODEL = 'gemini-2.5-flash-lite-preview-06-17'


class GeminiRerankerClient(CrossEncoderClient):

@@ -134,7 +134,7 @@ class KuzuDriver(GraphDriver):
        return KuzuDriverSession(self)

    async def close(self):
        # Do not explicitly close the connection, instead rely on GC.
        # Do not explicity close the connection, instead rely on GC.
        pass

    def delete_all_indexes(self, database_: str):

@@ -18,9 +18,7 @@ import logging
from collections.abc import Coroutine
from typing import Any

import neo4j.exceptions
from neo4j import AsyncGraphDatabase, EagerResult
from neo4j.exceptions import ClientError
from typing_extensions import LiteralString

from graphiti_core.driver.driver import GraphDriver, GraphDriverSession, GraphProvider

@@ -72,15 +70,6 @@ class Neo4jDriver(GraphDriver):

        try:
            result = await self.client.execute_query(cypher_query_, parameters_=params, **kwargs)
        except neo4j.exceptions.ClientError as e:
            # Handle race condition when creating indices/constraints in parallel
            # Neo4j 5.26+ may throw EquivalentSchemaRuleAlreadyExists even with IF NOT EXISTS
            if 'EquivalentSchemaRuleAlreadyExists' in str(e):
                logger.info(f'Index or constraint already exists, continuing: {cypher_query_}')
                # Return empty result to indicate success (index exists)
                return EagerResult([], None, None)  # type: ignore
            logger.error(f'Error executing Neo4j query: {e}\n{cypher_query_}\n{params}')
            raise
        except Exception as e:
            logger.error(f'Error executing Neo4j query: {e}\n{cypher_query_}\n{params}')
            raise

@@ -99,21 +88,6 @@ class Neo4jDriver(GraphDriver):
            'CALL db.indexes() YIELD name DROP INDEX name',
        )

    async def _execute_index_query(self, query: LiteralString) -> EagerResult | None:
        """Execute an index creation query, ignoring 'index already exists' errors.

        Neo4j can raise EquivalentSchemaRuleAlreadyExists when concurrent CREATE INDEX
        IF NOT EXISTS queries race, even though the index exists. This is safe to ignore.
        """
        try:
            return await self.execute_query(query)
        except ClientError as e:
            # Ignore "equivalent index already exists" error (race condition with IF NOT EXISTS)
            if 'EquivalentSchemaRuleAlreadyExists' in str(e):
                logger.debug(f'Index already exists (concurrent creation): {query[:50]}...')
                return None
            raise

    async def build_indices_and_constraints(self, delete_existing: bool = False):
        if delete_existing:
            await self.delete_all_indexes()

@@ -124,8 +98,15 @@ class Neo4jDriver(GraphDriver):

        index_queries: list[LiteralString] = range_indices + fulltext_indices

        await semaphore_gather(*[self._execute_index_query(query) for query in index_queries])

        await semaphore_gather(
            *[
                self.execute_query(
                    query,
                )
                for query in index_queries
            ]
        )

    async def health_check(self) -> None:
        """Check Neo4j connectivity by running the driver's verify_connectivity method."""
        try:

@@ -17,7 +17,7 @@ limitations under the License.
import logging
from typing import Any

from openai import AsyncAzureOpenAI, AsyncOpenAI
from openai import AsyncAzureOpenAI

from .client import EmbedderClient

@@ -25,16 +25,9 @@ logger = logging.getLogger(__name__)


class AzureOpenAIEmbedderClient(EmbedderClient):
    """Wrapper class for Azure OpenAI that implements the EmbedderClient interface.
    """Wrapper class for AsyncAzureOpenAI that implements the EmbedderClient interface."""

    Supports both AsyncAzureOpenAI and AsyncOpenAI (with Azure v1 API endpoint).
    """

    def __init__(
        self,
        azure_client: AsyncAzureOpenAI | AsyncOpenAI,
        model: str = 'text-embedding-3-small',
    ):
    def __init__(self, azure_client: AsyncAzureOpenAI, model: str = 'text-embedding-3-small'):
        self.azure_client = azure_client
        self.model = model

@@ -47,9 +47,6 @@ else:
logger = logging.getLogger(__name__)

AnthropicModel = Literal[
    'claude-sonnet-4-5-latest',
    'claude-sonnet-4-5-20250929',
    'claude-haiku-4-5-latest',
    'claude-3-7-sonnet-latest',
    'claude-3-7-sonnet-20250219',
    'claude-3-5-haiku-latest',

@@ -65,39 +62,7 @@ AnthropicModel = Literal[
    'claude-2.0',
]

DEFAULT_MODEL: AnthropicModel = 'claude-haiku-4-5-latest'

# Maximum output tokens for different Anthropic models
# Based on official Anthropic documentation (as of 2025)
# Note: These represent standard limits without beta headers.
# Some models support higher limits with additional configuration (e.g., Claude 3.7 supports
# 128K with 'anthropic-beta: output-128k-2025-02-19' header, but this is not currently implemented).
ANTHROPIC_MODEL_MAX_TOKENS = {
    # Claude 4.5 models - 64K tokens
    'claude-sonnet-4-5-latest': 65536,
    'claude-sonnet-4-5-20250929': 65536,
    'claude-haiku-4-5-latest': 65536,
    # Claude 3.7 models - standard 64K tokens
    'claude-3-7-sonnet-latest': 65536,
    'claude-3-7-sonnet-20250219': 65536,
    # Claude 3.5 models
    'claude-3-5-haiku-latest': 8192,
    'claude-3-5-haiku-20241022': 8192,
    'claude-3-5-sonnet-latest': 8192,
    'claude-3-5-sonnet-20241022': 8192,
    'claude-3-5-sonnet-20240620': 8192,
    # Claude 3 models - 4K tokens
    'claude-3-opus-latest': 4096,
    'claude-3-opus-20240229': 4096,
    'claude-3-sonnet-20240229': 4096,
    'claude-3-haiku-20240307': 4096,
    # Claude 2 models - 4K tokens
    'claude-2.1': 4096,
    'claude-2.0': 4096,
}

# Default max tokens for models not in the mapping
DEFAULT_ANTHROPIC_MAX_TOKENS = 8192
DEFAULT_MODEL: AnthropicModel = 'claude-3-7-sonnet-latest'


class AnthropicClient(LLMClient):

@@ -212,45 +177,6 @@ class AnthropicClient(LLMClient):
        tool_choice_cast = typing.cast(ToolChoiceParam, tool_choice)
        return tool_list_cast, tool_choice_cast

    def _get_max_tokens_for_model(self, model: str) -> int:
        """Get the maximum output tokens for a specific Anthropic model.

        Args:
            model: The model name to look up

        Returns:
            int: The maximum output tokens for the model
        """
        return ANTHROPIC_MODEL_MAX_TOKENS.get(model, DEFAULT_ANTHROPIC_MAX_TOKENS)

    def _resolve_max_tokens(self, requested_max_tokens: int | None, model: str) -> int:
        """
        Resolve the maximum output tokens to use based on precedence rules.

        Precedence order (highest to lowest):
        1. Explicit max_tokens parameter passed to generate_response()
        2. Instance max_tokens set during client initialization
        3. Model-specific maximum tokens from ANTHROPIC_MODEL_MAX_TOKENS mapping
        4. DEFAULT_ANTHROPIC_MAX_TOKENS as final fallback

        Args:
            requested_max_tokens: The max_tokens parameter passed to generate_response()
            model: The model name to look up model-specific limits

        Returns:
            int: The resolved maximum tokens to use
        """
        # 1. Use explicit parameter if provided
        if requested_max_tokens is not None:
            return requested_max_tokens

        # 2. Use instance max_tokens if set during initialization
        if self.max_tokens is not None:
            return self.max_tokens

        # 3. Use model-specific maximum or return DEFAULT_ANTHROPIC_MAX_TOKENS
        return self._get_max_tokens_for_model(model)

    async def _generate_response(
        self,
        messages: list[Message],

@@ -278,9 +204,12 @@ class AnthropicClient(LLMClient):
        user_messages = [{'role': m.role, 'content': m.content} for m in messages[1:]]
        user_messages_cast = typing.cast(list[MessageParam], user_messages)

        # Resolve max_tokens dynamically based on the model's capabilities
        # This allows different models to use their full output capacity
        max_creation_tokens: int = self._resolve_max_tokens(max_tokens, self.model)
        # TODO: Replace hacky min finding solution after fixing hardcoded EXTRACT_EDGES_MAX_TOKENS = 16384 in
        # edge_operations.py. Throws errors with cheaper models that lower max_tokens.
        max_creation_tokens: int = min(
            max_tokens if max_tokens is not None else self.config.max_tokens,
            DEFAULT_MAX_TOKENS,
        )

        try:
            # Create the appropriate tool based on whether response_model is provided

@@ -17,7 +17,7 @@ limitations under the License.
import logging
from typing import ClassVar

from openai import AsyncAzureOpenAI, AsyncOpenAI
from openai import AsyncAzureOpenAI
from openai.types.chat import ChatCompletionMessageParam
from pydantic import BaseModel

@@ -28,17 +28,14 @@ logger = logging.getLogger(__name__)


class AzureOpenAILLMClient(BaseOpenAIClient):
    """Wrapper class for Azure OpenAI that implements the LLMClient interface.

    Supports both AsyncAzureOpenAI and AsyncOpenAI (with Azure v1 API endpoint).
    """
    """Wrapper class for AsyncAzureOpenAI that implements the LLMClient interface."""

    # Class-level constants
    MAX_RETRIES: ClassVar[int] = 2

    def __init__(
        self,
        azure_client: AsyncAzureOpenAI | AsyncOpenAI,
        azure_client: AsyncAzureOpenAI,
        config: LLMConfig | None = None,
        max_tokens: int = DEFAULT_MAX_TOKENS,
        reasoning: str | None = None,

@@ -48,11 +48,7 @@ def get_extraction_language_instruction(group_id: str | None = None) -> str:
    Returns:
        str: Language instruction to append to system messages
    """
    return (
        '\n\nAny extracted information should be returned in the same language as it was written in. '
        'Only output non-English text when the user has written full sentences or phrases in that non-English language. '
        'Otherwise, output English.'
    )
    return '\n\nAny extracted information should be returned in the same language as it was written in.'


logger = logging.getLogger(__name__)

@@ -45,7 +45,7 @@ else:
logger = logging.getLogger(__name__)

DEFAULT_MODEL = 'gemini-2.5-flash'
DEFAULT_SMALL_MODEL = 'gemini-2.5-flash-lite'
DEFAULT_SMALL_MODEL = 'gemini-2.5-flash-lite-preview-06-17'

# Maximum output tokens for different Gemini models
GEMINI_MODEL_MAX_TOKENS = {

@@ -53,6 +53,7 @@ GEMINI_MODEL_MAX_TOKENS = {
    'gemini-2.5-pro': 65536,
    'gemini-2.5-flash': 65536,
    'gemini-2.5-flash-lite': 64000,
    'models/gemini-2.5-flash-lite-preview-06-17': 64000,
    # Gemini 2.0 models
    'gemini-2.0-flash': 8192,
    'gemini-2.0-flash-lite': 8192,

@@ -31,8 +31,8 @@ from .errors import RateLimitError, RefusalError

logger = logging.getLogger(__name__)

DEFAULT_MODEL = 'gpt-4o-mini'
DEFAULT_SMALL_MODEL = 'gpt-4o-mini'
DEFAULT_MODEL = 'gpt-5-mini'
DEFAULT_SMALL_MODEL = 'gpt-5-nano'
DEFAULT_REASONING = 'minimal'
DEFAULT_VERBOSITY = 'low'

@@ -166,17 +166,13 @@ class BaseOpenAIClient(LLMClient):
        except openai.RateLimitError as e:
            raise RateLimitError from e
        except openai.AuthenticationError as e:
            logger.error(
                f'OpenAI Authentication Error: {e}. Please verify your API key is correct.'
            )
            logger.error(f'OpenAI Authentication Error: {e}. Please verify your API key is correct.')
            raise
        except Exception as e:
            # Provide more context for connection errors
            error_msg = str(e)
            if 'Connection error' in error_msg or 'connection' in error_msg.lower():
                logger.error(
                    f'Connection error communicating with OpenAI API. Please check your network connection and API key. Error: {e}'
                )
                logger.error(f'Connection error communicating with OpenAI API. Please check your network connection and API key. Error: {e}')
            else:
                logger.error(f'Error in generating LLM response: {e}')
            raise

@@ -74,9 +74,7 @@ class OpenAIClient(BaseOpenAIClient):
    ):
        """Create a structured completion using OpenAI's beta parse API."""
        # Reasoning models (gpt-5 family) don't support temperature
        is_reasoning_model = (
            model.startswith('gpt-5') or model.startswith('o1') or model.startswith('o3')
        )
        is_reasoning_model = model.startswith('gpt-5') or model.startswith('o1') or model.startswith('o3')

        response = await self.client.responses.parse(
            model=model,

@@ -102,9 +100,7 @@ class OpenAIClient(BaseOpenAIClient):
    ):
        """Create a regular completion with JSON format."""
        # Reasoning models (gpt-5 family) don't support temperature
        is_reasoning_model = (
            model.startswith('gpt-5') or model.startswith('o1') or model.startswith('o3')
        )
        is_reasoning_model = model.startswith('gpt-5') or model.startswith('o1') or model.startswith('o3')

        return await self.client.chat.completions.create(
            model=model,

@@ -17,7 +17,7 @@ limitations under the License.
import json
import logging
import typing
from typing import Any, ClassVar
from typing import ClassVar

import openai
from openai import AsyncOpenAI

@@ -59,20 +59,15 @@ class OpenAIGenericClient(LLMClient):
    MAX_RETRIES: ClassVar[int] = 2

    def __init__(
        self,
        config: LLMConfig | None = None,
        cache: bool = False,
        client: typing.Any = None,
        max_tokens: int = 16384,
        self, config: LLMConfig | None = None, cache: bool = False, client: typing.Any = None
    ):
        """
        Initialize the OpenAIGenericClient with the provided configuration, cache setting, and client.
        Initialize the OpenAIClient with the provided configuration, cache setting, and client.

        Args:
            config (LLMConfig | None): The configuration for the LLM client, including API key, model, base URL, temperature, and max tokens.
            cache (bool): Whether to use caching for responses. Defaults to False.
            client (Any | None): An optional async client instance to use. If not provided, a new AsyncOpenAI client is created.
            max_tokens (int): The maximum number of tokens to generate. Defaults to 16384 (16K) for better compatibility with local models.

        """
        # removed caching to simplify the `generate_response` override

@@ -84,9 +79,6 @@ class OpenAIGenericClient(LLMClient):

        super().__init__(config, cache)

        # Override max_tokens to support higher limits for local models
        self.max_tokens = max_tokens

        if client is None:
            self.client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url)
        else:

@@ -107,25 +99,12 @@ class OpenAIGenericClient(LLMClient):
            elif m.role == 'system':
                openai_messages.append({'role': 'system', 'content': m.content})
        try:
            # Prepare response format
            response_format: dict[str, Any] = {'type': 'json_object'}
            if response_model is not None:
                schema_name = getattr(response_model, '__name__', 'structured_response')
                json_schema = response_model.model_json_schema()
                response_format = {
                    'type': 'json_schema',
                    'json_schema': {
                        'name': schema_name,
                        'schema': json_schema,
                    },
                }

            response = await self.client.chat.completions.create(
                model=self.model or DEFAULT_MODEL,
                messages=openai_messages,
                temperature=self.temperature,
                max_tokens=self.max_tokens,
                response_format=response_format,  # type: ignore[arg-type]
                response_format={'type': 'json_object'},
            )
            result = response.choices[0].message.content or ''
            return json.loads(result)

@@ -147,6 +126,14 @@ class OpenAIGenericClient(LLMClient):
        if max_tokens is None:
            max_tokens = self.max_tokens

        if response_model is not None:
            serialized_model = json.dumps(response_model.model_json_schema())
            messages[
                -1
            ].content += (
                f'\n\nRespond with a JSON object in the following format:\n\n{serialized_model}'
            )

        # Add multilingual extraction instructions
        messages[0].content += get_extraction_language_instruction(group_id)

@@ -129,8 +129,8 @@ def get_entity_edge_save_bulk_query(provider: GraphProvider, has_aoss: bool = Fa
        MATCH (source:Entity {uuid: edge.source_node_uuid})
        MATCH (target:Entity {uuid: edge.target_node_uuid})
        MERGE (source)-[r:RELATES_TO {uuid: edge.uuid}]->(target)
        SET r = edge
        SET r.fact_embedding = vecf32(edge.fact_embedding)
        SET r = {uuid: edge.uuid, name: edge.name, group_id: edge.group_id, fact: edge.fact, episodes: edge.episodes,
            created_at: edge.created_at, expired_at: edge.expired_at, valid_at: edge.valid_at, invalid_at: edge.invalid_at, fact_embedding: vecf32(edge.fact_embedding)}
        WITH r, edge
        RETURN edge.uuid AS uuid
        """

@@ -41,16 +41,6 @@ class DateFilter(BaseModel):
    )


class PropertyFilter(BaseModel):
    property_name: str = Field(description='Property name')
    property_value: str | int | float | None = Field(
        description='Value you want to match on for the property'
    )
    comparison_operator: ComparisonOperator = Field(
        description='Comparison operator for the property'
    )


class SearchFilters(BaseModel):
    node_labels: list[str] | None = Field(
        default=None, description='List of node labels to filter on'

@@ -63,7 +53,6 @@ class SearchFilters(BaseModel):
    created_at: list[list[DateFilter]] | None = Field(default=None)
    expired_at: list[list[DateFilter]] | None = Field(default=None)
    edge_uuids: list[str] | None = Field(default=None)
    property_filters: list[PropertyFilter] | None = Field(default=None)


def cypher_to_opensearch_operator(op: ComparisonOperator) -> str:

@@ -17,7 +17,7 @@ limitations under the License.
import re

# Maximum length for entity/node summaries
MAX_SUMMARY_CHARS = 500
MAX_SUMMARY_CHARS = 250


def truncate_at_sentence(text: str, max_chars: int) -> str:
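
This hunk halves `MAX_SUMMARY_CHARS`, but the body of `truncate_at_sentence` is not part of the diff. A minimal sketch of what such a helper could look like, assuming it prefers a sentence boundary and falls back to a word boundary (an illustration, not the library's actual implementation):

```python
import re

MAX_SUMMARY_CHARS = 250


def truncate_at_sentence(text: str, max_chars: int) -> str:
    """Truncate text to at most max_chars, preferring a sentence boundary."""
    if not text or len(text) <= max_chars:
        return text
    clipped = text[:max_chars]
    # Find the last sentence-ending punctuation followed by whitespace
    match = None
    for match in re.finditer(r'[.!?]\s', clipped):
        pass
    if match:
        return clipped[: match.end()].rstrip()
    # No sentence boundary inside the window; fall back to a word boundary
    return clipped.rsplit(' ', 1)[0].rstrip()


print(truncate_at_sentence('First sentence. Second sentence is longer.', 20))
# First sentence.
```

Truncating at a sentence boundary keeps the shortened summaries grammatical instead of cutting mid-word at exactly 250 characters.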
@@ -1,49 +0,0 @@
# Graphiti MCP Server Environment Configuration
MCP_SERVER_HOST=gmakai.online
# Neo4j Database Configuration
# These settings are used to connect to your Neo4j database
NEO4J_URI=bolt://neo4j:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=kg3Jsdb2

# OpenAI API Configuration
# Required for LLM operations
OPENAI_API_KEY=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
MODEL_NAME=gpt5.1-nano

# Optional: Only needed for non-standard OpenAI endpoints
OPENAI_BASE_URL=https://openrouter.ai/api/v1

# Optional: Group ID for namespacing graph data
# GROUP_ID=my_project

# Concurrency Control
# Controls how many episodes can be processed simultaneously
# Default: 10 (suitable for OpenAI Tier 3, mid-tier Anthropic)
# Adjust based on your LLM provider's rate limits:
# - OpenAI Tier 1 (free): 1-2
# - OpenAI Tier 2: 5-8
# - OpenAI Tier 3: 10-15
# - OpenAI Tier 4: 20-50
# - Anthropic default: 5-8
# - Anthropic high tier: 15-30
# - Ollama (local): 1-5
# See README.md "Concurrency and LLM Provider 429 Rate Limit Errors" for details
SEMAPHORE_LIMIT=10
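
`SEMAPHORE_LIMIT` caps how many episodes are processed concurrently, and the driver code earlier in this diff funnels queries through a `semaphore_gather` helper. A minimal sketch of such a bounded-concurrency gather, assuming plain asyncio (an illustration of the pattern, not the project's actual implementation):

```python
import asyncio


async def semaphore_gather(*coros, max_coroutines: int = 10):
    """Run coroutines concurrently, with at most max_coroutines in flight."""
    semaphore = asyncio.Semaphore(max_coroutines)

    async def _bounded(coro):
        # Each coroutine must acquire the semaphore before it runs
        async with semaphore:
            return await coro

    # gather preserves the order of the input coroutines
    return await asyncio.gather(*(_bounded(c) for c in coros))


async def main():
    async def work(i: int) -> int:
        await asyncio.sleep(0.01)
        return i * 2

    results = await semaphore_gather(*(work(i) for i in range(5)), max_coroutines=2)
    print(results)  # [0, 2, 4, 6, 8]


asyncio.run(main())
```

Lowering the limit trades throughput for fewer simultaneous LLM requests, which is why the tier guidance above maps provider rate limits to semaphore sizes.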

# Optional: Path configuration for Docker
# PATH=/root/.local/bin:${PATH}

# Optional: Memory settings for Neo4j (used in Docker Compose)
# NEO4J_server_memory_heap_initial__size=512m
# NEO4J_server_memory_heap_max__size=1G
# NEO4J_server_memory_pagecache_size=512m

# Azure OpenAI configuration
# Optional: Only needed for Azure OpenAI endpoints
# AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint_here
# AZURE_OPENAI_API_VERSION=2025-01-01-preview
# AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o-gpt-4o-mini-deployment
# AZURE_OPENAI_EMBEDDING_API_VERSION=2023-05-15
# AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME=text-embedding-3-large-deployment
# AZURE_OPENAI_USE_MANAGED_IDENTITY=false

@@ -8,7 +8,7 @@ server:

llm:
  provider: "openai" # Options: openai, azure_openai, anthropic, gemini, groq
  model: "gpt-4o-mini"
  model: "gpt-5-mini"
  max_tokens: 4096

providers:

@@ -8,7 +8,7 @@ server:

llm:
  provider: "openai" # Options: openai, azure_openai, anthropic, gemini, groq
  model: "gpt-4o-mini"
  model: "gpt-5-mini"
  max_tokens: 4096

providers:

@@ -8,7 +8,7 @@ server:

llm:
  provider: "openai" # Options: openai, azure_openai, anthropic, gemini, groq
  model: "gpt-4o-mini"
  model: "gpt-5-mini"
  max_tokens: 4096

providers:

@@ -12,7 +12,7 @@ server:

llm:
  provider: "openai" # Options: openai, azure_openai, anthropic, gemini, groq
  model: "gpt-4o-mini"
  model: "gpt-5-mini"
  max_tokens: 4096

providers:

@@ -33,24 +33,20 @@ ENV UV_COMPILE_BYTECODE=1 \
WORKDIR /app/mcp

# Accept graphiti-core version as build argument
ARG GRAPHITI_CORE_VERSION=0.23.1
ARG GRAPHITI_CORE_VERSION=0.22.0

# Copy project files for dependency installation
COPY pyproject.toml uv.lock ./

# Remove the local path override for graphiti-core in Docker builds
# and regenerate lock file to match the PyPI version
RUN sed -i '/\[tool\.uv\.sources\]/,/graphiti-core/d' pyproject.toml && \
    if [ -n "${GRAPHITI_CORE_VERSION}" ]; then \
        sed -i "s/graphiti-core\[falkordb\]>=[0-9]\+\.[0-9]\+\.[0-9]\+$/graphiti-core[falkordb]==${GRAPHITI_CORE_VERSION}/" pyproject.toml; \
    fi && \
    echo "Regenerating lock file for PyPI graphiti-core..." && \
    rm -f uv.lock && \
    uv lock
        sed -i "s/graphiti-core\[falkordb\]>=0\.16\.0/graphiti-core[falkordb]==${GRAPHITI_CORE_VERSION}/" pyproject.toml; \
    fi

# Install Python dependencies (exclude dev dependency group)
# Install Python dependencies
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --no-group dev
    uv sync --no-dev

# Store graphiti-core version
RUN echo "${GRAPHITI_CORE_VERSION}" > /app/mcp/.graphiti-core-version

@@ -106,13 +102,13 @@ fi
# Start MCP server in foreground
echo "Starting MCP server..."
cd /app/mcp
exec /root/.local/bin/uv run --no-sync main.py
exec /root/.local/bin/uv run main.py
EOF

RUN chmod +x /start-services.sh

# Add Docker labels with version information
ARG MCP_SERVER_VERSION=1.0.1
ARG MCP_SERVER_VERSION=1.0.0rc0
ARG BUILD_DATE
ARG VCS_REF
LABEL org.opencontainers.image.title="FalkorDB + Graphiti MCP Server" \

@@ -28,23 +28,19 @@ ENV UV_COMPILE_BYTECODE=1 \
WORKDIR /app/mcp

# Accept graphiti-core version as build argument
ARG GRAPHITI_CORE_VERSION=0.23.1
ARG GRAPHITI_CORE_VERSION=0.22.0

# Copy project files for dependency installation
COPY pyproject.toml uv.lock ./

# Remove the local path override for graphiti-core in Docker builds
# Install with BOTH neo4j and falkordb extras for maximum flexibility
# and regenerate lock file to match the PyPI version
RUN sed -i '/\[tool\.uv\.sources\]/,/graphiti-core/d' pyproject.toml && \
    sed -i "s/graphiti-core\[falkordb\]>=[0-9]\+\.[0-9]\+\.[0-9]\+$/graphiti-core[neo4j,falkordb]==${GRAPHITI_CORE_VERSION}/" pyproject.toml && \
    echo "Regenerating lock file for PyPI graphiti-core..." && \
    rm -f uv.lock && \
    uv lock
    sed -i "s/graphiti-core\[falkordb\]>=0\.16\.0/graphiti-core[neo4j,falkordb]==${GRAPHITI_CORE_VERSION}/" pyproject.toml

# Install Python dependencies (exclude dev dependency group)
# Install Python dependencies
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --no-group dev
    uv sync --no-dev

# Store graphiti-core version
RUN echo "${GRAPHITI_CORE_VERSION}" > /app/mcp/.graphiti-core-version

@@ -58,7 +54,7 @@ COPY config/ ./config/
RUN mkdir -p /var/log/graphiti

# Add Docker labels with version information
ARG MCP_SERVER_VERSION=1.0.1
ARG MCP_SERVER_VERSION=1.0.0
ARG BUILD_DATE
ARG VCS_REF
LABEL org.opencontainers.image.title="Graphiti MCP Server (Standalone)" \

@@ -78,4 +74,4 @@ HEALTHCHECK --interval=10s --timeout=5s --start-period=15s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Run the MCP server
CMD ["uv", "run", "--no-sync", "main.py"]
CMD ["uv", "run", "main.py"]

@@ -1,23 +1,23 @@
services:
  neo4j:
    image: neo4j:latest
    image: neo4j:5.26.0
    ports:
      - "7474:7474" # HTTP
      - "7687:7687" # Bolt
    environment:
      - NEO4J_AUTH=${NEO4J_USER:-neo4j}/${NEO4J_PASSWORD:-kg3Jsdb2}
      - NEO4J_AUTH=${NEO4J_USER:-neo4j}/${NEO4J_PASSWORD:-demodemo}
      - NEO4J_server_memory_heap_initial__size=512m
      - NEO4J_server_memory_heap_max__size=1G
      - NEO4J_server_memory_pagecache_size=512m
    volumes:
      - /data/neo4j/data:/data
      - /data/neo4j/logs:/logs
      - /data/neo4j/plugins:/plugins
      - /data/neo4j/config:/config
      - neo4j_data:/data
      - neo4j_logs:/logs
    healthcheck:
      test: ["CMD", "wget", "-O", "/dev/null", "http://localhost:7474"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    restart: always

  graphiti-mcp:
    image: zepai/knowledge-graph-mcp:standalone

@@ -27,9 +27,9 @@ services:
    build:
      context: ..
      dockerfile: docker/Dockerfile.standalone
    #env_file:
    #  - path: ../.env
    #    required: true
    env_file:
      - path: ../.env
        required: false
    depends_on:
      neo4j:
        condition: service_healthy

@@ -37,18 +37,13 @@ services:
      # Database configuration
      - NEO4J_URI=${NEO4J_URI:-bolt://neo4j:7687}
      - NEO4J_USER=${NEO4J_USER:-neo4j}
      - NEO4J_PASSWORD=${NEO4J_PASSWORD:-kg3Jsdb2}
      - NEO4J_PASSWORD=${NEO4J_PASSWORD:-demodemo}
      - NEO4J_DATABASE=${NEO4J_DATABASE:-neo4j}
      # Application configuration
      - GRAPHITI_GROUP_ID=${GRAPHITI_GROUP_ID:-main}
      - SEMAPHORE_LIMIT=${SEMAPHORE_LIMIT:-10}
      - CONFIG_PATH=/app/mcp/config/config.yaml
      - PATH=/root/.local/bin:${PATH}
      - MCP_SERVER_HOST=gmakai.online
      - OPENAI_API_KEY=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
      - MODEL_NAME=gpt5.1-nano
      - OPENAI_BASE_URL=https://openrouter.ai/api/v1

    volumes:
      - ../config/config-docker-neo4j.yaml:/app/mcp/config/config.yaml:ro
    ports:

@@ -5,7 +5,7 @@ services:
       context: ..
       dockerfile: docker/Dockerfile
       args:
-        GRAPHITI_CORE_VERSION: ${GRAPHITI_CORE_VERSION:-0.23.0}
+        GRAPHITI_CORE_VERSION: ${GRAPHITI_CORE_VERSION:-0.22.0}
         MCP_SERVER_VERSION: ${MCP_SERVER_VERSION:-1.0.0}
         BUILD_DATE: ${BUILD_DATE:-}
         VCS_REF: ${VCS_REF:-}
@@ -1,16 +1,15 @@
 [project]
 name = "mcp-server"
-version = "1.0.1"
+version = "1.0.0"
 description = "Graphiti MCP Server"
 readme = "README.md"
 requires-python = ">=3.10,<4"
 dependencies = [
-    "mcp>=1.9.4",
     "openai>=1.91.0",
-    "graphiti-core[falkordb]>=0.23.1",
+    "graphiti-core[falkordb]>=0.16.0",
     "pydantic-settings>=2.0.0",
     "pyyaml>=6.0",
     "typing-extensions>=4.0.0",
 ]

 [project.optional-dependencies]
@@ -24,6 +23,15 @@ providers = [
     "voyageai>=0.2.3",
     "sentence-transformers>=2.0.0",
 ]
+dev = [
+    "graphiti-core>=0.16.0",
+    "httpx>=0.28.1",
+    "mcp>=1.9.4",
+    "pyright>=1.1.404",
+    "pytest>=8.0.0",
+    "pytest-asyncio>=0.21.0",
+    "ruff>=0.7.1",
+]

 [tool.pyright]
 include = ["src", "tests"]
@@ -50,10 +58,6 @@ select = [
 ]
 ignore = ["E501"]

-[tool.ruff.lint.flake8-tidy-imports.banned-api]
-# Required by Pydantic on Python < 3.12
-"typing.TypedDict".msg = "Use typing_extensions.TypedDict instead."
-
 [tool.ruff.format]
 quote-style = "single"
 indent-style = "space"
@@ -65,12 +69,7 @@ graphiti-core = { path = "../", editable = true }
 [dependency-groups]
 dev = [
-    "faker>=37.12.0",
-    "httpx>=0.28.1",
-    "psutil>=7.1.2",
     "pyright>=1.1.404",
     "pytest>=8.0.0",
     "pytest-asyncio>=0.21.0",
-    "pytest-timeout>=2.4.0",
-    "pytest-xdist>=3.8.0",
     "ruff>=0.7.1",
 ]
@@ -147,7 +147,7 @@ class LLMConfig(BaseModel):
     """LLM configuration."""

     provider: str = Field(default='openai', description='LLM provider')
-    model: str = Field(default='gpt-4o-mini', description='Model name')
+    model: str = Field(default='gpt-4.1', description='Model name')
     temperature: float | None = Field(
         default=None, description='Temperature (optional, defaults to None for reasoning models)'
     )
@@ -245,35 +245,35 @@ class GraphitiService:
             db_provider = self.config.database.provider
             if db_provider.lower() == 'falkordb':
                 raise RuntimeError(
-                    f'\n{"=" * 70}\n'
-                    f'Database Connection Error: FalkorDB is not running\n'
-                    f'{"=" * 70}\n\n'
-                    f'FalkorDB at {db_config["host"]}:{db_config["port"]} is not accessible.\n\n'
-                    f'To start FalkorDB:\n'
-                    f'  - Using Docker Compose: cd mcp_server && docker compose up\n'
-                    f'  - Or run FalkorDB manually: docker run -p 6379:6379 falkordb/falkordb\n\n'
-                    f'{"=" * 70}\n'
+                    f"\n{'='*70}\n"
+                    f"Database Connection Error: FalkorDB is not running\n"
+                    f"{'='*70}\n\n"
+                    f"FalkorDB at {db_config['host']}:{db_config['port']} is not accessible.\n\n"
+                    f"To start FalkorDB:\n"
+                    f"  - Using Docker Compose: cd mcp_server && docker compose up\n"
+                    f"  - Or run FalkorDB manually: docker run -p 6379:6379 falkordb/falkordb\n\n"
+                    f"{'='*70}\n"
                 ) from db_error
             elif db_provider.lower() == 'neo4j':
                 raise RuntimeError(
-                    f'\n{"=" * 70}\n'
-                    f'Database Connection Error: Neo4j is not running\n'
-                    f'{"=" * 70}\n\n'
-                    f'Neo4j at {db_config.get("uri", "unknown")} is not accessible.\n\n'
-                    f'To start Neo4j:\n'
-                    f'  - Using Docker Compose: cd mcp_server && docker compose -f docker/docker-compose-neo4j.yml up\n'
-                    f'  - Or install Neo4j Desktop from: https://neo4j.com/download/\n'
-                    f'  - Or run Neo4j manually: docker run -p 7474:7474 -p 7687:7687 neo4j:latest\n\n'
-                    f'{"=" * 70}\n'
+                    f"\n{'='*70}\n"
+                    f"Database Connection Error: Neo4j is not running\n"
+                    f"{'='*70}\n\n"
+                    f"Neo4j at {db_config.get('uri', 'unknown')} is not accessible.\n\n"
+                    f"To start Neo4j:\n"
+                    f"  - Using Docker Compose: cd mcp_server && docker compose -f docker/docker-compose-neo4j.yml up\n"
+                    f"  - Or install Neo4j Desktop from: https://neo4j.com/download/\n"
+                    f"  - Or run Neo4j manually: docker run -p 7474:7474 -p 7687:7687 neo4j:latest\n\n"
+                    f"{'='*70}\n"
                 ) from db_error
             else:
                 raise RuntimeError(
-                    f'\n{"=" * 70}\n'
-                    f'Database Connection Error: {db_provider} is not running\n'
-                    f'{"=" * 70}\n\n'
-                    f'{db_provider} at {db_config.get("uri", "unknown")} is not accessible.\n\n'
-                    f'Please ensure {db_provider} is running and accessible.\n\n'
-                    f'{"=" * 70}\n'
+                    f"\n{'='*70}\n"
+                    f"Database Connection Error: {db_provider} is not running\n"
+                    f"{'='*70}\n\n"
+                    f"{db_provider} at {db_config.get('uri', 'unknown')} is not accessible.\n\n"
+                    f"Please ensure {db_provider} is running and accessible.\n\n"
+                    f"{'='*70}\n"
                 ) from db_error
             # Re-raise other errors
             raise
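The three error branches above repeat the same `'=' * 70` banner layout; the hunk only flips the f-string quote style, not the text. A hypothetical helper showing the shared shape (`banner` is a name invented here, not a function in the repo):

```python
def banner(title: str, body: str, width: int = 70) -> str:
    """Build a framed error message shaped like the RuntimeError text above."""
    bar = '=' * width
    return f'\n{bar}\n{title}\n{bar}\n\n{body}\n\n{bar}\n'


msg = banner(
    'Database Connection Error: Neo4j is not running',
    'Neo4j at bolt://localhost:7687 is not accessible.',
)
```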
@@ -931,11 +931,6 @@ async def run_mcp_server():
         logger.info(f'  Base URL: http://{display_host}:{mcp.settings.port}/')
         logger.info(f'  MCP Endpoint: http://{display_host}:{mcp.settings.port}/mcp/')
         logger.info('  Transport: HTTP (streamable)')
-
-        # Show FalkorDB Browser UI access if enabled
-        if os.environ.get('BROWSER', '1') == '1':
-            logger.info(f'  FalkorDB Browser UI: http://{display_host}:3000/')
-
         logger.info('=' * 60)
         logger.info('For MCP clients, connect to the /mcp/ endpoint above')

@@ -1,8 +1,6 @@
 """Response type definitions for Graphiti MCP Server."""

-from typing import Any
-
-from typing_extensions import TypedDict
+from typing import Any, TypedDict


 class ErrorResponse(TypedDict):
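The switch above to `typing.TypedDict` is exactly what the (also-deleted) ruff banned-api rule guarded against: on Python < 3.12, Pydantic needs the `typing_extensions` backport to introspect TypedDict fields correctly. A version-aware sketch of the compatible import:

```python
import sys

if sys.version_info >= (3, 12):
    from typing import TypedDict
else:  # pre-3.12: prefer the backport, as the removed ruff rule required
    try:
        from typing_extensions import TypedDict
    except ImportError:  # fallback for illustration only
        from typing import TypedDict


class ErrorResponse(TypedDict):
    error: str


resp: ErrorResponse = {'error': 'connection refused'}
```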
@@ -244,7 +244,7 @@ class TestAsyncErrorHandling:
     async def test_timeout_recovery(self):
         """Test recovery from operation timeouts."""
         async with graphiti_test_client() as (session, group_id):
-            # Create a very large episode that might time out
+            # Create a very large episode that might timeout
             large_content = 'x' * 1000000  # 1MB of data

             with contextlib.suppress(asyncio.TimeoutError):
@@ -464,7 +464,7 @@ class TestErrorHandling:
     async def test_timeout_handling(self):
         """Test timeout handling for long operations."""
         async with GraphitiTestClient() as client:
-            # Simulate a very large episode that might time out
+            # Simulate a very large episode that might timeout
             large_text = 'Large document content. ' * 10000

             result, metric = await client.call_tool_with_metrics(
mcp_server/uv.lock  34 changes (generated)

@@ -648,7 +648,7 @@ wheels = [

 [[package]]
 name = "graphiti-core"
-version = "0.23.1"
+version = "0.22.1rc2"
 source = { editable = "../" }
 dependencies = [
     { name = "diskcache" },
@@ -1074,7 +1074,7 @@ wheels = [

 [[package]]
 name = "mcp-server"
-version = "1.0.1"
+version = "1.0.0"
 source = { virtual = "." }
 dependencies = [
     { name = "graphiti-core", extra = ["falkordb"] },
@@ -1082,13 +1082,21 @@ dependencies = [
     { name = "openai" },
     { name = "pydantic-settings" },
     { name = "pyyaml" },
     { name = "typing-extensions" },
 ]

 [package.optional-dependencies]
 azure = [
     { name = "azure-identity" },
 ]
+dev = [
+    { name = "graphiti-core" },
+    { name = "httpx" },
+    { name = "mcp" },
+    { name = "pyright" },
+    { name = "pytest" },
+    { name = "pytest-asyncio" },
+    { name = "ruff" },
+]
 providers = [
     { name = "anthropic" },
     { name = "google-genai" },
@@ -1100,14 +1108,9 @@ providers = [
 [package.dev-dependencies]
 dev = [
-    { name = "faker" },
-    { name = "httpx" },
-    { name = "psutil" },
     { name = "pyright" },
     { name = "pytest" },
     { name = "pytest-asyncio" },
-    { name = "pytest-timeout" },
-    { name = "pytest-xdist" },
     { name = "ruff" },
 ]

 [package.metadata]
@@ -1115,29 +1118,30 @@ requires-dist = [
     { name = "anthropic", marker = "extra == 'providers'", specifier = ">=0.49.0" },
     { name = "azure-identity", marker = "extra == 'azure'", specifier = ">=1.21.0" },
     { name = "google-genai", marker = "extra == 'providers'", specifier = ">=1.8.0" },
+    { name = "graphiti-core", marker = "extra == 'dev'", editable = "../" },
     { name = "graphiti-core", extras = ["falkordb"], editable = "../" },
     { name = "groq", marker = "extra == 'providers'", specifier = ">=0.2.0" },
+    { name = "httpx", marker = "extra == 'dev'", specifier = ">=0.28.1" },
-    { name = "mcp", specifier = ">=1.9.4" },
+    { name = "mcp", marker = "extra == 'dev'", specifier = ">=1.9.4" },
     { name = "openai", specifier = ">=1.91.0" },
     { name = "pydantic-settings", specifier = ">=2.0.0" },
+    { name = "pyright", marker = "extra == 'dev'", specifier = ">=1.1.404" },
+    { name = "pytest", marker = "extra == 'dev'", specifier = ">=8.0.0" },
+    { name = "pytest-asyncio", marker = "extra == 'dev'", specifier = ">=0.21.0" },
     { name = "pyyaml", specifier = ">=6.0" },
+    { name = "ruff", marker = "extra == 'dev'", specifier = ">=0.7.1" },
     { name = "sentence-transformers", marker = "extra == 'providers'", specifier = ">=2.0.0" },
     { name = "typing-extensions", specifier = ">=4.0.0" },
     { name = "voyageai", marker = "extra == 'providers'", specifier = ">=0.2.3" },
 ]
-provides-extras = ["azure", "providers"]
+provides-extras = ["azure", "providers", "dev"]

 [package.metadata.requires-dev]
 dev = [
-    { name = "faker", specifier = ">=37.12.0" },
-    { name = "httpx", specifier = ">=0.28.1" },
-    { name = "psutil", specifier = ">=7.1.2" },
     { name = "pyright", specifier = ">=1.1.404" },
     { name = "pytest", specifier = ">=8.0.0" },
     { name = "pytest-asyncio", specifier = ">=0.21.0" },
-    { name = "pytest-timeout", specifier = ">=2.4.0" },
-    { name = "pytest-xdist", specifier = ">=3.8.0" },
     { name = "ruff", specifier = ">=0.7.1" },
 ]

 [[package]]
@@ -1,7 +1,7 @@
 [project]
 name = "graphiti-core"
 description = "A temporal graph building library"
-version = "0.24.3"
+version = "0.22.1pre2"
 authors = [
     { name = "Paul Paliychuk", email = "paul@getzep.com" },
     { name = "Preston Rasmussen", email = "preston@getzep.com" },
@@ -90,10 +90,6 @@ select = [
 ]
 ignore = ["E501"]

-[tool.ruff.lint.flake8-tidy-imports.banned-api]
-# Required by Pydantic on Python < 3.12
-"typing.TypedDict".msg = "Use typing_extensions.TypedDict instead."
-
 [tool.ruff.format]
 quote-style = "single"
 indent-style = "space"
@@ -103,4 +99,3 @@ docstring-code-format = true
 include = ["graphiti_core"]
 pythonVersion = "3.10"
 typeCheckingMode = "basic"
-
@@ -447,70 +447,6 @@
       "created_at": "2025-10-30T15:11:58Z",
       "repoId": 840056306,
       "pullRequestNo": 1035
     },
-    {
-      "name": "Galleons2029",
-      "id": 88185941,
-      "comment_id": 3495884964,
-      "created_at": "2025-11-06T08:39:46Z",
-      "repoId": 840056306,
-      "pullRequestNo": 1053
-    },
-    {
-      "name": "supmo668",
-      "id": 28805779,
-      "comment_id": 3550309664,
-      "created_at": "2025-11-19T01:56:25Z",
-      "repoId": 840056306,
-      "pullRequestNo": 1072
-    },
-    {
-      "name": "donbr",
-      "id": 7340008,
-      "comment_id": 3568970102,
-      "created_at": "2025-11-24T05:19:42Z",
-      "repoId": 840056306,
-      "pullRequestNo": 1081
-    },
-    {
-      "name": "apetti1920",
-      "id": 4706645,
-      "comment_id": 3572726648,
-      "created_at": "2025-11-24T21:07:34Z",
-      "repoId": 840056306,
-      "pullRequestNo": 1084
-    },
-    {
-      "name": "ZLBillShaw",
-      "id": 55940186,
-      "comment_id": 3583997833,
-      "created_at": "2025-11-27T02:45:53Z",
-      "repoId": 840056306,
-      "pullRequestNo": 1085
-    },
-    {
-      "name": "ronaldmego",
-      "id": 17481958,
-      "comment_id": 3617267429,
-      "created_at": "2025-12-05T14:59:42Z",
-      "repoId": 840056306,
-      "pullRequestNo": 1094
-    },
-    {
-      "name": "NShumway",
-      "id": 29358113,
-      "comment_id": 3634967978,
-      "created_at": "2025-12-10T01:26:49Z",
-      "repoId": 840056306,
-      "pullRequestNo": 1102
-    },
-    {
-      "name": "husniadil",
-      "id": 10581130,
-      "comment_id": 3650156180,
-      "created_at": "2025-12-14T03:37:59Z",
-      "repoId": 840056306,
-      "pullRequestNo": 1105
-    }
   ]
 }
@@ -81,7 +81,7 @@ class TestAnthropicClientInitialization:
         config = LLMConfig(api_key='test_api_key')
         client = AnthropicClient(config=config, cache=False)

-        assert client.model == 'claude-haiku-4-5-latest'
+        assert client.model == 'claude-3-7-sonnet-latest'

     @patch.dict(os.environ, {'ANTHROPIC_API_KEY': 'env_api_key'})
     def test_init_without_config(self):

@@ -89,7 +89,7 @@ class TestAnthropicClientInitialization:
         client = AnthropicClient(cache=False)

         assert client.config.api_key == 'env_api_key'
-        assert client.model == 'claude-haiku-4-5-latest'
+        assert client.model == 'claude-3-7-sonnet-latest'

     def test_init_with_custom_client(self):
         """Test initialization with a custom AsyncAnthropic client."""
@@ -455,6 +455,7 @@ class TestGeminiClientGenerateResponse:
             ('gemini-2.5-flash', 65536),
             ('gemini-2.5-pro', 65536),
+            ('gemini-2.5-flash-lite', 64000),
             ('models/gemini-2.5-flash-lite-preview-06-17', 64000),
             ('gemini-2.0-flash', 8192),
             ('gemini-1.5-pro', 8192),
             ('gemini-1.5-flash', 8192),
@@ -87,7 +87,7 @@ def test_truncate_at_sentence_strips_trailing_whitespace():

 def test_max_summary_chars_constant():
     """Test that MAX_SUMMARY_CHARS is set to expected value."""
-    assert MAX_SUMMARY_CHARS == 500
+    assert MAX_SUMMARY_CHARS == 250


 def test_truncate_at_sentence_realistic_summary():
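The tests above reference a `truncate_at_sentence` helper whose implementation is not part of this diff. A hypothetical sketch of what a sentence-boundary truncation like the one under test might look like; only `MAX_SUMMARY_CHARS` and the function name come from the diff, the body is assumed:

```python
MAX_SUMMARY_CHARS = 250  # the branch's value; main asserts 500


def truncate_at_sentence(text: str, max_chars: int = MAX_SUMMARY_CHARS) -> str:
    """Hypothetical sketch: cut at the last sentence boundary within max_chars."""
    if len(text) <= max_chars:
        return text
    cut = text[:max_chars]
    dot = cut.rfind('. ')
    # Keep up to and including the last full stop when one exists
    return (cut[:dot + 1] if dot != -1 else cut).rstrip()
```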
uv.lock  127 changes (generated)

@@ -1,5 +1,5 @@
 version = 1
-revision = 2
+revision = 3
 requires-python = ">=3.10, <4"
 resolution-markers = [
     "python_full_version >= '3.14'",
@ -358,84 +358,59 @@ wheels = [
|
|||
|
||||
[[package]]
|
||||
name = "cffi"
|
||||
version = "2.0.0"
|
||||
version = "1.17.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "pycparser", marker = "implementation_name != 'PyPy'" },
|
||||
{ name = "pycparser" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/eb/56/b1ba7935a17738ae8453301356628e8147c79dbb825bcbc73dc7401f9846/cffi-2.0.0.tar.gz", hash = "sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529", size = 523588, upload-time = "2025-09-08T23:24:04.541Z" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/fc/97/c783634659c2920c3fc70419e3af40972dbaf758daa229a7d6ea6135c90d/cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824", size = 516621, upload-time = "2024-09-04T20:45:21.852Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/93/d7/516d984057745a6cd96575eea814fe1edd6646ee6efd552fb7b0921dec83/cffi-2.0.0-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44", size = 184283, upload-time = "2025-09-08T23:22:08.01Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9e/84/ad6a0b408daa859246f57c03efd28e5dd1b33c21737c2db84cae8c237aa5/cffi-2.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49", size = 180504, upload-time = "2025-09-08T23:22:10.637Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/50/bd/b1a6362b80628111e6653c961f987faa55262b4002fcec42308cad1db680/cffi-2.0.0-cp310-cp310-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c", size = 208811, upload-time = "2025-09-08T23:22:12.267Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4f/27/6933a8b2562d7bd1fb595074cf99cc81fc3789f6a6c05cdabb46284a3188/cffi-2.0.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb", size = 216402, upload-time = "2025-09-08T23:22:13.455Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/05/eb/b86f2a2645b62adcfff53b0dd97e8dfafb5c8aa864bd0d9a2c2049a0d551/cffi-2.0.0-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0", size = 203217, upload-time = "2025-09-08T23:22:14.596Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9f/e0/6cbe77a53acf5acc7c08cc186c9928864bd7c005f9efd0d126884858a5fe/cffi-2.0.0-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4", size = 203079, upload-time = "2025-09-08T23:22:15.769Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/98/29/9b366e70e243eb3d14a5cb488dfd3a0b6b2f1fb001a203f653b93ccfac88/cffi-2.0.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453", size = 216475, upload-time = "2025-09-08T23:22:17.427Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/21/7a/13b24e70d2f90a322f2900c5d8e1f14fa7e2a6b3332b7309ba7b2ba51a5a/cffi-2.0.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495", size = 218829, upload-time = "2025-09-08T23:22:19.069Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/60/99/c9dc110974c59cc981b1f5b66e1d8af8af764e00f0293266824d9c4254bc/cffi-2.0.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5", size = 211211, upload-time = "2025-09-08T23:22:20.588Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/49/72/ff2d12dbf21aca1b32a40ed792ee6b40f6dc3a9cf1644bd7ef6e95e0ac5e/cffi-2.0.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb", size = 218036, upload-time = "2025-09-08T23:22:22.143Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e2/cc/027d7fb82e58c48ea717149b03bcadcbdc293553edb283af792bd4bcbb3f/cffi-2.0.0-cp310-cp310-win32.whl", hash = "sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a", size = 172184, upload-time = "2025-09-08T23:22:23.328Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/33/fa/072dd15ae27fbb4e06b437eb6e944e75b068deb09e2a2826039e49ee2045/cffi-2.0.0-cp310-cp310-win_amd64.whl", hash = "sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739", size = 182790, upload-time = "2025-09-08T23:22:24.752Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/12/4a/3dfd5f7850cbf0d06dc84ba9aa00db766b52ca38d8b86e3a38314d52498c/cffi-2.0.0-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe", size = 184344, upload-time = "2025-09-08T23:22:26.456Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4f/8b/f0e4c441227ba756aafbe78f117485b25bb26b1c059d01f137fa6d14896b/cffi-2.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c", size = 180560, upload-time = "2025-09-08T23:22:28.197Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b1/b7/1200d354378ef52ec227395d95c2576330fd22a869f7a70e88e1447eb234/cffi-2.0.0-cp311-cp311-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92", size = 209613, upload-time = "2025-09-08T23:22:29.475Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b8/56/6033f5e86e8cc9bb629f0077ba71679508bdf54a9a5e112a3c0b91870332/cffi-2.0.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93", size = 216476, upload-time = "2025-09-08T23:22:31.063Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/dc/7f/55fecd70f7ece178db2f26128ec41430d8720f2d12ca97bf8f0a628207d5/cffi-2.0.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5", size = 203374, upload-time = "2025-09-08T23:22:32.507Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/84/ef/a7b77c8bdc0f77adc3b46888f1ad54be8f3b7821697a7b89126e829e676a/cffi-2.0.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664", size = 202597, upload-time = "2025-09-08T23:22:34.132Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d7/91/500d892b2bf36529a75b77958edfcd5ad8e2ce4064ce2ecfeab2125d72d1/cffi-2.0.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26", size = 215574, upload-time = "2025-09-08T23:22:35.443Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/44/64/58f6255b62b101093d5df22dcb752596066c7e89dd725e0afaed242a61be/cffi-2.0.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9", size = 218971, upload-time = "2025-09-08T23:22:36.805Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ab/49/fa72cebe2fd8a55fbe14956f9970fe8eb1ac59e5df042f603ef7c8ba0adc/cffi-2.0.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414", size = 211972, upload-time = "2025-09-08T23:22:38.436Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/0b/28/dd0967a76aab36731b6ebfe64dec4e981aff7e0608f60c2d46b46982607d/cffi-2.0.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743", size = 217078, upload-time = "2025-09-08T23:22:39.776Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2b/c0/015b25184413d7ab0a410775fdb4a50fca20f5589b5dab1dbbfa3baad8ce/cffi-2.0.0-cp311-cp311-win32.whl", hash = "sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5", size = 172076, upload-time = "2025-09-08T23:22:40.95Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ae/8f/dc5531155e7070361eb1b7e4c1a9d896d0cb21c49f807a6c03fd63fc877e/cffi-2.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5", size = 182820, upload-time = "2025-09-08T23:22:42.463Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/95/5c/1b493356429f9aecfd56bc171285a4c4ac8697f76e9bbbbb105e537853a1/cffi-2.0.0-cp311-cp311-win_arm64.whl", hash = "sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d", size = 177635, upload-time = "2025-09-08T23:22:43.623Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ea/47/4f61023ea636104d4f16ab488e268b93008c3d0bb76893b1b31db1f96802/cffi-2.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d", size = 185271, upload-time = "2025-09-08T23:22:44.795Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/df/a2/781b623f57358e360d62cdd7a8c681f074a71d445418a776eef0aadb4ab4/cffi-2.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c", size = 181048, upload-time = "2025-09-08T23:22:45.938Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ff/df/a4f0fbd47331ceeba3d37c2e51e9dfc9722498becbeec2bd8bc856c9538a/cffi-2.0.0-cp312-cp312-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe", size = 212529, upload-time = "2025-09-08T23:22:47.349Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d5/72/12b5f8d3865bf0f87cf1404d8c374e7487dcf097a1c91c436e72e6badd83/cffi-2.0.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062", size = 220097, upload-time = "2025-09-08T23:22:48.677Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c2/95/7a135d52a50dfa7c882ab0ac17e8dc11cec9d55d2c18dda414c051c5e69e/cffi-2.0.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e", size = 207983, upload-time = "2025-09-08T23:22:50.06Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/3a/c8/15cb9ada8895957ea171c62dc78ff3e99159ee7adb13c0123c001a2546c1/cffi-2.0.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037", size = 206519, upload-time = "2025-09-08T23:22:51.364Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/78/2d/7fa73dfa841b5ac06c7b8855cfc18622132e365f5b81d02230333ff26e9e/cffi-2.0.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba", size = 219572, upload-time = "2025-09-08T23:22:52.902Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/07/e0/267e57e387b4ca276b90f0434ff88b2c2241ad72b16d31836adddfd6031b/cffi-2.0.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94", size = 222963, upload-time = "2025-09-08T23:22:54.518Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b6/75/1f2747525e06f53efbd878f4d03bac5b859cbc11c633d0fb81432d98a795/cffi-2.0.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187", size = 221361, upload-time = "2025-09-08T23:22:55.867Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7b/2b/2b6435f76bfeb6bbf055596976da087377ede68df465419d192acf00c437/cffi-2.0.0-cp312-cp312-win32.whl", hash = "sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18", size = 172932, upload-time = "2025-09-08T23:22:57.188Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f8/ed/13bd4418627013bec4ed6e54283b1959cf6db888048c7cf4b4c3b5b36002/cffi-2.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5", size = 183557, upload-time = "2025-09-08T23:22:58.351Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/95/31/9f7f93ad2f8eff1dbc1c3656d7ca5bfd8fb52c9d786b4dcf19b2d02217fa/cffi-2.0.0-cp312-cp312-win_arm64.whl", hash = "sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6", size = 177762, upload-time = "2025-09-08T23:22:59.668Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4b/8d/a0a47a0c9e413a658623d014e91e74a50cdd2c423f7ccfd44086ef767f90/cffi-2.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb", size = 185230, upload-time = "2025-09-08T23:23:00.879Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4a/d2/a6c0296814556c68ee32009d9c2ad4f85f2707cdecfd7727951ec228005d/cffi-2.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca", size = 181043, upload-time = "2025-09-08T23:23:02.231Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b0/1e/d22cc63332bd59b06481ceaac49d6c507598642e2230f201649058a7e704/cffi-2.0.0-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b", size = 212446, upload-time = "2025-09-08T23:23:03.472Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a9/f5/a2c23eb03b61a0b8747f211eb716446c826ad66818ddc7810cc2cc19b3f2/cffi-2.0.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b", size = 220101, upload-time = "2025-09-08T23:23:04.792Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f2/7f/e6647792fc5850d634695bc0e6ab4111ae88e89981d35ac269956605feba/cffi-2.0.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2", size = 207948, upload-time = "2025-09-08T23:23:06.127Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/cb/1e/a5a1bd6f1fb30f22573f76533de12a00bf274abcdc55c8edab639078abb6/cffi-2.0.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3", size = 206422, upload-time = "2025-09-08T23:23:07.753Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/98/df/0a1755e750013a2081e863e7cd37e0cdd02664372c754e5560099eb7aa44/cffi-2.0.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26", size = 219499, upload-time = "2025-09-08T23:23:09.648Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/50/e1/a969e687fcf9ea58e6e2a928ad5e2dd88cc12f6f0ab477e9971f2309b57c/cffi-2.0.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c", size = 222928, upload-time = "2025-09-08T23:23:10.928Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/36/54/0362578dd2c9e557a28ac77698ed67323ed5b9775ca9d3fe73fe191bb5d8/cffi-2.0.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b", size = 221302, upload-time = "2025-09-08T23:23:12.42Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/eb/6d/bf9bda840d5f1dfdbf0feca87fbdb64a918a69bca42cfa0ba7b137c48cb8/cffi-2.0.0-cp313-cp313-win32.whl", hash = "sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27", size = 172909, upload-time = "2025-09-08T23:23:14.32Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/37/18/6519e1ee6f5a1e579e04b9ddb6f1676c17368a7aba48299c3759bbc3c8b3/cffi-2.0.0-cp313-cp313-win_amd64.whl", hash = "sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75", size = 183402, upload-time = "2025-09-08T23:23:15.535Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/cb/0e/02ceeec9a7d6ee63bb596121c2c8e9b3a9e150936f4fbef6ca1943e6137c/cffi-2.0.0-cp313-cp313-win_arm64.whl", hash = "sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91", size = 177780, upload-time = "2025-09-08T23:23:16.761Z" },
{ url = "https://files.pythonhosted.org/packages/92/c4/3ce07396253a83250ee98564f8d7e9789fab8e58858f35d07a9a2c78de9f/cffi-2.0.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5", size = 185320, upload-time = "2025-09-08T23:23:18.087Z" },
{ url = "https://files.pythonhosted.org/packages/59/dd/27e9fa567a23931c838c6b02d0764611c62290062a6d4e8ff7863daf9730/cffi-2.0.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13", size = 181487, upload-time = "2025-09-08T23:23:19.622Z" },
{ url = "https://files.pythonhosted.org/packages/d6/43/0e822876f87ea8a4ef95442c3d766a06a51fc5298823f884ef87aaad168c/cffi-2.0.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b", size = 220049, upload-time = "2025-09-08T23:23:20.853Z" },
{ url = "https://files.pythonhosted.org/packages/b4/89/76799151d9c2d2d1ead63c2429da9ea9d7aac304603de0c6e8764e6e8e70/cffi-2.0.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c", size = 207793, upload-time = "2025-09-08T23:23:22.08Z" },
{ url = "https://files.pythonhosted.org/packages/bb/dd/3465b14bb9e24ee24cb88c9e3730f6de63111fffe513492bf8c808a3547e/cffi-2.0.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef", size = 206300, upload-time = "2025-09-08T23:23:23.314Z" },
{ url = "https://files.pythonhosted.org/packages/47/d9/d83e293854571c877a92da46fdec39158f8d7e68da75bf73581225d28e90/cffi-2.0.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775", size = 219244, upload-time = "2025-09-08T23:23:24.541Z" },
{ url = "https://files.pythonhosted.org/packages/2b/0f/1f177e3683aead2bb00f7679a16451d302c436b5cbf2505f0ea8146ef59e/cffi-2.0.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205", size = 222828, upload-time = "2025-09-08T23:23:26.143Z" },
{ url = "https://files.pythonhosted.org/packages/c6/0f/cafacebd4b040e3119dcb32fed8bdef8dfe94da653155f9d0b9dc660166e/cffi-2.0.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1", size = 220926, upload-time = "2025-09-08T23:23:27.873Z" },
{ url = "https://files.pythonhosted.org/packages/3e/aa/df335faa45b395396fcbc03de2dfcab242cd61a9900e914fe682a59170b1/cffi-2.0.0-cp314-cp314-win32.whl", hash = "sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f", size = 175328, upload-time = "2025-09-08T23:23:44.61Z" },
{ url = "https://files.pythonhosted.org/packages/bb/92/882c2d30831744296ce713f0feb4c1cd30f346ef747b530b5318715cc367/cffi-2.0.0-cp314-cp314-win_amd64.whl", hash = "sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25", size = 185650, upload-time = "2025-09-08T23:23:45.848Z" },
{ url = "https://files.pythonhosted.org/packages/9f/2c/98ece204b9d35a7366b5b2c6539c350313ca13932143e79dc133ba757104/cffi-2.0.0-cp314-cp314-win_arm64.whl", hash = "sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad", size = 180687, upload-time = "2025-09-08T23:23:47.105Z" },
{ url = "https://files.pythonhosted.org/packages/3e/61/c768e4d548bfa607abcda77423448df8c471f25dbe64fb2ef6d555eae006/cffi-2.0.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9", size = 188773, upload-time = "2025-09-08T23:23:29.347Z" },
{ url = "https://files.pythonhosted.org/packages/2c/ea/5f76bce7cf6fcd0ab1a1058b5af899bfbef198bea4d5686da88471ea0336/cffi-2.0.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d", size = 185013, upload-time = "2025-09-08T23:23:30.63Z" },
{ url = "https://files.pythonhosted.org/packages/be/b4/c56878d0d1755cf9caa54ba71e5d049479c52f9e4afc230f06822162ab2f/cffi-2.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c", size = 221593, upload-time = "2025-09-08T23:23:31.91Z" },
{ url = "https://files.pythonhosted.org/packages/e0/0d/eb704606dfe8033e7128df5e90fee946bbcb64a04fcdaa97321309004000/cffi-2.0.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8", size = 209354, upload-time = "2025-09-08T23:23:33.214Z" },
{ url = "https://files.pythonhosted.org/packages/d8/19/3c435d727b368ca475fb8742ab97c9cb13a0de600ce86f62eab7fa3eea60/cffi-2.0.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc", size = 208480, upload-time = "2025-09-08T23:23:34.495Z" },
{ url = "https://files.pythonhosted.org/packages/d0/44/681604464ed9541673e486521497406fadcc15b5217c3e326b061696899a/cffi-2.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592", size = 221584, upload-time = "2025-09-08T23:23:36.096Z" },
{ url = "https://files.pythonhosted.org/packages/25/8e/342a504ff018a2825d395d44d63a767dd8ebc927ebda557fecdaca3ac33a/cffi-2.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512", size = 224443, upload-time = "2025-09-08T23:23:37.328Z" },
{ url = "https://files.pythonhosted.org/packages/e1/5e/b666bacbbc60fbf415ba9988324a132c9a7a0448a9a8f125074671c0f2c3/cffi-2.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4", size = 223437, upload-time = "2025-09-08T23:23:38.945Z" },
{ url = "https://files.pythonhosted.org/packages/a0/1d/ec1a60bd1a10daa292d3cd6bb0b359a81607154fb8165f3ec95fe003b85c/cffi-2.0.0-cp314-cp314t-win32.whl", hash = "sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e", size = 180487, upload-time = "2025-09-08T23:23:40.423Z" },
{ url = "https://files.pythonhosted.org/packages/bf/41/4c1168c74fac325c0c8156f04b6749c8b6a8f405bbf91413ba088359f60d/cffi-2.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6", size = 191726, upload-time = "2025-09-08T23:23:41.742Z" },
{ url = "https://files.pythonhosted.org/packages/ae/3a/dbeec9d1ee0844c679f6bb5d6ad4e9f198b1224f4e7a32825f47f6192b0c/cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9", size = 184195, upload-time = "2025-09-08T23:23:43.004Z" },
{ url = "https://files.pythonhosted.org/packages/90/07/f44ca684db4e4f08a3fdc6eeb9a0d15dc6883efc7b8c90357fdbf74e186c/cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14", size = 182191, upload-time = "2024-09-04T20:43:30.027Z" },
{ url = "https://files.pythonhosted.org/packages/08/fd/cc2fedbd887223f9f5d170c96e57cbf655df9831a6546c1727ae13fa977a/cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67", size = 178592, upload-time = "2024-09-04T20:43:32.108Z" },
{ url = "https://files.pythonhosted.org/packages/de/cc/4635c320081c78d6ffc2cab0a76025b691a91204f4aa317d568ff9280a2d/cffi-1.17.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382", size = 426024, upload-time = "2024-09-04T20:43:34.186Z" },
{ url = "https://files.pythonhosted.org/packages/b6/7b/3b2b250f3aab91abe5f8a51ada1b717935fdaec53f790ad4100fe2ec64d1/cffi-1.17.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702", size = 448188, upload-time = "2024-09-04T20:43:36.286Z" },
{ url = "https://files.pythonhosted.org/packages/d3/48/1b9283ebbf0ec065148d8de05d647a986c5f22586b18120020452fff8f5d/cffi-1.17.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3", size = 455571, upload-time = "2024-09-04T20:43:38.586Z" },
{ url = "https://files.pythonhosted.org/packages/40/87/3b8452525437b40f39ca7ff70276679772ee7e8b394934ff60e63b7b090c/cffi-1.17.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6", size = 436687, upload-time = "2024-09-04T20:43:40.084Z" },
{ url = "https://files.pythonhosted.org/packages/8d/fb/4da72871d177d63649ac449aec2e8a29efe0274035880c7af59101ca2232/cffi-1.17.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17", size = 446211, upload-time = "2024-09-04T20:43:41.526Z" },
{ url = "https://files.pythonhosted.org/packages/ab/a0/62f00bcb411332106c02b663b26f3545a9ef136f80d5df746c05878f8c4b/cffi-1.17.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8", size = 461325, upload-time = "2024-09-04T20:43:43.117Z" },
{ url = "https://files.pythonhosted.org/packages/36/83/76127035ed2e7e27b0787604d99da630ac3123bfb02d8e80c633f218a11d/cffi-1.17.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e", size = 438784, upload-time = "2024-09-04T20:43:45.256Z" },
{ url = "https://files.pythonhosted.org/packages/21/81/a6cd025db2f08ac88b901b745c163d884641909641f9b826e8cb87645942/cffi-1.17.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be", size = 461564, upload-time = "2024-09-04T20:43:46.779Z" },
{ url = "https://files.pythonhosted.org/packages/f8/fe/4d41c2f200c4a457933dbd98d3cf4e911870877bd94d9656cc0fcb390681/cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c", size = 171804, upload-time = "2024-09-04T20:43:48.186Z" },
{ url = "https://files.pythonhosted.org/packages/d1/b6/0b0f5ab93b0df4acc49cae758c81fe4e5ef26c3ae2e10cc69249dfd8b3ab/cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15", size = 181299, upload-time = "2024-09-04T20:43:49.812Z" },
{ url = "https://files.pythonhosted.org/packages/6b/f4/927e3a8899e52a27fa57a48607ff7dc91a9ebe97399b357b85a0c7892e00/cffi-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401", size = 182264, upload-time = "2024-09-04T20:43:51.124Z" },
{ url = "https://files.pythonhosted.org/packages/6c/f5/6c3a8efe5f503175aaddcbea6ad0d2c96dad6f5abb205750d1b3df44ef29/cffi-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf", size = 178651, upload-time = "2024-09-04T20:43:52.872Z" },
{ url = "https://files.pythonhosted.org/packages/94/dd/a3f0118e688d1b1a57553da23b16bdade96d2f9bcda4d32e7d2838047ff7/cffi-1.17.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4", size = 445259, upload-time = "2024-09-04T20:43:56.123Z" },
{ url = "https://files.pythonhosted.org/packages/2e/ea/70ce63780f096e16ce8588efe039d3c4f91deb1dc01e9c73a287939c79a6/cffi-1.17.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41", size = 469200, upload-time = "2024-09-04T20:43:57.891Z" },
{ url = "https://files.pythonhosted.org/packages/1c/a0/a4fa9f4f781bda074c3ddd57a572b060fa0df7655d2a4247bbe277200146/cffi-1.17.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1", size = 477235, upload-time = "2024-09-04T20:44:00.18Z" },
{ url = "https://files.pythonhosted.org/packages/62/12/ce8710b5b8affbcdd5c6e367217c242524ad17a02fe5beec3ee339f69f85/cffi-1.17.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6", size = 459721, upload-time = "2024-09-04T20:44:01.585Z" },
{ url = "https://files.pythonhosted.org/packages/ff/6b/d45873c5e0242196f042d555526f92aa9e0c32355a1be1ff8c27f077fd37/cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d", size = 467242, upload-time = "2024-09-04T20:44:03.467Z" },
{ url = "https://files.pythonhosted.org/packages/1a/52/d9a0e523a572fbccf2955f5abe883cfa8bcc570d7faeee06336fbd50c9fc/cffi-1.17.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6", size = 477999, upload-time = "2024-09-04T20:44:05.023Z" },
{ url = "https://files.pythonhosted.org/packages/44/74/f2a2460684a1a2d00ca799ad880d54652841a780c4c97b87754f660c7603/cffi-1.17.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f", size = 454242, upload-time = "2024-09-04T20:44:06.444Z" },
{ url = "https://files.pythonhosted.org/packages/f8/4a/34599cac7dfcd888ff54e801afe06a19c17787dfd94495ab0c8d35fe99fb/cffi-1.17.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b", size = 478604, upload-time = "2024-09-04T20:44:08.206Z" },
{ url = "https://files.pythonhosted.org/packages/34/33/e1b8a1ba29025adbdcda5fb3a36f94c03d771c1b7b12f726ff7fef2ebe36/cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655", size = 171727, upload-time = "2024-09-04T20:44:09.481Z" },
{ url = "https://files.pythonhosted.org/packages/3d/97/50228be003bb2802627d28ec0627837ac0bf35c90cf769812056f235b2d1/cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0", size = 181400, upload-time = "2024-09-04T20:44:10.873Z" },
{ url = "https://files.pythonhosted.org/packages/5a/84/e94227139ee5fb4d600a7a4927f322e1d4aea6fdc50bd3fca8493caba23f/cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4", size = 183178, upload-time = "2024-09-04T20:44:12.232Z" },
{ url = "https://files.pythonhosted.org/packages/da/ee/fb72c2b48656111c4ef27f0f91da355e130a923473bf5ee75c5643d00cca/cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c", size = 178840, upload-time = "2024-09-04T20:44:13.739Z" },
{ url = "https://files.pythonhosted.org/packages/cc/b6/db007700f67d151abadf508cbfd6a1884f57eab90b1bb985c4c8c02b0f28/cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36", size = 454803, upload-time = "2024-09-04T20:44:15.231Z" },
{ url = "https://files.pythonhosted.org/packages/1a/df/f8d151540d8c200eb1c6fba8cd0dfd40904f1b0682ea705c36e6c2e97ab3/cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5", size = 478850, upload-time = "2024-09-04T20:44:17.188Z" },
{ url = "https://files.pythonhosted.org/packages/28/c0/b31116332a547fd2677ae5b78a2ef662dfc8023d67f41b2a83f7c2aa78b1/cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff", size = 485729, upload-time = "2024-09-04T20:44:18.688Z" },
{ url = "https://files.pythonhosted.org/packages/91/2b/9a1ddfa5c7f13cab007a2c9cc295b70fbbda7cb10a286aa6810338e60ea1/cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99", size = 471256, upload-time = "2024-09-04T20:44:20.248Z" },
{ url = "https://files.pythonhosted.org/packages/b2/d5/da47df7004cb17e4955df6a43d14b3b4ae77737dff8bf7f8f333196717bf/cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93", size = 479424, upload-time = "2024-09-04T20:44:21.673Z" },
{ url = "https://files.pythonhosted.org/packages/0b/ac/2a28bcf513e93a219c8a4e8e125534f4f6db03e3179ba1c45e949b76212c/cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3", size = 484568, upload-time = "2024-09-04T20:44:23.245Z" },
{ url = "https://files.pythonhosted.org/packages/d4/38/ca8a4f639065f14ae0f1d9751e70447a261f1a30fa7547a828ae08142465/cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8", size = 488736, upload-time = "2024-09-04T20:44:24.757Z" },
{ url = "https://files.pythonhosted.org/packages/86/c5/28b2d6f799ec0bdecf44dced2ec5ed43e0eb63097b0f58c293583b406582/cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65", size = 172448, upload-time = "2024-09-04T20:44:26.208Z" },
{ url = "https://files.pythonhosted.org/packages/50/b9/db34c4755a7bd1cb2d1603ac3863f22bcecbd1ba29e5ee841a4bc510b294/cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903", size = 181976, upload-time = "2024-09-04T20:44:27.578Z" },
{ url = "https://files.pythonhosted.org/packages/8d/f8/dd6c246b148639254dad4d6803eb6a54e8c85c6e11ec9df2cffa87571dbe/cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e", size = 182989, upload-time = "2024-09-04T20:44:28.956Z" },
{ url = "https://files.pythonhosted.org/packages/8b/f1/672d303ddf17c24fc83afd712316fda78dc6fce1cd53011b839483e1ecc8/cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2", size = 178802, upload-time = "2024-09-04T20:44:30.289Z" },
{ url = "https://files.pythonhosted.org/packages/0e/2d/eab2e858a91fdff70533cab61dcff4a1f55ec60425832ddfdc9cd36bc8af/cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3", size = 454792, upload-time = "2024-09-04T20:44:32.01Z" },
{ url = "https://files.pythonhosted.org/packages/75/b2/fbaec7c4455c604e29388d55599b99ebcc250a60050610fadde58932b7ee/cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683", size = 478893, upload-time = "2024-09-04T20:44:33.606Z" },
{ url = "https://files.pythonhosted.org/packages/4f/b7/6e4a2162178bf1935c336d4da8a9352cccab4d3a5d7914065490f08c0690/cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5", size = 485810, upload-time = "2024-09-04T20:44:35.191Z" },
{ url = "https://files.pythonhosted.org/packages/c7/8a/1d0e4a9c26e54746dc08c2c6c037889124d4f59dffd853a659fa545f1b40/cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4", size = 471200, upload-time = "2024-09-04T20:44:36.743Z" },
{ url = "https://files.pythonhosted.org/packages/26/9f/1aab65a6c0db35f43c4d1b4f580e8df53914310afc10ae0397d29d697af4/cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd", size = 479447, upload-time = "2024-09-04T20:44:38.492Z" },
{ url = "https://files.pythonhosted.org/packages/5f/e4/fb8b3dd8dc0e98edf1135ff067ae070bb32ef9d509d6cb0f538cd6f7483f/cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed", size = 484358, upload-time = "2024-09-04T20:44:40.046Z" },
{ url = "https://files.pythonhosted.org/packages/f1/47/d7145bf2dc04684935d57d67dff9d6d795b2ba2796806bb109864be3a151/cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9", size = 488469, upload-time = "2024-09-04T20:44:41.616Z" },
{ url = "https://files.pythonhosted.org/packages/bf/ee/f94057fa6426481d663b88637a9a10e859e492c73d0384514a17d78ee205/cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d", size = 172475, upload-time = "2024-09-04T20:44:43.733Z" },
{ url = "https://files.pythonhosted.org/packages/7c/fc/6a8cb64e5f0324877d503c854da15d76c1e50eb722e320b15345c4d0c6de/cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a", size = 182009, upload-time = "2024-09-04T20:44:45.309Z" },
]
[[package]]
@@ -808,7 +783,7 @@ wheels = [
[[package]]
name = "graphiti-core"
version = "0.24.3"
version = "0.22.1rc2"
source = { editable = "." }
dependencies = [
{ name = "diskcache" },