Add qdrant test
parent a197177676
commit 0d230c930a
2 changed files with 80 additions and 10 deletions
.github/workflows/test_common.yml (vendored): 20 changes

@@ -83,22 +83,22 @@ jobs:
       - name: Run tests
         run: poetry run pytest tests/

 #      - name: Download NLTK Punkt Tokenizer Models
 #        run: |
 #          python -m nltk.downloader punkt
 #          python -m nltk.downloader stopwords
 #          python -m nltk.downloader wordnet
 #          python -m nltk.downloader averaged_perceptron_tagger

-      - name: Run test script
+      - name: Run default basic pipeline
         env:
           OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
           LLM_API_KEY: ${{ secrets.OPENAI_API_KEY }}
           ENV: 'dev'
         run: poetry run python ./cognee/tests/test_library.py

+      - name: Run default Qdrant
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          LLM_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          VECTOR_DB_URL: ${{ secrets.QDRANT_API_URL }}
+          VECTOR_DB_API_KEY: ${{ secrets.QDRANT_API_KEY }}
+          ENV: 'dev'
+        run: poetry run python ./cognee/tests/test_qdrant.py

       - name: Build with Poetry
         run: poetry build
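The new "Run default Qdrant" step only forwards credentials through environment variables, so a failing CI run can be hard to attribute to the pipeline itself versus the connection. A minimal, hypothetical connectivity check along these lines (not part of this commit; it assumes the qdrant-client package is installed) reads the same variables and lists collections, which fails fast on a bad URL or API key:

import os

from qdrant_client import QdrantClient

# VECTOR_DB_URL / VECTOR_DB_API_KEY are the variables set by the workflow step above.
client = QdrantClient(
    url=os.environ["VECTOR_DB_URL"],
    api_key=os.environ["VECTOR_DB_API_KEY"],
)

# Listing collections exercises authentication without touching any data.
print(client.get_collections())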
cognee/tests/test_qdrant.py (new file): 70 lines

@@ -0,0 +1,70 @@
import logging
import os

logging.basicConfig(level=logging.DEBUG)

async def main():
    from os import path
    import pathlib
    import cognee
    logging.basicConfig(level=logging.DEBUG)

    # print("Working dir: ", str(pathlib.Path(__file__).parent))
    # data_directory_path = str(pathlib.Path(path.join(pathlib.Path(__file__).parent, "../../.data")).resolve())
    # print("Data dir: ", data_directory_path)
    # cognee.config.data_root_directory(data_directory_path)
    #
    # cognee_directory_path = str(pathlib.Path(path.join(pathlib.Path(__file__).parent, "../../.cognee_system")).resolve())
    # print("System dir: ", cognee_directory_path)
    # cognee.config.system_root_directory(cognee_directory_path)

    cognee.config.set_vector_engine_provider("qdrant")

logging.debug("CURRENCT CWD: %s", pathlib.Path(__file__).parent)
|
||||
logging.debug("CURRENCT CWD: %s", os.getcwd())
|
||||
|
||||
    dataset_name = "cs_explanations"

    explanation_file_path = os.path.join(pathlib.Path(__file__).parent, "test_data/Natural_language_processing.txt")
    await cognee.add([explanation_file_path], dataset_name)

    # dataset_name = "short_stories"
    # # data_directory_path is defined above
    # await cognee.add("data://" + "/Users/runner/work/cognee/cognee/./cognee/tests", dataset_name)

    text_1 = """A quantum computer is a computer that takes advantage of quantum mechanical phenomena.
At small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior, specifically quantum superposition and entanglement, using specialized hardware that supports the preparation and manipulation of quantum states.
Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster (with respect to input size scaling) than any modern "classical" computer. In particular, a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the technology is largely experimental and impractical, with several obstacles to useful applications. Moreover, scalable quantum computers do not hold promise for many practical tasks, and for many important tasks quantum speedups are proven impossible.
The basic unit of information in quantum computing is the qubit, similar to the bit in traditional digital electronics. Unlike a classical bit, a qubit can exist in a superposition of its two "basis" states. When measuring a qubit, the result is a probabilistic output of a classical bit, therefore making quantum computers nondeterministic in general. If a quantum computer manipulates the qubit in a particular way, wave interference effects can amplify the desired measurement results. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform calculations efficiently and quickly.
Physically engineering high-quality qubits has proven challenging. If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations. Paradoxically, perfectly isolating qubits is also undesirable because quantum computations typically need to initialize qubits, perform controlled qubit interactions, and measure the resulting quantum states. Each of those operations introduces errors and suffers from noise, and such inaccuracies accumulate.
In principle, a non-quantum (classical) computer can solve the same computational problems as a quantum computer, given enough time. Quantum advantage comes in the form of time complexity rather than computability, and quantum complexity theory shows that some quantum algorithms for carefully selected tasks require exponentially fewer computational steps than the best known non-quantum algorithms. Such tasks can in theory be solved on a large-scale quantum computer whereas classical computers would not finish computations in any reasonable amount of time. However, quantum speedup is not universal or even typical across computational tasks, since basic tasks such as sorting are proven to not allow any asymptotic quantum speedup. Claims of quantum supremacy have drawn significant attention to the discipline, but are demonstrated on contrived tasks, while near-term practical use cases remain limited.
"""

    text_2 = """A large language model (LLM) is a language model notable for its ability to achieve general-purpose language generation and other natural language processing tasks such as classification. LLMs acquire these abilities by learning statistical relationships from text documents during a computationally intensive self-supervised and semi-supervised training process. LLMs can be used for text generation, a form of generative AI, by taking an input text and repeatedly predicting the next token or word.
LLMs are artificial neural networks. The largest and most capable, as of March 2024, are built with a decoder-only transformer-based architecture while some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model).
Up to 2020, fine tuning was the only way a model could be adapted to be able to accomplish specific tasks. Larger sized models, such as GPT-3, however, can be prompt-engineered to achieve similar results.[6] They are thought to acquire knowledge about syntax, semantics and "ontology" inherent in human language corpora, but also inaccuracies and biases present in the corpora.
Some notable LLMs are OpenAI's GPT series of models (e.g., GPT-3.5 and GPT-4, used in ChatGPT and Microsoft Copilot), Google's PaLM and Gemini (the latter of which is currently used in the chatbot of the same name), xAI's Grok, Meta's LLaMA family of open-source models, Anthropic's Claude models, Mistral AI's open source models, and Databricks' open source DBRX.
"""

    await cognee.cognify(["cs_explanations"])

    search_results = await cognee.search("SIMILARITY", {"query": "computer science"})
    assert len(search_results) != 0, "The search results list is empty."
    print("The search results list is not empty.")

    search_results = await cognee.search("CATEGORIES", {"query": "DefaultGraphModel__default_user"})
    assert len(search_results) != 0, "The search results list is empty."
    print("The search results list is not empty.")

    search_results = await cognee.search("NEIGHBOR", {"query": "DefaultGraphModel__default_user"})
    assert len(search_results) != 0, "The search results list is empty."
    print("The search results list is not empty.")

    search_results = await cognee.search("SUMMARY", {"query": "Work and computers"})
    assert len(search_results) != 0, "The search results list is empty."
    print("The search results list is not empty.")

    search_results = await cognee.search("ADJACENT", {"query": "DefaultGraphModel__default_user"})
    assert len(search_results) != 0, "The search results list is empty."
    print("The search results list is not empty.")
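The test only defines an async main; when invoked directly, as the "Run default Qdrant" workflow step does, it presumably relies on a standard asyncio entry point along these lines (an assumption, since the tail of the file is not shown above):

if __name__ == "__main__":
    import asyncio

    # Run the async test end to end, e.g. via
    # `poetry run python ./cognee/tests/test_qdrant.py`.
    asyncio.run(main())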