CI: 32 GB machine for Ollama tests (#1857)
<!-- .github/pull_request_template.md -->

## Description

Recently the Llama test started failing with `model requires more system memory (8.9 GiB) than is available (8.4 GiB)`. Due to the `cgroup` configuration, only 8 GB is available to containers running on `buildjet-4vcpu-ubuntu-2204`. The decision is to switch the machine to `buildjet-8vcpu-ubuntu-2204`, which costs $0.0016 per minute. Tentatively changed the model to `phi3:mini`. Any other ideas are welcome.

<!-- Please provide a clear, human-generated description of the changes in this PR. DO NOT use AI-generated descriptions. We want to understand your thought process and reasoning. -->

## Type of Change
<!-- Please check the relevant option -->
- [ ] Bug fix (non-breaking change that fixes an issue)
- [ ] New feature (non-breaking change that adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation update
- [ ] Code refactoring
- [ ] Performance improvement
- [x] Other (please specify):

## Screenshots/Videos (if applicable)
<!-- Add screenshots or videos to help explain your changes -->

## Pre-submission Checklist
<!-- Please check all boxes that apply before submitting your PR -->
- [ ] **I have tested my changes thoroughly before submitting this PR**
- [ ] **This PR contains minimal changes necessary to address the issue/feature**
- [ ] My code follows the project's coding standards and style guidelines
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have added necessary documentation (if applicable)
- [ ] All new and existing tests pass
- [ ] I have searched existing PRs to ensure this change hasn't been submitted already
- [ ] I have linked any relevant issues in the description
- [ ] My commits have clear and descriptive messages

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.
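The quoted failure comes from the memory limit the container's cgroup imposes, which can be lower than the machine's physical RAM. A minimal sketch (not part of this PR) of how one might read that limit from inside a runner or container, covering both cgroup v1 and v2 paths:

```shell
#!/bin/sh
# Sketch: report the memory limit a container actually sees via cgroups.
# Paths are the standard kernel interfaces; "unknown" if neither exists.
limit="unknown"
if [ -r /sys/fs/cgroup/memory.max ]; then
  # cgroup v2: file contains a byte count or the literal string "max"
  limit=$(cat /sys/fs/cgroup/memory.max)
elif [ -r /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
  # cgroup v1: file contains a byte count
  limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
fi
echo "cgroup memory limit: $limit"
```

On `buildjet-4vcpu-ubuntu-2204` this would show a limit below the 8.9 GiB the model needs, which is why upgrading the runner size resolves the error.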
This commit is contained in:
commit c17f838034

1 changed file with 2 additions and 15 deletions

.github/workflows/test_ollama.yml (vendored)
```diff
@@ -7,13 +7,8 @@ jobs:
   run_ollama_test:
-    # needs 16 Gb RAM for phi4
-    runs-on: buildjet-4vcpu-ubuntu-2204
-    # services:
-    #   ollama:
-    #     image: ollama/ollama
-    #     ports:
-    #       - 11434:11434
+    # needs 32 Gb RAM for phi4 in a container
+    runs-on: buildjet-8vcpu-ubuntu-2204
 
     steps:
       - name: Checkout repository
@@ -28,14 +23,6 @@ jobs:
       run: |
         uv add torch
 
-    # - name: Install ollama
-    #   run: curl -fsSL https://ollama.com/install.sh | sh
-    # - name: Run ollama
-    #   run: |
-    #     ollama serve --openai &
-    #     ollama pull llama3.2 &
-    #     ollama pull avr/sfr-embedding-mistral:latest
-
     - name: Start Ollama container
       run: |
         docker run -d --name ollama -p 11434:11434 ollama/ollama
```
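The description mentions tentatively switching the model to `phi3:mini`. A hypothetical follow-up workflow step (not part of this diff; the step name is illustrative) that pulls the smaller model into the already-running container might look like:

```yaml
# Hypothetical step, not in this PR: pull the smaller model inside the
# Ollama container started by the "Start Ollama container" step above.
- name: Pull phi3:mini
  run: docker exec ollama ollama pull phi3:mini
```

Pulling a smaller model reduces the resident memory the Ollama server needs, which complements the move to the larger runner.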