Comments (6)
Try installing it manually:
pip install --upgrade llama-cpp-python
That should solve it.
from open-interpreter.
Thanks, that worked, but it seems there is still an issue with downloading the models on Windows.
Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.
[?] Parameter count (smaller is faster, larger is more capable): 34B
7B
16B
> 34B
[?] Quality (lower is faster, higher is more capable): Medium | Size: 20.22 GB, RAM usage: 22.72 GB
Low | Size: 14.21 GB, RAM usage: 16.71 GB
> Medium | Size: 20.22 GB, RAM usage: 22.72 GB
High | Size: 35.79 GB, RAM usage: 38.29 GB
[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): Y
[?] This instance of `Code-Llama` was not found. Would you like to download it? (Y/n):
curl: (3) URL using bad/illegal format or missing URL
curl: (3) URL using bad/illegal format or missing URL
curl: (3) URL using bad/illegal format or missing URL
Finished downloading `Code-Llama`.
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Python311\Scripts\interpreter.exe\__main__.py", line 7, in <module>
File "C:\Python311\Lib\site-packages\interpreter\interpreter.py", line 90, in cli
cli(self)
File "C:\Python311\Lib\site-packages\interpreter\cli.py", line 59, in cli
interpreter.chat()
File "C:\Python311\Lib\site-packages\interpreter\interpreter.py", line 148, in chat
self.llama_instance = get_llama_2_instance()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\interpreter\llama_2.py", line 176, in get_llama_2_instance
llama_2 = Llama(model_path=model_path, n_gpu_layers=n_gpu_layers, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\llama_cpp\llama.py", line 312, in __init__
raise ValueError(f"Model path does not exist: {model_path}")
ValueError: Model path does not exist: C:\Users\janwe\AppData\Local\Open Interpreter\Open Interpreter\models\codellama-34b.Q4_K_M.gguf
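The ValueError above is raised by llama-cpp-python when the .gguf file is not at the expected path, i.e. the earlier curl failures meant nothing was actually downloaded. A minimal sketch of an equivalent early check (`resolve_model_path` is a hypothetical helper, not part of Open Interpreter):

```python
from pathlib import Path

def resolve_model_path(models_dir: str, filename: str) -> Path:
    """Return the full model path, failing early with a clearer message
    when the .gguf file is missing, instead of letting Llama() raise."""
    path = Path(models_dir) / filename
    if not path.is_file():
        raise FileNotFoundError(
            f"Model file not found: {path}. "
            "Download the matching .gguf and place it in this directory."
        )
    return path
```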
from open-interpreter.
You can manually download the models and place them at
C:\Users\janwe\AppData\Local\Open Interpreter\Open Interpreter\models\
I found these links in the code that downloads the models:
models = {
    '7B': {
        'Low': {'URL': 'https://huggingface.co/TheBloke/CodeLlama-7B-GGUF/resolve/main/codellama-7b.Q2_K.gguf', 'Size': '3.01 GB', 'RAM': '5.51 GB'},
        'Medium': {'URL': 'https://huggingface.co/TheBloke/CodeLlama-7B-GGUF/resolve/main/codellama-7b.Q4_K_M.gguf', 'Size': '4.24 GB', 'RAM': '6.74 GB'},
        'High': {'URL': 'https://huggingface.co/TheBloke/CodeLlama-7B-GGUF/resolve/main/codellama-7b.Q8_0.gguf', 'Size': '7.16 GB', 'RAM': '9.66 GB'}
    },
    '16B': {
        'Low': {'URL': 'https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/codellama-13b.Q2_K.gguf', 'Size': '5.66 GB', 'RAM': '8.16 GB'},
        'Medium': {'URL': 'https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/codellama-13b.Q4_K_M.gguf', 'Size': '8.06 GB', 'RAM': '10.56 GB'},
        'High': {'URL': 'https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/codellama-13b.Q8_0.gguf', 'Size': '13.83 GB', 'RAM': '16.33 GB'}
    },
    '34B': {
        'Low': {'URL': 'https://huggingface.co/TheBloke/CodeLlama-34B-GGUF/resolve/main/codellama-34b.Q2_K.gguf', 'Size': '14.21 GB', 'RAM': '16.71 GB'},
        'Medium': {'URL': 'https://huggingface.co/TheBloke/CodeLlama-34B-GGUF/resolve/main/codellama-34b.Q4_K_M.gguf', 'Size': '20.22 GB', 'RAM': '22.72 GB'},
        'High': {'URL': 'https://huggingface.co/TheBloke/CodeLlama-34B-GGUF/resolve/main/codellama-34b.Q8_0.gguf', 'Size': '35.79 GB', 'RAM': '38.29 GB'}
    }
}
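The URL table above can be turned into a small helper that builds the curl command for a chosen size and quality, so the right file lands in the models directory with its expected filename. A sketch; `MODEL_URLS` copies two entries from the dict above, and `download_command` is a hypothetical helper, not Open Interpreter code:

```python
# Copied from the `models` dict above; only the 34B entries are shown.
MODEL_URLS = {
    ('34B', 'Low'): 'https://huggingface.co/TheBloke/CodeLlama-34B-GGUF/resolve/main/codellama-34b.Q2_K.gguf',
    ('34B', 'Medium'): 'https://huggingface.co/TheBloke/CodeLlama-34B-GGUF/resolve/main/codellama-34b.Q4_K_M.gguf',
}

def download_command(params: str, quality: str, models_dir: str) -> str:
    """Return a curl invocation that saves the .gguf into models_dir."""
    url = MODEL_URLS[(params, quality)]
    filename = url.rsplit('/', 1)[-1]
    # -L follows Hugging Face's redirect to the actual file host;
    # -o keeps the filename the loader expects (see the traceback above).
    return f'curl -L -o "{models_dir}\\{filename}" "{url}"'

print(download_command('34B', 'Medium',
                       r'C:\Users\janwe\AppData\Local\Open Interpreter\Open Interpreter\models'))
```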
from open-interpreter.
Where do I put the downloaded model? ROOT/models/ doesn't work for me.
from open-interpreter.
Try installing it manually:
pip install --upgrade llama-cpp-python
That should solve it.
from open-interpreter.
I'm closing this issue. Feel free to open it back up if the issue is not resolved.
from open-interpreter.