Comments (5)
pip 23.2.1 from /opt/homebrew/lib/python3.11/site-packages/pip (python 3.11)
same question
from open-interpreter.
Could you try one of these two things?
- Try installing llama-cpp-python separately with `pip install llama-cpp-python`. I have personally run into a lot of issues installing it, so this could be one of those. (The usual error is that it asks you to install the C++ build tools, I think.)
- Create a separate environment and try the install again, in case there is a conflict between packages.
I am not sure, but maybe these steps will help.
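A minimal sketch of the second suggestion, a clean environment for the install (the environment name is an example, and on Windows the activation script lives under `llama-env\Scripts\` instead):

```shell
# Create a fresh virtual environment so conflicting packages
# can't interfere with the llama-cpp-python build.
python3 -m venv llama-env

# Activate it (Linux/macOS; on Windows run llama-env\Scripts\activate).
. llama-env/bin/activate

# Install inside the clean environment.
pip install --upgrade pip
pip install llama-cpp-python
```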
I got the C++ error, so I'm installing all 19 GB of the Visual Studio stuff.
I'm also installing on my Linux box, and after the install it can't find `interpreter`.
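Not sure of the exact cause on the Linux side, but a missing `interpreter` command is often just a PATH issue: user-level pip installs put console scripts in `~/.local/bin`, which may not be on PATH. A sketch of how to check (the script directory is a typical default, not verified for this machine):

```shell
# Confirm the package actually installed for this Python.
python3 -m pip show open-interpreter

# User-level console scripts usually land here; add it to PATH if missing.
export PATH="$HOME/.local/bin:$PATH"

# Now the entry point should resolve.
interpreter --help
```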
I downloaded all of the C++ stuff and reinstalled llama-cpp-python. This is my terminal output:
`PS C:\Users\15702> pip install llama-cpp-python
Collecting llama-cpp-python
  Using cached llama_cpp_python-0.1.83.tar.gz (1.8 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
  Obtaining dependency information for typing-extensions>=4.5.0 from https://files.pythonhosted.org/packages/ec/6b/63cc3df74987c36fe26157ee12e09e8f9db4de771e0f3404263117e75b95/typing_extensions-4.7.1-py3-none-any.whl.metadata
  Using cached typing_extensions-4.7.1-py3-none-any.whl.metadata (3.1 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
  Obtaining dependency information for numpy>=1.20.0 from https://files.pythonhosted.org/packages/b7/db/4d37359e2c9cf8bf071c08b8a6f7374648a5ab2e76e2e22e3b808f81d507/numpy-1.25.2-cp310-cp310-win_amd64.whl.metadata
  Using cached numpy-1.25.2-cp310-cp310-win_amd64.whl.metadata (5.7 kB)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Obtaining dependency information for diskcache>=5.6.1 from https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl.metadata
  Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Using cached numpy-1.25.2-cp310-cp310-win_amd64.whl (15.6 MB)
Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.83-cp310-cp310-win_amd64.whl size=1641494 sha256=976a07263f0e0b93c2d00ce826774d551cae0be8757b07adcaac62039619beca
  Stored in directory: c:\users\15702\appdata\local\pip\cache\wheels\3f\39\6f\3e75230ce84bb465df194bca6c0c7b936dc4b0b3c83389688d
Successfully built llama-cpp-python
Installing collected packages: typing-extensions, numpy, diskcache, llama-cpp-python
Successfully installed diskcache-5.6.3 llama-cpp-python-0.1.83 numpy-1.25.2 typing-extensions-4.7.1`
But, when I run it:
`PS C:\Users\15702> interpreter --local
Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.
[?] Parameter count (smaller is faster, larger is more capable): 7B
7B
16B
34B
[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB
Low | Size: 3.01 GB, RAM usage: 5.51 GB
Medium | Size: 4.24 GB, RAM usage: 6.74 GB
High | Size: 7.16 GB, RAM usage: 9.66 GB
[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): N
[?] Code-Llama interface package not found. Install llama-cpp-python? (Y/n): Y
Fatal Python error: _Py_HashRandomization_Init: failed to get random numbers to initialize Python
Python runtime state: preinitialized
Error during installation with OpenBLAS: Command
'['C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe', '-m', 'pip', 'install',
'llama-cpp-python']' returned non-zero exit status 1.
Failed to install Code-LLama.
We have likely not built the proper Code-Llama
support for your system.
(Running language models locally is a difficult task! If you have insight into the best way to implement this across
platforms/architectures, please join the Open Interpreter community Discord and consider contributing the project's
development.)
Please press enter to switch to GPT-4
(recommended).`
So it's acting as if llama-cpp-python was never installed.
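One thing worth checking (a guess, not confirmed from the logs): pip may be installing into a different Python than the one the `interpreter` subprocess uses. These commands compare the two:

```shell
# Which Python does the plain `python` command resolve to?
python -c "import sys; print(sys.executable)"

# Where did pip actually put llama-cpp-python?
python -m pip show llama-cpp-python

# Can that same interpreter import it? If this fails, the wheel
# went into a different Python installation.
python -c "import llama_cpp; print(llama_cpp.__file__)"
```

If the paths disagree, installing with `python -m pip install llama-cpp-python` (using the exact Python that Open Interpreter runs under) would rule out a mismatch.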
This is now a duplicate of #167. If you still need help, please leave a comment on that issue.