openinterpreter / open-interpreter

A natural language interface for computers

Home Page: http://openinterpreter.com/

License: GNU Affero General Public License v3.0

Python 98.29% Shell 0.73% PowerShell 0.85% Dockerfile 0.13%
chatgpt gpt-4 interpreter javascript nodejs python

open-interpreter's People

Contributors

aj47, akuten1298, amazingct, birbbit, cobular, codeacme17, cyanidebyte, eltociear, ericrallen, ihgalis, imapersonman, jordanbtucker, killianlucas, krrishdholakia, maclean-d, merlinfrombelgium, mikebirdtech, minamorl, mocy, monstercoder, nirvor, notnaton, okineadev, oliverpalonkorp, rainrat, steve235lab, tanmaydoesai, thefazzer, tyfiero, wsbao


open-interpreter's Issues

Planning to implement open-source models?

Hi,
Are you planning to implement open-source models?
Especially since many recent models are even better than GPT-4 itself on a number of tasks.
Thanks!

AttributeError: 'NoneType' object has no attribute 'create_chat_completion'

```
▌ Model set to Code-Llama

Open Interpreter will require approval before running code. Use interpreter -y to bypass this.

Press CTRL-C to exit.

> code a function for factorials

Traceback (most recent call last):
  File "/home/singh/miniconda3/envs/oi/bin/interpreter", line 8, in <module>
    sys.exit(cli())
  File "/home/singh/miniconda3/envs/oi/lib/python3.10/site-packages/interpreter/interpreter.py", line 91, in cli
    cli(self)
  File "/home/singh/miniconda3/envs/oi/lib/python3.10/site-packages/interpreter/cli.py", line 42, in cli
    interpreter.chat()
  File "/home/singh/miniconda3/envs/oi/lib/python3.10/site-packages/interpreter/interpreter.py", line 245, in chat
    self.respond()
  File "/home/singh/miniconda3/envs/oi/lib/python3.10/site-packages/interpreter/interpreter.py", line 328, in respond
    response = self.llama_instance.create_chat_completion(
AttributeError: 'NoneType' object has no attribute 'create_chat_completion'
(oi) ➜ ~
```

Replit connection error

Using Python 3.10.0 in conda.

Interpreter opens with an initial chat. When I enter any basic prompt, interpreter crashes with the error below. Is this a problem with my install, or is this a legitimate internal exception? Thank you.

```
▌ Model set to GPT-4

Tip: To run locally, use interpreter --local

Open Interpreter will require approval before running code. Use interpreter -y to bypass this.

Press CTRL-C to exit.

Traceback (most recent call last):
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1092, in _validate_conn
conn.connect()
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/urllib3/connection.py", line 642, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/urllib3/connection.py", line 783, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 469, in ssl_wrap_socket
ssl_sock = ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 513, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/ssl.py", line 512, in wrap_socket
return self.sslsocket_class._create(
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/ssl.py", line 1070, in _create
self.do_handshake()
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/ssl.py", line 1341, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL] unknown error (_ssl.c:997)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
urllib3.exceptions.SSLError: [SSL] unknown error (_ssl.c:997)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='open-procedures.replit.app', port=443): Max retries exceeded with url: /search/?query=%5B%7B%27role%27%3A%20%27user%27%2C%20%27content%27%3A%20%27create%20an%20example%20python%20class%27%7D%5D (Caused by SSLError(SSLError(1, '[SSL] unknown error (_ssl.c:997)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/bboyd/anaconda3/envs/open-interpreter/bin/interpreter", line 8, in <module>
sys.exit(cli())
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/interpreter/interpreter.py", line 90, in cli
cli(self)
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/interpreter/cli.py", line 59, in cli
interpreter.chat()
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/interpreter/interpreter.py", line 216, in chat
self.respond()
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/interpreter/interpreter.py", line 272, in respond
info = self.get_info_for_system_message()
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/interpreter/interpreter.py", line 118, in get_info_for_system_message
relevant_procedures = requests.get(url).json()["procedures"]
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/bboyd/anaconda3/envs/open-interpreter/lib/python3.10/site-packages/requests/adapters.py", line 517, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='open-procedures.replit.app', port=443): Max retries exceeded with url: /search/?query=%5B%7B%27role%27%3A%20%27user%27%2C%20%27content%27%3A%20%27create%20an%20example%20python%20class%27%7D%5D (Caused by SSLError(SSLError(1, '[SSL] unknown error (_ssl.c:997)')))
```
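For context, the query string in that trace is just the conversation URL-encoded: Open Interpreter fetches "relevant procedures" from open-procedures.replit.app before responding. A quick way to inspect the payload, using only the standard library:

```python
# Decode the query string from the traceback above to see what is being sent.
from urllib.parse import unquote

query = "%5B%7B%27role%27%3A%20%27user%27%2C%20%27content%27%3A%20%27create%20an%20example%20python%20class%27%7D%5D"
print(unquote(query))
# [{'role': 'user', 'content': 'create an example python class'}]
```

The SSL failure itself occurs during the TLS handshake, before any request content matters, which points at the local OpenSSL/certificate setup rather than the payload.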

Missing Main Execution Block in `interpreter.py` 🐛

🐛 Bug Report: Missing Main Execution Block in interpreter.py

Description

The interpreter.py script lacks the main execution block (if __name__ == "__main__":) which is necessary to initiate the script's execution when run from the command line. Due to this omission, the script exits silently without any output or error messages when executed with command-line arguments.

🚶 Steps to Reproduce

  1. Clone the repository and navigate to the directory containing interpreter.py.
  2. Run the script from the command line with the --local flag (or any other flag): python interpreter.py --local.
  3. Observe that the script exits silently without initiating the CLI or the chat process.

🤔 Expected Behavior

When the script is executed from the command line, it should initiate the CLI and subsequently the chat process, allowing the user to interact with the program.

😯 Actual Behavior

The script exits silently without any output or error messages, and the CLI or chat process is not initiated.

💡 Suggested Fix

Add the following block of code at the end of the interpreter.py script to initiate the script's execution when run as the main program:

"""
python
if name == "main":
interpreter = Interpreter()
interpreter.cli()
"""

Using code-llama 16B, responses always start with fake information like this

▌ Entered debug mode Model set to Code-Llama

Open Interpreter will require approval before running code. Use interpreter -y to bypass this.

Press CTRL-C to exit.

>

 1. Introduction:

  This is a simple chatbot that can answer some basic questions about the world of human. It uses the
  Dialogflow API to get answers from Dialogflow's database and then it displays them in the terminal.

 2. Requirements:

   • Python3
   • pip3
   • Google Cloud Platform account (optional)

 3. Installation:

   1 Clone this repository using git clone https://github.com/AliMahmoudi/Human-Assistant or download it as a
     zip file and extract it.
   2 Go to the project directory using cd Human-Assistant.
   3 Install all dependencies using pip3 install -r requirements.txt.
   4 Create an account on Dialogflow and create a new agent.
   5 In your Dialogflow agent, go to Settings > Export and Import > Restore from Zip and upload the agent.zip
     file in this repository.

This link, https://github.com/AliMahmoudi/Human-Assistant, does not exist.

Hanging forever if the context is too large

Hello, I'm having a blast experimenting with this thing, and in my opinion it has a lot of potential.

I'd like to report one annoying issue though: the program hangs forever on the response if the context grows too large. Are there any plans to make it summarize the context when it gets too big?
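No such summarization exists in this thread; until something like it lands, one reader-side workaround is to trim the oldest messages before they are sent. A minimal sketch, using character count as a crude token proxy (the budget number is invented):

```python
def trim_history(messages, budget_chars=12000):
    """Drop the oldest messages until the history fits a rough size budget."""
    trimmed = list(messages)
    while len(trimmed) > 1 and sum(len(m["content"]) for m in trimmed) > budget_chars:
        trimmed.pop(0)  # drop the oldest message first
    return trimmed
```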

pip install open-interpreter

On trying to install using pip, I receive:

```
ERROR: Could not find a version that satisfies the requirement open-interpreter (from versions: none)
ERROR: No matching distribution found for open-interpreter
```
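A common cause of this error (an assumption; the report doesn't say) is that no published release supports the running Python version, in which case pip reports "from versions: none". Checking which interpreter pip is resolving against is a quick first step:

```python
# Confirm the Python version pip would install into; compare it against the
# requires-python range shown on the open-interpreter PyPI page.
import sys
print(sys.executable)
print(sys.version)
```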

Hangs on execution of bash script

Script hangs (infinite wait) on execution of any shell command. Ctrl+C at this point returns to the interpreter shell.

Running on Arch Linux, installed with pipx

Debug output as follows:

Running function:
{
    'role': 'assistant',
    'content': "The user wants to know the filetype of a specific file. We can achieve this by using the `file` command in a shell script. This command will return the type of the file.\n\nPlan:\n1. Use the `file` command to determine the type of the file at the given path.\n2. Return the output to the user.\n\nLet's execute the plan.",
    'function_call': <OpenAIObject at 0x7faf5cd4f1d0> JSON: {
        "name": "run_code",
        "arguments": "{\n  \"language\": \"shell\",\n  \"code\": \"file /home/X/z.html\"\n}",
        "parsed_arguments": {
            "language": "shell",
            "code": "file /home/X/z.html"
        }
    }
}

  file /home/X/z.html

  Would you like to run this code? (y/n)

  y

Running code:
echo "ACTIVE_LINE:1"
file /home/X/z.html

echo "END_OF_EXECUTION"

Script hangs here.
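From that debug output, the runner appears to wrap the shell code with echo markers and stream the subprocess's stdout until it sees END_OF_EXECUTION, so anything that prevents the final echo from being reached (or read) hangs the loop. A minimal sketch of that marker protocol, assuming the markers work as shown above:

```python
import subprocess

code = "file /home/X/z.html"
wrapped = f'echo "ACTIVE_LINE:1"\n{code}\necho "END_OF_EXECUTION"\n'

proc = subprocess.Popen(
    ["bash"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)
proc.stdin.write(wrapped)
proc.stdin.close()

for line in proc.stdout:  # blocks forever if the end marker never arrives
    if line.strip() == "END_OF_EXECUTION":
        break
    print(line, end="")
```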

Cannot use GPU

I selected local mode, downloaded Llama, and chose the GPU option, but the GPU is never used; it runs entirely on the CPU.
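For reference, llama-cpp-python only offloads to the GPU when the wheel was built with GPU support (cuBLAS or Metal) and a nonzero `n_gpu_layers` is passed. A minimal sketch (model path and layer count are illustrative):

```python
from llama_cpp import Llama

# n_gpu_layers=0 keeps everything on the CPU even on a GPU-enabled build.
llm = Llama(model_path="codellama-7b.Q8_0.gguf", n_gpu_layers=35)
```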

AttributeError: 'Llama' object has no attribute 'ctx'

When trying to run `interpreter -y --local`:

Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.

[?] Parameter count (smaller is faster, larger is more capable): 7B

7B
16B
34B

[?] Quality (lower is faster, higher is more capable): High | Size: 7.16 GB, RAM usage: 9.66 GB
Low | Size: 3.01 GB, RAM usage: 5.51 GB
Medium | Size: 4.24 GB, RAM usage: 6.74 GB

High | Size: 7.16 GB, RAM usage: 9.66 GB

[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): n

[?] This instance of Code-Llama was not found. Would you like to download it? (Y/n): y

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1145  100  1145    0     0   9656      0 --:--:-- --:--:-- --:--:--  9703
100 6829M  100 6829M    0     0  46.0M      0  0:02:28  0:02:28 --:--:-- 52.1M

Finished downloading Code-Llama.

llama.cpp: loading model from /Users/dwayne/Library/Application Support/Open Interpreter/models/codellama-7b.Q8_0.gguf
error loading model: unknown (magic, version) combination: 46554747, 00000001; is this really a GGML file?
llama_load_model_from_file: failed to load model
Traceback (most recent call last):

AttributeError: 'Llama' object has no attribute 'ctx'
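The magic number in that error explains the problem: `46554747` read as ASCII bytes is `GGUF`, i.e. the downloaded model is in the newer GGUF format while the installed llama-cpp-python build only understands the older GGML format. You can confirm what was downloaded (path taken from the log above):

```python
# The first four bytes of the file are its format magic.
path = "/Users/dwayne/Library/Application Support/Open Interpreter/models/codellama-7b.Q8_0.gguf"
with open(path, "rb") as f:
    print(f.read(4))  # b'GGUF' means a GGUF-capable llama-cpp-python is required
```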

Problem installing `llama-cpp-python`

I'm using Windows 11 and have trouble installing Open Interpreter.

Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.

[?] Parameter count (smaller is faster, larger is more capable): 34B
   7B
   16B
 > 34B

[?] Quality (lower is faster, higher is more capable): Medium | Size: 20.22 GB, RAM usage: 22.72 GB
   Low | Size: 14.21 GB, RAM usage: 16.71 GB
 > Medium | Size: 20.22 GB, RAM usage: 22.72 GB
   High | Size: 35.79 GB, RAM usage: 38.29 GB

[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): Y

[?] This instance of `Code-Llama` was not found. Would you like to download it? (Y/n): Y


curl: (3) URL using bad/illegal format or missing URL
curl: (3) URL using bad/illegal format or missing URL
curl: (3) URL using bad/illegal format or missing URL

 Finished downloading `Code-Llama`.

[?] `Code-Llama` interface package not found. Install `llama-cpp-python`? (Y/n): Y

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Python311\Lib\site-packages\pip\__main__.py", line 22, in <module>
    from pip._internal.cli.main import main as _main
  File "C:\Python311\Lib\site-packages\pip\_internal\cli\main.py", line 10, in <module>
    from pip._internal.cli.autocompletion import autocomplete
  File "C:\Python311\Lib\site-packages\pip\_internal\cli\autocompletion.py", line 10, in <module>
    from pip._internal.cli.main_parser import create_main_parser
  File "C:\Python311\Lib\site-packages\pip\_internal\cli\main_parser.py", line 9, in <module>
    from pip._internal.build_env import get_runnable_pip
  File "C:\Python311\Lib\site-packages\pip\_internal\build_env.py", line 19, in <module>
    from pip._internal.cli.spinners import open_spinner
  File "C:\Python311\Lib\site-packages\pip\_internal\cli\spinners.py", line 9, in <module>
    from pip._internal.utils.logging import get_indentation
  File "C:\Python311\Lib\site-packages\pip\_internal\utils\logging.py", line 29, in <module>
    from pip._internal.utils.misc import ensure_dir
  File "C:\Python311\Lib\site-packages\pip\_internal\utils\misc.py", line 37, in <module>
    from pip._vendor.tenacity import retry, stop_after_delay, wait_fixed
  File "C:\Python311\Lib\site-packages\pip\_vendor\tenacity\__init__.py", line 548, in <module>
    from pip._vendor.tenacity._asyncio import AsyncRetrying  # noqa:E402,I100
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\pip\_vendor\tenacity\_asyncio.py", line 21, in <module>
    from asyncio import sleep
  File "C:\Python311\Lib\asyncio\__init__.py", line 42, in <module>
    from .windows_events import *
  File "C:\Python311\Lib\asyncio\windows_events.py", line 8, in <module>
    import _overlapped
OSError: [WinError 10106] The requested service provider could not be loaded or initialized
Error during installation with cuBLAS: Command '['C:\\Python311\\python.exe', '-m', 'pip', 'install',
'llama-cpp-python']' returned non-zero exit status 1.
Traceback (most recent call last):
  File "C:\Python311\Lib\site-packages\interpreter\llama_2.py", line 94, in get_llama_2_instance
    from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Python311\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Python311\Lib\site-packages\interpreter\interpreter.py", line 90, in cli
    cli(self)
  File "C:\Python311\Lib\site-packages\interpreter\cli.py", line 59, in cli
    interpreter.chat()
  File "C:\Python311\Lib\site-packages\interpreter\interpreter.py", line 148, in chat
    self.llama_instance = get_llama_2_instance()
                          ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\interpreter\llama_2.py", line 151, in get_llama_2_instance
    from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'

Segmentation fault on CentOS

My OS is CentOS. When I run `pip install open-interpreter` and then run `interpreter -y`, it shows "Segmentation fault". What should I do?

Error

Please open an issue on Github (openinterpreter.com, click Github) and paste the following:

```
<OpenAIObject at 0x220e13b6e70> JSON: {
  "name": "python",
  "arguments": "import datetime\n\ncurrent_time = datetime.datetime.now().strftime("%H:%M:%S")\ncurrent_time"
}
```
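The `arguments` string above contains unescaped inner double quotes (around `%H:%M:%S`), which makes it invalid JSON and is why parsing blows up. A minimal reproduction:

```python
import json

bad = '{"name": "python", "arguments": "strftime("%H:%M:%S")"}'
try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print(err)  # the unescaped quote terminates the string literal early
```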

Feature Request: Full integration with Jupyter

Summary

I'm using Open Interpreter in a Jupyter Notebook with the Python API. Here is a simple example:

```python
import interpreter
interpreter.chat("Plot a simple graph using matplotlib")
```

Expected result

  1. A visible plot is rendered in the notebook
  2. Imports and variables created by Open Interpreter are available in code cells

Actual result

  1. A visible plot is not rendered (see screenshot).

  2. Imports and variables created by Open Interpreter are unavailable in code cells (see screenshot).

Proposed solution

Full integration with Jupyter would be amazing for complex use cases and data science!

Ideally Open Interpreter would be able to create and run cells directly in the notebook.

This could be achieved with a Jupyter extension, or by writing outputs to a notebook file.
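One possible stopgap (a sketch, not the project's API) is to execute the generated code in the notebook's own kernel instead of a subprocess, which makes plots render inline and leaves imports and variables in the user namespace:

```python
from IPython import get_ipython

def run_in_notebook(code: str) -> None:
    """Execute code in the active Jupyter kernel's user namespace."""
    ip = get_ipython()
    if ip is None:
        raise RuntimeError("Not running inside IPython/Jupyter")
    ip.run_cell(code)

run_in_notebook("import matplotlib.pyplot as plt\nplt.plot([1, 2, 3])\nplt.show()")
```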

`Code-Llama` interface package not found. Install `llama-cpp-python`?

When performing this step, the terminal shows the error:

```
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [116 lines of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
```

The full log is as follows:

```
[?] Code-Llama interface package not found. Install llama-cpp-python? (...:
Collecting llama-cpp-python
Using cached llama_cpp_python-0.1.83.tar.gz (1.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in /usr/local/lib/python3.11/site-packages (from llama-cpp-python) (4.7.1)
Requirement already satisfied: numpy>=1.20.0 in /usr/local/lib/python3.11/site-packages (from llama-cpp-python) (1.24.3)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
Obtaining dependency information for diskcache>=5.6.1 from https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl.metadata
Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [116 lines of output]

  --------------------------------------------------------------------------------
  -- Trying 'Ninja' generator
  --------------------------------
  ---------------------------
  ----------------------
  -----------------
  ------------
  -------
  --
  CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
    Compatibility with CMake < 3.5 will be removed from a future version of
    CMake.
  
    Update the VERSION argument <min> value or use a ...<max> suffix to tell
    CMake that the project does not need compatibility with older versions.
  
  Not searching for unused variables given on the command line.
  
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  -- The C compiler identification is AppleClang 11.0.0.11000033
  sh: ps: command not found
  -- Detecting C compiler ABI info
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  -- Detecting C compiler ABI info - failed
  -- Check for working C compiler: /usr/bin/cc
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  -- Check for working C compiler: /usr/bin/cc - broken
  CMake Error at /tmp/pip-build-env-d3oo_3ih/overlay/lib/python3.11/site-packages/cmake/data/share/cmake-3.27/Modules/CMakeTestCCompiler.cmake:67 (message):
    The C compiler
  
      "/usr/bin/cc"
  
    is not able to compile a simple test program.
  
    It fails with the following output:
  
      Change Dir: '/tmp/pip-install-p2qsitk1/llama-cpp-python_c3e8e2f322cc4821955fb6d001cb3564/_cmake_test_compile/build/CMakeFiles/CMakeScratch/TryCompile-3eEGeq'
  
      Run Build Command(s): /private/tmp/pip-build-env-d3oo_3ih/overlay/lib/python3.11/site-packages/ninja/data/bin/ninja -v cmTC_ac853
      [1/2] /usr/bin/cc   -arch x86_64 -mmacosx-version-min=13.0 -MD -MT CMakeFiles/cmTC_ac853.dir/testCCompiler.c.o -MF CMakeFiles/cmTC_ac853.dir/testCCompiler.c.o.d -o CMakeFiles/cmTC_ac853.dir/testCCompiler.c.o -c /tmp/pip-install-p2qsitk1/llama-cpp-python_c3e8e2f322cc4821955fb6d001cb3564/_cmake_test_compile/build/CMakeFiles/CMakeScratch/TryCompile-3eEGeq/testCCompiler.c
      FAILED: CMakeFiles/cmTC_ac853.dir/testCCompiler.c.o
      /usr/bin/cc   -arch x86_64 -mmacosx-version-min=13.0 -MD -MT CMakeFiles/cmTC_ac853.dir/testCCompiler.c.o -MF CMakeFiles/cmTC_ac853.dir/testCCompiler.c.o.d -o CMakeFiles/cmTC_ac853.dir/testCCompiler.c.o -c /tmp/pip-install-p2qsitk1/llama-cpp-python_c3e8e2f322cc4821955fb6d001cb3564/_cmake_test_compile/build/CMakeFiles/CMakeScratch/TryCompile-3eEGeq/testCCompiler.c
      clang: error: invalid version number in '-mmacosx-version-min=13.0'
      ninja: build stopped: subcommand failed.
  
  
  
  
  
    CMake will not be able to correctly generate this project.
  Call Stack (most recent call first):
    CMakeLists.txt:3 (ENABLE_LANGUAGE)
  
  
  -- Configuring incomplete, errors occurred!
  --
  -------
  ------------
  -----------------
  ----------------------
  ---------------------------
  --------------------------------
  -- Trying 'Ninja' generator - failure
  --------------------------------------------------------------------------------
  
  
  
  --------------------------------------------------------------------------------
  -- Trying 'Unix Makefiles' generator
  --------------------------------
  ---------------------------
  ----------------------
  -----------------
  ------------
  -------
  --
  CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
    Compatibility with CMake < 3.5 will be removed from a future version of
    CMake.
  
    Update the VERSION argument <min> value or use a ...<max> suffix to tell
    CMake that the project does not need compatibility with older versions.
  
  Not searching for unused variables given on the command line.
  
  sh: ps: command not found
  CMake Error: CMake was unable to find a build program corresponding to "Unix Makefiles".  CMAKE_MAKE_PROGRAM is not set.  You probably need to select a different build tool.
  -- Configuring incomplete, errors occurred!
  --
  -------
  ------------
  -----------------
  ----------------------
  ---------------------------
  --------------------------------
  -- Trying 'Unix Makefiles' generator - failure
  --------------------------------------------------------------------------------
  
                  ********************************************************************************
                  scikit-build could not get a working generator for your system. Aborting build.
  
                  Building MacOSX wheels for Python 3.11 requires XCode.
  Get it here:
  
    https://developer.apple.com/xcode/
  
                  ********************************************************************************
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
Error during installation with Metal: Command '['/usr/local/opt/python@3.11/bin/python3.11', '-m', 'pip', 'install', 'llama-cpp-python']' returned non-zero exit status 1.
```

▌ Failed to install Code-LLama.

We have likely not built the proper Code-Llama support for your system.

( Running language models locally is a difficult task! If you have insight into
the best way to implement this across platforms/architectures, please join the
Open Interpreter community Discord and consider contributing the project's
development. )

Please press enter to switch to GPT-4 (recommended).

please help🙏

Code Llama always returns unusable, non-runnable results

Hello, thank you so much for your work. I tried open-interpreter and it works amazingly with the GPT-4 API: it gives great answers and lets me execute code. This is a dream come true. However, whenever I try to use Code Llama, I always get results like this:

https://imgur.com/r6mEFSP

Sometimes it's just a funny thing: if you ask it to change to a dark theme, for example, it simply says it did. Mostly it gives me some code, but the code is not formatted and it never lets me run it the way the GPT-4 API does; it never asked me to run anything.
I have tried 7B High, 13B High, and 34B Medium.

Is Code Llama just that bad, or am I doing something wrong?

Thank you so much for your work again! Have a wonderful day!

P.S. Just to be clear: every prompt I tried with the GPT-4 API worked great and asked me to run code, but Code Llama never gave me a good answer, whatever prompt or model size I tried.

Can't install llama-cpp-python

Hey, I can't install it. Here's the error message:

Failed to install Code-LLama.

We have likely not built the proper Code-Llama support for your system.

Here's my system: MacBook Pro 2015, macOS Monterey, i7, 16 GB RAM.

please help me

Llama does not finish...

I downloaded Code Llama 7B, Medium. It never seems to finish its output, meaning it never executes the code. Is there any way to fix this? (See screenshot.)

Unable to install on macOS

pengzhangzhi@pengzhangzhideMacBook-Air ~ % pip3 install open-interpreter
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement open-interpreter (from versions: none)
ERROR: No matching distribution found for open-interpreter

Stopped working at Windows CLIs (cmd and powershell)

Hi!

After the last update (at least as of 08.08.23), open-interpreter stopped working: it no longer responds to CLI chat input (or never starts the OpenAI API request?). This happens on Windows (or, most probably, the Windows file system) and is independent of the CLI (PowerShell and normal cmd behave identically), but it works as usual on Ubuntu. I attached the CLI behaviour.

How it looks like @ ubuntu:
https://ibb.co/LPKJYrc

How it looks like @ Windows Powershell CLI (normal cmd terminal, too):
https://ibb.co/s6Lv25g

If I had to guess, it might again have something to do with the different paths / `import os` behavior between Windows and Linux?

[Feature Request] Raycast Extension

This project looks wonderful! I think it would make a great extension for Raycast.

I want to try (re-)implementing it using the new Raycast AI feature, and it would be great if you could help me out with this.

Can't change the model

Hello, can I only use the GPT-4 model? I only have a GPT-3.5 API key, and when I use the `interpreter --fast` command and enter the GPT-3.5 API key at the prompt, the following error is displayed:
```
  File "c:\Users\get-pip.py", line 3, in <module>
    interpreter.chat("Plot APPL and META's normalized stock prices") # Executes a single command
  File "C:\conda\envs\myenv\lib\site-packages\interpreter\interpreter.py", line 201, in chat
    self.respond()
  File "C:\conda\envs\myenv\lib\site-packages\interpreter\interpreter.py", line 303, in respond
    response = openai.ChatCompletion.create(
  File "C:\conda\envs\myenv\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "C:\conda\envs\myenv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\conda\envs\myenv\lib\site-packages\openai\api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\conda\envs\myenv\lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "C:\conda\envs\myenv\lib\site-packages\openai\api_requestor.py", line 765, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model gpt-4 does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.
```
I look forward to your reply! Thank you!
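The traceback shows `interpreter.chat(...)` being called from a script, where the `--fast` CLI flag has no effect; when using the Python API, the model needs to be selected in code. A sketch (the attribute name follows the README of that era and is an assumption for your exact version):

```python
import interpreter

interpreter.model = "gpt-3.5-turbo"  # avoid the default gpt-4
interpreter.chat("Plot APPL and META's normalized stock prices")
```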

Support for Azure GPT-4

I am looking to work with Azure GPT-4. Do you have any plans to support it in the near future?

highlight in a loop

Strange behavior; is this expected? I'm running on both Windows (in conda) and Ubuntu 20 (via VS Code SSH). It runs the code while highlighting the currently running line (?). Is that expected?

It has been stuck like that for a while now.

WindowsTerminal_6v0tlhfjY8.mp4
Code_eOpwacB0Da.mp4

How to prompt

Probably I'm doing something wrong, but this is the kind of dialogue I get at the moment:

Finished downloading `Code-Llama`.


▌ Model set to Code-Llama

Open Interpreter will require approval before running code. Use interpreter -y to bypass this.

Press CTRL-C to exit.

> please generate a plot of the evolution of S&P 500 over the last month

  please generate a plot of the evolution of S&P 500 over the last month

> write a Python program to analyze the performance of the S&P500 index over the last month

  write a Python program to analyze the performance of the S&P500 index over the last month

> what is 2**3 + 4**2 ?

  what is 23 + 42 ?

NB, running on Mac M2, with local CodeLLama.

Any hints where I'm wrong?

Incorrect context size of codellama taken

I was exploring the code, understanding it, and was in the interpreter.py file where I saw this:

```python
if self.local:
    # Model determines how much we'll trim the messages list to get it under the context limit
    # So for Code-Llama, we'll use "gpt-3.5-turbo" which (i think?) has the same context window as Code-Llama
    self.model = "gpt-3.5-turbo"
    # In the future lets make --model {model} just work / include llama

messages = tt.trim(self.messages, self.model, system_message=system_message)
```

But Code Llama actually has a 16k context, as stated in this blog post on Hugging Face.

I would have made a PR for this, but I haven't really worked with tokentrim yet, so someone else may need to; if needed, I could learn it and make the changes myself, no worries.
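If the fix is simply to trim against the right window, tokentrim can be given an explicit token budget instead of inferring one from a model name. A standalone sketch (the `max_tokens` keyword is an assumption about tokentrim's API; verify against its docs):

```python
import tokentrim as tt

messages_in = [{"role": "user", "content": "hello"}]
system_message = "You are a helpful assistant."

# Trim against Code Llama's 16k context instead of borrowing
# gpt-3.5-turbo's window (parameter name assumed).
messages = tt.trim(messages_in, system_message=system_message, max_tokens=16000)
```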

Simple Web-UI possible?

Hi!

I really like your approach of using the GPT-4 API with an "extended" code interpreter. Would it be possible to pipe the chat input/output to a frontend for a local web server UI, e.g. via Streamlit? I'm still learning and can't really help much with frontends, I guess, but it would really boost your nice concept.

Product video

What tool, service, etc. did you use to make the demo video?

Error: cannot import name 'cli' from 'interpreter'

╰─$ uname -a
Linux lab 6.2.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 13 16:27:29 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

╰─$ pip --version                                                           1 ↵
pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)

╰─$ interpreter                 
Traceback (most recent call last):
  File "/usr/local/bin/interpreter", line 5, in <module>
    from interpreter import cli
ImportError: cannot import name 'cli' from 'interpreter' (unknown location)

Cannot download Code-Llama

Hi! This seems to be an amazing project and I am eager to have a try. However, I had this problem when I tried to run it locally:

> Failed to install Code-LLama.
>
> **We have likely not built the proper `Code-Llama` support for your system.**
>
> (Running language models locally is a difficult task! If you have insight into the best way to implement this across platforms/architectures, please join the Open Interpreter community Discord and consider contributing the project's development.)

Since I am using an M2 Mac and it's unlikely that there's no Code Llama build for it, I tried running the function get_llama_2_instance() from the source file, and I got these exceptions:

```
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 1348, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1286, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1332, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1281, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1041, in _send_output
    self.send(msg)
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 979, in send
    self.connect()
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1458, in connect
    self.sock = self._context.wrap_socket(self.sock,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 517, in wrap_socket
    return self.sslsocket_class._create(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1108, in _create
    self.do_handshake()
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1379, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Volumes/Ext SD/llama_2.py", line 86, in get_llama_2_instance
    wget.download(url, download_path)
  File "/opt/homebrew/lib/python3.11/site-packages/wget.py", line 526, in download
    (tmpfile, headers) = ulib.urlretrieve(binurl, tmpfile, callback)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 241, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
                            ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 519, in open
    response = self._open(req, data)
               ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 536, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 496, in _call_chain
    result = func(*args)
             ^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 1391, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 1351, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)>
```

It seems that there's something wrong with SSL? Or is it that I am using Python 3.11 and it's not supported? (I noticed that "3.11" was removed from python-package.yml.)

unable to start interpreter after updating API

Windows 10 stock on a Hyundai laptop with an Intel chip.
I installed it and tried to download Code-Llama, which it downloaded, but now I get this:

```
Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.

[?] Parameter count (smaller is faster, larger is more capable): 7B

7B
16B
34B

[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB

Low | Size: 3.01 GB, RAM usage: 5.51 GB
Medium | Size: 4.24 GB, RAM usage: 6.74 GB
High | Size: 7.16 GB, RAM usage: 9.66 GB

[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): n

[?] Code-Llama interface package not found. Install llama-cpp-python? (Y/n): y

Fatal Python error: _Py_HashRandomization_Init: failed to get random numbers to initialize Python
Python runtime state: preinitialized

Error during installation with OpenBLAS: Command
'['C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe', '-m', 'pip', 'install',
'llama-cpp-python']' returned non-zero exit status 1.

Failed to install Code-LLama.

We have likely not built the proper Code-Llama support for your system.

(Running language models locally is a difficult task! If you have insight into the best way to implement this across
platforms/architectures, please join the Open Interpreter community Discord and consider contributing the project's
development.)

Please press enter to switch to GPT-4 (recommended).
```

So I try to use GPT-4, with an API key from OpenAI (I have an account and use GPT-3.5 online).
Using --fast gets me this:

```
PS C:\Users\15702> interpreter --fast

▌ Model set to GPT-3.5-TURBO

Tip: To run locally, use interpreter --local

Open Interpreter will require approval before running code. Use interpreter -y to bypass this.

Press CTRL-C to exit.
```

Any suggestions?

Where is the Code-Llama model downloaded to?

  1. I was prompted to enter an OpenAI key, but I do not have one, so I switched to Code Llama mode.
  2. The instance of Code Llama was not found on my Mac, so I was prompted to download it instead (see screenshot).

However, after the download completed, I'm not able to locate the file on my Mac (see screenshot).

I'm using completely default settings with the latest build of open-interpreter. Can someone help me locate the file? Thanks!
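For what it's worth, a log earlier on this page shows the macOS download location as `~/Library/Application Support/Open Interpreter/models/`. A quick way to list what's there:

```python
from pathlib import Path

models = Path.home() / "Library" / "Application Support" / "Open Interpreter" / "models"
for p in models.glob("*"):
    print(p)
```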

error during installation

I installed interpreter on macOS 10.13.6 with no problem, but when I run interpreter I get the error below. Is there a command to remove/reinstall interpreter cleanly?

P.S. `pip3 install llama-cpp-python` seems to do something interesting; now I have:

> Can you set my system to dark mode?

yes, sure

The error log:

```

Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [142 lines of output]

  --------------------------------------------------------------------------------
  -- Trying 'Ninja' generator
  --------------------------------
  ---------------------------
  ----------------------
  -----------------
  ------------
  -------
  --
  CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
    Compatibility with CMake < 3.5 will be removed from a future version of
    CMake.
  
    Update the VERSION argument <min> value or use a ...<max> suffix to tell
    CMake that the project does not need compatibility with older versions.
  
  Not searching for unused variables given on the command line.
  
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  -- The C compiler identification is AppleClang 10.0.0.10001044
  sh: ps: command not found
  -- Detecting C compiler ABI info
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  sh: ps: command not found
  -- The CXX compiler identification is AppleClang 10.0.0.10001044
  sh: ps: command not found
  -- Detecting CXX compiler ABI info
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Configuring done (2.3s)
  -- Generating done (0.0s)
  -- Build files have been written to: /tmp/pip-install-cl_i0nlz/llama-cpp-python_c0a4bebbf8974d04a13e6511abd4cf91/_cmake_test_compile/build
  --
  -------
  ------------
  -----------------
  ----------------------
  ---------------------------
  --------------------------------
  -- Trying 'Ninja' generator - success
  --------------------------------------------------------------------------------
  
  Configuring Project
    Working directory:
      /private/tmp/pip-install-cl_i0nlz/llama-cpp-python_c0a4bebbf8974d04a13e6511abd4cf91/_skbuild/macosx-10.13-x86_64-3.10/cmake-build
    Command:
      /private/tmp/pip-build-env-6595ao8x/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake /private/tmp/pip-install-cl_i0nlz/llama-cpp-python_c0a4bebbf8974d04a13e6511abd4cf91 -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/private/tmp/pip-build-env-6595ao8x/overlay/lib/python3.10/site-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/private/tmp/pip-install-cl_i0nlz/llama-cpp-python_c0a4bebbf8974d04a13e6511abd4cf91/_skbuild/macosx-10.13-x86_64-3.10/cmake-install -DPYTHON_VERSION_STRING:STRING=3.10.11 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/private/tmp/pip-build-env-6595ao8x/overlay/lib/python3.10/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10 -DPYTHON_INCLUDE_DIR:PATH=/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10 -DPYTHON_LIBRARY:PATH=/Library/Frameworks/Python.framework/Versions/3.10/lib/libpython3.10.dylib -DPython_EXECUTABLE:PATH=/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10 -DPython_ROOT_DIR:PATH=/Library/Frameworks/Python.framework/Versions/3.10 -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10 -DPython3_EXECUTABLE:PATH=/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10 -DPython3_ROOT_DIR:PATH=/Library/Frameworks/Python.framework/Versions/3.10 -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10 -DCMAKE_MAKE_PROGRAM:FILEPATH=/private/tmp/pip-build-env-6595ao8x/overlay/lib/python3.10/site-packages/ninja/data/bin/ninja -DLLAMA_METAL=on -DCMAKE_OSX_DEPLOYMENT_TARGET:STRING=10.13 -DCMAKE_OSX_ARCHITECTURES:STRING=x86_64 -DCMAKE_BUILD_TYPE:STRING=Release -DLLAMA_METAL=on
  
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  Not searching for unused variables given on the command line.
  -- The C compiler identification is unknown
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  sh: ps: command not found
  -- The CXX compiler identification is unknown
  sh: ps: command not found
  CMake Error at CMakeLists.txt:3 (project):
    No CMAKE_C_COMPILER could be found.
  
    Tell CMake where to find the compiler by setting either the environment
    variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
    the compiler, or to the compiler name if it is in the PATH.
  
  
  CMake Error at CMakeLists.txt:3 (project):
    No CMAKE_CXX_COMPILER could be found.
  
    Tell CMake where to find the compiler by setting either the environment
    variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
    to the compiler, or to the compiler name if it is in the PATH.
  
  
  -- Configuring incomplete, errors occurred!
  Traceback (most recent call last):
    File "/private/tmp/pip-build-env-6595ao8x/overlay/lib/python3.10/site-packages/skbuild/setuptools_wrap.py", line 666, in setup
      env = cmkr.configure(
    File "/private/tmp/pip-build-env-6595ao8x/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 357, in configure
      raise SKBuildError(msg)
  
  An error occurred while configuring with CMake.
    Command:
      /private/tmp/pip-build-env-6595ao8x/overlay/lib/python3.10/site-packages/cmake/data/bin/cmake /private/tmp/pip-install-cl_i0nlz/llama-cpp-python_c0a4bebbf8974d04a13e6511abd4cf91 -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/private/tmp/pip-build-env-6595ao8x/overlay/lib/python3.10/site-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/private/tmp/pip-install-cl_i0nlz/llama-cpp-python_c0a4bebbf8974d04a13e6511abd4cf91/_skbuild/macosx-10.13-x86_64-3.10/cmake-install -DPYTHON_VERSION_STRING:STRING=3.10.11 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/private/tmp/pip-build-env-6595ao8x/overlay/lib/python3.10/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10 -DPYTHON_INCLUDE_DIR:PATH=/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10 -DPYTHON_LIBRARY:PATH=/Library/Frameworks/Python.framework/Versions/3.10/lib/libpython3.10.dylib -DPython_EXECUTABLE:PATH=/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10 -DPython_ROOT_DIR:PATH=/Library/Frameworks/Python.framework/Versions/3.10 -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10 -DPython3_EXECUTABLE:PATH=/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10 -DPython3_ROOT_DIR:PATH=/Library/Frameworks/Python.framework/Versions/3.10 -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/Library/Frameworks/Python.framework/Versions/3.10/include/python3.10 -DCMAKE_MAKE_PROGRAM:FILEPATH=/private/tmp/pip-build-env-6595ao8x/overlay/lib/python3.10/site-packages/ninja/data/bin/ninja -DLLAMA_METAL=on -DCMAKE_OSX_DEPLOYMENT_TARGET:STRING=10.13 -DCMAKE_OSX_ARCHITECTURES:STRING=x86_64 -DCMAKE_BUILD_TYPE:STRING=Release -DLLAMA_METAL=on
    Source directory:
      /private/tmp/pip-install-cl_i0nlz/llama-cpp-python_c0a4bebbf8974d04a13e6511abd4cf91
    Working directory:
      /private/tmp/pip-install-cl_i0nlz/llama-cpp-python_c0a4bebbf8974d04a13e6511abd4cf91/_skbuild/macosx-10.13-x86_64-3.10/cmake-build
  Please see CMake's output for more information.
  
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
Error during installation with Metal: Command '['/Library/Frameworks/Python.framework/Versions/3.10/bin/python3.10',
'-m', 'pip', 'install', 'llama-cpp-python']' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/interpreter/llama_2.py", line 94, in get_llama_2_instance
    from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/bin/interpreter", line 8, in <module>
    sys.exit(cli())
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/interpreter/interpreter.py", line 90, in cli
    cli(self)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/interpreter/cli.py", line 59, in cli
    interpreter.chat()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/interpreter/interpreter.py", line 148, in chat
    self.llama_instance = get_llama_2_instance()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/interpreter/llama_2.py", line 151, in get_llama_2_instance
    from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
```

'Call' object is not iterable

Must be a problem in code_interpreter.py: when we parse everything for Python, it expects every node body to be iterable (see screenshot).

Chat suggested this fix:

```python
def process_body(self, body):
    if not isinstance(body, list):
        body = [body]

    # The rest of your method
    for sub_node in body:
        ...  # process each sub_node
```
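For context on the crash (a reader-side sketch, not the project's code): several `ast` node attributes hold a single node rather than a list, so code that blindly iterates them fails exactly like this:

```python
import ast

tree = ast.parse("print('hi')")
call = tree.body[0].value             # an ast.Call node, not a list
body = call if isinstance(call, list) else [call]
for sub_node in body:                 # without the wrap, `for x in call:` raises
    print(type(sub_node).__name__)    # Call
```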

Keyword Error issue

I am getting this issue sometimes (see screenshot). The same command, which creates a venv in this case, sometimes works and sometimes shows this error. It happens with other commands too.

Function call could not be parsed

```
<OpenAIObject at 0x1038e76b0> JSON: {
  "name": "python",
  "arguments": "import os\nimport requests\n\n# Create the "cat" folder in the "Downloads" directory\ndownloads_path = os.path.expanduser("~/Downloads")\ncat_folder_path = os.path.join(downloads_path, "cat")\nos.makedirs(cat_folder_path, exist_ok=True)\n\n# Download the image of a cat from a URL\nurl = "https://images.unsplash.com/photo-1560807707-9b4f6b1f6bcb\"\nresponse = requests.get(url)\n\n# Save the image in the "cat" folder\nimage_path = os.path.join(cat_folder_path, "cat_image.jpg")\nwith open(image_path, "wb") as file:\n file.write(response.content)\n\nimage_path"
}
```
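This looks like the same root cause as the "Error" issue above: the model produced `arguments` with unescaped inner double quotes, so the string cannot be parsed as JSON. Serializing correctly requires escaping those quotes, which `json.dumps` handles (sketch):

```python
import json

code = 'url = "https://images.unsplash.com/photo-1560807707-9b4f6b1f6bcb"'
print(json.dumps({"name": "python", "arguments": code}))
# {"name": "python", "arguments": "url = \"https://...\""}
```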
