
stitionai / devika


Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to Devin by Cognition AI.

License: MIT License

Python 59.21% Shell 0.06% Jinja 9.63% HTML 0.24% JavaScript 6.81% Svelte 21.68% CSS 1.41% Makefile 0.39% Dockerfile 0.57%


devika's Issues

Anaconda Issue?

Trying to execute bun run dev and getting the following output:
...\devika\ui> bun run dev
bun: The term 'bun' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
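
This usually just means bun isn't installed or isn't on PATH in that shell (an assumption; the output only shows PowerShell failing to find the executable). A likely fix is to install bun and reopen the terminal:

powershell -c "irm bun.sh/install.ps1 | iex"

Alternatively, npm run dev in the ui/ folder should work if you'd rather not install bun.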

Ollama Model

Hello, when I choose Ollama models I get this error in the terminal and Devika doesn't move forward. I have tried different Ollama models but get the same issue. I haven't tried OpenAI or Claude; I want to test locally first.

(devika) username@usernames-Macbook-M3-Max devika % python devika.py
24.03.22 02:28:49: root: INFO : Booting up... This may take a few seconds
24.03.22 02:28:49: root: INFO : Initializing Devika...
24.03.22 02:28:49: root: INFO : Initializing Prerequisites Jobs...
24.03.22 02:28:49: root: INFO : Loading sentence-transformer BERT models...
24.03.22 02:28:53: root: INFO : BERT model loaded successfully.

 * Serving Flask app 'devika'
 * Debug mode: off
Token usage: 383
Exception in thread Thread-201 (<lambda>):
Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniconda/base/envs/devika/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/devika/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/username/devika/devika.py", line 56, in <lambda>
    target=lambda: Agent(base_model=base_model).execute(prompt, project_name)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/username/devika/src/agents/agent.py", line 264, in execute
    plan = self.planner.execute(prompt)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/username/devika/src/agents/planner/planner.py", line 70, in execute
    response = self.llm.inference(prompt)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/username/devika/src/llm/llm.py", line 58, in inference
    model = self.model_id_to_enum_mapping()[self.model_id]
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
KeyError: '7B - Q4_K_M'

Error on Bun run dev

I cd into the ui folder and run bun run dev, but I receive the following error message:

$ vite
file:///mnt/Essam/AI/devika/ui/node_modules/vite/bin/vite.js:7
await import('source-map-support').then((r) => r.default.install())
^^^^^

SyntaxError: Unexpected reserved word
at Loader.moduleStrategy (internal/modules/esm/translators.js:133:18)
at async link (internal/modules/esm/module_job.js:42:21)
error: script "dev" exited with code 1
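
The "Unexpected reserved word" at the top-level await, with frames in internal/modules/esm, is characteristic of an old Node.js runtime that predates top-level await (pre-14.8); Vite 5 requires Node 18+. A likely first check:

node --version    # Vite 5 expects 18.0 or newer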

Local Model not working as Expected

I am trying to run local models using Ollama. I have the codellama, gemma, and phi-2 models available, and I got the error below while running.

Selected Model: codellama:7b-code (7B - Q4_0)

haseeb-mir@Haseebs-MacBook-Pro devika % python devika.py
24.03.22 11:14:19: root: INFO   : Booting up... This may take a few seconds
24.03.22 11:14:19: root: INFO   : Initializing Devika...
24.03.22 11:14:19: root: INFO   : Initializing Prerequisites Jobs...
24.03.22 11:14:19: root: INFO   : Loading sentence-transformer BERT models...
24.03.22 11:14:22: root: INFO   : BERT model loaded successfully.
 * Serving Flask app 'devika'
 * Debug mode: on
24.03.22 11:14:28: root: INFO   : Booting up... This may take a few seconds
24.03.22 11:14:28: root: INFO   : Initializing Devika...
24.03.22 11:14:28: root: INFO   : Initializing Prerequisites Jobs...
24.03.22 11:14:28: root: INFO   : Loading sentence-transformer BERT models...
24.03.22 11:14:32: root: INFO   : BERT model loaded successfully.
Token usage: 321
Model id: 7B - Q4_0
Exception in thread Thread-588 (<lambda>):
Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniforge/base/envs/heaven-env/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/opt/homebrew/Caskroom/miniforge/base/envs/heaven-env/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/haseeb-mir/Documents/Code/Python/devika/devika.py", line 49, in <lambda>
    target=lambda: Agent(base_model=base_model).execute(prompt, project_name)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/haseeb-mir/Documents/Code/Python/devika/src/agents/agent.py", line 264, in execute
    plan = self.planner.execute(prompt)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/haseeb-mir/Documents/Code/Python/devika/src/agents/planner/planner.py", line 70, in execute
    response = self.llm.inference(prompt)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/haseeb-mir/Documents/Code/Python/devika/src/llm/llm.py", line 54, in inference
    model = self.model_id_to_enum_mapping()[self.model_id]
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
KeyError: '7B - Q4_0'
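
Both this report and the one above die on the same lookup in src/llm/llm.py. A small defensive rewrite (a sketch; assuming model_id_to_enum_mapping() returns a plain dict) would at least turn the raw KeyError into a readable error:

mapping = self.model_id_to_enum_mapping()
model = mapping.get(self.model_id)
if model is None:
    # the UI sent a model id (e.g. '7B - Q4_0') that the enum mapping doesn't know
    raise ValueError(f"Unknown model id '{self.model_id}'. Known ids: {list(mapping)}")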

Runs successfully but not working

I tried running Devika with local Ollama. Ollama is working and the frontend is working, but the project name comes from the backend, which is not working: I cannot see the project name.

Backend terminal:
24.03.22 16:18:12: root: INFO : Booting up... This may take a few seconds
24.03.22 16:18:12: root: INFO : Initializing Devika...
24.03.22 16:18:12: root: INFO : Initializing Prerequisites Jobs...
24.03.22 16:18:12: root: INFO : Loading sentence-transformer BERT models...
24.03.22 16:18:15: root: INFO : BERT model loaded successfully.

 * Serving Flask app 'devika'
 * Debug mode: off

Frontend terminal:
VITE v5.1.6 ready in 450 ms

➜ Local: http://localhost:3000/
➜ Network: use --host to expose
➜ press h + enter to show help

warn - The purge/content options have changed in Tailwind CSS v3.0.
warn - Update your configuration file to eliminate this warning.

When I write something in the input, there is no response.

TypeError: 'NoneType' object is not subscriptable

Your Reply to the Human Prompter: Sure, I can help you with that!

Current Focus: Developing a Tic-Tac-Toe game using Python.

Plan:

  • Step 1: Research and understand the rules and logic of the Tic-Tac-Toe game.
  • Step 2: Set up the Python development environment on your machine.
  • Step 3: Create a blank 3x3 grid to represent the Tic-Tac-Toe board.
  • Step 4: Implement the logic to alternate turns between two players (X and O).
  • Step 5: Allow players to input their moves on the grid.
  • Step 6: Check for winning conditions after each move.
  • Step 7: Implement a function to check for a draw if no player wins.
  • Step 8: Display the updated board after each move.
  • Step 9: Continue the game until a player wins or it ends in a draw.
  • Step 10: Display the end result (win/draw) and ask if players want to play again.

Summary: To create a Tic-Tac-Toe game in Python, we need to research the game rules, set up the development environment, implement the game logic including player input and turn alternation, check for winning conditions, and display the game board. Testing will involve ensuring the game runs smoothly without bugs, and the final game should allow for continuous play. Dependencies include understanding basic Python syntax and data structures, as well as implementing the grid logic for the game board. Potential challenges may arise in handling user input and managing game states effectively.

==================================================
{'developing', 'toe', 'python', 'game', 'tic'}
Token usage: 1160
Token usage: 1206
I need to research the rules of Tic-Tac-Toe and set up the Python environment first. Then I can start creating the game logic and interface for players to input their moves.
==================================================
Token usage: 2121
Token usage: 2178
{'queries': ['Tic-Tac-Toe game rules and logic explanation Python', 'Python development environment setup guide for beginners', '3x3 grid implementation in Python for Tic-Tac-Toe game'], 'ask_user': ''}
==================================================
Exception in thread Thread-167 (<lambda>):
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/broadifi/Desktop/work/ai_soft/devika/devika.py", line 49, in <lambda>
    target=lambda: Agent(base_model=base_model).execute(prompt, project_name)
  File "/home/broadifi/Desktop/work/ai_soft/devika/src/agents/agent.py", line 333, in execute
    search_results = self.search_queries(queries, project_name)
  File "/home/broadifi/Desktop/work/ai_soft/devika/src/agents/agent.py", line 84, in search_queries
    link = bing_search.get_first_link()
  File "/home/broadifi/Desktop/work/ai_soft/devika/src/browser/search.py", line 24, in get_first_link
    return self.query_result["webPages"]["value"][0]["url"]
TypeError: 'NoneType' object is not subscriptable
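
The crash happens because src/browser/search.py indexes query_result when the Bing request failed (commonly a missing or invalid Bing API key) and query_result is None. A minimal guard, sketched against the method shown in the traceback:

def get_first_link(self):
    # query_result is None when the Bing request failed (e.g. bad/missing API key)
    if not self.query_result or "webPages" not in self.query_result:
        raise RuntimeError("Bing search returned no results; check the BING API key in the config.")
    return self.query_result["webPages"]["value"][0]["url"]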

httpx.ConnectError: [Errno 61] Connection refused

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_transports/default.py", line 66, in map_httpcore_exceptions
    yield
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_transports/default.py", line 228, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
    raise exc
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
    stream = self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
    with map_exceptions(exc_map):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 61] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/norbertpapp/Desktop/devika/devika.py", line 16, in <module>
    from src.agents import Agent
  File "/Users/norbertpapp/Desktop/devika/src/agents/__init__.py", line 1, in <module>
    from .agent import Agent
  File "/Users/norbertpapp/Desktop/devika/src/agents/agent.py", line 1, in <module>
    from .planner import Planner
  File "/Users/norbertpapp/Desktop/devika/src/agents/planner/__init__.py", line 1, in <module>
    from .planner import Planner
  File "/Users/norbertpapp/Desktop/devika/src/agents/planner/planner.py", line 3, in <module>
    from src.llm import LLM
  File "/Users/norbertpapp/Desktop/devika/src/llm/__init__.py", line 1, in <module>
    from .llm import LLM, TOKEN_USAGE
  File "/Users/norbertpapp/Desktop/devika/src/llm/llm.py", line 14, in <module>
    class Model(Enum):
  File "/Users/norbertpapp/Desktop/devika/src/llm/llm.py", line 25, in Model
    for model in Ollama.list_models()
                 ^^^^^^^^^^^^^^^^^^^^
  File "/Users/norbertpapp/Desktop/devika/src/llm/ollama_client.py", line 6, in list_models
    return ollama.list()["models"]
           ^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/ollama/_client.py", line 328, in list
    return self._request('GET', '/api/tags').json()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/ollama/_client.py", line 68, in _request
    response = self._client.request(method, url, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_client.py", line 814, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_client.py", line 901, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_client.py", line 929, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_client.py", line 966, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_client.py", line 1002, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_transports/default.py", line 227, in handle_request
    with map_httpcore_exceptions():
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/httpx/_transports/default.py", line 83, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 61] Connection refused

Error on Bun run dev

I cd into the ui folder and run bun run dev, but I receive the following error message:

%USER%\venv\devika\ui> bun run dev
$ vite
error: Failed to due to error bunsh: No such file or directory:

Use Websocket instead of polling

I was looking at the logs and saw,

24.03.21 23:01:49: root: INFO   : /api/token-usage GET
24.03.21 23:01:49: root: DEBUG  : /api/token-usage GET - Response: {"token_usage":0}

24.03.21 23:01:49: root: INFO   : /api/project-list GET
24.03.21 23:01:49: root: DEBUG  : /api/project-list GET - Response: {"projects":["test"]}

24.03.21 23:01:49: root: INFO   : /api/model-list GET
24.03.21 23:01:49: root: DEBUG  : /api/model-list GET - Response: {"models":[["Claude 3 Opus","claude-3-opus-20240229"],["Claude 3 Sonnet","claude-3-sonnet-20240229"],["Claude 3 Haiku","claude-3-haiku-20240307"],["GPT-4 Turbo","gpt-4-0125-preview"],["GPT-3.5","gpt-3.5-turbo-0125"],["gemma:latest","9B - Q4_0"],["mistral:latest","7B - Q4_0"]]}

24.03.21 23:01:49: root: INFO   : /api/get-agent-state POST
24.03.21 23:01:49: root: DEBUG  : /api/get-agent-state POST - Response: {"state":null}

24.03.21 23:01:49: root: INFO   : /api/get-messages POST
24.03.21 23:01:49: root: DEBUG  : /api/get-messages POST - Response: {"messages":[]}

24.03.21 23:01:50: root: INFO   : /api/token-usage GET
24.03.21 23:01:50: root: DEBUG  : /api/token-usage GET - Response: {"token_usage":0}

24.03.21 23:01:50: root: INFO   : /api/project-list GET
24.03.21 23:01:50: root: DEBUG  : /api/project-list GET - Response: {"projects":["test"]}

24.03.21 23:01:50: root: INFO   : /api/model-list GET
24.03.21 23:01:50: root: DEBUG  : /api/model-list GET - Response: {"models":[["Claude 3 Opus","claude-3-opus-20240229"],["Claude 3 Sonnet","claude-3-sonnet-20240229"],["Claude 3 Haiku","claude-3-haiku-20240307"],["GPT-4 Turbo","gpt-4-0125-preview"],["GPT-3.5","gpt-3.5-turbo-0125"],["gemma:latest","9B - Q4_0"],["mistral:latest","7B - Q4_0"]]}

24.03.21 23:01:50: root: INFO   : /api/get-agent-state POST
24.03.21 23:01:50: root: DEBUG  : /api/get-agent-state POST - Response: {"state":null}

24.03.21 23:01:50: root: INFO   : /api/get-messages POST
24.03.21 23:01:50: root: DEBUG  : /api/get-messages POST - Response: {"messages":[]}

[... the same five requests (/api/token-usage, /api/project-list, /api/model-list, /api/get-agent-state, /api/get-messages) repeat every second ...]

A lot of requests are being made continuously. I think it would be better (performance-wise) to use a WebSocket instead of continuously polling. Any thoughts? Was this done intentionally or am I missing something here?
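
For illustration, a push-based version could look like this with flask-socketio (an assumption; the backend currently uses plain Flask routes), emitting only when state actually changes instead of answering five endpoints every second:

from flask_socketio import SocketIO

socketio = SocketIO(app, cors_allowed_origins="*")

def notify_agent_state(project_name, state):
    # called by the backend whenever the agent state changes
    socketio.emit("agent-state", {"project": project_name, "state": state})

socketio.run(app)  # replaces app.run(...)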

[Errno 61] Connection refused

Device:
Macbook Air 2017

Error:
(base) MacBook-Pro-207:devika siddharthduggal$ python devika.py
Traceback (most recent call last):
  File "/opt/anaconda3/lib/python3.9/site-packages/httpx/_transports/default.py", line 66, in map_httpcore_exceptions
    yield
  File "/opt/anaconda3/lib/python3.9/site-packages/httpx/_transports/default.py", line 228, in handle_request
    resp = self._pool.handle_request(req)
  File "/opt/anaconda3/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/opt/anaconda3/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
  File "/opt/anaconda3/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
    raise exc
  File "/opt/anaconda3/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
    stream = self._connect(request)
  File "/opt/anaconda3/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
  File "/opt/anaconda3/lib/python3.9/site-packages/httpcore/_backends/sync.py", line 213, in connect_tcp
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
  File "/opt/anaconda3/lib/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/opt/anaconda3/lib/python3.9/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 61] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/siddharthduggal/Desktop/devika/devika.py", line 16, in <module>
    from src.agents import Agent
  File "/Users/siddharthduggal/Desktop/devika/src/agents/__init__.py", line 1, in <module>
    from .agent import Agent
  File "/Users/siddharthduggal/Desktop/devika/src/agents/agent.py", line 1, in <module>
    from .planner import Planner
  File "/Users/siddharthduggal/Desktop/devika/src/agents/planner/__init__.py", line 1, in <module>
    from .planner import Planner
  File "/Users/siddharthduggal/Desktop/devika/src/agents/planner/planner.py", line 3, in <module>
    from src.llm import LLM
  File "/Users/siddharthduggal/Desktop/devika/src/llm/__init__.py", line 1, in <module>
    from .llm import LLM, TOKEN_USAGE
  File "/Users/siddharthduggal/Desktop/devika/src/llm/llm.py", line 14, in <module>
    class Model(Enum):
  File "/Users/siddharthduggal/Desktop/devika/src/llm/llm.py", line 25, in Model
    for model in Ollama.list_models()
  File "/Users/siddharthduggal/Desktop/devika/src/llm/ollama_client.py", line 6, in list_models
    return ollama.list()["models"]
  File "/opt/anaconda3/lib/python3.9/site-packages/ollama/_client.py", line 328, in list
    return self._request('GET', '/api/tags').json()
  File "/opt/anaconda3/lib/python3.9/site-packages/ollama/_client.py", line 68, in _request
    response = self._client.request(method, url, **kwargs)
  File "/opt/anaconda3/lib/python3.9/site-packages/httpx/_client.py", line 814, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
  File "/opt/anaconda3/lib/python3.9/site-packages/httpx/_client.py", line 901, in send
    response = self._send_handling_auth(
  File "/opt/anaconda3/lib/python3.9/site-packages/httpx/_client.py", line 929, in _send_handling_auth
    response = self._send_handling_redirects(
  File "/opt/anaconda3/lib/python3.9/site-packages/httpx/_client.py", line 966, in _send_handling_redirects
    response = self._send_single_request(request)
  File "/opt/anaconda3/lib/python3.9/site-packages/httpx/_client.py", line 1002, in _send_single_request
    response = transport.handle_request(request)
  File "/opt/anaconda3/lib/python3.9/site-packages/httpx/_transports/default.py", line 228, in handle_request
    resp = self._pool.handle_request(req)
  File "/opt/anaconda3/lib/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/opt/anaconda3/lib/python3.9/site-packages/httpx/_transports/default.py", line 83, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 61] Connection refused
(base) MacBook-Pro-207:devika siddharthduggal$

TypeError: OpenAI.__init__() missing 1 required positional argument: 'api_key'

 * Serving Flask app 'devika'
 * Debug mode: off
Token usage: 322
Exception in thread Thread-348 (<lambda>):
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 1010, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/norbertpapp/Desktop/devika/devika.py", line 49, in <lambda>
    target=lambda: Agent(base_model=base_model).execute(prompt, project_name)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/username/Desktop/devika/src/agents/agent.py", line 264, in execute
    plan = self.planner.execute(prompt)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/username/Desktop/devika/src/agents/planner/planner.py", line 70, in execute
    response = self.llm.inference(prompt)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/username/Desktop/devika/src/llm/llm.py", line 60, in inference
    response = OpenAI().inference(self.model_id, prompt).strip()
               ^^^^^^^^
TypeError: OpenAI.__init__() missing 1 required positional argument: 'api_key'
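
The traceback shows OpenAI() being constructed with no key, so any request fails before reaching the API. A sketch of the likely fix (the config accessor name here is hypothetical):

from openai import OpenAI
from src.config import Config  # assumed location of the config helper

api_key = Config().get_openai_api_key()  # hypothetical accessor
if not api_key:
    raise ValueError("OPENAI API key is not set in the settings; add it before selecting a GPT model.")
client = OpenAI(api_key=api_key)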

Integrate the `Decision` agent into the Agent Flow Execution

The Decision agent is the most powerful agent in the execution flow with the ability to chain multiple commands to complete a certain task, including the ability to clone a GitHub repo, fix bugs in it, and make a PDF report of the entire process in a single shot.

This agent is currently not integrated into the agent flow. This should be implemented in src/agents/agent.py.
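
As a rough illustration only (the Decision agent's constructor and return shape are assumptions, not the project's actual interface), the wiring could look like:

from src.agents.decision import Decision  # assumed module path

decision = Decision(base_model=self.base_model)
commands = decision.execute(prompt)  # hypothetical: an ordered chain of commands
for command in commands:
    self.run_command(command, project_name)  # hypothetical dispatcher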

Implement the Logs page in the Svelte Web UI

In the main UI, the Logs page is not implemented yet. Here's how to implement it:

API

Get Real-time Logs

Endpoint: /api/real-time-logs
Method: GET

Poll this endpoint in the main event loop.

UI

Render the logs inside a code block with syntax highlighting for log files.
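
On the backend, this could be as small as exposing the log file through the endpoint described above (a sketch; Logger.read_log_file is an assumed helper):

@app.route("/api/real-time-logs", methods=["GET"])
def real_time_logs():
    log_file = Logger().read_log_file()  # hypothetical helper returning the raw log text
    return jsonify({"logs": log_file})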

Why can't we include Gemini 1.5?

Is there any issue with Gemini 1.5? It has cracked CTFs, and this software is sometimes inaccurate. Offering Gemini 1.5 as an optional model would be nice for all users.

vite-plugin-svelte not found

The error message indicates that the package '@sveltejs/vite-plugin-svelte' is not found. This package is required by your Vite configuration file.

You should install the missing package by running the following command in your terminal:
npm install @sveltejs/vite-plugin-svelte

After running this command, try running your development server again with npm run dev.

Here is the PR for this: #51

Crashes when running without Ollama

I tried starting devika.py without Ollama running and got this error:

D:\Data\ML\devika\venv\Scripts\python.exe D:\Data\ML\devika\devika.py
Traceback (most recent call last):
File "D:\Data\ML\devika\venv\Lib\site-packages\httpx\_transports\default.py", line 66, in map_httpcore_exceptions
yield
File "D:\Data\ML\devika\venv\Lib\site-packages\httpx\_transports\default.py", line 228, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\httpcore\_sync\connection_pool.py", line 216, in handle_request
raise exc from None
File "D:\Data\ML\devika\venv\Lib\site-packages\httpcore\_sync\connection_pool.py", line 196, in handle_request
response = connection.handle_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\httpcore\_sync\connection.py", line 99, in handle_request
raise exc
File "D:\Data\ML\devika\venv\Lib\site-packages\httpcore\_sync\connection.py", line 76, in handle_request
stream = self._connect(request)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\httpcore\_sync\connection.py", line 122, in _connect
stream = self._network_backend.connect_tcp(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\httpcore\_backends\sync.py", line 205, in connect_tcp
with map_exceptions(exc_map):
File "C:\Users\tproh\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "D:\Data\ML\devika\venv\Lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "D:\Data\ML\devika\devika.py", line 16, in <module>
from src.agents import Agent
File "D:\Data\ML\devika\src\agents\__init__.py", line 1, in <module>
from .agent import Agent
File "D:\Data\ML\devika\src\agents\agent.py", line 1, in <module>
from .planner import Planner
File "D:\Data\ML\devika\src\agents\planner\__init__.py", line 1, in <module>
from .planner import Planner
File "D:\Data\ML\devika\src\agents\planner\planner.py", line 3, in <module>
from src.llm import LLM
File "D:\Data\ML\devika\src\llm\__init__.py", line 1, in <module>
from .llm import LLM, TOKEN_USAGE
File "D:\Data\ML\devika\src\llm\llm.py", line 14, in <module>
class Model(Enum):
File "D:\Data\ML\devika\src\llm\llm.py", line 25, in Model
for model in Ollama.list_models()
^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\src\llm\ollama_client.py", line 6, in list_models
return ollama.list()["models"]
^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\ollama\_client.py", line 328, in list
return self._request('GET', '/api/tags').json()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\ollama\_client.py", line 68, in _request
response = self._client.request(method, url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\httpx\_client.py", line 814, in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\httpx\_client.py", line 901, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\httpx\_client.py", line 929, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\httpx\_client.py", line 966, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\httpx\_client.py", line 1002, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Data\ML\devika\venv\Lib\site-packages\httpx\_transports\default.py", line 227, in handle_request
with map_httpcore_exceptions():
File "C:\Users\tproh\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "D:\Data\ML\devika\venv\Lib\site-packages\httpx\_transports\default.py", line 83, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it

Process finished with exit code 1

Once I started Ollama it worked fine. This might be an edge case, but I think it would be better to throw a proper error message (something like "Ollama is required; please start Ollama before running this script").

Handle ollama server not running bug

Python crash stack traceback from @ARajgor on Discord:

Traceback (most recent call last):
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpx\_transports\default.py", line 66, in map_httpcore_exceptions
    yield
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpx\_transports\default.py", line 228, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpcore\_sync\connection_pool.py", line 216, in handle_request
    raise exc from None
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpcore\_sync\connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpcore\_sync\connection.py", line 99, in handle_request
    raise exc
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpcore\_sync\connection.py", line 76, in handle_request
    stream = self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpcore\_sync\connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpcore\_backends\sync.py", line 205, in connect_tcp
    with map_exceptions(exc_map):
  File "C:\Users\ayush\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\D drive\Python\Main Projects\devika\devika.py", line 16, in <module>
    from src.agents import Agent
  File "D:\D drive\Python\Main Projects\devika\src\agents\__init__.py", line 1, in <module>
    from .agent import Agent
  File "D:\D drive\Python\Main Projects\devika\src\agents\agent.py", line 1, in <module>
    from .planner import Planner
  File "D:\D drive\Python\Main Projects\devika\src\agents\planner\__init__.py", line 1, in <module>
    from .planner import Planner
  File "D:\D drive\Python\Main Projects\devika\src\agents\planner\planner.py", line 3, in <module>
    from src.llm import LLM
  File "D:\D drive\Python\Main Projects\devika\src\llm\__init__.py", line 1, in <module>
    from .llm import LLM, TOKEN_USAGE
  File "D:\D drive\Python\Main Projects\devika\src\llm\llm.py", line 17, in <module>
    class Model(Enum):
  File "D:\D drive\Python\Main Projects\devika\src\llm\llm.py", line 25, in Model
    for model in ollama.list()["models"]:
                 ^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\ollama\_client.py", line 328, in list
    return self._request('GET', '/api/tags').json()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\ollama\_client.py", line 68, in _request
    response = self._client.request(method, url, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpx\_client.py", line 814, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpx\_client.py", line 901, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpx\_client.py", line 929, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpx\_client.py", line 966, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpx\_client.py", line 1002, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpx\_transports\default.py", line 227, in handle_request
    with map_httpcore_exceptions():
  File "C:\Users\ayush\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "D:\D drive\Python\Main Projects\devika\venv\Lib\site-packages\httpx\_transports\default.py", line 83, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it

The Ollama client module in devika\src\llm\llm.py is trying to reach the Ollama server, but it's not running. This is a bug and should be handled properly.
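
A minimal way to handle it, sketched against the src/llm/ollama_client.py shown in the traceback (the friendly message is just an example):

import ollama
from httpx import ConnectError

def list_models():
    try:
        return ollama.list()["models"]
    except ConnectError:
        # Ollama isn't reachable; degrade gracefully instead of crashing at import time
        print("Ollama server is not running. Start it with 'ollama serve' to use local models.")
        return []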

The Terminal XTerm widget gets squished down whenever the Browser widget is updated with a snapshot event

What's the issue?

The Browser Widget and the Terminal Widget share the half-row space in the main web UI. When the Browser Widget gets populated with the browser session's snapshot, the Terminal Widget overflows the main app container because the size of the Browser Widget changes.

See here: https://github.com/stitionai/devika/blob/main/ui/src/App.svelte#L46-L49

Browser Widget: https://github.com/stitionai/devika/blob/main/ui/src/components/BrowserWidget.svelte
Terminal Widget: https://github.com/stitionai/devika/blob/main/ui/src/components/TerminalWidget.svelte

How to fix?

Make the Terminal Widget adapt to size changes in the Browser Widget, and make the Browser Widget default to the same size the snapshot image will fill.

Completely Locally

Will there be a way to run it completely locally, without the Claude or OpenAI APIs, soon?

For example, using Ollama instead of OpenAI or Claude?

Token Usage should be inferred from `AgentState` instead of Lazy initialized global value (`TOKEN_USAGE`)

Currently, token usage is only calculated for the active session, in a lazily initialized global value (TOKEN_USAGE).

Implement Token Usage updates into the AgentState store.

The value is already there:

def new_state(self):
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    
    return {
        "internal_monologue": None,
        "browser_session": {
            "url": None,
            "screenshot": None
        },
        "terminal_session": {
            "command": None,
            "output": None,
            "title": None
        },
        "step": None,
        "message": None,
        "completed": False,
        "agent_is_active": True,
        "token_usage": 0,
        "timestamp": timestamp
    }

Just make a helper function that writes the token usage into the latest state, and use that for the API endpoint:

@app.route("/api/token-usage", methods=["GET"])
def token_usage():
    from src.llm import TOKEN_USAGE
    return jsonify({"token_usage": TOKEN_USAGE})
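
A sketch of that helper (the AgentState accessor names are assumptions):

def update_token_usage(self, project_name: str, tokens: int):
    state = self.get_latest_state(project_name)  # hypothetical accessor
    state["token_usage"] += tokens
    self.update_latest_state(project_name, state)  # hypothetical writer

@app.route("/api/token-usage", methods=["GET"])
def token_usage():
    project_name = request.args.get("project_name")
    state = AgentState().get_latest_state(project_name)  # hypothetical accessor
    return jsonify({"token_usage": state["token_usage"] if state else 0})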


Implement the Settings page in the Svelte Web UI

In the main UI, the Settings page is not implemented yet. Here's how to implement it:

API

Set Settings

Endpoint: /api/set-settings
Method: POST
JSON Request Body:

{
  "STORAGE": {
    "SQLITE_DB": "db/devika.db",
    "SCREENSHOTS_DIR": "screenshots",
    "PDFS_DIR": "pdfs",
    "PROJECTS_DIR": "projects",
    "LOGS_DIR": "logs",
    "REPOS_DIR": "repos"
  },
  "API_KEYS": {
    "BING": "<YOUR_BING_API_KEY>",
    "CLAUDE": "<YOUR_CLAUDE_API_KEY>",
    "NETLIFY": "<YOUR_NETLIFY_API_KEY>",
    "OPENAI": "<YOUR_OPENAI_API_KEY>"
  },
  "API_ENDPOINTS": {
    "BING": "https://api.bing.microsoft.com/v7.0/search"
  }
}

Get Settings

Endpoint: /api/get-settings
Method: GET

{
  "STORAGE": {
    "SQLITE_DB": "db/devika.db",
    "SCREENSHOTS_DIR": "screenshots",
    "PDFS_DIR": "pdfs",
    "PROJECTS_DIR": "projects",
    "LOGS_DIR": "logs",
    "REPOS_DIR": "repos"
  },
  "API_KEYS": {
    "BING": "<YOUR_BING_API_KEY>",
    "CLAUDE": "<YOUR_CLAUDE_API_KEY>",
    "NETLIFY": "<YOUR_NETLIFY_API_KEY>",
    "OPENAI": "<YOUR_OPENAI_API_KEY>"
  },
  "API_ENDPOINTS": {
    "BING": "https://api.bing.microsoft.com/v7.0/search"
  }
}
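
A backend sketch for these two endpoints (assuming a Config helper that can read and persist the config file; method names are illustrative):

@app.route("/api/set-settings", methods=["POST"])
def set_settings():
    Config().update_config(request.json)  # hypothetical: merge into the config file and save
    return jsonify({"message": "Settings updated"})

@app.route("/api/get-settings", methods=["GET"])
def get_settings():
    return jsonify(Config().get_config())  # hypothetical: the full config as a dict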

UI

Add the required form widgets for each of these configuration values.

ERROR. Can anyone help?

I would love to get this working.
I try to start the server and I get:

C:\devika>python devika.py
Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_transports\default.py", line 66, in map_httpcore_exceptions
    yield
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_transports\default.py", line 228, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_sync\connection_pool.py", line 216, in handle_request
    raise exc from None
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_sync\connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_sync\connection.py", line 99, in handle_request
    raise exc
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_sync\connection.py", line 76, in handle_request
    stream = self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_sync\connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_backends\sync.py", line 205, in connect_tcp
    with map_exceptions(exc_map):
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\devika\devika.py", line 16, in <module>
    from src.agents import Agent
  File "C:\devika\src\agents\__init__.py", line 1, in <module>
    from .agent import Agent
  File "C:\devika\src\agents\agent.py", line 1, in <module>
    from .planner import Planner
  File "C:\devika\src\agents\planner\__init__.py", line 1, in <module>
    from .planner import Planner
  File "C:\devika\src\agents\planner\planner.py", line 3, in <module>
    from src.llm import LLM
  File "C:\devika\src\llm\__init__.py", line 1, in <module>
    from .llm import LLM, TOKEN_USAGE
  File "C:\devika\src\llm\llm.py", line 14, in <module>
    class Model(Enum):
  File "C:\devika\src\llm\llm.py", line 25, in Model
    for model in Ollama.list_models()
                 ^^^^^^^^^^^^^^^^^^^^
  File "C:\devika\src\llm\ollama_client.py", line 6, in list_models
    return ollama.list()["models"]
           ^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\ollama\_client.py", line 328, in list
    return self._request('GET', '/api/tags').json()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\ollama\_client.py", line 68, in _request
    response = self._client.request(method, url, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 814, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 901, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 929, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 966, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 1002, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_transports\default.py", line 227, in handle_request
    with map_httpcore_exceptions():
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_transports\default.py", line 83, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it

Thank you!

Mind adding generic openAI support?

A lot of local servers implement the OpenAI API spec: tabbyAPI, textgen, llama.cpp server, etc. Is it possible to add support for that? Ollama is very limiting and I want to use this with 70B+ models. I'm of course going to kludge the code to try, but you may be able to support many projects all at once by generalizing one API.
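
For what it's worth, the official openai Python client (v1+) already accepts a custom base_url, so any OpenAI-compatible server can be targeted like this (URL and model name are placeholders):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")
response = client.chat.completions.create(
    model="local-model",  # whatever name the local server exposes
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)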

Ollama not working.

Using Ollama: errors on all models.
Models used: codellama & gemma.

I created a new project using Ollama (gemma), sent a command, and got no response in the web UI. In the terminal I get the error below.


24.03.23 01:24:48: root: INFO   : Initializing Devika...
24.03.23 01:24:48: root: INFO   : Initializing Prerequisites Jobs...
24.03.23 01:24:48: root: INFO   : Loading sentence-transformer BERT models...
24.03.23 01:24:54: root: INFO   : BERT model loaded successfully.
 * Serving Flask app 'devika'
 * Debug mode: off
Token usage: 321
Exception in thread Thread-241:
Traceback (most recent call last):
  File "/root/.pyenv/versions/3.9.18/lib/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/root/.pyenv/versions/3.9.18/lib/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/var/www/devika/devika.py", line 49, in <lambda>
    target=lambda: Agent(base_model=base_model).execute(prompt, project_name)
  File "/var/www/devika/src/agents/agent.py", line 264, in execute
    plan = self.planner.execute(prompt)
  File "/var/www/devika/src/agents/planner/planner.py", line 70, in execute
    response = self.llm.inference(prompt)
  File "/var/www/devika/src/llm/llm.py", line 53, in inference
    model = self.model_id_to_enum_mapping()[self.model_id]
KeyError: '7B - Q4_0'

Following the simple installation instructions results in an error in building wheel for fastlogging

1. git clone https://github.com/stitionai/devika.git
2. cd devika
3. pip install -r requirements.txt

installation goes fine until building the wheel for fastlogging:

Building wheels for collected packages: fastlogging, keybert, svglib
Building wheel for fastlogging (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [41 lines of output]
Creating C:\Users\JonMi\AppData\Local\Temp\pip-install-j4bbgzah\fastlogging_55cbaebdbb0042b698ab20777b87b3f5\fastlogging\fastlogging.pyx
Creating C:\Users\JonMi\AppData\Local\Temp\pip-install-j4bbgzah\fastlogging_55cbaebdbb0042b698ab20777b87b3f5\fastlogging\network.pyx
building with Cython 3.0.9
Warning: passing language='c++' to cythonize() is deprecated. Instead, put "# distutils: language=c++" in your .pyx or .pxd file(s)
Compiling fastlogging\fastlogging.pyx because it changed.
Compiling fastlogging\network.pyx because it changed.
[1/2] Cythonizing fastlogging\fastlogging.pyx
[2/2] Cythonizing fastlogging\network.pyx
C:\Users\JonMi\miniconda3\Lib\site-packages\setuptools\config\setupcfg.py:293: _DeprecatedConfig: Deprecated config in setup.cfg
!!

          ********************************************************************************
          The license_file parameter is deprecated, use license_files instead.

          This deprecation is overdue, please update your project and remove deprecated
          calls to avoid build errors in the future.

          See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
          ********************************************************************************

  !!
    parsed = self.parsers.get(option_name, lambda x: x)(value)
  running bdist_wheel
  running build
  running build_ext
  building 'fastlogging.fastlogging' extension
  creating build
  creating build\temp.win-amd64-cpython-311
  creating build\temp.win-amd64-cpython-311\Release
  creating build\temp.win-amd64-cpython-311\Release\Users
  creating build\temp.win-amd64-cpython-311\Release\Users\JonMi
  creating build\temp.win-amd64-cpython-311\Release\Users\JonMi\AppData
  creating build\temp.win-amd64-cpython-311\Release\Users\JonMi\AppData\Local
  creating build\temp.win-amd64-cpython-311\Release\Users\JonMi\AppData\Local\Temp
  creating build\temp.win-amd64-cpython-311\Release\Users\JonMi\AppData\Local\Temp\pip-install-j4bbgzah
  creating build\temp.win-amd64-cpython-311\Release\Users\JonMi\AppData\Local\Temp\pip-install-j4bbgzah\fastlogging_55cbaebdbb0042b698ab20777b87b3f5
  creating build\temp.win-amd64-cpython-311\Release\Users\JonMi\AppData\Local\Temp\pip-install-j4bbgzah\fastlogging_55cbaebdbb0042b698ab20777b87b3f5\fastlogging
  "C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\JonMi\miniconda3\include -IC:\Users\JonMi\miniconda3\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\cppwinrt" /EHsc /TpC:\Users\JonMi\AppData\Local\Temp\pip-install-j4bbgzah\fastlogging_55cbaebdbb0042b698ab20777b87b3f5\fastlogging\fastlogging.cpp /Fobuild\temp.win-amd64-cpython-311\Release\Users\JonMi\AppData\Local\Temp\pip-install-j4bbgzah\fastlogging_55cbaebdbb0042b698ab20777b87b3f5\fastlogging\fastlogging.obj /O2 /EHsc /D__PYX_FORCE_INIT_THREADS=1
  fastlogging.cpp
  C:\Users\JonMi\AppData\Local\Temp\pip-install-j4bbgzah\fastlogging_55cbaebdbb0042b698ab20777b87b3f5\fastlogging\fastlogging.cpp : fatal error C1083: Cannot open compiler generated file: '': Invalid argument
  error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 1
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for fastlogging
Running setup.py clean for fastlogging
Building wheel for keybert (setup.py) ... done
Created wheel for keybert: filename=keybert-0.8.4-py3-none-any.whl size=39256 sha256=49ecbd373e2a3282f02f94654ffaac049897c0b3c313705b0930acdedd8d4a30
Stored in directory: C:\Users\JonMi\AppData\Local\Temp\pip-ephem-wheel-cache-ia45drz4\wheels\a0\c7\34\c35525fc11f76fe759878e01896b47c46598ff359e3e0aa4b2
Building wheel for svglib (setup.py) ... done
Created wheel for svglib: filename=svglib-1.5.1-py3-none-any.whl size=30977 sha256=6d6af65220c99ca3964dcae5b80789b029158e6c7b5c717ad0989bb01ec1f05d
Stored in directory: C:\Users\JonMi\AppData\Local\Temp\pip-ephem-wheel-cache-ia45drz4\wheels\7e\01\0e\e6e336915d6e8448890a695770ba88fe030cc71060988016f6
Successfully built keybert svglib
Failed to build fastlogging
ERROR: Could not build wheels for fastlogging, which is required to install pyproject.toml-based projects

Any ideas why this would occur?
I double-checked and made sure the MSVC build tools are up to date.

Include AzureOpenAI

Azure OpenAI Service is also one of many popular LLM services; it is part of the Azure AI Studio platform.

I have forked the repo and made the appropriate changes;
I will raise a PR tomorrow.

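For reference, the openai package ships an AzureOpenAI client, so the integration could look roughly like this (endpoint, key, deployment name, and API version are placeholders):

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<YOUR_AZURE_OPENAI_KEY>",
    api_version="2024-02-01",
)
response = client.chat.completions.create(
    model="<your-deployment-name>",  # Azure routes on the deployment name
    messages=[{"role": "user", "content": "Hello"}],
)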

Use different browser provider

Hey there! I've modified the code to use Serper's API. It searches the query correctly and the result contains a URL, but for some reason the function returns None. Could someone guide me to modify it again?
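
A minimal Serper call for comparison (a sketch; assumes Serper's documented response shape with an 'organic' result list):

import requests

def serper_first_link(query: str, api_key: str):
    resp = requests.post(
        "https://google.serper.dev/search",
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
        json={"q": query},
    )
    results = resp.json().get("organic") or []
    # returning None here is exactly what crashes callers that index into the result
    return results[0]["link"] if results else None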

FastLogging can't be installed on Windows

I followed every step in the installation:

ollama serve
git clone https://github.com/stitionai/devika.git
cd devika/
uv venv
uv pip install -r requirements.txt
cd ui/
bun install
bun run dev
python3 devika.py

When I type python devika.py on Windows it says:

Traceback (most recent call last):
  File "D:\devika\devika.py", line 9, in <module>
    from src.init import init_devika
  File "D:\devika\src\init.py", line 4, in <module>
    from src.logger import Logger
  File "D:\devika\src\logger.py", line 1, in <module>
    from fastlogging import LogInit
ModuleNotFoundError: No module named 'fastlogging'

fastlogging is the only library that I can't install on Windows 10, would it be possible to replace it with picologging?

OpenAI API key not working

TypeError: OpenAI.__init__() missing 1 required positional argument: 'api_key'
[2024-03-22 13:20:25,809] ERROR in app: Exception on /api/project-list [GET]
Traceback (most recent call last):

[ Feat ] Create Issues template and Extend Contribution guide.

Hi Everyone,

Arindam this side. I recently came to know about this project via social media. I don't want to raise an irrelevant issue or feature request, so I was going through the code and recent issues, and noticed there are no issue templates for this repo. This is a great project and I believe more contributions will come along the way, so having an issue template for each type of issue would help keep a consistent format and segregation, and would also help with automation, like blocking spam issues. We could also extend the contribution guide for better code check-ins.

Hence I am raising this issue as a feature request. If the maintainers agree with my point. I would like to take this task on me.

Thanks and Regards
❤️❤️ Arindam

Raspberry Pi error

Hello,

I am trying to install and run Devika on a Raspberry Pi, and I get this error when running python devika.py:

Traceback (most recent call last):
  File "/home/pi/.local/lib/python3.9/site-packages/httpx/_transports/default.py", line 66, in map_httpcore_exceptions
    yield
  File "/home/pi/.local/lib/python3.9/site-packages/httpx/_transports/default.py", line 228, in handle_request
    resp = self._pool.handle_request(req)
  File "/home/pi/.local/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/home/pi/.local/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
  File "/home/pi/.local/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
    raise exc
  File "/home/pi/.local/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
    stream = self._connect(request)
  File "/home/pi/.local/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
  File "/home/pi/.local/lib/python3.9/site-packages/httpcore/_backends/sync.py", line 213, in connect_tcp
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
  File "/usr/lib/python3.9/contextlib.py", line 135, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/pi/.local/lib/python3.9/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/pi/devika/devika.py", line 16, in <module>
    from src.agents import Agent
  File "/home/pi/devika/src/agents/__init__.py", line 1, in <module>
    from .agent import Agent
  File "/home/pi/devika/src/agents/agent.py", line 1, in <module>
    from .planner import Planner
  File "/home/pi/devika/src/agents/planner/__init__.py", line 1, in <module>
    from .planner import Planner
  File "/home/pi/devika/src/agents/planner/planner.py", line 3, in <module>
    from src.llm import LLM
  File "/home/pi/devika/src/llm/__init__.py", line 1, in <module>
    from .llm import LLM, TOKEN_USAGE
  File "/home/pi/devika/src/llm/llm.py", line 14, in <module>
    class Model(Enum):
  File "/home/pi/devika/src/llm/llm.py", line 25, in Model
    for model in Ollama.list_models()
  File "/home/pi/devika/src/llm/ollama_client.py", line 6, in list_models
    return ollama.list()["models"]
  File "/home/pi/.local/lib/python3.9/site-packages/ollama/_client.py", line 328, in list
    return self._request('GET', '/api/tags').json()
  File "/home/pi/.local/lib/python3.9/site-packages/ollama/_client.py", line 68, in _request
    response = self._client.request(method, url, **kwargs)
  File "/home/pi/.local/lib/python3.9/site-packages/httpx/_client.py", line 814, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
  File "/home/pi/.local/lib/python3.9/site-packages/httpx/_client.py", line 901, in send
    response = self._send_handling_auth(
  File "/home/pi/.local/lib/python3.9/site-packages/httpx/_client.py", line 929, in _send_handling_auth
    response = self._send_handling_redirects(
  File "/home/pi/.local/lib/python3.9/site-packages/httpx/_client.py", line 966, in _send_handling_redirects
    response = self._send_single_request(request)
  File "/home/pi/.local/lib/python3.9/site-packages/httpx/_client.py", line 1002, in _send_single_request
    response = transport.handle_request(request)
  File "/home/pi/.local/lib/python3.9/site-packages/httpx/_transports/default.py", line 228, in handle_request
    resp = self._pool.handle_request(req)
  File "/usr/lib/python3.9/contextlib.py", line 135, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/pi/.local/lib/python3.9/site-packages/httpx/_transports/default.py", line 83, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 111] Connection refused

Local LLM

Please add the ability to use a locally hosted LLM, for example via LM Studio.
Getting away from closed-source LLMs is the best for everyone.

install specific cuda version (11.8)

While installing Devika, it installed CUDA 12, and I have a compatibility issue between this version and my GPU.

How can I install Devika using CUDA 11.8?
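
PyTorch publishes separate wheels per CUDA version, so one likely fix is reinstalling torch from the cu118 index after the normal requirements install:

pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu118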
