Comments (6)
Thanks for pointing this out @HexHands!
We should probably have some sort of message that says when it completes because it hits max_tokens (which might be happening here?). Then we could perhaps ask the user if they want to run the unfinished code block.
Does it do this for very small snippets on your system, like "print hello world"? That will help us determine if it's a max_tokens issue.
from open-interpreter.
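The check described above could be sketched roughly like this (a minimal sketch, assuming the OpenAI-style response convention where `finish_reason == "length"` means generation stopped at max_tokens; the prompt wording and function name are hypothetical):

```python
def handle_completion(response):
    """Inspect an LLM response dict and flag truncated output.

    Assumes the OpenAI chat-completion convention where
    finish_reason == "length" means the model hit max_tokens.
    Returns (text, should_run) so the caller can decide whether
    to execute the (possibly unfinished) code block.
    """
    choice = response["choices"][0]
    text = choice["message"]["content"]
    if choice.get("finish_reason") == "length":
        # Generation was cut off at max_tokens: warn the user instead of
        # silently running a half-finished code block.
        print("Note: the response was truncated (max_tokens reached).")
        answer = input("Run the unfinished code block anyway? (y/n) ")
        return text, answer.strip().lower() == "y"
    return text, True
```

For a small snippet that finishes normally (`finish_reason == "stop"`), this just passes the text through and allows execution.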
This is the best result I got. It runs the code, but for longer code it doesn't finish. I would like the ability to set max_tokens from code.
same problem
FYI, this was also referenced by me here: #126 (comment)
Even though I've changed the context size in the appropriate files, and also had a brief go at tokentrim.py, no luck.
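For context, tokentrim's job is to drop older messages so the conversation fits the model's context window. The rough idea can be sketched without the library (a minimal sketch; real token counting, e.g. via tiktoken, is approximated here by a characters-per-token heuristic, and the 4-chars-per-token ratio is just a ballpark):

```python
def trim_messages(messages, max_tokens, chars_per_token=4):
    """Drop the oldest non-system messages until the conversation fits.

    A rough stand-in for what tokentrim does: token counts are
    approximated as len(content) / chars_per_token, and system
    messages are always preserved.
    """
    def cost(msgs):
        return sum(len(m["content"]) for m in msgs) / chars_per_token

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    # Remove from the front (oldest first) until we fit the budget.
    while rest and cost(system + rest) > max_tokens:
        rest.pop(0)
    return system + rest
```

If changing the context size in the config has no visible effect, a trimmer like this (called before every model request) is usually where the messages are actually being cut.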
#126 (comment) Thanks @jordanbtucker, after the refactor it now works by editing the params in interpreter.py.
Still a bit buggy between instructions, but at least it does not stop 🚀
This issue looks resolved based on @brunoboto96's comment.
Related Issues (20)
- Use RAG for better context HOT 5
- Keep getting JSONDecodeError when using computer.browser HOT 5
- IPKernelApp Warning & Pydantic issue
- gpt-4 to gpt-3.5 turbo llm change HOT 1
- computer.browser.search is dead HOT 3
- VSCode terminal does not offer code execution y/n option
- Using connect Timeout in Router: httpx.Timeout
- Improve Documentation Formatting and Style for Better Clarity
- How to Extract Complete, Non-redundant, and Correct Code from Messages Testing on Benchmarks like HumanEval? HOT 5
- %info ubuntu 22 no output HOT 2
- When use ollama with model llama3:70b, the code cannot run HOT 11
- Can Not Start a New Chat in Terminal
- 'computer' module not found
- The interpreter cannot install required dependencies by itself.
- ollama llama3 How to remove the first line " ` " when generating code in Windows 11 terminal HOT 3
- Cannot run scripts that aren't python HOT 5
- After the task is completed, the task will be executed repeatedly and will not stop automatically; intermittent and continuous repeated output HOT 3
- Adding Groq Support HOT 1
- litellm.exceptions.ServiceUnavailableError: AnthropicException - anthropic does not support parameters: {'functions'
- You can see what's on the screen and go to My Downloads