
Comments (10)

Floriferous commented on September 25, 2024

This is crucial if you want to have a real conversation with an AI that can call tools. Right now I get the toolCall result back, and then I can't chat anymore.

Here's my conversation:

Me: Hi, what can you do?
AI: Hi, I can do X
Me: Ok please do X
AI: toolcall.result = Y // I would expect some commentary to come with the result here
Me: Ok thanks
AI: toolcall.result = Y
Me: What do you think about the result?
AI: toolcall.result = Y

How do you work around this behavior?

from ai.

benjaminalgreen commented on September 25, 2024

This is possible using frontend tool calls, but tool calls for streamed responses are supposed to be coming soon. #1574 (comment)


nsenthilkumar commented on September 25, 2024

While a QoL update to automate round trips would be great, for right now you can recursively pipe tool call responses back into submitMessage() yourself:

async function myTool(args) {
    // do stuff: run the actual tool logic (doStuff is a placeholder)
    const toolResult = await doStuff(args)
    // pipe the tool result straight back into the conversation
    return submitMessage(toolResult)
}


kevinschaich commented on September 25, 2024

Also interested in this – one thought is maybe the streamUI API would benefit from keeping structured tool output separate from the desired UI (generate function in most of the examples).

Manually piping tool output back in via submitMessage as mentioned by @nsenthilkumar is interesting, but wouldn't that only provide additional context to the following user message in the chat history? It doesn't direct the assistant to take another step after the current tool run completes in order to make incremental progress using multiple tool invocations against the original prompt. If I'm missing something I'd love to see a more complete example.
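The multi-step behavior in question can be sketched outside of streamUI as a plain server-side loop. This is an illustrative sketch only, not an SDK API: `callModel` and `runTool` are hypothetical stand-ins for the model call and tool dispatch. The point is that the server keeps executing requested tools and appending their results until the model replies with plain text, so the assistant can make incremental progress across multiple tool invocations.

```typescript
// Hypothetical helpers, not streamUI APIs: `callModel` asks the model for its
// next step given the history, `runTool` dispatches a tool by name.
type ModelReply =
  | { kind: 'text'; text: string }
  | { kind: 'tool_call'; id: string; name: string; args: unknown };

type Message =
  | { role: 'user' | 'assistant'; content: string }
  | { role: 'tool'; toolCallId: string; content: string };

async function runToCompletion(
  history: Message[],
  callModel: (h: Message[]) => Promise<ModelReply>,
  runTool: (name: string, args: unknown) => Promise<unknown>,
  maxSteps = 5, // guard against infinite tool loops
): Promise<Message[]> {
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(history);
    if (reply.kind === 'text') {
      // The model answered in prose: the round trips are done.
      history.push({ role: 'assistant', content: reply.text });
      return history;
    }
    // Execute the requested tool, append its result, and loop so the model
    // can read the result and decide on the next step.
    const result = await runTool(reply.name, reply.args);
    history.push({
      role: 'tool',
      toolCallId: reply.id,
      content: JSON.stringify(result),
    });
  }
  return history;
}
```

Unlike a one-shot submitMessage, the loop lets a single user prompt drive several tool invocations before the assistant produces its final commentary.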

Open to ideas, but something like this would provide a history of previous tool invocations, similar to generateText:

tools: {
    getWeather: {
        description: 'Get Weather',
        parameters: z.object({
            city: z.string().describe('The city to get weather for'),
        }),
        generate: async function* ({ city }) {
            const weatherForCity = await getWeather(city)
            yield {
                output: weatherForCity,
                widget: <Weather weather={weatherForCity} />,
            }
        },
    }
}


bastotec commented on September 25, 2024

I'm also curious how to persist tool result messages using streamUI. Right now OpenAI requires that every tool call have a corresponding tool result message on the chat history array. Since tools with streamUI return a ReactNode, are we supposed to save that serialized node as a ToolResultPart and feed that back into the history?
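One possible answer, as a sketch only, assuming OpenAI-style message shapes (`toToolResultMessage` is a hypothetical helper, not an SDK export): persist the structured tool output as the tool-result message, not the serialized ReactNode, and treat the rendered node purely as presentation.

```typescript
// Hypothetical helper: build an OpenAI-style tool result message from the
// structured output of a tool run, keeping the rendered UI out of the
// model-visible history.
type ToolResultMessage = {
  role: 'tool';
  tool_call_id: string; // must match the id of the assistant's tool call
  content: string;      // serialized structured result, not the rendered UI
};

function toToolResultMessage(toolCallId: string, result: unknown): ToolResultMessage {
  return {
    role: 'tool',
    tool_call_id: toolCallId,
    content: JSON.stringify(result),
  };
}
```

Feeding this back into the history would satisfy the requirement that every tool call has a corresponding tool result, without serializing React nodes into the chat history.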


cryptoKevinL commented on September 25, 2024

I've made a support request to Vercel for this issue as well. Was very surprised there was no existing support or some other suggested workaround/solution.


karam-khanna commented on September 25, 2024

Big +1 on this


yamz8 commented on September 25, 2024

> I've made a support request to Vercel for this issue as well. Was very surprised there was no existing support or some other suggested workaround/solution.

Any updates from Vercel?


cryptoKevinL commented on September 25, 2024

Would be nice - is there a technical reason why it's not possible? For example, because streamUI wants to start returning data to the user before it has a full response, and it would need to wait for and process the full result in case it needs to call another tool (thus defeating the purpose of streaming in the first place)? Just curious; it would help development to know the path we need to take one way or the other.


yamz8 commented on September 25, 2024

> Would be nice - is there a technical reason why it's not possible? For example, because streamUI wants to start returning data to the user before it has a full response, and it would need to wait for and process the full result in case it needs to call another tool (thus defeating the purpose of streaming in the first place)? Just curious; it would help development to know the path we need to take one way or the other.

In the latest version they added support for the text stream, but not for streamUI: https://vercel.com/blog/introducing-vercel-ai-sdk-3-2

