
bolna-ai / bolna

371 stars · 9 watchers · 107 forks · 18.77 MB

End-to-end platform for building voice first multimodal agents

Home Page: https://playground.bolna.dev

License: MIT License

Python 99.67% Dockerfile 0.33%
deepgram llm openai telephony twilio whisper xtts polly anyscale elevenlabs

bolna's People

Contributors

adityajain2162, anshjoseph, denisbalan, eltociear, h3110fr13nd, marmikcfc, mustaphau, prateeksachan, sachanayush47, saurabhjdsingh, shibasish, vipul-maheshwari


bolna's Issues

whisper-melo-llama3 still calls Deepgram

Hello,
It appears that the whisper-melo-llama3 example still uses Deepgram, I guess for fillers, right?
Not a big issue, but this prevents it from being run 100% offline. :-)

2024-07-03 00:13:31 2024-07-03 04:13:31.927 INFO {deepgram_transcriber} [transcribe] STARTED TRANSCRIBING
2024-07-03 00:13:31 2024-07-03 04:13:31.927 INFO {deepgram_transcriber} [get_deepgram_ws_url] GETTING DEEPGRAM WS
2024-07-03 00:13:31 2024-07-03 04:13:31.928 INFO {deepgram_transcriber} [get_deepgram_ws_url] Deepgram websocket url: wss://api.deepgram.com/v1/listen?model=nova-2&filler_words=true&diarize=true&language=en&vad_events=true&encoding=mulaw&sample_rate=8000&channels=1&interim_results=true&utterance_end_ms=1000
2024-07-03 00:13:32 2024-07-03 04:13:32.427 INFO {task_manager} [__first_message] Got stream sid and hence sending the first message MZ6605b1b187e313aacd72ed8a348c7834
2024-07-03 00:13:32 2024-07-03 04:13:32.428 INFO {task_manager} [__first_message] Generating This call is being recorded for quality assurance and training. Please speak now.
2024-07-03 00:13:32 2024-07-03 04:13:32.440 INFO {task_manager} [_synthesize] ##### sending text to melotts for generation: This call is being recorded for quality assurance and training. Please speak now.

quickstart_client.py not working with the local setup

The quickstart client doesn't seem to work with the local setup, but it works with the hosted API.

#quickstart_server.py
2024-05-17 19:37:20.760 INFO {assistant_manager} [run] Done with execution of the agent
INFO:     connection closed
Traceback (most recent call last):
  File "X:\bolna\bolna\input_handlers\default.py", line 67, in _listen
    request = await self.websocket.receive_json()
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "X:\bolna\my_env\Lib\site-packages\starlette\websockets.py", line 133, in receive_json
    self._raise_on_disconnect(message)
  File "X:\bolna\my_env\Lib\site-packages\starlette\websockets.py", line 105, in _raise_on_disconnect
    raise WebSocketDisconnect(message["code"], message.get("reason"))
starlette.websockets.WebSocketDisconnect: (1000, None)
2024-05-17 19:37:20.775 INFO {default} [_listen] Error while handling websocket message: (1000, None)
2024-05-17 19:37:20.775 INFO {deepgram_transcriber} [_check_and_process_end_of_stream] First closing transcription websocket
2024-05-17 19:37:20.776 ERROR {base_transcriber} [_close] Error while closing transcriber stream received 1000 (OK); then sent 1000 (OK)
2024-05-17 19:37:20.776 INFO {deepgram_transcriber} [_check_and_process_end_of_stream] Closed transcription websocket and now closing transcription task
2024-05-17 19:37:21.659 ERROR {deepgram_transcriber} [send_heartbeat] Error while sending: received 1000 (OK); then sent 1000 (OK)
2024-05-17 19:37:31.150 INFO {task_manager} [__handle_initial_silence] Woke up from my slumber True, [{'role': 'system', 'content': 'Ask if they are coming for party tonight'}], [{'role': 'system', 'content': 'Ask if they are coming for party tonight'}]


#quickstart_client.py 
ERROR:root:received 1000 (OK); then sent 1000 (OK)
ERROR:root:received 1000 (OK); then sent 1000 (OK)
ERROR:root:received 1000 (OK); then sent 1000 (OK)
ERROR:root:received 1000 (OK); then sent 1000 (OK)
ERROR:root:received 1000 (OK); then sent 1000 (OK)
ERROR:root:received 1000 (OK); then sent 1000 (OK)
ERROR:root:received 1000 (OK); then sent 1000 (OK)
ERROR:root:received 1000 (OK); then sent 1000 (OK)
ERROR:root:received 1000 (OK); then sent 1000 (OK)
ERROR:root:received 1000 (OK); then sent 1000 (OK)
ERROR:root:received 1000 (OK); then sent 1000 (OK)
Traceback (most recent call last):
  File "x:\bolna\local_setup\quickstart_client.py", line 176, in <module>
    asyncio.run(main())
  File "C:\Python\Python312\Lib\asyncio\runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "C:\Python\Python312\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python\Python312\Lib\asyncio\base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "x:\bolna\local_setup\quickstart_client.py", line 172, in main
    await asyncio.gather(*tasks)
  File "x:\bolna\local_setup\quickstart_client.py", line 80, in emitter
    await self.ensure_open()
  File "X:\bolna\my_env\Lib\site-packages\websockets\legacy\protocol.py", line 944, in ensure_open 
    raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedOK: received 1000 (OK); then sent 1000 (OK)
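Both stack traces end in close code 1000 (`ConnectionClosedOK`), i.e. the server closed the socket normally after the agent finished, and the client then treats the normal closure as an error. A minimal self-contained sketch (stand-in classes, not Bolna's actual handlers) of treating code 1000 as a clean shutdown rather than a failure:

```python
import asyncio

class WebSocketDisconnect(Exception):
    """Stand-in for starlette's WebSocketDisconnect carrying a close code."""
    def __init__(self, code, reason=None):
        super().__init__(code, reason)
        self.code = code

async def listen(receive_json):
    """Read messages until the peer disconnects; close code 1000 is a normal shutdown."""
    while True:
        try:
            await receive_json()
        except WebSocketDisconnect as exc:
            if exc.code == 1000:
                return "clean-close"  # normal closure: log at INFO, not ERROR
            raise  # abnormal close codes should still propagate

async def demo():
    messages = iter([{"type": "audio", "data": "..."}])
    async def receive_json():
        try:
            return next(messages)
        except StopIteration:
            # simulate the server hanging up normally after the agent finishes
            raise WebSocketDisconnect(1000) from None
    return await listen(receive_json)

result = asyncio.run(demo())
print(result)  # clean-close
```

With this pattern the retry loop on the client side would also stop reconnecting once a clean close is observed, instead of looping on `received 1000 (OK)`.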

Error: " NameError: name 'WhisperTranscriber' is not defined"

Getting the following error after building and running compose up:

bolna-app-1 | Traceback (most recent call last):
bolna-app-1 | File "/usr/local/bin/uvicorn", line 8, in
bolna-app-1 | sys.exit(main())
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1157, in call
bolna-app-1 | return self.main(*args, **kwargs)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1078, in main
bolna-app-1 | rv = self.invoke(ctx)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
bolna-app-1 | return ctx.invoke(self.callback, **ctx.params)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/click/core.py", line 783, in invoke
bolna-app-1 | return __callback(*args, **kwargs)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 410, in main
bolna-app-1 | run(
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 578, in run
bolna-app-1 | server.run()
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
bolna-app-1 | return asyncio.run(self.serve(sockets=sockets))
bolna-app-1 | File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
bolna-app-1 | return loop.run_until_complete(main)
bolna-app-1 | File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
bolna-app-1 | config.load()
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/config.py", line 473, in load
bolna-app-1 | self.loaded_app = import_from_string(self.app)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
bolna-app-1 | module = importlib.import_module(module_str)
bolna-app-1 | File "/usr/local/lib/python3.10/importlib/init.py", line 126, in import_module
bolna-app-1 | return _bootstrap._gcd_import(name[level:], package, level)
bolna-app-1 | File "", line 1050, in _gcd_import
bolna-app-1 | File "", line 1027, in _find_and_load
bolna-app-1 | File "", line 1006, in _find_and_load_unlocked
bolna-app-1 | File "", line 688, in _load_unlocked
bolna-app-1 | File "", line 883, in exec_module
bolna-app-1 | File "", line 241, in _call_with_frames_removed
bolna-app-1 | File "/app/quickstart_server.py", line 12, in
bolna-app-1 | from bolna.models import *
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/bolna/models.py", line 4, in
bolna-app-1 | from .providers import *
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/bolna/providers.py", line 17, in
bolna-app-1 | 'whisper': WhisperTranscriber #Seperate out a transcriber for https://github.com/bolna-ai/streaming-transcriber-server or build a deepgram compatible proxy
bolna-app-1 | NameError: name 'WhisperTranscriber' is not defined
bolna-app-1 exited with code 1
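The NameError comes from bolna/providers.py referencing `WhisperTranscriber` in its registry dict even though the class was never imported in that build. A defensive pattern (a sketch, not Bolna's actual code) is to register optional providers only when their import succeeds; `json`/`JSONDecoder` stand in for real provider modules here:

```python
import importlib

def register_optional(providers: dict, key: str, module: str, cls: str):
    """Register a provider only if its module and class actually import cleanly."""
    try:
        providers[key] = getattr(importlib.import_module(module), cls)
    except (ImportError, AttributeError):
        pass  # leave the key out instead of raising NameError at import time

PROVIDERS = {}
register_optional(PROVIDERS, "json", "json", "JSONDecoder")                      # stdlib: succeeds
register_optional(PROVIDERS, "whisper", "no_such_module", "WhisperTranscriber")  # missing: skipped
print(sorted(PROVIDERS))  # ['json']
```

A lookup against such a registry can then raise a clear "provider not available in this build" error instead of crashing the whole app at import time.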

Getting this error while initiating a call via /call

It was working fine for me previously in v-7.6.

twilio-app-1 | INFO: 172.20.0.3:38740 - "POST /call HTTP/1.1" 500 Internal Server Error
twilio-app-1 | Exception occurred in make_call: HTTPConnectionPool(host='localhost', port=4040): Max retries exceeded with url: /api/tunnels (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff80628730>: Failed to establish a new connection: [Errno 111] Connection refused'))
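The /call handler queries ngrok's local API (GET http://localhost:4040/api/tunnels) to discover the public URL, and the connection is refused because ngrok is not reachable at that address — commonly because `localhost` inside a Docker container is not the host machine, or ngrok simply is not running. Once the endpoint is reachable, the response can be parsed roughly like this (the payload below illustrates the response shape; the URLs are placeholders):

```python
import json

# Illustrative shape of ngrok's /api/tunnels response.
sample = json.dumps({
    "tunnels": [
        {"proto": "http",  "public_url": "http://abc123.ngrok-free.app"},
        {"proto": "https", "public_url": "https://abc123.ngrok-free.app"},
    ]
})

def pick_https_tunnel(payload: str):
    """Return the first https public_url, or None if ngrok reported no tunnels."""
    tunnels = json.loads(payload).get("tunnels", [])
    for t in tunnels:
        if t.get("proto") == "https":
            return t["public_url"]
    return None  # caller should surface "ngrok running but no tunnel" clearly

print(pick_https_tunnel(sample))          # https://abc123.ngrok-free.app
print(pick_https_tunnel('{"tunnels": []}'))  # None
```

Inside docker compose, pointing the lookup at the ngrok service name (or `host.docker.internal`) instead of `localhost` is usually the actual fix for the connection-refused error.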

Extraction KeyError even though extraction details were provided

in run_endpoint_function
bolna-app-1 | return await dependant.call(**values)
bolna-app-1 | File "/app/quickstart_server.py", line 57, in create_agent
bolna-app-1 | {'role': 'user', 'content': data_for_db["tasks"][index]['tools_config']["llm_agent"]['extraction_details']}
bolna-app-1 | KeyError: 'extraction_details'


{
  "agent_config": {
    "agent_name": "Dominos",
    "agent_type": "Lead Qualification",
    "agent_welcome_message": "Please speak now.",
    "tasks": [
      {
        "task_type": "conversation",
        "call_cancellation_prompt": null,
        "hangup_after_LLMCall": false,
        "hangup_after_silence": 10,
        "incremental_delay": 100,
        "interruption_backoff_period": 100,
        "number_of_words_for_interruption": 3,
        "optimize_latency": true,
        "task_config": {
         "hangup_after_silence": 10
         },
        "tools_config": {
          "llm_agent": {
            "max_tokens": 100,
            "family": "openai",
            "model": "gpt-3.5-turbo",
            "agent_flow_type": "streaming",
            "use_fallback": true,
            "temperature": 0.2,
            "base_url": null
          },
          "synthesizer": {
                        "audio_format": "wav",
                        "buffer_size": 100,
                        "provider": "deepgram",
                        "provider_config": {
                            "voice": "aura-luna-en"
                        },
                        "stream": true
                    },
          "transcriber": {
            "model": "deepgram",
            "stream": true,
            "language": "en",
            "endpointing": 400,
            "keywords": ""
          },
          "input": {
            "provider": "twilio",
            "format": "wav"
          },
          "output": {
            "provider": "twilio",
            "format": "wav"
          }
        },
        "toolchain": {
          "execution": "parallel",
          "pipelines": [
            [
              "transcriber",
              "llm",
              "synthesizer"
            ]
          ]
        }
      },
    
      {
        "task_type": "summarization",
                        "task_config": {
         "hangup_after_silence": 10
         },
        "tools_config": {
          "llm_agent": {
            "max_tokens": 100,
            "family": "openai",
            "request_json": true,
            "model": "gpt-3.5-turbo-1106"
          }
        },
        "toolchain": {
          "execution": "parallel",
          "pipelines": [
            [
              "llm"
            ]
          ]
        }
      },

      {
        "task_type": "extraction",
                "task_config": {
         "hangup_after_silence": 10
         },
        "tools_config": {
          "llm_agent": {
            "max_tokens": 100,
            "family": "openai",
            "request_json": true,
            "model": "gpt-3.5-turbo-1106",
            "extraction_details": "Pizza_type : Yield the name of the Pizza that they have bought\nPayment_type : If they are paying by cash, yield cash. If they are paying by card yield card. Else yield NA"
          }
        },
        "toolchain": {
          "execution": "parallel",
          "pipelines": [
            [
              "llm"
            ]
          ]
        }
      },
      {
        "task_type": "summarization",
                "task_config": {
         "hangup_after_silence": 10
         },
        "tools_config": {
          "llm_agent": {
            "max_tokens": 100,
            "family": "openai",
            "request_json": true,
            "model": "gpt-3.5-turbo-1106"
          }
        },
        "toolchain": {
          "execution": "parallel",
          "pipelines": [
            [
              "llm"
            ]
          ]
        }
      },
      {
        "task_type": "webhook",
                        "task_config": {
         "hangup_after_silence": 10
         },
        "tools_config": {
          "api_tools": {
            "webhookURL": "https://eo7iosexlws9d8.m.pipedream.net"
          }
        },
        "toolchain": {
          "execution": "parallel",
          "pipelines": [
            [
              "llm"
            ]
          ]
        }
      }
    ]
  },
  "agent_prompts": {
        "task_1": {
          "system_prompt": "You will always be very concise and brief with your responses. You will NEVER speak longer than 2 sentences. \nYou are Rachel, an order representative for Pizza Square. You will be very helpful and kind. Customers will call you and give their order. You will follow this script. You will ONLY ask 1 question at a time. Once you get an answer you will move on to the next one. \n 1) You will start your conversation by saying. \"Hi! I am Rachel from Pizza Square. Am I speaking to Michael from 345 Palm Square?\" All your responses should be as long as this one.) \n 2) You will ask the customer what they would like to order. Only ask the next question once you have a clear answer to this question. If the customer wants assistance, you will ask them about their preferences. \n 3) You will ask the customer for the required size - regular, medium, or large. If the customer has follow up questions, you will clarify, that the small is for 1 person and 4 dollars, the medium is for 2 people and 7 dollars and the large is for 4 people and 11 dollars. Only ask the next question once you have a clear answer to this question. \n 4) Once confirmed, you will ask whether the customer wants to pay the amount of for the chosen pizza via card or cash. \n 5) You will then tell the customer that their pizza is in the oven and will reach within 30 minutes. \n You have options of 3 pizzas. You will only give description if the user asks for it.  \n 1) The BBQ Chicken - Which has Chicken, Mushrooms, Olives in a tomato base 2) The Pesto Dominator - Which has Chicken, Bacon and Ham in a green base. This is the best-selling pizza. 3) The Veg Ultimate - Which has tofu, mushrooms, olives, tomatoes in a tomato base. "
        }
      }
}
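In the payload above only the extraction task carries `extraction_details`, but quickstart_server.py indexes `['extraction_details']` unconditionally while iterating tasks, so the summarization/webhook tasks trigger the KeyError. A guarded lookup (hypothetical helper, not the server's actual code) avoids that:

```python
def extraction_prompt(task: dict):
    """Return the extraction prompt message, or None when the task has no extraction_details."""
    llm_agent = task.get("tools_config", {}).get("llm_agent", {})
    details = llm_agent.get("extraction_details")
    if details is None:
        return None  # summarization / webhook tasks carry no extraction_details
    return {"role": "user", "content": details}

extraction_task = {"tools_config": {"llm_agent": {"extraction_details": "Pizza_type : ..."}}}
webhook_task = {"tools_config": {"api_tools": {"webhookURL": "https://example.test"}}}
print(extraction_prompt(extraction_task))  # {'role': 'user', 'content': 'Pizza_type : ...'}
print(extraction_prompt(webhook_task))     # None
```

The server would then build the extraction prompt only for tasks where the helper returns a message, instead of assuming every task in the list is an extraction task.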

Exception: Something went wrong while sending heartbeats to deepgram

bolna-app-1 | 2024-06-17 08:19:23.514 ERROR {traceback} [extract] Task exception was never retrieved
bolna-app-1 | future: <Task finished name='Task-31' coro=<DeepgramTranscriber.send_heartbeat() done, defined at /usr/local/lib/python3.10/site-packages/bolna/transcriber/deepgram_transcriber.py:112> exception=Exception('Something went wrong while sending heartbeats to deepgram') created at /usr/local/lib/python3.10/asyncio/tasks.py:337>
bolna-app-1 | source_traceback: Object created at (most recent call last):
bolna-app-1 | File "/usr/local/bin/uvicorn", line 8, in
bolna-app-1 | sys.exit(main())
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1157, in call
bolna-app-1 | return self.main(*args, **kwargs)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1078, in main
bolna-app-1 | rv = self.invoke(ctx)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
bolna-app-1 | return ctx.invoke(self.callback, **ctx.params)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/click/core.py", line 783, in invoke
bolna-app-1 | return __callback(*args, **kwargs)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 410, in main
bolna-app-1 | run(
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 578, in run
bolna-app-1 | server.run()
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
bolna-app-1 | return asyncio.run(self.serve(sockets=sockets))
bolna-app-1 | File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
bolna-app-1 | return loop.run_until_complete(main)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/bolna/transcriber/deepgram_transcriber.py", line 353, in transcribe
bolna-app-1 | self.heartbeat_task = asyncio.create_task(self.send_heartbeat(deepgram_ws))
bolna-app-1 | File "/usr/local/lib/python3.10/asyncio/tasks.py", line 337, in create_task
bolna-app-1 | task = loop.create_task(coro)
bolna-app-1 | Traceback (most recent call last):
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/bolna/transcriber/deepgram_transcriber.py", line 116, in send_heartbeat
bolna-app-1 | await ws.send(json.dumps(data))
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/websockets/legacy/protocol.py", line 635, in send
bolna-app-1 | await self.ensure_open()
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/websockets/legacy/protocol.py", line 944, in ensure_open
bolna-app-1 | raise self.connection_closed_exc()
bolna-app-1 | websockets.exceptions.ConnectionClosedOK: received 1000 (OK); then sent 1000 (OK)
bolna-app-1 |
bolna-app-1 | During handling of the above exception, another exception occurred:
bolna-app-1 |
bolna-app-1 | Traceback (most recent call last):
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/bolna/transcriber/deepgram_transcriber.py", line 120, in send_heartbeat
bolna-app-1 | raise Exception("Something went wrong while sending heartbeats to {}".format(self.model))
bolna-app-1 | Exception: Something went wrong while sending heartbeats to deepgram
bolna-

Fatal error with Docker on whisper-melo-llama3

Hello, there is a fatal error when trying to deploy whisper-melo-llama3.
First, is it necessary to modify bolna_server.Dockerfile to point at the right branch? i.e. change:
RUN pip install --force-reinstall git+https://github.com/bolna-ai/bolna@MeloTTS
to:
RUN pip install --force-reinstall git+https://github.com/bolna-ai/bolna@master

Then there is the fatal error, I guess in MeloTTS:
` => [melo-app 5/10] RUN git clone https://github.com/bolna-ai/MeloTTS 2.1s
=> [melo-app 6/10] RUN pip install fastapi uvicorn torchaudio 46.9s
=> [bolna-app 12/14] RUN pip install pydub==0.25.1 1.6s
=> [bolna-app 13/14] RUN pip install ffprobe 2.7s
=> [bolna-app 14/14] RUN pip install aiofiles 1.3s
=> ERROR [bolna-app] exporting to docker image format 183.9s
=> => exporting layers 112.2s
=> => exporting manifest sha256:06c40a005b2f6040981b04d677a5994fa440a4b8b0a826bb7193ffae940856e5 0.0s
=> => exporting config sha256:89c37bae6f62aca0d4d57689610467537971fa097ea12c00147f3fc21cd900d5 0.0s
=> => sending tarball 71.7s
=> [melo-app 7/10] RUN cp -a MeloTTS/. . 0.7s
=> [melo-app 8/10] RUN python -m pip cache purge 0.8s
=> ERROR [melo-app 9/10] RUN pip install --no-cache-dir txtsplit torch torchaudio cached_path transformers==4.27.4 mecab-python3==1.0.5 num2words==0.5.12 unidic_lite unidic mecab-python3==1.0.5 pykaka 147.7s
=> [whisper-app 9/12] RUN pip install git+https://github.com/SYSTRAN/faster-whisper.git 8.6s
=> [whisper-app 10/12] RUN pip install transformers 11.1s
=> [whisper-app 11/12] RUN ct2-transformers-converter --model openai/whisper-small --copy_files preprocessor_config.json --output_dir ./Server/ASR/whisper_small --quantization float16 33.0s
=> [whisper-app 12/12] WORKDIR Server 0.0s
=> CANCELED [whisper-app] exporting to docker image format 87.4s
=> => exporting layers 87.4s
=> CANCELED [bolna-app bolna-app] importing to docker 17.5s

[bolna-app] exporting to docker image format:



[melo-app 9/10] RUN pip install --no-cache-dir txtsplit torch torchaudio cached_path transformers==4.27.4 mecab-python3==1.0.5 num2words==0.5.12 unidic_lite unidic mecab-python3==1.0.5 pykakasi==2.2.1 fugashi==1.3.0 g2p_en==2.1.0 anyascii==0.3.2 jamo==0.4.1 gruut[de,es,fr]==2.2.3 g2pkk>=0.1.1 librosa==0.9.1 pydub==0.25.1 eng_to_ipa==0.0.2 inflect==7.0.0 unidecode==1.3.7 pypinyin==0.50.0 cn2an==0.5.22 jieba==0.42.1 langid==1.1.6 tqdm tensorboard==2.16.2 loguru==0.7.2:
132.9 error: subprocess-exited-with-error
132.9
132.9 × python setup.py bdist_wheel did not run successfully.
132.9 │ exit code: 1
132.9 ╰─> [24 lines of output]
132.9 running bdist_wheel
132.9 running build
132.9 running build_py
132.9 creating build
132.9 creating build/lib.linux-aarch64-cpython-310
132.9 creating build/lib.linux-aarch64-cpython-310/pycrfsuite
132.9 copying pycrfsuite/_dumpparser.py -> build/lib.linux-aarch64-cpython-310/pycrfsuite
132.9 copying pycrfsuite/_logparser.py -> build/lib.linux-aarch64-cpython-310/pycrfsuite
132.9 copying pycrfsuite/init.py -> build/lib.linux-aarch64-cpython-310/pycrfsuite
132.9 running build_ext
132.9 building 'pycrfsuite._pycrfsuite' extension
132.9 creating build/temp.linux-aarch64-cpython-310
132.9 creating build/temp.linux-aarch64-cpython-310/crfsuite
132.9 creating build/temp.linux-aarch64-cpython-310/crfsuite/lib
132.9 creating build/temp.linux-aarch64-cpython-310/crfsuite/lib/cqdb
132.9 creating build/temp.linux-aarch64-cpython-310/crfsuite/lib/cqdb/src
132.9 creating build/temp.linux-aarch64-cpython-310/crfsuite/lib/crf
132.9 creating build/temp.linux-aarch64-cpython-310/crfsuite/lib/crf/src
132.9 creating build/temp.linux-aarch64-cpython-310/crfsuite/swig
132.9 creating build/temp.linux-aarch64-cpython-310/liblbfgs
132.9 creating build/temp.linux-aarch64-cpython-310/liblbfgs/lib
132.9 creating build/temp.linux-aarch64-cpython-310/pycrfsuite
132.9 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Icrfsuite/include/ -Icrfsuite/lib/cqdb/include -Iliblbfgs/include -Ipycrfsuite -I/usr/local/include/python3.10 -c -std=c99 crfsuite/lib/cqdb/src/cqdb.c -o build/temp.linux-aarch64-cpython-310/crfsuite/lib/cqdb/src/cqdb.o
132.9 error: command 'gcc' failed: No such file or directory
132.9 [end of output]
132.9
132.9 note: This error originates from a subprocess, and is likely not a problem with pip.
132.9 ERROR: Failed building wheel for python-crfsuite
139.5 error: subprocess-exited-with-error
139.5
139.5 × Running setup.py install for python-crfsuite did not run successfully.
139.5 │ exit code: 1
139.5 ╰─> [26 lines of output]
139.5 running install
139.5 /usr/local/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
139.5 warnings.warn(
139.5 running build
139.5 running build_py
139.5 creating build
139.5 creating build/lib.linux-aarch64-cpython-310
139.5 creating build/lib.linux-aarch64-cpython-310/pycrfsuite
139.5 copying pycrfsuite/_dumpparser.py -> build/lib.linux-aarch64-cpython-310/pycrfsuite
139.5 copying pycrfsuite/_logparser.py -> build/lib.linux-aarch64-cpython-310/pycrfsuite
139.5 copying pycrfsuite/init.py -> build/lib.linux-aarch64-cpython-310/pycrfsuite
139.5 running build_ext
139.5 building 'pycrfsuite._pycrfsuite' extension
139.5 creating build/temp.linux-aarch64-cpython-310
139.5 creating build/temp.linux-aarch64-cpython-310/crfsuite
139.5 creating build/temp.linux-aarch64-cpython-310/crfsuite/lib
139.5 creating build/temp.linux-aarch64-cpython-310/crfsuite/lib/cqdb
139.5 creating build/temp.linux-aarch64-cpython-310/crfsuite/lib/cqdb/src
139.5 creating build/temp.linux-aarch64-cpython-310/crfsuite/lib/crf
139.5 creating build/temp.linux-aarch64-cpython-310/crfsuite/lib/crf/src
139.5 creating build/temp.linux-aarch64-cpython-310/crfsuite/swig
139.5 creating build/temp.linux-aarch64-cpython-310/liblbfgs
139.5 creating build/temp.linux-aarch64-cpython-310/liblbfgs/lib
139.5 creating build/temp.linux-aarch64-cpython-310/pycrfsuite
139.5 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Icrfsuite/include/ -Icrfsuite/lib/cqdb/include -Iliblbfgs/include -Ipycrfsuite -I/usr/local/include/python3.10 -c -std=c99 crfsuite/lib/cqdb/src/cqdb.c -o build/temp.linux-aarch64-cpython-310/crfsuite/lib/cqdb/src/cqdb.o
139.5 error: command 'gcc' failed: No such file or directory
139.5 [end of output]
139.5
139.5 note: This error originates from a subprocess, and is likely not a problem with pip.
139.5 error: legacy-install-failure
139.5
139.5 × Encountered error while trying to install package.
139.5 ╰─> python-crfsuite
139.5
139.5 note: This is an issue with the package mentioned above, not pip.
139.5 hint: See above for output from the failure.
141.0
141.0 [notice] A new release of pip is available: 23.0.1 -> 24.1.1
141.0 [notice] To update, run: pip install --upgrade pip


failed to solve: process "/bin/sh -c pip install --no-cache-dir txtsplit torch torchaudio cached_path transformers==4.27.4 mecab-python3==1.0.5 num2words==0.5.12 unidic_lite unidic mecab-python3==1.0.5 pykakasi==2.2.1 fugashi==1.3.0 g2p_en==2.1.0 anyascii==0.3.2 jamo==0.4.1 gruut[de,es,fr]==2.2.3 g2pkk>=0.1.1 librosa==0.9.1 pydub==0.25.1 eng_to_ipa==0.0.2 inflect==7.0.0 unidecode==1.3.7 pypinyin==0.50.0 cn2an==0.5.22 jieba==0.42.1 langid==1.1.6 tqdm tensorboard==2.16.2 loguru==0.7.2" did not complete successfully: exit code: 1
`

Thanks for your support,
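The underlying failure in the log above is `error: command 'gcc' failed: No such file or directory` while building python-crfsuite's C extension, so the melo image lacks a C toolchain. Assuming a Debian-based base image, installing one before the failing pip install step should let the wheel build (a sketch, untested against this exact Dockerfile):

```shell
# In the melo Dockerfile (exact path assumed), add before the failing `pip install`:
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential \
    && rm -rf /var/lib/apt/lists/*
```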

Import Error

Hi,

Just a quick one: not sure where this error is coming from:

ImportError: cannot import name 'CustomLogger' from 'bolna.helpers.logger_config' (/usr/local/lib/python3.10/site-packages/bolna/helpers/logger_config.py)

I had renamed the config_logger function in logger_config.py to CustomLogger but the issue persists.

I would appreciate your help!
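A first debugging step for an ImportError like this is to confirm which copy of the module Python actually loads and which names it exports; in the sketch below `json` stands in for `bolna.helpers.logger_config`. If `__file__` points into site-packages rather than your checkout, the renamed function in your source tree is not the code being run — reinstall with `pip install -e .` or rebuild the Docker image so the edit takes effect.

```python
import importlib

# Inspect the module Python resolves, not the file you edited.
mod = importlib.import_module("json")  # stand-in for bolna.helpers.logger_config
print(mod.__file__)                    # which copy of the module is loaded
print("JSONDecoder" in dir(mod))       # the name you import must appear here
```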

SUPPORTED_LLM_MODELS variable misused, please check

def __setup_llm(self, llm_config):
    if self.task_config["tools_config"]["llm_agent"] is not None:
        if self.task_config["tools_config"]["llm_agent"]["family"] in SUPPORTED_LLM_MODELS.keys():
            llm_class = SUPPORTED_LLM_MODELS.get(self.task_config["tools_config"]["llm_agent"]["family"])
            logger.info(f"LLM CONFIG {llm_config}")
            llm = llm_class(**llm_config, **self.kwargs)
            return llm
        else:
            raise Exception(f'LLM {self.task_config["tools_config"]["llm_agent"]["family"]} not supported')

LiteLLM.__init__() missing 1 required positional argument: 'model'

bolna-app-1 | File "/app/quickstart_server.py", line 53, in create_agent
bolna-app-1 | extraction_prompt_generation_llm = LiteLLM(streaming_model=extraction_prompt_llm, max_tokens=2000)
bolna-app-1 | TypeError: LiteLLM.init() missing 1 required positional argument: 'model'
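The traceback shows quickstart_server.py constructing `LiteLLM` with only `streaming_model=` and `max_tokens=`, while the class requires a positional `model` argument. A simplified stand-in (not Bolna's real class) reproducing the mismatch and the fix:

```python
class LiteLLM:
    """Simplified stand-in mirroring the reported signature: `model` is required."""
    def __init__(self, model, max_tokens=100, **kwargs):
        self.model = model
        self.max_tokens = max_tokens

try:
    LiteLLM(streaming_model="gpt-3.5-turbo-1106", max_tokens=2000)  # the failing call shape
except TypeError:
    print("TypeError reproduced")  # missing required positional argument: 'model'

llm = LiteLLM(model="gpt-3.5-turbo-1106", max_tokens=2000)  # passing model= satisfies it
print(llm.model)
```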

Got error in ElevenLabs synthesis

Passed this in the agent config for the synthesizer:

   "synthesizer": {
          "audio_format": "wav",
          "provider": "elevenlabs",
          "stream": true,
          "provider_config": {
            "voice": "Rachel",
            "voice_id":"21m00Tcm4TlvDq8ikWAM",
            "model":"eleven_multilingual_v2"
          },

bolna-app-1 | 2024-05-08 06:13:36.224 INFO {elevenlabs_synthesizer} [generate] Cache hit and hence returning quickly Hi! I am Rachel from Pizza Square. Am I speaking to Michael from 345 Palm Square?
bolna-app-1 | 2024-05-08 06:13:36.224 INFO {utils} [convert_audio_to_wav] CONVERTING AUDIO TO WAV mp3
bolna-app-1 | Traceback (most recent call last):
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/bolna/synthesizer/elevenlabs_synthesizer.py", line 212, in generate
bolna-app-1 | wav_bytes = convert_audio_to_wav(audio, source_format="mp3")
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/bolna/helpers/utils.py", line 344, in convert_audio_to_wav
bolna-app-1 | audio = AudioSegment.from_file(io.BytesIO(audio_bytes), format=source_format)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/pydub/audio_segment.py", line 773, in from_file
bolna-app-1 | raise CouldntDecodeError(
bolna-app-1 | pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 1
bolna-app-1 |
bolna-app-1 | Output from ffmpeg/avlib:
bolna-app-1 |
bolna-app-1 | ffmpeg version 5.1.4-0+deb12u1 Copyright (c) 2000-2023 the FFmpeg developers
bolna-app-1 | built with gcc 12 (Debian 12.2.0-14)
bolna-app-1 | configuration: --prefix=/usr --extra-version=0+deb12u1 --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --arch=arm64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librist --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --disable-sndio --enable-libjxl --enable-pocketsphinx --enable-librsvg --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-libplacebo --enable-librav1e --enable-shared
bolna-app-1 | libavutil 57. 28.100 / 57. 28.100
bolna-app-1 | libavcodec 59. 37.100 / 59. 37.100
bolna-app-1 | libavformat 59. 27.100 / 59. 27.100
bolna-app-1 | libavdevice 59. 7.100 / 59. 7.100
bolna-app-1 | libavfilter 8. 44.100 / 8. 44.100
bolna-app-1 | libswscale 6. 7.100 / 6. 7.100
bolna-app-1 | libswresample 4. 7.100 / 4. 7.100
bolna-app-1 | libpostproc 56. 6.100 / 56. 6.100
bolna-app-1 | [cache @ 0xaaab06b20df0] Inner protocol failed to seekback end : -38
bolna-app-1 | Last message repeated 1 times
bolna-app-1 | [mp3float @ 0xaaab06b293a0] Header missing
bolna-app-1 | Last message repeated 1 times
bolna-app-1 | [cache @ 0xaaab06b20df0] Inner protocol failed to seekback end : -38
bolna-app-1 | Last message repeated 1 times
bolna-app-1 | [mp3 @ 0xaaab06b20910] Could not find codec parameters for stream 0 (Audio: mp3 (mp3float), 0 channels, fltp): unspecified frame size
bolna-app-1 | Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
bolna-app-1 | Input #0, mp3, from 'cache:pipe:0':
bolna-app-1 | Duration: N/A, start: 0.000000, bitrate: N/A
bolna-app-1 | Stream #0:0: Audio: mp3, 0 channels, fltp
bolna-app-1 | Stream mapping:
bolna-app-1 | Stream #0:0 -> #0:0 (mp3 (mp3float) -> pcm_s32le (native))
bolna-app-1 | Press [q] to stop, [?] for help
bolna-app-1 | [mp3float @ 0xaaab06b299d0] Header missing
bolna-app-1 | Error while decoding stream #0:0: Invalid data found when processing input
bolna-app-1 | [mp3float @ 0xaaab06b299d0] Header missing
bolna-app-1 | Error while decoding stream #0:0: Invalid data found when processing input
bolna-app-1 | [abuffer @ 0xaaab06b52740] Value inf for parameter 'time_base' out of range [0 - 2.14748e+09]
bolna-app-1 | Last message repeated 1 times
bolna-app-1 | [abuffer @ 0xaaab06b52740] Error setting option time_base to value 1/0.
bolna-app-1 | [graph_0_in_0_0 @ 0xaaab06b2c830] Error applying options to the filter.
bolna-app-1 | Error reinitializing filters!
bolna-app-1 | Error while filtering: Numerical result out of range
bolna-app-1 | Finishing stream 0:0 without any data written to it.
bolna-app-1 | [abuffer @ 0xaaab06b52740] Value inf for parameter 'time_base' out of range [0 - 2.14748e+09]
bolna-app-1 | Last message repeated 1 times
bolna-app-1 | [abuffer @ 0xaaab06b52740] Error setting option time_base to value 1/0.
bolna-app-1 | [graph_0_in_0_0 @ 0xaaab06b2c830] Error applying options to the filter.
bolna-app-1 | Error configuring filter graph
bolna-app-1 | [cache @ 0xaaab06b20df0] Statistics, cache hits:32785 cache misses:6
bolna-app-1 | Conversion failed!
bolna-app-1 |
bolna-app-1 | 2024-05-08 06:13:36.582 ERROR {elevenlabs_synthesizer} [generate] Error in eleven labs generate Decoding failed. ffmpeg returned error code: 1

Please add the Plivo lib to the local setup requirements.txt

  1. Add the Plivo library to requirements.txt:

    plivo==4.55.0
    
  2. Add the keepalive parameter to Plivo so the stream stays connected. For example:

    <Stream bidirectional="true" keepCallAlive="true">{}</Stream>
  3. I am still not getting any output. Please update the code.

Standardize OpenAI LLM response_format based on the model passed

The code should check that the passed model is compatible with the requested response_format.
Only compatible model/response_format combinations should be passed to the OpenAI SDK.

async def generate_stream(self, messages, classification_task=False, synthesize=True, request_json=False):
    response_format = self.get_response_format(request_json)
    answer, buffer = "", ""
    model = self.classification_model if classification_task is True else self.model
    async for chunk in await self.async_client.chat.completions.create(model=model, temperature=0.2,
                                                                       messages=messages, stream=True,
                                                                       max_tokens=self.max_tokens,
                                                                       response_format=response_format):

async def generate(self, messages, classification_task=False, stream=False, synthesize=True, request_json=False):
    response_format = self.get_response_format(request_json)
    model = self.classification_model if classification_task is True else self.model
    completion = await self.async_client.chat.completions.create(model=model, temperature=0.0, messages=messages,
                                                                 stream=False, response_format=response_format)
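The guard could be sketched roughly like this (a hypothetical helper, not the actual bolna code; the set of JSON-mode-capable models below is an assumption and would need to be kept up to date):

```python
# Hypothetical compatibility guard: only models with JSON-mode support
# should ever receive response_format={"type": "json_object"}.
JSON_MODE_MODELS = {"gpt-4-1106-preview", "gpt-4-0125-preview",
                    "gpt-3.5-turbo-1106", "gpt-3.5-turbo-0125"}

def get_response_format(model: str, request_json: bool) -> dict:
    """Return a JSON response_format only when the model supports JSON mode."""
    if request_json and model in JSON_MODE_MODELS:
        return {"type": "json_object"}
    # Fall back to plain text for models without JSON-mode support
    return {"type": "text"}
```

With this in place, a request for JSON against an unsupported model (e.g. gpt-3.5-turbo-16k) degrades to plain text instead of erroring inside the SDK.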

Conflicting dependency aiohttp

Unable to set up locally due to a conflicting dependency caused by the aiohttp library.

162.5 ERROR: Cannot install twilio because these package versions have conflicting dependencies.
162.5
162.5 The conflict is caused by:
162.5 aiohttp 3.9.3 depends on multidict<7.0 and >=4.5
162.5 aiohttp 3.9.2 depends on multidict<7.0 and >=4.5
162.5 aiohttp 3.9.1 depends on multidict<7.0 and >=4.5
162.5 aiohttp 3.9.0 depends on multidict<7.0 and >=4.5
162.5 aiohttp 3.8.6 depends on multidict<7.0 and >=4.5
162.5 aiohttp 3.8.5 depends on multidict<7.0 and >=4.5
162.5 aiohttp 3.8.4 depends on multidict<7.0 and >=4.5

TypeError: TwilioInputHandler.__init__() got an unexpected keyword argument 'conversation_recording'

bolna-app-1 | 2024-06-03 09:53:51.586 INFO {task_manager} [__init__] Connected via websocket
bolna-app-1 | 2024-06-03 09:53:51.586 INFO {task_manager} [__setup_input_handlers] Connected through dashboard None
bolna-app-1 | Traceback (most recent call last):
bolna-app-1 | File "/app/endpoints/agent_functions.py", line 159, in websocket_endpoint
bolna-app-1 | async for index, task_output in assistant_manager.run(local=True):
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/bolna/agent_manager/assistant_manager.py", line 41, in run
bolna-app-1 | task_manager = TaskManager(self.agent_config.get("agent_name", self.agent_config.get("assistant_name")),
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/bolna/agent_manager/task_manager.py", line 108, in __init__
bolna-app-1 | self.__setup_input_handlers(connected_through_dashboard, input_queue, self.should_record)
bolna-app-1 | File "/usr/local/lib/python3.10/site-packages/bolna/agent_manager/task_manager.py", line 368, in __setup_input_handlers
bolna-app-1 | self.tools["input"] = input_handler_class(**input_kwargs)
bolna-app-1 | TypeError: TwilioInputHandler.__init__() got an unexpected keyword argument 'conversation_recording'
bolna-app-1 | 2024-06-03 09:53:51.591 ERROR {agent_functions} [websocket_endpoint] error in executing TwilioInputHandler.__init__() got an unexpected keyword argument 'conversation_recording'
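One defensive pattern for this kind of version skew between task_manager and the installed handler is to accept the newer keyword with a default. This is a hedged sketch, not the real TwilioInputHandler signature:

```python
# Hypothetical sketch: tolerate a kwarg the handler version doesn't know about
# instead of raising TypeError, so an older handler doesn't crash the call.
class TwilioInputHandler:
    def __init__(self, *args, conversation_recording=None, **kwargs):
        # Store (or simply ignore) the new argument
        self.conversation_recording = conversation_recording
        # ... rest of the original initialisation would go here ...
```

The cleaner fix is of course to upgrade the installed bolna package so the handler signature matches what task_manager passes.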

Handle ConversationConfig optional case

Currently, one needs to pass ConversationConfig because of the way pydantic handles Optional attributes; otherwise it throws a 422 error.

bolna/bolna/models.py

Lines 163 to 167 in cf14ea3

class Task(BaseModel):
    tools_config: ToolsConfig
    toolchain: ToolsChainModel
    task_type: Optional[str] = "conversation"  # extraction, summarization, notification
    task_config: Optional[ConversationConfig]
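In pydantic, annotating a field as `Optional[X]` does not make it optional to supply — without an explicit default the field is still required, which is what produces the 422. A minimal sketch of the fix (field names taken from the snippet above; whether `None` is an acceptable default for `task_config` downstream is an assumption):

```python
from typing import Optional
from pydantic import BaseModel

class Task(BaseModel):
    tools_config: ToolsConfig
    toolchain: ToolsChainModel
    task_type: Optional[str] = "conversation"  # extraction, summarization, notification
    task_config: Optional[ConversationConfig] = None  # default added so the field can be omitted
```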

Eleven Labs : pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 1

Timestamp:
Date and Time: May 23, 2024, at 16:47:15.
Severity:
Error: This issue is classified as an error.
Component:
Component: Eleven Labs Synthesizer.
Context:
The process encountered an error while generating audio using the Eleven Labs Synthesizer.
Details:
Traceback:

File "/usr/local/lib/python3.10/site-packages/bolna/synthesizer/elevenlabs_synthesizer.py", line 225, in generate
wav_bytes = convert_audio_to_wav(audio, source_format="mp3")
File "/usr/local/lib/python3.10/site-packages/bolna/helpers/utils.py", line 347, in convert_audio_to_wav
audio = AudioSegment.from_file(io.BytesIO(audio_bytes), format=source_format)
...
pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 1
Error Message:

Decoding failed. ffmpeg returned error code: 1
Failure Point:

The failure occurred during the conversion of audio from MP3 format to WAV format.
The error arose while reading frame size, specifically at position 1026.
The inner protocol failed to seekback, returning -38.
Additional Information:

The cache statistics show no hits or misses, implying a lack of cache utilization.
The error occurred again at a different timestamp, with similar details.
Impact:
The error disrupts the audio generation process, hindering the expected outcome.
Root Cause:
The root cause of the error lies in the failure to decode the MP3 audio file using FFMPEG, leading to the inability to proceed with the audio conversion.
Steps to Reproduce:
Attempt to generate audio using the Eleven Labs Synthesizer with an MP3 input file.
Observe the error occurring during the conversion process.
Expected Behavior:
The audio conversion process should successfully convert the MP3 file to WAV format without encountering any decoding errors.
Actual Behavior:
Decoding fails with FFMPEG returning error code 1 during the audio conversion process.
Possible Solutions:
Investigate and update FFMPEG configuration.
Verify the integrity of input MP3 files.
Update dependencies (FFMPEG, Pydub) to the latest versions.
Environment:
Python 3.10
Operating System: macOS
Relevant library/dependency versions
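One cheap diagnostic before handing the bytes to pydub/ffmpeg is a sanity check that the API actually returned MP3 data. This is a hedged sketch using a hypothetical helper; a common cause of "Header missing" is the TTS API returning a JSON error body (e.g. on quota or auth failures) that then gets fed to the decoder as if it were audio:

```python
def looks_like_mp3(audio_bytes: bytes) -> bool:
    """Heuristic check: does this buffer start like an MP3 stream?"""
    if len(audio_bytes) < 3:
        return False
    if audio_bytes[:3] == b"ID3":  # ID3v2 tag at the start of the file
        return True
    # MPEG frame sync: 11 set bits at the start of a frame header
    return audio_bytes[0] == 0xFF and (audio_bytes[1] & 0xE0) == 0xE0
```

Logging the first few bytes of the response when this check fails would show immediately whether ffmpeg is being asked to decode an error payload rather than audio.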

Error receiving Twilio response from the synthesizer

This is our configuration:

{"assistant_name":"sample_agent","assistant_type":"other","tasks":[{"tools_config":{"llm_agent":{"streaming_model":"gpt-3.5-turbo-16k","classification_model":"gpt-4","max_tokens":100,"agent_flow_type":"streaming","use_fallback":false,"family":"openai","temperature":0.1,"request_json":false,"langchain_agent":false,"extraction_details":null,"extraction_json":null},"synthesizer":{"provider":"elevenlabs","provider_config":{"voice":"Myra","voice_id":"hNJixpY16PhIcW2MXauQ","model":""},"stream":true,"buffer_size":40,"audio_format":"mp3"},"transcriber":{"model":"deepgram","language":"en","stream":true,"sampling_rate":16000,"encoding":"linear16","endpointing":400},"input":{"provider":"twilio","format":"pcm"},"output":{"provider":"twilio","format":"pcm"},"api_tools":null},"toolchain":{"execution":"parallel","pipelines":[["transcriber","llm","synthesizer"]]},"task_type":"conversation"}]}

I am able to receive the response from the synthesizer; however, we get noise in the Twilio response. We tried both stream:true and stream:false for the synthesizer. We have stored the synthesizer output in MP3 format and the content is fine, but the response is not received correctly by Twilio.

(Not an issue) Is tunneling not available in ngrok free version

This is NOT an issue.
I could not find 'discussions' section in this repo, so asking the community here.

After doing the local setup, I created a new ngrok account and provided the authtoken.
However, I'm getting an 'Upgrade your account' error.

Is this expected, or did I miss a setup step?

Getting stuck in a loop of "Last transmitted timestamp is simply 0 and hence continuing" when calling the /call API

bolna-app-1   | future: <Task finished name='Task-33' coro=<write_request_logs() done, defined at /usr/local/lib/python3.10/site-packages/bolna/helpers/utils.py:414> exception=TypeError("object of type 'NoneType' has no len()") created at /usr/local/lib/python3.10/asyncio/tasks.py:337>
bolna-app-1   | source_traceback: Object created at (most recent call last):
bolna-app-1   |   File "/usr/local/bin/uvicorn", line 8, in <module>
bolna-app-1   |     sys.exit(main())
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
bolna-app-1   |     return self.main(*args, **kwargs)
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1078, in main
bolna-app-1   |     rv = self.invoke(ctx)
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
bolna-app-1   |     return ctx.invoke(self.callback, **ctx.params)
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/click/core.py", line 783, in invoke
bolna-app-1   |     return __callback(*args, **kwargs)
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 410, in main
bolna-app-1   |     run(
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 578, in run
bolna-app-1   |     server.run()
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
bolna-app-1   |     return asyncio.run(self.serve(sockets=sockets))
bolna-app-1   |   File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
bolna-app-1   |     return loop.run_until_complete(main)
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/bolna/agent_manager/task_manager.py", line 1341, in __first_message
bolna-app-1   |     await self._synthesize(create_ws_data_packet(self.kwargs.get('agent_welcome_message', None), meta_info= meta_info))
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/bolna/agent_manager/task_manager.py", line 1172, in _synthesize
bolna-app-1   |     self.__convert_to_request_log(message = text, meta_info= meta_info, component="synthesizer", direction="response", model = self.synthesizer_provider, is_cached= True, engine=self.tools['synthesizer'].get_engine())
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/bolna/agent_manager/task_manager.py", line 626, in __convert_to_request_log
bolna-app-1   |     asyncio.create_task(write_request_logs(log, self.run_id))
bolna-app-1   |   File "/usr/local/lib/python3.10/asyncio/tasks.py", line 337, in create_task
bolna-app-1   |     task = loop.create_task(coro)
bolna-app-1   | Traceback (most recent call last):
bolna-app-1   |   File "/usr/local/lib/python3.10/site-packages/bolna/helpers/utils.py", line 423, in write_request_logs
bolna-app-1   |     component_details = [message['data'], None, None,len(message['data']), message['cached'], None, message['engine']]
bolna-app-1   | TypeError: object of type 'NoneType' has no len()
bolna-app-1   | 2024-05-19 07:00:31.360 INFO {task_manager} [__check_for_completion] Last transmitted timestamp is simply 0 and hence continuing
bolna-app-1   | 2024-05-19 07:00:33.361 INFO {task_manager} [__check_for_completion] Last transmitted timestamp is simply 0 and hence continuing
bolna-app-1   | 2024-05-19 07:00:35.362 INFO {task_manager} [__check_for_completion] Last transmitted timestamp is simply 0 and hence continuing
bolna-app-1   | 2024-05-19 07:00:37.363 INFO {task_manager} [__check_for_completion] Last transmitted timestamp is simply 0 and hence continuing
bolna-app-1   | 2024-05-19 07:00:38.771 INFO {telephony} [_listen] call stopping
bolna-app-1   | 2024-05-19 07:00:39.363 INFO {task_manager} [__check_for_completion] Last transmitted timestamp is simply 0 and hence continuing
bolna-app-1   | 2024-05-19 07:00:39.364 ERROR {deepgram_transcriber} [transcribe] Error in transcribe:
bolna-app-1   | 2024-05-19 07:00:41.362 INFO {task_manager} [__check_for_completion] Last transmitted timestamp is simply 0 and hence continuing
Gracefully stopping... (press Ctrl+C again to force)
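The traceback shows write_request_logs calling len(message['data']) while the welcome message is None (agent_welcome_message was never set in kwargs). A minimal guard could look like this — a hedged sketch, with the row layout taken from the component_details line in the traceback rather than from the actual bolna source:

```python
def build_component_details(message: dict) -> list:
    """Build the log row without assuming the payload is a non-None string."""
    data = message.get("data")
    data_len = len(data) if data is not None else 0  # avoid TypeError on None
    return [data, None, None, data_len,
            message.get("cached"), None, message.get("engine")]
```

Whether the better fix is guarding here or ensuring agent_welcome_message is always populated before __first_message runs is a design call for the maintainers.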

No Audio in Call

Hello, I am not able to hear the AI voice while running the "/call" API, but I am able to see the AI response in the terminal logs. Can you please let me know what the issue could be? Thanks.
