traceloop / openllmetry

Open-source observability for your LLM application, based on OpenTelemetry

Home Page: https://www.traceloop.com/openllmetry

License: Apache License 2.0

Python 99.97% Shell 0.03%
llmops observability open-telemetry metrics monitoring opentelemetry datascience ml model-monitoring opentelemetry-python

openllmetry's Introduction

Open-source observability for your LLM application

πŸŽ‰ New: Our semantic conventions are now part of OpenTelemetry! Join the discussion and help us shape the future of LLM observability.

Looking for the JS/TS version? Check out OpenLLMetry-JS.

OpenLLMetry is a set of extensions built on top of OpenTelemetry that gives you complete observability over your LLM application. Because it uses OpenTelemetry under the hood, it can be connected to your existing observability solutions - Datadog, Honeycomb, and others.

It's built and maintained by Traceloop under the Apache 2.0 license.

The repo contains standard OpenTelemetry instrumentations for LLM providers and Vector DBs, as well as a Traceloop SDK that makes it easy to get started with OpenLLMetry, while still outputting standard OpenTelemetry data that can be connected to your observability stack. If you already have OpenTelemetry instrumented, you can just add any of our instrumentations directly.
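
For example, if you already manage your own OpenTelemetry setup, a single instrumentation can be wired in directly. A minimal sketch, assuming the opentelemetry-instrumentation-openai package from this repo and a console exporter for illustration:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Standard OpenTelemetry setup; swap the exporter for your own backend.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# After this call, OpenAI requests emit standard OpenTelemetry spans.
OpenAIInstrumentor().instrument()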

πŸš€ Getting Started

The easiest way to get started is to use our SDK. For a complete guide, go to our docs.

Install the SDK:

pip install traceloop-sdk

Then, to start instrumenting your code, just add this line to your code:

from traceloop.sdk import Traceloop

Traceloop.init()

That's it. You're now tracing your code with OpenLLMetry! If you're running this locally, you may want to disable batch sending, so you can see the traces immediately:

Traceloop.init(disable_batch=True)

⏫ Supported (and tested) destinations

See our docs for instructions on connecting to each one.

πŸͺ— What do we instrument?

OpenLLMetry can instrument everything that OpenTelemetry already instruments - so things like your DB, API calls, and more. On top of that, we built a set of custom extensions that instrument things like your calls to OpenAI or Anthropic, or your Vector DB like Chroma, Pinecone, Qdrant or Weaviate.

LLM Providers

  • βœ… OpenAI / Azure OpenAI
  • βœ… Anthropic
  • βœ… Cohere
  • βœ… HuggingFace
  • βœ… Bedrock (AWS)
  • βœ… Replicate
  • βœ… Vertex AI (GCP)
  • βœ… IBM Watsonx AI

Vector DBs

  • βœ… Chroma
  • βœ… Pinecone
  • βœ… Qdrant
  • βœ… Weaviate
  • ⏳ Milvus

Frameworks

🌱 Contributing

Whether it's big or small, we love contributions ❀️ Check out our guide to see how to get started.

Not sure where to get started? You can:

πŸ’š Community & Support

  • Slack (For live discussion with the community and the Traceloop team)
  • GitHub Discussions (For help with building and deeper conversations about features)
  • GitHub Issues (For any bugs and errors you encounter using OpenLLMetry)
  • Twitter (Get news fast)

πŸ™ Special Thanks

To @patrickdebois, who suggested the great name we're now using for this repo!

openllmetry's People

Contributors

5war00p, alex-feel, anjor, anush008, aromatichydrocarbon, ashishlakraa, cmpxchg16, dependabot[bot], evisong, galkleinman, github-actions[bot], gyliu513, hanchchch, huang-cn, humbertzhang, jinsongo, kartik1397, lazyplatypus, mayankagarwals, mizba-anjum, najork, nirga, pamelafox, paolorechia, reiyw, spullara, tomer-friedman, tonybaloney, wolfgangb33r


openllmetry's Issues

πŸ› Bug Report: UI: Update time format to MM/DD/YY

Which component is this bug for?

Anthropic Instrumentation

πŸ“œ Description

Can we update the date format to MM/DD/YY? The current format is DD/MM/YY.

πŸ‘Ÿ Reproduction steps

Check filter on the UI page

πŸ‘ Expected behavior

Can we update the date format to MM/DD/YY? The current format is DD/MM/YY.

πŸ‘Ž Actual Behavior with Screenshots

Screenshot 2024-02-22 at 9 38 34β€―AM

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: Async and streaming Anthropic is not supported

Which component is this bug for?

Anthropic Instrumentation

πŸ“œ Description

Anthropic provides an async API as well as a sync API:
https://github.com/anthropics/anthropic-sdk-python?tab=readme-ov-file#async-usage

Both should be supported as part of the instrumentation

πŸ‘Ÿ Reproduction steps

πŸ‘ Expected behavior

πŸ‘Ž Actual Behavior with Screenshots

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

This can be supported similarly to our OpenAI instrumentation.

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: Add more properties to Span Attributes for OpenAI instrumentation

Which component is this feature for?

OpenAI Instrumentation

πŸ”– Feature description

The OpenAI instrumentation is great, but is missing some span attributes compared with the httpx instrumentor that would be useful to capture. I'd like to know what the client attributes were for the OpenAI client like the host and the deployment name. Those are in SpanAttributes.HTTP_METHOD and SpanAttributes.HTTP_URL.

🎀 Why is this feature needed ?

I'm using different OpenAI backends (Azure OpenAI) and want to capture the deployment name, the host name and other information that is in the client.

✌️ How do you aim to achieve this?

Add some additional span attributes to the trace that capture the properties of the OpenAI client object.
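
A rough sketch of what this could look like inside the instrumentation wrapper (a hypothetical helper; on the newer OpenAI clients, the host and Azure deployment live on the client's base_url):

from opentelemetry.semconv.trace import SpanAttributes

def _set_client_attributes(span, client):
    # Hypothetical helper: copy host/deployment info from the client onto the span.
    base_url = getattr(client, "base_url", None)
    if base_url is not None:
        span.set_attribute(SpanAttributes.HTTP_URL, str(base_url))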

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: embedding vectors in attributes of vector DB calls

Which component is this feature for?

All Packages

πŸ”– Feature description

We need to extract and report embedding vectors that are sent and returned for query calls to vector DBs like Chroma, Weaviate, Pinecone, etc.

🎀 Why is this feature needed ?

More visibility into what's happening when calling a vector DB

✌️ How do you aim to achieve this?

This isn't straightforward - these might be large, so sending them as attributes might not make sense and we'll need to think of the proper OTEL format to use (logs? span events?)
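
One possible shape, sketched with span events so the span's own attributes stay small (event and attribute names here are placeholders, not agreed conventions):

def _record_query_embeddings(span, embeddings):
    # Each vector is attached to its own span event; OTEL attributes allow
    # sequences of primitive values, so a tuple of floats works here.
    for i, vector in enumerate(embeddings or []):
        span.add_event(
            "db.query.embedding",
            attributes={
                "db.query.embedding.index": i,
                "db.query.embedding.vector": tuple(float(v) for v in vector),
            },
        )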

πŸ”„οΈ Additional Information

Consult on slack before working on it.

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: LanceDB Integration

Which component is this feature for?

Anthropic Instrumentation

πŸ”– Feature description

Integrating LanceDB

🎀 Why is this feature needed ?

Flexibility to switch between VectorDBs

✌️ How do you aim to achieve this?

In addition to other VectorDBs, LanceDB can be integrated in the same way.

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: Weaviate Instrumentation

Which component is this feature for?

All Packages

πŸ”– Feature description

Instrument calls to Weaviate, including adding attributes, similarly to our Chroma instrumentation. The instrumentation should support all types of calls - streaming, non streaming, async, etc.

🎀 Why is this feature needed?

Completeness of OpenLLMetry

✌️ How do you aim to achieve this?

Similar to other instrumentations we have in this repo.

πŸ”„οΈ Additional Information

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: service name is not reported correctly

Which component is this bug for?

Traceloop SDK

πŸ“œ Description

Spans are shown with unknown_service instead of the right service name as initialized in Traceloop.init()

πŸ‘Ÿ Reproduction steps

  1. Call Traceloop.init("some_service_name")
  2. View the spans

πŸ‘ Expected behavior

some_service_name should be shown

πŸ‘Ž Actual Behavior with Screenshots

image (3)

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: More Declarative Instrumentation Wrappers

Which component is this feature for?

All Packages

πŸ”– Feature description

We should improve our instrumentation wrappers to reduce the required amount of boilerplate per project.

The end-goal is to drive the instrumentations towards a more declarative approach, as much as possible. There will always be exceptions, so the solution also needs to be flexible with this.

🎀 Why is this feature needed ?

Less maintenance overhead.

✌️ How do you aim to achieve this?

We'll need to write some custom tooling for this.

In the weaviate instrumentation PR, I wrote some simple classes to reduce boilerplate, for instance:

class _BatchInstrumentor(_Instrumentor):
    namespace = "db.weaviate.batch"
    mapped_attributes = {
        "add_data_object": [
            "data_object",
            "class_name",
            "uuid",
            "vector",
            "tenant",
        ],
        "flush": [],
    }

We could try and take this further. Python offers an amazing introspection module in the standard library (inspect), so in theory just by passing the function name we should be able to automatically extract all arguments. That should be the default way to instrument methods.
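
A small sketch of that idea, using only the standard library:

import inspect

def extract_call_arguments(func, args, kwargs):
    # Bind positional and keyword arguments to the wrapped function's parameter
    # names, so every argument can be emitted as a span attribute automatically.
    bound = inspect.signature(func).bind_partial(*args, **kwargs)
    bound.apply_defaults()
    return dict(bound.arguments)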

Using such resources, we can then simply map in a declarative way most of the standard objects / functions, like we already do:

WRAPPED_METHODS = [
    {
        "package": chromadb.api.segment,
        "object": "SegmentAPI",
        "method": "_query",
        "span_name": "chroma.query.segment._query"
    },
    {
        "package": chromadb,
        "object": "Collection",
        "method": "add",
        "span_name": "chroma.add"
    },

But without the need to explicitly add calls to set span attributes, as our instrumentation tooling should automatically take care of that:

def _set_add_attributes(span, kwargs):
    _set_span_attribute(span, "db.chroma.add.ids_count", count_or_none(kwargs.get("ids")))
    _set_span_attribute(span, "db.chroma.add.embeddings_count", count_or_none(kwargs.get("embeddings")))
    _set_span_attribute(span, "db.chroma.add.metadatas_count", count_or_none(kwargs.get("metadatas")))
    _set_span_attribute(span, "db.chroma.add.documents_count", count_or_none(kwargs.get("documents")))

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸ› Bug Report: missing traces when running on Colab

Which component is this bug for?

Traceloop SDK

πŸ“œ Description

Combined with the Haystack implementation, traces sometimes do not appear in the dashboard.

πŸ‘Ÿ Reproduction steps

This is a simple example that, when run in a Colab, does not always produce traces:

import os
from haystack.nodes import PromptNode, PromptTemplate, AnswerParser
from haystack.pipelines import Pipeline
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

Traceloop.init(app_name="haystack_app")

prompt = PromptTemplate(
    prompt="Tell me a joke about {query}\n",
    output_parser=AnswerParser(),
)

prompt_node = PromptNode(
    model_name_or_path="gpt-4",
    api_key=os.getenv("OPENAI_API_KEY"),
    default_prompt_template=prompt,
)

pipeline = Pipeline()
pipeline.add_node(component=prompt_node, name="PromptNode", inputs=["Query"])

query = "OpenTelemetry"
result = pipeline.run(query)
print(result["answers"][0].answer)

πŸ‘ Expected behavior

Traces should be produced

πŸ‘Ž Actual Behavior with Screenshots

See deepset-ai/haystack-integrations#51 (comment)

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

This doesn't happen if we explicitly annotate methods with @workflow. The main difference is that this annotation forces a flush of traces at the end of the workflow. We did recently add an auto-flush on app shutdown - but this probably never gets called in notebooks.
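
For reference, the workaround looks roughly like this (reusing the pipeline from the reproduction steps above; the workflow name is arbitrary):

from traceloop.sdk.decorators import workflow

@workflow(name="haystack_joke")
def run_pipeline(query):
    # Wrapping the call in a @workflow-decorated function forces a flush of the
    # pending spans when it returns, which is why traces then show up reliably.
    return pipeline.run(query)

result = run_pipeline("OpenTelemetry")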

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸ› Bug Report: Errors are not logged

Which component is this bug for?

All Packages

πŸ“œ Description

If an HTTP error is returned from a foundation model API, we don't properly log it as a failed span.

πŸ‘Ÿ Reproduction steps

N/A

πŸ‘ Expected behavior

N/A

πŸ‘Ž Actual Behavior with Screenshots

N/A

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: re-write LlamaIndex instrumentation to use LlamaIndex `CallbackManager`

Which component is this feature for?

LlamaIndex Instrumentation

πŸ”– Feature description

Right now, we monkey-patch classes and methods in LlamaIndex, which requires endless work and constant maintenance. LlamaIndex has a system for callbacks that can potentially be used to create/end spans without being too coupled with the framework's inner structure.

🎀 Why is this feature needed ?

Support LlamaIndex entirely and be future-proof to internal API changes

✌️ How do you aim to achieve this?

Look into LlamaIndex callback_manager and how other frameworks are using it.
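
A rough sketch of the handler shape, assuming the legacy llama_index callbacks API (import paths and signatures vary between versions, so treat this as an outline only):

from llama_index.callbacks.base import BaseCallbackHandler

class OpenTelemetrySpanHandler(BaseCallbackHandler):
    def __init__(self):
        super().__init__(event_starts_to_ignore=[], event_ends_to_ignore=[])
        self._spans = {}

    def on_event_start(self, event_type, payload=None, event_id="", parent_id="", **kwargs):
        # Open a span keyed by event_id here (omitted in this sketch).
        return event_id

    def on_event_end(self, event_type, payload=None, event_id="", **kwargs):
        # Close the span keyed by event_id here.
        pass

    def start_trace(self, trace_id=None):
        pass

    def end_trace(self, trace_id=None, trace_map=None):
        pass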

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: VertexAI Instrumentation

Which component is this feature for?

All Packages

πŸ”– Feature description

Instrument calls to Google's Vertex AI, including adding attributes for input parameters, model, etc. - similarly to our Anthropic instrumentation. The instrumentation should support all types of calls - streaming, non streaming, async, etc.

This should specifically work with Google's new Gemini model.

🎀 Why is this feature needed ?

Completeness of OpenLLMetry

✌️ How do you aim to achieve this?

Similar to other instrumentations we have in this repo.

πŸ”„οΈ Additional Information

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: re-write Langchain instrumentation to use Langchain Callbacks

Which component is this feature for?

Langchain Instrumentation

πŸ”– Feature description

Right now, we monkey-patch classes and methods in Langchain, which requires endless work and constant maintenance. Langchain has a system for callbacks that can potentially be used to create/end spans without being too coupled with the framework's inner structure.

🎀 Why is this feature needed ?

Support Langchain entirely and be future-proof to internal API changes

✌️ How do you aim to achieve this?

Look into Langchain callbacks and how other frameworks are using it.
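
A rough sketch of the handler shape, using Langchain's BaseCallbackHandler (span creation is omitted; keying spans by run_id is an assumption, not settled design):

from langchain.callbacks.base import BaseCallbackHandler

class OpenTelemetryCallbackHandler(BaseCallbackHandler):
    def on_chain_start(self, serialized, inputs, **kwargs):
        # Open a "langchain.chain" span here, keyed by kwargs.get("run_id").
        pass

    def on_chain_end(self, outputs, **kwargs):
        # Close the span for kwargs.get("run_id").
        pass

    def on_llm_start(self, serialized, prompts, **kwargs):
        pass

    def on_llm_end(self, response, **kwargs):
        pass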

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: Pydantic v2 upgrade

Which component is this feature for?

All Packages

πŸ”– Feature description

Could you please update pydantic to v2?

🎀 Why is this feature needed ?

Version resolution fails if any of the packages has a constraint for >2, e.g. pydantic-settings.

Because no versions of traceloop-sdk match >0.3.0,<0.4.0
and traceloop-sdk (0.3.0) depends on pydantic (>=1.10.12,<2.0.0), traceloop-sdk (>=0.3.0,<0.4.0) requires pydantic (>=1.10.12,<2.0.0).
And because pydantic-settings (2.0.3) depends on pydantic (>=2.0.1)
and no versions of pydantic-settings match >2.0.3,<3.0.0, traceloop-sdk (>=0.3.0,<0.4.0) is incompatible with pydantic-settings (>=2.0.3,<3.0.0).
So, because theydo-journey-ai depends on both pydantic-settings (^2.0.3) and traceloop-sdk (^0.3.0), version solving failed.

✌️ How do you aim to achieve this?

It'll need some work probably.

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸ› Bug Report: Traces showing up as dependencies in Azure Application Insights

Which component is this bug for?

Traceloop SDK

πŸ“œ Description

I'm not sure if this bug report should be in the docs repo or here, but I am trying to send traces to Azure Application Insights using Traceloop and getting different results than what is documented.

πŸ‘Ÿ Reproduction steps

Run the sample code from https://www.traceloop.com/docs/openllmetry/integrations/azure

πŸ‘ Expected behavior

I expected to see a trace but instead am seeing the call show up as a dependency in Application Insights

The screenshot from the docs shows there are 4 traces and there seems to be an extra llm_span that I don't have on my end.

πŸ‘Ž Actual Behavior with Screenshots

When I ran the code from the docs, the traces came through as Dependencies. From the screenshot below, you can see that Traces are 0.
Screenshot 2024-02-06 at 10 54 24β€―AM

πŸ€– Python Version

3.11.5

πŸ“ƒ Provide any additional context for the Bug.

In addition, the code from the documentation needs some updates:

  1. Missing os import
  2. response['choices'][0]['message']['content'] should be updated to response.choices[0].message.content.

note I would normally just submit a PR with these changes but since I am running into different behavior than what is documented, I chose to open an issue instead.

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: Make Instrumentations Robust to How Users Call Functions (args/kwargs)

Which component is this feature for?

All Packages

πŸ”– Feature description

I've noticed some instrumentations only check for the kwargs passed to the method. For instance, in ChromaDB:

def _set_add_attributes(span, kwargs):
    _set_span_attribute(span, "db.chroma.add.ids_count", count_or_none(kwargs.get("ids")))
    _set_span_attribute(span, "db.chroma.add.embeddings_count", count_or_none(kwargs.get("embeddings")))
    _set_span_attribute(span, "db.chroma.add.metadatas_count", count_or_none(kwargs.get("metadatas")))
    _set_span_attribute(span, "db.chroma.add.documents_count", count_or_none(kwargs.get("documents")))

This has the implication that, depending on how the user calls the function, some attributes might be missing from the trace.

For instance:

chromadb.some_function("1", "books")

Would not instrument the arguments, while calling:

chromadb.some_function(id="1", collection="books")

Would successfully include attributes id and collection in the trace.

🎀 Why is this feature needed ?

Make instrumentations more robust / reliable / predictable. We don't want the behavior to change depending on how the user calls a method.

✌️ How do you aim to achieve this?

We should bake this into the instrumentation tooling, so that it is always handled correctly. We need a code design proposal to be drafted and experimented with. This is what I've implemented in the weaviate instrumentation to avoid the issue; it could serve as a starting point:

import json
import logging

logger = logging.getLogger(__name__)


class ArgsGetter:
    """Helper to make sure we get arguments regardless
    of whether they were passed as args or as kwargs.
    Additionally, it serializes values to a JSON string.
    """

    def __init__(self, args, kwargs):
        self.args = args
        self.kwargs = kwargs

    def __call__(self, index, name):
        # Prefer the positional argument; fall back to the keyword argument.
        try:
            obj = self.args[index]
        except IndexError:
            obj = self.kwargs.get(name)

        if obj:
            try:
                return json.dumps(obj)
            except TypeError:
                # json.dumps raises TypeError for non-serializable objects.
                logger.warning(
                    "Failed to serialize argument (%s) (%s) to JSON", index, name
                )


class _Instrumentor:
    def map_attributes(self, span, method_name, attributes, args, kwargs):
        getter = ArgsGetter(args, kwargs)
        for idx, attribute in enumerate(attributes):
            _set_span_attribute(
                span,
                f"{self.namespace}.{method_name}.{attribute}",
                getter(idx, attribute),
            )

    def instrument(self, method_name, span, args, kwargs):
        attributes = self.mapped_attributes.get(method_name)
        if attributes:
            self.map_attributes(span, method_name, attributes, args, kwargs)

πŸ”„οΈ Additional Information

Note: this could also be labeled as a bug, but I think the goal here is not to simply fix cases where it might misbehave; it's to prevent it from even happening again in the future.

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: Replicate Instrumentation

Which component is this feature for?

All Packages

πŸ”– Feature description

Instrument calls to Replicate, including adding attributes for input parameters, model, etc. - similarly to our Anthropic instrumentation. The instrumentation should support all types of calls - streaming, non streaming, async, etc.

🎀 Why is this feature needed ?

Completeness of OpenLLMetry

✌️ How do you aim to achieve this?

Similar to other instrumentations we have in this repo.

πŸ”„οΈ Additional Information

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: error logged when using Chroma

Which component is this bug for?

Chromadb Instrumentation

πŸ“œ Description

An error is logged when running a query with Chroma.

πŸ‘Ÿ Reproduction steps

Use chromadb = "^0.4.22", and run the llama_index_chroma_app.py from the sample_app.

πŸ‘ Expected behavior

No error should be thrown.

πŸ‘Ž Actual Behavior with Screenshots

Screenshot 2024-02-02 at 17 58 09

πŸ€– Python Version

3.9.5

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: disabled tests for GCP / VertexAI

Which component is this bug for?

VertexAI Instrumentation

πŸ“œ Description

Following #413, I had to disable the VertexAI tests since vcr.py doesn't support GRPC. We need to figure out how to mock those requests to avoid making actual calls to GRPC.

πŸ‘Ÿ Reproduction steps

N/A

πŸ‘ Expected behavior

N/A

πŸ‘Ž Actual Behavior with Screenshots

N/A

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: Add IBM Watsonx Instrumentation

Which component is this feature for?

Traceloop SDK

πŸ”– Feature description

Enable Traceloop to collect metrics and tracing for IBM Watsonx; we have a prototype at https://github.com/gyliu513/langX101/tree/main/otel/opentelemetry-instrumentation-watsonx

🎀 Why is this feature needed ?

Before the OTEL community reaches an agreement on the AI Semantic Conventions at open-telemetry/semantic-conventions#639, this repo is the best place to host most of the instrumentations for different LLM providers.

✌️ How do you aim to achieve this?

I will create a PR based on https://github.com/gyliu513/langX101/tree/main/otel/opentelemetry-instrumentation-watsonx

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: support embedding APIs for OpenAI

Which component is this feature for?

OpenAI Instrumentation

πŸ”– Feature description

Support the OpenAI Embeddings API. Note that since the outputs are extremely big, we probably shouldn't add them to a span - just the metadata.

🎀 Why is this feature needed ?

Complete visibility into usage of LLMs

✌️ How do you aim to achieve this?

Similar to current instrumentation for chat / completion APIs

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: Traceloop.init() will init all LLM Providers

Which component is this bug for?

Anthropic Instrumentation

πŸ“œ Description

If there is something wrong with an LLM provider's instrumentation code, Traceloop.init() will fail. One example: I want to run a Watsonx service, but the LlamaIndex instrumentation failed to initialize, so I cannot run my Watsonx service.

πŸ‘Ÿ Reproduction steps

(py311) gyliu@Guangyas-MacBook-Air openllmetry % /Users/gyliu/py311/bin/python /Users/gyliu/go/src/github.com/traceloop/openllmetry/
packages/sample-app/sample_app/watsonx_generate.py
Traceloop syncing configuration and prompts
Traceloop exporting traces to https://api.traceloop.com authenticating with bearer token

/Users/gyliu/py311/lib/python3.11/site-packages/langchain/chat_models/__init__.py:31: LangChainDeprecationWarning: Importing chat models from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.chat_models import ChatAnyscale`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/Users/gyliu/py311/lib/python3.11/site-packages/langchain/chat_models/__init__.py:31: LangChainDeprecationWarning: Importing chat models from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.chat_models import ChatOpenAI`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/Users/gyliu/py311/lib/python3.11/site-packages/langchain/embeddings/__init__.py:29: LangChainDeprecationWarning: Importing embeddings from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.embeddings import HuggingFaceBgeEmbeddings`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/Users/gyliu/py311/lib/python3.11/site-packages/langchain/embeddings/__init__.py:29: LangChainDeprecationWarning: Importing embeddings from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.embeddings import HuggingFaceEmbeddings`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/Users/gyliu/py311/lib/python3.11/site-packages/langchain/llms/__init__.py:548: LangChainDeprecationWarning: Importing LLMs from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.llms import AI21`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/Users/gyliu/py311/lib/python3.11/site-packages/langchain/llms/__init__.py:548: LangChainDeprecationWarning: Importing LLMs from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.llms import Cohere`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/Users/gyliu/py311/lib/python3.11/site-packages/langchain/llms/__init__.py:548: LangChainDeprecationWarning: Importing LLMs from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.llms import FakeListLLM`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/Users/gyliu/py311/lib/python3.11/site-packages/langchain/llms/__init__.py:548: LangChainDeprecationWarning: Importing LLMs from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.llms import OpenAI`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
Traceback (most recent call last):
  File "/Users/gyliu/go/src/github.com/traceloop/openllmetry/packages/sample-app/sample_app/watsonx_generate.py", line 9, in <module>
    Traceloop.init()
  File "/Users/gyliu/py311/lib/python3.11/site-packages/traceloop/sdk/__init__.py", line 152, in init
    Traceloop.__tracer_wrapper = TracerWrapper(
                                 ^^^^^^^^^^^^^^
  File "/Users/gyliu/py311/lib/python3.11/site-packages/traceloop/sdk/tracing/tracing.py", line 111, in __new__
    init_instrumentations()
  File "/Users/gyliu/py311/lib/python3.11/site-packages/traceloop/sdk/tracing/tracing.py", line 266, in init_instrumentations
    init_llama_index_instrumentor()
  File "/Users/gyliu/py311/lib/python3.11/site-packages/traceloop/sdk/tracing/tracing.py", line 360, in init_llama_index_instrumentor
    from opentelemetry.instrumentation.llamaindex import LlamaIndexInstrumentor
  File "/Users/gyliu/py311/lib/python3.11/site-packages/opentelemetry/instrumentation/llamaindex/__init__.py", line 14, in <module>
    from opentelemetry.instrumentation.llamaindex.custom_llm_instrumentor import CustomLLMInstrumentor
  File "/Users/gyliu/py311/lib/python3.11/site-packages/opentelemetry/instrumentation/llamaindex/custom_llm_instrumentor.py", line 14, in <module>
    from llama_index.llms.custom import CustomLLM
  File "/Users/gyliu/py311/lib/python3.11/site-packages/llama_index/__init__.py", line 17, in <module>
    from llama_index.embeddings.langchain import LangchainEmbedding
  File "/Users/gyliu/py311/lib/python3.11/site-packages/llama_index/embeddings/__init__.py", line 18, in <module>
    from llama_index.embeddings.huggingface import (
  File "/Users/gyliu/py311/lib/python3.11/site-packages/llama_index/embeddings/huggingface.py", line 17, in <module>
    from llama_index.llms.huggingface import HuggingFaceInferenceAPI
  File "/Users/gyliu/py311/lib/python3.11/site-packages/llama_index/llms/__init__.py", line 25, in <module>
    from llama_index.llms.litellm import LiteLLM
  File "/Users/gyliu/py311/lib/python3.11/site-packages/llama_index/llms/litellm.py", line 28, in <module>
    from llama_index.llms.litellm_utils import (
  File "/Users/gyliu/py311/lib/python3.11/site-packages/llama_index/llms/litellm_utils.py", line 4, in <module>
    from openai.openai_object import OpenAIObject
ModuleNotFoundError: No module named 'openai.openai_object'

πŸ‘ Expected behavior

Allow init to initialize only a specified LLM provider.

πŸ‘Ž Actual Behavior with Screenshots

as Reproduction steps

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: simple unit tests with mock responses

Which component is this feature for?

All Packages

πŸ”– Feature description

Right now our entire testing suite is more like "integration tests" - we make actual calls to models and check the created spans. This is way too complex and flaky and we should have simple unit tests that we can run in PRs to test that we didn't break each instrumentation. These should be part of each instrumentation package by itself, where we mock the responses from the likes of OpenAI instead of making actual calls.
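
A minimal sketch of what such a unit test could look like, capturing spans in memory; the instrumentor wiring and the mocked client call are placeholders for whichever package is under test:

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter

def test_instrumentation_emits_spans():
    # Capture spans in memory instead of exporting them anywhere.
    exporter = InMemorySpanExporter()
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(exporter))

    # Wire the instrumentor under test to this provider, e.g.:
    # OpenAIInstrumentor().instrument(tracer_provider=provider)

    # Mock the provider's client call (e.g. with unittest.mock.patch) so no
    # network request is made, then run the code under test.

    spans = exporter.get_finished_spans()
    assert all(span.attributes is not None for span in spans)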

🎀 Why is this feature needed ?

Lighter, less flaky tests.

✌️ How do you aim to achieve this?

See feature description

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: Validate if watsonx can run with python 3.8

Which component is this bug for?

Watsonx Instrumentation

πŸ“œ Description

This is a follow-up to #526; we need to check if Watsonx supports Python 3.8.

πŸ‘Ÿ Reproduction steps

Check #526 for detail

πŸ‘ Expected behavior

Watsonx needs to work with Python 3.8.

πŸ‘Ž Actual Behavior with Screenshots

Check #526

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: pydantic v1 is not supported

Which component is this bug for?

Traceloop SDK

πŸ“œ Description

The Traceloop SDK was upgraded to use pydantic v2, but this causes users with old pydantic versions to not be able to use our SDK. Some frameworks, like Haystack v1, also rely on pydantic v1 and so cannot be used together with our SDK. We need to support both pydantic v1 and v2.

πŸ‘Ÿ Reproduction steps

Use Traceloop SDK with pydantic v1 -> version conflict

πŸ‘ Expected behavior

pydantic v1 should also be supported

πŸ‘Ž Actual Behavior with Screenshots

N/A

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

This can be solved similarly to how the Anthropic SDK does it.

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: allow disabling prompt sending as an argument to Traceloop.init()

Which component is this feature for?

Traceloop SDK

πŸ”– Feature description

Add a setting called trace_content (which will default to true) that will control whether all the instrumentations send sensitive content (like prompts and completions). This should override the env var behavior (similar to other flags we have).
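
A sketch of the proposed call shape (the flag does not exist yet; this is the requested behavior):

from traceloop.sdk import Traceloop

# Proposed flag: stop the instrumentations from sending prompts/completions,
# overriding whatever the environment variable is set to.
Traceloop.init(trace_content=False)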

🎀 Why is this feature needed ?

More control on this sensitive feature

✌️ How do you aim to achieve this?

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: Support qdrant vector database

Which component is this feature for?

Traceloop SDK

πŸ”– Feature description

Support tracing of calls to the qdrant vector database

🎀 Why is this feature needed ?

For users of qdrant

✌️ How do you aim to achieve this?

A new qdrant extension

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: Add logger for traceloop via OpenTelemetry

Which component is this feature for?

Traceloop SDK

πŸ”– Feature description

Traceloop instrumentation only has a tracer; how about adding a meter and a logger to expose metrics and logs?

🎀 Why is this feature needed ?

OpenTelemetry supports logs, metrics, and tracing; we should support all of those.

✌️ How do you aim to achieve this?

Enable a meter and a logger.

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸ› Bug Report: LCEL Runnables not supported in traces

Which component is this bug for?

All Packages

πŸ“œ Description

When a chain is built using LCEL, traces are not emitted.

πŸ‘Ÿ Reproduction steps

  1. Create an LCEL chain
  2. Run the application

πŸ‘ Expected behavior

Traces should be emitted for langchain blocks.

πŸ‘Ž Actual Behavior with Screenshots

The only traces emitted:

{
"name": "PATCH",
"context": {
"trace_id": "0x5d226499992625813276d417daf03aad",
"span_id": "0x52cf0fa0514c33a5",
"trace_state": "[]"
},
"kind": "SpanKind.CLIENT",
"parent_id": null,
"start_time": "2024-01-29T11:06:14.970928Z",
"end_time": "2024-01-29T11:06:15.404928Z",
"status": {
"status_code": "UNSET"
},
"attributes": {
"http.method": "PATCH",
"http.url": "https://api.smith.langchain.com/runs/97d3bdd0-8a1e-4437-9401-84e2004f16d8",
"http.status_code": 200
},
"events": [],
"links": [],
"resource": {
"attributes": {
"service.name": "src/streamlit_main.py"
},
"schema_url": ""
}
}
{
"name": "POST",
"context": {
"trace_id": "0xe6cf84bd7a5a72d6220d76528fc6fa32",
"span_id": "0x0aa04c6a4dbf1bfd",
"trace_state": "[]"
},
"kind": "SpanKind.CLIENT",
"parent_id": null,
"start_time": "2024-01-29T11:06:14.970581Z",
"end_time": "2024-01-29T11:06:15.430682Z",
"status": {
"status_code": "UNSET"
},
"attributes": {
"http.method": "POST",
"http.url": "https://api.smith.langchain.com/runs",
"http.status_code": 200
},
"events": [],
"links": [],
"resource": {
"attributes": {
"service.name": "src/streamlit_main.py"
},
"schema_url": ""
}
}
{
"name": "POST",
"context": {
"trace_id": "0xfa2c996295cfda4103f37090627791c7",
"span_id": "0x5c3d0c3dd6bd1d51",
"trace_state": "[]"
},
"kind": "SpanKind.CLIENT",
"parent_id": null,
"start_time": "2024-01-29T11:06:14.971143Z",
"end_time": "2024-01-29T11:06:15.441099Z",
"status": {
"status_code": "UNSET"
},
"attributes": {
"http.method": "POST",
"http.url": "https://api.smith.langchain.com/runs",
"http.status_code": 200
},
"events": [],
"links": [],
"resource": {
"attributes": {
"service.name": "src/streamlit_main.py"
},
"schema_url": ""
}
}
{
"name": "POST",
"context": {
"trace_id": "0xeff8835537de179427d32f928d2b244c",
"span_id": "0xa010e98b7ec99bc6",
"trace_state": "[]"
},
"kind": "SpanKind.CLIENT",
"parent_id": null,
"start_time": "2024-01-29T11:06:14.971049Z",
"end_time": "2024-01-29T11:06:15.471814Z",
"status": {
"status_code": "UNSET"
},
"attributes": {
"http.method": "POST",
"http.url": "https://api.smith.langchain.com/runs",
"http.status_code": 200
},
"events": [],
"links": [],
"resource": {
"attributes": {
"service.name": "src/streamlit_main.py"
},
"schema_url": ""
}
}
INFO: HTTP Request: POST https://navira-poc-02.openai.azure.com//openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-09-01-preview "HTTP/1.1 200 OK"
{
"name": "openai.chat",
"context": {
"trace_id": "0xa3b49a57b3990fdd94f44b647ab7f4fb",
"span_id": "0xe925e3010d0e482a",
"trace_state": "[]"
},
"kind": "SpanKind.CLIENT",
"parent_id": null,
"start_time": "2024-01-29T11:06:14.955512Z",
"end_time": "2024-01-29T11:06:15.976070Z",
"status": {
"status_code": "UNSET"
},
"attributes": {
"llm.request.type": "chat",
"openai.api_version": "2023-09-01-preview",
"llm.vendor": "OpenAI",
"llm.request.model": "gpt-3.5-turbo",
"llm.temperature": 0.0,
"llm.headers": "None",
"llm.prompts.0.role": "user",
"llm.prompts.0.content": "You are an AWS cloud consultant tasked with recognizing the intent of users' queries in the context of cloud cost governance. Pay close attention to the exact definitions of allowed categories & services. DO NOT deviate from them STRICTLY.\n\nContext: \nThe intent consists of 2 parts : category and service.\n\nThe possible categories of user's intent. \n- RESOURCE_METADATA_DISCOVERY: Involves querying metadata or attributes of user provisioned AWS cloud resources. This can include details like configurations, provisioned capacity, tags and status.\n- RESOURCE_USAGE_DISCOVERY: Focuses on analyzing resource utilization metrics of the user's provisioned resources such as performance metrics and usage patterns across different metrics.\n- RESOURCE_COST_DISCOVERY: It involves evaluating cost-related data of user's provisioned resources to gain insights into expenditure patterns and identify cost drivers.\n- PRICING_DISCOVERY: Dedicated to exploring the general public pricing details of AWS services (not related to the user's data). This includes obtaining information about service rates and understanding different pricing models (e.g., on-demand, reserved instances).\n- RECOMMENDATION: Includes providing suggestions and actionable advice for optimizing AWS cloud resource usage, costs & performance.\n\nThe services related to the user's intent.\n- EC2: Amazon Elastic Compute Cloud\n- RDS: Amazon Relational Database Service\n- S3: Amazon Simple Storage Service\n- DYNAMODB: Amazon DynamoDB\n- EBS: Amazon Elastic Block Storage\n- OPENSEARCH_SERVICE: Amazon OpenSearch Service\n- CLOUDWATCH: Amazon CloudWatch\n- ELASTICACHE: Amazon ElastiCache\n- VPC: Amazon Virtual Private Cloud\n- ELB: Amazon Elastic Load Balancing\n- CLOUDFRONT: Amazon CloudFront\nOutput Format:The output should be formatted as a JSON instance that conforms to the JSON schema below.\n\nAs an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}\nthe object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.\n\nHere is the output schema:\n\n{\"description\": \"Serializable base class.\", \"properties\": {\"category\": {\"description\": \"The category of the user's intent.\", \"allOf\": [{\"$ref\": \"#/definitions/IntentCategory\"}]}, \"service\": {\"description\": \"The service related to the user's intent.\", \"allOf\": [{\"$ref\": \"#/definitions/AmazonService\"}]}}, \"required\": [\"category\", \"service\"], \"definitions\": {\"IntentCategory\": {\"title\": \"IntentCategory\", \"description\": \"An enumeration.\", \"enum\": [\"RESOURCE_METADATA_DISCOVERY\", \"RESOURCE_USAGE_DISCOVERY\", \"RESOURCE_COST_DISCOVERY\", \"COST_DISCOVERY\", \"RECOMMENDATION\"], \"type\": \"string\"}, \"AmazonService\": {\"title\": \"AmazonService\", \"description\": \"An enumeration.\", \"enum\": [\"EC2\", \"RDS\", \"S3\", \"DYNAMODB\", \"EBS\", \"OPENSEARCH_SERVICE\", \"CLOUDWATCH\", \"ELASTICACHE\", \"VPC\", \"ELB\", \"CLOUDFRONT\"], \"type\": \"string\"}}}\n\nUser's query: how many ec2 instances\n",
"llm.response.model": "gpt-35-turbo",
"llm.usage.total_tokens": 735,
"llm.usage.completion_tokens": 20,
"llm.usage.prompt_tokens": 715,
"llm.completions.0.finish_reason": "stop",
"llm.completions.0.role": "assistant",
"llm.completions.0.content": "{\n "category": "RESOURCE_METADATA_DISCOVERY",\n "service": "EC2"\n}"
},
"events": [],
"links": [],
"resource": {
"attributes": {
"service.name": "src/streamlit_main.py"
},
"schema_url": ""
}
}
{
"name": "PATCH",
"context": {
"trace_id": "0xd95ec01e44229f05b4844ca8325c94c2",
"span_id": "0x9dc9086284294043",
"trace_state": "[]"
},
"kind": "SpanKind.CLIENT",
"parent_id": null,
"start_time": "2024-01-29T11:06:16.068286Z",
"end_time": "2024-01-29T11:06:16.367807Z",
"status": {
"status_code": "UNSET"
},
"attributes": {
"http.method": "PATCH",
"http.url": "https://api.smith.langchain.com/runs/39ee576a-6e25-4260-9ba7-d6e080e65ec9",
"http.status_code": 200
},
"events": [],
"links": [],
"resource": {
"attributes": {
"service.name": "src/streamlit_main.py"
},
"schema_url": ""
}
}
{
"name": "POST",
"context": {
"trace_id": "0x6e2c49a93463ceb0965582c405be317e",
"span_id": "0x9f801f0e5058c0ba",
"trace_state": "[]"
},
"kind": "SpanKind.CLIENT",
"parent_id": null,
"start_time": "2024-01-29T11:06:16.068165Z",
"end_time": "2024-01-29T11:06:16.398113Z",
"status": {
"status_code": "UNSET"
},
"attributes": {
"http.method": "POST",
"http.url": "https://api.smith.langchain.com/runs",
"http.status_code": 200
},
"events": [],
"links": [],
"resource": {
"attributes": {
"service.name": "src/streamlit_main.py"
},
"schema_url": ""
}
}
{
"name": "PATCH",
"context": {
"trace_id": "0x66442059532c14185b5da4b6c0e3e2a5",
"span_id": "0xf8da2f9a81d0d8e1",
"trace_state": "[]"
},
"kind": "SpanKind.CLIENT",
"parent_id": null,
"start_time": "2024-01-29T11:06:16.067977Z",
"end_time": "2024-01-29T11:06:16.398613Z",
"status": {
"status_code": "UNSET"
},
"attributes": {
"http.method": "PATCH",
"http.url": "https://api.smith.langchain.com/runs/5260c5a9-68c0-4910-b206-5986e4244dac",
"http.status_code": 200
},
"events": [],
"links": [],
"resource": {
"attributes": {
"service.name": "src/streamlit_main.py"
},
"schema_url": ""
}
}
{
"name": "PATCH",
"context": {
"trace_id": "0xa4309c9db74617d83e689ac2fcaeae3d",
"span_id": "0x52d896629a51553c",
"trace_state": "[]"
},
"kind": "SpanKind.CLIENT",
"parent_id": null,
"start_time": "2024-01-29T11:06:16.067677Z",
"end_time": "2024-01-29T11:06:16.402780Z",
"status": {
"status_code": "UNSET"
},
"attributes": {
"http.method": "PATCH",
"http.url": "https://api.smith.langchain.com/runs/3047f648-72af-4c4e-b223-dbb38c97002e",
"http.status_code": 200
},
"events": [],
"links": [],
"resource": {
"attributes": {
"service.name": "src/streamlit_main.py"
},
"schema_url": ""
}
}

πŸ€– Python Version

3.11.7

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: How to use opentelemetry-instrumentation-langchain

Which component is this feature for?

Anthropic Instrumentation

πŸ”– Feature description

I have a backend OpenTelemetry server; how do I use opentelemetry-instrumentation-langchain to collect our Langchain traces?

🎀 Why is this feature needed ?

I can see opentelemetry-instrumentation-langchain, but couldn't find a README.

✌️ How do you aim to achieve this?

Use it in our Langchain project.
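
A minimal sketch of how this could be wired to an existing OpenTelemetry backend (the instrumentor class name follows the standard convention used in this repo; the OTLP endpoint is a placeholder for your collector):

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)

# Langchain calls now produce spans that are shipped to the OTLP endpoint.
LangchainInstrumentor().instrument()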

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: IBM watsonx instrumentation ibm_watsonx_ai package support

Which component is this feature for?

Traceloop SDK

πŸ”– Feature description

Adding ibm_watsonx_ai package support to the IBM Watsonx Traceloop instrumentation.

🎀 Why is this feature needed ?

IBM Watsonx has the ibm_watsonx_ai and ibm_watson_machine_learning packages, and the current instrumentation only supports ibm_watson_machine_learning. In addition, the ibm_watsonx_ai package is also what the langchain_community LLMs use, so this support is also needed to get Langchain Watsonx traces.

✌️ How do you aim to achieve this?

I will create a PR based on https://github.com/huang-cn/traceloop-openllmetry/tree/watson_ml_moduel_support/packages/opentelemetry-instrumentation-watsonx

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: OpenTelemetry Metrics

Which component is this feature for?

All Packages

πŸ”– Feature description

Some values, like token usage, are better reported (also) as otel metrics. We should update all instrumentations to send metrics as well as traces.

🎀 Why is this feature needed ?

More ways to use the outputted data. Observability platforms can use metrics to set alerts on or to create dashboards.

✌️ How do you aim to achieve this?

This needs to be researched and defined. Please consult on slack before starting to work on this.
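
For illustration only, a counter for token usage with the standard OpenTelemetry metrics API might look like this (metric and attribute names are placeholders, not agreed conventions):

from opentelemetry import metrics

meter = metrics.get_meter("openllmetry")
token_counter = meter.create_counter(
    "llm.usage.total_tokens",
    unit="token",
    description="Total tokens consumed per LLM call",
)

# Inside an instrumentation, after a completion returns:
token_counter.add(735, attributes={"llm.response.model": "gpt-35-turbo"})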

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: Streaming Tokens

Which component is this bug for?

OpenAI Instrumentation

πŸ“œ Description

I have an app running where I am streaming the tokens of my answers. Streaming is working fine, but when I initialize Traceloop using LlamaIndex, the streaming feature is disabled.

πŸ‘Ÿ Reproduction steps

  1. Enable streaming in OpenAI
  2. Initialize Traceloop in my application
  3. Make a query

πŸ‘ Expected behavior

Streaming of tokens should still be working

πŸ‘Ž Actual Behavior with Screenshots

Tokens are not streamed.

πŸ€– Python Version

3.11.5

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: Add Azure Application Insights as an open telemetry destination

Which component is this feature for?

OpenAI Instrumentation

πŸ”– Feature description

In addition to the destinations currently supported for catching traces, it would be great to also include Azure Application Insights. OpenTelemetry is a key recommendation for almost any solution deployed on Azure, as such solutions should be tracking traces and metrics, so it would be great to have native support.

I was going to add links to Azure documentation for OTEL + App Insights... Realized it might just be easier to send this ChatGPT instruction for a sample Python app instrumented with OTEL, sending traces to App Insights.

https://chat.openai.com/share/8b6cc9a7-b8c5-4dea-8e0b-7c6728ff40a0

🎀 Why is this feature needed ?

Holistic Azure solution, e.g. using Azure OpenAI and sending metrics to Azure Application Insights. All data would be airgapped and stored in your own subscription.

✌️ How do you aim to achieve this?

Support Azure App Insights as a trace export destination, using instrumentation key and endpoint.
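
A sketch of that wiring, assuming the azure-monitor-opentelemetry-exporter package and the SDK's exporter argument (the connection string is a placeholder):

from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
from traceloop.sdk import Traceloop

exporter = AzureMonitorTraceExporter.from_connection_string(
    "InstrumentationKey=<your-instrumentation-key>"
)

# Hand the exporter to the SDK so spans flow to Application Insights.
Traceloop.init(app_name="azure_app", exporter=exporter)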

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸ› Bug Report: hard to decipher error message when providing wrong API key

Which component is this bug for?

Traceloop SDK

πŸ“œ Description

When providing the wrong TRACELOOP_API_KEY the following error is thrown, making it hard to trace back the reason (which is a bad API key). We should output a better log for that.

Traceback (most recent call last):
  File "/Users/nirga/vecinity/openllmetry/packages/sample-app/sample_app/prompt_registry_example_app.py", line 10, in <module>
    Traceloop.init(app_name="prompt_registry_example_app")
  File "/Users/nirga/vecinity/openllmetry/packages/traceloop-sdk/traceloop/sdk/__init__.py", line 62, in init
    Traceloop.__fetcher.run()
  File "/Users/nirga/vecinity/openllmetry/packages/traceloop-sdk/traceloop/sdk/fetcher.py", line 55, in run
    refresh_data(
  File "/Users/nirga/vecinity/openllmetry/packages/traceloop-sdk/traceloop/sdk/fetcher.py", line 144, in refresh_data
    response = fetch_url(f"{base_url}/v1/traceloop/prompts", api_key)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nirga/vecinity/openllmetry/packages/sample-app/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/nirga/vecinity/openllmetry/packages/sample-app/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nirga/vecinity/openllmetry/packages/sample-app/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x11492b910 state=finished raised HTTPError>]

πŸ‘Ÿ Reproduction steps

Choose a random TRACELOOP_API_KEY and set TRACELOOP_BASE_URL to api.traceloop.com (or leave empty).

πŸ‘ Expected behavior

Some output saying something like Authorization error: invalid API key.

πŸ‘Ž Actual Behavior with Screenshots

See log message in the description

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: log variable names in prompts

Which component is this feature for?

OpenAI Instrumentation

πŸ”– Feature description

I want to log a variable's value in the trace of an LLM call, where the LLM is called from an async task. Currently, the association_properties are shared by all the tasks.
So basically, I need a way to report a variable's value.

🎀 Why is this feature needed ?

To add more details to the LLM trace that are specific to each llm call made in the same context.

✌️ How do you aim to achieve this?

Provide a way to log variables inside LLM traces. This should only be set for the local scope of the function or the linked functions/processes, and should not be updated globally.

πŸ”„οΈ Additional Information

I tried setting the association_properties within each task; the tasks were called from a common parent. But the variable's value was getting overwritten globally - each time I was getting the most recent value.
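
For context, this is roughly how the properties are being set today; because they are applied globally, concurrent tasks overwrite each other (a sketch, with a placeholder key):

from traceloop.sdk import Traceloop

# Set inside each async task; the value is applied globally, so the last task
# to run wins and all traces end up with the most recent value.
Traceloop.set_association_properties({"request_id": "abc-123"})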

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: Anthropic instrumentation

Which component is this feature for?

Anthropic Instrumentation

πŸ”– Feature description

Similar to what we have for OpenAI

🎀 Why is this feature needed ?

It will help us stabilize the semantic conventions, as we'll have another example of an LLM call.

✌️ How do you aim to achieve this?

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: extract semantic conventions

Which component is this feature for?

All Packages

πŸ”– Feature description

As the instrumentations have grown, there are some semantic conventions that should be extracted out of the individual packages into the common one, namely:

  • LLM prompts
  • Prompt keys
  • Whether to disable prompt logging for privacy reasons

🎀 Why is this feature needed ?

Order in the repo :)

✌️ How do you aim to achieve this?

.

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: contribute this to otel

Which component is this feature for?

All Packages

πŸ”– Feature description

I can see that https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation already has some Python instrumentation code - any plan to upstream this to OTel? Thanks

🎀 Why is this feature needed ?

OTel is the official repo for all instrumentations.

✌️ How do you aim to achieve this?

Move all the instrumentation packages to https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸ› Bug Report: Always getting exception when Traceloop.init()

Which component is this bug for?

Traceloop SDK

πŸ“œ Description

from dotenv import load_dotenv
import os
load_dotenv()

from traceloop.sdk import Traceloop
Traceloop.init()

I have created a .env file which contains my Traceloop API key.

When I run the above program, it always reports the following error - am I doing anything wrong? Thanks

(py310) gyliu@guangyas-air openllmetry % /Users/gyliu/py310/bin/python /Users/gyliu/go/src/github.com/traceloop/openllmetry/packages/sample-app/sample_app/test.py
Traceback (most recent call last):
  File "/Users/gyliu/go/src/github.com/traceloop/openllmetry/packages/sample-app/sample_app/test.py", line 6, in <module>
    Traceloop.init()
  File "/Users/gyliu/py310/lib/python3.10/site-packages/traceloop/sdk/__init__.py", line 64, in init
    Traceloop.__fetcher.run()
  File "/Users/gyliu/py310/lib/python3.10/site-packages/traceloop/sdk/fetcher.py", line 55, in run
    refresh_data(
  File "/Users/gyliu/py310/lib/python3.10/site-packages/traceloop/sdk/fetcher.py", line 144, in refresh_data
    response = fetch_url(f"{base_url}/v1/traceloop/prompts", api_key)
  File "/Users/gyliu/py310/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/Users/gyliu/py310/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/Users/gyliu/py310/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/Users/gyliu/py310/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/Users/gyliu/py310/lib/python3.10/site-packages/traceloop/sdk/fetcher.py", line 101, in fetch_url
    raise requests.exceptions.HTTPError(response=response)
requests.exceptions.HTTPError

πŸ‘Ÿ Reproduction steps

Run the Python script as above.

πŸ‘ Expected behavior

This program should run without errors:

from dotenv import load_dotenv
import os
load_dotenv()

from traceloop.sdk import Traceloop
Traceloop.init()

πŸ‘Ž Actual Behavior with Screenshots

error as follows

(py310) gyliu@guangyas-air openllmetry % /Users/gyliu/py310/bin/python /Users/gyliu/go/src/github.com/traceloop/openllmetry/packages/sample-app/sample_app/test.py
Traceback (most recent call last):
  File "/Users/gyliu/go/src/github.com/traceloop/openllmetry/packages/sample-app/sample_app/test.py", line 6, in <module>
    Traceloop.init()
  File "/Users/gyliu/py310/lib/python3.10/site-packages/traceloop/sdk/__init__.py", line 64, in init
    Traceloop.__fetcher.run()
  File "/Users/gyliu/py310/lib/python3.10/site-packages/traceloop/sdk/fetcher.py", line 55, in run
    refresh_data(
  File "/Users/gyliu/py310/lib/python3.10/site-packages/traceloop/sdk/fetcher.py", line 144, in refresh_data
    response = fetch_url(f"{base_url}/v1/traceloop/prompts", api_key)
  File "/Users/gyliu/py310/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/Users/gyliu/py310/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/Users/gyliu/py310/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/Users/gyliu/py310/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/Users/gyliu/py310/lib/python3.10/site-packages/traceloop/sdk/fetcher.py", line 101, in fetch_url
    raise requests.exceptions.HTTPError(response=response)
requests.exceptions.HTTPError

πŸ€– Python Version

3.10

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: Calling openai methods in v1 SDK with `with_raw_response` redirect causes crash

Which component is this bug for?

OpenAI Instrumentation

πŸ“œ Description

OpenAI's v1 SDK has a with_raw_response redirect that returns a different type, LegacyAPIResponse.

The instrumentation code assumes the response is a Pydantic model and crashes in this mode.
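
For illustration, a minimal sketch of a defensive check the wrapper could apply (the helper name is hypothetical, and the import location of LegacyAPIResponse is an assumption):

from openai._legacy_response import LegacyAPIResponse  # import path is an assumption

def _response_as_dict(response):
    # Raw-response wrappers should be unwrapped rather than treated as Pydantic models.
    if isinstance(response, LegacyAPIResponse):
        return response.parse().model_dump()
    if hasattr(response, "model_dump"):
        return response.model_dump()
    return {}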

πŸ‘Ÿ Reproduction steps

from opentelemetry import trace
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.richconsole import RichConsoleSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from openai import AsyncAzureOpenAI, AsyncOpenAI
from azure.identity.aio import DefaultAzureCredential, get_bearer_token_provider

trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)
trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(RichConsoleSpanExporter()))

OpenAIInstrumentor().instrument()

azure_credential = DefaultAzureCredential(exclude_shared_token_cache_credential=True)
token_provider = get_bearer_token_provider(azure_credential, "https://cognitiveservices.azure.com/.default")

openai_client = AsyncAzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint="https://<redacted>.openai.azure.com",
    azure_ad_token_provider=token_provider,
)

async def test():
    # THIS next line
    response = await openai_client.embeddings.with_raw_response.create(
        model="embedding",
        input="Ground control to Major Tom",
    )

    response.close()

import asyncio

asyncio.run(test())

πŸ‘ Expected behavior

It should either not trace these calls or handle them gracefully - but it definitely should not crash.

πŸ‘Ž Actual Behavior with Screenshots

I will submit a test to reproduce this.

πŸ€– Python Version

3.11

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: Declare Optional Dependencies for Traceloop SDK

Which component is this feature for?

Traceloop SDK

πŸ”– Feature description

We're starting to hit some issues with dependency conflicts, which prevents us from upgrading key packages that we instrument in the library.

One of these conflicts, for instance, is how weaviate-client v4 depends on pydantic>=2 but haystack v1 depends on pydantic==1.

In theory, there's no need for all dependencies from all instrumentations to be resolvable simultaneously. We should instead make the Traceloop SDK dependency-light and declare the instrumentations as optional dependencies.

One could then install only the specific instrumentations they need, like this:

pip install traceloop[pinecone, openai]

🎀 Why is this feature needed ?

This would also enable us to simultaneously support many versions of the same libraries, as we can effectively just declare a new optional dependency for each version we're instrumenting.

✌️ How do you aim to achieve this?

Update pyproject.toml in traceloop-sdk; the details need additional research.

πŸ”„οΈ Additional Information

Many libraries in the wild offer this; we can find one and use its configuration as a starting point.

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸ› Bug Report: Sample App uses outdated OpenAI interface

Which component is this bug for?

Chromadb Instrumentation

πŸ“œ Description

When I try to run the OpenAI-based sample app, I get a deprecation error and the script fails to execute.

πŸ‘Ÿ Reproduction steps

  1. Install the traceloop-sdk through poetry install, as instructed by the documentation.
  2. Attempt to execute the chromadb sample app based on OpenAI. For example:

python sample_app/chroma_app.py

πŸ‘ Expected behavior

The script should run without errors and support the same OpenAI version defined in the test dependencies.

πŸ‘Ž Actual Behavior with Screenshots

(venv) (base) paolo@paolo-MS-7D08:~/dev/openllmetry/packages/sample-app$ python sample_app/chroma_app.py 
Traceloop syncing configuration and prompts
Traceloop exporting traces to https://api.traceloop.com authenticating with bearer token

Traceback (most recent call last):
  File "/home/paolo/dev/openllmetry/packages/sample-app/sample_app/chroma_app.py", line 97, in <module>
    assess_claims(samples["claim"].tolist())
  File "/home/paolo/dev/openllmetry/packages/traceloop-sdk/traceloop/sdk/decorators/__init__.py", line 102, in wrap
    return fn(*args, **kwargs)
  File "/home/paolo/dev/openllmetry/packages/sample-app/sample_app/chroma_app.py", line 82, in assess_claims
    response = openai.ChatCompletion.create(
  File "/home/paolo/dev/openllmetry/packages/traceloop-sdk/venv/lib/python3.10/site-packages/openai/lib/_old_api.py", line 39, in __call__
    raise APIRemovedInV1(symbol=self._symbol)
openai.lib._old_api.APIRemovedInV1: 

You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
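
For reference, a minimal sketch of the v1-style call the sample app could use instead (the model name and prompt are illustrative):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Assess this claim: the sky is green."}],
)
print(response.choices[0].message.content)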

πŸ€– Python Version

3.10.10

πŸ“ƒ Provide any additional context for the Bug.

This is the same issue I encountered while working on this PR: #368

Specifically, I had to update this file:
packages/sample-app/sample_app/pinecone_app.py

Using the existing code from packages/traceloop-sdk/tests/test_pinecone_instrumentation.py was a quick way to do it, since in the tests the code is up-to-date.

This may affect additional packages; I haven't checked other cases.

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: Support arbitrary resource attribute in SDK initialization

Which component is this feature for?

Traceloop SDK

πŸ”– Feature description

Right now, we only allow setting the service name when initializing the Traceloop SDK. We should additionally allow providing an arbitrary dictionary of attributes so that users can set other resource attributes.

🎀 Why is this feature needed ?

For Splunk integration - traceloop/docs#4

✌️ How do you aim to achieve this?

As an initialization parameter to the SDK; a possible shape is sketched below.
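
A minimal sketch of the proposed API - resource_attributes is a suggested parameter name, not an existing one:

from traceloop.sdk import Traceloop

Traceloop.init(
    app_name="my-service",
    # Proposed: arbitrary OpenTelemetry resource attributes.
    resource_attributes={
        "deployment.environment": "production",
        "service.version": "1.2.3",
    },
)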

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: Pinecone instrumentation

Which component is this feature for?

Pinecone Instrumentation

πŸ”– Feature description

Add a basic instrumentation for Pinecone requests - the most important ones being query requests.

🎀 Why is this feature needed ?

Enrich our instrumentation library

✌️ How do you aim to achieve this?

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: cannot override auto-instrumented packages

Which component is this bug for?

Traceloop SDK

πŸ“œ Description

We need a way to override and specify the packages to auto-instrument. Right now we do this automatically based on installed packages, which might not fit everyone.
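
A possible override, as a sketch - the instruments parameter is a proposed API for explicitly choosing which instrumentations to load instead of relying on installed-package detection:

from traceloop.sdk import Traceloop

# Proposed: only the listed instrumentations are loaded; everything else
# (e.g. transformers) is left untouched.
Traceloop.init(instruments={"openai", "pinecone"})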

πŸ‘Ÿ Reproduction steps

See the issues with the transformers package that came up with the litellm maintainer - BerriAI/litellm#1160

πŸ‘ Expected behavior

πŸ‘Ž Actual Behavior with Screenshots

πŸ€– Python Version

No response

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸ› Bug Report: Traceloop does not work in python 3.8

Which component is this bug for?

Traceloop SDK

πŸ“œ Description

On 3.8.13, I get the error:

    class TracerWrapper(object):
  File "/Users/jferge/code/ai/python-demo-project/venv/lib/python3.8/site-packages/traceloop/sdk/tracing/tracing.py", line 155, in TracerWrapper
    headers: dict[str, str],
TypeError: 'type' object is not subscriptable

It appears that this is Python 3.9+ syntax and needs from __future__ import annotations to work on 3.8.

In the project's pyproject.toml, it appears 3.8 should be supported. Are multi-version tests being run?

https://github.com/traceloop/openllmetry/blob/main/packages/traceloop-sdk/pyproject.toml
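
For illustration, two equivalent fixes, as a sketch:

# Option 1: postpone evaluation of annotations (must be the first statement
# in tracing.py); dict[str, str] then stays a string and is never subscripted at runtime.
from __future__ import annotations

# Option 2: use typing.Dict, which is subscriptable on Python 3.8.
from typing import Dict

headers: Dict[str, str] = {}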

πŸ‘Ÿ Reproduction steps

Use the project in Python 3.8.

πŸ‘ Expected behavior

It should not error upon import.

πŸ‘Ž Actual Behavior with Screenshots

It errors upon import.

πŸ€– Python Version

3.8.13

πŸ“ƒ Provide any additional context for the Bug.

No response

πŸ‘€ Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: Milvus Instrumentation

Which component is this feature for?

All Packages

πŸ”– Feature description

Instrument calls to Milvus, including adding attributes, similarly to our Chroma instrumentation. The instrumentation should support all types of calls - streaming, non-streaming, async, etc.

🎀 Why is this feature needed ?

Completeness of OpenLLMetry.

✌️ How do you aim to achieve this?

Similarly to other instrumentations we have in this repo; a rough sketch follows.
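
A rough sketch of the shape such an instrumentation usually takes (the module path, wrapped method, and version constraint for Milvus are illustrative assumptions, not a verified API):

from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
from wrapt import wrap_function_wrapper

def _search_wrapper(wrapped, instance, args, kwargs):
    # A real wrapper would start a span and record query attributes here.
    return wrapped(*args, **kwargs)

class MilvusInstrumentor(BaseInstrumentor):
    def instrumentation_dependencies(self):
        return ("pymilvus >= 2.0",)

    def _instrument(self, **kwargs):
        wrap_function_wrapper("pymilvus", "Collection.search", _search_wrapper)

    def _uninstrument(self, **kwargs):
        pass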

πŸ”„οΈ Additional Information

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None

πŸš€ Feature: Enable IBM Instana as a Supported (and tested) destinations

Which component is this feature for?

Anthropic Instrumentation

πŸ”– Feature description

The supported destinations do not include IBM Instana yet; we will enable Instana as well.

🎀 Why is this feature needed ?

Instana already supports OTel, and the instrumentation for AI can help Instana support AI observability.

✌️ How do you aim to achieve this?

Test with IBM Watsonx and LangChain first; depends on #341.

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!

πŸš€ Feature: option to disable prompt logging due to privacy / size

Which component is this feature for?

Traceloop SDK

πŸ”– Feature description

Currently, all prompts are logged by default. This might be problematic for some use cases, so we need an option to disable that for a specific task / workflow.

🎀 Why is this feature needed ?

Sometimes prompts are sensitive, or simply too large, so users need a way to disable logging them.

✌️ How do you aim to achieve this?

We need to design this properly, since the option has to be propagated from the SDK to the specific instrumentations; one possible design is sketched below.
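
One possible design, as a sketch - an environment variable that the SDK reads and each instrumentation checks before recording prompt content (the variable name is illustrative, not a committed API):

import os

def should_send_prompts() -> bool:
    # Prompts are recorded unless the user explicitly opts out.
    return os.getenv("TRACELOOP_TRACE_CONTENT", "true").lower() == "true"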

πŸ”„οΈ Additional Information

No response

πŸ‘€ Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

Are you willing to submit PR?

None
