open-telemetry / opentelemetry-python-contrib

OpenTelemetry instrumentation for Python modules

Home Page: https://opentelemetry.io

License: Apache License 2.0


opentelemetry-python-contrib's Introduction


Getting Started   •   API Documentation   •   Getting In Touch (GitHub Discussions)


Contributing   •   Examples


OpenTelemetry Python Contrib

The Python auto-instrumentation libraries for OpenTelemetry (per OTEP 0001)


Installation

This repository includes installable packages for each instrumented library. Libraries that produce telemetry data should only depend on opentelemetry-api, and defer the choice of the SDK to the application developer. Applications may depend on opentelemetry-sdk or another package that implements the API.
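As a rough illustration of that split, here is a minimal sketch (exact SDK class names vary between releases) of an application wiring up the SDK while library code only uses the API:

# Application code: chooses and configures the SDK.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Library/instrumentation code: depends only on opentelemetry-api.
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("example-operation"):
    pass  # the span is exported by whatever SDK the application installed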

Please note that these libraries are currently in beta, and shouldn't generally be used in production environments.

Unless explicitly stated otherwise, any instrumentation here for a particular library is not developed or maintained by the authors of such library.

The instrumentation/ directory includes OpenTelemetry instrumentation packages, which can be installed separately as:

pip install opentelemetry-instrumentation-{integration}

To install the development versions of these packages instead, clone or fork this repo and do an editable install:

pip install -e ./instrumentation/opentelemetry-instrumentation-{integration}

Releasing

Maintainers release new versions of the packages in opentelemetry-python-contrib on a monthly cadence. See releases for all previous releases.

Contributions that enhance OTel for Python are welcome to be hosted upstream for the benefit of group collaboration. Maintainers look for things like good documentation, good unit tests, and, in general, their own confidence when deciding to release a package with the stability guarantees implied by a 1.0 release.

To resolve this, members of the community are encouraged to commit to becoming a CODEOWNER for packages in -contrib that they feel experienced enough to maintain. CODEOWNERS can then follow the checklist below to release -contrib packages as 1.0 stable:

Releasing a package as 1.0 stable

To release a package as 1.0 stable, the package:

  • SHOULD have a CODEOWNER. To become one, submit an issue and explain why you meet the responsibilities found in CODEOWNERS.
  • MUST have unit tests that cover all supported versions of the instrumented library.
    • e.g. Instrumentation packages might use different techniques to instrument different major versions of python packages
  • MUST have clear documentation for non-obvious usages of the package
    • e.g. If an instrumentation package uses flags, a token as context, or parameters that are not typical of the BaseInstrumentor class, these are documented
  • After the release of 1.0, a CODEOWNER may no longer feel like they have the bandwidth to meet the responsibilities of maintaining the package. That's not a problem at all, life happens! However, if that is the case, we ask that the CODEOWNER please raise an issue indicating that they would like to be removed as a CODEOWNER so that they don't get pinged on future PRs. Ultimately, we hope to use that issue to find a new CODEOWNER.

Semantic Convention status of instrumentations

In our efforts to maintain an optimal user experience and prevent breaking changes when transitioning to stable semantic conventions, OpenTelemetry Python is adopting the semantic convention migration plan for several instrumentations. Currently this plan is only being adopted for HTTP-related instrumentations, but it will eventually cover all types. Please refer to the semconv status column of the instrumentation README for the current status of each instrumentation's semantic conventions. The possible values are experimental, stable, and migration, referring to the status of that particular semantic convention; migration means that the instrumentation currently supports the migration plan.
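For applications, the opt-in during the HTTP migration is typically controlled through an environment variable. A minimal sketch, assuming the OTEL_SEMCONV_STABILITY_OPT_IN variable used by the HTTP instrumentations (the accepted values may evolve):

import os

# Assumption: "http/dup" emits both the old and the new HTTP conventions,
# while "http" emits only the new ones.
os.environ["OTEL_SEMCONV_STABILITY_OPT_IN"] = "http/dup"

from opentelemetry.instrumentation.flask import FlaskInstrumentor

# The opt-in is read when the instrumentation is applied.
FlaskInstrumentor().instrument()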

Contributing

See CONTRIBUTING.md

We meet weekly on Thursday at 9AM PT. The meeting is subject to change depending on contributors' availability. Check the OpenTelemetry community calendar for specific dates and for the Zoom link.

Meeting notes are available as a public Google doc. For edit access, get in touch on GitHub Discussions.

Approvers (@open-telemetry/python-approvers):

Emeritus Approvers:

Find more about the approver role in community repository.

Maintainers (@open-telemetry/python-maintainers):

Emeritus Maintainers:

Find more about the maintainer role in community repository.

Running Tests Locally

  1. Go to your Contrib repo directory. cd ~/git/opentelemetry-python-contrib.
  2. Create a virtual env in your Contrib repo directory. python3 -m venv my_test_venv.
  3. Activate your virtual env. source my_test_venv/bin/activate.
  4. Make sure you have tox installed. pip install tox.
  5. Run tests for a package. (e.g. tox -e test-instrumentation-flask.)

Thanks to all the people who already contributed

opentelemetry-python-contrib's People

Contributors

adamantike, akochavi, alertedsnake, ashu658, avzis, c24t, codeboten, crflynn, ffe4, hectorhdzg, jeremydvoss, lzchen, majorgreys, mariojonke, mauriciovasquezbernal, nathanielrn, nemoshlag, nprajilesh, oberon00, ocelotl, ofek, opentelemetrybot, owais, sanketmehta28, shalevr, srikanthccv, theanshul756, thiyagu55, toumorokoshi, xrmx


opentelemetry-python-contrib's Issues

Add travis CI

Need to set up continuous integration. To stay in line with the rest of OpenTelemetry Python, let's set up Travis.

Add instrumentation for the kafka-python library

Is your feature request related to a problem?

There is currently no instrumentation support for the kafka-python library.

Describe the solution you'd like

Add support for instrumenting the kafka-python library. The instrumentation should inject the current trace context into the message headers when KafkaProducer.send is called, and should set the current span if a consumed message contains the appropriate headers.

Describe alternatives you've considered

Manually implementing the tracing support in each client, which looked something like this:

# tracer, propagators (opentelemetry), consumer and producer are set up elsewhere
for msg in consumer:
    with tracer.start_as_current_span("consuming"):
        # ...
        with tracer.start_as_current_span("sending"):
            # Inject the current trace context into a dict carrier, then
            # convert it to kafka-python's list-of-(key, bytes) header format.
            headers = {}
            propagators.inject(type(headers).__setitem__, headers)
            headers = [(k, v.encode()) for k, v in headers.items()]
            producer.send("msg", value=msg.value, key=None, headers=headers)

Update approvers/maintainers

Currently, the list of approvers and maintainers is out of date. The list will be updated to match the opentelemetry-python approvers/maintainers

Remove non-inclusive language

Is your feature request related to a problem?

This repository currently contains non-inclusive language:

$ > git grep -Ei '(blacklist|whitelist|black.list|white.list)'
.pylintrc:extension-pkg-whitelist=
.pylintrc:# Add files or directories to the blacklist. They should be base names, not
.pylintrc:# Add files or directories matching the regex patterns to the blacklist. The
reference/ddtrace/ext/aws.py:BLACKLIST_ENDPOINT = ['kms', 'sts']
reference/ddtrace/ext/aws.py:BLACKLIST_ENDPOINT_TAGS = {
reference/ddtrace/ext/aws.py:    if endpoint_name not in BLACKLIST_ENDPOINT:
reference/ddtrace/ext/aws.py:        blacklisted = BLACKLIST_ENDPOINT_TAGS.get(endpoint_name, [])
reference/ddtrace/ext/aws.py:            if k not in blacklisted
reference/ddtrace/http/headers.py:    :param headers: All the request's http headers, will be filtered through the whitelist
reference/ddtrace/http/headers.py:    :param headers: All the response's http headers, will be filtered through the whitelist
reference/ddtrace/settings/config.py:    def trace_headers(self, whitelist):
reference/ddtrace/settings/config.py:        :param whitelist: the case-insensitive list of traced headers
reference/ddtrace/settings/config.py:        :type whitelist: list of str or str
reference/ddtrace/settings/config.py:        self._http.trace_headers(whitelist)
reference/ddtrace/settings/http.py:        self._whitelist_headers = set()
reference/ddtrace/settings/http.py:        return len(self._whitelist_headers) > 0
reference/ddtrace/settings/http.py:    def trace_headers(self, whitelist):
reference/ddtrace/settings/http.py:        :param whitelist: the case-insensitive list of traced headers
reference/ddtrace/settings/http.py:        :type whitelist: list of str or str
reference/ddtrace/settings/http.py:        if not whitelist:
reference/ddtrace/settings/http.py:        whitelist = [whitelist] if isinstance(whitelist, str) else whitelist
reference/ddtrace/settings/http.py:        for whitelist_entry in whitelist:
reference/ddtrace/settings/http.py:            normalized_header_name = normalize_header_name(whitelist_entry)
reference/ddtrace/settings/http.py:            self._whitelist_headers.add(normalized_header_name)
reference/ddtrace/settings/http.py:        log.debug('Checking header \'%s\' tracing in whitelist %s', normalized_header_name, self._whitelist_headers)
reference/ddtrace/settings/http.py:        return normalized_header_name in self._whitelist_headers
reference/ddtrace/settings/http.py:            self.__class__.__name__, self._whitelist_headers, self.trace_query_string)
reference/docs/advanced_usage.rst:  - headers configuration is based on a whitelist. If a header does not appear in the whitelist, it won't be traced.
reference/tests/contrib/botocore/test.py:        # confirm blacklisted
reference/tests/unit/http/test_headers.py:    def test_no_whitelist(self, span, integration_config):
reference/tests/unit/http/test_headers.py:    def test_whitelist_exact(self, span, integration_config):
reference/tests/unit/http/test_headers.py:    def test_whitelist_case_insensitive(self, span, integration_config):
reference/tests/unit/test_settings.py:    def test_trace_headers_whitelist_case_insensitive(self):

Note: pylint needs to be handled separately.

Describe the solution you'd like
Remove non-inclusive language

Describe alternatives you've considered
Which alternative solutions or features have you considered?

Additional context
Add any other context about the feature request here.

Inform about other repos in contributing instructions

This is the main OpenTelemetry Python repo, but now we have another one (and possibly more in the future) for instrumentation-related code.

Explain this in the contributing instructions to inform new developers about the existence of this repo, so that they are aware of it and their efforts can be directed to the right place.

Django ASGI not supported

Describe your environment
Python 3.8
Docker(python:3.8 image)
daphne-2.5.0/uvicorn 0.11.8/hypercorn 0.10.2
Django-3.1.0

Steps to reproduce

git clone https://github.com/HiveTraum/opentelemetry-django-asgi-example
cd opentelemetry-django-asgi-example
docker-compose up -d
curl -I 127.0.0.1:8000/api/
curl -I 127.0.0.1:8001/api/
curl -I 127.0.0.1:8002/api/

What is the expected behavior?
Correct request processing

What is the actual behavior?
KeyError when trying to get wsgi.url_scheme from environ

Additional context

Every WSGI server fills this variable inside request.META, but ASGI servers don't. Actually I don't know precisely to whom this bug should be addressed. For now opentelemetry-instrumentation-django relies on opentelemetry-instrumentation-wsgi but ignores opentelemetry-instrumentation-asgi; maybe this is the key to solving this issue.

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/asgiref/sync.py", line 330, in thread_handler
    raise exc_info[1]
  File "/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 38, in inner
    response = await get_response(request)
  File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 221, in _get_response_async
    response = await middleware_method(request, callback, callback_args, callback_kwargs)
  File "/usr/local/lib/python3.8/site-packages/asgiref/sync.py", line 296, in __call__
    ret = await asyncio.wait_for(future, timeout=None)
  File "/usr/local/lib/python3.8/asyncio/tasks.py", line 455, in wait_for
    return await fut
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/asgiref/sync.py", line 334, in thread_handler
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/opentelemetry/instrumentation/django/middleware.py", line 73, in process_view
    attributes = collect_request_attributes(environ)
  File "/usr/local/lib/python3.8/site-packages/opentelemetry/instrumentation/wsgi/__init__.py", line 111, in collect_request_attributes
    result["http.url"] = wsgiref_util.request_uri(environ)
  File "/usr/local/lib/python3.8/wsgiref/util.py", line 72, in request_uri
    url = application_uri(environ)
  File "/usr/local/lib/python3.8/wsgiref/util.py", line 52, in application_uri
    url = environ['wsgi.url_scheme']+'://'
KeyError: 'wsgi.url_scheme'

Auto-instrumentation for celery might segfault when mysql is loaded before

Environment

Python 3.7.5 (default, Apr 19 2020, 20:18:17) [GCC 9.2.1 20191008] on linux
Ubuntu 19.10

Auto-instrumentation might fail with a SIGSEGV on Linux with python 3.7 when running

python opentelemetry-instrumentation/src/opentelemetry/instrumentation/sitecustomize.py

It looks like loading the celery instrumentation segfaults when the mysql instrumentation was loaded before it (mysql currently does not load correctly, see open-telemetry/opentelemetry-python#858, so the issue will not occur until that fix is merged).
Modifying sitecustomize.py to:

entry_points = []
for entry_point in iter_entry_points("opentelemetry_instrumentor"):
    if entry_point.name not in ("mysql", "celery"):
        continue
    entry_points.append(entry_point)

def key(entry_point):
    return entry_point.name


entry_points = sorted(entry_points, key=key, reverse=True)

for entry_point in entry_points:
    try:
        entry_point.load()().instrument()  # type: ignore
        logger.debug("Instrumented %s", entry_point.name)

    except Exception:  # pylint: disable=broad-except
        logger.exception("Instrumenting of %s failed", entry_point.name)

makes the segfault reproducible every time (in py35 and py37, py38 works fine). Reversing the order in which the instrumentations are loaded does not raise a segfault.

Running:

gdb python
(gdb) run opentelemetry-instrumentation/src/opentelemetry/instrumentation/sitecustomize.py
...
(gdb) backtrace

says the following:

#0  0x000000000005c626 in ?? ()
#1  0x00007ffff62013d8 in (anonymous namespace)::__future_category_instance() ()
   from /home/mario/workspaces/github/opentelemetry/opentelemetry-python/.venv/lib/python3.7/site-packages/_mysql_connector.cpython-37m-x86_64-linux-gnu.so
#2  0x00007ffff6201769 in std::future_category() ()
   from /home/mario/workspaces/github/opentelemetry/opentelemetry-python/.venv/lib/python3.7/site-packages/_mysql_connector.cpython-37m-x86_64-linux-gnu.so
#3  0x00007ffff5c0922d in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#4  0x00007ffff7fe102a in ?? () from /lib64/ld-linux-x86-64.so.2
#5  0x00007ffff7fe1131 in ?? () from /lib64/ld-linux-x86-64.so.2
#6  0x00007ffff7fe541e in ?? () from /lib64/ld-linux-x86-64.so.2
#7  0x00007ffff7f1ed39 in __GI__dl_catch_exception (exception=<optimized out>, operate=<optimized out>, args=<optimized out>) at dl-error-skeleton.c:196
#8  0x00007ffff7fe49ba in ?? () from /lib64/ld-linux-x86-64.so.2
#9  0x00007ffff7d9334c in dlopen_doit (a=a@entry=0x7fffffff3fb0) at dlopen.c:66
#10 0x00007ffff7f1ed39 in __GI__dl_catch_exception (exception=exception@entry=0x7fffffff3f50, operate=<optimized out>, args=<optimized out>) at dl-error-skeleton.c:196
#11 0x00007ffff7f1edd3 in __GI__dl_catch_error (objname=0xa04930, errstring=0xa04938, mallocedp=0xa04928, operate=<optimized out>, args=<optimized out>) at dl-error-skeleton.c:215
#12 0x00007ffff7d93b59 in _dlerror_run (operate=operate@entry=0x7ffff7d932f0 <dlopen_doit>, args=args@entry=0x7fffffff3fb0) at dlerror.c:170
#13 0x00007ffff7d933da in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87
#14 0x0000000000639011 in _PyImport_FindSharedFuncptr ()
#15 0x0000000000645fe7 in _PyImport_LoadDynamicModuleWithSpec ()
#16 0x0000000000646e80 in ?? ()
#17 0x00000000005c8e2d in PyCFunction_Call ()
#18 0x000000000053d13a in _PyEval_EvalFrameDefault ()
...

gRPC server instrumentation docs are inadequate and misleading

I recently submitted open-telemetry/opentelemetry-python#1113 to fix a missing line from the docs, but I've been digging more into this and I'm not sure that was the right choice.

The docs have this sequence:

grpc_server_instrumentor = GrpcInstrumentorServer()
grpc_server_instrumentor.instrument()

I added this line in open-telemetry/opentelemetry-python#1113:

server = intercept_server(server, server_interceptor())

It turns out that the instrument() method does some magic by wrapping grpc.server in the same way my added line does. The reason I added this line to the docs was that I wasn't calling grpc.server() after the instrument() call; I'd already created the server some lines before, and nothing worked until I specifically called the wrapper.

So as the docs are written, my change probably needs a revert; but as written, they also don't make clear what hidden effects are actually going on here, so it's really easy to miss. Like I did.

So my bug report is that this very basic example isn't enough for someone to go on for anything but running the basic example itself, especially when there isn't really any other useful documentation on how these components work, and there are many hidden behind-the-scenes effects that aren't immediately clear.
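To make the hidden effect concrete, here is a minimal sketch of the ordering the docs rely on, assuming instrument() works by wrapping grpc.server as described above:

from concurrent import futures

import grpc
from opentelemetry.instrumentation.grpc import GrpcInstrumentorServer

# Instrument first: this wraps grpc.server so that servers created afterwards
# get the OpenTelemetry server interceptor installed automatically.
GrpcInstrumentorServer().instrument()

# Only now create the server. A server created before the instrument() call
# would not be traced, which is the pitfall described above.
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))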

Enable documentation for Instrumentor classes

Each integration should implement the _instrument and _uninstrument methods according to the BaseInstrumentor interface. Those methods are private and are not included in the documentation with the current configuration: https://opentelemetry-python.readthedocs.io/en/latest/ext/flask/flask.html#api.

We could enable them by using :private-members:, but that is probably too much and would include documentation of members we don't want. Another issue is that the documented name would include the underscore, while we want users to call the method without it on the BaseInstrumentor class. Also, we don't have a way to document the parameters that those methods could take, as they are declared as **kwargs.

Ability to disable opentelemetry-instrumentation-dbapi sql parameter capture

Is your feature request related to a problem?
dbapi tracing is working well. The problem I am seeing is that it captures db.statement.parameter for every SQL statement. This is a privacy issue for us.

Describe the solution you'd like
I would like the ability to turn off the capturing of db.statement.parameter. Or better yet, to not capture db.statement.parameter by default and provide a way to turn it on. I suspect that most production services would not want to capture and expose DB statement parameters.

Support for metrics in library instrumentations

  1. There is definitely value in generating metrics (either by user choice or automatically) for many instrumented libraries. Having a latency metric for something like gRPC is valuable for visualization purposes and it is difficult to manually create that metric outside of a wrapping library. Things like bytes in/out, count of requests on topic, etc could also prove useful. Is there a plan to add stats to our current instrumentations? What would the behaviour look like? (would there be a second instrumentor-like object you have to create and call to use the metrics?)

  2. To use opentelemetry-ext-system-metrics we have to pass an exporter and an interval so it can create a controller. This is fine for one library, but if there are 10, nobody wants to manually pass their exporter to every instrumentation. Is there a better way to give this configuration to the library? Right now traces have trace.get_tracer(tracer_provider), which gives the library enough information to export traces properly; why doesn't metrics have the same functionality (or does it)? (See the sketch after this list.)
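Sketch referenced in point 2 above: with the current (post-0.x) metrics API, the library only asks the global MeterProvider for a meter, and the application configures the exporter once; the instrument names below are illustrative:

from opentelemetry import metrics

# Library/instrumentation side: no exporter or interval is passed in; the
# meter comes from whatever MeterProvider the application configured globally.
meter = metrics.get_meter(__name__)
request_duration = meter.create_histogram(
    "rpc.server.duration", unit="ms", description="gRPC server latency"
)

def handle_request(elapsed_ms: float) -> None:
    request_duration.record(elapsed_ms, attributes={"rpc.system": "grpc"})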

Baggage propagation tests are missing for Flask and WSGI

Flask and the OpenWSGIMiddleware should extract the baggage from the incoming request and make it active. The implementation appears to be working, but there aren't any unit tests checking that.

The criteria to close this issue are:

  • Test implemented for Flask
  • Test implemented for WSGI
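A minimal sketch of what such a test could look like for Flask (route name, header value, and assertion style are illustrative only):

import flask
from opentelemetry import baggage
from opentelemetry.instrumentation.flask import FlaskInstrumentor

app = flask.Flask(__name__)
FlaskInstrumentor().instrument_app(app)

@app.route("/")
def index():
    # If baggage propagation works, the value sent in the W3C `baggage`
    # header is visible in the active context inside the request handler.
    return str(baggage.get_baggage("customer_id"))

def test_baggage_is_extracted():
    client = app.test_client()
    response = client.get("/", headers={"baggage": "customer_id=42"})
    assert response.data == b"42"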

Flask instrumentor logs an error message when used with `app.test_request_context`

Describe your environment

You should be able to see the same result regardless of what the environment is.

Steps to reproduce

Python 3.6.9 (default, Apr 15 2020, 09:31:57) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]: from opentelemetry.instrumentation.flask import FlaskInstrumentor
   ...: FlaskInstrumentor().instrument()
   ...: 
   ...: import flask
   ...: 
   ...: app = flask.Flask(__name__)
   ...: 
   ...: with app.test_request_context():
   ...:     print('Hey!')
   ...: 
Hey!
Flask environ's OpenTelemetry activation missingat _teardown_flask_request(None)

What is the expected behavior?

I did not expect to see the error message in the last line of the example.

What is the actual behavior?

I see the line.

Additional context

According to Flask’s documentation, app.test_request_context does not run before_request and after_request handlers, but it does run teardown_request handlers, which means the counter-handler of before_request is after_request, not teardown_request. Source: https://flask.palletsprojects.com/en/1.1.x/testing/#other-testing-tricks (5th paragraph after the anchor).

The Flask instrumentor doesn't take that into consideration: it registers a handler with before_request and expects it to have run when the teardown_request handler runs, which is not guaranteed (handlers are created [1][2] and handlers are removed). Because the before_request handler won't run for app.test_request_context, flask.request.environ won't have '_ENVIRON_ACTIVATION_KEY', which is required here to close the span that should have been opened in the before_request handler that never ran.
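A sketch of the defensive check the teardown handler could perform instead of logging an error; the key name mirrors the constant referenced above and is illustrative:

import flask

def _teardown_flask_request(exc):
    # Hypothetical fix sketch: if before_request never ran (as with
    # app.test_request_context), there is no activation to close, so
    # silently do nothing instead of emitting an error message.
    activation = flask.request.environ.get("_ENVIRON_ACTIVATION_KEY")
    if activation is None:
        return
    activation.__exit__(None, None, None)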

Remove DD code

DD code has been moved into the reference branch, remove all the code from master.

Support AWS Lambda function instrumentation

Before opening a feature request against this repo, consider whether the feature should/could be implemented in the other OpenTelemetry client libraries. If so, please open an issue on opentelemetry-specification first.

Is your feature request related to a problem?
Support AWS Lambda instrumentation

Describe the solution you'd like
Create a new instrumentor for AWS Lambda functions. The instrumentor uses wrapt to decorate the Lambda handler function.
The Lambda handler's module and function name can be detected from the _HANDLER environment variable (ref. AWS docs).

Describe alternatives you've considered
No

Additional context
Refer to the FaaS spec.
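A rough sketch of that approach (function and variable names are illustrative, not a final API):

import os

from wrapt import wrap_function_wrapper
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def _traced_handler(wrapped, instance, args, kwargs):
    # args typically holds the Lambda event and context objects.
    with tracer.start_as_current_span(os.environ.get("_HANDLER", "lambda_handler")):
        return wrapped(*args, **kwargs)

# _HANDLER has the form "module.function", e.g. "app.handler".
module_name, _, function_name = os.environ["_HANDLER"].rpartition(".")
wrap_function_wrapper(module_name, function_name, _traced_handler)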

Add integration for Gunicorn

Starting a discussion around what integration for Gunicorn would look like, and also discuss if it's needed.

Today, we already have integrations with web frameworks directly (e.g. flask) as well as wsgi:

https://github.com/open-telemetry/opentelemetry-python/tree/master/ext

I presume that if we support the major gateway interfaces (WSGI and ASGI), plus finer-grained support per framework, then everything should work fine with Gunicorn once it gains ASGI support (benoitc/gunicorn#1380).

But wanted to get further thoughts.

ext/celery: fix inconsistent test results

Currently the celery tests in the docker-tests tox target fail inconsistently. Here's an example failed build log:

=================================== FAILURES ===================================

______________________________ test_fn_task_delay ______________________________

celery_app = <Celery celery.tests at 0x7fab2e53e2b0>
memory_exporter = <opentelemetry.sdk.trace.export.in_memory_span_exporter.InMemorySpanExporter object at 0x7fab2efc8fd0>

    def test_fn_task_delay(celery_app, memory_exporter):
        @celery_app.task
        def fn_task_parameters(user, force_logout=False):
            return (user, force_logout)

    

        result = fn_task_parameters.delay("user", force_logout=True)
        assert result.get(timeout=ASYNC_GET_TIMEOUT) == ["user", True]

    
        spans = memory_exporter.get_finished_spans()
>       assert len(spans) == 2
E       assert 1 == 2
E         +1
E         -2

celery/test_celery_functional.py:203: AssertionError

I've not been able to reproduce the failures in my local environment. We should investigate to get to the bottom of the failures, for now flaky tests have been marked as skipped.

Django instrumentation - Trace request attributes also in the response

Sorry, but I'm slightly confused whether I should create an issue in this repository or https://github.com/open-telemetry/opentelemetry-python-contrib. If it's the other one, i'll gladly reopen it there.


Is your feature request related to a problem?
According to the docs and the implementation, you can use OTEL_PYTHON_DJANGO_TRACED_REQUEST_ATTRS to specify what request attributes should be traced.

The middleware uses these options during process_request in https://github.com/open-telemetry/opentelemetry-python/blob/master/instrumentation/opentelemetry-instrumentation-django/src/opentelemetry/instrumentation/django/middleware.py#L147

The problem is that some of the request attributes that one could wish to trace are available only after some other middlewares run. For example (pretty common setups):

  • django.contrib.auth.middleware.AuthenticationMiddleware adds the user attribute
  • django.contrib.sites.middleware.CurrentSiteMiddleware adds the site attribute

Of course, we cannot mess with the middleware order, but these attributes are available in the process_response part of the middleware.

Describe the solution you'd like
It would be nice if we could specify attributes that would be extracted in the process_response, so that we can extract these as well using the instrumentation library.
I do not have a preference, and I am describing the options that come to my mind in the next section.

Describe alternatives you've considered

  1. Extract all attributes in process_response instead of process_request
  2. Include additional configuration variable so that we somehow distinguish between the two extraction places.
  3. Try extracting the attributes in process_request as well as process_response (maybe only try to extract the unextracted ones again?)

Additional context
I am willing to create a PR for this upon the decision to do it one way or another.

Thanks.
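For reference, a user-side workaround sketch in the spirit of options 2/3, assuming the server span opened by the Django instrumentation is still active and recording while a later middleware's response phase runs (the middleware and attribute names are illustrative):

from opentelemetry import trace

class ExtraRequestAttributesMiddleware:
    """Hypothetical middleware: record attributes that only exist after
    AuthenticationMiddleware / CurrentSiteMiddleware have populated the request."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        span = trace.get_current_span()
        if span.is_recording():
            if hasattr(request, "user"):
                span.set_attribute("django.user", str(request.user))
            if hasattr(request, "site"):
                span.set_attribute("django.site", str(request.site))
        return response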

Cannot Wrap Handler with wrap_server_method_handler

Describe your environment

My env

Python: 3.8.2
Platform: macOS-10.14.6-x86_64-i386-64bit
Dependencies:
  opentelemetry-api: 0.10b0
  opentelemetry-sdk: 0.10b0
  opentelemetry-ext-grpc: 0.10b0
  grpcio-tools: 1.30.0

I'm trying to use wrap_server_method_handler but it fails because the opentelemetry interceptor wraps a handler with something different.

Exception servicing handler: '_InterceptorRpcMethodHandler' object has no attribute '_replace'

Steps to reproduce
Describe exactly how to reproduce the error. Include a code sample if applicable.

Here's the interceptor which caused the error above.

import functools
from typing import Any, Callable

from grpc import HandlerCallDetails, RpcMethodHandler, ServerInterceptor, ServicerContext
from grpc.experimental import wrap_server_method_handler

from validator import validate

class ProtocValidationServerInterceptor(ServerInterceptor):  # type: ignore
    def intercept_service(
        self,
        continuation: Callable[[HandlerCallDetails], RpcMethodHandler],
        handler_call_details: HandlerCallDetails,
    ) -> RpcMethodHandler:
        handler: RpcMethodHandler = continuation(handler_call_details)

        if handler and (handler.request_streaming or handler.response_streaming):
            return handler

        def wrapper(behavior: Callable[[Any, ServicerContext], Any]) -> Callable[[Any, ServicerContext], Any]:
            @functools.wraps(behavior)
            def wrapper(request: Any, context: ServicerContext) -> Any:
                validate(request)
                return behavior(request, context)

            return wrapper

        return wrap_server_method_handler(wrapper, handler)

What is the expected behavior?

validate() in the above code is called.

What is the actual behavior?

Exception servicing handler: '_InterceptorRpcMethodHandler' object has no attribute '_replace'

Additional context

N/A

Fix documentation links

Need to agree on where the documentation for all the instrumentation packages will live. I suggest that, as a starting point, it just lives with each package on PyPI.

Botocore instrumentation omits TracerProvider resources

I'm manually instrumenting Botocore (v0.13b) in my Flask/postgres app as follows with OTLP exporter.

from opentelemetry.instrumentation.botocore import BotocoreInstrumentor

labels = {"service.name": "myapp", "version": 1}
resource = Resource(attributes=labels)
trace.set_tracer_provider(TracerProvider(resource=resource))
otlp_exporter = OTLPSpanExporter(endpoint=endpoint)
span_processor = BatchExportSpanProcessor(otlp_exporter)
FlaskInstrumentor().instrument_app(tapp)
BotocoreInstrumentor().instrument()

The traces that the Botocore instrumentation is generating are missing the resource attributes (service.name and version). The traces generated by the Flask instrumentation have these resources. Is this a bug, or am I missing something?

What is the expected behavior?
Expecting to see service.name and version in the traces.

Botocore trace
InstrumentationLibrary opentelemetry.instrumentation.botocore 0.13b0
Span #0
    Trace ID       : 1f87a1cc76c38b2e6879a9d9459c8e3e
    Parent ID      :
    ID             : 375e03d19f05e29e
    Name           : s3.command
    Kind           : SPAN_KIND_CONSUMER
    Start time     : 2020-09-28 21:47:25.9426736 +0000 UTC
    End time       : 2020-09-28 21:47:26.004678 +0000 UTC
    Status code    : STATUS_CODE_UNKNOWN_ERROR
    Status message : NoCredentialsError: Unable to locate credentials
Attributes:
    -> params.Bucket: STRING(data)
    -> params.Key: STRING(xlsx)
    -> params.ServerSideEncryption: STRING(aws:kms)
    -> aws.agent: STRING(botocore)
    -> aws.operation: STRING(PutObject)
    -> aws.region: STRING(us-east-1)

Other traces

InstrumentationLibrary postgres 0.13b0
Span #16
    Trace ID  : 0ef61b3da87cdcc46bd150336fbe7cdb
    Parent ID : 600800d603896b80
    ID        : 6181692326794a48
    Name      : postgres.query
Resource labels:
    -> service.name: STRING(myapp)
    -> version: STRING(1)

See how this one has service.name while the Boto traces do not. Let me know if you need any other details.

`psycopg2` instrumentor should not require the binary package

Is your feature request related to a problem?
The psycopg2 instrumentor requires psycopg2-binary, but the official psycopg2 documentation says the binary package is not recommended for production and only exists as an aid for development in machines where a C compiler is not available. Source here.

Describe the solution you'd like
Instead of requiring the binary package, the code should check, at runtime, whether psycopg2 is available to import, and if it is, use that. Whether it's the source or binary distribution is irrelevant (or should be). If psycopg2 is not available for whatever reason, raise an exception, or skip instrumentation and log a warning.
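A minimal sketch of that runtime check (the function name is illustrative):

import logging

try:
    # Either the source distribution (psycopg2) or psycopg2-binary provides
    # this module, so a plain import covers both.
    import psycopg2
except ImportError:
    psycopg2 = None

def instrument_if_available():
    if psycopg2 is None:
        logging.getLogger(__name__).warning(
            "psycopg2 is not installed; skipping instrumentation"
        )
        return
    # ... proceed with instrumenting the imported module ...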

ext-flask: flask.copy_current_request_context breaks span

Describe your environment
$ python --version
Python 3.7.5

opentelemetry-api 0.3a0
opentelemetry-ext-flask 0.3a0
opentelemetry-ext-jaeger 0.3a0
opentelemetry-ext-wsgi 0.3a0
opentelemetry-sdk 0.3a0

Not fixed in latest master, as of today:
https://github.com/open-telemetry/opentelemetry-python/blob/4458698ea178498f7af2d476644b34afcc366335/ext/opentelemetry-ext-flask/src/opentelemetry/ext/flask/__init__.py#L83

Steps to reproduce

from flask import Flask, copy_current_request_context
from opentelemetry.ext.flask import instrument_app
from opentelemetry import trace
from opentelemetry.context import Context
from opentelemetry.sdk.trace import Tracer
from opentelemetry.sdk.trace.export import ConsoleSpanExporter
from opentelemetry.sdk.trace.export import SimpleExportSpanProcessor

trace.set_preferred_tracer_implementation(lambda T: Tracer())
span_exporter = ConsoleSpanExporter()
trace.tracer().add_span_processor(SimpleExportSpanProcessor(span_exporter))

app = Flask(__name__)
instrument_app(app)

@app.route("/")
def hello():
    @copy_current_request_context
    def nest():
        return "Hello!"

    return nest()

if __name__ == "__main__":
    app.run(debug=True)

What is the expected behavior?

Span(name="hello", context=SpanContext(trace_id=0x5f518f71873e44725041844a36ccc493, span_id=0xf198dc7dbbf56851, trace_state={}), kind=SpanKind.SERVER, parent=None, start_time=2019-12-30T10:35:25.124501Z, end_time=2019-12-30T10:35:25.125568Z)
127.0.0.1 - - [30/Dec/2019 11:35:25] "GET / HTTP/1.1" 200 -

What is the actual behavior?

Span(name="hello", context=SpanContext(trace_id=0x5f518f71873e44725041844a36ccc493, span_id=0xf198dc7dbbf56851, trace_state={}), kind=SpanKind.SERVER, parent=None, start_time=2019-12-30T10:35:25.124501Z, end_time=2019-12-30T10:35:25.125568Z)
Setting attribute on ended span.
Setting attribute on ended span.
Calling set_status() on an ended span.
127.0.0.1 - - [30/Dec/2019 11:35:25] "GET / HTTP/1.1" 200 -

Additional context

The reproduction example doesn't show why you would use the copy-request decorator, but in a real application it is needed when working in a multithreaded environment.

I notice there is a lot of functional overlap with the wsgi version. Wouldn't it be feasible to rely more on the wsgi implementation and simply add the extra Flask-specific information on top of that?

Note that the wsgi implementation does not suffer from this issue.

fastapi instrumentation: add possibility to exclude some routes

Hi,
Thank you for that awesome lib 😊

Jaeger is crumbling under my logs because I have a “/health” route that is called every second on more than 20 services. Would it be possible to add a way to exclude certain routes?

Example ideas:

from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

@FastAPIInstrumentor.exclude()
@app.get("/health")
async def health():
    return {"status": "OK"}

Or

FastAPIInstrumentor.instrument_app(app, ["/health"])

Flask Instrumentation : traceparent header not propagating

I have a Flask app instrumented with opentelemetry-instrumentation-flask==0.13b0 using the default Trace Context propagation. When it receives a request with a traceparent header, as in the curl command below (copied from a browser trace instrumented with opentelemetry-js), I expect the resulting trace from the Flask app to have the browser trace id as its parent. But the parent id is blank, and the browser and Flask traces are getting disjoined/orphaned. Can someone please tell me if this is a defect or whether I am missing a configuration?

Request
curl 'http://127.0.0.1:5000/api/graphql'
-H 'Accept: application/json, text/plain, */*'
-H 'Referer: http://localhost:8080/somepage'
-H 'DNT: 1'
-H 'traceparent: 00-a9c3b99a95cc045e573e163c3ac80a77-d99d251a8caecd06-01'
-H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36 Edg/85.0.564.70'
-H 'Content-Type: application/json'

Trace Generated

"name":"graphql",
"context":{
"trace_id":"0xe38556c630a25daafab10524bab79d43",
"span_id":"0x2eeb581ee82ec68d",
"trace_state":"{}"
},
"kind":"SpanKind.SERVER",
"parent_id":null,
"start_time":"2020-10-08T21:05:00.359514Z",
"end_time":"2020-10-08T21:05:03.863810Z",
"status":{
}

Code snippet :

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.trace import SpanKind

from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import (
BatchExportSpanProcessor,
ConsoleSpanExporter,
SimpleExportSpanProcessor
)

from opentelemetry.exporter.otlp.trace_exporter import OTLPSpanExporter

from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
from opentelemetry.instrumentation.flask import FlaskInstrumentor
trace.set_tracer_provider(TracerProvider(resource=resource))
FlaskInstrumentor().instrument_app(app)

Document django instrumentation for production

Is your feature request related to a problem?
Reading through the Django instrumentation docs, the example adds DjangoInstrumentor().instrument() to manage.py. This works for development, but if you run gunicorn or another WSGI server, the instrumentation does not happen. I guess the best thing is to add this startup code in wsgi.py? Not 100% sure on this.

The OpenTelemetry WSGI instrumentation docs explain how to add it to wsgi.py.

Describe the solution you'd like
Update the documentation to explain how to use it in production (wsgi/

Describe alternatives you've considered
n/a

Additional context
n/a

Instrumentation breaks psycopg2 when registering types.

Describe your environment
PostgreSQL supports JSON and JSONB types of DB columns. psycopg2 supports registering custom typecasters for these types. Most of the registration functions accept a connection or a cursor instance. The library crashes if anything else is passed in place of a connection or a cursor.

Most higher-level frameworks and ORMs support these features out of the box, meaning they crash as well. For example, Django projects using the psycopg2 driver will crash.

Steps to reproduce

Running the following program

import psycopg2.extras
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor

trace.set_tracer_provider(TracerProvider())
Psycopg2Instrumentor().instrument()

cnx = psycopg2.connect(host='localhost', port='5432', dbname='postgres', password='1234', user='postgres')
psycopg2.extras.register_default_jsonb(conn_or_curs=cnx, loads=lambda x: x)

results in the following exception:

❯ python main.py
Traceback (most recent call last):
  File "main.py", line 10, in <module>
    psycopg2.extras.register_default_jsonb(conn_or_curs=cnx, loads=lambda x: x)
  File "/Users/olone/playground/otel-psycopg2/venv/lib/python3.7/site-packages/psycopg2/_json.py", line 156, in register_default_jsonb
    loads=loads, oid=JSONB_OID, array_oid=JSONBARRAY_OID, name='jsonb')
  File "/Users/olone/playground/otel-psycopg2/venv/lib/python3.7/site-packages/psycopg2/_json.py", line 125, in register_json
    register_type(JSON, not globally and conn_or_curs or None)
TypeError: argument 2 must be a connection, cursor or None

What is the expected behavior?
For the instrumentation to not crash psycopg2 ever.

What is the actual behavior?
It crashes when psycopg2 enforces connection or cursor types.

Additional context
This is happening because instrumentation wraps connection factories and returns wrapt.ObjectProxy instances that wrap connection or cursor objects. These objects fail the type checks psycopg2 runs internally (probably in C code).

Here: https://github.com/open-telemetry/opentelemetry-python/blob/6019a91980ec84bbf969b0d82d44483c93f3ea4c/instrumentation/opentelemetry-instrumentation-dbapi/src/opentelemetry/instrumentation/dbapi/__init__.py#L287-L304
And here: https://github.com/open-telemetry/opentelemetry-python/blob/6019a91980ec84bbf969b0d82d44483c93f3ea4c/instrumentation/opentelemetry-instrumentation-dbapi/src/opentelemetry/instrumentation/dbapi/__init__.py#L361-L389

The solution might be to change how the DBAPI instrumentation works and patch the actual connection and cursor classes instead of wrapping them with object proxies.
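As a stop-gap on the user side, the underlying connection can usually be reached through the proxy's __wrapped__ attribute (a wrapt convention); this is a workaround sketch, not a fix:

import psycopg2
import psycopg2.extras
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor

trace.set_tracer_provider(TracerProvider())
Psycopg2Instrumentor().instrument()

cnx = psycopg2.connect(host="localhost", port="5432", dbname="postgres",
                       password="1234", user="postgres")

# The instrumented connect() returns a wrapt.ObjectProxy; its __wrapped__
# attribute is the real psycopg2 connection, which passes the C-level type check.
psycopg2.extras.register_default_jsonb(conn_or_curs=cnx.__wrapped__,
                                       loads=lambda x: x)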

Celery: traces within tasks or nested tasks don't seem to get published

Describe your environment
Python 3.8 and the following:
celery==4.4.7
kombu==4.6.11
opentelemetry-api==0.11b0
opentelemetry-ext-celery==0.11b0
opentelemetry-ext-jaeger==0.11b0
opentelemetry-instrumentation==0.11b0
opentelemetry-sdk==0.11b0

rabbitmq 3.7 and jaeger/all-in-one running on docker

Steps to reproduce
Using the following docker-compose for services:

version: '3.4'

services:
  rabbitmq:
    image: bitnami/rabbitmq:3.7
    ports:
      - '127.0.0.1:15672:15672'
      - '5672:5672'
    environment:
      RABBITMQ_USERNAME: ot_test
      RABBITMQ_PASSWORD: ot_test
      RABBITMQ_VHOST: ot_test

  jaeger:
    image: jaegertracing/all-in-one:latest
    environment:
      COLLECTOR_ZIPKIN_HTTP_PORT: 9411
    ports:
      - '16686:16686'
      - '127.0.0.1:6831:6831/udp'
      - '127.0.0.1:14268:14268'

And the following Celery app:

import os
from celery import Celery
from opentelemetry import trace
from opentelemetry.ext import jaeger
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchExportSpanProcessor
from opentelemetry.ext.celery import CeleryInstrumentor
import structlog


def initialize_telemetry():

    provider = TracerProvider()
    trace.set_tracer_provider(provider)

    CeleryInstrumentor().instrument()

    tracer = trace.get_tracer(__name__)

    with tracer.start_as_current_span("initializing telemetry"):
        # create a JaegerSpanExporter
        jaeger_exporter = jaeger.JaegerSpanExporter(
            service_name='testapp',
            # configure agent
            agent_host_name='localhost',
            agent_port=6831,
            # optional: configure also collector
            collector_host_name='localhost',
            collector_port=14268,
            # collector_endpoint='/api/traces?format=jaeger.thrift',
            # username=xxxx, # optional
            # password=xxxx, # optional
        )

        # Create a BatchExportSpanProcessor and add the exporter to it
        span_processor = BatchExportSpanProcessor(jaeger_exporter)

        # add to the tracer
        trace.get_tracer_provider().add_span_processor(span_processor)


log = structlog.get_logger()

initialize_telemetry()

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span('test_span'):
    print("test")

app = Celery(
    'ot_test',
    broker=os.environ.get('CELERY_BROKER',
                          'amqp://ot_test:ot_test@localhost/ot_test'),
    backend=os.environ.get('CELERY_BACKEND', 'rpc://'),
)


@app.task(bind=True, name='dispatcher.really_add')
def really_add(self, x, y):
    log.debug('starting really_add', current_span=trace.get_current_span())
    with trace.get_tracer(__name__).start_as_current_span('add_in_really_add'):
        log.debug('inside inner span', current_span=trace.get_current_span())
        log.debug('showing full context', context=getattr(self, "__otel_task_span", None))
    log.debug("really adding", x=x, y=y)
    print(x + y)


@app.task(bind=True, name='dispatcher.add')
def add(self, x, y):
    log.debug('starting add', current_span=trace.get_current_span())
    with trace.get_tracer(__name__).start_as_current_span('add_in_add'):
        log.debug('inside inner span', current_span=trace.get_current_span())
        log.debug('showing full context', context=getattr(self, "__otel_task_span", None))
    really_add.delay(x, y)

print("Delayed:", add.delay(42, 50))
print("Immediate:", add(42, 50))

What is the expected behavior?
I expected to see nested traces in Jaeger showing the dispatcher.add function with a nested trace (add_in_add), and either a connected or separate trace (dispatcher.really_add) with its own nested trace (add_in_really_add).

What is the actual behavior?
Only the initializing_telemetry, test_span, and dispatcher.add spans (from Celery telemetry) were reported to Jaeger for the add.delay() case. In the immediate case, the add_in_add span and dispatcher.really_add spans were created and sent to Jaeger.

spans when only doing the delayed call (commenting out the immediate line):
traces_delayed

Traces when add() is called in immediate mode:
traces_immediate

(in both the above cases I was getting two traces per call, which were identical but one had warnings about duplicate IDs and clock skew)

The other spans were constructed and reflected in the logs as seen below, but were not reported and did not appear in Jaeger.

Logs:

[2020-08-04 14:56:24,928: WARNING/ForkPoolWorker-1] 2020-08-04 14:56.24 inside inner span              current_span=Span(name="add_in_add", context=SpanContext(trace_id=0xe79630d9b40064a881a558393426cc39, span_id=0x2060772061734699, trace_state={}, is_remote=False))
[2020-08-04 14:56:24,929: WARNING/ForkPoolWorker-1] 2020-08-04 14:56.24 showing full context           context={('6c7699a9-7459-4388-b5b6-d427e7d2b330', False): (Span(name="dispatcher.add", context=SpanContext(trace_id=0xe79630d9b40064a881a558393426cc39, span_id=0x195bceb18b0073fc, trace_state={}, is_remote=False)), <contextlib._GeneratorContextManager object at 0x7fbff598d8e0>)}
[2020-08-04 14:56:24,945: WARNING/ForkPoolWorker-1] 2020-08-04 14:56.24 starting really_add            current_span=Span(name="dispatcher.really_add", context=SpanContext(trace_id=0x7482e466066047bb4e967939473b39d4, span_id=0x2f3fd59c95de25c8, trace_state={}, is_remote=False))
[2020-08-04 14:56:24,945: WARNING/ForkPoolWorker-1] 2020-08-04 14:56.24 inside inner span              current_span=Span(name="add_in_really_add", context=SpanContext(trace_id=0x7482e466066047bb4e967939473b39d4, span_id=0x9a9a8ffbad3a73b5, trace_state={}, is_remote=False))
[2020-08-04 14:56:24,946: WARNING/ForkPoolWorker-1] 2020-08-04 14:56.24 showing full context           context={('93c370c9-0308-4651-9ddb-522815a83d37', False): (Span(name="dispatcher.really_add", context=SpanContext(trace_id=0x7482e466066047bb4e967939473b39d4, span_id=0x2f3fd59c95de25c8, trace_state={}, is_remote=False)), <contextlib._GeneratorContextManager object at 0x7fbff599cb80>)}
[2020-08-04 14:56:24,946: WARNING/ForkPoolWorker-1] 2020-08-04 14:56.24 really adding                  x=42 y=50
[2020-08-04 14:56:24,946: WARNING/ForkPoolWorker-1] 92
[2020-08-04 14:56:24,947: WARNING/ForkPoolWorker-1] 2020-08-04 14:56.24 starting really_add            current_span=Span(name="dispatcher.really_add", context=SpanContext(trace_id=0x68f0b83f06fe2d46b480fbd4c350f506, span_id=0xc6919e6887321329, trace_state={}, is_remote=False))
[2020-08-04 14:56:24,947: WARNING/ForkPoolWorker-1] 2020-08-04 14:56.24 inside inner span              current_span=Span(name="add_in_really_add", context=SpanContext(trace_id=0x68f0b83f06fe2d46b480fbd4c350f506, span_id=0x0a0ed1ae2ab9d2f3, trace_state={}, is_remote=False))
[2020-08-04 14:56:24,947: WARNING/ForkPoolWorker-1] 2020-08-04 14:56.24 showing full context           context={('1a110389-4c61-47db-ab1e-9e04574dc936', False): (Span(name="dispatcher.really_add", context=SpanContext(trace_id=0x68f0b83f06fe2d46b480fbd4c350f506, span_id=0xc6919e6887321329, trace_state={}, is_remote=False)), <contextlib._GeneratorContextManager object at 0x7fbff591c8b0>)}
[2020-08-04 14:56:24,947: WARNING/ForkPoolWorker-1] 2020-08-04 14:56.24 really adding                  x=42 y=50
[2020-08-04 14:56:24,947: WARNING/ForkPoolWorker-1] 92

Additional context
It could certainly be that I'm missing something terribly obvious, but if so some sample code in the documentation would certainly go a long way! It is extremely cryptic to have traces appear or not based on whether the call is done through apply_async or not, and it's not clear how to display traces within tasks.

Additional data: looking at the source code, the before/after publish hooks should generate a PRODUCER kind of span (which seems to be what we're getting), but the before/after run hooks, which are being executed, should also produce a CONSUMER kind of span (which is not reflected anywhere). More additional data: when attempting to use this within our existing app, tasks sent to a worker by an external app using app.send_task were executed but did not publish traces to Jaeger.

celery_example.zip

Many thanks--extremely impressed with the whole OT work, and keen to use it for our app, but we're heavily celery-based with multiple workers so I'm very much wanting this to be working.

Add labels to entry level tasks for new contributors

Hi,

based on open-telemetry/community#469 I have added open-telemetry/opentelemetry-python-contrib to Up For Grabs:

https://up-for-grabs.net/#/filters?names=662

There are currently no issues with label help wanted. Please add this label to your entry level tasks, so people can find a way to contribute easily.

If "help wanted" is not the right label, let me know and I can change it (e.g. to "good first issue" or "up-for-grabs"), or you can provide a pull request by editing https://github.com/up-for-grabs/up-for-grabs.net/blob/gh-pages/_data/projects/opentelemetry-python-contrib.yml

Thanks!
