hypertrace / hypertrace

An open source distributed tracing & observability platform

Home Page: https://www.hypertrace.org/

License: Other

Shell 54.07% Kotlin 1.83% Java 15.42% JavaScript 2.06% TypeScript 24.69% Python 1.92%
observability monitoring kubernetes distributed-tracing cloud-native tracing opentelemetry application-monitoring java kafka

hypertrace's Introduction


Hypertrace

An open distributed tracing & observability platform!
Explore the docs »

Visit our blog · Report Bug · Request Feature

CVE-2021-44228 and CVE-2021-45046 are security vulnerabilities disclosed in Apache Log4j 2 versions 2.15 and below.

We have upgraded all dependent Hypertrace repositories and have cut a new release with a safe version of Log4j (2.17). We strongly encourage upgrading to the latest version of Hypertrace (v0.2.7) or using the appropriate charts from the latest release.

About The Project

Hypertrace is a cloud-native, distributed-tracing-based observability platform that gives you visibility into your dev and production distributed systems.

Hypertrace converts distributed trace data into relevant insight for everyone. Infrastructure teams can identify which services are causing overload. Service teams can diagnose why a specific user's request failed, or which applications put their service objectives at risk. Deployment teams can know if a new version is causing a problem.

With Hypertrace you can:

  • Perform root cause analysis (RCA) whenever something breaks in your system.
  • Watch roll-outs and compare key metrics.
  • Determine performance bottlenecks and identify slow operations such as slow API calls or DB queries.
  • Monitor microservice dependencies and observe your applications.

Getting Started

Quick-start with docker-compose

If you want to see Hypertrace in action, you can get it running quickly with Docker Compose.

Prerequisites

  • Docker
  • Docker Compose

Run with docker-compose

git clone https://github.com/hypertrace/hypertrace.git
cd hypertrace/docker
docker-compose pull
docker-compose up --force-recreate

This will start all services required for Hypertrace. Once you see the service Hypertrace-UI start, you can visit the UI at http://localhost:2020.

If your application is already instrumented to send traces to Zipkin or Jaeger, it will work with Hypertrace.

If not, you can try Hypertrace with our sample application by running:

docker-compose -f docker-compose-zipkin-example.yml up

The sample app will run at http://localhost:8081. Request that URL a few times to generate some sample traces!

Deploy in production with Kubernetes

We provide Helm charts to simplify deploying Hypertrace in a Kubernetes environment, whether on your on-premises servers or on a cloud instance.

Please refer to the deployments section in our documentation which lists the steps to deploy Hypertrace on different Kubernetes flavors across different operating systems and cloud providers. You can find the Helm Charts and installation scripts with more details here.

Note: We have created hypertrace-ingester and hypertrace-service to simplify local deployment and quick-start with Hypertrace. As of now, we don't support them for production because of some limitations and some unreliability with scaling. So we encourage you to deploy the individual components for staging as well as production deployments.

Community

  • Join the Hypertrace Workspace on Slack to connect with other users, contributors and people behind Hypertrace.
  • We hold a public monthly meeting on the last Thursday of the month at 8:00 AM PST / 8:30 PM IST / 11:00 AM ET / 5:00 PM CET, where we give the community a holistic overview of new features in Hypertrace and community activities. We would like to hear feedback, discuss feature requests, and help new contributors get started with contributing to the projects. You can join the Zoom meeting here or use the Zoom meeting details below:
    • Meeting ID: 990 5679 8944
    • Passcode: 111111
  • If you want to discuss ideas, ask questions, or show us how you are using Hypertrace, you can use GitHub Discussions as well.

Docker images

Released versions of the Docker images for the various Hypertrace components are available on Docker Hub.

Roadmap

See the open issues for a list of proposed features (and known issues).

Contributing

Contributions are what make the open community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. Check out our Contribution Guidelines for more details.

License

Hypertrace follows the open core model where "Hypertrace core" (or simply Core) is made available under the Apache 2.0 license, which has distributed trace ingestion and exploration features. The Services, Endpoints, Backends and Service Graph features of Hypertrace Community Edition are made available under the Traceable Community license.

hypertrace's People

Contributors

aaron-steinfeld · adriancole · dnielsen · jbahire · jcchavezs · kotharironak · sarthak77


hypertrace's Issues

Unexpected explorer behavior when we select metric for which there's no data

Issue

When we select a metric for which there is no data (in the case below, the User Agent dashboard), it shows no data, which is expected. But when I then switch to other metrics, it shows no data for all of them as well.

We have to refresh to see data again.

(Screenshots: dashboard before and after selecting the User Agent metric, which has no data.)

Use Case

Better UX

Proposal

Switching the metric should show data for that metric, just like it does for the other metrics.

Questions to address (if any)

  • Is this expected behavior?

Convert all config/attribute/entity service repos into a macro repo and single service

This idea originally came from @aaron-steinfeld and I agree with it. Logging it so that we don't lose track of it.

Proposal

Just like the macro repo we created on the ingestion side, let's create a new macro repo with all config-related services and even run them as a single service in the Helm-based setup.

This would help in several ways:

  • Reduce the number of moving services in the Helm-based setup.
  • Reduce the number of repos to contribute to and develop on; we have already seen this benefit on the ingestion side.
  • Potentially simplify config bootstrapping, because we can get rid of some awkward cyclic dependencies.

cc @aaron-steinfeld @kotharironak

Avoid using the EntitiesRequest API object in query execution logic code in gateway-service

Use Case

This isn't a new use case but tech debt. We currently use EntitiesRequest (https://github.com/hypertrace/gateway-service/blob/main/gateway-service-api/src/main/proto/org/hypertrace/gateway/service/v1/entities.proto#L27) to receive the entity query in gateway-service, but we use the same API DTO throughout the implementation code. This restricts how the API and the implementation can evolve independently.
This issue is to fix that by introducing an internal object for the implementation.
cc @aaron-steinfeld @tim-mwangi

Support for grouping data into spaces (name TBD)

Use Case

A user should be able to separate their data into arbitrary logical groups (aka spaces, segments) so that all spans, traces, entities and relationships pertain only to that group. They should be able to use the UI both to define how to group this data and to switch the group they're viewing. For example, if a user is ingesting data from multiple applications or environments, they should be able to segregate this data based on a span value and switch between environments easily in the UI. Likewise, a user who wants to see only a subset of data (e.g. they're responsible for monitoring one set of services, or want to slice data and metrics across different planes such as namespace, availability zone, blue/green, etc.) should be able to do so without requiring admin privileges or changing the span collection. If a change in collection or tracing adds more metadata, the user should be able to easily use that data to set up spaces.

Proposal

Generate zero or more space IDs on each span based on user-configurable rules, propagate these IDs to higher-level constructs, and add top-level API parameters (not a standard filter, so we don't have to expose the internal construct, and so that it always applies at the span level, while filters operate at the requested resource level) to support filtering results by a single space. In essence, spaces become generated tags on the span data.

Details

Changes for spaces would be mostly on the query side. A breakdown by location:

Enrichment

A new enricher is added that will read the space generation rules for a tenant and apply them to each ingested span, adding the space data as a new enriched attribute.
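
To make the rule application concrete, here is a minimal sketch in Java; the rule shape, class names, and the idea of deriving the space ID from a configured tag's value are illustrative assumptions, not the actual enricher API.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SpaceEnricherSketch {

  /** Hypothetical rule: derive a space ID from the value of a configured span tag. */
  record SpaceRule(String tagKey) {}

  /** Returns zero or more space IDs for a span, based on the tenant's configured rules. */
  static List<String> generateSpaceIds(Map<String, String> spanTags, List<SpaceRule> rules) {
    List<String> spaceIds = new ArrayList<>();
    for (SpaceRule rule : rules) {
      String value = spanTags.get(rule.tagKey());
      if (value != null) {
        spaceIds.add(value); // e.g. tag "environment" with value "prod" yields space "prod"
      }
    }
    return spaceIds; // stored as the new enriched attribute described above
  }
}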

View Generation/Pinot/Query Service

The new space data is persisted as a multi-valued column in each Pinot view.

Gateway

Relevant existing APIs will now accept an optional additional parameter specifying a space to further filter the resulting data. The service is responsible for converting these, if provided, into generic filters in the query sent to query service.

GraphQL

Similar to the gateway: relevant existing APIs now accept an optional additional parameter, which is propagated to the appropriate gateway requests.
GraphQL will also need new APIs to support reading and writing the space configuration as required by the UI. It will also need to support a way of enumerating the spaces (which may or may not be part of the config, depending on the implementation).

UI

Two new UI elements (exact mocks and implementation TBD):

  • A top-level space selector, similar to the existing time range selector, would allow global filtering of the entire UI application by space. This should be prepopulated with the available spaces.
  • A configuration tool/screen to allow defining new, or editing existing space generation rules. This will be based on one or more values from the ingested span tags. Eventually, we should support constructs such as predicates (e.g. if (tag.env == "prod") { space = "Production"}) and expressions (e.g. space = ${tag.env} - ${tag.application}).

Space Generation Config

Location TBD; we'll need to persist the space generation rules and support general CRUD operations. The consumers of this will be the space enricher (for reading and executing the rules) and GraphQL (for read/write).

Setup docker CI for all repos that produce a docker image

All repos that produce Docker images should test the image(s) they build.

Since we always write a HEALTHCHECK now, the most portable basic test is parsing that. This helps avoid a dud image caused by simple problems like file renames.

The simplest test is running a container with a fixed name (e.g. docker run --rm --name test ...) and parsing the health check status with docker inspect --format='{{json .State.Health.Status}}' test | jq. IIRC this is already done on push when using Docker Hub.

Doing it before that means an explicit command in GitHub Actions or CircleCI. We are using Gradle instead of GitHub Actions to build/push Docker images. It is possible something like this could be done in Gradle, or we could do something in CircleCI like #13, noting that leaf repos especially may not need to use docker-compose.

cc @aaron-steinfeld @jcchavezs for thoughts.

Remove schema registry dependency from docker-compose (quick-start)

We are currently using the docker-compose setup for local (dev) deployment. Currently, we have the following data services:

  • pinot, kafka, zookeeper, mongodb, and schema-registry

We use Kafka for our streaming pipeline, and both Kafka and Pinot have a ZooKeeper dependency. As the schema will be static in nature for a local deployment, can we consider removing the schema-registry dependency from the local setup?

Add support for distributed tracing in Hypertrace

Use Case

Debugging Hypertrace is complex because of the number of components and the lack of end-to-end visibility across them. Also, there is a pipeline in the middle of the processing which needs to be digested.

Proposal

Add support for enabling tracing in Hypertrace itself so we can understand failures. Ideally we should use our own Hypertrace agent.

Questions to address (if any)

  • Do we support Kafka Streams tracing? (e.g. Brave does)

Ping @pavolloffay @adriancole

Resource leak detected

Use Case

Better logging needed.

Proposal

Hypertrace service logs sometimes show an error related to a resource leak:

[grpc-default-worker-ELG-1-2] ERROR i.g.n.s.i.n.u.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
	io.grpc.netty.shaded.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:331)
	io.grpc.netty.shaded.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
	io.grpc.netty.shaded.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176)
	io.grpc.netty.shaded.io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:137)
	io.grpc.netty.shaded.io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
	io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147)
	io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
	io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:591)
	io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:508)
	io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
	io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)

This was recently observed by @kotharironak as well, so it needs some investigation.

Update default branch name

Since we historically decided to rename the default branches, this issue can track that work so that anything hanging off it can be addressed, such as the corresponding Docker tags representing the last commit.

Simplify the config bootstrapping process by removing additional job for config-bootstrapper

Hypertrace needs two kinds of configuration to be present for the entities and query layer to work.

  • entity types configuration --> owned by Entity Service
  • attribute metadata --> owned by Attribute Service

Currently, config-bootstrapper uses the clients of attribute-service and entity-service to initialize the configuration. In a single-node setup, this introduces dependencies between the bootstrapper and the services, thereby either delaying startup or making it fragile.

Here are some ways in which we can fix it:

1. Let the clients of ES and AS retry with a timeout so that we can reduce the fragility. This is a simple approach; it just needs gRPC client retry tuning.
2. Fold config-bootstrapper into hypertrace-federated-service with git submodules. Though this is a quick job, the order in which services are started in federated-service should take care of the fragility. This also avoids an extra pod at startup.
3. Convert config-bootstrapper into a library and fold it into the corresponding services. Attribute initialization would be done in attribute-service after the service comes up, and similarly for entity types.
4. ??

IMO, 1 should take care of the fragility and is simple to fix.
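
For option 1, a minimal sketch of what the gRPC client retry tuning could look like; the host, port, and proto service name below are placeholders, not the actual Hypertrace values.

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.List;
import java.util.Map;

public class BootstrapChannelSketch {
  static ManagedChannel buildRetryingChannel() {
    // Retry policy values follow the gRPC service config format (numbers as doubles, durations as strings).
    Map<String, Object> retryPolicy =
        Map.of(
            "maxAttempts", 5.0,
            "initialBackoff", "1s",
            "maxBackoff", "30s",
            "backoffMultiplier", 2.0,
            "retryableStatusCodes", List.of("UNAVAILABLE"));
    Map<String, Object> methodConfig =
        Map.of(
            "name", List.of(Map.of("service", "placeholder.AttributeService")), // placeholder service name
            "retryPolicy", retryPolicy);
    return ManagedChannelBuilder.forAddress("attribute-service", 9012) // placeholder host/port
        .usePlaintext()
        .defaultServiceConfig(Map.of("methodConfig", List.of(methodConfig)))
        .enableRetry()
        .build();
  }
}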

cc @kotharironak @adriancole

Upgrade Mongo java driver in document-store

Proposal

Currently we are using the 3.12 Mongo Java driver in document-store for the MongoCollection implementation, but we have already upgraded our MongoDB server to 4.4.0.
We are running into some issues with bulk-upserting documents via the entity-service APIs. Since there have been a lot of changes in that area in recent Mongo Java driver releases, we should upgrade the driver to a recent version and see if the bulk upsert issues are resolved.
cc @avinashkolluru

Avg latency coming as negative value at times

Screen: Individual service screen (services/service/<service_id>/endpoints)

When the duration of one or more requests is very high, the average latency on that service and endpoint shows up as a negative value. The same trace in Jaeger shows a positive value.

This seems to be caused by integer overflow.
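
A minimal sketch of how 32-bit accumulation can produce a negative average; this is illustrative only, not the actual Hypertrace aggregation code.

public class AvgLatencyOverflow {
  public static void main(String[] args) {
    int[] durationsMillis = {120, 95, Integer.MAX_VALUE - 50}; // one pathologically long request
    int intSum = 0;
    long longSum = 0;
    for (int d : durationsMillis) {
      intSum += d;   // 32-bit sum wraps around once it exceeds Integer.MAX_VALUE
      longSum += d;  // widening the accumulator to long keeps the sum correct
    }
    System.out.println("avg (int accumulator):  " + intSum / durationsMillis.length);   // negative
    System.out.println("avg (long accumulator): " + longSum / durationsMillis.length);  // positive
  }
}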


URLs to access few pages in UI are broken

In the docker-compose setup we serve the UI from hypertrace-service, which uses a Jetty server. Some of the URLs in this deployment are broken. The list of broken URLs is as follows:

1. Services

  • Go to Home -> Services -> Select any service -> Refresh the service home page


2. Traces

  • Go to Home -> Services -> Select any service -> Go to traces tab -> Refresh the page


3. Trace View

  • Go to Home -> Services -> Select any service -> Go to traces tab-> Select any trace to see trace view -> Refresh the page


4. Backends

  • Go to Home -> Backends -> Select any backend -> Refresh the backend home page

5. Filter by criteria in both the Backends and Services Traces tabs

6. Endpoints

  • Click any endpoint from home page and refresh endpoint dashboard.


Move all repos to GitHub action based CI

Use Case

We were facing issues with CircleCI similar to #132, and you also need an authorized user's commit to get a workflow running in the first place. We did an analysis of GitHub Actions (GHA) vs. CircleCI, and GHA seemed to be the better overall option for the Hypertrace CI use case.

We have already done a POC on the attribute-service repo (https://github.com/hypertrace/attribute-service), and we should move the rest of the repos to GitHub Actions as well.

Proposal

We have created template workflows which you can import into each repo and modify according to its requirements (similar to this: https://github.com/hypertrace/attribute-service/actions/new).

This issue will track the migration to GitHub Actions.

Fix ENV variable issue for snyk on forked branch PR by non-member

Use Case

When a PR is raised from a forked branch by a non-member, CircleCI can't find the ENV variables and fails with an error like the one in the attached screenshot.

According to Aaron,
The repo has to be set up to pass secrets to forked PRs, but there are security implications to that so we deferred it.

Proposal

This is a big thing when it comes to contributions from non-members, and we should address it ASAP. I found a FAQ around this in CircleCI support, which is linked below.

This note explains things better, but ideally this is not expected behavior from a CI (thoughts @jcchavezs?):
https://circleci.com/docs/2.0/oss/#build-pull-requests-from-forked-repositories

We will need to investigate more and explore options if CircleCI is not able to do this.

Error while removing network

When I run the following command, I get an error while removing the network. The rest of the steps are successful.

OS: Ubuntu 16.04

docker-compose -f docker-compose.yml down


Document how to test components

Right now, people may know that they can test Docker either by using Docker directly or by running ./gradlew dockerBuildImages and then changing the docker-compose files here to the :test tag.

This should be documented, along with how to test Helm: e.g. what commands to run in the repo or monorepo, and what to do here to test it.

Switch to docker-compose v3 (currently on v2.4)

Currently we are using docker-compose file format v2.4 to access some features that were removed in v3. Things like conditional dependencies (in v2.4 you can add a condition for each service your service depends on) are especially helpful considering the considerably large stack we are dealing with here.

Once the reliability and stability issues are resolved, let's plan the switch to v3.

Exclude Spans from messaging system in endpoint traces

Screen: Service endpoints screen ( /services/service/<service-id>/endpoints)

Endpoint traces should show spans that touch the entry/exit boundaries of a service. Spans from messaging systems do not fit here naturally, so exclude them from endpoint traces.
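
A minimal sketch of the exclusion, assuming messaging spans can be recognized by the OpenTelemetry "messaging.system" tag or the older OpenTracing "message_bus.destination" tag; the actual attribute Hypertrace should key on may differ.

import java.util.Map;

public class EndpointSpanFilterSketch {
  static boolean isMessagingSpan(Map<String, String> spanTags) {
    return spanTags.containsKey("messaging.system")          // OpenTelemetry convention (assumption)
        || spanTags.containsKey("message_bus.destination");  // older OpenTracing-style tag (assumption)
  }

  /** Only non-messaging spans should count toward endpoint entry/exit traces. */
  static boolean eligibleForEndpointTraces(Map<String, String> spanTags) {
    return !isMessagingSpan(spanTags);
  }
}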


ManagedChannelImpl is not getting shutdown in bootstrap task

During startup of Hypertrace, we see the below stack trace logged:

| 2020-10-01 23:57:56.399 [GraphQLServlet-6] INFO  o.h.c.g.u.g.DefaultGrpcChannelRegistry - Creating new channel for localhost:9001
hypertrace              | Oct 01, 2020 11:57:56 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
hypertrace              | SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=14, target=localhost:9001} was not shutdown properly!!! ~*~*~*
hypertrace              |     Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
hypertrace              | java.lang.RuntimeException: ManagedChannel allocation site
hypertrace              | 	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
hypertrace              | 	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
hypertrace              | 	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
hypertrace              | 	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:518)
hypertrace              | 	at org.hypertrace.core.attribute.service.client.AttributeServiceClient.<init>(AttributeServiceClient.java:23)
hypertrace              | 	at org.hypertrace.core.attribute.service.client.AttributeServiceClient.<init>(AttributeServiceClient.java:27)
hypertrace              | 	at org.hypertrace.core.bootstrapper.BootstrapContext.<init>(BootstrapContext.java:17)
hypertrace              | 	at org.hypertrace.core.bootstrapper.BootstrapContext.buildFrom(BootstrapContext.java:23)
hypertrace              | 	at org.hypertrace.core.bootstrapper.BootstrapRunner.execute(BootstrapRunner.java:56)
hypertrace              | 	at org.hypertrace.service.BootstrapTimerTask.run(BootstrapTimerTask.java:76)
hypertrace              | 	at java.base/java.util.TimerThread.mainLoop(Unknown Source)
hypertrace              | 	at java.base/java.util.TimerThread.run(Unknown Source)
hypertrace              |
hypertrace              | Oct 01, 2020 11:57:56 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
hypertrace              | SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=16, target=localhost:9001} was not shutdown properly!!! ~*~*~*
hypertrace              |     Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
hypertrace              | java.lang.RuntimeException: ManagedChannel allocation site
hypertrace              | 	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
hypertrace              | 	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
hypertrace              | 	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
hypertrace              | 	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:518)
hypertrace              | 	at org.hypertrace.entity.type.service.client.EntityTypeServiceClient.<init>(EntityTypeServiceClient.java:30)
hypertrace              | 	at org.hypertrace.core.bootstrapper.BootstrapContext.<init>(BootstrapContext.java:18)
hypertrace              | 	at org.hypertrace.core.bootstrapper.BootstrapContext.buildFrom(BootstrapContext.java:23)
hypertrace              | 	at org.hypertrace.core.bootstrapper.BootstrapRunner.execute(BootstrapRunner.java:56)
hypertrace              | 	at org.hypertrace.service.BootstrapTimerTask.run(BootstrapTimerTask.java:76)
hypertrace              | 	at java.base/java.util.TimerThread.mainLoop(Unknown Source)
hypertrace              | 	at java.base/java.util.TimerThread.run(Unknown Source)

However, this does not affect any functionality. Can we either suppress this message or shut the channel down cleanly?
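
A minimal sketch of shutting a channel down cleanly once the bootstrap task finishes; this is illustrative, and the actual fix would live wherever the bootstrap task creates these clients.

import io.grpc.ManagedChannel;
import java.util.concurrent.TimeUnit;

public class ChannelCloser {
  static void closeQuietly(ManagedChannel channel) {
    channel.shutdown();
    try {
      if (!channel.awaitTermination(10, TimeUnit.SECONDS)) {
        channel.shutdownNow(); // force-close if graceful shutdown times out
      }
    } catch (InterruptedException e) {
      channel.shutdownNow();
      Thread.currentThread().interrupt();
    }
  }
}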

Release process for Hypertrace

Logging this issue to come up with a release process for Hypertrace, covering the docker-compose and Helm charts, and to make the corresponding changes so that users can use a pinned version instead of always forking the repo and using the latest from the main branch.
Once we have the release process and artifacts, we also need to update the documentation to fetch the latest version and install it rather than cloning the repo.

cc @JBAhire @kotharironak @jcchavezs

Update docker-plugins to apply new release policy

Hypertrace helm install fails forever if it failed once before completion

Use Case

Better deployment experience

Steps to reproduce

  • Installed Hypertrace with the single command hypertrace.sh install. However, it failed before completion with an error saying the k8s services aren't reachable. Unfortunately, the console output overflowed and I couldn't capture the error.
  • helm list only shows hypertrace-data-services, no platform services.
  • The next run of hypertrace.sh install and subsequent runs fail with the error below.
[INFO] installing hypertrace platform services. namespace: hypertrace, context: docker-desktop
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ht" chart repository
...Successfully got an update from the "hypertrace" chart repository
...Successfully got an update from the "traceable" chart repository
...Successfully got an update from the "helm" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 12 charts
Downloading hypertrace-oc-collector from repo https://traceableai.jfrog.io/traceableai/helm
Downloading jaeger-to-raw-spans-converter from repo https://traceableai.jfrog.io/traceableai/helm
Downloading raw-spans-grouper from repo https://storage.googleapis.com/hypertrace-helm-charts
Downloading hypertrace-trace-enricher from repo https://traceableai.jfrog.io/traceableai/helm
Downloading hypertrace-view-generator from repo https://traceableai.jfrog.io/traceableai/helm
Downloading hypertrace-ui from repo https://traceableai.jfrog.io/traceableai/helm
Downloading hypertrace-graphql-service from repo https://traceableai.jfrog.io/traceableai/helm
Downloading attribute-service from repo https://storage.googleapis.com/hypertrace-helm-charts
Downloading gateway-service from repo https://traceableai.jfrog.io/traceableai/helm
Downloading query-service from repo https://storage.googleapis.com/hypertrace-helm-charts
Downloading entity-service from repo https://traceableai.jfrog.io/traceableai/helm
Downloading kafka-topic-creator from repo https://storage.googleapis.com/hypertrace-helm-charts
Deleting outdated charts
history.go:52: [debug] getting history for release hypertrace-platform-services
upgrade.go:121: [debug] preparing upgrade for hypertrace-platform-services
Error: UPGRADE FAILED: "hypertrace-platform-services" has no deployed releases
helm.go:84: [debug] "hypertrace-platform-services" has no deployed releases
UPGRADE FAILED
main.newUpgradeCmd.func1
	/private/tmp/helm-20200508-23207-1ycgb97/src/helm.sh/helm/cmd/helm/upgrade.go:146
github.com/spf13/cobra.(*Command).execute
	/private/tmp/helm-20200508-23207-1ycgb97/pkg/mod/github.com/spf13/[email protected]/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
	/private/tmp/helm-20200508-23207-1ycgb97/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
	/private/tmp/helm-20200508-23207-1ycgb97/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
	/private/tmp/helm-20200508-23207-1ycgb97/src/helm.sh/helm/cmd/helm/helm.go:83
runtime.main
	/usr/local/Cellar/[email protected]/1.13.10_1/libexec/src/runtime/proc.go:203
runtime.goexit
	/usr/local/Cellar/[email protected]/1.13.10_1/libexec/src/runtime/asm_amd64.s:1357
./hypertrace.sh: Error at line#: 48, command: helm, error code: 1

hypertrace-helm on  feature/add_ht_attributes [$!?] at ☸️  docker-desktop (hypertrace) took 16s
➜ helm list
NAME                    	NAMESPACE 	REVISION	UPDATED                             	STATUS  	CHART                         	APP VERSION
hypertrace-data-services	hypertrace	5       	2020-07-04 18:00:57.597235 -0700 PDT	deployed	hypertrace-data-services-0.1.0	0.1.0

This basically happens if your install failed at some point, whether due to a timeout, k8s unavailability, or any other reason.

Proposal

  • Can we check whether there are previously deployed releases before each install?
  • If there are previously deployed releases, we can just uninstall them and then install the new one.

Fix the /health implementation for all Hypertrace services

Regardless of what we do at the Docker layer, a service health check should operate in such a way as to "quick test" your direct dependencies, or at least your most important one. Without this, almost all orchestration and triage steps are thwarted.

For example, right now the platform code has a hook, but some services (at least query-service) return a constant true, like so:

@Override
public boolean healthCheck() {
  return true;
}

These false positives, compounded by the layered architecture, can bleed time, so we need to fix this for every service.

Here’s an example of a nice health check in Zipkin (which took a long time to polish incidentally). Notice you can tell which core dependencies are up without looking for exceptions in logs.

$ curl -s localhost:9411/health
{
  "status" : "UP",
  "zipkin" : {
    "status" : "UP",
    "details" : {
      "ElasticsearchStorage{initialEndpoints=elasticsearch:9200, index=zipkin}" : {"status" : "UP"},
      "KafkaCollector{bootstrapServers=kafka-zookeeper:9092, topic=zipkin}" : {"status" : "UP"}
    }
  }
}

Making good use of this is related to other work. For example, not just readiness probes but also the Docker HEALTHCHECK becomes more effective. If the HEALTHCHECK can actually fail, docker ps becomes useful post-start, and you can also start services independently, caring only about your entrypoint (e.g. hypertrace-service) rather than knowing the whole dependency tree.
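
A minimal sketch of a dependency-aware health check; the class and wiring are illustrative, not the actual platform hook. Instead of returning a constant true, it reports whether the gRPC channel to a core dependency is usable.

import io.grpc.ConnectivityState;
import io.grpc.ManagedChannel;

public class DependencyAwareHealthCheck {
  private final ManagedChannel downstreamChannel;

  public DependencyAwareHealthCheck(ManagedChannel downstreamChannel) {
    this.downstreamChannel = downstreamChannel;
  }

  public boolean healthCheck() {
    // READY or IDLE means the channel is usable; anything else indicates the
    // dependency is unreachable and the service should report itself as down.
    ConnectivityState state = downstreamChannel.getState(false);
    return state == ConnectivityState.READY || state == ConnectivityState.IDLE;
  }
}

A richer version could aggregate the status of several dependencies into a response similar to the Zipkin example above.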

Add Support for Postgres in Document Store

Use Case

Add an alternative datastore, i.e. Postgres, to the document store; currently it is limited to MongoDB only.

Proposal

The entity document will be stored as JSON in the Postgres data store. Following the conventions from MongoDB,
the Postgres schema will look as follows:

CREATE TABLE documents (
	id TEXT PRIMARY KEY,
	document JSONB NOT NULL,
	created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
	updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

The document-store will connect to Postgres via the JDBC driver.

I'm already working on a PR to add this. Here is the initial draft of the PR.
All the functionality provided by MongoDB will be implemented on Postgres as well.
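
As a rough illustration of the JDBC path against the schema above; the connection details and document contents are placeholders, not the actual PR implementation.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PostgresDocStoreSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder connection settings; the real values would come from the document-store config.
    try (Connection conn =
            DriverManager.getConnection("jdbc:postgresql://localhost:5432/docstore", "postgres", "postgres");
        PreparedStatement upsert =
            conn.prepareStatement(
                "INSERT INTO documents (id, document) VALUES (?, ?::jsonb) "
                    + "ON CONFLICT (id) DO UPDATE SET document = EXCLUDED.document, updated_at = NOW()")) {
      upsert.setString(1, "entity-123");
      upsert.setString(2, "{\"name\": \"checkout-service\", \"type\": \"SERVICE\"}");
      upsert.executeUpdate(); // upsert mirrors the Mongo-style save semantics
    }
  }
}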

Support for config-driven attributes and entities

Use Case

The Hypertrace platform currently takes in spans, includes code (enrichers) that reads known span tags to create attributes and entities, and then persists the data in an OLAP or document store, depending on what it is. For entities, the entity config must match the code-level implementation in terms of the identifying attributes referenced. For attributes, there is custom logic to persist each attribute in the view generation code, as well as a mapping to convert the persisted data back to its logical attribute name. The problem with this model is that supporting new attributes or entities requires coordinated code-level changes in multiple locations, and it enforces a uniform data schema across all Hypertrace users.

Proposal

We already persist both an entity and an attribute schema in the document store, although currently the entity schema is barely used and the attribute schema is only used on the query side. This issue is to expand the data in each schema so that the actual generation of entities and attributes becomes dynamic, based on the current value of the schema. This would be done by adding support for projecting an attribute from other attributes, and by including the raw span source for attributes that are not projections. This would have multiple benefits:

  • A single source of truth (rather than data being defined in raw tags, utility functions, one or more enrichers, view generators, or query side constructs)
  • Simplifying the addition of new attributes and entities to the system
  • The ability to adjust the schema per hypertrace installation, or even amongst tenants in an installation
  • The future capability of adding new attributes or entities on the fly (via api) without a deployment
  • Smaller data transfer and persistence (because projections can be done just in time)

Existing entities and attributes throughout the system can be migrated in phases, as the old and new models can coexist.

Questions to address (if any)

  • Migrating existing entities without changing IDs (ID generation has changed to be a config driven projection)

Easy way to test locally built docker images with helm setup

Contributors, or anyone who likes to tinker, might build local images with their changes, and they should be able to test those images with the Helm setup using the test tag.

For example,
As @ravisingal pointed out, we support tagOverride for Pinot. So, if I want to test a Pinot image I built locally (which will be tagged something like hypertrace/pinot:test), I just need to add two lines in values.yaml like below:

pinot:
  image:
    repository: "hypertrace/pinot"
    tagOverride: "test"
    ....

Can we have a similar way to test images using tagOverride for all the services?

Setup docker CI for helm setup in Hypertrace repo.

We now have a working CI setup for the docker-compose setup. Can we set up CI for the Helm setup as well?

Why do we need it?

A lot of changes have happened recently, and some of them can break the Helm setup once we update the charts there. Having CI will help us detect those issues before merging PRs related to the Kubernetes/Helm setup.

Move CircleCI to use Docker 19 by default

By default, CircleCI uses an old version of Docker, which resulted in @ravisingal replacing the syntax used to set up a non-root user with a workaround like the one below, as opposed to using an ARG for USER:

# use hard-coded username as a workaround to CircleCI build failure.
# see https://github.com/moby/moby/issues/35018 for more details.
COPY --from=kafka --chown=kafka /opt/kafka /opt/kafka

This is not the only problem people will run into that needs an update, as the default version of Docker in CircleCI is literally from 2017.

This issue will track updating the default CircleCI config to a more recent remote_docker, freeing us from having to individually learn very old bugs. This should apply to all repos for the same reason: there's too much work to do already, and traps like this can eat hours. This can happen on any repo, so we should update all of them.

Call to /quitquitquit failing in hypertrace job pods because istio sidecar is absent

Use Case

In Hypertrace, which has no istio, this error is logged after the job is done:

16:57:57.230 [main] INFO  org.apache.pinot.common.utils.FileUploadDownloadClient - Sending request: http://pinot-controller:9000/schemas to controller: pinot-servicemanager-0.pinot-servicemanager.hypertrace.svc.cluster.local, version: Unknown
16:57:57.434 [main] INFO  org.hypertrace.core.viewcreator.pinot.PinotUtils - Trying to send table creation request {"tableName":"backendEntityView_REALTIME","tableType":"REALTIME","segmentsConfig":{"schemaName":"backendEntityView","timeColumnName":"start_time_millis","timeType":"MILLISECONDS","retentionTimeValue":"5","retentionTimeUnit":"DAYS","segmentAssignmentStrategy":"BalanceNumSegmentAssignmentStrategy","segmentPushFrequency":null,"segmentPushType":"APPEND","replication":"1","replicasPerPartition":"1","replicaGroupStrategyConfig":null,"completionConfig":null},"tenants":{"broker":"defaultBroker","server":"defaultServer","tagOverrideConfig":null},"tableIndexConfig":{"columnMinMaxValueGeneratorMode":null,"noDictionaryColumns":null,"noDictionaryConfig":null,"onHeapDictionaryColumns":null,"starTreeIndexConfigs":null,"aggregateMetrics":false,"segmentPartitionConfig":null,"varLengthDictionaryColumns":null,"autoGeneratedInvertedIndex":false,"createInvertedIndexDuringSegmentGeneration":false,"sortedColumn":[],"bloomFilterColumns":null,"segmentFormatVersion":null,"streamConfigs":{"stream.kafka.decoder.class.name":"org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder","streamType":"kafka","stream.kafka.decoder.prop.schema.registry.rest.url":"http://schema-registry-service:8081","stream.kafka.hlc.zk.connect.string":"zookeeper:2181","realtime.segment.flush.threshold.size":"500000","stream.kafka.consumer.type":"LowLevel","stream.kafka.zk.broker.url":"zookeeper:2181","stream.kafka.broker.list":"bootstrap:9092","realtime.segment.flush.threshold.time":"3600000","stream.kafka.consumer.factory.class.name":"org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory","stream.kafka.consumer.prop.auto.offset.reset":"largest","stream.kafka.topic.name":"backend-entity-view-events"},"invertedIndexColumns":[],"loadMode":"MMAP","nullHandlingEnabled":false},"metadata":{"customConfigs":null}} to http://pinot-controller:9000. 
16:57:57.831 [main] INFO  org.hypertrace.core.viewcreator.pinot.PinotUtils - {"status":"Table backendEntityView_REALTIME succesfully added"}
16:57:57.847 [Thread-0] ERROR org.hypertrace.core.viewcreator.ViewCreatorLauncher - Error while calling quitquitquit
org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:15020 [/127.0.0.1] failed: Connection refused (Connection refused)
	at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:159) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:394) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) ~[httpclient-4.5.6.jar:4.5.6]
	at org.hypertrace.core.viewcreator.ViewCreatorLauncher.finalizeLauncher(ViewCreatorLauncher.java:78) ~[view-creator-framework-0.2.0-prerelease.1145.jar:?]
	at org.hypertrace.core.viewcreator.ViewCreatorLauncher.lambda$updateRuntime$0(ViewCreatorLauncher.java:69) ~[view-creator-framework-0.2.0-prerelease.1145.jar:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:?]
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399) ~[?:?]
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242) ~[?:?]
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224) ~[?:?]
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403) ~[?:?]
	at java.net.Socket.connect(Socket.java:609) ~[?:?]
	at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) ~[httpclient-4.5.6.jar:4.5.6]
	at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ~[httpclient-4.5.6.jar:4.5.6]
	... 13 more

Proposal

This call doesn't make sense in Hypertrace, so it can be removed.
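
If outright removal is undesirable because some deployments do run the istio sidecar, a hedged alternative is to guard the call behind a flag; the flag name and wiring here are hypothetical.

public class SidecarShutdownSketch {
  static void maybeQuitIstioSidecar(Runnable quitQuitQuitCall) {
    // Hypothetical flag: only signal the sidecar when the deployment says one is present.
    boolean istioPresent =
        Boolean.parseBoolean(System.getenv().getOrDefault("ISTIO_SIDECAR", "false"));
    if (istioPresent) {
      quitQuitQuitCall.run(); // e.g. POST to http://127.0.0.1:15020/quitquitquit
    }
  }
}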

Changing a part of the build requires more pull requests than services

In order to do something across services, I had to raise 12 similar pull requests. Concretely, this was fixing the Java source level so that it wouldn't be implicitly taken from the environment.

This is not limited to the kind of change I mentioned, and it would be foolish to assume it is a one-off.

There are many other changes that affect all Java projects, not the least of which is updating the version of one of our several plugins:

ex.

plugins {
  id("org.hypertrace.repository-plugin") version "0.2.1"
  id("org.hypertrace.ci-utils-plugin") version "0.1.2"
  id("org.hypertrace.docker-java-application-plugin") version "0.4.0" apply false
  id("org.hypertrace.docker-publish-plugin") version "0.4.0" apply false
  id("org.hypertrace.jacoco-report-plugin") version "0.1.1" apply false
}

If you were to change one of these plugin versions, it would likely affect N service repos, plus however many are also present due to Hypertrace Core, plus potentially other library repositories. Each of these changes is something to remember, perform identically, and chase up.

An ideal case would be to
A. reduce the cardinality of repositories
B. introduce a common base layer to the build that results in a single version change
or possibly C. Automate build refactoring

In normal cases, there is either a base project plugin or not too many projects, which already have good defaults. So either there is little to copy/paste, or updating the central build project propagates the defaults. This is not to say inheriting a central build is without warts, just that at the repo count and plugin cardinality we have, the current situation is not ideal.

For example, the former code snippet was a partial list: our Gradle plugins are split across 8 different repositories. Not one of our plugins has unit tests, which makes plugin changes themselves brittle. Imagine doing a plugin update across 12 repos and then having to do it again due to a small bug.

I'm raising the issue here because, by the definition of the problem, there is no base project and so no better place to file it. However, if such a repo were created, this issue could transfer to it. Regardless, this problem really must be solved before it impacts too many others.

Resources to consider (not exhaustive as there are many "default project" works out there):
https://github.com/spring-cloud/spring-cloud-build
https://github.com/nebula-plugins/nebula-project-plugin
https://github.com/curioswitch/curiostack/blob/master/tools/gradle-plugins/gradle-curiostack-plugin/src/main/java/org/curioswitch/gradle/plugins/curiostack/CuriostackRootPlugin.java

Consider using Google's Docker Hub registry cache across the board

I understand we authenticate with paid accounts to avoid tripping over the recent Docker Hub pull quota.

It still seems like a good defense to use a mirror in case one of those accounts has a problem. A mirror doesn't guarantee you won't hit Docker Hub (as explicitly choosing an alternate registry would), but it does give a chance to cache things in a way that works with or without a paid account. https://cloud.google.com/container-registry/docs/pulling-cached-images

In setting this up, it would be a good idea to ensure images required to be fresh are not cached. One way is to not use Docker Hub for intermediate or dev images. Instead, use something like GitHub Container Registry.

Throwing this out there for consideration.

bug in table interaction

Use Case

I deployed Hypertrace with the latest charts on EKS and I am using the HotROD application to generate traces. I am observing a weird blue selection when clicking on traces that have blank metadata fields.

(Screenshots show the blue selection and the blank metadata fields.)

Provide a way to upload/download a trace to debug it

Currently it is really hard to debug a trace because we don't know what data is being represented in the UI. Issues like data discrepancies or missing fields (e.g. hypertrace/hypertrace-ui#283) could be debugged this way; otherwise they are hard to track down.

Ideally we should use a format that can be downloaded over HTTP, uploaded over HTTP, and consumed easily. The Zipkin format fits all of this, as we can already ingest Zipkin data. It also saves us the need to build (or use) tools like https://github.com/jcchavezs/jaeger2zipkin.

Some examples on how debugging will be better:

Ping @kotharironak @jake-bassett @JBAhire @adriancole @buchi-busireddy

Significant digits for time being lost in transformations (or storage)

Bug

There is a discrepancy between the data reported in the Jaeger format and the data Hypertrace shows after transformation. Look at this trace from Jaeger, which shows a duration of 343249 (in microseconds):

{
    "traceID": "65272b59f5737e8d6b6e382622ab5735",
    "spanID": "52dc390b398f4b0f",
    "operationName": "Sent.hipstershop.CheckoutService.PlaceOrder",
    "references": [
        {
            "refType": "CHILD_OF",
            "traceID": "65272b59f5737e8d6b6e382622ab5735",
            "spanID": "70c356eeecde9395"
        }
    ],
    "startTime": 1602566264242895,
    "duration": 343249,
    "tags": [
        {
            "key": "Client",
            "type": "bool",
            "value": true
        },
        {
            "key": "FailFast",
            "type": "bool",
            "value": true
        },
        {
            "key": "status.code",
            "type": "int64",
            "value": 0
        },
        {
            "key": "status.message",
            "type": "string",
            "value": ""
        },
        {
            "key": "internal.span.format",
            "type": "string",
            "value": "jaeger"
        }
    ],
    "logs": [],
    "processID": "p8",
    "warnings": null
},

whereas what Hypertrace returns from the GraphQL query is 344:

{
    "id": "52dc390b398f4b0f",
    "displaySpanName": "hipstershop.CheckoutService.PlaceOrder",
    "duration": 344,
    "endTime": 1602566264586,
    "parentSpanId": "70c356eeecde9395",
    "serviceName": "traceshop",
    "spanTags": {
        "jaeger.servicename": "frontend",
        "failfast": "true",
        "client": "true",
        "status.message": "",
        "status.code": "0",
        "span.status": ""
    },
    "startTime": 1602566264242,
    "type": "EXIT",
    "traceId": "65272b59f5737e8d6b6e382622ab5735",
    "__typename": "Span"
},

Notice the timestamps are also in different units: 1602566264242895 (Jaeger, microseconds) vs. 1602566264242 (Hypertrace, milliseconds).

Proposal

I believe the reason is that Hypertrace, at some point in the pipeline or in the GraphQL API, converts the data into milliseconds in a lossy way. Ideally we should keep the data in the units the origin reports, and also follow what most tracers do, which is microseconds (with exceptions such as Datadog).
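
The numbers above are consistent with converting the timestamps to milliseconds before computing the duration; here is a minimal sketch reproducing the discrepancy (illustrative only, not the actual pipeline code).

public class DurationPrecisionSketch {
  public static void main(String[] args) {
    long startMicros = 1602566264242895L;   // Jaeger startTime
    long durationMicros = 343249L;          // Jaeger duration
    long endMicros = startMicros + durationMicros;

    long startMillis = startMicros / 1000;  // 1602566264242 (what Hypertrace shows as startTime)
    long endMillis = endMicros / 1000;      // 1602566264586 (what Hypertrace shows as endTime)

    System.out.println(endMillis - startMillis);   // 344 -> the GraphQL duration
    System.out.println(durationMicros / 1000);     // 343 -> the original duration, truncated
  }
}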

Questions to address (if any)

  • Where does this transformation happen?

cc @aaron-steinfeld @kotharironak @adriancole @laxmanchekka @jake-bassett @pavolloffay

Setup Windows CI for Hypertrace docker-compose setup

Use Case

Windows CI for docker-compose will make sure that changes don't break the docker-compose setup on Windows.

Proposal

We can use GitHub Actions/CircleCI to build a Windows CI job which will trigger on every pull request or change, so we can verify a successful install on Windows.

Backend is showing up in services

Use Case

I deployed Hypertrace with the latest charts on EKS and I am using the HotROD application to generate traces. I am observing that redis (which is a backend) is showing up under Services, which wasn't happening earlier.

(Screenshots show redis listed under Services.)

Initially I used the charts from https://github.com/hypertrace/hypertrace, then I updated the charts in #127 and used those for deployment, and the issue still persists, so I assume it's not related to chart versions.

Proposal

We should find what's causing it and address this as soon as possible.

cc: @buchi-busireddy @jake-bassett @kotharironak

Document Store Restructuring

Proposal

Currently, the document-store codebase is structured around a single data store. The following things can be improved while restructuring the codebase:

  1. The default_db value is hardcoded in the code. The default DB should be overridable (reference discussion).
  2. The interfaces are weak in a few places:
    a. The _id field has assumptions in other services using document-store.
    b. createdTime and _lastUpdateTime are part of the document and structured specifically for MongoDB.
    c. Return types for upsert/bulkUpsert are not handled in entity-service.
    d. Error handling is weak in some places.
    e. The Mongo integration test has hardcoded BasicDB object instances.
  3. Separation of the implementation from the interface definition (reference discussion).

Use different operator for String Map attribute filters instead of CONTAINS_KEY and CONTAINS_KEY_VALUE

Use Case

The current operators are a bit confusing, and someone new will need some context to figure out how they work.

Proposal

As @jake-bassett proposed:

This could go a few different ways. Our parsing logic uses the operator as a signal for how we parse the remainder of the filter. Right now, = is reserved for the Boolean, Number, and String attribute types. When we see that operator, we know to use a ComparisonFilterParser, which means the LHS of the filter is always the attribute name and the RHS is a primitive value. Pretty simple to parse.

However, when we see CONTAINS_KEY or CONTAINS_KEY_VALUE, we know we have a StringMap attribute type, which signals us to use our ContainsFilterParser. This parser needs to inspect the LHS and potentially pull off the key for the StringMap. The RHS is more complicated as well, not being a primitive type when combined with the key from the LHS.

All this is to say that if we try to reuse = as the operator for StringMaps, we will need to refactor so that we don't just check the operator to know what we are parsing, but instead check both the operator and the attribute to know which parser to use. This is further complicated by the fact that we can't just know the attribute name when we look at the LHS for StringMaps; there may be a StringMap key on the LHS. So we would need to parse it first to know which parser to use. Solvable, but a little bit of a chicken-and-egg problem, so it could get ugly.

An alternative suggestion for the CONTAINS_KEY_VALUE operator: instead of using = as a replacement, use EQUALS so we keep a unique operator to check to know which parser to use. Additionally, we could use CONTAINS as the replacement for CONTAINS_KEY. These would be relatively simple changes to make.
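
A minimal sketch of the operator-to-parser dispatch described above; the class names follow the ones mentioned in the discussion, but the signatures are illustrative, not the actual GraphQL service code.

public class FilterParserDispatchSketch {
  interface FilterParser {}
  static class ComparisonFilterParser implements FilterParser {}
  static class ContainsFilterParser implements FilterParser {}

  static FilterParser parserFor(String operator) {
    switch (operator) {
      case "=":                    // Boolean, Number, and String attributes
        return new ComparisonFilterParser();
      case "CONTAINS_KEY":
      case "CONTAINS_KEY_VALUE":   // StringMap attributes keep their own operators
        return new ContainsFilterParser();
      default:
        throw new IllegalArgumentException("Unsupported operator: " + operator);
    }
  }
}

Keeping distinct operators (e.g. EQUALS/CONTAINS instead of overloading =) preserves this simple single-key dispatch.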

Optimise release process and versioning for service helm charts

Use Case

  • As part of the bigger initiative of having proper semantic-versioning-based releases cut manually, we need internal versions for the Helm charts on each merge (similar to 1.0.0-main.10bd522c), which will let us trigger Helm CD on each merge.
  • This will help in testing intermediate releases.

Proposal

We can explore something like this: https://insights.project-a.com/using-github-actions-to-deploy-to-kubernetes-122c653c0b09 to cut releases and drive CD from GitHub Actions.

make hypertrace-docker or rename existing data-services repo to hypertrace-docker-x

Proposal

Currently, we have 5 different repos for data services: pinot, schema-registry, mongo, kafka, and zookeeper.

  • Can we convert all 5 into a single repo?
  • Can we have common base JRE images for them (e.g. hypertrace-docker-jre)?
    • Java 8 for Pinot
    • Java 11+ for the rest

If we can't convert to a mono-repo, can we rename all the repos to hypertrace-docker-x, e.g. hypertrace-docker-pinot?
