quarkusio / quarkus-super-heroes

Quarkus sample application - Super Heroes

License: Apache License 2.0

Java 12.99% HTML 0.54% Shell 1.42% JavaScript 0.90% CSS 82.63% SCSS 0.70% Kotlin 0.82%
quarkus sample example best-practices nodejs docker grpc java kafka kubernetes

quarkus-super-heroes's Introduction

Quarkus Superheroes Sample


Introduction

This is a sample application demonstrating Quarkus features and best practices. The application allows superheroes to fight against supervillains. The application consists of several microservices, communicating either synchronously via REST or asynchronously using Kafka. All the data used by the applications are on the characterdata branch of this repository.

This is NOT a single multi-module project. Each service in the system is its own sub-directory of this parent directory. As such, each individual service needs to be run on its own.

The base JVM version for all the applications is Java 17.

Here is an architecture diagram of the application: Superheroes architecture diagram

The main UI allows you to pick one random Hero and Villain by clicking on New Fighters. Then, click Fight! to start the battle. The table at the bottom shows the list of previous fights.

You can then click the Narrate Fight button if you want to perform a narration using the Narration Service.

Caution

Using Azure OpenAI or OpenAI may incur costs, so please be aware of this! Unless configured otherwise, the Narration Service does NOT communicate with any external service. Instead, by default, it just returns a default narration. See the Integration with OpenAI Providers section for more details.

Fight screen

(video: Superheroes.AI.mp4)

Running Locally via Docker Compose

Pre-built images for all of the applications in the system can be found at quay.io/quarkus-super-heroes.

Pick one of the versions of the application from the table below and execute the appropriate docker compose command from the quarkus-super-heroes directory.

Note

You may see errors as the applications start up. This can happen if an application finishes starting before one of its required services (e.g. database, Kafka, etc.) is available. This is fine. Once everything has finished starting up, things will work correctly.

There is a watch-services.sh script that can be run in a separate terminal that will watch the startup of all the services and report when they are all up and ready to serve requests.

Run scripts/watch-services.sh -h for details about its usage.

  • JVM Java 17 (image tag: java17-latest)
    Run: docker compose -f deploy/docker-compose/java17.yml up --remove-orphans
    Run with monitoring: docker compose -f deploy/docker-compose/java17.yml -f deploy/docker-compose/monitoring.yml up --remove-orphans

  • Native (image tag: native-latest)
    Run: docker compose -f deploy/docker-compose/native.yml up --remove-orphans
    Run with monitoring: docker compose -f deploy/docker-compose/native.yml -f deploy/docker-compose/monitoring.yml up --remove-orphans

Tip

If your system does not have the compose sub-command, you can try the above commands with the docker-compose command instead of docker compose.
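For example, to start the Java 17 JVM variant with the standalone binary instead of the plugin (same file paths as in the table above):

```shell
# From the quarkus-super-heroes directory, using the standalone
# docker-compose binary instead of the "docker compose" plugin.
docker-compose -f deploy/docker-compose/java17.yml up --remove-orphans
```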

Once started, the main application will be exposed at http://localhost:8080. If you want to watch the Event Statistics UI, that will be available at http://localhost:8085. The Apicurio Registry will be available at http://localhost:8086.

If you launched the monitoring stack, Prometheus will be available at http://localhost:9090 and Jaeger will be available at http://localhost:16686.

Deploying to Kubernetes

Pre-built images for all of the applications in the system can be found at quay.io/quarkus-super-heroes.

Deployment descriptors for these images are provided in the deploy/k8s directory. There are versions for OpenShift, Minikube, Kubernetes, and Knative.

Note

The Knative variant can be used on any Knative installation that runs on top of Kubernetes or OpenShift. For OpenShift, you need OpenShift Serverless installed from the OpenShift operator catalog. Using Knative has the benefit that services are scaled down to zero replicas when they are not used.

The only real difference between the Minikube and Kubernetes descriptors is that all the application Services in the Minikube descriptors use type: NodePort so that a list of all the applications can be obtained simply by running minikube service list.

Note

If you'd like to deploy each application directly from source to Kubernetes, please follow the guide located within each application's folder (i.e. event-statistics, rest-fights, rest-heroes, rest-villains, rest-narration, grpc-locations).

Routing

Both the Minikube and Kubernetes descriptors also assume there is an Ingress Controller installed and configured. There is a single Ingress in the Minikube and Kubernetes descriptors denoting / and /api/fights paths. You may need to add/update the host field in the Ingress as well in order for things to work.
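A minimal sketch of what such an Ingress might look like (the host value here is a placeholder you would replace, and the service names and ports are assumptions based on the ports mentioned elsewhere in this document; check the actual descriptors in deploy/k8s):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: super-heroes
spec:
  rules:
    - host: heroes.example.com   # update to match your environment
      http:
        paths:
          - path: /              # main UI
            pathType: Prefix
            backend:
              service:
                name: ui-super-heroes
                port:
                  number: 8080
          - path: /api/fights    # rest-fights API
            pathType: Prefix
            backend:
              service:
                name: rest-fights
                port:
                  number: 8082
```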

Both the ui-super-heroes and the rest-fights applications need to be exposed from outside the cluster. On Minikube and Kubernetes, the ui-super-heroes Angular application communicates back to the same host and port as where it was launched from under the /api/fights path. See the routing section in the UI project for more details.

On OpenShift, the URL containing the ui-super-heroes host name is replaced with rest-fights. This is because the OpenShift descriptors use Route objects for gaining external access to the application. In most cases, no manual updating of the OpenShift descriptors is needed before deploying the system. Everything should work as-is.

Additionally, there is also a Route for the event-statistics application. On Minikube or Kubernetes, you will need to expose the event-statistics application, either by using an Ingress or doing a kubectl port-forward. The event-statistics application runs on port 8085.
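For the port-forward option, a sketch of the command (assuming the Service is named event-statistics; verify the name with kubectl get svc):

```shell
# Forward local port 8085 to the event-statistics Service in the cluster,
# then open http://localhost:8085 in a browser.
kubectl port-forward svc/event-statistics 8085:8085
```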

Versions

Pick one of the versions of the system from the table below and deploy the appropriate descriptor from the deploy/k8s directory. Each descriptor contains all of the resources needed to deploy a particular version of the entire system.

Warning

These descriptors are NOT considered to be production-ready. They are basic enough to deploy and run the system with as little configuration as possible. The databases, Kafka broker, and schema registry deployed are not highly-available and do not use any Kubernetes operators for management or monitoring. They also only use ephemeral storage.

For production-ready Kafka brokers, please see the Strimzi documentation for how to properly deploy and configure production-ready Kafka brokers on Kubernetes. You can also try out a fully hosted and managed Kafka service!

For a production-ready Apicurio Schema Registry, please see the Apicurio Registry Operator documentation. You can also try out a fully hosted and managed Schema Registry service!

  • JVM Java 17 (image tag: java17-latest): java17-openshift.yml, java17-minikube.yml, java17-kubernetes.yml, java17-knative.yml
  • Native (image tag: native-latest): native-openshift.yml, native-minikube.yml, native-kubernetes.yml, native-knative.yml

Monitoring

There are also Kubernetes deployment descriptors for monitoring with OpenTelemetry, Prometheus, and Jaeger in the deploy/k8s directory (monitoring-openshift.yml, monitoring-minikube.yml, monitoring-kubernetes.yml). Each descriptor contains the resources necessary to monitor and gather metrics and traces from all of the applications in the system. Deploy the appropriate descriptor to your cluster if you want it.

The OpenShift descriptor will automatically create Routes for Prometheus and Jaeger. On Kubernetes/Minikube you may need to expose the Prometheus and Jaeger services in order to access them from outside your cluster, either by using an Ingress or by using kubectl port-forward. On Minikube, the Prometheus and Jaeger Services are also exposed as a NodePort.

Warning

These descriptors are NOT considered to be production-ready. They are basic enough to deploy Prometheus, Jaeger, and the OpenTelemetry Collector with as little configuration as possible. They are not highly-available and do not use any Kubernetes operators for management or monitoring. They also use only ephemeral storage.

For production-ready Prometheus instances, please see the Prometheus Operator documentation for how to properly deploy and configure production-ready instances.

For production-ready Jaeger instances, please see the Jaeger Operator documentation for how to properly deploy and configure production-ready instances.

For production-ready OpenTelemetry Collector instances, please see the OpenTelemetry Operator documentation for how to properly deploy and configure production-ready instances.

Jaeger

By now you've performed a few battles, so let's analyze the telemetry data. Open the Jaeger UI based on how you are running the system, either through Docker Compose or by deploying the monitoring stack to Kubernetes.

Jaeger Filters

Now, let's analyze the traces for when requesting new fighters. When clicking the New Fighters button in the Superheroes UI, the browser makes an HTTP request to the /api/fights/randomfighters endpoint within the rest-fights application. In the Jaeger UI, select rest-fights for the Service and /api/fights/randomfighters for the Operation, then click Find Traces. You should see all the traces corresponding to the request of getting new fighters.

Jaeger Filters

Then, select one trace. A trace consists of a series of spans. Each span is a time interval representing a unit of work. Spans can have a parent/child relationship and form a hierarchy. You can see that each trace contains 14 total spans: six spans in the rest-fights application, four spans in the rest-heroes application, and four spans in the rest-villains application. Each trace also provides the total round-trip time of the request into the /api/fights/randomfighters endpoint within the rest-fights application and the total time spent within each unit of work.

Jaeger Filters

quarkus-super-heroes's People

Contributors

agoncal, ambaumann, brunobat, cescoffier, dependabot[bot], edeandrea, evanshortiss, geoand, github-actions[bot], growi, holly-cummins, joshgav, ozangunalp, radcortez


quarkus-super-heroes's Issues

Add Infinispan to event-statistics

Currently the event-statistics service stores data in-memory. The service itself can handle upwards of 1,000,000 events/second, but we should add Infinispan so that data is persisted.

Figure out better solution for bitnami postgresql images

The immediate fix for #154 was to revert to a previous bitnami image. After some conversation on bitnami/containers#9143, we need to figure out a different way to pull in the Hero/Villain data so that we are not pinned to this PostgreSQL image.

Some possible solutions:

  1. Maybe use an initContainer on the db pods for pulling the data to a shared volume?
  2. Switch away from the bitnami images to something else?
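A hypothetical sketch of option 1 (every name here is made up for illustration; the actual data URL on the characterdata branch is not filled in):

```yaml
# Sketch: an initContainer downloads the import data to a volume that is
# shared with the database container's init-scripts directory.
spec:
  initContainers:
    - name: fetch-data
      image: curlimages/curl          # assumption: any small image with curl
      command: ["sh", "-c", "curl -fsSL -o /data/import.sql \"$DATA_URL\""]
      env:
        - name: DATA_URL
          value: "<url to the data on the characterdata branch>"
      volumeMounts:
        - name: db-init
          mountPath: /data
  containers:
    - name: postgres
      volumeMounts:
        - name: db-init
          mountPath: /docker-entrypoint-initdb.d
  volumes:
    - name: db-init
      emptyDir: {}
```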

Allowing "fair" fights

While requesting random fights is great, what if we introduced a "fairness" setting in the UI? Sometimes we get fights that don't seem fair, where one fighter is level 10 and the other is level 10,000,000.

We could have a dropdown/radio button/whatever that, when selected in the UI, gave a choice to the user. Something like

  • Weaklings
  • Average
  • Super strong
  • Super-duper strong

We can certainly come up with better names/categories, but whatever we come up with we'd assign ranges of fighter levels to each term. The weakling would be anything <= some number and the Super-duper strong would be anything >= some number.

We'd pass that on the getRandomFighters call to rest-fights, which would in turn pass to the rest-heroes & rest-villains. rest-heroes & rest-villains would guarantee that only a fighter between those levels would be returned.
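A hypothetical sketch of the category-to-range mapping described above (all category names mirror the list in this issue, but every threshold is made up for illustration; the real values would come out of this discussion):

```java
import java.util.Map;

public class FightFairness {

    // A closed range of fighter levels.
    record LevelRange(long min, long max) {
        boolean contains(long level) {
            return level >= min && level <= max;
        }
    }

    // Hypothetical thresholds: weaklings <= 100, super-duper strong >= 1,000,001.
    static final Map<String, LevelRange> CATEGORIES = Map.of(
        "weaklings", new LevelRange(0, 100),
        "average", new LevelRange(101, 10_000),
        "super-strong", new LevelRange(10_001, 1_000_000),
        "super-duper-strong", new LevelRange(1_000_001, Long.MAX_VALUE)
    );

    // rest-heroes/rest-villains would guarantee that only fighters whose
    // level falls inside the requested range are returned.
    static boolean isFair(String category, long heroLevel, long villainLevel) {
        LevelRange range = CATEGORIES.get(category);
        return range != null
            && range.contains(heroLevel)
            && range.contains(villainLevel);
    }

    public static void main(String[] args) {
        System.out.println(isFair("average", 500, 9_000));        // true
        System.out.println(isFair("weaklings", 10, 10_000_000L)); // false: level-10,000,000 villain
    }
}
```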

We could use this scenario to really showcase consumer-driven contract testing, making sure the contract between consumers & producers stipulates this requirement.

So for example, in the rest-fights consumer contract you'd have something in the contract that said

When I (the rest-fights consumer) request a random hero/villain between certain levels, the level attribute in the response from the rest-heroes/rest-villains service should be between the values passed in the request.

That's the kind of thing that can't be done using straight OpenAPI.

Azure super heroes is broken on new postgres change

The single-script version for super heroes is broken on https://github.com/quarkusio/quarkus-super-heroes/blob/main/docs/deploying-to-azure-containerapps.md. There was a change to the Postgres JDBC URL to include OpenTelemetry, and now the villains app can't connect to the database.

Exception:
error 11:12:08 ERROR traceId=, parentId=, spanId=, sampled= [or.hi.en.jd.sp.SqlExceptionHelper] (JPA Startup Thread: ) Driver does not support the provided URL: jdbc:postgresql://villains-db-codespace.postgres.database.azure.com:5432/villains?ssl=true&sslmode=require

Missing spaces in the super powers

The imports.sql had an import issue. The superpowers are missing some spaces.
For example: WillInvisibilityLongevityMaster should be "Will Invisibility Longevity Master".
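The real fix presumably belongs in the imports.sql data itself, but as an illustration, one way to repair such strings is to insert a space at every lower-to-upper camel-case boundary:

```java
public class SuperpowerFix {

    // Insert a space wherever a lowercase letter is immediately followed
    // by an uppercase letter, e.g. "WillInvisibility" -> "Will Invisibility".
    static String addSpaces(String s) {
        return s.replaceAll("(?<=[a-z])(?=[A-Z])", " ");
    }

    public static void main(String[] args) {
        System.out.println(addSpaces("WillInvisibilityLongevityMaster"));
        // Will Invisibility Longevity Master
    }
}
```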

Having a parent pom for IDEs

It would be nice to have a parent pom. It's not mandatory, but for some IDEs it's much easier to open the project and have all the modules set up.

Create multi-arch images

We need to create multi-arch images supporting both amd64 and arm64 for both JVM & native for the event-statistics, rest-fights, rest-heroes, & rest-villains services, as well as both an amd64 and arm64 image for the ui-super-heroes.

Currently we are only building amd64 images for JVM & native for the Quarkus services and an amd64 image for the ui-super-heroes service.

Introduce Pact contract tests

@holly-cummins & I had a talk at Devoxx Belgium about Pact contract testing:

We used the Quarkus superheroes app as the foundation for our demos. Currently all the code resides in my own personal fork. We need to get this into the main repo.

The code, as-is, will not break anything, but it also will not be executed during any CI/CD process, or dev mode/continuous testing.

There are a few caveats that need to be sorted out.

  1. The Pact tests use the free tier within Pactflow broker - https://quarkus-super-heroes.pactflow.io.
    • There is a CI/CD robot user that has credentials we can store as GitHub secrets, but the larger problem is that the account we currently use is tied to my (@edeandrea's) redhat.com email address. Ideally we should have a more "generic" user to own the account.
    • Under an account you can assign individual users with permissions. @holly-cummins and I are currently in there as admin users.
    • @cescoffier / @maxandersen how do you want to handle this? I can add you both as users in the current Pactflow and you can explore, or I can demo it for you so you get an understanding of it. Let me know your thoughts.
      • You need to authenticate with the broker even to perform read actions
  2. The Pact tests as they currently are written will not run by default - they have to have flags to enable them
    • This is because, in its current state, Pact does not work in dev mode/continuous testing. There are some issues on the Pact side and some work on the Quarkus side
    • This way we can safely merge in the code without breaking any of the existing CI/CD workflows and/or dev mode/continuous testing within any of the apps
    • @holly-cummins has already started work on that. In our Devoxx demo we showed it working in continuous testing on the provider side. That was done with a local patch to Pact and the beginnings of a Quarkus extension which @holly-cummins is building
  3. From a CI perspective we should hook in, at a minimum, the provider verification tests into the simple build/test workflow AND the nightly CI process that tests against Quarkus main
    • The simple build/test workflow runs against all PRs as well as the first workflow against any pushes to main.

@holly-cummins did I miss anything?

Fix UI k8s knative yamls

The k8s Knative yamls for the UI app don't have the CALCULATE_API_BASE_URL environment variable set properly.

Introduce AuthZ/AuthN

Currently there isn't any AuthZ/AuthN in any of the apps. It would be nice if there was some.

Add distributed tracing

Add distributed tracing within and between each of the services. This would also need to add Jaeger to the infra which gets deployed.

What about adding JMeter load test?

@edeandrea What about having a JMeter load test that gets new fighters and just fights? This way we could add some load (and Azure Load Testing is compatible with JMeter load scripts ;o)

WDYT ?

The generated Deployment YAML is not valid when set quarkus.openshift.deployment-kind=Deployment

Describe the bug
When building projects and changing the generated deployment resource by setting the quarkus.openshift.deployment-kind=Deployment property for the Maven build, the generated YAML for the Deployment resource doesn't contain .spec.template.metadata and its child elements.

To Reproduce
Steps to reproduce the behavior:

  1. Run the Maven build with the quarkus.openshift.deployment-kind=Deployment property set:
./mvnw clean package \
 -Dquarkus.profile=openshift-17 \
 -Dquarkus.kubernetes.deploy=true \
 -Dquarkus.openshift.deployment-kind=Deployment
  2. Open the target/kubernetes/openshift.yml file and look for the YAML of the Deployment resource.

  3. Note that .spec.template.metadata and its child elements did not get generated.

Expected behavior
.spec.template.metadata and its child elements e.g. labels, and annotations should be generated.
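For reference, a sketch of the kind of fragment that would be expected inside the Deployment (labels assumed to mirror those at the Deployment level; the actual generated values may differ):

```yaml
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rest-fights
        app.kubernetes.io/part-of: fights-service
        app.kubernetes.io/version: "1.0"
    spec:
      containers:
        # ... containers as in the sample below ...
```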

Desktop (please complete the following information):

  • OS: macOS Ventura
  • JDK: Java SE build 17.0.5+9-LTS-191
  • Maven: mvnw comes with the project

Additional context
Sample generated YAML of Deployment resource.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.openshift.io/vcs-url: <<unknown>>
    app.openshift.io/connects-to: "fights-db,fights-kafka,apicurio,rest-villains,rest-heroes,otel-collector"
    app.quarkus.io/commit-id: 58303663091232da528a9eecfcd74f0c592f558d
    app.quarkus.io/build-timestamp: 2022-11-28 - 14:42:55 +0000
    prometheus.io/scrape: "true"
    prometheus.io/path: /q/metrics
    prometheus.io/port: "8082"
    prometheus.io/scheme: http
  labels:
    app.kubernetes.io/name: rest-fights
    app.kubernetes.io/part-of: fights-service
    app.kubernetes.io/version: "1.0"
    app: rest-fights
    application: fights-service
    system: quarkus-super-heroes
    app.openshift.io/runtime: quarkus
  name: rest-fights
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: rest-fights
      app.kubernetes.io/part-of: fights-service
      app.kubernetes.io/version: "1.0"
  template:
    spec:
      containers:
        - env:
            - name: JAVA_APP_JAR
              value: /deployments/quarkus-run.jar
          envFrom:
            - secretRef:
                name: rest-fights-config-creds
            - configMapRef:
                name: rest-fights-config
          image: quay.io/quarkus-super-heroes/rest-fights:1.0
          imagePullPolicy: Always
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /q/health/live
              port: 8082
              scheme: HTTP
            initialDelaySeconds: 0
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 10
          name: rest-fights
          ports:
            - containerPort: 8082
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /q/health/ready
              port: 8082
              scheme: HTTP
            initialDelaySeconds: 0
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 10

Break kubernetes/minishift/knative/openshift yamls into common parts

Currently, to add custom "stuff" for the Quarkus kubernetes/openshift extensions, you have to duplicate those resource definitions in each of the files (src/main/kubernetes/kubernetes.yml, src/main/kubernetes/minishift.yml, src/main/kubernetes/knative.yml, & src/main/kubernetes/openshift.yml).

See the guide for more details on this.

Once dekorateio/dekorate#942 makes it into a Quarkus version (via quarkusio/quarkus#22290), we will want to update the super heroes (rest-heroes, rest-villains, rest-fights, & event-statistics) to take advantage of this feature.

Redis Cache Implementation

I have noticed there are no examples of Redis Cache integrated into Quarkus.
Can we request a best-practice sample within this project where we use the
Redisson and Jedis clients,
and perhaps even showcase how we would connect to AWS ElastiCache Redis?

Add infinispan to fights service

It would be nice if there was a local cache of hero and villain fighters available to the fights service. This cache could be used to demonstrate the resiliency patterns if the hero or villain service was not available. In that case, the fight service could first look into the cache for a random fighter before going to its fallback.

Add OpenTelemetry in Azure ContainerApps

Add SmallRye Stork service discovery

Add SmallRye Stork Service Discovery & load balancing to the rest-fights service.

NOTE: Only calls to the rest-heroes service will use it, since calls to the rest-villains service use the JAX-RS client API directly rather than the REST client (just to illustrate how that would be done).

The Kubernetes descriptors would configure stork to use the Kubernetes service discovery. We can discuss on here what the right load balancing strategy should be.
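A sketch of what that configuration might look like in application.properties (the property names follow SmallRye Stork's Quarkus integration, but the namespace value and load-balancing strategy are assumptions pending the discussion here):

```properties
# Discover the rest-heroes service via Kubernetes service discovery
quarkus.stork.rest-heroes.service-discovery.type=kubernetes
quarkus.stork.rest-heroes.service-discovery.k8s-namespace=default

# Load-balancing strategy still to be decided in this issue
quarkus.stork.rest-heroes.load-balancer.type=round-robin
```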

Run with podman-compose instead of docker compose

When I tried to run the project using podman-compose on the provided YAML files, the execution returns a KeyError. I attached the errors below.

I would like to run the project with podman-compose. I'm no expert in docker/podman, but I thought it should be possible to use the podman-compose tool.

(screenshots of the errors attached)

I'm a newbie but if someone is able to provide some directions I can try to address this issue.

Introduce ability to pick specific fighters for battles

In the super heroes UI, introduce the ability to pick a specific hero and/or villain for a battle, rather than selecting random ones.

Maybe a pop-up that hits the heroes & villains services directly? This would also be a good thing where in the future we could introduce an API management layer so that the UI only talks to that and not the individual services directly.

There would probably need to be some discussion on the UI and the flow. That discussion can happen here in this issue.

Additionally - the same way the UI handles the URL to the fight service would have to work here as well for the UI to talk to both the heroes & villains services.

Investigate removing prometheus from architecture

Now that the OpenTelemetry Collector is part of the architecture and collecting metrics, investigate whether it can process/handle metrics as well, potentially removing Prometheus from the architecture.

Or maybe we keep prometheus, but instead of scraping each individual app, it gets the metrics from the Otel collector?
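A sketch of that second option as an OpenTelemetry Collector configuration (the endpoint and pipeline names here are assumptions): the collector receives OTLP metrics from the apps and exposes a single Prometheus scrape endpoint.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  # Expose one scrape endpoint so Prometheus pulls metrics from the
  # collector instead of scraping each individual app.
  prometheus:
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```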

Provide only a single native image

Currently we build java11-native and java17-native images for native image. A native image is a native image; the Java version should not matter.

Therefore we should just produce a single native image for each application.

As part of this we will also want to switch back to using quay.io/quarkus/ubi-quarkus-mandrel-builder-image for building the native images. #177 switched to quay.io/quarkus/ubi-quarkus-graalvmce-builder-image for the short term.

That will essentially mean reverting b2693a9 and 302ff69

Introduce variables in the JMeter script

To be able to run the script easily in both local and cloud environments, we should introduce variables for the protocol (either HTTP or HTTPS), the URL of the Fight API, and the port number.

Use the TAG_SYSTEM variable when creating the container registry

When creating the Azure container registry, we should use the TAG_SYSTEM variable: --tags system="$TAG_SYSTEM" instead of --tags system=quarkus-super-heroes

az acr create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --name "$CONTAINER_REGISTRY_NAME" \
  --sku Standard \
  --tags system=quarkus-super-heroes
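With the variable, the command would become:

```shell
az acr create \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --name "$CONTAINER_REGISTRY_NAME" \
  --sku Standard \
  --tags system="$TAG_SYSTEM"
```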

Add additional win/loss details in event stats UI

In the event stats UI, in addition to just the bar going between heroes & villains, show the number of wins for each in parentheses beside each label, as well as the total number of fights.

Something like this:
(mockup screenshot: Screen Shot 2022-02-11 at 1 29 26 PM)

Add simple UI to heroes service

Add a simple read-only UI in the heroes service that shows all the heroes and is searchable by name. This will be done using Quarkus Qute. There would need to be tests as well.

This could be expanded on in the future, but for now, just read-only.

Essentially the same as #41 just for the heroes service.

Add simple UI to villains service

Add a simple read-only UI in the villains service that shows all the villains and is searchable by name. This will be done using Quarkus Qute. There would need to be tests as well.

This could be expanded on in the future, but for now, just read-only.

Essentially the same as #42 just for the villains service.

Database pods are failing with `curl: command not found`

I deployed a few weeks ago and everything spun up healthy.
Today, I tried to spin up in OpenShift and am seeing errors with the heroes & villains db pods with this:

postgresql 16:14:10.52
postgresql 16:14:10.52 Welcome to the Bitnami postgresql container
postgresql 16:14:10.52 Subscribe to project updates by watching https://github.com/bitnami/containers
postgresql 16:14:10.52 Submit issues and feature requests at https://github.com/bitnami/containers/issues
postgresql 16:14:10.52
postgresql 16:14:10.53 INFO ==> ** Starting PostgreSQL setup **
postgresql 16:14:10.54 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql 16:14:10.54 INFO ==> Loading custom pre-init scripts...
postgresql 16:14:10.55 INFO ==> Loading user's custom files from /docker-entrypoint-preinitdb.d ...
/docker-entrypoint-preinitdb.d/..2022_10_06_15_42_44.4190271301/get-data.sh: line 3: curl: command not found

I love this demo, I hope this can be fixed soon! I use this to test OpenTelemetry things. Thank you!
Here is how I deployed:

oc apply -f https://raw.githubusercontent.com/quarkusio/quarkus-super-heroes/main/deploy/k8s/java17-openshift.yml
oc apply -f https://raw.githubusercontent.com/quarkusio/quarkus-super-heroes/main/deploy/k8s/monitoring-openshift.yml
