
facerecognition-external-model's Introduction

Face Recognition External Model 👪

This service implements the same models that already exist in the Nextcloud Face Recognition application, but it lets you run them on an external machine, which can be faster and frees up valuable resources on the server where Nextcloud is installed.

Take this also as a reference implementation, since you can write any external model for the Nextcloud Face Recognition application using your favorite machine learning tool and the programming language you love. ❤️

Privacy

Take into account how the service works. You must send a copy of each of your images (or of your clients' images) from your Nextcloud instance to the server where you run this service. The image files are sent via POST and are deleted immediately after being analyzed. The shared API key is sent in the headers of each request. This is only as secure as the connection between the two communicating machines. If you run it outside your local network, you should at the very least put it behind an HTTPS proxy to protect your data.
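
For reference, this is a minimal sketch (not the exact client code of the Nextcloud app; the header and multipart field names below are assumptions) of what such a request looks like from a Python client:

import requests

API_KEY = open("api.key").read().strip()
URL = "http://192.168.1.123:8080/detect"  # address where this service listens

# One POST per image: the file goes in the multipart body, the shared key in a header.
with open("photo.jpg", "rb") as image:
    response = requests.post(
        URL,
        headers={"x-api-key": API_KEY},        # assumed header name
        files={"file": ("photo.jpg", image)},  # assumed field name
    )
print(response.json())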

So please think seriously about data security before running this service outside of your local network. 😉

Usage

API key

A shared API key is used to control access to the service. It can be any alphanumeric string, but it is recommended to generate it randomly:

[matias@services ~]$ openssl rand -base64 32 > api.key
[matias@services ~]$ cat api.key 
NZ9ciQuH0djnyyTcsDhNL7so6SVrR01znNnv0iXLrSk=

Docker

The fastest way to get this up and running, without manual installation and configuration, is the Docker image. You only have to define the API key and the exposed port:

# Expose the service on TCP port 8080 and provide the API key as a file. By default it uses model 4 for facial recognition.
[matias@services ~]$ docker run --rm -i -p 8080:5000 -v /path/to/api.key:/app/api.key --name facerecognition matiasdelellis/facerecognition-external-model:v0.2.0
# You can also pass the API key as an environment variable, but this is not recommended because it is exposed on the command line.
[matias@services ~]$ docker run --rm -i -p 8080:5000 -e API_KEY="NZ9ciQuH0djnyyTcsDhNL7so6SVrR01znNnv0iXLrSk=" --name facerecognition matiasdelellis/facerecognition-external-model:v0.2.0
# You can change the default model using the `FACE_MODEL` environment variable.
# If you do not set the API key, it defaults to "some-super-secret-api-key". Needless to say, leaving the default is not advisable.
[matias@services ~]$ docker run --rm -i -p 8080:5000 -e FACE_MODEL=3 --name facerecognition matiasdelellis/facerecognition-external-model:v0.2.0

Test

Check that the service is running using the /welcome endpoint.

[matias@services ~]$ curl localhost:8080/welcome
{"facerecognition-external-model":"welcome","model":3,"version":"0.2.0"}

You must obtain the IP address of the machine where the service is running...

[matias@services facerecognition-external-model]$ hostname -I
192.168.1.123

...and run the same test from the server that hosts your Nextcloud instance.

[matias@cloud ~]$ curl 192.168.1.123:8080/welcome
{"facerecognition-external-model":"welcome","model":3,"version":"0.2.0"}

Configure Nextcloud

If the service is accessible, you can now configure Nextcloud, indicating that there is an external model at this address and the API key used to communicate with it.

[matias@cloud nextcloud]$ php occ config:system:set facerecognition.external_model_url --value 192.168.1.123:8080
System config value facerecognition.external_model_url set to string 192.168.1.123:8080
[matias@cloud nextcloud]$ php occ config:system:set facerecognition.external_model_api_key --value NZ9ciQuH0djnyyTcsDhNL7so6SVrR01znNnv0iXLrSk=
System config value facerecognition.external_model_api_key set to string NZ9ciQuH0djnyyTcsDhNL7so6SVrR01znNnv0iXLrSk=

You can now set up the external model (which is model 5) in the same way you did until now.

[matias@cloud nextcloud]$ php occ face:setup -m 5
The files of model 5 (ExternalModel) are already installed
The model 5 (ExternalModel) was configured as default

... and that's all, my friends. You can now continue with the background job (occ face:background_job). 😃

facerecognition-external-model's People

Contributors

brccabral, escoand, guystreeter, matiasdelellis, szaimen


facerecognition-external-model's Issues

Illegal instruction (core dumped) on container start

The crash happens at this line:

import dlib

Host cpuinfo (4 cores):

processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 150
model name	: Intel(R) Celeron(R) J6412 @ 2.00GHz
stepping	: 1
microcode	: 0x17
cpu MHz		: 1872.062
cache size	: 4096 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 4
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 27
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms rdt_a rdseed smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip waitpkg gfni rdpid movdiri movdir64b md_clear flush_l1d arch_capabilities
vmx flags	: vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs ept_mode_based_exec tsc_scaling usr_wait_pause
bugs		: spectre_v1 spectre_v2 spec_store_bypass swapgs srbds mmio_stale_data
bogomips	: 3993.60
clflush size	: 64
cache_alignment	: 64
address sizes	: 39 bits physical, 48 bits virtual

How to reproduce:

docker run --rm -it matiasdelellis/facerecognition-external-model:v1 /bin/bash
flask  facerecognition-external-model.py --debug
# or
python
> import dlib

RuntimeError after Host & Container apt-upgrade

Hello,

After apt upgrade on the host and in the Nextcloud Docker container, the following error occurs:

Updating or rebuilding the facerecognition-external container did not eliminate the error either.

RuntimeError:

Error detected at line 111.
Error detected in file /tmp/pip-wheel-ho26inp5/dlib_3890393138614d1f83bcc5be40c3cb62/dlib/../dlib/python/numpy_image.h.
Error detected in function dlib::assert_is_image<rgb_pixel>(const pybind11::array&)::<lambda(char, size_t)>.

Failing expression was false.
unknown type

172.24.0.1 - - [30/Jun/2024 18:23:25] "POST /detect HTTP/1.1" 500 -
[2024-06-30 18:23:27,940] ERROR in app: Exception on /detect [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 1473, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 882, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 880, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/facerecognition-external-model.py", line 140, in decorated_function
return view_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/facerecognition-external-model.py", line 156, in detect_faces
img: numpy.ndarray = dlib.load_rgb_image(image_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++

Response from occ face:background_job: Faces found: 0. Image will be skipped because of the following error: External model response /detect with error

latest updates on the host:
base-files/stable 12.4+deb12u6 arm64 [upgradable from: 12.4+deb12u5]
bash/stable 5.2.15-2+b7 arm64 [upgradable from: 5.2.15-2+b2]
curl/stable 7.88.1-10+deb12u6 arm64 [upgradable from: 7.88.1-10+deb12u5]
libcjson1/stable 1.7.15-1+deb12u1 arm64 [upgradable from: 1.7.15-1]
libcurl3-gnutls/stable 7.88.1-10+deb12u6 arm64 [upgradable from: 7.88.1-10+deb12u5]
libcurl4/stable 7.88.1-10+deb12u6 arm64 [upgradable from: 7.88.1-10+deb12u5]
libfreetype6/stable 2.12.1+dfsg-5+deb12u3 arm64 [upgradable from: 2.12.1+dfsg-5]
libgdk-pixbuf-2.0-0/stable 2.42.10+dfsg-1+deb12u1 arm64 [upgradable from: 2.42.10+dfsg-1+b1]
libgdk-pixbuf2.0-bin/stable 2.42.10+dfsg-1+deb12u1 arm64 [upgradable from: 2.42.10+dfsg-1+b1]
libgdk-pixbuf2.0-common/stable 2.42.10+dfsg-1+deb12u1 all [upgradable from: 2.42.10+dfsg-1]
libglib2.0-0/stable 2.74.6-2+deb12u3 arm64 [upgradable from: 2.74.6-2+deb12u2]
libglib2.0-bin/stable 2.74.6-2+deb12u3 arm64 [upgradable from: 2.74.6-2+deb12u2]
libglib2.0-data/stable 2.74.6-2+deb12u3 all [upgradable from: 2.74.6-2+deb12u2]
libglib2.0-dev-bin/stable 2.74.6-2+deb12u3 arm64 [upgradable from: 2.74.6-2+deb12u2]
libglib2.0-dev/stable 2.74.6-2+deb12u3 arm64 [upgradable from: 2.74.6-2+deb12u2]
libgnutls30/stable 3.7.9-2+deb12u3 arm64 [upgradable from: 3.7.9-2+deb12u2]
libltdl7/stable 2.4.7-7~deb12u1 arm64 [upgradable from: 2.4.7-5]
libpq5/stable 15.7-0+deb12u1 arm64 [upgradable from: 15.6-0+deb12u1]
libpython3.11-minimal/stable 3.11.2-6+deb12u2 arm64 [upgradable from: 3.11.2-6]
libpython3.11-stdlib/stable 3.11.2-6+deb12u2 arm64 [upgradable from: 3.11.2-6]
libseccomp2/stable 2.5.4-1+deb12u1 arm64 [upgradable from: 2.5.4-1+b3]
libssl3/stable 3.0.13-1~deb12u1 arm64 [upgradable from: 3.0.11-1~deb12u2]
libsystemd0/stable 252.26-1~deb12u2 arm64 [upgradable from: 252.22-1~deb12u1]
libudev1/stable 252.26-1~deb12u2 arm64 [upgradable from: 252.22-1~deb12u1]
linux-libc-dev/stable 6.1.94-1 arm64 [upgradable from: 6.1.90-1]
openssl/stable 3.0.13-1~deb12u1 arm64 [upgradable from: 3.0.11-1~deb12u2]
python3.11-minimal/stable 3.11.2-6+deb12u2 arm64 [upgradable from: 3.11.2-6]
python3.11/stable 3.11.2-6+deb12u2 arm64 [upgradable from: 3.11.2-6]
++++++++

I am grateful for any advice

Ralf

Docker restart leads to errors

I was running occ face:background_job on the Nextcloud server, with face detection on a separate server. During this process I restarted the remote face detection server and expected the background_job detection to continue. However, this led to these errors:

XXX.XXX.XXX.XXX - - [09/Nov/2023 07:08:54] "POST /detect HTTP/1.1" 500 -
[2023-11-09 07:08:58,550] ERROR in app: Exception on /detect [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/facerecognition-external-model.py", line 60, in decorated_function
    return view_function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/facerecognition-external-model.py", line 148, in detect_faces
    faces = DETECT_FACES_FUNCTIONS[FACE_MODEL](img)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/facerecognition-external-model.py", line 116, in cnn_hog_detect
    cnn_faces = cnn_detect(img)
                ^^^^^^^^^^^^^^^
  File "/app/facerecognition-external-model.py", line 69, in cnn_detect
    dets: list = CNN_DETECTOR(img)
                 ^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable

I did not know how to resolve this, but after going into the Nextcloud admin panel, face recognition menu, the process seems to have started working properly again:

....
TypeError: 'NoneType' object is not callable
XXX.XXX.XXX.XXX - - [09/Nov/2023 08:14:13] "POST /detect HTTP/1.1" 500 -
XXX.XXX.XXX.XXX - - [09/Nov/2023 08:14:17] "GET /open HTTP/1.1" 200 -
XXX.XXX.XXX.XXX - - [09/Nov/2023 08:14:35] "POST /detect HTTP/1.1" 200 -
XXX.XXX.XXX.XXX - - [09/Nov/2023 08:14:58] "POST /detect HTTP/1.1" 200 -
...
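
A plausible reading of the log above is that the CNN detector is only initialized when the Nextcloud app calls GET /open, so after a restart the first /detect requests hit an uninitialized detector and fail with 'NoneType' object is not callable until /open is called again. A defensive variant of cnn_detect could load the model lazily. A minimal sketch, assuming dlib's standard CNN face detection model and a hypothetical model path; this is not the project's actual code:

import dlib

CNN_DETECTOR = None  # loaded on demand instead of only via /open
CNN_MODEL_PATH = "/app/models/mmod_human_face_detector.dat"  # hypothetical path

def cnn_detect(img):
    global CNN_DETECTOR
    if CNN_DETECTOR is None:
        # dlib's CNN face detector; loading it here avoids calling None after a restart
        CNN_DETECTOR = dlib.cnn_face_detection_model_v1(CNN_MODEL_PATH)
    return CNN_DETECTOR(img)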

Facerecognition community container - memory and image area settings reset to defaults every time nextcloud-aio restarts

Hi

I have the issue described in the title. Every time my nextcloud-aio instance is restarted, those settings reset to their defaults, and I see this in the nextcloud container log:

[28-Jun-2024 09:46:15] NOTICE: fpm is running, pid 453
[28-Jun-2024 09:46:15] NOTICE: ready to handle connections
sh: taskset: not found
facerecognition already installed
facerecognition already enabled
System config value facerecognition.external_model_url set to string nextcloud-aio-facerecognition:5000
System config value facerecognition.external_model_api_key set to string some-super-secret-api-key

The files of model 5 (ExternalModel) are already installed
The model 5 (ExternalModel) was configured as default
System memory: 7.6 GB (8174907392B)
Memory assigned to PHP: 4 GB (4294967296B)

Minimum value to assign to image processing.: 682.7 MB (715827882B)
Maximum value to assign to image processing.: 4 GB (4294967296B)

Maximum memory assigned for image processing: 1 GB (1073741824B)
Config value were not updated
System config value enabledFaceRecognitionMimetype => 0 set to string image/jpeg
System config value enabledFaceRecognitionMimetype => 1 set to string image/png
System config value enabledFaceRecognitionMimetype => 2 set to string image/heic
System config value enabledFaceRecognitionMimetype => 3 set to string image/tiff
System config value enabledFaceRecognitionMimetype => 4 set to string image/webp
Activating Collabora config...
✓ Reset callback url autodetect
Checking configuration
🛈 Configured WOPI URL: https://
🛈 Configured public WOPI URL: https://
🛈 Configured callback URL: 

1/8 - Executing task CheckRequirementsTask (Check all requirements)
✓ Fetched /hosting/discovery endpoint
✓ Valid mimetype response
✓ Valid capabilities entry
✓ Fetched /hosting/capabilities endpoint
✓ Detected WOPI server: Collabora Online Development Edition 24.04.4.1

Collabora URL (used for Nextcloud to contact the Collabora server):
  https://nadysku.eu
Collabora public URL (used in the browser to open Collabora):
  https://nadysku.eu
Callback URL (used by Collabora to connect back to Nextcloud):
  autodetected (will use the same URL as your user for browsing Nextcloud)
2/8 - Executing task CheckCronTask (Check that service is started from either cron or from command)
3/8 - Executing task DisabledUserRemovalTask (Purge all the information of a user when disable the analysis.)
4/8 - Executing task StaleImagesRemovalTask (Crawl for stale images (either missing in filesystem or under .nomedia) and remove them from DB)
5/8 - Executing task AddMissingImagesTask (Crawl for missing images for each user and insert them in DB)

I modify those settings with:

docker exec -it --user www-data nextcloud-aio-nextcloud php --define opcache.enable_cli=1 occ face:setup -M 3G
docker exec -it --user www-data nextcloud-aio-nextcloud php --define opcache.enable_cli=1 occ config:app:set facerecognition analysis_image_area --value='8294400' --type=integer

How can I prevent these settings from being reset?

I use the newest nextcloud-aio and Facerecognition container.

Best regards
Michal

Does the Docker version support GPU?

Hi

I could not find this info anywhere in the specs: Does the docker-version of the external-model support GPU usage?

If yes: great!!

If no: would it be possible to provide a dockerfile that includes this option?

Background: I have a perfectly smooth running NC, including easy update processes and custom configuration. So I would much prefer not to build my own NC image, as I suspect a high probability of having issues bringing my current setup back to life. Another unknown is the updatability of a self-compiled image. Adding the external model on the same host seems a much safer route to take.

Thanks for clarification!
akrea

Add container to AIO's community containers to make their configuration easier

Hi, I just wanted to mention that AIO has this now: https://github.com/nextcloud/all-in-one/tree/main/community-containers#how-to-add-containers. So the community could potentially add the facerecognition container as an additional container there. Feel free to ping me if you need help with this!


Regarding the json, I just leave this here as an example:

{
    "aio_services_v1": [
        {
            "container_name": "nextcloud-aio-facerecognition",
            "display_name": "Computing container for facerecognition",
            "documentation": "https://github.com/nextcloud/all-in-one/tree/main/community-containers/facerecognition",
            "image": "matiasdelellis/aio-facerecognition",
            "image_tag": "v1",
            "internal_port": "5000",
            "restart": "unless-stopped",
            "environment": [
                "TZ=%TIMEZONE%",
                "API_KEY=some-super-secret-api-key"
            ],
            "aio_variables": [
                "nextcloud_memory_limit=4096M"
            ],
            "nextcloud_exec_commands": [
                "php /var/www/html/occ app:install facerecognition",
                "php /var/www/html/occ config:system:set facerecognition.external_model_url --value nextcloud-aio-facerecognition:5000",
                "php /var/www/html/occ config:system:set facerecognition.external_model_api_key --value some-super-secret-api-key",
                "php /var/www/html/occ face:setup -m 5",
                "php /var/www/html/occ face:setup -M 4G",
                "php /var/www/html/occ occ face:background_job &"
            ]
        }
    ]
}

docker compose yaml file example

docker-compose.yaml example; not sure if environment is required here. Is the FACE_MODEL setup correct?
For some reason, I had to set the API_KEY environment variable, as the api.key file did not work as mounted below.

version: '3'
services:
  facerecognition:
    container_name: facerecognition
    restart: unless-stopped
#    image: ghcr.io/matiasdelellis/facerecognition-external-model:latest
    build: https://github.com/matiasdelellis/facerecognition-external-model.git

    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /root/facerecognition/api.key:/app/api.key:ro
      - type: tmpfs # Optional: 1GB of memory
        target: /tmp/cache
        tmpfs:
          size: 1GB
    ports:
      - "8080:5000"
    environment:
      FACE_MODEL: 4

No Faces Detected

Hi!

First of all, thank you and other contributors for maintaining and expanding this functionality! I'm a user of Nextcloud AIO and I'm very glad there is this option now. (I'm unsure if I should post this here or on https://github.com/nextcloud/all-in-one )

I've set up the container and it seems to be running; however, 0 faces get detected in each image and the Nextcloud logs are filled with this error:

Trying to access array offset on value of type null at /var/www/html/custom_apps/facerecognition/lib/Model/ExternalModel/ExternalModel.php#181

The following error shows up repeatedly in the container log:
[2024-01-11 18:03:39,598] ERROR in app: Exception on /detect [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/facerecognition-external-model.py", line 60, in decorated_function
    return view_function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/facerecognition-external-model.py", line 148, in detect_faces
    faces = DETECT_FACES_FUNCTIONS[FACE_MODEL](img)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/facerecognition-external-model.py", line 69, in cnn_detect
    dets: list = CNN_DETECTOR(img)
                 ^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not callable
172.27.0.7 - - [11/Jan/2024 18:03:39] "POST /detect HTTP/1.1" 500 -

Might I have made a mistake in the installation? Or is it a bug?

I use nextcloud as part of my homeserver hobby and I sadly have no experience in this field.

If I can supply any extra information needed, let me know!

GPU Usage on External-model

Hi, I know this is not the channel for this kind of message, but I tried to use a GPU for external processing and I really don't know if it's working. My main server is CPU only, yet it processes faster than the dedicated, GPU-powered external-model server I built just for face recognition. Is there a way to know for sure whether the model is using the GPU?
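
One way to check, at least for the dlib-based models, is to ask dlib itself whether it was built with CUDA and can see a GPU. A small sketch to run in the same Python environment the service uses (assumes a recent dlib build that exposes these attributes):

import dlib

# True only if this dlib build was compiled with CUDA support
print("dlib compiled with CUDA:", dlib.DLIB_USE_CUDA)
if dlib.DLIB_USE_CUDA:
    # Number of CUDA devices dlib can see from inside the container
    print("CUDA devices visible:", dlib.cuda.get_num_devices())

If DLIB_USE_CUDA is False, the detectors run on the CPU regardless of the hardware available.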
