cert-polska / mquery

YARA malware query accelerator (web frontend)

License: GNU Affero General Public License v3.0

Python 59.13% CSS 0.70% JavaScript 38.19% HTML 0.74% Dockerfile 1.00% Mako 0.23%
yara malware database security-tools security-automation

mquery's Introduction

mquery: Blazingly fast Yara queries for malware analysts

Ever had trouble searching for malware samples? Mquery is an analyst-friendly web GUI to look through your digital warehouse.

It can be used to search through terabytes of malware in the blink of an eye:

mquery web GUI

Under the hood, we use our UrsaDB database to accelerate YARA queries with n-grams.
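The acceleration idea can be sketched in a few lines of Python (a toy illustration, not UrsaDB's actual implementation; the `TrigramIndex` name is invented here): only files containing every n-gram of a queried string need to be scanned with the full YARA engine.

```python
# Toy sketch of n-gram query acceleration (NOT UrsaDB's real code):
# the index maps each 3-byte "gram3" to the files containing it, so a
# query only YARA-scans files that contain every trigram of the needle.
from collections import defaultdict

def trigrams(data: bytes):
    """Return the set of all 3-byte substrings (gram3 n-grams) of data."""
    return {data[i:i + 3] for i in range(len(data) - 2)}

class TrigramIndex:
    def __init__(self):
        self.index = defaultdict(set)  # trigram -> set of file names

    def add(self, name: str, data: bytes):
        for g in trigrams(data):
            self.index[g].add(name)

    def candidates(self, needle: bytes):
        """Files that *may* contain needle; YARA still confirms matches."""
        sets = [self.index[g] for g in trigrams(needle)]
        return set.intersection(*sets) if sets else set()

idx = TrigramIndex()
idx.add("a.bin", b"hello Exception world")
idx.add("b.bin", b"nothing interesting")
print(idx.candidates(b"Exception"))  # {'a.bin'}
```

The key property is that the index never misses a real match: it only narrows the candidate set, and the full YARA scan on the candidates gives the final answer.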

Demo

A public instance will be created soon, stay tuned...

Quickstart

1. Install and start

The easiest way to do this is with docker-compose:

git clone --recurse-submodules https://github.com/CERT-Polska/mquery.git
cd mquery
vim .env  # optional - change samples and index directory locations
docker-compose up --scale daemon=3  # building the images will take a while

The web interface should be available at http://localhost.

(For more installation options, see the installation manual.)

2. Add the files

Put some files in the SAMPLES_DIR (by default ./samples in the repository, configurable with a variable in the .env file).

3. Index your collection

Launch ursacli in docker:

sudo docker-compose exec ursadb ursacli
[2023-06-14 17:20:24.940] [info] Connecting to tcp://localhost:9281
[2023-06-14 17:20:24.942] [info] Connected to UrsaDB v1.5.1+98421d7 (connection id: 006B8B46B6)
ursadb>

Index the samples with n-grams of your choosing (this may take a while!):

ursadb> index "/mnt/samples" with [gram3, text4, wide8, hash4];
[2023-06-14 17:29:27.672] [info] Working... 1% (109 / 8218)
[2023-06-14 17:29:28.674] [info] Working... 1% (125 / 8218)
...
[2023-06-14 17:37:40.265] [info] Working... 99% (8217 / 8218)
[2023-06-14 17:37:41.266] [info] Working... 99% (8217 / 8218)
{
    "result": {
        "status": "ok"
    },
    "type": "ok"
}

This will scan the samples directory for all new files and index them. You can monitor the progress in the tasks window on the left:

You have to repeat this process every time you want to add new files!

After indexing is over, you will notice new datasets:

This is a good and easy way to start, but if you have a big collection, you are strongly encouraged to read the indexing page in the manual.

4. Test it

Now your files should be searchable - insert any Yara rule into the search window and click Query. Just for demonstration, I've indexed the source code of this application and tested this Yara rule:

rule mquery_exceptions {
    strings: $a = "Exception"
    condition: all of them
}

Learn more

See the documentation to learn more. It's probably a good idea if you plan a bigger deployment.

You can also read the hosted version here: cert-polska.github.io/mquery/docs.

Installation

See the installation instructions.

Contributing

If you want to contribute, see our dedicated documentation for contributors.

Changelog

Learn how the project has changed by reading our release log.

Contact

If you have any problems, bugs or feature requests related to mquery, you're encouraged to create a GitHub issue.

You can chat about this project on Discord:

If you have questions unsuitable for GitHub or Discord, you can email CERT.PL ([email protected]) directly.

mquery's People

Contributors

bonusplay, bzeba, c3rb3ru5d3d53c, damionmounts, dskwhitehat, icedevml, itayc0hen, jaropowerh, kwmorale, msm-cert, msm-code, nazywam, psrok1, raw-data, yankovs


mquery's Issues

Remove job entry from the Recent Jobs page

Description:

Sometimes, junk tests and miserably failing rules can inflate the list of recent jobs and cause noise and clutter. It would be nice to have the option to remove entries from the table. Preferably, it wouldn't remove the job itself, only its record in the displayed table. The link to each removed job should still be valid.

An example can be found on VT's retrohunt page:
image

Support multiple rules and dependencies between them

Description

Out of all the issues I opened, this probably is the most important to me.

mquery currently supports querying only a single rule per query, which means rulesets are not supported. This issue is important to me in particular since this is a usual research workflow and, in general, a very basic feature of Yara.

Not only does mquery not support referencing rules within the same query, it also does not support querying multiple rules at a time. By that I mean multiple rules in the same query, not different queries running on multiple workers.

Most Yara rule repositories contain .yar files with multiple rules inside, as do the rules written by different research vendors or individuals. A typical use case is a ruleset to catch all samples of a given actor, say a ruleset to catch samples of the Equation group. mquery does not allow something like this, which is very common when working on big projects, analyzing entire actors and teams.

The queries I usually create contain more than a single rule: global rules, private rules, and rules referencing other rules. This allows me to create powerful rulesets with stronger conditions and classification. In the end, the samples that match the ruleset will be classified based on the rules in the ruleset that detected them. For example, in the aforementioned Equation group ruleset, the results would show whether each sample was detected by "EquationDrug_PlatformOrchestrator", "Equation_Kaspersky_TripleFantasy_Loader", "Equation_Kaspersky_FannyWorm", or another rule.

Yara is very powerful, and it seems like mquery doesn't use its powers fully.

Read more:
Yara - Referencing Other Rules

rule Rule1
{
    strings:
        $a = "dummy1"

    condition:
        $a
}

rule Rule2
{
    strings:
        $a = "dummy2"

    condition:
        $a and Rule1
}

Yara - More About Rules - Global and Private rules

global rule SizeLimit
{
    condition:
        filesize < 2MB
}

private rule PrivateRuleExample
{
    ...
}

Show matched strings in matched samples

Description

When looking for interesting samples in the internal dataset, it is great to see the context in which a given sample was found. By showing the matched strings when possible (in context, maybe with partial 32* bytes of hexdump) you will provide the user meaningful information. This can be done via tooltip.

This info can be retrieved from yara -sL

image

Support case-insensitive strings in yara rules

Right now, we just ignore strings with the nocase flag:

rule CaseInsensitiveTextExample
{
    strings:
        $text_string = "foobar" nocase

    condition:
        $text_string
}

Supporting them correctly is... harder than it looks. This can match:

foobar
foobaR
foobAr
foobAR
fooBar
... and 59 strings more

and the ursadb query language is not expressive enough to support this.

We can't hack around this by chopping the query in the backend to something like:

( "foo" AND (
    "oob" AND (
        "oba" AND (
            ...
        ) OR
        "obA" AND (

        )
    ) OR
    "ooB" AND (
        "oBa" AND (
            ...
        ) OR
        "oBA" AND (
            ...
        )
    )
) OR "foO" AND (
    ...
) OR "fOo" AND (
    ...
) OR "fOO" AND (
    ...
) ...

Because of exponential growth.
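The blow-up is easy to quantify with a short Python sketch (illustrative only): a nocase string with k alphabetic characters expands to 2**k concrete strings.

```python
# Enumerate every concrete casing of a nocase string.
# A string with k alphabetic characters yields 2**k variants.
from itertools import product

def case_variants(s: str):
    options = [(c.lower(), c.upper()) if c.isalpha() else (c,) for c in s]
    return {"".join(p) for p in product(*options)}

variants = case_variants("foobar")
print(len(variants))  # 64 == 2**6 (the 5 listed above plus "59 strings more")
```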

OTOH, I feel like this can be solved with a C++ method (needs investigation). In this case we would need to introduce nocase strings to ursadb.

Needs investigation (if this results in too many false positives, we may as well give up).

What is UrsaDB?

Could you link the README.md to what UrsaDB is or give a quick explanation there? I'm trying to understand how this project is more "accelerated" than just using yara.

Show line numbers in the Yara editor

Description
Line numbers are a useful feature in text editors. The current textarea does not show line numbers. This is especially a problem when trying to compile a rule and getting an error saying that there is a problem in line 1234. A user would have to go to another IDE to see which line is problematic.

image

Fix query result updater getting stuck on the last result

That's a bit hard to explain and reproduce, but:

  1. Index a lot of files (say 40k)
  2. Open your browser devtools
  3. Do a trivial query on them, like
rule aaa {
strings:   $a = "aa"
condition: $a
}
  4. Watch progress quickly go to 100% and matches being slowly downloaded from the server
  5. Click another tab in the browser, for example "Recent jobs"
  6. mquery will sometimes keep querying the job, even though it's not visible anymore

This bug may "fix itself" after #39, but not necessarily.

Support full words in yara rules

The fullword modifier ensures that a string will match only if it appears in the file delimited by non-alphanumeric characters. We should support it since it's trivial for us: just drop the modifier.
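The preprocessing step could be as simple as this hypothetical helper (`strip_fullword` is not an actual mquery function). Dropping the modifier only widens the candidate set, and the real YARA scan still enforces word boundaries on those candidates:

```python
# Hypothetical sketch: ignore the fullword modifier for index
# acceleration purposes. This is safe because it can only produce
# extra candidates, never miss a real match.
import re

def strip_fullword(string_def: str) -> str:
    """Remove the fullword modifier from a YARA string definition line."""
    return re.sub(r"\s*\bfullword\b", "", string_def).rstrip()

print(strip_fullword('$a = "domain" fullword wide'))  # $a = "domain" wide
```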

Make Dockerfiles more cacheable

Right now, any change in (for example) react frontend will rebuild:

  • dev-frontend from dev dockerfile (correctly, expected behaviour)
  • web (expected, but should only rebuild the frontend, not everything)
  • daemon (unnecessary)
  • dev-web (unnecessary)

Right now we recommend docker-compose for development. We should strive to make rebuilds a faster operation.

I think we could:

  • move mqueryfront to a separate directory, instead of it being a subdirectory of ./src/
  • investigate npm caching in mqueryfront (I think it should be doable? Just copy package.json and install first?)

Alternatively/additionally, we may consider uploading prebuilt image artifacts to Docker Hub, so instead of building them locally, we can download them. Just a thought.

Rethink plugin system and support

Currently we have a small set of hardcoded metadata plugins, that we recently merged with the main repo.

This system is not extensible and is hard to configure, even in our environment. Think about how we can improve this and open it up a bit.

Fail running docker compose

Target operating system:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.1 LTS"

I tried to run docker compose, but I'm getting some errors:

docker-compose up --scale daemon=3

Result:

 ---> Running in 337c59f38b0e
2018-08-09T23:07:57.054233251+02:00 network connect d391f974016b2815519970162e26b4b96414460afae237b14a716362c50221cb (container=337c59f38b0e573a32b6e9688cda250af51575464d42e14eaf3842dc829448c1, name=bridge, type=bridge)
2018-08-09T23:07:57.363154370+02:00 container start 337c59f38b0e573a32b6e9688cda250af51575464d42e14eaf3842dc829448c1 (image=sha256:82b6f873a70f83db57b83fe95c8bb58d5ea820d4ddd857acaebea490a954e096, name=upbeat_beaver)
-- The C compiler identification is GNU 7.3.0
-- The CXX compiler identification is unknown
-- Check for working C compiler: /usr/bin/gcc-7
-- Check for working C compiler: /usr/bin/gcc-7 -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
CMake Error at CMakeLists.txt:2 (project):
  The CMAKE_CXX_COMPILER:

    g++-7

  is not a full path and was not found in the PATH.

  Tell CMake where to find the compiler by setting either the environment
  variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
  to the compiler, or to the compiler name if it is in the PATH.


-- Configuring incomplete, errors occurred!
See also "/src/build/CMakeFiles/CMakeOutput.log".
See also "/src/build/CMakeFiles/CMakeError.log".
2018-08-09T23:07:57.887019066+02:00 container die 337c59f38b0e573a32b6e9688cda250af51575464d42e14eaf3842dc829448c1 (exitCode=1, image=sha256:82b6f873a70f83db57b83fe95c8bb58d5ea820d4ddd857acaebea490a954e096, name=upbeat_beaver)
2018-08-09T23:07:58.002523293+02:00 network disconnect d391f974016b2815519970162e26b4b96414460afae237b14a716362c50221cb (container=337c59f38b0e573a32b6e9688cda250af51575464d42e14eaf3842dc829448c1, name=bridge, type=bridge)
ERROR: Service 'ursadb' failed to build: The command '/bin/sh -c cmake -D CMAKE_C_COMPILER=gcc-7 -D CMAKE_CXX_COMPILER=g++-7 -D CMAKE_BUILD_TYPE=Release .. && make' returned a non-zero code: 1

Improve mquery RAM usage for large queries

Currently, in the worst case, mquery will load all the filenames in the corpus to memory - in three copies!

This should be reduced to at most one copy, or ideally zero.

Filter previous jobs by author

Description
Currently, mquery parses the rule's author from the metadata section of the rule.

When the number of jobs starts to rise, it will be harder to find a previous job. It's even more important if a big organization is using mquery with more than 10 daily users.
Thus, it would be nice to have an option to filter the jobs by their author.

image


*In general, I believe that implementing user management would be a good decision, maybe something based on the mwdb user permission model. This would help make mquery suitable for big organizations. For example, some queries are too sensitive to share even with other people in the same organization. In addition, the author signed on a Yara rule isn't necessarily the person who executes it. Say someone executes a rule from the internet written by a stranger; later, it will be hard for them to find this job since they don't remember the stranger's name.

I'll open a new issue for this ^*

Make e2e tests easily runnable from local machine

At least I don't completely understand how to run them - the local state is a huge pain.

Maybe create a special docker-compose.yml that will always start with a clean state? Just thinking aloud here.

Assigning @icedevml since you understand these tests best, but I'll pick it up myself later if you don't have time.

Internal exception when executing rules with no strings defined

Description

When no strings are provided in the Yara rule, mquery will return an internal server error and a Python exception.

Some Yara rules do not contain any strings, as can be seen in the official Yara documentation and in the following examples:

rule FileSizeExample
{
    condition:
       filesize > 200KB
}
rule IsPE
{
  condition:
     // MZ signature at offset 0 and ...
     uint16(0) == 0x5A4D and
     // ... PE signature at offset stored in MZ header at 0x3C
     uint32(uint32(0x3C)) == 0x00004550
}
import "hash"

rule test
{
    condition:
        hash.sha256(0, filesize) == "e9455dcfaa7571ef460913879ad8c562d4abb374316bdf242b2459c7f85519e6"
}

Cause of the problem
From a quick investigation, it seems like the object job_obj is passed to redis.hmset(..., job_obj) with a NoneType value for the parsed key in the dictionary.

mquery/src/app.py

Lines 91 to 122 in 5001934

        rule_strings = {}
        for r_string in rule.strings:
            rule_strings[r_string.identifier] = r_string
        parsed = yara_traverse(rule.condition, rule_strings)
    except Exception as e:
        logging.exception("YaraParser failed")
        return jsonify({"error": f"Yara rule conversion failed: {e}"}), 400
    if req["method"] == "parse":
        return jsonify({"rule_name": rule_name, "parsed": parsed})
    job_hash = "".join(
        random.SystemRandom().choice(string.ascii_uppercase + string.digits)
        for _ in range(12)
    )
    job_obj = {
        "status": "new",
        "max_files": -1,
        "rule_name": rule_name,
        "rule_author": rule_author,
        "parsed": parsed,
        "raw_yara": raw_yara,
        "submitted": int(time.time()),
        "priority": priority,
    }
    if req["method"] == "query_100":
        job_obj.update({"max_files": 100})
    redis.hmset("job:" + job_hash, job_obj)

There is no validation check for the value before passing it, which isn't good practice. Note that redis-py no longer accepts NoneType objects: redis/redis-py#1071 (comment)
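A minimal sketch of the missing validation (hypothetical, not the actual patch): reject the request cleanly instead of letting redis.hmset() crash on a NoneType value.

```python
# Hypothetical validation helper: refuse to store a job dict containing
# None values, since redis-py no longer serializes NoneType. In app.py
# this would be surfaced as a 400 response instead of a 500.
def validate_job_obj(job_obj: dict) -> None:
    bad = [k for k, v in job_obj.items() if v is None]
    if bad:
        raise ValueError(f"rule produced empty values for: {', '.join(bad)}")

job_obj = {"status": "new", "parsed": None, "rule_name": "IsPE"}
try:
    validate_job_obj(job_obj)
except ValueError as e:
    print(e)  # rule produced empty values for: parsed
```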

Expected behavior
The query should be executed without any problem

Add pagination or lazy-load to results table

Description
When executing a query with many results, the interface shows all the matches in the results table. The table can contain tens of thousands of results for a big dataset, which makes the page slow to respond. I suggest implementing pagination (the XHR requests already fetch 50 at a time) or lazy loading.
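The pagination idea could be sketched like this (a hypothetical helper, not mquery's actual API): since the frontend already fetches in chunks of 50, the backend can slice rather than return everything at once.

```python
# Hypothetical offset/limit pagination for a matches endpoint:
# return one page of results plus the total count, so the UI can
# render a pager without holding the whole result set.
def get_matches_page(matches: list, offset: int = 0, limit: int = 50) -> dict:
    return {
        "total": len(matches),
        "matches": matches[offset:offset + limit],
    }

page = get_matches_page(list(range(120)), offset=100)
print(page["total"], len(page["matches"]))  # 120 20
```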

Remove the Actions column from the Recent Jobs page

Description
It seems like the Actions column in /recent is used only for displaying the Cancel button for active jobs. While this is nice, it takes up unnecessary space because most of the time there is one active job or none.

image

My suggestion is to add the cancel button next to the active scan or in the Status cell.

In VirusTotal, they put it in the column of the Delete task buttons (see #21).

image

Show query duration

Description
I think it would be nice to show the query duration both on the Query page and on the "/recent" page. It would be helpful for the user: over time, they'll learn best practices by measuring query durations.

Action items

  • show query duration for running queries on the Query page
  • show query duration for running queries on the Recent page
  • show query duration for finished queries on the Recent page
  • show ETA for running query

Filter query results based on rule-name

Description
I think it is a good idea to let the user interact a little more with the results page. This includes secondary filtering.

For example, a query can contain multiple rules, and each sample shown in the matches table has labels with the names of the matched rules.

This can be shown in the following image.
image

I'd like a more dynamic approach for the matches table, in which the user can click on a label (rule name) and the table will be filtered based on the selected rule.

Not surprisingly, this is implemented in mwdb in a very nice way. The user can click on a tag and it will immediately filter.

Show visible indication that no matches were found

Description
It is not intuitive to understand that no matches were found for a given rule. The interface shows the table's header (why?) even though there are no matches, the progress bar isn't full, etc.

image

Instead, I suggest showing a full progress bar and then a message saying "No matches found for your rule" instead of the empty table.

Support ascii and wide strings together

rule WideCharTextExample2
{
    strings:
        $wide_and_ascii_string = "Borland" wide ascii

    condition:
       $wide_and_ascii_string
}

Expected result:

{42006f0072006c0061006e006400} | {426f726c616e64}

Actual result:

{42006f0072006c0061006e006400}

Mquery is counting results when displaying previous jobs

Description
When entering previous jobs, mquery counts the number of matches instead of showing the number immediately.

mquery keep counting

Expected behavior
I expect the total number of results to be shown immediately, even if the table needs to be fetched (lazy loading / pagination would be better, by the way).

Files not being indexed

Trying to index files placed in /mnt/samples. After executing docker-compose run ursadb-cli tcp://ursadb:9281 --cmd 'index "/mnt/samples";' a status code of "ok" is returned, but no logging or other information shows files being indexed.

Is there any guidance on troubleshooting file indexing? Is it only certain file types, etc.?

Implement user permission model

Description

I believe that implementing user management would be a good decision, maybe something based on the mwdb user permission model. This would help make mquery suitable for big organizations.
For example, some queries are too sensitive to share even with other people in the same organization.

Similarly to mwdb, this would allow the user to choose whether they want all users to view a newly added job or to submit it as a personal rule, visible only to them (and maybe to the system admin). In this case, the recent jobs page would by default show only the user's jobs, and with a toggle turned on, it would show all publicly available jobs.

In addition, mquery currently parses the "author" attribute from the Yara rule. But the author signed on a Yara rule isn't necessarily the person who executes the rule in mquery. Say someone executes a rule from the internet written by a stranger; later, it will be hard for the user to find this job since they don't remember the stranger's name.

Research the usefulness of accelerating xor modifier

Accelerate rules like:

rule XorExample1
{
    strings:
        $xor_string = "This program cannot" xor

    condition:
       $xor_string
}

My gut feeling is that this may result in way too many false positives, but I may be wrong.

We should probably transform strings like this into an OR of 256 xored strings in the backend. I don't feel like first-class support in ursadb is necessary (unless someone finds a way to optimise this that needs it).
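The suggested backend transformation is straightforward to sketch in Python (illustrative; `xor_variants` is not an actual mquery function): enumerate all 256 single-byte XOR keys and OR the resulting concrete strings in the query.

```python
# Expand an `xor` string into the 256 single-byte-key XOR variants
# that could be OR-ed together in the backend query.
def xor_variants(s: bytes):
    return [bytes(b ^ key for b in s) for key in range(256)]

variants = xor_variants(b"This program cannot")
assert variants[0] == b"This program cannot"  # key 0x00 is the plaintext
print(len(variants))  # 256
```

A 256-way OR is large but constant-sized, unlike the exponential blow-up of nocase, which is why this one seems feasible to hack around in the backend.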

Decrease font size on the interface

Description

By default, when showing a job's results mquery uses a 16px font, retrieved from :root by specifying 1rem. For most places in the interface, 16px is too big and causes even a simple ./samples/<sha256> to line-wrap.

I suggest using a 0.75rem (12px) font where possible.

Before:

image

After:

image

Before:

image

After:
image

[proposed enhancement] query specific datasets

It could be interesting to have the possibility to query specific datasets before running a query.

Workflow example:

  • Select one or more specific datasets to query from a drop-down menu
  • copy-paste YARA rule
  • run query

Use case:
Scanning paths storing different kinds of data, i.e. known-to-be-good, known-to-be-bad, [...]

Add tab key support to the Yara editor

Description
When pressing TAB in the Yara editor, the website navigates to the next HTML element by shifting the focus. When coding, TAB is frequently used for indentation. Yara rules are rich with indentation, and thus I believe pressing TAB should be treated as a \t in the editor.

Examples:
https://jsfiddle.net/2wAzx/13/
https://philipnewcomer.net/2015/11/how-to-make-the-tab-key-indent-text-in-a-texarea/
https://stackoverflow.com/questions/6637341/use-tab-to-indent-in-textarea

Add a "Download All" button to download all matched samples

Description
When executing a Yara rule from the interface, there is no way to download all the matched samples. Instead, the user needs to manually click each sample or use the undocumented API.

I suggest adding two buttons for both the job and the Recent pages.

  • A button to download all samples in a zip (protected with "infected" password)
  • A dropdown button of buttons to download a text file
    • Download the names of the files
    • Download the SHA256 of the files
    • Download the SHA1 of the files
    • Download the MD5 of the files

Related: #16

Build Error

I am getting the following error when I run "docker-compose up --scale daemon=3"

Successfully tagged mquery_daemon:latest
Building web
ERROR: No build stage in current context

Show number of total indexed files in the Status page

Description
The /status page is used to provide information about the underlying infrastructure, including versions, topology, and current connections.
If a user wants to know how many files are indexed, there's no clear way to do so (unless ursadb-cli provides it somehow, and still, it is not accessible).

My suggestion is to display the number of indexed files in the dataset(s). It would also be nice to show the number of files that aren't indexed, because ls /mnt/samples | wc -l != num_of_indexed_files.

A bonus feature would be to show the total size of the dataset.

Add a Copy button for all hashes, and for each single hash

Description
Currently, it is not easy to retrieve the hash of a sample. If you're lucky, the filename is the hash, and even then you'd need to copy each filename.
I suggest adding buttons to:

  • Copy single hash (might be implemented by clicking on the cell)
  • Copy all hashes as
    • Sha256
    • Sha1
    • MD5

image

image

Related to #33

Display the number of files that are not indexed

Suggested in #19

It would also be nice to show the number of files that aren't indexed, because ls /mnt/samples | wc -l != num_of_indexed_files.

Needs backend changes, redesign, and pondering whether it's even in scope. I can see how it could be useful though (for example, as an alert when an autoindex job fails).

Progress bar shows NaN% when no matches found

Description
When no matches and no potential files are found, the progress bar can show NaN%. This is probably caused by dividing 0 processed samples by 0 indexed potential files in the JS.

Also, notice that the progress bar isn't even fully filled, as would be expected from a no-results-found job.

image
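A guard for the suspected 0/0 division could look like this (sketched in Python for illustration; the real fix would live in the frontend JS):

```python
# Guard against the 0/0 case: a job with zero candidate files is
# trivially 100% done, so never divide by a zero total.
def progress_percent(processed: int, total: int) -> int:
    return 100 if total == 0 else round(100 * processed / total)

print(progress_percent(0, 0))     # 100 (instead of NaN)
print(progress_percent(50, 200))  # 25
```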

[META] Support more features of Yara

Description
It seems like mquery does not support many of the features of yara, which makes it less powerful and blocks the possibility of utilizing mquery for quality hunting of new samples, exploits, and more. The limited set of supported features limits the user significantly.

Required Features

Rules (see also #40)

Strings

Make the robust installation easier

Right now there are two ways to set up everything:

  1. Do it by hand, which requires a lot of knowledge about system internals and some time.

or

  2. Do it with docker compose, with samples in the repository root, etc. This doesn't seem to play nicely with large datasets (there were a lot of problems during today's test run).

Maybe publish stable docker images on Docker Hub to make a stable installation easier?

Migrate the backend database to postgres

We're using redis for historic reasons, but we should probably move the metadata to postgres, to ensure data integrity and maybe even improve performance.

  • Design a schema and update docker-compose files DB;
  • Change all uses of redis in the code to postgres;
  • Provide a migration scripts?
  • Update documentation (including installation methods)

This is pretty low priority because it's a lot of work and doesn't add any immediate features.

Suggested by @BonusPlay (not directly, but related to schema problems)
