
logstash-patterns-core's Introduction

Logstash Plugin


This plugin provides pattern definitions used by the grok filter.

It is fully free and fully open source. The license is Apache 2.0, meaning you are free to use it however you want.

Documentation

Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code are first converted into asciidoc and then into HTML. All plugin documentation is placed in one central location.

Need Help?

Need help? Try the https://discuss.elastic.co/c/logstash discussion forum.

Developing

1. Plugin Development and Testing

Code

  • Install dependencies
bundle install

Test

  • Update your dependencies
bundle install
  • Run tests
bundle exec rspec

2. Running your unpublished Plugin in Logstash

2.1 Run in a local Logstash clone

  • Edit Logstash Gemfile and add the local plugin path, for example:
gem "logstash-patterns-core", :path => "/your/local/logstash-patterns-core"
  • Install plugin
# Logstash 2.3 and higher
bin/logstash-plugin install --no-verify
  • Run Logstash with your plugin
bin/logstash -e 'filter { grok { } }'

At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
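
For a quicker feedback loop, you can feed a sample line on stdin and inspect the parsed event; a minimal sketch using the COMMONAPACHELOG pattern shipped by this repository:

bin/logstash -e '
  input { stdin { } }
  filter { grok { match => { "message" => "%{COMMONAPACHELOG}" } } }
  output { stdout { codec => rubydebug } }
'

Paste an Apache access-log line and the captured fields (clientip, verb, response, and so on) are printed by the rubydebug codec.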

2.2 Run in an installed Logstash

You can use the same method as in 2.1 to run your plugin in an installed Logstash by editing its Gemfile and pointing the :path to your local plugin development directory, or you can build the gem and install it using:

  • Build your plugin gem
gem build logstash-patterns-core.gemspec
  • Install the plugin from the Logstash home
bin/logstash-plugin install --no-verify
  • Start Logstash and proceed to test the plugin

Contributing

All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.

Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.

It is more important to the community that you are able to contribute.

For more information about contributing, see the CONTRIBUTING file.

logstash-patterns-core's People

Contributors

bdashrad, chernjie, cniweb, colinsurprenant, danhermann, electrical, elyscape, flysen, frennkie, geogdog, gibson042, hatt, human39, igalic, jordansissel, jsvd, kares, lebinh, mdelapenya, nhuff, olafz, paulrbr, ph, philhagen, radu-gheorghe, rijnhard, robbavey, sathieu, yaauie, ycombinator


logstash-patterns-core's Issues

How to display a pattern group

text

<User-Name data_type="1">xxx</User-Name>

pattern

CALLING_USER >([^<]+)</User-Name>

grok

"message" => ${CALLING_USER:user}

display

>xxx</User-Name>

How can I show only group one?

xxx
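
For reference, one way to capture only the inner group is an inline named capture in the match itself, instead of a pattern that spans the literals; a minimal sketch (the user field name is illustrative):

filter {
  grok {
    match => { "message" => "<User-Name[^>]*>(?<user>[^<]+)</User-Name>" }
  }
}

This yields user => "xxx" for the sample above.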

Grok pattern for NCSA log

I was reading the grok patterns and found that your pattern for parsing the combined Apache log, namely the NCSA log format, does not accept spaces in the authuser field:

USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
[...]
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] ...

I read all the documentation I could find on the subject, even the NCSA HTTPd source code, and could not find any reference to escaping, forbidding, or replacing spaces in the authuser field.

Yet a lot of people make the same wrong assumption (starting with me: I have the habit of splitting my NCSA log lines on spaces, or using awk, cut, etc. on them).

I thought at first that I was right (to split on spaces, use awk, etc.), so I even opened a ticket on the Varnish bug tracker.

I think we should open a conversation here on this subject: on the one hand we have a lot of people doing the same thing wrong; on the other hand we will have a hard time finding a clean way to encode authuser without dropping information, and convincing Apache, Microsoft, Varnish, nginx, etc. to change their log handling code.
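
For illustration, a hypothetical combined-log line whose authuser contains a space, and which %{COMMONAPACHELOG} therefore rejects:

203.0.113.7 - Server Administrator [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326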

CISCOFW305011 not reporting the src_xlated_port

CISCOFW305011 %{CISCO_ACTION:action} %{CISCO_XLATE_TYPE:xlate_type} %{WORD:protocol} translation from %{DATA:src_interface}:%{IP:src_ip}(/%{INT:src_port})?(\(%{DATA:src_fwuser}\))? to %{DATA:src_xlated_interface}:%{IP:src_xlated_ip}/%{DATA:src_xlated_port}

%{DATA:src_xlated_port} should be %{INT:src_xlated_port}.
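
With that change, the definition would read (a sketch of the proposed fix):

CISCOFW305011 %{CISCO_ACTION:action} %{CISCO_XLATE_TYPE:xlate_type} %{WORD:protocol} translation from %{DATA:src_interface}:%{IP:src_ip}(/%{INT:src_port})?(\(%{DATA:src_fwuser}\))? to %{DATA:src_xlated_interface}:%{IP:src_xlated_ip}/%{INT:src_xlated_port}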

EMAILLOCALPART Pattern doesn't match some accounts

Hi,
Recently I began to use the EMAILLOCALPART pattern, but it doesn't match accounts that start with a number, so many maillogs produced by some solutions are not indexed correctly.

Example: [email protected]

So I think the pattern should be:

EMAILLOCALPART [a-zA-Z0-9_.+-=:]+

Regards,

Release often: I'm eager to use new patterns and there's no new gem

Hi,
It would be nice if you released a new version of the gem: there have been a lot of new patterns since 1.10. Please!

In my particular case, migrating from Logstash 1.4 to 1.5 is hurting due to pattern abductions (maybe aliens at it again).

Whitespace in Cisco ASA output breaks firewall pattern

(This issue was originally filed by @roderickm at elastic/logstash#2101)


If a Cisco ASA has a logging device-id set (for instance with logging device-id string asa.sfo), the syslog message emitted does not match the grok pattern CISCO_TAGGED_SYSLOG. An additional space should be allowed by the pattern between the device_id and the colon.

Here are example messages to demonstrate:

without device-id:
<164>Nov 19 2014 17:27:56: %ASA-4-733100: [ Scanning] drop rate-1 exceeded. ...

with device-id:
<164>Nov 19 2014 17:30:36 asa.sfo : %ASA-4-733100: [ Scanning] drop rate-1 exceeded. ...

The example with a device-id is not matched by CISCO_TAGGED_SYSLOG because of the space between asa.sfo and the colon.
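
One way to accommodate the device-id is to make the space before the colon optional; a sketch along the lines of the shipped pattern (the ` ?` before the colon is the addition; the exact shipped definition may differ):

CISCO_TAGGED_SYSLOG ^<%{POSINT:syslog_pri}>%{CISCOTIMESTAMP:timestamp}( %{SYSLOGHOST:sysloghost})? ?: %%{CISCOTAG:ciscotag}: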

Syslog logs might not get parsed properly

See #10 for more details (closed because of very long inactivity); for a more detailed error description see elastic/logstash#1734.

From the main issue:


I figured I'd report the logs we're seeing from the syslog input plugin that aren't being parsed properly. The vast majority are being parsed just fine, but there are three edge cases that aren't.

This one fails because "Server Administrator" has a space in it:

<30>2014-09-15T11:35:55.965491-05:00 hostname Server Administrator: Storage Service EventID: 2243 The Patrol Read has stopped.: Controller 0 (PERC H800 Adapter) 

This one fails because there's no message:

<4>2014-09-14T23:21:38.214167-05:00 hostname kernel:

This one fails because "run-parts(/etc/cron.hourly)" has parentheses in it. I've discussed this one with whack in the IRC channel, and he said this should be fixed in the next release, but I figured it should be documented:

<77>2014-09-15T06:01:01.687109-05:00 hostname run-parts(/etc/cron.hourly)[25969]: starting 0anacron
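
Until a fix lands, a hypothetical override for the program portion could allow spaces and parentheses; the pattern names and field names here are illustrative, not shipped patterns:

PROG_PERMISSIVE (?:[\w._/%()-]+(?: [\w._/%()-]+)*)
SYSLOGPROG_PERMISSIVE %{PROG_PERMISSIVE:program}(?:\[%{POSINT:pid}\])?

This would match both "Server Administrator" and "run-parts(/etc/cron.hourly)[25969]" as the program.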

Tests needed for Junos firewall patterns

As of #12 there are Junos firewall patterns, but no tests are provided for them. If you know what these logs look like, it would be nice to have tests that validate the patterns work as expected.

All contributions are welcome, even just some log examples; then we can work out the tests for sure.

Cisco ASA pattern error for inbound and outbound

Migrated from: elastic/logstash#1369
....

Hi,

There is an issue with the built-in pattern for Cisco ASA firewalls. The line:

# ASA-6-302020, ASA-6-302021
CISCOFW302020_302021 %{CISCO_ACTION:action}(?: %{CISCO_DIRECTION:direction})? %{WORD:protocol} connection for faddr %{IP:dst_ip}/%{INT:icmp_seq_num}(?:\(%{DATA:fwuser}\))? gaddr %{IP:src_xlated_ip}/%{INT:icmp_code_xlated} laddr %{IP:src_ip}/%{INT:icmp_code}( \(%{DATA:user}\))?

should be replaced by:

# ASA-6-302020_302021 inbound
CISCOFW302020_302021_1 %{CISCO_ACTION:action}(?: (?<direction>inbound))? %{WORD:protocol} connection for faddr %{IP:src_ip}/%{INT:icmp_seq_num}(?:\(%{DATA:fwuser}\))? gaddr %{IP:dst_xlated_ip}/%{INT:icmp_code_xlated} laddr %{IP:dst_ip}/%{INT:icmp_code}( \(%{DATA:user}\))?
# ASA-6-302020_302021 outbound
CISCOFW302020_302021_2 %{CISCO_ACTION:action}(?: (?<direction>outbound))? %{WORD:protocol} connection for faddr %{IP:dst_ip}/%{INT:icmp_seq_num}(?:\(%{DATA:fwuser}\))? gaddr %{IP:src_xlated_ip}/%{INT:icmp_code_xlated} laddr %{IP:src_ip}/%{INT:icmp_code}( \(%{DATA:user}\))?

Indeed, the src_ip and dst_ip are different depending on whether the direction is inbound or outbound.

You will need to update the Logstash Cookbook page for Cisco ASA too, because the pattern CISCOFW302020_302021 is replaced by two patterns (CISCOFW302020_302021_1 and CISCOFW302020_302021_2).

Missing tests for ASA-4-106100, ASA-4-106102, ASA-4-106103 syslog messages

Since the last releases of this project we have added a way to attach tests to the patterns, so we're able to check for regressions, integrity, etc.

We need tests for these new patterns added to the grok patterns lib; they are from Cisco devices. If you are not comfortable writing Ruby tests, don't worry, contributing log lines is good! Then we can sort out the tests 😄

_happy testing_
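
For anyone wanting to contribute, an ASA-4-106100 message typically looks like the following (an illustrative example, not taken from a real device):

%ASA-4-106100: access-list inside_access_in permitted tcp inside/10.1.2.3(1234) -> outside/198.51.100.10(80) hit-cnt 1 first hit [0xabc12345, 0x0]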

PATH pattern does not combine well with comma delimiters

I have a logfile whose lines may include snippets along the lines of a=/some/path, b=some/other/path. As an ELK user interested in these logs, I would like to parse them with Logstash.

The logstash configuration:

filter {
  grok {
    match => { "message" => "((a=(?<a>%{PATH})?|b=(?<b>%{PATH})?)(,\s)?)+" }
  }
}

When given a=/some/path, b=/some/other/path, Logstash gives the output:

{
         "message" => "a=/some/path, b=/some/other/path",
        "@version" => "1",
      "@timestamp" => "2015-02-23T01:57:56.933Z",
            "type" => "stdin",
            "host" => "gallifry",
               "a" => "/some/path,"
}

I expect b to bind to /some/other/path:

{
         "message" => "a=/some/path, b=/some/other/path",
        "@version" => "1",
      "@timestamp" => "2015-02-23T01:57:56.933Z",
            "type" => "stdin",
            "host" => "gallifry",
               "a" => "/some/path"
               "b" => "/some/other/path"
}
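
The underlying cause is that the UNIXPATH character class includes the comma, so the match runs past the delimiter. A hypothetical workaround (the field names are illustrative) is to exclude the delimiter explicitly for the first capture:

filter {
  grok {
    match => { "message" => "a=(?<a>[^,]+), b=(?<b>%{PATH})" }
  }
}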

Mark HOST as breaking change between logstash 1.5.4 and 1.5.5

After updating my Logstash from the repositories from 1.5.4 to 1.5.5, it wouldn't start anymore, complaining about a pattern %{HOST} not being defined.

After tracking down the change, and with the help of someone on IRC, it turned out the pattern had been removed.

Could you mark this as a breaking change for the 1.5.5 release and check whether any other patterns are affected (and mark those as breaking changes too)?

Thanks.

As a sysadmin, I'd like to have shorewall logs on firewall default grok patterns

I've read this issue on Jira (https://logstash.jira.com/browse/LOGSTASH-491) but could not replicate the pattern, as the referenced sub-patterns (such as IPTABLESCHAIN, SPORT, ...) are not present anywhere.

As Shorewall is a very popular open source firewall product, could a pattern be added to filter its logs?

Thanks!

PS: Here is an example of a Shorewall log line; Shorewall follows http://logi.cc/en/2010/07/netfilter-log-format to define the log format:

Apr 16 08:26:46 myHostName kernel: [5595162.268034] Shorewall:net2fw:DROP:IN=eth1 OUT= MAC=myMacAddress SRC=sourceIP DST=destinyIP LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=25671 DF PROTO=TCP SPT=10884 DPT=43406 WINDOW=8192 RES=0x00 SYN URGP=0
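
Until a dedicated pattern exists, one possible approach (a sketch; the field names are illustrative) is to grok the Shorewall prefix and hand the NAME=value pairs to the kv filter:

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} kernel: \[%{NUMBER:uptime}\] Shorewall:%{DATA:sw_chain}:%{WORD:sw_action}:%{GREEDYDATA:nf_fields}" }
  }
  kv { source => "nf_fields" }
}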

Improve tests for HTTPD

Tests for HTTPD should be improved, for example covering situations where the auth field actually has content, such as an email address; see #3 for more details.

Example logs to craft the tests against would also be good to have.

Tests needed for the Nagios notification patterns

Related to #58

Tests are needed for the following patterns:

DISABLE_HOST_SVC_NOTIFICATIONS
ENABLE_HOST_SVC_NOTIFICATIONS
DISABLE_HOST_NOTIFICATIONS
ENABLE_HOST_NOTIFICATIONS
DISABLE_SVC_NOTIFICATIONS
ENABLE_SVC_NOTIFICATIONS

Example log lines would also be needed to write the tests for these patterns properly; see the illustrative lines below.
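
Nagios writes these as external commands, so illustrative log lines (invented for this example, not taken from the issue) would look like:

[1427925600] EXTERNAL COMMAND: DISABLE_HOST_SVC_NOTIFICATIONS;host1
[1427925600] EXTERNAL COMMAND: ENABLE_SVC_NOTIFICATIONS;host1;service1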

Mongo 3 log changes

Hi!
I was looking for MongoDB 3 log patterns and ended up here, only to notice that the Mongo patterns are outdated for version 3 of the database.

An example. You say:

MONGO_LOG %{SYSLOGTIMESTAMP:timestamp} \[%{WORD:component}\] %{GREEDYDATA:message}

But the new version's format turns out to be:

%{TIMESTAMP_ISO8601:timestamp} %{WORD:severity} %{WORD:component} \[%{WORD:context}\] %{GREEDYDATA:message}

See official documentation on the matter: http://docs.mongodb.org/manual/reference/log-messages/
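
For reference, a MongoDB 3.0 line in the new format looks like this (an illustrative example):

2015-07-10T09:31:57.030+0000 I NETWORK [initandlisten] waiting for connections on port 27017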

I'm looking into the rest of the patterns, but this one groks just fine in my tests. As soon as I find any others, I'll post them here if you're OK with that.

Thanks

Errors in java patterns

The patterns

  • JAVACLASS
  • JAVAFILE
  • JAVASTACKTRACEPART

in patterns/java have multiple, non-unique definitions. This causes issues: for example, the second definition of JAVACLASS (which takes precedence),

JAVACLASS (?:[a-zA-Z0-9-]+\.)+[A-Za-z0-9$]+

is wrong. Class qualifiers should be optional, and identifiers should match [a-zA-Z$_][a-zA-Z$_0-9]*, as in the first definition.
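
Following that suggestion, a corrected definition might look like this (a sketch, not the shipped pattern):

JAVACLASS (?:[a-zA-Z$_][a-zA-Z$_0-9]*\.)*[a-zA-Z$_][a-zA-Z$_0-9]*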

Logstash 1.5.3 on Windows won't read custom patterns from grok patterns_dir

Hi,

Logstash 1.5.3 on Windows won't read my custom patterns file. I use patterns_dir:

grok {
  patterns_dir => "C:/devtools/logstash-1.5.3/patterns"
}

It works only when I put the patterns file in:
C:\devtools\logstash-1.5.3\vendor\bundle\jruby\1.9\gems\logstash-patterns-core-0.1.10\patterns

Best regards

Missing nagios_type field in NAGIOS_HOST_NOTIFICATION pattern

From LOGSTASH-1677

The pattern named NAGIOS_HOST_NOTIFICATION doesn't capture NAGIOS_TYPE_HOST_NOTIFICATION into the nagios_type field.
It should be:
NAGIOS_HOST_NOTIFICATION %{NAGIOS_TYPE_HOST_NOTIFICATION:nagios_type}: %{DATA:nagios_notifyname};%{DATA:nagios_hostname};%{DATA:nagios_state};%{DATA:nagios_contact};%{GREEDYDATA:nagios_message}
...as per the rest of the patterns matching nagios_type

Indeed, NAGIOS_TYPE_HOST_NOTIFICATION is an anonymous capture in the NAGIOS_HOST_NOTIFICATION pattern definition:

NAGIOS_HOST_NOTIFICATION %{NAGIOS_TYPE_HOST_NOTIFICATION}: %{DATA:nagios_notifyname};%{DATA:nagios_hostname};%{DATA:nagios_state};%{DATA:nagios_contact};%{GREEDYDATA:nagios_message}

https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/nagios#L69
whereas all the other patterns use a named capture.

A specific HAProxy HTTP Pattern is not being matched

Hello,

This particular request on my HAProxy server is being tagged with ["_grokparsefailure"]. I am pasting the log event below. I have tried using the grok debugger and it says "No matches".

May 26 03:51:23 localhost haproxy[13489]: 199.16.156.124:17410 [26/May/2015:03:51:23.433] http-in web-cluster/web-4 0/0/0/13/13 200 14631 - - --NR 0/0/0/0/0 0/0 {} {} "GET /tivamoservice/mo/275-Josh-Herbert-Live-Performance-and-Q&A HTTP/1.1"

My gut feeling is that the '&' character in the URIPATHPARAM portion is the reason the pattern fails.
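
A common interim workaround (a sketch, not an official fix) is to capture the quoted request line with more permissive sub-patterns instead of URIPATHPARAM; the field names here are illustrative:

filter {
  grok {
    match => { "message" => "\"%{WORD:http_verb} %{DATA:http_request} HTTP/%{NUMBER:http_version}\"" }
  }
}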

It would be great if you could take a look at this.

Thank you,
Raghu

Why is Logstash parsing the year as only 2 digits?

I was using the following snippet to parse a customized Postgres logfile:

                grok {
                        match => {
                                "message" => [
                                  "%{DATESTAMP:timestamp_psql} %{TZ:tz} ...

which worked very well. As it turned out, Postgres sometimes logs multiline messages, so my first attempt was:

            multiline {
                    pattern => "^%{DATESTAMP}.*"
                    what => previous
                    negate => true
            }

which did not work. Looking at the JSON, I found:

"timestamp_psql": "15-07-10 09:31:57.030 UTC",

so the leading 20 is discarded. For most logfiles this should be totally fine, but for me it was very confusing. I guess grok somehow ignores leading and trailing data when pattern matching.
I'm now using

            multiline {
                    pattern => "^20%{DATESTAMP}.*"
                    what => previous
                    negate => true
            }

as the multiline filter (it works), but it's still weird.
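
The behaviour comes from the YEAR sub-pattern used by DATESTAMP, which accepts either two or four digits; since grok matches are unanchored, the engine is free to start matching after the leading 20. The relevant definitions (paraphrased from the shipped patterns) are:

YEAR (?>\d\d){1,2}
DATESTAMP %{DATE}[- ]%{TIME}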

Tests needed for Bacula backup software logs

Hi, #49 proposed some patterns for the Bacula backup software, but no tests were provided for them. If you know what these logs look like, it would be nice to have tests that validate the patterns work as expected.

All contributions are welcome, even just some log examples; then we can work out the tests for sure.

Patterns core having issues with the new LOGSTASH_HOME variable

From the CI environment:

Using logstash-filter-grok 0.1.10
Using bundler 1.7.6
Your bundle is complete!
Use `bundle show [gemname]` to see where a bundled gem is installed.
Using Accessor#strict_set for specs
NameError: uninitialized constant LogStash::Environment::LOGSTASH_HOME
    const_missing at org/jruby/RubyModule.java:2723
     pattern_path at /mnt/jenkins/rbenv/versions/jruby-1.7.16/lib/ruby/gems/shared/gems/logstash-core-1.5.0.rc4.snapshot2-java/lib/logstash/environment.rb:88
             Grok at /mnt/jenkins/rbenv/versions/jruby-1.7.16/lib/ruby/gems/shared/gems/logstash-filter-grok-0.1.10/lib/logstash/filters/grok.rb:217
           (root) at /mnt/jenkins/rbenv/versions/jruby-1.7.16/lib/ruby/gems/shared/gems/logstash-filter-grok-0.1.10/lib/logstash/filters/grok.rb:139
          require at org/jruby/RubyKernel.java:1065
          require at /mnt/jenkins/rbenv/versions/jruby-1.7.16/lib/ruby/gems/shared/gems/polyglot-0.3.5/lib/polyglot.rb:65
           (root) at /home/jenkins/workspace/logstash_plugin_patterns_core/logstash-patterns-core/spec/spec_helper.rb:1
          require at org/jruby/RubyKernel.java:1065
           (root) at /home/jenkins/workspace/logstash_plugin_patterns_core/logstash-patterns-core/spec/spec_helper.rb:2
             load at org/jruby/RubyKernel.java:1081
           (root) at /home/jenkins/workspace/logstash_plugin_patterns_core/logstash-patterns-core/spec/patterns/core_spec.rb:1
             each at org/jruby/RubyArray.java:1613

More details: http://build-eu-00.elastic.co/job/logstash_plugin_patterns_core/477/

Patterns for ASA-5525X

(This issue was originally filed by @mr-future at elastic/logstash#2889)


Hello,

I hope this can be of help. I wrote custom patterns for 90 different message IDs for Cisco ASA 5525X used as a VPN concentrator. Messages with severity code 6 or lower are parsed for multiple fields of interest. Severity code 7 messages are primarily parsed for group, ip, and user only. A few IDs have no values of interest and are matched without parsing so as to eliminate tags for grok parse failure.

I named the patterns from the message ID portion of the "ciscotag" field, e.g. ciscotag:ASA-7-713169 would match pattern ASA_713169. Some message IDs occur at multiple severity levels.

Patterns -> http://pastebin.com/7iW8HB7g
Logstash config -> http://pastebin.com/32xGAEuB

NOTE: ASA_713906_1 and ASA_713906_2 encompass 15 different possible formats! (In my config, the other messages are matched if [ciscotag] != "ASA-7-713906", and these are matched if [ciscotag] == "ASA-7-713906".)

Error in nagios pattern

Migrated from elastic/logstash#1562

In the nagios pattern file, there is a missing closing curly bracket in the NAGIOSLOGLINE definition.

On line 108, column 565:
%{NAGIOS_EC_LINE_DISABLE_HOST_CHECK|
to be replaced by
%{NAGIOS_EC_LINE_DISABLE_HOST_CHECK}|

(Note the missing closing curly brace.)

Make pattern elements matching integers emit integer fields

There are a number of patterns that match integers (or floats) via e.g. %{INT:foo} that are emitting string values for values that cannot be anything but numeric. This is an annoyance since it forces users to define their own Elasticsearch index templates with explicit mappings to get the fields correctly mapped in Elasticsearch. Users shouldn't have to do that if all they want to do is parse and visualize an Apache log; index templates should be for experienced users.

Example, with the problematic tokens being the %{INT:...} captures:

HAPROXYHTTP %{SYSLOGTIMESTAMP:syslog_timestamp} %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{DATA:captured_request_cookie} %{DATA:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} (\{%{HAPROXYCAPTUREDREQUESTHEADERS}\})?( )?(\{%{HAPROXYCAPTUREDRESPONSEHEADERS}\})?( )?"(<BADREQ>|(%{WORD:http_verb} (%{URIPROTO:http_proto}://)?(?:%{USER:http_user}(?::[^@]*)?@)?(?:%{URIHOST:http_host})?(?:%{URIPATHPARAM:http_request})?( HTTP/%{NUMBER:http_version})?))?"
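
As a stopgap, the grok filter already supports per-capture type coercion with an :int or :float suffix, which converts the captured string in the event; a minimal sketch:

filter {
  grok {
    match => { "message" => "%{IP:client_ip}:%{INT:client_port:int}" }
  }
}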

grok-patterns: IPORHOST - divergent result

Hello everyone,

given the following log line that I am examining via http://grokdebug.herokuapp.com/:

10.67.1.38 - - [14/Sep/2015:09:46:40 +0200] "GET /measures/search_filter HTTP/1.0" 304 - "http://sonarqube/" "Mozilla/5.0"

The expression %{IP}|%{HOSTNAME} results in a field IP with the value 10.67.1.38, as expected.

Now if I use the shortcut %{IPORHOST}, the IP field is empty and HOSTNAME contains the correct value. Is this behaviour normal?
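
This is expected given how IPORHOST was defined at the time: the HOSTNAME alternative comes first, and since HOSTNAME happily matches dotted digits, the IP alternative never gets a chance (later releases swapped the order):

IPORHOST (?:%{HOSTNAME}|%{IP})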
