
node-segment's Introduction

Chinese word segmentation module

node-segment

This module is based on the lexicon of the **PanGu Segment** component, and its algorithm design also draws in part on the algorithms of PanGu Segment.

Online demo: http://segment.ucdok.com/

This segmentation module has the following features:

  • Written in pure JavaScript; it can run on any ECMAScript5-compliant engine (with minor changes to parts of the code)
  • Associative word recognition based on part of speech
  • Custom segmentation modules can be written in JavaScript

1. Usage

Install:

$ npm install segment --save

Usage:

// Load the module
var Segment = require('segment');
// Create an instance
var segment = new Segment();
// Use the default recognition modules and dictionaries. Loading the dictionary
// files takes about 1 second; this only needs to run once at initialization.
segment.useDefault();

// Segment the text
console.log(segment.doSegment('这是一个基于Node.js的中文分词模块。'));

Result format:

[ { w: '这是', p: 0 },
  { w: '一个', p: 2097152 },
  { w: '基于', p: 262144 },
  { w: 'Node.js', p: 8 },
  { w: '的', p: 8192 },
  { w: '中文', p: 1048576 },
  { w: '分词', p: 4096 },
  { w: '模块', p: 1048576 },
  { w: '。', p: 2048 } ]

Here w is the word itself and p is its part-of-speech tag (see the definitions in https://github.com/leizongmin/node-segment/blob/master/lib/POSTAG.js for details).
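
Because p is a bit mask, a single word can carry several part-of-speech flags at once, and a flag can be tested with a bitwise AND. A minimal sketch (POSTAG is exported on the Segment constructor; the specific tag name D_N for nouns is an assumption here, so check POSTAG.js for the authoritative names):

// Minimal sketch: testing part-of-speech flags on the segmentation result.
// The tag name D_N (noun) is an assumption -- see lib/POSTAG.js for the
// authoritative tag names.
var Segment = require('segment');
var POSTAG = Segment.POSTAG;

var segment = new Segment();
segment.useDefault();

segment.doSegment('这是一个基于Node.js的中文分词模块。').forEach(function (word) {
  // p is a bit mask, so a word may match more than one tag
  var isNoun = (word.p & POSTAG.D_N) > 0;
  console.log(word.w, word.p, isNoun ? '(noun)' : '');
});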

Omitting part-of-speech tags

var text = '这是一个基于Node.js的中文分词模块。';
var result = segment.doSegment(text, {
  simple: true
});
console.log(result);

Result:

[ '这是', '一个', '基于', 'Node.js', '的', '中文', '分词', '模块', '。' ]

Stripping punctuation

var text = '这是一个基于Node.js的中文分词模块。';
var result = segment.doSegment(text, {
  stripPunctuation: true
});
console.log(result);

Result:

[ { w: '这是', p: 0 },
  { w: '一个', p: 2097152 },
  { w: '基于', p: 262144 },
  { w: 'Node.js', p: 8 },
  { w: '的', p: 8192 },
  { w: '中文', p: 1048576 },
  { w: '分词', p: 4096 },
  { w: '模块', p: 1048576 } ]

Converting synonyms

Load the synonym dictionary:

segment.loadSynonymDict('synonym.txt');

Dictionary format:

什么时候,何时
入眠,入睡

If convertSynonym=true is set when segmenting, "什么时候" in the result is converted to "何时" and "入眠" to "入睡":

var text = '什么时候我也开始夜夜无法入睡';
var result = segment.doSegment(text, {
  convertSynonym: true
});
console.log(result);

Result:

[ { w: '何时', p: 0 },
  { w: '我', p: 65536 },
  { w: '也', p: 134217728 },
  { w: '开始', p: 4096 },
  { w: '夜夜', p: 131072 },
  { w: '无法', p: 134217728 },
  { w: '入睡', p: 4096 } ]

Removing stopwords

Load the stopword dictionary:

segment.loadStopwordDict('stopword.txt');

Dictionary format:

之所以
因为

If stripStopword=true is set when segmenting, "之所以" and "因为" are removed from the result:

var text = '之所以要编写一个纯JS的分词器是因为当时没有一个简单易用的Node.js模块';
var result = segment.doSegment(text, {
  stripStopword: true
});
console.log(result);

Result:

[ { w: '编写', p: 4096 },
  { w: '纯', p: 1073741824 },
  { w: 'JS', p: [ 16 ] },
  { w: '分词', p: 4096 },
  { w: '器' },
  { w: '当时', p: 16384 },
  { w: '没有', p: 4096 },
  { w: '简单', p: 1073741824 },
  { w: '易用' },
  { w: 'Node.js', p: 8 },
  { w: '模块', p: 1048576 } ]

2. Dictionary format

Dictionary files are plain text with one word per line, in the format word|POS|weight, e.g.: 工信处|0x0020|100

POS (part-of-speech) values are defined in https://github.com/leizongmin/node-segment/blob/master/lib/POSTAG.js

A larger weight means the word occurs more frequently.

For sample dictionary files, see: https://github.com/leizongmin/node-segment/tree/master/dicts
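
As an illustration, a small custom dictionary can be written out and loaded like this (the file name mydict.txt and its single entry are invented for the example):

// Sketch: create and load a one-entry custom dictionary.
// Format per line: word|POS bit mask (hex)|weight
var fs = require('fs');
var path = require('path');
var Segment = require('segment');

fs.writeFileSync('mydict.txt', '工信处|0x0020|100\n');

var segment = new Segment();
segment.useDefault();
// Use an absolute path so the file is not looked up in the built-in dicts directory
segment.loadDict(path.resolve('mydict.txt'));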

3. Custom recognition modules

// Load the module
var Segment = require('segment');
// Create an instance
var segment = new Segment();
// Configure as needed; see the segment.useDefault() method for details
segment.use('URLTokenizer');  // Load a recognition module from lib/module, or pass the absolute path of a custom module
segment.loadDict('dict.txt'); // Load a dictionary from the dicts directory, or pass the absolute path of a custom dictionary file

// Segment the text
console.log(segment.doSegment('这是一个基于Node.js的中文分词模块。'));

The default configuration can normally be loaded with segment.useDefault(). For custom loading, refer to the code of useDefault():

segment
  // Tokenizer modules
  // Recognition of forcibly split tokens
  .use('URLTokenizer')            // URLs
  .use('WildcardTokenizer')       // Wildcards; must run before punctuation recognition
  .use('PunctuationTokenizer')    // Punctuation
  .use('ForeignTokenizer')        // Foreign characters and digits; must run after punctuation recognition
  // Chinese word recognition
  .use('DictTokenizer')           // Dictionary lookup
  .use('ChsNameTokenizer')        // Chinese personal names; recommended after dictionary lookup

  // Optimizer modules
  .use('EmailOptimizer')          // Email addresses
  .use('ChsNameOptimizer')        // Chinese personal name optimization
  .use('DictOptimizer')           // Dictionary result optimization
  .use('DatetimeOptimizer')       // Date/time optimization

  // Dictionary files
  .loadDict('dict.txt')           // PanGu dictionary
  .loadDict('dict2.txt')          // Extension dictionary (adjusts the original PanGu dictionary)
  .loadDict('names.txt')          // Common nouns and personal names
  .loadDict('wildcard.txt', 'WILDCARD', true)   // Wildcards
  .loadSynonymDict('synonym.txt')   // Synonyms
  .loadStopwordDict('stopword.txt') // Stopwords

Custom tokenizer:

segment.use({

  // Module type
  type: 'tokenizer',

  // Called by segment.use() when the module is loaded, at initialization
  init: function (segment) {
    // segment is the current Segment instance
  },

  // Split words
  split: function (words) {
    // words is an array of tokens, e.g. ['中文', '分词']
    // Return a new array to replace the old one
    return words;
  }

});

Custom optimizer:

segment.use({

  // Module type
  type: 'optimizer',

  // Called by segment.use() when the module is loaded, at initialization
  init: function (segment) {
    // segment is the current Segment instance
  },

  // Optimize
  doOptimize: function (words) {
    // words is the array of segmented tokens, e.g. [{w: '中文', p: 1048576}, {w: '分词', p: 4096}]
    // Return a new array to replace the old one
    return words;
  }

});
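
For instance, a trivial optimizer built on the interface above might drop whitespace-only tokens from the result (purely an illustration, not one of the built-in modules):

segment.use({

  // Module type
  type: 'optimizer',

  init: function (segment) {
    // nothing to set up for this example
  },

  doOptimize: function (words) {
    // Keep only tokens whose text is not pure whitespace
    return words.filter(function (word) {
      return word.w.trim().length > 0;
    });
  }

});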

For the default tokenizers and optimizers, see: https://github.com/leizongmin/node-segment/tree/master/lib/module

Modules named *Tokenizer are tokenizers, and modules named *Optimizer are optimizers.

Note

Do not use this module to segment long text that contains no punctuation at all; doing so causes segmentation time to increase severalfold.
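
If such input cannot be avoided, one possible workaround (an assumption on the caller's side, not an official API) is to feed the text to doSegment in smaller chunks, for example line by line:

// Hypothetical helper: segment long text line by line so that no single
// unpunctuated run becomes too expensive.
function segmentByLine(segment, text) {
  return text.split(/\r?\n/).reduce(function (result, line) {
    return result.concat(segment.doSegment(line));
  }, []);
}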

MIT License

Copyright (c) 2012-2015 Zongmin Lei (雷宗民) <[email protected]>
http://ucdok.com

The MIT License

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

node-segment's People

Contributors

blakmatrix, djuretic, leizongmin, zhike-smallwolf

node-segment's Issues

How do I get term frequency (TF)?

Given a piece of text, I need both the segmented words and their frequencies. How do I get them?
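
doSegment itself does not report frequencies, but they can be counted from its output. A minimal sketch, assuming a segment instance already initialized with useDefault():

// Count term frequency over the segmentation result
var counts = {};
segment
  .doSegment(text, { simple: true, stripPunctuation: true })
  .forEach(function (w) {
    counts[w] = (counts[w] || 0) + 1;
  });
console.log(counts);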

Synonym replacement cannot preserve the original structure: spaces and line breaks are lost

I want to perform a simple synonym replacement over an article, but with the approach below some spaces and line breaks are lost:

const text = fs.readFileSync('file.txt', 'utf8');
const result = segment.doSegment(text, {
  convertSynonym: true,
  simple: true
})
console.info(`finish with result: ${result.join('')}`)
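
One possible mitigation (an assumption, not a documented feature) is to convert the text line by line and re-join with the original line breaks, so at least the line structure survives:

// Hypothetical workaround: convert line by line so line breaks survive
// (spaces inside a line may still be dropped by the segmenter).
const converted = text
  .split('\n')
  .map(function (line) {
    return segment
      .doSegment(line, { convertSynonym: true, simple: true })
      .join('');
  })
  .join('\n');
console.info(`finish with result: ${converted}`);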

RangeError: Maximum call stack size exceeded

Hi! Thanks for open-sourcing such a great segmentation tool for everyone to learn from~

Today I generated a dictionary of 700,000 words. loadDict worked fine, but as soon as I called doSegment on any short phrase, it crashed with a blown call stack...

Array iteration errors under ES6

Some of the source code iterates over arrays with for..in. In a Node process started with the --harmony flag this throws an error. I have corrected the offending code and submitted a pull request.

Is there an option to disable associative merging and return only minimal words?

The module's automatic associative merging of words is normally a nice feature, but as a dependency of a pinyin module it causes trouble instead.
The resulting non-determinism greatly inflates the phrase-pinyin dictionary; things would be much easier to handle if association could be skipped and the minimal words returned (idioms excepted). For example:

Text | Actual result | Expected result
香港特别行政区 | 香港特别行政区 | 香港, 特别, 行政, 区
重庆市 | 重庆市 | 重庆, 市
重庆市政府 | 重庆市, 政府 | 重庆, 市, 政府
重庆市区 | 重庆, 市区 | 重庆, 市区
重庆市民 | 重庆, 市民 | 重庆, 市民

Note: "minimal words" may not be the precise term; an idiom, for example, can be split further but should be returned whole.

Different use cases have different requirements: for pinyin, splitting into minimal, accurate words works better, while for semantic analysis associative merging is probably preferable.

Looking forward to your thoughts 😃

BUG: ignored spaces

console.log(segment.doSegment("a a")); // [ { w: 'a', p: 16 }, { w: 'a', p: 16 } ]
console.log(segment.doSegment("一 一")); // [ { w: '一一', p: 6291456 } ]

Bug in the EmailOptimizer.js optimizer module

我正在参加抽奖活动:#2013易迅送你快乐到家#,奖品丰厚,你也赶快来参加吧!活动地址:http://url.cn/Ds2hyz @wzgdmje

For a weibo post like the one above, email recognition optimization ignores the space before the @ sign and takes no account of email length or domain-suffix features, so the whole sentence is recognized as a single email address.

特定的省市分不出来

console.log(segment.doSegment('2015-2016学年**乌鲁木齐九十八中七年级
(上)期中数学试卷', {stripStopword: true,stripPunctuation: true}));
[ { w: '2015', p: 4194304 },
{ w: '2016', p: [ 4194304 ] },
{ w: '学年', p: 1048576 },
{ w: '**乌鲁木齐', p: 1048576 },
{ w: '九十八', p: 6291456 },
{ w: '中', p: 33558528 },
{ w: '七年级', p: 2097152 },
{ w: '上', p: 33554432 },
{ w: '期中', p: 16384 },
{ w: '数学试卷', p: 1048576 } ]

不能将省(**)和市(乌鲁木齐)分出来,但是**的其他城市却能分出来(比如:**克拉玛依),而且分词的词典中也有**、乌鲁木齐这些词,不知道是不是一个bug

How to ignore meaningless words

For example, in

这是一个基于Node.js的中文分词模块。

some of the words have no real meaning. How can I ignore such words?
Also, what does p mean in the segmentation result { w: '空', p: 1073741824 }? And why do some words have no p property at all?
Thanks!!
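
The stopword mechanism documented above covers this case; a minimal sketch, assuming stopword.txt lists the words to drop, one per line:

// Drop the listed words during segmentation
segment.loadStopwordDict('stopword.txt');
var result = segment.doSegment('这是一个基于Node.js的中文分词模块。', {
  stripStopword: true
});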

Out of memory

0: ExitFrame [pc: 0x2cd055041bd]

Security context: 0x3c146121e6c9
1: push [0x3c1461205449](this=0x3c14e6e825a1 <JSArray[1210033]>,0x3c14ed885719 <JSArray[48]>)
2: getChunks(aka getChunks) [0x3c143f66a319] [/Users/jim/Share/lab/gui/awtk/awtk/tools/word_gen/node_modules/segment/lib/module/DictTokenizer.js:~277] [pc=0x2cd05a10c20](this=0x3c1472f022e1 ,wordpos=0x3c1470b02311 ,pos=...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x100033c31 node::Abort() [/usr/local/bin/node]
2: 0x1000353ea node::FatalTryCatch::~FatalTryCatch() [/usr/local/bin/node]
3: 0x10019bc9a v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
4: 0x10056a072 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
5: 0x100569029 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
6: 0x100566cb8 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
7: 0x1005659c8 v8::internal::Heap::HandleGCRequest() [/usr/local/bin/node]
8: 0x100519708 v8::internal::StackGuard::HandleInterrupts() [/usr/local/bin/node]
9: 0x1007caf01 v8::internal::Runtime_StackGuard(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node]
10: 0x2cd055041bd
11: 0x2cd05a51454

Dictionary format, field meanings, and the algorithm?

I'm writing a Chinese phonetic-annotation program that uses this excellent segmentation module.

I don't know much about segmentation, though. What do the fields in the dictionary mean, and how were the trailing values computed? For example:

茂盛|0x40000000|15108
茂县|0x0040|993
冒充|0x1000|12743
冒出|0x0000|703

How should the last two columns be understood, and how were they calculated?

I also found some words missing from the dictionary. For example, 冒顿 (mòdú) is the name of a Xiongnu chanyu; I'm not sure whether it belongs in the Chinese personal-names dictionary.

Also, is there a dictionary with phonetic annotations?

What is the format of a custom dictionary?

I want to define my own sensitive words. What is the format of a custom dictionary? In the dicts directory I see entries like:

周襄王|0x0080|101

I don't quite understand what the last two fields mean.

How to identify associated words

According to the readme, segmentation performs associative recognition based on part of speech.

After segmenting, is there a flag indicating which words were produced by association and which came from the dictionary? I'd like to build a dynamic vocabulary from the associated words.

Many thanks!

Cannot find module "URLTokenizer"

At

var filename = path.resolve(__dirname, 'module', module + '.js');
if (!fs.existsSync(filename)) {

where filename === "/module/URLTokenizer.js".

I'm importing it inside Electron.

Can we change the way modules are loaded?

Question: how do I make segmentation prefer longer words?

For example, "程序员的时间观" is segmented as 程序/员/的/时间/观, whereas the expected result is 程序员/的/时间/观. Both 程序员 and 程序 exist in the dictionary.

About synonyms: my code seems correct, yet nothing is converted

// Load the module
var Segment = require('segment');
//var POSTAG = Segment.POSTAG;
// Create an instance
var segment = new Segment();
// Use the default recognition modules and dictionaries; loading the dictionary files takes about 1 second, once at initialization
segment.useDefault();

segment.loadSynonymDict('synonym.txt');

var text = '什么时候我也开始夜夜无法入睡';
var result = segment.doSegment(text, {
    convertSynonym: true
});
console.log(result);

The output is:

D:\raclen\jsCode\participle>node app
[ { w: '什么时候', p: 0 },
  { w: '我', p: 65536 },
  { w: '也', p: 134217728 },
  { w: '开始', p: 4096 },
  { w: '夜夜', p: 131072 },
  { w: '无法', p: 134217728 },
  { w: '入睡', p: 4096 } ]

The version should be fine:

{
  "name": "segment",
  "main": "./index.js",
  "version": "0.1.1",
  "description": "Chinese word segmentation 中文分词模块",
  "keywords": [
    "segment",
    "chinese",
    "中文",
    "分词"
  ],

What is the problem here?

Feature request: options for plain segmentation without association, automatic traditional/simplified conversion, and name flags

No association of multiple words

The algorithm apparently uses word relatedness and other factors to merge words, for example:

熊出没:熊出没
狗出没:狗, 出没
惊天地泣鬼神:惊天地泣鬼神

For pinyin conversion this kind of association is very unhelpful and greatly inflates the dictionary.
Chinese has just over 3,000 polyphonic characters in total, yet more than 70,000 words in the segmentation dictionary contain one; over 40,000 of those can be found in a Chinese dictionary, partly in traditional-character form.

Automatic conversion of traditional characters

Segmentation of traditional and simplified words should not differ much. In most cases a single lexicon (e.g. a simplified one) would suffice: convert the text to simplified characters before segmenting, then convert back afterwards (with a check for whether conversion is needed at all).

This would make the lexicon much smaller. Where real differences exist, they could be corrected with special dictionaries and algorithms.

For reference, I found a traditional/simplified conversion module: https://github.com/RobinQu/simplebig

Type flags in the result set

Some surnames are polyphonic and read differently from the character's common reading. Where the algorithm segments a span as a personal name, it would be best to flag it as one, so that pinyin conversion can choose the more accurate reading.

This sentence cannot be parsed

Parsing this sentence gets no response:
无耻啊无耻,西工大图书馆 标题要长长长长长长长长长长长长长长长长长长长长长长长长长长长长长长长长长
