THUDM / codegeex4


CodeGeeX4-ALL-9B, a versatile model for all AI software development scenarios, including code completion, code interpreter, web search, function calling, repository-level Q&A and much more.

Home Page: https://codegeex.cn

License: Apache License 2.0

Python 73.48% Rust 26.52%

codegeex4's Introduction

🏠 Homepage | 🛠 Extensions (VS Code, Jetbrains) | 🤗 HF Repo | 🪧 HF DEMO

English | 中文

CodeGeeX4: Open Multilingual Code Generation Model

We introduce CodeGeeX4-ALL-9B, the open-source version of the latest CodeGeeX4 model series. It is a multilingual code generation model continually trained on GLM-4-9B, significantly enhancing its code generation capabilities. A single CodeGeeX4-ALL-9B model supports comprehensive functions such as code completion and generation, code interpreter, web search, function calling, and repository-level code Q&A, covering various scenarios of software development. CodeGeeX4-ALL-9B has achieved highly competitive performance on public benchmarks such as BigCodeBench and NaturalCodeBench. It is currently the most powerful code generation model with fewer than 10B parameters, even surpassing much larger general-purpose models, and it achieves the best balance between inference speed and model performance.

Model List

Model | Type | Seq Length | Download
codegeex4-all-9b | Chat | 128K | 🤗 Huggingface, 🤖 ModelScope, 🟣 WiseModel

Get Started

Ollama

CodeGeeX4 is now available on Ollama! Please install Ollama 0.2 or later and run the following command:

ollama run codegeex4

To connect the local model to our VS Code / Jetbrains extensions, please check the Local Mode Guideline.
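
You can also query the locally running model directly over Ollama's HTTP API. The snippet below is a minimal sketch (not part of the official docs), assuming Ollama is serving on its default port 11434:

import requests

# Ollama's default local endpoint; adjust if OLLAMA_HOST is set differently.
url = "http://localhost:11434/api/chat"
payload = {
    "model": "codegeex4",
    "messages": [{"role": "user", "content": "write a quick sort in Python"}],
    "stream": False,
}
response = requests.post(url, json=payload, timeout=300)
response.raise_for_status()
# With stream=False, the reply is a single JSON object containing the assistant message.
print(response.json()["message"]["content"])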

Hugging Face Transformers

Use transformers version 4.39.0 through 4.40.2 to quickly launch codegeex4-all-9b:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex4-all-9b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/codegeex4-all-9b",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).to(device).eval()
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "write a quick sort"}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(device)
with torch.no_grad():
    # Without an explicit limit, generate() stops at the default max length and truncates the answer.
    outputs = model.generate(**inputs, max_new_tokens=1024)
    outputs = outputs[:, inputs['input_ids'].shape[1]:]
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

vLLM

Use vllm==0.5.1 to quickly launch codegeex4-all-9b:

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# CodeGeeX4-ALL-9B
# max_model_len, tp_size = 1048576, 4
# If OOM, please reduce max_model_len or increase tp_size
max_model_len, tp_size = 131072, 1
model_name = "codegeex4-all-9b"
prompt = [{"role": "user", "content": "Hello"}]

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
llm = LLM(
    model=model_name,
    tensor_parallel_size=tp_size,
    max_model_len=max_model_len,
    trust_remote_code=True,
    enforce_eager=True,
    # If OOM, try using the following parameters
    # enable_chunked_prefill=True,
    # max_num_batched_tokens=8192
)
stop_token_ids = [151329, 151336, 151338]
sampling_params = SamplingParams(temperature=0.95, max_tokens=1024, stop_token_ids=stop_token_ids)

inputs = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
outputs = llm.generate(prompts=inputs, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)

To set up an OpenAI-compatible server via vLLM, see the OpenAI Compatible Server documentation for details:

python -m vllm.entrypoints.openai.api_server \
     --model THUDM/codegeex4-all-9b \
     --trust_remote_code
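
Once the server is up, any OpenAI-compatible client can talk to it. Below is a minimal sketch using the openai Python package, assuming the server listens on vLLM's default http://localhost:8000 and requires no API key:

from openai import OpenAI

# vLLM's OpenAI-compatible server defaults to port 8000; the API key is not checked locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="THUDM/codegeex4-all-9b",
    messages=[{"role": "user", "content": "write a quick sort in Python"}],
    temperature=0.2,
    max_tokens=1024,
)
print(response.choices[0].message.content)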

Rust-candle

CodeGeeX4 now supports the Candle framework; see the Repo.

CLI

Use Rust to launch codegeex4-all-9b:

	cd candle_demo
	cargo build -p codegeex4-cli --release --features cuda # for Cuda
	cargo build -p codegeex4-cli --release # for cpu
	./target/release/codegeex4-cli --sample-len 512

Tutorials

CodeGeeX4-ALL-9B provides four user guides to help users quickly understand and use the model:

All Functions

  1. System Prompt Guideline: This guide introduces how to use system prompts in CodeGeeX4-ALL-9B, including the VSCode extension official system prompt, customized system prompts, and some tips for maintaining multi-turn dialogue history.

  2. Infilling Guideline: This guide explains the VSCode extension official infilling format, covering general infilling, cross-file infilling, and generating a new file in a repository; a minimal sketch of the prompt format follows this list.

  3. Repository Tasks Guideline: This guide demonstrates how to use repository tasks in CodeGeeX4-ALL-9B, including QA tasks at the repository level and how to trigger the aicommiter capability of CodeGeeX4-ALL-9B to perform deletions, additions, and changes to files at the repository level.

  4. Local Mode Guideline: This guide introduces how to deploy CodeGeeX4-ALL-9B locally and connect it to Visual Studio Code / Jetbrains extensions.

These guides aim to provide a comprehensive understanding and facilitate efficient use of the model.
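
For illustration, the sketch below assembles an infilling prompt in the same format that appears in the completion example later on this page (a ###LANGUAGE/###MODE header followed by <|code_suffix|>, <|code_prefix|>, and <|code_middle|> markers). It is only a rough sketch; the Infilling Guideline remains the authoritative reference:

def build_infilling_prompt(prefix: str, suffix: str, language: str = "Python") -> str:
    # The model is expected to generate the code that belongs between the
    # prefix and the suffix, i.e. the text following <|code_middle|>.
    return (
        f"###LANGUAGE:{language}\n"
        f"###MODE:BLOCK\n"
        f"<|code_suffix|>{suffix}"
        f"<|code_prefix|>{prefix}"
        f"<|code_middle|>"
    )

# Example: ask the model to fill in the body of a function.
prompt = build_infilling_prompt(
    prefix="def add(a: int, b: int) -> int:\n    ",
    suffix="\n",
)
print(prompt)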

Evaluation

CodeGeeX4-ALL-9B is ranked as the most powerful model under 10 billion parameters, even surpassing general models several times its size, achieving the best balance between inference performance and model effectiveness.

Model | Seq Length | HumanEval | MBPP | NCB | LCB | HumanEvalFIM | CRUXEval-O
Llama3-70B-instruct | 8K | 77.4 | 82.3 | 37.0 | 27.4 | - | -
DeepSeek Coder 33B Instruct | 16K | 81.1 | 80.4 | 39.3 | 29.3 | 78.2 | 49.9
Codestral-22B | 32K | 81.1 | 78.2 | 46.0 | 35.3 | 91.6 | 51.3
CodeGeeX4-All-9B | 128K | 82.3 | 75.7 | 40.4 | 28.5 | 85.0 | 47.1

CodeGeeX4-ALL-9B scored 48.9 and 40.4 on the complete and instruct tasks of BigCodeBench, the highest scores among models with fewer than 20 billion parameters.

[Figure: BigCodeBench test results]

In CRUXEval, a benchmark for testing code reasoning, understanding, and execution capabilities, CodeGeeX4-ALL-9B presented remarkable results with its CoT (chain-of-thought) abilities. From easy code generation tasks in HumanEval and MBPP to very challenging tasks in NaturalCodeBench, CodeGeeX4-ALL-9B also achieved outstanding performance at its scale. It is currently the only code model that supports function call capabilities, and it even achieves a better execution success rate than GPT-4.

[Figure: Function call evaluation]

Furthermore, in the "Code Needle In A Haystack" (NIAH) evaluation, the CodeGeeX4-ALL-9B model demonstrated its ability to retrieve code within contexts of up to 128K tokens, achieving 100% retrieval accuracy on all Python scripts.


Details of the evaluation results can be found in the Evaluation.

License

The code in this repository is open source under the Apache-2.0 license. The model weights are licensed under the Model License. CodeGeeX4-9B weights are open for academic research. For users who wish to use the models for commercial purposes, please fill in the registration form.

Citation

If you find our work helpful, please feel free to cite the following paper:

@inproceedings{zheng2023codegeex,
  title={CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Benchmarking on HumanEval-X},
  author={Qinkai Zheng and Xiao Xia and Xu Zou and Yuxiao Dong and Shan Wang and Yufei Xue and Zihan Wang and Lei Shen and Andi Wang and Yang Li and Teng Su and Zhilin Yang and Jie Tang},
  booktitle={Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages={5673--5684},
  year={2023}
}

codegeex4's People

Contributors

donjuanplatinum, jasonyang170, rojas-diego, shaobozhang, stanislas0, xingyu-zhong, xinpeng-zhang


codegeex4's Issues

An error occurred. How should I handle this?

2024-07-05 18:43:44 - Invalid URL '': No scheme supplied. Perhaps you meant https://?
Traceback (most recent call last):
File "C:\Users\xxx\AppData\Local\Programs\Python\Python39\lib\site-packages\chainlit\utils.py", line 44, in wrapper
return await user_function(**params_values)
File "run.py", line 150, in main
for part in stream:
File "J:\Tools\CodeGeeX4\CodeGeeX4\repodemo\llm\api\codegeex4.py", line 21, in codegeex4
response = requests.post(url, json=data, headers=headers, verify=False, stream=True)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\sessions.py", line 575, in request
prep = self.prepare_request(req)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\sessions.py", line 486, in prepare_request
p.prepare(
File "C:\Users\xxx\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\models.py", line 368, in prepare
self.prepare_url(url, params)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\models.py", line 439, in prepare_url
raise MissingSchema(
requests.exceptions.MissingSchema: Invalid URL '': No scheme supplied. Perhaps you meant https://?

Following the official account video tutorial, I downloaded the project, installed the dependencies, and launched chainlit; after running it, this is what happened. The model files were also downloaded successfully.

After vLLM loads the model, no inference happens and the GPU stays fully occupied. What is going on?

The code is as follows:

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
max_model_len, tp_size = 131072, 1
model_name = "/models/codegeex4-all-9b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
llm = LLM(
    model=model_name,
    tensor_parallel_size=tp_size,
    max_model_len=max_model_len,
    trust_remote_code=True,
    enforce_eager=True,
)
stop_token_ids = [151329, 151336, 151338]
sampling_params = SamplingParams(temperature=0.95, max_tokens=1024, stop_token_ids=stop_token_ids)

vLLM output:

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 07-11 15:51:44 llm_engine.py:169] Initializing an LLM engine (v0.5.1) with config: model='/models/codegeex4-all-9b', speculative_config=None, tokenizer='/models/codegeex4-all-9b', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=/models/codegeex4-all-9b, use_v2_block_manager=False, enable_prefix_caching=False)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
WARNING 07-11 15:51:44 tokenizer.py:126] Using a slow tokenizer. This might cause a significant slowdown. Consider using a fast tokenizer instead.
WARNING 07-11 15:51:44 utils.py:562] Using 'pin_memory=False' as WSL is detected. This may slow down the performance.
INFO 07-11 15:55:24 model_runner.py:255] Loading model weights took 17.5635 GB

It occupies 40 GB of GPU memory; the same happens with the xformers backend.

completions not working

Running codegeex4 locally with ollama.
The completion request: curl -s http://localhost:62333/v1/completions -d @/tmp/p3.json | jq .

Request body:

{
  "model": "codegeex4",
  "messages": [
    {
      "role": "user",
      "content": "###LANGUAGE:Python\n###MODE:BLOCK\n<|code_suffix|>\n<|code_prefix|>def parse_nested_parens(paren_string: str) -> List[int]:\n    \"\"\" Input to this function is a string represented multiple groups for nested parentheses separated by spaces.\n    For each of the group, output the deepest level of nesting of parentheses.\n    E.g. (()()) has maximum two levels of nesting while ((())) has three.\n\n    >>> parse_nested_parens('(()()) ((())) () ((())()())')\n   [2, 3, 1, 3]\n\"\"\"\n<|code_middle|>"
    }
  ],
  "temperature": 0.2,
  "top_p": 0.95,
  "max_tokens": 64,
  "presence_penalty": 1,
  "stream": false,
  "stop": []
}

The response has "finish_reason": "load" and no text is generated. What is the reason?

Question about CodeGeeX4

Hey! 👋 I came across your CodeGeeX4 project. It's fantastic! Keep up the good work! 💪 Could you send me more details on Telegram? Also, please review my work and follow me on GitHub @nectariferous. Thanks!

With a long prompt, are Code Review results worse than GLM4-9B?

system: You are a code review bot. Please give a code review covering code quality, readability, spelling errors, maintainability, reusability, bugs, naming, etc.
prompt:

- You are a code review bot. Please give a code review covering code quality, readability, spelling errors, maintainability, reusability, bugs, and naming,

- There is no need to review code style, unused variables, or import paths,

- Answer in Chinese or with code

- You must output JSON text in the following format:
[
 {
 filename:"app.tsx", 
 
 review:[{code: line number, content:"review comment, markdown allowed here", score:"severity (critical, serious, moderate, minor)"}]
 }
]

- The code is prefixed with line numbers

- You only need to comment on code that differs between the versions before and after the change

- The code data follows:

Code before the change:
xxxx
 
Code after the change:
xxxxx

Parameters:

temperature=0.2,
presence_penalty=1.2,
top_p=0.95

Reproducing BigCodeBench Scores

Hi there,

We're trying to reproduce the scores reported on BigCodeBench using v0.1.7post2. As there is no chat template provided inside the HF tokenizer config, I slightly changed the code and used the default chat template. So far, I got 49.0 on Complete and 38.9 on Instruct. The reproduced Instruct performance is a bit lower than your reported one. I suspect you used a customized template during the evaluation. Could you share more details about your setup?

bigcode-project/bigcodebench#19

Cheers

is function calling supported?

I have tested this model with the same system prompt as GLM4. This model only emits the parameters, not the function name. For example, here, get_weather is not generated:

You  > what's the weather like in beijng?
A.I. > 
{"city_name": "beijing"}

ollama outputs GGGGGGGGG after running codegeex4 for a while

I run codegeex4 with ollama. After it has been running for a while, it outputs GGGGGGGGG, although it works normally right after loading. The GPUs are dual 4090s and the ollama version is 0.2.5. The problem is reproducible: after reloading it returns to normal, and after some time it happens again.
I saw a similar issue in the ollama project, and GLM4 seems to have a similar problem.

With codegeex started via ollama, the model keeps outputting GGGGGGG............. after a streaming response is not handled properly

I tried using codegeex started via ollama as a backend, getting results through the ollama API and using yield to process the streaming response as an iterator. During debugging, the program exited directly on the first iteration. When I accessed the API again (whether streaming or not), the generated result was always incomplete and ended with GGGGGGGGG......., and the "done" field was always false; after restarting the model, the output was normal again.

Abnormal output:
Snipaste_2024-07-25_16-27-39
After stopping the service:
Snipaste_2024-07-25_16-29-02
Accessing again works normally:
Snipaste_2024-07-25_16-30-26
I am not sure whether this is an ollama problem or a codegeex problem. It seems that the streaming response still being processed causes this bug. ^_^'

Reproducing the model's function call capability

I see there is a tool role in the function call example, but when I deploy with ollama only the system, user, and assistant roles are available. How should this tool role be set up?

The IDEA plugin does not support local mode

The demo shows that the editor extension can connect to the model tool in local mode, but after installing the latest version of the plugin in IDEA, I cannot find a page to enable local mode.

The codegeex plugin reports an error after upgrading IDEA to 2024.2

Error message: Plugin 'CodeGeeX' (version '2.13.0-223') is not compatible with the current version of the IDE, because it requires build 241.* or older but the current build is IU-242.20224.300

Can you provide a pretraining demo?

I want to pretrain on my own codebase using text files, but I currently get an error when calling create_datasets:

train_dataset, eval_dataset = create_datasets(tokenizer, args)

I hope the team can provide a demo, similar to starcoder2.

With remote VS Code, code completion and comment generation cannot be used; all other features work normally.

As the title says. Code completion and comment generation report the following errors.

2024-07-10 10:41:34.001 [info] Local mode state: false
2024-07-10 10:41:34.001 [info] CodeGeeX is now active
2024-07-10 10:41:36.106 [info] Registering commands...
2024-07-10 11:12:33.167 [error] AggregateError: 
	at internalConnectMultiple (node:net:1114:18)
	at internalConnectMultiple (node:net:1177:5)
	at Timeout.internalConnectMultipleTimeout (node:net:1687:3)
	at listOnTimeout (node:internal/timers:575:11)
	at process.processTimers (node:internal/timers:514:7)
2024-07-10 11:14:15.027 [info] 10/7/2024, 11:14:15 am [Error] [Add Comment] Internal error occurs
2024-07-10 11:14:37.670 [info] 10/7/2024, 11:14:37 am [Error] [Add Comment] Internal error occurs

How can this problem be solved?

Question about the choice of embedding model

The embedding model currently used in the demo is the online embedding-2 service. Since our company's development environment has no internet access, I would like to ask whether there is any plan to open-source the embedding-2 model.
Alternatively, would using another open-source embedding model affect the results? The material I can find online about RAG is basically all about natural-language text, and I am not sure whether code and natural language behave the same for vector conversion and retrieval.

The webstorm plugin causes large projects to hang

Problem: when using the codegeex plugin in webstorm, some large projects freeze, as shown in the screenshot below; in testing, the freeze lasts more than two or three hours.

d4a239e0c2cefc1da358be07b23a23f

Affected versions: see screenshot.

After disabling the plugin, the project opens instantly; after multiple rounds of verification, the plugin was confirmed to be the cause.

Project details:

The project uses the umi/dumi libraries, which generate a .umi cache directory under src containing a huge number of files. Although webstorm has already been configured to exclude this directory from indexing, the plugin still traverses and indexes the files in this directory, causing the freeze.

Expectation:

We hope this issue can be fixed.

vscode ollama connection failure

ollama has successfully started the codegeex4 model, but when connecting with codegeex, it keeps reporting: connection error, please confirm the model configuration.

Output is all zeros

inputs = tokenizer.apply_chat_template([{"role": "user", "content": "你是谁"}],
                                       add_generation_prompt=True, tokenize=True, return_tensors="pt",
                                       return_dict=True).to(device)
with torch.no_grad():
    print("inputs", inputs)
    outputs = model.generate(**inputs, max_length=512)
    print("outputs", outputs)
    outputs = outputs[:, inputs['input_ids'].shape[1]:]
    print("outputs_part", outputs)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

outputs_part tensor([[0, 0, 0, ..., 0, 0]], device='cuda:0')  (all generated token ids are 0)
