muvsfunc's People

Contributors

akarinvs, bodayw, dtlnor, kskshaf, littlepox, nsqy, nuevo009, omae-kumiko, saltychiang, sinsanction, stgn, wolframrhodium


muvsfunc's Issues

YAHRmask Bug

G'day!

I have found a bug in YAHRmask: calling it with useawarp4=True results in streaks of black and white dots. I've made an image comparison to illustrate this: https://slow.pics/c/Tnl72CAq

Sample Script:
import vapoursynth as vs
import fvsfunc as fvf
import muvsfunc as muf
core = vs.core

src = core.d2v.Source(r'oo_sample.d2v', rff=False)
src = fvf.Depth(src, 16)
#src = muf.YAHRmask(src, expand=10)
src = muf.YAHRmask(src, expand=10, useawarp4=True)
src = src.std.Crop(left=0, top=4, right=2, bottom=0)
src = src.resize.Spline36(1280, 720) #for image comparison
src = fvf.Depth(src, 10)
src.set_output()

Sample:
https://www.dropbox.com/s/hz4vep9jfarlk98/oo_sample.rar?dl=0

Tested on VS R58.

MDSI error: Expr: Failed to convert 'inf' to float

With high noise, the call fails with: Expr: Failed to convert 'inf' to float

import muvsfunc as muv
clip = core.lsmas.LWLibavSource(r"d:\1080p.mkv")
noise = clip.grain.Add(var=200)

noise = core.resize.Bicubic(noise, format=vs.RGB24, matrix_in_s="709")
clip = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s="709")

clip = muv.MDSI(noise, clip, down_scale = 1)
clip.set_output()

Tested in vseditor (benchmark) and vspipe

See also https://forum.doom9.org/showthread.php?p=1866304

TypeError: GradFun3: "thr" must be an int or a float!

Hi.

I'm getting this error with the current git master version. This is the call:

deband = muvsfunc.GradFun3(src=nr16, thr=0.25, radius=12, smode=2, mask=0, ampn=0, lsb=True)

GradFun3 always raises this error whenever thr is not None, so the type check itself seems to be broken.
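A guess at the cause (not verified against the source): a boolean slip in the type check would reject every value, for example

# hypothetical reconstruction of the faulty check
if thr is not None and (not isinstance(thr, int) or not isinstance(thr, float)):
    # this condition is true for every thr, since no value is both an int and a float
    raise TypeError('GradFun3: "thr" must be an int or a float!')

whereas the intended check would be

if thr is not None and not isinstance(thr, (int, float)):
    raise TypeError('GradFun3: "thr" must be an int or a float!')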

'origsuper' is not defined in SharpAAMcmod

Found another bug:

muvsfunc.py", line 842, in SharpAAMcmod
fv2 = core.mv.Analyse(origsuper, isb=False, delta=2, overlap=aaov, blksize=aablk) if tradius >= 2 else None
NameError: name 'origsuper' is not defined

My function call:
clip = muv.SharpAAMcmod(clip, dark=0.4, thin=1, sharp=50, smooth=-1, stabilize=True, aapel=4, aablk=4, aaov=4, aatype="sangnom")

It works with stabilize=False.
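A hedged guess at the fix, based only on the names in the traceback and the call above (the variable orig for the unprocessed clip is an assumption): the stabilize branch analyses motion on a Super clip of the original frames, so that clip has to be created before mv.Analyse is called, e.g.

# sketch: build the missing Super clip before the Analyse calls in the stabilize branch
origsuper = core.mv.Super(orig, pel=aapel)  # 'orig' = unprocessed clip, assumed name
fv2 = core.mv.Analyse(origsuper, isb=False, delta=2, overlap=aaov, blksize=aablk) if tradius >= 2 else None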

Cdeblend weird behaviour

I don't know how to describe it, but it seems that Cdeblend sometimes does "nothing", as if it isn't really deblending at all.

With omode=4 it always shows 0.0, but for some reason it worked once and showed the correct values plus the BLEND string.

I restarted vseditor multiple times and tried seeking from frame 0; nothing seems to help.
The AviSynth version worked correctly in all my tests.

I have no idea what the issue could be. Can you reproduce it?
Python 3.10, VS R59, Windows 10 x64

clip = clip.vivtc.VFM(0)
vs=muf.Cdeblend(clip).text.Text("Cdeblend Vapoursynth")
avs = core.avsw.Eval('cdeblend()', clips=[clip], clip_names=["last"]).text.Text("Cdeblend Avisynth")
clip=core.std.StackHorizontal([vs, avs])

Test file https://www.dropbox.com/s/kl7llfc2vsy29l4/blend%20_sample-001.mkv?dl=1

Local Statistics Matching bug

Found bug with Local Statistics Matching where it causes the effect shown here: https://slow.pics/c/JxDvBZYS
Not sure what the fix is.

The issue persists on some frames no matter what I do. Setting a higher radius seems to make it much more noticeable. I've also tried adding std.Limiter calls to the source clips and/or after the LocalStatisticsMatching call, and it still happens.

'clip' is not defined

I tried some of the AA functions, but every time I got the error 'clip' is not defined.
I'm using the latest VapourSynth version, R35 x64.

muvsfunc.py", line 731, in nnedi3aa
last = core.fmtc.resample(last, a.width, a.height, [-0.5, -0.5 * (1 NameError: name 'clip' is not defined
muvsfunc.py", line 712, in ediaa
last = core.fmtc.resample(last, a.width, a.height, [-0.5, -0.5 * (1 NameError: name 'clip' is not defined

It's this line:
last = core.fmtc.resample(last, a.width, a.height, [-0.5, -0.5 * (1 << clip.format.subsampling_w)], [-0.5, -0.5 * (1 << clip.format.subsampling_h)], kernel='spline36')

I think it should be 1 << a.format..., since the clip parameter is named a ("def nnedi3aa(a):").
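For reference, the corrected call would presumably read as follows (a sketch; only the clip reference changes from clip to a):

last = core.fmtc.resample(
    last, a.width, a.height,
    [-0.5, -0.5 * (1 << a.format.subsampling_w)],
    [-0.5, -0.5 * (1 << a.format.subsampling_h)],
    kernel='spline36')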

Trying to understand inpainting + OpenCV

Looking at the OpenCV-Python for VapourSynth wiki (https://github.com/WolframRhodium/muvsfunc/wiki/OpenCV-Python-for-VapourSynth), I wanted to use the sample clip and logo from there to test the inpainting.
Not really knowing what I'm doing, I tried with:

# Imports
import vapoursynth as vs
import os
import ctypes
# Loading Support Files
Dllref = ctypes.windll.LoadLibrary("i:/Hybrid/64bit/vsfilters/Support/libfftw3f-3.dll")
import sys
# getting Vapoursynth core
core = vs.core
# Import scripts folder
scriptPath = 'i:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/GrainFilter/RemoveGrain/RemoveGrainVS.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/GrainFilter/AddGrain/AddGrain.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DenoiseFilter/FFT3DFilter/fft3dfilter.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DenoiseFilter/DFTTest/DFTTest.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/EEDI3m.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/vsznedi3.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/libmvtools.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/scenechange.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/libimwri.dll")
# Import scripts
import havsfunc
# source: 'G:\TestClips&Co\files\MPEG-2\ZDF_Logoremoval_Disco 1977-04 077 576p Digi-TVRip - Stormjoe .mkv'
# current color space: YUV420P8, bit depth: 8, resolution: 720x576, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: top field first
# Loading G:\TestClips&Co\files\MPEG-2\ZDF_Logoremoval_Disco 1977-04 077 576p Digi-TVRip - Stormjoe .mkv using LWLibavSource
clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/files/MPEG-2/ZDF_Logoremoval_Disco 1977-04 077 576p Digi-TVRip - Stormjoe .mkv", format="YUV420P8", stream_index=0, cache=0, prefer_hw=0)
# Setting color matrix to 470bg.
clip = core.std.SetFrameProps(clip, _Matrix=5)
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# setting field order to what QTGMC should assume (top field first)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=2)
# Deinterlacing using QTGMC
clip = havsfunc.QTGMC(Input=clip, Preset="Fast", TFF=True) # new fps: 25
# make sure content is perceived as frame based
clip = core.std.SetFieldBased(clip, 0)
clip = clip[::2]

# resize to square pixel
clip = core.resize.Bicubic(clip=clip, width=768, height=576)


input = clip

##### INPAINTING CODE START

import cv2
import muvsfunc_numpy as mufnp
import numpy as np

def inpaint_core(img, mask, radius=1, flags=cv2.INPAINT_NS):
    dst = np.empty_like(img)
    cv2.inpaint(img, mask, inpaintRadius=radius, flags=flags)
    return dst

mask = core.imwri.Read(["C:/Users/Selur/Desktop/logo.png"])
mask = core.std.BinarizeMask(mask,threshold=16)
mask = core.resize.Bicubic(clip=mask, format=vs.YUV420P8, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")

inpainting = mufnp.numpy_process([clip, mask], inpaint_core, radius=1, flags=cv2.INPAINT_NS)
clip = core.std.MaskedMerge(clip, inpainting, mask)

##### INPAINTING CODE END

# set output frame rate to 25fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)

clip = core.std.StackHorizontal([input, clip])
# Output
clip.set_output()

It's not crashing, but the output isn't what I was hoping for. From the looks of it, there is some color space issue.

-> It would be nice if someone could post an example of how to use inpainting with the given clip & logo. :)

Thanks!
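A rough sketch of how the inpainting block might look instead (untested against this clip; it assumes cv2.inpaint wants an 8-bit RGB image plus an 8-bit single-plane mask, and that the result of cv2.inpaint has to be kept rather than discarded; clip refers to the deinterlaced, resized clip right before the INPAINTING CODE START marker):

import cv2
import muvsfunc_numpy as mufnp

def inpaint_core(img, mask, radius=1, flags=cv2.INPAINT_NS):
    # cv2.inpaint returns the restored image; the original snippet dropped this return value
    return cv2.inpaint(img, mask, inpaintRadius=radius, flags=flags)

# feed OpenCV RGB plus a single-plane mask instead of subsampled YUV420P8
rgb = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s="470bg")
mask = core.imwri.Read(["C:/Users/Selur/Desktop/logo.png"])
mask = core.resize.Bicubic(mask, format=vs.GRAY8, matrix_s="470bg")
mask = core.std.Binarize(mask, threshold=16)
mask = mask * rgb.num_frames  # assumption: repeat the single mask frame to match the clip length

inpainted = mufnp.numpy_process([rgb, mask], inpaint_core, radius=1, flags=cv2.INPAINT_NS,
                                input_per_plane=False, output_per_plane=False)
clip = core.resize.Bicubic(inpainted, format=vs.YUV420P8, matrix_s="470bg")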

Srestore omode=pp returns error

After trying to see how well this srestore port runs, I can't use any of the omode='pp1', 'pp2', or 'pp3' modes; none of them work, they get stuck and report the following error:

Script evaluation failed:
Python exception: Exceeds the limit (4300 digits) for integer string conversion; use sys.set_int_max_str_digits() to increase the limit

Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 2886, in vapoursynth._vpy_evaluate
  File "src/cython/vapoursynth.pyx", line 2887, in vapoursynth._vpy_evaluate
  File "P3OP.py", line 91, in <module>
    deblend = muf.srestore(deint, omode='pp2')
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Scripts/muvsfunc.py", line 8345, in srestore
    .format(i=scale(4, peak), peak=peak, j=scale(200, peak), k=scale(28, peak))
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Exceeds the limit (4300 digits) for integer string conversion; use sys.set_int_max_str_digits() to increase the limit

I'm really not sure what's wrong here or what I'm doing wrong. If it helps, I'm working with a clip with the following stats:

Width: 640
Height: 368
Frames: 2800
FPS: 30000/1001 (29.970 fps)
Format Name: YUV420P16
Color Family: YUV
Alpha: No
Sample Type: Integer
Bits: 16
SubSampling W: 1
SubSampling H: 1

And I'm running Python 3.11 with VapourSynth R58.
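As a stopgap (suggested by the error message itself, not a fix for the underlying expression-building code in srestore), the Python 3.11 integer-to-string digit limit can be raised or disabled before calling the function:

import sys
# 0 disables the int-to-str digit limit; any value >= 640 sets a higher limit instead
sys.set_int_max_str_digits(0)
deblend = muf.srestore(deint, omode='pp2')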

GuidedFilter bug

Most likely in Line 2987, the filter attempts to divide by zero.
kk = -4 / (frameMin - alpha)
It happened on a black frame about 1200 frames into an encode I had running; disabling the GuidedFilter line allowed me to preview it.
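A minimal guard along these lines would avoid the division by zero on constant (e.g. all-black) frames; the names are taken from the quoted line and the epsilon is an arbitrary choice, so this is only a sketch:

denom = frameMin - alpha
if abs(denom) < 1e-6:
    denom = -1e-6 if denom <= 0 else 1e-6  # keep the sign, but never divide by zero
kk = -4 / denom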

SuperRes.py - KNLMeansCL: 'rclip' does not match the source clip!

Is SuperRes.py up to date?
I get this error:

diff = core.knlm.KNLMeansCL(diff, rclip=highRes, **knlm_args)
vapoursynth.Error: knlm.KNLMeansCL: 'rclip' does not match the source clip!

https://github.com/WolframRhodium/muvsfunc/blob/master/Collections/SuperRes.py#L41

input = mvf.Depth(clip, 16) # 1080p YUV420 source
target_width = 3840
target_height = 2160

upsampleFilter = partial(nnrs.nnedi3_resample, target_width=target_width, target_height=target_height)
clip = SuperRes(input, target_width, target_width, upsampleFilter1=upsampleFilter)

SSIM

This is a pseudo-implementation of the SSIM downsampler with slight modification.

I took a look at the pseudocode of the algorithm and have some questions.

L ← subSample(convValid(H, P(s)), s)
L2 ← subSample(convValid(H², P(s)), s)

The original code convolves the image with an averaging filter of size s×s before downscaling, but yours doesn't; is that intentional?

I also hope you can modify the code so that people can select the kernel they want when using fmtconv.
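For reference, the convolve-then-subsample step from the pseudocode could be approximated in VapourSynth roughly like this (a sketch only: it assumes an integer scaling factor s and a single-plane clip, and std.BoxBlur only approximates the s×s average with an odd-sized window):

import vapoursynth as vs
core = vs.core

def average_then_subsample(clip, s):
    # convValid(H, P(s)): mean over (roughly) s x s patches
    blurred = core.std.BoxBlur(clip, hradius=s // 2, vradius=s // 2)
    # subSample(..., s): keep every s-th sample
    return core.resize.Point(blurred, clip.width // s, clip.height // s)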

list of plugin/reqeriments in the readme

Hi

I think posting in the readme the list of plugins/scripts needed to work with these scripts would help people install the dependencies before hitting an error.

For example, see the error in #30.

The VSDB has a dependency list, but it seems outdated and doesn't cover optional dependencies.

Greetings

Seesaw

In function SeeSaw(), line 2077: Szrp = Szp / pow(Sstr, 0.25) / pow((ssx + ssy) / 2, 0.5)
Szrp is not used anywhere (it should be Szp).
Original AVS code: Szp = Szp / pow(Sstr, 1.0/4.0) / pow((ssx+ssy)/2.0, 1.0/2.0)
Also, peak is not used in the main function; only sharpen2 needs it.

MDSI, SSIM and GMSD map handling

The map return types of these three metrics are inconsistent. What do you think about storing the maps using std.ClipToProp()?

Of the three I prefer MDSI's output, so simply mirroring its behavior is probably fine too.
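For illustration, the proposal would look roughly like this (a sketch; some_metric and its show_map-style return convention are hypothetical placeholders, only std.ClipToProp/std.PropToClip are the actual API being suggested):

# attach the per-frame quality map to the score clip as a frame property
score_clip, map_clip = some_metric(clip_a, clip_b, show_map=True)  # hypothetical call
out = core.std.ClipToProp(score_clip, map_clip, prop='QualityMap')
# downstream code can recover the map with
map_again = core.std.PropToClip(out, prop='QualityMap')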

Documentation

I was wondering if you'd document the functions you've written (e.g. with usage examples) so that people will know how to use them. Currently there are many scripts, but most of them have little to no documentation or sample usage. I'd like to learn, and I know this is a tedious request.

Please let me know.

Thank you.

Linear and gamma

In function firniture(), the method I proposed is not usable in most cases.
I think your original implementation is better. :octocat:

Potential for optimization: Gradfun3

Hi Wolfram,

It has come to my attention that your port of Gradfun3 is significantly slower than the Avisynth version and the modified version from https://github.com/Dogway/Avisynth-Scripts/tree/master/EX%20mods

To match the speed of the AviSynth version, significantly more CPU cycles must be used. Testing was done on my Windows 10 VM (AVS on Linux is a nightmare!):

Python 3.9.5 (tags/v3.9.5:0a7dcbd, May  3 2021, 17:27:52) [MSC v.1928 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license()" for more information.
>>> import vapoursynth
>>> print(vapoursynth.core.version())
VapourSynth Video Processing Library
Copyright (c) 2012-2021 Fredrik Mellbin
Core R57
API R4.0
API R3.6
Options: -

>>> 

muvsfunc.GradFun3(img, thr=0.35, thrc=0.35, radius=17, elast=3.0, elastc=3.0, mask=2, smode=0)
Output 240 frames in 29.42 seconds (8.16 fps)

Measured with https://forum.doom9.org/showthread.php?t=174797
AviSynth+ 3.7.1 (r3593, master, x86_64) (3.7.1.0):

GradFun3 (8-bit input)
FPS (min | max | average): 4.694 | 29.31 | 26.97
ex_GradFun3 (16-bit input)
FPS (min | max | average): 15.34 | 31.50 | 23.37

VapourSynth script:

import vapoursynth as vs
core = vs.core
core.num_threads = 1

import muvsfunc

img = core.imwri.Read(r'banding.png')
img = core.resize.Bicubic(img, format=vs.YUV420P16, matrix_s='709')*240
gradfun = muvsfunc.GradFun3(img, thr=0.35, thrc=0.35, radius=17, elast=3.0, elastc=3.0, mask=2, smode=0)
gradfun.set_output()

AviSynth script:

ImageSource("banding.png")
Trim(0, 240)
ConvertToYUV420(matrix="709")

exMod = ConvertBits(16)
exMod = ex_GradFun3(exMod, thr=0.35, thrc=0.35, radius=17, elast=3.0, elastc=3.0, mask=2, smode=0)
exMod

MDSI Expr: Failed to convert 'inf' to float

2019-02-09 03:42:41.410
Error on frame 2585 request:
Expr: Failed to convert 'inf' to float
2019-02-09 03:42:43.545
Error on frame 5687 request:
Expr: Failed to convert 'inf' to float
Error on frame 11891 request:
Expr: Failed to convert 'inf' to float
Error on frame 15897 request:
Expr: Failed to convert 'inf' to float
Error on frame 19258 request:
Expr: Failed to convert 'inf' to float
Error on frame 22360 request:
Expr: Failed to convert 'inf' to float
Error on frame 26625 request:
Expr: Failed to convert 'inf' to float
Error on frame 29856 request:
Expr: Failed to convert 'inf' to float
Error on frame 37998 request:
Expr: Failed to convert 'inf' to float
Error on frame 49243 request:
Expr: Failed to convert 'inf' to float
Error on frame 55317 request:
Expr: Failed to convert 'inf' to float
2019-02-09 03:42:58.594
Error on frame 41102 request:
Expr: Failed to convert 'inf' to float
Error on frame 41102 request:
Expr: Failed to convert 'inf' to float
Error on frame 41099 request:
Expr: Failed to convert 'inf' to float
Error on frame 41099 request:
Expr: Failed to convert 'inf' to float
Error on frame 41102 request:
Expr: Failed to convert 'inf' to float
2019-02-09 03:43:05.610
Error on frame 144109 request:
Expr: Failed to convert 'inf' to float
Error on frame 148504 request:
Expr: Failed to convert 'inf' to float
Error on frame 153932 request:
Expr: Failed to convert 'inf' to float
2019-02-09 03:43:11.504
Error on frame 165822 request:
Expr: Failed to convert 'inf' to float
Error on frame 169829 request:
Expr: Failed to convert 'inf' to float
2019-02-09 03:43:33.898
Error on frame 54800 request:
Expr: Failed to convert 'inf' to float
Error on frame 42780 request:
Expr: Failed to convert 'inf' to float
Error on frame 34379 request:
Expr: Failed to convert 'inf' to float
2019-02-09 03:43:37.734
Error on frame 70956 request:
Expr: Failed to convert 'inf' to float

SeeSaw limit

When Slimit < 0, the sharpdiff is normalized to the 8-bit range before doing pow(diff, 1/abs(limit)),
so SLIM doesn't need to be scaled by bit depth when Slimit < 0.
SLIM = scale(Slimit, peak) if Slimit >= 0 else abs(Slimit) should solve this problem.

Error getting frame: all input arrays must have the same shape

Hello, I am starting to use OpenCV with VapourSynth.

I tried to play the pencilsketch sample, but when rendering the video test.webm with mpv I get this message:
[ffmpeg/demuxer] vapoursynth: Error getting frame: all input arrays must have the same shape

My test code is:

import numpy as np
import cv2
import muvsfunc_numpy as mufnp

import vapoursynth as vs
from  vapoursynth import core

video ="tests/test.webm"
import os
script_dir = os.path.dirname(__file__)
rel_path = video
abs_file_path=os.path.join(script_dir, rel_path)
rgb = core.ffms2.Source(source= abs_file_path)


def resize_core2(img, w, h):
    return cv2.resize(img, dsize=(w, h), interpolation=cv2.INTER_CUBIC)

#clip=rgb
#pencil_core2 = lambda img: cv2.pencilSketch(img)[1]
# This time we define a function explicitly rather than using anonymous function
def pencil_core(original_img):
    w=1280
    h=720
    # resize image fix it? no
    resized_img = resize_core2(original_img, w, h)
    img=cv2.pencilSketch(resized_img)[1]
    return img
    
#clip = mufnp.numpy_process(rgb, pencil_core2, input_per_plane=False, output_per_plane=False)
clip = mufnp.numpy_process(rgb, pencil_core, input_per_plane=False, output_per_plane=False)
#clip=rgb
#print(clip)

import mvsfunc as mvf
clip = mvf.ToYUV(clip,matrix = "709", css = "420", depth = 8) 
clip.set_output()	

Normally all video frames have the same size as the video, so I don't understand.
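A likely explanation, though only a guess without checking muvsfunc_numpy itself: the array returned by the callback is written back into frames of the input clip, so it has to keep that clip's width and height; resizing inside pencil_core changes the shape and triggers the error. Resizing with a VapourSynth resizer before numpy_process, and leaving the frame size untouched inside the callback, should avoid it. A sketch (the format/matrix conversion is an assumption about the source):

rgb = core.resize.Bicubic(rgb, width=1280, height=720, format=vs.RGB24, matrix_in_s="709")
clip = mufnp.numpy_process(
    rgb,
    lambda img: cv2.pencilSketch(img)[1],  # keep the frame size unchanged here
    input_per_plane=False, output_per_plane=False)
clip = mvf.ToYUV(clip, matrix="709", css="420", depth=8)
clip.set_output()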

There is no attribute or namespace named cas, SharpAAMcmod

File "C:\Users\User\AppData\Roaming\Python\Python39\site-packages\muvsfunc.py", line 1147, in SharpAAMcmod
postsh = haf.LSFmod(aa, strength=sharp, overshoot=1, soft=smooth, edgemode=1)
File "C:\Users\User\AppData\Roaming\Python\Python39\site-packages\havsfunc.py", line 4921, in LSFmod
normsharp = pre.cas.CAS(sharpness=min(Str, 1))
File "src\cython\vapoursynth.pyx", line 1441, in vapoursynth.VideoNode.getattr
AttributeError: There is no attribute or namespace named cas

getnative error

muvsfunc.py", line 5733, in getnative
assert isinstance(clip, vs.VideoNode) and clip.format.id == vs.GRAYS and clip.num_frames == 1
AssertionError

I used your example

result1 = muf.getnative(clip, kernel="bicubic")
result2 = muf.getnative(clip, kernel="lanczos")
result3 = muf.getnative(clip, kernel="spline36")
last = core.std.Splice([result1, result2, result3])
last.set_output()

But even with

clip = mvf.GetPlane(clip, 0)
clip = clip[2000]
## Frames: 1 | Time: 0:00:00.042 | Size: 1920x1080 | FPS: 24000/1001 = 23.976 | Format: Gray8

the error is still shown.
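The assertion on line 5733 requires a single-frame clip in 32-bit float grayscale (vs.GRAYS), while the clip above is Gray8, so converting before the call should satisfy it. A sketch:

clip = mvf.GetPlane(clip, 0)
clip = core.resize.Point(clip, format=vs.GRAYS)  # 8-bit gray -> 32-bit float gray
clip = clip[2000]                                # single frame
result1 = muf.getnative(clip, kernel="bicubic")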

FixTelecinedFades is broken

Example:
https://slow.pics/c/duWWKBW1

topAvg = 0.00002433726434429883
bottomAvg = 0.0016658474283489072
meanAvg = 0.000845092346346603

meanAvg / topAvg = 34.7242128117235

Pixels end up being multiplied by 34x, which results in these random artifacts.

Typo

In muvsfunc_numpy, line 1352 should be thr /= (2 ** bits) - 1 instead of thr /= (2 ** bit) - 1.
IMHO, vs.core is better than vs.get_core(), and should be used as a drop-in replacement.

DeHaloHmod (8-16 bit support)

Hello!

Please add DeHaloHmod to muvsfunc. It has been ported to VS, but I'm not sure about the port since I wasn't able to verify it fully on my end. The main mask is great, but in my tests it misses lines/spots and malfunctions in some places when called with default settings (apart from the radius parameter); increasing the radius doesn't improve the mask's accuracy. After some testing, the 2016 revision seems more effective than the latest revision, so I'm not sure the latest revision is worth it. The masking issue mentioned above is present in both. I'm hoping it could be fixed/improved upon.

AVS script:
src()
DeHaloHmod(radius=3, maska=True)

Image Comparison:
https://slow.pics/c/15GEO6GO

VS port: https://pastebin.com/2uziEgxr
AVS: https://pastebin.com/raw/tA7aHtYP (2016)
AVS revision: https://raw.githubusercontent.com/realfinder/AVS-Stuff/master/avs%202.5%20and%20up/DeHaloH.avsi (2021)

Kind Regards,
B.
