RedditImageGrab's Introduction

RedditImageGrab

I created this script to download the latest (and greatest) wallpapers from image subreddits like wallpaper to keep my desktop wallpaper fresh and interesting. The main idea is that the script downloads any JPEG- or PNG-formatted image it finds listed in the specified subreddit into a folder.

Requirements:

  • Python 2 (Python 3 might be supported via 2to3, but see for yourself and report back).
  • Optional requirements: listed in setup.py under extras_require.

Usage:

See ./redditdl.py --help for up-to-date details.

redditdl.py [-h] [--multireddit] [--last l] [--score s] [--num n]
                 [--update] [--sfw] [--nsfw]
                 [--filename-format FILENAME_FORMAT] [--title-contain TEXT]
                 [--regex REGEX] [--verbose] [--skipAlbums]
                 [--mirror-gfycat] [--sort-type SORT_TYPE]
                 <subreddit> [<dest_file>]

Downloads files with specified extension from the specified subreddit.

positional arguments:

<subreddit>           Subreddit name.
<dest_file>           Dir to put downloaded files in.

optional arguments:

-h, --help            show this help message and exit
--multireddit         Take a multireddit instead of a subreddit as input. If
                    so, provide /user/m/multireddit-name as the argument.
--last l              ID of the last downloaded file.
--score s             Minimum score of images to download.
--num n               Number of images to download.
--update              Run until you encounter a file already downloaded.
--sfw                 Download safe for work images only.
--nsfw                Download NSFW images only.
--regex REGEX         Use Python regex to filter based on title.
--verbose             Enable verbose output.
--skipAlbums          Skip all albums
--mirror-gfycat       Download available mirror in gfycat.com.
--filename-format FILENAME_FORMAT
                    Specify filename format: reddit (default), title or
                    url
--sort-type SORT_TYPE
                    Sort type for the subreddit.

Examples

An example of running this script to download images with a score greater than 50 from the wallpaper subreddit into a folder called wallpaper would be as follows:

python redditdl.py wallpaper wallpaper --score 50

And to run the same query but only get new images you don't already have, run the following:

python redditdl.py wallpaper wallpaper --score 50 --update

For getting some nice pictures of cats into your catsfolder (which will be created if it doesn't exist yet) run:

python redditdl.py cats ~/Pictures/catsfolder --score 1000 --num 5 --sfw --verbose

Advanced Examples

Retrieve the last 10 pics in the 'wallpaper' subreddit with the word "sunset" in the title (note: case is ignored thanks to the (?i) inline flag):

python redditdl.py wallpaper sunsets --regex '(?i).*sunset.*' --num 10
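Under the hood, a title filter like this presumably reduces to a single re check per post; a minimal sketch (the function name is hypothetical, not the script's actual code):

```python
import re

def title_matches(title, pattern):
    # True when the post title matches the user-supplied regular expression.
    # re.search lets the pattern match anywhere in the title.
    return re.search(pattern, title) is not None

# The (?i) inline flag makes the match case-insensitive.
title_matches("Beautiful SUNSET over the bay", r"(?i).*sunset.*")
```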

Download the top posts of the week from the 'animegifs' subreddit, using the gfycat mirror if available:

python redditdl.py animegifs --sort-type topweek --mirror-gfycat

Sorting

The following sort types are available: hot, new, rising, controversial, top, gilded.

'top' and 'controversial' sorting can also be extended with a time-limit suffix (hour, day, week, month, year, all).

Examples: tophour, topweek, controversialhour, controversialweek, etc.
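If you're curious how such a combined sort string can be decomposed, a sketch (the function name is hypothetical, not the script's actual code):

```python
BASE_SORTS = ('top', 'controversial')
TIME_LIMITS = ('hour', 'day', 'week', 'month', 'year', 'all')

def split_sort_type(sort_type):
    # Decompose an extended sort like 'topweek' into ('top', 'week');
    # plain sorts come back unchanged with no time limit.
    for base in BASE_SORTS:
        for limit in TIME_LIMITS:
            if sort_type == base + limit:
                return base, limit
    return sort_type, None
```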

RedditImageGrab's People

Contributors

emmeff, endlesslycurious, hoverhell, joaquinlpereyra, jtara1, kaligule, parkerlreed, rachmadaniharyono, verhovsky


RedditImageGrab's Issues

update for 2.0

hi @HoverHell

will there be an update for the next version of the program?

i want to ask if it is possible to do the following

  • review the issues
  • merging with other forks and enhancing the program
  • python3
  • adding collaborator
  • testing
  • documenting
  • collaboration with other modules (imgur, gfycat, and other image downloader modules), such as:
  • rachmadaniHaryono#4 use plugin for image, video or audio
  • rachmadaniHaryono#3 add gui

i am thinking of reviewing my pull requests (and closing them if necessary).

cc to the last active fork users @jtara1 @vrublack @asampat3090

e: closed my unused pr

urllib2.URLError: <urlopen error [Errno 111] Connection refused>

Started to get this messages now.
Any ideas how to skip urls that cause connection refused exception?
Here's the output log

Downloading images from "unixporn" subreddit
Attempting to download URL [http://i.minus.com/ibbOA7nTrGwmOK.png] as [2vc6eu.png].
URL ERROR: http://i.minus.com/ibbOA7nTrGwmOK.png!
Attempting to download URL [http://i.imgur.com/4Q79met.jpg] as [2vb0qz.jpg].
URL [http://i.imgur.com/4Q79met.jpg] already downloaded.
Attempting to download URL [https://i.imgur.com/UpQ1BkU.jpg] as [2vc7yk.jpg].
URL [https://i.imgur.com/UpQ1BkU.jpg] already downloaded.
Attempting to download URL [https://www.youtube.com/watch?v=q2nEIJpzLMQ&hd=1] as [2vc2nb].
WRONG FILE TYPE: https://www.youtube.com/watch?v=q2nEIJpzLMQ&amp;hd=1 has type: text/html; charset=utf-8!
Attempting to download URL [https://u.teknik.io/0g6q7Y.png] as [2va26r.png].
URL [https://u.teknik.io/0g6q7Y.png] already downloaded.
Attempting to download URL [http://www.reddit.com/r/unixporn/comments/2vdh80/text_field_bar/] as [2vdh80].
WRONG FILE TYPE: http://www.reddit.com/r/unixporn/comments/2vdh80/text_field_bar/ has type: text/html; charset=UTF-8!
Attempting to download URL [http://i.imgur.com/ew2dYjO.jpg] as [2v8dj9.jpg].
URL [http://i.imgur.com/ew2dYjO.jpg] already downloaded.
Attempting to download URL [http://i.imgur.com/lpR8GKH.jpg] as [2v91ln_0.jpg].
URL [http://i.imgur.com/lpR8GKH.jpg] already downloaded.
Attempting to download URL [http://i.imgur.com/8WNnHcJ.jpg] as [2v91ln_0.jpg].
URL [http://i.imgur.com/8WNnHcJ.jpg] already downloaded.
Attempting to download URL [http://i.imgur.com/6NTHSUy.jpg] as [2v91ln_0.jpg].
URL [http://i.imgur.com/6NTHSUy.jpg] already downloaded.
Attempting to download URL [http://i.imgur.com/WtLRprC.jpg] as [2v7r1b_0.jpg].
Sucessfully downloaded URL [http://i.imgur.com/WtLRprC.jpg] as [2v7r1b_0.jpg].
Attempting to download URL [http://i.imgur.com/fFD1amp.jpg] as [2v7r1b_1.jpg].
Sucessfully downloaded URL [http://i.imgur.com/fFD1amp.jpg] as [2v7r1b_1.jpg].
Attempting to download URL [http://i.imgur.com/u6N8fjK.jpg] as [2v7r1b_2.jpg].
URL ERROR: http://i.imgur.com/u6N8fjK.jpg!
Attempting to download URL [http://i.imgur.com/WtLRprC.jpg] as [2v7r1b_2.jpg].
URL ERROR: http://i.imgur.com/WtLRprC.jpg!
Attempting to download URL [http://i.imgur.com/fFD1amp.jpg] as [2v7r1b_2.jpg].
URL ERROR: http://i.imgur.com/fFD1amp.jpg!
Attempting to download URL [http://i.imgur.com/u6N8fjK.jpg] as [2v7r1b_2.jpg].
URL ERROR: http://i.imgur.com/u6N8fjK.jpg!
Attempting to download URL [http://i.imgur.com/BxwrfRi.jpg] as [2v80q5.jpg].
URL ERROR: http://i.imgur.com/BxwrfRi.jpg!
Traceback (most recent call last):
  File "/home/ubuntu/workspace/RedditImageGrab/redditdownload.py", line 269, in <module>
    URLS = extract_urls(ITEM['url'])
  File "/home/ubuntu/workspace/RedditImageGrab/redditdownload.py", line 195, in extract_urls
    urls = process_imgur_url(url)
  File "/home/ubuntu/workspace/RedditImageGrab/redditdownload.py", line 140, in process_imgur_url
    return extract_imgur_album_urls(url)
  File "/home/ubuntu/workspace/RedditImageGrab/redditdownload.py", line 58, in extract_imgur_album_urls
    response = urlopen(album_url)
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 404, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 422, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1214, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1184, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error [Errno 111] Connection refused>

Thanks
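A sketch of skipping such URLs rather than crashing (Python 3 urllib shown for brevity; the script itself uses urllib2, where URLError lives in the urllib2 module):

```python
from urllib.request import urlopen
from urllib.error import URLError

def fetch_or_none(url, timeout=10):
    # Return the response body, or None when the connection is refused
    # (or any other URLError), so one dead host doesn't kill the whole run.
    try:
        return urlopen(url, timeout=timeout).read()
    except URLError:
        return None
```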

line 2 SyntaxError: encoding problem: utf8

C:\Users\user\Desktop\RedditImageGrab-master>setup.py
File "C:\Users\Admin\Desktop\RedditImageGrab-master\setup.py", line 2
SyntaxError: encoding problem: utf8

On Windows 8. I obviously had Python 3 installed.

HTTP Error 401: Unauthorized

I'm also getting a lot of these:

Attempting to download URL[https://i.reddituploads.com/43f4a85b0536870d54f4?fit=max&amp;h=1536&amp;w=1536&amp;s=59f1221e068469caabb2e84] as [98uasd1].
Try# 0 err HTTPError()  (u'https://i.reddituploads.com/43f4a85b0536870d54f4?fit=max&amp;h=1536&amp;w=1536&amp;s=59f1221e068469caabb2e84')
Try# 1 err HTTPError()  (u'https://i.reddituploads.com/43f4a85b0536870d54f4?fit=max&amp;h=1536&amp;w=1536&amp;s=59f1221e068469caabb2e84')
Try# 2 err HTTPError()  (u'https://i.reddituploads.com/43f4a85b0536870d54f4?fit=max&amp;h=1536&amp;w=1536&amp;s=59f1221e068469caabb2e84')
    HTTP Error 401: Unauthorized

I also tried pausing 10 seconds between tries, without success. Any ideas on what to do?
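One thing worth noting: the URLs in that log contain literal &amp; entities, which suggests the query string was HTML-escaped somewhere along the way; since reddituploads URLs carry a signature (s=...), a mangled query string could plausibly fail authorization. Unescaping before download might help (a guess, not a confirmed fix):

```python
from html import unescape

def clean_url(url):
    # Collapse HTML entities (e.g. '&amp;') that can leak into URLs taken
    # from HTML-escaped JSON, restoring the original query string.
    return unescape(url)
```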

No module named bs4

Hey, tried to run this and it failed with

Traceback (most recent call last):
  File "redditdl.py", line 10, in <module>
    from redditdownload.redditdownload import main
  File "/home/jafesu/git/RedditImageGrab/redditdownload/__init__.py", line 1, in <module>
    from redditdownload import *
  File "/home/jafesu/git/RedditImageGrab/redditdownload/redditdownload.py", line 22, in <module>
    from .deviantart import process_deviant_url
  File "/home/jafesu/git/RedditImageGrab/redditdownload/deviantart.py", line 7, in <module>
    from bs4 import BeautifulSoup
ImportError: No module named bs4

I opened setup.py and manually installed all the "optional" dependencies with pip, but it still fails. I am using Python 2.7.12.

Download only from first n pages?

Hello, thanks for great script. Is it possible to define how many pages to process? For example I want to get all images (including albums) that were posted on first and second page of particular subreddit. Or images from first 100 posts only.

Just wanted to say thanks for wrong filetype dump

I wanted to pull videos, so I wrote a quick script that parses the .wrong_type_pages.jsl file and downloads to the folder that was originally specified.

#!/bin/bash
cat .wrong* | grep "$1" | while read -r line
do
        url=$(echo "$line" | cut -d'"' -f 4)
        folder=$(echo "$line" | cut -d'"' -f 8)
        youtube-dl -o "$folder"/'%(title)s.%(ext)s' "$url"
done

So I am able to do something like

./vidpull youtube

Youtube-dl supports a ton of sites.

Catch exceptions when there are errors

While downloading an exception was thrown when urlopen was called.

I am no Python programmer, but adding a try/except resolved the issue for me. I am sure you can think of a better way than my code, so I didn't make a pull request.

I think it is fine to just not retrieve things where exceptions are raised, because in the end we are just mass downloading; missing 1 in every 10~40 is fine.

Hope this helps.

High CPU Usage

I'm running on Debian on an Intel i7-3770K at 4GHz, so it's no slowpoke. RedditImageGrab, while downloading from a subreddit, will quite often use 100% of a single CPU core, then drop to low usage for a while before spiking to 100% for maybe 2-10 seconds. Is this expected? Or is there some kind of busy-waiting loop where it's just spinning and eating CPU cycles? If this is normal feel free to disregard.

can't download gifv: wrong file type

When I try to download files with extension gifv, I'm getting the following error:

    Attempting to download URL [http://i.imgur.com/mY30lPn.gifv] as [39n7v1.gifv].
    WRONG FILE TYPE: http://i.imgur.com/mY30lPn.gifv has type: text/html; charset=utf-8!
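imgur serves .gifv as an HTML wrapper page, which matches the text/html content type in the log; the raw video is conventionally available at the same id with an .mp4 extension. A workaround sketch (this rests on an observed imgur convention, not a documented guarantee):

```python
def degifv(url):
    # Rewrite imgur's .gifv wrapper URL to the underlying .mp4 video.
    if url.endswith('.gifv'):
        return url[:-5] + '.mp4'
    return url
```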

Proxy support?

Would it be easy to add proxy support? Imgur is really limiting speeds...
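A sketch of what proxy support might look like using urllib's ProxyHandler (shown with Python 3's urllib.request; the script itself uses urllib2, whose API is analogous):

```python
import urllib.request

def opener_with_proxy(proxy_url):
    # Build an opener that routes http/https traffic through the given proxy;
    # its .open() would replace urlopen() wherever the script downloads.
    handler = urllib.request.ProxyHandler({'http': proxy_url, 'https': proxy_url})
    return urllib.request.build_opener(handler)
```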

Invalid Syntax error

C:\Users\Bharath S\dev\python\RedditImageGrab-master>python redditdownload.py Models models -update
File "redditdownload.py", line 227
print 'Downloading images from "%s" subreddit' % (ARGS.reddit)
^
SyntaxError: invalid syntax

Skip images removed from imgur

First of all, this is a great scraper! Thank you all for the great work!

I have noticed that images which no longer exist on imgur are redirected to http://i.imgur.com/removed.png (a 503-byte PNG image saved as a JPG) that says "The image you are requesting does not exist or is no longer available". I am getting these error images even when using sort type topday.

I wonder if it is worth having a switch to skip images by file size. Thoughts?

EDIT:
I meant skipping images smaller than a few KB in size. But now that I think about it, the idea seems a bit crazy because images would have to be downloaded first. So what about a filter that excludes URLs such as "http://i.imgur.com/removed.png"?
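A URL-level filter along the lines suggested (the marker list is illustrative):

```python
REMOVED_MARKERS = ('i.imgur.com/removed.png',)

def looks_removed(url):
    # Skip imgur's "removed" placeholder before spending a request on it.
    return any(marker in url for marker in REMOVED_MARKERS)
```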

HTTP Error 403: Forbidden

I'm getting quite a lot of these:

Attempting to download URL[https://i.redd.it/u9gxuiais.png] as [8912hkads.png].
Try 0 err HTTPError()  (u'https://i.redd.it/u9gxuiais.png')
Try 1 err HTTPError()  (u'https://i.redd.it/u9gxuiais.png')
Try 2 err HTTPError()  (u'https://i.redd.it/u9gxuiais.png')
    HTTP Error 403: Forbidden

I tried pausing 10 seconds (time.sleep(10)) between tries but still I get the 403. Any thoughts?
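A common cause of 403s from reddit-hosted domains is urllib's default User-Agent being blocked; sending an explicit one may help (the UA string and helper name are illustrative, and this is a guess rather than a confirmed fix):

```python
import urllib.request

DEFAULT_UA = 'RedditImageGrab/1.0 (by /u/yourusername)'  # illustrative value

def build_request(url, user_agent=DEFAULT_UA):
    # Attach an explicit User-Agent header; the result can be passed
    # straight to urlopen() in place of the bare URL.
    return urllib.request.Request(url, headers={'User-Agent': user_agent})
```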

Flickr image URLs

There seems to be no way currently to get the URL of an image for any given flickr page :(

--last arg is not working

Say I enter into my terminal:

python redditdl.py --num 1 --verbose --sort-type hot pics pics

Then I get the reddit id myself from the submission and enter it in with --last arg to run

python redditdl.py --num 1 --verbose --sort-type hot --last 53a48u pics pics

Now with the messages printed we can see it did not begin downloading from the id we passed with the --last arg.

This bug shouldn't occur in the current code if we are using an 'advanced_sort' (i.e.: topmonth, controversialweek, etc.).

I've identified and solved the problem already in my pull request

Concurrency

Hi,

Thanks for releasing your script. I love it!

One suggestion: maybe add a --threads option that could control the number of downloading threads. Could save a lot of time!

Using a pool over the list of URLs could work:
Multiprocessing reference in python 2.7
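A thread-based pool along those lines might look like this (multiprocessing.dummy provides the Pool API backed by threads, which suits I/O-bound downloads; the names here are hypothetical, not the script's code):

```python
from multiprocessing.dummy import Pool  # thread-backed Pool: same API, no pickling

def download_all(urls, fetch, threads=4):
    # Apply the download function across URLs with a bounded pool of threads.
    with Pool(threads) as pool:
        return pool.map(fetch, urls)
```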

Doesn't work in windows 10, python ver. 3.6

In command line, I typed:
python C:\Users\Admin\Desktop\RedditImageGrab-master\redditdl.py

Got:

Traceback (most recent call last):
  File "C:\Users\Admin\Desktop\RedditImageGrab-master\redditdl.py", line 10, in <module>
    from redditdownload.redditdownload import main
  File "C:\Users\Admin\Desktop\RedditImageGrab-master\redditdownload\redditdownload.py", line 8, in <module>
    import StringIO
ModuleNotFoundError: No module named 'StringIO'

Consequently, it doesn't work with any arguments, e.g.:

python C:\Users\Admin\Desktop\RedditImageGrab-master\redditdl.py animegifs --sort-type topweek --mirror-gfycat
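The StringIO module was removed in Python 3. A minimal compatibility shim along the lines a port would need (assuming the buffer holds binary image data, io.BytesIO is the right Python 3 replacement):

```python
try:
    from StringIO import StringIO  # Python 2
except ImportError:
    # Python 3: the StringIO module is gone; binary data wants io.BytesIO.
    from io import BytesIO as StringIO

# Behaves like a file object over in-memory bytes.
buf = StringIO(b'\x89PNG')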

urllib2.HTTPError: HTTP Error 404: Not Found

Fair warning: my version of the script is modified, and these modifications were my first-ever attempt at Python, but this stack trace is similar enough to #10 that it probably affects the original code, too.

I keep getting this error:

Traceback (most recent call last):
  File "redditdownload.py", line 212, in <module>
    URLS = extract_urls(ITEM['url'])
  File "redditdownload.py", line 137, in extract_urls
    urls = process_imgur_url(url)
  File "redditdownload.py", line 111, in process_imgur_url
    return extract_imgur_album_urls(url)
  File "redditdownload.py", line 29, in extract_imgur_album_urls
    response = urlopen(album_url)
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 410, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 448, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found

I think this is caused when running the code without the --update tag and the script reaches the absolute last entry in the sub's list of posts. I think it is specifically 404'ing on the URL of the "next page".

Other than that, all the images that I would reasonably expect to have successfully downloaded, seem to be successfully downloading.

Re-Naming downloaded files

After successfully downloading a bunch of images, I noticed that the file names are assigned by imgur and are just random values.
Is there any possibility of renaming the files to meaningful values, such as the Reddit post's title?
This would be very useful for saving time and staying organized.
See if this feature can be added to this awesome piece of code.
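The README's --filename-format title option is aimed at exactly this. For anyone curious, a sketch of the kind of sanitization such renaming needs (the replacement rules here are illustrative, not the script's actual behavior):

```python
import re

def title_to_filename(title, ext, max_len=100):
    # Replace characters that are unsafe in filenames and clamp the length,
    # so an arbitrary post title becomes a portable file name.
    safe = re.sub(r'[^\w\-. ]', '_', title).strip()
    return safe[:max_len] + ext
```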

SyntaxError: invalid syntax on line 217

Hello guys, here is the error I'm getting (I've tried commands that are listed in readme):

File "redditdownload.py", line 217
print 'Downloading images from "%s" subreddit' % (ARGS.reddit)
^
SyntaxError: invalid syntax

anyone getting this as well ? I'm on Linux (Arch, Python 3.4.2)

Downloads extra pics from imgur albums...

When downloading from imgur albums, there are either one or two extra 'pics' that get downloaded. I've seen this with an album downloader called 'albumr', and if you change line 125 in redditdownload.py to the following, it fixes this.

match = re.compile(r'"hash":"(.[^\"]*)","title"')

Since the album title has a hash as well, you have to match only the hashes that are followed by a title.
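A quick check of the suggested pattern against a fabricated snippet (the sample markup is illustrative, not real imgur page source):

```python
import re

# The proposed fix: only capture hashes immediately followed by a title key.
HASH_RE = re.compile(r'"hash":"(.[^\"]*)","title"')

page = '{"hash":"abc12","title":"pic one"},{"hash":"def34","title":"pic two"}'
hashes = HASH_RE.findall(page)  # picks out just the image hashes
```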

QQ: saved items

I tend to save a lot of things while i'm on the iPad so I was wondering if there's a way to have the script authenticate and download from the saved items for my profile?

bugs

hello there
first of all: great script! i love using it. there are a few bugs.
well, first of all, it only downloads 1000 images (someone already posted that issue). but that's okay, because you can add --num 20000 (just an example) and it won't stop at 1000.
another bug: i can't download subreddits with sort type new, hot, rising or gilded;
top and controversial are working

i get this error if i try to download with new, hot, rising or gilded (using the windows 7 command line):

A:\Python27>python redditdl.py example A:\Python27\here --sort-type new --num 10
Downloading images from "example" subreddit
Traceback (most recent call last):
  File "redditdl.py", line 14, in <module>
    main()
  File "A:\Python27\redditdownload\redditdownload.py", line 393, in main
    reddit_sort=ARGS.sort_type)
  File "A:\Python27\redditdownload\reddit.py", line 80, in getitems
    if is_advanced_sort:
UnboundLocalError: local variable 'is_advanced_sort' referenced before assignment

it would be nice if you could help me

greetings
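A sketch of the likely fix: initialize is_advanced_sort before any branching, so plain sorts like 'new' never hit the unbound local (the function name and structure are hypothetical, not the actual reddit.py code):

```python
def getitems_sort_flags(reddit_sort):
    # Assign defaults up front so every code path has the variables bound.
    is_advanced_sort = False
    sort_time_limit = None
    for base in ('top', 'controversial'):
        if reddit_sort.startswith(base) and reddit_sort != base:
            is_advanced_sort = True
            sort_time_limit = reddit_sort[len(base):]
    return is_advanced_sort, sort_time_limit
```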

What values are expected with the [--last l] cli argument?

There are only a few lines of code related to this cli arg, and the critical lines seem to be:

if previd:
    url = '%s?after=t3_%s' % (url, previd)

where previd is equal to the value passed by the user with the --last arg.
So if I was downloading from /r/wallpapers the url would probably look something like this:

https://www.reddit.com/r/wallpapers?after=t3_15

assuming the previd is the reddit index, but I don't think that's a valid url query.
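For what it's worth, reddit's listing endpoints do accept an `after` parameter whose value is a "fullname" — the `t3_` type prefix plus the base36 post id — so `t3_53a48u` is the expected shape; the bare `t3_15` above only looks odd because `15` isn't a real id. A sketch of the construction (the .json endpoint path is an assumption about how the script fetches listings):

```python
def build_listing_url(subreddit, previd=None):
    # Resume a listing after a given post by passing its t3_ fullname.
    url = 'https://www.reddit.com/r/%s/.json' % subreddit
    if previd:
        url = '%s?after=t3_%s' % (url, previd)
    return url
```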

RedditImageGrab is classified under HTML on Github

Shouldn't it be classified under Python? I have been using this repo for a while and today decided to come back and check for news/updates, but it took me a while to find it because it is filed under HTML. Probably other pythonistas aren't finding it either. This is a very good and sophisticated program. Why file it under HTML? Or am I missing something?

Made the repository python 3 compatible.

I've made the repository Python 3 compatible after making some major changes to the code.
The main difference between my work and the jtara1 repository is that I made only basic fixes, spanning 3-4 line changes in reddit.py, 3-4 changes in redditdownload.py, and a one-line change in gfycat.py, and it is working perfectly.

Is there any way for my contribution to be accepted?

Hoping for a response.

GIFS being saved as jpgs

I've noticed that some gifs are being saved as jpgs. I've not looked too closely at the code to figure it out (I've only had it for about an hour).
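A plausible direction for a fix: choose the extension from the server's Content-Type header rather than the URL (the mapping and helper are illustrative, not the script's actual code):

```python
CONTENT_TYPE_EXT = {
    'image/jpeg': '.jpg',
    'image/png': '.png',
    'image/gif': '.gif',
}

def ext_for(content_type):
    # Derive the file extension from the Content-Type header, ignoring any
    # parameters like '; charset=utf-8'; unknown types return ''.
    return CONTENT_TYPE_EXT.get(content_type.split(';')[0].strip(), '')
```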

--update flag is not working

The --update flag should abort downloading when an already-downloaded image is encountered again, but currently it does not.

The exception handler for FileExistsException is not executed, because the error is caught earlier by a more general handler.

The fix would be to handle the specific FileExistsException instead of a general exception.

[Question] What sites can this download from?

Hello, I'm working on a Python 3 fork of RedditImageGrab.

Given that most subreddits are built up from any user's contributions, a reddit submission could link anywhere; however, most links are Imgur-, Reddit-, or Gfycat-hosted.

It appears there are classes and functions specifically for DeviantArt, but I recall that at some point my fork failed to download any images from DeviantArt.

Gfycat & Reddit Images seem to work as expected.

I integrated another github fork of mine (jtara1/imgur-downloader) to handle all Imgur images and galleries, so Imgur works well.

I haven't tested tumblr or pixiv hosted media yet, but I'd like to add support for them too.

Second Question:

Is the unit test below reliable for RedditImageGrab? I haven't played with it yet, and have been too lazy to port it over to my fork.

/RedditImageGrab/redditdownload/tests/test-redditdownload.py

Script bombs

Traceback (most recent call last):
  File "./redditdownload.py", line 208, in <module>
    URLS = extract_urls(ITEM['url'])
  File "./redditdownload.py", line 137, in extract_urls
    urls = process_imgur_url(url)
  File "./redditdownload.py", line 111, in process_imgur_url
    return extract_imgur_album_urls(url)
  File "./redditdownload.py", line 29, in extract_imgur_album_urls
    response = urlopen(album_url)
  File "/usr/lib64/python2.7/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 406, in open
    response = meth(req, response)
  File "/usr/lib64/python2.7/urllib2.py", line 519, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib64/python2.7/urllib2.py", line 444, in error
    return self._call_chain(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 378, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 527, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 502: Bad Gateway

No support for imgur albums

imgur albums are denoted by URLs containing http://imgur.com/a/ and link to a page containing several imgur pictures. It appears there is no way to request the contents of another person's album from the imgur API. Hopefully this is something that will be possible in the future.
