facebook-scraper's Introduction

Facebook Scraper


Scrape Facebook public pages without an API key. Inspired by twitter-scraper.

Install

To install the latest release from PyPI:

pip install facebook-scraper

Or, to install the latest master branch:

pip install git+https://github.com/kevinzg/facebook-scraper.git

Usage

Send the unique page name, profile name, or ID as the first parameter and you're good to go:

>>> from facebook_scraper import get_posts

>>> for post in get_posts('nintendo', pages=1):
...     print(post['text'][:50])
...
The final step on the road to the Super Smash Bros
We're headed to PAX East 3/28-3/31 with new games

Optional parameters

(For the get_posts function).

  • group: group id, to scrape groups instead of pages. Default is None.
  • pages: how many pages of posts to request. The first 2 pages may have no results, so try a number greater than 2. Default is 10.
  • timeout: how many seconds to wait before timing out. Default is 30.
  • credentials: tuple of user and password to login before requesting the posts. Default is None.
  • extra_info: bool, if true the function will try to do an extra request to get the post reactions. Default is False.
  • youtube_dl: bool, use youtube-dl for (high-quality) video extraction. You need to have youtube-dl installed in your environment. Default is False.
  • post_urls: list, URLs or post IDs to extract posts from. Alternative to fetching based on username.
  • cookies: One of:
    • The path to a file containing cookies in Netscape or JSON format. You can extract cookies from your browser after logging into Facebook with an extension like Get cookies.txt LOCALLY or Cookie Quick Manager (Firefox). Make sure that you include both the c_user cookie and the xs cookie; you will get an InvalidCookies exception if you don't.
    • A CookieJar
    • A dictionary that can be converted to a CookieJar with cookiejar_from_dict
    • The string "from_browser" to try to extract Facebook cookies from your browser
  • options: Dictionary of options. Set options={"comments": True} to extract comments and options={"reactors": True} to extract the people reacting to the post. Both comments and reactors can also be set to a number to limit how many comments/reactors to retrieve. Set options={"progress": True} to get a tqdm progress bar while extracting comments and replies. Set options={"allow_extra_requests": False} to disable the extra requests made when extracting post data (required for some things like full text and image links). Set options={"posts_per_page": 200} to request 200 posts per page. The default is 4.

CLI usage

$ facebook-scraper --filename nintendo_page_posts.csv --pages 10 nintendo

Run facebook-scraper --help for more details on CLI usage.

Note: If you get a UnicodeEncodeError try adding --encoding utf-8.

Practical example: download comments of a post

"""
Download comments for a public Facebook post.
"""

import facebook_scraper as fs

# get POST_ID from the URL of the post which can have the following structure:
# https://www.facebook.com/USER/posts/POST_ID
# https://www.facebook.com/groups/GROUP_ID/posts/POST_ID
POST_ID = "pfbid02NsuAiBU9o1ouwBrw1vYAQ7khcVXvz8F8zMvkVat9UJ6uiwdgojgddQRLpXcVBqYbl"

# number of comments to download -- set this to True to download all comments
MAX_COMMENTS = 100

# get the post (this gives a generator)
gen = fs.get_posts(
    post_urls=[POST_ID],
    options={"comments": MAX_COMMENTS, "progress": True}
)

# take 1st element of the generator which is the post we requested
post = next(gen)

# extract the comments part
comments = post['comments_full']

# process comments as you want...
for comment in comments:

    # e.g. ...print them
    print(comment)

    # e.g. ...get the replies for them
    for reply in comment['replies']:
        print(' ', reply)

Post example

{'available': True,
 'comments': 459,
 'comments_full': None,
 'factcheck': None,
 'fetched_time': datetime.datetime(2021, 4, 20, 13, 39, 53, 651417),
 'image': 'https://scontent.fhlz2-1.fna.fbcdn.net/v/t1.6435-9/fr/cp0/e15/q65/58745049_2257182057699568_1761478225390731264_n.jpg?_nc_cat=111&ccb=1-3&_nc_sid=8024bb&_nc_ohc=ygH2fPmfQpAAX92ABYY&_nc_ht=scontent.fhlz2-1.fna&tp=14&oh=7a8a7b4904deb55ec696ae255fff97dd&oe=60A36717',
 'images': ['https://scontent.fhlz2-1.fna.fbcdn.net/v/t1.6435-9/fr/cp0/e15/q65/58745049_2257182057699568_1761478225390731264_n.jpg?_nc_cat=111&ccb=1-3&_nc_sid=8024bb&_nc_ohc=ygH2fPmfQpAAX92ABYY&_nc_ht=scontent.fhlz2-1.fna&tp=14&oh=7a8a7b4904deb55ec696ae255fff97dd&oe=60A36717'],
 'is_live': False,
 'likes': 3509,
 'link': 'https://www.nintendo.com/amiibo/line-up/',
 'post_id': '2257188721032235',
 'post_text': 'Don’t let this diminutive version of the Hero of Time fool you, '
              'Young Link is just as heroic as his fully grown version! Young '
              'Link joins the Super Smash Bros. series of amiibo figures!\n'
              '\n'
              'https://www.nintendo.com/amiibo/line-up/',
 'post_url': 'https://facebook.com/story.php?story_fbid=2257188721032235&id=119240841493711',
 'reactions': {'haha': 22, 'like': 2657, 'love': 706, 'sorry': 1, 'wow': 123}, # if `extra_info` was set
 'reactors': None,
 'shared_post_id': None,
 'shared_post_url': None,
 'shared_text': '',
 'shared_time': None,
 'shared_user_id': None,
 'shared_username': None,
 'shares': 441,
 'text': 'Don’t let this diminutive version of the Hero of Time fool you, '
         'Young Link is just as heroic as his fully grown version! Young Link '
         'joins the Super Smash Bros. series of amiibo figures!\n'
         '\n'
         'https://www.nintendo.com/amiibo/line-up/',
 'time': datetime.datetime(2019, 4, 30, 5, 0, 1),
 'user_id': '119240841493711',
 'username': 'Nintendo',
 'video': None,
 'video_id': None,
 'video_thumbnail': None,
 'w3_fb_url': 'https://www.facebook.com/Nintendo/posts/2257188721032235'}

Notes

  • There is no guarantee that every field will be extracted (they might be None).
  • Group posts may be missing some fields like time and post_url.
  • Group scraping may return only one page and not work on private groups.
  • If you scrape too much, Facebook might temporarily ban your IP.
  • The vast majority of unique IDs on facebook (post IDs, video IDs, photo IDs, comment IDs, profile IDs, etc) can be appended to "https://www.facebook.com/" to result in a redirect to the corresponding object.
  • Some functions (such as extracting reactions) require you to be logged into Facebook (pass cookies). If something isn't working as expected, try passing cookies and see if that fixes it.
  • Reaction Categories (EN): [like, love, haha, sorry, wow, angry, care]
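
Since several features require login, here is a minimal sketch of passing cookies as a plain dictionary. The values below are placeholders, not real cookies:

```python
# Placeholder cookie values -- copy the real ones from your browser after
# logging into Facebook. Both c_user and xs must be present, otherwise
# facebook-scraper raises an InvalidCookies exception.
cookies = {
    "c_user": "100000000000000",  # hypothetical account id
    "xs": "session-token-here",   # hypothetical session secret
}

# Then pass them along to any call that supports login, e.g.:
# get_posts("nintendo", pages=1, cookies=cookies)
```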

Comment & Reply example

{'comment_id': '1417925635669547', 
 'comment_url': 'https://facebook.com/1417925635669547', 
 'commenter_id': '100009665948953', 
 'commenter_url': 'https://facebook.com/tw0311?eav=AfZuEAOAat6KRX5WFplL0SNA4ZW78Z2O7W_sjwMApq67hZxXDwXh2WF2ezhICX1LCT4&fref=nf&rc=p&refid=52&__tn__=R&paipv=0', 
 'commenter_name': 'someone', 
 'commenter_meta': None, 
 'comment_text': 'something', 
 'comment_time': datetime.datetime(2023, 6, 23, 0, 0), 
 'comment_image': 'https://scontent.ftpe8-2.fna.fbcdn.net/m1/v/t6/An_UvxJXg9tdnLU3Y5qjPi0200MLilhzPXUgxzGjQzUMaNcmjdZA6anyrngvkdub33NZzZhd51fpCAEzNHFhko5aKRFP5fS1w_lKwYrzcNLupv27.png?ccb=10-5&oh=00_AfCdlpCwAg-SHhniMQ16uElFHh-OG8kGGmLAzvOY5_WZgw&oe=64BE3279&_nc_sid=7da55a', 
 'comment_reactors': [
   {'name': 'Tom', 'link': 'https://facebook.com/ryan.dwayne?eav=AfaxdKIITTXyZj4H-eanXQgoxzOa8Vag6XkGXXDisGzh_W74RYZSXxlFZBofR4jUIOg&fref=pb&paipv=0', 'type': 'like'}, 
   {'name': 'Macy', 'link': 'https://facebook.com/profile.php?id=100000112053053&eav=AfZ5iWlNN-EjjSwVNQl7E2HiVp25AUZMqfoPvLRZGnbUAQxuLeN8nl6xnnQTJB3uxDM&fref=pb&paipv=0', 'type': 'like'}],
 'comment_reactions': {'like': 2}, 
 'comment_reaction_count': 2, 
 'replies': [
   {'comment_id': '793761608817229', 
    'comment_url': 'https://facebook.com/793761608817229', 
    'commenter_id': '100022377272712', 
    'commenter_url': 'https://facebook.com/brizanne.torres?eav=Afab9uP4ByIMn1xaYK0UDd1SRU8e5Zu7faKEx6qTzLKD2vp_bB1xLDGvTwEd6u8A7jY&fref=nf&rc=p&__tn__=R&paipv=0', 
    'commenter_name': 'David', 
    'commenter_meta': None, 
    'comment_text': 'something', 
    'comment_time': datetime.datetime(2023, 6, 23, 18, 0), 
    'comment_image': None, 
    'comment_reactors': [], 
    'comment_reactions': {'love': 2}, 
    'comment_reaction_count': None}
 ]
}

Profiles

The get_profile function can extract information from a profile's about section. Pass in the account name or ID as the first parameter. Note that Facebook serves different information depending on whether you're logged in (cookies parameter), such as Date of birth and Gender. Usage:

from facebook_scraper import get_profile
get_profile("zuck") # Or get_profile("zuck", cookies="cookies.txt")

Outputs:

{'About': "I'm trying to make the world a more open place.",
 'Education': 'Harvard University\n'
              'Computer Science and Psychology\n'
              '30 August 2002 - 30 April 2004\n'
              'Phillips Exeter Academy\n'
              'Classics\n'
              'School year 2002\n'
              'Ardsley High School\n'
              'High School\n'
              'September 1998 - June 2000',
 'Favourite Quotes': '"Fortune favors the bold."\n'
                     '- Virgil, Aeneid X.284\n'
                     '\n'
                     '"All children are artists. The problem is how to remain '
                     'an artist once you grow up."\n'
                     '- Pablo Picasso\n'
                     '\n'
                     '"Make things as simple as possible but no simpler."\n'
                     '- Albert Einstein',
 'Name': 'Mark Zuckerberg',
 'Places lived': [{'link': '/profile.php?id=104022926303756&refid=17',
                   'text': 'Palo Alto, California',
                   'type': 'Current town/city'},
                  {'link': '/profile.php?id=105506396148790&refid=17',
                   'text': 'Dobbs Ferry, New York',
                   'type': 'Home town'}],
 'Work': 'Chan Zuckerberg Initiative\n'
         '1 December 2015 - Present\n'
         'Facebook\n'
         'Founder and CEO\n'
         '4 February 2004 - Present\n'
         'Palo Alto, California\n'
         'Bringing the world closer together.'}

To extract friends, pass the argument friends=True, or to limit the amount of friends retrieved, set friends to the desired number.

Group info

The get_group_info function can extract info about a group. Pass in the group name or ID as the first parameter. Note that in order to see the list of admins, you need to be logged in (cookies parameter).

Usage:

from facebook_scraper import get_group_info
get_group_info("makeupartistsgroup") # or get_group_info("makeupartistsgroup", cookies="cookies.txt")

Output:

{'admins': [{'link': '/africanstylemagazinecom/?refid=18',
             'name': 'African Style Magazine'},
            {'link': '/connectfluencer/?refid=18',
             'name': 'Everythingbrightandbeautiful'},
            {'link': '/Kaakakigroup/?refid=18', 'name': 'Kaakaki Group'},
            {'link': '/opentohelp/?refid=18', 'name': 'Open to Help'}],
 'id': '579169815767106',
 'members': 6814229,
 'name': 'HAIRSTYLES',
 'type': 'Public group'}

Write to a CSV file directly

The library also provides a write_posts_to_csv() function that writes posts directly to the disk and is able to resume scraping from the address of the last page. It is very useful when scraping large pages as the data is saved continuously and scraping can be resumed in case of an error. Here is an example to fetch the posts of a group 100 pages at a time and save them in separate files.

import facebook_scraper as fs

# Fetch 100 pages at a time, saving each batch to its own file
for i in range(1, 101):
    fs.write_posts_to_csv(
        group=GROUP_ID, # The method uses get_posts internally so you can use the same arguments and they will be passed along
        page_limit=100,
        timeout=60,
        options={
            'allow_extra_requests': False
        },
        filename=f'./data/messages_{i}.csv', # Will throw an error if the file already exists
        resume_file='next_page.txt', # Will save a link to the next page in this file after fetching it and use it when starting.
        matching='.+', # A regex can be used to filter all the posts matching a certain pattern (here, we accept anything)
        not_matching='^Warning', # And likewise those that don't fit a pattern (here, we filter out all posts starting with "Warning")
        keys=[
            'post_id',
            'text',
            'timestamp',
            'time',
            'user_id'
        ], # List of the keys that should be saved for each post, will save all keys if not set
        format='csv', # Output file format, can be csv or json, defaults to csv
        days_limit=3650 # Number of days for the oldest post to fetch, defaults to 3650
    )

To-Do

  • Async support
  • Image galleries (images entry)
  • Profiles or post authors (get_profile())
  • Comments (with options={'comments': True})

Alternatives and related projects

facebook-scraper's Issues

Login unsuccessful

for post in get_posts(group='EatsleeprepeatESR/', credentials={'email': '[email protected]', 'pass': '*****************'}):
    print(post['text'][:50])

This is my code. Despite providing my correct credentials, it gives me an error:

warnings.warn('login unsuccessful')
UserWarning: login unsuccessful
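
Note that the optional parameters above document credentials as a (user, password) tuple rather than a dict. A minimal sketch with placeholder values:

```python
# Placeholder credentials -- the docs above describe credentials as a
# (user, password) tuple, not a dict with 'email'/'pass' keys.
credentials = ("user@example.com", "password")

# Then, for example:
# get_posts(group="EatsleeprepeatESR", credentials=credentials)
```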

Changed algorithm?

It happened since yesterday, I can't extract text from facebook page anymore

Scraper only retrieving 2 posts

It seems Facebook changed its code and the scraper is only getting 2 posts now.

I'll try to fix this over the weekend but if anyone has looked into it please share your findings here. PRs with a fix are also welcomed.

Only limited number of posts could be scraped when scraping groups

from facebook_scraper import get_posts

for post in get_posts(group='234176430922917', credentials=('xxxx', 'xxxx')):
    print(post)

When executing the above code, only about 20 posts are collected, while the group gets hundreds of new posts a day.
Moreover, the time of all posts is None. Is that normal?

Thank you very much.

Only a small part of longer post is getting returned

For longer posts, only a fraction of the text is returned. This is different from issue #2, as more than one paragraph is shown, but still not the full text. In my use cases there seems to be a limit of around 700 characters, after which "... more" is displayed at the end of the returned fraction.

Has anyone run into this issue? Any fix?
Thank you

Scrape all images per post

When using get_posts only the first image url gets scraped. Is there a way to get all of them?
Thank you
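
For reference, the post example earlier on this page includes an images field listing every scraped image URL, alongside the single image field. A tiny offline sketch (made-up URLs):

```python
# Shaped like the 'images' field in the post example above (made-up URLs).
post = {"images": ["https://example.com/a.jpg", "https://example.com/b.jpg"]}

for url in post["images"]:
    print(url)
```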

Feature request - provide login option in order to scrape private/age restricted pages

Thank you for this great package. I need a way to scrape certain Facebook pages that require the user to be logged in. I believe some of these pages (think alcohol and tobacco related products) only need the user to be logged in so they can check the user's age/DOB and restrict young users from viewing the page (though I'm not 100% certain about this). I've forked this repo and modified the code so that I can provide my login information to log the session in before scraping so that I can scrape these restricted pages. If I submit a pull request to add this feature, is this something people would be interested in? I am admittedly not a professional software developer and may not implement the feature in the most optimal way, but I have it working and would be glad to try and share if folks are interested.

No response!

I've tried to run this code as it is, by providing my login credentials and the facebook group id. However, I'm not getting any response neither any message on the terminal. It's all blank. Can anyone here guide me?

Installation with pip malfunctions

Hi!

I get the following error trying to install with pip:

Collecting facebook-scraper
Could not find a version that satisfies the requirement facebook-scraper (from versions: )
No matching distribution found for facebook-scraper

thanks!

Extract search results

Is there a way to query for a keyword across all posts? Or do I have to pick a specific user page?

Use account without logging in on each subsequent request

If I'm not mistaken, at the moment scraping several pages in a row would have _login_user triggered for each _get_posts call. This might be flagged by facebook as suspicious.

In my usecase I fixed it this way:

def _get_posts(path, pages=10, timeout=5, sleep=0, credentials=None):
    """Gets posts for a given account."""
    global _session, _timeout

    url = f'{_base_url}/{path}'
    if _session is None:
        _session = HTMLSession()
        _session.headers.update(_headers)

        if credentials:
            _login_user(*credentials)

However, this isn't optimal for people who use several different accounts in their workloads, so I believe it should be discussed.

How can I download the scraped Facebook data into an Excel file

This is the code:

for post in get_posts('iamforloveofwater', pages=50, extra_info=True):
    # print(post)
    print(post['text'][:100])
    temp = pd.DataFrame(data=post)
    dj = dj.append(temp, verify_integrity=False)

Error: ValueError: If using all scalar values, you must pass an index
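
That ValueError happens because pd.DataFrame(data=post) is given a single dict of scalar values. A minimal offline sketch of one common fix, with made-up rows shaped like facebook-scraper post dicts: collect the posts into a list first and build the frame once.

```python
import pandas as pd

# Made-up rows with the same shape as facebook-scraper post dicts.
posts = [
    {"post_id": "1", "text": "first post", "likes": 10, "comments": 2, "shares": 1},
    {"post_id": "2", "text": "second post", "likes": 5, "comments": 0, "shares": 0},
]

# A list of dicts gets an automatic RangeIndex, so no ValueError is raised.
df = pd.DataFrame(posts)
df.to_csv("posts.csv", index=False)  # CSV opens directly in Excel
```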

No response when retrieving posts

I just installed the library and I'm trying to run the example at the README, but I get an error

Traceback (most recent call last):
  File "/Users/fernandagomes/dev/deni_dashbot/main_executer.py", line 49, in <module>
    for p in posts:
  File "/Users/fernandagomes/dev/deni_dashbot/deni_env/lib/python3.6/site-packages/facebook_scraper.py", line 55, in get_posts
    html = response.html
AttributeError: 'Response' object has no attribute 'html'

Any idea on why this is happening? I tried with many different pages.

AttributeError: 'NoneType' object has no attribute 'find'

First, I would like to say thank for your work, this script is very useful.

I ran get_posts with an absurdly big number just to get all the posts of the page.
After downloading 144 posts I got this:

File ".../facebook_post_downloader/venv/lib/python3.6/site-packages/facebook_scraper.py", line 75, in _get_posts
    yield _extract_post(article)
File ".../facebook_post_downloader/venv/lib/python3.6/site-packages/facebook_scraper.py", line 102, in _extract_post
    text, post_text, shared_text = _extract_text(article)
File ".../facebook_post_downloader/venv/lib/python3.6/site-packages/facebook_scraper.py", line 137, in _extract_text
    nodes = article.find('p, header')
AttributeError: 'NoneType' object has no attribute 'find'

Process finished with exit code 1

I was expecting a graceful exit from the generator.

Timestamp option

I get the following info from nintendo, with extra_info=True:

[screenshot omitted]

But I can't see the timestamp of when the text was posted.
How can I see the timestamp?
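
For reference, the post example earlier on this page shows that each post dict carries a time field as a datetime.datetime. A tiny offline sketch (made-up value):

```python
from datetime import datetime

# Shaped like the 'time' field in the post example above (made-up value).
post = {"time": datetime(2019, 4, 30, 5, 0, 1)}

print(post["time"].isoformat())  # -> 2019-04-30T05:00:01
```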

empty console - no output/error

Hi, I have tried your library, very easy to use. However, while scraping the facebook groups, it doesn't return anything. I've tried with public groups too. Returns nothing and emptiness just like my 2019...xD

can you please look into this? Thanks.

Couldn't get any posts. (error)

With one private group, I was able to scrape the posts (there are only 3 or 4 in the group) however in a larger private group with more activity where I am also an admin, I get an error of Couldn't get any posts.

Could it be a facebook group setting that differs between the group that works and the one that doesn't? I know the smaller group was recently created, and basically has the default facebook settings. Also I ran this again and outputted some of the html to a file, and now it is titled "Security Check". So I think it is presenting a captcha now.

time is not properly formatted for group posts

Times for posts from the current year do not include the year; they are in the format "May 29 at 1:35".

The code expects the year to be provided.
A second issue: recent posts have times relative to "now", showing up as "2 mins" or "4 hrs".
I am fixing it locally on my version; I am not sure if you accept merge requests.

Capture posts from old pages

Please, I did:

from facebook_scraper import get_posts

for post in get_posts('cezinhademadureira', pages=1):
    print(post['text'])

And for this page the return is empty. I noticed that it's an old page that hasn't been updated in a while. Is there a solution for these cases?

Proxies support

Hi!
Cannot find proxy support in the library, am I wrong?

Regards,
Vlad

Continue Reading not working

When "Continue Reading" is displayed in a post, only a fraction of the text is returned (when "see more" is displayed it works well).

Post Limit

I very much appreciate you work on this. Thank you.

There seems to be a limit of about 300 posts that can be collected. How can I remedy this and collect all of a person's posts?

Thank You

Retrieve text from image posts (and one more..)

Firstly, thank you so much for giving us this amazing scraper; I really appreciate the hard work that went into building it.

Now to the issues:

  1. Whenever facebook-scraper gets to a post like this one, it returns empty text, post_text, and shared_text.
     For example:
     https://www.facebook.com/littlekiteskerala/posts/553818741897029

  2. Also, the number of shares returned is sometimes 0, and I am unable to figure out why.

Cheers and stay safe

Feature Request - Video

Is it possible to provide video support with a post type parameter, like in the original Graph API?

e.g. 'type': 'video',
'source_url': link
Thank you

Shares field issue

Sometimes the shares field returns 0 when in fact a post has been shared multiple times, just FYI.

AttributeError: 'NoneType' object has no attribute 'html'

Hi there,

Thanks for your great efforts. It's great to see this kind of package since FB changed its API policy last year.

I was trying to leverage your code to collect some data. The following is my code. However, when I executed my code, I got the "AttributeError: 'NoneType' object has no attribute 'html'. Do you know how to fix the issue?

d = []
for post in get_posts('Chrysler', pages=1000):
d.append({'time': post['time'], 'post_id': post['post_id'], 'text': post['text'][:20000], 'like':
post['likes'], 'comment':post['comments'],'share': post['shares'] , 'URL':post['post_url'], 'link':post['link']})

Obj1= pd.DataFrame(d)
Obj1.to_csv("fgc_Chrysler.csv")

Thank a lot!

Login does not work

Apparently, after the login function runs, later requests still get responses as if the user hasn't logged in.

Scraping Likes, but no reactions

I have noticed that your script only scrapes the total number of likes and does not capture reactions at all.

Also, shares are not working at all.

Attribute error _find_and_search

For some posts, the container returned in the function _find_and_search is None. The subsequent pattern search explicitly looks for the html attribute, which then leads to an exception.


Image quality is low

Hi, the 'image' link returned by get_posts is different from the link you get with "show image" in the browser, and it points to a lower-resolution image.
Would it be possible to change it in order to get the highest resolution?
