mwmbl / mwmbl

An open source, non-profit search engine implemented in Python

Home Page: https://mwmbl.org

License: GNU Affero General Public License v3.0

Dockerfile 0.20% Python 57.74% CSS 38.76% JavaScript 1.40% HTML 1.90%
search-engine non-profit

mwmbl's Introduction

Mwmbl - No ads, no tracking, no cruft, no profit

Mwmbl is a non-profit, ad-free, free-libre and free-lunch search engine with a focus on usability and speed. At the moment it is little more than an idea, together with a proof-of-concept implementation of the web front-end and search technology on a small index.

Our vision is a community working to provide top quality search particularly for hackers, funded purely by donations.

Crawling

Update 2022-02-05: We now have a distributed crawler that runs on our volunteers' machines! If you have Firefox you can help out by installing our extension. This will crawl the web in the background, retrieving one page a second. It does not use or access any of your personal data. Instead it crawls the web at random, using the top scoring sites on Hacker News as seed pages. After extracting a summary of each page, it batches these up and sends the data to a central server to be stored and indexed.
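For illustration, here is a minimal sketch of that crawl-and-batch flow in Python. The endpoint URL, batch size, payload shape and summarise() helper are all assumptions made for the sketch, not the real extension or API:

    import requests

    BATCH_SIZE = 20  # hypothetical batch size
    SERVER_URL = "https://example.org/crawler/batches"  # placeholder, not the real endpoint

    def summarise(url: str) -> dict:
        # Fetch a page and extract a crude summary (here just the <title>).
        html = requests.get(url, timeout=10).text
        title = html.split("<title>")[1].split("</title>")[0] if "<title>" in html else url
        return {"url": url, "title": title.strip()}

    def crawl(seed_urls: list) -> None:
        batch = []
        for url in seed_urls:
            batch.append(summarise(url))
            if len(batch) >= BATCH_SIZE:
                requests.post(SERVER_URL, json={"items": batch}, timeout=10)
                batch = []
        if batch:  # send any remainder
            requests.post(SERVER_URL, json={"items": batch}, timeout=10)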

Why a non-profit search engine?

The motives of ad-funded search engines are at odds with providing an optimal user experience. These sites are optimised for ad revenue, with user experience taking second place. This means that pages are loaded with ads which are often not clearly distinguished from search results. Also, eitland on Hacker News comments:

Thinking about it it seems logical that for a search engine that practically speaking has monopoly both on users and as mattgb points out - [to some] degree also on indexing - serving the correct answer first is just dumb: if they can keep me going between their search results and tech blogs with their ads embedded one, two or five times extra that means one, two or five times more ad impressions.

But what about...?

The space of alternative search engines has expanded rapidly in recent years. Here's a very incomplete list of some that have interested me:

  • YaCy - an open source distributed search engine
  • search.marginalia.nu - a search engine favouring text-heavy websites
  • Gigablast - a privacy-focused search engine whose owner makes money by selling the technology to third parties
  • Brave
  • DuckDuckGo

Of these, YaCy is the closest in spirit to the idea of a non-profit search engine. The index is distributed across a peer-to-peer network. Unfortunately this design decision makes search very slow.

Marginalia Search is fantastic, but it is more of a personal project than an open source community.

All other search engines that I've come across are for-profit. Please let me know if I've missed one!

Designing for non-profit

To be a good search engine, we need to store many items, but the cost of running the engine is at least proportional to the number of items stored. Our main consideration is thus to reduce the cost per item stored.

The design is founded on the observation that most items rank for a small set of terms. In the extreme version of this, where each item ranks for a single term, the usual inverted index design is grossly inefficient, since we have to store each term at least twice: once in the index and once in the item data itself.

Our design is a giant hash map. We have a single store consisting of a fixed number N of pages. Each page is of a fixed size (currently 4096 bytes to match a page of memory), and consists of a compressed list of items. Given a term for which we want an item to rank, we compute a hash of the term, a value between 0 and N - 1. The item is then stored in the corresponding page.

To retrieve pages, we simply compute the hash of the terms in the user query and load the corresponding pages, filter the items to those containing the term and rank the items. Since each page is small, this can be done very quickly.
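As a toy illustration of this design (not the actual mwmbl code; the sizes and item format are simplified assumptions), storing and retrieving looks roughly like this:

    import json
    import zlib
    from hashlib import md5

    NUM_PAGES = 8     # N; the real index uses far more pages
    PAGE_SIZE = 4096  # bytes, matching a page of memory

    pages = [[] for _ in range(NUM_PAGES)]

    def page_index(term: str) -> int:
        # Hash of the term: a value between 0 and N - 1.
        return int(md5(term.encode()).hexdigest(), 16) % NUM_PAGES

    def store(term: str, item: dict) -> None:
        page = pages[page_index(term)]
        page.append(item)
        # In the real design each page is stored compressed, capped at PAGE_SIZE.
        assert len(zlib.compress(json.dumps(page).encode())) <= PAGE_SIZE

    def retrieve(query: str) -> list:
        results = []
        for term in query.lower().split():
            page = pages[page_index(term)]
            # Filter the items in the page to those containing the term, then rank.
            results += [item for item in page if term in item["title"].lower()]
        return results

    store("search", {"url": "https://mwmbl.org", "title": "Mwmbl search"})
    print(retrieve("search engine"))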

Because we compress the list of items, we can rank for more than a single term and maintain an index smaller than the inverted index design. Well, that's the theory. This idea has yet to be tested out on a large scale.

How to contribute

There are lots of ways to help:

If you would like to help in any of these or other ways, thank you! Please join our Matrix chat server or email the main author (email address is in the git commit history).

Development

Local Testing

For trying out the service locally, see the relevant section in the Mwmbl book.

Using Dokku

Note: this method is not recommended as it is more involved, and your index will not have any data in it unless you set up a crawler to send data to your server. You will need to set up your own Backblaze or S3-equivalent storage, or have access to the production keys, which we probably won't give you.

Follow the deployment instructions

Frequently Asked Questions

How do you pronounce "mwmbl"?

Like "mumble". I live in Mumbles, which is spelt "Mwmbwls" in Welsh. But the intended meaning is "to mumble", as in "don't search, just mwmbl!"

mwmbl's People

Contributors

arcomul, colinespinas, daoudclarke, milovanderlinden, nitred, omasanori, peterdavehello, rishabhsingh8

mwmbl's Issues

Command line crawler

It would be nice to have a crawler that can run from the command line for people who have spare server CPU time they would like to donate.

The crawler should probably start off as a direct translation of the Firefox extension, crawling at random and sending batches of pages to the central server.

We should also modularize it so that where possible we can reuse bits of Rust code within the Firefox extension itself, via WASM.

Design a colour scheme for Mwmbl

I would like Mwmbl to be more colourful! The current blue colour chosen by @ColinEspinas, #185ADB, is nice, and as he noted it is "inspiring trust, intelligence and sincerity":

Mwmbl logo

Here were my original attempts at a logo - clearly the colours are wrong.

mwmbl-logo-2 svg

mwmbl-small svg

I would like something that inspires not only trust and integrity, but also shows creativity, exploration, and fun. Maybe that's too much to ask for one colour scheme 😂

Choice in Git hosting does not match project philosophy

Reading the article over at https://daoudclarke.net/search%20engines/2022/07/10/non-profit-search-engine was interesting; goals like these are indeed worth fighting for:

Even if I make the web better for one person, it’s worth it. Because the way things are is just wrong.

Though, it is a bit puzzling that the choice in tooling does not follow that same spirit. Why is a project like this hosted on the Google of Git hosting services: GitHub?

Please consider moving to more open alternatives, e.g. GitLab.

Implement boilerplate removal in Rust or Javascript

We currently extract the text content in Python using the Justext library. We need something similar implemented in (ideally) Rust or Javascript. The Rust should compile to WASM so we can use it in a browser extension which will be used for crawling the web.
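For reference, standard Justext usage on the Python side looks roughly like this (the library API as documented, not necessarily mwmbl's exact code):

    import requests
    import justext

    response = requests.get("https://example.org/")
    paragraphs = justext.justext(response.content, justext.get_stoplist("English"))
    # Keep only the paragraphs classified as real content, not boilerplate.
    text = "\n".join(p.text for p in paragraphs if not p.is_boilerplate)
    print(text)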

Check out http://www.scielo.org.mx/scielo.php?pid=S1870-90442013000200011&script=sci_abstract&tlng=pt for an approach that looks good.

Also check out this PhD thesis: https://is.muni.cz/th/o6om2/phdthesis.pdf

Building Image from Dockerfile failed

In step 14/15, copying the data folder fails because the folder is not present.
If I manually create the data folder, the image builds, but I cannot start the container:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/venv/lib/python3.9/site-packages/tinysearchengine/app.py", line 13, in <module>
    tiny_index = TinyIndex(Document, index_path, NUM_PAGES, PAGE_SIZE)
  File "/venv/lib/python3.9/site-packages/tinysearchengine/indexer.py", line 76, in __init__
    self.index_file = open(self.index_path, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/data/index.tinysearch'

Which files have to be present in this data folder?

Domain Filters (Feature Request)

Allow users to add custom filters so that only results from a specified domain are displayed.

Example

The example below uses the site: operator to display results from a certain domain only. A persistent toggle for this feature would be quite nice for developers looking for answers on a certain forum, or students looking for study materials from a certain website.

site:quizlet.com My Question Here

Basic Concept

Filters.mp4

Search Results

Perform a benchmark comparison between Mwmbl and Elasticsearch

Mwmbl was designed to be cheap to run in terms of storage and speed. It would be good to do a proper comparison and confirm whether or not this is really the case.

Size comparison will be tricky right now because we use a fixed-size database and the design is based on only indexing certain terms, so we'd have to figure out how to do this fairly.

Let's start with a comparison just on speed, compute power and memory usage.

OpenSearch is broken

I don't know if this is intentional at the moment, but I noticed this when I was poking around the page.

https://mwmbl.org/
Search for application/opensearchdescription+xml in the markup; the attached screenshot shows what currently appears there.


I think we should be expecting something like this:
https://search.brave.com/

<link rel="search" type="application/opensearchdescription+xml" title="Brave Search" href="https://cdn.search.brave.com/serp/v1/static/brand/c57da39655b0b08603d88711f8e33aae50500cbcd8d2fc70a0d01e105cbd0985-opensearch.xml">

Prepare an API endpoint for testing crawlers

During development of crawler code, it would be significantly helpful to have a crawler API endpoint that lets crawlers access certain testing pages.

Some ideas:

  • api.crawler-test.mwmbl.org: The API endpoint.
  • target.crawler-test.mwmbl.org/200.html, target.crawler-test.mwmbl.org/404.html, target.crawler-test.mwmbl.org/noindex.html, target.crawler-test.mwmbl.org/disallow.html, ...: Test cases.
  • disallow-all.crawler-test.mwmbl.org: Another test case.
  • etc.

Launching such an API endpoint locally seems useful too. In principle we can share most of the code between the public test endpoint and a custom local test endpoint.

Change name of python package to mwmbl

As discussed with @daoudclarke it might be best to change the name of the package from tinysearchengine to mwmbl. Also it would be good to nest the tinysearchengine and indexer modules within a mwmbl root module.

It has the following advantages:

In the future you could break the repo into two separate repos like mwmbl-tinysearchengine and mwmbl-indexer, but imports will not have to change if we follow conventions for namespace packages.
  • mwmbl can be released as a python package on pypi.org
  • We can have official named entrypoints (i.e. named CLI scripts) which are auto installed when you install the mwmbl package using pip install mwmbl.
  • There are other more subtle advantages that come from mwmbl being a python package such as easy testing and dependency management.

TODO

  • Change the name of the project from tinysearchengine to mwmbl in pyproject.toml
  • Change the project directory to look like the following
    mwmbl                        # repository root
    -- pyproject.toml
    -- poetry.lock
    -- mwmbl                     # new python package folder (this can eventually become the namespace name)
    ---- __init__.py
    ---- tinysearchengine        # tinysearchengine becomes a module within mwmbl
    ------ __init__.py
    ------ ...
    ---- indexer                 # indexer becomes a module within mwmbl
    ------ __init__.py
    ------ ....
    
  • Change all import statements to include the new package structure. For example from tinysearchengine import create_app will become from mwmbl.tinysearchengine import create_app

Flood in browser history

While you are typing a query, the search results update and the URL updates with the query as well. There is no need to store these intermediate URLs in browser history, because they bloat it and make the history less readable.

Screenshot

Browser History Flood

Introduce a config for tinysearchengine

Having a config for tinysearchengine will have the following benefits:

  1. Remove some hardcoded values in the source code
  2. Allow for configuration of endpoint paths or proxies or debounce configs without having to alter source code.

TODO

  1. Introduce pydantic model for validating the config
  2. Introduce a config.yaml file that follows the pydantic model. The standard or default config.yaml can be committed to the repo to bootstrap the tinysearchengine with good defaults.
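A minimal sketch of this proposal (the field names and defaults are illustrative, not decided):

    import yaml
    from pydantic import BaseModel

    class TinySearchEngineConfig(BaseModel):
        # Hypothetical settings; replace with the values currently hardcoded.
        index_path: str = "data/index.tinysearch"
        debounce_seconds: float = 0.1

    def load_config(path: str = "config.yaml") -> TinySearchEngineConfig:
        # Validation happens in the pydantic model constructor.
        with open(path) as f:
            return TinySearchEngineConfig(**yaml.safe_load(f))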

Add debouncing to search as you type

At the moment, search updates with every letter you type - we should add a small delay and update when the user stops typing, after say 0.1 seconds.

Inconsistent use of environment variables

When I look at app.py, there are variables of which some are taken from os.environ and some are hardcoded. There is a DATA_DIR in paths.py and domains.py, and a DATABASE_URL in database.py.

I would like to suggest setting all variables in a single place, e.g. app.py or a newly created settings.py, and using os.environ.get with a default so that at least some of the variables will be available even when not set.
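A sketch of what the suggested settings.py could look like (the defaults here are illustrative assumptions):

    import os

    # All environment-driven settings in one place, each with a default
    # so the app can start even when the variables are not set.
    DATA_DIR = os.environ.get("DATA_DIR", "./data")
    DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost/mwmbl")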

Error when indexing batches retrieved remotely

ERROR:mwmbl.background:Error retrieving batches
Traceback (most recent call last):
  File "/venv/lib/python3.10/site-packages/mwmbl/background.py", line 19, in run
    retrieve_batches()
  File "/venv/lib/python3.10/site-packages/mwmbl/indexer/retrieve.py", line 37, in retrieve_batches
    for result in results:
  File "/usr/local/lib/python3.10/multiprocessing/pool.py", line 870, in next
    raise value
  File "/usr/local/lib/python3.10/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/venv/lib/python3.10/site-packages/mwmbl/indexer/retrieve.py", line 53, in retrieve_batch
    queue_batch(batch)
  File "/venv/lib/python3.10/site-packages/mwmbl/crawler/app.py", line 289, in queue_batch
    index_db.queue_documents(documents)
  File "/venv/lib/python3.10/site-packages/mwmbl/indexer/indexdb.py", line 112, in queue_documents
    execute_values(cursor, sql, data)
  File "/venv/lib/python3.10/site-packages/psycopg2/extras.py", line 1267, in execute_values
    parts.append(cur.mogrify(template, args))
UnicodeEncodeError: 'utf-8' codec can't encode character '\ud83c' in position 57: surrogates not allowed
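The '\ud83c' character is a lone surrogate (half of a badly decoded emoji), which cannot be encoded as UTF-8. One possible mitigation, shown here as an assumption rather than the project's adopted fix, is to strip surrogates from document text before it reaches the database:

    def remove_surrogates(text: str) -> str:
        # Lone surrogates (U+D800-U+DFFF) are not valid UTF-8;
        # encoding with errors="ignore" silently drops them.
        return text.encode("utf-8", errors="ignore").decode("utf-8")

    print(remove_surrogates("ok\ud83cbad"))  # -> "okbad"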

Community aspect

Something between Lemmy and an indexer/search engine, where the visibility of each website would be different in each instance based on user ratings. Some instances could also block certain domains. For example, an instance for FOSS enthusiasts that blocked big tech domains, or an instance for leftists that blocked western media.

To search a niche topic you could select (if it existed) the instance focusing on that. Instead of doing kung-fu on a general search engine.

It would show all links, except the ones for the blocked domains, instead of only the ones posted by the users, as lemmy does. But instead of only the links you would also have a button to show a comment section to have a community aspect like lemmy. Comments would have multiple levels, votes, and different sorting options just like lemmy.

The rating system would require community moderation. A pyramidal trust-based moderation system like discourse so that an instance admin would only have to deal with a few users to make sure the instance remained free of bots and ill-intentioned users skewing the ratings.

[Feature request] Language-based content summary

I have noticed that some websites available in several languages show the content summary in whatever language they were indexed in.

I think the content summary should be chosen based on the language of the web browser using it; perhaps the same website should be indexed several times to get content summaries in different languages, or this could be done through the extension with preset languages.

2022-12-05_11-14

In this picture, the link is shown in a Cyrillic-script language instead of English, which is the language used by my web browser.

2022-12-05_11-15

In this other picture, one of the links is shown in French.

Create an evaluation dataset

We can use the Bing API to:

  1. Identify common search queries using the Autosuggest API. To do this we can query e.g. "n" to autosuggest and get back common queries that begin with "n", e.g. ["next", "news", ...]. This can then be bootstrapped by putting in these common queries to get longer queries. So, send "news" to autosuggest to get ["news uk", "news bbc", "news today"].
  2. Given each query, retrieve the top N results for each query from the Web Search API.

We should ideally collect a dataset of at least 2,000 queries, which we can split into a development set and a test set of 1,000 each.

For the evaluation we will want to filter the retrieved results to the same set of domains that we are currently restricted to (top HN scoring domains).
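A sketch of the bootstrapping loop in step 1. The autosuggest() function is a stub here; in a real implementation it would wrap the Bing Autosuggest API:

    import string

    def autosuggest(prefix: str) -> list:
        # Stub: should return common queries beginning with the prefix.
        raise NotImplementedError("wire this up to the Bing Autosuggest API")

    def bootstrap_queries(target: int = 2000) -> set:
        seen = set()
        frontier = list(string.ascii_lowercase)  # start from "a", "b", ..., "z"
        while frontier and len(seen) < target:
            prefix = frontier.pop(0)
            for suggestion in autosuggest(prefix):
                if suggestion not in seen:
                    seen.add(suggestion)
                    frontier.append(suggestion)  # feed longer queries back in
        return seen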

Store index metadata along with index

At the moment, the number of pages and the page size are stored in the code. This doesn't make sense, as different indexes can have different page sizes and numbers of pages. Instead I suggest storing this metadata either:

  1. in the index itself, in which case we could sacrifice the first 4096 bytes for metadata so as to maintain the physical memory page boundaries
  2. or, we could use a separate file stored along with the index.

My current preference is for 1.
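A sketch of option 1 (the JSON format and padding are illustrative assumptions, not a final design): serialise the metadata, pad it to exactly one page, and write it before the index pages so the physical page boundaries are preserved:

    import json

    PAGE_SIZE = 4096

    def write_metadata(f, num_pages: int, page_size: int) -> None:
        metadata = json.dumps({"num_pages": num_pages, "page_size": page_size}).encode()
        assert len(metadata) <= PAGE_SIZE
        f.write(metadata.ljust(PAGE_SIZE, b"\x00"))  # pad to the page boundary

    def read_metadata(f) -> dict:
        return json.loads(f.read(PAGE_SIZE).rstrip(b"\x00"))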

Improve crawler prioritisation

At the moment, the crawler uses heuristics to decide which pages to crawl; however, this often fails and it gets stuck in loops of crawling the same rubbish site for days on end.

To get around this, I propose that we choose which sites we wish to crawl, and distribute the crawling across those sites - instead of distributing by individual pages.

So the algorithm would work something like this:

  • We have a list of top sites that we would like to crawl. We will spend say 50% of our time crawling these sites.
  • For each top site, choose a maximum N pages to crawl for that site. We can use our current scoring technique to choose which pages.
  • For every other site, choose a maximum M pages to crawl for those sites, where M < N. Again we can use our current scoring to choose which sites/pages to crawl, up to a maximum of 50% of the total number of pages to crawl.

I would suggest initial values of N = 100 and M = 1.
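A sketch of the proposed algorithm (score() is a stand-in for the current scoring technique, and the data structures are assumptions):

    TOP_SITE_MAX = 100   # N: max pages to crawl per top site
    OTHER_SITE_MAX = 1   # M: max pages per other site

    def score(url: str) -> float:
        return 0.0  # stand-in for the current scoring technique

    def choose_urls(candidates_by_site: dict, top_sites: set, budget: int) -> list:
        top_chosen, other_chosen = [], []
        for site, urls in candidates_by_site.items():
            limit = TOP_SITE_MAX if site in top_sites else OTHER_SITE_MAX
            best = sorted(urls, key=score, reverse=True)[:limit]
            (top_chosen if site in top_sites else other_chosen).extend(best)
        # Spend roughly 50% of the crawl budget on top sites, 50% on the rest.
        return top_chosen[:budget // 2] + other_chosen[:budget // 2]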

Prioritise root URLs in search ranking

At the moment if you search for "facebook" you will get results about facebook, whereas you should probably get facebook.com/. We should prioritise such root URLs if they exist.
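One simple way to do this, shown as an illustrative heuristic rather than a settled design, is to boost results whose URL path is the root:

    from urllib.parse import urlparse

    def adjusted_score(url: str, base_score: float, boost: float = 2.0) -> float:
        # Boost root URLs such as https://facebook.com/ over deeper pages.
        path = urlparse(url).path
        return base_score * boost if path in ("", "/") else base_score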

Search result are super irrelevant

This project is really cool, but I can't use it while the search results are irrelevant, as in the screenshots below. Any search with 3+ words produces completely incomprehensible results. I think this is the main issue to work on.

Screenshots


We need to use an app_factory for tinysearchengine if we want to use multiple uvicorn workers

If we would like to use multiple uvicorn workers for production then we need to have an app factory that uvicorn can use to spawn multiple instances of the app, one for each worker. Usually a new process is spawned for each worker and the workers & processes don't share any state.

However the app factory must take the config or config filename as an argument if the app is to be configured by the user.

Relates to #30
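A sketch of the factory pattern (the module name and config handling are assumptions; FastAPI is used for illustration since the server is an ASGI app):

    import os
    import uvicorn
    from fastapi import FastAPI

    def create_app() -> FastAPI:
        # Workers don't share state, so each one builds its own app;
        # the config file name is passed via the environment here.
        config_path = os.environ.get("CONFIG", "config/tinysearchengine.yaml")
        app = FastAPI()
        # ... load config_path and register routes ...
        return app

    if __name__ == "__main__":
        # uvicorn calls create_app() once per worker process.
        uvicorn.run("app_module:create_app", factory=True, workers=4)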

Wrong URL when adding to search engines

In Firefox, when I right-click on the MWMBL search bar and select “Add a Keyword for this search…” the URL that gets saved is https://mwmbl.org/?=%s, which does not work… the working one is https://mwmbl.org/?q=%s.

I had to manually fix it.

Add theme selection

The goal of that issue would be to add theme management.

This would be easy to add thanks to the current style architecture; we just need to change the CSS variables indicated in the assets/css/theme.css file.
This could maybe be done by giving an id to the theme stylesheet and switching CSS files on theme selection, but that could cause visual glitches at page load.

This would also imply a new component for theme selection, and storing a value in local storage.

Unable to run mwmbl locally / index.tinysearch corrupt?

Hi everyone,

I'm trying to run a local development env for mwmbl, but I'm experiencing some difficulties.
I've been following the dev guide: https://github.com/mwmbl/mwmbl/wiki/Development-FAQ

Steps I've done:

  1. Clone the repo:
     git clone git@github.com:raypatterson77/mwmbl.git
  2. Set up a Python venv:
     python -m venv .
  3. Activate the Python venv:
     source bin/activate
  4. Install dependencies:
     pip install .
  5. Download the index file (I did this several times) and place it in the data folder.
  6. Try to run mwmbl:
     mwmbl-tinysearchengine --config config/tinysearchengine.yaml

Errors:

Running

mwmbl-tinysearchengine --config config/tinysearchengine.yaml  

Gives the error:

usage: mwmbl-tinysearchengine [-h] --index INDEX --terms TERMS
mwmbl-tinysearchengine: error: the following arguments are required: --index, --terms

NOTE: /config/tinysearchengine.yaml is not in the master branch, but still available in the update-readme-for-new-crawler branch. Even with /config/tinysearchengine.yaml present,

mwmbl-tinysearchengine --config config/tinysearchengine.yaml  

is failing with the error above.

Trying to use the --index and --terms parameters as follows:

mwmbl-tinysearchengine --index data/index.tinysearch --terms data/terms.csv  

NOTE: the terms CSV file is not present anywhere. I'm not sure exactly what is expected to be in the file, but from the errors I was getting,

term,count

needs to be present. Creating the file terms.csv under data/terms.csv with the above content gives me the error:

Terms [] []
Traceback (most recent call last):
  File "/home/o/Dokumente/code/python/mwmbl-dev/bin/mwmbl-tinysearchengine", line 8, in <module>
    sys.exit(main())
  File "/home/o/Dokumente/code/python/mwmbl-dev/lib/python3.10/site-packages/mwmbl/tinysearchengine/app.py", line 39, in main
    with TinyIndex(item_factory=Document, index_path=args.index) as tiny_index:
  File "/home/o/Dokumente/code/python/mwmbl-dev/lib/python3.10/site-packages/mwmbl/tinysearchengine/indexer.py", line 81, in __init__
    metadata = TinyIndexMetadata.from_bytes(metadata_bytes)
  File "/home/o/Dokumente/code/python/mwmbl-dev/lib/python3.10/site-packages/mwmbl/tinysearchengine/indexer.py", line 51, in from_bytes
    raise ValueError("This doesn't seem to be an index file")
ValueError: This doesn't seem to be an index file

There is no SHA sum to validate the integrity of the index.tinysearch file, but as I said, I downloaded it multiple times and I don't think the file got corrupted during the download process.

Am I missing something, or is there a problem with the recent version of mwmbl?
Also, is there an example of the terms CSV file anywhere?
Is config/tinysearchengine.yaml for some reason no longer in the master branch? If not, I can make a pull request to add it back from the update-readme-for-new-crawler branch.

Docker

Trying to set up the dev env with Docker ends in similar problems. Unfortunately I haven't documented it well, but building the image requires config/tinysearchengine.yaml, as defined in the Dockerfile.
After building:

sudo docker run -p 8080:8080 mwmbl                                                                                                                                                                        
usage: mwmbl-tinysearchengine [-h] --index INDEX --terms TERMS
mwmbl-tinysearchengine: error: the following arguments are required: --index, --terms

Env

uname -a                                                                                                                                                                                                   
Linux o 5.16.7-arch1-1

python --version                                                                                                                                                                                          
Python 3.10.2

pip --version                                                                                                                                                                                                 
pip 21.2.4

docker version                                                                                                                                                                                                
Client:
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.17.5
 Git commit:        e91ed5707e
 Built:             Mon Dec 13 22:31:40 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Add dependencies for indexer as extra or extras_require

Add the dependencies for the indexer module to the pyproject.toml so that they can be installed as extras. For example pip install mwmbl[indexer].

The dependencies for the indexer can be found in the indexer/bootstrap.sh file.

Experiment with lemmatization and/or stemming to improve search rankings

lowered = {nlp.vocab[token.orth].text.lower() for token in content_tokens}

Instead of indexing on lowercase tokens as above, we can use either lemmatization or stemming (https://stackoverflow.com/questions/1787110/what-is-the-difference-between-lemmatization-vs-stemming).

Spacy lemmatizer: https://spacy.io/api/lemmatizer

Pros:

  • Reduced vocabulary ({play, playing, played} => {play})
  • Will improve search results

Cons:

  • Overhead in preprocessing (the lemmatizer needs POS/dependency tags; stemming algorithms have some overhead too)
  • Current SOTA lemma/stem algorithms are still not 100% accurate.
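For a quick illustration of lemmatization with the spaCy lemmatizer (the model name is an assumption; it requires python -m spacy download en_core_web_sm first):

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("He was playing and they played games")
    # Lemmatized, lowercased vocabulary for indexing.
    print({token.lemma_.lower() for token in doc})
    # e.g. {'he', 'be', 'play', 'and', 'they', 'game'}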

Automate rebuilding of index

Now that we have new crawl data coming in daily, it would be nice to have an automated system that rebuilds the index using the latest data each day and deploys the new version. We could use GitHub Actions for this.

app is None when running tinysearchenginer server with python -m mwmbl.tinysearchengine.app

Reported first by @daoudclarke, and I've verified as well that the tinysearchengine server works fine when using the binary/entrypoint:
mwmbl-tinysearchengine --config config/tinysearchengine.yaml

However it fails when running the module as a script using
python -m mwmbl.tinysearchengine.app --config config/tinysearchengine.yaml

The following error message is raised:

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/nitred/anaconda3/envs/mwmbl/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/home/nitred/anaconda3/envs/mwmbl/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "/home/nitred/anaconda3/envs/mwmbl/lib/python3.9/site-packages/uvicorn/middleware/asgi2.py", line 16, in __call__
    instance = self.app(scope)
TypeError: 'NoneType' object is not callable
ERROR:uvicorn.error:Exception in ASGI application
Traceback (most recent call last):
  File "/home/nitred/anaconda3/envs/mwmbl/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/home/nitred/anaconda3/envs/mwmbl/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "/home/nitred/anaconda3/envs/mwmbl/lib/python3.9/site-packages/uvicorn/middleware/asgi2.py", line 16, in __call__
    instance = self.app(scope)
TypeError: 'NoneType' object is not callable
