archivy-docker's Introduction

Guide to using Archivy with Docker

This document contains enough information to get you started with running Archivy as a container, in this case with Docker (although you can use any other container runtime).


NOTE: Parts of the document may be incomplete as it is a work in progress. In time, more information will be added to each section/topic. If some part of the documentation is ambiguous, feel free to ask questions or make suggestions on the Issues page of the project. If necessary, additional revisions to the documentation can be made based on user feedback.

Prerequisites

  • Docker.

You can check if Docker is installed by running

$ docker --version
Docker version 19.03.12, build 48a66213fe

If you don't have Docker installed, take a look at the official installation guide for your device.

  • Docker-compose.

You can check if Docker-compose is installed by running

$ docker-compose --version
docker-compose version 1.12.0, build unknown

If you don't have Docker-compose installed, take a look at the official installation guide for your device.

Setup

Docker-Compose

  1. Download docker-compose.yml or docker-compose-lite.yml into the folder you want to use for Archivy (something like ~/docker/archivy). Edit the compose file as needed for your network (host, port, etc.). The default compose file (docker-compose.yml) is set up with Elasticsearch, whereas the other is more lightweight, using ripgrep for search. See here for more info on this.

  2. In the folder from which you will start docker-compose, create a directory for persistent storage of your notes: mkdir ./archivy_data.

  3. (optional): Archivy has many config options that allow you to fine-tune its behavior. If you want to define your own configuration instead of using the defaults we wrote for use with Docker, create an archivy_config directory in the same directory as archivy_data. We recommend you at least build off the defaults.

Note: If your user ID is anything other than 1000 (you can check with the id command), you will need to change the owner of these directories to the 1000 UID and 1000 GID. Example: chown -R 1000:1000 ./archivy_data.

  4. Start the docker-compose stack with docker-compose up -d, or docker-compose -f docker-compose-lite.yml up -d for the lightweight option (note that -f must come before up). If you're using your own configuration, also pass -f docker-compose.custom-config.yml in the preceding command.
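Putting the setup steps together, a minimal quick-start might look like the following (the raw-file URL is assumed from this repository's main branch; adjust folder names to taste):

```shell
mkdir -p ~/docker/archivy && cd ~/docker/archivy
# 1. Fetch the compose file (use docker-compose-lite.yml for the ripgrep variant)
wget https://github.com/archivy/archivy-docker/raw/main/docker-compose.yml
# 2. Create the persistent data directory
mkdir ./archivy_data
# 3. (optional) custom config directory
# mkdir ./archivy_config
# Ensure the container user (UID/GID 1000) owns the data directory
sudo chown -R 1000:1000 ./archivy_data
# 4. Start the stack
docker-compose up -d
```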

Application Setup

You should now be able to access your Archivy installation at http://<your-docker-host>:5000, where <your-docker-host> is the IP of the machine running your Docker environment.

However, the base installation has no users, so you will be unable to log in.

To create a new admin, run:

docker exec -it archivy archivy create-admin --password <your-password> <your-username>

  • docker exec -it archivy tells Docker to execute a command on the archivy container with an interactive pseudo-TTY. Read more here.
  • archivy create-admin --password <your-password> <your-username> is the command run inside the container; it creates a new admin account with the provided username and password.

Congratulations! You can now log into your new Archivy instance (complete with search and persistent data) with the credentials you created above. Happy archiving!

Installing Plugins

To install plugins into your Dockerized Archivy instance, you can simply run pip inside the container. For example:

docker exec archivy pip install archivy_git to install the archivy-git plugin.

NOTE: Plugins will persist only as long as the container itself does. If you take down your Archivy instance with docker-compose down, the container and its filesystem (including any installed plugins) are destroyed. Stopping it with docker container stop archivy will not cause this issue.

Note: Some plugins will require dependencies installed into the container (e.g. archivy-hn). In such cases, follow the Docker installation instructions provided by the plugin maintainer. If none exist, open an issue.
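One way to make plugins survive container re-creation (an approach sketched here, not an official image) is to bake them into a custom image built on top of the upstream one:

```dockerfile
# Hypothetical custom image: upstream Archivy plus the archivy_git plugin
FROM uzayg/archivy:v1.0.0
RUN pip install archivy_git
```

Build it with docker build -t my-archivy . and point the image: line of your compose file at my-archivy.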

archivy-docker's People

Contributors

jafner, uzay-g


archivy-docker's Issues

Volume for plugins

There should be a way to persist plugins installed into the container environment. The README suggests that this is currently not supported.

Some suggestions can be taken from:

Ideally, plugins would not be installed into the running container but preinstalled into custom images that reuse the archivy images with a FROM directive. This could be exemplified in the documentation alongside this repository.

Can't create user / auto-created admin password does not work

I have followed the directions in Application Setup to create an admin, however I get the following error

glenn@zeus:~/docker$ docker exec -it docker_archivy_1 archivy create-admin --password Stinky1303$ glenn
/usr/local/lib/python3.9/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.2) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
[2021-01-23 01:20:10,432] INFO in __init__: Archivy has created an admin user as it did not exist.
                            Username: 'admin', password: '6m7oKd_xh4B00iDkUXuVP8Idgx6es555vOryjknTm6s'

Usage: archivy [OPTIONS] COMMAND [ARGS]...
Try 'archivy --help' for help.

Error: No such command 'create-admin'.

and when I try archivy --help I get

glenn@zeus:~/docker$ docker exec -it docker_archivy_1 archivy --help
/usr/local/lib/python3.9/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.2) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Usage: archivy [OPTIONS] COMMAND [ARGS]...

Options:
  --version  Show the flask version
  --help     Show this message and exit.

Commands:
  routes  Show the routes for the app.
  run     Runs archivy web application
  shell   Run a shell in the app context.

Now that wouldn't necessarily be a problem except when I try to login with the password shown, I am told that my credentials are invalid.


Any help with how to proceed would be greatly appreciated!

Archivy Redirect behind Traefik with HTTPS redirecting to HTTP

So, here's my issue. I have Traefik running elsewhere, and it's configured to use the file backend, instead of the docker backend.

Browsing to https://<domain>/login displays properly, but browsing to https://<domain> redirects me back to http://<domain> (because it's trying to redirect me to /login):

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: <a href="/login?next=%2F">/login?next=%2F</a>.  If not click the link.

Even when browsing to https://<domain>/login (where the page loads fine), I can't do anything, because every action redirects me back to HTTP. So every time it goes back to HTTP, I just add an "s" and I can get that page to load (until I click something else), and the process keeps repeating.

I tried to fix this in Traefik (by redirecting HTTP to HTTPS), but then I get the "Too Many Redirects" message.

I think it would be a great idea to add a URL option in the config so we can add the full domain (https://<domain>) to the config, and the redirect would use it. Or at least be able to tell it to use HTTPS instead of HTTP (HTTPS=true) or something.

Thoughts? I mean, if I missed something that already fixes this issue, please let me know.
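For reference, the usual Flask-side fix for scheme-losing redirects behind a reverse proxy is werkzeug's ProxyFix middleware, which makes the app trust the X-Forwarded-Proto / X-Forwarded-Host headers that Traefik sets. This is a generic sketch of the technique, not Archivy's actual code:

```python
# Generic sketch: make a Flask app honor the scheme/host forwarded by a proxy,
# so externally built redirect URLs keep https instead of falling back to http.
from flask import Flask, redirect, url_for
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust one hop of X-Forwarded-Proto and X-Forwarded-Host
app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1)

@app.route("/login")
def login():
    return "login page"

@app.route("/")
def index():
    # _external=True builds an absolute URL using the forwarded scheme and host
    return redirect(url_for("login", _external=True))
```

With this in place, a request carrying X-Forwarded-Proto: https is redirected to an https:// URL rather than http://.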

when deployed (docker compose) - nothing happens

I don't know what I'm doing wrong. I'm using the docker-compose file from this GitHub page (I changed some lines, of course), but somehow no application appears when I go to 192.168.1.1:5000.
I used this example with Traefik, so I don't really need the 'ports' line. I did add it to see if it works internally (without Traefik), and that is NOT the case.
The message is: This site cannot be reached. 192.168.1.1 refused the connection.

Additionally, I ran the chown command:
chown -R 1000:1000 /share/docker/swarm/appdata/archivy/data

this is also a docker swarm example:

version: '3'
services:
  archivy:
    image: uzayg/archivy:v1.0.0
    networks:
      - traefik_public
    ports:
      - 5000:5000
    environment:
      - FLASK_DEBUG=0 # this sets the level of verbosity printed to the Archivy container's logs
      - ELASTICSEARCH_ENABLED=1 # this sets whether the container should check if an Elasticsearch container is running before it attempts to start the Archivy server. Note: This does not check whether the elasticsearch server is working properly, only if an Elasticsearch container is working. Further, this setting is overridden by the contents of config.yml
      - ELASTICSEARCH_URL=http://elasticsearch:9200/ # sets the URL that the entrypoint.sh script should use to check for a running Elasticsearch container
    volumes:
      - /share/docker/swarm/appdata/archivy/data:/archivy/data
      - /share/docker/swarm/appdata/archivy/config:/archivy/.local/share/archivy
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.archivy-rtr.entrypoints=https"
        - "traefik.http.routers.archivy-rtr.rule=Host(`archivy.mydomain.com`)"
        - "traefik.http.routers.archivy-rtr.middlewares=chain-oauth@file"
        - "traefik.http.routers.archivy-rtr.service=archivy-svc"
        - "traefik.http.services.archivy-svc.loadbalancer.server.port=5000"

  elasticsearch:
    image: elasticsearch:7.9.0
    networks:
      - traefik_public
    volumes:
      - /share/docker/swarm/appdata/archivy/search:/usr/share/elasticsearch/data:rw
    environment:
      - "discovery.type=single-node"

networks:
  traefik_public:
    external: true

the logs:

Waiting for Elasticsearch @ http://elasticsearch:9200/ to start.                                                                                                                                                                                                                                     
Elasticsearch is running @ http://elasticsearch:9200/.                                                                                                                                                                                                                                               
Starting Archivy                                                                                                                                                                                                                                                                                     
Running archivy...                                                                                                                                                                                                                                                                                   
[2021-02-08 09:57:23,219] INFO in __init__: OUTPUT_FOLDER: /tmp/click-web                                                                                                                                                                                                                            
 * Serving Flask app "archivy" (lazy loading)                                                                                                                                                                                                                                                        
 * Environment: production                                                                                                                                                                                                                                                                           
   WARNING: This is a development server. Do not use it in a production deployment.                                                                                                                                                                                                                  
   Use a production WSGI server instead.                                                                                                                                                                                                                                                             
 * Debug mode: off                                                                                                                                                                                                                                                                                   
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)                        
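Note the last log line: the server is bound to 127.0.0.1 inside the container, which makes it unreachable from any other host. Binding to 0.0.0.0 is the usual fix; a hypothetical config.yml fragment (HOST and PORT are existing Archivy config keys):

```yaml
HOST: 0.0.0.0
PORT: 5000
```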

elastic search log:

{"type": "server", "timestamp": "2021-02-08T09:57:24,806Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "a8318a21da59", "message": "adding index lifecycle policy [ilm-history-ilm-policy]", "cluster.uuid": "PIrcC8YNSjSQh
jSw55w", "node.id": "6XJaHUtuSo6Ua2ZGKI90Ow"  }                                                                                                                                                                                                                                                      
{"type": "server", "timestamp": "2021-02-08T09:57:25,191Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "a8318a21da59", "message": "adding index lifecycle policy [slm-history-ilm-policy]", "cluster.uuid": "PIrcC8YNSjSQh
jSw55w", "node.id": "6XJaHUtuSo6Ua2ZGKI90Ow"  }                                                                                                                                                                                                                                                      
{"type": "server", "timestamp": "2021-02-08T09:57:25,840Z", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "docker-cluster", "node.name": "a8318a21da59", "message": "license [b6c8a1ec-e104-415c-ab4a-4b0ebc3c2884] mode [basic] - valid", "cluster.uuid": "PIrcC8YNSjSQhmaAj
5w", "node.id": "6XJaHUtuSo6Ua2ZGKI90Ow"  }                                                                                                                                                                                                                                                          
{"type": "server", "timestamp": "2021-02-08T09:57:25,843Z", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "docker-cluster", "node.name": "a8318a21da59", "message": "Active license is now [BASIC]; Security is disabled", "cluster.uuid": "PIrcC8YNSjSQhma
w55w", "node.id": "6XJaHUtuSo6Ua2ZGKI90Ow"  }         

is uzayg/archivy the result of following the steps in this repo?

I tried using it but was unable to get a working instance running using the following docker compose :

version: '3'

services:

  archivy:
    image: uzayg/archivy:v1.0.0
    container_name: archivy
#   networks: # If you are using a reverse proxy, you will need to edit this file to add Archivy to your reverse proxy network. You can also remove the host-to-container port mapping, as that should be handled by the reverse proxy
    ports:
      - 6875:5000 # this is a host-to-container port mapping. If your Docker environment already uses the host's port `:5000`, then you can remap this to any `<port>:5000` you need
    environment:
      - FLASK_DEBUG=0 # this sets the level of verbosity printed to the Archivy container's logs
      - ELASTICSEARCH_ENABLED=1 # this sets whether the container should check if an Elasticsearch container is running before it attempts to start the Archivy server. Note: This *does not* check whether the elasticsearch server is working properly, only if an Elasticsearch container is working. Further, this setting is overridden by the contents of `config.yml`
      - ELASTICSEARCH_URL=http://elasticsearch:9200/ # sets the URL that the `entrypoint.sh` script should use to check for a running Elasticsearch container
    volumes:
      - ./archivy_data:/archivy:rw # this looks for a Docker volume on the host called `archivy_data` and mounts it into the container's `/archivy` directory. You can change the name of the Docker volume on the host, but not the mount path
  elasticsearch:
    image: elasticsearch:7.9.0
    container_name: elasticsearch
    volumes:
      - ./elasticsearch_data:/usr/share/elasticsearch/data
    environment:
      - "discovery.type=single-node"

Does not run on Raspberry Pi 4

I am unable to run archivy through docker-compose on Raspberry Pi 4.
It runs fine if I install it directly on the RPi using pip3 install archivy

I am running ubuntu server 20.10 (64-bit).
Since it runs outside docker, I suspect it is related to the Dockerfile, but I am not able to see the cause.

I get the following error in the docker logs:
archivy_1 | standard_init_linux.go:211: exec user process caused "exec format error"

Elasticsearch is running fine.
Snippet of my docker-compose.yml file below
(I also run other containers not shown. The container is exposed through a Caddy reverse proxy):

version: '3.4'
services:

  archivy:
  # https://github.com/archivy/archivy-docker/blob/main/docker-compose.yml
    image: uzayg/archivy:v1.0.0
    restart: unless-stopped
    environment:
      - FLASK_DEBUG=0
      - ELASTICSEARCH_ENABLED=1
      - ELASTICSEARCH_URL=http://elasticsearch:9200/
    volumes:
      - ./archivy/data:/archivy/data
      - ./archivy/config:/archivy/.local/share/archivy
  elasticsearch:
    image: elasticsearch:7.9.0
    restart: unless-stopped
    volumes:
      - ./archivy/elasticsearch/data:/usr/share/elasticsearch/data:rw
    environment:
      - "discovery.type=single-node"

Reading a bit in your repository, I see an issue noting that it does not work on ARM because pandoc is missing. I am able to install pandoc on Ubuntu Server on the Raspberry Pi using apt. Is this issue related to the base image being Alpine? Not sure if it is related to the error, though.
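"exec format error" usually means the image was built for a different CPU architecture than the host (e.g. an amd64-only image on an arm64 Raspberry Pi). One way to check what an image was built for:

```shell
# Prints e.g. "amd64" or "arm64" for the pulled image
docker image inspect uzayg/archivy:v1.0.0 --format '{{.Architecture}}'
```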

flask error

After configuring with this:

HOST: 127.0.0.1
INTERNAL_DIR: /archivy/.local/share/archivy
SEARCH_CONF:
  enabled: 1
  engine: elasticsearch
  url: http://elasticsearch:9200/
USER_DIR: /archivy/data

and creating an admin, launching the run command exits with this error:
[2021-12-17 13:26:01,044] INFO in __init__: Elasticsearch index already created
Running archivy...
[2021-12-17 13:26:01,107] INFO in __init__: OUTPUT_FOLDER: /tmp/click-web

 * Serving Flask app "archivy" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
Traceback (most recent call last):
  File "/usr/local/bin/archivy", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/flask/cli.py", line 586, in main
    return super(FlaskGroup, self).main(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/flask/cli.py", line 426, in decorator
    return __ctx.invoke(f, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/archivy/cli.py", line 116, in run
    app_with_cli.run(host=app.config["HOST"], port=app.config["PORT"])
  File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 990, in run
    run_simple(host, port, self, **options)
  File "/usr/local/lib/python3.9/site-packages/werkzeug/serving.py", line 1052, in run_simple
    inner()
  File "/usr/local/lib/python3.9/site-packages/werkzeug/serving.py", line 996, in inner
    srv = make_server(
  File "/usr/local/lib/python3.9/site-packages/werkzeug/serving.py", line 847, in make_server
    return ThreadedWSGIServer(
  File "/usr/local/lib/python3.9/site-packages/werkzeug/serving.py", line 740, in __init__
    HTTPServer.__init__(self, server_address, handler)
  File "/usr/local/lib/python3.9/socketserver.py", line 452, in __init__
    self.server_bind()
  File "/usr/local/lib/python3.9/http/server.py", line 138, in server_bind
    socketserver.TCPServer.server_bind(self)
  File "/usr/local/lib/python3.9/socketserver.py", line 466, in server_bind
    self.socket.bind(self.server_address)
OSError: [Errno 98] Address already in use
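Errno 98 (EADDRINUSE) means something is already listening on the configured host and port. A quick way to see what is bound there (assuming iproute2's ss is available on the host):

```shell
# List listening TCP sockets on port 5000 and the owning process
ss -tlnp | grep ':5000'
```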

Production example not provided

When launching the archivy container, it complains that its embedded development webserver is configured for production:

archivy_1  | Starting Archivy
archivy_1  | Running archivy...
archivy_1  | [2022-09-24 18:39:42,440] INFO in __init__: OUTPUT_FOLDER: /tmp/click-web
archivy_1  |  * Serving Flask app 'archivy' (lazy loading)
archivy_1  |  * Environment: production
archivy_1  |    WARNING: This is a development server. Do not use it in a production deployment.
archivy_1  |    Use a production WSGI server instead.
archivy_1  |  * Debug mode: off
archivy_1  |  * Running on all addresses.
archivy_1  |    WARNING: This is a development server. Do not use it in a production deployment.
archivy_1  |  * Running on http://10.89.5.2:5000/ (Press CTRL+C to quit)

Exemplary uWSGI setups for Flask with Nginx can be found in:
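As a generic illustration of the recommendation (the "myapp:app" entry point below is a placeholder, not Archivy's actual WSGI object — check the project source for the real one), a production server such as gunicorn would be launched like this instead of the Flask dev server:

```shell
pip install gunicorn
# "myapp:app" is a placeholder WSGI entry point
gunicorn --bind 0.0.0.0:5000 --workers 2 "myapp:app"
```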

ElasticSearch enabled in light image

Archivy waits for Elasticsearch to come up unless an environment variable is set to disable this.

mkdir -p archivy_data archivy_config
wget https://github.com/archivy/archivy-docker/raw/main/config-lite.yml -O archivy_config/config.yml

Then, depending on whether Docker or Podman is being used, we issue one of:

chown -R 1000:1000 archivy_*
podman unshare chown 1000:1000 -R archivy_*

When running the following docker-compose.yml (with fused configurations of light and custom-config), Archivy attempts to wait for ElasticSearch, because it is enabled:

version: '3'

services:

  archivy:
    image: uzayg/archivy:latest-lite
    ports:
      - "5000"
    environment:
      - FLASK_DEBUG=0
    volumes:
      - ./archivy_data:/archivy/data:z
      - ./archivy_config:/archivy/.local/share/archivy:z
$ docker-compose up
Creating network "archivy_default" with the default driver
Creating archivy_archivy_1 ... done
Attaching to archivy_archivy_1
archivy_1  | Setting environment variables.
archivy_1  | The following environment variables were set:
archivy_1  |            FLASK_DEBUG=0
archivy_1  |            ELASTICSEARCH_ENABLED=1
archivy_1  |            ELASTICSEARCH_URL=http://elasticsearch:9200/
archivy_1  |            ARCHIVY_PORT=5000
archivy_1  | Checking if Elasticsearch is up and running
archivy_1  | Waiting for Elasticsearch @ http://elasticsearch:9200/ to start.
archivy_1  | Waiting for Elasticsearch @ http://elasticsearch:9200/ to start.
archivy_1  | Waiting for Elasticsearch @ http://elasticsearch:9200/ to start.
^CGracefully stopping... (press Ctrl+C again to force)
Stopping archivy_archivy_1 ... done

This is circumvented by setting the environment variable ELASTICSEARCH_ENABLED to 0:

      - ELASTICSEARCH_ENABLED=0
$ docker-compose up
Creating network "archivy_default" with the default driver
Creating archivy_archivy_1 ... done
Attaching to archivy_archivy_1
archivy_1  | Setting environment variables.
archivy_1  | The following environment variables were set:
archivy_1  |            FLASK_DEBUG=0
archivy_1  |            ELASTICSEARCH_ENABLED=0
archivy_1  |            ELASTICSEARCH_URL=http://elasticsearch:9200/
archivy_1  |            ARCHIVY_PORT=5000
archivy_1  | Elasticsearch not used. Search function will not work.
archivy_1  | Starting Archivy
archivy_1  | Running archivy...
archivy_1  | [2022-09-24 18:39:42,440] INFO in __init__: OUTPUT_FOLDER: /tmp/click-web
archivy_1  |  * Serving Flask app 'archivy' (lazy loading)
archivy_1  |  * Environment: production
archivy_1  |    WARNING: This is a development server. Do not use it in a production deployment.
archivy_1  |    Use a production WSGI server instead.
archivy_1  |  * Debug mode: off
archivy_1  |  * Running on all addresses.
archivy_1  |    WARNING: This is a development server. Do not use it in a production deployment.
archivy_1  |  * Running on http://10.89.5.2:5000/ (Press CTRL+C to quit)
archivy_1  | 10.89.5.2 - - [24/Sep/2022 18:42:45] "GET / HTTP/1.1" 302 -
archivy_1  | 10.89.5.2 - - [24/Sep/2022 18:42:45] "GET /login?next=%2F HTTP/1.1" 200 -
archivy_1  | 10.89.5.2 - - [24/Sep/2022 18:42:45] "GET /static/main.css HTTP/1.1" 200 -
archivy_1  | 10.89.5.2 - - [24/Sep/2022 18:42:45] "GET /static/logo.png HTTP/1.1" 200 -
archivy_1  | 10.89.5.2 - - [24/Sep/2022 18:42:45] "GET /static/archivy.svg HTTP/1.1" 200 -

We can then create the admin user, and get the port of the application from the container API:

$ docker-compose exec archivy archivy create-admin username
$ docker-compose port archivy 5000

Note: This setup uses Podman > 4 together with systemctl enable --user podman.socket podman.service, export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock, and docker-compose 1.29.2, which is why we add the :z annotation to the volume mounts. More about why this is necessary in:

This adds an additional unshare step to the initialisation of the directories to be mounted, if they were not autogenerated.
