
This is a read-only mirror of: https://gitlab.com/equalitie/ouinet/

Home Page: https://ouinet.work

License: MIT License



Ouinet

See the lightning talk at the Decentralized Web Summit 2018.

Ouinet is a Free/Open Source technology which allows web content to be served with the help of an entire network of cooperating nodes using peer-to-peer routing and distributed caching of responses. This helps mitigate the Web's characteristic single point of failure: a client application being unable to connect to a particular server.

The typical Ouinet client node setup consists of a web browser or other application using a special HTTP proxy or API provided by a dedicated program or library on the local machine. When the client gets a request for content, it attempts to retrieve the resource using several mechanisms. It tries to fetch the page from a distributed cache by looking up the content in a distributed cache index (like the BitTorrent DHT), and if not available, it contacts a trusted injector server over a peer-to-peer routing system (like I2P) and asks it to fetch the page and store it in the distributed cache.

Ouinet request/response flow

Future access by client nodes to popular content inserted in distributed storage shall benefit from increased redundancy and locality, which translates to: increased availability in the face of connectivity problems; increased transfer speeds in case of poor upstream links; and reduced bandwidth costs when internet access providers charge more for external or international traffic. Content injection is also designed to allow for content re-introduction and seeding in extreme cases of total connectivity loss (e.g. natural disasters).

The Ouinet library is a core technology that can be used by any application to benefit from these advantages. Ouinet integration provides any content creator the opportunity to use cooperative networking and storage for the delivery of their content to users around the world.

Warning: Ouinet is still highly experimental. Some features (like peer-to-peer routing) may not work smoothly depending on the different back-end technologies, and random unexpected crashes may occur. Also, Ouinet is not an anonymity tool: information about your browsing might be leaked to other participants in the network, as well as the fact that your application is seeding particular content. Running some components (like injector code) may turn your computer into an open web proxy, and other security or privacy-affecting issues might exist. Please keep this in mind when using this software and only assume reasonable risks.

Note: The steps described below have only been tested to work on GNU/Linux on AMD64 platforms. Building and testing Ouinet on your computer requires familiarity with the command line. At the moment there are no user-friendly packages for Ouinet on the desktop.

Cloning the source tree

Ouinet uses Git submodules; to clone it properly, use:

$ git clone --recursive https://gitlab.com/equalitie/ouinet.git

You can also clone and update the modules separately:

$ git clone https://gitlab.com/equalitie/ouinet.git
$ cd ouinet
$ git submodule update --init --recursive

Build requirements (desktop)

To build Ouinet natively on your system, you will need the following software to be already available:

Assuming that <SOURCE DIR> points to the directory where the CMakeLists.txt file is, and <BUILD DIR> is a directory of your choice where all (even temporary) build files will go, you can build Ouinet with:

$ mkdir -p <BUILD DIR>
$ cd <BUILD DIR>
$ cmake <SOURCE DIR>
$ make

However, we encourage you to use a Vagrant environment for development, or Docker containers for deploying a Ouinet client or an injector. These have a different set of requirements. See the corresponding sections below for further instructions on Vagrant and Docker.

Running integration tests

The Ouinet source comes with a set of integration tests. To run them you will need the Twisted Python framework.

If you already built Ouinet from <SOURCE DIR> into <BUILD DIR> (see above), you can run the tests as follows:

$ export OUINET_REPO_DIR=<SOURCE DIR>
$ export OUINET_BUILD_DIR=<BUILD DIR>
$ ./scripts/run_integration_tests.sh

Using a Vagrant environment

One of the easiest ways to build Ouinet from source code (e.g. for development or testing changes and fixes to code) is using a Vagrant development environment.

To install Vagrant on a Debian system, run:

$ sudo apt-get install vagrant

Ouinet's source tree contains a Vagrantfile which allows you to start a Vagrant environment ready to build and run Ouinet by entering the source directory and executing:

$ vagrant up

If your Vagrant installation uses VirtualBox by default and you find problems, you may need to force it to use libvirt instead:

$ sudo apt-get install libvirt-bin libvirt-dev
$ vagrant plugin install vagrant-libvirt
$ vagrant up --provider=libvirt

Building Ouinet in Vagrant

Enter the Vagrant environment with vagrant ssh. There you will find:

  • Your local Ouinet source tree mounted read-only under /vagrant (<SOURCE DIR> above).

  • Your local Ouinet source tree mounted read-write under /vagrant-rw. You can use it as a bridge to your host.

  • ~vagrant/build-ouinet-git.sh: Running this script will clone the Ouinet Git repository and all submodules into $PWD/ouinet-git-source and build Ouinet into $PWD/ouinet-git-build (<BUILD DIR> above). Changes to source outside of the Vagrant environment will not affect this build.

  • ~vagrant/build-ouinet-local.sh: Running this script will use your local Ouinet source tree (mounted under /vagrant) to build Ouinet into $PWD/ouinet-local-build (<BUILD DIR> above). Thus you can edit source files on your computer and have them built in a consistent environment.

    Please note that this requires that you keep submodules in your checkout up to date as indicated above.

Accessing Ouinet services from your computer

The Vagrant environment is by default isolated, but you can configure it to redirect ports from the host to the environment.

For instance, if you want to run a Ouinet client (with its default configuration) in Vagrant and use it as a proxy in a browser on your computer, you may uncomment the following line in Vagrantfile:

#vm.vm.network "forwarded_port", guest: 8077, host: 8077, guest_ip: "127.0.0.1"

And restart the environment:

$ vagrant halt
$ vagrant up

Then you can configure your browser to use localhost port 8077 to contact the HTTP proxy (see the section further below).
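
As a quick sanity check from the host (a sketch assuming a client with its default configuration is already running inside the Vagrant environment), you can send a request through the forwarded proxy port:

$ curl -x http://localhost:8077 -I http://example.com/

If the forwarding works, you should get HTTP response headers back instead of a connection error.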

Docker development environment

We provide a bootstrap Docker image which is automatically updated with each commit and provides all prerequisites for building the latest Ouinet desktop binaries and Android libraries.

To exchange data with the container (such as Ouinet's source code, cached downloads and build files), we will bind mount the following directories to /usr/local/src/ in the container (some we'll create first):

  • source (assumed to be at the current directory),
  • build (in ../ouinet.build/),
  • and the container's $HOME (in ../ouinet.home/), where .gradle, .cargo, etc. will reside.

Note that with the following incantations you will not be able to use sudo in the container (--user), and that all changes outside the bind mounts will be lost after you exit (--rm).

mkdir -p ../ouinet.build/ ../ouinet.home/
sudo docker run \
  --rm -it \
  --user $(id -u):$(id -g) \
  --mount type=bind,source="$(pwd)",target=/usr/local/src/ouinet \
  --mount type=bind,source="$(pwd)/../ouinet.build",target=/usr/local/src/ouinet.build \
  --mount type=bind,source="$(pwd)/../ouinet.home",target=/mnt/home \
  -e HOME=/mnt/home \
  registry.gitlab.com/equalitie/ouinet:android

If you only need to build Ouinet desktop binaries, you may replace the image name at the end of the command with registry.gitlab.com/equalitie/ouinet, which is much lighter.

After running the command, you should find yourself in a new terminal, ready to accept the build instructions described elsewhere in the document.

Please consult the GitLab CI scripts to see how to build your own bootstrap images locally.

Docker deployment

Ouinet injectors and clients can be run as Docker containers. An application configuration file for Docker Compose is included for easily deploying all needed volumes and containers.

Running a Ouinet node container only requires a couple of hundred MiB, plus the space devoted to the data volume (which may grow considerably larger in the case of the injector).

A Dockerfile is also included that can be used to create a Docker image which contains the Ouinet injector, client and necessary software dependencies running on top of a Debian base system.

Building the image

Ouinet Docker images should be available from the Docker Hub. Follow the instructions in this section if you still want to build the image yourself. You will need around 3 GiB of disk space.

You may use the Dockerfile as included in Ouinet's source code, or you can just download it. Then build the image by running:

$ sudo docker build -t equalitie/ouinet:latest - < Dockerfile

That command will build a default recommended version, which you can override with --build-arg OUINET_VERSION=<VERSION>.
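
For example, to build a specific version and tag the image accordingly (a sketch; <VERSION> is a placeholder for the version you choose), you might run:

$ sudo docker build --build-arg OUINET_VERSION=<VERSION> \
      -t equalitie/ouinet:<VERSION> - < Dockerfile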

After a while you will get the equalitie/ouinet:latest image. Then you may want to run sudo docker image prune to free up the space taken by temporary builder images (which may amount to a couple of GiB).

Debugging-enabled image

You can also build an alternative version of the image where programs contain debugging symbols and they are run under gdb, which shows a backtrace in case of a crash. Just add --build-arg OUINET_DEBUG=yes to the build command. We recommend that you use a different tag for these images (e.g. equalitie/ouinet:<VERSION>-debug).
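
A sketch of such a build (again with <VERSION> as a placeholder) could be:

$ sudo docker build --build-arg OUINET_VERSION=<VERSION> --build-arg OUINET_DEBUG=yes \
      -t equalitie/ouinet:<VERSION>-debug - < Dockerfile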

Depending on your Docker setup, you may need to change the container's security profile and give it tracing capabilities. For more information, see this thread.

Deploying a client

You may use Docker Compose with the docker-compose.yml file included in Ouinet's source code (or you can just download it). Whenever you run docker-compose commands using that configuration file, you must be in the directory where the file resides.

If you want to create a client that seeds a static cache root (see below) from a directory in the host, check the instructions in docker-compose.yml.

If you just plan to run a single client with the latest code on your computer, you should be fine with running the following command:

$ sudo docker-compose up

That command will create a data volume, a main node container for running the Ouinet client or injector (using the host's network directly), and a convenience shell container (see below) to allow you to modify files in the data volume. It will then run the containers (the shell container will exit immediately; this is normal).

To stop the node, hit Ctrl+C or run sudo docker-compose stop. Please note that with the default configuration in docker-compose.yml, the node will be automatically restarted whenever it crashes or the host is rebooted, until explicitly stopped.

A new client node which starts with no configuration will get a default one from templates included in Ouinet's source code, but it will be missing some important parameters. You may therefore want to stop it (see above) and use the shell container (see below) to edit client/ouinet-client.conf (a configuration sketch follows this list):

  • If using a local test injector, set its endpoint in option injector-ep.
  • Set the injector's credentials in option injector-credentials.
  • Unless using a local test injector, set option injector-tls-cert-file to /var/opt/ouinet/client/ssl-inj-cert.pem and copy the injector's TLS certificate to that file.
  • Set the public key used by the injector for HTTP signatures in option cache-http-public-key.
  • To enable the distributed cache, set option cache-type. The only value currently supported is bep5-http.
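
Putting the options above together, a minimal sketch of client/ouinet-client.conf for a client using a remote injector might look like this (all values are placeholders; use your injector's actual credentials, certificate and key):

injector-credentials = <USER>:<PASSWORD>
injector-tls-cert-file = /var/opt/ouinet/client/ssl-inj-cert.pem
cache-http-public-key = <CACHE_PUB_KEY>
cache-type = bep5-http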

After you have set up your client's configuration, you can restart it. The client's HTTP proxy endpoint should be available to the host at localhost port 8077.

If you get a "connection refused" error when using the client's proxy, your Docker setup may not support host networking. To enable port forwarding, follow the instructions in docker-compose.yml.

Finally, restart the client container.

Using the shell container

You may use the convenience shell container to access Ouinet node files directly:

$ sudo docker-compose run --rm shell

This will create a throwaway container with a shell at the /var/opt/ouinet directory in the data volume.

If you want to transfer an existing repository to /var/opt/ouinet, you first need to move away or remove the existing one using the shell container:

# mv REPO REPO.old  # REPO is either 'injector' or 'client'

Then you may copy it in from the host using:

$ sudo docker cp /path/to/REPO SHELL_CONTAINER:/var/opt/ouinet/REPO

Other deployments

If you plan on running several nodes on the same host, you will need to use different explicit Docker Compose project names for them. To make the node an injector instead of a client, set OUINET_ROLE=injector. To make the container use a particular image version instead of latest, set OUINET_VERSION. To limit the amount of memory that the container may use, set OUINET_MEM_LIMIT, but then you will need to pass the --compatibility option to docker-compose.

An easy way to set all these parameters is to copy or link the docker-compose.yml file to a directory with the desired project name and populate its default environment file:

$ mkdir -p /path/to/ouinet-injector  # ouinet-injector is the project name
$ cd /path/to/ouinet-injector
$ cp /path/to/docker-compose.yml .
$ echo OUINET_ROLE=injector >> .env
$ echo OUINET_VERSION=v0.1.0 >> .env
$ echo OUINET_MEM_LIMIT=6g >> .env
$ sudo docker-compose --compatibility up

Injector container

After an injector has finished starting, you may want to use the shell container to inspect and note down the contents of injector/endpoint-* (injector endpoints) and injector/ed25519-public-key (public key for HTTP signatures) to be used by clients. The injector will also generate a tls-cert.pem file which you should distribute to clients for TLS access. Other configuration information like credentials can be found in injector/ouinet-injector.conf.

Remember that the injector will be available as an HTTP proxy for anyone having its credentials; if you want to disable this feature, set disable-proxy = true. You can also restrict the URLs injected to those matching a regular expression with the restricted option.
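
As a sketch, the relevant lines in injector/ouinet-injector.conf might look like this (assuming the restricted option takes the regular expression as its value; the expression below is only illustrative):

disable-proxy = true
restricted = https?://(www\.)?example\.com/.*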

To start the injector in headless mode, you can run:

$ sudo docker-compose up -d

You will need to use sudo docker-compose stop to stop the container.

To be able to follow its logs, you can run:

$ sudo docker-compose logs --tail=100 -ft

Testing (desktop)

Running a test injector

If you want to run your own injector for testing and you have a local build, create a copy of the repos/injector repository template directory included in Ouinet's source tree:

$ cp -r <SOURCE DIR>/repos/injector /path/to/injector-repo

When using a Docker-based injector as described above, just run and stop it so that it creates a default configuration for you.

You should now edit ouinet-injector.conf in the injector repository (for Docker, use the shell container to edit injector/ouinet-injector.conf):

  1. Enable listening on loopback addresses:

    listen-tcp = ::1:7070
    

    For clients you may then use 127.0.0.1:7070 as the injector endpoint (IPv6 is not yet supported).

  2. Change the credentials to use the injector (use your own ones):

    credentials = injector_user:injector_password
    

    For clients you may use these as injector credentials.

All the steps above only need to be done once.

Finally, start the injector. For the local build you will need to explicitly point it to the repository created above:

$ <BUILD DIR>/injector --repo /path/to/injector-repo
...
[INFO] HTTP signing public key (Ed25519): <CACHE_PUB_KEY>
...

Note down the <CACHE_PUB_KEY> string in the above output since clients will need it as the public key for HTTP signatures. You may also find that value in the ed25519-public-key file in the injector repository.
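
For example (a sketch using the repository path from above):

$ cat /path/to/injector-repo/ed25519-public-key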

When you are done testing the Ouinet injector, you may shut it down by hitting Ctrl+C.

Running a test client

To perform some tests using a Ouinet client and an existing test injector, you first need to know the injector endpoint and credentials, its TLS certificate, and its public key for HTTP signatures. These are usually, respectively: a tcp:<IP>:<PORT> string, a <USER>:<PASSWORD> string, a path to a PEM file, and an Ed25519 public key (hexadecimal or Base32).

You need to configure the Ouinet client to use the aforementioned parameters. If you have a local build, create a copy of the repos/client repository template directory included in Ouinet's source tree:

$ cp -r <SOURCE DIR>/repos/client /path/to/client-repo

When using a Docker-based client as described above, just run and stop it so that it creates a default configuration for you.

Now edit ouinet-client.conf in the client repository (for Docker, use the shell container to edit client/ouinet-client.conf) and add options for the injector endpoint (if testing), credentials and public key. Remember to replace the values with your own:

injector-ep = tcp:127.0.0.1:7070
injector-credentials = injector_user:injector_password
cache-http-public-key = 00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff
cache-type = bep5-http

All the steps above only need to be done once.

Finally, start the client. For the local build you will need to explicitly point it to the repository created above:

$ <BUILD DIR>/client --repo /path/to/client-repo

The client opens a web proxy on local port 8077 by default (see option listen-on-tcp in its configuration file). When you access the web using this proxy (see the following section), your requests will go through your local Ouinet client, which will attempt several mechanisms supported by Ouinet to retrieve the resource.
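
Before configuring a browser, you can do a quick check from another shell by sending a request through the client's proxy (a sketch assuming the default port):

$ curl -x http://127.0.0.1:8077 http://example.com/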

When you are done testing the Ouinet client, you may shut it down by hitting Ctrl+C.

A note on persistent options

Please note that a few selected options (like the log level and which request mechanisms are enabled) are saved when changed, either from the command line or the client front-end (see below).

On client start, the values of saved options take precedence over those in the configuration file, but not over those in the command line. You can use the --drop-saved-opts option to drop the values of saved options altogether.
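
For example, to start a local-build client while discarding any previously saved option values (a sketch reusing the invocation shown above):

$ <BUILD DIR>/client --repo /path/to/client-repo --drop-saved-opts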

Please run the client with --help to see which options are persistent.

Testing the client with a browser

Once your local Ouinet client is running (see above), if you have Firefox installed, you can create a new profile (stored under the ff-profile directory in the example below) which uses the Ouinet client as an HTTP proxy (listening on localhost:8077 here) by executing the following commands on another shell:

mkdir -p ff-profile
env http_proxy=http://localhost:8077 https_proxy=http://localhost:8077 \
    firefox --no-remote --profile ff-profile

Otherwise you may manually modify your browser's settings to make the client (listening on host localhost and port 8077 here) its HTTP and HTTPS/SSL proxy.

Please note that you do not need to change proxy settings at all when using CENO Extension >= v1.4.0 (see below), as long as your client is listening on the default address shown above.

To reduce noise in the client log, you may want to disable Firefox's data collection by unchecking all options from "Preferences / Privacy & Security / Firefox Data Collection and Use", and maybe entering about:config in the location bar and clearing the value of toolkit.telemetry.server. You can also avoid some more noise by disabling Firefox's automatic captive portal detection by changing network.captive-portal-service.enabled to false in about:config.

If security does not worry you for testing, you can avoid even more noise by disabling Safe Browsing under "Preferences / Privacy & Security / Deceptive Content and Dangerous Software Protection" and add-on hotfixes at "Preferences / Add-ons / (gear icon) / Update Add-ons Automatically".

Also, if you want to avoid wasting Ouinet network resources and disk space on ads and similar undesired content, you can install an ad blocker like uBlock Origin.

Once done, you can visit localhost:8078 in your browser and it should show you the client front-end with assorted information from the client and configuration tools:

  • To be able to browse HTTPS sites, you must first install the client-specific CA certificate linked from the top of the front-end page and authorize it to identify web sites. Depending on your browser version, you may need to save it to disk first, then import it from Preferences / Privacy & Security / Certificates / View Certificates… into the Authorities list.

    The Ouinet client acts as a man in the middle to enable it to process HTTPS requests, but it (or a trusted injector when appropriate) still performs all standard certificate validations. This CA certificate is unique to your device.

  • Several buttons near the top of the page look something like this:

    Injector access: enabled [ disable ]
    

    They allow you to enable or disable different request mechanisms to retrieve content:

    • Origin: The client contacts the origin server directly via HTTP(S).
    • Proxy: The client contacts the origin server through an HTTP proxy (currently the configured injector).
    • Injector: The client asks the injector to fetch and sign the content from the origin server, then it starts seeding the signed content to the distributed cache.
    • Distributed Cache: The client attempts to retrieve the content from the distributed cache.

    Content retrieved via the Origin and Proxy mechanisms is considered private and not seeded to the distributed cache. Content retrieved via the Injector and Cache mechanisms is considered public and seeded to the distributed cache.

    These mechanisms are attempted in order according to a (currently hard-wired, customizable in the future) request router configuration. For instance, if one points the browser to a web page which is not yet in the distributed cache, then the client shall forward the request to the injector. On success, (A) the injector will fetch, sign and send the content back to the client and (B) the client will seed the content to the cache.

  • Other information about the cache index is shown next.

Note: For a response to be injected, its request currently needs to carry an X-Ouinet-Group header. The CENO Extension takes care of that whenever browsing in normal mode, and it does not when browsing in private mode. Unfortunately, the Extension is not yet packaged independently and the only way to use it is to clone its repository locally and load it every time you start the browser; to do that, open Firefox's Add-ons window, then click on the gears icon, then Debug Add-ons, then Load Temporary Add-on… and choose the manifest.json file in the Extension's source tree. Back to the Add-ons page, remember to click on CENO Extension and allow Run in Private Windows under Details.
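
If you want to trigger an injection without the extension (e.g. from the command line), a rough sketch is to add the header yourself; the group value below is only an illustrative guess, and the exact format expected by the client and injector may differ:

$ curl -x http://127.0.0.1:8077 -H 'X-Ouinet-Group: example.com' http://example.com/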

After visiting a page with the Origin mechanism disabled and Injector mechanism enabled, and waiting for a short while, you should be able to disable all request mechanisms except for the Cache, clear the browser's cached data, point the browser back to the same page and still get its contents from the distributed cache even when the origin server is completely unreachable.

Using an external static cache

Ouinet supports circulating cached Web content offline as file storage and using a client to seed it back into the distributed cache. Such content is placed in a static cache, which is read-only and consists of two directories:

  • A static cache root or content directory where data files are stored in a hierarchy which may make sense for user browsing.

  • A static cache repository where Ouinet-specific metadata and signatures for the previous content are kept.

To give your client access to a static cache, use the cache-static-root and cache-static-repo options to point to the appropriate directories. If the latter is not specified, the .ouinet subdirectory under the static cache root is assumed.
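
For example, assuming that these configuration options can also be given as command-line arguments (as with other options in this document), a client with access to a static cache might be started like this (all paths are placeholders):

$ <BUILD DIR>/client --repo /path/to/client-repo \
      --cache-static-root /path/to/static-cache \
      --cache-static-repo /path/to/static-cache/.ouinet

The second option may be omitted if the static cache repository sits at the .ouinet subdirectory of the static cache root, as described above.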

Please note that all content in the static cache is permanently announced by the client, and that purging the client's local cache has no effect on the static cache. When cached content is requested from a client, the client first looks up the content in its local cache, with the static cache being used as a fallback.

Any user can create such a static cache as a capture of a browsing session by copying the bep5_http directory of the client's repository as a static cache repository (with an empty static cache root). We recommend that you purge your local cache before starting the browsing session to avoid leaking your previous browsing to other users.
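
A minimal sketch of such a capture (with the client stopped and all paths being placeholders) could be:

$ mkdir -p /path/to/static-root                                       # empty static cache root
$ cp -r /path/to/client-repo/bep5_http /path/to/static-root/.ouinet   # static cache repository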

If you are a content provider in possession of your own signing key, please check the ouinet-inject tool, which allows you to create a static cache from a variety of sources.

Android library and demo client

Ouinet can also be built as an Android Archive library (AAR) to use in your Android apps.

Build requirements

A lot of free space (somewhat less than 15 GiB). Everything else will be downloaded by the build-android.sh script.

The instructions below use Vagrant for building, but the build-android.sh script should work on any reasonably up-to-date Debian-based system.

In the following instructions, we will use <ANDROID> to represent the absolute path to your build directory. That is, the directory from which you will run the build-android.sh script (e.g. ~/ouinet.android.build).

Building

The following instructions will build a Ouinet AAR library and demo client APK package for the armeabi-v7a Android ABI:

host    $ vagrant up --provider=libvirt
host    $ vagrant ssh
vagrant $ mkdir <ANDROID>
vagrant $ cd <ANDROID>
vagrant $ git clone --recursive /vagrant
vagrant $ ./vagrant/scripts/build-android.sh

Note that we cloned a fresh copy of the Ouinet repository at /vagrant. This is not strictly necessary since the build environment supports out-of-source builds; however, it spares you from having to keep your source directory clean and submodules up to date on the host. If you fulfill these requirements, you can just skip the cloning and run /vagrant/scripts/build-android.sh instead.

If you want a build for a different ABI, do set the ABI environment variable:

vagrant $ env ABI=x86_64 /path/to/build-android.sh

In any case, when the build script finishes successfully, it will leave the Ouinet AAR library at build.ouinet/build-android-$ABI/builddir/ouinet/build-android/outputs/aar/ouinet-debug.aar.

Using existing Android SDK/NDK and Boost

By default the build-android.sh script downloads all dependencies required to build the Ouinet Android library, including the Android SDK and NDK. If you already have these installed on your system you can tune the script to use them:

$ export SDK_DIR=/opt/android-sdk
$ export NDK_DIR=/opt/android-sdk/ndk-bundle
$ export ABI=armeabi-v7a
$ /path/to/build-android.sh

Testing with Android emulator

You may also use the build-android.sh script to fire up an Android emulator session with a compatible system image; just run:

host $ /path/to/build-android.sh emu

It will download the necessary files to the current directory (or reuse files downloaded by the build process, if available) and start the emulator. Please note that downloading the system image may take a few minutes, and booting the emulator for the first time may take more than 10 minutes. In subsequent runs, the emulator will just recover the snapshot saved on last quit, which is much faster.

The ABI environment variable described above also works for selecting the emulator architecture:

host $ env ABI=x86_64 /path/to/build-android.sh emu

You may also set EMULATOR_API to start a version of Android different from the minimum one supported by Ouinet:

host $ env EMULATOR_API=30 /path/to/build-android.sh emu  # Android 11

You may pass options to the emulator at the script's command line, after a -- (double dash) argument. For instance:

host $ /path/to/build-android.sh emu -- -no-snapshot-save

Some useful options include -no-snapshot, -no-snapshot-load and -no-snapshot-save. See emulator startup options for more information.

While the emulator is running, you may interact with it using ADB, e.g. to install the APK built previously. See the script's output for particular instructions and paths.
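
For instance, installing the demo client APK into the running emulator might look like this (the path is a placeholder; check the build script's output for the actual location):

host $ adb install -r <PATH TO DEMO CLIENT APK>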

Running the Android emulator under Docker

The Dockerfile.android-emu file can be used to setup a Docker container able to run the Android emulator. First create the emulator image with:

$ sudo docker build -t ouinet:android-emu - < Dockerfile.android-emu

Then, if $SDK_PARENT_DIR is the directory where you want Ouinet's build script to place Android SDK downloads (so that you can reuse them between container runs or from an existing Ouinet build), you may start a temporary emulator container like this:

$ sudo docker run --rm -it \
      --device /dev/kvm \
      --mount type=bind,source="$(realpath "$SDK_PARENT_DIR")",target=/mnt \
      --mount type=bind,source=$PWD,target=/usr/local/src,ro \
      --mount type=bind,source=/tmp/.X11-unix/X0,target=/tmp/.X11-unix/X0 \
      --mount type=bind,source=$HOME/.Xauthority,target=/root/.Xauthority,ro \
      -h "$(uname -n)" -e DISPLAY ouinet:android-emu

The --device option is only needed to emulate an x86_64 device.

Please note how the Ouinet source directory as well as the X11 socket and authentication cookie database are mounted into the container to allow showing the emulator's screen on your display (without giving access to it to everyone via xhost -- this is also why the container has the same host name as the Docker host).

Once in the container, you may run the emulator like this:

$ cd /mnt
$ /usr/local/src/scripts/build-android.sh bootstrap emu &

You can use adb inside of the container to install packages into the emulated device.

Integrating the Ouinet library into your app

In order for your Android app to access the resources it needs using the HTTP protocol over Ouinet, thus taking advantage of its caching and distributed request handling, you need to take a few simple steps.

Here we assume that the app is developed in the Android Studio environment, and that <PROJECT DIR> is your app's project directory.

Option A: Get Ouinet from Maven Central

Select the Ouinet version according to your app's ABI (we officially support ouinet-armeabi-v7a, ouinet-arm64-v8a and omni, which includes all the supported ABIs plus x86_64), and also add ReLinker as a dependency in <PROJECT DIR>/app/build.gradle:

dependencies {
    //...
    implementation 'ie.equalit.ouinet:ouinet-armeabi-v7a:0.20.0'
    implementation 'com.getkeepsafe.relinker:relinker:1.4.4'
}

Check that Maven Central is added to the list of repositories used by Gradle:

allprojects {
    repositories {
        // ...
        mavenCentral()
    }
}

Now the Ouinet library will be automatically fetched by Gradle when your app is built.

Option B: Use your own compiled version of Ouinet

First, you need to compile the Ouinet library for the ABI environment you are aiming at (e.g. armeabi-v7a or x86_64) as described above. After the build-android.sh script finishes successfully, you can copy the ouinet-debug.aar file to your app's libs folder:

$ cp /path/to/ouinet-debug.aar <PROJECT DIR>/app/libs/

Then look for the following section of your <PROJECT DIR>/build.gradle:

allprojects {
  repositories {
    // ...
  }
}

And add this:

flatDir {
  dirs 'libs'
}
mavenCentral()  // for ReLinker

Then look for the following section of your <PROJECT DIR>/app/build.gradle:

dependencies {
  // ...
}

And add these:

implementation 'com.getkeepsafe.relinker:relinker:1.4.4'
implementation(name:'ouinet-debug', ext:'aar')

Initialize Ouinet

At this stage your project should compile with no errors. Now to tell Ouinet to take over the app's HTTP communications, in the MainActivity.java of your app import Ouinet:

import ie.equalit.ouinet.Ouinet;

Then add a private member to your MainActivity class:

private Ouinet ouinet;

And in its onCreate method instantiate the Ouinet object (using the BEP5/HTTP cache):

Config config = new Config.ConfigBuilder(this)
            .setCacheType("bep5-http")
            .setCacheHttpPubKey(<CACHE_PUB_KEY>)
            .setInjectorCredentials(<INJECTOR_USERNAME>:<INJECTOR_PASSWORD>)
            .setInjectorTlsCert(<INJECTOR_TLS_CERT>)
            .setTlsCaCertStorePath(<TLS_CA_CERT_STORE_PATH>)
            .build();

ouinet = new Ouinet(this, config);
ouinet.start();

From now on, all of the app's HTTP communication will be handled by Ouinet.

Please note that if you plan to use a directory for Ouinet's static cache in your application (by using ConfigBuilder's setCacheStaticPath() and setCacheStaticContentPath()), then besides the permissions declared by the library in its manifest, your app will need the READ_EXTERNAL_STORAGE permission (Ouinet will not attempt to write to that directory).

Integration Examples

You can find additional information and samples of Android applications using Ouinet in the following repository: equalitie/ouinet-examples.


ouinet's Issues

Update android build to use gradle 7

Building the ouinet AAR currently requires Gradle 6.0 (released in 2019). It should be updated to the latest stable version of Gradle (currently 7.5.1). It will likely be easier to first update from 6.0 to 7.0, then from 7.0 to 7.5.1.

Store HTTP response headers and body separately

This is a followup of #1, which covers two different subjects. This particular issue is about splitting the to-be-cached HTTP response into two pieces (response headers and response body) and storing them separately.

As indicated in #1:

This may help reuse the distributed cache when the same document data is uploaded or requested on different occasions (e.g. with changing caching headers) or via completely different applications using the same storage backend.

From @inetic:

I kind of see the point in [splitting the header and body into different pieces], e.g. some app could store a raw cat.jpg picture into the cache and fetch it without the header. On the other hand such app could easily download it with the header in a same manner as it would if it was downloading it using HTTP. Another argument against this could be that it (likely) takes longer to search into the DHT for two items than for just one.

From @ivilata:

[…] by uploading content as is we don't force other apps to use the HTTP-like (or any other) encoding. As for doubling the number of requests to the DHT, I'd expect for its cost to be overtaken by IPFS DHT queries to fetch the body. Also, if we have an actual browser with its own cache using the client, it may try actual HTTP HEAD requests beforehand which may result in less and smaller transfers (just the head).

Regarding [splitting the header and body into different pieces], by uploading content as is we don't force other apps to use the HTTP-like (or any other) encoding. As for doubling the number of requests to the DHT, I'd expect for its cost to be overtaken by IPFS DHT queries to fetch the body. Also, if we have an actual browser with its own cache using the client, it may try actual HTTP HEAD requests beforehand which may result in less and smaller transfers (just the head). […] Also, please note that when several requests map to the same content (e.g. because the server ignores or lacks most accepted languages), several clients which used different canonical requests may still provide the content to others, but only as long as head and body are stored separatedly […].

Better handling of non OK HTTP status codes in injector

At the moment, when the injector receives a message from the origin, it only checks the error code. But this error code has nothing to do with the response HTTP status code.

We need to check that, and handle non OK responses appropriately.

Sample 304 response I found in the cache:

HTTP/1.1 304 Not Modified
Content-Type: image/png
Last-Modified: Tue, 03 Jan 2017 21:29:30 GMT
Cache-Control: max-age=1661425
Expires: Tue, 02 Jan 2018 16:25:41 GMT
Date: Thu, 14 Dec 2017 10:55:16 GMT
Connection: keep-alive
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *

Segfault when client or injector exits with error

With master commit 4c86c44, after building with ./build-ouinet-local.sh in the Vagrant VM, if I enter ouinet-local-build and run either ./client --help or ./injector --help, the program always terminates with a segmentation fault. It seems to happen every time that code in main() does return 1.

make error z-lib related, on ubuntu 20.04

web@racknerd-9e3111:~/oui$ make
[  2%] Built target uri
[  4%] Built target json
[  6%] Built target built_boost
[  6%] Built target boost_asio
[  6%] Built target configfiles
[  7%] Built target boost_asio_ssl
[  9%] Built target zdnsparser
[ 10%] Built target cpp_upnp
[ 12%] Built target gpg_error
[ 14%] Built target gcrypt
[ 16%] Built target golang
[ 17%] Performing download step (download, verify and extract) for 'zlib-project'
-- verifying file...
       file='/home/web/oui/src/ouiservice/i2p/i2pd/build/zlib/src/zlib-1.2.11.tar.gz'
-- SHA256 hash of
    /home/web/oui/src/ouiservice/i2p/i2pd/build/zlib/src/zlib-1.2.11.tar.gz
  does not match expected value
    expected: 'c3e5e9fdd5004dcb542feda5ee4f0ff0744628baf8ed2dd5d66f8ca1197cb1a1'
      actual: 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
-- File already exists but hash mismatch. Removing...
-- Downloading...
   dst='/home/web/oui/src/ouiservice/i2p/i2pd/build/zlib/src/zlib-1.2.11.tar.gz'
   timeout='none'
-- Using src='https://zlib.net/zlib-1.2.11.tar.gz'
-- Retrying...
-- Using src='https://zlib.net/zlib-1.2.11.tar.gz'
-- Retry after 5 seconds (attempt #2) ...
-- Using src='https://zlib.net/zlib-1.2.11.tar.gz'
-- Retry after 5 seconds (attempt #3) ...
-- Using src='https://zlib.net/zlib-1.2.11.tar.gz'
-- Retry after 15 seconds (attempt #4) ...
-- Using src='https://zlib.net/zlib-1.2.11.tar.gz'
-- Retry after 60 seconds (attempt #5) ...
-- Using src='https://zlib.net/zlib-1.2.11.tar.gz'
CMake Error at zlib-project-stamp/download-zlib-project.cmake:159 (message):
  Each download failed!

    error: downloading 'https://zlib.net/zlib-1.2.11.tar.gz' failed
         status_code: 22
         status_string: "HTTP response code said error"
         log:
         --- LOG BEGIN ---
           Trying 85.187.148.2:443...

  TCP_NODELAY set

  Connected to zlib.net (85.187.148.2) port 443 (#0)

  ALPN, offering h2

  ALPN, offering http/1.1

  successfully set certificate verify locations:

    CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs

  [5 bytes data]

  TLSv1.3 (OUT), TLS handshake, Client hello (1):

  [512 bytes data]

  [5 bytes data]

  TLSv1.3 (IN), TLS handshake, Server hello (2):

  [122 bytes data]

  [5 bytes data]

  [5 bytes data]

  [1 bytes data]

  ............ similar stuff about tls and bytes data until the log ends
  
         --- LOG END ---




make[2]: *** [src/ouiservice/i2p/i2pd/build/CMakeFiles/zlib-project.dir/build.make:91: src/ouiservice/i2p/i2pd/build/zlib/src/zlib-project-stamp/zlib-project-download] Error 1
make[1]: *** [CMakeFiles/Makefile2:706: src/ouiservice/i2p/i2pd/build/CMakeFiles/zlib-project.dir/all] Error 2
make: *** [Makefile:152: all] Error 2

any fix for this?

Add support for publishToMavenLocal to android build

Currently, there is no way to easily publish the ouinet AAR to a local maven repository. This will be better (more standard than a manual copy) for the F-Droid release of CENO, since we need to build and publish all AARs locally.

Set up server

  • Create server
  • Grant access to Ilya and Jump Servers
  • Build ceno-client
  • Create CENO injector
  • Add Webrecorder (deets? subtask?)

Do not cache hop-by-hop HTTP response headers

As explained here, hop-by-hop response headers should not be cached, as they only refer to a single transport-level connection. Unfortunately, the list of preserved headers here includes Connection and Transfer-Encoding.

We should check, before removing Transfer-Encoding from the list, that receiving (say) a text file with Transfer-Encoding: gzip does not result in the body being stored with Gzip compression. Removing Connection is probably safe.

Injector not publishing IPNS until insertion

As of 0200905, the injector doesn't publish the last stored IPNS->IPFS mapping on start, and it only does when it actually gets a request that does trigger an insertion.

I start the injector, which had already published an IPNS->IPFS mapping in previous runs and was stored in REPO/ipfs/ipfs_cache_db.Qm…, but it never prints the Publishing DB message, and accessing https://ipfs.io/ipns/Qm… says that it can't resolve the name (Qm… is the injector's IPNS name).

After I visit a page which gets inserted, the injector does publish the IPNS->IPFS mapping and the link above works. I think automatic publication of the db on start worked some time ago.

Remove need for build-android.sh

It should be possible to build the ouinet AAR without the need for a shell script that wraps the gradle scripts. Gradle is a fully featured scripting language intended specifically for compiling code; there should not be any need to wrap it in a shell script. We might still want portions of the "bootstrap" task from the shell script, but that would only need to be run once when first setting up a development machine.

  • Compile target and min sdks should move to a buildSrc or a plugin dependencies directory, there are some notes on the sdk variables in android-sdk-versions

  • Build and publish all abis at the same time by default, no need to specify or loop over the script, unless you want to.

  • Get versionNumber from version.txt, this is already done in one part of the gradle scripts, but not others.

  • buildId, just equals branch name, probably easily write this into gradle

  • The setting below could possibly be set elsewhere, e.g. in local properties? or gradle properties.

--project-dir="${ROOT}"/android \
--gradle-user-home "${DIR}"/_gradle-home \
--project-cache-dir "${GRADLE_BUILDDIR}"/_gradle-cache \

Use canonical HTTP requests instead of URLs as db indexes

This is a followup of #1, which covers two different subjects. This particular issue is about replacing the URL as a handle for cached content with a richer object including information from request headers, i.e. a simplified or canonical version of the HTTP request.

As indicated in #1:

Actually, since different request headers may cause different responses and documents, we may not use the URL as an index, but rather the hash of the request itself after putting it into some "canonical" form. […] Injector injects [hash of canonical request]. When requesting a URL, the client constructs the canonical request again, hashes it, and looks up [the document]. […] This storage format also avoids enumerating the URLs stored by ipfs-cache, unless the client or injector also upload QmBLAHBLAH… to IPFS, of course.

From @inetic:

About [what to include in the key], it's probably a very good idea to support multiple languages, but I think the number of variables in the key should be limited as much as possible. It's because with each such variable the number of keys per URL grows exponentially. This would (a) make the database huge and (b) would (also exponentially) decrease the number of peers in a swarm corresponding to any particular key. […] Does it make sense to store that the requester asked for HTTP/1.1? Are there modern browsers that don't support compression? Do we care about the order of requester's language preference? Do we want two separate swarms for en-US and en with k and l peers respectively, or do we prefer one big swarm with k+l peers? Do we care about the 'q' parameters? Given that we know that example.com/foo.html has mime type text/html, do we need to store that the client would have accepted other types as well?

Lastly, I think the main reason to hash the keys would be to obfuscate the content. Thus it wouldn't be trivially possible to see what's stored in the database. On the other hand it would still be possible just by fetching the values from ipfs, or guessing. I'm not totally convinced we need that, but I'm not against either, perhaps we need to list more pros and cons and make a consensus in the team. Also, there is still the chance that we'll be able to persuade the guys from IPFS to add salt to their mutable DHT data as BitTorrent does. In such case we wouldn't even need the database.

In the mean time, we could encode the keys in a similar way you suggested by concatenating all the important variables in a string, separating them with a colon. E.g.: GET:http://example.com/foo.html?bar=baz:en

From @ivilata:

Regarding [what to include in the key], I acknowledge that the devil is in the details and we should go over HTTP request headers to choose which ones to include and how to preprocess their values to avoid an explosion of keys while not discriminating some users (e.g. language-wise). I just kept the 3 ones which I think may affect the actual content returned by the origin server, but careful review is needed. We cannot skip headers like [Accept] (or their values) since the client needs to know the canonical request before getting the answer from the server (e.g. to get content from the cache). […]

Regarding (hashing the keys), hashing is specially useful in this specific proposal since using the whole request as an index would make the db way bigger. Yes it practically obfuscates the index of the db but if the owner of an injector would like to know what it is storing, the injector could as well store the request itself (locally or in IPFS, which should map to the key which appears in the index — ideally).

[…] if Accept-Language includes (say) French and English, we really cannot know what the Language of the response will be until we have the actual response from the server. Thus, the only way to reduce Accept-Language in the canonical request to the actual value of Language from the response would be for the injector to compute it post facto.

Now imagine that the server returned a page in English. If the same or a different client wanted to retrieve the page (with the same FR-EN preference) and it wasn't able to reach the origin (nor the injector), when canonicalizing the request on its own, if the process just kept French (1st lang preference) in Accept-Language, it's pre facto version of the request wouldn't match the injector's post facto version and the client wouldn't be able to retrieve a page which was actually in the distributed cache.

One solution to this is to have a clear canonicalization process which happens pre facto at the client side, so that an injector just checks that its format is ok and forwards it to the origin.

[…] That's the point where we must strike a balance between diversity (pushing for more/richer headers, e.g. keeping multiple entries in Accept-Language, possibly with country hints) and swarmability/privacy (pushing for less/simpler headers, e.g. having a single, language-only Accept-Language or even none). Maybe there could be a configurable "privacy level" (or its inverse) where a user could progressively toggle content customization options (language, encoding, etc.) to get different levels of privacy, customization or swarmability. It would affect which headers would be included in the request and their richness, but in any case the rules used to canonicalize these headers should be clear.

From @inetic:

If we don't hash the canonized requests, then the client could apply its own logic for choosing a language.

E.g. say that the database contained entries:

GET:http://example.com/foo.html?bar=baz:en
GET:http://example.com/foo.html?bar=baz:fr
GET:http://example.com/foo.html?bar=baz:es

and the user would send a request with Accept-Language first fr and then en. The client would in such case be able to sort these entries and return the fr version first. Granted that this could get more complicated if we start to require sorting by multiple parameters, though I'd say its still preferable to spend CPU cycles on users's device than reduce swarm sizes.

For the argument of hashing the canonized request to compress the keys, I think actually compressing the database before it's put into IPFS may be a better approach (or perhaps IPFS already does so?).

Client gets stuck on boot

After building using build-ouinet.sh (using commit 2cda0c0), I run:

$ ouinet-build/client --repo ouinet/repos/client \
  --injector-ep INJECTOR_IP:INJECTOR_PORT \
  --injector-ipns INJECTOR_IPNS

The client shows this and then gets stuck at that point:

Default RLIMIT_NOFILE value is: 1024
RLIMIT_NOFILE value changed to: 4096

netstat shows no open ports for the process (which according to PS has reserved 20000 TiB virtual space), it uses no CPU and attaching to it with GDB shows:

#3  0x000056406e886ccb in boost::asio::detail::posix_event::waity<boost::asio::detail::scoped_lock<boost::asio::detail::posix_mutex> > (this=0x611000000318, lock=...)
    at /usr/include/boost/asio/detail/posix_event.hpp:106

The injector is working (it replies to proxy requests).

Canonicalize URL used as db index key

Currently we lack a canonical format for URLs used as keys. For instance, one browser may request http://foo.bar/foo-bar and another one http://foo.bar/foo%2dbar, which are the same URL, but since we don't try to put them in a single format, they would get injected under two different keys.

This applies both to IPFS and BEP44.

Hard-coded B-tree index for /api/descriptor

When retrieving a URI descriptor from the client front end, B-tree index is always used as hard-coded here, which leads to the lookup failing instantly when using the BEP44 index in the client.

It should instead get the used index from the client configuration.

Desktop client ignores "--injector-credentials" option

When testing commit 276c22d on GNU/Linux Docker with credentials in the injector, adding the exact same value for the injector-credentials in ouinet-client.conf doesn't seem to have effect in the client, and the browser keeps on asking authentication for the proxy until it is entered. Completely disabling the option in the client yields the same result. After the correct credentials are entered at the browser, everything seems to work as expected.

Injector crashes when listening on TCP

When running the command ./injector --repo ../repos/injector --listen-on-tcp 127.0.0.1:8080 --listen-on-i2p false from the build directory under a checkout of commit ef66c0dd from master, with an empty repo, I get this crash:

Default RLIMIT_NOFILE value is: 1024
RLIMIT_NOFILE value changed to: 32768
generating 2048-bit RSA keypair...done
peer identity: […]
Swarm listening on […]
Warning: Couldn't open ../repos/injector/ipfs/ipfs_cache_db.[…].json
IPNS DB: […]
=================================================================
==3619==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7ffc5ab9a7e0 at pc 0x55968d19840e bp 0x6310000382d0 sp 0x6310000382c8
READ of size 28 at 0x7ffc5ab9a7e0 thread T0
    #0 0x55968d19840d in boost::asio::ip::detail::endpoint::endpoint(boost::asio::ip::detail::endpoint const&) /usr/include/boost/asio/ip/detail/endpoint.hpp:48
    #1 0x55968d1b8928 in boost::asio::ip::basic_endpoint<boost::asio::ip::tcp>::basic_endpoint(boost::asio::ip::basic_endpoint<boost::asio::ip::tcp> const&) /usr/include/boost/asio/ip/basic_endpoint.hpp:97 
    #2 0x55968d162af2 in operator() /home/ivan/vc/git/ouinet/src/injector.cpp:438
    #3 0x55968d179102 in operator() /usr/include/boost/asio/impl/spawn.hpp:273
    #4 0x55968d177bbd in run /usr/include/boost/coroutine/detail/push_coroutine_object.hpp:293
    #5 0x55968d17724d in trampoline_push_void<boost::coroutines::detail::push_coroutine_object<boost::coroutines::pull_coroutine<void>, void, boost::asio::detail::coro_entry_point<boost::asio::detail::wrapped_handler<boost::asio::io_service::strand, void (*)(), boost::asio::detail::is_continuation_if_running>, main(int, char**)::<lambda(boost::asio::yield_context)> >&, boost::coroutines::basic_standard_stack_allocator<boost::coroutines::stack_traits> > > /usr/include/boost/coroutine/detail/trampoline_push.hpp:70
    #6 0x7f8419cf8f7a in make_fcontext (/lib/x86_64-linux-gnu/libboost_context.so.1.62.0+0xf7a)

Address 0x7ffc5ab9a7e0 is located in stack of thread T0 at offset 1440 in frame
    #0 0x55968d163223 in main /home/ivan/vc/git/ouinet/src/injector.cpp:340

  This frame has 50 object(s):
    […]
    [1440, 1468) 'injector_ep' <== Memory access at offset 1440 is inside this variable
    […]
SUMMARY: AddressSanitizer: stack-use-after-scope /usr/include/boost/asio/ip/detail/endpoint.hpp:48 in boost::asio::ip::detail::endpoint::endpoint(boost::asio::ip::detail::endpoint const&)
[…]
==3619==ABORTING

The program dies with exit code 1. Running the command again (supposedly now with an existing IPFS repo) crashes in the same way.

I traced the error back to commit 7c3ca08 (same command without --listen-on-i2p option), i.e. the crash is present in that and later commits but not in the previous commit 565c33b and older.

Testing for staleness

We need a function which takes the header of a response and outputs whether that response can still be shown to the user. This function shall be used in both the client and the injector.

Ouinet AAR reports its version name differently after update to Gradle 7

After resolving #51, I noticed that the version name reported by the Ouinet library when it is included in the Ceno Browser is different.

Previous versions reported like so,

0.21.5 release master

Now, it is reported as,

0.21.6 RelWIthDebInfo master

The only place where I know this string is seen is in the Ceno extension settings.

I'm not sure why this changed, but I'm guessing it is related to the update to Gradle 7. Maybe a taskname changed and now the function for generating this string gets this strange RelWIthDebInfo name. It's not a huge issue, but it also probably has a simple fix, though maybe we don't even care to fix it?

Something is leaking memory

When we leave injector running for a long time, it starts to accumulate memory allocations and eventually crashes.

Not sure yet whether the leak is in GNUnet, gnunet-channels, IPFS, ipfs-cache or ouinet itself.

Upload HTTP headers and document data separately

While testing ouinet with the browser as indicated in the readme, I accessed the IPFS db and then the data for one random link. I checked the downloaded data for the link and I saw what looked like an HTTP response capture, i.e. HTTP response and headers followed by the document body.

I know this is not the final implementation, but I was wondering whether it would be worth splitting HTTP response+headers from data. This may help reuse the distributed cache when the same document data is uploaded or requested on different occasions (e.g. with changing caching headers) or via completely different applications using the same storage backend.

For instance, instead of mapping URL->IPFS_HASH, e.g.

  • "http://example.com/": "COMBINED_HASH"

we could hash both {HEAD|BODY}:URL->IPFS_HASH, e.g.

  • "HEAD:http://example.com/": "IPFS_HASH('HTTP/1.1 200 OK…')"
  • "BODY:http://example.com/": "IPFS_HASH('<html …')"

Actually, since different request headers may cause different responses and documents, we may not use the URL as an index, but rather the hash of the request itself after putting it into some "canonical" form. For instance:

  1. Initial request:

    GET /foo.html HTTP/1.1
    Host: example.com
    User-Agent: Mozilla/5.0 (X11; Linux x86_64…)
    Accept: text/html,application/xhtml+xm…plication/xml;q=0.9,*/*;q=0.8
    Accept-Language: en-US;q=0.7,en;q=0.3
    Accept-Encoding: gzip, deflate
    Connection: keep-alive
    Upgrade-Insecure-Requests: 1
    

    Please note that Host: is required in order to differentiate requests to different sites, since we have no actual DNS resolving to IP going on.

  2. Canonical request (same for client and injector):

    GET /foo.html HTTP/1.1
    Accept: text/html,application/xhtml+xm…plication/xml;q=0.9,*/*;q=0.8
    Accept-Encoding: gzip, deflate
    Accept-Language: en-US;q=0.7,en;q=0.3
    Host: example.com
    
  3. SHA256 multihash of canonical request (same for client and injector):

    QmBLAHBLAH…
    
  4. Reply from server:

    HTTP/1.1 200 OK
    Last-Modified: Tue, 03 Oct 2017 16:36:10 GMT
    Content-Type: text/html
    Content-Length: 4242
    …
    
    <BODY>
    
  5. Injector injects:

    "HEAD:QmBLAHBLAH…": "HASH_OF_REPLY_HEAD"
    "BODY:QmBLAHBLAH…": "HASH_OF_REPLY_BODY" (if any, e.g. HTTP HEAD has no body)
    

When requesting a URL, the client constructs the canonical request again, hashes it, and looks up HEAD:HASH or BODY:HASH.

This storage format also avoids enumerating the URLs stored by ipfs-cache, unless the client or injector also upload QmBLAHBLAH… to IPFS, of course.

One open issue with this encoding is whether HTTPS should be handled in some special way.

Unable to build armeabi-v7a with min API 16

While preparing release v0.21.7, I noticed that when building for the armeabi-v7a target, I get the following error,

ouinet/android/ouinet/src/main/java/ie/equalit/ouinet/OuinetBackground.kt:230: Error: Call requires API level 19 (current min is 16): android.app.ActivityManager#clearApplicationUserData [NewApi]
            am.clearApplicationUserData()
               ~~~~~~~~~~~~~~~~~~~~~~~~

Lint found errors in the project; aborting build.

Fix the issues identified by lint, or add the following to your build script to proceed with errors:
...
android {
    lintOptions {
        abortOnError false
    }
}
...

FAILURE: Build failed with an exception.

It seems that this clearApplicationUserData method added as part of #60 isn't supported before API level 19, but the min level for the armeabi-v7a build is 16.

@mhqz, how would you like me to resolve this? Should I just move the min API up to 19? I'm not sure that anyone is building an application with Ouinet that has a min API lower than 19 (Ceno's min API is 21). What's more, the min APIs for all the other ABIs is 21, so maybe we could just make them all the same.

Or should I come up with a work around to avoid calling this method in older android versions?

"Connection: close" header in HTTP GET request results in "502 Bad Gate way error"

This is to reproduce:

$ curl  -x http://127.0.0.1:8081/ http://127.0.0.1:8080/?content=aoueuaou
<html><body>TESTPAGE</body></html>
$ curl --header "Connection: close" -I -x http://127.0.0.1:8081/ http://127.0.0.1:8080/?content=aoueuaou
HTTP/1.1 502 Bad Gateway
Server: Ouinet
Connection: close

I have tracked it down to this line:

if (ec) return or_throw(yield, ec, move(res));

It seems that even though Beast returns an

end of stream

error, it doesn't mean an error:

boostorg/beast#1023

So the injector shouldn't freak out over every error; some errors are not actually errors.

Do not require GNUnet when not using it

As indicated in #6, not running GNUnet services makes the client hang during boot. Forcing users to start the services when not using it is involved since a different, concurrent shell must be run for executing the start-gnunet-*.sh script.

It would be nice to avoid the client from using GNUnet if no such endpoints are being used in its configuration.
