
tiktorch's Introduction

ilastik logo

ilastik

The interactive learning and segmentation toolkit

CircleCI AppVeyor Codecov Image.sc forum Code style: black

Leverage machine learning algorithms to easily segment, classify, track and count your cells or other experimental data. Most operations are interactive, even on large datasets: you just draw the labels and immediately see the result. No machine learning expertise required.

Screenshot

See ilastik.org for more info.


Installation

Binary installation

Go to the download page, get the latest non-beta version for your operating system, and follow the installation instructions. If you are new to ilastik, we suggest starting with the pixel classification workflow. If you don't have a dataset to work with, download one of the example projects to get started.

Conda installation (experimental)

ilastik is also available as a conda package on our ilastik-forge conda channel. We recommend using mamba instead of conda, for faster installation:

mamba create -n ilastik --override-channels -c pytorch -c ilastik-forge -c conda-forge ilastik

# activate ilastik environment and start ilastik
conda activate ilastik
ilastik

Python compatibility notes

Versions of ilastik up to 1.4.1b2 are based on, and only compatible with, Python 3.7. Starting from ilastik 1.4.1b3, ilastik environments can be created with Python versions 3.7 to 3.9. A limitation when staying on Python 3.7: please use a version of tifffile >2020.9.22,<=2021.11.2 (see also the note in environment-dev.yml).

Usage

ilastik is a collection of workflows, designed to guide you through a sequence of steps. You can select a new workflow, or load an existing one, via the startup screen. The specific steps vary between workflows, but there are some common elements like data selection and data navigation. See more details on the documentation page.

Support

If you have a question, please create a topic on the image.sc forum. Before doing that, search for similar topics first: maybe your issue has already been solved! You can also open an issue here on GitHub for a technical bug report and/or feature suggestion.

Contributing

We always welcome good pull requests! If you just want to suggest a documentation edit, you can do this directly here, on GitHub. For more complex changes, see CONTRIBUTING.md for details.

License

GPL

tiktorch's People

Contributors

chaubold, constantinpape, emilmelnikov, fynnbe, johanneshugger, k-dominik, m-novikov, nasimrahaman, phhere, tomaz-vieira


tiktorch's Issues

Migrate to gRPC

We should use gRPC for our RPC.

Why?

Pros:

  • Supported in all major programming languages; if somebody wants to connect to the tiktorch server from Java, it would be easy to implement.
  • It uses HTTP/2 for communication, which means we only need ports 80/443 open on the server machine.
  • Provides auth, encryption, and compression out of the box.
  • Allows data streaming (useful for use cases such as live memory usage, loss function values, or the number of iterations passed).

Cons:

  • Lower performance than a fine-tuned ZMQ implementation
  • Requires a grpc/protobuf dependency (protobuf is already present in ilastik)

What's wrong with our current setup (zmq)?

While it gave us good insight into how we want our RPC to look, implementing yet another RPC protocol is not the goal of this project.

Why not plain http?

Unlike plain HTTP, gRPC lets us define strict message types (protobuf) for communication, so no manual parsing is required, and it makes client generation for multiple languages easy.
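
As a sketch of what this strict typing looks like, a minimal proto3 definition might resemble the following. The service, message, and field names here are guesses based on the tracebacks quoted further down this page (`CreateModelSessionRequest`, `model_blob`, `deviceIds`); this is not the actual tiktorch proto file:

```proto
// Hypothetical sketch, not the real tiktorch .proto definition.
syntax = "proto3";

service Inference {
  rpc CreateModelSession (CreateModelSessionRequest) returns (ModelSession) {}
}

message Blob {
  bytes content = 1;
}

message CreateModelSessionRequest {
  Blob model_blob = 1;
  repeated string deviceIds = 2;
}

message ModelSession {
  string id = 1;
}
```

From a definition like this, `protoc` generates typed client and server stubs for each target language, which is exactly the "no manual parsing" benefit named above.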

Storing label names in the model config

For convenience, it would be nice to store label names inside the model config so that users get meaningful names in the label widget:

1. Background
2. Mitochondria
3. Membrane

Blocks: #55 (we compute the number of labels in the loaded model)
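
A minimal sketch of how this could work, with a graceful fallback when a model config has no names. The key name `label_names` is an illustration, not an agreed-upon spec field:

```python
# Hypothetical sketch: "label_names" is a made-up config key, not part of
# any finalized model spec.
model_config = {
    "num_classes": 3,
    "label_names": ["Background", "Mitochondria", "Membrane"],
}

def get_label_names(config):
    """Return configured label names, or generic fallbacks."""
    names = config.get("label_names")
    if names is None:
        names = [f"Label {i + 1}" for i in range(config["num_classes"])]
    return names

print(get_label_names(model_config))        # ['Background', 'Mitochondria', 'Membrane']
print(get_label_names({"num_classes": 2}))  # ['Label 1', 'Label 2']
```

The fallback path is why this blocks on #55: without configured names, the widget still needs the label count from the loaded model.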

Reloading data after loading a model gives a silent crash on the ilastik side

reloading_data_crash

ERROR 2020-07-03 16:13:26,723 classifier 14043 140383719229184 predict tile shape: (1, 256, 256, 1) (axistags: z y x c)
ERROR 2020-07-03 16:13:26,725 classifier 14043 140383719229184 Predict call failed
Traceback (most recent call last):
  File "/home/zinchenk/software/ilastik-1.4.0b5-Linux/ilastik-meta/ilastik/lazyflow/operators/tiktorch/classifier.py", line 161, in predict
    resp = resp.result()
  File "/home/zinchenk/software/ilastik-1.4.0b5-Linux/lib/python3.7/site-packages/grpc/_channel.py", line 295, in result
    raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
	status = StatusCode.FAILED_PRECONDITION
	details = "model-session with id d9bfef0ccefc4c198ef713e486719036 doesn't exist"
	debug_error_string = "{"created":"@1593785606.725321260","description":"Error received from peer ipv4:127.0.0.1:5567","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"model-session with id d9bfef0ccefc4c198ef713e486719036 doesn't exist","grpc_status":9}"
>

Disable SO_REUSEPORT to avoid warnings on startup

Currently, starting the server produces the following warning:

E0313 10:32:25.536425704   10310 socket_utils_common_posix.cc:201] check for SO_REUSEPORT: {"created":"@1584091945.536417597","description":"SO_REUSEPORT unavailable on compiling system","file":"src/core/lib/iomgr/socket_utils_common_posix.cc","file_line":169}

ref #103

Server configuration UI

I propose to move the server configuration from the project properties to the ilastik config file (still configured through the UI).
My assumption here is that a user typically has a single ilastik installation that talks to one predefined server: they configure it once and then use it for all projects.

For advanced users who want more than one server configuration, we'll allow overriding the server config through command-line flags (e.g. the user would have multiple launchers: Ilastik Local, Ilastik Remote EMBL, Ilastik Remote AWS).
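
The config-file-plus-override idea can be sketched with the standard library. The `[tiktorch]` section name, `--server` flag, and addresses are all assumptions for illustration:

```python
# Sketch: read the server address from an ilastik-style config file, but let
# a command-line flag override it. Section name, flag name, and addresses
# are made up for this example.
import argparse
import configparser

def resolve_server_address(argv, config_text):
    config = configparser.ConfigParser()
    config.read_string(config_text)
    default = config.get("tiktorch", "address", fallback="127.0.0.1:5567")

    parser = argparse.ArgumentParser()
    parser.add_argument("--server", default=default,
                        help="override the configured tiktorch server")
    args = parser.parse_args(argv)
    return args.server

config_text = "[tiktorch]\naddress = gpu7.example.org:5567\n"
print(resolve_server_address([], config_text))                        # configured default
print(resolve_server_address(["--server", "127.0.0.1:5567"], config_text))  # override wins
```

Each "Ilastik Remote ..." launcher icon would then just pass a different `--server` value to the same installation.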

Device is already in use

No job is running, but I got this error:

Starting server on 127.0.0.1:5567
01:21:59.654 [MainProcess/ThreadPoolExecutor-0_0] INFO Created session ee970ab64d6d4c2586824229be6e9cc0
01:21:59.654 [MainProcess/ThreadPoolExecutor-0_0] DEBUG Registered close handler <bound method _Lease.terminate of <tiktorch.server.device_pool._Lease object at 0x000001DF73655940>> for session ee970ab64d6d4c2586824229be6e9cc0
01:21:59.654 [MainProcess/ThreadPoolExecutor-0_0] DEBUG Registered close handler <tiktorch.rpc.mp.create_client.<locals>._make_method.<locals>.MethodWrapper object at 0x000001DF73678160> for session ee970ab64d6d4c2586824229be6e9cc0
01:22:01.994 [ModelSessionProcess/ModelThread] INFO Starting session worker
01:22:01.994 [ModelSessionProcess/ModelThread] DEBUG Set new state State.Paused
01:23:42.568 [MainProcess/ThreadPoolExecutor-0_0] ERROR Exception calling application: Device cuda:0 is already in use
Traceback (most recent call last):
  File "C:\Anaconda3\envs\tiktorch-server-env\lib\site-packages\grpc\_server.py", line 434, in _call_behavior
    response_or_iterator = behavior(argument, context)
  File "C:\Anaconda3\envs\tiktorch-server-env\lib\site-packages\tiktorch\server\grpc_svc.py", line 25, in CreateModelSession
    lease = self.__device_pool.lease(request.deviceIds)
  File "C:\Anaconda3\envs\tiktorch-server-env\lib\site-packages\tiktorch\server\device_pool.py", line 140, in lease
    raise Exception(f"Device {dev_id} is already in use")
Exception: Device cuda:0 is already in use
01:23:42.601 [MainProcess/ThreadPoolExecutor-0_0] ERROR Exception calling application: Device cuda:0 is already in use
Traceback (most recent call last):
  File "C:\Anaconda3\envs\tiktorch-server-env\lib\site-packages\grpc\_server.py", line 434, in _call_behavior
    response_or_iterator = behavior(argument, context)
  File "C:\Anaconda3\envs\tiktorch-server-env\lib\site-packages\tiktorch\server\grpc_svc.py", line 25, in CreateModelSession
    lease = self.__device_pool.lease(request.deviceIds)
  File "C:\Anaconda3\envs\tiktorch-server-env\lib\site-packages\tiktorch\server\device_pool.py", line 140, in lease
    raise Exception(f"Device {dev_id} is already in use")
Exception: Device cuda:0 is already in use
01:24:16.658 [MainProcess/ThreadPoolExecutor-0_0] INFO Created session 35e3ac3d511344c18dffaf495a234e19
01:24:16.659 [MainProcess/ThreadPoolExecutor-0_0] DEBUG Registered close handler <bound method _Lease.terminate of <tiktorch.server.device_pool._Lease object at 0x000001DF73655910>> for session 35e3ac3d511344c18dffaf495a234e19
01:24:16.659 [MainProcess/ThreadPoolExecutor-0_0] DEBUG Registered close handler <tiktorch.rpc.mp.create_client.<locals>._make_method.<locals>.MethodWrapper object at 0x000001DF73ADD940> for session 35e3ac3d511344c18dffaf495a234e19
01:24:16.745 [ModelSessionProcess/ModelThread] INFO Starting session worker
01:24:16.746 [ModelSessionProcess/ModelThread] DEBUG Set new state State.Paused
01:24:18.181 [MainProcess/ThreadPoolExecutor-0_0] ERROR Exception calling application: Device cpu is already in use
Traceback (most recent call last):
  File "C:\Anaconda3\envs\tiktorch-server-env\lib\site-packages\grpc\_server.py", line 434, in _call_behavior
    response_or_iterator = behavior(argument, context)
  File "C:\Anaconda3\envs\tiktorch-server-env\lib\site-packages\tiktorch\server\grpc_svc.py", line 25, in CreateModelSession
    lease = self.__device_pool.lease(request.deviceIds)
  File "C:\Anaconda3\envs\tiktorch-server-env\lib\site-packages\tiktorch\server\device_pool.py", line 140, in lease
    raise Exception(f"Device {dev_id} is already in use")
Exception: Device cpu is already in use
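
A minimal sketch of the lease bookkeeping behind this error (illustrative names, not the actual tiktorch.server.device_pool code): a device can be leased by only one session at a time, so the symptom above suggests a lease that is not released when a session fails or closes.

```python
# Sketch of exclusive device leasing; the real implementation differs.
import threading

class DevicePool:
    def __init__(self, devices):
        self._lock = threading.Lock()
        self._in_use = {dev: False for dev in devices}

    def lease(self, device_ids):
        with self._lock:
            # Reject the whole request if any device is taken.
            for dev_id in device_ids:
                if self._in_use[dev_id]:
                    raise Exception(f"Device {dev_id} is already in use")
            for dev_id in device_ids:
                self._in_use[dev_id] = True

    def release(self, device_ids):
        with self._lock:
            for dev_id in device_ids:
                self._in_use[dev_id] = False

pool = DevicePool(["cpu", "cuda:0"])
pool.lease(["cuda:0"])
# A second lease of cuda:0 now raises "Device cuda:0 is already in use"
# until release(["cuda:0"]) is called.
```

In this model, the log above would be explained by the close handler (the `_Lease.terminate` seen in the DEBUG lines) not running for a session that ended abnormally.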

RTX 3000 Generation Support

Dear Anna, Dominik and all the ilastik team,

I just got to know about the new ilastik version. Great job as always, and I am very excited to give the new NN workflow (debug mode) a try. Since I recently got an RTX 3080 GPU, I am curious about your support for it. As far as I can see, the tiktorch server works with CUDA 10.0 to 10.2.

The RTX 3000 generation works in my hands with CUDA 11.0 onwards. I had a long fight with StarDist, but I see very good speed potential.

Thanks for your help/comments.

Best,

Carlo

add scripts for binary distribution

Currently I have a private GitLab repo with distribution scripts for all three OSes. It should maybe be moved to this repo so that everything is in a single place.

big model.zip cannot be loaded

OS: Windows, local server at 39c3491
Trying to create a session from a model.zip of ~200 MB fails with:

ERROR 2020-03-31 14:39:11,897 opNNclass 816 19184 Failed to create session
Traceback (most recent call last):
  File "C:\repos\ilastik-meta\ilastik\ilastik\applets\networkClassification\opNNclass.py", line 111, in setupOutputs
    session = tiktorch.create_model_session(model_binary, [d.id for d in devices if d.enabled])
  File "C:\repos\ilastik-meta\lazyflow\lazyflow\operators\tiktorch\classifier.py", line 223, in create_model_session
    inference_pb2.CreateModelSessionRequest(model_blob=inference_pb2.Blob(content=model_str), deviceIds=devices)
  File "C:\conda\envs\ila3\lib\site-packages\grpc\_channel.py", line 533, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "C:\conda\envs\ila3\lib\site-packages\grpc\_channel.py", line 467, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
	status = StatusCode.RESOURCE_EXHAUSTED
	details = "Sent message larger than max (189683787 vs. 104857600)"
	debug_error_string = "{"created":"@1585658351.856000000","description":"Sent message larger than max (189683787 vs. 104857600)","file":"src/core/ext/filters/message_size/message_size_filter.cc","file_line":202,"grpc_status":8}"
>
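
The error is the gRPC unary message cap: 189683787 bytes sent vs. a 104857600-byte (100 MiB) limit. Two common remedies are raising the limit via the `grpc.max_send_message_length` / `grpc.max_receive_message_length` channel options, or streaming the blob in chunks. A generic chunking sketch (chunk size is an arbitrary choice, and the actual tiktorch fix may differ):

```python
# Sketch: split a large model blob into fixed-size chunks so it can be sent
# over a client-streaming RPC instead of one oversized unary message.
def iter_chunks(blob: bytes, chunk_size: int = 4 * 1024 * 1024):
    """Yield consecutive chunk_size slices of blob."""
    for offset in range(0, len(blob), chunk_size):
        yield blob[offset:offset + chunk_size]

blob = bytes(10_000_000)  # stand-in for model.zip content
chunks = list(iter_chunks(blob))
assert b"".join(chunks) == blob
assert all(len(c) <= 4 * 1024 * 1024 for c in chunks)
```

Streaming keeps per-message sizes bounded regardless of how large models get, whereas raising the limit only moves the failure threshold.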

Cannot run remote nn classification workflow

I tried to run the remote nn classification workflow (client: my laptop, server: EMBL GPU 7), but ran into some issues:

  1. the server lists "127.0.0.0" instead of the correct IP
  2. even when inserting the correct IP, the client does not find the server

See screenshots (server was still running when I tried to connect):
Screenshot from 2021-07-16 07-19-26
Screenshot from 2021-07-16 07-18-24

Makefile not up to date?

I don't think you can currently create a working devenv via the Makefile. It looks like pybio is missing from the resulting env.

What I did:

make devenv

EOFError due to invalid url within model.zip

Using the first tiktorch-cpu Windows distribution with ilastik, I get the following error in the log when loading an outdated model.zip (https://github.com/subeesh/hbp-DL-seg-codes/releases/download/0.1.0/2sUNetDAweights.pth.tar is no longer valid).

A more concise exception message would be nice. This might actually be an issue in pybio.

20604
Starting server on 127.0.0.1:5567
14:48:35.212 [MainProcess/ThreadPoolExecutor-0_0] INFO Created session 81e1a85ee0dc4b448cd621af03d89ab8
14:48:35.286 [MainProcess/ThreadPoolExecutor-0_0] DEBUG Registered close handler <bound method _Lease.terminate of <tiktorch.server.device_pool._Lease object at 0x0000018594FAEF28>> for session 81e1a85ee0dc4b448cd621af03d89ab8
14:48:35.292 [MainProcess/ThreadPoolExecutor-0_0] DEBUG Registered close handler <tiktorch.rpc.mp.create_client.<locals>._make_method.<locals>.MethodWrapper object at 0x00000185956DB358> for session 81e1a85ee0dc4b448cd621af03d89ab8
14:48:35.902 [ModelSessionProcess/MainThread] ERROR Failed to download URI(scheme='https', netloc='github.com', path='/subeesh/hbp-DL-seg-codes/releases/download/0.1.0/2sUNetDAweights.pth.tar', query='')
Process ModelSessionProcess:
Traceback (most recent call last):
  File "C:\Program Files\tiktorch\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "C:\Program Files\tiktorch\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Program Files\tiktorch\lib\site-packages\tiktorch\server\session\process.py", line 83, in _run_model_session_process
    session_proc = ModelSessionProcess(model_zip, devices)
  File "C:\Program Files\tiktorch\lib\site-packages\tiktorch\server\session\process.py", line 40, in __init__
    self._model = eval_model_zip(model_file, devices, cache_path=cache_path)
  File "C:\Program Files\tiktorch\lib\site-packages\tiktorch\server\reader.py", line 33, in eval_model_zip
    pybio_model = spec.utils.load_model(spec_file_str, root_path=temp_path, cache_path=cache_path)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 366, in load_model
    ret = load_spec_and_kwargs(*args, **kwargs)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 359, in load_spec_and_kwargs
    tree = URITransformer(root_path=local_spec_path.parent, cache_path=cache_path).transform(tree)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 58, in transform
    return transformer(node)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 63, in generic_transformer
    node, **{field.name: self.transform(getattr(node, field.name)) for field in fields(node)}
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 63, in <dictcomp>
    node, **{field.name: self.transform(getattr(node, field.name)) for field in fields(node)}
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 58, in transform
    return transformer(node)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 231, in transform_SpecURI
    return subtransformer.transform(resolved_node)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 58, in transform
    return transformer(node)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 63, in generic_transformer
    node, **{field.name: self.transform(getattr(node, field.name)) for field in fields(node)}
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 63, in <dictcomp>
    node, **{field.name: self.transform(getattr(node, field.name)) for field in fields(node)}
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 58, in transform
    return transformer(node)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 63, in generic_transformer
    node, **{field.name: self.transform(getattr(node, field.name)) for field in fields(node)}
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 63, in <dictcomp>
    node, **{field.name: self.transform(getattr(node, field.name)) for field in fields(node)}
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 58, in transform
    return transformer(node)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 63, in generic_transformer
    node, **{field.name: self.transform(getattr(node, field.name)) for field in fields(node)}
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 63, in <dictcomp>
    node, **{field.name: self.transform(getattr(node, field.name)) for field in fields(node)}
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 58, in transform
    return transformer(node)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 234, in transform_URI
    local_path = resolve_uri(node, root_path=self.root_path, cache_path=self.cache_path)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 149, in resolve_uri
    local_path = _download_uri_node_to_local_path(uri_node, cache_path)
  File "C:\Program Files\tiktorch\lib\site-packages\pybio\spec\utils.py", line 181, in _download_uri_node_to_local_path
    urlretrieve(url_str, str(local_path))
  File "C:\Program Files\tiktorch\lib\urllib\request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "C:\Program Files\tiktorch\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Program Files\tiktorch\lib\urllib\request.py", line 531, in open
    response = meth(req, response)
  File "C:\Program Files\tiktorch\lib\urllib\request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Program Files\tiktorch\lib\urllib\request.py", line 569, in error
    return self._call_chain(*args)
  File "C:\Program Files\tiktorch\lib\urllib\request.py", line 503, in _call_chain
    result = func(*args)
  File "C:\Program Files\tiktorch\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
14:48:35.964 [MainProcess/ClientPoller[IRPCModelSession]] WARNING Communication channel closed. Shutting Down.
14:48:35.965 [MainProcess/ThreadPoolExecutor-0_0] ERROR Exception calling application:
Traceback (most recent call last):
  File "C:\Program Files\tiktorch\lib\multiprocessing\connection.py", line 302, in _recv_bytes
    overlapped=True)
BrokenPipeError: [WinError 109] The pipe has been ended

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\tiktorch\lib\site-packages\grpc\_server.py", line 434, in _call_behavior
    response_or_iterator = behavior(argument, context)
  File "C:\Program Files\tiktorch\lib\site-packages\tiktorch\server\grpc_svc.py", line 41, in CreateModelSession
    model_info = session.client.get_model_info()
  File "C:\Program Files\tiktorch\lib\site-packages\tiktorch\rpc\mp.py", line 99, in __call__
    return fut.result(timeout=timeout)
  File "C:\Program Files\tiktorch\lib\site-packages\tiktorch\rpc\types.py", line 82, in result
    return super().result(timeout or self._timeout)
  File "C:\Program Files\tiktorch\lib\concurrent\futures\_base.py", line 432, in result
    return self.__get_result()
  File "C:\Program Files\tiktorch\lib\concurrent\futures\_base.py", line 384, in __get_result
    raise self._exception
  File "C:\Program Files\tiktorch\lib\site-packages\tiktorch\rpc\mp.py", line 138, in _poller
    msg = self._conn.recv()
  File "C:\Program Files\tiktorch\lib\multiprocessing\connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "C:\Program Files\tiktorch\lib\multiprocessing\connection.py", line 321, in _recv_bytes
    raise EOFError
EOFError

Move server startup logic from setupOutputs to execute

Right now the server starts as soon as the operators are configured, which results in slow project startup and weird behavior in case of misconfiguration.
A better place for this is the execute method, which only runs when a request actually arrives.
For this, we would need an Operator that has the server as its output, executes only once, and caches the result.
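
The "execute once and cache" behavior can be sketched independently of lazyflow. The class and names are illustrative; the actual Operator wiring is omitted:

```python
# Sketch of start-on-first-use: server launch is deferred until the first
# request, and the handle is cached (and reused) afterwards.
import threading

class LazyServer:
    def __init__(self, start_fn):
        self._start_fn = start_fn
        self._lock = threading.Lock()
        self._handle = None

    def get(self):
        with self._lock:
            if self._handle is None:
                # Only the first caller pays the startup cost; a
                # misconfiguration also surfaces here, not at project load.
                self._handle = self._start_fn()
            return self._handle

calls = []
server = LazyServer(lambda: calls.append("started") or "handle")
assert server.get() == "handle"
assert server.get() == "handle"
assert calls == ["started"]  # started exactly once
```

The lock matters because execute can be called from multiple request threads; without it, two threads could each launch a server.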

Model Zoo format support

We expect to have a draft format after the Dresden hackathon (29.10.2019 - 31.10.2019).
We plan to update this issue afterwards.

trap: SIGTERM: bad trap

tiktorch 20.6 does not start on Pop!_OS on ThinkPad P1

OS: Pop!_OS 20.04 LTS
Processor: Intel® Core™ i7-9750H CPU @ 2.60GHz × 12
Graphics: Quadro T2000/PCIe/SSE2

Screenshot from 2020-07-02 15-19-30

Add pid file

For local autodiscoverability, it would be great if the tiktorch server wrote a pid file that also records the port.
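
A minimal sketch of such a pid file (path, file name, and JSON format are all assumptions, not an existing tiktorch convention):

```python
# Sketch: record pid and port so local clients can discover a running server.
import json
import os
import tempfile

def write_pidfile(path, port):
    with open(path, "w") as f:
        json.dump({"pid": os.getpid(), "port": port}, f)

def read_pidfile(path):
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "tiktorch-server.pid")
write_pidfile(path, 5567)
info = read_pidfile(path)
assert info["port"] == 5567
assert info["pid"] == os.getpid()
```

A client would also want to check that the recorded pid still refers to a live process before trusting the port, since stale pid files survive crashes.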

Consider using Random Forest + NN

The idea is to use the NN as a filter alongside the normal ilastik features and only connect the RF to the output probability layer.
Maybe the precision is sufficient for biology use cases even though it wouldn't reach state-of-the-art performance.
While we won't have a clear answer on how to interactively train the NN, we'll have something for end users to play with, and it will train fast.
@akreshuk @FynnBe @johanneshugger

Implement missing preprocessing and postprocessing (per sample)

Main issue about implementing pre-/post-processing in tiktorch.
For now, we decided to go only with the per_sample implementation; per_dataset will be added later.
Descriptions are in the configuration spec.

Finished implementation should contain:

  1. Snapshot tests
  2. Should correctly handle axes keyword argument if present in the spec
  3. Should have a docstring
  4. Should use xarray/numpy for implementation and not framework-specific tools
  5. Should correctly propagate axis tags
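
As an example of the per_sample semantics, here is a pure-Python stand-in for a zero-mean/unit-variance step (a real implementation would use numpy/xarray, per item 4, and handle the axes keyword, per item 2). What makes it per_sample is that the statistics come from the sample itself rather than from the whole dataset:

```python
# Pure-Python stand-in for a per_sample zero-mean / unit-variance step;
# the mean and variance are computed from this sample alone.
import math

def zero_mean_unit_variance(sample, eps=1e-6):
    mean = sum(sample) / len(sample)
    var = sum((x - mean) ** 2 for x in sample) / len(sample)
    std = math.sqrt(var)
    return [(x - mean) / (std + eps) for x in sample]

out = zero_mean_unit_variance([1.0, 2.0, 3.0, 4.0])
assert abs(sum(out)) < 1e-9                                   # zero mean
assert abs(sum(x * x for x in out) / len(out) - 1.0) < 1e-3   # ~unit variance
```

The planned per_dataset variant would instead take precomputed dataset-wide statistics as parameters, which is why the two need separate implementations.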

Dry run on CPU

In order to estimate a reasonable size for the blocks that can be processed in a forward pass on a CPU, implement a dry-run feature.
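
One possible shape for such an estimate: probe candidate block sizes by doubling the edge length until a memory model says the forward pass would exceed a budget, then keep the last size that fit. The memory model here is a toy stand-in; a real dry run would measure or profile the actual network:

```python
# Sketch: find the largest power-of-two 3D block edge whose estimated
# forward-pass memory fits in a budget. mem_per_voxel is a toy model.
def find_block_size(mem_per_voxel, budget_bytes, start=16, max_edge=4096):
    edge = start
    best = None
    while edge <= max_edge:
        needed = mem_per_voxel * edge ** 3  # bytes for one 3D block
        if needed > budget_bytes:
            break
        best = edge
        edge *= 2
    return best  # None if even the smallest block does not fit

# 4 bytes/voxel, 1 GiB budget: 512^3 blocks fit (512 MiB), 1024^3 do not.
assert find_block_size(4, 1 << 30) == 512
```

Doubling keeps the probe count logarithmic; a binary search between the last fitting and first failing edge could refine the estimate further.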

Suggested method of installation does not work

We tried to install the tiktorch server the way it's described in the README, but this did not work:

  • After installing the environment, tiktorch-server is not available (there is no such executable in bin).
  • There is the executable tiktorch, though.
  • When executing it, the grpc package is missing.
  • We added it via conda install -c conda-forge grpcio.
  • Finally, the tiktorch command then runs, but prints a warning:
10310
Starting grpc server on 127.0.0.1:29500
E0313 10:32:25.536425704   10310 socket_utils_common_posix.cc:201] check for SO_REUSEPORT: {"created":"@1584091945.536417597","description":"SO_REUSEPORT unavailable on compiling system","file":"src/core/lib/iomgr/socket_utils_common_posix.cc","file_line":169}

cc @constantinpape
