Comments (15)

rmccorm4 commented on June 2, 2024

Hi @geraldstanje,

Thanks for raising this issue.

I believe this error generally indicates a version mismatch issue:

[TensorRT-LLM][ERROR] Assertion failed: d == a + length

You mentioned the following environment:

Triton: 2.41
tensorrtllm_backend: 0.8.0

However, Triton v2.41 (23.12) is built for TRT-LLM backend v0.7.0 per the release notes: https://docs.nvidia.com/deeplearning/triton-inference-server/release-notes/rel-23-12.html#rel-23-12

If you'd like to use TRT-LLM v0.8.0, I recommend using Triton 24.03 or 24.02 which were built and tested for TRT-LLM version v0.8.0.
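
If you want to double-check the pairing, you can also print the TRT-LLM version that ships inside a given Triton container (a quick sketch, assuming the *-trtllm-python-py3 image):

# Inside the nvcr.io/nvidia/tritonserver:<release>-trtllm-python-py3 container:
python3 -c "import tensorrt_llm; print(tensorrt_llm.__version__)"
# The 24.02 / 24.03 containers should report 0.8.0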

Please let us know if this fixes your issue.


geraldstanje commented on June 2, 2024

@rmccorm4 thanks for your reply - can I use the following on an Ubuntu 20.04 host?

I will rerun after you confirm it.


rmccorm4 commented on June 2, 2024

Hi @geraldstanje, Triton 24.02 + TRT-LLM v0.8.0 should work. The 7B models should likely fit on a single GPU with 24 GB of memory, but you can use tensor parallelism to split across GPUs based on your use case.
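
(As a rough sanity check on the sizing: 7B parameters in FP16 is about 7e9 × 2 bytes ≈ 13 GiB of weights, leaving roughly 9-10 GB of a 24 GB A10G for activations and the KV cache.)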


geraldstanje commented on June 2, 2024

@rmccorm4 any issues regarding the Ubuntu 20.04 host or CUDA version 12.2 on the host? I plan to run this docker image:

sudo docker run -it --ipc=host --gpus all --ulimit memlock=-1 --shm-size="2g" nvcr.io/nvidia/tritonserver:24.02-trtllm-python-py3 /bin/bash

Can I run any of the models above?


rmccorm4 commented on June 2, 2024

I don't believe the Ubuntu 20.04 host should be an issue, as the container will have the required Ubuntu 22.04 inside.

As for the CUDA/driver version, see this note from the tritonserver release notes:

Driver Requirements
Release 24.02 is based on CUDA 12.3.2, which requires NVIDIA Driver release 545 or later. However, if you are running on a data center GPU (for example, T4 or any other data center GPU), you can use NVIDIA driver release 470.57 (or later R470), 525.85 (or later R525), 535.86 (or later R535), or 545.23 (or later R545).

The CUDA driver's compatibility package only supports particular drivers. Thus, users should upgrade from all R418, R440, R450, R460, R510, and R520 drivers, which are not forward-compatible with CUDA 12.3. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.

Since you have a datacenter GPU (A10G) and driver R535.161* on the host per your screenshot, it should be compatible based on "However, if you are running on a data center GPU ... you can use NVIDIA driver release ... 535.86 (or later R535)". If it's not compatible for some reason, the container should print a banner with a descriptive error when you start it. Please try it out and let us know if it doesn't work.
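
One quick way to confirm the driver version from the shell rather than the screenshot:

# Print GPU model and driver version for each device
nvidia-smi --query-gpu=name,driver_version --format=csv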


geraldstanje commented on June 2, 2024

Hi @rmccorm4 @Tabrizian

I still see the problem using nvcr.io/nvidia/tritonserver:24.02-trtllm-python-py3:

[TensorRT-LLM][WARNING] Device 0 peer access Device 1 is not available.
[TensorRT-LLM][WARNING] Device 0 peer access Device 2 is not available.
[TensorRT-LLM][WARNING] Device 0 peer access Device 3 is not available.

More info from inside the docker container:

lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 22.04.3 LTS
Release:	22.04
Codename:	jammy

nvidia-smi
Mon Apr 22 17:00:40 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.08             Driver Version: 535.161.08   CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A10G                    On  | 00000000:00:1B.0 Off |                    0 |
|  0%   17C    P8              15W / 300W |      0MiB / 23028MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA A10G                    On  | 00000000:00:1C.0 Off |                    0 |
|  0%   16C    P8              15W / 300W |      0MiB / 23028MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA A10G                    On  | 00000000:00:1D.0 Off |                    0 |
|  0%   16C    P8              15W / 300W |      0MiB / 23028MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA A10G                    On  | 00000000:00:1E.0 Off |                    0 |
|  0%   16C    P8              15W / 300W |      0MiB / 23028MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

model building:

./llama2_llm_tensorrt_engine_build_and_test.sh 
[TensorRT-LLM] TensorRT-LLM version: 0.8.00.8.0
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:02<00:00,  1.36s/it]
Weights loaded. Total time: 00:00:10
Total time of converting checkpoints: 00:02:05
[TensorRT-LLM] TensorRT-LLM version: 0.8.0[04/22/2024-16:40:34] [TRT-LLM] [I] Set bert_attention_plugin to float16.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set gpt_attention_plugin to float16.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set gemm_plugin to float16.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set lookup_plugin to None.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set lora_plugin to None.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set context_fmha to True.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set context_fmha_fp32_acc to False.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set paged_kv_cache to True.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set remove_input_padding to True.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set use_custom_all_reduce to True.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set multi_block_mode to False.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set enable_xqa to True.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set attention_qk_half_accumulation to False.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set tokens_per_block to 128.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set use_paged_context_fmha to False.
[04/22/2024-16:40:34] [TRT-LLM] [I] Set use_context_fmha_for_generation to False.
[04/22/2024-16:40:34] [TRT-LLM] [W] remove_input_padding is enabled, while max_num_tokens is not set, setting to max_batch_size*max_input_len. 
It may not be optimal to set max_num_tokens=max_batch_size*max_input_len when remove_input_padding is enabled, because the number of packed input tokens are very likely to be smaller, we strongly recommend to set max_num_tokens according to your workloads.
[04/22/2024-16:40:34] [TRT] [I] [MemUsageChange] Init CUDA: CPU +14, GPU +0, now: CPU 183, GPU 256 (MiB)
[04/22/2024-16:41:24] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +1798, GPU +312, now: CPU 2117, GPU 568 (MiB)
[04/22/2024-16:41:24] [TRT-LLM] [I] Set nccl_plugin to None.
[04/22/2024-16:41:24] [TRT-LLM] [I] Set use_custom_all_reduce to True.
[04/22/2024-16:41:25] [TRT-LLM] [I] Build TensorRT engine Unnamed Network 0
[04/22/2024-16:41:25] [TRT] [W] Unused Input: position_ids
[04/22/2024-16:41:25] [TRT] [W] Detected layernorm nodes in FP16.
[04/22/2024-16:41:25] [TRT] [W] Running layernorm after self-attention in FP16 may cause overflow. Exporting the model to the latest available ONNX opset (later than opset 17) to use the INormalizationLayer, or forcing layernorm layers to run in FP32 precision can help with preserving accuracy.
[04/22/2024-16:41:25] [TRT] [W] [RemoveDeadLayers] Input Tensor position_ids is unused or used only at compile-time, but is not being removed.
[04/22/2024-16:41:25] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 2153, GPU 594 (MiB)
[04/22/2024-16:41:25] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +2, GPU +10, now: CPU 2155, GPU 604 (MiB)
[04/22/2024-16:41:25] [TRT] [W] TensorRT was linked against cuDNN 8.9.6 but loaded cuDNN 8.9.2
[04/22/2024-16:41:25] [TRT] [I] Global timing cache in use. Profiling results in this builder pass will be stored.
[04/22/2024-16:41:35] [TRT] [I] [GraphReduction] The approximate region cut reduction algorithm is called.
[04/22/2024-16:41:35] [TRT] [I] Detected 106 inputs and 1 output network tensors.
[04/22/2024-16:41:40] [TRT] [I] Total Host Persistent Memory: 82640
[04/22/2024-16:41:40] [TRT] [I] Total Device Persistent Memory: 0
[04/22/2024-16:41:40] [TRT] [I] Total Scratch Memory: 537001984
[04/22/2024-16:41:40] [TRT] [I] [BlockAssignment] Started assigning block shifts. This will take 619 steps to complete.
[04/22/2024-16:41:40] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 24.3962ms to assign 12 blocks to 619 nodes requiring 3238006272 bytes.
[04/22/2024-16:41:40] [TRT] [I] Total Activation Memory: 3238006272
[04/22/2024-16:41:40] [TRT] [I] Total Weights Memory: 13476831232
[04/22/2024-16:41:40] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 2192, GPU 13474 (MiB)
[04/22/2024-16:41:40] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 2193, GPU 13484 (MiB)
[04/22/2024-16:41:40] [TRT] [W] TensorRT was linked against cuDNN 8.9.6 but loaded cuDNN 8.9.2
[04/22/2024-16:41:40] [TRT] [I] Engine generation completed in 15.4387 seconds.
[04/22/2024-16:41:40] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 0 MiB, GPU 12853 MiB
[04/22/2024-16:41:40] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +0, GPU +12853, now: CPU 0, GPU 12853 (MiB)
[04/22/2024-16:41:47] [TRT] [I] [MemUsageStats] Peak memory usage during Engine building and serialization: CPU: 28514 MiB
[04/22/2024-16:41:47] [TRT-LLM] [I] Total time of building Unnamed Network 0: 00:00:22
[04/22/2024-16:41:48] [TRT-LLM] [I] Serializing engine to /tensorrt/tensorrt-models/Llama-2-7b-chat-hf/v0.8.0/trt-engines/fp16/1-gpu/rank0.engine...
[04/22/2024-16:42:09] [TRT-LLM] [I] Engine serialized. Total time: 00:00:21
[04/22/2024-16:42:10] [TRT-LLM] [I] Total time of building all engines: 00:01:36
[TensorRT-LLM][INFO] Engine version 0.8.0 found in the config file, assuming engine(s) built by new builder API.
[TensorRT-LLM][WARNING] [json.exception.type_error.302] type must be array, but is null
[TensorRT-LLM][WARNING] Optional value for parameter lora_target_modules will not be set.
[TensorRT-LLM][WARNING] Parameter max_draft_len cannot be read from json:
[TensorRT-LLM][WARNING] [json.exception.out_of_range.403] key 'max_draft_len' not found
[TensorRT-LLM][WARNING] [json.exception.type_error.302] type must be string, but is null
[TensorRT-LLM][WARNING] Optional value for parameter quant_algo will not be set.
[TensorRT-LLM][WARNING] [json.exception.type_error.302] type must be string, but is null
[TensorRT-LLM][WARNING] Optional value for parameter kv_cache_quant_algo will not be set.
[TensorRT-LLM][INFO] MPI size: 1, rank: 0
[TensorRT-LLM][WARNING] Device 0 peer access Device 1 is not available.
[TensorRT-LLM][WARNING] Device 0 peer access Device 2 is not available.
[TensorRT-LLM][WARNING] Device 0 peer access Device 3 is not available.
[TensorRT-LLM][INFO] Loaded engine size: 12855 MiB
[TensorRT-LLM][INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 13001, GPU 13130 (MiB)
[TensorRT-LLM][INFO] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 13002, GPU 13140 (MiB)
[TensorRT-LLM][WARNING] TensorRT was linked against cuDNN 8.9.6 but loaded cuDNN 8.9.2
[TensorRT-LLM][INFO] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +12852, now: CPU 0, GPU 12852 (MiB)
[TensorRT-LLM][WARNING] The value of maxAttentionWindow cannot exceed maxSequenceLength. Therefore, it has been adjusted to match the value of maxSequenceLength.
[TensorRT-LLM][INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 13035, GPU 16242 (MiB)
[TensorRT-LLM][INFO] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 13035, GPU 16250 (MiB)
[TensorRT-LLM][WARNING] TensorRT was linked against cuDNN 8.9.6 but loaded cuDNN 8.9.2
[TensorRT-LLM][INFO] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 0, GPU 12852 (MiB)
[TensorRT-LLM][INFO] Allocate 5972688896 bytes for k/v cache. 
[TensorRT-LLM][INFO] Using 11392 tokens in paged KV cache.
[TensorRT-LLM] TensorRT-LLM version: 0.8.0Input [Text 0]: "<s> [INST] What is deep learning? [/INST]"
Output [Text 0 Beam 0]: " Deep learning is a subfield of machine learning that involves the use of artificial neural networks to model and solve complex problems. Here are some key things to know about deep learning:

1. Artificial Neural Networks (ANNs): Deep learning algorithms are based on artificial neural networks, which are modeled after the structure and function of the human brain. ANNs consist of interconnected nodes or neurons that process inputs and produce outputs.
2. Multi-Layer Perceptron (MLP): The most common type of deep learning algorithm is the multi-layer perceptron (MLP), which consists of multiple layers of neurons with nonlinear activation functions. Each layer processes the output from the previous layer, allowing the network to learn increasingly complex patterns in the data.
3. Convolutional Neural Networks (CNNs): CNNs are a type of deep learning algorithm specifically designed for image recognition tasks. They use convolutional and pooling layers to extract features from images, followed by fully connected layers to make predictions.
4. Recurrent Neural Networks (RNNs): RNNs are a type of deep learning algorithm used for sequential data, such as"

llama2_llm_tensorrt_engine_build_and_test.sh looks like this:

#!/bin/bash

HF_MODEL_NAME="Llama-2-7b-chat-hf"
HF_MODEL_PATH="meta-llama/Llama-2-7b-chat-hf"
# Clone the Hugging Face model repository
# ...
# Convert the model checkpoint to TensorRT format
python /tensorrt/v0.8.0/tensorrtllm_backend/tensorrt_llm/examples/llama/convert_checkpoint.py \
    --model_dir /tensorrt/models/$HF_MODEL_NAME \
    --output_dir /tensorrt/tensorrt-models/$HF_MODEL_NAME/v0.8.0/trt-checkpoints/fp16/1-gpu/ \
    --dtype float16
# Build TensorRT engine
trtllm-build --checkpoint_dir /tensorrt/tensorrt-models/$HF_MODEL_NAME/v0.8.0/trt-checkpoints/fp16/1-gpu/ \
    --output_dir /tensorrt/tensorrt-models/$HF_MODEL_NAME/v0.8.0/trt-engines/fp16/1-gpu/ \
    --remove_input_padding enable \
    --context_fmha enable \
    --gemm_plugin float16 \
    --max_input_len 32768 \
    --strongly_typed
# Run inference with the TensorRT engine
python3 /tensorrt/v0.8.0/tensorrtllm_backend/tensorrt_llm/examples/run.py \
    --max_output_len=250 \
    --tokenizer_dir /tensorrt/models/$HF_MODEL_NAME \
    --engine_dir=/tensorrt/tensorrt-models/$HF_MODEL_NAME/v0.8.0/trt-engines/fp16/1-gpu/ \
    --max_attention_window_size=4096 \
    --temperature=0.3 \
    --top_k=50 \
    --top_p=0.9 \
    --repetition_penalty=1.2 \
    --input_text="[INST] What is deep learning? [/INST]"

Also, what I noticed is when I measure the latency of run.py - it takes 21 seconds to run - why is that so slow?

time python3 /tensorrt/v0.8.0/tensorrtllm_backend/tensorrt_llm/examples/run.py \
    --max_output_len=250 \
    --tokenizer_dir /tensorrt/models/$HF_MODEL_NAME \
    --engine_dir=/tensorrt/tensorrt-models/$HF_MODEL_NAME/v0.8.0/trt-engines/fp16/1-gpu/ \
    --max_attention_window_size=4096 \
    --temperature=0.3 \
    --top_k=50 \
    --top_p=0.9 \
    --repetition_penalty=1.2 \
    --input_text="[INST] What is deep learning? [/INST]"

...

real   0m21.735s
user  0m11.898s
sys    0m14.218s

Thanks,
Gerald


rmccorm4 commented on June 2, 2024

Hi @geraldstanje, for questions about running the engine directly (outside of Triton) via run.py and specific details of standalone engine performance, I would reach out on the TRT-LLM GitHub issues: https://github.com/NVIDIA/TensorRT-LLM/issues


geraldstanje commented on June 2, 2024

@rmccorm4 what about these warnings here? If I see these warnings, compiling the model with tp_size = 4 would not work then...

[TensorRT-LLM][WARNING] Device 0 peer access Device 1 is not available.
[TensorRT-LLM][WARNING] Device 0 peer access Device 2 is not available.
[TensorRT-LLM][WARNING] Device 0 peer access Device 3 is not available.


rmccorm4 commented on June 2, 2024

@fpetrini15 @krishung5 do you know anything about these multi-gpu engine build warnings?

My assumption is that this is saying multi-GPU performance may be degraded without direct p2p access like NVLink, but it may otherwise be functional? I'll let others who know more comment. Otherwise this is a question for the TRT-LLM team as well.


krishung5 commented on June 2, 2024

It looks like your GPUs don't support peer-to-peer access. Could you run nvidia-smi topo -m to see if that's the case? I did have a similar issue before where my GPUs didn't support peer access:

root@g242-p33-0002:/opt/tritonserver/tensorrtllm_backend/ci/L0_backend_trtllm# nvidia-smi topo -m
        GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     0-79    0               N/A
GPU1    SYS      X      0-79    0               N/A

The way to resolve the runtime issue for me was just to add the flag --use_custom_all_reduce=disable when building the engines. For more detailed info, I would suggest asking the TRT-LLM team.
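
If you want to double-check peer access from the CUDA runtime side as well, something like this works (rough sketch - it assumes PyTorch is available in the container, which the trtllm images should include):

# Ask the CUDA runtime whether device 0 can directly access each peer GPU
python3 - <<'EOF'
import torch
for peer in range(1, torch.cuda.device_count()):
    ok = torch.cuda.can_device_access_peer(0, peer)
    print(f"device 0 -> device {peer}: peer access {'available' if ok else 'not available'}")
EOF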


geraldstanje commented on June 2, 2024

@krishung5 here is my GPU topo - it looks like they have p2p access via PHB?

nvidia-smi topo -m

       GPU0   GPU1   GPU2   GPU3   CPU Affinity  NUMA Affinity GPU NUMA ID
GPU0   X     PHB    PHB    PHB    0-47   0             N/A
GPU1   PHB    X     PHB    PHB    0-47   0             N/A
GPU2   PHB    PHB    X     PHB    0-47   0             N/A
GPU3   PHB    PHB    PHB    X     0-47   0             N/A

Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Can I still use tp_size = 4 and use all GPUs?


krishung5 commented on June 2, 2024

@geraldstanje I think it might also require NVLink for p2p access - I'm not sure about this part; the TRT-LLM GitHub channel should be able to clarify.

From my experience, I was able to specify tp_size and use all GPUs by adding the flag --use_custom_all_reduce=disable when building the engines.


geraldstanje commented on June 2, 2024

@krishung5 sure, let's wait for the TRT-LLM people to look at it - can you show me what exactly you used in the meantime?


krishung5 commented on June 2, 2024

@geraldstanje Sure thing! I'm using the command in the README as an example. Basically just add the last line when building the engines:

# Build TensorRT engines
trtllm-build --checkpoint_dir ./c-model/gpt2/fp16/4-gpu \
        --gpt_attention_plugin float16 \
        --remove_input_padding enable \
        --paged_kv_cache enable \
        --gemm_plugin float16 \
        --output_dir engines/fp16/4-gpu \
        --use_custom_all_reduce=disable  # Add this line
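
Adapted to the Llama-2 build script you posted above, it would look roughly like this (just a sketch - the --tp_size flag on convert_checkpoint.py and the mpirun launch follow the TRT-LLM llama example, so please double-check them against the v0.8.0 docs):

# Convert the checkpoint for 4-way tensor parallelism
python /tensorrt/v0.8.0/tensorrtllm_backend/tensorrt_llm/examples/llama/convert_checkpoint.py \
    --model_dir /tensorrt/models/Llama-2-7b-chat-hf \
    --output_dir /tensorrt/tensorrt-models/Llama-2-7b-chat-hf/v0.8.0/trt-checkpoints/fp16/4-gpu/ \
    --dtype float16 \
    --tp_size 4
# Build the 4-GPU engines with the custom all-reduce kernel disabled
trtllm-build --checkpoint_dir /tensorrt/tensorrt-models/Llama-2-7b-chat-hf/v0.8.0/trt-checkpoints/fp16/4-gpu/ \
    --output_dir /tensorrt/tensorrt-models/Llama-2-7b-chat-hf/v0.8.0/trt-engines/fp16/4-gpu/ \
    --remove_input_padding enable \
    --context_fmha enable \
    --gemm_plugin float16 \
    --use_custom_all_reduce=disable
# Run across all 4 GPUs via MPI
mpirun -n 4 --allow-run-as-root \
    python3 /tensorrt/v0.8.0/tensorrtllm_backend/tensorrt_llm/examples/run.py \
    --max_output_len=250 \
    --tokenizer_dir /tensorrt/models/Llama-2-7b-chat-hf \
    --engine_dir=/tensorrt/tensorrt-models/Llama-2-7b-chat-hf/v0.8.0/trt-engines/fp16/4-gpu/ \
    --input_text="[INST] What is deep learning? [/INST]"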

As for the question for the TRT-LLM team, can you file a separate GitHub issue for this topic on the TRT-LLM repo? I believe that will be the faster way to get a response from them.


geraldstanje commented on June 2, 2024

@krishung5 thanks for the quick reply. I created an issue for the TRT-LLM team: NVIDIA/TensorRT-LLM#1487 - they said it's only a warning and it should still work for 1 or 4 GPUs?

