
zzh-tech / estrnn

304 stars · 4 watchers · 39 forks · 79 KB

[ECCV2020 Spotlight] Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring

License: MIT License

Python 94.92% Jupyter Notebook 5.08%
video-deblurring dataset real-world-data deep-learning deblurring motion-blur eccv2020

estrnn's Introduction

Hi there 👋

  • 🌱 I’m currently a researcher at Shanghai AI Lab (OpenGVLab).
  • 🔭 My current research interests include:
    • 4D motion modeling
    • AI for sports
    • image/video restoration and enhancement
  • ✉️ Email: zhongzhihang [at] pjlab.org.cn
  • 🍉 Website: https://zzh-tech.github.io/

We are looking for highly self-motivated students (Joint Ph.D., 联合培养博士) and interns!
👉 If you are interested, please email me with your CV and your published papers ;-)

estrnn's People

Contributors: zzh-tech

estrnn's Issues

Requesting the checkpoint for the model trained with the REDS dataset

Hi,

Will it be possible for you to share the checkpoint of the model trained with the REDS dataset?

By the way, just to make sure: the score reported in the main paper was evaluated on the validation set of the REDS dataset, right?
As far as I know, the authors of the REDS dataset don't share the ground truths for the test set.

About training

Hi! Thanks for the great work.
Did you train BSD_2ms16ms and BSD_3ms24ms from the 1ms8ms pretrained model?

About test dataset settings

Hi, when testing on BSD 1ms8ms, did you use the model trained from scratch on 1ms8ms? Also, when testing on 2ms16ms, did you use the model trained on 1ms8ms from scratch?

Difference of GMACs

I tested the GMACs of ESTRNN (version B9C90) with input (B, 8, 3, 1280, 720) and found something strange. If there are 8 input frames, there will be 4 output frames, so should the total GMACs be divided by 8 or by 4? An extreme example: with 5 input frames the output is just one image, yet the model still computes over all 5 frames in the RNN cell. Given this, I suspect the reported GMACs depend on the number of input frames rather than being a quantity determined only by the model and the input resolution.
Besides, could you share the settings used for computing GMACs in your paper?
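
For what it's worth, the amortization question can be seen concretely with a toy profile (a sketch using the third-party thop package and a hypothetical stand-in model, not the authors' setup): total MACs grow with the number of input frames T, so per-frame GMACs depend on whether you divide by T or by the number of restored frames, T - past_frames - future_frames.

    import torch
    import torch.nn as nn
    from thop import profile  # pip install thop

    # Hypothetical stand-in (NOT ESTRNN): per-frame conv, so MACs scale with T.
    class ToyVideoNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 16, 3, padding=1)

        def forward(self, x):  # x: (B, T, C, H, W)
            b, t, c, h, w = x.shape
            return self.conv(x.view(b * t, c, h, w))

    model = ToyVideoNet()
    past, future = 2, 2  # frames the model drops per clip
    for t in (5, 8, 20):
        x = torch.randn(1, t, 3, 720, 1280)
        macs, _ = profile(model, inputs=(x,), verbose=False)
        n_out = t - past - future  # frames actually restored
        print(f"T={t}: total {macs / 1e9:.1f} GMACs, "
              f"{macs / n_out / 1e9:.1f} GMACs per output frame")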

Something about the training

Hi, thanks for your excellent work. Will it save time if I change the "trainer_mode" from "dp" to "ddp"? I have 4 GPUs, and it does not seem to speed things up.

about the size of GOPRO dataset

Hi, thanks for your excellent work.
You mention that the frame size in the GOPRO dataset is 1280x720; however, in the GOPRO_DS I downloaded from your link the size is 960x540. Which size corresponds to the PSNR in your paper?
Thank you very much.

CPU

Hello, does this support training or inference on the CPU?

About Dataset

First of all, thank you for the great work.

In the BSD dataset, the beam splitter divides the light to create two images with different exposure times, but there could be some misalignment between the paired images. Is any additional method employed to align the misaligned pixels in these images?

Thank you in advance.

About dataset

Thank you for the great work!

Were there any color-mismatch or intensity-difference issues during the collection of the dataset?

I think there might be some issues that come from the scattering of light or the refractive index of the beam splitter.

Thank you in advance!

How can I use Multi-GPU training in your code

Hello,

I use the following command to train on 4 GPUs:
CUDA_VISIBLE_DEVICES=0,1,2,3 python main.py --data_root datasets --lr 1e-4 --batch_size 4 --num_gpus 4 --trainer_mode ddp
but it seems that the model still runs on a single GPU.

Does your model support multi-GPU training, or is there something wrong with how I am invoking it?
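
As a quick sanity check (generic PyTorch, not part of the repo), confirm how many GPUs the training process can actually see, and watch nvidia-smi while it runs; if this prints 1, the environment rather than the trainer is the problem.

    import torch

    # With CUDA_VISIBLE_DEVICES=0,1,2,3 this should print 4.
    print("visible GPUs:", torch.cuda.device_count())
    for i in range(torch.cuda.device_count()):
        print(f"  cuda:{i} ->", torch.cuda.get_device_name(i))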

When will the BSD dataset be available?

Hi,
When do you think your BSD dataset will become available?
It would be great to run some experiments on real blur-sharp image data instead of synthesized data.

Issue on downloading BSD database

Dear Dr. Zhong,

Thank you for the excellent work. I have tried several times to download your BSD dataset, but I failed. At the beginning it looks OK, but after some amount has downloaded it always shows "Failed - Network error", even though my other downloads succeed.

Is it possible to share your dataset in another form, e.g., split files or Baidu Wangpan?

Thank you for your consideration.

Regards.

Some questions about --trainer_mode dp and DataLoader

Hello,

First, have you experimented with the dp version before? I find that dp runs faster than ddp with the same batch size and number of GPUs, so why did you choose ddp for training?

Second, in gopro_ds_lmdb.py you use DeblurDataset() to generate the video deblurring datasets, choosing seq_idx and frame_idx with random.randint. Won't that cause some samples to be chosen repeatedly, and some images in the GoPro dataset never to be chosen at all?

Looking forward to your reply! Thank you very much!

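
To put the sampling concern in perspective: drawing seq_idx and frame_idx with replacement does repeat clips within an epoch, but across many epochs every start position gets visited with overwhelming probability. A toy simulation (the dataset sizes here are assumptions, not GoPro's actual counts):

    import random
    from collections import Counter

    n_seqs, frames_per_seq, clip_len = 22, 100, 8  # hypothetical sizes
    draws = 100 * 500  # roughly 100 epochs of 500 random samples each

    hits = Counter()
    for _ in range(draws):
        seq_idx = random.randint(0, n_seqs - 1)
        frame_idx = random.randint(0, frames_per_seq - clip_len)
        hits[(seq_idx, frame_idx)] += 1

    total = n_seqs * (frames_per_seq - clip_len + 1)
    print(f"visited {len(hits)} of {total} possible start positions")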

About the psnr drop during testing

Hi, I have trained a model on the GOPRO dataset. During training, the validation PSNR is 32.25 dB; however, during testing, the PSNR drops to 27.00 dB. Is there any problem that could cause this?
Thanks!
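
When validation and test PSNR disagree this much, one common first check is that both pipelines use the same data range and the same evaluation region (full frames vs. cropped patches, normalized [0, 1] vs. 0-255). A reference implementation to compare both against (a sketch, not the repo's metric code):

    import numpy as np

    def psnr(ref: np.ndarray, out: np.ndarray, data_range: float = 255.0) -> float:
        """PSNR in dB; both images must share a shape and the stated data range."""
        mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(data_range ** 2 / mse)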

Something about the test

Hello! I used this code and the parameters from the paper to train on the GOPRO dataset, but the results on the test set are lower than those in the paper. If I want to accurately reproduce the paper's results, what should I pay attention to?

Regarding Training using Custom Dataset

Hi!

Say I have a custom dataset of blur-sharp image pairs from videos. Could you provide a detailed step-by-step guide on how to train the network with a new custom dataset, e.g., how to convert it to the LMDB format you use here? It would make things easy for everyone who wants to use your work for their own purposes.

Thank you :)
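
Until an official guide exists, here is a rough sketch of packing frames into an LMDB. The directory layout and the "<seq>_<frame>" key scheme are assumptions; check them against the reader in data/gopro_ds_lmdb.py, which also needs the image shapes (they are not stored by this snippet):

    import glob
    import os

    import cv2
    import lmdb

    root = "./custom/train/blur"  # assumed layout: <root>/<sequence>/<frame>.png
    env = lmdb.open("custom_train_lmdb", map_size=1 << 40)  # generous upper bound
    with env.begin(write=True) as txn:
        for seq_idx, seq_dir in enumerate(sorted(glob.glob(os.path.join(root, "*")))):
            for frame_idx, path in enumerate(sorted(glob.glob(os.path.join(seq_dir, "*.png")))):
                img = cv2.imread(path)  # H x W x 3, uint8, BGR
                key = f"{seq_idx:03d}_{frame_idx:08d}".encode()
                txn.put(key, img.tobytes())  # raw pixel bytes only
    env.close()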

Using the pre-trained model for inference, the results are strange

I chose 15 pictures from the BSD dataset to test, using the command:
python inference.py --src ./blur/ --dst ./results/ --ckpt ./checkpoints/ESTRNN_C80B15_BSD_2ms16ms.tar

The results are very strange:
[screenshots 12, 13]

I also prepared a video to test, using the command:
python inference.py --src 1.mp4 --dst ./results/2/ --ckpt ./checkpoints/ESTRNN_C80B15_BSD_2ms16ms.tar

There is an error:
[screenshot 14]

Did I do anything wrong?

How to use with custom input?

Hi, I just discovered your project and it's very impressive.
However, I can't find how to use it on my own video with a pretrained model.
I put the images in a folder in the test directory (BSD/test/000/Blur/RGB/XXX.png ...), but the script seems to need something in the sharp directory as well. I tried to understand the script, but I must not be looking in the right .py file, since it uses both the blur and sharp directories.

Thanks for any help.
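
One hedged workaround (not an official recipe): if the test loader insists on ground-truth files existing, mirror the blur frames into the Sharp directory as dummy targets. The deblurred outputs are still valid; any PSNR/SSIM computed against the dummies is of course meaningless.

    import glob
    import os
    import shutil

    # Paths follow the BSD layout mentioned above.
    src = "BSD/test/000/Blur/RGB"
    dst = "BSD/test/000/Sharp/RGB"
    os.makedirs(dst, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(src, "*.png"))):
        shutil.copy(path, dst)  # dummy "sharp" frames to satisfy the loader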

Cannot reproduce results

Hi, dear authors, I've recently tried to reproduce your work; the command is as follows:

   python main.py --data_root /home/data/ --dataset gopro_ds_lmdb --ds_config 2ms16ms --batch_size 32 --data_format RGB

I use 4 × P40 GPUs, but the output is:

2021/06/09, 02:28:12 - gopro_ds_lmdb results generating ...
2021/06/09, 02:28:12 - seq 000 image results generating ...
2021/06/09, 02:28:54 - seq 001 image results generating ...
2021/06/09, 02:29:33 - seq 002 image results generating ...
2021/06/09, 02:30:11 - seq 003 image results generating ...
2021/06/09, 02:30:49 - seq 004 image results generating ...
2021/06/09, 02:31:28 - seq 005 image results generating ...
2021/06/09, 02:32:06 - seq 006 image results generating ...
2021/06/09, 02:32:59 - seq 007 image results generating ...
2021/06/09, 02:33:37 - seq 008 image results generating ...
2021/06/09, 02:34:15 - seq 009 image results generating ...
2021/06/09, 02:34:53 - seq 010 image results generating ...

2021/06/09, 02:35:24 - Test images : 1067
2021/06/09, 02:35:24 - Test PSNR : 29.877435602139418
2021/06/09, 02:35:24 - Test SSIM : 0.8923279783187907
2021/06/09, 02:35:24 - Average time per image: 0.07301736338173642

The final PSNR and SSIM are much lower than the paper's results.

Old Version of BSD dataset

Hi, dear authors, have you released the old version of the BSD dataset? I notice that your ECCV paper mentions two versions of BSD (15fps and 30fps); could you provide both? We want to cite your paper and compare against the BSD results reported in it.

Average PSNR drops when testing a sequence checkpoint

When testing the best checkpoint, the sequences that achieve a PSNR of 36.038 during training (while in /valid) drop to a PSNR of 27.366 once renamed to /test, looking the same as, or even blurrier than, the blurry input.
The model is applying some change, but it is not sharpening the image.
[images: input 00000088_input, ESTRNN output 00000088_estrnn, ground truth 00000088_gt]

A question about the test output

Hi,

Sorry to bother you.

I have a question about the test.py output.

I found that when I test the model on a sequence with frames indexed from 1 to 300, the output sequence's frames only range from 3 to 297; frames 1, 2, 298, 299, and 300 are lost. Why does this happen, and what should I do if I want to recover the lost frames?
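
This is consistent with the frame-trimming arithmetic reported elsewhere on this page (past_frames = future_frames = 2 by default): a clip of T frames yields T - 4 outputs. One hedged workaround is to reflect-pad the frame list before inference so every original frame gets an output; border quality is not guaranteed.

    # Reflect two frames at each end (pad == past_frames == future_frames == 2).
    frames = list(range(1, 301))  # stand-in for the 300 input frame indices
    pad = 2
    padded = frames[pad:0:-1] + frames + frames[-2:-2 - pad:-1]
    print(len(padded))  # 304, so the model would emit all 300 outputs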

About pretrained model

Hello, I really appreciate your work. I downloaded your models but could not find the model trained on the GOPRO dataset. Could you release this model? Thanks a lot.

Training problems

Traceback (most recent call last):
  File "c:/Users/ChenJiwang/acjw1/ESTRNN/ESTRNN/main.py", line 24, in <module>
    trainer.run()
  File "c:\Users\ChenJiwang\acjw1\ESTRNN\ESTRNN\train\trainer.py", line 36, in run
    process(self.para)
  File "c:\Users\ChenJiwang\acjw1\ESTRNN\ESTRNN\train\dp.py", line 73, in process
    data = Data(para, device_id=0)
  File "c:\Users\ChenJiwang\acjw1\ESTRNN\ESTRNN\data\data.py", line 8, in __init__
    self.dataloader_train = module.Dataloader(para, device_id, ds_type='train')
  File "c:\Users\ChenJiwang\acjw1\ESTRNN\ESTRNN\data\gopro_ds_lmdb.py", line 94, in __init__
    para.centralize, para.normalize)
  File "c:\Users\ChenJiwang\acjw1\ESTRNN\ESTRNN\data\gopro_ds_lmdb.py", line 32, in __init__
    self.env_blur = lmdb.open(self.datapath_blur, map_size=1099511627776)
lmdb.Error: C:\Users\ChenJiwang\acjw1\ESTRNN\ESTRNN\dataset\gopro_ds_lmdb\gopro_ds_train: 磁盘空间不足。 (There is not enough space on the disk.)
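
On Windows, lmdb.open preallocates the full map_size (1 TB here) on disk, so the open fails unless that much free space exists. A hedged workaround is to open the environment with a smaller bound, just above the real dataset size (the 50 GB figure below is an assumption):

    import lmdb

    # Windows reserves map_size up front, so a 1 TB map fails on small drives.
    env = lmdb.open("gopro_ds_train", map_size=50 * 2**30)  # assumed ~50 GB cap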

Test Output to video

Thanks for your impressive work.
I use my own dataset as input, and the test output is a set of frames. Which tool or command is best for turning these frames into a video? I used cv2.VideoWriter, but the resulting video looks worse than the original input.
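
Quality loss at this step usually comes from the encoder's default bitrate rather than from the frames. A minimal OpenCV sketch (paths, fps, and codec here are assumptions; for near-lossless results, encoding the frames with ffmpeg at a low -crf is a common alternative):

    import glob

    import cv2

    frames = sorted(glob.glob("./results/*.png"))  # assumed output directory
    h, w = cv2.imread(frames[0]).shape[:2]
    writer = cv2.VideoWriter("deblurred.mp4",
                             cv2.VideoWriter_fourcc(*"mp4v"),
                             30.0,  # match the source video's fps
                             (w, h))
    for path in frames:
        writer.write(cv2.imread(path))
    writer.release()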

about dataset

Dear author:
I have a question about training on GOPRO: the dataset has two blur folders, blur and blur_gamma. Which one did you use? Thank you.

Are these artifacts expected?

With the provided pre-trained models I keep getting these unsightly artifacts (compare the sky in these images). Are these artifacts common with the REDS and GOPRO models as well? Is there an available pre-trained model that might avoid them? Thanks!

[images: 00000002_estrnn (deblurred), 00000002_input (input)]

L1 vs L2

Hi, congratulations on your work and thanks for sharing your code and dataset!

I have a question: you mention in the paper, "For the synthetic datasets GOPRO and REDS, the loss function is uniformly defined as L2 loss; while for the proposed dataset BSD, we use L1 for each model ...".

Is there any motivation behind that choice, or was it purely due to performance gains? Moreover, could you share the difference in PSNR between L1 and L2?

Many thanks!
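
For reference, the three losses in play on this page (the Charbonnier variant appears in the nan-loss report further down); a quick sketch, with eps = 1e-3 as an assumed constant:

    import torch

    def l2_loss(pred, gt):
        return torch.mean((pred - gt) ** 2)

    def l1_loss(pred, gt):
        return torch.mean(torch.abs(pred - gt))

    def charbonnier_loss(pred, gt, eps=1e-3):
        # Smooth L1: differentiable at zero, L1-like for large residuals.
        return torch.mean(torch.sqrt((pred - gt) ** 2 + eps ** 2))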

NotImplementedError: There were no tensor arguments to this function

I receive an error I don't understand when trying to run the inference script:

(py36pt1.6) H:\git\VideoDebluring2\ESTRNN>python inference.py
Traceback (most recent call last):
  File "inference.py", line 58, in <module>
    output_seq = model([input_seq, ])
  File "H:\miniconda\envs\py36pt1.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "H:\miniconda\envs\py36pt1.6\lib\site-packages\torch\nn\parallel\data_parallel.py", line 166, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "H:\miniconda\envs\py36pt1.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "H:\git\VideoDebluring2\ESTRNN\model\model.py", line 15, in forward
    outputs = self.module.feed(self.model, iter_samples)
  File "H:\git\VideoDebluring2\ESTRNN\model\ESTRNN.py", line 233, in feed
    outputs = model(inputs)
  File "H:\miniconda\envs\py36pt1.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "H:\git\VideoDebluring2\ESTRNN\model\ESTRNN.py", line 209, in forward
    return torch.cat(outputs, dim=1)
NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat.  This usually means that this function requires a non-empty list of Tensors, or that you (the operator writer) forgot to register a fallback function.  Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].

[PyTorch's per-backend kernel registration listing elided]

Can someone help?
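
One plausible cause, inferred from the frame-count behaviour described in other issues on this page (the model drops past_frames + future_frames frames, 2 + 2 by default, from every clip): if the input clip is too short, the outputs list is empty and torch.cat raises exactly this error. A hedged pre-flight check:

    import glob

    past_frames, future_frames = 2, 2  # defaults reported on this page
    min_needed = past_frames + future_frames + 1

    frames = sorted(glob.glob("./blur/*.png"))  # whatever --src points at
    if len(frames) < min_needed:
        raise SystemExit(f"need >= {min_needed} frames, found {len(frames)}; "
                         "too few frames leave torch.cat with an empty list")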

When I train on my own dataset to deblur, the loss value runs into nan

Hello @zzh-tech,
I want to know why the loss value goes to nan and why the model's results show no significant effect.
Please guide me.

The training log is described below:

2023/04/12, 15:35:29 - recording parameters ...
description: develop
seed: 39
threads: 8
num_gpus: 2
no_profile: False
profile_H: 1080
profile_W: 1920
resume: True
resume_file: /data/UDCVideo/baseline/ESTRNN/experiment/2023_04_05_22_31_29_ESTRNN_VideoUDC/model_best.pth.tar
data_root: /home/zhong/Dataset/
dataset: VideoUDC
save_dir: ./experiment/
frames: 8
ds_config: 2ms16ms
data_format: RGB
patch_size: [256, 256]
model: ESTRNN
n_features: 16
n_blocks: 15
future_frames: 2
past_frames: 2
activation: gelu
loss: 1*L1_Charbonnier_loss_color
metrics: PSNR
optimizer: Adam
lr: 0.0005
lr_scheduler: cosine
batch_size: 8
milestones: [200, 400]
decay_gamma: 0.5
start_epoch: 1
end_epoch: 500
trainer_mode: dp
test_only: False
test_frames: 20
test_save_dir: ./results/
test_checkpoint: /data/UDCVideo/baseline/ESTRNN/experiment/2023_04_05_22_31_29_ESTRNN_VideoUDC/model_best.pth.tar
video: False
normalize: True
centralize: True
time: 2023-04-12 15:35:29.064241

2023/04/12, 15:35:29 - building ESTRNN model ...
2023/04/12, 15:35:32 - model structure:
Model(
  (model): Model(
    (cell): RDBCell(
      (F_B0): Conv2d(3, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
      (F_B1): RDB_DS(
        (rdb): RDB(
          (dense_layers): Sequential(
            (0): dense_layer(
              (conv): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (act): GELU(approximate=none)
            )
            (1): dense_layer(
              (conv): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (act): GELU(approximate=none)
            )
            (2): dense_layer(
              (conv): Conv2d(48, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (act): GELU(approximate=none)
            )
          )
          (conv1x1): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1))
        )
        (down_sampling): Conv2d(16, 32, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2))
      )
      (F_B2): RDB_DS(
        (rdb): RDB(
          (dense_layers): Sequential(
            (0): dense_layer(
              (conv): Conv2d(32, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (act): GELU(approximate=none)
            )
            (1): dense_layer(
              (conv): Conv2d(56, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (act): GELU(approximate=none)
            )
            (2): dense_layer(
              (conv): Conv2d(80, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (act): GELU(approximate=none)
            )
          )
          (conv1x1): Conv2d(104, 32, kernel_size=(1, 1), stride=(1, 1))
        )
        (down_sampling): Conv2d(32, 64, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2))
      )
      (F_R): RDNet(
        (RDBs): ModuleList(
          (0): RDB(
            (dense_layers): Sequential(
              (0): dense_layer(
                (conv): Conv2d(80, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                (act): GELU(approximate=none)
              )
              (1): dense_layer(
                (conv): Conv2d(112, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                (act): GELU(approximate=none)
              )
              (2): dense_layer(
                (conv): Conv2d(144, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
                (act): GELU(approximate=none)
              )
            )
            (conv1x1): Conv2d(176, 80, kernel_size=(1, 1), stride=(1, 1))
          )
          (1)-(14): fourteen further RDB blocks, identical to (0) [elided]
        )
        (conv1x1): Conv2d(1200, 80, kernel_size=(1, 1), stride=(1, 1))
        (conv3x3): Conv2d(80, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (F_h): Sequential(
        (0): Conv2d(80, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): RDB(
          (dense_layers): Sequential(
            (0): dense_layer(
              (conv): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (act): GELU(approximate=none)
            )
            (1): dense_layer(
              (conv): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (act): GELU(approximate=none)
            )
            (2): dense_layer(
              (conv): Conv2d(48, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (act): GELU(approximate=none)
            )
          )
          (conv1x1): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1))
        )
        (2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
    )
    (recons): Reconstructor(
      (model): Sequential(
        (0): ConvTranspose2d(400, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
        (1): ConvTranspose2d(32, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
        (2): Conv2d(16, 3, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
      )
    )
    (fusion): GSA(
      (F_f): Sequential(
        (0): Linear(in_features=160, out_features=320, bias=True)
        (1): GELU(approximate=none)
        (2): Linear(in_features=320, out_features=160, bias=True)
        (3): Sigmoid()
      )
      (F_p): Sequential(
        (0): Conv2d(160, 320, kernel_size=(1, 1), stride=(1, 1))
        (1): Conv2d(320, 160, kernel_size=(1, 1), stride=(1, 1))
      )
      (condense): Conv2d(160, 80, kernel_size=(1, 1), stride=(1, 1))
      (fusion): Conv2d(400, 400, kernel_size=(1, 1), stride=(1, 1))
    )
  )
)

2023/04/12, 15:35:36 - generating profile of ESTRNN model ...
[profile] computation cost: 458.42 GMACs, parameters: 2.47 M

2023/04/12, 15:35:36 - loading VideoUDC dataloader ...
2023/04/12, 15:35:57 - loading checkpoint /data/UDCVideo/baseline/ESTRNN/experiment/2023_04_05_22_31_29_ESTRNN_VideoUDC/model_best.pth.tar ...

2023/04/12, 15:35:57 - [Epoch 2 / lr 5.00e-04]
[train] epoch time: 30389.37s, average batch time: 9.02s
[train] 1*L1_Charbonnier_loss_color : 0.0497 (best 0.0497), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.049653;

2023/04/13, 00:02:27 - [Epoch 3 / lr 5.00e-04]
[train] epoch time: 31388.36s, average batch time: 9.31s
[train] 1*L1_Charbonnier_loss_color : 4138261907.0748 (best 0.0497), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 4138261907.074845;

2023/04/13, 08:45:35 - [Epoch 4 / lr 5.00e-04]
[train] epoch time: 31176.41s, average batch time: 9.25s
[train] 1*L1_Charbonnier_loss_color : 0.0515 (best 0.0497), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.051471;

2023/04/13, 17:25:12 - [Epoch 5 / lr 5.00e-04]
[train] epoch time: 30173.87s, average batch time: 8.95s
[train] 1*L1_Charbonnier_loss_color : 0.0486 (best 0.0486), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.048556;

2023/04/14, 01:48:06 - [Epoch 6 / lr 5.00e-04]
[train] epoch time: 30326.00s, average batch time: 9.00s
[train] 1*L1_Charbonnier_loss_color : 0.0457 (best 0.0457), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.045680;

2023/04/14, 10:13:33 - [Epoch 7 / lr 5.00e-04]
[train] epoch time: 30364.56s, average batch time: 9.01s
[train] 1*L1_Charbonnier_loss_color : 1016826.8275 (best 0.0457), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 1016826.827540;

2023/04/14, 18:39:38 - [Epoch 8 / lr 5.00e-04]
[train] epoch time: 30601.50s, average batch time: 9.08s
[train] 1*L1_Charbonnier_loss_color : 0.0460 (best 0.0457), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.045977;

2023/04/15, 03:09:40 - [Epoch 9 / lr 5.00e-04]
[train] epoch time: 30508.70s, average batch time: 9.05s
[train] 1*L1_Charbonnier_loss_color : 0.0443 (best 0.0443), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.044296;

2023/04/15, 11:38:09 - [Epoch 10 / lr 5.00e-04]
[train] epoch time: 30297.35s, average batch time: 8.99s
[train] 1*L1_Charbonnier_loss_color : inf (best 0.0443), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color :  inf;

2023/04/15, 20:03:06 - [Epoch 11 / lr 5.00e-04]
[train] epoch time: 30177.56s, average batch time: 8.95s
[train] 1*L1_Charbonnier_loss_color : 0.0448 (best 0.0443), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.044764;

2023/04/16, 04:26:04 - [Epoch 12 / lr 4.99e-04]
[train] epoch time: 30493.02s, average batch time: 9.05s
[train] 1*L1_Charbonnier_loss_color : 0.0441 (best 0.0441), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.044116;

2023/04/16, 12:54:18 - [Epoch 13 / lr 4.99e-04]
[train] epoch time: 30154.28s, average batch time: 8.95s
[train] 1*L1_Charbonnier_loss_color : 2385378883146.6274 (best 0.0441), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 2385378883146.627441;

2023/04/16, 21:16:52 - [Epoch 14 / lr 4.99e-04]
[train] epoch time: 30205.32s, average batch time: 8.96s
[train] 1*L1_Charbonnier_loss_color : 0.0451 (best 0.0441), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.045062;

2023/04/17, 05:40:18 - [Epoch 15 / lr 4.99e-04]
[train] epoch time: 30142.28s, average batch time: 8.94s
[train] 1*L1_Charbonnier_loss_color : 0.0431 (best 0.0431), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.043079;

2023/04/17, 14:02:41 - [Epoch 16 / lr 4.99e-04]
[train] epoch time: 30201.14s, average batch time: 8.96s
[train] 1*L1_Charbonnier_loss_color : 6983098584846.5400 (best 0.0431), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 6983098584846.540039;

2023/04/17, 22:26:02 - [Epoch 17 / lr 4.99e-04]
[train] epoch time: 30098.64s, average batch time: 8.93s
[train] 1*L1_Charbonnier_loss_color : 0.0440 (best 0.0431), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.043975;

2023/04/18, 06:47:41 - [Epoch 18 / lr 4.99e-04]
[train] epoch time: 30196.35s, average batch time: 8.96s
[train] 1*L1_Charbonnier_loss_color : 2596996.1693 (best 0.0431), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 2596996.169278;

2023/04/18, 15:10:58 - [Epoch 19 / lr 4.98e-04]
[train] epoch time: 30428.21s, average batch time: 9.03s
[train] 1*L1_Charbonnier_loss_color : 0.0442 (best 0.0431), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.044210;

2023/04/18, 23:38:07 - [Epoch 20 / lr 4.98e-04]
[train] epoch time: 30350.31s, average batch time: 9.01s
[train] 1*L1_Charbonnier_loss_color : 111287.4230 (best 0.0431), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 111287.422983;

2023/04/19, 08:03:57 - [Epoch 21 / lr 4.98e-04]
[train] epoch time: 30116.00s, average batch time: 8.94s
[train] 1*L1_Charbonnier_loss_color : 0.0439 (best 0.0431), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.043916;

2023/04/19, 16:25:54 - [Epoch 22 / lr 4.98e-04]
[train] epoch time: 30330.90s, average batch time: 9.00s
[train] 1*L1_Charbonnier_loss_color : 0.0426 (best 0.0426), PSNR : inf (best inf)
[train] L1_Charbonnier_loss_color : 0.042591;

2023/04/20, 00:51:25 - [Epoch 23 / lr 4.98e-04]
[train] epoch time: 30530.48s, average batch time: 9.06s
[train] 1*L1_Charbonnier_loss_color : nan (best 0.0426), PSNR : nan (best inf)
[train] L1_Charbonnier_loss_color :  nan;

[Epochs 24-40 elided: every remaining epoch reports 1*L1_Charbonnier_loss_color : nan (best 0.0426), PSNR : nan]
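
Intermittent loss spikes (epochs 3, 7, 13, 16, 18, 20 above) followed by a permanent nan usually point to exploding gradients. A hedged sketch of the standard mitigations inside a generic training step (this is not the repo's trainer code; the clipping norm and the skip-on-non-finite guard are assumptions):

    import math

    import torch

    def guarded_step(model, batch, optimizer, criterion, max_norm=1.0):
        """Clip gradients and skip non-finite losses instead of stepping."""
        inputs, targets = batch
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        if not math.isfinite(loss.item()):
            return None  # skip the update rather than poisoning the weights
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
        optimizer.step()
        return loss.item()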

Colab

Thanks for sharing this fantastic work.
Would you consider making a Google Colab version available for less technically inclined folks like myself to try it out? Thanks in advance.

When only using past data?

How do the PSNR and SSIM change when only past feature data is used in the GSA stage, on GOPRO, REDS, and BSD?

Cannot download BSD dataset

When I try to download the BSD dataset from Google Drive, the warning is:
"Sorry, you can't view or download this file at this time. Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is especially large or shared with many people, it may take up to 24 hours before you can view or download it. If you still can't access the file after 24 hours, contact your domain administrator."

Please upload a new copy, thanks.
Or could you upload it to Baidu drive?

Cannot download BSD

Hi team,
Excellent work. However, Google seems to be blocking BSD downloads due to download quotas. Is there any other way you can host BSD?

About training time of BSD dataset

Dear Sir:

Hello, thank you for your work! I am about to reproduce your paper. Could you tell me the training time of the model and the hardware environment used for the experiments in the paper? Please forgive me for asking, but my personal hardware is limited (a single 1080Ti, 12 GB), so I am writing to ask for your advice. I look forward to your reply very much. Thank you, and I wish you a happy life!

About test code

Hello,
It seems that there is only the training code.
Can you provide the test code used for the evaluation in the paper?
