ronyabecidan / mantranet-pytorch

Implementation of the famous Image Manipulation/Forgery Detector "ManTraNet" in PyTorch

Languages: Jupyter Notebook 98.28%, Python 1.72%
Topics: forgery-detection, forensics, falsifications, neural-network, mantranet, feature-extractor

mantranet-pytorch's Introduction

Who has never come across a forged picture on the web? No one! Every day we are confronted with fake pictures touched up in Photoshop, but they are not always easy to detect.

In this repo, you will find an implementation of ManTraNet, a manipulation-tracing network for the detection and localization of image forgeries with anomalous features. With this algorithm, you can find out whether an image has been falsified and even identify the suspicious regions. A small example is displayed below.

It is a faithful replica of the official implementation, written with PyTorch instead of TensorFlow/Keras. To learn more about this network, I suggest reading the paper that describes it here.

Alongside the ManTraNet code, there is also a file containing the pre-trained weights obtained by the authors, compatible with this PyTorch version.

There is a slight discrepancy between the architecture depicted in the paper and the one actually implemented and shared on the official repo. The architecture implemented here (the real one) is shown below.

Please note that the rest of the README is largely inspired by the original repo.

N.B.: You may also be interested in this model for your own forensics work!


What is ManTraNet?

ManTraNet is an end-to-end image forgery detection and localization solution: it takes a test image as input and predicts a pixel-level forgery likelihood map as output. Compared to existing methods, ManTraNet has the following advantages:

  • Simplicity: ManTraNet needs no extra pre- or post-processing.
  • Speed: ManTraNet puts all computations in a single network and accepts images of arbitrary size.
  • Robustness: ManTraNet relies on no working assumption other than the local manipulation assumption, i.e. that some region of a test image is modified differently from the rest.

Technically speaking, ManTraNet is composed of two sub-networks as shown below:

  • The Image Manipulation Trace Feature Extractor: a feature-extraction network for the image-manipulation classification task. It is sensitive to different manipulation types and encodes the manipulation present in a patch into a fixed-dimension feature vector.

  • The Local Anomaly Detection Network: a network designed following the intuition that the extracted features must be inspected at increasingly local scales in order to detect many kinds of forgeries efficiently. The two sub-networks compose as in the sketch below.
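
Functionally, ManTraNet is just the composition of these two sub-networks: the forward pass visible in the ONNX traceback further down is return self.AnomalyDetector(self.IMTFE(x)). A minimal sketch of that two-stage design, with the module names taken from mantranet.py and all internal layers omitted, is:

import torch
import torch.nn as nn

class MantraNetSketch(nn.Module):
    """Simplified view of the two-stage design (not the repo's actual class)."""

    def __init__(self, feature_extractor: nn.Module, anomaly_detector: nn.Module):
        super().__init__()
        self.IMTFE = feature_extractor           # Image Manipulation Trace Feature Extractor
        self.AnomalyDetector = anomaly_detector  # Local Anomaly Detection Network

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (N, 3, H, W) image -> per-pixel manipulation-trace features -> forgery likelihood map
        features = self.IMTFE(x)
        return self.AnomalyDetector(features)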

Where do the pre-trained weights come from?

  • The authors first pretrained the Image Manipulation Trace Feature Extractor on a homemade database containing 385 types of forgeries. Unfortunately, this database is not publicly shared. They then trained the Anomaly Detector on four types of synthetic data: copy-move, splicing, removal, and enhancement.

ManTraNet results from the composition of these two networks.

The pre-trained weights available in this repo are the result of these two training stages carried out by the authors.

Remark: To train ManTraNet, you need your own (relevant) datasets.

Dependency

  • PyTorch >= 1.8.1

Demo

One may simply download the repo and play with the provided Jupyter notebook.
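
For reference, inference on a single image might look roughly like the sketch below. It reuses the pre_trained_model and device helpers imported from mantranet.py in the ONNX issue further down; the preprocessing convention (RGB channel order, 0-255 float range) is an assumption based on the random test tensor used there.

import numpy as np
import torch
from PIL import Image

from mantranet import device, pre_trained_model  # helpers assumed from mantranet.py

model = pre_trained_model()
model.to(device)
model.eval()

# Load an image as a (1, 3, H, W) float tensor in the 0-255 range (assumed input convention).
img = np.array(Image.open("suspicious.jpg").convert("RGB"), dtype=np.float32)
x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).to(device)

with torch.no_grad():
    forgery_map = model(x)  # pixel-level forgery likelihood map with the input's spatial size

print(forgery_map.shape)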

N.B. :

  • Since some common functions are implemented differently in TensorFlow/Keras and PyTorch, certain operations (such as batch normalization and hardsigmoid) are re-implemented here so that they match the original TensorFlow version exactly.

  • ManTraNet is difficult to train without a GPU or multiple CPUs. Even in "eval" mode, detecting forgeries in a single image may take several minutes on CPU alone, depending on the size of the input image.

  • The repo also contains a slightly different version of ManTraNet that uses a ConvGRU instead of a ConvLSTM. It speeds up training somewhat without hurting performance.

Citation:

@InProceedings{Wu_2019_CVPR,
author = {Wu, Yue and AbdAlmageed, Wael and Natarajan, Premkumar},
title = {ManTra-Net: Manipulation Tracing Network for Detection and Localization of Image Forgeries With Anomalous Features},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}

mantranet-pytorch's People

Contributors

ronyabecidan

mantranet-pytorch's Issues

about training

Hi,
thanks for your work, it's great for researchers.

I have some questions about the repo. The test script runs fine, but the training script has some problems: no matter what dataset I use, the loss drops to 0.693 and then stays there. Did you manage to run this training script successfully?
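
(One observation, assuming the training loss is a pixel-wise binary cross-entropy: 0.693 ≈ ln 2 is exactly the BCE of a model that outputs a constant 0.5 everywhere, so a plateau at that value usually means the network has collapsed to a constant prediction.)

import math

# BCE of a constant p = 0.5 prediction is -(y*ln(0.5) + (1-y)*ln(0.5)) = ln(2),
# independent of the label y, which matches the plateau reported above.
print(math.log(2))  # 0.6931...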

Exporting model to ONNX format fails

Exporting the model to ONNX format following the process described in PyTorch's documentation fails.

The returned error message is attached below:

/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/cuda/__init__.py:82: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW (Triggered internally at  ../c10/cuda/CUDAFunctions.cpp:112.)
  return torch._C._cuda_getDeviceCount() > 0
/home/dkarageo/development/ManTraNet-pytorch/MantraNet/mantranet.py:432: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  x_idx = np.arange(-left, w + right)
/home/dkarageo/development/ManTraNet-pytorch/MantraNet/mantranet.py:432: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  x_idx = np.arange(-left, w + right)
/home/dkarageo/development/ManTraNet-pytorch/MantraNet/mantranet.py:433: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  y_idx = np.arange(-top, h + bottom)
/home/dkarageo/development/ManTraNet-pytorch/MantraNet/mantranet.py:433: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  y_idx = np.arange(-top, h + bottom)
Traceback (most recent call last):
  File "/home/dkarageo/development/ManTraNet-pytorch/MantraNet/export_onnx_mantranet.py", line 37, in <module>
    torch.onnx.export(model,
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/onnx/__init__.py", line 305, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 118, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 719, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 499, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 440, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/onnx/utils.py", line 391, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/jit/_trace.py", line 1166, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/jit/_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1098, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/dkarageo/development/ManTraNet-pytorch/MantraNet/mantranet.py", line 653, in forward
    return self.AnomalyDetector(self.IMTFE(x))
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/dkarageo/development/ManTraNet-pytorch/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1098, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/dkarageo/development/ManTraNet-pytorch/MantraNet/mantranet.py", line 524, in forward
    x = symm_pad(x, (2, 2, 2, 2))
  File "/home/dkarageo/development/ManTraNet-pytorch/MantraNet/mantranet.py", line 435, in symm_pad
    x_pad = reflect(x_idx, -0.5, w - 0.5)
  File "/home/dkarageo/development/ManTraNet-pytorch/MantraNet/mantranet.py", line 425, in reflect
    out = np.where(normed_mod >= rng, double_rng - normed_mod, normed_mod) + minx
TypeError: '>=' not supported between instances of 'numpy.ndarray' and 'Tensor'

Code used for exporting to ONNX:

from pathlib import Path

import onnx
import torch.onnx

from mantranet import device, pre_trained_model


EXPORT_PATH: Path = Path("models/mantranet_v4.onnx")
BATCH_SIZE: int = 1


# Load pretrained model to default device.
model = pre_trained_model()
model.to(device)
model.eval()

# Initialize a random utility tensor for tracing and testing the model.
x: torch.Tensor = torch.rand(BATCH_SIZE, 3, 600, 600, requires_grad=True) * 255

# Export model
EXPORT_PATH.parent.mkdir(parents=True, exist_ok=True)
torch.onnx.export(model,
                  x,
                  EXPORT_PATH,
                  export_params=True,
                  opset_version=15,
                  do_constant_folding=True,
                  input_names=["input"],
                  output_names=["output"],
                  dynamic_axes={"input": {0: "batch_size",
                                          2: "height",
                                          3: "width"},
                                "output": {0: "batch_size",
                                           2: "height",
                                           3: "width"}})

It would be really useful to support the increasingly popular ONNX Runtime for inference.
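
The failure comes from reflect mixing NumPy arrays with traced tensors (w - 0.5 becomes a Tensor during export). One possible workaround, sketched below from the function names and calls visible in the traceback rather than taken from the repo's actual code, is to express symm_pad/reflect with torch operations only; note that casting the spatial sizes with int() bakes them into the traced graph, so the exported model would effectively have fixed height and width.

import torch

def reflect(x: torch.Tensor, minx: float, maxx: float) -> torch.Tensor:
    # Reflect out-of-range indices back into [minx, maxx] (symmetric padding), torch ops only.
    rng = maxx - minx
    double_rng = 2 * rng
    mod = torch.fmod(x - minx, double_rng)
    normed_mod = torch.where(mod < 0, mod + double_rng, mod)
    return torch.where(normed_mod >= rng, double_rng - normed_mod, normed_mod) + minx

def symm_pad(im: torch.Tensor, padding) -> torch.Tensor:
    # Symmetric padding of an (N, C, H, W) tensor via index gathering.
    left, right, top, bottom = padding
    h, w = int(im.shape[-2]), int(im.shape[-1])  # baked in as constants when traced
    x_idx = torch.arange(-left, w + right, dtype=torch.float32, device=im.device)
    y_idx = torch.arange(-top, h + bottom, dtype=torch.float32, device=im.device)
    x_pad = reflect(x_idx, -0.5, w - 0.5).round().long()
    y_pad = reflect(y_idx, -0.5, h - 0.5).round().long()
    return im[..., y_pad[:, None], x_pad[None, :]]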

The model's performance seems terrible on my custom dataset

Hi!

I really appreciate your solid and excellent work! Recently, I tried to use the ManTraNet model to detect tampering in my own dataset. My approach is to freeze the weights of the IMTFE and fine-tune the weights of the anomaly detection network, but the performance on both the train and test sets is quite poor: the F1 scores are below 0.1 and the loss does not go down properly either. Do you have any suggestions? Could I replace the detection network with another image segmentation network?

the network output will become nan

(screenshots attached)
I am very grateful to the author for the PyTorch version of the code. During training, I found that the network output becomes NaN. Has the author encountered this problem?

Details about model weights

The authors published the weights as an .h5 file alongside the original paper. How did you convert the .h5 file to .pt, i.e. how did you convert the weights trained with the Keras code to PyTorch?
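
For reference, the usual pattern for such a conversion is sketched below; h5_to_state_dict and name_map are hypothetical names, and the layer-name mapping has to be built by hand by inspecting both models layer by layer. This is only an illustration of the general idea, not the script actually used for this repo.

import h5py
import torch

def h5_to_state_dict(h5_path: str, name_map: dict) -> dict:
    # Copy weights from a Keras HDF5 file into a PyTorch state_dict,
    # given a mapping from Keras dataset paths to PyTorch parameter names.
    state_dict = {}
    with h5py.File(h5_path, "r") as f:
        for keras_name, torch_name in name_map.items():
            w = torch.from_numpy(f[keras_name][()])
            if w.ndim == 4:    # Keras conv kernels are HWIO; PyTorch expects OIHW
                w = w.permute(3, 2, 0, 1).contiguous()
            elif w.ndim == 2:  # Keras Dense kernels are (in, out); PyTorch Linear is (out, in)
                w = w.t().contiguous()
            state_dict[torch_name] = w
    return state_dict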

Some questions

Hi! I would like to ask whether the parameters of the pre-trained model you provide were converted from the original authors' TensorFlow version. Could you also provide the relevant training code? Thank you very much!
