rd4ad's People

Contributors

hq-deng


rd4ad's Issues

Code for cifar10

Hi, could you provide the code for the CIFAR-10 dataset to reproduce the reported 86.5 AUROC? That would be very helpful, thanks!

Normal samples are often misclassified

Although the framework can detect most anomalous regions, normal samples are also often misclassified as anomalous. Is there a good way to avoid this?

about OCE

Hi, sir.
Thanks for your great work. I am a bit confused about OCBE:

MFF aligns multi-scale features from the teacher E, and OCE condenses the obtained rich features into a compact bottleneck code.
But the MFF part alone seems able to do all of the above, so why is the OCE module still needed? Table 5 shows an ablation study on Pre, Pre+OCE, and Pre+OCE+MFF.
Did you also run the ablation on Pre+MFF?

The OCBE module condenses the multi-scale patterns into an extremely low-dimensional space for downstream normal representation reconstruction, and then the abnormal representations generated by the teacher model are likely to be abandoned by OCBE. Why?

I am looking forward to your reply!
Thanks.

Can this model perform classification?

Hi, first of all, I was deeply impressed by your model, and I have a question.

Is auroc_sp the image-level classification metric?

Can this model perform classification?

Thank you for your wonderful work and I look forward to your reply.

about seed

Hi, sir.

Could you tell me how you set the random seed? Did you just set it to 111, or did you average the results over several seeds?

Thanks.

Could you tell me how to solve this problem?

Traceback (most recent call last):
File "/homec/ssli/RD4AD/main.py", line 120, in <module>
train(i)
File "/homec/ssli/RD4AD/main.py", line 105, in train
auroc_px, auroc_sp, aupro_px = evaluation(encoder, bn, decoder, test_dataloader, device)
File "/homec/ssli/RD4AD/test.py", line 88, in evaluation
aupro_list.append(compute_pro(gt.squeeze(0).cpu().numpy().astype(int),
File "/homec/ssli/RD4AD/test.py", line 378, in compute_pro
df = df.append({"pro": mean(pros), "fpr": fpr, "threshold": th}, ignore_index=True)
File "/homec/ssli/lib/python3.11/site-packages/pandas/core/generic.py", line 6293, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'append'. Did you mean: '_append'?
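
pandas removed DataFrame.append in version 2.0. One possible workaround (a sketch, not a patch from the authors; the names mirror the traceback, but the actual loop in test.py may differ) is to collect the rows in a list and build the DataFrame once:

import pandas as pd
from statistics import mean

# minimal, self-contained sketch with placeholder values
rows = []
for th in (0.1, 0.2, 0.3):                       # placeholder thresholds
    pros, fpr = [0.9, 0.8], 0.05                 # placeholder PRO values and FPR
    rows.append({"pro": mean(pros), "fpr": fpr, "threshold": th})
df = pd.DataFrame(rows, columns=["pro", "fpr", "threshold"])
print(df)

Alternatives are pinning pandas below 2.0, or replacing each df.append(row, ignore_index=True) call with pd.concat([df, pd.DataFrame([row])], ignore_index=True).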

RUN ERROR!

Traceback (most recent call last):
File "/media/pmj/e/code/anomaly_code/RD4AD-main/main.py", line 121, in <module>
train(i)
File "/media/pmj/e/code/anomaly_code/RD4AD-main/main.py", line 105, in train
auroc_px, auroc_sp, aupro_px = evaluation(encoder, bn, decoder, test_dataloader, device)
File "/media/pmj/e/code/anomaly_code/RD4AD-main/test.py", line 89, in evaluation
anomaly_map[np.newaxis,:,:]))
File "/media/pmj/e/code/anomaly_code/RD4AD-main/test.py", line 355, in compute_pro
df = pd.DataFrame([], columns=["pro", "fpr", "threshold"])
File "/home/pmj/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py", line 490, in __init__
mgr = init_dict({}, index, columns, dtype=dtype)
File "/home/pmj/anaconda3/lib/python3.7/site-packages/pandas/core/internals/construction.py", line 239, in init_dict
val = construct_1d_arraylike_from_scalar(np.nan, len(index), nan_dtype)
File "/home/pmj/anaconda3/lib/python3.7/site-packages/pandas/core/dtypes/cast.py", line 1449, in construct_1d_arraylike_from_scalar
dtype = dtype.dtype
AttributeError: type object 'object' has no attribute 'dtype'
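
This error typically comes from an old pandas build that is incompatible with the installed NumPy; upgrading pandas (and NumPy) to matching recent versions is the most reliable fix. As a possible workaround, and only an assumption on my part, giving the empty DataFrame an explicit dtype may avoid the broken code path:

import pandas as pd

# hedged workaround: an explicit dtype keeps pandas off the buggy object-dtype path
df = pd.DataFrame([], columns=["pro", "fpr", "threshold"], dtype=float)
print(df.dtypes)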

Some minor corrections in the test.py script

In line 143 there is one full stop too many. Change the line from:

test_path = '../mvtec/' + _class_

to:

test_path = './mvtec/' + _class_

In line 144, the checkpoint path should match the one used by the provided main.py:

ckp_path = './checkpoints/' + 'wres50_'+_class_+'.pth'

Finally, in line 356 the dtype should be changed from np.bool to bool (np.bool was removed in newer NumPy versions):

binary_amaps = np.zeros_like(amaps, dtype=bool)

Thanks for your work. I hope this helps people who are searching for the source of these errors when running the program.

loss function problem

Hi there!

The paper states that "we calculate their vector-wise cosine similarity loss along the channel axis and obtain a 2-D anomaly map M(WxK)", but the code uses
loss += torch.mean(1-cos_loss(a[item].view(a[item].shape[0],-1), b[item].view(b[item].shape[0],-1)))
so the loss seems to be calculated after each feature map is flattened into a single vector.

Shouldn't the loss function follow Formula 1 in the paper and operate on the 2-D anomaly map Mk, like this?
sim_map = 1 - F.cosine_similarity(a[item], b[item])
loss += (sim_map.view(sim_map.shape[0],-1).mean(-1)).mean()

I think calculating the loss value in this way is consistent with the paper!

However, training the model in this way leads to a decrease in accuracy. For example, on the carpet category we only achieved an image-level AUROC of 92.3%. To rule out slow convergence, we set the number of epochs to 1000 but still obtained the same result. (The code has only this one modification, in the loss function.)

So, the formula provided in the paper cannot achieve the performance reported in the paper?
I'm very confused about why this happens.

Looking forward to your reply!
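
For readers comparing the two variants, here is a minimal self-contained sketch (plain PyTorch with random tensors standing in for the teacher/student features; not the authors' code) of the flattened cosine loss used in the repository versus the per-position cosine loss of Formula 1:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
a = [torch.randn(2, 256, 64, 64)]   # stand-in for teacher (encoder) features
b = [torch.randn(2, 256, 64, 64)]   # stand-in for student (decoder) features
cos_loss = torch.nn.CosineSimilarity()

# Variant 1 (repository): flatten each feature map to one vector per sample
loss_flat = 0
for item in range(len(a)):
    loss_flat += torch.mean(1 - cos_loss(a[item].view(a[item].shape[0], -1),
                                         b[item].view(b[item].shape[0], -1)))

# Variant 2 (Formula 1): per-position cosine similarity giving a 2-D map, then averaged
loss_map = 0
for item in range(len(a)):
    sim_map = 1 - F.cosine_similarity(a[item], b[item])        # shape (B, H, W)
    loss_map += sim_map.view(sim_map.shape[0], -1).mean(-1).mean()

print(loss_flat.item(), loss_map.item())

The two variants weight spatial positions differently whenever feature magnitudes vary across locations, which may account for part of the gap, though that is only a guess.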

How to use "def vis_nd(name,class) "

How do I output the experimental values and the final heatmap? I know I can use "def vis_nd(name,class)" to output a heatmap, but how exactly do I use it?

cifar

Regarding the CIFAR and MNIST datasets: since the images are only 32×32, the 7×7 convolution in the ResNet stem should not apply as-is. Did the authors make a change?
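
For reference, a common CIFAR-style adaptation (a community convention, not necessarily what the authors did; the MVTec experiments use wide_resnet50_2, resnet18 here is only for illustration) replaces the 7×7 stride-2 stem with a 3×3 stride-1 convolution and drops the max-pooling layer:

import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)
# replace the ImageNet stem (7x7, stride 2) with a CIFAR-friendly one (3x3, stride 1)
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
# remove the 3x3 max-pool so 32x32 inputs are not downsampled too aggressively
model.maxpool = nn.Identity()

x = torch.randn(1, 3, 32, 32)
print(model(x).shape)   # torch.Size([1, 10])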

The relationship?

I would like to ask why the multi-scale anomaly score map can be compared directly against the ground truth to compute AUROC. What is the relationship between the anomalies represented by the feature-similarity output of the network layers and the anomalies shown in the annotations?
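
As a rough illustration of the mechanics only (not the authors' reasoning): the anomaly map is upsampled to image resolution and each pixel's score is treated as a continuous prediction, so pixel-level AUROC can be computed directly against the binary ground-truth mask, for example:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
anomaly_map = rng.random((256, 256))        # stand-in for the upsampled anomaly score map
gt_mask = np.zeros((256, 256), dtype=int)   # stand-in for the binary ground-truth mask
gt_mask[100:150, 100:150] = 1               # a fake defect region

# every pixel is one sample: score = anomaly_map value, label = mask value
auroc_px = roc_auc_score(gt_mask.ravel(), anomaly_map.ravel())
print(auroc_px)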

Change the Encoder-Decoder Architecture

Hi there!

First of all, I want to express my appreciation for your remarkable work!
I am currently working on implementing your AD approach with a smaller encoder-decoder network that can run on an edge device (something like mobilenet_v3_large/small).
Would you happen to have any advice or suggestions for me as to how I can go about achieving this?

Thank you in advance!
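
One possible starting point (my own sketch, not anything from the RD4AD repo): use torchvision's feature-extraction utilities to pull multi-scale features out of a MobileNetV3 backbone, then design a mirrored lightweight decoder and bottleneck around the resulting channel and stride sizes. The node names below are only illustrative; check get_graph_node_names for your torchvision version:

import torch
from torchvision.models import mobilenet_v3_large
from torchvision.models.feature_extraction import (create_feature_extractor,
                                                   get_graph_node_names)

backbone = mobilenet_v3_large(weights="IMAGENET1K_V1")
# print(get_graph_node_names(backbone)[1])  # inspect the available node names first
nodes = {"features.3": "feat1", "features.6": "feat2", "features.12": "feat3"}
teacher = create_feature_extractor(backbone, return_nodes=nodes)

x = torch.randn(1, 3, 256, 256)
feats = teacher(x)
for name, f in feats.items():
    print(name, f.shape)   # check channels/strides before designing the decoder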

memory complexity and reasoning time

Greetings, I was delighted to read the code and paper you authored. The paper includes an analysis of the model's memory complexity and inference (reasoning) time. Could you kindly explain the methodology used to obtain these measurements?
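
One simple way to get comparable numbers (a sketch of my own, not the authors' protocol; it times only a single backbone, whereas the full RD4AD pipeline also includes the bottleneck and decoder) is to count parameters and time warmed-up forward passes:

import time
import torch
from torchvision.models import wide_resnet50_2

model = wide_resnet50_2(weights=None).eval()
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e6:.1f} M")

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
x = torch.randn(1, 3, 256, 256, device=device)

with torch.no_grad():
    for _ in range(10):                  # warm-up iterations
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    t1 = time.perf_counter()
print(f"inference time: {(t1 - t0) / 100 * 1000:.2f} ms / image")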

AssertionError: set(masks.flatten()) must be {0, 1}

hello

assert set(masks.flatten()) == {0, 1}, "set(masks.flatten()) must be {0, 1}"
AssertionError: set(masks.flatten()) must be {0, 1}

This error occurs during testing. How should we solve it?
Our mask image is already a binary map, so this problem should not occur.
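
A common cause (an assumption, not a confirmed diagnosis) is that the ground-truth masks contain values other than 0 and 1 after loading or resizing, e.g. 0-255 PNG values or bilinear-interpolation artifacts. Binarizing the mask before it reaches compute_pro avoids the assertion:

import numpy as np

# gt stands in for a ground-truth mask loaded for one test image
gt = np.array([[0, 128, 255], [0, 3, 255]])   # example mask with stray values
gt = (gt > 0.5).astype(int)                   # force a strict {0, 1} binary map
assert set(gt.flatten()) <= {0, 1}
print(gt)

If the masks are resized, using nearest-neighbor interpolation for them (instead of bilinear) also keeps the values binary.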

How to do the experiments of 4.2. One-Class Novelty Detection

Thanks for your great work. I have some questions:
1. The MVTecDataset has about 17 classes of data, and the code trains and saves a separate model for each class. Did you try training a single model that works on all classes?
2. How do you train one-class novelty detection on CIFAR-10 and MNIST? Do you just train on one class and use the other classes as anomalies at test time? Could you share the code?
3. How do you predict on a single picture and judge whether it is abnormal (one possible scoring approach is sketched after this list)?
inputs = encoder(img)
outputs = decoder(bn(inputs))#bn(inputs))
loss = loss_fucntion(inputs, outputs)
Do you judge whether it is abnormal from the loss, and how is the threshold chosen?
4. The official code does not save the encoder; the encoder just uses the pretrained weights and is not updated, right?
torch.save({'bn': bn.state_dict(), 'decoder': decoder.state_dict()}, ckp_path)
Many thanks.
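
On question 3, here is a rough sketch of how one image could be scored (my own illustration, with random tensors standing in for the encoder/decoder feature lists; the threshold would have to be chosen from held-out normal images, e.g. a high percentile of their scores):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
# stand-ins for multi-scale teacher (encoder) and student (decoder) features
inputs  = [torch.randn(1, 256, 64, 64), torch.randn(1, 512, 32, 32)]
outputs = [torch.randn(1, 256, 64, 64), torch.randn(1, 512, 32, 32)]

img_size = 256
anomaly_map = torch.zeros(1, 1, img_size, img_size)
for ft, fs in zip(inputs, outputs):
    amap = 1 - F.cosine_similarity(fs, ft)                    # (B, H, W)
    amap = F.interpolate(amap.unsqueeze(1), size=img_size,
                         mode="bilinear", align_corners=True)
    anomaly_map += amap                                       # accumulate over scales

image_score = anomaly_map.max().item()    # image-level anomaly score
threshold = 1.5                           # placeholder; pick from normal-data scores
print("abnormal" if image_score > threshold else "normal", image_score)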

The problem about the evaluation

Thank you for sharing!

Your clear and compact presentation inspired me a lot, but I have some concerns about the validity of the evaluation. As we know, RD4AD was evaluated on the widely used MVTecAD dataset and demonstrated impressive performance. Since MVTecAD contains only a training and a test set, the paper follows that dataset setup and does not mention a validation set. However, I am still confused about this:

  1. Because there is no validation set, the final results have been generated with a model known to be performing well on the test set. Is it reasonable to generate the results in this way? Is it somehow trapped in a circular analysis?
  2. The state-of-the-art papers in the field of image anomaly detection use the evaluation setup without a validation set, whether using publicly available datasets (e.g. MVTecAD) or self-built datasets. In your opinion, how should this phenomenon be explained? Does this mean that a fixed paradigm has been formed in the field of image anomaly detection?

Thank you very much for your kind reading!

License

Thanks for releasing the official implementation! I am working on integrating the Reverse Distillation model into Anomalib. Could you provide a license for your code?

train result

Why are the training results almost the same whether training for 10, 100, or 200 epochs?

How to do your ablation study

Dear author,
I want to do an ablation study of your network, but I have some questions about it.
[image: ablation_table]
If I want to use only the single-layer feature M1, how should I set up the ablation study: like the first picture below, or the latter one?
[image: ablation1]
[image: ablation2]

How to calculate the predicted mask?

After calculating the heat map, how do I convert the map into a mask (foreground is 1, background is 0)?

anomaly_map, _ = cal_anomaly_map(inputs, outputs, img.shape[-1], amap_mode='mul')
anomaly_map = gaussian_filter(anomaly_map, sigma=4)

What needs to be done after the Gaussian filtering to obtain the mask?
Any suggestions would be appreciated!
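
One simple option (a sketch, not the authors' recommendation) is to normalize the filtered anomaly map and threshold it, either with a fixed value tuned on validation images or with an automatic method such as Otsu's:

import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
anomaly_map = rng.random((256, 256))       # stand-in for the Gaussian-filtered anomaly map

# normalize to [0, 1] so a threshold is comparable across images
amap = (anomaly_map - anomaly_map.min()) / (anomaly_map.max() - anomaly_map.min() + 1e-8)

mask_fixed = (amap > 0.5).astype(np.uint8)                   # fixed, validation-tuned threshold
mask_otsu = (amap > threshold_otsu(amap)).astype(np.uint8)   # automatic Otsu threshold
print(mask_fixed.sum(), mask_otsu.sum())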
