mala-lab / inctrl
Official implementation of the CVPR'24 paper 'Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts'.
Hello, I downloaded the provided model and few-shot normal samples as described in the README, and tested the candle data from the VisA test set with 2 shots, with the test set split by "1cls.csv". The result I get is AUC-ROC: 0.8773, AUC-PR: 0.8693, which is clearly lower than the results reported in the paper (AUROC: 0.916, AUPRC: 0.920).
I cannot find where the problem is, so could you give me some suggestions on what to check?
How can I visualize the segmentation result, either as a heatmap or as a mask output?
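As a starting point while waiting for an official script, here is a minimal sketch of such a visualization, assuming you already have a 2D per-pixel anomaly-score map (`score_map`, values in [0, 1]) alongside the input image; the function name and threshold are hypothetical, not part of the repo.

```python
# Hypothetical sketch: overlay a 2D anomaly-score map on the input image.
# `image`, `score_map`, and `thresh` are assumptions -- substitute the
# model's actual per-pixel outputs for your setup.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt


def visualize_anomaly(image: np.ndarray, score_map: np.ndarray,
                      thresh: float = 0.5, out_path: str = "anomaly_vis.png"):
    """image: HxWx3 array; score_map: HxW anomaly scores in [0, 1]."""
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    axes[0].imshow(image)
    axes[0].set_title("input")
    axes[1].imshow(image)
    axes[1].imshow(score_map, cmap="jet", alpha=0.5)   # semi-transparent heatmap
    axes[1].set_title("heatmap")
    axes[2].imshow(score_map >= thresh, cmap="gray")   # binary mask at threshold
    axes[2].set_title("mask")
    for ax in axes:
        ax.axis("off")
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)
```

Tuning `thresh` (or deriving it from a validation split) controls how aggressive the binary mask is; the heatmap panel is usually more informative for qualitative inspection.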
Hello,
I am currently working on a project where I need to train and test a model using my custom dataset, which is structured similarly to the MVTec dataset format. I've been trying to adapt the workflow and methodologies used for the MVTec dataset to fit my dataset's requirements but have encountered some challenges, particularly in generating the custom_dataset.pt file.
Could anyone provide some insights or a step-by-step guide on how to:
Adapt the existing training and testing pipeline for a custom dataset that aligns with the MVTec format? Are there specific parameters or configurations that need to be adjusted in the code to accommodate the differences in the dataset?
Generate the few_shot.pt file for my dataset. What is the process or script used to create this file from the dataset? Are there specific requirements for the dataset structure or format to successfully generate this file?
For context, my dataset contains images and annotations that mirror the structure used in the MVTec dataset, including similar categories and anomaly types. My goal is to leverage the existing frameworks and tools used for MVTec to achieve comparable performance on my dataset.
I appreciate any advice, scripts, or documentation that could help me navigate these challenges. Thank you in advance for your time and assistance.
Best regards,
Thank you for doing a great job. I have a question: I want to use my own few-shot normal samples to verify defect detection. The --few_shot_dir parameter of python test.py expects a .pt file, but I don't know how to convert normal samples into a .pt file:
few_shot_path = os.path.join(cfg.few_shot_dir, cfg.category+".pt")
normal_list = torch.load(few_shot_path)
Could you please help? Thanks.
I am concerned that the few normal images you provided may overlap with the test set I have personally split.
Where is the loss defined? I only see the import:
from binary_focal_loss import BinaryFocalLoss
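For reference, the imported class presumably implements the standard binary focal loss of Lin et al. (2017). Below is a minimal self-contained sketch of that formulation; it is not necessarily the exact `BinaryFocalLoss` the repo imports, and the default `gamma`/`alpha` values are assumptions.

```python
# Minimal sketch of a binary focal loss (Lin et al., 2017); an assumption
# about what the imported `BinaryFocalLoss` computes, not the repo's code.
import torch
import torch.nn.functional as F


class BinaryFocalLoss(torch.nn.Module):
    def __init__(self, gamma: float = 2.0, alpha: float = 0.25):
        super().__init__()
        self.gamma = gamma  # focusing parameter: down-weights easy examples
        self.alpha = alpha  # class-balance weight for the positive class

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Per-element BCE; p_t = exp(-bce) recovers p for y=1 and 1-p for y=0.
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p_t = torch.exp(-bce)
        alpha_t = self.alpha * targets + (1 - self.alpha) * (1 - targets)
        return (alpha_t * (1 - p_t) ** self.gamma * bce).mean()
```

With `gamma=0` and `alpha=0.5` this reduces to a rescaled binary cross-entropy, which is a quick sanity check when experimenting with the parameters.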
Thank you for your work. Can you provide the code for visualizing the detections?
Hello, your paper has inspired me a lot, and I would like to reproduce the code. When running python main.py --normal_json_path $normal-json-files-for-training --outlier_json_path $abnormal-json-files-for-training --val_normal_json_path $normal-json-files-for-testing --val_outlier_json_path $abnormal-json-files-for-testing during training, does each category in each dataset require its own JSON file?
I understand that the available model is pre-trained on MVTec. Could you make your model pre-trained on the full dataset of ViSA available?
Best Regards,
Hello, Step 2 says to download the few-shot normal samples for inference from [Google Drive], but where can I find the link? Thanks.
Can you provide the download address for ViT-B-16-plus-240 (laion400m_e32-699c4b84.pt)?
Hi, you can use **_torch.save()_** to generate a .pt file for your own few-shot samples.
Originally posted by @Diana1026 in #7 (comment)
Thank you for this work. I want to test my own model, but I don't know exactly where I should use torch.save() to generate my own .pt file, and I don't know the structure of the .pt file. Could you please give a more specific explanation? Thanks a lot.
Hello, I validated the 8-shot performance using the provided pre-trained model and few-shot samples, and the results were similar to 2-shot, not as high as reported in the paper. I did the following: (1) pointed TEST CHECKPOINT-FILE-PATH to checkpoints/8/checkpoint.pyth; (2) changed fs_samples/visa/2/ in the provided test command to fs_samples/visa/8/. The results are as follows:
Did I miss any steps?
Hello, could you please publish the training and testing process in detail, as well as the organization of the files and the associated JSON files? It is currently quite difficult to reproduce the code.