
mahmoodlab / uni


Towards a general-purpose foundation model for computational pathology - Nature Medicine

License: Other

Jupyter Notebook 97.76% Python 2.24%
foundation foundation-model histopathology mahmoodlab pathology uni pathology-foundation-model nature-medicine mass-100k pathology-dinov2

uni's People

Contributors

faisalml, fedshyvana, georgebatch, richarizardd


uni's Issues

CLAM config for HunCRC slide data

Hello authors,

Thank you for your amazing work.
Could you publish the CLAM config you used for extracting patches from the HunCRC data?

I can't get patch extraction to work, even with very low thresholds:

{'seg_params': {'seg_level': -1, 'sthresh': 1, 'mthresh': 1, 'close': 4,
                'use_otsu': True, 'keep_ids': 'none', 'exclude_ids': 'none'},
 'filter_params': {'a_t': 1, 'a_h': 1, 'max_n_holes': 10},
 'patch_params': {'use_padding': True, 'contour_fn': 'four_pt'},
 'vis_params': {'vis_level': -1, 'line_thickness': 50}}

Thank you.

ROI feature extraction

Hello, thank you very much for your contribution. In your paper you mention using the UNI pre-trained encoder to extract features from histopathology ROIs, and you provide sample code. What I want to know is whether the ROIs need to be segmented in advance. If the whole WSI is used as input and the image contains multiple ROIs, could you provide a code example? I would be grateful.
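
For reference, a minimal sketch (an assumption about the workflow, not the authors' code) of extracting one UNI embedding per pre-located ROI; the slide path and ROI boxes are hypothetical, and the model/transform setup follows the repo's example:

# Sketch: one UNI embedding per pre-located ROI of a WSI.
# Assumptions: OpenSlide can read the slide; `rois` holds hypothetical
# (x, y, w, h) boxes at level 0.
import openslide
import timm
import torch
from torchvision import transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = timm.create_model("hf-hub:MahmoodLab/UNI", pretrained=True,
                          init_values=1e-5, dynamic_img_size=True).to(device).eval()

transform = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

slide = openslide.OpenSlide("slide.svs")                    # hypothetical path
rois = [(1000, 1000, 512, 512), (4000, 2000, 512, 512)]     # hypothetical boxes

features = []
with torch.inference_mode():
    for x, y, w, h in rois:
        region = slide.read_region((x, y), 0, (w, h)).convert("RGB")
        batch = transform(region).unsqueeze(0).to(device)
        features.append(model(batch).cpu())                 # one embedding per ROI
features = torch.cat(features)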

Does 20x magnification correspond to 0.5 microns per pixel?

Hello,

Does the 20x magnification correspond to a resolution of 0.5 microns per pixel (mpp)? I am asking because magnification is not standardised; I have encountered slides from different scanners, all labelled 20x, whose resolutions range from 0.23 mpp to 0.55 mpp.
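
For context, a minimal sketch (assuming OpenSlide and a slide that exposes openslide.mpp-x metadata; the path, coordinates, and patch size are hypothetical) of resampling to a fixed 0.5 mpp target instead of trusting the nominal magnification:

# Sketch: resample a patch to a fixed target resolution (0.5 mpp) rather than
# relying on the scanner's nominal "20x" label. Assumes the slide stores
# openslide.mpp-x metadata; the path and coordinates are hypothetical.
import openslide

TARGET_MPP = 0.5
slide = openslide.OpenSlide("slide.svs")                    # hypothetical path
native_mpp = float(slide.properties[openslide.PROPERTY_NAME_MPP_X])

# Read a region sized so that, after rescaling to TARGET_MPP, it is 224x224.
scale = TARGET_MPP / native_mpp
read_size = int(round(224 * scale))
region = slide.read_region((0, 0), 0, (read_size, read_size)).convert("RGB")
patch = region.resize((224, 224))                           # now ~0.5 mpp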

Many thanks,
George

Training epochs

Hello, could you tell me how many epochs you trained for? Thanks.

How was figure 3e generated in the paper?

[screenshot]

I used something similar to this to extract the attention scores from the penultimate layer, as explained in the caption of figure 3e. However, the attention maps I'm getting are a lot less "intuitive" than the ones shown in this figure.

Was this figure generated with a UNI model fine-tuned on an ROI-level task, or does it show the attention maps of the SSL model (no fine-tuning)?

Also, are the 448^2, 896^2 and 1344^2 attention maps computed by concatenating the attention map for each non-overlapping 224^2 patch together?
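
For reference, a sketch of the stitching scheme this question describes (an assumption, not necessarily what the authors did). It recovers CLS-to-patch attention from the penultimate block's qkv projection with a forward hook, since timm's fused attention does not return the weights; `model` is the loaded UNI ViT-L and `img` a hypothetical normalized (1, 3, 448, 448) tensor:

# Sketch: stitch per-tile CLS attention maps into one larger map.
import torch

feats = {}
def qkv_hook(module, inp, out):
    feats["qkv"] = out

handle = model.blocks[-2].attn.qkv.register_forward_hook(qkv_hook)

def cls_attention_map(tile):                  # tile: (1, 3, 224, 224)
    with torch.inference_mode():
        model(tile)
    qkv = feats["qkv"]                        # (1, 197, 3 * embed_dim)
    B, N, _ = qkv.shape
    H = model.blocks[-2].attn.num_heads
    qkv = qkv.reshape(B, N, 3, H, -1).permute(2, 0, 3, 1, 4)
    q, k = qkv[0], qkv[1]
    attn = ((q @ k.transpose(-2, -1)) * q.shape[-1] ** -0.5).softmax(dim=-1)
    cls_to_patch = attn[0, :, 0, 1:].mean(0)  # average heads, drop CLS column
    return cls_to_patch.reshape(14, 14)       # 224 / 16 = 14 tokens per side

tiles = img.unfold(2, 224, 224).unfold(3, 224, 224)   # (1, 3, 2, 2, 224, 224)
rows = [torch.cat([cls_attention_map(tiles[:, :, i, j]) for j in range(2)], dim=1)
        for i in range(2)]
big_map = torch.cat(rows, dim=0)              # (28, 28) map for the 448^2 image
handle.remove()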

Training GPUs

Hello, could you tell me how many A100s you used and how many hours you trained for? Thanks very much.

Clarifying augmentations used for student teacher, regarding fine cell detail

Thank you for releasing the UNI weights, as well as the fantastic paper. I am having some trouble understanding some of the augmentations UNI uses. This video explaining DINO (timestamp 20:00) suggests that the goal of giving local and global crops to the student and teacher is that the student must learn a more global representation from what it is offered, so that small structures should not contribute much compared to the global structure. However, in pathology the small structures matter a lot (cell nuclei, etc.). Is this understanding correct? If so, is it necessary to bypass this behaviour to get a good representation?
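
For reference, a sketch of the DINO-style multi-crop recipe under discussion; the crop sizes and scale ranges below follow common DINOv2 defaults and are an assumption, not UNI's confirmed settings:

# Sketch of DINO-style multi-crop augmentation: two global crops (seen by the
# teacher and the student) and several local crops (student only). Scale
# ranges follow common DINOv2 defaults; UNI's exact values are not confirmed.
from torchvision import transforms

global_crop = transforms.RandomResizedCrop(
    224, scale=(0.32, 1.0), interpolation=transforms.InterpolationMode.BICUBIC)
local_crop = transforms.RandomResizedCrop(
    96, scale=(0.05, 0.32), interpolation=transforms.InterpolationMode.BICUBIC)

def multi_crop(img, n_local=8):
    return ([global_crop(img) for _ in range(2)],
            [local_crop(img) for _ in range(n_local)])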

Weights are saved twice after running the walkthrough notebook: `models--MahmoodLab--UNI` and `models--MahmoodLab--uni`

Dear authors,

I ran the walkthrough notebook, and the model was downloaded twice (see screenshot).

  1. I get the symbolic link assets/ckpts/vit_large_patch16_224.dinov2.uni_mass100k/pytorch_model.bin pointing to a file in .cache/huggingface/hub/models--MahmoodLab--UNI/.

  2. The other downloaded model is in .cache/huggingface/hub/models--MahmoodLab--uni/.

I think there are two different places in the code, one using capital letters ("UNI") and one lower-case letters ("uni"). The two caches do not link to each other: each folder weighs 1.2 GB, and together they occupy 2.4 GB.
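
A possible workaround (an assumption, relying on the Hub resolving both casings to the same repository) is to use one casing consistently, so both entry points share a single cache folder:

# Possible workaround (assumption: both entry points accept the same repo id):
# use one casing everywhere so the HF cache resolves to a single folder.
from huggingface_hub import hf_hub_download
import timm

path = hf_hub_download("MahmoodLab/UNI", filename="pytorch_model.bin")
model = timm.create_model("hf-hub:MahmoodLab/UNI",
                          pretrained=True, init_values=1e-5, dynamic_img_size=True)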

Best wishes,
George

[screenshot]

Wrong information about UniToPatho dataset

Hi, thank you for your amazing work.

In your paper you write that UniToPatho includes 9,536 patches of 1,812x1,812 pixels, but it actually contains 8,669 patches of 1,812x1,812 pixels and 867 patches of 15,855x15,855 pixels.
Not a big problem, just letting you know.

Results of the PANDA Competition

Thank you very much for your great pre-trained model.

Regarding the results of the PANDA competition, could you tell us how your submission scores on Kaggle's leaderboard? We can only claim that the PANDA performance is good if it exceeds some of the top solutions from the original competition.

Label for IDH1 mutation prediction

Thank you for this great work!

May I ask where to find the corresponding labels for the IDH1 mutation prediction task (TCGA & EBRAINS)?

Thanks!

Local and global crop scales

Hi, nice work! Is information about the local and global crop scales used during pre-training publicly available?

Access granted but I am unable to access the model's weights

Dear authors,

I have been granted access to the Hugging Face model, but running the code from the GitHub repo with my login token isn't working for some reason. Did something change?

[screenshots]

As you can see above, I have explicitly logged in using my user token and I am still not granted access. This has not happened with other Hugging Face models.

** I have censored my user token for obvious privacy reasons.
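
For reference, the flow being attempted reduces to something like the following (the token string is a placeholder; the repo-id casing is an assumption):

# Minimal sketch of the expected flow once access is granted. The token is a
# placeholder; never hard-code a real token in shared code.
from huggingface_hub import login
import timm

login(token="hf_...")   # placeholder
model = timm.create_model("hf-hub:MahmoodLab/UNI",
                          pretrained=True, init_values=1e-5, dynamic_img_size=True)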

CONCH on arXiv

Thank you for your great work!

I've read your other paper, 'CONCH', on arXiv (https://arxiv.org/pdf/2307.12914.pdf).
May I ask whether this work is an extension of CONCH?
And if not, are you preparing an additional publication for CONCH?

About data download

Hello, I think your work is very meaningful, but I would like to ask which of the datasets used can be downloaded. Could you provide a download address? Thank you.

How to use UNI for segmentation task?

Dear authors,
thanks for your great work!
In your publication you mention that you also used UNI for image segmentation. Could you please provide insights on how to use UNI for segmentation tasks?
Thank you!
Kind regards.
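
One common recipe (an assumption, not necessarily the authors' setup) is to treat the frozen ViT patch tokens as a dense feature map and train a lightweight decoder on top; `model` and `img` follow the repo's 224x224 example setup:

# Sketch: semantic segmentation with a frozen UNI backbone and a small head.
import torch
import torch.nn as nn

class SegHead(nn.Module):
    def __init__(self, dim=1024, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(dim, 256, 1), nn.ReLU(),
            nn.Conv2d(256, n_classes, 1))

    def forward(self, tokens):                # tokens: (B, 197, 1024)
        patch = tokens[:, 1:, :]              # drop the CLS token
        B, N, C = patch.shape
        side = int(N ** 0.5)                  # 14 for 224x224 input, patch 16
        fmap = patch.transpose(1, 2).reshape(B, C, side, side)
        logits = self.head(fmap)              # (B, n_classes, 14, 14)
        return nn.functional.interpolate(     # upsample back to input size
            logits, scale_factor=16, mode="bilinear", align_corners=False)

with torch.no_grad():
    tokens = model.forward_features(img)      # frozen UNI features
logits = SegHead()(tokens)                    # train only this head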

No module named 'faiss'

Dear authors,

thank you for providing the code.
When testing your code with uni_walkthrough.ipynb, an error occurs because the 'faiss' module is missing.

Note: this can be fixed with pip install faiss-cpu h5py ipywidgets.

Best,
Leon

Pretraining on 256x256 and 512x512, recommended inference on 224x224

Dear authors,

In the paper (and on Hugging Face), you mention that you used a dataset "composed of 75,832,905 [256×256] and 24,297,995 [512×512] histology images at 20× resolution". However, in the example code on GitHub and Hugging Face, you suggest using a patch size of 224x224 (224 is in the model name too: assets/ckpts/vit_large_patch16_224.dinov2.uni_mass100k/). Can you please explain why? Can the model accept other patch sizes?
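
For what it's worth, with dynamic_img_size=True timm interpolates the position embeddings, so other input sizes do run; whether they are optimal for the pretrained weights is a separate question. A minimal check:

# Sketch: the same weights accept several input sizes when timm is allowed to
# interpolate position embeddings (dynamic_img_size=True).
import timm
import torch

model = timm.create_model("hf-hub:MahmoodLab/UNI",
                          pretrained=True, init_values=1e-5, dynamic_img_size=True)
model.eval()
with torch.inference_mode():
    for size in (224, 256, 512):
        x = torch.randn(1, 3, size, size)
        print(size, model(x).shape)           # embedding size does not change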

Many thanks,
George

What is the reasoning behind using ImageNet normalization constants?

Dear authors,

Thank you for releasing this work! I think it will bring great value to the community.

In the Nature paper you say "All pretrained encoders use ImageNet mean and standard deviation parameters for image normalization (including UNI)". The code examples are also consistent with it.

Can you please clarify the reason for sticking to the ImageNet normalization constants? As far as I understand, they were computed on the original ImageNet dataset. Since you pre-trained UNI on a different dataset, why not calculate the constants on that dataset?
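
For context, per-channel statistics for a custom dataset can be accumulated in one pass; `loader` is a hypothetical DataLoader yielding (B, 3, H, W) tensors in [0, 1]:

# Sketch: one-pass per-channel mean/std over a dataset.
import torch

n_pix = 0
channel_sum = torch.zeros(3)
channel_sq = torch.zeros(3)
for imgs, _ in loader:                        # hypothetical DataLoader
    n_pix += imgs.numel() / 3                 # pixels per channel
    channel_sum += imgs.sum(dim=(0, 2, 3))
    channel_sq += (imgs ** 2).sum(dim=(0, 2, 3))
mean = channel_sum / n_pix
std = (channel_sq / n_pix - mean ** 2).sqrt()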

Many thanks,
George

Model weights result in nan-values for half precision

I'm not sure whether this is resolvable, but the UNI weights produce NaN values when training or running inference in float16. The following is on an H&E-stained image with ImageNet normalization:

import timm
import torch
from huggingface_hub import hf_hub_download

# `device` and `batch_img` (a normalized H&E image batch) are defined elsewhere.
path = hf_hub_download("MahmoodLab/UNI", filename="pytorch_model.bin")
model = timm.create_model("vit_large_patch16_224", init_values=1e-5, num_classes=0).to(device)

missing_k, unexpected_k = model.load_state_dict(torch.load(path), strict=False)
print(f'Missing keys: {missing_k}')
print(f'Unexpected keys: {unexpected_k}')

# UNI weights: float32 is fine, float16 produces NaNs.
with torch.autocast(device_type='cuda', dtype=torch.float32):
    print(f'float32 output: {model(batch_img)}')
with torch.autocast(device_type='cuda', dtype=torch.float16):
    print(f'float16 output: {model(batch_img)}')

# Reference: default ImageNet-pretrained weights behave correctly in both precisions.
model_imagenet = timm.create_model("vit_large_patch16_224", init_values=1e-5,
                                   num_classes=0, pretrained=True).to(device)

with torch.autocast(device_type='cuda', dtype=torch.float32):
    print(f'float32 output: {model_imagenet(batch_img)}')
with torch.autocast(device_type='cuda', dtype=torch.float16):
    print(f'float16 output: {model_imagenet(batch_img)}')

Output: half precision works for the default ImageNet-pretrained weights but not for UNI.

Missing keys: []
Unexpected keys: []
float32 output: tensor([[-0.9344, -0.0447,  2.0671,  ...,  0.1991,  1.0729, -0.1812]],
       device='cuda:0', grad_fn=<SelectBackward0>)
float16 output: tensor([[nan, nan, nan,  ..., nan, nan, nan]], device='cuda:0',
       grad_fn=<SelectBackward0>)
float32 output: tensor([[ 1.3607,  0.1251, -0.2508,  ...,  0.2557, -0.1732,  0.6628]],
       device='cuda:0', grad_fn=<SelectBackward0>)
float16 output: tensor([[ 1.3607,  0.1251, -0.2508,  ...,  0.2557, -0.1732,  0.6628]],
       device='cuda:0', grad_fn=<SelectBackward0>)

Access request accepted and authentication succeeded, but downloading the weights and creating the model failed

Hi,

I followed the procedure (access request via Hugging Face + authentication via login(token) + weight download and model creation), but got an error with model = timm.create_model("hf-hub:MahmoodLab/uni", pretrained=True, init_values=1e-5, dynamic_img_size=True):

"GatedRepoError: 401 Client Error. (Request ID: Root=1-65fd9fbe-75542d9c27933ae862f55525)

Cannot access gated repo for url https://huggingface.co/MahmoodLab/UNI/resolve/main/config.json.
Repo model MahmoodLab/UNI is gated. You must be authenticated to access it."

It looks as if my request had not been accepted yet, but that is not the case (first screenshot). I also tried logging out of and back into Hugging Face in case there was an update delay, but I still got the same error. Detailed screenshots are shown below; I really don't know where the problem is. Could you help me, please? I'm using a Jupyter notebook on my institution's servers.

Thank you

[screenshots]

Training loss(es) during pre-training

Hello,
I was wondering whether you could provide additional details on the evolution of the loss functions during the pre-training of UNI.
It has been observed that instabilities or convergence issues can hinder pre-training. Is this something you observed?

Congratulations on this groundbreaking work and on publicly releasing the weights.

Mucinous tissue

Hello,

I have a question concerning the data used to train the model: was mucinous tissue included?

When using UNI with CLAM, the attention scores seem incoherent when there is mucinous tissue in the slide: attention tends to focus on the mucin rather than the tumor.
