Finetuning Cellpose-SAM #1353

@licmn

Description

I have 17 spatial transcriptomics samples with paired H&E images (one sample shown below). I want to predict cell types from the H&E images, so I fine-tuned the Cellpose-SAM model to predict 13 classes.

[images of one example sample]

I followed the Cellpose semantic segmentation code (https://github.com/MouseLand/cellpose/blob/e3879a1cc58d4aa313d50977bb9b31ab11f89a2e/paper/cpsam/semantic.py), changing only the number of labels. I used their hyperparameters as well (my understanding of how they plug in is sketched after the list):

# Hyperparameters:
rdrop = 0.4          # in vit_sam.Transformer initialization
learning_rate = 5e-5
weight_decay = 0.1
batch_size = 8
n_epochs = 500
bsize = 256
rescale = False
scale_range = 0.5
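
To make this concrete, here is a minimal plain-PyTorch sketch of how I understand these hyperparameters are used. The AdamW optimizer, the cross-entropy loss, and the 1x1-conv stand-in for the network are my assumptions for illustration, not the actual code in semantic.py:

# Minimal sketch of the optimization setup (assumptions, not the code from semantic.py).
# In the real run `net` is the vit_sam.Transformer; here it is a 1x1-conv stand-in
# so that the snippet runs on its own.
import torch
import torch.nn as nn

n_classes = 13
learning_rate = 5e-5
weight_decay = 0.1
bsize = 256

net = nn.Conv2d(3, n_classes, kernel_size=1)   # stand-in: 3-channel H&E in, 13 class logits out
optimizer = torch.optim.AdamW(net.parameters(), lr=learning_rate, weight_decay=weight_decay)
criterion = nn.CrossEntropyLoss()

def train_step(imgs, labels):
    # imgs: (B, 3, bsize, bsize) float tensor; labels: (B, bsize, bsize) int64 class map
    net.train()
    optimizer.zero_grad()
    logits = net(imgs)                 # (B, n_classes, bsize, bsize)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# smoke test on random crops
imgs = torch.randn(2, 3, bsize, bsize)
labels = torch.randint(0, n_classes, (2, bsize, bsize))
print(train_step(imgs, labels))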

Also, I've read about the mean cell diameter parameter for Cellpose, but have not used it. Looking at the loss curve from fine-tuning (below), past epochs 150-200 the model starts to overfit the training data without improving on the withheld test samples. I would truly appreciate any advice or feedback on how to fine-tune the model better. What would you try changing first in the fine-tuning process to improve training? (My current idea is sketched below the curve.)

Here is the training curve:

[training loss curve]
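
Concretely, the first change I am considering is keeping the checkpoint that does best on the withheld samples rather than training for the full 500 epochs. Here is a rough sketch of the selection rule I have in mind, run on a made-up loss history; the helper and the patience value are just for illustration and are not from semantic.py:

# Rough sketch: keep the epoch with the lowest loss on the withheld samples and
# stop once it has not improved for `patience` epochs. Illustrative only.
def best_epoch(withheld_losses, patience=25):
    best, best_loss = 0, float("inf")
    for epoch, loss in enumerate(withheld_losses):
        if loss < best_loss:
            best, best_loss = epoch, loss
        elif epoch - best >= patience:
            break   # withheld loss has stalled; later checkpoints are likely overfit
    return best

# made-up history: the withheld loss improves, then creeps back up (overfitting)
history = [1.0, 0.8, 0.7, 0.65, 0.64, 0.66, 0.70, 0.75, 0.80, 0.85]
print(best_epoch(history, patience=3))   # -> 4; save/restore net.state_dict() at that epoch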

Thanks a lot for your help!
Luisa
