
Commit 9e5cc9c

Merge pull request #1260 from MouseLand/add_back_cp3_nb
add notebook for running cellpose3 with denoising and segmentation
2 parents ed035ef + 706cfa7

File tree

1 file changed: +316 -0 lines changed


notebooks/run_cellpose3.ipynb

Lines changed: 316 additions & 0 deletions
@@ -0,0 +1,316 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Nc9k-7j1-CUF"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/MouseLand/cellpose/blob/main/notebooks/run_cellpose3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Install and run cellpose3 for denoising and segmentation\n",
    "## ⚠️ **Warning:** this notebook will install cellpose3, which is not forward compatible with cellpose4 (CPSAM). Be careful with your environments and the `pip` command below. ⚠️\n",
    "\n",
    "## In cellpose4, the dedicated denoising components were removed: the segmentation network is instead trained on noisy images, so cellpose4 only has a segmentation network."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "U_WCmrG5-CUL"
   },
   "source": [
    "# Running cellpose3 in Colab with a GPU\n",
    "\n",
    "<font size = 4>Cellpose3 now allows you to restore and segment noisy/blurry/low-res images!\n",
    "\n",
    "For more details on Cellpose3, check out the [paper](https://www.biorxiv.org/content/10.1101/2024.02.10.579780v1).\n",
    "\n",
    "Mount your Google Drive to access all your image files. This also ensures that the segmentations are saved to your Google Drive."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HrakTaa9-CUQ"
   },
   "source": [
    "## Installation\n",
    "\n",
    "Install cellpose -- by default, the torch GPU version is installed in the Colab notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "efSQoWFw-CUU",
    "outputId": "472a7900-7821-4bc6-d3b3-00a463476721"
   },
   "outputs": [],
   "source": [
    "!pip install \"opencv-python-headless>=4.9.0.80\"\n",
    "!pip install cellpose==3.1.1.2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "j7uUatzC-CUY"
   },
   "source": [
    "Check the CUDA version and that the GPU is working in cellpose, and import other libraries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "a8muq8KG-CUa",
    "outputId": "75fabdc8-a976-476d-9f79-d9fc6213eccb"
   },
   "outputs": [],
   "source": [
    "!nvcc --version\n",
    "!nvidia-smi\n",
    "\n",
    "import os, shutil\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from cellpose import core, utils, io, models, metrics\n",
    "from glob import glob\n",
    "\n",
    "use_GPU = core.use_gpu()\n",
    "yn = ['NO', 'YES']\n",
    "print(f'>>> GPU activated? {yn[use_GPU]}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "SzD7QlBP-CUd"
   },
   "source": [
    "## Images\n",
    "\n",
    "Load in your own data or use ours (below)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 568
    },
    "id": "PYevQVQd-CUe",
    "outputId": "895a5ed4-b2cc-482d-d741-32218eee76bc"
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import time, os, sys\n",
    "from urllib.parse import urlparse\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib as mpl\n",
    "%matplotlib inline\n",
    "mpl.rcParams['figure.dpi'] = 200\n",
    "from cellpose import utils, io\n",
    "\n",
    "# download noisy images from website\n",
    "url = \"http://www.cellpose.org/static/data/test_poisson.npz\"\n",
    "filename = \"test_poisson.npz\"\n",
    "utils.download_url_to_file(url, filename)\n",
    "dat = np.load(filename, allow_pickle=True)[\"arr_0\"].item()\n",
    "\n",
    "imgs = dat[\"test_noisy\"]\n",
    "plt.figure(figsize=(8,3))\n",
    "for i, iex in enumerate([2, 18, 20]):\n",
    "    img = imgs[iex].squeeze()\n",
    "    plt.subplot(1,3,1+i)\n",
    "    plt.imshow(img, cmap=\"gray\", vmin=0, vmax=1)\n",
    "    plt.axis('off')\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "g1dO0Oia-CUk"
   },
   "source": [
    "Mount your Google Drive here if you want to load your own images:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "cellView": "form",
    "id": "1qyAEK7R-CUp"
   },
   "outputs": [],
   "source": [
    "\n",
    "#@markdown ###Run this cell to connect your Google Drive to Colab\n",
    "\n",
    "#@markdown * Click on the URL.\n",
    "\n",
    "#@markdown * Sign in to your Google Account.\n",
    "\n",
    "#@markdown * Copy the authorization code.\n",
    "\n",
    "#@markdown * Enter the authorization code.\n",
    "\n",
    "#@markdown * Click on the \"Files\" panel on the right. Refresh it. Your Google Drive folder should now be available there as \"drive\".\n",
    "\n",
    "# mounts the user's Google Drive to Google Colab\n",
    "\n",
    "from google.colab import drive\n",
    "drive.mount('/content/gdrive')\n"
   ]
  },
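  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to run on your own data instead of the demo images, a minimal sketch is below: it assumes your images live in a folder on your Drive, and the folder path and `.tif` extension are placeholders to adapt to your files."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# minimal sketch for loading your own images from Google Drive\n",
    "# (the folder path and file extension below are placeholders)\n",
    "from glob import glob\n",
    "from cellpose import io\n",
    "\n",
    "image_dir = \"/content/gdrive/MyDrive/images\"  # hypothetical folder in your Drive\n",
    "files = sorted(glob(image_dir + \"/*.tif\"))  # adjust the extension to your data\n",
    "print(f\"found {len(files)} images\")\n",
    "# uncomment to replace the demo images with your own:\n",
    "# imgs = [io.imread(f) for f in files]"
   ]
  },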
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "-KYaPm0H-CUs"
   },
   "source": [
    "## Run denoising and segmentation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "wm6YEVJN-CUu",
    "outputId": "f9c222c8-013d-4cbe-ba07-aa0172f8532f"
   },
   "outputs": [],
   "source": [
    "# RUN CELLPOSE3\n",
    "\n",
    "from cellpose import denoise, io\n",
    "\n",
    "io.logger_setup() # run this to get printing of progress\n",
    "\n",
    "# DEFINE CELLPOSE MODEL\n",
    "# model_type=\"cyto3\" or \"nuclei\", or other model\n",
    "# restore_type: \"denoise_cyto3\", \"deblur_cyto3\", \"upsample_cyto3\", \"denoise_nuclei\", \"deblur_nuclei\", \"upsample_nuclei\"\n",
    "model = denoise.CellposeDenoiseModel(gpu=True, model_type=\"cyto3\",\n",
    "                                     restore_type=\"denoise_cyto3\")\n",
    "\n",
    "# define CHANNELS to run segmentation on\n",
    "# grayscale=0, R=1, G=2, B=3\n",
    "# channels = [cytoplasm, nucleus]\n",
    "# if NUCLEUS channel does not exist, set the second channel to 0\n",
    "# channels = [0,0]\n",
    "# IF ALL YOUR IMAGES ARE THE SAME TYPE, you can give a list with 2 elements\n",
    "# channels = [0,0] # IF YOU HAVE GRAYSCALE\n",
    "# channels = [2,3] # IF YOU HAVE G=cytoplasm and B=nucleus\n",
    "# channels = [2,1] # IF YOU HAVE G=cytoplasm and R=nucleus\n",
    "# OR if you have different types of channels in each image\n",
    "# channels = [[2,3], [0,0], [0,0]]\n",
    "\n",
    "# if you have a nuclear channel, you can use the nuclei restore model on the nuclear channel with\n",
    "# model = denoise.CellposeDenoiseModel(..., chan2_restore=True)\n",
    "\n",
    "# NEED TO SPECIFY DIAMETER OF OBJECTS\n",
    "# in this case we have them from the ground-truth masks\n",
    "diams = dat[\"diam_test\"]\n",
    "\n",
    "masks, flows, styles, imgs_dn = model.eval(imgs, diameter=diams, channels=[0,0])\n"
   ]
  },
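  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, save the segmentations to your Drive. A minimal sketch, assuming the variables from the cell above; the output folder and file names are placeholders, since the demo images come from an array rather than from files on disk."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# minimal sketch: save masks as PNGs with cellpose's io.save_masks\n",
    "# (the output folder and file names are placeholders)\n",
    "import os\n",
    "from cellpose import io\n",
    "\n",
    "save_dir = \"/content/gdrive/MyDrive/cellpose3_output\"  # hypothetical folder\n",
    "os.makedirs(save_dir, exist_ok=True)\n",
    "file_names = [os.path.join(save_dir, f\"img_{i:03d}.tif\") for i in range(len(imgs))]\n",
    "io.save_masks(imgs_dn, masks, flows, file_names, png=True)"
   ]
  },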
241+
{
242+
"cell_type": "markdown",
243+
"metadata": {
244+
"id": "tH33nBAE-CUy"
245+
},
246+
"source": [
247+
"plot results"
248+
]
249+
},
250+
{
251+
"cell_type": "code",
252+
"execution_count": null,
253+
"metadata": {
254+
"colab": {
255+
"base_uri": "https://localhost:8080/",
256+
"height": 1000
257+
},
258+
"id": "8bAJc0qt-CU0",
259+
"outputId": "906b3476-c272-4cd8-a9cb-a1f46eacce5c"
260+
},
261+
"outputs": [],
262+
"source": [
263+
"plt.figure(figsize=(8,12))\n",
264+
"for i, iex in enumerate([2, 18, 20]):\n",
265+
" img = imgs[iex].squeeze()\n",
266+
" plt.subplot(3,3,1+i)\n",
267+
" plt.imshow(img, cmap=\"gray\", vmin=0, vmax=1)\n",
268+
" plt.axis('off')\n",
269+
" plt.title(\"noisy\")\n",
270+
"\n",
271+
" img_dn = imgs_dn[iex].squeeze()\n",
272+
" plt.subplot(3,3,4+i)\n",
273+
" plt.imshow(img_dn, cmap=\"gray\", vmin=0, vmax=1)\n",
274+
" plt.axis('off')\n",
275+
" plt.title(\"denoised\")\n",
276+
"\n",
277+
" plt.subplot(3,3,7+i)\n",
278+
" plt.imshow(img_dn, cmap=\"gray\", vmin=0, vmax=1)\n",
279+
" outlines = utils.outlines_list(masks[iex])\n",
280+
" for o in outlines:\n",
281+
" plt.plot(o[:,0], o[:,1], color=[1,1,0])\n",
282+
" plt.axis('off')\n",
283+
" plt.title(\"segmentation\")\n",
284+
"\n",
285+
"plt.tight_layout()\n",
286+
"plt.show()"
287+
]
288+
}
289+
],
290+
"metadata": {
291+
"accelerator": "GPU",
292+
"colab": {
293+
"gpuType": "T4",
294+
"provenance": []
295+
},
296+
"kernelspec": {
297+
"display_name": "cp4",
298+
"language": "python",
299+
"name": "python3"
300+
},
301+
"language_info": {
302+
"codemirror_mode": {
303+
"name": "ipython",
304+
"version": 3
305+
},
306+
"file_extension": ".py",
307+
"mimetype": "text/x-python",
308+
"name": "python",
309+
"nbconvert_exporter": "python",
310+
"pygments_lexer": "ipython3",
311+
"version": "3.10.0"
312+
}
313+
},
314+
"nbformat": 4,
315+
"nbformat_minor": 0
316+
}

0 commit comments