- **Unified Architecture**: Handles both detection and segmentation in a single model.

For more technical details, refer to the [official SAM 3 paper](https://ai.meta.com/research/publications/sam-3-segment-anything-with-concepts/).

## How to use SAM 3 with hot SAM3 instances maintained by Roboflow

The examples below use Roboflow's serverless infrastructure, which handles GPU provisioning automatically, making it ideal for applications that need on-demand segmentation without managing infrastructure.

### 1. SAM3 Concept Segmentation workflow

This example demonstrates using SAM3 through a workflow, which lets you combine SAM3's concept segmentation with visualization in a single pipeline. Here, we segment all dogs in an image and automatically visualize the results with polygon overlays.

If you have already created a workflow on the Roboflow platform, you can pass `workspace_name` and `workflow_id` instead of `specification` to run it; a minimal sketch of that variant follows the example below.

```python
import base64

import cv2 as cv
import numpy as np

from inference_sdk import InferenceHTTPClient

# Connect to Roboflow's serverless inference API
client = InferenceHTTPClient(
    api_url="https://serverless.roboflow.com",
    api_key="<YOUR_ROBOFLOW_API_KEY>"
)

# Define the workflow: SAM3 concept segmentation followed by polygon visualization
workflow_spec = {
    "version": "1.0",
    "inputs": [
        {
            "type": "InferenceImage",
            "name": "image"
        }
    ],
    "steps": [
        {
            "type": "roboflow_core/sam3@v1",
            "name": "sam",
            "images": "$inputs.image",
            "class_names": "dog"
        },
        {
            "type": "roboflow_core/polygon_visualization@v1",
            "name": "polygon_visualization",
            "image": "$inputs.image",
            "predictions": "$steps.sam.predictions"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "output",
            "coordinates_system": "own",
            "selector": "$steps.polygon_visualization.image"
        }
    ]
}

# Run the workflow on an image
result = client.run_workflow(
    specification=workflow_spec,
    images={
"image": "https://media.roboflow.com/inference/dog.jpeg"# Path or url to your image file
0 commit comments
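
As noted above, a workflow saved on the Roboflow platform can be run by its identifiers instead of an inline specification. The sketch below assumes placeholder values `<YOUR_WORKSPACE_NAME>` and `<YOUR_WORKFLOW_ID>`, which you would replace with the workspace name and workflow ID shown in the Roboflow app.

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://serverless.roboflow.com",
    api_key="<YOUR_ROBOFLOW_API_KEY>"
)

# Run a workflow saved on the Roboflow platform by referencing it directly
# (the identifiers below are placeholders; substitute your own).
result = client.run_workflow(
    workspace_name="<YOUR_WORKSPACE_NAME>",
    workflow_id="<YOUR_WORKFLOW_ID>",
    images={
        "image": "https://media.roboflow.com/inference/dog.jpeg"
    }
)
```

The structure of `result` depends on the outputs defined in your saved workflow, so adjust any post-processing accordingly.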