devops/k8s/README_MODEL_SERVING.md
This tutorial will guide you to deploy your models to target computing devices.

The entire workflow is as follows:

1. Create a model card by uploading your trained model file and related configuration (YAML)
2. Bind (login) computing resources to the FedML MLOps model serving platform (https://open.fedml.ai)
   - Kubernetes mode
   - CLI mode
3. Start the deployment and get the inference API once the deployment is finished
When your model deployment is finished, you will get an endpoint URL and inference API, e.g.

```curl -XPOST https://$YourEndPointIngressDomainName/inference/api/v1/predict -H 'accept: application/json' -d'{ "model_version": "v11-Thu Jan 05 08:20:24 GMT 2023", "model_name": "model_340_18_fedml_test_model_v11-Thu-Jan-05-08-20-24-GMT-2023", "data": "This is our test data. Please fill in here with your real data.", "end_point_id": 336, "model_id": 18, "token": "2e081ef115d04ee8adaffe5c1d0bfbac"}'```

You may run your model deployment flow via ModelOps (open.fedml.ai) and the CLI.

Notes: $YourEndPointIngressDomainName is your model serving endpoint URL host, which will be used in your inference API, e.g.
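The curl call above can also be issued from a client program. The sketch below is a minimal illustration, assuming the third-party `requests` package is installed; the field values are the placeholders from the curl example and must be replaced with those of your own endpoint:

```python
import json

def build_predict_payload(end_point_id, model_id, model_name,
                          model_version, token, data):
    # Field names follow the JSON body of the curl example above.
    return {
        "model_version": model_version,
        "model_name": model_name,
        "data": data,
        "end_point_id": end_point_id,
        "model_id": model_id,
        "token": token,
    }

def predict(host, payload):
    # POST to the inference API; `host` is $YourEndPointIngressDomainName.
    import requests  # third-party package, assumed installed
    url = f"https://{host}/inference/api/v1/predict"
    resp = requests.post(url, json=payload,
                         headers={"accept": "application/json"})
    resp.raise_for_status()
    return resp.json()

# Placeholder values copied from the curl example above.
payload = build_predict_payload(
    end_point_id=336,
    model_id=18,
    model_name="model_340_18_fedml_test_model_v11-Thu-Jan-05-08-20-24-GMT-2023",
    model_version="v11-Thu Jan 05 08:20:24 GMT 2023",
    token="2e081ef115d04ee8adaffe5c1d0bfbac",
    data="This is our test data. Please fill in here with your real data.",
)
print(json.dumps(payload, indent=2))
```

Call `predict($YourEndPointIngressDomainName, payload)` to send the request once your deployment is live.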
List models in the remote model repository:

Build the local model repository as a zip model package:

```fedml model package -n $model_name```

Push the local model repository to ModelOps (open.fedml.ai):

```fedml model push -n $model_name -u $user_id -k $user_api_key```

Pull a remote model (ModelOps) to the local model repository:
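For automation, the package/push steps above can be scripted, e.g. with Python's `subprocess`. This is only a sketch: the two `fedml model` invocations mirror the commands shown in this README, the helper names and placeholder values are hypothetical, and it assumes the `fedml` CLI is installed and on your PATH:

```python
import subprocess

def package_cmd(model_name):
    # Mirrors: fedml model package -n $model_name
    return ["fedml", "model", "package", "-n", model_name]

def push_cmd(model_name, user_id, user_api_key):
    # Mirrors: fedml model push -n $model_name -u $user_id -k $user_api_key
    return ["fedml", "model", "push",
            "-n", model_name, "-u", user_id, "-k", user_api_key]

def run(cmd):
    # Raises CalledProcessError if the CLI exits non-zero.
    subprocess.run(cmd, check=True)

# Usage (invokes the real CLI, so it is not executed here):
#   run(package_cmd("my_model"))
#   run(push_cmd("my_model", "1234", "YOUR_API_KEY"))
```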
A: Yes.

4. Q: During deployment, what if the k8s service does not have a public IP? \
   A: During deployment, we don't need to initiate access to your k8s service from open.fedml.ai; only your k8s cluster needs to initiate outbound access to open.fedml.ai.