Commit 85e3729

Merge pull request #830 from FedML-AI/dev/v0.7.0
update to v0.8.0
2 parents: fc779bf + b17717b

File tree: 60 files changed (+1947 additions, −1375 deletions)


README.md
Lines changed: 1 addition & 1 deletion

```diff
@@ -243,7 +243,7 @@ Here `hierarchical` means that inside each FL Client (data silo), there are mult

 ## **FedML Beehive Examples**

-- [Federated Learning on Android Smartphones](./doc/en/cross-device/examples/mqtt_s3_fedavg_mnist_lr_example.md)
+- [Federated Learning on Android Smartphones](./doc/en/cross-device/examples/cross_device_android_example.md)


 # FedML on Smartphone and IoTs
```

android/fedmlsdk/src/main/java/ai/fedml/edge/service/ClientAgentManager.java
Lines changed: 1 addition & 1 deletion

```diff
@@ -75,7 +75,7 @@ public void registerMessageReceiveHandlers(final long edgeId) {
         edgeCommunicator.subscribe(startTrainTopic, (OnTrainStartListener) this::handleTrainStart);
         final String stopTrainTopic = "flserver_agent/" + edgeId + "/stop_train";
         edgeCommunicator.subscribe(stopTrainTopic, (OnTrainStopListener) this::handleTrainStop);
-        final String MLOpsQueryStatusTopic = "/mlops/report_device_status";
+        final String MLOpsQueryStatusTopic = "mlops/report_device_status";
         edgeCommunicator.subscribe(MLOpsQueryStatusTopic, (OnMLOpsMsgListener) this::handleMLOpsMsg);

         final String exitTrainWithExceptionTopic = "flserver_agent/" + edgeId + "/exit_train_with_exception";
```
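Several of the Java changes in this commit do the same thing: they drop a leading `/` from an MQTT topic name. This matters because in MQTT a leading slash introduces an empty first topic level, so `/mlops/report_device_status` and `mlops/report_device_status` are two distinct topics, and a publisher and subscriber that disagree on the slash will silently never match. A minimal sketch of the idea, using a hypothetical `EdgeTopics` helper that is not part of the FedML SDK:

```java
// Hypothetical helper sketching the topic scheme touched by this commit;
// not part of the FedML Android SDK.
public final class EdgeTopics {
    private EdgeTopics() { }

    // In MQTT, a leading '/' creates an empty first topic level, so
    // "/mlops/events" and "mlops/events" are two different topics.
    // Normalizing at construction time avoids a silent publish/subscribe mismatch.
    static String normalize(String topic) {
        return topic.startsWith("/") ? topic.substring(1) : topic;
    }

    // Per-edge control topics, as defined in ClientAgentManager.
    static String stopTrain(long edgeId) {
        return "flserver_agent/" + edgeId + "/stop_train";
    }

    static String exitTrainWithException(long edgeId) {
        return "flserver_agent/" + edgeId + "/exit_train_with_exception";
    }

    // Global MLOps status topic, slash-free after this commit.
    static final String MLOPS_QUERY_STATUS = normalize("/mlops/report_device_status");
}
```

The MQTT specification requires brokers to treat the two spellings as distinct topics, so normalizing on the client side keeps these Java subscribers aligned with whatever publishes on the slash-free names.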

android/fedmlsdk/src/main/java/ai/fedml/edge/service/communicator/message/MessageDefine.java
Lines changed: 2 additions & 2 deletions

```diff
@@ -44,8 +44,8 @@ public interface MessageDefine {


     // Client Status
-    String MQTT_LAST_WILL_TOPIC = "/flclient_agent/last_will_msg";
-    String MQTT_REPORT_ACTIVE_STATUS_TOPIC = "/flclient_agent/active";
+    String MQTT_LAST_WILL_TOPIC = "flclient_agent/last_will_msg";
+    String MQTT_REPORT_ACTIVE_STATUS_TOPIC = "flclient_agent/active";

     String MSG_MLOPS_CLIENT_STATUS_OFFLINE = "OFFLINE";
     String MSG_MLOPS_CLIENT_STATUS_IDLE = "IDLE";
```

android/fedmlsdk/src/main/java/ai/fedml/edge/service/component/ProfilerEventLogger.java
Lines changed: 1 addition & 1 deletion

```diff
@@ -7,7 +7,7 @@
 import ai.fedml.edge.utils.LogHelper;


 public class ProfilerEventLogger {
-    private static final String EVENT_TOPIC = "/mlops/events";
+    private static final String EVENT_TOPIC = "mlops/events";
     private static final int EVENT_TYPE_STARTED = 0;
     private static final int EVENT_TYPE_ENDED = 1;
```

devops/k8s/README_MODEL_SERVING.md
Lines changed: 8 additions & 8 deletions

````diff
@@ -4,7 +4,7 @@ This tutorial will guide you to deploy your models to target computing devices,

 The entire workflow is as follows:
 1. create a model card by uploading your trained model file and related configuration (YAML)
-2. bind (login) computing resource to FedML MLOps model serving platform (https://model.fedml.ai)
+2. bind (login) computing resource to FedML MLOps model serving platform (https://open.fedml.ai)
    - Kubernetes mode
    - CLI mode
 3. start the deployment and get the inference API once the deployment is finished
@@ -14,7 +14,7 @@ When your model deployment is finished, you will get an endpoint URL and inferen

 ```curl -XPOST https://$YourEndPointIngressDomainName/inference/api/v1/predict -H 'accept: application/json' -d'{ "model_version": "v11-Thu Jan 05 08:20:24 GMT 2023", "model_name": "model_340_18_fedml_test_model_v11-Thu-Jan-05-08-20-24-GMT-2023", "data": "This is our test data. Please fill in here with your real data.", "end_point_id": 336, "model_id": 18, "token": "2e081ef115d04ee8adaffe5c1d0bfbac"}'```

-You may run your model deployment flow via the ModelOps(model.fedml.ai) and CLI.
+You may run your model deployment flow via the ModelOps(open.fedml.ai) and CLI.

 Model Deployment CLI:

@@ -23,7 +23,7 @@ fedml model deploy -n $model_name --on_premise -d $device_id_list -u $user_id -k

 e.g. fedml model deploy -n fedml_sample_model -u 1420 -k c9356b9c4ce44363bb66366b290201 -dt md.on_premise_device -d [178077,178076]

-Note: You may find your device id in the Computing Resource page at the ModelOps(model.fedml.ai) platform.
+Note: You may find your device id in the Computing Resource page at the ModelOps(open.fedml.ai) platform.
 In the $device_id_list, the master device should be the first item.
 ```

@@ -62,15 +62,15 @@ Inference end point ingress will be used as your model serving endpoint URL whic
 ```kubectl get nodes --show-labels```

 ### 4). Prepare parameters will be used in the next step.
-You should fetch $YourAccountId and $YourApiKey from ModelOps(model.fedml.ai) which will be used in the next step.
+You should fetch $YourAccountId and $YourApiKey from ModelOps(open.fedml.ai) which will be used in the next step.

 ### 5). You may run the Helm Charts Installation commands to install FedML model serving packages to the above labeled nodes.

 ```kubectl create namespace $YourNameSpace```

-```helm install --set env.fedmlAccountId="$YourAccountId" --set env.fedmlApiKey="$YourApiKey" --set env.fedmlVersion="release" fedml-model-premise-slave fedml-model-premise-slave-0.7.397.tgz -n $YourNameSpace```
+```helm install --set env.fedmlAccountId="$YourAccountId" --set env.fedmlApiKey="$YourApiKey" --set env.fedmlVersion="release" fedml-model-premise-slave fedml-model-premise-slave-latest.tgz -n $YourNameSpace```

-```helm install --set env.fedmlAccountId="$YourAccountId" --set env.fedmlApiKey="$YourApiKey" --set env.fedmlVersion="release" --set "inferenceGateway.ingress.host=$YourEndPointIngressDomainName" --set "inferenceGateway.ingress.className=nginx" fedml-model-premise-master fedml-model-premise-master-0.7.397.tgz -n $YourNameSpace```
+```helm install --set env.fedmlAccountId="$YourAccountId" --set env.fedmlApiKey="$YourApiKey" --set env.fedmlVersion="release" --set "inferenceGateway.ingress.host=$YourEndPointIngressDomainName" --set "inferenceGateway.ingress.className=nginx" fedml-model-premise-master fedml-model-premise-master-latest.tgz -n $YourNameSpace```

 Notes: $YourEndPointIngressDomainName is your model serving end point URL host which will be used in your inference API, e.g.

@@ -137,7 +137,7 @@ List model in the remote model repository:
 Build local model repository as zip model package:
 ```fedml model package -n $model_name```

-Push local model repository to ModelOps(model.fedml.ai):
+Push local model repository to ModelOps(open.fedml.ai):
 ```fedml model push -n $model_name -u $user_id -k $user_api_key```

 Pull remote model(ModelOps) to local model repository:
@@ -158,4 +158,4 @@ A: Yes.


 4. Q: During deployment, what if the k8s service does not have a public IP? \
-A: During deployment, we don't need to initiate access to your k8s service from model.fedml.ai, only your k8s cluster can initiate access to model.fedml.ai
+A: During deployment, we don't need to initiate access to your k8s service from open.fedml.ai, only your k8s cluster can initiate access to open.fedml.ai
````
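The curl example in the diff above maps directly onto a plain HTTP POST. As a rough sketch (not FedML client code), the same request can be built with the JDK's `java.net.http` client; the payload fields are the placeholder values from the README, `InferenceRequestSketch` is a hypothetical class name, and the request is only constructed here, not sent:

```java
// Sketch of the inference call from the README's curl example, using the
// JDK 11+ java.net.http API. Host and token are caller-supplied placeholders.
import java.net.URI;
import java.net.http.HttpRequest;

public class InferenceRequestSketch {
    public static HttpRequest build(String host, String token) {
        // JSON body mirroring the curl example; values are the README's samples.
        String payload = "{"
                + "\"model_version\": \"v11-Thu Jan 05 08:20:24 GMT 2023\","
                + "\"model_name\": \"model_340_18_fedml_test_model_v11-Thu-Jan-05-08-20-24-GMT-2023\","
                + "\"data\": \"This is our test data. Please fill in here with your real data.\","
                + "\"end_point_id\": 336,"
                + "\"model_id\": 18,"
                + "\"token\": \"" + token + "\"}";
        return HttpRequest.newBuilder()
                .uri(URI.create("https://" + host + "/inference/api/v1/predict"))
                .header("accept", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
    }
}
```

Sending it is then one call, e.g. `HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())`, with the real endpoint host and token from the platform.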

devops/k8s/fedml-model-inference-ingress/Chart.yaml
Lines changed: 2 additions & 2 deletions

```diff
@@ -1,6 +1,6 @@
 apiVersion: v2
 name: fedml-model-inference-ingress
-description: A Helm chart for master on-premise device on FedML model serving platform(model.fedml.ai)
+description: A Helm chart for master on-premise device on FedML model serving platform(open.fedml.ai)

 # A chart can be either an 'application' or a 'library' chart.
 #
@@ -15,7 +15,7 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.7.377
+version: 0.7.700

 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to
```

devops/k8s/fedml-model-inference-ingress/values.yaml
Lines changed: 2 additions & 2 deletions

```diff
@@ -43,7 +43,7 @@ service:
 ingress:
   enabled: true
   className: ""
-  annotations: {}
+  annotations:
     kubernetes.io/ingress.class: nginx
     ingress.kubernetes.io/cors-allow-headers: '*'
     ingress.kubernetes.io/cors-allow-methods: 'PUT, GET, POST, OPTIONS, HEAD, DELETE, PATCH'
@@ -59,7 +59,7 @@ ingress:
     nginx.ingress.kubernetes.io/ssl-redirect: 'true'
     # kubernetes.io/tls-acme: "true"
   hosts:
-    - host: model.fedml.ai
+    - host: open.fedml.ai
       paths:
         - path: /inference
           pathType: Prefix
```
Two binary files changed (contents not shown): −5.09 KB, +5.16 KB.

devops/k8s/fedml-model-premise-master/Chart.yaml
Lines changed: 2 additions & 2 deletions

```diff
@@ -1,6 +1,6 @@
 apiVersion: v2
 name: fedml-model-premise-master
-description: A Helm chart for master on-premise device on FedML model serving platform(model.fedml.ai)
+description: A Helm chart for master on-premise device on FedML model serving platform(open.fedml.ai)

 # A chart can be either an 'application' or a 'library' chart.
 #
@@ -15,7 +15,7 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.7.397
+version: 0.7.700

 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to
```
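Both Chart.yaml files note that chart versions are expected to follow Semantic Versioning, and the bump from 0.7.397 to 0.7.700 only behaves as a "newer" version when compared numerically per dot-separated field; as plain strings, for instance, "0.7.1000" would sort before "0.7.700". A minimal sketch of that comparison; `SemVerLite` is an illustrative helper, not part of Helm or FedML, and it handles only numeric major.minor.patch fields, ignoring pre-release and build suffixes:

```java
// Illustrative only: compares "major.minor.patch" version strings numerically,
// field by field, the way SemVer ordering requires. Not a full SemVer parser.
public final class SemVerLite {
    private SemVerLite() { }

    // Returns a negative value if a < b, zero if equal, positive if a > b.
    static int compare(String a, String b) {
        String[] xs = a.split("\\.");
        String[] ys = b.split("\\.");
        for (int i = 0; i < Math.max(xs.length, ys.length); i++) {
            // Missing trailing fields count as zero, so "1.0" equals "1.0.0".
            int x = i < xs.length ? Integer.parseInt(xs[i]) : 0;
            int y = i < ys.length ? Integer.parseInt(ys[i]) : 0;
            if (x != y) {
                return Integer.compare(x, y);
            }
        }
        return 0;
    }
}
```

This is why the charts bump `version` on every change: tooling that orders releases relies on this numeric field-wise comparison, not lexicographic string order.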
