1 change: 1 addition & 0 deletions en_US/changes/known-issues-5.9.md
@@ -6,3 +6,4 @@
| ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------- |
| 5.0.0 | **Node Crash if Linux monotonic clock steps backward**<br />In certain virtualized Linux environments, the operating system cannot keep its clock monotonic, which may cause the Erlang VM to exit with the message `OS monotonic time stepped backwards!`. | For such environments, set the `+c` flag to `false` in `etc/vm.args`. | |
| 5.3.0 | **Limitation in SAML-Based SSO**<br />EMQX Dashboard supports Single Sign-On based on the Security Assertion Markup Language (SAML) 2.0 standard and integrates with Okta and OneLogin as identity providers. However, SAML-based SSO currently does not support certificate signature verification and, due to the complexity of that integration, is incompatible with Azure Entra ID. | - | |
| 5.1.0 | **Replicant nodes may hang on startup when new core nodes are added to the cluster**<br />During cluster changes that involve adding new core nodes, the newly added cores may occasionally fail to start replication-related processes required by replicant nodes. This, in turn, can cause upgraded or newly added replicant nodes to hang during startup.<br />In Kubernetes deployments, this leads to the controller repeatedly restarting replicant pods due to failing readiness probes.<br />This problem typically occurs during upgrade rollouts, for example, when expanding an existing 2-core + 2-replicant cluster by adding two new core nodes and two new replicants running a newer EMQX version. | If one or more replicant nodes hang during startup after being (re)deployed, consider forcefully restarting the newly added core nodes one at a time until the replicants unblock and complete startup. | Fixed in 6.0.1 |
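
The `+c` workaround in the table above is a one-line change to `etc/vm.args`. A minimal sketch (the comment is illustrative; the rest of the file's contents depend on your installation):

```
## etc/vm.args
## Disable Erlang VM time correction so a non-monotonic OS clock
## does not crash the node with "OS monotonic time stepped backwards!"
+c false
```

Restart the node after editing the file for the flag to take effect.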
4 changes: 3 additions & 1 deletion en_US/changes/known-issues-6.0.md
@@ -4,4 +4,6 @@

| Since version | Issue | Workaround | Status |
| ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------ |
| 6.0.0 | **Cannot perform rolling upgrade from cluster running 5.x to 6.0.0 when older bridges are in the configuration**<br />Clusters that started on older EMQX versions and still contain the now-deprecated `bridges` configuration root will fail to sync their configuration to new 6.0 nodes, because 6.0 has dropped support for that root and therefore fails to start the corresponding Connectors, Actions, and Sources. | Starting from 6.0.1, an RPC call is made to the older node to convert `bridges` into `connectors`, `sources`, and `actions`, facilitating rolling upgrades with less manual intervention.<br />Alternatively, each affected bridge can be updated via the HTTP API or CLI to induce a configuration update (e.g., changing the description), which also upgrades the persisted `cluster.hocon` file.<br />The following Connectors/Sources/Actions might still require manual changes before attempting a rolling upgrade:<br /> - GCP PubSub Consumer<br /> - Kafka Consumer<br />If any such sources in the configuration still contain the `topic_mapping` field, remove the field from the configuration and create one "Source + Rule" pair for each of its entries. | |
| 5.1.0 | **Replicant nodes may hang on startup when new core nodes are added to the cluster**<br />During cluster changes that involve adding new core nodes, the newly added cores may occasionally fail to start replication-related processes required by replicant nodes. This, in turn, can cause upgraded or newly added replicant nodes to hang during startup.<br />In Kubernetes deployments, this leads to the controller repeatedly restarting replicant pods due to failing readiness probes.<br />This problem typically occurs during upgrade rollouts, for example, when expanding an existing 2-core + 2-replicant cluster by adding two new core nodes and two new replicants running a newer EMQX version. | If one or more replicant nodes hang during startup after being (re)deployed, consider forcefully restarting the newly added core nodes one at a time until the replicants unblock and complete startup. | Fixed in 6.0.1 |
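
The `bridges`-to-`connectors`/`actions` split described in the table above can be illustrated with a minimal HOCON sketch. The bridge type, names, and field sets here are hypothetical; the real schema varies per bridge type:

```
# Deprecated 5.x shape: one "bridges" root mixes connection and egress settings
bridges.mqtt.my_bridge {
  server = "broker.example.com:1883"
  egress { remote { topic = "out/t" } }
}

# Post-upgrade shape: connection settings live under "connectors";
# per-direction settings move to "actions" (egress) or "sources" (ingress)
connectors.mqtt.my_bridge { server = "broker.example.com:1883" }
actions.mqtt.my_bridge {
  connector = "my_bridge"
  parameters { topic = "out/t" }
}
```

From 6.0.1 onward this conversion happens automatically during the rolling upgrade, except for the GCP PubSub Consumer and Kafka Consumer cases listed above.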

9 changes: 5 additions & 4 deletions zh_CN/changes/known-issues-5.9.md
@@ -2,7 +2,8 @@

## e5.9.0

| 始于版本 | 问题描述 | 解决方法 | 状态 |
| -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------------- |
| 5.0.0 | **Linux 单调时钟回调导致 EMQX 节点重启**<br />在某些虚拟 Linux 环境中,操作系统无法保持时钟的单调性,这可能会导致 Erlang VM 因为错误消息 `OS monotonic time stepped backwards!` 而退出。 | 对于这类环境,可以在 `etc/vm.args` 中将 `+c` 标志设置为 `false`。 | |
| 5.3.0 | **基于 SAML 的单点登录限制**<br />EMQX Dashboard 支持基于安全断言标记语言(SAML)2.0 标准的单点登录(SSO),并与 Okta 和 OneLogin 作为身份提供商集成。然而,基于 SAML 的 SSO 目前不支持证书签名验证机制,并且由于其复杂性,无法与 Azure Entra ID 兼容。 | - | |
| 5.1.0 | **新增核心节点时,复制节点在启动阶段可能出现启动失败**<br />在涉及新增核心节点的集群变更过程中,新加入的核心节点有时可能无法正确启动复制节点所依赖的复制相关进程,进而导致升级后的或新添加的复制节点在启动时发生启动失败。<br />在 Kubernetes 部署中,这种情况会导致复制节点的就绪探针检查失败,从而被控制器不断地终止并重启复制节点的 Pod。<br />该问题通常出现在升级过程中,例如在原有的“两个核心节点 + 两个复制节点”集群基础上,添加两个运行新版 EMQX 的核心节点和两个复制节点时。 | 如果一个或多个复制节点在(重新)部署后启动时出现启动失败的情况,可以尝试依次强制重启新添加的核心节点,直到复制节点解除卡顿并完成启动。 | 已在 6.0.1 中修复 |
7 changes: 4 additions & 3 deletions zh_CN/changes/known-issues-6.0.md
@@ -2,6 +2,7 @@

## 6.0.0

| 始于版本 | 问题描述 | 解决方法 | 状态 |
| -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------------- |
| 6.0.0 | **当配置中包含旧版桥接(bridges)时,无法从运行 5.x 的集群滚动升级到 6.0.0**<br />如果集群是从较早版本的 EMQX 启动,并且配置中包含现已弃用的 `bridges` 配置根项,则无法将配置同步到新的 6.0 节点。因为 6.0 版本已移除对该配置根项的支持,导致无法启动相应的连接器(Connector)、动作(Action)和 Source。 | 从 6.0.1 起,系统会通过 RPC 调用旧节点,将配置中的 `bridges` 自动转换为 `connectors`、`sources` 和 `actions`,从而减少手动干预,实现平滑滚动升级。<br />或者,也可以通过 HTTP API 或 CLI 手动更新每个受影响的桥接配置(例如修改描述字段),以触发配置更新并升级持久化的 `cluster.hocon` 文件。<br />以下连接器、Source 或动作类型在尝试滚动升级前仍可能需要手动修改:<br />- GCP PubSub 消费者<br />- Kafka 消费者<br />如果这些配置中仍包含 `topic_mapping` 字段,需要手动从配置中移除,并为每个条目创建一个 “Source + 规则” 对。 | |
| 5.1.0 | **新增核心节点时,复制节点在启动阶段可能出现启动失败**<br />在涉及新增核心节点的集群变更过程中,新加入的核心节点有时可能无法正确启动复制节点所依赖的复制相关进程,进而导致升级后的或新添加的复制节点在启动时发生启动失败。<br />在 Kubernetes 部署中,这种情况会导致复制节点的就绪探针检查失败,从而被控制器不断地终止并重启复制节点的 Pod。<br />该问题通常出现在升级过程中,例如在原有的“两个核心节点 + 两个复制节点”集群基础上,添加两个运行新版 EMQX 的核心节点和两个复制节点时。 | 如果一个或多个复制节点在(重新)部署后启动时出现启动失败的情况,可以尝试依次强制重启新添加的核心节点,直到复制节点解除卡顿并完成启动。 | 已在 6.0.1 中修复 |