Commit b0f7f06

fix: img alt (#2672)
1 parent 8cab13c commit b0f7f06

7 files changed: +77 -71 lines changed

docs/en/developer/20-community/00-contributor/04-how-to-profiling.md

Lines changed: 12 additions & 10 deletions
@@ -9,7 +9,7 @@ go tool pprof -http="0.0.0.0:8081" http://localhost:8080/debug/pprof/profile?sec
 ```
 
 Open `<your-ip>:8081` and select `Flame Graph` from the VIEW menus in the site header:
-<img src="https://user-images.githubusercontent.com/172204/208336392-5b64bb9b-cce8-4562-9e05-c3d538e9d8a6.png"/>
+<img alt="CPU profiling" src="https://user-images.githubusercontent.com/172204/208336392-5b64bb9b-cce8-4562-9e05-c3d538e9d8a6.png"/>
 
 ## Query Level CPU Profiling

@@ -25,15 +25,16 @@ Currently, it does not work on Mac, with either intel or Arm.
 ### Enable memory profiling
 
 1. Build `databend-query` with `memory-profiling` feature enabled:
-```
-cargo build --bin databend-query --release --features memory-profiling
-```
+
+```
+cargo build --bin databend-query --release --features memory-profiling
+```
 
 2. Fire up `databend`, using environment variable `MALLOC_CONF` to enable memory profiling:
-
-```
-MALLOC_CONF=prof:true,lg_prof_interval:30 ./target/release/databend-query
-```
+
+```
+MALLOC_CONF=prof:true,lg_prof_interval:30 ./target/release/databend-query
+```

@@ -43,14 +44,15 @@ Generate a call graph in `pdf` illustrating memory allocation during this interv
 jeprof --pdf ./target/release/databend-query heap.prof > heap.pdf
 ```
 
-<img src="https://user-images.githubusercontent.com/172204/204963954-f6eacf10-d8bd-4469-9c8d-7d30955f1a78.png" width="600"/>
+<img alt="Generate heap profile" src="https://user-images.githubusercontent.com/172204/204963954-f6eacf10-d8bd-4469-9c8d-7d30955f1a78.png" width="600"/>
 
 ### Fast jeprof
+
 jeprof is very slow for large heap analysis, the bottleneck is `addr2line`, if you want to speed up from **30 minutes to 3s**, please use :
+
 ```
 git clone https://github.com/gimli-rs/addr2line
 cd addr2line
 cargo b --examples -r
 cp ./target/release/examples/addr2line <your-addr2line-find-with-whereis-addr2line>
 ```
-
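One detail worth noting in the `MALLOC_CONF` line of this diff: `lg_prof_interval:30` is jemalloc's log-base-2 dump interval, so a heap profile is written roughly every 2^30 bytes (about 1 GiB) of cumulative allocation activity. A quick sanity check of that arithmetic (plain Python, used here purely for illustration):

```python
# jemalloc's lg_prof_interval gives the log2 of the bytes of cumulative
# allocation activity between automatic heap-profile dumps.
lg_prof_interval = 30
interval_bytes = 2 ** lg_prof_interval
gib = interval_bytes / 2 ** 30
print(f"heap profile dumped about every {interval_bytes} bytes (~{gib:.0f} GiB)")
# -> heap profile dumped about every 1073741824 bytes (~1 GiB)
```

Lower values dump more often at the cost of producing more profile files; higher values dump less often.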

docs/en/guides/10-deploy/01-deploy/00-understanding-deployment-modes.md

Lines changed: 2 additions & 2 deletions
@@ -24,7 +24,7 @@ Databend deployment provides two modes: standalone and cluster, each with differ
 
 In standalone mode, a standard configuration consists of a single Meta node and a single Query node. This minimal setup is suitable for testing purposes or small-scale deployments. However, it is important to note that standalone mode is not recommended for production environments due to its limited scalability and the absence of high availability features.
 
-<img src="/img/deploy/deploy-standalone-arch.png"/>
+<img alt="Standalone Deployment" src="/img/deploy/deploy-standalone-arch.png"/>
 
 In a Standalone Databend Deployment, it is possible to host both the Meta and Query nodes on a single server. The following topics in the documentation assist you in setting up and deploying a standalone Databend:

@@ -37,7 +37,7 @@ Cluster mode is designed for larger-scale deployments and provides enhanced capa
 
 In a Databend cluster, multiple Query nodes can be deployed, and it is possible to create a more powerful Query cluster by grouping specific Query nodes together (using Cluster IDs) for different query performance requirements. A Databend cluster has the capacity to accommodate multiple Query clusters. By default, Databend leverages computational concurrency to its maximum potential, allowing a single SQL query to utilize all available CPU cores within a single Query node. However, when utilizing a Query cluster, Databend takes advantage of concurrent scheduling and executes computations across the entire cluster. This approach maximizes system performance and provides enhanced computational capabilities.
 
-<img src="/img/deploy/deploy-cluster-arch.png"/>
+<img alt="Cluster Deployment" src="/img/deploy/deploy-cluster-arch.png"/>
 
 #### Query Cluster Size

docs/en/guides/10-deploy/02-upgrade/10-compatibility.md

Lines changed: 33 additions & 31 deletions
@@ -1,11 +1,11 @@
 ---
 title: Compatibility
 sidebar_label: Compatibility
-description:
-  Investigate and manage the compatibility
+description: Investigate and manage the compatibility
 ---
 
 This guideline will introduce how to investigate and manage the compatibility:
+
 - between databend-query and databend-meta.
 - between different versions of databend-meta.

@@ -74,6 +74,7 @@ When handshaking:
 Handshake succeeds if both of these two assertions hold.
 
 E.g.:
+
 - `S: (ver=3, min_cli_ver=1)` is compatible with `C: (ver=3, min_srv_ver=2)`.
 - `S: (ver=4, min_cli_ver=4)` is **NOT** compatible with `C: (ver=3, min_srv_ver=2)`.
   Because although `S.ver(4) >= C.min_srv_ver(3)` holds,
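The two handshake assertions in this hunk boil down to a symmetric version check. A minimal sketch (Python used for illustration only; the real check lives in the databend-meta client/server handshake code):

```python
def handshake_ok(s_ver: int, s_min_cli_ver: int, c_ver: int, c_min_srv_ver: int) -> bool:
    """Handshake succeeds iff the server is new enough for the client
    AND the client is new enough for the server."""
    return s_ver >= c_min_srv_ver and c_ver >= s_min_cli_ver

# The two examples from the doc:
print(handshake_ok(3, 1, 3, 2))  # True: compatible
print(handshake_ok(4, 4, 3, 2))  # False: C.ver(3) < S.min_cli_ver(4)
```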
@@ -96,65 +97,67 @@ S.ver: 2 3 4
 The following is an illustration of the latest query-meta compatibility:
 
 | `Meta\Query` | [0.9.41, 1.1.34) | [1.1.34, 1.2.287) | [1.2.287, 1.2.361) | [1.2.361, +∞) |
-|:-------------------|:-----------------|:---------------|:-----------|:-----------|
-| [0.8.30, 0.8.35) | ||| |
-| [0.8.35, 0.9.23) | ||| |
-| [0.9.23, 0.9.42) | ||| |
-| [0.9.42, 1.1.32) | ||| |
-| [1.1.32, 1.2.63) | ||| |
-| [1.2.63, 1.2.226) | ||| |
-| [1.2.226, 1.2.258) | ||| |
-| [1.2.258, +∞) | ||| |
+| :----------------- | :--------------- | :---------------- | :----------------- | :------------ |
+| [0.8.30, 0.8.35) || | | |
+| [0.8.35, 0.9.23) || | | |
+| [0.9.23, 0.9.42) || | | |
+| [0.9.42, 1.1.32) || | | |
+| [1.1.32, 1.2.63) || | | |
+| [1.2.63, 1.2.226) || | | |
+| [1.2.226, 1.2.258) || | | |
+| [1.2.258, +∞) || | | |
 
 History versions that are not included in the above chart:
 
 - Query `[0.7.59, 0.8.80)` is compatible with Meta `[0.8.30, 0.9.23)`.
 - Query `[0.8.80, 0.9.41)` is compatible with Meta `[0.8.35, 0.9.42)`.
 
-
-<img src="/img/deploy/compatibility.excalidraw.png"/>
+<img alt="Compatibility status" src="/img/deploy/compatibility.excalidraw.png"/>
 
 # Compatibility between databend-query
 
 ## Version Compatibility Matrix
 
-| Query version | Backward compatible with | Key Changes |
-|:-------------------|:--------------------------|:------------|
-| [-∞, 1.2.307) | [-∞, 1.2.311) | Original format |
-| [1.2.307, 1.2.311) | [-∞, 1.2.311) | Added Role info with PB/JSON support |
-| [1.2.311, 1.2.709) | [1.2.307, +∞) | Role info serialized to PB only |
-| [1.2.709, +∞) | [1.2.709, +∞) | **Important**: Fuse storage path changed |
+| Query version | Backward compatible with | Key Changes |
+| :----------------- | :----------------------- | :--------------------------------------- |
+| [-∞, 1.2.307) | [-∞, 1.2.311) | Original format |
+| [1.2.307, 1.2.311) | [-∞, 1.2.311) | Added Role info with PB/JSON support |
+| [1.2.311, 1.2.709) | [1.2.307, +∞) | Role info serialized to PB only |
+| [1.2.709, +∞) | [1.2.709, +∞) | **Important**: Fuse storage path changed |
 
 ## Important Changes & Upgrade Instructions
 
 ### Version 1.2.307
+
 - Support deserialize Role info with PB and JSON
 - Only support serialize Role info to JSON
 - **Upgrade to this version first** if you're on an earlier version
 
 ### Version 1.2.311
+
 - Only support serialize Role info to PB
 - **Upgrade to this version next** after reaching 1.2.307
 - Example upgrade path: `1.2.306 -> 1.2.307 -> 1.2.311 -> 1.2.312`
 
 ### Version 1.2.709
+
 - **Important Change**: Fuse storage path modified
 - ⚠️ Versions before 1.2.709 may not be able to read some data from versions 1.2.709+
 - ⚠️ **Recommendation**: All nodes under the same tenant should be upgraded together
 - Avoid mixing nodes with versions before and after 1.2.709 to prevent potential data access issues
 
 ### Version 1.2.764
+
 - If you need to specify a different storage location for `system_history` tables, all nodes under the same tenant need to be upgraded to 1.2.764+
 
 ## Compatibility between databend-meta
 
-| Meta version | Backward compatible with |
-|:--------------------|:-------------------------|
-| [0.9.41, 1.2.212) | [0.9.41, 1.2.212) |
-| [1.2.212, 1.2.479) | [0.9.41, 1.2.479) |
-| [1.2.479, 1.2.655) | [1.2.288, 1.2.655) |
-| [1.2.655, +∞) | [1.2.288, +∞) |
-
+| Meta version | Backward compatible with |
+| :----------------- | :----------------------- |
+| [0.9.41, 1.2.212) | [0.9.41, 1.2.212) |
+| [1.2.212, 1.2.479) | [0.9.41, 1.2.479) |
+| [1.2.479, 1.2.655) | [1.2.288, 1.2.655) |
+| [1.2.655, +∞) | [1.2.288, +∞) |
 
 ![](@site/static/img/deploy/compat-meta-meta-1-2-655.svg)

@@ -182,16 +185,15 @@ History versions that are not included in the above chart:
 - `1.2.655` 2024-11-11 Introduce on-disk `V004`, using WAL based Raft log storage,
   which is compatible with `V002`. The oldest compatible version is `1.2.288` (`1.2.212~1.2.287` are removed).
 
-
 ## Compatibility of databend-meta on-disk data
 
 The on-disk data of Databend-meta evolves over time while maintaining backward compatibility.
 
 | DataVersion | Databend-version | Min Compatible with |
-|:------------|:-----------------|:--------------------|
-| V004 | 1.2.655 | V002 |
-| V003 | 1.2.547 | V002 |
-| V002 | 1.2.53 | V001 |
+| :---------- | :--------------- | :------------------ |
+| V004 | 1.2.655 | V002 |
+| V003 | 1.2.547 | V002 |
+| V002 | 1.2.53 | V001 |
 | V001 | 1.1.40 | V0 |
 
 ### Identifying the versions
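All of the matrices in this file use half-open version intervals `[lo, hi)`. A small helper showing how such a table can be consulted (Python for illustration only; the meta-meta rows are transcribed from this commit, and treating versions as plain `x.y.z` integer triples is a simplifying assumption):

```python
def parse(v):
    # "1.2.212" -> (1, 2, 212); tuples compare lexicographically
    return tuple(int(x) for x in v.split("."))

def in_interval(ver, lo, hi=None):
    """Half-open [lo, hi); hi=None stands for +inf."""
    v = parse(ver)
    return parse(lo) <= v and (hi is None or v < parse(hi))

# The "Compatibility between databend-meta" table from this file
META_COMPAT = [
    (("0.9.41", "1.2.212"), ("0.9.41", "1.2.212")),
    (("1.2.212", "1.2.479"), ("0.9.41", "1.2.479")),
    (("1.2.479", "1.2.655"), ("1.2.288", "1.2.655")),
    (("1.2.655", None), ("1.2.288", None)),
]

def meta_compatible(meta_ver, peer_ver):
    for (lo, hi), (plo, phi) in META_COMPAT:
        if in_interval(meta_ver, lo, hi):
            return in_interval(peer_ver, plo, phi)
    return False  # version predates the table

print(meta_compatible("1.2.655", "1.2.288"))  # True
print(meta_compatible("1.2.479", "1.2.212"))  # False: oldest compatible is 1.2.288
```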

docs/en/guides/10-deploy/03-monitor/30-tracing.md

Lines changed: 3 additions & 3 deletions
@@ -235,12 +235,12 @@ tokio-console # for meta console, http://127.0.0.1:6669
 
 **databend-query**
 
-<img src="/img/tracing/query-console.png"/>
+<img alt="databend-query" src="/img/tracing/query-console.png"/>
 
 **databend-meta**
 
-<img src="/img/tracing/meta-console.png"/>
+<img alt="databend-meta" src="/img/tracing/meta-console.png"/>
 
 **task in console**
 
-<img src="/img/tracing/task-in-console.png"/>
+<img alt="task in console" src="/img/tracing/task-in-console.png"/>

docs/en/guides/40-load-data/02-load-db/airbyte.md

Lines changed: 16 additions & 10 deletions
@@ -3,18 +3,17 @@ title: Airbyte
 ---
 
 <p align="center">
-<img src="/img/integration/integration-airbyte.png"/>
+<img alt="Airbyte" src="/img/integration/integration-airbyte.png"/>
 </p>
 
 ## What is [Airbyte](https://airbyte.com/)?
 
-
-* Airbyte is an open-source data integration platform that syncs data from applications, APIs & databases to data warehouses, lakes & DBs.
-* You could load data from any airbyte source to Databend.
+- Airbyte is an open-source data integration platform that syncs data from applications, APIs & databases to data warehouses, lakes & DBs.
+- You could load data from any airbyte source to Databend.
 
 Currently we implemented an experimental airbyte destination that allows you to send data from your airbyte source to databend.
 
-**NOTE**: 
+**NOTE**:
 
 currently we only implemented the `append` mode, which means the destination will only append data to the table, and will not overwrite, update or delete any data.
 Plus, we assume that your databend destination is **S3 Compatible** since we used presign to copy data from databend stage to table.

@@ -32,21 +31,25 @@ Please read [this](../../10-deploy/01-deploy/01-non-production/00-deploying-loca
 ## Create a Databend User
 
 Connect to Databend server with MySQL client:
+
 ```shell
-mysql -h127.0.0.1 -uroot -P3307 
+mysql -h127.0.0.1 -uroot -P3307
 ```
 
 Create a user:
+
 ```sql
 CREATE USER user1 IDENTIFIED BY 'abc123';
 ```
 
 Create a Database:
+
 ```sql
 CREATE DATABASE airbyte;
 ```
 
 Grant privileges for the user:
+
 ```sql
 GRANT ALL PRIVILEGES ON airbyte.* TO user1;
 ```
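The three setup statements above follow a fixed pattern, so they are easy to script. A sketch that only renders the SQL text (Python for illustration; `user1`, `abc123`, and `airbyte` are the doc's example values, and the actual client connection, e.g. over the MySQL protocol on port 3307, is deliberately left out):

```python
def databend_setup_sql(user, password, database):
    """Render the user/database/grant statements from this guide.
    Note: no quoting or escaping of identifiers; trusted input only."""
    return [
        f"CREATE USER {user} IDENTIFIED BY '{password}';",
        f"CREATE DATABASE {database};",
        f"GRANT ALL PRIVILEGES ON {database}.* TO {user};",
    ]

for stmt in databend_setup_sql("user1", "abc123", "airbyte"):
    print(stmt)
```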
@@ -56,24 +59,27 @@ GRANT ALL PRIVILEGES ON airbyte.* TO user1;
 To use Databend with Airbyte, you should add our customized connector to your Airbyte Instance.
 You could add the destination in Settings -> Destinations -> Custom Destinations -> Add a Custom Destination Page.
 Our custom destination image is `datafuselabs/destination-databend:alpha`
+
 <p align="center">
-<img src="/img/integration/integration-airbyte-plugins.png"/>
+<img alt="Configure Airbyte" src="/img/integration/integration-airbyte-plugins.png"/>
 </p>
 
 ## Setup Databend destination
-**Note**: 
+
+**Note**:
 
 You should have a databend instance running and accessible from your airbyte instance.
 For local airbyte, you cannot connect docker compose with your localhost network.
 You may take a look at [ngrok](https://ngrok.com/) to tunnel your service (**NEVER** expose it on your production environment).
 
 <p align="center">
-<img src="/img/integration/integration-airbyte-destinations.png"/>
+<img alt="Setup Databend destination" src="/img/integration/integration-airbyte-destinations.png"/>
 </p>
 
 ## Test your integration
+
 You could use the Faker source to test your integration; after the sync completes, you could run the following command to see the expected uploaded data.
 
 ```sql
 select * from default._airbyte_raw_users limit 5;
-```
+```

docs/en/sql-reference/10-sql-commands/40-explain-cmds/explain-perf.md

Lines changed: 1 addition & 1 deletion
@@ -26,6 +26,6 @@ bendsql --quote-style never --query="EXPLAIN PERF SELECT avg(number) FROM number
 
 Then, you can open the `demo.html` file in your browser to view the flame graphs:
 
-<img src="https://github.com/user-attachments/assets/07acfefa-a1c3-4c00-8c43-8ca1aafc3224"/>
+<img alt="graphs" src="https://github.com/user-attachments/assets/07acfefa-a1c3-4c00-8c43-8ca1aafc3224"/>
 
 If the query finishes very quickly, it may not collect enough data, resulting in an empty flame graph.

docs/en/sql-reference/20-sql-functions/07-aggregate-functions/aggregate-windowfunnel.md

Lines changed: 10 additions & 14 deletions
@@ -4,7 +4,7 @@ description: Funnel Analysis
 ---
 
 <p align="center">
-<img src="https://datafuse-1253727613.cos.ap-hongkong.myqcloud.com/learn/databend-funnel.png" width="550"/>
+<img alt="Databend Funnel Analysis" src="https://datafuse-1253727613.cos.ap-hongkong.myqcloud.com/learn/databend-funnel.png" width="550"/>
 </p>
 
 ## WINDOW_FUNNEL

@@ -13,25 +13,24 @@ Similar to `windowFunnel` in ClickHouse (they were created by the same author),
 
 The function works according to the algorithm:
 
-- The function searches for data that triggers the first condition in the chain and sets the event counter to 1. This is the moment when the sliding window starts.
+- The function searches for data that triggers the first condition in the chain and sets the event counter to 1. This is the moment when the sliding window starts.
 
-- If events from the chain occur sequentially within the window, the counter is incremented. If the sequence of events is disrupted, the counter isn't incremented.
-
-- If the data has multiple event chains at varying completion points, the function will only output the size of the longest chain.
+- If events from the chain occur sequentially within the window, the counter is incremented. If the sequence of events is disrupted, the counter isn't incremented.
 
+- If the data has multiple event chains at varying completion points, the function will only output the size of the longest chain.
 
 ```sql
 WINDOW_FUNNEL( <window> )( <timestamp>, <cond1>, <cond2>, ..., <condN> )
 ```
 
 **Arguments**
 
-- `<timestamp>` — Name of the column containing the timestamp. Data types supported: integer types and datetime types.
-- `<cond>` — Conditions or data describing the chain of events. Must be `Boolean` datatype.
+- `<timestamp>` — Name of the column containing the timestamp. Data types supported: integer types and datetime types.
+- `<cond>` — Conditions or data describing the chain of events. Must be `Boolean` datatype.
 
 **Parameters**
 
-- `<window>` — Length of the sliding window, it is the time interval between the first and the last condition. The unit of `window` depends on the `timestamp` itself and varies. Determined using the expression `timestamp of cond1 <= timestamp of cond2 <= ... <= timestamp of condN <= timestamp of cond1 + window`.
+- `<window>` — Length of the sliding window, it is the time interval between the first and the last condition. The unit of `window` depends on the `timestamp` itself and varies. Determined using the expression `timestamp of cond1 <= timestamp of cond2 <= ... <= timestamp of condN <= timestamp of cond1 + window`.

@@ -40,7 +39,6 @@ All the chains in the selection are analyzed.
 
 Type: `UInt8`.
 
-
 **Example**
 
 Determine if a set period of time is enough for the user to SELECT a phone and purchase it twice in the online store.

@@ -52,7 +50,6 @@ Set the following chain of events:
 3. The user adds to the shopping cart (`event_name = 'cart'`).
 4. The user completes the purchase (`event_name = 'purchase'`).
 
-
 ```sql
 CREATE TABLE events(user_id BIGINT, event_name VARCHAR, event_timestamp TIMESTAMP);
 
@@ -124,7 +121,6 @@ Result:
 +-------+-------+
 ```
 
-* User `100126` level is 2 (`login -> visit`).
-* user `100125` level is 3 (`login -> visit -> cart`).
-* User `100123` level is 4 (`login -> visit -> cart -> purchase`).
-
+- User `100126` level is 2 (`login -> visit`).
+- User `100125` level is 3 (`login -> visit -> cart`).
+- User `100123` level is 4 (`login -> visit -> cart -> purchase`).
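The counting algorithm this file describes can be sketched in a few lines (Python, purely illustrative; this is a simplified model of `WINDOW_FUNNEL`, not Databend's implementation, and it assumes events are already sorted by timestamp):

```python
def window_funnel(window, events, chain):
    """events: sorted list of (timestamp, event_name) pairs.
    chain: ordered list of event names (cond1..condN).
    Returns the longest prefix of `chain` matched in order, with all
    matched events within `window` of the first one."""
    best = 0
    for i, (t0, name) in enumerate(events):
        if name != chain[0]:
            continue  # a window only opens on the first condition
        level = 1
        for t, later in events[i + 1:]:
            if t - t0 > window:
                break  # outside the sliding window
            if level < len(chain) and later == chain[level]:
                level += 1
        best = max(best, level)
    return best

chain = ["login", "visit", "cart", "purchase"]
# mirrors user 100123 from the example (timestamps are made up)
print(window_funnel(10, [(1, "login"), (2, "visit"), (3, "cart"), (4, "purchase")], chain))  # 4
# mirrors user 100126: only login -> visit inside the window
print(window_funnel(10, [(1, "login"), (2, "visit")], chain))  # 2
```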
