Commit fa871ba

improve public documentation for authoring an integration

1 parent 1b0878f

13 files changed: +278 -107 lines

docs/extend/_publish_an_integration.md

Lines changed: 19 additions & 12 deletions
@@ -3,32 +3,39 @@ mapped_pages:
  - https://www.elastic.co/guide/en/integrations-developer/current/_publish_an_integration.html
---

-# Publish an integration [_publish_an_integration]
+# Publish an integration via Pull Request [_publish_an_integration]

-When your integration is done, it’s time to open a PR to include it in the integrations repository. Before opening your PR, run:
+When your integration is done, it’s time to open a PR to include it in the integrations repository.
+Before opening your PR, make sure you have:

+1. Passed all checks
+Run:
```bash
elastic-package check
```

-The `check` command ensures the package is built correctly, formatted properly, and aligned with the spec. Passing the `check` command is required before adding your integration to the repository.
+This command validates that your package is built correctly, formatted properly, and aligned with the specification. Passing this check is required before submitting your integration.

-When CI is happy, merge your PR into the integrations repository.
+2. Added a new entry to `changelog.yml`
+Update the package’s `changelog.yml` with a clear description of your changes for the new version (a sketch of such an entry follows this diff).

-CI will kick off a build job for the main branch, which can release your integration to the package-storage. It means that it will open a PR to the Package Storage/snapshot with the built integration, but only if the package version doesn’t already exist in the storage (hasn’t been released yet).
+3. Bumped the package version
+If you are releasing new changes, increment the version in your `manifest.yml` file (a sketch follows this diff). This is required for the package to be published.

+4. Written a clear PR title and description
+- Use a concise, descriptive title (e.g., `[New Integration] Add Acme Logs integration`).
+- In the PR description, summarize what your integration or change does, list key features or fixes, reference related issues, and provide testing instructions.
+- Ensure your documentation, sample events, and tests are included and up to date.

-## Promote [_promote]
+::::{tip}
+A well-written PR with clear documentation, versioning, and testing instructions will speed up the review and publishing process!
+::::

-Now that you’ve tested your integration with {{kib}}, it’s time to promote it to staging or production. Run:

-```bash
-elastic-package promote
-```
+When CI is happy, merge your PR into the integrations repository.

-The tool will open 2 pull requests (promote and delete) to the package-storage: target and source branches.
+Once the PR with the new version of the package is merged, the required CI pipelines are triggered to release that new version to Package Storage V2 and make it available at https://epr.elastic.co.

-Please review both pull requests on your own, check if CI is happy and merge - first target, then source. Once any PR is merged, the CI will kick off a job to bake a new Docker image of package-storage (tracking). Ideally the "delete" PR should be merged once the CI job for "promote" is done, as the Docker image of the previous stage depends on the latter one.

::::{tip}
When you are ready for your changes in the integration to be released, remember to bump up the package version. It is up to you, as the package developer, to decide how many changes you want to release in a single version. For example, you could implement a change in a PR and bump up the package version in the same PR. Or you could implement several changes across multiple pull requests and then bump up the package version in the last of these pull requests or in a separate follow up PR.
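
For reference, a new `changelog.yml` entry (step 2 above) could look like the following sketch. The version number, description text, and PR link are hypothetical placeholders, not part of this commit:

```yaml
# changelog.yml — newest entries go at the top of the file
- version: "1.1.0"
  changes:
    - description: Add TLS support to the log collection input
      type: enhancement  # one of: enhancement, bugfix, breaking-change
      link: https://github.com/elastic/integrations/pull/12345
```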
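
The version bump from step 3 is then a one-line change in the package’s root `manifest.yml`; the package name and versions below are likewise hypothetical:

```yaml
# manifest.yml (package root) — excerpt
name: acme_logs
title: Acme Logs
version: "1.1.0"  # incremented from "1.0.0"; must match the new changelog.yml entry
```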

docs/extend/add-data-stream.md

Lines changed: 23 additions & 8 deletions
@@ -20,24 +20,39 @@ apache

A data stream defines multiple {{es}} assets, like index templates, ingest pipelines, and field definitions. These assets are loaded into {{es}} when a user installs an integration using the {{fleet}} UI in {{kib}}.

-A data stream also defines a policy template. Policy templates include variables that allow users to configure the data stream using the {{fleet}} UI in {{kib}}. Then, the {{agent}} interprets the resulting policy to collect relevant information from the product or service being observed. Policy templates can also define an integration’s supported [`deployment_modes`](/extend/define-deployment-modes.md#deployment_modes).
+A data stream also defines a policy template. Policy templates include variables that allow users to configure the data stream using the {{fleet}} UI in {{kib}}. Then, the {{agent}} interprets the resulting policy to collect relevant information from the product or service being observed. Policy templates can also define an integration’s supported [`deployment_modes`](/extend/define-deployment-modes.md#set-deployment-modes).

See [data streams](docs-content://reference/fleet/data-streams.md) for more information.

::::

+## How to add a data stream [how-to]

-Bootstrap a new data stream using the TUI wizard. In the directory of your package, run:
+1. Bootstrap a new data stream
+In your package directory, run:

```bash
elastic-package create data-stream
```

-Follow the prompts to name, title, and select your data stream type. Then, run this command each time you add a new data stream to your integration.
+Follow the prompts to set the name, title, and type (logs, metrics, etc.) for the data stream. Repeat this command for each new data stream you want to add.
+2. Configure the data stream
+After bootstrapping, manually adjust the generated files to suit your use case (sketches of each file follow this diff):
+* Define required variables:
+In the policy template, specify variables that users can configure (e.g., paths, ports, log levels).
+* Define used fields:
+Edit the `fields/` files to describe the structure and types of data your stream will ingest.
+* Define ingest pipeline definitions:
+If needed, create or update ingest pipelines to parse, enrich, or transform incoming data before it’s indexed.
+* Update the {{agent}} stream configuration:
+Ensure the {{agent}}’s stream configuration matches your data collection requirements and references the correct variables and pipelines.

-Next, manually adjust the data stream:
+3. Understand how data streams are used

-* define required variables
-* define used fields
-* define ingest pipeline definitions (if necessary)
-* update the {{agent}}'s stream configuration
+* When the integration is installed, each data stream is registered in {{es}} as a managed, time-based resource.
+* Data sent to the data stream is automatically routed to the correct backing indices, with lifecycle management (rollover, retention) handled by {{es}}.
+* Users can query, visualize, and analyze data from each stream in {{kib}}, using the single data stream name (e.g., `logs-apache.access`).
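
To make the configuration bullets above concrete, here is a sketch of a data stream `manifest.yml` that defines a user-configurable variable. The data stream, input, and variable names are hypothetical:

```yaml
# data_stream/access/manifest.yml — hypothetical example
title: Acme access logs
type: logs
streams:
  - input: logfile
    title: Acme access logs
    description: Collect Acme access logs from files.
    vars:
      - name: paths         # exposed to users in the {{fleet}} UI
        type: text
        title: Paths
        multi: true
        required: true
        show_user: true
        default:
          - /var/log/acme/access.log*
```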
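
Field definitions live in the `fields/` files of the data stream directory; a minimal sketch, with hypothetical field names:

```yaml
# data_stream/access/fields/fields.yml — hypothetical example
- name: acme
  type: group
  fields:
    - name: status_code
      type: keyword
      description: HTTP status code returned by the Acme service.
    - name: response_time_ms
      type: long
      description: Time taken to serve the request, in milliseconds.
```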
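
If the data stream needs server-side parsing, an ingest pipeline can be defined as YAML; a minimal sketch, where the grok pattern and field names are hypothetical:

```yaml
# data_stream/access/elasticsearch/ingest_pipeline/default.yml — hypothetical example
description: Pipeline for parsing Acme access logs.
processors:
  # extract the client address and status code from the raw message
  - grok:
      field: message
      patterns:
        - '%{IPORHOST:source.address} %{NUMBER:http.response.status_code:int}'
  # preserve the raw log line in event.original
  - rename:
      field: message
      target_field: event.original
      ignore_missing: true
on_failure:
  - set:
      field: error.message
      value: '{{{ _ingest.on_failure_message }}}'
```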
