
Commit a584249

Linh Nguyen authored and GitHub Enterprise committed
Merge branch 'main' into PLAT-227083/Dataflow-schedule-behaviour-not-stated-in-documentation
2 parents 377f69a + 60e8112 commit a584249

File tree

152 files changed: +721 additions, −206 deletions


help/accessibility/features.md

Lines changed: 1 addition & 0 deletions
@@ -18,6 +18,7 @@ Users with disabilities frequently rely on hardware and software, known as assis
 Experience Platform strives to support full keyboard accessibility.
 
 The following navigational elements facilitate accessibility:
+
 * The Tab key moves between UI elements, sections, and menu groups.
 * Arrow keys move within menu groups to set focus to individual active elements.
 * Shift + Tab moves backwards through the tab order.
File renamed without changes.

help/data-prep/functions.md

Lines changed: 1 addition & 1 deletion
@@ -242,7 +242,7 @@ For information on the object copy feature, see the section [below](#object-copy
 
 | Function | Description | Parameters | Syntax | Expression | Sample output |
 | -------- | ----------- | ---------- | -------| ---------- | ------------- |
-| json_to_object | Deserialize JSON content from the given string. | <ul><li>STRING: **Required** The JSON string to be deserialized.</li></ul> | json_to_object​(STRING) | json_to_object​({"info":{"firstName":"John","lastName": "Doe"}}) | An object representing the JSON. |
+| json_to_object | Deserialize JSON content from the given string. | <ul><li>STRING: **Required** The JSON string to be deserialized.</li></ul> | json_to_object​(STRING) | `json_to_object​({"info":{"firstName":"John","lastName": "Doe"}})` | An object representing the JSON. |
 
 {style="table-layout:auto"}
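For orientation, `json_to_object` behaves like standard JSON deserialization. A minimal [!DNL Python] sketch of the equivalent operation, using only the standard library (this illustrates the behaviour, not the data prep runtime itself):

```python
import json

# The same JSON string used in the expression example above
raw = '{"info": {"firstName": "John", "lastName": "Doe"}}'

# json.loads deserializes the string into a nested dict,
# analogous to the object that json_to_object returns
obj = json.loads(raw)
print(obj["info"]["firstName"])  # John
```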

help/data-science-workspace/authoring/feature-pipeline.md

Lines changed: 5 additions & 4 deletions
@@ -34,10 +34,11 @@ The following workflow takes place when a feature pipeline is run:
 ## Getting started
 
 To run a recipe in any organization, the following is required:
-- An input dataset.
-- The Schema of the dataset.
-- A transformed schema and an empty dataset based on that schema.
-- An output schema and an empty dataset based on that schema.
+
+- An input dataset.
+- The Schema of the dataset.
+- A transformed schema and an empty dataset based on that schema.
+- An output schema and an empty dataset based on that schema.
 
 All of the above datasets need to be uploaded to the [!DNL Experience Platform] UI. To set this up, use the Adobe-provided [bootstrap script](https://github.com/adobe/experience-platform-dsw-reference/tree/master/bootstrap).

help/data-science-workspace/home.md

Lines changed: 1 addition & 0 deletions
@@ -27,6 +27,7 @@ Today's enterprise puts a high priority on mining big data for predictions and i
 As important as it is, getting from data to insights can come at a high cost. It typically requires skilled data scientists who conduct intensive and time-consuming data research to develop machine-learning models, or recipes, which power intelligent services. The process is lengthy, the technology is complex, and skilled data scientists can be hard to find.
 
 With [!DNL Data Science Workspace], Adobe Experience Platform allows you to bring experience-focused AI across the enterprise, streamlining and accelerating data-to-insights-to-code with:
+
 - A machine learning framework and runtime
 - Integrated access to your data stored in Adobe Experience Platform
 - A unified data schema built on [!DNL Experience Data Model] (XDM)

help/data-science-workspace/jupyterlab/access-notebook-data.md

Lines changed: 2 additions & 0 deletions
@@ -529,7 +529,9 @@ df1.show(10)
 You can auto-generate the above example in JupyterLab by using the following method:
 
 Select the Data icon tab (highlighted below) in the left-navigation of JupyterLab. The **[!UICONTROL Datasets]** and **[!UICONTROL Schemas]** directories appear. Select **[!UICONTROL Datasets]** and right-click, then select the **[!UICONTROL Explore Data in Notebook]** option from the dropdown menu on the dataset you wish to use. An executable code entry appears at the bottom of your notebook.
+
 And
+
 - Use **[!UICONTROL Explore Data in Notebook]** to generate a read cell.
 - Use **[!UICONTROL Write Data in Notebook]** to generate a write cell.
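For orientation, a minimal sketch of the kind of read cell this generates with the `platform_sdk` data loader in a [!DNL Python] notebook; the `client_context` variable and the placeholder dataset ID are illustrative assumptions here, and the cell generated in your own notebook is the authoritative form:

```python
# Hypothetical auto-generated read cell (names and placeholders are illustrative)
from platform_sdk.dataset_reader import DatasetReader

# client_context is provided by the notebook environment; the dataset ID is a placeholder
dataset_reader = DatasetReader(client_context, dataset_id="<DATASET_ID>")
df0 = dataset_reader.read()
df0.head()
```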

help/data-science-workspace/jupyterlab/analyze-your-data.md

Lines changed: 3 additions & 0 deletions
@@ -114,6 +114,7 @@ If you restart your kernel and run all the cells again, you should get the same
 ### Explore your data
 
 Now that we can access your data, let's focus on the data itself by using statistics and visualization. The dataset that we are using is a retail dataset which gives miscellaneous information about 45 different stores on a given day. Some characteristics for a given `date` and `store` include the following:
+
 - `storeType`
 - `weeklySales`
 - `storeSize`
@@ -155,6 +156,7 @@ This means 22 stores are of `storeType` `A`, 17 are `storeType` `B`, and 6 are `
 #### Data visualization
 
 Now that we know our data frame values, we want to supplement this with visualizations to make things clearer and easier to identify patterns. Graphs are also useful when conveying results to an audience. Some [!DNL Python] libraries which are useful for visualization include:
+
 - [Matplotlib](https://matplotlib.org/)
 - [pandas](https://pandas.pydata.org/)
 - [seaborn](https://seaborn.pydata.org/)
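As a rough illustration of how these libraries fit the retail dataset described above, a minimal sketch assuming a pandas data frame `df` already loaded with the columns listed earlier (`storeType`, `weeklySales`, `storeSize`):

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Assumes `df` is the retail data frame loaded earlier in the tutorial
print(df.describe())                   # summary statistics for numeric columns
print(df["storeType"].value_counts())  # number of stores per store type (A, B, C)

# Distribution of weekly sales per store type
sns.boxplot(x="storeType", y="weeklySales", data=df)
plt.title("Weekly sales by store type")
plt.show()
```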
@@ -195,6 +197,7 @@ Notice the diagonal of 1's down the center. This shows that when comparing a var
 ## Next steps
 
 This tutorial went over how to create a new Jupyter Notebook in the Data Science Workspace and how to access data externally as well as from [!DNL Adobe Experience Platform]. Specifically, we went over the following steps:
+
 - Create a new Jupyter Notebook
 - Access datasets and schemas
 - Explore datasets

help/data-science-workspace/jupyterlab/create-a-model.md

Lines changed: 2 additions & 0 deletions
@@ -133,6 +133,7 @@ For an in-depth tutorial on using the `platform_sdk` data loader, please visit t
 ### External sources {#external-sources}
 
 This section shows you how to import a JSON or CSV file to a pandas object. Official documentation from the pandas library can be found here:
+
 - [read_csv](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)
 - [read_json](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html)
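For quick reference, a minimal sketch of both pandas loaders; the file paths below are placeholders:

```python
import pandas as pd

# Load a local CSV file into a pandas DataFrame (path is a placeholder)
df_csv = pd.read_csv("data/retail.csv")

# Load a local JSON file into a pandas DataFrame (path is a placeholder)
df_json = pd.read_json("data/retail.json")

print(df_csv.head())
print(df_json.head())
```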

@@ -172,6 +173,7 @@ def load(config_properties):
 >[!NOTE]
 >
 >As mentioned in the [Configuration File section](#configuration-files), the following configuration parameters are set for you when you access data from Experience Platform using `client_context = get_client_context(config_properties)`:
+>
 > - `ML_FRAMEWORK_IMS_USER_CLIENT_ID`
 > - `ML_FRAMEWORK_IMS_TOKEN`
 > - `ML_FRAMEWORK_IMS_ML_TOKEN`

help/data-science-workspace/jupyterlab/overview.md

Lines changed: 1 addition & 0 deletions
@@ -258,6 +258,7 @@ For a list of supported packages in Python, R, and PySpark, copy and paste `!con
 ![example](../images/jupyterlab/user-guide/libraries.PNG)
 
 In addition, the following dependencies are used but not listed:
+
 * CUDA 11.2
 * CUDNN 8.1
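If you want to confirm what the kernel environment actually provides, a small notebook-cell sketch; the filter pattern is only an example and assumes a standard Linux shell is available to the notebook:

```python
# Notebook cell: list installed packages and keep only CUDA/cuDNN-related entries
!conda list | grep -i -E "cuda|cudnn"
```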

help/data-science-workspace/models-recipes/create-retails-sales-dataset.md

Lines changed: 2 additions & 0 deletions
@@ -20,6 +20,7 @@ This tutorial provides you with the prerequisites and assets required for all ot
 ## Getting started
 
 Before starting this tutorial, you must have the following prerequisites:
+
 - Access to [!DNL Adobe Experience Platform]. If you do not have access to an organization in [!DNL Experience Platform], please speak to your system administrator before proceeding.
 - Authorization to make [!DNL Experience Platform] API calls. Complete the [Authenticate and access Adobe Experience Platform APIs](https://www.adobe.com/go/platform-api-authentication-en) tutorial to obtain the following values in order to successfully complete this tutorial:
   - Authorization: `{ACCESS_TOKEN}`
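For orientation, a minimal [!DNL Python] sketch of how these credentials are typically passed to an [!DNL Experience Platform] API call; the companion header names (`x-api-key`, `x-gw-ims-org-id`) and the Catalog endpoint are shown for illustration, and the placeholders should be replaced with the values from the authentication tutorial:

```python
import requests

# Placeholders: substitute the values obtained from the authentication tutorial
ACCESS_TOKEN = "<ACCESS_TOKEN>"
API_KEY = "<API_KEY>"
ORG_ID = "<ORG_ID>"

# Typical Experience Platform API request headers: bearer token plus
# API key and organization ID as companion headers
headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "x-api-key": API_KEY,
    "x-gw-ims-org-id": ORG_ID,
}

# Example call: list datasets in the Catalog Service (endpoint shown for illustration)
response = requests.get(
    "https://platform.adobe.io/data/foundation/catalog/dataSets",
    headers=headers,
)
print(response.status_code)
```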
@@ -111,6 +112,7 @@ for more information.
 You have also successfully ingested Retail Sales sample data into [!DNL Experience Platform] using the provided bootstrap script.
 
 To continue working with the ingested data:
+
 - [Analyze your data using Jupyter Notebooks](../jupyterlab/analyze-your-data.md)
   - Use Jupyter Notebooks in Data Science Workspace to access, explore, visualize, and understand your data.
 - [Package source files into a Recipe](./package-source-files-recipe.md)
