
Conversation

@bramdoppen
Member

Description

This PR introduces an example function for the Media Library. Its purpose is to demonstrate Media Library functions, how to properly access image assets via the asset container, and how to generate localized alt-text.

The example intentionally focuses on clarity and guidance rather than prescribing a rigid pattern. It uses agent.action.prompt and enforces JSON-shaped output, but because AI models may not always return strict JSON, it should be viewed as a working example rather than a guaranteed contract. Anyone may also add their own validation that enforces JSON; the function is designed flexibly enough for that.
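
Because the model is only instructed (not guaranteed) to return JSON, a defensive parsing step is a natural companion to this pattern. The sketch below is illustrative and not part of the PR; the helper name `parseJsonReply` and its fallback strategy are assumptions about one reasonable way to handle non-strict replies:

```typescript
// Defensively parse a model reply that was instructed to return JSON.
// Models sometimes wrap JSON in ```json fences or add surrounding prose,
// so we strip fences and fall back to the first {...} span before parsing.
export function parseJsonReply<T>(reply: string): T | undefined {
  // Strip markdown code fences like ```json ... ```
  const unfenced = reply.replace(/```(?:json)?/g, '').trim()
  const candidates = [unfenced]

  // Fall back to the first {...} span if the whole string is not valid JSON.
  const start = unfenced.indexOf('{')
  const end = unfenced.lastIndexOf('}')
  if (start !== -1 && end > start) {
    candidates.push(unfenced.slice(start, end + 1))
  }

  for (const candidate of candidates) {
    try {
      return JSON.parse(candidate) as T
    } catch {
      // Not valid JSON; try the next candidate.
    }
  }
  return undefined
}
```

Returning `undefined` on failure lets the caller decide whether to retry the prompt or skip the write, which matches the "working example, not a contract" framing above.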

It also avoids using agent.action.translate because that action currently expects a full document and is not yet compatible with Media Library use cases. Once Media Library and agent actions evolve, we can update this example accordingly. Until then, this version provides valuable early guidance and unblocks anyone who needs a functional starting point.

The PR also introduces an internationalized array structure for alt text (mirroring the Studio’s internationalized array plugin), so the frontend can query language-tagged items in a familiar format. This format is approved internally by the Media Library team.
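
For reference, the internationalized array plugin stores one item per language, keyed by the language id. A minimal sketch of building that structure from a plain language-to-text map (the helper name `toInternationalizedArray` is illustrative, not the PR's actual code):

```typescript
// Item shape used by the internationalized array plugin: one entry per
// language, with _key holding the language id (e.g. 'en' or 'nl').
type InternationalizedStringItem = {
  _key: string
  value: string
}

// Build the internationalized array from a plain language → text map,
// so the frontend can query language-tagged items in the familiar format.
export function toInternationalizedArray(
  values: Record<string, string>,
): InternationalizedStringItem[] {
  return Object.entries(values).map(([language, value]) => ({
    _key: language,
    value,
  }))
}
```

With this shape, a frontend can select a language with a GROQ filter along the lines of `altText[_key == $language][0].value`.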

At the end is a video demonstration showing how the alt-text appears after upload.


What to review

  • Review the example function with the understanding that it is inspirational rather than prescriptive.
  • Confirm that dereferencing from the asset container to the image asset is correct and aligns with how Media Library stores keywords on the actual image.
  • Ensure that the use of agent.action.prompt is appropriate given its current behavior and limitations.
  • Check that the explanations are clear enough for anyone who wants a practical starting point.

Testing

Automated tests were not added.

The function was manually tested in a Media Library environment and works end-to-end:

  • All text fields are generated for the provided languages
  • JSON output is enforced through prompt instructions (with the understanding that AI may occasionally deviate)
  • The structure was reviewed with the Media Library team to ensure consistency with expected behavior

This manual testing approach is sufficient for an inspirational example that everyone can adapt.


Notes for release

This adds an inspirational, fully working example function for the Media Library to help everyone understand:

  • How to access the underlying image asset from the asset container
  • How to generate multilingual values via agent.action.prompt
  • How to structure internationalized text arrays consistently

Limitations:

  • agent.action.prompt is instructed to produce JSON, but AI responses may still vary
  • agent.action.translate cannot be used yet because it requires a full document input
  • This is intended as an early example to help everyone get started; we can revise it as agent actions evolve

Video:

Screen.Recording.2025-11-28.at.14.46.14.mov

@vercel

vercel bot commented Nov 28, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| page-building-studio | Ready | Preview | Comment | Nov 28, 2025 3:22pm |
| test-studio | Ready | Preview | Comment | Nov 28, 2025 3:22pm |

3 Skipped Deployments

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| e2e-studio | Ignored | – | – | Nov 28, 2025 3:22pm |
| studio-workshop | Ignored | Preview | – | Nov 28, 2025 3:22pm |
| test-next-studio | Ignored | – | – | Nov 28, 2025 3:22pm |

@github-actions
Contributor

github-actions bot commented Nov 28, 2025

🧪 E2E Preview environment

🔑 Environment Variables for Local Testing

This is the preview URL for the E2E tests: https://e2e-studio-7tfvw5ddb.sanity.dev

To run the E2E tests locally, you can use the following environment variables, then run pnpm test:e2e --ui to open the Playwright test runner.

💬 Remember to build the project first with pnpm build:e2e.

  SANITY_E2E_PROJECT_ID=ittbm412
  SANITY_E2E_BASE_URL=https://e2e-studio-7tfvw5ddb.sanity.dev
  SANITY_E2E_DATASET="update depending on the project you want to test (pr-11337-chromium-19767755953 || pr-11337-firefox-19767755953)"
  SANITY_E2E_DATASET_CHROMIUM=pr-11337-chromium-19767755953
  SANITY_E2E_DATASET_FIREFOX=pr-11337-firefox-19767755953

@github-actions
Contributor

github-actions bot commented Nov 28, 2025

📊 Playwright Test Report

Download Full E2E Report

This report contains test results, including videos of failing tests.

@github-actions
Contributor

github-actions bot commented Nov 28, 2025

⚡️ Editor Performance Report

Updated Fri, 28 Nov 2025 15:35:27 GMT

| Benchmark | reference latency (sanity@latest) | experiment latency (this branch) | Δ (%) latency difference |
| --- | --- | --- | --- |
| article (title) | 27.4 efps (37ms) | 29.4 efps (34ms) | -3ms (-6.8%) |
| article (body) | 39.5 efps (25ms) | 38.7 efps (26ms) | +1ms (+2.2%) |
| article (string inside object) | 28.6 efps (35ms) | 29.4 efps (34ms) | -1ms (-2.9%) |
| article (string inside array) | 26.3 efps (38ms) | 24.4 efps (41ms) | +3ms (+7.9%) |
| recipe (name) | 43.5 efps (23ms) | 47.6 efps (21ms) | -2ms (-8.7%) |
| recipe (description) | 71.4 efps (14ms) | 71.4 efps (14ms) | +0ms (-/-%) |
| recipe (instructions) | 99.9+ efps (6ms) | 99.9+ efps (5ms) | -1ms (-/-%) |
| singleString (stringField) | 58.8 efps (17ms) | 62.5 efps (16ms) | -1ms (-5.9%) |
| synthetic (title) | 17.2 efps (58ms) | 17.5 efps (57ms) | -1ms (-1.7%) |
| synthetic (string inside object) | 18.5 efps (54ms) | 17.7 efps (57ms) | +3ms (+4.6%) |

efps — editor "frames per second". The number of updates assumed to be possible within a second.

Derived from input latency. efps = 1000 / input_latency
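
The formula above is direct to compute; a tiny illustrative helper (not part of the benchmark harness):

```typescript
// efps = 1000 / input_latency: how many keystroke updates fit in one
// second at the measured input latency (in milliseconds).
export function efps(inputLatencyMs: number): number {
  return 1000 / inputLatencyMs
}
```

For example, a 40ms median input latency corresponds to 25 efps.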

Detailed information

🏠 Reference result

The performance result of sanity@latest

| Benchmark | latency | p75 | p90 | p99 | blocking time | test duration |
| --- | --- | --- | --- | --- | --- | --- |
| article (title) | 37ms | 42ms | 64ms | 89ms | 15ms | 9.5s |
| article (body) | 25ms | 32ms | 64ms | 121ms | 124ms | 5.9s |
| article (string inside object) | 35ms | 39ms | 50ms | 84ms | 9ms | 5.8s |
| article (string inside array) | 38ms | 40ms | 48ms | 90ms | 0ms | 6.0s |
| recipe (name) | 23ms | 26ms | 33ms | 54ms | 0ms | 7.3s |
| recipe (description) | 14ms | 18ms | 20ms | 22ms | 0ms | 4.0s |
| recipe (instructions) | 6ms | 10ms | 11ms | 26ms | 0ms | 3.0s |
| singleString (stringField) | 17ms | 19ms | 22ms | 35ms | 0ms | 6.7s |
| synthetic (title) | 58ms | 61ms | 66ms | 114ms | 216ms | 16.0s |
| synthetic (string inside object) | 54ms | 57ms | 62ms | 261ms | 445ms | 7.5s |

🧪 Experiment result

The performance result of this branch

| Benchmark | latency | p75 | p90 | p99 | blocking time | test duration |
| --- | --- | --- | --- | --- | --- | --- |
| article (title) | 34ms | 37ms | 39ms | 77ms | 24ms | 9.0s |
| article (body) | 26ms | 36ms | 78ms | 122ms | 134ms | 6.2s |
| article (string inside object) | 34ms | 37ms | 55ms | 100ms | 13ms | 5.8s |
| article (string inside array) | 41ms | 46ms | 62ms | 96ms | 3ms | 6.2s |
| recipe (name) | 21ms | 24ms | 26ms | 44ms | 0ms | 7.2s |
| recipe (description) | 14ms | 17ms | 20ms | 31ms | 0ms | 4.0s |
| recipe (instructions) | 5ms | 9ms | 11ms | 14ms | 0ms | 3.0s |
| singleString (stringField) | 16ms | 19ms | 21ms | 26ms | 0ms | 7.1s |
| synthetic (title) | 57ms | 58ms | 60ms | 88ms | 197ms | 14.3s |
| synthetic (string inside object) | 57ms | 59ms | 71ms | 264ms | 476ms | 7.5s |

📚 Glossary

column definitions

  • benchmark — the name of the test, e.g. "article", followed by the label of the field being measured, e.g. "(title)".
  • latency — the time between when a key was pressed and when it was rendered, derived from a set of samples. the median (p50) is shown, as it represents the typical latency.
  • p75 — the 75th percentile of the input latency in the test run. 75% of the sampled inputs in this benchmark were processed faster than this value. this provides insight into the upper range of typical performance.
  • p90 — the 90th percentile of the input latency in the test run. 90% of the sampled inputs were faster than this. this metric helps identify slower interactions that occurred less frequently during the benchmark.
  • p99 — the 99th percentile of the input latency in the test run. only 1% of sampled inputs were slower than this. this represents the worst-case scenarios encountered during the benchmark, useful for identifying potential performance outliers.
  • blocking time — the total time during which the main thread was blocked, preventing user input and UI updates. this metric helps identify performance bottlenecks that may cause the interface to feel unresponsive.
  • test duration — how long the test run took to complete.
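
The percentile columns above can be computed from the raw latency samples. A minimal sketch using the nearest-rank method (one common convention; the report does not specify which interpolation it actually uses):

```typescript
// Nearest-rank percentile: sort the samples ascending and take the value
// at rank ceil(p/100 * n). p50 gives the median latency; p75, p90, and
// p99 give the upper range described in the glossary.
export function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples')
  const sorted = [...samples].sort((a, b) => a - b)
  const rank = Math.ceil((p / 100) * sorted.length)
  return sorted[Math.max(0, rank - 1)]
}
```

For ten samples, p99 simply lands on the slowest sample, which is why p99 captures worst-case behavior in short runs.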
