diff --git a/src/content/docs/sandbox/guides/execute-commands.mdx b/src/content/docs/sandbox/guides/execute-commands.mdx
index 2e465f32d0574d1..c7f8f461722d42c 100644
--- a/src/content/docs/sandbox/guides/execute-commands.mdx
+++ b/src/content/docs/sandbox/guides/execute-commands.mdx
@@ -133,6 +133,46 @@
 await sandbox.exec('python /workspace/analyze.py data.csv');
 ```
 
+## Work with large output
+
+The SDK does not impose output size limits, so you can capture large files and datasets directly:
+
+```ts
+// Read a large file and capture its full contents in memory
+const result = await sandbox.exec('cat large-dataset.csv');
+console.log('Dataset size:', result.stdout.length, 'bytes');
+
+// Generate large output on disk, then read it back
+await sandbox.exec('python generate-report.py > /tmp/large-report.json');
+const report = await sandbox.readFile('/tmp/large-report.json');
+
+// Encode binary data as text, then read it back
+await sandbox.exec('base64 video.mp4 > /tmp/encoded.txt');
+const encoded = await sandbox.readFile('/tmp/encoded.txt');
+```
+
+:::note
+While there are no artificial output size limits, be mindful of Worker memory constraints when processing very large outputs. For extremely large datasets, consider streaming results (sketched below) or writing them to files instead of capturing all output in memory.
+:::
+
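+If capturing a command's entire output in memory is a concern, stream it as it is produced instead. A minimal sketch, assuming your version of `@cloudflare/sandbox` exposes the streaming variant `execStream()` and the `parseSSEStream()` helper:
+
+```ts
+import { parseSSEStream, type ExecEvent } from '@cloudflare/sandbox';
+
+// Assumed streaming API - verify these names against your SDK version
+const stream = await sandbox.execStream('python generate-report.py');
+
+let outputLength = 0;
+for await (const event of parseSSEStream<ExecEvent>(stream)) {
+  if (event.type === 'stdout') {
+    // Handle each chunk as it arrives instead of buffering the whole output
+    outputLength += event.data.length;
+  }
+}
+console.log('Total streamed output:', outputLength, 'characters');
+```
+
 ## Best practices
 
 - **Check exit codes** - Always verify `result.success` and `result.exitCode`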