feat: add extension templates #535
Conversation
Walkthrough

Adds many new deployment templates under `api/templates/`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as User/API
    participant Engine as TemplateEngine
    participant Executor as Executor
    participant Docker as DockerService
    rect rgb(245,250,255)
        Note over Engine,Executor: Typical template flow: pull → run → (optional) validate
    end
    UI->>Engine: choose template + variables
    Engine->>Executor: execute run sequence
    alt step uses docker compose pull
        Executor->>Docker: docker compose -f <file> pull
        Docker-->>Executor: success / fail
    else docker image pull
        Executor->>Docker: docker pull image:tag
        Docker-->>Executor: success / fail
        Executor->>Docker: docker run / docker compose up -d
        Docker-->>Executor: container started / error
    end
    alt validate step present
        Executor->>Executor: perform healthcheck (curl)
    end
    Executor-->>Engine: aggregated step results
    Engine-->>UI: final status
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 18
🧹 Nitpick comments (10)
api/templates/deploy-ente.yaml (2)
33-37: Use a safer script execution pattern instead of curl-piped-to-shell.

The command `sh -c "$(curl -fsSL {{ quickstart_url }})"` is an unsafe pattern that executes remote code without verification. While the URL is parameterized, this approach bypasses typical security controls. Better practices:

- Download the script to a temporary file
- Verify its integrity (checksum validation)
- Execute the verified script
- Clean up

This mitigates risks from compromised repositories, MITM attacks, or script tampering.

Example refactor:

```diff
 - name: "Download and execute Ente quickstart script"
   type: "command"
   properties:
-    cmd: "sh -c \"$(curl -fsSL {{ quickstart_url }})\""
+    cmd: |
+      SCRIPT_PATH="/tmp/ente_quickstart_$$.sh"
+      curl -fsSL "{{ quickstart_url }}" -o "$SCRIPT_PATH"
+      chmod +x "$SCRIPT_PATH"
+      "$SCRIPT_PATH"
+      rm "$SCRIPT_PATH"
```

Or require users to provide a checksum and validate it before execution.
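The download-verify-execute pattern can be sketched as follows, assuming a trusted checksum is supplied alongside the URL (the helper names and paths here are illustrative, not part of the template):

```shell
#!/bin/sh
# Sketch only: function names and paths are illustrative.

# verify_sha256 FILE EXPECTED: succeed only if FILE's SHA-256 digest matches.
verify_sha256() {
    actual="$(sha256sum "$1" | cut -d' ' -f1)"
    [ "$actual" = "$2" ]
}

# fetch_and_run URL EXPECTED: download, verify, execute, then clean up.
fetch_and_run() {
    script_path="$(mktemp /tmp/quickstart.XXXXXX)"
    curl -fsSL "$1" -o "$script_path"
    if ! verify_sha256 "$script_path" "$2"; then
        rm -f "$script_path"
        echo "checksum mismatch, refusing to execute" >&2
        return 1
    fi
    sh "$script_path"
    rm -f "$script_path"
}
```

With this shape, a tampered or truncated download is rejected before it ever runs, which is the control the curl-piped-to-shell pattern lacks.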
12-29: Add validation for the URL parameter to prevent injection attacks.

The `quickstart_url` variable lacks validation or constraints. Consider adding a pattern validation to ensure only HTTPS URLs from trusted sources (e.g., GitHub) are accepted:

```diff
 quickstart_url:
   type: "string"
   description: "URL of the Ente quickstart installation script"
   default: "https://raw.githubusercontent.com/ente-io/ente/main/server/quickstart.sh"
   is_required: true
+  pattern: "^https://.*\.sh$"  # If schema supports pattern validation
```

This prevents accidental or malicious use of non-HTTPS URLs or non-shell scripts.
api/templates/deploy-damselfly.yaml (1)
31-47: Document or validate hardcoded default paths for NAS deployment.

The default paths (`/volume1/dockerdata`, `/volume1/photo`) are specific to Synology NAS environments. While appropriate for that context, they may cause confusion or failures for non-NAS deployments. Consider adding a note in the description or adjusting defaults to be more portable:

```diff
 config_path:
   type: "string"
-  description: "Path to store Damselfly configuration and database files"
+  description: "Path to store Damselfly configuration and database files (e.g., /volume1/dockerdata for Synology, ~/damselfly for Linux)"
-  default: "/volume1/dockerdata/damselfly"
+  default: "./damselfly_config"
   is_required: true
```

Alternatively, add documentation explaining these are Synology-specific defaults.
api/templates/deploy-wikijs.yaml (2)
27-31: Avoid executing remote scripts via piped curl; use the package manager.

The Docker installation pattern `curl -fsSL https://get.docker.com | bash` executes arbitrary remote code. Better approach:

```diff
 - name: "Install Docker if missing"
   type: "command"
   properties:
-    cmd: "curl -fsSL https://get.docker.com | bash"
+    cmd: |
+      if ! command -v docker &>/dev/null; then
+        curl -fsSL https://get.docker.com | bash
+      fi
     timeout: 300
```

Even better: use the system package manager (apt, yum, etc.) rather than piping remote installation scripts. Consider checking whether Docker is already installed before attempting installation.
42-46: Handle the case where the install directory already exists.

`git clone` will fail if the directory exists. Add error handling or check before cloning:

```diff
 - name: "Clone Wiki.js repository"
   type: "command"
   properties:
-    cmd: "git clone https://github.com/Requarks/wiki {{ install_dir }}"
+    cmd: |
+      if [ ! -d "{{ install_dir }}" ]; then
+        git clone https://github.com/Requarks/wiki {{ install_dir }}
+      else
+        echo "Directory {{ install_dir }} already exists. Skipping clone."
+      fi
     timeout: 120
```

This prevents failures on re-runs and allows idempotent execution.
api/templates/deploy-stash.yaml (1)
33-46: Improve Docker installation error handling and idempotency.

The Docker installation commands lack safeguards. If Docker is already installed, the installation script still runs. Consider adding error checking to make the installation more robust. Apply this diff to improve error handling:

```diff
 - name: "Install Docker if not installed"
   type: "command"
   properties:
-    cmd: "curl -fsSL https://get.docker.com | bash"
+    cmd: |
+      if ! command -v docker &> /dev/null; then
+        curl -fsSL https://get.docker.com | bash
+      fi
     timeout: 300
 - name: "Install Docker Compose plugin if missing"
   type: "command"
   properties:
     cmd: |
       if ! docker compose version >/dev/null 2>&1; then
-        sudo apt-get update && sudo apt-get install -y docker-compose-plugin
+        sudo apt-get update && sudo apt-get install -y docker-compose-plugin || exit 1
       fi
     timeout: 300
```

api/templates/deploy-memo.yaml (1)
51-69: Validate the Docker image version format before file generation.

The `{{ image_version }}` variable is embedded directly into the Docker Compose file without validation. If a user provides an invalid image reference, the deployment will fail at runtime. Consider adding a pre-execution validation step, or clarifying in the variable description that it must be a valid Docker image reference (e.g., "namespace/image:tag").
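A rough pre-flight check of the image reference could look like this. This is a deliberately simplified pattern for illustration and does not implement Docker's full reference grammar; the function name is hypothetical:

```shell
#!/bin/sh
# Sketch: accept references like "name", "name:tag", "namespace/name:tag".
# Simplified pattern, not Docker's full reference grammar.
is_valid_image_ref() {
    printf '%s' "$1" | grep -Eq '^[a-z0-9]+([._/-][a-z0-9]+)*(:[A-Za-z0-9][A-Za-z0-9._-]*)?$'
}

# Example: fail fast before generating the compose file.
# is_valid_image_ref "$IMAGE_VERSION" || { echo "invalid image reference" >&2; exit 1; }
```

Rejecting malformed input before the compose file is written turns a confusing runtime pull failure into an immediate, actionable error.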
api/templates/deploy-penpot.yaml (2)
45-55: Add conditional checks for Docker installation consistency.

Unlike deploy-stash.yaml and deploy-supabase.yaml, this template unconditionally runs Docker and Docker Compose installation commands. This is inefficient if Docker is already installed. For consistency and robustness, add checks before installation:

```diff
 - name: "Install Docker (if not installed)"
   type: "command"
   properties:
-    cmd: "curl -fsSL https://get.docker.com | bash"
+    cmd: |
+      if ! command -v docker &> /dev/null; then
+        curl -fsSL https://get.docker.com | bash
+      fi
     timeout: 300
 - name: "Install Docker Compose Plugin (if not installed)"
   type: "command"
   properties:
-    cmd: "sudo apt-get update && sudo apt-get install -y docker-compose-plugin"
+    cmd: |
+      if ! docker compose version >/dev/null 2>&1; then
+        sudo apt-get update && sudo apt-get install -y docker-compose-plugin
+      fi
     timeout: 300
```
25-29: Remove the unused docker_image_tag variable.

The `docker_image_tag` variable at lines 25-29 is defined but never used in the execution steps. Either use it in the docker compose workflow or remove it to avoid confusion. If it should be used to override image versions in the downloaded compose file, add a sed step to update the image tags.
api/templates/deploy-photoprism.yaml (1)
37-41: Remove the unused docker_memory_limit variable.

The `docker_memory_limit` variable is defined but never referenced in any execution step. Remove it or, if it is intended to limit container memory, add the necessary steps to apply this limit to the running containers.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
- api/templates/deploy-apache-superset.yaml (1 hunks)
- api/templates/deploy-calcom.yaml (1 hunks)
- api/templates/deploy-damselfly.yaml (1 hunks)
- api/templates/deploy-ente.yaml (1 hunks)
- api/templates/deploy-ghost.yaml (1 hunks)
- api/templates/deploy-gotify.yaml (1 hunks)
- api/templates/deploy-immich.yaml (1 hunks)
- api/templates/deploy-memo.yaml (1 hunks)
- api/templates/deploy-n8n.yaml (1 hunks)
- api/templates/deploy-neko.yaml (1 hunks)
- api/templates/deploy-ntfy.yaml (1 hunks)
- api/templates/deploy-penpot.yaml (1 hunks)
- api/templates/deploy-photoprism.yaml (1 hunks)
- api/templates/deploy-pinry.yaml (1 hunks)
- api/templates/deploy-stash.yaml (1 hunks)
- api/templates/deploy-supabase.yaml (1 hunks)
- api/templates/deploy-wikijs.yaml (1 hunks)
- package.json (1 hunks)
- version.txt (1 hunks)
🧰 Additional context used
🪛 LanguageTool
version.txt
[grammar] ~1-~1: Hier könnte ein Fehler sein. ("There could be an error here.")
Context: v0.1.0-alpha.41
(QB_NEW_DE)
🪛 YAMLlint (1.37.1)
api/templates/deploy-n8n.yaml
[error] 92-92: syntax error: found character '%' that cannot start any token
(syntax)
🔇 Additional comments (2)
package.json (1)
3-3: Version bump is consistent with PR scope.

The version increment to 0.1.0-alpha.41 aligns with the addition of multiple deployment templates.
version.txt (1)
1-1: Version update is aligned across files.

Version correctly updated to match package.json. The static analysis hint is a false positive (language detection mismatch on a version string).
```yaml
tag:
  type: "string"
  description: "Docker image tag"
  default: "latest"
  is_required: true
```
🛠️ Refactor suggestion | 🟠 Major
Avoid the "latest" tag; use pinned versions for reproducibility.

Using `:latest` tags in production should be avoided to ensure predictable builds. The default "latest" tag makes deployments non-deterministic and can introduce breaking changes across deployments. Recommend changing the default to a specific version (e.g., "3.0.2") to ensure consistent behavior across environments:

```diff
 tag:
   type: "string"
   description: "Docker image tag"
-  default: "latest"
+  default: "3.0.2"
   is_required: true
```

Alternatively, if supporting multiple versions is desired, provide clear documentation on the implications of using "latest" or require users to explicitly specify a version.
```yaml
- name: "Change to Cal.com directory"
  type: "command"
  properties:
    cmd: "cd {{ install_dir }}"
    timeout: 30

- name: "Copy and configure .env file"
  type: "command"
  properties:
    cmd: "cp .env.example .env"
    timeout: 30
```
Fix state persistence between commands: the cd + cp pattern will fail.

The `cd {{ install_dir }}` command does not persist the working directory to the subsequent `cp .env.example .env` step. Each command-type step executes independently, so the copy command will run in the original working directory and fail with "No such file or directory." Apply one of these fixes:

Option 1 (preferred): Combine into a single command

```diff
-- name: "Change to Cal.com directory"
-  type: "command"
-  properties:
-    cmd: "cd {{ install_dir }}"
-    timeout: 30
-
 - name: "Copy and configure .env file"
   type: "command"
   properties:
-    cmd: "cp .env.example .env"
+    cmd: "cp {{ install_dir }}/.env.example {{ install_dir }}/.env"
     timeout: 30
```

Option 2: Use bash -c to group commands

```diff
 - name: "Copy and configure .env file"
   type: "command"
   properties:
-    cmd: "cp .env.example .env"
+    cmd: "bash -c 'cd {{ install_dir }} && cp .env.example .env'"
     timeout: 30
```
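The step-isolation failure mode is easy to reproduce outside any template engine; each `sh -c` invocation below stands in for one template step, and all paths are illustrative scratch directories:

```shell
#!/bin/sh
# Sketch: a cd in one process does not affect the next process.
cd "$(mktemp -d)"                 # start in an empty scratch directory
workdir="$(mktemp -d)"
touch "$workdir/.env.example"

# Two isolated "steps": the cd dies with the first shell, so the cp fails.
sh -c "cd '$workdir'"
if sh -c "cp .env.example .env" 2>/dev/null; then
    echo "copy unexpectedly succeeded"
fi

# One grouped "step": cd and cp share a process, so the copy succeeds.
sh -c "cd '$workdir' && cp .env.example .env"
```

This is why Option 2 above works: grouping the commands puts the `cd` and the `cp` in the same shell process.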
```yaml
- name: "Change to Ghost directory"
  type: "command"
  properties:
    cmd: "cd {{ install_dir }}"
    timeout: 30

- name: "Copy example config files"
  type: "command"
  properties:
    cmd: "cp .env.example .env && cp caddy/Caddyfile.example caddy/Caddyfile"
    working_dir: "{{ install_dir }}"
    timeout: 30
```
Fix working directory handling in chained steps.

Line 84 executes `cd {{ install_dir }}`, but this won't affect subsequent steps since each step runs in isolation. The subsequent steps at lines 88-92 specify `working_dir: "{{ install_dir }}"`, which is the correct approach. Line 84 is redundant and the cd command will have no effect. Apply this diff to remove the redundant step:

```diff
-- name: "Change to Ghost directory"
-  type: "command"
-  properties:
-    cmd: "cd {{ install_dir }}"
-    timeout: 30
-
 - name: "Copy example config files"
```
```yaml
cmd: |
  sed -i "s|DOMAIN=.*|DOMAIN={{ domain }}|" .env && \
  sed -i "s|ADMIN_DOMAIN=.*|ADMIN_DOMAIN={{ admin_domain }}|" .env && \
  sed -i "s|DATABASE_ROOT_PASSWORD=.*|DATABASE_ROOT_PASSWORD={{ database_root_password }}|" .env && \
  sed -i "s|DATABASE_PASSWORD=.*|DATABASE_PASSWORD={{ database_password }}|" .env && \
  sed -i "s|SMTP_HOST=.*|SMTP_HOST={{ smtp_host }}|" .env && \
  sed -i "s|SMTP_PORT=.*|SMTP_PORT={{ smtp_port }}|" .env && \
  sed -i "s|SMTP_USER=.*|SMTP_USER={{ smtp_user }}|" .env && \
  sed -i "s|SMTP_PASS=.*|SMTP_PASS={{ smtp_pass }}|" .env
```
Escape sed pattern delimiters in variable substitutions to prevent injection.

Template variables in sed patterns need escaping. If a variable like `{{ smtp_host }}` contains special characters (e.g., `/`, `&`, `\`), the sed command will fail or behave unexpectedly. Apply this diff to properly escape sed delimiters:

```diff
-sed -i "s|DOMAIN=.*|DOMAIN={{ domain }}|" .env && \
-sed -i "s|ADMIN_DOMAIN=.*|ADMIN_DOMAIN={{ admin_domain }}|" .env && \
-sed -i "s|DATABASE_ROOT_PASSWORD=.*|DATABASE_ROOT_PASSWORD={{ database_root_password }}|" .env && \
-sed -i "s|DATABASE_PASSWORD=.*|DATABASE_PASSWORD={{ database_password }}|" .env && \
-sed -i "s|SMTP_HOST=.*|SMTP_HOST={{ smtp_host }}|" .env && \
-sed -i "s|SMTP_PORT=.*|SMTP_PORT={{ smtp_port }}|" .env && \
-sed -i "s|SMTP_USER=.*|SMTP_USER={{ smtp_user }}|" .env && \
-sed -i "s|SMTP_PASS=.*|SMTP_PASS={{ smtp_pass }}|" .env
+sed -i "s|DOMAIN=.*|DOMAIN=$(printf '%s\n' "{{ domain }}" | sed -e 's/[\/&]/\\&/g')|" .env && \
+sed -i "s|ADMIN_DOMAIN=.*|ADMIN_DOMAIN=$(printf '%s\n' "{{ admin_domain }}" | sed -e 's/[\/&]/\\&/g')|" .env && \
+sed -i "s|DATABASE_ROOT_PASSWORD=.*|DATABASE_ROOT_PASSWORD=$(printf '%s\n' "{{ database_root_password }}" | sed -e 's/[\/&]/\\&/g')|" .env && \
+sed -i "s|DATABASE_PASSWORD=.*|DATABASE_PASSWORD=$(printf '%s\n' "{{ database_password }}" | sed -e 's/[\/&]/\\&/g')|" .env && \
+sed -i "s|SMTP_HOST=.*|SMTP_HOST=$(printf '%s\n' "{{ smtp_host }}" | sed -e 's/[\/&]/\\&/g')|" .env && \
+sed -i "s|SMTP_PORT=.*|SMTP_PORT=$(printf '%s\n' "{{ smtp_port }}" | sed -e 's/[\/&]/\\&/g')|" .env && \
+sed -i "s|SMTP_USER=.*|SMTP_USER=$(printf '%s\n' "{{ smtp_user }}" | sed -e 's/[\/&]/\\&/g')|" .env && \
+sed -i "s|SMTP_PASS=.*|SMTP_PASS=$(printf '%s\n' "{{ smtp_pass }}" | sed -e 's/[\/&]/\\&/g')|" .env
```

Alternatively, consider using envsubst or a more robust templating approach instead of sed.
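A small helper keeps the escaping logic in one place. This is a sketch assuming values never contain newlines; `sed_escape` is a hypothetical name, not part of the template:

```shell
#!/bin/sh
# Sketch: escape backslash, the "|" delimiter, and "&" so a value can be
# used safely as a sed replacement string. Newlines are not handled.
sed_escape() {
    printf '%s' "$1" | sed -e 's/[\\|&]/\\&/g'
}

# Example: write a value containing sed metacharacters into a .env file.
envfile="$(mktemp)"
printf 'SMTP_PASS=placeholder\n' > "$envfile"
value='p@ss|word&more'
sed -i "s|SMTP_PASS=.*|SMTP_PASS=$(sed_escape "$value")|" "$envfile"
```

After this runs, the file holds the literal value rather than a mangled substitution, which is exactly where the unescaped version breaks.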
```yaml
tag:
  type: "string"
  description: "Docker image tag"
  default: "latest"
  is_required: true
```
🛠️ Refactor suggestion | 🟠 Major
Replace "latest" tag with a pinned version.
Same concern as Apache Superset: using the "latest" tag creates non-deterministic deployments. Specify a concrete version (e.g., "2.7.0").
```yaml
- name: "Set absolute data path"
  type: "command"
  properties:
    cmd: "export DATA_PATH=$(readlink -f {{ install_dir }})"
    timeout: 10
```
Remove the redundant variable export that won't persist.

Line 48 exports `DATA_PATH`, but this won't be available in subsequent steps since each execution step runs in isolation. The variable is already being computed again in line 68, so this step is redundant and can be removed. Apply this diff to remove the redundant step:

```diff
-- name: "Set absolute data path"
-  type: "command"
-  properties:
-    cmd: "export DATA_PATH=$(readlink -f {{ install_dir }})"
-    timeout: 10
-
 - name: "Build or pull Pinry image"
```
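The non-persistence of `export` across steps can be shown directly; each `sh -c` below stands in for one template step, and the variable name is illustrative:

```shell
#!/bin/sh
# Sketch: an export is scoped to the process that made it. A later step
# starts with a fresh environment and never sees the variable.
sh -c 'export PINRY_DATA_PATH=/tmp/pinry-demo'            # "step 1": dies with the shell
seen="$(sh -c 'printf %s "${PINRY_DATA_PATH:-unset}"')"   # "step 2": sees nothing
echo "step 2 sees: $seen"
```

This is the same reason the `cd`-then-`cp` pattern fails elsewhere in this review: process state, whether working directory or environment, does not outlive the step that set it.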
```yaml
- name: "Run Pinry container"
  type: "command"
  properties:
    cmd: |
      DATA_PATH=$(readlink -f {{ install_dir }}) && \
      docker run -d -p={{ host_port }}:80 -v=${DATA_PATH}:/data getpinry/pinry
    timeout: 120
```
Fix Docker port mapping syntax.

Line 69 uses incorrect Docker syntax `-p={{ host_port }}:80`. The correct syntax for the `-p` flag is `-p {{ host_port }}:80` (space-separated, not `=`). Apply this diff to fix the Docker port syntax:

```diff
 - name: "Run Pinry container"
   type: "command"
   properties:
     cmd: |
       DATA_PATH=$(readlink -f {{ install_dir }}) && \
-      docker run -d -p={{ host_port }}:80 -v=${DATA_PATH}:/data getpinry/pinry
+      docker run -d -p {{ host_port }}:80 -v ${DATA_PATH}:/data getpinry/pinry
     timeout: 120
```
```yaml
- name: "Update StashApp port mapping"
  type: "command"
  properties:
    cmd: |
      sed -i "s/9999:9999/{{ host_port }}:9999/g" {{ install_dir }}/docker-compose.yml
    timeout: 20
```
Add input validation for host_port in the sed command.

The sed command at line 60 substitutes `host_port` directly. If `host_port` contains special regex characters, it could break the command. While the default (9999) is safe, user input should be validated. Apply this diff to escape the port variable:

```diff
 - name: "Update StashApp port mapping"
   type: "command"
   properties:
     cmd: |
-      sed -i "s/9999:9999/{{ host_port }}:9999/g" {{ install_dir }}/docker-compose.yml
+      sed -i "s/9999:9999/$(printf '%s\n' "{{ host_port }}" | sed -e 's/[\/&]/\\&/g'):9999/g" {{ install_dir }}/docker-compose.yml
     timeout: 20
```
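For ports specifically, a whitelist check is simpler and stronger than escaping. A minimal sketch, with a hypothetical function name:

```shell
#!/bin/sh
# Sketch: accept only a plain integer in 1-65535, rejecting anything that
# could carry sed or shell metacharacters.
is_valid_port() {
    case "$1" in
        ''|*[!0-9]*) return 1 ;;   # empty, or contains a non-digit
    esac
    [ "$1" -ge 1 ] && [ "$1" -le 65535 ]
}

# Example: validate before the sed substitution ever runs.
# is_valid_port "$HOST_PORT" || { echo "invalid host_port" >&2; exit 1; }
```

Because the accepted values contain only digits, nothing that reaches the sed command can act as a metacharacter, so no escaping is needed at all.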
```yaml
- name: "Clone Supabase repository"
  type: "command"
  properties:
    cmd: "git clone --depth 1 https://github.com/supabase/supabase"
    timeout: 120

- name: "Create project directory"
  type: "command"
  properties:
    cmd: "mkdir -p {{ install_dir }}"
    timeout: 20

- name: "Copy Supabase Docker files"
  type: "command"
  properties:
    cmd: "cp -rf supabase/docker/* {{ install_dir }}"
    timeout: 30

- name: "Copy example environment file"
  type: "command"
  properties:
    cmd: "cp supabase/docker/.env.example {{ install_dir }}/.env"
    timeout: 20
```
Fix command sequencing and working directory assumptions.

The workflow has implicit dependencies on working directory state that may fail:

- Line 39: `git clone` creates "supabase" in the current directory.
- Lines 51, 57: Subsequent commands assume "supabase/" exists in the same directory.
- No explicit working directory isolation between steps; commands run in whatever PWD was left by previous steps.

If the initial git clone fails or runs in an unexpected directory, subsequent cp commands will fail silently until the timeout. Apply this diff to make the workflow explicit and robust:

```diff
 - name: "Clone Supabase repository"
   type: "command"
   properties:
-    cmd: "git clone --depth 1 https://github.com/supabase/supabase"
+    cmd: "git clone --depth 1 https://github.com/supabase/supabase supabase-src"
     timeout: 120
 - name: "Create project directory"
   type: "command"
   properties:
     cmd: "mkdir -p {{ install_dir }}"
     timeout: 20
 - name: "Copy Supabase Docker files"
   type: "command"
   properties:
-    cmd: "cp -rf supabase/docker/* {{ install_dir }}"
+    cmd: "cp -rf supabase-src/docker/* {{ install_dir }}"
     timeout: 30
 - name: "Copy example environment file"
   type: "command"
   properties:
-    cmd: "cp supabase/docker/.env.example {{ install_dir }}/.env"
+    cmd: "cp supabase-src/docker/.env.example {{ install_dir }}/.env"
     timeout: 20
```
```yaml
- name: "Install dependencies in container"
  type: "command"
  properties:
    cmd: "docker exec wiki-app yarn"
    timeout: 120

- name: "Start Wiki.js in dev mode"
  type: "command"
  properties:
    cmd: "docker exec -d wiki-app yarn dev"
    timeout: 60
```
🚨 Hardcoded container name "wiki-app" may not exist; verify or derive from variables.
The `docker exec wiki-app` commands assume the container is named "wiki-app", but this depends on Docker Compose's naming conventions in the cloned repository's docker-compose.yml. If the service is named or configured differently, these commands will fail with a "No such container" error.
Options to fix:
- Verify the actual container name in the repo's docker-compose.yml first
- Dynamically discover the running container:
```diff
 - name: "Install dependencies in container"
   type: "command"
   properties:
-    cmd: "docker exec wiki-app yarn"
+    cmd: |
+      CONTAINER=$(docker ps --filter "ancestor=wiki:latest" --format "{{.Names}}" | head -n 1)
+      [ -n "$CONTAINER" ] || exit 1
+      docker exec "$CONTAINER" yarn
     timeout: 120
```
- Add documentation explaining that this template assumes the cloned repo has a service named "wiki-app" in its docker-compose file
This is a reliability issue that will cause deployment failures.
🤖 Prompt for AI Agents
In api/templates/deploy-wikijs.yaml around lines 54 to 64, the commands hardcode
the container name "wiki-app" which can be absent if the repository's
docker-compose service is named differently; replace hardcoded container usage
with a robust approach: read/derive the service name from the cloned repo's
docker-compose.yml or accept it as a template variable, and run commands via
docker compose exec (or docker-compose run/exec using the service name) or
dynamically resolve the running container ID (e.g., docker-compose ps -q
<service> or docker ps filtering by project/service) before invoking yarn;
alternatively, document clearly that this template requires a service named
"wiki-app" if you choose to keep it.
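To make the compose-based option concrete, a step could resolve the container from the compose service name instead of the generated container name. This is only a sketch: the service name ("wiki"), the compose file path, and the executor running `cmd` through a POSIX shell are all assumptions, not facts from the cloned repo.

```yaml
# Hypothetical step: resolve the container via the compose service name
# instead of hardcoding "wiki-app". "wiki" is an assumed service name.
- name: "Install dependencies in container"
  type: "command"
  properties:
    cmd: |
      CONTAINER=$(docker compose -f docker-compose.yml ps -q wiki)
      [ -n "$CONTAINER" ] || exit 1
      docker exec "$CONTAINER" yarn
    timeout: 120
```

`docker compose ps -q <service>` prints the container ID for a running service, which avoids depending on Compose's generated container names entirely.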
Actionable comments posted: 13
♻️ Duplicate comments (6)
api/templates/deploy-keycloak.yaml (1)
49-59: Security: Remove or randomize default bootstrap credentials. Keycloak bootstrap credentials defaulting to "admin"/"admin" should be handled more securely. Same concern as noted in deploy-pie-hole.yaml.
api/templates/deploy-influx-db.yaml (1)
49-59: Security: Remove or randomize default credentials. Default admin credentials (admin/admin123) should be handled more securely. Same concern as deploy-pie-hole.yaml.
api/templates/deploy-click-house.yaml (1)
73-77: Security: Default password should not be hardcoded. The default password "changeme" may be used as-is in production. While the name suggests it should be changed, this should be enforced (e.g., make it required, generate a random value, or fail if not customized).
api/templates/deploy-n8n.yaml (1)
72-73: Format inconsistency: string ports/volumes instead of list format. Lines 72–73 use string format instead of the list format used by most other templates. Ensure this is compatible with your template engine.
Suggested fix:
```diff
-    ports: "{{ host_port }}:{{ container_port }}"
-    volumes: "{{ data_volume }}"
+    ports:
+      - "{{ host_port }}:{{ container_port }}"
+    volumes:
+      - "{{ data_volume }}"
```

api/templates/deploy-matomo.yaml (2)
84-85: Format inconsistency: hardcoded container port and string formats. Line 84 hardcodes container port 80 instead of parameterizing it, and uses string format instead of list format. Line 85 volumes are strings instead of lists.
Suggested fix:
```diff
-    ports: "{{ host_http_port }}:80"
-    volumes: "{{ data_volume }}"
+    ports:
+      - "{{ host_http_port }}:80"
+    volumes:
+      - "{{ data_volume }}"
```

If parameterizing container ports is desired, add a `container_http_port` variable.
55-59: Security: Weak placeholder password in default credentials. The database password defaults to "yourpassword" which, while marked as a placeholder, is too simplistic. Consider requiring explicit password input or generating a secure random default.
🧹 Nitpick comments (5)
api/templates/deploy-appflowy.yaml (2)
12-41: Clarify the semantics of `is_required` when variables have default values. All variables are marked `is_required: true`, but each has a `default` value provided. This is semantically contradictory; if a variable has a default, it is not truly required. Please clarify the intended behavior:

- Should `is_required` indicate whether users must explicitly provide a value (overriding the default)?
- Or should `is_required: true` only apply to variables without defaults?

Recommend adjusting the template structure to ensure this is unambiguous for downstream consumers of these templates.
71-76: Remove or document the commented-out validation step. The commented-out HTTP health-check validation (lines 71–76) suggests incomplete or abandoned work. Either complete the validation logic and uncomment it, or remove it entirely to keep the template maintainable.
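If the AppFlowy validation step is completed rather than removed, a minimal enabled health check could look like the sketch below. The step shape mirrors the other templates in this PR; the port variable and the accepted status codes are assumptions.

```yaml
# Hypothetical enabled validation step; the endpoint and status codes
# are assumptions and should match AppFlowy's actual health behavior.
validate:
  - name: "Check HTTP response from AppFlowy"
    type: "command"
    properties:
      cmd: "curl -fsS -o /dev/null -w '%{http_code}\\n' http://localhost:{{ host_port }} | grep -E '^(200|301|302)$'"
      timeout: 60
```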
api/templates/deploy-rocket-chat.yaml (1)
61-65: Review the root_url default value. The root_url default of "http://localhost:3000" is environment-specific and should likely be overridden by users in most deployments. Consider using a placeholder like "http://your-domain.com" or marking this as a required override in documentation.
api/templates/deploy-gitea.yaml (1)
37-42: Document the SSH port exposure requirement for Gitea. Unlike Rocket.Chat, Gitea legitimately needs SSH port exposure for Git operations via SSH. However, this should be documented in the template or in related deployment docs. Consider adding a comment explaining why port 22 is exposed.
api/templates/deploy-metabase.yaml (1)
72-72: Hardcoded container ports reduce template flexibility. Container ports 3000 (web) and 22 (ssh) are hardcoded in the execution step instead of being parameterized with variables. This prevents users from customizing internal container port mappings if needed. Consider adding `container_http_port` and `container_ssh_port` variables if customization is desired.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- api/templates/deploy-appflowy.yaml (1 hunks)
- api/templates/deploy-click-house.yaml (1 hunks)
- api/templates/deploy-etherpad.yaml (1 hunks)
- api/templates/deploy-gitea.yaml (1 hunks)
- api/templates/deploy-glances.yaml (1 hunks)
- api/templates/deploy-influx-db.yaml (1 hunks)
- api/templates/deploy-jellyfin.yaml (1 hunks)
- api/templates/deploy-keycloak.yaml (1 hunks)
- api/templates/deploy-matomo.yaml (1 hunks)
- api/templates/deploy-meili-search.yaml (1 hunks)
- api/templates/deploy-metabase.yaml (1 hunks)
- api/templates/deploy-n8n.yaml (1 hunks)
- api/templates/deploy-outline.yaml (1 hunks)
- api/templates/deploy-pie-hole.yaml (1 hunks)
- api/templates/deploy-portainer.yaml (1 hunks)
- api/templates/deploy-rocket-chat.yaml (1 hunks)
- package.json (1 hunks)
- version.txt (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- package.json
🧰 Additional context used
🪛 LanguageTool
version.txt
[grammar] ~1-~1: There might be an error here.
Context: v0.1.0-alpha.51
(QB_NEW_DE)
🔇 Additional comments (2)
api/templates/deploy-appflowy.yaml (1)
1-10: Adopt a consistent naming convention across the metadata and variables sections. Line 10 uses camelCase (`isVerified`), but the variables section uses snake_case (`is_required`). If these templates are parsed programmatically, inconsistent conventions can cause issues with code generation or validation tooling.
Could you verify that the naming convention is consistent across all deployment templates? If there's a schema or specification for these templates, please share it to ensure alignment.
version.txt (1)
1-1: Verify version consistency across package.json and deployment. The version bump from v0.1.0-alpha.49 to v0.1.0-alpha.51 (skipping alpha.50) should be intentional. Confirm that `package.json` reflects the same version and that this skip is documented or intentional. The LanguageTool static analysis hint is a false positive; the version string format is valid.
```yaml
admin_password:
  type: "string"
  description: "Admin password for Etherpad"
  default: "admin"
  is_required: true
```
Critical: Remove weak default admin password.
The default admin_password of "admin" is insecure and must not be committed. Never store actual secret values in YAML files that could be committed to version control. Replace with a placeholder requiring user override before deployment.
🤖 Prompt for AI Agents
In api/templates/deploy-etherpad.yaml around lines 49 to 53, remove the
hard-coded insecure default for admin_password ("admin") and replace it with a
non-secret placeholder or no default so users must provide a value before
deployment; keep is_required: true (or set default to a clearly labeled
placeholder like "<REPLACE_WITH_ADMIN_PASSWORD>") and update any deployment docs
to instruct using environment/secret management rather than committing real
secrets in the YAML.
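If the template engine can compute defaults at render time (an assumption about the executor; the function name below is hypothetical), a per-deployment random default avoids committing any secret to the YAML at all:

```python
import secrets
import string


def random_default_password(length: int = 20) -> str:
    """Return a random alphanumeric password for use as a per-deployment default."""
    # secrets (not random) provides a CSPRNG suitable for credentials.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


print(random_default_password())
```

The generated value would then be injected wherever `{{ admin_password }}` is rendered, so a weak literal never needs to exist in the template.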
```yaml
ports:
  - "{{ host_port }}:{{ container_port }}"
volumes:
  - "{{ data_volume }}"
environment:
  - "ETHERPAD_ADMIN_PASSWORD={{ admin_password }}"
restart: "unless-stopped"
timeout: 120
```
🛠️ Refactor suggestion | 🟠 Major
Standardize environment variable and port mapping syntax.
Port mapping uses list format (line 72–73) and environment uses list format (line 77), but other templates use different formats. Standardize across all templates for consistency.
🤖 Prompt for AI Agents
In api/templates/deploy-etherpad.yaml around lines 72 to 79, standardize the
port and environment syntax to match other templates: change the port mapping
from a YAML list to the same mapping style used across the repo and convert the
environment entry from a list item string to a key: value mapping (e.g.
ETHERPAD_ADMIN_PASSWORD: "{{ admin_password }}"); ensure spacing and quoting
match the repo convention so the ports and environment blocks use the same
representation as the other templates.
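For reference, the two Compose-style spellings are functionally equivalent; whichever one the repo standardizes on, the fragment would take one of these forms (shown as a sketch, separated by a YAML document marker):

```yaml
# List form (what this template currently uses):
environment:
  - "ETHERPAD_ADMIN_PASSWORD={{ admin_password }}"
---
# Mapping form (the alternative suggested above):
environment:
  ETHERPAD_ADMIN_PASSWORD: "{{ admin_password }}"
```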
```yaml
glances_conf:
  type: "string"
  description: "Path to custom glances.conf file"
  default: "./glances.conf:/etc/glances/glances.conf"
  is_required: false

os_release:
  type: "string"
  description: "Path to /etc/os-release to display host OS info"
  default: "/etc/os-release:/etc/os-release:ro"
  is_required: false
```
Handle optional volumes more carefully.
The optional volumes glances_conf and os_release use placeholder syntax but have defaults pointing to host paths. If a user doesn't override these or the paths don't exist, the container run may fail with a mount error. Consider either:
- Making them required if essential to functionality
- Providing conditional logic to skip mounting if not provided
- Documenting the safe defaults for typical deployments
🤖 Prompt for AI Agents
In api/templates/deploy-glances.yaml around lines 43 to 53, the optional volume
fields glances_conf and os_release currently have host-path defaults which can
cause container mount failures when the host path is absent or the user doesn't
override them; change the defaults to an empty/nullable value (or remove the
host-path default) and update the template rendering logic to only emit the
corresponding volume/volumeMount entries when a non-empty value is provided, or
alternatively mark the fields as required if they are essential; also update the
field descriptions to document safe defaults and expected formats so users know
to provide valid host paths when necessary.
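If the template engine supports Jinja-style conditionals (an assumption; the PR does not show the engine's feature set), the optional mounts could be emitted only when a value is provided:

```yaml
# Hypothetical sketch: skip the optional mounts when the variable is empty,
# so absent host paths never reach `docker run`.
volumes:
  - "{{ data_volume }}"
{% if glances_conf %}
  - "{{ glances_conf }}"
{% endif %}
{% if os_release %}
  - "{{ os_release }}"
{% endif %}
```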
```yaml
media_dir:
  type: "string"
  description: "Directory containing media files"
  default: "/path/to/media"
  is_required: true

config_dir:
  type: "string"
  description: "Directory for Jellyfin configuration"
  default: "/path/to/config"
  is_required: true

cache_dir:
  type: "string"
  description: "Directory for Jellyfin cache"
  default: "/path/to/cache"
  is_required: true
```
Default placeholder paths may cause failures at runtime.
Media, config, and cache directories default to /path/to/... placeholders. If these are not overridden, the deployment will fail or behave unexpectedly. Consider using relative paths (e.g., ./jellyfin-config) or documenting that these must be provided by users.
🤖 Prompt for AI Agents
In api/templates/deploy-jellyfin.yaml around lines 43 to 59, the media_dir,
config_dir, and cache_dir fields use placeholder defaults (/path/to/...) which
can cause runtime failures; replace those placeholders with sensible relative
defaults (for example ./jellyfin-media, ./jellyfin-config, ./jellyfin-cache) or
remove the defaults entirely and keep is_required: true, and update the
parameter descriptions to explicitly instruct users that these paths must be
provided if defaults are removed.
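One concrete shape for the safer relative defaults (whether the executor resolves relative paths against an install directory is an assumption):

```yaml
# Sketch: replace /path/to/... placeholders with resolvable relative defaults.
media_dir:
  type: "string"
  description: "Directory containing media files (must exist on the host)"
  default: "./jellyfin-media"
  is_required: true
config_dir:
  type: "string"
  description: "Directory for Jellyfin configuration"
  default: "./jellyfin-config"
  is_required: true
cache_dir:
  type: "string"
  description: "Directory for Jellyfin cache"
  default: "./jellyfin-cache"
  is_required: true
```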
```yaml
ports: "{{ host_port }}:{{ container_port }}"
volumes:
  - "{{ config_dir }}:/config"
  - "{{ cache_dir }}:/cache"
  - "{{ media_dir }}:/media"
```
Inconsistent port/volume format across templates.
Line 78 uses a string format for ports: ports: "{{ host_port }}:{{ container_port }}", while other templates (deploy-pie-hole.yaml, deploy-keycloak.yaml) use YAML list format with leading dashes. Confirm this is compatible with your template execution engine, or standardize to list format for consistency.
Suggested fix to match other templates:
```diff
-    ports: "{{ host_port }}:{{ container_port }}"
+    ports:
+      - "{{ host_port }}:{{ container_port }}"
```
🤖 Prompt for AI Agents
In api/templates/deploy-jellyfin.yaml around lines 78 to 82, the ports entry is
a quoted string while other templates use a YAML list with leading dashes;
change the ports line to a YAML list item (e.g. add a leading dash so it becomes
a list entry like the other templates) and ensure volumes remain as list items —
verify the template engine supports this list format or adjust templating syntax
to match the rest of the repo for consistency.
```yaml
validate:
  - name: "Check HTTP response from n8n"
    type: "command"
    properties:
      cmd: "curl -fsS -o /dev/null -w '%{http_code}\\n' http://localhost:{{ host_port }} | grep -E '^(200|301|302)$'"
```
Inconsistent validation section across templates.
The validate block here is uncommented, while all other templates (deploy-pie-hole.yaml, deploy-keycloak.yaml, etc.) have it commented out. Confirm if this is intentional or if it should be commented for consistency.
Suggested fix if validation should be disabled:
```diff
-validate:
-  - name: "Check HTTP response from n8n"
-    type: "command"
-    properties:
-      cmd: "curl -fsS -o /dev/null -w '%{http_code}\\n' http://localhost:{{ host_port }} | grep -E '^(200|301|302)$'"
-      timeout: 60
+# validate:
+#   - name: "Check HTTP response from n8n"
+#     type: "command"
+#     properties:
+#       cmd: "curl -fsS -o /dev/null -w '%{http_code}\\n' http://localhost:{{ host_port }} | grep -E '^(200|301|302)$'"
```
+ # timeout: 60Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In api/templates/deploy-n8n.yaml around lines 82 to 86 the validate block is
active while other templates have their validate blocks commented out; if
validation should be disabled for consistency, comment out these lines (the
entire validate: block and its indented children) so it matches the other
templates, or if it should remain enabled, update the other templates to
uncomment their validate blocks—pick one approach and apply it consistently
across all deploy-*.yaml templates.
```yaml
redis_password:
  type: "string"
  description: "Password for Redis"
  default: "your-redis-password"
  is_required: true

postgres_user:
  type: "string"
  description: "PostgreSQL username"
  default: "outline"
  is_required: true

postgres_password:
  type: "string"
  description: "PostgreSQL password"
  default: "your-postgres-password"
  is_required: true
```
Critical: Remove default passwords from template.
Templates must not include default passwords or credentials in committed YAML files. Never store actual secret values in YAML files that could be committed to version control. Lines 49–53 and 61–65 contain default values for redis_password and postgres_password.
Replace these with:
- Prompts/placeholders marked for user override (e.g.,
REQUIRED_USER_OVERRIDE) - References to a secrets management system
- Or document that users must provide these via environment variables
```yaml
web_password:
  type: "string"
  description: "Password for Pi-hole admin dashboard"
  default: "admin"
  is_required: true
```
Security: Avoid hardcoded default credentials in deployment templates.
Default credentials like "admin" and "admin123" pose security risks if not changed before deployment. Consider either:
- Making credential variables required with no defaults
- Generating random defaults in the execution engine
- Documenting a mandatory setup step to change credentials
This applies to all deployment templates in this PR.
🤖 Prompt for AI Agents
In api/templates/deploy-pie-hole.yaml around lines 73 to 77, the web_password
field currently sets a hardcoded default ("admin"), which is a security risk;
remove the hardcoded default and require the value to be provided at deployment
(i.e., leave type/description and is_required:true but delete the default
field), or alternatively implement a secure random-password generator at runtime
and remove the static default; also update template docs/readme to state that
credentials must be supplied or will be generated and ensure the same change is
applied consistently to all other deployment templates in this PR.
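A sketch of the runtime-generation option, assuming the executor's command steps run in a shell with `openssl` available and that an `install_dir` variable exists (both are assumptions):

```yaml
- name: "Generate Pi-hole admin password"
  type: "command"
  properties:
    # Hypothetical step: write a random password for the run step to consume,
    # instead of shipping "admin" as a committed default.
    cmd: "openssl rand -base64 18 > {{ install_dir }}/.pihole_webpassword"
    timeout: 10
```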
```yaml
host_port:
  type: "integer"
  description: "Host port to expose"
  default: 9443
  is_required: true

container_port:
  type: "integer"
  description: "Container port to map"
  default: 9443
  is_required: true
```
🛠️ Refactor suggestion | 🟠 Major
Standardize port variable naming across templates.
This template uses host_port and container_port, while others use host_http_port/host_ssh_port. Adopt a consistent naming convention across all templates for maintainability and reduced confusion.
🤖 Prompt for AI Agents
In api/templates/deploy-portainer.yaml around lines 31 to 42, the port variables
are named host_port and container_port which is inconsistent with other
templates that use host_http_port/host_ssh_port; rename them to the standardized
names (e.g., host_http_port and container_http_port), update the description and
default values accordingly, keep type as integer and is_required true, and then
search-and-replace any references to the old keys in this template and across
the repo (including docs, template consumers, and CI configs) to use the new
standardized variable names to ensure no broken references remain.
```yaml
host_ssh_port:
  type: "integer"
  description: "Host SSH port for Git operations"
  default: 22
  is_required: true
```
Reconsider SSH port exposure for Rocket.Chat.
Rocket.Chat does not typically require SSH port exposure. The host_ssh_port mapping (port 22) appears unnecessary and increases the attack surface. Consider removing this unless there's a specific use case documented.
🤖 Prompt for AI Agents
In api/templates/deploy-rocket-chat.yaml around lines 37 to 42 the template
declares host_ssh_port (integer, default 22, is_required: true) which exposes
SSH unnecessarily for Rocket.Chat; remove the host_ssh_port parameter and any
associated port-mapping or service binding references in this template (and
update related docs/usages) unless there is a documented, specific use-case that
requires SSH — if a use-case exists, change the parameter to optional with a
clear justification and default null and ensure port mapping is gated by that
setting.
Actionable comments posted: 27
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
api/internal/features/file-manager/service/manager.go (1)
113-117: Fields populated with misleading zero values. Since the stat extraction is disabled, `Permissions`, `OwnerId`, and `GroupId` will always be `0`. This creates a misleading API response where clients cannot distinguish between:
- A file that genuinely has permission value 0
- Permission data that is unavailable/unsupported
Consider one of these approaches:
- Remove the fields from the `FileData` struct (lines 152, 155-156) if they're no longer supported:

```diff
 type FileData struct {
 	Path        string  `json:"path"`
 	Name        string  `json:"name"`
 	Size        int64   `json:"size"`
 	CreatedAt   string  `json:"created_at"`
 	UpdatedAt   string  `json:"updated_at"`
 	FileType    string  `json:"file_type"`
-	Permissions int64   `json:"permissions"`
 	IsHidden    bool    `json:"is_hidden"`
 	Extension   *string `json:"extension"`
-	OwnerId     int64   `json:"owner_id"`
-	GroupId     int64   `json:"group_id"`
 }
```
- Use pointer types (`*int64`) so `nil` indicates "not available":

```diff
-	Permissions int64 `json:"permissions"`
+	Permissions *int64 `json:"permissions,omitempty"`
-	OwnerId     int64 `json:"owner_id"`
+	OwnerId     *int64 `json:"owner_id,omitempty"`
-	GroupId     int64 `json:"group_id"`
+	GroupId     *int64 `json:"group_id,omitempty"`
```
🧹 Nitpick comments (29)
api/templates/deploy-alist.yaml (2)
43-52: Consider running as a non-root user for security. The default values for `PUID` and `PGID` are set to 0 (root). While this may work, running containers as root violates the principle of least privilege. Consider using a non-root UID/GID by default (e.g., 1000).
Example adjustment:
```diff
 PUID:
   type: "integer"
   description: "Host user ID to run container"
-  default: 0
+  default: 1000
   is_required: false
 PGID:
   type: "integer"
   description: "Host group ID to run container"
-  default: 0
+  default: 1000
   is_required: false
```

This follows Docker security best practices and allows users to override as needed.
38-42: Consider using an absolute path for data_volume. The default value `"./alist_data"` is a relative path, which may behave inconsistently depending on where the deployment process is executed from. Consider using an absolute path or a well-defined directory structure (e.g., `/opt/alist/data` or an environment-based path).
api/templates/deploy-authelia.yaml (3)
19-23: Consider pinning the Docker image tag to a specific version. Using `"latest"` as the default can lead to unpredictable deployments if the upstream image changes unexpectedly. Pinning to a specific semantic version improves reproducibility and stability across environments.
Apply this diff to use a pinned version:

```diff
 tag:
   type: "string"
   description: "Docker image tag"
-  default: "latest"
+  default: "1.45.1"
   is_required: true
```

Alternatively, document a recommendation for users to specify their preferred version when instantiating the template.
37-47: Consider using absolute paths for volume mounts. The default paths for `config_path` and `secrets_path` are relative (`./authelia/config`, `./authelia/secrets`), which depend on the working directory at deployment time. This can lead to inconsistent behavior. Consider either:

- Recommending absolute paths (e.g., `$HOME/.authelia/config`) in documentation, or
- Providing a configuration option to resolve paths relative to a known base directory.
74-79: Consider enabling the validation step. The validation block is commented out but appears functionally complete. If HTTP health checks are important for deployment verification, consider uncommenting this section (or clarifying why it should remain disabled).
api/templates/deploy-appsmith.yaml (2)
19-23: Consider pinning the Docker image tag to a specific version. Using `latest` as the default tag can lead to unpredictable deployments and make reproducibility difficult, especially for production environments. Consider pinning to a specific stable version (e.g., `latest-stable`, `v3.19.0`, or the current release at time of authoring).
Apply this diff to use a pinned version:

```diff
 tag:
   type: "string"
   description: "Docker image tag"
-  default: "latest"
+  default: "v3.19.0"
   is_required: true
```

Alternatively, document the trade-off in the template description if `latest` is intentional for auto-updating deployments.
43-47: Clarify volume path guidance. The default volume path uses a relative path (`./stacks`), which depends on the current working directory at runtime. For production deployments, consider documenting whether users should use absolute paths or if there are caveats around relative path behavior.
Consider adding a note in the description field:

```diff
 volume_path:
   type: "string"
-  description: "Path on host for persistent data (stacks directory)"
+  description: "Path on host for persistent data (stacks directory). Use absolute path for production."
   default: "./stacks"
   is_required: true
```

api/templates/deploy-teleport.yaml (3)
12-47: Consider relaxing `is_required: true` for non-critical variables. All seven variables are marked as required, which forces users to explicitly provide values even for fields with sensible defaults (e.g., timezone). Consider marking some as optional to improve usability.
For example, `timezone` and `config_volume` could reasonably default without mandatory user input.
23-27: Hardcoded default paths may cause environment-specific issues. The default values for `host_port`, `container_port`, `data_volume`, and `config_volume` assume specific file system layouts and available ports. Consider whether:

- Port 3080 is consistently available across deployments
- Relative paths like `./teleport_data` and `./teleport_config` work reliably in all execution contexts (the current working directory may vary)

These defaults may need to be more flexible or clearly documented in deployment instructions.
Also applies to: 28-32, 33-37, 38-42
76-81: Commented validation block suggests incomplete implementation. The validation step is fully commented out with no explanation of its intended use or status. Clarify:
- Is this a work-in-progress placeholder for future validation?
- Should it be removed if not needed?
- If the validation is valid, uncomment and enable it (though note the HTTP check may fail on non-standard deployments)
Would you like me to help enable or remove the validation logic based on your intended workflow?
api/templates/deploy-pocket-base.yaml (2)
33-37: Consider adding volume path pre-creation or documenting host path expectations. The template mounts `{{ data_volume }}` as persistent storage but doesn't ensure the host path exists. If the path doesn't exist, the container may fail to start or create it with incorrect permissions. This could lead to silent runtime failures or permission issues. Clarify in the template documentation whether the deployment executor creates missing paths, or add a setup step to ensure the directory exists with appropriate permissions.
63-68: Consider enabling the commented validation block. The health check validates that PocketBase responds correctly after deployment. Leaving it commented risks deploying non-functional instances without detection. Either enable this validation or document why it's deferred (e.g., awaiting executor feature completion).
api/templates/deploy-node-red.yaml (1)
38-42: Consider using an absolute path or a named volume for the `data_volume` default. The default
`data_volume: "./node-red_data"` uses a relative path, which may lead to inconsistent data persistence across different execution environments or when the deployment tool is run from different directories. For better reliability and reproducibility, consider defaulting to an absolute path or a Docker named volume.
```yaml
# Option 1: Absolute path
default: "/var/lib/node-red_data"
---
# Option 2: Named volume (Docker will manage storage)
default: "node-red-data"
```

api/templates/deploy-coolify.yaml (1)
44-46: Reconsider hardcoding the Docker image tag to "latest". Both the pull and run steps hardcode the image tag to
`"latest"` (lines 45–46, 54–55). While convenient for quick deployments, this introduces reproducibility and stability risks:
- Image changes between deployments without explicit version control.
- Difficult to pin exact behavior or debug issues across environments.
Consider allowing users to override the tag via a template variable with a sensible default (e.g., a specific stable version), or document why "latest" is intentional here.
If versioning is preferred, apply this diff to add a tag variable:
```diff
 variables:
   compose_file:
     type: "string"
     description: "Path to the Docker Compose file"
     default: "./docker-compose.yml"
     is_required: true
+  image_tag:
+    type: "string"
+    description: "Docker image tag for Coolify"
+    default: "latest"
+    is_required: false
   container_name:
```

Then update the pull and run steps:

```diff
-      tag: "latest"
+      tag: "{{ image_tag }}"
```

Also applies to: 54-55
api/templates/deploy-cyber-chef.yaml (3)
12-42: Consider allowing optional environment variables for flexibility. The template defines image, networking, and volume configuration well, but there's no way for users to pass environment variables to the container (e.g., for logging level, cache settings, etc.). Adding an optional
`env_vars` variable could make the template more flexible without added complexity. Example addition:

```yaml
env_vars:
  type: "string"
  description: "Environment variables as comma-separated key=value pairs (optional)"
  default: ""
  is_required: false
```

Then reference it in the run step's `properties`:

```yaml
env: "{{ env_vars }}"
```
38-42: Data volume path requires care with relative paths. The
`data_volume` defaults to `./cyberchef_data`, a relative path. Ensure the template engine or deployment system documents where relative paths are resolved from. For production use, users should explicitly override with absolute paths. Consider noting this in the variable description or providing an example.
44-65: Add optional resource constraints for production deployments. The container run step lacks CPU/memory limits. Adding optional
`cpu_limit` and `memory_limit` variables would help prevent resource exhaustion and improve deployment reliability in shared environments.

```yaml
cpu_limit:
  type: "string"
  description: "CPU limit (e.g., '1' for 1 CPU, '0.5' for half CPU)"
  default: ""
  is_required: false
memory_limit:
  type: "string"
  description: "Memory limit (e.g., '512m', '2g')"
  default: ""
  is_required: false
```

Then conditionally apply in the run step if values are provided.
api/templates/deploy-siyuan.yaml (1)
89-94: Consider enabling the validation block to confirm service readiness post-deployment. The HTTP health-check validation is commented out. Enabling it would provide confidence that the SiYuan service is responding correctly after the container starts, improving deployment reliability.
Recommended fix: Uncomment and integrate the validation step:
```diff
- # validate:
- #   - name: "Check HTTP response from SiYuan UI"
- #     type: "command"
- #     properties:
- #       cmd: "curl -fsS -o /dev/null -w '%{http_code}\\n' http://localhost:{{ host_port }} | grep -E '^(200|301|302)$'"
- #     timeout: 60
+ validate:
+   - name: "Check HTTP response from SiYuan UI"
+     type: "command"
+     properties:
+       cmd: "curl -fsS -o /dev/null -w '%{http_code}\\n' http://localhost:{{ host_port }} | grep -E '^(200|301|302)$'"
+     timeout: 60
```

api/templates/deploy-loki.yaml (2)
55-80: Consider adding explicit network configuration. Without explicit Docker network setup, Promtail and Loki may not be able to communicate reliably. Define a named Docker network in the template or document that the system handles network initialization separately.
82-87: Decide on the validation block: uncomment or remove. The validation block is commented out. If it's valid and useful, uncomment it; if it's not needed, remove it to avoid confusion.
api/templates/deploy-adgaurd-home.yaml (1)
38-42: Consider the DNS port conflict risk for the port 53 default. Using port 53 as the default DNS port can conflict with existing DNS services (e.g., systemd-resolved, dnsmasq) already listening on the host. While AdGuard Home deployments commonly use this port, consider either:
- Documenting this prerequisite clearly in the template description
- Using a non-standard default port (e.g., 5353) to reduce conflicts
- Adding validation to detect conflicts before running
This is acceptable if users are expected to adjust this for their environment, but it's worth noting the conflict potential.
api/templates/deploy-logseq.yaml (1)
94-99: Clarify the status of the commented-out validation block. The validation step is commented out, suggesting incomplete implementation. Decide whether to:
- Implement and enable the health check validation for robustness.
- Remove the commented block entirely if validation is out of scope.
If validation is desired, uncomment and test the block to ensure the HTTP health check works correctly in the deployment context.
api/templates/deploy-keepass-xc.yaml (1)
94-99: Note: HTTP validation assumes localhost deployment. The validation step uses
`http://localhost:{{ host_port }}/`, which will only work if the validation runs on the same machine where the container is deployed. If deployments happen on remote hosts, this validation will fail. Consider documenting this limitation or adding a configurable host variable for validation.
api/templates/deploy-rss-hub.yaml (1)
87-92: Consider uncommenting the validation step. The commented validation block uses a curl-based health check, which is a good practice for deployment templates. Uncomment it or document why it's disabled (e.g., curl may not be available in the execution environment, or RSSHub needs additional startup time).
api/templates/deploy-archive-box.yaml (1)
55-60: Fix or remove the commented validation step due to syntax concerns. The commented-out validation step has a potential issue in the curl command:

```yaml
cmd: "curl -fsS -o /dev/null -w '%{http_code}\\n' http://localhost:{{ host_port }} | grep -E '^(200|301|302)$'"
```

The `grep` pattern expects a single HTTP status code line, but the curl command uses `%{http_code}\n`, which should output only the code. However, the `\\n` escaping in YAML may not translate correctly to a newline at runtime. Additionally, the pipe to grep may fail if curl encounters errors. If this validation step is intended for future use, consider:
- Fixing the shell escaping (use `\n` or proper YAML multiline syntax).
- Adding error handling (e.g., `|| true` or the `-f` flag to curl).
- Testing to ensure variable interpolation of `{{ host_port }}` works in command context.

Alternatively, if this step is not ready, remove it entirely to keep the template clean.
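One way around both concerns is to drop the newline from `-w` entirely and compare the captured code in the shell, so nothing depends on YAML escaping or on grep seeing well-formed input. A sketch; the URL here is a placeholder for the resolved `http://localhost:{{ host_port }}`, not the template's actual output:

```shell
# Capture the status code with no trailing newline; a connection failure
# leaves curl's "000" (or an empty string) rather than breaking a pipeline.
url="http://127.0.0.1:8000"   # placeholder for http://localhost:{{ host_port }}
code=$(curl -fsS -o /dev/null -w '%{http_code}' "$url" 2>/dev/null || true)
[ -n "$code" ] || code="000"
case "$code" in
  200|301|302) echo "healthy" ;;
  *)           echo "unhealthy ($code)" ;;
esac
```

The `case` comparison replaces the `grep -E` pipe, and the fallback assignment makes curl failures explicit instead of producing an empty pipeline input.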
api/templates/deploy-trilium-notes.yaml (1)
46-46: Consider an absolute path or environment variable for the data directory. The default value
`~/trilium-data` uses a tilde for home directory expansion, which may not work correctly in all execution contexts (e.g., containerized executors, CI/CD systems where the working user context differs). Consider using an absolute path or environment variable reference to ensure reliable path resolution across different execution environments.

```diff
 data_dir:
   type: "string"
   description: "Path on host for persisting Trilium data"
-  default: "~/trilium-data"
+  default: "/var/lib/trilium-data"
   is_required: true
```

(Adjust the path based on project conventions for data storage locations.)
api/templates/deploy-tool-jet.yaml (1)
107-127: Enable or document the HTTP validation step. Lines 129–134 contain a commented-out HTTP validation check that would confirm the ToolJet deployment is healthy and responding. While optional, enabling this would catch deployment failures early.
Either enable the validation step (remove comments) or document why it is intentionally skipped (e.g., due to execution engine limitations).
api/templates/deploy-reactive-resume.yaml (2)
101-102: Port configuration lacks consistency. Internally exposed ports (3100 for server, 3000 for client) are hardcoded in the environment variables (lines 101–102, 131–132), but the mapped host ports are parameterized via
`host_port_server` and `host_port_client`. This means the internal application endpoints won't match if internal ports need to differ.

Consider either:
- Add variables for internal ports if they should be configurable.
- Document the assumption that server runs on port 3100 and client on port 3000.
If adding variables is desirable, introduce them:
```diff
 host_port_server:
   type: "integer"
   description: "Host port to expose for the server"
   default: 3100
   is_required: true
+internal_port_server:
+  type: "integer"
+  description: "Internal port the server listens on"
+  default: 3100
+  is_required: true
+
 host_port_client:
   type: "integer"
   description: "Host port to expose for the client"
   default: 3000
   is_required: true
+internal_port_client:
+  type: "integer"
+  description: "Internal port the client listens on"
+  default: 3000
+  is_required: true
```

Then update the environment variables:

```diff
 environment:
   - PUBLIC_URL=http://localhost:{{ host_port_client }}
-  - PUBLIC_SERVER_URL=http://localhost:{{ host_port_server }}
+  - PUBLIC_SERVER_URL=http://localhost:{{ internal_port_server }}
   - POSTGRES_DB=postgres
```

Also applies to: 131-132
138-142: Clarify the purpose of the commented-out validation step. The validation step is commented out with no explanation. Consider either removing it entirely or documenting why it's disabled and under what conditions it should be uncommented.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- `api/.air.toml` (1 hunks)
- `api/api/versions.json` (1 hunks)
- `api/internal/features/file-manager/service/manager.go` (2 hunks)
- `api/templates/deploy-adgaurd-home.yaml` (1 hunks)
- `api/templates/deploy-alist.yaml` (1 hunks)
- `api/templates/deploy-appsmith.yaml` (1 hunks)
- `api/templates/deploy-archive-box.yaml` (1 hunks)
- `api/templates/deploy-authelia.yaml` (1 hunks)
- `api/templates/deploy-coolify.yaml` (1 hunks)
- `api/templates/deploy-cyber-chef.yaml` (1 hunks)
- `api/templates/deploy-keepass-xc.yaml` (1 hunks)
- `api/templates/deploy-logseq.yaml` (1 hunks)
- `api/templates/deploy-loki.yaml` (1 hunks)
- `api/templates/deploy-next-cloud.yaml` (1 hunks)
- `api/templates/deploy-nginx-proxy-manager.yaml` (1 hunks)
- `api/templates/deploy-node-red.yaml` (1 hunks)
- `api/templates/deploy-pocket-base.yaml` (1 hunks)
- `api/templates/deploy-reactive-resume.yaml` (1 hunks)
- `api/templates/deploy-rss-hub.yaml` (1 hunks)
- `api/templates/deploy-siyuan.yaml` (1 hunks)
- `api/templates/deploy-teleport.yaml` (1 hunks)
- `api/templates/deploy-tool-jet.yaml` (1 hunks)
- `api/templates/deploy-trilium-notes.yaml` (1 hunks)
- `api/templates/deploy-watch-tower.yaml` (1 hunks)
- `api/templates/deploy-zulip.yaml` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- api/api/versions.json
🔇 Additional comments (34)
api/templates/deploy-watch-tower.yaml (4)
1-11: Metadata structure is clear and well-documented. The template metadata is properly defined with a descriptive description, appropriate icon and category, and a reasonable default version. Setting
`isVerified: false` for a new template is appropriate.
12-42: Variables are well-defined with clear descriptions and sensible defaults. All five variables are appropriately typed and documented. The default
`interval_seconds` of 86400 (24 hours) is reasonable for Watchtower's update-check frequency. The optional `cleanup` variable provides flexibility without requiring explicit configuration.
43-65: Verify template variable syntax and timeout appropriateness. The execution section follows a clear pull-then-run pattern. However, please confirm the following:
- The conditional template syntax
`{{ if cleanup }}--cleanup{{ end }}` on line 62 — ensure this aligns with your templating engine's syntax (e.g., Go templates, Jinja2, etc.).
- The 120-second timeout for the run step may be tight if the container takes time to initialize. Verify this is sufficient for Watchtower to start and stabilize.
The Docker socket mount on line 61 is necessary for Watchtower's functionality.
66-71: Clarify intent of the commented-out validation block. The validation block is commented out. Please clarify whether this should be:
- Uncommented and enabled as a standard health check for this template, or
- Removed entirely if validation is not required.
api/templates/deploy-alist.yaml (1)
86-91: Clarify intent for the commented validation block. The validation block is currently commented out. Please clarify whether this is intentionally deferred for a future implementation or if it should be removed. If validation is intended, consider enabling it; otherwise, remove the commented code to reduce maintenance overhead.
api/templates/deploy-appsmith.yaml (3)
1-10: Well-structured metadata section. The metadata block is complete and follows good practices—including verification flag, clear categorization, and semantic versioning.
49-72: Execution plan is well-formed. The pull and run steps properly use templated variables, include reasonable timeouts (300s for pull, 120s for run), and configure restart policies appropriately for a stateful application.
74-79: Uncomment or remove the validation block. The health check validation is commented out. Clarify whether:
- It should be enabled to verify deployment success, or
- It was intentionally disabled pending future work (add a TODO comment if so)
If enabled, verify that the curl command works as expected after container startup (the port variable reference and HTTP status codes look reasonable).
If this validation should be enabled, apply this diff:
```diff
- # validate:
- #   - name: "Check HTTP response from Appsmith"
- #     type: "command"
- #     properties:
- #       cmd: "curl -fsS -o /dev/null -w '%{http_code}\\n' http://localhost:{{ host_port_http }} | grep -E '^(200|301|302)$'"
- #     timeout: 60
+ validate:
+   - name: "Check HTTP response from Appsmith"
+     type: "command"
+     properties:
+       cmd: "curl -fsS -o /dev/null -w '%{http_code}\\n' http://localhost:{{ host_port_http }} | grep -E '^(200|301|302)$'"
+     timeout: 60
```

Alternatively, if intentionally disabled, replace with a TODO comment explaining why.
api/templates/deploy-teleport.yaml (2)
1-10: Metadata structure looks well-formed. The template metadata follows a consistent structure with appropriate identification fields and categorization.
59-74: Verify Docker Compose file integration. The execution block references
`{{ container_name }}`, `{{ host_port }}`, etc., but there's no visible integration with the `compose_file` variable (lines 13-17). Confirm that:
- The container configuration is persisted to or validated against the referenced Docker Compose file
- Or clarify whether the Docker Compose file variable is obsolete and should be removed from the variables section
api/templates/deploy-pocket-base.yaml (1)
1-61: Well-structured deployment template with sensible defaults. The template is clean, consistent with the broader templating pattern, and uses proper variable substitution throughout. Metadata is complete, variable definitions are clear, Docker actions are properly configured with realistic timeouts (300s for pull, 180s for run), and the restart policy ensures resilience.
api/templates/deploy-node-red.yaml (2)
74-79: Clarify or enable the validation block. The commented-out validation checks for HTTP status codes from the Node-RED endpoint. Clarify whether this should be:
- Uncommented and enabled as part of the deployment verification flow, or
- Removed entirely if validation is not part of the template execution model.
If validation is intended to be part of deployment success criteria, enabling it would help ensure the service is actually accessible before marking the deployment complete.
1-79: Template structure and completeness look solid. The metadata is complete, variables are well-typed and documented with sensible defaults, execution steps follow a clear pull-then-run pattern with appropriate timeouts, and the configuration supports Node-RED's core requirements (port binding, data persistence, timezone). The template provides good user flexibility with overridable defaults.
api/templates/deploy-coolify.yaml (3)
1-10: Verify metadata field naming convention. The metadata section uses mixed naming conventions:
`isVerified` (camelCase, line 10) while other fields like `is_required` use snake_case. Ensure this aligns with the naming convention adopted across other templates in the codebase. Can you confirm whether
`isVerified` should be `is_verified` to maintain consistency with snake_case field naming? You can search the codebase to verify the convention used in existing templates.
12-37: Verify all variables are truly required. All five variables are marked
`is_required: true` (lines 17, 22, 27, 32, 37). While `compose_file` and `container_name` are clearly essential, verify whether `host_port`, `container_port`, and `data_volume` should allow optional usage with their defaults. This affects the deployment UX when users deploy with minimal configuration. Provide clarity on which variables must always be explicitly set vs. which can reasonably use defaults in most scenarios.
63-68: Address the commented-out validation block. The validation block (lines 63–68) is commented out. Clarify the intent:
- If incomplete: Finish the validation logic and enable it to ensure post-deployment health checks.
- If deferred: Consider removing it or opening a tracking issue rather than leaving commented code in version control.
- If intentionally optional: Document why validation is skipped for this template.
Commented code can obscure intent and clutter maintenance over time.
Should this validation be enabled, refined, removed, or tracked separately?
api/templates/deploy-zulip.yaml (4)
12-47: Verify path expansion for the `data_dir` default value. Line 46 uses
`~/zulip-data` as the default path. The tilde (`~`) may not reliably expand in all execution contexts (e.g., within Docker, container runtimes, or programmatic templating engines). Consider using an absolute path or an environment variable placeholder for better portability and predictability.
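The pitfall is easy to reproduce: tilde expansion is a shell word-expansion feature, so a value that reaches the runtime as a quoted or template-substituted string keeps the literal `~`, which Docker would then treat as a plain directory name. A small sketch:

```shell
# An unquoted tilde expands during a shell assignment...
expanded=~/zulip-data
printf 'shell-expanded: %s\n' "$expanded"

# ...but a quoted (or template-substituted) value keeps the literal character.
literal="~/zulip-data"
printf 'templated:      %s\n' "$literal"
case "$literal" in "~"*) echo "literal tilde reached the runtime" ;; esac
```

This is why a templated default like `~/zulip-data` tends to surface as a `./~/zulip-data` directory on the host rather than the user's home.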
49-69: Verify the docker pull+run pattern aligns with the execution runtime. The execution block uses two sequential docker steps:
`pull` (lines 51-57) and `run` (lines 59-69). Confirm that:
- The runtime supports `action: "pull"` as a separate step (vs. implicit in `action: "run"`)
- The pull step timeout (300s) and run timeout (180s) are appropriate for typical Zulip deployments
- The `restart: "unless-stopped"` policy is supported by the execution engine

If the runtime automatically pulls images during `run`, the explicit `pull` step may be redundant or need justification (e.g., to apply custom timeouts or improve observability).
71-76: Clarify intent of the commented validation block. The validation step (lines 71–76) is commented out. Clarify whether this should be:
- Uncommented to enable HTTP health checking post-deployment
- Removed if validation is intentionally skipped
- Updated to reflect actual Zulip response codes and paths
If left commented, consider adding a brief explanation (e.g., "Optional: uncomment to enable validation").
12-47: Consider adding environment variables for Zulip configuration. The template defines container networking and volume paths but does not expose environment variables for Zulip-specific configuration (e.g.,
`ZULIP_DOMAIN`, `INITIAL_SERVER_ICON_SOURCE`, `POSTGRES_PASSWORD`, SMTP settings). Depending on the Zulip container's defaults:
- If the container auto-configures with sensible defaults, this is acceptable and should be documented.
- If configuration is needed for a functional deployment, consider adding an
`environment` section to `variables` with the most common/critical settings.

Verify whether this aligns with the pattern used in other templates in this PR.
api/templates/deploy-cyber-chef.yaml (2)
1-72: Template structure and execution flow look solid. The YAML syntax is well-formed, variables are appropriately typed with sensible defaults, and the two-step execution (pull → run) follows Docker best practices. Port mapping, volume binding, and restart policy are correctly configured.
67-72: Validation step is well-positioned but should remain optional. The commented HTTP validation check is a good addition for when users enable it. However, the timeout of 60 seconds might be aggressive for initial startup; consider noting in docs that this should only be enabled after initial deployment verification.
api/templates/deploy-adgaurd-home.yaml (2)
43-47: Verify relative path behavior for `data_volume`. The default value
`./adguard_data` is a relative path that depends on the working directory during execution. Verify this is the intended behavior for your execution engine. If the template runs from varying working directories, consider:
- Using an absolute path or environment variable reference
- Documenting the expected working directory clearly
- Using a named Docker volume instead (e.g.,
`adguard_data:/opt/adguardhome/work`)
49-73: Template structure and execution plan look solid. The pull and run steps are well-configured with appropriate timeouts (300s for pull, 180s for run). Volume mounts are correctly mapped for AdGuard Home's dual-directory structure (work and conf), and the restart policy "unless-stopped" is a good default. The commented-out validation step is sensible for future enablement.
api/templates/deploy-nginx-proxy-manager.yaml (1)
88-92: Clarify the status of the commented-out validation step. The validation block is commented out without explanation. If this step is intentionally disabled (e.g., pending runtime support), document this. If it should be active, uncomment and ensure it works correctly.
Please clarify:
- Is the validation step commented out intentionally, pending further work?
- If it should be enabled, has it been tested to confirm it works?
- Are there other templates in this PR with similar commented validation that should follow the same approach?
api/templates/deploy-archive-box.yaml (4)
41-53: Verify docker run properties and variable interpolation in ports/volumes. The "Run ArchiveBox container" step uses standard docker run properties. Verify that:
- Port mapping format `"{{ host_port }}:8000"` is correctly interpolated at runtime.
- Volume mounting format `"{{ data_volume_path }}:/data"` is correctly interpolated.
- The restart policy value "unless-stopped" is valid for the target docker runtime.
Also note: the image reference should likely be pinned to a specific version rather than "latest" for reproducibility and security (as discussed in the pull request summary mentioning this is a verified template initiative).
31-39: No issues found—docker action implementation follows established patterns. The verification confirms that the execution engine already supports
`type: "docker"` with both "pull" and "run" actions. Multiple existing templates (deploy-firefly-iii.yaml, deploy-click-house.yaml, deploy-glances.yaml, deploy-nginx-proxy-manager.yaml, and 68+ additional occurrences) use the same properties format. The code in deploy-archive-box.yaml at lines 31-39 correctly follows this established schema with `action: "pull"`, `image`, `tag`, and `timeout` properties.
12-29: Variable schema is consistent with established conventions. All three verification points have been confirmed:
- The `{{ variable_name }}` interpolation syntax is correct and used throughout all templates
- The `is_required` field name is the correct convention, consistently used across deploy-zulip.yaml, deploy-next-cloud.yaml, and other templates
- The type system (both "string" and "integer") is properly supported and used across multiple templates
The code in lines 12-29 of deploy-archive-box.yaml follows all established patterns in the codebase.
1-10: Metadata structure is consistent with other deployment templates—no issues found. After verification, the metadata section in
`deploy-archive-box.yaml` matches the exact structure and conventions used across all other deployment templates in `api/templates/`. The docker action format (pull/run), variable definitions with `is_required` flags, and template variable syntax (`{{ variable_name }}`) are all consistent with existing templates like `deploy-appsmith.yaml` and `deploy-alist.yaml`. The template follows established codebase patterns correctly.

api/templates/deploy-trilium-notes.yaml (3)
49-69: Template structure and execution steps look sound. The metadata, variables, and execution blocks are well-structured and follow a clear pattern. All required configuration is present, timeouts are reasonable, and template variable interpolation is correct.
67-67: Container path `/home/node/trilium-data` is correct for the standard Trilium image. The default container data path for Trilium Notes is
`/home/node/trilium-data`, which matches the volume mount in the YAML template. The template correctly maps the host's `{{ data_dir }}` variable to this internal path, ensuring data persistence. Note: Rootless image deployments use
`/home/trilium/trilium-data` instead. If the template must support both image types, you may need a conditional configuration.
16-16: Remove the Docker tag format concern—the tag is valid and documented. The
`0.52-latest` tag is a legitimate series tag published by the Trilium project, as confirmed in their official Docker documentation and release pages. This is the standard tagging approach used by zadam/trilium—combining series versions with a "latest" suffix for that series is a documented pattern, not unconventional. Likely an incorrect or invalid review comment.
api/internal/features/file-manager/service/manager.go (1)
8-8: Clarify the reason for disabling syscall usage. The commented import is consistent with the disabled stat extraction logic below, but no explanation is provided for why this platform-specific functionality was removed.
api/templates/deploy-tool-jet.yaml (1)
55-127: Verify whether the deployment execution engine supports template variables in `depends_on` fields. The templates show inconsistent usage:
`deploy-tool-jet.yaml` uses template variables in `depends_on` (lines 123–125: `- "{{ container_name_db }}"` and `- "{{ container_name_redis }}"`), while `deploy-reactive-resume.yaml` uses hardcoded service names (lines 108, 133–134: `depends_on: [postgres, server]`). The execution engine source code is not present in the visible codebase. Confirm that your execution engine interpolates template variables in `depends_on` before container orchestration, or refactor to use hardcoded service names consistent with standard Docker Compose semantics.
```diff
+// sysInfo := info.Sys()
 var ownerId, groupId int64
 var permissions int64

-if statInfo, ok := sysInfo.(*syscall.Stat_t); ok {
-	ownerId = int64(statInfo.Uid)
-	groupId = int64(statInfo.Gid)
-	permissions = int64(statInfo.Mode & 0777)
-}
+// if statInfo, ok := sysInfo.(*syscall.Stat_t); ok {
+// 	ownerId = int64(statInfo.Uid)
+// 	groupId = int64(statInfo.Gid)
+// 	permissions = int64(statInfo.Mode & 0777)
+// }
```
🧩 Analysis chain
Breaking change: File permission and ownership data is no longer returned.
Commenting out the stat extraction logic means the API will now return 0 for permissions, owner_id, and group_id fields instead of the actual system values. This is a breaking behavioral change that could impact:
- Clients that rely on permission data for access control decisions
- UI components displaying file ownership information
- Security auditing or compliance features
If this change addresses platform compatibility issues (e.g., syscall.Stat_t is Unix-specific), consider using Go build tags instead:
//go:build unix
package service
import (
	"os"
	"syscall"
)
func extractStatInfo(info os.FileInfo) (ownerId, groupId, permissions int64) {
if sysInfo := info.Sys(); sysInfo != nil {
if statInfo, ok := sysInfo.(*syscall.Stat_t); ok {
return int64(statInfo.Uid), int64(statInfo.Gid), int64(statInfo.Mode & 0777)
}
}
return 0, 0, 0
}

Alternatively, if these fields are no longer supported, remove them from the FileData struct to avoid returning misleading zero values.
Run the following script to check for API consumers that may depend on these fields:
🏁 Script executed:
#!/bin/bash
# Search for usage of permissions, owner_id, or group_id fields from FileData responses
rg -nP --type=go -C3 '\b(Permissions|OwnerId|GroupId)\b' -g '!**/service/manager.go'

Length of output: 2905
🏁 Script executed:
#!/bin/bash
# Find FileData struct definition and API usage
echo "=== FileData struct definition ==="
rg -nP --type=go -B2 -A8 'type FileData struct' -g '!**/test/**'
echo -e "\n=== Handlers returning FileData ==="
rg -nP --type=go -B3 -A3 'FileData' -g '!**/service/manager.go' | head -50

Length of output: 1102
🏁 Script executed:
#!/bin/bash
# Get complete FileData struct definition and find API consumers
echo "=== Complete FileData struct ==="
sed -n '145,200p' api/internal/features/file-manager/service/manager.go
echo -e "\n=== API routes/handlers using file manager ==="
rg -nP --type=go 'FileData|file.*manager' api/internal/features/file-manager/handler/ api/internal/routes/ 2>/dev/null | head -40
echo -e "\n=== Check manager.go for OwnerId and GroupId fields ==="
rg -nP --type=go 'OwnerId|GroupId' api/internal/features/file-manager/

Length of output: 2040
🏁 Script executed:
#!/bin/bash
# View the full context of the commented code and assignments
echo "=== Context around lines 94-120 ==="
sed -n '80,125p' api/internal/features/file-manager/service/manager.go
echo -e "\n=== Search for routes/handlers in file-manager ==="
find api/internal/features/file-manager -name '*.go' -type f | xargs grep -l 'http\|route\|handler' | head -10
echo -e "\n=== List all files in file-manager ==="
find api/internal/features/file-manager -name '*.go' -type f

Length of output: 2121
🏁 Script executed:
#!/bin/bash
# Check test expectations and controller exposure
echo "=== Manager test file (checking for permission/owner assertions) ==="
cat api/internal/features/file-manager/tests/manager_test.go
echo -e "\n=== List files controller (checking HTTP exposure) ==="
cat api/internal/features/file-manager/controller/list_files.go | head -60

Length of output: 4469
Breaking change: File permission and ownership data is no longer returned.
Commenting out the stat extraction logic means the API will now return 0 for permissions, owner_id, and group_id fields instead of actual system values. This is confirmed as a breaking behavioral change—these fields are part of the FileData struct and are exposed via the HTTP API through the ListFiles controller.
Clients that rely on permission data for access control decisions, UI display of file ownership, or security auditing will now receive zero values.
If this change addresses platform compatibility issues (e.g., syscall.Stat_t being Unix-specific), use Go build tags instead:
//go:build unix
package service
import (
	"os"
	"syscall"
)
func extractStatInfo(info os.FileInfo) (ownerId, groupId, permissions int64) {
if sysInfo := info.Sys(); sysInfo != nil {
if statInfo, ok := sysInfo.(*syscall.Stat_t); ok {
return int64(statInfo.Uid), int64(statInfo.Gid), int64(statInfo.Mode & 0777)
}
}
return 0, 0, 0
}

Alternatively, if these fields are no longer supported on all platforms, remove them from the FileData struct to avoid returning misleading zero values.
🤖 Prompt for AI Agents
In api/internal/features/file-manager/service/manager.go around lines 94-102 the
stat extraction was commented out causing permissions, owner_id and group_id to
always be zero; restore platform-safe stat extraction by moving the
syscall-dependent logic into a dedicated helper implemented with Go build tags
(e.g., an extractStatInfo in a file with //go:build unix that uses
syscall.Stat_t and returns ownerId, groupId, permissions) and provide a non-unix
fallback implementation that returns zeros so the build remains portable;
alternatively, if ownership/permissions cannot be supported across all targets,
remove those fields from FileData and update API types/handlers accordingly.
```yaml
metadata:
  id: "deploy-adguard-home"
```
Fix filename typo: "adgaurd" should be "adguard".
Line 2 shows the id as "deploy-adguard-home" but the filename is deploy-adgaurd-home.yaml. The filename is misspelled (missing 'u' in "adguard"). For consistency and clarity, rename the file to deploy-adguard-home.yaml.
🤖 Prompt for AI Agents
In api/templates/deploy-adgaurd-home.yaml around lines 1 to 2, the filename
contains a typo ("adgaurd"); rename the file to deploy-adguard-home.yaml to
match the id and correct spelling, and update any references (scripts, CI,
manifests, imports) that point to the old filename to use the new
deploy-adguard-home.yaml name.
```yaml
  category: "Security"
  type: "install"
  version: "1.0.0"
  isVerified: true
```
Change isVerified to false for new templates.
Line 10 marks this template as verified, but new templates should start as unverified until maintainers confirm they are production-ready and tested.
```diff
- isVerified: true
+ isVerified: false
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
isVerified: false
```
🤖 Prompt for AI Agents
In api/templates/deploy-keepass-xc.yaml around line 10, the template is marked
as verified via "isVerified: true" but new templates must start unverified;
change the value to "isVerified: false" so maintainers can review and confirm
the template before marking it verified.
```yaml
  compose_file:
    type: "string"
    description: "Path to the Docker Compose file"
    default: "./docker-compose.yml"
    is_required: true
```
Remove unused compose_file variable.
The compose_file variable is marked as required but is never used in the execution steps (lines 65–99). This creates confusion and dead code.
Remove the variable:
```diff
 variables:
-  compose_file:
-    type: "string"
-    description: "Path to the Docker Compose file"
-    default: "./docker-compose.yml"
-    is_required: true
   container_name:
```

📝 Committable suggestion
```suggestion
variables:
  container_name:
```
🤖 Prompt for AI Agents
In api/templates/deploy-keepass-xc.yaml around lines 13 to 17, the
'compose_file' input is declared and marked required but is never referenced in
the workflow steps (lines ~65–99); remove this unused variable declaration from
the template to eliminate dead code and confusion, and if any downstream logic
actually needs a compose file, instead add/propagate a properly named input
where it’s referenced or update the steps to use the existing file input
accordingly.
```yaml
  access_auth_code:
    type: "string"
    description: "Access authorization code for SiYuan (set a secure code)"
    default: "changeme"
    is_required: true
```
Default authorization code "changeme" poses a security risk.
The access_auth_code variable defaults to "changeme", which is an insecure placeholder. Although marked as required, users might deploy this template without changing the default, resulting in an insecurely configured application with a weak/known default credential.
Recommended fix: Replace the default with a more secure or empty value to force users to provide a strong authentication code:
```diff
 access_auth_code:
   type: "string"
   description: "Access authorization code for SiYuan (set a secure code)"
-  default: "changeme"
+  default: ""
   is_required: true
```

Alternatively, consider generating a secure default at runtime or documenting the requirement more prominently.
📝 Committable suggestion
```suggestion
access_auth_code:
  type: "string"
  description: "Access authorization code for SiYuan (set a secure code)"
  default: ""
  is_required: true
```
🤖 Prompt for AI Agents
In api/templates/deploy-siyuan.yaml around lines 43 to 47, the access_auth_code
default is set to the insecure placeholder "changeme"; remove the insecure
default so the field is empty or unset (forcing explicit input), or set it to a
generated/placeholder that cannot be used (e.g., no default) and update the
template to require users to supply a strong secret at deploy time; ensure
validation/docs note that a non-empty, strong access_auth_code must be provided.
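As a concrete illustration of runtime generation, a pre-deploy step could mint a random code instead of shipping "changeme". The variable name mirrors the template, but the generation step itself is a hedged sketch, not part of the template engine:

```shell
# Sketch: generate a 32-hex-character access code at deploy time instead of
# relying on the "changeme" placeholder (assumes openssl is installed).
access_auth_code=$(openssl rand -hex 16)
echo "generated code length: ${#access_auth_code}"
```

A code this long is resistant to guessing, and regenerating it per deployment avoids ever persisting a shared default.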
```yaml
execution:
  run:
    - name: "Pull Teleport image"
      type: "docker"
      properties:
        action: "pull"
        image: "public.ecr.aws/gravitational/teleport"
        tag: "latest"
        timeout: 300
```
Image tag strategy uses latest, which creates production stability risks.
Using the latest tag (line 56) means deployments pull unpredictable Teleport versions, potentially introducing breaking changes or security issues across environments. This is especially problematic for a security-focused tool like Teleport.
Consider:

- Pinning to specific stable versions (e.g., "v14.0.0")
- Allowing the version to be user-configurable via a new `teleport_version` variable
- Documenting the stability/compatibility implications of `latest`
🤖 Prompt for AI Agents
In api/templates/deploy-teleport.yaml around lines 49 to 57, the deployment
currently pulls the Teleport image using the unstable "latest" tag; change this
to a pinned, configurable version. Replace the hardcoded tag with a new variable
(e.g., teleport_version) defaulting to a specific stable release (like
"v14.0.0"), update any template/values files to expose that variable, and ensure
the template uses the variable for the image tag so deployments pin a known
Teleport version while allowing overrides.
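One lightweight way to express the pin while keeping an override path is an environment-style default. `TELEPORT_VERSION` and the `14.0.0` default below are illustrative assumptions, not values taken from the template:

```shell
# Default to a known release; a caller can still override by exporting
# TELEPORT_VERSION before the deploy step runs. Never fall back to "latest".
TELEPORT_VERSION="${TELEPORT_VERSION:-14.0.0}"
echo "would pull public.ecr.aws/gravitational/teleport:${TELEPORT_VERSION}"
```

The `${var:-default}` expansion gives reproducible deployments by default while leaving upgrades an explicit, auditable action.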
```yaml
  host_port:
    type: "integer"
    description: "Host port to expose for ToolJet"
    default: 80
    is_required: true
```
Default port 80 requires elevated privileges.
Line 34 sets the default host port to 80, which requires root/administrator privileges on most systems. This may cause deployment failures in non-privileged environments (e.g., Docker containers without special capabilities, restricted user accounts).
Consider a higher default port (e.g., 8080) or make port selection mandatory without a "privileged" default.
```diff
 host_port:
   type: "integer"
   description: "Host port to expose for ToolJet"
-  default: 80
+  default: 8080
   is_required: true
```

📝 Committable suggestion
```suggestion
host_port:
  type: "integer"
  description: "Host port to expose for ToolJet"
  default: 8080
  is_required: true
```
🤖 Prompt for AI Agents
In api/templates/deploy-tool-jet.yaml around lines 31 to 35, the default
host_port is set to 80 which requires elevated privileges; change the default to
a non-privileged port (e.g., 8080) or remove the default and keep is_required
true so callers must provide a port; update the description to note the
non-privileged default or that selection is mandatory to avoid deployment
failures in restricted environments.
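The privilege boundary can be checked mechanically before binding. This pre-flight sketch assumes a POSIX shell and treats anything below 1024 as privileged:

```shell
# Ports below 1024 require root (or CAP_NET_BIND_SERVICE) on most systems,
# so warn before the deployment attempts to bind one.
host_port=80
if [ "$host_port" -lt 1024 ]; then
  echo "port $host_port is privileged; consider 8080 or higher"
fi
```

Such a check could run as an early template step so misconfigured ports fail fast with a clear message instead of an opaque bind error.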
```yaml
  postgres_password:
    type: "string"
    description: "Password for PostgreSQL database"
    default: "tooljet"
    is_required: true

  secret_key:
    type: "string"
    description: "Secret key for ToolJet application"
    default: "replace-with-secure-key"
    is_required: true
```
Replace weak default credentials with secure randomization or enforce validation.
Lines 40 and 46 contain placeholder credentials ("tooljet" and "replace-with-secure-key") as defaults. Deploying with these defaults poses a critical security risk. Production deployments must never use weak, predictable credentials.
Remove defaults entirely, make them required without suggestions, or implement automatic secure generation (e.g., random tokens, strong passwords).
Suggested approach:
```diff
 postgres_password:
   type: "string"
   description: "Password for PostgreSQL database"
-  default: "tooljet"
   is_required: true

 secret_key:
   type: "string"
   description: "Secret key for ToolJet application"
-  default: "replace-with-secure-key"
   is_required: true
```

📝 Committable suggestion
```suggestion
postgres_password:
  type: "string"
  description: "Password for PostgreSQL database"
  is_required: true
secret_key:
  type: "string"
  description: "Secret key for ToolJet application"
  is_required: true
```
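If defaults are kept for local development, a pre-flight check can at least refuse to deploy the known placeholders. The rejected strings below are taken from this template; the check itself is a hypothetical addition:

```shell
# Fail fast when a known placeholder credential is still in use.
check_secret() {
  case "$1" in
    tooljet|replace-with-secure-key|changeme|"")
      echo "refusing placeholder credential"; return 1 ;;
    *) return 0 ;;
  esac
}

check_secret "replace-with-secure-key" || echo "blocked"
check_secret "$(openssl rand -hex 16)" && echo "ok"
```

Pairing this with random generation (as suggested for the other templates) removes the weak-default risk entirely rather than just warning about it.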
```yaml
  data_volume_path:
    type: "string"
    description: "Host path for persistent data storage"
    default: "./data"
    is_required: true
```
🧩 Analysis chain
Clarify behavior of relative path for persistent data storage.
The default path "./data" is relative and may not resolve consistently depending on the execution context or working directory of the deployment process. This could lead to data loss or unexpected behavior across different deployment environments.
Recommend using an absolute path or documenting how the execution engine resolves relative paths from the template location. Consider also allowing users to override this at deployment time.
I'll help you verify this review comment by examining the deployment configuration and related context. Let me gather information about how paths are handled in this deployment template.
🏁 Script executed:
```shell
#!/bin/bash
# Inspect the deploy-tool-jet.yaml file and surrounding context
echo "=== File Content ==="
cat api/templates/deploy-tool-jet.yaml
echo -e "\n=== Directory Structure ==="
find api/templates -type f | head -20
echo -e "\n=== Search for path-related documentation ==="
find . -name "*.md" -o -name "*.txt" -o -name "README*" | xargs grep -l "path\|deployment" 2>/dev/null | head -10
echo -e "\n=== Check for other volume/path configurations ==="
rg -n "volume_path|data.*path" --type=yaml
```

Length of output: 6016
Let me investigate further to understand how the deployment process handles relative paths:
🏁 Script executed:
```shell
#!/bin/bash
# Check deploy-pinry.yaml for absolute path handling
echo "=== deploy-pinry.yaml path handling ==="
grep -A5 -B5 "absolute data path" api/templates/deploy-pinry.yaml
echo -e "\n=== deploy-pinry.yaml data_path variable ==="
grep -A5 "data_path:" api/templates/deploy-pinry.yaml | head -15
echo -e "\n=== CLI reference documentation on paths ==="
grep -A10 -B2 "path\|working.dir\|execution" docs/cli/commands/install.md | head -40
echo -e "\n=== Check for path resolution in execution logic ==="
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" \) | xargs grep -l "data_volume_path\|volume.*path" 2>/dev/null | head -10
```

Length of output: 1860
Let me search for more information about how the execution engine and templates are processed:
🏁 Script executed:
```shell
#!/bin/bash
# Check deploy-pinry.yaml install_dir variable
echo "=== deploy-pinry.yaml variables section ==="
head -100 api/templates/deploy-pinry.yaml | grep -A20 "variables:"
# Look for execution engine or template processor
echo -e "\n=== Search for execution engine code ==="
find . -type f \( -name "*.ts" -o -name "*.js" \) -path "*/src/*" | xargs grep -l "execution\|template.*process\|volume.*mount" 2>/dev/null | head -10
# Search for any path-related TODOs or comments
echo -e "\n=== Search for path-related issues or comments ==="
rg -n "TODO|FIXME|NOTE.*path|relative.*path|absolute.*path" --type=yaml | head -20
# Check for any scripts that handle path resolution
echo -e "\n=== Look for deployment or execution scripts ==="
find . -type f \( -name "*.sh" -o -name "*.bash" \) | head -15
```

Length of output: 817
Let me check which templates properly handle absolute path conversion:
🏁 Script executed:
```shell
#!/bin/bash
# Find all templates that explicitly convert paths to absolute
echo "=== Templates with explicit absolute path conversion ==="
rg -l "readlink|realpath|absolute" api/templates/ --type=yaml
# For each template with data_volume_path, check if it converts to absolute
echo -e "\n=== Templates using data_volume_path and their path handling ==="
for file in $(rg -l "data_volume_path" api/templates/ --type=yaml); do
  echo "--- $file ---"
  grep -A2 'data_volume_path:' "$file" | head -4
  echo "Has path conversion: $(grep -c 'readlink\|realpath\|absolute' "$file")"
done
# Check the actual volume mount usage to see if context is provided
echo -e "\n=== Compare volume mount patterns ==="
echo "deploy-tool-jet.yaml:"
grep "data_volume_path" api/templates/deploy-tool-jet.yaml
echo -e "\ndeploy-pinry.yaml:"
grep -A5 "Set absolute data path" api/templates/deploy-pinry.yaml
grep "install_dir" api/templates/deploy-pinry.yaml | grep -v "description"
```

Length of output: 1502
Convert data_volume_path to absolute path in execution, matching the pattern used in deploy-pinry.yaml.
The relative path "./data" creates an inconsistency with how other templates handle this. The deploy-pinry.yaml template explicitly converts its install_dir to an absolute path using readlink -f, proving that relative path resolution is a recognized concern in this codebase. Meanwhile, five other templates—including deploy-tool-jet.yaml—use relative defaults for data_volume_path without conversion.
Add a step in the execution to resolve the path to absolute before using it in volume mounts, similar to the pinry pattern:
```yaml
- name: "Set absolute data path"
  type: "command"
  properties:
    cmd: "export DATA_VOLUME_PATH=$(readlink -f {{ data_volume_path }})"
    timeout: 10
```

Then reference the exported variable in subsequent volume mounts to ensure consistent path resolution across deployment environments.
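The resolution behaviour of `readlink -f` can be seen in isolation. This sketch uses a scratch directory rather than the template's real working directory:

```shell
# A relative path resolves against $PWD, so the same "./data" can point at
# different directories depending on where the engine runs the step.
mkdir -p /tmp/path-demo && cd /tmp/path-demo
mkdir -p ./data
DATA_VOLUME_PATH=$(readlink -f ./data)
echo "$DATA_VOLUME_PATH"  # prints the canonicalized absolute path
case "$DATA_VOLUME_PATH" in
  /*) echo "absolute: safe to pass to a volume mount" ;;
esac
```

Because Docker requires absolute host paths for bind mounts, canonicalizing once up front also removes any dependence on the engine's working directory.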
🤖 Prompt for AI Agents
In api/templates/deploy-tool-jet.yaml around lines 49 to 53, the
data_volume_path default is a relative path ("./data") but is not converted to
an absolute path before use; add a pre-step to resolve it to an absolute path
(using readlink -f or equivalent) and export it to an environment variable
(e.g., DATA_VOLUME_PATH) and then update subsequent volume mount references to
use that exported absolute variable so mounts are consistent with
deploy-pinry.yaml.
Issue
Link to related issue(s):
Description
Short summary of what this PR changes or introduces.
Scope of Change
Select all applicable areas impacted by this PR:
Screenshot / Video / GIF (if applicable)
Attach or embed screenshots, screen recordings, or GIFs demonstrating the feature or fix.
Related PRs (if any)
Link any related or dependent PRs across repos.
Additional Notes for Reviewers (optional)
Anything reviewers should know before testing or merging (e.g., environment variables, setup steps).
Developer Checklist
To be completed by the developer who raised the PR.
Reviewer Checklist
To be completed by the reviewer before merge.
Summary by CodeRabbit