Commit e14b8c1 — Merge pull request #731 from OWASP/feat/external-dns

feat: add external-dns and acm capabilities

2 parents: e644a6b + 168cb94

15 files changed: +218 −29 lines
.gitignore — 3 additions, 0 deletions

```diff
@@ -21,3 +21,6 @@ db.zip
 node_modules
 .npm
 gcp/k8s/secret-volume.yml
+
+aws/k8s/ctfd-ingress.yaml
+wrongsecrets-balancer-ingress.yml
```
aws/README.md — 19 additions, 7 deletions

```diff
@@ -42,13 +42,14 @@ The terraform code is loosely based on [this EKS managed Node Group TF example](
 
 1. export your AWS credentials (`export AWS_PROFILE=awsuser`)
 2. check whether you have the right profile by doing `aws sts get-caller-identity`. Make sure you have the right account and have the rights to do this.
-3. Do `terraform init` (if required, use tfenv to select TF 0.14.0 or higher)
-4. The bucket ARN will be asked in the next 2 steps. Take the one provided to you in the output earlier (e.g., `arn:aws:s3:::terraform-20230102231352749300000001`).
-5. Do `terraform plan`
-6. Do `terraform apply`. Note: the apply will take 10 to 20 minutes depending on the speed of the AWS backplane.
-7. When creation is done, do `aws eks update-kubeconfig --region eu-west-1 --name wrongsecrets-exercise-cluster --kubeconfig ~/.kube/wrongsecrets`
-8. Do `export KUBECONFIG=~/.kube/wrongsecrets`
-9. Run `./build-and-deploy-aws.sh` to install all the required materials (helm for calico, secrets management, autoscaling, etc.)
+3. Ensure you have set all the right variables in `terraform.tfvars`. **Optional:** if you want to use a custom domain with TLS, also fill out your domain name(s) and Route53 hosted zone here. If your domain is not hosted in Route53, delegate the (sub)domains to the Route53 nameservers by [following the AWS docs](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html).
+4. Do `terraform init` (if required, use tfenv to select TF 0.14.0 or higher)
+5. The bucket ARN will be asked in the next 2 steps. Take the one provided to you in the output earlier (e.g., `arn:aws:s3:::terraform-20230102231352749300000001`).
+6. Do `terraform plan`
+7. Do `terraform apply`. Note: the apply will take 10 to 20 minutes depending on the speed of the AWS backplane.
+8. When creation is done, do `aws eks update-kubeconfig --region eu-west-1 --name wrongsecrets-exercise-cluster --kubeconfig ~/.kube/wrongsecrets`
+9. Do `export KUBECONFIG=~/.kube/wrongsecrets`
+10. Run `./build-and-deploy-aws.sh` to install all the required materials (helm for calico, secrets management, autoscaling, etc.)
 
 Your EKS cluster should be visible in [eu-west-1](https://eu-west-1.console.aws.amazon.com/eks/home?region=eu-west-1#/clusters) by default. Want a different region? You can modify `terraform.tfvars` or input it directly using the `region` variable in plan/apply.
```
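A minimal `terraform.tfvars` matching the variables documented in the auto-generated inputs table below might look like this (every value is a placeholder, not a real account resource):

```hcl
# Illustrative terraform.tfvars; replace all values with your own.
region           = "eu-west-1"
state_bucket_arn = "arn:aws:s3:::terraform-20230102231352749300000001"

# Optional: custom domains with TLS via ACM and external-dns.
# Leave these empty ("") to skip certificate and DNS setup entirely.
balancer_domain_name = "balancer.example.com"
ctfd_domain_name     = "ctfd.example.com"
hosted_zone_id       = "Z0123456789ABCDEFGHIJ"
```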

```diff
@@ -158,9 +159,12 @@ The documentation below is auto-generated to give insight on what's created via
 
 | Name | Source | Version |
 |------|--------|---------|
+| <a name="module_acm_balancer"></a> [acm\_balancer](#module\_acm\_balancer) | terraform-aws-modules/acm/aws | n/a |
+| <a name="module_acm_ctfd"></a> [acm\_ctfd](#module\_acm\_ctfd) | terraform-aws-modules/acm/aws | n/a |
 | <a name="module_cluster_autoscaler_irsa_role"></a> [cluster\_autoscaler\_irsa\_role](#module\_cluster\_autoscaler\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.46 |
 | <a name="module_ebs_csi_irsa_role"></a> [ebs\_csi\_irsa\_role](#module\_ebs\_csi\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.46 |
 | <a name="module_eks"></a> [eks](#module\_eks) | terraform-aws-modules/eks/aws | 20.24.2 |
+| <a name="module_external_dns_irsa_role"></a> [external\_dns\_irsa\_role](#module\_external\_dns\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.46 |
 | <a name="module_load_balancer_controller_irsa_role"></a> [load\_balancer\_controller\_irsa\_role](#module\_load\_balancer\_controller\_irsa\_role) | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.46 |
 | <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 5.13.0 |
@@ -204,24 +208,32 @@ The documentation below is auto-generated to give insight on what's created via
 
 | Name | Description | Type | Default | Required |
 |------|-------------|------|---------|:--------:|
+| <a name="input_balancer_domain_name"></a> [balancer\_domain\_name](#input\_balancer\_domain\_name) | The domain name to use | `string` | `""` | no |
 | <a name="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name) | The EKS cluster name | `string` | `"wrongsecrets-exercise-cluster"` | no |
 | <a name="input_cluster_version"></a> [cluster\_version](#input\_cluster\_version) | The EKS cluster version to use | `string` | `"1.30"` | no |
+| <a name="input_ctfd_domain_name"></a> [ctfd\_domain\_name](#input\_ctfd\_domain\_name) | The domain name to use | `string` | `""` | no |
 | <a name="input_extra_allowed_ip_ranges"></a> [extra\_allowed\_ip\_ranges](#input\_extra\_allowed\_ip\_ranges) | Allowed IP ranges in addition to creator IP | `list(string)` | `[]` | no |
+| <a name="input_hosted_zone_id"></a> [hosted\_zone\_id](#input\_hosted\_zone\_id) | The ID of the Route53 Hosted Zone to use | `string` | `""` | no |
 | <a name="input_region"></a> [region](#input\_region) | The AWS region to use | `string` | `"eu-west-1"` | no |
 | <a name="input_state_bucket_arn"></a> [state\_bucket\_arn](#input\_state\_bucket\_arn) | ARN of the state bucket to grant access to the s3 user | `string` | n/a | yes |
 
 ## Outputs
 
 | Name | Description |
 |------|-------------|
+| <a name="output_balancer_acm_cert_arn"></a> [balancer\_acm\_cert\_arn](#output\_balancer\_acm\_cert\_arn) | Balancer ACM certificate ARN |
+| <a name="output_balancer_domain_name"></a> [balancer\_domain\_name](#output\_balancer\_domain\_name) | Balancer domain name |
 | <a name="output_cluster_autoscaler_role"></a> [cluster\_autoscaler\_role](#output\_cluster\_autoscaler\_role) | Cluster autoscaler role |
 | <a name="output_cluster_autoscaler_role_arn"></a> [cluster\_autoscaler\_role\_arn](#output\_cluster\_autoscaler\_role\_arn) | Cluster autoscaler role arn |
 | <a name="output_cluster_endpoint"></a> [cluster\_endpoint](#output\_cluster\_endpoint) | Endpoint for EKS control plane. |
 | <a name="output_cluster_id"></a> [cluster\_id](#output\_cluster\_id) | The id of the cluster |
 | <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | The EKS cluster name |
 | <a name="output_cluster_security_group_id"></a> [cluster\_security\_group\_id](#output\_cluster\_security\_group\_id) | Security group ids attached to the cluster control plane. |
+| <a name="output_ctfd_acm_cert_arn"></a> [ctfd\_acm\_cert\_arn](#output\_ctfd\_acm\_cert\_arn) | CTFd ACM certificate ARN |
+| <a name="output_ctfd_domain_name"></a> [ctfd\_domain\_name](#output\_ctfd\_domain\_name) | CTFd domain name |
 | <a name="output_ebs_role"></a> [ebs\_role](#output\_ebs\_role) | EBS CSI driver role |
 | <a name="output_ebs_role_arn"></a> [ebs\_role\_arn](#output\_ebs\_role\_arn) | EBS CSI driver role |
+| <a name="output_external_dns_role_arn"></a> [external\_dns\_role\_arn](#output\_external\_dns\_role\_arn) | External DNS role |
 | <a name="output_irsa_role"></a> [irsa\_role](#output\_irsa\_role) | The role name used in the IRSA setup |
 | <a name="output_irsa_role_arn"></a> [irsa\_role\_arn](#output\_irsa\_role\_arn) | The role ARN used in the IRSA setup |
 | <a name="output_load_balancer_controller_role"></a> [load\_balancer\_controller\_role](#output\_load\_balancer\_controller\_role) | Load balancer controller role |
```

aws/acm.tf — new file, 34 additions

```hcl
# Uncomment for ssl using ACM
module "acm_balancer" {
  source = "terraform-aws-modules/acm/aws"

  count = var.balancer_domain_name != "" ? 1 : 0

  validation_method = "DNS"

  domain_name = var.balancer_domain_name
  zone_id     = var.hosted_zone_id

  subject_alternative_names = [
    "*.${var.balancer_domain_name}"
  ]

  wait_for_validation = true
}

module "acm_ctfd" {
  source = "terraform-aws-modules/acm/aws"

  count = var.ctfd_domain_name != "" ? 1 : 0

  validation_method = "DNS"

  domain_name = var.ctfd_domain_name
  zone_id     = var.hosted_zone_id

  subject_alternative_names = [
    "*.${var.ctfd_domain_name}"
  ]

  wait_for_validation = true
}
```
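Because both modules are gated by `count`, anything that references them elsewhere in the configuration must index the module instance. A sketch of how an output such as `balancer_acm_cert_arn` could be wired (the `acm_certificate_arn` output name is assumed from the public terraform-aws-modules/acm module and is not part of this commit):

```hcl
# Sketch only: referencing a count-gated module instance.
# `acm_certificate_arn` is an assumed output of the public
# terraform-aws-modules/acm module; verify against that module's docs.
output "balancer_acm_cert_arn" {
  value = var.balancer_domain_name != "" ? module.acm_balancer[0].acm_certificate_arn : ""
}
```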

aws/k8s-aws-alb-script-cleanup.sh — 0 additions, 3 deletions

```diff
@@ -37,7 +37,4 @@ echo "Cleanup helm chart"
 helm uninstall aws-load-balancer-controller \
   -n kube-system
 
-echo "Cleanup k8s ALB"
-kubectl delete -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
-
 kubectl delete serviceaccount -n kube-system aws-load-balancer-controller
```

aws/k8s-aws-alb-script.sh — 27 additions, 11 deletions

```diff
@@ -71,25 +71,41 @@ sleep 10
 
 EKS_CLUSTER_VERSION=$(aws eks describe-cluster --name $CLUSTERNAME --region $AWS_REGION --query cluster.version --output text)
 
-# echo "apply -f k8s/secret-challenge-vault-service.yml in 10 s"
-# sleep 10
-# kubectl apply -f k8s/secret-challenge-vault-service.yml
-echo "apply -f k8s/wrongsecrets-balancer-service.yml in 10 s"
+EXTERNAL_DNS_ROLE_ARN="$(terraform output -raw external_dns_role_arn)"
+kubectl create serviceaccount -n kube-system external-dns
+kubectl annotate serviceaccount -n kube-system --overwrite external-dns eks.amazonaws.com/role-arn=${EXTERNAL_DNS_ROLE_ARN}
+
+echo "apply -f k8s/external-dns-*.yaml in 10 s"
 sleep 10
+kubectl apply -f k8s/external-dns-clusterrole.yaml
+kubectl apply -f k8s/external-dns-clusterrolebinding.yaml
+kubectl apply -f k8s/external-dns-deployment.yaml
+
+
+echo "apply -f k8s/wrongsecrets-balancer-service.yml"
 kubectl apply -f k8s/wrongsecrets-balancer-service.yml
-# echo "apply -f k8s/secret-challenge-vault-ingress.yml in 1 s"
-# sleep 1
-# kubectl apply -f k8s/secret-challenge-vault-ingress.yml
-echo "apply -f k8s/wrongsecrets-balancer-ingress.yml in 10 s"
-sleep 10
+
+export BALANCER_DOMAIN_NAME="$(terraform output -raw balancer_domain_name)"
+
+envsubst <./k8s/wrongsecrets-balancer-ingress.yml.tpl >./k8s/wrongsecrets-balancer-ingress.yml
+
+echo "apply -f k8s/wrongsecrets-balancer-ingress.yml"
 kubectl apply -f k8s/wrongsecrets-balancer-ingress.yml
 
+echo "apply -f k8s/ctfd-service.yaml"
 kubectl apply -f k8s/ctfd-service.yaml
+
+export CTFD_DOMAIN_NAME="$(terraform output -raw ctfd_domain_name)"
+envsubst <./k8s/ctfd-ingress.yaml.tpl >./k8s/ctfd-ingress.yaml
+
+echo "apply -f k8s/ctfd-ingress.yaml"
 kubectl apply -f k8s/ctfd-ingress.yaml
 
-echo "waiting 10 s for loadBalancer"
-sleep 10
+echo "waiting 20 s for load balancer"
+sleep 20
 echo "Wrongsecrets ingress: http://$(kubectl get ingress wrongsecrets-balancer -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
+echo "Wrongsecrets host: http://$(kubectl get ingress wrongsecrets-balancer -o jsonpath='{.spec.rules[0].host}')"
 echo "ctfd ingress: http://$(kubectl get ingress -n ctfd ctfd -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
+echo "ctfd host: http://$(kubectl get ingress -n ctfd ctfd -o jsonpath='{.spec.rules[0].host}')"
 
 echo "Do not forget to cleanup afterwards! Run k8s-aws-alb-script-cleanup.sh"
```

aws/k8s/ctfd-ingress.yaml renamed to aws/k8s/ctfd-ingress.yaml.tpl — 6 additions, 4 deletions

```diff
@@ -7,12 +7,13 @@ metadata:
     alb.ingress.kubernetes.io/scheme: internet-facing
     alb.ingress.kubernetes.io/target-type: instance
     alb.ingress.kubernetes.io/success-codes: 200-399
-    acme.cert-manager.io/http01-edit-in-place: "true"
-    # cert-manager.io/issue-temporary-certificate: "true"
     #uncomment and configure below if you want to use tls, don't forget to override the cookie to a secure value!
-    # alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account>:certificate/xxxxxx
+    # alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
     # alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
-    # alb.ingress.kubernetes.io/actions.ssl-redirect: '443'
+    # alb.ingress.kubernetes.io/ssl-redirect: "443"
+    # external-dns.alpha.kubernetes.io/hostname: ${CTFD_DOMAIN_NAME}
+    # The certificate ARN can be discovered automatically by the ALB Ingress Controller based on the host value in the ingress, or you can specify it manually by uncommenting and customizing the line below
+    # alb.ingress.kubernetes.io/certificate-arn: <certificate-arn>
 spec:
   ingressClassName: alb
   rules:
@@ -25,3 +26,4 @@ spec:
               name: ctfd
               port:
                 number: 80
+      host: ${CTFD_DOMAIN_NAME} # Specify the hostname to route to the service
```
aws/k8s/external-dns-clusterrole.yaml — new file, 13 additions

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
```
aws/k8s/external-dns-clusterrolebinding.yaml — new file, 14 additions

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  labels:
    app.kubernetes.io/name: external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: kube-system
```
aws/k8s/external-dns-deployment.yaml — new file, 33 additions

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
  labels:
    app: external-dns
spec:
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      securityContext:
        fsGroup: 65534
      containers:
        - name: external-dns
          image: bitnami/external-dns:0.15.0
          resources:
            limits:
              memory: 256Mi
              cpu: 500m
          args:
            - --source=ingress
            - --provider=aws
            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
            - --txt-owner-id=external-dns
```
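With `--source=ingress` and `--provider=aws`, external-dns watches ingress resources and creates Route53 records for their `host` values (or for the `external-dns.alpha.kubernetes.io/hostname` annotation). A minimal illustrative ingress it would act on (names and domain are placeholders, not part of this commit):

```yaml
# Illustrative only: external-dns derives a Route53 record
# from the host value (or the hostname annotation) below.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com # optional override
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  number: 80
```

The `--txt-owner-id` flag makes external-dns tag the records it creates with a TXT ownership record, so it only ever modifies records it owns.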

aws/k8s/wrongsecrets-balancer-ingress.yml renamed to aws/k8s/wrongsecrets-balancer-ingress.yml.tpl — 6 additions, 4 deletions

```diff
@@ -7,12 +7,13 @@ metadata:
     alb.ingress.kubernetes.io/scheme: internet-facing
     alb.ingress.kubernetes.io/target-type: instance
     alb.ingress.kubernetes.io/success-codes: 200-399
-    acme.cert-manager.io/http01-edit-in-place: "true"
-    # cert-manager.io/issue-temporary-certificate: "true"
     #uncomment and configure below if you want to use tls, don't forget to override the cookie to a secure value!
-    # alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account>:certificate/xxxxxx
+    # alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
     # alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
-    # alb.ingress.kubernetes.io/actions.ssl-redirect: '443'
+    # alb.ingress.kubernetes.io/ssl-redirect: "443"
+    # external-dns.alpha.kubernetes.io/hostname: ${BALANCER_DOMAIN_NAME}
+    # The certificate ARN can be discovered automatically by the ALB Ingress Controller based on the host value in the ingress, or you can specify it manually by uncommenting and customizing the line below
+    # alb.ingress.kubernetes.io/certificate-arn: <certificate-arn>
 spec:
   ingressClassName: alb
   rules:
@@ -25,3 +26,4 @@ spec:
               name: wrongsecrets-balancer
               port:
                 number: 80
+      host: ${BALANCER_DOMAIN_NAME} # Specify the hostname to route to the service
```
