Need suggestions for better infra CI/CD

Alright, so for this project I use Docker containers for everything. My compose file has a Redis cache for session management, a Python container hosting a Flask API, and a frontend served as static pages from an nginx reverse proxy (which also proxies the /api route to the Flask backend and a pgadmin subdomain). Right now, on PR acceptance to main, I build all the containers in a GH Action, publish them to Docker Hub in a private repo, then on the fly convert the docker compose file into a CloudFormation template, generate our env variables, create an ECS context, and re-up.
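(For reference, a minimal sketch of the compose layout described above; service names, build paths, ports, and images here are assumptions, not the actual file.)
services:
  redis:
    image: redis:7-alpine            # session cache
  api:
    build: ./api                     # Flask API in the Python container (path is a placeholder)
    environment:
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - redis
  pgadmin:
    image: dpage/pgadmin4
  nginx:
    build: ./frontend                # static pages baked into the nginx image
    ports:
      - "80:80"
    # nginx.conf proxies /api -> api and the pgadmin subdomain -> pgadmin
    depends_on:
      - api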
46 Replies
Josh
Josh13mo ago
here is the justfile script that gh uses to publish
# Publishes to AWS
publish:
    #!/usr/bin/env bash
    set -euxo pipefail
    # install the Docker Compose CLI with the ECS integration
    curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
    docker compose -f docker-compose-aws.yml build
    docker compose -f docker-compose-aws.yml push
    docker context create ecs --from-env aws
    docker context use aws
    docker compose -f docker-compose-aws.yml up
with the GH action
push:
  if: ((github.event.pull_request.merged == true) && (contains(github.head_ref, 'dependabot/github_actions/') == false) && (contains(github.head_ref, 'skip-release/') == false)) || (github.event_name == 'workflow_dispatch')
  name: AWS Release
  runs-on: ubuntu-latest
  steps:
    - name: checkout
      uses: actions/checkout@v3
      with:
        ref: ${{ github.event.pull_request.head.ref }}

    - name: setup docker compose
      uses: KengoTODA/actions-setup-docker-compose@main
      with:
        version: '2.12.2'

    - uses: docker/login-action@v2
      with:
        username: ${{ secrets.PROD_DOCKER_USERNAME }}
        password: ${{ secrets.PROD_DOCKER_PASSWORD }}

    - name: setup just
      uses: extractions/setup-just@v1
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v2.0.0
      with:
        aws-access-key-id: ${{ secrets.PROD_AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.PROD_AWS_SECRET_ACCESS_KEY }}
        aws-region: ${{ secrets.PROD_AWS_DEFAULT_REGION }}

    - name: init env file
      run: |
        touch .env
        # generates env here

    - name: release code
      run: 'just publish'
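(One optional hardening for the workflow above: aws-actions/configure-aws-credentials also supports assuming a role via GitHub's OIDC provider, which avoids storing long-lived access keys as secrets. A sketch, assuming an IAM role already trusts the repo; PROD_AWS_ROLE_ARN is a made-up secret name.)
permissions:
  id-token: write   # required to request the OIDC token
  contents: read

    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v2.0.0
      with:
        role-to-assume: ${{ secrets.PROD_AWS_ROLE_ARN }}
        aws-region: ${{ secrets.PROD_AWS_DEFAULT_REGION }}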
Vayne
Vayne13mo ago
This is pretty dope btw
Josh
Josh13mo ago
which part? how hacky it is lmao
Vayne
Vayne13mo ago
I've def seen worse lol. But yeah gimme a sec... ok nice, I can type normally now. So the hackiest bit is prolly the publish to Docker Hub + convert to CF
Josh
Josh13mo ago
Correct. That's what I'd like to clean up
Vayne
Vayne13mo ago
If you use Fargate + ECS, it can most likely handle most of that, but you'll be switching your CF config for a task definition
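(For reference, the "task definition" piece in CloudFormation terms is an AWS::ECS::TaskDefinition resource. A minimal Fargate-flavored sketch; the family, image, sizes, and role are placeholders.)
ApiTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: flask-api                                 # placeholder name
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc                               # required for Fargate
    Cpu: "256"
    Memory: "512"
    ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn   # role that pulls images / reads secrets (placeholder)
    ContainerDefinitions:
      - Name: api
        Image: yourorg/flask-api:latest               # placeholder image
        PortMappings:
          - ContainerPort: 5000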
Josh
Josh13mo ago
Specifically the CF part. Pushing to the registry I'm not too concerned about, but I'd still love to clean it up
Vayne
Vayne13mo ago
that ECS can then use to spawn your containers/clusters. Also nice parallel w/ last chat: CF is basically AWS' equivalent to TF, and ECS is the container orchestrator piece. Same w/ EKS if you ever wanna hate your life
Josh
Josh13mo ago
So would I have to manually define this separately from my compose file? Lol
Vayne
Vayne13mo ago
Yep, we had an internal lib that would just update it after every change. Then on push the GH action would just read it, push it to ECR, then deploy a blue/green ECS cluster based on it
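(That pattern can be approximated without an internal lib using AWS's published actions; a rough sketch, where the task definition file, container name, image tag step, service, and cluster are all placeholders.)
    - name: Login to ECR
      uses: aws-actions/amazon-ecr-login@v1

    - name: Render task definition with the new image tag
      id: render
      uses: aws-actions/amazon-ecs-render-task-definition@v1
      with:
        task-definition: taskdef.json              # checked-in task definition (placeholder)
        container-name: api
        image: ${{ steps.meta.outputs.image }}     # placeholder for the freshly pushed tag

    - name: Deploy to ECS
      uses: aws-actions/amazon-ecs-deploy-task-definition@v1
      with:
        task-definition: ${{ steps.render.outputs.task-definition }}
        service: my-service                        # placeholder
        cluster: my-cluster                        # placeholder
        wait-for-service-stability: true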
Josh
Josh13mo ago
Then with that, what would the new process of triggering an update be?
Vayne
Vayne13mo ago
orz. But yeah, it's a pain in the balls unless u have a bit of tooling around it. Anyway, what are the current pain points? We can see whether or not we can optimize those
Josh
Josh13mo ago
Mainly downtime, and how long the whole thing takes. The GH action takes ~4 min, which I guess isn't terrible. Time that prod is down is probably ~20 min or more. But moving everything over to ECS and a CF template may solve part of that, I'm suspecting
Vayne
Vayne13mo ago
Cool thing with GH Actions is that you can check which steps take the longest and use that as a hint to optimize
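(On the build-time side, one common win is caching image layers between runs, e.g. with docker/build-push-action and the GitHub Actions cache backend. A sketch for one service; the context path and tag are assumptions.)
    - name: Set up Buildx
      uses: docker/setup-buildx-action@v2

    - name: Build and push the API image
      uses: docker/build-push-action@v4
      with:
        context: ./api                       # placeholder build context
        push: true
        tags: yourorg/flask-api:latest       # placeholder tag
        cache-from: type=gha
        cache-to: type=gha,mode=max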
Josh
Josh13mo ago
Alright, so far it's going well. I have a CF template in progress. My first unclear question is where my env variables should come from. Should we be setting them as parameters in the AWS CFT, passing them via some sort of AWS CLI command when we re-up, or some other way that I'm unaware of? Holy hell you're fast
Vayne
Vayne13mo ago
ECS/Fargate can manage secrets. Hmmm, the way I did it was to add them to Secrets Manager and refer to them thru IaC/the CLI
Josh
Josh13mo ago
Okay, do you have an example of how that would look in the CF file / am I on the right track with this?
"ContainerDefinitions": [
{
"Name": "pgadmin",
"Essential": "false",
"Image": "dpage/pgadmin4",
"secrets": [
{
"name": "PGADMIN_DEFAULT_EMAIL",
"valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:secret-id"
}
]
},
]
"ContainerDefinitions": [
{
"Name": "pgadmin",
"Essential": "false",
"Image": "dpage/pgadmin4",
"secrets": [
{
"name": "PGADMIN_DEFAULT_EMAIL",
"valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:secret-id"
}
]
},
]
That way I don't have to reference them in the CLI and can keep them stored in AWS
Vayne
Vayne13mo ago
👀 looks alright to me
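(One thing to pair with that snippet: the task execution role has to be allowed to read the secret, or the container will fail to start. A sketch of that piece in CloudFormation YAML; the role name and secret ARN are placeholders.)
TaskExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal: { Service: ecs-tasks.amazonaws.com }
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
    Policies:
      - PolicyName: read-app-secrets
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: secretsmanager:GetSecretValue
              Resource: arn:aws:secretsmanager:region:aws_account_id:secret:secret-id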
Josh
Josh13mo ago
Huge, making big progress
Vayne
Vayne13mo ago
Hypers
Josh
Josh13mo ago
Alrighty, new question: I need to reference a secret in the healthcheck for another container
Vayne
Vayne13mo ago
oof interesting
Josh
Josh13mo ago
oh wait nvm we are good, they are in the same container
Vayne
Vayne13mo ago
Could prolly have an IAM policy in that other container + get the secret. Nice, and yeah, in ECS most of the time you could have sidecar containers
Josh
Josh13mo ago
so i think i can just have it pull it via shell
Vayne
Vayne13mo ago
they might get injected by the task def/ecs
Vayne
Vayne13mo ago
Using Secrets Manager - Amazon Elastic Container Service
When you inject a secret as an environment variable, you can specify the full contents of a secret, a specific JSON key within a secret, or a specific version of a secret to inject. This helps you control the sensitive data exposed to your container. For more information about secret versioning, see
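(Right — secrets injected via the container definition show up as plain environment variables inside the container, so a healthcheck command can read them like any other env var. A sketch in CloudFormation YAML form; the container, image, and secret are placeholders.)
- Name: db
  Image: postgres:15
  Secrets:
    - Name: POSTGRES_PASSWORD
      ValueFrom: arn:aws:secretsmanager:region:aws_account_id:secret:secret-id
  HealthCheck:
    Command:
      - CMD-SHELL
      - pg_isready -U postgres || exit 1   # runs inside the container, injected env vars are available
    Interval: 10
    Retries: 5
    Timeout: 5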
Josh
Josh13mo ago
Okay yeah, that's what I was looking at. I think I'm doing it right then
Vayne
Vayne13mo ago
Hypers great find then, and yeah usually the docs are pretty solid
Josh
Josh13mo ago
Oh gosh I just realized I'm gonna have to do a full database migration
Vayne
Vayne13mo ago
👁️
Josh
Josh13mo ago
From current prod to this new one. Welp, I guess I needed to figure it out sooner or later
Vayne
Vayne13mo ago
Ah, yeah, this might be slightly annoying, but if it's in ECS, you might be able to blue/green it or just manually tie containers to an RDS cluster
Josh
Josh13mo ago
I actually don't think it will be that bad, hell, I might be able to do it from the pgadmin portal I have on prod currently. I have prod tied to a persistent volume right now, but the way I'm doing it is very black-boxy
Vayne
Vayne13mo ago
👁️
Josh
Josh13mo ago
Stay tuned lol, it's gonna take me a bit to finish translating my docker compose over to the CFT, although so far it's pretty 1:1 on parameters
Vayne
Vayne13mo ago
Oh nice, yeah, wasn't sure how that one was gonna go, cause we use Pulumi KEKW
Josh
Josh13mo ago
{
  "Name": "pgadmin",
  "Essential": false,
  "Image": "dpage/pgadmin4",
  "Environment": [
    { "Name": "PGADMIN_DEFAULT_EMAIL", "Value": "admin@biblish.com" }
  ],
  "Secrets": [
    {
      "Name": "PGADMIN_DEFAULT_PASSWORD",
      "ValueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:secret-id"
    }
  ],
  "DependsOn": [
    {
      "Condition": "HEALTHY",
      "ContainerName": "db"
    }
  ]
}
vs docker compose
pgadmin:
  image: dpage/pgadmin4
  environment:
    PGADMIN_DEFAULT_EMAIL: admin@biblish.com
    PGADMIN_DEFAULT_PASSWORD: ${POSTGRES_PASSWORD}
  depends_on:
    db:
      condition: service_healthy
Vayne
Vayne13mo ago
docker compose 🤝
Josh
Josh13mo ago
Indeed. Alrighty, next question: where should I handle SSL, inside the ALB or my docker image w/ nginx? I'm guessing the ALB, in which case I'll need to do some sort of redirect at the ALB level from 443 external to the docker image's :80. Well, I think I figured it out. Gonna try and deploy it tomorrow with the client and see how it goes.
Vayne
Vayne13mo ago
Up to you, I actually like doing it at the ALB level. But you can also add it to your docker image as an nginx proxy
Josh
Josh13mo ago
this is what im trying first
Vayne
Vayne13mo ago
Iirc you can set the alb to an ecs target group
Vayne
Vayne13mo ago
Creating an Application Load Balancer - Amazon ECS
This section walks you through the process of creating an Application Load Balancer in the AWS Management Console. For information about how to create an Application Load Balancer using the AWS CLI, see Tutorial: Create an Application Load Balancer using the AWS CLI
Vayne
Vayne13mo ago
And that'll come with an ALB healthcheck n shit, but would basically allow you to move to blue/green deployments by just creating 2 target groups and shifting traffic from the LB
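(For the TLS-at-the-ALB approach discussed above, the CloudFormation pieces look roughly like this; the cert ARN, VPC, and load balancer references are placeholder parameters. Blue/green would add a second target group the listener can shift traffic to.)
ApiTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref VpcId                # placeholder parameter
    TargetType: ip                   # required for Fargate/awsvpc tasks
    Protocol: HTTP
    Port: 80                         # nginx container listens on plain :80
    HealthCheckPath: /

HttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref CertificateArn   # ACM cert, placeholder parameter
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref ApiTargetGroup

HttpRedirectListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: redirect
        RedirectConfig:
          Protocol: HTTPS
          Port: "443"
          StatusCode: HTTP_301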