CD with AWS ECS and CodePipeline Using Terraform
webbuilder November 14, 2023 0 Comments


Achieving a smooth, reliable deployment process is crucial in modern software delivery. This article explores the integration of Amazon Elastic Container Service (ECS) with AWS CodePipeline using Terraform. By automating the Continuous Deployment (CD) pipeline, developers can streamline the release of containerized applications, ensuring efficiency and reliability. So, let’s begin!

STEP 1: Establishing IAM Role and CodeCommit Credentials

Create ECS Task Service Role:

Establish a dedicated IAM role for ECS tasks, enabling them to execute AWS service requests on your behalf.


Attach CodeCommit Access Policy:

Attach the AWSCodeCommitPowerUser managed policy to your IAM user, granting the permissions required to work with AWS CodeCommit.


Generate Git Credentials:

Securely interact with AWS CodeCommit by generating HTTPS Git credentials. These credentials enable essential actions like cloning, pushing, and pulling from the designated CodeCommit Repository, ensuring a secure and streamlined code version control process.

STEP 2: Terraform Scripts for Infrastructure Deployment

Leveraging Terraform, we orchestrate the construction of the foundational elements required for our ECS and CodePipeline setup.

Providers Configuration

Ensure Terraform uses the correct AWS provider version and define the AWS provider configuration with the desired profile and region.



terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.51"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

VPC Setup

Define the VPC, Subnets, and Internet Gateway resources using Terraform to establish a secure and scalable network infrastructure.



resource "aws_vpc" "ecs-vpc" {
  cidr_block = var.cidr

  tags = {
    Name = "ecs-vpc"
  }
}

resource "aws_subnet" "pub-subnets" {
  count                   = length(var.azs)
  vpc_id                  = aws_vpc.ecs-vpc.id
  availability_zone       = var.azs[count.index]
  cidr_block              = var.subnets-ip[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "pub-subnets"
  }
}

resource "aws_internet_gateway" "i-gateway" {
  vpc_id = aws_vpc.ecs-vpc.id

  tags = {
    Name = "ecs-igtw"
  }
}


Variables Definition

Specify variables essential for VPC configuration, such as CIDR blocks, availability zones, and subnet IP ranges.



variable "cidr" {
  type    = string
  default = ""
}

variable "azs" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b"]
}

variable "subnets-ip" {
  type    = list(string)
  default = ["", ""]
}




IAM Roles & Policies

Define the IAM roles and policies necessary for CodeBuild, granting access to ECR and S3.



resource "aws_iam_role" "codebuild-role" {
  name               = "codebuild-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "codebuild.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "codebuild-policy" {
  role = aws_iam_role.codebuild-role.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action   = ["codecommit:GitPull"]
        Effect   = "Allow"
        Resource = "*"
      },
      # ... (other permissions for ECR, S3, and logging)
    ]
  })
}





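The elided portion of the CodeBuild policy is not shown in the article. As a hedged sketch, statements of the following shape are commonly added alongside `codecommit:GitPull` so CodeBuild can push images to ECR, exchange S3 pipeline artifacts, and write CloudWatch logs (tighten the `Resource` values for production):

```hcl
# Illustrative statements for the elided part of the CodeBuild policy.
{
  Action = [
    "ecr:GetAuthorizationToken",
    "ecr:BatchCheckLayerAvailability",
    "ecr:InitiateLayerUpload",
    "ecr:UploadLayerPart",
    "ecr:CompleteLayerUpload",
    "ecr:PutImage"
  ]
  Effect   = "Allow"
  Resource = "*"
},
{
  Action   = ["s3:GetObject", "s3:GetObjectVersion", "s3:PutObject"]
  Effect   = "Allow"
  Resource = "*"
},
{
  Action   = ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"]
  Effect   = "Allow"
  Resource = "*"
}
```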

Route Tables

Configure a single route table for both public subnets.



resource "aws_route_table" "pub-table" {
  vpc_id = aws_vpc.ecs-vpc.id
}



# ... (route, route association configurations)
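The elided route and association resources typically send all outbound traffic through the internet gateway and bind each public subnet to the table; a sketch using the resources defined earlier:

```hcl
resource "aws_route" "pub-route" {
  route_table_id         = aws_route_table.pub-table.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.i-gateway.id
}

# One association per public subnet
resource "aws_route_table_association" "rt-assoc" {
  count          = length(var.azs)
  subnet_id      = aws_subnet.pub-subnets[count.index].id
  route_table_id = aws_route_table.pub-table.id
}
```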



Security Groups

Define security groups for ECS Service and Application Load Balancer.



resource "aws_security_group" "sg1" {
  # ... (ECS Service security group configuration)
}

resource "aws_security_group" "sg2" {
  # ... (ALB security group configuration)
}



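The elided security group bodies commonly take the following shape: the ALB group (sg2) accepts HTTP from anywhere, while the ECS group (sg1) accepts port 5000 traffic only from the ALB. The ports and CIDRs here are assumptions to adjust for your setup:

```hcl
resource "aws_security_group" "sg2" {
  name   = "alb-sg"
  vpc_id = aws_vpc.ecs-vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "sg1" {
  name   = "ecs-sg"
  vpc_id = aws_vpc.ecs-vpc.id

  # Only the ALB may reach the tasks on the application port
  ingress {
    from_port       = 5000
    to_port         = 5000
    protocol        = "tcp"
    security_groups = [aws_security_group.sg2.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```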

Application Load Balancer (ALB)

Create an ALB with specified listeners and target groups.



resource "aws_lb" "app-lb" {
  # ... (ALB configuration)
}

resource "aws_lb_target_group" "tg-group" {
  # ... (Target group configuration)
}

resource "aws_lb_listener" "lb-listener" {
  # ... (Listener configuration)
}



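A hedged sketch of the elided ALB configuration, assuming HTTP on port 80 at the load balancer and IP-type targets on port 5000 (IP targets are required for tasks in awsvpc networking mode):

```hcl
resource "aws_lb" "app-lb" {
  name               = "app-lb"
  load_balancer_type = "application"
  security_groups    = [aws_security_group.sg2.id]
  subnets            = aws_subnet.pub-subnets[*].id
}

resource "aws_lb_target_group" "tg-group" {
  name        = "tg-group"
  port        = 5000
  protocol    = "HTTP"
  vpc_id      = aws_vpc.ecs-vpc.id
  target_type = "ip" # awsvpc tasks register by IP, not instance
}

resource "aws_lb_listener" "lb-listener" {
  load_balancer_arn = aws_lb.app-lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.tg-group.arn
  }
}
```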

ECS & ECR Configuration

Set up ECS Cluster, ECR Repository, Task Definition, and ECS Service using Terraform.



resource "aws_ecr_repository" "ecr-repo" {
  # ... (ECR repository configuration)
}

resource "aws_ecs_cluster" "ecs-cluster" {
  # ... (ECS cluster configuration)
}

resource "aws_ecs_task_definition" "task" {
  # ... (ECS task definition configuration)
}

resource "aws_ecs_service" "svc" {
  # ... (ECS service configuration)
}



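The elided ECS and ECR blocks might look like the following sketch. It assumes a Fargate launch type, the ecsTaskExecutionRole data source defined below, and a container named `httpserver` listening on port 5000; adjust sizes and counts to taste:

```hcl
resource "aws_ecr_repository" "ecr-repo" {
  name = "ecr-repo"
}

resource "aws_ecs_cluster" "ecs-cluster" {
  name = "ecs-cluster"
}

resource "aws_ecs_task_definition" "task" {
  family                   = "httpserver"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = data.aws_iam_role.ecs-task.arn

  container_definitions = jsonencode([{
    name         = "httpserver"
    image        = "${var.uri_repo}:latest"
    essential    = true
    portMappings = [{ containerPort = 5000 }]
  }])
}

resource "aws_ecs_service" "svc" {
  name            = "httpserver-svc"
  cluster         = aws_ecs_cluster.ecs-cluster.id
  task_definition = aws_ecs_task_definition.task.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.pub-subnets[*].id
    security_groups  = [aws_security_group.sg1.id]
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.tg-group.arn
    container_name   = "httpserver"
    container_port   = 5000
  }
}
```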

CI/CD Pipeline Configuration

Define resources and stages for the CodePipeline, integrating CodeCommit, CodeBuild, and ECS deployment.



resource "aws_codecommit_repository" "repo" {
  # ... (CodeCommit repository configuration)
}

resource "aws_codebuild_project" "repo-project" {
  # ... (CodeBuild project configuration)
}

resource "aws_codepipeline" "pipeline" {
  # ... (CodePipeline configuration with source, build, and deploy stages)
}



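A sketch of the elided pipeline wiring. The artifact bucket name is a placeholder; the deploy stage uses the ECS provider, which consumes the imagedefinitions.json file emitted by the build:

```hcl
resource "aws_codepipeline" "pipeline" {
  name     = "dev-pipeline"
  role_arn = data.aws_iam_role.pipeline_role.arn

  artifact_store {
    location = "my-artifact-bucket" # placeholder S3 bucket
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "Source"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeCommit"
      version          = "1"
      output_artifacts = ["source_out"]
      configuration = {
        RepositoryName = var.repo_name
        BranchName     = var.branch_name
      }
    }
  }

  stage {
    name = "Build"
    action {
      name             = "Build"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      version          = "1"
      input_artifacts  = ["source_out"]
      output_artifacts = ["build_out"]
      configuration = {
        ProjectName = var.build_project
      }
    }
  }

  stage {
    name = "Deploy"
    action {
      name            = "Deploy"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "ECS"
      version         = "1"
      input_artifacts = ["build_out"]
      configuration = {
        ClusterName = aws_ecs_cluster.ecs-cluster.name
        ServiceName = aws_ecs_service.svc.name
        FileName    = "imagedefinitions.json"
      }
    }
  }
}
```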

Data and Outputs

Retrieve and output essential data, such as IAM role information and ALB DNS.



data "aws_iam_role" "pipeline_role" {
  name = "codepipeline-role"
}

data "aws_iam_role" "ecs-task" {
  name = "ecsTaskExecutionRole"
}

output "repo_url" {
  value = aws_codecommit_repository.repo.clone_url_http
}

output "alb_dns" {
  value = aws_lb.app-lb.dns_name
}




Extra Variables

Specify additional variables like repository name, branch name, build project, and URI repo.



variable "repo_name" {
  type    = string
  default = "dev-repo"
}

variable "branch_name" {
  type    = string
  default = "master"
}

variable "build_project" {
  type    = string
  default = "dev-build-repo"
}

variable "uri_repo" {
  type = string
  # (uri_repo is supplied via a TF_VAR environment variable)
}




By organizing your infrastructure as code with Terraform, these scripts automate the deployment and configuration of the essential AWS resources for your ECS and CodePipeline setup.


STEP 3: Implementing a Golang HTTP Server for ECS Tasks

In this step, we create a straightforward Golang HTTP server designed to retrieve the private IP addresses of ECS tasks. The server listens on port 5000, facilitating communication within the ECS cluster.



package main

import (
    "fmt"
    "log"
    "net"
    "net/http"
)

func main() {
    log.Print("HTTPserver: Enter main()")
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":5000", nil))
}

// handler function to print request headers/params
func handler(w http.ResponseWriter, r *http.Request) {
    log.Printf("Request from address: %q\n", r.RemoteAddr)
    fmt.Fprintf(w, "%s %s %s\n", r.Method, r.URL, r.Proto)

    fmt.Fprintf(w, "Host = %q\n", r.Host)
    fmt.Fprintf(w, "RemoteAddr = %q\n", r.RemoteAddr)

    // Parse and print form data
    if err := r.ParseForm(); err != nil {
        log.Print(err)
    }
    for k, v := range r.Form {
        fmt.Fprintf(w, "Form[%q] = %q\n", k, v)
    }

    // Retrieve and print local IP address
    fmt.Fprintf(w, "\n===> Local IP: %q\n\n", GetOutboundIP())
}

// GetOutboundIP retrieves the outbound IP address using a UDP connection.
// Any routable external address works here; no packets are actually sent.
func GetOutboundIP() net.IP {
    conn, err := net.Dial("udp", "8.8.8.8:80")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    localAddr := conn.LocalAddr().(*net.UDPAddr)
    return localAddr.IP
}





– The `main` function initializes an HTTP server on port 5000.

– The `handler` function processes incoming requests, printing request details and form data.

– The `GetOutboundIP` function establishes a UDP connection toward a known external address to determine the local IP address; because UDP is connectionless, no packets are actually sent. This trick retrieves the private IP address of the ECS task within the cluster.


This Golang code serves as a foundation for obtaining essential information about ECS tasks, facilitating communication and coordination within the ECS cluster.


STEP 4: Dockerizing the Golang HTTP Server

In this step, we containerize the Golang HTTP server using a Dockerfile. The resulting Docker image will encapsulate the HTTP server application, making it portable and deployable across different environments.



# Build Stage
FROM golang:alpine AS builder

# Set environment variables for a static Linux build
ENV CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64

# Set working directory
WORKDIR /build

# Copy the Golang source code
COPY ./HTTPserver.go .

# Build the Golang application
RUN go build -o HTTPserver ./HTTPserver.go

# Create a distribution directory
WORKDIR /dist

# Copy the compiled application
RUN cp /build/HTTPserver .

# Final Stage
FROM scratch

# Copy the built application from the builder stage
COPY --from=builder /dist/HTTPserver /

# Expose the application port
EXPOSE 5000

# Define the entry point for the application
ENTRYPOINT ["/HTTPserver"]

– The Dockerfile employs a multi-stage build. The first stage (`builder`) builds the Golang HTTP server binary.

– The second stage creates a minimalistic image (`scratch`) that only includes the compiled HTTP server binary.

– The `EXPOSE 5000` instruction informs Docker that the application will listen on port 5000.

– The `ENTRYPOINT` specifies the command to run when the container starts, launching the Golang HTTP server.


This Dockerfile serves as a blueprint for packaging the Golang HTTP server into a Docker image, streamlining deployment and ensuring consistency across different environments.

STEP 5: Create the terraform.tfvars File

Create a file named terraform.tfvars in the same directory as your Terraform scripts.


Open terraform.tfvars and set the values for your variables. Here’s an example based on the provided Terraform script:


# terraform.tfvars


cidr = ""

azs = ["us-east-1a", "us-east-1b"]

subnets-ip = ["", ""]


repo_name = "dev-repo"

branch_name = "master"

build_project = "dev-build-repo"

uri_repo = "<YOUR_URI_REPO_VALUE>"


Replace <YOUR_URI_REPO_VALUE> with the actual value for your repository URI. Terraform will use this file to set these variables during execution.


When you run your Terraform commands (e.g., terraform apply), Terraform will automatically read the values from terraform.tfvars.
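Alternatively, any variable can be supplied through a TF_VAR_-prefixed environment variable, which is how the author keeps uri_repo out of the scripts. A sketch with a placeholder value (the account ID and region are illustrative):

```shell
# Terraform reads TF_VAR_<name> as the value of variable "<name>".
# Placeholder ECR URI; substitute your own account ID, region, and repo.
export TF_VAR_uri_repo="123456789012.dkr.ecr.us-east-1.amazonaws.com/ecr-repo"
```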

STEP 6: Infrastructure Deployment Process

Execute the following commands to initialize, validate, plan, and apply the Terraform configuration for creating the infrastructure:



# Initialize Terraform in the project directory

terraform init


# Validate the Terraform configuration

terraform validate


# Generate and review the Terraform execution plan

terraform plan


# Apply the Terraform configuration and automatically approve changes

terraform apply -auto-approve



Upon completion of the infrastructure creation, retrieve the outputs using `terraform output` to obtain essential information about the deployed resources.

STEP 7: Uploading Files to the CodeCommit Repository

Follow these steps to upload the Dockerfile, buildspec.yml, and Golang code to the CodeCommit repository:


1. Clone the Repository:


   git clone <CodeCommit_Repository_URL>

   cd <Repository_Name>



2. Copy Files to the Repository Folder:

Copy the buildspec.yml, Dockerfile, and Golang code to the cloned repository folder.



   cp /path/to/buildspec.yml .

   cp /path/to/Dockerfile .

   cp -r /path/to/Golang_Code/* .



3. Commit the Changes:


   git add .

   git commit -m "Add Dockerfile, buildspec.yml, and Golang code"



4. Push Changes to CodeCommit Repository:


   git push origin master



Ensure that you replace `<CodeCommit_Repository_URL>` with the actual URL of your CodeCommit repository. This sequence of commands adds your files to the repository, commits the changes, and pushes them to the CodeCommit repository for version control.
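The buildspec.yml itself is not shown in the article. A minimal sketch that logs in to ECR, builds and pushes the image, and emits the imagedefinitions.json file the ECS deploy stage expects; REPOSITORY_URI (assumed to be set as a CodeBuild environment variable), the us-east-1 region, and the container name httpserver are assumptions to match to your setup:

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $REPOSITORY_URI
  build:
    commands:
      - docker build -t $REPOSITORY_URI:latest .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      # Name must match the container name in the ECS task definition
      - printf '[{"name":"httpserver","imageUri":"%s"}]' $REPOSITORY_URI:latest > imagedefinitions.json

artifacts:
  files:
    - imagedefinitions.json
```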

STEP 8: Verify the Pipeline



After the completion of the “Build” stage, inspect the Docker image within the ECR repository to ensure successful image creation and validation.


STEP 9: Monitor the ECS Service


Once the “Deploy” stage concludes, monitor the ECS Service to confirm the successful deployment. Check the ECS Tasks to ensure they are running as expected.


STEP 10: Validate the Target Group

Following the completion of the deployment, inspect the Target Group to verify that the ECS Service has been correctly registered and is actively serving traffic.

STEP 11: Validate Application Load Balancer Operations

After the deployment stages are complete, confirm the proper functioning of the Application Load Balancer (ALB). Verify that the ALB is efficiently distributing incoming traffic to the ECS tasks, ensuring seamless operation.



In this guide, we’ve successfully demystified the integration of AWS services, Terraform, and CodePipeline for a streamlined containerized workflow with Amazon ECS. By establishing a robust infrastructure and implementing CI/CD practices, you’ve gained valuable insights into optimizing development and deployment processes.


Improve your cloud experience with Triotech Systems. Our tailored solutions and expertise ensure efficient AWS infrastructure management. Contact us today to discover how Triotech Systems can propel your development cycles to new heights.


Amazon Elastic Container Service (ECS) simplifies the deployment and management of containerized applications on AWS. It allows you to run Docker containers without managing the underlying infrastructure, making it an ideal choice for scalable and flexible container orchestration.

Terraform is an Infrastructure as Code (IaC) tool that enables the automated provisioning of cloud resources. With Terraform, users can define and manage infrastructure in a declarative configuration file, promoting consistency and reliability in AWS deployments.

AWS CodePipeline automates the end-to-end release process, from source code changes to deployment. Integrating it with AWS ECS streamlines continuous integration and continuous deployment (CI/CD) workflows, ensuring efficient and reliable application delivery.

A Dockerfile is a script used to create a Docker image. In the context of ECS, it defines the configuration for building a container image that runs your application. This file is crucial for creating consistent and reproducible container environments.

After deploying tasks on ECS, AWS provides tools like Amazon CloudWatch for monitoring and AWS CloudTrail for auditing. By leveraging these services, you can track performance metrics, set up alarms, and troubleshoot issues effectively in your ECS environment.
