webbuilder | September 11, 2023

Using Terraform to Build a CI/CD Pipeline for Amazon ECS/Fargate

Knowing how to build a CI/CD pipeline is an essential skill in modern software development, and it is hard to overstate the importance of efficient, automated CI/CD pipelines. In short, CI ensures that code changes are frequently integrated and tested, while CD automates the deployment of those changes to production.

This article will show you how to use Terraform to build a CI/CD pipeline for Amazon Elastic Container Service (ECS) and Fargate. Terraform is an open-source infrastructure as code (IaC) tool that can automate the provisioning and management of AWS resources.

We will use AWS CodePipeline to automate the execution of these stages. CodePipeline is a managed continuous delivery service that can be used to automate the deployment of applications to AWS. By the end of this article, you will have a working CI/CD pipeline for ECS/Fargate that you can use to deploy your applications to production. Let’s get started!


Steps To Build A CI/CD Pipeline with Terraform

Defining your infrastructure as code is crucial before building your CI/CD pipeline with Terraform. This involves describing the AWS resources required for your CI/CD pipeline, including CodePipeline, ECS, Fargate clusters, and supporting components. Terraform uses a declarative syntax, allowing you to specify the desired state of your infrastructure.

Step 1: Create an IAM Role for CodePipeline and ECS

In this initial step, we will establish the necessary IAM roles and permissions to enable seamless communication between AWS services and your CI/CD pipeline.

Create an IAM Role for CodePipeline: codepipeline-role

Begin by creating an IAM role specifically for CodePipeline. This role will grant the necessary permissions for CodePipeline to interact with other AWS services during the pipeline execution.

resource "aws_iam_role" "codepipeline-role" {
  name = "my-codepipeline-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = "sts:AssumeRole",
      Principal = {
        Service = "codepipeline.amazonaws.com"
      },
      Effect = "Allow",
      Sid    = ""
    }]
  })
}


# Attach policies to the `codepipeline-role` as needed

Create an IAM Policy for CodePipeline: codepipeline-policy

Next, define an IAM policy (codepipeline-policy) and attach it to the codepipeline-role. This policy should grant the permissions required for CodePipeline to perform its actions.

resource "aws_iam_policy" "codepipeline-policy" {
  name        = "my-codepipeline-policy"
  description = "Policy for CodePipeline"

  # NOTE: the action list below is a typical example for a CodeCommit ->
  # CodeBuild -> ECS pipeline; scope it down to your pipeline's actual needs
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "codecommit:GetBranch",
        "codecommit:GetCommit",
        "codecommit:UploadArchive",
        "codecommit:GetUploadArchiveStatus",
        "codecommit:CancelUploadArchive",
        "codebuild:BatchGetBuilds",
        "codebuild:StartBuild",
        "ecs:*",
        "ecr:GetAuthorizationToken",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketVersioning",
        "s3:PutObject",
        "iam:PassRole"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}


# Attach the `codepipeline-policy` to the `codepipeline-role`

resource "aws_iam_role_policy_attachment" "codepipeline-attach" {
  policy_arn = aws_iam_policy.codepipeline-policy.arn
  role       = aws_iam_role.codepipeline-role.name
}

Generate AWS CodeCommit Credentials

Generate HTTPS Git credentials that will enable your CI/CD pipeline to clone, push, and pull from your AWS CodeCommit repository. These credentials will be used in later steps of the pipeline.

# Generate HTTPS Git credentials for CodeCommit for an IAM user
# (the user name below is a placeholder):
aws iam create-service-specific-credential \
  --user-name <IAM_USER_NAME> \
  --service-name codecommit.amazonaws.com

With these IAM roles and credentials in place, your CI/CD pipeline will have the necessary permissions to interact with AWS services securely and efficiently. This foundational setup ensures smooth execution throughout the pipeline’s lifecycle.
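Step 1's title also covers ECS: Fargate tasks need a task execution role so that ECS can pull images from ECR and write container logs. The article does not show this role explicitly, so the following is a minimal sketch; the role name is illustrative, while the AWS-managed policy ARN is the standard one for this purpose.

```hcl
# Hypothetical task execution role for ECS/Fargate tasks (name is illustrative)
resource "aws_iam_role" "ecs-task-execution-role" {
  name = "ecs-task-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = "sts:AssumeRole",
      Effect = "Allow",
      Principal = {
        Service = "ecs-tasks.amazonaws.com"
      }
    }]
  })
}

# The AWS-managed policy grants ECR image pulls and CloudWatch Logs writes
resource "aws_iam_role_policy_attachment" "ecs-task-execution-attach" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
  role       = aws_iam_role.ecs-task-execution-role.name
}
```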

Step 2: Terraform Scripts to Build the Infrastructure

In this step, we’ll dive into the Terraform scripts that lay the foundation for your CI/CD pipeline’s infrastructure. These scripts define the AWS resources required for your project, including the Virtual Private Cloud (VPC), IAM roles and policies, route tables, security groups, Application Load Balancer (ALB), Amazon Elastic Container Registry (ECR), Amazon Elastic Container Service (ECS), and more.

Configure Terraform Providers

Begin by specifying the Terraform providers required for your infrastructure. This section tells Terraform which providers to use and their versions.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.51"
    }
  }
}

provider "aws" {
  profile = "default"   # Specify your AWS profile
  region  = "us-east-1" # Specify your desired AWS region
}

Define the VPC

Define the Virtual Private Cloud (VPC) that will host your ECS tasks. This section creates the VPC, public subnets, and an Internet Gateway.

resource "aws_vpc" "ecs-vpc" {
  cidr_block = var.cidr

  tags = {
    Name = "ecs-vpc"
  }
}

# Define public subnets
resource "aws_subnet" "pub-subnets" {
  count                   = length(var.azs)
  vpc_id                  = aws_vpc.ecs-vpc.id
  availability_zone       = var.azs[count.index]
  cidr_block              = var.subnets-ip[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "pub-subnets"
  }
}

# Create an Internet Gateway
resource "aws_internet_gateway" "i-gateway" {
  vpc_id = aws_vpc.ecs-vpc.id

  tags = {
    Name = "ecs-igtw"
  }
}


Define Variables

Specify the variables needed for your VPC configuration, such as the VPC CIDR block, availability zones, and subnet CIDR blocks.

variable "cidr" {
  type    = string
  default = "10.0.0.0/16" # example value -- adjust to your network
}

variable "azs" {
  type = list(string)
  default = [
    # example availability zones -- adjust to your region
    "us-east-1a",
    "us-east-1b"
  ]
}

variable "subnets-ip" {
  type = list(string)
  default = [
    # example subnet CIDR blocks, one per availability zone
    "10.0.1.0/24",
    "10.0.2.0/24"
  ]
}


Define IAM Roles and Policies

Create IAM roles and policies required for your CI/CD pipeline, ECS tasks, and other AWS resources.

resource "aws_iam_role" "codebuild-role" {
  name = "codebuild-role"

  # Assume role policy for CodeBuild
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRole",
        Effect = "Allow",
        Principal = {
          Service = "codebuild.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "codebuild-policy" {
  role = aws_iam_role.codebuild-role.name

  # IAM policies for CodeBuild. The original action lists were truncated,
  # so the ECR, CloudWatch Logs, and S3 sets below are typical examples --
  # adjust them to your pipeline's needs.
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action   = ["codecommit:GitPull"]
        Effect   = "Allow"
        Resource = "*"
      },
      {
        Action = [
          "ecr:GetAuthorizationToken",
          "ecr:BatchCheckLayerAvailability",
          "ecr:InitiateLayerUpload",
          "ecr:UploadLayerPart",
          "ecr:CompleteLayerUpload",
          "ecr:PutImage"
        ]
        Effect   = "Allow"
        Resource = "*"
      },
      {
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Effect   = "Allow"
        Resource = "*"
      },
      {
        Action = [
          "s3:GetObject",
          "s3:GetObjectVersion",
          "s3:PutObject"
        ]
        Effect   = "Allow"
        Resource = "*"
      }
    ]
  })
}
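The pipeline also needs a CodeBuild project that runs the buildspec.yml from the repository, but that resource is not shown in this article. The following is a hedged sketch; the project name, build image, and compute size are assumptions.

```hcl
# Hypothetical CodeBuild project -- name, image, and compute size are assumptions
resource "aws_codebuild_project" "golang-build" {
  name         = "golang-build"
  service_role = aws_iam_role.codebuild-role.arn

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type    = "BUILD_GENERAL1_SMALL"
    image           = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
    type            = "LINUX_CONTAINER"
    privileged_mode = true # required to run Docker builds inside CodeBuild
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = "buildspec.yml"
  }
}
```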





Define Route Tables

Create a route table for the public subnets.

resource "aws_route_table" "pub-table" {
  vpc_id = aws_vpc.ecs-vpc.id
}

resource "aws_route" "pub-route" {
  route_table_id         = aws_route_table.pub-table.id
  destination_cidr_block = "0.0.0.0/0" # default route to the Internet Gateway
  gateway_id             = aws_internet_gateway.i-gateway.id
}

resource "aws_route_table_association" "as-pub" {
  count          = length(var.azs)
  route_table_id = aws_route_table.pub-table.id
  subnet_id      = aws_subnet.pub-subnets[count.index].id
}


Define Security Groups

Specify security groups for the ECS service and the Application Load Balancer (ALB).

resource "aws_security_group" "sg1" {
  name        = "golang-server"
  description = "Port 5000"
  vpc_id      = aws_vpc.ecs-vpc.id

  ingress {
    description      = "Allow Port 5000"
    from_port        = 5000
    to_port          = 5000
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    description = "Allow all IP and ports outbound"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "sg2" {
  name        = "golang-server-alb"
  description = "Port 80"
  vpc_id      = aws_vpc.ecs-vpc.id

  ingress {
    description      = "Allow Port 80"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    description = "Allow all IP and ports outbound"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
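Step 2 also calls for the ECR repository, ECS cluster and service, and the Application Load Balancer, but those resources are missing from the listing above. The sketch below restores a plausible minimal version: the names "golang-Service" and "tg-group" follow the names used later in the article, while the repository name, task sizes, and container definition are assumptions.

```hcl
# ECR repository that will store the Docker image built by the pipeline
resource "aws_ecr_repository" "golang-repo" {
  name = "golang-repo" # assumption -- match <ECR_REPOSITORY_NAME> from Step 5
}

# ECS cluster and Fargate task definition (CPU/memory sizes are illustrative)
resource "aws_ecs_cluster" "ecs-cluster" {
  name = "ecs-cluster"
}

resource "aws_ecs_task_definition" "golang-task" {
  family                   = "golang-task"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  # execution_role_arn must point at a task execution role with ECR/logs access

  container_definitions = jsonencode([{
    name      = "golang-server"
    image     = var.uri_repo # the ECR image URI from Step 5
    essential = true
    portMappings = [{
      containerPort = 5000
      hostPort      = 5000
    }]
  }])
}

# Application Load Balancer, target group, and listener
resource "aws_lb" "ecs-alb" {
  name               = "ecs-alb"
  load_balancer_type = "application"
  security_groups    = [aws_security_group.sg2.id]
  subnets            = aws_subnet.pub-subnets[*].id
}

resource "aws_lb_target_group" "tg-group" {
  name        = "tg-group"
  port        = 5000
  protocol    = "HTTP"
  target_type = "ip" # required for Fargate tasks
  vpc_id      = aws_vpc.ecs-vpc.id
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.ecs-alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.tg-group.arn
  }
}

# ECS service wiring the task, subnets, security group, and target group together
resource "aws_ecs_service" "golang-Service" {
  name            = "golang-Service"
  cluster         = aws_ecs_cluster.ecs-cluster.id
  task_definition = aws_ecs_task_definition.golang-task.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.pub-subnets[*].id
    security_groups  = [aws_security_group.sg1.id]
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.tg-group.arn
    container_name   = "golang-server"
    container_port   = 5000
  }
}
```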

Step 3: Create an HTTP Simple Server with Golang

We will create a simple HTTP server using the Go programming language in this step. This server is designed to run within your ECS tasks and will help you retrieve the private IP addresses of the ECS instances.

package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
)

func main() {
	log.Print("HTTPserver: Enter main()")
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":5000", nil))
}

// Handler function to print request headers/params
func handler(w http.ResponseWriter, r *http.Request) {
	log.Printf("Request from address: %q\n", r.RemoteAddr)
	fmt.Fprintf(w, "%s %s %s\n", r.Method, r.URL, r.Proto)
	fmt.Fprintf(w, "Host = %q\n", r.Host)
	fmt.Fprintf(w, "RemoteAddr = %q\n", r.RemoteAddr)
	if err := r.ParseForm(); err != nil {
		log.Print(err)
	}
	for k, v := range r.Form {
		fmt.Fprintf(w, "Form[%q] = %q\n", k, v)
	}
	fmt.Fprintf(w, "\n===> Local IP: %q\n\n", GetOutboundIP())
}

// Function to get the outbound IP of the ECS task
func GetOutboundIP() net.IP {
	// Dialing UDP sends no packets; it only resolves the local address the
	// OS would use to reach the destination (any routable address works here).
	conn, err := net.Dial("udp", "8.8.8.8:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	localAddr := conn.LocalAddr().(*net.UDPAddr)
	return localAddr.IP
}


This Go program creates an HTTP server that listens on port 5000. When accessed, it prints request information, such as the method, URL, and headers, and then retrieves and displays the local private IP address of the ECS task. This simple server will be used in your ECS tasks as part of your CI/CD pipeline to provide information about the running tasks.

Step 4: Create a Dockerfile

In this step, we will create a Dockerfile that defines the instructions for building a Docker image containing your Golang-based HTTP server. This Docker image will deploy your application within the ECS tasks.

# Use the official Golang Alpine image as the builder stage
FROM golang:alpine AS builder

# Set environment variables for Go
# (reconstructed: static build flags so the binary runs in a scratch image)
ENV CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64

# Set the working directory for the build stage
WORKDIR /build

# Copy the Go application source code to the container
COPY ./HTTPserver.go .

# Build the Go application
RUN go build -o HTTPserver ./HTTPserver.go

# Create a new stage for the final image
FROM scratch

# Copy the binary from the builder stage to the final stage
COPY --from=builder /build/HTTPserver /

# Expose port 5000 for the HTTP server
EXPOSE 5000

# Define the entry point for the container
ENTRYPOINT ["/HTTPserver"]

This Dockerfile uses a multi-stage build approach. The first stage (builder) sets up the Go development environment, copies the Go application source code, and builds the application. The second stage creates a minimal Docker image based on scratch and copies the binary from the builder stage. It also exposes port 5000 and sets the HTTP server’s entry point.

Place this Dockerfile in your CodeCommit repository, and it will be used in the CI/CD pipeline to build the Docker image for your Golang-based HTTP server.

Step 5: Create TF_VAR Variables

In this step, you will set the Terraform variables that carry environment-specific values, such as the uri_repo your infrastructure needs. These variables will be used in your Terraform configuration.

export TF_VAR_uri_repo="<ID_ACCOUNT>.dkr.ecr.<REGION>.amazonaws.com/<ECR_REPOSITORY_NAME>"

Setting the TF_VAR_uri_repo variable ensures that sensitive information is securely managed and can be easily passed into your Terraform configuration.
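For the TF_VAR_uri_repo environment variable to be picked up, the Terraform configuration must declare a matching variable, for example:

```hcl
# Declared without a default so the value must come from TF_VAR_uri_repo
variable "uri_repo" {
  type        = string
  description = "URI of the ECR repository holding the application image"
}
```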

Step 6: Create the Infrastructure – Terraform Commands

Now that your Terraform variables are set, you can proceed to create the infrastructure using the following Terraform commands:

Initialize Terraform

terraform init

This command initializes your Terraform working directory and downloads the necessary providers.

Validate Configuration

terraform validate

Use this command to check your Terraform configuration for syntax errors and other issues.

Plan Infrastructure

terraform plan

This command generates an execution plan that describes what Terraform will do when you apply the configuration. It allows you to review the changes before applying them.

Apply Infrastructure Changes

terraform apply -auto-approve

This command applies the Terraform configuration to create the infrastructure. The -auto-approve flag automatically confirms the changes without prompting for confirmation.

Once the creation process is complete, you will receive output information from Terraform, which may include details about the resources created, such as AWS resource IDs, DNS names, and other important information. These outputs will be useful for the next steps in your CI/CD pipeline.
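Terraform only prints the values you declare as outputs. A hedged example, assuming your configuration contains an ALB resource named aws_lb.ecs-alb:

```hcl
# Print the ALB DNS name so you can reach the service after `terraform apply`
output "alb_dns_name" {
  value = aws_lb.ecs-alb.dns_name
}
```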

Step 7: Upload Dockerfile, Code, and Buildspec Files to the CodeCommit Repository

In this step, you will upload your Dockerfile, application code, and the buildspec.yml file to your CodeCommit repository. This ensures that your CI/CD pipeline has access to these files for building and deploying your application.

Clone the CodeCommit Repository

Use the following command to clone your CodeCommit repository to your local development environment:

git clone <repository-clone-url>

Copy Files to the Cloned Repository Folder

Navigate to the cloned repository folder and copy the following files:

  • Dockerfile: This file defines the instructions for building your Docker image.
  • Application code: Your Golang application code.
  • buildspec.yml: This file is used by the CI/CD pipeline to specify the build and deployment process.
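The buildspec.yml itself is not reproduced in this article. A minimal sketch of what it typically contains for this kind of pipeline (the REPOSITORY_URI environment variable, container name, and image tag are assumptions):

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Log in to ECR (requires the ecr:GetAuthorizationToken permission)
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $REPOSITORY_URI
  build:
    commands:
      - docker build -t $REPOSITORY_URI:latest .
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      # Emit the image definition file that the ECS deploy action consumes
      - printf '[{"name":"golang-server","imageUri":"%s"}]' $REPOSITORY_URI:latest > imagedefinitions.json

artifacts:
  files:
    - imagedefinitions.json
```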

After copying the files, commit your changes:

git add .

git commit -m "Add Dockerfile, Golang code, and buildspec.yml"

Push Changes to the CodeCommit Repository

Push the committed changes to your CodeCommit repository:

git push

This step ensures that your Dockerfile, application code, and buildspec file are now stored in your CodeCommit repository, making them accessible to your CI/CD pipeline.

Step 8: Check the Pipeline

In this step, you will monitor the progress of your CI/CD pipeline and verify that it is functioning as expected.

Pipeline Execution

Allow your CI/CD pipeline to execute its stages, including source code retrieval, building, and deployment.

Check the “Build” Stage

After the “Build” stage of your pipeline is complete, you can check the status of the Docker image in the Amazon Elastic Container Registry (ECR) repository. The Docker image created during the build process should now be available in your ECR repository. You can access the ECR repository via the AWS Management Console or by using the AWS Command Line Interface (CLI).

Step 9: Check the ECS Service

After the “Deploy” stage of your CI/CD pipeline is complete, it’s essential to validate the successful deployment of your application in the ECS (Elastic Container Service) cluster. Follow these steps to ensure your application is running as expected:

Check the ECS Service

Navigate to the AWS Management Console or use the AWS CLI to access the ECS service. Locate your ECS service, which should correspond to the service defined in your Terraform configuration (e.g., “golang-Service”).

Inspect Tasks

Check the tasks in your ECS service to ensure the containers are up and healthy. You should see the tasks associated with your deployment, each running the Docker image built during the pipeline’s “Build” stage.

Step 10: Check the Target Group

In this step, you will verify the configuration and health of the Target Group associated with your ECS service and application load balancer.

Check the Target Group

Go to the AWS Management Console or use the AWS CLI to access the Target Groups service. Locate the Target Group associated with your ECS service (e.g., “tg-group”). Ensure that the targets (ECS tasks) are registered and healthy within the Target Group.

Step 11: Check the Operation of the Application Load Balancer

To confirm that your application is accessible through the Application Load Balancer (ALB), follow these steps:

Check ALB Listener Rules

Access the AWS Management Console or use the AWS CLI to navigate to your Application Load Balancer’s settings. Verify that the listener rules are correctly configured to route traffic to your ECS service and Target Group.

Test Application Accessibility

Use a web browser, a tool like curl, or an HTTP client to access your application through the ALB’s DNS name or endpoint. Confirm that your application responds as expected and is accessible over the specified port (e.g., port 80).

Step 12: FINAL STEP – Delete the Infrastructure

When you’ve confirmed that your CI/CD pipeline has successfully deployed and is running your application as expected, you can move on to cleaning up your infrastructure. Use Terraform to destroy the resources you’ve provisioned:

terraform destroy -auto-approve

Executing this command will tear down the AWS resources created by your Terraform configuration, including the ECS service, the Application Load Balancer, and associated resources. Ensure you are certain about deleting the infrastructure, as this action cannot be undone.


This article has illustrated the power of automating your CI/CD pipeline for Amazon ECS/Fargate with Terraform, enabling you to streamline software deployment and development processes. By maintaining control over infrastructure provisioning, Docker image creation, and deployments, you can ensure the reliability and efficiency of your application delivery to production environments.

Triotech Systems guides organizations through implementing CI/CD pipelines, infrastructure as code, and cloud-native solutions. Our expertise can help you leverage these technologies effectively, optimizing your development workflows and ensuring your applications are delivered seamlessly and securely. Don’t hesitate to contact us for personalized assistance in achieving your DevOps and cloud automation goals.


Frequently Asked Questions

How do you secure a CI/CD pipeline like this one?
Security measures should include access controls, secure code scanning, automated testing, image scanning, and encryption. AWS Identity and Access Management (IAM) roles should limit access, and AWS Secrets Manager can securely store sensitive data.

How do you scale the ECS/Fargate workload?
Utilize ECS Auto Scaling, which dynamically adjusts the number of tasks based on CloudWatch alarms, or use AWS Fargate Spot to optimize costs while automatically maintaining high availability.

What are the cost considerations?
Be mindful of resource provisioning, container sizes, and AWS data transfer costs. Implement AWS Cost Explorer and budgeting to monitor expenses, and leverage Terraform’s cost-effective infrastructure provisioning.

What challenges should you expect?
Challenges may include learning curves, maintaining infrastructure as code, and handling complex dependencies. Address them by investing in training, using Terraform modules, and embracing CI/CD best practices.

How does ECS/Fargate compare with Kubernetes?
ECS/Fargate is simpler to set up and manage but has fewer features than Kubernetes. Consider ECS/Fargate for ease of use and Kubernetes for advanced orchestration needs. Evaluate your specific requirements to make the right choice.
