Deploy your Next.js application on AWS Lambda with Terraform

Date: 2024-06-19 | Estimated reading time: 10 minutes

This time, we're going to use Terraform, an IaC tool, to deploy our infrastructure on AWS. I'll assume you have enough knowledge to follow the code I provide here; even so, I'll explain each block of code along the way. Now, let's dive in.

Implementation

First of all, let's create a simple Next.js app using the create-next-app command-line tool, which is the method recommended by the Next.js documentation.

npx create-next-app@latest

During installation, you will see some prompts to set up your application. When the installation process finishes, run npm run dev and you will see your application running in your browser.

Now, go to the next.config.js file and add a property to this configuration so that only the files necessary for a production deployment are copied when you build your application.

app/next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
    output: 'standalone'
}
 
module.exports = nextConfig

So, when you build your application with npm run build, a folder called .next is created, and it contains a standalone folder. This folder holds the files you need to put your application into production without installing node_modules, and it is what we will copy into a Docker image.
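The layout of that folder looks roughly like this (a simplified sketch; the exact contents depend on your app and Next.js version):

```
.next/standalone/
├── .next/            # server build output
├── node_modules/     # only the packages needed at runtime
├── package.json
└── server.js         # minimal Node.js server entry point
```

The server.js entry point is what we'll ultimately run inside the container.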

A Lambda function uses deployment packages to deploy code. Lambda supports two types of deployment packages: .zip file archives and container images. We'll use a container image as the deployment package, so let's dockerize our application.

app/Dockerfile
# Build stage: install dependencies and build the Next.js app
FROM public.ecr.aws/lambda/nodejs:18.2023.11.15.18 AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Runtime stage: only the standalone output plus the Lambda Web Adapter
FROM public.ecr.aws/lambda/nodejs:18.2023.11.15.18 AS runner
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.7.1 /lambda-adapter /opt/extensions/lambda-adapter
ENV PORT=8080 NODE_ENV=production
# Enable gzip compression of response bodies in the adapter
ENV AWS_LWA_ENABLE_COMPRESSION=true
WORKDIR ${LAMBDA_TASK_ROOT}
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/run.sh ./run.sh
# Lambda's filesystem is read-only except /tmp, so point the Next.js cache there
RUN ln -s /tmp/cache ./.next/cache
ENTRYPOINT [ "/bin/bash", "./run.sh" ]

In the code above, we use a multi-stage build so that the final image contains only what's needed to run the application. The nodejs:18 base image is retrieved from the public AWS Elastic Container Registry. An important step here is the use of the Lambda Web Adapter. This adapter lets developers build web apps with familiar frameworks without needing to include a new code dependency. We also set environment variables to change the port the app listens on and the Node environment, and we enabled gzip compression of response bodies with the AWS_LWA_ENABLE_COMPRESSION variable. The five COPY lines copy only the necessary files from the builder stage into the final one, and a symbolic link is created so that .next/cache points to /tmp/cache, since /tmp is the only writable path in the Lambda environment. Finally, an ENTRYPOINT is added to run a bash script, which contains the following code.

app/run.sh
#!/bin/bash -x
# Create the Next.js cache directory under /tmp, the only writable path in Lambda
[ ! -d '/tmp/cache' ] && mkdir -p /tmp/cache
exec node server.js

It's time to write the Terraform code that creates the resources on AWS. Let's set up Terraform and the providers we need.

app/infrastructure/terraform.tf
terraform {
  required_version = ">= 1.6.3"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~>5.25"
    }
  }
 
  backend "s3" {
    bucket  = "luchocode-apps-bucket"
    key     = "terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}
 
provider "aws" {
  region = "us-east-1"
  default_tags {
    tags = {
      Environment = terraform.workspace
    }
  }
}

I've added a backend block to store the Terraform state file remotely. It's good practice to keep state in a remote location instead of locally, as this lets other people access the state data and work together on the infrastructure resources. The aws provider defined above also has a default_tags property, a generalized way to attach tags to every resource that uses this provider. The Environment tag is set to the terraform.workspace value. Terraform starts with a single workspace named default that you cannot delete; if you have not created a new workspace, you are working in the default one. Multiple workspaces are useful when you want to deploy the same infrastructure to different environments (e.g. prod, stg, qa).

We'll use a null_resource Terraform resource to build, tag, and push the Docker image to an ECR repository. It's important to use variables to avoid hardcoding values here.

app/infrastructure/main.tf
data "aws_caller_identity" "current" {}
 
locals {
  app_checksum = "${var.lambda_function_name}-${formatdate("YYYYMMDDhhmmss", timestamp())}"
  ecr_url      = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.aws_region}.amazonaws.com/${var.ecr_name}:${local.app_checksum}"
}
 
resource "null_resource" "docker_image" {
  triggers = {  
    checksum = local.app_checksum
  }
 
  provisioner "local-exec" {
    command = "docker build -t ${var.ecr_name} ../."
  }
 
  provisioner "local-exec" {
    command = "docker tag ${var.ecr_name}:latest ${local.ecr_url}"
  }
 
  provisioner "local-exec" {
    command = "docker push ${local.ecr_url}"
  }
}

Take a look at the triggers block. It defines when the resource is re-executed: if the checksum value changes on the next run, the resource builds, tags, and pushes the image to ECR again; otherwise nothing happens. The ecr_name variable holds the name of the repository that stores the image, ecr_url contains the full path the image is pushed to, and the aws_region variable holds the name of the region where the repository is located.
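Note that docker push only succeeds if your Docker client is already authenticated against the ECR registry. One way to handle this, assuming the AWS CLI is installed on the machine running Terraform, is an additional local-exec provisioner placed before the push (a sketch; adapt it to your setup):

```hcl
# Authenticate the local Docker client against ECR before pushing (sketch)
provisioner "local-exec" {
  command = "aws ecr get-login-password --region ${var.aws_region} | docker login --username AWS --password-stdin ${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.aws_region}.amazonaws.com"
}
```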

Now, let's create a lambda function using the aws_lambda_function terraform resource.

app/infrastructure/main.tf
data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"
 
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
 
    actions = ["sts:AssumeRole"]
  }
}
 
resource "aws_iam_role" "iam_for_lambda" {
  name               = "${var.lambda_function_name}_role_${terraform.workspace}"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
 
resource "aws_lambda_function" "application" {
  function_name    = "${var.lambda_function_name}_${terraform.workspace}"
  role             = aws_iam_role.iam_for_lambda.arn
  description      = "Nextjs application"
  image_uri        = local.ecr_url
  package_type     = "Image"
  timeout          = 15
  source_code_hash = local.app_checksum
  depends_on       = [null_resource.docker_image]
}

The code above uses the aws_iam_policy_document data source to create an IAM policy document. This policy defines a statement allowing the Lambda service to request temporary security credentials, assume the specified IAM role, and inherit the permissions assigned to that role. The policy is referenced by the aws_iam_role Terraform resource. Finally, the Lambda function is created with the aws_lambda_function Terraform resource. It's important to wait until the Docker image exists in ECR before creating the Lambda function, which is what the depends_on property ensures. Remember to set the variables' values in a file with the .tfvars extension.
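As a sketch of what those variable definitions might look like (the names match the ones referenced in the code above; the values in the .tfvars file are illustrative assumptions):

```hcl
# app/infrastructure/variables.tf (sketch)
variable "aws_region" {
  type        = string
  description = "Region where the ECR repository and Lambda function live"
}

variable "ecr_name" {
  type        = string
  description = "Name of the ECR repository that stores the image"
}

variable "lambda_function_name" {
  type        = string
  description = "Base name for the Lambda function"
}

# app/infrastructure/vars-dev.tfvars (illustrative values)
aws_region           = "us-east-1"
ecr_name             = "nextjs-app"
lambda_function_name = "nextjs_app"
```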

Let's create an HTTPS endpoint for invoking the Lambda function. This is possible with the aws_lambda_function_url resource.

app/infrastructure/main.tf
resource "aws_lambda_function_url" "application_url" {
  function_name      = "${var.lambda_function_name}_${terraform.workspace}"
  authorization_type = "NONE"
  cors {
    allow_methods = ["GET", "HEAD"]
    allow_origins = ["*"]
  }
  depends_on = [aws_lambda_function.application]
}

As you can see, this HTTP endpoint includes a cors block to define how different origins can access our function URL. The authorization_type is set to "NONE" in order to create a public endpoint.
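To print the generated URL after terraform apply, you could add an output block (a small addition not shown in the original setup; the function_url attribute is exported by the aws_lambda_function_url resource):

```hcl
# app/infrastructure/outputs.tf (sketch)
output "function_url" {
  description = "Public HTTPS endpoint of the Next.js application"
  value       = aws_lambda_function_url.application_url.function_url
}
```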

Deployment

Finally, we need to create these resources on AWS. Be sure to export the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables before carrying on.
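For example (placeholder values shown for illustration only; substitute credentials for an IAM identity with ECR and Lambda permissions):

```shell
# Placeholder credentials for illustration only; use your own
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```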

Terraform handles the whole creation phase for you; just type the following commands.

terminal
# (optional) Create terraform workspace. Otherwise the resources will be created in the default terraform workspace
terraform workspace new dev
terraform workspace select dev
# Initialize terraform
terraform init
# See the infrastructure plan
terraform plan -var-file=vars-dev.tfvars
# Apply the changes
terraform apply -var-file=vars-dev.tfvars -auto-approve