Deploy your Next.js application on AWS Lambda with Terraform
Date: 2024-06-19 | Estimated reading time: 10 minutes

This time, we're going to use Terraform, an infrastructure-as-code (IaC) tool, to deploy our infrastructure on AWS. I'll assume you have enough background to understand the code provided here; even so, I'll explain each block of code as we go. Now, let's dive into it.
Implementation
First of all, let's create a simple Next.js app using the `create-next-app` command-line tool, which is the method recommended by the Next.js documentation. During installation, you will see some prompts to set up your application. When the installation process finishes, you can type `npm run dev` and see your application running in your browser.
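For reference, the commands look like this (the app name `my-app` is just an example — use whatever name fits your project):

```shell
# Scaffold a new Next.js app (you'll be prompted for TypeScript, ESLint, etc.)
npx create-next-app@latest my-app

# Start the development server at http://localhost:3000
cd my-app
npm run dev
```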
Now, go to the `next.config.js` file and add a property to this configuration so that only the files necessary for a production deployment are copied when you build your application.
So, when you build your application using `npm run build`, a folder called `.next` will be created, and it will contain a `standalone` folder. This folder holds everything you need to put your application into production without installing `node_modules`, and it is what we will use in the Docker image.
A Lambda function uses deployment packages to deploy code. Lambda supports two types of deployment packages: a .zip file archive and a container image. We'll use a container image as the deployment package. Let's dockerize our application.
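A Dockerfile for this setup could look like the sketch below. The base image tag, the adapter version, and the script name `run.sh` are assumptions — adjust them to your project:

```dockerfile
# --- Build stage: install dependencies and build the app ---
FROM public.ecr.aws/docker/library/node:18-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# --- Runtime stage: only the standalone output plus the Lambda Web Adapter ---
FROM public.ecr.aws/docker/library/node:18-slim AS runner
# The adapter runs as a Lambda extension and proxies Lambda events to our HTTP server
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.8.1 /lambda-adapter /opt/extensions/lambda-adapter
ENV PORT=3000 NODE_ENV=production AWS_LWA_ENABLE_COMPRESSION=true
WORKDIR /app
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/run.sh ./run.sh
# Lambda's filesystem is read-only except /tmp, so point Next's cache there
RUN ln -s /tmp/cache ./.next/cache
ENTRYPOINT ["sh", "run.sh"]
```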
In the code above, we are using multi-stage builds so that the final image contains only what's needed to run the application. The `node:18` image is used, retrieved from the public AWS Elastic Container Registry. An important step here is the use of the AWS Lambda Web Adapter. This adapter lets developers build web apps with familiar frameworks without needing to add any new code dependency. We also set environment variables to change the port the app listens on and the Node environment, and we enabled `gzip` compression of the response body via the `AWS_LWA_ENABLE_COMPRESSION` environment variable. The five `COPY` lines copy files from the previous stage into the current one. Only the necessary files are copied, and a soft link is created to point from `.next/cache` to `/tmp/cache`. Finally, an `ENTRYPOINT` is added to run a bash script. The bash script contains the following code.
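As a sketch, the script only needs to prepare the writable cache directory and start the standalone server (the filename and server entry point are assumptions based on the standalone output):

```bash
#!/bin/bash
# /tmp is the only writable location in the Lambda runtime, so create the
# directory that the .next/cache symlink points to before starting the server.
mkdir -p /tmp/cache
exec node server.js
```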
It's time to implement the Terraform code that creates the resources on AWS. Let's set up Terraform and the providers we need.
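A setup block could look like the following sketch — the bucket name, state key, region, and version constraints are placeholders to replace with your own values:

```hcl
terraform {
  required_version = ">= 1.5"

  # Store the state file remotely so the whole team can work with it
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "nextjs-lambda/terraform.tfstate"
    region = "us-east-1"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region

  # Tags applied to every resource created through this provider
  default_tags {
    tags = {
      Environment = terraform.workspace
    }
  }
}
```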
I've added a `backend` block to store the Terraform state file remotely. It's good practice to keep the state in a remote location instead of locally, as this lets people access the state data and work together on those infrastructure resources. Also, the `aws` provider defined above has a `default_tags` property, which is a generalized way to include tags on every resource that uses this provider. The `Environment` tag is set to the `terraform.workspace` value. Terraform starts with a single default workspace named `default` that cannot be deleted; if you have not created a new workspace, you are using the default workspace in your Terraform working directory. Having multiple workspaces is useful if you want to deploy your infrastructure to different environments (e.g. prod, stg, qa).
We'll use a `null_resource` Terraform resource to build, tag, and push the Docker image to an ECR repository. It's important to use variables here to avoid hardcoding values.
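A sketch of that resource is shown below. The checksum expression (hashing the app source tree) and the `./app` build context are assumptions — any expression that changes when your code changes will do:

```hcl
resource "null_resource" "build_and_push_image" {
  # Re-run the provisioner only when the application source changes
  triggers = {
    checksum = sha256(join("", [for f in fileset("${path.module}/app", "**") : filesha256("${path.module}/app/${f}")]))
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws ecr get-login-password --region ${var.aws_region} \
        | docker login --username AWS --password-stdin ${var.ecr_url}
      docker build -t ${var.ecr_name} ./app
      docker tag ${var.ecr_name}:latest ${var.ecr_url}:latest
      docker push ${var.ecr_url}:latest
    EOT
  }
}
```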
Take a look at the `triggers` block. It defines when the resource will be executed: if the checksum value changes on the next run, the resource will build, tag, and push the image to the ECR repository; otherwise, nothing happens. The `ecr_name` variable is set to the name of the repository that stores the image, `ecr_url` contains the full path the image will be pushed to, and the `aws_region` variable is set to the name of the region where the ECR repository is located.
Now, let's create a Lambda function using the `aws_lambda_function` Terraform resource.
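The following sketch shows one way to wire this up. The role and function names, memory size, and timeout are assumptions, and `depends_on` assumes the `null_resource` from the previous step is named `build_and_push_image`:

```hcl
# Trust policy letting the Lambda service assume the execution role
data "aws_iam_policy_document" "lambda_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "lambda_role" {
  name               = "nextjs-lambda-role"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
}

resource "aws_lambda_function" "nextjs" {
  function_name = "nextjs-app"
  role          = aws_iam_role.lambda_role.arn

  # Deploy from the container image pushed to ECR
  package_type = "Image"
  image_uri    = "${var.ecr_url}:latest"

  memory_size = 1024
  timeout     = 30

  # Make sure the image exists in ECR before creating the function
  depends_on = [null_resource.build_and_push_image]
}
```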
The code above uses the `aws_iam_policy_document` data source to create an IAM policy document. This policy defines a statement allowing the Lambda service to request temporary security credentials, assume the specified IAM role, and inherit the permissions assigned to that role. The policy is used in the `aws_iam_role` Terraform resource. Finally, the Lambda function is created with the `aws_lambda_function` Terraform resource. It's important to wait until the Docker image is in the ECR repository before creating the Lambda function (hence the `depends_on` property). Remember to set the values of the variables in a file with the `.tfvars` extension.
Let's create an HTTPS endpoint for invoking the Lambda function. This is possible using the `aws_lambda_function_url` resource.
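A minimal version could look like this (the wide-open CORS settings are an example; tighten them to the origins you actually need, and the resource name assumes the function is declared as `aws_lambda_function.nextjs`):

```hcl
resource "aws_lambda_function_url" "nextjs" {
  function_name = aws_lambda_function.nextjs.function_name

  # "NONE" makes the endpoint public; use "AWS_IAM" to require signed requests
  authorization_type = "NONE"

  cors {
    allow_origins = ["*"]
    allow_methods = ["*"]
    allow_headers = ["*"]
  }
}
```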
As you can see, this HTTP endpoint includes a `cors` block to define how different origins can access our function URL. The `authorization_type` is set to `"NONE"` in order to create a public endpoint.
Deployment
Finally, we need to create these resources on AWS. Be sure to export the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables before carrying on.
Terraform will handle the whole creation phase for you; just type the following commands.
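These are the standard Terraform workflow commands (the `prod.tfvars` filename is an example — use whatever `.tfvars` file holds your variable values):

```shell
# Initialize the working directory: downloads the AWS provider and configures the S3 backend
terraform init

# Preview the changes Terraform will make
terraform plan -var-file="prod.tfvars"

# Create the resources on AWS
terraform apply -var-file="prod.tfvars"
```

Once `terraform apply` finishes, the function URL returned by AWS will serve your Next.js application.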