Recently, while creating my own Terraform-managed lambdas, I wanted to replicate some of the deployment functionality of AWS SAM or serverless.tf. The functionality is this:
The code for infrastructure and business logic is stored side by side, and the business-logic code is only deployed when it changes. If you update the infrastructure but the code remains the same, there is no need to recompile and push.
This greatly reduces the time to deploy changes, especially in larger projects with multiple lambdas where only one is changing. So, how can we do this in our own Terraform module?
Well, first let's look at how this works within the terraform-aws-lambda module provided by serverless.tf. The meat of this is done in two related files, `package.tf` and `package.py`.
Determine Packaging
The first thing to check out is the `archive_prepare` external data source. Keep in mind that, within Terraform, external data is run on every apply. This resource calls the `package.py` Python script with the argument `prepare`. External data providers also receive any query data via stdin. The script is invoked and runs the `prepare_command` method. According to the comment, it does this:
Generates a content hash of the source_path, which is used to determine if the Lambda code has changed, ignoring file modification and access times. Outputs a filename and a command to run if the archive needs to be built.
```python
def prepare_command(args):
    ...
```
So the output is a `filename`, `build_plan`, `build_plan_filename`, `timestamp`, and a `was_missing` bool, all used by later steps to determine how the packaging will work and where it will be output.
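As a rough idea of the wiring (a simplified sketch, not the module's exact code; the query arguments here are illustrative), the data source looks something like this:

```hcl
# Simplified sketch of the archive_prepare external data source.
# The real module passes more arguments; these are illustrative.
data "external" "archive_prepare" {
  # Run package.py in "prepare" mode; the query below arrives on stdin as JSON.
  program = ["python3", "${path.module}/package.py", "prepare"]

  query = {
    source_path = var.source_path
    runtime     = var.runtime
  }
}
```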
Do the Packaging
With this info, Terraform can now determine whether packaging is needed for the lambda and how to go about it. Dependency bundling is supported for Python and JavaScript. The `package.py` script is very complex, so I won't go into details, but it handles the packaging via a `local-exec` call.
```hcl
# Build the zip archive whenever the filename changes.
```
Here, if the output of `package.py prepare` results in a change, it triggers this `null_resource` and its `local-exec`. The `local-exec` installs dependencies and copies them, along with the lambda code, into the provided archive. That archive can then be transferred to AWS via the `aws_lambda_function` resource.
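Illustratively, that resource has roughly this shape (a simplified sketch, not the module's exact code):

```hcl
# Simplified sketch: re-run the build whenever the prepare step reports a
# new archive filename (i.e. the content hash changed).
resource "null_resource" "archive" {
  triggers = {
    filename = data.external.archive_prepare.result.filename
  }

  # Hand the build plan back to package.py, which installs dependencies and
  # writes the zip archive.
  provisioner "local-exec" {
    command = "python3 ${path.module}/package.py build ${data.external.archive_prepare.result.build_plan_filename}"
  }
}
```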
Package Summary
To summarize, this is the flow:
- Review the code location for changes via the external data provider
- Trigger the `local-exec` if the data provider shows a change; this creates the lambda zip
- Upload the lambda zip via `aws_lambda_function` (sketched below)
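That last step is just the standard resource. A minimal sketch (the handler, runtime, and role here are placeholders, and the module's real wiring differs) shows why the function only updates when the archive changes:

```hcl
# Minimal sketch of the upload step. Because source_code_hash is derived
# from the archive contents, AWS only sees an update when the zip changes.
resource "aws_lambda_function" "this" {
  function_name = "example-function"      # placeholder
  role          = aws_iam_role.lambda.arn # placeholder IAM role
  handler       = "index.handler"         # placeholder
  runtime       = "python3.12"            # placeholder

  filename         = data.external.archive_prepare.result.filename
  source_code_hash = filebase64sha256(data.external.archive_prepare.result.filename)

  depends_on = [null_resource.archive]
}
```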
What If…
So this works great, but what if we wanted to do a non-Python/JavaScript lambda, and not mess with the complexity of the `package.py` file? It isn't too bad. Here are some snippets of an example using Dotnet, Bash, and Terraform. The code samples below can be found in a GitHub repo here: https://github.com/1davidmichael/terraform-aws-dotnet-lambda
First, the Terraform: we use an external data provider plus a bash script to detect, via a hash, whether the source directory contents have changed. Then we use a `null_resource` with a `local-exec` provisioner, triggered off `data.external.build_folder`, to build the lambda zip.
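Roughly, the wiring looks like this (a trimmed-down sketch; the output keys and any script names other than `folder_contents.sh` are illustrative and may differ from the repo):

```hcl
# Hash the lambda source directory so we only rebuild when its contents change.
data "external" "build_folder" {
  program = ["bash", "${path.module}/folder_contents.sh"]

  query = {
    source_dir = "${path.module}/src" # illustrative path
  }
}

# Rebuild the zip only when the directory hash changes.
resource "null_resource" "build" {
  triggers = {
    folder_hash = data.external.build_folder.result.hash
  }

  provisioner "local-exec" {
    command = "${path.module}/build.sh" # hypothetical build script, shown later
  }
}
```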
The bash script `folder_contents.sh` takes the md5sums of all files in the directory, uses them to generate a single hash value, and outputs the zip location and hash as JSON back to Terraform. Note that everything the script writes to stdout must be JSON or Terraform chokes, so any other output has to be redirected to stderr.
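A rough sketch of what such a script can look like (the output keys match the sketch above; the real file in the repo may differ):

```bash
#!/usr/bin/env bash
# Sketch of folder_contents.sh: hash a directory's contents and hand the
# result back to Terraform as JSON. Paths and keys are illustrative.
set -euo pipefail

# Parse the query the external data source sends on stdin, e.g. {"source_dir": "src"}.
SOURCE_DIR=$(jq -r '.source_dir')

# Combine every file's md5sum into one hash for the whole directory.
HASH=$(find "$SOURCE_DIR" -type f -exec md5sum {} + | sort | md5sum | awk '{print $1}')

# Anything on stdout other than the final JSON breaks Terraform, so stray
# output (like this debug line) goes to stderr.
echo "Computed hash $HASH for $SOURCE_DIR" 1>&2

# Emit the single JSON object Terraform expects.
jq -n --arg hash "$HASH" --arg zip "$SOURCE_DIR/lambda.zip" \
  '{hash: $hash, zip_location: $zip}'
```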
Using the info provided from Terraform and the `folder_contents.sh` script, we use the dotnet CLI to create a lambda zip.
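As a sketch, the build step can be as simple as publishing the project and zipping the output (paths and names here are placeholders; the repo may instead use the Amazon.Lambda.Tools `dotnet lambda package` command):

```bash
#!/usr/bin/env bash
# Sketch of build.sh, run by the null_resource's local-exec above.
# The project path and zip name are illustrative placeholders.
set -euo pipefail

SOURCE_DIR="src"

# Publish a release build of the Dotnet project.
dotnet publish "$SOURCE_DIR" -c Release -o "$SOURCE_DIR/publish"

# Zip the published output into the location the Terraform config expects.
(cd "$SOURCE_DIR/publish" && zip -r ../lambda.zip .)
```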
Now we just use the typical AWS Terraform resources to create our lambda, and we are good to go with a basic Dotnet lambda! Best of all, the lambda zip in AWS is only updated when files have changed. There are some limits to this approach, but overall it is very straightforward.