Choose Your Character: Same Service, Different Tools Part 3

May 3, 2024

Welcome back! In our last adventure, we used CDK to build and deploy our file upload service. This time, we’re leaving the realm of first-party AWS tools and using Terraform from HashiCorp. Terraform is an infrastructure as code (IaC) tool capable of deploying resources across multiple platforms, not just AWS.

I have a GitHub Codespace preconfigured with Terraform and the AWS CLI installed. You can fork it here: https://github.com/pchinjr/serverless-file-upload. If this is your first time running this Codespace, you’ll have to configure your AWS credentials, set the AWS_PROFILE environment variable in your Codespace, and log in with AWS SSO.

  
$ aws configure sso
$ export AWS_PROFILE={YOUR-AWS-PROFILE-NAME}
$ aws sso login
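# optional sanity check: confirm the active profile resolves to the account you expect
$ aws sts get-caller-identity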
  

We’ll start our Terraform project with three files: provider.tf, variables.tf, and main.tf.

  
provider "aws" {
  region = "us-east-1"
}
  

provider.tf Boilerplate to tell Terraform we’re using AWS.

  
variable "region" {
  default = "us-east-1"
}

data "aws_caller_identity" "current" {}

locals {
    account_id = data.aws_caller_identity.current.account_id
}
  

variables.tf These are global variables that can be referenced throughout.

  
output "account_id" {
  value = local.account_id
}

resource "aws_s3_bucket" "file_upload" {
  bucket = "file-upload-${var.region}-${local.account_id}"
}
  

main.tf This is the main manifest that declares all of your resources.

With these three files, we’re telling Terraform that this is an AWS project and that we want to create an S3 bucket named by concatenating the region and account ID. Note that we use a data block to retrieve the account ID and store it as a local variable, which limits the amount of hard-coded information in our Terraform manifest. The first time you set up a Terraform project, you have to run $ terraform init, so go ahead and do that now.
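
A typical first pass looks like this; fmt and validate are optional, but they’re cheap sanity checks:

  
$ terraform init      # downloads the AWS provider and initializes the working directory
$ terraform fmt       # normalizes formatting of the .tf files
$ terraform validate  # checks the configuration for syntax and consistency errors
  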

Already we can see the hierarchical and composable nature of Terraform. That composability comes at a cost in complexity, so Terraform tracks the state of your resources in a diffable format. This means you can ask Terraform how your resources will change before a deployment. Preview the execution plan with the command $ terraform plan. You should see output like this:

  
  $ terraform plan

Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.file_upload will be created
  + resource "aws_s3_bucket" "file_upload" {
      + acceleration_status         = (known after apply)
      + acl                         = (known after apply)
      + arn                         = (known after apply)
      + bucket                      = "file-upload-us-east-1-xxxxxxxxxxxxxx"
      + bucket_domain_name          = (known after apply)
      + bucket_prefix               = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + object_lock_enabled         = (known after apply)
      + policy                      = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags_all                    = (known after apply)
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
  

The output tells us that it will create an S3 bucket, and you can quickly glance at the last line to ensure it matches your expectations. We wanted to make one bucket, and nothing else, so this is looking good so far.

Now deploy with the command $ terraform apply, which applies your execution plan to your provider’s environment. Here’s what my output looks like:

  
 $ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with
the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.file_upload will be created
  + resource "aws_s3_bucket" "file_upload" {
      + acceleration_status         = (known after apply)
      + acl                         = (known after apply)
      + arn                         = (known after apply)
      + bucket                      = "file-upload-us-east-1-xxxxxxxxxx"
      + bucket_domain_name          = (known after apply)
      + bucket_prefix               = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + object_lock_enabled         = (known after apply)
      + policy                      = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags_all                    = (known after apply)
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket.file_upload: Creating...
aws_s3_bucket.file_upload: Creation complete after 0s [id=file-upload-us-east-1-xxxxxxxxxx]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
  

We can check that our bucket has been created with the command $ aws s3 ls:

  
$ aws s3 ls
2024-04-17 17:49:03 file-upload-us-east-1-837132623653
  

Inconvenient Truths

Great success! But here is where things get interesting: there is no CloudFormation stack and there is no source code preprocessing. Terraform uses your credentials to make changes directly through the AWS APIs. After a successful apply, Terraform generates a terraform.tfstate file to keep track of the last known state.
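
By default that state file lives locally, next to your configuration. If you want it somewhere more durable and shareable, Terraform supports remote backends. Here’s a minimal sketch of an S3 backend you could add to provider.tf, assuming a separate, pre-existing state bucket (the name here is a placeholder):

  
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"                 # hypothetical bucket you create separately
    key    = "serverless-file-upload/terraform.tfstate"  # path of the state object within that bucket
    region = "us-east-1"
  }
}
  

After adding a backend block, run $ terraform init again so Terraform can migrate the existing local state.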

Terraform focuses solely on infrastructure and does not directly bundle or process source code. With the previous tools, source code was bundled and uploaded to the Lambda service automatically. A typical solution, and a best practice, is to build a CI/CD pipeline that generates the code artifacts and then invokes Terraform.
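
As a rough sketch, the deploy job of such a pipeline might run something like the following, using the zip-then-apply layout we adopt below (the exact steps and paths are illustrative, not a prescribed pipeline):

  
# package the Lambda source, then hand the artifact to Terraform
(cd src/upload && zip -r upload.zip upload.mjs)
terraform init -input=false
terraform plan -input=false -out=tfplan
terraform apply -input=false tfplan
  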

A more advanced approach is to combine frameworks: for example, using the Serverless Framework to deploy Lambdas and API Gateway while using Terraform for networking architecture like VPCs. At the time of writing, CDK for Terraform is under active development; it lets a developer write CDK constructs in a language like TypeScript or Python and synthesize them into an intermediate format that Terraform can execute. I’ll keep an eye on its development and evaluate it if and when it releases a v1.0.

These limitations show how Terraform trades some convenience for the flexibility to mix and match providers. To be fair, a lower-level IaC tool like raw CloudFormation doesn’t process source code either, and it only works with AWS.

So what are we going to do? A Lambda resource in Terraform can point to either a local .zip file or an S3 bucket containing the .zip file. For this exercise, we will produce a .zip file of our Lambda code and reference it locally in Terraform.
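
If you went the S3 route instead, the same resource type can point at an uploaded artifact. A minimal sketch, assuming an artifact bucket and key that you manage yourself (both names here are hypothetical), and using the execution role we define later:

  
resource "aws_lambda_function" "upload_lambda" {
  function_name = "UploadFunction"
  handler       = "upload.lambdaHandler"
  runtime       = "nodejs20.x"
  s3_bucket     = "my-lambda-artifacts"   # hypothetical bucket holding deployment packages
  s3_key        = "upload/upload.zip"     # hypothetical object key for the zipped handler
  role          = aws_iam_role.lambda_exec_role.arn
}
  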

Here is the full upload.mjs code for reference:

  
// This is the Lambda function that will be triggered by the API Gateway POST request
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({region: 'us-east-1'});

export const lambdaHandler = async (event) => {
    try {
        console.log(event)
        const body = JSON.parse(event.body);
        const decodedFile = Buffer.from(body.file, 'base64');
        const input = {
            "Body": decodedFile,
            "Bucket": process.env.BUCKET_NAME,
            "Key": body.filename,
            "ContentType": body.contentType
          };
        const command = new PutObjectCommand(input);
        const uploadResult = await s3.send(command);
        return {
            statusCode: 200,
            body: JSON.stringify({ message: "Praise Cage!", uploadResult }),
        };
    } catch (err) {
        console.error(err);
        return {
            statusCode: 500,
            body: JSON.stringify({ message: "Error uploading file", error: err.message }),
        };
    }
};
  

From inside the /src/upload directory, we can zip the file with the following command: $ zip upload.zip upload.mjs.

You should now have a /src/upload directory containing two files, upload.mjs and upload.zip. We’re able to reuse the same upload code, and our only dependency is the AWS SDK for JavaScript v3, which is included by default in the Lambda runtime.
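
If you’d rather not zip by hand, the hashicorp/archive provider can build the archive at plan time. A small sketch of the equivalent, which you could swap in for the filename/source_code_hash pair used in the Lambda resource below:

  
data "archive_file" "upload_zip" {
  type        = "zip"
  source_file = "${path.module}/src/upload/upload.mjs"
  output_path = "${path.module}/src/upload/upload.zip"
}

# On the aws_lambda_function resource you would then set:
#   filename         = data.archive_file.upload_zip.output_path
#   source_code_hash = data.archive_file.upload_zip.output_base64sha256
  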

Now we can turn back to our main.tf and define several things:

  • Lambda execution role
  • Lambda policy to act on the S3 Bucket
  • The upload Lambda resource
  • An API Gateway resource
  • API Gateway lambda integration
  • API Gateway deployment
  • API Gateway permission to invoke the upload Lambda

Once again, Terraform requires more explicit configuration than SAM or CDK in order to preserve its flexibility. This highlights how much configuration SAM and CDK assume on a developer’s behalf.

  
output "account_id" {
  value = local.account_id
}

resource "aws_s3_bucket" "file_upload" {
  bucket = "file-upload-${var.region}-${local.account_id}"
}

resource "aws_iam_role" "lambda_exec_role" {
  name = "lambda_exec_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = "sts:AssumeRole",
      Effect = "Allow",
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy" "lambda_policy" {
  name = "lambda_exec_policy"
  role = aws_iam_role.lambda_exec_role.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        Resource = "arn:aws:logs:*:*:*"
      },
      {
        Effect = "Allow",
        Action = [
          "s3:PutObject"
        ],
        Resource = "${aws_s3_bucket.file_upload.arn}/*"
      }
    ]
  })
}

resource "aws_lambda_function" "upload_lambda" {
  function_name = "UploadFunction"
  handler       = "upload.lambdaHandler"  
  runtime       = "nodejs20.x"     
  filename      = "${path.module}/src/upload/upload.zip"
  source_code_hash = filebase64sha256("${path.module}/src/upload/upload.zip")
  role = aws_iam_role.lambda_exec_role.arn
  environment {
    variables = {
      BUCKET_NAME: aws_s3_bucket.file_upload.bucket
    }
  }
}

resource "aws_api_gateway_rest_api" "api" {
  name        = "UploadAPI"
  description = "API for file uploads"
}

resource "aws_api_gateway_resource" "api_resource" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_rest_api.api.root_resource_id
  path_part   = "upload"
}

resource "aws_api_gateway_method" "post_method" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.api_resource.id
  http_method   = "POST"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "lambda_integration" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.api_resource.id
  http_method = aws_api_gateway_method.post_method.http_method

  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.upload_lambda.invoke_arn
}

resource "aws_api_gateway_deployment" "api_deployment" {
  depends_on = [
    aws_api_gateway_integration.lambda_integration,
  ]

  rest_api_id = aws_api_gateway_rest_api.api.id
  stage_name  = "prod"
}

resource "aws_lambda_permission" "api_lambda_permission" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.upload_lambda.function_name
  principal     = "apigateway.amazonaws.com"

  source_arn = "${aws_api_gateway_rest_api.api.execution_arn}/*/*"
}

output "api_endpoint" {
  value = "${aws_api_gateway_deployment.api_deployment.invoke_url}/upload"
}
  

main.tf with the upload Lambda and API Gateway resources

Run the command $ terraform plan to review the changes to the Terraform state and the execution plan. Then run $ terraform apply to apply the execution plan and deploy the new API Gateway and Lambda function. Take note of the api_endpoint output so we can test it with a curl command. Be sure to replace the endpoint address with your own values.
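
If you need to print the endpoint again later, Terraform can echo any declared output on demand:

  
$ terraform output api_endpoint
  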

  
curl -X POST https://{YOUR_API_ENDPOINT}.execute-api.us-east-1.amazonaws.com/prod/upload \
     -H "Content-Type: application/json" \
     -d '{
    "filename": "example.txt",
    "file": "UHJhaXNlIENhZ2UhIEhhbGxvd2VkIGJ5IHRoeSBuYW1lLg==",
    "contentType": "text/plain"
}'
  

I made a text file with the contents: Praise Cage! Hallowed be thy name. We can encode the file to Base64 from the command line with $ base64 text.txt > text.txt.base64. Using my example, we get an output of UHJhaXNlIENhZ2UhIEhhbGxvd2VkIGJ5IHRoeSBuYW1lLg==, which we use to build the curl command above and test our endpoint.
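
You can confirm the object actually landed in the bucket straight from the CLI; swap in your own bucket name from the earlier aws s3 ls:

  
$ aws s3 ls s3://file-upload-us-east-1-{YOUR_ACCOUNT_ID}/
$ aws s3 cp s3://file-upload-us-east-1-{YOUR_ACCOUNT_ID}/example.txt -
  

The second command streams the object to stdout, so you should see your original text echoed back.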

Time for DynamoDB and Some Event-Driven Architecture

Now that we can upload a file to S3, we’ll use an S3 event trigger to invoke a Lambda function that writes the file’s metadata to DynamoDB. We could use a single Lambda to synchronously upload the file and save the metadata. However, we can improve the user experience by splitting the process into two Lambdas. The user receives a faster response because they don’t have to wait for a database operation, and the individual Lambdas are leaner because there’s less code to provision and errors can be handled separately. This small tweak in architecture turns your process into parallel pipelines. As a developer, you’ll want to leverage as much native AWS service integration as you can: it’s less code to write, AND the operational responsibility shifts to the vendor. AWS is responsible for capturing and delivering the event.

Let’s get it!

First, we need to refactor main.tf to make our IAM configuration modular. This refactoring introduces a cleaner separation of concerns, reduces duplicated IAM statements, and centralizes role management, making the Terraform configuration easier to manage and scale. Additionally, it adheres to best practices by ensuring policies are targeted and roles are not overly permissive. We'll update main.tf with distinct roles for S3 access and DynamoDB access.

  
resource "aws_iam_role" "lambda_s3_role" {
  name = "lambda_s3_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = "sts:AssumeRole",
      Effect = "Allow",
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    },
    ]
  })
}

resource "aws_iam_role_policy" "s3_policy" {
   name   = "s3_access_policy"
   role = aws_iam_role.lambda_s3_role.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "s3:PutObject"
        ],
        Resource = "${aws_s3_bucket.file_upload.arn}/*"
      },
      {
      Effect = "Allow",
      Action = [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      Resource = "arn:aws:logs:*:*:*",
    }
    ]
  })
}

resource "aws_lambda_function" "upload_lambda" {
  function_name = "UploadFunction"
  handler       = "upload.lambdaHandler"  
  runtime       = "nodejs20.x"     
  filename      = "${path.module}/src/upload/upload.zip"
  source_code_hash = filebase64sha256("${path.module}/src/upload/upload.zip")
  role = aws_iam_role.lambda_s3_role.arn # <--- UPDATE ROLE
  environment {
    variables = {
      BUCKET_NAME: aws_s3_bucket.file_upload.bucket
    }
  }
}
  

S3-specific role and policy

  
resource "aws_iam_role" "lambda_dynamodb_role" {
  name = "lambda_dynamodb_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = "sts:AssumeRole",
      Effect = "Allow",
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy" "dynamodb_policy" {
   name   = "dynamodb_access_policy"
   role = aws_iam_role.lambda_dynamodb_role.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "dynamodb:Query",
          "dynamodb:Scan"
        ],
        Resource = "arn:aws:dynamodb:*:*:table/${aws_dynamodb_table.file_metadata.name}/*"
      },
      {
        Effect = "Allow",
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:UpdateItem",
          "dynamodb:DeleteItem"
        ],
        Resource = "arn:aws:dynamodb:*:*:table/${aws_dynamodb_table.file_metadata.name}"
      },
      {
        Effect = "Allow",
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        Resource = "arn:aws:logs:*:*:*"
      }
    ]
  })
}
  

DynamoDB-specific role and policy

  
resource "aws_dynamodb_table" "file_metadata" {
  name           = "FileMetadata"
  billing_mode   = "PAY_PER_REQUEST"
  hash_key       = "FileId"
  range_key      = "UploadDate"

  attribute {
    name = "FileId"
    type = "S"
  }

  attribute {
    name = "UploadDate"
    type = "S"
  }

  attribute {
    name = "SyntheticKey"
    type = "S"
  }

  global_secondary_index {
    name               = "UploadDateIndex"
    hash_key           = "SyntheticKey"
    range_key          = "UploadDate"
    projection_type    = "ALL"
  }
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.file_upload.bucket

  lambda_function {
    lambda_function_arn = aws_lambda_function.write_metadata.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_s3_invocation]
}

resource "aws_lambda_permission" "allow_s3_invocation" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.write_metadata.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.file_upload.arn
}

resource "aws_lambda_function" "write_metadata" {
  function_name = "WriteMetadata"
  handler       = "writeMetadata.lambdaHandler"  
  runtime       = "nodejs20.x"     
  filename      = "${path.module}/src/writeMetadata/writeMetadata.zip"
  source_code_hash = filebase64sha256("${path.module}/src/writeMetadata/writeMetadata.zip")
  role = aws_iam_role.lambda_dynamodb_role.arn
  environment {
    variables = {
      TABLE_NAME = aws_dynamodb_table.file_metadata.name
    }
  }
}
  

New resources for the DynamoDB table, the S3 bucket notification, and the WriteMetadata Lambda function

Next, let’s add the WriteMetadata Lambda code under /src/writeMetadata and zip it from inside that directory.

$ zip writeMetadata.zip writeMetadata.mjs

  
// This function writes the metadata of the uploaded file to the DynamoDB table.
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";
const dynamoDBClient = new DynamoDBClient({ region: 'us-east-1' });

export const lambdaHandler = async (event) => {
    try {
        const record = event.Records[0].s3;
        const key = decodeURIComponent(record.object.key.replace(/\+/g, " "));
        const size = record.object.size;
        const uploadDate = new Date().toISOString();  // Assuming the current date as upload date
        const dbParams = {
            TableName: process.env.TABLE_NAME,
            Item: {
                FileId: { S: key },
                UploadDate: { S: uploadDate },
                FileSize: { N: size.toString() },
                SyntheticKey: { S: 'FileUpload'}
            }
        };

        await dynamoDBClient.send(new PutItemCommand(dbParams));

        return { statusCode: 200, body: 'Metadata saved successfully.' };
    } catch (err) {
        console.error(err);
        return { statusCode: 500, body: `Error: ${err.message}` };
    }
};
  

/src/writeMetadata/writeMetadata.mjs is the same source code as before.

Now you can run the command $ terraform plan, and you should see an output that shows Plan: 8 to add, 1 to change, 2 to destroy. We’re destroying the previous Lambda execution role and policy, adding a batch of new resources, and updating the existing upload Lambda to use the new S3-specific role.

Go ahead and deploy with $ terraform apply, then test your endpoint with another curl command using a new file name:

  
curl -X POST https://{YOUR_API_ENDPOINT}.execute-api.us-east-1.amazonaws.com/prod/upload \
     -H "Content-Type: application/json" \
     -d '{
    "filename": "example2.txt",
    "file": "UHJhaXNlIENhZ2UhIEhhbGxvd2VkIGJ5IHRoeSBuYW1lLg==",
    "contentType": "text/plain"
}'
  

If all goes well, this will trigger the new WriteMetadata Lambda function and insert a new item into your DynamoDB table.
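
To double-check without opening the console, a quick scan of the table will show the new item (fine for a tiny table like this; you wouldn’t scan a large production table):

  
$ aws dynamodb scan --table-name FileMetadata --max-items 5
  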

Huzzah!

💡 If you’re having trouble, stop by the Serverless Guru Discord to get some help or just to say “Praise Cage!”

Time to GET the Data

The final feature we have to implement is a GET endpoint that fetches the file metadata from DynamoDB within a date range. As a refresher, we’ve set up the DynamoDB table with a Global Secondary Index keyed on a synthetic key. This lets us query the table efficiently by date range without knowing each item’s FileId.
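
The GetMetadata Lambda we’re about to add performs essentially this query. Expressed with the AWS CLI, which is handy for poking at the index while debugging (the dates are illustrative):

  
$ aws dynamodb query \
    --table-name FileMetadata \
    --index-name UploadDateIndex \
    --key-condition-expression "SyntheticKey = :k AND UploadDate BETWEEN :s AND :e" \
    --expression-attribute-values '{":k":{"S":"FileUpload"},":s":{"S":"2024-04-01T00:00:00Z"},":e":{"S":"2024-04-22T23:59:59Z"}}'
  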

In our main.tf file we’ll add a new GET endpoint for API Gateway and the final GetMetadata Lambda function along with all the permissions. We also need to update the api_gateway_deployment resource.

  
resource "aws_api_gateway_deployment" "api_deployment" {
  depends_on = [
    aws_api_gateway_integration.lambda_integration,
    aws_api_gateway_integration.lambda_metadata_integration
  ]

  rest_api_id = aws_api_gateway_rest_api.api.id
  stage_name  = "prod"

  # Force new deployment if APIs change
  triggers = {
    redeployment = sha1(join(",", [jsonencode(aws_api_gateway_integration.lambda_integration), jsonencode(aws_api_gateway_integration.lambda_metadata_integration)]))
  }
}

resource "aws_api_gateway_resource" "api_metadata_resource" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_rest_api.api.root_resource_id
  path_part   = "metadata"
}

resource "aws_api_gateway_method" "get_method" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.api_metadata_resource.id
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "lambda_metadata_integration" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.api_metadata_resource.id
  http_method = aws_api_gateway_method.get_method.http_method

  integration_http_method = "POST"  // Lambda uses POST for invocations
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.get_metadata.invoke_arn
}

resource "aws_lambda_permission" "api_metadata_lambda_permission" {
  statement_id  = "AllowAPIGatewayInvokeQuery"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.get_metadata.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.api.execution_arn}/*/*/metadata"
}

resource "aws_lambda_function" "get_metadata" {
  function_name = "GetMetadata"
  handler       = "getMetadata.lambdaHandler"  
  runtime       = "nodejs20.x"     
  filename      = "${path.module}/src/getMetadata/getMetadata.zip"
  source_code_hash = filebase64sha256("${path.module}/src/getMetadata/getMetadata.zip")
  role = aws_iam_role.lambda_dynamodb_role.arn
  environment {
    variables = {
      TABLE_NAME = aws_dynamodb_table.file_metadata.name
    }
  }
}
  

main.tf adds the GET endpoint to API Gateway

Add the getMetadata.mjs file under /src/getMetadata/ and zip it from inside that directory with $ zip getMetadata.zip getMetadata.mjs.

  
// This function retrieves metadata for files uploaded between a specified date range
import { DynamoDBClient, QueryCommand } from "@aws-sdk/client-dynamodb";
const dynamoDBClient = new DynamoDBClient({ region: 'us-east-1' });

export const lambdaHandler = async (event) => {
    try {
        // Extract query parameters from the event
        const startDate = event.queryStringParameters?.startDate; // e.g., '2023-03-20'
        const endDate = event.queryStringParameters?.endDate; // e.g., '2023-03-25'

        // Validate date format or implement appropriate error handling
        if (!startDate || !endDate) {
            return {
                statusCode: 400,
                body: JSON.stringify({ message: "Start date and end date must be provided" }),
            };
        }
          
        const params = {
            TableName: process.env.TABLE_NAME,
            IndexName: 'UploadDateIndex',
            KeyConditionExpression: 'SyntheticKey = :synKeyVal AND UploadDate BETWEEN :startDate AND :endDate',
            ExpressionAttributeValues: {
                ":synKeyVal": { S: "FileUpload" },
                ":startDate": { S: `${startDate}T00:00:00Z` },
                ":endDate": { S: `${endDate}T23:59:59Z` }
            }
        };

        const command = new QueryCommand(params);
        const response = await dynamoDBClient.send(command);

        return {
            statusCode: 200,
            body: JSON.stringify(response.Items),
        };
    } catch (err) {
        console.error(err);
        return {
            statusCode: 500,
            body: JSON.stringify({ message: "Error querying metadata", error: err.message }),
        };
    }
};
  

/src/getMetadata/getMetadata.mjs

Once again, run the command $ terraform plan to review the execution plan, and deploy with $ terraform apply.

By this point, you should be able to curl your new GET endpoint with a valid date range, and the Lambda function will return the file metadata from your DynamoDB table.

  
curl -X GET "https://{YOUR_API_ENDPOINT}.execute-api.us-east-1.amazonaws.com/prod/metadata?startDate=2024-04-01&endDate=2024-04-22"
  

Concluding Thoughts on Terraform for Serverless

Throughout this tutorial, we've navigated the complexities of using Terraform, a potent tool that extends beyond the AWS ecosystem, to manage serverless applications. Unlike AWS-specific tools like SAM or CDK, Terraform offers a provider-agnostic approach, enabling us to manage resources across multiple cloud environments from a single framework.

Key Takeaways

  • Flexibility and Control: Terraform's flexibility allows us to fine-tune our infrastructure and security settings. This granular control, while powerful, does come with the responsibility of managing more details, such as state files, explicit IAM roles, and bring-your-own code artifacts.
  • State Management: Unlike CloudFormation, Terraform’s state management is explicit and decentralized. This means you must handle state files carefully, especially in team environments, to avoid conflicts and ensure consistency. Utilizing services like Terraform Cloud or storing state files in secured S3 buckets can mitigate risks and enhance collaboration.
  • Comprehensive Infrastructure as Code: By incorporating everything from networking resources to serverless functions, Terraform can serve as the backbone of complex deployments. Its ability to integrate with multiple providers and services makes it an excellent choice for hybrid cloud strategies.

Moving Forward

To build on what we've learned, consider exploring these advanced Terraform topics:

  • Modules: Reusable Terraform modules can help encapsulate and standardize cloud patterns within your organization or the broader community (a small sketch follows this list).
  • Terraform Cloud: For teams, Terraform Cloud offers advanced features for collaboration, state locking, and automation that enhance the capabilities we've discussed.
  • Provider Ecosystem: Dive deeper into the vast ecosystem of Terraform providers to manage resources in Azure, Google Cloud, and beyond, reinforcing the multi-cloud capabilities that can keep your architecture flexible and future-proof.
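
As a taste of the Modules idea above, a consumer of a hypothetical local module might be as small as this (the module path and inputs are invented for illustration):

  
module "file_upload_service" {
  source      = "./modules/file-upload"   # hypothetical local module wrapping the resources from this post
  region      = var.region
  bucket_name = "file-upload-${var.region}-${local.account_id}"
}
  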

Final Reflections

As you advance in your use of Terraform, remember the balance between flexibility and complexity. Each layer of customization with Terraform increases your control but also the need for detailed management and oversight. Whether you’re automating a multi-cloud environment or a complex AWS architecture, Terraform provides the tools necessary to build robust, scalable, and secure infrastructures.

Stay curious, and keep building. And as always—Praise Cage!

References

Terraform Quick Start - https://developer.hashicorp.com/terraform/tutorials/aws-get-started
