Choose Your Character: Same Service, Different Tools Part 2

April 22, 2024

In the previous article, we used SAM to build and deploy a service that accepts a file upload to an S3 bucket and asynchronously writes the file’s metadata to DynamoDB, where we can query it. Now we’ll use another first-party AWS tool called CDK (Cloud Development Kit). CDK is a framework that allows developers to define and deploy infrastructure resources using familiar programming languages, such as TypeScript, Python, and Java, instead of writing CloudFormation templates by hand.

You can launch a GitHub Codespace from my repository; the AWS environment is created automatically with the CDK CLI, AWS SDK, and AWS CLI preinstalled. You can use the same administrator account created for the SAM walkthrough.

As a refresher, this is an example ~/.aws/config file:

  
[profile AdministratorAccess-{AWS_ACCOUNT_NUMBER}]
sso_session = default
sso_account_id = {AWS_ACCOUNT_NUMBER}
sso_role_name = AdministratorAccess
region = us-east-1
output = text
[sso-session default]
sso_start_url = https://{APP_ID_PORTAL}.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access
  

With this configured, you can use your SSO credentials to log in to the AWS CLI with $ aws sso login.

If you’re having trouble logging in, and your ~/.aws/config file is present, double-check that the AWS_PROFILE environment variable is also set in your terminal session with $ echo $AWS_PROFILE.

If it is missing, you can $ export AWS_PROFILE=AdministratorAccess-{AWS_ACCOUNT_NUMBER} to make it match the profile name in the config file.

Introducing CDK

To use CDK, you first have to bootstrap your AWS environment. Bootstrapping provisions the resources the CDK needs in order to deploy (an S3 bucket for staging assets and a set of IAM roles), and you only need to do it once per account and region. We can initialize an empty directory with a boilerplate CDK project and bootstrap the environment with the following commands:

  
$ mkdir serverless-upload-cdk && cd serverless-upload-cdk
$ cdk init app --language typescript # scaffold a project named after the directory
$ npm run build # build the app with TSC
$ cdk ls # list the existing stacks
$ cdk synth # synthesize the CloudFormation template
$ cdk bootstrap aws://{ACCOUNT-NUMBER}/{REGION} # replace with your acct number and region
  

You should see output like this:

  
@user ➜ /workspaces/serverless-file-upload/serverless-upload-cdk (main) $ cdk bootstrap aws://{AWS_ACCOUNT}/{REGION}
⏳  Bootstrapping environment aws://{AWS_ACCOUNT}/{REGION}...
Trusted accounts for deployment: (none)
Trusted accounts for lookup: (none)
Using default execution policy of 'arn:aws:iam::aws:policy/AdministratorAccess'. Pass '--cloudformation-execution-policies' to customize.
CDKToolkit: creating CloudFormation changeset...
✅  Environment aws://{AWS_ACCOUNT}/{REGION} bootstrapped.
  

CDK provides a unified abstraction for building all kinds of applications. Your CDK project is organized into “stacks”, which live in the /lib folder. Each stack is composed of “constructs” defining the AWS resources that make up your application. The entry point for your app lives in the /bin folder.
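
To make that vocabulary concrete, here’s a minimal, hypothetical stack: a stack is itself a construct, and every construct is instantiated with a scope (usually this), a logical ID, and a props object:

  
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Bucket } from 'aws-cdk-lib/aws-s3';

// A stack is a construct that maps to one CloudFormation stack
export class MinimalStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Every construct takes (scope, logical ID, props)
    new Bucket(this, 'ExampleBucket', { versioned: true });
  }
}
  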

A typical CDK project will have a folder structure that looks like this:

  
serverless-upload-cdk/
├── bin/
│   └── serverless-upload-cdk.ts  // your app's entry point
├── lib/
│   └── serverless-upload-cdk-stack.ts  // defines the stack(s) of your app
├── test/
│   └── serverless-upload-cdk.test.ts  // tests for your app
├── package.json  // Node.js manifest file for project metadata and dependencies
├── package-lock.json  // describes the exact tree generated in node_modules
├── tsconfig.json  // configuration options for your TypeScript project
└── cdk.json  // tells the CDK Toolkit how to execute your app
  

Now that we have the CDK project bootstrapped, we can start adding our code. We’ll start by writing out our CDK stack in the /lib directory with TypeScript. We have one ServerlessUploadCdkStack class that creates an S3 bucket, a Lambda function, and an API Gateway endpoint.

  
import { CfnOutput, RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Bucket } from 'aws-cdk-lib/aws-s3';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigatewayv2';
import * as apigatewayIntegrations from 'aws-cdk-lib/aws-apigatewayv2-integrations';
import * as path from 'path';

export class ServerlessUploadCdkStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Create an S3 bucket for storing uploaded files
    const uploadBucket = new Bucket(this, 'FileUploadBucket', {
      versioned: true,
      removalPolicy: RemovalPolicy.DESTROY,
      autoDeleteObjects: true
    });

    // Create a Lambda function for uploading files to S3
    const uploadFunction = new lambda.Function(this, 'uploadFunction', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'upload.lambdaHandler',
      code: lambda.Code.fromAsset(path.join(__dirname, '../lambda')),
      environment: {
        BUCKET_NAME: uploadBucket.bucketName,
      },
    });

    // Grant the Lambda function permission to put objects in the S3 bucket
    uploadBucket.grantPut(uploadFunction);

    // Create an HTTP API endpoint with API Gateway
    const api = new apigateway.HttpApi(this, 'FileUploadApi');

    // Add a POST route to the API that integrates with the Lambda function
    api.addRoutes({
      path: '/upload',
      methods: [apigateway.HttpMethod.POST],
      integration: new apigatewayIntegrations.HttpLambdaIntegration('LambdaIntegration', uploadFunction),
    });

    // Output the endpoint URL to the stack outputs
    new CfnOutput(this, 'EndpointUrl', {
      value: `${api.apiEndpoint}/upload`,
    });
  }
}
  

/lib/serverless-upload-cdk-stack.ts
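
One line worth pausing on: uploadBucket.grantPut(uploadFunction) does the IAM wiring for you by attaching a policy to the function’s execution role. Hand-written, the equivalent would look roughly like this sketch (the policy CDK actually generates may differ in detail):

  
import { PolicyStatement } from 'aws-cdk-lib/aws-iam';

// Roughly what grantPut synthesizes for you (sketch only)
uploadFunction.addToRolePolicy(new PolicyStatement({
  actions: ['s3:PutObject'],
  resources: [uploadBucket.arnForObjects('*')], // every object key in the bucket
}));
  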

Next, we can write the entry point for our app that utilizes this stack class in the /bin directory.

  
#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import { ServerlessUploadCdkStack } from '../lib/serverless-upload-cdk-stack';

const app = new cdk.App();
new ServerlessUploadCdkStack(app, 'ServerlessUploadCdkStack', {
});
  

/bin/serverless-upload-cdk.ts
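
An optional tweak while we’re here: as written, the stack is environment-agnostic. If you’d like to pin it to the account and region you bootstrapped (and enable context lookups later), you can pass env through the stack props using the CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION variables CDK resolves at synth time:

  
new ServerlessUploadCdkStack(app, 'ServerlessUploadCdkStack', {
  // Pins the stack to the environment resolved from your current AWS profile
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});
  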

Create a new folder /lambda where we can reuse the same upload function code from the SAM project.

  
// This is the Lambda function that will be triggered by the API Gateway POST request
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({region: 'us-east-1'});

export const lambdaHandler = async (event) => {
    try {
        console.log(event)
        const body = JSON.parse(event.body);
        const decodedFile = Buffer.from(body.file, 'base64');
        const input = {
            "Body": decodedFile,
            "Bucket": process.env.BUCKET_NAME,
            "Key": body.filename,
            "ContentType": body.contentType
          };
        const command = new PutObjectCommand(input);
        const uploadResult = await s3.send(command);
        return {
            statusCode: 200,
            body: JSON.stringify({ message: "Praise Cage!", uploadResult }),
        };
    } catch (err) {
        console.error(err);
        return {
            statusCode: 500,
            body: JSON.stringify({ message: "Error uploading file", error: err.message }),
        };
    }
};
  

/lambda/upload.mjs
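
A quick note on bundling: lambda.Code.fromAsset ships the /lambda folder as-is, which works here because the AWS SDK v3 is included in the Node.js 20 runtime. If the handler ever grows third-party dependencies, one option is the NodejsFunction construct, which bundles the entry file with esbuild. A sketch of the swap, inside the stack constructor:

  
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';

// Drop-in alternative to lambda.Function that esbuild-bundles the handler
const uploadFunction = new NodejsFunction(this, 'uploadFunction', {
  runtime: lambda.Runtime.NODEJS_20_X,
  entry: path.join(__dirname, '../lambda/upload.mjs'),
  handler: 'lambdaHandler',
  environment: { BUCKET_NAME: uploadBucket.bucketName },
});
  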

Anytime we modify the CDK code, we’ll need to run $ npm run build to transpile the TypeScript source. If you want to preview the generated CloudFormation template, run $ cdk synth.

Go ahead and run $ cdk deploy to upload and deploy the project. Then check the AWS Console to verify that the stack deployed, and take note of your API Gateway endpoint in the Outputs section.

Take note of the endpoint address from the CloudFormation outputs

Now we can issue the same curl command from the SAM project to upload a Base64-encoded text file. I made a text file with the contents: Praise Cage! Hallowed by thy name. We can encode the file to Base64 from the command line with $ base64 text.txt > text.txt.base64; using my example, we get an output of UHJhaXNlIENhZ2UhIEhhbGxvd2VkIGJ5IHRoeSBuYW1lLg==, which we use to build a curl command to test our endpoint. Be sure to replace the endpoint address with your own.

  
curl -X POST https://wy338hkp59.execute-api.us-east-1.amazonaws.com/upload \
     -H "Content-Type: application/json" \
     -d '{
    "filename": "example.txt",
    "file": "UHJhaXNlIENhZ2UhIEhhbGxvd2VkIGJ5IHRoeSBuYW1lLg==",
    "contentType": "text/plain"
}'
  

If all goes well, you should receive a success message and see a new file in your S3 Bucket.

  
{
  "message": "Praise Cage!",
  "uploadResult": {
    "$metadata": {
      "httpStatusCode": 200,
      "requestId": "94FCEJX4GA833HHW",
      "extendedRequestId": "xj2NBJKm7+XCxEQGu2DG8RXZpb76HiFpV0LPlDan+aZroonWQ0FOJOzsU6I7J0+i3wqqOqd+8xk=",
      "attempts": 1,
      "totalRetryDelay": 0
    },
    "ETag": "\"e8699570a76ba129bfdcca7607113593\"",
    "ServerSideEncryption": "AES256",
    "VersionId": "30855XD1__4Q.C74hxftMzA00xN8sANK"
  }
}
  
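
If you’d rather script the upload than hand-assemble curl, here’s a minimal Node 18+ sketch (ESM, so top-level await works) that Base64-encodes a local file and POSTs it; the endpoint constant is a placeholder for your own URL:

  
import { readFile } from 'node:fs/promises';

// Replace with the EndpointUrl from your stack's CloudFormation outputs
const ENDPOINT = 'https://{API_ID}.execute-api.us-east-1.amazonaws.com/upload';

const file = await readFile('text.txt');

const res = await fetch(ENDPOINT, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    filename: 'example.txt',
    file: file.toString('base64'), // same Base64 payload as the curl example
    contentType: 'text/plain',
  }),
});

console.log(res.status, await res.json());
  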

Sprinkling in Some Asynchronous Event Processing

Just like we did in the SAM project, we’ll now use CDK to provision a DynamoDB table and add an S3 event trigger to a new Lambda function that writes the metadata to DynamoDB. All of the action happens in the ServerlessUploadCdkStack class. We’ll add new imports for interacting with DynamoDB and enabling S3 notifications, and use TypeScript to define the new resources. We’re using the same tactic from the SAM project to asynchronously process the metadata whenever a new file lands in the S3 bucket. As a refresher, notice that we’re using a Global Secondary Index with a synthetic partition key, which lets us query the metadata table by UploadDate range without knowing a row’s unique primary key.

  
// new imports (note EventType joins the existing aws-s3 import)
import { Bucket, EventType } from 'aws-cdk-lib/aws-s3';
import { Table, AttributeType, BillingMode, ProjectionType } from 'aws-cdk-lib/aws-dynamodb';
import * as s3n from 'aws-cdk-lib/aws-s3-notifications';

export class ServerlessUploadCdkStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // ...previous constructs (uploadBucket, uploadFunction, api) stay the same

    // Create a DynamoDB table
    const fileMetadataTable = new Table(this, 'FileMetadataTable', {
      tableName: 'FileMetadata',
      billingMode: BillingMode.PAY_PER_REQUEST,
      partitionKey: { name: 'FileId', type: AttributeType.STRING },
      sortKey: { name: 'UploadDate', type: AttributeType.STRING },
      removalPolicy: RemovalPolicy.DESTROY, // adjust this as needed
    });

    // Add a global secondary index to the table
    fileMetadataTable.addGlobalSecondaryIndex({
      indexName: 'UploadDateIndex',
      partitionKey: { name: 'SyntheticKey', type: AttributeType.STRING },
      sortKey: { name: 'UploadDate', type: AttributeType.STRING },
      projectionType: ProjectionType.ALL,
    });

    // Create a Lambda function to handle S3 events
    const writeMetadataFunction = new lambda.Function(this, 'writeMetadataFunction', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'writeMetadata.lambdaHandler',
      code: lambda.Code.fromAsset(path.join(__dirname, '../lambda')),
      environment: {
        TABLE_NAME: fileMetadataTable.tableName,
      },
    });

    // Grant the Lambda function permissions to write to the DynamoDB table
    fileMetadataTable.grantWriteData(writeMetadataFunction);

    // Set up the S3 event notification to trigger the Lambda function
    uploadBucket.addEventNotification(EventType.OBJECT_CREATED, new s3n.LambdaDestination(writeMetadataFunction));
  }
}
  

The writeMetadataFunction code is the same as before. By this point, you should see how handy it is that we can reuse the same Lambda function code even when it’s managed by different tools.

  
// This function writes the metadata of the uploaded file to the DynamoDB table.
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";
const dynamoDBClient = new DynamoDBClient({ region: 'us-east-1' });

export const lambdaHandler = async (event) => {
    try {
        const record = event.Records[0].s3;
        const key = decodeURIComponent(record.object.key.replace(/\+/g, " "));
        const size = record.object.size;
        const uploadDate = new Date().toISOString();  // Assuming the current date as upload date
        const dbParams = {
            TableName: process.env.TABLE_NAME,
            Item: {
                FileId: { S: key },
                UploadDate: { S: uploadDate },
                FileSize: { N: size.toString() },
                SyntheticKey: { S: 'FileUpload'}
            }
        };

        await dynamoDBClient.send(new PutItemCommand(dbParams));

        return { statusCode: 200, body: 'Metadata saved successfully.' };
    } catch (err) {
        console.error(err);
        return { statusCode: 500, body: `Error: ${err.message}` };
    }
};
  

/lambda/writeMetadata.mjs
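
For reference, the handler only reads Records[0].s3.object from the S3 notification payload; a stripped-down test event looks roughly like this (real events carry many more fields):

  
// Minimal shape of the ObjectCreated event that writeMetadata consumes
const testEvent = {
  Records: [
    {
      s3: {
        bucket: { name: 'my-upload-bucket' }, // hypothetical; unused by the handler
        object: { key: 'example2.txt', size: 48 },
      },
    },
  ],
};

// e.g. await lambdaHandler(testEvent) locally, with TABLE_NAME and credentials set
  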

You can test out the writeMetadataFunction by issuing a new curl command with a different file name.

  
curl -X POST https://wy338hkp59.execute-api.us-east-1.amazonaws.com/upload \
     -H "Content-Type: application/json" \
     -d '{
    "filename": "example2.txt",
    "file": "UHJhaXNlIENhZ2UhIEhhbGxvd2VkIGJ5IHRoeSBuYW1lLg==",
    "contentType": "text/plain"
}'
  

Inspect the table items to make sure your metadata landed correctly.

The FileMetadata table in the DynamoDB console with the metadata landed correctly
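
If you’d rather check from a script than the console, a quick unfiltered scan with the SDK does the job; a sketch assuming the FileMetadata table name and region from the stack (fine for a few test items, not a production access pattern):

  
import { DynamoDBClient, ScanCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({ region: 'us-east-1' });

// Reads every item in the table; only sensible for small test datasets
const { Items } = await client.send(new ScanCommand({ TableName: 'FileMetadata' }));
console.log(JSON.stringify(Items, null, 2));
  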

FileMetadata, I Choose You!

As a challenge, try adding a new Lambda function to query the FileMetadata table with read permissions, along with a GET /metadata endpoint to API Gateway. Go ahead, I’ll wait…

I bet you did great. All we have to do is add a few more constructs to our CDK stack class.

  
// Create a Lambda function to query the file metadata by date range
const getMetadataFunction = new lambda.Function(this, "getMetadataFunction", {
  runtime: lambda.Runtime.NODEJS_20_X,
  handler: "getMetadata.lambdaHandler",
  code: lambda.Code.fromAsset(path.join(__dirname, "../lambda")),
  environment: {
    TABLE_NAME: fileMetadataTable.tableName,
  },
});

// Grant the Lambda function permissions to read from the DynamoDB table
fileMetadataTable.grantReadData(getMetadataFunction);

// Add a GET route to query the file metadata by date range
api.addRoutes({
  path: "/metadata",
  methods: [apigateway.HttpMethod.GET],
  integration: new apigatewayIntegrations.HttpLambdaIntegration(
    "GetMetadataIntegration", // unique ID so it doesn't collide with the POST route's integration
    getMetadataFunction
  ),
});
  

New constructs appended to the existing /lib/serverless-upload-cdk-stack.ts class file

We’ll be able to reuse the getMetadata function from the SAM project as well.

  
// This function retrieves metadata for files uploaded between a specified date range
import { DynamoDBClient, QueryCommand } from "@aws-sdk/client-dynamodb";
const dynamoDBClient = new DynamoDBClient({ region: 'us-east-1' });

export const lambdaHandler = async (event) => {
    try {
        // Extract query parameters from the event
        const startDate = event.queryStringParameters?.startDate; // e.g., '2023-03-20'
        const endDate = event.queryStringParameters?.endDate; // e.g., '2023-03-25'

        // Validate date format or implement appropriate error handling
        if (!startDate || !endDate) {
            return {
                statusCode: 400,
                body: JSON.stringify({ message: "Start date and end date must be provided" }),
            };
        }
          
        const params = {
            TableName: process.env.TABLE_NAME,
            IndexName: 'UploadDateIndex',
            KeyConditionExpression: 'SyntheticKey = :synKeyVal AND UploadDate BETWEEN :startDate AND :endDate',
            ExpressionAttributeValues: {
                ":synKeyVal": { S: "FileUpload" },
                ":startDate": { S: `${startDate}T00:00:00Z` },
                ":endDate": { S: `${endDate}T23:59:59Z` }
            }
        };

        const command = new QueryCommand(params);
        const response = await dynamoDBClient.send(command);

        return {
            statusCode: 200,
            body: JSON.stringify(response.Items),
        };
    } catch (err) {
        console.error(err);
        return {
            statusCode: 500,
            body: JSON.stringify({ message: "Error querying metadata", error: err.message }),
        };
    }
};
  

/lambda/getMetadata.mjs
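
One quirk worth noting: because we’re on the low-level DynamoDBClient, Items come back in DynamoDB’s attribute-value shape ({ S: ... }, { N: ... }). If you’d prefer plain objects in the response, @aws-sdk/util-dynamodb ships an unmarshall helper; a sketch of the swap inside the handler:

  
import { unmarshall } from '@aws-sdk/util-dynamodb';

// ...after the query, convert attribute-value items to plain objects
const items = (response.Items ?? []).map((item) => unmarshall(item));

return {
    statusCode: 200,
    body: JSON.stringify(items), // plain { FileId, UploadDate, FileSize, SyntheticKey } objects
};
  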

Now we can issue a curl command to query the table. Be sure to replace the API Gateway ID with your own and use a valid date range for your data.

  
curl -X GET "https://wy338hkp59.execute-api.us-east-1.amazonaws.com/metadata?startDate=2024-04-01&endDate=2024-04-03"
  

The Same But Different

Congratulations! You now have the same service deployed with SAM and CDK! We've seen firsthand how each tool operates and integrates with AWS services.

Both AWS CDK and AWS SAM are designed to make the deployment and management of AWS resources more efficient and less error-prone. They embrace the infrastructure as code (IaC) philosophy, ensuring that resource provisioning is repeatable and manageable. The use of the same Lambda function code in both projects underscores their common goal of simplifying serverless application deployment.

Distinct Paths

While they share a common purpose, CDK and SAM differ in their approach and capabilities:

  • CDK offers a high-level abstraction using familiar programming languages, enabling developers to construct reusable and modular cloud infrastructure components. This abstraction can greatly simplify the definition of complex cloud architectures but introduces additional complexity when debugging.
  • SAM, on the other hand, provides a more focused and streamlined approach to defining serverless applications, with a simpler, declarative syntax. However, it's more specialized and less flexible for non-serverless components.

Pros & Cons Recap

  • CDK’s Pros: The use of familiar programming languages, higher-level abstractions, and the ability to create reusable components make CDK powerful and flexible.
  • CDK’s Cons: The learning curve and potential complexity in managing the abstraction layers can be challenging.
  • SAM’s Pros: Its simplicity, serverless-focused design, and integration with CloudFormation make SAM straightforward and efficient for serverless applications.
  • SAM’s Cons: Limited scope to serverless and less programming language flexibility can be restrictive for broader applications.

What’s The Best Tool?

Choosing between CDK and SAM is like selecting the right Nicolas Cage character for a mission: it depends on the context and needs of the project. If your project requires complex, multi-faceted cloud architectures that can benefit from modular, reusable components and you prefer using traditional programming languages, CDK is the way to go. If, however, you need a streamlined, serverless-focused tool with a straightforward declarative syntax, SAM will serve you better.

Ultimately, both tools are powerful allies in the AWS ecosystem, designed to empower developers to build and deploy applications more efficiently. The decision between CDK and SAM should be guided by the specific requirements of your project, and the desired workflow and development experience. We don’t have to stop here; in future articles, we’ll keep exploring more tools to deploy this same service.

References

Official CDK documentation: https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html

AWS IAM Identity Center: https://docs.aws.amazon.com/sdkref/latest/guide/access-sso.html
