Private Serverless REST API with API Gateway: Part 1

March 21, 2024

Introduction

Internet-facing APIs offer a convenient way to expose functionality, but they are inherently exposed to the threats of the public internet. In today’s interconnected world, security and compliance are paramount concerns for businesses across many sectors. Restricting access to resources by placing them within a VPC with no internet access can help meet privacy regulations such as HIPAA and PCI DSS: keeping APIs private ensures that sensitive data remains within a controlled environment, mitigating the risk of unauthorized access and breaches.

In this tutorial, we will look at a case study on how a Serverless REST API can be built with Amazon API Gateway (APIGW) and accessed privately by clients in an Amazon VPC.

Prerequisites

To proceed with this tutorial, make sure you have the following software installed:

  • An AWS account, with the AWS CLI installed and credentials configured
  • Node.js and npm
  • Homebrew (optional; only used on macOS)
  • Terraform

To verify that all of the prerequisites are installed, you can run the following commands:

  
# check if the correct AWS credentials are set up
aws sts get-caller-identity

# check if you have NodeJS installed
node -v

# check if NPM is installed
npm -v

# check if you have Homebrew installed (I'm using Mac)
brew -v

# check if you have Terraform installed
terraform -v
  

Architecture

Part 1

For the first part, we will focus on building the private API in a VPC.

Picture showing a private API in a VPC

Part 2

In the next part, we will show how clients in another VPC can privately access our private API via an Amazon VPC peering connection and Route53 resolver endpoints.

Picture showing steps an EC2 instance client accessing a private API in another VPC via VPC peering

Setting up Terraform

In this step, we will create our project directory and initialize Terraform.

  
mkdir tf-private-apigw && cd tf-private-apigw
  

The next thing to do after creating the project directory is to set up the necessary files and configurations required to initialize Terraform. Create the following files:

  
touch variables.tf provider.tf terraform.tfvars locals.tf
  

Next, we define some data sources and local values in the 'locals.tf' file:

  
data "aws_availability_zones" "available" {}

data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners     = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm*"]
  }
}
locals {
  ddb_table_name  = "claimsTable"
  env             = "dev"
  az1             = data.aws_availability_zones.available.names[0]
  az2             = data.aws_availability_zones.available.names[1]
  ami             = data.aws_ami.amazon_linux_2.id
}
  

Next, we define more variables in the 'variables.tf' file:

  
variable "region" {
  type    = string
  default = "us-east-1"
}
variable "account_id" {
  # AWS account IDs can contain leading zeros, so a string is safer than a number.
  type = string
}

variable "tag_environment" {
  type    = string
  default = "dev"
}

variable "tag_project" {
  type    = string
  default = "my-tf-project"
}
  

Next, we pass some values for our Terraform variables in the 'terraform.tfvars':

  
region               = "eu-central-1"
account_id           = "<your-account-id>"
tag_environment      = "dev"
tag_project          = "tf-private-apigw"
  

Before initializing Terraform, we need to define the provider, which in our case is AWS, as well as a default region and the AWS credentials to use when deploying to AWS.

Copy and paste the following code in the 'provider.tf':

  
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region                   = var.region
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = "default"

  default_tags {
    tags = {
      Environment = var.tag_environment
      Project     = var.tag_project
    }
  }
}
  
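
As written, this accepts any published AWS provider version. For reproducible runs, the 'terraform' block above could optionally be extended with version constraints; the values below are illustrative, not prescriptive:

  
terraform {
  required_version = ">= 1.5.0" # illustrative minimum Terraform version

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # illustrative provider constraint
    }
  }
}
  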

Run 'terraform init' to initialize the project. A successful initialization should produce output similar to the following:

Figure 1 - the expected output after running terraform init command

So far our folder structure looks like this:

  
|-locals.tf
|-provider.tf
|-terraform.tfvars
|-variables.tf
  

Building the Infrastructure for the API

Building the API’s VPC

We will create a file called 'api-vpc.tf' where we will define all the networking configurations necessary for our private API.

  
➜  tf-private-apigw git:(main) touch api-vpc.tf
  

Creating a VPC with DNS Support

We define the VPC CIDR and, most importantly, enable DNS hostnames and DNS support, which respectively allow resources within the VPC to be automatically assigned DNS names and enable the Amazon DNS server for name resolution.

  
resource "aws_vpc" "api_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    Name = "api-vpc"
  }

}
  

Create two private subnets for high availability:

  
resource "aws_subnet" "private_sn_az1" {
  vpc_id                  = aws_vpc.api_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = local.az1
  map_public_ip_on_launch = false
  tags = {
    Name = "private-sn-az1"
  }
}
resource "aws_subnet" "private_sn_az2" {
  vpc_id                  = aws_vpc.api_vpc.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = local.az2
  map_public_ip_on_launch = false
  tags = {
    Name = "private-sn-az2"
  }
}
  

Create private route tables for the private subnets:

  
resource "aws_route_table" "private_rt_az1" {
  vpc_id = aws_vpc.api_vpc.id

  tags = {
    Name = "private_rt_az1"
  }
}
resource "aws_route_table" "private_rt_az2" {
  vpc_id = aws_vpc.api_vpc.id

  tags = {
    Name = "private_rt_az2"
  }
}
  

Associate the private route tables with the private subnets:

  
resource "aws_route_table_association" "private_rta1_az1" {
  subnet_id      = aws_subnet.private_sn_az1.id
  route_table_id = aws_route_table.private_rt_az1.id
}
resource "aws_route_table_association" "private_rta_az2" {
  subnet_id      = aws_subnet.private_sn_az2.id
  route_table_id = aws_route_table.private_rt_az2.id
}
  

Next, we check the resources that would be created/modified by our Terraform scripts. To do that, we run the following command:

  
tf-private-apigw git:(main) terraform plan
  

The 'terraform plan' command validates our Terraform code and shows us a list of the resources that would be created, modified, or destroyed.

Deploy the changes to AWS using 'terraform apply -auto-approve'.

Our folder structure now looks like this:

  
|-locals.tf
|-provider.tf
|-terraform.tfvars
|-variables.tf
|-api-vpc.tf
  

Creating the DynamoDB table and Gateway Endpoint

Create a file called 'ddb.tf':

  
tf-private-apigw git:(main) ✗ touch ddb.tf
  

Create a DynamoDB table

  
resource "aws_dynamodb_table" "claims_table" {
  name           = local.ddb_table_name
  billing_mode   = "PAY_PER_REQUEST"
  hash_key       = "PK"
  range_key      = "SK"
  attribute {
    name = "PK"
    type = "S"
  }

  attribute {
    name = "SK"
    type = "S"
  }
  
  tags = {
    Name = "Healthcare Insurance Claims Table"
  }
}

  
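To make the single-table key design concrete, here is a hypothetical seed item expressed in Terraform. The attribute values are illustrative and not part of the tutorial:

  
resource "aws_dynamodb_table_item" "sample_claim" {
  table_name = aws_dynamodb_table.claims_table.name
  hash_key   = aws_dynamodb_table.claims_table.hash_key
  range_key  = aws_dynamodb_table.claims_table.range_key

  # One claim, keyed by member: PK groups a member's items, SK identifies the claim.
  item = jsonencode({
    PK          = { S = "MEMBER#123" }
    SK          = { S = "CLAIM#example-claim-id" }
    policyId    = { S = "123" }
    memberName  = { S = "JohnDoe" }
    claimAmount = { N = "500" }
    status      = { S = "pending" }
  })
}
  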

Create the DynamoDB Gateway Endpoint in the 'api-vpc.tf' file:

  
resource "aws_vpc_endpoint" "ddb_ep" {
  service_name      = "com.amazonaws.${var.region}.dynamodb"
  vpc_endpoint_type = "Gateway"
  vpc_id            = aws_vpc.api_vpc.id
  route_table_ids   = [aws_route_table.private_rt_az1.id, aws_route_table.private_rt_az2.id]

  tags = {
    Name = "dynamodb-gateway-endpoint"
  }
}
  
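Gateway Endpoints also accept endpoint policies. As an optional hardening step (not part of the original walkthrough), the endpoint could be restricted to the actions and table our Lambda functions actually use; a sketch:

  
resource "aws_vpc_endpoint_policy" "ddb_ep_policy" {
  vpc_endpoint_id = aws_vpc_endpoint.ddb_ep.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = "*"
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:UpdateItem",
          "dynamodb:DeleteItem",
          "dynamodb:Query"
        ]
        # Only the claims table (and any future indexes on it).
        Resource = [
          aws_dynamodb_table.claims_table.arn,
          "${aws_dynamodb_table.claims_table.arn}/index/*"
        ]
      }
    ]
  })
}
  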

Our folder structure now looks like this:

  
|-locals.tf
|-provider.tf
|-terraform.tfvars
|-variables.tf
|-api-vpc.tf
|-ddb.tf
  

Building the Private Lambda Functions

Creating Lambda Backend Functions

First, we will create a 'src' directory with subdirectories for our Lambda functions:

  
tf-private-apigw git:(main) ✗ mkdir src && cd src
src git:(main) ✗ mkdir handlers archives && cd handlers
handlers git:(main) ✗ touch create.mjs
  

Create the DynamoDB Document Client in 'src/handlers/libs/ddbDocClient.mjs', which we will use when reading from and writing to 'claimsTable'.

  
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient } from "@aws-sdk/lib-dynamodb";

// Lambda sets AWS_REGION automatically; fall back to eu-central-1 for local runs.
const client = new DynamoDBClient({ region: process.env.AWS_REGION || "eu-central-1" });
export const ddbDocClient = DynamoDBDocumentClient.from(client);
  

Here is the code for the 'createClaim' Lambda function:

  
'use strict'

import { PutCommand } from "@aws-sdk/lib-dynamodb";
import { ddbDocClient } from "./libs/ddbDocClient.mjs";
import { randomUUID } from "crypto";

const tableName = process.env.DYNAMODB_TABLE_NAME

export const handler = async (event) => {
    console.log("Event===", JSON.stringify(event, null, 2))
    if (event.httpMethod !== "POST") {
        throw new Error(`Expecting POST method, received ${event.httpMethod}`);
    }

    // Query string parameters are undefined (not null) when absent,
    // so check for missing values with optional chaining.
    if (!event.queryStringParameters?.memberId) {
        throw new Error(`memberId missing`);
    } else if (!event.queryStringParameters?.policyId) {
        throw new Error(`policyId missing`);
    } else if (!event.queryStringParameters?.memberName) {
        throw new Error(`memberName missing`);
    }

    const { memberId, policyId, memberName } = event.queryStringParameters

    const parsedBody = JSON.parse(event.body || "{}")
    const now = new Date().toISOString()
    const claimId = randomUUID()

    const params = {
        TableName: tableName,
        Item: {
            PK: `MEMBER#${memberId}`,
            SK: `CLAIM#${claimId}`,
            ...parsedBody,
            policyId,
            memberName,
            createdAt: now,
            updatedAt: now,
        }
    }

    let response;
    const command = new PutCommand(params)
    try {
        const data = await ddbDocClient.send(command)
        console.log("Success, claim created", data)
        response = {
            statusCode: 201,
            // API Gateway proxy integrations require a string body.
            body: JSON.stringify(params.Item),
        }
    } catch (err) {
        console.log("Error", err)
        response = {
            statusCode: err.statusCode || 500,
            body: JSON.stringify({ err })
        }
    }
    console.log("response===", response)
    return response
}
  

The other Lambda functions ('get.mjs', 'update.mjs', and 'delete.mjs') are set up in a similar manner. We won’t go into their code here, as it is not the principal focus of this tutorial, but you can refer to this GitHub repository for the complete code.

Creating the Lambda infrastructure

Create a file called 'lambda.tf' :

  
tf-private-apigw git:(main) ✗ touch lambda.tf
  


Lambda execution role

First, we will create an execution role for our Lambda functions and attach two AWS managed IAM policies, 'AWSLambdaVPCAccessExecutionRole' and 'AmazonDynamoDBFullAccess', to grant our functions sufficient permissions to manage VPC networking, push logs to CloudWatch, and interact with our DynamoDB table:

  
resource "aws_iam_role" "lambda_exec_role" {
  name = "lambda-exec-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Sid    = ""
      Principal = {
        Service = "lambda.amazonaws.com"
      }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_vpc_execution" {
  role       = aws_iam_role.lambda_exec_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
resource "aws_iam_role_policy_attachment" "ddb_full_access" {
  role       = aws_iam_role.lambda_exec_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}
  
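'AmazonDynamoDBFullAccess' is broader than these functions strictly need. If you prefer least privilege, a scoped inline policy along these lines (a sketch, not what the tutorial uses) could replace the 'ddb_full_access' attachment:

  
resource "aws_iam_role_policy" "ddb_claims_rw" {
  name = "ddb-claims-rw"
  role = aws_iam_role.lambda_exec_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        # Only the operations the CRUD handlers perform, only on the claims table.
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:UpdateItem",
          "dynamodb:DeleteItem",
          "dynamodb:Query"
        ]
        Resource = aws_dynamodb_table.claims_table.arn
      }
    ]
  })
}
  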

We will go over just a single private Lambda function for demonstrative purposes. VPC-attached Lambda functions create Elastic Network Interfaces (ENIs) in the VPCs they need access to, which means we need to attach a security group. We will create a security group called 'private_lambda_sg' that allows only HTTPS traffic on port 443.

Security Groups

Create a file called 'security-groups.tf', which will hold our first security group, for the private Lambda Elastic Network Interfaces (ENIs):

  
resource "aws_security_group" "private_lambda_sg" {
  name        = "private-lambda-sg"
  description = "Security group for private lambdas"
  vpc_id      = aws_vpc.api_vpc.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [aws_subnet.private_sn_az1.cidr_block, aws_subnet.private_sn_az2.cidr_block]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
 
  lifecycle {
    create_before_destroy = true
  }
}
  

Adding variables for Lambda

Let’s add some variables that we will use for our Lambda functions starting first with the 'variables.tf' file:

  
variable "lambda_runtime" {
  type    = string
  default = "nodejs20.x"
}
variable "lambda_timeout" {
  type    = number
  default = 30
}
variable "claim_function_name" {
  type    = string
  default = "claimFunction"
}
variable "create_function_name" {
  type    = string
  default = "createClaim"
}
variable "get_function_name" {
  type    = string
  default = "getClaim"
}
variable "update_function_name" {
  type    = string
  default = "updateClaim"
}
variable "delete_function_name" {
  type    = string
  default = "deleteClaim"
}
  

Then we will pass in the values in our 'terraform.tfvars' file:

  
lambda_runtime       = "nodejs20.x"
lambda_timeout       = 30
create_function_name = "createClaim"
get_function_name    = "getClaim"
update_function_name = "updateClaim"
delete_function_name = "deleteClaim"
  

'createClaim' Lambda function infrastructure

Here is the code to create the Lambda infrastructure for the 'createClaim' Lambda function:

  
data "archive_file" "create_handler_zip" {
  type        = "zip"
  source_dir  = "${path.module}/src/handlers/"
  output_path = "${path.module}/src/archives/create.zip"
}

resource "aws_lambda_function" "createClaim" {
  filename      = data.archive_file.create_handler_zip.output_path
  function_name = var.create_function_name
  handler       = "create.handler"
  role          = aws_iam_role.lambda_exec_role.arn
  timeout       = var.lambda_timeout
  runtime       = var.lambda_runtime
  source_code_hash = data.archive_file.create_handler_zip.output_base64sha256

  vpc_config {
    subnet_ids         = [aws_subnet.private_sn_az1.id, aws_subnet.private_sn_az2.id]
    security_group_ids = [aws_security_group.private_lambda_sg.id]
  }
  logging_config {
    log_format = "Text"
  }

  environment {
    variables = {
      DYNAMODB_TABLE_NAME = local.ddb_table_name
    }
  }
}
  

The key things to note about the above configuration are:

  • We must attach the 'lambda_exec_role' role’s ARN so our function can push logs to its CloudWatch log group, create ENIs in the listed VPC subnets, and interact with DynamoDB.
  • We must use the 'vpc_config' block to specify the subnets and security groups the Lambda function uses.
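
Since the remaining three functions repeat this configuration almost verbatim, one hypothetical way to avoid the repetition (the tutorial and the companion repo define each function individually) is 'for_each' over a map of handlers. The map below is illustrative; references then become e.g. 'aws_lambda_function.claim["createClaim"]':

  
locals {
  # Function name => handler file (create.mjs, get.mjs, update.mjs, delete.mjs).
  claim_handlers = {
    createClaim = "create"
    getClaim    = "get"
    updateClaim = "update"
    deleteClaim = "delete"
  }
}

resource "aws_lambda_function" "claim" {
  for_each = local.claim_handlers

  # The zip contains every handler, so all functions can share one archive.
  filename         = data.archive_file.create_handler_zip.output_path
  function_name    = each.key
  handler          = "${each.value}.handler"
  role             = aws_iam_role.lambda_exec_role.arn
  timeout          = var.lambda_timeout
  runtime          = var.lambda_runtime
  source_code_hash = data.archive_file.create_handler_zip.output_base64sha256

  vpc_config {
    subnet_ids         = [aws_subnet.private_sn_az1.id, aws_subnet.private_sn_az2.id]
    security_group_ids = [aws_security_group.private_lambda_sg.id]
  }

  environment {
    variables = {
      DYNAMODB_TABLE_NAME = local.ddb_table_name
    }
  }
}
  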

Lambda Logs

Next, we create the CloudWatch log group that the function will send its logs to for monitoring:

  
resource "aws_cloudwatch_log_group" "createClaim" {
  name              = "/aws/lambda/${aws_lambda_function.createClaim.function_name}"
  retention_in_days = 14
}
  

API Gateway permissions

The final step is creating the permissions that will allow API Gateway to invoke our Lambda function:

  
resource "aws_lambda_permission" "apigw_create_permission" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.createClaim.function_name
  principal     = "apigateway.amazonaws.com"

  source_arn    = "${aws_api_gateway_rest_api.this.execution_arn}/*"
}
  
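
The 'source_arn' above allows any stage, method, and path of this API to invoke the function. Since the execution ARN follows the 'api-id/stage/http-method/resource-path' format, a hypothetically narrower permission for 'createClaim' could be scoped to the POST /claim route:

  
resource "aws_lambda_permission" "apigw_create_permission_scoped" {
  statement_id  = "AllowPostClaimFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.createClaim.function_name
  principal     = "apigateway.amazonaws.com"

  # Any stage, but only the POST method on the /claim resource.
  source_arn    = "${aws_api_gateway_rest_api.this.execution_arn}/*/POST/claim"
}
  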

Once more, the above steps are largely the same for the other functions, which you can find in this GitHub repository.

Our folder structure now looks like this:

  
|- src/
	|- archives/
	|- handlers/
		|- libs/
			|- ddbDocClient.mjs
		|- create.mjs
		|- get.mjs
		|- update.mjs
		|- delete.mjs
|- locals.tf
|- provider.tf
|- terraform.tfvars
|- variables.tf
|- api-vpc.tf
|- ddb.tf
|- lambda.tf
|- security-groups.tf
  

Building the Private API

Creating the API

Create a file called 'apigw.tf':

  
tf-private-apigw git:(main) ✗ touch apigw.tf
  

Here we create our API and specify the endpoint type as 'PRIVATE'. This is the key part that makes the API private:

  
resource "aws_api_gateway_rest_api" "this" {
  name        = "claims-api"
  description = "Private API for claims service"
  endpoint_configuration {
    types = ["PRIVATE"]
  }
}
  

Create an APIGW resource called 'claim':

  
resource "aws_api_gateway_resource" "claim" {
  rest_api_id = aws_api_gateway_rest_api.this.id
  parent_id   = aws_api_gateway_rest_api.this.root_resource_id
  path_part   = "claim"
}
  

Create another APIGW resource for interacting with single claim resources by id:

  
resource "aws_api_gateway_resource" "claim_id" {
  rest_api_id = aws_api_gateway_rest_api.this.id
  parent_id   = aws_api_gateway_resource.claim.id
  path_part   = "{id}"
}
  

Create a 'dev' stage for the API (it references the deployment resource we will define at the end of this file):

  
resource "aws_api_gateway_stage" "dev" {
  deployment_id = aws_api_gateway_deployment.this.id
  rest_api_id   = aws_api_gateway_rest_api.this.id
  stage_name    = "dev"  
}
  
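Optionally, stage-wide throttling and metrics can be tuned with 'aws_api_gateway_method_settings'. This is not required for the tutorial, and the limits below are illustrative:

  
resource "aws_api_gateway_method_settings" "all" {
  rest_api_id = aws_api_gateway_rest_api.this.id
  stage_name  = aws_api_gateway_stage.dev.stage_name
  method_path = "*/*" # apply to every method on the stage

  settings {
    metrics_enabled        = true
    throttling_rate_limit  = 100
    throttling_burst_limit = 50
  }
}
  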

Create the 'GET', 'POST', 'PUT', and 'DELETE' HTTP methods for the 'claim' resource:

  
resource "aws_api_gateway_method" "post_claim" {
  rest_api_id   = aws_api_gateway_rest_api.this.id
  resource_id   = aws_api_gateway_resource.claim.id
  http_method   = "POST"
  authorization = "NONE"
}
resource "aws_api_gateway_method" "get_claim" {
  rest_api_id   = aws_api_gateway_rest_api.this.id
  resource_id   = aws_api_gateway_resource.claim_id.id
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_method" "put_claim" {
  rest_api_id   = aws_api_gateway_rest_api.this.id
  resource_id   = aws_api_gateway_resource.claim_id.id
  http_method   = "PUT"
  authorization = "NONE"
}
resource "aws_api_gateway_method" "delete_claim" {
  rest_api_id   = aws_api_gateway_rest_api.this.id
  resource_id   = aws_api_gateway_resource.claim_id.id
  http_method   = "DELETE"
  authorization = "NONE"
}
  

Create the Lambda proxy integrations:

  
resource "aws_api_gateway_integration" "post_claim_lambda" {
  rest_api_id = aws_api_gateway_rest_api.this.id
  resource_id = aws_api_gateway_resource.claim.id
  http_method = aws_api_gateway_method.post_claim.http_method

  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.createClaim.invoke_arn
}
resource "aws_api_gateway_integration" "get_claim_lambda" {
  rest_api_id = aws_api_gateway_rest_api.this.id
  resource_id = aws_api_gateway_resource.claim_id.id
  http_method = aws_api_gateway_method.get_claim.http_method

  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.getClaim.invoke_arn
}

resource "aws_api_gateway_integration" "put_claim_lambda" {
  rest_api_id = aws_api_gateway_rest_api.this.id
  resource_id = aws_api_gateway_resource.claim_id.id
  http_method = aws_api_gateway_method.put_claim.http_method

  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.updateClaim.invoke_arn
}

resource "aws_api_gateway_integration" "delete_claim_lambda" {
  rest_api_id = aws_api_gateway_rest_api.this.id
  resource_id = aws_api_gateway_resource.claim_id.id
  http_method = aws_api_gateway_method.delete_claim.http_method

  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.deleteClaim.invoke_arn
}
  

Create a resource policy for the private API. A resource policy is mandatory for private API Gateway APIs; ours allows the 'execute-api:Invoke' action only from the Interface Endpoint in our api-vpc:

  
resource "aws_api_gateway_rest_api_policy" "claim_policy" {
  rest_api_id = aws_api_gateway_rest_api.this.id
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : "*",
        "Action" : "execute-api:Invoke",
        "Resource" : "${aws_api_gateway_rest_api.this.execution_arn}*"
      },
      {
        "Effect" : "Deny",
        "Principal" : "*",
        "Action" : "execute-api:Invoke",
        "Resource" : "${aws_api_gateway_rest_api.this.execution_arn}*",
        "Condition" : {
          "StringNotEquals" : {
            "aws:SourceVpce" : "${aws_vpc_endpoint.execute_api_ep.id}"
          }
        }
      }
    ]
  })
}
  

Finally, we create a deployment for our API:

  
resource "aws_api_gateway_deployment" "this" {
  rest_api_id = aws_api_gateway_rest_api.this.id

  depends_on = [
    aws_api_gateway_rest_api_policy.claim_policy,
    aws_api_gateway_integration.post_claim_lambda,
    aws_api_gateway_integration.delete_claim_lambda,
    aws_api_gateway_integration.get_claim_lambda,
    aws_api_gateway_integration.put_claim_lambda
  ]
}
  
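One caveat worth knowing: 'aws_api_gateway_deployment' is only created once, so later changes to methods or integrations are not picked up until something forces a new deployment. A common pattern, shown here as a hedged variant of the resource above, is a 'triggers' hash over the API definition (the list of tracked IDs is illustrative):

  
resource "aws_api_gateway_deployment" "this" {
  rest_api_id = aws_api_gateway_rest_api.this.id

  # Redeploy whenever any tracked part of the API definition changes.
  triggers = {
    redeployment = sha1(jsonencode([
      aws_api_gateway_resource.claim.id,
      aws_api_gateway_resource.claim_id.id,
      aws_api_gateway_method.post_claim.id,
      aws_api_gateway_integration.post_claim_lambda.id,
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}
  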

Before applying the updated changes, we will create some outputs for our endpoints, which we will use when making API calls.

Create an 'outputs.tf' file in the root of our Terraform project and add the following outputs:

  
output "claim_url" {
  description = "The API Gateway invocation url pointing to the stage"
  value       = "https://${aws_api_gateway_rest_api.this.id}.execute-api.${var.region}.amazonaws.com/${aws_api_gateway_stage.dev.stage_name}/${aws_api_gateway_resource.claim.path_part}"
}
output "claim_id_url" {
  description = "The API Gateway invocation url for a single resource pointing to the stage"
  # The {id} resource lives under /claim, so both path parts are needed.
  value       = "https://${aws_api_gateway_rest_api.this.id}.execute-api.${var.region}.amazonaws.com/${aws_api_gateway_stage.dev.stage_name}/${aws_api_gateway_resource.claim.path_part}/${aws_api_gateway_resource.claim_id.path_part}"
}
  

Run 'terraform validate' to verify the syntax and structure of our updated Terraform code and configuration files. Run 'terraform plan' to see the changes that Terraform will make to our infrastructure based on the updated configuration. Finally, run 'terraform apply -auto-approve' to apply the changes to our infrastructure on AWS.

Creating the Interface Endpoint for the private API

For clients in our VPC to connect to the private API endpoint, we will need to set up a VPC Interface Endpoint in our api-vpc.

We will create another security group in the 'security-groups.tf' file that allows only HTTPS traffic on port 443 to the Interface Endpoint for our private API:

  
resource "aws_security_group" "execute_api_ep_sg" {
  name        = "execute-api-endpoint-sg"
  description = "Security group for API Gateway VPC endpoint"
  vpc_id      = aws_vpc.api_vpc.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    # The client VPC subnet CIDRs will be added here in Part 2.
    cidr_blocks = [aws_subnet.private_sn_az1.cidr_block, aws_subnet.private_sn_az2.cidr_block]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}
  

Now let’s create the VPC Interface Endpoint in the 'api-vpc.tf' file:

  
resource "aws_vpc_endpoint" "execute_api_ep" {
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = true
  vpc_id              = aws_vpc.api_vpc.id
  service_name        = "com.amazonaws.${var.region}.execute-api"
  security_group_ids  = [aws_security_group.execute_api_ep_sg.id]
  subnet_ids          = [aws_subnet.private_sn_az1.id, aws_subnet.private_sn_az2.id]
  tags = {
    Name = "execute-api-endpoint"
  }
}
  

We will add another layer of security to the Interface Endpoint by attaching a VPC endpoint policy that allows only the 'execute-api:Invoke' action:

  
resource "aws_vpc_endpoint_policy" "execute_api_ep_policy" {
  vpc_endpoint_id = aws_vpc_endpoint.execute_api_ep.id

  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : {
          "AWS" : "*"
        },
        "Action" : [
          "execute-api:Invoke"
        ],
        "Resource" : "*"
      }
    ]
  })
}
  
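The policy above allows 'execute-api:Invoke' against any API reachable through the endpoint. Since this endpoint only serves our claims API, a hypothetically tighter variant would scope the resource to this API's execution ARN (it would replace the policy above, as an endpoint carries a single policy document):

  
resource "aws_vpc_endpoint_policy" "execute_api_ep_policy_scoped" {
  vpc_endpoint_id = aws_vpc_endpoint.execute_api_ep.id

  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : "*",
        "Action" : "execute-api:Invoke",
        # Only this REST API, any stage/method/path.
        "Resource" : "${aws_api_gateway_rest_api.this.execution_arn}/*"
      }
    ]
  })
}
  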

Let’s add some outputs for the VPC Interface Endpoint for our private API in the 'outputs.tf' file:

  
output "execute_api_endpoint" {
  value = aws_vpc_endpoint.execute_api_ep.id
}
output "execute_api_endpoint_dns_name" {
  value = aws_vpc_endpoint.execute_api_ep.dns_entry[0].dns_name
}
output "execute_api_arn" {
  value = aws_api_gateway_rest_api.this.arn
}
  

Run 'terraform validate' to verify the syntax and structure of our updated Terraform code and configuration files. Run 'terraform plan' to see the changes that Terraform will make to our infrastructure based on the updated configuration. And finally, run 'terraform apply -auto-approve' to apply the changes to our infrastructure on AWS.

Once Terraform has applied the changes successfully, you will see outputs similar to the following in your CLI:

terraform outputs after a successful terraform apply

So far, we have created the private API, the Lambda functions, and the DynamoDB table, with all the necessary networking and security. Now it is time to test our private API endpoint.

Our folder structure now looks like this:

  
|- src/
	|- archives/
	|- handlers/
		|- libs/
			|- ddbDocClient.mjs
		|- create.mjs
		|- get.mjs
		|- update.mjs
		|- delete.mjs
|- locals.tf
|- provider.tf
|- terraform.tfvars
|- variables.tf
|- outputs.tf
|- api-vpc.tf
|- ddb.tf
|- lambda.tf
|- security-groups.tf
|- apigw.tf
  

Testing the endpoints with an EC2 client

Setting up an EC2 client

Since this is a private API Gateway API, we cannot test the endpoint over the internet with a program like Postman.

Instead, we will set up an EC2 instance in the api-vpc and use curl to test the endpoints.

Create a file called 'ec2.tf':

  
➜  tf-private-apigw git:(main) ✗ touch ec2.tf
  

Create an execution role for the EC2 instance:

  
resource "aws_iam_role" "ec2_exec_role" {
  name = "ec2-exec-role"

  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Action" : "sts:AssumeRole",
        "Principal" : {
          "Service" : "ec2.amazonaws.com"
        },
        "Effect" : "Allow",
        "Sid" : ""
      }
    ]
  })

  tags = {
    tag-key = "ec2-exec-role"
  }
}
  

Attach the 'AmazonSSMManagedInstanceCore' managed policy, which grants the core permissions required by AWS Systems Manager (SSM):

  
resource "aws_iam_role_policy_attachment" "ssm_manager_attachment" {
  role       = aws_iam_role.ec2_exec_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
  

Create an instance profile for EC2 that will assume the 'ec2-exec-role':

  
resource "aws_iam_instance_profile" "ec2_instance_profile" {
  name = "ec2-instance-profile"
  role = aws_iam_role.ec2_exec_role.name
}

  

Create the EC2 instance that we will use for testing:

  
resource "aws_instance" "api_vpc_instance" {
  ami                    = local.ami
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.private_sn_az1.id
  vpc_security_group_ids = [aws_security_group.execute_api_ep_sg.id]
  # Replace with your own key pair name; optional, since we connect via SSM.
  key_name               = "default-euc1"
  iam_instance_profile   = aws_iam_instance_profile.ec2_instance_profile.name

  tags = {
    Name = "api-vpc-instance"
  }
}

  

We will use AWS Systems Manager Session Manager to access the EC2 instance in the private subnet of 'api-vpc' and test the API Gateway private API. AWS Systems Manager supports VPC Interface Endpoints, which provide private network connectivity from a VPC to the Systems Manager service without going over the public internet, limiting the attack surface for any bad actors.

Session Manager requires two Interface Endpoints to be able to reach our private EC2 instance.

We will set up the endpoints in the 'api-vpc.tf' file. First, we create the security group for these Interface Endpoints, allowing only HTTPS traffic on port 443:

  
resource "aws_security_group" "ssm2_ep_sg" {
  name        = "ssm2-endpoint-sg"
  description = "Security group for SSM endpoints for api-vpc instances"
  vpc_id      = aws_vpc.api_vpc.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [aws_subnet.private_sn_az1.cidr_block, aws_subnet.private_sn_az2.cidr_block]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
 
  lifecycle {
    create_before_destroy = true
  }
}
  

Next, we set up the Interface endpoints for SSM and SSM messages:

  
resource "aws_vpc_endpoint" "ssm2_ep" {
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = true
  service_name        = "com.amazonaws.${var.region}.ssm"

  vpc_id              = aws_vpc.api_vpc.id
  security_group_ids  = [aws_security_group.ssm2_ep_sg.id]
  subnet_ids          = [aws_subnet.private_sn_az1.id]
  tags = {
    Name = "ssm2-endpoint"
  }
}
resource "aws_vpc_endpoint" "ssm2_messages_ep" {
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = true
  service_name        = "com.amazonaws.${var.region}.ssmmessages"

  vpc_id              = aws_vpc.api_vpc.id
  security_group_ids  = [aws_security_group.ssm2_ep_sg.id]
  subnet_ids          = [aws_subnet.private_sn_az1.id]
  tags = {
    Name = "ssm2-messages-endpoint"
  }
}
  
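Note that the exact set of required endpoints can vary: depending on the SSM Agent version on your AMI, an 'ec2messages' endpoint may also be needed. If sessions fail to start, a third endpoint along these lines (an assumption, not part of the original setup) can be added in the same way:

  
resource "aws_vpc_endpoint" "ec2_messages_ep" {
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = true
  service_name        = "com.amazonaws.${var.region}.ec2messages"

  vpc_id             = aws_vpc.api_vpc.id
  security_group_ids = [aws_security_group.ssm2_ep_sg.id]
  subnet_ids         = [aws_subnet.private_sn_az1.id]
  tags = {
    Name = "ec2-messages-endpoint"
  }
}
  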

Run 'terraform validate' to verify the syntax and structure of our updated Terraform code and configuration files. Run 'terraform plan' to see the changes that Terraform will make to our infrastructure based on the updated configuration. Finally, run 'terraform apply -auto-approve' to apply the changes to our infrastructure on AWS.

Once Terraform has applied the changes successfully, you will see outputs similar to the following in your CLI:

Terraform outputs after a successful terraform apply

Follow the steps below to start a Session Manager session to our api-vpc-instance:

  1. Open the Amazon EC2 console.

  2. Click on the Instance ID of our api-vpc-instance.

EC2 Console with instances

  3. Click on "Connect".

EC2 instance Connect

  4. Click on Session Manager, then Connect.

EC2 instance Session Manager

  5. If you see the CLI, then the session was successfully established.

Successful Session connection to EC2 instance

Now we are ready to test the private API endpoint.

Testing the Endpoint

All tests will be run in the Session Manager session connected to our api-vpc-instance.

Run the commands below in the terminal of the client instance.

CreateClaim

We will make three calls to our claim API endpoint to create three items in the DynamoDB 'claimsTable'. Make sure you update the '--location' URL with your own 'claim_url' or 'claim_id_url' output if you are following along:

  
curl --location 'https://c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com/dev/claim?policyId=123&memberId=123&memberName=JohnDoe' \
--header 'Content-Type: application/json' \
--data '{
  "policyType": "Health", 
"claimAmount": 500, 
"description": "malaria treatment", 
"status": "pending"
}'

curl --location 'https://c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com/dev/claim?policyId=456&memberId=456&memberName=MaryJane' \
--header 'Content-Type: application/json' \
--data '{
  "policyType": "Health", 
"claimAmount": 650, 
"description": "allergy treatment", 
"status": "pending"
}'

curl --location 'https://c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com/dev/claim?policyId=123&memberId=123&memberName=JohnDoe' \
--header 'Content-Type: application/json' \
--data '{
  "policyType": "Health", 
"claimAmount": 1500, 
"description": "snake bite treatment", 
"status": "pending"
}'
  

The image below shows the results when we create 3 claims:

Picture showing curl requests and results

We can also see the items in our 'claimsTable' in the Amazon DynamoDB Console:

Picture showing DynamoDB items created

Get

  
curl --location 'https://c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com/dev/claim/2afe60e7-3f11-485a-8a69-16389ff52fbd?policyId=123&memberId=123&memberName=JohnDoe'
  

Update

  
curl --location --request PUT 'https://c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com/dev/claim/2afe60e7-3f11-485a-8a69-16389ff52fbd?policyId=123&memberId=123&memberName=JohnDoe' \
--header 'Content-Type: application/json' \
--data '{
  "policyType": "Health", 
"claimAmount": 2500, 
"description": "malaria treatment booster", 
"status": "pending"
}'
  

Delete

  
curl --location --request DELETE 'https://c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com/dev/claim/2afe60e7-3f11-485a-8a69-16389ff52fbd?policyId=123&memberId=123&memberName=JohnDoe'
  

Traffic flow

In the CLI of our api-vpc-instance, you can do a DNS lookup on the domain name of our API:

  
sh-4.2$ nslookup c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com
Server:         10.0.0.2
Address:        10.0.0.2#53

Non-authoritative answer:
Name:   c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com
Address: 10.0.1.230
Name:   c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com
Address: 10.0.2.29

sh-4.2$ nslookup execute-api.eu-central-1.amazonaws.com
Server:         10.0.0.2
Address:        10.0.0.2#53

Non-authoritative answer:
Name:   execute-api.eu-central-1.amazonaws.com
Address: 10.0.1.230
Name:   execute-api.eu-central-1.amazonaws.com
Address: 10.0.2.29

sh-4.2$ nslookup vpce-0e865440ab43f1688-45wvi9ri.execute-api.eu-central-1.vpce.amazonaws.com
Server:         10.0.0.2
Address:        10.0.0.2#53

Non-authoritative answer:
Name:   vpce-0e865440ab43f1688-45wvi9ri.execute-api.eu-central-1.vpce.amazonaws.com
Address: 10.0.1.230
Name:   vpce-0e865440ab43f1688-45wvi9ri.execute-api.eu-central-1.vpce.amazonaws.com
Address: 10.0.2.29
  

Notice how the api-vpc’s Amazon DNS server at '10.0.0.2' resolves the domain name of the private API and of its VPC Interface Endpoint to the same private IP addresses: '10.0.1.230' in 'private_sn_az1' and '10.0.2.29' in 'private_sn_az2'.

To explain how a request from an instance in the VPC reaches the DynamoDB table, we will use the diagram below:

Picture showing steps required to test API Gateway private API
  1. The client calls the private API endpoint (in our case, 'GET https://c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com/dev/claim/2afe60e7-3f11-485a-8a69-16389ff52fbd?policyId=123&memberId=123&memberName=JohnDoe'). The Amazon DNS server at 10.0.0.2 resolves 'c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com' to the private IP addresses of the API Gateway VPC Interface Endpoint.
  2. Amazon API Gateway passes the payload to our private Lambda function through an integration request.
  3. Lambda performs the CRUD operation on the DynamoDB table.
    • The Lambda function’s ENI in the private subnet uses the managed route in the private route table to reach the DynamoDB service.
    • The payload is routed to the DynamoDB table through the DynamoDB Gateway Endpoint.

You can also use VPC Reachability Analyzer (in the AWS Network Manager console) to view how the Layer 3 IP packet moves through the various VPC constructs before hitting the private API's Interface Endpoint. Below is a snippet of an analysis that traces a packet from our test instance; this is useful for troubleshooting:

AWS Network Manager Reachability Analyzer path details

Cleaning Up

To clean up all resources created by Terraform, run 'terraform destroy --auto-approve' in the project root directory:

  
tf-private-apigw git:(main) terraform destroy --auto-approve
  

Recap

So far, in Part 1 of this tutorial, we have succeeded in building a serverless private API Gateway REST API that can only be accessed from within a VPC. Our VPC didn’t have any public subnets or routes to the internet. We used a VPC Gateway Endpoint to access our DynamoDB table without going over the internet. For our tests, we set up an EC2 instance in the VPC and used AWS Systems Manager Session Manager to connect to the instance via additional VPC Interface Endpoints, again without going over the internet. From the instance, we were able to test our private endpoints successfully. At this point, clients in the same VPC as the private API endpoint can consume private serverless REST APIs.

In Part 2 of this tutorial, we will open up access to our private API Gateway REST API to clients in another VPC.

