Internet-facing APIs offer a convenient way to expose functionality, but they are also exposed to the threats that come with the public internet. In today’s interconnected world, security and compliance are paramount concerns for businesses across many sectors. Restricting access to resources by placing them inside a VPC with no internet access can help meet privacy regulations such as HIPAA and PCI DSS: keeping APIs private ensures sensitive data remains within a controlled environment and mitigates the risk of unauthorized access and data breaches.
In this tutorial, we will look at a case study on how a Serverless REST API can be built with Amazon API Gateway (APIGW) and accessed privately by clients in an Amazon VPC.
Prerequisites
To proceed with this tutorial, make sure you have the following software installed and configured: the AWS CLI with valid credentials, Node.js and npm, Terraform, and (optionally) Homebrew if you are installing these tools on a Mac.
To verify if all of the prerequisites are installed, you can run the following commands:
# check if the correct AWS credentials are set up
aws sts get-caller-identity
# check if you have NodeJS installed
node -v
# check if NPM is installed
npm -v
# check if you have Homebrew installed (I'm using Mac)
brew -v
# check if you have Terraform installed
terraform -v
Architecture
Part 1
For the first part, we will focus on building the private API in a VPC.
Part 2
In the next part, we will show how clients in another VPC can privately access our private API via an Amazon VPC peering connection and Route53 resolver endpoints.
Setting up Terraform
In this step, we will create our project directory and initialize Terraform.
mkdir tf-private-apigw && cd tf-private-apigw
The next thing to do after creating the project directory is to set up the necessary files and configurations required to initialize Terraform. Create the following files:
Before initializing Terraform, we need to define the provider which in our case is AWS, as well as the version of Terraform, a default region, and the AWS credentials to use when deploying to AWS.
Copy and paste your provider configuration into 'provider.tf'.
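A minimal 'provider.tf' could look like the sketch below. The version constraints, the 'eu-central-1' default region, and the 'default' credentials profile are assumptions, so adjust them to your own environment (the 'region' variable is also referenced later in 'outputs.tf'):

terraform {
  required_version = ">= 1.5.0"   # assumed minimum Terraform version
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"          # assumed AWS provider version
    }
  }
}

# Region variable reused across the configuration (e.g. var.region in outputs.tf)
variable "region" {
  type    = string
  default = "eu-central-1"
}

provider "aws" {
  region  = var.region
  profile = "default"             # assumed AWS credentials profile
}

With this in place, run 'terraform init' in the project root to initialize the provider.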
We will create a file called 'api-vpc.tf' where we will define all the networking configurations necessary for our private API.
➜ tf-private-apigw git:(main) touch api-vpc.tf
Creating a VPC with DNS Support
We define the VPC CIDR and, most importantly, enable DNS hostnames and DNS support, which allow resources within the VPC to be automatically assigned DNS names and enable the Amazon DNS server for name resolution, respectively.
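A sketch of the VPC resource in 'api-vpc.tf' is shown below. The '10.0.0.0/16' CIDR and the 'api_vpc' resource name are assumptions (the CIDR is inferred from the '10.0.x.x' private addresses referenced later in the tutorial):

resource "aws_vpc" "api_vpc" {
  cidr_block           = "10.0.0.0/16"   # assumed CIDR, matching the private IPs seen later
  enable_dns_support   = true            # enables the Amazon-provided DNS server (10.0.0.2)
  enable_dns_hostnames = true            # assigns DNS hostnames to resources in the VPC

  tags = {
    Name = "api-vpc"
  }
}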
Create the DynamoDB Document Client in 'src/libs/ddbDocClient.mjs' which we will use when reading and writing to 'claimsTable'.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient } from "@aws-sdk/lib-dynamodb";

// Low-level DynamoDB client; the region should match the one used by the Terraform provider
const client = new DynamoDBClient({ region: "eu-central-1" });

// Document client wrapper that marshals and unmarshals plain JavaScript objects
export const ddbDocClient = DynamoDBDocumentClient.from(client);
Here is the code for the 'createClaim' Lambda function:
'use strict'

import { PutCommand } from "@aws-sdk/lib-dynamodb";
import { ddbDocClient } from "./libs/ddbDocClient.mjs";
import { randomUUID } from "crypto";

const tableName = process.env.DYNAMODB_TABLE_NAME;

export const handler = async (event) => {
  console.log("Event===", JSON.stringify(event, null, 2));

  if (event.httpMethod !== "POST") {
    throw new Error(`Expecting POST method, received ${event.httpMethod}`);
  }

  // Query string parameters may be missing entirely, so default to an empty object
  const qs = event.queryStringParameters || {};
  if (!qs.memberId) {
    throw new Error(`memberId missing`);
  } else if (!qs.policyId) {
    throw new Error(`policyId missing`);
  } else if (!qs.memberName) {
    throw new Error(`memberName missing`);
  }

  const { memberId, policyId, memberName } = qs;
  // event.body is a JSON string (or null), so fall back to an empty JSON object
  const parsedBody = JSON.parse(event.body || "{}");
  const now = new Date().toISOString();
  const claimId = randomUUID();

  // Single-table design: claims are stored under the member's partition key
  const params = {
    TableName: tableName,
    Item: {
      PK: `MEMBER#${memberId}`,
      SK: `CLAIM#${claimId}`,
      ...parsedBody,
      policyId,
      memberName,
      createdAt: now,
      updatedAt: now,
    },
  };

  let response;
  const command = new PutCommand(params);
  try {
    const data = await ddbDocClient.send(command);
    console.log("Success, claim created", data);
    response = {
      statusCode: 201,
      // API Gateway proxy integrations expect the response body to be a string
      body: JSON.stringify(params.Item),
    };
  } catch (err) {
    console.log("Error", err);
    response = {
      statusCode: err.statusCode || 500,
      body: JSON.stringify({ err }),
    };
  }

  console.log("response===", response);
  return response;
};
The other Lambda functions ('get.mjs', 'update.mjs', & 'delete.mjs') will be set up in a similar manner. We won’t go into the Lambda code here as this is not the principal focus of this tutorial but you can refer to this GitHub repository for the complete code.
Creating the Lambda infrastructure
Create a file called 'lambda.tf' :
tf-private-apigw git:(main) ✗ touch lambda.tf
Lambda execution role
First, we will create an execution role for our Lambda functions and attach two AWS managed IAM policies, 'AWSLambdaVPCAccessExecutionRole' and 'AmazonDynamoDBFullAccess', to grant our functions sufficient permissions to access VPC networking, push logs to CloudWatch, and interact with our DynamoDB table:
resource "aws_iam_role" "lambda_exec_role" {
  name = "lambda-exec-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}
resource "aws_iam_role_policy_attachment" "lambda_vpc_execution" {
role = aws_iam_role.lambda_exec_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
resource "aws_iam_role_policy_attachment" "ddb_full_access" {
role = aws_iam_role.lambda_exec_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}
We will go over just a single private Lambda function for demonstration purposes. Lambda functions attached to a VPC create Elastic Network Interfaces (ENIs) in the subnets they need access to, which means we also need to attach a security group. We will create a security group called 'private_lambda_sg' that allows only HTTPS traffic on port 443.
Security Groups
Create a file called 'security-groups.tf', which will hold our first security group for the private Lambda Elastic Network Interfaces (ENIs).
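The sketch below shows one way to write it. The egress-only rule reflects the fact that Lambda ENIs only originate traffic; the '0.0.0.0/0' destination and the 'api_vpc' resource name are assumptions:

resource "aws_security_group" "private_lambda_sg" {
  name        = "private-lambda-sg"
  description = "Allow the private Lambda ENIs to make outbound HTTPS calls only"
  vpc_id      = aws_vpc.api_vpc.id

  egress {
    description = "HTTPS only"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]   # DynamoDB is reached via the Gateway endpoint's prefix list
  }
}

Back in 'lambda.tf', the 'createClaim' function itself could then be defined roughly as follows. The packaging approach, the Node.js runtime, the handler file name, and the 'claims_table' resource name are assumptions; the subnets match the 'private_sn_az1' and 'private_sn_az2' subnets referenced later in this tutorial:

data "archive_file" "lambda_src_zip" {
  type        = "zip"
  source_dir  = "${path.module}/src"        # zips the handlers plus the libs/ folder
  output_path = "${path.module}/dist/src.zip"
}

resource "aws_lambda_function" "create_claim" {
  function_name    = "createClaim"
  role             = aws_iam_role.lambda_exec_role.arn   # execution role created above
  runtime          = "nodejs18.x"                        # assumed runtime
  handler          = "create.handler"                    # assumed handler file name
  filename         = data.archive_file.lambda_src_zip.output_path
  source_code_hash = data.archive_file.lambda_src_zip.output_base64sha256

  environment {
    variables = {
      DYNAMODB_TABLE_NAME = aws_dynamodb_table.claims_table.name   # assumed table resource name
    }
  }

  # Attaching the function to the VPC creates ENIs in these private subnets
  vpc_config {
    subnet_ids         = [aws_subnet.private_sn_az1.id, aws_subnet.private_sn_az2.id]
    security_group_ids = [aws_security_group.private_lambda_sg.id]
  }
}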
The key things to note about the above configuration are:
We must attach the 'lambda_exec_role' role’s ARN so our function can push logs to its CloudWatch log group, create ENIs in the listed VPC subnets, and interact with DynamoDB.
We must use the 'vpc_config' parameter to specify the subnets which the Lambda function can access.
Lambda Logs
Next, create the CloudWatch log group that the function will send its logs to for monitoring.
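A sketch of the log group resource; Lambda writes to a log group named '/aws/lambda/<function name>', the function resource name follows the sketch above, and the 14-day retention is an assumption:

resource "aws_cloudwatch_log_group" "create_claim_logs" {
  name              = "/aws/lambda/${aws_lambda_function.create_claim.function_name}"
  retention_in_days = 14   # assumed retention period
}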
Create a resource policy for the private API. A resource policy is mandatory for API Gateway private APIs. Our policy allows the 'execute-api:Invoke' action only from the Interface Endpoint in our api-vpc.
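Here is a sketch of such a policy, attached with the 'aws_api_gateway_rest_api_policy' resource. It assumes the REST API is defined as 'aws_api_gateway_rest_api.this' (as the outputs below suggest) and references the 'execute_api_ep' Interface Endpoint that we create in a later step:

resource "aws_api_gateway_rest_api_policy" "private_api_policy" {
  rest_api_id = aws_api_gateway_rest_api.this.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = "*"
        Action    = "execute-api:Invoke"
        Resource  = "${aws_api_gateway_rest_api.this.execution_arn}/*"
      },
      {
        # Deny any invocation that does not arrive through our interface endpoint
        Effect    = "Deny"
        Principal = "*"
        Action    = "execute-api:Invoke"
        Resource  = "${aws_api_gateway_rest_api.this.execution_arn}/*"
        Condition = {
          StringNotEquals = {
            "aws:SourceVpce" = aws_vpc_endpoint.execute_api_ep.id
          }
        }
      }
    ]
  })
}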
Before applying the updated changes, we will create some outputs for our endpoints which we will use for making API calls.
Create the 'outputs.tf' file in the root of our Terraform project and create the following outputs:
output "claim_url" {
description = "The API Gateway invocation url pointing to the stage"
value = "https://${aws_api_gateway_rest_api.this.id}.execute-api.${var.region}.amazonaws.com/${aws_api_gateway_stage.dev.stage_name}/${aws_api_gateway_resource.claim.path_part}"
}
output "claim_id_url" {
description = "The API Gateway invocation url for a single resource pointing to the stage"
value = "https://${aws_api_gateway_rest_api.this.id}.execute-api.${var.region}.amazonaws.com/${aws_api_gateway_stage.dev.stage_name}/${aws_api_gateway_resource.claim_id.path_part}"
}
Run 'terraform validate' to verify the syntax and structure of our updated Terraform code and configuration files. Run 'terraform plan' to see the changes that Terraform will make to our infrastructure based on the updated configuration. Finally, run 'terraform apply --auto-approve' to apply the changes to our infrastructure on AWS.
Creating the Interface Endpoint for the private API
For clients in our VPC to connect to the private API endpoint, we will need to set up a VPC Interface Endpoint in our api-vpc.
We will create another security group in the 'security-groups.tf' file that allows only HTTPS traffic on port 443 to reach the Interface Endpoint for our private API.
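Both pieces could look like the sketch below: the endpoint security group allowing HTTPS from anywhere inside the VPC, and the 'execute-api' Interface Endpoint placed in our two private subnets. The resource and tag names are assumptions; 'private_dns_enabled' lets clients keep using the default 'execute-api' hostname:

resource "aws_security_group" "execute_api_ep_sg" {
  name        = "execute-api-endpoint-sg"
  description = "Allow HTTPS from inside the VPC to the API Gateway interface endpoint"
  vpc_id      = aws_vpc.api_vpc.id

  ingress {
    description = "HTTPS from the VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.api_vpc.cidr_block]
  }
}

resource "aws_vpc_endpoint" "execute_api_ep" {
  vpc_id              = aws_vpc.api_vpc.id
  service_name        = "com.amazonaws.${var.region}.execute-api"
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = true   # resolve the regular execute-api hostname to private IPs
  subnet_ids          = [aws_subnet.private_sn_az1.id, aws_subnet.private_sn_az2.id]
  security_group_ids  = [aws_security_group.execute_api_ep_sg.id]

  tags = {
    Name = "execute-api-endpoint"
  }
}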
Let’s add some outputs for the VPC Interface Endpoint for our private API in the 'outputs.tf' file:
output "execute_api_endpoint" {
value = aws_vpc_endpoint.execute_api_ep.id
}
output "execute_api_endpoint_dns_name" {
value = aws_vpc_endpoint.execute_api_ep.dns_entry[0].dns_name
}
output "execute_api_arn" {
value = aws_api_gateway_rest_api.this.arn
}
Run 'terraform validate' to verify the syntax and structure of our updated Terraform code and configuration files. Run 'terraform plan' to see the changes that Terraform will make to our infrastructure based on the updated configuration. Finally, run 'terraform apply --auto-approve' to apply the changes to our infrastructure on AWS.
Once Terraform has successfully applied the changes, you will see output similar to this in the CLI:
So far, we have created the private API, the Lambda functions and DynamoDB table with all the necessary networking and security. Now it is time to test our private API endpoint.
We will use AWS Systems Manager Session Manager to access the EC2 instance in the private subnet of 'api-vpc' and test the API Gateway private API. AWS Systems Manager supports VPC Interface Endpoints, which provide private network connectivity from a VPC to the Systems Manager service without going over the public internet, limiting the attack surface for bad actors.
The Session Manager service requires two Interface Endpoints to be able to reach our private EC2 instance.
We will set up these endpoints in the 'api-vpc.tf' file, together with a security group for the interface endpoints that allows only HTTPS traffic on port 443.
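A sketch of that configuration, assuming the two endpoints in question are the 'ssm' and 'ssmmessages' services that Session Manager relies on:

resource "aws_security_group" "ssm_endpoint_sg" {
  name        = "ssm-endpoint-sg"
  description = "Allow HTTPS from inside the VPC to the Systems Manager endpoints"
  vpc_id      = aws_vpc.api_vpc.id

  ingress {
    description = "HTTPS from the VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.api_vpc.cidr_block]
  }
}

# One interface endpoint per Systems Manager service used by Session Manager
resource "aws_vpc_endpoint" "ssm_endpoints" {
  for_each = toset(["ssm", "ssmmessages"])

  vpc_id              = aws_vpc.api_vpc.id
  service_name        = "com.amazonaws.${var.region}.${each.key}"
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = true
  subnet_ids          = [aws_subnet.private_sn_az1.id, aws_subnet.private_sn_az2.id]
  security_group_ids  = [aws_security_group.ssm_endpoint_sg.id]

  tags = {
    Name = "${each.key}-endpoint"
  }
}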
Run 'terraform validate' to verify the syntax and structure of our updated Terraform code and configuration files. Run 'terraform plan' to see the changes that Terraform will make to our infrastructure based on the updated configuration. Finally, run 'terraform apply --auto-approve' to apply the changes to our infrastructure on AWS.
Once Terraform has successfully applied the changes, you will see output similar to this in the CLI:
Follow the steps below to start a Session Manager session to our api-vpc-instance:
1. Open the Amazon EC2 console
2. Click on the Instance ID of our api-vpc-instance
3. Click on “Connect”
4. Select the “Session Manager” tab and click “Connect”
5. If you see a shell prompt, the session was established successfully
Now we are ready to test the private API endpoint.
Testing the Endpoint
All tests will be run in the Systems Manager Session Manager connection to our api-vpc-instance.
Run the commands below in the terminal of the client instance.
CreateClaim
We will make 3 calls to our claim API endpoint to create 3 items in the DynamoDB 'claimsTable'. Make sure you replace the '--location' value with your own 'claim_url' or 'claim_id_url' output if you are following along:
Notice how the api-vpc’s Amazon DNS server at '10.0.0.2' resolves the domain name of the private API and of its Interface VPC Endpoint to the same private IP addresses: '10.0.1.230' in 'private_sn_az1' and '10.0.2.29' in 'private_sn_az2'.
To explain how a request from an instance in the VPC reaches the DynamoDB table, we will use the diagram below:
1. The client calls the private API endpoint (in our case, 'GET https://c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com/dev/claim/2afe60e7-3f11-485a-8a69-16389ff52fbd?policyId=123&memberId=123&memberName=JohnDoe'). The Amazon DNS server at 10.0.0.2 resolves 'c1w4eq5yf2.execute-api.eu-central-1.amazonaws.com' to the private IP addresses of the VPC Interface Endpoint of APIGW.
2. Amazon API Gateway passes the payload to our private Lambda function through an integration request.
3. The Lambda function performs the CRUD operation on the DynamoDB table.
4. The Lambda function’s ENI in the private subnet uses the private route table’s managed route to the DynamoDB service.
5. The payload is routed to the DynamoDB table through the DynamoDB Gateway VPC endpoint (sketched below).
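For reference, the DynamoDB Gateway endpoint that this routing relies on could be declared in 'api-vpc.tf' roughly like this; the 'private_rt' route table name is an assumption:

resource "aws_vpc_endpoint" "dynamodb_ep" {
  vpc_id            = aws_vpc.api_vpc.id
  service_name      = "com.amazonaws.${var.region}.dynamodb"
  vpc_endpoint_type = "Gateway"

  # Adds the managed prefix-list route for DynamoDB to the private route table
  route_table_ids = [aws_route_table.private_rt.id]

  tags = {
    Name = "dynamodb-gateway-endpoint"
  }
}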
You can also use VPC Reachability Analyzer (found under AWS Network Manager) to view how the Layer 3 IP packet moves through the various VPC constructs before reaching the private API’s Interface Endpoint. Below is a snippet of an analysis that traces a packet from our test instance; this is useful for troubleshooting:
Cleaning Up
To clean up all resources created by Terraform, run 'terraform destroy --auto-approve' in the project root directory:
So far in Part 1 of this tutorial, we have succeeded in defining a Serverless private API Gateway REST API that can only be accessed from within a VPC. Our VPC had no public subnets and no routes to the internet. We used a VPC Gateway Endpoint to access our DynamoDB table without going over the internet. For our tests, we set up an EC2 instance in the VPC and used AWS Systems Manager Session Manager to connect to the instance via additional VPC Interface Endpoints, again without going over the internet. From the instance, we were able to test our private endpoints successfully. At this point, clients in the same VPC as the private API endpoint can consume private Serverless REST APIs.
In part 2 of this tutorial, we will provide access to our private API Gateway REST API to clients in another VPC.
At Serverless Guru, we're a collective of proactive solution finders. We prioritize genuineness, forward-thinking vision, and above all, we commit to diligently serving our members each and every day.