How to reuse an AWS S3 bucket for multiple Serverless Framework projects

March 2, 2020

When using the Serverless Framework, the default behaviour is to create an S3 bucket for each serverless.yml file, since each file is treated as a separate project.

As described in the documentation, when you run serverless deploy the following steps happen:

  1. An AWS CloudFormation template is created from your serverless.yml.
  2. If a Stack has not yet been created, then it is created with no resources except for an S3 Bucket, which will store zip files of your Function code.
  3. The code of your Functions is then packaged into zip files.
  4. Serverless fetches the hashes for all files of the previous deployment (if any) and compares them against the hashes of the local files.
  5. Serverless terminates the deployment process if all file hashes are the same.
  6. Zip files of your Functions’ code are uploaded to your Code S3 Bucket.
  7. Any IAM Roles, Functions, Events and Resources are added to the AWS CloudFormation template.
  8. The CloudFormation Stack is updated with the new CloudFormation template.
  9. Each deployment publishes a new version for each function in your service.

AWS has a soft limit of 100 S3 buckets per account. You can increase your account bucket limit to a maximum of 1,000 buckets, but depending on your workload, this can still be a problem.

How can you leverage the benefits of the Serverless Framework and still keep your AWS account sane? The answer lies in an option of the serverless.yml file called deploymentBucket.

Anatomy of the “deploymentBucket” option

In the serverless.yml file reference, we can define a provider.deploymentBucket and set the following options:

# serverless.yml
service: ...
provider:
  ...
  deploymentBucket:
    name: com.serverless.${self:provider.region}.deploys
    maxPreviousDeploymentArtifacts: 10
    blockPublicAccess: true
    serverSideEncryption: AES256
    sseKMSKeyId: arn:aws:kms:us-east-1:xxxxxxxxxxxx:key/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
    sseCustomerAlgorithim: AES256
    sseCustomerKey: string
    sseCustomerKeyMD5: md5sum
    tags:
      key1: value1
      key2: value2

Breaking down each option:

  • name: the deployment bucket name. By default, the framework generates one
  • maxPreviousDeploymentArtifacts: on every deployment, the framework prunes the bucket to remove artifacts older than this limit. The default is 5
  • blockPublicAccess: prevents public access via ACLs or bucket policies. The default is false
  • serverSideEncryption: the server-side encryption method (for example, AES256 or aws:kms)
  • sseKMSKeyId: the KMS key ID, when using server-side encryption with KMS
  • sseCustomerAlgorithim: the encryption algorithm, when using server-side encryption with customer-provided keys (SSE-C)
  • sseCustomerKey: the customer-provided encryption key, when using SSE-C
  • sseCustomerKeyMD5: the MD5 digest of the customer-provided key, when using SSE-C
  • tags: tags that will be added to each of the deployment resources

To reuse the same bucket across multiple Serverless Framework projects, we need to set the same deploymentBucket.name across these projects.
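
In its simplest form, that means every project carries the same fragment (the bucket name here is a hypothetical example):

```yaml
# In serviceA/serverless.yml, serviceB/serverless.yml, and so on
provider:
  deploymentBucket:
    name: com.mycompany.us-west-2.deploys  # the same, pre-existing bucket everywhere
```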

Let’s walk through an example to understand it better.

Using “deploymentBucket” in multiple projects

To illustrate a realistic scenario, let’s imagine the following requirements:

  • A serverless.yml to define our bucket
  • A serverless.yml for serviceA
  • A serverless.yml for serviceB
  • A serverless.yml for serviceC

We could translate that into the following directory structure:

resources/
  s3/
    serverless.yml
services/
  serviceA/
    serverless.yml
  serviceB/
    serverless.yml
  serviceC/
    serverless.yml

And in our resources/s3/serverless.yml we can add:

org: your-org-name
app: shared-app-name
service: ${self:app}-shared-bucket-artifacts
provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, "dev"}
  region: ${opt:region, "us-west-2"}
  profile: ${opt:profile, "default"}
custom:
  basename: ${self:service}-${self:provider.stage}
  bucketname: ${self:custom.basename}-${self:provider.region}-artifacts
resources:
  Resources:
    S3SharedBucketArtifacts:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.bucketname}
outputs:
  S3SharedBucketArtifactsName:
    Ref: S3SharedBucketArtifacts
  S3SharedBucketArtifactsArn:
    Fn::GetAtt: S3SharedBucketArtifacts.Arn

In the file above, we’re defining an S3 bucket using CloudFormation and exporting its name and ARN using the Serverless Framework Pro feature called Outputs.

While you can supercharge your development workflow with Serverless Framework Pro, it is not a requirement: you can use CloudFormation exports and imports to achieve the same result.
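
For reference, a sketch of the same wiring without Pro (the output key and stack name below are illustrative): declare the bucket name under CloudFormation's own Outputs section, then read it back in each service with the framework's ${cf:stackName.outputKey} variable, where the stack name defaults to service-stage.

```yaml
# resources/s3/serverless.yml — declare a plain CloudFormation output
resources:
  Resources:
    S3SharedBucketArtifacts:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.bucketname}
  Outputs:
    S3SharedBucketArtifactsName:
      Value:
        Ref: S3SharedBucketArtifacts

# services/serviceA/serverless.yml — read the output of the deployed bucket stack
provider:
  deploymentBucket:
    name: ${cf:shared-app-name-shared-bucket-artifacts-dev.S3SharedBucketArtifactsName}
```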

Moving to /services/serviceA/serverless.yml, we have:

org: your-org-name
app: shared-app-name
service: ${self:app}-serviceA
provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, "dev"}
  region: ${opt:region, "us-west-2"}
  profile: ${opt:profile, "default"}
  deploymentBucket:
    name: ${self:custom.sharedBucketName}
custom:
  basename: ${self:service}-${self:provider.stage}
  sharedBucketName: ${output:${self:app}-shared-bucket-artifacts.S3SharedBucketArtifactsName}
package:
  exclude:
    - ./**
  include:
    - index.js
functions:
  test:
    name: ${self:custom.basename}-test
    handler: index.handler
    description: Returns "Hello World". Dummy function for API deployment
    events:
      - http:
          path: /test
          method: any
          cors: true

As we can see above, we set provider.deploymentBucket.name by consuming the bucket name exported from our previous file, using ${output:${self:app}-shared-bucket-artifacts.S3SharedBucketArtifactsName}. As mentioned before, ${output:...} is a Serverless Framework Pro feature, but you can achieve the same with plain CloudFormation.

You can check the full example in this pull request.

Conclusion

With a simple change, you can avoid hitting the limits of your AWS account and still benefit from using the Serverless Framework.

It also keeps your cloud environment and development workflow tidy and neat!
