In this article, we are going to look at how Serverless Framework Pro can help you:
- set up a monorepo
- easily implement Continuous Integration in your project
- make your project more secure by removing local environment variables and secrets
- make your development faster by automating deployments to staging and prod
- make your deployments higher quality by automatically running tests and applying deployment safeguards
Github Repository: https://github.com/serverless-guru/serverless-dashboard-example
Creating a Deployment Profile
When it comes to deployment patterns, we often want to deploy to different targets for different reasons:
- prod for real users
- staging for validating the app works before real users use it
- dev for developers trying things out in the cloud while developing
Managing AWS credentials, safeguards, and environment variables for all stages on all developer machines can be difficult. That is where Deployment Profiles can help. So let's log in to the Serverless dashboard and click on profiles at the top. You will see you already have a default profile. What we want to do is create profiles for prod and staging.
When creating a profile, you are given 2 options for credentials:
- use credentials already existing on your computer or CI environment
- reference an AWS Role to be used
Let's choose the second one. You will be taken to the AWS console to create an IAM role. You will notice a field called external ID is already filled out. This is what allows the Serverless dashboard to assume this role when it deploys. Next, we will choose the role's permissions, give it a name, and click create.
Next, let's copy the ARN from our newly created role, paste it into the shared AWS account field, give the profile a name, and click create.
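For intuition, what that wizard creates is a standard cross-account role: its trust policy lets the Serverless dashboard's AWS account assume the role, gated by the external ID. Here is a conceptual sketch in CloudFormation YAML; the principal account ID, external ID, and attached policy are placeholders, not values taken from the wizard.
Resources:
  ServerlessDashboardRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              # Placeholder for the Serverless dashboard's AWS account
              AWS: arn:aws:iam::111111111111:root
            Condition:
              StringEquals:
                # The pre-filled external ID goes here
                sts:ExternalId: YOUR-EXTERNAL-ID
      ManagedPolicyArns:
        # Placeholder permissions; attach whatever your deployments actually need
        - arn:aws:iam::aws:policy/PowerUserAccess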
Adding Safeguards to our Deployment Profile
Adding Safeguards is a very easy way to enforce certain workflow decisions. For example, as a company you may decide deployments to prod should only happen on Tuesdays, or Lambda functions should not have any wildcard permissions. Safeguards allow us to confirm certain conditions are true before deploying.
While in your profile, click on safeguard policies, and click add policy. You will see a drop-down of safeguard options. Some safeguards have additional config which you can set underneath (for example, valid deployment times). You can also set the severity to warning or error. Error will stop the deployment, while warning will still allow the deployment to go through. Give it a name, and click save policy. Now any deployment using this profile will go through this safeguard policy.
Attaching Environment Variables to our Deployment Profile
One common approach for dealing with environment variables is putting them in a .env file which is ignored by the git repository. This might be fine for secrets related to developer test accounts on third party services. But managing all secrets for all stages can be difficult. It is also easy to break the rule of least privilege by giving out too many credentials and environment variables.
We can simplify this by attaching environment variables to the deployment profile instead, rather than requiring developers to keep untracked .env files on their computers. To do this, we can click on the parameters section in our deployment profile to set the secrets and keys needed in our app.
Now, in our serverless.yml file, we can reference this parameter using the ${param:SECRET} syntax. We can see an example of this in the stores service in our example project:
environment:
  ACME_PASSWORD: ${param:ACME_PASSWORD}
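For a little more context, the parameter is referenced from the provider's environment block of a service that is linked to the dashboard through its org and app keys. A minimal sketch, with placeholder org, app, and runtime values rather than the example repo's actual ones:
org: my-org
app: my-app
service: stores
provider:
  name: aws
  runtime: nodejs12.x
  environment:
    # Resolved from the deployment profile attached to the target stage at deploy time
    ACME_PASSWORD: ${param:ACME_PASSWORD}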
Continuous Integration (CI)
Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build and deployed to a stage. Our deployment profiles will correspond to branches in our repository, and any time a push or merge occurs on the target branch, Serverless Framework Pro will deploy the correct stage.
There are many ways to manage branches and deployment stages. One strategy is to let developers deploy to a dev stage while working on a feature. Merges into master will automatically deploy to staging, and merges into prod will deploy to production.
Before we look at how to set this up, it is worth asking: how is a git-based CI setup better than just deploying to staging and prod from our local machines?
- deployments through CI are far simpler and easier to manage (with the help of deployment profiles). Being able to deploy to a different stage simply by merging a branch is very convenient. The alternative is managing credentials and environment variables on everyone's machines, which is potentially less secure and far more work.
- deployments through CI don’t forget to run tests
- deployments through CI are more visible than local deploys from a developer's machine. This is really important because if a deployment causes an issue, we want the timeline of events to be clear and visible.
- deployments through CI are guaranteed to be in sync with our git repository. When deploying from our local machine, it's possible to deploy code that is not checked in. That means the next deploy will likely overwrite the first deployment's non-checked-in code. CI ensures that this can never happen.
- If you have multiple services, you don’t need a big spreadsheet keeping track of what has been deployed where. With CI it’s always the same answer, the prod branch determines what is deployed in production. That is the one, highly visible path to production. There are no invisible backdoor deployments.
Ok, so let's set up an application and link it to our GitHub repo. The first step is to create an app in the Serverless dashboard, and set the default deployment profile to default.
Hold on a minute... What is the purpose of the default profile? Why not use the prod or staging profiles we made earlier?
The reason we have staging and prod profiles is that they are key points in our development workflow that we want to pre-configure with credentials and environment variables, making development simpler and more secure, and ensuring all tests, checks, and safeguards are applied automatically without developers having to remember them.
But when a developer is working on a feature, and they want to test it out on the cloud, we may decide to make it more flexible and less rigorous. For example, we can allow developers to use their own AWS credentials, or we could assign the default deployment profile to a dedicated developer AWS account provided by our company. In short, the default deployment profile should be seen as a flexible profile that is not designated to any specific stage in our workflow.
Ok, so let’s add some stages to our app by clicking app settings, and clicking stages on the side menu. Each deployment profile will correspond to a stage within our app.
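On the framework side, the stage a deployment targets usually comes from the --stage flag, with a fallback default in serverless.yml. A minimal sketch assuming the common ${opt:stage, 'dev'} pattern (not necessarily what the example project uses):
provider:
  name: aws
  # Ad-hoc developer deploys fall back to dev;
  # CI-driven deploys target staging or prod explicitly
  stage: ${opt:stage, 'dev'}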
Setting up our monorepo to deploy individual services from our GitHub Repository
There are 2 common approaches to managing multiple services on GitHub: 1 repo per service, or 1 repo for all your services (a monorepo). Our example project is structured as a monorepo, so let's look at how to configure our GitHub repository with Serverless Framework Pro to deploy multiple services.
On the services page of your app in Serverless Framework Pro, click the Add Service button, then the Deploy from GitHub button to set up our new service. It may ask you to install Serverless Framework as an integration on GitHub. Once that integration has been set up, you will be able to see all of your repositories. To add a service, select the repository that contains your project. Next you will have to select a folder as your base directory. Every folder that contains a serverless.yml file in your repository qualifies as a base directory. That means if you only have 1 serverless.yml file, only 1 option will be available to you, but if you have 4 serverless.yml files like we do in our example project, you will have 4 options, or 4 potential services to make.
You will also have to choose a trigger directory. By default, the service will be deployed when any change is made in the repository. This would make sense if we only had 1 serverless.yml file in our repository. But if you want to have multiple services in 1 repository with multiple serverless.yml files (a monorepo), then you will want deployments to be triggered only when changes are made to a particular folder.
In our example project we have the following folder structure, with each folder containing a serverless.yml file:
/resources/api
/resources/db
/services/books
/services/stores
If we are setting up the books service, changes in the stores service should not trigger a deployment of the books service. So we can set the trigger directory to /services/books so that only changes made in the books folder will cause this service to be deployed.
The last thing we will set is the branch and deployment profile mapping. We will set our master branch to deploy to staging, and our prod branch to deploy to production (the prod branch will have to exist in your GitHub repository before you can map it to your prod deployment profile).
(Animated demo: https://miro.medium.com/max/720/1*5M_Nxjf7Tbax8bQRHQDxDw.gif)
Once your service has been configured, you will be able to manually deploy it from the Serverless dashboard, or let GitHub trigger deployments as you merge features into master and merge master into prod.
Testing
In our example project, we have a few jest tests. Serverless Framework Pro will automatically run tests for you before deploying and will fail the deployment if the tests fail. All we need to do to set this up is have a script in our package.json called test:
{
  "scripts": {
    "test": "jest"
  }
}
This is great because we can now be confident that tests will always be run before deployments.
How services reference resources with Serverless Framework Pro Outputs
In our example project, we have separated our resources from our services. This allows us to separate rarely changing parts of our code (resources) from rapidly changing parts of our code (services). But how can our services reference our resources? In our database serverless.yml file, there is an outputs section:
outputs:
  arn:
    Fn::GetAtt:
      - db
      - Arn
  name: ${self:app}-${self:custom.base}
If our project is using Serverless Framework Pro, these values will be visible in our dashboard and made available to any other service in our application using the ${output:SERVICENAME.OUTPUTNAME} syntax. We can see an example of this in our books service serverless.yml file:
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Query
      - dynamodb:PutItem
    Resource: ${output:resource-db.arn}
environment:
  TABLE: ${output:resource-db.name}
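For context, the db referenced by Fn::GetAtt in the outputs above is the logical ID of the DynamoDB table defined in the database service. A minimal sketch of what that service might look like; the org, app, key schema, and billing mode are assumptions, not taken from the example repo:
org: my-org
app: my-app
service: resource-db
custom:
  base: db
provider:
  name: aws
resources:
  Resources:
    db:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:app}-${self:custom.base}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
outputs:
  arn:
    Fn::GetAtt:
      - db
      - Arn
  name: ${self:app}-${self:custom.base}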
Summary
Ok, let's take a step back and look at what we have set up and make some observations:
- We don’t need a .env file at all. All secrets, passwords, and resource references are handled by Serverless Framework Pro.
- Deployments to staging and production are automatic and easy.
- Keeping track of what gets deployed is now simpler since all deployments originate from checked-in code on GitHub branches.
- Deployments will always run through automated tests.
- Having a monorepo for multiple services is now pretty easy since we can trigger deployments based on changes to specific service folders.