Mistakes to Sidestep When Going For A Serverless Design

October 14, 2020

Moving from legacy monolithic coding patterns to a distributed microservice design can get tricky. You can't account for every complication in advance, but arming yourself with some basic knowledge can go a long way on your serverless journey. Let's explore the prime areas of focus when moving to serverless.

Function Design:

Breaking down a giant application into individual functions that each perform one specific task is the mantra of serverless design, but determining which services deserve their own independent functions is key.

Points to consider when designing functions:

1. Understand the function's purpose in terms of the end user's interaction. Does it perform a single, efficient task that serves its role in the overall scheme? A Single Responsibility Principle (SRP) design should be your aim.

2. Gauge the performance of a function as it integrates with the entire system. A function that does very little can end up costing as much as one programmed for a more complex task, because serverless billing is based on compute time, not on how simple the function is. Avoid splitting services into smaller functions that offer little value.

3. Don’t reinvent the wheel. Utilize open-source tools for your use case if available.

4. The choice of programming language has a compounding effect on how well the system performs. Interpreted languages such as Python and Node.js typically start up faster, with shorter cold starts, than compiled, VM-based languages such as Java and C#.

5. Dependencies between functions should be minimal. An update to one function shouldn't affect other services in the process. Shared libraries that force updates to all functions negate the idea of a non-monolithic design.

6. Microservice designs are largely asynchronous, so the possibility of failure is always looming. Implementing a Dead Letter Queue (DLQ) captures failed events so they can be processed later (a minimal sketch follows this list).

7. Keep synchronous functions to a minimum.
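As one way to implement point 6, here is a minimal sketch that attaches an SQS dead letter queue to a Lambda function's asynchronous invocations using boto3. The function name and queue ARN are hypothetical placeholders, not values from this article.

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function and queue names, for illustration only.
FUNCTION_NAME = "order-processor"
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:order-processor-dlq"

# Route events that fail all retries for this asynchronously invoked
# function to an SQS dead letter queue, so they can be inspected and
# replayed later instead of being lost.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    DeadLetterConfig={"TargetArn": DLQ_ARN},
)
```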

Security Blunders:

Security is critical because a distributed serverless design increases the number of potential points of breach. The responsibility for securing your code and data lies in your hands, not your cloud provider's. AWS provides tools such as VPCs and API authorization for securing your architecture, but implementing them effectively is your responsibility.

These areas commonly compromise security:

1. Not following the “least privilege” principle when assigning permissions to functions is a very common issue. Granting only specific access ensures that a service or function cannot do anything other than what it was designed to do (an example policy follows this list).

2. Secrets stored as plain text in environment variables. Always encrypt secret values; fetching them from a secrets store during cold starts and invalidating them every few minutes is one way to solve this issue (a caching sketch follows this list).

3. Using third-party libraries that fail to meet your security requirements is another common vulnerability that leaves your system open to attack.
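For point 1, here is a minimal sketch of attaching a least-privilege inline policy to a function's execution role with boto3. The role name and DynamoDB table ARN are hypothetical; in practice you would scope the actions and resources to exactly what your function touches.

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical role and table names, for illustration only.
ROLE_NAME = "order-processor-role"
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/orders"

# Grant only the specific actions this function needs, on one table,
# rather than a broad dynamodb:* or administrator policy.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
            "Resource": TABLE_ARN,
        }
    ],
}

iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="orders-table-access",
    PolicyDocument=json.dumps(least_privilege_policy),
)
```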
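For point 2, one possible sketch: fetch an encrypted secret from SSM Parameter Store when the execution environment cold starts, then re-fetch after a short TTL instead of keeping it as plain text in an environment variable. The parameter name and TTL are illustrative assumptions.

```python
import time

import boto3

ssm = boto3.client("ssm")

# Hypothetical parameter name and refresh interval, for illustration only.
PARAM_NAME = "/my-app/prod/db-password"
CACHE_TTL_SECONDS = 300

# Module-level cache: populated on cold start, reused across warm invocations.
_cache = {"value": None, "fetched_at": 0.0}


def get_secret() -> str:
    """Return the decrypted secret, refreshing it every few minutes."""
    now = time.time()
    if _cache["value"] is None or now - _cache["fetched_at"] > CACHE_TTL_SECONDS:
        response = ssm.get_parameter(Name=PARAM_NAME, WithDecryption=True)
        _cache["value"] = response["Parameter"]["Value"]
        _cache["fetched_at"] = now
    return _cache["value"]
```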

Storage and Data Access:

There are a variety of ways to store your data, whether raw or transformed. The scope of managing data and its security patterns broadens when you have a distributed system, and your data retrieval and write patterns can have a powerful impact on the system depending on how you strategize their storage.

These practices cause poor function performance and weaken data security:

1. Using the same relational database for all functions. Each microservice may perform better when given a database suited to its data: a service backed by a NoSQL database will perform vastly better for key-value data than one backed by a relational database. Explore performance metrics across relational, NoSQL, and GraphQL-based options for the data type you are working with.

2. Data shared directly between microservices instead of through APIs. Accessing data sources via APIs keeps data flow centralized and therefore easier to secure.

Consider implementing a CQRS (Command Query Responsibility Segregation) architecture for a data store. It separates reads, handled by queries, from writes, handled by commands. This approach provides security, scalability, and performance benefits.
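A minimal sketch of that command/query split, assuming hypothetical write and read DynamoDB tables; the asynchronous projection from the write model into the read model is only noted in a comment.

```python
from dataclasses import dataclass
from typing import Optional

import boto3

dynamodb = boto3.resource("dynamodb")

# Hypothetical tables: commands write to the system of record,
# queries read from a table shaped for fast lookups.
write_table = dynamodb.Table("orders-write")
read_table = dynamodb.Table("orders-read")


@dataclass
class PlaceOrder:
    """A command: expresses intent to change state, returns no data."""
    order_id: str
    customer_id: str
    total: str


def handle_place_order(cmd: PlaceOrder) -> None:
    """Command side: validates and writes to the write model."""
    write_table.put_item(
        Item={"pk": cmd.order_id, "customer_id": cmd.customer_id, "total": cmd.total}
    )
    # In a fuller design, a DynamoDB stream or event would project this
    # write into the read model asynchronously.


def get_order(order_id: str) -> Optional[dict]:
    """Query side: reads from the read-optimized model, never writes."""
    response = read_table.get_item(Key={"pk": order_id})
    return response.get("Item")
```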

Too Little Done For Observability and Monitoring:

Serverless systems are known for not providing easily accessible feedback. Logging at the function level is needed to see how each service interacts with the others, and proactive monitoring prevents imminent failures and keeps the system resilient.

Common pitfalls when dealing with monitoring:

1. Not setting up logging tools can create a disaster when something fails: the point of failure stays hidden and the problem will keep recurring until it is identified. An ELK stack (Elasticsearch, Logstash, and Kibana) or Splunk provides just the solution for logging a distributed system (a structured-logging sketch follows this list).

2. Failure to periodically monitor baseline parameters. Some example parameters we recommend monitoring for AWS services (an example alarm follows this list) are:

a. For Lambda: error rate, throttle count, regional concurrency.

b. For SQS: Message age.

c. For API Gateway: Success rate, 4XX rates, 5XX rates.

d. Alerts and flow rate for event-processing pipelines, e.g., Kinesis and DynamoDB Streams.

3. Tracing tools are rarely set up at the time of function implementation. Tracing tells you where your flow is broken and the latency at each step. Zipkin and Jaeger are popular open-source tools designed specifically for this.
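For point 1, a sketch of structured JSON logging inside a Lambda handler, so an aggregator such as the ELK stack or Splunk can index and search the fields; the `order_id` event field is a hypothetical example.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def handler(event, context):
    # Emit structured (JSON) log lines so a log aggregator can index
    # them per function and per request.
    logger.info(json.dumps({
        "message": "order received",
        "function": context.function_name,
        "request_id": context.aws_request_id,
        "order_id": event.get("order_id"),  # hypothetical event field
    }))
    return {"statusCode": 200}
```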
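For point 2, a sketch of a CloudWatch alarm on the Lambda Errors metric using boto3. The function name and SNS topic are hypothetical, and the same pattern applies to throttles, SQS message age, or API Gateway 5XX rates by swapping the namespace and metric.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical function name and alert topic, for illustration only.
FUNCTION_NAME = "order-processor"
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ops-alerts"

# Alarm when the function records any errors over three consecutive
# one-minute periods and notify the on-call topic.
cloudwatch.put_metric_alarm(
    AlarmName=f"{FUNCTION_NAME}-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[ALERT_TOPIC_ARN],
)
```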

Overthinking Vendor Lock-In:

Developing on a specific cloud platform naturally raises the question of how dependent the architecture is on that provider. The less reliant you are on your provider's products, the more easily you can move between cloud platforms. Having a clear idea of what works well for the system and the ultimate end goal of your product can help alleviate this pain point.

What to think about when designing on the cloud with a vendor:

1. Choose a programming language that is widely supported by other providers.

2. Evaluate the trade-offs between cost and ease of implementation when choosing a cloud-specific service versus a cloud-agnostic one. For instance, when setting up tracing, AWS X-Ray works well on AWS (obviously), while a service such as Epsagon is agentless and comes with various third-party integrations that you may require for your functions.

At the end of the day, concerns about vendor lock-in boil down to cost and performance. A cloud-specific service may couple better with your architecture and be a more cost-efficient path to your end goal than a third-party service that works with all providers. If you wish to retain the ability to move between vendors, weigh what matters most, so you can be confident in your initial serverless architecture.
