Securing Serverless Architecture on AWS: Challenges and Solutions

Rahul Mahant 3rd Oct 2023 - 7 mins read

What is AWS serverless architecture?

A serverless architecture allows you to develop and deploy applications without the need to manage infrastructure. While your application still runs on servers, AWS takes care of server management.

One industry approach to addressing security challenges in serverless computing is AWS Lambda Layers combined with tools like the Serverless Framework. With Lambda Layers, you can manage shared code and libraries, ensuring consistent security policies and dependencies across functions.

The following are the challenges of serverless architecture, along with their solutions:

1) Cold start

A cold start happens when a new container that runs a Lambda function needs to boot up. .NET and Java have longer cold starts than other languages, taking up to a few seconds to get up and running. Prefer Python, Node.js, or Go for user-facing services.

If you use a virtual private cloud (VPC), you add another 10 seconds to some cold starts. A VPC is your own network within AWS, where you can keep private services such as a SQL database; using a VPC is the only way to reach that database if you do not want to open it to the public. The 10-second cold start is a consequence of attaching an Elastic Network Interface (ENI). How often this happens depends on the amount of memory you assign to the Lambda: with more memory you get better networking, which means less sharing of ENIs. If you assign 3 GB of memory, the Lambda's ENI won't be shared at all, and every cold start will take 10 seconds, though this should soon be resolved by AWS.


Don't use a VPC if you don't need access to private services or just because it looks more secure. Lambda is secure enough, and this is AWS's official recommendation.

Use Node.js, Python, and Go. If you use those and avoid VPCs, you’ll never even notice a cold start.

Use the Lambda Warmer library by Jeremy Daly. It periodically triggers a Lambda function, or several, to keep them warm.
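Lambda Warmer itself is a Node.js library; the core idea can be sketched in Python. The handler short-circuits on warm-up pings so the scheduled invocations cost almost nothing. (The `warmer` event flag mirrors the default payload the library sends, but treat the field name as an assumption and check the library's docs for your version.)

```python
import json


def handler(event, context=None):
    # Warm-up pings (e.g. from a scheduled rule or a warmer library
    # that marks its events with a flag) return immediately, without
    # running any business logic.
    if isinstance(event, dict) and event.get("warmer"):
        return {"warmed": True}

    # Normal request path.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello"}),
    }
```

A scheduled CloudWatch Events / EventBridge rule invoking the function every few minutes with that payload is enough to keep one container warm.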

2) Observability - monitoring, logging, and tracing

When errors crop up in serverless, they're much harder to resolve than in more traditional architectures. Serverless systems are distributed, which means every part of the system produces its own logs, and it can be difficult to make sense of them all.


Use a common correlation ID to identify which log entries belong to the same request.

Use the CloudWatch feature named 'Logs Insights' to query logs across log groups.

The AWS X-Ray service is indispensable. It lets you visualize connected services and trace calls as they flow through different parts of the system, so you can see the architecture, execution times, error rates, and so on.

Third-party services like Epsagon, Thundra, IOpipe, and Datadog are also extremely useful.
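The correlation-ID idea above can be sketched with a standard-library logging filter: every log line carries the request's ID, and an ID received from an upstream service is reused so logs across services can be joined. This is a minimal illustration, not a prescribed implementation; the `correlationId` event key is an assumed convention.

```python
import logging
import uuid


class CorrelationIdFilter(logging.Filter):
    """Injects the current request's correlation ID into every log record."""

    def __init__(self):
        super().__init__()
        self.correlation_id = "-"

    def filter(self, record):
        record.correlation_id = self.correlation_id
        return True


cid_filter = CorrelationIdFilter()
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(
    logging.Formatter("%(levelname)s [%(correlation_id)s] %(message)s")
)
stream_handler.addFilter(cid_filter)
logger = logging.getLogger("app")
logger.addHandler(stream_handler)
logger.setLevel(logging.INFO)


def lambda_handler(event, context=None):
    # Reuse an upstream ID if present, so logs emitted by different
    # services in the same request chain share one correlation ID.
    cid_filter.correlation_id = event.get("correlationId") or str(uuid.uuid4())
    logger.info("processing request")
    return {"correlationId": cid_filter.correlation_id}
```

When calling the next service downstream, pass the same ID along in the payload or headers.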

3) Connecting to SQL databases

One of the main issues with SQL databases is that if you don't want to give public access to the database, you must use a VPC, and we've already mentioned the main pitfalls that brings. Another issue is that opening a connection is expensive. That's why traditional systems use a connection pool.

A connection pool is a set of open connections: you take one when you need it and return it to the pool afterwards. In serverless there is nowhere to hold this pool, because Lambdas come and go. You can either open and close a connection on every invocation, which is expensive, or keep connections open and risk running out of them.


Use the serverless-mysql library by Jeremy Daly. At the end of each request, it checks whether the connection needs to be closed. It's not a perfect solution, but it's an acceptable one.
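serverless-mysql is a Node.js library, but the core trick of caching the connection outside the handler so warm invocations of the same container reuse it can be sketched in Python. The `connect` factory parameter is an illustrative stand-in for your real driver call (e.g. `pymysql.connect` bound to your database settings); serverless-mysql additionally monitors total connection usage and kills stale connections, which this sketch does not attempt.

```python
# Held at module level, outside the handler, so it survives across
# warm invocations of the same container.
_connection = None


def get_connection(connect):
    """Return the cached connection, creating it on first use.

    `connect` is a zero-argument factory that opens a new database
    connection; it is injectable here so the sketch is testable.
    """
    global _connection
    if _connection is None:
        _connection = connect()
    return _connection
```

On a cold start the factory runs once; every subsequent invocation in that container skips the expensive connect.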

4) DDoS and other attacks = wallet attack

Because serverless scales automatically, a DDoS attack means your bill will increase.


Set the following to limit your use of resources:

Lambda Concurrent Execution Limit

Lambda timeout

API Gateway throttling
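The three limits above can be set with the AWS CLI, roughly as follows. The function name, API ID, stage name, and numbers are placeholders, and the `--patch-operations` paths shown apply to API Gateway REST API stages; verify them against the CLI reference for your setup.

```shell
# Cap concurrent executions so an attack can't scale without bound.
aws lambda put-function-concurrency \
  --function-name my-function \
  --reserved-concurrent-executions 100

# Keep the timeout short so stuck invocations stop billing quickly.
aws lambda update-function-configuration \
  --function-name my-function --timeout 10

# Stage-level throttling on API Gateway.
aws apigateway update-stage \
  --rest-api-id abc123 --stage-name prod \
  --patch-operations op=replace,path='/*/*/throttling/rateLimit',value=100
```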

5) CloudFormation

CloudFormation is the AWS solution for infrastructure as code. It's used by most of the tools you need for serverless deployment, including the most popular one, the Serverless Framework. There's a limit of 200 resources per CloudFormation stack; that sounds like a lot, but you'll usually need several resources for each Lambda, so you'll reach this limit with around 30 to 35 Lambdas. That isn't much when you consider that you should favour small Lambdas.


The best way is to have one stack just for Lambdas that are part of the same microservice.

Use nested CloudFormation stacks; it's not ideal, but it works.

You can also use Terraform instead of CloudFormation, though it's much harder to work with than tools like the Serverless Framework.
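The nested-stack approach above might look like this in a hypothetical parent template, with each microservice's Lambdas split into its own child stack (the bucket and template names are placeholders):

```yaml
Resources:
  # Each child stack holds the Lambdas (and their supporting
  # resources) for one microservice, keeping every stack well
  # under the per-stack resource limit.
  OrdersServiceStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/orders-service.yaml
  BillingServiceStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/billing-service.yaml
```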

