Building Security Guardrails with AWS Resource Control Policies
AWS recently introduced Resource Control Policies (RCPs), powerful guardrails designed to enhance your organization’s infrastructure security while enabling teams to work flexibly within defined, secure boundaries. In this blog post, we'll cover the essentials of RCPs—from foundational setup to practical use cases like enforcing S3 object encryption and managing OIDC role assumptions with external services like GitLab.
Let's get into it!
Getting Started with RCPs 📚
RCPs enable centralized control over the maximum available permissions for resources within your organization. Someone misconfigured a policy allowing external access to your resources? RCPs can protect you. A team forgot to encrypt their data? RCPs can help with that!
If you've ever worked with AWS Service Control Policies (SCPs), you'll pick up on RCPs quickly! They're simply JSON policy documents with nearly identical syntax and capabilities. But there are some important distinctions to be made. Let's take a look.
- RCPs can only control permissions for a few resource types:
- Amazon S3
- AWS Key Management Service
- AWS Secrets Manager
- Amazon SQS
- AWS Security Token Service
- RCPs do not support the following policy elements:
- NotPrincipal
- NotAction
- RCPs cannot restrict kms:RetireGrant
When actions are performed in AWS, all applicable policies are evaluated in this order to determine whether the action is allowed or denied. An explicit deny at any step, including in an RCP, overrides any allow elsewhere in the chain.
- Default Deny
- Resource Control Policy (RCP)
- Service Control Policy (SCP)
- Resource-based Policies
- Identity-based Policies
- IAM Permissions Boundaries
- Session Policies
Enabling RCPs in AWS ☁️
Before RCPs can be used, you must have AWS Organizations set up. Then it's simply a matter of enabling the capability, which can be done in the console or via the command line as shown below.
# get the AWS Organizations root ID
$ export aws_root_id=$(aws --profile mgmt organizations list-roots | jq -r '.Roots[].Id')
# enable RCPs
$ aws --profile mgmt organizations enable-policy-type --root-id $aws_root_id --policy-type RESOURCE_CONTROL_POLICY
{
"Root": {
"Id": "r-xxxx",
"Arn": "arn:aws:organizations::111111111111:root/o-XXXXXXXXXX/r-xxxx",
"Name": "Root",
"PolicyTypes": [
{
"Type": "RESOURCE_CONTROL_POLICY",
"Status": "PENDING_ENABLE"
}
]
}
}
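Enabling the policy type is asynchronous, hence the PENDING_ENABLE status above. You can re-run the list-roots command from earlier until the status flips to ENABLED:
# confirm RCPs are enabled (status should change from PENDING_ENABLE to ENABLED)
$ aws --profile mgmt organizations list-roots | jq -r '.Roots[].PolicyTypes'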
After enabling, AWS attaches a default RCP named RCPFullAWSAccess to the root, every organizational unit (OU), and every account within the AWS Organization. This AWS-managed policy cannot be altered or deleted.
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "*",
"Resource": "*"
}
]
}
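If you want to see this AWS-managed default alongside your own policies, you can list all RCPs in the organization:
# list every RCP, including the AWS-managed RCPFullAWSAccess
$ aws --profile mgmt organizations list-policies --filter RESOURCE_CONTROL_POLICY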
Now that we're all set up, let's create our first policy!
Enforcing S3 Object Encryption 🔐 🪣
While S3 now offers default server-side encryption (SSE-S3), we can optionally enable encryption with a KMS key instead. For some organizations, this may be a compliance or regulatory requirement.
With an RCP, we can ensure that every object uploaded to an S3 bucket is encrypted with a KMS key and deny any upload that is not.
First, we'll write the JSON-formatted policy and save it to a file, e.g., enforce_s3_object_encryption.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "*",
"Condition": {
"Null": {
"s3:x-amz-server-side-encryption-aws-kms-key-id": "true"
}
}
}
]
}
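Optionally, you can lint the document with IAM Access Analyzer before creating it. This assumes a recent AWS CLI version that supports the RESOURCE_CONTROL_POLICY policy type:
# validate the RCP document for syntax and best-practice findings
$ aws --profile mgmt accessanalyzer validate-policy --policy-document file://enforce_s3_object_encryption.json --policy-type RESOURCE_CONTROL_POLICY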
Next, we'll create the policy, making it available in AWS Organizations. You must run this from your AWS Management account. For clarity, I'll specify the profile "mgmt" when running commands against the Management account. Note that this does not apply the policy yet.
$ aws --profile mgmt organizations create-policy --content file://enforce_s3_object_encryption.json --name enforceS3ObjectEncryption --type RESOURCE_CONTROL_POLICY --description "Denies unencrypted objects being added to S3 buckets"
{
"Policy": {
"PolicySummary": {
"Id": "p-xxxxxxxxxx",
"Arn": "arn:aws:organizations::111111111111:policy/o-xxxxxxxxxx/resource_control_policy/p-xxxxxxxxxx",
"Name": "enforceS3ObjectEncryption",
"Description": "Denies unencrypted objects being added to S3 buckets",
"Type": "RESOURCE_CONTROL_POLICY",
"AwsManaged": false
},
"Content": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Deny\",\"Principal\":\"*\",\"Action\":\"s3:PutObject\",\"Resource\":\"*\",\"Condition\":{\"Null\":{\"s3:x-amz-server-side-encryption-aws-kms-key-id\":\"true\"}}}]}"
}
}
Finally, we'll apply the policy to a particular OU. It's best practice to test policies within a dedicated OU rather than attaching them directly to the root. For this example, I'll target my Dev OU, which contains my Dev AWS account. RCPs do not affect the Management account, so you'll have to test from a different account that has already joined your AWS Organization.
You'll need the policy ID from the previous command.
$ aws --profile mgmt organizations attach-policy --policy-id p-xxxxxxxxxx --target-id ou-xxxx-xxxxxxxx
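To confirm the attachment, you can list the policies attached to the target OU:
# verify the RCP is attached to the OU
$ aws --profile mgmt organizations list-policies-for-target --target-id ou-xxxx-xxxxxxxx --filter RESOURCE_CONTROL_POLICY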
Now that the policy is in place, let's test it! Make sure to test from the AWS account you applied the policy to.
If you don't already have an S3 bucket handy, you can quickly create one. Just ensure encryption is set to Server-side encryption with Amazon S3 managed keys (SSE-S3). This is the default configuration when creating a bucket with the command below.
$ aws --profile dev s3api create-bucket --bucket rcp-enforce-s3-object-encryption-23948234234123
{
"Location": "/rcp-enforce-s3-object-encryption-23948234234123"
}
Now we can upload a file to the bucket.
# make file
$ touch datafile
# upload to bucket
$ aws --profile dev s3 cp datafile s3://rcp-enforce-s3-object-encryption-23948234234123/
Upon doing so, you should be denied with an error message similar to the one below:
upload failed: ./datafile to s3://rcp-enforce-s3-object-encryption-23948234234123/datafile An error occurred (AccessDenied) when calling the PutObject operation: User: arn:aws:iam::111111111111:user/dev_user is not authorized to perform: s3:PutObject on resource: "arn:aws:s3:::rcp-enforce-s3-object-encryption-23948234234123/datafile" with an explicit deny in a resource control policy
Nice! We've successfully blocked uploading objects to S3 when a KMS key is not used for encryption.
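Note that even explicitly requesting SSE-S3 on the upload is denied, since the condition specifically checks that a KMS key ID is present in the request:
# explicit SSE-S3 still fails because no KMS key ID accompanies the request
$ aws --profile dev s3 cp datafile s3://rcp-enforce-s3-object-encryption-23948234234123/ --sse AES256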
We can quickly create a KMS key and update our bucket to validate that objects encrypted with a KMS key can be uploaded.
$ aws --profile dev kms create-key
{
"KeyMetadata": {
"AWSAccountId": "319192286947",
"KeyId": "44d55009-ccb8-4beb-8e2c-9066fe3aa0d5",
"Arn": "arn:aws:kms:us-east-1:1111111111:key/44d55009-ccb8-4beb-8e2c-9066fe3aa0d5",
"CreationDate": "2024-11-14T17:37:37.540000-07:00",
"Enabled": true,
"Description": "",
"KeyUsage": "ENCRYPT_DECRYPT",
"KeyState": "Enabled",
"Origin": "AWS_KMS",
"KeyManager": "CUSTOMER",
"CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
"KeySpec": "SYMMETRIC_DEFAULT",
"EncryptionAlgorithms": [
"SYMMETRIC_DEFAULT"
],
"MultiRegion": false
}
}
Reference the KeyId from the previous command:
$ aws --profile dev s3api put-bucket-encryption --bucket rcp-enforce-s3-object-encryption-23948234234123 --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "44d55009-ccb8-4beb-8e2c-9066fe3aa0d5"}}]}'
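As a quick sanity check, you can confirm the bucket now defaults to SSE-KMS:
# confirm the bucket's default encryption configuration
$ aws --profile dev s3api get-bucket-encryption --bucket rcp-enforce-s3-object-encryption-23948234234123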
Let's try to upload the object again.
$ aws --profile dev s3 cp datafile s3://rcp-enforce-s3-object-encryption-23948234234123/
upload: ./datafile to s3://rcp-enforce-s3-object-encryption-23948234234123/datafile
Nice! The file was successfully uploaded.
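Alternatively, instead of changing the bucket default, you can satisfy the policy per object by passing the SSE-KMS flags on the upload itself:
# upload a single object with SSE-KMS specified explicitly
$ aws --profile dev s3 cp datafile s3://rcp-enforce-s3-object-encryption-23948234234123/ --sse aws:kms --sse-kms-key-id 44d55009-ccb8-4beb-8e2c-9066fe3aa0d5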
If you have CloudTrail with S3 data events enabled, you'll be able to see the access denied message from our original attempt.
"eventTime": "2024-11-15T01:09:40Z",
"eventSource": "s3.amazonaws.com",
"eventName": "PutObject",
"awsRegion": "us-east-1",
"sourceIPAddress": "xx.xxx.xxx.xx",
"userAgent": "[Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36]",
"errorCode": "AccessDenied",
"errorMessage": "User: arn:aws:iam::111111111111:user/dev_user is not authorized to perform: s3:PutObject on resource: \"arn:aws:s3:::rcp-enforce-s3-object-encryption-23948234234123/datafile.json\" with an explicit deny in a resource control policy",
"requestParameters": {
"X-Amz-Date": "20241115T010939Z",
"bucketName": "rcp-enforce-s3-object-encryption-23948234234123",
"X-Amz-Algorithm": "AWS4-HMAC-SHA256",
"x-amz-acl": "bucket-owner-full-control",
"X-Amz-SignedHeaders": "content-md5;content-type;host;x-amz-acl;x-amz-storage-class",
"Host": "rcp-enforce-s3-object-encryption-23948234234123.s3.us-east-1.amazonaws.com",
"X-Amz-Content-Sha256": "UNSIGNED-PAYLOAD",
"X-Amz-Expires": "300",
"key": "datafile.json",
"x-amz-storage-class": "STANDARD"
Enforcing Trusted OIDC Subjects from GitLab 🦊
Let's dive into one more use case. OpenID Connect (OIDC) enables authentication from other platforms and services into AWS. It's common for CI/CD pipelines from platforms like GitLab or GitHub to assume an AWS IAM role in an AWS account to deploy infrastructure and apps. Unfortunately, if the IAM role's trust policy is misconfigured, anyone with a GitLab or GitHub account could assume the role and potentially compromise the AWS account. You can check out a previous blog post about how this works.
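For reference, a trust policy for such a role might look like the sketch below, assuming an IAM OIDC provider for gitlab.com already exists in the account. Note the wildcard sub, which would let any gitlab.com project assume the role; this is exactly the kind of misconfiguration an RCP can backstop.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::111111111111:oidc-provider/gitlab.com"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringLike": {
"gitlab.com:sub": "project_path:*"
}
}
}
]
}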
We'll start by creating a new RCP file called enforce_oidc_subject.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "sts:AssumeRoleWithWebIdentity",
"Resource": "*",
"Condition": {
"StringNotEqualsIfExists": {
"gitlab.com:sub": "project_path:tyler/oidc-provider:ref_type:branch:ref:main"
}
}
}
]
}
This policy restricts the action sts:AssumeRoleWithWebIdentity unless the request comes from the main branch of a particular GitLab project.
If you host GitLab on-premises under a custom domain, i.e., not gitlab.com, you may instead want to enforce only that requests come from your custom GitLab domain rather than from a particular project or branch. This can be done like so:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "sts:AssumeRoleWithWebIdentity",
"Resource": "*",
"Condition": {
"StringNotEqualsIfExists": {
"gitlab.com:aud": "https://custom.domain.gitlab.com"
}
}
}
]
}
Next, we'll create the policy.
$ aws --profile mgmt organizations create-policy --content file://enforce_oidc_subject.json --name enforceOidcSubject --type RESOURCE_CONTROL_POLICY --description "Denies untrusted OIDC role assumption"
Then deploy the policy to the Dev OU using the policy ID from the previous command.
$ aws --profile mgmt organizations attach-policy --policy-id p-xxxxxxxxxx --target-id ou-xxxx-xxxxxxxx
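As before, you can confirm where the policy is attached:
# list the targets this RCP is attached to
$ aws --profile mgmt organizations list-targets-for-policy --policy-id p-xxxxxxxxxx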
Once applied, we can run a pipeline job from GitLab using a pipeline configuration file (.gitlab-ci.yml) like the one below. If you need help setting this up, refer to the blog post mentioned earlier, which walks through the process.
variables:
  AWS_DEFAULT_REGION: us-east-1
  AWS_PROFILE: "oidc"

oidc:
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  script:
    - aws sts get-caller-identity
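For this job to work, the oidc profile must exchange the GITLAB_OIDC_TOKEN for role credentials. Here's a minimal sketch of how you might configure that in the job's before_script, assuming a role named gitlab-oidc-role; the earlier blog post covers the full setup:
# write the ID token to a file and point an AWS profile at it
$ echo "${GITLAB_OIDC_TOKEN}" > /tmp/web_identity_token
$ aws configure set profile.oidc.role_arn arn:aws:iam::111111111111:role/gitlab-oidc-role
$ aws configure set profile.oidc.web_identity_token_file /tmp/web_identity_token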
Now, if the configuration is correct, we should get a successful run. This run is from the project tyler/oidc-provider.
$ aws sts get-caller-identity
{
"UserId": "AROAUUUKXxxxxxxxxxxxx:botocore-session-1731638060",
"Account": "1111111111",
"Arn": "arn:aws:sts::1111111111:assumed-role/gitlab-oidc-role/botocore-session-1731638060"
}
Cleaning up project directory and file based variables
00:01
Job succeeded
But if we try running from a different project, e.g., hacker/oidc-abuse, we'll get an error. Note that even though it's the RCP blocking this access, the error doesn't directly tell us that like before. Instead, we get a generic AccessDenied message.
$ aws sts get-caller-identity
An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentity
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
Additionally, we see the same generic AccessDenied message in the CloudTrail logs.
"eventTime": "2024-11-15T02:52:32Z",
"eventSource": "sts.amazonaws.com",
"eventName": "AssumeRoleWithWebIdentity",
"awsRegion": "us-east-1",
"sourceIPAddress": "xx.xxx.xxx.xxx",
"userAgent": "aws-cli/2.21.1 md/awscrt#0.22.0 ua/2.0 os/linux#5.15.154+ md/arch#x86_64 lang/python#3.12.6 md/pyimpl#CPython cfg/retry-mode#standard md/installer#docker md/distrib#amzn.2 md/prompt#off md/command#sts.get-caller-identity",
"errorCode": "AccessDenied",
"errorMessage": "An unknown error occurred",
"requestParameters": {
"roleArn": "arn:aws:iam::111111111111:role/gitlab-oidc-role",
"roleSessionName": "botocore-session-1731639152"
Wrap Up
This was a look at the all-new Resource Control Policies from AWS. These policies help provide much-needed security guardrails to protect resources at scale, especially in large organizations with potentially hundreds of AWS accounts and teams.
If you followed along and created resources, make sure to delete them. RCPs do not have a cost, but the S3 bucket and KMS key have a minimal associated cost.
# delete data in s3
$ aws --profile dev s3 rm s3://rcp-enforce-s3-object-encryption-23948234234123/ --recursive
delete: s3://rcp-enforce-s3-object-encryption-23948234234123/datafile
# delete s3 bucket
$ aws --profile dev s3api delete-bucket --bucket rcp-enforce-s3-object-encryption-23948234234123
# delete kms key (7 days is soonest)
$ aws --profile dev kms schedule-key-deletion --key-id 44d55009-ccb8-4beb-8e2c-9066fe3aa0d5 --pending-window-in-days 7
{
"KeyId": "arn:aws:kms:us-east-1:1111111111:key/44d55009-ccb8-4beb-8e2c-9066fe3aa0d5",
"DeletionDate": "2024-11-21T17:46:37.181000-07:00",
"KeyState": "PendingDeletion",
"PendingWindowInDays": 7
}
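If you're done testing, you can also detach and delete the RCPs themselves from the Management account:
# detach the RCP from the OU, then delete it
$ aws --profile mgmt organizations detach-policy --policy-id p-xxxxxxxxxx --target-id ou-xxxx-xxxxxxxx
$ aws --profile mgmt organizations delete-policy --policy-id p-xxxxxxxxxx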