awslabs / aws-config-rules
[Node, Python, Java] Repository of sample Custom Rules for AWS Config.
Home Page: http://aws.amazon.com/config/
License: Creative Commons Zero v1.0 Universal
Hey guys,
I've been using a lot of the rules you provide here for quite some time (thanks!!), but I have issues with some of the rules, since they only work when I invoke them on demand.
iam-inactive-user.py
iam-unused-keys.py
iam-unused-user.py
....(and so on)
These AWS Config rules are triggered on resource change, but...
Do you have the same problems? I could maybe change the trigger model to periodic, but then I would need to rewrite the whole code, since there would be no configurationItem in the invokingEvent.
Any thoughts on this? Thanks guys!
best,
Markus
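For anyone converting these rules to a periodic trigger: the invoking event then carries no configurationItem, so the Lambda has to enumerate the users itself (e.g. via iam.list_users()) and put one evaluation per user. A rough sketch of just the decision logic, kept separate from the boto3 calls so it can be unit-tested; the PasswordLastUsed field name is assumed from the IAM ListUsers/GetUser response shape:

```python
from datetime import datetime, timedelta, timezone

def is_user_inactive(password_last_used, now=None, max_inactive_days=90):
    """Decide whether an IAM user counts as inactive.

    `password_last_used` is the PasswordLastUsed value a periodic rule
    would read from iam.list_users() / iam.get_user(); it is absent for
    users who have never logged in. Returns True when the last login is
    missing or older than `max_inactive_days`.
    """
    now = now or datetime.now(timezone.utc)
    if password_last_used is None:
        return True  # never logged in
    return (now - password_last_used) > timedelta(days=max_inactive_days)

def compliance_for_user(user, now=None, max_inactive_days=90):
    # `user` is shaped like an entry from iam.list_users()
    if is_user_inactive(user.get("PasswordLastUsed"), now, max_inactive_days):
        return "NON_COMPLIANT"
    return "COMPLIANT"
```

A periodic handler would then loop over the list_users pages, call compliance_for_user for each user, and send the results with put_evaluations, using the notificationCreationTime from the invoking event as the OrderingTimestamp.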
The desired instance type rule does not seem to work using desiredInstanceType as a parameter: after it finishes evaluating, it says "no results are available." I'm not using t2.smalls in my environment, so I would expect multiple violations. I tried changing instance types because I figured that would trigger the rule, but the results didn't change. When I looked at the monitoring tab for the Lambda function, it didn't appear that the function was invoked, even though I specified the ARN of the function in the rule.
The ConfigEvent class from aws-lambda-java-events has had one of its methods renamed, and the source code here needs to be updated accordingly:
isEventLeftScope -> getEventLeftScope
The exceptionList parameter defined under https://github.com/awslabs/aws-config-rules/blob/master/python/VPC_SG_OPEN_ONLY_TO_AUTHORIZED_PORTS/parameters.json
does not accept more than one security group ID for whitelisting.
I need help with the below, as I am trying to set up a custom Config rule; my code follows. Please help.
import json
import boto3

APPLICABLE_RESOURCES = ["AWS::EC2::EIP"]

def evaluate_compliance(configuration_item, rule_parameters):
    if configuration_item["resourceType"] not in APPLICABLE_RESOURCES:
        return "NOT_APPLICABLE"
    config = boto3.client("config", region_name="us-east-1")
    resource_information = config.get_resource_config_history(
        resourceType=configuration_item["resourceType"],
        resourceId=configuration_item["resourceId"]
    )
    client = boto3.client("ec2", region_name="us-east-1")
    addresses_dict = client.describe_addresses()
    for eip_dict in addresses_dict["Addresses"]:
        if "NetworkInterfaceId" in eip_dict:
            return "COMPLIANT"
    return "NON_COMPLIANT"

def lambda_handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    configuration_item = invoking_event["configurationItem"]
    rule_parameters = json.loads(event["ruleParameters"])
    result_token = "No token found."
    if "resultToken" in event:
        result_token = event["resultToken"]
    config = boto3.client("config", region_name="us-east-1")
    config.put_evaluations(
        Evaluations=[
            {
                "ComplianceResourceType": configuration_item["resourceType"],
                "ComplianceResourceId": configuration_item["resourceId"],
                "ComplianceType": evaluate_compliance(configuration_item, rule_parameters),
                "Annotation": "EIP is Free",
                "OrderingTimestamp": configuration_item["configurationItemCaptureTime"]
            },
        ],
        ResultToken=result_token
    )
This config rule shows compliant even for non-compliant resources.
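A likely cause: evaluate_compliance loops over every address returned by describe_addresses() and returns COMPLIANT as soon as any EIP in the region has a NetworkInterfaceId, regardless of which EIP the rule was invoked for. A sketch of a fix that inspects only the resource in hand, assuming the AWS::EC2::EIP configuration item exposes networkInterfaceId/instanceId fields (worth double-checking against a real configuration item):

```python
def evaluate_compliance(configuration_item, rule_parameters=None):
    """Return COMPLIANT only when *this* EIP is attached to something.

    Reads the configuration already delivered in the invoking event
    instead of scanning every address in the account.
    """
    if configuration_item["resourceType"] != "AWS::EC2::EIP":
        return "NOT_APPLICABLE"
    configuration = configuration_item.get("configuration") or {}
    # An associated EIP carries a network interface (or instance) reference.
    if configuration.get("networkInterfaceId") or configuration.get("instanceId"):
        return "COMPLIANT"
    return "NON_COMPLIANT"
```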
Use of get_client
parameter naming conventions
gherkin
RDK
...
Originally posted by @HieronymusLex in #112 (comment)
Please provide comments on the gherkin if any.
Description
Check if all the IAM users have a permissions boundary attached. The rule is NON_COMPLIANT if a permissions boundary is not attached to the IAM user.
Trigger
Periodic
Reports on:
AWS::IAM::User
Rule Parameters:
PermissionBoundaryName
Optional
Enter the permission boundary name(s) to check against the IAM users' policies (separated by ",").
Scenarios:
Scenario 1:
Given: No IAM users in Account
Then: Return NOT_APPLICABLE
Scenario 2:
Given: IAM users present in Account
And: No permission boundary policy in the Account
Then: Return NOT_APPLICABLE
Scenario 3:
Given: IAM users present in Account
And: Permission boundary policies present in Account
And: IAM user does not have permission boundary attached
Then: Return NON_COMPLIANT
Scenario 4:
Given: IAM users present in Account
And: Permission boundary policies present in Account
And: IAM user does have permission boundary attached
Then: Return COMPLIANT
Scenario 5:
Given: Valid Rule parameter PermissionBoundaryName provided
And: IAM users present in Account
And: IAM user does not have permission boundary attached
Then: Return NON_COMPLIANT
Scenario 6:
Given: Valid Rule parameter PermissionBoundaryName provided
And: IAM users present in Account
And: IAM user does have permission boundary attached
And: The permission boundary attached to the user is not one listed in the parameter.
Then: Return NON_COMPLIANT
Scenario 7:
Given: Valid Rule parameter PermissionBoundaryName provided
And: IAM users present in Account
And: IAM user does have permission boundary attached
And: The permission boundary attached to the user is one listed in the parameter.
Then: Return COMPLIANT
Scenario 8:
Given: Rule parameter PermissionBoundaryName provided
And: The parameter value is not valid
Then: Return ERROR
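The per-user decision in the scenarios above can be sketched as a small pure function; the PermissionsBoundary / PermissionsBoundaryArn field names are assumed from the shape of the iam.get_user() response, which a periodic rule would call for each user:

```python
def evaluate_user_boundary(user, allowed_boundary_names=None):
    """Map one IAM user to a compliance type for the scenarios above.

    `user` is shaped like the User dict from iam.get_user(); when a
    permissions boundary is attached it contains
    {"PermissionsBoundary": {"PermissionsBoundaryArn": "..."}}.
    `allowed_boundary_names` mirrors the PermissionBoundaryName rule
    parameter (already split on ",").
    """
    boundary = user.get("PermissionsBoundary")
    if not boundary:
        return "NON_COMPLIANT"
    if not allowed_boundary_names:
        return "COMPLIANT"
    # Compare on the policy name, i.e. the last segment of the boundary ARN.
    boundary_name = boundary["PermissionsBoundaryArn"].rsplit("/", 1)[-1]
    if boundary_name in allowed_boundary_names:
        return "COMPLIANT"
    return "NON_COMPLIANT"
```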
'int' object has no attribute '__getitem__': TypeError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 40, in lambda_handler
    invoking_event = json.loads(event["invokingEvent"])
TypeError: 'int' object has no attribute '__getitem__'
We should update aws-config-rules/RULES.md to be organized based on the "CIS Amazon Web Services Foundations" benchmark (i.e., include the section/subsection numbers).
The goal is to make audits easy to tie out with what is covered.
Use case is to ensure that all KMS keys that have a deletion wait period configured have a sufficiently long wait period. The time period should be configurable and above the threshold defined by company policy, for example 30 days.
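A sketch of how such a rule could decide compliance, assuming it reads each key via kms.describe_key(), whose KeyMetadata carries KeyState and (for keys pending deletion) DeletionDate:

```python
from datetime import datetime, timedelta, timezone

def deletion_window_long_enough(key_metadata, min_days=30, now=None):
    """Check the remaining deletion wait period of one KMS key.

    `key_metadata` is shaped like KeyMetadata from kms.describe_key():
    a key scheduled for deletion has KeyState "PendingDeletion" and a
    DeletionDate. Keys not pending deletion are treated as compliant,
    since they have no wait period to check. `min_days` mirrors the
    company-policy threshold parameter.
    """
    now = now or datetime.now(timezone.utc)
    if key_metadata.get("KeyState") != "PendingDeletion":
        return True
    remaining = key_metadata["DeletionDate"] - now
    return remaining >= timedelta(days=min_days)
```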
The key error seems to be different now. I had to change it to match configurationItemsDiff
The lambda errors with the following
'NoneType' object is not subscriptable: TypeError
Traceback (most recent call last):
  File "/var/task/RDS_NOT_PUBLIC.py", line 315, in lambda_handler
    compliance_result = evaluate_compliance(event, configuration_item, valid_rule_parameters)
  File "/var/task/RDS_NOT_PUBLIC.py", line 80, in evaluate_compliance
    is_publicly_accessible = configuration_item["configuration"]["publiclyAccessible"]
TypeError: 'NoneType' object is not subscriptable
any ideas?
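The configuration body is typically None when the resource has been deleted (configurationItemStatus is ResourceDeleted) or when the item arrived via an oversized-change notification. A defensive sketch for the top of evaluate_compliance, using the publiclyAccessible check from the traceback above as the example:

```python
def evaluate_compliance(configuration_item):
    """Guard against configuration items whose "configuration" is None.

    This happens e.g. for deleted resources, which otherwise raise
    TypeError: 'NoneType' object is not subscriptable.
    """
    status = configuration_item.get("configurationItemStatus", "")
    configuration = configuration_item.get("configuration")
    if status in ("ResourceDeleted", "ResourceDeletedNotRecorded") or configuration is None:
        return "NOT_APPLICABLE"
    if configuration.get("publiclyAccessible"):
        return "NON_COMPLIANT"
    return "COMPLIANT"
```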
Similar to rds_db_instance_encrypted.js, but needs to check EBS volumes.
Request from @slashrun: New rule to verify that the EFS filesystems are encrypted.
Please provide comments on the gherkin if any.
Description
Check whether Amazon EFS filesystems are configured to encrypt the file data using AWS Key Management Service (AWS KMS).
Trigger
Periodic (note that Amazon EFS is not a supported resource by AWS Config)
Reports on:
AWS::EFS::FileSystem
Feature:
In order to: protect data at rest
As: a Security Officer
I want: To ensure that all my EFS filesystems are encrypted, optionally with a specific KMS key.
Rule Parameters:
| Parameter Name | Type     | Description                                                          |
| -------------- | -------- | -------------------------------------------------------------------- |
| KmsKeyId       | Optional | ID or ARN of the KMS key that is used to encrypt the EFS filesystem. |
Scenarios:
Scenario 1:
Given: No EFS filesystem is present
Then: Return NOT_APPLICABLE
Scenario 2:
Given: At least one EFS filesystem is present
And: The "Encrypted" key is set to False (or not present) on DescribeFileSystems
Then: Return NON_COMPLIANT on this EFS Filesystem
Scenario 3:
Given: At least one EFS filesystem is present
And: The "Encrypted" key is set to True on DescribeFileSystems
And: KmsKeyId parameter is not configured
Then: Return COMPLIANT on this EFS Filesystem
Scenario 4:
Given: At least one EFS filesystem is present
And: The "Encrypted" key is set to True on DescribeFileSystems
And: KmsKeyId parameter is configured
And: KmsKeyId key on DescribeFileSystems is not matching KmsKeyId parameter (or not present)
Then: Return NON_COMPLIANT on this EFS Filesystem
Scenario 5:
Given: At least one EFS filesystem is present
And: The "Encrypted" key is set to True on DescribeFileSystems
And: KmsKeyId parameter is configured
And: KmsKeyId key on DescribeFileSystems is matching KmsKeyId parameter
Then: Return COMPLIANT on this EFS Filesystem
DescribeFileSystems API doc
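Since EFS is not a supported Config resource type, a periodic rule would call efs.describe_file_systems() and evaluate each entry. A sketch of the per-filesystem decision matching the scenarios above; the Encrypted and KmsKeyId field names come from the DescribeFileSystems response:

```python
def evaluate_filesystem(fs, kms_key_param=None):
    """Map one DescribeFileSystems entry to a compliance type.

    `fs` is shaped like an element of FileSystems from
    efs.describe_file_systems(): it carries "Encrypted" (bool) and, for
    encrypted filesystems, "KmsKeyId" (a key ARN). `kms_key_param`
    mirrors the optional KmsKeyId rule parameter.
    """
    if not fs.get("Encrypted"):
        return "NON_COMPLIANT"          # scenario 2
    if not kms_key_param:
        return "COMPLIANT"              # scenario 3
    # Accept either the full key ARN or the bare key ID (scenarios 4-5).
    key = fs.get("KmsKeyId", "")
    if key == kms_key_param or key.endswith("key/" + kms_key_param):
        return "COMPLIANT"
    return "NON_COMPLIANT"
```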
Hi guys,
One of your rules that checks whether users have MFA enabled wrongly evaluates CLI users that don't have a valid login profile and hence can't have MFA. I have made some changes to the code so that it only looks at relevant users:
import json
import boto3

APPLICABLE_RESOURCES = ["AWS::IAM::User"]

def evaluate_compliance(configuration_item):
    if configuration_item["resourceType"] not in APPLICABLE_RESOURCES:
        return "NOT_APPLICABLE"
    user_name = configuration_item["configuration"]["userName"]
    iam = boto3.client("iam")
    try:
        iam.get_login_profile(UserName=user_name)
    except iam.exceptions.NoSuchEntityException:
        # CLI-only user: no console login profile, so console MFA does not apply.
        return "COMPLIANT"
    mfa = iam.list_mfa_devices(UserName=user_name)
    if len(mfa["MFADevices"]) > 0:
        return "COMPLIANT"
    return "NON_COMPLIANT"

def lambda_handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    configuration_item = invoking_event["configurationItem"]
    result_token = "No token found."
    if "resultToken" in event:
        result_token = event["resultToken"]
    config = boto3.client("config")
    config.put_evaluations(
        Evaluations=[
            {
                "ComplianceResourceType": configuration_item["resourceType"],
                "ComplianceResourceId": configuration_item["resourceId"],
                "ComplianceType": evaluate_compliance(configuration_item),
                "Annotation": "The user doesn't have MFA enabled.",
                "OrderingTimestamp": configuration_item["configurationItemCaptureTime"]
            },
        ],
        ResultToken=result_token
    )
Can someone explain what parameters need to be passed to this script? For example, to block ports 20-21.
Use case is a detective control to ensure that a whitelist of KMS keys are not being scheduled for deletion.
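The detective check itself reduces to: for each whitelisted key, read its state (assumed via kms.describe_key(), whose KeyMetadata.KeyState is "PendingDeletion" for a scheduled key) and flag any protected key that is scheduled. A sketch of that decision, separated from the boto3 calls:

```python
def keys_pending_deletion(key_states, protected_key_ids):
    """Return the protected key IDs that are scheduled for deletion.

    `key_states` maps key ID -> KeyState as reported by
    kms.describe_key() for each key; `protected_key_ids` is the
    whitelist from the rule parameter. Anything returned here should be
    flagged NON_COMPLIANT (or alerted on).
    """
    return [
        key_id for key_id in protected_key_ids
        if key_states.get(key_id) == "PendingDeletion"
    ]
```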
Hello folks,
I am trying to set up an AWS Config custom rule for security groups, with reference to this blog: https://aws.amazon.com/blogs/security/how-to-monitor-aws-account-configuration-changes-and-api-calls-to-amazon-ec2-security-groups/. I am trying method 1, and step 4 says to create the Python Lambda function using the code from the AWS Config rules library on GitHub. When I checked the code at the source (https://github.com/awslabs/aws-config-rules/blob/master/python/ec2_security_group_ingress.py), it returns a 404 page not found. It would be great if I could get the code for testing.
Much appreciate your help on this!!!
getting an error when running the lambda function
An error occurred (ValidationException) when calling the PutEvaluations operation: 1 validation error detected: Value ' ... 'evaluations' failed to satisfy constraint: Member must have length less than or equal to 100: ValidationException
Traceback (most recent call last):
  File "/var/task/IAM_USER_MFA_ENABLED.py", line 428, in lambda_handler
    AWS_CONFIG_CLIENT.put_evaluations(Evaluations=evaluations, ResultToken=resultToken, TestMode=testMode)
  File "/var/runtime/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
any ideas?
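The ValidationException says it directly: PutEvaluations accepts at most 100 evaluations per call, so a rule that evaluates many users has to send them in chunks. A minimal sketch of the batching (the client passed in would be a boto3 "config" client):

```python
def chunked(evaluations, size=100):
    """Split a list of evaluation dicts into PutEvaluations-sized batches."""
    for start in range(0, len(evaluations), size):
        yield evaluations[start:start + size]

def put_all_evaluations(config_client, evaluations, result_token):
    # Send at most 100 evaluations per API call to satisfy the
    # PutEvaluations constraint from the ValidationException above.
    for batch in chunked(evaluations):
        config_client.put_evaluations(Evaluations=batch, ResultToken=result_token)
```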
Hey guys,
anyone having the same issue? The rules that check for inactive users (or users that have never logged in) are change-triggered. So when somebody logs in, there is no change to the resource and it won't get evaluated again, so the user is still shown as inactive or unused. (Both rules act the same way.)
When you change the trigger type to periodic, there has to be some different code, since you won't get the resource as an input parameter, correct?
best regards,
Markus
Use case is a detective control that alerts personnel that someone has scheduled a key deletion. Personnel are then able to act on the scheduled-deletion alert to determine whether any remediation is required.
Without the required and enforced alert, a scheduled key deletion may not be detected immediately.
ec2_security_group_ingress.py checks fail, and the compliance status keeps being reported as NON_COMPLIANT with the annotation message "authorize_security_group_ingress failure on group", even if the monitored security group only has rules for ports 80 and 443.
The code started to work properly after adding Ipv6Ranges to the security group definitions in REQUIRED_PERMISSIONS, as follows:
REQUIRED_PERMISSIONS = [
    {
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [],
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        "PrefixListIds": [],
        "Ipv6Ranges": []
    },
    {
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [],
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        "PrefixListIds": [],
        "Ipv6Ranges": []
    }
]
Without adding this, the function was not properly parsing and populating the authorize_permissions list from the monitored security group.
I'm testing multiple of these; I'll use the MFA test as an example.
I get "no results available".
I'm probably doing something simple wrong, but any pointers on why I would get no results?
Within the code and also in RULES.md, add comments that indicate
* what type of rule this is (periodic/change-triggered),
* the recommended scope (resource type); this can be high level, like IAM, but is needed to avoid log spamming and complicated troubleshooting, and
* the IAM policy needed.
Would be great to have a script similar to "ec2-exposed-group.py" but for NACLs. In other words, be able to detect 0.0.0.0/0 in either ingress or egress entries and revoke that entry from the NACL.
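The detection half could look like the sketch below, operating on the entry dicts returned by ec2.describe_network_acls(); the remediation half would then call delete_network_acl_entry with each offending entry's RuleNumber and Egress flag:

```python
def open_acl_entries(network_acl):
    """Find NACL entries that allow traffic from/to 0.0.0.0/0 (or ::/0).

    `network_acl` is shaped like one element of NetworkAcls from
    ec2.describe_network_acls(): each entry carries RuleNumber,
    RuleAction ("allow"/"deny"), Egress (bool) and a CidrBlock or
    Ipv6CidrBlock. Returns the offending entries.
    """
    return [
        entry for entry in network_acl.get("Entries", [])
        if entry.get("RuleAction") == "allow"
        and (entry.get("CidrBlock") == "0.0.0.0/0"
             or entry.get("Ipv6CidrBlock") == "::/0")
    ]
```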
Config rule executed on configuration changes that only alerts on ports not defined as approved. Example: ports 21 and 443 are approved, and an SG update/addition opens port 80.
Hi
So I went through the config_enabled_checker based script and followed the documentation. The Lambda function seems to be running periodically; however, in the CloudWatch logs there is an error:
invokingEvent.s3ObjectKey is not defined
Changing the default encryption (None -> AES-256 or AES-256 -> None) on the S3 bucket does not trigger rule re-evaluation, although the template states that it is change-triggered:
# Trigger Type: Change Triggered
Re-evaluating via the button inside the rule details after a change presents the correct evaluation results, so I assume my rule (and Lambda) is working correctly.
I assume this will happen with any JS rule that uses s3ObjectKey, but specifically I run into the issue running iam_password_policy_enabled-periodic.js.
I get this error from the CloudTrail logs, mentioning that the Lambda function is not running correctly and that the error is:
Error: invokingEvent.s3ObjectKey is not defined
Is an s3ObjectKey supposed to be in the event?
Charges per rule per region are just crazy expensive!
This is a "best practices" question. I have inherited a custom AWS Config Rule that implements auto tagging (doing a lookup to an external database).
This modifies the underlying resource. I'm wondering if this is considered bad practice. Also, can it result in the rule being re-triggered, since it is a configuration change?
New rule to verify that the Amazon EMR clusters' master node has no public IP
Please provide comments on the gherkin if any.
Description:
Checks that Amazon EMR clusters' master node does not have a public IP. This rule only checks clusters in RUNNING or WAITING state.
Trigger:
Periodic
Resource Type to report on:
AWS::EMR::Cluster
Rule Parameters:
None
Scenarios:
Scenario 1:
Given: No EMR cluster in RUNNING or WAITING state
Then: Return NOT_APPLICABLE
Scenario 2:
Given: At least 1 EMR cluster is in RUNNING or WAITING state
And: The master node instance has a value specified for the key PublicDnsName in the ListInstances EMR API call
And: The master node instance has a value specified for the key PublicDnsName in the DescribeInstances EC2 API call
Then: Return NON_COMPLIANT on this cluster
Scenario 3:
Given: At least 1 EMR cluster is in RUNNING or WAITING state
And: The master node instance has a value specified for the key PublicDnsName in the ListInstances EMR API call
And: The master node instance has no key PublicDnsName in the DescribeInstances EC2 API call
Then: Return COMPLIANT on this cluster
Scenario 4:
Given: At least 1 EMR cluster is in RUNNING or WAITING state
And: The master node has "" as the value for key PublicDnsName in the ListInstances API call
Then: Return COMPLIANT on this cluster
EMR API doc
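The per-cluster decision for the scenarios above could be sketched as follows, assuming the master instances are fetched via emr.list_instances(ClusterId=..., InstanceGroupTypes=["MASTER"]); the stricter cross-check against ec2.describe_instances (scenarios 2-3) is left out of this sketch:

```python
def master_node_compliance(master_instances):
    """Evaluate one cluster's master node(s) for a public address.

    `master_instances` is shaped like the Instances list from
    emr.list_instances(). An empty or missing PublicDnsName means no
    public endpoint (scenario 4); any non-empty value is treated as
    public exposure here.
    """
    for instance in master_instances:
        if instance.get("PublicDnsName"):
            return "NON_COMPLIANT"
    return "COMPLIANT"
```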
Hello,
Is there a suggested deployment for iam-mfa.py? I tried to deploy it as a triggered rule, but it never triggered on users adding an MFA token. AWS Support says that MFA token changes don't count as config changes, so that makes some sense. But then I'm not sure how the rule can work as described in RULES.md.
Support suggests deploying as a periodic rule, but not having per-user results undermines the usefulness of the rule.
Thanks for any suggestions,
Ross
Getting "no results available" in the AWS Config console.
I have created a Lambda function with the same code present here.
Can you please help?
Hi everyone,
I have the chance to work with the service team, and they are always interested in taking input on what new rules in the AWS Console could be (I mean those rules). Since I started managing this repo, I have seen some very interesting ideas for rules that could be useful for everyone using AWS.
So, question: are some of you folks interested in giving direct feedback on new Managed Config Rules ideas, or in providing some of your own ideas?
Please +1 if interested!
Cheers,
Jon
PS: Full disclaimer: there is no commitment from the service team to actually release these ideas. They do value (a lot) customer feedback in any case.
As the title suggests, an AWS Config rule to monitor when configuration changes are made to EC2 ENIs.
Hey guys, I came up with a solution for using AWS Config for on-premises resources using AWS Config and SSM. It'd be appreciated if you have time to take a look and see if it's worth formalizing :)
This is an example of using a customized rule for checking Windows AD users:
https://github.com/totoleon/aws-config-rule-ad-users
Use case is a preventive security control preventing accidental or unauthorized key deletion by ensuring kms:ScheduleKeyDeletion is MFA enforced.
New rule to verify that a VPC endpoint for S3 is enabled.
Please provide comments on the gherkin if any.
Description
Check whether a VPC endpoint is enabled for each VPC to access Amazon S3.
Trigger
Periodic (because VPC Configuration Item does not record Endpoint status)
Reports on:
AWS::EC2::VPC
Rule Parameters:
None
Scenarios:
Scenario 1:
Given: No VPC is present
Then: Return NOT_APPLICABLE
Scenario 2:
Given: At least one VPC is present
And: The S3 service is not present in the list of "ServiceName" on DescribeVpcEndpoints API
Then: Return NON_COMPLIANT on this VPC
Scenario 3:
Given: At least one VPC is present
And: The S3 service is present in the "ServiceName" key on DescribeVpcEndpoints API
And: The "VpcEndpointState" key value is not "Available"
Then: Return NON_COMPLIANT on this VPC
Scenario 4:
Given: At least one VPC is present
And: The S3 service is present in the "ServiceName" key on DescribeVpcEndpoints API
And: The "VpcEndpointState" key value is "Available"
Then: Return COMPLIANT on this VPC
DescribeVpcEndpoints API doc
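The per-VPC decision for the scenarios above can be sketched as follows, operating on the VpcEndpoints list from ec2.describe_vpc_endpoints(); the S3 match on a ServiceName suffix of ".s3" (e.g. "com.amazonaws.us-east-1.s3") and the case-insensitive state comparison are assumptions worth verifying against real responses:

```python
def vpc_s3_endpoint_compliance(vpc_id, endpoints):
    """Evaluate one VPC against the scenarios above.

    `endpoints` is shaped like the VpcEndpoints list from
    ec2.describe_vpc_endpoints(): each entry carries VpcId, ServiceName
    and State.
    """
    s3_endpoints = [
        ep for ep in endpoints
        if ep.get("VpcId") == vpc_id and ep.get("ServiceName", "").endswith(".s3")
    ]
    if not s3_endpoints:
        return "NON_COMPLIANT"                          # scenario 2
    if any(ep.get("State", "").lower() == "available" for ep in s3_endpoints):
        return "COMPLIANT"                              # scenario 4
    return "NON_COMPLIANT"                              # scenario 3
```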
Please make a rule we can use to monitor for security groups that are wide open to 0.0.0.0/0.
The existing rule only checks certain ports, not wide-open groups.
Hello
I'm trying to use the Lambda IAM_USER_USED_LAST_90_DAYS.py to obtain the list of users whose last connection was more than 90 days ago, but nothing goes up to AWS Config. When trying to re-evaluate my Config rule, I get "No results available", while I have some accounts that match the rule.
I've set up my Lambda with a role containing policies allowing me :
Is there something that I'm missing?
Thx
WiLL
New rule to verify that the Redshift cluster is not publicly accessible.
Please provide comments on the gherkin if any.
Description:
Check whether the Amazon Redshift clusters are not publicly accessible. The rule is NON_COMPLIANT if the publiclyAccessible field is true in the cluster configuration item.
Trigger:
Configuration Change on AWS::Redshift::Cluster
Reports on:
AWS::Redshift::Cluster
Rule Parameters:
None
Scenarios:
Scenario: 1
Given: The publiclyAccessible field is true in the cluster configuration item.
Then: Return NON_COMPLIANT
Scenario: 2
Given: The publiclyAccessible field is false in the cluster configuration item.
Then: Return COMPLIANT
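Since this rule is change-triggered, the evaluation reduces to reading one field from the delivered configuration item. A sketch, using the publiclyAccessible field name stated in the gherkin:

```python
def evaluate_redshift_cluster(configuration_item):
    """Map a Redshift cluster configuration item to the two scenarios above."""
    if configuration_item.get("resourceType") != "AWS::Redshift::Cluster":
        return "NOT_APPLICABLE"
    configuration = configuration_item.get("configuration") or {}
    # Scenario 1: publicly accessible -> NON_COMPLIANT; scenario 2: COMPLIANT.
    if configuration.get("publiclyAccessible"):
        return "NON_COMPLIANT"
    return "COMPLIANT"
```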