
Amazon Web Services DOP-C02 AWS Certified DevOps Engineer - Professional Exam Practice Test

Demo: 118 questions
Total 407 questions

AWS Certified DevOps Engineer - Professional Questions and Answers

Question 1

A company is developing a mobile app that requires extensive automated testing across multiple device types. The company is using AWS CodePipeline for its CI/CD pipeline. The company must implement a scalable testing solution that can handle increased test loads as the app grows. Which solution will meet these requirements with the LEAST management overhead?

Options:

A.

Integrate AWS Device Farm with the pipeline to run the tests and scale as needed.

B.

Deploy a fleet of Amazon EC2 instances with various mobile device emulators and auto scaling to run the tests. Create a custom AWS Lambda function to invoke EC2 test runs.

C.

Implement a containerized testing solution that uses Amazon Elastic Container Service (Amazon ECS) with auto scaling. Configure the pipeline to invoke an AWS Lambda function to start the test runs on the ECS cluster.

D.

Use AWS Lambda functions with custom runtime emulators to run the tests. Integrate the Lambda functions with the pipeline.

Question 2

A DevOps engineer is deploying a new version of a company's application in an AWS CodeDeploy deployment group associated with its Amazon EC2 instances. After some time, the deployment fails. The engineer realizes that all the events associated with the specific deployment ID are in a Skipped status and that code was not deployed on the instances associated with the deployment group.

What are valid reasons for this failure? (Select TWO.)

Options:

A.

The networking configuration does not allow the EC2 instances to reach the internet via a NAT gateway or internet gateway and the CodeDeploy endpoint cannot be reached.

B.

The IAM user who triggered the application deployment does not have permission to interact with the CodeDeploy endpoint.

C.

The target EC2 instances were not properly registered with the CodeDeploy endpoint.

D.

An instance profile with proper permissions was not attached to the target EC2 instances.

E.

The appspec.yml file was not included in the application revision.

Question 3

A company uses AWS WAF to protect its cloud infrastructure. A DevOps engineer needs to give an operations team the ability to analyze log messages from AWS WAF. The operations team needs to be able to create alarms for specific patterns in the log output.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create an Amazon CloudWatch Logs log group. Configure the appropriate AWS WAF web ACL to send log messages to the log group. Instruct the operations team to create CloudWatch metric filters.

B.

Create an Amazon OpenSearch Service cluster and appropriate indexes. Configure an Amazon Kinesis Data Firehose delivery stream to stream log data to the indexes. Use OpenSearch Dashboards to create filters and widgets.

C.

Create an Amazon S3 bucket for the log output. Configure AWS WAF to send log outputs to the S3 bucket. Instruct the operations team to create AWS Lambda functions that detect each desired log message pattern. Configure the Lambda functions to publish to an Amazon Simple Notification Service (Amazon SNS) topic.

D.

Create an Amazon S3 bucket for the log output. Configure AWS WAF to send log outputs to the S3 bucket. Use Amazon Athena to create an external table definition that fits the log message pattern. Instruct the operations team to write SQL queries and to create Amazon CloudWatch metric filters for the Athena queries.
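
For reference, a minimal boto3 sketch of the approach described in option A: send web ACL logs to a CloudWatch Logs log group and let the operations team add metric filters. The web ACL ARN, log group name, and filter pattern are placeholder assumptions (WAF requires destination log group names to begin with aws-waf-logs-).

import boto3

logs = boto3.client("logs")
wafv2 = boto3.client("wafv2")

LOG_GROUP = "aws-waf-logs-web-acl"  # WAF destinations must start with aws-waf-logs-
WEB_ACL_ARN = "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/example/EXAMPLE-ID"  # placeholder

# Create the log group and point the web ACL's logging configuration at it.
logs.create_log_group(logGroupName=LOG_GROUP)
log_group_arn = logs.describe_log_groups(logGroupNamePrefix=LOG_GROUP)["logGroups"][0]["arn"].removesuffix(":*")

wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": WEB_ACL_ARN,
        "LogDestinationConfigs": [log_group_arn],
    }
)

# The operations team can then create metric filters for the patterns they care
# about, for example counting blocked requests (the pattern is an assumption).
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="BlockedRequests",
    filterPattern='{ $.action = "BLOCK" }',
    metricTransformations=[{
        "metricName": "WafBlockedRequests",
        "metricNamespace": "Custom/WAF",
        "metricValue": "1",
    }],
)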

Question 4

A development team wants to use AWS CloudFormation stacks to deploy an application. However, the developer IAM role does not have the required permissions to provision the resources that are specified in the AWS CloudFormation template. A DevOps engineer needs to implement a solution that allows the developers to deploy the stacks. The solution must follow the principle of least privilege.

Which solution will meet these requirements?

Options:

A.

Create an IAM policy that allows the developers to provision the required resources. Attach the policy to the developer IAM role.

B.

Create an IAM policy that allows full access to AWS CloudFormation. Attach the policy to the developer IAM role.

C.

Create an AWS CloudFormation service role that has the required permissions. Grant the developer IAM role a cloudformation:* action. Use the new service role during stack deployments.

D.

Create an AWS CloudFormation service role that has the required permissions. Grant the developer IAM role the iam:PassRole permission. Use the new service role during stack deployments.
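
For reference, a minimal boto3 sketch of the approach described in option D: the developer role receives only iam:PassRole scoped to the CloudFormation service role, and the stack is launched with that service role. The role names, template URL, and account ID are placeholder assumptions.

import json
import boto3

iam = boto3.client("iam")
cfn = boto3.client("cloudformation")

SERVICE_ROLE_ARN = "arn:aws:iam::111122223333:role/cfn-deploy-service-role"  # placeholder

# Inline policy for the developer role: allow passing only the CloudFormation service role.
iam.put_role_policy(
    RoleName="developer-role",  # placeholder
    PolicyName="AllowPassCfnServiceRole",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": SERVICE_ROLE_ARN,
            "Condition": {"StringEquals": {"iam:PassedToService": "cloudformation.amazonaws.com"}},
        }],
    }),
)

# Developers deploy the stack with the service role; CloudFormation provisions
# resources with the role's permissions instead of the developer's own.
cfn.create_stack(
    StackName="app-stack",
    TemplateURL="https://example-bucket.s3.amazonaws.com/template.yaml",  # placeholder
    RoleARN=SERVICE_ROLE_ARN,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)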

Question 5

A company has many AWS accounts. During AWS account creation, the company uses automation to create an Amazon CloudWatch Logs log group in every AWS Region that the company operates in. The automation configures new resources in the accounts to publish logs to the provisioned log groups in their Region.

The company has created a logging account to centralize the logging from all the other accounts. A DevOps engineer needs to aggregate the log groups from all the accounts to an existing Amazon S3 bucket in the logging account.

Which solution will meet these requirements in the MOST operationally efficient manner?

Options:

A.

In the logging account, create a CloudWatch Logs destination with a destination policy. For each new account, subscribe the CloudWatch Logs log groups to the destination. Configure a single Amazon Kinesis data stream and a single Amazon Kinesis Data Firehose delivery stream to deliver the logs from the CloudWatch Logs destination to the S3 bucket.

B.

In the logging account create a CloudWatch Logs destination with a destination policy for each Region. For each new account subscribe the CloudWatch Logs log groups to the destination. Configure a single Amazon Kinesis data stream and a single Amazon Kinesis Data Firehose delivery stream to deliver the logs from all the CloudWatch Logs destinations to the S3 bucket.

C.

In the logging account, create a CloudWatch Logs destination with a destination policy for each Region. For each new account, subscribe the CloudWatch Logs log groups to the destination. Configure an Amazon Kinesis data stream and an Amazon Kinesis Data Firehose delivery stream for each Region to deliver the logs from the CloudWatch Logs destinations to the S3 bucket.

D.

In the logging account create a CloudWatch Logs destination with a destination policy. For each new account subscribe the CloudWatch Logs log groups to the destination. Configure a single Amazon Kinesis data stream to deliver the logs from the CloudWatch Logs destination to the S3 bucket.

Question 6

A DevOps engineer is using AWS CodeDeploy across a fleet of Amazon EC2 instances in an EC2 Auto Scaling group. The associated CodeDeploy deployment group, which is integrated with EC2 Auto Scaling, is configured to perform in-place deployments with CodeDeployDefault.OneAtATime. During an ongoing new deployment, the engineer discovers that, although the overall deployment finished successfully, two out of five instances have the previous application revision deployed. The other three instances have the newest application revision.

What is likely causing this issue?

Options:

A.

The two affected instances failed to fetch the new deployment.

B.

A failed AfterInstall lifecycle event hook caused the CodeDeploy agent to roll back to the previous version on the affected instances.

C.

The CodeDeploy agent was not installed on the two affected instances.

D.

EC2 Auto Scaling launched two new instances while the new deployment had not yet finished, causing the previous version to be deployed on the affected instances.

Question 7

A company uses S3 to store images and requires multi-Region DR with two-way replication and ≤15-minute latency.

Which steps meet the requirements? (Select THREE.)

Options:

A.

Enable S3 Replication Time Control (RTC) for each replication rule.

B.

Create S3 Multi-Region Access Point (active/passive).

C.

Call SubmitMultiRegionAccessPointRoutes during failover.

D.

Enable S3 Transfer Acceleration.

E.

Use Route 53 ARC routing control.

F.

Use Route 53 ARC to shift traffic during failover.

Question 8

A company in a highly regulated industry is building an artifact by using AWS CodeBuild and AWS CodePipeline. The company must connect to an external authenticated API during the building process.

The company's DevOps engineer needs to encrypt the build outputs by using an AWS Key Management Service (AWS KMS) key. The external API credentials must be reset each month. The DevOps engineer has created a new key in AWS KMS.

Which solution will meet these requirements?

Options:

A.

Store the API credentials in AWS Systems Manager Parameter Store. Update the key policy for the CodeBuild IAM service role to have access to the KMS key. Set CODEBUILD_KMS_KEY_ID as the new key ID.

B.

Store the API credentials in AWS Systems Manager Parameter Store. Update the key policy for the CodePipeline IAM service role to have access to the KMS key. Add the key to the pipeline.

C.

Store the API credentials in AWS Secrets Manager. Update the key policy for the CodeBuild IAM service role to have access to the KMS key. Set CODEBUILD_KMS_KEY_ID as the new key ID.

D.

Store the API credentials in AWS Secrets Manager. Update the key policy for the CodePipeline IAM service role to have access to the KMS key. Add the key to the pipeline.

Question 9

A company has enabled all features for its organization in AWS Organizations. The organization contains 10 AWS accounts. The company has turned on AWS CloudTrail in all the accounts. The company expects the number of AWS accounts in the organization to increase to 500 during the next year. The company plans to use multiple OUs for these accounts.

The company has enabled AWS Config in each existing AWS account in the organization. A DevOps engineer must implement a solution that enables AWS Config automatically for all future AWS accounts that are created in the organization.

Which solution will meet this requirement?

Options:

A.

In the organization's management account, create an Amazon EventBridge rule that reacts to a CreateAccount API call. Configure the rule to invoke an AWS Lambda function that enables trusted access to AWS Config for the organization.

B.

In the organization's management account, create an AWS CloudFormation stack set to enable AWS Config. Configure the stack set to deploy automatically when an account is created through Organizations.

C.

In the organization's management account, create an SCP that allows the appropriate AWS Config API calls to enable AWS Config. Apply the SCP to the root-level OU.

D.

In the organization's management account, create an Amazon EventBridge rule that reacts to a CreateAccount API call. Configure the rule to invoke an AWS Systems Manager Automation runbook to enable AWS Config for the account.
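
For reference, a minimal boto3 sketch of the approach described in option B: a service-managed stack set with automatic deployment, so accounts that join the targeted OUs receive the AWS Config configuration. The template URL and OU ID are placeholder assumptions.

import boto3

cfn = boto3.client("cloudformation")

# Service-managed stack set with auto-deployment: stack instances are created
# automatically in any account that is added to the target OUs.
cfn.create_stack_set(
    StackSetName="enable-aws-config",
    TemplateURL="https://example-bucket.s3.amazonaws.com/enable-config.yaml",  # placeholder template
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Deploy to the existing accounts under the chosen OUs.
cfn.create_stack_instances(
    StackSetName="enable-aws-config",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-examplerootid111-exampleouid111"]},  # placeholder OU
    Regions=["us-east-1"],
)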

Question 10

A company detects unusual login attempts in many of its AWS accounts. A DevOps engineer must implement a solution that sends a notification to the company's security team when multiple failed login attempts occur. The DevOps engineer has already created an Amazon Simple Notification Service (Amazon SNS) topic and has subscribed the security team to the SNS topic.

Which solution will provide the notification with the LEAST operational effort?

Options:

A.

Configure AWS CloudTrail to send management events to an Amazon CloudWatch Logs log group. Create a CloudWatch Logs metric filter to match failed ConsoleLogin events. Create a CloudWatch alarm that is based on the metric filter. Configure an alarm action to send messages to the SNS topic.

B.

Configure AWS CloudTrail to send management events to an Amazon S3 bucket. Create an Amazon Athena query that returns a failure if the query finds failed logins in the logs in the S3 bucket. Create an Amazon EventBridge rule to periodically run the query. Create a second EventBridge rule to detect when the query fails and to send a message to the SNS topic.

C.

Configure AWS CloudTrail to send data events to an Amazon CloudWatch Logs log group. Create a CloudWatch Logs metric filter to match failed ConsoleLogin events. Create a CloudWatch alarm that is based on the metric filter. Configure an alarm action to send messages to the SNS topic.

D.

Configure AWS CloudTrail to send data events to an Amazon S3 bucket. Configure an Amazon S3 event notification for the s3:ObjectCreated event type. Filter the event type by ConsoleLogin failed events. Configure the event notification to forward to the SNS topic.
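
For reference, a minimal boto3 sketch of the metric filter and alarm described in option A. The log group name, threshold, and SNS topic ARN are placeholder assumptions; the filter pattern follows the documented format for failed console sign-ins.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "cloudtrail-management-events"  # placeholder log group fed by CloudTrail
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-alerts"  # placeholder

# Metric filter that matches failed console sign-in events.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="FailedConsoleLogins",
    filterPattern='{ ($.eventName = "ConsoleLogin") && ($.errorMessage = "Failed authentication") }',
    metricTransformations=[{
        "metricName": "FailedConsoleLoginCount",
        "metricNamespace": "Custom/Security",
        "metricValue": "1",
        "defaultValue": 0,
    }],
)

# Alarm on multiple failures within a short window and notify the SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="MultipleFailedConsoleLogins",
    MetricName="FailedConsoleLoginCount",
    Namespace="Custom/Security",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=3,  # placeholder threshold
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)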

Question 11

A company uses Amazon RDS for Microsoft SQL Server as its primary database and must ensure cross-Region high availability with RPO < 1 min and RTO < 10 min.

Which solution meets these requirements?

Options:

A.

Use RDS Multi-AZ DB cluster with cross-Region read replicas. Automate failover via Route 53.

B.

Use Multi-AZ cluster with snapshots copied cross-Region.

C.

Use single-AZ RDS + DMS continuous replication.

D.

Use single-AZ with Backup and restore.

Question 12

A company is migrating an application to Amazon Elastic Container Service (Amazon ECS). The company wants to consolidate log data in Amazon CloudWatch in the us-west-2 Region. No CloudWatch log groups currently exist for Amazon ECS.

The company receives the following error code when an ECS task attempts to launch:

“service my-service-name was unable to place a task because no container instance met all of its requirements.”

The ECS task definition includes the following container log configuration:

"logConfiguration": {

"logDriver": "awslogs",

"options": {

"awslogs-create-group": "true",

"awslogs-group": "awslogs-mytask",

"awslogs-region": "us-west-2",

"awslogs-stream-prefix": "awslogs-mytask",

"mode": "non-blocking",

"max-buffer-size": "25m"

}

}

The ECS cluster uses an Amazon EC2 Auto Scaling group to provide capacity for tasks. EC2 instances launch an Amazon ECS-optimized AMI.

Which solution will fix the problem?

Options:

A.

Modify the ECS infrastructure IAM role to add the logs:CreateLogStream and logs:PutLogEvents permissions.

B.

Modify the ECS log configuration to use blocking mode.

C.

Modify the ECS container instance IAM role to add the logs:CreateLogStream and logs:PutLogEvents permissions.

D.

Modify the ECS log configuration by setting the awslogs-create-group option to false.
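
For reference, a minimal boto3 sketch of the permissions change described in option C, adding CloudWatch Logs permissions to the container instance role (the awslogs-create-group option also requires logs:CreateLogGroup). The role name and account ID are placeholder assumptions.

import json
import boto3

iam = boto3.client("iam")

# Inline policy for the ECS container instance role so the awslogs driver can
# create the log group and write log streams and events.
iam.put_role_policy(
    RoleName="ecsInstanceRole",  # placeholder: the role attached to the container instances
    PolicyName="AllowAwslogsDriver",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "arn:aws:logs:us-west-2:111122223333:log-group:awslogs-mytask:*",  # placeholder account ID
        }],
    }),
)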

Question 13

A DevOps engineer must implement a solution that immediately terminates Amazon EC2 instances in Auto Scaling groups when cryptocurrency mining activity is detected.

Which solution will meet these requirements with the LEAST development effort?

Options:

A.

Configure Amazon Route 53 to send query logs directly to Amazon CloudWatch Logs. Create an AWS Lambda function that runs every 5 minutes and checks the query logs for domains related to cryptocurrency activity. If the domains are found, terminate the identified EC2 instances.

B.

Configure VPC Flow Logs to send flow logs to an Amazon S3 bucket. Create an AWS Lambda function that runs every 5 minutes and invokes an Amazon Athena query to find IP addresses associated with cryptocurrency activity. If the IP addresses are found, terminate the identified EC2 instances.

C.

Enable Amazon GuardDuty. Monitor EC2 findings. Create an Amazon EventBridge rule with GuardDuty as the event source. Create an AWS Lambda function that is triggered by the EventBridge rule. Configure the Lambda function to parse the event and terminate the identified EC2 instances.

D.

Enable AWS Security Hub. Monitor EC2 findings. Create an Amazon EventBridge rule with Security Hub as the event source. Create an AWS Lambda function that is triggered by the EventBridge rule. Configure the Lambda function to parse the event and terminate the identified EC2 instances.
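
For reference, a minimal sketch of the Lambda function described in option C. It assumes the EventBridge rule forwards GuardDuty findings unfiltered, so the function checks for cryptocurrency-related finding types before terminating the instance.

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Terminate the EC2 instance named in a cryptocurrency-related GuardDuty finding."""
    detail = event.get("detail", {})
    finding_type = detail.get("type", "")

    # GuardDuty crypto mining findings use types such as
    # CryptoCurrency:EC2/BitcoinTool.B!DNS.
    if "CryptoCurrency" not in finding_type:
        return {"action": "ignored", "type": finding_type}

    instance_id = detail["resource"]["instanceDetails"]["instanceId"]
    ec2.terminate_instances(InstanceIds=[instance_id])
    return {"action": "terminated", "instanceId": instance_id}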

Question 14

A company uses AWS Organizations to manage multiple AWS accounts. The company needs a solution to improve the company's management of AWS resources in a production account.

The company wants to use AWS CloudFormation to manage all manually created infrastructure. The company must have the ability to strictly control who can make manual changes to AWS infrastructure. The solution must ensure that users can deploy new infrastructure only by making changes to a CloudFormation template that is stored in an AWS CodeConnections compatible Git provider.

Which combination of steps will meet these requirements with the LEAST implementation effort? (Select THREE).

Options:

A.

Configure the CloudFormation infrastructure as code (IaC) generator to scan for existing resources in the AWS account. Create a CloudFormation template that includes the scanned resources. Import the CloudFormation template into a new CloudFormation stack.

B.

Configure AWS Config to scan for existing resources in the AWS account. Create a CloudFormation template that includes the scanned resources. Import the CloudFormation template into a new CloudFormation stack.

C.

Use CodeConnections to establish a connection between the Git provider and AWS CodePipeline. Push the CloudFormation template to the Git repository. Run a pipeline in CodePipeline that deploys the CloudFormation stack for every merge into the Git repository.

D.

Use CodeConnections to establish a connection between the Git provider and CloudFormation. Push the CloudFormation template to the Git repository. Sync the Git repository with the CloudFormation stack.

E.

Create an IAM role, and set CloudFormation as the principal. Grant the IAM role access to manage the stack resources. Create an SCP that denies all actions to all the principals except by the IAM role. Link the SCP with the production OU.

F.

Create an IAM role, and set CloudFormation as the principal. Grant the IAM role access to manage the stack resources. Create an SCP that allows all actions to only the IAM role. Link the SCP with the production OU.

Question 15

A DevOps team has created a custom Lambda rule in AWS Config. The rule monitors Amazon Elastic Container Registry (Amazon ECR) policy statements for ecr:* actions. When a noncompliant repository is detected, Amazon EventBridge uses Amazon Simple Notification Service (Amazon SNS) to route the notification to a security team.

When the custom AWS Config rule is evaluated, the AWS Lambda function fails to run.

Which solution will resolve the issue?

Options:

A.

Modify the Lambda function's resource policy to grant AWS Config permission to invoke the function.

B.

Modify the SNS topic policy to include configuration changes for EventBridge to publish to the SNS topic.

C.

Modify the Lambda function's execution role to include configuration changes for custom AWS Config rules.

D.

Modify all the ECR repository policies to grant AWS Config access to the necessary ECR API actions.

Question 16

A company's application runs on Amazon EC2 instances. The application writes to a log file that records the username, date, time, and source IP address of the login. The log is published to a log group in Amazon CloudWatch Logs.

The company is performing a root cause analysis for an event that occurred on the previous day. The company needs to know the number of logins for a specific user from the past 7 days.

Which solution will provide this information?

Options:

A.

Create a CloudWatch Logs metric filter on the log group. Use a filter pattern that matches the username. Publish a CloudWatch metric that sums the number of logins over the past 7 days.

B.

Create a CloudWatch Logs subscription on the log group. Use a filter pattern that matches the username. Publish a CloudWatch metric that sums the number of logins over the past 7 days.

C.

Create a CloudWatch Logs Insights query that uses an aggregation function to count the number of logins for the username over the past 7 days. Run the query against the log group.

D.

Create a CloudWatch dashboard. Add a number widget that has a filter pattern that counts the number of logins for the username over the past 7 days directly from the log group.
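
For reference, a minimal boto3 sketch of the CloudWatch Logs Insights query described in option C. The log group name, username, and message format are placeholder assumptions.

import time
import boto3

logs = boto3.client("logs")

# Count logins for one user over the past 7 days with CloudWatch Logs Insights.
query_id = logs.start_query(
    logGroupName="app/login-logs",  # placeholder log group
    startTime=int(time.time()) - 7 * 24 * 3600,
    endTime=int(time.time()),
    queryString=(
        "fields @timestamp, @message "
        "| filter @message like /jdoe/ "  # placeholder username
        "| stats count() as login_count"
    ),
)["queryId"]

# Poll until the query finishes, then print the aggregated count.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

print(result["results"])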

Question 17

A company has an organization in AWS Organizations. The organization includes workload accounts that contain enterprise applications. The company centrally manages users from an operations account. No users can be created in the workload accounts. The company recently added an operations team and must provide the operations team members with administrator access to each workload account.

Which combination of actions will provide this access? (Choose three.)

Options:

A.

Create a SysAdmin role in the operations account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the workload accounts.

B.

Create a SysAdmin role in each workload account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the operations account.

C.

Create an Amazon Cognito identity pool in the operations account. Attach the SysAdmin role as an authenticated role.

D.

In the operations account, create an IAM user for each operations team member.

E.

In the operations account, create an IAM user group that is named SysAdmins. Add an IAM policy that allows the sts:AssumeRole action for the SysAdmin role in each workload account. Add all operations team members to the group.

F.

Create an Amazon Cognito user pool in the operations account. Create an Amazon Cognito user for each operations team member.

Question 18

A company uses Amazon Redshift as its data warehouse solution. The company wants to create a dashboard to view changes to the Redshift users and the queries the users perform.

Which combination of steps will meet this requirement? (Select TWO.)

Options:

A.

Create an Amazon CloudWatch log group. Create an AWS CloudTrail trail that writes to the CloudWatch log group.

B.

Create a new Amazon S3 bucket. Configure default audit logging on the Redshift cluster. Configure the S3 bucket as the target.

C.

Configure the Redshift cluster database audit logging to include user activity logs. Configure Amazon CloudWatch as the target.

D.

Create an Amazon CloudWatch dashboard that has a log widget. Configure the widget to display user details from the Redshift logs.

E.

Create an AWS Lambda function that uses Amazon Athena to query the Redshift logs. Create an Amazon CloudWatch dashboard that has a custom widget type that uses the Lambda function.

Question 19

A company uses AWS CDK and CodePipeline with CodeBuild to deploy applications. The company wants to enforce unit tests before deployment; deployment proceeds only if tests pass.

Which steps enforce this? (Select TWO.)

Options:

A.

Update CodeBuild build commands to run tests then deploy, set OnFailure to ABORT.

B.

Update CodeBuild commands to run tests then deploy, add --rollback true to cdk deploy.

C.

Update CodeBuild commands to run tests then deploy, add --require-approval any-change flag.

D.

Create tests with AWS CDK assertions module, using template.hasResourceProperties assertions.

E.

Create tests that use cdk diff and fail if any resource changes are detected.

Question 20

A company uses Amazon RDS for Microsoft SQL Server as its primary database. The company needs high availability within and across AWS Regions, with an RPO of less than 1 minute and an RTO of less than 10 minutes. A Route 53 CNAME record is used for the DB endpoint and must redirect to the standby during failover.

Which solution meets these requirements?

Options:

A.

Deploy an Amazon RDS for SQL Server Multi-AZ DB cluster with cross-Region read replicas. Use automation to promote replica and update Route 53.

B.

Deploy RDS Multi-AZ with snapshots copied every 5 minutes; use Lambda to restore snapshot and update Route 53 on failover.

C.

Deploy Single-AZ RDS and use AWS DMS to continuously replicate to another Region. Use CloudWatch alarms for failover notification.

D.

Deploy Single-AZ RDS and use AWS Backup for cross-Region backups every 30 seconds. Use automation to restore and update Route 53 during failover.

Question 21

A company uses an organization in AWS Organizations that has all features enabled. The company uses AWS Backup in a primary account and uses an AWS Key Management Service (AWS KMS) key to encrypt the backups.

The company needs to automate a cross-account backup of the resources that AWS Backup backs up in the primary account. The company configures cross-account backup in the Organizations management account. The company creates a new AWS account in the organization and configures an AWS Backup backup vault in the new account. The company creates a KMS key in the new account to encrypt the backups. Finally, the company configures a new backup plan in the primary account. The destination for the new backup plan is the backup vault in the new account.

When the AWS Backup job in the primary account is invoked, the job creates backups in the primary account. However, the backups are not copied to the new account's backup vault.

Which combination of steps must the company take so that backups can be copied to the new account's backup vault? (Select TWO.)

Options:

A.

Edit the backup vault access policy in the new account to allow access to the primary account.

B.

Edit the backup vault access policy in the primary account to allow access to the new account.

C.

Edit the backup vault access policy in the primary account to allow access to the KMS key in the new account.

D.

Edit the key policy of the KMS key in the primary account to share the key with the new account.

E.

Edit the key policy of the KMS key in the new account to share the key with the primary account.
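
For reference, a minimal boto3 sketch of the changes described in options A and E, run in the new (destination) account. Account IDs, the vault name, and the key ID are placeholder assumptions, and the KMS statement is abbreviated to the copy-related actions.

import json
import boto3

backup = boto3.client("backup")
kms = boto3.client("kms")

PRIMARY_ACCOUNT = "111122223333"  # placeholder source account ID

# 1) Allow the primary account to copy recovery points into the destination vault.
backup.put_backup_vault_access_policy(
    BackupVaultName="central-backup-vault",  # placeholder vault in the new account
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{PRIMARY_ACCOUNT}:root"},
            "Action": "backup:CopyIntoBackupVault",
            "Resource": "*",
        }],
    }),
)

# 2) Share the destination KMS key with the primary account so copied backups can be encrypted.
kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder key ID in the new account
    PolicyName="default",
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {   # keep administrative access for the new account itself
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::444455556666:root"},  # placeholder new account ID
                "Action": "kms:*",
                "Resource": "*",
            },
            {   # allow the primary account to use the key for backup copies
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{PRIMARY_ACCOUNT}:root"},
                "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*", "kms:DescribeKey"],
                "Resource": "*",
            },
        ],
    }),
)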

Question 22

A company uses the AWS Cloud Development Kit (AWS CDK) to define its application. The company uses a pipeline that consists of AWS CodePipeline and AWS CodeBuild to deploy the CDK application. The company wants to introduce unit tests to the pipeline to test various infrastructure components. The company wants to ensure that a deployment proceeds if no unit tests result in a failure.

Which combination of steps will enforce the testing requirement in the pipeline? (Select TWO.)

Options:

A.

Update the CodeBuild build phase commands to run the tests then to deploy the application. Set the OnFailure phase property to ABORT.

B.

Update the CodeBuild build phase commands to run the tests then to deploy the application. Add the --rollback true flag to the cdk deploy command.

C.

Update the CodeBuild build phase commands to run the tests then to deploy the application. Add the --require-approval any-change flag to the cdk deploy command.

D.

Create a test that uses the AWS CDK assertions module. Use the template.hasResourceProperties assertion to test that resources have the expected properties.

E.

Create a test that uses the cdk diff command. Configure the test to fail if any resources have changed.
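
For reference, a minimal sketch of an assertions-based test of the kind described in option D, written for the Python CDK. The stack class, construct IDs, and asserted properties are placeholder assumptions standing in for the application's real stack.

import aws_cdk as cdk
from aws_cdk import Stack, aws_s3 as s3
from aws_cdk.assertions import Template
from constructs import Construct


class StorageStack(Stack):
    """Placeholder stack standing in for the application's real CDK stack."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(self, "DataBucket", versioned=True)


def test_bucket_is_versioned():
    app = cdk.App()
    stack = StorageStack(app, "TestStack")
    template = Template.from_stack(stack)

    # Fails the test run (and therefore the CodeBuild phase) if the synthesized
    # template does not contain a versioned S3 bucket.
    template.has_resource_properties(
        "AWS::S3::Bucket",
        {"VersioningConfiguration": {"Status": "Enabled"}},
    )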

Question 23

A company has a workflow that generates a file for each of the company's products and stores the files in a production environment Amazon S3 bucket. The company's users can access the S3 bucket.

Each file contains a product ID. Product IDs for products that have not been publicly announced are prefixed with a specific UUID. Product IDs are 12 characters long. IDs for products that have not been publicly announced begin with the letter P.

The company does not want information about products that have not been publicly announced to be available in the production environment S3 bucket.

Which solution will meet these requirements?

Options:

A.

Create a new staging S3 bucket. Generate all files in the new staging bucket. Create an Amazon Macie custom data identifier to identify product IDs in the new bucket that begin with the specific UUID. Launch an Amazon Macie sensitive data discovery job with the custom data identifier. Copy all files that do not have a Macie finding to the production S3 bucket.

B.

Create an Amazon Macie custom data identifier to identify product IDs in the production bucket that begin with the specific UUID. Launch an Amazon Macie sensitive data discovery job with the custom data identifier. Remove all files that have a Macie finding from the production S3 bucket.

C.

Create a new staging S3 bucket. Generate all files in the new staging bucket. Launch an Amazon Macie sensitive data discovery job with a managed data identifier. Copy all files that do not have a Macie finding to the production S3 bucket.

D.

Create an Amazon Macie sensitive data discovery job with a managed data identifier. Remove all files that have a Macie finding from the production S3 bucket.

Question 24

A company is testing a web application that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The company uses a blue/green deployment process with immutable instances when deploying new software.

During testing, users are automatically logged out of the application at random times. Testers also report that when a new version of the application is deployed, all users are logged out. The development team needs a solution to ensure that users remain logged in across scaling events and application deployments.

What is the MOST operationally efficient way to ensure users remain logged in?

Options:

A.

Enable smart sessions on the load balancer and modify the application to check for an existing session.

B.

Enable session sharing on the load balancer and modify the application to read from the session store.

C.

Store user session information in an Amazon S3 bucket and modify the application to read session information from the bucket.

D.

Modify the application to store user session information in an Amazon ElastiCache cluster.

Question 25

A company uses an organization in AWS Organizations to manage many AWS accounts. The company has enabled all features for the organization. The company uses AWS CloudFormation StackSets to deploy configurations to the accounts. The company uses AWS Config to monitor an Amazon S3 bucket. The company needs to ensure that all object uploads to the S3 bucket use AWS Key Management Service (AWS KMS) encryption. Which solution will meet these requirements?

Options:

A.

Create an AWS Config conformance pack that includes the s3-bucket-server-side-encryption-enabled rule. Deploy the conformance pack to the accounts. Configure the rule to target an Amazon Simple Notification Service (Amazon SNS) topic.

B.

Create an SCP that includes a deny statement for the s3:createBucket action and a condition statement where s3:x-amz-server-side-encryption is not aws:kms. Attach the SCP to the root of the organization.

C.

Create an AWS CloudFormation stack set to enable an AWS CloudTrail trail to capture S3 data events for the organization. In the stack set, create an Amazon EventBridge rule to match S3 PutObject events that do not use AWS KMS encryption. Configure the rule to target an Amazon Simple Notification Service (Amazon SNS) topic.

D.

Create an SCP that includes a deny statement for the s3:putObject action and a condition where s3:x-amz-server-side-encryption is not aws:kms. Attach the SCP to the root of the organization.
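
For reference, a minimal boto3 sketch of an SCP of the kind described in option D, created in the management account and attached to the organization root. The root ID is a placeholder assumption.

import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedObjectUploads",
        "Effect": "Deny",
        "Action": "s3:PutObject",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
        },
    }],
}

policy_id = org.create_policy(
    Name="require-kms-on-s3-uploads",
    Description="Deny S3 object uploads that do not use SSE-KMS",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)["Policy"]["PolicySummary"]["Id"]

org.attach_policy(PolicyId=policy_id, TargetId="r-examplerootid111")  # placeholder root ID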

Question 26

A company has application code in an AWS CodeConnections compatible Git repository. The company wants to configure unit tests to run when pull requests are opened. The company wants to ensure that the test status is visible in pull requests when the tests are completed. The company wants to save output data files that the tests generate to an Amazon S3 bucket after the tests are finished. Which combination of solutions will meet these requirements? (Select THREE.)

Options:

A.

Create an IAM service role to allow access to the resources that are required to run the tests.

B.

Create a pipeline in AWS CodePipeline that has a test stage. Create a trigger to run the pipeline when pull requests are created or updated. Add a source action to report test results.

C.

Create an AWS CodeBuild project to run the tests. Enable webhook triggers to run the tests when pull requests are created or updated. Enable build status reporting to report test results.

D.

Create a buildspec.yml file that has a reports section to upload output files when the tests have finished running.

E.

Create a buildspec.yml file that has an artifacts section to upload artifacts when the tests have finished running.

F.

Create an appspec.yml file that has a files section to upload output files when the tests have finished running.
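
For reference, a minimal boto3 sketch of the webhook and build status reporting described in option C. The project name, provider type, and repository URL are placeholder assumptions; the buildspec reports and artifacts sections mentioned in options D and E live in the repository rather than in these calls.

import boto3

codebuild = boto3.client("codebuild")

PROJECT = "unit-test-project"  # placeholder CodeBuild project already connected to the Git provider

# Run the project when pull requests are created or updated.
codebuild.create_webhook(
    projectName=PROJECT,
    buildType="BUILD",
    filterGroups=[[
        {"type": "EVENT", "pattern": "PULL_REQUEST_CREATED,PULL_REQUEST_UPDATED"},
    ]],
)

# Report the build (test) status back to the source provider so it shows in pull requests.
codebuild.update_project(
    name=PROJECT,
    source={
        "type": "GITHUB",  # placeholder provider type
        "location": "https://github.com/example/repo.git",  # placeholder repository
        "reportBuildStatus": True,
    },
)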

Question 27

A company is launching an application that stores raw data in an Amazon S3 bucket. Three applications need to access the data to generate reports. The data must be redacted differently for each application before the applications can access the data.

Which solution will meet these requirements?

Options:

A.

Create an S3 bucket for each application. Configure S3 Same-Region Replication (SRR) from the raw data's S3 bucket to each application's S3 bucket. Configure each application to consume data from its own S3 bucket.

B.

Create an Amazon Kinesis data stream. Create an AWS Lambda function that is invoked by object creation events in the raw data's S3 bucket. Program the Lambda function to redact data for each application. Publish the data on the Kinesis data stream. Configure each application to consume data from the Kinesis data stream.

C.

For each application, create an S3 access point that uses the raw data's S3 bucket as the destination. Create an AWS Lambda function that is invoked by object creation events in the raw data's S3 bucket. Program the Lambda function to redact data for each application. Store the data in each application's S3 access point. Configure each application to consume data from its own S3 access point.

D.

Create an S3 access point that uses the raw data's S3 bucket as the destination. For each application, create an S3 Object Lambda access point that uses the S3 access point. Configure the AWS Lambda function for each S3 Object Lambda access point to redact data when objects are retrieved. Configure each application to consume data from its own S3 Object Lambda access point.
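
For reference, a minimal sketch of one application's redaction function behind an S3 Object Lambda access point, as described in option D. The redaction logic is a placeholder, since each application would apply its own rules.

import urllib.request

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Fetch the original object, redact it, and return the transformed body."""
    ctx = event["getObjectContext"]

    # The presigned URL points at the untransformed object in the raw-data bucket.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")

    # Placeholder redaction: each application's function would apply different rules here.
    redacted = original.replace("SECRET", "[REDACTED]")

    s3.write_get_object_response(
        Body=redacted,
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}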

Question 28

A DevOps engineer is architecting a continuous development strategy for a company's software as a service (SaaS) web application running on AWS. For application and security reasons, users subscribing to this application are distributed across multiple Application Load Balancers (ALBs), each of which has a dedicated Auto Scaling group and fleet of Amazon EC2 instances. The application does not require a build stage, and when code is committed to AWS CodeCommit, the commit must trigger a simultaneous deployment to all ALBs, Auto Scaling groups, and EC2 fleets.

Which architecture will meet these requirements with the LEAST amount of configuration?

Options:

A.

Create a single AWS CodePipeline pipeline that deploys the application in parallel using unique AWS CodeDeploy applications and deployment groups created for each ALB-Auto Scaling group pair.

B.

Create a single AWS CodePipeline pipeline that deploys the application using a single AWS CodeDeploy application and single deployment group.

C.

Create a single AWS CodePipeline pipeline that deploys the application in parallel using a single AWS CodeDeploy application and unique deployment group for each ALB-Auto Scaling group pair.

D.

Create an AWS CodePipeline pipeline for each ALB-Auto Scaling group pair that deploys the application using an AWS CodeDeploy application and deployment group created for the same ALB-Auto Scaling group pair.

Question 29

A company is storing 100 GB of log data in .csv format in an Amazon S3 bucket. SQL developers want to query this data and generate graphs to visualize it. The SQL developers also need an efficient, automated way to store metadata from the .csv file. Which combination of steps will meet these requirements with the LEAST amount of effort? (Select THREE.)

Options:

A.

Filter the data through AWS X-Ray to visualize the data.

B.

Filter the data through Amazon QuickSight to visualize the data.

C.

Query the data with Amazon Athena.

D.

Use the AWS Glue Data Catalog as the persistent metadata store.

E.

Use Amazon DynamoDB as the persistent metadata store.

F.

Query the data with Amazon Redshift.

Question 30

A company uses AWS CloudFormation stacks to deploy updates to its application. The stacks consist of different resources. The resources include AWS Auto Scaling groups, Amazon EC2 instances, Application Load Balancers (ALBs), and other resources that are necessary to launch and maintain independent stacks. Changes to application resources outside of CloudFormation stack updates are not allowed.

The company recently attempted to update the application stack by using the AWS CLI. The stack failed to update and produced the following error message: "ERROR: both the deployment and the CloudFormation stack rollback failed. The deployment failed because the following resource(s) failed to update: [AutoScalingGroup]."

The stack remains in a status of UPDATE_ROLLBACK_FAILED.

Which solution will resolve this issue?

Options:

A.

Update the subnet mappings that are configured for the ALBs. Run the aws cloudformation update-stack-set AWS CLI command.

B.

Update the IAM role by providing the necessary permissions to update the stack. Run the aws cloudformation continue-update-rollback AWS CLI command.

C.

Submit a request for a quota increase for the number of EC2 instances for the account. Run the aws cloudformation cancel-update-stack AWS CLI command.

D.

Delete the Auto Scaling group resource. Run the aws cloudformation rollback-stack AWS CLI command.

Question 31

A company runs an application on Amazon EKS. The company needs comprehensive logging for the control plane and nodes, the ability to analyze API requests, and container performance monitoring, all with minimal operational overhead.

Which solution meets these requirements?

Options:

A.

Enable CloudTrail for control plane logging; deploy Logstash as a ReplicaSet on nodes; use OpenSearch to store and analyze logs.

B.

Enable control plane logging for EKS and send logs to CloudWatch; use CloudWatch Container Insights for node and container logs; use CloudWatch Logs Insights to query logs.

C.

Enable API server control plane logging and send to S3; deploy Kubernetes Event Exporter on nodes; send logs to S3; use Athena and QuickSight for analysis.

D.

Use AWS Distro for OpenTelemetry; stream logs to Firehose; analyze data in Redshift.

Question 32

A company is developing a web application and is using AWS CodeBuild for its CI/CD pipeline. The company must generate multiple artifacts from a single build process. The company also needs the ability to determine which build generated each artifact. The artifacts must be stored in an Amazon S3 bucket for further processing and deployment. Builds occur frequently and are based on a large Git repository. The company needs to optimize build times. Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Configure the buildspec.yml file to specify multiple artifacts with different file sets. Enable local caching for the build process by using source cache mode. Use environment variables to dynamically name artifacts based on the build ID.

B.

Configure the buildspec.yml file to output all files as a single artifact. Enable local caching for the build process by using custom cache mode. Create an AWS Lambda function that is invoked by CodeBuild completion. Program the Lambda function to split the artifact into multiple files and to upload the files to the S3 bucket with dynamic names based on build ID.

C.

Create separate CodeBuild projects for each artifact type. Enable local caching for the build process by using Docker layer cache mode. Configure each project to output a single artifact to the S3 bucket with a dynamic name based on build ID. Use AWS Step Functions to orchestrate the projects in parallel.

D.

Set up CodeBuild to generate a single ZIP artifact that contains all files. Enable S3 caching for the build process. Use AWS CodePipeline with a custom action to extract the files and reorganize the files into multiple artifacts in the S3 bucket. Configure the custom action to dynamically name the files based on the time of the build.

Question 33

A company runs applications on Windows and Linux Amazon EC2 instances. The instances run across multiple Availability Zones in an AWS Region. The company uses Auto Scaling groups for each application.

The company needs a durable storage solution for the instances. The solution must use SMB for Windows and must use NFS for Linux. The solution must also have sub-millisecond latencies. All instances will read and write the data.

Which combination of steps will meet these requirements? (Select THREE.)

Options:

A.

Create an Amazon Elastic File System (Amazon EFS) file system that has targets in multiple Availability Zones.

B.

Create an Amazon FSx for NetApp ONTAP Multi-AZ file system.

C.

Create a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume to use for shared storage.

D.

Update the user data for each application's launch template to mount the file system.

E.

Perform an instance refresh on each Auto Scaling group.

F.

Update the EC2 instances for each application to mount the file system when new instances are launched.

Question 34

A company uses AWS Organizations to manage multiple accounts. Information security policies require that all unencrypted Amazon EBS volumes be marked as non-compliant. A DevOps engineer needs to automatically deploy the solution and ensure that this compliance check is always present.

Which solution will accomplish this?

Options:

A.

Create an AWS CloudFormation template that defines an AWS Inspector rule to check whether EBS encryption is enabled. Save the template to an Amazon S3 bucket that has been shared with all accounts within the company. Update the account creation script pointing to the CloudFormation template in Amazon S3.

B.

Create an AWS Config organizational rule to check whether EBS encryption is enabled and deploy the rule using the AWS CLI. Create and apply an SCP to prohibit stopping and deleting AWS Config across the organization.

C.

Create an SCP in Organizations. Set the policy to prevent the launch of Amazon EC2 instances without encryption on the EBS volumes using a conditional expression. Apply the SCP to all AWS accounts. Use Amazon Athena to analyze the AWS CloudTrail output, looking for events that deny an ec2:RunInstances action.

D.

Deploy an IAM role to all accounts from a single trusted account. Build a pipeline with AWS CodePipeline with a stage in AWS Lambda to assume the IAM role, and list all EBS volumes in the account. Publish a report to Amazon S3.

Question 35

A company's application has an API that retrieves workload metrics. The company needs to audit, analyze, and visualize these metrics from the application to detect issues at scale.

Which combination of steps will meet these requirements? (Select THREE).

Options:

A.

Configure an Amazon EventBridge schedule to invoke an AWS Lambda function that calls the API to retrieve workload metrics. Store the workload metric data in an Amazon S3 bucket.

B.

Configure an Amazon EventBridge schedule to invoke an AWS Lambda function that calls the API to retrieve workload metrics. Store the workload metric data in an Amazon DynamoDB table that has a DynamoDB stream enabled.

C.

Create an AWS Glue crawler to catalog the workload metric data in the Amazon S3 bucket. Create views in Amazon Athena for the cataloged data.

D.

Connect an AWS Glue crawler to the Amazon DynamoDB stream to catalog the workload metric data. Create views in Amazon Athena for the cataloged data.

E.

Create Amazon QuickSight datasets from the Amazon Athena views. Create a QuickSight analysis to visualize the workload metric data as a dashboard.

F.

Create an Amazon CloudWatch dashboard that has custom widgets that invoke AWS Lambda functions. Configure the Lambda functions to query the workload metrics data from the Amazon Athena views.

Question 36

A company has a single developer writing code for an automated deployment pipeline. The developer is storing source code in an Amazon S3 bucket for each project. The company wants to add more developers to the team but is concerned about code conflicts and lost work. The company also wants to build a test environment to deploy newer versions of code for testing and allow developers to automatically deploy to both environments when code is changed in the repository.

What is the MOST efficient way to meet these requirements?

Options:

A.

Create an AWS CodeCommit repository for each project. Use the main branch for production code, and create a testing branch for code deployed to testing. Use feature branches to develop new features and pull requests to merge code into the testing and main branches.

B.

Create another S3 bucket for each project for testing code, and use an AWS Lambda function to promote code changes between the testing and production buckets. Enable versioning on all buckets to prevent code conflicts.

C.

Create an AWS CodeCommit repository for each project, and use the main branch for production and test code with different deployment pipelines for each environment. Use feature branches to develop new features.

D.

Enable versioning and branching on each S3 bucket, use the main branch for production code, and create a testing branch for code deployed to testing. Have developers use each branch for developing in each environment.

Question 37

A company has configured Amazon RDS storage autoscaling for its RDS DB instances. A DevOps team needs to visualize the autoscaling events on an Amazon CloudWatch dashboard. Which solution will meet this requirement?

Options:

A.

Create an Amazon EventBridge rule that reacts to RDS storage autoscaling events from RDS events. Create an AWS Lambda function that publishes a CloudWatch custom metric. Configure the EventBridge rule to invoke the Lambda function. Visualize the custom metric by using the CloudWatch dashboard.

B.

Create a trail by using AWS CloudTrail with management events configured. Configure the trail to send the management events to Amazon CloudWatch Logs. Create a metric filter in CloudWatch Logs to match the RDS storage autoscaling events. Visualize the metric filter by using the CloudWatch dashboard.

C.

Create an Amazon EventBridge rule that reacts to RDS storage autoscaling events from the RDS events. Create a CloudWatch alarm. Configure the EventBridge rule to change the status of the CloudWatch alarm. Visualize the alarm status by using the CloudWatch dashboard.

D.

Create a trail by using AWS CloudTrail with data events configured. Configure the trail to send the data events to Amazon CloudWatch Logs. Create a metric filter in CloudWatch Logs to match the RDS storage autoscaling events. Visualize the metric filter by using the CloudWatch dashboard.

Question 38

A company manages multiple AWS accounts by using AWS Organizations with OUs for the different business divisions. The company is updating its corporate network to use new IP address ranges. The company has 10 Amazon S3 buckets in different AWS accounts. The S3 buckets store reports for the different divisions. The S3 bucket configurations allow only private corporate network IP addresses to access the S3 buckets.

A DevOps engineer needs to change the range of IP addresses that have permission to access the contents of the S3 buckets. The DevOps engineer also needs to revoke the permissions of two OUs in the company.

Which solution will meet these requirements?

Options:

A.

Create a new SCP that has two statements, one that allows access to the new range of IP addresses for all the S3 buckets and one that denies access to the old range of IP addresses for all the S3 buckets. Set a permissions boundary for the OrganizationAccountAccessRole role in the two OUs to deny access to the S3 buckets.

B.

Create a new SCP that has a statement that allows only the new range of IP addresses to access the S3 buckets. Create another SCP that denies access to the S3 buckets. Attach the second SCP to the two OUs.

C.

On all the S3 buckets, configure resource-based policies that allow only the new range of IP addresses to access the S3 buckets. Create a new SCP that denies access to the S3 buckets. Attach the SCP to the two OUs.

D.

On all the S3 buckets, configure resource-based policies that allow only the new range of IP addresses to access the S3 buckets. Set a permissions boundary for the OrganizationAccountAccessRole role in the two OUs to deny access to the S3 buckets.
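
For reference, a minimal boto3 sketch of the resource-based policy portion described in options C and D. The bucket name and CIDR range are placeholder assumptions.

import json
import boto3

s3 = boto3.client("s3")

BUCKET = "division-reports-bucket"  # placeholder; repeat for each of the 10 buckets
NEW_CORPORATE_CIDR = "203.0.113.0/24"  # placeholder new corporate IP range

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyCorporateNetwork",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        # Deny any request that does not originate from the new corporate range.
        "Condition": {"NotIpAddress": {"aws:SourceIp": NEW_CORPORATE_CIDR}},
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))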

Question 39

A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache Tomcat application with an Apache Webserver. The development team started with a proof of concept, created a deployment group for a developer environment, and performed functional tests within the application. After completion, the team will create additional deployment groups for staging and production.

The current log level is configured within the Apache settings, but the team wants to change this configuration dynamically when the deployment occurs, so that they can set different log level configurations depending on the deployment group without having a different application revision for each group.

How can these requirements be met with the LEAST management overhead and without requiring different script versions for each deployment group?

Options:

A.

Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.

B.

Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.

C.

Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.

D.

Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.
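
For reference, a minimal sketch of a hook script of the kind described in option B. The config file path and log-level mapping are placeholder assumptions, and the script would be referenced from a lifecycle hook section of appspec.yml.

#!/usr/bin/env python3
"""CodeDeploy lifecycle hook: set the Apache log level based on the deployment group."""
import os

# CodeDeploy exports DEPLOYMENT_GROUP_NAME to lifecycle hook scripts.
deployment_group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")

# Placeholder mapping from deployment group to Apache LogLevel.
log_levels = {
    "developer-env": "debug",
    "staging-env": "info",
    "production-env": "warn",
}
log_level = log_levels.get(deployment_group, "warn")

# Placeholder path for a config snippet included by the main Apache configuration.
with open("/etc/httpd/conf.d/loglevel.conf", "w") as conf:
    conf.write(f"LogLevel {log_level}\n")

print(f"Configured Apache LogLevel={log_level} for deployment group {deployment_group}")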

Question 40

A company uses AWS Organizations to manage its AWS accounts. The company has a root OU that has a child OU. The root OU has an SCP that allows all actions on all resources. The child OU has an SCP that allows all actions for Amazon DynamoDB and AWS Lambda, and denies all other actions.

The company has an AWS account that is named vendor-data in the child OU. A DevOps engineer has an IAM user that is attached to the AdministratorAccess IAM policy in the vendor-data account. The DevOps engineer attempts to launch an Amazon EC2 instance in the vendor-data account but receives an access denied error.

Which change should the DevOps engineer make to launch the EC2 instance in the vendor-data account?

Options:

A.

Attach the AmazonEC2FullAccess IAM policy to the IAM user.

B.

Create a new SCP that allows all actions for Amazon EC2. Attach the SCP to the vendor-data account.

C.

Update the SCP in the child OU to allow all actions for Amazon EC2.

D.

Create a new SCP that allows all actions for Amazon EC2. Attach the SCP to the root OU.

Question 41

A DevOps engineer is building the infrastructure for an application. The application needs to run on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that includes Amazon EC2 instances. The EC2 instances need to use an Amazon Elastic File System (Amazon EFS) file system as a storage backend. The Amazon EFS Container Storage Interface (CSI) driver is installed on the EKS cluster.

When the DevOps engineer starts the application, the EC2 instances do not mount the EFS file system.

Which solutions will fix the problem? (Select THREE.)

Options:

A.

Switch the EKS nodes from Amazon EC2 to AWS Fargate.

B.

Add an inbound rule to the EFS file system's security group to allow NFS traffic from the EKS cluster.

C.

Create an IAM role that allows the Amazon EFS CSI driver to interact with the file system.

D.

Set up AWS DataSync to configure file transfer between the EFS file system and the EKS nodes.

E.

Create a mount target for the EFS file system in the subnet of the EKS nodes.

F.

Disable encryption on the EFS file system.
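
For reference, a minimal boto3 sketch of the networking changes described in options B and E. The file system ID, security group IDs, and subnet ID are placeholder assumptions; the IAM role for the CSI driver in option C would be created separately.

import boto3

ec2 = boto3.client("ec2")
efs = boto3.client("efs")

FILE_SYSTEM_ID = "fs-0123456789abcdef0"  # placeholder EFS file system
EFS_SECURITY_GROUP = "sg-0aaaaaaaaaaaaaaaa"  # placeholder SG attached to the mount targets
NODE_SECURITY_GROUP = "sg-0bbbbbbbbbbbbbbbb"  # placeholder SG used by the EKS worker nodes
NODE_SUBNET_ID = "subnet-0123456789abcdef0"  # placeholder subnet where the nodes run

# Allow NFS (TCP 2049) from the EKS node security group to the EFS security group.
ec2.authorize_security_group_ingress(
    GroupId=EFS_SECURITY_GROUP,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 2049,
        "ToPort": 2049,
        "UserIdGroupPairs": [{"GroupId": NODE_SECURITY_GROUP}],
    }],
)

# Create a mount target in the subnet that hosts the EKS nodes.
efs.create_mount_target(
    FileSystemId=FILE_SYSTEM_ID,
    SubnetId=NODE_SUBNET_ID,
    SecurityGroups=[EFS_SECURITY_GROUP],
)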

Question 42

A company is running an application on Amazon EC2 instances in an Auto Scaling group. Recently, an issue occurred that prevented EC2 instances from launching successfully, and it took several hours for the support team to discover the issue. The support team wants to be notified by email whenever an EC2 instance does not start successfully.

Which action will accomplish this?

Options:

A.

Add a health check to the Auto Scaling group to invoke an AWS Lambda function whenever an instance status is impaired.

B.

Configure the Auto Scaling group to send a notification to an Amazon SNS topic whenever a failed instance launch occurs.

C.

Create an Amazon CloudWatch alarm that invokes an AWS Lambda function when a failed Attachinstances Auto Scaling API call is made.

D.

Create a status check alarm on Amazon EC2 to send a notification to an Amazon SNS topic whenever a status check failure occurs.

Question 43

A company needs to update its order processing application to improve resilience and availability. The application requires a stateful database and uses a single-node Amazon RDS DB instance to store customer orders and transaction history. A DevOps engineer must make the database highly available.

Which solution will meet this requirement?

Options:

A.

Migrate the database to Amazon DynamoDB global tables. Configure automatic failover between AWS Regions by using Amazon Route 53 health checks.

B.

Migrate the database to Amazon EC2 instances in multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to connect all the instances to a single EBS volume.

C.

Use the RDS DB instance as the source instance to create read replicas in multiple Availability Zones. Deploy an Application Load Balancer to distribute read traffic across the read replicas.

D.

Modify the RDS DB instance to be a Multi-AZ deployment. Verify automatic failover to the standby instance if the primary instance becomes unavailable.

Question 44

A company wants to use AWS Systems Manager documents to bootstrap physical laptops for developers. The bootstrap code is stored in GitHub. A DevOps engineer has already created a Systems Manager activation, installed the Systems Manager agent with the registration code, and installed an activation ID on all the laptops.

Which set of steps should be taken next?

Options:

A.

Configure the Systems Manager document to use the AWS-RunShellScript command to copy the files from GitHub to Amazon S3. Then use the aws:downloadContent plugin with a sourceType of S3.

B.

Configure the Systems Manager document to use the aws:configurePackage plugin with an install action, and point to the Git repository.

C.

Configure the Systems Manager document to use the aws:downloadContent plugin with a sourceType of GitHub and sourceInfo with the repository details.

D.

Configure the Systems Manager document to use the aws:softwareInventory plugin and run the script from the Git repository.
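
For reference, a minimal boto3 sketch of a document of the kind described in option C. The repository details and script name are placeholder assumptions.

import json
import boto3

ssm = boto3.client("ssm")

document = {
    "schemaVersion": "2.2",
    "description": "Bootstrap developer laptops from a GitHub repository",
    "mainSteps": [
        {
            "action": "aws:downloadContent",
            "name": "downloadBootstrapCode",
            "inputs": {
                "sourceType": "GitHub",
                "sourceInfo": json.dumps({  # placeholder repository details
                    "owner": "example-org",
                    "repository": "laptop-bootstrap",
                    "path": "scripts/",
                    "getOptions": "branch:main",
                }),
                "destinationPath": "bootstrap",
            },
        },
        {
            "action": "aws:runShellScript",
            "name": "runBootstrap",
            "inputs": {"runCommand": ["bash bootstrap/install.sh"]},  # placeholder script
        },
    ],
}

ssm.create_document(
    Name="BootstrapDeveloperLaptops",
    DocumentType="Command",
    DocumentFormat="JSON",
    Content=json.dumps(document),
)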

Question 45

A company has multiple accounts in an organization in AWS Organizations. The company's SecOps team needs to receive an Amazon Simple Notification Service (Amazon SNS) notification if any account in the organization turns off the Block Public Access feature on an Amazon S3 bucket. A DevOps engineer must implement this change without affecting the operation of any AWS accounts. The implementation must ensure that individual member accounts in the organization cannot turn off the notification.

Which solution will meet these requirements?

Options:

A.

Designate an account to be the delegated Amazon GuardDuty administrator account. Turn on GuardDuty for all accounts across the organization. In the GuardDuty administrator account, create an SNS topic. Subscribe the SecOps team's email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for GuardDuty findings and a target of the SNS topic.

B.

Create an AWS CloudFormation template that creates an SNS topic and subscribes the SecOps team’s email address to the SNS topic. In the template, include an Amazon EventBridge rule that uses an event pattern of CloudTrail activity for s3:PutBucketPublicAccessBlock and a target of the SNS topic. Deploy the stack to every account in the organization by using CloudFormation StackSets.

C.

Turn on AWS Config across the organization. In the delegated administrator account, create an SNS topic. Subscribe the SecOps team's email address to the SNS topic. Deploy a conformance pack that uses the s3-bucket-level-public-access-prohibited AWS Config managed rule in each account and uses an AWS Systems Manager document to publish an event to the SNS topic to notify the SecOps team.

D.

Turn on Amazon Inspector across the organization. In the Amazon Inspector delegated administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for public network exposure of the S3 bucket and publishes an event to the SNS topic to notify the SecOps team.
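
As an illustration of the EventBridge-to-SNS pattern that option B describes, the sketch below (Python, boto3) creates a rule in a single account that matches the CloudTrail-recorded PutBucketPublicAccessBlock call and targets an SNS topic; the rule name and topic ARN are placeholders, and organization-wide rollout (for example, through StackSets) is not shown.

import json
import boto3

events = boto3.client("events")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:secops-alerts"  # placeholder
RULE_NAME = "s3-block-public-access-changes"

# Match the S3 PutBucketPublicAccessBlock API call that CloudTrail records.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutBucketPublicAccessBlock"],
    },
}

events.put_rule(Name=RULE_NAME, EventPattern=json.dumps(pattern), State="ENABLED")
events.put_targets(Rule=RULE_NAME, Targets=[{"Id": "secops-sns", "Arn": TOPIC_ARN}])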

Question 46

A company is deploying a new application that uses Amazon EC2 instances. The company needs a solution to query application logs and AWS account API activity. Which solution will meet these requirements?

Options:

A.

Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon CloudWatch Logs. Configure AWS CloudTrail to deliver the API logs to Amazon S3. Use CloudWatch to query both sets of logs.

B.

Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon CloudWatch Logs. Configure AWS CloudTrail to deliver the API logs to CloudWatch Logs. Use CloudWatch Logs Insights to query both sets of logs.

C.

Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon Kinesis. Configure AWS CloudTrail to deliver the API logs to Kinesis. Use Kinesis to load the data into Amazon Redshift. Use Amazon Redshift to query both sets of logs.

D.

Use the Amazon CloudWatch agent to send logs from the EC2 instances to Amazon S3. Use AWS CloudTrail to deliver the API logs to Amazon S3. Use Amazon Athena to query both sets of logs in Amazon S3.
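
To show what querying centralized logs can look like for the approach in option B, here is a minimal CloudWatch Logs Insights sketch (Python, boto3); the log group names and query string are assumptions.

import time
import boto3

logs = boto3.client("logs")

# Placeholder log groups: one for application logs, one for CloudTrail events.
LOG_GROUPS = ["/app/web", "cloudtrail-logs"]

query_id = logs.start_query(
    logGroupNames=LOG_GROUPS,
    startTime=int(time.time()) - 3600,  # last hour
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
)["queryId"]

# Poll until the query finishes, then print each result row.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})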

Question 47

A DevOps engineer needs to design a cloud-based solution to standardize deployment artifacts for AWS Cloud deployments and on-premises deployments. There is currently no routing traffic between the on-premises data center and the AWS environment.

The solution must be able to consume downstream packages from public repositories and must be highly available. Data must be encrypted in transit and at rest. The solution must store the deployment artifacts in object storage and deploy the deployment artifacts into Amazon Elastic Container Service (Amazon ECS). The deployment artifacts must be encrypted in transit if the deployment artifacts travel across the public internet.

The DevOps engineer needs to deploy this solution in less than two weeks.

Which solution will meet these requirements?

Options:

A.

Use a third-party software VPN appliance to connect the on-premises data center and AWS. Use AWS CodeArtifact to store the deployment artifacts.

B.

Use an AWS Direct Connect connection and a VPN connection to connect the on-premises data center to AWS. Deploy third-party artifact management software on Amazon EC2 instances.

C.

Use two AWS VPN connections to connect the on-premises data center to AWS. Use AWS CodeArtifact to store the deployment artifacts.

D.

Use parallel AWS Direct Connect connections to connect the on-premises data center to AWS. Deploy third-party artifact management software on Amazon EC2 instances.

Question 48

A company has developed a serverless web application that is hosted on AWS. The application consists of Amazon S3, Amazon API Gateway, several AWS Lambda functions, and an Amazon RDS for MySQL database. The company is using AWS CodeCommit to store the source code. The source code is a combination of AWS Serverless Application Model (AWS SAM) templates and Python code.

A security audit and penetration test reveal that user names and passwords for authentication to the database are hardcoded within CodeCommit repositories. A DevOps engineer must implement a solution to automatically detect and prevent hardcoded secrets.

What is the MOST secure solution that meets these requirements?

Options:

A.

Enable Amazon CodeGuru Profiler. Decorate the handler function with @with_lambda_profiler(). Manually review the recommendation report. Write the secret to AWS Systems Manager Parameter Store as a secure string. Update the SAM templates and the Python code to pull the secret from Parameter Store.

B.

Associate the CodeCommit repository with Amazon CodeGuru Reviewer. Manually check the code review for any recommendations. Choose the option to protect the secret. Update the SAM templates and the Python code to pull the secret from AWS Secrets Manager.

C.

Enable Amazon CodeGuru Profiler. Decorate the handler function with @with_lambda_profiler(). Manually review the recommendation report. Choose the option to protect the secret. Update the SAM templates and the Python code to pull the secret from AWS Secrets Manager.

D.

Associate the CodeCommit repository with Amazon CodeGuru Reviewer. Manually check the code review for any recommendations. Write the secret to AWS Systems Manager Parameter Store as a string. Update the SAM templates and the Python code to pull the secret from Parameter Store.

Question 49

An ecommerce company hosts a web application on Amazon EC2 instances that are in an Auto Scaling group. The company deploys the application across multiple Availability Zones.

Application users are reporting intermittent performance issues with the application.

The company enables basic Amazon CloudWatch monitoring for the EC2 instances. The company identifies and implements a fix for the performance issues. After resolving the issues, the company wants to implement a monitoring solution that will quickly alert the company about future performance issues.

Which solution will meet this requirement?

Options:

A.

Enable detailed monitoring for the EC2 instances. Create custom CloudWatch metrics for application-specific performance indicators. Set up CloudWatch alarms based on the custom metrics. Use CloudWatch Logs Insights to analyze application logs for error patterns.

B.

Use AWS X-Ray to implement distributed tracing. Integrate X-Ray with Amazon CloudWatch RUM. Use Amazon EventBridge to trigger automatic scaling actions based on custom events.

C.

Use Amazon CloudFront to deliver the application. Use AWS CloudTrail to monitor API calls. Use AWS Trusted Advisor to generate recommendations to optimize performance. Use Amazon GuardDuty to detect potential performance issues.

D.

Enable VPC Flow Logs. Use Amazon Data Firehose to stream flow logs to Amazon S3. Use Amazon Athena to analyze the logs and to send alerts to the company.

Question 50

A development team uses AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild to develop and deploy an application. Changes to the code are submitted by pull requests. The development team reviews and merges the pull requests, and then the pipeline builds and tests the application.

Over time, the number of pull requests has increased. The pipeline is frequently blocked because of failing tests. To prevent this blockage, the development team wants to run the unit and integration tests on each pull request before it is merged.

Which solution will meet these requirements?

Options:

A.

Create a CodeBuild project to run the unit and integration tests. Create a CodeCommit approval rule template. Configure the template to require the successful invocation of the CodeBuild project. Attach the approval rule to the project's CodeCommit repository.

B.

Create an Amazon EventBridge rule to match pullRequestCreated events from CodeCommit. Create a CodeBuild project to run the unit and integration tests. Configure the CodeBuild project as a target of the EventBridge rule that includes a custom event payload with the CodeCommit repository and branch information from the event.

C.

Create an Amazon EventBridge rule to match pullRequestCreated events from CodeCommit. Modify the existing CodePipeline pipeline to not run the deploy steps if the build is started from a pull request. Configure the EventBridge rule to run the pipeline with a custom payload that contains the CodeCommit repository and branch information from the event.

D.

Create a CodeBuild project to run the unit and integration tests. Create a CodeCommit notification rule that matches when a pull request is created or updated. Configure the notification rule to invoke the CodeBuild project.
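
The sketch below (Python, boto3) illustrates the EventBridge rule from option B that reacts to pull request creation and starts a CodeBuild project; the project ARN, role ARN, and the exact event names in the pattern are assumptions.

import json
import boto3

events = boto3.client("events")

# Placeholder ARNs; the role must allow EventBridge to start the CodeBuild project.
PROJECT_ARN = "arn:aws:codebuild:us-east-1:111122223333:project/pr-tests"
ROLE_ARN = "arn:aws:iam::111122223333:role/eventbridge-start-codebuild"

pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Pull Request State Change"],
    "detail": {"event": ["pullRequestCreated", "pullRequestSourceBranchUpdated"]},
}

events.put_rule(Name="pr-unit-tests", EventPattern=json.dumps(pattern), State="ENABLED")
events.put_targets(
    Rule="pr-unit-tests",
    Targets=[{"Id": "codebuild-pr-tests", "Arn": PROJECT_ARN, "RoleArn": ROLE_ARN}],
)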

Question 51

A company uses an Amazon API Gateway regional REST API to host its application API. The REST API has a custom domain. The REST API's default endpoint is deactivated.

The company's internal teams consume the API. The company wants to use mutual TLS between the API and the internal teams as an additional layer of authentication.

Which combination of steps will meet these requirements? (Select TWO.)

Options:

A.

Use AWS Certificate Manager (ACM) to create a private certificate authority (CA). Provision a client certificate that is signed by the private CA.

B.

Provision a client certificate that is signed by a public certificate authority (CA). Import the certificate into AWS Certificate Manager (ACM).

C.

Upload the provisioned client certificate to an Amazon S3 bucket. Configure the API Gateway mutual TLS to use the client certificate that is stored in the S3 bucket as the trust store.

D.

Upload the provisioned client certificate private key to an Amazon S3 bucket. Configure the API Gateway mutual TLS to use the private key that is stored in the S3 bucket as the trust store.

E.

Upload the root private certificate authority (CA) certificate to an Amazon S3 bucket. Configure the API Gateway mutual TLS to use the private CA certificate that is stored in the S3 bucket as the trust store.
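
For background on the truststore-based configuration that the options discuss, here is a minimal sketch (Python, boto3) of creating a regional custom domain with mutual TLS; the domain name, ACM certificate ARN, and S3 truststore location are placeholders.

import boto3

apigateway = boto3.client("apigateway")

# Placeholders: a regional custom domain that uses an ACM certificate and an
# S3-hosted truststore containing the CA certificate bundle for client certs.
apigateway.create_domain_name(
    domainName="api.internal.example.com",
    regionalCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/abc123",
    endpointConfiguration={"types": ["REGIONAL"]},
    securityPolicy="TLS_1_2",
    mutualTlsAuthentication={
        "truststoreUri": "s3://example-truststore-bucket/truststore.pem"
    },
)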

Question 52

A company has an organization in AWS Organizations. A DevOps engineer needs to maintain multiple AWS accounts that belong to different OUs in the organization. All resources, including IAM policies and Amazon S3 policies within an account, are deployed through AWS CloudFormation. All templates and code are maintained in an AWS CodeCommit repository. Recently, some developers have not been able to access an S3 bucket from some accounts in the organization.

The following policy is attached to the S3 bucket.

What should the DevOps engineer do to resolve this access issue?

Options:

A.

Modify the S3 bucket policy. Turn off the S3 Block Public Access setting on the S3 bucket. In the S3 policy, add the aws:SourceAccount condition. Add the AWS account IDs of all developers who are experiencing the issue.

B.

Verify that no IAM permissions boundaries are denying developers access to the S3 bucket. Make the necessary changes to the IAM permissions boundaries. Use an AWS Config recorder in the individual developer accounts that are experiencing the issue to revert any changes that are blocking access. Commit the fix back into the CodeCommit repository. Invoke deployment through CloudFormation to apply the changes.

C.

Configure an SCP that stops anyone from modifying IAM resources in developer OUs. In the S3 policy, add the aws:SourceAccount condition. Add the AWS account IDs of all developers who are experiencing the issue. Commit the fix back into the CodeCommit repository. Invoke deployment through CloudFormation to apply the changes.

D.

Ensure that no SCP is blocking access for developers to the S3 bucket. Ensure that no IAM policy permissions boundaries are denying access to developer IAM users. Make the necessary changes to the SCP and the IAM policy permissions boundaries in the CodeCommit repository. Invoke deployment through CloudFormation to apply the changes.

Question 53

A company needs to implement failover for its application. The application includes an Amazon CloudFront distribution and a public Application Load Balancer (ALB) in an AWS Region. The company has configured the ALB as the default origin for the distribution.

After some recent application outages, the company wants a zero-second RTO. The company deploys the application to a secondary Region in a warm standby configuration. A DevOps engineer needs to automate the failover of the application to the secondary Region so that HTTP GET requests meet the desired RTO.

Which solution will meet these requirements?

Options:

A.

Create a second CloudFront distribution that has the secondary ALB as the default origin. Create Amazon Route 53 alias records that have a failover policy and Evaluate Target Health set to Yes for both CloudFront distributions. Update the application to use the new record set.

B.

Create a new origin on the distribution for the secondary ALB. Create a new origin group. Set the original ALB as the primary origin. Configure the origin group to fail over for HTTP 5xx status codes. Update the default behavior to use the origin group.

C.

Create Amazon Route 53 alias records that have a failover policy and Evaluate Target Health set to Yes for both ALBs. Set the TTL of both records to 0. Update the distribution's origin to use the new record set.

D.

Create a CloudFront function that detects HTTP 5xx status codes. Configure the function to return a 307 Temporary Redirect response to the secondary ALB if the function detects 5xx status codes. Update the distribution's default behavior to send origin responses to the function.

Question 54

A DevOps engineer is implementing governance controls for a company that requires its infrastructure to be housed within the United States. The engineer must restrict which AWS Regions can be used, and ensure an alert is sent as soon as possible if any activity outside the governance policy takes place. The controls should be automatically enabled on any new Region outside the United States (US).

Which combination of actions will meet these requirements? (Select TWO.)

Options:

A.

Create an AWS Organizations SCP that denies access to all non-global services in non-US Regions. Attach the policy to the root of the organization.

B.

Configure AWS CloudTrail to send logs to Amazon CloudWatch Logs and enable it for all Regions. Use a CloudWatch Logs metric filter to send an alert on any service activity in non-US Regions.

C.

Use an AWS Lambda function that checks for AWS service activity and deploy it to all Regions. Write an Amazon EventBridge rule that runs the Lambda function every hour, sending an alert if activity is found in a non-US Region.

D.

Use an AWS Lambda function to query Amazon Inspector to look for service activity in non-US Regions and send alerts if any activity is found.

E.

Write an SCP that uses the aws:RequestedRegion condition key to limit access to US Regions. Apply the policy to all users, groups, and roles.
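
To make the Region-restriction idea concrete, the sketch below (Python, boto3) creates an SCP that uses the aws:RequestedRegion condition key; the allowed Region list and the NotAction exemptions for global services are illustrative assumptions and would need review before real use.

import json
import boto3

organizations = boto3.client("organizations")

# Deny requests outside US Regions. The NotAction list of global services is
# intentionally abbreviated for illustration.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideUSRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "route53:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": [
                        "us-east-1",
                        "us-east-2",
                        "us-west-1",
                        "us-west-2",
                    ]
                }
            },
        }
    ],
}

organizations.create_policy(
    Name="us-regions-only",
    Description="Restrict activity to US Regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)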

Question 55

A DevOps engineer needs a CI/CD pipeline that builds container images, stores them in Amazon ECR, scans the images for vulnerabilities, and is resilient to outages in upstream source image repositories.

Which solution meets these requirements?

Options:

A.

Create a private ECR repo, scan images on push, replicate images from upstream repos with a replication rule.

B.

Create a public ECR repo to cache images from upstream repos, create a private repo to store images, scan images on push.

C.

Create a public ECR repo, configure a pull-through cache rule, create a private repo to store images, enable basic scanning.

D.

Create a private ECR repo, enable basic scanning, create a pull-through cache rule.

Question 56

A company has an application that runs on a fleet of Amazon EC2 instances. The application requires frequent restarts. The application logs contain error messages when a restart is required. The application logs are published to a log group in Amazon CloudWatch Logs.

An Amazon CloudWatch alarm notifies an application engineer through an Amazon Simple Notification Service (Amazon SNS) topic when the logs contain a large number of restart-related error messages. The application engineer manually restarts the application on the instances after the application engineer receives a notification from the SNS topic.

A DevOps engineer needs to implement a solution to automate the application restart on the instances without restarting the instances.

Which solution will meet these requirements in the MOST operationally efficient manner?

Options:

A.

Configure an AWS Systems Manager Automation runbook that runs a script to restart the application on the instances. Configure the SNS topic to invoke the runbook.

B.

Create an AWS Lambda function that restarts the application on the instances. Configure the Lambda function as an event destination of the SNS topic.

C.

Configure an AWS Systems Manager Automation runbook that runs a script to restart the application on the instances. Create an AWS Lambda function to invoke the runbook. Configure the Lambda function as an event destination of the SNS topic.

D.

Configure an AWS Systems Manager Automation runbook that runs a script to restart the application on the instances. Configure an Amazon EventBridge rule that reacts when the CloudWatch alarm enters ALARM state. Specify the runbook as a target of the rule.
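
As a companion to the runbook-based options above, here is a minimal sketch (Python, boto3) of an EventBridge rule that fires when a named CloudWatch alarm enters the ALARM state and targets a Systems Manager Automation runbook; the alarm name, runbook name, and role ARN are assumptions.

import json
import boto3

events = boto3.client("events")

ALARM_NAME = "app-restart-errors"  # placeholder alarm name
RUNBOOK_ARN = "arn:aws:ssm:us-east-1:111122223333:automation-definition/RestartAppRunbook:$DEFAULT"
ROLE_ARN = "arn:aws:iam::111122223333:role/eventbridge-start-automation"

pattern = {
    "source": ["aws.cloudwatch"],
    "detail-type": ["CloudWatch Alarm State Change"],
    "detail": {"alarmName": [ALARM_NAME], "state": {"value": ["ALARM"]}},
}

events.put_rule(Name="restart-app-on-alarm", EventPattern=json.dumps(pattern), State="ENABLED")
events.put_targets(
    Rule="restart-app-on-alarm",
    Targets=[{"Id": "restart-runbook", "Arn": RUNBOOK_ARN, "RoleArn": ROLE_ARN}],
)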

Question 57

A company uses AWS Organizations and AWS Control Tower to manage all the company's AWS accounts. The company uses the Enterprise Support plan.

A DevOps engineer is using Account Factory for Terraform (AFT) to provision new accounts. When new accounts are provisioned, the DevOps engineer notices that the support plan for the new accounts is set to the Basic Support plan. The DevOps engineer needs to implement a solution to provision the new accounts with the Enterprise Support plan.

Which solution will meet these requirements?

Options:

A.

Use an AWS Config conformance pack to deploy the account-part-of-organizations AWS Config rule and to automatically remediate any noncompliant accounts.

B.

Create an AWS Lambda function to create a ticket for AWS Support to add the account to the Enterprise Support plan. Grant the Lambda function the support:ResolveCase permission.

C.

Add an additional value to the control_tower_parameters input to set the AWSEnterpriseSupport parameter as the organization's management account number.

D.

Set the aft_feature_enterprise_support feature flag to True in the AFT deployment input configuration. Redeploy AFT and apply the changes.

Question 58

A company has a continuous integration pipeline where the company creates container images by using AWS CodeBuild. The created images are stored in Amazon Elastic Container Registry (Amazon ECR). Checking for and fixing the vulnerabilities in the images takes the company too much time. The company wants to identify the image vulnerabilities quickly and notify the security team of the vulnerabilities. Which combination of steps will meet these requirements with the LEAST operational overhead? (Select TWO.)

Options:

A.

Activate Amazon Inspector enhanced scanning for Amazon ECR. Configure the enhanced scanning to use continuous scanning. Set up a topic in Amazon Simple Notification Service (Amazon SNS).

B.

Create an Amazon EventBridge rule for Amazon Inspector findings. Set an Amazon Simple Notification Service (Amazon SNS) topic as the rule target.

C.

Activate AWS Lambda enhanced scanning for Amazon ECR. Configure the enhanced scanning to use continuous scanning. Set up a topic in Amazon Simple Email Service (Amazon SES).

D.

Create a new AWS Lambda function. Invoke the new Lambda function when scan findings are detected.

E.

Activate default basic scanning for Amazon ECR for all container images. Configure the default basic scanning to use continuous scanning. Set up a topic in Amazon Simple Notification Service (Amazon SNS).
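
To illustrate the scanning and notification pieces that options A and B describe, a minimal sketch (Python, boto3) follows; the repository filter, rule name, and SNS topic ARN are assumptions.

import json
import boto3

ecr = boto3.client("ecr")
events = boto3.client("events")

# Turn on enhanced (Amazon Inspector) scanning with continuous scanning for all repositories.
ecr.put_registry_scanning_configuration(
    scanType="ENHANCED",
    rules=[
        {
            "scanFrequency": "CONTINUOUS_SCAN",
            "repositoryFilters": [{"filter": "*", "filterType": "WILDCARD"}],
        }
    ],
)

# Route Inspector findings to an SNS topic (placeholder ARN) for the security team.
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:image-vulnerabilities"
pattern = {"source": ["aws.inspector2"], "detail-type": ["Inspector2 Finding"]}

events.put_rule(Name="inspector-ecr-findings", EventPattern=json.dumps(pattern), State="ENABLED")
events.put_targets(Rule="inspector-ecr-findings", Targets=[{"Id": "sns", "Arn": TOPIC_ARN}])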

Question 59

A company is building a new pipeline by using AWS CodePipeline and AWS CodeBuild in a build account. The pipeline consists of two stages. The first stage is a CodeBuild job to build and package an AWS Lambda function. The second stage consists of deployment actions that operate on two different AWS accounts: a development environment account and a production environment account. The deployment stages use the AWS CloudFormation action that CodePipeline invokes to deploy the infrastructure that the Lambda function requires.

A DevOps engineer creates the CodePipeline pipeline and configures the pipeline to encrypt build artifacts by using the AWS Key Management Service (AWS KMS) AWS managed key for Amazon S3 (the aws/s3 key). The artifacts are stored in an S3 bucket. When the pipeline runs, the CloudFormation actions fail with an access denied error.

Which combination of actions must the DevOps engineer perform to resolve this error? (Select TWO.)

Options:

A.

Create an S3 bucket in each AWS account for the artifacts. Allow the pipeline to write to the S3 buckets. Create a CodePipeline S3 action to copy the artifacts to the S3 bucket in each AWS account. Update the CloudFormation actions to reference the artifacts S3 bucket in the production account.

B.

Create a customer managed KMS key. Configure the KMS key policy to allow the IAM roles used by the CloudFormation action to perform decrypt operations. Modify the pipeline to use the customer managed KMS key to encrypt artifacts.

C.

Create an AWS managed KMS key. Configure the KMS key policy to allow the development account and the production account to perform decrypt operations. Modify the pipeline to use the KMS key to encrypt artifacts.

D.

In the development account and in the production account, create an IAM role for CodePipeline. Configure the roles with permissions to perform CloudFormation operations and with permissions to retrieve and decrypt objects from the artifacts S3 bucket. In the CodePipeline account, configure the CodePipeline CloudFormation action to use the roles.

E.

In the development account and in the production account, create an IAM role for CodePipeline. Configure the roles with permissions to perform CloudFormation operations and with permissions to retrieve and decrypt objects from the artifacts S3 bucket. In the CodePipeline account, modify the artifacts S3 bucket policy to allow the roles access. Configure the CodePipeline CloudFormation action to use the roles.

Question 60

A company uses containers for its applications. The company learns that some container images are missing required security configurations.

A DevOps engineer needs to implement a solution to create a standard base image. The solution must publish the base image weekly to the us-west-2 Region, the us-east-2 Region, and the eu-central-1 Region.

Which solution will meet these requirements?

Options:

A.

Create an EC2 Image Builder pipeline that uses a container recipe to build the image. Configure the pipeline to distribute the image to an Amazon Elastic Container Registry (Amazon ECR) repository in us-west-2. Configure ECR replication from us-west-2 to us-east-2 and from us-east-2 to eu-central-1. Configure the pipeline to run weekly.

B.

Create an AWS CodePipeline pipeline that uses an AWS CodeBuild project to build the image. Use AWS CodeDeploy to publish the image to an Amazon Elastic Container Registry (Amazon ECR) repository in us-west-2. Configure ECR replication from us-west-2 to us-east-2 and from us-east-2 to eu-central-1. Configure the pipeline to run weekly.

C.

Create an EC2 Image Builder pipeline that uses a container recipe to build the image. Configure the pipeline to distribute the image to Amazon Elastic Container Registry (Amazon ECR) repositories in all three Regions. Configure the pipeline to run weekly.

D.

Create an AWS CodePipeline pipeline that uses an AWS CodeBuild project to build the image. Use AWS CodeDeploy to publish the image to Amazon Elastic Container Registry (Amazon ECR) repositories in all three Regions. Configure the pipeline to run weekly.

Question 61

A company has an application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances are in multiple Availability Zones. The application was misconfigured in a single Availability Zone, which caused a partial outage of the application.

A DevOps engineer made changes to ensure that the unhealthy EC2 instances in one Availability Zone do not affect the healthy EC2 instances in the other Availability Zones. The DevOps engineer needs to test the application's failover and shift where the ALB sends traffic. During failover, the ALB must avoid sending traffic to the Availability Zone where the failure occurred.

Which solution will meet these requirements?

Options:

A.

Turn off cross-zone load balancing on the ALB. Use Amazon Route 53 Application Recovery Controller to start a zonal shift away from the Availability Zone.

B.

Turn off cross-zone load balancing on the ALB's target group. Use Amazon Route 53 Application Recovery Controller to start a zonal shift away from the Availability Zone.

C.

Create an Amazon Route 53 Application Recovery Controller resource set that uses the DNS hostname of the ALB. Start a zonal shift for the resource set away from the Availability Zone.

D.

Create an Amazon Route 53 Application Recovery Controller resource set that uses the ARN of the ALB's target group. Create a readiness check that uses the ElbV2TargetGroupsCanServeTraffic rule.

Question 62

A security review has identified that an AWS CodeBuild project is downloading a database population script from an Amazon S3 bucket using an unauthenticated request. The security team does not allow unauthenticated requests to S3 buckets for this project.

How can this issue be corrected in the MOST secure manner?

Options:

A.

Add the bucket name to the AllowedBuckets section of the CodeBuild project settings. Update the build spec to use the AWS CLI to download the database population script.

B.

Modify the S3 bucket settings to enable HTTPS basic authentication and specify a token. Update the build spec to use cURL to pass the token and download the database population script.

C.

Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.

D.

Remove unauthenticated access from the S3 bucket with a bucket policy. Use the AWS CLI to download the database population script using an IAM access key and a secret access key.

Question 63

A company requires its internal business teams to launch resources through pre-approved AWS CloudFormation templates only. The security team requires automated monitoring when resources drift from their expected state.

Which strategy should be used to meet these requirements?

Options:

A.

Allow users to deploy CloudFormation stacks using a CloudFormation service role only. Use CloudFormation drift detection to detect when resources have drifted from their expected state.

B.

Allow users to deploy CloudFormation stacks using a CloudFormation service role only. Use AWS Config rules to detect when resources have drifted from their expected state.

C.

Allow users to deploy CloudFormation stacks using AWS Service Catalog only. Enforce the use of a launch constraint. Use AWS Config rules to detect when resources have drifted from their expected state.

D.

Allow users to deploy CloudFormation stacks using AWS Service Catalog only. Enforce the use of a template constraint. Use Amazon EventBridge notifications to detect when resources have drifted from their expected state.

Question 64

A company runs a development environment website and database on an Amazon EC2 instance that uses Amazon Elastic Block Store (Amazon EBS) storage. The company wants to make the instance more resilient to underlying hardware issues. The company wants to automatically recover the EC2 instance if AWS determines the instance has lost network connectivity.

Which solution will meet these requirements?

Options:

A.

Add the EC2 instance to an Auto Scaling group. Set the minimum, maximum, and desired capacity to 1.

B.

Add the EC2 instance to an Auto Scaling group. Configure a lifecycle hook to detach the EBS volume if the EC2 instance shuts down or terminates.

C.

Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric. Add an EC2 action to recover the instance when the alarm state is in ALARM.

D.

Create an Amazon CloudWatch alarm for the NetworkOut metric. Add an EC2 action to recover the instance when the alarm state is in INSUFFICIENT_DATA.
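
A minimal sketch (Python, boto3) of the system status check alarm with an EC2 recover action that option C describes; the instance ID, Region, and evaluation settings are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder

cloudwatch.put_metric_alarm(
    AlarmName=f"recover-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # The EC2 recover action moves the instance to healthy underlying hardware
    # while preserving the instance ID and attached EBS volumes.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)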

Question 65

The security team depends on AWS CloudTrail to detect sensitive security issues in the company's AWS account. The DevOps engineer needs a solution to auto-remediate CloudTrail being turned off in an AWS account.

What solution ensures the LEAST amount of downtime for the CloudTrail log deliveries?

Options:

A.

Create an Amazon EventBridge rule for the CloudTrail StopLogging event. Create an AWS Lambda function that uses the AWS SDK to call StartLogging on the ARN of the resource on which StopLogging was called. Add the Lambda function ARN as a target to the EventBridge rule.

B.

Deploy the AWS managed cloudtrail-enabled AWS Config rule, set with a periodic interval of 1 hour. Create an Amazon EventBridge rule for AWS Config rule compliance changes. Create an AWS Lambda function that uses the AWS SDK to call StartLogging on the ARN of the resource on which StopLogging was called. Add the Lambda function ARN as a target to the EventBridge rule.

C.

Create an Amazon EventBridge rule for a scheduled event every 5 minutes. Create an AWS Lambda function that uses the AWS SDK to call StartLogging on a CloudTrail trail in the AWS account. Add the Lambda function ARN as a target to the EventBridge rule.

D.

Launch a t2.nano instance with a script running every 5 minutes that uses the AWS SDK to query CloudTrail in the current account. If the CloudTrail trail is disabled, have the script re-enable the trail.
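
For the StopLogging-triggered remediation idea in option A, a minimal Lambda handler sketch (Python, boto3) is shown below; the field access assumes the EventBridge payload for a CloudTrail management event and is illustrative rather than production-ready.

import boto3

cloudtrail = boto3.client("cloudtrail")

def handler(event, context):
    # The EventBridge event for the StopLogging call carries the original request
    # parameters; the trail name or ARN is available under requestParameters.
    trail = event["detail"]["requestParameters"]["name"]
    cloudtrail.start_logging(Name=trail)
    return {"restarted": trail}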

Question 66

A company’s web app publishes JSON logs with transaction status to CloudWatch Logs. The company wants a dashboard showing the number of successful transactions with the least operational overhead.

Which solution meets these requirements?

Options:

A.

Create an OpenSearch cluster and subscription filter to send logs; create OpenSearch dashboard with queries for success.

B.

Create a CloudWatch subscription filter with Lambda to parse logs and publish custom metrics; create CloudWatch dashboard with metric graph.

C.

Create a CloudWatch metric filter on the log group with a pattern matching success; create CloudWatch dashboard with metric graph.

D.

Create a Kinesis data stream subscribed to the log group; filter logs by success; send to Lambda; Lambda publishes custom metrics; dashboard uses metric graph.
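
To illustrate the metric filter approach in option C, a minimal sketch (Python, boto3) follows; the log group name, JSON field name, and metric names are assumptions. A CloudWatch dashboard line widget can then graph the resulting metric.

import boto3

logs = boto3.client("logs")

# Count log events whose JSON "transactionStatus" field equals SUCCESS.
logs.put_metric_filter(
    logGroupName="/app/transactions",  # placeholder log group
    filterName="successful-transactions",
    filterPattern='{ $.transactionStatus = "SUCCESS" }',
    metricTransformations=[
        {
            "metricName": "SuccessfulTransactions",
            "metricNamespace": "WebApp",
            "metricValue": "1",
            "defaultValue": 0,
        }
    ],
)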

Question 67

A company detects unusual login attempts in many of its AWS accounts. A DevOps engineer must implement a solution that sends a notification to the company's security team when multiple failed login attempts occur. The DevOps engineer has already created an Amazon Simple Notification Service (Amazon SNS) topic and has subscribed the security team to the SNS topic.

Which solution will provide the notification with the LEAST operational effort?

Options:

A.

Configure AWS CloudTrail to send log management events to an Amazon CloudWatch Logs log group. Create a CloudWatch Logs metric filter to match failed ConsoleLogin events. Create a CloudWatch alarm that is based on the metric filter. Configure an alarm action to send messages to the SNS topic.

B.

Configure AWS CloudTrail to send log management events to an Amazon S3 bucket. Create an Amazon Athena query that returns a failure if the query finds failed logins in the logs in the S3 bucket. Create an Amazon EventBridge rule to periodically run the query. Create a second EventBridge rule to detect when the query fails and to send a message to the SNS topic.

C.

Configure AWS CloudTrail to send log data events to an Amazon CloudWatch Logs log group. Create a CloudWatch Logs metric filter to match failed ConsoleLogin events. Create a CloudWatch alarm that is based on the metric filter. Configure an alarm action to send messages to the SNS topic.

D.

Configure AWS CloudTrail to send log data events to an Amazon S3 bucket. Configure an Amazon S3 event notification for the s3:ObjectCreated event type. Filter the event type by ConsoleLogin failed events. Configure the event notification to forward to the SNS topic.

Question 68

A company is reviewing its IAM policies. One policy written by the DevOps engineer has been flagged as too permissive. The policy is used by an AWS Lambda function that issues a stop command to Amazon EC2 instances tagged with Environment: NonProduction over the weekend. The current policy is:

What changes should the engineer make to achieve a policy of least privilege? (Select THREE.)

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D

E.

Option E

F.

Option F

Question 69

A company that uses electronic patient health records runs a fleet of Amazon EC2 instances with an Amazon Linux operating system. The company must continuously ensure that the EC2 instances are running operating system patches and application patches that are in compliance with current privacy regulations. The company uses a custom repository to store application patches.

A DevOps engineer needs to automate the deployment of operating system patches and application patches. The DevOps engineer wants to use both the default operating system patch repository and the custom patch repository.

Which solution will meet these requirements with the LEAST effort?

Options:

A.

Use AWS Systems Manager to create a new custom patch baseline that includes the default operating system repository and the custom repository. Run the AWS-RunPatchBaseline document by using the Run command to verify and install patches. Use the BaselineOverride API to configure the new custom patch baseline.

B.

Use AWS Direct Connect to integrate the custom repository with the EC2 instances. Use Amazon EventBridge events to deploy the patches.

C.

Use the yum-config-manager command to add the custom repository to the /etc/yum.repos.d configuration. Run the yum-config-manager-enable command to activate the new repository.

D.

Use AWS Systems Manager to create a patch baseline for the default operating system repository and a second patch baseline for the custom repository. Run the AWS-RunPatchBaseline document by using the Run command to verify and install patches. Use the BaselineOverride API to configure the default patch baseline and the custom patch baseline.

Question 70

A DevOps engineer is creating a CI/CD pipeline to build container images. The engineer needs to store container images in Amazon Elastic Container Registry (Amazon ECR) and scan the images for common vulnerabilities. The CI/CD pipeline must be resilient to outages in upstream source container image repositories.

Which solution will meet these requirements?

Options:

A.

Create an ECR private repository in the private registry to store the container images and scan images when images are pushed to the repository. Configure a replication rule in the private registry to replicate images from upstream repositories.

B.

Create an ECR public repository in the public registry to cache images from upstream source repositories. Create an ECR private repository to store images. Configure the private repository to scan images when images are pushed to the repository.

C.

Create an ECR public repository in the public registry. Configure a pull through cache rule for the repository. Create an ECR private repository to store images. Configure the ECR private registry to perform basic scanning.

D.

Create an ECR private repository in the private registry to store the container images. Enable basic scanning for the private registry, and create a pull through cache rule.

Question 71

A company's web app runs on EC2 Linux instances. The company needs to monitor custom metrics for API response and DB query latency across the instances with the least overhead.

Which solution meets these requirements?

Options:

A.

Install CloudWatch agent on instances, configure it to collect custom metrics, and instrument app to send metrics to agent.

B.

Use Amazon Managed Service for Prometheus to scrape metrics, use CloudWatch agent to forward metrics to CloudWatch.

C.

Create Lambda to poll app endpoints and DB, calculate metrics, send to CloudWatch via PutMetricData.

D.

Implement custom logging in app; use CloudWatch Logs Insights to extract and analyze metrics.

Question 72

A DevOps administrator is configuring a repository to store a company's container images. The administrator needs to configure a lifecycle rule that automatically deletes container images that have a specific tag and that are older than 15 days. Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Create a repository in Amazon Elastic Container Registry (Amazon ECR). Add a lifecycle policy to the repository to expire images that have the matching tag after 15 days.

B.

Create a repository in AWS CodeArtifact. Add a repository policy to the CodeArtifact repository to expire old assets that have the matching tag after 15 days.

C.

Create a bucket in Amazon S3. Add a bucket lifecycle policy to expire old objects that have the matching tag after 15 days.

D.

Create an EC2 Image Builder container recipe. Add a build component to expire the container that has the matching tag after 15 days.
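
A minimal sketch (Python, boto3) of the ECR lifecycle policy that option A describes; the repository name and tag prefix are placeholders.

import json
import boto3

ecr = boto3.client("ecr")

lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire images with the matching tag prefix after 15 days",
            "selection": {
                "tagStatus": "tagged",
                "tagPrefixList": ["release-"],  # placeholder tag prefix
                "countType": "sinceImagePushed",
                "countUnit": "days",
                "countNumber": 15,
            },
            "action": {"type": "expire"},
        }
    ]
}

ecr.put_lifecycle_policy(
    repositoryName="app-images",  # placeholder repository
    lifecyclePolicyText=json.dumps(lifecycle_policy),
)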

Question 73

A global company manages multiple AWS accounts by using AWS Control Tower. The company hosts internal applications and public applications.

Each application team in the company has its own AWS account for application hosting. The accounts are consolidated in an organization in AWS Organizations. One of the AWS Control Tower member accounts serves as a centralized DevOps account with CI/CD pipelines that application teams use to deploy applications to their respective target AWS accounts. An IAM role for deployment exists in the centralized DevOps account.

An application team is attempting to deploy its application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster in an application AWS account. An IAM role for deployment exists in the application AWS account. The deployment is through an AWS CodeBuild project that is set up in the centralized DevOps account. The CodeBuild project uses an IAM service role for CodeBuild. The deployment is failing with an Unauthorized error during attempts to connect to the cross-account EKS cluster from CodeBuild.

Which solution will resolve this error?

Options:

A.

Configure the application account's deployment IAM role to have a trust relationship with the centralized DevOps account. Configure the trust relationship to allow the sts:AssumeRole action. Configure the application account's deployment IAM role to have the required access to the EKS cluster. Configure the EKS cluster aws-auth ConfigMap to map the role to the appropriate system permissions.

B.

Configure the centralized DevOps account's deployment IAM role to have a trust relationship with the application account. Configure the trust relationship to allow the sts:AssumeRole action. Configure the centralized DevOps account's deployment IAM role to allow the required access to CodeBuild.

C.

Configure the centralized DevOps account's deployment IAM role to have a trust relationship with the application account. Configure the trust relationship to allow the sts:AssumeRoleWithSAML action. Configure the centralized DevOps account's deployment IAM role to allow the required access to CodeBuild.

D.

Configure the application account's deployment IAM role to have a trust relationship with the AWS Control Tower management account. Configure the trust relationship to allow the sts:AssumeRole action. Configure the application account's deployment IAM role to have the required access to the EKS cluster. Configure the EKS cluster aws-auth ConfigMap to map the role to the appropriate system permissions.

Question 74

A company has a file-reading application that saves files to a database running on Amazon EC2 instances. Regulations require daily file deletions from EC2 instances and deletion of database records older than 60 days. Database record deletion must occur after file deletion. The company needs email notifications for any deletion script failures.

Which solution will meet these requirements with the LEAST development effort?

Options:

A.

Use AWS Systems Manager State Manager to automatically invoke an Automation document at the specified time daily. Configure the Automation document to run deletion scripts sequentially via run command. Create an EventBridge rule to send failure notifications to Amazon SNS.

B.

Use AWS Systems Manager State Manager to automatically invoke an Automation document at the specified time daily. Configure the Automation document to run deletion scripts sequentially. Add a conditional check for errors as the last step and send failure notifications via Amazon SES.

C.

Create an EventBridge rule to invoke a Lambda function at the specified time. Configure the Lambda function to run deletion scripts sequentially and send failure notifications via SNS.

D.

Create an EventBridge rule to invoke a Lambda function at the specified time. Configure the Lambda function to run deletion scripts sequentially and send failure notifications via SES.

Question 75

A company has deployed an application in a single AWS Region. The application backend uses Amazon DynamoDB tables and Amazon S3 buckets.

The company wants to deploy the application in a secondary Region. The company must ensure that the data in the DynamoDB tables and the S3 buckets persists across both Regions. The data must also immediately propagate across Regions.

Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Implement two-way S3 bucket replication between the primary Region's S3 buckets and the secondary Region's S3 buckets. Convert the DynamoDB tables into global tables. Set the secondary Region as the additional Region.

B.

Implement S3 Batch Operations copy jobs between the primary Region and the secondary Region for all S3 buckets. Convert the DynamoDB tables into global tables. Set the secondary Region as the additional Region.

C.

Implement two-way S3 bucket replication between the primary Region's S3 buckets and the secondary Region's S3 buckets. Enable DynamoDB streams on the DynamoDB tables in both Regions. In each Region, create an AWS Lambda function that subscribes to the DynamoDB streams. Configure the Lambda function to copy new records to the DynamoDB tables in the other Region.

D.

Implement S3 Batch Operations copy jobs between the primary Region and the secondary Region for all S3 buckets. Enable DynamoDB streams on the DynamoDB tables in both Regions. In each Region, create an AWS Lambda function that subscribes to the DynamoDB streams. Configure the Lambda function to copy new records to the DynamoDB tables in the other Region.

Question 76

A company uses AWS Organizations to manage its AWS accounts. The organization root has a child OU that is named Department. The Department OU has a child OU that is named Engineering. The default FullAWSAccess policy is attached to the root, the Department OU, and the Engineering OU.

The company has many AWS accounts in the Engineering OU. Each account has an administrative IAM role with the AdministratorAccess IAM policy attached. The default FullAWSAccess policy is also attached to each account.

A DevOps engineer plans to remove the FullAWSAccess policy from the Department OU. The DevOps engineer will replace the policy with a policy that contains an Allow statement for all Amazon EC2 API operations.

What will happen to the permissions of the administrative IAM roles as a result of this change?

Options:

A.

All API actions on all resources will be allowed

B.

All API actions on EC2 resources will be allowed. All other API actions will be denied.

C.

All API actions on all resources will be denied

D.

All API actions on EC2 resources will be denied. All other API actions will be allowed.

Question 77

A DevOps engineer is planning to deploy a Ruby-based application to production. The application needs to interact with an Amazon RDS for MySQL database and should have automatic scaling and high availability. The stored data in the database is critical and should persist regardless of the state of the application stack.

The DevOps engineer needs to set up an automated deployment strategy for the application with automatic rollbacks. The solution also must alert the application team when a deployment fails.

Which combination of steps will meet these requirements? (Select THREE.)

Options:

A.

Deploy the application on AWS Elastic Beanstalk. Deploy an Amazon RDS for MySQL DB instance as part of the Elastic Beanstalk configuration.

B.

Deploy the application on AWS Elastic Beanstalk. Deploy a separate Amazon RDS for MySQL DB instance outside of Elastic Beanstalk.

C.

Configure a notification email address that alerts the application team in the AWS Elastic Beanstalk configuration.

D.

Configure an Amazon EventBridge rule to monitor AWS Health events. Use an Amazon Simple Notification Service (Amazon SNS) topic as a target to alert the application team.

E.

Use the immutable deployment method to deploy new application versions.

F.

Use the rolling deployment method to deploy new application versions.

Question 78

A video-sharing company stores its videos in an Amazon S3 bucket. The company needs to analyze user access patterns such as the number of users who access a specific video each month.

Which solution will meet these requirements with the LEAST development effort?

Options:

A.

Enable Amazon S3 server access logging. Load the access logs into an Amazon Aurora database. Run SQL queries on the Aurora database to analyze the user access patterns.

B.

Enable Amazon S3 server access logging. Use Amazon Athena to create an external table that contains the access logs. Run SQL queries on the Athena table to analyze the user access patterns.

C.

Invoke an AWS Lambda function for every S3 object access event. Configure the Lambda function to write the file access information, including user ID, S3 bucket ID, and file key, to an Amazon Aurora database. Run SQL queries on the Aurora database to analyze the user access patterns.

D.

Record a log message in Amazon CloudWatch Logs for every S3 object access event. Configure a log stream in CloudWatch Logs to write the file access information, including user ID, S3 bucket ID, and file key, to an Amazon Managed Service for Apache Flink application. Perform a sliding window analysis on the user access patterns.

Question 79

A company manages an application that stores logs in Amazon CloudWatch Logs. The company wants to archive the logs to an Amazon S3 bucket. Logs are rarely accessed after 90 days and must be retained for 10 years.

Which combination of steps should a DevOps engineer take to meet these requirements? (Select TWO.)

Options:

A.

Configure a CloudWatch Logs subscription filter to use AWS Glue to transfer all logs to an S3 bucket.

B.

Configure a CloudWatch Logs subscription filter to use Amazon Data Firehose to stream all logs to an S3 bucket.

C.

Configure a CloudWatch Logs subscription filter to stream all logs to an S3 bucket.

D.

Configure the S3 bucket lifecycle policy to transition logs to S3 Glacier Instant Retrieval after 90 days and to expire logs after 3,650 days.

E.

Configure the S3 bucket lifecycle policy to transition logs to Reduced Redundancy after 90 days and to expire logs after 3,650 days.
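
A minimal sketch (Python, boto3) of the S3 lifecycle rule that option D describes, transitioning objects to S3 Glacier Instant Retrieval after 90 days and expiring them after 3,650 days; the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="archived-app-logs",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-and-expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER_IR"}],
                "Expiration": {"Days": 3650},
            }
        ]
    },
)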

Question 80

A company used a lift-and-shift strategy to migrate a workload to AWS. The company has an Auto Scaling group of Amazon EC2 instances. Each EC2 instance runs a web application, a database, and a Redis cache.

Users are experiencing large variations in the web application's response times. Requests to the web application go to a single EC2 instance that is under significant load. The company wants to separate the application components to improve availability and performance.

Which solution will meet these requirements?

Options:

A.

Create a Network Load Balancer and an Auto Scaling group for the web application. Migrate the database to an Amazon Aurora Serverless database. Create an Application Load Balancer and an Auto Scaling group for the Redis cache.

B.

Create an Application Load Balancer and an Auto Scaling group for the web application. Migrate the database to an Amazon Aurora database that has a Multi-AZ deployment. Create a Network Load Balancer and an Auto Scaling group in a single Availability Zone for the Redis cache.

C.

Create a Network Load Balancer and an Auto Scaling group for the web application. Migrate the database to an Amazon Aurora Serverless database. Create an Amazon ElastiCache (Redis OSS) cluster for the cache. Create a target group that has a DNS target type that contains the ElastiCache (Redis OSS) cluster hostname.

D.

Create an Application Load Balancer and an Auto Scaling group for the web application. Migrate the database to an Amazon Aurora database that has a Multi-AZ deployment. Create an Amazon ElastiCache (Redis OSS) cluster for the cache.

Question 81

A company has multiple development groups working in a single shared AWS account. The Senior Manager of the groups wants to be alerted via a third-party API call when the creation of resources approaches the service limits for the account.

Which solution will accomplish this with the LEAST amount of development effort?

Options:

A.

Create an Amazon CloudWatch Event rule that runs periodically and targets an AWS Lambda function. Within the Lambda function, evaluate the current state of the AWS environment and compare deployed resource values to resource limits on the account. Notify the Senior Manager if the account is approaching a service limit.

B.

Deploy an AWS Lambda function that refreshes AWS Trusted Advisor checks, and configure an Amazon CloudWatch Events rule to run the Lambda function periodically. Create another CloudWatch Events rule with an event pattern matching Trusted Advisor events and a target Lambda function. In the target Lambda function, notify the Senior Manager.

C.

Deploy an AWS Lambda function that refreshes AWS Personal Health Dashboard checks, and configure an Amazon CloudWatch Events rule to run the Lambda function periodically. Create another CloudWatch Events rule with an event pattern matching Personal Health Dashboard events and a target Lambda function. In the target Lambda function, notify the Senior Manager.

D.

Add an AWS Config custom rule that runs periodically, checks the AWS service limit status, and streams notifications to an Amazon SNS topic. Deploy an AWS Lambda function that notifies the Senior Manager, and subscribe the Lambda function to the SNS topic.

Question 82

A DevOps engineer is working on a project that is hosted on Amazon Linux and has failed a security review. The DevOps manager has been asked to review the company's buildspec.yaml file for an AWS CodeBuild project and provide recommendations. The buildspec.yaml file is configured as follows:

What changes should be recommended to comply with AWS security best practices? (Select THREE.)

Options:

A.

Add a post-build command to remove the temporary files from the container before termination to ensure they cannot be seen by other CodeBuild users.

B.

Update the CodeBuild project role with the necessary permissions and then remove the AWS credentials from the environment variable.

C.

Store the db_password as a SecureString value in AWS Systems Manager Parameter Store and then remove the db_password from the environment variables.

D.

Move the environment variables to the 'db-deploy-bucket' Amazon S3 bucket, and add a prebuild stage to download and then export the variables.

E.

Use AWS Systems Manager Run Command instead of scp and ssh commands directly to the instance.

Question 83

A company has an AWS CodePipeline pipeline that is configured with an Amazon S3 bucket in the eu-west-1 Region. The pipeline deploys an AWS Lambda application to the same Region. The pipeline consists of an AWS CodeBuild project build action and an AWS CloudFormation deploy action.

The CodeBuild project uses the aws cloudformation package AWS CLI command to build an artifact that contains the Lambda function code’s .zip file and the CloudFormation template. The CloudFormation deploy action references the CloudFormation template from the output artifact of the CodeBuild project’s build action.

The company wants to also deploy the Lambda application to the us-east-1 Region by using the pipeline in eu-west-1. A DevOps engineer has already updated the CodeBuild project to use the aws cloudformation package command to produce an additional output artifact for us-east-1.

Which combination of additional steps should the DevOps engineer take to meet these requirements? (Choose two.)

Options:

A.

Modify the CloudFormation template to include a parameter for the Lambda function code’s zip file location. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to pass in the us-east-1 artifact location as a parameter override.

B.

Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.

C.

Create an S3 bucket in us-east-1. Configure the S3 bucket policy to allow CodePipeline to have read and write access.

D.

Create an S3 bucket in us-east-1. Configure S3 Cross-Region Replication (CRR) from the S3 bucket in eu-west-1 to the S3 bucket in us-east-1.

E.

Modify the pipeline to include the S3 bucket for us-east-1 as an artifact store. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.

Question 84

A company has multiple AWS accounts. The company uses AWS IAM Identity Center (AWS Single Sign-On) that is integrated with AWS Toolkit for Microsoft Azure DevOps. The attributes for access control feature is enabled in IAM Identity Center.

The attribute mapping list contains two entries. The department key is mapped to ${path:enterprise.department}. The costCenter key is mapped to ${path:enterprise.costCenter}.

All existing Amazon EC2 instances have a department tag that corresponds to three company departments (d1, d2, d3). A DevOps engineer must create policies based on the matching attributes. The policies must minimize administrative effort and must grant each Azure AD user access to only the EC2 instances that are tagged with the user’s respective department name.

Which condition key should the DevOps engineer include in the custom permissions policies to meet these requirements?

Options:

A.
B.
C.
D.
Question 85

A company uses an organization in AWS Organizations to manage multiple AWS accounts in a hierarchical structure. An SCP that is associated with the organization root allows IAM users to be created.

A DevOps team must be able to create IAM users with any level of permissions. Developers must also be able to create IAM users. However, developers must not be able to grant new IAM users excessive permissions. The developers have the CreateAndManageUsers role in each account. The DevOps team must be able to prevent other users from creating IAM users.

Which combination of steps will meet these requirements? (Select TWO.)

Options:

A.

Create an SCP in the organization to deny users the ability to create and modify IAM users. Attach the SCP to the root of the organization. Attach the CreateAndManageUsers role to developers.

B.

Create an SCP in the organization to grant users that have the DeveloperBoundary policy attached the ability to create new IAM users and to modify IAM users. Configure the SCP to require users to attach the PermissionBoundaries policy to any new IAM user. Attach the SCP to the root of the organization.

C.

Create an IAM permissions policy named PermissionBoundaries within each account. Configure the PermissionBoundaries policy to specify the maximum permissions that a developer can grant to a new IAM user.

D.

Create an IAM permissions policy named PermissionBoundaries within each account. Configure the PermissionBoundaries policy to allow users who have the PermissionBoundaries policy to create new IAM users.

E.

Create an IAM permissions policy named DeveloperBoundary within each account. Configure the DeveloperBoundary policy to allow developers to create IAM users and to assign policies to IAM users only if the developer includes the PermissionBoundaries policy as the permissions boundary. Attach the DeveloperBoundary policy to the CreateAndManageUsers role within each account.
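
For background, a hedged sketch that is not part of the exam item: the permissions-boundary pattern lets developers create IAM users only when the new user carries a specific boundary policy, by conditioning the IAM actions on the iam:PermissionsBoundary key. A minimal boto3 example with hypothetical policy names:

    import json
    import boto3

    iam = boto3.client("iam")
    account_id = boto3.client("sts").get_caller_identity()["Account"]
    boundary_arn = f"arn:aws:iam::{account_id}:policy/PermissionBoundaries"

    developer_boundary_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Developers may create and manage users only if the new user carries
                # the PermissionBoundaries policy as its permissions boundary.
                "Effect": "Allow",
                "Action": ["iam:CreateUser", "iam:AttachUserPolicy", "iam:PutUserPolicy"],
                "Resource": f"arn:aws:iam::{account_id}:user/*",
                "Condition": {"StringEquals": {"iam:PermissionsBoundary": boundary_arn}},
            }
        ],
    }

    iam.create_policy(
        PolicyName="DeveloperBoundary",
        PolicyDocument=json.dumps(developer_boundary_policy),
    )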

Question 86

A software team is using AWS CodePipeline to automate its Java application release pipeline. The pipeline consists of a source stage, then a build stage, and then a deploy stage. Each stage contains a single action that has a runOrder value of 1.

The team wants to integrate unit tests into the existing release pipeline. The team needs a solution that deploys only the code changes that pass all unit tests.

Which solution will meet these requirements?

Options:

A.

Modify the build stage. Add a test action that has a runOrder value of 1. Use AWS CodeDeploy as the action provider to run unit tests.

B.

Modify the build stage. Add a test action that has a runOrder value of 2. Use AWS CodeBuild as the action provider to run unit tests.

C.

Modify the deploy stage. Add a test action that has a runOrder value of 1. Use AWS CodeDeploy as the action provider to run unit tests.

D.

Modify the deploy stage. Add a test action that has a runOrder value of 2. Use AWS CodeBuild as the action provider to run unit tests.
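
To make the runOrder mechanics concrete (a hedged sketch, not part of the exam item): within a stage, an action with runOrder 2 runs only after the runOrder 1 action succeeds, so a CodeBuild test action added to the build stage blocks the transition to the deploy stage when unit tests fail. A minimal boto3 example with hypothetical pipeline and project names:

    import boto3

    cp = boto3.client("codepipeline")
    pipeline = cp.get_pipeline(name="java-app-pipeline")["pipeline"]  # hypothetical name

    build_stage = next(s for s in pipeline["stages"] if s["name"] == "Build")
    build_stage["actions"].append(
        {
            "name": "UnitTests",
            "runOrder": 2,  # runs only after the runOrder-1 build action succeeds
            "actionTypeId": {
                "category": "Test",
                "owner": "AWS",
                "provider": "CodeBuild",
                "version": "1",
            },
            "inputArtifacts": [{"name": "SourceOutput"}],
            "configuration": {"ProjectName": "java-app-unit-tests"},  # hypothetical CodeBuild project
        }
    )

    cp.update_pipeline(pipeline=pipeline)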

Question 87

A company’s EC2 fleet must maintain up-to-date security patches and compliance reporting.

Which solution meets these requirements?

Options:

A.

Use Systems Manager Patch Manager with AWS Config compliance rules and automation documents.

B.

SSH into each instance manually.

C.

Rebuild instances in Auto Scaling groups with latest AMIs.

D.

Use CloudFormation redeployment for every patch.

Question 88

A company's application development team uses Linux-based Amazon EC2 instances as bastion hosts. Inbound SSH access to the bastion hosts is restricted to specific IP addresses, as defined in the associated security groups. The company's security team wants to receive a notification if the security group rules are modified to allow SSH access from any IP address.

What should a DevOps engineer do to meet this requirement?

Options:

A.

Create an Amazon EventBridge rule with a source of aws.cloudtrail and the event name AuthorizeSecurityGroupIngress. Define an Amazon Simple Notification Service (Amazon SNS) topic as the target.

B.

Enable Amazon GuardDuty and check the findings for security groups in AWS Security Hub. Configure an Amazon EventBridge rule with a custom pattern that matches GuardDuty events with an output of NON_COMPLIANT. Define an Amazon Simple Notification Service (Amazon SNS) topic as the target.

C.

Create an AWS Config rule by using the restricted-ssh managed rule to check whether security groups disallow unrestricted incoming SSH traffic. Configure automatic remediation to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.

D.

Enable Amazon Inspector. Include the Common Vulnerabilities and Exposures-1.1 rules package to check the security groups that are associated with the bastion hosts. Configure Amazon Inspector to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.
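
Background sketch, not part of the exam item: the AWS Config managed rule restricted-ssh (source identifier INCOMING_SSH_DISABLED) flags security groups that allow unrestricted inbound SSH, and automatic remediation can then run a Systems Manager Automation document that publishes to an SNS topic. A minimal boto3 example for the rule itself, with a hypothetical rule name:

    import boto3

    config = boto3.client("config")

    # Managed rule that marks security groups NON_COMPLIANT when they allow
    # unrestricted inbound SSH (0.0.0.0/0 on port 22).
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "bastion-restricted-ssh",       # hypothetical name
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "INCOMING_SSH_DISABLED",  # managed rule "restricted-ssh"
            },
            "Scope": {"ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]},
        }
    )

A remediation configuration attached to this rule can invoke an Automation document that publishes the notification, which is the "automatic remediation" part of the option.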

Question 89

A company must encrypt all AMIs that the company shares across accounts. A DevOps engineer has access to a source account where an unencrypted custom AMI has been built. The DevOps engineer also has access to a target account where an Amazon EC2 Auto Scaling group will launch EC2 instances from the AMI. The DevOps engineer must share the AMI with the target account.

The company has created an AWS Key Management Service (AWS KMS) key in the source account.

Which additional steps should the DevOps engineer perform to meet the requirements? (Choose three.)

Options:

A.

In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.

B.

In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the default Amazon Elastic Block Store (Amazon EBS) encryption key in the copy action.

C.

In the source account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role in the target account.

D.

In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.

E.

In the source account, share the unencrypted AMI with the target account.

F.

In the source account, share the encrypted AMI with the target account.
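
To ground the mechanics (a hedged sketch, not part of the exam item): the flow is an encrypted copy of the AMI made with the customer managed key, sharing of the encrypted AMI with the target account, and a KMS grant for the target account's Auto Scaling service-linked role. The boto3 example below uses placeholder account, key, and AMI IDs and, for brevity, creates the grant from the source account; the question's wording has the grant created in the target account after the key policy permits it.

    import boto3

    SOURCE_REGION = "eu-west-1"          # assumption; the question does not name a Region
    TARGET_ACCOUNT = "111122223333"      # placeholder target account ID
    KMS_KEY_ARN = "arn:aws:kms:eu-west-1:999999999999:key/example-key-id"  # placeholder

    ec2 = boto3.client("ec2", region_name=SOURCE_REGION)
    kms = boto3.client("kms", region_name=SOURCE_REGION)

    # 1. Copy the unencrypted AMI to an encrypted AMI with the customer managed key.
    encrypted = ec2.copy_image(
        Name="app-ami-encrypted",
        SourceImageId="ami-0123456789abcdef0",   # placeholder source AMI
        SourceRegion=SOURCE_REGION,
        Encrypted=True,
        KmsKeyId=KMS_KEY_ARN,
    )

    # 2. Share the encrypted AMI with the target account.
    ec2.modify_image_attribute(
        ImageId=encrypted["ImageId"],
        LaunchPermission={"Add": [{"UserId": TARGET_ACCOUNT}]},
    )

    # 3. Grant the target account's Auto Scaling service-linked role permission to
    #    use the key (the key policy must allow the grant to be created).
    kms.create_grant(
        KeyId=KMS_KEY_ARN,
        GranteePrincipal=f"arn:aws:iam::{TARGET_ACCOUNT}:role/aws-service-role/"
                         "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling",
        Operations=["Decrypt", "GenerateDataKeyWithoutPlaintext", "ReEncryptFrom",
                    "ReEncryptTo", "CreateGrant", "DescribeKey"],
    )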

Question 90

A company runs an application in an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run Docker containers that make requests to a MySQL database that runs on separate EC2 instances. A DevOps engineer needs to update the application to use a serverless architecture. Which solution will meet this requirement with the FEWEST changes?

Options:

A.

Replace the containers that run on EC2 instances and the ALB with AWS Lambda functions. Replace the MySQL database with an Amazon Aurora Serverless v2 database that is compatible with MySQL.

B.

Replace the containers that run on EC2 instances with AWS Fargate. Replace the MySQL database with an Amazon Aurora Serverless v2 database that is compatible with MySQL.

C.

Replace the containers that run on EC2 instances and the ALB with AWS Lambda functions. Replace the MySQL database with Amazon DynamoDB tables.

D.

Replace the containers that run on EC2 instances with AWS Fargate. Replace the MySQL database with Amazon DynamoDB tables.

Question 91

A company uses an Amazon Aurora PostgreSQL global database that has two secondary AWS Regions. A DevOps engineer has configured the database parameter group to guarantee an RPO of 60 seconds. Write operations on the primary cluster are occasionally blocked because of the RPO setting.

The DevOps engineer needs to reduce the frequency of blocked write operations.

Which solution will meet these requirements?

Options:

A.

Add an additional secondary cluster to the global database.

B.

Enable write forwarding for the global database.

C.

Remove one of the secondary clusters from the global database.

D.

Configure synchronous replication for the global database.

Question 92

A company is developing a web application's infrastructure using AWS CloudFormation. The database engineering team maintains the database resources in a CloudFormation template, and the software development team maintains the web application resources in a separate CloudFormation template. As the scope of the application grows, the software development team needs to use resources maintained by the database engineering team. However, both teams have their own review and lifecycle management processes that they want to keep. Both teams also require resource-level change-set reviews. The software development team would like to deploy changes to this template using their CI/CD pipeline.

Which solution will meet these requirements?

Options:

A.

Create a stack export from the database CloudFormation template and import those references into the web application CloudFormation template.

B.

Create a CloudFormation nested stack to make cross-stack resource references and parameters available in both stacks.

C.

Create a CloudFormation stack set to make cross-stack resource references and parameters available in both stacks.

D.

Create input parameters in the web application CloudFormation template and pass resource names and IDs from the database stack.

Question 93

An application running on a set of Amazon EC2 instances in an Auto Scaling group requires a configuration file to operate. The instances are created and maintained with AWS CloudFormation. A DevOps engineer wants the instances to have the latest configuration file when launched and wants changes to the configuration file to be reflected on all the instances with a minimal delay when the CloudFormation template is updated. Company policy requires that application configuration files be maintained along with AWS infrastructure configuration files in source control.

Which solution will accomplish this?

Options:

A.

In the CloudFormation template, add an AWS Config rule. Place the configuration file content in the rule's InputParameters property and set the Scope property to the EC2 Auto Scaling group. Add an AWS Systems Manager Resource Data Sync resource to the template to poll for updates to the configuration.

B.

In the CloudFormation template, add an EC2 launch template resource. Place the configuration file content in the launch template. Configure the cfn-init script to run when the instance is launched, and configure the cfn-hup script to poll for updates to the configuration.

C.

In the CloudFormation template add an EC2 launch template resource. Place the configuration file content in the launch template. Add an AWS Systems Manager Resource Data Sync resource to the template to poll for updates to the configuration.

D.

In the CloudFormation template, add AWS::CloudFormation::Init metadata. Place the configuration file content in the metadata. Configure the cfn-init script to run when the instance is launched, and configure the cfn-hup script to poll for updates to the configuration.

Question 94

A company frequently creates Docker images stored in Amazon ECR, with both tagged and untagged versions. The company wants to delete stale or unused images while keeping a minimum count.

Which solution meets this requirement?

Options:

A.

Use S3 lifecycle policies (not applicable).

B.

Use ECR Lifecycle Policies based on image age or count.

C.

Schedule Lambda to delete by age.

D.

Use Systems Manager automation scripts.
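
For context, a hedged sketch that is not part of the exam item: an ECR lifecycle policy can expire untagged images by age and cap the number of tagged images, which covers both the stale-image cleanup and the minimum-count requirement. A minimal boto3 example with a hypothetical repository name and tag prefix:

    import json
    import boto3

    ecr = boto3.client("ecr")

    lifecycle_policy = {
        "rules": [
            {
                "rulePriority": 1,
                "description": "Expire untagged images older than 14 days",
                "selection": {
                    "tagStatus": "untagged",
                    "countType": "sinceImagePushed",
                    "countUnit": "days",
                    "countNumber": 14,
                },
                "action": {"type": "expire"},
            },
            {
                "rulePriority": 2,
                "description": "Keep only the most recent 20 tagged images",
                "selection": {
                    "tagStatus": "tagged",
                    "tagPrefixList": ["release"],   # hypothetical tag prefix
                    "countType": "imageCountMoreThan",
                    "countNumber": 20,
                },
                "action": {"type": "expire"},
            },
        ]
    }

    ecr.put_lifecycle_policy(
        repositoryName="app-images",                     # hypothetical repository
        lifecyclePolicyText=json.dumps(lifecycle_policy),
    )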

Question 95

A company is running its ecommerce website on AWS. The website is currently hosted on a single Amazon EC2 instance in one Availability Zone. A MySQL database runs on the same EC2 instance. The company needs to eliminate single points of failure in the architecture to improve the website's availability and resilience. Which solution will meet these requirements with the LEAST configuration changes to the website?

Options:

A.

Deploy the application by using AWS Fargate containers. Migrate the database to Amazon DynamoDB. Use Amazon API Gateway to route requests.

B.

Deploy the application on EC2 instances across multiple Availability Zones. Put the EC2 instances into an Auto Scaling group behind an Application Load Balancer. Migrate the database to Amazon Aurora Multi-AZ. Use Amazon CloudFront for content delivery.

C.

Use AWS Elastic Beanstalk to deploy the application across multiple AWS Regions. Migrate the database to Amazon Redshift. Use Amazon ElastiCache for session management.

D.

Migrate the application to AWS Lambda functions. Use Amazon S3 for static content hosting. Migrate the database to Amazon DocumentDB (with MongoDB compatibility).

Question 96

A DevOps engineer needs to install antivirus software on all Amazon EC2 instances in an AWS account. The EC2 instances run the most recent Amazon Linux version. The solution must detect all instances and use an AWS Systems Manager document to install the software if missing.

Which solution will meet these requirements?

Options:

A.

Create an association in Systems Manager State Manager targeting all managed nodes. Include the software and Systems Manager document.

B.

Use AWS Config with a custom rule to check for antivirus installation. Configure automatic remediation using the Systems Manager document.

C.

Use Amazon Inspector to detect missing software and associate with Systems Manager automation.

D.

Use EventBridge to detect EC2 RunInstances events and trigger SSM automation.
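
A hedged sketch, not part of the exam item: a State Manager association applies a Systems Manager document to targets on a schedule, and a wildcard on the InstanceIds target key covers every managed node. The boto3 example below assumes a custom document named InstallAntivirus already exists.

    import boto3

    ssm = boto3.client("ssm")

    ssm.create_association(
        Name="InstallAntivirus",                              # hypothetical SSM document
        Targets=[{"Key": "InstanceIds", "Values": ["*"]}],    # all managed nodes
        ScheduleExpression="rate(1 day)",                     # re-check daily; install if missing
        AssociationName="antivirus-baseline",
        ComplianceSeverity="HIGH",
    )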

Question 97

A company is implementing a standardized security baseline across its AWS accounts. The accounts are in an organization in AWS Organizations. The company must deploy consistent IAM roles and policies across all existing and future accounts in the organization. Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Enable AWS Control Tower in the management account. Configure AWS Control Tower Account Factory customization to deploy the required IAM roles and policies to all accounts.

B.

Activate trusted access for AWS CloudFormation StackSets in Organizations. In the management account, create a stack set that has service-managed permissions to deploy the required IAM roles and policies to all accounts. Enable automatic deployment for the stack set.

C.

In each member account, create IAM roles that have permissions to create and manage resources. In the management account, create an AWS CloudFormation stack set that has self-managed permissions to deploy the required IAM roles and policies to all accounts. Enable automatic deployment for the stack set.

D.

In the management account, create an AWS CodePipeline pipeline. Configure the pipeline to use AWS CloudFormation to automate the deployment of the required IAM roles and policies. Set up cross-account IAM roles to allow CodePipeline to deploy resources in the member accounts.
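
Background sketch, not part of the exam item: with trusted access for StackSets activated in Organizations, a service-managed stack set with automatic deployment pushes the baseline to every existing account in the targeted OUs and to accounts added later. A minimal boto3 example with a hypothetical template file and OU ID:

    import boto3

    cfn = boto3.client("cloudformation")

    with open("iam-baseline.yaml") as f:        # hypothetical template with the IAM roles and policies
        template_body = f.read()

    cfn.create_stack_set(
        StackSetName="org-iam-baseline",
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
        PermissionModel="SERVICE_MANAGED",
        AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    )

    cfn.create_stack_instances(
        StackSetName="org-iam-baseline",
        DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},   # placeholder OU
        Regions=["us-east-1"],      # IAM is global; one Region is enough for the stack instances
    )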

Question 98

A company uses AWS Key Management Service (AWS KMS) keys and manual key rotation to meet regulatory compliance requirements. The security team wants to be notified when any keys have not been rotated after 90 days.

Which solution will accomplish this?

Options:

A.

Configure AWS KMS to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.

B.

Configure an Amazon EventBridge event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon Simple Notification Service (Amazon SNS) topic.

C.

Develop an AWS Config custom rule that publishes to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.

D.

Configure AWS Security Hub to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.

Question 99

A company is implementing an Amazon Elastic Container Service (Amazon ECS) cluster to run its workload. The company architecture will run multiple ECS services on the cluster. The architecture includes an Application Load Balancer on the front end and uses multiple target groups to route traffic.

A DevOps engineer must collect application and access logs. The DevOps engineer then needs to send the logs to an Amazon S3 bucket for near-real-time analysis.

Which combination of steps must the DevOps engineer take to meet these requirements? (Choose three.)

Options:

A.

Download the Amazon CloudWatch Logs container instance from AWS. Configure this instance as a task. Update the application service definitions to include the logging task.

B.

Install the Amazon CloudWatch Logs agent on the ECS instances. Change the logging driver in the ECS task definition to awslogs.

C.

Use Amazon EventBridge to schedule an AWS Lambda function that will run every 60 seconds and will run the Amazon CloudWatch Logs create-export-task command. Then point the output to the logging S3 bucket.

D.

Activate access logging on the ALB. Then point the ALB directly to the logging S3 bucket.

E.

Activate access logging on the target groups that the ECS services use. Then send the logs directly to the logging S3 bucket.

F.

Create an Amazon Kinesis Data Firehose delivery stream that has a destination of the logging S3 bucket. Then create an Amazon CloudWatch Logs subscription filter for Kinesis Data Firehose.
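
To illustrate the moving parts (a hedged sketch, not part of the exam item): the awslogs driver in the task definition sends container logs to CloudWatch Logs, a subscription filter streams the log group to a Kinesis Data Firehose delivery stream that writes to the S3 bucket, and the ALB writes its access logs to the bucket directly. A minimal boto3 example with placeholder names and ARNs:

    import boto3

    logs = boto3.client("logs")
    elbv2 = boto3.client("elbv2")

    # Stream the ECS application log group to an existing Firehose delivery stream
    # that delivers to the logging S3 bucket (near-real-time).
    logs.put_subscription_filter(
        logGroupName="/ecs/app-service",                       # hypothetical log group
        filterName="to-firehose",
        filterPattern="",                                      # forward everything
        destinationArn="arn:aws:firehose:eu-west-1:111122223333:deliverystream/app-logs",
        roleArn="arn:aws:iam::111122223333:role/cwlogs-to-firehose",
    )

    # Turn on ALB access logging straight into the logging bucket.
    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/web/abc123",
        Attributes=[
            {"Key": "access_logs.s3.enabled", "Value": "true"},
            {"Key": "access_logs.s3.bucket", "Value": "central-logging-bucket"},  # hypothetical bucket
        ],
    )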

Question 100

A company uses AWS Organizations to manage its AWS accounts. A DevOps engineer must ensure that all users who access the AWS Management Console are authenticated through the company's corporate identity provider (IdP).

Which combination of steps will meet these requirements? (Select TWO.)

Options:

A.

Use Amazon GuardDuty with a delegated administrator account. Use GuardDuty to enforce denial of IAM user logins.

B.

Use AWS IAM Identity Center to configure identity federation with SAML 2.0.

C.

Create a permissions boundary in AWS IAM Identity Center to deny password logins for IAM users.

D.

Create IAM groups in the Organizations management account to apply consistent permissions for all IAM users.

E.

Create an SCP in Organizations to deny password creation for IAM users.

Question 101

A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company needs a solution to detect sensitive information in Amazon S3 buckets in all the company’s accounts. When the solution detects sensitive data, the solution must collect all the findings and make them available to the company’s security officer in a single location. The solution must move S3 objects that contain sensitive information to a quarantine S3 bucket.

Which solutions will meet these requirements with the LEAST operational overhead? (Select TWO.)

Options:

A.

Enable AWS Security Hub in the organization. Enable Amazon Macie for all the accounts in the organization. Configure Macie to send findings to Security Hub.

B.

Create an AWS Service Catalog product to provision S3 buckets. Configure Service Catalog to create a new S3 bucket. Configure S3 Event Notifications to send ObjectCreated events to an Amazon Simple Queue Service (Amazon SQS) queue.

C.

Create an AWS Lambda function to copy S3 objects from S3 buckets to a dedicated quarantine bucket. Configure the Lambda function to delete copied objects from the original buckets. Configure an Amazon EventBridge rule to invoke the Lambda function in response to sensitive information findings from Amazon Macie.

D.

Configure an AWS Lambda function to run when new objects are created or when existing objects are updated. Configure the Lambda function to determine whether objects contain sensitive data. Configure the function to move objects that contain sensitive data to a quarantine bucket and to delete the original objects.

E.

Configure SCPs to prevent the creation of S3 buckets and objects that contain suspected sensitive data. Configure the SCPs to move objects that are suspected to contain sensitive data to a dedicated quarantine S3 bucket.

Question 102

A company uses an organization in AWS Organizations to manage its AWS accounts. The company's automation account contains a CI/CD pipeline that creates and configures new AWS accounts.

The company has a group of internal service teams that provide services to accounts in the organization. The service teams operate out of a set of services accounts. The service teams want to receive an AWS CloudTrail event in their services accounts when the CreateAccount API call creates a new account.

How should the company share this CloudTrail event with the service accounts?

Options:

A.

Create an Amazon EventBridge rule in the automation account to send account creation events to the default event bus in the services accounts. Update the default event bus in the services accounts to allow events from the automation account.

B.

Create a custom Amazon EventBridge event bus in the services accounts. Update the custom event bus to allow events from the automation account. Create an EventBridge rule in the services account that directly listens to CloudTrail events from the automation account.

C.

Create a custom Amazon EventBridge event bus in the automation account and the services accounts. Create an EventBridge rule and policy that connects the custom event buses that are in the automation account and the services accounts.

D.

Create a custom Amazon EventBridge event bus in the automation account. Create an EventBridge rule and policy that connects the custom event bus to the default event buses in the services accounts.
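
A hedged sketch, not part of the exam item: cross-account event delivery needs a resource policy on the receiving event bus plus a rule in the sending account whose target is that bus. A minimal boto3 example with placeholder account IDs and a hypothetical cross-account role:

    import json
    import boto3

    AUTOMATION_ACCOUNT = "111111111111"      # placeholder
    SERVICES_ACCOUNT = "222222222222"        # placeholder
    REGION = "eu-west-1"                     # assumption

    # --- In each services account: allow the automation account to put events ---
    events_svc = boto3.client("events", region_name=REGION)
    events_svc.put_permission(
        EventBusName="default",
        Action="events:PutEvents",
        Principal=AUTOMATION_ACCOUNT,
        StatementId="allow-automation-account",
    )

    # --- In the automation account: forward CreateAccount CloudTrail events ---
    events_auto = boto3.client("events", region_name=REGION)
    events_auto.put_rule(
        Name="forward-create-account",
        EventPattern=json.dumps({
            "source": ["aws.organizations"],
            "detail-type": ["AWS API Call via CloudTrail"],
            "detail": {"eventSource": ["organizations.amazonaws.com"],
                       "eventName": ["CreateAccount"]},
        }),
    )
    events_auto.put_targets(
        Rule="forward-create-account",
        Targets=[{
            "Id": "services-default-bus",
            "Arn": f"arn:aws:events:{REGION}:{SERVICES_ACCOUNT}:event-bus/default",
            "RoleArn": f"arn:aws:iam::{AUTOMATION_ACCOUNT}:role/eventbridge-cross-account",  # hypothetical
        }],
    )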

Question 103

A company runs a microservices application on Amazon EKS. Users report delays accessing an account summary feature during peak hours. CloudWatch metrics and logs show normal CPU and memory utilization on EKS nodes. The DevOps engineer cannot identify where delays occur within the microservices.

Which solution will meet these requirements?

Options:

A.

Deploy the AWS X-Ray daemon as a DaemonSet in the EKS cluster. Use the X-Ray SDK to instrument the application code. Redeploy the application.

B.

Enable CloudWatch Container Insights for the EKS cluster. Use the Container Insights data to diagnose delays.

C.

Create alarms based on existing CloudWatch metrics. Set up SNS email alerts.

D.

Increase the timeout settings in the application code for network operations.

Question 104

A company uses an organization in AWS Organizations to manage its AWS accounts. The company recently acquired another company that has standalone AWS accounts. The acquiring company's DevOps team needs to consolidate the administration of the AWS accounts for both companies and retain full administrative control of the accounts. The DevOps team also needs to collect and group findings across all the accounts to implement and maintain a security posture.

Which combination of steps should the DevOps team take to meet these requirements? (Select TWO.)

Options:

A.

Invite the acquired company's AWS accounts to join the organization. Create an SCP that has full administrative privileges. Attach the SCP to the management account.

B.

Invite the acquired company's AWS accounts to join the organization. Create the OrganizationAccountAccessRole IAM role in the invited accounts. Grant permission to the management account to assume the role.

C.

Use AWS Security Hub to collect and group findings across all accounts. Use Security Hub to automatically detect new accounts as the accounts are added to the organization.

D.

Use AWS Firewall Manager to collect and group findings across all accounts. Enable all features for the organization. Designate an account in the organization as the delegated administrator account for Firewall Manager.

E.

Use Amazon Inspector to collect and group findings across all accounts. Designate an account in the organization as the delegated administrator account for Amazon Inspector.

Question 105

A DevOps team is deploying microservices for an application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The cluster uses managed node groups.

The DevOps team wants to enable auto scaling for the microservice Pods based on a specific CPU utilization percentage. The DevOps team has already installed the Kubernetes Metrics Server on the cluster.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Edit the Auto Scaling group that is associated with the worker nodes of the EKS cluster. Configure the Auto Scaling group to use a target tracking scaling policy to scale when the average CPU utilization of the Auto Scaling group reaches a specific percentage.

B.

Deploy the Kubernetes Horizontal Pod Autoscaler (HPA) and the Kubernetes Vertical Pod Autoscaler (VPA) in the cluster. Configure the HPA to scale based on the target CPU utilization percentage. Configure the VPA to use the recommender mode setting.

C.

Run the AWS Systems Manager AWS-UpdateEKSManagedNodeGroup Automation document. Modify the values for NodeGroupDesiredSize, NodeGroupMaxSize, and NodeGroupMinSize to be based on an estimate for the required node size.

D.

Deploy the Kubernetes Horizontal Pod Autoscaler (HPA) and the Kubernetes Cluster Autoscaler in the cluster. Configure the HPA to scale based on the target CPU utilization percentage. Configure the Cluster Autoscaler to use the auto-discovery setting.

Question 106

A company sends its AWS Network Firewall flow logs to an Amazon S3 bucket. The company then analyzes the flow logs by using Amazon Athena. The company needs to transform the flow logs and add additional data before the flow logs are delivered to the existing S3 bucket. Which solution will meet these requirements?

Options:

A.

Create an AWS Lambda function to transform the data and to write a new object to the existing S3 bucket. Configure the Lambda function with an S3 trigger for the existing S3 bucket. Specify all object create events for the event type. Acknowledge the recursive invocation.

B.

Enable Amazon EventBridge notifications on the existing S3 bucket. Create a custom EventBridge event bus. Create an EventBridge rule that is associated with the custom event bus. Configure the rule to react to all object create events for the existing S3 bucket and to invoke an AWS Step Functions workflow. Configure a Step Functions task to transform the data and to write the data into a new S3 bucket.

C.

Create an Amazon EventBridge rule that is associated with the default EventBridge event bus. Configure the rule to react to all object create events for the existing S3 bucket. Define a new S3 bucket as the target for the rule. Create an EventBridge input transformation to customize the event before passing the event to the rule target.

D.

Create an Amazon Data Firehose delivery stream that is configured with an AWS Lambda transformer. Specify the existing S3 bucket as the destination. Change the Network Firewall logging destination from Amazon S3 to Firehose.

Question 107

A company uses an organization in AWS Organizations to manage its AWS accounts. The company's DevOps team has developed an AWS Lambda function that calls the Organizations API to create new AWS accounts.

The Lambda function runs in the organization's management account. The DevOps team needs to move the Lambda function from the management account to a dedicated AWS account. The DevOps team must ensure that the Lambda function has the ability to create new AWS accounts only in Organizations before the team deploys the Lambda function to the new account.

Which solution will meet these requirements?

Options:

A.

In the management account, create a new IAM role that has the necessary permission to create new accounts in Organizations. Allow the role to be assumed by the Lambda execution role in the new AWS account. Update the Lambda function code to assume the role when the Lambda function creates new AWS accounts. Update the Lambda execution role to ensure that it has permission to assume the new role.

B.

In the management account, turn on delegated administration for Organizations. Create a new delegation policy that grants the new AWS account permission to create new AWS accounts in Organizations. Ensure that the Lambda execution role has the organizations:CreateAccount permission.

C.

In the management account, create a new IAM role that has the necessary permission to create new accounts in Organizations. Allow the role to be assumed by the Lambda service principal. Update the Lambda function code to assume the role when the Lambda function creates new AWS accounts. Update the Lambda execution role to ensure that it has permission to assume the new role.

D.

In the management account, enable AWS Control Tower. Turn on delegated administration for AWS Control Tower. Create a resource policy that allows the new AWS account to create new AWS accounts in AWS Control Tower. Update the Lambda function code to use the AWS Control Tower API in the new AWS account. Ensure that the Lambda execution role has the controltower:CreateManagedAccount permission.
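
For background, a minimal sketch that is not part of the exam item: the Lambda function in the dedicated account assumes a role in the management account that is scoped to organizations:CreateAccount, then calls the Organizations API with the temporary credentials. A placeholder role ARN is assumed below.

    import boto3

    MANAGEMENT_ROLE_ARN = "arn:aws:iam::999999999999:role/OrgCreateAccountRole"  # hypothetical

    def create_account(email: str, account_name: str) -> str:
        """Assume the scoped role in the management account and create the account."""
        creds = boto3.client("sts").assume_role(
            RoleArn=MANAGEMENT_ROLE_ARN,
            RoleSessionName="create-account",
        )["Credentials"]

        org = boto3.client(
            "organizations",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        response = org.create_account(Email=email, AccountName=account_name)
        return response["CreateAccountStatus"]["Id"]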

Question 108

A company uses an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to deploy its web applications on containers. The web applications contain confidential data that cannot be decrypted without specific credentials.

A DevOps engineer has stored the credentials in AWS Secrets Manager. The secrets are encrypted by an AWS Key Management Service (AWS KMS) customer managed key. A Kubernetes service account for a third-party tool makes the secrets available to the applications. The service account assumes an IAM role that the company created to access the secrets.

The service account receives an Access Denied (403 Forbidden) error while trying to retrieve the secrets from Secrets Manager.

What is the root cause of this issue?

Options:

A.

The IAM role that is attached to the EKS cluster does not have access to retrieve the secrets from Secrets Manager.

B.

The key policy for the customer managed key does not allow the Kubernetes service account IAM role to use the key.

C.

The key policy for the customer managed key does not allow the EKS cluster IAM role to use the key.

D.

The IAM role that is assumed by the Kubernetes service account does not have permission to access the EKS cluster.

Question 109

A company wants to ensure that their EC2 instances are secure. They want to be notified if any new vulnerabilities are discovered on their instances, and they also want an audit trail of all login activities on the instances.

Which solution will meet these requirements?


Options:

A.

Use AWS Systems Manager to detect vulnerabilities on the EC2 instances. Install the Amazon Kinesis Agent to capture system logs and deliver them to Amazon S3.

B.

Use AWS Systems Manager to detect vulnerabilities on the EC2 instances. Install the Systems Manager Agent to capture system logs and view login activity in the CloudTrail console.

C.

Configure Amazon CloudWatch to detect vulnerabilities on the EC2 instances. Install the AWS Config daemon to capture system logs and view them in the AWS Config console.

D.

Configure Amazon Inspector to detect vulnerabilities on the EC2 instances. Install the Amazon CloudWatch agent to capture system logs and record them via Amazon CloudWatch Logs.

Question 110

A company uses an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to host its machine learning (ML) application. As the ML model and the container image grow, pod startup time has increased to several minutes. The DevOps engineer created an EventBridge rule that triggers Systems Manager automation to prefetch container images from ECR. Node groups and the cluster have tags configured.

What should the DevOps engineer do next to meet the requirements?

Options:

A.

Create an IAM role allowing EventBridge to use Systems Manager to run commands in the control plane nodes. Create a State Manager association using control plane node tags.

B.

Create an IAM role allowing EventBridge to use Systems Manager to run commands in the worker nodes. Create a State Manager association using the nodes’ machine size.

C.

Create an IAM role allowing EventBridge to use Systems Manager to run commands in the worker nodes. Create a State Manager association using the nodes’ tags to prefetch container images.

D.

Create an IAM role allowing EventBridge to use Systems Manager to run commands in the control plane nodes. Create a State Manager association using the nodes’ tags.

Question 111

An ecommerce company is receiving reports that its order history page is experiencing delays in reflecting the processing status of orders. The order processing system consists of an AWS Lambda function that uses reserved concurrency. The Lambda function processes order messages from an Amazon Simple Queue Service (Amazon SQS) queue and inserts processed orders into an Amazon DynamoDB table. The DynamoDB table has auto scaling enabled for read and write capacity.

Which actions should a DevOps engineer take to resolve this delay? (Choose two.)

Options:

A.

Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Increase the Lambda function concurrency limit.

B.

Check the ApproximateAgeOfOldestMessage metric for the SQS queue. Configure a redrive policy on the SQS queue.

C.

Check the NumberOfMessagesSent metric for the SQS queue. Increase the SQS queue visibility timeout.

D.

Check the WriteThrottleEvents metric for the DynamoDB table. Increase the maximum write capacity units (WCUs) for the table's scaling policy.

E.

Check the Throttles metric for the Lambda function. Increase the Lambda function timeout.

Question 112

A company recently launched multiple applications that use Application Load Balancers. Application response time often slows down when the applications experience problems. A DevOps engineer needs to implement a monitoring solution that alerts the company when the applications begin to perform slowly. The DevOps engineer creates an Amazon Simple Notification Service (Amazon SNS) topic and subscribes the company's email address to the topic.

What should the DevOps engineer do next to meet the requirements?

Options:

A.

Create an Amazon EventBridge rule that invokes an AWS Lambda function to query the applications on a 5-minute interval. Configure the Lambda function to publish a notification to the SNS topic when the applications return errors.

B.

Create an Amazon CloudWatch Synthetics canary that runs a custom script to query the applications on a 5-minute interval. Configure the canary to use the SNS topic when the applications return errors.

C.

Create an Amazon CloudWatch alarm that uses the AWS/ApplicationELB namespace RequestCountPerTarget metric. Configure the CloudWatch alarm to send a notification when the number of connections becomes greater than the configured number of threads that the application supports. Configure the CloudWatch alarm to use the SNS topic.

D.

Create an Amazon CloudWatch alarm that uses the AWS/ApplicationELB namespace RequestCountPerTarget metric. Configure the CloudWatch alarm to send a notification when the average response time becomes greater than the longest response time that the application supports. Configure the CloudWatch alarm to use the SNS topic.

Question 113

A company uses AWS Control Tower and Organizations for a multi-account environment. It needs to create new accounts and ensure they receive a consistent baseline configuration.

Which solution meets the requirement with the least overhead?

Options:

A.

Use Account Factory Customization (AFC) blueprints for baseline setup.

B.

Use Account Factory + StackSets post-setup.

C.

Use Organizations with Lambda applying baseline via access role.

D.

Use Organizations + StackSets manually.

Question 114

A company runs an application on Amazon EC2 instances and on-premises machines. A DevOps engineer needs to standardize patching across both environments. Company policy dictates that patching happens only during non-business hours.

Which combination of actions will meet these requirements? (Choose three.)

Options:

A.

Add the physical machines into AWS Systems Manager using Systems Manager Hybrid Activations.

B.

Attach an IAM role to the EC2 instances, allowing them to be managed by AWS Systems Manager.

C.

Create IAM access keys for the on-premises machines to interact with AWS Systems Manager.

D.

Run an AWS Systems Manager Automation document to patch the systems every hour.

E.

Use Amazon EventBridge scheduled events to schedule a patch window.

F.

Use AWS Systems Manager Maintenance Windows to schedule a patch window.
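
A hedged sketch, not part of the exam item: the flow behind the hybrid pattern is to register the on-premises machines through a hybrid activation, let the EC2 instances register via an instance profile, and run AWS-RunPatchBaseline from a maintenance window scheduled outside business hours. The boto3 example below uses hypothetical role, tag, and schedule values.

    import boto3

    ssm = boto3.client("ssm")

    # Hybrid activation for the on-premises machines (the returned code/ID pair is
    # used when installing the SSM Agent on each physical server).
    activation = ssm.create_activation(
        Description="On-premises patching fleet",
        IamRole="SSMServiceRoleForOnPrem",        # hypothetical service role
        RegistrationLimit=50,
    )

    # Maintenance window outside business hours (02:00 UTC daily, as an example).
    window = ssm.create_maintenance_window(
        Name="nightly-patching",
        Schedule="cron(0 2 ? * * *)",
        Duration=3,
        Cutoff=1,
        AllowUnassociatedTargets=False,
    )

    # Register every managed node that carries a hypothetical Patch=true tag, then
    # attach the AWS-RunPatchBaseline run command task to the window.
    target = ssm.register_target_with_maintenance_window(
        WindowId=window["WindowId"],
        ResourceType="INSTANCE",
        Targets=[{"Key": "tag:Patch", "Values": ["true"]}],
    )
    ssm.register_task_with_maintenance_window(
        WindowId=window["WindowId"],
        TaskType="RUN_COMMAND",
        TaskArn="AWS-RunPatchBaseline",
        Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
        TaskInvocationParameters={
            "RunCommand": {"Parameters": {"Operation": ["Install"]}}
        },
        MaxConcurrency="10%",
        MaxErrors="1",
    )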

Question 115

A company uses AWS Control Tower to deploy multiple AWS accounts. A security team must automate Control Tower guardrails applied to all accounts in an OU, with version control and rollback capabilities.

Which solution meets these requirements?

Options:

A.

Create CloudFormation templates per guardrail stored in CodeCommit. Use AWS::ControlTower::EnableControl resources. Automate via CodeBuild.

B.

Same as A but for each account.

C.

Store CloudFormation templates per guardrail in a Git repo. Use CodePipeline in the security account with EventBridge triggering deployments.

D.

Store templates in S3 and trigger deployment with an EventBridge rule on S3 PutObject events.

Question 116

A media company has several thousand Amazon EC2 instances in an AWS account. The company is using Slack and a shared email inbox for team communications and important updates. A DevOps engineer needs to send all AWS-scheduled EC2 maintenance notifications to the Slack channel and the shared inbox. The solution must include the instances' Name and Owner tags.

Which solution will meet these requirements?

Options:

A.

Integrate AWS Trusted Advisor with AWS Config. Configure a custom AWS Config rule to invoke an AWS Lambda function to publish notifications to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a Slack channel endpoint and the shared inbox to the topic.

B.

Use Amazon EventBridge to monitor for AWS Health events. Configure the maintenance events to target an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to send notifications to the Slack channel and the shared inbox.

C.

Create an AWS Lambda function that sends EC2 maintenance notifications to the Slack channel and the shared inbox. Monitor EC2 health events by using Amazon CloudWatch metrics. Configure a CloudWatch alarm that invokes the Lambda function when a maintenance notification is received.

D.

Configure AWS Support integration with AWS CloudTrail. Create a CloudTrail lookup event to invoke an AWS Lambda function to pass EC2 maintenance notifications to Amazon Simple Notification Service (Amazon SNS). Configure Amazon SNS to target the Slack channel and the shared inbox.

Question 117

A company has an application that runs on AWS Lambda and sends logs to Amazon CloudWatch Logs. An Amazon Kinesis data stream is subscribed to the log groups in CloudWatch Logs. A single consumer Lambda function processes the logs from the data stream and stores the logs in an Amazon S3 bucket.

The company's DevOps team has noticed high latency during the processing and ingestion of some logs.

Which combination of steps will reduce the latency? (Select THREE.)

Options:

A.

Create a data stream consumer with enhanced fan-out. Set the Lambda function that processes the logs as the consumer.

B.

Increase the ParallelizationFactor setting in the Lambda event source mapping.

C.

Configure reserved concurrency for the Lambda function that processes the logs.

D.

Increase the batch size in the Kinesis data stream.

E.

Turn off the ReportBatchItemFailures setting in the Lambda event source mapping.

F.

Increase the number of shards in the Kinesis data stream.
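
The levers behind the throughput-related options can be sketched as follows (not part of the exam item; stream name, consumer name, and event source mapping UUID are placeholders): register an enhanced fan-out consumer, raise the event source mapping's parallelization factor, and add shards.

    import boto3

    kinesis = boto3.client("kinesis")
    lambda_client = boto3.client("lambda")

    STREAM_ARN = "arn:aws:kinesis:eu-west-1:111122223333:stream/log-stream"   # placeholder

    # Enhanced fan-out gives the consumer Lambda its own 2 MB/s pipe per shard.
    kinesis.register_stream_consumer(
        StreamARN=STREAM_ARN,
        ConsumerName="log-processor",            # hypothetical consumer name
    )

    # More concurrent batches per shard for the consumer function.
    lambda_client.update_event_source_mapping(
        UUID="11111111-2222-3333-4444-555555555555",   # placeholder mapping UUID
        ParallelizationFactor=10,
    )

    # More shards means more aggregate throughput and more parallel invocations.
    kinesis.update_shard_count(
        StreamName="log-stream",
        TargetShardCount=8,
        ScalingType="UNIFORM_SCALING",
    )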

Question 118

A company is migrating its container-based workloads to an AWS Organizations multi-account environment. The environment consists of application workload accounts that the company uses to deploy and run the containerized workloads. The company has also provisioned a shared services account for shared workloads in the organization.

The company must follow strict compliance regulations. All container images must receive security scanning before they are deployed to any environment. Images can be consumed by downstream deployment mechanisms after the images pass a scan with no critical vulnerabilities. Pre-scan and post-scan images must be isolated from one another so that a deployment can never use pre-scan images.

A DevOps engineer needs to create a strategy to centralize this process.

Which combination of steps will meet these requirements with the LEAST administrative overhead? (Select TWO.)

Options:

A.

Create Amazon Elastic Container Registry (Amazon ECR) repositories in the shared services account: one repository for each pre-scan image and one repository for each post-scan image. Configure Amazon ECR image scanning to run on new image pushes to the pre-scan repositories. Use resource-based policies to grant the organization write access to the pre-scan repositories and read access to the post-scan repositories.

B.

Create pre-scan Amazon Elastic Container Registry (Amazon ECR) repositories in each account that publishes container images. Create repositories for post-scan images in the shared services account. Configure Amazon ECR image scanning to run on new image pushes to the pre-scan repositories. Use resource-based policies to grant the organization read access to the post-scan repositories.

C.

Configure image replication for each image from the image's pre-scan repository to the image's post-scan repository.

D.

Create a pipeline in AWS CodePipeline for each pre-scan repository. Create a source stage that runs when new images are pushed to the pre-scan repositories. Create a stage that uses AWS CodeBuild as the action provider. Write a buildspec.yaml definition that determines the image scanning status and pushes images without critical vulnerabilities to the post-scan repositories.

E.

Create an AWS Lambda function. Create an Amazon EventBridge rule that reacts to image scanning completed events and invokes the Lambda function. Write function code that determines the image scanning status and pushes images without critical vulnerabilities to the post-scan repositories.
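
Background sketch, not part of the exam item: the promotion flow pairs repository resource policies (organization-wide write access to the pre-scan repositories, read-only access to the post-scan repositories) with an EventBridge rule that reacts to completed image scans and invokes a Lambda function to copy clean images forward. A minimal boto3 example of the event rule with placeholder names:

    import json
    import boto3

    events = boto3.client("events")

    # React to finished ECR image scans; the Lambda function inspects the finding
    # counts and copies images with no critical findings to the post-scan repository.
    events.put_rule(
        Name="promote-scanned-images",
        EventPattern=json.dumps({
            "source": ["aws.ecr"],
            "detail-type": ["ECR Image Scan"],
            "detail": {"scan-status": ["COMPLETE"]},
        }),
    )
    events.put_targets(
        Rule="promote-scanned-images",
        Targets=[{
            "Id": "promotion-function",
            "Arn": "arn:aws:lambda:eu-west-1:111122223333:function/promote-image",  # placeholder
        }],
    )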
