
AWS Certified Solutions Architect - Associate (SAA-C03) Exam Practice Test

Demo: 196 questions (of 649 total)

AWS Certified Solutions Architect - Associate (SAA-C03) Questions and Answers

Question 1

A company is building a serverless application to process clickstream data from its website. The clickstream data is sent to an Amazon Kinesis Data Streams data stream from the application web servers.

The company wants to enrich the clickstream data by joining the clickstream data with customer profile data from an Amazon Aurora Multi-AZ database. The company wants to use Amazon Redshift to analyze the enriched data. The solution must be highly available.

Which solution will meet these requirements?

Options:

A.

Use an AWS Lambda function to process and enrich the clickstream data. Use the same Lambda function to write the clickstream data to Amazon S3. Use Amazon Redshift Spectrum to query the enriched data in Amazon S3.

B.

Use an Amazon EC2 Spot Instance to poll the data stream and enrich the clickstream data. Configure the EC2 instance to use the COPY command to send the enriched results to Amazon Redshift.

C.

Use an Amazon Elastic Container Service (Amazon ECS) task with AWS Fargate Spot capacity to poll the data stream and enrich the clickstream data. Configure an Amazon EC2 instance to use the COPY command to send the enriched results to Amazon Redshift.

D.

Use Amazon Kinesis Data Firehose to load the clickstream data from Kinesis Data Streams to Amazon S3. Use AWS Glue crawlers to infer the schema and populate the AWS Glue Data Catalog. Use Amazon Athena to query the raw data in Amazon S3.
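
For reference, a minimal sketch of the Lambda-based enrichment that option A describes, assuming a hypothetical enrich helper and bucket name; Kinesis delivers record payloads base64-encoded inside the Lambda event:

```python
import base64
import json

import boto3

s3 = boto3.client("s3")

def enrich(click):
    # Hypothetical join against customer profile data (e.g., an Aurora lookup).
    click["customer_tier"] = "unknown"
    return click

def handler(event, context):
    # Kinesis record payloads arrive base64-encoded in the event.
    enriched = [
        enrich(json.loads(base64.b64decode(r["kinesis"]["data"])))
        for r in event["Records"]
    ]
    # Land the enriched batch in S3, where Redshift Spectrum can query it.
    s3.put_object(
        Bucket="example-enriched-clickstream",  # hypothetical bucket name
        Key=f"enriched/{context.aws_request_id}.json",
        Body="\n".join(json.dumps(e) for e in enriched).encode(),
    )
```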

Question 2

A gaming company has a web application that displays game scores. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The application stores data in an Amazon RDS for MySQL database.

Users are experiencing long delays and interruptions caused by degraded database read performance. The company wants to improve the user experience.

Which solution will meet this requirement?

Options:

A.

Use an Amazon ElastiCache (Redis OSS) cache in front of the database.

B.

Use Amazon RDS Proxy between the application and the database.

C.

Migrate the application from EC2 instances to AWS Lambda functions.

D.

Use an Amazon Aurora Global Database to create multiple read replicas across multiple AWS Regions.

Question 3

A company deployed a three-tier web application in a single Availability Zone in the us-east-1 Region on a single Amazon EC2 instance. Usage of the application is growing.

A solutions architect needs to ensure that the application can handle the growing amount of traffic and that the application is resilient. The solution must be cost-effective.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Create two additional EC2 instances spread across two separate Availability Zones. Create an Application Load Balancer (ALB). Configure the ALB to route traffic to a target group that contains all three instances. Create an Amazon CloudWatch alarm to scale the EC2 instances vertically to handle the application traffic.

B.

Create eight additional EC2 instances spread across three separate Availability Zones. Create an Application Load Balancer (ALB). Configure the ALB to route traffic to a target group that contains all nine instances. Create an Amazon CloudWatch alarm to scale the EC2 instances horizontally to handle the application traffic.

C.

Create an EC2 Auto Scaling group that contains a minimum of three EC2 instances in the same Availability Zone. Create an Application Load Balancer (ALB). Configure the ALB to route traffic to a target group that contains all the instances. Configure scheduled scaling for the Auto Scaling group.

D.

Create an EC2 Auto Scaling group that contains a minimum of three EC2 instances spread across Availability Zones. Create an Application Load Balancer (ALB). Configure the ALB to route traffic to a target group that contains all the instances. Create an Amazon CloudWatch alarm to scale the EC2 instances horizontally to handle the application traffic.

Question 4

A solutions architect is building a static website hosted on Amazon S3. The website uses an Amazon Aurora PostgreSQL database accessed through an AWS Lambda function. The production website uses a Lambda alias that points to a specific version of the Lambda function.

Database credentials must rotate every 2 weeks. Previously deployed Lambda versions must always use the most recent credentials.

Which solution will meet these requirements?

Options:

A.

Store credentials in AWS Secrets Manager. Turn on rotation. Write code in the Lambda function to retrieve credentials from Secrets Manager.

B.

Include the credentials in the Lambda function code and update the function regularly.

C.

Use Lambda environment variables and update them when new credentials are available.

D.

Store credentials in AWS Systems Manager Parameter Store. Turn on rotation. Write code to retrieve credentials from Parameter Store.
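
As an illustration of the Secrets Manager approach in option A, a minimal boto3 sketch (the secret name is hypothetical); fetching the secret at invocation time means previously deployed Lambda versions always read the most recently rotated credentials:

```python
import json

import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials():
    # Fetch the current version of the secret on each lookup so that
    # rotated credentials are picked up without redeploying the function.
    response = secrets.get_secret_value(SecretId="prod/aurora/credentials")  # hypothetical name
    return json.loads(response["SecretString"])
```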

Question 5

A company has an application that serves clients that are deployed in more than 20,000 retail storefront locations around the world. The application consists of backend web services that are exposed over HTTPS on port 443. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The retail locations communicate with the web application over the public internet. The company allows each retail location to register the IP address that its local ISP has allocated to the location.

The company's security team recommends increasing the security of the application endpoint by restricting access to only the IP addresses that the retail locations have registered.

What should a solutions architect do to meet these requirements?

Options:

A.

Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update the IP addresses in the rule to include the registered IP addresses.

B.

Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the ALB. Modify the firewall rules to include the registered IP addresses.

C.

Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization function on the ALB to validate that incoming requests are from the registered IP addresses.

D.

Configure the network ACL on the subnet that contains the public interface of the ALB. Update the ingress rules on the network ACL with entries for each of the registered IP addresses.

Question 6

A company decides to use AWS Key Management Service (AWS KMS) for data encryption operations. The company must create a KMS key and automate the rotation of the key. The company also needs the ability to deactivate the key and schedule the key for deletion.

Which solution will meet these requirements?

Options:

A.

Create an asymmetric customer managed KMS key. Enable automatic key rotation.

B.

Create a symmetric customer managed KMS key. Disable the envelope encryption option.

C.

Create a symmetric customer managed KMS key. Enable automatic key rotation.

D.

Create an asymmetric customer managed KMS key. Disable the envelope encryption option.
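
For context, a minimal boto3 sketch of the key lifecycle that option C describes; only symmetric customer managed keys support automatic rotation, and deactivation and deletion are separate calls:

```python
import boto3

kms = boto3.client("kms")

# Symmetric encryption key; KeySpec defaults to SYMMETRIC_DEFAULT.
key_id = kms.create_key(Description="app data key")["KeyMetadata"]["KeyId"]

# Automatic annual rotation is supported only for symmetric customer managed keys.
kms.enable_key_rotation(KeyId=key_id)

# Later, the key can be deactivated and scheduled for deletion (7-30 day window).
kms.disable_key(KeyId=key_id)
kms.schedule_key_deletion(KeyId=key_id, PendingWindowInDays=7)
```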

Question 7

A company runs multiple applications on Amazon EC2 instances in a VPC. Application A runs in a private subnet that has a custom route table and network ACL. Application B runs in a second private subnet in the same VPC.

The company needs to prevent Application A from sending traffic to Application B.

Which solution will meet this requirement?

Options:

A.

Add a deny outbound rule to a security group that is associated with Application B. Configure the rule to prevent Application B from sending traffic to Application A.

B.

Add a deny outbound rule to a security group that is associated with Application A. Configure the rule to prevent Application A from sending traffic to Application B.

C.

Add a deny outbound rule to the custom network ACL for the Application B subnet. Configure the rule to prevent Application B from sending traffic to IP addresses that are associated with the Application A subnet.

D.

Add a deny outbound rule to the custom network ACL for the Application A subnet. Configure the rule to prevent Application A from sending traffic to IP addresses that are associated with the Application B subnet.

Question 8

A company has primary and secondary data centers that are 500 miles (804.7 km) apart and interconnected with high-speed fiber-optic cable. The company needs a highly available and secure network connection between its data centers and a VPC on AWS for a mission-critical workload.

A solutions architect must choose a connection solution that provides maximum resiliency.

Which solution meets these requirements?

Options:

A.

Two AWS Direct Connect connections from the primary data center terminating at two Direct Connect locations on two separate devices

B.

A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on the same device

C.

Two AWS Direct Connect connections from each of the primary and secondary data centers terminating at two Direct Connect locations on two separate devices

D.

A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on two separate devices

Question 9

A company wants to release a new device that will collect data to track overnight sleep on an intelligent mattress. Sensors will send data that will be uploaded to an Amazon S3 bucket. Each mattress generates about 2 MB of data each night.

An application must process the data and summarize the data for each user. The application must make the results available as soon as possible. Every invocation of the application will require about 1 GB of memory and will finish running within 30 seconds.

Which solution will run the application MOST cost-effectively?

Options:

A.

AWS Lambda with a Python script

B.

AWS Glue with a Scala job

C.

Amazon EMR with an Apache Spark script

D.

AWS Glue with a PySpark job

Question 10

A company uses a Microsoft SQL Server database. The applications currently connect using SQL Server protocols. The company wants to migrate to Amazon Aurora PostgreSQL with minimal changes to application code.

Which combination of steps will meet these requirements? (Select TWO.)

Options:

A.

Use AWS SCT to rewrite SQL queries in the applications.

B.

Enable Babelfish on Aurora PostgreSQL to run SQL Server queries.

C.

Migrate the database schema and data using AWS SCT and AWS DMS.

D.

Use Amazon RDS Proxy to connect the applications to Aurora PostgreSQL.

E.

Use AWS DMS to rewrite SQL queries in the applications.

Question 11

A transaction processing company has weekly scripted batch jobs that run on Amazon EC2 instances. The EC2 instances are in an Auto Scaling group. The number of transactions can vary, but the baseline CPU utilization that is noted on each run is at least 60%. The company needs to provision the capacity 30 minutes before the jobs run.

Currently, engineers complete this task by manually modifying the Auto Scaling group parameters. The company does not have the resources to analyze the required capacity trends for the Auto Scaling group counts. The company needs an automated way to modify the Auto Scaling group's desired capacity.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create a dynamic scaling policy for the Auto Scaling group. Configure the policy to scale based on the CPU utilization metric. Set the target value for the metric to 60%.

B.

Create a scheduled scaling policy for the Auto Scaling group. Set the appropriate desired capacity, minimum capacity, and maximum capacity. Set the recurrence to weekly. Set the start time to 30 minutes before the batch jobs run.

C.

Create a predictive scaling policy for the Auto Scaling group. Configure the policy to scale based on forecast. Set the scaling metric to CPU utilization. Set the target value for the metric to 60%. In the policy, set the instances to pre-launch 30 minutes before the jobs run.

D.

Create an Amazon EventBridge event to invoke an AWS Lambda function when the CPU utilization metric value for the Auto Scaling group reaches 60%. Configure the Lambda function to increase the Auto Scaling group's desired capacity and maximum capacity by 20%.
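
A minimal sketch of the predictive scaling policy in option C, assuming a hypothetical Auto Scaling group name; SchedulingBufferTime is what pre-launches capacity 30 minutes (1,800 seconds) ahead of the forecast:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-jobs-asg",  # hypothetical group name
    PolicyName="weekly-batch-predictive",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 60.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
        # Pre-launch instances 30 minutes (1,800 seconds) ahead of the forecast.
        "SchedulingBufferTime": 1800,
    },
)
```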

Question 12

A company hosts dozens of multi-tier applications on AWS. The presentation layer and logic layer are Amazon EC2 Linux instances that use Amazon EBS volumes.

The company needs a solution to ensure that operating system vulnerabilities are not introduced to the EC2 instances when the company deploys new features. The company uses custom AMIs to deploy EC2 instances in an Auto Scaling group. The solution must scale to handle all applications that the company hosts.

Which solution will meet these requirements?

Options:

A.

Use Amazon Inspector to patch operating system vulnerabilities. Invoke Amazon Inspector when a new AMI is deployed.

B.

Use AWS Backup to back up the EBS volume of each updated instance. Use the EBS backup volumes to create new AMIs. Use the existing Auto Scaling group to deploy the new AMIs.

C.

Use AWS Systems Manager Patch Manager to patch operating system vulnerabilities in the custom AMIs.

D.

Use EC2 Image Builder to create new AMIs when the company deploys new features. Include the update-linux component in the build components of the new AMIs. Use the existing Auto Scaling group to deploy the new AMIs.

Question 13

A company uses Amazon API Gateway to manage its REST APIs that third-party service providers access. The company must protect the REST APIs from SQL injection and cross-site scripting attacks.

What is the MOST operationally efficient solution that meets these requirements?

Options:

A.

Configure AWS Shield.

B.

Configure AWS WAF.

C.

Set up API Gateway with an Amazon CloudFront distribution. Configure AWS Shield in CloudFront.

D.

Set up API Gateway with an Amazon CloudFront distribution. Configure AWS WAF in CloudFront.

Question 14

A media company hosts a web application on AWS. The application gives users the ability to upload and view videos. The application stores the videos in an Amazon S3 bucket. The company wants to ensure that only authenticated users can upload videos. Authenticated users must have the ability to upload videos only within a specified time frame after authentication.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Configure the application to generate IAM temporary security credentials for authenticated users.

B.

Create an AWS Lambda function that generates pre-signed URLs when a user authenticates.

C.

Develop a custom authentication service that integrates with Amazon Cognito to control and log direct S3 bucket access through the application.

D.

Use AWS Security Token Service (AWS STS) to assume a pre-defined IAM role that grants authenticated users temporary permissions to upload videos directly to the S3 bucket.
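
To illustrate the pre-signed URL approach in option B, a minimal boto3 sketch (the bucket name and key are hypothetical); ExpiresIn bounds the upload window after authentication:

```python
import boto3

s3 = boto3.client("s3")

def issue_upload_url(user_id: str) -> str:
    # Called after the user authenticates; the URL expires in 15 minutes.
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "example-video-uploads", "Key": f"{user_id}/video.mp4"},  # hypothetical
        ExpiresIn=900,
    )
```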

Question 16

A company hosts a video streaming web application in a VPC. The company uses a Network Load Balancer (NLB) to handle TCP traffic for real-time data processing. There have been unauthorized attempts to access the application.

The company wants to improve application security with minimal architectural change to prevent unauthorized attempts to access the application.

Which solution will meet these requirements?

Options:

A.

Implement a series of AWS WAF rules directly on the NLB to filter out unauthorized traffic.

B.

Recreate the NLB with a security group to allow only trusted IP addresses.

C.

Deploy a second NLB in parallel with the existing NLB configured with a strict IP address allow list.

D.

Use AWS Shield Advanced to provide enhanced DDoS protection and prevent unauthorized access attempts.

Question 17

A company wants to migrate its on-premises Oracle database to Amazon Aurora. The company wants to use a secure and encrypted network to transfer the data.

Which combination of steps will meet these requirements? (Select TWO.)

Options:

A.

Use AWS Application Migration Service to migrate the data.

B.

Use AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) to migrate the data.

C.

Use AWS Direct Connect SiteLink to transfer data from the on-premises environment to AWS.

D.

Use AWS Site-to-Site VPN to establish a connection to transfer the data from the on-premises environment to AWS.

E.

Use AWS App2Container to migrate the data.

Question 18

An application is experiencing performance issues based on increased demand. This increased demand is on read-only historical records that are pulled from an Amazon RDS-hosted database with custom views and queries. A solutions architect must improve performance without changing the database structure.

Which approach will improve performance and MINIMIZE management overhead?

Options:

A.

Deploy Amazon DynamoDB, move all the data, and point to DynamoDB.

B.

Deploy Amazon ElastiCache (Redis OSS) and cache the data for the application.

C.

Deploy Memcached on Amazon EC2 and cache the data for the application.

D.

Deploy Amazon DynamoDB Accelerator (DAX) on Amazon RDS to improve cache performance.

Question 19

An ecommerce company runs an application that uses an Amazon DynamoDB table in a single AWS Region. The company wants to deploy the application to a second Region. The company needs to support multi-active replication with low latency reads and writes to the existing DynamoDB table in both Regions.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Create a DynamoDB global secondary index (GSI) for the existing table. Create a new table in the second Region. Convert the existing DynamoDB table to a global table. Specify the new table as the secondary table.

B.

Enable Amazon DynamoDB Streams for the existing table. Create a new table in the second Region. Create a new application that uses the DynamoDB Streams Kinesis Adapter and the Amazon Kinesis Client Library (KCL). Configure the new application to read data from the DynamoDB table in the first Region and to write the data to the new table in the second Region.

C.

Convert the existing DynamoDB table to a global table. Choose the appropriate second Region to achieve active-active write capabilities in both Regions.

D.

Enable Amazon DynamoDB Streams for the existing table. Create a new table in the second Region. Create an AWS Lambda function in the first Region that reads data from the table in the first Region and writes the data to the new table in the second Region. Set a DynamoDB stream as the input trigger for the Lambda function.
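
A minimal sketch of converting an existing table to a global table, as option C describes, assuming a hypothetical table name and Regions; global tables (version 2019.11.21) provide multi-active replication:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Adding a replica converts the existing table to a global table with
# active-active reads and writes in both Regions.
dynamodb.update_table(
    TableName="orders",  # hypothetical table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)
```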

Question 20

A company stores data for multiple business units in a single Amazon S3 bucket that is in the company's payer AWS account. To maintain data isolation, the business units store data in separate prefixes in the S3 bucket by using an S3 bucket policy.

The company plans to add a large number of dynamic prefixes. The company does not want to rely on a single S3 bucket policy to manage data access at scale. The company wants to develop a secure access management solution in addition to the bucket policy to enforce prefix-level data isolation.

Which solution will meet these requirements?

Options:

A.

Configure the S3 bucket policy to deny s3:GetObject permissions for all users. Configure the bucket policy to allow s3:* access to individual business units.

B.

Enable default encryption on the S3 bucket by using server-side encryption with Amazon S3 managed keys (SSE-S3).

C.

Configure resource-based permissions on the S3 bucket by creating an S3 access point for each business unit.

D.

Use pre-signed URLs to provide access to the S3 bucket.

Question 21

A healthcare company is running an Amazon EMR cluster on Amazon EC2 instances to process data that is stored in Amazon S3. The company must ensure that the data processing jobs have access only to the relevant data in Amazon S3. Each job must have specific EMR runtime roles.

Which combination of steps will meet these requirements? (Select THREE.)

Options:

A.

Set up security configurations in Amazon EMR, and set EnableApplicationScopedIAMRole to true.

B.

Set up runtime roles to assume the EC2 instance profile of the Amazon EMR cluster.

C.

Set up an EC2 instance profile for the Amazon EMR cluster to assume the runtime roles.

D.

For each IAM role that serves as an EMR runtime role, set up a trust policy with the EC2 instance profile role.

E.

Establish a trust policy between the EMR runtime roles and the EMR service role of the cluster.

F.

Set up security configurations in Amazon EMR, and set EnableInTransitEncryption to true.

Question 22

A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The company needs to migrate the data to an AWS managed service for development and maintenance of the application data. The solution must require minimal operational support and provide immutable, cryptographically verifiable logs of data changes.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Copy the records from the application into an Amazon Redshift cluster.

B.

Copy the records from the application into an Amazon Neptune cluster.

C.

Copy the records from the application into an Amazon Timestream database.

D.

Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.

Question 23

An analytics application runs on multiple Amazon EC2 Linux instances that use Amazon Elastic File System (Amazon EFS) Standard storage. The files vary in size and access frequency. The company accesses the files infrequently after 30 days. However, users sometimes request older files to generate reports.

The company wants to reduce storage costs for files that are accessed infrequently. The company also wants throughput to adjust based on the size of the file system. The company wants to use the TransitionToIA Amazon EFS lifecycle policy to transition files to Infrequent Access (IA) storage after 30 days.

Which solution will meet these requirements?

Options:

A.

Configure files to transition back to Standard storage when a user accesses the files again. Specify the provisioned throughput mode.

B.

Specify the provisioned throughput mode only.

C.

Configure files to transition back to Standard storage when a user accesses the files again. Specify the bursting throughput mode.

D.

Specify the bursting throughput mode only.
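
For reference, a minimal boto3 sketch of the lifecycle configuration that options A and C describe (the file system ID is hypothetical); TransitionToPrimaryStorageClass moves files back to Standard storage on access, and bursting throughput scales with file system size by default:

```python
import boto3

efs = boto3.client("efs")

efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # hypothetical file system ID
    LifecyclePolicies=[
        # Move files to Infrequent Access after 30 days without access.
        {"TransitionToIA": "AFTER_30_DAYS"},
        # Move files back to Standard storage when they are accessed again.
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```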

Question 24

A company hosts an industrial control application that receives sensor input through Amazon Kinesis Data Streams. The application needs to support new sensors for real-time anomaly detection in monitored equipment.

The company wants to integrate new sensors in a loosely coupled, fully managed, and serverless way. The company cannot modify the application code.

Which solution will meet these requirements?

Options:

A.

Forward the existing stream in Kinesis Data Streams to Amazon Managed Service for Apache Flink for anomaly detection. Use a second stream in Kinesis Data Streams to send the Flink output to the application.

B.

Use Amazon Data Firehose to stream data to Amazon S3. Use Amazon Redshift Spectrum to perform anomaly detection on the S3 data. Use S3 Event Notifications to invoke an AWS Lambda function that sends analyzed data to the application through a second stream in Kinesis Data Streams.

C.

Configure Amazon EC2 instances in an Auto Scaling group to consume data from the data stream and to perform anomaly detection. Create a second stream in Kinesis Data Streams to send data from the EC2 instances to the application.

D.

Configure an Amazon Elastic Container Service (Amazon ECS) task that uses Amazon EC2 instances to consume data from the data stream and to perform anomaly detection. Create a second stream in Kinesis Data Streams to send data from the containers to the application.

Question 25

An ecommerce company runs a PostgreSQL database on an Amazon EC2 instance. The database stores data in Amazon Elastic Block Store (Amazon EBS) volumes. The daily peak input/output transactions per second (IOPS) do not exceed 15,000 IOPS. The company wants to migrate the database to Amazon RDS for PostgreSQL and to provision disk IOPS performance that is independent of disk storage capacity.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Configure General Purpose SSD (gp2) EBS volumes. Provision a 5 TiB volume.

B.

Configure Provisioned IOPS SSD (io1) EBS volumes. Provision 15,000 IOPS.

C.

Configure General Purpose SSD (gp3) EBS volumes. Provision 15,000 IOPS.

D.

Configure magnetic EBS volumes to achieve maximum IOPS.
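
A minimal sketch of provisioning gp3 storage with independent IOPS, as in option C, with hypothetical instance settings; on gp3, IOPS is a parameter separate from the allocated storage size:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-postgres",  # hypothetical identifier
    DBInstanceClass="db.m6g.large",
    Engine="postgres",
    MasterUsername="postgres",
    ManageMasterUserPassword=True,
    AllocatedStorage=400,
    StorageType="gp3",
    Iops=15000,  # gp3 lets IOPS be provisioned independently of storage capacity
)
```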

Question 26

A company is migrating a legacy application from an on-premises data center to AWS. The application relies on hundreds of cron jobs that run for 1 to 20 minutes on different recurring schedules throughout the day.

The company wants a solution to schedule and run the cron jobs on AWS with minimal refactoring. The solution must support running the cron jobs in response to an event in the future.

Which solution will meet these requirements?

Options:

A.

Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks as AWS Lambda functions.

B.

Create a container image for the cron jobs. Use AWS Batch on Amazon Elastic Container Service (Amazon ECS) with a scheduling policy to run the cron jobs.

C.

Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks on AWS Fargate.

D.

Create a container image for the cron jobs. Create a workflow in AWS Step Functions that uses a Wait state to run the cron jobs at a specified time. Use the RunTask action to run the cron job tasks on AWS Fargate.

Question 27

A company must follow strict regulations for the management of data encryption keys. The company manages its own key externally and imports the key into AWS Key Management Service (AWS KMS). The company must control the imported key material and must rotate the key material on a regular schedule.

A solutions architect needs to import the key material into AWS KMS and rotate the key without interrupting applications that use the key.

Which solution will meet these requirements?

Options:

A.

Create a new AWS KMS key that has the same key ID as the existing key. Import new key material into the key.

B.

Schedule the existing AWS KMS key for deletion. Create a new KMS key that has new key material.

C.

Import new key material into the existing AWS KMS key. Set an expiration time for the old key material.

D.

Enable automatic key rotation for the existing AWS KMS key.
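
To illustrate the import flow that option C relies on, a minimal boto3 sketch under stated assumptions (the key ID and file name are hypothetical, and the key material is wrapped offline with the returned public key):

```python
from datetime import datetime, timezone

import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical key ID

# Get a wrapping public key and import token for the externally managed material.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# The key material is wrapped offline with params["PublicKey"] before import.
with open("wrapped_key_material.bin", "rb") as f:
    wrapped = f.read()

kms.import_key_material(
    KeyId=key_id,
    ImportToken=params["ImportToken"],
    EncryptedKeyMaterial=wrapped,
    ExpirationModel="KEY_MATERIAL_EXPIRES",
    # ValidTo controls when the imported material expires.
    ValidTo=datetime(2026, 1, 1, tzinfo=timezone.utc),
)
```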

Question 28

A company is using Amazon DocumentDB global clusters to support an ecommerce application. The application serves customers across multiple AWS Regions. To ensure business continuity, the company needs a solution to minimize downtime during maintenance windows or other disruptions.

Which solution will meet these requirements?

Options:

A.

Regularly create manual snapshots of the DocumentDB instance in the primary Region.

B.

Perform a managed failover to a secondary Region when needed.

C.

Perform a failover to a replica DocumentDB instance within the primary Region.

D.

Configure increased replication lag to manage cross-Region replication.

Question 29

A company stores customer data in a multitenant Amazon S3 bucket. Each customer's data is stored in a prefix that is unique to the customer. The company needs to migrate data for specific customers to a new, dedicated S3 bucket that is in the same AWS Region as the source bucket. The company must preserve object metadata such as creation date and version IDs.

After the migration is finished, the company must delete the source data for the migrated customers from the original multitenant S3 bucket.

Which combination of solutions will meet these requirements with the LEAST overhead? (Select THREE.)

Options:

A.

Create a new S3 bucket as a destination bucket. Enable versioning on the new bucket.

B.

Use S3 batch operations to copy objects from the specified prefixes to the destination bucket.

C.

Use the S3 CopyObject API, and create a script to copy data to the destination S3 bucket.

D.

Configure S3 Same-Region Replication (SRR) to replicate existing data from the specified prefixes in the source bucket to the destination bucket.

E.

Configure AWS DataSync to migrate data from the specified prefixes in the source bucket to the destination bucket.

F.

Use an S3 Lifecycle policy to delete objects from the source bucket after the data is migrated to the destination bucket.

Question 30

An ecommerce company stores terabytes of customer data in the AWS Cloud. The data contains personally identifiable information (PII). The company wants to use the data in three applications. Only one of the applications needs to process the PII. The PII must be removed before the other two applications process the data.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Store the data in an Amazon DynamoDB table. Create a proxy application layer to intercept and process the data that each application requests.

B.

Store the data in an Amazon S3 bucket. Process and transform the data by using S3 Object Lambda before returning the data to the requesting application.

C.

Process the data and store the transformed data in three separate Amazon S3 buckets so that each application has its own custom dataset. Point each application to its respective S3 bucket.

D.

Process the data and store the transformed data in three separate Amazon DynamoDB tables so that each application has its own custom dataset. Point each application to its respective DynamoDB table.

Question 31

A company is migrating mobile banking applications to run on Amazon EC2 instances in a VPC. Backend service applications run in an on-premises data center. The data center has an AWS Direct Connect connection into AWS. The applications that run in the VPC need to resolve DNS requests to an on-premises Active Directory domain that runs in the data center.

Which solution will meet these requirements with the LEAST administrative overhead?

Options:

A.

Provision a set of EC2 instances across two Availability Zones in the VPC as caching DNS servers to resolve DNS queries from the application servers within the VPC.

B.

Provision an Amazon Route 53 private hosted zone. Configure NS records that point to on-premises DNS servers.

C.

Create DNS endpoints by using Amazon Route 53 Resolver. Add conditional forwarding rules to resolve DNS namespaces between the on-premises data center and the VPC.

D.

Provision a new Active Directory domain controller in the VPC with a bidirectional trust between this new domain and the on-premises Active Directory domain.

Question 32

A company is storing data in Amazon S3 buckets. The company needs to retain any objects that contain personally identifiable information (PII) that might need to be reviewed.

A solutions architect must develop an automated solution to identify objects that contain PII and apply the necessary controls to prevent deletion before review.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE.)

Options:

A.

Create a job in Amazon Macie to scan the S3 buckets for the relevant sensitive data identifiers.

B.

Move the identified objects to the S3 Glacier Deep Archive storage class.

C.

Create an AWS Lambda function that performs an S3 Object Lock legal hold operation on the identified objects.

D.

Create an AWS Lambda function that applies an S3 Object Lock retention period to the identified objects in governance mode.

E.

Create an Amazon EventBridge rule that invokes the AWS Lambda function when Amazon Macie detects sensitive data.

F.

Configure multi-factor authentication (MFA) delete on the S3 buckets.
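
A minimal sketch of the Lambda function that options C and E describe, assuming it is invoked by an EventBridge rule that matches Amazon Macie finding events (the event fields shown are simplified):

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Invoked by an EventBridge rule matching "Macie Finding" events.
    detail = event["detail"]
    bucket = detail["resourcesAffected"]["s3Bucket"]["name"]
    key = detail["resourcesAffected"]["s3Object"]["key"]

    # A legal hold prevents deletion until the hold is explicitly removed.
    # The bucket must have S3 Object Lock enabled.
    s3.put_object_legal_hold(
        Bucket=bucket,
        Key=key,
        LegalHold={"Status": "ON"},
    )
```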

Question 33

A company runs an application on EC2 instances that need access to RDS credentials stored in AWS Secrets Manager.

Which solution meets this requirement?

Options:

A.

Create an IAM role, and attach the role to each EC2 instance profile. Use an identity-based policy to grant the role access to the secret.

B.

Create an IAM user, and attach the user to each EC2 instance profile. Use a resource-based policy to grant the user access to the secret.

C.

Create a resource-based policy for the secret. Use EC2 Instance Connect to access the secret.

D.

Create an identity-based policy for the secret. Grant direct access to the EC2 instances.

Question 34

A company runs database workloads on AWS that are the backend for the company's customer portals. The company runs a Multi-AZ database cluster on Amazon RDS for PostgreSQL.

The company needs to implement a 30-day backup retention policy. The company currently has both automated RDS backups and manual RDS backups. The company wants to maintain both types of existing RDS backups that are less than 30 days old.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Configure the RDS backup retention policy to 30 days for automated backups by using AWS Backup. Manually delete manual backups that are older than 30 days.

B.

Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days. Configure the RDS backup retention policy to 30 days for automated backups.

C.

Configure the RDS backup retention policy to 30 days for automated backups. Manually delete manual backups that are older than 30 days.

D.

Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days automatically by using AWS CloudFormation. Configure the RDS backup retention policy to 30 days for automated backups.

Question 35

A genomics research company is designing a scalable architecture for a loosely coupled workload. Tasks in the workload are independent and can be processed in parallel. The architecture needs to minimize management overhead and provide automatic scaling based on demand.

Which solution will meet these requirements?

Options:

A.

Use a cluster of Amazon EC2 instances. Use AWS Systems Manager to manage the workload.

B.

Implement a serverless architecture that uses AWS Lambda functions.

C.

Use AWS ParallelCluster to deploy a dedicated high-performance cluster.

D.

Implement vertical scaling for each workload task.

Question 36

A media company uses an Amazon CloudFront distribution to deliver content over the internet. The company wants only premium customers to have access to the media streams and file content. The company stores all content in an Amazon S3 bucket. The company also delivers content on demand to customers for a specific purpose, such as movie rentals or music downloads.

Which solution will meet these requirements?

Options:

A.

Generate and provide S3 signed cookies to premium customers.

B.

Generate and provide CloudFront signed URLs to premium customers.

C.

Use origin access control (OAC) to limit the access of non-premium customers.

D.

Generate and activate field-level encryption to block non-premium customers.
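
For illustration, a minimal sketch of generating a CloudFront signed URL, as in option B, using botocore's CloudFrontSigner; the key pair ID, domain, and key file are hypothetical:

```python
from datetime import datetime, timedelta, timezone

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message: bytes) -> bytes:
    # Signs with the private key that matches the CloudFront public key ID.
    with open("private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # hypothetical key pair ID

# Canned policy: the URL stops working after the rental window ends.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/movies/title.mp4",  # hypothetical
    date_less_than=datetime.now(timezone.utc) + timedelta(hours=48),
)
```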

Question 37

A company is designing a website that displays stock market prices to users. The company wants to use Amazon ElastiCache (Redis OSS) for the data caching layer. The company needs to ensure that the website's data caching layer can automatically fail over to another node if necessary.

Which solution will meet these requirements?

Options:

A.

Enable read replicas in ElastiCache (Redis OSS). Promote the read replica when necessary.

B.

Enable Multi-AZ in ElastiCache (Redis OSS). Fail over to a second node when necessary.

C.

Export a backup of the ElastiCache (Redis OSS) cache to an Amazon S3 bucket. Restore the cache to a second cluster when necessary.

D.

Export a backup of the ElastiCache (Redis OSS) cache by using AWS Backup. Restore the cache to a second cluster when necessary.

Question 38

A company has an application that receives and processes purchase orders. The application supports only XML data. The company needs to configure the application to accept orders in JSON format. The company does not want to modify the application.

A solutions architect is using an Amazon API Gateway HTTP API to create a new purchase order API. The solutions architect needs to modify the application DNS record to point to the new HTTP API.

Which solution will meet these requirements?

Options:

A.

Use an HTTP proxy integration to pass XML requests to the application. For JSON requests, use API Gateway mappings to convert the purchase orders to XML. Use an AWS Lambda function that is integrated with API Gateway to call the application.

B.

Use an HTTP proxy integration to pass XML requests to the application. For JSON requests, use an AWS Lambda function that is integrated with API Gateway to convert the purchase orders from JSON to XML and to call the application.

C.

Use an HTTP custom integration to pass XML requests to the application. For JSON requests, use API Gateway mappings to convert the purchase orders to XML. Use an AWS Lambda function that is integrated with API Gateway to call the application.

D.

Use an HTTP custom integration to pass XML requests to the application. For JSON requests, use an AWS Lambda function that is integrated with API Gateway to convert the purchase orders to JSON and to call the application.

Question 39

A company runs an on-premises managed file transfer solution to collect images from its clients. The company uses an open source transfer tool to transfer and integrate the images into the company's workflow. The company then runs a custom application to add watermarks to the images.

The company needs to migrate this workload to AWS and wants to use AWS managed services where possible. Uploaded images must be stored as objects. The company wants to automate the watermark addition.

Which solution will meet these requirements?

Options:

A.

Use AWS DataSync to automate file transfers. Store the images in an Amazon S3 bucket. Use an application that runs on Amazon EC2 instances to add watermarks.

B.

Use REST APIs to transfer files. Store the images in an Amazon S3 bucket. Use AWS Batch jobs to add watermarks.

C.

Use SFTP with AWS Transfer Family to automate file transfers into Amazon S3 buckets. Configure the Transfer Family workflow to invoke an AWS Lambda function to add watermarks.

D.

Use AWS Transfer Family to transfer images. Store the images in Amazon S3 Glacier Deep Archive. Run an AWS Step Functions state machine to add watermarks.

Question 40

A company is creating an application. The company stores data from tests of the application in multiple on-premises locations.

The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The number of accounts and VPCs will increase during the next year. The network architecture must simplify the administration of new connections and must provide the ability to scale.

Which solution will meet these requirements with the LEAST administrative overhead?

Options:

A.

Create a peering connection between the VPCs. Create a VPN connection between the VPCs and the on-premises locations.

B.

Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN connection to connect all VPCs and on-premises locations.

C.

Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the on-premises connections.

D.

Create an AWS Direct Connect connection between the on-premises locations and a central VPC. Connect the central VPC to other VPCs by using peering connections.

Question 41

A company runs a production application on a fleet of Amazon EC2 instances. The application reads messages from an Amazon Simple Queue Service (Amazon SQS) queue and processes the messages in parallel. The message volume is unpredictable and highly variable.

The company must ensure that the application continually processes messages without any downtime.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Use only Spot Instances to handle the maximum capacity required.

B.

Use only Reserved Instances to handle the maximum capacity required.

C.

Use Reserved Instances to handle the baseline capacity. Use Spot Instances to provide additional capacity when required.

D.

Use Reserved Instances in an EC2 Auto Scaling group to handle the minimum capacity. Configure an auto scaling policy that is based on the SQS queue backlog.
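
As a companion to option D, a minimal sketch of publishing a backlog-per-instance metric that a target tracking policy can act on (the queue URL and group name are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical
ASG_NAME = "order-processors"  # hypothetical

def publish_backlog_per_instance():
    # Queue depth divided by running instances gives the metric that a
    # target tracking scaling policy keeps near a desired value.
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
    )
    backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"][0]
    instances = max(len(group["Instances"]), 1)

    cloudwatch.put_metric_data(
        Namespace="Custom/SQS",
        MetricData=[{
            "MetricName": "BacklogPerInstance",
            "Value": backlog / instances,
        }],
    )
```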

Question 42

A company needs to archive an on-premises relational database. The company wants to retain the data. The company needs to be able to run SQL queries on the archived data to create annual reports.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use AWS Database Migration Service (AWS DMS) to migrate the on-premises database to an Amazon RDS instance. Retire the on-premises database. Maintain the RDS instance in a stopped state until the data is needed for reports.

B.

Set up database replication from the on-premises database to an Amazon EC2 instance. Retire the on-premises database. Make a snapshot of the EC2 instance. Maintain the EC2 instance in a stopped state until the data is needed for reports.

C.

Create a database backup on premises. Use AWS DataSync to transfer the data to Amazon S3. Create an S3 Lifecycle configuration to move the data to S3 Glacier Deep Archive. Restore the backup to Amazon EC2 instances to run reports.

D.

Use AWS Database Migration Service (AWS DMS) to migrate the on-premises databases to Amazon S3 in Apache Parquet format. Store the data in S3 Glacier Flexible Retrieval. Use Amazon Athena to run reports.

Question 43

A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience.

The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.

Which combination of steps will meet these requirements? (Select TWO.)

Options:

A.

Create a public Network Load Balancer. Specify the application target group.

B.

Create a Gateway Load Balancer. Specify the application target group.

C.

Create a public Application Load Balancer. Specify the application target group.

D.

Create a second target group. Add Elastic IP addresses to the EC2 instances.

E.

Create a web ACL in AWS WAF. Associate the web ACL with the endpoint.

Question 44

A company is migrating its databases to Amazon RDS for PostgreSQL. The company is migrating its applications to Amazon EC2 instances. The company wants to optimize costs for long-running workloads.

Which solution will meet this requirement MOST cost-effectively?

Options:

A.

Use On-Demand Instances for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year Compute Savings Plan with the No Upfront option for the EC2 instances.

B.

Purchase Reserved Instances for a 1 year term with the No Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year EC2 Instance Savings Plan with the No Upfront option for the EC2 instances.

C.

Purchase Reserved Instances for a 1 year term with the Partial Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year EC2 Instance Savings Plan with the Partial Upfront option for the EC2 instances.

D.

Purchase Reserved Instances for a 3 year term with the All Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 3 year EC2 Instance Savings Plan with the All Upfront option for the EC2 instances.

Question 45

A media company hosts a web application on AWS for uploading videos. Only authenticated users should be able to upload videos, and only within a specified time frame after authentication.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Configure the application to generate IAM temporary security credentials for authenticated users.

B.

Create an AWS Lambda function that generates pre-signed URLs when a user authenticates.

C.

Develop a custom authentication service that integrates with Amazon Cognito to control and log direct S3 bucket access through the application.

D.

Use AWS Security Token Service (AWS STS) to assume a pre-defined IAM role that grants authenticated users temporary permissions to upload videos directly to the S3 bucket.

Question 46

A global ecommerce company is designing a three-tier application on AWS. The application includes a web tier that serves static content, an application tier that handles business logic, and a database tier that stores product information and user data. The application interacts with a relational database.

The company needs a highly available application architecture to serve global users with low latency, with the least operational overhead.

Which solution will meet these requirements?

Options:

A.

Deploy Amazon EC2 instances in an Auto Scaling group for the application tier and web tier in a single AWS Region. Use an Application Load Balancer to distribute web traffic. Use an Amazon RDS database and Multi-AZ deployments for the database tier.

B.

Set up an Amazon CloudFront distribution that uses an Amazon S3 bucket as the origin. Use Amazon Elastic Container Service (Amazon ECS) containers on AWS Fargate to deploy the application tier to each AWS Region where the company operates. Use an Amazon Aurora global database for the database tier.

C.

Use an Amazon S3 bucket to store the static web content. Use Amazon EC2 Auto Scaling and EC2 Spot Instances for the application tier. Use Amazon RDS for MySQL with read replicas for the database tier. Use AWS Database Migration Service (AWS DMS) to replicate data to secondary AWS Regions.

D.

Use an Amazon S3 bucket to store static web content. Use AWS Lambda functions to handle serverless backend logic in the application tier. Use Amazon API Gateway to invoke the Lambda functions for web requests. Use an Amazon DynamoDB database for the database tier. Deploy the DynamoDB database across multiple AWS Regions.

Question 47

A company manages multiple AWS accounts in an organization in AWS Organizations. The company's applications run on Amazon EC2 instances in multiple AWS Regions. The company needs a solution to simplify the management of security rules across the accounts in its organization. The solution must apply shared security group rules, audit security groups, and detect unused and redundant rules in VPC security groups across all AWS environments.

Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Use AWS Firewall Manager to create a set of rules based on the security requirements. Replicate the rules to all the AWS accounts and Regions.

B.

Use AWS CloudFormation StackSets to provision VPC security groups based on the specifications across multiple accounts and Regions. Deploy AWS Network Firewall to define the firewall rules to control network traffic across multiple accounts and Regions.

C.

Use AWS CloudFormation StackSets to provision VPC security groups based on the specifications across multiple accounts and Regions. Configure AWS Config and AWS Lambda to evaluate compliance information and to automate enforcement across all accounts and Regions.

D.

Use AWS Network Firewall to build policies based on the security requirements. Centrally apply the new policies to all the VPCs and accounts.

Question 48

An ecommerce company hosts a three-tier web application in a VPC. The web tier runs on Amazon EC2 instances in two Availability Zones. The company stores a product catalog and customer sales information in Amazon DynamoDB.

The company's finance team uses a reporting application to generate reports of daily product sales. When the finance team runs the daily reports, a sudden performance decrease affects website customers.

The company wants to improve the performance of the system.

Which solution will meet these requirements with MINIMAL changes to the current architecture?

Options:

A.

Migrate the application to larger EC2 instances. Migrate the database to Amazon RDS for MySQL. Configure a read replica of the database in a second Availability Zone.

B.

Increase the compute capacity of the EC2 instances. Migrate the database to Amazon ElastiCache (Memcached).

C.

Implement DynamoDB Accelerator (DAX).

D.

Configure DynamoDB streams.

Question 49

A company currently stores 5 TB of data in on-premises block storage systems. The company's current storage solution provides limited space for additional data. The company runs applications on premises that must be able to retrieve frequently accessed data with low latency. The company requires a cloud-based storage solution.

Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Use Amazon S3 File Gateway. Integrate S3 File Gateway with the on-premises applications to store and directly retrieve files by using the SMB protocol.

B.

Use an AWS Storage Gateway Volume Gateway with cached volumes as iSCSI targets.

C.

Use an AWS Storage Gateway Volume Gateway with stored volumes as iSCSI targets.

D.

Use an AWS Storage Gateway Tape Gateway. Integrate Tape Gateway with the on-premises applications to store virtual tapes in Amazon S3.

Question 50

A company hosts an application on AWS. The application gives users the ability to upload photos and store the photos in an Amazon S3 bucket. The company wants to use Amazon CloudFront and a custom domain name to upload the photo files to the S3 bucket in the eu-west-1 Region.

Which solution will meet these requirements? (Select TWO.)

Options:

A.

Use AWS Certificate Manager (ACM) to create a public certificate in the us-east-1 Region. Use the certificate in CloudFront.

B.

Use AWS Certificate Manager (ACM) to create a public certificate in eu-west-1. Use the certificate in CloudFront.

C.

Configure Amazon S3 to allow uploads from CloudFront. Configure S3 Transfer Acceleration.

D.

Configure Amazon S3 to allow uploads from CloudFront origin access control (OAC).

E.

Configure Amazon S3 to allow uploads from CloudFront. Configure an Amazon S3 website endpoint.
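
A minimal sketch of the certificate step in option A (the domain is hypothetical); CloudFront requires the ACM certificate to be issued in us-east-1 even though the bucket is in eu-west-1:

```python
import boto3

# CloudFront only accepts ACM certificates from us-east-1,
# regardless of the Region that hosts the S3 bucket.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="uploads.example.com",  # hypothetical custom domain
    ValidationMethod="DNS",
)
print(response["CertificateArn"])
```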

Question 51

A media company runs an application on multiple Amazon EC2 instances that requires high storage input/output operations per second (IOPS).

To achieve the necessary performance, a solutions architect wants to stripe multiple Amazon EBS volumes together and attach the volumes to EC2 instances. The solutions architect wants to receive a notification when IOPS are over-provisioned.

Which solution will meet these requirements?

Options:

A.

Configure auto scaling for the EBS volumes to automatically increase or decrease IOPS based on the EC2 instance CPU utilization metric.

B.

Deploy the application on an EC2 instance type that supports the highest possible IOPS.

C.

Create a custom AWS Config rule to monitor the provisioned IOPS for the EBS volumes that are attached to the EC2 instances and to send notifications.

D.

Adjust the IOPS of each EBS volume daily based on Amazon CloudWatch metrics for IOPS utilization.

Question 52

A machine learning (ML) team is building an application that uses data that is in an Amazon S3 bucket. The ML team needs a storage solution for its model training workflow on AWS. The ML team requires high-performance storage that supports frequent access to training datasets. The storage solution must integrate natively with Amazon S3.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon Elastic Block Store (Amazon EBS) volumes to provide high-performance storage. Use AWS DataSync to migrate data from the S3 bucket to EBS volumes.

B.

Use Amazon EC2 ML instances to provide high-performance storage. Store training data on Amazon EBS volumes. Use the S3 Copy API to copy data from the S3 bucket to EBS volumes.

C.

Use Amazon FSx for Lustre to provide high-performance storage. Store training datasets in Amazon S3 Standard storage.

D.

Use Amazon EMR to provide high-performance storage. Store training datasets in Amazon S3 Glacier Instant Retrieval storage.

Question 53

A solutions architect is creating a data reporting application that will send traffic through third-party network firewalls in an AWS security account. The firewalls and application servers must be load balanced.

The application uses TCP connections to generate reports. The reports can run for several hours and can be idle for up to 1 hour. The reports must not time out during an idle period.

Which solution will meet these requirements?

Options:

A.

Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Set the ALB idle timeout period to 1 hour.

B.

Use a single firewall in the security account. Use an Application Load Balancer (ALB) for the application servers. Set the ALB idle timeout and firewall idle timeout periods to 1 hour.

C.

Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Set the idle timeout periods for the ALB, the GWLB, and the firewalls to 1 hour.

D.

Use a Gateway Load Balancer (GWLB) for the firewalls. Use an Application Load Balancer (ALB) for the application servers. Configure the ALB idle timeout period to 1 hour. Increase the application server capacity to finish the report generation faster.

Question 54

A company is developing a content sharing platform that currently handles 500 GB of user-generated media files. The company expects the amount of content to grow significantly in the future. The company needs a storage solution that can automatically scale, provide high durability, and allow direct user uploads from web browsers.

Which solution will meet these requirements?

Options:

A.

Store the data in an Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled.

B.

Store the data in an Amazon Elastic File System (Amazon EFS) Standard file system.

C.

Store the data in an Amazon S3 Standard bucket.

D.

Store the data in an Amazon S3 Express One Zone bucket.

Question 55

A company is implementing a new policy to enhance the security of its AWS environment. The policy requires all administrative actions that users perform on the AWS Management Console to be secured by multi-factor authentication (MFA).

Which solution will allow the company to enforce this policy in the MOST operationally efficient way?

Options:

A.

Enable MFA on the root account. Ensure that all administrators use the root account to perform administrative actions.

B.

Create an IAM policy that requires MFA to be enabled for the IAM roles that administrators assume to perform administrative actions.

C.

Configure an Amazon CloudWatch alarm that sends an email notification when an administrator performs an administrative action without MFA.

D.

Use AWS Config to periodically audit IAM users and to automatically attach an IAM policy that requires MFA when AWS Config detects administrative actions.

Question 56

A company hosts an Amazon EC2 instance in a private subnet in a new VPC. The VPC also has a public subnet that has the default route set to an internet gateway. The private subnet does not have outbound internet access.

The EC2 instance needs to have the ability to download monthly security updates from an outside vendor. However, the company must block any connections that are initiated from the internet.

Which solution will meet these requirements?

Options:

A.

Configure the private subnet route table to use the internet gateway as the default route.

B.

Create a NAT gateway in the public subnet. Configure the private subnet route table to use the NAT gateway as the default route.

C.

Create a NAT instance in the private subnet. Configure the private subnet route table to use the NAT instance as the default route.

D.

Create a NAT instance in the private subnet. Configure the private subnet route table to use the internet gateway as the default route.
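
For reference, the NAT gateway pattern named in option B can be wired up with a short boto3 sketch. The subnet, Elastic IP allocation, and route table IDs below are placeholders, not values from the question:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a NAT gateway in the public subnet, using an existing Elastic IP.
# The subnet, allocation, and route table IDs are hypothetical placeholders.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public000000000",
    AllocationId="eipalloc-0aaaaaaaaaaaaaaaa",
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before adding the route.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# Point the private subnet's default route at the NAT gateway so instances
# can initiate outbound connections without accepting inbound ones.
ec2.create_route(
    RouteTableId="rtb-0private00000000",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```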

Question 57

A company needs a solution to integrate transaction data from several Amazon DynamoDB tables into an existing Amazon Redshift data warehouse. The solution must maintain the provisioned throughput of DynamoDB.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create an Amazon S3 bucket. Configure DynamoDB to export to the bucket on a regular schedule. Use an Amazon Redshift COPY command to read from the S3 bucket.

B.

Use an Amazon Redshift COPY command to read directly from each DynamoDB table.

C.

Create an Amazon S3 bucket. Configure an AWS Lambda function to read from the DynamoDB tables and write to the S3 bucket on a regular schedule. Use Amazon Redshift Spectrum to access the data in the S3 bucket.

D.

Use Amazon Athena Federated Query with a DynamoDB connector and an Amazon Redshift connector to read directly from the DynamoDB tables.

Question 58

A developer needs to export the contents of several Amazon DynamoDB tables into Amazon S3 buckets to comply with company data regulations. The developer uses the AWS CLI to run commands to export from each table to the proper S3 bucket. The developer sets up AWS credentials correctly and grants resources appropriate permissions. However, the exports of some tables fail.

What should the developer do to resolve this issue?

Options:

A.

Ensure that point-in-time recovery is enabled on the DynamoDB tables.

B.

Ensure that the target S3 bucket is in the same AWS Region as the DynamoDB table.

C.

Ensure that DynamoDB streaming is enabled for the tables.

D.

Ensure that DynamoDB Accelerator (DAX) is enabled.
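
As background for option A, DynamoDB table exports to Amazon S3 require point-in-time recovery (PITR) on the source table. A minimal boto3 sketch, assuming a hypothetical table name and bucket:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Table exports to S3 require point-in-time recovery on the source table.
dynamodb.update_continuous_backups(
    TableName="orders",  # hypothetical table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# With PITR enabled, the export call succeeds.
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/orders",
    S3Bucket="example-export-bucket",  # hypothetical bucket
    ExportFormat="DYNAMODB_JSON",
)
```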

Question 59

A media company needs to migrate its Windows-based video editing environment to AWS. The company's current environment processes 4K video files that require sustained throughput of 2 GB per second across multiple concurrent users.

The company's storage needs increase by 1 TB each week. The company needs a shared file system that supports SMB protocol and can scale automatically based on storage demands.

Which solution will meet these requirements?

Options:

A.

Deploy an Amazon FSx for Windows File Server Multi-AZ file system with SSD storage.

B.

Deploy an Amazon Elastic File System (Amazon EFS) file system in Max I/O mode. Provision mount targets in multiple Availability Zones.

C.

Deploy an Amazon FSx for Lustre file system with a Persistent 2 deployment type. Provision the file system with 2 TB of storage.

D.

Deploy Amazon S3 File Gateway by using multiple cached gateway instances. Configure S3 Transfer Acceleration.

Question 60

A company has a website that handles dynamic traffic loads. The website architecture is based on Amazon EC2 instances in an Auto Scaling group that is configured to use scheduled scaling. Each EC2 instance runs code from an Amazon Elastic File System (Amazon EFS) volume and stores shared data back to the same volume.

The company wants to optimize costs for the website.

Which solution will meet this requirement?

Options:

A.

Reconfigure the Auto Scaling group to set a desired number of instances. Turn off scheduled scaling.

B.

Create a new launch template version for the Auto Scaling group that uses larger EC2 instances.

C.

Reconfigure the Auto Scaling group to use a target tracking scaling policy.

D.

Replace the EFS volume with instance store volumes.

Question 61

A company uses Apache Hadoop and Apache Spark on premises. The infrastructure is complex and does not scale. The company wants to reduce operational complexity but keep data processing on premises.

Which solution will meet these requirements?

Options:

A.

Use AWS Site-to-Site VPN to access the on-premises HDFS. Use Amazon EMR to process the data.

B.

Use AWS DataSync to connect to the on-premises HDFS. Use Amazon EMR to process the data.

C.

Migrate to Amazon EMR on AWS Outposts.

D.

Use AWS Snowball to migrate the data to Amazon S3. Use Amazon EMR to process the data.

Question 62

An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft SQL Server Enterprise Edition. The company's current recovery point objective (RPO) and recovery time objective (RTO) are 24 hours.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Create a cross-Region read replica and promote the read replica to the primary instance.

B.

Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.

C.

Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.

D.

Copy automatic snapshots to another Region every 24 hours.

Question 63

A company has a transaction-processing application that is backed by an Amazon RDS MySQL database. When the load on the application increases, a large number of database connections are opened and closed frequently, which causes latency for the database transactions.

A solutions architect determines that the root cause of the latency is poor connection handling by the application. The solutions architect cannot modify the application code. The solutions architect needs to manage database connections to improve the database performance during periods of high load.

Which solution will meet these requirements?

Options:

A.

Upgrade the database instance to a larger instance type to handle a large number of database connections.

B.

Configure Amazon RDS storage autoscaling to dynamically increase the provisioned IOPS.

C.

Use Amazon RDS Proxy to pool and share database connections.

D.

Convert the database instance to a Multi-AZ deployment.
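
For context on option C, an RDS Proxy sits between the application and the database and pools connections without any application code changes. A minimal boto3 sketch; the role ARN, secret ARN, subnet IDs, and instance identifier are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Create a proxy that pools and shares connections to the MySQL database.
rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",
    VpcSubnetIds=["subnet-0aaa", "subnet-0bbb"],
)

# Register the database instance with the proxy's default target group.
rds.register_db_proxy_targets(
    DBProxyName="app-proxy",
    DBInstanceIdentifiers=["transactions-db"],
)
```

The application then connects to the proxy endpoint instead of the database endpoint, so no code changes are needed beyond the connection string.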

Question 64

A company operates multiple VPCs in a single AWS account. Account users need temporary access to Amazon S3 buckets. The S3 buckets are private and have no public endpoints.

The solution must follow the principle of least privilege for access to each environment and must avoid distributing permanent access keys.

Which solution will meet these requirements?

Options:

A.

Create a gateway VPC endpoint for Amazon S3 in each VPC. Attach an endpoint policy that allows only environment-scoped IAM roles to access the S3 buckets.

B.

Configure the S3 buckets to use SSE-S3. Create bucket policies that allow access only from the VPC CIDR blocks.

C.

Define separate S3 access points for each environment. Allow users to assume a role associated with the access points. Use the default Amazon S3 endpoints.

D.

Route S3 traffic through a NAT gateway. Configure bucket policies that allow traffic only from the NAT gateway’s public IP addresses.

Question 65

A company has a business system that generates hundreds of reports each day. The business system saves the reports to a network share in CSV format. The company needs to store this data in the AWS Cloud in near-real time for analysis.

Which solution will meet these requirements?

Options:

A.

Use AWS DataSync to transfer the files to Amazon S3. Create a scheduled task that runs at the end of each day.

B.

Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File Gateway.

C.

Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the DataSync API in the automation workflow.

D.

Deploy an AWS Transfer for SFTP endpoint. Create a script that checks for new files on the network share and uploads the new files by using SFTP.

Question 66

A company runs an environment where data is stored in an Amazon S3 bucket. The objects are accessed frequently throughout the day. The company has strict data encryption requirements for data that is stored in the S3 bucket. The company currently uses AWS Key Management Service (AWS KMS) for encryption.

The company wants to optimize costs associated with encrypting S3 objects without making additional calls to AWS KMS.

Which solution will meet these requirements?

Options:

A.

Use server-side encryption with Amazon S3 managed keys (SSE-S3).

B.

Use an S3 Bucket Key for server-side encryption with AWS KMS keys (SSE-KMS) on the new objects.

C.

Use client-side encryption with AWS KMS customer managed keys.

D.

Use server-side encryption with customer-provided keys (SSE-C) stored in AWS KMS.
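
For context on option B, an S3 Bucket Key lets S3 reuse a bucket-level data key so that individual object operations do not each call AWS KMS. A minimal boto3 sketch with a placeholder bucket name and KMS key ARN:

```python
import boto3

s3 = boto3.client("s3")

# Enable SSE-KMS with an S3 Bucket Key so object-level operations reuse a
# bucket-level data key instead of making a KMS request per object.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/abcd-1234",
            },
            "BucketKeyEnabled": True,
        }]
    },
)
```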

Question 67

A company runs an online order management system on AWS. The company stores order and inventory data for the previous 5 years in an Amazon Aurora MySQL database. The company deletes inventory data after 5 years.

The company wants to optimize costs to archive data.

Which solution will meet this requirement?

Options:

A.

Create an AWS Glue crawler to export data to Amazon S3. Create an AWS Lambda function to compress the data.

B.

Use the SELECT INTO OUTFILE S3 query on the Aurora database to export the data to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.

C.

Create an AWS Glue DataBrew job to migrate data from Aurora to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.

D.

Use the AWS Schema Conversion Tool (AWS SCT) to replicate data from Aurora to Amazon S3. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.

Question 68

A company has stored millions of objects across multiple prefixes in an Amazon S3 bucket by using the Amazon S3 Glacier Deep Archive storage class. The company needs to delete all data older than 3 years except for a subset of data that must be retained. The company has identified the data that must be retained and wants to implement a serverless solution.

Which solution will meet these requirements?

Options:

A.

Use S3 Inventory to list all objects. Use the AWS CLI to create a script that runs on an Amazon EC2 instance that deletes objects from the inventory list.

B.

Use AWS Batch to delete objects older than 3 years except for the data that must be retained.

C.

Provision an AWS Glue crawler to query objects older than 3 years. Save the manifest file of old objects. Create a script to delete objects in the manifest.

D.

Enable S3 Inventory. Create an AWS Lambda function to filter and delete objects. Invoke the Lambda function with S3 Batch Operations to delete objects by using the inventory reports.
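
To illustrate the pattern in option D, S3 Batch Operations invokes a Lambda function for the objects listed in the inventory manifest, and the function returns a per-task result. A minimal handler sketch, assuming the standard Batch Operations invocation shape; the retention predicate and prefix are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

def must_retain(key: str) -> bool:
    # Hypothetical predicate: objects under this prefix must be kept.
    return key.startswith("retained/")

def handler(event, context):
    # S3 Batch Operations sends one or more tasks per invocation.
    results = []
    for task in event["tasks"]:
        bucket = task["s3BucketArn"].split(":::")[-1]
        key = task["s3Key"]
        if must_retain(key):
            code, msg = "Succeeded", "retained"
        else:
            s3.delete_object(Bucket=bucket, Key=key)
            code, msg = "Succeeded", "deleted"
        results.append(
            {"taskId": task["taskId"], "resultCode": code, "resultString": msg}
        )
    return {
        "invocationSchemaVersion": event["invocationSchemaVersion"],
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": results,
    }
```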

Question 69

A company asks a solutions architect to review the architecture for its messaging application. The application uses TCP and UDP traffic. The company is planning to deploy a new VoIP feature, but its 10 test users in other countries are reporting poor call quality.

The VoIP application runs on an Amazon EC2 instance with more than enough resources. The HTTP portion of the company's application behind an Application Load Balancer has no issues.

What should the solutions architect recommend for the company to do to address the VoIP performance issues?

Options:

A.

Use AWS Global Accelerator.

B.

Implement Amazon CloudFront into the architecture.

C.

Use an Amazon Route 53 geoproximity routing policy.

D.

Migrate from Application Load Balancers to Network Load Balancers.

Question 70

A company is migrating its online shopping platform to AWS and wants to adopt a serverless architecture.

The platform has a user profile and preference service that does not have a defined schema. The platform allows user-defined fields.

Profile information is updated several times daily. The company must store profile information in a durable and highly available solution. The solution must capture modifications to profile data for future processing.

Which solution will meet these requirements?

Options:

A.

Use an Amazon RDS for PostgreSQL instance to store profile data. Use a log stream in Amazon CloudWatch Logs to capture modifications.

B.

Use an Amazon DynamoDB table to store profile data. Use Amazon DynamoDB Streams to capture modifications.

C.

Use an Amazon ElastiCache (Redis OSS) cluster to store profile data. Use Amazon Data Firehose to capture modifications.

D.

Use an Amazon Aurora Serverless v2 cluster to store the profile data. Use a log stream in Amazon CloudWatch Logs to capture modifications.

Question 71

A company uses Amazon EC2 instances to host its internal systems. As part of a deployment operation, an administrator tries to use the AWS CLI to terminate an EC2 instance. However, the administrator receives a 403 (Access Denied) error message.

The administrator is using an IAM role that has the following IAM policy attached:
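
(The policy document appears only as an image in the source and is not reproduced here. For orientation, a hypothetical reconstruction consistent with the answer choices, in particular the aws:SourceIp condition on the documentation CIDR blocks 192.0.2.0/24 and 203.0.113.0/24, might look like the following Python sketch; every name and value is illustrative.)

```python
# Hypothetical reconstruction of the policy referenced above; the original
# appears only as an image in the source.
iam_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:TerminateInstances"],
            "Resource": "*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": ["192.0.2.0/24", "203.0.113.0/24"]
                }
            },
        }
    ],
}
```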

What is the cause of the unsuccessful request?

Options:

A.

The EC2 instance has a resource-based policy with a Deny statement.

B.

The principal has not been specified in the policy statement.

C.

The "Action" field does not grant the actions that are required to terminate the EC2 instance.

D.

The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.

Question 72

A company is storing data that will not be frequently accessed in the AWS Cloud. If the company needs to access the data, the data must be retrieved within 12 hours. The company wants a solution that is cost-effective for storage costs per gigabyte.

Which Amazon S3 storage class will meet these requirements?

Options:

A.

S3 Standard

B.

S3 Glacier Flexible Retrieval

C.

S3 One Zone-Infrequent Access (S3 One Zone-IA)

D.

S3 Standard-Infrequent Access (S3 Standard-IA)

Question 73

A company wants to share data that is collected from self-driving cars with the automobile community. The data will be made available from within an Amazon S3 bucket. The company wants to minimize its cost of making this data available to other AWS accounts.

What should a solutions architect do to accomplish this goal?

Options:

A.

Create an S3 VPC endpoint for the bucket.

B.

Configure the S3 bucket to be a Requester Pays bucket.

C.

Create an Amazon CloudFront distribution in front of the S3 bucket.

D.

Require that the files be accessible only with the use of the BitTorrent protocol.

Question 74

A company is building a mobile gaming app. The company wants to serve users from around the world with low latency. The company needs a scalable solution to host the application and to route user requests to the location that is nearest to each user.

Which solution will meet these requirements?

Options:

A.

Use an Application Load Balancer to route requests to Amazon EC2 instances that are deployed across multiple Availability Zones.

B.

Use a Regional Amazon API Gateway REST API to route requests to AWS Lambda functions.

C.

Use an edge-optimized Amazon API Gateway REST API to route requests to AWS Lambda functions.

D.

Use an Application Load Balancer to route requests to containers in an Amazon ECS cluster.

Question 75

A company has an application that uses an Amazon RDS for PostgreSQL database. The company is developing an application feature that will store sensitive information for an individual in the database.

During a security review of the environment, the company discovers that the RDS DB instance is not encrypting data at rest. The company needs a solution that will provide encryption at rest for all the existing data and for any new data that is entered for an individual.

Which combination of steps should the company take to meet these requirements? (Select TWO.)

Options:

A.

Create a snapshot of the DB instance. Enable encryption on the snapshot. Use the encrypted snapshot to create a new DB instance. Adjust the application configuration to use the new DB instance.

B.

Create a snapshot of the DB instance. Create an encrypted copy of the snapshot. Use the encrypted snapshot to create a new DB instance. Adjust the application configuration to use the new DB instance.

C.

Modify the configuration of the DB instance by enabling encryption. Create a snapshot of the DB instance. Use the snapshot to create a new DB instance. Adjust the application configuration to use the new DB instance.

D.

Use AWS Key Management Service (AWS KMS) to create a new default AWS managed aws/rds key. Select this key as the encryption key for operations with Amazon RDS.

E.

Use AWS Key Management Service (AWS KMS) to create a new customer managed key. Select this key as the encryption key for operations with Amazon RDS.

Question 76

A financial services company has a two-tier consumer banking application. The frontend serves static web content. The backend consists of APIs. The company needs to migrate the frontend component to AWS. The backend of the application will remain on-premises. The company must protect the application from common web vulnerabilities and attacks.

Which solution will meet these requirements?

Options:

A.

Migrate the frontend to Amazon EC2 instances. Deploy an Application Load Balancer (ALB) in front of the instances. Use the instances to invoke the on-premises APIs. Associate AWS WAF rules with the instances.

B.

Deploy the frontend as an Amazon CloudFront distribution that has multiple origins. Configure one origin to be an Amazon S3 bucket that serves the static web content. Configure a second origin to route traffic to the on-premises APIs based on the URL pattern. Associate AWS WAF rules with the distribution.

C.

Migrate the frontend to Amazon EC2 instances. Deploy a Network Load Balancer (NLB) in front of the instances. Use the instances to invoke the on-premises APIs. Create an AWS Network Firewall instance. Route all traffic through the Network Firewall instance.

D.

Deploy the frontend as a static website based on an Amazon S3 bucket. Use an Amazon API Gateway REST API and a set of Amazon EC2 instances to invoke the on-premises APIs. Associate AWS WAF rules with the REST API and the S3 bucket.

Question 77

A company hosts its multi-tier, public web application in the AWS Cloud. The web application runs on Amazon EC2 instances, and its database runs on Amazon RDS. The company is anticipating a large increase in sales during an upcoming holiday weekend. A solutions architect needs to build a solution to analyze the performance of the web application with a granularity of no more than 2 minutes.

What should the solutions architect do to meet this requirement?

Options:

A.

Send Amazon CloudWatch logs to Amazon Redshift. Use Amazon QuickSight to perform further analysis.

B.

Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform further analysis.

C.

Create an AWS Lambda function to fetch EC2 logs from Amazon CloudWatch Logs. Use Amazon CloudWatch metrics to perform further analysis.

D.

Send EC2 logs to Amazon S3. Use Amazon Redshift to fetch logs from the S3 bucket to process raw data for further analysis with Amazon QuickSight.

Question 78

A company has an application with a REST-based interface that allows data to be received in near-real time from a third-party vendor. Once received, the application processes and stores the data for further analysis. The application is running on Amazon EC2 instances.

The third-party vendor has received many 503 Service Unavailable Errors when sending data to the application. When the data volume spikes, the compute capacity reaches its maximum limit and the application is unable to process all requests.

Which design should a solutions architect recommend to provide a more scalable solution?

Options:

A.

Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.

B.

Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party vendor.

C.

Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto Scaling group behind an Application Load Balancer.

D.

Repackage the application as a container. Deploy the application using Amazon Elastic Container Service (Amazon ECS) using the EC2 launch type with an Auto Scaling group.

Question 79

A social media company wants to store its database of user profiles, relationships, and interactions in the AWS Cloud. The company needs an application to monitor any changes in the database. The application needs to analyze the relationships between the data entities and to provide recommendations to users.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon Neptune to store the information. Use Amazon Kinesis Data Streams to process changes in the database.

B.

Use Amazon Neptune to store the information. Use Neptune Streams to process changes in the database.

C.

Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Amazon Kinesis Data Streams to process changes in the database.

D.

Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Neptune Streams to process changes in the database.

Question 80

A company hosts its applications in multiple private and public subnets in a VPC. The applications in the private subnets need to access an API. The API is available on the internet and is hosted in the company's on-premises data center. A solutions architect needs to establish connectivity for applications in the private subnets.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Create a transit gateway to connect the VPC to the on-premises network. Use the transit gateway to route API calls from the private subnets to the on-premises data center.

B.

Create a NAT gateway in the public subnet of the VPC. Use the NAT gateway to allow the private subnets to access the API over the internet.

C.

Establish an AWS PrivateLink connection to connect the VPC to the on-premises network. Use PrivateLink to make API calls from the private subnets to the on-premises data center.

D.

Implement an AWS Site-to-Site VPN connection between the VPC and the on-premises data center. Use the VPN connection to make API calls from the private subnets to the on-premises data center.

Question 81

A media company has an ecommerce website to sell music. Each music file is stored as an MP3 file. Premium users of the website purchase music files and download the files. The company wants to store music files on AWS. The company wants to provide access only to the premium users. The company wants to use the same URL for all premium users.

Which solution will meet these requirements?

Options:

A.

Store the MP3 files on a set of Amazon EC2 instances that have Amazon Elastic Block Store (Amazon EBS) volumes attached. Manage access to the files by creating an IAM user and an IAM policy for each premium user.

B.

Store all the MP3 files in an Amazon S3 bucket. Create a presigned URL for each MP3 file. Share the presigned URLs with the premium users.

C.

Store all the MP3 files in an Amazon S3 bucket. Create an Amazon CloudFront distribution that uses the S3 bucket as the origin. Generate CloudFront signed cookies for the music files. Share the signed cookies with the premium users.

D.

Store all the MP3 files in an Amazon S3 bucket. Create an Amazon CloudFront distribution that uses the S3 bucket as the origin. Use a CloudFront signed URL for each music file. Share the signed URLs with the premium users.
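
For context on options C and D, CloudFront signed URLs and signed cookies both gate access with a signature generated from a CloudFront key pair. A minimal signed-URL sketch using botocore's CloudFrontSigner; the key pair ID, private key path, distribution domain, object name, and expiry date are all placeholders:

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key of the CloudFront key pair (hypothetical path).
    with open("private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("KXXXXXXXXXXXXX", rsa_signer)  # hypothetical key ID

# Generate a time-limited URL for one music file.
url = signer.generate_presigned_url(
    "https://dxxxxxxxxxxxxx.cloudfront.net/song.mp3",
    date_less_than=datetime.datetime(2030, 1, 1),  # placeholder expiry
)
print(url)
```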

Question 82

A company wants to design a microservices architecture for an application. Each microservice must perform operations that can be completed within 30 seconds.

The microservices need to expose RESTful APIs and must automatically scale in response to varying loads. The APIs must also provide client access control and rate limiting to maintain equitable usage and service availability.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 to host each microservice. Use Amazon API Gateway to manage the RESTful API requests.

B.

Deploy each microservice as a set of AWS Lambda functions. Use Amazon API Gateway to manage the RESTful API requests.

C.

Host each microservice on Amazon EC2 instances in Auto Scaling groups behind an Elastic Load Balancing (ELB) load balancer. Use the ELB to manage the RESTful API requests.

D.

Deploy each microservice on AWS Elastic Beanstalk. Use Amazon CloudFront to manage the RESTful API requests.

Question 83

A company recently launched a new application for its customers. The application runs on multiple Amazon EC2 instances across two Availability Zones. End users use TCP to communicate with the application.

The application must be highly available and must automatically scale as the number of users increases.

Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

Options:

A.

Add a Network Load Balancer in front of the EC2 instances.

B.

Configure an Auto Scaling group for the EC2 instances.

C.

Add an Application Load Balancer in front of the EC2 instances.

D.

Manually add more EC2 instances for the application.

E.

Add a Gateway Load Balancer in front of the EC2 instances.

Question 84

A company is building new learning management applications on AWS. The company is using Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 to host the applications. The company must ensure that container images are secure. Company administrators must receive notifications of any security vulnerabilities in the images.

Which combination of solutions will meet these requirements? (Select TWO.)

Options:

A.

Modify the ECS cluster properties to use privileged mode. Enable host-based logging.

B.

Use the AWS Config conformance pack for Amazon ECS. Use AWS Config to notify administrators if any security vulnerabilities are detected.

C.

Configure AWS WAF to invoke an Amazon CloudWatch alarm when a new security vulnerability is detected.

D.

Use Amazon Inspector to scan container images in Amazon Elastic Container Registry (Amazon ECR).

E.

Use AWS Systems Manager Parameter Store to encrypt container images.

Question 85

A company runs a containerized application on a Kubernetes cluster in an on-premises data center. The company is using a MongoDB database for data storage. The company wants to migrate some of these environments to AWS, but no code changes or deployment method changes are possible at this time. The company needs a solution that minimizes operational overhead.

Which solution will meet these requirements?

Options:

A.

Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage.

B.

Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage.

C.

Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data storage.

D.

Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.

Question 86

A company is redesigning its data intake process. In the existing process, the company receives data transfers and uploads the data to an Amazon S3 bucket every night. The company uses AWS Glue crawlers and jobs to prepare the data for a machine learning (ML) workflow.

The company needs a low-code solution to run multiple AWS Glue jobs in sequence and provide a visual workflow.

Which solution will meet these requirements?

Options:

A.

Use an Amazon EC2 instance to run a cron job and a script to check for the S3 files and call the AWS Glue jobs. Create an Amazon CloudWatch dashboard to visualize the workflow.

B.

Use Amazon EventBridge to call an AWS Step Functions workflow for the AWS Glue jobs. Use Step Functions to create a visual workflow.

C.

Use S3 Event Notifications to invoke a series of AWS Lambda functions and AWS Glue jobs in sequence. Use Amazon QuickSight to create a visual workflow.

D.

Create an Amazon Elastic Container Service (Amazon ECS) task that contains a Python script that manages the AWS Glue jobs and creates a visual workflow. Use Amazon EventBridge Scheduler to start the ECS task.

Question 87

A logistics company is creating a data exchange platform to share shipment status information with shippers. The logistics company can see all shipment information and metadata. The company distributes shipment data updates to shippers.

Each shipper should see only shipment updates that are relevant to their company. Shippers should not see the full detail that is visible to the logistics company. The company creates an Amazon Simple Notification Service (Amazon SNS) topic for each shipper to share data. Some shippers use a mobile app to submit shipment status updates.

The company needs to create a data exchange platform that provides each shipper specific access to the data that is relevant to their company.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Ingest the shipment updates from the mobile app into Amazon Simple Queue Service (Amazon SQS). Publish the updates to the SNS topic. Apply a filter policy to rewrite the body of each message.

B.

Ingest the shipment updates from the mobile app into Amazon Simple Queue Service (Amazon SQS). Use an AWS Lambda function to consume the updates from Amazon SQS and rewrite the body of each message. Publish the updates to the SNS topic.

C.

Ingest the shipment updates from the mobile app into a second SNS topic. Publish the updates to the shipper SNS topic. Apply a filter policy to rewrite the body of each message.

D.

Ingest the shipment updates from the mobile app into Amazon Simple Queue Service (Amazon SQS). Filter and rewrite the messages in Amazon EventBridge Pipes. Publish the updates to the SNS topic.

Question 88

A company is developing an ecommerce application that will consist of a load-balanced front end, a container-based application, and a relational database. A solutions architect needs to create a highly available solution that operates with as little manual intervention as possible.

Which solutions meet these requirements? (Select TWO.)

Options:

A.

Create an Amazon RDS DB instance in Multi-AZ mode.

B.

Create an Amazon RDS DB instance and one or more replicas in another Availability Zone.

C.

Create an Amazon EC2 instance-based Docker cluster to handle the dynamic application load.

D.

Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.

E.

Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type to handle the dynamic application load.

Question 89

A company's HTTP application is behind a Network Load Balancer (NLB). The NLB's target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service.

The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application's availability without writing custom scripts or code.

What should a solutions architect do to meet these requirements?

Options:

A.

Enable HTTP health checks on the NLB, supplying the URL of the company's application.

B.

Add a cron job to the EC2 instances to check the local application's logs once each minute. If HTTP errors are detected, the application will restart.

C.

Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company's application. Configure an Auto Scaling action to replace unhealthy instances.

D.

Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to replace unhealthy instances when the alarm is in the ALARM state.

Question 90

A company has migrated several applications to AWS in the past 3 months. The company wants to know the breakdown of costs for each of these applications. The company wants to receive a regular report that includes this information.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Use AWS Budgets to download data for the past 3 months into a CSV file. Look up the desired information.

B.

Load AWS Cost and Usage Reports into an Amazon RDS DB instance. Run SQL queries to get the desired information.

C.

Tag all the AWS resources with a key for cost and a value of the application's name. Activate cost allocation tags. Use Cost Explorer to get the desired information.

D.

Tag all the AWS resources with a key for cost and a value of the application's name. Use the AWS Billing and Cost Management console to download bills for the past 3 months. Look up the desired information.

Question 91

A company stores a large volume of critical data in Amazon RDS for PostgreSQL tables. The company is developing several new features for an upcoming product launch. Some of the new features require many table alterations.

The company needs a solution to test the altered tables for several days. After testing, the solution must make the new features available to customers in production.

Which solution will meet these requirements with the HIGHEST availability?

Options:

A.

Create a new instance of the database in RDS for PostgreSQL to test the new features. When the testing is finished, take a backup of the test database, and restore the test database to the production database.

B.

Create new database tables in the production database to test the new features. When the testing is finished, copy the data from the older tables to the new tables. Delete the older tables, and rename the new tables accordingly.

C.

Create an Amazon RDS read replica to deploy a new instance of the database. Make updates to the database tables in the replica instance. When the testing is finished, promote the replica instance to become the new production instance.

D.

Use an Amazon RDS blue/green deployment to deploy a new test instance of the database. Make database table updates in the test instance. When the testing is finished, promote the test instance to become the new production instance.

Question 92

A company has a multi-tier web application. The application's internal service components are deployed on Amazon EC2 instances. The internal service components need to access third-party software as a service (SaaS) APIs that are hosted on AWS.

The company needs to provide secure and private connectivity from the application's internal services to the third-party SaaS application. The company needs to ensure that there is minimal public internet exposure.

Which solution will meet these requirements?

Options:

A.

Implement an AWS Site-to-Site VPN to establish a secure connection with the third-party SaaS provider.

B.

Deploy AWS Transit Gateway to manage and route traffic between the application's VPC and the third-party SaaS provider.

C.

Configure AWS PrivateLink to allow only outbound traffic from the VPC without enabling the third-party SaaS provider to establish a return path to the network.

D.

Use AWS PrivateLink to create a private connection between the application's VPC and the third-party SaaS provider.

Question 93

A company hosts an application in an Amazon EC2 Auto Scaling group. The company has observed that during periods of high demand, new instances take too long to join the Auto Scaling group and serve the increased demand. The company determines that the root cause of the issue is the long boot time of the instances in the Auto Scaling group. The company needs to reduce the time required to launch new instances to respond to demand.

Which solution will meet this requirement?

Options:

A.

Increase the maximum capacity of the Auto Scaling group by 50%.

B.

Create a warm pool for the Auto Scaling group. Use the default specification for the warm pool size.

C.

Increase the health check grace period for the Auto Scaling group by 50%.

D.

Create a scheduled scaling action. Set the desired capacity equal to the maximum capacity of the Auto Scaling group.
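
For context on option B, a warm pool keeps pre-initialized instances ready to join the group, which skips the long boot time at scale-out. A minimal boto3 sketch with a placeholder group name; omitting the size parameters uses the default warm pool specification:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep pre-initialized instances stopped and ready to join the group.
# With no size parameters, the warm pool uses its default specification.
autoscaling.put_warm_pool(
    AutoScalingGroupName="web-asg",  # hypothetical Auto Scaling group name
    PoolState="Stopped",
)
```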

Question 94

A solutions architect needs to connect a company's corporate network to its VPC to allow on-premises access to its AWS resources. The solution must provide encryption of all traffic between the corporate network and the VPC at the network layer and the session layer. The solution also must provide security controls to prevent unrestricted access between AWS and the on-premises systems.

Which solution meets these requirements?

Options:

A.

Configure AWS Direct Connect to connect to the VPC. Configure the VPC route tables to allow and deny traffic between AWS and on premises as required.

B.

Create an IAM policy to allow access to the AWS Management Console only from a defined set of corporate IP addresses. Restrict user access based on job responsibility by using IAM policies and roles.

C.

Configure AWS Site-to-Site VPN to connect to the VPC. Configure route table entries to direct traffic from on premises to the VPC. Configure instance security groups and network ACLs to allow only required traffic from on premises.

D.

Configure AWS Transit Gateway to connect to the VPC. Configure route table entries to direct traffic from on premises to the VPC. Configure instance security groups and network ACLs to allow only required traffic from on premises.

Question 95

A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.

Which solution will meet these requirements?

Options:

A.

Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.

B.

Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.

C.

Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.

D.

Create a new AWS Key Management Service (AWS KMS) key that uses the alias alias/aws/ebs. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
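
For context on option B, envelope encryption of Kubernetes secrets in etcd is enabled by attaching an encryption configuration that names a KMS key. A minimal boto3 sketch for an existing cluster; the cluster name and key ARN are placeholders:

```python
import boto3

eks = boto3.client("eks")

# Enable KMS envelope encryption for Kubernetes secrets stored in etcd.
eks.associate_encryption_config(
    clusterName="workloads",  # hypothetical cluster name
    encryptionConfig=[{
        "resources": ["secrets"],
        "provider": {
            "keyArn": "arn:aws:kms:us-east-1:111122223333:key/abcd-1234"
        },
    }],
)
```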

Question 96

How can trade data from DynamoDB be ingested into an S3 data lake for near real-time analysis?

Options:

A.

Use DynamoDB Streams to invoke a Lambda function that writes to S3.

B.

Use DynamoDB Streams to invoke a Lambda function that writes to Data Firehose, which writes to S3.

C.

Enable Kinesis Data Streams on DynamoDB. Configure it to invoke a Lambda function that writes to S3.

D.

Enable Kinesis Data Streams on DynamoDB. Use Data Firehose to write to S3.
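
To illustrate the stream-to-Firehose pattern in option B, a Lambda function subscribed to the DynamoDB stream can forward each change record to a Firehose delivery stream that writes to S3. A minimal handler sketch; the delivery stream name is hypothetical:

```python
import json

import boto3

firehose = boto3.client("firehose")

def handler(event, context):
    # Each stream record carries the change details in record["dynamodb"].
    records = [
        {"Data": (json.dumps(record["dynamodb"]) + "\n").encode("utf-8")}
        for record in event["Records"]
    ]
    if records:
        # Firehose buffers the batch and delivers it to the S3 data lake.
        firehose.put_record_batch(
            DeliveryStreamName="trades-to-s3",  # hypothetical stream name
            Records=records,
        )
```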

Question 97

An online gaming company hosts its platform on Amazon EC2 instances behind Network Load Balancers (NLBs) across multiple AWS Regions. The NLBs can route requests to targets overthe internet. The company wants to improve the customer playing experience by reducing end-to-end load time for its global customer base.

Which solution will meet these requirements?

Options:

A.

Create Application Load Balancers (ALBs) in each Region to replace the existing NLBs. Register the existing EC2 instances as targets for the ALBs in each Region.

B.

Configure Amazon Route 53 to route equally weighted traffic to the NLBs in each Region.

C.

Create additional NLBs and EC2 instances in other Regions where the company has large customer bases.

D.

Create a standard accelerator in AWS Global Accelerator. Configure the existing NLBs as target endpoints.

Question 98

A company stores a large dataset for an online advertising business in an Amazon RDS for MySQL DB instance. The company wants to run business reporting queries on the data without affecting write operations to the DB instance.

Which solution will meet these requirements?

Options:

A.

Deploy RDS read replicas to process the business reporting queries.

B.

Scale out the DB instance horizontally by placing the instance behind an Elastic Load Balancing (ELB) load balancer.

C.

Scale up the DB instance to a larger instance type to handle write operations and reporting queries.

D.

Configure Amazon CloudWatch to monitor the DB instance. Deploy standby DB instances when a latency metric threshold is exceeded.

Question 99

A company is using a loosely coupled serverless architecture on AWS. The architecture consists of multiple web applications and APIs distributed across multiple teams. The company uses AWS Control Tower to provision AWS accounts. The company's development teams use AWS CloudFormation.

The company wants to improve trace monitoring and gain insight into how individual services in application stacks are performing.

Which solution will meet these requirements?

Options:

A.

Enable AWS CloudTrail across all accounts by using AWS Control Tower.

B.

Enable AWS X-Ray across all accounts by using AWS Control Tower.

C.

Enable Amazon CloudWatch in the CloudFormation templates.

D.

Enable AWS X-Ray in the CloudFormation templates.

Question 100

A company is developing a new application that will run on Amazon EC2 instances. The application needs to access multiple AWS services.

The company needs to ensure that the application will not use long-term access keys to access AWS services.

Which solution will meet these requirements?

Options:

A.

Create an IAM user. Assign the IAM user to the application. Create programmatic access keys for the IAM user. Embed the access keys in the application code.

B.

Create an IAM user that has programmatic access keys. Store the access keys in AWS Secrets Manager. Configure the application to retrieve the keys from Secrets Manager when the application runs.

C.

Create an IAM role that can access AWS Systems Manager Parameter Store. Associate the role with each EC2 instance profile. Create IAM access keys for the AWS services, and store the keys in Parameter Store. Configure the application to retrieve the keys from Parameter Store when the application runs.

D.

Create an IAM role that has permissions to access the required AWS services. Associate the IAM role with each EC2 instance profile.
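
For context on option D, an instance profile attaches the role to the EC2 instance, and SDK clients on the instance then obtain temporary credentials automatically. A minimal boto3 sketch; the profile, role, and instance identifiers are placeholders:

```python
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Wrap the role in an instance profile and attach it to the instance.
iam.create_instance_profile(InstanceProfileName="app-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-profile",
    RoleName="app-role",  # hypothetical role with the required permissions
)
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-profile"},
    InstanceId="i-0abc123def4567890",  # hypothetical instance
)

# On the instance itself, no keys are configured; boto3 resolves the role's
# temporary credentials from the instance metadata service automatically.
s3 = boto3.client("s3")
```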

Question 101

A company has Amazon EC2 instances in multiple AWS Regions. The instances all store and retrieve confidential data from the same Amazon S3 bucket. The company wants to improve the security of its current architecture.

The company wants to ensure that only the Amazon EC2 instances within its VPC can access the S3 bucket. The company must block all other access to the bucket.

Which solution will meet this requirement?

Options:

A.

Use IAM policies to restrict access to the S3 bucket.

B.

Use server-side encryption (SSE) to encrypt data in the S3 bucket at rest. Store the encryption key on the EC2 instances.

C.

Create a VPC endpoint for Amazon S3. Configure an S3 bucket policy to allow connections only from the endpoint.

D.

Use AWS Key Management Service (AWS KMS) with customer-managed keys to encrypt the data before sending the data to the S3 bucket.
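
For context on option C, a bucket policy can deny every request that does not arrive through the VPC endpoint. A minimal sketch; the bucket name and endpoint ID are placeholders:

```python
import json

import boto3

# Deny any access to the bucket that does not come through the VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-confidential-bucket",
            "arn:aws:s3:::example-confidential-bucket/*",
        ],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc123"}},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-confidential-bucket",  # hypothetical bucket
    Policy=json.dumps(policy),
)
```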

Question 102

A global company runs its workloads on AWS. The company's application uses Amazon S3 buckets across AWS Regions for sensitive data storage and analysis. The company stores millions of objects in multiple S3 buckets daily. The company wants to identify all S3 buckets that are not versioning-enabled.

Which solution will meet these requirements?

Options:

A.

Set up an AWS CloudTrail event that has a rule to identify all S3 buckets that are not versioning-enabled across Regions.

B.

Use Amazon S3 Storage Lens to identify all S3 buckets that are not versioning-enabled across Regions.

C.

Enable IAM Access Analyzer for S3 to identify all S3 buckets that are not versioning-enabled across Regions.

D.

Create an S3 Multi-Region Access Point to identify all S3 buckets that are not versioning-enabled across Regions.

Question 103

A company is developing a new application that uses a relational database to store user data and application configurations. The company expects the application to have steady user growth. The company expects the database usage to be variable and read-heavy, with occasional writes.

The company wants to cost-optimize the database solution. The company wants to use an AWS managed database solution that will provide the necessary performance.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Deploy the database on Amazon RDS. Use Provisioned IOPS SSD storage to ensure consistent performance for read and write operations.

B.

Deploy the database on Amazon Aurora Serverless to automatically scale the database capacity based on actual usage to accommodate the workload.

C.

Deploy the database on Amazon DynamoDB. Use on-demand capacity mode to automatically scale throughput to accommodate the workload.

D.

Deploy the database on Amazon RDS. Use magnetic storage and read replicas to accommodate the workload.

Question 104

A company runs a NetApp storage array in an on-premises data center. The company wants to migrate the storage array to Amazon FSx for NetApp ONTAP. The company has a mix of NFS and SMB file shares with complex directory structures and over 60 million small files. The company has 10 Gbps of network bandwidth available. The company wants to optimize migration efficiency for the file system.

Which solution will meet these requirements?

Options:

A.

Use AWS DataSync with a bandwidth throttle. Use the All tiering policy.

B.

Provision an AWS Storage Gateway Volume Gateway. Configure a zero-ETL integration with the FSx for NetApp ONTAP file system.

C.

Set up NetApp SnapMirror replication between the on-premises array and the FSx for ONTAP file system.

D.

Use AWS Snowball Edge to perform an offline migration.

Question 105

An ecommerce company runs its application on AWS. The application uses an Amazon Aurora PostgreSQL cluster in Multi-AZ mode for the underlying database. During a recent promotional campaign, the application experienced heavy read load and write load. Users experienced timeout issues when they attempted to access the application.

A solutions architect needs to make the application architecture more scalable and highly available.

Which solution will meet these requirements with the LEAST downtime?

Options:

A.

Create an Amazon EventBridge rule that has the Aurora cluster as a source. Create an AWS Lambda function to log the state change events of the Aurora cluster. Add the Lambda function as a target for the EventBridge rule. Add additional reader nodes to fail over to.

B.

Modify the Aurora cluster and activate the zero-downtime restart (ZDR) feature. Use Database Activity Streams on the cluster to track the cluster status.

C.

Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for the Aurora cluster.

D.

Create an Amazon ElastiCache for Redis cache. Replicate data from the Aurora cluster to Redis by using AWS Database Migration Service (AWS DMS) with a write-around approach.

Question 106

A company is planning to run an AI/ML workload on AWS. The company needs to train a model on a dataset that is in Amazon S3 Standard. A model training application requires multiple compute nodes and single-digit millisecond access to the data.

Which solution will meet these requirements in the MOST cost-effective way?

Options:

A.

Move the data to S3 Intelligent-Tiering. Point the model training application to S3 Intelligent-Tiering as the data source.

B.

Add partitions to the S3 bucket by adding random prefixes. Reconfigure the model training application to point to the new prefixes as the data source.

C.

Move the data to S3 Express One Zone. Point the model training application to S3 Express One Zone as the data source.

D.

Move the data to a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Point the model training application to the gp3 volume as the data source.

Question 107

A company needs a secure connection between its on-premises environment and AWS. This connection does not need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly.

What is the MOST cost-effective method to establish this type of connection?

Options:

A.

Implement a client VPN

B.

Implement AWS Direct Connect.

C.

Implement a bastion host on Amazon EC2.

D.

Implement an AWS Site-to-Site VPN connection.

Question 108

A company is designing a secure solution to grant access to its Amazon RDS for PostgreSQL database. Applications that run on Amazon EC2 instances must be able to securely authenticate to the database without storing long-term credentials.

Which solution will meet these requirements?

Options:

A.

Enable RDS IAM authentication and configure AWS Secrets Manager to store database credentials. Configure applications to retrieve credentials at runtime.

B.

Configure a custom IAM policy for the database that allows access from the EC2 instances' IP addresses. Configure applications to use a static password to authenticate to the database.

C.

Set up an IAM user for each application. Store the access key ID and secret access key in the EC2 instances' environment variables. Grant the IAM users permission to the database.

D.

Use IAM roles to assign permissions to the EC2 instances. Configure the applications to obtain a token from the RDS database to authenticate by using IAM authentication.
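
For context on option D, the application exchanges its IAM role credentials for a short-lived database authentication token instead of using a stored password. A minimal sketch, assuming the psycopg2 driver and placeholder endpoint, user, and database names:

```python
import boto3
import psycopg2  # assumed PostgreSQL driver, not part of the question

rds = boto3.client("rds")

# Generate a short-lived token using the instance's IAM role credentials.
token = rds.generate_db_auth_token(
    DBHostname="appdb.abc123.us-east-1.rds.amazonaws.com",  # placeholder
    Port=5432,
    DBUsername="app_user",  # database user granted IAM authentication
)

# Use the token as the password; TLS is required for IAM authentication.
conn = psycopg2.connect(
    host="appdb.abc123.us-east-1.rds.amazonaws.com",
    port=5432,
    user="app_user",
    password=token,
    dbname="app",
    sslmode="require",
)
```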

Question 109

A company wants to create an API to authorize users by using JSON Web Tokens (JWTs). The company needs to support dynamic access to multiple AWS services by using path-based routing.

Which solution will meet these requirements?

Options:

A.

Deploy an Application Load Balancer behind an Amazon API Gateway REST API. Configure IAM authorization.

B.

Deploy an Application Load Balancer behind an Amazon API Gateway HTTP API. Use Amazon Cognito for authorization.

C.

Deploy a Network Load Balancer behind an Amazon API Gateway REST API. Use an AWS Lambda function as a custom authorizer.

D.

Deploy a Network Load Balancer behind an Amazon API Gateway HTTP API. Use Amazon Cognito for authorization.

Question 110

A company is building a new web application that serves static and dynamic content from an API. Users will access the application from around the world. The company wants to minimize latency in the most cost-effective way.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Deploy the static content to an Amazon S3 bucket. Use an Amazon API Gateway HTTP API to serve the dynamic content. Create an Amazon CloudFront distribution that uses the S3 bucket and the HTTP API as origins. Enable caching for static content.

B.

Deploy the static content to an Amazon S3 bucket. Provide the bucket website endpoint to users. Use an Amazon API Gateway HTTP API with caching enabled to serve the dynamic content.

C.

Deploy the static content to an Amazon S3 bucket. Use two Amazon EC2 instances as web servers. Deploy an Application Load Balancer to distribute traffic. Create an Amazon CloudFront distribution in front of the S3 bucket to cache static content.

D.

Deploy the static content to an Amazon S3 bucket. Provide the bucket website endpoint to users. Create an Amazon CloudFront distribution in front of the S3 bucket to cache static content.

Question 111

A company is developing a social media application. The company anticipates rapid and unpredictable growth in users and data volume. The application needs to handle a continuous high volume of user requests. User requests include long-running processes that store large amounts of user-generated content and user profiles in a relational format. The processes must run in a specific order.

The company requires an architecture that can scale resources to meet demand spikes without downtime or performance degradation. The company must ensure that the components of the application can evolve independently without affecting other parts of the system.

Which combination of AWS services will meet these requirements?

Options:

A.

Deploy the application on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Use Amazon RDS as the database. Use Amazon Simple Queue Service (Amazon SQS) to decouple message processing between components.

B.

Deploy the application on Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Use Amazon RDS as the database. Use Amazon Simple Notification Service (Amazon SNS) to decouple message processing between components.

C.

Use Amazon DynamoDB as the database. Use AWS Lambda functions to implement the application. Configure Amazon DynamoDB Streams to invoke the Lambda functions. Use AWS Step Functions to manage workflows between services.

D.

Use an AWS Elastic Beanstalk environment with auto scaling to deploy the application. Use Amazon RDS as the database. Use Amazon Simple Notification Service (Amazon SNS) to decouple message processing between components.

Question 112

A company wants to migrate an application that uses a microservice architecture to AWS. The services currently run in Docker containers on premises. The application has an event-driven architecture that uses Apache Kafka. The company configured Kafka to use multiple queues to send and receive messages. Some messages must be processed by multiple services.

Which solution will meet these requirements with the LEAST management overhead?

Options:

A.

Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Deploy a Kafka cluster on EC2 instances to handle service-to-service communication.

B.

Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Create multiple Amazon Simple Queue Service (Amazon SQS) queues to handle service-to-service communication.

C.

Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. Deploy an Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster to handle service-to-service communication.

D.

Migrate the services to Amazon Elastic Container Service (Amazon ECS) with the Amazon EC2 launch type. Use Amazon EventBridge to handle service-to-service communication.

Question 113

A company is planning to migrate an on-premises online transaction processing (OLTP) database that uses MySQL to an AWS managed database management system. Several reporting and analytics applications use the on-premises database heavily on weekends and at the end of each month. The cloud-based solution must be able to handle read-heavy surges during weekends and at the end of each month.

Which solution will meet these requirements?

Options:

A.

Migrate the database to an Amazon Aurora MySQL cluster. Configure Aurora Auto Scaling to use replicas to handle surges.

B.

Migrate the database to an Amazon EC2 instance that runs MySQL. Use an EC2 instance type that has ephemeral storage. Attach Amazon EBS Provisioned IOPS SSD (io2) volumes to the instance.

C.

Migrate the database to an Amazon RDS for MySQL database. Configure the RDS for MySQL database for a Multi-AZ deployment, and set up auto scaling.

D.

Migrate from the database to Amazon Redshift. Use Amazon Redshift as the database for both OLTP and analytics applications.

Question 114

A company is using an AWS Lambda function in a VPC. The Lambda function needs to access dependencies that exceed the size of the Lambda layer quota. The data that the Lambda function retrieves must be encrypted in transit.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Store the dependencies in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system to the Lambda function. Retrieve the dependencies from the file system.

B.

Store the dependencies on an Amazon EC2 instance that has an instance store volume and web server software. Use HTTPS API calls to retrieve the dependencies each time the Lambda function runs.

C.

Store the dependencies on an Amazon EC2 instance that hosts an NFS file server. Read the files from the EC2 instance each time the Lambda function runs.

D.

Store the dependencies in two separate Lambda layers. Redesign the application to have two Lambda functions that use different Lambda layers.

Question 115

A company is launching a new application that will be hosted on Amazon EC2 instances. A solutions architect needs to design a solution that does not allow public IPv4 access that originates from the internet. However, the solution must allow the EC2 instances to make outbound IPv4 internet requests.

Which solution will meet these requirements?

Options:

A.

Deploy a NAT gateway in public subnets in both Availability Zones. Create and configure one route table for each private subnet.

B.

Deploy an internet gateway in public subnets in both Availability Zones. Create and configure a shared route table for the private subnets.

C.

Deploy a NAT gateway in public subnets in both Availability Zones. Create and configure a shared route table for the private subnets.

D.

Deploy an egress-only internet gateway in public subnets in both Availability Zones. Create and configure one route table for each private subnet.
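
As a reference for the NAT gateway pattern, the private subnets' default route points at a NAT gateway that lives in a public subnet, which allows outbound IPv4 traffic while leaving no inbound path from the internet. A minimal boto3 sketch with hypothetical resource IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical IDs for illustration only.
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0123456789abcdef0",        # a public subnet
        AllocationId="eipalloc-0123456789abcdef0",  # Elastic IP for the NAT gateway
    )

    # Default route in a private subnet's route table sends outbound
    # IPv4 traffic through the NAT gateway.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat["NatGateway"]["NatGatewayId"],
    )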

Question 116

A solutions architect is designing an application that helps users fill out and submit registration forms. The solutions architect plans to use a two-tier architecture that includes a web application server tier and a worker tier.

The application needs to process submitted forms quickly. The application needs to process each form exactly once. The solution must ensure that no data is lost.

Which solution will meet these requirements?

Options:

A.

Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue between the web application server tier and the worker tier to store and forward form data.

B.

Use an Amazon API Gateway HTTP API between the web application server tier and the worker tier to store and forward form data.

C.

Use an Amazon Simple Queue Service (Amazon SQS) standard queue between the web application server tier and the worker tier to store and forward form data.

D.

Use an AWS Step Functions workflow. Create a synchronous workflow between the web application server tier and the worker tier that stores and forwards form data.
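
For context on the FIFO option: a FIFO queue provides exactly-once processing semantics within its deduplication window and preserves ordering per message group. A minimal sketch, assuming hypothetical queue and form names:

    import boto3, json

    sqs = boto3.client("sqs")

    # FIFO queue names must end in ".fifo".
    queue = sqs.create_queue(
        QueueName="registration-forms.fifo",
        Attributes={
            "FifoQueue": "true",
            # Deduplicate identical bodies submitted within the 5-minute window
            "ContentBasedDeduplication": "true",
        },
    )

    sqs.send_message(
        QueueUrl=queue["QueueUrl"],
        MessageBody=json.dumps({"form_id": "f-123", "name": "Jane Doe"}),
        MessageGroupId="registrations",  # ordering scope
    )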

Question 117

A company has a single AWS account that contains resources belonging to several teams. The company needs to identify the costs associated with each team. The company wants to use a tag named CostCenter to identify resources that belong to each team.

Which combination of steps will meet these requirements? (Select TWO.)

Options:

A.

Tag all resources that belong to each team with the user-defined CostCenter tag.

B.

Create a tag for each team, and set the value to CostCenter.

C.

Activate the CostCenter tag to track cost allocation.

D.

Configure AWS Billing and Cost Management to send monthly invoices to the company through email messages.

E.

Set up consolidated billing in the existing AWS account.
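
For reference, applying a user-defined CostCenter tag to a resource looks like the sketch below (hypothetical instance ID and value). The tag must then also be activated as a cost allocation tag in the Billing and Cost Management console before it appears in cost reports.

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical instance ID; the same Key/Value pair would be applied
    # to every resource that belongs to the team.
    ec2.create_tags(
        Resources=["i-0123456789abcdef0"],
        Tags=[{"Key": "CostCenter", "Value": "team-alpha"}],
    )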

Question 118

An online education platform experiences lag and buffering during peak usage hours, when thousands of students access video lessons concurrently. A solutions architect needs to improve the performance of the education platform.

The platform needs to handle unpredictable traffic surges without losing responsiveness. The platform must provide smooth video playback performance at all times. The platform must create multiple copies of each video lesson and store the copies in various bitrates to serve users who have different internet speeds. The smallest video size is 7 GB.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Use Amazon ElastiCache to cache videos in all the required bitrates. Use AWS Lambda functions to process the videos and to convert the videos to the required bitrates.

B.

Create an Auto Scaling group that includes Amazon EC2 instances that are sized to meet peak loads. Use the Auto Scaling group to serve videos. Use the Auto Scaling group to convert the videos to the required bitrates.

C.

Store a copy of every video in every required bitrate in an Amazon S3 bucket. Use a single Amazon EC2 instance to serve the videos.

D.

Use Amazon Kinesis Video Streams to store and serve the videos. Use AWS Lambda functions to process the videos and to convert the videos to the required bitrates.

Question 119

The DNS provider that hosts a company's domain name records is experiencing outages that cause service disruption for a website running on AWS. The company needs to migrate to a more resilient managed DNS service and wants the service to run on AWS.

What should a solutions architect do to rapidly migrate the DNS hosting service?

Options:

A.

Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file containing the domain records hosted by the previous provider.

B.

Create an Amazon Route 53 private hosted zone for the domain name. Import the zone file containing the domain records hosted by the previous provider.

C.

Create a Simple AD directory in AWS. Enable zone transfer between the DNS provider and AWS Directory Service for Microsoft Active Directory for the domain records.

D.

Create an Amazon Route 53 Resolver inbound endpoint in the VPC. Specify the IP addresses that the provider's DNS will forward DNS queries to. Configure the provider's DNS to forward DNS queries for the domain to the IP addresses that are specified in the inbound endpoint.

Question 120

A company uses an Amazon EC2 instance to run a script to poll for and process messages in an Amazon Simple Queue Service (Amazon SQS) queue. The company wants to reduce operational overhead while maintaining its ability to process an increasing number of messages that are added to the queue.

Which solution will meet these requirements?

Options:

A.

Increase the size of the EC2 instance to process messages in the SQS queue faster.

B.

Configure an Amazon EventBridge rule to turn off the EC2 instance when the SQS queue is empty.

C.

Migrate the script on the EC2 instance to an AWS Lambda function with an event source of the SQS queue.

D.

Configure an AWS Systems Manager Run Command to run the script on demand.
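
For context on the Lambda option: an event source mapping lets the Lambda service poll the queue on the function's behalf and scale invocations with queue depth, removing the polling script entirely. A minimal sketch with hypothetical names:

    import boto3

    lambda_client = boto3.client("lambda")

    # Lambda polls the queue and scales concurrency automatically
    # as more messages arrive.
    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-east-1:111122223333:work-queue",
        FunctionName="message-processor",  # hypothetical function
        BatchSize=10,                      # messages per invocation
    )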

Question 121

A company is planning to migrate customer records to an Amazon S3 bucket. The company needs to ensure that customer records are protected against unauthorized access and are encrypted in transit and at rest. The company must monitor all access to the S3 bucket.

Which solution will meet these requirements?

Options:

A.

Use AWS Key Management Service (AWS KMS) to encrypt customer records at rest. Create an S3 bucket policy that includes the aws:SecureTransport condition. Use an IAM policy to control access to the records. Use AWS CloudTrail to monitor access to the records.

B.

Use AWS Nitro Enclaves to encrypt customer records at rest. Use AWS Key Management Service (AWS KMS) to encrypt the records in transit. Use an IAM policy to control access to the records. Use AWS CloudTrail and AWS Security Hub to monitor access to the records.

C.

Use AWS Key Management Service (AWS KMS) to encrypt customer records at rest. Create an Amazon Cognito user pool to control access to the records. Use AWS CloudTrail to monitor access to the records. Use Amazon GuardDuty to detect threats.

D.

Use server-side encryption with Amazon S3 managed keys (SSE-S3) with default settings to encrypt the records at rest. Access the records by using an Amazon CloudFront distribution that uses the S3 bucket as the origin. Use IAM roles to control access to the records. Use Amazon CloudWatch to monitor access to the records.
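
For reference, the aws:SecureTransport condition mentioned in option A is typically expressed as a Deny statement, so any request arriving over plain HTTP is rejected. A minimal sketch with a hypothetical bucket name:

    import boto3, json

    s3 = boto3.client("s3")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::customer-records-bucket",
                    "arn:aws:s3:::customer-records-bucket/*",
                ],
                # Rejects any request that is not made over HTTPS
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }

    s3.put_bucket_policy(Bucket="customer-records-bucket", Policy=json.dumps(policy))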

Question 122

A company runs its production workload on Amazon EC2 instances with Amazon Elastic Block Store (Amazon EBS) volumes. A solutions architect needs to analyze the current EBS volume cost and to recommend optimizations. The recommendations need to include estimated monthly saving opportunities.

Which solution will meet these requirements?

Options:

A.

Use Amazon Inspector reporting to generate EBS volume recommendations for optimization.

B.

Use AWS Systems Manager reporting to determine EBS volume recommendations for optimization.

C.

Use Amazon CloudWatch metrics reporting to determine EBS volume recommendations for optimization.

D.

Use AWS Compute Optimizer to generate EBS volume recommendations for optimization.
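
For context, AWS Compute Optimizer exposes EBS volume findings through its API, including estimated monthly savings per recommendation option. A minimal sketch; the response fields are accessed defensively because their presence depends on the account's recommendation data:

    import boto3

    co = boto3.client("compute-optimizer")

    resp = co.get_ebs_volume_recommendations()
    for rec in resp.get("volumeRecommendations", []):
        best = rec.get("volumeRecommendationOptions", [{}])[0]
        savings = best.get("savingsOpportunity", {}).get("estimatedMonthlySavings", {})
        print(rec.get("volumeArn"), rec.get("finding"), savings.get("value"))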

Question 123

A company is planning to migrate a legacy application to AWS. The application currently uses NFS to communicate to an on-premises storage solution to store application data. The application cannot be modified to use any communication protocol other than NFS for this purpose.

Which storage solution should a solutions architect recommend for use after the migration?

Options:

A.

AWS DataSync

B.

Amazon Elastic Block Store (Amazon EBS)

C.

Amazon Elastic File System (Amazon EFS)

D.

Amazon EMR File System (Amazon EMRFS)

Question 124

A company is developing a latency-sensitive application. Part of the application includes several AWS Lambda functions that need to initialize as quickly as possible. The Lambda functions are written in Java and contain initialization code outside the handlers to load libraries, initialize classes, and generate unique IDs.

Which solution will meet the startup performance requirement MOST cost-effectively?

Options:

A.

Move all the initialization code to the handlers for each Lambda function. Activate Lambda SnapStart for each Lambda function. Configure SnapStart to reference the $LATEST version of each Lambda function.

B.

Publish a version of each Lambda function. Create an alias for each Lambda function. Configure each alias to point to its corresponding version. Set up a provisioned concurrency configuration for each Lambda function to point to the corresponding alias.

C.

Publish a version of each Lambda function. Set up a provisioned concurrency configuration for each Lambda function to point to the corresponding version. Activate Lambda SnapStart for the published versions of the Lambda functions.

D.

Update the Lambda functions to add a pre-snapshot hook. Move the code that generates unique IDs into the handlers. Publish a version of each Lambda function. Activate Lambda SnapStart for the published versions of the Lambda functions.
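
For reference, Lambda SnapStart applies only to published versions (not $LATEST), which is why several options pair it with publishing a version. A minimal sketch with a hypothetical Java function name:

    import boto3

    lambda_client = boto3.client("lambda")

    # SnapStart snapshots the initialized execution environment of
    # published versions; it is not applied to $LATEST.
    lambda_client.update_function_configuration(
        FunctionName="order-service",  # hypothetical Java function
        SnapStart={"ApplyOn": "PublishedVersions"},
    )

    # Publishing a version triggers snapshot creation for that version.
    version = lambda_client.publish_version(FunctionName="order-service")
    print("SnapStart-enabled version:", version["Version"])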

Question 125

A company has an on-premises application that uses SFTP to collect financial data from multiple vendors. The company is migrating to the AWS Cloud. The company has created an application that uses Amazon S3 APIs to upload files from vendors.

Some vendors run their systems on legacy applications that do not support S3 APIs. The vendors want to continue to use SFTP-based applications to upload data. The company wants to use managed services for the needs of the vendors that use legacy applications.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create an AWS Database Migration Service (AWS DMS) instance to replicate data from the storage of the vendors that use legacy applications to Amazon S3. Provide the vendors with the credentials to access the AWS DMS instance.

B.

Create an AWS Transfer Family endpoint for vendors that use legacy applications.

C.

Configure an Amazon EC2 instance to run an SFTP server. Instruct the vendors that use legacy applications to use the SFTP server to upload data.

D.

Configure an Amazon S3 File Gateway for vendors that use legacy applications to upload files to an SMB file share.
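
For context on the Transfer Family option: a managed SFTP endpoint can front Amazon S3 directly, so legacy vendors keep their SFTP clients while files land in the bucket. A minimal sketch with hypothetical names:

    import boto3

    transfer = boto3.client("transfer")

    server = transfer.create_server(
        Protocols=["SFTP"],
        Domain="S3",                             # store uploads in Amazon S3
        IdentityProviderType="SERVICE_MANAGED",  # users managed by Transfer Family
        EndpointType="PUBLIC",
    )

    # One SFTP user per vendor; Role is an IAM role that grants S3 access.
    transfer.create_user(
        ServerId=server["ServerId"],
        UserName="vendor-a",
        Role="arn:aws:iam::111122223333:role/transfer-vendor-access",  # hypothetical
        HomeDirectory="/vendor-uploads/vendor-a",
        SshPublicKeyBody="ssh-rsa AAAA...",  # vendor's public key (truncated)
    )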

Question 126

A company is migrating a new application from an on-premises data center to a new VPC in the AWS Cloud. The company has multiple AWS accounts and VPCs that share many subnets and applications.

The company wants to have fine-grained access control for the new application. The company wants to ensure that all network resources across accounts and VPCs that are granted permission to access the new application can access the application.

Which solution will meet these requirements?

Options:

A.

Set up a VPC peering connection for each VPC that needs access to the new application VPC. Update route tables in each VPC to enable connectivity.

B.

Deploy a transit gateway in the account that hosts the new application. Share the transit gateway with each account that needs to connect to the application. Update route tables in the VPC that hosts the new application and in the transit gateway to enable connectivity.

C.

Use an AWS PrivateLink endpoint service to make the new application accessible to other VPCs. Control access to the application by using an endpoint policy.

D.

Use an Application Load Balancer (ALB) to expose the new application to the internet. Configure authentication and authorization processes to ensure that only specified VPCs can access the application.

Question 127

A company has an application that runs on Amazon EC2 instances within a private subnet in a VPC. The instances access data in an Amazon S3 bucket in the same AWS Region. The VPC contains a NAT gateway in a public subnet to access the S3 bucket. The company wants to reduce costs by replacing the NAT gateway without compromising security or redundancy.

Which solution meets these requirements?

Options:

A.

Replace the NAT gateway with a NAT instance.

B.

Replace the NAT gateway with an internet gateway.

C.

Replace the NAT gateway with a gateway VPC endpoint.

D.

Replace the NAT gateway with an AWS Direct Connect connection.
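
For reference, a gateway VPC endpoint for Amazon S3 adds a route-table entry so that S3 traffic from private subnets stays on the AWS network, with no NAT gateway data processing charges. A minimal sketch with hypothetical IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # Gateway endpoints are attached to route tables rather than subnets.
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",  # Region-specific service name
        VpcEndpointType="Gateway",
        RouteTableIds=["rtb-0123456789abcdef0"],   # private subnet route table
    )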

Question 128

A company has a web application that stores user transactions in an Amazon DynamoDB table. To comply with regulations, the company must retain a copy of user transaction data for 7 years.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use DynamoDB point-in-time recovery to back up the table continuously.

B.

Use AWS Backup to create backup schedules and retention policies for the table.

C.

Create an on-demand backup of the table by using DynamoDB. Store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.

D.

Create an Amazon EventBridge rule to invoke an AWS Lambda function. Configure the Lambda function to back up the table and to store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.
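
For context on the AWS Backup option: a backup plan pairs a schedule with a lifecycle, so 7-year retention (roughly 2,557 days) is a declarative setting rather than custom code. A minimal sketch with hypothetical names:

    import boto3

    backup = boto3.client("backup")

    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "transactions-7yr",
            "Rules": [
                {
                    "RuleName": "daily",
                    "TargetBackupVaultName": "Default",
                    "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
                    "Lifecycle": {"DeleteAfterDays": 2557},     # ~7 years
                }
            ],
        }
    )

    # Assign the DynamoDB table to the plan; the role grants AWS Backup access.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "transactions-table",
            "IamRoleArn": "arn:aws:iam::111122223333:role/aws-backup-role",  # hypothetical
            "Resources": [
                "arn:aws:dynamodb:us-east-1:111122223333:table/UserTransactions"
            ],
        },
    )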

Question 129

A global media streaming company is migrating its user authentication and content delivery services to AWS. The company wants to use Amazon API Gateway for user authentication and authorization. The company needs a solution that restricts API access to AWS Regions in the United States and ensures minimal latency.

Which solution will meet these requirements?

Options:

A.

Create an API Gateway REST API. Configure an AWS WAF firewall in the same Region. Implement AWS WAF rules to deny requests that originate from Regions outside the United States. Associate the AWS WAF firewall with the API Gateway REST API.

B.

Create an API Gateway HTTP API. Configure an AWS WAF firewall in a different Region. Implement AWS WAF rules to deny requests that originate from Regions outside the United States. Associate the AWS WAF firewall with the API Gateway HTTP API.

C.

Create an API Gateway REST API. Configure an AWS WAF firewall in a different Region. Implement AWS WAF rules to deny requests that originate from Regions outside the United States. Associate the AWS WAF firewall with the API Gateway REST API.

D.

Create an API Gateway HTTP API. Configure an AWS WAF firewall in the same Region. Implement AWS WAF rules to deny requests that originate from Regions outside the United States. Associate the AWS WAF firewall with the API Gateway HTTP API.

Question 130

A company is building a serverless application that processes large volumes of data from a mobile app. The application uses an AWS Lambda function to process the data and store the data in an Amazon DynamoDB table.

The company needs to ensure that the application can recover from failures and continue processing data without losing any records.

Which solution will meet these requirements?

Options:

A.

Configure the Lambda function to use a dead-letter queue with an Amazon Simple Queue Service (Amazon SQS) queue. Configure Lambda to retry failed records from the dead-letter queue. Use a retry mechanism by implementing an exponential backoff algorithm.

B.

Configure the Lambda function to read records from Amazon Data Firehose. Replay the Firehose records in case of any failures.

C.

Use Amazon OpenSearch Service to store failed records. Configure AWS Lambda to retry failed records from OpenSearch Service. Use Amazon EventBridge to orchestrate the retry logic.

D.

Use Amazon Simple Notification Service (Amazon SNS) to store the failed records. Configure Lambda to retry failed records from the SNS topic. Use Amazon API Gateway to orchestrate the retry calls.

Question 131

A company wants to implement a data lake in the AWS Cloud. The company must ensure that only specific teams have access to sensitive data in the data lake. The company must have row-level access control for the data lake.

Which solution will meet these requirements?

Options:

A.

Use Amazon RDS to store the data. Use IAM roles and permissions for data governance and access control.

B.

Use Amazon Redshift to store the data. Use IAM roles and permissions for data governance and access control.

C.

Use Amazon S3 to store the data. Use AWS Lake Formation for data governance and access control.

D.

Use AWS Glue Catalog to store the data. Use AWS Glue DataBrew for data governance and access control.

Question 132

A company is using an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The company must ensure that Kubernetes service accounts in the EKS cluster have secure and granular access to specific AWS resources by using IAM roles for service accounts (IRSA).

Which combination of solutions will meet these requirements? (Select TWO.)

Options:

A.

Create an IAM policy that defines the required permissions. Attach the policy directly to the IAM role of the EKS nodes.

B.

Implement network policies within the EKS cluster to prevent Kubernetes service accounts from accessing specific AWS services.

C.

Modify the EKS cluster's IAM role to include permissions for each Kubernetes service account. Ensure a one-to-one mapping between IAM roles and Kubernetes roles.

D.

Define an IAM role that includes the necessary permissions. Annotate the Kubernetes service accounts with the Amazon Resource Name (ARN) of the IAM role.

E.

Set up a trust relationship between the IAM roles for the service accounts and an OpenID Connect (OIDC) identity provider.

Question 133

A company wants to migrate hundreds of gigabytes of unstructured data from an on-premises location to an Amazon S3 bucket. The company has a 100-Mbps internet connection on premises. The company needs to encrypt the data in transit to the S3 bucket. The company will store new data directly in Amazon S3.

Which solution will meet these requirements?

Options:

A.

Use AWS Database Migration Service (AWS DMS) to synchronize the on-premises data to a destination S3 bucket.

B.

Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket.

C.

Use an AWS Snowball Edge device to migrate the data to an S3 bucket. Use an AWS CloudHSM key to encrypt the data on the Snowball Edge device.

D.

Set up an AWS Direct Connect connection between the on-premises location and AWS. Use the s3 cp command to move the data directly to an S3 bucket.

Question 134

A company runs a production database on Amazon RDS for MySQL. The company wants to upgrade the database version for security compliance reasons. Because the database contains critical data, the company wants a quick solution to upgrade and test functionality without losing any data.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create an RDS manual snapshot. Upgrade to the new version of Amazon RDS for MySQL.

B.

Use native backup and restore. Restore the data to the upgraded new version of Amazon RDS for MySQL.

C.

Use AWS Database Migration Service (AWS DMS) to replicate the data to the upgraded new version of Amazon RDS for MySQL.

D.

Use Amazon RDS Blue/Green Deployments to deploy and test production changes.

Question 135

A solutions architect is creating a data processing job that runs once daily and can take up to 2 hours to complete. If the job is interrupted, it has to restart from the beginning.

How should the solutions architect address this issue in the MOST cost-effective manner?

Options:

A.

Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job.

B.

Create an AWS Lambda function triggered by an Amazon EventBridge scheduled event.

C.

Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge scheduled event.

D.

Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by an Amazon EventBridge scheduled event.

Question 136

A company has a web application that uses Amazon API Gateway to route HTTPS requests to AWS Lambda functions. The application uses an Amazon Aurora MySQL database for its data storage. The application has experienced unpredictable surges in traffic that overwhelm the database with too many connection requests. The company wants to implement a scalable solution that is more resilient to database failures.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Create an Amazon RDS proxy for the database. Replace the database endpoint with the proxy endpoint in the Lambda functions.

B.

Migrate the database to Amazon DynamoDB tables by using AWS Database Migration Service (AWS DMS).

C.

Review the existing connections. Call MySQL queries to end any connections in the sleep state.

D.

Increase the instance class of the database with more memory. Set a larger value for the max_connections parameter.

Question 137

A company hosts an application on AWS that stores files that users need to access. The application uses two Amazon EC2 instances. One instance is in Availability Zone A, and the second instance is in Availability Zone B. Both instances use Amazon Elastic Block Store (Amazon EBS) volumes. Users must be able to access the files at any time without delay.

Users report that the two instances occasionally contain different versions of the same file. Users occasionally receive HTTP 404 errors when they try to download files. The company must address the customer issues. The company cannot make changes to the application code.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Run the robocopy command on one of the EC2 instances on a schedule to copy files from the Availability Zone A instance to the Availability Zone B instance.

B.

Configure the application to store the files on both EBS volumes each time a user writes or updates a file.

C.

Mount an Amazon Elastic File System (Amazon EFS) file system to the EC2 instances. Copy the files from the EBS volumes to the EFS file system. Configure the application to store files in the EFS file system.

D.

Create an EC2 instance profile that allows the instance in Availability Zone A to access the S3 bucket. Re-associate the instance profile to the instance in Availability Zone B when needed.

Question 138

An internal product team is deploying a new application to a private VPC in a company's AWS account. The application runs on Amazon EC2 instances that are in a security group named App1. The EC2 instances store application data in an Amazon S3 bucket and use AWS Secrets Manager to store application service credentials. The company's security policy prohibits applications in a private VPC from using public IP addresses to communicate.

Which combination of solutions will meet these requirements? (Select TWO.)

Options:

A.

Configure gateway endpoints for Amazon S3 and AWS Secrets Manager.

B.

Configure interface VPC endpoints for Amazon S3 and AWS Secrets Manager.

C.

Add routes to the endpoints in the VPC route table.

D.

Associate the App1 security group with the interface VPC endpoints. Configure a self-referencing security group rule to allow inbound traffic.

E.

Associate the App1 security group with the gateway endpoints. Configure a self-referencing security group rule to allow inbound traffic.

Question 139

A company runs a custom application on Amazon EC2 On-Demand Instances. The application has frontend nodes that must run 24/7. The backend nodes only need to run for short periods depending on the workload.

Frontend nodes accept jobs and place them in queues. Backend nodes asynchronously process jobs from the queues, and jobs can be restarted. The company wants to scale infrastructure based on workload, using the most cost-effective option.

Which solution meets these requirements MOST cost-effectively?

Options:

A.

Use Reserved Instances for the frontend nodes. Use AWS Fargate for the backend nodes.

B.

Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.

C.

Use Spot Instances for the frontend nodes. Use Reserved Instances for the backend nodes.

D.

Use Spot Instances for the frontend nodes. Use AWS Fargate for the backend nodes.

Question 140

A company uses an Amazon S3 bucket as its data lake storage platform. The S3 bucket contains a massive amount of data that is accessed randomly by multiple teams and hundreds of applications. The company wants to reduce the S3 storage costs and provide immediate availability for frequently accessed objects.

What is the MOST operationally efficient solution that meets these requirements?

Options:

A.

Create an S3 Lifecycle rule to transition objects to the S3 Intelligent-Tiering storage class.

B.

Store objects in Amazon S3 Glacier. Use S3 Select to provide applications with access to the data.

C.

Use data from S3 storage class analysis to create S3 Lifecycle rules to automatically transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.

D.

Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an AWS Lambda function to transition objects to the S3 Standard storage class when they are accessed by an application.

Question 141

A company wants to provide a third-party system that runs in a private data center with access to its AWS account. The company wants to call AWS APIs directly from the third-party system. The company has an existing process for managing digital certificates. The company does not want to use SAML or OpenID Connect (OIDC) capabilities and does not want to store long-term AWS credentials.

Which solution will meet these requirements?

Options:

A.

Configure mutual TLS to allow authentication of the client and server sides of the communication channel.

B.

Configure AWS Signature Version 4 to authenticate incoming HTTPS requests to AWS APIs.

C.

Configure Kerberos to exchange tickets for assertions that can be validated by AWS APIs.

D.

Configure AWS Identity and Access Management (IAM) Roles Anywhere to exchange X.509 certificates for AWS credentials to interact with AWS APIs.

Question 142

A company is building a solution to provide customers with an API that accesses financial data. The API backend needs to compute tax data for each request. The company anticipates greater demand to access the data during the last 3 months of each year.

A solutions architect needs to design a scalable solution that can meet the regular demand and the peak demand at the end of each year.

Which solution will meet these requirements?

Options:

A.

Host the API on an Amazon EC2 instance that runs third-party software. Configure the EC2 instance to perform tax computations.

B.

Deploy an Amazon API Gateway REST API. Create an AWS Lambda function to perform tax computations. Integrate the Lambda function with the REST API.

C.

Create an Application Load Balancer (ALB) in front of two Amazon EC2 instances. Configure the EC2 instances to perform tax computations.

D.

Deploy an Amazon API Gateway REST API. Configure an Amazon EC2 instance to perform tax computations. Integrate the EC2 instance with the REST API.

Question 143

A company runs an online order management system on AWS. The company stores order and inventory data for the previous 5 years in an Amazon Aurora MySQL database. The company deletes inventory data after 5 years.

The company wants to optimize costs to archive data.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Create an AWS Glue crawler to export data to Amazon S3. Create an AWS Lambda function to compress the data.

B.

Use the SELECT INTO OUTFILE S3 query on the Aurora database to export the data to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.

C.

Create an AWS Glue DataBrew Job to migrate data from Aurora to Amazon S3. Configure S3 Lifecycle rules on the S3 bucket.

D.

Use the AWS Schema Conversion Tool (AWS SCT) to replicate data from Aurora to Amazon S3. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.

Question 144

A company runs game applications on AWS. The company needs to collect, visualize, and analyze telemetry data from the company's game servers. The company wants to gain insights into the behavior, performance, and health of game servers in near real time.

Which solution will meet these requirements?

Options:

A.

Use Amazon Kinesis Data Streams to collect telemetry data. Use Amazon Managed Service for Apache Flink to process the data in near real time and publish custom metrics to Amazon CloudWatch. Use Amazon CloudWatch to create dashboards and alarms from the custom metrics.

B.

Use Amazon Data Firehose to collect, process, and store telemetry data in near real time. Use AWS Glue to extract, transform, and load (ETL) data from Firehose into required formats for analysis. Use Amazon QuickSight to visualize and analyze the data.

C.

Use Amazon Kinesis Data Streams to collect, process, and store telemetry data. Use Amazon EMR to process the data in near real time into required formats for analysis. Use Amazon Athena to analyze and visualize the data.

D.

Use Amazon DynamoDB Streams to collect and store telemetry data. Configure DynamoDB Streams to invoke AWS Lambda functions to process the data in near real time. Use Amazon Managed Grafana to visualize and analyze the data.
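
For context on the streaming options in this question: game servers typically publish telemetry records to a stream with a partition key, and downstream processors consume them in near real time. A minimal producer sketch with hypothetical names:

    import boto3, json

    kinesis = boto3.client("kinesis")

    # Hypothetical stream and payload; the partition key groups records
    # from the same game server onto the same shard, preserving order.
    kinesis.put_record(
        StreamName="game-telemetry",
        Data=json.dumps({"server_id": "srv-42", "cpu": 71.5, "players": 128}),
        PartitionKey="srv-42",
    )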

Question 145

A company is creating a web application that will store a large number of images in Amazon S3. The images will be accessed by users over variable periods of time. The company wants to:

Retain all the images.

Incur no cost for retrieval.

Have minimal management overhead.

Have the images available with no impact on retrieval time.

Which solution meets these requirements?

Options:

A.

Implement S3 Intelligent-Tiering.

B.

Implement S3 storage class analysis.

C.

Implement an S3 Lifecycle policy to move data to S3 Standard-Infrequent Access (S3 Standard-IA).

D.

Implement an S3 Lifecycle policy to move data to S3 One Zone-Infrequent Access (S3 One Zone-IA).

Question 146

A company plans to store sensitive user data on Amazon S3. Internal security compliance requirements mandate encryption of data before sending it to Amazon S3.

What should a solutions architect recommend to satisfy these requirements?

Options:

A.

Server-side encryption with customer-provided encryption keys

B.

Client-side encryption with Amazon S3 managed encryption keys

C.

Server-side encryption with keys stored in AWS Key Management Service (AWS KMS)

D.

Client-side encryption with a key stored in AWS Key Management Service (AWS KMS)

Question 147

An international company needs to share data from an Amazon S3 bucket to employees who are located around the world. The company needs a secure solution to provide employees with access to the S3 bucket. The employees are already enrolled in AWS IAM Identity Center.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create a help desk application to generate an Amazon S3 presigned URL for each employee. Configure the presigned URLs to have short expirations. Instruct employees to contact the company help desk to receive a presigned URL to access the S3 bucket.

B.

Create a group for Amazon S3 access in IAM Identity Center. Add the employees who require access to the S3 bucket to the group. Create an IAM policy to allow Amazon S3 access from the group. Instruct employees to use the AWS access portal to access the AWS Management Console and navigate to the S3 bucket.

C.

Create an Amazon S3 File Gateway. Create one share for data uploads and a second share for data downloads. Set up an SFTP service on an Amazon EC2 instance. Mount the shares to the EC2 instance. Instruct employees to use the SFTP server.

D.

Configure AWS Transfer Family SFTP endpoints. Select the custom identity provider option. Use AWS Secrets Manager to manage the user credentials. Instruct employees to use Transfer Family SFTP.

Question 148

A solutions architect has an application container, an AWS Lambda function, and an Amazon Simple Queue Service (Amazon SQS) queue. The Lambda function uses the SQS queue as an event source. The Lambda function makes a call to a third-party machine learning (ML) API when the function is invoked. The response from the third-party API can take up to 60 seconds to return.

The Lambda function's timeout value is currently 65 seconds. The solutions architect has noticed that the Lambda function sometimes processes duplicate messages from the SQS queue.

What should the solutions architect do to ensure that the Lambda function does not process duplicate messages?

Options:

A.

Configure the Lambda function with a larger amount of memory.

B.

Configure an increase in the Lambda function's timeout value.

C.

Configure the SQS queue's delivery delay value to be greater than the maximum time it takes to call the third-party API.

D.

Configure the SQS queue's visibility timeout value to be greater than the maximum time it takes to call the third-party API.
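
For reference, the visibility timeout determines how long a received message stays hidden from other consumers. If it is shorter than the processing time (here up to 60 seconds for the API call plus overhead), the message reappears and is processed twice. A minimal sketch that raises it above the worst-case duration, using a hypothetical queue URL:

    import boto3

    sqs = boto3.client("sqs")

    # Setting the visibility timeout above the worst-case processing time
    # prevents a message from becoming visible again while it is still
    # being handled by an in-flight invocation.
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/ml-jobs",
        Attributes={"VisibilityTimeout": "120"},
    )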

Question 149

A company wants to optimize costs for its AWS infrastructure. The company wants to receive notifications when actual costs or forecasted costs exceed a specified budget. The company does not want to develop a custom solution.

Which solution will meet these requirements?

Options:

A.

Use AWS Trusted Advisor to set up budget notifications. Configure Amazon CloudWatch to monitor costs. Export CloudWatch data to Amazon S3. Use machine learning (ML) to estimate future trends based on the CloudWatch data.

B.

Create a budget in AWS Budgets that has a specified cost threshold. Create an AWS Lambda function that sends a notification to the company when costs reach the specified threshold. Use AWS Billing and Cost Management reports to monitor costs.

C.

Use AWS Cost Explorer to set a specified budget threshold. Create an AWS Lambda function to calculate cost estimates. Configure the Lambda function to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic if estimated costs exceed the specified threshold.

D.

Create a budget in AWS Budgets that has a specified cost threshold. Configure AWS Budgets to send budget alerts to an Amazon Simple Notification Service (Amazon SNS) topic. Use AWS Cost Explorer to monitor costs.
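
For context, AWS Budgets natively supports both ACTUAL and FORECASTED notification types, which is what makes it a no-code fit for this kind of requirement. A minimal sketch with hypothetical values; the SNS topic's access policy must allow budgets.amazonaws.com to publish:

    import boto3

    budgets = boto3.client("budgets")

    budgets.create_budget(
        AccountId="111122223333",  # hypothetical account
        Budget={
            "BudgetName": "monthly-cost-budget",
            "BudgetType": "COST",
            "TimeUnit": "MONTHLY",
            "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        },
        NotificationsWithSubscribers=[
            {
                # Alert when forecasted spend exceeds 100% of the budget
                "Notification": {
                    "NotificationType": "FORECASTED",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 100.0,
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "SNS",
                     "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts"}
                ],
            }
        ],
    )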

Question 150

A company wants to publish a private website for its on-premises employees. The website consists of several HTML pages and image files. The website must be available only through HTTPS and must be available only to on-premises employees. A solutions architect plans to store the website files in an Amazon S3 bucket.

Which solution will meet these requirements?

Options:

A.

Create an S3 bucket policy to deny access when the source IP address is not the public IP address of the on-premises environment Set up an Amazon Route 53 alias record to point to the S3 bucket. Provide the alias record to the on-premises employees to grant the employees access to the website.

B.

Create an S3 access point to provide website access. Attach an access point policy to deny access when the source IP address is not the public IP address of the on-premises environment. Provide the S3 access point alias to the on-premises employees to grant the employees access to the website.

C.

Create an Amazon CloudFront distribution that includes an origin access control (OAC) that is configured for the S3 bucket. Use AWS Certificate Manager for SSL. Use AWS WAF with an IP set rule that allows access for the on-premises IP address. Set up an Amazon Route 53 alias record to point to the CloudFront distribution.

D.

Create an Amazon CloudFront distribution that includes an origin access control (OAC) that is configured for the S3 bucket. Create a CloudFront signed URL for the objects in the bucket. Set up an Amazon Route 53 alias record to point to the CloudFront distribution. Provide the signed URL to the on-premises employees to grant the employees access to the website.

Question 151

A telemarketing company is designing its customer call center functionality on AWS. The company needs a solution that provides multiple speaker recognition and generates transcript files. The company wants to query the transcript files to analyze the business patterns.

Which solution will meet these requirements?

Options:

A.

Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use machine learning (ML) models to analyze the transcript files.

B.

Use Amazon Transcribe for multiple speaker recognition. Use Amazon Athena to analyze the transcript files.

C.

Use Amazon Translate for multiple speaker recognition. Store the transcript files in Amazon Redshift. Use SQL queries to analyze the transcript files.

D.

Use Amazon Rekognition for multiple speaker recognition. Store the transcript files in Amazon S3. Use Amazon Textract to analyze the transcript files.

Question 152

A company hosts its order processing system on AWS. The architecture consists of a frontend and a backend. The frontend includes an Application Load Balancer (ALB) and Amazon EC2 instances in an Auto Scaling group. The backend includes an EC2 instance and an Amazon RDS for MySQL database.

To prevent incomplete or lost orders, the company wants to ensure that order states are always preserved. The company wants to ensure that every order will eventually be processed, even after an outage or pause. Every order must be processed exactly once.

Which solution will meet these requirements?

Options:

A.

Create an Auto Scaling group and an ALB for the backend. Create a read replica for the RDS database in a second Availability Zone. Update the backend RDS endpoint.

B.

Create an Auto Scaling group and an ALB for the backend. Create an Amazon RDS proxy in front of the RDS database. Update the backend EC2 instance to use the Amazon RDS proxy endpoint.

C.

Create an Auto Scaling group for the backend. Configure the backend EC2 instances to con-sume messages from an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure a dead-letter queue (DLQ) for the SQS queue.

D.

Create an AWS Lambda function to replace the backend EC2 instance. Subscribe the func-tion to an Amazon Simple Notification Service (Amazon SNS) topic. Configure the frontend to send orders to the SNS topic.

Question 153

A company wants to protect resources that the company hosts on AWS, including Application Load Balancers and Amazon CloudFront distributions.

The company wants an AWS service that can provide near real-time visibility into attacks on the company's resources. The service must also have a dedicated AWS team to assist with DDoS attacks.

Which AWS service will meet these requirements?

Options:

A.

AWS WAF

B.

AWS Shield Standard

C.

Amazon Macie

D.

AWS Shield Advanced

Question 154

A company needs to grant a team of developers access to the company's AWS resources. The company must maintain a high level of security for the resources.

The company requires an access control solution that will prevent unauthorized access to the sensitive data.

Which solution will meet these requirements?

Options:

A.

Share the IAM user credentials for each development team member with the rest of the team to simplify access management and to streamline development workflows.

B.

Define IAM roles that have fine-grained permissions based on the principle of least privilege. Assign an IAM role to each developer.

C.

Create IAM access keys to grant programmatic access to AWS resources. Allow only developers to interact with AWS resources through API calls by using the access keys.

D.

Create an Amazon Cognito user pool. Grant developers access to AWS resources by using the user pool.

Question 155

A company is developing a rating system for its ecommerce web application. The company needs a solution to save ratings that users submit in an Amazon DynamoDB table.

The company wants to ensure that developers do not need to interact directly with the DynamoDB table. The solution must be scalable and reusable.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Create an Application Load Balancer (ALB). Create an AWS Lambda function, and register the function as a target in a target group for the ALB. Invoke the Lambda function by using the put_item method through the ALB.

B.

Create an AWS Lambda function. Configure the Lambda function to interact with the DynamoDB table by using the put_item method from Boto3. Invoke the Lambda function from the web application.

C.

Create an Amazon Simple Queue Service (Amazon SQS) queue and an AWS Lambda function that has an SQS trigger type. Instruct the developers to add customer ratings to the SQS queue as JSON messages. Configure the Lambda function to fetch the ratings from the queue and store the ratings in DynamoDB.

D.

Create an Amazon API Gateway REST API. Define a resource and create a new POST method. Choose AWS as the integration type, and select DynamoDB as the service. Set the action to PutItem.

Question 156

A company is deploying a new application to a VPC on existing Amazon EC2 instances. The application has a presentation tier that uses an Auto Scaling group of EC2 instances. The application also has a database tier that uses an Amazon RDS Multi-AZ database.

The VPC has two public subnets that are split between two Availability Zones. A solutions architect adds one private subnet to each Availability Zone for the RDS database. The solutions architect wants to restrict network access to the RDS database to block access from EC2 instances that do not host the new application.

Which solution will meet this requirement?

Options:

A.

Modify the RDS database security group to allow traffic from a CIDR range that includes IP addresses of the EC2 instances that host the new application.

B.

Associate a new ACL with the private subnets. Deny all incoming traffic from IP addresses that belong to any EC2 instance that does not host the new application.

C.

Modify the RDS database security group to allow traffic from the security group that is associated with the EC2 instances that host the new application.

D.

Associate a new ACL with the private subnets. Deny all incoming traffic except for traffic from a CIDR range that includes IP addresses of the EC2 instances that host the new application.
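
For reference on the security-group approach, a rule can reference another security group instead of a CIDR range, so membership in the application's group is what grants database access; CIDR-based rules, by contrast, admit any instance that happens to hold an address in the range. A minimal sketch with hypothetical group IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow MySQL traffic to the RDS security group only from instances
    # that are members of the application's security group.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # RDS database security group
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 3306,
                "ToPort": 3306,
                "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # app SG
            }
        ],
    )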

Question 157

An ecommerce company runs a multi-tier application on AWS. The frontend and backend tiers both run on Amazon EC2 instances. The database tier runs on an Amazon RDS for MySQL DB instance. The backend tier communicates with the RDS DB instance.

The application makes frequent calls to return identical datasets from the database. The frequent calls on the database cause performance slowdowns. A solutions architect must improve the performance of the application backend.

Which solution will meet this requirement?

Options:

A.

Configure an Amazon Simple Notification Service (Amazon SNS) topic between the EC2 instances and the RDS DB instance.

B.

Configure an Amazon ElastiCache (Redis OSS) cache. Configure the backend EC2 instances to read from the cache.

C.

Configure an Amazon DynamoDB Accelerator (DAX) cluster. Configure the backend EC2 instances to read from the cluster.

D.

Configure Amazon Data Firehose to stream the calls to the database.
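
For context on the caching option, the usual pattern is cache-aside: the backend checks the cache first and only queries MySQL on a miss, then stores the result with a TTL so repeated identical reads never reach the database. A minimal sketch using the redis client library, with a hypothetical query function standing in for the real data access layer:

    import json
    import redis

    # Hypothetical ElastiCache endpoint.
    cache = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

    def query_mysql(dataset_id):
        # Placeholder for the real database call.
        return {"dataset_id": dataset_id, "rows": []}

    def get_dataset(dataset_id, ttl_seconds=300):
        key = f"dataset:{dataset_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)       # cache hit: skip the database
        data = query_mysql(dataset_id)      # cache miss: query MySQL once
        cache.setex(key, ttl_seconds, json.dumps(data))
        return data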

Question 158

A company runs an application on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses Amazon Route 53 to route traffic to the ALB. The ALB is a resource in an AWS Shield Advanced protection group.

The company is preparing for a blue/green deployment in which traffic will shift to a new ALB. The company wants to protect against DDoS attacks during the deployment.

Which solution will meet this requirement?

Options:

A.

Add the new ALB to the Shield Advanced protection group. Select Sum as the aggregation type for the volume of traffic for the whole group.

B.

Add the new ALB to the Shield Advanced protection group. Select Mean as the aggregation type for the volume of traffic for the whole group.

C.

Create a new Shield Advanced protection group. Add the new ALB to the new protection group. Select Sum as the aggregation type for the volume of traffic.

D.

Set up an Amazon CloudFront distribution. Add the CloudFront distribution and the new ALB to the Shield Advanced protection group. Select Max as the aggregation type for the volume of traffic for the whole group.

Question 159

A company hosts a public application on AWS. The company uses an Application Load Balancer (ALB) to distribute application traffic to multiple Amazon EC2 instances that are hosted in private subnets.

The company wants to authenticate all the requests by using an on-premises Active Directory Federation Service (AD FS). The company uses AWS Direct Connect to connect its on-premises data center to AWS.

Which solution will meet this requirement?

Options:

A.

Configure an Amazon Cognito user pool. Integrate the user pool with the ALB for AD FS authentication.

B.

Configure an AWS Directory Service directory. Integrate the directory with the ALB for AD FS authentication.

C.

Replace the ALB with a Network Load Balancer (NLB). Use Amazon Connect Agent Workspace to integrate an agent workspace with the NLB.

D.

Configure an AWS Directory Service AD Connector. Integrate the AD Connector with the ALB for AD FS authentication.

Question 160

A company has a serverless web application that is comprised of AWS Lambda functions. The application experiences spikes in traffic that cause increased latency because of cold starts. The company wants to improve the application's ability to handle traffic spikes and to minimize latency. The solution must optimize costs during periods when traffic is low.

Which solution will meet these requirements?

Options:

A.

Configure provisioned concurrency for the Lambda functions. Use AWS Application Auto Scaling to adjust the provisioned concurrency.

B.

Launch Amazon EC2 instances in an Auto Scaling group. Add a scheduled scaling policy to launch additional EC2 instances during peak traffic periods.

C.

Configure provisioned concurrency for the Lambda functions. Set a fixed concurrency level to handle the maximum expected traffic.

D.

Create a recurring schedule in Amazon EventBridge Scheduler. Use the schedule to invoke the Lambda functions periodically to warm the functions.

Question 161

A company is building an application on AWS that connects to an Amazon RDS database. The company wants to manage the application configuration and to securely store and retrieve credentials for the database and other services.

Which solution will meet these requirements with the LEAST administrative overhead?

Options:

A.

Use AWS AppConfig to store and manage the application configuration. Use AWS Secrets Manager to store and retrieve the credentials.

B.

Use AWS Lambda to store and manage the application configuration. Use AWS Systems Manager Parameter Store to store and retrieve the credentials.

C.

Use an encrypted application configuration file Store the file in Amazon S3 for the application configuration. Create another S3 file to store and retrieve the credentials.

D.

Use AWS AppConfig to store and manage the application configuration. Use Amazon RDS to store and retrieve the credentials.

Question 162

A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed internet connection.

The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity.

Which solution meets these requirements?

Options:

A.

Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.

B.

Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.

C.

Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.

D.

Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.

Question 163

A company has an e-commerce site. The site is designed as a distributed web application hosted in multiple AWS accounts under one AWS Organizations organization. The web application is comprised of multiple microservices. All microservices expose their AWS services either through Amazon CloudFront distributions or public Application Load Balancers (ALBs). The company wants to protect public endpoints from malicious attacks and monitor security configurations.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use AWS WAF to protect the public endpoints. Use AWS Firewall Manager from a dedicated security account to manage rules in AWS WAF. Use AWS Config rules to monitor the Regional and global WAF configurations.

B.

Use AWS WAF to protect the public endpoints. Apply AWS WAF rules in each account. Use AWS Config rules and AWS Security Hub to monitor the WAF configurations of the ALBs and the CloudFront distributions.

C.

Use AWS WAF to protect the public endpoints. Use AWS Firewall Manager from a dedicated security account to manage the rules in AWS WAF. Use Amazon Inspector and AWS Security Hub to monitor the WAF configurations of the ALBs and the CloudFront distributions.

D.

Use AWS Shield Advanced to protect the public endpoints. Use AWS Config rules to monitor the Shield Advanced configuration for each account.

Question 164

A company hosts an end-user application on Amazon EC2 instances behind an Application Load Balancer (ALB). The company needs to configure end-to-end encryption between the ALB and the EC2 instances.

Which solution will meet this requirement with the LEAST operational effort?

Options:

A.

Deploy AWS CloudHSM. Import a third-party certificate into CloudHSM. Configure the EC2 instances and the ALB to use the CloudHSM imported certificate.

B.

Import a third-party certificate bundle into AWS Certificate Manager (ACM). Generate a self-signed certificate on the EC2 instances. Associate the ACM imported third-party certificate with the ALB.

C.

Import a third-party SSL certificate into AWS Certificate Manager (ACM). Install the third-party certificate on the EC2 instances. Associate the ACM imported third-party certificate with the ALB.

D.

Use Amazon-issued AWS Certificate Manager (ACM) certificates on the EC2 instances and the ALB.

Question 165

An insurance company is creating an application to record personal user data. The data includes users’ names, ages, and health data. The company wants to run the application in a private subnet on AWS.

Because of data security requirements, the company must have access to the operating system of the compute resources that run the application tier. The company must use a low-latency NoSQL database to store the data.

Which solution will meet these requirements?

Options:

A.

Use Amazon EC2 instances for the application tier. Use an Amazon DynamoDB table for the database tier. Create a VPC endpoint for DynamoDB. Assign the instances an instance profile that has permission to access DynamoDB.

B.

Use AWS Lambda functions for the application tier. Use an Amazon DynamoDB table for the database tier. Assign a Lambda function an appropriate IAM role to access the table.

C.

Use AWS Fargate for the application tier. Create an Amazon Aurora PostgreSQL instance inside a private subnet for the database tier.

D.

Use Amazon EC2 instances for the application tier. Use an Amazon S3 bucket to store the data in JSON format. Configure the application to use Amazon Athena to read and write the data to and from the S3 bucket.

Question 166

A company is building a critical data processing application that will run on Amazon EC2 instances. The company must not run any two nodes on the same underlying hardware. The company requires at least 99.99% availability for the application.

Which solution will meet these requirements?

Options:

A.

Deploy the application to one Availability Zone by using a cluster placement group strategy.

B.

Deploy the application to three Availability Zones by using a spread placement group strategy.

C.

Deploy the application to three Availability Zones by using a cluster placement group strategy.

D.

Deploy the application to one Availability Zone by using a partition placement group strategy.

Question 167

A company stores sensitive customer data in an Amazon DynamoDB table. The company frequently updates the data. The company wants to use the data to personalize offers for customers.

The company's analytics team has its own AWS account. The analytics team runs an application on Amazon EC2 instances that needs to process data from the DynamoDB tables. The company needs to follow security best practices to create a process to regularly share data from DynamoDB to the analytics team.

Which solution will meet these requirements?

Options:

A.

Export the required data from the DynamoDB table to an Amazon S3 bucket as multiple JSON files. Provide the analytics team with the necessary IAM permissions to access the S3 bucket.

B.

Allow public access to the DynamoDB table. Create an IAM user that has permission to access DynamoDB. Share the IAM user with the analytics team.

C.

Allow public access to the DynamoDB table. Create an IAM user that has read-only permission for DynamoDB. Share the IAM user with the analytics team.

D.

Create a cross-account IAM role. Create an IAM policy that allows the AWS account ID of the analytics team to access the DynamoDB table. Attach the IAM policy to the IAM role. Establish a trust relationship between accounts.
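
For reference, cross-account access of this kind works by the analytics account assuming a role in the data-owning account, so only short-lived credentials cross the account boundary. A minimal sketch from the analytics side, with hypothetical ARNs and table names:

    import boto3

    sts = boto3.client("sts")

    # Assume the cross-account role defined in the data-owning account.
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/analytics-dynamodb-read",  # hypothetical
        RoleSessionName="analytics-batch",
    )["Credentials"]

    # Temporary credentials scoped to the role's DynamoDB permissions.
    dynamodb = boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(dynamodb.scan(TableName="CustomerOffers", Limit=5))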

Question 168

A company tracks customer satisfaction by using surveys that the company hosts on its website. The surveys sometimes reach thousands of customers every hour. Survey results are currently sent in email messages to the company so company employees can manually review results and assess customer sentiment.

The company wants to automate the customer survey process. Survey results must be available for the previous 12 months.

Which solution will meet these requirements in the MOST scalable way?

Options:

A.

Send the survey results data to an Amazon API Gateway endpoint that is connected to an Amazon Simple Queue Service (Amazon SQS) queue. Create an AWS Lambda function to poll the SQS queue, call Amazon Comprehend for sentiment analysis, and save the results to an Amazon DynamoDB table. Set the TTL for all records to 365 days in the future.

B.

Send the survey results data to an API that is running on an Amazon EC2 instance. Configure the API to store the survey results as a new record in an Amazon DynamoDB table, call Amazon Comprehend for sentiment analysis, and save the results in a second DynamoDB table. Set the TTL for all records to 365 days in the future.

C.

Write the survey results data to an Amazon S3 bucket. Use S3 Event Notifications to invoke an AWS Lambda function to read the data and call Amazon Rekognition for sentiment analysis. Store the sentiment analysis results in a second S3 bucket. Use S3 Lifecycle policies on each bucket to expire objects after 365 days.

D.

Send the survey results data to an Amazon API Gateway endpoint that is connected to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the SQS queue to invoke an AWS Lambda function that calls Amazon Lex for sentiment analysis and saves the results to an Amazon DynamoDB table. Set the TTL for all records to 365 days in the future.

Question 169

A company runs an application on Amazon EC2 instances. The application needs to access an Amazon RDS database. The company wants to grant the EC2 instances access permissions to the RDS database while following the principle of least privilege.

Which solution will meet these requirements?

Options:

A.

Create an IAM user that has a policy that grants administrative permissions. Use the IAM user's access keys on the EC2 instances to access the RDS database.

B.

Create an IAM user that has a policy that grants the minimum required permissions to access the RDS database. Embed the IAM user's access keys on the EC2 instances to access the RDS database.

C.

Create an IAM role that has a policy that grants the minimum required permissions to access the RDS database. Attach the IAM role access key and the IAM role secret key to the EC2 instance profile.

D.

Create an IAM role that has a policy that grants the minimum required permissions to access the RDS database. Attach the IAM role to an EC2 instance profile. Associate the instance profile with the instances.
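
The role-plus-instance-profile mechanics in option D look roughly like the following boto3 sketch; the role name, policy ARN, and instance ID are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Role that EC2 can assume; no long-lived access keys are involved.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-rds-access",
                AssumeRolePolicyDocument=json.dumps(trust))

# Attach a least-privilege policy (hypothetical ARN) that allows only
# the specific database access the application needs.
iam.attach_role_policy(
    RoleName="app-rds-access",
    PolicyArn="arn:aws:iam::111122223333:policy/rds-connect-minimal",
)

# An instance profile is the container that binds the role to EC2.
iam.create_instance_profile(InstanceProfileName="app-rds-access")
iam.add_role_to_instance_profile(InstanceProfileName="app-rds-access",
                                 RoleName="app-rds-access")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-rds-access"},
    InstanceId="i-0123456789abcdef0",  # hypothetical
)
```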

Question 170

A company hosts its main public web application in one AWS Region across multiple Availability Zones. The application uses an Amazon EC2 Auto Scaling group and an Application Load Balancer (ALB).

A web development team needs a cost-optimized compute solution to improve the company's ability to serve dynamic content globally to millions of customers.

Which solution will meet these requirements?

Options:

A.

Create an Amazon CloudFront distribution. Configure the existing ALB as the origin.

B.

Use Amazon Route 53 to serve traffic to the ALB and EC2 instances based on the geographic location of each customer.

C.

Create an Amazon S3 bucket with public read access enabled. Migrate the web application to the S3 bucket. Configure the S3 bucket for website hosting.

D.

Use AWS Direct Connect to directly serve content from the web application to the location of each customer.
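
For reference, creating a CloudFront distribution with an existing ALB as a custom origin (option A) looks roughly like the sketch below. The ALB domain name is hypothetical, and the legacy ForwardedValues settings shown simply pass everything through so dynamic content is not cached by default.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),
    "Comment": "Global edge in front of the regional ALB",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "alb-origin",
        "DomainName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "https-only",
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "alb-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        # Dynamic content: forward query strings, cookies, and headers
        # to the origin rather than serving cached copies.
        "ForwardedValues": {
            "QueryString": True,
            "Cookies": {"Forward": "all"},
            "Headers": {"Quantity": 1, "Items": ["*"]},
        },
        "MinTTL": 0,
    },
})
```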

Question 171

A media company hosts its video processing workload on AWS. The workload uses Amazon EC2 instances in an Auto Scaling group to handle varying levels of demand. The workload stores the original videos and the processed videos in an Amazon S3 bucket.

The company wants to ensure that the video processing workload is scalable. The company wants to prevent failed processing attempts because of resource constraints. The architecture must be able to handle sudden spikes in video uploads without impacting the processing capability.

Which solution will meet these requirements with the LEAST overhead?

Options:

A.

Migrate the workload from Amazon EC2 instances to AWS Lambda functions. Configure an Amazon S3 event notification to invoke the Lambda functions when a new video is uploaded. Configure the Lambda functions to process videos directly and to save processed videos back to the S3 bucket.

B.

Migrate the workload from Amazon EC2 instances to AWS Lambda functions. Use Amazon S3 to invoke an Amazon Simple Notification Service (Amazon SNS) topic when a new video is uploaded. Subscribe the Lambda functions to the SNS topic. Configure the Lambda functions to process the videos asynchronously and to save processed videos back to the S3 bucket.

C.

Configure an Amazon S3 event notification to send a message to an Amazon Simple Queue Service (Amazon SQS) queue when a new video is uploaded. Configure the existing Auto Scaling group to poll the SQS queue, process the videos, and save processed videos back to the S3 bucket.

D.

Configure an Amazon S3 upload trigger to invoke an AWS Step Functions state machine when a new video is uploaded. Configure the state machine to orchestrate the video processing workflow by placing a job message in the Amazon SQS queue. Configure the job message to invoke the EC2 instances to process the videos. Save processed videos back to the S3 bucket.
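
A minimal worker loop for the decoupled design in option C might look like this on each Auto Scaling group instance; the queue URL and processing function are hypothetical placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/video-jobs"

def process_video(bucket: str, key: str) -> None:
    """Placeholder for the actual video processing work."""

while True:
    # Long polling avoids tight spin loops and empty responses.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        # S3 event notifications wrap the object details in "Records".
        for rec in body.get("Records", []):
            process_video(rec["s3"]["bucket"]["name"],
                          rec["s3"]["object"]["key"])
        # Delete only after successful processing so failed attempts
        # become visible again and are retried.
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])
```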

Question 172

A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure. The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data.

Which combination of storage and caching should the solutions architect use?

Options:

A.

Amazon S3 with Amazon CloudFront

B.

Amazon S3 Glacier Deep Archive with Amazon ElastiCache

C.

Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront

D.

AWS Storage Gateway with Amazon ElastiCache

Question 173

A company uses an AWS Transfer for SFTP public server endpoint and Amazon S3 storage to host large datasets for its customers. The company provides customers with SSH private keys to authenticate and download their datasets. The Transfer for SFTP server is configured with structured logging that is saved to an S3 bucket. The company wants to charge customers based on their monthly data download usage.

Which solution will meet these requirements?

Options:

A.

Configure VPC Flow Logs to write to a new S3 bucket. Run monthly queries on the flow logs to identify customer usage and calculate cost. Add the charges to the customers' monthly bills.

B.

Each month, use AWS Cost Explorer to examine the costs for Transfer for SFTP and obtain a breakdown by customer. Add the charges to the customers' monthly bills.

C.

Enable requester pays on the S3 bucket that hosts the datasets. Allocate the charges to each customer based on the customer's requests.

D.

Run Amazon Athena queries on the logging S3 bucket monthly to identify customer usage and calculate costs. Add the charges to the customers' monthly bills.
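
Option D's monthly usage report could be produced with a query like the following sketch. The database, table, and column names are hypothetical and assume the structured logs have already been cataloged (for example, by an AWS Glue crawler).

```python
import boto3

athena = boto3.client("athena")

# Sum bytes downloaded per user for the billing month; column names
# depend on how the structured logs were cataloged.
QUERY = """
SELECT user, SUM(CAST("bytes-out" AS bigint)) AS bytes_downloaded
FROM transfer_logs
WHERE month = '2024-06'
GROUP BY user
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "sftp_billing"},   # hypothetical
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
```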

Question 174

A company hosts an application on AWS that gives users the ability to download photos. The company stores all photos in an Amazon S3 bucket that is located in the us-east-1 Region. The company wants to provide the photo download application to global customers with low latency.

Which solution will meet these requirements?

Options:

A.

Find the public IP addresses that Amazon S3 uses in us-east-1. Configure an Amazon Route 53 latency-based routing policy that routes to all the public IP addresses.

B.

Configure an Amazon CloudFront distribution in front of the S3 bucket. Use the distribution endpoint to access the photos that are in the S3 bucket.

C.

Configure an Amazon Route 53 geoproximity routing policy to route the traffic to the S3 bucket that is closest to each customer's location.

D.

Create a new S3 bucket in the us-west-1 Region. Configure an S3 Cross-Region Replication rule to copy the photos to the new S3 bucket.

Question 175

A company is designing a serverless application to process a large number of events within an AWS account. The application saves the events to a data warehouse for further analysis. The application sends incoming events to an Amazon SQS queue. Traffic between the application and the SQS queue must not use public IP addresses.

Which solution will meet these requirements?

Options:

A.

Create a VPC endpoint for Amazon SQS. Set the queue policy to deny all access except from the VPC endpoint.

B.

Configure server-side encryption with SQS-managed keys (SSE-SQS).

C.

Configure AWS Security Token Service (AWS STS) to generate temporary credentials for resources that access the queue.

D.

Configure VPC Flow Logs to detect SQS traffic that leaves the VPC.
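
Option A combines two pieces: an interface VPC endpoint for SQS and a queue policy that denies any access not arriving through that endpoint. A sketch, with all IDs hypothetical:

```python
import json
import boto3

ec2 = boto3.client("ec2")
sqs = boto3.client("sqs")

# The interface endpoint keeps SQS traffic on private IPs inside the VPC.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc1234",                       # hypothetical
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0abc1234"],
    SecurityGroupIds=["sg-0abc1234"],
    PrivateDnsEnabled=True,
)["VpcEndpoint"]

queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/events"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "sqs:*",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:events",
        # Deny everything that did not arrive through the endpoint.
        "Condition": {"StringNotEquals": {
            "aws:sourceVpce": endpoint["VpcEndpointId"]}},
    }],
}
sqs.set_queue_attributes(QueueUrl=queue_url,
                         Attributes={"Policy": json.dumps(policy)})
```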

Question 176

A data science team needs storage for nightly log processing. The size and number of the logs are unknown, and the logs persist for only 24 hours.

What is the MOST cost-effective solution?

Options:

A.

Amazon S3 Glacier Deep Archive

B.

Amazon S3 Standard

C.

Amazon S3 Intelligent-Tiering

D.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

Question 177

A company manages an application that stores data on an Amazon RDS for PostgreSQL Multi-AZ DB instance. High traffic on the application is causing increased latency for many read queries.

A solutions architect must improve the performance of the application.

Which solution will meet this requirement?

Options:

A.

Enable Amazon RDS Performance Insights. Configure storage capacity to scale automatically.

B.

Configure the DB instance to use DynamoDB Accelerator (DAX).

C.

Create a read replica of the DB instance. Serve read traffic from the read replica.

D.

Use Amazon Data Firehose between the application and Amazon RDS to increase the concurrency of database requests.
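
Creating the read replica in option C is a single API call; the application then points read-only queries at the replica's endpoint while writes continue to go to the primary. Identifiers below are hypothetical.

```python
import boto3

rds = boto3.client("rds")

replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",   # hypothetical
    SourceDBInstanceIdentifier="app-db",       # the Multi-AZ primary
    DBInstanceClass="db.r6g.large",
)
# Once the replica is available, route SELECT traffic to its endpoint.
print(replica["DBInstance"]["DBInstanceIdentifier"])
```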

Question 178

A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability.

Which storage solution meets these requirements?

Options:

A.

Amazon S3 Standard

B.

Amazon S3 Intelligent-Tiering

C.

Amazon S3 Glacier Deep Archive

D.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

Question 179

A company runs an application on Amazon EC2 instances across multiple Availability Zones in the same AWS Region. The EC2 instances share an Amazon Elastic File System (Amazon EFS) volume that is mounted on all the instances. The EFS volume stores a variety of files such as installation media, third-party files, interface files, and other one-time files.

The company accesses some EFS files frequently and needs to retrieve the files quickly. The company accesses other files rarely. The EFS volume is multiple terabytes in size. The company needs to optimize storage costs for Amazon EFS.

Which solution will meet these requirements with the LEAST effort?

Options:

A.

Move the files to Amazon S3. Set up a lifecycle policy to move the files to S3 Glacier Flexible Retrieval.

B.

Apply a lifecycle policy to the EFS files to move the files to EFS Infrequent Access.

C.

Move the files to Amazon Elastic Block Store (Amazon EBS) Cold HDD Volumes (sc1).

D.

Move the files to Amazon S3. Set up a lifecycle policy to move the rarely used files to S3 Glacier Deep Archive.
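
The single-step fix in option B is one API call against the existing file system; the transition windows shown below are illustrative.

```python
import boto3

efs = boto3.client("efs")

efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # hypothetical
    LifecyclePolicies=[
        # Files untouched for 30 days move to the cheaper IA class...
        {"TransitionToIA": "AFTER_30_DAYS"},
        # ...and move back to Standard on their next access.
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```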

Question 180

A media publishing company is building an application that allows users to print custom books. The frontend runs in a Docker container. Incoming order volume is highly variable and can exceed the throughput of the physical printing machines. Order-processing payloads can be up to 4 MB.

The company needs a scalable solution for handling incoming orders.

Which solution will meet this requirement?

Options:

A.

Use Amazon SQS to queue incoming orders. Use Lambda@Edge to process orders. Deploy the frontend on Amazon EKS.

B.

Use Amazon SNS to queue incoming orders. Use a Lambda function to process orders. Deploy the frontend on AWS Fargate.

C.

Use Amazon SQS to queue incoming orders. Use a Lambda function to process orders. Deploy the frontend on Amazon ECS with the Fargate launch type.

D.

Use Amazon SNS to queue incoming orders. Use Lambda@Edge to process orders. Deploy the frontend on Amazon EC2 instances.

Question 181

A finance company has a web application that generates credit reports for customers. The company hosts the frontend of the web application on a fleet of Amazon EC2 instances that is associated with an Application Load Balancer (ALB). The application generates reports by running queries on an Amazon RDS for SQL Server database.

The company recently discovered that malicious traffic from around the world is abusing the application by submitting unnecessary requests. The malicious traffic is consuming significant compute resources. The company needs to address the malicious traffic.

Which solution will meet this requirement?

Options:

A.

Use AWS WAF to create a web ACL. Associate the web ACL with the ALB. Update the web ACL to block IP addresses that are associated with malicious traffic.

B.

Use AWS WAF to create a web ACL. Associate the web ACL with the ALB. Use the AWS WAF Bot Control managed rule feature.

C.

Set up AWS Shield to protect the ALB and the database.

D.

Use AWS WAF to create a web ACL. Associate the web ACL with the ALB. Configure the AWS WAF IP reputation rule.
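
Attaching the IP reputation managed rule group from option D to the ALB follows this pattern; the ACL name and ALB ARN are hypothetical.

```python
import boto3

wafv2 = boto3.client("wafv2")

acl = wafv2.create_web_acl(
    Name="credit-report-acl",          # hypothetical
    Scope="REGIONAL",                  # REGIONAL is required for ALBs
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "ip-reputation",
        "Priority": 0,
        # AWS-managed list of IP addresses observed in malicious activity.
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesAmazonIpReputationList",
        }},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "ip-reputation"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "credit-report-acl"},
)["Summary"]

wafv2.associate_web_acl(
    WebACLArn=acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "loadbalancer/app/web/0123456789abcdef",  # hypothetical
)
```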

Question 182

A company wants to send data from its on-premises systems to Amazon S3 buckets. The company created the S3 buckets in three different accounts. The company must send the data privately, without the data traveling across the public internet. The company has no existing dedicated connectivity to AWS.

Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

Options:

A.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a private VIF between the on-premises environment and the private VPC.

B.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a public VIF between the on-premises environment and the private VPC.

C.

Create an Amazon S3 interface endpoint in the networking account.

D.

Create an Amazon S3 gateway endpoint in the networking account.

E.

Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Peer VPCs from the accounts that host the S3 buckets with the VPC in the network account.

Question 183

A company runs a latency-sensitive gaming service in the AWS Cloud. The gaming service runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). An Amazon DynamoDB table stores the gaming data. All the infrastructure is in a single AWS Region. The main user base is in that same Region.

A solutions architect needs to update the architecture to support a global expansion of the gaming service. The gaming service must operate with the least possible latency.

Which solution will meet these requirements?

Options:

A.

Create an Amazon CloudFront distribution in front of the ALB.

B.

Deploy an Amazon API Gateway regional API endpoint. Integrate the API endpoint with the ALB.

C.

Create an accelerator in AWS Global Accelerator. Add a listener. Configure the endpoint to point to the ALB.

D.

Deploy the ALB and the fleet of EC2 instances to another Region. Use Amazon Route 53 with geolocation routing.
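
The three resources option C names map directly to three API calls, sketched below. Note that the Global Accelerator control plane is served only from us-west-2, regardless of where the ALB lives; the ALB ARN and home Region are hypothetical.

```python
import boto3

# Global Accelerator's control-plane API lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="gaming-accel",
                                    Enabled=True)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# The endpoint group points the accelerator's anycast IPs at the ALB.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",   # hypothetical home Region
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:"
                      "111122223333:loadbalancer/app/game/abc123",
        "Weight": 128,
    }],
)
```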

Question 184

A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using the NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing.

Which solution will meet these requirements?

Options:

A.

Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud.

B.

Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS Cloud.

C.

Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS.

D.

Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS Cloud, then perform analytics on this data in the cloud.

Question 185

A company is designing a new Amazon Elastic Kubernetes Service (Amazon EKS) deployment to host multi-tenant applications that use a single cluster. The company wants to ensure that each pod has its own hosted environment. The environments must not share CPU, memory, storage, or elastic network interfaces.

Which solution will meet these requirements?

Options:

A.

Use Amazon EC2 instances to host self-managed Kubernetes clusters. Use taints and tolerations to enforce isolation boundaries.

B.

Use Amazon EKS with AWS Fargate. Use Fargate to manage resources and to enforce isolation boundaries.

C.

Use Amazon EKS and self-managed node groups. Use taints and tolerations to enforce isolation boundaries.

D.

Use Amazon EKS and managed node groups. Use taints and tolerations to enforce isolation boundaries.

Question 186

A company has developed a non-production application that is composed of multiple microservices for each of the company's business units. A single development team maintains all the microservices.

The current architecture uses a static web frontend and a Java-based backend that contains the application logic. The architecture also uses a MySQL database that the company hosts on an Amazon EC2 instance.

The company needs to ensure that the application is secure and available globally.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon CloudFront and AWS Amplify to host the static web frontend. Refactor the microservices to use AWS Lambda functions that the microservices access by using Amazon API Gateway. Migrate the MySQL database to an Amazon EC2 Reserved Instance.

B.

Use Amazon CloudFront and Amazon S3 to host the static web frontend. Refactor the microservices to use AWS Lambda functions that the microservices access by using Amazon API Gateway. Migrate the MySQL database to Amazon RDS for MySQL.

C.

Use Amazon CloudFront and Amazon S3 to host the static web frontend. Refactor the microservices to use AWS Lambda functions that are in a target group behind a Network Load Balancer. Migrate the MySQL database to Amazon RDS for MySQL.

D.

Use Amazon S3 to host the static web frontend. Refactor the microservices to use AWS Lambda functions that are in a target group behind an Application Load Balancer. Migrate the MySQL database to an Amazon EC2 Reserved Instance.

Question 187

A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes.

Which networking solution meets these requirements?

Options:

A.

Place the EC2 instances in multiple VPCs, and configure VPC peering.

B.

Attach an Elastic Fabric Adapter (EFA) to each EC2 instance.

C.

Run the EC2 instances in a spread placement group.

D.

Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.

Question 188

A solutions architect is designing the architecture for a two-tier web application. The web application consists of an internet-facing Application Load Balancer (ALB) that forwards traffic to an Auto Scaling group of Amazon EC2 instances.

The EC2 instances must be able to access an Amazon RDS database. The company does not want to rely solely on security groups or network ACLs. Only the minimum resources that are necessary should be routable from the internet.

Which network design meets these requirements?

Options:

A.

Place the ALB, EC2 instances, and RDS database in private subnets.

B.

Place the ALB in public subnets. Place the EC2 instances and RDS database in private subnets.

C.

Place the ALB and EC2 instances in public subnets. Place the RDS database in private subnets.

D.

Place the ALB outside the VPC. Place the EC2 instances and RDS database in private subnets.

Question 189

A company has an Amazon S3 data lake that is governed by AWS Lake Formation. The company wants to create a visualization in Amazon QuickSight by joining the data in the data lake with operational data that is stored in an Amazon Aurora MySQL database. The company wants to enforce column-level authorization so that the company's marketing team can access only a subset of columns in the database.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon EMR to ingest the data directly from the database to the QuickSight SPICE engine. Include only the required columns.

B.

Use AWS Glue Studio to ingest the data from the database to the S3 data lake. Attach an IAM policy to the QuickSight users to enforce column-level access control. Use Amazon S3 as the data source in QuickSight.

C.

Use AWS Glue Elastic Views to create a materialized view for the database in Amazon S3. Create an S3 bucket policy to enforce column-level access control for the QuickSight users. Use Amazon S3 as the data source in QuickSight.

D.

Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.
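
The column-level grant in option D is expressed directly in the Lake Formation permissions API. The database, table, column, and principal names below are hypothetical.

```python
import boto3

lf = boto3.client("lakeformation")

# Grant the marketing role SELECT on only the approved columns.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier":
               "arn:aws:iam::111122223333:role/marketing-analysts"},
    Resource={"TableWithColumns": {
        "DatabaseName": "sales_lake",
        "Name": "customers",
        "ColumnNames": ["customer_id", "region", "segment"],
    }},
    Permissions=["SELECT"],
)
# Athena (the QuickSight data source) then returns only these columns
# to members of the marketing role.
```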

Question 190

A financial company is migrating its banking applications to a set of AWS accounts managed by AWS Organizations. The applications will store sensitive customer data on Amazon Elastic Block Store (Amazon EBS) volumes. The company will take regular snapshots for backup purposes.

The company wants to implement controls across all AWS accounts to prevent sharing EBS snapshots publicly.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Enable AWS Config rules for each organizational unit (OU) in Organizations to monitor EBS snapshot permissions.

B.

Enable block public access for EBS snapshots at the organization level.

C.

Create an IAM policy in the root account of the organization that prevents users from modifying snapshot permissions.

D.

Use AWS CloudTrail to track snapshot permission changes.
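
Block public access for EBS snapshots (option B) is applied per account and per Region through a single API call, sketched below; an organization-wide rollout repeats this across member accounts and Regions.

```python
import boto3

ec2 = boto3.client("ec2")

# 'block-all-sharing' blocks both new and existing public sharing of
# EBS snapshots in this account and Region.
resp = ec2.enable_snapshot_block_public_access(State="block-all-sharing")
print(resp["State"])
```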

Question 191

A company hosts a database that runs on an Amazon RDS instance deployed to multiple Availability Zones. A periodic script negatively affects a critical application by querying the database.

How can application performance be improved with minimal cost?

Options:

A.

Add functionality to the script to identify the instance with the fewest active connections and query that instance.

B.

Create a read replica of the database. Configure the script to query only the read replica.

C.

Instruct the development team to manually export new entries at the end of the day.

D.

Use Amazon ElastiCache to cache the common queries the script runs.

Question 192

A company needs a solution to automate email ingestion. The company needs to automatically parse email messages, look for email attachments, and save any attachments to an Amazon S3 bucket in near real time. Email volume varies significantly from day to day.

Which solution will meet these requirements?

Options:

A.

Set up email receiving in Amazon Simple Email Service (Amazon SES). Create a rule set and a receipt rule. Create an AWS Lambda function that Amazon SES can invoke to process the email bodies and attachments.

B.

Set up email content filtering in Amazon Simple Email Service (Amazon SES). Create a content filtering rule based on sender, recipient, message body, and attachments.

C.

Set up email receiving in Amazon Simple Email Service (Amazon SES). Configure Amazon SES and S3 Event Notifications to process the email bodies and attachments.

D.

Create an AWS Lambda function to process the email bodies and attachments. Use Amazon EventBridge to invoke the Lambda function. Configure an EventBridge rule to listen for incoming emails.
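
Option A wires SES email receiving to a Lambda function through a receipt rule. A sketch with hypothetical names follows; in practice an S3Action usually precedes the LambdaAction so the full MIME message (including attachments) is available to the function.

```python
import boto3

ses = boto3.client("ses")

ses.create_receipt_rule_set(RuleSetName="inbound")
ses.create_receipt_rule(
    RuleSetName="inbound",
    Rule={
        "Name": "parse-attachments",
        "Enabled": True,
        "Recipients": ["intake@example.com"],   # hypothetical
        "ScanEnabled": True,
        "Actions": [
            # Store the raw email so the function can read attachments.
            {"S3Action": {"BucketName": "raw-email-bucket",
                          "ObjectKeyPrefix": "inbound/"}},
            # Invoke the parsing Lambda function asynchronously.
            {"LambdaAction": {
                "FunctionArn": "arn:aws:lambda:us-east-1:111122223333:"
                               "function:email-ingest",
                "InvocationType": "Event",
            }},
        ],
    },
)
ses.set_active_receipt_rule_set(RuleSetName="inbound")
```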

Question 193

A company runs high performance computing (HPC) workloads that require high IOPS.

Which combination of steps will meet these requirements? (Select TWO.)

Options:

A.

Use Amazon EFS as a high-performance file system.

B.

Use Amazon FSx for Lustre as a high-performance file system.

C.

Create an Auto Scaling group of EC2 instances. Use Reserved Instances. Configure a spread placement group. Use AWS Batch for analytics.

D.

Use Mountpoint for Amazon S3 as a high-performance file system.

E.

Create an Auto Scaling group of EC2 instances. Use mixed instance types and a cluster placement group. Use Amazon EMR for analytics.

Question 194

A company is designing a solution to capture customer activity on the company's web applications. The company wants to analyze the activity data to make predictions.

Customer activity on the web applications is unpredictable and can increase suddenly. The company requires a solution that integrates with other web applications. The solution must include an authorization step.

Which solution will meet these requirements?

Options:

A.

Deploy a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Configure the applications to pass an authorization header to the GWLB.

B.

Deploy an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream. Store the data in an Amazon S3 bucket. Use an AWS Lambda function to handle authorization.

C.

Deploy an Amazon API Gateway endpoint in front of an Amazon Data Firehose delivery stream. Store the data in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to handle authorization.

D.

Deploy a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Use an AWS Lambda function to handle authorization.
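
The authorization step in option C is handled by an API Gateway Lambda authorizer, which returns an IAM policy that decides whether the caller may invoke the method. A minimal token-authorizer sketch, where the token check is a stand-in:

```python
def handler(event, context):
    # For a REST API token authorizer, API Gateway passes the value of
    # the configured identity header in 'authorizationToken'.
    token = event.get("authorizationToken", "")

    # Stand-in check; a real authorizer would validate a JWT or look
    # the token up in a user store.
    effect = "Allow" if token == "expected-token" else "Deny"

    return {
        "principalId": "web-app",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```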

Question 195

A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.

B.

Use the modify-db-instance command in the AWS CLI to change the password.

C.

Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.

D.

Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.
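
Option C can be enabled with a single modification to the DB instance; RDS then stores the master password in Secrets Manager and rotates it automatically (every 7 days by default; the schedule on the managed secret can be changed, for example to 30 days). The identifier below is hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Hand master-password management to Secrets Manager.
resp = rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",   # hypothetical
    ManageMasterUserPassword=True,
    ApplyImmediately=True,
)

# The managed secret's ARN is reported on the instance; adjust its
# rotation schedule in Secrets Manager to meet the 30-day requirement.
print(resp["DBInstance"]["MasterUserSecret"]["SecretArn"])
```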

Question 196

A company runs its critical storage application in the AWS Cloud. The application uses Amazon S3 in two AWS Regions. The company wants the application to send remote user data to the nearest S3 bucket while avoiding public network congestion. The company also wants the application to fail over with the least amount of management of Amazon S3.

Which solution will meet these requirements?

Options:

A.

Implement an active-active design between the two Regions. Configure the application to use the regional S3 endpoints closest to the user.

B.

Use an active-passive configuration with S3 Multi-Region Access Points. Create a global endpoint for each of the Regions.

C.

Send user data to the regional S3 endpoints closest to the user. Configure an S3 cross-account replication rule to keep the S3 buckets synchronized.

D.

Set up Amazon S3 to use Multi-Region Access Points in an active-active configuration with a single global endpoint. Configure S3 Cross-Region Replication.
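
Creating the Multi-Region Access Point in option D is an asynchronous control-plane request; note that S3 Multi-Region Access Point control-plane calls are sent to us-west-2. The account ID and bucket names are hypothetical.

```python
import boto3

# Multi-Region Access Point control-plane requests go to us-west-2.
s3control = boto3.client("s3control", region_name="us-west-2")

s3control.create_multi_region_access_point(
    AccountId="111122223333",          # hypothetical
    Details={
        "Name": "user-data-global",
        # One existing bucket per Region; S3 Cross-Region Replication
        # keeps them synchronized behind the single global endpoint.
        "Regions": [
            {"Bucket": "user-data-us-east-1"},
            {"Bucket": "user-data-eu-west-1"},
        ],
    },
)
```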
