
Amazon Web Services DBS-C01 AWS Certified Database - Specialty Exam Practice Test

Demo: 95 questions
Total 324 questions

AWS Certified Database - Specialty Questions and Answers

Question 1

A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company’s code repository. The company also needs to meet a compliance requirement to routinely rotate its database master password for production.

What is the most secure solution for storing the master password?

Options:

A.

Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.

B.

Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.

C.

Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.

D.

Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.
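For reference, option C's secretsmanager dynamic reference keeps the password out of the version-controlled template entirely. Below is a minimal sketch of such a template fragment, generated from Python; the secret name prod/aurora/master and the surrounding resource properties are illustrative assumptions, not details from the question.

```python
import json

# A minimal sketch of a CloudFormation fragment that uses a Secrets Manager
# dynamic reference. CloudFormation resolves the reference at deploy time,
# so the actual password never appears in the template or the repository.
# The secret name "prod/aurora/master" is a hypothetical placeholder.
template = {
    "Resources": {
        "AuroraCluster": {
            "Type": "AWS::RDS::DBCluster",
            "Properties": {
                "Engine": "aurora-mysql",
                "MasterUsername": "{{resolve:secretsmanager:prod/aurora/master:SecretString:username}}",
                "MasterUserPassword": "{{resolve:secretsmanager:prod/aurora/master:SecretString:password}}",
            },
        }
    }
}

print(json.dumps(template, indent=2))
```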

Question 2

An ecommerce company uses Amazon DynamoDB as the backend for its payments system. A new regulation requires the company to log all data access requests for financial audits. For this purpose, the company plans to use AWS logging and save logs to Amazon S3.

How can a database specialist activate logging on the database?

Options:

A.

Use AWS CloudTrail to monitor DynamoDB control-plane operations. Create a DynamoDB stream to monitor data-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.

B.

Use AWS CloudTrail to monitor DynamoDB data-plane operations. Create a DynamoDB stream to monitor control-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.

C.

Create two trails in AWS CloudTrail. Use Trail1 to monitor DynamoDB control-plane operations. Use Trail2 to monitor DynamoDB data-plane operations.

D.

Use AWS CloudTrail to monitor DynamoDB data-plane and control-plane operations.
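As background for the control-plane versus data-plane distinction these options test: a CloudTrail trail records DynamoDB control-plane calls by default, while item-level (data-plane) events must be enabled through event selectors. A minimal boto3 sketch, assuming a hypothetical existing trail named dynamodb-audit-trail that already delivers to an S3 bucket:

```python
import boto3

# A minimal sketch: add DynamoDB data-plane (item-level) events to a trail.
# Control-plane (management) events are recorded by the trail by default.
# The trail name is a hypothetical placeholder.
cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="dynamodb-audit-trail",
    AdvancedEventSelectors=[
        {
            "Name": "DynamoDB data-plane events",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::DynamoDB::Table"]},
            ],
        }
    ],
)
```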

Question 3

A manufacturing company has an inventory system that stores information in an Amazon Aurora MySQL DB cluster. The database tables are partitioned. The database size has grown to 3 TB. Users run one-time queries by using a SQL client. Queries that use an equijoin to join large tables are taking a long time to run.

Which action will improve query performance with the LEAST operational effort?

Options:

A.

Migrate the database to a new Amazon Redshift data warehouse.

B.

Enable hash joins on the database by setting the variable optimizer_switch to hash_join=on.

C.

Take a snapshot of the DB cluster. Create a new DB instance by using the snapshot, and enable parallel query mode.

D.

Add an Aurora read replica.

Question 4

A Database Specialist is creating a new Amazon Neptune DB cluster, and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:

“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”

Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

Options:

A.

Check that Amazon S3 has an IAM role granting read access to Neptune

B.

Check that an Amazon S3 VPC endpoint exists

C.

Check that a Neptune VPC endpoint exists

D.

Check that Amazon EC2 has an IAM role granting read access to Amazon S3

E.

Check that Neptune has an IAM role granting read access to Amazon S3

Question 5

A company stores session history for its users in an Amazon DynamoDB table. The company has a large user base and generates large amounts of session data.

Teams analyze the session data for 1 week, and then the data is no longer needed. A database specialist needs to design an automated solution to purge session data that is more than 1 week old.

Which strategy meets these requirements with the MOST operational efficiency?

Options:

A.

Create an AWS Step Functions state machine with a DynamoDB DeleteItem operation that uses the ConditionExpression parameter to delete items older than a week. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule that runs the Step Functions state machine on a weekly basis.

B.

Create an AWS Lambda function to delete items older than a week from the DynamoDB table. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule that triggers the Lambda function on a weekly basis.

C.

Enable Amazon DynamoDB Streams on the table. Use a stream to invoke an AWS Lambda function to delete items older than a week from the DynamoDB table

D.

Enable TTL on the DynamoDB table and set a Number data type as the TTL attribute. DynamoDB will automatically delete items that have a TTL that is less than the current time.
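Since several of these options hinge on DynamoDB Time to Live (TTL), here is a minimal boto3 sketch of enabling TTL and writing an item that expires in one week; the table name SessionHistory and attribute name expires_at are hypothetical:

```python
import time
import boto3

# A minimal sketch, assuming a hypothetical table "SessionHistory" with a
# numeric attribute "expires_at". DynamoDB deletes items at no cost once the
# epoch-seconds value in the TTL attribute is in the past.
dynamodb = boto3.client("dynamodb")

dynamodb.update_time_to_live(
    TableName="SessionHistory",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Each write stamps the item with a timestamp one week in the future.
dynamodb.put_item(
    TableName="SessionHistory",
    Item={
        "session_id": {"S": "abc123"},
        "expires_at": {"N": str(int(time.time()) + 7 * 24 * 3600)},
    },
)
```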

Question 6

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.

What can the Database Specialist do to reduce the overall cost?

Options:

A.

Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.

B.

Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.

C.

Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.

D.

Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.

Question 7

An ecommerce company uses a backend application that stores data in an Amazon DynamoDB table. The backend application runs in a private subnet in a VPC and must connect to this table.

The company must minimize any network latency that results from network connectivity issues, even during periods of heavy application usage. A database administrator also needs the ability to use a private connection to connect to the DynamoDB table from the application.

Which solution will meet these requirements?

Options:

A.

Use network ACLs to ensure that any outgoing or incoming connections to any port except DynamoDB are deactivated. Encrypt API calls by using TLS.

B.

Create a VPC endpoint for DynamoDB in the application's VPC. Use the VPC endpoint to access the table.

C.

Create an AWS Lambda function that has access to DynamoDB. Restrict outgoing access only to this Lambda function from the application.

D.

Use a VPN to route all communication to DynamoDB through the company's own corporate network infrastructure.
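For context on the gateway endpoint technique that option B describes, here is a minimal boto3 sketch; the VPC ID, route table ID, and Region are placeholders:

```python
import boto3

# A minimal sketch; all identifiers are placeholders. A gateway endpoint
# keeps DynamoDB traffic on the AWS network, so an application in a private
# subnet needs no internet gateway or NAT path to reach the table.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```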

Question 8

A company hosts an on-premises Microsoft SQL Server Enterprise edition database with Transparent Data Encryption (TDE) enabled. The database is 20 TB in size and includes sparse tables. The company needs to migrate the database to Amazon RDS for SQL Server during a maintenance window that is scheduled for an upcoming weekend. Data-at-rest encryption must be enabled for the target DB instance.

Which combination of steps should the company take to migrate the database to AWS in the MOST operationally efficient manner? (Choose two.)

Options:

A.

Use AWS Database Migration Service (AWS DMS) to migrate from the on-premises source database to the RDS for SQL Server target database.

B.

Disable TDE. Create a database backup without encryption. Copy the backup to Amazon S3.

C.

Restore the backup to the RDS for SQL Server DB instance. Enable TDE for the RDS for SQL Server DB instance.

D.

Set up an AWS Snowball Edge device. Copy the database backup to the device. Send the device to AWS. Restore the database from Amazon S3.

E.

Encrypt the data with client-side encryption before transferring the data to Amazon RDS.

Question 9

A company uses an Amazon RDS for PostgreSQL DB instance for its customer relationship management (CRM) system. New compliance requirements specify that the database must be encrypted at rest.

Which action will meet these requirements?

Options:

A.

Create an encrypted copy of a manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.

B.

Modify the DB instance and enable encryption.

C.

Restore a DB instance from the most recent automated snapshot and enable encryption.

D.

Create an encrypted read replica of the DB instance. Promote the read replica to a standalone instance.
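A detail worth remembering here: an existing RDS instance cannot be encrypted in place, so encryption is applied when a snapshot copy is created. A minimal boto3 sketch of the copy-then-restore pattern, using hypothetical identifiers:

```python
import boto3

# A minimal sketch; the snapshot and instance identifiers and the KMS key
# alias are placeholders. Encryption is applied to the snapshot copy, and
# the instance restored from it is encrypted at rest.
rds = boto3.client("rds")

rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="crm-manual-snapshot",
    TargetDBSnapshotIdentifier="crm-manual-snapshot-encrypted",
    KmsKeyId="alias/aws/rds",
)

rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="crm-postgres-encrypted",
    DBSnapshotIdentifier="crm-manual-snapshot-encrypted",
)
```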

Question 10

A company is using Amazon Aurora PostgreSQL for the backend of its application. The system users are complaining that the responses are slow. A database specialist has determined that the queries to Aurora take longer during peak times. On the Amazon RDS Performance Insights dashboard, the load in the average active sessions chart is often above the line that denotes maximum CPU usage, and the wait state shows that most wait events are IO:XactSync.

What should the company do to resolve these performance issues?

Options:

A.

Add an Aurora Replica to scale the read traffic.

B.

Scale up the DB instance class.

C.

Modify applications to commit transactions in batches.

D.

Modify applications to avoid conflicts by taking locks.

Question 11

A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.

The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.

How can a Database Specialist address these requirements with minimal user involvement?

Options:

A.

Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.

B.

Review and evaluate the peak combined workload. Ensure that the utilization of the DB cluster nodes is at an acceptable level. Adjust the number of instances, if necessary.

C.

Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.

D.

Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

Question 12

A company is running its line of business application on AWS, which uses Amazon RDS for MySQL as the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora.

Which migration method should a Database Specialist use?

Options:

A.

Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.

B.

Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.

C.

Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.

D.

Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.

Question 13

A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account.

Which combination of actions must the application development team take to meet these requirements? (Choose two.)

Options:

A.

Add access from the "WeReceive" account to the custom AWS KMS key policy of the sharing team.

B.

Make a copy of the DB snapshot, and set the encryption option to disable.

C.

Share the DB snapshot by setting the DB snapshot visibility option to public.

D.

Make a copy of the DB snapshot, and set the encryption option to enable.

E.

Share the DB snapshot by using the default AWS KMS encryption key.
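For reference, cross-account sharing works on manual snapshots via the snapshot's restore attribute, so an automated snapshot is first copied. A minimal boto3 sketch run in the "WeShare" account; all identifiers, key ARNs, and account IDs are placeholders:

```python
import boto3

# A minimal sketch. Automated snapshots cannot be shared directly, so the
# snapshot is first copied to a manual snapshot encrypted with the custom
# key; the custom KMS key policy must also grant access to "WeReceive".
rds = boto3.client("rds")

rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:mydb-2024-01-01-00-00",  # automated snapshot
    TargetDBSnapshotIdentifier="mydb-shareable-copy",
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/EXAMPLE",  # custom key
)

rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="mydb-shareable-copy",
    AttributeName="restore",
    ValuesToAdd=["222222222222"],  # placeholder "WeReceive" account ID
)
```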

Question 14

A worldwide digital advertising corporation collects browser information in order to provide visitors with contextually relevant images, websites, and links. A single page load may create many events, each of which must be stored separately. A single event may have a maximum size of 200 KB and an average size of 10 KB. Each page load requires a query of the user's browsing history in order to deliver suggestions for targeted advertising. The advertising corporation anticipates daily page views of more than 1 billion from users in the United States, Europe, Hong Kong, and India. The data structure differs according to the event. Additionally, browsing information must be written and read with very low latency to ensure that users have a positive viewing experience.

Which database solution satisfies these criteria?

Options:

A.

Amazon DocumentDB

B.

Amazon RDS Multi-AZ deployment

C.

Amazon DynamoDB global table

D.

Amazon Aurora Global Database

Question 15

An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes time with milliseconds precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.

Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?

Options:

A.

Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

B.

Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.

C.

Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

D.

Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.
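To make the index options concrete, below is a minimal boto3 sketch of a table shaped like option C; all names are illustrative. Because DynamoDB indexes are sparse, only items that carry the fault attribute appear in the GSI, so querying a plant's partition in the index returns only its faulty-sensor items.

```python
import boto3

# A minimal sketch; table, attribute, and index names are hypothetical.
dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="SensorData",
    AttributeDefinitions=[
        {"AttributeName": "plant_sensor_id", "AttributeType": "S"},
        {"AttributeName": "measurement_time", "AttributeType": "N"},
        {"AttributeName": "plant_id", "AttributeType": "S"},
        {"AttributeName": "fault", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "plant_sensor_id", "KeyType": "HASH"},
        {"AttributeName": "measurement_time", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[
        {
            # Sparse index: only items with the "fault" attribute appear here.
            "IndexName": "FaultyByPlant",
            "KeySchema": [
                {"AttributeName": "plant_id", "KeyType": "HASH"},
                {"AttributeName": "fault", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
```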

Question 16

An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update.

The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.

Which solution meets these requirements?

Options:

A.

Turn on the serverless option in the DB cluster so it can automatically scale based on demand.

B.

Provision a clone of the existing DB cluster for the new Application team.

C.

Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).

D.

Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

Question 17

A business is transferring its on-premises database workloads to the AWS Cloud. A database professional has picked AWS DMS to migrate an Oracle database with a very large table to Amazon RDS. The database professional observes that AWS DMS is taking considerable time to migrate the data.

Which activities would increase the pace of data migration? (Select three.)

Options:

A.

Create multiple AWS DMS tasks to migrate the large table.

B.

Configure the AWS DMS replication instance with Multi-AZ.

C.

Increase the capacity of the AWS DMS replication server.

D.

Establish an AWS Direct Connect connection between the on-premises data center and AWS.

E.

Enable an Amazon RDS Multi-AZ configuration.

F.

Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.

Question 18

A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours.

The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs.

What should a database specialist do to meet these requirements?

Options:

A.

Use reserved capacity. Set it to the capacity levels required for peak daytime throughput.

B.

Use provisioned capacity. Set it to the capacity levels required for peak daytime throughput.

C.

Use provisioned capacity. Create an AWS Application Auto Scaling policy to update capacity based on consumption.

D.

Use on-demand capacity.
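For context on option C, DynamoDB auto scaling is configured through Application Auto Scaling. A minimal boto3 sketch for the table's read capacity, with a hypothetical table name and bounds; write capacity would need an equivalent pair of calls:

```python
import boto3

# A minimal sketch; the table name and capacity bounds are placeholders.
autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/EcommerceOrders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=10000,
)

# Target tracking keeps consumed capacity near 70% of provisioned capacity.
autoscaling.put_scaling_policy(
    PolicyName="orders-read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/EcommerceOrders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```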

Question 19

A retail company manages a web application that stores data in an Amazon DynamoDB table. The company is undergoing account consolidation efforts. A database engineer needs to migrate the DynamoDB table from the current AWS account to a new AWS account.

Which strategy meets these requirements with the LEAST amount of administrative work?

Options:

A.

Use AWS Glue to crawl the data in the DynamoDB table. Create a job using an available blueprint to export the data to Amazon S3. Import the data from the S3 file to a DynamoDB table in the new account.

B.

Create an AWS Lambda function to scan the items of the DynamoDB table in the current account and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items of a DynamoDB table in the new account.

C.

Use AWS Data Pipeline in the current account to export the data from the DynamoDB table to a file in Amazon S3. Use Data Pipeline to import the data from the S3 file to a DynamoDB table in the new account.

D.

Configure Amazon DynamoDB Streams for the DynamoDB table in the current account. Create an AWS Lambda function to read from the stream and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items to a DynamoDB table in the new account.

Question 20

A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.

Which solution meets these requirements?

Options:

A.

Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.

B.

Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.

C.

Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.

D.

Change the DB clusters to the burstable instance family.

Question 21

A database specialist is designing an enterprise application for a large company. The application uses Amazon DynamoDB with DynamoDB Accelerator (DAX).

The database specialist observes that most of the queries are not found in the DAX cache and that they still require DynamoDB table reads.

What should the database specialist review first to improve the utility of DAX?

Options:

A.

The DynamoDB ConsumedReadCapacityUnits metric

B.

The trust relationship to perform the DynamoDB API calls

C.

The DAX cluster's TTL setting

D.

The validity of customer-specified AWS Key Management Service (AWS KMS) keys for DAX encryption at rest

Question 22

A financial services organization uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). The organization is obligated to encrypt its data at rest at all times. The decryption key must be highly available, and access to the key must be restricted. The organization must be able to rotate the encryption key on demand to comply with regulatory requirements. If any potential security vulnerabilities are discovered, the organization must be able to disable the key. Additionally, the company's overhead must be kept to a minimum.

What method should the database administrator use to configure the encryption to fulfill these specifications?

Options:

A.

AWS CloudHSM

B.

AWS Key Management Service (AWS KMS) with an AWS managed key

C.

AWS Key Management Service (AWS KMS) with server-side encryption

D.

AWS Key Management Service (AWS KMS) CMK with customer-provided material

Question 23

A company has more than 100 AWS accounts that need Amazon RDS instances. The company wants to build an automated solution to deploy the RDS instances with specific compliance parameters. The data does not need to be replicated. The company needs to create the databases within 1 day.

Which solution will meet these requirements in the MOST operationally efficient way?

Options:

A.

Create RDS resources by using AWS CloudFormation. Share the CloudFormation template with each account.

B.

Create an RDS snapshot. Share the snapshot with each account. Deploy the snapshot into each account.

C.

Use AWS CloudFormation to create RDS instances in each account. Run AWS Database Migration Service (AWS DMS) replication to each of the created instances.

D.

Create a script by using the AWS CLI to copy the RDS instance into the other accounts from a template account.

Question 24

A company's application team needs to select an AWS managed database service to store application and user data. The application team is familiar with MySQL but is open to new solutions. The application and user data is stored in 10 tables and is de-normalized. The application will access this data through an API layer using a unique ID in each table. The company expects the traffic to be light at first, but the traffic will increase to thousands of transactions each second within the first year. The database service must support active reads and writes in multiple AWS Regions at the same time. Query response times need to be less than 100 ms.

Which AWS database solution will meet these requirements?

Options:

A.

Deploy an Amazon RDS for MySQL environment in each Region and leverage AWS Database Migration Service (AWS DMS) to set up multi-Region bidirectional replication

B.

Deploy an Amazon Aurora MySQL global database with write forwarding turned on

C.

Deploy an Amazon DynamoDB database with global tables

D.

Deploy an Amazon DocumentDB global cluster across multiple Regions.
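As background for the global tables option, a replica Region can be added to an existing table with global tables version 2019.11.21, which provides active-active multi-Region reads and writes. A minimal boto3 sketch with placeholder names:

```python
import boto3

# A minimal sketch; the table name and Region are placeholders. The table
# must have DynamoDB Streams enabled (NEW_AND_OLD_IMAGES) before a replica
# can be added. Once replicated, the table accepts reads and writes in
# every replica Region.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.update_table(
    TableName="UserData",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```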

Question 25

A financial services company is using AWS Database Migration Service (AWS DMS) to migrate its databases from on-premises to AWS. A database administrator is working on replicating a database to AWS from on-premises using full load and change data capture (CDC). During the CDC replication, the database administrator observed that the target latency was high and slowly increasing.

What could be the root causes for this high target latency? (Select TWO.)

Options:

A.

There was ongoing maintenance on the replication instance

B.

The source endpoint was changed by modifying the task

C.

Loopback changes had affected the source and target instances.

D.

There was no primary key or index in the target database.

E.

There were resource bottlenecks in the replication instance

Question 26

A company has an AWS CloudFormation stack that defines an Amazon RDS DB instance. The company accidentally deletes the stack and loses recent data from the DB instance. A database specialist must change the CloudFormation template for the RDS resource to reduce the chance of accidental data loss from the DB instance in the future.

Which combination of actions should the database specialist take to meet this requirement? (Choose three.)

Options:

A.

Set the DeletionProtection property to True.

B.

Set the MultiAZ property to True.

C.

Set the TerminationProtection property to True.

D.

Set the DeleteAutomatedBackups property to False.

E.

Set the DeletionPolicy attribute to No.

F.

Set the DeletionPolicy attribute to Retain.
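To make the template attributes in these options concrete, here is a minimal sketch of an RDS resource fragment emitted from Python; the logical ID and the instance properties are illustrative, not a complete deployable resource:

```python
import json

# A minimal sketch. DeletionPolicy: Retain keeps the DB instance when the
# stack is deleted; DeletionProtection blocks deleting the instance itself;
# DeleteAutomatedBackups: false preserves automated backups after deletion.
rds_resource = {
    "MyDBInstance": {
        "Type": "AWS::RDS::DBInstance",
        "DeletionPolicy": "Retain",
        "Properties": {
            "Engine": "mysql",
            "DBInstanceClass": "db.m5.large",
            "AllocatedStorage": "100",
            "DeletionProtection": True,
            "DeleteAutomatedBackups": False,
        },
    }
}

print(json.dumps(rds_resource, indent=2))
```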

Question 27

A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table.

To prepare the new table with identical settings, which steps should be performed? (Choose two.)

Options:

A.

Re-create global secondary indexes in the new table

B.

Define IAM policies for access to the new table

C.

Define the TTL settings

D.

Encrypt the table from the AWS Management Console or use the update-table command

E.

Set the provisioned read and write capacity

Question 28

A gaming company is evaluating Amazon ElastiCache as a solution to manage player leaderboards. Millions of players around the world will compete in annual tournaments. The company wants to implement an architecture that is highly available. The company also wants to ensure that maintenance activities have minimal impact on the availability of the gaming platform.

Which combination of steps should the company take to meet these requirements? (Choose two.)

Options:

A.

Deploy an ElastiCache for Redis cluster with read replicas and Multi-AZ enabled.

B.

Deploy an ElastiCache for Memcached global datastore.

C.

Deploy a single-node ElastiCache for Redis cluster with automatic backups enabled. In the event of a failure, create a new cluster and restore data from the most recent backup.

D.

Use the default maintenance window to apply any required system changes and mandatory updates as soon as they are available.

E.

Choose a preferred maintenance window at the time of lowest usage to apply any required changes and mandatory updates.

Question 29

A company is running a business-critical application on premises by using Microsoft SQL Server. A database specialist is planning to migrate the instance with several databases to the AWS Cloud. The database specialist will use SQL Server Standard edition hosted on Amazon EC2 Windows instances. The solution must provide high availability and must avoid a single point of failure in the SQL Server deployment architecture.

Which solution will meet these requirements?

Options:

A.

Create Amazon RDS for SQL Server Multi-AZ DB instances. Use Amazon S3 as a shared storage option to host the databases.

B.

Set up Always On Failover Cluster Instances as a single SQL Server instance. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

C.

Set up Always On availability groups to group one or more user databases that fail over together across multiple SQL Server instances. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

D.

Create an Application Load Balancer to distribute database traffic across multiple EC2 instances in multiple Availability Zones. Use Amazon S3 as a shared storage option to host the databases.

Question 30

A company’s database specialist disabled TLS on an Amazon DocumentDB cluster to perform benchmarking tests. A few days after this change was implemented, a database specialist trainee accidentally deleted multiple tables. The database specialist restored the database from available snapshots. An hour after restoring the cluster, the database specialist is still unable to connect to the new cluster endpoint.

What should the database specialist do to connect to the new, restored Amazon DocumentDB cluster?

Options:

A.

Change the restored cluster’s parameter group to the original cluster’s custom parameter group.

B.

Change the restored cluster’s parameter group to the Amazon DocumentDB default parameter group.

C.

Configure the interface VPC endpoint and associate the new Amazon DocumentDB cluster.

D.

Run the syncInstances command in AWS DataSync.

Question 31

A company's development team needs to have production data restored in a staging AWS account. The production database is running on an Amazon RDS for PostgreSQL Multi-AZ DB instance, which has AWS KMS encryption enabled using the default KMS key. A database specialist planned to share the most recent automated snapshot with the staging account, but discovered that the option to share snapshots is disabled in the AWS Management Console.

What should the database specialist do to resolve this?

Options:

A.

Disable automated backups in the DB instance. Share both the automated snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account and enable automated backups.

B.

Copy the automated snapshot specifying a custom KMS encryption key. Share both the copied snapshot and the custom KMS encryption key with the staging account. Restore the snapshot to the staging account within the same Region.

C.

Modify the DB instance to use a custom KMS encryption key. Share both the automated snapshot and the custom KMS encryption key with the staging account. Restore the snapshot in the staging account.

D.

Copy the automated snapshot while keeping the default KMS key. Share both the snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account.

Question 32

A company is looking to move an on-premises IBM Db2 database running on AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.

What is the quickest way for the company to gather data on the migration compatibility?

Options:

A.

Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.

B.

Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.

C.

Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.

D.

Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.

Question 33

A major organization maintains a number of Amazon DB clusters. Each of these clusters is configured differently to meet certain needs. These configurations may be classified into wider groups based on the team and use case.

A database administrator wishes to streamline the process of storing and updating these settings. Additionally, the database administrator wants to guarantee that changes to certain configuration categories are automatically applied to all instances as necessary.

Which AWS service or functionality will assist in automating and achieving this goal?

Options:

A.

AWS Systems Manager Parameter Store

B.

DB parameter group

C.

AWS Config

D.

AWS Secrets Manager

Question 34

A company has a reporting application that runs on an Amazon EC2 instance in an isolated developer account on AWS. The application needs to retrieve data during non-peak company hours from an Amazon Aurora PostgreSQL database that runs in the company's production account. The company's security team requires that access to production resources complies with AWS security best practices.

A database administrator needs to provide the reporting application with access to the production database. The company has already configured VPC peering between the production account and the developer account. The company has also updated the route tables in both accounts with the necessary entries to correctly set up VPC peering.

What must the database administrator do to finish providing connectivity to the reporting application?

Options:

A.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

B.

Add an outbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

C.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on all TCP ports. Add an inbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on port 5432.

D.

Add an inbound security group rule to the database security group that allows access from the developer account VPC CIDR on port 5432. Add an outbound security group rule to the EC2 security group that allows access to the production account VPC CIDR on all TCP ports.

Question 35

A business that specializes in internet advertising is developing an application that will show ads to its customers. The application stores data in an Amazon DynamoDB table. Additionally, the application caches its reads using a DynamoDB Accelerator (DAX) cluster. The majority of reads come via the GetItem and BatchGetItem operations. The application does not require strongly consistent reads.

After deployment, the application cache does not behave as intended. Certain strongly consistent reads to the DAX cluster are responding in several milliseconds rather than microseconds.

How can the business optimize cache behavior in order to boost application performance?

Options:

A.

Increase the size of the DAX cluster.

B.

Configure DAX to be an item cache with no query cache

C.

Use eventually consistent reads instead of strongly consistent reads.

D.

Create a new DAX cluster with a higher TTL for the item cache.

Question 36

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. The company recently conducted tests on the database after business hours, and the tests generated additional database logs. As a result, free storage of the DB instance is low and is expected to be exhausted in 2 days.

The company wants to recover the free storage that the additional logs consumed. The solution must not result in downtime for the database.

Which solution will meet these requirements?

Options:

A.

Modify the rds.log_retention_period parameter to 0. Reboot the DB instance to save the changes.

B.

Modify the rds.log_retention_period parameter to 1440. Wait up to 24 hours for database logs to be deleted.

C.

Modify the temp_file_limit parameter to a smaller value to reclaim space on the DB instance.

D.

Modify the rds.log_retention_period parameter to 1440. Reboot the DB instance to save the changes.
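For reference, rds.log_retention_period is a dynamic parameter set in a DB parameter group and measured in minutes. A minimal boto3 sketch, assuming the instance already uses a hypothetical custom parameter group named prod-postgres-params:

```python
import boto3

# A minimal sketch. Because the parameter is dynamic, it can be applied
# immediately without a reboot; logs older than the retention window
# (in minutes) are then purged automatically.
rds = boto3.client("rds")

rds.modify_db_parameter_group(
    DBParameterGroupName="prod-postgres-params",
    Parameters=[
        {
            "ParameterName": "rds.log_retention_period",
            "ParameterValue": "1440",  # keep logs for 24 hours
            "ApplyMethod": "immediate",
        }
    ],
)
```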

Question 37

A banking company recently launched an Amazon RDS for MySQL DB instance as part of a proof-of-concept project. A database specialist has configured automated database snapshots. As a part of routine testing, the database specialist noticed one day that the automated database snapshot was not created.

Which of the following are possible reasons why the snapshot was not created? (Choose two.)

Options:

A.

A copy of the RDS automated snapshot for this DB instance is in progress within the same AWS Region.

B.

A copy of the RDS automated snapshot for this DB instance is in progress in a different AWS Region.

C.

The RDS maintenance window is not configured.

D.

The RDS DB instance is in the STORAGE_FULL state.

E.

RDS event notifications have not been enabled.

Question 38

A gaming company is developing a new mobile game and decides to store the data for each user in Amazon DynamoDB. To make the registration process as easy as possible, users can log in with their existing Facebook or Amazon accounts. The company expects more than 10,000 users.

How should a database specialist implement access control with the LEAST operational effort?

Options:

A.

Use web identity federation on the mobile app and AWS STS with an attached IAM role to get temporary credentials to access DynamoDB.

B.

Use web identity federation on the mobile app and create individual IAM users with credentials to access DynamoDB.

C.

Use a self-developed user management system on the mobile app that lets users access the data from DynamoDB through an API.

D.

Use a single IAM user on the mobile app to access DynamoDB.
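As background for the web identity federation option, the provider's token is exchanged for temporary, role-scoped credentials through AWS STS (in practice, Amazon Cognito or the provider SDK typically handles this exchange). A minimal boto3 sketch; the role ARN and token are placeholders:

```python
import boto3

# A minimal sketch of the federation exchange. No long-lived IAM users are
# created per player; each session gets temporary keys scoped by the role.
sts = boto3.client("sts")

response = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111111111111:role/GamePlayerRole",  # placeholder
    RoleSessionName="player-session",
    WebIdentityToken="<token from Facebook or Login with Amazon>",
)

credentials = response["Credentials"]
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
```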

Question 39

A startup company in the travel industry wants to create an application that includes a personal travel assistant to display information for nearby airports based on user location. The application will use Amazon DynamoDB and must be able to access and display attributes such as airline names, arrival times, and flight numbers. However, the application must not be able to access or display pilot names or passenger counts.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Use a proxy tier between the application and DynamoDB to regulate access to specific tables, items, and attributes.

B.

Use IAM policies with a combination of IAM conditions and actions to implement fine-grained access control.

C.

Use DynamoDB resource policies to regulate access to specific tables, items, and attributes.

D.

Configure an AWS Lambda function to extract only allowed attributes from tables based on user profiles.
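To illustrate the fine-grained access control that option B refers to, here is a sketch of an IAM policy that exposes only selected attributes; the table ARN and attribute names are hypothetical:

```python
import json

# A minimal sketch. The dynamodb:Attributes condition limits reads to the
# listed attributes; pilot names and passenger counts are simply not listed.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111111111111:table/Flights",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:Attributes": [
                        "airline",
                        "arrival_time",
                        "flight_number",
                    ]
                },
                "StringEquals": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```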

Question 40

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.

How can this solution be implemented?

Options:

A.

Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.

B.

Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.

C.

Use the AWS CLI to update the DynamoDB table and modify the partition key.

D.

Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.

Question 41

A financial institution uses AWS to host its online application. Amazon RDS for MySQL is used to host the application's database, which includes automatic backups.

The application has logically corrupted the database, resulting in the application being unresponsive. The exact moment the corruption occurred has been determined, and it occurred within the backup retention period.

How should a database professional restore the database to its state prior to the corruption?

Options:

A.

Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.

B.

Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.

C.

Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.

D.

Restore using the appropriate automated backup. No changes to the application connection string are required.

Question 42

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration.

Which solution will MOST improve the performance of the data migration?

Options:

A.

Increase the number of tables that are loaded in parallel.

B.

Drop all indexes on the source tables.

C.

Change the processing mode from the batch optimized apply option to transactional mode.

D.

Enable Multi-AZ on the target database while the full load task is in progress.

Question 43

A gaming company is building a mobile game that will have as many as 25,000 active concurrent users in the first 2 weeks after launch. The game has a leaderboard that shows the 10 highest scoring players over the last 24 hours. The leaderboard calculations are processed by an AWS Lambda function, which takes about 10 seconds. The company wants the data on the leaderboard to be no more than 1 minute old.

Which architecture will meet these requirements in the MOST operationally efficient way?

Options:

A.

Deliver the player data to an Amazon Timestream database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.

B.

Deliver the player data to an Amazon Timestream database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in DynamoDB. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.

C.

Deliver the player data to an Amazon Aurora MySQL database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in MySQL. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.

D.

Deliver the player data to an Amazon Neptune database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.

Question 44

A financial company has allocated an Amazon RDS MariaDB DB instance with large storage capacity to accommodate migration efforts. Post-migration, the company purged unwanted data from the instance. The company now wants to downsize storage to save money. The solution must have the least impact on production and near-zero downtime.

Which solution would meet these requirements?

Options:

A.

Create a snapshot of the old databases and restore the snapshot with the required storage

B.

Create a new RDS DB instance with the required storage and move the databases from the old instance to the new instance using AWS DMS

C.

Create a new database using native backup and restore

D.

Create a new read replica and make it the primary by terminating the existing primary

Question 45

A company is using Amazon Redshift. A database specialist needs to allow an existing Redshift cluster to access data from other Redshift clusters, Amazon RDS for PostgreSQL databases, and AWS Glue Data Catalog tables.

Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)

Options:

A.

Take a snapshot of the required tables from the other Redshift clusters. Restore the snapshot into the existing Redshift cluster.

B.

Create external tables in the existing Redshift database to connect to the AWS Glue Data Catalog tables.

C.

Unload the RDS tables and the tables from the other Redshift clusters into Amazon S3. Run COPY commands to load the tables into the existing Redshift cluster.

D.

Use federated queries to access data in Amazon RDS.

E.

Use data sharing to access data from the other Redshift clusters.

F.

Use AWS Glue jobs to transfer the AWS Glue Data Catalog tables into Amazon S3. Create external tables in the existing Redshift database to access this data.

Question 46

A database specialist needs to delete user data and sensor data 1 year after it was loaded in an Amazon DynamoDB table. TTL is enabled on one of the attributes. The database specialist monitors TTL rates on the Amazon CloudWatch metrics for the table and observes that items are not being deleted as expected.

What is the MOST likely reason that the items are not being deleted?

Options:

A.

The TTL attribute's value is set as a Number data type.

B.

The TTL attribute's value is set as a Binary data type.

C.

The TTL attribute's value is a timestamp in the Unix epoch time format in seconds.

D.

The TTL attribute's value is set with an expiration of 1 year.

Question 47

A company is using an Amazon Aurora PostgreSQL DB cluster with an xlarge primary DB instance and two large Aurora Replicas for high availability and read-only workload scaling. A failover event occurs and application performance is poor for several minutes. During this time, application servers in all Availability Zones are healthy and responding normally.

What should the company do to eliminate this application performance issue?

Options:

A.

Configure both of the Aurora Replicas to the same instance class as the primary DB instance. Enable cache coherence on the DB cluster, set the primary DB instance failover priority to tier-0, and assign a failover priority of tier-1 to the replicas.

B.

Deploy an AWS Lambda function that calls the DescribeDBInstances action to establish which instance has failed, and then use the PromoteReadReplica operation to promote one Aurora Replica to be the primary DB instance. Configure an Amazon RDS event subscription to send a notification to an Amazon SNS topic to which the Lambda function is subscribed.

C.

Configure one Aurora Replica to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for the other replicas.

D.

Configure both Aurora Replicas to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and to tier-1 for the replicas.
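For context, Aurora PostgreSQL cluster cache management is switched on with the apg_ccm_enabled cluster parameter, and failover priorities correspond to instance promotion tiers. A minimal boto3 sketch with placeholder identifiers:

```python
import boto3

# A minimal sketch; group and instance identifiers are placeholders.
# Cluster cache management pre-warms the buffer cache of the designated
# tier-0 replica so a failover does not start with a cold cache.
rds = boto3.client("rds")

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-pg-prod",
    Parameters=[
        {
            "ParameterName": "apg_ccm_enabled",
            "ParameterValue": "on",
            "ApplyMethod": "immediate",
        }
    ],
)

# Same-sized replica at tier 0 (preferred failover target) ...
rds.modify_db_instance(DBInstanceIdentifier="aurora-replica-1", PromotionTier=0)
# ... remaining replicas at tier 1.
rds.modify_db_instance(DBInstanceIdentifier="aurora-replica-2", PromotionTier=1)
```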

Question 48

A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.

Which solution meets these requirements?

Options:

A.

Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.

B.

Use reader endpoints for both the read-only workload applications.

C.

Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.

D.

Use custom endpoints for the two read-only applications.
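To make the custom endpoint option concrete, each read-only application can get its own endpoint with a static set of member replicas. A minimal boto3 sketch with hypothetical cluster and instance names:

```python
import boto3

# A minimal sketch; cluster, endpoint, and instance names are placeholders.
# Each custom endpoint fronts its own replica set, giving each application
# a dedicated, load-balanced, highly available connection target.
rds = boto3.client("rds")

rds.create_db_cluster_endpoint(
    DBClusterIdentifier="aurora-prod",
    DBClusterEndpointIdentifier="reporting-app-endpoint",
    EndpointType="READER",
    StaticMembers=["aurora-prod-replica-1"],
)

rds.create_db_cluster_endpoint(
    DBClusterIdentifier="aurora-prod",
    DBClusterEndpointIdentifier="analytics-app-endpoint",
    EndpointType="READER",
    StaticMembers=["aurora-prod-replica-2"],
)
```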

Question 49

A business hosts a MySQL database for its ecommerce application on a single Amazon RDS DB instance. The application automatically saves purchases to the database, resulting in high-volume writes. Employees routinely create purchase reports for the company. The organization wants to boost database performance and minimize the downtime associated with upgrade patching.

Which technique will satisfy these criteria with the LEAST amount of operational overhead?

Options:

A.

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and enable Memcached in the MySQL option group.

B.

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and set up replication to a MySQL DB instance running on Amazon EC2.

C.

Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.

D.

Add a read replica and promote it to an Amazon Aurora MySQL DB cluster master. Then enable Amazon Aurora Serverless.

Question 50

A company has an on-premises production Microsoft SQL Server with 250 GB of data in one database. A database specialist needs to migrate this on-premises SQL Server to Amazon RDS for SQL Server. The nightly native SQL Server backup file is approximately 120 GB in size. The application can be down for an extended period of time to complete the migration. Connectivity between the on-premises environment and AWS can be initiated from on-premises only.

How can the database be migrated from on-premises to Amazon RDS with the LEAST amount of effort?

Options:

A.

Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Download the backup files on an Amazon EC2 instance and restore them from the EC2 instance into the new production RDS instance.

B.

Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Restore the backup files from the S3 bucket into the new production RDS instance.

C.

Provision and configure AWS DMS. Set up replication between the on-premises SQL Server environment to replicate the database to the new production RDS instance.

D.

Back up the SQL Server database using AWS Backup. Once the backup is complete, restore the completed backup to an Amazon EC2 instance and move it to the new production RDS instance.

Question 51

A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group.

What should the Database Specialist do to automatically collect the database logs for the Administrator?

Options:

A.

Enable DocumentDB to export the logs to Amazon CloudWatch Logs

B.

Enable DocumentDB to export the logs to AWS CloudTrail

C.

Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs

D.

Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3

Question 52

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connections logging.

Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

Options:

A.

Update the log_connections parameter in the default parameter group

B.

Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance

C.

Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days

D.

Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days

E.

Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file

Question 53

A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that there is a new minor version available, the database specialist has issued an AWS CLI command to enable automatic minor version updates. The command runs successfully, but checking the Aurora DB cluster indicates that no update to the Aurora version has been made.

What might account for this? (Choose two.)

Options:

A.

The new minor version has not yet been designated as preferred and requires a manual upgrade.

B.

Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.

C.

Applying minor version upgrades requires sufficient free space.

D.

The AWS CLI command did not include an apply-immediately parameter.

E.

Aurora has detected a breaking change in the new minor version and has automatically rejected the upgrade.

Question 54

A Database Specialist is constructing a new Amazon Neptune DB cluster and is trying to load data from Amazon S3 using the Neptune bulk loader API. The Database Specialist encounters the following error message:

“Unable to connect to the S3 endpoint. The source URL is s3://mybucket/graphdata/ and the region code is us-east-1. Please verify your S3 configuration.”

Which of the following actions should the Database Specialist take to resolve the issue? (Select two.)

Options:

A.

Check that Amazon S3 has an IAM role granting read access to Neptune

B.

Check that an Amazon S3 VPC endpoint exists

C.

Check that a Neptune VPC endpoint exists

D.

Check that Amazon EC2 has an IAM role granting read access to Amazon S3

E.

Check that Neptune has an IAM role granting read access to Amazon S3

Question 55

A database specialist needs to reduce the cost of an application's database. The database is running on a Multi-AZ deployment of an Amazon RDS for Microsoft SQL Server DB instance. The application requires the database to support stored procedures, SQL Server Wire Protocol (TDS), and T-SQL. The database must also be highly available. The database specialist is using AWS Database Migration Service (AWS DMS) to migrate the database to a new data store.

Which solution will reduce the cost of the database with the LEAST effort?

Options:

A.

Use AWS Database Migration Service (DMS) to migrate to an RDS for MySQL Multi-AZ database. Update the application code to use the features of MySQL that correspond to SQL Server. Update the application to use the MySQL port.

B.

Use AWS Database Migration Service (AWS DMS) to migrate to an RDS for PostgreSQL Multi-AZ database. Turn on the SQL_COMPAT optional extension within the database to allow the required features. Update the application to use the PostgreSQL port.

C.

Use AWS Database Migration Service (AWS DMS) to migrate to an RDS for SQL Server Single-AZ database. Update the application to use the new database endpoint.

D.

Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish for Aurora PostgreSQL. Update the application to use the Babelfish TDS port.

Question 56

A gaming company is designing a mobile gaming app that will be accessed by many users across the globe. The company wants to have replication and full support for multi-master writes. The company also wants to ensure low latency and consistent performance for app users.

Which solution meets these requirements?

Options:

A.

Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling

B.

Use Amazon Aurora for storage and enable cross-Region Aurora Replicas

C.

Use Amazon Aurora for storage and cache the user content with Amazon ElastiCache

D.

Use Amazon Neptune for storage
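
As an illustration of the DynamoDB global tables approach, a replica Region can be added to an existing table with a single update_table call; the table name and Region below are hypothetical, and DynamoDB Streams must already be enabled on the table.

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")

    # Adds a replica in another Region, turning the table into a global
    # table with multi-active (multi-master) writes.
    ddb.update_table(
        TableName="PlayerProfiles",
        ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
    )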

Question 57

A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that a new minor version is available, the database specialist has issued an AWS CLI command to enable automatic minor version upgrades. The command runs successfully, but a check of the Aurora DB cluster shows that the Aurora version has not been updated.

What might account for this? (Choose two.)

Options:

A.

The new minor version has not yet been designated as preferred and requires a manual upgrade.

B.

Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.

C.

Applying minor version upgrades requires sufficient free space.

D.

The AWS CLI command did not include an apply-immediately parameter.

E.

Aurora has detected a breaking change in the new minor version and has automatically rejected the upgrade.

Question 58

A large company has a variety of Amazon RDS DB clusters. Each cluster has a configuration that adheres to different requirements. Depending on the team and use case, these configurations can be organized into broader categories.

A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.

Which AWS service or feature will help automate and achieve this objective?

Options:

A.

AWS Systems Manager Parameter Store

B.

DB parameter group

C.

AWS Config

D.

AWS Secrets Manager
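
For context, a DB parameter group acts as a named, reusable configuration category. A sketch of creating and modifying one with boto3 follows (names and values are illustrative); dynamic parameter changes propagate automatically to every DB instance associated with the group.

    import boto3

    rds = boto3.client("rds")

    # One parameter group per configuration category.
    rds.create_db_parameter_group(
        DBParameterGroupName="analytics-mysql8",
        DBParameterGroupFamily="mysql8.0",
        Description="Shared settings for the analytics team",
    )

    # Changing a parameter here applies to all associated instances.
    rds.modify_db_parameter_group(
        DBParameterGroupName="analytics-mysql8",
        Parameters=[
            {
                "ParameterName": "max_connections",
                "ParameterValue": "500",
                "ApplyMethod": "pending-reboot",
            }
        ],
    )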

Question 59

The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.

Which approach will meet these requirements?

Options:

A.

Use pg_audit to generate audit logs and send the logs to the Security team.

B.

Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.

C.

Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.

D.

Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.
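
For reference, database activity streams are started with a single API call against the cluster ARN; the sketch below uses placeholder ARNs. The stream is delivered through Amazon Kinesis, encrypted with the given KMS key, for downstream consumers to decrypt and process.

    import boto3

    rds = boto3.client("rds")

    # Placeholder ARNs; 'async' mode minimizes the performance impact on
    # the cluster, while 'sync' favors completeness of the audit trail.
    rds.start_activity_stream(
        ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora-pg",
        Mode="async",
        KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        ApplyImmediately=True,
    )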

Question 60

A company has an AWS CloudFormation template written in JSON that is used to launch new Amazon RDS for MySQL DB instances. The security team has asked a database specialist to ensure that the master password is automatically rotated every 30 days for all new DB instances that are launched using the template.

What is the MOST operationally efficient solution to meet these requirements?

Options:

A.

Save the password in an Amazon S3 object. Encrypt the S3 object with an AWS KMS key. Set the KMS key to be rotated every 30 days by setting the EnableKeyRotation property to true. Use a CloudFormation custom resource to read the S3 object to extract the password.

B.

Create an AWS Lambda function to rotate the secret. Modify the CloudFormation template to add an AWS::SecretsManager::RotationSchedule resource. Configure the RotationLambdaARN value and, for the RotationRules property, set the AutomaticallyAfterDays parameter to 30.

C.

Modify the CloudFormation template to use the AWS KMS key as the database password. Configure an Amazon EventBridge rule to invoke the KMS API to rotate the key every 30 days by setting the ScheduleExpression parameter to ***/30***.

D.

Integrate the Amazon RDS for MySQL DB instances with AWS IAM and centrally manage the master database user password.
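
As an illustration of the AWS::SecretsManager::RotationSchedule resource mentioned in the options, a JSON template fragment (expressed here as a Python dict; logical IDs are hypothetical) might look like this:

    import json

    rotation_fragment = {
        "MasterPasswordRotation": {
            "Type": "AWS::SecretsManager::RotationSchedule",
            "Properties": {
                "SecretId": {"Ref": "MasterPasswordSecret"},
                "RotationLambdaARN": {"Fn::GetAtt": ["RotationLambda", "Arn"]},
                "RotationRules": {"AutomaticallyAfterDays": 30},
            },
        }
    }
    print(json.dumps(rotation_fragment, indent=2))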

Question 61

A gaming firm recently purchased a top iOS game that is especially popular during the Christmas season. The business has opted to add a leaderboard to the game, which will be powered by Amazon DynamoDB. The application's load is expected to increase significantly throughout the Christmas season.

Which solution satisfies these criteria at the lowest possible cost?

Options:

A.

DynamoDB Streams

B.

DynamoDB with DynamoDB Accelerator

C.

DynamoDB with on-demand capacity mode

D.

DynamoDB with provisioned capacity mode with Auto Scaling
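
For context, switching an existing table between capacity modes is a one-line change; the sketch below uses a hypothetical table name.

    import boto3

    ddb = boto3.client("dynamodb")

    # On-demand mode: pay per request, no capacity planning required.
    ddb.update_table(TableName="Leaderboard", BillingMode="PAY_PER_REQUEST")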

Question 62

A manufacturing company’s website uses an Amazon Aurora PostgreSQL DB cluster.

Which configurations will result in the LEAST application downtime during a failover? (Choose three.)

Options:

A.

Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.

B.

Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.

C.

Edit and enable Aurora DB cluster cache management in parameter groups.

D.

Set TCP keepalive parameters to a high value.

E.

Set JDBC connection string timeout variables to a low value.

F.

Set Java DNS caching timeouts to a high value.
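
For reference, the cluster's writer and reader endpoints can be retrieved programmatically, which is how an application avoids hard-coding instance addresses that change during a failover; the cluster identifier below is hypothetical.

    import boto3

    rds = boto3.client("rds")

    cluster = rds.describe_db_clusters(
        DBClusterIdentifier="website-aurora-pg"
    )["DBClusters"][0]

    # Aurora repoints these DNS names automatically during a failover.
    writer_endpoint = cluster["Endpoint"]
    reader_endpoint = cluster["ReaderEndpoint"]
    print(writer_endpoint, reader_endpoint)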

Question 63

A worldwide gaming company's development team is experimenting with using Amazon DynamoDB to store in-game events for three mobile titles. The most popular game peaks at 500,000 concurrent users, while the least popular peaks at 10,000. The typical event is 20 KB in size, and the average user session generates one event each second. Each event is assigned a millisecond timestamp and a globally unique identifier.

The lead developer created a single DynamoDB table with the following structure for the events:

  • Partition key: game name
  • Sort key: event identifier
  • Local secondary index: player identifier
  • Event time

In a small-scale development setting, the tests were successful. When the application was deployed to production, however, new events were not being added to the table, and the logs showed DynamoDB errors with the ItemCollectionSizeLimitExceededException error code.

Which design modification should a database professional recommend to the development team?

Options:

A.

Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

B.

Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

C.

Replace the sort key with a compound value consisting of the player identifier concatenated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

D.

Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.
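
To make the trade-offs concrete, the design described in option A could be sketched with boto3 as follows; table, attribute, and index names are illustrative. Unlike a local secondary index, a global secondary index does not share the 10 GB item-collection size limit with its base table.

    import boto3

    ddb = boto3.client("dynamodb")

    ddb.create_table(
        TableName="GameEvents",
        AttributeDefinitions=[
            {"AttributeName": "player_id", "AttributeType": "S"},
            {"AttributeName": "event_time", "AttributeType": "N"},
            {"AttributeName": "game_name", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "player_id", "KeyType": "HASH"},
            {"AttributeName": "event_time", "KeyType": "RANGE"},
        ],
        GlobalSecondaryIndexes=[
            {
                "IndexName": "game-name-event-time",
                "KeySchema": [
                    {"AttributeName": "game_name", "KeyType": "HASH"},
                    {"AttributeName": "event_time", "KeyType": "RANGE"},
                ],
                "Projection": {"ProjectionType": "ALL"},
            }
        ],
        BillingMode="PAY_PER_REQUEST",
    )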

Question 64

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.

How should a Database Specialist ensure DynamoDB can handle the increased traffic?

Options:

A.

Ensure the table is always provisioned to meet peak needs

B.

Allow burst capacity to handle the additional load

C.

Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic

D.

Preprovision additional capacity for the known peaks and then reduce the capacity after the event

Question 65

A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL.

The Database Specialist wrote and executed a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop.

How should the Database Specialist edit the script to fix this issue?

Options:

A.

Stop the source instances before stopping their read replicas

B.

Delete each read replica before stopping its corresponding source instance

C.

Stop the read replicas before stopping their source instances

D.

Use the AWS CLI to stop each read replica and source instance at the same time

Question 66

A large IT hardware manufacturing company wants to deploy a MySQL database solution in the AWS Cloud. The solution should quickly create copies of the company's production databases for test purposes. The solution must deploy the test databases in minutes, and the test data should match the latest production data as closely as possible. Developers must also be able to make changes in the test database and delete the instances afterward.

Which solution meets these requirements?

Options:

A.

Leverage Amazon RDS for MySQL with write-enabled replicas running on Amazon EC2. Create the test copies using a mysqldump backup of the RDS for MySQL DB instances and import them into the new EC2 instances.

B.

Leverage Amazon Aurora MySQL. Use database cloning to create multiple test copies of the production DB clusters.

C.

Leverage Amazon Aurora MySQL. Restore previous production DB instance snapshots into new test copies of Aurora MySQL DB clusters to allow them to make changes.

D.

Leverage Amazon RDS for MySQL. Use database cloning to create multiple developer copies of the production DB instance.
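
For reference, Aurora cloning is exposed through the point-in-time restore API with a copy-on-write restore type; the identifiers below are hypothetical. A clone is available in minutes because storage pages are copied only when they change.

    import boto3

    rds = boto3.client("rds")

    rds.restore_db_cluster_to_point_in_time(
        SourceDBClusterIdentifier="prod-aurora-mysql",
        DBClusterIdentifier="test-clone-1",
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
    )

    # A cloned cluster has no instances by default; add one to connect.
    rds.create_db_instance(
        DBInstanceIdentifier="test-clone-1-instance",
        DBClusterIdentifier="test-clone-1",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
    )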

Question 67

A bike rental company operates an application to track its bikes. The application receives location and condition data from bike sensors. The application also receives rental transaction data from the associated mobile app.

The application uses Amazon DynamoDB as its database layer. The company has configured DynamoDB with provisioned capacity set to 20% above the expected peak load of the application. On an average day, DynamoDB used 22 billion read capacity units (RCUs) and 60 billion write capacity units (WCUs). The application is running well. Usage changes smoothly over the course of the day and is generally shaped like a bell curve. The timing and magnitude of peaks vary based on the weather and season, but the general shape is consistent.

Which solution will provide the MOST cost optimization of the DynamoDB database layer?

Options:

A.

Change the DynamoDB tables to use on-demand capacity.

B.

Use AWS Auto Scaling and configure time-based scaling.

C.

Enable DynamoDB capacity-based auto scaling.

D.

Enable DynamoDB Accelerator (DAX).
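
As an illustration of time-based (scheduled) scaling for DynamoDB, Application Auto Scaling accepts cron-style scheduled actions; the resource names and capacity values below are hypothetical.

    import boto3

    aas = boto3.client("application-autoscaling")

    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/BikeTelemetry",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        MinCapacity=100000,
        MaxCapacity=900000,
    )

    # Raise the capacity floor ahead of the predictable daily peak (UTC).
    aas.put_scheduled_action(
        ServiceNamespace="dynamodb",
        ScheduledActionName="daily-peak-scale-up",
        ResourceId="table/BikeTelemetry",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        Schedule="cron(0 14 * * ? *)",
        ScalableTargetAction={"MinCapacity": 600000, "MaxCapacity": 900000},
    )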

Question 68

A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.

Which solution meets these requirements?

Options:

A.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.

B.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.

C.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.

D.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Question 69

Recently, an ecommerce business transferred one of its SQL Server databases to an Amazon RDS for SQL Server Enterprise Edition database instance. The corporation anticipates an increase in read traffic as a result of an approaching sale. To accommodate the projected read load, a database professional must establish a read replica of the database instance.

Which steps should the database professional take before creating the read replica? (Select two.)

Options:

A.

Identify a potential downtime window and stop the application calls to the source DB instance.

B.

Ensure that automatic backups are enabled for the source DB instance.

C.

Ensure that the source DB instance is a Multi-AZ deployment with Always ON Availability Groups.

D.

Ensure that the source DB instance is a Multi-AZ deployment with SQL Server Database Mirroring (DBM).

E.

Modify the read replica parameter group setting and set the value to 1.

Question 70

A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.

What should a Database Specialist do to meet these requirements with minimal effort?

Options:

A.

Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

B.

Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.

C.

Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

D.

Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
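
For context, publishing RDS logs to CloudWatch Logs and capping retention at 90 days takes two calls; the instance name and log types below are illustrative and vary by engine.

    import boto3

    rds = boto3.client("rds")
    logs = boto3.client("logs")

    rds.modify_db_instance(
        DBInstanceIdentifier="app-mysql",
        CloudwatchLogsExportConfiguration={
            "EnableLogTypes": ["error", "general", "slowquery"]
        },
    )

    # Retention is set per log group once it exists.
    logs.put_retention_policy(
        logGroupName="/aws/rds/instance/app-mysql/error",
        retentionInDays=90,
    )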

Question 71

A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic.

What should a Database Specialist recommend for this user?

Options:

A.

Create an Amazon DynamoDB table with provisioned capacity mode

B.

Create an Amazon DocumentDB cluster

C.

Create an Amazon DynamoDB table with on-demand capacity mode

D.

Create an Amazon Aurora Serverless DB cluster

Question 72

An AWS CloudFormation stack that included an Amazon RDS DB instance was mistakenly deleted, resulting in the loss of recent data. A Database Specialist must add RDS settings to the CloudFormation template to minimize the possibility of future inadvertent instance data loss.

Which settings will satisfy this criterion? (Select three.)

Options:

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain
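
To show where these settings live, a template fragment (as a Python dict; values are illustrative) combining the resource attribute and the two instance properties might look like this:

    import json

    db_fragment = {
        "ProductionDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",  # keep the instance if the stack is deleted
            "Properties": {
                "DeletionProtection": True,       # block DeleteDBInstance calls
                "DeleteAutomatedBackups": False,  # keep automated backups after deletion
            },
        }
    }
    print(json.dumps(db_fragment, indent=2))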

Question 73

A company has an ecommerce web application with an Amazon RDS for MySQL DB instance. The marketing team has noticed some unexpected updates to the product and pricing information on the website, which is impacting sales targets. The marketing team wants a database specialist to audit future database activity to help identify how and when the changes are being made.

What should the database specialist do to meet these requirements? (Choose two.)

Options:

A.

Create an RDS event subscription to the audit event type.

B.

Enable auditing of CONNECT and QUERY_DML events.

C.

SSH to the DB instance and review the database logs.

D.

Publish the database logs to Amazon CloudWatch Logs.

E.

Enable Enhanced Monitoring on the DB instance.
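
For reference, auditing of CONNECT and QUERY_DML events on RDS for MySQL is configured through an option group with the MariaDB audit plugin; the sketch below uses hypothetical names, and the engine version must support the plugin.

    import boto3

    rds = boto3.client("rds")

    rds.create_option_group(
        OptionGroupName="mysql-audit",
        EngineName="mysql",
        MajorEngineVersion="5.7",
        OptionGroupDescription="Audit plugin for the ecommerce DB",
    )

    rds.modify_option_group(
        OptionGroupName="mysql-audit",
        OptionsToInclude=[
            {
                "OptionName": "MARIADB_AUDIT_PLUGIN",
                "OptionSettings": [
                    {"Name": "SERVER_AUDIT_EVENTS", "Value": "CONNECT,QUERY_DML"}
                ],
            }
        ],
        ApplyImmediately=True,
    )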

Question 74

A database specialist is managing an application in the us-west-1 Region and wants to set up disaster recovery in the us-east-1 Region. The Amazon Aurora MySQL DB cluster needs an RPO of 1 minute and an RTO of 2 minutes.

Which approach meets these requirements with no negative performance impact?

Options:

A.

Enable synchronous replication.

B.

Enable asynchronous binlog replication.

C.

Create an Aurora Global Database.

D.

Copy Aurora incremental snapshots to the us-east-1 Region.
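
As an illustration of the global database option, an existing cluster becomes the primary of a global cluster and a secondary cluster is attached in the DR Region; all identifiers below are hypothetical.

    import boto3

    rds_primary = boto3.client("rds", region_name="us-west-1")
    rds_primary.create_global_cluster(
        GlobalClusterIdentifier="app-global",
        SourceDBClusterIdentifier=(
            "arn:aws:rds:us-west-1:123456789012:cluster:app-aurora"
        ),
    )

    # Attach a secondary cluster in the DR Region.
    rds_dr = boto3.client("rds", region_name="us-east-1")
    rds_dr.create_db_cluster(
        DBClusterIdentifier="app-aurora-dr",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="app-global",
    )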

Question 75

A large gaming company is creating a centralized solution to store player session state for multiple online games. The workload requires key-value storage with low latency and will be an equal mix of reads and writes. Data should be written into the AWS Region closest to the user across the games' geographically distributed user base. The architecture should minimize the amount of overhead required to manage the replication of data between Regions.

Which solution meets these requirements?

Options:

A.

Amazon RDS for MySQL with multi-Region read replicas

B.

Amazon Aurora global database

C.

Amazon RDS for Oracle with GoldenGate

D.

Amazon DynamoDB global tables

Question 76

A stock market analysis firm maintains two locations: one in the us-east-1 Region and another in the eu-west-2 Region. The business wants to build an AWS database solution capable of providing rapid and accurate updates.

Dashboards with advanced analytical queries are used to present data in the eu-west-2 office. Because the corporation will use these dashboards to make purchasing decisions, the dashboards must obtain application data in less than a second.

Which solution satisfies these criteria and gives the MOST CURRENT dashboard?

Options:

A.

Deploy an Amazon RDS DB instance in us-east-1 with a read replica instance in eu-west-2. Create an Amazon ElastiCache cluster in eu-west-2 to cache data from the read replica to generate the dashboards.

B.

Use an Amazon DynamoDB global table in us-east-1 with replication into eu-west-2. Use multi-active replication to ensure that updates are quickly propagated to eu-west-2.

C.

Use an Amazon Aurora global database. Deploy the primary DB cluster in us-east-1. Deploy the secondary DB cluster in eu-west-2. Configure the dashboard application to read from the secondary cluster.

D.

Deploy an Amazon RDS for MySQL DB instance in us-east-1 with a read replica instance in eu-west-2. Configure the dashboard application to read from the read replica.

Question 77

To meet new data compliance requirements, a company needs to keep critical data durably stored and readily accessible for 7 years. Data that is more than 1 year old is considered archival data and must automatically be moved out of the Amazon Aurora MySQL DB cluster every week. On average, around 10 GB of new data is added to the database every month. A database specialist must choose the most operationally efficient solution to migrate the archival data to Amazon S3.

Which solution meets these requirements?

Options:

A.

Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.

B.

Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).

C.

Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).

D.

Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.
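
For context, Aurora MySQL can export query results directly to Amazon S3 with SELECT INTO OUTFILE S3, which is what option B schedules from Lambda. A rough sketch follows; the connection details, table, and bucket are placeholders, pymysql is assumed to be packaged with the function, and the cluster needs an IAM role referenced by the aurora_select_into_s3_role parameter.

    import pymysql  # assumed to be bundled in the deployment package

    def handler(event, context):
        # Placeholder endpoint and credentials.
        conn = pymysql.connect(
            host="prod-aurora.cluster-abc.us-east-1.rds.amazonaws.com",
            user="admin",
            password="example-only",
            database="appdb",
        )
        try:
            with conn.cursor() as cur:
                # Export rows older than 1 year directly to S3 ...
                cur.execute(
                    "SELECT * FROM orders "
                    "WHERE created_at < NOW() - INTERVAL 1 YEAR "
                    "INTO OUTFILE S3 's3://archive-bucket/orders' "
                    "FORMAT CSV OVERWRITE ON"
                )
                # ... then remove them from the cluster.
                cur.execute(
                    "DELETE FROM orders "
                    "WHERE created_at < NOW() - INTERVAL 1 YEAR"
                )
            conn.commit()
        finally:
            conn.close()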

Question 78

A company uses Microsoft SQL Server on Amazon RDS in a Multi-AZ deployment as the database engine for its application. The company was recently acquired by another company. A database specialist must rename the database to follow a new naming standard.

Which combination of steps should the database specialist take to rename the database? (Choose two.)

Options:

A.

Turn off automatic snapshots for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on the automatic snapshots.

B.

Turn off Multi-AZ for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on Multi-AZ Mirroring.

C.

Delete all existing snapshots for the DB instance. Use the rdsadmin.dbo.rds_modify_db_name stored procedure.

D.

Update the application with the new database connection string.

E.

Update the DNS record for the DB instance.

Question 79

A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times.

What could be causing these slow response times?

Options:

A.

New volumes created from snapshots load lazily in the background

B.

Long-running statements on the master

C.

Insufficient resources on the master

D.

Overload of a single replication thread by excessive writes on the master

Question 80

A pharmaceutical company's drug search API is using an Amazon Neptune DB cluster. A bulk uploader process automatically updates the information in the database a few times each week. A few weeks ago during a bulk upload, a database specialist noticed that the database started to respond frequently with a ThrottlingException error. The problem also occurred with subsequent uploads.

The database specialist must create a solution to prevent ThrottlingException errors for the database. The solution must minimize the downtime of the cluster.

Which solution meets these requirements?

Options:

A.

Create a read replica that uses a larger instance size than the primary DB instance. Fail over the primary DB instance to the read replica.

B.

Add a read replica to each Availability Zone. Use an instance for the read replica that is the same size as the primary DB instance. Keep the traffic between the API and the database within the Availability Zone.

C.

Create a read replica that uses a larger instance size than the primary DB instance. Offload the reads from the primary DB instance.

D.

Take the latest backup, and restore it in a DB cluster of a larger size. Point the application to the newly created DB cluster.

Question 81

A business needs a data warehouse system that stores data consistently and in a highly organized fashion. The organization demands rapid response times for end-user queries involving current-year data, and users must have access to the whole 15-year dataset when necessary. Additionally, this solution must be able to manage a variable volume of incoming queries. Costs associated with storing the 100 TB of data must be kept to a minimum.

Which solution satisfies these criteria?

Options:

A.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.

B.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.

C.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.

D.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Question 82

A manufacturing company stores its inventory details in an Amazon DynamoDB table in the us-east-2 Region. According to new compliance and regulatory policies, the company is required to back up all of its tables nightly and store these backups in the us-west-2 Region for disaster recovery for 1 year.

Which solution MOST cost-effectively meets these requirements?

Options:

A.

Convert the existing DynamoDB table into a global table and create a global table replica in the us-west-2 Region.

B.

Use AWS Backup to create a backup plan. Configure cross-Region replication in the plan and assign the DynamoDB table to this plan.

C.

Create an on-demand backup of the DynamoDB table and restore this backup in the us-west-2 Region.

D.

Enable Amazon S3 Cross-Region Replication (CRR) on the S3 bucket where DynamoDB on-demand backups are stored.
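
For reference, an AWS Backup plan can copy each recovery point to a vault in another Region as part of the same rule; the vault names and ARN below are hypothetical, and a backup selection would then assign the DynamoDB table to the plan.

    import boto3

    backup = boto3.client("backup", region_name="us-east-2")

    backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "nightly-dynamodb",
            "Rules": [
                {
                    "RuleName": "nightly",
                    "TargetBackupVaultName": "local-vault",
                    "ScheduleExpression": "cron(0 5 * * ? *)",
                    "Lifecycle": {"DeleteAfterDays": 365},
                    "CopyActions": [
                        {
                            "DestinationBackupVaultArn": (
                                "arn:aws:backup:us-west-2:123456789012:"
                                "backup-vault:dr-vault"
                            ),
                            "Lifecycle": {"DeleteAfterDays": 365},
                        }
                    ],
                }
            ],
        }
    )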

Question 83

A company is using Amazon Aurora MySQL as the database for its retail application on AWS. The company receives a notification of a pending database upgrade and wants to ensure upgrades do not occur before or during the most critical time of year. Company leadership is concerned that an Amazon RDS maintenance window will cause an outage during data ingestion.

Which step can be taken to ensure that the application is not interrupted?

Options:

A.

Disable weekly maintenance on the DB cluster.

B.

Clone the DB cluster and migrate it to a new copy of the database.

C.

Choose to defer the upgrade and then find an appropriate down time for patching.

D.

Set up an Aurora Replica and promote it to primary at the time of patching.

Question 84

A company has applications running on Amazon EC2 instances in a private subnet with no internet connectivity. The company deployed a new application that uses Amazon DynamoDB, but the application cannot connect to the DynamoDB tables. A developer already checked that all permissions are set correctly.

What should a database specialist do to resolve this issue while minimizing access to external resources?

Options:

A.

Add a route to an internet gateway in the subnet’s route table.

B.

Add a route to a NAT gateway in the subnet’s route table.

C.

Assign a new security group to the EC2 instances with an outbound rule to ports 80 and 443.

D.

Create a VPC endpoint for DynamoDB and add a route to the endpoint in the subnet’s route table.
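
As an illustration, DynamoDB is reached from a private subnet through a gateway VPC endpoint, which installs a route in the chosen route tables; the IDs below are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Gateway endpoints keep DynamoDB traffic on the AWS network,
    # with no internet or NAT gateway required.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )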

Question 85

A Database Specialist is planning to create a read replica of an existing Amazon RDS for MySQL Multi-AZ DB instance. When using the AWS Management Console to conduct this task, the Database Specialist discovers that the source RDS DB instance does not appear in the read replica source selection box, so the read replica cannot be created.

What is the most likely reason for this?

Options:

A.

The source DB instance has to be converted to Single-AZ first to create a read replica from it.

B.

Enhanced Monitoring is not enabled on the source DB instance.

C.

The minor MySQL version in the source DB instance does not support read replicas.

D.

Automated backups are not enabled on the source DB instance.

Question 86

A financial services organization uses an Amazon Aurora PostgreSQL DB cluster to host an application on AWS. No log files detailing database administrator activity were discovered during a recent examination. A database professional must suggest a solution that provides access to the database while maintaining activity logs. The solution should be simple to implement and have a negligible effect on performance.

Which solution should the database specialist recommend?

Options:

A.

Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.

B.

Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.

C.

Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.

D.

Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.

Question 87

A company migrated one of its business-critical database workloads to an Amazon Aurora Multi-AZ DB cluster. The company requires a very low RTO and needs to improve the application recovery time after database failovers.

Which approach meets these requirements?

Options:

A.

Set the max_connections parameter to 16,000 in the instance-level parameter group.

B.

Modify the client connection timeout to 300 seconds.

C.

Create an Amazon RDS Proxy database proxy and update client connections to point to the proxy endpoint.

D.

Enable the query cache at the instance level.
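
For context, RDS Proxy sits between the application and the cluster, holding a warm connection pool so that failovers do not force clients to re-establish connections from scratch; the sketch below uses placeholder ARNs, subnets, and names.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_proxy(
        DBProxyName="app-proxy",
        EngineFamily="MYSQL",
        Auth=[
            {
                "AuthScheme": "SECRETS",
                "SecretArn": (
                    "arn:aws:secretsmanager:us-east-1:123456789012:"
                    "secret:db-credentials"
                ),
            }
        ],
        RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",
        VpcSubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    )

    # Point the proxy at the Aurora cluster; clients then connect to the
    # proxy endpoint instead of the cluster endpoint.
    rds.register_db_proxy_targets(
        DBProxyName="app-proxy",
        DBClusterIdentifiers=["prod-aurora"],
    )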

Question 88

A business uses Amazon EC2 instances in VPC A to serve an internal file-sharing application. This application is supported by an Amazon ElastiCache cluster in VPC B that is peered with VPC A. The corporation migrated its application instances from VPC A to VPC B. According to the logs, the file-sharing application is no longer able to connect to the ElastiCache cluster.

What is the best course of action for a database professional to take in order to remedy this issue?

Options:

A.

Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.

B.

Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.

C.

Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC CIDR blocks from the ElastiCache cluster.

D.

Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances security group to the ElastiCache cluster.
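
For reference, a security-group-to-security-group rule like the one described in option D can be added as follows; the group IDs are hypothetical, and 6379 is the default Redis port.

    import boto3

    ec2 = boto3.client("ec2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-elasticache001",  # ElastiCache cluster's security group
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 6379,
                "ToPort": 6379,
                "UserIdGroupPairs": [{"GroupId": "sg-appservers001"}],
            }
        ],
    )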

Question 89

A database specialist is building a stack using AWS CloudFormation. The database specialist wants to prevent the stack's Amazon RDS ProductionDatabase resource from being accidentally deleted.

Which solution will satisfy this criterion?

Options:

A.

Create a stack policy to prevent updates. Include "Effect" : "ProductionDatabase" and "Resource" : "Deny" in the policy.

B.

Create an AWS CloudFormation stack in XML format. Set xAttribute as false.

C.

Create an RDS DB instance without the DeletionPolicy attribute. Disable termination protection.

D.

Create a stack policy to prevent updates. Include Effect, Deny, and Resource :ProductionDatabase in the policy.
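
To show the shape of a valid stack policy (note that a stack policy guards resources during stack updates, while the DeletionPolicy resource attribute guards against deletion), a sketch with a hypothetical stack name follows.

    import json

    import boto3

    policy = {
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "Update:*",
                "Principal": "*",
                "Resource": "LogicalResourceId/ProductionDatabase",
            },
            {
                "Effect": "Allow",
                "Action": "Update:*",
                "Principal": "*",
                "Resource": "*",
            },
        ]
    }

    boto3.client("cloudformation").set_stack_policy(
        StackName="prod-stack",
        StackPolicyBody=json.dumps(policy),
    )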

Question 90

A company is going through a security audit. The audit team has identified a cleartext master user password in the AWS CloudFormation templates for Amazon RDS for MySQL DB instances. The audit team has flagged this as a security risk to the database team.

What should a database specialist do to mitigate this risk?

Options:

A.

Change all the databases to use AWS IAM for authentication and remove all the cleartext passwords in CloudFormation templates.

B.

Use an AWS Secrets Manager resource to generate a random password and reference the secret in the CloudFormation template.

C.

Remove the passwords from the CloudFormation templates so Amazon RDS prompts for the password when the database is being created.

D.

Remove the passwords from the CloudFormation template and store them in a separate file. Replace the passwords by running CloudFormation using a sed command.

Question 91

A finance company migrated its 3 TB on-premises PostgreSQL database to an Amazon Aurora PostgreSQL DB cluster. During a review after the migration, a database specialist discovers that the database is not encrypted at rest. The database must be encrypted at rest as soon as possible to meet security requirements. The database specialist must enable encryption for the DB cluster with minimal downtime.

Which solution will meet these requirements?

Options:

A.

Modify the unencrypted DB cluster using the AWS Management Console. Enable encryption and choose to apply the change immediately.

B.

Take a snapshot of the unencrypted DB cluster and restore it to a new DB cluster with encryption enabled. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.

C.

Create an encrypted Aurora Replica of the unencrypted DB cluster. Promote the Aurora Replica as the new master.

D.

Create a new DB cluster with encryption enabled and use the pg_dump and pg_restore utilities to load data to the new DB cluster. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.
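
For context, the snapshot-and-restore path looks like the sketch below; identifiers and the KMS key are placeholders, and in practice you would wait for the snapshot to become available before restoring.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_cluster_snapshot(
        DBClusterIdentifier="finance-aurora",
        DBClusterSnapshotIdentifier="finance-aurora-pre-encrypt",
    )

    # Wait for the snapshot to become available (waiter omitted for brevity).
    # Supplying a KMS key makes the restored cluster encrypted at rest.
    rds.restore_db_cluster_from_snapshot(
        DBClusterIdentifier="finance-aurora-encrypted",
        SnapshotIdentifier="finance-aurora-pre-encrypt",
        Engine="aurora-postgresql",
        KmsKeyId="alias/aws/rds",
    )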

Question 92

A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest.

Which step will provide additional security?

Options:

A.

Set up NACLs that allow the entire EC2 subnet to access the DB instance

B.

Disable the master user account

C.

Set up a security group that blocks SSH to the DB instance

D.

Set up RDS to use SSL for data in transit

Question 93

A business is developing a web application on AWS. The application requires a database that supports concurrent read and write operations in several AWS Regions. The database must also propagate data changes across Regions as they occur. The application must be highly available and have a latency of less than a few hundred milliseconds.

Which solution satisfies these criteria?

Options:

A.

Amazon DynamoDB global tables

B.

Amazon DynamoDB streams with AWS Lambda to replicate the data

C.

An Amazon ElastiCache for Redis cluster with cluster mode enabled and multiple shards

D.

An Amazon Aurora global database

Question 94

A database expert is responsible for building a highly available online transaction processing (OLTP) solution that makes use of Amazon RDS for MySQL production databases. Disaster recovery criteria include a cross-regional deployment and an RPO and RTO of 5 and 30 minutes, respectively.

What should the database professional do to ensure that the database meets the criteria for high availability and disaster recovery?

Options:

A.

Use a Multi-AZ deployment in each Region.

B.

Use read replica deployments in all Availability Zones of the secondary Region.

C.

Use Multi-AZ and read replica deployments within a Region.

D.

Use Multi-AZ and deploy a read replica in a secondary Region.

Question 95

A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hard-coded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned.

Which solution will enable this change?

Options:

A.

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack’s mappings.

B.

Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.

C.

Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.

D.

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
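
To make the parameter-plus-Ref pattern concrete, a trimmed JSON template (shown as a Python dict; names are illustrative) would wire the two Number parameters into the table like this:

    import json

    template = {
        "Parameters": {
            "rcuCount": {"Type": "Number", "Default": 5},
            "wcuCount": {"Type": "Number", "Default": 5},
        },
        "Resources": {
            "AppTable": {
                "Type": "AWS::DynamoDB::Table",
                "Properties": {
                    "AttributeDefinitions": [
                        {"AttributeName": "pk", "AttributeType": "S"}
                    ],
                    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
                    "ProvisionedThroughput": {
                        "ReadCapacityUnits": {"Ref": "rcuCount"},
                        "WriteCapacityUnits": {"Ref": "wcuCount"},
                    },
                },
            }
        },
    }
    print(json.dumps(template, indent=2))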
