
DBS-C01 AWS Certified Database - Specialty Questions and Answers

Questions 4

Recently, a financial institution created a portfolio management service. The application's backend is powered by Amazon Aurora MySQL.

The firm requires a recovery time objective (RTO) of five minutes and a recovery point objective (RPO) of five minutes. A database professional must create a disaster recovery solution that is efficient and has low replication latency.

How should the database professional tackle these requirements?

Options:

A.

Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.

B.

Configure an Amazon Aurora global database and add a different AWS Region.

C.

Configure a binlog and create a replica in a different AWS Region.

D.

Configure a cross-Region read replica.

Questions 5

A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete.

What is the MOST likely cause of the 5-minute connection outage?

Options:

A.

After a database crash, Aurora needed to replay the redo log from the last database checkpoint

B.

The client-side application is caching the DNS data and its TTL is set too high

C.

After failover, the Aurora DB cluster needs time to warm up before accepting client connections

D.

There were no active Aurora Replicas in the Aurora DB cluster

Questions 6

A database specialist is designing an enterprise application for a large company. The application uses Amazon DynamoDB with DynamoDB Accelerator (DAX).

The database specialist observes that most of the queries are not found in the DAX cache and that they still require DynamoDB table reads.

What should the database specialist review first to improve the utility of DAX?

Options:

A.

The DynamoDB ConsumedReadCapacityUnits metric

B.

The trust relationship to perform the DynamoDB API calls

C.

The DAX cluster's TTL setting

D.

The validity of customer-specified AWS Key Management Service (AWS KMS) keys for DAX encryption at rest

Questions 7

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.

How can this solution be implemented?

Options:

A.

Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.

B.

Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.

C.

Use the AWS CLI to update the DynamoDB table and modify the partition key.

D.

Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.

Questions 8

A company migrated an on-premises Oracle database to Amazon RDS for Oracle. A database specialist needs to monitor the latency of the database.

Which solution will meet this requirement with the LEAST operational overhead?

Options:

A.

Publish RDS Performance insights metrics to Amazon CloudWatch. Add AWS CloudTrail filters to monitor database performance

B.

Install Oracle Statspack. Enable the performance statistics feature to collect, store, and display performance data to monitor database performance.

C.

Enable RDS Performance Insights to visualize the database load. Enable Enhanced Monitoring to view how different threads use the CPU

D.

Create a new DB parameter group that includes the AllocatedStorage, DBInstanceClassMemory, and DBInstanceVCPU variables. Enable RDS Performance Insights
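
For reference, a minimal boto3 sketch of turning on Performance Insights and Enhanced Monitoring on an existing instance; the instance identifier and monitoring role ARN are hypothetical placeholders:

import boto3

rds = boto3.client("rds")

# Enable Performance Insights and Enhanced Monitoring on an existing
# DB instance (identifier and role ARN are placeholders).
rds.modify_db_instance(
    DBInstanceIdentifier="my-oracle-instance",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,   # days
    MonitoringInterval=60,                  # seconds; 0 disables Enhanced Monitoring
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    ApplyImmediately=True,
)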

Questions 9

A company's database specialist implements an AWS Database Migration Service (AWS DMS) task for change data capture (CDC) to replicate data from an on-premises Oracle database to Amazon S3. When usage of the company's application increases, the database specialist notices multiple hours of latency with the CDC.

Which solutions will reduce this latency? (Choose two.)

Options:

A.

Configure the DMS task to run in full large binary object (LOB) mode.

B.

Configure the DMS task to run in limited large binary object (LOB) mode.

C.

Create a Multi-AZ replication instance.

D.

Load tables in parallel by creating multiple replication instances for sets of tables that participate in common transactions.

E.

Replicate tables in parallel by creating multiple DMS tasks for sets of tables that do not participate in common transactions.

Questions 10

A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:

ERROR: could not write block 7507718 of temporary file: No space left on device

What is the cause of this error and what should the Database Specialist do to resolve this issue?

Options:

A.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.

B.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.

C.

The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.

D.

The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.
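
For context, Aurora keeps temporary tables and sort files on instance-local storage, which is sized by instance class. A minimal boto3 sketch (the instance identifier is a placeholder) of checking the FreeLocalStorage CloudWatch metric to confirm local storage pressure:

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Check minimum free local storage (bytes) over the last hour for a
# hypothetical Aurora instance identifier.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="FreeLocalStorage",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "aurora-poc-instance"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Minimum"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"])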

Questions 11

The website of a manufacturing firm makes use of an Amazon Aurora PostgreSQL database cluster.

Which settings will result in the LEAST amount of downtime for the application during failover? (Select three.)

Options:

A.

Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.

B.

Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.

C.

Edit and enable Aurora DB cluster cache management in parameter groups.

D.

Set TCP keepalive parameters to a high value.

E.

Set JDBC connection string timeout variables to a low value.

F.

Set Java DNS caching timeouts to a high value.

Questions 12

A company needs to migrate Oracle Database Standard Edition running on an Amazon EC2 instance to an Amazon RDS for Oracle DB instance with Multi-AZ. The database supports an ecommerce website that runs continuously. The company can only provide a maintenance window of up to 5 minutes.

Which solution will meet these requirements?

Options:

A.

Configure Oracle Real Application Clusters (RAC) on the EC2 instance and the RDS DB instance. Update the connection string to point to the RAC cluster. Once the EC2 instance and RDS DB instance are in sync, fail over from Amazon EC2 to Amazon RDS.

B.

Export the Oracle database from the EC2 instance using Oracle Data Pump and perform an import into Amazon RDS. Stop the application for the entire process. When the import is complete, change the database connection string and then restart the application.

C.

Configure AWS DMS with the EC2 instance as the source and the RDS DB instance as the destination. Stop the application when the replication is in sync, change the database connection string, and then restart the application.

D.

Configure AWS DataSync with the EC2 instance as the source and the RDS DB instance as the destination. Stop the application when the replication is in sync, change the database connection string, and then restart the application.

Questions 13

A large retail company recently migrated its three-tier ecommerce applications to AWS. The company’s backend database is hosted on Amazon Aurora PostgreSQL. During peak times, users complain about longer page load times. A database specialist reviewed Amazon RDS Performance Insights and found a spike in IO:XactSync wait events. The SQL attached to the wait events are all single INSERT statements.

How should this issue be resolved?

Options:

A.

Modify the application to commit transactions in batches

B.

Add a new Aurora Replica to the Aurora DB cluster.

C.

Add an Amazon ElastiCache for Redis cluster and change the application to write through.

D.

Change the Aurora DB cluster storage to Provisioned IOPS (PIOPS).

Questions 14

A company is using an Amazon Aurora MySQL database with Performance Insights enabled. A database specialist is checking Performance Insights and observes an alert message that starts with the following phrase: "Performance Insights is unable to collect SQL Digest statistics on new queries..."

Which action will resolve this alert message?

Options:

A.

Truncate the events_statements_summary_by_digest table.

B.

Change the AWS Key Management Service (AWS KMS) key that is used to enable Performance Insights.

C.

Set the value for the performance_schema parameter in the parameter group to 1.

D.

Disable and reenable Performance Insights to be effective in the next maintenance window.
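
For reference, performance_schema is a static MySQL parameter, so a change is applied pending-reboot and takes effect only after the instance restarts. A minimal boto3 sketch, assuming a custom parameter group and instance identifier (both hypothetical):

import boto3

rds = boto3.client("rds")

# performance_schema is static: apply with pending-reboot, then reboot.
rds.modify_db_parameter_group(
    DBParameterGroupName="aurora-mysql-custom",   # hypothetical group name
    Parameters=[{
        "ParameterName": "performance_schema",
        "ParameterValue": "1",
        "ApplyMethod": "pending-reboot",
    }],
)
rds.reboot_db_instance(DBInstanceIdentifier="aurora-mysql-writer")  # placeholder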

Questions 15

A social media company recently launched a new feature that gives users the ability to share live feeds of their daily activities with their followers. The company has an Amazon RDS for MySQL DB instance that stores data about follower engagement.

After the new feature launched, the company noticed high CPU utilization and high database latency during reads and writes. The company wants to implement a solution that will identify the source of the high CPU utilization.

Which solution will meet these requirements with the LEAST administrative oversight?

Options:

A.

Use Amazon DevOps Guru insights

B.

Use AWS CloudTrail

C.

Use Amazon CloudWatch Logs

D.

Use Amazon Aurora Database Activity Streams

Questions 16

A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.

Which solution meets these requirements?

Options:

A.

Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.

B.

Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.

C.

Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.

D.

Change the DB clusters to the burstable instance family.

Questions 17

A company performs an audit on various data stores and discovers that an Amazon S3 bucket is storing a credit card number. The S3 bucket is the target of an AWS Database Migration Service (AWS DMS) continuous replication task that uses change data capture (CDC). The company determines that this field is not needed by anyone who uses the target data. The company has manually removed the existing credit card data from the S3 bucket.

What is the MOST operationally efficient way to prevent new credit card data from being written to the S3 bucket?

Options:

A.

Add a transformation rule to the DMS task to ignore the column from the source data endpoint.

B.

Add a transformation rule to the DMS task to mask the column by using a simple SQL query.

C.

Configure the target S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS).

D.

Remove the credit card number column from the data source so that the DMS task does not need to be altered.

Questions 18

A company is developing an application that performs intensive in-memory operations on advanced data structures such as sorted sets. The application requires sub-millisecond latency for reads and writes. The application occasionally must run a group of commands as an ACID-compliant operation. A database specialist is setting up the database for this application. The database specialist needs the ability to create a new database cluster from the latest backup of the production cluster.

Which type of cluster should the database specialist create to meet these requirements?

Options:

A.

Amazon ElastiCache for Memcached

B.

Amazon Neptune

C.

Amazon ElastiCache for Redis

D.

Amazon DynamoDB Accelerator (DAX)

Questions 19

A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to have the reporting workload for the human resources department be directed to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster.

Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?

Options:

A.

Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.

B.

Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.

C.

Create additional readers to cater to the different scenarios.

D.

Use custom endpoints to satisfy the different workloads.

Questions 20

A worldwide digital advertising corporation collects browser information in order to provide targeted visitors with contextually relevant pictures, websites, and connections. A single page load may create many events, each of which must be kept separately. A single event may have a maximum size of 200 KB and an average size of 10 KB. Each page load requires a query of the user's browsing history in order to deliver suggestions for targeted advertising. The advertising corporation anticipates daily page views of more than 1 billion from people in the United States, Europe, Hong Kong, and India. The information structure differs according to the event. Additionally, browsing information must be written and read with a very low latency to guarantee that consumers have a positive viewing experience.

Which database solution satisfies these criteria?

Options:

A.

Amazon DocumentDB

B.

Amazon RDS Multi-AZ deployment

C.

Amazon DynamoDB global table

D.

Amazon Aurora Global Database

Questions 21

A database specialist is planning to migrate a 4 TB Microsoft SQL Server DB instance from on premises to Amazon RDS for SQL Server. The database is primarily used for nightly batch processing.

Which RDS storage option meets these requirements MOST cost-effectively?

Options:

A.

General Purpose SSD storage

B.

Provisioned IOPS storage

C.

Magnetic storage

D.

Throughput Optimized hard disk drives (HDD)

Questions 22

A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted.

Which actions should the database specialist take to meet these requirements? (Select TWO.)

Options:

A.

Download the SSH RSA public key for Amazon DocumentDB. Update the application configuration to use the instance endpoint instead of the cluster endpoint and run queries over SSH.

B.

Download the SSL .pem public key for Amazon DocumentDB. Add the key to the application package and make sure the application is using the key while connecting to the cluster.

C.

Create a snapshot of the unencrypted cluster. Restore the unencrypted snapshot as a new cluster with the --storage-encrypted parameter set to true. Update the application to point to the new cluster.

D.

Create an Amazon DocumentDB VPC endpoint to prevent the traffic from going to the Amazon DocumentDB public endpoint. Set a VPC endpoint policy to allow only the application instance's security group to connect.

E.

Activate encryption at rest using the modify-db-cluster command with the --storage-encrypted parameter set to true. Set the security group of the cluster to allow only the application instance's security group to connect.
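
For context, encryption at rest for Amazon DocumentDB can only be chosen when a cluster is created, so an unencrypted cluster is typically re-created from a snapshot. A minimal boto3 sketch with hypothetical identifiers, where supplying a KMS key on restore yields an encrypted cluster:

import boto3

docdb = boto3.client("docdb")

# Snapshot the unencrypted cluster, then restore it as a new cluster.
# Identifiers and the KMS key are placeholders.
docdb.create_db_cluster_snapshot(
    DBClusterIdentifier="docdb-unencrypted",
    DBClusterSnapshotIdentifier="docdb-pre-encryption-snap",
)
# Supplying KmsKeyId encrypts the restored cluster at rest.
docdb.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="docdb-encrypted",
    SnapshotIdentifier="docdb-pre-encryption-snap",
    Engine="docdb",
    KmsKeyId="alias/aws/rds",
)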

Questions 23

A database specialist manages a critical Amazon RDS for MySQL DB instance for a company. The data stored daily could vary from 0.01% to 10% of the current database size. The database specialist needs to ensure that the DB instance storage grows as needed.

What is the MOST operationally efficient and cost-effective solution?

Options:

A.

Configure RDS Storage Auto Scaling.

B.

Configure RDS instance Auto Scaling.

C.

Modify the DB instance allocated storage to meet the forecasted requirements.

D.

Monitor the Amazon CloudWatch FreeStorageSpace metric daily and add storage as required.
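
For reference, RDS Storage Auto Scaling is enabled by setting a maximum storage threshold on the instance, after which RDS grows storage automatically. A minimal boto3 sketch with placeholder values:

import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage turns on RDS Storage Auto Scaling; RDS
# grows the allocated storage automatically up to this ceiling (GiB).
rds.modify_db_instance(
    DBInstanceIdentifier="critical-mysql-instance",  # placeholder
    MaxAllocatedStorage=1000,
    ApplyImmediately=True,
)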

Questions 24

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.

What can the Database Specialist do to reduce the overall cost?

Options:

A.

Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.

B.

Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.

C.

Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.

D.

Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.
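
For reference, DynamoDB time to live (TTL) deletes expired items at no additional cost once an epoch-timestamp attribute is designated. A minimal boto3 sketch, assuming a hypothetical table and an attribute named expires_at:

import boto3

dynamodb = boto3.client("dynamodb")

# Designate an epoch-seconds attribute as the TTL attribute; DynamoDB
# deletes items after that time passes, at no additional cost.
dynamodb.update_time_to_live(
    TableName="transactions",  # placeholder table name
    TimeToLiveSpecification={
        "Enabled": True,
        "AttributeName": "expires_at",  # hypothetical attribute
    },
)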

Questions 25

A startup company in the travel industry wants to create an application that includes a personal travel assistant to display information for nearby airports based on user location. The application will use Amazon DynamoDB and must be able to access and display attributes such as airline names, arrival times, and flight numbers. However, the application must not be able to access or display pilot names or passenger counts.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Use a proxy tier between the application and DynamoDB to regulate access to specific tables, items, and attributes.

B.

Use IAM policies with a combination of IAM conditions and actions to implement fine-grained access control.

C.

Use DynamoDB resource policies to regulate access to specific tables, items, and attributes.

D.

Configure an AWS Lambda function to extract only allowed attributes from tables based on user profiles.

Questions 26

A manufacturing company stores its inventory details in an Amazon DynamoDB table in the us-east-2 Region. According to new compliance and regulatory policies, the company is required to back up all of its tables nightly and store these backups in the us-west-2 Region for disaster recovery for 1 year

Which solution MOST cost-effectively meets these requirements?

Options:

A.

Convert the existing DynamoDB table into a global table and create a global table replica in the us-west-2 Region.

B.

Use AWS Backup to create a backup plan. Configure cross-Region replication in the plan and assign the DynamoDB table to this plan

C.

Create an on-demand backup of the DynamoDB table and restore this backup in the us-west-2 Region.

D.

Enable Amazon S3 Cross-Region Replication (CRR) on the S3 bucket where DynamoDB on-demand backups are stored.

Questions 27

A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company’s Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.

When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a “could not connect to server: Connection times out” error message to Amazon CloudWatch Logs.

What is the cause of this error?

Options:

A.

The user name and password the application is using are incorrect.

B.

The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.

C.

The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.

D.

The user name and password are correct, but the user is not authorized to use the DB instance.
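
For context, a "connection timed out" error from application servers that can resolve the endpoint usually points at network rules rather than credentials. A minimal boto3 sketch, with placeholder security group IDs, of allowing the application tier into the DB instance's security group on port 3306:

import boto3

ec2 = boto3.client("ec2")

# Allow inbound MySQL traffic to the DB security group from the
# application servers' security group (both IDs are placeholders).
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000000",       # DB instance security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0app000000000000"}],
    }],
)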

Questions 28

A retail company uses Amazon Redshift for its 1 PB data warehouse. Several analytical workloads run on a Redshift cluster. The tables within the cluster have grown rapidly. End users are reporting poor performance of daily reports that run on the transaction fact tables.

A database specialist must change the design of the tables to improve the reporting performance. All the changes must be applied dynamically. The changes must have the least possible impact on users and must optimize the overall table size.

Which solution will meet these requirements?

Options:

A.

Use the STL_SCAN view to understand how the tables are getting scanned. Identify the columns that are used in filter and group by conditions. Create a temporary table with the identified columns as sort keys and compression as Zstandard (ZSTD) by copying the data from the original table. Drop the original table. Give the temporary table the same name that the original table had.

B.

Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Convert the recommended columns from Redshift Advisor into sort keys with compression encoding set to RAW. Set the rest of the column compression encoding to AZ64.

C.

Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Convert the recommended columns from Redshift Advisor into sort keys with compression encoding set to LZO. Set the rest of the column compression encoding to Zstandard (ZSTD).

D.

Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Create a deep copy of the table with the identified columns as sort keys and compression for all columns as Zstandard (ZSTD) by using a bulk insert. Drop the original table. Give the copy table the same name that the original table had.

Questions 29

A business is transferring its on-premises database workloads to the AWS Cloud. A database professional who is migrating an Oracle database with a very large table to Amazon RDS has chosen AWS DMS. The database professional observes that AWS DMS is taking considerable time to migrate the data.

Which activities would increase the pace of data migration? (Select three.)

Options:

A.

Create multiple AWS DMS tasks to migrate the large table.

B.

Configure the AWS DMS replication instance with Multi-AZ.

C.

Increase the capacity of the AWS DMS replication server.

D.

Establish an AWS Direct Connect connection between the on-premises data center and AWS.

E.

Enable an Amazon RDS Multi-AZ configuration.

F.

Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.

Questions 30

A company is using Amazon Aurora PostgreSQL for the backend of its application. The system users are complaining that the responses are slow. A database specialist has determined that the queries to Aurora take longer during peak times. With the Amazon RDS Performance Insights dashboard, the load in the chart for average active sessions is often above the line that denotes maximum CPU usage and the wait state shows that most wait events are IO:XactSync.

What should the company do to resolve these performance issues?

Options:

A.

Add an Aurora Replica to scale the read traffic.

B.

Scale up the DB instance class.

C.

Modify applications to commit transactions in batches.

D.

Modify applications to avoid conflicts by taking locks.

Questions 31

A business that specializes in internet advertising is developing an application that will show adverts to its customers. The program stores data in an Amazon DynamoDB database. Additionally, the application caches its reads using a DynamoDB Accelerator (DAX) cluster. The majority of reads come through the GetItem and BatchGetItem operations. The application does not require strongly consistent reads.

After deployment, the application cache does not behave as intended. Certain strongly consistent queries to the DAX cluster are responding in several milliseconds rather than microseconds.

How can the business optimize cache behavior in order to boost application performance?

Options:

A.

Increase the size of the DAX cluster.

B.

Configure DAX to be an item cache with no query cache

C.

Use eventually consistent reads instead of strongly consistent reads.

D.

Create a new DAX cluster with a higher TTL for the item cache.

Questions 32

A company has a production environment running on Amazon RDS for SQL Server with an in-house web application as the front end. During the last application maintenance window, new functionality was added to the web application to enhance the reporting capabilities for management. Since the update, the application is slow to respond to some reporting queries.

How should the company identify the source of the problem?

Options:

A.

Install and configure Amazon CloudWatch Application Insights for Microsoft .NET and Microsoft SQL Server. Use a CloudWatch dashboard to identify the root cause.

B.

Enable RDS Performance Insights and determine which query is creating the problem. Request changes to the query to address the problem.

C.

Use AWS X-Ray deployed with Amazon RDS to track query system traces.

D.

Create a support request and work with AWS Support to identify the source of the issue.

Questions 33

A business just transitioned from an on-premises Oracle database to Amazon Aurora PostgreSQL. Following the move, the organization observed that every day around 3:00 PM, the application's response time is substantially slower. The firm has determined that the problem is with the database, not the application.

Which set of procedures should the Database Specialist do to locate the erroneous PostgreSQL query most efficiently?

Options:

A.

Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.

B.

Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.

C.

Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.

D.

Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Questions 34

A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that there is a new minor version available, the database specialist has issued an AWS CLI command to enable automatic minor version updates. The command runs successfully, but checking the Aurora DB cluster indicates that no update to the Aurora version has been made.

What might account for this? (Choose two.)

Options:

A.

The new minor version has not yet been designated as preferred and requires a manual upgrade.

B.

Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.

C.

Applying minor version upgrades requires sufficient free space.

D.

The AWS CLI command did not include an apply-immediately parameter.

E.

Aurora has detected a breaking change in the new minor version and has automatically rejected the upgrade.
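
For reference, automatic minor version upgrades are set per DB instance, and the upgrade itself waits until AWS designates a minor version as the preferred version and a maintenance window arrives. A minimal boto3 sketch with a placeholder identifier:

import boto3

rds = boto3.client("rds")

# Opt an instance in to automatic minor version upgrades; the actual
# upgrade still waits for a preferred version and a maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-instance-1",  # placeholder
    AutoMinorVersionUpgrade=True,
    ApplyImmediately=True,
)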

Questions 35

A company plans to migrate a MySQL-based application from an on-premises environment to AWS. The application performs database joins across several tables and uses indexes for faster query response times. The company needs the database to be highly available with automatic failover.

Which solution on AWS will meet these requirements with the LEAST operational overhead?

Options:

A.

Deploy an Amazon RDS DB instance with a read replica.

B.

Deploy an Amazon RDS Multi-AZ DB instance.

C.

Deploy Amazon DynamoDB global tables.

D.

Deploy multiple Amazon RDS DB instances. Use Amazon Route 53 DNS with failover health checks configured.

Questions 36

A major organization maintains a number of Amazon DB clusters. Each of these clusters is configured differently to meet certain needs. These configurations may be classified into wider groups based on the team and use case.

A database administrator wishes to streamline the process of storing and updating these settings. Additionally, the database administrator wants to guarantee that changes to certain configuration categories are automatically applied to all instances as necessary.

Which AWS service or functionality will assist in automating and achieving this goal?

Options:

A.

AWS Systems Manager Parameter Store

B.

DB parameter group

C.

AWS Config

D.

AWS Secrets Manager
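
For context, AWS Systems Manager Parameter Store supports hierarchical parameter names, which map naturally onto team- and use-case-based configuration categories. A minimal boto3 sketch with hypothetical parameter paths:

import boto3

ssm = boto3.client("ssm")

# Store a configuration value under a hierarchical, team-scoped path.
ssm.put_parameter(
    Name="/databases/reporting/max_connections",  # hypothetical path
    Value="500",
    Type="String",
    Overwrite=True,
)

# Fetch every setting in a category so it can be applied to all
# instances that belong to that category.
resp = ssm.get_parameters_by_path(Path="/databases/reporting", Recursive=True)
for param in resp["Parameters"]:
    print(param["Name"], param["Value"])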

Questions 37

A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. This application is very popular and the company expects a tenfold increase in the user base in next few months. The application experiences more traffic during the morning and evening hours.

This application has two parts:

  • An in-house booking component that accepts online bookings that directly correspond to simultaneous requests from users.
  • A third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data.

A database specialist needs to design a cost-effective database solution to handle this workload. Which solution meets these requirements?

Options:

A.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.

B.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.

C.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.

D.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.

Questions 38

A company is migrating a mission-critical 2-TB Oracle database from on premises to Amazon Aurora. The cost for the database migration must be kept to a minimum, and both the on-premises Oracle database and the Aurora DB cluster must remain open for write traffic until the company is ready to completely cut over to Aurora.

Which combination of actions should a database specialist take to accomplish this migration as quickly as possible? (Choose two.)

Options:

A.

Use the AWS Schema Conversion Tool (AWS SCT) to convert the source database schema. Then restore the converted schema to the target Aurora DB cluster.

B.

Use Oracle’s Data Pump tool to export a copy of the source database schema and manually edit the schema in a text editor to make it compatible with Aurora.

C.

Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Select the migration type to replicate ongoing changes to keep the source and target databases in sync until the company is ready to move all user traffic to the Aurora DB cluster.

D.

Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS Kinesis Data Firehose stream to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.

E.

Create an AWS Glue job and related resources to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS DMS task to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.

Questions 39

A company needs to deploy an Amazon Aurora PostgreSQL DB instance into multiple accounts. The company will initiate each DB instance from an existing Aurora PostgreSQL DB instance that runs in a shared account. The company wants the process to be repeatable in case the company adds additional accounts in the future. The company also wants to be able to verify if manual changes have been made to the DB instance configurations after the company deploys the DB instances.

A database specialist has determined that the company needs to create an AWS CloudFormation template with the necessary configuration to create a DB instance in an account by using a snapshot of the existing DB instance to initialize the DB instance. The company will also use the CloudFormation template's parameters to provide key values for the DB instance creation (account ID, etc.).

Which final step will meet these requirements in the MOST operationally efficient way?

Options:

A.

Create a bash script to compare the configuration to the current DB instance configuration and to report any changes.

B.

Use the CloudFormation drift detection feature to check if the DB instance configurations have changed.

C.

Set up CloudFormation to use drift detection to send notifications if the DB instance configurations have been changed.

D.

Create an AWS Lambda function to compare the configuration to the current DB instance configuration and to report any changes.

Questions 40

A retail company is about to migrate its online and mobile store to AWS. The company’s CEO has strategic plans to grow the brand globally. A Database Specialist has been challenged to provide predictable read and write database performance with minimal operational overhead.

What should the Database Specialist do to meet these requirements?

Options:

A.

Use Amazon DynamoDB global tables to synchronize transactions

B.

Use Amazon EMR to copy the orders table data across Regions

C.

Use Amazon Aurora Global Database to synchronize all transactions

D.

Use Amazon DynamoDB Streams to replicate all DynamoDB transactions and sync them

Questions 41

A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover. The migration must have minimal impact on the performance of the source database.

Which approach will MOST effectively meet these requirements?

Options:

A.

Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.

B.

Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.

C.

Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.

D.

Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.

Questions 42

A company’s ecommerce website uses Amazon DynamoDB for purchase orders. Each order is made up of a Customer ID and an Order ID. The DynamoDB table uses the Customer ID as the partition key and the Order ID as the sort key.

To meet a new requirement, the company also wants the ability to query the table by using a third attribute named Invoice ID. Queries using the Invoice ID must be strongly consistent. A database specialist must provide this capability with optimal performance and minimal overhead.

What should the database administrator do to meet these requirements?

Options:

A.

Add a global secondary index on Invoice ID to the existing table.

B.

Add a local secondary index on Invoice ID to the existing table.

C.

Recreate the table by using the latest snapshot while adding a local secondary index on Invoice ID.

D.

Use the partition key and a FilterExpression parameter with a filter on Invoice ID for all queries.

Questions 43

A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level.

How can the Database Specialist meet these requirements?

Options:

A.

Use AWS IAM database authentication and restrict access to the tables using an IAM policy.

B.

Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.

C.

Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.

D.

Define access privileges to the tables containing sensitive data in the pg_hba.conf file.
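
For reference, table-level privileges in PostgreSQL are native SQL and apply equally to Aurora PostgreSQL. A minimal sketch using the psycopg2 driver, with hypothetical connection details, role, and table names:

import psycopg2

# Connection details and object names are hypothetical placeholders.
conn = psycopg2.connect(
    host="aurora-cluster.cluster-example.us-east-1.rds.amazonaws.com",
    dbname="appdb", user="admin_user", password="***",
)
with conn, conn.cursor() as cur:
    # Remove blanket access, then grant only what the role needs.
    cur.execute("REVOKE ALL ON TABLE customer_pii FROM app_role;")
    cur.execute("GRANT SELECT ON TABLE orders TO app_role;")
conn.close()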

Questions 44

A company has an application that uses an Amazon DynamoDB table to store user data. Every morning, a single-threaded process calls the DynamoDB API Scan operation to scan the entire table and generate a critical start-of-day report for management. A successful marketing campaign recently doubled the number of items in the table, and now the process takes too long to run and the report is not generated in time.

A database specialist needs to improve the performance of the process. The database specialist notes that, when the process is running, 15% of the table’s provisioned read capacity units (RCUs) are being used.

What should the database specialist do?

Options:

A.

Enable auto scaling for the DynamoDB table.

B.

Use four threads and parallel DynamoDB API Scan operations.

C.

Double the table’s provisioned RCUs.

D.

Set the Limit and Offset parameters before every call to the API.
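
For context, a Scan can be split into segments that workers read in parallel, with each worker passing Segment and TotalSegments. A minimal sketch with a hypothetical table name:

import boto3
from concurrent.futures import ThreadPoolExecutor

TABLE = "user-data"       # hypothetical table name
TOTAL_SEGMENTS = 4

def scan_segment(segment):
    # Each worker scans its own slice of the table in parallel.
    client = boto3.client("dynamodb")
    items, kwargs = [], {"TableName": TABLE,
                         "Segment": segment,
                         "TotalSegments": TOTAL_SEGMENTS}
    while True:
        resp = client.scan(**kwargs)
        items.extend(resp["Items"])
        if "LastEvaluatedKey" not in resp:
            return items
        kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]

with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    results = pool.map(scan_segment, range(TOTAL_SEGMENTS))
all_items = [item for seg in results for item in seg]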

Questions 45

A database professional is tasked with migrating 25 GB of data files from an on-premises storage system to an Amazon Neptune database.

Which method of data loading is the FASTEST?

Options:

A.

Upload the data to Amazon S3 and use the Loader command to load the data from Amazon S3 into the Neptune database.

B.

Write a utility to read the data from the on-premises storage and run INSERT statements in a loop to load the data into the Neptune database.

C.

Use the AWS CLI to load the data directly from the on-premises storage into the Neptune database.

D.

Use AWS DataSync to load the data directly from the on-premises storage into the Neptune database.
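
For reference, the Neptune bulk loader is invoked through an HTTP endpoint on the cluster and reads data directly from Amazon S3 via an attached IAM role. A minimal sketch with placeholder endpoint, bucket, and role values:

import requests

# Endpoint, bucket, role, and format are hypothetical placeholders.
loader_url = "https://my-neptune.cluster-example.us-east-1.neptune.amazonaws.com:8182/loader"

resp = requests.post(loader_url, json={
    "source": "s3://my-bucket/graph-data/",
    "format": "csv",     # e.g. csv for Gremlin, nquads for RDF
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "TRUE",
})
print(resp.json())   # returns a load job id that can be polled for status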

Questions 46

A database professional maintains a fleet of Amazon RDS DB instances that are configured to use the default DB parameter group. The database professional must associate a custom parameter group with some of the DB instances.

After the database professional makes this change, when will the instances be assigned to the new parameter group?

Options:

A.

Instantaneously after the change is made to the parameter group

B.

In the next scheduled maintenance window of the DB instances

C.

After the DB instances are manually rebooted

D.

Within 24 hours after the change is made to the parameter group
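
For reference, attaching a different parameter group leaves the instance with a parameter-group status of pending-reboot; static parameters take effect only after a reboot. A minimal boto3 sketch with placeholder names:

import boto3

rds = boto3.client("rds")

# Associate the custom parameter group; the instance then shows
# parameter-group status "pending-reboot" until it is rebooted.
rds.modify_db_instance(
    DBInstanceIdentifier="fleet-instance-01",      # placeholder
    DBParameterGroupName="custom-param-group",     # placeholder
    ApplyImmediately=True,
)
rds.reboot_db_instance(DBInstanceIdentifier="fleet-instance-01")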

Questions 47

A company has deployed an application that uses an Amazon RDS for MySQL DB cluster. The DB cluster uses three read replicas. The primary DB instance is an 8XL-sized instance, and the read replicas are each XL-sized instances.

Users report that database queries are returning stale data. The replication lag indicates that the replicas are 5 minutes behind the primary DB instance. Status queries on the replicas show that the SQL_THREAD is 10 binlogs behind the IO_THREAD and that the IO_THREAD is 1 binlog behind the primary.

Which changes will reduce the lag? (Choose two.)

Options:

A.

Deploy two additional read replicas matching the existing replica DB instance size.

B.

Migrate the primary DB instance to an Amazon Aurora MySQL DB cluster and add three Aurora Replicas.

C.

Move the read replicas to the same Availability Zone as the primary DB instance.

D.

Increase the instance size of the primary DB instance within the same instance class.

E.

Increase the instance size of the read replicas to the same size and class as the primary DB instance.

Questions 48

A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.

How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?

Options:

A.

Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.

B.

Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.

C.

Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.

D.

Create the maintenance job using the Amazon CloudWatch job scheduling plugin.
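
For context, the job durations described here (up to 10 minutes) fit within the AWS Lambda maximum timeout of 15 minutes, and CloudWatch Events (now Amazon EventBridge) can invoke a function on a cron schedule. A minimal boto3 sketch with hypothetical names and a placeholder nightly schedule:

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Nightly at 02:00 UTC (schedule and names are placeholders).
events.put_rule(
    Name="nightly-db-maintenance",
    ScheduleExpression="cron(0 2 * * ? *)",
    State="ENABLED",
)
events.put_targets(
    Rule="nightly-db-maintenance",
    Targets=[{
        "Id": "purge-job",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:db-purge",
    }],
)
# Allow the rule to invoke the function.
lambda_client.add_permission(
    FunctionName="db-purge",
    StatementId="allow-eventbridge",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:123456789012:rule/nightly-db-maintenance",
)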

Questions 49

A company is running a website on Amazon EC2 instances deployed in multiple Availability Zones (AZs). The site performs a high number of repetitive reads and writes each second on an Amazon RDS for MySQL Multi-AZ DB instance with General Purpose SSD (gp2) storage. After comprehensive testing and analysis, a database specialist discovers that there is high read latency and high CPU utilization on the DB instance.

Which approach should the database specialist take to resolve this issue without changing the application?

Options:

A.

Implement sharding to distribute the load across multiple RDS for MySQL databases.

B.

Use the same RDS for MySQL instance class with Provisioned IOPS (PIOPS) storage.

C.

Add an RDS for MySQL read replica.

D.

Modify the RDS for MySQL database class to a bigger size and implement Provisioned IOPS (PIOPS).

Questions 50

An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes time with milliseconds precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.

Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?

Options:

A.

Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

B.

Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.

C.

Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

D.

Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.

Questions 51

A business is launching a new Amazon RDS for SQL Server DB instance. The organization wishes to enable auditing of the SQL Server database.

Which steps should a database professional take in combination to achieve this requirement? (Select two.)

Options:

A.

Create a service-linked role for Amazon RDS that grants permissions for Amazon RDS to store audit logs on Amazon S3.

B.

Set up a parameter group to configure an IAM role and an Amazon S3 bucket for audit log storage. Associate the parameter group with the DB instance.

C.

Disable Multi-AZ on the DB instance, and then enable auditing. Enable Multi-AZ after auditing is enabled.

D.

Disable automated backup on the DB instance, and then enable auditing. Enable automated backup after auditing is enabled.

E.

Set up an options group to configure an IAM role and an Amazon S3 bucket for audit log storage. Associate the options group with the DB instance.

Questions 52

A company is running a two-tier ecommerce application in one AWS account. The application's data tier uses an Amazon RDS for MySQL Multi-AZ DB instance. A Developer mistakenly deleted the database in the production environment. The database has been restored, but this resulted in hours of downtime and lost revenue.

Which combination of changes in existing IAM policies should a Database Specialist make to prevent an error like this from happening in the future? (Choose three.)

Options:

A.

Grant least privilege to groups, users, and roles

B.

Allow all users to restore a database from a backup that will reduce the overall downtime to restore the database

C.

Enable multi-factor authentication for sensitive operations to access sensitive resources and API operations

D.

Use policy conditions to restrict access to selective IP addresses

E.

Use AccessList Controls policy type to restrict users for database instance deletion

F.

Enable AWS CloudTrail logging and Enhanced Monitoring

Questions 53

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company’s network bandwidth is available.

How should the company perform this data load?

Options:

A.

Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

B.

Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

C.

Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

D.

Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

Questions 54

A company's database specialist is building an Amazon RDS for Microsoft SQL Server DB instance to store hundreds of records in CSV format. A customer service tool uploads the records to an Amazon S3 bucket.

An employee who previously worked at the company already created a custom stored procedure to map the necessary CSV fields to the database tables. The database specialist needs to implement a solution that reuses this previous work and minimizes operational overhead.

Which solution will meet these requirements?

Options:

A.

Create an Amazon S3 event to invoke an AWS Lambda function. Configure the Lambda function to parse the .csv file and use a SQL client library to run INSERT statements to load the data into the tables.

B.

Write a custom .NET app that is hosted on Amazon EC2. Configure the .NET app to load the .csv file and call the custom stored procedure to insert the data into the tables.

C.

Download the .csv file from Amazon S3 to the RDS D drive by using an AWS msdb stored procedure. Call the custom stored procedure to insert the data from the RDS D drive into the tables.

D.

Create an Amazon S3 event to invoke AWS Step Functions to parse the .csv file and call the custom stored procedure to insert the data into the tables.

Questions 55

A software company uses an Amazon RDS for MySQL Multi-AZ DB instance as a data store for its critical applications. During an application upgrade process, a database specialist runs a custom SQL script that accidentally removes some of the default permissions of the master user.

What is the MOST operationally efficient way to restore the default permissions of the master user?

Options:

A.

Modify the DB instance and set a new master user password.

B.

Use AWS Secrets Manager to modify the master user password and restart the DB instance.

C.

Create a new master user for the DB instance.

D.

Review the IAM user that owns the DB instance, and add missing permissions.

Questions 56

The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.

Which approach will meet these requirements?

Options:

A.

Use pg_audit to generate audit logs and send the logs to the Security team.

B.

Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.

C.

Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.

D.

Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.
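
For reference, Aurora database activity streams push encrypted activity records to an Amazon Kinesis data stream that external consumers can read. A minimal boto3 sketch with a placeholder cluster ARN and KMS key:

import boto3

rds = boto3.client("rds")

# Start an activity stream on the Aurora cluster; records are encrypted
# with the given KMS key and written to a Kinesis data stream.
resp = rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora",  # placeholder
    Mode="async",                          # async favors database performance
    KmsKeyId="alias/activity-stream-key",  # placeholder
    ApplyImmediately=True,
)
print(resp["KinesisStreamName"])  # the stream the Security team consumes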

Questions 57

A manufacturing company has an inventory system that stores information in an Amazon Aurora MySQL DB cluster. The database tables are partitioned. The database size has grown to 3 TB. Users run one-time queries by using a SQL client. Queries that use an equijoin to join large tables are taking a long time to run.

Which action will improve query performance with the LEAST operational effort?

Options:

A.

Migrate the database to a new Amazon Redshift data warehouse.

B.

Enable hash joins on the database by setting the variable optimizer_switch to hash_join=on.

C.

Take a snapshot of the DB cluster. Create a new DB instance by using the snapshot, and enable parallel query mode.

D.

Add an Aurora read replica.

Questions 58

A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.

Which solution meets these requirements?

Options:

A.

Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.

B.

Use reader endpoints for both the read-only workload applications.

C.

Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.

D.

Use custom endpoints for the two read-only applications.

Questions 59

A development team at an international gaming company is experimenting with Amazon DynamoDB to store in-game events for three mobile games. The most popular game hosts a maximum of 500,000 concurrent users, and the least popular game hosts a maximum of 10,000 concurrent users. The average size of an event is 20 KB, and the average user session produces one event each second. Each event is tagged with a time in milliseconds and a globally unique identifier.

The lead developer created a single DynamoDB table for the events with the following schema:

  • Partition key: game name
  • Sort key: event identifier
  • Local secondary index: player identifier
  • Event time

The tests were successful in a small-scale development environment. However, when deployed to production, new events stopped being added to the table and the logs show DynamoDB failures with the ItemCollectionSizeLimitExceededException error code.

Which design change should a database specialist recommend to the development team?

Options:

A.

Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.

B.

Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.

C.

Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.

D.

Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.

Questions 60

An online bookstore uses Amazon Aurora MySQL as its backend database. After the online bookstore added a popular book to the online catalog, customers began reporting intermittent timeouts on the checkout page. A database specialist determined that increased load was causing locking contention on the database. The database specialist wants to automatically detect and diagnose database performance issues and to resolve bottlenecks faster.

Which solution will meet these requirements?

Options:

A.

Turn on Performance Insights for the Aurora MySQL database. Configure and turn on Amazon DevOps Guru for RDS.

B.

Create a CPU usage alarm. Select the CPU utilization metric for the DB instance. Create an Amazon Simple Notification Service (Amazon SNS) topic to notify the database specialist when CPU utilization is over 75%.

C.

Use the Amazon RDS query editor to get the process ID of the query that is causing the database to lock. Run a command to end the process.

D.

Use the SELECT INTO OUTFILE S3 statement to query data from the database. Save the data directly to an Amazon S3 bucket. Use Amazon Athena to analyze the files for long-running queries.

Questions 61

A banking company recently launched an Amazon RDS for MySQL DB instance as part of a proof-of-concept project. A database specialist has configured automated database snapshots. As a part of routine testing, the database specialist noticed one day that the automated database snapshot was not created.

Which of the following are possible reasons why the snapshot was not created? (Choose two.)

Options:

A.

A copy of the RDS automated snapshot for this DB instance is in progress within the same AWS Region.

B.

A copy of the RDS automated snapshot for this DB instance is in progress in a different AWS Region.

C.

The RDS maintenance window is not configured.

D.

The RDS DB instance is in the STORAGE_FULL state.

E.

RDS event notifications have not been enabled.

Questions 62

A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB cluster. The company’s Database Specialist discovered that the Oracle database is storing 100 GB of large binary objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data, using the largest available replication instance.

How should the Database Specialist optimize the database migration using AWS DMS?

Options:

A.

Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together

B.

Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs

C.

Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2 without LOBs

D.

Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together
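
For reference, here is a hedged boto3 sketch of creating a limited-LOB-mode replication task. The ARNs, identifiers, and table selection rule are placeholders, and the DMS task settings express LobMaxSize in KB, so the values below are illustrative rather than prescriptive.

import json
import boto3

dms = boto3.client("dms")

# Task settings for limited LOB mode; LobMaxSize is in KB (500 MB = 512000 KB).
task_settings = {
    "TargetMetadata": {
        "SupportLobs": True,
        "FullLobMode": False,
        "LimitedSizeLobMode": True,
        "LobMaxSize": 512000,
    }
}

# Placeholder selection rule covering the LOB tables for this task.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "lob-tables",
        "object-locator": {"schema-name": "APP", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="task1-lob-tables",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",  # placeholder
    MigrationType="full-load",
    TableMappings=json.dumps(table_mappings),
    ReplicationTaskSettings=json.dumps(task_settings),
)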

Questions 63

A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in CPU utilization was not determined using the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to determine what caused the CPU spike.

Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)

Options:

A.

Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.

B.

Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.

C.

Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.

D.

Use Amazon QuickSight to view the SQL statement being run.

E.

Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.

Questions 64

A company has an on-premises Oracle Real Application Clusters (RAC) database. The company wants to migrate the database to AWS and reduce licensing costs. The company's application team wants to store JSON payloads that expire after 28 hours. The company has development capacity if code changes are required.

Which solution meets these requirements?

Options:

A.

Use Amazon DynamoDB and leverage the Time to Live (TTL) feature to automatically expire the data.

B.

Use Amazon RDS for Oracle with Multi-AZ. Create an AWS Lambda function to purge the expired data. Schedule the Lambda function to run daily using Amazon EventBridge.

C.

Use Amazon DocumentDB with a read replica in a different Availability Zone. Use DocumentDB change streams to expire the data.

D.

Use Amazon Aurora PostgreSQL with Multi-AZ and leverage the Time to Live (TTL) feature to automatically expire the data.
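
The TTL mechanism referenced in option A is straightforward to wire up. Below is a minimal boto3 sketch, assuming a hypothetical table and attribute name; DynamoDB deletes an item shortly after the epoch timestamp in the TTL attribute passes.

import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on a hypothetical table; DynamoDB expires items once the
# epoch timestamp in the named attribute has passed.
dynamodb.update_time_to_live(
    TableName="JsonPayloads",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# When writing an item, set expires_at to 28 hours from now (epoch seconds).
expires_at = int(time.time()) + 28 * 3600
dynamodb.put_item(
    TableName="JsonPayloads",
    Item={
        "payload_id": {"S": "example-id"},
        "payload": {"S": "{\"key\": \"value\"}"},
        "expires_at": {"N": str(expires_at)},
    },
)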

Questions 65

A company is using Amazon Redshift as its data warehouse solution. The Redshift cluster handles the following types of workloads:

*Real-time inserts through Amazon Kinesis Data Firehose

*Bulk inserts through COPY commands from Amazon S3

*Analytics through SQL queries

Recently, the cluster has started to experience performance issues.

Which combination of actions should a database specialist take to improve the cluster's performance? (Choose three.)

Options:

A.

Modify the Kinesis Data Firehose delivery stream to stream the data to Amazon S3 with a high buffer size and to load the data into Amazon Redshift by using the COPY command.

B.

Stream real-time data into Redshift temporary tables before loading the data into permanent tables.

C.

For bulk inserts, split input files on Amazon S3 into multiple files to match the number of slices on Amazon Redshift. Then use the COPY command to load data into Amazon Redshift.

D.

For bulk inserts, use the parallel parameter in the COPY command to enable multi-threading.

E.

Optimize analytics SQL queries to use sort keys.

F.

Avoid using temporary tables in analytics SQL queries.
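
To illustrate the bulk-insert advice in option C: COPY loads one file per slice in parallel, so splitting the input into a multiple of the cluster's slice count keeps every slice busy. A hedged sketch using the Amazon Redshift Data API follows; the cluster, bucket, IAM role, and table names are assumptions.

import boto3

redshift_data = boto3.client("redshift-data")

# The objects under this prefix are assumed to be pre-split into a
# multiple of the cluster's slice count; COPY loads them in parallel.
copy_sql = (
    "COPY sales "
    "FROM 's3://example-bucket/input/sales_part_' "
    "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
    "FORMAT AS CSV GZIP;"
)

redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",  # assumed
    Database="dev",
    DbUser="awsuser",
    Sql=copy_sql,
)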

Questions 66

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.

How should a Database Specialist ensure DynamoDB can handle the increased traffic?

Options:

A.

Ensure the table is always provisioned to meet peak needs

B.

Allow burst capacity to handle the additional load

C.

Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic

D.

Preprovision additional capacity for the known peaks and then reduce the capacity after the event
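
If the table uses provisioned capacity, option D amounts to two update_table calls: one before the event and one after. A minimal boto3 sketch, with illustrative capacity values:

import boto3

dynamodb = boto3.client("dynamodb")

# Raise provisioned throughput ahead of the event (values illustrative);
# a second update_table call after the event scales back down.
dynamodb.update_table(
    TableName="Transactions",
    ProvisionedThroughput={
        "ReadCapacityUnits": 40000,
        "WriteCapacityUnits": 40000,
    },
)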

Questions 67

A business uses Amazon EC2 instances in VPC A to serve an internal file-sharing application. This application is supported by an Amazon ElastiCache cluster in VPC B, which is peered with VPC A. After the company migrates its application instances from VPC A to VPC B, the logs show that the file-sharing application can no longer connect to the ElastiCache cluster.

What should a database specialist do to remedy this issue?

Options:

A.

Create a second security group on the EC2 instances. Add an outbound rule to allow traffic from the ElastiCache cluster security group.

B.

Delete the ElastiCache security group. Add an interface VPC endpoint to enable the EC2 instances to connect to the ElastiCache cluster.

C.

Modify the ElastiCache security group by adding outbound rules that allow traffic to VPC CIDR blocks from the ElastiCache cluster.

D.

Modify the ElastiCache security group by adding an inbound rule that allows traffic from the EC2 instances security group to the ElastiCache cluster.

Questions 68

A business moved a single MySQL database to Amazon Aurora. The production data is stored in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST in the same AWS account. Testing results in minimal changes to the test data. The development team requires that each environment be updated nightly to ensure that each test database has daily production data.

Which migration strategy will be the quickest and least expensive to implement?

Options:

A.

Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

B.

Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.

C.

Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.

D.

Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.
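
For context, Aurora cloning is a copy-on-write, point-in-time restore, so re-creating the 12 test databases nightly is fast and initially stores almost no new data. A hedged boto3 sketch follows, with placeholder cluster and subnet group names; each clone also needs a DB instance attached (create_db_instance) before it can accept connections.

import boto3

rds = boto3.client("rds")

# Copy-on-write clones share the source cluster's storage volume, so they
# are created quickly and only store pages that later diverge.
for i in range(1, 13):
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier=f"test-clone-{i}",
        SourceDBClusterIdentifier="prod-cluster",   # assumed name
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
        DBSubnetGroupName="vpc-test-subnet-group",  # subnet group in VPC_TEST
    )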

Questions 69

A company is using Amazon Redshift. A database specialist needs to allow an existing Redshift cluster to access data from other Redshift clusters, Amazon RDS for PostgreSQL databases, and AWS Glue Data Catalog tables.

Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)

Options:

A.

Take a snapshot of the required tables from the other Redshift clusters. Restore the snapshot into the existing Redshift cluster.

B.

Create external tables in the existing Redshift database to connect to the AWS Glue Data Catalog tables.

C.

Unload the RDS tables and the tables from the other Redshift clusters into Amazon S3. Run COPY commands to load the tables into the existing Redshift cluster.

D.

Use federated queries to access data in Amazon RDS.

E.

Use data sharing to access data from the other Redshift clusters.

F.

Use AWS Glue jobs to transfer the AWS Glue Data Catalog tables into Amazon S3. Create external tables in the existing Redshift database to access this data.

Questions 70

A marketing company is developing an application to track responses to email message campaigns. The company needs a database storage solution that is optimized to work with highly connected data. The database needs to limit connections and programmatic access to the data by using IAM policies.

Which solution will meet these requirements?

Options:

A.

Amazon ElastiCache for Redis cluster

B.

Amazon Aurora MySQL DB cluster

C.

Amazon DynamoDB table

D.

Amazon Neptune DB cluster

Questions 71

An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency, and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.

What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?

Options:

A.

Increase the size of the DB instance storage

B.

Change the underlying EBS storage type to General Purpose SSD (gp2)

C.

Disable EBS optimization on the DB instance

D.

Change the DB instance to an instance class with a higher maximum bandwidth

Questions 72

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key.

While importing the data, a database specialist receives ProvisionedThroughputExceededException errors. After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.

What should the database specialist do to address the issue?

Options:

A.

Change the data model to avoid hot partitions in the global secondary index.

B.

Enable auto scaling for the table to automatically increase write capacity during bulk imports.

C.

Modify the table to use on-demand capacity instead of provisioned capacity.

D.

Increase the number of retries on the bulk loading application.

Questions 73

A pharmaceutical company uses Amazon Quantum Ledger Database (Amazon QLDB) to store its clinical trial data records. The company has an application that runs as AWS Lambda functions. The application is hosted in the private subnet in a VPC.

The application does not have internet access and needs to read some of the clinical data records. The company is concerned that traffic between the QLDB ledger and the VPC could leave the AWS network. The company needs to secure access to the QLDB ledger and allow the VPC traffic to have read-only access.

Which security strategy should a database specialist implement to meet these requirements?

Options:

A.

Move the QLDB ledger into a private database subnet inside the VPC. Run the Lambda functions inside the same VPC in an application private subnet. Ensure that the VPC route table allows read-only flow from the application subnet to the database subnet.

B.

Create an AWS PrivateLink VPC endpoint for the QLDB ledger. Attach a VPC policy to the VPC endpoint to allow read-only traffic for the Lambda functions that run inside the VPC.

C.

Add a security group to the QLDB ledger to allow access from the private subnets inside the VPC where the Lambda functions that access the QLDB ledger are running.

D.

Create a VPN connection to ensure pairing of the private subnet where the Lambda functions are running with the private subnet where the QLDB ledger is deployed.

Questions 74

A company runs a customer relationship management (CRM) system that is hosted on-premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature.

Which AWS solution meets these requirements?

Options:

A.

Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system.

B.

Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic.

C.

Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system.

D.

Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic.
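
To sketch the notification path in option D: Aurora MySQL can invoke a Lambda function from a stored procedure through its native Lambda integration, and the function publishes to an SNS topic. A minimal hedged example of such a function follows; the topic ARN and event shape are placeholders.

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:crm-notifications"  # placeholder

def handler(event, context):
    # The stored procedure invokes this function with the inserted row's
    # details; publishing to the topic emails every subscribed endpoint.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="New CRM record",
        Message=json.dumps(event),
    )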

Questions 75

A financial company is running an Amazon Redshift cluster for one of its data warehouse solutions. The company needs to generate connection logs, user logs, and user activity logs. The company also must make these logs available for future analysis.

Which combination of steps should a database specialist take to meet these requirements? (Choose two.)

Options:

A.

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified log group in Amazon CloudWatch Logs.

B.

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified Amazon S3 bucket

C.

Modify the cluster by enabling continuous delivery of AWS CloudTrail logs to Amazon S3.

D.

Create a new parameter group with the enable_user_activity_logging parameter set to true. Configure the cluster to use the new parameter group.

E.

Modify the system table to enable logging for each user.
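
For reference, a hedged boto3 sketch combining options B and D: enable audit logging to S3, and set enable_user_activity_logging in a custom parameter group (the cluster must then be associated with that group and rebooted for the change to take effect). All names are placeholders.

import boto3

redshift = boto3.client("redshift")

# Option B: deliver connection, user, and user activity logs to S3.
redshift.enable_logging(
    ClusterIdentifier="finance-cluster",
    BucketName="example-audit-logs",   # placeholder bucket
    S3KeyPrefix="redshift-audit/",
)

# Option D: user activity logging also requires this parameter to be on.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="custom-audit-group",
    Parameters=[{
        "ParameterName": "enable_user_activity_logging",
        "ParameterValue": "true",
    }],
)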

Questions 76

An online retail company is planning a multi-day flash sale that must support processing of up to 5,000 orders per second. The number of orders and exact schedule for the sale will vary each day. During the sale, approximately 10,000 concurrent users will look at the deals before buying items. Outside of the sale, the traffic volume is very low. The acceptable performance for read/write queries should be under 25 ms. Order items are about 2 KB in size and have a unique identifier. The company requires the most cost-effective solution that will automatically scale and is highly available.

Which solution meets these requirements?

Options:

A.

Amazon DynamoDB with on-demand capacity mode

B.

Amazon Aurora with one writer node and an Aurora Replica with the parallel query feature enabled

C.

Amazon DynamoDB with provisioned capacity mode with 5,000 write capacity units (WCUs) and 10,000 read capacity units (RCUs)

D.

Amazon Aurora with one writer node and two cross-Region Aurora Replicas

Questions 77

A global company is creating an application. The application must be highly available. The company requires an RTO and an RPO of less than 5 minutes. The company needs a database that will provide the ability to set up an active-active configuration and near real-time synchronization of data across tables in multiple AWS Regions.

Which solution will meet these requirements?

Options:

A.

Amazon RDS for MariaDB with cross-Region read replicas

B.

Amazon RDS with a Multi-AZ deployment

C.

Amazon DynamoDB global tables

D.

Amazon DynamoDB with a global secondary index (GSI)

Questions 78

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connections logging.

Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

Options:

A.

Update the log_connections parameter in the default parameter group

B.

Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance

C.

Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days

D.

Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days

E.

Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file
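
A hedged boto3 sketch of options B and C together: a custom parameter group with log_connections enabled, log export to CloudWatch Logs, and a 180-day retention policy. The identifiers and the parameter group family are assumptions.

import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# Option B: a custom parameter group with connection logging turned on.
rds.create_db_parameter_group(
    DBParameterGroupName="pg-with-connection-logging",
    DBParameterGroupFamily="postgres14",  # match the engine version in use
    Description="Logs all connection attempts",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="pg-with-connection-logging",
    Parameters=[{
        "ParameterName": "log_connections",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
)
rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",
    DBParameterGroupName="pg-with-connection-logging",
    # Option C: publish the postgresql log to CloudWatch Logs.
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
)

# Retain the exported log group for 180 days.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/app-postgres/postgresql",
    retentionInDays=180,
)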

Questions 79

A company has an existing system that uses a single-instance Amazon DocumentDB (with MongoDB compatibility) cluster. Read requests account for 75% of the system queries. Write requests are expected to increase by 50% after an upcoming global release. A database specialist needs to design a solution that improves the overall database performance without creating additional application overhead.

Which solution will meet these requirements?

Options:

A.

Recreate the cluster with a shared cluster volume. Add two instances to serve both read requests and write requests.

B.

Add one read replica instance. Activate a shared cluster volume. Route all read queries to the read replica instance.

C.

Add one read replica instance. Set the read preference to secondary preferred.

D.

Add one read replica instance. Update the application to route all read queries to the read replica instance.

Questions 80

A financial services company has an application deployed on AWS that uses an Amazon Aurora PostgreSQL DB cluster. A recent audit showed that no log files contained database administrator activity. A database specialist needs to recommend a solution to provide database access and activity logs. The solution should use the least amount of effort and have a minimal impact on performance.

Which solution should the database specialist recommend?

Options:

A.

Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.

B.

Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.

C.

Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.

D.

Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.
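
Starting an activity stream is a single API call; asynchronous mode trades strict delivery ordering for minimal performance impact, which matches the requirement. A minimal boto3 sketch with placeholder ARNs:

import boto3

rds = boto3.client("rds")

# Start an asynchronous activity stream on the Aurora cluster; events are
# encrypted with the given KMS key and emitted to an Amazon Kinesis stream.
rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:finance-cluster",
    Mode="async",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",
    ApplyImmediately=True,
)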

Questions 81

A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.

Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)

Options:

A.

Review the stack drift before modifying the template

B.

Create and review a change set before applying it

C.

Export the database resources as stack outputs

D.

Define the database resources in a nested stack

E.

Set a stack policy for the database resources
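
To make options B and E concrete, here is a hedged boto3 sketch: a stack policy that denies updates to a database resource (the stack name and logical resource ID are placeholders), plus a change set to preview the Application team's modifications before they are applied.

import json
import boto3

cfn = boto3.client("cloudformation")

# Option E: a stack policy that denies updates to the RDS resources.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*",
         "Principal": "*", "Resource": "*"},
        {"Effect": "Deny", "Action": "Update:*",
         "Principal": "*", "Resource": "LogicalResourceId/ProductionDatabase"},
    ]
}
cfn.set_stack_policy(StackName="webapp-stack",
                     StackPolicyBody=json.dumps(policy))

# Option B: preview the Application team's template changes before applying.
cfn.create_change_set(
    StackName="webapp-stack",
    ChangeSetName="load-test-expansion",
    ChangeSetType="UPDATE",
    TemplateURL="https://s3.amazonaws.com/example-bucket/template.yaml",
)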

Questions 82

A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.

Which action will allow AWS DMS to perform the replication?

Options:

A.

Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.

B.

Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.

C.

Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.

D.

Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.

Questions 83

A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime, while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.

The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.

How can a Database Specialist address these requirements with minimal user involvement?

Options:

A.

Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.

B.

Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster nodes is at an acceptable level. Adjust the number of instances, if necessary.

C.

Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.

D.

Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

Questions 84

A stock market analysis firm maintains two locations: one in the us-east-1 Region and another in the eu-west-2 Region. The business wants to build an AWS database solution capable of providing rapid and accurate updates.

Dashboards with advanced analytical queries are used to present the data in the eu-west-2 office. Because the corporation will use these dashboards to make purchasing decisions, the dashboards must be able to obtain the application data in less than a second.

Which solution meets these criteria and provides the MOST current dashboard?

Options:

A.

Deploy an Amazon RDS DB instance in us-east-1 with a read replica instance in eu-west-2. Create an Amazon ElastiCache cluster in eu-west-2 to cache data from the read replica to generate the dashboards.

B.

Use an Amazon DynamoDB global table in us-east-1 with replication into eu-west-2. Use multi-active replication to ensure that updates are quickly propagated to eu-west-2.

C.

Use an Amazon Aurora global database. Deploy the primary DB cluster in us-east-1. Deploy the secondary DB cluster in eu-west-2. Configure the dashboard application to read from the secondary cluster.

D.

Deploy an Amazon RDS for MySQL DB instance in us-east-1 with a read replica instance in eu-west-2. Configure the dashboard application to read from the read replica.

Questions 85

A large gaming company is creating a centralized solution to store player session state for multiple online games. The workload requires key-value storage with low latency and will be an equal mix of reads and writes. Data should be written into the AWS Region closest to the user across the games’ geographically distributed user base. The architecture should minimize the amount of overhead required to manage the replication of data between Regions.

Which solution meets these requirements?

Options:

A.

Amazon RDS for MySQL with multi-Region read replicas

B.

Amazon Aurora global database

C.

Amazon RDS for Oracle with GoldenGate

D.

Amazon DynamoDB global tables

Questions 86

A bank intends to use Amazon RDS to host a MySQL DB instance. The database must be able to handle high-volume read requests with extremely few repeated queries.

Which solution satisfies these criteria?

Options:

A.

Create an Amazon ElastiCache cluster. Use a write-through strategy to populate the cache.

B.

Create an Amazon ElastiCache cluster. Use a lazy loading strategy to populate the cache.

C.

Change the DB instance to Multi-AZ with a standby instance in another AWS Region.

D.

Create a read replica of the DB instance. Use the read replica to distribute the read traffic.

Questions 87

Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready and the user credentials are given to the developers. The developers indicate that their copy jobs fail with the following error message:

“Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied.”

The developers need to load this data soon, so a database specialist must act quickly to solve this issue.

What is the MOST secure solution?

Options:

A.

Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action.

B.

Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role.

C.

Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message.

D.

Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created.

Questions 88

A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.

Which approach should the Database Specialist take?

Options:

A.

Dump all the tables from the Oracle database into an Amazon S3 bucket using datapump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.

B.

Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.

C.

Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.

D.

Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.

Questions 89

A company is migrating its on-premises database workloads to the AWS Cloud. A database specialist performing the move has chosen AWS DMS to migrate an Oracle database with a large table to Amazon RDS. The database specialist notices that AWS DMS is taking significant time to migrate the data.

Which actions would improve the data migration speed? (Choose three.)

Options:

A.

Create multiple AWS DMS tasks to migrate the large table.

B.

Configure the AWS DMS replication instance with Multi-AZ.

C.

Increase the capacity of the AWS DMS replication server.

D.

Establish an AWS Direct Connect connection between the on-premises data center and AWS.

E.

Enable an Amazon RDS Multi-AZ configuration.

F.

Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.

Questions 90

A company conducted a security audit of its AWS infrastructure. The audit identified that data was not encrypted in transit between application servers and a MySQL database that is hosted in Amazon RDS.

After the audit, the company updated the application to use an encrypted connection. To prevent this problem from occurring again, the company's database team needs to configure the database to require in-transit encryption for all connections.

Which solution will meet this requirement?

Options:

A.

Update the parameter group in use by the DB instance, and set the require_secure_transport parameter to ON.

B.

Connect to the database, and use ALTER USER to enable the REQUIRE SSL option on the database user.

C.

Update the security group in use by the DB instance, and remove port 80 to prevent unencrypted connections from being established.

D.

Update the DB instance, and enable the Require Transport Layer Security option.
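
As a sketch of option A, assuming the instance has already been moved off the default parameter group (default groups cannot be modified): require_secure_transport is a dynamic parameter, so the change applies without a reboot. The group name and value format below are illustrative.

import boto3

rds = boto3.client("rds")

# With this parameter on, the engine rejects unencrypted client connections.
rds.modify_db_parameter_group(
    DBParameterGroupName="mysql-require-tls",
    Parameters=[{
        "ParameterName": "require_secure_transport",
        "ParameterValue": "ON",  # some engine versions express this as 1
        "ApplyMethod": "immediate",
    }],
)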

Questions 91

An online gaming company is using an Amazon DynamoDB table in on-demand mode to store game scores. After an intensive advertisement campaign in South America, the average number of concurrent users rapidly increases from 100,000 to 500,000 in less than 10 minutes every day around 5 PM.

The on-call software reliability engineer has observed that the application logs contain a high number of DynamoDB throttling exceptions caused by game score insertions around 5 PM. Customer service has also reported that several users are complaining about their scores not being registered.

How should the database administrator remediate this issue at the lowest cost?

Options:

A.

Enable auto scaling and set the target usage rate to 90%.

B.

Switch the table to provisioned mode and enable auto scaling.

C.

Switch the table to provisioned mode and set the throughput to the peak value.

D.

Create a DynamoDB Accelerator cluster and use it to access the DynamoDB table.
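
A hedged boto3 sketch of option B for the write side: register the table's write capacity with Application Auto Scaling and attach a target-tracking policy. The table name and capacity bounds are illustrative assumptions.

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",  # assumed table name
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=1000,
    MaxCapacity=40000,
)

# Track 70% utilization of provisioned write capacity.
autoscaling.put_scaling_policy(
    PolicyName="game-scores-wcu-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)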

Questions 92

A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account.

Which combination of actions must the application development team take to meet these requirements? (Choose two.)

Options:

A.

Add access from the "WeReceive" account to the custom AWS KMS key policy of the sharing team.

B.

Make a copy of the DB snapshot, and set the encryption option to disable.

C.

Share the DB snapshot by setting the DB snapshot visibility option to public.

D.

Make a copy of the DB snapshot, and set the encryption option to enable.

E.

Share the DB snapshot by using the default AWS KMS encryption key.
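
For context, an automated snapshot cannot be shared directly: it must first be copied to a manual snapshot under the custom key, and the receiving account needs access in the key policy. A hedged boto3 sketch follows; the snapshot identifiers, key ARN, and account IDs are placeholders, and the exact set of KMS actions required may vary.

import json
import boto3

rds = boto3.client("rds")

# Copy the automated snapshot to a shareable manual snapshot.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:weshare-db-2024-05-04-00-00",  # placeholder
    TargetDBSnapshotIdentifier="weshare-db-manual-copy",
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/EXAMPLE",     # custom key
)

# Grant the "WeReceive" account restore access to the manual snapshot.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="weshare-db-manual-copy",
    AttributeName="restore",
    ValuesToAdd=["222222222222"],  # placeholder "WeReceive" account ID
)

# Option A: the custom KMS key policy also needs a statement along these lines.
kms_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey",
               "kms:CreateGrant", "kms:ReEncrypt*"],
    "Resource": "*",
}
print(json.dumps(kms_statement, indent=2))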

Questions 93

A corporation intends to migrate a 500-GB Oracle database to Amazon Aurora PostgreSQL by using the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS). The database does not have any stored procedures, but it does contain several large or partitioned tables. Because the application is vital to the company, it is preferable to migrate with minimal downtime.

Which steps should a database specialist take in combination to expedite the migration process? (Choose three.)

Options:

A.

Use the AWS SCT data extraction agent to migrate the schema from Oracle to Aurora PostgreSQL.

B.

For the large tables, change the setting for the maximum number of tables to load in parallel and perform a full load using AWS DMS.

C.

For the large tables, create a table settings rule with a parallel load option in AWS DMS, then perform a full load using DMS.

D.

Use AWS DMS to set up change data capture (CDC) for continuous replication until the cutover date.

E.

Use AWS SCT to convert the schema from Oracle to Aurora PostgreSQL.

F.

Use AWS DMS to convert the schema from Oracle to Aurora PostgreSQL and for continuous replication.
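
To illustrate option C, the sketch below shows a DMS table-settings rule with a parallel-load option; for a partitioned table, "partitions-auto" loads the partitions concurrently. The schema and table names are placeholders, and the JSON is passed as TableMappings when the replication task is created.

import json

# Hypothetical DMS table mappings: include a large partitioned table and
# load its partitions in parallel during the full load.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-big-table",
            "object-locator": {"schema-name": "APP", "table-name": "BIG_TABLE"},
            "rule-action": "include",
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "parallel-load-big-table",
            "object-locator": {"schema-name": "APP", "table-name": "BIG_TABLE"},
            "parallel-load": {"type": "partitions-auto"},
        },
    ]
}
print(json.dumps(table_mappings, indent=2))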

Questions 94

A bike rental company operates an application to track its bikes. The application receives location and condition data from bike sensors. The application also receives rental transaction data from the associated mobile app.

The application uses Amazon DynamoDB as its database layer. The company has configured DynamoDB with provisioned capacity set to 20% above the expected peak load of the application. On an average day, DynamoDB used 22 billion read capacity units (RCUs) and 60 billion write capacity units (WCUs). The application is running well. Usage changes smoothly over the course of the day and is generally shaped like a bell curve. The timing and magnitude of peaks vary based on the weather and season, but the general shape is consistent.

Which solution will provide the MOST cost optimization of the DynamoDB database layer?

Options:

A.

Change the DynamoDB tables to use on-demand capacity.

B.

Use AWS Auto Scaling and configure time-based scaling.

C.

Enable DynamoDB capacity-based auto scaling.

D.

Enable DynamoDB Accelerator (DAX).

Questions 95

A company is using an Amazon RDS for MySQL DB instance for its internal applications. A security audit shows that the DB instance is not encrypted at rest. The company’s application team needs to encrypt the DB instance.

What should the team do to meet this requirement?

Options:

A.

Stop the DB instance and modify it to enable encryption. Apply this setting immediately without waiting for the next scheduled RDS maintenance window.

B.

Stop the DB instance and create an encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.

C.

Stop the DB instance and create a snapshot. Copy the snapshot into another encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.

D.

Create an encrypted read replica of the DB instance. Promote the read replica to master. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
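
A hedged boto3 sketch of the snapshot-copy approach in option C, end to end; the identifiers are placeholders, and waiters pause between the dependent steps.

import boto3

rds = boto3.client("rds")

# 1. Snapshot the unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="internal-mysql",
    DBSnapshotIdentifier="internal-mysql-unencrypted",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="internal-mysql-unencrypted")

# 2. Copy the snapshot with encryption enabled.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="internal-mysql-unencrypted",
    TargetDBSnapshotIdentifier="internal-mysql-encrypted",
    KmsKeyId="alias/aws/rds",  # or a customer managed key
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="internal-mysql-encrypted")

# 3. Restore a new, encrypted instance from the encrypted snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="internal-mysql-enc",
    DBSnapshotIdentifier="internal-mysql-encrypted",
)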
