
Confluent CCAAK Confluent Certified Administrator for Apache Kafka Exam Practice Test

Demo: 16 questions
Total 54 questions

Confluent Certified Administrator for Apache Kafka Questions and Answers

Question 1

A company has an existing Kafka cluster running without SSL/TLS enabled. The customer wants to enable SSL on the brokers to secure data in transit, but they would like to give applications connecting to this cluster some time to migrate to SSL connections rather than forcing an immediate cutover.

Which solution will meet the customer's requirements?

Options:

A.

Enable SSL on the current listener, and do not enable mTLS.

B.

Modify the advertised listeners setting on brokers to use SSL.

C.

Create a new listener with SSL enabled.

D.

Enable SSL on the current listener, and do not implement SSL on the application side.
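
Context for the scenario above: one way such migrations are commonly handled is to add a second, SSL-enabled listener alongside the existing plaintext one, so applications can switch over at their own pace. A minimal sketch of the broker-side change, assuming hypothetical host names, ports, and keystore paths:

# Hypothetical server.properties fragment on broker1 (hosts and paths are placeholders).
# The existing PLAINTEXT listener stays on 9092; SSL is added on a new port.
cat >> /etc/kafka/server.properties <<'EOF'
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://broker1.example.com:9092,SSL://broker1.example.com:9093
ssl.keystore.location=/var/ssl/private/kafka.broker1.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
EOF
# A rolling restart of the brokers is then needed for the new listener to take effect.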

Question 2

What is the correct permission check sequence for Kafka ACLs?

Options:

A.

Super Users → Deny ACL → Allow ACL → Deny

B.

Allow ACL → Deny ACL → Super Users → Deny

C.

Deny ACL → Deny → Allow ACL → Super Users

D.

Super Users → Allow ACL → Deny ACL → Deny
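
Background for the question above, as a hedged kafka-acls.sh sketch (the principal, topic, and bootstrap address are made up): when both an allow and a deny binding match the same principal and operation, the deny wins, and principals listed in super.users bypass ACL checks entirely.

bin/kafka-acls.sh --bootstrap-server broker1:9092 \
  --add --allow-principal User:alice --operation Read --topic orders
bin/kafka-acls.sh --bootstrap-server broker1:9092 \
  --add --deny-principal User:alice --operation Read --topic orders
# With both bindings present, User:alice is denied Read on topic "orders".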

Question 3

Which secure communication method is supported between the REST Proxy and REST clients?

Options:

A.

TLS (HTTPS)

B.

MD5

C.

SCRAM

D.

Kerberos
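
For context, REST Proxy clients talk to the proxy over HTTP, so transport security between them is plain TLS (HTTPS). A hedged example call, with a made-up host, port, and CA certificate path:

# Hypothetical HTTPS request to the REST Proxy's /topics endpoint.
curl -s --cacert /etc/ssl/certs/ca.pem https://rest-proxy.example.com:8082/topics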

Question 4

A Kafka cluster with three brokers has a topic with 10 partitions and a replication factor of three. Each partition stores 25 GB of data per day, and data retention is set to 24 hours.

How much storage will be consumed by the topic on each broker?

Options:

A.

75 GB

B.

250 GB

C.

300 GB

D.

750 GB
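
A back-of-the-envelope sketch of the arithmetic behind the scenario above (sizes taken from the question; an even spread of replicas across the three brokers is assumed):

# Total footprint = partitions x daily data per partition x replication factor (24 h retention),
# then divided evenly across the brokers in the cluster.
partitions=10; gb_per_partition_per_day=25; replication_factor=3; brokers=3
total_gb=$(( partitions * gb_per_partition_per_day * replication_factor ))
per_broker_gb=$(( total_gb / brokers ))
echo "topic total: ${total_gb} GB, per broker: ${per_broker_gb} GB"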

Question 5

An employee in the reporting department needs assistance because their data feed is slowing down. You start by quickly checking the consumer lag for the clients on the data stream.

Which command will allow you to quickly check for lag on the consumers?

Options:

A.

bin/kafka-consumer-lag.sh

B.

bin/kafka-consumer-groups.sh

C.

bin/kafka-consumer-group-throughput.sh

D.

bin/kafka-reassign-partitions.sh
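
For reference, a hedged example of describing a consumer group to inspect its lag; the group name and bootstrap address are placeholders:

# Shows CURRENT-OFFSET, LOG-END-OFFSET, and LAG for each partition the group consumes.
bin/kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
  --describe --group reporting-app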

Question 6

A customer has a use case for a ksqlDB persistent query. You need to make sure that duplicate messages are not processed and messages are not skipped.

Which property should you use?

Options:

A.

processing.guarantee=exactly_once

B.

ksql.streams.auto.offset.reset=earliest

C.

ksql.streams.auto.offset.reset=latest

D.

ksql.fail.on.production.error=false
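
A minimal sketch of where such a setting could live, assuming a hypothetical ksqlDB server properties path; ksqlDB passes Kafka Streams configs through with the ksql.streams. prefix:

cat >> /etc/ksqldb/ksql-server.properties <<'EOF'
ksql.streams.processing.guarantee=exactly_once
EOF
# Restart the ksqlDB server so new persistent queries pick up the setting.
# (Newer ksqlDB/Kafka Streams releases may prefer the value exactly_once_v2.)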

Question 7

Kafka Connect is running on a two-node cluster in distributed mode. The connector is a source connector that pulls data from Postgres tables (users/payment/orders) and writes to topics with two partitions and a replication factor of two. The development team notices that the data is lagging behind.

What should be done to reduce the data lag?

The Connector definition is listed below:

{
  "name": "confluent-postgresql-source",
  "connector.class": "PostgresSource",
  "topic.prefix": "postgresql_",
  …
  "db.name": "postgres",
  "table.whitelist": "users.payment.orders",
  "timestamp.column.name": "created_at",
  "output.data.format": "JSON",
  "db.timezone": "UTC",
  "tasks.max": "1"
}

Options:

A.

Increase the number of Connect Nodes.

B.

Increase the number of Connect Tasks (tasks.max value).

C.

Increase the number of partitions.

D.

Increase the replication factor and increase the number of Connect Tasks.
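
Whichever option is chosen, connector configuration can be changed at runtime through the Kafka Connect REST API. A hedged sketch with a made-up Connect host (requires jq); the connector name comes from the definition above:

# Fetch the current config, raise tasks.max, and PUT the full config back.
curl -s http://connect-node1:8083/connectors/confluent-postgresql-source/config \
  | jq '."tasks.max" = "2"' \
  | curl -s -X PUT -H "Content-Type: application/json" \
      -d @- http://connect-node1:8083/connectors/confluent-postgresql-source/config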

Question 8

Which model does Kafka use for consumers?

Options:

A.

Push

B.

Publish

C.

Pull

D.

Enrollment

Question 9

If the Controller detects the failure of a broker that was the leader for some partitions, which actions will be taken? (Choose two.)

Options:

A.

The Controller waits for a new leader to be nominated by ZooKeeper.

B.

The Controller persists the new leader and ISR list to ZooKeeper.

C.

The Controller sends the new leader and ISR list changes to all brokers.

D.

The Controller sends the new leader and ISR list changes to all producers and consumers.

Question 10

You are using Confluent Schema Registry to provide a RESTful interface for storing and retrieving schemas.

Which types of schemas are supported? (Choose three.)

Options:

A.

Avro

B.

gRPC

C.

JSON

D.

Thrift

E.

Protobuf
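
A hedged way to check this against a running Schema Registry (the host and port are placeholders); the /schemas/types endpoint returns the schema formats the registry supports:

# Returns a JSON array of supported schema types.
curl -s http://schema-registry.example.com:8081/schemas/types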

Question 11

Which connector type takes data from a topic and sends it to an external data system?

Options:

A.

Sink Connector

B.

Source Connector

C.

Streams Connector

D.

Syslog Connector

Question 12

Your organization has a mission-critical Kafka cluster that must be highly available. A Disaster Recovery (DR) cluster has been set up using Replicator, and data is continuously replicated from the source cluster to the DR cluster. However, you notice that the message at offset 1002 on the source cluster does not appear to match the message at offset 1002 on the destination DR cluster.

Which statement is correct?

Options:

A.

The DR cluster is lagging behind updates; once the DR cluster catches up, the messages will match.

B.

The message on the DR cluster was accidentally overwritten by another application.

C.

The offsets for messages on the source and destination clusters may not match.

D.

The message was updated on the source cluster, but the update did not flow to the destination DR cluster and errored.

Question 13

Your Kafka cluster has four brokers. The topic t1 on the cluster has two partitions, and it has a replication factor of three. You create a Consumer Group with four consumers, which subscribes to t1.

In the scenario above, how many Controllers are in the Kafka cluster?

Options:

A.

One

B.

Two

C.

Three

D.

Four

Question 14

When a broker goes down, what will the Controller do?

Options:

A.

Wait for a follower to take the lead.

B.

Trigger a leader election among the remaining followers to distribute leadership.

C.

Become the leader for the topic/partition that needs a leader, pending the broker's return to the cluster.

D.

Automatically elect the least loaded broker to become the leader for every orphaned partition.

Question 15

Which out-of-the-box Kafka Authorizer implementation uses ZooKeeper?

Options:

A.

RBAC

B.

ACLs

C.

Ranger

D.

LDAP
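
For background, the out-of-the-box ACL authorizer that ships with Apache Kafka stores its bindings in ZooKeeper. A hedged example of listing them with the legacy ZooKeeper-based flag (the ZooKeeper address is a placeholder, and newer Kafka versions favor --bootstrap-server):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk1.example.com:2181 --list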

Question 16

Which statements are correct about partitions? (Choose two.)

Options:

A.

A partition in Kafka will be represented by a single segment on a disk.

B.

A partition is comprised of one or more segments on a disk.

C.

All partition segments reside in a single directory on a broker disk.

D.

A partition's size is determined by the largest segment on a disk.
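
As a practical illustration, each partition maps to its own directory under the broker's log.dirs path, and that directory holds one or more segment files. A hedged look at a hypothetical partition directory for partition 0 of a topic named orders:

# Each segment has a .log file plus matching .index and .timeindex files,
# all named after the segment's base offset.
ls /var/lib/kafka/data/orders-0/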
