
Confluent CCDAK: Confluent Certified Developer for Apache Kafka Certification Exam Practice Test

Demo: 27 questions
Total 90 questions

Confluent Certified Developer for Apache Kafka Certification Examination Questions and Answers

Question 1

A consumer application needs to use an at-most-once delivery semantic.

What is the best consumer configuration and code skeleton to avoid duplicate messages being read?

Options:

A.

auto.offset.reset=latest and enable.auto.commit=true

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    for (var record : records) {
        // Any processing
    }
    consumer.commitAsync();
}

B.

auto.offset.reset=earliest and enable.auto.commit=false

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    consumer.commitAsync();
    for (var record : records) {
        // Any processing
    }
}

C.

auto.offset.reset=earliest and enable.auto.commit=false

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    for (var record : records) {
        // Any processing
    }
    consumer.commitAsync();
}

D.

auto.offset.reset=earliest and enable.auto.commit=true

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    consumer.commitAsync();
    for (var record : records) {
        // Any processing
    }
}
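Whichever option you pick, the essence of at-most-once is that offsets are committed before records are processed, so a crash loses records rather than re-reading them. This can be sketched without a real cluster (a simplified plain-Java model, not the Kafka consumer API):

```java
import java.util.ArrayList;
import java.util.List;

public class AtMostOnceSketch {
    // Simplified model: "committing" advances a stored offset BEFORE records
    // are processed, so a crash mid-batch skips records instead of duplicating them.
    public static void main(String[] args) {
        List<String> log = List.of("r0", "r1", "r2", "r3");
        int committed = 0;
        List<String> processed = new ArrayList<>();

        // First run: poll two records, commit first, then crash after one record.
        List<String> batch = log.subList(committed, committed + 2);
        committed += batch.size();          // commit before processing
        processed.add(batch.get(0));        // "crash" before processing batch.get(1)

        // Restart: resume from the committed offset -- r1 is lost, never duplicated.
        for (String r : log.subList(committed, log.size())) processed.add(r);

        System.out.println(processed);
    }
}
```

Running this prints a processed list with r1 missing, illustrating the "lost, not duplicated" trade-off.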

Question 2

Which configuration determines the maximum number of records a consumer can poll in a single call to poll()?

Options:

A.

max.poll.records

B.

max.records.consumer

C.

fetch.max.records

D.

max.poll.records.interval

Question 3

An S3 source connector named s3-connector stopped running.

You use the Kafka Connect REST API to query the connector and task status.

One of the three tasks has failed.

You need to restart the connector and all currently running tasks.

Which REST request will restart the connector instance and all its tasks?

Options:

A.

POST /connectors/s3-connector/restart?includeTasks=true

B.

POST /connectors/s3-connector/restart?includeTasks=true&onlyFailed=true

C.

POST /connectors/s3-connector/restart

D.

POST /connectors/s3-connector/tasks/0/restart

Question 4

You have a Kafka client application that has real-time processing requirements.

Which Kafka metric should you monitor?

Options:

A.

Consumer lag between brokers and consumers

B.

Total time to serve requests to replica followers

C.

Consumer heartbeat rate to group coordinator

D.

Aggregate incoming byte rate

Question 5

You are creating a Kafka Streams application to process retail data.

Match the input data streams with the appropriate Kafka Streams object.

Options:

Question 6

What are two stateless operations in the Kafka Streams API?

(Select two.)

Options:

A.

Reduce

B.

Join

C.

Filter

D.

GroupBy

Question 7

You are composing a REST request to create a new connector in a running Connect cluster. You invoke POST /connectors with a configuration and receive a 409 (Conflict) response.

What are two reasons for this response? (Select two.)

Options:

A.

The connector configuration was invalid, and the response body will expand on the configuration error.

B.

The Connect cluster has reached capacity, and new connectors cannot be created without expanding the cluster.

C.

A connector with the same name already exists in the cluster.

D.

The Connect cluster is in the process of rebalancing.

Question 8

Match the topic configuration setting with the reason the setting affects topic durability.

(You are given settings like unclean.leader.election.enable=false, replication.factor, min.insync.replicas=2)

Options:

Question 9

You want to enrich the content of a topic by joining it with key records from a second topic.

The two topics have a different number of partitions.

Which two solutions can you use?

(Select two.)

Options:

A.

Use a GlobalKTable for one of the topics where data does not change frequently and use a KStream–GlobalKTable join.

B.

Repartition one topic to a new topic with the same number of partitions as the other topic (co-partitioning constraint) and use a KStream–KTable join.

C.

Create as many Kafka Streams application instances as the maximum number of partitions of the two topics and use a KStream–KTable join.

D.

Use a KStream–KTable join; Kafka Streams will automatically repartition the topics to satisfy the co-partitioning constraint.
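A GlobalKTable is fully replicated to every application instance, which is why a KStream–GlobalKTable join has no co-partitioning requirement: the join is a local lookup against a complete copy of the table. The per-record lookup it performs can be pictured with a plain-Java map (hypothetical data, not the Streams API):

```java
import java.util.List;
import java.util.Map;

public class EnrichmentSketch {
    public static void main(String[] args) {
        // "Table" side: a full local copy, like a GlobalKTable's state store.
        Map<String, String> customers = Map.of("c1", "Alice", "c2", "Bob");

        // "Stream" side: records keyed by customer id.
        List<String[]> orders = List.of(
                new String[]{"c1", "order-1"},
                new String[]{"c2", "order-2"});

        // Enrichment is a local lookup per record -- no repartitioning needed,
        // because every instance holds the whole table.
        for (String[] order : orders) {
            String name = customers.get(order[0]);
            System.out.println(order[1] + " -> " + name);
        }
    }
}
```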

Question 10

The producer code below features a Callback class with a method called onCompletion().

When will the onCompletion() method be invoked?

Options:

A.

When a consumer sends an acknowledgement to the producer

B.

When the producer puts the message into its socket buffer

C.

When the producer batches the message

D.

When the producer receives the acknowledgment from the broker

Question 11

You are sending messages to a Kafka cluster in JSON format and want to add more information related to each message:

Format of the message payload

Message creation time

A globally unique identifier that allows the message to be traced through the system

Where should this additional information be set?

Options:

A.

Header

B.

Key

C.

Value

D.

Broker

Question 12

You need to set alerts on key broker metrics to trigger notifications when the cluster is unhealthy.

Which are three minimum broker metrics to monitor?

(Select three.)

Options:

A.

kafka.controller:type=KafkaController,name=TopicsToDeleteCount

B.

kafka.controller:type=KafkaController,name=OfflinePartitionsCount

C.

kafka.controller:type=KafkaController,name=ActiveControllerCount

D.

kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec

E.

kafka.controller:type=KafkaController,name=LastCommittedRecordOffset

Question 13

You need to consume messages from Kafka using the command-line interface (CLI).

Which command should you use?

Options:

A.

kafka-console-consumer

B.

kafka-consumer

C.

kafka-get-messages

D.

kafka-consume

Question 14

You create an Orders topic with 10 partitions.

The topic receives data at high velocity.

Your Kafka Streams application initially runs on a server with four CPU threads.

You move the application to another server with 10 CPU threads to improve performance.

What does this example describe?

Options:

A.

Horizontal Scaling

B.

Vertical Scaling

C.

Plain Scaling

D.

Scaling Out

Question 15

Which tool can you use to modify the replication factor of an existing topic?

Options:

A.

kafka-reassign-partitions.sh

B.

kafka-recreate-topic.sh

C.

kafka-topics.sh

D.

kafka-reassign-topics.sh

Question 16

You have a consumer group with default configuration settings reading messages from your Kafka cluster.

You need to optimize throughput so the consumer group processes more messages in the same amount of time.

Which change should you make?

Options:

A.

Remove some consumers from the consumer group.

B.

Increase the number of bytes the consumers read with each fetch request.

C.

Disable auto commit and have the consumers manually commit offsets.

D.

Decrease the session timeout of each consumer.

Question 17

You are configuring a source connector that writes records to an Orders topic.

You need to send some of the records to a different topic.

Which Single Message Transform (SMT) is best suited for this requirement?

Options:

A.

RegexRouter

B.

InsertField

C.

TombstoneHandler

D.

HeaderFrom

Question 18

You are designing a stream pipeline to monitor the real-time location of GPS trackers, where historical location data is not required.

Each event has:

• Key: trackerId

• Value: latitude, longitude

You need to ensure that the latest location for each tracker is always retained in the Kafka topic.

Which topic configuration parameter should you set?

Options:

A.

cleanup.policy=compact

B.

retention.ms=infinite

C.

min.cleanable.dirty.ratio=-1

D.

retention.ms=0
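The behavior log compaction guarantees, namely that at least the latest value per key survives cleaning of inactive segments, can be simulated with a plain map (a simplified model with made-up tracker data, not the broker's actual cleaner, which works segment by segment and may temporarily retain older values):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompactionSketch {
    public static void main(String[] args) {
        // Events as (key, value) pairs; later events supersede earlier ones.
        List<String[]> events = List.of(
                new String[]{"tracker-1", "48.85,2.35"},
                new String[]{"tracker-2", "51.50,-0.12"},
                new String[]{"tracker-1", "48.86,2.34"});   // newer location

        // Compaction (simplified): only the latest value per key survives.
        Map<String, String> compacted = new LinkedHashMap<>();
        for (String[] e : events) compacted.put(e[0], e[1]);

        System.out.println(compacted.get("tracker-1"));
    }
}
```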

Question 19

The producer code below features a Callback class with a method called onCompletion().

In the onCompletion() method, when the request is completed successfully, what does the value metadata.offset() represent?

Options:

A.

The sequential ID of the message committed into a partition

B.

Its position in the producer’s batch of messages

C.

The number of bytes that overflowed beyond a producer batch of messages

D.

The ID of the partition to which the message was committed

Question 20

This schema excerpt is an example of which schema format?

package com.mycorp.mynamespace;

message SampleRecord {
    int32 Stock = 1;
    double Price = 2;
    string Product_Name = 3;
}

Options:

A.

Avro

B.

Protobuf

C.

JSON Schema

D.

YAML

Question 21

A consumer application runs once every two weeks and reads from a Kafka topic.

The last time the application ran, the last offset processed was 217.

The application is configured with auto.offset.reset=latest.

The current offsets in the topic start at 318 and end at 588.

Which offset will the application start reading from when it starts up for its next run?

Options:

A.

0

B.

218

C.

318

D.

589
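The rule being tested is that auto.offset.reset only applies when there is no valid committed offset, for example when the committed offset has been expired or points at data the broker has already deleted. A simplified model of that decision (resolveStart is a hypothetical helper for illustration, not part of the Kafka API, and it ignores details such as offset retention timing):

```java
public class OffsetResetSketch {
    // Simplified model of where a consumer starts reading: a committed offset
    // inside the log's current range wins; otherwise auto.offset.reset decides.
    static long resolveStart(Long committed, long earliest, long logEnd, String reset) {
        if (committed != null && committed >= earliest && committed <= logEnd) {
            return committed;                       // valid committed offset wins
        }
        return reset.equals("earliest") ? earliest : logEnd;
    }

    public static void main(String[] args) {
        // Last processed was 217, so the next offset to read would be 218;
        // the log now spans 318..588, so 218 is out of range and the reset applies.
        System.out.println(resolveStart(218L, 318L, 589L, "latest"));
    }
}
```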

Question 22

Which is true about topic compaction?

Options:

A.

When a client produces a new event with an existing key, the old value is overwritten with the new value in the compacted log segment.

B.

When a client produces a new event with an existing key, the broker immediately deletes the offset of the existing event.

C.

Topic compaction does not remove old events; instead, when clients consume events from a compacted topic, they store events in a hashmap that maintains the latest value.

D.

Compaction will keep exactly one message per key after compaction of inactive log segments.

Question 23

You are developing a Kafka Streams application with a complex topology that has multiple sources, processors, sinks, and sub-topologies.

You are working in a development environment and do not have access to a real Kafka cluster or topics.

You need to perform unit testing on your Kafka Streams application.

Which should you use?

Options:

A.

TestProducer, TestConsumer

B.

KafkaUnitTestDriver

C.

TopologyTestDriver

D.

MockProducer, MockConsumer

Question 24

Your application is consuming from a topic with one consumer group.

The number of running consumers is equal to the number of partitions.

Application logs show that some consumers are leaving the consumer group during peak time, triggering a rebalance. You also notice that your application is processing many duplicates.

You need to stop consumers from leaving the consumer group.

What should you do?

Options:

A.

Reduce max.poll.records property.

B.

Increase session.timeout.ms property.

C.

Add more consumer instances.

D.

Split consumers into different consumer groups.

Question 25

Which two statements are correct when assigning partitions to the consumers in a consumer group using the assign() API?

(Select two.)

Options:

A.

It is mandatory to subscribe to a topic before calling assign() to assign partitions.

B.

The consumer chooses which partition to read without any assignment from brokers.

C.

The consumer group will not be rebalanced if a consumer leaves the group.

D.

All topics must have the same number of partitions to use assign() API.

Question 26

You are writing to a Kafka topic with producer configuration acks=all.

The producer receives acknowledgements from the broker but still creates duplicate messages due to network timeouts and retries.

You need to ensure that duplicate messages are not created.

Which producer configuration should you set?

Options:

A.

enable.auto.commit=true

B.

retries=2147483647
max.in.flight.requests.per.connection=5
enable.idempotence=false

C.

retries=2147483647
max.in.flight.requests.per.connection=1
enable.idempotence=true

D.

retries=0
max.in.flight.requests.per.connection=5
enable.idempotence=true
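The mechanism behind these options is the idempotent producer: the broker deduplicates retried batches using a producer id and per-partition sequence numbers, so a retry after a lost acknowledgement does not create a duplicate. A minimal config sketch of the core settings (the exact constraints on retries and max.in.flight.requests.per.connection differ by Kafka version, so they are omitted here):

```properties
# Sketch: idempotent producer settings that let the broker
# deduplicate retried batches instead of appending them twice.
enable.idempotence=true
acks=all
```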

Question 27

You have a topic with four partitions. The application reads from it using two consumers in a single consumer group.

Processing is CPU-bound, and lag is increasing.

What should you do?

Options:

A.

Add more consumers to increase the level of parallelism of the processing.

B.

Add more partitions to the topic to increase the level of parallelism of the processing.

C.

Increase the max.poll.records property of consumers.

D.

Decrease the max.poll.records property of consumers.
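The constraint behind this question is that within one consumer group, each partition is consumed by at most one consumer, so effective parallelism is capped at min(partitions, consumers). A tiny sketch of that arithmetic (effectiveParallelism is a hypothetical helper for illustration):

```java
public class ParallelismSketch {
    // In a consumer group, each partition is read by at most one consumer,
    // so adding consumers beyond the partition count gains nothing.
    static int effectiveParallelism(int partitions, int consumers) {
        return Math.min(partitions, consumers);
    }

    public static void main(String[] args) {
        System.out.println(effectiveParallelism(4, 2)); // the scenario: 4 partitions, 2 consumers
        System.out.println(effectiveParallelism(4, 4)); // after adding two more consumers
        System.out.println(effectiveParallelism(4, 6)); // extra consumers sit idle
    }
}
```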
