
CCDAK Confluent Certified Developer for Apache Kafka Certification Examination Questions and Answers

Question 4

You have a Kafka Connect cluster with multiple connectors.

One connector is not working as expected.

How can you find logs related to that specific connector?

Options:

A.

Modify the log4j.properties file to enable connector context.

B.

Modify the log4j.properties file to add a dedicated log appender for the connector.

C.

Change the log level to DEBUG to have connector context information in logs.

D.

Make no change, there is no way to find logs other than by stopping all the other connectors.
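
For reference, connector context in Connect logs comes from the Log4j mapped diagnostic context (KIP-449). A minimal sketch of the relevant line in the worker's log4j.properties, where the %X{connector.context} token prefixes each log message with the connector and task name:

# sketch of a log4j.properties layout with connector context enabled
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m (%c:%L)%n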

Question 5

What is a consequence of increasing the number of partitions in an existing Kafka topic?

Options:

A.

Existing data will be redistributed across the new number of partitions temporarily increasing cluster load.

B.

Records with the same key could be located in different partitions.

C.

Consumers will need to process data from more partitions which will significantly increase consumer lag.

D.

The acknowledgment process will increase latency for producers using acks=all.
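
For context, Kafka's default partitioner places a keyed record at murmur2(key) modulo the partition count, so growing the partition count can move a key to a different partition. A minimal sketch using Kafka's own hash helpers:

import org.apache.kafka.common.utils.Utils;

public class KeyPlacement {
    // Mirrors the default partitioner's placement of keyed records.
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    public static void main(String[] args) {
        byte[] key = "user-42".getBytes(); // illustrative key
        System.out.println(partitionFor(key, 4)); // partition before adding partitions
        System.out.println(partitionFor(key, 6)); // may differ after adding partitions
    }
}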

Question 6

A stream processing application tracks user activity in online shopping carts, including items added, removed, and ordered throughout the day for each user.

You need to capture data to identify possible periods of user inactivity.

Which type of Kafka Streams window should you use?

Options:

A.

Session

B.

Hopping

C.

Tumbling

D.

Sliding
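
A minimal Kafka Streams sketch (Kafka 3.0+ API), assuming an illustrative cart-events topic keyed by user ID and a 30-minute inactivity gap:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.SessionWindows;

StreamsBuilder builder = new StreamsBuilder();
builder.stream("cart-events", Consumed.with(Serdes.String(), Serdes.String()))
    .groupByKey()
    // A session closes after 30 minutes without activity for the key, so the
    // gaps between sessions mark each user's periods of inactivity.
    .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(30)))
    .count();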

Question 7

Which tool can you use to modify the replication factor of an existing topic?

Options:

A.

kafka-reassign-partitions.sh

B.

kafka-recreate-topic.sh

C.

kafka-topics.sh

D.

kafka-reassign-topics.sh
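
For reference, a sketch of increasing the replication factor with kafka-reassign-partitions.sh: the JSON file lists a new, longer replica list for each partition (broker IDs here are illustrative).

increase-rf.json:
{"version":1,"partitions":[
  {"topic":"t1","partition":0,"replicas":[1,2,3]},
  {"topic":"t1","partition":1,"replicas":[2,3,4]}
]}

bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --reassignment-json-file increase-rf.json --execute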

Question 8

Your Kafka cluster has five brokers. The topic t1 on the cluster has:

Two partitions

Replication factor = 4

min.insync.replicas = 3

You need strong durability guarantees for messages written to topic t1. You configure the producer with acks=all, and all replicas of t1 are in sync.

How many brokers need to acknowledge a message before it is considered committed?

Options:

A.

2

B.

3

C.

4

D.

5
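
For reference, only acks is a producer setting; min.insync.replicas is a topic (or broker) setting. A minimal producer-side sketch:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties props = new Properties();
// Wait for the full set of in-sync replicas to acknowledge each write.
props.put(ProducerConfig.ACKS_CONFIG, "all");
// min.insync.replicas=3 is configured on topic t1 itself, not here; it is the
// minimum ISR size below which writes with acks=all are rejected.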

Question 9

This schema excerpt is an example of which schema format?

package com.mycorp.mynamespace;

message SampleRecord {
  int32 Stock = 1;
  double Price = 2;
  string Product_Name = 3;
}

Options:

A.

Avro

B.

Protobuf

C.

JSON Schema

D.

YAML

Question 10

Your application is consuming from a topic configured with a deserializer.

It needs to be resilient to badly formatted records ("poison pills"). You surround the poll() call with a try/catch for RecordDeserializationException.

You need to log the bad record, skip it, and continue processing.

Which action should you take in the catch block?

Options:

A.

Log the bad record, no other action needed.

B.

Log the bad record and seek the consumer to the offset of the next record.

C.

Log the bad record and call the consumer.skip() method.

D.

Throw a runtime exception to trigger a restart of the application.
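
A minimal sketch of the catch block, assuming a Java consumer on Kafka 2.8+, where RecordDeserializationException exposes the failing partition and offset:

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.RecordDeserializationException;

try {
    var records = consumer.poll(POLL_TIMEOUT);
    // ... normal processing ...
} catch (RecordDeserializationException e) {
    TopicPartition tp = e.topicPartition();
    // Log the poison pill, then skip exactly one record and keep consuming.
    System.err.printf("Bad record on %s at offset %d%n", tp, e.offset());
    consumer.seek(tp, e.offset() + 1);
}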

Question 11

You are implementing a Kafka Streams application to process financial transactions.

Each transaction must be processed exactly once to ensure accuracy.

The application reads from an input topic, performs computations, and writes results to an output topic.

During testing, you notice duplicate entries in the output topic, which violates the exactly-once processing requirement.

You need to ensure exactly-once semantics (EOS) for this Kafka Streams application.

Which step should you take?

Options:

A.

Enable compaction on the output topic to handle duplicates.

B.

Set enable.idempotence=true in the internal producer configuration of the Kafka Streams application.

C.

Set enable.exactly_once=true in the Kafka Streams configuration.

D.

Set processing.guarantee=exactly_once_v2 in the Kafka Streams configuration.
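
A minimal sketch of the relevant Streams configuration; the rest of the topology stays unchanged:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "txn-processor"); // illustrative id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// exactly_once_v2 (Kafka 3.0+) enables transactions and idempotent producers
// for the whole topology; no internal producer settings are touched directly.
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);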

Question 12

A stream processing application is consuming from a topic with five partitions. You run three instances of the application. Each instance has num.stream.threads=5.

You need to identify the number of stream tasks that will be created and how many will actively consume messages from the input topic.

Options:

A.

5 created, 1 actively consuming

B.

5 created, 5 actively consuming

C.

15 created, 5 actively consuming

D.

15 created, 15 actively consuming

Question 13

You are experiencing low throughput from a Java producer.

Kafka producer metrics show a low I/O thread ratio and low I/O thread wait ratio.

What is the most likely cause of the slow producer performance?

Options:

A.

The producer is sending large batches of messages.

B.

There is a bad data link layer (Layer 2) connection from the producer to the cluster.

C.

The producer code has an expensive callback function.

D.

Compression is enabled.
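
For context, producer callbacks run on the producer's single background I/O (sender) thread, so an expensive callback keeps that thread from sending batches. A sketch of the anti-pattern and one possible fix (expensiveWork and callbackExecutor are illustrative):

// Anti-pattern: the callback body executes on the producer I/O thread.
producer.send(record, (metadata, exception) -> expensiveWork(metadata));

// Sketch of a fix: hand the heavy work to an application-owned executor.
producer.send(record, (metadata, exception) ->
    callbackExecutor.submit(() -> expensiveWork(metadata)));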

Question 14

Which two statements are correct when assigning partitions to the consumers in a consumer group using the assign() API?

(Select two.)

Options:

A.

It is mandatory to subscribe to a topic before calling assign() to assign partitions.

B.

The consumer chooses which partition to read without any assignment from brokers.

C.

The consumer group will not be rebalanced if a consumer leaves the group.

D.

All topics must have the same number of partitions to use assign() API.
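
A minimal sketch of manual assignment, assuming topic t1: assign() takes explicit partitions, requires no prior subscribe(), and bypasses group-coordinated rebalancing:

import java.util.List;
import org.apache.kafka.common.TopicPartition;

// The client picks its partitions directly; the group coordinator neither
// assigns them nor rebalances them when a consumer leaves.
consumer.assign(List.of(
    new TopicPartition("t1", 0),
    new TopicPartition("t1", 1)));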

Question 15

Where are source connector offsets stored?

Options:

A.

offset.storage.topic

B.

storage.offset.topic

C.

topic.offset.config

D.

offset.storage.partitions
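
For reference, the offset storage settings of a distributed Connect worker, as they might appear in the worker config (values illustrative; connect-offsets is the name used in the sample connect-distributed.properties):

offset.storage.topic=connect-offsets
offset.storage.replication.factor=3
offset.storage.partitions=25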

Question 16

You want to connect with username and password to a secured Kafka cluster that has SSL encryption.

Which properties must your client include?

Options:

A.

security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

B.

security.protocol=SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

C.

security.protocol=SASL_PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

D.

security.protocol=PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.ssl.TlsLoginModule required username='myUser' password='myPassword';
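
A minimal Java client sketch of the SASL_SSL + PLAIN combination; note that a working client typically also needs sasl.mechanism=PLAIN, which the options above leave implicit:

import java.util.Properties;

Properties props = new Properties();
props.put("security.protocol", "SASL_SSL"); // SASL authentication over TLS
props.put("sasl.mechanism", "PLAIN");       // username/password mechanism
props.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required "
    + "username='myUser' password='myPassword';");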

Question 17

Which configuration determines the maximum number of records a consumer can poll in a single call to poll()?

Options:

A.

max.poll.records

B.

max.records.consumer

C.

fetch.max.records

D.

max.poll.records.interval
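
A one-line Java consumer sketch; 500 is the default value:

// Upper bound on the number of records returned by a single poll() call.
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");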

Question 18

Your application consumes from a topic configured with a deserializer.

You want the application to be resilient to badly formatted records (poison pills).

You surround the poll() call with a try/catch block for RecordDeserializationException.

You need to log the bad record, skip it, and continue processing other records.

Which action should you take in the catch block?

Options:

A.

Log the bad record and seek the consumer to the offset of the next record.

B.

Log the bad record and call consumer.skip() method.

C.

Throw a runtime exception to trigger a restart of the application.

D.

Log the bad record; no other action is needed.

Question 19

Which partition assignment strategy minimizes partition movements between two consecutive assignments?

Options:

A.

RoundRobinAssignor

B.

StickyAssignor

C.

RangeAssignor

D.

PartitionAssignor
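
For reference, the assignor is chosen on the consumer side; a sketch (CooperativeStickyAssignor is the newer cooperative variant of the same idea):

props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
    "org.apache.kafka.clients.consumer.StickyAssignor");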

Question 20

What are two stateless operations in the Kafka Streams API?

(Select two.)

Options:

A.

Reduce

B.

Join

C.

Filter

D.

GroupBy
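
For context, a sketch contrasting the two kinds of operations (topic names illustrative): filter and groupBy need no state store, while the downstream reduce does:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;

StreamsBuilder builder = new StreamsBuilder();
builder.stream("purchases", Consumed.with(Serdes.String(), Serdes.Long()))
    .filter((user, amount) -> amount != null && amount > 0) // stateless
    .groupBy((user, amount) -> user,
        Grouped.with(Serdes.String(), Serdes.Long()))       // stateless re-keying
    .reduce(Long::sum)                                      // stateful: needs a store
    .toStream()
    .to("purchase-totals", Produced.with(Serdes.String(), Serdes.Long()));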

Question 21

Which configuration allows a consumer more time to process records between calls to poll()?

Options:

A.

session.timeout.ms

B.

heartbeat.interval.ms

C.

max.poll.interval.ms

D.

fetch.max.wait.ms
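
A minimal sketch of the distinction: max.poll.interval.ms bounds the time between poll() calls (processing time), while session.timeout.ms and heartbeat.interval.ms only govern liveness via the background heartbeat thread:

import org.apache.kafka.clients.consumer.ConsumerConfig;

// Allow up to 10 minutes of processing between poll() calls before the
// consumer is considered stuck and its partitions are reassigned.
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
// Liveness is tracked separately through heartbeats, not through poll().
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "45000");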

Question 22

You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:

Topic name: DLQ-Topic

Headers containing error context must be added to the messages.

Which three configuration parameters are necessary? (Select three.)

Options:

A.

errors.tolerance=all

B.

errors.deadletterqueue.topic.name=DLQ-Topic

C.

errors.deadletterqueue.context.headers.enable=true

D.

errors.tolerance=none

E.

errors.log.enable=true

F.

errors.log.include.messages=true
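
For reference, a sketch of the error-handling block in the sink connector's configuration that satisfies both requirements (other connector properties omitted):

# keep the task running past bad records instead of failing
errors.tolerance=all
# route failed records to the required dead letter queue topic
errors.deadletterqueue.topic.name=DLQ-Topic
# attach error-context headers to each DLQ message
errors.deadletterqueue.context.headers.enable=true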

Question 23

Your configuration parameters for a Source connector and Connect worker are:

• offset.flush.interval.ms=60000

• offset.flush.timeout.ms=500

• offset.storage.topic=connect-offsets

• offset.storage.replication.factor=-1

Which two statements match the expected behavior?

(Select two.)

Options:

A.

The offsets topic will use the broker default replication factor.

B.

The connector will commit offsets to the broker default offsets topic.

C.

The connector will commit offsets to a topic called connect-offsets.

D.

The connector will wait 500 ms before trying to commit offsets for tasks.

Question 24

You are building real-time streaming applications using Kafka Streams.

Your application has a custom transformation.

You need to define custom processors in Kafka Streams.

Which tool should you use?

Options:

A.

TopologyTestDriver

B.

Processor API

C.

Kafka Streams Domain Specific Language (DSL)

D.

Kafka Streams Custom Transformation Language
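
A minimal Processor API sketch, assuming the org.apache.kafka.streams.processor.api interfaces (Kafka 3.x) and an illustrative upper-casing transformation; the processor would typically be wired into a topology with Topology.addProcessor():

import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class CustomTransform implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;

    @Override
    public void init(ProcessorContext<String, String> context) {
        this.context = context; // access to state stores, punctuators, forwarding
    }

    @Override
    public void process(Record<String, String> record) {
        // Illustrative custom logic; forward the transformed record downstream.
        context.forward(record.withValue(record.value().toUpperCase()));
    }
}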

Question 25

You have a topic with four partitions. The application reads from it using two consumers in a single consumer group.

Processing is CPU-bound, and lag is increasing.

What should you do?

Options:

A.

Add more consumers to increase the level of parallelism of the processing.

B.

Add more partitions to the topic to increase the level of parallelism of the processing.

C.

Increase the max.poll.records property of consumers.

D.

Decrease the max.poll.records property of consumers.

Question 26

You are writing to a topic with acks=all.

The producer receives acknowledgments, but you notice duplicate messages.

You find that timeouts due to network delay are causing resends.

Which configuration should you use to prevent duplicates?

Options:

A.

enable.auto.commit=true

B.

retries=2147483647
max.in.flight.requests.per.connection=5
enable.idempotence=true

C.

retries=0
max.in.flight.requests.per.connection=5
enable.idempotence=true

D.

retries=2147483647
max.in.flight.requests.per.connection=1
enable.idempotence=false
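
A minimal Java producer sketch of the idempotent configuration; retries stay high because the broker de-duplicates retried sends using producer IDs and sequence numbers:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties props = new Properties();
props.put(ProducerConfig.ACKS_CONFIG, "all");
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
// Safe to retry aggressively: duplicates are discarded broker-side.
props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
// Idempotence preserves ordering with up to 5 in-flight requests.
props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");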

Question 27

A consumer application needs to use an at-most-once delivery semantic.

What is the best consumer configuration and code skeleton to avoid duplicate messages being read?

Options:

A.

auto.offset.reset=latest and enable.auto.commit=true

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    for (var record : records) {
        // Any processing
    }
    consumer.commitAsync();
}

B.

auto.offset.reset=earliest and enable.auto.commit=false

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    consumer.commitAsync();
    for (var record : records) {
        // Any processing
    }
}

C.

auto.offset.reset=earliest and enable.auto.commit=false

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    for (var record : records) {
        // Any processing
    }
    consumer.commitAsync();
}

D.

auto.offset.reset=earliest and enable.auto.commit=true

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    consumer.commitAsync();
    for (var record : records) {
        // Any processing
    }
}
