You have a Kafka Connect cluster with multiple connectors.
One connector is not working as expected.
How can you find logs related to that specific connector?
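For context, recent Kafka Connect versions tag every log line with a connector context such as [my-connector|task-0] (KIP-449), so you can filter the worker log by connector name. Workers also expose an admin REST endpoint for inspecting and changing log levels per logger (KIP-495). A minimal sketch, assuming a worker REST API at localhost:8083:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ConnectLoggers {
        public static void main(String[] args) throws Exception {
            // List the current log levels known to this Connect worker.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8083/admin/loggers"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }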
What is a consequence of increasing the number of partitions in an existing Kafka topic?
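For context, one consequence worth knowing: the default partitioner maps keyed records with hash(key) mod partition-count, so adding partitions changes where existing keys land, and per-key ordering is no longer continuous across the change. A minimal sketch of the arithmetic (Utils is an internal Kafka utility class, used here only for illustration):

    import java.nio.charset.StandardCharsets;
    import org.apache.kafka.common.utils.Utils;

    public class PartitionShift {
        public static void main(String[] args) {
            byte[] key = "user-42".getBytes(StandardCharsets.UTF_8);
            // Default partitioner logic for keyed records: murmur2 hash modulo partition count.
            int before = Utils.toPositive(Utils.murmur2(key)) % 10; // topic with 10 partitions
            int after  = Utils.toPositive(Utils.murmur2(key)) % 12; // same topic grown to 12
            System.out.printf("before=%d, after=%d%n", before, after);
        }
    }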
A stream processing application tracks user activity in online shopping carts, including items added, removed, and ordered throughout the day for each user.
You need to capture data to identify possible periods of user inactivity.
Which type of Kafka Streams window should you use?
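For context, Kafka Streams models gaps in activity with session windows, which stay open while events keep arriving and close after a configurable inactivity gap. A minimal sketch, assuming a cart-events topic and a 30-minute gap (both assumptions):

    import java.time.Duration;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.SessionWindows;

    public class SessionWindowSketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            builder.<String, String>stream("cart-events")
                   .groupByKey()
                   // A session closes once a user has been inactive for 30 minutes.
                   .windowedBy(SessionWindows.ofInactivityGapAndGrace(
                           Duration.ofMinutes(30), Duration.ofMinutes(5)))
                   .count();
            System.out.println(builder.build().describe());
        }
    }

SessionWindows.ofInactivityGapAndGrace requires Kafka Streams 3.0+; older clients use the deprecated SessionWindows.with.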
Your Kafka cluster has five brokers. The topic t1 on the cluster has:
Two partitions
Replication factor = 4
min.insync.replicas = 3
You need strong durability guarantees for messages written to topic t1. You configure the producer with acks=all, and all replicas of t1 are in sync. How many brokers need to acknowledge a message before it is considered committed?
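For context, acks=all makes the leader wait for the full current in-sync replica set before acknowledging, while min.insync.replicas (a topic/broker setting) is the floor below which the broker rejects writes entirely. A minimal producer-side sketch with placeholder addresses:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
    props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for every in-sync replica
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");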
This schema excerpt is an example of which schema format?
package com.mycorp.mynamespace;

message SampleRecord {
  int32 Stock = 1;
  double Price = 2;
  string Product_Name = 3;
}
Your application consumes from a topic using a deserializer configured on the consumer.
It needs to be resilient to badly formatted records ("poison pills"). You surround the poll() call with a try/catch for RecordDeserializationException.
You need to log the bad record, skip it, and continue processing.
Which action should you take in the catch block?
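For context, since Kafka clients 2.8 the RecordDeserializationException carries the failing partition and offset, which is exactly what is needed to log the record, seek past it, and keep polling. A minimal sketch, assuming String serdes:

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.errors.RecordDeserializationException;

    static void pollLoop(KafkaConsumer<String, String> consumer) {
        while (true) {
            try {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                records.forEach(r -> System.out.println(r.value())); // placeholder processing
            } catch (RecordDeserializationException e) {
                // Log the poison pill, then seek one past it so the next poll() continues.
                System.err.printf("Skipping bad record at %s offset %d%n",
                        e.topicPartition(), e.offset());
                consumer.seek(e.topicPartition(), e.offset() + 1);
            }
        }
    }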
You are implementing a Kafka Streams application to process financial transactions.
Each transaction must be processed exactly once to ensure accuracy.
The application reads from an input topic, performs computations, and writes results to an output topic.
During testing, you notice duplicate entries in the output topic, which violates the exactly-once processing requirement.
You need to ensure exactly-once semantics (EOS) for this Kafka Streams application.
Which step should you take?
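For context, a minimal configuration sketch, assuming Kafka Streams 3.0+ where exactly_once_v2 replaced the older exactly-once settings:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "txn-processor"); // placeholder id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
    // Commits consumed offsets and produced results atomically, in one transaction.
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);

Note that downstream consumers of the output topic must also read with isolation.level=read_committed to avoid seeing aborted results.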
A stream processing application is consuming from a topic with five partitions. You run three instances of the application. Each instance has num.stream.threads=5.
You need to identify the number of stream tasks that will be created and how many will actively consume messages from the input topic.
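For context: in a simple topology, Kafka Streams creates one stream task per input-topic partition, so the task count is fixed by the topic rather than by how many instances or threads you start; threads beyond the number of available tasks simply sit idle.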
You are experiencing low throughput from a Java producer.
Kafka producer metrics show a low I/O thread ratio and low I/O thread wait ratio.
What is the most likely cause of the slow producer performance?
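For context, both ratios are standard producer sender-thread metrics: io-ratio is the fraction of time the I/O thread spends doing network I/O, and io-wait-ratio the fraction it spends waiting for work. When both are low, the thread's time is being consumed elsewhere in the client rather than on the network or the broker. A minimal sketch for reading them:

    import java.util.Map;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;

    static void printIoMetrics(KafkaProducer<String, String> producer) {
        for (Map.Entry<MetricName, ? extends Metric> entry : producer.metrics().entrySet()) {
            String name = entry.getKey().name();
            if (name.equals("io-ratio") || name.equals("io-wait-ratio")) {
                System.out.printf("%s = %s%n", name, entry.getValue().metricValue());
            }
        }
    }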
Which two statements are correct when assigning partitions to the consumers in a consumer group using the assign() API?
(Select two.)
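For context, a minimal sketch of manual assignment; with assign() the consumer receives exactly the partitions listed, does not call subscribe(), and does not take part in group rebalancing:

    import java.util.List;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    static void assignManually(KafkaConsumer<String, String> consumer) {
        // No group coordinator involvement: these partitions stay ours until we change them.
        consumer.assign(List.of(
                new TopicPartition("t1", 0),
                new TopicPartition("t1", 1)));
    }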
You want to connect to a secured Kafka cluster that uses SSL encryption, authenticating with a username and password.
Which properties must your client include?
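For context, a minimal client-side sketch: username/password authentication over an encrypted connection means SASL over SSL. The mechanism shown is PLAIN; SCRAM-SHA-256/512 also carry a username and password, and the credentials here are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SaslConfigs;

    Properties props = new Properties();
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
    props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
    props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"alice\" password=\"alice-secret\";"); // placeholders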
Which configuration determines the maximum number of records a consumer can poll in a single call to poll()?
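For context, a one-line sketch of how such a cap is set on the consumer (500 is the default):

    import org.apache.kafka.clients.consumer.ConsumerConfig;

    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500); // cap records returned per poll()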
Which partition assignment minimizes partition movements between two assignments?
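For context, the sticky family of assignors is designed to preserve as much of the previous assignment as possible across rebalances. A minimal sketch of configuring one on the consumer:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;

    props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
            CooperativeStickyAssignor.class.getName());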
You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:
Topic name: DLQ-Topic
Headers containing error context must be added to the messages
Which three configuration parameters are necessary? (Select three.)
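For context, a minimal sketch of the relevant Connect error-handling keys, expressed here as a Java map the way a connector config might be built in a test (the keys are real Connect settings; the surrounding code is illustrative):

    import java.util.Map;

    Map<String, String> sinkConfig = Map.of(
            "errors.tolerance", "all", // keep the task running when a record fails
            "errors.deadletterqueue.topic.name", "DLQ-Topic", // route failed records here
            "errors.deadletterqueue.context.headers.enable", "true"); // add error-context headers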
Your configuration parameters for a Source connector and Connect worker are:
• offset.flush.interval.ms=60000
• offset.flush.timeout.ms=500
• offset.storage.topic=connect-offsets
• offset.storage.replication.factor=-1
Which two statements match the expected behavior? (Select two.)
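For context: offset.flush.interval.ms is how often the worker tries to commit source-connector offsets (here every 60 seconds), offset.flush.timeout.ms bounds how long a commit may take before it is cancelled and retried on a later cycle (here a tight 500 ms), offset.storage.topic names the internal topic holding those offsets, and a replication factor of -1 means that topic is created with the broker's default.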
You are building real-time streaming applications using Kafka Streams.
Your application has a custom transformation.
You need to define custom processors in Kafka Streams.
Which tool should you use?
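For context, custom processors are written against the Processor API and wired into a Topology with addSource/addProcessor/addSink. A minimal sketch (the class name and transformation are illustrative):

    import org.apache.kafka.streams.processor.api.Processor;
    import org.apache.kafka.streams.processor.api.ProcessorContext;
    import org.apache.kafka.streams.processor.api.Record;

    public class UppercaseProcessor implements Processor<String, String, String, String> {
        private ProcessorContext<String, String> context;

        @Override
        public void init(ProcessorContext<String, String> context) {
            this.context = context;
        }

        @Override
        public void process(Record<String, String> record) {
            // Forward a transformed copy of each record downstream.
            context.forward(record.withValue(record.value().toUpperCase()));
        }
    }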
You have a topic with four partitions. The application reads from it using two consumers in a single consumer group.
Processing is CPU-bound, and lag is increasing.
What should you do?
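For context: a consumer group runs at most one active consumer per partition, so a four-partition topic supports up to four consumers working in parallel.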
You are writing to a topic with acks=all.
The producer receives acknowledgments, but you notice duplicate messages.
You find that timeouts due to network delay are causing resends.
Which configuration should you use to prevent duplicates?
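For context, duplicates from retries are what the idempotent producer eliminates: the broker tracks per-producer sequence numbers and discards resent batches. A minimal sketch:

    import org.apache.kafka.clients.producer.ProducerConfig;

    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
    props.put(ProducerConfig.ACKS_CONFIG, "all"); // required with idempotence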
A consumer application needs to use at-most-once delivery semantics.
What is the best consumer configuration and code skeleton to avoid duplicate messages being read?
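For context, a minimal at-most-once sketch: commit offsets immediately after poll() and before processing, so a crash mid-processing skips records rather than re-reading them (assumes enable.auto.commit=false and String serdes):

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    static void atMostOnceLoop(KafkaConsumer<String, String> consumer) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            consumer.commitSync(); // commit first: records count as consumed from here on
            records.forEach(r -> System.out.println(r.value())); // then process
        }
    }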