Examcollection offers a free demo for the CCDAK exam. The "Confluent Certified Developer for Apache Kafka Certification Examination", also known as the CCDAK exam, is a Confluent certification. This set of posts, Passing the Confluent CCDAK exam, will help you answer those questions. The CCDAK Questions & Answers covers all the knowledge points of the real exam. 100% real Confluent CCDAK exams, revised by experts!

Check CCDAK free dumps before getting the full version:

NEW QUESTION 1
What is true about replicas?

  • A. Produce requests can be done to the replicas that are followers
  • B. Produce and consume requests are load-balanced between Leader and Follower replicas
  • C. Leader replica handles all produce and consume requests
  • D. Follower replica handles all consume requests

Answer: C

Explanation:
Follower replicas are passive: they don't handle produce or consume requests. Produce and consume requests are sent to the broker hosting the partition leader.

NEW QUESTION 2
What isn't an internal Kafka Connect topic?

  • A. connect-status
  • B. connect-offsets
  • C. connect-configs
  • D. connect-jars

Answer: D

Explanation:
connect-configs stores connector and task configurations, connect-status stores the current status of connectors and tasks, and connect-offsets stores source offsets for source connectors. connect-jars is not an internal Kafka Connect topic.

NEW QUESTION 3
Where are the dynamic configurations for a topic stored?

  • A. In Zookeeper
  • B. In an internal Kafka topic topic_configurations
  • C. In server.properties
  • D. On the Kafka broker file system

Answer: A

Explanation:
Dynamic topic configurations are maintained in Zookeeper.
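
As an illustrative sketch (broker address, topic name, and retention value are placeholders), a dynamic topic configuration can be changed with the Java AdminClient; the cluster persists it per topic (in ZooKeeper for ZooKeeper-based clusters) rather than in server.properties:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DynamicTopicConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic"); // hypothetical topic
            // Set retention.ms dynamically for this topic (incrementalAlterConfigs needs brokers >= 2.3)
            AlterConfigOp setRetention =
                new AlterConfigOp(new ConfigEntry("retention.ms", "86400000"), AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Collections.singletonMap(topic, Collections.singleton(setRetention)))
                 .all().get();
        }
    }
}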

NEW QUESTION 4
To prevent network-induced duplicates when producing to Kafka, I should use

  • A. max.in.flight.requests.per.connection=1
  • B. enable.idempotence=true
  • C. retries=200000
  • D. batch.size=1

Answer: B

Explanation:
Producer idempotence helps prevent network-induced duplicates. More details here: https://cwiki.apache.org/confluence/display/KAFKA/Idempotent+Producer
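
As a minimal sketch (broker address and topic name are placeholders), an idempotent producer is configured like this:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // enable.idempotence=true de-duplicates retries caused by network issues
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // hypothetical topic
        }
    }
}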

NEW QUESTION 5
To import data from external databases, I should use

  • A. Confluent REST Proxy
  • B. Kafka Connect Sink
  • C. Kafka Streams
  • D. Kafka Connect Source

Answer: D

Explanation:
Kafka Connect Sink is used to export data from Kafka to external databases, and Kafka Connect Source is used to import data from external databases into Kafka.

NEW QUESTION 6
You are sending messages with keys to a topic. To increase throughput, you decide to increase the number of partitions of the topic. Select all that apply.

  • A. All the existing records will get rebalanced among the partitions to balance load
  • B. New records with the same key will get written to the partition where old records with that key were written
  • C. New records may get written to a different partition
  • D. Old records will stay in their partitions

Answer: CD

Explanation:
Increasing the number of partitions causes new message keys to be hashed differently, breaking the guarantee that "the same key goes to the same partition". Kafka logs are immutable, and the previous messages are not re-shuffled.
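
A simplified sketch of why this happens (this is not Kafka's exact implementation; the default partitioner uses murmur2 hashing): the target partition is derived from the key hash modulo the partition count, so changing the partition count changes the mapping for new records while old records stay where they were written.

// Simplified illustration only: Kafka's DefaultPartitioner uses murmur2, not hashCode().
public class KeyToPartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        String key = "user-42"; // hypothetical key
        System.out.println("With 3 partitions: " + partitionFor(key, 3));
        System.out.println("With 4 partitions: " + partitionFor(key, 4)); // likely a different partition
    }
}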

NEW QUESTION 7
Which of the following settings increases the chance of batching for a Kafka Producer?

  • A. Increase batch.size
  • B. Increase message.max.bytes
  • C. Increase the number of producer threads
  • D. Increase linger.ms

Answer: D

Explanation:
linger.ms forces the producer to wait before sending messages, hence increasing the chance of creating batches.
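
A minimal sketch of the relevant producer settings (broker address and values are illustrative only):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class BatchingConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        // Wait up to 20 ms for more records before sending, increasing the chance of batching
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");
        // Upper bound on a batch in bytes; raising it alone does not force batching
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32 * 1024));
    }
}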

NEW QUESTION 8
How will you find all the partitions without a leader?

  • A. kafka-topics.sh --broker-list localhost:9092 --describe --under-replicated-partitions
  • B. kafka-topics.sh --bootstrap-server localhost:2181 --describe --unavailable-partitions
  • C. kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions
  • D. kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions

Answer: C

Explanation:
Please note that as of Kafka 2.2, the --zookeeper option is deprecated, and you can now use: kafka-topics.sh --bootstrap-server localhost:9092 --describe --unavailable-partitions

NEW QUESTION 9
Select all the ways for one consumer to subscribe simultaneously to the following topics: topic.history, topic.sports, topic.politics. (select two)

  • A. consumer.subscribe(Pattern.compile("topic\..*"));
  • B. consumer.subscribe("topic.history"); consumer.subscribe("topic.sports"); consumer.subscribe("topic.politics");
  • C. consumer.subscribePrefix("topic.");
  • D. consumer.subscribe(Arrays.asList("topic.history", "topic.sports", "topic.politics"));

Answer: AD

Explanation:
Multiple topics can be passed as a list or a regex pattern. Note that successive calls to subscribe() replace the previous subscription rather than adding to it, which is why option B is incorrect.
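
For example (consumer creation and configuration omitted; topic names as in the question), the two correct options look like this:

import java.util.Arrays;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SubscribeExamples {
    static void subscribeByList(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(Arrays.asList("topic.history", "topic.sports", "topic.politics"));
    }

    static void subscribeByPattern(KafkaConsumer<String, String> consumer) {
        // The dot must be escaped, otherwise it matches any character
        consumer.subscribe(Pattern.compile("topic\\..*"));
    }
}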

NEW QUESTION 10
A producer application on a developer's machine was able to send messages to a Kafka topic. After copying the producer application to another developer's machine, the producer is able to connect to Kafka but is unable to produce to the same Kafka topic because of an authorization issue. What is the likely issue?

  • A. Broker configuration needs to be changed to allow a different producer
  • B. You cannot copy a producer application from one machine to another
  • C. The Kafka ACL does not allow another machine IP
  • D. The Kafka Broker needs to be rebooted

Answer: C

Explanation:
ACLs take "Host" as a parameter, which represents an IP. It can be * (all IP), or a specific IP. Here, it's a specific IP as moving a producer to a different machine breaks the consumer, so the ACL needs to be updated

NEW QUESTION 11
What is a generic unique id that I can use for messages I receive from a consumer?

  • A. topic + partition + timestamp
  • B. topic + partition + offset
  • C. topic + timestamp

Answer: B

Explanation:
(Topic, Partition, Offset) uniquely identifies a message in Kafka.
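
A minimal sketch of building such an id from a record received in a poll loop:

import org.apache.kafka.clients.consumer.ConsumerRecord;

public class RecordIdSketch {
    // (topic, partition, offset) uniquely identifies a record within a cluster
    static String uniqueId(ConsumerRecord<?, ?> record) {
        return record.topic() + "-" + record.partition() + "-" + record.offset();
    }
}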

NEW QUESTION 12
The kafka-console-consumer CLI, when used with the default options

  • A. uses a random group id
  • B. always uses the same group id
  • C. does not use a group id

Answer: A

Explanation:
If a group is not specified, the kafka-console-consumer generates a random consumer group.

NEW QUESTION 13
A consumer is configured with enable.auto.commit=false. What happens when close() is called on the consumer object?

  • A. The uncommitted offsets are committed
  • B. A rebalance in the consumer group will happen immediately
  • C. The group coordinator will discover that the consumer stopped sending heartbeat
  • D. It will cause rebalance after session.timeout.ms

Answer: B

Explanation:
Calling close() on the consumer immediately triggers a partition rebalance, as the consumer will no longer be available.
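
A minimal sketch (assuming a consumer created with enable.auto.commit=false): committing manually before close() avoids losing the last processed offsets, and close() itself makes the consumer leave the group, triggering the rebalance right away:

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitCloseSketch {
    static void consumeOnce(KafkaConsumer<String, String> consumer) {
        try {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            // ... process records ...
            consumer.commitSync(); // enable.auto.commit=false, so commit explicitly
        } finally {
            consumer.close(); // leaves the group, triggering an immediate rebalance
        }
    }
}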

NEW QUESTION 14
Which KSQL queries write to Kafka?

  • A. COUNT and JOIN
  • B. SHOW STREAMS and EXPLAIN <query> statements
  • C. CREATE STREAM WITH <topic> and CREATE TABLE WITH <topic>
  • D. CREATE STREAM AS SELECT and CREATE TABLE AS SELECT

Answer: CD

Explanation:
SHOW STREAMS and EXPLAIN <query> statements run against the KSQL server that the KSQL client is connected to. They don't communicate directly with Kafka. CREATE STREAM WITH <topic> and CREATE TABLE WITH <topic> write metadata to the KSQL command topic. Persistent queries based on CREATE STREAM AS SELECT and CREATE TABLE AS SELECT read and write to Kafka topics. Non-persistent queries based on SELECT that are stateless only read from Kafka topics, for example SELECT … FROM foo WHERE …. Non-persistent queries that are stateful read and write to Kafka, for example, COUNT and JOIN. The data in Kafka is deleted automatically when you terminate the query with CTRL-C.

NEW QUESTION 15
In Avro, adding a field to a record without default is a schema evolution

  • A. forward
  • B. backward
  • C. full
  • D. breaking

Answer: A

Explanation:
Clients with the old schema will be able to read records saved with the new schema (forward compatibility).
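
For illustration (record and field names are hypothetical), the evolution in question can be expressed with the Avro Java API; v2 adds a field without a default, so a reader holding v1 can still read v2 data, but a reader holding v2 cannot read v1 data:

import org.apache.avro.Schema;

public class AvroEvolutionSketch {
    public static void main(String[] args) {
        Schema v1 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
          + "{\"name\":\"name\",\"type\":\"string\"}]}");
        // v2 adds "age" with no default value: forward compatible only
        Schema v2 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
          + "{\"name\":\"name\",\"type\":\"string\"},"
          + "{\"name\":\"age\",\"type\":\"int\"}]}");
        System.out.println("v1 fields: " + v1.getFields().size() + ", v2 fields: " + v2.getFields().size());
    }
}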

NEW QUESTION 16
Your streams application is reading from an input topic that has 5 partitions. You run 5 instances of your application, each with num.stream.threads set to 5. How many stream tasks will be created and how many will be active?

  • A. 5 created, 1 active
  • B. 5 created, 5 active
  • C. 25 created, 25 active
  • D. 25 created, 5 active

Answer: D

Explanation:
Each partition is assigned to one thread, so only 5 will be active, while 25 threads (i.e. tasks) will be created.
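
A sketch of the relevant Kafka Streams configuration (application id and broker address are placeholders); note that the property name is num.stream.threads:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsThreadsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        // 5 threads per instance; with 5 instances that is 25 threads,
        // but only as many tasks as input partitions (5) can be active
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, "5");
    }
}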

NEW QUESTION 17
......

100% Valid and Newest Version CCDAK Questions & Answers shared by Dumpscollection.com, Get Full Dumps HERE: https://www.dumpscollection.net/dumps/CCDAK/ (New 150 Q&As)