All that matters here is passing the Confluent CCDAK exam, and all you need is a high score on the CCDAK Confluent Certified Developer for Apache Kafka certification exam. The only thing you need to do is download the Certleader CCDAK exam study guides now. We will not let you down, backed by our money-back guarantee.
Online Confluent CCDAK free dumps demo below:
NEW QUESTION 1
When using plain JSON data with Connect, you see the following error message: org.apache.kafka.connect.errors.DataException: JsonDeserializer with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. How will you fix the error?
- A. Set key.converter, value.converter to JsonConverter and the schema registry url
- B. Use Single Message Transforms to add schema and payload fields in the message
- C. Set key.converter.schemas.enable and value.converter.schemas.enable to false
- D. Set key.converter, value.converter to AvroConverter and the schema registry url
Answer: C
Explanation:
You need to set the converter's schemas.enable parameters to false when the data is plain JSON with no schema.
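For reference, a minimal sketch of the relevant settings in a Connect worker configuration (the converter class is Kafka's JsonConverter; the file layout is illustrative):

```properties
# Connect worker configuration (sketch)
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Plain JSON carries no schema/payload envelope, so disable the schema requirement
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```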
NEW QUESTION 2
To transform data from a Kafka topic to another one, I should use
- A. Kafka Connect Sink
- B. Kafka Connect Source
- C. Consumer + Producer
- D. Kafka Streams
Answer: D
Explanation:
Kafka Streams is a library for building streaming applications, specifically applications that transform input Kafka topics into output Kafka topics.
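As a minimal sketch (the topic names and the uppercase transform are illustrative, not from the question), a Streams application that reads one topic, transforms each record, and writes to another:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class TopicTransform {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topic-transform");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic")                        // read from the source topic
               .mapValues(v -> v.toString().toUpperCase())   // example transformation
               .to("output-topic");                          // write to the target topic

        new KafkaStreams(builder.build(), props).start();
    }
}
```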
NEW QUESTION 3
Your producer is producing at a very high rate and the batches are completely full each time. How can you improve the producer throughput? (select two)
- A. Enable compression
- B. Disable compression
- C. Increase batch.size
- D. Decrease batch.size
- E. Decrease linger.ms
- F. Increase linger.ms
Answer: AC
Explanation:
batch.size controls how many bytes of data to collect before sending messages to the Kafka broker. Set this as high as possible without exceeding available memory. Enabling compression can also help make more compact batches and increase the throughput of your producer. linger.ms will have no effect, as the batches are already full.
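As a sketch, the two changes look like this in producer configuration (the 64 KB value is illustrative):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class HighThroughputProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy"); // A: compress batches
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);      // C: grow batches (default is 16384)
        // linger.ms is left alone: batches already fill before any linger timer matters
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}
```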
NEW QUESTION 4
You want to sink data from a Kafka topic to S3 using Kafka Connect. There are 10 brokers in the cluster, the topic has 2 partitions with replication factor of 3. How many tasks will you configure for the S3 connector?
- A. 10
- B. 6
- C. 3
- D. 2
Answer: D
Explanation:
You cannot have more sink tasks (= consumers) than the number of partitions, so 2.
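An abridged sketch of the connector configuration (property names follow Confluent's S3 sink connector; the topic, bucket, and region are illustrative, and required storage/format settings are omitted):

```properties
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=orders
# No point exceeding the topic's 2 partitions: extra tasks would sit idle
tasks.max=2
s3.bucket.name=my-bucket
s3.region=us-east-1
```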
NEW QUESTION 5
What happens if you write the following code in your producer? producer.send(producerRecord).get()
- A. Compression will be increased
- B. Throughput will be decreased
- C. It will force all brokers in Kafka to acknowledge the producerRecord
- D. Batching will be increased
Answer: B
Explanation:
Using Future.get() to wait for a reply from Kafka will limit throughput.
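A sketch of the two styles (the producer and producerRecord are assumed to already exist):

```java
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

class SendModes {
    static void demo(Producer<String, String> producer,
                     ProducerRecord<String, String> producerRecord)
            throws ExecutionException, InterruptedException {
        // Synchronous: get() blocks until the broker acknowledges this record,
        // so records go out one at a time and batching is defeated.
        RecordMetadata meta = producer.send(producerRecord).get();

        // Asynchronous: send() returns immediately and the callback fires on
        // acknowledgement, preserving batching and in-flight pipelining.
        producer.send(producerRecord, (metadata, exception) -> {
            if (exception != null) exception.printStackTrace();
        });
    }
}
```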
NEW QUESTION 6
In Kafka, every broker... (select three)
- A. contains all the topics and all the partitions
- B. knows all the metadata for all topics and partitions
- C. is a controller
- D. knows the metadata for the topics and partitions it has on its disk
- E. is a bootstrap broker
- F. contains only a subset of the topics and the partitions
Answer: BEF
Explanation:
Kafka topics are divided into partitions and spread across brokers, so each broker holds only a subset of them. Every broker knows all the metadata and can act as a bootstrap broker, but only one of them is elected controller.
NEW QUESTION 7
You are using the JDBC source connector to copy data from a table to a Kafka topic. There is one connector created with tasks.max equal to 2, deployed on a cluster of 3 workers. How many tasks are launched?
- A. 3
- B. 2
- C. 1
- D. 6
Answer: C
Explanation:
The JDBC source connector creates at most one task per table, so with a single table only one task is launched, regardless of tasks.max or the number of workers.
NEW QUESTION 8
You want to send a message of size 3 MB to a topic with default message size configuration. How does KafkaProducer handle large messages?
- A. KafkaProducer divides messages into sizes of max.request.size and sends them in order
- B. KafkaProducer divides messages into sizes of message.max.bytes and sends them in order
- C. MessageSizeTooLarge exception will be thrown; KafkaProducer will not retry and will return the exception immediately
- D. MessageSizeTooLarge exception will be thrown; KafkaProducer retries until the number of retries is exhausted
Answer: C
Explanation:
MessageSizeTooLarge is not a retriable exception, so the producer returns the error immediately rather than retrying.
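In the Java client the corresponding exception is org.apache.kafka.common.errors.RecordTooLargeException; a sketch of how it surfaces on a blocking send:

```java
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;

class LargeMessageDemo {
    static void send(Producer<byte[], byte[]> producer,
                     ProducerRecord<byte[], byte[]> record) throws InterruptedException {
        try {
            producer.send(record).get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof RecordTooLargeException) {
                // Non-retriable: the producer fails at once instead of retrying
                System.err.println("Record too large: " + e.getCause().getMessage());
            }
        }
    }
}
```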
NEW QUESTION 9
Your manager would like to have topic availability over consistency. Which setting do you need to change in order to enable that?
- A. compression.type
- B. unclean.leader.election.enable
- C. min.insync.replicas
Answer: B
Explanation:
unclean.leader.election.enable=true allows non-ISR replicas to become leader, ensuring availability at the cost of consistency, as data loss may occur.
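A one-line sketch of the broker-wide default (the setting can also be overridden per topic):

```properties
# server.properties: favour availability over consistency
unclean.leader.election.enable=true
```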
NEW QUESTION 10
What is the default port that the KSQL server listens on?
- A. 9092
- B. 8088
- C. 8083
- D. 2181
Answer: B
Explanation:
The default port of the KSQL server is 8088.
NEW QUESTION 11
Which Kafka CLI should you use to consume from a topic?
- A. kafka-console-consumer
- B. kafka-topics
- C. kafka-console
- D. kafka-consumer-groups
Answer: A
Explanation:
Example: kafka-console-consumer --bootstrap-server 127.0.0.1:9092 --topic test --from-beginning
NEW QUESTION 12
If I supply the setting compression.type=snappy to my producer, what will happen? (select two)
- A. The Kafka brokers have to de-compress the data
- B. The Kafka brokers have to compress the data
- C. The Consumers have to de-compress the data
- D. The Consumers have to compress the data
- E. The Producers have to compress the data
Answer: CE
Explanation:
Kafka transfers data with zero copy and no transformation. Any transformation (including compression) is the responsibility of the clients: the producer compresses the batches and each consumer decompresses them.
NEW QUESTION 13
How can you make a Kafka consumer stop polling data immediately and shut down the consumer application gracefully?
- A. Call consumer.wakeup() and catch a WakeupException
- B. Call consumer.poll() in another thread
- C. Kill the consumer thread
Answer: A
Explanation:
See https://stackoverflow.com/a/37748336/3019499
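The pattern from that answer, as a sketch (the topic name is illustrative): a shutdown hook calls wakeup() from another thread, and the poll loop catches the resulting WakeupException before closing cleanly.

```java
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

class GracefulShutdown {
    static void run(KafkaConsumer<String, String> consumer) {
        final Thread mainThread = Thread.currentThread();
        // wakeup() is the only consumer method safe to call from another thread;
        // it makes a blocked poll() throw WakeupException.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try { mainThread.join(); } catch (InterruptedException ignored) { }
        }));
        try {
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofMillis(100))) {
                    System.out.println(r.value());
                }
            }
        } catch (WakeupException e) {
            // Expected on shutdown; fall through to close()
        } finally {
            consumer.close(); // leave the group cleanly
        }
    }
}
```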
NEW QUESTION 14
To produce data to a topic, a producer must provide the Kafka client with...
- A. the list of brokers that have the data, the topic name and the partitions list
- B. any broker from the cluster and the topic name and the partitions list
- C. all the brokers from the cluster and the topic name
- D. any broker from the cluster and the topic name
Answer: D
Explanation:
All brokers can respond to a Metadata request, so a client can connect to any broker in the cluster and then figure out on its own which brokers to send data to.
NEW QUESTION 15
Where are the ACLs stored in a Kafka cluster by default?
- A. Inside the broker's data directory
- B. Under Zookeeper node /kafka-acl/
- C. In Kafka topic kafka_acls
- D. Inside the Zookeeper's data directory
Answer: B
Explanation:
ACLs are stored in the Zookeeper node /kafka-acl/ by default.
NEW QUESTION 16
A topic receives all the orders for the products that are available on a commerce site. Two applications want to process all the messages independently - order fulfilment and monitoring. The topic has 4 partitions, how would you organise the consumers for optimal performance and resource usage?
- A. Create 8 consumers in the same group with 4 consumers for each application
- B. Create two consumer groups for two applications with 8 consumers in each
- C. Create two consumer groups for two applications with 4 consumers in each
- D. Create four consumers in the same group, one for each partition - two for fulfilment and two for monitoring
Answer: C
Explanation:
Two consumer groups, one for each application, so that all messages are delivered to both applications. Four consumers in each group, as the topic has 4 partitions: you cannot have more active consumers in a group than the number of partitions (otherwise the extras sit idle and waste resources).
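A sketch of the only setting that differs between the two applications (the group names are illustrative):

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

class OrderConsumers {
    // Call with "fulfilment" for 4 instances and "monitoring" for 4 more:
    // each group independently receives every message, one consumer per partition.
    static KafkaConsumer<String, String> consumerFor(String groupId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new KafkaConsumer<>(props);
    }
}
```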
NEW QUESTION 17
......
P.S. Dumps-hub.com is now offering CCDAK dumps with a 100% pass guarantee! All CCDAK exam questions have been updated with correct answers: https://www.dumps-hub.com/CCDAK-dumps.html (150 New Questions)