CCDAK Exam Questions - Online Test



certleader.com


Free demo questions for the Confluent CCDAK exam below:

NEW QUESTION 1
Which of these joins does not require the input topics to share the same number of partitions?

  • A. KStream-KTable join
  • B. KStream-KStream join
  • C. KStream-GlobalKTable join
  • D. KTable-KTable join

Answer: C

Explanation:
GlobalKTables have their dataset fully replicated on each Kafka Streams instance, so no repartitioning (co-partitioning) of the input topics is required.

NEW QUESTION 2
Which of the following settings increases the chance of batching for a Kafka producer?

  • A. Increase batch.size
  • B. Increase message.max.bytes
  • C. Increase the number of producer threads
  • D. Increase linger.ms

Answer: D

Explanation:
linger.ms makes the producer wait before sending messages, which increases the chance that several records are grouped into a batch.
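To make the setting concrete, here is a minimal sketch of the producer properties involved, built with plain java.util.Properties and no broker connection; the broker address and the 20 ms / 32 KB values are illustrative placeholders, not recommendations.

```java
import java.util.Properties;

public class BatchingConfig {
    // Producer properties that favor batching. Values are illustrative;
    // "localhost:9092" is a placeholder broker address.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Wait up to 20 ms for more records to accumulate before sending,
        // increasing the chance that several records share one batch.
        props.put("linger.ms", "20");
        // Upper bound (in bytes) on the size of a single batch per partition.
        props.put("batch.size", "32768");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps());
    }
}
```

Raising linger.ms trades a little latency for fewer, larger requests; batch.size only caps how large a batch may grow.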

NEW QUESTION 3
Using the Confluent Schema Registry, where are Avro schemas stored?

  • A. In the Schema Registry embedded SQL database
  • B. In the Zookeeper node /schemas
  • C. In the message bytes themselves
  • D. In the _schemas topic

Answer: D

Explanation:
The Schema Registry stores all the schemas in the _schemas Kafka topic

NEW QUESTION 4
Compaction is enabled for a topic in Kafka by setting log.cleanup.policy=compact. What is true about log compaction?

  • A. After cleanup, only one message per key is retained with the first value
  • B. Each message stored in the topic is compressed
  • C. Kafka automatically de-duplicates incoming messages based on key hashes
  • D. After cleanup, only one message per key is retained, with the latest value

Answer: D

Explanation:
Log compaction retains at least the last known value for each record key within a single topic partition. All offsets remain valid after compaction: a consumer that fetches an offset whose record has been compacted away simply receives the next higher available offset.
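The "latest value wins" rule can be sketched with a toy model, a plain map standing in for the compacted log; real compaction works on closed log segments and preserves each record's offset, which this sketch does not model.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompactionSketch {
    // Toy model of log compaction: given an ordered log of key/value pairs,
    // keep only the latest value seen for each key.
    public static Map<String, String> compact(List<String[]> log) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] record : log) {
            latest.put(record[0], record[1]); // later values overwrite earlier ones
        }
        return latest;
    }

    public static void main(String[] args) {
        List<String[]> log = List.of(
                new String[]{"user1", "v1"},
                new String[]{"user2", "v1"},
                new String[]{"user1", "v2"});
        System.out.println(compact(log)); // user1 -> v2, user2 -> v1
    }
}
```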

NEW QUESTION 5
A consumer failed to process record #10 but succeeded in processing record #11. Which course of action should you choose to guarantee at-least-once processing?

  • A. Commit offsets at 10
  • B. Do not commit until successfully processing the record #10
  • C. Commit offsets at 11

Answer: B

Explanation:
You should not commit offset 10 or 11, as either would indicate that record #10 has been processed successfully. Hold the commit until record #10 has been reprocessed successfully.
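Under this question's convention that committing offset N marks record N as processed, the rule can be sketched as a toy model (not consumer API code): the highest offset safe to commit is the last record in an unbroken run of successes.

```java
import java.util.List;
import java.util.OptionalLong;

public class AtLeastOnceSketch {
    // Toy model of at-least-once offset management: walk the records in order
    // and return the highest offset that is safe to commit, i.e. the end of
    // the leading run of successfully processed records. If the first record
    // failed, nothing can be committed yet.
    public static OptionalLong safeCommit(List<Long> offsets, List<Boolean> succeeded) {
        OptionalLong safe = OptionalLong.empty();
        for (int i = 0; i < offsets.size(); i++) {
            if (!succeeded.get(i)) {
                break; // a failure ends the committable run
            }
            safe = OptionalLong.of(offsets.get(i));
        }
        return safe;
    }

    public static void main(String[] args) {
        // Record #10 failed, #11 succeeded: nothing is safe to commit, so the
        // consumer must hold its commit until #10 is reprocessed successfully.
        System.out.println(safeCommit(List.of(10L, 11L), List.of(false, true)));
    }
}
```

Records after the failure may be reprocessed on restart; that duplication is exactly the at-least-once trade-off.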

NEW QUESTION 6
What exceptions may be caught by the following producer? (select two)

ProducerRecord<String, String> record =
    new ProducerRecord<>("topic1", "key1", "value1");
try {
    producer.send(record);
} catch (Exception e) {
    e.printStackTrace();
}

  • A. BrokerNotAvailableException
  • B. SerializationException
  • C. InvalidPartitionsException
  • D. BufferExhaustedException

Answer: BD

Explanation:
These are client-side exceptions that can be raised before the message is sent to the broker, thrown synchronously by the .send() call before it returns a Future.

NEW QUESTION 7
A topic has three replicas and you set min.insync.replicas to 2. If two of the three replicas are not available, what happens when a produce request with acks=all is sent to the broker?

  • A. NotEnoughReplicasException will be returned
  • B. Produce request is honored with single in-sync replica
  • C. Produce request will block until one of the two unavailable replicas is available again.

Answer: A

Explanation:
With this configuration, the partition effectively becomes read-only once only a single in-sync replica remains: a produce request with acks=all receives a NotEnoughReplicasException.

NEW QUESTION 8
The Controller is a broker that... (select two)

  • A. is elected by the Zookeeper ensemble
  • B. is responsible for partition leader election
  • C. is elected by broker majority
  • D. is responsible for consumer group rebalances

Answer: AB

Explanation:
The Controller is a broker that, in addition to the usual broker functions, is responsible for partition leader election. It is elected via Zookeeper, and at any time only one broker can be the controller.

NEW QUESTION 9
How would you find all the partitions where one or more of the replicas are not in sync with the leader?

  • A. kafka-topics.sh --bootstrap-server localhost:9092 --describe --unavailable-partitions
  • B. kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions
  • C. kafka-topics.sh --broker-list localhost:9092 --describe --under-replicated-partitions
  • D. kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions

Answer: D

NEW QUESTION 10
There are two consumers C1 and C2 belonging to the same group G, subscribed to topics T1 and T2. Each of the topics has 3 partitions. How will the partitions be assigned to the consumers when the partition assignor is the RoundRobinAssignor?

  • A. C1 will be assigned partitions 0 and 2 from T1 and partition 1 from T2. C2 will have partition 1 from T1 and partitions 0 and 2 from T2.
  • B. Two consumers cannot read from two topics at the same time
  • C. C1 will be assigned partitions 0 and 1 from T1 and T2, C2 will be assigned partition 2 from T1 and T2.
  • D. All consumers will read from all partitions

Answer: A

Explanation:
The correct option is the only one where the two consumers share an equal number of partitions across the two topics of three partitions each. An interesting article to read is https://medium.com/@anyili0928/what-i-have-learned-from-kafka-partition-assignment-strategy-799fdf15d3ab
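The assignment can be reproduced with a toy model of round-robin dealing: sort all topic-partitions, then hand them out to the consumers in turn. The real RoundRobinAssignor behaves this way when, as here, both consumers subscribe to both topics.

```java
import java.util.ArrayList;
import java.util.List;

public class RoundRobinSketch {
    // Deal a sorted list of topic-partitions out to `consumers` consumers in
    // round-robin order; result.get(c) is the assignment of consumer c.
    public static List<List<String>> assign(List<String> partitions, int consumers) {
        List<List<String>> result = new ArrayList<>();
        for (int c = 0; c < consumers; c++) {
            result.add(new ArrayList<>());
        }
        for (int i = 0; i < partitions.size(); i++) {
            result.get(i % consumers).add(partitions.get(i));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> partitions = List.of("T1-0", "T1-1", "T1-2", "T2-0", "T2-1", "T2-2");
        List<List<String>> assignment = assign(partitions, 2);
        System.out.println("C1: " + assignment.get(0)); // [T1-0, T1-2, T2-1]
        System.out.println("C2: " + assignment.get(1)); // [T1-1, T2-0, T2-2]
    }
}
```

C1 ends up with T1 partitions 0 and 2 plus T2 partition 1, matching option A.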

NEW QUESTION 11
Which Kafka CLI should you use to consume from a topic?

  • A. kafka-console-consumer
  • B. kafka-topics
  • C. kafka-console
  • D. kafka-consumer-groups

Answer: A

Explanation:
Example: kafka-console-consumer --bootstrap-server 127.0.0.1:9092 --topic test --from-beginning

NEW QUESTION 12
An ecommerce website sells some custom-made goods. What is the natural way of modeling this data in Kafka Streams?

  • A. Purchase as stream, Product as stream, Customer as stream
  • B. Purchase as stream, Product as table, Customer as table
  • C. Purchase as table, Product as table, Customer as table
  • D. Purchase as stream, Product as table, Customer as stream

Answer: B

Explanation:
Mostly-static data is modeled as a table whereas business transactions should be modeled as a stream.

NEW QUESTION 13
What is true about Kafka brokers and clients from version 0.10.2 onwards?

  • A. Clients and brokers must have the exact same version to be able to communicate
  • B. A newer client can talk to a newer broker, but an older client cannot talk to a newer broker
  • C. A newer client can talk to a newer broker, and an older client can talk to a newer broker
  • D. A newer client can't talk to a newer broker, but an older client can talk to a newer broker

Answer: C

Explanation:
Kafka's new bidirectional client compatibility, introduced in 0.10.2, allows this. Read more here: https://www.confluent.io/blog/upgrading-apache-kafka-clients-just-got-easier/

NEW QUESTION 14
Select all the ways for one consumer to subscribe simultaneously to the following topics: topic.history, topic.sports, topic.politics (select two)

  • A. consumer.subscribe(Pattern.compile("topic\..*"));
  • B. consumer.subscribe("topic.history"); consumer.subscribe("topic.sports"); consumer.subscribe("topic.politics");
  • C. consumer.subscribePrefix("topic.");
  • D. consumer.subscribe(Arrays.asList("topic.history", "topic.sports", "topic.politics"));

Answer: AD

Explanation:
Multiple topics can be passed as a list or matched with a regex pattern. Note that subscribe() is not incremental: each call replaces the previous subscription, which is why option B is wrong.
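The pattern from option A can be checked in isolation (note that in Java source the backslash is escaped, so topic\..* is written "topic\\..*"); this only exercises the regex, not an actual consumer.

```java
import java.util.regex.Pattern;

public class SubscribePatternSketch {
    // The regex passed to consumer.subscribe(Pattern) in option A.
    static final Pattern TOPIC_PATTERN = Pattern.compile("topic\\..*");

    public static boolean matches(String topic) {
        return TOPIC_PATTERN.matcher(topic).matches();
    }

    public static void main(String[] args) {
        System.out.println(matches("topic.history"));  // true
        System.out.println(matches("topic.sports"));   // true
        System.out.println(matches("other.topic"));    // false
    }
}
```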

NEW QUESTION 15
A Zookeeper configuration has tickTime of 2000, initLimit of 20 and syncLimit of 5. What's the timeout value for followers to connect to Zookeeper?

  • A. 20 sec
  • B. 10 sec
  • C. 2000 ms
  • D. 40 sec

Answer: D

Explanation:
The tick time is 2000 ms, and initLimit is the setting taken into account when followers establish a connection to Zookeeper, so the timeout is 2000 ms * 20 = 40000 ms = 40 s.
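The arithmetic as a small worked example (the timeout is simply initLimit ticks of tickTime milliseconds each):

```java
public class ZookeeperTimeoutSketch {
    // Follower connection timeout = initLimit ticks, each tick being
    // tickTime milliseconds.
    public static int initTimeoutMs(int tickTimeMs, int initLimit) {
        return tickTimeMs * initLimit;
    }

    public static void main(String[] args) {
        System.out.println(initTimeoutMs(2000, 20)); // 40000 ms = 40 s
    }
}
```

The syncLimit of 5 governs a different timeout (follower sync with the leader: 5 * 2000 ms = 10 s), which is why 10 sec appears as a distractor.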

NEW QUESTION 16
A bank uses a Kafka cluster for credit card payments. What should be the value of the property unclean.leader.election.enable?

  • A. FALSE
  • B. TRUE

Answer: A

Explanation:
Setting unclean.leader.election.enable to true allows out-of-sync replicas to become leaders. Messages would be lost when this occurs, effectively losing credit card payments and making our customers very angry.

NEW QUESTION 17
A topic receives all the orders for the products that are available on a commerce site. Two applications want to process all the messages independently: order fulfilment and monitoring. The topic has 4 partitions; how would you organise the consumers for optimal performance and resource usage?

  • A. Create 8 consumers in the same group with 4 consumers for each application
  • B. Create two consumers groups for two applications with 8 consumers in each
  • C. Create two consumer groups for two applications with 4 consumers in each
  • D. Create four consumers in the same group, one for each partition - two for fulfilment and two for monitoring

Answer: C

Explanation:
Two consumer groups, one for each application, so that all messages are delivered to both applications; and 4 consumers in each group, because the topic has 4 partitions and you cannot have more active consumers in a group than partitions (the extra consumers would sit idle and waste resources).

NEW QUESTION 18
......
