CCDAK - Confluent Certified Developer for Apache Kafka Practice Questions

CCDAK Confluent Certified Developer for Apache Kafka

CCDAK is one of the most popular exams for Apache Kafka. In this section, I have listed some example questions.

Kafka Connect can be run in these modes: (Select two options)

  • Distributed Mode
  • Vertical mode
  • Batch mode
  • Standalone mode

Kafka Connect can be run in Standalone mode or Distributed mode. Standalone mode is useful for developing and testing Kafka Connect on a local machine.

Distributed mode runs Connect workers on multiple machines (nodes).
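As a sketch, the two modes are started with different scripts (the property-file names below are examples, not fixed names):

```shell
# Standalone mode: a single worker process, connector config passed as a
# properties file; offsets are kept in a local file. Good for local testing.
connect-standalone.sh connect-standalone.properties my-connector.properties

# Distributed mode: start a worker on each node with the same group.id;
# connectors are then submitted through the Connect REST API, not a file.
connect-distributed.sh connect-distributed.properties
```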

Adding a field without a default value is a ….. compatibility change

  • Backward
  • Forward
  • Full
  • None

Adding a field without a default value is a forward compatible change (as is deleting a field that has a default value).
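For example, reusing the User record that appears in the REST Proxy example later in this article, a sketch of a version 2 schema that adds an age field without a default:

```json
{"type": "record", "name": "User", "fields": [
    {"name": "name", "type": "string"},
    {"name": "age",  "type": "int"}
]}
```

A reader still on version 1 (only the name field) simply ignores age, so data written with version 2 can be read with version 1 (forward compatible). The reverse does not hold: a version-2 reader cannot fill in age when reading version-1 data, because no default is defined.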

In order to push data from source to Kafka, you need to implement

  • Kafka Producer
  • Kafka Consumer
  • Kafka Connect Sink
  • Kafka API

The correct answer is to implement a Kafka Producer. The Kafka Producer is a Kafka client that publishes records to the Kafka cluster.

To export data from Kafka to S3, which Kafka connector do you need to use?

  • Amazon S3 source connector
  • Amazon S3 Sink connector
  • Kafka Streams S3 Connector
  • CDC Connector

You can use the Kafka Connect Amazon S3 sink connector to export data from Apache Kafka® topics to S3 objects in either Avro, JSON, or Bytes formats.
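A minimal sink-connector configuration might look like the sketch below; the bucket name, region, topic, and flush size are placeholder values, and the linked documentation has the full property list:

```properties
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=my-topic
s3.bucket.name=my-example-bucket
s3.region=us-east-1
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.avro.AvroFormat
# Number of records written to S3 per object.
flush.size=1000
```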

https://docs.confluent.io/current/connect/kafka-connect-s3/index.html

Which protocol is used in Kafka?

  • UDP
  • TTLS
  • TCP
  • HTTP

Kafka uses a binary protocol over TCP.

What is the command to produce a message to Kafka from console?

  • kafka-topics.sh --zookeeper localhost:9092 --topic my-topic
  • kafka-topics.sh --broker-list localhost:9092 --topic my-topic
  • kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic
  • kafka-console-consumer.sh --broker-list localhost:9092 --topic my-topic --from-beginning

In order to publish a message to Kafka, you need to use

kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic

In order to create a topic with 3 partitions, you need to execute

  • kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 3 --topic test
  • kafka-topics.sh --create --zookeeper localhost:9092 --replication-factor 1 --partitions 3 --topic test
  • kafka-producer-topics.sh --create --zookeeper localhost:9092 --replication-factor 1 --partitions 3 --topic test
  • kafka-topics.sh --create --zookeeper localhost:9092 --replication 1 --partitions 3 --topic test

kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 3 --topic test

The --partitions parameter defines the number of partitions. On the other hand, you need to use --bootstrap-server to define the list of broker hosts to connect to.
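To check the result, the same script can describe the topic (assuming a broker is running on localhost:9092):

```shell
# Lists partition count, replication factor, and per-partition leaders.
kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic test
```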

What are the default values of the replication factor and the partition count? (Select two answers)

  • The default replication factor for new topics is 1
  • The default partition number is 1
  • The default replication factor for new topics is 3
  • The default partition number is 3

For partitions, the default value is 1 (the broker config num.partitions=1). The default replication factor for automatically created topics is also 1 (the broker config default.replication.factor=1).
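In the broker's server.properties these defaults can be set explicitly; the sketch below simply restates the shipped default values:

```properties
# Broker-side defaults applied to automatically created topics.
num.partitions=1
default.replication.factor=1
```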

In order to send data to Kafka without implementing any Producer, you need to use

  • Kafka Producer
  • Kafka Rest Proxy
  • Kafka Sink
  • Kafka API

Kafka REST Proxy allows you to produce/consume messages without writing any client code.

Example of producing a message:

$ curl -X POST -H "Content-Type: application/vnd.kafka.avro.v1+json" \
--data '{"value_schema": "{\"type\": \"record\", \"name\": \"User\", \"fields\": [{\"name\": \"name\", \"type\": \"string\"}]}", "records": [{"value": {"name": "testUser"}}]}' \
"http://localhost:8082/topics/avrotest"
{"offsets":[{"partition":0,"offset":0,"error_code":null,"error":null}],"key_schema_id":null,"value_schema_id":21}
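The REST Proxy replies with plain JSON, as in the response above, so the assigned offset can be pulled out with any JSON tool. A small sketch from the shell using python3, with the response string hard-coded here for illustration:

```shell
# The produce response from the example above, hard-coded for illustration.
RESPONSE='{"offsets":[{"partition":0,"offset":0,"error_code":null,"error":null}],"key_schema_id":null,"value_schema_id":21}'

# Extract the offset of the first produced record.
python3 -c 'import json,sys; print(json.loads(sys.argv[1])["offsets"][0]["offset"])' "$RESPONSE"
# prints 0
```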

Which of the sentences below explains at-least-once semantics?

  • Once the message is processed properly, the consumer is going to send an acknowledgement
  • Once the message is received by the consumer, acknowledgement is sent accordingly
  • The message must be delivered only once and no message should be lost

Once the message is processed properly, the consumer is going to send an acknowledgement

Consumers will receive and process every message, but they may process the same message more than once.
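On the consumer side, at-least-once delivery is typically achieved by committing offsets only after processing; a sketch of the relevant consumer configuration:

```properties
# Turn off automatic offset commits; the application commits offsets
# (e.g. commitSync()) only after each record is fully processed. A crash
# between processing and commit causes redelivery, hence "at least once".
enable.auto.commit=false
```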

Do you want more questions like this?

3 tests with 150 exam questions to prepare for the Confluent Certified Developer for Apache Kafka certification.

Go to the course with this link!

Data & Cloud Architect and Trainer.