CCAAK Real Exam Questions & CCAAK Exam Materials

The CCAAK study materials help you pass the exam quickly and obtain the certificate you want, giving you one more bargaining chip for landing a good job. With the CCAAK study materials you start from a higher baseline, pass the CCAAK exam a step ahead of others, and seize opportunities sooner. In this fast-paced society your time is precious, and it is hard to gain an edge relying on your own effort alone; the CCAAK study questions will be your most reliable assistant.

Confluent Certified Administrator for Apache Kafka Certification CCAAK Exam Questions (Q23-Q28):

QUESTION 23
Kafka Connect is running on a two-node cluster in distributed mode. The connector is a source connector that pulls data from Postgres tables (users/payment/orders), writes to topics with two partitions, and with replication factor two. The development team notices that the data is lagging behind.
What should be done to reduce the data lag?
The Connector definition is listed below:
{
"name": "confluent-postgresql-source",
"connector.class": "PostgresSource",
"topic.prefix": "postgresql_",
...
"db.name": "postgres",
"table.whitelist": "users,payment,orders",
"timestamp.column.name": "created_at",
"output.data.format": "JSON",
"db.timezone": "UTC",
"tasks.max": "1"
}
A. Increase the number of Connect Tasks (tasks.max value).
B. Increase the number of partitions.
C. Increase the number of Connect Nodes.
D. Increase the replication factor and increase the number of Connect Tasks.
Answer: A
Explanation:
The connector is currently configured with "tasks.max": "1", which means only one task is handling all tables (users, payment, orders). This can create a bottleneck and lead to lag. Increasing tasks.max allows Kafka Connect to parallelize work across multiple tasks, which can pull data from different tables concurrently and reduce lag.
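As a sketch, the fix is a one-line change to the connector configuration. The values below are illustrative (other fields elided); with three tables, three tasks let Connect assign roughly one table per task, and the two Connect workers share the tasks between them:

```json
{
  "name": "confluent-postgresql-source",
  "connector.class": "PostgresSource",
  "table.whitelist": "users,payment,orders",
  "tasks.max": "3"
}
```

Note that tasks.max is an upper bound: the connector may create fewer tasks if the source cannot be split further.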
QUESTION 24
Your Kafka cluster has four brokers. The topic t1 on the cluster has two partitions, and it has a replication factor of three. You create a Consumer Group with four consumers, which subscribes to t1.
In the scenario above, how many Controllers are in the Kafka cluster?
A. One
B. Three
C. Two
D. Four
Answer: A
Explanation:
In a Kafka cluster, only one broker acts as the Controller at any given time. The Controller is responsible for managing cluster metadata, such as partition leadership and broker status. Even if the cluster has multiple brokers (in this case, four), only one is elected as the Controller, and others serve as regular brokers. If the current Controller fails, another broker is automatically elected to take its place.
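The single-controller invariant behind this answer can be illustrated with a toy sketch. This is not Kafka's real election code (real clusters elect the controller through ZooKeeper or the KRaft quorum); the "lowest broker id wins" rule here is just a stand-in:

```python
# Toy model: at most one controller at a time, with automatic
# re-election when the current controller fails.
def elect_controller(live_brokers):
    # Stand-in election rule; Kafka actually uses ZooKeeper/KRaft.
    return min(live_brokers) if live_brokers else None

brokers = {1, 2, 3, 4}
controller = elect_controller(brokers)
print(controller)               # exactly one controller among four brokers

brokers.discard(controller)     # the controller fails...
controller = elect_controller(brokers)
print(controller)               # ...and a surviving broker takes over
```

The point the sketch captures: the number of brokers, partitions, or consumers never changes the controller count, which is always one.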
QUESTION 25
A company is setting up a log ingestion use case where they will consume logs from numerous systems. The company wants to tune Kafka for maximum throughput.
In this scenario, what acknowledgment setting makes the most sense?
A. acks=undefined
B. acks=1
C. acks=all
D. acks=0
Answer: D
Explanation:
acks=0 provides the highest throughput because the producer does not wait for any acknowledgment from the broker. This minimizes latency and maximizes performance.
However, it comes at the cost of no durability guarantees - messages may be lost if the broker fails before writing them. This setting is suitable when throughput is critical and occasional data loss is acceptable, such as in some log ingestion use cases where logs are also stored elsewhere.
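A hedged example of a throughput-first producer configuration: the property names below are standard Kafka producer configs, but the exact values are illustrative and would need tuning for a real workload:

```properties
# Fire-and-forget: do not wait for any broker acknowledgment
acks=0
# Batch more records per request to raise throughput
batch.size=65536
linger.ms=50
# Compress batches to reduce network and disk usage
compression.type=lz4
```

Raising batch.size and linger.ms trades a little latency for larger, more efficient requests, which complements acks=0 in a throughput-oriented pipeline.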
QUESTION 26
You are managing a cluster with a large number of topics, and each topic has a lot of partitions. A team wants to significantly increase the number of partitions for some topics.
Which parameters should you check before increasing the partitions?
A. Check if compression is being used.
B. Check the producer batch size and buffer size.
C. Check the max open file count on brokers.
D. Check if acks=all is being used.
Answer: C
Explanation:
Each Kafka partition maps to multiple log segment files, and each segment results in open file descriptors on the broker. When the number of partitions increases significantly, it can exceed the OS-level limit for open files per broker process, leading to failures or degraded performance. Therefore, it is essential to check and possibly increase the ulimit -n (max open files) setting on the broker machines.
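A back-of-envelope sketch of why partition counts stress the file-descriptor limit. The per-partition segment count and files-per-segment count below are assumptions for illustration; real numbers depend on segment.bytes, retention settings, and index files:

```python
def estimated_open_files(partitions_per_broker,
                         segments_per_partition=3,
                         files_per_segment=3):
    # Each segment typically keeps a .log file plus offset/time index
    # files open; older segments may stay open until cleaned up.
    return partitions_per_broker * segments_per_partition * files_per_segment

# 4,000 partitions on one broker already dwarfs common ulimit -n
# defaults such as 1024 or 4096:
print(estimated_open_files(4000))  # -> 36000
```

Even with conservative assumptions, the estimate shows why the broker's max open file count must be checked (and usually raised) before a large partition increase.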
QUESTION 27
If the Controller detects the failure of a broker that was the leader for some partitions, which actions will be taken? (Choose two.)
A. The Controller sends the new leader and ISR list changes to all brokers.
B. The Controller persists the new leader and ISR list to ZooKeeper.
C. The Controller sends the new leader and ISR list changes to all producers and consumers.
D. The Controller waits for a new leader to be nominated by ZooKeeper.
Answer: A, B
Explanation:
The Controller updates ZooKeeper with the new leader and in-sync replica (ISR) information to maintain metadata consistency.
Brokers need this information to correctly route client requests and continue replication.
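The two correct actions can be sketched as a toy model. The dict-based "ZooKeeper" and broker caches below are stand-ins, not real APIs: the controller persists the new leader/ISR first, then pushes the change to brokers; producers and consumers are never contacted directly and instead refresh metadata on demand.

```python
zookeeper = {}                    # stand-in for the ZooKeeper znode tree
brokers = {1: {}, 2: {}, 3: {}}   # per-broker metadata caches

def on_leader_failure(partition, new_leader, isr):
    # 1) Persist the new leader and ISR list to ZooKeeper (answer B)
    zookeeper[partition] = {"leader": new_leader, "isr": isr}
    # 2) Send the change to all brokers (answer A); clients are NOT notified
    for cache in brokers.values():
        cache[partition] = dict(zookeeper[partition])

on_leader_failure("t1-0", new_leader=2, isr=[2, 3])
print(zookeeper["t1-0"]["leader"])   # -> 2
print(brokers[3]["t1-0"]["isr"])     # -> [2, 3]
```

Clients then discover the new leader on their next metadata request to any broker, which is why option C is wrong.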