SPLK-2002 Test Training & SPLK-2002 Japanese Edition Review Guide

We know that passing the SPLK-2002 exam is very important, especially for the many people who are looking for a good job and want to earn the SPLK-2002 certification. Obtaining the certification will be a great help. For example, it can help you take on more responsibility and a better title at your company than before, and the SPLK-2002 certification can help you earn a higher salary. We believe we have the ability to help you pass the exam and earn the certification with our SPLK-2002 exam torrent.

Splunk Enterprise Certified Architect Certification SPLK-2002 Exam Questions (Q100-Q105):

Question # 100
Which component in the splunkd.log will log information related to bad event breaking?
A. EventBreaking
B. AggregatorMiningProcessor
C. Audittrail
D. IndexingPipeline
Correct Answer: B
Explanation:
The AggregatorMiningProcessor component in the splunkd.log file will log information related to bad event breaking. The AggregatorMiningProcessor is responsible for breaking the incoming data into events and applying the props.conf settings. If there is a problem with the event breaking, such as incorrect timestamps, missing events, or merged events, the AggregatorMiningProcessor will log the error or warning messages in the splunkd.log file. The Audittrail component logs information about the audit events, such as user actions, configuration changes, and search activity. The EventBreaking component logs information about the event breaking rules, such as the LINE_BREAKER and SHOULD_LINEMERGE settings. The IndexingPipeline component logs information about the indexing pipeline, such as the parsing, routing, and indexing phases.
For more information, see About Splunk Enterprise logging and Configure event line breaking in the Splunk documentation.
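To make this concrete, the following is a minimal sketch of an event-breaking configuration and a search for the related warnings. The sourcetype name my_custom_log, the break regex, and the timestamp format are illustrative assumptions, not settings taken from the question.

# props.conf on the parsing tier (indexer or heavy forwarder), hypothetical sourcetype
[my_custom_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

# SPL: check splunkd.log for event-breaking problems reported by the AggregatorMiningProcessor
index=_internal sourcetype=splunkd component=AggregatorMiningProcessor (log_level=WARN OR log_level=ERROR)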
Question # 101
A customer plans to ingest 600 GB of data per day into Splunk. They will have six concurrent users, and they also want high data availability and high search performance. The customer is concerned about cost and wants to spend the minimum amount on the hardware for Splunk. How many indexers are recommended for this deployment?
A. Two indexers clustered, assuming high availability is the greatest priority.
B. Three indexers not in a cluster, assuming a long data retention period.
C. Two indexers not in a cluster, assuming users run many long searches.
D. Two indexers clustered, assuming a high volume of saved/scheduled searches.
Correct Answer: D
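As a rough sizing sketch only, assuming the frequently cited planning figure of roughly 300 GB/day of ingest per reference-spec indexer under a light search load such as six concurrent users (actual capacity depends on hardware and search profile and should be checked against Splunk's capacity planning documentation):

600 GB/day ÷ ~300 GB/day per indexer ≈ 2 indexers
Clustering those two indexers (for example with a replication factor of 2) covers the high-availability requirement while keeping hardware spend at a minimum.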
Question # 102
When converting from a single-site to a multi-site cluster, what happens to existing single-site clustered buckets?
A. They will be replicated across all peers in the multi-site cluster and age out based on existing policies.
B. They will stop replicating within the single-site and remain on the indexer they reside on and age out according to existing policies.
C. They will continue to replicate within the origin site and age out based on existing policies.
D. They will maintain replication as required according to the single-site policies, but never age out.
Correct Answer: B
Explanation:
When converting from a single-site to a multi-site cluster, existing single-site clustered buckets will maintain replication as required according to the single-site policies, but never age out. Single-site clustered buckets are buckets that were created before the conversion to a multi-site cluster. These buckets will continue to follow the single-site replication and search factors, meaning that they will have the same number of copies and searchable copies across the cluster, regardless of the site. These buckets will never age out, meaning that they will never be frozen or deleted, unless they are manually converted to multi-site buckets. Single-site clustered buckets will not continue to replicate within the origin site, because they will be distributed across the cluster according to the single-site policies. Single-site clustered buckets will not be replicated across all peers in the multi-site cluster, because they will follow the single-site replication factor, which may be lower than the multi-site total replication factor. Single-site clustered buckets will not stop replicating within the single site and remain on the indexer they reside on, because they will still be subject to the replication and availability rules of the cluster.
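For reference, a manager-node server.conf for such a conversion might look like the minimal sketch below. The site names and factor values are illustrative assumptions, and constrain_singlesite_buckets is the documented setting that controls how pre-existing single-site buckets are handled after the conversion.

# server.conf on the manager (master) node - illustrative values only
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
# default true: pre-migration single-site buckets keep their single-site replication behavior
constrain_singlesite_buckets = true

Each peer node and search head also needs a site assignment in the [general] stanza of its own server.conf before it rejoins the multisite cluster.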
Question # 103
The guidance Splunk gives for estimating size on disk for syslog data is 50% of the original data size. How does this divide between files in the index?
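As a rough illustration of how that 50% figure is commonly broken down in Splunk's storage-estimation guidance (approximate values; the exact split varies with the data and should be verified against the current documentation):

compressed rawdata ≈ 15% of the original data size
index (tsidx and related) files ≈ 35% of the original data size
total on disk ≈ 50% of the original data size

For example, 100 GB of incoming syslog data would occupy roughly 15 GB of rawdata plus 35 GB of index files, about 50 GB in total.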
Question # 104
Indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. There is ample CPU and memory available on the indexers. Which of the following is most likely to improve indexing performance?
A. Decrease the maximum concurrent scheduled searches in limits.conf
B. Increase the number of parallel ingestion pipelines in server.conf
C. Increase the maximum number of hot buckets in indexes.conf
D. Decrease the maximum size of the search pipelines in limits.conf
Correct Answer: B
Explanation:
Increasing the number of parallel ingestion pipelines in server.conf is most likely to improve indexing performance when indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. The parallel ingestion pipelines allow Splunk to process multiple data streams simultaneously, which increases the indexing throughput and reduces the indexing latency. Increasing the maximum number of hot buckets in indexes.conf will not improve indexing performance, but rather increase the disk space consumption and the bucket rolling time. Decreasing the maximum size of the search pipelines in limits.conf will not improve indexing performance, but rather reduce the search performance and the search concurrency. Decreasing the maximum concurrent scheduled searches in limits.conf will not improve indexing performance, but rather reduce the search capacity and the search availability. For more information, see Configure parallel ingestion pipelines in the Splunk documentation.
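A minimal sketch of the change, assuming two pipelines is an appropriate value for the spare CPU and memory described in the question (the right value depends on the actual headroom on each indexer):

# server.conf on each indexer; a splunkd restart is required for the change to take effect
[general]
parallelIngestionPipelines = 2

The effect can then be checked in the _internal index, for example by reviewing pipeline and queue metrics from metrics.log, to confirm that indexing throughput improves and queue blocking decreases.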