Unique and Reliable C1000-185 Japanese Study Content - C1000-185 Exam Preparation and Practice Questions

Jpshiken's C1000-185 exam materials are high quality and come with a high pass rate. They are produced by experts who understand the real C1000-185 exam thoroughly and who have been creating C1000-185 study materials for many years. These experts know very well what candidates actually need when preparing for the C1000-185 exam, and they understand the conditions of the real exam equally well, so the materials show you what the actual test is like. You can also try the software version of the C1000-185 exam questions, which lets you simulate the real exam.

IBM watsonx Generative AI Engineer - Associate Certification C1000-185 Exam Questions (Q49-Q54):

Question 49
Which of the following best describes the process of large-scale iterative alignment tuning in the context of customizing LLMs with InstructLab?
A. Repeated fine-tuning of a model using reinforcement learning, focusing on aligning its outputs with human preferences across a diverse set of tasks
B. Fine-tuning the model exclusively on binary classification tasks to improve its generalization on all other tasks
C. A single training run of the model on a dataset to generate better predictions for a fixed number of prompts
D. Direct training of the model on an expanded version of the dataset, without adjusting prompts or training tasks
Correct answer: A
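For intuition, here is a toy sketch of the iterative alignment loop described in option A: repeatedly generate candidate outputs across a diverse task set, score them with a preference signal, and fine-tune toward the preferred ones. Everything in it is a hypothetical stand-in (the model dict, generate_candidates, preference_score, update_on_preferred); it is not an InstructLab or watsonx API.

```python
import random

# Hypothetical stand-ins for illustration only; not InstructLab APIs.
def generate_candidates(model, prompt, n=4):
    """Sample n candidate completions from the current model."""
    return [f"{model['name']} answer {i} to: {prompt}" for i in range(n)]

def preference_score(candidate):
    """Stand-in for a human or reward-model preference signal."""
    return random.random()

def update_on_preferred(model, prompt, best):
    """Stand-in for one fine-tuning step toward the preferred output."""
    model["updates"] += 1

model = {"name": "toy-llm", "updates": 0}
tasks = ["Summarize this memo", "Write a polite refusal", "Explain recursion"]

# "Large-scale iterative" means repeating generate -> score -> update over
# many rounds and a diverse task mix, rather than a single training run (C).
for round_ in range(3):
    for prompt in tasks:
        candidates = generate_candidates(model, prompt)
        best = max(candidates, key=preference_score)
        update_on_preferred(model, prompt, best)

print(f"applied {model['updates']} preference-guided updates")
```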
Question 50
You are tasked with generating a product description for an e-commerce platform using a generative AI model. However, you notice that the generated text tends to repeat phrases excessively, leading to verbose output. To address this, you decide to adjust the model's temperature parameter.
Which of the following changes would help reduce the repetitiveness of the generated text while maintaining a balance between creativity and coherence?
A. Set the temperature to 0.0
B. Decrease the temperature from 0.8 to 0.6
C. Decrease the temperature from 0.9 to 0.3
D. Increase the temperature from 0.5 to 1.5
Correct answer: C
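Temperature rescales the next-token logits before sampling: p_i is proportional to exp(logit_i / T), so a lower T concentrates probability on the top tokens (more deterministic) and a higher T flattens the distribution (more varied). A self-contained sketch with toy logits and no real model:

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Softmax sampling with temperature: p_i proportional to exp(logit_i / T)."""
    scaled = [logit / temperature for logit in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# Toy next-token logits over a four-token vocabulary; index 0 is the
# most likely token.
logits = [2.0, 1.0, 0.5, 0.1]
for t in (0.3, 0.9, 1.5):
    picks = [sample_with_temperature(logits, t) for _ in range(1000)]
    top_share = picks.count(0) / len(picks)
    print(f"T={t}: top token sampled {top_share:.0%} of the time")
```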
Question 51
When preparing a dataset for fine-tuning a large language model for a named entity recognition (NER) task, which of the following preprocessing steps is most critical for ensuring accurate entity classification?
A. Use sentence segmentation to isolate each named entity in its own sentence
B. Ensure proper tokenization of the dataset according to the model's vocabulary
C. Remove rare entities to improve model performance on common entities
D. Randomly shuffle the dataset before training to increase diversity
Correct answer: B
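The reason tokenization (option B) matters so much for NER is that word-level entity labels have to be re-aligned to the model's subword tokens. A minimal sketch, assuming the Hugging Face transformers library is available; the bert-base-cased checkpoint and the tag set are just examples:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

words = ["IBM", "announced", "watsonx", "in", "Armonk"]
labels = ["B-ORG", "O", "B-PRODUCT", "O", "B-LOC"]  # one tag per word

enc = tokenizer(words, is_split_into_words=True)

# Re-align word-level labels to subword tokens: special tokens
# ([CLS]/[SEP]) and continuation pieces get the ignore index -100.
aligned = []
previous_word = None
for word_id in enc.word_ids():
    if word_id is None or word_id == previous_word:
        aligned.append(-100)
    else:
        aligned.append(labels[word_id])
    previous_word = word_id

for token, label in zip(enc.tokens(), aligned):
    print(f"{token:>10} -> {label}")
```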
Question 52
You are fine-tuning a general-purpose language model on a medical dataset to generate summaries of patient consultations. After fine-tuning, you notice that the model sometimes generates hallucinations: statements that are factually incorrect or irrelevant to the specific domain. You suspect that the fine-tuning process did not sufficiently align the model with the medical domain.
Which of the following is the most effective technique to reduce hallucinations during fine-tuning?
A. Increase the number of layers in the model
B. Increase the model's batch size during training
C. Add more general-purpose data to the fine-tuning dataset
D. Use domain-specific tokenization during fine-tuning
Correct answer: D
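One concrete form of option D is extending the tokenizer's vocabulary with in-domain terms before fine-tuning, so that medical vocabulary stops being shredded into generic subwords. A minimal sketch, assuming Hugging Face transformers; the gpt2 checkpoint and the term list are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Placeholder domain terms; a real project would mine these from the corpus.
medical_terms = ["myocardial", "tachycardia", "metformin"]
num_added = tokenizer.add_tokens(medical_terms)

# The embedding matrix must grow to cover the new vocabulary entries,
# whose vectors are then learned during fine-tuning.
model.resize_token_embeddings(len(tokenizer))

print(f"added {num_added} domain tokens; vocab size is now {len(tokenizer)}")
print(tokenizer.tokenize("Patient presented with tachycardia"))
```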
Question 53
You are developing a document understanding system that integrates IBM watsonx.ai and Watson Discovery to extract insights from large sets of documents. The system needs to leverage watsonx.ai's large language model to summarize documents and Watson Discovery to search and extract relevant data from those documents.
What is the best approach to achieve this integration?
A. Use Watson Discovery to index and search documents, and then send the retrieved documents to watsonx.ai's LLM for summarization through API calls.
B. Use Watson Discovery for summarizing documents and watsonx.ai's LLM for only retrieving relevant content from the documents.
C. Use watsonx.ai's LLM to both retrieve and summarize the documents, bypassing Watson Discovery.
D. Use watsonx.ai's LLM to create a summary for each document in advance, and Watson Discovery only for searching pre-generated summaries.
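Option A describes the usual retrieve-then-summarize (RAG-style) split of responsibilities: Watson Discovery handles indexing and search, and the watsonx.ai LLM handles generation. A minimal sketch, assuming the ibm-watson and ibm-watsonx-ai Python SDKs; all credentials, URLs, project IDs, and the model choice are placeholders, and exact method names and response fields can differ across SDK versions:

```python
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import DiscoveryV2
from ibm_watsonx_ai.foundation_models import ModelInference

# 1) Search step: Watson Discovery indexes the documents and returns the
#    passages most relevant to the user's question.
discovery = DiscoveryV2(
    version="2023-03-31",
    authenticator=IAMAuthenticator("DISCOVERY_APIKEY"),
)
discovery.set_service_url("https://api.us-south.discovery.watson.cloud.ibm.com")

hits = discovery.query(
    project_id="DISCOVERY_PROJECT_ID",
    natural_language_query="renewal terms in the 2024 supplier contracts",
    count=3,
).get_result()["results"]

context = "\n\n".join(
    passage["passage_text"]
    for doc in hits
    for passage in doc.get("document_passages", [])
)

# 2) Summarization step: send the retrieved passages to a watsonx.ai LLM
#    through its API.
llm = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",
    credentials={"url": "https://us-south.ml.cloud.ibm.com",
                 "apikey": "WATSONX_APIKEY"},
    project_id="WATSONX_PROJECT_ID",
)
print(llm.generate_text(prompt=f"Summarize the key points:\n\n{context}"))
```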