Nutanix Certified Master - Multicloud Infrastructure (NCM-MCI) Certification NCM-MCI-6.10 Exam Questions (Q22-Q27):
Question # 22
An administrator wants to increase the performance of their Database virtual machine.
Database_VM has a database that is spread across three vDisks in the volume group Database_VM. The volume group is directly attached to the virtual machine. Previous performance analysis has indicated all storage requests are going to the same node. While this test environment has 1 node, the production environment has 3 nodes.
Configure the Volume Group Database_VM so that it's optimized for the user's VM and the production environment. The virtual machine has been powered off and moved to this test cluster for the maintenance work.
Note: Do not power on the VM.
Correct Answer:
Explanation:
Here is the step-by-step solution to configure the Volume Group for optimized performance in the production environment.
This task is performed in Prism Central.
* From the main dashboard, navigate to Compute & Storage > Volume Groups.
* Find the Volume Group named Database_VM in the list.
* Select the checkbox next to Database_VM.
* Click the Actions dropdown menu and select Update.
* In the "Update Volume Group" dialog, scroll to the bottom of the "Basic Configuration" section.
* Find the checkbox labeled Enable Client Side Load Balancing and check it.
Note: This setting allows the iSCSI initiator within the guest VM to connect to all CVMs in the cluster, distributing the storage load from the three vDisks across all three nodes in the production environment instead of focusing all I/O on just one.
* Click Save.
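The effect of this setting can be illustrated with a small sketch. Without client-side load balancing, every vDisk iSCSI session targets a single CVM; with it, the initiator in the guest spreads the sessions across the cluster. The CVM names and the round-robin policy below are simplifying assumptions for illustration only; the real placement is negotiated by the iSCSI initiator and the cluster.

```python
# Illustrative sketch only: models how client-side load balancing spreads
# the three vDisk iSCSI sessions across CVMs instead of a single node.
# CVM names and the round-robin policy are simplifying assumptions.

def place_vdisk_sessions(vdisks, cvms, load_balanced):
    """Return a mapping of vDisk -> CVM serving its iSCSI session."""
    if not load_balanced:
        # All sessions land on the single discovery/owner CVM.
        return {vd: cvms[0] for vd in vdisks}
    # Sessions are distributed across every CVM in the cluster.
    return {vd: cvms[i % len(cvms)] for i, vd in enumerate(vdisks)}

vdisks = ["vdisk-0", "vdisk-1", "vdisk-2"]
cvms = ["CVM-A", "CVM-B", "CVM-C"]  # three-node production cluster

before = place_vdisk_sessions(vdisks, cvms, load_balanced=False)
after = place_vdisk_sessions(vdisks, cvms, load_balanced=True)

print(len(set(before.values())))  # 1 -> all I/O concentrates on one node
print(len(set(after.values())))   # 3 -> I/O spread across all nodes
```

On AHV clusters the same toggle is also exposed through aCLI (commonly documented as `vg.update <vg_name> load_balance_vm_attachments=true`); verify the exact syntax against the documentation for your AOS version.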
Question # 23
Task 7
An administrator has been informed that a new workload requires a logically segmented network to meet security requirements.
Network configuration:
VLAN: 667
Network: 192.168.0.0
Subnet Mask: 255.255.255.0
DNS server: 34.82.231.220
Default Gateway: 192.168.0.1
Domain: cyberdyne.net
IP Pool: 192.168.9.100-200
DHCP Server IP: 192.168.0.2
Configure the cluster to meet the requirements for the new workload. If new objects are required, start their names with 667.
Correct Answer:
Explanation:
To configure the cluster to meet the requirements for the new workload, complete the following steps:
1. Create a new VLAN with ID 667 on the cluster. Log in to Prism Element and go to Network Configuration > VLANs > Create VLAN. Enter 667 as the VLAN ID and a name for the VLAN, such as 667_VLAN.
2. Create a new network segment with the network details provided. Log in to Prism Central and go to Network > Network Segments > Create Network Segment. Enter a name such as 667_Network_Segment and select 667_VLAN as the VLAN. Enter 192.168.0.0 as the Network Address, 255.255.255.0 as the Subnet Mask, 192.168.0.1 as the Default Gateway, 34.82.231.220 as the DNS Server, and cyberdyne.net as the Domain Name.
3. Create a new IP pool with the IP range provided. In Prism Central, go to Network > IP Pools > Create IP Pool. Enter a name such as 667_IP_Pool and select 667_Network_Segment as the Network Segment. Enter 192.168.9.100 as the Starting IP Address and 192.168.9.200 as the Ending IP Address.
4. Configure the DHCP server with the IP address provided. In Prism Central, go to Network > DHCP Servers > Create DHCP Server. Enter a name such as 667_DHCP_Server, select 667_Network_Segment as the Network Segment, enter 192.168.0.2 as the IP Address, and select 667_IP_Pool as the IP Pool.
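Before committing these values, the network parameters can be sanity-checked with Python's stdlib `ipaddress` module. Notably, the IP pool range as written in the task (192.168.9.100-200) falls outside the 192.168.0.0/24 subnet; in practice a pool must sit inside its subnet, so double-check the range against the lab's actual requirements.

```python
import ipaddress

# Sanity-check the task's network parameters with the stdlib.
subnet = ipaddress.ip_network("192.168.0.0/255.255.255.0")  # /24
gateway = ipaddress.ip_address("192.168.0.1")
dhcp_server = ipaddress.ip_address("192.168.0.2")
pool_start = ipaddress.ip_address("192.168.9.100")
pool_end = ipaddress.ip_address("192.168.9.200")

print(subnet.prefixlen)      # 24
print(gateway in subnet)     # True
print(dhcp_server in subnet) # True
# As given in the task, the pool range lies outside the subnet:
print(pool_start in subnet)  # False
print(pool_end in subnet)    # False
```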
Question # 24
A company who offers Infrastructure as a Service needs to onboard a new customer. The new customer requires a dedicated cloud plan which tolerates two host failures.
The customer is planning to move current workloads in three waves, with three months between waves starting today:
* Wave One: 100 VMs
* Wave Two: 50 VMs
* Wave Three: 20 VMs
Workload profile is:
* vCPU: 4
* vRAM: 16 GB
* Storage: 200 GB
The service provider company needs to estimate required resources upfront, to accommodate customer requirements, considering also that:
* limit the number of total nodes
* selected system vendor HPE
* selected model DX365-10-G11-NVMe
* full-flash node (including NVMe + SSD)
* 12 months runway
Create and save the scenario as IaaS and export it to the desktop; name the file IaaS-requirement.pdf.
Note: You must export the PDF to the desktop as IaaS-requirement.pdf to receive any credit.
Correct Answer:
Explanation:
Here is the step-by-step solution to create and export the capacity planning scenario. This task is performed within Prism Central.
1. Navigate to the Planning Dashboard
* From the Prism Central main menu (hamburger icon), navigate to Operations > Planning.
2. Create and Define the Scenario
* Click the + Create Scenario button.
* In the dialog box:
* Scenario Name: IaaS
* Scenario Type: Select New Workload
* Click Create. This will open the scenario editor.
3. Configure Cluster and Runway Settings
* In the "IaaS" scenario editor, find the Runway setting (top left) and set it to 12 Months.
* Find the Cluster configuration tile and click Edit.
* Set Number of Host Failures to Tolerate to 2.
* Click Save.
4. Define the Workload Profile
* In the Workloads section, click the + Add Workload button.
* Select Create a new workload profile.
* Fill in the VM specifications:
* Workload Name: Customer-VM (or similar)
* vCPU per VM: 4
* Memory per VM: 16 GB
* Storage per VM: 200 GB
* Click Add.
5. Set the Workload Growth Plan (Waves)
* You will be returned to the main scenario editor. In the timeline section ("Workload Plan"), add the VMs:
* Wave One (Today):
* Click + Add under the "Today" column.
* Select the Customer-VM profile.
* Enter 100 VMs.
* Click Add.
* Wave Two (3 Months):
* Click the + icon on the timeline itself.
* Set the date to 3 Months from today.
* Click + Add under this new "3 Months" column.
* Select the Customer-VM profile.
* Enter 50 VMs.
* Click Add.
* Wave Three (6 Months):
* Click the + icon on the timeline.
* Set the date to 6 Months from today.
* Click + Add under this new "6 Months" column.
* Select the Customer-VM profile.
* Enter 20 VMs.
* Click Add.
6. Select the Hardware
* In the Hardware configuration tile, click Change Hardware.
* In the "Select Hardware" pane:
* Vendor: Select HPE.
* Model: Search for and select DX365-10-G11-NVMe.
* Note: This model is full-flash by definition, satisfying the requirement.
* Click Done. The planner will recalculate the required nodes.
7. Save and Export the Scenario
* Click the Save icon (floppy disk) in the top-right corner to save the IaaS scenario.
* Click the Export icon (arrow pointing down) in the top-right corner.
* Select PDF from the dropdown menu.
* A "Save As" dialog will appear.
* Navigate to the Desktop.
* Set the file name to IaaS-requirement.pdf.
* Click Save.
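The sizing the planner produces can be cross-checked by hand. The totals below follow directly from the question's numbers; the only assumption is the standard Nutanix mapping of "tolerate two host failures" to replication factor 3 (RF3), and overheads such as CVM reservation, metadata, and growth headroom are ignored for simplicity.

```python
# Rough capacity cross-check for the IaaS scenario (sketch only).
# Assumption: tolerating two host failures implies replication factor 3
# (RF3); CVM overhead, metadata, and headroom are deliberately ignored.

waves = {"wave_1": 100, "wave_2": 50, "wave_3": 20}   # VMs per wave
vcpu_per_vm, vram_gb_per_vm, storage_gb_per_vm = 4, 16, 200

total_vms = sum(waves.values())
total_vcpu = total_vms * vcpu_per_vm
total_vram_gb = total_vms * vram_gb_per_vm
usable_storage_gb = total_vms * storage_gb_per_vm
raw_storage_gb = usable_storage_gb * 3  # RF3: three copies of every write

print(total_vms)          # 170
print(total_vcpu)         # 680
print(total_vram_gb)      # 2720
print(usable_storage_gb)  # 34000
print(raw_storage_gb)     # 102000
```

These back-of-the-envelope figures are what the planner's node recommendation for the DX365-10-G11-NVMe model should roughly account for, plus the n+2 hosts needed to actually survive two failures.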
Question # 25
An administrator needs to perform AOS and AHV upgrades on a Nutanix cluster and wants to ensure that VM data is replicated as quickly as possible when hosts and CVMs are rebooted.
Configure Cluster 1 so that after planned host and CVM reboots, the rebuild scan starts immediately.
Note:
You will need to use SSH for this task. Ignore the fact that this is a 1-node cluster.
Correct Answer:
Explanation:
Here is the step-by-step solution to configure the immediate rebuild scan on Cluster 1.
This task must be performed from an SSH session connected to a CVM (Controller VM) on Cluster 1.
1. Access the Cluster 1 CVM
* From the Prism Central dashboard, navigate to Hardware > Clusters and click on Cluster 1 to open its Prism Element (PE) interface.
* In the Cluster 1 PE, navigate to Hardware > CVMs to find the IP address of any CVM in the cluster.
* Use an SSH client (like PuTTY) to connect to the CVM's IP address.
* Log in with the admin user and password.
2. Modify the Rebuild Delay Setting
By default, the cluster waits 15 minutes (900 seconds) before starting a rebuild scan after a CVM reboot. You will change this setting to 0.
* Once logged into the CVM, run the following command to set the delay to 0 seconds:
gflag --set --gflags=stargate_delayed_rebuild_scan_secs=0
* (Optional but recommended) You can verify the change took effect by running the "get" command:
gflag --get --gflags=stargate_delayed_rebuild_scan_secs
The output should now show stargate_delayed_rebuild_scan_secs=0.
Question # 26
Task 1
An administrator needs to configure storage for a Citrix-based Virtual Desktop infrastructure.
Two VDI pools will be created:
* A non-persistent pool named MCS_Pool for task users, using MCS Microsoft Windows 10 Virtual Delivery Agents (VDAs)
* A persistent pool named Persist_Pool with full-clone Microsoft Windows 10 VDAs for power users
Requirements:
* 20 GiB capacity must be guaranteed at the storage container level for all power user VDAs
* The power user container must not be able to use more than 100 GiB
* Storage capacity should be optimized for each desktop pool
Configure the storage to meet these requirements. Any new object created should include the name of the pool(s) (MCS and/or Persist) that will use the object.
Do not include the pool name if the object will not be used by that pool.
Any additional licenses required by the solution will be added later.
Correct Answer:
Explanation:
To configure the storage for the Citrix-based VDI, you can follow these steps:
Log in to Prism Central using the credentials provided.
Go to Storage > Storage Pools and click on Create Storage Pool.
Enter a name for the new storage pool, such as VDI_Storage_Pool, and select the disks to include in the pool.
You can choose any combination of SSDs and HDDs, but for optimal performance, you may prefer to use more SSDs than HDDs.
Click Save to create the storage pool.
Go to Storage > Containers and click on Create Container.
Enter a name for the new container for the non-persistent pool, such as MCS_Pool_Container, and select the storage pool that you just created, VDI_Storage_Pool, as the source.
Under Advanced Settings, enable Deduplication and Compression to reduce the storage footprint of the non-persistent desktops. You can also enable Erasure Coding if you have enough nodes in your cluster and want to save more space. These settings will help you optimize the storage capacity for the non-persistent pool.
Click Save to create the container.
Go to Storage > Containers and click on Create Container again.
Enter a name for the new container for the persistent pool, such as Persist_Pool_Container, and select the same storage pool, VDI_Storage_Pool, as the source.
Under Advanced Settings, enable Capacity Reservation and enter 20 GiB as the reserved capacity. This will guarantee that 20 GiB of space is always available for the persistent desktops. You can also enter 100 GiB as the advertised capacity to limit the maximum space that this container can use. These settings will help you control the storage allocation for the persistent pool.
Click Save to create the container.
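The two container thresholds are easy to confuse: the reservation (20 GiB) is a guaranteed floor carved out of the storage pool, while the advertised capacity (100 GiB) is a ceiling the container cannot exceed. If these values are ever set through the REST API rather than the UI, they are typically expressed in bytes (the exact field names vary by API version and are an assumption here; the conversion arithmetic is the certain part):

```python
# GiB-to-bytes conversion for the Persist_Pool container thresholds.
# The API field names/units are an assumption; the arithmetic is exact.
GIB = 1024 ** 3

reserved_bytes = 20 * GIB     # guaranteed floor for power-user VDAs
advertised_bytes = 100 * GIB  # hard ceiling the container advertises

print(reserved_bytes)    # 21474836480
print(advertised_bytes)  # 107374182400
```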
Go to Storage > Datastores and click on Create Datastore.
Enter a name for the new datastore for the non-persistent pool, such as MCS_Pool_Datastore, and select NFS as the datastore type. Select the container that you just created, MCS_Pool_Container, as the source.
Click Save to create the datastore.
Go to Storage > Datastores and click on Create Datastore again.
Enter a name for the new datastore for the persistent pool, such as Persist_Pool_Datastore, and select NFS as the datastore type. Select the container that you just created, Persist_Pool_Container, as the source.
Click Save to create the datastore.
The datastores will be automatically mounted on all nodes in the cluster. You can verify this by going to Storage > Datastores and clicking on each datastore. You should see all nodes listed under Hosts.
You can now use Citrix Studio to create your VDI pools using MCS or full clones on these datastores. For more information on how to use Citrix Studio with Nutanix Acropolis, see Citrix Virtual Apps and Desktops on Nutanix or Nutanix virtualization environments. https://portal.nutanix.com/page/ ... x-Virtual-Apps-and-Desktops:bp-nutanix-storage-configuration.html