Task 13: Configure MPIO and Vendor MSDSM (Conditional)
DOCUMENT CATEGORY: Runbook
SCOPE: MPIO and MSDSM configuration
PURPOSE: Enable MPIO and configure the vendor MSDSM hardware IDs so Windows correctly claims FC disks as multi-path volumes
MASTER REFERENCE: Microsoft Learn — Enable External Storage
Status: Active
This task applies only to disaggregated SAN deployments. If you are deploying with Storage Spaces Direct (hyperconverged), skip to Task 15: Complete Combined Script.
Overview
Windows Multipath I/O (MPIO) aggregates multiple physical paths to a SAN LUN into a single logical disk, providing both redundancy and load distribution. MPIO must be installed as a Windows Feature, and the Microsoft Device Specific Module (MSDSM) must be configured with the hardware IDs for the specific SAN array in use. Without MSDSM, Windows presents each path as a separate disk rather than claiming them as a single multi-path volume.
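To illustrate the symptom described above: with two FC paths to a single LUN and no MSDSM claim, the same LUN appears twice in `Get-Disk`. A minimal detection sketch, assuming the standard Windows Storage module and that the array reports a consistent serial number per LUN:

```powershell
# Group visible disks by serial number; any group with more than one
# member is a LUN whose paths MPIO has not yet collapsed into one volume.
Get-Disk |
    Group-Object -Property SerialNumber |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object {
        Write-Warning ("LUN serial {0} appears as {1} separate disks - not yet claimed by MPIO/MSDSM." -f $_.Name, $_.Count)
    }
```

If this reports duplicates after completing this task, the vendor hardware IDs registered in Step 2 likely do not match the array's SCSI inquiry strings.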
When to Run
| Scenario | Run? |
|---|---|
| SAN disaggregated deployment | ✅ Yes — all nodes |
| Storage Spaces Direct (S2D) | ❌ Skip |
| MPIO already installed from previous deployment | ✅ Yes — verify vendor hardware IDs still registered |
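For the "already installed" scenario in the last row, a quick pre-check shows whether the vendor hardware IDs survived from the previous deployment (a sketch; the "PURE" vendor string is an example and should be replaced with your array's vendor ID):

```powershell
# Check whether any vendor hardware IDs are still registered with MSDSM
$hw = Get-MSDSMSupportedHW | Where-Object { $_.VendorId -eq "PURE" }
if ($hw) {
    "Vendor hardware IDs already registered:"
    $hw | Format-Table VendorId, ProductId
} else {
    Write-Warning "No matching vendor hardware IDs registered - run Step 2 on this node."
}
```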
Prerequisites
| Requirement | Details |
|---|---|
| Task 12 complete | FC HBA drivers installed |
| SAN array vendor | Vendor and model confirmed (hardware IDs needed) |
| FC fabric | Zoning configured (nodes can see SAN ports) — OR zoning will be completed post-Arc, before LUN access |
Variables from variables.yml
| Path | Type | Description |
|---|---|---|
| cluster_nodes[].management_ip | string | PSRemoting target IP |
| storage.san_array_vendor | string | SAN array vendor name for documentation |
Step 1: Install MPIO Windows Feature
- Direct (On Node)
- Orchestrated Script
- Standalone Script
Run on each node individually via console, KVM, or RDP.
Toolkit script: scripts/deploy/04-cluster-deployment/phase-03-os-configuration/task-13-configure-mpio-and-vendor-msdsm-conditional/powershell/Enable-MPIO-Direct.ps1
# Task 13 - Step 1: Install MPIO Feature (run on each node)
Install-WindowsFeature -Name Multipath-IO -IncludeManagementTools
Restart each node after the feature installation completes before proceeding to Step 2.
Run from the management server against all nodes.
Toolkit script: scripts/deploy/04-cluster-deployment/phase-03-os-configuration/task-13-configure-mpio-and-vendor-msdsm-conditional/powershell/Enable-MPIO-Orchestrated.ps1
# Task 13 - Step 1: Install MPIO Feature (orchestrated)
# variables.yml variables:
# cluster_nodes[].management_ip -> $ServerList
$ConfigPath = "$env:USERPROFILE\variables.yml"
$ServerList = (Get-Content $ConfigPath | Select-String 'management_ip:\s+"?([^"'' ]+)' |
ForEach-Object { $_.Matches[0].Groups[1].Value.Trim() })
Invoke-Command -ComputerName $ServerList -ScriptBlock {
Install-WindowsFeature -Name Multipath-IO -IncludeManagementTools
}
# Restart all nodes after feature installation
$ServerList | ForEach-Object { Restart-Computer -ComputerName $_ -Force }
The script restarts all nodes immediately after feature installation completes. Wait for every node to come back online before proceeding to Step 2.
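One way to wait for the nodes is to poll WinRM reachability as the readiness signal (a sketch; reuses the `$ServerList` array built in the orchestrated script above):

```powershell
# Poll each node until PSRemoting responds again after the restart
foreach ($node in $ServerList) {
    while (-not (Test-WSMan -ComputerName $node -ErrorAction SilentlyContinue)) {
        Start-Sleep -Seconds 15
    }
    "$node is back online."
}
```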
When to use: Use this option for a self-contained deployment without a shared configuration file.
Script: See azurelocal-toolkit → scripts/deploy/ for the standalone script for this task.
Step 2: Configure Vendor MSDSM Hardware IDs
After all nodes have restarted, configure the Microsoft DSM with your SAN array's hardware IDs. Select the tab for your array vendor.
- Pure Storage
- Dell PowerStore
- HPE Alletra
- Hitachi VSP
- NetApp ONTAP
# Task 13 - Step 2: Configure MSDSM for Pure Storage FlashArray
# Auto-claim FC disks via MPIO
Enable-MSDSMAutomaticClaim -BusType "FC"
# Register Pure Storage hardware IDs
New-MSDSMSupportedHW -VendorId "PURE" -ProductId "FlashArray"
# Set load balancing policy (Least Queue Depth recommended for All-Flash)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
# Task 13 - Step 2: Configure MSDSM for Dell PowerStore
Enable-MSDSMAutomaticClaim -BusType "FC"
# Register Dell PowerStore hardware IDs
New-MSDSMSupportedHW -VendorId "DELL" -ProductId "PowerStore"
# Set load balancing policy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
# Task 13 - Step 2: Configure MSDSM for HPE Alletra
Enable-MSDSMAutomaticClaim -BusType "FC"
# Register HPE hardware IDs.
# Alletra 9000 (Primera/3PAR lineage) arrays typically report VendorId "3PARdata",
# ProductId "VV"; HPE XP arrays (Hitachi OEM) report "HP"/"OPEN-V". Confirm the
# exact inquiry strings against your array model before registering.
New-MSDSMSupportedHW -VendorId "3PARdata" -ProductId "VV"
New-MSDSMSupportedHW -VendorId "HP" -ProductId "OPEN-V"
# Set load balancing policy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
# Task 13 - Step 2: Configure MSDSM for Hitachi VSP
Enable-MSDSMAutomaticClaim -BusType "FC"
# Register Hitachi VSP hardware IDs
New-MSDSMSupportedHW -VendorId "HITACHI" -ProductId "OPEN-V"
# Set load balancing policy (Round Robin typical for VSP)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
# Task 13 - Step 2: Configure MSDSM for NetApp ONTAP
Enable-MSDSMAutomaticClaim -BusType "FC"
# Register NetApp ONTAP hardware IDs
New-MSDSMSupportedHW -VendorId "NETAPP" -ProductId "LUN"
# Set load balancing policy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
The hardware IDs (VendorId / ProductId) shown above are commonly used values. Always verify the exact IDs for your specific array firmware version with your SAN vendor's Windows MPIO configuration guide.
Verification
# Confirm MPIO is installed
(Get-WindowsFeature -Name Multipath-IO).InstallState
# Confirm supported hardware IDs are registered
Get-MSDSMSupportedHW | Format-Table VendorId, ProductId
# Confirm auto-claim is enabled for FC
mpclaim -s -d
Expected: InstallState = Installed; your vendor's IDs appear in the supported HW list; mpclaim shows disk paths (will show 0 paths until LUNs are presented and zoning is complete).
Validation Checklist
- MPIO Windows feature installed on all nodes
- All nodes restarted after MPIO installation
- Vendor hardware IDs registered via New-MSDSMSupportedHW
- Enable-MSDSMAutomaticClaim -BusType "FC" confirmed on all nodes
- Load balance policy set
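The checklist items can be verified across all nodes in a single pass (a sketch; assumes the `$ServerList` array built in Step 1's orchestrated script is still in scope):

```powershell
# Run the verification checks on every node and summarize one row per node
Invoke-Command -ComputerName $ServerList -ScriptBlock {
    [pscustomobject]@{
        Node          = $env:COMPUTERNAME
        MpioInstalled = (Get-WindowsFeature -Name Multipath-IO).InstallState
        VendorIds     = (Get-MSDSMSupportedHW |
                         ForEach-Object { "$($_.VendorId)/$($_.ProductId)" }) -join ", "
        LoadBalance   = Get-MSDSMGlobalDefaultLoadBalancePolicy
    }
} | Format-Table Node, MpioInstalled, VendorIds, LoadBalance
```

Any node showing an InstallState other than Installed, or an empty VendorIds column, needs Steps 1 and 2 repeated before moving to Task 14.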
Navigation
← Task 12: FC HBA Drivers · ↑ Phase 03 · Task 14: Verify LUN Presentation →
Version Control
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0.0 | 2026-05-02 | Azure Local Cloud | Initial release |