
Task 02: VMFleet Storage Performance Testing

Document Category: Runbook
Scope: Storage performance baseline
Purpose: Deploy VMFleet, run storage tests, and create a performance baseline for customer handover
Master References:

Status: Active
VMFleet Version: 2.1.0.0 (April 2024)


Overview

VMFleet is Microsoft's official storage load generation tool for Storage Spaces Direct (S2D) environments. It deploys a fleet of VMs across all cluster nodes, each running DiskSpd to generate I/O load. This creates a realistic, distributed storage workload to validate performance and establish baselines.

Monitoring During Testing

VMFleet includes a built-in real-time dashboard (Watch-FleetCluster). For historical data and visualization, use the Azure Local Insights workbook configured in Phase 18: Monitoring & Observability. The Insights workbook provides IOPS, throughput, latency, and health metrics collected via Azure Monitor.

Purpose of This Step

  1. Validate storage performance under load across all nodes
  2. Document baseline metrics (IOPS, throughput, latency) for customer handover
  3. Verify storage health under stress conditions
  4. Generate reports for consolidated validation package
Maintenance Window Required

VMFleet generates significant I/O load. Run during a maintenance window when no production workloads are active.

Prerequisites

  • Infrastructure health validation completed (Step 1)
  • Windows Server 2022 Core VHD available (sysprepped)
  • At least 100GB free space on each node's Fleet volume
  • Maintenance window scheduled (4-6 hours)
  • No production VMs running during test
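A quick pre-flight check can confirm the last prerequisite before the window opens. The sketch below assumes any running VM at this point is unexpected; adjust the filter if management or infrastructure VMs are expected to stay online:

```powershell
# Pre-flight check (illustrative): confirm no VMs are running anywhere in the
# cluster before the maintenance window opens.
$RunningVMs = Get-ClusterNode | ForEach-Object {
    Get-VM -ComputerName $_.Name -ErrorAction SilentlyContinue |
        Where-Object State -eq 'Running'
}

if ($RunningVMs) {
    Write-Host "WARNING: $(@($RunningVMs).Count) VMs still running - do not start testing" -ForegroundColor Yellow
    $RunningVMs | Format-Table Name, ComputerName, State
} else {
    Write-Host "No running VMs detected - safe to proceed" -ForegroundColor Green
}
```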

Variables from variables.yml

| Variable Path | Type | Description |
|---------------|------|-------------|
| platform.cluster_name | String | Cluster name used in report headers |
| compute.nodes[].name | String | Node hostnames for per-node fleet VM distribution |
| storage.pool_name | String | Storage pool name for volume creation |

Report Output

All results are saved to the Collect volume (reachable from any cluster node):

C:\ClusterStorage\Collect\validation-reports\
├── 02-vmfleet-storage-baseline-YYYYMMDD.csv
└── 02-vmfleet-storage-summary-YYYYMMDD.txt

Part 1: Install VMFleet Module

# Install VMFleet module (current version: 2.1.0.0)
Install-Module -Name VMFleet -Force -Scope AllUsers

# Verify installation
Get-Module -Name VMFleet -ListAvailable
Import-Module VMFleet

# Check available commands
Get-Command -Module VMFleet

Expected output: Module version 2.1.0.0 (released April 2024) with commands including:

  • Install-Fleet - Create folder structure and copy binaries
  • New-Fleet - Deploy fleet VMs
  • Start-Fleet / Stop-Fleet - Control fleet operations
  • Remove-Fleet - Cleanup fleet VMs
  • Watch-FleetCluster - Real-time performance dashboard
  • Measure-FleetCoreWorkload - Run 4 pre-defined workload profiles
  • Start-FleetSweep - Run custom workload profiles
  • Get-FleetVolumeEstimate - Calculate recommended volume sizes
VMFleet Version History
| Version | Release Date | Notes |
|---------|--------------|-------|
| 2.1.0.0 | April 2024 | Current - Arc VM support, bug fixes |
| 2.0.2.2 | Jan 2022 | Previous stable |
| 2.0.0.0 | Sept 2021 | Major rewrite with Measure-FleetCoreWorkload |

1.2 Verify DiskSpd Availability

VMFleet uses DiskSpd internally. Verify it's accessible:

# DiskSpd is bundled with VMFleet, but verify
$DiskSpd = Get-Command diskspd.exe -ErrorAction SilentlyContinue
if (-not $DiskSpd) {
    Write-Host "DiskSpd will be deployed by VMFleet during installation"
}

Part 2: Prepare Storage Volumes

VMFleet includes a helper command to calculate optimal volume sizes based on your cluster configuration:

# Calculate recommended volume sizes for your cluster
Get-FleetVolumeEstimate

Example output:

MirrorType VolumeSize Description
---------- ---------- -----------
2-way Mirror 500GB For 2-node clusters
3-way Mirror 350GB For 3+ node clusters
Mirror-Accel 400GB For mirror-accelerated parity

2.2 Create Collect Volume

The Collect volume stores VMFleet configuration, results, and the template VHD. Create it on shared storage visible to all nodes.

# Create Collect volume (single volume, accessible from all nodes)
# Recommended: 200GB minimum for VHD + results

$CollectVolumeName = "Collect"
$CollectVolumeSize = 200GB

# Check if volume already exists
$ExistingVolume = Get-ClusterSharedVolume | Where-Object { $_.SharedVolumeInfo.FriendlyVolumeName -like "*$CollectVolumeName*" }

if (-not $ExistingVolume) {
    # Create the virtual disk and volume
    New-Volume -StoragePoolFriendlyName "S2D on $((Get-Cluster).Name)" `
        -FriendlyName $CollectVolumeName `
        -FileSystem CSVFS_ReFS `
        -Size $CollectVolumeSize

    Write-Host "Created Collect volume: $CollectVolumeSize" -ForegroundColor Green
} else {
    Write-Host "Collect volume already exists" -ForegroundColor Yellow
}

# Verify volume path
$CollectPath = "C:\ClusterStorage\$CollectVolumeName"
Test-Path $CollectPath

2.3 Create Per-Node Fleet Volumes

Each node needs its own Fleet volume to host VMFleet VMs. Use the size from Get-FleetVolumeEstimate:

# Get cluster nodes
$Nodes = (Get-ClusterNode).Name

# Use volume size from Get-FleetVolumeEstimate or manual calculation
# Rule of thumb: ~10GB per fleet VM + overhead
$FleetVolumeSize = 500GB # Adjust based on VM count and resiliency

foreach ($Node in $Nodes) {
    $FleetVolumeName = "Fleet-$Node"

    # Check if volume exists
    $ExistingFleet = Get-ClusterSharedVolume | Where-Object {
        $_.SharedVolumeInfo.FriendlyVolumeName -like "*$FleetVolumeName*"
    }

    if (-not $ExistingFleet) {
        # Create Fleet volume for this node
        New-Volume -StoragePoolFriendlyName "S2D on $((Get-Cluster).Name)" `
            -FriendlyName $FleetVolumeName `
            -FileSystem CSVFS_ReFS `
            -Size $FleetVolumeSize

        Write-Host "Created Fleet volume for $Node" -ForegroundColor Green
    } else {
        Write-Host "Fleet volume for $Node already exists" -ForegroundColor Yellow
    }
}

# List all volumes
Get-ClusterSharedVolume | Format-Table Name, SharedVolumeInfo
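As a sanity check on the ~10GB-per-VM rule of thumb above, the required per-node Fleet volume size can be computed from the planned VM count. The 20% overhead factor below is an illustrative assumption, not a VMFleet requirement:

```powershell
# Illustrative sizing check: ~10GB per fleet VM plus an assumed 20% overhead
$VMsPerNode  = 6
$GBPerVM     = 10
$OverheadPct = 0.20

$RequiredGB = [math]::Ceiling($VMsPerNode * $GBPerVM * (1 + $OverheadPct))
Write-Host "Minimum Fleet volume size per node: $($RequiredGB)GB (the 500GB above leaves headroom)"
```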

2.4 Prepare Windows Server Core VHD

Copy a sysprepped Windows Server 2022 Core VHDX to the Collect volume:

# Source VHD location (update with your path)
$SourceVHD = "C:\ClusterStorage\Library\WindowsServer2022-Core-Sysprep.vhdx"

# Destination in Collect volume
$CollectPath = "C:\ClusterStorage\Collect"
$FleetVHD = "$CollectPath\FleetImage.vhdx"

# Copy VHD if not already present
if (-not (Test-Path $FleetVHD)) {
    Write-Host "Copying template VHD to Collect volume..." -ForegroundColor Yellow
    Copy-Item -Path $SourceVHD -Destination $FleetVHD -Force
    Write-Host "VHD copied successfully" -ForegroundColor Green
} else {
    Write-Host "Fleet VHD already exists at $FleetVHD" -ForegroundColor Yellow
}

# Verify VHD
Get-VHD -Path $FleetVHD | Format-Table VhdFormat, VhdType, Size, FileSize

Part 3: Install and Deploy VMFleet

3.1 Install Fleet Infrastructure

# Install VMFleet infrastructure
# This creates necessary folder structure and copies binaries

Install-Fleet -CollectVolumePath "C:\ClusterStorage\Collect"

# Verify installation
Get-ChildItem "C:\ClusterStorage\Collect" -Directory

Expected folders:

  • control - Fleet control scripts
  • result - Test results output
  • tools - DiskSpd and other tools
  • vhd - VHD storage
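A short check (sketch) confirms all four folders were created before proceeding:

```powershell
# Verify the expected VMFleet folder structure exists on the Collect volume
$Expected = 'control', 'result', 'tools', 'vhd'
$Missing  = $Expected | Where-Object {
    -not (Test-Path (Join-Path 'C:\ClusterStorage\Collect' $_))
}

if ($Missing) {
    Write-Host "Missing folders: $($Missing -join ', ')" -ForegroundColor Red
} else {
    Write-Host "All VMFleet folders present" -ForegroundColor Green
}
```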

3.2 Create Fleet VMs

# Create the VM fleet
# Parameters:
# -VMs: Total number of VMs across cluster (4-8 per node recommended)
# -AdminPass: Local admin password for VMs
# -ConnectPass: Password for VM connection
# -BaseVHD: Path to template VHD

$TotalNodes = (Get-ClusterNode).Count
$VMsPerNode = 6 # Adjust based on available resources
$TotalVMs = $TotalNodes * $VMsPerNode

$AdminPassword = ConvertTo-SecureString "P@ssw0rd123!" -AsPlainText -Force

New-Fleet -BaseVHD "C:\ClusterStorage\Collect\FleetImage.vhdx" `
    -VMs $TotalVMs `
    -AdminPass $AdminPassword `
    -ConnectPass $AdminPassword `
    -DVDISO $null # No additional ISO needed

Write-Host "Created $TotalVMs VMFleet VMs across $TotalNodes nodes" -ForegroundColor Green

3.3 Verify Fleet Deployment

# Check fleet VMs
Get-FleetVM | Format-Table Name, State, ComputerName

# Verify VMs are distributed across nodes
Get-FleetVM | Group-Object ComputerName | Format-Table Name, Count

# Check all VMs are running
$RunningVMs = (Get-FleetVM | Where-Object State -eq "Running").Count
$TotalFleetVMs = (Get-FleetVM).Count

Write-Host "Fleet Status: $RunningVMs of $TotalFleetVMs VMs running" -ForegroundColor $(if($RunningVMs -eq $TotalFleetVMs){"Green"}else{"Yellow"})

Part 4: Start Fleet and Monitor Dashboard

4.1 Start the Fleet

# Start all fleet VMs and begin monitoring
Start-Fleet

# Wait for VMs to fully boot and respond
Write-Host "Waiting for fleet VMs to boot (60 seconds)..." -ForegroundColor Yellow
Start-Sleep -Seconds 60

# Verify fleet is ready
$FleetStatus = Get-FleetVM | Where-Object State -eq "Running"
Write-Host "Fleet ready: $($FleetStatus.Count) VMs online" -ForegroundColor Green

4.2 Launch Monitoring Dashboard

# Watch-FleetCluster provides real-time performance dashboard
# Run this in a separate PowerShell window for monitoring

Watch-FleetCluster

# Dashboard shows:
# - IOPS per node
# - Throughput (MB/s)
# - Latency (ms)
# - CPU utilization
# - Storage health

Dashboard Interpretation:

| Metric | Description | Healthy Range |
|--------|-------------|---------------|
| IOPS | I/O operations per second | Varies by workload |
| Throughput | MB/s read/write | Varies by workload |
| Latency | Average response time | < 10ms typical |
| CPU | Cluster CPU utilization | < 80% during test |
| Health | Storage pool status | All healthy |
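Alongside the dashboard, a simple polling loop (sketch) can verify that the storage pool and virtual disks remain healthy while load is applied:

```powershell
# Poll storage health every 60 seconds during the test (Ctrl+C to stop)
while ($true) {
    $Pool  = Get-StoragePool -IsPrimordial $false | Select-Object -First 1
    $Disks = Get-VirtualDisk | Where-Object HealthStatus -ne 'Healthy'

    Write-Host "$(Get-Date -Format 'HH:mm:ss') Pool: $($Pool.HealthStatus) " -NoNewline
    if ($Disks) {
        Write-Host "Unhealthy virtual disks: $($Disks.FriendlyName -join ', ')" -ForegroundColor Red
    } else {
        Write-Host "All virtual disks healthy" -ForegroundColor Green
    }
    Start-Sleep -Seconds 60
}
```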

4.3 Azure Monitor Integration

If Azure Local Insights was configured in Phase 18, you can monitor VMFleet testing through the Azure portal:

Available Azure Workbooks During Testing

| Workbook | What It Shows | Access Path |
|----------|---------------|-------------|
| Azure Local Insights | Cluster health, storage IOPS/latency, node CPU/memory | Azure Portal → Azure Local → Cluster → Insights |
| Performance History | Historical storage metrics via Get-ClusterPerformanceHistory | PowerShell |

Real-Time Performance Queries (Log Analytics)

// Storage performance during VMFleet testing
Perf
| where ObjectName == "Cluster CSV File System"
| where CounterName in ("Read Bytes/sec", "Write Bytes/sec", "Reads/sec", "Writes/sec")
| where TimeGenerated > ago(1h)
| summarize avg(CounterValue) by CounterName, bin(TimeGenerated, 1m)
| render timechart

// CPU utilization during test
Perf
| where ObjectName == "Processor"
| where CounterName == "% Processor Time"
| where InstanceName == "_Total"
| where TimeGenerated > ago(1h)
| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 1m)
| render timechart
Capture Before/During/After Metrics

Run these queries before, during, and after VMFleet testing to document the performance impact and baseline. Export the results to include in the customer handover package.
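One way to capture and export those snapshots is via the Az.OperationalInsights module (a sketch; the workspace ID and output path are placeholders, and the module must be installed and signed in with Connect-AzAccount first):

```powershell
# Export a Log Analytics query result to CSV for the handover package
# Assumes: Az.OperationalInsights module installed and an authenticated session
$WorkspaceId = "<workspace-guid>"   # placeholder - your Log Analytics workspace ID
$Query = @'
Perf
| where ObjectName == "Cluster CSV File System"
| where TimeGenerated > ago(1h)
| summarize avg(CounterValue) by CounterName, bin(TimeGenerated, 1m)
'@

$Result = Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceId -Query $Query
$Result.Results | Export-Csv -Path ".\csv-perf-during-test.csv" -NoTypeInformation
```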


Part 5: Run Core Workload Tests

5.1 Run Measure-FleetCoreWorkload

The Measure-FleetCoreWorkload command runs four industry-standard storage profiles:

# Run comprehensive storage performance tests
# This runs General, Peak, VDI, and SQL profiles sequentially

# Initialize report
$DateStamp = Get-Date -Format "yyyyMMdd"
$ReportPath = "C:\ClusterStorage\Collect\validation-reports"
$SummaryFile = "$ReportPath\02-vmfleet-storage-summary-$DateStamp.txt"
$CSVFile = "$ReportPath\02-vmfleet-storage-baseline-$DateStamp.csv"

New-Item -Path $ReportPath -ItemType Directory -Force -ErrorAction SilentlyContinue

$ReportHeader = @"
================================================================================
VMFLEET STORAGE PERFORMANCE BASELINE REPORT
================================================================================
Cluster: $((Get-Cluster).Name)
Date: $(Get-Date -Format "yyyy-MM-dd HH:mm:ss")
Fleet Size: $((Get-FleetVM).Count) VMs
Nodes: $((Get-ClusterNode).Count)
Generated By: $(whoami)
================================================================================

"@
$ReportHeader | Out-File $SummaryFile -Encoding UTF8

# Run core workload measurement
# Duration: ~30 minutes per profile (2+ hours total)
Write-Host "Starting core workload tests (estimated 2+ hours)..." -ForegroundColor Yellow

$CoreResults = Measure-FleetCoreWorkload -Verbose

# Save results
$CoreResults | Export-Csv -Path $CSVFile -NoTypeInformation
$CoreResults | Format-Table | Out-String | Add-Content $SummaryFile

Write-Host "Core workload tests complete. Results saved to $CSVFile" -ForegroundColor Green

5.2 Workload Profile Details

The Measure-FleetCoreWorkload command runs four industry-standard profiles with VM-CSV alignment testing at both 30% and 100%:

| Profile | Block Size | Threads | Queue Depth | Read/Write | I/O Pattern | CPU Cap | Purpose |
|---------|------------|---------|-------------|------------|-------------|---------|---------|
| General | 4K | 1 | 32 | 100/0, 90/10, 70/30 | Working set distribution | 40% | Realistic mixed workload |
| Peak | 4K | 4 | 32 | 100/0 | 100% Random | None | Maximum IOPS (hero number) |
| VDI | 8K read, 32K write | 1 | 8 | 80/20 | 80% random, 20% sequential | None | Virtual desktop workload |
| SQL | 8K OLTP, 32K Log | 4/2 | 8/1 | 70/30 OLTP, 0/100 Log | Random OLTP, Sequential Log | None | OLTP database workload |
Working Set Distribution

The General profile uses a realistic working set distribution (-rdpct95/5:4/10:1/85) where:

  • 95% of I/O targets 5% of the file (hot data)
  • 4% of I/O targets 10% of the file (warm data)
  • 1% of I/O targets 85% of the file (cold data)

This simulates real-world data access patterns better than uniform random I/O.
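For reference, a standalone DiskSpd invocation using this same distribution might look like the following (illustrative only; the target file path and the surrounding parameters are placeholders, not the values VMFleet uses internally):

```powershell
# Illustrative standalone DiskSpd run with the General profile's working set
# distribution: 95% of I/O to the hottest 5% of the file, 4% to the next 10%,
# 1% to the remaining 85%. Target path is a placeholder.
.\diskspd.exe -b4k -t1 -o32 -w10 -r -rdpct95/5:4/10:1/85 -d300 -Suw -Z10m D:\testfile.dat
```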

5.3 Run Individual Workload Profiles

For targeted testing, run individual profiles:

# Peak IOPS test (4K random read)
Start-FleetSweep -b 4k -t 4 -o 32 -w 0 -r -d 300

# Throughput test (512K sequential read)
Start-FleetSweep -b 512k -t 2 -o 16 -w 0 -d 300

# Mixed workload (8K random, 70% read)
Start-FleetSweep -b 8k -t 4 -o 16 -w 30 -r -d 300

# Latency-focused (4K random read, low queue depth)
Start-FleetSweep -b 4k -t 1 -o 1 -w 0 -r -d 300

DiskSpd Parameter Reference:

| Parameter | Description | Example |
|-----------|-------------|---------|
| -b | Block size | -b4k, -b8k, -b64k, -b512k |
| -t | Threads per target | -t4 = 4 threads |
| -o | Outstanding I/O queue depth per thread | -o32 = 32 outstanding IOs |
| -w | Write percentage (0-100) | -w0 = read only, -w30 = 70/30 read/write |
| -r | Random I/O (omit for sequential) | -r = random |
| -rs | Mixed random/sequential | -rs80 = 80% random, 20% sequential |
| -d | Duration in seconds | -d300 = 5 minutes |
| -Suw | Disable software and hardware write cache | Recommended for accurate results |
| -Z10m | Random write buffer (defeats compression/dedup) | -Z10m = 10MB random buffer |
| -rdpct | Working set distribution | -rdpct95/5:4/10:1/85 = hot/warm/cold |
| -g<n>i | IOPS limit per thread | -g750i = 750 IOPS cap |
DiskSpd 2.2 Changes

DiskSpd 2.2 (June 2024) includes changes to the async I/O loop that improve latency measurement at high queue depths. When comparing against tests captured with older DiskSpd versions, the results may need rebaselining.


Part 6: Collect and Analyze Results

6.1 Parse Results to CSV

# Get all result files
$ResultPath = "C:\ClusterStorage\Collect\result"
$ResultFiles = Get-ChildItem -Path $ResultPath -Filter "*.xml" -Recurse

# Parse results
$AllResults = foreach ($File in $ResultFiles) {
    [xml]$Xml = Get-Content $File.FullName

    $Result = $Xml.Results.TimeSpan.Iops

    [PSCustomObject]@{
        Timestamp    = $File.LastWriteTime
        TotalIOPS    = [math]::Round($Result.Total, 0)
        ReadIOPS     = [math]::Round($Result.Read, 0)
        WriteIOPS    = [math]::Round($Result.Write, 0)
        ReadMBps     = [math]::Round($Xml.Results.TimeSpan.Throughput.Read / 1MB, 2)
        WriteMBps    = [math]::Round($Xml.Results.TimeSpan.Throughput.Write / 1MB, 2)
        AvgLatencyMs = [math]::Round($Xml.Results.TimeSpan.Latency.Average, 2)
        MaxLatencyMs = [math]::Round($Xml.Results.TimeSpan.Latency.Max, 2)
    }
}

# Append to CSV
$AllResults | Export-Csv -Path $CSVFile -Append -NoTypeInformation

# Display summary
$AllResults | Format-Table -AutoSize

6.2 Generate Performance Summary

# Calculate aggregate statistics
$Stats = @"

================================================================================
PERFORMANCE SUMMARY
================================================================================

IOPS METRICS:
Peak IOPS (4K Random Read): $(($AllResults | Where-Object {$_.WriteIOPS -eq 0} | Measure-Object TotalIOPS -Maximum).Maximum)
Sustained IOPS (Mixed): $([math]::Round(($AllResults | Measure-Object TotalIOPS -Average).Average, 0))
Minimum IOPS: $(($AllResults | Measure-Object TotalIOPS -Minimum).Minimum)

THROUGHPUT METRICS:
Peak Read Throughput: $(($AllResults | Measure-Object ReadMBps -Maximum).Maximum) MB/s
Peak Write Throughput: $(($AllResults | Measure-Object WriteMBps -Maximum).Maximum) MB/s
Average Throughput: $(([math]::Round(($AllResults | Measure-Object ReadMBps -Average).Average + ($AllResults | Measure-Object WriteMBps -Average).Average, 2))) MB/s

LATENCY METRICS:
Average Latency: $(([math]::Round(($AllResults | Measure-Object AvgLatencyMs -Average).Average, 2))) ms
Maximum Latency: $(($AllResults | Measure-Object MaxLatencyMs -Maximum).Maximum) ms
Latency Target (< 10ms): $(if(($AllResults | Measure-Object AvgLatencyMs -Average).Average -lt 10){"PASS"}else{"REVIEW"})

"@

$Stats | Add-Content $SummaryFile
Write-Host $Stats

6.3 Expected Performance Ranges

Reference values for healthy Azure Local clusters:

| Configuration | Peak IOPS (4K) | Throughput (MB/s) | Latency (ms) |
|---------------|----------------|-------------------|--------------|
| 2-node, NVMe | 200,000+ | 3,000+ | < 5 |
| 3-node, NVMe | 400,000+ | 5,000+ | < 5 |
| 4-node, NVMe | 600,000+ | 8,000+ | < 5 |
| 2-node, SSD | 50,000+ | 1,000+ | < 10 |
| 4-node, SSD | 150,000+ | 3,000+ | < 10 |
Note: Performance varies based on drive type, count, and configuration. Use these as reference only.
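To flag a baseline that falls short of the reference range, a simple comparison helps (sketch; the threshold below is the 4-node NVMe row from the table and assumes $AllResults from section 6.1 is populated):

```powershell
# Compare measured peak IOPS against an assumed reference threshold
$MeasuredPeakIOPS = ($AllResults | Measure-Object TotalIOPS -Maximum).Maximum
$ReferenceIOPS    = 600000   # 4-node NVMe reference value (adjust for your config)

if ($MeasuredPeakIOPS -ge $ReferenceIOPS) {
    Write-Host "Peak IOPS $MeasuredPeakIOPS meets the reference ($ReferenceIOPS+)" -ForegroundColor Green
} else {
    Write-Host "Peak IOPS $MeasuredPeakIOPS below reference ($ReferenceIOPS+) - review drive configuration" -ForegroundColor Yellow
}
```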

6.4 Understanding Result Columns

The result.tsv file from Measure-FleetCoreWorkload contains detailed metrics:

| Column | Description |
|--------|-------------|
| RunLabel | Unique identifier for the test run |
| Workload | Profile name (General, Peak, VDI, SQL) |
| VMAlignmentPct | Percentage of VMs aligned to CSV owner (30% or 100%) |
| IOPS | Total I/O operations per second across all VMs |
| AverageCPU | Average cluster CPU utilization |
| AverageCSVReadIOPS | Read IOPS from CSV host perspective |
| AverageCSVWriteIOPS | Write IOPS from CSV host perspective |
| AverageCSVReadMilliseconds | Read latency from host |
| AverageCSVWriteMilliseconds | Write latency from host |
| AverageReadMilliseconds | Read latency from VM perspective |
| AverageWriteMilliseconds | Write latency from VM perspective |
| ReadMilliseconds50/90/99 | Read latency percentiles |
| WriteMilliseconds50/90/99 | Write latency percentiles |
| CutoffType | Why the test stopped (No, ReadLatency, WriteLatency, Scale) |
30% vs 100% Alignment
  • 100% Alignment: All VMs are on the node that owns their CSV (best-case scenario)
  • 30% Alignment: VMs are distributed across nodes (realistic production scenario)

Both values should be documented as they represent different operating conditions.
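To pull both alignment figures out of result.tsv for the handover table, something like the following sketch works (column names as listed above; the file path and the tab-separated layout are assumptions about your output):

```powershell
# Summarize IOPS per workload and alignment percentage from result.tsv
$Results = Import-Csv -Path "C:\ClusterStorage\Collect\result\result.tsv" -Delimiter "`t"

$Results | Group-Object Workload, VMAlignmentPct | ForEach-Object {
    [PSCustomObject]@{
        Workload     = $_.Group[0].Workload
        AlignmentPct = $_.Group[0].VMAlignmentPct
        AvgIOPS      = [math]::Round(($_.Group | ForEach-Object { [double]$_.IOPS } |
                           Measure-Object -Average).Average, 0)
    }
} | Format-Table -AutoSize
```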


Part 7: Stop and Cleanup Fleet

7.1 Stop Fleet VMs

# Stop all fleet operations
Stop-Fleet

# Verify all VMs stopped
Get-FleetVM | Where-Object State -ne "Off" | Stop-VM -Force

# Check status
Get-FleetVM | Format-Table Name, State

7.2 Remove Fleet (Cleanup)

# Remove all fleet VMs and configurations
# WARNING: This deletes all fleet VMs and their disks

Remove-Fleet -Force

# Verify removal
$RemainingFleet = Get-VM -Name "Fleet*" -ErrorAction SilentlyContinue
if ($RemainingFleet) {
    Write-Host "Warning: $($RemainingFleet.Count) fleet VMs still exist" -ForegroundColor Yellow
} else {
    Write-Host "Fleet cleanup complete" -ForegroundColor Green
}

7.3 Cleanup Fleet Volumes (Optional)

# Remove Fleet volumes if no longer needed
# Keep Collect volume for future testing

$FleetVolumes = Get-ClusterSharedVolume | Where-Object { $_.Name -like "*Fleet-*" }

foreach ($Volume in $FleetVolumes) {
    # Remove the volume (WARNING: destroys data)
    # Remove-VirtualDisk -FriendlyName $Volume.SharedVolumeInfo.FriendlyVolumeName -Confirm:$false

    Write-Host "Would remove: $($Volume.Name)" -ForegroundColor Yellow
}

# Uncomment to actually remove:
# $FleetVolumes | ForEach-Object { Remove-VirtualDisk -FriendlyName $_.SharedVolumeInfo.FriendlyVolumeName -Confirm:$false }

Part 8: Document Baseline for Customer Handover

8.1 Create Baseline Statement

$Baseline = @"

================================================================================
STORAGE PERFORMANCE BASELINE STATEMENT
================================================================================

Cluster: $((Get-Cluster).Name)
Test Date: $(Get-Date -Format "yyyy-MM-dd")
Test Duration: Approximately 4 hours
Test Tool: VMFleet with DiskSpd
VMs Deployed: $TotalVMs VMs across $TotalNodes nodes

BASELINE PERFORMANCE METRICS:

| Workload Profile | IOPS | Throughput | Avg Latency |
|------------------|------|------------|-------------|
| Peak (4K Random) | XXX,XXX | X,XXX MB/s | X.X ms |
| General (8K Mixed) | XXX,XXX | X,XXX MB/s | X.X ms |
| VDI (8K 80/20) | XXX,XXX | X,XXX MB/s | X.X ms |
| SQL (64K Seq) | XXX,XXX | X,XXX MB/s | X.X ms |

HEALTH DURING TEST:
Storage Pool Status: Healthy
Virtual Disks: All Healthy
Cluster Nodes: All Online
No errors or faults observed

NOTES:
- Performance metrics establish baseline for this specific cluster
- Actual workload performance depends on application characteristics
- Recommend monitoring during production workload onboarding
- Contact Azure Local Cloud for performance optimization if needed

================================================================================
Validated By: _________________________ Date: ____________
================================================================================

"@

$Baseline | Add-Content $SummaryFile
Write-Host "Baseline statement added to report" -ForegroundColor Green
Write-Host "Report location: $SummaryFile" -ForegroundColor Cyan

8.2 Finalize Report

# Add report footer
$Footer = @"

================================================================================
END OF VMFLEET STORAGE PERFORMANCE REPORT
================================================================================
Report Files:
Summary: $SummaryFile
Raw Data: $CSVFile
================================================================================

"@

$Footer | Add-Content $SummaryFile

# Display report location
Write-Host "`n`nVMFleet testing complete!" -ForegroundColor Green
Write-Host "Summary Report: $SummaryFile" -ForegroundColor Cyan
Write-Host "CSV Data: $CSVFile" -ForegroundColor Cyan

Validation Checklist

| Category | Requirement | Status |
|----------|-------------|--------|
| Setup | VMFleet module installed | |
| Setup | Collect volume created | |
| Setup | Fleet volumes created (per node) | |
| Setup | Template VHD copied | |
| Fleet | Fleet VMs deployed | |
| Fleet | All VMs running | |
| Testing | Core workload profiles run | |
| Testing | Results collected to CSV | |
| Performance | IOPS within expected range | |
| Performance | Latency < 10ms average | |
| Performance | No storage faults during test | |
| Cleanup | Fleet VMs removed | |
| Report | Baseline statement documented | |

Troubleshooting

Fleet VMs Won't Start

# Check for resource issues
Get-ClusterResource | Where-Object State -ne "Online"

# Verify VHD is accessible
Test-Path "C:\ClusterStorage\Collect\FleetImage.vhdx"

# Check event log
Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VMMS-Admin" -MaxEvents 20

Poor Performance Results

# Check storage health
Get-StorageSubSystem | Get-StorageHealthReport

# Verify RDMA is working
Get-SmbMultichannelConnection

# Check for throttling
Get-StorageQoSFlow | Where-Object { $_.Status -ne "Ok" }

Results Not Collected

# Verify result path
Test-Path "C:\ClusterStorage\Collect\result"

# Check for result files
Get-ChildItem "C:\ClusterStorage\Collect\result" -Recurse

Next Step

Proceed to Task 3: Network & RDMA Validation once VMFleet testing is complete.



Version Control

| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0.0 | 2026-03-24 | Azure Local Cloud Technology Team | Initial release |