
Hardware Requirements


DOCUMENT CATEGORY: Runbook
SCOPE: Hardware specifications for Azure Local
PURPOSE: Define hardware requirements for cluster deployment
MASTER REFERENCE: Microsoft Learn - Hardware Requirements

Status: Active


Overview

This document defines hardware requirements and specifications for Azure Local cluster deployments. It covers:

  • Supported Hardware Platforms - Dell integrated systems and validated nodes
  • Minimum Requirements - CPU, memory, storage, and network specifications
  • Firmware Baselines - Required BIOS, iDRAC, NIC, and controller versions
  • Hardware Discovery - Automated inventory collection via iDRAC Redfish API
  • Node Inventory Worksheets - Per-site hardware documentation

Azure Local Cloud Standard

Azure Local Cloud standardizes on Dell AX-series integrated systems. All hardware must meet Microsoft's Azure Local certification requirements.


Supported Hardware Platforms

Dell Integrated Systems (AX-Series)

Dell AX-series nodes are purpose-built for Azure Local with pre-validated configurations:

| Model | CPU | Max RAM | Storage | Network | Use Case |
|---|---|---|---|---|---|
| Dell AX-760 | Intel Xeon Scalable 5th Gen | 4 TB | NVMe + SSD | 4x 100GbE | General purpose |
| Dell AX-770 | Intel Xeon Scalable 5th Gen | 4 TB | NVMe + SSD | 4x 100GbE | High-performance |

Minimum Hardware Requirements

Azure Local System Requirements

| Component | Minimum | Recommended | Notes |
|---|---|---|---|
| Nodes per Cluster | 1 | 2-16 | Single-node supported for dev/test |
| CPU Cores (per node) | 16 cores | 32+ cores | Second-generation 64-bit or later |
| RAM (per node) | 128 GB | 256+ GB | Additional for Arc VM management |
| Boot Drive | 200 GB | 480+ GB SSD | BOSS card or M.2 recommended |
| Storage Drives | 2 drives | 4+ NVMe | For Storage Spaces Direct |
| Network Adapters | 2 NICs | 4+ NICs | RDMA-capable for storage |

Storage Requirements

| Storage Tier | Drive Type | Minimum Count | Purpose |
|---|---|---|---|
| Cache Tier | NVMe SSD | 2 per node | High-speed caching |
| Capacity Tier | SSD or HDD | 4 per node | Data storage |
| Boot Drive | SSD (BOSS/M.2) | 1 per node | OS installation |

Storage Configuration:
  • Minimum 2 capacity drives required for Storage Spaces Direct resiliency
  • Cache-to-capacity ratio: 1:4 to 1:8 recommended
  • Total raw capacity: Plan for 2-3x usable capacity (mirroring overhead)
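
The capacity-planning guidance above can be sketched as a quick calculation. This is a minimal sketch, not a sizing tool: the function name and the example drive counts and sizes are illustrative assumptions, and three-way mirroring (~3x overhead) is assumed per the 2-3x planning note.

```powershell
# Estimate usable capacity from raw capacity, assuming mirrored resiliency.
# MirrorCopies = 3 models a three-way mirror (typical for 3+ node clusters).
function Get-UsableCapacityTB {
    param(
        [int]$Nodes,
        [int]$CapacityDrivesPerNode,
        [double]$DriveSizeTB,
        [double]$MirrorCopies = 3
    )
    $rawTB = $Nodes * $CapacityDrivesPerNode * $DriveSizeTB
    [math]::Round($rawTB / $MirrorCopies, 1)
}

# Hypothetical example: 4 nodes x 8 x 7.68 TB capacity drives
Get-UsableCapacityTB -Nodes 4 -CapacityDrivesPerNode 8 -DriveSizeTB 7.68
# → 81.9 (TB usable from 245.76 TB raw)
```

A two-node cluster uses a two-way mirror instead, so pass `-MirrorCopies 2` for that case.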

Network Requirements

| Interface | Speed | RDMA | Purpose |
|---|---|---|---|
| Management | 1 GbE or 10 GbE | No | Cluster management, iDRAC |
| Storage | 25 GbE or 100 GbE | Required | Storage Spaces Direct (S2D) |
| Compute/VM | 25 GbE or 100 GbE | Optional | VM traffic, SDN |

RDMA Technologies Supported:

  • RoCEv2 (RDMA over Converged Ethernet v2) - Preferred
  • iWARP - Alternative for environments without DCB
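
The storage-intent requirement (at least two RDMA-capable NICs at 25 GbE or faster) can be expressed as a small check. The helper name and input shape here are hypothetical; in practice you would assemble the adapter objects from `Get-NetAdapter` and `Get-NetAdapterRdma` output on each node.

```powershell
# Hypothetical helper: true when a node has at least $MinNics RDMA-enabled
# adapters running at $MinSpeedGbps or faster (the storage-intent minimum).
function Test-StorageRdmaReadiness {
    param([object[]]$Adapters, [int]$MinNics = 2, [int]$MinSpeedGbps = 25)
    $eligible = $Adapters | Where-Object {
        $_.RdmaEnabled -and $_.SpeedGbps -ge $MinSpeedGbps
    }
    return (@($eligible).Count -ge $MinNics)
}

# Mocked adapter data (names and speeds are illustrative)
$adapters = @(
    [pscustomobject]@{ Name = 'SLOT 1 Port 1'; RdmaEnabled = $true;  SpeedGbps = 100 },
    [pscustomobject]@{ Name = 'SLOT 1 Port 2'; RdmaEnabled = $true;  SpeedGbps = 100 },
    [pscustomobject]@{ Name = 'Embedded 1';    RdmaEnabled = $false; SpeedGbps = 10 }
)
Test-StorageRdmaReadiness -Adapters $adapters   # → True
```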

Firmware Baseline Requirements

Azure Local Cloud Standard Firmware Versions

| Component | Minimum Version | Target Version | Notes |
|---|---|---|---|
| Dell BIOS | 2.0.0 | Latest stable | Check Dell support matrix |
| iDRAC Firmware | 7.0.0 | Latest stable | iDRAC 9 required |
| NIC Firmware | Varies by model | Latest validated | Check Azure Local catalog |
| RAID Controller | Varies by model | Latest stable | PERC/HBA firmware |
| Dell SBE Package | 4.2.2512 | 4.2.2512.1616+ | Solution Builder Extension |

Dell Gold Image

Azure Local Cloud Standard Dell Gold Image specifications:

| Component | Version | Notes |
|---|---|---|
| Windows Server Build | 2510 | Azure Local OS |
| SBE Version | 4.2.2512.1616 | Dell Solution Builder Extension |
| Dell Solution Version | 12.2510.0.3165 | Integrated solution package |
| Azure Local Build | 2601 | Running version after Arc registration |

Firmware Validation

Validate firmware versions match across all cluster nodes:

```powershell
# Check BIOS version consistency
$discovery = Get-Content discovery/idrac-inventory.json | ConvertFrom-Json

foreach ($server in $discovery.servers.PSObject.Properties.Value) {
    Write-Host "Server: $($server.service_tag)"
    Write-Host "  BIOS: $($server.firmware.bios_version)"
    Write-Host "  iDRAC: $($server.firmware.idrac_version)"
}
```
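
Beyond printing versions, drift can be flagged automatically. A minimal sketch, assuming the same inventory JSON schema used above (the helper function name is illustrative):

```powershell
# True when every node reports the same version string.
function Test-FirmwareConsistency {
    param([string[]]$Versions)
    return (@($Versions | Sort-Object -Unique).Count -le 1)
}

# Warn on BIOS drift across the cluster (guarded so it is a no-op
# when the inventory file has not been generated yet).
if (Test-Path discovery/idrac-inventory.json) {
    $discovery = Get-Content discovery/idrac-inventory.json | ConvertFrom-Json
    $biosVersions = $discovery.servers.PSObject.Properties.Value |
        ForEach-Object { $_.firmware.bios_version }

    if (-not (Test-FirmwareConsistency -Versions $biosVersions)) {
        Write-Warning "BIOS versions differ across nodes: $($biosVersions -join ', ')"
    }
}
```

The same comparison can be repeated for `idrac_version` and NIC firmware fields.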

Hardware Discovery Process

Overview

Hardware discovery automates collection of Dell server specifications via iDRAC Redfish API, enabling:

  • Accurate hardware inventory for deployment planning
  • Validation that hardware meets Azure Local requirements
  • Documentation of firmware versions and configurations
  • Change tracking for hardware lifecycle management

Primary Script: Get-DellServerInventory-FromiDRAC.ps1
Output: discovery/idrac-inventory.json

Prerequisites

iDRAC Access Requirements:

  • iDRAC 9 or later with Redfish API enabled
  • Network connectivity to iDRAC management interface (HTTPS/443)
  • iDRAC administrator credentials

Verify iDRAC Accessibility:

```powershell
# Test network connectivity
Test-NetConnection -ComputerName "10.10.10.11" -Port 443

# Verify Redfish API endpoint
# (-SkipCertificateCheck requires PowerShell 6 or later)
Invoke-RestMethod -Uri "https://10.10.10.11/redfish/v1/" -SkipCertificateCheck
```

What Gets Discovered

| Category | Details Captured |
|---|---|
| System | Model, service tag, asset tag, BIOS version |
| CPU | Model, core count, thread count, max speed |
| Memory | Total capacity, DIMM count, speed, type |
| Storage | NVMe/SSD/HDD count, capacity, controller firmware |
| Network | Adapter model, MAC addresses, firmware version |
| Firmware | BIOS, iDRAC, NIC, controller versions |
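
For orientation, the inventory JSON has roughly the following shape. This is an illustrative excerpt only: the field names are those referenced by the validation examples in this runbook, while the service tag and all values are placeholders.

```json
{
  "servers": {
    "ABC1234": {
      "service_tag": "ABC1234",
      "firmware": { "bios_version": "2.1.0", "idrac_version": "7.10.30.00" },
      "cpu": { "cores": 32 },
      "memory": { "total_capacity_gb": 512 },
      "storage": { "nvme_count": 4, "nvme_total_gb": 30720 },
      "network": { "adapters": [ { "model": "...", "mac": "..." } ] }
    }
  }
}
```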

Running Discovery

Single Server:

```powershell
cd C:\git\azurelocal-toolkit

.\scripts\discovery\Get-DellServerInventory-FromiDRAC.ps1 `
    -iDRACIP "10.10.10.11"
```

Multiple Servers:

```powershell
# Prompt for iDRAC credentials once
$cred = Get-Credential -Message "Enter iDRAC Administrator credentials"

# Discover multiple servers
.\scripts\discovery\Get-DellServerInventory-FromiDRAC.ps1 `
    -iDRACIP @("10.10.10.11", "10.10.10.12", "10.10.10.13", "10.10.10.14") `
    -Credential $cred `
    -Verbose
```

With Historical Tracking:

```powershell
# Keep timestamped files for firmware change tracking
.\scripts\discovery\Get-DellServerInventory-FromiDRAC.ps1 `
    -iDRACIP @("10.10.10.11", "10.10.10.12") `
    -Credential $cred `
    -KeepTimestampedFiles
```

Validate Hardware Requirements

After discovery, validate hardware meets Azure Local requirements:

```powershell
$discovery = Get-Content discovery/idrac-inventory.json | ConvertFrom-Json

foreach ($server in $discovery.servers.PSObject.Properties.Value) {
    Write-Host "`nValidating: $($server.service_tag)" -ForegroundColor Cyan

    # CPU validation (minimum 16 cores)
    if ($server.cpu.cores -lt 16) {
        Write-Host "  ⚠️ CPU: $($server.cpu.cores) cores (minimum 16 required)" -ForegroundColor Yellow
    } else {
        Write-Host "  ✅ CPU: $($server.cpu.cores) cores" -ForegroundColor Green
    }

    # RAM validation (minimum 128 GB)
    if ($server.memory.total_capacity_gb -lt 128) {
        Write-Host "  ⚠️ RAM: $($server.memory.total_capacity_gb) GB (minimum 128 GB required)" -ForegroundColor Yellow
    } else {
        Write-Host "  ✅ RAM: $($server.memory.total_capacity_gb) GB" -ForegroundColor Green
    }

    # Storage validation (minimum 2 NVMe for S2D cache)
    if ($server.storage.nvme_count -lt 2) {
        Write-Host "  ⚠️ NVMe: $($server.storage.nvme_count) drives (minimum 2 required)" -ForegroundColor Yellow
    } else {
        Write-Host "  ✅ NVMe: $($server.storage.nvme_count) drives ($($server.storage.nvme_total_gb) GB)" -ForegroundColor Green
    }

    # NIC validation (minimum 2 adapters)
    if ($server.network.adapters.Count -lt 2) {
        Write-Host "  ⚠️ NICs: $($server.network.adapters.Count) adapters (minimum 2 required)" -ForegroundColor Yellow
    } else {
        Write-Host "  ✅ NICs: $($server.network.adapters.Count) adapters" -ForegroundColor Green
    }
}
```
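
The per-node checks above can also be condensed into one row per node to pre-fill the worksheets that follow. A sketch, assuming the same inventory schema; the helper name and the output path `discovery/hardware-summary.csv` are illustrative choices, not part of the toolkit:

```powershell
# Map one inventory server object to a worksheet-friendly summary row.
function ConvertTo-NodeSummary {
    param([object]$Server)
    [pscustomobject]@{
        ServiceTag    = $Server.service_tag
        CpuCores      = $Server.cpu.cores
        RamGB         = $Server.memory.total_capacity_gb
        NvmeCount     = $Server.storage.nvme_count
        NicCount      = $Server.network.adapters.Count
        MeetsMinimums = ($Server.cpu.cores -ge 16 -and
                         $Server.memory.total_capacity_gb -ge 128 -and
                         $Server.storage.nvme_count -ge 2 -and
                         $Server.network.adapters.Count -ge 2)
    }
}

# Export a cluster summary (no-op until discovery has been run).
if (Test-Path discovery/idrac-inventory.json) {
    $discovery = Get-Content discovery/idrac-inventory.json | ConvertFrom-Json
    $discovery.servers.PSObject.Properties.Value |
        ForEach-Object { ConvertTo-NodeSummary $_ } |
        Export-Csv discovery/hardware-summary.csv -NoTypeInformation
}
```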

Node Inventory Worksheet (Per-Site)

Site Hardware Summary

| Field | Value |
|---|---|
| Site Name/ID | ________________ |
| Number of Nodes | ☐ 2 ☐ 3 ☐ 4 ☐ 8 ☐ 16 |
| Node Model | ________________ (e.g., Dell AX-760) |
| Cluster Configuration | ☐ 2-node ☐ 3-node ☐ 4+ node |

Node Details

Complete for each node in the cluster:

Node 1

| Field | Value |
|---|---|
| Hostname | ________________ (e.g., CONAX01) |
| Service Tag | ________________ |
| Asset Tag | ________________ |

Compute:

| Component | Specification |
|---|---|
| CPU Model | ________________ |
| CPU Cores | ______ |
| CPU Speed | ______ GHz |
| RAM Total | ______ GB |
| DIMM Configuration | ________________ |

Storage:

| Drive Type | Model | Capacity | Count | Purpose |
|---|---|---|---|---|
| Boot (BOSS/M.2) | ________ | ____ GB | __ | OS |
| NVMe (Cache) | ________ | ____ GB | __ | S2D Cache |
| SSD (Capacity) | ________ | ____ GB | __ | S2D Capacity |
| HDD (Capacity) | ________ | ____ GB | __ | S2D Capacity |

Network:

| Adapter | Model | Ports | Speed | RDMA | Purpose |
|---|---|---|---|---|---|
| NIC 1 | ________ | __ | ____ GbE | ☐ Yes ☐ No | ________ |
| NIC 2 | ________ | __ | ____ GbE | ☐ Yes ☐ No | ________ |
| NIC 3 | ________ | __ | ____ GbE | ☐ Yes ☐ No | ________ |
| NIC 4 | ________ | __ | ____ GbE | ☐ Yes ☐ No | ________ |

Firmware:

| Component | Current Version | Target Version | Status |
|---|---|---|---|
| BIOS | ________ | ________ | ☐ Current ☐ Update needed |
| iDRAC | ________ | ________ | ☐ Current ☐ Update needed |
| NIC Firmware | ________ | ________ | ☐ Current ☐ Update needed |
| RAID Controller | ________ | ________ | ☐ Current ☐ Update needed |

Node 2

(Repeat the Node 1 table structure)

| Field | Value |
|---|---|
| Hostname | ________________ |
| Service Tag | ________________ |
| Asset Tag | ________________ |

(Continue for additional nodes)


iDRAC Access and Credentials

| Node | iDRAC IP | Username | Password Location | Access Verified |
|---|---|---|---|---|
| Node 1 | ________ | ________ | Key Vault: ________ | ☐ Yes ☐ No |
| Node 2 | ________ | ________ | Key Vault: ________ | ☐ Yes ☐ No |
| Node 3 | ________ | ________ | Key Vault: ________ | ☐ Yes ☐ No |
| Node 4 | ________ | ________ | Key Vault: ________ | ☐ Yes ☐ No |

iDRAC Verification:

  • iDRAC firmware version matches baseline: ☐ Yes ☐ No
  • iDRAC web interface accessible: ☐ Yes ☐ No
  • Redfish API enabled and accessible: ☐ Yes ☐ No
  • Virtual console (KVM) functional: ☐ Yes ☐ No
  • Virtual media mount tested: ☐ Yes ☐ No

NIC Mapping and Labeling

Document the physical NIC identification for each node:

| Slot | Port | MAC Address | PCI Address | Purpose | VLAN(s) |
|---|---|---|---|---|---|
| Embedded | 1 | __:__:__:__:__:__ | ____ | Management | ____ |
| Embedded | 2 | __:__:__:__:__:__ | ____ | Management | ____ |
| Slot 1 | 1 | __:__:__:__:__:__ | ____ | Storage 1 | ____ |
| Slot 1 | 2 | __:__:__:__:__:__ | ____ | Storage 2 | ____ |
| Slot 2 | 1 | __:__:__:__:__:__ | ____ | Compute | Trunk |
| Slot 2 | 2 | __:__:__:__:__:__ | ____ | Compute | Trunk |

RDMA Configuration:

| Setting | Value |
|---|---|
| RDMA Technology | ☐ RoCEv2 ☐ iWARP ☐ N/A |
| RDMA NICs per Node | ☐ 2 ☐ 4 ☐ Other: ____ |
| RDMA Network Isolation | ☐ Dedicated VLANs ☐ Separate physical NICs |
| DCB/PFC Configured | ☐ Yes ☐ No |
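
For RoCEv2, the DCB/PFC checkbox above typically means PFC is enabled on exactly the storage traffic priority (commonly priority 3) and nowhere else. That condition can be expressed as a small check; the helper name is illustrative, and the input mimics the objects returned by `Get-NetQosFlowControl` on the node.

```powershell
# Illustrative PFC sanity check: exactly one priority enabled, and it is
# the expected storage priority (3 by convention for RoCEv2 deployments).
function Test-PfcPriority {
    param([object[]]$FlowControl, [int]$StoragePriority = 3)
    $enabled = $FlowControl | Where-Object Enabled |
        Select-Object -ExpandProperty Priority
    return (@($enabled).Count -eq 1 -and @($enabled)[0] -eq $StoragePriority)
}

# Mocked Get-NetQosFlowControl-style data: PFC on priority 3 only
$pfc = 0..7 | ForEach-Object {
    [pscustomobject]@{ Priority = $_; Enabled = ($_ -eq 3) }
}
Test-PfcPriority -FlowControl $pfc   # → True
```

iWARP deployments do not require DCB/PFC, so this check applies only when RoCEv2 is selected above.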

Hardware Assessment Checklist

Pre-Deployment Validation

  • All nodes are same model or validated compatible
  • All nodes have identical hardware configuration
  • Firmware versions match across all nodes
  • Firmware meets minimum baseline requirements
  • CPU cores meet minimum requirements (16+)
  • RAM meets minimum requirements (128 GB+)
  • Storage drives meet minimum requirements (2+ NVMe)
  • Network adapters are RDMA-capable (for storage)
  • iDRAC accessible from deployment workstation
  • Service tags documented
  • Hardware warranty verified and documented

Discovery Outputs

  • discovery/idrac-inventory.json generated
  • Hardware validation script run with all checks passing
  • Firmware update plan created (if updates needed)
  • NIC mapping documented for all nodes
  • RDMA configuration plan documented

Next Steps

After completing hardware requirements documentation:

  1. For multi-site deployments, see Multi-Site Planning
  2. Proceed to Stage 11 - Hardware Provisioning
  3. Review Site Assessment - confirm the site assessment is complete before provisioning begins