
Scale-Out File Server — Design & Deployment Guide

Guest SOFS on Azure Local for AVD FSLogix Profiles

**Version** 1.0
**Last Updated** March 2026
**Maintained by** Hybrid Cloud Solutions LLC
**Customer** Infinite Improbability Corp (IIC)

What This Document Covers

This document is the complete design and deployment reference for a 3-node Scale-Out File Server (SOFS) guest cluster running Storage Spaces Direct (S2D) on Azure Local, purpose-built to host FSLogix profile containers for Azure Virtual Desktop (AVD) session hosts.

The document is organized in four parts:

  • Part I — Design Decisions explains how two stacked mirror layers multiply raw consumption, introduces the "cattle vs. pets" concept for AVD session hosts, walks through all 13 storage sizing scenarios (10 SOFS + 3 Cloud Cache) with their capacity implications, and commits to the Maximum HA scenario: three-way mirror at the Azure Local host layer for profile CSVs, three-way mirror at the guest S2D layer for profile volumes, and two-way mirror at the host layer for AVD workload CSVs. The hardware implication — a 6-node Dell AX-760 cluster with 10 × 7.68 TB NVMe per node — is derived directly from the capacity math.
  • Part II — Architecture & Design details the hardware build, naming and identity model, host-layer storage layout, SOFS VM configuration, guest S2D storage design with three FSLogix shares, network configuration, and AVD integration points.
  • Part III — Implementation provides the full 11-phase deployment from creating Azure Local host volumes through guest cluster creation, S2D configuration, SOFS role setup, SMB share creation, NTFS permissions, antivirus exclusions, and validation. Every phase includes exact PowerShell or Azure CLI commands with IIC-specific resource names following Azure Cloud Adoption Framework (CAF) naming conventions.
  • Part IV — Reference consolidates the IP/name reference table, operational notes (patching, monitoring, failure scenarios), AVD session host configuration (FSLogix registry keys, identity model), optional Cloud Cache for DR, automation script inventory with links, and Microsoft documentation references.

The companion azurelocal-sofs-fslogix repository contains automation that can execute these same steps via Terraform, Bicep, ARM templates, PowerShell scripts, and Ansible playbooks — see Automation Scripts for a detailed breakdown of what each tool covers and which phases it automates.


Table of Contents

Part I — Design Decisions

  1. Understanding Stacked Mirror Resiliency
  2. Why AVD Session Hosts Are Cattle, Not Pets
  3. Storage Sizing Scenarios
  4. Recommended Scenario — Maximum HA

Part II — Architecture & Design

  5. Hardware Build — Dell AX-760 (6-Node Cluster)
  6. Naming and Identity
  7. Azure Local Host Storage Design
  8. SOFS VM Configuration
  9. Guest SOFS Cluster Storage Design
  10. Network Design
  11. AVD Integration Points

Part III — Implementation

  1. Prerequisites
  2. Phase 1: Create Azure Local Host Volumes
  3. Phase 2: Deploy SOFS VMs
  4. Phase 3: Configure Anti-Affinity Rules
  5. Phase 4: Post-Deployment VM Configuration
  6. Phase 5: Install Required Roles and Features
  7. Phase 6: Validate and Create the Guest Failover Cluster
  8. Phase 7: Enable Storage Spaces Direct (S2D)
  9. Phase 8: Add the Scale-Out File Server Role
  10. Phase 9: Configure NTFS Permissions for FSLogix
  11. Phase 10: Antivirus Exclusions
  12. Phase 11: Validation and Testing

Part IV — Reference

  1. IP and Name Reference
  2. Operations and Maintenance
  3. Important Notes and Considerations
  4. Considerations for AVD Deployment
  5. Appendix A — Cloud Cache for DR to Azure (Optional)
  6. Automation Scripts
  7. Microsoft Documentation Links
  8. Related Resources

Part I — Design Decisions

1. Understanding Stacked Mirror Resiliency

Mirror resiliency is evaluated at two independent layers:

  • Azure Local cluster layer — The physical S2D pool where Cluster Shared Volumes (CSVs) are created to host SOFS VMs and AVD workload VMs.
  • SOFS cluster layer — The virtual S2D pool inside the SOFS guest cluster, formed from VHDX data disks passed through from the Azure Local cluster. This is where the FSLogix profile volumes live.

These layers multiply. A two-way mirror on Azure Local hosting a two-way mirror inside the SOFS cluster means every byte of profile data exists in 2 × 2 = 4 physical copies; a three-way mirror at both layers means 3 × 3 = 9 copies.

| Combination | Copies per Byte | Raw Multiplier |
| --- | --- | --- |
| Azure Local 2-way × SOFS 2-way | 4 | ~4.5 : 1 |
| Azure Local 2-way × SOFS 3-way | 6 | ~6.2 : 1 |
| Azure Local 3-way × SOFS 2-way | 6 | ~6.8 : 1 |
| Azure Local 3-way × SOFS 3-way | 9 | ~9.3 : 1 |

Stacked Mirror — Physical Copies Per Byte

The capacity tax is real. Three-way at both layers means 9 physical copies of every byte of user profile data. This is the maximum resiliency configuration — but it consumes substantial raw capacity. The cluster must be sized accordingly from the start, which is why this document presents the full scenario analysis before committing to a design.
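As a rough sanity check, the stacked-mirror arithmetic can be expressed in a few lines of PowerShell. This is an illustrative sketch (the function name is ours, not part of the deployment), and it ignores formatting overhead and S2D reserve capacity, which is why the raw multipliers in the table above are higher than the bare copy counts:

```powershell
# Illustrative only: copies per byte and pool consumption for stacked mirrors.
# Ignores formatting efficiency (~92%) and S2D reserve, so real raw multipliers
# are higher than these figures.
function Get-StackedMirrorFootprint {
    param(
        [int]$HostCopies,   # mirror data copies at the Azure Local layer (2 or 3)
        [int]$GuestCopies,  # mirror data copies at the guest S2D layer (2 or 3)
        [double]$UsableGB   # usable FSLogix capacity required
    )
    [pscustomobject]@{
        CopiesPerByte = $HostCopies * $GuestCopies
        GuestPoolGB   = $UsableGB * $GuestCopies                # guest S2D pool consumed
        HostPoolGB    = $UsableGB * $GuestCopies * $HostCopies  # minimum host pool consumed
    }
}

# Three-way at both layers: 9 physical copies of every byte
Get-StackedMirrorFootprint -HostCopies 3 -GuestCopies 3 -UsableGB 27000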


2. Why AVD Session Hosts Are Cattle, Not Pets

In IIC's Azure Virtual Desktop deployment, session hosts are cattle — interchangeable, disposable, and replaceable — not pets that are individually maintained and irreplaceable. This distinction drives the entire storage design.

Pooled Host Pools: Non-Persistent by Design

IIC's 24 AVD session hosts run Windows 11 Enterprise Multi-Session in a pooled host pool. Users are load-balanced across available hosts — no user is permanently assigned to a specific VM. At logoff, the session host returns to its clean baseline state. All user personalization, data, and application state lives somewhere else: FSLogix profile containers on the SOFS.

FSLogix Decouples State from Compute

This is the key insight: the user's data is not on the VM. FSLogix's kernel-mode filter driver intercepts the profile load at logon, mounts a per-user VHDX from the SOFS share (\\iic-fslogix\Profiles), and transparently redirects C:\Users\<Username> into that VHDX. When the user logs off, the VHDX is cleanly unmounted and the VM is stateless again.

What This Means for Storage Design

| Component | Nature | Storage Implication |
| --- | --- | --- |
| **AVD session host VMs** | Cattle — replaceable, stateless | Host-layer CSVs need **2-way mirror** — sufficient protection for VMs that can be redeployed from a golden image in minutes |
| **SOFS / FSLogix profile data** | Pets — irreplaceable user state | **3-way mirror** at the host layer + **3-way mirror** at the guest layer — maximum protection for data that represents every user's work environment |

Losing a session host VM is a non-event — the user reconnects to another host and their profile remounts seamlessly. Losing profile data would mean users losing their desktop settings, Outlook cache, application configurations, browser bookmarks, and saved documents stored in OneDrive/SharePoint sync. The asymmetry between these two impacts is why IIC invests in 3-way mirrors for profiles but only 2-way for workloads.

IIC's AVD Environment

| Parameter | Value |
| --- | --- |
| **Total users** | 2,000 |
| **Concurrent users (50%)** | 1,000 |
| **Session hosts** | 24 × Windows 11 Enterprise Multi-Session |
| **VM size** | 8 vCPU, 32 GB RAM per host |
| **Users per host** | 25 concurrent |
| **Workload profile** | Heavy — Teams, Outlook, LOB applications |
| **Host pool type** | Pooled (non-persistent) |
| **FSLogix per-user quotas** | 5 GB profile + 10 GB ODFC + 3 GB AppData = 18 GB/user |

3. Storage Sizing Scenarios

Before committing to a mirror configuration, it's essential to understand whether the cluster can physically support it. The following scenarios are based on the Azure Local Storage Sizing Analysis that evaluates every practical combination of SOFS node count, host mirror, and guest mirror against a given hardware baseline.

Baseline: Minimum Viable Cluster (3 nodes × 3 drives × 7.68 TB)

For reference, the minimum viable cluster has:

| Item | Value |
| --- | --- |
| Nodes | 3 |
| NVMe drives per node | 3 |
| Raw capacity per drive | 7.68 TB |
| Total raw | 69.12 TB |
| Formatted (~92%) | ~63.59 TB |
| S2D reserve (1 drive/node × 3 nodes) | ~21.20 TB |
| **Allocatable** | **~42,639 GB** |

SOFS Scenarios 1–10

Each scenario targets 5,120 GB usable FSLogix space with 10% growth headroom. The "Fits?" column indicates whether the total pool consumed fits within the 42,639 GB allocatable ceiling of the 3-node/3-drive baseline.

| # | SOFS Nodes | Host Mirror | Guest Mirror | Copies/Byte | Pool Consumed | Fits? |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 3-VM | 2-way | 2-way | 4 | ~36,000 GB | **Yes** |
| 2 | 3-VM | 2-way | 3-way | 6 | ~54,000 GB | No |
| 3 | 3-VM | 3-way | 2-way | 6 | ~54,000 GB | No |
| 4 | 3-VM | 3-way | 3-way | 9 | ~81,000 GB | No |
| 5 | 2-VM | 2-way | 2-way | 4 | ~24,000 GB | **Yes** |
| 6 | 2-VM | 2-way | 3-way | 6 | ~50,000 GB | No |
| 7 | 2-VM | 3-way | 2-way | 6 | ~48,000 GB | No |
| 8 | 2-VM | 3-way | 3-way | 9 | ~72,000 GB | No |
| 9 | 3-VM | 2-way | 2-way (workload 2-way) | 4 | ~42,000 GB | Barely |
| 10 | 3-VM | 3-way | 2-way (workload 2-way) | 6 | ~54,000 GB | No |

On a 3-node/3-drive baseline, only Scenarios 1 and 5 (both 2×2) fit. Any three-way mirror at either layer pushes past the pool ceiling.

Cloud Cache Scenarios 11–13

Cloud Cache replaces the guest S2D layer with a local cache on each session host plus Azure Blob replication. This eliminates the guest mirror multiplier entirely.

| # | Approach | Host Mirror | Guest Mirror | Pool Consumed | Notes |
| --- | --- | --- | --- | --- | --- |
| 11 | Cloud Cache (3-VM SOFS) | 2-way | None (CC) | ~24,000 GB | SOFS provides primary storage; Azure Blob for DR |
| 12 | Cloud Cache (no SOFS) | 2-way | None | ~10,000 GB | Azure Files or Blob only — no on-prem SOFS |
| 13 | Cloud Cache hybrid | 3-way | None (CC) | ~36,000 GB | 3-way host for SOFS; CC handles profile resiliency |

Cloud Cache eliminates the guest mirror tax but introduces session-host local disk requirements, write amplification, and Azure egress costs. It's the right choice for multi-site DR or when the cluster physically cannot support 3-way mirrors. For IIC's single-site deployment with sufficient hardware, SOFS with stacked mirrors is simpler and more predictable.

More Drives Per Node or More Nodes

The scenarios above are constrained by the 3-node/3-drive baseline. Adding more drives per node (4, 5, 6+) or more nodes (4+) changes the math dramatically:

  • 4 nodes × 4 drives × 7.68 TB: ~86,016 GB allocatable — Scenarios 1–5 all fit
  • 6 nodes × 5 drives × 7.68 TB: ~180,000+ GB allocatable — all SOFS scenarios fit
  • 6 nodes × 10 drives × 7.68 TB: ~377,856 GB allocatable — all scenarios fit with headroom

The decision gate is simple: if you want three-way mirrors at either layer, you need more capacity than a 3-node/3-drive cluster provides. Size the cluster to fit the resiliency you need — don't compromise resiliency to fit the cluster.

Decision: Commit to One Scenario

This document commits to Scenario 4 on scaled-up hardware — three-way host mirror for SOFS CSVs, three-way guest mirror for profile volumes, and two-way host mirror for AVD workload CSVs. The next section details the hardware required and the full capacity math.

Reference: Use the S2D Capacity Calculator from the azurelocal-toolkit repository to model your own hardware configuration against any of these 13 scenarios.


Configuration Summary

| Layer | Mirror | Rationale |
| --- | --- | --- |
| **Host S2D — SOFS CSVs** | 3-way | Profile data is irreplaceable; survives 2 simultaneous drive/node failures |
| **Guest S2D — Profile volumes** | 3-way | Defense in depth; 9 physical copies total |
| **Host S2D — AVD workload CSVs** | 2-way | Session hosts are cattle; redeployable in minutes |

Hardware Implication: 6-Node Dell AX-760 Cluster

To support 1,200 users at 3×3 mirrors with comfortable headroom, IIC's cluster consists of:

  • 6 physical nodes (iic-01-n01 through iic-01-n06)
  • 10 × 7.68 TB NVMe U.2 per node (60 drives total)
  • Full hardware details in Section 5

Host S2D Pool Capacity

| Item | Value |
| --- | --- |
| Raw capacity | 60 × 7,680 GB = **460,800 GB** |
| Formatted (~92%) | ~423,936 GB |
| S2D reserve (1 drive/node × 6 nodes) | 6 × 7,680 GB = 46,080 GB |
| **Allocatable** | **377,856 GB** |
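The figures above follow from simple arithmetic; a quick PowerShell check (values in GB, with the ~92% formatted efficiency assumed throughout this guide):

```powershell
# Host S2D pool capacity for the 6-node AX-760 cluster
$nodes         = 6
$drivesPerNode = 10
$driveGB       = 7680

$rawGB       = $nodes * $drivesPerNode * $driveGB   # 460,800 GB raw
$formattedGB = $rawGB * 0.92                        # 423,936 GB formatted
$reserveGB   = $nodes * $driveGB                    # 46,080 GB (1 drive per node)
$formattedGB - $reserveGB                           # 377,856 GB allocatable
```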

Host Volume Layout

| Volume | Usable Size | Mirror | Pool Consumed | Purpose |
| --- | --- | --- | --- | --- |
| `csv-iic-clus01-m3-sofs-01` | 31,627 GB | 3-way | 94,881 GB | SOFS VM 1 (iic-sofs-01) |
| `csv-iic-clus01-m3-sofs-02` | 31,627 GB | 3-way | 94,881 GB | SOFS VM 2 (iic-sofs-02) |
| `csv-iic-clus01-m3-sofs-03` | 31,627 GB | 3-way | 94,881 GB | SOFS VM 3 (iic-sofs-03) |
| `csv-iic-clus01-m2-avd-01` | 2,000 GB | 2-way | 4,000 GB | AVD session hosts (12 VMs) |
| `csv-iic-clus01-m2-avd-02` | 2,000 GB | 2-way | 4,000 GB | AVD session hosts (12 VMs) |
| **Total** | | | **292,643 GB** | **Headroom: 85,213 GB (22.6%)** |

Why three separate SOFS host volumes? If all three SOFS VMs sit on a single Azure Local volume, that volume is a shared-fate dependency — a volume-level issue takes out the entire guest cluster. With three volumes, a single volume failure only affects one SOFS node. The guest S2D three-way mirror continues operating on the remaining two nodes with full resiliency.

Do not thin-provision the host volumes. New-Volume uses fixed provisioning by default — leave it that way. Thin provisioning lets you over-commit the Azure Local storage pool by allocating more logical capacity than physical space exists, but for SOFS host volumes this creates more problems than it solves:

  • Pool full = all volumes die. If total writes exceed the physical pool capacity, S2D puts the pool into a degraded/read-only state. That's not one volume full — it's every SOFS VM going read-only simultaneously.
  • Defeats fault isolation. Three volumes on a shared thin pool are back to a shared-fate dependency on pool free space — exactly what separate volumes are designed to eliminate.
  • Write-time allocation overhead. Every write must find and allocate slabs from the pool. During a logon storm, that's an extra metadata operation per write. Fixed provisioning has pre-allocated extents — writes go straight to reserved space.
  • Misleading capacity reporting. Volumes report large free space while the underlying pool may be nearly full. Admin tools, PerfMon, and FSRM all show the logical number, not the physical reality.

SOFS VM Configuration

| Item | Value |
| --- | --- |
| VM count | 3 |
| Names | `iic-sofs-01`, `iic-sofs-02`, `iic-sofs-03` |
| vCPU | 4 per VM |
| RAM | 16 GB per VM |
| Generation | Gen2 |
| OS disk | 127 GB (dynamic VHDX) |
| Data disks | 7 × 4,500 GB per VM (dynamic VHDX) |
| Total disk per VM | ~31,627 GB (fits within CSV) |
| OS | Windows Server 2025 Datacenter: Azure Edition Core |

Guest S2D Pool Capacity

| Item | Value |
| --- | --- |
| Total S2D pool | 21 disks × 4,500 GB = 94,500 GB |
| S2D reserve (1 × 4,500 GB × 3 nodes) | 13,500 GB |
| **Allocatable** | **81,000 GB** |
| At 3-way mirror | **27,000 GB usable** |

Guest Volume Layout

| Volume | Usable Size | Pool Consumed | Per-User Quota | Headroom |
| --- | --- | --- | --- | --- |
| `Profiles` | 7,500 GB | 22,500 GB | 5 GB × 1,200 = 6,000 GB | 25% |
| `ODFC` | 13,500 GB | 40,500 GB | 10 GB × 1,200 = 12,000 GB | 12.5% |
| `AppData` | 6,000 GB | 18,000 GB | 3 GB × 1,200 = 3,600 GB | 67% |
| **Total** | **27,000 GB** | **81,000 GB** | **18 GB/user** | **25% avg** |

ODFC headroom is modest at 12.5%. In practice, 10 GB per user is generous for Outlook OST + Teams cache. Most users will consume 3–6 GB for ODFC. The quotas represent ceiling allocations, not expected utilization. Monitor actual usage after deployment and expand data disks if needed — they are dynamically provisioned.
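The headroom percentages can be reproduced directly from the quotas and the 1,200-user sizing basis; a short PowerShell check:

```powershell
# Headroom = (usable - quota demand) / quota demand, per guest volume
$users   = 1200
$volumes = @(
    @{ Name = 'Profiles'; UsableGB = 7500;  QuotaGB = 5  },
    @{ Name = 'ODFC';     UsableGB = 13500; QuotaGB = 10 },
    @{ Name = 'AppData';  UsableGB = 6000;  QuotaGB = 3  }
)

foreach ($v in $volumes) {
    $demandGB = $v.QuotaGB * $users
    $headroom = ($v.UsableGB - $demandGB) / $demandGB
    '{0}: demand {1:N0} GB, headroom {2:P1}' -f $v.Name, $demandGB, $headroom
}
# Matches the table: 25%, 12.5%, ~67%
```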

AVD Session Host Callout (Informational)

| Parameter | Value |
| --- | --- |
| VMs | 24 × Windows 11 Enterprise Multi-Session |
| Size | 8 vCPU, 32 GB RAM, 127 GB OS disk |
| Users per VM | 25 concurrent |
| Capacity | 25 × 24 = 600 concurrent users |
| Placement | 12 VMs on `csv-iic-clus01-m2-avd-01`, 12 on `csv-iic-clus01-m2-avd-02` |

AVD session host deployment is covered in the azurelocal-avd repository. The split across two workload CSVs is noted here for completeness — it is an AVD deployment concern, not a SOFS concern.

Compute N-1 Validation

With 6 nodes, the cluster must support all workloads with one node down (N-1):

| Workload | vCPU | RAM |
| --- | --- | --- |
| SOFS (3 VMs) | 12 | 48 GB |
| AVD (24 VMs) | 192 | 768 GB |
| **Total** | **204** | **816 GB** |

| Metric | Per Node at N-1 (5 nodes) | Available per Node | Utilization |
| --- | --- | --- | --- |
| vCPU | 40.8 | 128 (SMT threads) | **32%** |
| RAM | 163.2 GB | 512 GB | **32%** |

Comfortable headroom at N-1 for both compute and memory.
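The N-1 utilization figures reduce to one-line arithmetic (vCPUs are scheduled against the 128 SMT threads per node):

```powershell
# All workloads spread across 5 surviving nodes
$totalVcpu  = (3 * 4)  + (24 * 8)    # 12 SOFS + 192 AVD = 204
$totalRamGB = (3 * 16) + (24 * 32)   # 48 SOFS + 768 AVD = 816

$vcpuPerNode = $totalVcpu  / 5       # 40.8
$ramPerNode  = $totalRamGB / 5       # 163.2 GB

'{0:P0} vCPU, {1:P0} RAM' -f ($vcpuPerNode / 128), ($ramPerNode / 512)   # 32% each
```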


Part II — Architecture & Design

5. Hardware Build — Dell AX-760 (6-Node Cluster)

IIC's Azure Local cluster is built on the Dell Integrated System for Microsoft Azure Local AX-760 platform.

Per-Node Specification

| Component | Specification |
| --- | --- |
| **Platform** | Dell AX-760 |
| **Processors** | 2 × Intel Xeon Gold 6548N (32 cores / 64 threads each = 128 threads/node) |
| **Memory** | 512 GB DDR5-5600 (16 × 32 GB DIMMs) |
| **Storage** | 10 × 7.68 TB NVMe U.2 (all-flash, single tier) |
| **Boot** | Dell BOSS-N1 (M.2 RAID-1 internal boot) |
| **Network** | 4 × 25 GbE SFP28 (2 for management, 2 for compute/storage) |
| **Form factor** | 2U rack-mount |

Cluster Totals

| Resource | Per Node | 6-Node Total |
| --- | --- | --- |
| CPU cores | 64 (128 threads) | 384 cores (768 threads) |
| RAM | 512 GB | 3,072 GB (3 TB) |
| NVMe drives | 10 | 60 |
| Raw storage | 76.8 TB | 460.8 TB |
| Network ports (25 GbE) | 4 | 24 |

Networking

| Component | Specification |
| --- | --- |
| **TOR switches** | 2 × Dell S5248F-ON (VLT pair) |
| **Uplinks** | 4 × 100 GbE to spine (per switch) |
| **Host connections** | 25 GbE SFP28 to each TOR switch |
| **VLANs** | Management, Compute, Storage, Migration |

Physical Rack Layout

| Position | Equipment |
| --- | --- |
| U1–U2 | Dell S5248F-ON TOR Switch #1 |
| U3–U4 | Dell S5248F-ON TOR Switch #2 |
| U5–U6 | Patch panels |
| U7–U18 | 6 × Dell AX-760 nodes (2U each) |

6. Naming and Identity

IIC Naming Convention

All resources follow Azure Cloud Adoption Framework (CAF) and IIC standards.

| Item | Value |
| --- | --- |
| **Company** | Infinite Improbability Corp |
| **Domain** | `improbability.cloud` |
| **NetBIOS** | `IMPROBABLE` |
| **Prefix** | `iic` |
| **Entra tenant** | `improbability.onmicrosoft.com` |
| **Azure Local cluster** | `iic-clus01` |
| **Physical nodes** | `iic-01-n01` through `iic-01-n06` |
| **Resource group** | `rg-iic-sofs-azl-eus-01` |
| **Location** | East US |

Single AD Domain Model

IIC uses a single Active Directory domain for everything — Azure Local host nodes, SOFS VMs, and AVD session hosts are all joined to improbability.cloud. There is no separate management domain.

Since the SOFS cluster and AVD users are in the same domain, Kerberos authentication to the SMB shares is native — no cross-domain trust is needed.

| Component | Identity | Auth to SOFS |
| --- | --- | --- |
| Azure Local host nodes | `improbability.cloud` domain member | N/A (infrastructure) |
| SOFS VMs | `improbability.cloud` domain member | N/A (they are the server) |
| AVD session hosts | `improbability.cloud` domain member | Kerberos — native (same domain) |
| User at logon | `improbability.cloud` domain user | Kerberos TGS for `\\iic-fslogix` |

OU Structure

DC=improbability,DC=cloud
└── OU=Azure Local
    ├── OU=Host Nodes      ← iic-01-n01 through iic-01-n06
    ├── OU=SOFS
    │   ├── iic-sofs-01, iic-sofs-02, iic-sofs-03
    │   ├── iic-sofs (cluster CNO)
    │   └── iic-fslogix (SOFS access point)
    └── OU=AVD
        └── AVD session hosts

Service Accounts

| Account | Type | Purpose |
| --- | --- | --- |
| `svc-sofs-admin` | Domain user account | SOFS cluster administration |
| `gmsa-sofs$` | Group Managed Service Account (gMSA) | S2D and cluster operations |

AD Objects & OU Structure


7. Azure Local Host Storage Design

S2D manages all 60 NVMe drives across 6 nodes as a single distributed pool. All drives are NVMe-only (no cache/capacity tier split) — S2D runs in flat (all-capacity) mode.

Host Volume Layout

Reproduced from Section 4 for reference:

| Volume | Usable Size | Mirror | Pool Consumed | Purpose |
| --- | --- | --- | --- | --- |
| `csv-iic-clus01-m3-sofs-01` | 31,627 GB | 3-way | 94,881 GB | SOFS VM 1 |
| `csv-iic-clus01-m3-sofs-02` | 31,627 GB | 3-way | 94,881 GB | SOFS VM 2 |
| `csv-iic-clus01-m3-sofs-03` | 31,627 GB | 3-way | 94,881 GB | SOFS VM 3 |
| `csv-iic-clus01-m2-avd-01` | 2,000 GB | 2-way | 4,000 GB | AVD session hosts (12 VMs) |
| `csv-iic-clus01-m2-avd-02` | 2,000 GB | 2-way | 4,000 GB | AVD session hosts (12 VMs) |
| **Total** | | | **292,643 GB** | **Headroom: 85,213 GB (22.6%)** |

Host Volume Layout


8. SOFS VM Configuration

Each SOFS VM is deployed from the Windows Server 2025 Datacenter: Azure Edition Core (Gen2) gallery image (marketplace SKU: 2025-datacenter-azure-edition-core).

| Specification | Value |
| --- | --- |
| **VM count** | 3 |
| **VM names** | `iic-sofs-01`, `iic-sofs-02`, `iic-sofs-03` |
| **vCPU** | 4 per VM |
| **RAM** | 16 GB per VM |
| **OS disk** | 127 GB (dynamic VHDX) |
| **Data disks** | 7 × 4,500 GB per VM (dynamic VHDX) |
| **Total disk per VM** | ~31,627 GB |
| **OS** | Windows Server 2025 Datacenter: Azure Edition Core |
| **Domain** | `improbability.cloud` |
| **Placement** | Anti-affinity — one VM per physical node (pinned to `iic-01-n01`, `iic-01-n02`, `iic-01-n03`) |

Datacenter licensing is required for Storage Spaces Direct. Standard edition does not support S2D.

Why 16 GB RAM (not 8 GB)? With 1,200 users and 7 × 4,500 GB data disks per VM, the S2D metadata footprint and SMB session count are significantly larger than a small deployment. 16 GB provides comfortable headroom for the S2D health service, ReFS metadata cache, and concurrent SMB handles during logon storms.

SOFS Architecture — Three Host Volumes + Three Guest Volumes


9. Guest SOFS Cluster Storage Design

Inside the 3-VM SOFS guest cluster, all 21 data disks (7 × 4,500 GB × 3 VMs) form a single S2D storage pool. Three separate S2D volumes are created — one per FSLogix workload — using three-way mirror for maximum resiliency.

Pool Summary

| Item | Value |
| --- | --- |
| Total S2D pool | 21 disks × 4,500 GB = 94,500 GB |
| S2D reserve (1 × 4,500 GB × 3 nodes) | 13,500 GB |
| **Allocatable** | **81,000 GB** |

Guest Volume Layout

| Volume | Usable Size | Mirror | Pool Consumed | SMB Share | Contents |
| --- | --- | --- | --- | --- | --- |
| `Profiles` | 7,500 GB | 3-way | 22,500 GB | `\\iic-fslogix\Profiles` | FSLogix profile containers |
| `ODFC` | 13,500 GB | 3-way | 40,500 GB | `\\iic-fslogix\ODFC` | Office Data File Containers (Outlook OST, Teams cache) |
| `AppData` | 6,000 GB | 3-way | 18,000 GB | `\\iic-fslogix\AppData` | Per-user AppData redirections |
| **Total** | **27,000 GB** | | **81,000 GB** | | |

FSRM Quotas

File Server Resource Manager quotas prevent individual users from consuming disproportionate share space:

| Volume | Soft Warning (80%) | Hard Limit | Per-User Allocation |
| --- | --- | --- | --- |
| Profiles | 4 GB | 5 GB | 5 GB |
| ODFC | 8 GB | 10 GB | 10 GB |
| AppData | 2.4 GB | 3 GB | 3 GB |
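These quotas map to the FileServerResourceManager cmdlets. The sketch below covers the Profiles share only; the template name and volume path are illustrative and should be adjusted to the actual guest volume mount points (run on the SOFS node that owns the volume, with the FS-Resource-Manager feature installed):

```powershell
# 5 GB hard quota with an 80% warning threshold for the Profiles volume
$warnAt80 = New-FsrmQuotaThreshold -Percentage 80
New-FsrmQuotaTemplate -Name 'FSLogix-Profiles-5GB' -Size 5GB -Threshold $warnAt80

# Auto-apply: each new <SID>_<Username> folder receives its own 5 GB quota.
# Path is illustrative; use the actual mount point of the Profiles volume.
New-FsrmAutoQuota -Path 'C:\ClusterStorage\Profiles' -Template 'FSLogix-Profiles-5GB'
```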

Why Three Shares?

IIC's deployment targets 1,200 users across 24 AVD session hosts. Three separate volumes provide significant operational advantages at this scale:

  • NTFS metadata isolation — Each volume has its own MFT and change journal. Outlook OST writes hammering the ODFC change journal don't compete with profile writes for NTFS lock time on the Profiles volume.
  • Logon storm resilience — Heavy AppData syncs (Chrome profiles, specialized apps) only slow the AppData volume. The Profiles volume stays responsive — Start Menu and Desktop load fast for everyone else.
  • FSRM quotas — Per-volume File Server Resource Manager quotas let you hard-cap ODFC so one user's 50 GB Outlook cache can't eat into profile space. Impossible with a single volume.
  • Monitoring granularity — Separate PerfMon counters per volume. "ODFC at 85%" is actionable. "FSLogixData at 60%" tells you nothing about what's growing.
  • Future migration path — If IIC moves to Azure NetApp Files or tiered storage later, pre-separated data maps cleanly to different tiers (fast tier for Profiles, cheaper tier for ODFC/AppData).

Guest S2D Storage Design

FSLogix Data Flow — User Login to Disk Write


10. Network Design

All SOFS VMs connect to the compute network via a single NIC. The AVD session hosts are on the same network/VLAN for optimal SMB latency.

IP Allocation

| Component | IP Address | Notes |
| --- | --- | --- |
| `iic-sofs-01` | 10.42.10.21 | S2D node |
| `iic-sofs-02` | 10.42.10.22 | S2D node |
| `iic-sofs-03` | 10.42.10.23 | S2D node |
| `iic-sofs` (cluster CNO) | 10.42.10.25 | Failover cluster IP |
| `iic-fslogix` (SOFS access point) | — | Distributed Network Name (DNN); DNS A records resolve to the node IPs |

Assign static IPs or DHCP reservations before creating the guest cluster. All SOFS nodes must have stable, predictable IP addresses.
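If you assign the addresses from inside the guest OS rather than via DHCP reservation, the built-in NetTCPIP and DnsClient cmdlets cover it. The gateway, prefix length, and DNS servers below are placeholders for IIC's actual network values:

```powershell
# Example for iic-sofs-01; repeat per node with its own address.
# Gateway, prefix length, and DNS servers are placeholders.
New-NetIPAddress -InterfaceAlias 'Ethernet' `
                 -IPAddress '10.42.10.21' `
                 -PrefixLength 24 `
                 -DefaultGateway '10.42.10.1'

Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' `
                           -ServerAddresses '10.42.10.10', '10.42.10.11'
```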

Firewall Ports

Between SOFS VMs (east-west):

| Port | Protocol | Purpose |
| --- | --- | --- |
| 445 | TCP | SMB (S2D replication, CSV redirected I/O, client access) |
| 443 | UDP | SMB over QUIC (if used) |
| 5985–5986 | TCP | WinRM / PowerShell Remoting |
| 135 | TCP | RPC Endpoint Mapper (cluster communication) |
| 49152–65535 | TCP | RPC dynamic ports (cluster, S2D) |
| 3343 | UDP | Cluster network driver |
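Where host-firewall rules are managed explicitly rather than relying on the built-in cluster rule groups, the east-west rules can be scoped to the SOFS node addresses. A sketch (the display names are ours):

```powershell
# Restrict SMB and cluster heartbeat to the other SOFS nodes
$sofsNodes = '10.42.10.21', '10.42.10.22', '10.42.10.23'

New-NetFirewallRule -DisplayName 'SOFS east-west SMB' `
                    -Direction Inbound -Protocol TCP -LocalPort 445 `
                    -RemoteAddress $sofsNodes -Action Allow

New-NetFirewallRule -DisplayName 'SOFS cluster heartbeat' `
                    -Direction Inbound -Protocol UDP -LocalPort 3343 `
                    -RemoteAddress $sofsNodes -Action Allow
```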

Between SOFS VMs and AVD session hosts:

| Port | Protocol | Purpose |
| --- | --- | --- |
| 445 | TCP | SMB (FSLogix profile access via `\\iic-fslogix\`) |

Network Topology


11. AVD Integration Points

FSLogix Profile Mapping

Users never see a mapped drive or UNC path — the FSLogix agent (frxsvc.exe) on each session host handles everything automatically via a kernel-mode filter driver:

  1. VHDLocations is configured (via GPO) pointing to \\iic-fslogix\Profiles.
  2. At user logon, the FSLogix filter driver intercepts the profile load, connects to the share using the user's AD Kerberos identity, and creates (or mounts) a per-user VHDX inside a folder named <SID>_<Username>.
  3. The driver redirects C:\Users\<Username> into the mounted VHDX — completely transparent to the user and all applications.

Identity Model

On Azure Local, AVD session hosts must be AD domain-joined. Pure Entra ID join is not supported for Azure Local Arc VMs. Since all components are in the improbability.cloud domain, Kerberos authentication is automatic.

Hybrid Entra ID Join (domain-joined + registered in Entra ID) is also supported and recommended for SSO to the AVD gateway. It does not change the SOFS authentication path.

FSLogix Registry Configuration (Three Shares)

Profile Containers point to the Profiles share:

HKLM\SOFTWARE\FSLogix\Profiles
    Enabled                          REG_DWORD    1
    VHDLocations                     REG_MULTI_SZ \\iic-fslogix\Profiles
    SizeInMBs                        REG_DWORD    30000
    VolumeType                       REG_SZ       VHDX
    FlipFlopProfileDirectoryName     REG_DWORD    1

Office Data File Containers (ODFC) point to the ODFC share:

HKLM\SOFTWARE\Policies\FSLogix\ODFC
    Enabled                          REG_DWORD    1
    VHDLocations                     REG_MULTI_SZ \\iic-fslogix\ODFC
    VolumeType                       REG_SZ       VHDX
    FlipFlopProfileDirectoryName     REG_DWORD    1
    IncludeOutlookPersonalization    REG_DWORD    1

AppData redirection can use folder redirection GPO to \\iic-fslogix\AppData\%USERNAME% or a separate FSLogix container — choose based on user persona requirements.

GPO Path: Computer Configuration → Administrative Templates → FSLogix → Profile Containers
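For image builds or lab validation, the same values can be set with the registry cmdlets instead of GPO (a sketch mirroring the Profiles listing above; production configuration should stay on the GPO path):

```powershell
# FSLogix Profile Container settings, matching the listing above
$key = 'HKLM:\SOFTWARE\FSLogix\Profiles'
New-Item -Path $key -Force | Out-Null

Set-ItemProperty -Path $key -Name Enabled    -Value 1      -Type DWord
Set-ItemProperty -Path $key -Name SizeInMBs  -Value 30000  -Type DWord
Set-ItemProperty -Path $key -Name VolumeType -Value 'VHDX' -Type String
Set-ItemProperty -Path $key -Name FlipFlopProfileDirectoryName -Value 1 -Type DWord

# REG_MULTI_SZ: pass an array for VHDLocations
New-ItemProperty -Path $key -Name VHDLocations `
                 -Value @('\\iic-fslogix\Profiles') `
                 -PropertyType MultiString -Force | Out-Null
```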

AVD session host deployment, including host pool configuration and session host provisioning, is documented in the azurelocal-avd repository.

AVD Reference Architecture


Part III — Implementation

Prerequisites

Infrastructure

  • Azure Local cluster (iic-clus01) with 6 physical nodes (iic-01-n01 through iic-01-n06)
  • 377,856 GB allocatable pool capacity available on the Azure Local cluster (60 × 7.68 TB NVMe, minus reserves)
  • Windows Server 2025 Datacenter: Azure Edition Core (Gen2) gallery image registered on the Azure Local cluster (marketplace SKU: 2025-datacenter-azure-edition-core)

Licensing

  • Windows Server Datacenter edition is required for Storage Spaces Direct (S2D); Standard edition does not support it. Each of the 3 SOFS VMs must be licensed for Datacenter. This design deploys the Windows Server 2025 Datacenter: Azure Edition Core (Gen2) image.
  • If your Azure Local hosts are licensed with Windows Server Datacenter with Software Assurance or you have an active Azure Local subscription that includes Windows Server guest licensing, your guest VM rights may already cover the SOFS VMs. Check with your Microsoft licensing contact — this is not always included and depends on how the Azure Local cluster was purchased and licensed.
  • Without existing guest rights, you will need 3 additional Windows Server 2025 Datacenter licenses (or a volume licensing agreement that covers them).

Active Directory and DNS

  • Active Directory domain environment (improbability.cloud)
  • DNS configured for the domain
  • A domain account with permissions to:
      • Create Computer Objects in the target OU (required for the failover cluster CNO and the SOFS access point)
      • Join computers to the domain
      • Register DNS records (or pre-stage the DNS entries manually)
      • Create and manage SMB shares on the cluster
  • Pre-stage the cluster CNO (iic-sofs) and SOFS access point (iic-fslogix) Computer Objects in AD if your environment restricts dynamic Computer Object creation — otherwise the account above must have Create Computer Objects permission on the target OU

AD Objects & OU Structure

Tooling

  • Host volume creation (Phase 1): PowerShell run directly on an Azure Local cluster node (or via remote PowerShell to the cluster). The New-Volume cmdlet is a Storage Spaces Direct operation — it does not go through Azure.
  • Azure resource provisioning (Phases 1–2): Azure CLI (az) run from a PowerShell session. Install the Azure CLI and the stack-hci-vm extension. All commands in this guide use PowerShell variable syntax ($variable) and PowerShell line continuation (backtick `), not bash.
  • Guest OS configuration (Phases 4–11): Standard PowerShell remoting (Enter-PSSession / Invoke-Command) against the SOFS VMs from a management workstation with RSAT installed.

Install from a management workstation (winget):

# Azure CLI
winget install --id Microsoft.AzureCLI --source winget

# Azure PowerShell (Az module)
winget install --id Microsoft.AzurePowerShell --source winget

After installing the Azure CLI, add the stack-hci-vm extension:

az extension add --name stack-hci-vm --upgrade

RSAT (Remote Server Administration Tools) is required for Enter-PSSession, Invoke-Command, and failover cluster management. Install it via Settings → Apps → Optional Features → Add a feature → search "RSAT", or:

Get-WindowsCapability -Name RSAT* -Online |
    Where-Object { $_.State -ne 'Installed' } |
    Add-WindowsCapability -Online

Phase 1: Create Azure Local Host Volumes

1.1 — Create the Three-Way Mirror SOFS Volumes

PowerShell Run on: Cluster Node

Run this on an Azure Local cluster node (any node in the host cluster).

Create three separate three-way mirror CSV volumes — one per SOFS VM. Each volume provides 31,627 GB usable to hold one VM's OS and 7 data disks at full provisioned capacity.

# ── Create three dedicated SOFS storage volumes ──
# One per SOFS VM for fault isolation
# Three-way mirror: ~31,627 GB usable each = ~94,881 GB pool each
$sofsVolumes = @(
    "csv-iic-clus01-m3-sofs-01",
    "csv-iic-clus01-m3-sofs-02",
    "csv-iic-clus01-m3-sofs-03"
)

foreach ($volName in $sofsVolumes) {
    New-Volume -FriendlyName $volName `
               -StoragePoolFriendlyName "S2D on iic-clus01" `
               -FileSystem CSVFS_ReFS `
               -ResiliencySettingName Mirror `
               -NumberOfDataCopies 3 `
               -Size 31627GB
}

1.2 — Create the Two-Way Mirror Workload Volumes

PowerShell Run on: Cluster Node

Run this on an Azure Local cluster node.

Create two two-way mirror CSV volumes for AVD session hosts:

# ── Create two workload volumes for AVD session hosts ──
# Two-way mirror: 2,000 GB usable each = 4,000 GB pool each
$workloadVolumes = @(
    "csv-iic-clus01-m2-avd-01",
    "csv-iic-clus01-m2-avd-02"
)

foreach ($volName in $workloadVolumes) {
    New-Volume -FriendlyName $volName `
               -StoragePoolFriendlyName "S2D on iic-clus01" `
               -FileSystem CSVFS_ReFS `
               -ResiliencySettingName Mirror `
               -NumberOfDataCopies 2 `
               -Size 2000GB
}

1.3 — Verify Volumes

PowerShell Run on: Cluster Node

Get-VirtualDisk -CimSession "iic-clus01" |
    Where-Object { $_.FriendlyName -like "csv-iic-clus01-*" } |
    Select-Object FriendlyName, ResiliencySettingName, NumberOfDataCopies, Size, HealthStatus

Get-ClusterSharedVolume -Cluster "iic-clus01" |
    Select-Object Name, State

1.4 — Create Storage Paths in Azure

Azure CLI Run on: Mgmt Workstation

Run this from a management workstation with Azure CLI and the stack-hci-vm extension installed.

# ── Create storage paths — one per SOFS CSV volume ──
$subscription     = "<Your Subscription ID>"
$resourceGroup    = "rg-iic-sofs-azl-eus-01"
$location         = "eastus"
$customLocationID = "<Your Custom Location Resource ID>"

$storagePathDefs = @(
    @{ Name = "sp-iic-sofs-vol-01"; Path = "C:\ClusterStorage\csv-iic-clus01-m3-sofs-01" },
    @{ Name = "sp-iic-sofs-vol-02"; Path = "C:\ClusterStorage\csv-iic-clus01-m3-sofs-02" },
    @{ Name = "sp-iic-sofs-vol-03"; Path = "C:\ClusterStorage\csv-iic-clus01-m3-sofs-03" }
)

foreach ($sp in $storagePathDefs) {
    az stack-hci-vm storagepath create `
        --resource-group $resourceGroup `
        --custom-location $customLocationID `
        --location $location `
        --name $sp.Name `
        --path $sp.Path
}

After creation, capture the resource IDs:

$storagePathIds = @{}
foreach ($sp in $storagePathDefs) {
    $nodeId = $sp.Name.Substring($sp.Name.Length - 2)
    $storagePathIds[$nodeId] = az stack-hci-vm storagepath show `
        --resource-group $resourceGroup `
        --name $sp.Name `
        --query id -o tsv
}

$storagePathIds | Format-Table -AutoSize

1.5 — Verify Logical Network and Prerequisites

Azure CLI Run on: Mgmt Workstation

az extension add --name stack-hci-vm --upgrade

$subscription     = "<Your Subscription ID>"
$resourceGroup    = "rg-iic-sofs-azl-eus-01"
$location         = "eastus"
$customLocationID = "<Your Custom Location Resource ID>"
$imageName        = "img-iic-ws2025-dc-aze-core-g2-v1"
$logicalNetworkId = "<Your Compute Logical Network Resource ID>"

Phase 2: Deploy SOFS VMs

2.1 — Create Network Interfaces

Azure CLI Run on: Mgmt Workstation

$nodeIds = @("01", "02", "03")

foreach ($nodeId in $nodeIds) {
    az stack-hci-vm network nic create `
        --resource-group $resourceGroup `
        --custom-location $customLocationID `
        --location $location `
        --name "iic-sofs-$nodeId-nic" `
        --subnet-id $logicalNetworkId
}

2.2 — Create the VMs

Azure CLI Run on: Mgmt Workstation

Each VM is created on its dedicated storage volume:

$nodeIds = @("01", "02", "03")

foreach ($nodeId in $nodeIds) {
    az stack-hci-vm create `
        --name "iic-sofs-$nodeId" `
        --resource-group $resourceGroup `
        --custom-location $customLocationID `
        --location $location `
        --image $imageName `
        --admin-username "sofs_admin" `
        --admin-password "<YourSecurePassword>" `
        --computer-name "iic-sofs-$nodeId" `
        --hardware-profile memory-mb="16384" processors="4" `
        --nics "iic-sofs-$nodeId-nic" `
        --storage-path-id $storagePathIds[$nodeId] `
        --authentication-type all `
        --enable-agent true
}

2.3 — Create and Attach Data Disks

Azure CLI Run on: Mgmt Workstation

Each VM needs 7 × 4,500 GB data disks for the S2D storage pool:

$nodeIds     = @("01", "02", "03")
$diskNumbers = 1..7

foreach ($nodeId in $nodeIds) {
    foreach ($diskNumber in $diskNumbers) {
        $diskName = "iic-sofs-$nodeId-data$('{0:D2}' -f $diskNumber)"
        az stack-hci-vm disk create `
            --resource-group $resourceGroup `
            --custom-location $customLocationID `
            --location $location `
            --name $diskName `
            --size-gb 4500 `
            --dynamic true `
            --storage-path-id $storagePathIds[$nodeId]
    }
}

# Attach the data disks to each VM
foreach ($nodeId in $nodeIds) {
    $diskNames = 1..7 | ForEach-Object { "iic-sofs-$nodeId-data$('{0:D2}' -f $_)" }
    az stack-hci-vm disk attach `
        --resource-group $resourceGroup `
        --vm-name "iic-sofs-$nodeId" `
        --disks $diskNames `
        --yes
}

2.4 — Verify VMs and Disks

Azure CLI Run on: Mgmt Workstation

az stack-hci-vm list --resource-group $resourceGroup -o table

$nodeIds = @("01", "02", "03")
foreach ($nodeId in $nodeIds) {
    Write-Host "=== iic-sofs-$nodeId ==="
    az stack-hci-vm show `
        --resource-group $resourceGroup `
        --name "iic-sofs-$nodeId" `
        --query "{name:name, dataDisks:properties.storageProfile.dataDisks}"
}

2.5 — Verify VM Placement

PowerShell Run on: Cluster Node

Run this on an Azure Local cluster node.

Get-ClusterGroup -Cluster "iic-clus01" |
    Where-Object { $_.Name -like "iic-sofs*" } |
    Select-Object Name, OwnerNode, State

If any VMs share a node, live migrate them:

Move-ClusterVirtualMachineRole -Name "iic-sofs-01" -Node "iic-01-n01" -Cluster "iic-clus01"
Move-ClusterVirtualMachineRole -Name "iic-sofs-02" -Node "iic-01-n02" -Cluster "iic-clus01"
Move-ClusterVirtualMachineRole -Name "iic-sofs-03" -Node "iic-01-n03" -Cluster "iic-clus01"

Phase 3: Configure Anti-Affinity Rules

Anti-affinity rules ensure the three SOFS VMs always run on different Azure Local physical nodes so a single host failure only takes out one S2D node.

3.1 — Create the Anti-Affinity Rule (Azure Local / Windows Server 2025)

PowerShell Run on: Cluster Node

Run this on an Azure Local cluster node (or a management machine with RSAT Failover Clustering tools installed).

New-ClusterAffinityRule -Name "SOFS-AntiAffinity" `
                        -RuleType DifferentNode `
                        -Cluster "iic-clus01"

Add-ClusterGroupToAffinityRule -Groups "iic-sofs-01","iic-sofs-02","iic-sofs-03" `
                               -Name "SOFS-AntiAffinity" `
                               -Cluster "iic-clus01"

Set-ClusterAffinityRule -Name "SOFS-AntiAffinity" `
                        -Enabled 1 `
                        -Cluster "iic-clus01"

# Verify
Get-ClusterAffinityRule -Name "SOFS-AntiAffinity" -Cluster "iic-clus01"

Expected output:

Name                RuleType       Groups                                    Enabled
----                -----------    -------                                   -------
SOFS-AntiAffinity   DifferentNode  {iic-sofs-01, iic-sofs-02, iic-sofs-03}  1

3.2 — Alternative: Legacy AntiAffinityClassNames Method

PowerShell Run on: Cluster Node

If the New-ClusterAffinityRule cmdlet is not available (older builds):

$AntiAffinity = New-Object System.Collections.Specialized.StringCollection
$AntiAffinity.Add("SOFSCluster")

(Get-ClusterGroup -Name "iic-sofs-01" -Cluster "iic-clus01").AntiAffinityClassNames = $AntiAffinity
(Get-ClusterGroup -Name "iic-sofs-02" -Cluster "iic-clus01").AntiAffinityClassNames = $AntiAffinity
(Get-ClusterGroup -Name "iic-sofs-03" -Cluster "iic-clus01").AntiAffinityClassNames = $AntiAffinity

# Verify
Get-ClusterGroup -Cluster "iic-clus01" |
    Where-Object { $_.Name -like "iic-sofs*" } |
    Format-List Name, AntiAffinityClassNames

Note: AntiAffinityClassNames is a soft rule — the cluster will try to keep VMs apart but will allow co-location if no other option exists. The New-ClusterAffinityRule with DifferentNode is the preferred approach on Azure Local 23H2+ / Windows Server 2025.


Phase 4: Post-Deployment VM Configuration

4.1 — Domain Join the SOFS VMs

PowerShell Run on: SOFS VM

Run this on each SOFS VM (via RDP, Azure Arc remote access, or Invoke-Command):

$domain = "improbability.cloud"
$ouPath = "OU=SOFS,OU=Azure Local,DC=improbability,DC=cloud"
$credential = Get-Credential -Message "Enter domain join credentials"

Add-Computer -DomainName $domain `
             -OUPath $ouPath `
             -Credential $credential `
             -Restart -Force

Tip: Script this across all three VMs from a management workstation:

$cred = Get-Credential -Message "Domain join credentials"
$nodes = "iic-sofs-01","iic-sofs-02","iic-sofs-03"
foreach ($node in $nodes) {
    Invoke-Command -ComputerName $node -ScriptBlock {
        Add-Computer -DomainName "improbability.cloud" `
                     -OUPath "OU=SOFS,OU=Azure Local,DC=improbability,DC=cloud" `
                     -Credential $using:cred `
                     -Restart -Force
    }
}

4.2 — Verify Domain Join and Network Configuration

PowerShell Run on: SOFS VM

After reboot, on each SOFS VM:

(Get-CimInstance Win32_ComputerSystem).Domain
hostname
Get-NetIPAddress -AddressFamily IPv4 | Where-Object { $_.IPAddress -notlike "169.*" }
Resolve-DnsName improbability.cloud

4.3 — IP Address Reference

| VM Name | IP Address | Role |
| --- | --- | --- |
| `iic-sofs-01` | 10.42.10.21 | S2D Node |
| `iic-sofs-02` | 10.42.10.22 | S2D Node |
| `iic-sofs-03` | 10.42.10.23 | S2D Node |

Phase 5: Install Required Roles and Features

PowerShell Run on: SOFS VM

Run this on all three SOFS VMs:

Install-WindowsFeature -Name Failover-Clustering,
                              FS-FileServer,
                              FS-Resource-Manager,
                              RSAT-Clustering-PowerShell,
                              RSAT-Clustering-Mgmt `
                       -IncludeManagementTools -Restart

FS-Resource-Manager is included for FSRM quota management on the profile volumes.

5.1 — Firewall Considerations

PowerShell Run on: SOFS VM

Windows Firewall rules for Failover Clustering, S2D, and SMB are automatically created when the features are installed. Verify:

Get-NetFirewallRule -Group "Failover Clusters" | Select-Object DisplayName, Enabled, Direction
Get-NetFirewallRule -Group "File and Printer Sharing" |
    Where-Object { $_.DisplayName -like "*SMB*" } |
    Select-Object DisplayName, Enabled, Direction

If your environment uses a hardened base image, refer to the port table in Section 10 — Network Design for required ports.
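If the built-in rules have been disabled by such a baseline, one recovery option is to re-enable them by display group. A minimal sketch — the `DisplayGroup` names assume the default en-US localization:

```powershell
# Re-enable the built-in firewall rule groups the cluster and SMB rely on
# (DisplayGroup names assume the default en-US localization)
Enable-NetFirewallRule -DisplayGroup "Failover Clusters"
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"
```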


Phase 6: Validate and Create the Guest Failover Cluster

6.1 — Validate the Cluster

PowerShell Run on: SOFS VM

Run from any one of the SOFS VMs:

Test-Cluster -Node "iic-sofs-01","iic-sofs-02","iic-sofs-03" `
             -Include "Inventory","Network","System Configuration"

Skip the "Storage" tests — we're using S2D inside VMs, not shared SAS/FC storage.

6.2 — Create the Failover Cluster

PowerShell Run on: SOFS VM

New-Cluster -Name "iic-sofs" `
            -Node "iic-sofs-01","iic-sofs-02","iic-sofs-03" `
            -StaticAddress "10.42.10.25" `
            -NoStorage

6.3 — Create the Cloud Witness Storage Account

Azure CLI Run on: Mgmt Workstation

az storage account create `
    --name "stsofswitnessiic01" `
    --resource-group $resourceGroup `
    --location $location `
    --sku Standard_LRS `
    --kind StorageV2 `
    --min-tls-version TLS1_2 `
    --allow-blob-public-access false

$witnessKey = (az storage account keys list `
    --account-name "stsofswitnessiic01" `
    --resource-group $resourceGroup `
    --query "[0].value" -o tsv)

6.4 — Configure the Cloud Witness

PowerShell Run on: SOFS VM

Set-ClusterQuorum -Cluster "iic-sofs" `
                  -CloudWitness `
                  -AccountName "stsofswitnessiic01" `
                  -AccessKey $witnessKey `
                  -Endpoint "core.windows.net"

Phase 7: Enable Storage Spaces Direct (S2D)

7.1 — Clean the Data Disks

PowerShell Run on: SOFS VM

On each SOFS VM, ensure the data disks are raw/uninitialized. Already-RAW disks are skipped, since Clear-Disk errors on uninitialized disks:

Get-Disk |
    Where-Object { $_.Number -ne 0 -and -not $_.IsBoot -and $_.PartitionStyle -ne "RAW" } |
    Clear-Disk -RemoveData -RemoveOEM -Confirm:$false

7.2 — Enable S2D

PowerShell Run on: SOFS VM

Run from any one of the SOFS VMs:

Enable-ClusterStorageSpacesDirect -Cluster "iic-sofs" -Confirm:$false

Important for nested/guest S2D: Since these are VMs, S2D treats all disks as capacity (flat — no caching tier). This is expected and correct.
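As a quick sanity check (not part of the official procedure), you can confirm the flat layout by verifying that every pool disk is a capacity device:

```powershell
# All 21 data disks should report Usage = Auto-Select; none should be Journal (cache)
Get-StoragePool -FriendlyName "S2D on iic-sofs" |
    Get-PhysicalDisk |
    Select-Object FriendlyName, MediaType, Usage
```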

7.3 — Apply Guest S2D Tuning (Registry)

PowerShell Run on: SOFS VM

On each SOFS VM, increase the Storage Spaces I/O timeout to tolerate virtualization latency (0x2710 = 10,000 ms, the value Microsoft recommends for guest clusters):

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\spaceport\Parameters" `
                 -Name "HwTimeout" `
                 -Value 0x00002710 `
                 -Type DWord

Get-StorageSubSystem Clus* |
    Set-StorageHealthSetting -Name "System.Storage.PhysicalDisk.AutoReplace.Enabled" -Value "False"

7.4 — Create the S2D Volumes

PowerShell Run on: SOFS VM

Run from any one of the SOFS VMs. Create three volumes with three-way mirror:

# Profiles — 7,500 GB
New-Volume -FriendlyName "Profiles" `
           -StoragePoolFriendlyName "S2D on iic-sofs" `
           -FileSystem CSVFS_ReFS `
           -ResiliencySettingName Mirror `
           -NumberOfDataCopies 3 `
           -Size 7500GB

# ODFC (Office Data File Containers) — 13,500 GB
New-Volume -FriendlyName "ODFC" `
           -StoragePoolFriendlyName "S2D on iic-sofs" `
           -FileSystem CSVFS_ReFS `
           -ResiliencySettingName Mirror `
           -NumberOfDataCopies 3 `
           -Size 13500GB

# AppData — 6,000 GB
New-Volume -FriendlyName "AppData" `
           -StoragePoolFriendlyName "S2D on iic-sofs" `
           -FileSystem CSVFS_ReFS `
           -ResiliencySettingName Mirror `
           -NumberOfDataCopies 3 `
           -Size 6000GB

-NumberOfDataCopies 3 creates a three-way mirror. On a 3-node S2D cluster this is the default, but specifying it explicitly makes the design intent clear. Total pool consumed: 81,000 GB (100% of allocatable pool).
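The arithmetic behind that figure can be sanity-checked directly (a back-of-the-envelope sketch that ignores S2D metadata overhead):

```powershell
$rawPool     = 3 * 7 * 4500               # 3 nodes × 7 disks × 4,500 GB = 94,500 GB
$reserve     = 3 * 4500                   # one disk of reserve capacity per node = 13,500 GB
$allocatable = $rawPool - $reserve        # 81,000 GB
$consumed    = (7500 + 13500 + 6000) * 3  # three-way mirror triples each volume's footprint
"{0:N0} GB allocatable, {1:N0} GB consumed" -f $allocatable, $consumed  # both are 81,000
```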

Verify:

Get-Volume -CimSession "iic-sofs" |
    Where-Object { $_.FileSystemLabel -match "Profiles|ODFC|AppData" }

Get-VirtualDisk -CimSession "iic-sofs"

Phase 8: Add the Scale-Out File Server Role

8.1 — Add the SOFS Cluster Role

PowerShell Run on: SOFS VM

Run from any one of the SOFS VMs:

Add-ClusterScaleOutFileServerRole -Name "iic-fslogix" -Cluster "iic-sofs"

AD and DNS permissions required: The cluster CNO (iic-sofs$) must have permission to create a Computer Object for the SOFS access point (iic-fslogix) in the target OU. If your AD environment restricts this, pre-stage the iic-fslogix Computer Object and grant the iic-sofs$ CNO full control over it.
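Pre-staging can be scripted with the ActiveDirectory RSAT module — a hedged sketch (run as a user with rights on the OU; the `dsacls` grant uses GA = generic all):

```powershell
# Pre-stage the SOFS access point object, disabled, in the target OU
New-ADComputer -Name "iic-fslogix" `
               -Path "OU=SOFS,OU=Azure Local,DC=improbability,DC=cloud" `
               -Enabled $false

# Grant the cluster CNO (iic-sofs$) full control over the pre-staged object
dsacls "CN=iic-fslogix,OU=SOFS,OU=Azure Local,DC=improbability,DC=cloud" /G 'IMPROBABLE\iic-sofs$:GA'
```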

Verify:

Get-ClusterGroup -Cluster "iic-sofs" | Where-Object { $_.GroupType -eq "ScaleOutFileServer" }

8.2 — Create the FSLogix SMB Shares

PowerShell Run on: SOFS VM

function New-SOFSShare {
    param([string]$VolumeName, [string]$ShareName)
    $csv = (Get-ClusterSharedVolume -Cluster "iic-sofs" |
        Where-Object { $_.SharedVolumeInfo.FriendlyVolumeName -match $VolumeName }
    ).SharedVolumeInfo.FriendlyVolumeName
    $path = "$csv\$ShareName"
    New-Item -Path $path -ItemType Directory -Force | Out-Null
    New-SmbShare -Name $ShareName `
                 -Path $path `
                 -ScopeName "iic-fslogix" `
                 -ContinuouslyAvailable $true `
                 -CachingMode None `
                 -FullAccess "IMPROBABLE\Domain Admins" `
                 -ChangeAccess "IMPROBABLE\Domain Users" `
                 -FolderEnumerationMode AccessBased
}

New-SOFSShare -VolumeName "Profiles" -ShareName "Profiles"
New-SOFSShare -VolumeName "ODFC"     -ShareName "ODFC"
New-SOFSShare -VolumeName "AppData"  -ShareName "AppData"

Critical settings:

  • -ContinuouslyAvailable $true — Required for SOFS. Enables transparent failover via SMB3 persistent handles.
  • -CachingMode None — Disables offline file caching (FSLogix manages its own caching).
  • -ScopeName "iic-fslogix" — Associates the share with the SOFS cluster role, not a single node.


Phase 9: Configure NTFS Permissions for FSLogix

PowerShell Run on: SOFS VM

Run from any one of the SOFS VMs:

function Set-FSLogixNTFS {
    param([string]$SharePath, [string]$Domain = "IMPROBABLE")

    $acl = Get-Acl $SharePath
    $acl.SetAccessRuleProtection($true, $false)

    # CREATOR OWNER — Modify (subfolders and files only)
    $acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule(
        "CREATOR OWNER", "Modify", "ContainerInherit,ObjectInherit", "InheritOnly", "Allow")))

    # Domain Users — Modify (this folder only) — allows creating their profile folder
    $acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule(
        "$Domain\Domain Users", "Modify", "None", "None", "Allow")))

    # Domain Admins — Full Control (this folder, subfolders, and files)
    $acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule(
        "$Domain\Domain Admins", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")))

    # SYSTEM — Full Control
    $acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule(
        "NT AUTHORITY\SYSTEM", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")))

    Set-Acl -Path $SharePath -AclObject $acl
}

Set-FSLogixNTFS -SharePath "C:\ClusterStorage\Profiles\Profiles"
Set-FSLogixNTFS -SharePath "C:\ClusterStorage\ODFC\ODFC"
Set-FSLogixNTFS -SharePath "C:\ClusterStorage\AppData\AppData"

Why this structure: Each user's FSLogix agent creates a subfolder (by SID) and a VHDX inside it. CREATOR OWNER ensures users can only modify their own profile folder. The "Modify, this folder only" entry for Domain Users lets the agent create the initial folder.
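A quick read-only spot-check of the resulting ACL on one of the share roots:

```powershell
(Get-Acl "C:\ClusterStorage\Profiles\Profiles").Access |
    Select-Object IdentityReference, FileSystemRights, InheritanceFlags, PropagationFlags |
    Format-Table -AutoSize
```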

9.1 — Configure FSRM Quotas

PowerShell Run on: SOFS VM

Apply per-user quotas on each share directory using auto-apply templates:

# Create quota templates
New-FsrmQuotaTemplate -Name "FSLogix-Profiles-5GB" `
    -Size 5GB `
    -SoftLimit `
    -Threshold (New-FsrmQuotaThreshold -Percentage 80 -Action (
        New-FsrmAction -Type Event -EventType Warning -Body "User [Source Io Owner] has reached 80% of their 5 GB profile quota on [Quota Path]."
    ))

New-FsrmQuotaTemplate -Name "FSLogix-ODFC-10GB" `
    -Size 10GB `
    -SoftLimit `
    -Threshold (New-FsrmQuotaThreshold -Percentage 80 -Action (
        New-FsrmAction -Type Event -EventType Warning -Body "User [Source Io Owner] has reached 80% of their 10 GB ODFC quota on [Quota Path]."
    ))

New-FsrmQuotaTemplate -Name "FSLogix-AppData-3GB" `
    -Size 3GB `
    -SoftLimit `
    -Threshold (New-FsrmQuotaThreshold -Percentage 80 -Action (
        New-FsrmAction -Type Event -EventType Warning -Body "User [Source Io Owner] has reached 80% of their 3 GB AppData quota on [Quota Path]."
    ))

# Apply auto-apply quotas (applies to each new user subfolder)
New-FsrmAutoQuota -Path "C:\ClusterStorage\Profiles\Profiles" -Template "FSLogix-Profiles-5GB"
New-FsrmAutoQuota -Path "C:\ClusterStorage\ODFC\ODFC" -Template "FSLogix-ODFC-10GB"
New-FsrmAutoQuota -Path "C:\ClusterStorage\AppData\AppData" -Template "FSLogix-AppData-3GB"

Phase 10: Antivirus Exclusions

10.1 — Antivirus Exclusions on SOFS Nodes

PowerShell Run on: SOFS VM

Run on each SOFS VM:

Add-MpPreference -ExclusionPath "C:\ClusterStorage"
Add-MpPreference -ExclusionExtension ".VHD"
Add-MpPreference -ExclusionExtension ".VHDX"
Add-MpPreference -ExclusionProcess "clussvc.exe"
# csvfs.sys is a kernel driver, not a process — it is covered by the C:\ClusterStorage path exclusion

Get-MpPreference | Select-Object ExclusionPath, ExclusionExtension, ExclusionProcess

10.2 — Antivirus Exclusions on AVD Session Hosts (When Deployed)

PowerShell Run on: Session Host

When deploying AVD session hosts, configure FSLogix exclusions to prevent profile corruption:

Add-MpPreference -ExclusionProcess "frxsvc.exe"
# The FSLogix drivers are files, not processes — exclude them by path
Add-MpPreference -ExclusionPath "$env:ProgramFiles\FSLogix\Apps\frxdrv.sys"
Add-MpPreference -ExclusionPath "$env:ProgramFiles\FSLogix\Apps\frxccd.sys"
Add-MpPreference -ExclusionPath "$env:ProgramFiles\FSLogix\Apps"
Add-MpPreference -ExclusionPath "$env:TEMP\intlMountPoints"
Add-MpPreference -ExclusionExtension ".VHD"
Add-MpPreference -ExclusionExtension ".VHDX"

Phase 11: Validation and Testing

11.1 — Verify SOFS Access

PowerShell Run on: Mgmt Workstation

From any machine on the compute network:

"Profiles","ODFC","AppData" | ForEach-Object {
    [PSCustomObject]@{ Share = $_; Accessible = (Test-Path "\\iic-fslogix\$_") }
}

Get-SmbShare -CimSession "iic-sofs-01" -Name "Profiles","ODFC","AppData" |
    Select-Object Name, ScopeName, ContinuouslyAvailable, CachingMode

11.2 — Test Failover

PowerShell Run on: Mgmt Workstation

  1. Log into an AVD session so an FSLogix profile is mounted.
  2. Identify which SOFS node currently owns the connection:

Get-SmbOpenFile -CimSession "iic-sofs-01","iic-sofs-02","iic-sofs-03" |
    Where-Object { $_.Path -like "*Profiles*" -or $_.Path -like "*ODFC*" -or $_.Path -like "*AppData*" }

  3. Drain the owning SOFS VM's host node to simulate failure:

Suspend-ClusterNode -Name "iic-01-n01" -Cluster "iic-clus01" -Drain

  4. Verify the user's session remains connected (SMB3 transparent failover handles the reconnection).
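Once the test completes, resume the drained host node (name matches the example above) so the cluster returns to full strength:

```powershell
Resume-ClusterNode -Name "iic-01-n01" -Cluster "iic-clus01" -Failback Immediate
```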

11.3 — Verify Anti-Affinity

PowerShell Run on: Mgmt Workstation

Get-ClusterGroup -Cluster "iic-clus01" |
    Where-Object { $_.Name -like "iic-sofs*" } |
    Select-Object Name, OwnerNode

Get-ClusterAffinityRule -Name "SOFS-AntiAffinity" -Cluster "iic-clus01"

11.4 — Verify S2D Health

PowerShell Run on: SOFS VM

Get-StorageSubSystem -CimSession "iic-sofs" |
    Get-StorageHealthReport

Get-VirtualDisk -CimSession "iic-sofs" |
    Select-Object FriendlyName, HealthStatus, OperationalStatus, ResiliencySettingName, NumberOfDataCopies

SOFS Deployment Phases


Part IV — Reference

IP and Name Reference

| Component | Name / Value | Purpose |
| --- | --- | --- |
| **Azure Local cluster** | `iic-clus01` | Physical cluster |
| **Physical nodes** | `iic-01-n01` through `iic-01-n06` | Azure Local hosts |
| **SOFS CSV 1** | `csv-iic-clus01-m3-sofs-01` (31,627 GB, 3-way) | Hosts iic-sofs-01 (94,881 GB pool) |
| **SOFS CSV 2** | `csv-iic-clus01-m3-sofs-02` (31,627 GB, 3-way) | Hosts iic-sofs-02 (94,881 GB pool) |
| **SOFS CSV 3** | `csv-iic-clus01-m3-sofs-03` (31,627 GB, 3-way) | Hosts iic-sofs-03 (94,881 GB pool) |
| **Workload CSV 1** | `csv-iic-clus01-m2-avd-01` (2,000 GB, 2-way) | AVD session hosts (4,000 GB pool) |
| **Workload CSV 2** | `csv-iic-clus01-m2-avd-02` (2,000 GB, 2-way) | AVD session hosts (4,000 GB pool) |
| **SOFS VM 1** | `iic-sofs-01` / 10.42.10.21 | S2D node (127 GB OS + 7 × 4,500 GB data) |
| **SOFS VM 2** | `iic-sofs-02` / 10.42.10.22 | S2D node (127 GB OS + 7 × 4,500 GB data) |
| **SOFS VM 3** | `iic-sofs-03` / 10.42.10.23 | S2D node (127 GB OS + 7 × 4,500 GB data) |
| **Guest cluster CNO** | `iic-sofs` / 10.42.10.25 | Failover cluster name |
| **SOFS access point** | `iic-fslogix` | Client access (`\\iic-fslogix\`) |
| **Profiles volume** | `Profiles` (7,500 GB, 3-way) | `\\iic-fslogix\Profiles` |
| **ODFC volume** | `ODFC` (13,500 GB, 3-way) | `\\iic-fslogix\ODFC` |
| **AppData volume** | `AppData` (6,000 GB, 3-way) | `\\iic-fslogix\AppData` |
| **Cloud witness** | `stsofswitnessiic01` | Azure Storage Account quorum witness |
| **Anti-affinity rule** | `SOFS-AntiAffinity` | Keeps VMs on separate nodes |
| **Resource group** | `rg-iic-sofs-azl-eus-01` | Azure resource group |
| **AD domain** | `improbability.cloud` / `IMPROBABLE` | Single domain for all components |
| **AD OU** | `OU=SOFS,OU=Azure Local,DC=improbability,DC=cloud` | SOFS computer objects |

Operations and Maintenance

Patching Procedure

  1. Drain one SOFS VM at a time using Suspend-ClusterNode -Drain
  2. Patch and reboot the drained VM
  3. Wait for S2D resync to complete (minutes on all-NVMe)
  4. Repeat for the next VM

Never patch two SOFS VMs simultaneously — with two of three nodes down, the guest cluster loses quorum and profile storage goes offline. While one VM is drained, the remaining two nodes keep every volume online (two of three copies), but the cluster cannot tolerate a further failure until the drained node returns and resync completes.
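The steps above can be sketched as a loop — illustrative only; in practice step 2 (patching) is driven by your update tooling rather than inline:

```powershell
foreach ($node in "iic-sofs-01","iic-sofs-02","iic-sofs-03") {
    Suspend-ClusterNode -Name $node -Cluster "iic-sofs" -Drain -Wait

    # ... patch and reboot $node here (Windows Update, ConfigMgr, etc.) ...

    Resume-ClusterNode -Name $node -Cluster "iic-sofs" -Failback Immediate

    # Wait for S2D repair jobs to finish before draining the next node
    while (Get-StorageJob -CimSession $node | Where-Object { $_.JobState -eq "Running" }) {
        Start-Sleep -Seconds 30
    }
}
```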

Patching Sequence — Rolling Update

Monitoring

| What to Monitor | Where | Alert Threshold |
| --- | --- | --- |
| S2D pool health | `Get-StoragePool` on SOFS cluster | Any status other than Healthy |
| Volume capacity | PerfMon per CSV volume | 80% consumed |
| FSRM quota events | Event Log on SOFS nodes | Warning (80%) or hard limit hit |
| SMB session count | `Get-SmbSession` | Unusual spike or drop |
| FSLogix mount failures | FSLogix event log on session hosts | Event ID 25 (mount failure) |
| S2D rebuild progress | `Get-StorageJob` | Active rebuild jobs |
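A minimal sweep covering the first two rows (pool health and volume capacity) might look like this — a sketch to wire into your monitoring tooling:

```powershell
# Pool health — anything other than Healthy warrants investigation
Get-StoragePool -CimSession "iic-sofs" -IsPrimordial $false |
    Select-Object FriendlyName, HealthStatus

# Volume capacity — flag anything above 80% consumed
Get-Volume -CimSession "iic-sofs" |
    Where-Object { $_.FileSystemLabel -match "Profiles|ODFC|AppData" } |
    Select-Object FileSystemLabel,
        @{ N = "PercentUsed"; E = { [math]::Round(100 * ($_.Size - $_.SizeRemaining) / $_.Size, 1) } }
```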

Failure Scenarios

| Failure | Impact | Recovery |
| --- | --- | --- |
| 1 SOFS VM down | S2D continues on the 2 remaining nodes (2 of 3 copies online); zero interruption to AVD | VM restarts or is live-migrated |
| 1 Azure Local node down | Anti-affinity ensures only 1 SOFS VM is affected — same as above | Node recovers, S2D resyncs |
| 1 Azure Local CSV volume offline | Only the SOFS VM on that volume is affected — same as above | Volume recovers, VM restarts |
| 2 SOFS VMs down simultaneously | **Profile storage offline** — FSLogix fails to mount | Restore VMs; Cloud Cache provides continuity if configured |
| 1 SOFS VM + 1 data disk on another VM | Affected volume segments degrade to a single copy | Replace disk, wait for rebuild |

Failure Scenarios — Dual-Layer Resiliency


Important Notes and Considerations

Licensing: S2D requires Datacenter edition in the guest; this design uses Windows Server 2025 Datacenter: Azure Edition Core (Gen2). Guest VM licensing is not always included with Azure Local — see Prerequisites — Licensing.

Supportability: Microsoft's official guidance is that S2D in guest VMs is supported on Windows Server (not Azure Local OS as the guest). Since IIC is running Windows Server 2025 Datacenter: Azure Edition Core (Gen2) inside the VMs on an Azure Local host, this is a supported configuration. Do not mix the Azure Local cluster's own S2D storage volumes with SOFS shares on the same cluster — the guest cluster approach keeps these cleanly separated.

Network: All SOFS VMs should be on the same compute network/VLAN as the AVD session hosts for optimal latency. If IIC has a dedicated storage VLAN, a second NIC could be added to each SOFS VM for intra-cluster (S2D replication) traffic, but for most deployments a single compute network NIC is sufficient.

Capacity planning: This design provisions 40,500 GB usable for FSLogix profiles across three volumes at 3-way mirror, consuming the entire 81,000 GB allocatable guest pool. Data disks are dynamically provisioned, so day-one consumption will be much lower than the ceiling — it grows as profiles are written. Monitor utilization and expand Azure Local host volumes and VM data disks if growth exceeds projections.

Backup and DR: SOFS with continuously available shares requires special backup considerations. Standard VSS-based backup tools may not work directly against the SOFS share. Consider FSLogix Cloud Cache (see Appendix A) or a backup agent inside the guest cluster that can back up the FSLogix VHDX files on a schedule during off-hours when profiles are not mounted.


Considerations for AVD Deployment

This section is not part of the SOFS deployment itself. These are items to plan for when deploying AVD session hosts that will consume the SOFS shares.

Network Placement

AVD session hosts should be on the same compute network/VLAN as the SOFS VMs. Same-subnet placement eliminates routing hops and provides the best login/logoff performance.

Profile Sizing

Plan FSLogix max profile size (SizeInMBs) based on user workload. The default 30 GB is generous for most office workers. If users have heavy Outlook OST files or OneDrive cache, you may need more. Monitor actual usage after deployment and adjust.
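For example, raising the limit to 50 GB (51,200 MB) on a session host — values are illustrative; the key path matches the registry block in Appendix A:

```powershell
# Example only — adjust to your sizing policy (default 30 GB = 30720)
New-ItemProperty -Path "HKLM:\SOFTWARE\FSLogix\Profiles" `
                 -Name "SizeInMBs" `
                 -Value 51200 `
                 -PropertyType DWord `
                 -Force
```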

GPO Path

Computer Configuration → Administrative Templates → FSLogix → Profile Containers

Plan your session host identity model before deploying. The NTFS permissions (Phase 9) and SMB share permissions (Phase 8) reference AD domain groups (IMPROBABLE\Domain Users, IMPROBABLE\Domain Admins). If your AVD users are in a different OU or security group, adjust those references accordingly.


Appendix A — Cloud Cache for DR to Azure (Optional)

FSLogix Cloud Cache provides near-real-time replication of profile data to a secondary storage provider — typically Azure Blob Storage or Azure Files — without requiring separate backup infrastructure.

How It Works

Cloud Cache replaces VHDLocations with CCDLocations. Instead of writing directly to the SOFS share, the FSLogix agent writes to a local cache on the session host first, then asynchronously flushes to all configured providers:

  1. Primary provider: SOFS (\\iic-fslogix\Profiles) — same SMB share as the non-Cloud Cache configuration
  2. Secondary provider: Azure Blob Storage — provides DR copy in Azure

If the SOFS becomes temporarily unavailable, Cloud Cache serves from the local cache. The user continues working with no interruption. At sign-out, Cloud Cache ensures all providers are synchronized before completing.

CCDLocations Registry Configuration

Configure on each AVD session host (or via GPO):

HKLM\SOFTWARE\FSLogix\Profiles
    Enabled                       REG_DWORD    1
    CCDLocations                  REG_SZ       type=smb,name="SOFS",connectionString=\\iic-fslogix\Profiles;type=azure,name="AzureBlob",connectionString="|fslogix/<KEY-NAME>|"
    ClearCacheOnLogoff            REG_DWORD    1
    FlipFlopProfileDirectoryName  REG_DWORD    1

For the three-share layout, configure CCDLocations separately for each:

  • Profiles: type=smb,name="SOFS-Profiles",connectionString=\\iic-fslogix\Profiles;type=azure,...
  • ODFC: Configure under HKLM\SOFTWARE\Policies\FSLogix\ODFC with the ODFC share and a separate Azure container
  • AppData: Configure AppData redirection separately if using Cloud Cache

When to Use Cloud Cache

| Scenario | Recommendation |
| --- | --- |
| DR requirement for profile data | **Use Cloud Cache** — provides automatic Azure replication |
| Single-site, no DR requirement | **SOFS alone is sufficient** — simpler, fewer moving parts |
| Multi-site AVD with shared profiles | **Use Cloud Cache** — enables cross-site profile access |
| Regulatory requirement for off-site backup | **Use Cloud Cache** — Azure Blob is the off-site copy |

Considerations

  • Cloud Cache adds write amplification — every profile write goes to the local cache and all providers
  • Session host local disk must have sufficient free space for the cache (plan for at least 50% of average profile size per concurrent user)
  • Azure Blob Storage costs accrue based on data stored and write transactions
  • Cloud Cache supports up to 4 providers in any combination of SMB and Azure Blob

Automation Scripts

The azurelocal-sofs-fslogix repository includes automation tooling for every phase of the SOFS deployment.

Central Configuration

| File | Description |
| --- | --- |
| [`config/variables.example.yml`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/blob/main/config/variables.example.yml) | Example configuration — copy to `config/variables.yml` and fill in your values. Key Vault URI references are used for secrets. |

Phase 1 — Azure Resource Provisioning

| Tool | Path | Description |
| --- | --- | --- |
| **Terraform** | [`src/terraform/`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/tree/main/src/terraform) | Full IaC using `azapi` + `azurerm` providers. Creates resource group, Key Vault, cloud witness storage, NICs, Arc VMs, and data disks. Auto-generates Ansible inventory. |
| **Bicep** | [`src/bicep/`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/tree/main/src/bicep) | Subscription-scope Bicep deployment with modules for VMs, NICs, disks, and cloud witness. |
| **ARM** | [`src/arm/`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/tree/main/src/arm) | Legacy ARM JSON templates — maintained for environments that require JSON. **Bicep is recommended.** |
| **PowerShell** | [`src/powershell/Deploy-SOFS-Azure.ps1`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/blob/main/src/powershell/Deploy-SOFS-Azure.ps1) | Azure CLI wrapper script. Use when IaC is not required. |
| **Ansible** | [`src/ansible/playbooks/deploy-azure-resources.yml`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/blob/main/src/ansible/playbooks/deploy-azure-resources.yml) | Runs on `localhost` using Azure CLI. Creates the same Azure resources. |

Phases 3–11 — Guest Cluster Configuration

| Tool | Path | Phases | Description |
| --- | --- | --- | --- |
| **PowerShell** | [`src/powershell/Configure-SOFS-Cluster.ps1`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/blob/main/src/powershell/Configure-SOFS-Cluster.ps1) | 3–11 | Comprehensive WinRM/PSRemoting-based script. Idempotent — safe to re-run. |
| **Ansible** | [`src/ansible/playbooks/configure-sofs-cluster.yml`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/blob/main/src/ansible/playbooks/configure-sofs-cluster.yml) | 5–11 | WinRM+Kerberos playbook. Does **not** handle anti-affinity (Phases 3–4). |

Supplemental Scripts

| Script | Path | Description |
| --- | --- | --- |
| `New-SOFSDeployment.ps1` | [`src/powershell/`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/blob/main/src/powershell/New-SOFSDeployment.ps1) | SOFS role + SMB share creation (Phases 8–9) |
| `Set-FSLogixShare.ps1` | [`src/powershell/`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/blob/main/src/powershell/Set-FSLogixShare.ps1) | NTFS/SMB permissions + FSLogix registry keys (Phases 9–10) |
| `configure-fslogix.yml` | [`src/ansible/playbooks/`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/blob/main/src/ansible/playbooks/configure-fslogix.yml) | FSLogix registry settings on AVD session hosts |
| `Test-SOFSDeployment.ps1` | [`tests/`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/blob/main/tests/Test-SOFSDeployment.ps1) | Full post-deployment validation |

CI/CD Pipeline Examples

| Directory | Description |
| --- | --- |
| [`examples/pipelines/azure-devops/`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/tree/main/examples/pipelines/azure-devops) | Azure DevOps YAML pipeline definitions |
| [`examples/pipelines/github-actions/`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/tree/main/examples/pipelines/github-actions) | GitHub Actions workflow files |
| [`examples/pipelines/gitlab/`](https://github.com/AzureLocal/azurelocal-sofs-fslogix/tree/main/examples/pipelines/gitlab) | GitLab CI/CD pipeline definitions |

Terraform and Bicep handle only Phase 1 (Azure resource provisioning). Guest OS cluster configuration (Phases 3–11) requires the PowerShell script or Ansible playbook — infrastructure-as-code tools cannot configure Windows Failover Clustering or S2D inside the guest OS.


Azure Local

Storage Spaces Direct

Scale-Out File Server

Windows Server Failover Clustering

FSLogix

Azure Virtual Desktop

Azure Cloud Adoption Framework


**SOFS Repository** [AzureLocal/azurelocal-sofs-fslogix](https://github.com/AzureLocal/azurelocal-sofs-fslogix)
**AVD Repository** [AzureLocal/azurelocal-avd](https://github.com/AzureLocal/azurelocal-avd)
**Toolkit Repository** [AzureLocal/azurelocal-toolkit](https://github.com/AzureLocal/azurelocal-toolkit)
**Website** [azurelocal.cloud](https://azurelocal.cloud)
**Path** `docs/reference/sofs-design-and-deployment-guide.md`
**Maintained by** Hybrid Cloud Solutions LLC