EMC Simple Support Matrix EMC Unified VNX Series OCTOBER 2014 P/N 300-012-396 REV 41
© 2011 - 2014 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com). The information in this ESSM is a configuration summary. For more detailed information, refer to https://elabnavigator.emc.com.
Table 1
EMC® E-Lab™ qualified specific versions or ranges for EMC Unified VNX® Series (page 1 of 3)

Columns: Platform Support; PowerPath® (Target Rev / Allowed); Native MPIO/LVM; Symantec/Veritas DMP/VxVM/VxDMP (Target Rev / Allowed); Virtual Provisioning™ Host Reclamation; Symantec/Veritas SFRAC/VxCFS/SF RAC; Oracle; HA Cluster.

Platform rows on this page:
• AIX 5300-09-01-0847 g
• AIX 6100-02-01-0847 g, i
• AIX 7100-00-02-1041 g, i
• (AIX) VIOS 2.1.1 or later o
• IBM i 7.1 p
• HP-UX 11iv1 (11.11) g
• HP-UX 11iv2 (11.23) g
• HP-UX 11iv3 (11.31) g
• Linux: AX 2.0 GA - 2.0 SP3
• Linux: AX 3 b
• Linux: AX 4
• Linux: Citrix XenServer 6.x m
• Linux: OL 4 U4 - 4 U7
• Linux: OL 5.0, 5.1 - 5.2 b
• Linux: OL 5.3 - 5.5
• Linux: OL 5.6
• Linux: OL 5.7 - 5.10
• Linux: OL 5.x UEK R1; R1 U1; R2 U1 s
• Linux: OL 5.x UEK R1 U2 & U3; R2 U2, U3, & U4 s
• Linux: OL 6.x UEK R1; R1 U3; R2 U1, U2, U3, & U4 s
• Linux: OL 6.x UEK R1 U1 and U2; R2 s
• Linux: OL 6.x UEK R3 s
• Linux: OVM 2.2 and later
• Linux: RHEL 4.0 U2 - U4 n, b, e; 4.5 - 4.9 n, b, e
• Linux: RHEL 5.0, 5.1 b, i
• Linux: RHEL 5.2 b, i
• Linux: RHEL 5.3 - 5.4 i
• Linux: RHEL 5.5 - 5.6 i
• Linux: RHEL 5.7, 5.8, 5.9 i
• Linux: RHEL 5.10 i

Representative qualified ranges on this page include PowerPath 5.1 through 5.7 SP5 streams (AIX, HP-UX, Linux), Symantec/Veritas SF 4.1 through 6.1.1, Oracle 10g through 12c R1, PowerHA (HACMP) 5.4 - 7.1, HP Serviceguard (SG) 11.16 - 11.20, and Pvlinks d on HP-UX. Refer to https://elabnavigator.emc.com for the qualified version ranges in each cell of this matrix.
Table 1
EMC® E-Lab™ qualified specific versions or ranges for EMC Unified VNX® Series (page 2 of 3)

Platform rows on this page:
• Linux: RHEL 6.0 i
• Linux: RHEL 6.1, 6.2 i
• Linux: RHEL 6.3, 6.4 i
• Linux: RHEL 6.5 i
• Linux: RHEL 7.0
• Linux: SLES 9 SP3 - SLES 9 SP4 n
• Linux: SLES 10 SP2 - SLES 10 SP3 b, n
• Linux: SLES 10 SP4 b, n
• Linux: SLES 11 k
• Linux: SLES 11 SP1 - SLES 11 SP2 i
• Linux: SLES 11 SP3 i
• Microsoft Windows 2003 (x86 and x64) SP2 and R2 SP2
• Microsoft Windows 2008 (x86) SP1 and SP2
• Microsoft Windows 2008 (x64) SP1 and SP2 i
• Microsoft Windows 2008 R2 and R2 SP1 i
• Microsoft Windows Server 2012 i
• Microsoft Windows Server 2012 R2
• OpenVMS Alpha v7.3-2, 8.2, 8.3, 8.4 g
• OpenVMS Integrity v8.2-1, 8.3, 8.3-1H1, 8.4 g
• Solaris 8 SPARC e
• Solaris 9 SPARC
• Solaris 10 SPARC / Solaris 10 x86
• Solaris 11 SPARC / Solaris 11 x86
• Solaris 11.1 SPARC / Solaris 11.1 x86
• Solaris 11.2 SPARC / Solaris 11.2 x86

For Solaris 8, refer to the Solaris Letters of Support at https://elabnavigator.emc.com. Native multipathing on this page includes MPIO (Windows), MPxIO (Solaris), and native OpenVMS multipathing. Representative qualified ranges include PowerPath 4.1 through 6.0-stream releases, Symantec/Veritas SF 4.1 MP2 through 6.1.1, Oracle 10g through 12c R1, MSCS/Failover Cluster on Windows, and Sun Cluster (SC) 3.1 - 4.2. Refer to https://elabnavigator.emc.com for the qualified version ranges in each cell of this matrix.
Table 1
EMC® E-Lab™ qualified specific versions or ranges for EMC Unified VNX® Series (page 3 of 3)

• VMware ESX/ESXi 4.0 (vSphere): PowerPath/VE 5.4 - 5.4 SP1
• VMware ESX/ESXi 4.1 (vSphere) i: PowerPath/VE 5.4 SP2
• VMware ESXi 5.0 (vSphere) i, j: PowerPath/VE 5.7 l, 5.7 P02 l, 5.8, 5.9 SP1 - 5.9 SP1 P02; VxDMP 6.0, 6.0.1
• VMware ESXi 5.1 (vSphere) i, j: PowerPath/VE 5.7 P02 l, 5.8, 5.9, 5.9 SP1 - 5.9 SP1 P02; VxDMP 6.0.1, 6.1
• VMware ESXi 5.5 (vSphere) i, j: PowerPath/VE 5.9, 5.9 SP1 - 5.9 SP1 P02; VxDMP 6.1

Native multipathing for the ESX/ESXi rows is VMware NMP; host MPIO/LVM is not applicable (N/A). Refer to https://elabnavigator.emc.com for the qualified version ranges in each cell of this matrix.
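The PowerPath ranges in Table 1 can be checked directly on a host before comparing against the matrix. As a minimal sketch (assuming PowerPath is installed and the powermt utility is on the path):

    # Report the installed PowerPath version
    powermt version

    # Show each managed LUN with its array class, load-balancing policy, and path states
    powermt display dev=all

For the PP/VE rows on this page, the equivalent queries are issued from a management host with the rpowermt remote CLI rather than on the ESXi host itself.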
Target Rev = EMC recommended versions for new and existing installations. These versions contain the latest features and provide the highest level of reliability. Allowed = Older EMC-approved versions that are still functional, or newer/interim versions not yet targeted by EMC. These versions may not contain the latest fixes and features and may require upgrading to resolve issues or take advantage of newer product features. Versions are suitable for sustaining existing environments but should not be targeted for new installations.
Table 1 Legend, Footnotes, and Components

Legend: Blank = Not supported; Y = Supported; NA = Not applicable

a. MPIO with AX is supported at AX 2.0 SP1 and later. MPIO with RHEL is supported at RHEL 4.0 U3 and later.
b. 8 Gb/s initiator support begins with AX 2.0 SP3, AX 3.0 SP2, OL 4.0 U7, OL 5.2, RHEL 4.7, RHEL 5.2, and SLES 10 SP2.
c. These PowerPath versions have reached End-of-Life (EOL). For extended support, contact PowerPath Engineering or https://support.emc.com/products/PowerPath.
d. Pvlinks is an HP-UX alternate pathing solution that does not support load balancing.
e. Refer to the appropriate Letter of Support on the E-Lab Navigator, Extended Support tab, End-of-Life Configurations heading, for information on end-of-life configurations.
f. Refer to the EMC Virtual Provisioning table (https://elabnavigator.emc.com) for all VP (without host reclamation) qualified configurations.
g. Only server vendor-sourced HBAs are supported.
h. Refer to the Symantec Release Notes (www.symantec.com) for supported VxVM versions, which depend on the Veritas Cluster version.
i. For EMC XtremCache™ software operating system interoperability information, refer to the EMC Support Matrix (ESM), located at https://elabnavigator.emc.com.
j. Refer to the VMware vSphere 5 ESSM at https://elabnavigator.emc.com, Simple Support Matrix tab, Platform Solutions, for details.
k. Version and kernel specific.
l. Before upgrading to vSphere 5.1, refer to EMC Knowledgebase articles emc302625 and emc305937.
m. All guest operating systems and features that Citrix supports, such as XenMotion, XenHA, and XenDesktop, are supported.
n. This operating system is currently EOL and all corresponding configurations are frozen. It may no longer be publicly supported by the operating system vendor, and an extended support contract from the operating system vendor is required for the configurations listed in this ESSM to be supported by EMC. EMC highly recommends upgrading the server to an ESSM-supported configuration or installing it as a virtual machine on an EMC-approved virtualization hypervisor.
o. Consult the Virtualization Hosting Server (Parent) Solutions table in E-Lab Navigator at https://elabnavigator.emc.com for your specific VIOS version prior to implementation.
p. VNX8000, VNX7600, VNX5800, VNX5600, VNX5400, and VNX5200 only support base configurations, including FAST Cache, FAST VP, and Compression. Supported only with VIOS configurations.
q. EMC recommends changing the Linux MPIO kernel parameter no_path_retry to 15 (the default is 5).
r. This PowerPath version has reached End-of-Life (EOL). For extended support, contact PowerPath Engineering or https://support.emc.com/products/PowerPath.
s. For OL with the Red Hat stock kernel, refer to the equivalent Red Hat RHEL configuration for supported configurations (for example, OL 5.0 U8 = RHEL 5.8).
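Footnote q refers to the Linux native MPIO (dm-multipath) setting no_path_retry. A minimal sketch of where this setting lives, assuming the stock device-mapper-multipath package and a VNX presented with its usual DGC SCSI vendor string (the fragment below is illustrative, not a complete EMC-supplied stanza):

    # /etc/multipath.conf (fragment)
    devices {
        device {
            vendor        "DGC"    # VNX/CLARiiON arrays report vendor DGC
            product       ".*"
            no_path_retry 15       # per footnote q; default is 5
        }
    }

Reload the multipathd configuration after editing the file for the change to take effect.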
Host Servers/Adapters
All Fibre Channel HBAs from Emulex, QLogic (2 Gb/s or greater), and Brocade (8 Gb/s or greater), including vendor-rebranded versions, are supported. Solaris requires an RPQ when using any Brocade HBA. All 10 Gb/s Emulex, QLogic, Brocade, Cisco, Broadcom, and Intel CNAs (10 GbE NICs that support FCoE) are supported, as are all 1 Gb/s or 10 Gb/s NICs for iSCSI connectivity, as supported by the server/OS vendor. Any EMC- or vendor-supplied, OS-approved driver/firmware/BIOS is allowed; EMC recommends using the latest driver/firmware/BIOS versions listed in E-Lab Navigator. Adapters must support a link speed compatible with the switch (fabric) or array (direct connect). All host systems are supported where the host and OS vendors allow the host/OS/adapter combination. If systems meet the criteria in this ESSM, no further ELN connectivity validation and no RPQ are required.

Switches
All FC SAN switches (2 Gb/s or greater) from EMC Connectrix®, Brocade, Cisco, and QLogic are supported for host and storage connectivity. Refer to the Switched Fabric Topology Parameters table on the E-Lab Navigator (ELN) at https://elabnavigator.emc.com for supported switch fabric interoperability firmware and settings.

Operating environment
• R30 — VNX 5100/5300/5500/5700/7500; EFD Cache, SATA, EFD support added. Minimum FLARE revision 04.30.000.5.xxx.
• R30++ — VNX 5100/5300/5500/5700/7500; FCoE support added. Minimum FLARE revision 04.30.000.5.5xx.
• R31 — VNX 5100/5300/5500/5700/7500; Unisphere/Navisphere Manager v1.1; minimum FLARE revision 05.31.
• R32 — VNX 5100/5300/5500/5700/7500; Unisphere/Navisphere Manager v1.2; minimum FLARE revision 05.32.
• R33 (only) — VNX 5200/5400/5600/5800/7600/8000; Unisphere/Navisphere Manager v1.3; minimum FLARE revision 05.33.

Extended Support
For specialized and end-of-life configurations, refer to https://elabnavigator.emc.com/eln/extendedSupport.
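The operating-environment level of an attached array can be confirmed from a management host before applying the R30 - R33 rules above. A hedged sketch, assuming Navisphere Secure CLI is installed and 10.1.1.10 is a hypothetical storage processor management address:

    # The Revision field of the output reports the running FLARE/OE level
    # (for example, 05.33.000.5.xxx maps to R33 above)
    naviseccli -h 10.1.1.10 -user admin -password <password> -scope 0 getagent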
Table 2
VNX 5200/5400/5600/5800/7600/8000 Limitations (page 1 of 5)

Storage Pool capacity (VNX8000 / VNX7600 / VNX5800 / VNX5600 / VNX5400 / VNX5200)
• Maximum number of storage pools: 60 / 40 / 40 / 20 / 15 / 15
• Maximum number of disks in a storage pool: 996 / 996 / 746 / 496 / 246 / 121
• Maximum number of usable disks for all storage pools: 996 / 996 / 746 / 496 / 246 / 121
• Maximum number of disks that can be added to a pool at a time: 180 / 120 / 120 / 120 / 80 / 80
• Maximum number of pool LUNs per storage pool: 4000 / 3000 / 2000 / 1000 / 1000 / 1000

Pool LUN limits
• Minimum user capacity: 1 block (all models)
• Maximum user capacity: 256 TB (all models)
• Maximum number of pool LUNs per storage system: 4000 / 3000 / 2000 / 1000 / 1000 / 1000

SnapView supported configurations (VNX8000 / VNX7600 / VNX5800 / VNX5600 / VNX5400 / VNX5200)
• Clones per storage system: 2048 / 2048 / 2048 / 1024 / 256 / 256
• Clones per source LUN: 8 (all models)
• Clones per consistent fracture: 64 / 64 / 32 / 32 / 32 / 32
• Clone groups per storage system: 1024 / 1024 / 1024 / 256 / 128 / 128
• Clone private LUNs a, b per storage system (required): 2 (all models)
• SnapView Snaps c per storage system: 2048 / 1024 / 512 / 512 / 256 / 256
• Snapshots per source LUN: 8 (all models)
• SnapView sessions per source LUN: 8 (all models)
• Reserved LUNs d per storage system: 512 / 512 / 256 / 256 / 128 / 128
• Source LUNs with Rollback active: 300 (all models)

SAN Copy storage system limits
• Concurrent Executing Sessions: 32 / 32 / 16 / 8 / 8 / 8
• Destination LUNs per Session: 100 / 100 / 50 / 50 / 50 / 50
• Incremental Source LUNs e: 512 / 512 / 256 / 256 / 256 / 256
• Defined Incremental Sessions f: 512 (all models)
• Incremental Sessions per Source LUN: 8 (all models)

VNX Snapshots Configuration Guidelines
• Snapshots per storage system: 32000 / 24000 / 16000 / 8000 / 8000 / 8000
• Snapshots per Primary LUN: 256 (all models)
• Snapshots per consistency group: 64 (all models)
• Snapshot Mount Points: 4000 / 3000 / 2000 / 1000 / 1000 / 1000
• Concurrent restore operations: 512 / 512 / 256 / 128 / 128 / 128

MirrorView
• Maximum number of mirrors: 256 (MVA) / 1024 (MVS) on VNX8000; 256 (MVA) / 512 (MVS) on VNX7600; 256 (MVA) / 256 (MVS) on VNX5800; 256 (MVA) / 128 (MVS) on VNX5600 and VNX5400; 128 (MVA) / 128 (MVS) on VNX5200
• Maximum number of consistency groups: 64 (all models)
• Maximum number of members per consistency group: 64 (MVA) / 64 (MVS) on VNX8000 and VNX7600; 32 (MVA) / 32 (MVS) on the remaining models

Block compressed LUN limitations
• Maximum number of compressed LUNs: 4000 / 3000 / 2000 / 1000 / 1000 / 1000

Block compression operation limitations
• Maximum number of concurrent compression operations per Storage Processor (SP): 40 / 32 / 20 / 20 / 20 / 20
• Maximum number of compressed LUNs involving migration per storage system: 24 / 24 / 16 / 16 / 16 / 16

Block-level deduplication limitations (VNX for Block)
• Deduplication pass: 8 maximum; started every 12 hours upon the last completed run for that pool; will only trigger if 64 GB of new/changed data is found; a pass can be paused; resume causes a check to run shortly after resuming
• Deduplication processes per SP: 3 maximum
• Deduplication run time: 4 hours maximum; after 4 hours, the deduplication process is paused and other queued processes on the SP are allowed to run; if no other processes are queued, deduplication keeps running

Per-model values for the remaining rows of this page (VNX Snapshots consistency groups and members, LUN migrations, concurrent compression/decompression operations per SP, and concurrent migration operations per system) are listed at https://elabnavigator.emc.com.
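The migration and compression concurrency limits above apply to operations started through the block CLI. A minimal sketch, assuming Navisphere Secure CLI and hypothetical source/destination LUN numbers 12 and 42:

    # Start a LUN migration at the lowest priority
    naviseccli -h 10.1.1.10 migrate -start -source 12 -dest 42 -rate low

    # List active migrations and their progress
    naviseccli -h 10.1.1.10 migrate -list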
Table 2
VNX 5200/5400/5600/5800/7600/8000 Limitations (page 2 of 5)

Storage systems used for SAN Copy replication
Columns: Storage System; Fibre Channel SAN Copy system; Fibre Channel Target system g; iSCSI SAN Copy system; iSCSI Target system. (P = supported, X = not supported, N/A = not applicable.)
Storage systems listed: VNX8000, VNX7600, VNX5800, VNX5600 h, VNX5400, VNX5200; VNX7500, VNX5700, VNX5500, VNX5300, VNX5100; CX4-960, CX4-480, CX4-240, CX4-120 i; CX3-80, CX3-40, CX3-20, CX3-10; CX3-40C, CX3-20C, CX3-10C; CX700, CX500; CX500i, CX300i; CX600, CX400; CX300 j; CX200; AX100, AX150, AX100i, AX150i j; AX4-5F; AX4-5SCF; AX4-5i, AX4-5SCi. The per-system P/X marks are available at https://elabnavigator.emc.com.

Operating Environment for File 8.1 Configuration guidelines

CIFS guidelines (Guideline/Specification: Maximum tested value. Comment.)
• CIFS TCP connection: 64K (default and theoretical max.), 40K (max. tested). Param tcp.maxStreams sets the maximum number of TCP connections a Data Mover can have; the maximum value is 64K (65535). TCP connections (streams) are shared by other components and should be changed in monitored small increments. With SMB1/SMB2, a TCP connection means a single client (machine) connection. With SMB3 and multichannel, a single client can use several network connections for the same session; this depends on the number of available interfaces on the client machine, and for a high-speed interface such as a 10 Gb link it can go up to 4 TCP connections per link.
• Share name length: 80 characters (Unicode). The maximum length for a share name with Unicode enabled is 80 characters.
• Number of CIFS shares: 40,000 per Data Mover (max. tested limit). A larger number of shares can be created; the maximum tested value is 40K per Data Mover.
• Number of NetBIOS names/compnames per Virtual Data Mover: 509 (max). Limited by the number of network interfaces available on the Virtual Data Mover; from a local group perspective, the number is limited to 509. NetBIOS names and compnames must be associated with at least one unique network interface.
• NetBIOS name length: 15. NetBIOS names are limited to 15 characters (a Microsoft limit) and cannot begin with an @ (at sign) or a - (dash) character. The name also cannot include white space, tab characters, or the following symbols: / \ : ; , = * + | [ ] ? < > ". If using compnames, the NetBIOS form of the name is assigned automatically and is derived from the first 15 characters of the compname.
• Comment (ASCII chars) for NetBIOS name for server: 256. Limited to 256 ASCII characters. Restricted characters: you cannot use double quotation ("), semicolon (;), accent (`), or comma (,) characters within the body of a comment; attempting to use these special characters results in an error message. You can only use an exclamation point (!) if it is preceded by a single quotation mark ('). Default comments: if you do not explicitly add a comment, the system adds a default comment of the form EMC-SNAS:T<version>, where <version> is the version of the NAS software.
• Compname length: 63 bytes. For integration with Windows environment releases later than Windows 2000, the CIFS server computer name length can be up to 21 characters when UTF-8 (3-byte characters) is used.
• Number of domains: 10 tested, 509 (theoretical max.). The maximum number of Windows domains a Data Mover can be a member of. To increase the default maximum of 32, change parameter cifs.lsarpc.maxDomain. The Parameters Guide for VNX for File contains more detailed information about this parameter.
• Block size negotiated: 64 KB; 128 KB with SMB2. Maximum buffer size that can be negotiated with Microsoft Windows clients. To increase the default value, change param cifs.W95BufSz, cifs.NTBufSz, or cifs.W2KBufSz. The Parameters Guide for VNX for File contains more detailed information about these parameters. Note: With SMB2.1, read and write operations support a 1 MB buffer (a feature named "large MTU").
• Number of simultaneous requests per CIFS session (maxMpxCount): 127 (SMB1), 512 (SMB2). For SMB1 the value is fixed and defines the number of requests a client can send to the Data Mover at the same time (for example, a change notification request); to increase it, change the maxMpxCount parameter. For SMB2 and newer protocol dialects, this notion has been replaced by a 'credit number', which has a maximum of 512 credits per client but can be adjusted dynamically by the server depending on the load. The Parameters Guide for VNX for File contains more detailed information about this parameter.
• Total number of files/directories opened per Data Mover: 500,000. A large number of open files could require high memory usage on the Data Mover and potentially lead to out-of-memory issues.
• Number of Home Directories supported: 20,000 (max. possible limit, not recommended). Because the configuration file containing Home Directories information is read completely at each user connection, the recommendation is not to exceed a few thousand, for easy management.
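Several of the CIFS rows above are governed by Data Mover parameters (tcp.maxStreams, cifs.lsarpc.maxDomain, the buffer-size and maxMpxCount parameters). On VNX for File these are read and changed from the Control Station with server_param; a minimal sketch, assuming server_2 is the Data Mover and following the caution above about small, monitored increments:

    # Show the current, default, and configured value of tcp.maxStreams
    server_param server_2 -facility tcp -info maxStreams

    # Raise the TCP stream limit in a small, monitored increment
    server_param server_2 -facility tcp -modify maxStreams -value 4096

The Parameters Guide for VNX for File remains the authority on each parameter's range and reboot behavior.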
Table 2
VNX 5200/5400/5600/5800/7600/8000 Limitations (page 3 of 5)

CIFS guidelines (continued)
• Number of Windows/UNIX users connected at the same time: 40,000 (limited by the number of TCP connections). Earlier versions of the VNX server relied on a basic database, nameDB, to maintain Usermapper and secmap mapping information. DBMS now replaces the basic database; this solves the inode consumption issue and provides better consistency and recoverability with the support of database transactions, as well as better atomicity, isolation, and durability in database management.
• Number of users per TCP connection: 64K. To decrease the default value, change param cifs.listBlocks (default 255, max 255); the value of this parameter times 256 = max number of users. Note: TID/FID/UID share this parameter, which cannot be changed individually for each ID. Use caution when increasing this value, as it could lead to an out-of-memory condition. Refer to the Parameters Guide for VNX for File for parameter information.
• Number of files/directories opened per CIFS connection in SMB1: 64K. To decrease the default value, change param cifs.listBlocks (default 255, max 255); the value times 256 = max number of files/directories opened per CIFS connection. TID/FID/UID share this parameter, which cannot be changed individually for each ID. Use caution when increasing this value, and be sure to follow the recommendation for the total number of files/directories opened per Data Mover. Refer to the System Parameters Guide for VNX for File for parameter information.
• Number of files/directories opened per CIFS connection in SMB2: 127K. To decrease the default value, change parameter cifs.smb2.listBlocks (default 511, max 511); the value times 256 = max number of files/directories opened per CIFS connection.
• Number of VDMs per Data Mover: 128. The total number of VDMs, file systems, and checkpoints across a whole cabinet cannot exceed 2048.

FileMover
• Max connections to secondary storage per primary (VNX for File) file system: 1024. The number of threads available for recalling data from secondary storage is half the number of whichever is lower of CIFS or NFS threads.
• Number of HTTP threads for servicing FileMover API requests per Data Mover: 64 (max. tested). The default is 16; it can be increased using the server_http command.

File system guidelines
• Mount point name length: 255 bytes (ASCII). The "/" used when creating the mount point counts as one character. If exceeded, Error 4105: Server_x: path_name: invalid path specified is returned.
• File system name length: 240 bytes (ASCII); 19 characters displayed for the list option. For nas_fs list output, a file system name is truncated if it is more than 19 characters. To display the full file system name, use the -info option with a file system ID (nas_fs -i id=<fs_id>).
• Filename length: 255 bytes (NFS), 255 characters (CIFS). With Unicode enabled in an NFS environment, the number of characters that can be stored depends on the client encoding type, such as latin-1; for example, a Japanese UTF-8 character may require three bytes. With Unicode enabled in a CIFS environment, the maximum number of characters is 255. For filenames shared between NFS and CIFS, CIFS allows 255 characters; NFS truncates names longer than 255 bytes in UTF-8 and manages the file successfully.
• Pathname length: 1,024 bytes. Make sure the final path length of restored files is less than 1024 bytes. For example, if a file was backed up with an original path name of 900 bytes and is restored to a path of 400 bytes, the final path length would be 1300 bytes and the file would not be restored.
• Directory name length: 255 bytes. This is a hard limit; creation is rejected over the 255 limit. The limit is bytes for UNIX names and Unicode characters for CIFS.
• Subdirectories (per parent directory): 64,000. This is a hard limit; the code prevents creating more than 64,000 directories.
• Number of file systems per VNX: 4096. This maximum includes VDM and checkpoint file systems.
• Number of file systems per Data Mover: 2048. The mount operation fails with an error indicating that the maximum number of file systems has been reached when the count reaches 2048. This maximum includes VDM and checkpoint file systems.
• Maximum disk volume size: dependent on RAID group. Unified platforms: running setup_clariion on the VNX7500 platform provisions the storage to use 1 LUN per RAID group, which might result in LUNs > 2 TB depending on drive capacity and the number of drives in the RAID group. On all VNX integrated platforms (VNX5300, VNX5500, VNX5700, VNX7500) the 2 TB LUN limitation has been lifted; this might or might not result in LUN sizes larger than 2 TB, depending on RAID group size and drive capacity. For all other unified platforms, setup_clariion continues to function as in the past, breaking large RAID groups into LUNs that are < 2 TB in size. Gateway systems: users might configure LUNs greater than 2 TB, up to 16 TB or the max size of the RAID group, whichever is less; this is supported for VG2 and VG8 NAS gateways attached to CX3, CX4, VNX, and Symmetrix DMX™-4 and VMAX® backends. Multi-Path File Systems (MPFS): MPFS supports LUNs greater than 2 TB; Windows 2000 and 32-bit Windows XP, however, cannot support large LUNs due to a Windows OS limitation. All other MPFS Windows clients support LUN sizes greater than 2 TB if the 5.0.90.900 patch is applied. Use of these 32-bit Windows XP clients on VNX7500 systems requires request for price quotation (RPQ) approval.
• Total storage for a Data Mover (Fibre Channel only): VNX Version 7.0: 200 TB (VNX5300); 256 TB (VNX5500, VNX5700, VNX7500, VG2 & VG8). These total capacity values represent the Fibre Channel disk maximum with no ATA drives; Fibre Channel capacity changes if ATA is used. Notes: On a per-Data-Mover basis, the total size of all file systems plus the size of all SavVols used by SnapSure must be less than the total supported capacity; exceeding these limits can cause an out-of-memory panic. Refer to the VNX for File capacity limits tables for more information, including mixed disk type configurations.
• File size: 16 TB. This hard limit is enforced and cannot be exceeded.
• Number of directories supported per file system: same as the number of inodes in the file system; each 8 KB of space = 1 inode.
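The file-system-name row above notes that nas_fs truncates names longer than 19 characters in list output. A short sketch of the two commands that row references (file system ID 27 is a hypothetical example):

    # List file systems; names longer than 19 characters are truncated here
    nas_fs -list

    # Display the full name and attributes of a single file system by ID
    nas_fs -info id=27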
Table 2
VNX 5200/5400/5600/5800/7600/8000 Limitations (page 4 of 5)

File system guidelines (continued)
• Number of files per directory: 500,000. Exceeding this number will cause performance problems.
• Maximum number of files and directories per VNX file system: 256 million (default). This is the total number of files and directories that can be in a single VNX file system. The number can be increased to 4 billion at file system creation time, but only after considering recovery and restore time and the total storage utilized per file. The actual maximum in a given file system depends on a number of factors, including the size of the file system.
• Maximum amount of deduplicated data supported: 256 TB. When quotas are enabled with a file size policy, the maximum amount of deduplicated data supported is 256 TB; this amount includes other files owned by UID 0 or GID 0. All other industry-standard caveats, restrictions, policies, and best practices prevail, including but not limited to fsck times (now made faster through multi-threading), backup and restore times, number of objects per file system, snapshots, file system replication, VDM replication, performance, availability, extend times, and layout policies. Proper planning and preparation should occur before implementing these guidelines.

Naming services guidelines
• Number of DNS domains: 3 with WebUI; unlimited with CLI. Three DNS servers per Data Mover is the limit when using WebUI; there is no limit when using the command line interface (CLI).
• Number of NIS servers per Data Mover: 10. You can configure up to 10 NIS servers in a single NIS domain on a Data Mover.
• NIS record capacity: 1004 bytes. A Data Mover can read 1004 bytes of data from a NIS record.
• Number of DNS servers per DNS domain: 3.

NFS guidelines
• Number of NFS exports: 2,048 per Data Mover (tested); unlimited theoretical max. You might notice a performance impact when managing a large number of exports using Unisphere.
• Number of concurrent NFS clients: 64K with TCP (theoretical); unlimited with UDP (theoretical). Limited by TCP connections.
• Netgroup line size: 16383. The maximum line length that the Data Mover accepts in the local netgroup file on the Data Mover, or in the netgroup map in the NIS domain to which the Data Mover is bound.
• Number of UNIX groups: 64K. The maximum number of GIDs is 64K, but an individual GID can have an ID in the range 0 - 2147483648 (2 billion is the max value of any GID).

Networking guidelines
• Link aggregation/Ethernet channel: 8 ports (Ethernet channel); 12 ports (link aggregation, LACP). Ethernet channel: the number of ports used must be a power of 2 (2, 4, or 8). Link aggregation: any number of ports can be used. All ports must be the same speed, and mixing different NIC types (that is, copper and fibre) is not recommended.
• Number of VLANs supported: 4094. IEEE standard.
• Number of interfaces per Data Mover: 45 tested; theoretically 509.
• Number of FTP connections: 64K (theoretical). By default the value is (in theory) 0xFFFF, but it is also limited by the number of TCP streams that can be opened. To increase the default value, change param tcp.maxStreams (set to 0x00000800 by default); if you do not increase it to 64K before TCP starts, you will not be able to increase the number of FTP connections. Refer to the Parameters Guide for VNX for File for parameter information.

Quotas guidelines
• Number of tree quotas: 8191 per file system.
• Max size of tree quotas: 256 TB. Includes file size and quota tree size.
• Max number of unique groups: 64K per file system.
• Quota path length: 1024. On systems with Unicode enabled, a character might require between 1 and 3 bytes, depending on the encoding type or character used; for example, a Japanese character typically uses 3 bytes in UTF-8, while ASCII characters require 1 byte.

Replicator V2 guidelines
• Number of replication sessions per VNX: 1365 (NSX); 1024 (other platforms).
• Max number of replication sessions per Data Mover: 682. This enforced limit includes all configured file system, VDM, and copy sessions.
• Max number of local and remote file system and VDM replication sessions per Data Mover: 682.
• Max number of loopback file system and VDM replication sessions per Data Mover: 341.

SnapSure guidelines
• Number of checkpoints per file system: 96 read-only, 16 writeable. Up to 96 read-only checkpoints per file system are supported, as well as 16 writeable checkpoints.
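The SnapSure row above caps each file system at 96 read-only plus 16 writeable checkpoints. As a hedged sketch of how checkpoints are typically created and listed from the Control Station (fs01 and fs01_ckpt1 are hypothetical names):

    # Create a read-only checkpoint of fs01
    fs_ckpt fs01 -Create

    # Create a writeable checkpoint based on an existing read-only checkpoint
    fs_ckpt fs01_ckpt1 -Create -readonly n

    # List the checkpoints associated with fs01
    fs_ckpt fs01 -list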
Table 2
VNX 5200/5400/5600/5800/7600/8000 Limitations (page 5 of 5)

VNX for File 8.1 capacity limits (VNX5200 / VNX5400 / VNX5600 / VNX5800 / VNX7600 / VNX8000 / VG10 / VG20)
• Usable IP storage capacity limit per blade (all disk types/uses) k: 256 TB (all models)
• Max FS size: 16 TB (all models)
• Max # FS per DM/blade l: 2048 (all models)
• Max # FS per cabinet: 4096 (all models)
• Max configured replication sessions per DM/blade for Replicator v2 m: 1024 (all models)
• Max # of checkpoints per PFS n: 96 (all models)
• Max # of NDMP sessions per DM/blade: 4 / 4 / 8 / 8 / 8 / 8 / 8 / 8
• Memory per DM/blade: 6 GB / 6 GB / 12 GB / 12 GB / 24 GB / 24 GB / 6 GB / 24 GB

Table 2 Footnotes:
a. Each clone private LUN must be at least 1 GB.
b. A thin or thick LUN may not be used for a clone private LUN.
c. The limits for snapshots and sessions include SnapView snapshots or SnapView sessions as well as reserved snapshots or reserved sessions used in other applications, such as SAN Copy™ (incremental sessions) and MirrorView™/Asynchronous.
d. A thin LUN cannot be used in the reserved LUN pool.
e. These limits include MirrorView Asynchronous (MirrorView/A) images and SnapView snapshot source LUNs in addition to the incremental SAN Copy LUNs. The maximum number of source LUNs assumes one reserved LUN assigned to each source LUN.
f. These limits include MirrorView/A images and SnapView sessions in addition to the incremental SAN Copy sessions.
g. Target implies the storage system can be used as the remote system when using SAN Copy on another storage system.
h. The VNX5400 and VNX5600 may not be supported as target arrays over Fibre Channel when the source array Operating Environment is 04.30.000.5.525 or earlier for CX4 storage systems and 05.32.000.5.207 or earlier for VNX storage systems.
i. The VNX storage systems can support SAN Copy over Fibre Channel or SAN Copy over iSCSI only if the storage systems are configured with I/O ports of that type. See the release notes for VNX Operating Environment version 05.33.000.5.015.
j. The CX300, AX100, and AX150 do not support SAN Copy, but support SAN Copy/E. See the release notes for that related product for details.
k. This is the usable IP storage capacity per blade. For overall platform capacity, consider the type and size of disk drive, the usable Fibre Channel host capacity requirements, RAID group options, and the total number of disks supported.
l. This count includes production file systems, user-defined file system checkpoints, and two checkpoints for each replication session.
m. A maximum of 256 concurrently transferring sessions are supported per DM.
n. PFS (Production File System).
Copyright © 2015 EMC Corporation. All rights reserved. Published in USA. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).