Transcript
OceanStor V3 Converged Storage Pre-sales Training (HCS Storage Pre-sales Training)
Instructor: [teacher name]
Background / Project experience / Specialist skills / Position
[email protected]
+0086 1888888888
Contents
1. Positioning
2. Hardware and software architectures
3. Price structure and License mode
4. Competitive analysis
Huawei Unified Storage Overview
Mid-range unified storage: 5300 V3, 5500 V3, 5600 V3, 5800 V3, 6800 V3
Entry-level block: S2200T; entry-level unified storage: S2600T
High-end: 18000 series

Vendor positioning (entry-level / mid-range / high-end):
- Huawei: S2000T, 2000 V3 / 5000 V3, 6000 V3 / 18000, 18000 V3
- EMC: VNXe / VNX / VMAX
- HDS: SMS100 / HUS, AMS2000 / USP, VSP
- IBM: DS3000 / V7000, DS5000 / DS8000
- HP: P2000 / 3PAR StoreServ 7000 / P10000
- NetApp: FAS2000 / FAS8000 / -
Challenge: application explosion vs. storage islands
Traditional and cloud applications are exploding while storage remains siloed. (Source: IDC, Gartner)

OceanStor V3 – enterprise converged storage
- SSD and HDD convergence
- High-end, mid-range, and entry-level convergence
- Primary and backup storage convergence
- SAN and NAS convergence
- Heterogeneous storage convergence
- Unified, easy management on state-of-the-art hardware

Convergence of SAN and NAS
Traditional storage: NAS or SAN only; two storage devices are required to provide SAN and NAS services, so the efficiency of database and file sharing services cannot be maximized.
OceanStor V3 series: NAS + SAN; block- and file-level storage is unified, requiring no additional file engines, simplifying deployment, and reducing purchase cost. The efficiency of database and file sharing services is notably improved.

Convergence of high-end, mid-range, and entry-level storage systems
Traditional storage: diverse architectures, different operation habits, and no data flow between tiers.
OceanStor V3 series: high-end, mid-range, and entry-level systems interwork, enabling free data flow, and are managed in a unified manner, multiplying efficiency.

Convergence of heterogeneous storage
Traditional storage: heterogeneous devices from different vendors run diverse operating systems, leading to complex management and low efficiency.
OceanStor V3 series: legacy storage systems are reused, protecting the original investment; pools of third-party storage resources simplify storage management and enable cloud-based storage.

Convergence of SSDs and HDDs
Traditional storage: choose large-capacity, cost-effective HDD systems or high-performance SSD systems? That is the question.
OceanStor V3 series: HDD pools and SSD pools coexist. All-flash configurations are optimized to put the high performance and low latency of SSDs into full play, and converged HDD/SSD pools meet the performance requirements of complex services.

Converged backup
Traditional storage: backup server + backup software + backup storage, resulting in high cost and a complex network.
OceanStor V3 series: backup storage is integrated into primary storage, requiring no additional backup software and reducing purchase cost; primary and backup storage are managed in a unified manner, simplifying the O&M of backup solutions.
Huawei storage product positioning and SDS evolution
(Chart: latency from < 100 µs to > 1 s versus capacity from < 1 TB to > 1 EB, positioning real-time storage, OceanStor V3 unified storage, the SDS controller with big data storage, and cold storage against workloads such as Oracle, Hadoop, and OpenStack.)
Use case positioning
Production data center: applications, databases, email, OA, and VMs access unified storage over FC/FCoE and NFS/CIFS/iSCSI; a remote DR data center with a backup server connects to a second unified storage system over FC.
V3 use cases: consolidation, OLTP, server virtualization and VDI, private cloud, backup, and DR.
Contents
1. Positioning
2. Hardware and software architectures
3. Price structure and License mode
4. Competitive analysis
HUAWEI OceanStor V3 series storage systems
5300 V3
5500 V3
5600 V3
5800 V3
6800 V3
Unified software/hardware/management platform
OceanStor V3 series specifications

Hardware specifications (5300 V3 / 5500 V3 / 5600 V3 / 5800 V3 / 6800 V3):
- Controller enclosure form: 2 U disk-and-controller integration / 2 U disk-and-controller integration / 3 U independent engine / 3 U independent engine / 6 U independent engine
- Supported disk quantity: 500 / 750 / 1000 / 1250 / 3200
- Max. number of controllers: 8 / 8 / 8 / 8 / 8
- Cache capacity (dual-controller): 32 GB or 64 GB (planned) / 48 GB, 96 GB, or 128 GB (planned) / 64 GB or 128 GB / 128 GB or 256 GB / 256 GB, 512 GB, or 1024 GB
- Max. front-end host ports (dual-controller): 24 / 24 / 56 / 56 / 40

Software specifications:
- Base software, block: basic software license for block (including Device Management, SmartThin, SmartMulti-Tenant, SmartMigration, SmartErase, Cloud Service)
- Base software, file: upgrade license from block to unified storage (including SmartDedupe & SmartCompression (for FS), SmartQuota, NFS, CIFS, NDMP)
- File value-added software: CIFS, NFS, NDMP license, SmartQuota (intelligent quota), HyperLock (WORM), HyperSnap (snapshot), HyperReplication (remote replication), SmartCache (intelligent SSD caching), SmartQoS (intelligent service quality control), SmartPartition (intelligent cache partitioning), SmartDedupe & SmartCompression (intelligent data deduplication and compression)
- Block value-added software: HyperCopy (LUN copy), HyperClone (cloning), HyperMirror (volume mirroring), HyperSnap (snapshot), HyperReplication (remote replication), SmartCache (intelligent SSD caching), SmartQoS (intelligent service quality control), SmartPartition (intelligent cache partitioning), SmartDedupe & SmartCompression (intelligent data deduplication and compression), SmartTier (intelligent data tiering), SystemReporter (system report software), SmartVirtualization (intelligent heterogeneous virtualization), SmartMotion (intelligent data migration)
Replacement of the T series
- OceanStor 18500: high-end storage product.
- 6800 V3 (128 GB/256 GB/512 GB) succeeds the S5800T (192 GB) and S6800T (192 GB/384 GB); 128 GB to 256 GB of cache has become standard for mid-range and high-end systems, and the S5800T/S6800T V2 remain highly competitive.
- 5600 V3 (64 GB/128 GB) and 5800 V3: added to comprehensively increase the competitiveness and coverage of Huawei mid-range and high-end storage. In the fierce mid-range market the S5600T (24 GB/48 GB) is losing ground while the S5800T (96 GB) becomes increasingly competitive; use the 5600 V3 as a mid-range system to reinforce the S5800T V2 if necessary.
- 5500 V3 (48 GB/96 GB): added to strengthen Huawei's presence in the mid-range market and reinforce the S5600T.
- 5300 V3 (32 GB/64 GB): added to expand the coverage of mid-range and entry-level storage.
- S2600T (16 GB) and S5500T (16 GB/32 GB): mid-range and entry-level models that stay competitive in the long term.
Product architecture
- Host access: iSCSI/FC/FCoE for the block service; NFS/CIFS/FTP/HTTP for the file service.
- Controllers: file service (file semantics) and block service (LUN semantics) run on one platform (file + block) under unified management.
- Storage pool: capacity is organized with RAID 2.0+.
- Hardware: disk enclosures.
Unified storage platform
- Controller platform (SAN + NAS): 2 U platform (5300 V3, 5500 V3), 3 U platform (5600 V3, 5800 V3), 6 U platform (6800 V3).
- Disk enclosure platform: 2 U 25 x 2.5-inch disk enclosure, 4 U 24 x 3.5-inch disk enclosure, 4 U 75 x 3.5-inch high-density disk enclosure.
5300 V3/5500 V3 controller
- SAS expansion ports: two SAS 3.0 expansion ports per controller.
- Power-BBU-fan modules: 1+1; up to 94% power conversion efficiency; -48 V DC, 240 V DC, and 110/220 V AC.
- Onboard ports: 5300 V3: four GE ports per controller; 5500 V3: four 8 Gbit/s Fibre Channel ports per controller.
- Interface modules: two slots for hot-swappable interface modules; port types: 8 or 16 Gbit/s Fibre Channel, GE, 10GE TOE, 10GE FCoE, and 12 Gbit/s SAS.
5600 V3/5800 V3 controller
- BBU modules: 1+1 (5600 V3) or 2+1 (5800 V3); AC power failure protection.
- Controller modules: dual controllers; automatic frequency adjustment for reduced power consumption; built-in fan modules (integrated into the controller modules but maintainable independently).
- Management modules: 1+1; hot-swappable; multi-controller scale-out and interconnection for establishing heartbeats.
- Power modules: 1+1; up to 94% power conversion efficiency; 240 V DC.
- Interface modules: 16 slots for hot-swappable interface modules; port types: 8 or 16 Gbit/s Fibre Channel, GE, 10GE TOE, 10GE FCoE, and 12 Gbit/s SAS.
Enclosure layout: controllers, interface modules, BBU modules, management modules, and power modules in the system enclosure.
6800 V3 controller
- BBU modules: 3+1; AC power failure protection.
- Controller modules: 2- or 4-controller configuration; automatic frequency adjustment for reduced power consumption; built-in fan modules (integrated into the controller modules but maintainable independently).
- Power modules: 1+1; 240 V DC; up to 94% power conversion efficiency.
- Management modules: 1+1; hot-swappable; multi-controller scale-out and interconnection for establishing heartbeats.
- Interface modules: 12 with two controllers, 24 with four controllers; hot-swappable; port types: 8 or 16 Gbit/s Fibre Channel, GE, 10GE TOE, 10GE FCoE, and 12 Gbit/s SAS.
Enclosure layout: controllers, interface modules, BBU modules, management modules, and power modules in the system enclosure.
Disk enclosure platform
- 2 U disk enclosure: 25 x 2.5-inch disks; disk modules, power modules, and expansion modules.
- 4 U disk enclosure: 24 x 3.5-inch disks; disk modules, fan modules, expansion modules, and power modules.
- 4 U high-density disk enclosure: 75 x 3.5-inch disks; system enclosure with power modules, fan modules, disk modules, and expansion modules.
SmartI/O card
Unified multiprotocol adapter:
- 8 Gbit/s and 16 Gbit/s FC
- 10 Gbit/s FCoE
- 10 Gbit/s iWARP (scale-out)
- Port mode switchover (FC/ETH/Scale-out) on the management interface; use the corresponding optical modules after a port mode switchover.
Panel callouts: (1) module power indicator/hot-swap button; (2) 16 Gbit/s FC, 8 Gbit/s FC, or 10GE port; (3) link/active/mode indicator of the port; (4) module handle; (5) module mode silkscreen.
Deduplication/compression acceleration card (ACC)
Functions: provides deduplication fingerprint computing, GZIP compression and decompression, and hardware acceleration, offloading the CPU and improving system deduplication and compression performance.
Panel callouts: (1) module power indicator/hot-swap button; (2) active indicator; (3) module handle/silkscreen.

Indicator descriptions:
- Module power indicator: steady green, the module is working correctly; blinking green, the module has received a hot-swap request; steady red, the module is malfunctioning; off, the module is powered off or ready for hot swap.
- Active indicator: steady green, no data is being processed; blinking green, data is being processed; off, the module is not working correctly.
IP Scale-out (2 U)
5300 V3/5500 V3: back-end ports, front-end service ports, and scale-out ports A1/B1. A SmartI/O card must be used for scale-out and must be inserted into fixed slot 1.
IP Scale-out (2 U, four-controller direct connection)
Management ports: connect to the network through the management ports of controller 0 first; the number of management ports required depends on site requirements.
A direct-connection network supports a maximum of four controllers.
IP Scale-out (2 U, four-controller switch connection)
Two CE6850 switches interconnect two 2 U controller enclosures; the ETH management ports on the rear panel of a CE6850 switch can be used to connect to the user's network.
A switch-connection network supports a maximum of eight controllers. A direct-connection network cannot be converted into a switch-connection network.
IP Scale-out (3 U)
5600 V3/5800 V3: back-end ports, front-end service ports, and scale-out ports A3/B3. A SmartI/O card must be used for scale-out and must be inserted into fixed slot 3.
IP Scale-out (3 U four-controller direct-connection)
A direct-connection network supports a maximum of four controllers.
IP Scale-out (3 U, eight-controller switch connection)
Two CE6850 switches interconnect four 3 U controller enclosures. A switch-connection network supports a maximum of eight controllers. A direct-connection network cannot be converted into a switch-connection network.
IP Scale-out (6 U)
6800 V3: back-end ports, front-end service ports, and fixed scale-out slots A3/B3. A SmartI/O card must be used for scale-out and must be inserted into fixed slot 3.
IP Scale-out (6 U, six-controller switch connection)
Two CE6850 switches interconnect the 6 U controller enclosures. A switch-connection network supports a maximum of eight controllers; 6 U controllers do not support direct-connection networks.

IP Scale-out (6 U, eight-controller switch connection)
Two CE6850 switches interconnect two 6 U controller enclosures. A switch-connection network supports a maximum of eight controllers; 6 U controllers do not support direct-connection networks.
RAID 2.0+: basis of OceanStor OS
Traditional RAID binds LUNs and hot-spare disks to specific physical HDDs. RAID 2.0+ uses block-level aggregation: LUNs and hot-spare space are drawn from a virtual pool that spans all physical HDDs.
Benefits:
1. Wide striping of data accelerates performance.
2. Many-to-many disk rebuild is up to 20x faster and improves reliability.
RAID 2.0+ block virtualization technology
Physical disk -> Pool -> Chunk -> CKG -> Extent -> Volume -> LUN:
- Different types of disks (SSD, SAS, NL-SAS) are distributed in the pool; the pool can be tiered or non-tiered.
- Each disk is divided into fine-grained chunks (64 MB).
- Chunks from different disks form a chunk group (CKG).
- A CKG is divided into smaller extents (256 KB to 64 MB).
- Several extents form a volume, which is presented as a LUN.
- A LUN can therefore be constructed in a short time; there is no need to allocate resources in advance.
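The chunk/CKG/extent hierarchy above can be illustrated with a small model. The sketch below is a simplified, hypothetical illustration of how a pool might carve disks into 64 MB chunks, group chunks from different disks into CKGs, and cut CKGs into extents for a LUN; the chunk size and extent range follow the slide, while the class names, RAID width, and selection policy are assumptions, not Huawei's implementation.

```python
# Illustrative sketch only: a toy model of RAID 2.0+-style block virtualization.
# Chunk size (64 MB) and the extent range follow the slide; everything else
# (names, RAID width, selection policy) is assumed.
import random

CHUNK_MB = 64           # each disk is split into 64 MB chunks
EXTENT_MB = 4           # assumed extent size within the 256 KB .. 64 MB range

class Disk:
    def __init__(self, disk_id, size_gb):
        self.id = disk_id
        self.free_chunks = [(disk_id, n) for n in range(size_gb * 1024 // CHUNK_MB)]

def make_ckg(disks, width):
    """Take one free chunk from `width` different disks to form a chunk group (CKG)."""
    members = random.sample([d for d in disks if d.free_chunks], width)
    return tuple(d.free_chunks.pop() for d in members)

def carve_extents(ckg, data_chunks):
    """Cut the usable capacity of a CKG into fixed-size extents (parity ignored here)."""
    usable_mb = data_chunks * CHUNK_MB
    return [(ckg, i) for i in range(usable_mb // EXTENT_MB)]

def create_lun(pool_disks, lun_mb, width=9, data_chunks=8):
    """Allocate extents for a LUN on demand; no per-disk space is reserved up front."""
    extents = []
    while len(extents) * EXTENT_MB < lun_mb:
        extents.extend(carve_extents(make_ckg(pool_disks, width), data_chunks))
    return extents[: lun_mb // EXTENT_MB]

if __name__ == "__main__":
    pool = [Disk(i, 900) for i in range(24)]          # a 24-disk tier in the pool
    lun = create_lun(pool, lun_mb=10 * 1024)          # a 10 GB LUN
    used = {disk_id for ckg, _ in lun for disk_id, _ in ckg}
    print(f"LUN uses {len(lun)} extents spread across {len(used)} disks")
```

Because every CKG samples chunks from different disks, even this toy LUN ends up striped across most of the pool, which is the property the wide-striping and fast-rebuild benefits below rely on.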
RAID 2.0+ benefit: wide striping
With block-level virtualization, a LUN is not confined to a RAID group but can span all disks in the pool.
RAID 2.0+ benefit: decoupling applications from physical storage resources
- Tiers are created in the pool, each with a different disk type (Tier 0: SSD; Tier 1: SAS; Tier 2: NL-SAS).
- Within the monitoring period, the system records read and write I/O frequency per extent (512 KB to 64 MB).
- Within the migration period, extents are migrated between tiers according to the frequency analysis; the migration does not interrupt host I/Os.
RAID 2.0+ benefit: many-to-many fast rebuild
Traditional RAID performs a few-to-one rebuild onto a dedicated hot-spare disk, creating hotspots and long rebuild exposure. RAID 2.0+ performs a many-to-many rebuild onto unallocated chunks (hot-spare space), enabling parallel rebuilds in less time: about 30 minutes to rebuild 1 TB of data.
Supported RAID types: RAID 0, RAID 5, RAID 6, RAID 10.
Data layout comparison
Traditional RAID relies on dedicated hot-spare disks, whereas block virtualization distributes hot-spare space across the pool.
LUN virtualization: EMC VNX. Block virtualization: Huawei RAID 2.0+, HP 3PAR Fast RAID, IBM XIV super stripe.
OceanStor V3 software overview (OceanStor OS)
- Data acceleration, Smart series software suite: intelligent high-end features (QoS and cache partitioning) accelerate critical service responses; SAN and NAS integration enables storage resources to be allocated intelligently on demand.
- Data protection, Hyper series suite: multi-dimensional data protection for local, remote, and other sites and diverse 3DC DR solutions, effectively ensuring high data reliability and security.
- O&M management, simple management software suite: easy-to-configure management software lets users operate and maintain multi-brand, cross-field devices in a unified manner, with GUI-based end-to-end management that significantly improves management efficiency in multi-brand, cross-field (BYOD) environments.
Two-level management
- eSight: unified management of network, server, and storage.
- DeviceManager: storage device management.
eSight: storage resource management software
eSight manages applications, hosts, Fibre Channel/IP switches, and storage, providing topology management, status display, fault alarms, device discovery, storage path management, capacity and performance monitoring, trend analysis, reports, and remote maintenance.
DeviceManager: storage device management
A browser-based (B/S) web interface designed for IT generalists and easy to use, providing alarm monitoring, capacity allocation, and performance display.
eSDK ecosystem
- Industry-standard management platforms: Microsoft System Center, Symantec VOM, OpenStack, Nagios, CA, VMware vCenter, HP OpenView, IBM Tivoli, and the EMC SRM Suite.
- Industry ISVs (media and entertainment, energy, financial, education) integrate through the SDK.
- Plug-ins and providers: vCenter plug-in, Cinder plug-in, SMI-S provider, System Center plug-in, Nagios plug-in, VASA provider, and OpenAPI.
- eSDK exposes a RESTful API and SNMP covering capacity, host configuration, pools, LUNs, file systems, alarms, performance, data protection, quotas, resource optimization, multi-tenancy, DR, SAN and NAS protocols, cloud storage, and block service backup for Huawei storage products.
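As an illustration of how an orchestration layer might consume such a northbound REST interface, the sketch below logs in and lists LUNs. The endpoint paths, field names, credentials, and session flow are invented placeholders, not the documented eSDK/DeviceManager API; only the general REST-plus-JSON pattern is implied by the slide.

```python
# Hypothetical sketch of a northbound REST client; URL paths, field names, and
# the session flow are invented for illustration and are NOT the published eSDK API.
import requests

BASE = "https://storage.example.com:8088/api/v1"   # placeholder management address

def login(session, user, password):
    # Assumed login endpoint returning a token that later requests must carry.
    r = session.post(f"{BASE}/sessions", json={"user": user, "password": password},
                     verify=False, timeout=10)      # cert checking skipped in this sketch
    r.raise_for_status()
    session.headers["X-Auth-Token"] = r.json()["token"]

def list_luns(session):
    # Assumed resource path; returns whatever JSON the array exposes for LUNs.
    r = session.get(f"{BASE}/luns", timeout=10)
    r.raise_for_status()
    return r.json().get("data", [])

if __name__ == "__main__":
    s = requests.Session()
    login(s, "admin", "password")                   # placeholder credentials
    for lun in list_luns(s):
        print(lun.get("name"), lun.get("capacity"))
```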
V3 add-on features (block based)

Data protection software suite:
- HyperSnap: protects local data based on increments (snapshots).
- HyperClone: protects local data based on full copies.
- HyperCopy: protects data between devices.
- HyperReplication: implements DR protection between data centers.

Efficiency software suite:
- SmartQoS: intelligent service quality control.
- SmartVirtualization: data movement across heterogeneous systems (IBM, HDS, EMC, Huawei).
- SmartMotion: horizontal data movement.
- SmartThin: thin provisioning.
- SmartTier: vertical data movement.
- SmartPartition: intelligent cache partitioning (per-application partitions 1 to N).
- SmartDedupe & SmartCompression: intelligent data deduplication and compression.
- SmartCache: intelligent SSD caching.
- SmartErase: data destruction.
- SmartMigration: LUN relocation.
- SmartMulti-Tenant: multi-tenancy.
HyperSnap (virtual snapshot)

Concept: a virtual snapshot, one of the snapshot technologies (alongside LUN clone), is a point-in-time copy of source data. The implementation of a virtual snapshot depends on its source LUN.

Working principle:
1. Before HyperSnap is enabled, the way data is written remains unchanged.
2. After HyperSnap is enabled, a mapping table is created to record data mapping relationships.
3. Copy-on-write (COW) is performed: before new data is written, the original data block on the source LUN is copied to the resource space and the mapping table is updated accordingly.
4. The original data block is overwritten once the new data is written to it.
5. The snapshot can be rolled back.

Technical highlights:
- Quick snapshot generation: a storage system can generate a snapshot within seconds.
- Low storage space consumption: snapshots are not full physical copies of the source data, so only a small amount of storage space is required.

Application scenarios: rapid data backup and restore to recover from accidental deletion and viruses; continuous data protection; data analysis and tests.
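The copy-on-write sequence above can be sketched as a minimal, self-contained model of COW snapshots in general (a mapping table plus a resource space). It is not HyperSnap's actual code; the block granularity and data structures are assumptions.

```python
# Minimal copy-on-write snapshot model: before a block on the source LUN is
# overwritten, its old contents are saved and recorded in a per-snapshot mapping
# table. Purely illustrative; not HyperSnap internals.
class SourceLUN:
    def __init__(self, blocks):
        self.blocks = list(blocks)          # current data, one entry per block
        self.snapshots = []                 # active snapshot mapping tables

    def create_snapshot(self):
        snap = {}                           # block index -> preserved old data
        self.snapshots.append(snap)
        return snap                         # snapshot creation copies no user data

    def write(self, index, data):
        for snap in self.snapshots:
            if index not in snap:           # first overwrite since the snapshot:
                snap[index] = self.blocks[index]   # COW the old block first
        self.blocks[index] = data

    def read_snapshot(self, snap, index):
        # Snapshot view: preserved old data if the block changed, else shared data.
        return snap.get(index, self.blocks[index])

    def rollback(self, snap):
        for index, old in snap.items():     # restore every preserved block
            self.blocks[index] = old

if __name__ == "__main__":
    lun = SourceLUN(["a", "b", "c", "d"])
    t0 = lun.create_snapshot()
    lun.write(1, "B")                       # triggers copy-on-write of block 1
    print(lun.read_snapshot(t0, 1))         # -> "b" (point-in-time view)
    lun.rollback(t0)
    print(lun.blocks)                       # -> ['a', 'b', 'c', 'd']
```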
HyperClone (LUN clone)

Concept: clone, a type of snapshot technology, obtains fully populated point-in-time copies of source data. Clone is a backup approach that allows incremental synchronization.

Working principle:
1. Initial synchronization: data is copied from the primary LUN to the secondary LUN. In the progress bitmap, blocks that have been copied are marked "0", while blocks that are being copied or still to be copied are marked "1".
2. If the primary LUN receives a write request from the production host during the initial synchronization, the storage system checks the synchronization progress:
   - If the original data block has not yet been synchronized to the secondary LUN, the new data is written to the primary LUN only and a write success acknowledgement is returned to the host; the synchronization task later copies the new data to the secondary LUN.
   - If the original data block has already been synchronized, the new data is written to both the primary and the secondary LUN.
   - If the original data block is being synchronized, the storage system waits until the block is copied and then writes the new data to both LUNs.
3. Split: after the initial synchronization is complete, the primary LUN is split from the secondary LUN, and both can be used separately for testing and data analysis. Changes to the primary and secondary LUNs do not affect each other; identical data blocks are marked "0" and differing blocks "1", so the progress bitmap records changes on both LUNs.

Highlights and application scenarios:
- If the primary LUN is damaged, its secondary LUNs are not affected.
- The primary LUN can have multiple secondary LUNs.
- HyperClone is applicable to data backup, protection, analysis, and tests.
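A minimal sketch of the progress-bitmap logic during initial synchronization, covering the same cases the slide lists (block already copied versus not yet copied). It is a simplified serial model, not HyperClone's implementation.

```python
# Toy model of clone synchronization driven by a progress bitmap:
# 1 = still to be copied, 0 = already copied. Illustrative only.
class CloneSession:
    def __init__(self, primary):
        self.primary = list(primary)
        self.secondary = [None] * len(primary)
        self.bitmap = [1] * len(primary)         # everything still to be copied

    def copy_next(self):
        """One step of the background initial synchronization."""
        for i, bit in enumerate(self.bitmap):
            if bit:
                self.secondary[i] = self.primary[i]
                self.bitmap[i] = 0
                return i
        return None                              # synchronization complete

    def host_write(self, i, data):
        """Host write arriving during the initial synchronization."""
        self.primary[i] = data
        if self.bitmap[i] == 0:                  # already synchronized:
            self.secondary[i] = data             # write to both primary and secondary
        # not yet synchronized: the background copy picks up the new data later
        # (the "block is being copied" case collapses away in this serial model)

if __name__ == "__main__":
    s = CloneSession(["a", "b", "c"])
    s.copy_next()                                # block 0 copied
    s.host_write(0, "A")                         # already copied -> mirrored at once
    s.host_write(2, "C")                         # not copied yet -> deferred to sync
    while s.copy_next() is not None:
        pass
    print(s.secondary)                           # -> ['A', 'b', 'C']
```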
HyperCopy (LUN copy)

Concept: HyperCopy copies data from a source LUN to a target LUN within one storage system or between storage systems.

Working principle: a full LUN copy copies all data from the source LUN to the target LUN and is performed offline. If the source LUN receives write I/Os during the copy, the data on the target LUN becomes inconsistent with the source LUN; therefore, write I/Os on the source LUN (the service) must be suspended during replication.

Highlights and application scenarios:
- Support for third-party storage.
- Applicable when the capacity of the source LUN is smaller than that of the target LUN.
- Data migration within one storage system and between storage systems.
- Data backup.
HyperReplication/S (synchronous remote replication)

Concept: the secondary site synchronizes data with the primary site in real time to achieve full protection of data consistency and minimize data loss in the event of a disaster.

Working principle: when a synchronous remote replication task is established, an initial synchronization replicates all data from the primary LUN to the secondary LUN. After the initial synchronization, a write request from the production host is processed as follows:
1. The primary LUN receives the write request and sets the differential log value to "differential" for the data block corresponding to the I/O.
2. The primary site writes the data to the primary LUN (LUN A) and sends the write request to the secondary site (LUN B) over the configured replication link.
3. If the data is successfully written to both LUN A and LUN B, the corresponding differential log value is changed to "non-differential"; otherwise the value remains "differential" and the data block is copied again in the next synchronization.
4. The primary site returns a write success acknowledgement to the host.

Highlights and application scenarios:
- Zero data loss.
- 32:1 replication ratio (synchronous and asynchronous remote replication combined).
- Mutual mirroring relationship between primary and secondary storage.
- Applicable to local or intra-city DR.
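The four-step synchronous write path (differential log, dual write, acknowledgement) can be modeled as below. The sketch assumes an unreliable replication link to show why the differential log matters; it is a conceptual model, not the product's code, and all names are invented.

```python
# Conceptual model of synchronous remote replication with a differential log.
# Blocks whose remote write failed stay marked "differential" and are copied
# again by the next resynchronization. Illustrative only.
import random

class SyncReplicationPair:
    def __init__(self, size):
        self.lun_a = [None] * size           # primary LUN
        self.lun_b = [None] * size           # secondary LUN
        self.differential = set()            # blocks pending resynchronization

    def remote_write(self, i, data):
        if random.random() < 0.2:            # simulate an occasional link failure
            return False
        self.lun_b[i] = data
        return True

    def host_write(self, i, data):
        self.differential.add(i)             # step 1: mark the block differential
        self.lun_a[i] = data                 # step 2: write locally...
        ok = self.remote_write(i, data)      # ...and send to the secondary site
        if ok:
            self.differential.discard(i)     # step 3: both copies consistent
        return "ack"                         # step 4: acknowledge the host

    def resynchronize(self):
        for i in sorted(self.differential):
            if self.remote_write(i, self.lun_a[i]):
                self.differential.discard(i)

if __name__ == "__main__":
    pair = SyncReplicationPair(8)
    for i in range(8):
        pair.host_write(i, f"block-{i}")
    pair.resynchronize()                     # copy any blocks the link dropped
    print("blocks still pending:", pair.differential)
```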
HyperReplication/A (asynchronous remote replication)

Concept: data is synchronized periodically to minimize the impact on service performance caused by the long latency of long-distance data transfer.

Working principle: when an asynchronous remote replication task is established, an initial synchronization copies all data from the primary LUN to the secondary LUN. After the initial synchronization, I/Os are processed as follows:
1. When a replication period starts, new time segments (TP N+1 and TP X+1) are created in the caches of the primary LUN (LUN A) and the secondary LUN (LUN B) respectively.
2. The primary site receives a write request from a production host.
3. The primary site writes the requested data into time segment TP N+1 and returns an acknowledgement to the host.
4. During synchronization, the storage system reads the data of cache time segment TP N of the primary LUN (from the previous period), transmits it to the secondary site, and writes it into cache time segment TP X+1 of the secondary LUN.
5. If the write cache of the primary site reaches the high watermark, cached data is automatically flushed to disks and a snapshot is generated for the data of time segment TP N; during synchronization, that data is read from the snapshot and copied to the secondary LUN.

The innovative multi-time-segment caching technology (patent number PCT/CN2013/080203) achieves a second-level RPO.

Highlights and application scenarios:
- Little impact on performance; RPO reduced to 5 seconds.
- 32:1 replication ratio (synchronous and asynchronous remote replication combined).
- Mutual mirroring relationship between primary and secondary storage.
- Applicable to local, intra-city, and remote DR.
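A simplified model of period-based asynchronous replication with alternating cache time segments: writes in period N+1 are acknowledged immediately, while the frozen period-N segment is shipped to the secondary site in the background. Everything beyond that idea (names, watermark and snapshot handling) is omitted or assumed.

```python
# Toy model of asynchronous replication using alternating time segments:
# the "current" segment absorbs new host writes, the "frozen" segment from the
# previous period is transmitted to the secondary LUN. Illustrative only.
class AsyncReplicationPair:
    def __init__(self):
        self.primary = {}                    # primary LUN contents (block -> data)
        self.secondary = {}                  # secondary LUN contents
        self.current = {}                    # time segment TP(N+1): new writes
        self.frozen = {}                     # time segment TP(N): being replicated

    def host_write(self, block, data):
        self.primary[block] = data
        self.current[block] = data           # acknowledged immediately (async)
        return "ack"

    def start_period(self):
        """Close the current segment and open a new one for the next period."""
        self.frozen, self.current = self.current, {}

    def replicate_frozen(self):
        """Ship the frozen segment to the secondary site in the background."""
        self.secondary.update(self.frozen)
        self.frozen = {}

if __name__ == "__main__":
    pair = AsyncReplicationPair()
    pair.host_write(1, "a"); pair.host_write(2, "b")
    pair.start_period()                      # period boundary: TP(N) is frozen
    pair.host_write(2, "b2")                 # lands in TP(N+1), not in this cycle
    pair.replicate_frozen()
    print(pair.secondary)                    # -> {1: 'a', 2: 'b'} (RPO = one period)
```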
HyperMirror (volume mirroring)

Concept: HyperMirror is a data backup technology that creates multiple physical mirror copies of a LUN for continuous backup and protection, significantly improving LUN reliability and availability.

Working principle: mirror LUNs are presented as ordinary LUNs, and mirror copies can come from local or third-party LUNs. Write I/Os go to both mirror copies A and B, while read I/Os are served from one of the copies. A mirror copy can be split and used for data recovery. Data between mirror copies is kept consistent.

Highlights:
- If a third-party LUN becomes inaccessible because of a third-party array failure, LUN services are not interrupted.
- If a LUN fails because of a local RAID 5 dual-disk (or RAID 6 triple-disk) failure, services are not interrupted, and data changes are automatically copied back to the LUN after the fault is rectified.
- After a mirror LUN is used for local HA, value-added features such as snapshot, replication, clone, or copy can be configured for it.
- Volume mirrors can improve LUN read performance.
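A minimal sketch of the mirror-LUN behavior described above: writes fan out to both copies, reads are served from a healthy copy, and a failed copy is resynchronized from the surviving one after repair. The class names and the resynchronization policy are assumptions.

```python
# Illustrative volume-mirror model: one logical mirror LUN backed by two copies.
import itertools

class MirrorCopy:
    def __init__(self, name):
        self.name, self.blocks, self.online = name, {}, True

class MirrorLUN:
    def __init__(self, copy_a, copy_b):
        self.copies = [copy_a, copy_b]
        self._rr = itertools.cycle(range(2))      # round-robin read selector

    def write(self, block, data):
        for copy in self.copies:
            if copy.online:
                copy.blocks[block] = data          # write to every online copy

    def read(self, block):
        for _ in range(2):                         # pick the next healthy copy
            copy = self.copies[next(self._rr)]
            if copy.online:
                return copy.blocks.get(block)
        raise IOError("no online mirror copy")

    def recover(self, failed, healthy):
        failed.blocks = dict(healthy.blocks)       # resync changes after repair
        failed.online = True

if __name__ == "__main__":
    a, b = MirrorCopy("local"), MirrorCopy("third-party")
    lun = MirrorLUN(a, b)
    lun.write(0, "x")
    b.online = False                               # third-party array fails
    lun.write(1, "y")                              # service continues on copy A
    print(lun.read(1))                             # -> "y"
    lun.recover(b, a)                              # changes copied back after repair
```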
HyperMirror application scenarios
- Scenario 1: intra-array volume mirrors improve LUN reliability; the mirror copies in a mirror group belong to different disk domains.
- Scenario 2: working with SmartVirtualization, a volume mirror combining a Huawei LUN and a third-party LUN improves the reliability of the third-party array.
- Scenario 3: working with SmartVirtualization, a volume mirror spanning two third-party arrays improves third-party array reliability and reuse efficiency.
Competitive analysis of HyperMirror
Among mainstream mid-range storage systems, only the IBM V7000 offers volume mirroring. No mainstream high-end storage system provides volume mirroring; it is available only in gateway products.

Comparison (EMC VPLEX / IBM V7000 & SVC / Huawei HyperMirror):
- Product type: virtual gateway / V7000: array, SVC: virtual gateway / array
- Third-party arrays supported: yes / yes / yes
- Round-robin read supported: yes / no / yes
- Replication services configurable for mirror LUNs: no, RecoverPoint required / yes / yes
- Value-added features inherited after conversion between non-mirror and mirror LUNs: N/A / yes / no
- Number of branches: 8 / 2 / 2
- Data recovery mode: incremental / incremental / incremental
- Cache: read and write cache / read and write cache / read and write cache
SmartVirtualization (intelligent heterogeneous virtualization)

Concept:
- External LUN: a LUN in a third-party storage system.
- eDevLUN: on the local storage system, a third-party LUN is virtualized into an eDevLUN. A user can map an eDevLUN to an application server or configure value-added services for it, just as for an ordinary LUN. An eDevLUN resides in a storage pool of the local storage system.

Data organization: the local storage system uses block-level virtualization to manage space. An eDevLUN consists of two parts:
- Metadata volume: records the data organization form and data attributes in a tree structure; the local storage system provides its storage space.
- Data volume: stores the actual user data; the external LUNs of the third-party storage system provide its storage space.

Application scenarios:
- Consolidation of heterogeneous storage systems (Huawei, EMC, IBM, and others) and full reuse of legacy storage systems.
- Heterogeneous data migration with LUN migration features.
- Heterogeneous data protection with snapshot, clone, and replication.
SmartMulti-Tenant (multi-tenancy)

SmartMulti-Tenant isolates the resources and data of tenants and delegates part of the resource management to tenants so that they can manage the resources in their own domains.

Working principle:
- The storage administrator manages all resources in the storage array and all tenants (tenants A, B, and C with their respective data); each tenant administrator manages only the resources of its own tenant.
- Rights- and domain-based management: tenant administrators can manage LUNs and monitor LUN performance; the storage administrator allocates LUNs to tenants and assigns and manages tenant administrators.
- Resource management: the storage administrator has full resource management permission, whereas tenant administrators can only query resources.

Application scenarios:
- Hosting environments of telecom service providers: isolation of billing, CRM, and payment systems, dealer portals, and applications.
- Large enterprises: isolation of HR records and of financial and customer information.
- Government departments: isolation of taxation, welfare, education, and national defense records.
SmartErase (data destruction)

SmartErase provides two data destruction methods to prevent sensitive data leakage:
- DoD: a software method of destroying data on writable storage media using three overwrite passes:
  1. Overwrite all addresses with an 8-bit character.
  2. Overwrite all addresses with the complement of that character (complements of 0 and 1).
  3. Overwrite all addresses with a random character.
- Customized: the system generates data based on internal algorithms and overwrites all addresses of the LUN a specified number of times; the number of passes ranges from 3 to 99, with a default of 7.
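The three DoD-style passes can be expressed directly. The sketch below erases an in-memory byte buffer standing in for a LUN; it only illustrates the pass pattern described above, not the product's erase engine, and the choice of the 8-bit character is an assumption.

```python
# Illustration of the overwrite patterns described on the slide: pass 1 writes a
# fixed byte, pass 2 its bitwise complement, pass 3 random data. Operates on an
# in-memory buffer standing in for a LUN; not SmartErase itself.
import os

def dod_erase(buf: bytearray, pattern: int = 0x55) -> None:
    size = len(buf)
    buf[:] = bytes([pattern]) * size               # pass 1: fixed 8-bit character
    buf[:] = bytes([pattern ^ 0xFF]) * size        # pass 2: its complement
    buf[:] = os.urandom(size)                      # pass 3: random data

def customized_erase(buf: bytearray, passes: int = 7) -> None:
    # "Customized" mode: overwrite a configurable number of times (3..99, default 7).
    if not 3 <= passes <= 99:
        raise ValueError("passes must be between 3 and 99")
    for _ in range(passes):
        buf[:] = os.urandom(len(buf))

if __name__ == "__main__":
    lun = bytearray(b"sensitive-data" * 4)
    dod_erase(lun)
    print(lun[:8].hex())                           # unrecoverable random bytes
```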
SmartTier (intelligent data tiering)

Working principle:
- Data monitoring: the I/O monitoring module counts the activity level of every data block.
- Distribution analysis: the data distribution analysis module ranks the activity level of each data block.
- Data migration: the data migration module migrates data according to the ranking and the migration policy.

Migration options:
- Migration mode: manual or automatic; I/O monitoring and migration periods can be configured.
- Migration speed: high, medium, or low.
- Migration policy: automatic migration, migration to a higher performance tier, migration to a lower performance tier, or no migration.

Highlights: SmartTier meets enterprise requirements for both performance and capacity. By preventing historical data from occupying expensive storage media, it ensures effective investment and avoids the energy consumed by unused capacity, reducing TCO and optimizing cost effectiveness.

Example comparison (with SmartTier / without SmartTier):
- Configuration: 12 x 200 GB SSDs + 36 x 300 GB 10k rpm SAS disks / 132 x 300 GB 10k rpm SAS disks
- Number of 2 U disk enclosures: 2 / 6
- Tier 0 application I/O latency: 2 ms / 10 ms
- Tier 1 application I/O latency: 7 ms / 20 ms
- Storage space utilization: 70% / 20%
- Power: 500 W / 1500 W
The IOPS of a virtual hybrid load (18 Exchange VMs, 2 database VMs, and 2 application VMs) reaches up to 26,564.
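The monitor-analyze-migrate loop above can be sketched as a simple ranking problem: count I/Os per extent during the monitoring period, rank extents, and fill tiers from hottest to coldest. The tier capacities and ranking policy below are invented for illustration; this is not SmartTier's algorithm.

```python
# Toy model of tiered data placement driven by per-extent I/O counts.
from collections import Counter

TIERS = [("tier0-SSD", 2), ("tier1-SAS", 2), ("tier2-NL-SAS", 100)]  # extents per tier

class TieringEngine:
    def __init__(self):
        self.io_counter = Counter()          # extent id -> I/Os in this period
        self.placement = {}                  # extent id -> tier name

    def record_io(self, extent_id, count=1):
        self.io_counter[extent_id] += count  # data monitoring

    def migrate(self):
        """Distribution analysis plus migration at the end of the period."""
        ranked = [e for e, _ in self.io_counter.most_common()]
        moves = []
        for tier, capacity in TIERS:
            for extent in ranked[:capacity]:
                if self.placement.get(extent) != tier:
                    moves.append((extent, self.placement.get(extent), tier))
                    self.placement[extent] = tier
            ranked = ranked[capacity:]
        self.io_counter.clear()              # start a fresh monitoring period
        return moves

if __name__ == "__main__":
    engine = TieringEngine()
    for extent, ios in {"e1": 500, "e2": 20, "e3": 300, "e4": 5}.items():
        engine.record_io(extent, ios)
    for move in engine.migrate():
        print(move)                          # e.g. ('e1', None, 'tier0-SSD')
```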
SmartThin (intelligent thin provisioning)

Working principle: the host sends write requests to a thin LUN; the storage system queries the mapping table and either updates an existing mapping or allocates new space from the pool on demand.
- Capacity-on-write: when the allocated space becomes insufficient, storage space is added in units of 64 KB by default.
- Mapping table: logical units and physical units (default granularity 64 KB) are associated through a mapping table.
- Direct-on-time: read and write I/Os on logical units are redirected to physical units through the mapping table.
Illustration: four 8 KB writes (32 KB of data) consume 256 KB of allocated space at a 64 KB granularity, versus 128 MB at a 32 MB granularity.

Highlights:
- Efficient allocation policies: resources are allocated at a 64 KB granularity, which is efficient for workloads with small data blocks.
- Various reclamation mechanisms: VMware VAAI, Symantec Storage Foundation, and Windows Server 2012 command-based reclamation, plus all-zero-page reclamation.

Application scenarios:
- Core service systems with demanding business continuity requirements, such as bank transaction systems: SmartThin expands system capacity online without interrupting ongoing services.
- Services whose data growth is hard to evaluate accurately, such as email and web disk services: SmartThin allocates physical space on demand, preventing waste.
- Mixed services with diverse storage requirements, such as carrier services: SmartThin arbitrates physical space contention, achieving an optimized space configuration.
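A minimal model of capacity-on-write with a 64 KB mapping granularity: the mapping table associates logical 64 KB grains with physical grains that are allocated only on first write. Everything beyond the 64 KB default granularity stated above is an assumption, and pool handling is greatly simplified.

```python
# Minimal thin-provisioning model: a mapping table from 64 KB logical grains to
# physical grains allocated lazily on first write (capacity-on-write).
GRAIN = 64 * 1024                              # default allocation granularity (64 KB)

class ThinLUN:
    def __init__(self, virtual_size):
        self.virtual_size = virtual_size
        self.mapping = {}                      # logical grain index -> physical grain

    def _grains(self, offset, length):
        return range(offset // GRAIN, (offset + length - 1) // GRAIN + 1)

    def write(self, offset, length):
        for grain in self._grains(offset, length):
            if grain not in self.mapping:      # capacity-on-write: allocate lazily
                self.mapping[grain] = len(self.mapping)   # next free physical grain
        return [self.mapping[g] for g in self._grains(offset, length)]

    def allocated_bytes(self):
        return len(self.mapping) * GRAIN

if __name__ == "__main__":
    lun = ThinLUN(virtual_size=2 * 1024**4)    # a 2 TB thin LUN
    lun.write(0, 8 * 1024)                     # four 8 KB writes, far apart
    lun.write(1 * 1024**3, 8 * 1024)
    lun.write(5 * 1024**3, 8 * 1024)
    lun.write(9 * 1024**3, 8 * 1024)
    print(lun.allocated_bytes() // 1024, "KB allocated for 32 KB of data")  # 256 KB
```

Running the example reproduces the slide's illustration: 32 KB of scattered data consumes 256 KB at the 64 KB granularity, rather than the full virtual capacity.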
SmartQoS (intelligent service quality control)

To ensure the performance of mission-critical services, SmartQoS sets a performance goal for each service; it is applicable to scenarios with mixed services.
- Priority-based policy: multiple I/O queues are maintained in the storage system; based on queue priority (critical services high, key services medium, general services low), I/O queues are managed by controlling front-end concurrency, CPU, cache, and back-end disk resources.
- Flow control (traffic limiting): when the performance of general services grows to the point of affecting critical services, the IOPS, bandwidth, and latency of non-critical applications are limited to keep them from consuming too many storage resources.
- Flow control (performance assurance): minimum performance indicators such as IOPS, bandwidth, and latency are set for critical applications, and the storage system maintains them regardless of workload changes. Example: with a target of 8000, the critical service stays in the expected 5000 to 8000 range while key services drop about 20% and general services about 40%.
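A toy scheduler illustrates the flow-control idea: grant IOPS by priority and honour a per-service ceiling for low-priority traffic, so that high-priority services keep their target when the system saturates. The classes and numbers are invented for illustration and are not SmartQoS's actual policy.

```python
# Toy flow-control model: each service has a priority and an optional IOPS limit;
# when the array is saturated, low-priority services are throttled first.
class Service:
    def __init__(self, name, priority, demand_iops, limit=None):
        self.name, self.priority = name, priority      # priority: 0 = highest
        self.demand, self.limit = demand_iops, limit
        self.granted = 0

def schedule(services, system_capacity):
    """Grant IOPS by priority, honouring per-service limits, until capacity runs out."""
    remaining = system_capacity
    for svc in sorted(services, key=lambda s: s.priority):
        want = svc.demand if svc.limit is None else min(svc.demand, svc.limit)
        svc.granted = min(want, remaining)
        remaining -= svc.granted
    return services

if __name__ == "__main__":
    services = [
        Service("critical-db", priority=0, demand_iops=8000),        # assured first
        Service("key-app", priority=1, demand_iops=6000),
        Service("general-backup", priority=2, demand_iops=9000, limit=2000),
    ]
    for svc in schedule(services, system_capacity=12000):
        print(f"{svc.name}: granted {svc.granted} IOPS")
```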
SmartPartition (intelligent cache partitioning)

SmartPartition ensures the performance of critical applications by partitioning core system resources.

Working principle: the cache is divided into partitions of various sizes and different applications are assigned to different cache partitions. The storage system automatically adjusts the host and disk access concurrency of each cache partition, ensuring the service quality of each application.

Highlights:
- Dedicated cache partitions isolate services, improving service reliability.
- Core services receive dedicated assurance while cache resources are fully utilized, significantly improving service quality.
- Applicable to scenarios with mixed services.
SmartCache (intelligent SSD caching)

SmartCache uses SSDs as a read cache and works with the RAM cache to accelerate data reads for LUNs and file systems. It also serves as a read cache for deduplication metadata to improve storage system performance.

Working principle:
- SmartCache pools manage the SSDs and support online capacity expansion and reduction.
- SmartCache partitions manage the allocated space and support service isolation based on service priority.
- Cyclic sequential writes enhance write performance and prolong SSD service life.
- SmartCache partitions can be added to or removed from LUNs and file systems, and the SmartCache function can be enabled or disabled flexibly per LUN or file system.

Application scenarios: services with hotspot areas and intensive small random read I/Os, for example databases, OLTP applications, web services, and file services.
V3 add-on features (file based)

NAS protocols: CIFS (Common Internet File System), NFS (Network File System), NDMP (Network Data Management Protocol).
Efficiency features: SmartThin (thin provisioning), SmartDedupe (intelligent data deduplication), SmartCompression (online compression), SmartQoS (intelligent service quality control), SmartPartition (intelligent cache partitioning), SmartQuota (quota management), SmartCache (intelligent SSD caching).
Data protection features: HyperSnap (snapshot), HyperReplication (remote replication), HyperLock (WORM).
HyperSnap (file system snapshot)
1. Second-level generation: when a snapshot of a file system is created, only the root node of the file system is copied and stored; no user data is copied, so snapshot generation completes in one or two seconds. Before data is modified, the snapshot file set and the primary file system share the same file system space, so the snapshot file set requires no additional space.
2. Second-level deletion: deleting a snapshot releases the root node pointers and the data exclusively occupied by the snapshot; file system data is not affected. Only exclusively occupied space is reclaimed, and shared data is not deleted. File system snapshots can be deleted in seconds, and the space they exclusively occupied is gradually reclaimed by background tasks.
HyperReplication (for file systems)
HyperReplication for file systems is a remote replication feature that supports only asynchronous replication between file systems. Remote replication, one of the core DR technologies, maintains data copies at two or more sites far apart from each other, preventing disasters from causing data loss.
Application scenarios: remote DR (a production center replicating to a DR center, with hosts accessing file systems over NFS/CIFS or FC/iSCSI) and centralized DR (multiple service sites replicating to a central DR site).
HyperLock (WORM)
The process for creating a WORM file system is similar to that for creating a common file system: you only need to set the type to WORM and specify the WORM properties when creating the file system.
File states:
- Initial: all newly created files are in this state and can still be changed.
- Locked: a file in this state is protected and cannot be modified, deleted, or renamed.
- Added (appending): data can be added to the end of a file in this state, but the file cannot be deleted or truncated.
- Expired: any user can delete a file in this state and can read its data or view its properties; any other operation request fails.
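The file states above can be captured as a small state machine that rejects disallowed operations. The transitions below follow the four states on the slide; retention handling is reduced to explicit calls, and the naming is a simplification, not the product's implementation.

```python
# Simple WORM-style state machine for a file: initial -> locked -> expired,
# with an optional appending ("added") state. Illustrative only.
class WormFile:
    def __init__(self, name):
        self.name, self.state, self.data = name, "initial", b""

    def write(self, payload, append=False):
        if self.state == "initial":
            self.data = self.data + payload if append else payload
        elif self.state == "added" and append:
            self.data += payload                   # appending only at the end
        else:
            raise PermissionError(f"write denied in state {self.state}")

    def lock(self):
        self.state = "locked"                      # protected: no modify/delete/rename

    def allow_append(self):
        self.state = "added"                       # data may be appended, not truncated

    def expire(self):
        self.state = "expired"                     # may be read or deleted, nothing else

    def delete(self):
        if self.state in ("initial", "expired"):
            self.data = b""
        else:
            raise PermissionError(f"delete denied in state {self.state}")

if __name__ == "__main__":
    f = WormFile("audit.log")
    f.write(b"record-1\n")
    f.lock()
    try:
        f.write(b"tamper")                         # rejected while locked
    except PermissionError as e:
        print(e)
    f.expire()
    f.delete()                                     # allowed once expired
```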
SmartQuota
SmartQuota (the quota management feature) manages and controls resources by limiting the maximum resources (capacity and number of files) allowed for a directory, preventing some users from over-consuming resources and ensuring normal resource utilization.
Examples: a directory used exclusively by manager A limits the resources available to manager A; a directory used by project group A limits the resources available to its engineers; a directory used by the sales department limits the resources available to its sales staff.
Typical configuration:
1. SmartQuota is enabled on a shared directory configured for a department, project group, or user to limit the maximum resources available for that directory.
2. Working with ACL permission management, SmartQuota allows users with the corresponding permissions to access the directories.
3. SmartQuota determines the resources available for each directory.
Contents
1. Positioning
2. Hardware and software architectures
3. Price structure and License mode
4. Competitive analysis
Quotation elements for the V3 series
- Storage form: SAN or SAN + NAS.
- Controller enclosure: controller enclosure specifications (AC or DC power, cache size).
- Storage unit: disk units (SAS, NL-SAS, or SSD) and disk enclosures (ordinary or high-density).
- Auxiliary devices: modem, cabinet.
- Value-added software: value-added storage software.
Software license: software and value-added functions
Base software:
- Block: basic software license for block (including Device Management, SmartThin, SmartMulti-Tenant, SmartMigration, SmartErase, Cloud Service); mandatory for SAN.
- File: upgrade license from block to unified storage (including SmartDedupe & SmartCompression (for FS), SmartQuota, NFS, CIFS, NDMP); configured when SAN storage is upgraded to unified or SAN + NAS integrated storage.
Value-added functions (optional), file: CIFS, NFS, NDMP license, SmartQuota (intelligent quota), HyperLock (WORM), HyperSnap, HyperReplication, SmartCache, SmartQoS, SmartPartition. Quotation of the file engine software: 1. Select the file functions; the items marked in red on the slide are already included in the base software package. 2. Other NAS value-added functions are optional.
Value-added functions (optional), block: HyperCopy (LUN copy), HyperClone (cloning), HyperMirror (volume mirroring), HyperSnap (snapshot), HyperReplication (remote replication), SmartCache (intelligent SSD caching), SmartQoS (intelligent service quality control), SmartPartition (intelligent cache partitioning), SmartDedupe & SmartCompression (intelligent data deduplication and compression), SmartTier (intelligent data tiering), SystemReporter (system report software), SmartVirtualization (intelligent heterogeneous virtualization), SmartMotion (intelligent data migration).
OceanStor UltraPath license: mandatory; included in the base software package for the 5300 V3 and quoted separately for other models.
Contents
1. Positioning
2. Hardware and software architectures
3. Price structure and License mode
4. Competitive analysis
Overall OceanStor V3 series competition strategy
(Radar chart comparing EMC VNX, NetApp, IBM Storwize, HP StoreServ, and Huawei OceanStor V3.)
- Brand: a new product, still lacking application references and brand strength.
- Architecture: innovative architecture with high performance, capacity, and scalability.
- Storage services: the five convergences.
- Pricing: aim about 20% lower than EMC.
Overall competition strategy: focus on traditional enterprise applications and private cloud environments; beat IBM and HP with technology, and beat EMC and NetApp on price.
OceanStor V3 series advantage highlights

Architecture:
1. Multi-controller architecture: scale-out up to 8 controllers; excellent reliability, performance, and scalability; used to beat the EMC VNX, HDS HUS, and IBM DS series.
2. All-in-one unified storage: block and file coexist in the controller enclosures; disk-pool-based block and file construction with high efficiency; beyond other competitors' capabilities, with the exception of NetApp.
3. Next-generation hardware architecture: 16 Gbit/s Fibre Channel or 56 Gbit/s IB host ports, a PCIe 3.0 system bus, and 12 Gbit/s SAS disk ports double the overall bandwidth and optimize performance; beyond competitors' capabilities.

Software:
4. Inline deduplication and compression: data is deduplicated and compressed before being stored, saving storage space; EMC and NetApp cannot support inline mode; only Huawei uses deduplication/compression acceleration cards to offload the workload.
5. Efficiency assurance software: QoS and cache partitioning ensure service SLAs; beyond other competitors' capabilities, with the exception of HDS high-end and mid-range storage and EMC high-end storage.
6. Data protection software: volume mirroring, a 5-second RPO, and quick data recovery comprehensively protect data; beyond competitors' capabilities.

Others:
7. SmartIO 4-port interface module: supports 8 or 16 Gbit/s Fibre Channel, 10GE, and FCoE; configurable by the user; flexibly adapts to network planning changes; beyond other competitors' capabilities, with the exception of NetApp.
8. Authentication: the SPC-1 performance test report is used to beat EMC (which has no such report) and other competitors (which show lower performance).
Competition landscape

Huawei:
- 6800 V3: 256 GB/512 GB, 1500 disks
- 5800 V3: 128 GB/256 GB, 1250 disks
- 5600 V3: 64 GB/128 GB, 1000 disks
- 5500 V3: 48 GB/96 GB/128 GB, 750 disks
- 5300 V3: 32 GB/64 GB, 500 disks
- 2600 V3 (not released): 32 GB, 500 disks
- 2200 V3 (not released): 8 GB/16 GB, 350 disks

EMC:
- VMAX 100K: 512 GB, 720 disks
- VMAX 10K: 128 to 512 GB, 1500 disks
- VNX8000: 256 GB, 1500 disks
- VNX7600: 128 GB, 1000 disks
- VNX5800: 64 GB, 750 disks
- VNX5600: 48 GB, 500 disks
- VNX5400: 32 GB, 250 disks
- VNX5200: 32 GB, 125 disks
- VNXe3200: 24 GB, 50 disks

NetApp:
- FAS8080 EX: 256 GB, 1440 disks
- FAS8060: 128 GB, 1200 disks
- FAS8040: 64 GB, 720 disks
- FAS8020: 48 GB, 480 disks
- FAS2554: 32 GB, 144 disks
- FAS2552: 32 GB, 144 disks
- FAS2520: 32 GB, 84 disks

IBM:
- DS8870: 1 TB, 1536 disks
- DS8800: 384 GB, 1536 disks
- V7000 (two controllers): 128 GB, 504 disks
- V7000 (two controllers): 64 GB, 504 disks
- V5000 (two controllers): 16 GB, 240 disks
- V3700: 16 GB, 120 disks
- V3500: 8 GB, 24 disks

HP:
- StoreServ 10800: 96 to 768 GB, 1920 disks
- StoreServ 10400: 96 to 384 GB, 960 disks
- StoreServ 7400: 32 GB (two controllers) to 64 GB (four controllers), 480 disks
- StoreServ 7200: 24 GB, 144 disks
- EVA P6500: 8 to 16 GB, 450 disks
- EVA P6300: 4 to 8 GB, 250 disks
- MSA 2040: 8 GB, 199 disks

HDS:
- VSP: 512 GB to 1 TB, 2048 disks
- HUS VM: 256 GB, 1152 disks
- HUS 150: 32 GB, 960 disks
- HUS 130: 16 GB, 264 disks
- HUS 110: 8 GB, 120 disks

Dell:
- Compellent SC8000: 32 to 128 GB, 960 disks
- SC4020: 32 GB, 120 disks
- EqualLogic PS6000: 8 GB, 24 disks
- EqualLogic PS4000: 8 GB, 24 disks
- MD3800: 8 GB, 192 disks
- MD3600: 4 GB, 96 disks

Note: the analysis is based on cache capacity and disk quantity; the V3 series is configured with two controllers. This comparison is used for bidding.
OceanStor V3 series vs. EMC VNX2 series

VNX architecture: a roughly 9 U stack of VNX disk arrays, control station, two VNX X-Blades, and two VNX block controllers with N+1 power. The control station (Linux OS) manages the X-Blades, the X-Blades (DART OS) provide file services, and the VNX SP (FLARE OS) provides block services; file and block run on independent controllers.

Capability comparison (V3 / VNX):
- Unified (file, block): yes / no
- Scale-out functionality: 8 controllers / 2 (block only)
- Thin provisioning: yes / yes
- Auto-tiering granularity: 4 MB / 256 MB
- Flash cache: yes / yes
- Deduplication/compression: inline / post-process
- Virtualization: yes / no
- Cache partitioning: yes / no
- Cache size and supported disks: advantage V3
- 16 Gbit/s FC and SmartIO: yes / no
- High-density disk enclosure: 75 disks / 60 disks
OceanStor V3 series vs. IBM Storwize series

Capability comparison (V3 / Storwize):
- Architecture: unified / SVC architecture plus XIV management GUI; file services via FM (two System x servers, SONAS)
- Unified (file, block): yes / no
- Scale-out functionality: 8 / 8
- Thin provisioning: yes / yes
- Auto-tiering granularity: 4 MB / 256 MB
- Flash cache: yes / yes
- Deduplication/compression: inline / no deduplication, inline compression only
- Virtualization: yes / yes
- Cache partitioning: yes / no
- Supported disks: advantage V3
- 16 Gbit/s FC, 12 Gbit/s SAS, SmartIO: yes / yes
- High-density disk enclosure: 75 disks / 60 disks
OceanStor V3 series vs. HP StoreServ series

Capability comparison (V3 / StoreServ):
- Architecture: unified / additional X3830 gateways provide the file service
- Unified (file, block): yes / no
- Scale-out functionality: 8 / 4
- Thin provisioning: yes / yes
- Auto-tiering granularity: 4 MB / 128 MB
- Flash cache: yes / yes
- Deduplication/compression: yes / yes
- Virtualization: yes / no
- Cache partitioning and QoS: yes / multi-tenancy only
- Cache and supported disks: advantage V3
- 16 Gbit/s FC, 12 Gbit/s SAS, SmartIO: yes / only 16 Gbit/s FC
- High-density disk enclosure: yes / no
OceanStor V3 series vs. NetApp FAS series

Capability comparison (V3 / FAS):
- Architecture: unified / 7-mode and c-mode offer different functionality
- Unified (file, block): yes / yes
- Scale-out functionality: 8 / 8
- Thin provisioning: yes / yes
- Auto-tiering: yes / no
- Flash cache: yes / yes
- Deduplication/compression: inline / post-process (inline compression)
- Virtualization: yes / yes
- Cache partitioning and QoS: yes / multi-tenancy only
- Cache and supported disks: advantage V3
- 16 Gbit/s FC, 12 Gbit/s SAS, SmartIO: yes / 6 Gbit/s SAS
- High-density disk enclosure: 75 disks / 48 disks
HUAWEI ENTERPRISE ICT SOLUTIONS A BETTER WAY
Copyright©2012 Huawei Technologies Co., Ltd. All Rights Reserved. The information in this document may contain predictive statements including, without limitation, statements regarding the future financial and operating results, future product portfolio, new technology, etc. There are a number of factors that could cause actual results and developments to differ materially from those expressed or implied in the predictive statements. Therefore, such information is provided for reference purpose only and constitutes neither an offer nor an acceptance. Huawei may change the information at any time without notice.