Note: I had a short opportunity to work with an HPE 3PAR 8400 disk array, so I wrote down my observations. This second part focuses on the more interesting topic, the configuration of two arrays in different data centers as a geographic cluster.
Official documentation
- HPE 3PAR Virtual Copy Snapshots and copy data management
- HPE 3PAR Remote Copy Software User Guide
- HP 3PAR Management Console User's Guide
- HPE 3PAR StoreServ Management Console 3.0 User Guide
What is replication and how to use it?
Replication is the process of creating an exact copy (replica) of data. Local replication refers to replicating data within the same array or data center. Remote replication replicates data to a remote data center or site (Remote site). A replica (copy) can be created once, repeatedly at some interval, or synchronously maintained with the same data as the source.
Data copies can be created for a variety of reasons. We can save the current state (Snapshot) before performing some risky operation, so that we can roll back in case of failure. We can create a copy (Clone) as a backup or to quickly create a test environment from production data. Using synchronous replication, we can maintain the same data on different hardware resources (e.g., different disks or arrays, or a different array in a different location) and in case of a failure (failure of these hardware resources) we can switch to the data copy and continue operation. With the techniques described at the end, the switchover can happen automatically without downtime.
The HPE 3PAR arrays support two types of local replication, Snapshot (Virtual Copy) and Clone (Physical Copy), and one method of remote replication called Remote Copy, which we can extend with Peer Persistence and a Quorum Witness to create a geographic cluster.
Local replication - Snapshot (Virtual copy)
Note: In the StoreServ Management Console (SSMC) the term Snapshot is used, but in the older HPE 3PAR Management Console the term Virtual copy is used.
Snapshot is a pointer-based replica that uses copy-on-write. It contains only the changes compared to the base data. It does not take up much space and we can roll back by overwriting only the changed data in the base volume. Snapshots can be created from:
- Virtual Volume - the basic operation, referred to as the Base Volume
- Snapshot - a snapshot of another snapshot
- Clone - a snapshot of a clone
We can create hundreds of snapshots if we have enough space. The Virtual Volume must have a Copy CPG (copy space) defined; this is where the pointers to the original volume are kept and where the original data is preserved when it is overwritten. A Snapshot can be read-only or read-write. Snapshot creation is integrated with a variety of applications such as MS SQL, MS Exchange, VMware vSphere, and Oracle. To use the Snapshot feature, we need the HPE 3PAR Virtual Copy software license.
Snapshots can have a limited lifetime; we can create and delete them on a schedule (scheduled), delete them manually (delete), revert the volume to their state (promote), or assign them to Hosts (export).
Setting the Copy CPG
The basic requirement to use Snapshot is that the volume must have a defined Copy CPG. This can be the same CPG as the User CPG or a different one.
HPE 3PAR Management Console
- in the bottom left corner (Manager Pane) switch to Provisioning
- in the tree (Management Tree) open Storage System - name - Virtual Volumes and on the right tab Virtual Volumes find the desired volume
- right-click (or in the menu) select Edit
- choose Copy CPG
SSMC (StoreServ Management Console)
- top left corner 3PAR StoreServ (Mega Menu) - Block Persona - select Virtual Volumes - select the desired volume (Virtual Volume) - in the dropdown Actions menu click Edit
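The Copy CPG can also be set from the CLI with the setvv command; a minimal sketch (the CPG and volume names are only examples):
cli% setvv -snp_cpg CPG_FC_r6 MY_VOLUME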
Creating a Snapshot
Note: In MC, the Snapshot is visible under the Volume where it was created (in general, snapshots are nested according to the parent/child relationship). In SSMC, it is displayed between Virtual Volumes and we can enable filtering by type.
HPE 3PAR Management Console
- in the bottom left corner (Manager Pane) switch to Provisioning
- in the tree (Management Tree) open Storage System - name - Virtual Volumes and on the right tab Virtual Volumes find the desired volume
- right-click (or in the menu) select Create Virtual Copy (Destination Name defines the format used to name the Snapshots; the generated name may exceed the allowed length, in which case it needs to be shortened)


SSMC (StoreServ Management Console)
- top left corner 3PAR StoreServ (Mega Menu) - Block Persona - select Virtual Volumes - select the desired volume (Virtual Volume) - in the dropdown Actions menu click Create snapshot
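From the CLI, a snapshot is created with the createsv command; a sketch with example names (-ro makes it read-only, -exp sets an expiration time):
cli% createsv -ro -exp 7d MY_VOLUME_snap MY_VOLUME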
Reverting to a Snapshot
We can promote a Snapshot to any writable (RW) parent. This means we can revert the volume to the state when the Snapshot was created. The original data (which had been changed on the volume) is written back from the Snapshot to the volume. By default, it is reverted to the Base Volume. During this operation, the volume must not be assigned/exported to a Host.
HPE 3PAR Management Console
- in the bottom left corner (Manager Pane) switch to Provisioning
- in the tree (Management Tree) open Storage System - name - Virtual Volumes and on the right tab Virtual Volumes find the desired volume (Snapshot)
- right-click (or in the menu) select Promote Virtual Copy

SSMC (StoreServ Management Console)
- top left corner 3PAR StoreServ (Mega Menu) - Block Persona - select Virtual Volumes - select the desired volume (Snapshot) - in the dropdown Actions menu click Promote snapshot
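The CLI equivalent is the promotesv command, run on the snapshot (example name only); as noted above, the volume must not be exported to a Host during the operation:
cli% promotesv MY_VOLUME_snap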
Deleting a Snapshot
HPE 3PAR Management Console
- in the bottom left corner (Manager Pane) switch to Provisioning
- in the tree (Management Tree) open Storage System - name - Virtual Volumes and on the right tab Virtual Volumes find the desired volume (Snapshot)
- right-click (or in the menu) select Remove
SSMC (StoreServ Management Console)
- top left corner 3PAR StoreServ (Mega Menu) - Block Persona - select Virtual Volumes - select the desired volume (Snapshot) - in the dropdown Actions menu click Delete
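From the CLI, a snapshot is removed like any other virtual volume with the removevv command (example name only):
cli% removevv MY_VOLUME_snap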
Local replication - Clone (Physical copy)
Note: In the StoreServ Management Console (SSMC) the term Clone is used, but in the older HPE 3PAR Management Console the term Physical copy is used.
Clone is a full copy (full-copy replica) of a Virtual Volume at a certain point in time (point-in-time). To create a clone, we need to assign a Destination Volume (target volume) of the same or larger size (this Volume must be prepared in advance) to the Base Volume. During cloning, all base data is copied to the target volume. After the first write, the copies are no longer synchronized (the clone contains different data than the source). We can keep a Snapshot, in which the subsequent changes are recorded, and later perform a Resync, which copies the current (changed) data to the clone. No special license is required (only if we use a Snapshot for future resynchronization).
The source volume can be any volume assigned to clients, for some functions it must have a defined Copy CPG. The target volume must not be exported. When the copy is created, its type is changed to Clone.
Creating a Clone
HPE 3PAR Management Console
- in the bottom left corner (Manager Pane) switch to Provisioning
- in the tree (Management Tree) open Storage System - name - Virtual Volumes and on the right tab Virtual Volumes find the desired volume
- right-click (or in the menu) select Create Physical Copy

SSMC (StoreServ Management Console)
- top left corner 3PAR StoreServ (Mega Menu) - Block Persona - select Virtual Volumes - select the desired volume (Virtual Volume) - in the dropdown Actions menu click Create clone
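From the CLI, a clone is created with the createvvcopy command; a sketch with example names (the -s switch should correspond to Save snapshot for later resync, and the destination volume must already exist):
cli% createvvcopy -p MY_VOLUME -s MY_VOLUME_clone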
Resync Clone
This operation is available only if we checked Save snapshot for later resync when creating the clone; the resync then copies the data changed since the last synchronization from the source to the clone.
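In the CLI, the resynchronization should be possible with the -r switch of createvvcopy, run against the clone (example name only):
cli% createvvcopy -r MY_VOLUME_clone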
Remote Copy (RC) - Remote Replication
Remote replication of arrays, where we want to have the same data in two different locations, is much more complex than local replicas. On the 3PAR arrays, HPE has a relatively simple and quite universal solution (it can be used between different models) called Remote Copy (we need the HPE 3PAR Remote Copy software license and the HPE 3PAR Virtual Copy software license). By default, it operates in Active/Passive (unidirectional) mode: writes go to the primary volume and are replicated one way to the target volume. For Active/Active (bidirectional) mode, we need to create multiple RC Groups, each of which is primary on a different array.
Supported Connectivity
Connectivity between arrays is supported via:
- Fibre Channel (RCFC) - recommended, more expensive, lower latency, possibly higher speeds, either direct FC connection or FC SAN, uses a proprietary protocol, possible connection within a campus or using DWDM or longwave lines, uses two FC links (high availability and increased throughput)
- IP (RCIP) - Ethernet (IP) connection between data centers is more common and cheaper, uses Gigabit Ethernet TCP/IP, RCIP must be on a different network than the management, ports must have unique IP addresses
- or Fibre Channel over IP (FCIP) with additional hardware, supports only asynchronous replication
Replication Types
Replication can be:
- Synchronous replication (mode) - continuous operation and synchronization, data is in the same state at both locations (always the same), but there is a delay because we always wait for confirmation of the write by the target system (so rather for shorter distances - requires low latency and high throughput link), requires a link with a maximum RTT (round trip time) of 5 ms (in older OS versions it was 2.6 ms)
- Asynchronous replication - data is written locally and confirmed immediately, then replicated to the target system, so the data does not have to be exactly the same on both sides
- Asynchronous periodic mode - a local snapshot is created, which is replicated to the target array at a regular interval (e.g., 5 minutes), or manually (only the changes)
- Asynchronous streaming mode - data is replicated to the target array at the maximum speed of the available link immediately after the local write (at that moment it is stored in the cache), so we don't wait for target confirmation and still have almost identical data on the target (depends on the link)
- Synchronous Long Distance (SLD) - we need 3 different arrays, synchronous replication is performed over a shorter distance and asynchronous replication over a longer distance at the same time
Operational Principle
One array is designated as the Primary System and contains the Primary Volume or Source Volume, the other where the replication is performed is designated as Secondary, Backup, or Target System and has the Target Volume. The Primary Volume and its Target Volume form a Copy Pair.
We create a Remote Copy Group (RCG), which contains pairs of volume groups (Virtual Volume Sets), i.e., a grouping of multiple Volumes that are copied in the same way and at the same time. Writes are applied on the target in the same order as on the source, even across different Volumes (this can be important, for example, for databases that span multiple disks). Most remote replication operations are performed on the RCG, and using an RCG also simplifies management.
We can use 3PAR Autonomic Replication, where we combine a pair of CPGs with a Remote Copy Group, and then changes to the primary Virtual Volume are automatically reflected on the target (e.g., creating a new volume and changing the size).
In addition to replicating from one array to one (unidirectional or bidirectional), we can also use other variations. We can do one to two (but we need multiple RCGs, each can be sent to one array), several (up to four) to one, and a special four to four.
Communication Failure, Failover to Secondary System
If there is a failure (Failure, communication or target system) and the remote side is unavailable (so replication cannot continue), writes continue on the primary volume and a Snapshot is created for later synchronization, which can be done manually or automatically. The Virtual Volume being replicated must have a Copy CPG set.
If the primary system fails, we can switch the Secondary Volume Group to primary on the target system (Failover). At that moment, the replication direction changes, writes and export (assignment to the client) are allowed on the originally target volume. We can then start applications in the backup DC in the same state as when the primary DC failed.
Configuring Remote Copy
Depending on the protocol used (RCFC, RCIP, FCIP) and on how many arrays we are connecting (the classic case is 1-to-1), the configuration details change, but the basics are the same:
- connect the arrays and configure the RC Ports
- create the Remote Copy configuration
- create a Remote Copy Group - this determines what is replicated
The following is a brief description of the configuration, which is only for the HPE 3PAR Management Console.
Note: For all configurations related to Remote Copy, the Management Console must be connected to both (all) arrays participating in the RC, otherwise many configurations cannot be performed.
Configuring Remote Copy Ports
- in the bottom left corner (Manager Pane) switch to Systems
- in the Common Actions panel click Configure FC Port (or iSCSI or RCIP)
- configure the array ports (Host, RC)
- in the bottom left corner (Manager Pane) switch to Remote Copy
- in the Common Actions panel click Configure RC Port
- configure the ports for RC
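For RCIP ports, a CLI sketch could look like this (the port positions, IP addresses, and netmask are only examples; one RCIP port is configured on each node):
cli% controlport rcip addr 192.168.10.11 255.255.255.0 0:3:1
cli% controlport rcip addr 192.168.10.12 255.255.255.0 1:3:1
cli% showport -rcip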
Creating the Remote Copy Configuration
- in the bottom left corner (Manager Pane) switch to Remote Copy
- in the Common Actions panel click New Configuration
- set the available systems as primary and target, configure the links (bindings of two RC ports on each side), and immediately create the first Remote Copy Group (description below)
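In the CLI, the equivalent is to define the partner system as a Remote Copy target and start Remote Copy; a sketch with example names and addresses (a matching target pointing back must be created on the other array):
cli% creatercopytarget STORAGE2 IP 0:3:1:192.168.10.21 1:3:1:192.168.10.22
cli% startrcopy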
Creating Remote Copy Group (RCG)
- in the bottom left corner (Manager Pane) switch to Remote Copy
- in the tree (Management Tree) select Remote Copy Configuration
- in the Common Actions panel click Create Remote Copy Group

- in the first step Groups we set which system is the primary (source) and which is the target (backup), we set the group name (Group) and select the replication mode

- in the second step Virtual Volumes we select the offered volume (only those Volumes that meet the requirements, e.g., must have a Copy CPG) on the source and target (here we can let a new Volume be created, this will correctly set the same Volume WWN), the Add button creates a Volume Pair (we can create multiple)

- click Next, the summary is displayed, and Finish creates the group
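A CLI sketch of the same steps (the group, target, and volume names are only examples; the -createvv switch should let the secondary volume be created automatically):
cli% creatercopygroup RC_GROUP_1 STORAGE2:sync
cli% admitrcopyvv -createvv MY_VOLUME RC_GROUP_1 STORAGE2:MY_VOLUME
cli% startrcopygroup RC_GROUP_1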
Operations on RCG
We can pause and start data replication between the arrays (on a given group).
- in the bottom left corner (Manager Pane) switch to Remote Copy
- in the tree (Management Tree) open Remote Copy Configuration - Groups
- right-click on the desired group and select Start Remote Copy Group(s) or Stop Remote Copy Group(s)

When the primary system fails, we need to perform a Failover to the backup array. First we stop the group, then we perform the switchover; once the original primary system is available again, we synchronize the changes back (Recover) and return the roles to the original state (Restore).
- in the bottom left corner (Manager Pane) switch to Remote Copy
- in the tree (Management Tree) open Remote Copy Configuration - Groups
- right-click on the desired group and select Failover Remote Copy Group(s) - the target volumes become primary and can be exported to Hosts
- after the original primary system is back online, right-click on the desired group and select Recover Remote Copy Group(s)
- right-click on the desired group and select Restore Remote Copy Group(s)
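The same operations are available in the CLI via setrcopygroup, run against the group on the appropriate system (failover and recover are issued on the backup array); a sketch with an example group name:
cli% stoprcopygroup RC_GROUP_1
cli% setrcopygroup failover RC_GROUP_1
cli% setrcopygroup recover RC_GROUP_1
cli% setrcopygroup restore RC_GROUP_1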
Remote Copy with Peer Persistence
We can take remote replication capabilities even further by adding the Peer Persistence feature. The basic Remote Copy ensures that we have the same data in two locations, but only the primary location (disk array and its servers) is active. If there is a failure, we need to perform several manual operations in the backup location for that location to start functioning.
Peer Persistence together with Quorum Witness ensures that servers can be active in both locations (but write to the array in only one location) and, in case of a failure (of the primary array or the entire location), the functionality automatically switches to the second location (including the array). This simplifies Disaster Recovery and increases availability. It allows us to create a geographic cluster (geocluster, metrocluster).
For Peer Persistence, we need an additional HPE 3PAR Peer Persistence software license. It is supported for VMware, Hyper-V, Oracle RAC, Windows Server, Red Hat, and HP-UX.
In the following diagram, I have tried to schematically capture everything that is involved in this functionality.

Peer Persistence Requirements
Peer Persistence is a high availability (HA) configuration for two locations in metropolitan distance (MetroCluster), which have synchronous replication. There are several critical conditions for it to work:
- each Host (server) is connected to both arrays (so it must be able to see both locations - we need stretched fabric)
- each Host (server) must support ALUA; the arrays then present path information so that, for a given LUN, the paths appear to the server as if they led to a single array; the Active paths are those to the primary array, the others are Standby, and in case of a failure they switch automatically (Multipath works)
- each (used) Volume is synchronously mirrored to the second location with the same Volume WWN and has Path Management and Auto Failover policies set (for automatic switchover)
Array Failover
Peer Persistence allows us to perform manual or automatic transparent switchover from the primary to the backup array, where client I/O requests are redirected without downtime.
- manual switchover (switchover or manual transparent failover) - manually redirects requests from one array to the other and reverses the replication direction, suitable for maintenance or optimization
- automatic failover (automatic transparent failover) - if the primary system fails, it automatically redirects requests to the backup, requires the HPE 3PAR Quorum Witness
Quorum Witness
Quorum Witness (QW) monitors the 3PAR systems for failure and determines when a failover should be performed. It is a virtual appliance (VMware or Hyper-V) that must be deployed in a third location. The connection between the arrays and the Quorum Witness is via IP (not FC), which simplifies the implementation, and it also uses a different path (it must not be connected via the RC links). Thanks to its independence from the locations and the link, the QW server can detect a failure of the source or backup system, of a location, or of the link between the locations.
The Quorum is the pair of HPE 3PAR arrays configured with Peer Persistence. The Quorum Witness regularly reads data from both arrays. If the primary system fails, the backup array detects that the QW is no longer receiving data from the primary array and performs a failover. This way it can be distinguished whether only the RC link between the arrays failed (in which case no failover is performed) or whether the source system itself actually failed.
Configuring Peer Persistence
If we already have Remote Copy configured and possibly even created a Remote Copy Group with synchronous replications, enabling Peer Persistence with Quorum Witness is quite simple. I also assume we have deployed a virtual machine with the HPE 3PAR Quorum Witness.
Enabling Peer Persistence and setting the Quorum Witness address
- in the bottom left corner (Manager Pane) switch to Remote Copy
- in the Common Actions panel click Peer Persistence Configuration
- enter the IP address for the Quorum Witness and check the box
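As far as I know, the Quorum Witness can also be registered from the CLI with the witness subcommand of setrcopytarget (verify the exact syntax for your 3PAR OS version; the IP address and target name are only examples):
cli% setrcopytarget witness create 192.168.20.50 STORAGE2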

Furthermore, we need to enable two policies on the RCG (we set them on the source group, i.e., the primary system). Unfortunately, this cannot be done through the Management Console; we have to use either SSMC (where the checkboxes are in the RCG edit dialog) or the CLI. We can verify that the policies are enabled on the group with the showrcopy command and its -qw switch, which displays extended information related to Peer Persistence (only part of the output is shown in the example).
cli% showrcopy -qw

Remote Copy System Information
Status: Started, Normal
...

Group Information
Name        Target    Status   Role       Mode  Options
RC_GROUP_1  STORAGE1  Started  Secondary  Sync  auto_failover,path_management
...
Enabling Path Management policy
This policy ensures uniform host access, i.e., that the Host can access the same volume on both arrays. Using ALUA, only the paths to one array are set as ACTIVE and those to the other as STANDBY (and they switch in case of a failure). If the policy is not enabled, the paths to both arrays are ACTIVE and the Host sees two volumes (on the backup array the volume is read-only).
cli% setrcopygroup pol path_management <group>
Enabling Auto Failover policy
For automatic switchover to the secondary array in case of a failure, this policy must be enabled.
cli% setrcopygroup pol auto_failover <group>
(optional) Enabling Auto Recover policy
If all RC links between the array pair are unavailable, the RCG will stop (stop). With this policy enabled, the RCG will automatically start when communication is restored (this can also be set in the Management Console).
cli% setrcopygroup pol auto_recover <group>