HP Serviceguard Extended Distance Cluster for Linux A.01.01 Release Notes

HP Part Number: T2808-90010
Published: May 2009

Legal Notices

© Copyright 2009 Hewlett-Packard Development Company, L.P.

Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel® and Itanium® are registered trademarks of Intel Corporation or its subsidiaries in the United States or other countries. Oracle® is a registered trademark of Oracle Corporation. UNIX® is a registered trademark in the United States and other countries, licensed exclusively through The Open Group. Linux® is a registered trademark of Linus Torvalds. Red Hat® is a registered trademark of Red Hat Software, Inc. SuSE® is a registered trademark of SuSE, Inc.

Table of Contents

1 HP Serviceguard Extended Distance Cluster for Linux A.01.01 Release Notes
    Announcements
    What's In this Version
    Compatibility Information and Installation Requirements
        Hardware Requirements
            Disk Arrays
            Other Hardware
        Software Requirements
            Operating Systems
            Supported Site to Site Distances
        Installing XDC
            Rolling Software Upgrades
            Verifying Installation
        Support for a 4-Node Cluster
        What Manuals are Available for this Version
        Further Reading
    Patches and Fixes in this Version
        Fixes
    Known Problems and Workarounds
        creating persistent device names on Red Hat 5
        disks are resynchronized at all times
        MD parameters in the package control script
    Software Availability in Native Languages

List of Figures
    1-1 Package failover sequence

List of Tables
    1-1 Supported Distance Between Sites
    1-2 Failover Sequence

1 HP Serviceguard Extended Distance Cluster for Linux A.01.01 Release Notes

Announcements

The HP Serviceguard Extended Distance Cluster (XDC) for Linux is a product that allows an HP Serviceguard for Linux cluster to operate across two separate sites, providing disaster tolerance for applications. XDC uses the Linux Multiple Device (MD) driver's software RAID capability and its associated tool, mdadm, to replicate data across those two sites. With Software RAID, two disks (or disk sets) are configured so that the same data is written to both disks as one "write transaction". If data on one disk set is lost, or if one disk set becomes unavailable, the data remains available from the second disk set.
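For readers unfamiliar with MD software RAID, the following is a minimal sketch of how a two-disk RAID 1 (mirror) device of the kind XDC manages could be created with mdadm. The device name /dev/md0 and the persistent device links under /dev/hpdev are illustrative only; in an XDC configuration the MD devices are created as described in the HP Serviceguard Extended Distance Cluster for Linux Deployment Guide.

# mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/hpdev/mylink-sdb /dev/hpdev/mylink-sdc

Every write to /dev/md0 is mirrored to both underlying disks, one at each site, which is what allows a package to continue running on the surviving disk set if one site's storage becomes unavailable.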
HP Serviceguard Extended Distance Cluster (XDC) for Linux A.01.01 is available for use with HP ProLiant servers and HP Integrity servers running Red Hat 4, Red Hat 5, and Novell SUSE Linux Enterprise Server 10 through the following product number:

• T2346BA - license, media, and documentation

Licenses for HP Serviceguard for Linux for the nodes in the cluster must be purchased separately.

What's In this Version

An extended distance cluster (also known as an extended campus cluster) is a Serviceguard cluster that has alternate nodes located in different data centers separated by some distance. Using Linux MD, Serviceguard XDC ensures high availability of data. Following are the key features of this product:

• HP Serviceguard A.11.18 or A.11.19 is required for Extended Distance Cluster A.01.01. HP Serviceguard A.11.16 is required for Extended Distance Cluster A.01.00.
• Support for Legacy and Modular packages. HP Serviceguard A.11.18 and A.11.19 support a new process for configuring packages. Packages created using this method are referred to as Modular packages. Packages created using previous versions of Serviceguard are referred to as Legacy packages. For more information on Modular packages, see Managing Serviceguard Seventh Edition.

IMPORTANT: When configuring Modular packages with HP Serviceguard A.11.19, the operation_sequence order, which defines the order in which the individual script programs are executed when packages start, must be modified. By default, the XDC script is started only after the pr_cntl script is started. The operation_sequence order must be modified to ensure that the pr_cntl script is started only after the XDC script is started.

By default, the following order is defined in the script:

operation_sequence $SGCONF/scripts/sg/pr_cntl.sh
operation_sequence $SGCONF/scripts/ext/xdc.sh
operation_sequence $SGCONF/scripts/sg/external_pre.sh
operation_sequence $SGCONF/scripts/sg/volume_group.sh
operation_sequence $SGCONF/scripts/sg/filesystem.sh
operation_sequence $SGCONF/scripts/ext/xdc_val.sh
operation_sequence $SGCONF/scripts/sg/package_ip.sh
operation_sequence $SGCONF/scripts/sg/external.sh
operation_sequence $SGCONF/scripts/sg/service.sh

The order must be changed as follows:

operation_sequence $SGCONF/scripts/ext/xdc.sh
operation_sequence $SGCONF/scripts/sg/pr_cntl.sh
operation_sequence $SGCONF/scripts/sg/external_pre.sh
operation_sequence $SGCONF/scripts/sg/volume_group.sh
operation_sequence $SGCONF/scripts/sg/filesystem.sh
operation_sequence $SGCONF/scripts/ext/xdc_val.sh
operation_sequence $SGCONF/scripts/sg/package_ip.sh
operation_sequence $SGCONF/scripts/sg/external.sh
operation_sequence $SGCONF/scripts/sg/service.sh
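After the operation_sequence entries have been edited, the modified package configuration would typically be validated and re-applied to the cluster with the standard Serviceguard commands, as in the minimal sketch below. The file name pkg1.conf is illustrative; the complete packaging procedure is described in the HP Serviceguard Extended Distance Cluster for Linux Deployment Guide.

# cmcheckconf -P pkg1.conf
# cmapplyconf -P pkg1.conf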
• Supported in storage environments where the following Fibre Channel based arrays are used for data storage:
  — HP StorageWorks Modular Storage Array - MSA1000 and MSA1500 families
  — HP StorageWorks Enterprise Virtual Array (EVA) family
  — HP StorageWorks XP family
• Supported in a 2 data center configuration.
• Supports 2 nodes with one node in each data center, or 4 nodes with two nodes in each data center. For information on requirements and restrictions for a 4 node environment, see "Support for a 4-Node Cluster".
• Quorum Server must be configured in a third location for cluster arbitration.
• Using MD, the disk sets are configured such that data written to one disk or disk set is replicated or mirrored on the other disk or disk set at the other site. As a result, if one disk set is lost, the data is still accessible from the other disk set. The data in the other disk set is always current and up-to-date, ensuring no data loss.
• You need to create the Multiple Device (MD) only once, at the time of configuration. From then onwards, you need only activate or deactivate the device (see the sketch after this list).
• XDC requires that you configure persistent device names using udev prior to configuring Software RAID.
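As a rough illustration of what activating or deactivating the device means at the MD level, an existing mirror can be assembled (activated) from its member disks and stopped (deactivated) with mdadm, as sketched below. In an XDC cluster these steps are normally performed by the XDC package control scripts, so the commands and the device names /dev/md0 and /dev/hpdev/mylink-* are shown for illustration only.

# mdadm --assemble /dev/md0 /dev/hpdev/mylink-sdb /dev/hpdev/mylink-sdc
# mdadm --stop /dev/md0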
Compatibility Information and Installation Requirements

To use Extended Distance Cluster, the following hardware and software must be installed and configured on all nodes.

Hardware Requirements

Following are the hardware requirements for HP Serviceguard Extended Distance Cluster A.01.01:

Disk Arrays
• HP StorageWorks Modular Storage Array - MSA1000 and MSA1500 families
• HP StorageWorks Enterprise Virtual Array (EVA) family
• HP StorageWorks XP family

Other Hardware
• QLogic Fibre Channel Host Bus Adapters

Software Requirements

Following are the software requirements for HP Serviceguard Extended Distance Cluster A.01.01:

• HP Serviceguard version A.11.18 or A.11.19 installed on the nodes of the cluster
• Network Time Protocol (NTP) - all nodes in the cluster must point to the same NTP server
• QLogic Driver - The version number of this driver depends on the version of the QLogic cards in your environment. Download the appropriate version of the driver from the following location:
  http://www.hp.com -> Software and Driver Downloads
  Select the Download drivers and software (and firmware) option. Enter the HBA name and click >>. If more than one result is displayed, download the appropriate driver for your operating system.

Operating Systems

• Red Hat 4 U3 or later
• Red Hat 5 or later
• Novell SUSE Linux Enterprise Server 10 or later

IMPORTANT: The SLES10 release included the kernel 2.6.16.21-0.8. With this kernel version, if you remove a disk from the MD device, it stops responding. If you have the SLES10 base version, you must upgrade the kernel to 2.6.16.27-0.6 to avoid this problem.

For more information on support of new updates or service packs, see the HP Serviceguard for Linux Certification Matrix available at:
http://www.hp.com/info/sglx

Supported Site to Site Distances

Table 1-1 shows the distance supported between sites based on the technology used:

Table 1-1 Supported Distance Between Sites

Type of Link                          Distance
Gigabit Ethernet Twisted Pair         50 meters
Short Wave Fiber                      500 meters
Long Wave Fiber                       10 kilometers
Dense Wave Division Multiplexing      100 kilometers

Installing XDC

Complete the following procedure to install HP Serviceguard Extended Distance Cluster A.01.01:

1. Insert the product CD into the drive and mount the CD.
2. Open the terminal window.
3. If you are installing XDC on Red Hat 4, run the following command:
   # rpm -Uvh xdc-A.01.01-0.rhel4.noarch.rpm
   If you are installing XDC on Red Hat 5, run the following command:
   # rpm -Uvh xdc-A.01.01-0.rhel5.noarch.rpm
   If you are installing XDC on Novell SUSE Linux Enterprise Server 10, run the following command:
   # rpm -Uvh xdc-A.01.01-0.sles10.noarch.rpm
   This command initializes and completes the XDC software installation.

NOTE: After installing the Extended Distance Cluster software, you need to configure Software RAID. The configuration involves creating a package directory and copying the raid.conf.template file into the relevant package directories. For more information on creating a package and copying the raid.conf.template file, see the HP Serviceguard Extended Distance Cluster for Linux Deployment Guide.

Rolling Software Upgrades

If you have already deployed XDC A.01.00 with HP Serviceguard for Linux 11.16, you can upgrade HP Serviceguard and the XDC software one node at a time without bringing down your clusters. You can complete this process at a time when one system needs to be taken offline for hardware maintenance or patch installations. For more information on completing a rolling upgrade, see the section "Upgrading to HP Serviceguard A.11.18 and XDC A.01.01" in the HP Serviceguard Extended Distance Cluster for Linux Deployment Guide, Second Edition.
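A high-level sketch of a rolling upgrade of one node is given below, assuming standard Serviceguard cluster commands and a Red Hat 4 node; the node name node1 is illustrative. The Deployment Guide section referenced above remains the authoritative procedure.

1. Halt cluster services on the node so that its packages fail over to an adoptive node:
   # cmhaltnode -f node1
2. Upgrade HP Serviceguard on that node as described in its release notes, then upgrade XDC:
   # rpm -Uvh xdc-A.01.01-0.rhel4.noarch.rpm
3. Return the node to the cluster:
   # cmrunnode node1
4. Repeat on the remaining nodes, one node at a time.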
Verifying Installation

After you install XDC, run the following command to ensure that the software is installed:

# rpm -qa | grep xdc

In the output, the product name, xdc-A.01.01-0, will be listed. The presence of this package verifies that the installation is successful.

Support for a 4-Node Cluster

XDC now supports a 4-node extended distance cluster for Linux. In this cluster configuration, 2 nodes are at one site, while the other 2 nodes form the other site. These sites are geographically dispersed.

To ensure that a package is capable of running on 4 nodes and that the data is replicated, you must specify an order of failover for a package in the package configuration file. The package configuration file contains an ordered list of nodes on which the package needs to run. This order of nodes is specified using the NODE_NAME parameter.

While specifying the order of nodes in the package configuration file, ensure that every node in one site is followed by a node of the other site. In other words, you must ensure that the first node and the second adoptive node for a package are in different sites. Similarly, the second node and the third adoptive node must be in different sites. The third and the fourth adoptive nodes must also be in different sites. With this configuration, if a package fails on a node in one site, it is targeted to start up on a node in a different site. Figure 1-1 and Table 1-2 elaborate this configuration order, and a sample NODE_NAME ordering is sketched at the end of this section.

Figure 1-1 Package failover sequence

In this figure, nodes N1 and N2 are in Datacenter 1 at Site 1, while N3 and N4 are in Datacenter 2 at Site 2. In the package configuration file, you need to specify the failover sequence such that N1 of Site 1 is followed by a node in Site 2. In this figure, you need to specify that N1 is followed by N3. Similarly, specify that N2 of Site 1 is followed by N4. Table 1-2 describes the failover sequence that you need to specify to ensure that there is no data loss when there is a site failure.

Table 1-2 Failover Sequence

Node    Site
N1      Site 1
N3      Site 2
N2      Site 1
N4      Site 2

or

Node    Site
N1      Site 1
N4      Site 2
N2      Site 1
N3      Site 2

In this table, every node is followed by a node from the other site. If this order is not used, site failures can result in data loss. In the event of a site failure at site S1, if N1 goes down first followed by N2, the result is a package failure. This failure occurs because, after N1 fails, the package may fail over to N2 at the same site if the recommended order of configuration is not specified. When the package fails over to N2 at the same site, it starts with a partial mirror disk. Starting with a single mirror disk prevents the package from starting at site S2 if N2 also fails: XDC prevents starting a package that has already started once before with a partial mirror disk. For this reason, it is also required that if one of the four nodes is brought down at a site for maintenance, one node at the other site must also be brought down. That way, if a node in one site fails, the package will fail over to a node in the other site.

IMPORTANT: It is recommended that you maintain an equal number of nodes at both sites during maintenance. However, this recommendation does not apply to a 2-node cluster environment.
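As a sketch of how the first ordering in Table 1-2 might appear in a package configuration file, the NODE_NAME entries could be listed as shown below. The package name pkg1 is illustrative; the node names correspond to Figure 1-1.

PACKAGE_NAME   pkg1
NODE_NAME      N1
NODE_NAME      N3
NODE_NAME      N2
NODE_NAME      N4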
What Manuals are Available for this Version

For information about configuring Software RAID in your Extended Distance Cluster setup, see the HP Serviceguard Extended Distance Cluster for Linux Deployment Guide available on the CD. Later versions of these documents will be available at http://www.docs.hp.com.

You can also see the following documents when using Software RAID:
• Managing HP Serviceguard for Linux Seventh Edition. This manual describes all basic cluster configuration and administration tasks.
• HP Serviceguard for Linux Version A.11.18 Release Notes.

These documents are available on the HP documentation web site at:
http://docs.hp.com/en/ha.html

Further Reading

Additional information about related high availability topics may be found on Hewlett-Packard's HA web page:
http://www.hp.com/go/ha

Support information, including current information on patches and known problems, is available from the Hewlett-Packard IT Resource Center:
http://itrc.hp.com (Americas and Asia Pacific)
http://europe.itrc.hp.com (Europe)

To receive the latest news about recommended patches, product support matrices, and recently supported hardware, go to the IT Resource Center site above and subscribe to the high availability programs tips and issues digest.

Patches and Fixes in this Version

There are no patches required for HP Serviceguard Extended Distance Cluster at the time of publication. However, this is subject to change without notice. For the most current information, contact your HP support representative or the Hewlett-Packard IT Resource Center.

Fixes

There are no known fixes at the time of this publication. However, this is subject to change without notice. For the most current information, contact your HP support representative.

Known Problems and Workarounds

The following describes known problems with the XDC software. This is subject to change without notice. For the most current information, contact your HP support representative. More recent information on known problems and workarounds may also be available on the Hewlett-Packard IT Resource Center:
http://itrc.hp.com (Americas and Asia Pacific)
http://europe.itrc.hp.com (Europe)

creating persistent device names on Red Hat 5

• What is the problem?
  The udev rules created by LMPUtils on Red Hat 5 do not create the persistent device names when used as is.
• What is the workaround?
  You must modify the rules generated by LMPUtils for Red Hat 5. These changes are explained in the following example. If the rule generated by LMPUtils is as follows:

  BUS="scsi", KERNEL="sd*", PROGRAM="/sbin/scsi_id", RESULT="3600508b3009259e05c6fae06fd350002", NAME="%k", SYMLINK="hpdev/mylink-sdb"

  then replace the single "=" with a double "==" for the parameters BUS, KERNEL, and RESULT to make it work for Red Hat 5. With these changes, the rule will appear as follows:

  BUS=="scsi", KERNEL=="sd*", PROGRAM="/sbin/scsi_id", RESULT=="3600508b3009259e05c6fae06fd350002", NAME="%k", SYMLINK="hpdev/mylink-sdb"

disks are resynchronized at all times

• What is the problem?
  When data on two disks is not synchronized, Software RAID resynchronizes the entire disk, rather than only updating the changed data. This may happen even after a basic cluster failover.
• What is the workaround?
  There is no workaround, as this is expected behavior with an MD driver.
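Although the full resynchronization cannot be avoided, its progress can be observed. A minimal check, assuming the MD device is active on the node, is to read /proc/mdstat, which reports the state of each MD device and, while a resynchronization is running, a progress indicator and estimated completion time:

# cat /proc/mdstat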
MD parameters in the package control script

• What is the problem?
  The package control script has a few MD-related parameters.
• What is the workaround?
  Earlier versions of HP Serviceguard supported MD as a multipathing software. As a result, the package control script includes certain configuration parameters that are specific to MD. Do not use these parameters to configure XDC in your environment. Following are the parameters in the configuration file that you must not edit:

  # MD (RAID) CONFIGURATION FILE
  # Specify the configuration file that will be used to define the md raid
  # devices for this package.
  #
  # NOTE: The multipath mechanisms that are supported for shared storage
  # depend on the storage subsystem and the HBA driver in the configuration.
  # Follow the documentation for those devices when setting up multipath.
  # The MD driver was used with earlier versions of Serviceguard and may
  # still be used by some storage system/HBA combinations. For that reason
  # there are references to MD in the template files, worksheets, and other
  # areas. Only use MD if your storage system specifically calls out its use
  # for multipath. If some other multipath mechanism is used (e.g. one built
  # into an HBA driver), then references to MD, RAIDTAB, RAIDSTART, etc.
  # should be commented out. If the references are in the comments, they can
  # be ignored. References to MD devices, such as /dev/md0, should be
  # replaced with the appropriate multipath device name.
  #
  # For example:
  # RAIDTAB="/usr/local/cmcluster/conf/raidtab.sg"
  RAIDTAB=""

  # MD (RAID) COMMANDS
  # Specify the method of activation and deactivation for md.
  # Leave the default (RAIDSTART="raidstart", RAIDSTOP="raidstop") if you
  # want md to be started and stopped with default methods.
  RAIDSTART="raidstart -c ${RAIDTAB}"
  RAIDSTOP="raidstop -c ${RAIDTAB}"

Software Availability in Native Languages

XDC software and documentation are not available in native language versions.