- 1 Deploying ftserver 4800 with AUL 11.0.3 preloaded with RHEL 7.3
- 1.1 First boot of new ftserver with preloaded RHEL 7.3
- 1.2 Unsupported commands
- 1.3 Checking current fault-tolerance (Duplex) status
- 1.4 To verify whether Automatic Uptime Layer is working properly
- 1.5 Mapping SAN disks via FC to ftserver
- 1.6 Raid configuration on ftservers using RHEL
- 1.7 VTM configuration for remote management of ftserver
- 1.8 ftserver network bonding
- 1.9 Installing ftserver in rack
- 1.10 Buggrabber logs for support
Deploying ftserver 4800 with AUL 11.0.3 preloaded with RHEL 7.3
The following information can be useful when deploying an ftServer 4800 with AUL 11.0.3 preloaded with RHEL 7.3.
First boot of new ftserver with preloaded RHEL 7.3
During the first boot you must accept several licenses by typing 'yes' three or four times. You are then prompted to create a password for the admin user. After that you can log off from the admin user and log in as root. The default root password is ftServer.
Unsupported commands
Please note that 'lspci' and 'sysreport' are unsupported commands on ftServer systems: running them disrupts the lockstepping of the ftServer, which can cause CPU/IO elements to be shot down and the system to go simplex.
Checking current fault-tolerance (Duplex) status
To check whether the server is running in fault-tolerant (Duplex) state, use:
ftsmaint ls
ftsmaint also has other options apart from ls; they can be listed by typing just ftsmaint.
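As a quick sketch, the =ftsmaint ls= output can be scanned for devices that are not duplexed. The helper and the sample output format below are illustrative assumptions, not captured from a real ftServer:

```shell
# Hypothetical helper: print any device line that is not in DUPLEX state.
# Assumes an `ftsmaint ls`-style columnar output; the real format may differ.
check_duplex() {
    grep -Ei 'SIMPLEX|BROKEN' || true
}

# Illustration on made-up sample output; on a real system you would
# pipe the actual command output: ftsmaint ls | check_duplex
printf '10/0 CPU DUPLEX\n10/1 CPU SIMPLEX\n' | check_duplex
```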
To verify whether Automatic Uptime Layer is working properly
To check whether Automatic Uptime Layer is functioning properly, run the verification command; it should output two lines, both with status 'PASS'.
Mapping SAN disks via FC to ftserver
Initially, SAN disks mapped via FC do not show up automatically on the ftServer. Reboot the server; during boot it will print warnings and offer to generate a new initramfs. Answer yes (y) to all prompts and let the server reboot once more. After the reboot with the new initramfs, the disks appear properly in both fdisk -l and multipath -ll.
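As an aside, on stock Linux a rescan of the FC/SCSI hosts can often make newly mapped LUNs visible without a reboot. This is generic Linux behaviour, not ftServer-specific, so the reboot procedure above remains the documented path here:

```shell
# Rescan all SCSI/FC hosts so newly mapped LUNs appear (needs root).
# Harmless on systems without FC hosts; write errors are ignored.
rescan_fc() {
    for scan in /sys/class/scsi_host/host*/scan; do
        [ -e "$scan" ] || continue
        echo "- - -" > "$scan" 2>/dev/null || true
    done
    # refresh multipath maps if the tool is installed
    if command -v multipath >/dev/null 2>&1; then
        multipath -r
    fi
    return 0
}

rescan_fc
```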
Raid configuration on ftservers using RHEL
RAID configuration on an ftServer running RHEL is done with mdadm, the default Linux software RAID tool. The OS boot disk uses about 80GB for OS, swap, crash dumps, etc., and the remaining space is free. The boot disk uses a GPT partition table, so a new partition in the remaining free space must be created with parted; fdisk cannot be used.
RAID should be built on corresponding disks, i.e. disks at the same location in both CRUs. To get the mapping between a disk's CRU location and its device name (/dev/sdb, etc.), use:
ls -l /dev/disk/by-duid/
After the partitions are created with parted on the corresponding disks, use:
mdadm -C /dev/md<n> -l1 -n2 /dev/sdt4 /dev/sdu4
(device names are examples) to create the RAID array that uses the remaining free space on the OS disks.
For non-OS disks, first use parted to delete the default "Microsoft reserved" partition and then create a new partition for RAID. Example parted steps for a 1200GB disk, after running parted /dev/sd<x>, are as follows:
unit s
print
rm 1
print
mkpart primary 34 2344000000
print
set 1 raid on
print
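The same interactive session can be scripted with parted's non-interactive mode. The helper below only prints the commands (device name and sector bounds are the example values from the text), so the output can be reviewed before piping it to a root shell:

```shell
# Emit non-interactive parted commands equivalent to the interactive
# session above. Arguments: device, start sector, end sector.
parted_raid_cmds() {
    disk=$1; start=$2; end=$3
    printf 'parted -s %s rm 1\n' "$disk"
    printf 'parted -s %s mkpart primary %ss %ss\n' "$disk" "$start" "$end"
    printf 'parted -s %s set 1 raid on\n' "$disk"
}

# Example values from the 1200GB-disk walkthrough above:
parted_raid_cmds /dev/sdx 34 2344000000
# review the printed commands, then: parted_raid_cmds ... | sudo sh
```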
To make RAID configuration persistent across reboots use:
mdadm --detail -sv >> /etc/mdadm.conf
Then remove the duplicate entries for root, swap, etc. that were already present before, and keep the new entries.
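The manual cleanup can be sketched as a small filter that keeps only the newest (last) ARRAY line per md device, matching the instruction to keep the freshly appended entries. It assumes one-line ARRAY entries (as produced without verbose continuation lines), so review the result before replacing the original file:

```shell
# Keep only the last ARRAY line per md device; pass other lines through.
# Assumes one ARRAY entry per line. Uses tac (GNU coreutils) to scan
# from the end so the newest entry wins.
dedupe_mdadm_conf() {
    tac "$1" | awk '/^ARRAY/ { if (seen[$2]++) next } { print }' | tac
}

# usage (review before overwriting):
#   dedupe_mdadm_conf /etc/mdadm.conf > /etc/mdadm.conf.new
#   mv /etc/mdadm.conf.new /etc/mdadm.conf
```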
You can check RAID status at any time using:
cat /proc/mdstat
Note that without RAID the disks in =ftsmaint ls= show =SIMPLEX=. After configuring RAID they may pass through =SYNCING= and similar states, and once the RAID is fully synced they should reach =DUPLEX=. A disk therefore goes to =DUPLEX= only after it has been successfully added to the software RAID.
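A minimal sketch of checking sync progress from /proc/mdstat ('recovery' and 'resync' are the standard md-driver progress keywords):

```shell
# Return success when an mdstat-format file shows no array still syncing.
raid_synced() {
    ! grep -Eq 'recovery|resync' "$1"
}

# e.g.: raid_synced /proc/mdstat && echo "all arrays synced"
```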
Reboot the server and verify that the RAID partitions come up properly after the reboot.
VTM configuration for remote management of ftserver
To configure the VTM, use the =VTMConfig= command. For a static-IP VTM configuration it is necessary to give a domain name. The first VTM configuration takes about 1-2 minutes. The VTM IP addresses of the two CRUs are different. After logging in to the VTM for the first time with ADMIN:ADMIN, it takes another 1-2 minutes before the proper Duplex status is shown in the VTM browser window.
ftserver network bonding
ftServer network bonding is regular Linux network bonding configured through files in /etc/sysconfig/network-scripts/. Bonds are named bond0, bond1, etc. You can inspect ifcfg-bond0 and the member NIC files, which carry settings such as SLAVE=yes and MASTER=bond0. Default bonds are created in active-backup (mode 1) with DHCP-based IP addressing; the config files can be modified as required. Other ftServer-specific bonding details are given below. To see which bond interfaces are up, what IP is configured, and which physical interfaces (e.g. eth100600) are part of each bond, a standard command such as 'ip addr' can be used.
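The kernel's bonding driver also exposes each bond's state under /proc/net/bonding/; a small helper can summarize the mode, link status, and member NICs (the field names are the standard bonding-driver output format):

```shell
# Summarize one bond from its /proc/net/bonding/<bond> file:
# bonding mode, link (MII) status, and member interfaces.
bond_summary() {
    grep -E '^(Bonding Mode|MII Status|Slave Interface):' "$1"
}

# e.g.: bond_summary /proc/net/bonding/bond0
```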
If preferred, there is also the /opt/ft/sbin/ft-bonding tool provided with the ftServer for working with bond interfaces.
- /opt/ft/sbin/ft-bonding list
- Lists each bond and its ethernet ports
- (CAREFUL) /opt/ft/sbin/ft-bonding none
- Removes all Ethernet bonds
- /opt/ft/sbin/ft-bonding single
- Adds all ethernet ports to one single bond
- /opt/ft/sbin/ft-bonding paired
- Pairs one interface from each enclosure into a bond. This is similar to current bond0 having eth100600, eth100601 and bond1 having eth110600, eth110601
- /opt/ft/sbin/ft-bonding config
- Interactive session that prompts for the name of the bond and the interfaces that should become members of the bond.
After configuration file changes, a network restart ('systemctl restart network') is required for the changes to take effect.
Installing ftserver in rack
For installing the ftServer in a rack, first fix the rear side panes using two screws on each side pane. This requires four cage nuts to guide the screws; use the large screws that come with the ftServer rail-kit screw set.
Next, place one guiding screw on each side, in the hole third from the rear and second from the top row. The screw should be placed from the inside (ftServer case side) towards the outside (rack walls). This is also clearly described in the hardware manuals that come with the ftServer; most of the manuals are available at the Stratus official documentation site linked in the references below.
Once a guiding screw is placed on each side, take the case (without CRUs, backplane, or DVD drive) and slide it into the rack on the guiding screws. Once it is vertically aligned properly, fasten four screws (via cage nuts) through the front four large oval holes. These screws and cage nuts need to be obtained separately, as the default kit provides only four cage nuts.
Once the front four screws are in place, put as many screws as possible into the nine available side holes where the guiding screws were placed.
At this point, tighten all screws fully before installing the other units. Install the backplane first: align it almost exactly with the slot and push it straight forward (not sideways). Then insert the CD/DVD plane, and finally both CRUs.
Buggrabber logs for support
For support cases it is good to collect buggrabber logs and forward them to the Stratus team. To do this, execute:
cd /opt/ft/sbin
./buggrabber.pl
Afterwards a tar file (roughly 130MB) can be found in the =/tmp/BugPool= folder, which needs to be shared with the Stratus team.
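To pick the file to upload, one can simply take the newest tar archive from =/tmp/BugPool= (the exact file-naming pattern is an assumption):

```shell
# Print the newest tar archive in the given directory (if any).
newest_bug_tar() {
    ls -1t "$1"/*.tar* 2>/dev/null | head -n 1
}

# e.g.: newest_bug_tar /tmp/BugPool
```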