Remove SCSI-3 PR under RHEL

If you work with cluster file systems, sooner or later you will need to remove SCSI-3 PR (SCSI-3 persistent reservations).

  • Verify whether your disk has a SCSI-3 PR.
# /usr/bin/sg_persist --in --no-inquiry --read-reservation --device=/dev/disk/by-id/wwn-0x60060480000290100134533031373138
PR generation=0xnn, Reservation follows:
 Key=0x1
 scope: LU_SCOPE, type: Write Exclusive, registrants only

The output of the above command displays all SCSI-3 reservations; you will need to remove them one by one.

  • Removing the SCSI-3 PR takes two steps. First, register your reservation key.
# /usr/bin/sg_persist --out --no-inquiry --register --param-sark=0x1 --device=/dev/disk/by-id/wwn-0x60060480000290100134533031373138
  • Remove the SCSI-3 PR.
# /usr/bin/sg_persist --out --no-inquiry --clear --param-rk=0x1 --device=/dev/disk/by-id/wwn-0x60060480000290100134533031373138
  • Verify that the SCSI-3 PR has been cleared.
# /usr/bin/sg_persist --in --no-inquiry --read-reservation --device=/dev/disk/by-id/wwn-0x60060480000290100134533031373138
PR generation=0xnn, there is NO reservation held

# /usr/bin/sg_persist --in --no-inquiry --read-key --device=/dev/disk/by-id/wwn-0x60060480000290100134533031373138
PR generation=0xnn, there is NO reservation held
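When several keys are registered, the per-key clear step can be scripted. A minimal sketch, assuming sg_persist keeps its usual output shape of a `PR generation=…` header followed by one key per line; the sample output below is illustrative, so verify the parsing against what your sg_persist version actually prints:

```shell
#!/bin/sh
# Sketch: collect every registered key so each one can be cleared in turn.
# DEV and the sample output are assumptions; adapt to your environment.
DEV=/dev/disk/by-id/wwn-0x60060480000290100134533031373138

# In a real run: out=$(sg_persist --in --no-inquiry --read-keys --device=$DEV)
out='PR generation=0x4, 2 registered reservation keys follow:
    0x1
    0x2'

# The first hex token is the PR generation; the remaining ones are keys.
keys=$(printf '%s\n' "$out" | grep -Eo '0x[0-9a-fA-F]+' | tail -n +2)

for k in $keys ; do
  echo "would clear key $k on $DEV"
  # sg_persist --out --no-inquiry --clear --param-rk=$k --device=$DEV
done
```

Leave the sg_persist line commented out until the printed key list matches what you expect.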

 

Install RHEL with boot from SAN and EMC PowerPath

Here are some useful steps to install RHEL with boot from SAN and EMC PowerPath.

  1. Make sure that only one Fibre Channel cable is plugged into the host.
  2. Enable the BIOS on one HBA, disable it on all others.
  3. Make sure only ONE LUN is presented to the host.
  4. Select LUN 0 in the BIOS on HBA 1 for boot.
  5. Boot the host and install RHEL 6.2 or later.
  6. Select Specialized Storage Devices.
  7. Select /dev/sda.
  8. Allow installation to proceed normally, reboot at end of installation.
  9. Make a copy of your initramfs.
  11. Install PowerPath and its license, then run “/etc/init.d/PowerPath start”. Check the PowerPath configuration, then run the powermt save command.
  12. Edit /etc/fstab to mount /boot from /dev/emcpowera1.
  12. Remount /boot.
  13. Change the LVM filter to [ "a/emcpower.*/", "r/sd.*/", "r/disk.*/" ].
  14. Build a new initramfs with “dracut -v -f /boot/initramfs-PP-$(uname -r).img $(uname -r)”.
  15. Add the rest of the LUNs to the configuration via array masking.
  16. Plug in the second FC cable, scan in the LUNs, run powermt config, and make sure all paths are active.
  17. Reboot.
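Step 13 can be done with a one-line sed. A hedged sketch that rehearses the edit on a throwaway copy; on a real host, point LVM_CONF at /etc/lvm/lvm.conf and keep a backup first:

```shell
#!/bin/sh
# Sketch: rewrite the LVM filter so only emcpower devices are scanned.
# Exercised on a temp copy here; LVM_CONF would be /etc/lvm/lvm.conf for real.
LVM_CONF=$(mktemp)
cat <<'EOF' > "$LVM_CONF"
devices {
    filter = [ "a/.*/" ]
}
EOF

# Replace whatever filter line exists with the emcpower-only filter.
sed -i 's|filter = .*|filter = [ "a/emcpower.*/", "r/sd.*/", "r/disk.*/" ]|' "$LVM_CONF"
grep filter "$LVM_CONF"
```

After the real edit, rebuilding the initramfs (step 14) picks up the new filter.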

Using Emulex HBAnyware Utility to reset HBA

If you are administering a physical server with an Emulex host bus adapter, you can use the HBAnyware utility to query its status or reset it from the CLI.

List all available host adapters.

# /usr/sbin/hbanyware/hbacmd listhbas

Display the host adapter port statistics.

# /usr/sbin/hbanyware/hbacmd portstat 10:00:00:00:c9:79:23:3a

Display all attributes of a host adapter.

# /usr/sbin/hbanyware/hbacmd HBAAttrib 10:00:00:00:c9:79:23:3a

Display the WWPN of each host adapter.

# /usr/sbin/hbanyware/hbacmd list

Reset a host adaptor port.

# /usr/sbin/hbanyware/hbacmd Reset 10:00:00:00:c9:79:23:3a
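To collect port statistics for every adapter at once, the WWPNs can be parsed out of the listhbas output. A sketch, assuming the `Port WWN :` line format shown below, which you should verify against your hbacmd version:

```shell
#!/bin/sh
# Sketch: query every HBA found by listhbas. The sample output format is an
# assumption; adjust the awk match to what your hbacmd actually prints.
# In a real run: out=$(/usr/sbin/hbanyware/hbacmd listhbas)
out='Manageable HBA List
Port WWN   : 10:00:00:00:c9:79:23:3a
Node WWN   : 20:00:00:00:c9:79:23:3a
Port WWN   : 10:00:00:00:c9:79:23:3b
Node WWN   : 20:00:00:00:c9:79:23:3b'

# Take the last field of every "Port WWN" line.
wwpns=$(printf '%s\n' "$out" | awk '/^Port WWN/ {print $NF}')

for w in $wwpns ; do
  echo "would run portstat for $w"
  # /usr/sbin/hbanyware/hbacmd portstat $w
done
```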

 

Configure multiple IPv6 addresses on the same NIC under RHEL

You can configure multiple IPv6 addresses on the same NIC with the /etc/sysconfig/network-scripts/ifcfg-* files, just like IPv4 addresses.

Ensure that the ipv6 kernel module is loaded.

# modprobe ipv6

Enable IPv6 in the sysconfig network file.

# cat <<EOF>> /etc/sysconfig/network
IPV6INIT=yes
EOF

Adjust the interface configuration file to fit your needs.

# cat <<EOF>> /etc/sysconfig/network-scripts/ifcfg-eth1
IPV6INIT="yes"
IPV6ADDR="fe80::2/64"
IPV6ADDR_SECONDARIES="fe80::3/64 fe80::4/64"
IPV6_DEFAULTGW=fe80::1/64
DNS1=2001:4860:4860::8888
DNS2=2001:4860:4860::8844
EOF

Then bring the network interface up.

# ifup eth1
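When the list of secondary addresses grows, the ifcfg fragment can be generated from variables instead of typed by hand. A sketch writing to a temp file; on a real host the target would be the ifcfg-eth1 path above, and the addresses below are the same illustrative ones used in the example:

```shell
#!/bin/sh
# Sketch: build the IPv6 part of an ifcfg file from variables. IFCFG is a
# temp file here; on a real host it would be the ifcfg-eth1 path.
IFCFG=$(mktemp)
PRIMARY="fe80::2/64"
SECONDARIES="fe80::3/64 fe80::4/64"

cat <<EOF > "$IFCFG"
IPV6INIT="yes"
IPV6ADDR="$PRIMARY"
IPV6ADDR_SECONDARIES="$SECONDARIES"
EOF

cat "$IFCFG"
```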

 

Extend LVM logical volumes in RHEL4

If you need to extend an LVM logical volume under RHEL 4, know that the process is very simple. You will need to:

  1. Present a new physical, logical or storage device.
  2. Format the device and create an LVM partition.
  3. Create an LVM physical volume.
  4. Add the physical volume to the existing volume group.
  5. Extend the logical volume.
  6. Extend the file system online.

Use the following command to scan new storage devices.

# cd /sys/class/scsi_host ; ls | while read -r line ; do echo "- - -" > $line/scan ; done

Format the LUN, physical or logical disk with partition type 8e, which is the Linux LVM type.

# printf "n\np\n1\n\n\nt\n8e\nx\nb\n1\n128\np\nw\n" | fdisk /dev/sda

Create the LVM physical volume.

# pvcreate /dev/sda1

Extend the volume group; in my case the volume group is oracle_vg.

# vgextend oracle_vg /dev/sda1

Extend the LVM logical volume; in my case the logical volume is oracle_lv.

# lvextend -l +100%FREE /dev/oracle_vg/oracle_lv

Extend the file system online; in my case it was an ext2 file system.

# ext2online /dev/oracle_vg/oracle_lv
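The whole sequence can be wrapped in a small dry-run script. With RUN=echo it only prints each command so it can be reviewed first; DEV, VG and LV are the illustrative names used above:

```shell
#!/bin/sh
# Sketch: the extend sequence as a dry-run wrapper. Leave RUN=echo to only
# print the commands; set RUN= (empty) to execute them for real.
DEV=/dev/sda1
VG=oracle_vg
LV=oracle_lv
RUN=echo

$RUN pvcreate $DEV
$RUN vgextend $VG $DEV
$RUN lvextend -l +100%FREE /dev/$VG/$LV
$RUN ext2online /dev/$VG/$LV
```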

 

Debugging core files with GDB using RHEL

Sometimes you need to go deep in your technical analysis to discover the root cause of a problem; if you are working on an issue where a crash dump was generated, you have all the information necessary for your root cause analysis.

A core file is an image of a process that has crashed. It contains all the process information pertinent to debugging: the contents of hardware registers, process status, and process data.

To install the crash analyzing tool, execute the following command.

# yum install crash -y

In addition to crash, it is also necessary to install the kernel-debuginfo package, which provides the data necessary for dump analysis.

# debuginfo-install kernel

To start the utility, use the following command.

# crash /var/crash/127.0.0.1-2014-03-26-12\:24\:39/vmcore /usr/lib/debug/lib/modules/`uname -r`/vmlinux

Note that the kernel version should be the same as the one captured by kdump. To find out which kernel you are currently running, use the uname -r command.
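The invocation can be assembled from uname -r so the vmlinux path always matches the running kernel; the vmcore path below is illustrative:

```shell
#!/bin/sh
# Sketch: build the crash command line from the running kernel version.
# The vmcore path is illustrative; use your actual dump directory.
KVER=$(uname -r)
VMLINUX=/usr/lib/debug/lib/modules/$KVER/vmlinux
VMCORE='/var/crash/127.0.0.1-2014-03-26-12:24:39/vmcore'

echo "crash $VMCORE $VMLINUX"
```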

Display general information about the system.

crash> sys

Display the kernel message buffer, using the following command.

crash> log

Display the kernel stack trace.

crash> bt

Display status of processes in the system.

crash> ps

Display basic virtual memory information.

crash> vm

Display information about open files.

crash> files

Display swap information.

crash> swap

Display IPCS information.

crash> ipcs

Display IRQ information.

crash> irq -s

 

Bonding modes under RHEL 5/6

Red Hat Enterprise Linux 5/6 supports the following bonding modes; the default policy is balance-rr.

 balance-rr or 0

 Round-robin policy: Transmit packets in sequential
 order from the first available slave through the
 last. This mode provides load balancing and fault
 tolerance.

 active-backup or 1

 Active-backup policy: Only one slave in the bond is
 active. A different slave becomes active if, and only
 if, the active slave fails. The bond's MAC address is
 externally visible on only one port (network adapter)
 to avoid confusing the switch.

 In bonding version 2.6.2 or later, when a failover
 occurs in active-backup mode, bonding will issue one
 or more gratuitous ARPs on the newly active slave.
 One gratuitous ARP is issued for the bonding master
 interface and each VLAN interface configured above
 it, provided that the interface has at least one IP
 address configured. Gratuitous ARPs issued for VLAN
 interfaces are tagged with the appropriate VLAN id.

 This mode provides fault tolerance. The primary
 option, documented below, affects the behavior of this
 mode.

 balance-xor or 2

 XOR policy: Transmit based on the selected transmit
 hash policy. The default policy is a simple [(source
 MAC address XOR'd with destination MAC address XOR
 packet type ID) modulo slave count]. Alternate transmit
 policies may be selected via the xmit_hash_policy option,
 described below.

 This mode provides load balancing and fault tolerance.

 broadcast or 3

 Broadcast policy: transmits everything on all slave
 interfaces. This mode provides fault tolerance.

 802.3ad or 4

 IEEE 802.3ad Dynamic link aggregation. Creates
 aggregation groups that share the same speed and
 duplex settings. Utilizes all slaves in the active
 aggregator according to the 802.3ad specification.

 Slave selection for outgoing traffic is done according
 to the transmit hash policy, which may be changed from
 the default simple XOR policy via the xmit_hash_policy
 option, documented below. Note that not all transmit
 policies may be 802.3ad compliant, particularly in
 regards to the packet mis-ordering requirements of
 section 43.2.4 of the 802.3ad standard. Differing
 peer implementations will have varying tolerances for
 noncompliance.

 Prerequisites:

 1. Ethtool support in the base drivers for retrieving
 the speed and duplex of each slave.

 2. A switch that supports IEEE 802.3ad Dynamic link
 aggregation.

 Most switches will require some type of configuration
 to enable 802.3ad mode.

 balance-tlb or 5

 Adaptive transmit load balancing: channel bonding that
 does not require any special switch support.

 In tlb_dynamic_lb=1 mode; the outgoing traffic is
 distributed according to the current load (computed
 relative to the speed) on each slave.

 In tlb_dynamic_lb=0 mode; the load balancing based on
 current load is disabled and the load is distributed
 only using the hash distribution.

 Incoming traffic is received by the current slave.
 If the receiving slave fails, another slave takes over
 the MAC address of the failed receiving slave.

 Prerequisite:

 Ethtool support in the base drivers for retrieving the
 speed of each slave.

 balance-alb or 6

 Adaptive load balancing: includes balance-tlb plus
 receive load balancing (rlb) for IPV4 traffic, and
 does not require any special switch support. The
 receive load balancing is achieved by ARP negotiation.
 The bonding driver intercepts the ARP Replies sent by
 the local system on their way out and overwrites the
 source hardware address with the unique hardware
 address of one of the slaves in the bond such that
 different peers use different hardware addresses for
 the server.

 Receive traffic from connections created by the server
 is also balanced. When the local system sends an ARP
 Request the bonding driver copies and saves the peer's
 IP information from the ARP packet. When the ARP
 Reply arrives from the peer, its hardware address is
 retrieved and the bonding driver initiates an ARP
 reply to this peer assigning it to one of the slaves
 in the bond. A problematic outcome of using ARP
 negotiation for balancing is that each time that an
 ARP request is broadcast it uses the hardware address
 of the bond. Hence, peers learn the hardware address
 of the bond and the balancing of receive traffic
 collapses to the current slave. This is handled by
 sending updates (ARP Replies) to all the peers with
 their individually assigned hardware address such that
 the traffic is redistributed. Receive traffic is also
 redistributed when a new slave is added to the bond
 and when an inactive slave is re-activated. The
 receive load is distributed sequentially (round robin)
 among the group of highest speed slaves in the bond.

 When a link is reconnected or a new slave joins the
 bond the receive traffic is redistributed among all
 active slaves in the bond by initiating ARP Replies
 with the selected MAC address to each of the
 clients. The updelay parameter (detailed below) must
 be set to a value equal or greater than the switch's
 forwarding delay so that the ARP Replies sent to the
 peers will not be blocked by the switch.

 Prerequisites:

 1. Ethtool support in the base drivers for retrieving
 the speed of each slave.

 2. Base driver support for setting the hardware
 address of a device while it is open. This is
 required so that there will always be one slave in the
 team using the bond hardware address (the
 curr_active_slave) while having a unique hardware
 address for each slave in the bond. If the
 curr_active_slave fails its hardware address is
 swapped with the new curr_active_slave that was
 chosen.

Now that we know the available modes, let’s create a new bonding interface named bond0 using mode 1 with eth0 and eth1 NICs.

# modprobe bonding mode=active-backup miimon=100
# ifconfig bond0 10.10.10.10 netmask 255.255.255.0 up
# ip link set eth0 master bond0
# ip link set eth1 master bond0

To make these rules persistent, use the following commands.

# cat <<EOF>> /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.10.10.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"
EOF
# cat <<EOF>> /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
EOF
# cat <<EOF>> /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
EOF
# cat <<EOF>> /etc/modprobe.d/bond.conf
alias bond0 bonding
EOF

If you need to check the bonding status, use the following command.

# cat /proc/net/bonding/bond0

If you need to switch the active NIC, use the following command.

# ifenslave -c bond0 eth1
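The active slave can also be read programmatically from /proc/net/bonding/bond0, which is handy in monitoring scripts. A sketch parsing a sample of the file's format; on a real host, cat the proc file instead of the sample variable:

```shell
#!/bin/sh
# Sketch: extract the currently active slave from bonding status output.
# In a real run: bondinfo=$(cat /proc/net/bonding/bond0)
bondinfo='Ethernet Channel Bonding Driver: v3.7.1
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
Slave Interface: eth0
Slave Interface: eth1'

active=$(printf '%s\n' "$bondinfo" | awk -F': ' '/Currently Active Slave/ {print $2}')
echo "active slave: $active"
```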

Change Speed and Duplex of NICs in RHEL

If you need to configure speed and duplex using CLI under RHEL or similar, use the following command.

# ethtool -s eth0 speed 10000 duplex full autoneg off

To permanently add the above rules, use the following command.

# cat <<EOF>> /etc/sysconfig/network-scripts/ifcfg-eth0
ETHTOOL_OPTS="speed 10000 duplex full autoneg off"
EOF

Install and configure zabbix-agent on RHEL and Solaris servers and populate zabbix-server using Zabbix API

I have always liked developing custom shell scripts and playing with monitoring applications like Nagios and Zabbix, but I was in an awkward situation.

After installing and doing the final customizations on my zabbix-server and database server, I realized that I needed to install the zabbix-agent on more than three thousand servers running not just Linux, but Solaris too. At that moment I got up and went to the coffee shop.

I spent some time thinking about how to automate that job, and I remembered that I had used Ansible to automate little things, like deploying shell scripts and binaries, collecting tar files, installing packages and managing services.

At that moment a quest was born, and the quest was: ‘make an Ansible role to automate the boring job’.

I spent two days studying more about DevOps with Ansible in the *nix world, and I created and published on Ansible Galaxy the role ‘kdiegorsantos.zabbix-agent’, which can be used to install and configure zabbix-agent on RHEL and Solaris servers and populate the zabbix-server using the Zabbix API.

zabbix-agent

This Ansible role can be used to install and configure zabbix-agent on RHEL 5/6/7 and Solaris 10 servers. It has tasks to:

  1. Install zabbix-agent and its dependencies via YUM on RHEL servers.
  2. Deploy the zabbix-agent binary to Solaris servers.
  3. Configure zabbix-agent parameters.
  4. Start and manage zabbix-agent service.
  5. Add or update the zabbix-agent host on the zabbix-server via API.

Requirements

Ansible 1.5.4 or higher.

Role Variables

All variables are set in defaults/main.yml.

# zabbix-api
zabbix_api_use: True                                # True or False, to enable or disable zabbix-api.
zabbix_url: "http://zabbix.mydomain/zabbix"         # The main page of your zabbix server.
zabbix_api_user: "zabbix_admin_username"            # zabbix admin username.
zabbix_api_pass: "zabbix_admin_password"            # zabbix admin password.
zabbix_create_hostgroup: absent                     # create the host group if it does not exist.
zabbix_create_host: present                         # create the host if it does not exist.
zabbix_host_status: enabled                         # enable or disable the host.
zabbix_useuip: 1                                    # use IP instead of DNS.
zabbix_host_groups_rhel:                            # default group for Linux hosts.
  - Linux Servers
zabbix_link_templates_rhel:                         # default template for Linux hosts.
  - Template OS Linux
zabbix_host_groups_sun:                             # default group for Solaris hosts.
  - Solaris Servers
zabbix_link_templates_sun:                          # default template for Solaris hosts.
  - Template OS Solaris

Dependencies

You’ll need to install zabbix-api to be able to add or update host information on zabbix-server.

Installation:

# pip install zabbix-api

Short example:

>>> from zabbix_api import ZabbixAPI
>>> zapi = ZabbixAPI(server="https://server/")
>>> zapi.login("login", "password")
>>> zapi.trigger.get({"expandExpression": "extend", "triggerids": range(0, 100)})

Installation

Install using ansible-galaxy.

$ ansible-galaxy install kdiegorsantos.zabbix-agent

Example Playbook

After changing the default variables, you can run this role using the ansible-playbook command.

# ansible-playbook /etc/ansible/roles/zabbix-agent/role.yml

License

This project is licensed under the MIT license. See included LICENSE.md.

collect-exec.sh – My personal OS report

About a year ago I developed a shell script to collect important information about the operating system, software and hardware at TIM Telecom, where applications run on Alpha, Solaris, HP-UX, AIX and, of course, the mighty Linux.

For this purpose collect-exec.sh was born. I also developed two Ansible tasks, one to deploy cron jobs and another to collect the compressed data generated by this little monster, which has saved me a few times.

This shell script contains a lot of *nix commands to collect information about the OS and InfoScale/Veritas Cluster Server.

The script collects a lot of information about the running system, saves the output of each command in a text file, and saves copies of important files in a directory named files. At the end of the script everything is compressed with tar in the global directory.

The function that runs on AIX is incomplete; if you like AIX, please make your contribution: fork me on GitHub and make your changes.

#!/bin/bash

# script:       collect_exec.sh - version 2.00
# description:  collect information about system, software and hardware.
# author:       Diego R. Santos <kdiegorsantos@gmail.com>

# check if user is root.
[ $EUID -ne 0 ] && exit 1

# IMPORTANT: global collect directory on local server for all systems.
collect_path="/var/tmp/collect/$(date +"%d%m%Y")";

# function to run on platform Red Hat Linux only.
sys_linux () {

# Source function library.
. /etc/init.d/functions

# checks if collect directory exists and delete old jobs, if not exists create it. delete collect tar files too.
[ -d /var/tmp/collect ] && find /var/tmp/collect -maxdepth 1 -type d ! -name collect -exec rm -rf '{}' \;
[ -d /var/tmp/collect ] && find /var/tmp/collect -maxdepth 1 -type f -exec rm -rf '{}' \;
[ ! -d ${collect_path} ] && mkdir -p ${collect_path}

# identify current release.
[ -f /usr/bin/lsb_release ] && lsb_release -a > ${collect_path}/redhat-release.txt || cat /etc/redhat-release > ${collect_path}/redhat_release.txt

# get the output of native and third party commands.
if [ ! -d ${collect_path}/files/ ] ; then
        mkdir -p ${collect_path}/files/{boot,proc}
        cp -r /etc/{rc.d,bashrc,collectd.conf,cron.deny,crontab,dracut.conf,exports,filesystems,fstab,group,gshadow,host.conf,hosts,hosts.allow,hosts.deny,idmapd.conf,inittab,kdump.conf,krb5.conf,lftp.conf,localtime,login.defs,logrotate.conf,lsb-release,motd,mtab,my.cnf,networks,nsswitch.conf,ntp.conf,numad.conf,passwd,php.ini,profile,protocols,quotatab,redhat-release,resolv.conf,rsyslog.conf,services,shadow,shells,sos.conf,sudoers,sysctl.conf,yum.conf,zabbix_agent.conf,zabbix_agentd.conf,zabbix_server.conf,server_id.cfg,sudoers.d,ssh/sshd_config,lvm,modprobe.conf,modprobe.d,sysconfig,security,udev,postfix/main.cf} ${collect_path}/files
        cp -r /proc/{net,cpuinfo,loadavg,meminfo,net/dev,partitions,pci,stat,uptime,version,cmdline,mounts} ${collect_path}/files/proc
        ls /boot/ | egrep $(uname -r) | while read -r line ; do cp -r /boot/$line ${collect_path}/files/boot ; done
fi

alternatives --display java > ${collect_path}/java.txt
chkconfig --list > ${collect_path}/chkconfig.txt
arp -a > ${collect_path}/arp_a.txt
crontab -l > ${collect_path}/crontab.txt
date > ${collect_path}/date.txt
df -alP > ${collect_path}/df_alP.txt
df -iP > ${collect_path}/df_i.txt
df -kP > ${collect_path}/df_k.txt
df -hP > ${collect_path}/df_h.txt
dmesg > ${collect_path}/dmesg.txt
dmidecode > ${collect_path}/dmidecode.txt
dmsetup info -c > ${collect_path}/dmsetup_info.txt
dmsetup ls --tree > ${collect_path}/dmsetup_ls.txt
dmsetup status > ${collect_path}/dmsetup_status.txt
dmsetup table > ${collect_path}/dmsetup_table.txt
exportfs -v > ${collect_path}/exportfs_v.txt
fdisk -l > ${collect_path}/fdisk.txt
free > ${collect_path}/free.txt
getconf LONG_BIT > ${collect_path}/getconf_long_bit.txt
getconf PAGE_SIZE > ${collect_path}/getconf_page_size.txt
hostid > ${collect_path}/hostid.txt
hostname --fqdn > ${collect_path}/hostname.txt
ifconfig -a > ${collect_path}/ifconfig.txt
ifenslave -a > ${collect_path}/ifenslave_a.txt
ip address > ${collect_path}/ip_address.txt
ip link > ${collect_path}/ip_link.txt
ip maddr show > ${collect_path}/ip_maddr_show.txt
ip mroute show > ${collect_path}/ip_mroute_show.txt
ip neigh show > ${collect_path}/ip_neigh_show.txt
ip route show table all > ${collect_path}/ip_route_show_table_all.txt
ip -s link show > ${collect_path}/ip_link_show.txt
ipcs -a > ${collect_path}/ipcs_a.txt
last boot > ${collect_path}/last_boot.txt
lsblk > ${collect_path}/lsblk.txt
lsmod > ${collect_path}/lsmod.txt
lsof -b +M -n -l > ${collect_path}/lsof_bMnl.txt
lspci > ${collect_path}/lspci.txt
lspci -v > ${collect_path}/lspci_v.txt
lvmdiskscan > ${collect_path}/lvmdiskscan.txt
lvm dumpconfig > ${collect_path}/lvm_dumpconfig.txt
lvm version > ${collect_path}/lvm_version.txt
lvs -a -o +devices --config="global{locking_type=0}" > ${collect_path}/lvs.txt
lvs --segments --config="global{locking_type=0}" > ${collect_path}/lvs_segments.txt
multipath -v4 -ll > ${collect_path}/multipath.txt
netstat -agn > ${collect_path}/netstat_agn.txt
netstat -antpl > ${collect_path}/netstat_antpl.txt
netstat -anupl > ${collect_path}/netstat_anupl.txt
netstat -neopa > ${collect_path}/netstat_neopa.txt
netstat -nr > ${collect_path}/netstat_nr.txt
netstat -s > ${collect_path}/netstat_s.txt
nfsstat -a > ${collect_path}/nfsstat.txt
ntpstat > ${collect_path}/ntpstat.txt
ps alxwww > ${collect_path}/ps_alxwww.txt
ps auxwwwm > ${collect_path}/ps_auxwwwm.txt
pstree > ${collect_path}/pstree.txt
pvs -a -v --config="global{locking_type=0}" > ${collect_path}/pvs.txt
pvscan -v --config="global{locking_type=0}" > ${collect_path}/pvscan.txt
readlink -f /usr/bin/java > ${collect_path}/java_version.txt
rhncfg-client channels > ${collect_path}/rhncfg-client_channels.txt
route -n > ${collect_path}/route.txt
rpcinfo -p localhost > ${collect_path}/rpcinfo_p_localhost.txt
rpm -qa > ${collect_path}/rpm_qa.txt
rpm -qai > ${collect_path}/rpm_qai.txt
runlevel > ${collect_path}/runlevel.txt
showmount -e localhost > ${collect_path}/showmount.txt
swapon -s > ${collect_path}/swapon.txt
udevadm info -e > ${collect_path}/udevadm_info_e.txt
ulimit -a > ${collect_path}/ulimit.txt
uname -a > ${collect_path}/uname.txt
vgdisplay -vv --config="global{locking_type=0}" > ${collect_path}/vgdisplay.txt
vgscan -vvv --config="global{locking_type=0}" > ${collect_path}/vgscan.txt
vgs -v --config="global{locking_type=0}" > ${collect_path}/vgs.txt
yum -C repolist > ${collect_path}/yum_repolist.txt
/usr/local/sbin/boot_device.sh > ${collect_path}/boot_device.txt

# generate hadouken json file.
[ -x /usr/local/sbin/hadouken.py ] && /usr/local/sbin/hadouken.py

# get LUN information.
if [ -x /usr/local/sbin/inq ] ; then
  /usr/local/sbin/inq -no_dots > ${collect_path}/inq.txt
  /usr/local/sbin/inq -no_dots -wwn > ${collect_path}/inq_wwn.txt
fi

if [ -x /usr/bin/lsscsi ] ; then
  lsscsi > ${collect_path}/lsscsi.txt
fi

# get FC information.
if [ -x /usr/bin/systool ] ; then
  systool -v -c fc_host > ${collect_path}/systool_vc_fc_host.txt
else
  for a in /sys/class/fc_host/host* ; do
    cat $a/port_name >> ${collect_path}/fc_port_name.txt
  done
fi

# get EMC PowerPath information.
if [ -x /sbin/powermt ] ; then
  powermt check_registration > ${collect_path}/powermt_registration.txt
  powermt version > ${collect_path}/powermt_version.txt
  powermt display ports > ${collect_path}/powermt_display_ports.txt
  powermt display options > ${collect_path}/powermt_display_options.txt
  powermt display unmanaged > ${collect_path}/powermt_display_unmanaged.txt
  powermt display paths > ${collect_path}/powermt_display_paths.txt
  powermt display dev\=all > ${collect_path}/powermt_display_dev_all.txt
  powermt display alua dev\=all > ${collect_path}/powermt_display_alua_dev_all.txt
  powermt display options > ${collect_path}/powermt_display_options.txt
  powermt display hba_mode > ${collect_path}/powermt_display_hba_mode.txt
  powermt display port_mode > ${collect_path}/powermt_display_port_mode.txt
  powermt save file=${collect_path}/powermt_save.txt
  emcpreg -list > ${collect_path}/emcpreg.txt
fi

# get information about Oracle instances.
sids=($(ps -ef | grep pmon | awk -F\_ '{print $3}' | egrep -v '^$|\+' | xargs))
[ ${#sids[@]} -gt 0 ] && echo "${sids[@]}" > ${collect_path}/oracle_sids.txt

# get information about InfoScale/Veritas Cluster Server.
check_vcs_had=$(ps -ef | egrep -w "VRTSvcs/bin/had");
if [ ! -z "$check_vcs_had" ] ; then
  export PATH=${PATH}:/opt/VRTSvcs/bin:/opt/VRTS/bin:/opt/VRTSsfmh/bin:/etc/vx/bin
  hacf -verify /etc/VRTSvcs/conf/config/ -display > ${collect_path}/hacf_verify_display.txt
  had -version > ${collect_path}/had_version.txt
  hauser -display > ${collect_path}/hauser_display.txt
  hauser -list > ${collect_path}/hauser_list.txt
  hasys -list > ${collect_path}/hasys_list.txt
  hasys -state > ${collect_path}/hasys_state.txt
  hasys -nodeid > ${collect_path}/hasys_nodeid.txt
  hastatus -summ > ${collect_path}/hastatus_summ.txt
  hatype -display > ${collect_path}/hatype_display.txt
  hatype -list > ${collect_path}/hatype_list.txt
  hares -list > ${collect_path}/hares_list.txt
  hagrp -list > ${collect_path}/hagrp_list.txt
  haclus -value EngineVersion > ${collect_path}/haclus_engineversion.txt
  haclus -display > ${collect_path}/haclus_display.txt
  vxddladm get namingscheme > ${collect_path}/vxddladm_namingscheme.txt
  vxddladm listjbod > ${collect_path}/vxddladm_listjbod.txt
  vxddladm listsupport > ${collect_path}/vxddladm_listsupport.txt
  vxlist > ${collect_path}/vxlist.txt
  vxdisk list > ${collect_path}/vxdisk_list.txt
  vxdisk -e list > ${collect_path}/vxdisk_e_list.txt
  vxdisk -s list > ${collect_path}/vxdisk_s_list.txt
  vxdisk -o alldgs list > ${collect_path}/vxdisk_o_alldgs_list.txt
  vxdctl -c mode > ${collect_path}/vxdctl_c_mode.txt
  vxdctl mode > ${collect_path}/vxdctl_mode.txt
  vxclustadm -v nodestate > ${collect_path}/vxclustadm_nodestate.txt
  vxclustadm nidmap > ${collect_path}/vxclustadm_nidmap.txt
  /usr/lib/vxvm/bin/vxclustadm -v nodestate -d > ${collect_path}/vxclustadm_v_nodestate.txt
  gabconfig -a > ${collect_path}/gabconfig_a.txt
  gabconfig -W > ${collect_path}/gabconfig_W.txt
  lltconfig -W > ${collect_path}/lltconfig_W.txt
  lltstat > ${collect_path}/lltstat.txt
  lltstat -nvv active > ${collect_path}/lltstat_active.txt
  lltstat -n > ${collect_path}/lltstat_n.txt
  cfscluster status > ${collect_path}/cfscluster_status.txt
  vxdg list > ${collect_path}/vxdg_list.txt && vxdg free > ${collect_path}/vxdg_free.txt
  vxprint -ht > ${collect_path}/vxprint_ht.txt
  vxprint -Ath -q > ${collect_path}/vxprint_Athq.txt
  vxprint -AGts > ${collect_path}/vxprint_AGts.txt
  vxprint -m rootdg > ${collect_path}/vxprint_m_rootdg.txt
  vxlicrep > ${collect_path}/vxlicrep.txt
  vxlicrep -e > ${collect_path}/vxlicrep_e.txt
  vxfenadm -d > ${collect_path}/vxfenadm_d.txt
  vxdmpadm gettune all > ${collect_path}/vxdmpadm_gettune_all.txt
  vxdmpadm listapm all > ${collect_path}/vxdmpadm_listapm_all.txt
  vxdmpadm listenclosure all > ${collect_path}/vxdmpadm_listenclosure_all.txt
  vxdmpadm stat restored > ${collect_path}/vxdmpadm_stat_restored.txt
  vxdmpadm listctlr all > ${collect_path}/vxdmpadm_listctlr_all.txt
  vxdmpdbprint > ${collect_path}/vxdmpdbprint.txt
  vxlicense -p > ${collect_path}/vxlicense_p.txt

for a in $(/opt/VRTSvcs/bin/hares -list | awk '{print $1}') ; do
  hares -display $a > ${collect_path}/hares_display.txt
done

for a in $(/opt/VRTSvcs/bin/hagrp -list | awk '{print $1}') ; do
  hagrp -display $a > ${collect_path}/hagrp_display.txt
done

fi

if [ -d /etc/VRTSvcs/conf/config/ ] ; then
  mkdir -p ${collect_path}/VRTSvcs
  cp -r /etc/{llthosts,VRTSvcs/conf/config/*.cf,llttab,vxfenmode,vxfentab,gabtab,gabconfig,VRTSagents,VRTSvbs,vxcps,vxfen.d} ${collect_path}/VRTSvcs
fi

# compress then delete the current job directory.
[ -d ${collect_path} ] && cd ${collect_path}/../ && mv $(date +"%d%m%Y") $(hostname -s)_$(date +"%d%m%Y") && tar -cvjSf collect_$(hostname -s)_$(date +"%d%m%Y").tar.bz2 $(hostname -s)_$(date +"%d%m%Y") --remove-files

}

# function to run on platform AIX only.
sys_aix () {
        exit 1
}

# function to run on platform SunOS only.
sys_sunos () {

# checks if collect directory exists and delete old jobs, if not exists create it. delete collect tar files older than 7 days.
[ -d ${collect_path} ] && find ${collect_path}/../ -type d ! -name collect ! -name \. -exec rm -rf '{}' \; || mkdir -p ${collect_path}
[ -d ${collect_path} ] && find ${collect_path}/../ -type f -name collect_*.tar -mtime +7 -exec rm -rf '{}' \;

uname -n > ${collect_path}/hostname.txt
cfgadm -alv > ${collect_path}/cfgadm-alv.txt
cp -pr /etc/hosts /etc/hostname.* /root/scripts/* /etc/passwd /etc/shadow /etc/group /etc/services /etc/vfstab ${collect_path}
crontab -l > ${collect_path}/crontab-l.txt
df -k > ${collect_path}/df_k.txt
dladm show-aggr -L > ${collect_path}/dladm-show-aggr.txt
dladm show-dev > ${collect_path}/dladm-show-dev.txt
dladm show-phys > ${collect_path}/dladm-show-phys.txt
echo |format > ${collect_path}/format.txt
eeprom > ${collect_path}/eeprom.txt
fcinfo hba-port > ${collect_path}/fcinfo-hba.txt
ifconfig -al > ${collect_path}/ifconfig.txt
ipadm show-addr > ${collect_path}/ipadm-show-addr.txt
ipadm show-if > ${collect_path}/ipadm-show-if.txt
ldm list > ${collect_path}/ldm-list.txt
ldm list-devices > ${collect_path}/ldm-list-devices.txt
ldm list-services > ${collect_path}/ldm-list-services.txt
ldm ls -l > ${collect_path}/ldm-ls-l.txt
luxadm -e port > ${collect_path}/luxadm-port.txt
mount > ${collect_path}/mount.txt
mpathadm list LU > ${collect_path}/mpathadm-list.txt
mpathadm show LU > ${collect_path}/mpathadm-show.txt
netstat -an > ${collect_path}/netstat-an.txt
netstat -rn > ${collect_path}/netstat-rn.txt
powermt display > ${collect_path}/powermt.txt
powermt display dev=all > ${collect_path}/powermt_all.txt
prtdiag -v > ${collect_path}/prtdiag-v.txt
ps -ef > ${collect_path}/processos.txt
ps -ef | grep -i pmon > ${collect_path}/pmon.txt
rpcinfo > ${collect_path}/rpcinfo.txt
svcs -av > ${collect_path}/svcs-av.txt
svcs -l > ${collect_path}/svcs-l.txt
svcs -xv > ${collect_path}/svcs-xv.txt
uname -a > ${collect_path}/uname.txt
/usr/bin/ls -l /dev/rdsk > ${collect_path}/ls-rdsk.txt
vmstat 5 5 > ${collect_path}/vmstat.txt
zfs list > ${collect_path}/zfs-list.txt
zpool list > ${collect_path}/zpool-list.txt
zpool status -v > ${collect_path}/zpool-status.txt

# compress then delete the current job directory.
[ -d ${collect_path} ] && cd ${collect_path}/../ && mv $(date +"%d%m%Y") $(uname -n)_$(date +"%d%m%Y") && tar -cf collect_$(uname -n)_$(date +"%d%m%Y").tar $(uname -n)_$(date +"%d%m%Y") && rm -rf $(uname -n)_$(date +"%d%m%Y")

}

# function to run on platform HP-UX only.
sys_hpux () {
bdf > ${collect_path}/bdf.txt
cp -pr /etc/hosts /etc/passwd /etc/group /etc/services /etc/lvmpvg /etc/lvmtab /etc/fstab ${collect_path}/
cp -pr /etc/shadow /etc/rc.config.d/netconf /etc/dfs /etc/rc.config.d/nddconf /etc/exports ${collect_path}/
cp -pr /.profile /etc/profile ${collect_path}/
cp /usr/local/bin/*.sh ${collect_path}/
crashconf -v > ${collect_path}/crashconf.txt
exportfs  > ${collect_path}/exportfs.txt
ioscan -fnC disk > ${collect_path}/ioscan_fnC_disks.txt
ioscan -fnC lan > ${collect_path}/ioscan_fnC_lan.txt
ioscan -fn > ${collect_path}/ioscan_fn.txt
ioscan -m dsf > ${collect_path}/ioscan_m_dsf.txt
ioscan -m lun > ${collect_path}/ioscan_m_lun.txt
kctune > ${collect_path}/kctune.txt
kmtune > ${collect_path}/kmtune.txt
lanscan >  ${collect_path}/lanscan.txt
lvdisplay -v /dev/vg*/lvol* > ${collect_path}/lvdisplay.txt
lvlnboot -R > ${collect_path}/lvlnboot_R.txt
lvlnboot -v > ${collect_path}/lvlnboot_v.txt
mount -p > ${collect_path}/mount_p.txt
netstat -in > ${collect_path}/netstat_in.txt
netstat -rn > ${collect_path}/netstat_rn.txt
powermt display > ${collect_path}/powermt_display.txt
powermt display dev=all > ${collect_path}/powermt_display_devall.txt
ps -ef > ${collect_path}/ps_ef.txt
ps -ef | grep -i pmon > ${collect_path}/pmon.txt
setboot > ${collect_path}/setboot.txt
swapinfo -tam > ${collect_path}/swapinfo_tam.txt
swlist -l bundle > ${collect_path}/swlist_l_bundle.txt
swlist -l fileset > ${collect_path}/swlist_l_fileset.txt
swlist -l product > ${collect_path}/swlist_l_product.txt
sysdef > ${collect_path}/sysdef.txt
uname -a > ${collect_path}/uname_a.txt
vgdisplay -v > ${collect_path}/vgdisplay_v.txt
vparenv > ${collect_path}/vparenv.txt
vparstatus -A > ${collect_path}/vparstatus_A.txt
vparstatus > ${collect_path}/vparstatus.txt
vparstatus -v > ${collect_path}/vparstatus_v.txt

[ ! -d ${collect_path}/crontabs/ ] && mkdir -p ${collect_path}/crontabs
cp -pr /var/spool/cron/crontabs/* ${collect_path}/crontabs/

if  [ -d /etc/cmcluster ] ; then
  tar cvf ${collect_path}/cmcluster.tar  /etc/cmcluster
  cmviewcl > ${collect_path}/cmviewcl.txt
  cmviewcl -v > ${collect_path}/cmviewcl_v.txt
fi

# compress then delete the current job directory.
[ -d ${collect_path} ] && cd ${collect_path}/../ && mv $(date +"%d%m%Y") $(uname -n)_$(date +"%d%m%Y") && tar -cf collect_$(uname -n)_$(date +"%d%m%Y").tar $(uname -n)_$(date +"%d%m%Y") && rm -rf $(uname -n)_$(date +"%d%m%Y")
}

# function to check platform.
my_verify_plat () {

plat="$(uname)";

case "${plat}" in
  Linux) sys_linux ;;
  AIX)   sys_aix ;;
  SunOS) sys_sunos ;;
  HP-UX) sys_hpux ;;
  *)     exit 1 ;;
esac

} > /dev/null 2>&1 

# check the platform then run the correct function.
my_verify_plat

I always use Ansible to do the hard work; the following Ansible tasks can distribute the shell script, configure cron, and collect the output files.

# collect-exec.yml
- hosts: datacenter1
  gather_facts: no

  tasks:
    - file: name={{ item }} state=absent
      with_items:
        - /var/tmp/collect-exec.sh

    - copy: src=/appl/collect/collect-exec.sh dest=/var/tmp/collect-exec.sh mode=0775 owner=root

    - cron: name="collect-exec" state=present minute="0" hour="22" job="timeout 30m /var/tmp/collect-exec.sh"

# fetch_collect-exec.yml
- hosts: datacenter1
  gather_facts: no

  tasks:
    - shell: "find /var/tmp/collect -maxdepth 1 -type f | awk -F/ '{print $NF}'"
      register: result

    - debug: var=result

    - fetch: src=/var/tmp/collect/{{ item }} dest=/appl/collect/data flat=yes
      with_items: result.stdout_lines

To run the Ansible tasks, do the following.

# ansible-playbook /etc/ansible/tasks/collect-exec.yml -f 50 -v
# ansible-playbook /etc/ansible/tasks/fetch_collect-exec.yml -f 50 -v

Have fun 🙂