AWS – Pre-warming EBS volume

For volumes that have been restored from snapshots, use the dd or fio utilities to read all of the blocks on the volume. Reading does not modify the volume, so all existing data is preserved.

$ sudo dd if=/dev/xvdf of=/dev/null bs=1M
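Since fio is also mentioned above, here is an equivalent read pass modeled on the volume-initialization example in the AWS documentation (the device path and job name are placeholders; adjust them to your volume):

$ sudo fio --filename=/dev/xvdf --rw=read --bs=128k --iodepth=32 --ioengine=libaio --direct=1 --name=volume-initialize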

You can learn more about EBS volumes here.

AWS – Auto Scaling Lifecycle Hooks

Auto Scaling lifecycle hooks enable you to perform custom actions as Auto Scaling launches or terminates instances. For example, you could install or configure software on newly launched instances, or download log files from an instance before it terminates.

  • Create a lifecycle hook to perform an action on scale out
$ aws autoscaling put-lifecycle-hook --lifecycle-hook-name my-hook --auto-scaling-group-name my-asg --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING
  • Create a lifecycle hook to perform an action on scale in
$ aws autoscaling put-lifecycle-hook --lifecycle-hook-name my-hook --auto-scaling-group-name my-asg --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING
  • If you need more than the default one hour, send a heartbeat to keep the instance in its current wait state.
$ aws autoscaling record-lifecycle-action-heartbeat --instance-id i-1a2b3c4d5e6f7g --lifecycle-hook-name my-hook --auto-scaling-group-name my-asg
  • Complete the lifecycle action and move the instance to its next state (e.g. InService)
$ aws autoscaling complete-lifecycle-action --lifecycle-action-result CONTINUE --instance-id i-1a2b3c4d5e6f7g --lifecycle-hook-name my-hook --auto-scaling-group-name my-asg
  • Abandon the lifecycle action (for a launch hook, the instance is terminated)
$ aws autoscaling complete-lifecycle-action --lifecycle-action-result ABANDON --instance-id i-1a2b3c4d5e6f7g --lifecycle-hook-name my-hook --auto-scaling-group-name my-asg
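Two related commands can help here as well (my-hook and my-asg are the same placeholder names used above):
  • Set a longer wait when creating the hook; the default heartbeat timeout is 3600 seconds
$ aws autoscaling put-lifecycle-hook --lifecycle-hook-name my-hook --auto-scaling-group-name my-asg --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING --heartbeat-timeout 7200
  • List the lifecycle hooks configured on a group
$ aws autoscaling describe-lifecycle-hooks --auto-scaling-group-name my-asg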
You can read more about Auto Scaling Lifecycle Hooks here.

hadouken – Simple inventory with python and sqlite3

This ansible role collects information about hardware and software from Linux servers, then inserts it into a sqlite3 database.

The process happens in three phases.

  • Distribute and execute hadouken.py
  • Collect the json file that hadouken.py generates
  • Execute update-db.py to load json info in the database

I also included commands that collect information about EMC Storage, Veritas InfoScale, and Veritas Cluster Server.


Prerequisites

On the ansible server.

  • ansible 1.4 or higher
  • sqlite3
yum install ansible sqlite3

On the other servers.

  • dmidecode
yum install dmidecode

Installation

Install using ansible-galaxy.

mkdir -p /etc/ansible/roles && cd /etc/ansible/roles && ansible-galaxy install kdiegorsantos.hadouken
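You can confirm the role was installed by listing the roles ansible-galaxy knows about:

ansible-galaxy list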

Configuration

Create a group named hadouken in your ansible hosts file and fill it with the desired hosts.

cat <<EOF >> /etc/ansible/hosts
[hadouken]
webserver
dbserver
EOF

Change the default domain variable in defaults/main.yml.

cat <<EOF > /etc/ansible/roles/hadouken/defaults/main.yml
domain: mydomain.com
EOF

Running

  • ansible-playbook

Run this ansible role using the ansible-playbook command.

ansible-playbook /etc/ansible/roles/hadouken/role.yml
  • hadouken.py

Run hadouken.py manually on a host.

/usr/local/sbin/hadouken.py
server_name: snelnxa72 
server_release: Red Hat Enterprise Linux Server release 5.11 (Tikanga)
server_site: SNE
server_vendor:  HP
server_model:  ProLiant BL460c Gen8
server_serial: BRC2532JH4
server_cpu: Intel Xeon CPU E5-2650 2.00GHz / 2 Socket(s) / 32 CPU(s)/ 8 Core(s) per socket
server_memory: 32 GB 
server_ip: 10.168.81.77
server_cluster: idem_cluster 
server_clusternodes: snelnx187 snelnx189 snelnxa36 snelnxa68 snelnxa69 snelnxa70 snelnxa71 snelnxa72 snelnxa73
server_frame: 000290102907 000592600076 000595700007 000595700008 CKM00154803864
server_wwpn: 10006c3be5b076f1 10006c3be5b076f5 
server_db: None
  • hadouken json file

Display the content of the json file generated by hadouken.py.

cat /var/tmp/snenix002.json | python -m json.tool
{
    "server_cluster": "",
    "server_clusternodes": "",
    "server_cpu": "2 Socket(s) Intel Xeon CPU E5540 @ 2.53GHz/ 16 CPU(s)/ 4 Core(s) per socket",
    "server_db": "",
    "server_frame": "",
    "server_ip": "10.168.90.103",
    "server_memory": "32 GB",
    "server_model": "ProLiant BL460c G6",
    "server_name": "snenix002",
    "server_release": "Red Hat Enterprise Linux Server release 6.8 (Santiago)",
    "server_serial": "BRC952N120",
    "server_site": "SNE",
    "server_vendor": "HP",
    "server_wwpn": ""
}
  • db.sqlite

Run sql commands in an easy way using query-db.sh; just give it an argument to begin the search.

/etc/ansible/roles/hadouken/files/bin/query-db.sh BRC50966F0
          server_id = 946
        server_name = rjolnxc15
     server_release = Red Hat Enterprise Linux Server release 6.5 (Santiago)
        server_site = RJO
      server_vendor = HP
       server_model = ProLiant DL580 Gen8
      server_serial = BRC50966F0
         server_cpu = 4 Socket(s) Intel Xeon CPU E7-4890 v2 @ 2.80GHz/ 120 CPU(s)/ 15 Core(s) per socket
      server_memory = 1292 GB
          server_ip = 10.168.34.150
     server_cluster = 
server_clusternodes = 
       server_frame = 000595700042
        server_wwpn = 5001438028cfc61c 5001438028cfc61e 5001438028cccf94 5001438028cccf96
          server_db = 
       server_owner = 
        server_rack = 
     server_console = 
        last_update = 2016-09-23
  • sqlite3 query

Run sql commands using sqlite3.

sqlite3 -header -column /etc/ansible/roles/hadouken/files/db/db.sqlite "select * from info where server_name = 'snenix002'"
server_id   server_name  server_release                                          server_site  server_vendor  server_model        server_serial  server_cpu                                                                   server_memory  server_ip      server_cluster  server_clusternodes  server_frame  server_wwpn  server_db   server_owner  server_rack  server_console  last_update
----------  -----------  ------------------------------------------------------  -----------  -------------  ------------------  -------------  ---------------------------------------------------------------------------  -------------  -------------  --------------  -------------------  ------------  -----------  ----------  ------------  -----------  --------------  -----------
1           snenix002    Red Hat Enterprise Linux Server release 6.8 (Santiago)  SNE          HP             ProLiant BL460c G6  BRC952N120     2 Socket(s) Intel Xeon CPU E5540 @ 2.53GHz/ 16 CPU(s)/ 4 Core(s) per socket  32 GB          10.168.90.103                                                                                                                         2016-09-23
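If you want to inspect the table layout itself, sqlite3's .schema dot-command prints the CREATE TABLE statement (same database path as above, assuming the role's default location):

sqlite3 /etc/ansible/roles/hadouken/files/db/db.sqlite ".schema info"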

License

This project is licensed under the MIT license. See included LICENSE.md.

Network settings in RHEL7

Here are some useful notes about network settings in RHEL7.

  • Set hostname
# hostnamectl set-hostname server001
  • Display network devices
# nmcli d
  • Set ipv4 address
# nmcli c modify eth0 ipv4.addresses 192.168.0.30/24
  • Set default gateway
# nmcli c modify eth0 ipv4.gateway 192.168.0.1
  • Set dns
# nmcli c modify eth0 ipv4.dns 8.8.8.8
  • Set the method to manual for static addressing (use auto for dhcp)
# nmcli c modify eth0 ipv4.method manual
  • Restart the network interface (down and up)
# nmcli c down eth0 && nmcli c up eth0
  • Show settings of a network interface
# nmcli d show eth0
  • Show network interface status
# ip addr show
  • Disable ipv6
# vi /etc/default/grub
# line 6: add
GRUB_CMDLINE_LINUX="ipv6.disable=1 rd.lvm.lv=fedora-server/root.....
  • Apply the change
# grub2-mkconfig -o /boot/grub2/grub.cfg
# systemctl reboot
  • Use network interface names like ethX
# vi /etc/default/grub
# line 6: add
GRUB_CMDLINE_LINUX="net.ifnames=0 rd.lvm.lv=fedora/swap rd.md=0.....
  • Apply the change
# grub2-mkconfig -o /boot/grub2/grub.cfg
# systemctl reboot

Get started with systemd

Here are some useful notes about systemd.

  • Display the current version of systemd
# systemctl --version
  • Display boot process duration
# systemd-analyze
  • Get the time spent by each task during the boot process
# systemd-analyze blame
  • Get the list of the dependencies
# systemctl list-dependencies
  • Get the list of dependencies for a particular service
# systemctl list-dependencies sshd.service
  • Get the content of systemd journal
# journalctl
  • Get all the events related to the crond process in the journal
# journalctl /sbin/crond
  • Get all the events since the last boot
# journalctl -b
  • Get all the events that appeared today in the journal
# journalctl --since=today
  • Get all the events with a syslog priority of err
# journalctl -p err
  • Get the 10 last events and wait for new ones (like tail -f /var/log/messages)
# journalctl -f
  • By default, journal logs are stored under /run/log/journal and disappear after a reboot. Create /var/log/journal to make them persistent, and optionally limit their disk usage.
# mkdir /var/log/journal
# echo "SystemMaxUse=50M" >> /etc/systemd/journald.conf
# systemctl restart systemd-journald
  • Display the disk space used by journald
# journalctl --disk-usage
  • Get the full hierarchy of control groups
# systemd-cgls
  • Get the list of control group ordered by CPU, memory and disk I/O load
# systemd-cgtop
  • Kill all the processes associated with an apache server
# systemctl kill httpd
  • Put resource limits on a service (here 500 CPUShares)
# systemctl set-property httpd.service CPUShares=500
  • Get the current CPUShares service value
# systemctl show -p CPUShares httpd.service
  • Service management (you can omit .service if you wish)
# systemctl 'status|enable|disable|start|stop|restart|reload' sshd.service
  • Display if service is enabled
# systemctl is-enabled sshd
  • If you change a service configuration, you will need to reload systemd
# systemctl daemon-reload
  • Display unit files
# systemctl list-unit-files
  • Get the list of services that failed at boot
# systemctl --failed
  • Get the status of a service on a remote server
# systemctl -H root@jason.local status sshd.service
  • Get all the configuration details about a service
# systemctl show sshd
  • Get current locale
# localectl
  • Change current locale
# localectl set-locale LANG=en_US.utf8
  • Change the console keymap
# localectl set-keymap us
  • Change the X11 keymap
# localectl set-x11-keymap us
  • Get current date and time
# timedatectl
  • Change date
# timedatectl set-time YYYY-MM-DD
  • Change time
# timedatectl set-time HH:MM:SS
  • Get the list of zones
# timedatectl list-timezones
  • Change the time zone
# timedatectl set-timezone America/Sao_Paulo
  • Get user list
# loginctl list-users
  • Get current sessions
# loginctl list-sessions
  • Display properties from a user
# loginctl show-user kdiegorsantos
  • Set runlevel 3 as default
# systemctl set-default -f multi-user.target
  • Move to rescue mode
# systemctl rescue
  • Move to runlevel 3
# systemctl isolate runlevel3.target
  • Move to graphical mode
# systemctl isolate graphical.target
  • Poweroff, restart, suspend or hibernate.
# systemctl 'poweroff|restart|suspend|hibernate'


Chrooted SSH in RHEL

OpenSSH 4.9+ includes a built-in chroot for sftp, but requires a few tweaks to the normal install.

You can create a rule to jail users or groups. It is very simple; if you want to create a rule based on a group, do the following.

  • Override the default subsystem “/usr/libexec/openssh/sftp-server” in /etc/ssh/sshd_config (remove or comment out the existing Subsystem line first, since sshd rejects duplicate entries), then create a group that will contain all sftp-only users and add the user to this group.
  • The commented Match User line shows how to write a rule for a single user instead.
# groupadd sftponly
# gpasswd -a kdiegorsantos sftponly
# cat <<EOF >> /etc/ssh/sshd_config
Subsystem       sftp    internal-sftp
# Match User kdiegorsantos
Match Group sftponly
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no
EOF

The chroot directory must be owned by root.

# chown root:root /home/kdiegorsantos
# chmod 700 /home/kdiegorsantos

Change the user shell to prevent SSH login.

# usermod -s /bin/false kdiegorsantos
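Before restarting sshd you can validate the new configuration; sshd -t prints nothing on success and reports the offending line otherwise:

# sshd -t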

After changing the SSH config file, make sure to restart the daemon to apply the rules.

# service sshd restart

Now only SFTP connections can be established by users in the sftponly group.

[root@server002 ~]# ssh -l kdiegorsantos server001
kdiegorsantos@server001's password:
This service allows sftp connections only.
Connection to server001 closed.
[root@server002 ~]#

If a user is able to write to the chroot directory, it is possible for them to escalate their privileges to root and escape the chroot. One way around this is to give the user two home directories: one “real” home they can write to, and one SFTP home that is locked down to keep sshd happy and your system secure. By using mount --bind you can make the real home directory appear as a subdirectory inside the SFTP home directory, allowing them full access to their real home directory.

# mkdir /home/chroot/kdiegorsantos
# mount --bind /home/kdiegorsantos /home/chroot/kdiegorsantos
# echo '/home/kdiegorsantos /home/chroot/kdiegorsantos        none    bind' >> /etc/fstab


Get information about Veritas InfoScale using CLI in RHEL

If you are using Veritas InfoScale or Veritas Cluster Server in your environment, you can use the CLI to collect all kinds of useful information about your cluster.

The following commands are part of my shell script collect-exec.sh; adjust them to fit your needs.

# global collect directory.
collect_path=/var/tmp/collect_$(uname -n)_$(date +"%d%m%Y")
mkdir -p ${collect_path}

# check if HAD is running (the [V] trick keeps egrep from matching itself).
check_vcs_had=$(ps -ef | egrep -w "[V]RTSvcs/bin/had")

# if HAD is running export the right PATH
if [ ! -z "$check_vcs_had" ] ; then
    export PATH=${PATH}:/opt/VRTSvcs/bin:/opt/VRTS/bin:/opt/VRTSsfmh/bin:/etc/vx/bin
fi

# Display every command needed to recreate this main.cf from scratch.
hacf -verify /etc/VRTSvcs/conf/config/ -display > ${collect_path}/hacf_verify_display.txt

# Display the HAD version
had -version > ${collect_path}/had_version.txt

# Display all cluster users
hauser -display > ${collect_path}/hauser_display.txt

# List all cluster users
hauser -list > ${collect_path}/hauser_list.txt

# List all nodes in the cluster
hasys -list > ${collect_path}/hasys_list.txt

# Display the current state of all nodes in the cluster
hasys -state > ${collect_path}/hasys_state.txt

# Display the node number of all cluster nodes
hasys -nodeid > ${collect_path}/hasys_nodeid.txt

# Display the complete summary of the cluster
hastatus -summ > ${collect_path}/hastatus_summ.txt

# Display all resource types and their attributes
hatype -display > ${collect_path}/hatype_display.txt

# List all resource types
hatype -list > ${collect_path}/hatype_list.txt

# List all resources in the cluster
hares -list > ${collect_path}/hares_list.txt

# List all service groups in the cluster
hagrp -list > ${collect_path}/hagrp_list.txt

# Display the product version 
haclus -value EngineVersion > ${collect_path}/haclus_engineversion.txt

# Display all cluster attributes (including the cluster name)
haclus -display > ${collect_path}/haclus_display.txt

# Display all imported disks in Storage Foundation
vxdisk list > ${collect_path}/vxdisk_list.txt

# Display all imported or exported disks in Storage Foundation
vxdisk -o alldgs list > ${collect_path}/vxdisk_o_alldgs_list.txt

# Display information about cluster file system
vxdctl -c mode > ${collect_path}/vxdctl_c_mode.txt
vxdctl mode > ${collect_path}/vxdctl_mode.txt
vxclustadm -v nodestate > ${collect_path}/vxclustadm_nodestate.txt
vxclustadm nidmap > ${collect_path}/vxclustadm_nidmap.txt
vxclustadm -v nodestate -d > ${collect_path}/vxclustadm_v_nodestate.txt

# Display information about GAB port membership
gabconfig -a > ${collect_path}/gabconfig_a.txt
gabconfig -W > ${collect_path}/gabconfig_W.txt

# Display the heartbeat links 
lltstat > ${collect_path}/lltstat.txt
lltstat -nvv active > ${collect_path}/lltstat_active.txt
lltstat -n > ${collect_path}/lltstat_n.txt
lltconfig -W > ${collect_path}/lltconfig_W.txt

# Display information about cluster file system 
cfscluster status > ${collect_path}/cfscluster_status.txt

# Display information about disk groups
vxdg list > ${collect_path}/vxdg_list.txt && vxdg free > ${collect_path}/vxdg_free.txt

# Display information about VxVM 
vxprint -ht > ${collect_path}/vxprint_ht.txt
vxprint -Ath -q > ${collect_path}/vxprint_Athq.txt
vxprint -AGts > ${collect_path}/vxprint_AGts.txt
vxprint -m rootdg > ${collect_path}/vxprint_m_rootdg.txt

# Display information about fencing 
vxfenadm -d > ${collect_path}/vxfenadm_d.txt

# Display information about VxDMP 
vxdmpadm gettune all > ${collect_path}/vxdmpadm_gettune_all.txt
vxdmpadm listapm all > ${collect_path}/vxdmpadm_listapm_all.txt
vxdmpadm listenclosure all > ${collect_path}/vxdmpadm_listenclosure_all.txt
vxdmpadm stat restored > ${collect_path}/vxdmpadm_stat_restored.txt
vxdmpadm listctlr all > ${collect_path}/vxdmpadm_listctlr_all.txt
vxdmpdbprint > ${collect_path}/vxdmpdbprint.txt
vxddladm get namingscheme > ${collect_path}/vxddladm_namingscheme.txt
vxddladm listjbod > ${collect_path}/vxddladm_listjbod.txt
vxddladm listsupport > ${collect_path}/vxddladm_listsupport.txt

# Display the product license
vxlicense -p > ${collect_path}/vxlicense_p.txt
vxlicrep > ${collect_path}/vxlicrep.txt
vxlicrep -e > ${collect_path}/vxlicrep_e.txt

# Display all attributes from each resource
for a in $(/opt/VRTSvcs/bin/hares -list | awk '{print $1}') ; do
hares -display $a > ${collect_path}/hares_display.txt
done

# Display all attributes from each service group
for a in $(/opt/VRTSvcs/bin/hagrp -list | awk '{print $1}') ; do
hagrp -display $a > ${collect_path}/hagrp_display.txt
done
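To make the results easy to copy off the server, you can archive the collect directory at the end (collect_path is the variable defined at the top of the script):

# archive the collect directory for transfer.
tar czf ${collect_path}.tar.gz -C /var/tmp $(basename ${collect_path})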


Configure FCoE NICs in RHEL

Setting up and deploying an FCoE (Fibre Channel over Ethernet) interface requires two packages in RHEL 6.2 or later.

  • Install required packages.
# yum install -y fcoe-utils lldpad
  • Enable the services at startup.
# chkconfig lldpad on
# chkconfig fcoe on
  • Create the configuration file for each FCoE interface.
# cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2
# cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth3
  • DCB_REQUIRED should be set to no for network interfaces that implement a hardware DCBX client.
# sed -i 's/DCB_REQUIRED=\"yes\"/DCB_REQUIRED=\"no\"/g' /etc/fcoe/cfg-eth2
# sed -i 's/DCB_REQUIRED=\"yes\"/DCB_REQUIRED=\"no\"/g' /etc/fcoe/cfg-eth3
  • Start the required services.
# service lldpad start
# service fcoe start
  • Bring the FCoE interfaces up.
# ifup eth2
# ifup eth3
  • List your new FCoE interfaces.
# fcoeadm -i
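If the fabric login succeeded, you can also inspect the discovered targets and their LUNs (fcoeadm -t ships in the same fcoe-utils package):
  • List the FCoE targets.
# fcoeadm -t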

Creating a New Initial RAM Disk in RHEL 3, 4 or 5

If you need to create a new initrd (initial RAM disk) in RHEL 3, 4 or 5, don’t forget to make a backup of the current initrd before generating the new one.

  • Make a backup of your current initrd.
# cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img_$(date +"%d%m%Y")
  • Generate a new initrd.
# mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)
  • After generating the new initrd, reboot your server to boot using it.
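On RHEL 4 and 5, where mkinitrd produces a gzipped cpio archive, you can sanity-check the new image before rebooting (RHEL 3’s 2.4 kernel uses a compressed filesystem image instead, so this check does not apply there):

# zcat /boot/initrd-$(uname -r).img | cpio -t | head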