This is a continuation of the post on Scripting EVA Checks. The script I wrote there took too long to run given what it was designed to do, so I took a different approach: run a simple ls disk command, count the disks in the output, and write the counts to a CSV file. Not all arrays have the same types of disk, but it was much easier to put all the types we have into a ForEach loop, including ungrouped ones. The real goal of this exercise was to identify failed disks, and they most often show up as ungrouped.
This still relies on the text files and saved profiles in SSSU, but it executes much more quickly. The output is much cleaner and a lot easier to act on.
$date = Get-Date -Format dd_MM_yyyy
New-Item -Path .\exports -ItemType Directory -Name $date
$evas = @("EVA01","EVA02","EVA03","EVA04","EVA05","EVA06")
$evaDisks=@()
ForEach ($e in $evas) {
# EVA Connection Info
$file = """file .\systems\login\$e.txt"""
$lsCmd = "ls disk"
$disks = & C:\Tools\hp\sssu.exe $file $lsCmd | Select-Object -Skip 14 | Sort-Object
Write-Output $e
$diskObj = New-Object psobject -Property @{
ARRAY = $e
FATA = ($disks | Select-String -Pattern "FATA").Count
FC = ($disks | Select-String -Pattern "FC").Count
E15 = ($disks | Select-String -Pattern "E15").Count
ENT = ($disks | Select-String -Pattern "ENT").Count
MID = ($disks | Select-String -Pattern "MID").Count
UNGROUPED = ($disks | Select-String -Pattern "Ungrouped").Count
TOTAL = $disks.Count
}
$evaDisks += $diskObj
}
$evaDisks | Export-Csv .\exports\$date\evaReport.csv -NoTypeInformation
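Since failed disks usually surface as ungrouped, a quick follow-up could import the report and warn on any non-zero UNGROUPED count. This is only a sketch: the sample rows below stand in for `Import-Csv .\exports\$date\evaReport.csv`, and the values are hypothetical, though the property names match the object built above.

```powershell
# Sample rows standing in for the exported report (hypothetical values)
$report = @(
    [pscustomobject]@{ ARRAY = 'EVA01'; UNGROUPED = 0 },
    [pscustomobject]@{ ARRAY = 'EVA02'; UNGROUPED = 2 }
)
# Warn on any array reporting ungrouped (likely failed) disks
$flagged = $report | Where-Object { [int]$_.UNGROUPED -gt 0 }
$flagged | ForEach-Object { Write-Warning "$($_.ARRAY) has $($_.UNGROUPED) ungrouped disk(s)" }
```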
While the other script is still useful, it is probably better for drilling in deeper and locating the failed disks in an array while on the phone with support, letting it complete in the background while focusing on other tasks at hand. The next step is to integrate the export with HipChat and post it to an operations channel.
Friday, December 29, 2017
Thursday, December 28, 2017
Scripting EVA Checks
At my job we have six HP EVA storage arrays and four different Command View instances to manage them. Daily checks on drives were a bit tedious until I found the SSSU utility for scripting. The script started out as a batch file that would output all the disks into a single text file with pseudo-XML formatting.
Here is the batch script that I used to get the output of all the disks:
@echo on
rem #########################
rem Name EVA Disk Output
rem By Jeffrey Swindel
rem Date December 2017
rem Version 0.1
rem #########################
rem ===================
rem Set ENV Variables
rem ===================
rem note: this parse assumes the US %date% format (e.g. Fri 12/29/2017)
for /f "tokens=1-4 delims=/ " %%i in ("%date%") do (
set dow=%%i
set month=%%j
set day=%%k
set year=%%l
)
set datestr=%day%_%month%_%year%
echo datestr is %datestr%
rem ===================
rem Set Directory
rem ===================
cd /d C:\Tools\hp
mkdir .\exports\%datestr%
rem =========================
rem Collect Disk Output
rem =========================
rem EVA01
sssu.exe "file .\systems\eva01.txt" > .\exports\%datestr%\eva01.xml
rem EVA02
sssu.exe "file .\systems\eva02.txt" > .\exports\%datestr%\eva02.xml
rem EVA03
sssu.exe "file .\systems\eva03.txt" > .\exports\%datestr%\eva03.xml
rem EVA04
sssu.exe "file .\systems\eva04.txt" > .\exports\%datestr%\eva04.xml
rem EVA05
sssu.exe "file .\systems\eva05.txt" > .\exports\%datestr%\eva05.xml
rem EVA06
sssu.exe "file .\systems\eva06.txt" > .\exports\%datestr%\eva06.xml
rem ===================
rem Cleaning Up
rem ===================
forfiles -p ".\exports" -d -14 -c "cmd /c IF @isdir == TRUE rd /S /Q @path"
This script uses the SSSU executable with text files to pass the following arguments for each EVA:
set option on_error=continue
SELECT MANAGER commandView USERNAME=commandView.user
SELECT SYSTEM "EVA01"
ls disk full XML
exit
and then cleans up anything in the exports folder that is older than 14 days. The output was not the easiest to parse through to find drives that had failed or were ungrouped, but it was definitely better than having to log in to each Command View.
Taking it a step further, I created a simple PowerShell script for one of the EVAs that uses the SSSU executable and outputs all the drives into a CSV file. It starts with a list of drives read from a text file and passes an ls disk command for each one through to the executable. From the output it grabs specifics such as disk name, state, storage array, bay, and enclosure, which proved more helpful for reporting purposes; I could call HP and open a case without having to check Command View right away. The script was modular enough to plug in the other EVA names and create a copy for each array.
#---------------------
# EVA Connection Info
#---------------------
$disks = Get-Content .\systems\eva01_disks.txt
#---------------------
# Create Empty Array
#---------------------
$eva01 = @()
#---------------------
# Loop Through Disks
#---------------------
ForEach ($d in $disks){
$file = """file .\systems\login\eva01.txt"""
$cmd = "ls disk `"$d`" xml"
$output = & C:\Tools\hp\sssu.exe $file $cmd
[xml]$xml = $output | Select-Object -Skip 15
$driveObject = New-Object psobject -Property @{
STORAGE_ARRAY = $xml.object.storagecellname
DISKNAME = $xml.object.diskname
DISKGROUP = $xml.object.diskgroupname
ENCLOSURE = $xml.object.shelfnumber
BAY = $xml.object.diskbaynumber
DRIVETYPE = $xml.object.diskdrivetype
STATE = $xml.object.operationalstate
USAGE = $xml.object.actualusage
}
$eva01 += $driveObject
}
#---------------------
# Output to CSV File
#---------------------
$eva01 | Export-Csv .\exports\eva01.csv -NoTypeInformation
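Once the CSV exists, filtering for problem drives is straightforward. A sketch follows, with the caveat that the hypothetical rows stand in for `Import-Csv .\exports\eva01.csv`, and the exact STATE strings ("good", "failed") and the "Ungrouped Disks" group name are assumptions about SSSU's output rather than confirmed values:

```powershell
# Hypothetical rows standing in for Import-Csv .\exports\eva01.csv
$drives = @(
    [pscustomobject]@{ DISKNAME='Disk 001'; ENCLOSURE='2'; BAY='5'; STATE='good';   DISKGROUP='Default Disk Group' },
    [pscustomobject]@{ DISKNAME='Disk 014'; ENCLOSURE='3'; BAY='9'; STATE='failed'; DISKGROUP='Ungrouped Disks' }
)
# Keep anything not in a good state, or sitting outside a disk group
$problems = $drives | Where-Object { $_.STATE -ne 'good' -or $_.DISKGROUP -match 'Ungrouped' }
$problems | Format-Table DISKNAME, ENCLOSURE, BAY, STATE -AutoSize
```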
In the world of automation, this was a leap from what was done before but still miles from where it needed to be.
The next step was to create one script that would be able to loop through each storage array and output to csv files so each one did not have to be run individually. I wanted to get away from individual text files for each storage array. The only thing left in each EVA text file was this:
set option on_error=continue
SELECT MANAGER commandView USERNAME=commandView.user
SELECT SYSTEM "EVA01"
This just selects the manager, the username, and the system. Now the script loops through an array of EVA names and outputs a CSV file for each into a date folder. The "final" version is this:
$date = Get-Date -Format dd_MM_yyyy
$evas = @("EVA01","EVA02","EVA03","EVA04","EVA05","EVA06")
New-Item -Path .\exports -ItemType Directory -Name $date
ForEach ($e in $evas){
#---------------------
# EVA Connection Info
#---------------------
$file = """file .\systems\login\$e.txt"""
$lsCmd = "ls disk"
$disks = & C:\Tools\hp\sssu.exe $file $lsCmd | Select-Object -Skip 14 | Sort-Object
Write-Output $e
#---------------------
# Create Empty Array
#---------------------
$eva = @()
#---------------------
# Loop Through Disks
#---------------------
ForEach ($d in $disks){
$d = $d.Substring(2)
$cmd = "ls disk `"$d`" xml"
$output = & C:\Tools\hp\sssu.exe $file $cmd
Write-Output $d
[xml]$xml = $output | Select-Object -Skip 15
$driveObject = New-Object psobject -Property @{
STORAGE_ARRAY = $xml.object.storagecellname
DISKNAME = $xml.object.diskname
DISKGROUP = $xml.object.diskgroupname
ENCLOSURE = $xml.object.shelfnumber
BAY = $xml.object.diskbaynumber
DRIVETYPE = $xml.object.diskdrivetype
STATE = $xml.object.operationalstate
USAGE = $xml.object.actualusage
}
$eva += $driveObject
}
#---------------------
# Output to CSV File
#---------------------
$eva | Export-Csv .\exports\$date\$e.csv -NoTypeInformation
}
#---------------------
# Merge CSV Files
#---------------------
$files = Get-ChildItem -Path .\exports\$date\* -Include *.csv
$mergedFile = ".\exports\$date\Merged_Report.csv"
$outfile = @()
foreach($f in $files) {
if(Test-Path $f) {
$filename = [System.IO.Path]::GetFileName($f)
$temp = Import-CSV -Path $f | select *, @{Expression={$filename};Label="FileName"}
$outfile += $temp
}
else {
Write-Warning "$f : File not found"
}
}
$outfile | Export-Csv -Path $mergedFile -NoTypeInformation
Write-Output "$mergedFile successfully created"
As a side note, it's not the fastest of scripts, since all together there are around 200 disks in each array, so there is room for improvement. But in terms of sanity it is much better than what it used to be (which was non-existent). That's all for now, and there will probably be more revisions to come.
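On the speed front, one small improvement to the merge step could be building the rows in a single pipeline instead of growing `$outfile` with `+=`, which copies the array on every pass. This is only a sketch using the same paths and `$date` variable as the script above:

```powershell
# Collect every per-array CSV into one set of rows, tagging each row
# with the file it came from, then write the merged report once
$merged = Get-ChildItem -Path .\exports\$date\* -Include *.csv | ForEach-Object {
    $name = $_.Name
    Import-Csv -Path $_.FullName | Select-Object *, @{Label='FileName'; Expression={ $name }}
}
$merged | Export-Csv -Path .\exports\$date\Merged_Report.csv -NoTypeInformation
```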
Wednesday, December 6, 2017
Install Webmin on CentOS 7
Webmin is a powerful web-based tool, written in Perl, for administering any UNIX system. With it you can set up accounts, Apache, DNS, FTP, file shares, and much more. Webmin also removes the manual edits of configuration files, as well as provides a remote console. Installing it on CentOS 7 takes only a matter of minutes and has some prerequisites. The version I installed was 1.860-1, on a virtual machine in KVM running a minimal install of CentOS 7.4.1708.
To start, log in via SSH or the console and update all the installed packages on the system with:
yum update -y
In order to continue with the installation, you need to install all dependencies. If they are not installed you can install them using the command:
yum install wget perl perl-Net-SSLeay perl-IO-Tty perl-Encode-Detect openssl
Then download the RPM package. Visit the Webmin download page, check for the most current version of the RPM package (suitable for any RedHat, Fedora, or CentOS system), and use wget to download it:
wget http://prdownloads.sourceforge.net/webadmin/webmin-1.860-1.noarch.rpm
With all the dependencies installed, run the following command to install Webmin:
rpm -U webmin-1.860-1.noarch.rpm
The install takes a moment to complete; once it's done it will output a success message, and you can log into the console from a web browser at:
https://{IPaddress}:10000
By default, Webmin uses a self-signed SSL certificate which is common for your web browser to warn you that the connection is not secure. You can ignore this for now, and accept the self-signed SSL certificate to proceed to the log in screen.
The administration username you can use to sign in is root, and the password is your current root password. After the install, Webmin should be running; if it didn't start automatically, try the following command:
service webmin start
Finally, enable Webmin on system boot with:
chkconfig webmin on
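One step a minimal install may also need, which the steps above don't cover: if firewalld is enabled, port 10000 has to be opened before a browser on another machine can reach Webmin. These commands are an assumption about your firewall setup; skip them if firewalld isn't running.

```shell
# Open Webmin's default port in firewalld and apply the change
firewall-cmd --permanent --add-port=10000/tcp
firewall-cmd --reload
```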
In the Webmin dashboard you can see some basic information about your system such as recent logins, memory usage, installed packages, disk space, etc. Modules and services that you can manage through Webmin are located on the left panel.
Friday, December 1, 2017
KVM on Ubuntu 16.04
KVM is a free and open source virtualization solution for Linux on x86 hardware with the Intel VT or AMD-V extensions. After installing Ubuntu 16.04, if the processor supports virtualization, installation comes down to a number of packages available through apt. The first of those packages needed are qemu and libvirt. To install the qemu-kvm package and some supporting packages for virtual machine operation, run:
$ sudo apt install -y qemu-kvm libvirt0 libvirt-bin virt-manager bridge-utils
qemu-kvm - an open source machine emulator
libvirt0 - library for interfacing with different virtualization systems
libvirt-bin - programs for the libvirt library
virt-manager - desktop application for managing VMs
bridge-utils - utilities for configuring the Linux Ethernet bridge
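The VT/AMD-V requirement mentioned above can be confirmed before going further by counting the CPU flags. A quick sketch:

```shell
# vmx = Intel VT, svm = AMD-V; a count of 0 means the extensions are
# missing or disabled in the BIOS/UEFI (grep -c exits 1 on zero matches,
# so || true keeps the pipeline from failing)
vt_flags=$(grep -E -c '(vmx|svm)' /proc/cpuinfo || true)
echo "virtualization flags found: $vt_flags"
```

The kvm-ok tool from the cpu-checker package performs a similar check, if you prefer a packaged answer.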
Register libvirt-bin with systemd so the services come back up after reboots.
$ sudo systemctl enable libvirt-bin
Add your user account to the libvirtd group. This lets you run libvirt commands without having to prefix them with sudo every time.
$ sudo usermod -aG libvirtd <username>
This last part is optional: create an iso directory for sharing ISO images with the libvirtd group, move the ISO into that directory, and change the owner.
$ sudo mkdir /var/lib/libvirt/iso
$ sudo mv filename.iso /var/lib/libvirt/iso
$ sudo chown libvirt-qemu:libvirtd /var/lib/libvirt/iso/filename.iso
Log out and back in, then launch virt-manager. This is a GUI tool for libvirt used to create virtual machines. (Alternatively, search for Virtual Machine Manager in dash).
$ virt-manager
Click "New Virtual Machine" at the upper left, select the install media and follow the prompts to create the VM.
Chrome Remote Desktop on Ubuntu
There's a headless computer I set up some time back that was running Windows 10 with Chrome Remote Desktop, so it was possible to easily log in and check on things while I was away. Recently I have been moving away from Windows and more towards Linux. After installing Ubuntu 16.04 and setting it up with Google Chrome and Chrome Remote Desktop, I ran into an issue: it would only show the wallpaper and the cursor. (This is actually a feature of the Linux version: it creates a new X.Org session rather than duplicating what is on the screen, unlike the Windows version.)
The fix for this is quite easy and requires some work in the terminal. After installing Chrome Remote Desktop, stop the daemon (if it is running):
/opt/google/chrome-remote-desktop/chrome-remote-desktop --stop
It's good practice to make a backup of files you're working on in Linux (especially system files):
cp /opt/google/chrome-remote-desktop/chrome-remote-desktop /opt/google/chrome-remote-desktop/chrome-remote-desktop.orig
Open the chrome-remote-desktop file in your favorite text editor. Search for the following settings and change them as necessary:
DEFAULT_SIZES = "1920x1080"
FIRST_X_DISPLAY_NUMBER = 0
Setting the FIRST_X_DISPLAY_NUMBER to zero will mirror the desktop session, and any application you have open (same as the Windows version).
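For the two settings above, the edit can also be scripted. A sketch only: `sed -i.bak` keeps a backup copy alongside the original, and the path assumes the stock install location.

```shell
# Apply the two settings changes in place, keeping a .bak backup
CRD=/opt/google/chrome-remote-desktop/chrome-remote-desktop
sed -i.bak \
  -e 's/^DEFAULT_SIZES = .*/DEFAULT_SIZES = "1920x1080"/' \
  -e 's/^FIRST_X_DISPLAY_NUMBER = .*/FIRST_X_DISPLAY_NUMBER = 0/' \
  "$CRD"
```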
Next, comment out this section so it doesn't increment the display number for a new desktop:
#while os.path.exists(X_LOCK_FILE_TEMPLATE % display):
# display += 1
and this one, so that it doesn't start a new Xvfb session, since the console on display zero is already running:
#logging.info("Starting %s on display :%d" % (xvfb, display))
#screen_option = "%dx%dx24" % (max_width, max_height)
#self.x_proc = subprocess.Popen(
#  [xvfb, ":%d" % display,
#   "-auth", x_auth_file,
#   "-nolisten", "tcp",
#   "-noreset",
#   "-screen", "0", screen_option
#  ] + extra_x_args)
#if not self.x_proc.pid:
#  raise Exception("Could not start Xvfb.")
Finally start the daemon:
/opt/google/chrome-remote-desktop/chrome-remote-desktop --start
Or restart the computer, and it will now open with the default Unity session. More info can be found in the Google Product Forums.