Tuesday, December 11, 2018

Chef Backups to S3

A project I recently undertook was setting up rotating backups of our Chef servers. We have two in our lab, one on premises in our data center, and three in different VPCs in AWS, none of which were being backed up when I joined the engineering team. The main reason this came about was that one of the lab Chef servers in AWS was shut down, which caused some automatic build jobs to fail, and for some reason it could not be reached after being powered back on. Luckily I was able to recover it by taking snapshots of the drives and mounting them to a new server in a different account (many more steps were taken to restore the full server).

With Chef Manage, there is a great chef-server-ctl backup command that creates a tar file of the database and files needed to restore to a new server. After the mishap with one of our servers, I worked up a script to create backups on the servers and keep only seven of them. Because I didn't want to run this as a power user (such as root), I added a line to the sudoers file to allow the user chefBackup to run the backup command:

chefBackup ALL= NOPASSWD: /usr/bin/chef-server-ctl backup --yes
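
A quick way to confirm the entry behaves as expected is to list what the chefBackup user is allowed to run (run as root; this just inspects the rule above):

sudo -l -U chefBackup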

Here is a simple script to create the backup file:

#!/bin/bash

#--Back up server
sudo /usr/bin/chef-server-ctl backup --yes

#--Remove old files
#--Crontab in Root

It is then run with cron:

0 2 * * * /home/chefServer/chefBackup.sh > /home/chefBackup/chefBackup.log 2>&1

To rotate the files, another cron job under the root account lists the backups in reverse order and removes the older ones, keeping only the newest handful:

0 3 * * * ls -1 /var/opt/chef-backup/*.tgz | sort -r | tail -n +6 | xargs rm > /dev/null 2>&1

With the script creating backup files, I set up a simple S3 bucket with an expiration policy and a user with access to it, installed the AWS CLI on each server, and modified the script to push the latest backup file with aws s3 cp:

#!/bin/bash
HOSTNAME=$(hostname)

#--Back up server
sudo /usr/bin/chef-server-ctl backup --yes

#--Copy the newest backup to S3
BACKUP=$(ls -dtr1 /var/opt/chef-backup/* | tail -1)
/usr/local/bin/aws s3 cp "$BACKUP" "s3://chef-backup-bucket/${HOSTNAME}${BACKUP}"

#--Remove old files
#--Crontab in Root

The process could be simplified to run under one account, or by giving the backup user broader sudo access, but this felt like a good option for now, at least until another backup solution is in place. If you want the CloudFormation template to create the bucket yourself, it can be found here as a gist on GitHub.
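
If you would rather set the expiration on an existing bucket from the command line instead of CloudFormation, something along these lines should do it (the bucket name matches the script above; the 30-day window and rule ID are only examples):

aws s3api put-bucket-lifecycle-configuration \
  --bucket chef-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-old-chef-backups",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Expiration": {"Days": 30}
      }
    ]
  }'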

Hope this helps!

Wednesday, November 28, 2018

Brocade Backup to Git

Our fibre switches are the Brocade 65xx series, and we recently needed a way to back them up. The configUpload command can back up to an FTP server, over SCP, or locally. Since I wanted to push the configs to Git, SCP was the easiest option, and I found this script from TheSanGuy on automating zone config backups for the Brocades. I modified it a bit so the date is not added to the file name, because Git handles the diff and the file simply gets overwritten on the local server each day. After cloning the repo from GitHub to the local server, the script logs in to each switch with SSH keys, and each switch has the server's public key so it can SCP the file back.

The version of the script is below, broken up into two zones:

#!/bin/bash
#Set Environment
TODAY=`date`
TIMESTAMP=`date +"%Y%m%d%H%M"`
LOCALPATH="/home/username"
SCPHOST="172.20.14.7"
SCPUSER="username"
SCPPATH="/home/username/Brocade-Backup"

#List of Switches to be backed up
SWITCHLIST1="zoneswitch1 zoneswitch2"
SWITCHLIST2="zoneswitch3 zoneswitch4"

for x in $SWITCHLIST1
do
ssh admin@$x configupload -scp $SCPHOST,$SCPUSER,$SCPPATH/$x.cfg
done

for x in $SWITCHLIST2
do
ssh admin@$x configupload -scp $SCPHOST,$SCPUSER,$SCPPATH/$x.cfg
done

cd $SCPPATH && \
git add . && \
git add -u && \
git commit -m "$TIMESTAMP" && \
git push
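
To run this unattended, I would schedule it with cron on the backup server; a rough example (the script path and run time here are placeholders):

0 1 * * * /home/username/brocade-backup.sh > /home/username/brocade-backup.log 2>&1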

The next step is to keep the previous version around so only actual changes end up in Git, and also to be able to push a config back to the switch when creating zones instead of logging into the Java interface. Unfortunately, we don't have a lab switch and I don't feel comfortable testing that in production.

Tuesday, November 20, 2018

Google Dynamic DNS with DD-WRT

I recently switched my router back to dd-wrt. I had moved away from it in favor of Google Wifi, which I had no complaints about other than wanting more control of my router and a platform to test some network automation. Don't get me wrong, the mesh feature of Google's Wi-Fi routers is amazing, but I didn't have a chance to really take advantage of it while living in an apartment. Some of the other features are excellent as well.

One shortcoming of dd-wrt, however, is that its bundled version of inadyn (the dynamic DNS client) couldn't talk to Google's registrar and update synthetic records. A few posts I found used OpenWrt instead, which is definitely on my list to try since it seems to get support and updates more frequently than dd-wrt. I chose dd-wrt for its ease of use and because it was something I was familiar with (and I completely disabled HTTP access, so management is all done through SSH).

My workaround was to use my Raspberry Pi to update the DNS record on a fifteen-minute interval using a script and crontab. Here is a sample of the script:

#!/bin/bash

USERNAME="username"
PASSWORD="password"
HOSTNAME="home.domain.com"

# Resolve IP for DDNS record
NS=$( dig +short home.domain.com @resolver1.opendns.com )
# Resolve current public IP
IP=$( dig +short myip.opendns.com @resolver1.opendns.com )
# Update Google DNS Record

if [ "$NS" != "$IP" ] ; then 
    echo "IP address changed, updating"
    URL="https://${USERNAME}:${PASSWORD}@domains.google.com/nic/update?hostname=${HOSTNAME}&myip=${IP}"
    curl -s $URL
else
    echo "IP address has not changed"
fi

The script itself still needs some work, but a fifteen-minute interval seemed like a short enough period for the job to run. It could even go shorter, because the if statement only sends an update request when the IP address changes, so it won't flood Google with requests. There are a few things I am still working on, such as better logging, but for now it is simply called from crontab like this:

*/15 * * * * /home/pi/google-ddns-check.sh > /home/pi/google-ddns.log 2>&1

The next step is to add the time and IP address to the log, and to append to the log file instead of overwriting it each run, but this works pretty well in a pinch for now. Hope this helps anyone wanting to use Google's synthetic records with dd-wrt.
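
As a sketch of where that is heading, the logging could live inside the script itself rather than in the cron redirection. This reuses the variables from the script above, timestamps each line, and appends (>>) to the log:

LOGFILE="/home/pi/google-ddns.log"

log() {
    # prepend a timestamp and append rather than overwrite
    echo "$(date '+%Y-%m-%d %H:%M:%S') $1" >> "$LOGFILE"
}

if [ "$NS" != "$IP" ] ; then
    log "IP address changed to ${IP}, updating"
    curl -s "https://${USERNAME}:${PASSWORD}@domains.google.com/nic/update?hostname=${HOSTNAME}&myip=${IP}" >> "$LOGFILE"
else
    log "IP address ${IP} has not changed"
fi

With that in place, the crontab entry would no longer need its own redirection to the log file (or could keep it just for stray errors).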

Thanks!

Thursday, October 25, 2018

HPE OneView Invoke-RestMethod

HPE's OneView management console has a robust API with a number of SDKs. We recently needed to automate several tasks after installing new Synergy frames that have OneView Composer modules and are still using the default self-signed certificates. One such task was to connect to the API and make some REST calls to get and set basic configuration.

Using PowerShell's Invoke-RestMethod returned a couple of different errors, one of which was "Could not establish trust relationship for the SSL/TLS secure channel." Searching around on Google, a number of posts say to add:

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

but even with that in place it would still throw an error. After trying different settings, I found the following to work:

$appliance = "192.168.1.100"
$url = "https://$appliance"
$web = New-Object Net.WebClient
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
$output = $web.DownloadString($url)

$user = @{
    userName= "Administrator"
    password= "password" 
    authnHost= "$appliance"
    authLoginDomain= "Local"
}
$json = $user | ConvertTo-Json
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Content-type", 'application/json')
$headers.Add("Accept", 'application/json')
$uri = $url + '/rest/login-sessions'
$response = Invoke-RestMethod -Uri $uri -Method POST -Headers $headers -Body $json -ContentType 'application/json' 

$auth = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$auth.Add("Auth", $response.sessionID)
$auth.Add("X-Api-Version", '300')


With that session header in place, I could make GET calls against other URIs, such as listing the Ethernet networks attached to the enclosure, and retrieve other settings.

$uri = $url + '/rest/ethernet-networks'
$Networks = Invoke-RestMethod -Method GET -Headers $auth -Uri $uri

Enjoy!

Wednesday, September 19, 2018

Kickstart CentOS7

This may be a little late to the game with Kickstart files and the installation of RedHat / CentOS / Fedora servers, but I thought I would share some of the work I have been doing in this realm. The business I work for has a small but steadily growing Linux footprint, and there has not been much in the way of standardization or automated installs. I recently moved to the engineering team and found that Linux servers were installed by mounting the ISO, launching the GUI, and clicking "Next, Next, Finish" (well, not quite that quick ... but tedious).

In my spare time I started messing around with Kickstart files in my home lab, using libvirt installs where a single command line does a complete install of a virtual machine (see the example below); soon after, this was adapted for the workplace. I created a number of Kickstart files and had trouble creating a custom ISO, so I worked around it with floppy images mounted to the VM. I know what you're thinking ... floppies in 2018, but it worked. The main difference between our installs is the partitioning, something we are finally coming to a standard on (the biggest piece being swap space), so I had three separate files: one for 2GB, another for 4GB, and one for 8GB of swap. The problem was that there were a number of manual steps that could be missed (not to mention having to edit the boot menu each time with ks=hd:fd0:/ks.cfg).
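
For reference, the libvirt "one line" looks roughly like this with virt-install (the VM name, sizing, ISO path, and kickstart file name here are made up for the example):

virt-install --name centos7-test --memory 2048 --vcpus 2 \
  --disk size=20 --location /var/lib/libvirt/images/CentOS-7-x86_64-Minimal.iso \
  --initrd-inject ks.cfg --extra-args "inst.ks=file:/ks.cfg console=ttyS0" \
  --graphics none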

This process soon morphed into a single ISO, still with the three different Kickstart files and a custom boot menu, but there was more to add in, such as users and other quick configuration. We also don't have DHCP or a PXE server to do the installs, so the Kickstart file prompts for all the network configuration (IP address, subnet mask, etc.) as well as the hostname and swap size (originally in MB, with some quick math added to convert from GB). The install also adds a motd banner and modifies a few other files. The next step is to bootstrap the server to Chef and kick off an initial run list with the remaining configuration details; I would also like to automatically handle the swap-space calculations and DNS based on the server name.
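
For anyone wanting to roll a similar ISO, the general approach is to copy the contents of the stock ISO, drop in the ks.cfg files and the edited isolinux boot menu, and rebuild the image with genisoimage; a rough sketch (paths, file names, and the volume label are assumptions):

# unpack the stock ISO, add the kickstart files and boot menu, then rebuild
mkdir /tmp/iso
sudo mount -o loop CentOS-7-x86_64-Minimal-1804.iso /mnt
cp -r /mnt/. /tmp/iso/
cp ks-*.cfg /tmp/iso/
cp isolinux.cfg /tmp/iso/isolinux/isolinux.cfg
genisoimage -o centos7-custom.iso -V "CentOS 7 x86_64" \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table -J -R /tmp/iso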

Here is the current version of the Kickstart file:

# Kickstart
#version=DEVEL

# Text only install
text

# Install
install

# Repository
cdrom

# Ignore x environment
skipx

# Include config file
%include /tmp/network.txt
%include /tmp/vg.txt
#%include /tmp/rhel.txt

# Language support
lang en_US.UTF-8

# Keyboard
keyboard us

# timezone
timezone --utc America/Denver

# set root password
rootpw  --iscrypted $6$QWxbTrS.7hAzeNRq$hCrO/f9mqzD/ZqC9XLl46P495H2fdmH0pCSPcmkh/IoGHm4u8v7fQdSzCXntiaSasST0UUOONKK2cR/BF2IMA0

# create patrol user
user --name=patrol --groups=wheel --iscrypted --password=$6$ZeGpbSQHAxj5nvou$4tV5GMq9a1W20EkYb6Y00G9B7Kmn4ilZRftinXjKZKr6h.EMLk5qxEskzkWeth4JfN7KHD7Y9sRK1oZTFIwtn1
user --name=syseng --groups=wheel --iscrypted --password=$6$Rc5TEv9yx7aDL/1t$fVwDgEAb2h8qtfq2UJbxKqnpOspI34pcJv1zgRP3lKFv3zwBs4p4mfm4WZFYRBVp7CESfgfhE20ypWxuQ3DxO1

authconfig --enableshadow --passalgo=sha512
firewall --service=ssh
selinux --enforcing

# clear the MBR (Master Boot Record)
zerombr

# bootloader
bootloader --location=mbr --driveorder=sda --append="crashkernel=auto rhgb quiet"
# remove existing partions
clearpart --all --initlabel

# Reboot after installation
reboot

# packages to install
%packages
@core
wget
%end

##############################################################################
#
# pre installation part of the KickStart configuration file
#
##############################################################################

%pre
exec < /dev/tty6 > /dev/tty6 2>&1
chvt 6
HOSTNAME=""
IPADDR=""
NETMASK=""
GATEWAY=""
DNS=""
SWAP=""

while [[ "$HOSTNAME" == "" ]] || [[ "${IPADDR}" == "" ]] || [[ "${NETMASK}" == "" ]] || [[ "${GATEWAY}" == "" ]] || [[ "${DNS}" == "" ]] || [[ "${SWAP}" == "" ]] ; do
 echo
 echo " *** Please enter the following details: *** "
 echo
 read -p "Hostname: " HOSTNAME
 read -p "IP Address: " IPADDR
 read -p "Netmask: " NETMASK
 read -p "Gateway: " GATEWAY
 read -p "DNS: " DNS
 read -p "Swap Space (GB): " SWAP
done
clear
echo "network --onboot yes --device ens192 --bootproto static --ip ${IPADDR} --netmask ${NETMASK} --gateway ${GATEWAY} --noipv6 --nameserver ${DNS} --hostname ${HOSTNAME}" > /tmp/network.txt

echo -e "Applying the following configuration: \n"
echo "Hostname = ${HOSTNAME}"
echo "IP Address = ${IPADDR}"
echo "Netmask = ${NETMASK}"
echo "Gateway = ${GATEWAY}"
echo "DNS = ${DNS}"

# calculate MB
BYTES=1024
SWAPSIZE=$((SWAP*BYTES))


sleep 5
chvt 1

cat > /tmp/vg.txt <<EOF
part /boot --fstype=ext4 --size=500
part pv.01 --grow --size=1
volgroup vg_${HOSTNAME//-} pv.01
logvol swap --name=lv_swap --vgname=vg_${HOSTNAME//-} --size=${SWAPSIZE}
logvol /tmp --fstype=ext4 --name=lv_tmp --vgname=vg_${HOSTNAME//-} --size=2048
logvol / --fstype=ext4 --name=lv_root --vgname=vg_${HOSTNAME//-} --size=1 --grow
EOF

%end

##############################################################################
#
# post installation part of the KickStart configuration file
#
##############################################################################

%post --nochroot

# bring in hostname collected from %pre, then source it
cp -Rvf network /mnt/sysimage/etc/sysconfig/network
# Set-up ens192 with hostname
cp ifcfg-ens192 /mnt/sysimage/etc/sysconfig/network-scripts/ifcfg-ens192
# force hostname change
/mnt/sysimage/bin/hostname $HOSTNAME

cat > /mnt/sysimage/etc/LoginBanner <<EOF
Hostname = ${HOSTNAME}

!!!WARNING!!!
#################################################
# All sessions are being recorded and monitored #
#################################################
EOF

echo "Banner /etc/LoginBanner" >> /mnt/sysimage/etc/ssh/sshd_config
cp /mnt/sysimage/etc/LoginBanner /mnt/sysimage/etc/motd

# chef-client

if [ ! -e /mnt/sysimage/etc/chef ]; then
        mkdir /mnt/sysimage/etc/chef
fi

cat > /mnt/sysimage/etc/chef/client.rb <<ECLRB
log_level        :info
log_location     STDOUT
chef_server_url  "http://lax:4000"
validation_client_name "chef-validator"
node_name "${HOSTNAME}"
trusted_certs_dir "/etc/chef/trusted_certs"
ECLRB
chmod 600 /mnt/sysimage/etc/chef/client.rb

cat > /mnt/sysimage/etc/chef/validation.pem << EOVAL
-----BEGIN RSA PRIVATE KEY-----
MIIE...Wg==

-----END RSA PRIVATE KEY-----
EOVAL
chmod 600 /mnt/sysimage/etc/chef/validation.pem

mkdir -p /mnt/sysimage/etc/chef/trusted_certs
cat > /mnt/sysimage/etc/chef/trusted_certs/lax-inchf01_inucn_com.crt << EOCRT
-----BEGIN CERTIFICATE-----
MIID9zCCAt

-----END CERTIFICATE-----
EOCRT
chmod 600 /mnt/sysimage/etc/chef/trusted_certs/lax-inchf01_inucn_com.crt


# Install Chef packages
yum -y install rubygem-chef
chkconfig chef-client on
yum -y update

##############################################################################

# Done
exit 0
%end

Thursday, August 23, 2018

Chef Server Backups

Recently, I ran across an issue where one of the Chef servers for our lab in AWS was powered down. I attempted to start the EC2 instance and connect to it to take a backup and migrate it to a new server, but was unable to connect. This caused our automated development machine builds to fail, and of course there was no backup. It also caused some alarm because the person who had set up the server with its roles and cookbooks had left the company and there was no documentation for how it was configured. We had some of the roles and cookbooks in GitHub, but it turned out that not all of them were there. I will go into detail later on how I managed to get a backup of the database.

We have a few Chef servers, one for each environment (Lab, Staging, and Production), none of which were being backed up, not even locally. So I took on a project to back them up, starting with the chef-server-ctl backup command that comes with Chef Manage. The first pass was a simple cron job running as root nightly at 2 AM, backing up to the local drive and removing older files that are no longer needed. The script was as follows:

#!/bin/bash
#--Back up server

sudo /usr/bin/chef-server-ctl backup --yes

#--Remove old files
ls -1 /var/opt/chef-backup/*.tgz | sort -r | tail -n +6 | xargs rm > /dev/null 2>&1

It was driven by a simple crontab entry that logs the output to a file:

0 2 * * * /home/chefServer/chefServerBackup.sh > /var/log/chefServerBackup.log 2>&1

This was the first iteration of the job; the next step was to get the backups off the Chef server. To accomplish this, I created a local user on the server called chefBackup and set up SSH keys to allow scp to another machine without a password.
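
Setting that up is just a key pair for the chefBackup user plus copying the public key to the destination box (the hostname matches the script below; an empty passphrase keeps it non-interactive):

# run as the chefBackup user on the Chef server
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
ssh-copy-id chefBackup@server01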

#!/bin/bash
#--Back up server
sudo /usr/bin/chef-server-ctl backup --yes
#--Get last file
BACKUP="`ls -dtr1 /var/opt/chef-backup/* | tail -1`"
#--Move to ChefDK server
scp $BACKUP chefBackup@server01:$BACKUP
#--Remove old files
#--Crontab in Root

Notice that the removal of the old files stays with the root user; this was because I didn't want to give the chefBackup user too many permissions, and it didn't have access to remove the backup files. The cleanup crontab entry looks like this:

0 3 * * * ls -1 /var/opt/chef-backup/*.tgz | sort -r | tail -n +6 | xargs rm > /dev/null 2>&1

Now the backups are no longer only on the Chef server itself; they are also pushed to another server. Something catastrophic would have to happen to lose both servers and be left with no backups (and I have probably just jinxed myself), but it is better than nothing for the moment. The next step is to push these off to an S3 bucket or some sort of shared storage that is off premises, or at least in a different data center, so more updates are coming. Hope this helps. The one thing I will stress is that it is good to have backups, but foolish to have backups that are never tested, so be sure to regularly test them to make sure they will work in an actual disaster.
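
For what it's worth, restoring uses the same tool, so a test on a scratch Chef server is roughly a single command (the file name here is only an example):

sudo chef-server-ctl restore /var/opt/chef-backup/chef-backup-example.tgz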

Wednesday, June 13, 2018

RVTools Export

A while back at work we had an outage on our main vCenter server (prior to the HA setup in 6.5), and we had to figure out which host a VM was running on. It took us a while because the various tools we checked each reported something different, and our business has multiple vCenter servers, so it proved a bit challenging.

That is where I started using RVTools to export all the information. At first this meant getting to work, opening RVTools, and doing an export by hand, which soon became boring and tedious, so I looked at scripting it. The first version ran once for each vCenter server and exported to a separate file, which was not sustainable. The final version creates four separate files, merges them, and sends the result via email. It proved helpful (at least to me) and later expanded to include the host and health reports.

One thing to mention is that I am always looking to improve the process and have as little manual intervention as possible, so I didn't want folders or files from months ago lying around. Enter the batch script, which runs the exports and then cleans up any file or folder older than two weeks; that part is at the end of the script.

So to recap, the final script exports all the VM information from the vCenter servers and merges it into one CSV file, and also exports host and health information. Only the VM information gets emailed out; the rest is stored for reference. Finally, it cleans up any older exports that are no longer needed.

The script is below; it is just a simple batch script that runs locally on a machine with the RVTools executable and can easily be moved to different machines. At some point I will upload everything (including the scheduled task) that ties it all together. If there are any questions, feel free to reach out to me.

@echo on
rem #########################
rem Name RVToolsBatch
rem By swinnie
rem Date November 2017
rem Version 3.9.5.0
rem #########################

rem ===================
rem Set ENV Variables 
rem ===================

for /f "tokens=1-4 delims=/ " %%i in ("%date%") do (
     set dow=%%i
     set month=%%j
     set day=%%k
     set year=%%l
)
set datestr=%day%_%month%_%year%
echo datestr is %datestr%

rem ===================
rem Set Directory 
rem ===================

cd /d C:\Tools\RVTools
mkdir .\exports\%datestr%
mkdir .\exports\%datestr%\hosts
mkdir .\exports\%datestr%\health

rem ===================
rem Start RVTools batch 
rem ===================

rem VM Export
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter01.company.com -c exportvinfo2csv -d .\exports\%datestr% -f vcenter01.csv
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter01.company.com -c exportvinfo2csv -d .\exports\%datestr% -f vcenter02.csv
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter03.inucn.com -c exportvinfo2csv -d .\exports\%datestr% -f vcenter03.csv
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter04.in.lab -c exportvinfo2csv -d .\exports\%datestr% -f vcenter04.csv

rem Host Export
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter01.company.com -c exportvhost2csv -d .\exports\%datestr%\hosts -f vcenter01_hosts.csv
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter02.company.com -c exportvhost2csv -d .\exports\%datestr%\hosts -f vcenter02_hosts.csv
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter03.company.com -c exportvhost2csv -d .\exports\%datestr%\hosts -f vcenter03_hosts.csv
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter04.company.com -c exportvhost2csv -d .\exports\%datestr%\hosts -f vcenter04_hosts.csv

rem Health Export
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter01.company.com -c exportvhealth2csv -d .\exports\%datestr%\health -f vcenter01_health.csv
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter02.company.com -c exportvhealth2csv -d .\exports\%datestr%\health -f vcenter02_health.csv
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter03.company.com -c exportvhealth2csv -d .\exports\%datestr%\health -f vcenter03_health.csv
"rvtools.exe" -u administrator@vsphere.local -p "_RVToolsPWD825HN/hashed=" -s vcenter04.company.com -c exporthealth2csv -d .\exports\%datestr%\health -f vcenter04_health.csv

rem ===================
rem Merging Files 
rem ===================
for /f "tokens=*" %%i in ('dir /b /o:d /A:-D ".\exports\%datestr%" ') do type ".\exports\%datestr%\%%i">> .\exports\%datestr%\vmExport.csv & echo.>> .\exports\%datestr%\vmExport.csv
for /f "tokens=*" %%i in ('dir /b /o:d /A:-D ".\exports\%datestr%\hosts" ') do type ".\exports\%datestr%\hosts\%%i">> .\exports\%datestr%\hosts\hostExport.csv & echo.>> .\exports\%datestr%\hosts\hostExport.csv
for /f "tokens=*" %%i in ('dir /b /o:d /A:-D ".\exports\%datestr%\health" ') do type ".\exports\%datestr%\health\%%i">> .\exports\%datestr%\health\healthExport.csv & echo.>> .\exports\%datestr%\hosts\healthExport.csv

rem =========
rem Send mail
rem =========
set SMTPserver="mail.company.com"
set SMTPport="25"
set Mailto="myemail@company.com"
set Mailfrom="fromemail@company.com"
set Mailsubject="vCenter Servers Export"
set AttachmentFile=".\exports\%datestr%\vmExport.csv"

rvtoolssendmail.exe /SMTPserver %SMTPserver% /SMTPport %SMTPport% /Mailto %Mailto% /Mailfrom %Mailfrom% /Mailsubject %Mailsubject% /Attachment %AttachmentFile%

rem ===================
rem Cleaning Up 
rem ===================
forfiles -p ".\exports" -d -14 -c "cmd /c IF @isdir == TRUE rd /S /Q @path"

Sunday, May 13, 2018

Ubuntu Headless in VirtualBox

This guide covers setting up an Ubuntu 18.04 virtual machine in Oracle's VirtualBox, using the default NAT network and forwarding the SSH port of the machine to localhost. After creating the new VM and installing the OS, note the IP address. Ubuntu Server installs usually have the OpenSSH server installed and running; to verify, log in with the account you created and run

sudo service ssh status

You should see Active: active (running) in the output. If the service is not installed, run

sudo apt install openssh-server

After determining that SSH is running, find the IP address of the machine and pick a port that your workstation is not using and is easy to remember, such as 2222. In the VM's network settings for the NAT adapter, click 'Port Forwarding' and create a new rule that forwards host port 2222 to guest port 22, changing 10.0.2.15 to match the IP address of your virtual machine.
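
The same rule can also be added without the GUI using VBoxManage while the VM is powered off (assuming the VM is named "ubuntu", as in the listing further down):

VBoxManage modifyvm "ubuntu" --natpf1 "guestssh,tcp,,2222,10.0.2.15,22"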

You should now be able to use a program such as PuTTY or MobaXterm to connect to the virtual machine over port 2222. With port forwarding set up, you can also run the machine in headless mode, because you most likely won't need the console (unless something goes terribly wrong).

To launch the machine in headless mode, navigate to the VirtualBox install directory (the default is C:\Program Files\Oracle\VirtualBox) and, from a Command Prompt, list the virtual machines on your workstation:

VBoxManage list vms
"ubuntu" {b2c0f1da-9047-5e2d-4ae4-af01def00645}

Start the VM in headless mode with startvm {VM Name} and the --type headless flag:

VBoxManage startvm ubuntu --type headless

If everything worked as expected, you can SSH to the machine using the localhost address 127.0.0.1 and port 2222. When you're done, either shut it down through SSH or with VBoxManage:

VBoxManage controlvm ubuntu poweroff

It's as easy as that. You can add more ports for testing if needed, or continue to use the virtual machine for other purposes if you prefer a full Bash environment over the Windows Subsystem for Linux. Not to mention, you can use a different OS if you prefer (such as openSUSE, Debian, CentOS, Fedora, etc.).

Wednesday, April 18, 2018

Disable Local Users on Server 2012 with Powershell

At work, we had to implement a control for one of our servers that was not tied to Active Directory and had about 50 local logins used by various departments to access one of the environments. For PCI compliance, we needed a way to disable accounts that had not logged in for 90 days. This proved to be a bit of a challenge because there was no central management, but since it was only one server it felt like something that could be scripted.

PowerShell and its ADSI support opened the door to completing the task.

$users = $([ADSI]"WinNT://$env:COMPUTERNAME").Children | where {$_.SchemaClassName -eq 'user'}

The above command outputs all the local users on the server. The next part was adding an array of accounts to exclude, because we still needed a backup account (the local administrator) and the service accounts for monitoring and antivirus:

$svc = @("Administrator","_svc-monitoring","SophosSAUWIN")

It was easy to omit those from the list using a simple ForEach loop

foreach ($s in $svc) {
    $users = $users | where {$_.Name -ne "$s"}
}

In early testing, it would error out if there was a user account that had been set up but never logged in. To handle this, I broke it up into two arrays: one of active users (those who had logged in) and one of inactive users (those who had never logged in). Fairly simple stuff ...

    $active = @()
    foreach ($u in $users){
        if ($u.Properties.LastLogin.Value -ne $null) {
            # users that have logged in at least once
            $active += New-Object PSObject -Property @{Name = $u.Name}
        }
    }

    $inactive = @()
    foreach ($u in $users){
        if ($u.Properties.LastLogin.Value -eq $null) {
            # users that have never logged in
            $inactive += New-Object PSObject -Property @{Name = $u.Name}
        }
    }

The next step was to set some disable parameters. The first two are straightforward: one gets the date the script runs (used later in the user description to show when the account was disabled), and the other subtracts 90 days and converts it to a file time (a large integer):

$today = Get-Date
$nintyDays = ((Get-Date).AddDays(-90)).TofileTime()

This one came from a TechNet article listing all the user flags; basically, disabling an account means OR-ing two into whatever the existing userflags value is. Simple enough: if a default user has a flag value of 513, then 515 is the disabled version.

$ADS_UF_ACCOUNTDISABLE = 0x0002

The brunt of the work happens in the following two foreach loops: the first loops through accounts that have actually logged in, and the second through accounts that have never logged in. This actually works out better, because I figure if someone requests an account and never logs in, why even have the account?

In the arrays created above, all I am collecting is the username, which gets passed into the ADSI string saved as the $objUser variable; that in turn pulls all the information tied to the user account. The if statement checks whether the disable flag is already set on the account; if the account was disabled a week ago, I want to keep the description showing the date it was disabled, and without that check the description would be overwritten with the date the script ran. The Write-Host is there for testing purposes; the else branch disables the account and changes the description.

foreach ($a in $active) {
    $username = $a.Name
    $objUser = [ADSI]"WinNT://WORKGROUP/$env:COMPUTERNAME/$username"

    if ($objUser.psbase.Properties.item("userflags").value -band $ADS_UF_ACCOUNTDISABLE) {
                Write-Host "$username already disabled"}
    else {
        $logon = ($objUser.LastLogin).ToFileTime()
        if ($logon -le $nintyDays){
                $objUser.description = "Account Disabled $today"
                $objUser.userflags = $objUser.userflags.Value -bor $ADS_UF_ACCOUNTDISABLE
                $objUser.setinfo()}
    }
}

foreach ($i in $inactive) {
    $username = $i.Name
    $objUser = [ADSI]"WinNT://WORKGROUP/$env:COMPUTERNAME/$username"
    
    if ($objUser.psbase.properties.item("userflags").value -band $ADS_UF_ACCOUNTDISABLE) {
                Write-Host "$username already disabled"}
    else {
        $objUser.description = "Account Disabled $today"
        $objUser.userflags = $objUser.userflags.Value -bor $ADS_UF_ACCOUNTDISABLE
        $objUser.setinfo()
    }
}

Still with me? That was pretty easy. The next step was to make it run every morning at 1:30am with a scheduled task. The only way I could figure that out was to have a batch script, because the script needs to run as admin. The bat file is listed below.

@echo off

set scriptFileName=disableUser
set scriptFolderPath=c:\tools\scripts
set powershellScriptFileName=%scriptFileName%.ps1

powershell -Command "Start-Process powershell \"-ExecutionPolicy Bypass -NoProfile -NoExit -Command `\"cd \`\"%scriptFolderPath%\`\"; & \`\".\%powershellScriptFileName%\`\"`\"\" -Verb RunAs"

Here's the XML of the scheduled task for reference; it is probably the easiest part of this to set up, and it can be imported fairly quickly if needed.

<?xml version="1.0" encoding="UTF-16"?>
<Task version="1.2" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
  <RegistrationInfo>
    <Date>2018-04-18T12:34:56.0000000</Date>
    <Author>WIN-JUMPPY\Al.Fredo</Author>
  </RegistrationInfo>
  <Triggers>
    <CalendarTrigger>
      <StartBoundary>2018-04-18T01:30:00</StartBoundary>
      <Enabled>false</Enabled>
      <ScheduleByDay>
        <DaysInterval>1</DaysInterval>
      </ScheduleByDay>
    </CalendarTrigger>
  </Triggers>
  <Principals>
    <Principal id="Author">
      <UserId>WIN-JUMPPY\Al.Fredo</UserId>
      <LogonType>Password</LogonType>
      <RunLevel>HighestAvailable</RunLevel>
    </Principal>
  </Principals>
  <Settings>
    <MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
    <DisallowStartIfOnBatteries>false</DisallowStartIfOnBatteries>
    <StopIfGoingOnBatteries>true</StopIfGoingOnBatteries>
    <AllowHardTerminate>true</AllowHardTerminate>
    <StartWhenAvailable>false</StartWhenAvailable>
    <RunOnlyIfNetworkAvailable>false</RunOnlyIfNetworkAvailable>
    <IdleSettings>
      <StopOnIdleEnd>true</StopOnIdleEnd>
      <RestartOnIdle>false</RestartOnIdle>
    </IdleSettings>
    <AllowStartOnDemand>true</AllowStartOnDemand>
    <Enabled>true</Enabled>
    <Hidden>false</Hidden>
    <RunOnlyIfIdle>false</RunOnlyIfIdle>
    <WakeToRun>false</WakeToRun>
    <ExecutionTimeLimit>P3D</ExecutionTimeLimit>
    <Priority>7</Priority>
  </Settings>
  <Actions Context="Author">
    <Exec>
      <Command>C:\Tools\scripts\disableUser.bat</Command>
      <WorkingDirectory>C:\Tools\scripts</WorkingDirectory>
    </Exec>
  </Actions>
</Task> 

That's mostly it. If there are any questions, feel free to let me know! Just a side note: Server 2016 ships with a PowerShell module for local accounts (with cmdlets such as Get-LocalUser and Disable-LocalUser) that can handle this. Thanks for reading.

Monday, April 9, 2018

Cockpit on Raspbian Stretch

Getting the Cockpit Project running on a Raspberry Pi is quite easy and just requires building from the source code that is available as a packaged tar file on GitHub. At the time of writing, the current version was 165.

To start, Node.js is a prerequisite, so let's install that on Raspbian from source as well:

wget https://nodejs.org/dist/v9.9.0/node-v9.9.0.tar.gz
tar -xzvf node-v9.9.0.tar.gz
cd node-v9.9.0
./configure && make && sudo make install

This will take quite some time, so I would suggest running it inside screen. After the Node.js install completes, there are some build prerequisites needed for Cockpit:

sudo apt-get install autoconf intltool libglib2.0-dev libsystemd-dev \
libjson-glib-dev libpolkit-agent-1-dev libkrb5-dev libssh-dev \
libpam-dev libkeyutils-dev glib-networking

Download the package from GitHub, and untar it:

wget https://github.com/cockpit-project/cockpit/releases/download/165/cockpit-165.tar.xz

tar xf cockpit-165.tar.xz

You can follow these steps to build Cockpit. The --disable-pcp flag comes from the original post I found here; I am not sure it is still needed, since the build process has changed a bit and running ./autogen.sh failed, stating the project should be built from a git checkout or a tar file. The steps below worked on my install (note that this is the .xz archive, not the .gz one):

cd cockpit-165
./configure --disable-pcp --disable-doc
make 
sudo make install
sudo cp ../src/bridge/cockpit.pam.insecure /etc/pam.d/cockpit

If the build is successful and there are no errors, start Cockpit:

sudo systemctl start cockpit.socket 

The status should show running, and it can be enabled to run on boot:

sudo systemctl enable cockpit.socket
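
To double-check that the socket is actually listening on port 9090, a quick look with systemd and iproute2 (both present on Stretch) is enough:

systemctl is-active cockpit.socket
sudo ss -tlnp | grep 9090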

To access Cockpit, browse to https://pi-address:9090. You may see some informational messages, but they can be safely ignored.