Friday, December 29, 2017

Updated EVA Script Version

This is a continuation of the post on Scripting EVA Checks. The script I wrote there took too long to run given what it was designed to do, so I decided on a different approach: run a simple ls disk command, count the disk types in the output, and write the totals to a CSV file. Not all arrays have the same types of disk, but it was much easier to put all the types we have into a ForEach loop, including ungrouped ones. Identifying failed disks was really the goal of this exercise, and failed disks most likely show up as ungrouped.

This still relies on the text files created earlier and the profiles saved in the SSSU, but it executes much more quickly. The output is also much cleaner and a lot easier to take action on.

$date = Get-Date -Format dd_MM_yyyy
New-Item -Path .\exports -ItemType Directory -Name $date
$evas = @("EVA01","EVA02","EVA03","EVA04","EVA05","EVA06")
   
$evaDisks = @()

ForEach ($e in $evas) {
    # EVA Connection Info
    $file = """file .\systems\login\$e.txt"""
    $lsCmd = "ls disk"
    $disks = & C:\Tools\hp\sssu.exe $file $lsCmd | Select-Object -Skip 14 | Sort-Object
    Write-Output $e
    $diskObj = New-Object psobject -Property @{
        ARRAY     = $e
        FATA      = ($disks | Select-String -Pattern "FATA").Count
        FC        = ($disks | Select-String -Pattern "FC").Count
        E15       = ($disks | Select-String -Pattern "E15").Count
        ENT       = ($disks | Select-String -Pattern "ENT").Count
        MID       = ($disks | Select-String -Pattern "MID").Count
        UNGROUPED = ($disks | Select-String -Pattern "Ungrouped").Count
        TOTAL     = $disks.Count
    }
    $evaDisks += $diskObj
}

$evaDisks | Export-Csv .\exports\$date\evaReport.csv -NoTypeInformation

While the other script is still useful, it is probably better suited to drilling in deeper and locating the failed disks in an array when calling support, letting it complete in the background while focusing on other tasks at hand. The next step is to integrate the export with HipChat and post it to an operations channel.
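
As a rough sketch of what that might look like: HipChat's v2 API exposes a room notification endpoint that Invoke-RestMethod can call once the report is built. The token and room name below are placeholders I made up, so treat this as an outline rather than a finished integration.

# Hypothetical HipChat v2 notification -- replace the token and room with your own
$token   = "YOUR_HIPCHAT_TOKEN"
$room    = "Operations"
$summary = ($evaDisks | ForEach-Object { "$($_.ARRAY): $($_.UNGROUPED) ungrouped of $($_.TOTAL)" }) -join " | "
$body    = @{
    message        = "EVA disk report $date -- $summary"
    color          = "yellow"
    message_format = "text"
    notify         = $true
} | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "https://api.hipchat.com/v2/room/$room/notification" `
    -Headers @{ Authorization = "Bearer $token" } -ContentType "application/json" -Body $body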

Thursday, December 28, 2017

Scripting EVA Checks

At my job we have six HP EVA storage arrays and four different Command View instances to manage them. Daily checks on drives were a bit tedious until I found SSSU, HP's utility for scripting these tasks. The script started out as a bat file that would output all the disks into a single text file with pseudo-XML formatting.

Here is the batch script that I used to get the output of all the disks:

@echo on
rem #########################
rem Name EVA Disk Output
rem By Jeffrey Swindel
rem Date December 2017
rem Version 0.1
rem #########################

rem ===================
rem Set ENV Variables 
rem ===================

for /f "tokens=1-4 delims=/ " %%i in ("%date%") do (
     set dow=%%i
     set month=%%j
     set day=%%k
     set year=%%l
)
set datestr=%day%_%month%_%year%
echo datestr is %datestr%

rem ===================
rem Set Directory 
rem ===================

cd /d C:\Tools\hp
mkdir .\exports\%datestr%

rem =========================
rem Collect Disk Output
rem =========================

rem EVA01
sssu.exe "file .\systems\eva01.txt" > .\exports\%datestr%\eva01.xml

rem EVA02
sssu.exe "file .\systems\eva02.txt" > .\exports\%datestr%\eva02.xml

rem EVA03
sssu.exe "file .\systems\eva03.txt" > .\exports\%datestr%\eva03.xml

rem EVA04
sssu.exe "file .\systems\eva04.txt" > .\exports\%datestr%\eva04.xml

rem EVA05
sssu.exe "file .\systems\eva05.txt" > .\exports\%datestr%\eva05.xml

rem EVA06
sssu.exe "file .\systems\eva06.txt" > .\exports\%datestr%\eva06.xml

rem ===================
rem Cleaning Up 
rem ===================
forfiles -p ".\exports" -d -14 -c "cmd /c IF @isdir == TRUE rd /S /Q @path"

This script uses the SSSU executable with text files to pass the following arguments for each EVA:

set option on_error=continue 
SELECT MANAGER commandView USERNAME=commandView.user
SELECT SYSTEM "EVA01" 
ls disk full XML
exit 

and then cleans up anything in the exports folder that is older than 14 days. The output was not the most useful for parsing through to find drives that had failed or were ungrouped, but it was definitely better than having to log in to each Command View.

Taking it a step further, I created a simple PowerShell script for one of the EVAs that uses the SSSU executable and outputs all the drives into a CSV file. It reads a list of drives from a text file and passes an ls disk command for each through to the executable. From the output it grabs certain specifics such as disk name, state, storage array, bay, and enclosure, which proved more helpful for reporting purposes: I could call HP and open a case without having to check Command View right away. This script was also a bit more modular, so I could plug in the other EVA names and create a script for each array.

#---------------------
# EVA Connection Info
#---------------------
$disks = Get-Content .\systems\eva01_disks.txt

#---------------------
# Create Empty Array
#---------------------
$eva01 = @() 

#---------------------
# Loop Through Disks
#---------------------
    ForEach ($d in $disks){

        $file = """file .\systems\login\eva01.txt"""
        $cmd = "ls disk \`"$d\`" xml"
        $output = & C:\Tools\hp\sssu.exe $file $cmd

            [xml]$xml = $output | Select-Object -Skip 15

            $driveObject = New-Object psobject -Property @{
                STORAGE_ARRAY = $xml.object.storagecellname
                DISKNAME = $xml.object.diskname
                DISKGROUP = $xml.object.diskgroupname
                ENCLOSURE = $xml.object.shelfnumber
                BAY = $xml.object.diskbaynumber
                DRIVETYPE = $xml.object.diskdrivetype
                STATE = $xml.object.operationalstate
                USAGE = $xml.object.actualusage
                }

        $eva01 += $driveObject

    }

#---------------------
# Output to CSV File
#---------------------
$eva01 | Export-Csv .\exports\eva01.csv -NoTypeInformation

In the world of automation, this was a leap from what was done before but still miles from where it needed to be.

The next step was to create one script that could loop through each storage array and output a CSV file for each, so they did not have to be run individually. I also wanted to get away from individual text files for each storage array. The only thing left in each EVA text file was this:

set option on_error=continue 
SELECT MANAGER commandView USERNAME=commandView.user
SELECT SYSTEM "EVA01"

This just selects the manager, the username, and the system. Now the script loops through an array of the EVA names and outputs a CSV file for each into a dated folder. The "final" version is this:

$date = Get-Date -Format dd_MM_yyyy
$evas = @("EVA01","EVA02","EVA03","EVA04","EVA01","EVA01")
New-Item -Path .\exports -ItemType Directory -Name $date
    ForEach ($e in $evas){
        #---------------------
        # EVA Connection Info
        #---------------------
        $file = """file .\systems\login\$e.txt"""
        $lsCmd = "ls disk"
        $disks = & C:\Tools\hp\sssu.exe $file $lsCmd | Select-Object -Skip 14 | Sort-Object
            Write-Output $e
        #---------------------
        # Create Empty Array
        #---------------------
        $eva = @() 
        #---------------------
        # Loop Through Disks
        #---------------------
            ForEach ($d in $disks){
                $d = $d.Substring(2)
                $cmd = "ls disk \`"$d\`" xml"
                $output = & C:\Tools\hp\sssu.exe $file $cmd
                    Write-Output $d
                    [xml]$xml = $output | Select-Object -Skip 15
                    $driveObject = New-Object psobject -Property @{
                        STORAGE_ARRAY = $xml.object.storagecellname
                        DISKNAME = $xml.object.diskname
                        DISKGROUP = $xml.object.diskgroupname
                        ENCLOSURE = $xml.object.shelfnumber
                        BAY = $xml.object.diskbaynumber
                        DRIVETYPE = $xml.object.diskdrivetype
                        STATE = $xml.object.operationalstate
                        USAGE = $xml.object.actualusage
                        }
                $eva += $driveObject
            }
        #---------------------
        # Output to CSV File
        #---------------------
        $eva | Export-Csv .\exports\$date\$e.csv -NoTypeInformation

}
#---------------------
# Merge CSV Files
#---------------------
$files = Get-ChildItem -Path .\exports\$date\* -Include *.csv
$mergedFile = ".\exports\$date\Merged_Report.csv"
$outfile = @()

foreach($f in $files) {
    if(Test-Path $f) {
        $filename = [System.IO.Path]::GetFileName($f)
        $temp = Import-CSV -Path $f | select *, @{Expression={$filename};Label="FileName"}
        $outfile += $temp
    }
    else {
        Write-Warning "$f : File not found"
    }
}
$outfile | Export-Csv -Path $mergedFile -NoTypeInformation
Write-Output "$mergedFile successfully created"

As a side note, it's not the fastest of scripts, because all together there are around 200 disks in each array, so there is room for improvement. But in terms of sanity it is much better than what it used to be (which was non-existent). That's all for now, and there will probably be more revisions to come.
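
If I had to guess at the next revision, the biggest win would be replacing the per-disk SSSU calls with a single ls disk full xml call per array (the same command the batch file used) and parsing the whole listing at once. A rough sketch of the idea, assuming the pseudo-XML output parses once wrapped in a root element:

# Hypothetical: one SSSU call per array instead of one per disk
$raw = & C:\Tools\hp\sssu.exe $file "ls disk full xml" | Select-Object -Skip 14
[xml]$xml = "<disks>" + ($raw -join "`n") + "</disks>"
$eva = $xml.disks.object | ForEach-Object {
    New-Object psobject -Property @{
        STORAGE_ARRAY = $_.storagecellname
        DISKNAME      = $_.diskname
        DISKGROUP     = $_.diskgroupname
        STATE         = $_.operationalstate
        USAGE         = $_.actualusage
    }
}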

Wednesday, December 6, 2017

Install Webmin on CentOS 7

Webmin is a powerful web-based tool, written in Perl, for administering UNIX systems. With it, you can set up accounts, Apache, DNS, FTP, file shares, and so much more. Webmin also removes the manual edits of configuration files, as well as provides a remote console. Installing it on CentOS 7 takes only a matter of minutes once a few prerequisites are in place. The version I installed was 1.860-1, on a virtual machine in KVM running a minimal install of CentOS 7.4.1708.

To start, log in via SSH or the console and update all the installed packages on the system with:

yum update -y

In order to continue with the installation, you need all the dependencies in place. If they are not installed, you can install them using the command:

yum install wget perl perl-Net-SSLeay perl-IO-Tty perl-Encode-Detect openssl

Then download the RPM package. Visit the Webmin download page and check for the most current version of the RPM package, which is suitable for any RedHat, Fedora, or CentOS system, and use wget to download it:

wget http://prdownloads.sourceforge.net/webadmin/webmin-1.860-1.noarch.rpm

With all the dependencies installed, run the following command to install Webmin:

rpm -U webmin-1.860-1.noarch.rpm

The install takes a moment to complete, and once it's done it will output a success message. You can then log into the console from a web browser, using the username and password created during the OS install, at:

https://{IPaddress}:10000

By default, Webmin uses a self-signed SSL certificate, so it is common for your web browser to warn you that the connection is not secure. You can ignore this for now and accept the self-signed certificate to proceed to the login screen.

The administration username you can use to sign in is set to root, and the password is your current root password. After the install, Webmin should be started; if it didn't start automatically, try the following command:

service webmin start

Finally, enable Webmin on system boot with:

chkconfig webmin on

In the Webmin dashboard you can see some basic information about your system such as recent logins, memory usage, installed packages, disk space, etc. Modules and services that you can manage through Webmin are located on the left panel.

Friday, December 1, 2017

KVM on Ubuntu 16.04

KVM is a free and open source virtualization solution for Linux on x86 hardware with the Intel VT or AMD-V extensions. After installing Ubuntu 16.04, if the processor supports virtualization, KVM requires a number of packages that are available with apt. The first of those packages are qemu and libvirt. To install the qemu-kvm package and some supporting packages for virtual machine operation, run:

$ sudo apt install -y qemu-kvm libvirt0 libvirt-bin virt-manager bridge-utils

qemu-kvm - an open source machine emulator
libvirt0 - library for interfacing with different virtualization systems
libvirt-bin - programs for the libvirt library
virt-manager - desktop application for managing VMs
bridge-utils - utilities for configuring the Linux Ethernet bridge

Register libvirt-bin to systemd to keep the programs running upon reboots.

$ sudo systemctl enable libvirt-bin

Add your user account to the libvirtd group. This allows you to run libvirt commands without having to prefix them with sudo every time.

$ sudo usermod -aG libvirtd <username>

This last part is optional ... Create an iso directory for sharing ISO images with the libvirtd group, move the ISO into that directory, and change the owner.

$ sudo mkdir /var/lib/libvirt/iso
$ sudo mv filename.iso /var/lib/libvirt/iso
$ sudo chown libvirt-qemu:libvirtd /var/lib/libvirt/iso/filename.iso

Log out and back in, then launch virt-manager. This is a GUI tool for libvirt used to create virtual machines. (Alternatively, search for Virtual Machine Manager in the dash.)

$ virt-manager

Click "New Virtual Machine" at the upper left, select the install media and follow the prompts to create the VM.

Chrome Remote Desktop on Ubuntu

There's a headless computer I set up some time back that was running Windows 10 with Chrome Remote Desktop, so it was possible to easily log in and check on things while I was away. Recently, I have been moving away from Windows and more towards Linux. After installing Ubuntu 16.04 and setting it up with Google Chrome and Chrome Remote Desktop, I ran into an issue where it would only show the wallpaper and the cursor. (This is actually a feature of the Linux version: it creates a new X.Org session rather than duplicating what is on the screen, unlike the Windows version.)

The fix for this is quite easy and requires a little work in the terminal. After installing Chrome Remote Desktop, stop the daemon (if it is running):

 /opt/google/chrome-remote-desktop/chrome-remote-desktop --stop

Good practice is to make a backup of any file you're working on in Linux (especially system files):

cp /opt/google/chrome-remote-desktop/chrome-remote-desktop /opt/google/chrome-remote-desktop/chrome-remote-desktop.orig

Open the chrome-remote-desktop file in your favorite text editor. Search for the following settings and make the changes as necessary:

DEFAULT_SIZES = "1920x1080"

FIRST_X_DISPLAY_NUMBER = 0

Setting the FIRST_X_DISPLAY_NUMBER to zero will mirror the desktop session, and any application you have open (same as the Windows version).

Next, comment out these two lines so the script doesn't increment to a new display:

#while os.path.exists(X_LOCK_FILE_TEMPLATE % display):
#  display += 1

and this section, so that it doesn't start a new session, since the console on display zero is already running:
      #logging.info("Starting %s on display :%d" % (xvfb, display))
      #screen_option = "%dx%dx24" % (max_width, max_height)
      #self.x_proc = subprocess.Popen(
      #    [xvfb, ":%d" % display,
      #     "-auth", x_auth_file,
      #     "-nolisten", "tcp",
      #     "-noreset",
      #     "-screen", "0", screen_option
      #    ] + extra_x_args)
      #if not self.x_proc.pid:
      #  raise Exception("Could not start Xvfb.")

Finally start the daemon:

/opt/google/chrome-remote-desktop/chrome-remote-desktop --start

Or restart the computer and it will now open with the default Unity session. More info can be found here in the Google Product Forums.

Wednesday, November 15, 2017

Disable Xubuntu 16.04 Close Laptop Lid Suspend

On Xubuntu 16.04, after editing the power settings to do nothing when the lid closes, my laptop would still suspend. I want it to just turn off the display. The fix for this is easy: open the /etc/systemd/logind.conf file in a text editor as root using:

sudo nano /etc/systemd/logind.conf

Look for this line, remove the comment character #, and save the file:

#HandleLidSwitch=ignore

So it should look like this:

HandleLidSwitch=ignore

If the line is not present, add it and save the file. Then restart the systemd login daemon with this command:

sudo service systemd-logind restart

Close the lid, and the laptop shouldn't suspend.

Saturday, November 4, 2017

Create a Ubuntu Core VM with VirtualBox

There are many ways to get started with Ubuntu Core. One of them is to create an SD card for a Raspberry Pi or similar small single-board computer. Another is to use KVM on Linux. However, if you are on Windows or OS X, the easiest way is to use VirtualBox. This guide is for OS X but will work on Linux or Windows; the commands will differ slightly.

To get started, download the Core 16 image (ubuntu-core-16-amd64.img.xz) from the official site:

curl "http://releases.ubuntu.com/ubuntu-core/16/ubuntu-core-16-amd64.img.xz" -o "ubuntu-core-16-amd64.img.xz"

Since OS X can't natively extract an xz archive, you can use a program such as The Unarchiver to extract the image file. Then, in the same directory as the extracted ubuntu-core-16-amd64.img, use VBoxManage to convert it into a VirtualBox hard disk image file (.vdi):

VBoxManage convertdd ubuntu-core-16-amd64.img ubuntu-core-16-amd64.vdi --format VDI

After converting the img file to vdi format, the disk will need to be expanded in order to install any snaps or do any development work. Using VBoxManage again, run the command:

VBoxManage modifyhd ubuntu-core-16-amd64.vdi --resize 20480

This will increase the usable size of the disk to 20GB, but will keep the actual size smaller because it grows dynamically based on usage. The next step is to create a VM inside of VirtualBox; follow the same steps you would to create any new VM, but when it comes time for the hard disk, instead of creating a new disk select "Use an existing hard disk file" and locate the vdi image created in the previous step.
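
If you would rather script the VM creation than click through the wizard, VBoxManage can do the same thing from the terminal. A minimal sketch, where the VM name and memory size are my own assumptions:

VBoxManage createvm --name "ubuntu-core" --ostype Ubuntu_64 --register
VBoxManage modifyvm "ubuntu-core" --memory 1024
VBoxManage storagectl "ubuntu-core" --name "SATA" --add sata --controller IntelAhci
VBoxManage storageattach "ubuntu-core" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ubuntu-core-16-amd64.vdi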


Power on the VM to complete the setup by logging into your Ubuntu SSO account and selecting the SSH key you'll use to connect to the Core instance. Enjoy!

Saturday, October 28, 2017

Setting up Squarespace Local Development

By now, most everyone in the web design or hosting space knows of Squarespace. It's a great place for anyone who wants a well designed website without having to do much leg work for a beautiful and simple design. The only problem with templates, though, is that someone else could have the same looking website, which could easily be mistaken for yours. In come designers: for anyone wanting to use Squarespace with a custom template, they make it pretty easy to set up a local development server with Node.js, npm, and git.

Primarily I run Linux on my laptop, Ubuntu 16.04.3 LTS to be exact, and setting up the local development server wasn't too difficult after installing Node.js and npm. There are various ways to install Node.js; the one I would not recommend is installing it through the Ubuntu apt package manager. That version is usually a few releases behind the latest and may not work (at least it didn't work for me). The easiest way was using the NodeSource repository to do the install. These packages are not maintained by the Node.js core team, but rather by their respective maintainers.

For Ubuntu, the guide here outlines the install in two easy steps:

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash
sudo apt-get install -y nodejs

The first command adds the NodeSource repository for the latest 6.x version to the apt package manager's source list; running the normal apt-get install then pulls from that repository to install Node.js. After Node.js installs, the next step is to configure the environment variables and change npm's default directory.

Some installations may require changing the permissions on npm's directory. In my case, I needed to change the default directory. When running the command:

npm get prefix

it returned the location as /usr, instead of other configurations where /usr/local is the default directory. In this case it is recommended to change the location rather than change permissions on /usr, because you may inadvertently change something that would make your system not work properly. To change it:

1. Make a directory for global installations:
mkdir ~/.npm-global

2. Change the npm configuration to use the newly created directory path:
npm config set prefix '~/.npm-global'

3. Open or create ~/.profile and add the following:
export PATH=~/.npm-global/bin:$PATH

4. Update system variables with the command in terminal:
source ~/.profile

Now you can test by downloading a package globally without using sudo:

npm install -g jshint

... or move on to installing the Squarespace local development server:

npm install -g @squarespace/server

That's all. Now you can use the local development server to create custom Squarespace templates for your site.

Sunday, August 13, 2017

Raspberry Pi Stratum 2 NTP Server

I had a spare Raspberry Pi Model B sitting around not doing anything, so I decided to set it up as a Stratum 2 NTP server. Since I didn't have a GPS breakout board or Pi HAT, I pointed it to five Stratum 1 sources.

1. Setting up the Raspberry Pi

Download and install Raspbian to the SD card:

sudo dd bs=1m if=2017-07-05-raspbian-jessie-lite.img of=/dev/disk2

If you don't want to plug the Pi into a monitor and want a headless system from the beginning, follow my guide here to enable SSH from the SD card. Then log in using the default pi user and run raspi-config to complete the initial setup.
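
(For reference, the headless SSH enable from that guide boils down to creating an empty file named ssh on the boot partition of the card before first boot; the mount point below assumes the card is mounted on a Mac.)

touch /Volumes/boot/ssh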

sudo raspi-config
sudo apt-get update ; sudo apt-get upgrade


Part of hardening the Pi is to set up a new user and give it sudo privileges. Then you'll want to remove the pi user, after verifying that the new account has super user privileges. (There have been a few times that I haven't verified sudo on the new account and had to start over.)

sudo useradd jeffrey -s /bin/bash -m -G adm,sudo
sudo passwd jeffrey

Log out and log back in as the new user you set up, and remove the default pi user:

sudo userdel pi
sudo rm -rf /home/pi


2. Configuring NTP

NTP is already installed by default in Raspbian Jessie. You'll want to pick at least three different NTP servers for accurate measurements; five is even better. This list is a good resource to pick your servers from, just be sure to pick the ones listed as Open and not Restricted Access, otherwise the NTP query won't work.

Edit the ntp.conf file 

sudo nano /etc/ntp.conf

Make the changes shown in my ntp.conf file, provided below for reference (the changed lines are the server entries and the local-subnet restrict and broadcast lines):

# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help

driftfile /var/lib/ntp/ntp.drift


# Enable this if you want statistics to be logged.
#statsdir /var/log/ntpstats/

statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable


# You do need to talk to an NTP server or two (or three).
#server ntp.your-provider.example
server time.nc7j.com
server time-a.timefreq.bldrdoc.gov
server t1.timegps.net
server t2.timegps.net
server timekeeper.isi.edu

# pool.ntp.org maps to about 1000 low-stratum NTP servers.  Your server will
# pick a different set every time it starts up.  Please consider joining the
# pool: <http://www.pool.ntp.org/join.html>
#0.debian.pool.ntp.org
#1.debian.pool.ntp.org
#2.debian.pool.ntp.org
#3.debian.pool.ntp.org

# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
# details.  The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
# might also be helpful.
#
# Note that "restrict" applies to both servers and clients, so a configuration
# that might be intended to block requests from certain clients could also end
# up blocking replies from your own upstream servers.

# By default, exchange time with everybody, but don't allow configuration.
restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery

# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1

# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust
restrict 192.168.1.0 mask 255.255.255.0

# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
#broadcast 192.168.123.255
broadcast 192.168.1.255

# If you want to listen to time broadcasts on your local subnet, de-comment the
# next lines.  Please do this only if you trust everybody on the network!
#disable auth
#broadcastclient

Save the file and restart NTP:

sudo /etc/init.d/ntp restart

Test the config and be sure that everything is working by querying the NTP servers listed with:

ntpq -pn

The output lists the IP addresses of the NTP servers you have configured in the ntp.conf file and where they are getting the time from. With this being Stratum 2, the servers listed are Stratum 1 and are receiving their measurements from GPS or NIST.

To finish, you'll need to point your clients to the name or IP of the Raspberry Pi to sync their clocks.
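
On a Linux client, for example, that is just one more server line in its ntp.conf; the address below is an assumption for wherever the Pi ends up on your subnet:

# /etc/ntp.conf on a client machine (hypothetical Pi address)
server 192.168.1.10 iburst

Restart the client's NTP service and verify with ntpq -pn as above.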

Saturday, August 12, 2017

NextCloud 12 on the Raspberry Pi

A while back I got a 1TB USB drive from Western Digital that is designed for use with a Raspberry Pi, and I never completely set it up, so it just sat in a box. Lately, I have been revisiting projects that I never completed, and this was one of them. I decided to use the drive for a NextCloud setup on a Raspberry Pi 2. There are guides out there for Nginx (like this one), which I tried without successfully completing the setup, so I decided to go with Apache2 instead.

After the initial setup of the Raspberry Pi with Raspbian Jessie and a little hardening:

sudo apt-get update
sudo apt-get upgrade

1. Install Apache

The first thing to do is to set up a LAMP server using Apache, MySQL, and PHP5. Start by installing Apache:

sudo apt-get install apache2

Then add a line to the /etc/apache2/apache2.conf file to suppress a warning when checking the Apache configuration. 

sudo nano /etc/apache2/apache2.conf

and add:

ServerName server_domain_or_IP

Test the configuration with:

sudo apache2ctl configtest

All you should see in the output is:

Syntax OK

Then restart Apache with the command:

sudo systemctl restart apache2

2. Install MySQL

Again using apt, we can install the MySQL server and client package.

sudo apt-get install mysql-server mysql-client

During the install, you will be asked to enter a root password for MySQL; choose a strong password and write it down somewhere, because you'll need it later to set up the NextCloud database and user. Afterwards, you'll need to run a simple security script to complete the installation:

mysql_secure_installation

This will go through removing the sample databases and users, and allow you to configure the database server with password policies, etc.

3. Install PHP

PHP is the driver behind NextCloud and allows for the dynamic creation of pages based on scripts that query the MySQL database. Using apt, we'll install all the necessary packages for PHP5:

sudo apt-get install php5 libapache2-mod-php5 php5-mcrypt php-apc php-pear \
php-xml-parser php5-cgi php5-cli php5-common php5-curl php5-dev php5-memcache \
php5-mysql php5-gd php5-imagick php5-intl

Edit the Apache dir.conf file to modify the way it serves files in the directory, having it check for an index.php file before an index.html

sudo nano /etc/apache2/mods-enabled/dir.conf

The file should look like this before saving

<IfModule mod_dir.c>
    DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>

Restart the Apache service again and then test the PHP configuration:

sudo systemctl restart apache2

In the docroot (/var/www/html) create a new info.php file and paste the following php snippet

sudo nano /var/www/html/info.php

<?php phpinfo(); ?>

Navigate to http://your_server_IP_address/info.php and you should see the PHP configuration of the server. If everything is okay, remove the info.php file:

sudo rm /var/www/html/info.php

If you choose to use PHP7, you'll have to enable the testing repo by editing /etc/apt/sources.list and adding this to the end of the file:

deb http://mirrordirector.raspbian.org/raspbian/ stretch main contrib non-free rpi

Then create the file /etc/apt/preferences and add the following to keep the Jessie repo as the default:

Package: *
Pin: release n=jessie
Pin-Priority: 600

Update the package list

sudo apt-get update

Install PHP7 packages

sudo apt-get install -t stretch php7.0 php7.0-bz2 php7.0-cli php7.0-curl \
php7.0-fpm php7.0-gd php7.0-intl php7.0-json php7.0-mbstring php7.0-mcrypt \
php7.0-mysql php7.0-opcache php7.0-xml php7.0-zip php-apcu php-pear

4. Install NextCloud

Download the latest release of NextCloud from https://download.nextcloud.com/server/releases/

cd /tmp
curl -LO https://download.nextcloud.com/server/releases/nextcloud-12.0.1.tar.bz2

It is optional, but highly recommended to check the integrity of the archive file

curl -LO https://download.nextcloud.com/server/releases/nextcloud-12.0.1.tar.bz2.sha256
shasum -a 256 -c nextcloud-12.0.1.tar.bz2.sha256 < nextcloud-12.0.1.tar.bz2

The output should look similar to 

nextcloud-12.0.1.tar.bz2: OK

Remove the sha256 checksum file

rm nextcloud-12.0.1.tar.bz2.sha256

The archive sitting in the /tmp directory contains all the files needed to complete the installation; extract it directly into the Apache docroot directory, /var/www:

sudo tar -C /var/www -xvjf /tmp/nextcloud-12.0.1.tar.bz2

This places all the files in the /var/www/nextcloud directory. Since the archive is not specific to any Linux distro, the permissions are not correct. This needs to be fixed, and the guide at DigitalOcean has a great bash script for it. In the /tmp directory, create a new file:

nano /tmp/nextcloud.sh

Paste the following into the file:

#!/bin/bash
ocpath='/var/www/nextcloud'
htuser='www-data'
htgroup='www-data'
rootuser='root'

printf "Creating possible missing Directories\n"
mkdir -p $ocpath/data
mkdir -p $ocpath/assets
mkdir -p $ocpath/updater

printf "chmod Files and Directories\n"
find ${ocpath}/ -type f -print0 | xargs -0 chmod 0640
find ${ocpath}/ -type d -print0 | xargs -0 chmod 0750
chmod 755 ${ocpath}

printf "chown Directories\n"
chown -R ${rootuser}:${htgroup} ${ocpath}/
chown -R ${htuser}:${htgroup} ${ocpath}/apps/
chown -R ${htuser}:${htgroup} ${ocpath}/assets/
chown -R ${htuser}:${htgroup} ${ocpath}/config/
chown -R ${htuser}:${htgroup} ${ocpath}/data/
chown -R ${htuser}:${htgroup} ${ocpath}/themes/
chown -R ${htuser}:${htgroup} ${ocpath}/updater/

chmod +x ${ocpath}/occ

printf "chmod/chown .htaccess\n"
if [ -f ${ocpath}/.htaccess ]
 then
  chmod 0644 ${ocpath}/.htaccess
  chown ${rootuser}:${htgroup} ${ocpath}/.htaccess
fi
if [ -f ${ocpath}/data/.htaccess ]
 then
  chmod 0644 ${ocpath}/data/.htaccess
  chown ${rootuser}:${htgroup} ${ocpath}/data/.htaccess
fi

Run it with the bash command

sudo bash /tmp/nextcloud.sh

The output should look like this

Creating possible missing Directories
chmod Files and Directories
chown Directories
chmod/chown .htaccess

Next we need to create a new site configuration for NextCloud in the /etc/apache2/sites-available directory:

sudo nano /etc/apache2/sites-available/nextcloud.conf

Paste the following into the file:

Alias /nextcloud "/var/www/nextcloud/"

<Directory /var/www/nextcloud/>
    Options +FollowSymlinks
    AllowOverride All

    <IfModule mod_dav.c>
        Dav off
    </IfModule>

    SetEnv HOME /var/www/nextcloud
    SetEnv HTTP_HOME /var/www/nextcloud

</Directory>

Save and exit, then enable the new site with

sudo a2ensite nextcloud

Additionally, enable the mod_rewrite Apache module. This is required for NextCloud to function properly:

sudo a2enmod rewrite

The next step is to create the MySQL database. This can be done with the mysql client, logging in with the root password (replace 'password' in the GRANT statement below with a strong password of your own):

mysql -u root -p

CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* to 'nextcloud'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;

With the database configured, you can exit the mysql CLI and finish configuring NextCloud by going to the Raspberry Pi's IP or server name in a web browser:

http://server_domain_or_IP/nextcloud

The NextCloud setup page should display. From here you can complete the setup by entering an admin username and password, the database account that was created earlier, and the folder where you want to store the data.

Enjoy! This replicates the function of Dropbox or Google Drive, while giving you full control of your data. If you would like to expand on the core features of NextCloud, check out and install plugins from NextCloud's app store.

Install Chef Client on Raspberry Pi

Chef is a great tool for managing nodes (mostly hundreds, if not thousands of them), but it also works great for managing two or three in a home lab. This post goes over installing the Chef Client on a Raspberry Pi 2 or 3 running Raspbian, based on a tutorial on the install found here. At the time of writing, I am using Raspbian Jessie and the Ruby 2.3.4 source code. There may be newer versions out there, but these are the ones that worked for me.

On the console of the Raspberry Pi, log in and elevate privileges to root, because most of the commands that need to be run require super user rights:

sudo su

Update the package list from the official Raspbian repository

apt-get update

Install some prerequisites needed to compile Ruby from source. (These packages may already be installed depending on the version of Raspbian you are using; there is no harm in running the command if that is the case.)

apt-get install gcc make libssl-dev

Once that is complete, download the Ruby source code. As noted earlier, the version I am using is 2.3.4, but there could be a newer one. When the download is complete, extract the archive and copy it to /usr/src to compile:

cd /tmp
wget https://cache.ruby-lang.org/pub/ruby/2.3/ruby-2.3.4.tar.gz
tar -xvzf ruby-2.3.4.tar.gz
cp -r ruby-2.3.4 /usr/src/

Prepare the compile with configure from the source directory, disabling the features that are not needed (the configure command is formatted across two lines). This may take up to ten minutes, depending on the SD card and the Pi model you have.

cd /usr/src/ruby-2.3.4
./configure --enable-shared --disable-install-doc \
--disable-install-rdoc --disable-install-capi

Compile the source with make

make -j4 ; make install

Using the -j4 flag with make multi-threads the execution, utilizing all four of the Pi's cores. Grab a cup of coffee, or take a break... because it can take up to thirty minutes to complete. When the compile is complete, we can install the Chef Client using gem:

gem install chef

This process can take up to another thirty minutes to complete. Once it finishes, you can verify it was installed with:

chef-client --version


Exit from the root console and move on to the final step.

exit

Finally, the last step is to bootstrap the node against your Chef server with the knife bootstrap command. Replace the user and password with your credentials; the Chef server and node names will be different for your environment too:

knife bootstrap srv-chef.home.lab -N srv-pi.home.lab -x {user} -P {password}

It is normal to see some errors in the bootstrap output because the ARM client is not part of the official Chef repo. When it completes, if you have Chef Manage installed you can view more detailed information about the Raspberry Pi there, or alternatively you can use knife to show more information about the node on your Chef server:

knife node show srv-pi