Linux Upskill Challenge – Part 05

Time to complete: ~0.5-1 hours

most-ly Navigating Terminal, more or less

In today’s episode we’ll be playing with terminal pagers. What is a terminal pager, you ask? A terminal pager is a command line program that lets you read text files (to edit files, however, you’ll need a text editor). A pager is great when you are looking for something specific in a file or just want to peruse its contents, and you can also pipe output to a pager to analyze that output more closely. In addition, pagers allow you to search for keywords in files to help speed things along. Pagers get their name because they only output one page’s worth of data at a time.

The three pagers I want to talk about are more, less, and most. more is installed by default on most Linux distros, and less, although extremely common, occasionally isn’t. most is the rarest of the bunch and will usually require a manual install; on Ubuntu, less and more are present out of the box but most is not. Let’s review the differences, and strengths, of each of these commands.


We’ll start with more since it is the simplest of the bunch and has the fewest features. Using it is as easy as calling more at a prompt followed by the filename (or filenames separated by spaces) you wish to examine.

dpaluszek@upskillchallenge:~$ more /etc/ssh/sshd_config

You will notice more has a little percentage indicator that tells you how much of the file you have read. The following keystrokes perform the following actions:

  • Space – advance to the next page. The number of lines advanced is equal to the number of lines available in your terminal window.
  • f – advance to the next page
  • Return – scroll down one line at a time
  • b – go back a page
  • = – display the current line number
  • v – start up your default text editor
  • :n – go to next file (when you have used more than one file name as an input)
  • :p – go to previous file (when you have used more than one file name as an input)
  • :f – display current file name and line number
  • . – repeat previous command
  • /pattern – search the file for the next occurrence of the regex pattern after the slash.

Upon arriving at the end of the file the pager will exit and you’ll be returned to your prompt. This is a very useful command to pipe long output to, as you can use the space bar to read the output page by page.
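For example, piping a long recursive directory listing through more lets you read it a screenful at a time (any command with long output works here):

```shell
# Page through long output one screenful at a time
ls -lR /etc | more

# Open several files at once, then use :n and :p to move between them
more /etc/hosts /etc/hostname
```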


The less command is more feature rich than more, with way more bells and whistles, yet it starts faster on large files because it doesn’t need to read the whole file before showing the first page. This command is chock full of goodies. Invoke it the same way you used more.

dpaluszek@upskillchallenge:~$ less /etc/ssh/sshd_config

Here’s a sampling of what less can do:

  • Space – advance to the next page
  • Return – scroll down one line at a time
  • d – scroll forward one half of the screen size
  • y – scroll backward one line at a time
  • b – scroll backward one page
  • u – scroll backward one half of the screen size
  • left and right arrow – scroll horizontally one half of the screen size
  • control + arrow keys – scroll all the way to the left or right
  • R – refresh the screen, useful for looking at files that are actively being modified
  • g – go to beginning of file
  • G – go to end of file
  • m – followed by either an uppercase or lowercase letter marks the first displayed line so you can easily get back to it later
  • ' – followed by either an uppercase or lowercase letter brings you back to the marked line (see above)
  • /pattern – search forward in the file for the matching regex
  • ?pattern – search backward in the file for the matching regex

This is just a small sampling. You can do really fancy things like find the closing bracket or parenthesis from an opening one too. You can also pass +F when invoking less (or press F while viewing) to have the display follow the file as it is updated in real time. Good for monitoring log files. See the less man page for more information on all available options.


By default the most command isn’t installed on Ubuntu systems. Attempt to run it and you’ll be met with this:

dpaluszek@upskillchallenge:~$ most

Command 'most' not found, but can be installed with:

sudo apt install most

Run the aforementioned apt command (you can review apt usage in Part 4 of this series). Once installed simply invoke most using the same syntax as more and less, passing a filename as a parameter.

dpaluszek@upskillchallenge:~$ most /etc/ssh/sshd_config

As you can see, most displays two useful lines at the bottom of the screen: the first shows the filename currently open and the second offers some assistance. Hitting H for help nets you this useful list:

  • SPACE, D – Scroll down one Screen.
  • U, DELETE – Scroll Up one screen.
  • RETURN, DOWN – Move Down one line.
  • UP – Move Up one line.
  • T – Goto Top of File.
  • B – Goto Bottom of file.
  • >, TAB – Scroll Window right
  • < – Scroll Window left
  • RIGHT – Scroll Window left by 1 column
  • LEFT – Scroll Window right by 1 column
  • J, G – Goto line.
  • % – Goto percent.

Most also allows us to switch between open files on the fly. You can do this by first opening two files with most by passing their filenames to the command. From here you can press :n to bring up a little list of files, using the arrow keys to select which one you wish to switch to, then hitting enter to open it. This is useful in situations where you are, say, reviewing a modified config file against its default.

So far we’ve played with looking at files directly. But what about parsing output from commands? Explore the sshd_config file by using cat and piping it to more, less, and most.

dpaluszek@upskillchallenge:~$ cat /etc/ssh/sshd_config | less
dpaluszek@upskillchallenge:~$ cat /etc/ssh/sshd_config | more
dpaluszek@upskillchallenge:~$ cat /etc/ssh/sshd_config | most

As a bonus, let’s go a bit off topic and discuss some terminal tricks and know-how.

Linux Upskill Challenge – Part 04

Time to complete: ~0.5-1 hours

Using apt, Installing Midnight Commander, Exploring the File System

Welcome back! In this installment we’ll be working with Debian’s Advanced Package Tool. We will manipulate it using the apt command in order to manage applications. There are other command line tools that can be used to interact with the Advanced Package Tool, but we’ll be using apt today. This command allows for the easy installation, updating, listing, and removal of programs on Debian and its derived distributions. The apt command is very commonly used and referenced. We’ll be using it to install Midnight Commander.

Before we dive into apt and how its commands work, we need to clear up a potential source of confusion. Beginning with Ubuntu 16.04, the apt command was introduced to promote ease of use by simplifying the most common commands issued with apt-get and apt-cache. Instead of using those commands with all of their options, merely using apt will get you what you want with less input required. It also offers friendlier output, like a progress bar when installing applications, and apt update additionally tells you how many upgradable packages you have ready to go. Running the apt list --upgradable command returns a nicely formatted list of upgradable packages. Keep this in mind when managing older systems: they may not have apt installed and you’ll have to resort to using apt-get.

Note that apt, apt-get, and apt-cache are installed by default on current Ubuntu systems.

Run the ls -lR command on /etc/apt to get an idea of how its file structure is laid out.

Let’s review the different parts of apt:

  • sources.list
  • apt edit-sources
  • add-apt-repository
  • apt update
  • apt upgrade
  • apt search
  • apt install
  • apt remove
  • apt purge
  • apt autoremove
  • apt show
  • apt list


sources.list

This config file, located in /etc/apt, is where the source URIs for repositories are listed. This file is important enough that it has its own man page. From this man page we can see the default format for listing a source:

deb [ option1=value1 option2=value2 ] uri suite [component1] [component2] [...]

Use grep to weed out entries containing “deb” to see what sources are configured for your system.

|dpaluszek@upskillchallenge:~ -bash v5.0==>grep deb /etc/apt/sources.list
deb focal main restricted
# deb-src focal main restricted
deb focal-updates main restricted
# deb-src focal-updates main restricted

apt edit-sources

This command, when run, prompts you for your text editor of choice, then opens the sources.list file in that editor. From here you can make edits. As you may have noticed, the sources.list file is owned by root, so you’ll need to make use of sudo in order to save your changes.


add-apt-repository

Aside from editing the sources.list file manually you can also use this command. In addition to modifying your apt sources it will automatically download and register the repository’s public key. Use it like so:

sudo add-apt-repository ppa:<repository-name>

apt update

This easily comes in first in the “most used apt command” contest. Running apt update will update all package info from all configured sources. Running this before performing any other actions is required in order to ensure you are working with the latest set of package information and thus, getting the latest updates.
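A minimal update-and-review pass looks like this (nothing is installed yet at this stage):

```shell
# Refresh package indexes from all configured sources
sudo apt update

# See exactly which packages have updates pending
apt list --upgradable
```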

apt upgrade
apt full-upgrade

Probably the second most used apt command. Run apt upgrade to install all available updates for packages from the repositories configured in sources.list. Note that any package whose upgrade would require removing another package will be held back. To allow such removals automatically, use the apt full-upgrade command.

apt search

This one works as advertised. Pass apt search a search term and it will return any package containing that keyword in its name or description (useful for searching out features).

apt install
apt remove
apt purge

These three commands rely on the package name you found using apt search to do as their names suggest. The install subcommand installs the package while remove uninstalls it. Note that remove leaves configuration files behind, which is handy if the program was uninstalled accidentally and later reinstalled. To remove those configuration artifacts as well, use purge.
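Here’s the full cycle with a small real package (tree, a directory-listing utility) so you can watch each step’s effect; substitute any package you like:

```shell
sudo apt install tree    # install the package
sudo apt remove tree     # uninstall, leaving config files behind
sudo apt purge tree      # uninstall and delete config files too
```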

apt autoremove

When packages are installed their dependencies are installed too. When you uninstall a package only that specific package is uninstalled, not the dependencies. Also note that sometimes package dependencies change and those dependencies are no longer needed. To remove the disused dependencies run the apt autoremove command.

apt show

Passing the name of a package attained from a search will give you detailed info on the package. Use this to gain more insight into the packages you are aiming to install.

apt list

This command lists packages filtered by criteria. You can pass globs to the command as well as use options to list installed (--installed), upgradable (--upgradable), or all available (--all-versions) packages.
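Globs and filters combine nicely; the openssh* glob below is just an example:

```shell
# Every installed package whose name starts with "openssh"
apt list --installed 'openssh*'

# Everything with an update pending
apt list --upgradable
```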

Now that we have a basic grasp of how apt works, we can go ahead and install a package. Let’s search for and install midnight commander.

|dpaluszek@upskillchallenge:~ -bash v5.0==>apt search "midnight commander"
Sorting... Done
Full Text Search... Done
avfs/focal 1.1.1-1 amd64
  virtual filesystem to access archives, disk images, remote locations

junior-system/focal 1.29ubuntu1 all
  Debian Jr. System tools

krusader/focal 2:2.7.2-1build1 amd64
  twin-panel (commander-style) file manager

mc/focal 3:4.8.24-2ubuntu1 amd64
  Midnight Commander - a powerful file manager

mc-data/focal 3:4.8.24-2ubuntu1 all
  Midnight Commander - a powerful file manager -- data files

moc/focal 1:2.6.0~svn-r2994-3build1 amd64
  ncurses based console audio player

pilot/focal 2.22+dfsg1-1 amd64
  Simple file browser from Alpine, a text-based email client

It appears that “mc” is what we are looking for. Let’s go ahead and get some info on it:

apt show mc

Installing is just as easy:

sudo apt install mc

Type Y then hit Enter when prompted to install and let the magic happen.

Exploring the File System

Now that Midnight Commander is installed, let’s play around with it. Run it in the terminal by typing mc and hitting enter.

Midnight Commander File Browser

You can use either the arrow keys or your mouse to navigate the file system. Poke around with the following:

  • /etc/passwd
  • /etc/ssh/sshd_config
  • /var/log/auth.log
  • /etc/apt/sources.list
  • /home

You can open files by highlighting the file you wish to read and then clicking the “File” menu up top, then clicking “View File”, then clicking “Ok”. Take note regarding files you don’t have permission to access. What do you think you need to do in order to see root owned files using Midnight Commander?

Well this was a lesson that you’re sure to reuse. Using apt on Debian/Ubuntu/et al systems is a must if you are going to administer these machines with success. There are various other tools to manage packages on Linux systems and while they differ in details they are functionally equivalent.

Next episode we’ll use some new commands to view file content as well as do some more basic terminal navigation, playing with hidden files, and we’ll finish off with a deep dive into nano.

Linux Upskill Challenge – Part 03

Time to complete: ~1-1.5 hours

Working with Sudo, The /etc/shadow File, Hostname Change, TimeZone Settings

Let’s start at the top: what is sudo? You may have heard of it, you may not have, but it is an integral part of being a Linux administrator. Sudo is a program that allows a user to run other programs/commands as another user, usually the root user. It authenticates with the invoking user’s own credentials, not the root password. So, what does this afford us? Why not just log in as the root user and do your bidding straight from the root account? Well, there’s a few reasons why using sudo is beneficial:

  • You aren’t giving out your root password to other admins and users.
  • On systems where sudo is being utilized the root user itself can be disabled, heightening security.
  • There is an audit trail of all sudo activity in the form of logs.
  • You can limit users to have access to certain programs using sudo, further restraining privileges in the interest of security.
  • The process of running a sudo command offers a little buffer that promotes “think before you leap”.
  • Sudo authentication expires automatically, requiring the input of the user password again. Leaving a machine unattended (!!!) is less of a risk than if the root user was left logged in.

So how does this work? How do I use sudo? It’s simple. Just type sudo before the command you wish to run as root in terminal, enter your account password (your password, not the root password), and the command will be executed with root privileges. Easy to use? You bet.

But what’s really going on? Well, if we take a look at the sudo binary we’ll notice something a little peculiar about the permissions. Use the which command to see where sudo actually lives, then use ls -l to check permissions.

|dpaluszek@upskill:~ -bash v5.0==>which sudo
|dpaluszek@upskill:~ -bash v5.0==>ls -l /usr/bin/sudo
-rwsr-xr-x 1 root root 166056 Jul 15 00:17 /usr/bin/sudo

In the permissions for the sudo binary, notice the “s” in the fourth character position, where the owner’s execute bit would normally be? This SetUID bit means that instead of running as the user who invoked it, the binary executes as the user who owns it, in this case, root.
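You can see the same thing numerically with stat: in octal permissions, a leading 4 marks the SetUID bit (4755 = SetUID plus rwxr-xr-x):

```shell
# Long listing shows the "s" where the owner's execute bit would be
ls -l /usr/bin/sudo

# stat prints the octal mode: the leading 4 is SetUID
stat -c '%a %U %n' /usr/bin/sudo
```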

So what sudo does is check the sudoers file, /etc/sudoers, to see if the user who invoked sudo is on the party list. If so, and the credentials aren’t already cached (we’ll get to that in a moment), the invoking user will be prompted for their password. Entering this password allows the command to execute as the root user. The credentials are then cached for a few minutes (governed by the sudoers timestamp_timeout setting, commonly 5 or 15 minutes depending on the distribution), meaning that within this window you can run more sudo commands without having to enter your password again. Sudo creates a child process in which it changes to the target user (again, root) and then executes the command.

If you were thinking that users need to be explicitly added to the sudoers file and that they aren’t on it by default, you’d be right. So how do we grant access to sudo? It’s as easy as modifying the sudoers file itself by adding an entry for either the specific user or for a user group. It’s a no brainer that this needs to be done as the root user! Run sudo nano /etc/sudoers and enter your password to open the sudoers file in the nano editor.

sudo nano /etc/sudoers

Scroll down a bit and you’ll notice there are three sections we should be concerned with here. The “User privilege specification” section is where users are listed along with what sudo permissions they have. The two sections below that govern group membership permissions to sudo, in this case the admin and sudo groups have full access. The permissions are broken down as follows:

  root     ALL=(ALL:ALL) ALL
   1        2    3    4   5
  1. The username who is getting sudo access. Groups are prefaced with a percent sign (%).
  2. Hosts you can run sudo commands on.
  3. The target users you are allowed to run commands as.
  4. The list of groups you can switch to using the -g switch.
  5. The commands you can run.

Here’s an example of a sudoers entry that is a bit more granular:

%localadmin desktop1,desktop2=(root) /usr/bin/rm, /usr/bin/hostname

Here we specify that users in the localadmin group can run the rm and hostname commands as root on 2 machines: desktop1 and desktop2.

Let’s use sudo to play around with the file in which user password hashes are stored. First let’s check the permissions. Password hashes are stored in /etc/shadow.

|dpaluszek@upskill:~ -bash v5.0==>ls -l /etc/shadow
-rw-r----- 1 root shadow 1163 Aug 19 11:55 /etc/shadow

You’ll notice root owns this file. Let’s try to see the contents of this file:

|dpaluszek@upskill:~ -bash v5.0==>cat /etc/shadow
cat: /etc/shadow: Permission denied

Oh noes. DENIED. Let’s use sudo then. You can use either of these two commands, the latter being a nifty shortcut that runs the last command run:

sudo cat /etc/shadow
sudo !!

If you take a look at the output you’ll notice a line for every user on the system. Most will be built in system accounts for services and whatnot but you should see an entry for your user. Mine looks similar to this:

|dpaluszek@upskillchallenge:~ -bash v5.0==>sudo cat /etc/shadow
[sudo] password for dpaluszek: 

Let’s break down our entry for our secondary user mmessier (Note I truncated the password hash to make this easier to read):

username:password:lastchange:minage:maxage:warn:inactive:expire:reserved
    1   :    2   :     3    :  4  :   5  :  6 :    7   :   8  :    9

Break this line down by field (fields are separated by colons, nine in total) and note what each represents:

  1. Username – The username on the system.
  2. Encrypted password – This is a hash of the password, prefaced with a code identifying the hashing algorithm used, delimited by dollar signs. These are the algorithm codes:
    • $1$ – MD5
    • $2a$ – Blowfish
    • $2y$ – Eksblowfish
    • $5$ – SHA-256
    • $6$ – SHA-512
  3. Date of Last Password Change – The date of the last password change, expressed as the number of days since Jan 1, 1970.
  4. Minimum Password Age – The minimum password age is the number of days the user will have to wait before she will be allowed to change her password again.
  5. Maximum Password Age – The maximum password age is the number of days after which the user will have to change her password.
  6. Password Warning Period – The number of days before a password is going to expire (see the maximum password age above) during which the user should be warned.
  7. Password Inactivity Period – The number of days after a password has expired (see the maximum password age above) during which the password should still be accepted (and the user should update her password during the next login).
  8. Account Expiration Date – The date of expiration of the account, expressed as the number of days since Jan 1, 1970.
  9. Reserved Field – This field is reserved for future use.
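To see the fields in action you can split a sample line with awk. The entry below is fabricated (placeholder hash and dates), not a real account:

```shell
# A made-up shadow entry: user, a $6$ (SHA-512) hash, then the aging fields
line='mmessier:$6$salt$fakehash:18500:0:99999:7:::'

# -F: splits on colons; print a few of the nine fields by position
echo "$line" | awk -F: '{printf "user=%s last=%s min=%s max=%s warn=%s\n", $1, $3, $4, $5, $6}'
```

In practice you’d inspect real aging values with sudo chage -l username rather than reading /etc/shadow by hand.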

You can see why this file is owned by the root account. It contains sensitive information that should not be readily available to regular users. Attempts to crack the hashes could be made should they be exposed. I would also recommend not editing this file by hand unless you really know what you are doing. It is always better to manipulate the contents of this file using commands like passwd and chage.

Moving on.

Let’s restart our machine using the reboot command:

|dpaluszek@upskill:~ -bash v5.0==>reboot
Failed to set wall message, ignoring: Interactive authentication required.
Failed to reboot system via logind: Interactive authentication required.
Failed to open initctl fifo: Permission denied
Failed to talk to init daemon.

Denied again! Looks like we’ll have to sudo this one too. Run either of these to reboot the machine:

sudo shutdown -r now
sudo reboot

Use the uptime command to verify the machine did indeed reboot.

|dpaluszek@upskill:~ -bash v5.0==>uptime
 14:04:44 up 0 min,  1 user,  load average: 0.98, 0.24, 0.08

Excellent. Reboots are healthy, after all.

The sudo command logs its actions for your review. Let’s take a look at the log file /var/log/auth.log after running a command as root.

|dpaluszek@upskill:~ -bash v5.0==>sudo hostname
[sudo] password for dpaluszek: 

Cool. Let’s now cat the auth file and see what it shows:

sudo cat /var/log/auth.log

Notice the entry for when I ran hostname? This audit trail proves useful in the sysadmin setting. It is also useful to see what commands you have run in the past, in the event you break things.

Sep 11 14:07:19 upskill sudo: dpaluszek : TTY=pts/0 ; PWD=/home/dpaluszek ; USER=root ; COMMAND=/usr/bin/hostname

Alternatively you can just grep sudo from the file in order to just grab the information you wish to see with grep sudo /var/log/auth.log.
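The same filtering looks like this; the journalctl variant assumes a systemd machine, where the journal carries the same events:

```shell
# Pull only the sudo-related lines out of the auth log
grep sudo /var/log/auth.log

# On systemd machines, query the journal by identifier instead
journalctl -t sudo
```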

Now that we have a grasp on how sudo works and what it can do for us let’s move on to renaming our machine.

Pointing Two Domains to the Same WordPress Site

I bought a couple of domains and decided to see if I could have both of them point to the same WordPress site. This seems pretty easy to do but it required a few steps to get it done and working. I had to straighten out the site’s certificate, I had to edit some Apache config, and I had to add some code to a WordPress php file.

Step 1 – Update letsencrypt certificate

I use Let’s Encrypt to secure the site via a TLS connection. The letsencrypt binary has since been renamed to certbot. We’ll be using certbot on the command line to get a new updated cert that is configured for multiple domains. Here’s the command I used:

sudo certbot -d <first-domain> -d <second-domain>

Follow the prompts and verify the output indicates success. The new cert files should live in the letsencrypt default directory: /etc/letsencrypt/live

Step 2 – Update Apache

Now we need to configure Apache to respond to requests for the new domain via a new virtualhost configuration. We’ll copy the existing site’s config file, then adapt it for the new domain.

cd /etc/apache2/sites-available
sudo cp <existing-site>.conf <new-site>.conf

Edit the newly created conf file, changing the ServerName directive to the new domain name. You don’t need to edit your cert file locations since we’re using one cert for both domains. Save the new file then run these commands to enable the new site then restart Apache:

sudo a2ensite <new-site>
sudo systemctl restart apache2

Browsing to the .tech site results in success.

Step 3 – Configure WordPress for Multiple Domains

Now we need to tell WordPress to stay on the domain the web user started on. Currently, if you browse to the .tech domain and click a link within the site, you’ll be brought back to the .com domain. Lame. We can fix this by editing the wp-config.php file. Mad props to Jeevanandam M. for this. Search for the following line in the file:

$table_prefix  = 'wp_';

After $table_prefix, add the following:

define('WP_SITEURL', 'https://' . $_SERVER['HTTP_HOST']);
define('WP_HOME', 'https://' . $_SERVER['HTTP_HOST']);

You should now be able to browse around the site and remain on the top level domain you started on. Success.

Linux Upskill Challenge – Part 02

Assign a Static IP Address, Customizing Your Bash Prompt & Misc Commands

Time to complete: ~1-1.5 hours

Welcome back! In this installment of the Linux Upskill Challenge we’ll be completing a few tasks:

  • Assigning a static IP to your Ubuntu virtual machine.
  • Customizing your bash prompt.
  • Doing some Ubuntu user management.
  • Playing with some more common commands.

Static IP Addressing

By now you may have noticed that rebooting your Ubuntu vm may result in it receiving a new IP address each time. This is annoying since you will need to log into Ubuntu via your hypervisor console to find out what the IP is before you can SSH into it. So let’s set a static IP address shall we?

Like most things in Linux we’ll need to edit a configuration file to accomplish our task of setting a static IP address. We’ll edit the /etc/netplan/00-installer-config.yaml file using nano:

sudo nano /etc/netplan/00-installer-config.yaml

Using nano, edit the configuration file, entering your own network information. It is important to note that you cannot use tabs within yaml files, so use spaces for indentation.
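For reference, a static configuration typically looks something like this; the interface name enp0s3 and all of the addresses below are placeholders for your own values:

```yaml
# Example /etc/netplan/00-installer-config.yaml -- every value here is a
# placeholder; substitute your own interface name, address, gateway, and DNS
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: false
      addresses:
        - 192.168.1.50/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 1.1.1.1]
```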

There are two ways to finalize this. You can either restart netplan:

sudo netplan apply

Or you can reboot your machine:

sudo shutdown -r now

Note that reapplying netplan settings will break your SSH connection so you will need to restart your SSH session using your newly configured IP address.

Again if you find your machine inaccessible you can always log into it using the VirtualBox console so you can fix the netplan file.

Verify your network settings with any of the following commands (they all result in the same output):

ip address show
ip add show
ip a

Fun fact: Ubuntu versions prior to 17.10 didn’t use netplan and its associated yaml files but instead used the configuration file /etc/network/interfaces, so take note! Encountering older operating systems is common in the sysadmin world.

Let’s move on to tinkering with your bash prompt, shall we?

Linux Upskill Challenge – Part 01

Ethernet Management – SSH

Time to complete: ~1.5-2 hours

Welcome back! In this installment of the Linux Upskill Challenge we’ll be completing a few tasks:

  • Configuring VirtualBox to allow traffic from our VM to our local network.
  • Patching the system.
  • Installing SSH and testing access.
  • Configuring SSH for remote access in a more secure way.
  • Reviewing some basic commands that let us gain insight into our system.

VirtualBox Ethernet Settings

VirtualBox by default puts our vm onto a segregated network created specifically for it. While our vm can access the Internet through this connection, we cannot reach it from other devices on our local network. The goal here is to access our Ubuntu box via SSH (Secure Shell protocol) from another machine, and for that we need network accessibility. While my vm can hit hosts on my local LAN, my local LAN cannot see my vm. To make the change in VirtualBox do the following:

From the VirtualBox main screen highlight your Ubuntu vm and click “Settings”, then click the “Network” tab up top.
Change the “Attached to:” dropdown from “NAT” to “Bridged Adapter”.
Verify that the “Name” dropdown lists the connection your host machine is using to connect to your network. I am on a laptop, so I am using my laptop’s WiFi connection. If you are on a desktop select your Ethernet port.

Now that we have that out of the way, start up your vm and log in. Let’s check the IP address, shall we? Run this command:

ip address show

Your vm’s IP address will follow “inet” in the output for your network interface. Note that 127.0.0.1 is the loopback address, not your LAN address.

Verify that the IP of your vm is part of your local subnet and if so we are clear to move on.

Linux Upskill Challenge – Part 00

Learn the skills required to sysadmin a remote Linux server from the command line.

Time to complete: 1-1.5 hours

So I found this thing on Reddit: Linux Up Skill Challenge
It’s a 20 part series on Linux administration. It starts with some basics but evolves into slightly more complicated, and useful, tasks. This is all done on a headless (no GUI, all command line) Linux system. Pretty neat! Shall we delve into it? Let’s start learning!
I mapped this out and figured I could write a post on each section of the challenge. There’s some areas I can dip into more deeply, and I added some cool little things here and there to help round things out. In this first segment I’ll explain some basics:

  • What is virtualization?
  • What is VirtualBox?
  • What are some common Linux distributions or “distros”?
  • What other options are there for spinning up a virtual machine?

Then we’ll get into some hands on stuff:

  • What is the process for using VirtualBox to create a Linux virtual machine?
  • What are the steps to installing a Linux distro?

Virtualization and Hypervisors

Before we get into playing with Linux we need to get a machine up and running. Back in the old days spinning up a machine was accomplished by downloading an ISO installation file for the operating system of your choice (Linux, Windows, etc), burning it to a CD, then throwing that CD into your optical drive and booting your computer from it. From here you would undergo the installation process for your operating system, installing it onto a hard drive in your computer.

Well, since virtualization hit the scene those days are pretty much over. So what is virtualization? What is a “virtual machine”? Simply put, virtualization is the act of creating a virtual instance of things like operating systems, networks, and even application code, as opposed to a physical instance. While many forms of virtualization exist, the most common form, and the one generally meant when speaking about virtualization, is hardware virtualization. In the old days, as I mentioned, you would install an operating system directly on physical hardware. With virtualization you instead install an operating system (or even several) on a layer of abstraction on top of the hardware. Such an installed instance of an operating system is called a virtual machine.

That layer of abstraction between the hardware and the virtual operating systems is managed by the hypervisor. The hypervisor sits between the hardware and the installed operating system(s) and is in charge of allocating resources to them, orchestrating, so to speak, to ensure all virtual instances get the resources they require.

There are two flavors of hypervisors: Type 1 and Type 2. Type 1 hypervisors are low level and are installed directly onto hardware. One of the most common Type 1 hypervisors is VMware’s ESXi, which is used heavily in the enterprise setting. Another Type 1 hypervisor, this one Linux based and open source, is Proxmox. A Type 2 hypervisor runs as a piece of software inside an installed operating system. Examples include VMware Workstation and Oracle’s VirtualBox. We’ll be using VirtualBox in this demonstration considering it is freely available on many platforms. I mentioned that I am running a MacOS machine, but you can follow along on Windows too using VirtualBox.

Common Linux Flavors

There are TONS of Linux distributions out there. For an idea on how many there are, and to see the history of how certain distros spun off others, check out this graphic:
Wikipedia: Linux Distribution Timeline
There are two denominations of Linux systems: those with a pretty GUI and those that are headless, or command line only. The latter are primarily used for server related functions while the GUI enabled ones are for desktop use. Common GUI equipped flavors include:

  • Ubuntu
  • Linux Mint
  • Arch Linux
  • Zorin OS

It's important to note that the GUI isn't strictly tied to the operating system. You can mix and match supported desktop environments with your Linux flavor. Note that MacOS is built on Darwin, a Unix operating system whose XNU kernel incorporates BSD code. Android phones run on a Linux kernel. Because it's lightweight, many Internet of Things (IoT) devices run some sort of Linux derivative. This stuff is everywhere.

Common enterprise Linux/Unix distributions include:

  • Ubuntu
  • Debian
  • CentOS
  • Red Hat Enterprise Linux (RHEL)
  • FreeBSD
  • OpenSUSE
  • Fedora

In this series I will be using a headless version of Ubuntu. It's very commonly used, so if you're looking for an answer to a question you're likely to find someone online who has hit the same problem.

Someone Else’s Hypervisor?

There are other options for setting up a virtual machine aside from running a hypervisor like VirtualBox on your own computer. Cloud providers have Infrastructure as a Service (IaaS) offerings where you can spin up virtual machines on a whim. This is usually done by picking from a catalog of operating systems, although you can use your own custom installation (this can get pretty advanced). Providers that offer these services include Amazon's AWS (EC2), Google Cloud, Linode, and Digital Ocean. Being on a public provider means your virtual machine(s) can easily be configured to sit on the public Internet with a public IP address. You do this by configuring access rules to allow traffic to whatever service you wish to make available to the Internet. This blog is on a hosted service. So meta.

Next we’ll talk about setting up a virtual machine in VirtualBox.

Frankenstein Blog

My Host Died?

I finally set aside some time to do a little site maintenance the other day. I wanted to do a few quick things: back up the site, increase the volume size, and then back up the site again once finished. Note that I had no backups of the site. EEEK, I know, I know, shame on me. Anyway, my task seemed simple, until I discovered that I couldn't SSH into my web server. What gives? I took a look in EC2 and found the machine in a running state. Ok. I couldn't hit the site via http. Ok. So I force restarted it via the EC2 console. Ok. The machine came back up as far as Amazon was concerned, but I still couldn't get into it. Le sigh. What to do? No backups, dammit! I needed to access the data on this virtual machine. Well, I figured, since I was going to resize the volume anyway, maybe I could just extract the data I needed, restore it to a new instance with a larger volume, upgrade the operating system in the process, and be done with it.

AWS CLI (Command Line Interface)

Let me introduce you to the Amazon Web Services Command Line Interface. This open source tool allows programmatic access to, and administration of, AWS services from the command line. It's available for shells like bash, zsh, and tcsh, as well as PowerShell. It's super useful, easy to set up, and easily repeatable. Plus there are things you can only do via the command line that you can't do in the AWS GUI (so get comfortable with a terminal). My goal was to export a copy of my virtual machine to an S3 bucket and then download it so I could extract the data I needed from it. I started by installing the CLI tool onto my MacOS laptop:
Installing the AWS CLI version 2 on macOS
Installing the pkg payload into the default location (/usr/local/aws-cli) puts the CLI binary on your $PATH.
Once done I had to configure the tool to look at my AWS account:
Configuring the AWS CLI
Running the following command puts the CLI tool into configuration mode, where you can enter your account attributes so the tool can talk to your account:

dpaluszek@Dans-MacBook-Pro ~ % aws configure
AWS Access Key ID: 
AWS Secret Access Key:
Default region name: 
Default output format: 

After each of these prompts I pasted the keys from my account. I created a new user in IAM (Identity and Access Management) and used the newly generated keys. The region name is whatever region the machine(s) you wish to manipulate are in. The output format is the format in which results of commands are displayed. JSON is the default and is easily readable so I left it blank.
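Under the hood, aws configure simply writes two small INI files, ~/.aws/credentials and ~/.aws/config. Here's a sketch of what they end up containing, written to a scratch directory here so a real ~/.aws isn't clobbered (the key values are placeholders, not real credentials):

```shell
# Demonstration of the two files `aws configure` creates.
# Normally these live at ~/.aws/credentials and ~/.aws/config.
AWS_DIR=$(mktemp -d)

cat > "$AWS_DIR/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = examplesecretaccesskey0000000000
EOF

cat > "$AWS_DIR/config" <<'EOF'
[default]
region = us-east-1
output = json
EOF

# Show what we wrote
cat "$AWS_DIR/credentials" "$AWS_DIR/config"
```

Knowing where these files live makes it easy to script multiple profiles or rotate keys without re-running the interactive prompts.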

So let's get to it. I started by shutting down the VM in the EC2 console. I then created a new S3 bucket in my account named "danblog" and configured its access to be public, as required for me to download the bucket's contents once the export landed in it. I ran the following command to initiate an export of the virtual machine:

aws ec2 create-instance-export-task --description "dan_blog" --instance-id i-0893e588d5fdf55e2 --target-environment vmware --export-to-s3-task DiskImageFormat=vmdk,ContainerFormat=ova,S3Bucket=danblog

Now this should return something along these lines:

{
    "ExportTask": {
        "Description": "dan_blog",
        "ExportTaskId": "export-i-050174cb06f17ecbe",
        "ExportToS3Task": {
            "ContainerFormat": "ova",
            "DiskImageFormat": "vmdk",
            "S3Bucket": "danblog",
            "S3Key": "export-i-050174cb06f17ecbe.ova"
        },
        "InstanceExportDetails": {
            "InstanceId": "i-0893e588d5fdf55e2",
            "TargetEnvironment": "vmware"
        },
        "State": "active"
    }
}

There is a command you can use to check the status of an operation (the export-task-id can be found in the previous output):

aws ec2 describe-export-tasks --export-task-ids export-i-050174cb06f17ecbe

This returns the same output as the create-instance-export-task command. Take note of the "State" entry; it will change from "active" to "completed" once the process finishes.
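Rather than re-running the command by hand, you can let the CLI's --query flag pull out just the state and poll in a loop. A minimal sketch (the task ID is the one from the example output earlier; substitute your own):

```shell
# Poll an EC2 export task until it leaves the "active" state.
# export_state wraps the describe call; --query uses a JMESPath
# expression to extract just the State field as plain text.
export_state() {
  aws ec2 describe-export-tasks \
    --export-task-ids "$1" \
    --query 'ExportTasks[0].State' --output text 2>/dev/null
}

TASK_ID="export-i-050174cb06f17ecbe"
while [ "$(export_state "$TASK_ID")" = "active" ]; do
  echo "Export still running..."
  sleep 60
done
echo "Export is no longer active."
```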

Down the Rabbit Hole, a Dead End?

So after the export completed I hopped into my S3 console and downloaded the ova file. I used the "Import Appliance" function in VirtualBox to create a VM from the ova file. I zipped through the prompts and booted it. I was met with a black screen. Le sigh. I had a rogue Ubuntu VM with MySQL that I had set up a while back for a class I was taking, which I could potentially use: I could attach the OVA file's vmdk to this box and poke around. So I added the volume to the VM and booted Ubuntu. I found the disk's device node and mounted it to a folder I created in my home directory. The file system seemed to be intact, and I was able to access the web directory with no issue. The real question was whether or not I could get into the database. WordPress stores just about all content in a MySQL database, including the content of posts. That's essentially, well, the site, so without that I would lose everything. It's not a ton of content, but still.

I wasn't too worried, though; the database was in my possession and I knew the credentials, so it was a matter of just getting in there. So I did the logical thing of changing the datadir directive in the /etc/mysql/mysql.conf.d/mysql.cnf file to point to the mysql directory on the "external" drive, then stopped and started mysql. From here on things get a bit hazy. I saw a myriad of issues including, but not limited to: mysql failing to restart due to permissions issues; credentials that I knew worked (taken from the WordPress config file) failing with an authorization issue on the wordpressuser account; and root account reset attempts failing, amongst other things. I effectively Swiss-cheesed that drive trying to get it to work.

On a whim I decided to start over with a fresh copy of the ova downloaded from S3 and imported it into VirtualBox. While doing so I noticed that the Guest OS Type setting was set to Windows Server 2003 32 bit. I clicked through the options and set it to Ubuntu 64bit. Why would this make a difference? I thought this selection was just a means of labeling the VM graphic in the VirtualBox interface. Does it change anything boot related? There's no such choice required in any other hypervisor I've played with, and my research turned up nothing about the setting (it's changeable in the "General" section, "Basic" tab, under a configured VM's settings). In any case, when set to Ubuntu 64bit the machine booted, although it took about 15 minutes as it appeared hung on a random number generator function. I've read that such processes require entropy input into the system, but I am unsure whether me banging on the keyboard did anything to speed that up. Regardless, the machine booted and sat at the login prompt.
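For reference, the datadir change is a one-line edit in the [mysqld] section; the mount path below is illustrative, not my exact layout:

```ini
# Excerpt of /etc/mysql/mysql.conf.d/mysql.cnf
[mysqld]
# Point MySQL at the data directory on the mounted export
# instead of the default /var/lib/mysql
datadir = /home/dan/mounted-export/var/lib/mysql
```

Worth noting: permission failures like the ones I hit are commonly caused by the relocated files not being owned by the mysql user, or by AppArmor confining mysqld to the default datadir.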

Now that the machine was booted, could I log into it? This AWS machine didn't have any creds that I knew about (perhaps I failed to record them?). By default the user AWS uses is "ubuntu", but my attempts at logging in failed, and I am pretty sure I never set a password for any account on the box. I did, however, set up an SSH key via AWS for it, which I of course had, since that was how I accessed the machine normally. So I grabbed the MAC address from VirtualBox and used it to find the machine's IP via its DHCP lease in my firewall (I know, I was surprised too). I was then able to SSH into the box using that IP. From there I could access everything normally. I zipped up the site's root directory using tar, exported the MySQL WordPress database as a .sql file, and scp'd them both to my local machine where I promptly backed them up. Saved!
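The backup itself boiled down to a tarball plus a database dump. Here's a toy run of the tar step against a scratch directory (in reality the source was the site's web root, and the dump came from something like mysqldump -u wordpressuser -p wordpress > wordpress.sql):

```shell
# Toy demonstration of archiving a "web root". The scratch directory
# stands in for something like /var/www/html.
WEB_ROOT=$(mktemp -d)
echo "<?php /* wp-config placeholder */" > "$WEB_ROOT/wp-config.php"

# -C changes into the parent directory first, so the archive stores
# relative paths instead of absolute ones
tar -czf site-backup.tar.gz -C "$(dirname "$WEB_ROOT")" "$(basename "$WEB_ROOT")"

# List the archive contents to sanity-check it before trusting it as a backup
tar -tzf site-backup.tar.gz
```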

It’s Alive!

I switched to Digital Ocean for this machine. I wanted to try out their service based on the recommendation of a good tech friend by the name of Matthew Fox (you can peep his blog here). So I spun up an Ubuntu 20 machine, installed all the dependencies WordPress requires, and scp'd the tarball and sql export files to the machine. I unpacked the web directory back into place and fixed permissions by chowning everything to the www-data user and group. Next I set up a MySQL database for WordPress and imported the sql file. I had to create, in MySQL, the WordPress user account referenced in the WordPress config file and assign it permissions. I also had to do some apache virtualhost configuring to get apache serving pages out of the right directory. I installed a Let's Encrypt cert using certbot, pointed apache at the correct certificate files, and ensured apache was serving over port 443 (with an http->https redirect configured). The last issue to fix was a database connection error when loading the site. I tracked it down to a mismatch between the database's name and the name referenced in the WordPress php config file. It's case sensitive, so I changed it and the site loaded. Back in business.
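The virtualhost work amounted to something like the following (domain, paths, and certificate locations are illustrative, not my exact config):

```apache
# Hypothetical /etc/apache2/sites-available/blog.conf
<VirtualHost *:80>
    ServerName example.com
    # Send all plain-HTTP traffic to HTTPS
    Redirect permanent / https://example.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName example.com
    DocumentRoot /var/www/html/blog

    SSLEngine on
    # certbot drops the Let's Encrypt files under /etc/letsencrypt/live/
    SSLCertificateFile    /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>
```

On Ubuntu you'd enable a site file like this with a2ensite and reload apache2.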

Site is up. Backups secured.

Dual Boot Kali/Catalina on an Unsupported Mac

Two Birds With One Stone

So I have this old secondhand MacBook Pro 7,1 that won't run MacOS later than 10.13.x. I have some uses for the Mac side, namely mobility, but I would also love to get Kali Linux installed so I can play around with it. I saw a comment on Reddit referencing a tool named "Patcher" that allows newer versions of MacOS to run on older Mac hardware. I immediately went down the rabbit hole, and here we are.

Outlined below are the steps I took to get Catalina installed, a custom boot manager installed, and Kali Linux running in a dual boot setup. There are 3 main segments to this:

  1. Create a bootable Catalina USB installer using Patcher, then install Catalina on my unsupported machine.
  2. Install a custom boot loader onto my machine.
  3. Copy a Kali Linux iso to a USB drive, then install it onto my machine.

Patcher – Run Newer MacOS on Unsupported Hardware

Prerequisites – Here’s all you need to get started with Patcher:

Getting this going is super easy as the Patcher app does just about everything for you. USB installer creation can be performed on any machine. Once Patcher was downloaded I performed the following steps:

  1. Open the Patcher DMG and run the macOS Catalina Patcher application.
  2. Click Continue until you are prompted to either browse for a copy of Apple's Catalina installer or download a fresh copy. It is important to note that at the time of this writing 10.15.4 has been released, but 10.15.4 isn't working with Patcher on most machines. Therefore you should use Patcher's "Download a Copy" feature to grab a copy of 10.15.3, which works.
  3. Insert your USB.
  4. Once Catalina is downloaded click the orange external drive icon to “Create a Bootable Installer”.
  5. Select your USB drive, then click start.
  6. Enter your password when prompted to begin.
  7. Once done your USB drive is ready to rock.
  8. Plug the USB into your unsupported Mac, hold option to bring up the boot picker, and select the Patcher USB.
  9. Once booted you’ll be presented with a list of options. Highlight Disk Utility and then click continue.
  10. We need to leave some space for Kali. Above the left pane click View–>Show All Devices.
  11. Highlight the internal HDD and then click the partition button.
  12. Create 2 partitions: an APFS partition for our Catalina install and another for Kali. It is up to you to decide how large you wish these to be. I set the Kali partition to HFS, but this shouldn't matter since it will be reformatted when we install Kali.
  13. Close Disk Utility, highlight reinstall macOS, click Continue, agree to terms, select your newly created APFS volume, then click install.
  14. Once done reboot back into the Patcher USB.
  15. Click macOS Post Install
  16. Leave checkboxes alone and click to install patches.
  17. Allow system to reboot, then boot back into the Patcher USB.
  18. While we're at it, let's disable SIP. We'll need to do this in order to install Kali. Click Utilities on the upper toolbar, then disable SIP with the following command:
    csrutil disable
  19. Reboot and setup Catalina for your use.

Installing rEFInd – Custom Bootloader

Prerequisites – Here’s all you need to get rEFInd onto your machine:

Despite involving a few terminal commands, this is really quite simple: we'll mount the EFI volume on the Mac, then throw the rEFInd files onto it.

  1. Verify your architecture with the following command:
    ioreg -l -p IODeviceTree | grep firmware-abi
  2. You should get something similar to the following:
    | |   "firmware-abi" = <"EFI64">
    This indicates 64bit.
  3. Run the following command to verify where the EFI partition lives:
    diskutil list
  4. Take note of the EFI partition's identifier (typically disk0s1).

  5. Run the following commands to create a mount point for the EFI volume, mount it, and create a directory for rEFInd. EFI should be at disk0s1, but if it is different for you, modify the command below appropriately.
    sudo mkdir /Volumes/ESP
    sudo mount -t msdos /dev/disk0s1 /Volumes/ESP
    sudo mkdir -p /Volumes/ESP/efi/refind
  6. Download rEFInd and unzip it.
  7. Copy the contents of the refind subdirectory to the refind directory we created in the step above.
  8. More than likely you are running 64bit EFI, so delete the refind_ia32.efi binary (and refind_aa64.efi, if present) to avoid conflicts.
  9. Likewise delete the drivers_ia32 (and any aa64) driver directory, and trim any filesystem drivers you don't need from drivers_x64, to avoid both conflicts and slow boot times.
  10. Rename refind.conf-sample to refind.conf.
  11. Now we must bless all things holy in order to boot to our new loader:
    sudo bless --mount /Volumes/ESP --setBoot --file /Volumes/ESP/EFI/refind/refind_x64.efi --shortform
  12. Reboot the machine and you should be presented with the rEFInd bootloader, showing an icon for each bootable volume it detects.

Kali Linux – Installation Time

Now that we have our machine prepped we can create a Kali USB installer, boot to it, and install Kali onto our Mac. This part is pretty simple too but you need to be careful when formatting an installation partition, lest you nuke your macOS install. Here’s what you need:

Here’s what I did:

  1. Open Etcher.
  2. Select the iso you downloaded, select your USB drive, click “Flash”.
  3. Plug your Kali USB into your destination Mac and boot it.
  4. Your USB should show up as:

    Boot Legacy OS from whole disk volume
  5. Once booted select “Graphical Install” and proceed with setting up machine name, username, etc.
  6. When prompted to manage disks/partitions choose “Manual”.
  7. Select the HFS volume you created earlier and delete the partition.
  8. This will now show up as free space.
  9. Go back to Guided Partitioning and select “use the largest continuous free space”.
  10. Select your desired partitioning schema. I just used the whole partition for all files, nothing fancy.
  11. Elect to write the changes to disk.
  12. Kali will now install. I am not sure whether GRUB is strictly required, but I installed it on my HDD when prompted.
  13. Profit.

I imagine you could do this with any Linux flavor, not just Kali. I haven't tried Debian or CentOS or anything else, but I see no reason why it wouldn't work for other distributions.

Credential Stuffing – Turning Your Online Accounts Into Cash

If you are hanging out on the Dark Web you may already be familiar with credential stuffing and its criminal benefits. What you may not know is how it is leveraged within a longer process that delivers its final product, via an underground marketplace, in exchange for money. Credential stuffing is the taking of an input, in this case a database of leaked or stolen user credentials, and turning it into a list of different sites on which those credentials work. This list is then put on a marketplace and sold on the Dark Web. Along the way, botnets are leveraged in order to stay under the radar.

Data breaches usually result in data ending up in the wrong hands. Account info is acquired, sold, and used before those exposed by the breach have a chance to change their passwords. There is a time limit on how long malicious actors can use that data before the leak is discovered and access is subsequently cut off. That's a problem for a black hat. In addition, you are limited to the site where the breach occurred. But what if you could extend the usefulness of that initial breach? Credential stuffing does just that.

Let's take a database and run it through the credential stuffing assembly line. A provider's database becomes exposed and lands in the hands of a malicious actor. This actor makes a small investment in some automated tools, which take the leaked login credentials from our database and try them against a variety of platforms in search of successful logins. (This alone is a good reason not to reuse passwords across service providers.) It is here that botnets are used to spread out the login attempts, circumventing safeguards put in place to protect against things like brute force attacks. Once a list of working sites and logins is aggregated, it is uploaded to a marketplace where the entries are verified and sold. Researchers have observed these automated marketplaces to be bustling, a testament to how effective this money driven economy is.

The most effective way to protect yourself is to not have any online accounts at all. Since this isn't really feasible in 2019, you are going to have to keep the following in mind:

  • Enable multi factor authentication (MFA) wherever supported
  • Use a password manager
  • Use passwords consisting of random strings of characters
  • Do not use the same password for multiple sites/services

Credential stuffing relies on compromised login credentials working on multiple sites. Do not reuse passwords! Using a password manager to keep track of your random, differing passwords is crucial. Enabling MFA also stops these attacks cold, since a stolen password alone is no longer enough to log in.
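If you want to see what "a random string of characters" looks like in practice, a one-liner against /dev/urandom does the trick (a password manager's built-in generator is the better everyday tool, of course):

```shell
# Generate a random password of the requested length (default 24)
# from /dev/urandom, restricted to a printable character set.
gen_password() {
  LC_ALL=C tr -dc 'A-Za-z0-9!@#$%^&*' < /dev/urandom | head -c "${1:-24}"
}

gen_password 24; echo
```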

See here for more detailed info as described by those who have been investigating credential stuffing: