Linux Upskill Challenge – Part 03

Time to complete: ~1-1.5 hours

Working with Sudo, The /etc/shadow File, Hostname Change, TimeZone Settings

Let’s start at the top: what is sudo? You may have heard of it, you may not have, but it is an integral part of being a Linux administrator. Sudo is a program that allows a user to run other programs/commands as another user, usually the root user. It authenticates using the credentials of the user invoking it, not the target user. So, what does this afford us? Why not just log in as the root user and do your bidding straight from the root account? Well, there are a few reasons why using sudo is beneficial:

  • You aren’t giving out your root password to other admins and users.
  • On systems where sudo is being utilized the root user itself can be disabled, heightening security.
  • There is an audit trail of all sudo activity in the form of logs.
  • You can limit users to have access to certain programs using sudo, further restraining privileges in the interest of security.
  • The process of running a sudo command offers a little buffer that promotes “think before you leap”.
  • Sudo authentication expires automatically, requiring the input of the user password again. Leaving a machine unattended (!!!) is less of a risk than if the root user was left logged in.

So how does this work? How do I use sudo? It’s simple. Just type sudo before the command you wish to run as root, enter your account password (your password, not the root password), and the command will be executed with root privileges. Easy to use? You bet.

But what’s really going on? Well, if we take a look at the sudo binary we’ll notice something a little peculiar about the permissions. Use the which command to see where sudo actually lives, then use ls -l to check permissions.

__________________________________________________________________________
|dpaluszek@upskill:~ -bash v5.0==>which sudo
/usr/bin/sudo
__________________________________________________________________________
|dpaluszek@upskill:~ -bash v5.0==>ls -l /usr/bin/sudo
-rwsr-xr-x 1 root root 166056 Jul 15 00:17 /usr/bin/sudo

In the permissions for the sudo binary, notice the “s” where the owner’s execute bit would normally be? This is the SetUID bit: instead of running as the user who invoked it, the binary is executed as the user who owns it, in this case root.

So what sudo does is check the sudoers file /etc/sudoers to see if the user who invoked sudo is on the party list. If so, and the credentials aren’t already cached (we’ll get to that in a moment), the invoking user will be prompted for their password. Entering this password will allow the command to execute as the root user. This password will be cached for 5 minutes (the default on most Linux systems), meaning that within this window you can run more sudo commands without having to enter your password. Sudo creates a child process where it changes to the target user (again, root) and then the command is executed.
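Two related flags are worth knowing while we’re here: sudo -l lists what the current user is allowed to run through sudo, and sudo -k throws away the cached credentials immediately instead of waiting out the timeout.

sudo -l    # list the sudo privileges of the current user
sudo -k    # invalidate the cached credentials right now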

If you were thinking that users need to be explicitly added to the sudoers file and that they aren’t on it by default you’d be right. So how do we grant access to sudo? It’s as easy as modifying the sudoers file itself by adding an entry for either the specific user or for a user group. It’s a no brainer that this needs to be done as the root user! Run sudo nano /etc/sudoers and enter your password to open the sudoers file in the nano editor. (The safer route is sudo visudo, which syntax-checks the file before saving, but nano is fine for a look around.)

Scroll down a bit and you’ll notice there are three sections we should be concerned with here. The “User privilege specification” section is where users are listed along with what sudo permissions they have. The two sections below that govern group membership permissions to sudo, in this case the admin and sudo groups have full access. The permissions are broken down as follows:

USER ALL=(ALL:ALL) ALL
 1    2    3   4    5
  1. The user who is getting sudo access. Groups are prefaced with a percent sign (%).
  2. Hosts you can run sudo commands on.
  3. The target users you are allowed to run commands as.
  4. The list of groups you can switch to using the -g switch.
  5. The commands you can run.

Here’s an example of a sudoers entry that is a bit more granular:

%localadmin desktop1,desktop2=(root) /usr/bin/rm, /usr/bin/hostname

Here we specify that users in the localadmin group can run the rm and hostname commands as root on 2 machines: desktop1 and desktop2.
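One more illustrative entry: the NOPASSWD tag lets a user run a specific command without being prompted for a password at all, which is handy for scripted jobs. The username and command below are just placeholders:

deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart apache2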

Let’s use sudo to play around with the file where user password hashes are stored: /etc/shadow. First let’s check the permissions.

__________________________________________________________________________
|dpaluszek@upskill:~ -bash v5.0==>ls -l /etc/shadow
-rw-r----- 1 root shadow 1163 Aug 19 11:55 /etc/shadow

You’ll notice root owns this file. Let’s try to see the contents of this file:

__________________________________________________________________________
|dpaluszek@upskill:~ -bash v5.0==>cat /etc/shadow
cat: /etc/shadow: Permission denied

Oh noes. DENIED. Let’s use sudo then. You can use either of these two commands; the latter is a nifty shortcut that reruns the previous command with sudo prepended:

sudo cat /etc/shadow
sudo !!

If you take a look at the output you’ll notice a line for every user on the system. Most will be built in system accounts for services and whatnot but you should see an entry for your user. Mine looks similar to this:

__________________________________________________________________________
|dpaluszek@upskillchallenge:~ -bash v5.0==>sudo cat /etc/shadow
[sudo] password for dpaluszek: 
root:*:18474:0:99999:7:::
daemon:*:18474:0:99999:7:::
bin:*:18474:0:99999:7:::
sys:*:18474:0:99999:7:::
sync:*:18474:0:99999:7:::
games:*:18474:0:99999:7:::
man:*:18474:0:99999:7:::
lp:*:18474:0:99999:7:::
mail:*:18474:0:99999:7:::
news:*:18474:0:99999:7:::
uucp:*:18474:0:99999:7:::
proxy:*:18474:0:99999:7:::
www-data:*:18474:0:99999:7:::
backup:*:18474:0:99999:7:::
list:*:18474:0:99999:7:::
irc:*:18474:0:99999:7:::
gnats:*:18474:0:99999:7:::
nobody:*:18474:0:99999:7:::
systemd-network:*:18474:0:99999:7:::
systemd-resolve:*:18474:0:99999:7:::
systemd-timesync:*:18474:0:99999:7:::
messagebus:*:18474:0:99999:7:::
syslog:*:18474:0:99999:7:::
_apt:*:18474:0:99999:7:::
tss:*:18474:0:99999:7:::
uuidd:*:18474:0:99999:7:::
tcpdump:*:18474:0:99999:7:::
landscape:*:18474:0:99999:7:::
pollinate:*:18474:0:99999:7:::
systemd-coredump:!!:18483::::::
dpaluszek:$6$8idosMX/U8UD2kb3$SxDlPIXxNHwY1H6ziW4WV1osagxyyw0d.YBkFFONOVO5smQAsmdd5BcVnD7lGUQiq89o56JK6DUp7/r0hiZiA.:18483:0:99999:7:::
lxd:!:18484::::::
sshd:*:18485:0:99999:7:::
mmessier:$6$8idosMX/U8UD2kb3$SxDlPIXxNHwY1H6ziW4WV1osagxyyw0d.YBkFFONOVO5smQAsmdd5BcVnD7lGUQiq89o56JK6DUp7/r0hiZiA.:18493:0:99999:7:::

Let’s break down our entry for our secondary user mmessier (Note I truncated the password hash to make this easier to read):

mmessier:$6$8idosMb3..../r0hiZiA.:18493:0:99999:7:::
    1   :           2            :  3  :4:  5  :6:7:8:9

Break down this line by section (between colons, there are a total of 9 fields) and note what each represents:

  1. Username – The username on the system.
  2. Encrypted password – This is a hash of the password, prefaced with an identifier for the hashing algorithm used, delimited by dollar signs. These are the algorithm identifiers:
    • $1$ – MD5
    • $2a$ – Blowfish
    • $2y$ – Eksblowfish
    • $5$ – SHA-256
    • $6$ – SHA-512
  3. Date of Last Password Change – The date of the last password change, expressed as the number of days since Jan 1, 1970 (see the conversion example after this list).
  4. Minimum Password Age – The minimum password age is the number of days the user will have to wait before she will be allowed to change her password again.
  5. Maximum Password Age – The maximum password age is the number of days after which the user will have to change her password.
  6. Password Warning Period – The number of days before a password is going to expire (see the maximum password age above) during which the user should be warned.
  7. Password Inactivity Period – The number of days after a password has expired (see the maximum password age above) during which the password should still be accepted (and the user should update her password during the next login).
  8. Account Expiration Date – The date of expiration of the account, expressed as the number of days since Jan 1, 1970.
  9. Reserved Field – This field is reserved for future use.
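Fields 3 and 8 are day counts since Jan 1, 1970, which aren’t very human friendly. A quick way to convert one, using GNU date (which ships with Ubuntu) and the 18493 from the mmessier entry above:

date -d "1970-01-01 +18493 days" +%F
2020-08-19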

You can see why this file is owned by the root account. It contains sensitive information that should not be readily available to regular users; if those hashes were exposed, someone could attempt to crack them offline. I would also recommend not editing this file by hand unless you really know what you are doing. It is always a better idea to manipulate the contents of this file using commands like passwd and chage.

Moving on.

Let’s restart our machine using the reboot command:

__________________________________________________________________________
|dpaluszek@upskill:~ -bash v5.0==>reboot
Failed to set wall message, ignoring: Interactive authentication required.
Failed to reboot system via logind: Interactive authentication required.
Failed to open initctl fifo: Permission denied
Failed to talk to init daemon.

Denied again! Looks like we’ll have to sudo this one too. Run either of these to reboot the machine:

sudo shutdown -r now
sudo reboot

Use the uptime command to verify the machine did indeed reboot.

__________________________________________________________________________
|dpaluszek@upskill:~ -bash v5.0==>uptime
 14:04:44 up 0 min,  1 user,  load average: 0.98, 0.24, 0.08

Excellent. Reboots are healthy, after all.

The sudo command logs its actions for your review. Let’s take a look at the log file /var/log/auth.log after running a command as root.

__________________________________________________________________________
|dpaluszek@upskill:~ -bash v5.0==>sudo hostname
[sudo] password for dpaluszek: 
upskill

Cool. Let’s now cat the auth log (sudo cat /var/log/auth.log) and see what it shows:

Sep 11 14:07:19 upskill sudo: dpaluszek : TTY=pts/0 ; PWD=/home/dpaluszek ; USER=root ; COMMAND=/usr/bin/hostname

Notice the entry for when I ran hostname? This audit trail proves useful in the sysadmin setting. It is also useful to see what commands you have run in the past, in the event you break things.

Alternatively you can grep the file for just the lines you wish to see: grep sudo /var/log/auth.log.

Now that we have a grasp on how sudo works and what it can do for us let’s move on to renaming our machine.

Pointing Two Domains to the Same WordPress Site

I bought a couple of domains and decided to see if I could have both of them point to the same WordPress site. This seemed pretty easy to do but it took a few steps to get it working: I had to straighten out the site’s certificate, edit some Apache config, and add some code to a WordPress php file.

Step 1 – Update letsencrypt certificate

I use letsencrypt to secure the site via a TLS connection. The client has since been renamed certbot. We’ll be using certbot on the command line to get a new cert that is configured for multiple domains. Here’s the command I used:

sudo certbot -d danielpaluszek.com -d danielpaluszek.tech

Follow the prompts and verify the output indicates success. The new cert files should live in the letsencrypt default directory: /etc/letsencrypt/live

Step 2 – Update Apache

Now we need to configure Apache to respond to requests for my new domain via a new virtualhost configuration. We’ll copy the config file for danielpaluszek.com then configure it for danielpaluszek.tech:

cd /etc/apache2/sites-available
sudo cp danielpaluszek.com.conf danielpaluszek.tech.conf

Edit the newly created conf file, changing the ServerName directive to the new domain name. You don’t need to edit your cert file locations since we’re using one cert for both domains. Save the new file then run these commands to enable the new site then restart Apache:

sudo a2ensite danielpaluszek.tech
sudo systemctl restart apache2

Browsing to the .tech site results in success.

Step 3 – Configure WordPress for Multiple Domains

Now we need to tell WordPress to stay on the domain the web user started on. Currently if you browse to the .tech domain and click a link within the site that points back to the site itself you’ll be brought to the .com domain. Lame. We can fix this by editing the wp-config.php file. Mad props to Jeevanandam M. for this. Search for the following line in the file:

$table_prefix  = 'wp_';

After $table_prefix, add the following:

define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST']);
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST']);

You should now be able to browse around the site and remain on the top level domain you started on. Success.

Linux Upskill Challenge – Part 02

Assign a Static IP Address, Customizing Your Bash Prompt & Misc Commands

Time to complete: ~1-1.5 hours

Welcome back! In this installment of the Linux Upskill Challenge we’ll be completing a few tasks:

  • Assigning a static IP to your Ubuntu virtual machine.
  • Customizing your bash prompt.
  • Doing some Ubuntu user management.
  • Playing with some more common commands.

Static IP Addressing

By now you may have noticed that rebooting your Ubuntu vm may result in it receiving a new IP address each time. This is annoying since you will need to log into Ubuntu via your hypervisor console to find out what the IP is before you can SSH into it. So let’s set a static IP address shall we?

Like most things in Linux we’ll need to edit a configuration file to accomplish our task of setting a static IP address. We’ll edit the /etc/netplan/00-installer-config.yaml file using nano:

sudo nano /etc/netplan/00-installer-config.yaml

Using nano, edit the configuration file, entering your own network information. It is important to note that you cannot use tabs within yaml files, so use spaces for indentation:

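Here’s a minimal sketch of what the static configuration might look like. The interface name (enp0s3) and all of the addresses below are placeholders, so substitute the values from your own network:

network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: no
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]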

There are two ways to finalize this. You can either apply the new netplan configuration:

sudo netplan apply

Or you can reboot your machine:

sudo shutdown -r now

Note that reapplying netplan settings will break your SSH connection so you will need to restart your SSH session using your newly configured IP address.

Again if you find your machine inaccessible you can always log into it using the VirtualBox console so you can fix the netplan file.

Verify your network settings with any of the following commands (they all result in the same output):

ip address show
ip add show
ip a

Fun fact: Ubuntu versions prior to 17.10 didn’t use netplan and its associated yaml file but instead used the configuration file /etc/network/interfaces, so take note! Encountering older operating systems is common in the sysadmin world.

Let’s move on to tinkering with your bash prompt, shall we?

Linux Upskill Challenge – Part 01

Ethernet Management – SSH

Time to complete: ~1.5-2 hours

Welcome back! In this installment of the Linux Upskill Challenge we’ll be completing a few tasks:

  • Configuring VirtualBox to allow traffic from our VM to our local network.
  • Patching the system.
  • Installing SSH and testing access.
  • Configuring SSH for remote access in a more secure way.
  • Reviewing some basic commands that let us gain insight into our system.

VirtualBox Ethernet Settings

VirtualBox by default puts our vm onto a segregated network that it creates specifically for the vm. While our vm can access the Internet through this connection, devices on our local network cannot speak to it. The goal here is to access our Ubuntu box via SSH (secure shell protocol) from another machine, and for that we need network accessibility. To make the change in VirtualBox do the following:

  1. From the VirtualBox main screen highlight your Ubuntu vm and click “Settings”, then click the “Network” tab up top.
  2. Change the “Attached to:” dropdown from “NAT” to “Bridged Adapter”.
  3. Verify that the “Name” dropdown lists the connection your host machine is using to connect to your network. I am on a laptop, so I am using my laptop’s WiFi connection. If you are on a desktop select your Ethernet port.

Now that we have that out of the way, start up your vm and log in. Let’s check the IP address, shall we? Run this command:

ip address show

You should get the following output:
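The interface names and addresses below are just an example (output trimmed to the relevant lines); yours will differ:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.1.42/24 brd 192.168.1.255 scope global dynamic enp0s3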

Your vm IP address will follow “inet”. Note that 127.0.0.1 is the TCP/IP loopback address.

Verify that the IP of your vm is part of your local subnet and if so we are clear to move on.

Linux Upskill Challenge – Part 00

Learn the skills required to sysadmin a remote Linux server from the command line.

Time to complete: 1-1.5 hours

So I found this thing on Reddit: Linux Up Skill Challenge
It’s a 20 part series on Linux administration. It starts with the basics and evolves into slightly more complicated, and more useful, tasks. This is all done on a headless (no GUI, it’s all command line) Linux system. Pretty neat! Shall we delve into it? Let’s start learning!
I mapped this out and figured I could write a post on each section of the challenge. There are some areas I can dip into more deeply, and I added some cool little things here and there to help round things out. In this first segment I’ll explain some basics:

  • What is virtualization?
  • What is VirtualBox?
  • What are some common Linux distributions or “distros”?
  • What other options are there for spinning up a virtual machine?

Then we’ll get into some hands on stuff:

  • What is the process for using VirtualBox to create a Linux virtual machine?
  • What are the steps to installing a Linux distro?

Virtualization and Hypervisors

Before we get into playing with Linux we need to get a machine up and running. Back in the old days spinning up a machine was accomplished by downloading an ISO installation file for the operating system of your choice (Linux, Windows, etc), burning it to a CD, then throwing that CD into your optical drive and booting your computer from it. From here you would undergo the installation process for your operating system, installing it onto a hard drive in your computer.

Well, since virtualization hit the scene those days are pretty much over. So what is virtualization? What is a “virtual machine”? Simply put, virtualization is the act of creating a virtual instance of things like operating systems, networks, and even application code, as opposed to creating an actual instance. While many forms of virtualization exist, the most common form, and the one generally referred to when speaking about virtualization, is hardware virtualization. In the old days, as I mentioned, you would “actually” install an operating system on physical hardware. Nowadays with virtualization you can install an operating system (or even multiple) on a layer of abstraction on top of the hardware. This installed instance of an operating system is called a virtual machine.

The layer of abstraction between the hardware and the virtual operating system is managed by the hypervisor. The hypervisor sits between the hardware and the installed operating system(s) and is in charge of allocating resources to them. It orchestrates, so to speak, to ensure all virtual instances get the resources they require.

There are two flavors of hypervisors: Type 1 and Type 2. Type 1 hypervisors are low level and are installed directly onto hardware. One of the most common Type 1 hypervisors is VMware’s ESXi, which is used heavily in the enterprise setting. Another Type 1 hypervisor, this one Linux based and open source, is called Proxmox. A Type 2 hypervisor runs as a piece of software inside an installed operating system. Examples include VMware Workstation/Fusion and Oracle’s VirtualBox. We’ll be using VirtualBox in this demonstration considering it is available freely on many platforms. I mentioned that I am running a MacOS machine but you can follow along if you are on Windows too, using VirtualBox.

Common Linux Flavors

There are TONS of Linux distributions out there. For an idea of how many there are, and to see the history of how certain distros spun off others, check out this graphic:
Wikipedia: Linux Distribution Timeline
There are two broad categories of Linux systems: those with a pretty GUI and those that are headless, or command line only. The latter is primarily used for server related functions while the GUI enabled ones are for desktop use. Common GUI equipped flavors include:

  • Ubuntu
  • Linux Mint
  • Arch Linux
  • Zorin OS

It’s important to note that the GUI isn’t strictly tied to the operating system. You can mix and match supported GUIs with your Linux flavor. Note that MacOS is built on Darwin, a Unix-based operating system whose XNU kernel incorporates BSD components. Android phones run on a Linux kernel. Because it’s lightweight, many Internet of Things (IoT) devices run some sort of Linux derivative. This stuff is everywhere.

Common enterprise Linux/Unix distributions include:

  • Ubuntu
  • Debian
  • CentOS
  • RedHat
  • FreeBSD
  • OpenSUSE
  • Fedora

In this series I will be using a headless version of Ubuntu. It’s very commonly used, so if you’re looking for an answer to a question you’re likely to find someone online who has hit the same problem.

Someone Else’s Hypervisor?

There are other options for setting up a virtual machine aside from using a hypervisor like VirtualBox on your computer. Cloud providers have Infrastructure as a Service (IaaS) offerings where you can spin up virtual machines on a whim. This is usually done by picking from a catalog of operating systems, although you can use your own custom installation (this can get pretty advanced). Providers that offer these services include Amazon’s AWS (EC2), Google Cloud, Linode, and Digital Ocean. Being on a public provider means your virtual machine(s) can easily be given a public IP address and placed on the public Internet. You do this by configuring access rules to allow traffic to whatever service you wish to make available to the Internet. This blog is on a hosted service. So meta.

Next we’ll talk about setting up a virtual machine in VirtualBox.

Frankenstein Blog

My Host Died?

I finally set aside some time to do a little site maintenance the other day. I wanted to do a few quick things: back up the site, increase the volume size, and then back up the site again once finished. Note that I had no backups of the site (EEEK, I know, I know, shame on me). Anyways my task seemed simple, until I discovered that I couldn’t SSH into my web server. What gives? I took a look in EC2 and found the machine in a running state. Ok. I couldn’t hit the site via http. Ok. So I force restarted it via the EC2 console. Ok. The machine came back up as per Amazon but I still couldn’t get into it. Le sigh. What to do? No backups dammit! I needed to access the data on this virtual machine. Well, I figured since I was going to resize the volume anyway, maybe I could just extract the data I needed, restore it to a new instance (upgrading the operating system in the process), use a larger volume, and be done with it.

AWS CLI (Command Line Interface)

Let me introduce you to the Amazon Web Services Command Line Interface. This open source tool was developed to allow programmatic access to and administration of AWS services via the command line. It’s available for various shells like bash, zsh, and tcsh as well as PowerShell. It’s super useful, easy to set up, and easily repeatable. Plus there are things you can only do via the command line that you can’t do in the AWS GUI (so get comfortable with a terminal). My goal was to export a copy of my danielpaluszek.com virtual machine to an S3 bucket and then download it so I could extract the data I needed from it. I started by installing the CLI tool onto my MacOS laptop:
Installing the AWS CLI version 2 on macOS
Installing the pkg payload into the default location (/usr/local/aws-cli) puts the cli binary into your $PATH.
Once done I had to configure the tool to look at my AWS account:
Configuring the AWS CLI
Running the following command puts the cli tool into configuration mode where you can enter your account attributes. This allows the CLI tool to speak to your account:

dpaluszek@Dans-MacBook-Pro ~ % aws configure
AWS Access Key ID: 
AWS Secret Access Key:
Default region name: 
Default output format: 

After each of these prompts I pasted the keys from my account. I created a new user in IAM (Identity and Access Management) and used the newly generated keys. The region name is whatever region the machine(s) you wish to manipulate are in. The output format is the format in which results of commands are displayed. JSON is the default and is easily readable so I left it blank.

So let’s get to it. I started by shutting down the VM in the EC2 console. I then created a new S3 bucket in my account and named it “danblog”. I configured S3 access to be public, as required for me to download the contents of the bucket once the export landed there. I ran the following command to initiate an export of the virtual machine:

aws ec2 create-instance-export-task --description "dan_blog" --instance-id i-0893e588d5fdf55e2 --target-environment vmware --export-to-s3-task DiskImageFormat=vmdk,ContainerFormat=ova,S3Bucket=danblog

Now this should return something along these lines:

{
    "ExportTask": {
       "Description": "dan_blog",
       "ExportTaskId": "export-i-050174cb06f17ecbe",
       "ExportToS3Task": {
         "ContainerFormat": "ova",
         "DiskImageFormat": "vmdk",
         "S3Bucket": "danblog",
         "S3Key": "export-i-050174cb06f17ecbe.ova"
       },
       "InstanceExportDetails": {
         "InstanceId": "i-0893e588d5fdf55e2",
         "TargetEnvironment": "vmware"
       },
       "State": "active"
     }
}

There is a command you can use to check the status of an operation (the export-task-id can be found in the previous output):

aws ec2 describe-export-tasks --export-task-ids export-i-050174cb06f17ecbe

This returns the same output as the create-instance-export-task command. Take note of the “State” entry. It will change from “active” once the process completes.

Down the Rabbit Hole, a Dead End?

So after the export completed I hopped into my S3 console and downloaded the ova file. I used the “Import Appliance” function in VirtualBox to create a VM from the ova file. I zipped through the prompts and booted it. I was met with a black screen. Le sigh. I had a rogue Ubuntu VM with MySQL that I had set up a while back for a class I was taking, and that I could potentially use. I could mount the OVA file’s vmdk on this box and poke around. So I added the volume to the VM and booted Ubuntu. I checked the location of the disk and mounted it to a folder I created in my home folder. The file system seemed to be intact. I was able to access the web directory with no issue. The real question was whether or not I could get into the database. WordPress stores just about all content in a MySQL database, including the content of posts. That’s essentially, well, the site, so without that I would lose everything. It’s not a ton of content but still.

I wasn’t too worried though: the database was in my possession and I knew the credentials, so it was a matter of just getting in there. So I did the logical thing of changing the datadir directive in the /etc/mysql/mysql.conf.d/mysql.cnf file to point to the mysql directory on the “external” drive, stopped mysql, then started it. Now from here on things get a bit hazy. I saw a myriad of issues including but not limited to: mysql not restarting successfully due to permissions issues, authentication attempts in mysql failing with creds that I know work (taken from the WordPress config file) due to an authorization issue with the wordpressuser account, root account reset attempts failing, amongst other things. I effectively Swiss Cheesed that drive trying to get it to work.

On a whim I decided to start over with a fresh copy of the ova downloaded from S3 and imported it into VirtualBox. While doing so I noticed that the Guest OS Type setting was set to Windows Server 2003 32 bit. I clicked through the options and set it to Ubuntu 64bit. Why would this make a difference? I thought selecting this was just a means of picking the VM icon in the VirtualBox interface. Does it change anything boot related? There’s no such choice required in any other hypervisor I’ve played with. I did some research and I couldn’t find anything relating to that setting. The setting appears changeable in the “General” section in the “Basic” tab under a configured vm’s settings. When set to Ubuntu 64bit the machine booted, although it took about 15 minutes as it appeared hung on a random number generator function. I’ve read that processes such as these require entropy input into the system but I am unsure if me banging on the keyboard did anything to speed that up. Regardless the machine was booted and at the login prompt.

Now that I had the machine booted, let’s see if I could log into it. This AWS machine didn’t have any creds that I knew about (perhaps I failed to record them?). By default the user AWS uses is “ubuntu” but my attempts at logging in failed. I am pretty sure I didn’t set up a password for any account on here. I did set up an ssh key via AWS for it, which I of course had as that was what I used to access the machine normally. So I grabbed the MAC address from VirtualBox and used it to find the machine’s IP via its DHCP lease in my firewall (I know, I was surprised too). I was then able to SSH into the box using that IP. From here I was able to access everything normally. I zipped up the site’s root directory using tar. I then exported the MySQL WordPress database as a .sql file. I scp’d them to my local machine where I promptly backed them up. Saved!
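For the curious, that backup sequence boils down to something like the following. The paths, database name, and the VM’s IP are illustrative, not the exact ones I used:

# on the recovered VM: archive the web root and dump the WordPress database
tar -czvf site-backup.tar.gz /var/www/html
mysqldump -u root -p wordpress > wordpress-backup.sql

# from the laptop: pull both files down over SSH (assuming they sit in the remote home directory)
scp ubuntu@192.168.1.60:site-backup.tar.gz .
scp ubuntu@192.168.1.60:wordpress-backup.sql .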

It’s Alive!

I switched to Digital Ocean for this machine. I wanted to try out their service based on the recommendation of a good tech friend by the name of Matthew Fox (you can peep his blog here: http://www.100781.org/). So I spun up an Ubuntu 20 machine, installed all the dependencies WordPress requires, and scp’d the tarball and sql export files to the machine. I unpacked the web directory back into place and fixed permissions by chowning everything to the www-data account and group. Next I set up a MySQL database for WordPress and imported the sql file. I had to create the WordPress user account referenced in the WordPress config file in MySQL and assign it permissions. I had to do some apache virtualhost configuring in order to get apache serving the pages out of the right directory. I installed a letsencrypt cert using certbot, pointed apache to the correct certificate files, and ensured apache was serving over port 443 (and that a http–>https redirect was configured). The last issue I had to fix was a database connection issue when loading the site. I tracked this down to a difference between the database name and the database referenced in the WordPress php config file. It’s case sensitive so I changed it and the site loaded. Back in business.
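The database portion of that restore looked roughly like this. The database name, user, and password are placeholders; they need to match whatever is in wp-config.php:

# create the database and the WordPress user, then grant access
# (sudo mysql relies on Ubuntu's default socket auth for the MySQL root user)
sudo mysql -e "CREATE DATABASE wordpress;"
sudo mysql -e "CREATE USER 'wordpressuser'@'localhost' IDENTIFIED BY 'changeme';"
sudo mysql -e "GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpressuser'@'localhost'; FLUSH PRIVILEGES;"

# import the dump taken from the old server
sudo mysql wordpress < wordpress-backup.sql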

Site is up. Backups secured.

Dual Boot Kali/Catalina on an Unsupported Mac

Two Birds With One Stone

So I have this old secondhand MacBook Pro 7,1 that won’t run MacOS later than 10.13.x. I have some uses for the Mac side, namely in the name of mobility, but would also love to get Kali Linux installed so I can play around with it. I saw a comment on Reddit referencing a tool named “Patcher” that would allow for newer versions of MacOS on older Mac hardware. I immediately went down the rabbit hole and here we are.

Outlined below are the steps I took to get Catalina installed, a custom boot manager installed, and Kali Linux running in a dual boot setup. There are 3 main segments to this:

  1. Create a bootable Catalina USB installer using Patcher, then install Catalina on my unsupported machine.
  2. Install a custom boot loader onto my machine.
  3. Copy a Kali Linux iso to a USB drive, then install it onto my machine.

Patcher – Run Newer MacOS on Unsupported Hardware

Prerequisites – Here’s all you need to get started with Patcher:

Getting this going is super easy as the Patcher app does just about everything for you. USB installer creation can be performed on any machine. Once Patcher was downloaded I performed the following steps:

  1. Open the Patcher DMG and run macOS Catalina Patcher.app.
  2. Click Continue until you are prompted to either browse for a copy of Apple’s Catalina installer or download a fresh copy. It is important to note that at the time of this writing 10.15.4 has been released but it isn’t working with Patcher on most machines. Therefore you should use Patcher’s “Download a Copy” feature to grab a copy of 10.15.3, which works.
  3. Insert your USB.
  4. Once Catalina is downloaded click the orange external drive icon to “Create a Bootable Installer”.
  5. Select your USB drive, then click start.
  6. Enter your password when prompted to begin.
  7. Once done your USB drive is ready to rock.
  8. Plug the USB into your unsupported Mac, hold option to bring up the boot picker, and select the Patcher USB.
  9. Once booted you’ll be presented with a list of options. Highlight Disk Utility and then click continue.
  10. We need to leave some space for Kali. Above the left pane click View–>Show All Devices.
  11. Highlight the internal HDD and then click the partition button.
  12. Create 2 partitions, an APFS partition for our Catalina install and another for Kali. It is up to you to decide how large you wish these to be. I set the Kali partition to be HFS but this shouldn’t matter since it will be reformatted when we install Kali.
  13. Close Disk Utility, highlight reinstall macOS, click Continue, agree to terms, select your newly created APFS volume, then click install.
  14. Once done reboot back into the Patcher USB.
  15. Click macOS Post Install
  16. Leave checkboxes alone and click to install patches.
  17. Allow system to reboot, then boot back into the Patcher USB.
  18. While we’re at it, let’s disable SIP. We’ll need to do this in order to install Kali. Click Utilities on the upper toolbar then disable SIP with the following command:
    csrutil disable
  19. Reboot and setup Catalina for your use.

Installing rEFInd – Custom Bootloader

Prerequisites – Here’s all you need to get rEFInd onto your machine:

Despite involving some terminal commands this is really quite simple. We’ll mount the EFI volume on the Mac then throw the rEFInd files onto it.

  1. Verify your architecture with the following command:
    ioreg -l -p IODeviceTree | grep firmware-abi
  2. You should get something similar to the following:
    | |   "firmware-abi" = <"EFI64">
    This indicates 64bit.
  3. Run the following command to verify where the EFI partition lives:
    diskutil list
  4. Take note of the EFI partition’s identifier in the output.


  5. Run the following commands to create a directory to which we can mount the EFI volume, mount it, and create a directory for rEFInd. EFI should be at disk0s1 but if it is different for you then modify the command below appropriately.
    sudo mkdir /Volumes/ESP
    sudo mount -t msdos /dev/disk0s1 /Volumes/ESP
    sudo mkdir -p /Volumes/ESP/efi/refind
  6. Download rEFInd and unzip it.
  7. Copy the contents of the refind subdirectory to the refind directory we created in the above command.
  8. More than likely you are running 64bit EFI. Delete the following to avoid conflicts:
    refind_ia32.efi
    refind_aa64.efi
  9. Delete the following drivers, also to avoid both conflicts and slow boot times:
    drivers_aa64
    drivers_ia32
  10. Rename refind.conf-sample to refind.conf.
  11. Now we must bless all things holy in order to boot to our new loader:
    sudo bless --mount /Volumes/ESP --setBoot --file /Volumes/ESP/EFI/refind/refind_x64.efi --shortform
  12. Reboot the machine and you should be presented with the rEFInd boot menu listing your bootable volumes.

Kali Linux – Installation Time

Now that we have our machine prepped we can create a Kali USB installer, boot to it, and install Kali onto our Mac. This part is pretty simple too but you need to be careful when formatting an installation partition, lest you nuke your macOS install. Here’s what you need:

Here’s what I did:

  1. Open Etcher.
  2. Select the iso you downloaded, select your USB drive, click “Flash”.
  3. Plug your Kali USB into your destination Mac and boot it.
  4. Your USB should show up as:

    Boot Legacy OS from whole disk volume
  5. Once booted select “Graphical Install” and proceed with setting up machine name, username, etc.
  6. When prompted to manage disks/partitions choose “Manual”.
  7. Select the HFS volume you created earlier and delete the partition.
  8. This will now show up as free space.
  9. Go back to Guided Partitioning and select “use the largest continuous free space”.
  10. Select your desired partitioning schema. I just used the whole partition for all files, nothing fancy.
  11. Elect to write the changes to disk.
  12. Kali will now install. I am not sure if Grub is required, but I installed it on my HDD when prompted.
  13. Profit.

I imagine you could do this with any Linux flavor, not just Kali. I haven’t attempted to play around with Debian or CentOS or anything but I see no reason why it wouldn’t work for any other Linux distributions.

Credential Stuffing – Turning Your Online Accounts Into Cash

If you are hanging out on the Dark Web you may already be familiar with credential stuffing and its criminal benefits. What you may not know is how it is leveraged within a longer process in order to deliver its final product via an underground marketplace in exchange for money. Credential stuffing takes an input, in this case a database of leaked or stolen user credentials, and turns it into a list of different sites along with credentials that work on them. This list is then put on a marketplace and sold on the Dark Web. In addition, botnets are leveraged in order to stay under the radar.

Data breaches usually result in data ending up in the wrong hands. Account info is acquired, sold, and used before those exposed by the breach have a chance to change their account passwords. There is a time limit on how long malicious actors can use that data before the leak is discovered and their access is subsequently cut off. That’s a problem for a black hat. In addition you are limited to the site where the breach occurred. But what if we could extend the usefulness of that initial breach? Credential stuffing does just that.

Let’s take a database and run it through the credential stuffing assembly line. We have a database for a provider and it becomes exposed. This database is in the hands of a malicious actor. This actor makes a small investment in some automated tools. These tools allow leaked login credentials from our database to be used against a variety of platforms in search of a successful login. (This alone is a good reason not to reuse passwords across service providers.) It is here that botnets are used in order to spread out the login attempts. This circumvents safeguards put in place to protect against things like brute force attacks. Once a list of working sites and logins is aggregated they are uploaded to a marketplace where they are verified and sold. These automated marketplaces have been observed by researchers to be bustling, a testament to how effective this money-driven economy is.

The most effective way to protect yourself is to not have any online accounts at all. Since this isn’t really feasible in 2019 you are going to have to keep the following in mind:

  • Enable multi factor authentication (MFA) wherever supported
  • Use a password manager
  • Use passwords consisting of random strings of characters
  • Do not use the same password for multiple sites/services

Credential stuffing relies on compromised login credentials being reused on multiple sites. Do not reuse passwords! Using a password manager to keep track of your random and differing passwords is crucial. Enabling MFA is also an effective way to render these attacks useless.

See here for more detailed info as described by those who have been investigating credential stuffing:
https://www.recordedfuture.com/credential-stuffing-attacks/

inode You Node We All Node

I’ll admit it. I’ve been neglecting this blog. Well, more accurately, I’ve been neglecting the sysadmin portion of the site. I haven’t updated anything, much less rebooted the server that is feeding you this page, amongst other things. The only thing I’ve been doing is periodically logging in to renew my letsencrypt cert. So about 2 minutes combined over the last 7 months?

I hopped on today to check for uptime and install Ubuntu patches and lo and behold I was met with some errors. I simply ran

apt-get update

and things were looking good. 230+ items to update. Seems about right for 330 days of uptime and not patching (shh I know it’s terrible). I ran

apt-get upgrade

to install patches but that wasn’t so kind. I was met with a bunch of output indicating that I needed to install a newer kernel in order to satisfy some dependencies. Why wasn’t it installing? I wasn’t sure at first. The output told me to attempt running this command to rectify:

apt-get -f install

This too failed but I received more interesting output:

No apport report written because the error message indicates a disk full error
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)

Interesting since nothing else indicated that the filesystem was full. I did some more investigating using du -sh and df and my filesystem was a mere 67% full. What gives? I’ll tell ya: inodes.

For those who do not understand the concept of inodes let’s break it down simply using a file cabinet as an analogy:

File Cabinet – File System
Drawers – Filenames
Folders – inodes
Documents – File Data

It’s important to understand the difference between filenames and inode data. The inode data is the metadata associated with data on the filesystem. This metadata includes access and modification dates, file size, permissions, etc. A common use case that better explains why inode data is separate from the filename is the usage of hard links. Say you have a regular file. It has a filename, the associated inode metadata, and the actual data. Creating a hard link to the file is really just creating another filename that points to the same inode data.
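Here’s a quick demonstration you can run yourself. The file names are arbitrary and the inode numbers you see will differ:

touch file1                  # create an empty file
ln file1 file1link           # create a hard link pointing at the same inode
ls                           # both names show up
ls -l file1 file1link        # identical owner, permissions, size (and a link count of 2)
ls -i file1 file1link        # identical inode numbers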

So what’s going on here? I created “file1” using touch. Running the ls command gives me a listing of filenames, which returns “file1” as expected. Running ls -l returns inode data, which is why the owner, permissions, and other metadata are shown. I then created a hard link, named “file1link”, which links to “file1”. Running ls -l again shows me the inode data for each of the filenames. This inode data is the same for the two filenames. Running ls -i shows me the inode number for each of the filenames. They are the same for both filenames because they are referencing the same inode data. Make sense?

This is all cool stuff, but how does it play into the error I was seeing? Quite simple: I was out of inodes! I read that upon filesystem creation the default ratio of inodes to disk space is one inode per 16 KB, so as long as your average file size is above 16 KB you shouldn’t run out of inodes. Apparently I have a shit ton of small files eating up my inodes without taking up enough space to actually fill my filesystem. Interesting little thing going on here. I ran df -i to get some inode data and dun dun dun 100% full! No more inodes left. Here is the kind of thing df -i returns:
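The numbers below are illustrative rather than my originals, but the shape of the output is the same; IUse% is the column to watch:

Filesystem      Inodes  IUsed   IFree IUse% Mounted on
udev            249616     385  249231    1% /dev
/dev/xvda1      524288  462112   62176   89% /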

I did some more digging on the Internet and found that removing old kernels from my system should alleviate the problem I was having. The kernel I needed was queued up to install but I found that I had a dozen or so kernels still lingering around. Perhaps these extra kernels were eating up my inodes? Here’s the sequence I used to free up some inodes by forcefully deleting some extraneous kernels.

First I ran the following command to list my currently booted kernel:

uname -r

The below returns a list of all kernels on my system except the one I am booted to:

dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r)

Yea some of those can go, so let’s get to it.

linux-image-4.4.0-51-generic is the oldest on here, so let’s zap that one. Use the following command to remove the initrd.img file (this is due to Bug 1678187).

sudo update-initramfs -d -k 4.4.0-51-generic

We now need to use dpkg to finalize removal:
sudo dpkg --purge linux-image-4.4.0-51-generic linux-image-extra-4.4.0-51-generic
sudo dpkg --purge linux-headers-4.4.0-51-generic
sudo dpkg --purge linux-headers-4.4.0-51

It is possible that the first dpkg command fails due to something being dependent upon that particular kernel. If this happens dpkg will alert you and you’ll need to take some action.

That should do it. I removed 2 kernels and freed up over 60,000 inodes. I was then able to successfully update Ubuntu with no further issues.

Sync Your Bash Profile Across Machines

I use multiple computers. One thing that bothered me was that my custom tailored terminal window differed across my machines. Devs and script kiddies have likely encountered this issue. I’ve seen some solutions online by way of GitHub but these normally relied on git to sync things across computers.

I didn’t want to go this route. I felt there was a simpler solution available. I instead elected to use Dropbox. I was already using it for my 1Password Vault so why not use it to sync my bash_profile?

I initially thought I could just put my .bash_profile into Dropbox, but there’s no easy way to tell bash to look for that file somewhere else. Instead I moved my customizations into a separate file that holds my setup and aliases.

This file is dropped into my Dropbox folder and named mobile_bash.sh. I simply reference it with one line in my .bash_profile on the machines I want to sync up:
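That one line is essentially a source statement pointing at the Dropbox copy; assuming the default Dropbox location, it looks something like this:

# in ~/.bash_profile on each machine
[ -f "$HOME/Dropbox/mobile_bash.sh" ] && source "$HOME/Dropbox/mobile_bash.sh"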

The $HOME variable allows me to have differing user shortnames across my machines. Simple stuff.