Back at it again! The overnight temps were manageable and our camp stayed cozy. The fire of course died out but after getting it stoked and roaring again we had ourselves a nice cozy breakfast. We soon ventured out to seize the day, but the weather wasn’t so welcoming to that sort of idea. It was a tad cloudy with patches of fog littered about.
Our next destination? We decided on Jordan Pond. While not on the eastern coast of Mt Desert Island, Jordan Pond still offers great views. With the fog hovering over just about every body of water we passed, it made sense. The ocean would be foggy and maybe too dreary, so we'll hold out for tomorrow. We hoped that in the meantime circling Jordan Pond's three-mile trail would suffice. We lucked out. It was gorgeous. The lack of sun didn't really detract from the scenery at all; in fact, it almost enhanced it in a way. The low clouds and fog drifting over the peaks into the valleys offered some pretty views. Doing the nearby hike up one of the Bubbles was also a goal. Elevation is always cool. We set out around Jordan Pond Trail, stopping to marvel and take pics of the pond along the way.
About halfway around we encountered the trail up to South Bubble Mountain. There were two paths: the left being the steeper but quicker route, and the right being a bit easier but a tad longer. With Bradley in tow we opted for the casual route up to South Bubble. Unfortunately the view of the pond from the peak was fogged over. Oh well, no big deal. We took the same path back, and once back on Jordan Pond Trail we continued on. Along the path we noticed the fog lifting, and soon we could see across the pond again. There was a cool wooden bridge along the way. The path then turned into crossing boulders at water level, but then became a series of planks and lumber built as a walkway. It was new and you could still smell the fresh wood. Nothing like it.
Acadia Arrival, Gorham Mountain Trail, Sand Beach, Toddy Pond
What can I say about Maine? Everything in Maine is beautiful. From the trees beginning to change from their summer greens to their vibrant oranges and yellows, to the picturesque clouds floating in the sky, to even the fog rolling over hilltops, this place is the outdoorsy type's dream. Our latest destination? Acadia National Park. Situated about three hours north of Portland, Acadia is on Mount Desert Island off Maine's coast. The island clocks in at just over 100 square miles and is second in size on the eastern seaboard only to Long Island. The park is huge. There are trails everywhere, including 45 miles of carriage trails built by John D. Rockefeller, who was one of the main donors of the land for the park. There's plenty to see, and even though we spent three days there hiking, we only covered a small percentage of the park itself. We hung out by the coastline mostly in order to take in those ocean views. Stupendous! We rented a house in nearby Surry, which is only about 35 minutes away from Acadia's Sand Beach entrance. The uninsulated camp is on Toddy Pond; also spectacular views. We had a fantastic time and I wish to share with you some of the pics I took along the way. We started off on the Gorham Mountain Trail, which is a bit south of Acadia's Sand Beach entrance. We actually passed Sand Beach on the way down to Gorham. Once we got back down we headed north for a quick look at Sand Beach.
Using apt, Installing Midnight Commander, Exploring the File System
Welcome back! In this installment we’ll be working with Debian’s Advanced Package Tool. We will manipulate it using the apt command in order to manage applications. There are other command line tools that can be used to interact with the Advanced Package Tool, but we’ll be using apt today. This command allows for the easy installation, updating, listing, and removal of programs on Debian and its derived distributions. The apt command is very commonly used and referenced. We’ll be using it to install Midnight Commander.
Before we dive into apt and how its commands work we need to clear up a potential point of confusion. Beginning with Ubuntu 16.04, the apt command was introduced to promote ease of use by simplifying common commands issued via apt-get and apt-cache. This means that instead of using those commands with all of their options, merely using apt will get you what you want with less input required. It also offers friendlier output, like a progress bar when installing applications, and apt update additionally tells you how many upgradable packages you have ready to go. Running the apt list --upgradable command returns a nicely formatted list of upgradable apps. Keep this in mind when managing older systems: they may not have apt installed and you'll have to resort to using apt-get.
Note that apt, apt-get, and apt-cache are installed by default on current Ubuntu systems.
Run the ls -lR command on /etc/apt to get an idea of how its file structure is laid out.
Let’s review the different parts of apt:
The sources.list config file, located in /etc/apt, is where the source URIs for repositories are listed. This file is important enough that it has its own man page. From this man page we can see the default format for listing a source:
deb [ option1=value1 option2=value2 ] uri suite [component1] [component2] [...]
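To make that field layout concrete, here's a quick sketch that labels each part of a typical entry with awk (the sample line mirrors the Ubuntu focal defaults shown further down):

```shell
# Take a typical sources.list entry and label its fields.
# (Sample line only -- substitute a line from your own sources.list.)
line="deb http://us.archive.ubuntu.com/ubuntu focal main restricted"

echo "$line" | awk '{
    printf "type:       %s\n", $1   # deb (binary) or deb-src (source)
    printf "uri:        %s\n", $2   # repository URL
    printf "suite:      %s\n", $3   # release codename, e.g. focal
    printf "components: "
    for (i = 4; i <= NF; i++) printf "%s ", $i   # main, restricted, ...
    print ""
}'
```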
Use grep to weed out entries containing “deb” to see what sources are configured for your system.
|dpaluszek@upskillchallenge:~ -bash v5.0==>grep deb /etc/apt/sources.list
deb http://us.archive.ubuntu.com/ubuntu focal main restricted
# deb-src http://us.archive.ubuntu.com/ubuntu focal main restricted
deb http://us.archive.ubuntu.com/ubuntu focal-updates main restricted
# deb-src http://us.archive.ubuntu.com/ubuntu focal-updates main restricted
The apt edit-sources command, when run, prompts you for your text editor of choice, then opens the sources.list file in that editor. From here you can make edits. As you may have noticed, the sources.list file is owned by root, so you'll need to make use of sudo in order to save your changes.
Aside from editing the sources.list file manually, you can also use the add-apt-repository command. In addition to modifying the sources.list file, it will automatically download and register any public keys. Use it like so:
sudo add-apt-repository ppa:<repository-name>
This easily comes in first in the "most used apt command" contest. Running apt update refreshes package info from all configured sources. Run this before performing any other action to ensure you are working with the latest set of package information and thus getting the latest updates.
apt upgrade apt full-upgrade
Probably the second most used apt command. Run apt upgrade to install all available updates for packages found via the repositories configured in sources.list. Note that any upgrade requiring the removal of another package won't be installed. To handle such removals automatically, use the apt full-upgrade command.
This one works as advertised. Pass apt search a search term and it returns any package containing that keyword, including matches in the package description (useful for searching out features).
apt install apt remove apt purge
These three commands rely on the package name you found using apt search to do as their names suggest. The install subcommand will install the package, while remove will uninstall it. Note that remove leaves configuration files behind, which is handy if the program was uninstalled accidentally. To remove those artifacts as well, use the purge subcommand.
When packages are installed their dependencies are installed too. When you uninstall a package only that specific package is uninstalled, not the dependencies. Also note that sometimes package dependencies change and those dependencies are no longer needed. To remove the disused dependencies run the apt autoremove command.
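Putting update, upgrade, and autoremove together, a routine maintenance pass looks something like this (a sketch of typical usage, not output from my machine):

```shell
sudo apt update          # refresh package lists from all configured sources
sudo apt upgrade         # install available upgrades (no removals)
sudo apt full-upgrade    # as above, but allowed to remove packages if needed
sudo apt autoremove      # clean out dependencies nothing needs anymore
```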
Passing apt show the name of a package obtained from a search will give you detailed info on the package. Use this to gain more insight into the packages you are aiming to install.
This command shows a list of packages filtered by criteria. You can pass globs to the command as well as use options to list installed (--installed), upgradable (--upgradable), or all available (--all-versions) versions.
Now that we have a basic grasp of how apt works, we can go ahead and install a package. Let’s search for and install midnight commander.
|dpaluszek@upskillchallenge:~ -bash v5.0==>apt search "midnight commander"
Full Text Search... Done
avfs/focal 1.1.1-1 amd64
virtual filesystem to access archives, disk images, remote locations
junior-system/focal 1.29ubuntu1 all
Debian Jr. System tools
krusader/focal 2:2.7.2-1build1 amd64
twin-panel (commander-style) file manager
mc/focal 3:4.8.24-2ubuntu1 amd64
Midnight Commander - a powerful file manager
mc-data/focal 3:4.8.24-2ubuntu1 all
Midnight Commander - a powerful file manager -- data files
moc/focal 1:2.6.0~svn-r2994-3build1 amd64
ncurses based console audio player
pilot/focal 2.22+dfsg1-1 amd64
Simple file browser from Alpine, a text-based email client
It appears that “mc” is what we are looking for. Let’s go ahead and get some info on it:
apt show mc
Installing is just as easy:
sudo apt install mc
Type Y then hit Enter when prompted to install and let the magic happen.
Exploring the File System
Now that Midnight Commander is installed let's play around with it. Run it in the terminal by typing mc and hitting Enter.
You can use either the arrow keys or your mouse to navigate the file system. Poke around with the following:
You can open files by highlighting the file you wish to read and then clicking the “File” menu up top, then clicking “View File”, then clicking “Ok”. Take note regarding files you don’t have permission to access. What do you think you need to do in order to see root owned files using Midnight Commander?
Well this was a lesson that you’re sure to reuse. Using apt on Debian/Ubuntu/et al systems is a must if you are going to administer these machines with success. There are various other tools to manage packages on Linux systems and while they differ in details they are functionally equivalent.
Next episode we’ll use some new commands to view file content as well as do some more basic terminal navigation, playing with hidden files, and we’ll finish off with a deep dive into nano.
Working with Sudo, The /etc/shadow File, Hostname Change, TimeZone Settings
Let's start at the top: what is sudo? You may have heard of it, you may not have, but it is an integral part of being a Linux administrator. Sudo is a program that allows a user to run other programs/commands as another user, usually the root user. By default this is authenticated with the credentials of the invoking user, not the target user. So, what does this afford us? Why not just log in as the root user and do your bidding straight from the root account? Well, there are a few reasons why using sudo is beneficial:
You aren’t giving out your root password to other admins and users.
On systems where sudo is being utilized the root user itself can be disabled, heightening security.
There is an audit trail of all sudo activity in the form of logs.
You can limit users to have access to certain programs using sudo, further restraining privileges in the interest of security.
The process of running a sudo command offers a little buffer that promotes “think before you leap”.
Sudo authentication expires automatically, requiring the input of the user password again. Leaving a machine unattended (!!!) is less of a risk than if the root user was left logged in.
So how does this work? How do I use sudo? It’s simple. Just type sudo before the command you wish to run as root in terminal, enter your account password (your password, not the root password), and the command will be executed with root privileges. Easy to use? You bet.
But what’s really going on? Well, if we take a look at the sudo binary we’ll notice something a little peculiar about the permissions. Use the which command to see where sudo actually lives, then use ls -l to check permissions.
In the permissions for the sudo binary, notice the "s" in the fourth character position? This is the SetUID bit: instead of the binary running with the privileges of the user who invoked it, it runs with the privileges of the user who owns it, in this case root.
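You can see the SetUID bit in action on a throwaway file of your own, no root required:

```shell
# Demonstrate the SetUID bit on a throwaway file (no root needed).
tmpfile=$(mktemp)

chmod 0755 "$tmpfile"
before=$(ls -l "$tmpfile" | awk '{print $1}')   # plain execute bits

chmod u+s "$tmpfile"                            # set the SetUID bit
after=$(ls -l "$tmpfile" | awk '{print $1}')    # an s now sits where the owner x was

echo "before: $before"
echo "after:  $after"
```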
So what sudo does is check the sudoers file /etc/sudoers to see if the user who invoked sudo is on the party list. If so, and the credentials aren't already cached (we'll get to that in a moment), the invoking user is prompted for their password. Entering this password allows the command to execute as the root user. The password is then cached for a short window (15 minutes is the default on most modern Linux systems), meaning that within this window you can run more sudo commands without having to re-enter your password. Under the hood, sudo creates a child process, changes the target user (again, root), and then executes the command.
If you were thinking that users need to be explicitly added to the sudoers file and that they aren't on it by default, you'd be right. So how do we grant access to sudo? It's as easy as modifying the sudoers file itself by adding an entry for either a specific user or a user group. It's a no-brainer that this needs to be done as the root user! Run sudo nano /etc/sudoers and enter your password to open the sudoers file in the nano editor. (Safer still is sudo visudo, which syntax-checks your changes before saving; a broken sudoers file can lock you out of sudo entirely.)
Scroll down a bit and you’ll notice there are three sections we should be concerned with here. The “User privilege specification” section is where users are listed along with what sudo permissions they have. The two sections below that govern group membership permissions to sudo, in this case the admin and sudo groups have full access. The permissions are broken down as follows:
USER ALL=(ALL:ALL) ALL
 1    2    3   4    5
The user who is getting sudo access. Groups are prefaced with a percent sign (%).
Hosts you can run sudo commands on.
The target users you are allowed to run commands as.
The list of groups you can switch to using the -g switch.
The commands you can run.
Here’s an example of a sudoers entry that is a bit more granular:
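As a hypothetical illustration (the user, host, and command here are made up, not from my system), a more locked-down entry could look like:

```
# Let user "deploy" restart Apache on host "webserver01" as root -- and nothing else
deploy webserver01=(root) /usr/bin/systemctl restart apache2
```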
Try running cat /etc/shadow as your regular user. Oh noes. DENIED. Let's use sudo then. You can use either of these two commands, the latter being a nifty shortcut that reruns the last command with sudo prepended:
sudo cat /etc/shadow
sudo !!
If you take a look at the output you’ll notice a line for every user on the system. Most will be built in system accounts for services and whatnot but you should see an entry for your user. Mine looks similar to this:
Break this line down by field (fields are separated by colons; there are nine in total) and note what each represents:
Username – The username on the system.
Encrypted password – This is a hash of the password, prefaced with an identifier for the hashing algorithm used, delimited by dollar signs. These are the common identifiers:
$1$ – MD5
$2a$ – Blowfish
$2y$ – Eksblowfish
$5$ – SHA-256
$6$ – SHA-512
Date of Last Password Change – The date of the last password change, expressed as the number of days since Jan 1, 1970.
Minimum Password Age – The minimum password age is the number of days the user will have to wait before she will be allowed to change her password again.
Maximum Password Age – The maximum password age is the number of days after which the user will have to change her password.
Password Warning Period – The number of days before a password is going to expire (see the maximum password age above) during which the user should be warned.
Password Inactivity Period – The number of days after a password has expired (see the maximum password age above) during which the password should still be accepted (and the user should update her password during the next login).
Account Expiration Date – The date of expiration of the account, expressed as the number of days since Jan 1, 1970.
Reserved Field – This field is reserved for future use.
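To see the nine fields in action, here's a sketch that tears apart a fabricated shadow entry (the hash and dates are made up) and converts the days-since-epoch field back into a human date with GNU date:

```shell
# Split a sample /etc/shadow line into its nine fields (sample data only).
entry='dpaluszek:$6$salt$hashedpassword:18000:0:99999:7:::'

echo "$entry" | awk -F: '{
    printf "username:           %s\n", $1
    printf "password hash:      %s\n", $2
    printf "last change (days): %s\n", $3
    printf "min age:            %s\n", $4
    printf "max age:            %s\n", $5
    printf "warning period:     %s\n", $6
    printf "inactivity period:  %s\n", $7
    printf "account expiration: %s\n", $8
    printf "reserved:           %s\n", $9
}'

# Field 3 is days since Jan 1, 1970 -- GNU date can turn it back into a date:
date -u -d "1970-01-01 +18000 days" +%F
```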
You can see why this file is owned by the root account. It contains sensitive information that should not be readily available to regular users; were those hashes exposed, attempts could be made to crack them. I would also recommend not editing this file by hand unless you really know what you are doing. It is always a better idea to manipulate the contents of this file using commands like passwd and chage.
Let’s restart our machine using the reboot command:
|dpaluszek@upskill:~ -bash v5.0==>reboot
Failed to set wall message, ignoring: Interactive authentication required.
Failed to reboot system via logind: Interactive authentication required.
Failed to open initctl fifo: Permission denied
Failed to talk to init daemon.
Denied again! Looks like we’ll have to sudo this one too. Run either of these to reboot the machine:
sudo reboot
sudo shutdown -r now
Use the uptime command to verify the machine did indeed reboot.
I bought a couple of domains and decided to see if I could have both of them point to the same WordPress site. This seemed pretty easy to do, but it required a few steps to get done and working. I had to straighten out the site's certificate, I had to edit some Apache config, and I had to add some code to a WordPress php file.
Step 1 – Update letsencrypt certificate
I use Let's Encrypt to secure the site via a TLS connection. The letsencrypt binary has since been superseded by certbot. We'll be using certbot on the command line to get a new cert configured for multiple domains. Here's the command I used:
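My exact invocation isn't reproduced here, but a certbot run covering both domains generally looks like this (the --apache plugin is an assumption based on my Apache setup; adjust for yours):

```shell
sudo certbot --apache \
  -d danielpaluszek.com \
  -d danielpaluszek.tech
```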
Follow the prompts and verify the output indicates success. The new cert files should live in the letsencrypt default directory: /etc/letsencrypt/live
Step 2 – Update Apache
Now we need to configure Apache to respond to requests for my new domain via a new virtualhost configuration. We’ll copy the config file for danielpaluszek.com then configure it for danielpaluszek.tech:
sudo cp danielpaluszek.com.conf danielpaluszek.tech.conf
Edit the newly created conf file, changing the ServerName directive to the new domain name. You don’t need to edit your cert file locations since we’re using one cert for both domains. Save the new file then run these commands to enable the new site then restart Apache:
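On Debian/Ubuntu the enable-and-restart step typically boils down to something like this (a sketch; the configtest line is just a sanity check I'd recommend):

```shell
sudo a2ensite danielpaluszek.tech.conf   # symlink the vhost into sites-enabled
sudo apachectl configtest                # sanity-check the config first
sudo systemctl restart apache2           # pick up the new virtual host
```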
Now we need to tell WordPress to stay on whichever domain the web user started on. Currently, if you browse to the .tech domain and click an internal link, you'll be brought to the .com domain. Lame. We can fix this by editing the wp-config.php file. Mad props to Jeevanandam M. for this. Search for the following line in the file:
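The commonly circulated form of this fix replaces WordPress's hard-coded URLs with ones derived from the incoming request; here's a sketch (adapt to your own wp-config.php):

```php
// Follow whichever domain the visitor arrived on instead of a hard-coded one.
define('WP_HOME',    'https://' . $_SERVER['HTTP_HOST']);
define('WP_SITEURL', 'https://' . $_SERVER['HTTP_HOST']);
```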
Assign a Static IP Address, Customizing Your Bash Prompt & Misc Commands
Time to complete: ~1-1.5 hours
Welcome back! In this installment of the Linux Upskill Challenge we’ll be completing a few tasks:
Assigning a static IP to your Ubuntu virtual machine.
Customizing your bash prompt.
Doing some Ubuntu user management.
Playing with some more common commands.
Static IP Addressing
By now you may have noticed that rebooting your Ubuntu vm may result in it receiving a new IP address each time. This is annoying since you will need to log into Ubuntu via your hypervisor console to find out what the IP is before you can SSH into it. So let’s set a static IP address shall we?
Like most things in Linux we’ll need to edit a configuration file to accomplish our task of setting a static IP address. We’ll edit the /etc/netplan/00-installer-config.yaml file using nano:
Using nano, edit the configuration file as shown, entering your own network information. It is important to note that you cannot use tabs within yaml files, so use spaces for indentation:
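A static-IP netplan file generally follows this shape; the interface name and addresses below are placeholders, so substitute your own:

```yaml
network:
  version: 2
  ethernets:
    enp0s3:                      # your interface name may differ
      dhcp4: false
      addresses:
        - 192.168.1.50/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]
```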
There are two ways to finalize this. You can either restart netplan:
sudo netplan apply
Or you can reboot your machine:
sudo shutdown -r now
Note that reapplying netplan settings will break your SSH connection so you will need to restart your SSH session using your newly configured IP address.
Again if you find your machine inaccessible you can always log into it using the VirtualBox console so you can fix the netplan file.
Verify your network settings with any of the following commands (they all result in the same output):
ip address show
ip add show
ip a
Fun fact: Ubuntu versions prior to 17.10 didn't use netplan and its associated yaml file, but instead used the /etc/network/interfaces configuration file, so take note! Encountering older operating systems is common in the sysadmin world.
Let’s move on to tinkering with your bash prompt, shall we?
Welcome back! In this installment of the Linux Upskill Challenge we’ll be completing a few tasks:
Configuring VirtualBox to allow traffic from our VM to our local network.
Patching the system.
Install SSH and test access.
Configure SSH for remote access in a more secure way.
Review some basic commands that let us gain insight into our system.
VirtualBox Ethernet Settings
By default, VirtualBox put our vm onto a segregated network it created specifically for this vm. While the vm can access the Internet through this connection, devices on our local network cannot reach it. The goal here is to access our Ubuntu box via SSH (Secure Shell protocol) from another machine, and for that we need network accessibility. To make the change in VirtualBox do the following:
Now that we have that out of the way, start up your vm and log in. Let's check the IP address, shall we? Run this command:
ip address show
You should get the following output:
Verify that the IP of your vm is part of your local subnet and if so we are clear to move on.
Learn the skills required to sysadmin a remote Linux server from the command line.
Time to complete: 1-1.5 hours
So I found this thing on Reddit: the Linux Upskill Challenge. It's a 20-part series on Linux administration. It starts with some basics but evolves into slightly more complicated, and useful, tasks. This is all done on a headless (no GUI, it's all command line) Linux system. Pretty neat! Shall we delve into it? Let's start learning! I mapped this out and figured I could write a post on each section of the challenge. There are some areas I can dip into more deeply, and I added some cool little things here and there to help round things out. In this first segment I'll explain some basics:
What is virtualization?
What is VirtualBox?
What are some common Linux distributions or “distros”?
What other options are there for spinning up a virtual machine?
Then we’ll get into some hands on stuff:
What is the process for using VirtualBox to create a Linux virtual machine?
What are the steps to installing a Linux distro?
Virtualization and Hypervisors
Before we get into playing with Linux we need to get a machine up and running. Back in the old days spinning up a machine was accomplished by downloading an ISO installation file for the operating system of your choice (Linux, Windows, etc), burning it to a CD, then throwing that CD into your optical drive and booting your computer from it. From here you would undergo the installation process for your operating system, installing it onto a hard drive in your computer.
Well, since virtualization hit the scene those days are pretty much over. So what is virtualization? What is a "virtual machine"? Simply put, virtualization is the act of creating a virtual instance of things like operating systems, networks, and even application code, as opposed to an actual instance. While many forms of virtualization exist, the most common form, and the one generally referred to when speaking about virtualization, is hardware virtualization. In the old days, as I mentioned, you would "actually" install an operating system on physical hardware. Nowadays, with virtualization, you can install an operating system (or even multiple) on a layer of abstraction on top of the hardware. This installed instance of an operating system is called a virtual machine. That layer of abstraction between the hardware and the virtual operating system is managed by the hypervisor. The hypervisor sits between the hardware and the installed operating system(s) and is in charge of allocating resources to them. It orchestrates, so to speak, to ensure all virtual instances get the resources they require.

There are two flavors of hypervisors: Type 1 and Type 2. Type 1 hypervisors are low level and are installed directly onto hardware. One of the most common Type 1 hypervisors is VMware's ESXi, which is used heavily in the enterprise setting. Another Type 1 hypervisor, this one Linux based and open source, is called Proxmox. Microsoft's Hyper-V also falls into the Type 1 camp. A Type 2 hypervisor runs as a piece of software within an installed operating system; examples include VMware Workstation and Oracle's VirtualBox. We'll be using VirtualBox in this demonstration considering it is freely available on many platforms. I mentioned that I am running a macOS machine, but you can follow along on Windows too, using VirtualBox.
Common Linux Flavors
There are TONS of Linux distributions out there. For an idea of how many there are, and to see the history of how certain distros spun off others, check out this graphic: Wikipedia: Linux Distribution Timeline. There are two denominations of Linux systems: those with a pretty GUI and those that are headless, or command line only. The latter is primarily used for server-related functions while the GUI-enabled ones are for desktop use. Common GUI-equipped flavors include
It's important to note that the GUI isn't tightly tied to the operating system; you can mix and match supported GUIs with your Linux flavor. Note that macOS is built on Darwin, a Unix operating system whose XNU kernel incorporates BSD components. Android phones run on a Linux kernel. Because it's lightweight, many Internet of Things (IoT) devices run some sort of Linux derivative. This stuff is everywhere.
Common enterprise Linux/Unix distributions include:
In this series I will be using a headless version of Ubuntu. It’s very commonly used. So if you’re looking for an answer to a question you’re likely going to find someone with the same problem online.
Someone Else’s Hypervisor?
There are other options for setting up a virtual machine aside from using a hypervisor like VirtualBox on your computer. Cloud providers have Infrastructure as a Service (IaaS) offerings where you can spin up virtual machines on a whim. This is usually done by picking from a catalog of operating systems, although you can use your own custom installation (this can get pretty advanced). Providers that offer these services include Amazon’s AWS (EC2), Google Cloud, Linode, and Digital Ocean. Being on a public provider means your virtual machine(s) can be easily configured to be on the public Internet using a public IP address. You can complete this by configuring access rules to allow traffic to whatever service you wish to make available to the Internet. This blog is on a hosted service. So meta.
Next we’ll talk about setting up a virtual machine in VirtualBox.
I finally set aside some time to do a little site maintenance the other day. I wanted to do a few quick things: back up the site, increase the volume size, and then back up the site again once finished. Note that I had no backups of the site. EEEK, I know, I know, shame on me. Anyways, my task seemed simple, until I discovered that I couldn't SSH into my web server. What gives? I took a look in EC2 and found the machine in a running state. Ok. I couldn't hit the site via http. Ok. So I force restarted it via the EC2 console. The machine came back up as per Amazon, but I still couldn't get into it. Le sigh. What to do? No backups, dammit! I needed to access the data on this virtual machine. Well, I figured since I was going to resize the volume anyway, maybe I could just extract the data I needed and restore it to a new instance, upgrading the operating system in the process, use a larger volume, and be done with it.
AWS CLI (Command Line Interface)
Let me introduce you to the Amazon Web Services Command Line Interface. This open source tool was developed to allow programmatic access to and administration of AWS services via the command line. It's available for various shells like bash, zsh, and tcsh, as well as PowerShell. It's super useful, easy to set up, and easily repeatable. Plus there are things you can only do via the command line that you can't do in the AWS GUI (so get comfortable with a terminal). My goal was to export a copy of my danielpaluszek.com virtual machine to an S3 bucket and then download it so I could extract the data I needed from it. I started by installing the CLI tool onto my MacOS laptop: Installing the AWS CLI version 2 on macOS
Installing the pkg payload into the default location (/usr/local/aws-cli) puts the cli binary into your $PATH. Once done I had to configure the tool to look at my AWS account: Configuring the AWS CLI
Running the following command puts the cli tool into configuration mode where you can enter your account attributes. This allows the CLI tool to speak to your account:
After each of these prompts I pasted the keys from my account. I created a new user in IAM (Identity and Access Management) and used the newly generated keys. The region name is whatever region the machine(s) you wish to manipulate are in. The output format is the format in which results of commands are displayed. JSON is the default and is easily readable, so I left it blank.
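That configuration step is the aws configure command; a typical session walks through the four prompts (the region shown is just an example):

```
$ aws configure
AWS Access Key ID [None]: <paste access key>
AWS Secret Access Key [None]: <paste secret key>
Default region name [None]: us-east-1
Default output format [None]:
```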
So let’s get to it. I started by shutting down the VM in the EC2 console. I then created a new S3 bucket in my account and named it “danblog”. I configured S3 access to be public as required for me to download the contents of the bucket once I get the export in there. I ran the following command to initiate an export of the virtual machine:
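The exact command isn't reproduced here, but the general shape of an instance export to S3 looks like this (the instance ID is a placeholder; "danblog" is the bucket from above):

```shell
aws ec2 create-instance-export-task \
  --instance-id i-0123456789abcdef0 \
  --target-environment vmware \
  --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=danblog
```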
This returns the same output as the create-instance-export-task command. Take note of the “State” entry. It will change from “active” once the process completes.
Down the Rabbit Hole, a Dead End?
So after the export completed I hopped into my S3 console and downloaded the OVA file. I used the "Import Appliance" function in VirtualBox to create a VM from the OVA file. I zipped through the prompts and booted it. I was met with a black screen. Le sigh. I had a rogue Ubuntu VM with MySQL that I had set up a while back for a class I was taking that I could potentially use. I could mount the OVA file's vmdk on this box and poke around. So I added the volume to the VM and booted Ubuntu. I checked the location of the disk and mounted it to a folder I created in my home folder. The file system seemed to be intact. I was able to access the web directory with no issue. The real question was whether or not I could get into the database. WordPress stores just about all content in a MySQL database, including the content of posts. That's essentially, well, the site, so without that I would lose everything. It's not a ton of content, but still.
I wasn't too worried though; the database was in my possession and I knew the credentials, so it was a matter of just getting in there. So I did the logical thing of changing the datadir directive in the /etc/mysql/mysql.conf.d/mysql.cnf file to point to the mysql directory on the "external" drive, stopped mysql, then started it. Now, from here on things get a bit hazy. I saw a myriad of issues including but not limited to: mysql failing to restart due to permissions issues; authentication attempts failing with creds that I know work (taken from the WordPress config file) due to an authorization issue with the wordpressuser account; and root account reset attempts failing, amongst other things. I effectively Swiss-cheesed that drive trying to get it to work. On a whim I decided to start over with a fresh copy of the OVA downloaded from S3 and imported it into VirtualBox. While doing so I noticed that the Guest OS Type setting was set to Windows Server 2003 32-bit. I clicked through the options and set it to Ubuntu 64-bit. Why would this make a difference? I thought selecting this was just a means of marking the VM graphic on the VirtualBox interface. Does it change anything boot related? There's no such choice required in any other hypervisor I've played with, and my research turned up nothing relating to that setting. The setting is changeable in the "General" section under the "Basic" tab of a configured VM's settings. When set to Ubuntu 64-bit the machine booted, although it took about 15 minutes as it appeared hung on a random number generator function. I've read that processes such as these require entropy input into the system, but I am unsure if me banging on the keyboard did anything to speed that up. Regardless, the machine booted to the login prompt.
Now that I had the machine booted, could I log into it? This AWS machine didn’t have any creds that I knew about (perhaps I failed to record them?). By default the user AWS uses is “ubuntu”, but my attempts at logging in failed; I’m pretty sure I never set up a password for any account on the box. I did set up an SSH key via AWS, which I of course had, as that was how I accessed the machine normally. So I grabbed the MAC address from VirtualBox and used it to find the machine’s IP via its DHCP lease in my firewall (I know, I was surprised too). I was then able to SSH into the box using that IP. From there I was able to access everything normally. I zipped up the site’s root directory using tar, exported the MySQL WordPress database as a .sql file, and scp’d them to my local machine where I promptly backed them up. Saved!
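The rescue itself boils down to a handful of commands. The IP, key path, database name, and user below are examples, not the actual values from my setup:

```shell
# SSH in with the AWS key, using the IP found in the firewall's DHCP leases:
ssh -i ~/.ssh/aws-key.pem ubuntu@192.168.1.50

# On the VM: archive the web root and dump the WordPress database.
sudo tar -czf ~/site-root.tar.gz -C /var/www html
mysqldump -u wordpressuser -p wordpress > ~/wordpress.sql

# Back on the local machine: pull both files down and stash them somewhere safe.
scp -i ~/.ssh/aws-key.pem 'ubuntu@192.168.1.50:~/site-root.tar.gz' .
scp -i ~/.ssh/aws-key.pem 'ubuntu@192.168.1.50:~/wordpress.sql' .
```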
I switched to Digital Ocean for this machine. I wanted to try out their service based on the recommendation of a good tech friend by the name of Matthew Fox (you can peep his blog here: http://www.100781.org/). So I spun up an Ubuntu 20 machine, installed all the dependencies WordPress requires, and scp’d the tarball and sql export files to the machine. I unpacked the web directory back into place and modified permissions by chowning the www-user account and group. Next I set up a MySQL database for WordPress and imported the sql file. In MySQL I had to create the WordPress user account referenced in the WordPress config file and assign it permissions. I also had to do some Apache virtual host configuration to get Apache serving the pages out of the right directory. I installed a Let’s Encrypt cert using certbot, pointed Apache to the correct certificate files, and ensured Apache was serving over port 443 (and that an http–>https redirect was configured). The last issue I had to fix was a database connection error when loading the site. I tracked this down to a mismatch between the database name and the one referenced in the WordPress php config file. It’s case sensitive, so I changed it and the site loaded. Back in business.
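The database restore step, sketched out. Database name, user, and password here are placeholders; the real values come from the site’s wp-config.php, and note that the database name is case sensitive:

```shell
# Recreate the database and the user referenced in wp-config.php.
mysql -u root -p <<'SQL'
CREATE DATABASE wordpress;
CREATE USER 'wordpressuser'@'localhost' IDENTIFIED BY 'password-from-wp-config';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpressuser'@'localhost';
FLUSH PRIVILEGES;
SQL

# Import the dump taken from the rescued VM.
mysql -u wordpressuser -p wordpress < wordpress.sql
```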
So I have this old secondhand MacBook Pro 7,1 that won’t run macOS later than 10.13.x. I have some uses for the Mac side, namely mobility, but would also love to get Kali Linux installed so I can play around with it. I saw a comment on Reddit referencing a tool named “Patcher” that allows newer versions of macOS on older Mac hardware. I immediately went down the rabbit hole and here we are.
Outlined below are the steps I took to get Catalina installed, a custom boot manager installed, and Kali Linux running in a dual boot setup. There are three main segments to this:
Create a bootable Catalina USB installer using Patcher, then install Catalina on my unsupported machine.
Install a custom boot loader onto my machine.
Copy a Kali Linux iso to a USB drive, then install it onto my machine.
Patcher – Run Newer MacOS on Unsupported Hardware
Prerequisites – Here’s all you need to get started with Patcher:
Getting this going is super easy as the Patcher app does just about everything for you. USB installer creation can be performed on any machine. Once Patcher was downloaded I performed the following steps:
Open the Patcher DMG and run macOS Catalina Patcher.app.
Click Continue until you are prompted to either browse for a copy of Apple’s Catalina installer or download a fresh copy. It is important to note that at the time of this writing 10.15.4 has been released, but Patcher with 10.15.4 isn’t working on most machines. Therefore you should use Patcher’s “Download a Copy” feature to grab a copy of 10.15.3, which works.
Insert your USB.
Once Catalina is downloaded click the orange external drive icon to “Create a Bootable Installer”.
Select your USB drive, then click start.
Enter your password when prompted to begin.
Once done your USB drive is ready to rock.
Plug the USB into your unsupported Mac, hold option to bring up the boot picker, and select the Patcher USB.
Once booted you’ll be presented with a list of options. Highlight Disk Utility and then click continue.
We need to leave some space for Kali. Above the left pane click View–>Show All Devices.
Highlight the internal HDD and then click the partition button.
Create two partitions: an APFS partition for our Catalina install and another for Kali. It is up to you to decide how large you wish these to be. I formatted the Kali partition as HFS, but this shouldn’t matter since it will be reformatted when we install Kali.
Close Disk Utility, highlight reinstall macOS, click Continue, agree to terms, select your newly created APFS volume, then click install.
Once done reboot back into the Patcher USB.
Click macOS Post Install
Leave checkboxes alone and click to install patches.
Allow system to reboot, then boot back into the Patcher USB.
While we’re at it, let’s disable SIP. We’ll need to do this in order to install Kali. Click Utilities on the upper toolbar, then disable SIP with the following command:
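The standard command for this, run from Terminal in the recovery environment, is:

```shell
# Disable System Integrity Protection (takes effect after reboot).
csrutil disable
```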
Reboot and setup Catalina for your use.
Installing rEFInd – Custom Bootloader
Prerequisites – Here’s all you need to get rEFInd onto your machine:
Despite involving a few terminal commands, this is really quite simple: we’ll mount the EFI volume on the Mac, then throw the rEFInd files onto it.
Verify your architecture with the following command:
ioreg -l -p IODeviceTree | grep firmware-abi
You should get something similar to the following:
| | "firmware-abi" = <"EFI64">
This indicates 64bit.
Run the following command to verify where the EFI partition lives:
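The command here appears to have been lost; listing the disks with diskutil shows where the EFI partition lives:

```shell
# List all disks and partitions; the EFI partition is typically disk0s1.
diskutil list
```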
Take note of the identifier as indicated in the below screenshot:
Run the following commands to create a directory to mount the EFI volume to, mount it, and create a directory for rEFInd. EFI should be at disk0s1, but if it’s different for you, modify the commands appropriately.
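A sketch of those commands, assuming EFI is at disk0s1 and the rEFInd binary zip has been unpacked in ~/Downloads (the version number in the path is an example):

```shell
# Mount the EFI system partition.
sudo mkdir /Volumes/ESP
sudo mount -t msdos /dev/disk0s1 /Volumes/ESP

# Copy the rEFInd files into place.
sudo mkdir -p /Volumes/ESP/EFI/refind
sudo cp -r ~/Downloads/refind-bin-0.12.0/refind/* /Volumes/ESP/EFI/refind/

# Bless the rEFInd binary so the firmware boots it (64-bit, per the check above).
sudo bless --mount /Volumes/ESP --setBoot \
    --file /Volumes/ESP/EFI/refind/refind_x64.efi
```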
Reboot the machine and you should be presented with the rEFInd bootloader, which should look like this:
Kali Linux – Installation Time
Now that we have our machine prepped, we can create a Kali USB installer, boot to it, and install Kali onto our Mac. This part is pretty simple too, but you need to be careful when formatting the installation partition, lest you nuke your macOS install. Here’s what you need:
Select the iso you downloaded, select your USB drive, click “Flash”.
Plug your Kali USB into your destination Mac and boot it.
Your USB should show up as:
Boot Legacy OS from whole disk volume
Once booted select “Graphical Install” and proceed with setting up machine name, username, etc.
When prompted to manage disks/partitions choose “Manual”.
Select the HFS volume you created earlier and delete the partition.
This will now show up as free space.
Go back to Guided Partitioning and select “use the largest continuous free space”.
Select your desired partitioning schema. I just used the whole partition for all files, nothing fancy.
Elect to write the changes to disk.
Kali will now install. I’m not sure if GRUB is required, but I installed it on my HDD when prompted.
I imagine you could do this with any Linux flavor, not just Kali. I haven’t attempted to play around with Debian or CentOS, but I see no reason why it wouldn’t work for other Linux distributions.