There are numerous ways to harden a WordPress instance. Today we’ll review a quick and easy way to limit brute force attacks against your WordPress login page by cutting it off from the world. By default the login page is located at the root of your WordPress web directory and is named wp-login.php. We’ll lock this down by creating what’s called a rule in a file named .htaccess.
This file is wonderfully powerful. What’s more, you can nest these files for more granularity: any .htaccess file nested in a subdirectory takes precedence over one located in, say, the root directory. In our case the page we wish to lock down is in the WordPress root directory, so our .htaccess file will live at the root.
Rules in an .htaccess file are just a way of expressing what you want done under what conditions. In our example we want to create a rule that denies access to the wp-login.php file regardless of the originating IP address, but then allows access from one trusted address. We’ll format the rule as follows:
<Files wp-login.php>
Order Deny,Allow
Deny from all
Allow from 22.214.171.124
</Files>
The first line opens a Files tag where you specify the file you wish to enact a rule on, and the last line closes it. Note that NOT using these tags means the rule would apply to all files, recursively, within the current directory. (There are other tags, like FilesMatch and Limit, but we’ll save those for another time.) The Order Deny,Allow directive specifies the order in which the rules will be enforced: we want to Deny all IPs first, then Allow our approved IP address access. You aren’t limited to a single IP here; you can also specify a subnet in CIDR notation. To add multiple IPs and/or subnets, simply add more Allow from lines.
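For example, a block permitting the original address plus a private subnet (the subnet here is purely illustrative) would look like this:

```
<Files wp-login.php>
Order Deny,Allow
Deny from all
Allow from 22.214.171.124
Allow from 192.168.1.0/24
</Files>
```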
Save the htaccess file after modifying it and test. You should be able to access the login page when browsing from the specified Allow from IP. If you check while on another IP you’ll be met with failure, just as intended.
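You can also check from a shell with curl; the domain below is a placeholder for your own site:

```
curl -I https://example.com/wp-login.php
```

From a non-allowed IP, Apache should answer with HTTP 403 Forbidden; from the allowed IP you should see a normal 200 response.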
In today’s episode we’ll be playing with terminal pagers. What is a terminal pager you ask? A terminal pager is a command line program that simply allows you to read text files. To edit files, however, you’ll need a text editor. A pager is great if you are looking for something specific in a file or if you just want to peruse the contents of a file. You can also pipe output to a pager to more closely analyze that output. In addition pagers allow you to search for keywords in files to help speed things along. Pagers get their name because they only output one page’s worth of data at a time.
The three pagers I want to talk about are more, less, and most. more is installed by default on just about every Linux distro; less is extremely common but occasionally absent; most is the rarest of the bunch and will usually require a manual install. On Ubuntu, less and more are present out of the box, while most has to be installed by hand. Let’s review the differences, and strengths, of each of these commands.
We’ll start with more since it is the simplest of the bunch and has the fewest features. Using it is as easy as calling more at a prompt followed by the filename (or filenames, separated by spaces) you wish to examine.
dpaluszek@upskillchallenge:~$ more /etc/ssh/sshd_config
You’ll notice more has a little percentage indicator that tells you how much of the file you have read. The following keystrokes perform the following actions:
Space – advance to the next page. The number of lines advanced is equal to the number of lines available in your terminal window.
f – advance to the next page
Return – scroll down one line at a time
b – go back a page
= – display the current line number
v – start up your default text editor
:n – go to next file (when you have used more than one file name as an input)
:p – go to previous file (when you have used more than one file name as an input)
:f – display current file name and line number
. – repeat previous command
/pattern – search the file for the next occurrence of the regex pattern after the slash.
Upon arriving at the end of the file the pager will cease and you’ll be returned to your prompt. This is a very useful command to pipe long output to as you can use the space bar to read the output page by page.
The less command is more feature rich than more, and it also performs better on big files since it loads pages as needed rather than reading the whole file up front. This command is chock full of goodies. Invoke it the same way you used more.
dpaluszek@upskillchallenge:~$ less /etc/ssh/sshd_config
Here’s a sampling of what less can do:
Space – advance to the next page
Return – scroll down one line at a time
d – scroll forward one half of the screen size
y – scroll backward one line at a time
b – scroll backward one page
u – scroll backward one half of the screen size
left and right arrow – scroll horizontally one half of the screen size
control + arrow keys – scroll all the way to the left or right
R – refresh the screen, useful for looking at files that are actively being modified
g – go to beginning of file
G – go to end of file
m – followed by either an uppercase or lowercase letter marks the first displayed line so you can easily get back to it later
‘ – followed by either an uppercase or lowercase letter brings you back to the marked line (see above)
/pattern – search forward in the file for the matching regex
?pattern – search backward in the file for the matching regex
This is just a small sampling. You can do really fancy things too, like jumping to the closing bracket or parenthesis that matches an opening one. You can also pass +F when invoking less (or press F inside it) to follow the file as it is updated in real time, much like tail -f. Good for monitoring log files. See the less man page for more information on all available options.
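For example, to watch a log file as new lines arrive (the path is just an example):

```
less +F /var/log/syslog
```

Press Ctrl-C to stop following and page around normally, F to resume following, and q to quit.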
By default the most command isn’t installed on Ubuntu systems. Attempt to run it and you’ll be met with this:
Command 'most' not found, but can be installed with:
sudo apt install most
Run the aforementioned apt command (you can review apt usage in Part 4 of this series). Once installed simply invoke most using the same syntax as more and less, passing a filename as a parameter.
dpaluszek@upskillchallenge:~$ most /etc/ssh/sshd_config
As you can see most issues two useful lines. The first shows the filename currently open and the second offers some assistance. Hitting H for help nets you this useful list:
SPACE, D – Scroll down one Screen.
U, DELETE – Scroll Up one screen.
RETURN, DOWN – Move Down one line.
UP – Move Up one line.
T – Goto Top of File.
B – Goto Bottom of file.
>, TAB – Scroll Window right
< – Scroll Window left
RIGHT – Scroll Window left by 1 column
LEFT – Scroll Window right by 1 column
J, G – Goto line.
% – Goto percent.
Most also allows us to switch between open files on the fly. You can do this by first opening two files with most by passing their filenames to the command. From here you can press :n to bring up a little list of files, using the arrow keys to select which one you wish to switch to, then hitting enter to open it. This is useful in situations where you are, say, reviewing a modified config file against its default.
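For example, open two files at once, then press :n to pick between them:

```
most /etc/ssh/sshd_config /etc/hosts
```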
So far we’ve played with looking at files directly. But what about parsing output from commands? Explore the sshd_config file by using cat and piping it to more, less, and most.
dpaluszek@upskillchallenge:~$ cat /etc/ssh/sshd_config | less
dpaluszek@upskillchallenge:~$ cat /etc/ssh/sshd_config | more
dpaluszek@upskillchallenge:~$ cat /etc/ssh/sshd_config | most
As a bonus, let’s go a bit off topic and discuss some terminal tricks and know-how.
Another chilly night and another fire to get the day started! A quick trip to Toddy Pond’s edge yielded some pretty views of the yellow morning sunlight over the solid blue water. The air was crisp. The breeze was soft. It’s sad to see it go. We had to pack up as this was our last day. Stoke that fire just a bit more before a big breakfast. The camp was so cozy it was hard to depart. Perhaps a return next summer?
We took the short trip back to Acadia in hopes of a nice hike under the glistening sun. We were not disappointed! We started off at Sand Beach and hightailed it into the woods on the eastern end of the beach. There’s a little field of rocks at the beginning of Great Head Trail that, once traversed, leads upward onto the cliff. A little zigzag here and a mini scramble there and we hit the upper rim of the cliffside. The views here were astounding. The water was a mix between cerulean blues and teal greens but whitened where it hit the rocks far below. The sea ebbed and flowed into cracks and crevices, slowly eroding the 500-million-year-old exposed bedrock. After stopping numerous times to gaze we continued on. There is a little summit with cairns littered about. Eventually the trail loops around and brings you back to Sand Beach.
After Great Head we set out and followed Park Loop Road until we hit Champlain Road. We took that through Seal Harbor, then slightly north into Asticou, then a quick turn south into Northeast Harbor. Everything we passed was super cozy looking but Northeast Harbor was really quaint, cute, and welcoming. We found a good spot right on the marina next to the harbor master’s office and busted out our sandwiches. The sun was warm and it felt great. It was good to refuel and take a break before we set off on our last hike.
You can think of Northeast Harbor as a little peninsula that sticks out southward with the harbor on the east side. On the west side is Somes Sound, and that extends farther north than the harbor. We high tailed it around the sound and headed back south, past Echo Lake, and right to Flying Mountain, located on the western side of Somes Sound. The woods were pretty this late in the day and the views from atop of the sound were spectacular. There’s nothing like being at a high elevation and looking down over sun strewn hilltops and waterways. Simply beautiful. The trails extend pretty far north but we just hiked to Valley Cove and took a wide gravel path back to the lot. Valley Cove may have been the most serene view of the trip. What a great way to round everything out. Overall our excursion was a success!
Back at it again! The overnight temps were manageable and our camp stayed cozy. The fire of course died out but after getting it stoked and roaring again we had ourselves a nice cozy breakfast. We soon ventured out to seize the day, but the weather wasn’t so welcoming to that sort of idea. It was a tad cloudy with patches of fog littered about.
Our next destination? We decided on Jordan Pond. While not on the eastern coast of Mt Desert Island, Jordan Pond still offers great views. With the fog hovering over just about every body of water we passed it made sense. The ocean would be foggy and maybe too dreary so we’ll hold out for tomorrow. We hoped that in the meantime circling Jordan Pond’s three mile trail would suffice. We lucked out. It was gorgeous. The lack of sun didn’t really detract from the scenery at all. In fact it almost enhanced it in a way. The low clouds and fog drifting over the peaks into the valleys offered some pretty views. Doing the nearby hike up one of the bubbles was also a goal. Elevation is always cool. We set out around Jordan Pond Trail, stopping to marvel and take pics of the pond along the way.
About halfway around we encountered the trail up to South Bubble Mountain. There were two paths. The left being the steeper, but quicker path, and the right being a bit easier but a tad longer. With Bradley in tow we opted for the casual route up to South Bubble. Unfortunately the view of the pond from the peak was fogged over. Oh well no big deal. We took the same path back and once back to Jordan Pond Trail we continued on. Along the path we noticed the fog lifting, and soon we could see across the pond again. There was a cool wooden bridge along the way. The path then turned into crossing boulders at water level but then became a series of planks and lumber built as a walkway. It was new and you could still smell the fresh wood. Nothing like it.
Acadia Arrival, Gorham Mountain Trail, Sand Beach, Toddy Pond
What can I say about Maine? Everything in Maine is beautiful. From the trees beginning to change from their summer greens to their vibrant oranges and yellows to the picturesque clouds floating in the sky to even the fog rolling over hilltops. This place is the outdoorsy type’s dream. Our latest destination? Acadia National Park. Situated about three hours north of Portland, Acadia is on Mount Desert Island off of Maine’s coast. The island clocks in at just over 100 square miles and is second in size, on the eastern seaboard, only to Long Island. The park is huge. There are trails everywhere, including 45 miles of carriage trails built by John D. Rockefeller, one of the main donors of the land for the park. There’s plenty to see, and even though we spent three days there hiking, we only covered a small percentage of the park itself. We hung out by the coastline mostly in order to take in those ocean views. Stupendous! We rented a house in nearby Surry which is only about 35 minutes away from Acadia’s Sand Beach entrance. The uninsulated camp is on Toddy Pond; also spectacular views. We had a fantastic time and I wish to share with you some of the pics I took along the way. We started off on the Gorham Mountain Trail which is a bit south of Acadia’s Sand Beach entrance. We actually passed Sand Beach on the way down to Gorham. Once we got back down we headed north for a quick look at Sand Beach.
Using apt, Installing Midnight Commander, Exploring the File System
Welcome back! In this installment we’ll be working with Debian’s Advanced Package Tool. We will manipulate it using the apt command in order to manage applications. There are other command line tools that can be used to interact with the Advanced Package Tool, but we’ll be using apt today. This command allows for the easy installation, updating, listing, and removal of programs on Debian and its derived distributions. The apt command is very commonly used and referenced. We’ll be using it to install Midnight Commander.
Before we dive into apt and how its commands work, we need to clear up a potential point of confusion. Beginning with Ubuntu 16.04, the apt command was introduced to promote ease of use by simplifying the most common commands issued via apt-get and apt-cache. This means that instead of using those commands with all of their options, merely using apt will get you what you want with less input required. It also offers friendlier output, like a progress bar when installing applications, and apt update tells you how many upgradable packages you have ready to go. Running the apt list --upgradable command returns a nicely formatted list of upgradable apps. Keep this in mind when managing older systems: they may not have apt installed, and you’ll have to fall back to apt-get.
Note that apt, apt-get, and apt-cache are installed by default on current Ubuntu systems.
Run the ls -lR command on /etc/apt to get an idea of how its file structure is laid out.
Let’s review the different parts of apt:
The sources.list config file, located in /etc/apt, is where the source URIs for repositories are listed. This file is important enough to have its own man page, which gives the default format for listing a source:
deb [ option1=value1 option2=value2 ] uri suite [component1] [component2] [...]
Use grep to weed out entries containing “deb” to see what sources are configured for your system.
|dpaluszek@upskillchallenge:~ -bash v5.0==>grep deb /etc/apt/sources.list
deb http://us.archive.ubuntu.com/ubuntu focal main restricted
# deb-src http://us.archive.ubuntu.com/ubuntu focal main restricted
deb http://us.archive.ubuntu.com/ubuntu focal-updates main restricted
# deb-src http://us.archive.ubuntu.com/ubuntu focal-updates main restricted
The apt edit-sources command, when run, prompts you for your text editor of choice, then opens the sources.list file in that editor so you can make edits. As you may have noticed, the sources.list file is owned by root, so you’ll need to make use of sudo in order to save your changes.
Aside from editing the sources.list file manually, you could also use the add-apt-repository command. In addition to modifying your sources, it will automatically download and register any required public keys. Use it like so:
sudo add-apt-repository ppa:<repository-name>
This easily comes in first in the “most used apt command” contest. Running apt update will update all package info from all configured sources. Running this before performing any other actions is required in order to ensure you are working with the latest set of package information and thus, getting the latest updates.
apt upgrade / apt full-upgrade
Probably the second most used apt command. Run apt upgrade to install all available updates for packages from the repositories configured in sources.list. Note that any upgrade that would require removing another package won’t be performed; to allow such removals automatically, use the apt full-upgrade command instead.
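In practice the two commands are usually run back to back:

```
sudo apt update
sudo apt upgrade
```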
The apt search command works as advertised. Pass it a search term and you’ll get back every package containing that keyword, including matches in the package description (useful for searching out features).
apt install / apt remove / apt purge
These three commands rely on the package name you found using apt search and do as their names suggest: install installs the package, while remove uninstalls it. Note that remove will not delete configuration file artifacts, in case the program was uninstalled accidentally. To remove those artifacts as well, use purge.
When packages are installed their dependencies are installed too. When you uninstall a package only that specific package is uninstalled, not the dependencies. Also note that sometimes package dependencies change and those dependencies are no longer needed. To remove the disused dependencies run the apt autoremove command.
Passing apt show the name of a package obtained from a search will give you detailed info on that package. Use this to gain more insight into the packages you are aiming to install.
The apt list command shows packages filtered by criteria. You can pass it globs, as well as options to list installed (--installed), upgradeable (--upgradeable), or all available (--all-versions) versions.
Now that we have a basic grasp of how apt works, we can go ahead and install a package. Let’s search for and install midnight commander.
|dpaluszek@upskillchallenge:~ -bash v5.0==>apt search "midnight commander"
Full Text Search... Done
avfs/focal 1.1.1-1 amd64
virtual filesystem to access archives, disk images, remote locations
junior-system/focal 1.29ubuntu1 all
Debian Jr. System tools
krusader/focal 2:2.7.2-1build1 amd64
twin-panel (commander-style) file manager
mc/focal 3:4.8.24-2ubuntu1 amd64
Midnight Commander - a powerful file manager
mc-data/focal 3:4.8.24-2ubuntu1 all
Midnight Commander - a powerful file manager -- data files
moc/focal 1:2.6.0~svn-r2994-3build1 amd64
ncurses based console audio player
pilot/focal 2.22+dfsg1-1 amd64
Simple file browser from Alpine, a text-based email client
It appears that “mc” is what we are looking for. Let’s go ahead and get some info on it:
apt show mc
Installing is just as easy:
sudo apt install mc
Type Y then hit Enter when prompted to install and let the magic happen.
Exploring the File System
Now that Midnight Commander is installed, let’s play around with it. Run it in the terminal by typing mc and hitting Enter.
You can use either the arrow keys or your mouse to navigate the file system. Poke around with the following:
You can open files by highlighting the file you wish to read and then clicking the “File” menu up top, then clicking “View File”, then clicking “Ok”. Take note regarding files you don’t have permission to access. What do you think you need to do in order to see root owned files using Midnight Commander?
Well, this was a lesson you’re sure to reuse. Using apt on Debian, Ubuntu, and their derivatives is a must if you are going to administer these machines with success. There are various other tools to manage packages on Linux systems, and while they differ in details they are functionally equivalent.
Next episode we’ll use some new commands to view file content as well as do some more basic terminal navigation, playing with hidden files, and we’ll finish off with a deep dive into nano.
Working with Sudo, The /etc/shadow File, Hostname Change, TimeZone Settings
Let’s start at the top: what is sudo? You may have heard of it, you may not have, but it is an integral part of being a Linux administrator. Sudo is a program that allows a permitted user to run other programs/commands as another user, usually the root user. This is authenticated with the invoking user’s own credentials, not the target account’s. So, what does this afford us? Why not just log in as the root user and do your bidding straight from the root account? Well, there are a few reasons why using sudo is beneficial:
You aren’t giving out your root password to other admins and users.
On systems where sudo is being utilized the root user itself can be disabled, heightening security.
There is an audit trail of all sudo activity in the form of logs.
You can limit users to have access to certain programs using sudo, further restraining privileges in the interest of security.
The process of running a sudo command offers a little buffer that promotes “think before you leap”.
Sudo authentication expires automatically, requiring the input of the user password again. Leaving a machine unattended (!!!) is less of a risk than if the root user was left logged in.
So how does this work? How do I use sudo? It’s simple. Just type sudo before the command you wish to run as root in terminal, enter your account password (your password, not the root password), and the command will be executed with root privileges. Easy to use? You bet.
But what’s really going on? Well, if we take a look at the sudo binary we’ll notice something a little peculiar about the permissions. Use the which command to see where sudo actually lives, then use ls -l to check permissions.
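On a stock Ubuntu box that exchange looks roughly like the following; the file size and date shown are illustrative placeholders, not exact values:

```
$ which sudo
/usr/bin/sudo
$ ls -l /usr/bin/sudo
-rwsr-xr-x 1 root root 166056 Jan 19  2021 /usr/bin/sudo
```

That leading -rwsr-xr-x is the part to focus on.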
In the permissions for the sudo binary, notice the “s” where the owner’s execute bit would normally sit (the fourth character of the permissions string)? This is the SetUID bit: instead of the binary running as the user who invoked it, it runs as the user who owns it, in this case, root.
So what sudo does is check the sudoers file, /etc/sudoers, to see if the user who invoked sudo is on the party list. If so, and the credentials aren’t already cached (we’ll get to that in a moment), the invoking user is prompted for their password. Entering it allows the command to execute as the root user. The credentials are then cached for five minutes (the default on most Linux systems), meaning that within this window you can run more sudo commands without re-entering your password. Under the hood, sudo creates a child process in which it changes to the target user (again, usually root) and then executes the command.
If you were thinking that users need to be explicitly added to the sudoers file and that they aren’t on it by default, you’d be right. So how do we grant access to sudo? It’s as easy as modifying the sudoers file itself by adding an entry for either the specific user or for a user group. It’s a no brainer that this needs to be done as the root user! Run sudo nano /etc/sudoers and enter your password to open the sudoers file in the nano editor. (A safer habit is sudo visudo, which opens the same file but checks your changes for syntax errors before saving; a broken sudoers file can lock you out of sudo entirely.)
Scroll down a bit and you’ll notice there are three sections we should be concerned with here. The “User privilege specification” section is where users are listed along with what sudo permissions they have. The two sections below that govern group membership permissions to sudo, in this case the admin and sudo groups have full access. The permissions are broken down as follows:
USER  ALL = (ALL:ALL)  ALL
 (1)  (2)   (3) (4)    (5)
The username who is getting sudo access. Groups are prefaced with a percent sign (%).
Hosts you can run sudo commands on.
The target users you are allowed to run commands as.
The list of groups you can switch to using the -g switch.
The commands you can run.
Here’s an example of a sudoers entry that is a bit more granular:
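As a purely hypothetical illustration (the username and command list here are invented), this entry would let the user dave run exactly two commands, as root, from any host:

```
dave    ALL=(root) /usr/bin/apt update, /usr/sbin/reboot
```

Anything outside that list would be refused, which is how you scope sudo down instead of handing out blanket root access.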
Next up: the /etc/shadow file. Try viewing it as your regular user with cat /etc/shadow and… Oh noes. DENIED. Let’s use sudo then. You can use either of these two commands, the latter being a nifty shortcut that reruns the previous command under sudo:
sudo cat /etc/shadow
sudo !!
If you take a look at the output you’ll notice a line for every user on the system. Most will be built-in system accounts for services and whatnot, but you should see an entry for your user. It will look something like this (an illustrative entry with a fake hash, not output from a real system):
dave:$6$examplesalt$examplehash:18600:0:99999:7:::
Break this line down field by field (fields are separated by colons; there are nine in total) and note what each represents:
Username – The username on the system.
Encrypted password – A hash of the password, prefaced with an identifier for the hashing scheme in use, delimited by dollar signs. These are the common scheme codes:
$1$ – MD5
$2a$ – Blowfish
$2y$ – Eksblowfish
$5$ – SHA-256
$6$ – SHA-512
Date of Last Password Change – The date of the last password change, expressed as the number of days since Jan 1, 1970.
Minimum Password Age – The minimum password age is the number of days the user will have to wait before she will be allowed to change her password again.
Maximum Password Age – The maximum password age is the number of days after which the user will have to change her password.
Password Warning Period – The number of days before a password is going to expire (see the maximum password age above) during which the user should be warned.
Password Inactivity Period – The number of days after a password has expired (see the maximum password age above) during which the password should still be accepted (and the user should update her password during the next login).
Account Expiration Date – The date of expiration of the account, expressed as the number of days since Jan 1, 1970.
Reserved Field – This field is reserved for future use.
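To see these fields concretely, you can split an entry with awk. The entry below is invented for illustration (field 3, 18600, corresponds to a date in late 2020):

```shell
# Split an illustrative /etc/shadow entry (fake hash) into fields.
line='dave:$6$examplesalt$examplehash:18600:0:99999:7:::'
echo "$line" | awk -F: '{printf "user=%s last_change=%s min=%s max=%s warn=%s\n", $1, $3, $4, $5, $6}'

# Field 3 counts days since Jan 1, 1970; today's day number is:
echo $(( $(date +%s) / 86400 ))
```

Real tools like chage -l <user> present the same information in a friendlier format.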
You can see why this file is owned by the root account. It contains sensitive information that should not be readily available to regular users; were those hashes exposed, attackers could attempt to crack them. I would also recommend not editing this file by hand unless you really know what you are doing. It is always a better idea to manipulate the contents of this file using commands like passwd and chage.
Let’s restart our machine using the reboot command:
|dpaluszek@upskill:~ -bash v5.0==>reboot
Failed to set wall message, ignoring: Interactive authentication required.
Failed to reboot system via logind: Interactive authentication required.
Failed to open initctl fifo: Permission denied
Failed to talk to init daemon.
Denied again! Looks like we’ll have to sudo this one too. Run either of these to reboot the machine:
sudo reboot
sudo shutdown -r now
Use the uptime command to verify the machine did indeed reboot.
I bought a couple of domains and decided to see if I could have both of them point to the same WordPress site. This sounded pretty easy to do, but it took a few steps to get done and working. I had to straighten out the site’s certificate, edit some Apache config, and add some code to a WordPress PHP file.
Step 1 – Update letsencrypt certificate
I use Let’s Encrypt to secure the site via a TLS connection. The client binary has since been renamed to certbot. We’ll be using certbot on the command line to get a new cert that is configured for multiple domains.
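A certbot invocation along these lines requests one certificate covering both names (these are standard certbot flags, used here as a best guess at the original command):

```
sudo certbot --apache -d danielpaluszek.com -d danielpaluszek.tech
```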
Follow the prompts and verify the output indicates success. The new cert files should live in the letsencrypt default directory: /etc/letsencrypt/live
Step 2 – Update Apache
Now we need to configure Apache to respond to requests for my new domain via a new virtualhost configuration. We’ll copy the config file for danielpaluszek.com then configure it for danielpaluszek.tech:
sudo cp danielpaluszek.com.conf danielpaluszek.tech.conf
Edit the newly created conf file, changing the ServerName directive to the new domain name. You don’t need to edit your cert file locations since we’re using one cert for both domains. Save the new file then run these commands to enable the new site then restart Apache:
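On Debian/Ubuntu, assuming the conf file sits in /etc/apache2/sites-available as usual, those commands are:

```
sudo a2ensite danielpaluszek.tech.conf
sudo systemctl restart apache2
```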
Now we need to tell WordPress to stay on whichever domain the visitor arrived on. Currently, if you browse to the .tech domain and click a link within the site whose destination is the site, you’ll be brought to the .com domain. Lame. We can fix this by editing the wp-config.php file. Mad props to Jeevanandam M. for this. Search for the following line in the file:
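The usual form of this trick, sketched here rather than quoted exactly (WP_HOME and WP_SITEURL are WordPress’s real constants; the rest is an assumption), is to build the site URL in wp-config.php from the incoming request’s Host header:

```php
// In wp-config.php: build the site URL from whichever domain
// the visitor actually used, instead of a hard-coded value.
define('WP_HOME',    'https://' . $_SERVER['HTTP_HOST']);
define('WP_SITEURL', 'https://' . $_SERVER['HTTP_HOST']);
```

With that in place, internal links stay on .tech for .tech visitors and on .com for everyone else.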
Assign a Static IP Address, Customizing Your Bash Prompt & Misc Commands
Time to complete: ~1-1.5 hours
Welcome back! In this installment of the Linux Upskill Challenge we’ll be completing a few tasks:
Assigning a static IP to your Ubuntu virtual machine.
Customizing your bash prompt.
Doing some Ubuntu user management.
Playing with some more common commands.
Static IP Addressing
By now you may have noticed that rebooting your Ubuntu vm may result in it receiving a new IP address each time. This is annoying since you will need to log into Ubuntu via your hypervisor console to find out what the IP is before you can SSH into it. So let’s set a static IP address shall we?
Like most things in Linux we’ll need to edit a configuration file to accomplish our task of setting a static IP address. We’ll edit the /etc/netplan/00-installer-config.yaml file using nano:
Using nano, edit the configuration file as shown, entering your own network information. It is important to note that you cannot use tabs within YAML files, so use spaces for all indentation:
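A minimal static configuration looks something like this. The interface name (enp0s3), addresses, gateway, and DNS servers are placeholders; substitute your own:

```yaml
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: false
      addresses:
        - 192.168.1.50/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]
```

Note that newer netplan releases deprecate gateway4 in favor of a routes: entry (to: default, via: your gateway), but the form above works on the Ubuntu releases this series uses.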
There are two ways to finalize this. You can either restart netplan:
sudo netplan apply
Or you can reboot your machine:
sudo shutdown -r now
Note that reapplying netplan settings will break your SSH connection so you will need to restart your SSH session using your newly configured IP address.
Again if you find your machine inaccessible you can always log into it using the VirtualBox console so you can fix the netplan file.
Verify your network settings with any of the following commands; ip lets you abbreviate its subcommands, so these all produce the same output:
ip address show
ip addr show
ip a
Fun fact: Ubuntu versions prior to 17.10 didn’t use netplan and its YAML files; instead they used the configuration file /etc/network/interfaces, so take note! Encountering older operating systems is common in the sysadmin world.
Let’s move on to tinkering with your bash prompt, shall we?
Welcome back! In this installment of the Linux Upskill Challenge we’ll be completing a few tasks:
Configuring VirtualBox to allow traffic from our VM to our local network.
Patching the system.
Installing SSH and testing access.
Configuring SSH for remote access in a more secure way.
Reviewing some basic commands that let us gain insight into our system.
VirtualBox Ethernet Settings
VirtualBox by default puts our vm onto a segregated network created specifically for it. While the vm can access the Internet (and objects on my local LAN) through this connection, devices on the local network cannot initiate connections to the vm. The goal here is to access our Ubuntu box via SSH (the Secure Shell protocol) from another machine, and for that we need network accessibility. To make the change, open the vm’s Settings in VirtualBox, go to Network, and switch the adapter’s “Attached to” setting from NAT to Bridged Adapter.
Now that we have that out of the way, start up your vm and log in. Let’s check the IP address, shall we? Run this command:
ip address show
Look through the output and verify that the IP of your vm is part of your local subnet. If so, we are clear to move on.