Linux Upskill Challenge – Part 00

Learn the skills required to sysadmin a remote Linux server from the command line.

Time to complete: 1-1.5 hours

So I found this thing on Reddit: the Linux Upskill Challenge
It’s a 20 part series on Linux administration. It starts with some basics but evolves into slightly more complicated, and useful, tasks. This is all done on a headless (no GUI; it’s all command line) Linux system. Pretty neat! Shall we delve into it? Let’s start learning!
I mapped this out and figured I could write a post on each section of the challenge. There are some areas I can dip into more deeply, and I added some cool little things here and there to help round things out. In this first segment I’ll explain some basics:

  • What is virtualization?
  • What is VirtualBox?
  • What are some common Linux distributions or “distros”?
  • What other options are there for spinning up a virtual machine?

Then we’ll get into some hands on stuff:

  • What is the process for using VirtualBox to create a Linux virtual machine?
  • What are the steps to installing a Linux distro?

Virtualization and Hypervisors

Before we get into playing with Linux we need to get a machine up and running. Back in the old days spinning up a machine was accomplished by downloading an ISO installation file for the operating system of your choice (Linux, Windows, etc), burning it to a CD, then throwing that CD into your optical drive and booting your computer from it. From here you would undergo the installation process for your operating system, installing it onto a hard drive in your computer.

Well, since virtualization hit the scene those days are pretty much over. So what is virtualization? What is a “virtual machine”? Simply put, virtualization is the act of creating a virtual instance of things like operating systems, networks, and even application code, as opposed to an actual instance. While many forms of virtualization exist, the most common form, and the one generally meant when speaking about virtualization, is hardware virtualization. In the old days, as I mentioned, you would “actually” install an operating system on physical hardware. Nowadays with virtualization you can install an operating system (or even multiple) on a layer of abstraction on top of the hardware. Each installed instance of an operating system is called a virtual machine.

That layer of abstraction between the hardware and the operating system is managed by the hypervisor. The hypervisor sits between the hardware and the installed operating system(s) and is in charge of allocating resources to them. It orchestrates, so to speak, to ensure all virtual instances get the resources they require. There are two flavors of hypervisors: Type 1 and Type 2. Type 1 hypervisors are low level and are installed directly onto hardware. One of the most common Type 1 hypervisors is VMware’s ESXi, which is used heavily in the enterprise setting. Another Type 1 hypervisor, this one Linux based and open source, is called Proxmox. (Microsoft’s Hyper-V also falls into the Type 1 camp, despite being managed from within Windows.) A Type 2 hypervisor runs as a piece of software inside an installed operating system. Examples include Oracle’s VirtualBox and VMware Workstation. We’ll be using VirtualBox in this demonstration since it is freely available on many platforms. I mentioned that I am running a macOS machine, but you can follow along if you are on Windows too, using VirtualBox.
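Before installing VirtualBox it can be worth confirming your CPU actually exposes hardware virtualization support. Here’s a minimal sketch of a check: it assumes a Linux host exposes CPU flags in /proc/cpuinfo (vmx for Intel VT-x, svm for AMD-V), and that a Mac exposes the kern.hv_support sysctl.

```shell
# Check for hardware virtualization support, which Type 2 hypervisors
# like VirtualBox rely on for decent performance.
check_virt() {
  if [ -r /proc/cpuinfo ]; then
    # Linux: vmx = Intel VT-x, svm = AMD-V
    if grep -Eq 'vmx|svm' /proc/cpuinfo; then
      echo "hardware virtualization flags present"
    else
      echo "no hardware virtualization flags found"
    fi
  else
    # macOS: prints 1 if the CPU supports Apple's Hypervisor.framework
    sysctl -n kern.hv_support 2>/dev/null || echo "unknown platform"
  fi
}

check_virt
```

If you get nothing useful here, double-check that virtualization isn’t simply disabled in your machine’s firmware settings.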

Common Linux Flavors

There are TONS of Linux distributions out there. For an idea of how many there are, and to see the history of how certain distros spun off from others, check out this graphic:
Wikipedia: Linux Distribution Timeline
Linux systems come in two broad denominations: those with a pretty GUI and those that are headless, or command line only. The latter is primarily used for server related functions while the GUI enabled ones are for desktop use. Common GUI equipped flavors include:

  • Ubuntu
  • Linux Mint
  • Arch Linux
  • Zorin OS

It’s important to note that the GUI isn’t strictly tied to the operating system. You can mix and match supported GUIs with your Linux flavor. Note that macOS is built on a Unix operating system named Darwin, whose XNU kernel incorporates a good deal of BSD code. This stuff is everywhere.

Common enterprise Linux/Unix distributions include:

  • Ubuntu
  • Debian
  • CentOS
  • RedHat
  • FreeBSD
  • OpenSUSE
  • Fedora

In this series I will be using a headless version of Ubuntu. It’s very commonly used, so if you run into a problem you’re likely to find an answer online.

Someone Else’s Hypervisor?

There are other options for setting up a virtual machine aside from using a hypervisor like VirtualBox on your computer. Cloud providers have Infrastructure as a Service (IaaS) offerings where you can spin up virtual machines on a whim. This is usually done by picking from a catalog of operating systems, although you can use your own custom installation (this can get pretty advanced). Providers that offer these services include Amazon’s AWS (EC2), Google Cloud, Linode, and DigitalOcean. Being on a public provider means your virtual machine(s) can be easily configured to be on the public Internet using a public IP address. You do this by configuring access rules to allow traffic to whatever service you wish to make available to the Internet. This blog is on a hosted service. So meta.

Next we’ll talk about setting up a virtual machine in VirtualBox.

Frankenstein Blog

My Host Died?

I finally set aside some time to do a little site maintenance the other day. I wanted to do a few quick things: back up the site, increase the volume size, then back up the site again once finished. Note that I had no backups of the site EEEK I know I know shame on me. Anyways my task seemed simple, until I discovered that I couldn’t SSH into my web server. What gives? I took a look in EC2 and found the machine in a running state. Ok. I couldn’t hit the site via HTTP. Ok. So I force restarted it via the EC2 console. Ok. The machine came back up as per Amazon but I still couldn’t get into it. Le sigh. What to do? No backups dammit! I needed to access the data on this virtual machine. Well, since I was going to resize the volume anyway, I figured maybe I could just extract the data I needed and restore it to a new instance, upgrading the operating system in the process, use a larger volume, and be done with it.

AWS CLI (Command Line Interface)

Let me introduce you to the Amazon Web Services Command Line Interface. This open source tool was developed to allow for programmatic access to, and administration of, AWS services via the command line. It’s available for various shells like bash, zsh, and tcsh, as well as PowerShell. It’s super useful, easy to set up, and easily repeatable. Plus there are things you can only do via the command line that you can’t do in the AWS GUI (so get comfortable with a terminal). My goal was to export a copy of my virtual machine to an S3 bucket and then download it so I could extract the data I needed from it. I started by installing the CLI tool onto my macOS laptop:
Installing the pkg payload into the default location (/usr/local/aws-cli) puts the CLI binary into your $PATH. Once done I had to configure the tool to talk to my AWS account. Running the following command puts the CLI into configuration mode, where you enter your account attributes:

dpaluszek@Dans-MacBook-Pro ~ % aws configure
AWS Access Key ID: 
AWS Secret Access Key:
Default region name: 
Default output format: 

After each of these prompts I pasted the keys from my account. I created a new user in IAM (Identity and Access Management) and used the newly generated keys. The region name is whatever region the machine(s) you wish to manipulate are in. The output format is the format in which command results are displayed. JSON is the default and is easily readable, so I left it blank.
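For the curious: aws configure doesn’t do anything magical. It just writes your answers into two plain-text files under ~/.aws/, which you can also edit by hand (the key values below are obviously placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-east-1
output = json
```

Knowing this makes it easy to drop the same credentials onto another machine, or to keep multiple named profiles side by side.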

So let’s get to it. I started by shutting down the VM in the EC2 console. I then created a new S3 bucket in my account and named it “danblog”. I configured S3 access to be public as required for me to download the contents of the bucket once I get the export in there. I ran the following command to initiate an export of the virtual machine:

aws ec2 create-instance-export-task --description "dan_blog" --instance-id i-0893e588d5fdf55e2 --target-environment vmware --export-to-s3-task DiskImageFormat=vmdk,ContainerFormat=ova,S3Bucket=danblog

Now this should return something along these lines:

    {
        "ExportTask": {
            "Description": "dan_blog",
            "ExportTaskId": "export-i-050174cb06f17ecbe",
            "ExportToS3Task": {
                "ContainerFormat": "ova",
                "DiskImageFormat": "vmdk",
                "S3Bucket": "danblog",
                "S3Key": "export-i-050174cb06f17ecbe.ova"
            },
            "InstanceExportDetails": {
                "InstanceId": "i-0893e588d5fdf55e2",
                "TargetEnvironment": "vmware"
            },
            "State": "active"
        }
    }

There is a command you can use to check the status of an operation (the export-task-id can be found in the previous output):

aws ec2 describe-export-tasks --export-task-ids export-i-050174cb06f17ecbe

This returns the same output as the create-instance-export-task command. Take note of the “State” entry. It will change from “active” once the process completes.
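If you don’t feel like re-running that command by hand, the status check wraps up nicely in a little polling loop. This is just a sketch: the --query expression assumes the JSON shape shown above, and the task id is whatever create-instance-export-task returned for you.

```shell
# Pull just the State field out of the describe-export-tasks output.
export_state() {
  aws ec2 describe-export-tasks \
    --export-task-ids "$1" \
    --query 'ExportTasks[0].State' \
    --output text
}

# Poll until the task leaves the "active" state, then report.
wait_for_export() {
  local task_id=$1 interval=${2:-30}
  while [ "$(export_state "$task_id")" = "active" ]; do
    sleep "$interval"
  done
  echo "Export $task_id finished with state: $(export_state "$task_id")"
}

# Usage (requires configured AWS credentials):
#   wait_for_export export-i-050174cb06f17ecbe
```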

Down the Rabbit Hole, a Dead End?

So after the export completed I hopped into my S3 console and downloaded the OVA file. I used the “Import Appliance” function in VirtualBox to create a VM from the OVA file. I zipped through the prompts and booted it. I was met with a black screen. Le sigh. I had a rogue Ubuntu VM with MySQL that I had set up a while back for a class I was taking, which I could potentially use. I could mount the OVA file’s vmdk on this box and poke around. So I added the volume to the VM and booted Ubuntu. I checked the location of the disk and mounted it to a folder I created in my home folder. The file system seemed to be intact. I was able to access the web directory with no issue. The real question was whether or not I could get into the database. WordPress stores just about all content in a MySQL database, including the content of posts. That’s essentially, well, the site, so without that I would lose everything. It’s not a ton of content but still.

I wasn’t too worried though. The database was in my possession and I knew the credentials, so it was a matter of just getting in there. So I did the logical thing of changing the datadir directive in the /etc/mysql/mysql.conf.d/mysqld.cnf file to point to the mysql directory on the “external” drive, stopped mysql, then started it. Now from here on things get a bit hazy. I saw a myriad of issues including but not limited to: mysql not successfully restarting due to permissions issues, credentials that I know work (taken from the WordPress config file) failing with an authorization issue on the wordpressuser account, and root account reset attempts failing, amongst other things. I effectively Swiss Cheesed that drive up trying to get it to work.

On a whim I decided to start over with a fresh copy of the OVA downloaded from S3 and imported it into VirtualBox. While doing so I noticed that the Guest OS Type setting was set to Windows Server 2003 32-bit. I clicked through the options and set it to Ubuntu (64-bit). Why would this make a difference? I thought selecting this was just a means of setting the VM’s icon in the VirtualBox interface. Does it change anything boot related? There’s no such choice required in any other hypervisor I’ve played with. I did some research and couldn’t find anything relating to that setting. The setting is changeable in the “General” section, in the “Basic” tab, under a configured VM’s settings. When set to Ubuntu (64-bit) the machine booted, although it took about 15 minutes as it appeared hung on a random number generator function. I’ve read that processes such as these require entropy input into the system, but I am unsure if me banging on the keyboard did anything to speed that up. Regardless, the machine was booted and at the login prompt.

Now that I had the machine booted it was time to see if I could log into it. This AWS machine didn’t have any creds that I knew about (perhaps I failed to record them?). By default the user AWS uses is “ubuntu” but my attempts at logging in failed. I am pretty sure I didn’t set up a password for any account on here. I did set up an SSH key via AWS for it, which I of course had, as that was what I used to access the machine normally. So I grabbed the MAC address from VirtualBox and used it to find the machine’s IP via its DHCP lease in my firewall (I know, I was surprised too). I was then able to SSH into the box using that IP. From here I was able to access everything normally. I zipped up the site’s root directory using tar. I then exported the MySQL WordPress database as a .sql file. I scp’d them to my local machine where I promptly backed them up. Saved!
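The grab-everything step boils down to a couple of commands. Here’s a rough sketch of it as a function; the web root path, database name, and output locations are hypothetical defaults, so adjust to taste:

```shell
# Bundle up the web root and dump the WordPress database. Assumes you
# can run mysqldump as root (it will prompt for the password).
backup_site() {
  local webroot=${1:-/var/www/html} dbname=${2:-wordpress}
  tar czf "$HOME/site-backup.tar.gz" "$webroot"
  mysqldump -u root -p "$dbname" > "$HOME/site-backup.sql"
}

# Then, from your local machine, pull both files down:
#   scp ubuntu@vm-ip:site-backup.tar.gz ubuntu@vm-ip:site-backup.sql ~/backups/
```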

It’s Alive!

I switched to Digital Ocean for this machine. I wanted to try out their service based on the recommendation of a good tech friend by the name of Matthew Fox (you can peep his blog here). So I spun up an Ubuntu 20 machine, installed all the dependencies WordPress requires, and scp’d the tarball and sql export files to the machine. I unpacked the web directory back into place and fixed permissions by chowning everything to the www-data user and group. Next I set up a MySQL database for WordPress and imported the sql file. I had to create the WordPress user account referenced in the WordPress config file in MySQL and assign it permissions. I had to do some apache virtualhost configuring in order to get apache serving the pages out of the right directory. I installed a Let’s Encrypt cert using certbot, pointed apache to the correct certificate files, and ensured apache was serving over port 443 (and that an HTTP to HTTPS redirect was configured). The last issue I had to fix was a database connection issue when loading the site. I tracked this down to a difference between the database name and the database referenced in the WordPress php config file. It’s case sensitive, so I changed it and the site loaded. Back in business.
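The restore side is roughly the mirror image of the backup. A hedged sketch, with placeholder names (wpdb, wpuser, change-me) standing in for whatever your WordPress config actually references:

```shell
# Unpack the site, hand it to the web server user, and load the dump.
restore_site() {
  local webroot=${1:-/var/www/html} dbname=${2:-wpdb}
  tar xzf "$HOME/site-backup.tar.gz" -C "$webroot"
  chown -R www-data:www-data "$webroot"
  mysql -u root -p "$dbname" < "$HOME/site-backup.sql"
}

# The database and its user have to exist first, e.g. in a mysql shell:
#   CREATE DATABASE wpdb;
#   CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'change-me';
#   GRANT ALL PRIVILEGES ON wpdb.* TO 'wpuser'@'localhost';
#   FLUSH PRIVILEGES;
```

Remember the database name here has to match the WordPress config file exactly, case and all, or you’ll hit the same connection error I did.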

Site is up. Backups secured.

Dual Boot Kali/Catalina on an Unsupported Mac

Two Birds With One Stone

So I have this old secondhand MacBook Pro 7,1 that won’t run macOS later than 10.13.x. I have some uses for the Mac side, namely in the name of mobility, but would also love to get Kali Linux installed so I can play around with it. I saw a comment on Reddit referencing a tool named “Patcher” that allows for newer versions of macOS on older Mac hardware. I immediately went down the rabbit hole and here we are.

Outlined below are the steps I took to get Catalina installed, a custom boot manager installed, and Kali Linux running in a dual boot setup. There are 3 main segments to this:

  1. Create a bootable Catalina USB installer using Patcher, then install Catalina on my unsupported machine.
  2. Install a custom boot loader onto my machine.
  3. Copy a Kali Linux iso to a USB drive, then install it onto my machine.

Patcher – Run Newer MacOS on Unsupported Hardware

Prerequisites – Here’s all you need to get started with Patcher:

Getting this going is super easy as the Patcher app does just about everything for you. USB installer creation can be performed on any machine. Once Patcher was downloaded I performed the following steps:

  1. Open the Patcher DMG and run the macOS Catalina Patcher app.
  2. Click Continue until you are prompted to either browse for a copy of Apple’s Catalina installer or download a fresh copy. It is important to note that at the time of this writing 10.15.4 has been released, but the combination of Patcher and 10.15.4 isn’t working on most machines. Therefore you should use Patcher’s “Download a Copy” feature to grab a copy of 10.15.3, which works.
  3. Insert your USB.
  4. Once Catalina is downloaded click the orange external drive icon to “Create a Bootable Installer”.
  5. Select your USB drive, then click start.
  6. Enter your password when prompted to begin.
  7. Once done your USB drive is ready to rock.
  8. Plug the USB into your unsupported Mac, hold option to bring up the boot picker, and select the Patcher USB.
  9. Once booted you’ll be presented with a list of options. Highlight Disk Utility and then click continue.
  10. We need to leave some space for Kali. Above the left pane click View > Show All Devices.
  11. Highlight the internal HDD and then click the partition button.
  12. Create 2 partitions, an APFS partition for our Catalina install and another for Kali. It is up to you to decide how large you wish these to be. I set the Kali partition to be HFS but this shouldn’t matter since it will be reformatted when we install Kali.
  13. Close Disk Utility, highlight reinstall macOS, click Continue, agree to terms, select your newly created APFS volume, then click install.
  14. Once done reboot back into the Patcher USB.
  15. Click macOS Post Install
  16. Leave checkboxes alone and click to install patches.
  17. Allow system to reboot, then boot back into the Patcher USB.
  18. While we’re at it, let’s disable SIP. We’ll need to do this in order to install Kali. Click Utilities on the upper toolbar then disable SIP with the following command:
    csrutil disable
  19. Reboot and setup Catalina for your use.

Installing rEFInd – Custom Bootloader

Prerequisites – Here’s all you need to get rEFInd onto your machine:

Despite involving some terminal commands this is really quite simple: we’ll mount the EFI volume on the Mac, then throw the rEFInd files onto it.

  1. Verify your architecture with the following command:
    ioreg -l -p IODeviceTree | grep firmware-abi
  2. You should get something similar to the following:
    | |   "firmware-abi" = <"EFI64">
    This indicates 64bit.
  3. Run the following command to verify where the EFI partition lives:
    diskutil list
  4. Take note of the identifier as indicated in the below screenshot:

  5. Run the following commands to create a directory to mount the EFI volume to, mount it, and create a directory for rEFInd. EFI should be at disk0s1, but if it is different for you then modify the command below appropriately.
    sudo mkdir /Volumes/ESP
    sudo mount -t msdos /dev/disk0s1 /Volumes/ESP
    sudo mkdir -p /Volumes/ESP/efi/refind
  6. Download rEFInd and unzip it.
  7. Copy the contents of the refind subdirectory to the refind directory we created in the above command.
  8. More than likely you are running 64bit EFI. Delete the following to avoid conflicts:
  9. Delete the following drivers, also to avoid both conflicts and slow boot times:
  10. Rename refind.conf-sample to refind.conf.
  11. Now we must bless all things holy in order to boot to our new loader:
    sudo bless --mount /Volumes/ESP --setBoot --file /Volumes/ESP/EFI/refind/refind_x64.efi --shortform
  12. Reboot the machine and you should be presented with the rEFInd bootloader, which should look like this:

Kali Linux – Installation Time

Now that we have our machine prepped we can create a Kali USB installer, boot to it, and install Kali onto our Mac. This part is pretty simple too but you need to be careful when formatting an installation partition, lest you nuke your macOS install. Here’s what you need:

Here’s what I did:

  1. Open Etcher.
  2. Select the iso you downloaded, select your USB drive, click “Flash”.
  3. Plug your Kali USB into your destination Mac and boot it.
  4. Your USB should show up as:

    Boot Legacy OS from whole disk volume
  5. Once booted select “Graphical Install” and proceed with setting up machine name, username, etc.
  6. When prompted to manage disks/partitions choose “Manual”.
  7. Select the HFS volume you created earlier and delete the partition.
  8. This will now show up as free space.
  9. Go back to Guided Partitioning and select “use the largest continuous free space”.
  10. Select your desired partitioning schema. I just used the whole partition for all files, nothing fancy.
  11. Elect to write the changes to disk.
  12. Kali will now install. I am not sure if GRUB is required, but I installed it on my HDD when prompted.
  13. Profit.

I imagine you could do this with any Linux flavor, not just Kali. I haven’t attempted to play around with Debian or CentOS or anything, but I see no reason why it wouldn’t work for other Linux distributions.

Credential Stuffing – Turning Your Online Accounts Into Cash

If you are hanging out on the Dark Web you may already be familiar with credential stuffing and its criminal benefits. What you may not know is how it is leveraged within a longer process to deliver its final product via an underground marketplace in exchange for money. Credential stuffing is the taking of an input, in this case a database of leaked or stolen user credentials, and turning it into a list of different sites with credentials that work. This list is then put on a marketplace and sold on the Dark Web. In addition, botnets are leveraged in order to stay under the radar.

Data breaches usually result in data ending up in the wrong hands. Account info is acquired, sold, and used before those exposed by the breach have a chance to change their account passwords. There is a time limit on how long malicious actors can use that data before the leak is discovered and access is subsequently cut off. That’s a problem for a black hat. In addition, you are limited to the site where the breach occurred. But what if we could extend the usefulness of that initial breach? Credential stuffing does just that.

Let’s take a database and run it through the credential stuffing assembly line. We have a database for a provider and it becomes exposed. This database is in the hands of a malicious actor. This actor makes a small investment in some automated tools. These tools allow leaked login credentials from our database to be used against a variety of platforms in search of a successful login. (This alone is a good reason not to reuse passwords across service providers.) It is here botnets are used in order to spread out the login attempts. This circumvents safeguards put in place to protect against things like brute force attacks. Once a list of working sites and logins is aggregated they are uploaded to a marketplace where they are verified and sold. These automated marketplaces have been observed by researchers to be bustling, a testament to how effective this money driven economy is.

The most effective way to protect yourself is to not have any online accounts at all. Since this isn’t really feasible in 2019 you are going to have to keep the following in mind:

  • Enable multi factor authentication (MFA) wherever supported
  • Use a password manager
  • Use passwords consisting of random strings of characters
  • Do not use the same password for multiple sites/services

Credential stuffing relies on compromised login credentials being used on multiple sites. Do not reuse passwords! Using a password manager to keep track of your random and differing passwords is crucial. Enabling MFA is also an effective way to render these attacks trivial.

See here for more detailed info as described by those who have been investigating credential stuffing:

inode You Node We All Node

I’ll admit it. I’ve been neglecting this blog. Well, more accurately, I’ve been neglecting the sysadmin portion of the site. I haven’t updated anything, much less rebooted the server that is feeding you this page, amongst other things. The only thing I’ve been doing is periodically logging in to renew my Let’s Encrypt cert. So about 2 minutes combined over the last 7 months?

I hopped on today to check for uptime and install Ubuntu patches and lo and behold I was met with some errors. I simply ran

apt-get update

and things were looking good. 230+ items to update. Seems about right for 330 days of uptime and not patching (shh I know it’s terrible). I ran

apt-get upgrade

to install patches but that wasn’t so kind. I was met with a bunch of output indicating that I needed to install a newer kernel in order to satisfy some dependencies. Why wasn’t it installing? I wasn’t sure at first. The output told me to attempt running this command to rectify:

apt-get -f install

This too failed but I received more interesting output:

No apport report written because the error message indicates a disk full error
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)

Interesting since nothing else indicated that the filesystem was full. I did some more investigating using du -sh and df and my filesystem was a mere 67% full. What gives? I’ll tell ya: inodes.

For those who do not understand the concept of inodes let’s break it down simply using a file cabinet as an analogy:

File Cabinet – File System
Drawers – Filenames
Folders – inodes
Documents – File Data

It’s important to distinguish between filenames and inode data. The inode data is the metadata associated with data on the filesystem. This metadata includes access and modification dates, file size, permissions, etc. A common use case that better explains why inode data is separate from the filename is the usage of hard links. Say you have a regular file. It has a filename, the associated inode metadata, and the actual data. Creating a hard link to the file is really just creating another filename that points to the same inode data. Check out the below screenshot:

So what’s going on here? I created “file1” using touch. Running the ls command gives me a listing of filenames, which returns “file1” as expected. Running ls -l returns inode data, hence why the owner, permissions, etc data is shown. I then created a hard link, named “file1link” which links to “file1”. Running ls -l again shows me the inode data for each of the filenames. This inode data is the same for the two filenames. Running ls -i shows me the inode number for each of the file names. They are the same for both filenames because they are referencing the same inode data. Make sense?
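If you want to poke at this yourself, the whole demo is three commands in a scratch directory:

```shell
# Two filenames pointing at one inode.
cd "$(mktemp -d)"
touch file1
ln file1 file1link     # hard link: a second filename for the same inode
ls -i file1 file1link  # both names print the identical inode number
```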

This is all cool stuff, but how does this play into the error I was seeing? Quite simple: I was out of inodes! I read that upon filesystem creation the default ratio is one inode per 16KB of disk space, so as long as your average file size is above 16KB you shouldn’t run out of inodes. Apparently I have a shit ton of small files eating up my inodes without taking up enough space to actually fill my filesystem. Interesting little thing going on here. I ran df -i to get some inode data and dun dun dun 100% full! No more inodes left. I took this screenshot after I did my cleaning up but this is what df -i returns:
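df -i tells you that you’re out of inodes but not where they went. Here’s a little sketch for finding the directories hoarding them (it assumes GNU find for the -printf flag, which Ubuntu has):

```shell
# Count files per directory under a given path and print the top offenders.
inode_hogs() {
  local dir=${1:-/}
  find "$dir" -xdev -printf '%h\n' 2>/dev/null | sort | uniq -c | sort -rn | head
}

# Usage: inode_hogs /var
```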

I did some more digging on the Internet and found that removing old kernels from my system should alleviate the problem I was having. The kernel I needed was queued up to install but I found that I had a dozen or so kernels still lingering around. Perhaps these extra kernels were eating up my inodes? Here’s the sequence I used to free up some inodes by forcefully deleting some extraneous kernels.

First I ran the following command to list my currently booted kernel:

uname -r

The below returns a list of all kernels on my system except the one I am booted to:

dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+' | grep -Fv $(uname -r)

Yea some of those can go, so let’s get to it.

linux-image-4.2.0-51-generic is the oldest on here, so let’s zap that one. Use the following command to remove the initrd.img file (this is due to Bug 1678187).

sudo update-initramfs -d -k 4.2.0-51-generic

We now need to use dpkg to finalize removal:
sudo dpkg --purge linux-image-4.2.0-51-generic linux-image-extra-4.2.0-51-generic
sudo dpkg --purge linux-headers-4.2.0-51-generic
sudo dpkg --purge linux-headers-4.2.0-51

It is possible that the first dpkg command fails due to something being dependent upon that particular kernel. If this happens dpkg will alert you and you’ll need to take some action.

That should do it. I removed 2 kernels and freed up (as indicated above) over 60,000 inodes. I was then able to successfully update Ubuntu with no further issues.

Anthony’s Nose – Hudson Valley

We made it! The initial scramble was enough to keep us warmed up.


Looking south down the Hudson.


Another view looking west.


Sicc panorama.


Here’s the first of 3 rock scrambles up to Anthony’s nose via the Camp Smith Trail.


It rained a lot yesterday and the trail was muddy in many spots. Some were even little creeks of running water.


There’s a few nice overlooks on the way.


Another overlook.

Sync Your Bash Profile Across Machines

I use multiple computers. One thing that bothered me was that my custom tailored terminal window differed across my machines. Devs and script kiddies have likely encountered this issue. I’ve seen some solutions online by way of GitHub, but these normally relied on git to sync things across computers.

I didn’t want to go this route. I felt there was a simpler solution available. I instead elected to use Dropbox. I was already using it for my 1Password Vault so why not use it to sync my bash_profile?

I initially thought I could just put my .bash_profile into Dropbox. But there’s no easy way to tell Terminal to use a specific file. Instead I dropped the contents of my .bash_profile into another file. For those interested in my setup and aliases, take a gander:

This file is dropped into my Dropbox folder. I simply reference it with one line in my .bash_profile on each of the machines I want to sync up:

The $HOME variable allows me to have differing user shortnames across my machines. Simple stuff.
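The shape of the setup looks like this; the shared filename (bash_common.sh) is a placeholder of my choosing, not necessarily what I actually called it. The shared file in Dropbox holds the aliases and prompt tweaks, and each machine’s .bash_profile sources it if present:

```shell
# One guarded line near the top of ~/.bash_profile on every machine.
# $HOME keeps the path valid across differing user shortnames.
if [ -f "$HOME/Dropbox/bash_common.sh" ]; then
  . "$HOME/Dropbox/bash_common.sh"
fi
```

Edit the file on any machine and Dropbox pushes the change everywhere; new terminal windows pick it up automatically.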

Password Management

I’m writing this mostly as a means to relay, to those interested, my recommended way of managing passwords across computers using a password manager. I tell many about my methods but writing this gives me something to point you to and say “Hey read what I wrote it’s all laid out for you.” If you’re a power user then this is for you. So here we go.

Oh wait before I forget. I feel obligated to first issue a little disclaimer: I am not affiliated with any of the companies whose tools I am using. I am (unfortunately) not getting paid to write this. It is also worth noting that there will always be implementation layer flaws so I am not claiming that this is in any way perfectly secure or anything.


Let’s start from the ground up and talk about passwords in general. You’re probably using the same password across multiple services. Using actual words in your passwords, are we? You’re likely adding the same numbers at the end of your passwords like a year or date, and maybe adding a symbol or two in an attempt to be clever. You may be using pass phrases instead of a password in an attempt to make your password longer. Sorry Not Sorry but these common password practices are generally considered insecure and could leave you exposed to risk. Researchers have been continually reviewing leaked passwords and extracting trends from this leaked data. Malicious actors leverage these trends, and as such, the more of these common yet bad practices you utilize the easier it is for someone to hijack one of your online accounts.

The method I use to protect my online identities may seem a little convoluted at first. But it’s not as complicated as it seems. Initially setting yourself up can be time consuming as you’ll be going through all of your accounts but once you clear this stage you’ll be sitting pretty and more secure than you were before.

Let’s talk about a security measure that you should first employ before you begin touching your passwords.

2 Factor Authentication (2FA)

2 Factor Authentication (or multi factor authentication, MFA) is a simple way to protect your accounts. 2FA adds a second layer of security by requiring you to enter a one time generated passcode at the time you are accessing your account. This passcode is commonly sent to you as a text message but apps exist which, when configured, can also generate these codes for you. Even with your password an attacker cannot access your account without this passcode. I recommend you use 2 factor authentication on all services that support it. For more details on 2FA in general check out this in depth NIST article about it: Back to basics: Multi-factor authentication (MFA). Exact methods for enabling 2FA will vary across your accounts, so look at the documentation provided for the specific account you are turning 2FA on for.

Password Math

Alright, so this section’s heading has the word “math” in it. Don’t get scared; we’re not doing any calculus just yet. We should, however, touch on compute power and how it relates to some statistics. There are many scenarios where attackers employ software to crack your password. Some of it is as simple as brute forcing the login page of a website. Other attacks involve breaking into a server’s database, stealing email addresses and password hashes, then running code that iterates through candidate passwords until one produces a matching hash. With computational power doing nothing but getting better and faster, the time these tasks take keeps shrinking. This is especially worrisome for those using pass phrases: dictionary attacks run lists of words against a hash or login until they succeed, and appending numbers or symbols, or even substituting them for letters, is easily programmed into password cracking software. What’s the best way to reduce an attacker’s efficiency? A mathematically sound password is one constructed of totally random characters at the largest length allowed. The longer the password, the longer it takes software to iterate through every combination of letters, numbers, and symbols; each character you add increases the cracking difficulty exponentially.

Let’s look at an example. Say we’re using all 95 printable (not control characters, you comp sci nerd you) ASCII characters. This includes all letters, both upper and lower case, numbers, and symbols. If your password is one character long, we’re looking at 95 possible one-character passwords. Right? Right. So let’s add a second character. Does this mean we have 95×2 possibilities now? Nope. It means we have 95×95 possible combinations: 9,025. Adding a third character cubes 95 (95×95×95), bringing the total number of 3-character passwords up to 857,375. This gets large quickly, which is good. This is what we want: completely random passwords with as many characters as allowed. 12, 16, 32, 64 characters. Amazon allows passwords up to 128 characters. I’ll let you calculate what 95^128 is. It’s a fucking big number.
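You can check this arithmetic yourself in a few lines of Python. The guesses-per-second figure below is a hypothetical round number for illustration, not a benchmark of any real cracking rig:

```python
ALPHABET = 95  # printable ASCII characters, space through tilde

def possibilities(length):
    """Count every possible password of the given length."""
    return ALPHABET ** length

for n in (1, 2, 3):
    print(n, possibilities(n))       # 95, 9025, 857375

print(len(str(possibilities(128)))) # 95^128 is a 254-digit number

# A hypothetical attacker trying 100 billion guesses per second:
seconds = possibilities(12) / 100_000_000_000
print(f"12 random chars: ~{seconds / (60 * 60 * 24 * 365):,.0f} years to exhaust")
```

Even at that absurd guessing rate, a fully random 12-character password takes on the order of a hundred thousand years to exhaust, and every extra character multiplies that by 95 again.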

I’m Not Memorizing That

Alright, so now we know a few things. We know that long, random passwords are the safest, but we also know that memorizing them is virtually impossible. No doubt. Writing them down is counterproductive; I’m not going into why that Post-it on your monitor with your password written on it is bad for your security. So what do you do? Fear not! Tools exist that facilitate the storage of crazy passwords. Enter the password manager. A password manager is an application that runs on your computer and maintains an encrypted vault. Within this vault are your usernames, login emails, passwords, notes you may have made, credit card info, whatever you wish to store. Most will hold whatever sensitive information you care to throw at them.

The really neat thing about password managers is their web browser extensions. They can open the webpage of the site you wish to log into, fill the username and password into their respective fields, then log you into the site. This makes using crazy long, random passwords easy. Pretty sweet.

My Password Manager of Choice

I endorse the use of 1Password by AgileBits. I appreciate their sales model (I pay for a yearly family subscription) as well as the fact that their vault is quite secure. I don’t mind paying for software that is under continuing development. Elcomsoft reviewed several password managers; although they initially rated 1Password’s protection as average, their further review concluded it was the strongest. You can read that follow-up here: Attacking the 1Password Master Password Follow-Up.

The 1Password family sharing bit is pretty useful, although I am personally a little sketched out by it: in that scenario your password vault is stored on their servers. I don’t use it; instead I have my vault set up locally on each of my computers, synced across devices through a 2FA-protected Dropbox account. 1Password can store the vault for you on their servers, but this makes me a bit uneasy. The benefit of them hosting your vault is that multiple users can share vaults (what you share, and with whom, is at your discretion), allowing access to the same passwords from any designated account. You also get some admin functionality, like being able to recover and unlock fellow 1Password accounts. It’s not that I don’t trust them; it’s that they are, and will always be, a juicy attack surface.

Syncing Across Devices

I mentioned above that I keep my 1Password vault in a Dropbox folder. I can access Dropbox from whichever devices I choose, and thus my vault is accessible across my computers, both Windows and Mac; I have 1Password and Dropbox on my iOS devices as well. Two-factor authentication adds a layer of security to my Dropbox account, and the vault is technically encrypted twice: once by 1Password, then again by Dropbox. The NSA could probably get my vault files, but who’s stopping them anyway? And even then, they’d still have to crack my vault.

1Password Sync Settings


If you do not wish to go this far, you can still sync across devices using a 1Password-hosted vault. Once signed into 1Password, your online vaults will be made available to you.

The upper, indented section shows my shared vaults. My “Primary” vault is the one synced using Dropbox.

Secure Password Generator?

We know that a super long, random password is secure, but where can you generate these random passwords safely? I use Steve Gibson’s Perfect Passwords page. Steve was clever when creating it. The page is delivered over a secure connection so no one can snoop, and its expiration header is set to a date in 1999, so the page generated for you, and only you, is never cached; it’s likewise ignored by search engines and archives like the Wayback Machine. Steve’s generator also never produces the same password twice. Math is cool. Pretty nifty. Bookmark it.
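If you’d rather not trust any web page at all, you can generate passwords locally. Python’s standard `secrets` module draws from the operating system’s cryptographic random source, which is exactly what you want for this. A quick sketch:

```python
import secrets
import string

# 94 printable ASCII characters: letters, digits, and punctuation
# (space omitted, since some sites reject it).
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=32):
    """Assemble a password from cryptographically secure random choices."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())    # 32 random characters
print(generate_password(64))  # longer is better, if the site allows it
```

In practice, most password managers, 1Password included, have a generator like this built in, so you rarely need to roll your own; but it’s nice to know the whole trick is just secure randomness over a big alphabet.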

From the GRC Perfect Passwords page: “Every time this page is displayed, our server generates a unique set of custom, high quality, cryptographic-strength password strings which are safe for you to use.”