Adding SSH keys to an EC2 instance.

In previous sections we talked about how SSH keys are far preferable to passwords, and considerably more secure. But how do you get them onto your EC2 instances?

We saw that when you launch an instance, you can – in fact should – assign an AWS-generated key to it. But what if you don't want to use that key – wouldn't it be nice to have a key already installed on the EC2 instance when it starts up? This is perfectly possible, along with other things, and we shall use it to demonstrate a very useful tool called Packer…

First we need to get access to the AWS command line. If you don't yet have the AWS CLI installed on your box, be that Windows, Mac or Linux, then go and get it downloaded and installed now from here: https://aws.amazon.com/cli/

I make no apologies for doing all this from a Windows machine btw – because that is what most people will probably start by using. I would urge everyone to start moving towards a Linux world though, simply because it is the most flexible, if a little more awkward to learn…

Once you have the CLI installed you need programmatic access to your account. This needs access keys, so fire up the AWS console, log in and go to IAM – Identity and Access Management.

In here, there is an option to add access keys to an IAM user. If you don’t have an IAM user set up and are still using root – then stop and get a user added first.

From the user section, add an access key for CLI access. This will give you an access key and a secret key. Make very careful note of them – again, a password safe like https://keepass.info/ is an ideal place to keep them.

You can now configure the AWS CLI to use these keys to access your AWS account. Get to the command line, and enter aws configure

This will ask for the access and secret key, as well as your default region and output options. You can pick the defaults for the latter two – I would suggest <none> for output format unless you want JSON for some reason…

C:\Users\cs\git\packer>aws configure
AWS Access Key ID [****************KPMR]:
AWS Secret Access Key [****************P1ku]:
Default region name [eu-north-1]:
Default output format [None]:

C:\Users\cs\git\packer>

If you now type aws s3 ls from the command line it should return a list of all the S3 buckets on the account – or a blank line if you don't have any. If you haven't set up the keys, or entered them incorrectly, it will give you an access error instead, which will need troubleshooting.
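For example – a minimal check, and the exact prompt will obviously differ on your machine – either of these will confirm the credentials are wired up correctly:

C:\Users\cs\git\packer>aws sts get-caller-identity

C:\Users\cs\git\packer>aws s3 ls

The first command prints the account ID and the ARN of the IAM user the keys belong to, which is a handy way to be certain you are pointed at the right account.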

Assuming all is well, then we have access to the account from the CLI.

What we are going to do now is create a new Amazon Machine Image. You have probably seen these before – they are the AMIs that are used to create EC2 instances. As well as the several hundred, if not thousands, that you are provided with, you can also roll your own. We are going to build one with our keys built into it, so we can connect directly.

Packer is a tool from HashiCorp that will let you build machine images – amongst other things, Amazon Machine Images, or AMIs. You can get a download from here: https://developer.hashicorp.com/packer/install?product_intent=packer

Once you have it downloaded, you will have a single file – that's all, no dependencies. Stick it somewhere in your file structure – I prefer to have a folder c:\program files\hashicorp, and all their tooling such as Packer and Terraform goes in there. I then just add it to the path, so I can run it from anywhere.
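To confirm it is actually on the path, you can simply ask it for its version (the number you see will depend on the release you downloaded):

C:\Users\cs\git\packer>packer version

If that prints a version number rather than a "not recognized" error, you are good to go.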

Let's do a dry run. In a new folder, create a file called packer.pkr.hcl and enter the following…

packer {
  required_plugins {
    amazon = {
      version = ">= 1.2.8"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "amazon-linux" {
  ami_name      = "packer-example-2"
  instance_type = "t3.micro"
  region        = "eu-north-1"
  source_ami_filter {
    filters = {
      image-id            = "ami-0d74f1e79c38f2933"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["137112412989"]
  }
  ssh_username = "ec2-user"
}

build {
  name    = "SSH-keys-loaded"
  sources = [
    "source.amazon-ebs.amazon-linux"
  ]

}

It's quite a lot to take in, so let's look at what it is going to do…

The first part sets up the plugins and requirements by downloading the latest plugin from GitHub, enabling Packer to talk to AWS correctly.

The second part – starting with source "amazon-ebs" "amazon-linux" – defines the source image that you are going to use to build your image. Here, Packer will create an AMI called "packer-example-2", using the defined instance type (a t3.micro) in the defined region. It does this from the source image specified in the source_ami_filter block.

You have to pick an AMI that is available in your region (remember the default region you picked when setting up the CLI?). An AMI for Amazon Linux 2 in, say, us-west-1 is NOT the same as one in eu-north-1 – despite containing exactly the same software, they have different AMI IDs. As newer versions are released and updated, the AMI ID changes as well. You need to go to the EC2 page and list the AMIs available from the AMI section on the left toolbar. Find an Amazon Linux 2 image, and copy the entire AMI ID (including the ami- bit at the beginning).

You now need the owner ID, which you can find by listing all the available AMIs – paste in the AMI ID and it will filter down to give you just one line. The Owner ID is the AWS account that owns that image; in this case, oddly enough, it is Amazon.
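If you prefer the command line, a describe-images call will show the same information – a sketch only, so swap in whatever AMI ID you found for your own region:

C:\Users\cs\git\packer>aws ec2 describe-images --image-ids ami-0d74f1e79c38f2933 --query "Images[0].[ImageId,OwnerId,Name]" --output table

The OwnerId column is the value you need for the owners field in the filter.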

If you are running in eu-north-1 (Stockholm) and it's not too far into the future, the values I supplied in the file will work. Otherwise plug your details in and save the file.

Now we can run Packer against the file and it will do stuff….

Firstly, you must run packer init packer.pkr.hcl to download the relevant plugins that are needed. You only have to do this once per folder that you use.

C:\Users\cs\git\packer>packer init packer.pkr.hcl
Installed plugin github.com/hashicorp/amazon v1.3.1 in "C:/Users/cs/AppData/Roaming/packer.d/plugins/github.com/hashicorp/amazon/packer-plugin-amazon_v1.3.1_x5.0_windows_amd64.exe"

Once init is done, then you can run against the main file with packer build packer.pkr.hcl

C:\Users\cs\git\packer>packer build packer.pkr.hcl
SSH-keys-loaded.amazon-ebs.amazon-linux: output will be in this color.

==> SSH-keys-loaded.amazon-ebs.amazon-linux: Prevalidating any provided VPC information
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Prevalidating AMI Name: packer-example
    SSH-keys-loaded.amazon-ebs.amazon-linux: Found Image ID: ami-0d74f1e79c38f2933
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Creating temporary keypair: packer_6609bc15-904d-2bb3-1580-c0f46b08246d
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Creating temporary security group for this instance: packer_6609bc16-6bef-fc19-83e6-eef8816d6119
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Launching a source AWS instance...
    SSH-keys-loaded.amazon-ebs.amazon-linux: Instance ID: i-0f1346d8d379362b8
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Waiting for instance (i-0f1346d8d379362b8) to become ready...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Using SSH communicator to connect: 13.51.162.255
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Waiting for SSH to become available...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Connected to SSH!
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Stopping the source instance...
    SSH-keys-loaded.amazon-ebs.amazon-linux: Stopping instance
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Waiting for the instance to stop...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Creating AMI packer-example from instance i-0f1346d8d379362b8
    SSH-keys-loaded.amazon-ebs.amazon-linux: AMI: ami-09cdb78d2a743a9a4
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Waiting for AMI to become ready...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Skipping Enable AMI deprecation...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Terminating the source AWS instance...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Cleaning up any extra volumes...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: No volumes to clean up, skipping
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Deleting temporary security group...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Deleting temporary keypair...
Build 'SSH-keys-loaded.amazon-ebs.amazon-linux' finished after 2 minutes 57 seconds.

==> Wait completed after 2 minutes 57 seconds

==> Builds finished. The artifacts of successful builds are:
--> SSH-keys-loaded.amazon-ebs.amazon-linux: AMIs were created:
eu-north-1: ami-09cdb78d2a743a9a4

If you watch the EC2 instances window in the console, you will see that an instance starts on its own, and it is not started by the usual AWS launch wizard. Instead it is launched by Packer, with packer-named security groups as you can see here. It is using the access keys you set up for the AWS CLI earlier to do this.

What Packer is doing is launching an instance from an AMI that it controls, and making a copy of it. It takes that launched image, clones it, generates a new AMI and then registers that in your AWS account. This is then your image and no one else can use it, unless you decide to publish it to the world. You can, however, use it to launch EC2 instances. The very last thing that Packer does is show you the AMI ID… eu-north-1: ami-09cdb78d2a743a9a4

You can see, if you go to the AWS console, into EC2 and then the AMI list, that Packer has cloned an AMI and added it in here – your own list of AMIs…. You can see here that it is owned by me, not by Amazon, or Canonical, or anyone else.

On the face of it that's not that useful. If you were to start up an EC2 instance from this, it would not do anything different from the Amazon Linux master AMI it was copied from. What we need to do is change it.

Take a look at the following build block. We have added a provisioner section to it – go ahead and paste that part into your working file now.

build {
  name    = "SSH-keys-loaded"
  sources = [
    "source.amazon-ebs.amazon-linux"
  ]

  provisioner "shell" {
    inline = [
     "cd /home/ec2-user",
     "echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGva38OqNvc1FZgsv3OP2dk9HjdM+482iMwXR1lqenx7 cs' >> .ssh/authorized_keys",
    ]
  }
}

This instructs Packer to build onto the source that it already has… the source is the Amazon-provided AMI, and this build block changes that AMI to make it more useful to us. It does so by means of a provisioner. You can see that the sources line references the source block, so we make sure we are building on the correct thing. It then runs the provisioner – in this case a shell provisioner with inline commands. In essence, it starts the Amazon Linux image up, runs some shell commands, shuts down when they are completed and then packages up the AMI.

The commands are quite simple – go to the ec2-user's home directory, then echo a new public SSH key and append it to the authorized_keys file in the .ssh folder. We can – of course – choose any SSH key we want there, and clearly you should pick the one that will be used by your local client to connect to the EC2 instances.

To get the key for the shell provisioner, you need to go to the Linux box that will connect to this EC2 instance…

┌─[✓]─[cs@rootca]─[~]
└─cat .ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGva38OqNvc1FZgsv3OP2dk9HjdM+482iMwXR1lqenx7 cs@rootca

┌─[✓]─[cs@rootca]─[~]
└─

All you are doing is listing the contents of the public key used for SSH connections. The block starting with "ssh-ed25519" wraps over several lines on screen, but it is in fact just one line. That is what is pasted into the Packer file above, as can be seen.
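If that box does not yet have an ed25519 keypair, one can be generated first – a minimal sketch, accepting the default file locations and prompts:

┌─[✓]─[cs@rootca]─[~]
└─ssh-keygen -t ed25519

The public half ends up in ~/.ssh/id_ed25519.pub, which is the file being displayed above.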

So now we have the key in place, let us – or rather, let Packer – build it…

C:\Users\cs\git\packer>packer build packer.pkr.hcl
SSH-keys-loaded.amazon-ebs.amazon-linux: output will be in this color.

==> SSH-keys-loaded.amazon-ebs.amazon-linux: Prevalidating any provided VPC information
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Prevalidating AMI Name: packer-example
    SSH-keys-loaded.amazon-ebs.amazon-linux: Found Image ID: ami-0d74f1e79c38f2933
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Creating temporary keypair: packer_6609bf5b-e4a3-ce11-c840-1824b58cc7bd
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Creating temporary security group for this instance: packer_6609bf5c-7dfe-655b-a95e-1e3d3c063d2b
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Launching a source AWS instance...
    SSH-keys-loaded.amazon-ebs.amazon-linux: Instance ID: i-022160480d5269485
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Waiting for instance (i-022160480d5269485) to become ready...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Using SSH communicator to connect: 13.51.163.233
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Waiting for SSH to become available...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Connected to SSH!
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Provisioning with shell script: C:\Users\cs\AppData\Local\Temp\packer-shell1025933176
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Stopping the source instance...
    SSH-keys-loaded.amazon-ebs.amazon-linux: Stopping instance
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Waiting for the instance to stop...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Creating AMI packer-example from instance i-022160480d5269485
    SSH-keys-loaded.amazon-ebs.amazon-linux: AMI: ami-0eb503b7ffff18ef2
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Waiting for AMI to become ready...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Skipping Enable AMI deprecation...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Terminating the source AWS instance...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Cleaning up any extra volumes...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: No volumes to clean up, skipping
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Deleting temporary security group...
==> SSH-keys-loaded.amazon-ebs.amazon-linux: Deleting temporary keypair...
Build 'SSH-keys-loaded.amazon-ebs.amazon-linux' finished after 2 minutes 58 seconds.

==> Wait completed after 2 minutes 58 seconds

==> Builds finished. The artifacts of successful builds are:
--> SSH-keys-loaded.amazon-ebs.amazon-linux: AMIs were created:
eu-north-1: ami-0eb503b7ffff18ef2


At first glance this looks no different from the previous Packer run; however, there is an extra line about halfway down… ==> SSH-keys-loaded.amazon-ebs.amazon-linux: Provisioning with shell script: C:\Users\cs\AppData\Local\Temp\packer-shell1025933176 It is clear that the shell provisioner has run, and done its magic.

If we now start up a new EC2 instance from our AMI, and – quite deliberately – do not give it any SSH keys, we would normally expect to find it quite difficult to log in. So let's fire up a new EC2 instance from our newly created AMI – you can find it by selecting the "My AMIs" and "Owned by me" options.

We will be sure to supply no key at all..

Now the image is running, we can find its IP address from the EC2 instances dashboard in the usual fashion. All the old instances you see here are what is left over from my creating AMI images with Packer and test-building them…

The IP address is 13.60.27.58… so we go to the Linux box that has the public/private keypair that we installed using Packer, and log on…
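Connecting is then just a plain SSH session – assuming the matching private key is the default ~/.ssh/id_ed25519 on that box, no extra options are needed:

┌─[✓]─[cs@rootca]─[~]
└─ssh ec2-user@13.60.27.58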

As you can see, the system logs in after checking the host key. No password needed, no AWS supplied key needs to be installed.

Cleanup

When you have finished, be sure to shut down the machines that were created, otherwise they will eat into the free tier, and perhaps you will be charged for them.

Also – it does cost money to keep an AMI lurking around in your private store. The AMI itself is free, but it is based on an EBS snapshot, and that is only covered up to a point by the free tier – so I would still deregister (not just deactivate) the AMI to be sure there are no unexpected charges later on.
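If you prefer to do the cleanup from the CLI, something like the following will do it – a sketch only, using the AMI ID from my build; note that deregistering an AMI does not remove the EBS snapshot behind it, so that needs deleting separately:

C:\Users\cs\git\packer>aws ec2 deregister-image --image-id ami-0eb503b7ffff18ef2

C:\Users\cs\git\packer>aws ec2 describe-snapshots --owner-ids self

C:\Users\cs\git\packer>aws ec2 delete-snapshot --snapshot-id snap-xxxxxxxxxxxxxxxxx

The describe-snapshots call will show you the snapshot ID to plug into the delete.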

Recap

We took an existing Amazon Linux AMI and instructed Packer to do the following:

  • Spin up a copy on our account, using our CLI credentials
  • Modify that copy, by means of a build block which made use of a shell provisioner to add in the public key from our local Linux box
  • Packer made the modification, then copied the changed machine into a new Amazon Machine Image, placed it in our private store and gave us the new AMI ID
  • We then started an EC2 instance from that new AMI and were able to log in directly, without using AWS-supplied keys

On the face of it this is not that useful – we could just as well keep a key in AWS and use that instead. The principle, however, of creating significantly modified images – with custom software loads, configurations and so on – is clear to see. There is no need to build a webserver onto every EC2 instance that you stand up, if you can use Packer to install, configure, and load the website onto a custom image for you.

Improving EC2 and SSH

In the last article we looked at how EC2 and SSH access worked. Whilst SSH keys offer greatly improved security over passwords and usernames, there are a few issues with them.

TOFU is the first one, and I'm not talking about the slightly odd thing made from soya beans. Trust On First Use means that the first time you connect, you have to trust that the computer you are connecting to is, well, the one you expect. This is closely related to the MITM, or man-in-the-middle, problem – if someone can interpose themselves between you and the host you want to connect to, they can pretend to be that host, steal your login credentials, and then they have your login, your identity, all your money, the dog, and if you are really lucky they will also steal the in-laws as well…

Ahem. Seriously though, TOFU can be a real problem, or just a spectacular annoyance. One little quirk of EC2, though, makes it particularly irritating.

Here we have an EC2 instance, running on the AWS free tier (you have signed up for this, yes? Go on, it's definitely worth it). You can see it has a public IP address of 18.130.130.39

Incidentally, if you are using the PuTTY tool from windows, and want to know how to connect there is a short note on how to do so.. https://www.thenetnerd.uk/2022/12/20/putty/

If you feed in the default key and a username of admin (because it's a Debian instance it doesn't use the more familiar ec2-user) then you can SSH to it and get connected. The first time, it will ask you if you want to connect to the host, and whether you trust it…

Click Accept, and it will log you in. This is exactly how the SSH keys are expected to work.

If you subsequently retry the connection, it will not ask you if you trust the host again. Trust is established, both ends have the keys, and all is well. That is, until you stop and restart the instance…

Once restarted, although the instance is the same, it becomes clear that we have a different public IP address assigned… 18.133.65.14

If we try to connect to this new address – even though it's the same EC2 machine – then perhaps inevitably…

The SSH connection throws a warning if either the hostname or IP address changes. You can clearly see that the host is the same – the key fingerprint is the same in both cases – but the change in IP is unexpected, and hence the warning.
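If you are confident nothing untoward is going on and just want to tidy up the client, the stale entry for the old address can be cleared out of known_hosts – a small sketch using OpenSSH's built-in option (PuTTY keeps its equivalent cache in the Windows registry rather than a known_hosts file):

┌─[✓]─[cs@rootca]─[~]
└─ssh-keygen -R 18.130.130.39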

This is a problem that is an extension of the Trust On First Use issue. In the next post we shall look at how you can overcome this using an SSH certificate and a trusted Certificate Authority.

EC2 SSH security with keys – intro

If you launch an EC2 instance – come on, we've all done it – launch the thing without providing any keys for it, then you cannot connect. I've heard this time and time again – why can't we use a password?

Because passwords suck, that's why. https://xkcd.com/936/

AWS uses SSH keys, and if you have never used these before then there is a lot more to them than the first glance you will have seen with EC2 instances. Let’s take a look at them in a lab environment….

How Do SSH Keys Work?

SSH can authenticate clients using several different means. The most basic and familiar perhaps is username and password authentication. Although passwords are sent to the host over the network in a secure manner, it is possible to attack them by various means. Brute force attacks, shoulder surfing, and even keylogging devices, malware or cameras can all reveal passwords. Also, humans are really bad at picking passwords, and they can often be guessed. Despite there being sticking plasters for this, SSH keys prove to be a reliable and much more secure alternative.

An SSH keypair is made of two linked keys. There is a public key, which can be sent to anyone without security being compromised, and a private key that must remain, as the name says, utterly private. The usual names for these keys are id_rsa and id_rsa.pub

The private key should remain on the client machine that will use it, and should never be moved anywhere else. If it is, anyone with access to it will also be able to access remote hosts as if they were the owner. As a precaution, the private key can be encrypted on disk with a passphrase.
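If you created a key without a passphrase and want to add one later, ssh-keygen can do it in place – a quick sketch, assuming the key is in the default location:

┌─[✓]─[cs@client]─[~]
└─ssh-keygen -p -f ~/.ssh/id_rsa

It will prompt for the old passphrase (just press Enter if there isn't one) and then ask for the new one twice.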

The public key can be shared freely without consequence. It is used to encrypt messages, and by design you cannot decrypt them with the public key – only the private key can do that. Consequently, this can be used as a way of proving that you possess the private key, if someone else has the public key. They send you a message encrypted with the public key – you decrypt it with the private key and send it back to them.

For use with SSH, your public key is uploaded to a remote host you wish to connect to. The key is appended to a file that contains all the public keys that are allowed to connect to the server, known as the authorized keys file, which usually is found at ~/.ssh/authorized_keys.

When an SSH session takes place, the client connects to the remote host and presents the fingerprint of the public key. If the remote host has that public key it challenges the client to prove it has the private key by encrypting a message. If the client can decrypt the challenge, and prove it has the private key, then it is allowed to connect, an appropriate shell session is spawned, and the client connects.

Homelab

This is easy to investigate in a homelab if you are running something like Proxmox, VMware Workstation, or similar….

Here we have a few servers in my home lab: host1, host2, and client, all running Debian 11. If we connect to the client machine and then try to SSH to another – say host1 – we get a response about the authenticity of the host not being established…

This is the Trust On First Use, or TOFU, problem. You may have no idea whether the server you are connecting to is actually the one you expect, and so the system asks you if you are really sure that you want to connect to it. Type yes, and it says something about adding a key, and then it lets you log in…

If you subsequently try again, it connects straight away – without the message, as can be seen here. I disconnect from the far-end server and then reconnect, and it goes in without the message.

Let us have a look at just what SSH is doing here….

On the client machine, there is a hidden folder in the user's home directory called .ssh. Inside this there may well be a number of files, but the one we are interested in is called known_hosts

The known_hosts file contains a list of the host keys that the client uses to identify hosts it has successfully connected to. If you look in the /etc/ssh folder on a server you will see a number of files.

┌─[✓]─[root@host2]─[~]
└─cd /etc/ssh

┌─[✓]─[root@host2]─[/etc/ssh]
└─ls
moduli        sshd_config.d            ssh_host_ed25519_key.pub
ssh_config    ssh_host_ecdsa_key       ssh_host_rsa_key
ssh_config.d  ssh_host_ecdsa_key.pub   ssh_host_rsa_key.pub
sshd_config   ssh_host_ed25519_key

The ssh_host_ecdsa_key.pub is presented by the server to the client during the initial sign-on. Once successfully signed on, you can see the key appear in the client's known_hosts file. Looking at the example screens below you can clearly see the key appearing in the client's known_hosts file – a matching section has been highlighted.

The host key comes in two parts – there is a public and a private part. When you log on, the server sends the public part of its key and asks you to trust it. Assuming you do, then when you next connect you can use the public part of the key to send an encrypted challenge that only the server with the private part can reply to. Hence trust is assured.
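A quick way to see that relationship for yourself – a sketch, assuming the OpenSSH tooling on both ends – is to compare fingerprints: the one the server computes from its own host key should match the one cached on the client:

┌─[✓]─[root@host2]─[/etc/ssh]
└─ssh-keygen -lf ssh_host_ecdsa_key.pub

┌─[✓]─[cs@client]─[~]
└─ssh-keygen -l -F host2

If the two fingerprints differ, either the host has been rebuilt or something is sitting in the middle.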

EC2 keys

When you log on to an AWS EC2 instance, you don't use a username and password. Instead, you supply a keypair. This keypair acts as a logon credential, being something that you – and hopefully you alone – possess. So how does it work?

Let's look at the lab environment again.

Host1 is set up to permit password-based authentication – the default in SSH is to permit this, and unless specifically set otherwise it will always allow a username and password to be supplied.

cat sshd_config
...
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
...

If you set this to PasswordAuthentication no then you cannot log in with a password – it will simply refuse, and if you have no other way of getting in, then you are stuck! If you want to try this, open TWO sessions to the host, make the change, and then reconnect one of them. The other session will remain open so you can revert the change. Remember that you need to restart the sshd service after making the change, with a systemctl restart sshd command.
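A minimal sketch of that change – the sed line is just one way of flipping the setting, and you can edit the file by hand if you prefer; keep that second session open in case you lock yourself out:

┌─[✓]─[root@host1]─[/etc/ssh]
└─sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' sshd_config

┌─[✓]─[root@host1]─[/etc/ssh]
└─systemctl restart sshd

┌─[✓]─[root@host1]─[/etc/ssh]
└─sshd -T | grep -i passwordauthentication

The last command dumps the effective configuration, so you can confirm the setting has actually taken.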

The result is clear… on host1 we turn off password authentication

┌─[✓]─[root@host1]─[/etc/ssh]
└─cat sshd_config | grep PasswordAuthentication
PasswordAuthentication no

Connections from the client machine are now refused.

┌─[✓]─[cs@client]─[~]
└─ssh cs@host1
cs@host1: Permission denied (publickey).

┌─[✗]─[cs@client]─[~]
└─

In order to log on, we now need some way other than a username and password. I'll show you now how you can do this with SSH keys.

To generate a keypair we use ssh-keygen. This will create a public and a private key. If you allow it to use the defaults it will work, but we want to tweak them a little. By setting a couple of options, we generate RSA keys with a key size of 4096 bits.

This will give us a good old-fashioned, but perfectly secure, pair of keys that should work on just about anything. Once created, they will appear in the hidden .ssh directory in the home folder.


┌─[✗]─[cs@client]─[~]
└─ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cs/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cs/.ssh/id_rsa
Your public key has been saved in /home/cs/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:a7Wukl39ArAdZlwPAMexB1IB6mG5/zgulRcnp1YLqJQ cs@client.chris-street.test
The key's randomart image is:
+---[RSA 4096]----+
|        +=B+     |
|       o o.oo    |
|      =. o...o   |
|     oEoo O.+ .  |
|     .o.SB.@ .   |
|      ..+oB.o    |
|       +++.. .   |
|      +.o+  . .  |
|       ++oo  .   |
+----[SHA256]-----+

┌─[✓]─[cs@client]─[~]
└─ls .ssh
id_rsa  id_rsa.pub  known_hosts

┌─[✓]─[cs@client]─[~]
└─

We now have a means of identifying ourselves to another server by something other than a password. We can have the remote host that we are connecting to send us a challenge, which we can answer with the private half of the keypair. This is something that only we (in theory!) possess, so the host knows it is us.

For this to work, though, we have to have the public part of the key on the remote server. So that needs to be copied over there, like this. If you haven't yet set the host back to accepting password logins, then you need to do that first.

Most SSH-based systems provide the ssh-copy-id tool, which can be used to copy the key over to the host. It does require that you have a valid username and password. Below, however, I will show the process manually so you can more easily see what is going on, with an example of ssh-copy-id given at the end.


┌─[✓]─[cs@client]─[~]
└─scp .ssh/id_rsa.pub cs@host1:/home/cs
cs@host1's password:
id_rsa.pub                                    100%  753     1.4MB/s   00:00

┌─[✓]─[cs@client]─[~]
└─ssh cs@host1
cs@host1's password:
Linux host1.chris-street.test 5.10.0-20-amd64 #1 SMP Debian 5.10.158-2 (2022-12-13) x86_64
┌─[✓]─[cs@host1]─[~]
└─ls
id_rsa.pub

┌─[✓]─[cs@host1]─[~]
└─ls .ssh
host1_rsa_key  host1_rsa_key-cert.pub  host1_rsa_key.pub

┌─[✓]─[cs@host1]─[~]
└─rm -rf .ssh

┌─[✓]─[cs@host1]─[~]
└─ls
id_rsa.pub

┌─[✓]─[cs@host1]─[~]
└─ls .ssh
ls: cannot access '.ssh': No such file or directory

┌─[✗]─[cs@host1]─[~]
└─mkdir .ssh

┌─[✓]─[cs@host1]─[~]
└─ls -la
total 36
drwxr-xr-x 4 cs   cs   4096 Jan 23 18:51 .
drwxr-xr-x 3 root root 4096 Jan 15 19:26 ..
-rw------- 1 cs   cs    202 Jan 23 02:48 .bash_history
-rw-r--r-- 1 cs   cs    220 Jan 15 19:26 .bash_logout
-rw-r--r-- 1 cs   cs   3930 Jan 16 03:21 .bashrc
-rw-r--r-- 1 cs   cs    753 Jan 23 18:40 id_rsa.pub
drwxr-xr-x 3 cs   cs   4096 Jan 16 03:21 .local
-rw-r--r-- 1 cs   cs    807 Jan 15 19:26 .profile
drwxr-xr-x 2 cs   cs   4096 Jan 23 18:51 .ssh

┌─[✓]─[cs@host1]─[~]
└─chmod 700 .ssh

┌─[✓]─[cs@host1]─[~]
└─cat id_rsa.pub >> .ssh/authorized_keys

┌─[✓]─[cs@host1]─[~]
└─ls -la .ssh
total 12
drwx------ 2 cs cs 4096 Jan 23 18:52 .
drwxr-xr-x 4 cs cs 4096 Jan 23 18:51 ..
-rw-r--r-- 1 cs cs  753 Jan 23 18:52 authorized_keys

┌─[✓]─[cs@host1]─[~]
└─chmod 600 .ssh/authorized_keys

┌─[✓]─[cs@host1]─[~]
└─ls -la .ssh
total 12
drwx------ 2 cs cs 4096 Jan 23 18:52 .
drwxr-xr-x 4 cs cs 4096 Jan 23 18:51 ..
-rw------- 1 cs cs  753 Jan 23 18:52 authorized_keys

┌─[✓]─[cs@host1]─[~]
└─

The stages of the process are…

  • SecureCoPy the public key from the client to the host using the scp command.
  • Log on to the host with a username/password combo.
  • Verify that the id_rsa.pub file has arrived.
  • Check, and if necessary create, the hidden ~/.ssh directory.
  • Make sure that the permissions for ~/.ssh are set to 700 or it will not work.
  • The contents of the public key file are then listed and appended, with the double redirect, to the end of the authorized_keys file. If you use a single redirect, it will overwrite the existing list of keys! This is why this method is not recommended – it is too easy to break something catastrophically.
  • Once the key has been appended, ensure that the authorized_keys file has permissions of 600 – again, if this is incorrect then it will not work.

Once the client's public key is on the host, the host can use it to be sure that the client is authorised to connect – because it can verify that the public key is present in its authorized_keys file. The host can then send an encrypted challenge to the client and get a response back – proving that the client does have the private half of the key.

Once set up correctly, when you try to connect, SSH will use your local id_rsa keypair automatically, and you will find that it just connects without requesting any further information.

┌─[✓]─[cs@client]─[~]
└─ssh cs@host1
Linux host1.chris-street.test 5.10.0-20-amd64 #1 SMP Debian 5.10.158-2 (2022-12-13) x86_64

┌─[✓]─[cs@host1]─[~]
└─

Well done. You now have SSH key-based authentication to allow you to connect.

To set this up with no risk of breaking things, ssh-copy-id should be used. It is a simple one-line command; it will automatically pick the key for you and send it to the far end if you provide a username and password.

┌─[✓]─[cs@client]─[~]
└─ssh-copy-id cs@host1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/cs/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
cs@host1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'cs@host1'"
and check to make sure that only the key(s) you wanted were added.


┌─[✓]─[cs@client]─[~]
└─ssh cs@host1
Linux host1.chris-street.test 5.10.0-20-amd64 #1 SMP Debian 5.10.158-2 (2022-12-13) x86_64

┌─[✓]─[cs@host1]─[~]
└─

It carries out the same process as the manual steps above, but far more safely and effectively.

This is the same process that AWS EC2 uses – we have simply replicated it by hand. In the next post I will look at the common problems that are found with this approach, and the various ways they can be overcome.