
A guide to a "safer" SSH!

August 11, 2019

After working on remote VPSes for a decent amount of time, I have come to understand that connecting to remote nodes via SSH is not the difficult part. The real concern is connecting to these remote nodes in a safe and secure manner, and when that is taken into consideration, we need to work on quite a few things to cover the security aspect.

So, this post will walk you through a demonstration of the major steps that can be taken in this regard.

And since I am currently trying to learn writing "infrastructure as code" using ansible, I will implement this task of configuration management, SSH-hardening & SSH-security by writing ansible roles.

I recommend you go and check out some of the previous posts on ansible and iptables first. That way, it will be much easier for you to follow the conventions used later in this post.

With this basic foundation in place, and without wasting another second, let's quickly start writing our ansible code for today's use-case.


Pre-requisites:

  • A newly created server/VM. (I will be writing this for Ubuntu/Debian based machines, but you can use the same code for other distros as well, with a few alterations.)
  • An ansible controller machine (i.e. a machine with ansible installed on it). It could be your local machine or a remote server again which can connect to other nodes via SSH.
  • And lastly, an ansible "hosts" file made ready with the IP addresses of your required remote servers.
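
For reference, a minimal "hosts" (inventory) file for this setup might look like the following; the group name "sshnodes" is the one our playbook will target, and the IP addresses are placeholders from the documentation range:

```ini
# /etc/ansible/hosts - example inventory (placeholder addresses)
[sshnodes]
203.0.113.10
203.0.113.11
```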


Plan of action:

  1. Create a non-root user with sudo access.
  2. Upgrade all the installed packages to ensure that they are in their latest state.
  3. Install a few more basic packages to make the initial configuration management easier.
  4. Copy the local SSH key to the remote node to enable passwordless logins.
  5. Perform SSH-hardening by altering the sshd_config file in accordance with some basic security measures.
  6. Create some basic iptables rules to improve and ensure security.
  7. Set up fail2ban to prevent SSH brute-force attacks.


  • The very first step is to create an ansible project directory on our ansible-controller machine. This will contain the main provisioning ansible playbook, the "playbook.yml" file, and all the other required ansible roles. (You can make this directory anywhere on your machine; I am creating mine under the "/etc/ansible/" directory path.) Run the following commands for the same:

$ sudo mkdir playbooks
$ cd playbooks
$ sudo touch playbook.yml
$ sudo ansible-galaxy init users
$ sudo ansible-galaxy init packages
$ sudo ansible-galaxy init ssh
$ sudo ansible-galaxy init iptables
$ sudo ansible-galaxy init fail2ban

And this will give you a directory tree structure like the one shown below (all the nested roles will have the same layout as "users").

├── playbook.yml
├── users
│   ├── defaults
│   │   └── main.yml
│   ├── files
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── README.md
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   ├── tests
│   │   ├── inventory
│   │   └── test.yml
│   └── vars
│       └── main.yml
├── packages
├── ssh
├── iptables
└── fail2ban

  • The next step is to write our main provisioning ansible playbook, "playbook.yml". This defines the order in which the roles will run later on, in order to achieve our purpose. Meanwhile, we will also declare some variables here, like the username, password and public-key path, which the tasks of multiple ansible roles will use.

- name: Provisioning a new SSH-hardened and more secure server.
  hosts: sshnodes
  become_user: root
  become: true
  become_method: sudo
  vars:
    username: testuser
    # For `password`, we are required to pass a password hash/digest rather
    # than the plain-text password itself. The `user` module expects a
    # crypt(3)-format hash, which can be generated with e.g.
    # `mkpasswd --method=sha-512` or `openssl passwd -6`.
    password: passwordhash
    publickey: ~/.ssh/id_rsa.pub
  roles:
    - users
    - packages
    - ssh
    - iptables
    - fail2ban
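
Ansible's `user` module expects the `password` value to be a crypt(3)-format hash rather than plain text. A minimal sketch of generating one, assuming `openssl` 1.1.1 or later is available on the controller machine and using obviously fake placeholder values:

```shell
# Generate a SHA-512 crypt(3) hash suitable for the `password` variable.
# "examplesalt" and "examplepassword" are placeholders - substitute your own.
# The output starts with "$6$examplesalt$".
openssl passwd -6 -salt examplesalt examplepassword
```

Paste the resulting `$6$...` string as the value of the `password` variable.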

Here, we have our driving file ready. So, let's move further writing the actual hardening elements now.

Role: users

  • Open the "users/tasks/main.yml" file and write the basic tasks needed to create a non-root sudo user.
  • A new thing we will see here is the "lineinfile" ansible module, which is used to alter specific lines in a file (given through "dest") by matching a regexp (regular expression).
  • The tasks written in this role will perform the following jobs sequentially:
    • Check whether the "wheel" group is present or not. If not, create it.
    • Ensure the "wheel" group has sudo privileges: look for the regexp "^%wheel" (a line starting with %wheel) in the "/etc/sudoers" file and replace the matching line with "%wheel ALL=(ALL:ALL) ALL".
    • Next comes installing the "sudo" package if it is not already there.
    • And finally, create the non-root sudo user account as per the variables specified in "playbook.yml".

# tasks file for users
- name: Ensure wheel group is present
  group:
    name: wheel
    state: present

- name: Ensure wheel group has sudo privileges
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: "^%wheel"
    line: "%wheel ALL=(ALL:ALL) ALL"
    validate: "/usr/sbin/visudo -cf %s"

- name: Install the `sudo` package
  apt:
    name: sudo
    state: latest

- name: Create the non-root user account
  user:
    name: "{{ username }}"
    password: "{{ password }}"
    shell: /bin/bash
    update_password: on_create
    groups: wheel
    append: yes

Role: packages

  • This role is a pretty simple one: it brings the installed packages up to date and sets up automatic upgrades for them.
  • Open the "packages/tasks/main.yml" file and create tasks to do the following:
    • Upgrade all the already installed packages on the remote node.
    • Install some extra packages which, in our case, are vim, htop and net-tools.
    • Install the "unattended-upgrades" package.
    • And finally, copy the corresponding "20auto-upgrades.j2" configuration template from the current role's templates path to the remote node's "/etc/apt/apt.conf.d/20auto-upgrades" file. As we are copying into a root-privileged directory, we set the owner, group and mode explicitly as well.

# tasks file for packages
- name: Upgrading all packages (Ubuntu/Debian)
  apt:
    update_cache: yes
    upgrade: dist

- name: Install a few more packages
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - vim
    - htop
    - net-tools

- name: Install the `unattended-upgrades` package
  apt:
    name: unattended-upgrades
    state: present

- name: Copy the `20auto-upgrades` configuration file
  template:
    src: /etc/ansible/playbooks/packages/templates/20auto-upgrades.j2
    dest: /etc/apt/apt.conf.d/20auto-upgrades
    owner: root
    group: root
    mode: 0644

  • Copy the following configuration into the "packages/templates/20auto-upgrades.j2" file to enable automatic (security) upgrades. The server will not reboot automatically for these updates, since "Unattended-Upgrade::Automatic-Reboot" is left at its default ("false") in "/etc/apt/apt.conf.d/50unattended-upgrades".

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

Role: ssh

  • This role aims to achieve passwordless remote logins by passing the local SSH key to the remote node's newly created non-root sudo user account.
  • Open the "ssh/tasks/main.yml" file and create tasks to do the following:
    • Read the local machine's SSH public key from the location provided in the "vars" section of "playbook.yml" and append it to the remote node's ".ssh/authorized_keys" file.
    • Perform SSH-hardening, again using ansible's "lineinfile" module, to turn the default sshd configuration into a more secure and restricted one.
    • Finally, in the last task, restart the sshd daemon service so that the configuration changes take effect.

# tasks file for ssh
- name: Add local public key for key-based SSH authentication
  authorized_key:
    user: "{{ username }}"
    state: present
    key: "{{ lookup('file', publickey) }}"

- name: Harden sshd configuration
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    state: present
  with_items:
    - regexp: "^#?PermitRootLogin"
      line: "PermitRootLogin no"
    - regexp: "^#?PasswordAuthentication"
      line: "PasswordAuthentication no"
    - regexp: "^#?AllowAgentForwarding"
      line: "AllowAgentForwarding no"
    - regexp: "^#?AllowTcpForwarding"
      line: "AllowTcpForwarding no"
    - regexp: "^#?MaxAuthTries"
      line: "MaxAuthTries 2"
    - regexp: "^#?MaxSessions"
      line: "MaxSessions 2"
    - regexp: "^#?TCPKeepAlive"
      line: "TCPKeepAlive no"
    - regexp: "^#?UseDNS"
      line: "UseDNS no"

- name: Restart sshd
  systemd:
    name: sshd
    state: restarted
    daemon_reload: yes
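
A quick aside on the "^#?Option" regexps used above: each one matches the directive whether or not it is still commented out in the stock sshd_config, so lineinfile replaces the existing line instead of appending a duplicate. This matching behaviour can be checked standalone with grep:

```shell
# Both the commented stock line and an active line match the `^#?` pattern,
# so grep counts 2 matching lines here.
printf '%s\n' '#PermitRootLogin prohibit-password' 'PermitRootLogin yes' \
  | grep -cE '^#?PermitRootLogin'
```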

Role: iptables

  • If you have read the previous article on iptables, you already know what iptables is meant for. So, I am skipping directly to the tasks for this ansible role.
  • Open the "iptables/tasks/main.yml" file and create tasks to do the following:
    • Check for the "iptables" package on the remote node. If it's not present, install it.
    • Flush all the existing iptables firewall rules to start from scratch.
    • Create a firewall rule to allow all loopback traffic that might exist between various applications and services on the remote node.
    • Create another firewall rule to allow established connections, like already-established SSH connections, for both outgoing and incoming packet transfer.
    • The next four firewall rules open the frequently required ports/protocols, i.e. ping (ICMP), 22/SSH, 80/HTTP & 443/HTTPS.
    • The next task appends a rule to "drop" any other traffic (packets) that doesn't match the firewall rules defined above.
    • And finally, in order to retain these otherwise ephemeral rules across reboots, install the "netfilter-persistent" and "iptables-persistent" packages.

# tasks file for iptables
- name: Install the `iptables` package
  apt:
    name: iptables
    state: latest

- name: Flush existing firewall rules
  iptables:
    flush: true

- name: Firewall rule - allow all loopback traffic
  iptables:
    action: append
    chain: INPUT
    in_interface: lo
    jump: ACCEPT

- name: Firewall rule - allow established connections
  iptables:
    chain: INPUT
    ctstate: ESTABLISHED,RELATED
    jump: ACCEPT

- name: Firewall rule - allow ping traffic
  iptables:
    chain: INPUT
    protocol: icmp
    jump: ACCEPT

- name: Firewall rule - allow port 22/SSH traffic
  iptables:
    chain: INPUT
    protocol: tcp
    destination_port: "22"
    jump: ACCEPT

- name: Firewall rule - allow port 80/HTTP traffic
  iptables:
    chain: INPUT
    protocol: tcp
    destination_port: "80"
    jump: ACCEPT

- name: Firewall rule - allow port 443/HTTPS traffic
  iptables:
    chain: INPUT
    protocol: tcp
    destination_port: "443"
    jump: ACCEPT

- name: Firewall rule - drop any traffic without a matching rule
  iptables:
    chain: INPUT
    jump: DROP

- name: Install the `netfilter-persistent` & `iptables-persistent` packages
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - iptables-persistent
    - netfilter-persistent

Role: fail2ban

  • This role, again, is a simple one, and helps prevent brute-force attacks on the remote node.
  • Copy the following custom fail2ban configuration into the "fail2ban/templates/jail.local.j2" file. This configuration limits the maximum tries to establish an SSH connection to 3, and bans the offending IP address for an hour.

[DEFAULT]
# Ban hosts for one hour:
bantime = 3600

# Override /etc/fail2ban/jail.d/00-firewalld.conf:
banaction = iptables-multiport

[sshd]
enabled = true
maxretry = 3

  • Open the "fail2ban/tasks/main.yml" file and create tasks to do the following:
    • Install the "fail2ban" package.
    • Override the default fail2ban configuration with the custom "jail.local.j2" file present in the templates directory of the current role. Again, we set the owner, group and mode explicitly because the destination path is a privileged one.

# tasks file for fail2ban
- name: Install the `fail2ban` package
  apt:
    name: fail2ban
    state: latest

- name: Override some basic fail2ban configurations
  template:
    src: /etc/ansible/playbooks/fail2ban/templates/jail.local.j2
    dest: /etc/fail2ban/jail.local
    owner: root
    group: root
    mode: 0644
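
One small caveat worth noting: fail2ban reads jail.local only at startup, so the new configuration takes effect after a restart of the service. A task along these lines (a sketch using ansible's `service` module; it is not part of the role written above) could be appended to the same tasks file:

```yaml
- name: Restart fail2ban to pick up the new jail.local
  service:
    name: fail2ban
    state: restarted
```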

So, finally, we are done writing all 5 ansible roles required to establish and ensure a more secure SSH connection. The last thing is to test whether all the plays work fine or not.

$ ansible-playbook playbook.yml

And if you get an output like the one shown below, you are good to go.

        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

              : ok=24   changed=9    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Cheers! We have reached our goal for this article.

But I have something more before I end.

The ansible playbook we have written above for securing remote SSH connections undoubtedly serves our purpose to a large extent. But nothing can be made perfect in just one go.

It is always recommended to keep an eye on the system's processes and services, to get a deeper insight into which things are working properly and which other things require more attention from our end. And for this purpose, an auditing tool comes in handy.

So, our last step is to quickly set up an auditing tool, in our case, Lynis.

Lynis is a battle-tested security tool for systems running Linux, macOS, or other Unix-based operating systems. It performs an extensive health scan of your systems to support system hardening and compliance testing. The project is open-source software, available under the GPL license since 2007.


  • SSH into our newly deployed VM and run the following commands to install lynis from the official git repository.

$ sudo apt-get install git 
$ git clone https://github.com/CISOfy/lynis
$ sudo chown -R 0:0 lynis
$ cd lynis

  • And finally run an audit with the following commands.

$ su -
# ./lynis audit system

And after a long, dedicated scan, it will give you a detailed report that includes measures like the Hardening index, the number of tests performed, etc.

Lynis Audit report for a test VM hardened using our above ansible-playbook.

As we can see, the Lynis scan on our newly deployed VM (hardened using the above playbook) reports a Hardening index of 74, which is a very respectable result for our work.

Along with this report, it will also output warnings and suggestions that can be used to improve the results further. You can check the entire Lynis scan log at "/var/log/lynis.log".

And bam! 🙌 This is the end of this article. :D

[UPDATE: I have improved some part of the above playbook for optimized execution and simpler look. You can check the updated post here. ]