Introduction
This guide will show you how to automate the initial Ubuntu server configuration (18.04 through 22.10) using Ansible. Ansible is a software tool that automates the configuration of one or more remote nodes from a local control node. The local node can be another Linux system, a Mac, or a Windows PC. If you are using a Windows PC, you can install Linux using the Windows Subsystem for Linux. This guide will focus on using a Mac as the control node to set up a new rcs Ubuntu server.
The Ubuntu Ansible setup playbook is listed at the end of this guide.
This guide describes how to install and use it. The playbook, and this guide, have been updated and tested to work with Ubuntu 18.04, 20.04, 22.04, and 22.10. It also now works with the latest stable Ansible community version. Starting with Ubuntu 22.10, sshd now uses socket-based activation to start when an incoming connection request is received. The playbook is updated to automatically create the correct SSH configuration for the running Ubuntu version.
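On 22.10 and later, the playbook handles socket activation by installing a systemd drop-in for the ssh.socket unit. With the default ssh_port of 22, the rendered drop-in (from the playbook's ssh_socket_cfg variable) looks like this:

```ini
# /etc/systemd/system/ssh.socket.d/listen.conf
# The empty ListenStream= clears the stock port before the new port is added.
[Socket]
ListenStream=
ListenStream=22
```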
It takes a little work to set up and start using Ansible, but once it is set up and you become familiar with it, using Ansible will save a lot of time and effort. For example, you may want to experiment with different applications. Using the Ansible setup playbook described in this guide, you can quickly reinstall your Ubuntu instance and then run the playbook to configure your base server. I hope this playbook will be a good example for creating future playbooks for installing web servers, database servers, or even an email server.
We will use a single Ansible playbook that will do the following:
Upgrade installed apt packages.
Install a base set of useful software packages.
Update the systemd-resolved configuration to work with the Unbound DNS resolver.
Set a fully qualified domain name (FQDN).
Set the timezone.
Set the sudo password timeout (changing the default 15-minute timeout).
Create a regular user with sudo privileges.
Install SSH keys for the new regular user.
Ensure the authorized key for the root user is installed (or updated to a new key).
Update/Change the root user password.
Disable password authentication for root.
Disable tunneled clear-text passwords.
Create a two-line prompt for the root and the new regular user.
Set the SSH port number, which allows for a non-standard port, in sshd_config (Ubuntu versions before 22.10) or in the SSH listen socket configuration (Ubuntu 22.10 and later).
Configure a firewall using UFW.
Configure the UFW logging level.
Configure brute force mitigation using fail2ban.
Optionally configure static IP networking.
Reboot and restart services as needed.
A local DNS resolver is created by installing the unbound package with its default configuration. The systemd-resolved configuration is updated to work with Unbound; together they provide a local recursive DNS resolver with caching. This is really important if you want to run your own email server that includes DNS blacklist (DNSBL) lookups. Some DNSBL services will not work with a public DNS resolver because they limit the number of queries from a server IP.
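Concretely, the playbook writes a small systemd-resolved drop-in (from its resolved_cfg variable) that disables the stub listener and points resolution at the local Unbound instance:

```ini
# /etc/systemd/resolved.conf.d/local.conf (rendered by the playbook)
[Resolve]
DNSStubListener=no
DNS=127.0.0.1
```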
If you have configured additional IPs in the rcs control panel, you can use this playbook to install a new or updated netplan networking file (/etc/netplan/50-cloud-init.yaml). By default, the Configure static networking playbook task is disabled.
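If you do enable that task, it copies etc/netplan/50-cloud-init.yaml from the playbook directory to the server. A minimal static-IP sketch using entirely hypothetical addresses (and the routes syntax of recent netplan releases) might look like:

```yaml
# Hypothetical content for etc/netplan/50-cloud-init.yaml; replace the
# interface name, addresses, gateway, and DNS servers with your own values.
network:
  version: 2
  ethernets:
    enp1s0:
      addresses:
        - 203.0.113.10/24
      routes:
        - to: default
          via: 203.0.113.1
      nameservers:
        addresses: [127.0.0.1]   # the local Unbound resolver
```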
After you create your Ubuntu 21.10 (or later) instance, Cloud-init will install any available updates. If you try to execute the setup playbook right after creating your instance, there may be a conflict between Cloud-init and the apt upgrade task. You may see a task failure like:
TASK [Upgrade installed apt packages] **********************************************************************************************
FAILED - RETRYING: Upgrade installed apt packages (15 retries left).
FAILED - RETRYING: Upgrade installed apt packages (14 retries left).
FAILED - RETRYING: Upgrade installed apt packages (13 retries left).
ok: [ap1.altoplace.org]
This is completely normal. The setup playbook will retry up to 15 times to run the apt upgrade task (while it is waiting for the lockfile to be released). If you wait a few minutes after creating the instance, you will probably not see any apt upgrade task failures when running the setup playbook.
Prerequisites
A rcs server with a freshly installed Ubuntu (18.04, 20.04, 22.04, or 22.10) instance.
A local Mac, Windows (with Linux installed via the WSL), or a Linux system (this guide will focus on using a Mac, but the procedures are similar for any Linux control node).
If using a Mac, Homebrew should be installed.
A previously generated SSH Key for the rcs host; the SSH public key should already be installed for the root user.
The current stable Ansible version (this guide has been thoroughly tested with the Ansible core version 2.13.6 on a Mac, installed via Homebrew).
1. Install Ansible on the Local System
For this guide, we are using the latest (stable) community version of Ansible.
If you are using a Mac with Homebrew installed:
$ brew install ansible
This will install Ansible along with all the required dependencies, including Python 3.10.x. You can quickly test your installation by running:
$ ansible --version
ansible [core 2.13.6]
config file = /Users/george/.ansible.cfg
configured module search path = ['/Users/george/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/6.6.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/george/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.8 (main, Oct 13 2022, 10:18:28) [Clang 13.0.0 (clang-1300.0.29.30)]
jinja version = 3.1.2
libyaml = True
Create a Simple Ansible Configuration
Create the .ansible.cfg configuration file in the local user home directory. This tells Ansible how to locate the hosts inventory file.
If you are using a Mac, add the following content:
[defaults]
inventory = /Users/user/ansible/hosts.yml
interpreter_python = auto
Be sure to replace user with your actual user name.
Create the folder to store the hosts.yml hosts inventory file:
$ mkdir ~/ansible
$ cd ~/ansible
Of course, you can put it anywhere you want and give it any name. Just make sure that your .ansible.cfg file points to the correct location. I like storing all my Ansible files in a Dropbox folder where I can also run my setup playbook from other Mac machines.
This is example content to add to ~/ansible/hosts.yml:
all:
  children:
    rcs:
      hosts:
        ap1.altoplace.org:
          user: george
          user_passwd: "{{ ap1_user_passwd }}"
          ansible_sudo_passwd: "{{ ap1_sudo_passwd }}"
          root_passwd: "{{ ap1_root_passwd }}"
          ssh_port: "22"
          ssh_pub_key: "{{ lookup('file', '~/.ssh/ap1_ed25519.pub') }}"
          cfg_static_network: false
Be sure to update the host and user names.
The user is the regular user to be created. The user_passwd, ansible_sudo_passwd, and root_passwd are the user, Ansible sudo, and root passwords that are stored in an Ansible vault (described below). ssh_port defines the SSH port number to be configured. ssh_pub_key points to the SSH public key for the rcs host.
The cfg_static_network is a boolean variable that is set to true if you are configuring static networking in /etc/netplan. Unless you have specifically created a static networking configuration, you should leave this set to false. Configuring a static network is beyond the scope of this guide.
Using the Ansible Vault
Create the directory for the Ansible password vault and setup playbook:
$ mkdir -p ~/ansible/ubuntu
$ cd ~/ansible/ubuntu
Create the Ansible password vault:
$ ansible-vault create passwd.yml
New Vault password:
Confirm New Vault password:
This will start up your default system editor. Add the following content:
host_user_passwd: ELqZ9L70SSOTjnE0Jq
host_sudo_passwd: "{{ host_user_passwd }}"
host_root_passwd: tgM2Q5h8WCeibIdJtd
Replace host with your actual hostname. Generate your own secure passwords. Save and exit your editor. This creates an encrypted file that only Ansible can read. You can add other host passwords to the same file.
pwgen is a very handy tool that you can use to generate secure passwords. Install it on a Mac via Homebrew: brew install pwgen. Use it as follows:
$ pwgen -s 18 2
ELqZ9L70SSOTjnE0Jq tgM2Q5h8WCeibIdJtd
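If pwgen is not available, a comparable random password can be generated with standard tools. This is an alternative sketch, not part of the original guide's toolchain:

```shell
# Draw random bytes from the kernel RNG and keep the first 18 alphanumerics.
pw=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | LC_ALL=C tr -dc 'A-Za-z0-9' | cut -c 1-18)
echo "$pw"
```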
You can view the contents of the ansible-vault file with:
$ ansible-vault view passwd.yml
Vault password:
You can edit the file with:
$ ansible-vault edit passwd.yml
Vault password:
2. Create an SSH Config File for the rcs Host
Next, we need to define the rcs hostname and SSH port number that Ansible will use to connect to the remote host.
The SSH configuration for the server host is stored in ~/.ssh/config. An example configuration looks like this (on a Mac):
Host *
    AddKeysToAgent yes
    UseKeychain yes
    IdentitiesOnly yes
    AddressFamily inet

Host ap1.altoplace.org ap1
    Hostname ap1.altoplace.org
    Port 22
    User george
    IdentityFile ~/.ssh/ap1_ed25519
The playbook is always executed the first time over SSH port 22. If the playbook changes the SSH port number, update the Port value in this SSH config file after the playbook runs.
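For example, if the playbook's ssh_port were set to 2222 (a hypothetical value), the Host entry would become:

```text
Host ap1.altoplace.org ap1
    Hostname ap1.altoplace.org
    Port 2222
    User george
    IdentityFile ~/.ssh/ap1_ed25519
```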
With this example SSH configuration file, you can use a shorthand hostname to log into the server.
For the user login:
$ ssh ap1
For the root login:
$ ssh root@ap1
UseKeychain is specific to macOS; it stores the SSH key passphrase in the macOS keychain. ap1.altoplace.org is the rcs server FQDN (Fully Qualified Domain Name) that needs to be defined in your DNS or in the /etc/hosts file on your local system. Port is only required if you define a non-standard SSH port.
Important: Install your SSH key for the root user if you have not done so already:
$ ssh-copy-id -i ~/.ssh/host_ed25519 root@host
After installing your Ubuntu instance, verify that you can log in without using a password:
$ ssh root@ap1
The authenticity of host 'ap1.altoplace.org (207.148.13.194)' can't be established.
ECDSA key fingerprint is SHA256:Z94moYXpNh8iVr23Gk791U0TRsw6WS7A4mHNFI1YdD8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ap1.altoplace.org,207.148.13.194' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-53-generic x86_64)
o o o
root@ap1:~#
The first time you log in, you will be prompted with Are you sure you want to continue connecting (yes/no/[fingerprint])?. Answer yes.
Note: If you reinstall your rcs instance, be sure to delete your rcs hostname from ~/.ssh/known_hosts on your local control node. Otherwise, you will see an SSH error when you try to log into your reinstalled host. The hostname is added to this file during the first login attempt:
$ cat ~/.ssh/known_hosts
o o o
ap1.altoplace.org,207.148.13.194 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOM3gafKFgZLiHgEoQ02eCWoJkYYTSSGJJQNtQ3LxALUG+nW7JBw/ZONWSZXW07fazakXMnpH6S+LaRundHvd4g=
If you don't delete the hostname from this file after reinstalling your instance, you will see an error like:
$ ssh root@ap1
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
o o o
If this happens, delete the line entered for your hostname in the known_hosts file and rerun the ssh command.
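Rather than editing known_hosts by hand, ssh-keygen -R removes all entries for a host (by default from ~/.ssh/known_hosts). Here is a self-contained sketch against a scratch file:

```shell
# Generate a throwaway key, record it for the host, then remove the host entry.
# In real use, you would simply run: ssh-keygen -R ap1.altoplace.org
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id"
printf 'ap1.altoplace.org,207.148.13.194 %s\n' "$(cut -d' ' -f1-2 "$tmp/id.pub")" > "$tmp/known_hosts"
ssh-keygen -R ap1.altoplace.org -f "$tmp/known_hosts"
```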
3. Test Your SSH/Ansible Configuration
Before trying to run the setup Ansible playbook, we need to verify that Ansible is working correctly, that you can access your Ansible vault, and that you can connect to your rcs host. First, verify that Ansible is installed correctly on a Mac:
$ ansible --version
ansible [core 2.13.6]
config file = /Users/user/.ansible.cfg
o o o
This is the latest version of Ansible on a Mac/Homebrew when this guide was written.
Run this example command, substituting your actual hostname, to test your Ansible configuration (and your SSH configuration):
$ ansible -m ping --ask-vault-pass --extra-vars '@passwd.yml' ap1.altoplace.org -u root
Vault password:
ap1.altoplace.org | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
If you see the above output, then everything is working fine. If not, go back and double-check all your SSH and Ansible configuration settings. Start by verifying that you can execute:
$ ssh root@host
and log in without a password (confirming that your SSH key for the root user is installed).
4. Running the Ansible Ubuntu Server Configuration Playbook
You are ready to run the playbook; when you execute the playbook, you will be prompted for your vault password. The playbook will execute a number of tasks with a PLAY RECAP at the end. You can rerun the playbook multiple times; for example, you may want to rerun the playbook to change something like the SSH port number. It will only execute tasks when needed. Be sure to update variables at the beginning of the playbook, such as your timezone and your local client IP address, before running the playbook. Setting your local client IP address prevents you from being accidentally locked out by fail2ban.
You can easily determine your client IP address by logging into your host and executing the who command:
root@host:~# who
root pts/1 2021-10-11 20:24 (12.34.56.78)
Your client IP address, 12.34.56.78, will be listed in the output.
We are finally ready to run the Ansible playbook, which is listed below. Be sure that you are in the ~/ansible/ubuntu directory. This is the command to run:
$ ansible-playbook --ask-vault-pass --extra-vars '@passwd.yml' setup-pb.yml -l host.example.com -u root
Vault password:
Enter your vault password when prompted. Depending on the speed of your Mac, it might take a few seconds to start up. If it completes successfully, you will see a PLAY RECAP like:
PLAY RECAP *************************************************************************************************************************
ap1.altoplace.org : ok=40 changed=27 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
The most important thing to note is that there should be no failed tasks.
Next, I will describe some basic tests that you can run to verify your server setup.
5. Ubuntu Server Verification
After you have successfully executed the Ansible setup playbook, here are some basic tests that you can execute to verify your server setup. I will show some real-life examples with the server host that I used to test the setup playbook (my local hostname is ap1, and my user name is george). I executed these tests on Ubuntu 22.04.1.
Verify Your User Login
If you changed the SSH port number, update the port number in ~/.ssh/config and verify that you can log into your new user account using your host's public SSH key:
$ ssh ap1
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-53-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Sat Nov 26 05:39:49 PM UTC 2022
System load: 0.02392578125
Usage of /: 12.5% of 93.67GB
Memory usage: 10%
Swap usage: 0%
Processes: 139
Users logged in: 0
IPv4 address for enp1s0: 207.148.13.194
IPv6 address for enp1s0: 2001:19f0:5c01:63f:5400:4ff:fe31:4708
0 updates can be applied immediately.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
george@ap1:~
$
Note the two-line prompt. The first line shows user@host and the current directory.
Now, note how the l, la, and ll ls aliases work:
george@ap1:~
$ l
tmpfile
george@ap1:~
$ la
.bash_logout .bashrc .cache .profile .ssh tmpfile
george@ap1:~
$ ll
total 28
drwxr-x--- 4 george george 4096 Nov 26 17:46 ./
drwxr-xr-x 4 root root 4096 Nov 26 17:36 ../
-rw-r--r-- 1 george george 220 Jan 6 2022 .bash_logout
-rw-r--r-- 1 george george 3879 Nov 26 17:37 .bashrc
drwx------ 2 george george 4096 Nov 26 17:39 .cache/
-rw-r--r-- 1 george george 807 Jan 6 2022 .profile
drwx------ 2 george george 4096 Nov 26 17:36 .ssh/
-rw-rw-r-- 1 george george 0 Nov 26 17:46 tmpfile
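These shortcuts are the stock alias definitions that Ubuntu ships in the skeleton ~/.bashrc; the playbook does not add them, they come with the default user shell setup:

```shell
# Default Ubuntu ~/.bashrc ls aliases:
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
```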
Verify Your User Password
Even though you use an SSH public key to log in to your user account, you still need to use your user password with the sudo command. For example, use the sudo command to change to the root account (enter your user password when prompted):
george@ap1:~
$ sudo -i
[sudo] password for george:
root@ap1:~
# exit
logout
george@ap1:~
$
Verify the Root Password
While in your user account, you can also use su - to change to the root account. One difference is that you will have to enter your root password:
george@ap1:~
$ su -
Password:
root@ap1:~
# exit
logout
george@ap1:~
$
Verify Your Hostname
While we are in the root account, let's verify our hostname and some other features that the playbook set up for us:
root@ap1:~
# hostname
ap1
root@ap1:~
# hostname -f
ap1.altoplace.org
root@ap1:~
# date
Sat Nov 26 05:50:30 PM UTC 2022
Here we verified both the short and FQDN hostnames. With the date command, verify that the timezone is set correctly.
Verify the Unbound Local DNS Caching Resolver
An in-depth discussion of Unbound is beyond the scope of this guide. However, I can provide a few quick tests to verify that the default Unbound local DNS caching resolver configuration is working. We will use the dig command.
To verify that the resolver is working, do, for example:
root@ap1:~
# dig +noall +answer +stats altoplace.com
altoplace.com. 3600 IN A 66.39.87.31
;; Query time: 92 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Sat Nov 26 17:51:41 UTC 2022
;; MSG SIZE rcvd: 58
Note that the server address is 127.0.0.1. Also, note the TTL (Time To Live). For this example, the TTL is 3600 seconds. Also, note the Query time, 92 msec. Now execute the same command again:
root@ap1:~
# dig +noall +answer +stats altoplace.com
altoplace.com. 3589 IN A 66.39.87.31
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Sat Nov 26 17:51:52 UTC 2022
;; MSG SIZE rcvd: 58
The Query time should be at or near 0 msec because the second query result came from our local cache. The cached result will remain active for the time-to-live interval, which, as you can see, is counting down.
Some (email) blocklist servers will rate-limit your access to their pre-defined DNS resolvers. This could cause issues when using a public DNS resolver. For example, when executing the following dig command, you should see "permanent testpoint" when using a local DNS resolver.
root@ap1:~
# dig test.uribl.com.multi.uribl.com txt +short
"permanent testpoint"
If you were using a public DNS resolver, you might see a failure like this (for example, right after creating your rcs instance, before executing the setup playbook):
root@ap1:~# dig test.uribl.com.multi.uribl.com txt +short
"127.0.0.1 -> Query Refused. See http://uribl.com/refused.shtml for more information [Your DNS IP: 149.28.122.136]"
You can have a look at that URL to read more about this topic.
Verify fail2ban and UFW SSH Port Protection
This set of tests will verify that fail2ban and UFW are integrated together to protect your SSH port. If you are using the default port 22, it will not take long for attackers to attempt to log in to your server. Their login attempt will fail, and fail2ban will take note of the failure. If there are multiple failed attempts in a short period of time (as noted in your fail2ban configuration), fail2ban will ban the IP for the time that you configured in your fail2ban configuration. Fail2ban will notify UFW to block the IP for the duration of the ban.
To see the current fail2ban status, you can execute fail2ban-client status sshd (or f2bst sshd to save some typing):
root@ap1:~
# f2bst sshd
Status for the jail: sshd
|- Filter
| |- Currently failed: 2
| |- Total failed: 8
| `- File list: /var/log/auth.log
`- Actions
|- Currently banned: 2
|- Total banned: 2
`- Banned IP list: 2600:380:677d:c75c:5db9:85f6:d434:2229 166.175.189.223
This output shows two failed login attempts at the moment and a total of eight failures. Of those failed attempts, two IP addresses met the criteria to be banned and are currently banned. You can observe these failures and bans in /var/log/fail2ban.log:
2022-11-26 18:05:03,837 fail2ban.filter [19729]: INFO [sshd] Found 2600:380:677d:c75c:5db9:85f6:d434:2229 - 2022-11-26 18:05:03
2022-11-26 18:05:12,109 fail2ban.filter [19729]: INFO [sshd] Found 2600:380:677d:c75c:5db9:85f6:d434:2229 - 2022-11-26 18:05:12
2022-11-26 18:05:18,559 fail2ban.filter [19729]: INFO [sshd] Found 2600:380:677d:c75c:5db9:85f6:d434:2229 - 2022-11-26 18:05:18
2022-11-26 18:05:18,785 fail2ban.actions [19729]: NOTICE [sshd] Ban 2600:380:677d:c75c:5db9:85f6:d434:2229
2022-11-26 18:05:38,985 fail2ban.filter [19729]: INFO [sshd] Found 166.175.189.223 - 2022-11-26 18:05:38
2022-11-26 18:05:43,335 fail2ban.filter [19729]: INFO [sshd] Found 166.175.189.223 - 2022-11-26 18:05:43
2022-11-26 18:05:46,935 fail2ban.filter [19729]: INFO [sshd] Found 166.175.189.223 - 2022-11-26 18:05:46
2022-11-26 18:05:47,042 fail2ban.actions [19729]: NOTICE [sshd] Ban 166.175.189.223
You can also execute iptables -nL f2b-sshd to see any currently banned IPs:
root@ap1:~
# iptables -nL f2b-sshd
Chain f2b-sshd (1 references)
target prot opt source destination
REJECT all -- 166.175.189.223 0.0.0.0/0 reject-with icmp-port-unreachable
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Note: Execute ip6tables -nL f2b-sshd to view any banned IPv6 addresses. The f2b-sshd chain may not be present if no IPs have been banned, in which case you will see an error message.
I produced the above output by running a test from my phone, logging into my server with invalid credentials, and verifying that I could no longer connect after my IP address was banned. The phone first tried an IPv6 address and then the IPv4 address. (I turned off my Wi-Fi to ensure that the phone's IP address was different from my_client_ip; if it were using my_client_ip, the connection would never fail.):
2022-11-26 18:06:31,779 fail2ban.filter [19729]: INFO [sshd] Ignore 72.34.15.207 by ip
6. Ubuntu Ansible Set-Up Playbook Listing
This is the setup-pb.yml playbook:
# Initial server setup
#
---
- hosts: all
  become: yes

  vars:
    my_client_ip: 72.34.15.207
    tmzone: UTC
    sudo_timeout: 60
    # Set ufw logging: on | off | low | medium | high | full
    ufw_log: off
    # SSH socket config used for 22.10 and later.
    # Disable any existing listen stream and enable the new stream.
    ssh_socket_cfg: |
      [Socket]
      ListenStream=
      ListenStream={{ ssh_port }}
    resolved_cfg: |
      [Resolve]
      DNSStubListener=no
      DNS=127.0.0.1
    f2b_jail_local: |
      [DEFAULT]
      ignoreip = 127.0.0.1/8 ::1 {{ my_client_ip }}
      findtime = 15m
      bantime = 2h
      maxretry = 5
      [sshd]
      enabled = true
      maxretry = 3
      port = {{ ssh_port }}

  tasks:
    - name: Get datestamp from the system
      shell: date +"%Y%m%d"
      register: dstamp

    - name: Set current date stamp variable
      set_fact:
        cur_date: "{{ dstamp.stdout }}"

    # Update and install the base software
    - name: Update apt package cache
      apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Upgrade installed apt packages
      apt:
        upgrade: dist
      register: upgrade
      retries: 15
      delay: 5
      until: upgrade is success

    - name: Ensure that a base set of software packages are installed
      apt:
        pkg:
          - apt-transport-https
          - build-essential
          - fail2ban
          - pwgen
          - unbound
          - unzip
        state: latest

    - name: Create a local systemd-resolved configuration directory.
      file:
        path: /etc/systemd/resolved.conf.d
        state: directory
        owner: root
        group: root
        mode: 0755

    - name: Create a local systemd-resolved configuration that works with unbound.
      copy:
        dest: /etc/systemd/resolved.conf.d/local.conf
        content: "{{ resolved_cfg }}"
        owner: root
        group: root
        mode: 0644

    - name: Update the systemd-resolved /etc/resolv.conf symbolic link.
      file:
        src: /run/systemd/resolve/resolv.conf
        dest: /etc/resolv.conf
        state: link
        owner: root
        group: root

    - name: Restart systemd-resolved
      service:
        name: systemd-resolved
        state: restarted

    - name: Check if a reboot is needed for Debian-based systems
      stat:
        path: /var/run/reboot-required
      register: reboot_required

    # Host Setup
    - name: Set static hostname
      hostname:
        name: "{{ inventory_hostname_short }}"

    - name: Add FQDN to /etc/hosts
      lineinfile:
        dest: /etc/hosts
        regexp: '^127\.0\.1\.1'
        line: '127.0.1.1 {{ inventory_hostname }} {{ inventory_hostname_short }}'
        state: present

    - name: Check if cloud init is installed.
      stat: path="/etc/cloud/templates/hosts.debian.tmpl"
      register: cloud_installed

    - name: Add FQDN to /etc/cloud/templates/hosts.debian.tmpl
      lineinfile:
        dest: /etc/cloud/templates/hosts.debian.tmpl
        regexp: '^127\.0\.1\.1'
        line: "127.0.1.1 {{ inventory_hostname }} {{ inventory_hostname_short }}"
        state: present
      when: cloud_installed.stat.exists

    - name: set timezone
      timezone:
        name: "{{ tmzone }}"

    # Set sudo password timeout (default is 15 minutes)
    - name: Set sudo password timeout.
      lineinfile:
        path: /etc/sudoers
        state: present
        regexp: '^Defaults\tenv_reset'
        line: 'Defaults env_reset, timestamp_timeout={{ sudo_timeout }}'
        validate: '/usr/sbin/visudo -cf %s'

    - name: Create/update regular user with sudo privileges
      user:
        name: "{{ user }}"
        password: "{{ user_passwd | password_hash('sha512') }}"
        state: present
        groups: sudo
        append: true
        shell: /bin/bash

    - name: Ensure ansible_sudo_passwd matches the [new] user password
      set_fact:
        ansible_sudo_passwd: "{{ user_passwd }}"

    - name: Ensure authorized keys for remote user is installed
      authorized_key:
        user: "{{ user }}"
        state: present
        key: "{{ ssh_pub_key }}"

    - name: Ensure authorized key for root user is installed
      authorized_key:
        user: root
        state: present
        key: "{{ ssh_pub_key }}"

    - name: Update root user password.
      user:
        name: root
        password: "{{ root_passwd | password_hash('sha512') }}"

    - name: Disable password authentication for root
      lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin prohibit-password'

    - name: Disable tunneled clear-text passwords
      lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^PasswordAuthentication yes'
        line: 'PasswordAuthentication no'

    - name: Set user PS1 to a two-line prompt
      lineinfile:
        dest: "/home/{{ user }}/.bashrc"
        insertafter: EOF
        line: "PS1='${debian_chroot:+($debian_chroot)}\\[\\033[01;32m\\]\\u@\\h\\[\\033[00m\\]:\\[\\033[01;34m\\]\\w\\[\\033[00m\\]\\n\\$ '"
        state: present

    - name: Set root PS1 to a two-line prompt
      lineinfile:
        path: '/root/.bashrc'
        state: present
        insertafter: EOF
        line: PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\n\$ '

    # Configure the UFW firewall
    - name: Disable and reset ufw firewall to installation defaults.
      ufw:
        state: reset

    - name: Find backup rules to delete
      find:
        paths: /etc/ufw
        patterns: "*.{{ cur_date }}_*"
        use_regex: no
      register: files_to_delete

    - name: Delete ufw backup rules
      file:
        path: "{{ item.path }}"
        state: absent
      with_items: "{{ files_to_delete.files }}"

    - name: Set the ssh '{{ ssh_port }}' port number in sshd_config (ver < 22.10).
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#Port '
        line: 'Port {{ ssh_port }}'
        state: present
      when: ansible_facts['distribution_version'] < '22.10'

    - name: Create a ssh.socket.d configuration directory (ver >= 22.10).
      file:
        path: /etc/systemd/system/ssh.socket.d
        state: directory
        owner: root
        group: root
        mode: 0755
      when: ansible_facts['distribution_version'] >= '22.10'

    - name: Create a local SSH socket stream configuration (ver >= 22.10).
      copy:
        dest: /etc/systemd/system/ssh.socket.d/listen.conf
        content: "{{ ssh_socket_cfg }}"
        owner: root
        group: root
        mode: 0644
      when: ansible_facts['distribution_version'] >= '22.10'

    - name: daemon-reload (ver >= 22.10)
      systemd:
        daemon_reload: yes
      when: ansible_facts['distribution_version'] >= '22.10'

    - name: Restart the ssh service after updating the SSH port number (ver < 22.10).
      service:
        name: ssh
        state: restarted
      when: ansible_facts['distribution_version'] < '22.10'

    - name: Restart the ssh socket unit after updating the SSH port number (ver >= 22.10).
      systemd:
        name: ssh.socket
        state: restarted
      when: ansible_facts['distribution_version'] >= '22.10'

    - name: Change the ansible ssh port to '{{ ssh_port }}'
      set_fact:
        ansible_port: '{{ ssh_port }}'

    - name: Allow ssh port '{{ ssh_port }}'.
      ufw:
        rule: allow
        proto: tcp
        port: '{{ ssh_port }}'
        state: enabled

    - name: Set the UFW log level.
      ufw:
        logging: '{{ ufw_log }}'

    - name: configure fail2ban for ssh
      copy:
        dest: /etc/fail2ban/jail.local
        content: "{{ f2b_jail_local }}"
        owner: root
        group: root
        mode: 0644
      notify:
        - restart fail2ban

    - name: enable fail2ban service on boot
      service:
        name: fail2ban
        enabled: true
        state: started

    # simple shell script to display fail2ban-client status info; usage:
    #   f2bst
    #   f2bst sshd
    - name: Configure f2bst
      copy:
        dest: /usr/local/bin/f2bst
        content: |
          #!/bin/sh
          fail2ban-client status $*
        owner: root
        group: root
        mode: 0750

    - name: run needrestart
      command: needrestart -r a
      when: not reboot_required.stat.exists and upgrade.changed

    - name: Configure static networking
      copy:
        src: etc/netplan/50-cloud-init.yaml
        dest: /etc/netplan/50-cloud-init.yaml
        owner: root
        group: root
        mode: 0644
      notify:
        - netplan apply
      when: cfg_static_network == true

    - name: Report if reboot is needed.
      debug:
        msg: Rebooting the server, please wait.
      when: reboot_required.stat.exists

    - name: Reboot the server if needed
      reboot:
        msg: "Reboot initiated by Ansible because of reboot required file."
        connect_timeout: 5
        reboot_timeout: 600
        pre_reboot_delay: 0
        post_reboot_delay: 30
        test_command: whoami
      when: reboot_required.stat.exists

    - name: Remove old packages from the cache
      apt:
        autoclean: yes

    - name: Remove dependencies that are no longer needed
      apt:
        autoremove: yes
        purge: yes

  handlers:
    - name: restart fail2ban
      service:
        name: fail2ban
        state: restarted
      when: reboot_required.stat.exists == false

    - name: netplan apply
      command: netplan apply
      when: cfg_static_network == true
You can read the Ansible Documentation to learn more about Ansible.
You should only have to update the vars: section to change the settings for your specific situation. Most likely, you will want to set the client IP and timezone. Setting the client IP prevents you from being accidentally locked out by fail2ban.
Conclusion
In this guide, we have introduced Ansible for automating the initial Ubuntu server setup. This is very useful for deploying or redeploying a server after testing an application. It also creates a solid foundation for creating a web, database, or email server.