N2N is an open source layer 2/3 VPN application. Unlike many other VPN programs, N2N can connect computers that are located behind a NAT router. This is a big advantage when connecting to a cloud environment, because it removes the need for special protocols such as ESP (used by IPsec). To make these connections possible, N2N uses a supernode that routes traffic between the NAT'ed nodes. Such a VPN connection can be used to connect multiple Rcs instances in different regions to each other.
Prerequisites
- Three Ubuntu 16.04 LTS x64 server instances (any size will work).
- A user with sudo access, or the root account.
In this example we will be using three nodes in multiple zones:
- Paris
- Miami
- Sydney
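Before starting, it helps to picture the layout we are building. The sketch below shows the roles and the virtual IPs that will be assigned later in this guide; all edges register with the supernode in Paris.

```
             Paris (supernode on UDP 1200 + edge 192.168.1.1)
               /                                  \
   Miami (edge 192.168.1.2)              Sydney (edge 192.168.1.3)
```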
Installation of the software
The following commands will be executed on each instance.
Start by installing the build-essential package and libssl-dev from the repositories, as we will be building from the newest source code.
apt-get install -y build-essential libssl-dev

Next, download the source code from GitHub.
cd /tmp
git clone https://github.com/ntop/n2n.git

Compile all binaries.
cd n2n
make
make install

The make install command will have created the supernode and edge binaries in the /usr/sbin directory.
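To confirm the build landed where expected, you can optionally run a small check. The check_installed helper below is our own illustration, not part of n2n:

```shell
# check_installed DIR NAME... - report which expected binaries exist in DIR
check_installed() {
    dir=$1; shift
    for name in "$@"; do
        if [ -x "$dir/$name" ]; then
            echo "$name: ok"
        else
            echo "$name: missing"
        fi
    done
}

# On the instances from this guide, both should report "ok":
check_installed /usr/sbin supernode edge
```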
Finish by cleaning up the files.
rm -rf /tmp/n2n

Installation - Node Paris
The first node will be our so-called supernode, which will run the supernode service listening on UDP port 1200.
By default, the N2N application doesn't create a service file, so we will need to provide our own.
Create the 'n2n_supernode' service file:
nano /etc/systemd/system/n2n_supernode.service

Add the following content:
[Unit]
Description=n2n supernode
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/sbin/supernode -l 1200
[Install]
WantedBy=multi-user.target

The '-l' option sets UDP port 1200, the port on which the supernode will listen. To ensure that everything is working, start the supernode service:
systemctl start n2n_supernode

Check the status of the supernode.
systemctl status n2n_supernode

This will show a status similar to the following.
● n2n_supernode.service - n2n supernode
Loaded: loaded (/etc/systemd/system/n2n_supernode.service; disabled; vendor prese
Active: active (running) since Wed 2018-08-15 17:07:46 UTC; 5s ago
Main PID: 4711 (supernode)
Tasks: 1
Memory: 80.0K
CPU: 1ms
CGroup: /system.slice/n2n_supernode.service
└─4711 /usr/sbin/supernode -l 1200

Next, we will create the edge service. This edge service will claim a private IP for communication with the edges in the other Rcs zones.
As with the supernode service, this will also need its own service file.
nano /etc/systemd/system/n2n_edge.service

Add the following content:
[Unit]
Description=n2n edge
Wants=network-online.target
After=network-online.target n2n_supernode.service
[Service]
ExecStart=/usr/sbin/edge -l localhost:1200 -c Rcs -a 192.168.1.1 -k mypassword -f
[Install]
WantedBy=multi-user.target

In this service file, we defined the following command line options:
- -l localhost:1200: This will connect to localhost on UDP port 1200.
- -c Rcs: This is the community the edge will be joining. All edges within the same community appear on the same LAN (layer 2 network segment). Edges that are not in the same community will not communicate with each other.
- -a 192.168.1.1: The IP assigned to this interface. This is the N2N virtual LAN IP address being claimed.
- -k mypassword: The password used for each edge. All edges communicating must use the same key and community name.
- -f: Disables daemon mode and causes edge to run in the foreground. This is needed for the service file, otherwise systemctl will not start the service.
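Since each node runs the same edge command with only a few values changed, it can help to see those knobs pulled out as variables. This is a sketch for the Paris node; the variable names are ours, not n2n's:

```shell
# Per-node settings; only SUPERNODE and EDGE_IP differ between nodes.
SUPERNODE="localhost:1200"   # the other nodes use node_paris_ip:1200
COMMUNITY="Rcs"              # must match on every edge
EDGE_IP="192.168.1.1"        # must be unique per edge (.2 Miami, .3 Sydney)
KEY="mypassword"             # must match on every edge

# This reproduces the ExecStart line from the service file above.
echo "/usr/sbin/edge -l $SUPERNODE -c $COMMUNITY -a $EDGE_IP -k $KEY -f"
```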
To ensure that everything is working, start the service.
systemctl start n2n_edge

Then, query the service status.
systemctl status n2n_edge

The output will be similar to the following.
● n2n_edge.service - n2n edge
Loaded: loaded (/etc/systemd/system/n2n_edge.service; disabled; vendor preset: en
Active: active (running) since Wed 2018-08-15 17:10:46 UTC; 3s ago
Main PID: 4776 (edge)
Tasks: 1
Memory: 396.0K
CPU: 8ms
CGroup: /system.slice/n2n_edge.service
└─4776 /usr/sbin/edge -l localhost:1200 -c Rcs -a 192.168.1.1 -k mypass

If you check ifconfig, you will see the N2N virtual IP claimed by the edge0 interface.
ifconfig

The output will be similar to the following.
edge0 Link encap:Ethernet HWaddr 42:14:55:64:7d:21
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::4014:55ff:fe64:7d21/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)

Once this is done, enable the firewall and create the firewall rules. Make sure to replace the node_miami_ip and node_sydney_ip text with the public IPs of the Miami and Sydney instances. (We will use these later on.)
ufw allow 22/tcp
ufw allow from node_miami_ip to any port 1200
ufw allow from node_sydney_ip to any port 1200
ufw enable

The last thing to do on this node is to enable both services at boot.
systemctl enable n2n_supernode.service
systemctl enable n2n_edge.service

Installation - Node Miami
The Miami node will connect to the supernode, which is currently running in the Paris zone. To achieve this, we only need to create a service file for the edge application.
Start by creating an edge service file.
nano /etc/systemd/system/n2n_edge.service

Add the following content.
[Unit]
Description=n2n edge
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/sbin/edge -l node_paris_ip:1200 -c Rcs -a 192.168.1.2 -k mypassword -f
[Install]
WantedBy=multi-user.target

Note: Replace node_paris_ip with the public IP of the instance running in Paris.
This will connect to the node in Paris on UDP port 1200, join community 'Rcs', claim IP 192.168.1.2 and authenticate with 'mypassword'.
Next, start the service.
systemctl start n2n_edge

Check the status for an indication that the service has started correctly and is running.
systemctl status n2n_edge

Next, ensure that the edge0 IP gets claimed.
ifconfig

It will show the 192.168.1.2 IP address.
edge0 Link encap:Ethernet HWaddr 42:14:55:64:7d:21
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::4014:55ff:fe64:7d21/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)

The next thing to do is to enable the service at boot.
systemctl enable n2n_edge.service

Optionally, enable the firewall and add the SSH rules.
ufw allow 22/tcp
ufw enable

We will now be able to ping both edges running in our instances.
In Paris, ping the Rcs instance in Miami.

ping 192.168.1.2

In Miami, ping the edge in Paris.
ping 192.168.1.1

Installation - Node Sydney
Finally, we will add our last continent to the mix: Australia. Start again by creating an edge service; this edge service will also connect to the previously configured supernode in Paris.
nano /etc/systemd/system/n2n_edge.service

Add the following content.
[Unit]
Description=n2n edge
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/sbin/edge -l node_paris_ip:1200 -c Rcs -a 192.168.1.3 -k mypassword -f
[Install]
WantedBy=multi-user.target

Note: Replace node_paris_ip with the public IP of the instance running in Paris.
This will connect to the node in Paris on UDP port 1200, join community 'Rcs', claim IP 192.168.1.3 and authenticate with 'mypassword'.
systemctl start n2n_edge

Check the status to ensure the service has started.
systemctl status n2n_edge

Make sure that the edge0 IP gets claimed.

ifconfig
edge0 Link encap:Ethernet HWaddr 46:56:b0:e9:8f:8a
inet addr:192.168.1.3 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::4456:b0ff:fee9:8f8a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:648 (648.0 B)

Again, enable this service at boot.
systemctl enable n2n_edge.service

Optionally, enable the firewall and add the SSH rules.
ufw allow 22/tcp
ufw enable

We will now be able to ping each Rcs instance from each node.
ping 192.168.1.1
ping 192.168.1.2
ping 192.168.1.3

If you want to test the connection between the node edges, enable the firewall rules on the Miami and Sydney instances. This will allow communication between the edges.
In Miami, add the following rules. (Make sure to replace the node_paris_ip and node_sydney_ip text with the public IPs of the Paris and Sydney instances.)
ufw allow from node_paris_ip to any port 1200
ufw allow from node_sydney_ip to any port 1200

In Sydney, add the following rules.
ufw allow from node_paris_ip to any port 1200
ufw allow from node_miami_ip to any port 1200

Now you can shut down or reboot the supernode. Existing network connections will continue to work; only new edges will suffer connectivity issues while the supernode service is down.
Conclusion
We have successfully configured a VPN connection between multiple zones. This opens up many new possibilities, such as high-availability scenarios, for our newly configured environment.