After opening the GNS3 client and starting nodes, we primarily access the remote virtual devices with a console connection: Telnet, VNC, or SPICE. A console connection covers our requirement for physical access to the virtual devices, but today most administrative tasks are carried out over a link to the network management interface.

I define the GNS3 server as remote if the gns3server process is not running on the host operating system. The GNS3 VM, for instance, is considered remote because its gns3server process runs on a guest operating system (Ubuntu).

Management Interface

The majority of network virtual appliances allocate the first network interface as the management interface. The management interface can be configured to allow protocols (e.g., SSH, HTTPS, NETCONF) to install, manipulate, and delete the configuration of the network device. For example, the following is just a sample of network virtual appliances that utilize HTTP(S) for device configuration:

  • Cisco Adaptive Security Device Manager (ASDM) for the Cisco ASAv
  • Web Interface for the Palo Alto Networks VM-Series Firewall
  • Configuration utility (Web UI) for the F5 BIG-IP LTM VE
  • Web Admin (GUI) for the FortiGate VM
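For example, once a management address is reachable, these protocols are used like any other remote service. The commands below are only a sketch, and the 192.0.2.10 address is a placeholder for a device's management IP:

ssh admin@192.0.2.10            # CLI access over SSH
curl -k https://192.0.2.10/     # web UI or REST API over HTTPS (-k: self-signed certificate)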

Management Network

Each management interface needs to attach to a software bridge for connectivity. The Linux bridge is a software L2 device that is similar to a physical bridge device. It can forward traffic between virtualization guests, the host OS, and possibly off-node via the physical network interfaces of the host OS.
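As a rough sketch of what libvirt does for us behind the scenes, a bridge can be created and inspected by hand with the iproute2 tools. The br-test name here is arbitrary, and the commands require root:

sudo ip link add name br-test type bridge   # create a software L2 switch
sudo ip link set br-test up
ip -d link show br-test                     # -d displays bridge-specific details
sudo ip link del br-test                    # remove the test bridge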

When we use the NAT node with GNS3 on Linux, we are in fact using the virbr0 Linux bridge. The bridge was created with the installation of libvirt; it's a component of the libvirt default virtual network. Let's delve further into the details of this network before we create our management network.

  1. Our first task is to log in to a shell. Choose one of the following:

GNS3 VM

  1. Start the GNS3 VM with the VMware application.
  2. From the VMware console:
    • Select OK and ⏎ to get to the menu.
    • Select Shell and ⏎.
GNS3 Menu

Alternatively, we can connect via SSH with the username/password of gns3/gns3.

ssh gns3@<gns3_vm_ip>

Google Compute Engine

There are many ways to deploy GNS3 with Google Compute Engine (GCE), but we will use https://github.com/mweisel/gcp-gns3server as the reference for this post.

  1. Sign in to the GCP Console.
  2. Select the GNS3 project.
  3. Activate the Google Cloud Shell.
  4. List the Google Compute Engine instances.
gcloud compute instances list
  5. Start the gns3server instance (if required).
gcloud compute instances start gns3server --zone <gce_zone> --quiet
  6. SSH into the gns3server instance.
gcloud compute ssh gns3server --zone <gce_zone>
  7. Add yourself to the libvirt group.
sudo gpasswd -a $USER libvirt
  8. Log out and back in for the new group membership to take effect.
exit
gcloud compute ssh gns3server --zone <gce_zone>
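After logging back in, the new group membership can be confirmed (the same check applies to the Azure instance below):

id -nG | grep -w libvirt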

Microsoft Azure

There are also many ways to deploy GNS3 with Microsoft Azure, but we will use https://github.com/mweisel/azure-gns3server as the reference for this post.

  1. Sign in to the Azure Portal.
  2. Launch the Azure Cloud Shell (Bash).
  3. List the Azure instances.
az vm list -d -g gns3-resources -o table
  4. Start the gns3-server instance (if required).
az vm start -g gns3-resources -n gns3-server
  5. List the IP addresses for the Azure instances.
az vm list-ip-addresses -g gns3-resources -n gns3-server -o table
  6. SSH into the gns3-server instance.
ssh <PublicIPAddress>
  7. Add yourself to the libvirt group.
sudo gpasswd -a $USER libvirt
  8. Log out and back in for the new group membership to take effect.
exit
ssh <PublicIPAddress>

  2. Now that we're logged in to a shell, list all the libvirt virtual networks on the host.
virsh net-list --all

output:

Name                 State      Autostart     Persistent
----------------------------------------------------------
default              active     yes           yes
  3. Display more information about the default virtual network.
virsh net-info default

output:

Name:           default
UUID:           68142228-ab29-445e-8bcd-4ea334546217
Active:         yes
Persistent:     yes
Autostart:      yes
Bridge:         virbr0

As noted earlier, we see the virbr0 Linux bridge is associated with the default network.

  4. We can get more detailed information by dumping the network definition in its XML form.
virsh net-dumpxml default

output:

<network>
  <name>default</name>
  <uuid>68142228-ab29-445e-8bcd-4ea334546217</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:11:84:43'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

The following details of the default virtual network are revealed:

  • The virtual network operates in NAT mode.
  • DHCP services are enabled for guests connected to the virtual network.
  5. Show information about the virbr0 bridge.
brctl show virbr0

output:

bridge name    bridge id            STP enabled    interfaces
virbr0         8000.525400118443    yes            virbr0-nic
  6. Display the IPv4 address for the virbr0 bridge interface.
ip addr show virbr0 | awk '/inet/ { print $2 }'

output:

192.168.122.1/24

The IPv4 address assigned to the interface is also the gateway address for nodes (guests) connected to the virtual network.

We should now have a better understanding of how all the network pieces fit together, so let’s move on to creating a new libvirt virtual network. This will be the management network for our GNS3 devices.

  1. Install the nano text editor (if required).
sudo apt update && sudo apt install nano
  2. Create the XML definition file with a text editor.
nano net-mgmt.xml

Add the following content:

<network>
  <name>mgmt</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='10.99.1.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.99.1.128' end='10.99.1.254'/>
    </dhcp>
  </ip>
</network>
  • ⌃ + o (Save) the file.
  • ⏎ to confirm.
  • ⌃ + x (exit) the nano text editor.
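Alternatively, if you'd rather skip the interactive editor, the same file can be written in one step with a shell heredoc:

cat > net-mgmt.xml <<'EOF'
<network>
  <name>mgmt</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='10.99.1.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.99.1.128' end='10.99.1.254'/>
    </dhcp>
  </ip>
</network>
EOF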
  3. As a safety measure, validate the libvirt XML for compliance.
virt-xml-validate net-mgmt.xml
  4. Define the mgmt network from the libvirt XML file.
virsh net-define net-mgmt.xml
  5. Again, list all the libvirt virtual networks on the host.
virsh net-list --all

output:

Name                 State      Autostart     Persistent
----------------------------------------------------------
default              active     yes           yes
mgmt                 inactive   no            yes

The mgmt network is listed, but it requires a couple more steps to complete the configuration.

  6. Start the inactive mgmt network.
virsh net-start mgmt
  7. Set the mgmt network to autostart with the libvirtd service on boot.
virsh net-autostart mgmt
  8. Verify the mgmt network is active and enabled for autostart.
virsh net-list --all

output:

Name                 State      Autostart     Persistent
----------------------------------------------------------
default              active     yes           yes
mgmt                 active     yes           yes
  9. Just as we did with the default network, we can display information about the mgmt network.
virsh net-info mgmt

output:

Name:           mgmt
UUID:           91c49b86-faaa-4b8a-a0d5-f3d00e5b2511
Active:         yes
Persistent:     yes
Autostart:      yes
Bridge:         virbr1
  10. Display the network definition in XML.
virsh net-dumpxml mgmt

output:

<network>
  <name>mgmt</name>
  <uuid>91c49b86-faaa-4b8a-a0d5-f3d00e5b2511</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:64:b4:80'/>
  <ip address='10.99.1.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.99.1.128' end='10.99.1.254'/>
    </dhcp>
  </ip>
</network>
  11. Show information about the virbr1 bridge.
brctl show virbr1

output:

bridge name    bridge id            STP enabled    interfaces
virbr1         8000.52540064b480    yes            virbr1-nic
  12. DHCP for libvirt uses the dnsmasq program. Verify the process(es) for the mgmt network are listed.
pgrep -af 'libvirt/dnsmasq/mgmt.conf'

output:

1691 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/mgmt.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
1692 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/mgmt.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
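Later, once guests attach to the network and request addresses, the leases handed out by dnsmasq can also be listed through libvirt (on reasonably recent libvirt versions):

virsh net-dhcp-leases mgmt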
  13. Verify the NAT rules for iptables have been added for the mgmt network.
sudo iptables --list --numeric --table nat

output:

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
RETURN     all  --  10.99.1.0/24         224.0.0.0/24
RETURN     all  --  10.99.1.0/24         255.255.255.255
MASQUERADE  tcp  --  10.99.1.0/24        !10.99.1.0/24         masq ports: 1024-65535
MASQUERADE  udp  --  10.99.1.0/24        !10.99.1.0/24         masq ports: 1024-65535
MASQUERADE  all  --  10.99.1.0/24        !10.99.1.0/24
RETURN     all  --  192.168.122.0/24     224.0.0.0/24
RETURN     all  --  192.168.122.0/24     255.255.255.255
MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0
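NAT for the mgmt network also depends on IPv4 forwarding being enabled on the host. libvirt takes care of this when a NAT-mode network starts; a value of 1 from the following command confirms it:

sysctl net.ipv4.ip_forward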

GNS3

The Linux OS-level networking configuration is set, so we will continue to the next phase. My project includes both a Cisco ASAv node and a Palo Alto Networks (PAN) VM node. Each node will have its first network interface connected to the management network we just created. Although I'm using only a Cisco ASAv and a PAN VM device in my example, the following configuration pattern applies to most other virtual network devices in GNS3.

GNS3 management network
  1. Open the GNS3 client application.
  2. Create a new project.
  3. Drag a Cloud node from the Devices toolbar to the workspace.
  4. Right-click the Cloud node and select Configure.
  5. On the Ethernet interfaces tab:
    • Enable Show special Ethernet interfaces.
    • Click the drop-down menu and select virbr1.
    • Click the Add button.
  6. Select the Misc. tab and enter mgmt-virbr1 in the Name field.
  7. Click the OK button to save the configuration and close the window.
  8. Drag an Ethernet switch (Dynamips) node from the Devices toolbar to the workspace.
  9. Right-click the Ethernet switch node, select Change hostname and enter mgmt in the Hostname field.
  10. Click the OK button to save the configuration and close the window.
  11. Click the Add a link icon in the Devices toolbar.
  12. In the workspace, click the Ethernet switch node and select the Ethernet0 interface.
  13. Click the Cloud node and select the virbr1 interface to link the objects.
  14. Click the Add a link icon in the Devices toolbar to escape the mode.
  15. Drag a Cisco ASAv node from the Devices toolbar to the workspace.
  16. Click the Add a link icon in the Devices toolbar.
  17. In the workspace, click the Cisco ASAv node and select the Management0/0 interface.
  18. Click the Ethernet switch node and select the Ethernet1 interface to link the objects.
  19. Click the Add a link icon in the Devices toolbar to escape the mode.
  20. Drag a Palo Alto Networks (PAN) VM node from the Devices toolbar to the workspace.
  21. Click the Add a link icon in the Devices toolbar.
  22. In the workspace, click the Palo Alto Networks (PAN) VM node and select the management interface.
  23. Click the Ethernet switch node and select the Ethernet2 interface to link the objects.
  24. Click the Add a link icon in the Devices toolbar to escape the mode.

Palo Alto Networks (PAN) VM

The device-level PAN configuration is automagically set (for management) thanks to DHCP and PAN-OS defaults.

  1. Right-click the PAN VM node and select Start.
  2. Right-click the PAN VM node and select Console.
  3. Log in with a username/password of admin/admin.
  4. Verify the DHCP client for the management interface grabbed an IPv4 address from dnsmasq on the Linux host.
show interface management

output:

-------------------------------------------------------------------------------
Name: Management Interface
Link status:
  Runtime link speed/duplex/state: 1000/full/up
  Configured link speed/duplex/state: auto/auto/auto
MAC address:
  Port MAC address 0c:5f:c1:c7:75:00

Ip address: 10.99.1.212
Netmask: 255.255.255.0
Default gateway: 10.99.1.1
Ipv6 address: unknown
Ipv6 link local address: fe80::e5f:c1ff:fec7:7500/64
Ipv6 default gateway:
-------------------------------------------------------------------------------
  5. Optionally, verify the PAN management interface has Internet connectivity.
ping count 5 host www.gns3.com

Cisco ASAv

My project uses the Cisco ASAv 9.9(2) with ASDM 7.9(2), so I complete the following tasks in the CLI:

  • Create the gns3 user.
  • Configure the m0/0 interface.
  • Set the default route for the management security zone.
  • Configure SSH (Secure Shell).
  • Enable Cisco Adaptive Security Device Manager (ASDM) access.
  • Save the configuration.
  1. Right-click the Cisco ASAv node and select Start.
  2. Right-click the Cisco ASAv node and select Console.
  3. Set the configuration.
en
conf t
username gns3 password gns3 privilege 15
int m0/0
 nameif management
 security-level 100
 ip addr 10.99.1.10 255.255.255.0
 no shut
 exit
route management 0 0 10.99.1.1
aaa authentication ssh console LOCAL
aaa authorization exec LOCAL auto-enable
ssh version 2
ssh timeout 60
ssh key-exchange group dh-group14-sha1
ssh scopy enable
ssh 0 0 management
domain-name example.com
crypto key generate rsa usage-keys label SSHKEYS modulus 1024
asdm image boot:/asdm-79247.bin
http server enable 8443
http 0.0.0.0 0.0.0.0 management
end
copy run start
  4. Optionally, verify the ASAv management interface has Internet connectivity.
ping 8.8.8.8

SSH Local Port Forwarding

We have accomplished quite a bit, but we haven’t answered the most crucial question: how do we open an application (e.g., web browser, Cisco ASDM) on our local computer that connects to a remote device running on the GNS3 VM (server)?

SSH port forwarding serves as a wrapper around arbitrary TCP traffic. It is sometimes called tunneling because the SSH connection provides a secure tunnel through which another TCP/IP connection may pass.

The three types of port forwarding are local, remote, and dynamic. We only concern ourselves with local port forwarding for the purpose of this post. Local port forwarding forwards connections made to a port on the client, through the SSH server, to a destination port reachable from the server. Essentially you're saying, "Take a port on (or behind) the SSH server and make it local to my client."
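As a generic illustration (the addresses and names here are placeholders, not part of our lab), the following makes TCP port 80 of a host reachable from the SSH server appear on port 8080 of the local machine:

ssh -L 127.0.0.1:8080:203.0.113.10:80 user@ssh-server

A browser pointed at http://127.0.0.1:8080 is then carried over the SSH session and delivered to 203.0.113.10:80 as seen from the server.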

  1. Identify the IPv4 address and port number for the service running on the remote device.

PAN VM

show interface management | match 'Ip addr'
show system services
show netstat listening yes numeric-ports yes | match :443

HTTPS is listening on the standard TCP/443 port for the logical address of the management interface.

Cisco ASAv

show int management ip brief
show run http
show asp table socket

HTTPS (for ASDM) is listening on TCP/8443 port for the logical address of the management interface.

  2. Choose a local port to use for the forwarding.

On Unix-like systems, TCP ports below 1024 are reserved for system use. Only root can bind to these ports. As an unprivileged user, we can attach the local end of our SSH port forwarder to any port above 1024. Microsoft operating systems do not implement privileged ports. Anyone can bind to any open port on the system.

I prefer to choose a number from the ephemeral port range (49152 - 65535). I will use the following assignment for my devices:

  • PAN VM: TCP/52001
  • Cisco ASAv: TCP/52002
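On Linux, a quick check that nothing else is already bound to the chosen ports (no output means they are free):

ss -tln | grep -E ':5200[12]\s'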

Username and SSH Private Key (Google Compute Engine and Microsoft Azure)

Skip this section if using the GNS3 VM. The GNS3 VM uses password authentication for SSH.

If using either gcp-gns3server or azure-gns3server, the SSH server on the Ubuntu VM instance is configured to accept only public key authentication. We are tasked with both identifying the user and downloading the associated SSH private key file from the Cloud Shell instance.

Google Compute Engine

  1. Sign in to the GCP Console.
  2. Select the GNS3 project.
  3. Activate the Google Cloud Shell.
  4. Identify the user for the SSH private key file.
echo $USER
  5. Display the fully qualified file path for the SSH private key file.
ls $HOME/.ssh/google_compute_engine
  6. Click the (vertical ellipsis) icon located in the upper-right corner of the Google Cloud Shell toolbar.
  7. Select Download file.
  8. Enter the fully qualified file path from the output of the previous ls command.
  9. Click the DOWNLOAD link.

Microsoft Azure

  1. Sign in to the Azure Portal.
  2. Launch the Azure Cloud Shell (Bash).
  3. Identify the user for the SSH private key file.
echo $USER
  4. Click the Upload/Download files icon on the Azure Cloud Shell toolbar.
  5. Select Download.
  6. Enter /.ssh/id_rsa in the required field.
  7. Click the Download button.

Linux and macOS

SSH key pairs for Unix-like systems are usually stored in the local SSH directory ($HOME/.ssh). The directory is created with the initial usage of OpenSSH client programs.

  1. Create the local SSH directory (if required).
if [ ! -d "$HOME/.ssh" ]; then mkdir -pm 700 "$HOME/.ssh"; fi
  2. Move (and rename) the SSH private key file to the local SSH directory.
# GCE
mv -v $HOME/Downloads/google_compute_engine $HOME/.ssh/google-gns3server
# Azure
mv -v $HOME/Downloads/id_rsa $HOME/.ssh/azure-gns3server
  3. Secure the SSH private key file permissions.
# GCE
chmod -v 0600 $HOME/.ssh/google-gns3server
# Azure
chmod -v 0600 $HOME/.ssh/azure-gns3server
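A quick way to confirm the key file survived the download intact is to ask ssh-keygen for its fingerprint (it will prompt for a passphrase if the key has one):

# GCE
ssh-keygen -lf $HOME/.ssh/google-gns3server
# Azure
ssh-keygen -lf $HOME/.ssh/azure-gns3server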

OpenSSH

We use the OpenSSH client on Linux and macOS.

To tell the client to activate local port forwarding, use the -L flag.

ssh -L <local_ip>:<local_port>:<remote_mgmt_ip>:<remote_mgmt_port> <gns3_server_ip_or_hostname>

I also like to add a couple of flags when local port forwarding. The -N flag tells SSH not to execute a remote command, including creating a terminal, on the server. The -f flag tells SSH to put itself in the background.

ssh -fNL <local_ip>:<local_port>:<remote_mgmt_ip>:<remote_mgmt_port> <gns3_server_ip_or_hostname>

GNS3 VM

Authenticate with the username and password of gns3 to the GNS3 VM with IP address 192.168.200.240, and forward port 443 (HTTPS) on the PAN VM management interface (10.99.1.212) to the local port 52001 on my computer:

ssh -fNL 127.0.0.1:52001:10.99.1.212:443 gns3@192.168.200.240

Google Compute Engine

Authenticate with user marc (and associated SSH private key file) to the remote GNS3 server with public IP address 35.199.147.244, and forward port 8443 (HTTPS) on the Cisco ASAv management interface (10.99.1.10) to the local port 52002 on my computer:

ssh -i $HOME/.ssh/google-gns3server -fNL 127.0.0.1:52002:10.99.1.10:8443 marc@35.199.147.244

Disconnect the SSH local port forward session

Because we put each SSH session in the background, we need to terminate each session manually when we’re finished with the port forward. The following terminates both sessions in a single command:

pkill -f '127.0.0.1:5200[1-2]'
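To see which processes match before terminating them, the same pattern works with pgrep:

pgrep -af '127.0.0.1:5200[1-2]'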

PuTTY

PuTTY is the most popular SSH client for the Windows platform. It’s a great alternative if you prefer to use a GUI instead of the command line-based SSH tools.

PEM → PPK Private Key Format Conversion

Skip this section if using the GNS3 VM. The GNS3 VM uses password authentication for SSH.

The SSH private key downloaded from the Cloud Shell instance, for either Google Compute Engine or Microsoft Azure, is in the .pem format. The key needs to be converted to the .ppk format before it can be used in the PuTTY application.
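As an aside, if the command-line puttygen utility is available (for example, from the putty-tools package on most Linux distributions), the same conversion can be done without the GUI:

# GCE
puttygen google_compute_engine -o google-gns3server.ppk
# Azure
puttygen id_rsa -o azure-gns3server.ppk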

  1. Open the PuTTY Key Generator application.
  2. Click the Load button for the Load an existing private key file action.
PuTTYgen load
  3. Select All Files (*.*) from the file extension drop-down list.
PuTTYgen select
  4. Navigate to and select the private key file:

    • google_compute_engine for Google Compute Engine
    • id_rsa for Microsoft Azure
  5. Click the Open button.

  6. Click the OK button to close the PuTTYgen Notice window.

PuTTYgen Notice
  7. Click the Save private key button for the Save the generated key action.
PuTTYgen save
  8. Click the Yes button to acknowledge and close the PuTTYgen Warning window.
PuTTYgen Warning
  9. Save private key as:

    • google-gns3server for Google Compute Engine
    • azure-gns3server for Microsoft Azure
  10. Click the Save button.

  11. Close the PuTTY Key Generator application.

GNS3 VM

Authenticate with the username and password of gns3 to the GNS3 VM with IP address 192.168.200.240 and:

  • Forward port 443 on the PAN VM management interface to the local port 52001 on my computer
  • Forward port 8443 on the Cisco ASAv management interface to the local port 52002 on my computer
  1. Open the PuTTY application.
  2. Go to Session.
  3. Enter 192.168.200.240 in the Host Name (or IP address) field.
PuTTY session
  4. Go to Connection → SSH → Tunnels.
  5. Enter 52001 in the Source port field.
  6. Enter 10.99.1.212:443 in the Destination field.
  7. Click the Add button to add the entry.
  8. Enter 52002 in the Source port field.
  9. Enter 10.99.1.10:8443 in the Destination field.
  10. Click the Add button to add the entry.
PuTTY PAN and ASAv
  11. Click the Open button to be prompted for the login credentials and establish the session.

Microsoft Azure

Authenticate with user mweisel (and the associated SSH private key file) to the GNS3 server hosted on Microsoft Azure with the public IP address 13.66.209.19 and:

  • Forward port 443 on the PAN VM management interface to the local port 52001 on my computer
  • Forward port 8443 on the Cisco ASAv management interface to the local port 52002 on my computer
  1. Open the PuTTY application.
  2. Go to Session.
  3. Enter 13.66.209.19 in the Host Name (or IP address) field.
PuTTY session Azure
  4. Go to Connection → SSH → Auth.
  5. Click the Browse button for the Authentication parameters section.
  6. Navigate to and select the azure-gns3server private key file.
  7. Click the Open button.
PuTTY auth for Azure
  8. Go to Connection → SSH → Tunnels.
  9. Enter 52001 in the Source port field.
  10. Enter 10.99.1.212:443 in the Destination field.
  11. Click the Add button to add the entry.
  12. Enter 52002 in the Source port field.
  13. Enter 10.99.1.10:8443 in the Destination field.
  14. Click the Add button to add the entry.
PuTTY PAN and ASAv
  15. Click the Open button to be prompted for the username and establish the session.

There’s No Place Like 127.0.0.1

Verify ports are open and in a listening state on our local computer.

Windows

Run the following command in a PowerShell console:

Get-NetTCPConnection -LocalAddress 127.0.0.1 -LocalPort 52001,52002 | Format-Table Local*,State

output:

LocalAddress LocalPort  State
------------ ---------  -----
127.0.0.1        52002 Listen
127.0.0.1        52001 Listen

macOS

Run the following command in a terminal:

netstat -anf inet | grep '5200[1-2]'

output:

tcp4       0      0  127.0.0.1.52002        *.*                    LISTEN
tcp4       0      0  127.0.0.1.52001        *.*                    LISTEN

Linux

Run the following command in a terminal:

ss -tnlp | grep '5200[1-2]'

output:

LISTEN   0         128               127.0.0.1:52001            0.0.0.0:*        users:(("ssh",pid=1595,fd=4))
LISTEN   0         128               127.0.0.1:52002            0.0.0.0:*        users:(("ssh",pid=1603,fd=4))

Point applications to localhost

And now, for the moment of truth:

  • Open a web browser and enter https://127.0.0.1:52001 in the URL address bar to connect to the Web UI of the remote PAN VM device.
  • Open a new browser window (or tab) and enter https://127.0.0.1:52002 in the URL address bar to connect to the ASDM page of the remote Cisco ASAv device.
  • Ansible/Nornir: Set the specific host variables (e.g., IPv4 address, port) for each device.
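If a page doesn't load, the tunnels can be sanity-checked from a terminal first; -k is needed because the devices present self-signed certificates:

curl -kI https://127.0.0.1:52001    # PAN VM web UI
curl -kI https://127.0.0.1:52002    # Cisco ASDM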
PAN login
ASAv login
Ansible