
How to add Network-Attached Storage to your high availability network

In our guides on how to create a High Availability network that will host websites or applications, we used lsyncd. It's a quick and dirty way to sync files between multiple hosts. In this guide, we are going to set up network-attached storage, in the form of an NFS server, and add it to our existing network. We will then update Apache on each backend server so it serves our website's files from the NFS server. Once we do this, we will have a basic Cloud network built from a few VPS servers.

If your backend servers are going to request files from an NFS server, it must be located near them; if it's far away, the extra latency will add to your website's load time. We're using an Internal Instance that is replicated on the hypervisor, but you could use an NVMe VPS with a public IPv4. The most important thing is that the NFS server is as close as possible to your backend servers.

Set up network-attached storage

First, let’s install the NFS server.

apt update -y && apt upgrade -y
apt install nfs-kernel-server -y
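
apt will normally start the service for you, but before going further it's worth confirming that it is running and enabled. On a systemd-based distribution such as Ubuntu or Debian, a quick check looks like this:

systemctl enable --now nfs-kernel-server
systemctl status nfs-kernel-server --no-pager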

Next, we need to create a directory that is going to hold all of the files our backend servers will access. In this example, our website’s files will be located in /www.

mkdir -p /www
chown -R root:root /www
chmod 777 /www

Once the folder is created, you can move all of your website's files into the /www folder on your NFS server.
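
If the files currently live on one of your backend servers, rsync is an easy way to copy them over. This is just a sketch: it assumes rsync is installed on both machines, that you have root SSH access to the NFS server, and that your site currently lives in /var/www/blog.f2h.cloud/public_html, so adjust the paths and IP to match your setup.

rsync -avz /var/www/blog.f2h.cloud/public_html/ root@10.20.20.15:/www/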

Allow Connections

By default, all connections to the NFS server are blocked. The /etc/exports file controls which IPs are allowed to connect and which directories they can access. To allow more backend servers, just repeat the line with each server's IP, as in the example below.

nano /etc/exports

/www 10.44.73.102(rw,sync,no_subtree_check)
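
For example, if you have three backend servers, the exports file might end up looking like this. The extra IPs (10.44.73.103 and 10.44.73.104) are purely illustrative, so substitute your own backend addresses:

/www 10.44.73.102(rw,sync,no_subtree_check)
/www 10.44.73.103(rw,sync,no_subtree_check)
/www 10.44.73.104(rw,sync,no_subtree_check)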

Finally, export the shares and restart the NFS server.

exportfs -a
systemctl restart nfs-kernel-server
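
Before moving on, it's worth confirming the export is active. Both of these tools ship with the NFS packages we just installed, so you can run them on the NFS server itself:

exportfs -v
showmount -e localhost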

Mount NFS Storage on the Backend Servers

Now we are going to mount the /www folder on each of our backend servers. First, install the NFS common tools package.

apt install nfs-common

Next, create the folder you will use as a mount point and mount the storage temporarily to ensure it works correctly.

mkdir -p /www
mount 10.20.20.15:/www /www
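
To confirm the mount is active, check it from the backend server:

df -h /www
mount | grep /www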

Assuming the folder is mounted correctly, you should now see your website’s files located in the /www folder. To make the change permanent, add the following line to the /etc/fstab file.

10.20.20.15:/www    /www    nfs    auto,nofail,noatime,nolock,intr,tcp,actimeo=1800    0 0
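
A simple way to test the fstab entry without rebooting is to unmount the temporary mount and let fstab remount it:

umount /www
mount -a
df -h /www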

Now all we need to do is make some edits to the Apache configuration file.

Modify Apache

We need to tell Apache on each backend server to look for our website's files on the network-attached storage. Open the /etc/apache2/sites-available/blog.f2h.cloud.conf file (replacing the name with your own configuration file) and point the DocumentRoot and Directory sections at the new directory.

<VirtualHost *:80>
    ServerName blog.f2h.cloud
    ServerAlias www.f2h.cloud
    ServerAdmin [email protected]
    DocumentRoot /www

    <Directory /www>
        Options -Indexes +FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/blog.f2h.cloud-error.log
    CustomLog ${APACHE_LOG_DIR}/blog.f2h.cloud-access.log combined
</VirtualHost>

Restart Apache with systemctl restart apache2.
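
If you want to catch typos in the virtual host file before restarting, Apache on Debian/Ubuntu ships with a syntax checker:

apache2ctl configtest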

So that’s the process complete. You can now disable lsyncd on your backend servers (see the commands below) and you have a basic cloud setup running. You do sacrifice some speed by using an NFS server, but if the NFS server is close to your backend NVMe VPS servers, it’s an acceptable tradeoff. You can make it slightly faster by using an Internal server for the NFS server.
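
Assuming lsyncd runs as a systemd service, which is the default when it's installed from the Debian/Ubuntu repositories, you can stop and disable it on each backend server with:

systemctl disable --now lsyncd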
