Setting up a UPS for a Synology NAS, Odroid and Arch Linux

To protect the data residing on my Synology NAS I bought and installed a UPS, an APC 700U to be exact. The trigger for buying and installing one was a family member losing (some) data on an external hard disk due to a power loss. The data recovery cost more than a UPS, and of course the lack of backups added to the outcome of that event.

While no Synology NAS was involved in the above data loss, I know first hand that backups by themselves add only one layer of protection against possible data loss, and since I also have power failures from time to time, it was just a matter of time before my NAS was hit by an unrecoverable power event and, who knows, data loss.

So buying a UPS just might be a good idea…

Anyway, the APC UPS that I bought has a USB port that can be connected to the Synology, which allows the UPS to be monitored and lets the NAS shut down gracefully before the UPS battery is exhausted. As a bonus, the NAS can also run a UPS monitoring server, so that other devices sharing the same UPS power source can be notified of a power event through the network. Just keep in mind that the network switch or router must also be power protected…

Installing the UPS:
Installing the UPS is as simple as powering down all the devices that will connect to it (Synology NAS, Odroid, external hard disks, PC base unit and network switch) and connecting a USB cable from the UPS to one of the DiskStation's rear USB ports.

After starting up, just go to the DSM Control Panel, select Hardware & Power and then the UPS tab. Enable the UPS by ticking Enable UPS support, and also enable the UPS server for remote clients by ticking Enable UPS Network Server:

UPS Configuration

Pressing the Device Information button shows that the UPS was correctly detected:

UPS Device Information

To end the configuration, we press the Permitted DiskStation Devices button and add the IP addresses of the devices that also have their power sources connected to the UPS and will monitor the power status; in my case, the IPs of the Odroid and of my home PC.

And that’s it.

Setting up Arch Linux
Interestingly, I didn't find the NUT (Network UPS Tools) packages in the core Arch repositories, but they are available in the AUR:

 yaourt -S network-ups-tools nut-monitor

The above packages will install the core NUT tools and a graphical monitor.

We can now scan our network for the UPS:

nut-scanner -s
Scanning USB bus.
Scanning SNMP bus.
Scanning XML/HTTP bus.
Scanning NUT bus (old connect method).
        driver = "nutclient"
        port = "ups@"

With the UPS reference found, we can now query it:

 upsc ups@
Init SSL without certificate database
battery.charge: 100
battery.voltage: 13.9
battery.voltage.nominal: 12.0
device.mfr: American Power Conversion
device.model: Back-UPS XS 700U  

We can check the load, for example, with:

upsc  ups@ | grep load
Init SSL without certificate database
ups.load: 37

We need now to modify the following files on /etc/ups:

  1. nut.conf
  2. upsmon.conf

First, as root, we copy upsmon.conf.sample to upsmon.conf and add the following line:

MONITOR ups@ 1 * * slave

after the other commented-out MONITOR lines. Since I'm only monitoring, I've just put * for the username and the password used for authentication.

In nut.conf, we change the MODE line to MODE=netclient.
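Putting the two edits together, the relevant parts of both files might look like the sketch below. This assumes the DSM NUT server exports the UPS under the name ups (which the upsc output above suggests); the host is a placeholder that you must replace with your DiskStation's IP address:

```ini
# /etc/ups/nut.conf -- run NUT as a network client only
MODE=netclient

# /etc/ups/upsmon.conf -- monitor the UPS served by the Synology;
# <synology-ip> is a placeholder for the DiskStation's address.
# The two * mean no username/password are sent (monitor-only setup).
MONITOR ups@<synology-ip> 1 * * slave
```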

After changing the files, we enable and start the UPS monitoring service:

sudo systemctl enable nut-monitor
sudo systemctl start nut-monitor

We are now monitoring the UPS status through the network. Keep in mind that the hub/switch power should also be connected to the UPS.

For monitoring we can use the nut-monitor application to see the UPS status in a nicer way:

NUT Monitor

To make the application easier to use, we can create a profile and save it; then, when launching nut-monitor, we pass the profile name with the -F switch.

Setting up Odroid
To allow the Odroid to monitor the UPS status through the Synology UPS server, we also need to install the NUT UPS tools (the same ones used by Arch Linux and DSM):

 apt-get install nut

The configuration steps for the Odroid are the same as the Arch Linux steps, but since the Odroid is running an Ubuntu variant, the files are located in a different path: /etc/nut.

To start the monitoring with the new configuration, just run /etc/init.d/ups-monitor restart.

If authentication is needed, check the NUT configuration files on the Synology, located at /usr/syno/etc/ups/.

The upsd.users file has the user and password defined by default by the NUT tools on DSM.
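The entries in upsd.users follow the standard NUT format. As a hedged sketch, such an entry looks like the following (the user name and password below are placeholders, not DSM's actual values; check the file on your own DiskStation):

```ini
# /usr/syno/etc/ups/upsd.users (example entry, placeholder credentials)
[monuser]
        password = secret
        upsmon slave
```

A client that authenticates would then put these credentials in its MONITOR line in place of the * wildcards.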

Synology Reverse Proxy with DSM 6.0

The latest Synology DSM 6.0 version now supports reverse proxy configuration out of the box, so there is no need to build and edit internal configuration files; everything can now be done in the DSM web frontend.

To configure the reverse proxy we go to the DSM web application, select the Control Panel and then the Application Portal icon:

Application portal Configuration

We can see that I already have some applications configured, with their internal HTTP ports defined. For example, the Notes application is accessed internally at the URL http://diskstation:9350. Note that I haven't defined an HTTPS port because I'll use the reverse proxy as the HTTPS frontend.

So we take note of the ports of the applications that we want to make available through the reverse proxy (in my case, port 9350 for the Notes application) and create a new reverse proxy configuration by selecting the Reverse Proxy tab and pressing the Create button:

Notes Reverse Proxy Configuration

Take note of the following:

– I'm using one of the free domains provided by Synology.
– The Synology domains, at least the one I'm using, support sub-domain wildcarding.
– So I can have one name as my main domain.
– And I can use any sub-domain below it, for example one per application.

So I will make the Notes application available at its own sub-domain, and that sub-domain is the hostname that I need to define in my reverse proxy configuration.

With the above rule, all requests to that sub-domain will be routed to the localhost server running on port 9350…

And that's it. Just create another set of rules for the other applications under their own sub-domains.

Edit: The following configuration shows the Note Station, File Station and Video Station reverse proxy configurations that make those "apps" accessible from the external IP. Note that this means that port 80 (plain HTTP, if used) and port 443 (HTTPS) must be forwarded on the router to the Synology internal IP:


In my case only HTTPS is used, so I haven't forwarded port 80 from the external interface of the router to the Synology.

Synology – Installing the Python pip package installer

A simple recipe for installing the pip utility, which is needed to install Python packages/modules:

This needs to be done as the root user, so ssh into the DiskStation with that user.

1st) Install Python from the Package Center web interface. I have Python 2.7 installed.

2nd) Connect to your Synology NAS through ssh.

3rd) Get the pip installer with wget.

4th) Execute the installer with python; it will take a while:

 # python 
Collecting pip
  Downloading pip-8.0.2-py2.py3-none-any.whl (1.2MB)
    100% |████████████████████████████████| 1.2MB 42kB/s 
Collecting setuptools
  Downloading setuptools-19.6-py2.py3-none-any.whl (472kB)
    100% |████████████████████████████████| 475kB 109kB/s 
Collecting wheel
  Downloading wheel-0.26.0-py2.py3-none-any.whl (63kB)
    100% |████████████████████████████████| 65kB 291kB/s 
Installing collected packages: pip, setuptools, wheel
Successfully installed pip-8.0.2 setuptools-19.6 wheel-0.26.0

We can now execute the pip command:

# pip

  pip <command> [options]

  install                     Install packages.
  download                    Download packages.
  uninstall                   Uninstall packages.
  freeze                      Output installed packages in requirements format.
  list                        List installed packages.
  show                        Show information about installed packages.
  search                      Search PyPI for packages.
  wheel                       Build wheels from your requirements.
  hash                        Compute hashes of package archives.
  help                        Show help for commands.

Now we can install the modules that we need.

For example, installing paho-mqtt for MQTT support:

# pip install paho-mqtt
Collecting paho-mqtt
  Downloading paho-mqtt-1.1.tar.gz (41kB)
    100% |████████████████████████████████| 45kB 589kB/s 
Building wheels for collected packages: paho-mqtt
  Running bdist_wheel for paho-mqtt ... done
  Stored in directory: /var/services/homes/root/.cache/pip/wheels/97/db/5f/1ddca8ee2f9b58f9bb68208323bd39bb0b177f32f434aa4b95
Successfully built paho-mqtt
Installing collected packages: paho-mqtt
Successfully installed paho-mqtt-1.1

And we can now use the paho.mqtt module in our Python programs.

Synology DSM – Reverse Proxy (Part 2)

On my previous post, Synology virtual sites for FileStation and others, we saw how to create new sub-domains under your Synology's main DNS name to access DSM applications like File Station, Note Station, and so on.

The configuration done there creates a full virtual site; in other words, all URL paths under the new sub-domain are proxied to the redirected application. Supposing that a sub-domain points to File Station, all URL paths below that URL are redirected to the File Station site.

The following configuration is a bit different: its purpose is to redirect a URL path from the main site to something else serving that path, for example redirecting /api to a backend Node.js REST server. The main reason for this is to let the Synology host a site that uses, for example, jQuery or AngularJS, and also host the REST API, which needs to be on the same domain due to the browser's CORS protection. CORS protection means that a page loaded from a site can only request data, by REST for example, from its origin web site; my pages can only call REST services on that same domain, and it needs to be the same FQDN and port for the REST request to work. See more info on the Wikipedia page: Same-origin policy.

And it is because of the above reason (CORS) that we need to be able to proxy URL paths, and not full virtual sites as in the previous post pointed to above.

Doing this is quite simple; the only condition is that the path is redirected from the main Synology user site.

So let’s redirect the path /api to a back end server:

Create a file named (for example) httpd-api-redirect.conf with the following content, replacing backend_address and backend_port with the right information:

<Location /api>
  ProxyPass http://backend_address:backend_port/api
</Location>

Save this file in /etc/httpd/sites-enabled-user, then stop and restart the Apache server:

(Edit: in DSM 6.0 the path is /usr/local/etc/httpd/sites-enabled)

To stop the Web Station: /sbin/initctl stop httpd-user
To start the Web Station:  /sbin/initctl start httpd-user

After that, the main site should work as usual, but requests under /api should be redirected to the backend server.

And that's it: we can now host jQuery and/or Angular sites that use REST services, all hosted on the Synology Apache server.

There are also some caveats regarding this configuration if serving another site instead of a REST API, such as the full/relative paths issue with the proxied site and, regarding REST, the use of the Access-Control-Allow-Origin header. But for the simple purpose of hosting the site and the REST API under the same domain name, this works fine.

Mosquitto Broker with Websockets enabled on the Synology NAS

Synology NASes are great devices, mainly due to the bundled software (Cloud Station and Photo Station, to name a few), but also because they can run other software provided by non-official package sources.
One such provider is SynoCommunity, which provides a Mosquitto MQTT broker packaged for the Synology NAS. While I was able to successfully cross-compile the Mosquitto broker with websockets enabled for my DS212+ NAS (I do have a draft post with the steps that I never published), the SynoCommunity package is a simpler way to install and use a Mosquitto broker with WebSockets protocol support compiled in.

So, what is needed to be done?

Setting up:

  • 1st: Add the SynoCommunity package source to your Synology server. See the detailed instructions on the community web page.
  • 2nd: Install the Mosquitto package from the Synology package manager.
  • 3rd: Edit the file located at /usr/local/mosquitto/var/mosquitto.conf. You need to connect through ssh as root to do this.

At the end of this file add the following lines:

listener 1883
listener 9001
protocol websockets

Note that 1883 is the standard MQTT broker port, and 9001 is the port that I've chosen to have the websockets server listening/answering on.

After the change, stop and start the Mosquitto broker from the action dropdown for the Mosquitto package in the package manager.

Still using ssh, and logged in as root, check that port 9001 is now active:

root@DiskStation:/volume1/homes/root# netstat -na | grep 9001 
tcp        0      0  *               LISTEN       
tcp        0      0      ESTABLISHED

In the above case we can see that there is a websocket client connected to the broker from my workstation.


For testing we can use the HiveMQ websocket client, which allows communicating with the broker over the websockets transport.

Despite the fact that the page is served from an external, internet-based server, the websocket client runs in YOUR web browser, on your machine, in your own (internal) network. So, in the connection settings for the broker, for Host I use my internal Synology IP address, and for Port I use 9001.

I can now subscribe and publish to topics from the browser, and with MQTT-SPY we can also watch and check the MQTT websockets communication.

And that’s it!

md5deep and hashdeep on Synology DS212+

My personal photos live on my desktop computer, and I keep backups of them on my Synology DS212+ and on my old faithful Linksys NSLU2 with an external disk.

The issue I have now is that, to keep backup times short, I only back up the current year, because, well, the other years are static data. Previous years are already backed up and that data is not supposed to get modified. Right? So what if corruption happens? How do I detect it? How can I be warned of such an event? Right now I have no way to check whether my digital photos from 2003 are intact in all three of my copies. And if I detect corruption in one bad file/photo, which copy is the correct one?

So I'm investigating this issue, and among the tools available to create checksums and later verify that everything is OK and no corruption has appeared are the md5deep and hashdeep programs. The sources are available at this link:

These are the instructions for cross-compiling these tools for the ARM-based Synology DS212+. As usual, this is done on a Linux desktop/server machine.

1st) Set up the cross-compiling environment: Check out this link on the topic:

2nd) Setting up the cross compiling environment variables is now a bit different:

export INSTALLDIR=/usr/local/arm-marvell-linux-gnueabi
export TARGETMACH=arm-marvell-linux-gnueabi
export BUILDMACH=i686-pc-linux-gnu
export CROSS=arm-marvell-linux-gnueabi
export CC=${CROSS}-g++
export LD=${CROSS}-ld
export AS=${CROSS}-as
export AR=${CROSS}-ar
export GCC=${CROSS}-g++
export CXX=${CROSS}-g++

We will use the C++ compiler.

3rd) Create a working directory and clone the repository to your local machine: git clone
4th) Change to the hashdeep directory and execute the following commands:

[pcortex@pcortex:hashdeep]$ sh
[pcortex@pcortex:hashdeep]$ ./configure --host=arm-marvell-linux-gnueabi
[pcortex@pcortex:hashdeep]$ make

And that's it. If everything went well, the commands are now available in the hashdeep/src directory:

[pcortex@pcortex:src]$ file hashdeep
hashdeep: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/, for GNU/Linux 2.6.16, not stripped

We can now copy the commands to the DiskStation and start using them.

Now all I need is to devise a method that creates and checks each year's photo hashes/signatures and warns me if a difference is detected. I'm thinking of using the audit mode of the hashdeep command and checking one or two years each day: for example, on Mondays check 2003 and 2013, on Tuesdays 2004 and 2014, and so on.

Two Factor Authentication for Synology and others – Alternative to mobile Apps

One way to secure access to our Synology DiskStation web management interface and File Manager tool is to enable two-factor authentication (2FA). This means we need something we know (the username and password) and something we have (a mobile phone) to access these interfaces.

Check out the following Synology page:

For that we need to install Google Authenticator, a mobile application, so that we can get the time-dependent code (TOTP) needed in the two-factor authentication process.

This works fine, but what if I lose my mobile phone? And what if I'm too lazy to get up and fetch my mobile phone or tablet for the TOTP to log in?

In the first case, if you have e-mail notifications correctly configured on your Synology DiskStation, you can get an emergency code to log in again. If you haven't, you can only recover the 2FA key (to generate a valid TOTP again) by accessing the NAS through ssh/telnet. The keys and available emergency codes are located at /usr/syno/etc/preference/admin/google_authenticator.

For the second situation there is a solution (well, two, but I'll use the simplest one). What we need is to install on our PC (a trusted one, at least) an application that mimics the mobile Google Authenticator application. This application is GAuth. It can be installed as a browser add-on, launched directly from a web site or, in my case, run from a local directory: I downloaded it and added a shortcut to its index.html file. We can get a copy of the application with the git clone command.

When you add a key, the application stores the Secret Key in the browser's local storage; it's not stored anywhere else, so it is safe. Now we only need to get the key from the Synology directory above and we are all set. We can check whether the code generated by GAuth is the same as the one generated by the mobile device, and if so, we have a backup!
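For the curious, the codes that Google Authenticator and GAuth produce can be reproduced with a few lines of standard-library Python. This is a sketch of the RFC 4226 (HOTP) and RFC 6238 (TOTP) algorithms, not DSM's or GAuth's actual code; the secret in the google_authenticator file is a Base32 string like the one any TOTP client accepts:

```python
# Sketch of the TOTP algorithm (RFC 4226 HOTP + RFC 6238 time steps)
# using only the Python standard library.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_base32, timestamp=None, period=30, digits=6):
    """Return the TOTP code for a Base32-encoded shared secret."""
    if timestamp is None:
        timestamp = time.time()
    key = base64.b32decode(secret_base32.upper())
    counter = int(timestamp) // period            # time step (RFC 6238)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Feeding it the Base32 secret from the google_authenticator file should yield the same six-digit code as the mobile app for the current 30-second window.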