A simple start with the RiotOS operating system for IoT devices

RiotOS is an open source operating system that targets many different embedded platforms while allowing the same code base to be used for several aspects of IoT development: connectivity, protocols and security. On the kernel side it provides threads, inter-process communication and synchronization support, and it also supports several communication protocols, such as BLE, 6LoWPAN, LoRaWAN, and so on.

On the target side, RiotOS supports a wide range of platforms, including a special native platform. What does this mean? It means that, within certain limitations, it is entirely possible to develop and run an IoT application on the same machine where the code is developed, be it a PC or an SBC such as the Raspberry Pi. This opens a window of opportunity to integrate different classes of devices, such as Arduino, STM32 and the RPI, while leveraging the same supporting code base and knowledge.

How to start:

These are very quick start instructions for building a native demo application with network support. As usual, all the instructions apply to a Linux operating system, in my case Arch Linux.

Clone the RiotOS repository:

Just run:

cd /opt
git clone https://github.com/RIOT-OS/RIOT.git

I’ve chosen the /opt directory for the sake of example.

Compile one of the example applications:

All example applications are under the examples directory.
In each application directory there is a Makefile with the build configuration.

For example, under examples/default there is a Makefile that targets the native environment.

Of key interest is the line in this file that defines the target board:

BOARD ?= native

If the BOARD variable is not defined in the environment, the build will use the native board.
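
To target real hardware instead, BOARD can simply be overridden when invoking make (the board name below is only an example and must match one of RIOT's supported boards; the flash and term targets also require the board's programming tool and a connected board):

make BOARD=samr21-xpro
make BOARD=samr21-xpro flash term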

Now we just need to run the make command:

[pcortex@pcortex:default|master]$ make
Building application "default" for "native" with MCU "native".

"make" -C /opt/RIOT/boards/native
"make" -C /opt/RIOT/boards/native/drivers
"make" -C /opt/RIOT/core
"make" -C /opt/RIOT/cpu/native
...
...
...
"make" -C /opt/RIOT/sys/shell
"make" -C /opt/RIOT/sys/shell/commands
   text    data     bss     dec     hex filename
  88189    1104   72088  161381   27665 /opt/RIOT/examples/default/bin/native/default.elf

We can see that the compilation output was placed at /opt/RIOT/examples/default/bin/native/default.elf.

But if we run the default.elf executable, we get:

[pcortex@pcortex:default|master]$ bin/native/default.elf 
usage: bin/native/default.elf [-i <id>] [-d] [-e|-E] [-o] [-c <tty>]
 help: bin/native/default.elf -h

Options:
    -h, --help
        print this help message
    -i <id>, --id=<id>
        specify instance id (set by config module)
    -s <seed>, --seed=<seed>
        specify srandom(3) seed (/dev/random is used instead of random(3) if
        the option is omitted)
    -d, --daemonize
        daemonize native instance
    -e, --stderr-pipe
        redirect stderr to file
    -E, --stderr-noredirect
        do not redirect stderr (i.e. leave stderr unchanged despite
        daemon/socket io)
    -o, --stdout-pipe
        redirect stdout to file (/tmp/riot.stdout.PID) when not attached
        to socket
    -c <tty>, --uart-tty=<tty>
        specify TTY device for UART. This argument can be used multiple
        times (up to UART_NUMOF)

A little side note: under bin there is a native directory, but if we target the compilation at a different platform, a new directory for that platform will be created under bin.

Returning to the result of the above command, we notice that it requires a TAP interface (a virtual Ethernet device provided by the Linux TUN/TAP driver). The RiotOS native code does not access the host operating system's network interface directly, but does it through a TAP interface. We can create one TAP interface, or several, which in the latter case means that we can have several RiotOS applications using a TAP-based network to communicate among themselves and also with the external network. Neat!

RiotOS offers a script to create the TAP interfaces. Before running it, keep in mind that if a Linux kernel update was done recently, a reboot might be needed for the interfaces to be created successfully.

So:
[pcortex@pcortex:RIOT|master]$ ip a

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp6s0:  mtu 4000 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:24:8c:02:dd:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.68/24 brd 192.168.1.255 scope global noprefixroute enp6s0
       valid_lft forever preferred_lft forever
    inet6 fe80::224:8cff:fe02:dd7e/64 scope link 
       valid_lft forever preferred_lft forever

Now we run the tapsetup script that creates the TAP interfaces. When called without any parameters, it creates two TAP interfaces. If we pass the -c argument with a number, for example -c 4, it will create four TAP interfaces. The -d parameter deletes all the interfaces that were created.
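
For example, from the tapsetup directory shown below:

sudo ./tapsetup -c 4   # creates four TAP interfaces bridged together
sudo ./tapsetup -d     # removes the previously created interfaces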

[pcortex@pcortex:tapsetup|master]$ pwd
/opt/RIOT/dist/tools/tapsetup                
[pcortex@pcortex:tapsetup|master]$ sudo ./tapsetup 
creating tapbr0
creating tap0
creating tap1
[pcortex@pcortex:tapsetup|master]$ ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp6s0:  mtu 4000 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:24:8c:02:dd:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.68/24 brd 192.168.1.255 scope global noprefixroute enp6s0
       valid_lft forever preferred_lft forever
    inet6 fe80::224:8cff:fe02:dd7e/64 scope link 
       valid_lft forever preferred_lft forever
8: tapbr0:  mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 2e:bd:d8:88:64:55 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::2cbd:d8ff:fe88:6455/64 scope link 
       valid_lft forever preferred_lft forever
9: tap0:  mtu 1500 qdisc fq_codel master tapbr0 state DOWN group default qlen 1000
    link/ether 9a:a0:bc:fa:c7:61 brd ff:ff:ff:ff:ff:ff
10: tap1:  mtu 1500 qdisc fq_codel master tapbr0 state DOWN group default qlen 1000
    link/ether 2e:bd:d8:88:64:55 brd ff:ff:ff:ff:ff:ff

We can see that both tap0 and tap1 were created and are down.

In two different terminal windows we can now run default.elf, one instance per interface:

Window one: > sudo ./default.elf tap0
Window two: > sudo ./default.elf tap1

The two instances will start communicating, and the TAP interfaces now have addresses and are up:

11: tapbr0:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:db:a6:77:3b:e5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::cdb:a6ff:fe77:3be5/64 scope link 
       valid_lft forever preferred_lft forever
12: tap0:  mtu 1500 qdisc fq_codel master tapbr0 state UP group default qlen 1000
    link/ether 0e:db:a6:77:3b:e5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::cdb:a6ff:fe77:3be5/64 scope link 
       valid_lft forever preferred_lft forever
13: tap1:  mtu 1500 qdisc fq_codel master tapbr0 state UP group default qlen 1000
    link/ether be:65:4f:4d:a0:ed brd ff:ff:ff:ff:ff:ff
    inet6 fe80::bc65:4fff:fe4d:a0ed/64 scope link 
       valid_lft forever preferred_lft forever
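
Each instance also drops us into RIOT's interactive shell (the > prompt). Assuming the shell commands pulled in by the default example, help lists the available commands and ifconfig shows the interface with its hardware and link-local IPv6 addresses:

> help
> ifconfig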

Conclusion:
The above is a simple example, but RiotOS ships several others, including CoAP, MQTT-SN and OpenThread examples.

The next step is to test RiotOS on ESP8266 and ESP32.


SdrPlay and ArchLinux

I've used cheap RTLSDR USB dongles for a while for radio spectrum listening and scanning, but the fact is that, in my case, they are all garbage. Yes, they work fine for bands above 200/250MHz (not too much noise there), but below that the powerful FM stations near me overload the dongle and I can't get much out of them (even with a home-made FM notch filter). Also, in most cases, to hear the HF bands (80m/40m/20m) a hack (direct sampling on the Q branch) or an upconverter is needed, and even then the FM stations blank out the other signals.

So, for a bit more money, I'll give the SDRPlay a go, since it is a full-spectrum SDR (1.5MHz to 2GHz, with up to 10MHz of visible bandwidth) and it can also be used successfully as a panadapter for amateur band radios.

While waiting for my SDRPlay to arrive, and since the SDRUno program doesn’t run on Linux (I have Arch Linux), I started to get the software ready to use the SDR device with CubicSDR.

The software needed to access SDRPlay devices on Linux is SoapySDR, the SDRPlay driver for SoapySDR (SoapySDRPlay), and of course CubicSDR. The proprietary binary driver (yes, I know…), API/HW Driver v2.13, also needs to be installed.

While there are a lot of instructions for compiling SoapySDR and CubicSDR, including mine: Cubic SDR and SoapySDR, and also a very good video from SDRPlay, I had a problem with the SoapySDRPlay component.

As we will see, the problem is not with the SoapySDRPlay component but with the installation of the proprietary binary driver.

Basically during the binary driver installation I had the following output:

Press y and RETURN to accept the license agreement and continue with
the installation, or press n and RETURN to exit the installer [y/n] y
./install_lib.sh: line 17: arch: command not found
Architecture: 
API Version: 2.13
Remove old libraries...
Install /usr/local/lib/libmirsdrapi-rsp.so.2.13
cp: cannot stat '/libmirsdrapi-rsp.so.2.13': No such file or directory
chmod: cannot access '/usr/local/lib/libmirsdrapi-rsp.so.2.13': No such file or directory
Remove old header files...
Install /usr/local/include/mirsdrapi-rsp.h
Udev rules directory found, adding rules...
Libusb found, continuing...
Finished.

If we continue in this situation (with the cp: cannot stat and chmod: cannot access errors shown above), the SoapySDRPlay compilation will fail:

[ 20%] Building CXX object CMakeFiles/sdrPlaySupport.dir/Registation.cpp.o
[ 40%] Building CXX object CMakeFiles/sdrPlaySupport.dir/Settings.cpp.o
[ 60%] Building CXX object CMakeFiles/sdrPlaySupport.dir/Streaming.cpp.o
[ 80%] Building CXX object CMakeFiles/sdrPlaySupport.dir/Version.cpp.o
make[2]: *** No rule to make target '/usr/local/lib/libmirsdrapi-rsp.so', needed by 'libsdrPlaySupport.so'.  Stop.
make[1]: *** [CMakeFiles/Makefile2:68: CMakeFiles/sdrPlaySupport.dir/all] Error 2
make: *** [Makefile:130: all] Error 2

To solve this we need to correctly install the driver; in my case the failure was in copying the binary blob to the correct location.

So all we need to do is go to the directory where the binary installer is and create a working directory:

mkdir work
./SDRplay_RSP_API-Linux-2.13.1.run --target work

The driver installer will run as before, but now the working directory contains the missing library, which we can copy to the correct location:

sudo cp work/x86_64/libmirsdrapi-rsp.so.2.13 /usr/local/lib
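
A quick sanity check that the library is now where the SoapySDRPlay build expects it (the unversioned libmirsdrapi-rsp.so should also be present, normally as a symlink created by the installer):

ls -l /usr/local/lib/libmirsdrapi-rsp.so*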

Done.

I've also added the following line to the .bashrc file in my home directory so that the new libraries can be found:

export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH

Then just do a source .bashrc so that the change takes effect.
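
Alternatively, instead of the per-user variable, /usr/local/lib can be made known system-wide to the dynamic linker (a generic Linux approach, not specific to SDRPlay):

echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/usr-local.conf
sudo ldconfig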

We can now compile SoapySDRPlay with success:

Scanning dependencies of target sdrPlaySupport
[ 20%] Building CXX object CMakeFiles/sdrPlaySupport.dir/Registation.cpp.o
[ 40%] Building CXX object CMakeFiles/sdrPlaySupport.dir/Settings.cpp.o
[ 60%] Building CXX object CMakeFiles/sdrPlaySupport.dir/Streaming.cpp.o
[ 80%] Linking CXX shared module libsdrPlaySupport.so
[100%] Built target sdrPlaySupport

Execute the usual sudo make install; then, running the SoapySDRUtil command, the new driver should be available:

SoapySDRUtil --info
######################################################
##     Soapy SDR -- the SDR abstraction library     ##
######################################################

Lib Version: v0.7.0-g37429d89
API Version: v0.7.0
ABI Version: v0.7
Install root: /usr/local
Search path:  /usr/local/lib/SoapySDR/modules0.7
Module found: /usr/local/lib/SoapySDR/modules0.7/libremoteSupport.so  (0.5.0-6efb692)
Module found: /usr/local/lib/SoapySDR/modules0.7/librtlsdrSupport.so  (0.2.5-ad3d8e0)
Module found: /usr/local/lib/SoapySDR/modules0.7/libsdrPlaySupport.so (0.1.0-12c3db6)
Available factories... remote, rtlsdr, sdrplay
Available converters...
 -  CF32 -> [CF32, CS16, CS8, CU16, CU8]
 -  CS16 -> [CF32, CS16, CS8, CU16, CU8]
 -  CS32 -> [CS32]
 -   CS8 -> [CF32, CS16, CS8, CU16, CU8]
 -  CU16 -> [CF32, CS16, CS8]
 -   CU8 -> [CF32, CS16, CS8]
 -   F32 -> [F32, S16, S8, U16, U8]
 -   S16 -> [F32, S16, S8, U16, U8]
 -   S32 -> [S32]
 -    S8 -> [F32, S16, S8, U16, U8]
 -   U16 -> [F32, S16, S8]
 -    U8 -> [F32, S16, S8]

We can see now that the libsdrPlaySupport.so module is loaded.

So, for whoever encounters the issue that I had, hopefully this is the solution, since the SDRPlay binary installation script might fail.

MakeBlock STEM mbot Robot – Using nodeJS to control mbot through BLE

A few weeks ago I bought an mbot robot out of curiosity (also as a gift), since they became available at a nearby major electronics retailer at a lower price than buying them online.

The mbot is a robot chassis with two wheels and some external and onboard sensors, including an external ultrasonic sensor, all supported by a custom Arduino (ATmega328-based) board which incorporates a motor driver, battery charger and so on. The version that I bought also came with an LED matrix display on which it is possible to draw faces, text and numbers (the mbot Face version).

The mbot robot can be controlled either by some Android (and iOS) mobile applications or by using the Scratch programming environment. MakeBlock has a specific version of Scratch for the mbot, called mblock, which adds a set of programming blocks to control the robot. The combination of Scratch and the mbot makes it ideal for teaching kids about programming and robotics.

We can communicate/interface with the mbot by using a USB cable, by Bluetooth Low Energy (the mbot BLE version) or through a 2.4GHz radio (the mbot 2.4GHz version). The 2.4GHz version is better suited to a classroom environment, since each robot is automatically paired over radio with its 2.4GHz USB stick controller, which basically makes it plug-and-go, with no need to fiddle with BLE discovery and bonding.

Anyway, the version that I have is the BLE one, and this post is about how to use NodeJS with the BLE Noble Library to communicate with mbot when using the factory firmware.

Requisites:

To make this work we need to have some requisites first:

Since the mbot uses BLE, the computer must also support BLE. In my case I'm using a CSR 4.0 BLE dongle, available on eBay, AliExpress and so on, to have BLE support on my computer.

The mbot must be loaded with the factory firmware for this code to work; this is, of course, just for testing it.
The factory firmware can be loaded either by using the mblock program while connected through the USB cable, or by using the Arduino IDE.

The mbot BLE module is connected to the serial pins of the onboard Arduino, so while the factory firmware exposes a specific interface, nothing stops us from replacing it with our own code and interface. For now we just keep the factory interface, which is based on messages that start with 0xFF 0x55….

The code was tested on Linux, and it works fine. No idea if it works on Windows…

As of today, the NodeJS Noble library doesn't work with the latest Node version 10, so we need to use a previous version of NodeJS. I'm using NodeJS v8, and I also use the Node Version Manager (nvm) to have several versions of NodeJS installed and available.

The BLE interface:
Using Nordic's nRF Connect mobile application, we turn on the mbot and start a BLE scan in the app:

A device named Makeblock_LE should appear. We can connect to it and see the published services and characteristics:

There are two known services and two unknown services. After some testing writing data to those services, service ffe1 turns out to be the one that connects to the mbot's Arduino serial port; as for service ffe4, I have no idea what it is for. Probably for controlling something on the BLE module itself.

The characteristics that the service ffe1 service exposes are:

As we can see, one is for reading data, ffe2, and it supports notifications. This means we are notified when data is available so we can read it. The other characteristic, ffe3, is for writing.

Basically, if we connect to the Makeblock_LE BLE device, use the ffe1 service and write to the ffe3 characteristic, we can control the robot. Data from the robot is automatically sent to us if we have notifications enabled on the ffe2 characteristic.

The mbot protocol:

There is one post that explains the protocol structure to communicate with the mbot.

Basically every command begins with 0xff 0x55 and then a set of bytes to control something.

The responses follow the same principle of starting with 0xff 0x55 and can return several value types.
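
As an illustration only (assumed from the protocol post; double-check the exact bytes against it and the GitHub examples mentioned below), a request to read the ultrasonic sensor on port 3 reportedly looks like:

ff 55 04 00 01 01 03    (header ff 55, length, index, action=GET, device=ultrasonic, port 3)

and the replies seen later in the sample output (e.g. ff 55 00 02 cb 3d b9 41 0d 0a) carry the index, a type byte (02 = float) and the distance as a 4-byte little-endian float, terminated by 0d 0a.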

An easier way to see what to send is to use mblock, program a Scratch example in Arduino mode, and watch what is sent to the robot in the mblock serial monitor.

My GitHub source code has some examples of commands to send to the mbot, namely to control the WS2812 RGB LEDs, the buzzer and the LED matrix, and to read data from the ultrasonic sensor.

How to use it:

Download the code from here MBot_BLE.

git clone https://github.com/fcgdam/mbot_ble

Make sure that you are using NodeJS version 8:

node -v
v8.11.3

If using Node v10, you can still try to install the modules, since at some point after this post the issues with Noble and NodeJS v10 might be solved.

Install the module dependencies:

npm install

The code needs root access to use the BLE device (or check how to run Noble without root access):

sudo node mbotble.js

If the Ultrasonic distance sensor is connected to port 3, distance data is shown on the terminal.

That’s it!

Sample output:

The sample output for the mbotble.js when running as root on the RPI 3:

root@firefly:/home/pi/BLEMbot# node blembot.js 
- Bluetooth state change
  - Start scanning...
! Found device with local name: Makeblock_LE
! Mbot robot found! 
  - Stopped scanning...
- Connecting to Makeblock_LE [001010F13480]
! Connected to 001010F13480
! mbot BLE service found!
! mbot READ BLE characteristic found.
! mbot WRITE BLE characteristic found.
- End scanning BLE characteristics.
! Subscribed for mbot read notifications
Reading the ultrasound sensor data...
> mbot data received: "ff550002cb3db9410d0a"
Distance: 23.15517234802246
Reading the ultrasound sensor data...
> mbot data received: "ff5500020000bc410d0a"
Distance: 23.5
Reading the ultrasound sensor data...
> mbot data received: "ff5500027c1ab9410d0a"
Distance: 23.13793182373047
Reading the ultrasound sensor data...
> mbot data received: "ff5500028db0c0410d0a"
Distance: 24.086206436157227
Reading the ultrasound sensor data...
> mbot data received: "ff550002ddd398400d0a"
Distance: 4.775862216949463
Reading the ultrasound sensor data...
> mbot data received: "ff5500024f23d8410d0a"
Distance: 27.017240524291992
...
...
...

Synology Let’s Encrypt Certificate manual renew

My internet access router has port 80, the HTTP port, blocked, which means DSM applications and hosted sites can only be accessed through HTTPS.

Unfortunately, this HTTP port blocking has the side effect of also blocking the automatic renewal of the Let's Encrypt certificates…

This should have been obvious, since to create a Let's Encrypt certificate on the DSM console I had to open port 80 and add a forwarding rule on the internet access router, a rule which I disabled after the certificate was created.

Anyway, through the DSM interface there is no way to manually renew the Let's Encrypt certificate, except through the creation of a certificate request for a manual renewal.

Searching the forums, this post on the Synology Forum has the solution:

– First open up port 80 on the router and forward it to port 80 of the Synology's IP address.
– Access the Synology command line prompt using ssh.
– Execute the following command: /usr/syno/sbin/syno-letsencrypt renew-all
– Wait around 3 to 4 minutes for the command to complete.
– Check on the DSM console that all certificates were renewed.
– Block port 80 again on the internet access router.
– Done.

As a final note, there isn’t, as far as I’m aware, a command line option to renew only a specific certificate, so when renewing, all certificates are renewed.
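
A quick way to confirm the renewal from any machine is to look at the validity dates of the certificate actually being served (replace the host name with your own; this is a generic openssl check, not something DSM-specific):

echo | openssl s_client -connect my-diskstation.example.com:443 2>/dev/null | openssl x509 -noout -dates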

Skype for Linux – Using corporate proxy

Unfortunately I sometimes need to use Skype on Linux, and because I'm behind a corporate proxy server it doesn't work: at least on Arch Linux running the KDE Plasma desktop, Skype doesn't accept the system proxy settings.

Setting the http_proxy/https_proxy variables has no effect on the Electron-based (I think…) Skype app.

Anyway the solution for this is quite simple: Just install ProxyChains.

[pcortex@desktop:~]$ sudo pacman -S proxychains-ng

After installing, edit the file /etc/proxychains.conf and, under the [ProxyList] section, add your proxy:

http  10.1.1.1  3128  my_proxy_username  my_proxy_password

Save the file, and now just run skype with the following command:

[fpcortex@desktop:~]$ proxychains skypeforlinux

Making the test call now works.

Node-Red: Checking network service port status + UI status indicator

This post is about how to do two simple things using Node-Red:

  1. Check if a network service on the machine running Node-Red is available by checking the corresponding listening port.
  2. The Node-Red UI doesn't have a status indicator available, so I've built one.

The only limitation of the following solution is that it only tests ports of services that are running on the same server where Node-Red is also running.

Preparation:

We need to install the is-port-available NPM module and make it available to our Node-Red instance.

To do so, on Linux we must do the following:

root@server:~# cd .node-red/
root@server:~/.node-red# npm i --save is-port-available

Now we need to make this node module available to Node-Red by editing the settings.js file:

root@server:~/.node-red# vi settings.js

Add the module to the global context in the functionGlobalContext section:

    functionGlobalContext: {
        // os:require('os'),
        // octalbonescript:require('octalbonescript'),
        // jfive:require("johnny-five"),
        // j5board:require("johnny-five").Board({repl:false})

        portavail:require('is-port-available')
    },

You might have other modules configured, so add the above portavail:require('is-port-available') entry to that list, separated from the previous entry by a comma.

We need to restart Node-Red to make the module available to the flows.
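
How Node-Red is restarted depends on the installation; for example, with a systemd service it would be something like sudo systemctl restart nodered.service, while the Raspberry Pi install scripts provide node-red-stop and node-red-start.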

The testing flow:
In our Function nodes, we can now use the global context object portavail to access the is-port-available module.

For example, for testing the InfluxDB server port (8086/TCP), we can write the following function:

    // Instantiate locally on the flow the is-port-available module
    const isPortAvailable = context.global.portavail;

    msg.payload = {};   // Zero out the message. Not really necessary
     
    var port = 8086; // Replace this with your service port number. In this case 8086 is the InfluxDB port
    
    isPortAvailable(port).then( status => {
        if(status) {
            //console.log('Port ' + port + ' IS available!');
            msg.payload = {'InfluxDB':false,"title":"InfluxDB","color":"red"};   // The port is available, hence the server is NOT running
            node.send(msg);
        } else {
            //console.log('Port ' + port + ' IS NOT available!');
            //console.log('Reason : ' + isPortAvailable.lastError);
            msg.payload = {'InfluxDB':true,"title":"InfluxDB","color":"green"};    // The port is not available, so the server MIGHT be running
            node.send(msg);
           
        }
    });

    // Note that we DO NOT return a message here since the above code is asynchronous and it will emit the message in the future. 

Since the test uses promises, Node-Red will continue executing without waiting for the test result (the isPortAvailable(port) call). So we do not send any message on the normal Node-Red execution path (hence there is no return msg; statement), and the message is only emitted when the promise fulfils; when that happens, we send it with the node.send(msg) statement.

The message payload can be anything; the only important properties are title and color, which are used for building the UI status indicator.

The status indicator is a simple AngularJS template that displays the title and a status circle with the chosen colour.

Since pasting CSS and HTML code in WordPress is a recipe for disaster, the template code can be accessed in this gist or in the complete test flow below:

[{"id":"1f506795.4be25","type":"inject","z":"53f8b852.885c6","name":"Check todos os 60s","topic":"","payload":"","payloadType":"date","repeat":"60","crontab":"","once":true,"x":260,"y":96,"wires":[["5d180fc7.9ad06","27e67f9b.4f9158"]]},{"id":"5d180fc7.9ad06","type":"function","z":"53f8b852.885c6","name":"Test Influx DB","func":"    const isPortAvailable = context.global.portavail;\n    msg.payload = {};\n     \n    var port = 8086;\n    \n    isPortAvailable(port).then( status =>{\n        if(status) {\n            //console.log('Port ' + port + ' IS available!');\n            msg.payload = {'InfluxDB':false,\"title\":\"InfluxDB\",\"color\":\"red\"};   // The port is available, hence the server is NOT running\n            node.send(msg);\n        } else {\n            //console.log('Port ' + port + ' IS NOT available!');\n            //console.log('Reason : ' + isPortAvailable.lastError);\n            msg.payload = {'InfluxDB':true,\"title\":\"InfluxDB\",\"color\":\"green\"};    // The port is not available, so the server MIGHT be running\n            node.send(msg);\n           \n        }\n    });\n    ","outputs":1,"noerr":0,"x":533.5,"y":97,"wires":[["3f3f8226.c9bfb6"]]},{"id":"3f3f8226.c9bfb6","type":"ui_template","z":"53f8b852.885c6","group":"44e5d7ea.043b2","name":"Status Icon","order":0,"width":0,"height":0,"format":"\n.dot {\n    height: 25px;\n    width: 25px;\n    background-color: #bbb;\n    border-radius: 50%;\n    display: inline-block;\n    float: right;\n}\n\n\n
{{msg.payload.title}}\n \n
","storeOutMessages":true,"fwdInMessages":true,"x":780,"y":96,"wires":[[]]},{"id":"27e67f9b.4f9158","type":"function","z":"53f8b852.885c6","name":"Test MongoDB","func":" const isPortAvailable = context.global.portavail;\n msg.payload = {};\n \n var port = 27017;\n \n isPortAvailable(port).then( status =>{\n if(status) {\n //console.log('Port ' + port + ' IS available!');\n msg.payload = {'MongoDB':false,\"title\":\"MongoDB\",\"color\":\"red\"}; // The port is available, hence the server is NOT running\n node.send(msg);\n } else {\n //console.log('Port ' + port + ' IS NOT available!');\n //console.log('Reason : ' + isPortAvailable.lastError);\n msg.payload = {'MongoDB':true,\"title\":\"MongoDB\",\"color\":\"green\"}; // The port is not available, so the server MIGHT be running\n node.send(msg);\n \n }\n });\n ","outputs":1,"noerr":0,"x":533,"y":158,"wires":[["2e85d9d.cc25126"]]},{"id":"2e85d9d.cc25126","type":"ui_template","z":"53f8b852.885c6","group":"44e5d7ea.043b2","name":"Status Icon","order":0,"width":0,"height":0,"format":"\n.dot {\n height: 25px;\n width: 25px;\n background-color: #bbb;\n border-radius: 50%;\n display: inline-block;\n float: right;\n}\n\n\n
{{msg.payload.title}}\n \n
","storeOutMessages":true,"fwdInMessages":true,"x":781,"y":161,"wires":[[]]},{"id":"44e5d7ea.043b2","type":"ui_group","z":"","name":"System Status","tab":"7011ff77.15cb18","disp":true,"width":"6"},{"id":"7011ff77.15cb18","type":"ui_tab","z":"","name":"Home","icon":"dashboard"}]

The result:

The above flow and Node-Red UI status indicator template should produce the following result:

Node-Red UI Status Indicator

Synology Reverse Proxy revisited (again..)

Work is taking too much of my time, so I haven't updated the blog in a long while. Anyway, a series of quick posts is in the publishing queue, and this is one of the first.

In February, the single disk installed in my Synology DS212 failed after 7 long years of service. It still works, but the bad sector count is high and it cannot be used in the NAS any more.

Anyway, this meant that I needed to replace it, and this time I replaced it with two disks in RAID 1, instead of using a single disk in the NAS.

So why the long introduction?

Well, installing new disks meant I needed to do a full DSM install from scratch, which in turn meant that several things changed compared with the previous DSM version, which I had been upgrading over the years.

One of the things that changed, for the better, was the reverse proxy support: it now uses nginx, with the Apache HTTP server having been abandoned.

While reverse proxying is now supported out of the box in the Application Portal, it only works for sub-domain sites. For example, if I want to reverse proxy Audio Station, it is quite easy to do in Control Panel -> Application Portal. The same is true for reverse proxying any other service running on the network. An example of such a configuration is in this post: DSM 6.0 Reverse Proxy

What is still not possible to do from the DSM interface is mapping URL paths to other servers, as I've explained in this post: Reverse proxy for URL paths. For example, mapping the path /api on the main Synology site to a back-end server.

Still, it is quite simple to do, and here are the instructions.

  1. First we need to have ssh or telnet access to the DSM. Of course, the recommendation is to use ssh.
  2. We need to change to this directory: /usr/local/etc/nginx/conf.d
    root@DiskStation:/usr/local/etc/nginx/conf.d# pwd
    /usr/local/etc/nginx/conf.d
    
  3. Now, in this directory, we create a file that must follow the naming convention www.our_name.conf.
    For example, let’s create the following file, named

    www.rproxy.api.conf

    with the following content:

    location ~ /api/ {
      proxy_pass http://192.168.4.20:3001;
    }
    

    This means that, on the main Web Station site, requests to /api are passed to the above back-end server, in this case http://192.168.4.20:3001.

  4. We save the file and now test the configuration:
    root@DiskStation:/usr/local/etc/nginx/conf.d# nginx -T > /tmp/nginx.conf
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
    

    We can check the file /tmp/nginx.conf to confirm that there are no errors and that the above configuration is included.

  5. All we need to do now is reload the nginx configuration:
    nginx -s reload
    

And that's it: our Web Station URL path /api should now be proxied to the back-end server.