Setting up a Grafana Dashboard using Node-Red and InfluxDB – Part 3: Single point of access – Reverse proxy the services with nginx

Since we will be running several services, each on its own port, the following configuration is optional, but it allows access to all services through the same entry point by using the Nginx server as a reverse proxy to Node-Red, Node-Red UI/Dashboard, Node-Red Worldmap and Grafana.

With this configuration the base URL is always the same, without any appended ports, and the only thing that changes is the URL path:

http://server/nodered
http://server/nodered/worldmap
http://server/grafana

To allow this we install and configure Nginx:

apt-get install nginx

The configuration files reside in the /etc/nginx directory. Under that directory there are two directories: sites-available and sites-enabled, where the latter normally contains links to configuration files located in sites-available.
In that directory there is a file named default that defines the default web site configuration used by Nginx. This is the file where we will add the reverse proxy directives.

Reverse proxy for Node-Red and Node-Red Contrib Worldmap
For setting up the reverse proxy for Node-Red we must first change the base URL for Node-Red from / (root) to something else that we can map to the reverse proxy.

For this we will need to edit the settings.js file located in the .node-red directory in the home directory of the user running Node-Red.

We need to uncomment the httpRoot entry and change it to point to our new base URL.

   httpRoot: '/nodered',

Don’t forget the trailing comma.

We now need to restart Node-Red, and it should become accessible at the URL http://server:1880/nodered instead of http://server:1880/.
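
How we restart depends on how Node-Red was installed; a minimal sketch, assuming it runs as a systemd service named nodered (adjust the service name to your setup):

sudo systemctl restart nodered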

To configure Nginx, we edit the file default at /etc/nginx/sites-available and add the following section:


        location  /nodered {
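                # Node-Red itself; the Upgrade/Connection headers below keep its websockets working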
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_pass "http://127.0.0.1:1880";
        }

        location /socket.io {
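                # Socket.IO requests that bypass the Node-Red httpRoot (used by node-red-contrib-worldmap)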
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_pass "http://127.0.0.1:1880";

        }

Note the following: the first location defines the reverse proxy URL /nodered to be served by the backend server http://127.0.0.1:1880. The incoming path, /nodered, is passed unchanged to the backend server, since a proxy_pass directive without a URI passes paths through directly. There is no need to add the /nodered path to the backend server definition.
Also, I'm using the 127.0.0.1 address instead of localhost to avoid localhost resolving to its IPv6 mapping. This way I'm sure that IPv4 will be used.

The location mapping for /nodered makes all of Node-Red's functionality work as it should under the base URL /nodered. But some nodes, like node-red-contrib-worldmap, make requests to the proxy server ignoring the Node-Red base path; hence the /socket.io mapping. It allows the worldmap nodes to work, but prevents that path from being used for anything else.

Reverse proxy for Grafana

For setting up the reverse proxy for Grafana we can, and should, use the following documentation: Grafana Reverse Proxy. For me the following configuration worked:

First edit the [server] section in the Grafana configuration file grafana.ini, located in /etc/grafana.

Uncomment and edit the following lines:

[server]
# Protocol (http or https)
protocol = http

# The ip address to bind to, empty will bind to all interfaces
;http_addr =

# The http port  to use
http_port = 3000

# The public facing domain name used to access grafana from a browser
domain = server.domain.com

# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
;enforce_domain = false

# The full public facing url you use in browser, used for redirects and emails
# If you use reverse proxy and sub path specify full url (with sub path)
root_url = http://server.domain.com/grafana/

Note the trailing slash at the end of root_url. The same applies to the Nginx configuration below.

The Nginx configuration goes in the same default file used above for the Node-Red reverse proxy.

We just need to add the following section after the previous location directives:

        location /grafana/ {
                proxy_pass http://localhost:3000/;
        }

We should now restart Nginx to refresh the configuration, and everything should work as expected, with the Grafana dashboard accessible at http://server.domain.com/grafana
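
To validate the configuration and reload Nginx on an init.d based system like the one used here (on systemd based systems use systemctl instead):

nginx -t
/etc/init.d/nginx reload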

Setting up a Grafana Dashboard using Node-Red and InfluxDB – Part 2: Database configuration and data collection

In the previous post we've installed the base software for our Grafana based dashboard.

We now need to configure our InfluxDB database and Node-Red to start collecting data.

Configuring InfluxDB:
Detailed instructions for configuring an InfluxDB database are in this InfluxDB documentation link.

The main concept that we need to be aware of when using InfluxDB is that each record of data has a time stamp, a set of tags and a measured value. This allows us, for example, to create a value named Temperature and tag it depending on the source sensor:

Temperature: Value=22.1 , Sensor=Kitchen
Temperature: Value=21.9 , Sensor=Room1

This allows us to process all the data, or only the data with a certain tag or tags. Values and tags can be created on the fly without defining them beforehand, which is a bit different from standard RDBMS engines.

Creating an InfluxDB database:
To create the database, we need to access the machine hosting the InfluxDB server and execute the command influx:

odroid@odroid:~$ influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> create database SensorData
> show databases
name: databases
name
----
_internal
SensorData

> 

Now we have our database created, which I've named SensorData. To try an example with the above temperature data we can do the following:

> insert Temperature,Sensor=kitchen value=22.1
ERR: {"error":"database is required"}

Note: error may be due to not setting a database or retention policy.
Please set a database with the command "use <database>" or
INSERT INTO <database>.<retention policy> <point>.
> use SensorData
Using database SensorData
> 

As we can see, we first need to select the database where we are going to insert data, with the command use SensorData:

> use SensorData
Using database SensorData
> insert Temperature, Sensor=kitchen value=22.1
ERR: {"error":"unable to parse 'Temperature, Sensor=kitchen value=22.1': missing tag key"}

> insert Temperature,Sensor=kitchen value=22.1
> insert Temperature,Sensor=Room1 value=21.9
> select * from Temperature
name: Temperature
time                Sensor  value
----                ------  -----
1487939008959909164 kitchen 22.1
1487939056354678353 Room1   21.9
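
We can also process only the data for a given tag by filtering on it (tag values are strings in InfluxQL, hence the single quotes):

> select * from Temperature where Sensor = 'kitchen'
name: Temperature
time                Sensor  value
----                ------  -----
1487939008959909164 kitchen 22.1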

Note that we can't use spaces between the measurement name and the tags. The correct syntax is as follows:

 insert MeasureName,tag1=t1,tag2=t2,...   value1=val1,value2=val2,value3=val3,....
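
For instance, a concrete (hypothetical) insert with two tags and two values would look like this:

 insert Temperature,Sensor=Room1,Floor=first value=21.9,humidity=48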

Also note that no DDL (data definition language) was used to create the tags or the measured value; we've just inserted data for our measurement with our tags and value(s), without needing to define the schema beforehand.

Configuring Node-Red
Since we now have a database, we can configure the InfluxDB Node-Red nodes to store data in it:

There are two types of InfluxDB nodes: one that has both an input and an output, and another that only has an input. The former is for making queries to the database: we provide the query on the node's input, and the results are returned on its output. The latter is only for storing data in the database.
For both nodes we need to configure an InfluxDB server:

InfluxDB Server Configuration

We need to press the pen icon right next to the server field to add a new InfluxDB server or to reconfigure an existing one:

InfluxDB server

A set of credentials is required, but since I haven't yet configured security, we can just put admin/admin as the username and password. In a real deployment we must activate security.

From now on it is rather simple. Referring to the InfluxDB node configuration screenshot (not the InfluxDB server configuration), we have a configuration field named Measurement. This is the measurement name that we associate a value with. Picking up on the above example with the insert command, it would be Temperature, for example.

Now, if the msg.payload provided as input to the node is a single value, let's say 21, this is equivalent to doing:

Insert Temperature value=21

There are other formats for msg.payload that allow us to associate tags and values. Just check the node's Info tab.
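
For example, a minimal sketch of a function node placed before the InfluxDB output node, using the fields-plus-tags array format described in the node's Info tab (the field and tag names here are just illustrative):

// First element holds the field values, second element holds the tags
msg.payload = [
    { value: 22.1 },
    { Sensor: "kitchen" }
];
return msg;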

Simple example:

The following flow shows a simple example of a value received through MQTT, in this case the free heap from one of my ESP8266 and its storage in InfluxDB:

Sample Flow

[{"id":"20bec5de.8881c2","type":"mqtt in","z":"ced40abb.3c92e","name":"Heap","topic":"/outbox/ESP12DASH/Heap","qos":"2","broker":"2a552b3c.de8d2c","x":83.16668701171875,"y":206.41668701171875,"wires":[["e0d9c912.8c57f8","876fb151.6f2fa"]]},{"id":"876fb151.6f2fa","type":"debug","z":"ced40abb.3c92e","name":"","active":true,"console":"false","complete":"false","x":408.5,"y":177,"wires":[]},{"id":"e0d9c912.8c57f8","type":"influxdb out","z":"ced40abb.3c92e","influxdb":"bbd62a93.1a7108","name":"","measurement":"heap","x":446.1666717529297,"y":224.58335876464844,"wires":[]},{"id":"2a552b3c.de8d2c","type":"mqtt-broker","broker":"192.168.1.17","port":"1883","clientid":"node-red","usetls":false,"verifyservercert":true,"compatmode":true,"keepalive":15,"cleansession":true,"willQos":"0","birthQos":"0"},{"id":"bbd62a93.1a7108","type":"influxdb","z":"","hostname":"127.0.0.1","port":"8086","protocol":"http","database":"SensorData","name":"ODroid InfluxDB"}]

With this flow running we can see the data stored in InfluxDB:

> select * from heap;
name: heap
time                value
----                -----
1487946319638000000 41600
1487946440913000000 41600
1487946562206000000 41600
1487946683474000000 41600
1487946804751000000 41600
1487946926061000000 41600
1487947047309000000 41616
1487947168594000000 41600

Now we have data that we can graph with Grafana, subject of my next posts.

Setting up a Grafana Dashboard using Node-Red and InfluxDB – Part 1: Installing

A more or less standard software stack for controlling, processing and displaying data has emerged, and it is used by almost everyone hacking around on Arduinos, ESP8266, Raspberry Pis and a plethora of other devices. This “standard” software stack basically always includes the MQTT protocol, some sort of web based services, Node-Red and several different cloud based services like Thingspeak, PubNub and so on. For displaying data locally, solutions like Freeboard and Node-Red UI are great resources, but they only show the current data/status and have no easy way to show historical data.

So in this post I'll document a software stack based on Node-Red, InfluxDB and Grafana that I use to store and display data from the sensors that I have around, while keeping, and being able to display, the history of that data. The key asset here is the specialized time-series database InfluxDB, which stores the data and allows fast retrieval based on time stamps: 5 minutes ago, the last 7 days, and so on. InfluxDB is not the only time-series database available, but it integrates directly with Grafana, the software that allows building dashboards from the stored data.

I've been running an older version of InfluxDB on my ARM based Odroid server since a long time ago, when ARM builds of InfluxDB and Grafana were not available. This is no longer the case: both InfluxDB and Grafana now provide ARM builds, so we can use them on Raspberry Pi and Odroid ARM based boards.

So let’s start:

Setting up Node-Red with InfluxDB
I'll not detail the Node-Red installation itself, since it is already thoroughly documented everywhere. To install the supporting nodes for InfluxDB we need to install the package node-red-contrib-influxdb:

cd ~/.node-red
npm install  node-red-contrib-influxdb

We should now restart Node-Red so that it detects the new nodes.

Node Red InfluxDB nodes

Installing InfluxDB
We can go to the InfluxDB downloads page and follow the installation instructions for our platform. In my case I need the ARM build to be used on Odroid.

cd ~
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.2.0_linux_armhf.tar.gz
tar xvzf influxdb-1.2.0_linux_armhf.tar.gz

The InfluxDB engine is now decompressed in the newly created directory influxdb-1.2.0-1. Inside this directory there are the directories that should be copied to the system directories /etc, /usr and /var:

sudo -s
cd /home/odroid/influxdb-1.2.0-1

Copy the files to the right locations. I've added the -i switch just to make sure that I don't overwrite anything.

root@odroid:~/influxdb-1.2.0-1# cp -ir etc/ /etc
root@odroid:~/influxdb-1.2.0-1# cp -ir usr/* /usr
root@odroid:~/influxdb-1.2.0-1# cp -ir var/* /var

We now need to create the influxdb user and group:

root@odroid:~/influxdb-1.2.0-1# groupadd influxdb
root@odroid:~/influxdb-1.2.0-1# useradd -M -s /bin/false -d /var/lib/influxdb -G influxdb influxdb

We now need to change the ownership of /var/lib/influxdb:

cd /var/lib
chown influxdb:influxdb influxdb

We can now set up the automatic start up script. In the directory /usr/lib/influxdb/scripts there are scripts for systemd based Linux versions and for init.d based versions, which is my case. So all I have to do is copy the init.sh script from that directory to /etc/init.d and link it to my run level:

root@odroid:~# cd /etc/init.d
root@odroid:/etc/init.d# cp /usr/lib/influxdb/scripts/init.sh influxdb
root@odroid:/etc/init.d# runlevel
 N 2
root@odroid:/etc/init.d# cd /etc/rc2.d
root@odroid:/etc/rc2.d# ln -s /etc/init.d/influxdb S90influxdb

And that’s it. We can now start the database with the command /etc/init.d/influxdb start

root@odroid:~# /etc/init.d/influxdb start
Starting influxdb...
influxdb process was started [ OK ]

We can see the influxdb logs at /var/log/influxdb and start using it through the command line client influx:

root@odroid:~# influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> show databases
name: databases
name
----
_internal

> 

Installing Grafana
We now need to download Grafana. In my case, for the Odroid, since it is an ARMv7 based processor, no official release/binary is available.
But ARM builds are available in this GitHub repository: https://github.com/fg2it/grafana-on-raspberry for both the Raspberry Pi and other ARM based computer boards, but only for Debian/Ubuntu based OS's. Just click the download button in the description for the ARMv7 based build and at the end of the next page a download link should be available:

odroid@odroid:~$ wget https://bintray.com/fg2it/deb/download_file?file_path=main%2Fg%2Fgrafana_4.1.2-1487023783_armhf.deb -O grafana.deb

And install:

root@odroid:~# dpkg -i grafana.deb
Selecting previously unselected package grafana.
(Reading database ... 164576 files and directories currently installed.)
Preparing to unpack grafana.deb ...
Unpacking grafana (4.1.2-1487023783) ...
Setting up grafana (4.1.2-1487023783) ...
Installing new version of config file /etc/default/grafana-server ...
Installing new version of config file /etc/grafana/grafana.ini ...
Installing new version of config file /etc/grafana/ldap.toml ...
Installing new version of config file /etc/init.d/grafana-server ...
Installing new version of config file /usr/lib/systemd/system/grafana-server.service ...

Set the automatic startup at boot:

root@odroid:~# ln -s /etc/init.d/grafana-server /etc/rc2.d/S91grafana-server

And we can now start the server:

root@odroid:~# /etc/init.d/grafana-server start
 * Starting Grafana Server    [ OK ] 
root@odroid:~# 

We can now access the server at the address http://server:3000/, where server is the IP or DNS name of our Odroid or RPi.

Conclusion:
This ends the installation part for the base software.

The following steps are:

  • Create the InfluxDB databases
  • Receive data from sensors/devices and store it in the previously created database
  • Configure and create Grafana data sources and dashboards
  • Add some plugins to Grafana

Node Red Dashboard and UPS Monitoring

Just a quick hack using the Node-Red dashboard to monitor some values of the UPS that is attached to my Synology NAS.

Gathering the data and feeding it to Node-Red
First I thought of writing some sort of Python or NodeJS program to run the upsc command, process its output and feed it, through MQTT, to Node-Red.
But since it seemed a bit of an overkill to use a program just to process a text output, transform it to JSON and push it through MQTT, I decided to use some shell scripting, bash to be more explicit.

I'm running all the necessary components on my Odroid C1+ “server”, namely Node-Red with the Dashboard UI module.

So on the Odroid I also have the UPS monitoring tools, and upsc outputs text with the UPS status:

odroid@odroid:~$ upsc ups@192.168.1.16
Init SSL without certificate database
battery.charge: 100
battery.charge.low: 10
...
input.transfer.high: 300
input.transfer.low: 140
input.voltage: 230.0
...
ups.load: 7
ups.mfr: American Power Conversion
...
ups.model: Back-UPS XS 700U  
...

So all we need to do now is transform the above output from that text format to JSON and feed it to MQTT.
This means that we need to put the parameter names and values between " characters, separate each name/value pair with a , and also replace the . in parameter names with _ so that we don't have problems working with the parameter names in Node-Red JavaScript.

Since I'm processing the output line by line, I'm using gawk/awk, which allows some text processing. The awk program is as follows:

BEGIN {print "{"}
 {
   print lline  "\42" $1 "\42:\42"$2"\42"
 }
 {lline =", "}
END {print "}"}

This prints the opening JSON brace at the beginning, then, line by line, the parameter name and value between " characters and separated by : (\42 is the octal escape for the double quote character).
The lline variable is empty at the first line, so it prints nothing, but on the following lines it prints ", ", which separates the JSON values.
We just need awk to split each line into parameter and value, and that is easy since they are separated by : (hence the -F: switch in the command below).

So if the above code is saved as procupsc.awk file, then the following command:

 upsc ups@192.168.1.16 2>/dev/null | awk -F: -f ~/upsmon/procupsc.awk |  sed 's/[.]/_/g'

This transforms the upsc output into JSON, including the replacement of the . in variable names with _. Note that the sed pass replaces every dot, including those inside values such as 13.9; the Node-Red functions shown further below undo this by turning the _ back into a . before converting such values to numbers.

{
"battery_charge":" 100"
, "battery_charge_low":" 10"
, "battery_charge_warning":" 50"
...
, "ups_load":" 7"
..
, "ups_vendorid":" xxxx"
}
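
To see the transformation of a single line in isolation, we can feed one to the awk program manually (assuming procupsc.awk is in the current directory):

echo "battery.charge: 100" | awk -F: -f procupsc.awk | sed 's/[.]/_/g'
{
"battery_charge":" 100"
}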

Now all we need is to feed the output to the MQTT broker, and for this I'll use the mosquitto_pub command, which has a switch that accepts the message from standard input:

upsc ups@192.168.1.16 2>/dev/null | awk -F: -f /home/odroid/upsmon/procupsc.awk |  sed 's/[.]/_/g' | mosquitto_pub -h 192.168.1.17 -t upsmon -s

So we define the host and the topic, upsmon, and the message is the output of the previous command (the -s switch).
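
To verify that the messages are arriving at the broker, we can subscribe to the topic from any machine with the Mosquitto clients installed:

mosquitto_sub -h 192.168.1.17 -t upsmon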

All we need now is on Node Red to subscribe to the upsmon topic and process the received JSON object.

Since I'm running this periodically from crontab, I also set the PATH variable so that all files and commands are found.
The complete script is as follows:

upsmon.sh

#!/bin/bash
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
upsc ups@192.168.1.16 2>/dev/null | awk -F: -f /home/odroid/upsmon/procupsc.awk |  sed 's/[.]/_/g' | mosquitto_pub -h 192.168.1.17 -t upsmon -s

and on Crontab:

# m h  dom mon dow   command
*/5 * * * * /home/odroid/upsmon/upsmon.sh

Node Red processing and visualization
On the Node-Red side it is now easy. We receive the above upsc JSON object as a string in msg.payload, and we use the JSON node to convert it into a JavaScript object whose properties hold the individual parameters.
From here we just feed the data to charts and gauges.
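
For example, the function node that feeds the load gauge (taken from the flow below) is as simple as:

msg.payload = Number(msg.payload.ups_load);
return msg;

The complete flow is: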

[{"id":"603732e7.bb8464","type":"mqtt in","z":"a8b82890.09ca7","name":"","topic":"upsmon","qos":"2","broker":"2a552b3c.de8d2c","x":114.5,"y":91,"wires":[["193a550.2b0ea2b"]]},{"id":"f057bae4.8e4678","type":"debug","z":"a8b82890.09ca7","name":"","active":true,"console":"false","complete":"payload","x":630.5,"y":88,"wires":[]},{"id":"563ec2f5.6475e4","type":"ui_gauge","z":"a8b82890.09ca7","name":"UPS Load","group":"ba196e43.b35398","order":0,"width":0,"height":0,"gtype":"donut","title":"Load","label":"%","format":"{{value}}","min":0,"max":"100","colors":["#00b500","#e6e600","#ca3838"],"x":630.5,"y":173,"wires":[]},{"id":"dbdb6731.531e8","type":"function","z":"a8b82890.09ca7","name":"UPS_Load","func":"msg.payload = Number(msg.payload.ups_load);\nreturn msg;","outputs":1,"noerr":0,"x":372.5,"y":175,"wires":[["563ec2f5.6475e4","f057bae4.8e4678","57bc92ab.ce4234"]]},{"id":"193a550.2b0ea2b","type":"json","z":"a8b82890.09ca7","name":"To Json","x":129.5,"y":178,"wires":[["dbdb6731.531e8","5433b116.a93b4","52604271.71f43c","11ad64b4.f70ad3","589635b3.9bacc4"]]},{"id":"57bc92ab.ce4234","type":"ui_chart","z":"a8b82890.09ca7","name":"Ups Load/Time","group":"ba196e43.b35398","order":0,"width":0,"height":0,"label":"Load/Time","chartType":"line","legend":"false","xformat":"HH:mm","interpolate":"linear","nodata":"","ymin":"0","ymax":"100","removeOlder":1,"removeOlderPoints":"","removeOlderUnit":"3600","cutout":0,"x":639.5,"y":226,"wires":[[],[]]},{"id":"5433b116.a93b4","type":"function","z":"a8b82890.09ca7","name":"Battery Status","func":"var Vbats = msg.payload.battery_voltage;\n\nvar Vbat = Vbats.replace(\"_\",\".\");\n\nmsg.payload = Number(Vbat);\n\nreturn msg;","outputs":1,"noerr":0,"x":400.5,"y":336,"wires":[["8c6f269f.87b618"]]},{"id":"52604271.71f43c","type":"function","z":"a8b82890.09ca7","name":"V IN","func":"var Vins = msg.payload.input_voltage;\n\nvar Vin = Vins.replace(\"_\",\".\");\n\nmsg.payload = Number(Vin);\nreturn msg;","outputs":1,"noerr":0,"x":369.5,"y":570,"wires":[["240f6e31.7aa5da","b37dffd2.1cb0d"]]},{"id":"8c6f269f.87b618","type":"ui_gauge","z":"a8b82890.09ca7","name":"","group":"421e19.192041e8","order":0,"width":0,"height":0,"gtype":"gage","title":"Curr. Bat. Voltage","label":"V. 
Bat","format":"{{value}}","min":"11","max":"15","colors":["#b50000","#e6e600","#00b500"],"x":616.5,"y":332,"wires":[]},{"id":"240f6e31.7aa5da","type":"ui_gauge","z":"a8b82890.09ca7","name":"Input AC Voltage","group":"4e74439e.ee7e74","order":0,"width":0,"height":0,"gtype":"gage","title":"VIN AC","label":"V AC","format":"{{value}}","min":"190","max":"240","colors":["#00b500","#e6e600","#ca3838"],"x":660.5,"y":569,"wires":[]},{"id":"11ad64b4.f70ad3","type":"function","z":"a8b82890.09ca7","name":"Bat Runtime","func":"msg.payload = Number(msg.payload.battery_runtime);\nreturn msg;","outputs":1,"noerr":0,"x":397.5,"y":397,"wires":[["98292081.f4f718","62bb1456.0f45ec"]]},{"id":"98292081.f4f718","type":"ui_gauge","z":"a8b82890.09ca7","name":"UPS Level","group":"fccb6f27.7691d8","order":0,"width":0,"height":0,"gtype":"wave","title":"UPS Runtime","label":"UPS Level","format":"{{value}}","min":0,"max":"1500","colors":["#00b500","#e6e600","#ca3838"],"x":642.5,"y":391,"wires":[]},{"id":"589635b3.9bacc4","type":"function","z":"a8b82890.09ca7","name":"Bat Charge","func":"var Vcharges = msg.payload.battery_charge;\n\nvar Vcharge = Vcharges.replace(\"_\",\".\");\n\nmsg.payload = Number(Vcharge);\n\n\nreturn msg;","outputs":1,"noerr":0,"x":405,"y":495,"wires":[["d2ab2543.d53a68"]]},{"id":"d2ab2543.d53a68","type":"ui_gauge","z":"a8b82890.09ca7","name":"Battery Charge","group":"421e19.192041e8","order":0,"width":0,"height":0,"gtype":"gage","title":"Battery Charge","label":"%","format":"{{value}}","min":0,"max":"100","colors":["#ff0000","#e6e600","#00ff01"],"x":661.5,"y":494,"wires":[]},{"id":"62bb1456.0f45ec","type":"ui_chart","z":"a8b82890.09ca7","name":"","group":"fccb6f27.7691d8","order":0,"width":0,"height":0,"label":"Runtime (sec)","chartType":"line","legend":"false","xformat":"HH:mm","interpolate":"linear","nodata":"","ymin":"","ymax":"","removeOlder":1,"removeOlderPoints":"","removeOlderUnit":"3600","cutout":0,"x":625.5,"y":441,"wires":[[],[]]},{"id":"b37dffd2.1cb0d","type":"ui_chart","z":"a8b82890.09ca7","name":"Vin/Time","group":"4e74439e.ee7e74","order":0,"width":0,"height":0,"label":"VAC In/Time","chartType":"line","legend":"false","xformat":"HH:mm","interpolate":"linear","nodata":"","ymin":"190","ymax":"240","removeOlder":"12","removeOlderPoints":"","removeOlderUnit":"3600","cutout":0,"x":635.5,"y":641,"wires":[[],[]]},{"id":"2a552b3c.de8d2c","type":"mqtt-broker","broker":"192.168.1.17","port":"1883","clientid":"node-red","usetls":false,"verifyservercert":true,"compatmode":true,"keepalive":15,"cleansession":true,"willQos":"0","birthQos":"0"},{"id":"ba196e43.b35398","type":"ui_group","z":"","name":"UPS Load","tab":"61ec3881.53526","disp":true,"width":"6"},{"id":"421e19.192041e8","type":"ui_group","z":"","name":"UPS Battery","tab":"61ec3881.53526","disp":true,"width":"6"},{"id":"4e74439e.ee7e74","type":"ui_group","z":"","name":"Input Voltage","tab":"61ec3881.53526","disp":true,"width":"6"},{"id":"fccb6f27.7691d8","type":"ui_group","z":"","name":"UPS Runtime","tab":"61ec3881.53526","disp":true,"width":"6"},{"id":"61ec3881.53526","type":"ui_tab","z":"","name":"UPS","icon":"dashboard","order":2}]

The final output is as follows:

Node Red UPS Monitoring


Setting up an UPS for Synology NAS, Odroid and Arch Linux

To protect the data residing on my Synology NAS I've bought and installed an UPS, an APC 700U to be exact. The trigger for buying and installing one was a (partial) disk data loss event that happened to a family member's external hard disk due to a power failure. The data recovery cost was higher than buying an UPS, and of course the lack of backups added to the outcome of that event.

While no Synology NAS was involved in the above data loss, I know first hand that backups by themselves only add one layer of protection against possible data loss, and since from time to time I also have some power loss events, it was just a matter of time before my NAS might be hit by an unrecoverable power event and, who knows, data loss.

So buying an UPS just might be a good idea…

Anyway, the UPS from APC that I bought has an USB port allowing it to be connected to the Synology, which allows the UPS to be monitored and the NAS to shut down gracefully before the UPS battery is exhausted. As a bonus it also allows the NAS to run an UPS monitoring server, where other devices that share the same UPS power source can be notified of a power event through the network. Just keep in mind that the network switch or router must also be power protected…

Installing the UPS:
Installing the UPS is as simple as powering down all the devices that will connect to it: Synology NAS, Odroid, external hard disks, PC base unit and network switch, and then connecting an USB cable from the UPS to the DiskStation's rear USB ports.

After starting up, just go to DSM Control Panel and select Hardware & Power and then the UPS tab. Enable the UPS by ticking the Enable UPS support and also enable the UPS server to allow remote clients by ticking Enable UPS Network Server:

UPS Configuration

We can see by pressing the Device Information button that the UPS was correctly detected:

UPS Device Information

To end the configuration we need to press the Permitted DiskStation Devices button and add the IP addresses of the devices that also have their power sources connected to the UPS and will monitor the power status, in my case the IPs of the Odroid and of my home PC.

And that’s it.

Setting up Arch Linux
Interestingly, I didn't find the NUT tools (Network UPS Tools) in the core Arch repositories, but they are available in the AUR:

 yaourt -S network-ups-tools nut-monitor

The above packages will install the core NUT tools and a graphical monitor.

We can now scan our network for the UPS:

nut-scanner -s 192.168.1.16
Scanning USB bus.
Scanning SNMP bus.
Scanning XML/HTTP bus.
Scanning NUT bus (old connect method).
[nutdev1]
        driver = "nutclient"
        port = "ups@192.168.1.16"

With the UPS reference found, we can now query it:

 upsc ups@192.168.1.16
 
Init SSL without certificate database
battery.charge: 100
....
....
battery.voltage: 13.9
battery.voltage.nominal: 12.0
device.mfr: American Power Conversion
device.model: Back-UPS XS 700U  
....

We can check the load, for example, with:

upsc  ups@192.168.1.16 | grep load
Init SSL without certificate database
ups.load: 37

We now need to modify the following files in /etc/ups:

  1. nut.conf
  2. upsmon.conf

First, as root, we copy the sample file upsmon.conf.sample to upsmon.conf and add the following line:

MONITOR ups@192.168.1.16 1 * * slave

after the other commented out MONITOR lines. Since I'm only monitoring, I've just put * as the username and password for authentication.

In nut.conf, we change the MODE line to MODE=netclient.
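
A minimal sketch of both changes, run as root with the files in /etc/ups as above:

cd /etc/ups
cp upsmon.conf.sample upsmon.conf
# edit upsmon.conf and add the MONITOR line shown above,
# then in nut.conf set: MODE=netclient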

After changing the files, we enable and start the UPS monitoring service:

sudo systemctl enable nut-monitor
sudo systemctl start nut-monitor

We are now monitoring the UPS status through the network. Keep in mind that the hub/switch power should also be connected to the UPS.

For monitoring we can use the nut-monitor application to see the UPS status in a nicer way:

NUT Monitor

To make the application easier to use, we can create a profile and save it, and when calling the application nut-monitor, we pass the profile name with the -F switch.
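
So, assuming we saved a profile named for example home-ups (a hypothetical name), we would start it with:

nut-monitor -F home-ups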

Setting up Odroid
To allow Odroid to monitor the UPS status through the Synology UPS server we need to also install the nut UPS tools (the same used by Arch Linux and DSM):

 apt-get install nut

The configuration steps for the Odroid are the same as the Arch Linux steps, but since the Odroid is running an Ubuntu variant, the files are located in a different path: /etc/nut.

To start the monitoring with the new configuration we just do /etc/init.d/ups-monitor restart.

Authentication
If authentication is needed, check the NUT configuration files on the Synology NAS, located at /usr/syno/etc/ups/.

The upsd.users file has the user and password defined by default by the NUT tools on DSM.

Using Netbeans, OpenOCD and GDBServer plugin for ARM development

In my previous post we've seen how to set up the base software for starting to program ARM processor based boards.

Normally these boards have no base code running (unlike an Arduino, for example) to allow direct programming through USB, so we need a programmer to flash the code onto the board through a specific interface, either JTAG or SWD. As a bonus the programmer also allows real time debugging of the code on the processor, with breakpoints and watchpoints, without resorting to the common printf or Serial.print…

These programmers come in all sizes, shapes and prices… Some of those available are:

  1. STLink/V2 – Standard programmer from 2.5€ and up for clones, 22€ for the original.
  2. BlackMagic probe – An alternative that can be flashed on some hardware to build programmers, and that supports some devices based on the ARM processors, for example the BLE NRF51822 chip.
  3. Segger J-Link – State of the art programmer. A cheaper EDU version is available but with some licensing restrictions.

The STLink/V2 is supported by OpenOCD, which allows us to use openocd to flash and debug code through JTAG or SWD (Serial Wire Debug).

Code Sample
The “Hello World” example of the hardware world is the blinking led example. On this site there is an example for my processor, in my case the STM32F103. Just download and expand the file and import the project into NetBeans:

New Project with existing sources

Just make sure that the correct toolchain is selected:

Project with ARM Toolchain

The build process should run without errors, and at the root of the project a new file should be created: main.elf
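
As a quick sanity check we can inspect the section sizes of the resulting binary with the toolchain's size tool:

arm-none-eabi-size main.elf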

Flashing the code
To use OpenOCD to flash the code onto the ARM processor based board we need to configure openocd so that it knows which programmer it is using and which target board it is programming.
In my case, since I'm using a STLink/V2 programmer connected to a target STM32F103 based board, I've created the following configuration file:

ebay_board.cfg

set CHIPNAME STM32F103C8T6
source [find interface/stlink-v2.cfg]
transport select hla_swd
set WORKAREASIZE 0x2000
source [find target/stm32f1x.cfg]

Based on this configuration we can now flash our board:

openocd -f /opt/ARM/ebay_board.cfg -c init -c targets -c "halt" -c "flash write_image erase /opt/ARM/Projects/STM32F103VHB6_RevZ_Demo1/main.elf" -c "verify_image /opt/ARM/Projects/STM32F103VHB6_RevZ_Demo1/main.elf" -c "reset run" -c shutdown

And the output is:

Open On-Chip Debugger 0.9.0 (2016-04-27-23:18)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
none separate
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v24 API v2 SWIM v4 VID 0x0483 PID 0x3748
Info : using stlink api v2
Info : Target voltage: 3.208372
Info : STM32F103C8T6.cpu: hardware has 6 breakpoints, 4 watchpoints
    TargetName         Type       Endian TapName            State       
--  ------------------ ---------- ------ ------------------ ------------
 0* STM32F103C8T6.cpu  hla_target little STM32F103C8T6.cpu  halted
auto erase enabled
Info : device id = 0x20036410
Info : flash size = 64kbytes
target state: halted
target halted due to breakpoint, current mode: Thread 
xPSR: 0x61000000 pc: 0x2000003a msp: 0x20004fd0
wrote 7168 bytes from file /opt/ARM/Projects/STM32F103VHB6_RevZ_Demo1/main.elf in 0.468006s (14.957 KiB/s)
target state: halted
target halted due to breakpoint, current mode: Thread 
xPSR: 0x61000000 pc: 0x2000002e msp: 0x20004fd0
verified 6836 bytes in 0.035336s (188.923 KiB/s)
shutdown command invoked

Success! A led (we have to connect one to the correct pin on my board) is now blinking.

Debugging the code:
To be able to debug our code we need to start OpenOCD and connect it to the board:

openocd -f ebay_board.cfg 
Open On-Chip Debugger 0.9.0 (2016-04-27-23:18)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
none separate
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v24 API v2 SWIM v4 VID 0x0483 PID 0x3748
Info : using stlink api v2
Info : Target voltage: 3.211511
Info : STM32F103C8T6.cpu: hardware has 6 breakpoints, 4 watchpoints

And openocd is running.

On the NetBeans side we need to go to Debug->Attach Debugger. A new window should appear, and we must change the debugger type from the default GDB Debugger to gdbserver and configure the OpenOCD remote port and the project:

gdbserver configuration

Make sure that the target is defined as extended-remote localhost:3333 (3333 is OpenOCD's default GDB server port) and that the correct project is selected.

And that’s it. If the code already has some breakpoint defined, and the code passes through it, the execution is stopped and the correct code line is shown. Otherwise we can press the Pause button:

Debugger control

On the openocd output we can see the breakpoints and code pause working:

Info : device id = 0x20036410
Info : flash size = 64kbytes
Info : halted: PC: 0x08000806

Hardware breakpoint

If there are weird errors about the connection being dropped or failing, make sure that in the ARM toolchain configuration the correct debugger is selected, meaning the ARM toolchain's gdb and not the system gdb.

And that’s it.

Setting up Netbeans for ARM development

My quick notes for setting up Netbeans and OpenOCD for ARM Cortex processor development on Arch Linux. The instructions, excluding the Arch Linux specific pacman commands, should be the same for any Linux platform.

So I've bought some STM32F103 ARM Cortex based boards, and these are the steps to start building software for them:

Arm toolchain

Download the ARM toolchain from ARM GNU Toolchain. In my case I downloaded the latest available version for Linux 64 bit.

Create a working directory, in my case I just created /opt/ARM, and unpack the toolchain there:

cd /opt
mkdir ARM
cd ARM
tar xvf ~/Downloads/gcc-arm-none-eabi-6_2-2016q4-20161216-linux.tar.bz2

Now add the ARM toolchain to your path by editing the .bashrc file in your home directory:

cd
vi .bashrc

Add at the end the following line:

export PATH=$PATH:/opt/ARM/gcc-arm-none-eabi-6_2-2016q4/bin

and then execute the following command to apply the new setting in the current terminal window:

source ~/.bashrc

For the new path to be globally available we would have to log out and log in again, but we won't do that right now.

We can now test the ARM toolchain installation by calling, for example, the command arm-none-eabi-gcc -v:

arm-none-eabi-gcc -v
Using built-in specs.
COLLECT_GCC=arm-none-eabi-gcc
COLLECT_LTO_WRAPPER=/opt/ARM/gcc-arm-none-eabi-6_2-2016q4/bin/../lib/gcc/arm-none-eabi/6.2.1/lto-wrapper
Target: arm-none-eabi
Configured with: /tmp/jenkins-GCC-6-build...
...
...

Success!

OpenOCD
OpenOCD is a tool that allows flashing the ARM processors and also debugging the code; Netbeans by itself won't be able to flash code onto the processor. So for OpenOCD we need to do the installation and some configuration first:

On Arch Linux it goes more or less like this:

sudo -s
pacman -S openocd
cp /usr/share/openocd/contrib/99-openocd.rules /etc/udev/rules.d

groupadd plugdev
usermod -a -G plugdev pcortex

udevadm control --reload-rules

Replace pcortex in the usermod command with your user name. Now we can log out and log on again to pick up the new user group and path.

Setting up Netbeans
After starting up NetBeans (I'm using the latest version, 8.2, at the date of this post) we select Tools->Plugins and search for and install the gdbserver plugin. If the search fails, just download it from the GDBServer Plugin home page and install it manually.

Then we need to add the ARM toolchain to the available C++ Tools Collection. Just go to Tools->Options->C++ and press the Add button:

Add ARM Toolchain

Then add the correct paths to the binaries for C/C++ and, very importantly, for the ARM debugger:
ARM tools

We don't need to do anything else, since the Code Completion tab will be filled in automatically.

And that’s it, we can now use Netbeans to develop for ARM based boards.

In the next post we will see how to flash and debug code on these ARM based boards.