A quick note, following up on the previous post about installing the Portainer web interface for managing Docker through a web browser.
If deploying Portainer on Docker running on a Raspberry Pi or Odroid, the instructions are exactly the same as in the previous post; the only difference is the image that is pulled. So instead of doing docker pull portainer/portainer, the correct image for ARM-based SBCs (single board computers) such as the RPi or Odroid is: docker pull portainer/portainer:linux-arm. And that's it.
In the previous post we installed the base software for our Grafana-based dashboard.
We now need to configure our InfluxDB database and Node-Red to start collecting data.
Configuring InfluxDB:
Detailed instructions for configuring an InfluxDB database are on this InfluxDB documentation link.
The main concept to be aware of when using InfluxDB is that each record of data has a timestamp, a set of tags and a measured value. This allows us, for example, to create a value named Temperature and tag it depending on the source sensor:
This allows us to process all the data, or only the data with a certain tag or tags. Values and tags can be created on the fly without defining them beforehand, which is a bit different from standard RDBMS engines.
Creating an InfluxDB database:
To create the database, we need to access the machine hosting the InfluxDB server and execute the command influx:
odroid@odroid:~$ influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> create database SensorData
> show databases
name: databases
name
----
_internal
SensorData
>
Now we have our database created, and I've named it SensorData. To try an example with the above temperature data we can do the following:
> insert Temperature,Sensor=kitchen value=22.1
ERR: {"error":"database is required"}
Note: error may be due to not setting a database or retention policy.
Please set a database with the command "use <database>" or
INSERT INTO <database>.<retention policy> <point>
> use SensorData
Using database SensorData
>
As we can see, we first need to select the database we are going to insert data into, with the command use SensorData:
> use SensorData
Using database SensorData
> insert Temperature, Sensor=kitchen value=22.1
ERR: {"error":"unable to parse 'Temperature, Sensor=kitchen value=22.1': missing tag key"}
> insert Temperature,Sensor=kitchen value=22.1
> insert Temperature,Sensor=Room1 value=21.9
> select * from Temperature
name: Temperature
time Sensor value
---- ------ -----
1487939008959909164 kitchen 22.1
1487939056354678353 Room1 21.9
Note that we can’t use spaces between the Measure name and the tags. The correct syntax is as follows:
Also note that no DDL (data definition language) was used to create the tags or the measured value, we’ve just inserted data for our measurement with the our tags and value(s) without the need of previously define the schema.
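As mentioned earlier, tags also let us select only part of the data. A sketch of a tag-filtered query using the influx client non-interactively (this assumes influxd is running locally with the SensorData database created above):

```shell
# Tag values must be single-quoted in the WHERE clause.
influx -database 'SensorData' \
       -execute "SELECT * FROM Temperature WHERE Sensor = 'kitchen'"
```

This returns only the rows tagged Sensor=kitchen, leaving the Room1 readings out.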
Configuring Node-Red
Since we now have a database, we can configure the InfluxDB Node-Red nodes to store data in it:
There are two types of InfluxDB nodes: one that has both an input and an output, and another that only has an input. The former is for querying the database: we provide the query on the input, and the results are returned on the output. The latter is for storing data into the database only.
For both nodes we need to configure an InfluxDB server:
We need to press the pen icon right next to the server field to add a new InfluxDB server or reconfigure an existing one:
A set of credentials is required, but since I haven't yet configured security, we can just use admin/admin as the username and password. In a real deployment we must enable security.
From now on it is rather simple. Referring to the InfluxDB node configuration screenshot (not the InfluxDB server configuration), we have a configuration field named Measurement. This is the measurement name that we associate a value with. Picking up the insert example above, it would be Temperature.
Now, if the msg.payload provided as input to the node is a single value, let's say 21, this is equivalent to doing:
insert Temperature value=21
There are other formats for msg.payload that allow tags and measures to be associated. Just check the Info tab for the node.
Simple example:
The following flow shows a simple example of a value received through MQTT, in this case the free heap from one of my ESP8266 devices, being stored in InfluxDB:
A more or less standard software stack for controlling, processing and displaying data has emerged, used by almost everyone hacking around with Arduinos, ESP8266s, Raspberry Pis and a plethora of other devices. This "standard" software stack basically always includes the MQTT protocol, some sort of web-based services, Node-Red and several cloud-based services like Thingspeak, PubNub and so on. For displaying data locally, solutions like Freeboard and the Node-Red UI are great resources, but they only show current data/status and have no easy way to show historical data.
So in this post I'll document a software stack based on Node-Red, InfluxDB and Grafana that I use to store and display data from the sensors I have around, while keeping, and being able to display, a historical record of the data. The key asset here is the specialized time-series database InfluxDB, which stores the data and allows fast retrieval based on timestamps: 5 minutes ago, the last 7 days, and so on. InfluxDB is not the only time-series database available, but it integrates directly with Grafana, the software that allows dashboards to be built from the stored data.
I’m running an older version of InfluxDB on my ARM based Odroid server, since a long time ago, ARM based builds of InfluxDB and Grafana where not available. This is now not the case, but InfluxDB and Grafana have ARM based builds so we can use them on Raspberry PI and Odroid ARM based boards.
So let’s start:
Setting up Node-Red with InfluxDB
I’ll not detail the Node-Red installation itself since it is already documented thoroughly everywhere. To install the supporting nodes for InfluxDB we need to install the package node-red-contrib-influxdb
cd ~/.node-red
npm install node-red-contrib-influxdb
We should now restart Node-Red so that it detects the new nodes.
Installing InfluxDB
We can go to the InfluxDB downloads page and follow the installation instructions for our platform. In my case I need the ARM build to be used on Odroid.
cd ~
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.2.0_linux_armhf.tar.gz
tar xvzf influxdb-1.2.0_linux_armhf.tar.gz
The InfluxDB engine is now decompressed into the newly created directory influxdb-1.2.0-1. Inside this directory are directory trees that should be copied to the system directories /etc, /usr and /var:
sudo -s
cd /home/odroid/influxdb-1.2.0-1
Copy the files to the right locations. I've added the -i switch just to make sure that I don't overwrite anything.
We now need to change the ownership of /var/lib/influxdb to the influxdb user:
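The exact copy commands weren't captured, but a sketch would look like this (the tarball ships etc/, usr/ and var/ trees that mirror the system directories):

```shell
# Still inside /home/odroid/influxdb-1.2.0-1, running as root.
# -i prompts before overwriting any existing file.
cp -iR etc/* /etc/
cp -iR usr/* /usr/
cp -iR var/* /var/
```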
cd /var/lib
chown influxdb:influxdb influxdb
We can now set up the automatic start-up script. In the directory /usr/lib/influxdb/scripts there are scripts for systemctl-based Linux versions and for init.d-based versions, which is my case. So all I have to do is copy the init.sh script from that directory to /etc/init.d and link it into my runlevel:
root@odroid:~# cd /etc/init.d
root@odroid:/etc/init.d# cp /usr/lib/influxdb/scripts/init.sh influxdb
root@odroid:/etc/init.d# runlevel
N 2
root@odroid:/etc/init.d# cd /etc/rc2.d
root@odroid:/etc/rc2.d# ln -s /etc/init.d/influxdb S90influxdb
And that’s it. We can now start the database with the command /etc/init.d/influxdb start
root@odroid:~# /etc/init.d/influxdb start
Starting influxdb...
influxdb process was started [ OK ]
We can see the influxdb logs at /var/log/influxdb and start using it through the command line client influx:
root@odroid:~# influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> show databases
name: databases
name
----
_internal
>
Installing Grafana
We now need to download Grafana. In my case, for the Odroid with its ARMv7-based processor, no official release/binary is available.
But ARM builds are available in this GitHub repository: https://github.com/fg2it/grafana-on-raspberry, for both the Raspberry Pi and other ARM-based computer boards, though only for Debian/Ubuntu-based OSes. Just click the download button on the description for the ARMv7 build, and at the end of the next page a download link should be available:
root@odroid:~# dpkg -i grafana.deb
Selecting previously unselected package grafana.
(Reading database ... 164576 files and directories currently installed.)
Preparing to unpack grafana.deb ...
Unpacking grafana (4.1.2-1487023783) ...
Setting up grafana (4.1.2-1487023783) ...
Installing new version of config file /etc/default/grafana-server ...
Installing new version of config file /etc/grafana/grafana.ini ...
Installing new version of config file /etc/grafana/ldap.toml ...
Installing new version of config file /etc/init.d/grafana-server ...
Installing new version of config file /usr/lib/systemd/system/grafana-server.service ...
Just a quick hack using the Node-Red dashboard to monitor some values from the UPS that is attached to my Synology NAS.
Gathering the data and feeding it to Node-Red
First I thought of writing some sort of Python or NodeJS program to run the upsc command, process the output and feed it, through MQTT, to Node-Red.
But since a full program seemed overkill just to process text output, transform it to JSON and push it through MQTT, I decided to use some shell scripting, bash to be more precise.
I’m running on my Odroid C1+ “server” all the necessary components, namely Node Red with the Dashboard UI module.
So on Odroid I also have the ups monitoring tools, and upsc outputs a text with the ups status:
So all we need now is to transform the above output from that text format to JSON and feed it to MQTT.
This means that we need to put the parameter names and values between double quotes ("), keep the : separating each name from its value, separate the pairs with commas, and also replace the . in the parameter names with _ so that we don't have problems working with them in Node-Red JavaScript.
Since I'm processing the output line by line, I'm using gawk/awk, which allows some text processing. The awk program is as follows:
BEGIN {print "{"}                            # opening JSON brace
{
    gsub(/\./, "_", $1)                      # battery.charge -> battery_charge
    print lline "\42" $1 "\42:\42" $2 "\42"  # \42 is the octal escape for "
}
{lline = ", "}                               # separator before every pair but the first
END {print "}"}                              # closing JSON brace
At the beginning this prints the opening JSON brace, then, line by line, the parameter name and value, each between double quotes and separated by :.
The lline variable is empty at the first line, so it prints nothing there, but on the following lines it holds a comma, which separates the JSON pairs.
We just need awk to recognize the parameters and values, and that is easy since they are separated by :
The above code is saved as the file procupsc.awk, to be fed to the awk command.
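A runnable sketch of the conversion, with printf standing in for the real upsc output (the ": " field separator and the inline copy of the procupsc.awk program are assumptions):

```shell
# Simulate two lines of upsc output and convert them to a JSON object.
# The real pipeline would be: upsc ups@host | awk -F': ' -f procupsc.awk
printf 'battery.charge: 100\nbattery.voltage: 13.9\n' |
  awk -F': ' '
    BEGIN { print "{" }
    { gsub(/\./, "_", $1)                  # dots -> underscores
      print lline "\"" $1 "\":\"" $2 "\"" }
    { lline = ", " }
    END { print "}" }
  '
```

This prints a small JSON object with battery_charge and battery_voltage keys, spread over a few lines, which is still valid JSON.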
Now all we need is to feed the output to the MQTT broker, and for this I'll use the mosquitto_pub command, which has a switch to read the message from standard input:
So we define the host and the topic (upsmon), and the message is the output of the previous command (the -s switch).
All we need now in Node-Red is to subscribe to the upsmon topic and process the received JSON object.
Since I'm running this periodically from crontab, I also set the PATH variable in the script so that all files and commands are found.
The script is then scheduled in crontab to run every 5 minutes:
# m h dom mon dow command
*/5 * * * * /home/odroid/upsmon/upsmon.sh
Node Red processing and visualization
On the Node-Red side it is now easy. We receive the above upsc JSON object as a string in msg.payload, and we use the JSON node to parse it into the different msg properties.
From here we just feed the data to charts and gauges. The code is:
To protect the data residing on my Synology NAS, I've bought and installed a UPS, an APC 700U to be exact. The trigger for buying and installing one was a (partial) disk data loss event that happened to a family member's external hard disk due to a power loss. The data recovery cost was higher than buying a UPS, and of course the lack of backups added to the outcome of that event.
While no Synology NAS was involved in the above data loss, I know first-hand that backups by themselves only add one layer of protection against possible data loss, and since from time to time I also have power loss events, it was just a matter of time before my NAS might be hit by an unrecoverable power event and, who knows, data loss.
So buying a UPS just might be a good idea…
Anyway, the APC UPS that I bought has a USB port allowing it to be connected to the Synology, which allows the UPS to be monitored and the NAS to gracefully shut down before the UPS battery is exhausted. As a bonus, it can also run a UPS monitoring server, where other devices sharing the same UPS power source can be notified of a power event through the network. Just keep in mind that the network switch or router must also be power protected…
Installing the UPS:
Installing the UPS is as simple as powering down all the devices that will connect to it: Synology NAS, Odroid, external hard disks, PC base unit and network switch, and connecting a USB cable from the UPS to the DiskStation's back USB ports.
After starting up, just go to the DSM Control Panel, select Hardware & Power and then the UPS tab. Enable the UPS by ticking Enable UPS support, and also enable the UPS server to allow remote clients by ticking Enable UPS Network Server:
By pressing the Device Information button we can see that the UPS was correctly detected:
To finish the configuration we need to press Permitted DiskStation Devices and add the IP addresses of the devices that also have their power sources connected to the UPS and will monitor the power status; in my case, the IPs of the Odroid and my home PC.
And that’s it.
Setting up Arch Linux
Interestingly I dind’t found the NUT tools (Network UPS tools) on the core Arch repositories, but they are available at the AUR repository:
yaourt -S network-ups-tools nut-monitor
The above packages will install the core NUT tools and a graphical monitor.
We can now scan our network for the UPS:
nut-scanner -s 192.168.1.16
Scanning USB bus.
Scanning SNMP bus.
Scanning XML/HTTP bus.
Scanning NUT bus (old connect method).
[nutdev1]
driver = "nutclient"
port = "ups@192.168.1.16"
With the UPS reference found, we can now query it:
upsc ups@192.168.1.16
Init SSL without certificate database
battery.charge: 100
....
....
battery.voltage: 13.9
battery.voltage.nominal: 12.0
device.mfr: American Power Conversion
device.model: Back-UPS XS 700U
....
We are now monitoring the UPS status through the network. Keep in mind that the hub/switch power should also be connected to the UPS.
For monitoring we can use the nut-monitor application to see the UPS status in a nicer way:
To make the application easier to use, we can create a profile and save it; then, when calling nut-monitor, we pass the profile name with the -F switch.
Setting up Odroid
To allow the Odroid to monitor the UPS status through the Synology UPS server, we also need to install the NUT UPS tools (the same ones used by Arch Linux and DSM):
apt-get install nut
The configuration steps for Odroid are the same as the Arch Linux steps, but since Odroid is running an Ubuntu variant, the files are located on a different path: /etc/nut.
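A minimal sketch of what the client-side files under /etc/nut might contain (the UPS name ups and the monuser/secret credentials are the DSM NUT defaults, so treat them as assumptions to be verified against the upsd.users file on the NAS):

```
# /etc/nut/nut.conf -- act only as a network client
MODE=netclient

# /etc/nut/upsmon.conf -- monitor the UPS exported by the Synology
MONITOR ups@192.168.1.16 1 monuser secret slave
```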
To start the monitoring with the new configuration we just do /etc/init.d/ups-monitor restart.
Authentication
If authentication is needed, check on the Synology the NUT configuration files located at /usr/syno/etc/ups/.
The upsd.users file has the user and password defined by default by the NUT tools on DSM.
I run many services on my Odroid C1+, including Node-Red. But NodeJS on the Odroid C1+ is version v0.10, which is getting seriously old for running Node-Red or other NodeJS-dependent software.
So here are my quick instructions for upgrading NodeJS and Node-Red on the Odroid C1+.
Upgrading NodeJS
First verify what version is available/installed on the Odroid:
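The checks might look like this (the versions shown as comments are the ones found on my Odroid):

```shell
nodejs -v    # v0.10.25 -- the distribution package
node -v      # v0.12.14 -- the manually installed version used by Node-Red
```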
Since I’ve already had previously installed a more recent version of NodeJS (the node command), the version used by Node-Red is v0.12.14 while the default NodeJS version is v0.10.25.
We can also, and should, check the npm version:
odroid@odroid:~$ npm -v
2.15.1
We also need to find which architecture we are using, just for completeness, since the Odroid C1+ is an ARMv7-based architecture:
odroid@odroid:~$ uname -a
Linux odroid 3.10.96-151 #1 SMP PREEMPT Wed Jun 15 18:47:37 BRT 2016 armv7l armv7l armv7l GNU/Linux
This will allow us to download the correct version of the NodeJS binaries from the NodeJS site: NodeJS downloads.
In our case we choose the ARMv7 architecture binaries, which at the current time is the file node-v6.9.2-linux-armv7l.tar.xz.
So I’ve just copied the link from the NodeJS site and did a wget on the Odroid:
I then created a working directory and "untarred" the file:
odroid@odroid:~$ mkdir nodework
odroid@odroid:~$ cd nodework
odroid@odroid:~/nodework$ tar xvf ../node-v6.9.2-linux-armv7l.tar.xz
odroid@odroid:~/nodework$ cd node-v6.9.2-linux-armv7l/
odroid@odroid:~/nodework/node-v6.9.2-linux-armv7l$
Since there isn’t an install script we need to move the new NodeJS files to the correct locations:
Binaries to /usr/bin
Include files to /usr/include
Libs files to /usr/lib
Copy the binaries, replacing the older versions if they exist:
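A sketch of the copy step, run as root from inside the node-v6.9.2-linux-armv7l directory (the trees map to the locations listed above):

```shell
cp -R bin/* /usr/bin/
cp -R include/* /usr/include/
cp -R lib/* /usr/lib/
```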
From the Node-Red startup log, we can see the previous versions of node-red and nodejs used:
Welcome to Node-RED
===================
28 Dec 17:55:40 - [info] Node-RED version: v0.15.2
28 Dec 17:55:40 - [info] Node.js version: v0.12.14
28 Dec 17:55:40 - [info] Linux 3.10.96-151 arm LE
28 Dec 17:55:42 - [info] Loading palette nodes
28 Dec 17:55:50 - [info] Dashboard version 2.1.0 started at /ui
28 Dec 17:55:54 - [warn] ------------------------------------------------------
28 Dec 17:55:54 - [warn] [rpi-gpio] Info : Ignoring Raspberry Pi specific node
28 Dec 17:55:54 - [warn] ------------------------------------------------------
Starting up Node-Red should show now the new software versions:
Welcome to Node-RED
===================
1 Jan 20:35:46 - [info] Node-RED version: v0.15.2
1 Jan 20:35:46 - [info] Node.js version: v6.9.2
1 Jan 20:35:46 - [info] Linux 3.10.96-151 arm LE
1 Jan 20:35:47 - [info] Loading palette nodes
1 Jan 20:35:54 - [info] Dashboard version 2.2.1 started at /ui
As the owner of an Odroid C1+, which is a great little device, it has come to my knowledge that it might now be obsolete 🙂
This is because the Odroid C2 has come out, and if I think the C1+ is great, what to say about the C2, which sits at around the same price point?
The specifications:
64bit S905 ARM quad-core 2GHz processor
2GB of RAM !
HDMI 2.0 with 4K support (not that it matters for me… )
Gigabit Ethernet, as the C1+
eMMC support.
Several other tidbits very similar to the C1+.
Missing RTC.
So a major leap forward from the C1+ and from the RPi.
The $40 price is nowhere near that value in Europe; my guess is around the 50/55€ mark (+ P/P). Also, eMMC prices are still very high, almost as much as, or even more than, the price of the board itself.
Still, I'm a bit wary of buying one, since HardKernel (the Odroid manufacturer) only provides a 4-week warranty!? I bought mine in Europe, so I suppose it has a 2-year warranty by law.
I had no problems with my C1+, but I wonder, if something goes wrong…
Just for quick reference, the following instructions detail how to install the latest Mosquitto MQTT broker with websockets enabled on the Odroid C1+ running the (L)Ubuntu release. The instructions are probably also valid for other platforms, but I haven't tested that.
1. Install the prerequisites
As the root user, install the following libraries:
The SSL libraries and probably a few others are also needed, but in my case they were already installed.
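A guess at the package list on Ubuntu (libwebsockets is the one needed for the websockets listener; verify the exact names against your distribution):

```shell
apt-get install build-essential libwebsockets-dev libssl-dev uuid-dev libc-ares-dev
```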
2. Mosquitto install
Download and compile the Mosquitto broker:
mkdir ~/mosq
cd ~/mosq
wget http://mosquitto.org/files/source/mosquitto-1.4.5.tar.gz
tar xvzf mosquitto-1.4.5.tar.gz
cd mosquitto-1.4.5/
Edit the config.mk file to enable the websockets support:
# Build with websockets support on the broker.
WITH_WEBSOCKETS:=yes
and compile and install:
make
make install
3. Configuration
Copy the file mosquitto.conf to /usr/local/etc, and edit the file:
cp mosquitto.conf /usr/local/etc
cd /usr/local/etc
Add at least the following lines to mosquitto.conf file to enable websockets support.
# Port to use for the default listener.
#port 1883
listener 1883
listener 9001
protocol websockets
Add an operating system runtime user for the Mosquitto daemon:
useradd -lm mosquitto
cd /usr/local/etc
chown mosquitto mosquitto.conf
If needed or wanted, also change the logging level and destination in the configuration file.
For example:
# Note that if the broker is running as a Windows service it will default to
# "log_dest none" and neither stdout nor stderr logging is available.
# Use "log_dest none" if you wish to disable logging.
log_dest file /var/log/mosquitto.log
# If using syslog logging (not on Windows), messages will be logged to the
# "daemon" facility by default. Use the log_facility option to choose which of
# local0 to local7 to log to instead. The option value should be an integer
# value, e.g. "log_facility 5" to use local5.
#log_facility
# Types of messages to log. Use multiple log_type lines for logging
# multiple types of messages.
# Possible types are: debug, error, warning, notice, information,
# none, subscribe, unsubscribe, websockets, all.
# Note that debug type messages are for decoding the incoming/outgoing
# network packets. They are not logged in "topics".
log_type error
log_type warning
log_type notice
log_type information
# Change the websockets logging level. This is a global option, it is not
# possible to set per listener. This is an integer that is interpreted by
# libwebsockets as a bit mask for its lws_log_levels enum. See the
# libwebsockets documentation for more details. "log_type websockets" must also
# be enabled.
websockets_log_level 0
# If set to true, client connection and disconnection messages will be included
# in the log.
connection_messages true
# If set to true, add a timestamp value to each log message.
log_timestamp true
4. Automatic start
The easiest way is to add the following file to the init.d directory and link it to the current runlevel:
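The init script itself isn't reproduced here, but assuming it was saved as /etc/init.d/mosquitto, the linking might look like this (runlevel 2, matching the runlevel seen earlier in this post):

```shell
cd /etc/init.d
chmod +x mosquitto
ln -s /etc/init.d/mosquitto /etc/rc2.d/S90mosquitto
```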
On the ARM-based small computers front, running Linux or Android, there are several contenders, the most famous being the Raspberry Pi. Due to the huge community and the documentation available, the Raspberry Pi is the one most chosen for a whole range of projects, including low-power servers, controllers, media players and so on.
Anyway, there are several alternatives, some even cheaper than the RPi. For example, the $15 Orange Pi looks promising on the price front, but not on the software side: due to the Allwinner chip, it doesn't look very promising without closed binary blobs. Other alternatives are the Banana Pi, the BeagleBone boards, the Cubieboard, and so on.
Anyway, since I needed something low-power for running Node-Red and Sparkfun's Phant, for logging data from the ESP8266, the RPi was the way to go. But for the same price and with better specs, the Odroid C1+ also boasts a very good community and supports Android, Ubuntu and even Arch Linux for ARMv7.
The advantages from the Odroid-C1+ over the RPi are:
– Same price (at least for me, 45€ for the RPi vs 44€ for the Odroid)
– Powerful 1.5 GHz quad core cpu
– A dedicated Gigabit Ethernet chip (1Gbps port), not sharing bandwidth with the USB ports
– An eMMC slot for storage, allowing faster I/O than SD cards. Sizes available between 8GB and 64GB
– IR receiver
– Supports Ubuntu, Android, Arch Linux…
And so on.
As with the RPi, the board by itself can't do much, so I've also bought the power supply, the acrylic box and a 32GB eMMC solid-state disk.
I was torn between the cheaper, typical SD card and this eMMC disk, but since the Odroid supports it, why not?
As we can see, while it's not as fast as a typical SSD, it has the same performance as a regular spinning hard disk:
root@odroid:/etc/NetworkManager# hdparm -Tt /dev/mmcblk0p2
/dev/mmcblk0p2:
Timing cached reads: 1440 MB in 2.00 seconds = 720.23 MB/sec
Timing buffered disk reads: 240 MB in 3.02 seconds = 79.56 MB/sec
root@odroid:/etc/NetworkManager#
Anyway, while the eMMC is expensive, the 32GB one costing more than the Odroid itself, it feels snappy and boots quite fast.
The eMMC comes with a standard 4GB partition, and we need to expand it to the full eMMC capacity using the provided Odroid utility, or the gparted utility that ships with the Ubuntu release.
So after powering up:
– Went to my router's web interface and, under the DHCP server, found which IP the Odroid got.
– Note: if connected to an HDMI monitor, with a keyboard and mouse also connected, we can use the Odroid utility in the LXDE window manager to expand the partition size from the default 4GB.
– The Odroid utility can also be used from the command line: odroid-utility.sh
– With the IP I got, I ssh'ed to the Odroid and logged on as root with odroid as the password.
– Started up gparted and increased the partition from 4GB to the 32GB eMMC capacity.
– Updated the system with apt-get update and apt-get upgrade; it took a while to upgrade a lot of packages.
– After the upgrade I added a user, pcortex, and made it belong to the sudo group. This will be my working user, to avoid mishaps with the root user…
For Android notifications using the Google Cloud Messaging infrastructure: npm install -g node-gcm (check out Node-Red GCM notifications).
And finally, make a link so that node is an alias of nodejs: ln -s /usr/bin/nodejs /usr/bin/node
Now, with the user pcortex, we can start Node-Red and Phant and start doing interesting things with a low-power super server 🙂
Edit: I've also changed from a dynamic IP to a fixed IP. I've kept the hostname; odroid is cool enough 🙂
So I edited the NetworkManager.conf file located at /etc/NetworkManager and changed managed=false to managed=true under the [ifupdown] section. The final file is this one:
I then changed the network interface configuration to the fixed IP by editing the interfaces file located in /etc/network:
# interfaces(5) file used by ifup(8) and ifdown(8)
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.168.1.45
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.254
dns-nameservers 8.8.8.8 208.67.222.123
Note: I’ve also installed nginx instead of Apache just to keep resource usage low, but that’s another story…