Setting up a Grafana Dashboard using Node-Red and InfluxDB – Part 1: Installing

A more or less standard software stack for controlling, processing and displaying data has emerged, and it is used by almost everyone hacking around on Arduinos, ESP8266s, Raspberry Pis and the plethora of other devices. This “standard” software stack almost always includes the MQTT protocol, some sort of Web based services, Node-Red and several cloud based services like Thingspeak, PubNub and so on. For displaying data locally, solutions like Freeboard and Node-Red UI are great resources, but they only show current data/status and have no easy way to show historical data.

So in this post I’ll document a software stack based on Node-Red, InfluxDB and Grafana that I use to store and display data from the sensors I have around, while keeping a historical record of that data. The key asset here is the specialized time-series database InfluxDB, which stores the data and allows fast retrieval based on time-stamps: the last 5 minutes, the last 7 days, and so on. InfluxDB is not the only time-series database available, but it integrates directly with Grafana, the software that allows building dashboards on top of the stored data.

I’ve been running an older version of InfluxDB on my ARM based Odroid server for a long time, from the days when ARM builds of InfluxDB and Grafana were not available. That is no longer the case: both InfluxDB and Grafana now have ARM builds, so we can use them on Raspberry Pi and Odroid ARM based boards.

So let’s start:

Setting up Node-Red with InfluxDB
I’ll not detail the Node-Red installation itself since it is already thoroughly documented everywhere. To install the supporting nodes for InfluxDB we need to install the package node-red-contrib-influxdb:

cd ~/.node-red
npm install  node-red-contrib-influxdb

We should now restart Node-Red so that it detects the new nodes.
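
How to restart depends on how Node-Red was installed; for example, on a typical Raspbian-style install where Node-Red runs as a systemd service (an assumption, adjust to your setup), something like:

sudo systemctl restart nodered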

Node Red InfluxDB nodes

Installing InfluxDB
We can go to the InfluxDB downloads page and follow the installation instructions for our platform. In my case I need the ARM build to be used on Odroid.

cd ~
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.2.0_linux_armhf.tar.gz
tar xvzf influxdb-1.2.0_linux_armhf.tar.gz

The InfluxDB engine is now decompressed into the newly created directory influxdb-1.2.0-1. Inside it are the directory trees that should be copied to the system directories /etc, /usr and /var:

sudo -s
cd /home/odroid/influxdb-1.2.0-1

Copy the files to the right locations. I’ve added the -i switch just to make sure that I don’t overwrite anything:

root@odroid:~/influxdb-1.2.0-1# cp -ir etc/* /etc
root@odroid:~/influxdb-1.2.0-1# cp -ir usr/* /usr
root@odroid:~/influxdb-1.2.0-1# cp -ir var/* /var

We now need to create the influxdb user and group:

root@odroid:~/influxdb-1.2.0-1# groupadd influxdb
root@odroid:~/influxdb-1.2.0-1# useradd -M -s /bin/false -d /var/lib/influxdb -G influxdb influxdb

We now need to change the ownership of /var/lib/influxdb:

cd /var/lib
chown influxdb:influxdb influxdb

We can now set up the automatic startup script. In the directory /usr/lib/influxdb/scripts there are scripts both for systemd based Linux versions and for init.d based versions, which is my case. So all I have to do is copy the init.sh script from that directory to /etc/init.d and link it to my run level:

root@odroid:~# cd /etc/init.d
root@odroid:/etc/init.d# cp /usr/lib/influxdb/scripts/init.sh influxdb
root@odroid:/etc/init.d# runlevel
 N 2
root@odroid:/etc/init.d# cd /etc/rc2.d
root@odroid:/etc/rc2.d# ln -s /etc/init.d/influxdb S90influxdb

And that’s it. We can now start the database with the command /etc/init.d/influxdb start

root@odroid:~# /etc/init.d/influxdb start
Starting influxdb...
influxdb process was started [ OK ]

We can check the InfluxDB logs at /var/log/influxdb and start using the database through the command line client influx:

root@odroid:~# influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> show databases
name: databases
name
----
_internal

> 
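
As a quick sanity check, standard InfluxQL commands already work from this shell. For example (the sensors database name here is just an example; the databases we’ll actually use are created in the next part):

> CREATE DATABASE sensors
> SHOW DATABASES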

Installing Grafana
We now need to download Grafana. In my case, for the Odroid with its ARMv7 based processor, there is no official release/binary available.
But ARM builds are available on this GitHub repository: https://github.com/fg2it/grafana-on-raspberry for both the Raspberry Pi and other ARM based computer boards, but only for Debian/Ubuntu based OS’s. Just click the download button on the description for the ARMv7 based build and at the end of the next page a download link should be available:

odroid@odroid:~$ wget https://bintray.com/fg2it/deb/download_file?file_path=main%2Fg%2Fgrafana_4.1.2-1487023783_armhf.deb -O grafana.deb

And install:

root@odroid:~# dpkg -i grafana.deb
Selecting previously unselected package grafana.
(Reading database ... 164576 files and directories currently installed.)
Preparing to unpack grafana.deb ...
Unpacking grafana (4.1.2-1487023783) ...
Setting up grafana (4.1.2-1487023783) ...
Installing new version of config file /etc/default/grafana-server ...
Installing new version of config file /etc/grafana/grafana.ini ...
Installing new version of config file /etc/grafana/ldap.toml ...
Installing new version of config file /etc/init.d/grafana-server ...
Installing new version of config file /usr/lib/systemd/system/grafana-server.service ...

Set the automatic startup at boot:

root@odroid:~# ln -s /etc/init.d/grafana-server /etc/rc2.d/S91grafana-server

And we can now start the server:

root@odroid:~# /etc/init.d/grafana-server start
 * Starting Grafana Server    [ OK ] 
root@odroid:~# 

We can now access the server at the address http://server:3000/, where server is the IP or DNS name of our Odroid or RPi. Grafana’s default login is admin/admin.

Conclusion:
This ends the installation part for the base software.

The following steps are:

  • Create the InfluxDB databases
  • Receive data from sensors/devices and store it in the previously created databases
  • Configure and create Grafana data sources and dashboards
  • Add some plugins to Grafana

Cloud based deployment for IOT devices

Following up on my previous post Cloud based CI with Platformio: now that we have the build output from the Continuous Integration process, we are able to deploy it to our devices.

This last deploy phase of the Develop, CI, Deliver cycle using Cloud infrastructure only makes sense for devices that are powerful enough, that have permanent or periodic network connectivity, that have no problems or limitations with power usage or bandwidth, that are in range, and that can be updated remotely.

In practice this means that most low power devices, devices that use LPWAN technologies like LoraWan or SigFox, and devices that are battery powered and sleeping most of the time, cannot be easily updated. For these cases the only solution is really out of band management, by upgrading the device locally.

So the scope of this post is simply to build a cloud based process that allows ESP8266 devices to get updated firmware from the CI output. In its simplest form, all we need is to create a web server, make the firmware available on that server, and provide the URL to the ESP8266 devices, which use the HTTP updater for OTA updates.
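
On the device side, a minimal sketch could look something like the following. This assumes the ESP8266 Arduino core with its ESP8266httpUpdate library; the WiFi credentials and the server URL are placeholders to adapt:

#include <ESP8266WiFi.h>
#include <ESP8266httpUpdate.h>

const char* ssid     = "my-ssid";       // placeholder credentials
const char* password = "my-password";

void setup() {
  Serial.begin(115200);

  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
  }

  // Ask the web server for newer firmware; on success the updater
  // flashes it and reboots the device.
  t_httpUpdate_return ret = ESPhttpUpdate.update("http://server/firmware.php");

  switch (ret) {
    case HTTP_UPDATE_FAILED:
      Serial.println("OTA update failed");
      break;
    case HTTP_UPDATE_NO_UPDATES:
      Serial.println("No update available");
      break;
    case HTTP_UPDATE_OK:
      Serial.println("Update OK");
      break;
  }
}

void loop() {
}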

We can already use the PHP file from the squix blog, deployed on a PHP enabled web server, to deliver the latest builds to devices requesting over the air updates.

Openshift PaaS Cloud Platform

The simplest way of making the Squix PHP page available on the cloud is to use the great Platform as a Service Openshift by RedHat. The free tier allows three applications (gears) and the sign up is free. At sign up time we need to choose our own domain suffix so that, for example, having chosen primal, I’ll have URLs such as application-primal.rhcloud.com.

Openshift offers a series of pre-configured applications ready to be deployed, such as NodeJS, Java, Python and PHP.

Openshift preconfigured platforms

So after signing up, all we need is to create a new application based on the PHP 5.4 template and give it an URL (it can be the default PHP), and that’s it: we have our PHP enabled web server.

Deploying code to Openshift

To deploy code to Openshift we use the Git tool for manipulating our application repository on the PaaS cloud platform.

So we must first clone our repository locally, modify it and then upload the changes.

To obtain the repository URL and connection details, we must first set up our local machine with the rhc command line tool and upload our public SSH key to the Openshift servers:

 [pcortex@pcortex:~]$ gem install rhc

If the gem tool is not available, first install Ruby (sudo pacman -S ruby).

We then set up the rhc tool with the command rhc setup. Complete details here.

The command rhc apps should list now our Openshift applications:

[pcortex@pcortex:~]$ rhc apps
nodejs @ http://nodejs-primal.rhcloud.com/ (uuid: 9a72d50252d09a72d5)
-----------------------------------------------------------------------------
  Domain:     primal
  Created:    Aug 26  3:43 PM
  Gears:      1 (defaults to small)
  Git URL:    ssh://9a72d50252d09a72d5@nodejs-primal.rhcloud.com/~/git/nodejs.git/
  SSH:        9a72d50252d09a72d5@nodejs-primal.rhcloud.com
  Deployment: auto (on git push)

  nodejs-0.10 (Node.js 0.10) 
----------------------------             
    Gears: 1 small 
                    
php @ http://php-primal.rhcloud.com/ (uuid: c0c157c41271b559e66) 
-----------------------------------------------------------------------                    
  Domain:     primal          
  Created:    12:16 PM  
  Gears:      1 (defaults to small) 
  Git URL:    ssh://c0c157c41271b559e66@php-primal.rhcloud.com/~/git/php.git/                
  SSH:        c0c157c41271b559e66@php-primal.rhcloud.com 
  Deployment: auto (on git push) 

  php-5.4 (PHP 5.4)
  -----------------
    Gears: 1 small

You have access to 2 applications.

We now pull the remote repository to our machine:

[pcortex@pcortex:~]$ mkdir Openshift
[pcortex@pcortex:~]$ cd Openshift
[pcortex@pcortex:Openshift]$ git clone ssh://c0c157c41271b559e66@php-primal.rhcloud.com/~/git/php.git/
[pcortex@pcortex:Openshift]$ cd php
[pcortex@pcortex:php]$ wget https://raw.githubusercontent.com/squix78/esp8266-ci-ota/master/server/firmware.php 

We should now change the PHP file so that it uses our repository to fetch our firmware:

 <?php
    $githubApiUrl = "https://api.github.com/repos/squix78/esp8266-ci-ota/releases/latest";
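    // NOTE: change this URL to point at your own user/repo (the squix78 path above is the original example)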
    $ch = curl_init();

And then it’s just a matter of committing the change to Openshift:

[pcortex@pcortex:php]$ git add firmware.php
[pcortex@pcortex:php]$ git commit -m "Added firmware.php file"
[pcortex@pcortex:php]$ git push
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 924 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: Stopping PHP 5.4 cartridge (Apache+mod_php)
remote: Waiting for stop to finish
remote: Waiting for stop to finish
remote: Building git ref 'master', commit a72403a
remote: Checking .openshift/pear.txt for PEAR dependency...
remote: Preparing build for deployment
remote: Deployment id is 8fdecb3f
remote: Activating deployment
remote: Starting PHP 5.4 cartridge (Apache+mod_php)
remote: Application directory "/" selected as DocumentRoot
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://php-primal.rhcloud.com/~/git/php.git/
   321e48b..a72403a  master -> master

And that’s it: the link for HTTP OTA is available at http://php-primal.rhcloud.com/firmware.php

Final notes:

With the above firmware.php file we can deliver a single firmware file to any device that calls the page.

But a better solution is needed if we want to:

– Deliver multiple firmware files to different devices
– Deliver different versions of firmware files, for example be able to lock a specific version to some devices
– Know which devices have updated
– Know which version of firmware the devices are running

and of course, add some security.

Cloud based continuous integration and delivery for IOT using PlatformIO

After finding out about PlatformIO for IoT development, I started to read some of the PlatformIO documentation and also what other users have written about it.

One of the most interesting features of PlatformIO is that any PlatformIO based project can be used in a Continuous Integration (https://en.wikipedia.org/wiki/Continuous_integration) process. This is important for using automated build systems for CI, allowing early detection of possible build problems. CI makes sense when several contributors/teams are working on the same code repository and we need to make sure that the project builds with all of the team/contributor code changes. At the end, deliverables can be pushed to their destination:

CI

What PlatformIO CI enables for our IOT projects is automatic builds after code commits to the code repository (for example, GitHub). When the automatic build is triggered, PlatformIO is able to pull all the dependencies and frameworks needed to build our project on the automated build system.

After the automatic build is triggered and succeeds, we can then deliver the output.

One of the most interesting examples of this workflow is the following post: http://blog.squix.org/2016/06/esp8266-continuous-delivery-pipeline-push-to-production.html. It shows the process of developing, committing code to the repository, triggering automatic builds and, at major releases, deploying firmware updates over the air (OTA) to the ESP8266, all of this using PlatformIO and 100% cloud infrastructure for IOT deployment.

Starting up with Platformio and TravisCI

PlatformIO supports several CI systems, one of them being Travis CI, which integrates out of the box with GitHub. To enable Travis CI on your GitHub projects/repositories, just sign in on GitHub, go to the Travis CI site (on another browser tab) and press the Sign in with GitHub button, accept the authorization requests, and select the repositories to enable for CI by going to your user profile.

Enabled Repository

After enabling the repositories, every commit will trigger an automatic build on the Travis CI platform. As a build status indicator we can add an image tag to the README.md file, so we can check the build status without going to the Travis site, for example: https://travis-ci.org/fcgdam/RFGW_SensorNodes.svg?branch=master.
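
For example, the badge entry in README.md could look like this (the markdown links the badge image back to the Travis CI build page):

[![Build Status](https://travis-ci.org/fcgdam/RFGW_SensorNodes.svg?branch=master)](https://travis-ci.org/fcgdam/RFGW_SensorNodes)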

Setting up the build
Travis CI will start the build process according to the instructions defined in the hidden repository file .travis.yml, which is unique and specific to each repository.

This Travis documentation link explains in detail the logic and options behind the .travis.yml file and the build process.

Fortunately, PlatformIO creates a sample .travis.yml file when initializing a project.
Based on that sample, here is one of mine for compiling two separate Arduino projects in the same code repository:

language: python
python:
     - "2.7"

sudo: false
cache:
     directories:
         - "~/.platformio"

install:
     - pip install -U platformio

script:
     - cd RFGW && platformio run -e nanoatmega328
     - cd ../ATTINNY_MailMonitor_TX && platformio run

The tags language:, sudo: and cache: are not changed in this case.

The install: and script: tags are customized so that our project can be built.

In the install: tag, the first command is always the installation of the platformio tools, followed, if necessary, by the installation of other dependencies. For example, if our project depends on libraries from the PlatformIO library registry, we can do the following:

install:
     - pip install -U platformio
     - platformio lib install 64

This will first install platformio and then, before building, install the ArduinoJson library (Id 64). We can add as many commands as we want, each prefixed by the dash character.
This is one way of doing it, but it means we need to change the .travis.yml file every time we add or remove libraries.

Another way is to add the library dependency to the project file platformio.ini, like this:

[env:nanoatmega328]
platform = atmelavr
framework = arduino
board = nanoatmega328
lib_install= 64

And in this case all the dependencies are associated with the project file, but the build commands are different.

So one example with multiple libraries could be as follows:

install:
     - pip install -U platformio
     - platformio lib install 64
     - platformio lib install 652
script:
     - platformio run

And this is equivalent to the above:

install:
     - pip install -U platformio
script:
     - platformio run -e nanoatmega328

where nanoatmega328 is the environment configured in the platformio.ini file:

[env:nanoatmega328]
platform = atmelavr
framework = arduino
board = nanoatmega328
lib_install= 64,652

To end this topic, notice that we can have several builds in the same repository; just add several command lines to the script: tag:

script:
     - cd RFGW && platformio run -e nanoatmega328
     - cd ../ATTINNY_MailMonitor_TX && platformio run

Note that in the above example I’m always using paths relative to the project root.

Further information can be found in the PlatformIO Travis integration guide and on the Travis CI site.

Continuous delivery

Since every commit to our repository triggers the Travis build process, we now need to distinguish between working commits and release commits, so that only on release commits is the build output made available to be deployed to the end devices/platforms for OTA updates (or not).

This can be easily achieved by using git tags and a conditional deploy process that only runs when a git tag is defined.

With this scheme, the normal cycle of git add, commit and push creates a working commit that triggers the CI build process as usual, but not the deployment phase of copying the build output (binaries, firmware) to the GitHub Releases tab.

Creating a tag and a release can be done either on the command line or through the GitHub web interface, the latter being the easiest way of doing it.

But there are some pre-requisites for this to happen:
– Generate a personal GitHub OAuth token so that Travis can copy the output to the GitHub Releases tab.
– Encrypt the OAuth token with the travis command line tool.
– Change the .travis.yml file so that it deploys the build output to the Releases tab only on tagged commits.

The GitHub token is generated by going to your GitHub profile, selecting Settings and then Personal Access tokens.
Press Generate new token, enter your password and add permissions to access your repositories.
The permissions should be at least full repo access:

GitHub Personal token permission

Make sure that at the end you copy the OAuth token, otherwise you will have to generate another one from the beginning.
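
The travis command line tool used below is distributed as a Ruby gem (as with the rhc tool earlier, Ruby must be installed first):

 gem install travis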

The GitHub token must be kept secret at all times, and since we need to have it in the .travis.yml file, which can be read by everyone, we must encrypt it in such a way that only Travis CI can use it.
This is achieved by using the travis command line tool on our machine:

[pcortex@pcortex:RFGW_SensorNodes|master]$ travis encrypt GH_TOKEN="7d54...df5977" --add 

GH_TOKEN is the name of the variable that will hold the OAuth token so that Travis can use it securely.

With the above command, the .travis.yml file is modified and the following entry is added:

env:
  global:
    secure: WqroI5PtWWm94svvau5G3LFz4PMBU...fY=

We can now add the final configuration to the Travis CI configuration file, so that on tagged releases the build output is automatically attached:

deploy:
  provider: releases
  api_key: ${GH_TOKEN}
  file:
    - $TRAVIS_BUILD_DIR/.pioenvs/nanoatmega328/firmware.hex
    - $TRAVIS_BUILD_DIR/.pioenvs/digispark-tiny/firmware.elf
  skip_cleanup: true
  on:
    tags: true  

The provider: tag defines that we want to deploy to GitHub Releases, and the api_key: tag holds the secured OAuth token for accessing GitHub.

The file: tag defines which files we want to deploy; in this case we use the $TRAVIS_BUILD_DIR environment variable to locate the root of our build directory. The skip_cleanup: tag avoids cleaning all the build outputs before deploying.

The on: tag is the most important, because it conditionally defines that the deploy process only happens on tagged releases.

So after this configuration, if we commit without tagging, the build is made but no deploy to Releases happens:

Travis Build without tagging

If we want to trigger a tagged commit we can do it purely on the command line:

[pcortex@pcortex:RFGW_SensorNodes|master]$ git tag -a v0.3 -m "Version 0.3"
[pcortex@pcortex:RFGW_SensorNodes|master]$ git push origin v0.3
....
To https://github.com/fcgdam/RFGW_SensorNodes.git
 * [new tag]         v0.3 -> v0.3

And that’s it: Automatic build process and release:

Tagged build process output

And the final result:

Tagged build output

We have now a tagged release with source code and binaries automatically created and packaged.

Deployment

At this point we have the deliverables for a release, and we should now distribute/deploy them. This is by itself another process that can be done through Cloud services or locally; it really depends on the end architecture.

The most important issue here is security: making sure that the correct build is delivered, that it was not changed in any way, and that it reaches the intended devices.

Platformio

Normally I don’t use, or look for, solutions for problems that I don’t have. For this reason alone, http://platformio.org/ stayed under my radar for so long.

What’s my problem?
Since I’m building my mailbox monitoring solution, I’m using two different types of Arduino boards: an Arduino Nano 328p based board for the RF gateway, and some Digispark ATtiny85 based boards for the sensors. The Digispark ATtiny85 boards are not exactly energy efficient for battery powered sensor usage, but they are good enough as an initial proof of concept.

To be able to program the Digispark board, I had to use the Arduino IDE and, through the IDE Boards Manager, add support for these boards, so that they become available to be selected and programmed.

Now, this brings two problems:

– The first one is that after selecting the board type in the IDE, every window instance of the IDE assumes the same board. This means that I can’t have, side by side, one Arduino IDE window for the RF gateway Atmega328p based board and another window for the ATtiny85 sensor board. I have to constantly change board types depending on what code I’m working on. A good solution (the one PlatformIO uses) is to associate the board type with the project, but that is not possible in the Arduino IDE.

– The second problem is that the latest Arduino IDE tools update broke the integration between the native Arduino boards and the Digispark based boards. I can have support for one or the other, but not both at the same time, otherwise I get errors. There are some discussions on the Arduino forums that acknowledge the issues that I’m having.

Still, I could use one IDE/editor for one type of board and the Arduino IDE for the ATtiny boards, but that is not very efficient. Anyway, the Arduino IDE is too much hassle when complexity starts to grow. I’m already using the Netbeans IDE for programming the ESP8266 and the KDE Kate editor for some basic Arduino programming, so all I needed was something that supported the Digispark ATtiny85 toolset.

And so, I have several problems, which means I need to look for a solution, preferably one that unifies all the platforms.

Platformio and Platformio IDE

Platformio is an open source toolset that allows targeting different environments with the same base tools: Atmel/Arduino, Espressif ESP8266, ARM, and so on.

This means that from a unified toolset/IDE I can target different platforms and, importantly, the target is defined per project and not by the tool or IDE, which solves my first problem.

Also, since PlatformIO supports several targets out of the box, it probably also solves problem number two, of possible future clashes between different device platforms/architectures.

Platformio is a command line based tool, and associated with it there is an IDE where development can take place in a modern editor (Atom) that, among other things, supports code completion, serial port monitoring, an embedded terminal, and so on…

The command line tool supports searching for and installing support for the several boards available on the market, and also allows searching for and installing user contributed libraries.

Anyway, the PlatformIO docs can explain the purpose and capabilities of these tools better, but their greatest achievement is allowing a unified toolset to be used for different boards/targets.

Keep in mind that there are at least two tools:

– Platformio – This is a Python based command line tool that unifies compiling, uploading, library management, and so on.
– Platformio IDE – This is a NodeJS/Atom Editor based IDE that integrates the PlatformIO tools into the IDE.

While I had no issues worth mentioning using the PlatformIO command line tools on Arch Linux, the IDE has a lot of issues, not due to the PlatformIO IDE itself, but due to the Atom editor and its supporting software (Electron). I’m still not able to use the IDE to its full potential: as an editor with code completion and project management it works fine, but so far, for me, uploading to the boards must be done through the command line platformio tools.

Installing Platformio on Arch Linux
So I’m running Arch Linux, which by definition is quite near the bleeding edge… There are instructions for other platforms, so here is my take on the installation on Arch:

The main platformio package is available on the AUR repository, so just install it with pacaur or yaourt:

 yaourt -S platformio

We should then have the command line tools:

root@pcortex:~# pio
Usage: pio [OPTIONS] COMMAND [ARGS]...

Options:
  --version          Show the version and exit.
  -f, --force        Force to accept any confirmation prompts.
  -c, --caller TEXT  Caller ID (service).
  -h, --help         Show this message and exit.

Commands:
  boards       Pre-configured Embedded Boards
  ci           Continuous Integration
  init         Initialize new PlatformIO based project
  lib          Library Manager
  platforms    Platforms and Packages Manager
  run          Process project environments
  serialports  List or Monitor Serial ports
  settings     Manage PlatformIO settings
  update       Update installed Platforms, Packages and Libraries
  upgrade      Upgrade PlatformIO to the latest version

To start a simple arduino project we can first install the Atmel AVR platform:

root@pcortex:~#  pio platforms install atmelavr
Installing toolchain-atmelavr package:
Downloading  [####################################]  100%             
Unpacking  [####################################]  100%             
The platform 'atmelavr' has been successfully installed!
The rest of packages will be installed automatically depending on your build environment.

We can search for other available platforms with pio platforms search. We can then create and initialize a project:

 mkdir myproject
 cd myproject
 pio init --board uno

And that’s it. We can start editing the src/main.cpp file, add libraries to the lib directory, execute pio run to compile, and pio run -t upload to upload to the board.
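
As a quick toolchain test, a minimal blink sketch for src/main.cpp could be something like this (assuming the uno board from the init above; note that with PlatformIO the Arduino.h include must be explicit):

#include <Arduino.h>

// Blink the Uno's on-board LED once per second.
void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);
  delay(500);
  digitalWrite(LED_BUILTIN, LOW);
  delay(500);
}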

We can see further instructions here

And that’s basically it for the command line tools.

For the IDE:

Install clang and the atom editor from the main repository. Minicom is needed to have serial port monitoring from the IDE (or not):

Edit: Do not install the atom editor from the main repository. Install atom-editor-bin from AUR instead; many problems are solved with the AUR version. You may first install the editor from the main repositories so that all possible dependencies are pulled in first, then remove it with pacman -R atom apm and install the AUR version with yaourt -S atom-editor-bin.

root@pcortex:~# pacman -S clang atom minicom
resolving dependencies...
looking for conflicting packages...

Packages (11) apm-1.12.6-1  electron-1.3.3-1  http-parser-2.7.1-1  libuv-1.9.1-1  minizip-1:1.1-1  nodejs-6.4.0-1  npm-3.10.6-1  re2-20160301-1  semver-5.3.0-1  atom-1.9.8-3  clang-3.8.1-1 minicom-2.7-2

This will bring also the node-js and electron platforms.

We can now start the Atom editor to add the Platformio-IDE package. Installing the Platformio-IDE package will also pull in the Platformio-IDE-Terminal package.

root@pcortex:~# atom

To clear the error (if it appears) saying that the atom editor can’t watch the .atom/config.cson file, also execute the following command:

sudo sysctl fs.inotify.max_user_watches=32768

In my case, after starting Atom, the main window appears but nothing else works. For example, going to Edit->Preferences to try to add the Platformio-IDE package does nothing. The same applies to other menu options. On the other hand, running atom as root seems to work, but that is not a solution.

Starting atom in the foreground (atom -f) I can see the following error:

  TypeError: Path must be a string. Received undefined", source: path.js (7)

What I’ve found out is that if we open a file passed through the command line, close atom, and start it again without any parameters, it starts to work…

So, just do, for example:

 root@pcortex:~# atom somefile.txt

 Close atom, and start it again:

 root@pcortex:~# atom 

The menus should now start to work and we should be able to install the platformio-ide package through the IDE graphical package manager. Just go to Edit->Settings->Install, search for Platformio and add Platformio IDE. The Platformio IDE Terminal will also be installed automatically.

If, as in my case, we are behind a corporate proxy, we set the proxy environment variables in a terminal session and start atom from there.

PlatformIO Installation

After installation the Platformio menu and toolbar should appear.

One thing that I found was that the terminal window and the serial port monitor wouldn’t work. On one of my machines the window just opens and stays blank with a blinking cursor. On another machine, an error appears saying that the Platformio Terminal is not installed, which is not the case. On this last machine the error that appears with atom -f is:

 "Failed to require the main module of 'platformio-ide-terminal' because it requires an incompatible native module

In the first situation, the window with only the blinking cursor, pressing CTRL-SHIFT-I to open the debugger and viewing the console shows an error like this:

/usr/lib/atom/src/task.js:52 Cannot find module '../bin/linux/x64.m49.node' Error: Cannot find module '../bin/linux/x64.m49.node'
    at Function.Module._resolveFilename (module.js:440:15)
    at Function.Module._load (module.js:388:25)
    at Module.require (module.js:468:17)
    at require (internal/module.js:20:19)
    at Object. (/home/fdam/.atom/packages/platformio-ide-terminal/node_modules/pty.js/lib/pty.js:18:9)
    at Module._compile (module.js:541:32)
    at Object.value [as .js] (/usr/lib/atom/src/compile-cache.js:208:21)
    at Module.load (module.js:458:32)
....

What I’ve done to solve this:

– Go to ~/.atom/packages/platformio-ide-terminal
– Delete the node_modules directory completely: rm -rf node_modules
– Install nslog: npm install nslog
– Edit the package.json file and change the nan version from 2.0.5 to >2.0.5

...
      {
        "name": "nan",
        "version": ">2.0.5",
        "path": "node_modules/nan/include_dirs.js"
      },
...

– Install the packages: npm install.
– It should error on the pty.js package. Do not worry (yet…)
– Go to node_modules/pty.js and edit its package.json file. Change the version of nan from 2.0.5 to >2.0.5

  "dependencies": {
    "extend": "~1.2.1",
    "nan": ">2.0.5"
  },

– Remove the node_modules directory (this time for pty.js): rm -rf node_modules
– Check which Electron version we have: electron -v
– In my case it is v1.3.3
– Paste the following lines on the terminal:

# Electron's version.
export npm_config_target=1.3.3
# The architecture of Electron, can be ia32 or x64.
export npm_config_arch=x64
# Download headers for Electron.
export npm_config_disturl=https://atom.io/download/atom-shell
# Tell node-pre-gyp that we are building for Electron.
export npm_config_runtime=electron
# Tell node-pre-gyp to build module from source code.
export npm_config_build_from_source=true
# Install all dependencies, and store cache to ~/.electron-gyp.
HOME=~/.electron-gyp npm install

Start the atom editor again. The terminal should work now. If not, atom might complain and show a red bug icon on the bottom right side. Just press it, choose module rebuild, restart atom, and it should be OK.

Conclusion

While the installation and usage of the command line tools is straightforward and works out of the box, the Atom based IDE is another story. It has a steep installation curve, not through PlatformIO’s fault, but due to the number of components involved. Those issues might also be due to my Linux distribution (Arch), but still, this might be a real show stopper for some users if it happens on other distributions. I’ve lost some serious hours debugging this 🙂 to arrive at an almost fully functional IDE.

Anyway, at the end, the platform and the IDE are fantastic. With code completion, the platformio tools seamlessly integrated, simultaneous serial port monitoring of different boards, support for different targets and so on, it is really a great product.

Platformio is highly recommended, as is the IDE, despite its rough edges.

Orange Pi PC, Armbian and SDR

A few weeks ago I bought an SDR RTL2832U+R820T2 dongle to do some tests with Software Defined Radio. Despite being able to catch some signals with the provided antenna, I had a huge interference problem originating from my desktop PC. Using the SDR dongle connected to my laptop, with the desktop PC off, most of those interferences disappeared. Still, reception was poor due to the antenna quality and my office location.

So one solution for the above issues is to put the SDR dongle in a better remote location, with a better antenna, and connect the SDR software running on my desktop PC to this remote SDR dongle by using the rtl_tcp program.

Since I didn’t want to shell out a lot of money again for an RPi or Odroid C1/C2, I decided to buy an Orange Pi PC from Aliexpress for about 16.5€, postage included.

The Orange PI PC
The Orange Pi PC is a small form factor computer like the Raspberry Pi and Odroid C1/C2. It has a quad-core Allwinner processor running at 1.2GHz (under Armbian), 1GB of memory, 3 USB2 ports and HDMI.
The operating system provided by Orange Pi seems to overclock the CPU to 1.6GHz, which brings a lot of stability and heat issues to the device.
The Armbian version seems not to suffer from such issues and works out of the box, including HDMI video output and apparently video acceleration (I haven’t tested it yet).
I also bought, separately, the acrylic box and a 2A 5V charger with the correct plug for the Orange Pi. The Orange Pi, box and power supply took less than 3 weeks to arrive.

Starting up
The Orange Pi needs a micro SD card to hold the operating system and the file system. The recommended cards are Sandisk UHS-1 or Samsung UHS-1, but I bought a Toshiba Exceria UHS-1 micro SD card, which works fine.
The card comes with a standard SD card adapter to be used with a card reader.

After copying the Armbian operating system to the card on my desktop computer and putting it in the micro SD card slot on the Orange Pi, I just connected the network, HDMI and power.

The initial power up sequence can take several minutes, since it expands the file-system on the SD card and probably sets up other things.

At the end there was a RED led steadily lit and a blinking GREEN led, with the Armbian desktop on my monitor.

Setting up
The following steps can be done through the desktop environment or through ssh. I’ve done all these steps through ssh:

– Change the root password from the default 1234 to a secure password.
– Change the Time Zone to my time zone: dpkg-reconfigure tzdata
– Change the hostname: vi /etc/hostname
– Add a working user: adduser opi
– Add the opi user to the sudo group: usermod -aG sudo opi
– Change the password of the opi user: passwd opi
– Move the network from DHCP to a fixed IP: vi /etc/network/interfaces

# Wired adapter #1
auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
address 192.168.1.5
netmask 255.255.255.0
gateway 192.168.1.254

– Update the software: apt-get update and apt-get upgrade
– Install the rtl sdr software: apt-get install rtl-sdr gqrx-sdr librtlsdr-dev libusb-1.0-0-dev
– Blacklist the dvb modules so that the RTL software can load: vi /etc/modprobe.d/rtl-sdr-blacklist.conf

# This system has librtlsdr0 installed in order to
# use digital video broadcast receivers as generic
# software defined radios.
blacklist dvb_usb
blacklist dvb_core
blacklist dvb_usb_rtl2832u
blacklist dvb_usb_rtl28xxu
blacklist e4000
blacklist rtl2832

And finally we can reboot and connect our RTL SDR dongle.
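
Before running rtl_tcp, we can quickly check that the dongle is detected, using the rtl_test tool that comes with the rtl-sdr package:

 rtl_test -t

It should list the detected device and the tuner type.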

Some final notes:
The SD card speed with the Toshiba Exceria card is:

hdparm -Tt /dev/mmcblk0p1

/dev/mmcblk0p1:
 Timing cached reads:   856 MB in  2.00 seconds = 427.67 MB/sec
 Timing buffered disk reads:  58 MB in  3.03 seconds =  19.12 MB/sec

About 40% slower than my Odroid emmc disk.

To allow the remote Gqrx SDR software to connect we need to run:

 rtl_tcp -a 192.168.1.5

And configure Gqrx as follows:

Gqrx Device Config

and start using GQRX.

With rtl_tcp running and connected, the Orange Pi CPU usage and temperature never rose above 7~8% and 43ºC, so it looks good!

Gqrx tuning
Just one final note regarding Gqrx:

On the Orange Pi, the rtl_tcp program was outputting a lot of ll+:### messages, where ### is an increasing number, and I had several seconds of lag between changing the frequency in the Gqrx application and the actual change.

After checking the rtl_tcp source code, these messages are related to buffering issues, so I changed the Gqrx connection string to:

rtl_tcp=192.168.1.5:1234,buffers=384,psize=65536

and all the ll+ messages were eliminated, with the counter never rising above 5.

MQTT Mosquitto broker with SSL/TLS transport security

Just a quick note on setting up transport layer security on the Mosquitto MQTT broker for both supported protocols: MQTT and WebSockets.

There are several posts on the web regarding this, namely:

SSL Client certs to secure mqtt and Mosquitto websocket support

Those posts explain more or less what needs to be done to have TLS/SSL transport security. These are just my notes:

Generating the server certificates:
This can be quite easily accomplished by using the following script: https://github.com/owntracks/tools/blob/master/TLS/generate-CA.sh.
This script generates a self signed certificate to be used by Mosquitto to provide TLS for the MQTT and WebSocket protocols. All that is needed to run the script is to have openssl installed on your Linux machine.

If the script is called without parameters, it generates a self signed certificate for the hostname where the script is running. Otherwise, we can pass a hostname as the first parameter to the script.

After running the script, the following files are generated:

  1. ca.crt – The public certificate of the CA (the Certificate Authority that issued the host certificate).
  2. hostname.crt – The public certificate of the host that will run the Mosquitto broker.
  3. hostname.key – The private key of that host.
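
For example, assuming the script is in the current directory and the broker will run on a host with the hypothetical name mqtt.example.com:

 chmod +x generate-CA.sh
 ./generate-CA.sh mqtt.example.com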

After having these files, we need to configure the Mosquitto Broker to use them.

Mosquitto configuration:
To configure the Mosquitto broker we first need to copy the certificates and key files to a known directory. We will create a certs directory under /etc/mosquitto:

sudo -s
mkdir -p /etc/mosquitto/certs
cp ca.crt /etc/mosquitto/certs
cp hostname.* /etc/mosquitto/certs

After this we can modify the mosquitto configuration file. One important thing to keep in mind is that the directives for each listener must follow the listener line, without blank lines in between.

So:

# Plain MQTT protocol
listener 1883

# End of plain MQTT configuration

# MQTT over TLS/SSL
listener 8883
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/hostname.crt
keyfile /etc/mosquitto/certs/hostname.key

# End of MQTT over TLS/SLL configuration

# Plain WebSockets configuration
listener 9001
protocol websockets

# End of plain Websockets configuration

# WebSockets over TLS/SSL
listener 9883
protocol websockets
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/hostname.crt
keyfile /etc/mosquitto/certs/hostname.key

We will make one more change later, but let’s restart the mosquitto broker now and do some testing.

Testing MQTT TLS/SSL configuration
We can use Mqtt-Spy to subscribe to our defined test topic: test, using either plain MQTT or MQTT over TLS/SSL:

MQTT Spy simple TLS configuration

We can then use the MQTT Spy tool to publish or subscribe to MQTT topics.

On the command line, mosquitto_sub and mosquitto_pub only worked when the MQTTS port number was provided, otherwise a TLS error occurs:

mosquitto_pub --cafile /etc/mosquitto/certs/ca.crt -h localhost -t "test" -m "message" -p 8883

mosquitto_sub -t \$SYS/broker/bytes/\# -v --cafile /etc/mosquitto/certs/ca.crt -p 8883

This should work without any issues.

Testing MQTT websockets over TLS/SSL configuration
The issue with this test is that we are using a self signed certificate, so it is only useful for local, restricted testing.
Before we can use MQTT websockets with TLS/SSL enabled, we need to use the browser and visit the following URL:

  https://MQTT_BROKER_IP_OR_HOSTNAME:9883/mqtt

Note that we are using HTTPS. When connecting to the above URL, the browser should complain about the insecure connection due to the self signed certificate, and we need to add an exception and permanently accept that certificate. After that, the error shown should be something like connection reset or failed to load page. This is normal, since the browser won’t upgrade the connection to a web socket.
We can now use the Hive MQTT Websockets Client to test our connection, and it should work fine (note the green connected icon and that SSL is selected):
Hive MQTT WebSocket client

Forcing TLSv1.2
All this work of enabling TLS/SSL on the Mosquitto broker is needed because most IoT clouds that have an MQTT interface require the connection to be over TLS/SSL. More specifically, the AWS IoT cloud requires the connection to be protected by TLS/SSL, and that connection must use version 1.2 of the TLS protocol only. The AWS IoT cloud also requires client authentication through client certificates, but we are not dealing with that part in this post.

So we now configure our Mosquitto broker to only accept TLSv1.2 connections. To do that we modify the mosquitto.conf file and add the tls_version line to each TLS listener:

# MQTT over TLS/SSL
listener 8883
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/hostname.crt
keyfile /etc/mosquitto/certs/hostname.key
tls_version tlsv1.2

# WebSockets over TLS/SSL
listener 9883
protocol websockets
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/hostname.crt
keyfile /etc/mosquitto/certs/hostname.key
tls_version tlsv1.2

and restart the broker.

Testing TLS V1.2
We can specify the TLS version with the mosquitto command line utilities:

[pcortex@pcortex:~]$ mosquitto_pub --cafile ./ca.crt --tls-version tlsv1.2 -h localhost -t "test" -m "mes" -p 8883 -d
Client mosqpub/26994-pcortex sending CONNECT
Client mosqpub/26994-pcortex received CONNACK
Client mosqpub/26994-pcortex sending PUBLISH (d0, q0, r0, m1, 'test', ... (3 bytes))
Client mosqpub/26994-pcortex sending DISCONNECT
[pcortex@pcortex:~]$ mosquitto_pub --cafile ./ca.crt --tls-version tlsv1.1 -h localhost -t "test" -m "m3224" -p 8883 
Error: A TLS error occurred.

As we can see, lower versions of the TLS protocol are no longer accepted.
The Websockets client should keep working without any issues.

Conclusion:
This configuration only solves transport security, not authentication security. The latter can be accomplished by using the username/password process or by using client certificates, which is the process that the Amazon AWS IoT cloud uses. But those are topics for other posts. Edit: Follow-up at: Client authentication

ODroid – Mosquitto MQTT Broker install

Just for a quick reference, the following instructions detail how to install the latest Mosquitto MQTT broker with Websockets enabled on the ODroid C1+ running the (L)Ubuntu release. The instructions are probably also valid for other platforms, but were not tested.

1. Install the prerequisites
As the root user, install the following libraries:

apt-get update
apt-get install uuid-dev libwebsockets-dev

Probably the SSL libraries and a few others are also needed, but in my case they were already installed.

2. Mosquitto install
Download and compile the Mosquitto broker:

mkdir ~/mosq
cd ~/mosq
wget http://mosquitto.org/files/source/mosquitto-1.4.5.tar.gz
tar xvzf mosquitto-1.4.5.tar.gz
cd mosquitto-1.4.5/

Edit the config.mk file to enable the websockets support:

# Build with websockets support on the broker.
WITH_WEBSOCKETS:=yes

and compile and install:

make
make install

3. Configuration
Copy the file mosquitto.conf to /usr/local/etc, and edit the file:

cp mosquitto.conf /usr/local/etc
cd /usr/local/etc

Add at least the following lines to the mosquitto.conf file to enable websockets support:

# Port to use for the default listener.
#port 1883
listener 1883
listener 9001
protocol websockets

Add an operating system runtime user for the Mosquitto daemon:

useradd -lm mosquitto
cd /usr/local/etc
chown mosquitto mosquitto.conf

If needed or wanted, also change the logging level and destination in the configuration file.
For example:

# Note that if the broker is running as a Windows service it will default to
# "log_dest none" and neither stdout nor stderr logging is available.
# Use "log_dest none" if you wish to disable logging.
log_dest file /var/log/mosquitto.log

# If using syslog logging (not on Windows), messages will be logged to the
# "daemon" facility by default. Use the log_facility option to choose which of
# local0 to local7 to log to instead. The option value should be an integer
# value, e.g. "log_facility 5" to use local5.
#log_facility

# Types of messages to log. Use multiple log_type lines for logging
# multiple types of messages.
# Possible types are: debug, error, warning, notice, information, 
# none, subscribe, unsubscribe, websockets, all.
# Note that debug type messages are for decoding the incoming/outgoing
# network packets. They are not logged in "topics".
log_type error
log_type warning
log_type notice
log_type information

# Change the websockets logging level. This is a global option, it is not
# possible to set per listener. This is an integer that is interpreted by
# libwebsockets as a bit mask for its lws_log_levels enum. See the
# libwebsockets documentation for more details. "log_type websockets" must also
# be enabled.
websockets_log_level 0

# If set to true, client connection and disconnection messages will be included
# in the log.
connection_messages true

# If set to true, add a timestamp value to each log message.
log_timestamp true

4. Automatic start
The easiest way is to add the following file to the init.d directory and link it to the current runlevel:

Create a file named mosquitto under /etc/init.d:

#!/bin/bash

case "$1" in
    start)
        echo "Starting Mosquitto MQTT Broker"
        touch /var/log/mosquitto.log
        chown mosquitto /var/log/mosquitto.log
        su - mosquitto -c "/usr/local/sbin/mosquitto -c /usr/local/etc/mosquitto.conf" > /dev/null 2>&1 &
        ;;
    stop)
        echo "Stopping Mosquitto MQTT Broker"
        killall mosquitto
        ;;
    *)
        echo "Usage: /etc/init.d/mosquitto start|stop"
        exit 1
        ;;
esac

exit 0

Find out the current run level. On Odroid it seems to be level 2:

root@odroid:/etc/init.d# runlevel
N 2
root@odroid:/etc/init.d# 

And link the automatic start for Mosquitto broker at run level 2:

cd /etc/rc2.d
ln -s  /etc/init.d/mosquitto S97mosquitto

And that’s it.

We can start the broker manually with the command /etc/init.d/mosquitto start, and stop it with /etc/init.d/mosquitto stop.
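
As a quick smoke test (assuming the mosquitto client utilities were installed along with the broker), we can subscribe and publish on a test topic:

 mosquitto_sub -h localhost -t test -v &
 mosquitto_pub -h localhost -t test -m "hello"

The subscriber should print the test topic followed by the hello message.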