Using the Blackmagic probe with Netbeans

Using the Blackmagic debug probe (BMP) with Netbeans is very similar to using Netbeans with Openocd.

The initial steps are the same for both the BMP probe and Openocd. First we set up the ARM toolchain (or whatever toolchain the target needs). We also need the gdbserver plugin for Netbeans, so we install that as well. After these two initial steps, the remaining configuration and the way the probe is used differ a bit from the Openocd workflow.

One of the main differences is that with Openocd we launch an Openocd instance externally and then use the Netbeans gdbserver plugin to connect to it. For that we use the Netbeans menu Debug and Attach Debugger, with the connection string extended-remote localhost:3333 and the selected project:

With the BMP probe no intermediary software is needed, since the gdbserver code is already part of the probe firmware, which means the debugger can connect directly to the target. Because of this we no longer need to attach to a debugger: we can simply use Debug Project (CTRL-F5) to program/flash the code output (optional) and start debugging the code running on the target device.

For this to happen we need to configure the Debug Command for the project (this must be done for each project) and, if wanted, we can also configure the Run Command.
Before we can do that, we first need to create a command file for the gdb debugger that pre-executes some commands, namely attaching to the target through the probe, before the debugging session starts.

For that I’ve created a file named BMPgdbinit located, in my case, in /opt/ARM:

# connect to the Black Magic Probe GDB server
target extended-remote /dev/ttyACM0
# scan the SWD interface for targets
monitor swdp_scan
# attach to the first target found
att 1
# flash the output file given to gdb
load
# start execution
start
run

This file is invoked through the ARM toolchain debugger (like this: arm-none-eabi-gdb -x /opt/ARM/BMPgdbinit file.out). It connects to the Blackmagic probe, which should be located at /dev/ttyACM0, scans the SWD bus for the device, attaches to it, loads/programs the output file (the firmware) and starts it. From this point on the Netbeans gdbserver plugin takes control and should stop at the first line of code or at the first breakpoint.

So to configure the run command we edit the Project Properties and modify the Run command:

Just define the Run Command as arm-none-eabi-gdb -x /opt/ARM/BMPgdbinit "${OUTPUT_PATH}"

With this configuration we can now press F6 to flash our firmware (because the load command is on the BMPgdbinit file) and run it.

For debugging we need the following configuration:

We must define the Debug Command as ${OUTPUT_PATH} since this will be passed as the arm-none-eabi-gdb file parameter.
We also need to point to the correct location of the BMPgdbinit file.

We can now just press CTRL-F5 to launch a debug session and, again, the firmware will be loaded and the code will stop at the first breakpoint.

An example of using Netbeans to build a simple program targeting the nRF52832 (the blinky example from pcbreflux) with an added variable and a watchpoint set:

That’s it….

Issues:

There are some issues with the above process…

Can’t stop the debugger. Pressing Pause does nothing…
This is apparently a long standing bug, so the workaround is to send a SIGINT signal to the gdb process from an external terminal window:

[pcortex@pcortex:~]$ ps -ef | grep arm-none
pcortex 9792  9405  0 20:22 ?        00:00:00 /tmp/dlight_fdam/5fe1c993/0397644155/pty --dir /opt/nordic/nRF52832/blinky --no-pty /opt/ARM/gcc-arm-none-eabi-6_2-2016q4/bin/arm-none-eabi-gdb -x /opt/ARM/BMPgdbinit -nx --interpreter mi -tty /dev/pts/7
pcortex 9794  9792  0 20:22 ttyACM0  00:00:00 /opt/ARM/gcc-arm-none-eabi-6_2-2016q4/bin/arm-none-eabi-gdb -x /opt/ARM/BMPgdbinit -nx --interpreter mi -tty /dev/pts/7
 

Just send a SIGINT signal to the process associated with the Blackmagic probe:

kill -SIGINT 9794

Control is then returned to Netbeans.
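To avoid looking up the PID by hand every time, something like the following one-liner may work (assuming a single debug session, since pgrep matches the gdb process by its name):

kill -SIGINT $(pgrep arm-none)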

The debugger doesn’t seem to follow my code…
Just do a Clean and Build of the project and select Run, so that a clean version of the program is flashed.

I unplug and plug the probe a lot and the tty ports change…

When this happens the BMPgdbinit script breaks, because the probe port changes, for example, to /dev/ttyACM2.

The solution is to create a udev rules file:

Mine, for Arch Linux, is as follows:

# Black Magic Probe
# there are two connections, one for GDB and one for uart debugging
  SUBSYSTEM=="tty", ATTRS{interface}=="Black Magic GDB Server", SYMLINK+="ttyBMP"
  SUBSYSTEM=="tty", ATTRS{interface}=="Black Magic UART Port", SYMLINK+="ttyBMPUART"

This file is named 99-blackmagic.rules and is located in /etc/udev/rules.d.
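After creating the rules file, either re-plug the probe or reload the udev rules; as root, something like this should do it:

udevadm control --reload-rules
udevadm trigger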

We now need to change the port used in the BMPgdbinit file from /dev/ttyACM0 to /dev/ttyBMP, as shown below. Note that after plugging the probe into the USB port it takes a while (2 to 3 seconds) for the ports to appear.
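The first line of BMPgdbinit then becomes:

target extended-remote /dev/ttyBMP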

This just doesn’t work…

Yes, sometimes the behaviour seems strange… I’ve worked around some of the issues by flashing the softdevice and the program again manually from the GDB command line.

Also if something like this happens:

It is due to the fact that the Netbeans environment variable ${OUTPUT_PATH} is empty… The solution is to explicitly state the output file on the Run and Debug commands instead of using the variable. For example, change this:

arm-none-eabi-gdb -x /opt/ARM/BMPgdbinit "${OUTPUT_PATH}"

to this:

arm-none-eabi-gdb -x /opt/ARM/BMPgdbinit /opt/Projects/nRF/proj1/program.out

or to this:

arm-none-eabi-gdb -x /opt/ARM/BMPgdbinit "${PROJECT_DIR}/file.out"

Final note:
Some other errors might creep up, like:

  • Program is not being run: check the command line of arm-none-eabi-gdb to see if the BMPgdbinit file is correctly provided.
  • Don’t know how to run. Try "help target".: same issue. Check if the Run and Debug commands are correctly defined.
  • Sometimes my target isn’t detected: adding an extra monitor swdp_scan line to the BMPgdbinit file might help (see the sketch below).
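A BMPgdbinit variant with the extra scan would look like this (using the /dev/ttyBMP symlink from the udev rule above):

target extended-remote /dev/ttyBMP
monitor swdp_scan
monitor swdp_scan
att 1
load
start
run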

NodeJS BLE Applications using BLENO on Arch Linux

BLENO is a great NodeJS based library for building applications that communicate with other devices (smartphones, tablets, sensor tags) using Bluetooth Low Energy (BLE).

This post is just to quickly document some requirements for successfully using the BLENO library, in my case on Arch Linux running the latest Plasma (KDE) desktop.

The tools:

Most of the information available on the internet for using and controlling the bluetooth adapter uses the now deprecated tools hcitool, hciconfig and so on. Check here the deprecated list of commands.

So we need to use the new tools from the latest Bluez (the Linux Bluetooth implementation): btmgmt, btinfo, …

Making Bleno examples work:

The simplest example to try out the BLENO library is the battery example located at: […]/bleno/examples/battery-service
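If the library isn’t available yet, it can be installed with npm (the examples used here come with the bleno source tree):

npm install bleno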

First let’s check if our computer/laptop bluetooth adapter is available. Note that all commands must be run as the root user:

root@pcortex:/opt/bleno/examples/battery-service# btinfo local
Bluetooth information utility ver 5.45
Failed to open HCI user channel
root@pcortex:/opt/bleno/examples/battery-service# 

This issue can be circumvented by stopping the higher level bluetooth stack:

root@pcortex:/opt/bleno/examples/battery-service# systemctl stop bluetooth
root@pcortex:/opt/bleno/examples/battery-service# btinfo local
Bluetooth information utility ver 5.45
HCI version: 6
HCI revision: 7869
LMP version: 6
LMP subversion: 64512
Manufacturer: 2
root@pcortex:/opt/bleno/examples/battery-service#

In case the Bluetooth was previously disabled through the graphical interface:

Disabling the Bluetooth there leads to this behaviour (in this case the bluetooth service is still running):

root@pcortex:/opt/bleno/examples/battery-service# systemctl start bluetooth   (<- After this disable bluetooth on the graphical interface)
root@pcortex:/opt/bleno/examples/battery-service# btinfo local
Bluetooth information utility ver 5.45
Failed to open HCI user channel
root@pcortex:/opt/bleno/examples/battery-service# btmgmt power on
Set Powered for hci0 failed with status 0x12 (Blocked through rfkill)
root@pcortex:/opt/bleno/examples/battery-service# 

Even stopping the Bluetooth service keeps the BT adapter disabled:

root@pcortex:/opt/bleno/examples/battery-service# systemctl stop bluetooth
root@pcortex:/opt/bleno/examples/battery-service# btmgmt power on
Set Powered for hci0 failed with status 0x12 (Blocked through rfkill)
root@pcortex:/opt/bleno/examples/battery-service#

We can check this with the rfkill command:

root@pcortex:/opt/bleno/examples/battery-service# rfkill list
0: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: no
2: hci0: Bluetooth
        Soft blocked: yes
        Hard blocked: no
root@pcortex:/opt/bleno/examples/battery-service# 

We can now unblock the adapter:

root@pcortex:/opt/bleno/examples/battery-service# rfkill unblock 2
root@pcortex:/opt/bleno/examples/battery-service# rfkill list
0: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: no
2: hci0: Bluetooth
        Soft blocked: no
        Hard blocked: no
root@pcortex:/opt/bleno/examples/battery-service# btinfo local
Bluetooth information utility ver 5.45
HCI version: 6
HCI revision: 7869
LMP version: 6
LMP subversion: 64512
Manufacturer: 2
root@pcortex:/opt/bleno/examples/battery-service# btmgmt power on
hci0 class of device changed: 0x00010c
hci0 Set Powered complete, settings: powered bondable ssp br/edr le secure-conn 
root@pcortex:/opt/bleno/examples/battery-service# 

So why go through all this work of making sure that the BT adapter is powered on AND the bluetooth stack is stopped (systemctl stop bluetooth)?

The answer is quite simple: if we don’t do this, the BLENO examples will seem to work (they start), but the advertised BLE services are the Bluez bluetooth services and not the ones from our code.

To explain, check the following behaviour where we start the BLENO Battery Service with the Bluetooth stack started:

root@halcyon:/opt/bleno/examples/battery-service# systemctl start bluetooth
root@halcyon:/opt/bleno/examples/battery-service# node main.js 
on -> stateChange: poweredOn
on -> advertisingStart: success
setServices: success

Using the Nordic nRF Connect Android app we can see the non-working behaviour versus what we should expect from the Bleno battery example:

BLE Scan Results

Pressing Connect we can see on the client that no services are provided. This is due to the fact that the desktop bluetooth is enabled:

Now let’s stop the bluetooth stack (which also powers down the BT adapter) and start the Bleno battery example again:

root@pcortex:/opt/bleno/examples/battery-service# systemctl stop bluetooth
root@pcortex:/opt/bleno/examples/battery-service# node main.js 

The example hangs here, because the BT adapter is disabled/off

^Croot@pcortex:/opt/bleno/examples/battery-service# btmgmt power on
hci0 class of device changed: 0x00010c
hci0 Set Powered complete, settings: powered bondable ssp br/edr le secure-conn 
root@pcortex:/opt/bleno/examples/battery-service# node main.js 
on -> stateChange: poweredOn
on -> advertisingStart: success
setServices: success

And now if we scan again and connect to the Battery example with our mobile phone through the Nordic application we have:

It works now!

We can confirm that, because the service identifier is defined in the file battery-service.js:

function BatteryService() {
  BatteryService.super_.call(this, {
      //uuid: '180F',
      uuid: 'ff51b30e-d7e2-4d93-8842-a7c4a57dfb07',
      characteristics: [
          new BatteryLevelCharacteristic()
      ]
  });
}

and it is the same UUID detected by the Android application.

Using Node-Red and Grafana WorldMap for geolocalized data visualization

Based on my previous posts we are now able to build a system that can receive, store and visualize data by using Node-Red, InfluxDB and Grafana. Grafana allows us to build dashboards and to query and visualize the stored data across time efficiently by using, in our case, the InfluxDB database engine. So far we’ve used simple line/bar charts to visualize data, but we can use both Node-Red and Grafana to plot data onto a map:

  1. NodeRed Contrib World Map: Openstreet UI based map for plotting data with several options, including icon types, vectors, circles and heatmaps totally controlled through nodered flows.
  2. Grafana WorldMap plugin: Grafana panel with also an OpenStreet map for visualizing data.

Both have pros and cons, but the main difference between the two is that Node-Red Worldmap is more suited to real time display, while the Grafana plugin is better adapted to displaying data based on a time based query. Another major difference is that Node-Red Worldmap requires some coding, although at what I consider an easy level, while the Grafana plugin is much harder to get working.

Mapping data using Node-Red Worldmap:
One of the easiest ways of mapping data in real time is using the Node-Red Worldmap node. The map is plotted and updated in real time.

cd ~
cd .node-red
npm install node-red-contrib-web-worldmap

After restarting and deploying a worldmap node, the map should be available at: http://server:1880/worldmap or other URL depending on the Node-Red base configuration.

One thing to keep in mind is that Node-Red is single user, so all instances of world maps (several different clients/browsers) will always have the same view.
The simplest way to start using the worldmap is just to copy and deploy the demo workflow provided by the node, but the key concept is that each point has a name and a set of coordinates.

msg.payload = {};
msg.payload.name = "CentralLX";       // marker name: reusing it updates the same point
msg.payload.lat = 38.7223;            // latitude
msg.payload.lon = -9.1393;            // longitude
msg.payload.layer = "SensorData";     // map layer the point is drawn on
msg.payload.UVLevel = getUVLevel();   // extra fields show up in the marker popup
msg.payload.Temperature = getTemp();  // (getUVLevel()/getTemp() stand for your own readings)
return msg;                           // needed so the function node forwards the message

The cool thing is that if we repeatedly inject the above message (keeping the same name) but with different coordinates, the data point will move across the map in real time and, as I said earlier, the move will be reflected on every client.
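As a quick way to see this, a hypothetical variation of the function node above can jitter the coordinates on every injection:

msg.payload = {
    name: "CentralLX",                            // same name, so the existing marker moves
    lat: 38.7223 + (Math.random() - 0.5) * 0.01,  // small random offset in latitude
    lon: -9.1393 + (Math.random() - 0.5) * 0.01,  // small random offset in longitude
    layer: "SensorData"
};
return msg;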

So all we need is an Inject node connected to a function node with the above code, feeding the worldmap node:

At the end we get this at the URL http://server:1880/worldmap:

Mapping data using Grafana Worldmap plugin:

The Grafana Worldmap plugin can get the location data in several ways. One of them is to use geohash data associated with the values/measurements.
There is a Node-Red geohash node that generates the geohash value from the latitude and longitude of the data location. As usual we install the node:

cd ~
cd .node-red
npm install node-red-node-geohash

and then the Grafana plugin. We just follow the plugin instructions:

cd ~
grafana-cli plugins install grafana-worldmap-panel
/etc/init.d/grafana-server restart

With these prerequisites installed we can now feed data into the database (InfluxDB in our case) that will be used by Grafana. We just need to make sure that we add the geohash field. The geohash node calculates it from the node-red message properties lat (latitude) and lon (longitude):

A simple example:
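The flow is essentially inject -> function -> geohash -> influxdb out. A hypothetical function node feeding it could look like this (the Temp and UVLevel field names are chosen to match the query output below):

msg.payload = {
    lat: 38.7045055,   // latitude, picked up by the geohash node
    lon: -9.1754669,   // longitude, picked up by the geohash node
    Temp: 22,          // measurement values stored in InfluxDB
    UVLevel: 8
};
return msg;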

Using the Influx tool, we can query our database and see that the geohash localization is now set:

> select * from DemoValue limit 2
name: DemoValue
time                Temp UVLevel geohash    lat       lon        
----                ---- ------- -------   ---------- ----------  
1490706639630929463 22   8       eyckpjywh 38.7045055 -9.1754669  
1490706651606553077 21   7       eyckpjzjr 38.7044008 -9.1746488 

That said, setting up the Worldmap plugin to display the above data was not straightforward, so the following instructions are more of a starting point than a final solution.

The first thing to know is that the plugin expects two fields: geohash and metric. With this in mind, before wasting too much time with the map plugin, a table panel filled by the intended query is a precious tool for debugging that query:

Once the table shows that the data is more or less what we want, we just transfer the query to the Worldmap plugin:

Notice two important things: the aliasing of the queried field to metric with the alias(metric) instruction, and the Format as: Table setting.
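In raw InfluxQL terms the query built in the editor ends up as something like the following (the measurement and field names follow the earlier example, so treat it as a sketch):

SELECT "geohash", "Temp" AS "metric" FROM "DemoValue" WHERE $timeFilter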

We can now setup the specific Worldmap settings:

On the Map Visual Options I’ve centered the map on my location and set the zoom level. Changes made here can be seen in real time.

On the Map Data Options for this specific example, the Location Data comes from a table filled with the previous query (hence the format as table on the query output), and we want to see the current values with no aggregation.

When hovering over a spot plotted on the map we see a label: value pair, where the label is taken from a table field. In my case I just used geohash (not really useful…). In my experience these changes only take effect after saving and reloading the panel with F5.

At the end we now have both graphed data and geolocated data:

If we drag the selector on the left graph panel, or select another time interval on the top right menu of the Grafana dashboard, the information visualized on the map changes.

Setting up a Grafana Dashboard using Node-Red and InfluxDB – Part 4: Installing Grafana and configuring

Our software stack installation ends with installing and configuring Grafana. Ubuntu installation instructions are at this Grafana documentation link.

Installation is as easy as to download and install the latest Grafana dashboard version:

root@server:~# wget https://grafanarel.s3.amazonaws.com/builds/grafana_4.1.2-1486989747_amd64.deb
root@server:~# sudo dpkg -i grafana_4.1.2-1486989747_amd64.deb

In my case, the above instructions were enough to install the software successfully.

After installing, if we will access the dashboard through a reverse proxy, check out my previous post on configuring the Grafana server correctly: Grafana Reverse Proxy.

We can now start the Grafana server, but if the server is exposed to the public internet the first thing to do is to change the default admin user password to something safe or, better yet, create a new admin user with a strong password and delete the default admin account.

We should press the top left icon, choose admin and then Global users. We can now select the Admin user and change the default password:

Here we also can create a new admin account, which is my recommendation.

Data source configuration:

Dashboard data comes from data sources that we configure in Grafana. We can configure several data sources that will be used to fetch the data feeding our dashboards and graphs.

In our case we will configure a single InfluxDB data source. Note that we didn’t define any authentication for the InfluxDB server, so no access credentials are needed yet, but we should set them up as soon as possible.
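When the time comes, creating credentials on the InfluxDB side is a single statement in the influx shell (a sketch only: pick a real password and enable authentication in influxdb.conf):

CREATE USER grafana WITH PASSWORD 'choose-a-strong-password' WITH ALL PRIVILEGES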

Since Grafana and InfluxDB are running on the same server we use the base URL http://localhost:8086. Make sure that the connection type is set as proxy.

After setting the datasource name and datasource database, press Save & Test and it should report success.

Initial Dashboard configuration

We are now able to create a first basic dashboard by building some charts/graphs based on the previously created data source.

Press the New Dashboard button to create a new dashboard. In case you don’t see a screen like the above screenshot, just press the top left Grafana icon, select Dashboards and then New.

The following screen should appear, where we can add graphical panels to show our data. We choose how we want to display the data by selecting the graph type, and the selected type is reflected immediately on the panel below. To edit the panel source data we press the panel title and, on the pop-up window that appears, press Edit.

We are now able to edit the associated queries that will fill the panel with data. A panel can have multiple source data queries associated with it; in the default case a query named A is already set up. We just need to change the measurement field to the correct one, which in our case could be Temperature or heap, based on the previous post (InfluxDB data configuration).

We now need to press Select Measurement and a drop down box with the available measurements should appear. We select the one that we want and data should now appear on the graph above.

From here on it is just a matter of polishing things, like giving the panel a meaningful name in the General tab and, at the end, pressing the Save icon at the top of the screen.

Setting up a Grafana Dashboard using Node-Red and InfluxDB – Part 3: Single point of access – Reverse proxy the services with nginx

Since we will be running a lot of services, each on its own port, the following configuration is optional, but it allows accessing all services through the same entry point by using the Nginx server as a reverse proxy for Node-Red, the Node-Red UI/Dashboard, Node-Red Worldmap and Grafana.

With this configuration the base URL is always the same, without any appended ports, and the only thing that changes is the URL path:

http://server/nodered
http://server/nodered/worldmap
http://server/grafana

To allow this we install and configure Nginx:

apt-get install nginx

The configuration files reside in the /etc/nginx directory. Under that directory there are two sub-directories, sites-available and sites-enabled, where the latter normally contains links to configuration files located in sites-available.
In that directory there is a file named default that defines the default web site configuration used by Nginx. This is the file where we will add the reverse proxy directives.

Reverse proxy for Node-Red and Node-Red Contrib Worldmap
For setting up the reverse proxy for Node-Red we must first change the base URL for Node-Red from / (root) to something else that the reverse proxy can map.

For this we will need to edit the settings.js file located on the .node-red directory on the home path of the user running Node-Red.

We need to uncomment and change the entry httpRoot to point to our new base URL.

   httpRoot: '/nodered',

Don’t forget the trailing comma.

We now need to restart Node-Red, and it should become accessible at the URL http://server:1880/nodered instead of http://server:1880/.

To configure Nginx, we edit the file default at /etc/nginx/sites-available and add the following section:


        location  /nodered {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_pass "http://127.0.0.1:1880";
        }

        location /socket.io {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_pass "http://127.0.0.1:1880";

        }

Note the following: The first location defines the reverse proxy URL /nodered to be served by the backend server http://127.0.0.1:1880. The incoming path, /nodered, will be passed to the backend server URL /nodered, since paths are passed directly. No need to add the /nodered path to the backend server definition.
Also I’m using the 127.0.0.1 address instead of localhost to avoid the IPv6 mapping to the localhost. In this way I’m sure that IPv4 will be used.

The /nodered location mapping makes all of Node-Red’s functionality work as it should at the base URL /nodered. But some nodes, like node-red-contrib-worldmap, issue requests to the proxy server ignoring the Node-Red base root map, hence the /socket.io mapping. It allows the worldmap nodes to work, but it also prevents that path from being used for anything else.

Reverse proxy for Grafana

For setting up the reverse proxy for Grafana we can, and should, use the following documentation: Grafana Reverse Proxy. For me the following configuration worked:

First edit the [server] section on the Grafana configuration file grafana.ini located at /etc/grafana.

Uncomment and edit the following lines:

[server]
# Protocol (http or https)
protocol = http

# The ip address to bind to, empty will bind to all interfaces
;http_addr =

# The http port  to use
http_port = 3000

# The public facing domain name used to access grafana from a browser
domain = server.domain.com

# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
;enforce_domain = false

# The full public facing url you use in browser, used for redirects and emails
# If you use reverse proxy and sub path specify full url (with sub path)
root_url = http://server.domain.com/grafana/

Note the trailing slash at the root_url. The same applies to the Nginx configuration.

The files for the Nginx configuration are the same as the above configuration for reverse proxy.

We just need to add the following section after the previous location directives:

        location /grafana/ {
                proxy_pass http://localhost:3000/;
        }

We should now restart Nginx to refresh the configuration, and the Grafana dashboard should be accessible at http://server.domain.com/grafana
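Depending on the init system, checking and reloading the configuration looks something like this:

nginx -t
systemctl restart nginx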

Setting up a Grafana Dashboard using Node-Red and InfluxDB – Part 2: Database configuration and data collection

In the previous post we installed the base software for our Grafana based dashboard.

We now need to configure our InfluxDB database and Node-Red to start collecting data.

Configuring InfluxDB:
Detailed instructions for configuring an InfluxDB database are on this InfluxDB documentation link.

The main concept we need to be aware of when using InfluxDB is that each data record has a time stamp, a set of tags and a measured value. This allows, for example, creating a value named Temperature and tagging it depending on the source sensor:

Temperature: Value=22.1 , Sensor=Kitchen
Temperature: Value=21.9 , Sensor=Room1

This allows processing all the data or only the data matching a certain tag or tags. Values and tags can be created on the fly without defining them beforehand, which is a bit different from standard RDBMS engines.

Creating an InfluxDB database:
To create the database, we need to access the machine hosting the InfluxDB server and execute the command influx:

odroid@odroid:~$ influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> create database SensorData
> show databases
name: databases
name
----
_internal
SensorData

> 

Now we have our database created, and I’ve named it SensorData. To make an example with the above temperature data we can do the following:

> insert Temperature,Sensor=kitchen value=22.1
ERR: {"error":"database is required"}

Note: error may be due to not setting a database or retention policy.
Please set a database with the command "use <database>" or
INSERT INTO <database>.<retention policy> <point>
> use SensorData
Using database SensorData
> 

As we can see, we first need to select the database where we are going to insert data, using the command use SensorData:

> use SensorData
Using database SensorData
> insert Temperature, Sensor=kitchen value=22.1
ERR: {"error":"unable to parse 'Temperature, Sensor=kitchen value=22.1': missing tag key"}

> insert Temperature,Sensor=kitchen value=22.1
> insert Temperature,Sensor=Room1 value=21.9
> select * from Temperature
name: Temperature
time                Sensor  value
----                ------  -----
1487939008959909164 kitchen 22.1
1487939056354678353 Room1   21.9

Note that we can’t use a space between the measurement name and the tags. The correct syntax is as follows:

 insert MeasureName,tag1=t1,tag2=t2,...   value1=val1,value2=val2,value3=val3,....

Also note that no DDL (data definition language) was used to create the tags or the measured value; we’ve just inserted data for our measurement with our tags and value(s), without needing to define the schema beforehand.

Configuring Node-Red
Since we now have a database, we can configure the InfluxDB Node-Red nodes to store data in the database:

There are two types of InfluxDB nodes: one that has both an input and an output, and another that only has an input. The former is for making queries to the database: we provide the query on the input and the results are returned on the output. The latter is only for storing data in the database.
For both nodes we need to configure an InfluxDB server:

InfluxDB Server Configuration

We need to press the Pen icon right next to the server to add or reconfigure a new InfluxDB server:

InfluxDB server

A set of credentials is required, but since I haven’t configured security yet, we can just put admin/admin as the username and password. In a real deployment we must activate security.

From now on it is rather simple. Referring to the InfluxDB node configuration screenshot (not the InfluxDB server configuration), we have a configuration field named Measurement. This is the measurement name that we associate a value with. Picking up on the above example with the insert command, it would be Temperature.

Now if the msg.payload provided as input to the node is a single value, let’s say 21, this is equivalent to doing:

Insert Temperature value=21

There are other formats for msg.payload that allow associating tags and measures. Just check the node’s Info tab.
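For instance, according to the node’s documented payload formats, an array of two objects writes the first object as the fields and the second as the tags; a sketch along the lines of the earlier kitchen example:

msg.payload = [
    { value: 22.1 },        // fields written to the measurement
    { Sensor: "kitchen" }   // tags attached to the data point
];
return msg;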

Simple example:

The following flow shows a simple example of a value received through MQTT, in this case the free heap from one of my ESP8266 boards, and its storage in InfluxDB:

Sample Flow

[{"id":"20bec5de.8881c2","type":"mqtt in","z":"ced40abb.3c92e","name":"Heap","topic":"/outbox/ESP12DASH/Heap","qos":"2","broker":"2a552b3c.de8d2c","x":83.16668701171875,"y":206.41668701171875,"wires":[["e0d9c912.8c57f8","876fb151.6f2fa"]]},{"id":"876fb151.6f2fa","type":"debug","z":"ced40abb.3c92e","name":"","active":true,"console":"false","complete":"false","x":408.5,"y":177,"wires":[]},{"id":"e0d9c912.8c57f8","type":"influxdb out","z":"ced40abb.3c92e","influxdb":"bbd62a93.1a7108","name":"","measurement":"heap","x":446.1666717529297,"y":224.58335876464844,"wires":[]},{"id":"2a552b3c.de8d2c","type":"mqtt-broker","broker":"192.168.1.17","port":"1883","clientid":"node-red","usetls":false,"verifyservercert":true,"compatmode":true,"keepalive":15,"cleansession":true,"willQos":"0","birthQos":"0"},{"id":"bbd62a93.1a7108","type":"influxdb","z":"","hostname":"127.0.0.1","port":"8086","protocol":"http","database":"SensorData","name":"ODroid InfluxDB"}]

We can see with this flow the data stored in InfluxDB:

> select * from heap;
name: heap
time                value
----                -----
1487946319638000000 41600
1487946440913000000 41600
1487946562206000000 41600
1487946683474000000 41600
1487946804751000000 41600
1487946926061000000 41600
1487947047309000000 41616
1487947168594000000 41600

Now we have data that we can graph with Grafana, subject of my next posts.

Setting up a Grafana Dashboard using Node-Red and InfluxDB – Part 1: Installing

A more or less standard software stack for controlling, processing and displaying data has emerged, used by almost everyone hacking around with Arduinos, ESP8266, Raspberry Pis and a plethora of other devices. This “standard” software stack basically always includes the MQTT protocol, some sort of web based services, Node-Red and several cloud based services like Thingspeak, PubNub and so on. For displaying data locally, solutions like Freeboard and Node-Red UI are great resources, but they only show current data/status and have no easy way to see historical data.

So in this post I’ll document a software stack based on Node-Red, InfluxDB and Grafana that I use to store and display data from sensors that I have around, while keeping, and being able to display, the historical data. The key asset here is the specialized time-series database InfluxDB, which keeps the data stored and allows fast retrieval based on time stamps: 5 minutes ago, the last 7 days, and so on. InfluxDB is not the only time-series database available, but it integrates directly with Grafana, the software that allows building dashboards based on the stored data.

I’ve been running an older version of InfluxDB on my ARM based Odroid server since a long time ago, when ARM based builds of InfluxDB and Grafana were not available. This is no longer the case: both InfluxDB and Grafana now have ARM based builds, so we can use them on Raspberry Pi and Odroid ARM based boards.

So let’s start:

Setting up Node-Red with InfluxDB
I’ll not detail the Node-Red installation itself since it is already documented thoroughly everywhere. To install the supporting nodes for InfluxDB we need to install the package node-red-contrib-influxdb

cd ~/.node-red
npm install  node-red-contrib-influxdb

We should now restart Node-Red so that it detects the new nodes.

Node Red InfluxDB nodes

Installing InfluxDB
We can go to the InfluxDB downloads page and follow the installation instructions for our platform. In my case I need the ARM build to be used on Odroid.

cd ~
wget https://dl.influxdata.com/influxdb/releases/influxdb-1.2.0_linux_armhf.tar.gz
tar xvzf influxdb-1.2.0_linux_armhf.tar.gz

The InfluxDB engine is now decompressed in the newly created directory influxdb-1.2.0-1. Inside this directory there are the directories that should be copied to the system directories /etc, /usr and /var:

sudo -s
cd /home/odroid/influxdb-1.2.0-1

Copy the files to the right locations. I’ve added the -i switch just to make sure that I don’t overwrite anything.

root@odroid:~/influxdb-1.2.0-1# cp -ir etc/ /etc
root@odroid:~/influxdb-1.2.0-1# cp -ir usr/* /usr
root@odroid:~/influxdb-1.2.0-1# cp -ir var/* /var

We need now to create the influxdb user and group:

root@odroid:~/influxdb-1.2.0-1# groupadd influxdb
root@odroid:~/influxdb-1.2.0-1# useradd -M -s /bin/false -d /var/lib/influxdb -G influxdb influxdb

We need now to change permissions on /var/lib/influxdb:

cd /var/lib
chown influxdb:influxdb influxdb

We can now set up the automatic startup script. In the directory /usr/lib/influxdb/scripts there are scripts for systemd based Linux versions and for init.d based versions, which is my case. So all I have to do is copy the init.sh script from that directory to /etc/init.d and link it to my run level:

root@odroid:~# cd /etc/init.d
root@odroid:/etc/init.d# cp /usr/lib/influxdb/scripts/init.sh influxdb
root@odroid:/etc/init.d# runlevel
 N 2
root@odroid:/etc/init.d# cd /etc/rc2.d
root@odroid:/etc/init.d# ln -s /etc/init.d/influxdb S90influxdb

And that’s it. We can now start the database with the command /etc/init.d/influxdb start

root@odroid:~# /etc/init.d/influxdb start
Starting influxdb...
influxdb process was started [ OK ]

We can see the influxdb logs at /var/log/influxdb and start using it through the command line client influx:

root@odroid:~# influx
Connected to http://localhost:8086 version 1.2.0
InfluxDB shell version: 1.2.0
> show databases
name: databases
name
----
_internal

> 

Installing Grafana
We now need to download Grafana. In my case, for the Odroid with its ARMv7 based processor, no official release/binary is available.
But ARM builds are available on this GitHub repository: https://github.com/fg2it/grafana-on-raspberry for both the Raspberry Pi and other ARM based computer boards, but only for Debian/Ubuntu based OSs. Just click the download button on the description for the ARMv7 based build and at the end of the next page a download link should be available:

odroid@odroid:~$ wget https://bintray.com/fg2it/deb/download_file?file_path=main%2Fg%2Fgrafana_4.1.2-1487023783_armhf.deb -O grafana.deb

And install:

root@odroid:~# dpkg -i grafana.deb
Selecting previously unselected package grafana.
(Reading database ... 164576 files and directories currently installed.)
Preparing to unpack grafana.deb ...
Unpacking grafana (4.1.2-1487023783) ...
Setting up grafana (4.1.2-1487023783) ...
Installing new version of config file /etc/default/grafana-server ...
Installing new version of config file /etc/grafana/grafana.ini ...
Installing new version of config file /etc/grafana/ldap.toml ...
Installing new version of config file /etc/init.d/grafana-server ...
Installing new version of config file /usr/lib/systemd/system/grafana-server.service ...

Set the automatic startup at boot:

root@odroid:~# ln -s /etc/init.d/grafana-server /etc/rc2.d/S91grafana-server

And we can now start the server:

root@odroid:~# /etc/init.d/grafana-server start
 * Starting Grafana Server    [ OK ] 
root@odroid:~# 

We can now access the server at the address: http://server:3000/ where server is the IP or DNS name of our ODroid or RPi.

Conclusion:
This ends the installation part for the base software.

The following steps are:

  • Create the Influx databases
  • Receive data from sensors/devices and store it in the previously created database
  • Configure and create Grafana data sources and dashboards
  • Add some plugins to Grafana