Cloud-based deployment for IoT devices

Following up on my previous post, Cloud based CI with Platformio: now that we have the build output from the Continuous Integration process, we are able to deploy it to our devices.

This last deploy phase of the Develop, CI, Deliver cycle using cloud infrastructure only makes sense for devices that are powerful enough to have permanent or periodic network connectivity, have no serious limitations on power usage or bandwidth, are in range, and can be updated remotely.

In practice this means that most low-power devices are not easily updated this way: devices that use LPWAN technologies like LoRaWAN or Sigfox, and devices that are battery powered and sleeping most of the time. For these cases the only real solution is out-of-band management, upgrading the device locally.

So the scope of this post is simply to build a cloud-based process that allows ESP8266 devices to fetch updated firmware produced by the CI output. In its simplest form, all we need is to create a web server, make the firmware available on that server, and provide its URL to the ESP8266 devices that use the HTTP updater for OTA updates.

We can reuse the PHP file from the squix blog, deployed on a PHP-enabled web server, to deliver the latest builds to devices requesting over-the-air updates.
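The gist of what such a page does can be sketched in a few lines of Node.js (an illustrative sketch only, not the actual squix script; the way the device reports its running version and how the binary is stored are assumptions here):

```javascript
// Illustrative update-server logic (not the real squix PHP script): the
// device reports the firmware version it is running, and the server either
// serves the latest CI build or tells the device it is already up to date.
function firmwareResponse(deviceVersion, latestVersion) {
  if (deviceVersion === latestVersion) {
    // 304 Not Modified: the ESP8266 HTTP updater skips the download.
    return { status: 304, body: null };
  }
  // Out of date: deliver the latest build output.
  return { status: 200, body: 'firmware.bin' };
}
```

A real script also has to map the requesting device (for example by MAC address) to the right binary when more than one firmware is hosted.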

Openshift PaaS Cloud Platform

The simplest way of making the squix PHP page available on the cloud is to use the great Platform as a Service offering Openshift, by RedHat. The free tier allows three applications (gears) and sign-up is free. At sign-up time we need to choose our own domain suffix so that, for example, if I choose primal I'll have URLs such as

Openshift offers a series of pre-configured applications ready to be deployed, such as NodeJS, Java, Python and PHP.

Openshift preconfigured platforms

So after signing up, all we need to do is create a new application based on the PHP 5.4 template and give it a URL (it can be the default PHP), and that's it: we have our PHP-enabled web server.

Deploying code to Openshift

To deploy code to Openshift we use the Git tool for manipulating our application repository on the PaaS cloud platform.

So we must first clone our repository locally, modify it and then upload the changes.

To obtain the repository URL and connection details, we must first set up our local machine with the rhc command line tool and upload our public SSH key to the Openshift servers:

 [pcortex@pcortex:~]$ gem install rhc

If the gem tool is not available, first install Ruby (sudo pacman -S ruby).

We then setup the rhc tool with the command rhc setup. Complete details here.

The command rhc apps should list now our Openshift applications:

[pcortex@pcortex:~]$ rhc apps
nodejs @ (uuid: 9a72d50252d09a72d5)
  Domain:     primal
  Created:    Aug 26  3:43 PM
  Gears:      1 (defaults to small)
  Git URL:    ssh://
  Deployment: auto (on git push)

  nodejs-0.10 (Node.js 0.10) 
    Gears: 1 small 
php @ (uuid: c0c157c41271b559e66) 
  Domain:     primal          
  Created:    12:16 PM  
  Gears:      1 (defaults to small) 
  Git URL:    ssh://                
  Deployment: auto (on git push) 

  php-5.4 (PHP 5.4)
    Gears: 1 small

You have access to 2 applications.

We now clone the remote repository to our machine:

[pcortex@pcortex:~]$ mkdir Openshift
[pcortex@pcortex:~]$ cd Openshift
[pcortex@pcortex:Openshift]$ git clone ssh://
[pcortex@pcortex:Openshift]$ cd php
[pcortex@pcortex:php]$ wget 

We should now change the PHP file so that it uses our repository to serve our firmware:

    $githubApiUrl = "";
    $ch = curl_init();

Then we just commit the change to Openshift:

[pcortex@pcortex:php]$ git add firmware.php
[pcortex@pcortex:php]$ git commit -m "Added firmware.php file"
[pcortex@pcortex:php]$ git push
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 924 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: Stopping PHP 5.4 cartridge (Apache+mod_php)
remote: Waiting for stop to finish
remote: Waiting for stop to finish
remote: Building git ref 'master', commit a72403a
remote: Checking .openshift/pear.txt for PEAR dependency...
remote: Preparing build for deployment
remote: Deployment id is 8fdecb3f
remote: Activating deployment
remote: Starting PHP 5.4 cartridge (Apache+mod_php)
remote: Application directory "/" selected as DocumentRoot
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://
   321e48b..a72403a  master -> master

And that’s it: the link for HTTP OTA is available at

Final notes:

With the above firmware.php file we can deliver a single firmware file to any device that calls the page.

But a better solution is needed if we want to:

– Deliver multiple firmware files to different devices
– Deliver different versions of firmware files, for example be able to lock a specific version to some devices
– Know which devices have updated
– Know which version of firmware the devices are running

and of course, add some security.

Cloud-based continuous integration and delivery for IoT using PlatformIO

After finding out about PlatformIO for IoT development, I started to read some of the PlatformIO documentation and also what other users have written about it.

One of the most interesting features of PlatformIO is that it can be used in an automated build process for any PlatformIO-based project. This is important for automated build systems doing CI (Continuous Integration), since it allows early detection of possible build problems. CI makes sense when several contributors or a team are working on the same code repository, and we need to make sure that the project builds with everyone's changes. At the end, deliverables can be pushed to their destination:


What PlatformIO CI enables for our IoT projects is automatic builds after code commits to the repository (for example, on GitHub). When the automatic build is triggered, PlatformIO pulls all the dependencies and frameworks needed to build our project on the automated build system.

After the automatic build is triggered and completes successfully, we can then deliver the output.

One of the most interesting examples of this workflow is the following post, which shows the process of developing, committing code to the repository, triggering automatic builds and, at major releases, deploying firmware updates over the air (OTA) to the ESP8266. All of this using PlatformIO and 100% cloud infrastructure.

Starting up with Platformio and TravisCI

PlatformIO supports several CI systems, and one of them is Travis.CI, which integrates out of the box with GitHub. To enable Travis.CI on your GitHub projects/repositories, just sign in to GitHub, go (on another browser tab) to the Travis.CI site and press the Sign in with GitHub button, accept the authorization requests, and select the repositories to enable for CI by going to your user profile.

Enabled Repository

After enabling the repositories, every commit will trigger an automatic build on the Travis.CI platform. As a build status indicator we can add an IMG tag to the repository README file, so we can check the build status without going to the Travis site, for example:

Setting up the build
Travis.CI will start the build process according to the instructions defined in the hidden repository file .travis.yml, which is unique and specific to each repository.

This Travis Documentation link explains in detail the logic and options behind the travis.yml file and build process.

Fortunately, PlatformIO creates a sample .travis.yml file when initializing a project.
Based on that sample, here is one of mine for compiling two Arduino projects in the same code repository:

language: python
python:
     - "2.7"

sudo: false
cache:
     directories:
         - "~/.platformio"

install:
     - pip install -U platformio

script:
     - cd RFGW && platformio run -e nanoatmega328
     - cd ../ATTINNY_MailMonitor_TX && platformio run

The tags language:, sudo: and cache: are not changed in this case.

The install: and script: tags are customized so that our project can be built.

In the install: tag, the first command is always the installation of the PlatformIO tools, followed, if necessary, by the installation of other dependencies. For example, if our project depends on libraries from the PlatformIO library registry, we can do the following:

install:
     - pip install -U platformio
     - platformio lib install 64

This will first install PlatformIO and then the ArduinoJson library (Id 64). We can add as many commands as we want, each prefixed by the dash character.
This is one way of doing it, but it means that we need to change the .travis.yml file every time we add or remove libraries.

Another way is to add the library dependency to the project file platformio.ini, like this:

[env:nanoatmega328]
platform = atmelavr
framework = arduino
board = nanoatmega328
lib_install = 64

In this case all the dependencies are associated with the project file, but the build commands are different.

So one example with multiple libraries could be as follow:

install:
     - pip install -U platformio
     - platformio lib install 64
     - platformio lib install 652

script:
     - platformio run

And this is the same as above:

install:
     - pip install -U platformio

script:
     - platformio run -e nanoatmega328

where nanoatmega328 is the environment configuration on platformio.ini file:

[env:nanoatmega328]
platform = atmelavr
framework = arduino
board = nanoatmega328
lib_install = 64,652

To end this topic, notice that we can have several builds in the same repository; just add several command lines to the script: tag:

script:
     - cd RFGW && platformio run -e nanoatmega328
     - cd ../ATTINNY_MailMonitor_TX && platformio run

In the above example I'm always using paths relative to the project root.

Further information can be found on the Platformio Travis integration guide and on the Travis.CI site.

Continuous delivery

Since every commit to our repository triggers the Travis build process, we now need to distinguish between working commits and release commits, so that for the latter the build output is made available to be deployed to end devices/platforms for OTA updates (or not).

This can be easily achieved by using git tags and a conditional deploy process that only runs when a git tag is defined.

With this scheme, the normal cycle of git add, commit and push creates a working commit that triggers the CI build process as usual, but not the deployment phase of copying the build output (binaries, firmware) to the GitHub Releases tab.

Creating a tag and a release can be done either on the command line or through the GitHub web interface, the latter being the easiest way of doing it.

But there are some prerequisites for this to happen:
– Generate an OAuth personal GitHub token so that Travis can copy the output to the GitHub Releases tab.
– Encrypt the OAuth token with the travis command line tool.
– Change the .travis.yml file so that it deploys the build output to the Releases tab only on tagged commits.

The GitHub token is generated by going to your Github Profile, selecting settings and then Personal Access tokens.
Press Generate new token, enter your password and add permissions to access your repositories.
The permissions should be at least full repo access:

GitHub Personal token permission

Make sure that at the end you copy the OAuth token, otherwise you must generate another one from the beginning.

The GitHub token must be kept secret at all times, and since we need to have it in the .travis.yml file, which can be read by everyone, we must encrypt it in such a way that only Travis.CI can use it.
This is achieved with the travis command line tool on our machine, so we need to run:

[pcortex@pcortex:RFGW_SensorNodes|master]$ travis encrypt GH_TOKEN="7d54...df5977" --add 

GH_TOKEN is the variable name that will hold the OAuth token so that Travis can use it securely.

With the above command, the .travis.yml file is modified and the following entry is added:

    secure: WqroI5PtWWm94svvau5G3LFz4PMBU...fY=

We can now add the final configuration to the Travis.CI configuration file, so that on tagged releases the build output is automatically attached.

deploy:
  provider: releases
  api_key: ${GH_TOKEN}
  file:
    - $TRAVIS_BUILD_DIR/.pioenvs/nanoatmega328/firmware.hex
    - $TRAVIS_BUILD_DIR/.pioenvs/digispark-tiny/firmware.elf
  skip_cleanup: true
  on:
    tags: true

The provider: tag defines that we want to deploy to GitHub Releases, and the api_key: tag contains the secure Oauth token to access GitHub.

The file: tag defines which files we want to deploy; in this case we use the $TRAVIS_BUILD_DIR environment variable to locate our build directory root. The skip_cleanup: tag avoids cleaning all the build outputs.

The on: tag is the most important because it conditionally defines that the deploy process only happens at tagged release.

So after this configuration, if we commit without tagging, the build is made, but no deploy to the Releases happens:

Travis Build without tagging

If we want to trigger a tagged commit we can do it purely on the command line:

[pcortex@pcortex:RFGW_SensorNodes|master]$ git tag -a v0.3 -m "Version 0.3"
[pcortex@pcortex:RFGW_SensorNodes|master]$ git push origin v0.3
 * [new tag]         v0.3 -> v0.3

And that’s it: Automatic build process and release:

Tagged build process output

And the final result:

Tagged build output

We have now a tagged release with source code and binaries automatically created and packaged.


At this point we have the deliverables for a release, and we should now distribute/deploy them. This is by itself another process that can be done through cloud services or locally; it really depends on the end architecture.

The most important issue here is related to security: making sure that the correct build is delivered, was not changed in any way and reaches the intended devices.


Normally I don't use or look for solutions to problems that I don't have. For this reason alone, PlatformIO stayed under my radar for so long.

What's my problem?
Since I'm building my mailbox monitoring solution, I'm using two different types of Arduino boards: an Arduino Nano 328p based board for the RF gateway, and some Digispark ATtiny85 based boards for the sensors. The Digispark ATtiny85 boards are not completely energy efficient for battery-powered sensor usage, but they are good enough as an initial proof of concept.

To be able to program the Digispark board, I had to use the Arduino IDE, and through the IDE Boards Manager, add support for them, so that these new boards are available to be selected and programmed.

Now, this brings two problems:

– The first one is that after selecting the board type in the IDE, every window instance of the IDE assumes the same board. This means that I can't have, side by side, one Arduino IDE window for the RF gateway Atmega328p based board and another window for the ATtiny85 sensor board. I have to constantly change board types depending on what code I'm working on. A better solution (the one PlatformIO uses) would be to associate the board type with the project, but that is not possible in the Arduino IDE.

– The second problem is that the last Arduino IDE tools update broke the integration between the native Arduino boards and the Digispark based boards. I can have support for one or the other, but not both at the same time, otherwise I get errors. There are some discussions on the Arduino forums that acknowledge the issues I'm having.

I could still use one IDE/editor for one type of board and the Arduino IDE for the ATtiny boards, but that is not very efficient. Anyway, the Arduino IDE is too much hassle when complexity starts to grow. I'm already using the NetBeans IDE for programming the ESP8266 and the KDE Kate editor for some basic Arduino programming, so all I needed was something that supported the Digispark ATtiny85 toolset.

And so, I have several problems, which means I need to look for a solution, preferably one that unifies all the platforms.

Platformio and Platformio IDE

PlatformIO is an open source toolset that allows, using the same base tools, targeting different environments: Atmel/Arduino, Espressif ESP8266, ARM, and so on.

This means that from a unified toolset/IDE I can target different platforms and, importantly, the target is defined per project and not by the tool or IDE, which solves my first problem.

Also, since PlatformIO supports several targets out of the box, it probably also solves problem number two of possible future clashes between different device platforms/architectures.

PlatformIO is a command line based tool, and associated with it there is an IDE where development can take place in a modern editor (Atom) that, among other things, supports code completion, serial port monitoring, an embedded terminal, and so on…

The command line tool supports searching for and installing support for the several boards available on the market, and also allows searching for and installing user-contributed libraries.

Anyway, the PlatformIO docs can explain the purpose and capabilities of these tools better, but their greatest achievement is allowing a unified toolset to be used for different boards/targets.

Keep in mind that there are at least two tools:

– Platformio – This is a Python based command line tool that unifies the compiling, uploading, library management, and so on.
– Platformio IDE – This is a NodeJS, Atom Editor based IDE that integrates the Platform tools on the IDE.

While on Arch Linux I had no issues worth mentioning with the PlatformIO CLI tools, the IDE has a lot of issues, not due to PlatformIO IDE itself but due to the Atom editor and its supporting software (Electron). I'm still not able to use the IDE to its full potential; as an editor with code completion and project management it works fine, but so far, for me, uploading to the boards must be done through the command line platformio tools.

Installing Platformio on Arch Linux
So I'm running Arch Linux, which by definition is quite near the bleeding edge… There are instructions for other platforms, so here is my take on the installation on Arch:

The main platformio package is available on the AUR repository, so just install it with pacaur or yaourt:

 yaourt -S platformio

We should then have the command line tools:

root@pcortex:~# pio
Usage: pio [OPTIONS] COMMAND [ARGS]...

  --version          Show the version and exit.
  -f, --force        Force to accept any confirmation prompts.
  -c, --caller TEXT  Caller ID (service).
  -h, --help         Show this message and exit.

  boards       Pre-configured Embedded Boards
  ci           Continuous Integration
  init         Initialize new PlatformIO based project
  lib          Library Manager
  platforms    Platforms and Packages Manager
  run          Process project environments
  serialports  List or Monitor Serial ports
  settings     Manage PlatformIO settings
  update       Update installed Platforms, Packages and Libraries
  upgrade      Upgrade PlatformIO to the latest version

To start a simple arduino project we can first install the Atmel AVR platform:

root@pcortex:~#  pio platforms install atmelavr
Installing toolchain-atmelavr package:
Downloading  [####################################]  100%             
Unpacking  [####################################]  100%             
The platform 'atmelavr' has been successfully installed!
The rest of packages will be installed automatically depending on your build environment.

We can search for available platforms with pio platforms search

 mkdir myproject
 cd myproject
 pio init --board uno

And that's it. We can start to edit the src/main.cpp file, add libraries to the lib directory, execute pio run to compile, and pio run -t upload to upload to the board.

We can see further instructions here

And that's basically it for the command line tools.

For the IDE:

Install clang and the Atom editor from the main repository. Minicom is for serial port monitoring from within the IDE (or not):

Edit: Do not install the Atom editor from the main repository. Install atom-editor-bin from AUR instead; many problems are solved with the AUR version. You may first install the editor from the main repositories so that all possible dependencies are pulled in first, then remove it with pacman -R atom apm and install the AUR version with yaourt -S atom-editor-bin

root@pcortex:~# pacman -S clang atom minicom
resolving dependencies...
looking for conflicting packages...

Packages (11) apm-1.12.6-1  electron-1.3.3-1  http-parser-2.7.1-1  libuv-1.9.1-1  minizip-1:1.1-1  nodejs-6.4.0-1  npm-3.10.6-1  re2-20160301-1  semver-5.3.0-1  atom-1.9.8-3  clang-3.8.1-1 minicom-2.7-2

This will bring also the node-js and electron platforms.

We can now start the Atom editor and add the platformio-ide package. Installing the platformio-ide package will also pull in platformio-ide-terminal.

root@pcortex:~# atom

To clear the error (if it appears) that the atom editor can’t watch the .atom/config.cson file, execute also the following command:

sudo sysctl fs.inotify.max_user_watches=32768

In my case, after starting Atom, the main window appears but nothing else works. For example, going to Edit->Preferences to try to add the platformio-ide package does nothing. The same applies to other menu options. On the other hand, running Atom as root seems to work, but that is not a solution.

Starting Atom in the foreground (atom -f) I can see the following error:

  TypeError: Path must be a string. Received undefined", source: path.js (7)

What I’ve found out is that if we open a file passed through the command line, close atom, and start it again without any parameter, it starts to work…

So, just do, for example:

 root@pcortex:~# atom somefile.txt

 Close atom, and start it again:

 root@pcortex:~# atom 

The menus should start to work and we should be able to install the platformio-ide package through the IDE graphical package manager. Just go to Edit->Settings->Install, search for Platformio and add Platformio IDE. The Platformio IDE Terminal will also be installed automatically.

If, as in my case, we are behind a corporate proxy, we set the proxy environment variables on a terminal session, and start atom from there.

PlatformIO Installation

After installation the Platformio menu and toolbar should appear.

One thing I found was that the terminal window and serial port monitor wouldn't work. On one of my machines the window just opens and stays blank with a blinking cursor. On another machine, an error appears saying that the Platformio Terminal is not installed, which is not the case. On this last machine the error that appears with atom -f is:

 "Failed to require the main module of 'platformio-ide-terminal' because it requires an incompatible native module

In the first situation, the window with only the blinking cursor, pressing CTRL-SHIFT-I to open the debugger and viewing the console shows an error like this:

/usr/lib/atom/src/task.js:52 Cannot find module '../bin/linux/x64.m49.node' Error: Cannot find module '../bin/linux/x64.m49.node'
    at Function.Module._resolveFilename (module.js:440:15)
    at Function.Module._load (module.js:388:25)
    at Module.require (module.js:468:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/home/fdam/.atom/packages/platformio-ide-terminal/node_modules/pty.js/lib/pty.js:18:9)
    at Module._compile (module.js:541:32)
    at Object.value [as .js] (/usr/lib/atom/src/compile-cache.js:208:21)
    at Module.load (module.js:458:32)

What I’ve done to solve this:

– Go to ~/.atom/packages/platformio-ide-terminal
– Delete completely the node_modules directory: rm -rf node_modules
– Install nslog: npm install nslog
– Edit the package json file, and change the nan version from 2.0.5 to >2.0.5

        "name": "nan",
        "version": ">2.0.5",
        "path": "node_modules/nan/include_dirs.js"

– Install the packages: npm install.
– It should error on the pty.js package. Do not worry (yet…)
– Goto node_modules/pty.js and edit the package.json file. Change the version of nan from 2.0.5 to >2.0.5

  "dependencies": {
    "extend": "~1.2.1",
    "nan": ">2.0.5"

– Remove the node_modules directory (for the pty.js): rm -rf node_modules
– Check what is our electron version: electron -v
– In my case it is v1.3.3
– Paste the following lines on the terminal:

# Electron's version.
export npm_config_target=1.3.3
# The architecture of Electron, can be ia32 or x64.
export npm_config_arch=x64
# Download headers for Electron.
export npm_config_disturl=
# Tell node-pre-gyp that we are building for Electron.
export npm_config_runtime=electron
# Tell node-pre-gyp to build module from source code.
export npm_config_build_from_source=true
# Install all dependencies, and store cache to ~/.electron-gyp.
HOME=~/.electron-gyp npm install

Start the Atom editor again. The terminal should work now. If not, Atom might complain and show a red bug icon on the bottom right side. Just press it, choose module rebuild, restart Atom and it should be OK.


While the installation and usage of the command line tools is straightforward and works out of the box, the Atom based IDE is another story. It has a steep installation curve, not PlatformIO's fault, but due to the number of components involved. Those issues might also be due to my Linux distribution (Arch), but still, it might be a real show stopper for some users if it happens on other distributions. I've lost some serious hours debugging this 🙂 to arrive at an almost fully functional IDE.

Anyway, at the end, the platform and the IDE are fantastic. With code completion, the platformio tools seamlessly integrated, simultaneous serial port monitoring of different boards, support for different targets and so on, it really is a great product.

PlatformIO is highly recommended, as is the IDE, despite its rough edges.

Communication over 433Mhz links

About a year ago, maybe more, I bought some 433Mhz transmitters and receivers so I could build a mailbox monitoring solution, loosely based on the LOFI Project. The base idea was that when someone put something in my mailbox I would be notified. Anyway, the ESP8266 came along, and after some experiments it became clear that the ESP8266 cannot be used to implement the mailbox monitoring project I had in mind, due to power consumption but mainly because of the great distance, across several floors and walls, between the mailbox and my access point. So back to basics and to the original, simpler idea of using a plain 433Mhz link for transmitting data.

I’ve started to do some experiments using these devices:

eBay 433Mhz RX TX

All I can say about these is that while the transmitter is OK and works fine across floors and walls (I can see the signal clearly using my SDR), the receiver is absolute garbage and useless for the intended purpose.

The receiver is only able to receive in line of sight with the emitter when there are no obstacles, and even in this scenario, with both emitter and receiver having attached antennas, the maximum distance between them is 4 to 5 meters.

The solution is to use the better, higher cost RXB8 receiver, which according to the datasheet has -114dBm sensitivity; it does work and is very good.


The code for interfacing with this receiver is the same code that works with the cheaper receiver: the Arduino Manchester encoding library.

Using the RXB8 with an attached 433Mhz antenna (I'm using an antenna with 5dBi and an SMA connector), the results are simply superior, with the end result that the signals/messages, even when the emitter is located several floors down and across several walls, are received correctly and can be decoded by the Arduino.

Anyway, one of the interesting things on the LOFI project is/was the Hamming Error Correction implementation found here in the RobotRoom blog.

So I’ve forked the original mchr3k Arduino Manchester encoding library and added the Hamming Code Error correction support: Arduino Manchester encoding library with Hamming EC support.

The usage is quite simple and can be seen in the examples Hamming_TX and Hamming_RX.

The two new functions are:

uint8_t Manchester::EC_encodeMessage( uint8_t numBytes, uint8_t *data, uint8_t *ecout)

where the input data buffer is provided by the data parameter, with the buffer size provided by the numBytes parameter. The output buffer ecout should be at least half as big again (1.5×) as the original buffer, since for every two input bytes an additional parity byte is added.

For receiving, the buffer with the data and parity is decoded, and if any errors are detected they are corrected, when possible.

uint8_t Manchester::EC_decodeMessage( uint8_t numBytes, uint8_t *ecin, uint8_t *bytesOut, uint8_t *dataout )

The inputs are the size and the buffer with the data and parity, namely the numBytes and ecin parameters, and the output is returned in the dataout buffer, with the decoded size returned in the bytesOut parameter.

On top of these functions we can build some logical protocol for the received data, but that really depends on the way we want to use the library.
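To illustrate the principle behind those parity bytes, here is a classic Hamming(7,4) round trip (a JavaScript sketch for clarity only; the library's own implementation works on whole bytes, adding one parity byte for every two data bytes): any single flipped bit in a codeword can be located and corrected.

```javascript
// Hamming(7,4) sketch: encode a 4-bit value into a 7-bit codeword that
// survives any single bit flip. Bit positions are 1-based, with parity
// bits at positions 1, 2 and 4 (the classic layout).
function hammingEncode(nibble) {
  const d = [0, 1, 2, 3].map(i => (nibble >> i) & 1); // data bits d0..d3
  const p1 = d[0] ^ d[1] ^ d[3]; // checks positions 3, 5, 7
  const p2 = d[0] ^ d[2] ^ d[3]; // checks positions 3, 6, 7
  const p4 = d[1] ^ d[2] ^ d[3]; // checks positions 5, 6, 7
  // codeword layout, positions 1..7: p1 p2 d0 p4 d1 d2 d3
  return p1 | (p2 << 1) | (d[0] << 2) | (p4 << 3) |
         (d[1] << 4) | (d[2] << 5) | (d[3] << 6);
}

function hammingDecode(code) {
  const bit = p => (code >> (p - 1)) & 1; // read 1-based bit position
  // Each syndrome bit re-checks one parity group; together they spell
  // out the position of a flipped bit (0 means the codeword is clean).
  const errorPos = (bit(1) ^ bit(3) ^ bit(5) ^ bit(7)) |
                   ((bit(2) ^ bit(3) ^ bit(6) ^ bit(7)) << 1) |
                   ((bit(4) ^ bit(5) ^ bit(6) ^ bit(7)) << 2);
  if (errorPos) code ^= 1 << (errorPos - 1); // correct the single error
  return bit(3) | (bit(5) << 1) | (bit(6) << 2) | (bit(7) << 3);
}
```

Two or more flipped bits in the same codeword are beyond what such a code can correct, which is why the payload should stay short relative to the expected error rate of the radio link.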

ESP8266 – Logging data in a backend – AES and Crypto-JS

After building, in the previous posts, the Node-Red based backend that supports E2EE (End-to-End Encryption), so we can securely log data from our devices into a central server/database without using HTTPS, we now need to build the firmware for the ESP8266 that allows it to call our E2EE backend.

The firmware for the ESP8266 must gather the data that it wants to send, get or generate the current sequence number for the data (to avoid replay attacks), encrypt the data and send it to the backend.
On the backend we are using the JavaScript library for cryptographic functions, Crypto-js, and specifically we are encrypting data with the AES algorithm. So all we need is to encrypt our data with AES on the ESP8266, send it to the Node-Red Crypto-js backend, decrypt it and store it. Easy, right?

Not quite, let’s see why:

Crypto-js and AES:
We can see that on my Node-Red function code and testing programs I’m using something similar to the following code example:

var CryptoJS = require("crypto-js");
var message  = "Message to encrypt";
var AESKey   = '2B7E151628AED2A6ABF7158809CF4F3C';

// Encrypt
var ciphertext = CryptoJS.AES.encrypt(message, AESKey );

console.log("Cypher text in Base64: " ,  ciphertext.toString(CryptoJS.enc.base64) );

// Decrypt
var bytes  = CryptoJS.AES.decrypt(ciphertext.toString(), AESKey );
var plaintext = bytes.toString(CryptoJS.enc.Utf8);

console.log("Decrypted message UTF8 decoded: ", plaintext);

Several points regarding the above code need clarification:

The AESKey variable, in the way it is used in the above encrypt and decrypt functions, isn't really a key but a passphrase, from which the real key and an initialization vector or salt value are generated (I'm using the names interchangeably, but they are not exactly the same thing; what they share is being publicly viewable data that must change over time/calls).
The use of the generated key is self-explanatory, but the initialization vector (IV) or salt value is used so that the encrypted output is not the same for the same initial message. While the key is kept constant and secret by both parties, the IV/salt changes between calls, which means that the above code, when run multiple times, will never produce the same output for the same initial message.

Still referring to the above code, the algorithm that generates the key from the passphrase is the PBKDF2 algorithm. More info in the Crypto-js documentation. At the end of the encryption, the output is in the OpenSSL salted format, which means that the output begins with the signature "Salted__", followed by an eight-byte salt value and, after the salt, the encrypted data.
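That envelope is easy to take apart with a few lines of Node.js (only the Buffer API is needed; the layout, "Salted__" marker plus 8-byte salt plus ciphertext, is the standard OpenSSL salted format):

```javascript
// Split crypto-js passphrase-mode output, which uses the OpenSSL salted
// layout: the 8 ASCII bytes "Salted__", an 8-byte salt, then the
// ciphertext blocks.
function parseOpenSSLSalted(base64Output) {
  const raw = Buffer.from(base64Output, 'base64');
  if (raw.slice(0, 8).toString('ascii') !== 'Salted__') {
    return null; // not passphrase/salted-mode output
  }
  return { salt: raw.slice(8, 16), ciphertext: raw.slice(16) };
}
```

The salt is public by design and changes on every encryption; only the passphrase has to stay secret.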

So if we want use the API has above on the node-js/crypto-js side, we need to implement on the ESP8266 side both the AES and PBKDF2 algorithms.

I decided not to do that, first because finding a C/C++ implementation of the PBKDF2 algorithm that could be portable and worked on the ESP822 proved difficult, and second the work for porting it to the ESP8266 won’t be needed if I use a KEY/IV directly, and so I decided to use the more standard way of providing an AES key and an initialization vector for encrypting and decrypting data.

In the case of Node-JS and Crypto-JS, when using an explicit key and IV, the code looks like this:

var CryptoJS = require("crypto-js");

// The AES encryption/decryption key to be used.
var AESKey = '2B7E151628AED2A6ABF7158809CF4F3C';

// The JSON object that we want to encrypt and transmit
var msgObjs = {"data":{"value":300}, "SEQN":145 };

// Convert the JSON object to string
var message = JSON.stringify(msgObjs);

var iv = CryptoJS.enc.Hex.parse('00000000000000000000000000000000'); // 16-byte IV = 32 hex chars
var key= CryptoJS.enc.Hex.parse(AESKey);

// Encrypt
var ciphertext = CryptoJS.AES.encrypt(message, key , { iv: iv } );

//console.log("Cypher: ", ciphertext );
console.log("Cypher text: " ,  ciphertext.toString() );
console.log(" ");

console.log(" ");
console.log("Let's do a sanity check: Let's decrypt: ");

// Decrypt
var bytes  = CryptoJS.AES.decrypt(ciphertext.toString(), key , { iv: iv} );
var plaintext = bytes.toString(CryptoJS.enc.Utf8);

console.log("Decrypted message UTF8 decoded: ", plaintext);

Now, with the above code, where the IV is always initialized to the same all-zero value, we can see when running it several times that the output is always the same, since the IV is kept constant. Also, the encrypted output is now just the raw encrypted data and not the OpenSSL salted format.

So to make the above code secure we must randomize the IV value, producing an output that is always different, even across program runs encrypting the same source data.

As a final note, if we count the hex characters in the key string, we find 32 of them, i.e. 16 bytes, which gives a total of 128 key bits. So the above example is using AES-128 encryption, with the default Crypto-js block mode and padding algorithms, which are CBC (Cipher Block Chaining) and PKCS#7.

Interfacing Crypto-js and the ESP8266:
Since we are using AES for encrypting and decrypting data, we first need an AES library for the ESP8266. The AES library that I'm using is the Spaniakos AES library for Arduino and RPi. This library uses AES-128, CBC and PKCS#7 padding, so it ticks all the boxes for compatibility with Crypto-js.

I just added the code from the above library to my Sming project and also added this Base64 library so that I can encode to and from Base64.

The only remaining issue was to securely generate a truly random initialization vector. At first I used some available libraries that improve pseudo-random numbers towards real random numbers, but I've since found out that the ESP8266 seems to have an undocumented hardware random number generator: Random number generator

So generating a random IV value is as easy as:

uint8_t getrnd() {
    uint8_t really_random = *(volatile uint8_t *)0x3FF20E44;
    return really_random;
}

// Generate a random initialization vector
void gen_iv(byte *iv) {
    for (int i = 0 ; i < N_BLOCK ; i++ ) {
        iv[i] = (byte) getrnd();
    }
}

So our ESP8266 code is as follows:

Global variable declarations:
N_BLOCK defines the encryption block size, which for AES-128 is 16 bytes.

#include "AES.h"
#include "base64.h"

// The AES library object.
AES aes;

// Our AES key. Note that it is the same key used on the Node-JS side, but as hex bytes.
byte key[] = { 0x2B, 0x7E, 0x15, 0x16, 0x28, 0xAE, 0xD2, 0xA6, 0xAB, 0xF7, 0x15, 0x88, 0x09, 0xCF, 0x4F, 0x3C };

// The uninitialized initialization vector
byte my_iv[N_BLOCK] = { 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};

// Our message to encrypt. Static for this example.
String msg = "{\"data\":{\"value\":300}, \"SEQN\":700 , \"msg\":\"IT WORKS!\" }";

The example ESP8266 Sming function is:

void testAES128()  {

    char b64data[200];
    byte cipher[1000];
    Serial.println("Let's encrypt:");
    aes.set_key( key , sizeof(key));  // Use the globally defined key
    gen_iv( my_iv );                  // Generate a random IV
    // Print the IV
    base64_encode( b64data, (char *)my_iv, N_BLOCK);
    Serial.println(" IV b64: " + String(b64data));
    Serial.println(" Message: " + msg );
    int b64len = base64_encode(b64data, (char *)msg.c_str(), msg.length());
    Serial.println (" Message in B64: " + String(b64data) );
    Serial.println (" The length is:  " + String(b64len) );
    // For sanity check purposes
    //base64_decode( decoded , b64data , b64len );
    //Serial.println("Decoded: " + String(decoded));
    // Encrypt! With AES128, our key and IV, CBC and pkcs7 padding
    aes.do_aes_encrypt((byte *)b64data, b64len , cipher, key, 128, my_iv);
    Serial.println("Encryption done!");
    Serial.println("Cipher size: " + String(aes.get_size()));
    base64_encode(b64data, (char *)cipher, aes.get_size() );
    Serial.println ("Encrypted data in base64: " + String(b64data) );
}

When the above code/function is executed on the ESP8266 it outputs the following:

Let's encrypt:
 IV b64: cAFviaDMHejlteGn9/4eQQ==
 Message: {"data":{"value":300}, "SEQN":700 , "msg":"IT WORKS!" }
 Message in B64: eyJkYXRhIjp7InZhbHVlIjozMDB9LCAiU0VRTiI6NzAwICwgIm1zZyI6IklUIFdPUktTISIgfQ==
 The length is:  76
Encryption done!
Cipher size: 80
Encrypted data in base64: /1aZRwVaw3jv+ct8HS4pCV5lThvTG70M90ARiyAsIDYMkfJE3w8F3bgxaOKVA0rX4m1Mq50VVN0u9gRw9F2gKE4r2OcY8oECv8bKT80F9pY=

And now we can feed the above Base64 IV and encrypted data to our Node-JS decoding program using Crypto-JS:

var CryptoJS = require("crypto-js");

var esp8266_msg = '/1aZRwVaw3jv+ct8HS4pCV5lThvTG70M90ARiyAsIDYMkfJE3w8F3bgxaOKVA0rX4m1Mq50VVN0u9gRw9F2gKE4r2OcY8oECv8bKT80F9pY=';
var esp8266_iv  = 'cAFviaDMHejlteGn9/4eQQ==';

// The AES encryption/decryption key to be used.
var AESKey = '2B7E151628AED2A6ABF7158809CF4F3C';

var plain_iv =  new Buffer( esp8266_iv , 'base64').toString('hex');
var iv = CryptoJS.enc.Hex.parse( plain_iv );
var key= CryptoJS.enc.Hex.parse( AESKey );

console.log("Let's decrypt: ");

// Decrypt
var bytes  = CryptoJS.AES.decrypt( esp8266_msg, key , { iv: iv} );
var plaintext = bytes.toString(CryptoJS.enc.Base64);
var decoded_b64msg =  new Buffer(plaintext , 'base64').toString('ascii');
var decoded_msg =     new Buffer( decoded_b64msg , 'base64').toString('ascii');

console.log("Decrypted message: ", decoded_msg);

and the output is:

Decrypted message:  {"data":{"value":300}, "SEQN":700 , "msg":"IT WORKS!" }

So, as the message shows, it WORKS! AES encryption happens on the ESP8266, and decryption on the Node-JS Crypto-JS based code.

Final notes:
So all that is needed now is to build, on the ESP8266 side, the message with the encrypted data and the IV, and send it through plain HTTP as a JSON object to the Node-Red back end.

On the Node-Red back end the decryption can now be done easily, as was shown in the testing code above.

CubicSDR and SoapyRemote on Orange PI PC

So I'm using my RTL-SDR dongle, connected to an Orange Pi PC, remotely: I run Gqrx on my desktop computer while the Orange Pi PC runs the rtl_tcp server.

This combination works fine, but I still sometimes get some lag; it's not common, but it happens.

Anyway, Gqrx is great, but I wanted to try other SDR programs, and one of them is CubicSDR. CubicSDR uses an abstraction layer for accessing the SDR hardware, either locally connected or over the network. So I wanted to see how CubicSDR behaves, compared to Gqrx, when accessing the RTL-SDR over the network.

Installing CubicSDR on the Desktop
I've not used the available binaries, but built the code from the Git repository instead; so far the repository version works fine.

I'm not posting the instructions for building CubicSDR here because the full instructions are at the CubicSDR wiki.

Just make sure to give the correct path to wxWidgets when building CubicSDR.

So at the desktop, we need to obtain, build and install the following components:

  • SoapySDR – Abstraction layer
  • Liquid-dsp – The digital signal processing libs
  • wxWidgets – The display widgets
  • CubicSDR – The SDR program itself
  • SoapyRTLSDR – The Soapy abstraction-layer driver for RTL-SDR USB dongles, used to access a dongle locally

After installing the above software we can use a locally attached RTL-SDR dongle.

Installing SoapySDR Remote on the remote server
The remote server where my RTL-SDR dongle is connected is an Orange Pi PC running Armbian. To let my desktop, running the CubicSDR program, access the dongle remotely, we need to install SoapySDR Remote, which allows remote access to the SDR.

So at the remote server we need to obtain, build and install the following components:

  • SoapySDR – The abstraction layer
  • SoapyRTLSDR – The driver for our RTL USB dongle
  • SoapySDR Remote – The server for remote access to the RTL dongle

So basically the instructions are something like this:

  mkdir ~/SDR 
  cd ~/SDR
  git clone
  cd SoapySDR/
  mkdir build
  cd build
  cmake ../ -DCMAKE_BUILD_TYPE=Release
  make -j4
  sudo make install
  sudo ldconfig
  SoapySDRUtil --info

Building the SoapySDR remote:

 cd ~/SDR
 git clone
 cd SoapyRemote/
 mkdir build
 cd build
 cmake ..
 make -j4
 sudo make install

and build the RTLSDR driver:

  cd ~/SDR
  sudo apt-get install librtlsdr-dev
  git clone
  cd SoapyRTLSDR/
  mkdir build
  cd build
  cmake .. -DCMAKE_BUILD_TYPE=Release
  make -j4
  sudo make install
  sudo ldconfig

So at the end we should have the following outputs:

opi@opi:~# SoapySDRUtil --probe
## Soapy SDR -- the SDR abstraction library

Probe device 
Found Rafael Micro R820T tuner
Found Rafael Micro R820T tuner

-- Device identification

-- Peripheral summary
  Channels: 1 Rx, 0 Tx
  Timestamps: NO
  Other Settings:
     * Direct Sampling - RTL-SDR Direct Sampling Mode
       [key=direct_samp, default=0, type=string, options=(0, 1, 2)]
     * Offset Tune - RTL-SDR Offset Tuning Mode
       [key=offset_tune, default=false, type=bool]
     * I/Q Swap - RTL-SDR I/Q Swap Mode
       [key=iq_swap, default=false, type=bool]

-- RX Channel 0
  Full-duplex: YES
  Supports AGC: YES
  Stream formats: CS8, CS16, CF32
  Native format: CS8 [full-scale=128]
  Stream args:
     * Buffer Size - Number of bytes per buffer, multiples of 512 only.
       [key=bufflen, units=bytes, default=16384, type=int]
     * Buffer Count - Number of buffers per read.
       [key=buffers, units=buffers, default=15, type=int]
  Antennas: RX
  Full gain range: [0, 49.6] dB
    TUNER gain range: [0, 49.6] dB
  Full freq range: [23.999, 1764] MHz
    RF freq range: [24, 1764] MHz
    CORR freq range: [-0.001, 0.001] MHz
  Sample rates: [0.25, 3.2] MHz

And SoapySDR should have the following configuration:

opi@opi:~# SoapySDRUtil --info
## Soapy SDR -- the SDR abstraction library

API Version: v0.5.0-gfec33c63
ABI Version: v0.5-2
Install root: /usr/local
Module found: /usr/local/lib/SoapySDR/modules/
Module found: /usr/local/lib/SoapySDR/modules/
Loading modules... done
Available factories...null, remote, rtlsdr, 

So all we need is to start our server:

opi@opi:~# SoapySDRServer --bind
## Soapy Server -- Use any Soapy SDR remotely

Launching the server... tcp://[::]:55132
Server bound to [::]:55132
Launching discovery server... 
Press Ctrl+C to stop the server

Using CubicSDR and SoapySDRRemote
So all we need is now on the startup SDR device screen selection add (by pressing the Add button) the remote IP of the Orange Pi PC server to access remotely the RTLSDR dongle.

My Orange PI PC IP address is

CubicSDR device selection

And here is CubicSDR in action.

CubicSDR in action

Conclusion and final notes
CPU usage and temperature on the Orange Pi PC are not a problem when using the server. CPU usage floats around 40%, with no meaningful or worrying changes in CPU temperature. So the Orange Pi PC is up to the task of serving data with SoapySDRRemote without any issues.

Also, with CubicSDR and SoapySDRRemote, I've experienced no lag when changing frequencies, namely when dragging the frequency selector. All changes seem instantaneous, and note that my desktop connects to the remote server through a 200Mbps PLC link, and only from there over cabled network to the Orange Pi. According to my desktop PC network widget, I have around 6.5Mbps of data coming in when using the maximum sample rate of 3.2MHz.

It also took me a while to get used to the CubicSDR user interface, but overall, for things like fine tuning, it is much better than Gqrx, since it has a dedicated codec screen.

CubicSDR fine tuning

One more great feature: if we keep dragging the spectrogram window, the central frequency changes so that it keeps up with the SDR bandwidth, while in Gqrx we need to dial it in.

Still, I'm keeping Gqrx and rtl_tcp, since CubicSDR has no data output other than piping audio. Gqrx can pipe to UDP, which allows decoding digital modes locally or on other servers without messing around with PulseAudio and Jackd.
Also, bookmarking isn't as direct/easy as with Gqrx. I'm not sure if I can give labels/names to bookmarks and search them, like I can in Gqrx, but then the problem might be between the chair and the computer…

Anyway CubicSDR is a great SDR application and the future looks bright.

I do recommend to give it a test drive.

ESP8266 and the Micropython firmware

One of the alternative firmwares available for the ESP8266 is the MicroPython Python interpreter. I found by chance a great tutorial at Adafruit for building the MicroPython firmware, and I thought I'd give it a try.

Building the firmware:
The Adafruit tutorial uses a Vagrant-based virtual machine to build the firmware, but since I'm already running Linux (Arch Linux, to be more specific) and already have the open ESP8266 SDK toolchain installed (see here), which I also use for the Sming firmware, I just downloaded the latest MicroPython source code from the GitHub repository to a local directory.

cd ~
git clone
cd ~/micropython
git submodule update --init
cd ~/micropython/esp8266
export PATH=/opt/esp-open-sdk/xtensa-lx106-elf/bin:$PATH
make axtls

So far nothing different from the Adafruit tutorial, except that I'm not using the Vagrant VM. Make sure that you first execute the command make axtls, otherwise the main make command will complain about not finding a version.h file. Also make sure that the export command that adds the Xtensa compiler to the path points to the right location.

After compiling, which was fast, I’ve just flashed the firmware on my Wemos mini D1 board. Again I had trouble flashing this board with other speeds than the default 115200 bps.


cd ~/micropython/esp8266/build
esptool.py -p /dev/ttyUSB0 --baud 115200 write_flash 0 firmware-combined.bin

And after flashing, we can connect through the serial terminal, and pressing CTRL-D we should be greeted with the following message:

PYB: soft reboot
could not open file '' for reading
MicroPython v1.8-157-g480159c on 2016-05-29; ESP module with ESP8266
Type "help()" for more information.

Some basic tests:
When doing some tests I found out that most available information is outdated regarding the version of MicroPython that I flashed. For example:

>>> import pyb
Traceback (most recent call last):
  File "", line 1, in 
ImportError: no module named 'pyb'

The commonly referred-to module pyb doesn't exist, because it's now machine:

>>> import machine
>>> dir(machine)
['__name__', 'mem8', 'mem16', 'mem32', 'freq', 'reset', 'reset_cause', 'unique_id', 'deepsleep', 'disable_irq', 'enable_irq', 'RTC', 'Timer', 'Pin', 'PWM', 'ADC', 'UART', 'I2C', 'SPI', 'DEEPSLEEP', 'PWR_ON_RESET', 'HARD_RESET', 'DEEPSLEEP_RESET']

So the following code:

>>> import pyb
>>> pin = pyb.Pin(14, pyb.Pin.OUT)  # Set pin 14 as an output.
>>> for i in range(10):
...    pin.value(0)     # Set pin low (or use pin.low())
...    pyb.delay(1000)  # Delay for 1000ms (1 second)
...    pin.value(1)     # Set pin high (or use pin.high())
...    pyb.delay(1000)

should be now:

>>> import machine
>>> import time
>>> pin = machine.Pin(14, machine.Pin.OUT)  # Set pin 14 as an output.
>>> for i in range(10):
...    pin.value(0)         # Set pin low (or use pin.low())
...    time.sleep_ms(1000)  # Delay for 1000ms (1 second)
...    pin.value(1)         # Set pin high (or use pin.high())
...    time.sleep_ms(1000)

Other interesting stuff includes, for example, the esp module:

>>> import esp
>>> dir(esp)
['__name__', 'osdebug', 'sleep_type', 'deepsleep', 'flash_id', 'flash_read', 'flash_write', 'flash_erase', 'flash_size', 'neopixel_write', 'apa102_write', 'dht_readinto', 'freemem', 'meminfo', 'info', 'malloc', 'free', 'esf_free_bufs', 'SLEEP_NONE', 'SLEEP_LIGHT', 'SLEEP_MODEM', 'STA_MODE', 'AP_MODE', 'STA_AP_MODE']
>>> esp.freemem()
>>> esp.meminfo()
data  : 0x3ffe8000 ~ 0x3ffe8410, len: 1040
rodata: 0x3ffe8410 ~ 0x3ffe9038, len: 3112
bss   : 0x3ffe9038 ~ 0x3fff6bd0, len: 56216
heap  : 0x3fff6bd0 ~ 0x3fffc000, len: 21552

There are examples on GitHub of using the deepsleep functions.

Regarding Wifi connectivity, by default at startup the ESP8266 sets up a Wifi access point named MicroPython-xxxxxx, where the trailing digits come from the MAC address. Following the documentation, the AP is protected with the password micropythoN, and sure enough the connection works. Still, I haven't tested it much further: for example, accessing the Python interpreter over Wifi instead of through the serial port.

Anyway, one final test is to use Python to make the ESP8266 connect to my network. The instructions are simple: just type help(), and MicroPython shows how to do it:

>>> help()
Welcome to MicroPython!

For online docs please visit .
For diagnostic information to include in bug reports execute 'import port_diag'.

Basic WiFi configuration:

import network
sta_if = network.WLAN(network.STA_IF)
sta_if.scan()                             # Scan for available access points
sta_if.connect("", "") # Connect to an AP

and we can see if it connected successfully:

>>> sta_if.isconnected() 

and what IP configuration was set:

>>> sta_if.ifconfig()
('', '', '', '')

Also, I was unable to access the Python interpreter through the access point connection. Supposedly there should be a listener running on port 8266 that allows access over Wifi, but my tests found that port 8266 was closed. Probably I need to initialize something first…
Anyway, there is a tool, webrepl, that uses the browser over websockets to connect to the ESP8266, giving access to the Python prompt and also allowing files, namely the startup file, to be copied to the ESP8266.

To finish, during my tests I had no crashes or surprise reboots. Using Python also has the advantage, in my opinion, of being more mainstream than Lua, since we leverage desktop programming experience for device programming. Also, the useful ESPlorer tool already supports MicroPython, which means MicroPython is probably a better alternative for quick hacks on the ESP8266 than NodeMCU running Lua.