ESP8266 – Setting up Sming and NetBeans

Sming is one of the alternative firmwares for the ESP8266, built on top of an extensive library that hides some of the ESP8266 SDK complexities. There is also another alternative firmware, ESP8266 Arduino, that is directly integrated with the Arduino IDE and mimics much of the process and way of building Arduino sketches, which then deploy and run on the ESP8266.

I’ve tried both, and for my taste, since I prefer NetBeans or another editor to the Arduino IDE, I’ve chosen to use the Sming firmware.

Setting up the SDK and Sming
To start using the Sming firmware, we need to download the ESP8266 SDK and Sming from GitHub. The pfalcon esp-open-sdk is the fastest way to set up the SDK. Just do:

ESP8266 SDK

$ cd /opt
$ git clone --recursive https://github.com/pfalcon/esp-open-sdk.git
$ cd esp-open-sdk
$ make STANDALONE=y

Before running the above commands, make sure that you have the prerequisites for your platform installed.
The make command will take a while depending on your internet and machine speed. It can take around 20 minutes or more.

To install Sming, the GitHub site instructions explain what is needed. Basically it’s:

$ cd /opt
$ git clone https://github.com/anakod/Sming.git

To use Sming we need to set our environment variables.

$ export ESP_HOME="/opt/esp-open-sdk";
$ export SMING_HOME="/opt/Sming/Sming";

But this change is not permanent, and it won’t work with NetBeans without changing the Makefiles.
So we need to change the core Makefile to permanently add the paths to the SDK and Sming homes.

Set the Sming environment paths permanently:
Just go to the Sming home directory /opt/Sming/Sming and edit the file Makefile:

Change the following lines (uncomment them by removing the #):

## Defaults for Sming Core

## ESP_HOME sets the path where ESP tools and SDK are located.
## Windows:
# ESP_HOME = c:/Espressif

## MacOS / Linux:
ESP_HOME = /opt/esp-open-sdk

## SMING_HOME sets the path where Sming framework is located.
## Windows:
# SMING_HOME = c:/tools/sming/Sming 

# MacOS / Linux
SMING_HOME = /opt/Sming/Sming

## COM port parameter is required to flash firmware correctly.
## Windows: 
# COM_PORT = COM3

# MacOS / Linux:
# COM_PORT = /dev/tty.usbserial

# Com port speed
COM_SPEED = 115200

For each demo project that we are going to compile, we need to change/uncomment the same variables in the Makefile-user.mk file at the root of the project, defining the same values. Otherwise NetBeans will complain that the SMING_HOME and ESP_HOME variables are not defined. We can use NetBeans itself to edit that file, as shown below.
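
For example, after editing, the relevant lines of Makefile-user.mk should end up looking something like this (the surrounding comments in the file may differ slightly):

## ESP_HOME sets the path where ESP tools and SDK are located.
ESP_HOME = /opt/esp-open-sdk

## SMING_HOME sets the path where Sming framework is located.
SMING_HOME = /opt/Sming/Sming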

One final change is needed regarding the esptool.py tool that Sming uses to flash the ESP8266 chips. The tool is written for Python 2, and on my Arch machine I have both Python 3 and Python 2, with Python 3 as the default. Because of this, esptool might fail to run with the following error:

# Generating image...
  File "/opt/esp-open-sdk/esptool/esptool.py", line 135
    print 'Connecting...'
                        ^
SyntaxError: Missing parentheses in call to 'print'
/opt/Sming/Sming/Makefile-project.mk:274: recipe for target 'out/build/app.out' failed
make: *** [out/build/app.out] Error 1

So we need to edit the esptool.py file and change the shebang header to use Python 2:

#!/usr/bin/env python2
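
If you prefer to do it from the command line, a one-liner along these lines should work (assuming GNU sed):

$ sed -i '1s|^#!.*|#!/usr/bin/env python2|' /opt/esp-open-sdk/esptool/esptool.py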

Also, the tool assumes that the ESP8266 is connected and available at the serial port ttyUSB0.

We can now go to one of the example directories and compile an example. Since I’m using my own developer board, which uses the serial RTS and DTR lines to put the ESP8266 into flash mode “automagically”, I just need to run make at the root of the example directory. If needed, we can flash again with the command make flash, as in the session below.
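
A typical session looks something like this (using one of the bundled examples):

$ cd /opt/Sming/Basic_Blink
$ make
$ make flash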

And that’s it.
Just a word of caution: since my developer board uses the RTS and DTR lines, and the Sming makefile calls miniterm.py, which doesn’t set the RTS and DTR lines to the inactive state, the ESP8266 is kept in reset mode. To solve this, just change the TERMINAL variable in the makefile for your platform, located at Sming/Sming, to something that works, for example as sketched below. EDIT: or use CTRL-T + D and CTRL-T + R to reset the lines.
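
Something along these lines worked for me (the --rts/--dtr options are from pySerial’s miniterm; adjust to whatever terminal program you prefer):

# Force RTS and DTR to the inactive state so the board is not held in reset
TERMINAL = miniterm.py --rts=0 --dtr=0 $(COM_PORT) $(COM_SPEED)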

Setting Netbeans to compile with the Sming Framework
Important note/update: with the latest NetBeans 8.1 version, code completion doesn’t work. Use version 8.0.2 for now to avoid the issue.

To use NetBeans, just open it up, go to Tools->Options and select the C++ icon. If needed, press the Activate button to enable C++ support.
The instructions are now very similar to those in this post: Setting up Netbeans for ESP8266. Just make sure that you have the C++ plugin installed, and restart the IDE if necessary.
Then, at the C++ window’s Build Tools tab, just add a new Tool Collection, but this time with the compiler path /opt/esp-open-sdk/xtensa-lx106-elf. Ignore the error, select GNU as the family, and the error should be gone. Give it a name, ESP8266 Xtensa, and press OK. Now set the paths to the C compiler and C++ compiler, located in the bin directory: /opt/esp-open-sdk/xtensa-lx106-elf/bin.

NetBeans Xtensa compiler

To make NetBeans code completion work, we need to add the paths to the include files in the Xtensa compiler tools. Make sure that the ESP8266 Xtensa compiler is the one selected, and add the following paths:

Include paths

These might not be enough; it really depends on the project. Make sure that you add the include paths to both the C++ and the C compiler, as illustrated below.
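
As an illustration only, a set of paths like the following is a reasonable starting point (these are my guesses based on the layout of the SDK and Sming trees, not an authoritative list):

/opt/esp-open-sdk/xtensa-lx106-elf/xtensa-lx106-elf/include
/opt/Sming/Sming
/opt/Sming/Sming/Libraries
/opt/Sming/Sming/system/include
/opt/Sming/Sming/Wiring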

Edit:
I had to add the following paths in the Code Completion settings for the Xtensa compiler so that code completion could work without warnings on the SmingCore.h file with the latest Sming version:

  • Sming/rboot
  • Sming/rboot/appcode

Just add these directories the same way as in the screenshot above.

Also, at the project level properties, it is necessary to add the project include directory, so that the user_include.h file and others can be found.

And we are set!

Compiling Basic_Blink Project
Using NetBeans, just add a new C++ project with existing sources and select the root directory of any of the examples; in this case, let’s choose /opt/Sming/Basic_Blink. Just make sure that the tool collection chosen is the ESP8266 Xtensa one that we defined previously. At start, NetBeans will try to run the make command, and it might fail.
Just open the Makefile-user.mk file in the project tree and remove the comment from the lines defining the ESP_HOME and SMING_HOME paths.
Also select the Makefile, right click Make Target->Add Target, and add the word flash as a target.

That’s it; we can now make the project and select Make Target->flash to flash it to the device.

Happy coding

Odroid eMMC speeds

I’ve found this link that shows some speed benchmarks, on the Raspberry Pi, for some SD cards and disks: RPi SD card benchmarks.

For comparison, my data:

Odroid C1+ with 32GB eMMC:

root@odroid:~# hdparm -T /dev/mmcblk0p1 
/dev/mmcblk0p1:
 Timing cached reads:   1494 MB in  2.00 seconds = 746.46 MB/sec

My Seagate 2TB on my DS212+, also driven by an ARM processor:

root@DiskStation:/volume1/homes/root# hdparm -T /dev/sda1

/dev/sda1:
 Timing cached reads:   832 MB in  2.00 seconds = 415.24 MB/sec

The flash pen that holds the operating system on my old and faithful NSLU2:

root@nslu2:~# hdparm -T /dev/sda1

/dev/sda1:
Timing buffer-cache reads:    78 MB in 0.51 seconds = 155972 kB/s

And finally my Crucial SSD disk on my desktop computer:

[root@pcortex:~]# hdparm -T /dev/sdc2

/dev/sdc2:
 Timing cached reads:   16804 MB in  2.00 seconds = 8408.63 MB/sec

The conclusion? Not bad for the eMMC. Recommended over an SD card? Definitely, I think. Expensive? Well, yes…

Linux: Slow or unbearable performance with high memory usage applications

Well, this post title is a bit vague, because the issue that affected me can have several sources, not just the one that I’ll describe. But I suppose the solution is the same independently of the source.

Anyway, I was suffering from a strange problem on both my computers; the desktop has 6GB of RAM, and the laptop 16GB.

On the desktop machine, where I use KDE 4 on Arch Linux, the desktop can freeze when using some resource-intensive applications, like Google Maps on Firefox or Chromium, or Android Studio. When the frozen desktop situation happens, I just have to wait a few minutes, during which the keyboard is unresponsive and the mouse is “jumpy”, or doesn’t work at all. Then, after a while, one random application, which could be anything, including the desktop, is killed by the kernel OOM killer, with its cryptic message regarding a sacrificed child process in the system logs…

Anyway, on the desktop, due to the fact that I had upgraded the main disk to an SSD, I hadn’t enabled the swap partition. And that was one of the main reasons for this behaviour. Despite having 6GB of RAM, swap space is good to have, so I’ve set up a swap partition on one of the spinning disks.
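
For reference, setting one up is just a matter of the following (the partition name is an example; adapt it to your disk layout):

# mkswap /dev/sdb2
# swapon /dev/sdb2
# echo '/dev/sdb2 none swap defaults 0 0' >> /etc/fstab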

That apparently stopped the issues on the desktop, and it never froze again like it used to when performing the same tasks.

On the laptop it was another matter: mainly because it only has a 120GB SSD and no spinning disks, I had no swap file/partition created or enabled. I thought that with 16GB of RAM, why bother… but due to professional activities I had to start working with the HortonWorks HDP 2.3 big data/Hadoop platform, and that is a heavy resource hog…

So, despite having 16GB of RAM available, the VMware-based Hadoop virtual machines could make the computer freeze for lengthy periods of time, just like my desktop computer.

But now I know that it was related to the fact that I didn’t have a swap partition/file, and so I’ve created one.

The swap file solution for this specific case didn’t completely solve my issue, but it did let me see a process named khugepaged consuming huge amounts of CPU.

And that was the reason my desktop and virtual machine froze for long stretches of time.

So I’ve disabled the Transparent Huge Pages support on my host operating system, AND on my guest CentOS virtual machine that is running HDP 2.3:

[root@dune:cortex]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
[root@dune:cortex]# echo never > /sys/kernel/mm/transparent_hugepage/enabled

[root@dune:cortex]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
[root@dune:cortex]# cat /sys/kernel/mm/transparent_hugepage/enabled          
always madvise [never]
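
Note that these echo commands don’t survive a reboot. One way to make the change permanent on a systemd-based system like Arch is a small oneshot unit; this is just a sketch, and the unit name and file are my own choice:

# /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled; echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target

Then enable it with systemctl enable disable-thp.service.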

More info can be obtained here:

http://docs.mongodb.org/manual/tutorial/transparent-huge-pages/

So this might not be the magic bullet if you are having desktop freezes while not using huge amounts of RAM, but it might help.

md5deep and hashdeep on Synology DS212+

My personal photos are located on my desktop computer, and I have backups of them on my Synology DS212+ and on my old faithful Linksys NSLU2 with an external disk.

The issue that I have now is that, to keep backup times short, I only back up the current year, because, well, the other years are static data. Previous years are already backed up, and that data is not going to get modified. Right? So what if corruption happens? How do I detect it? How can I be warned of such an event? Right now I have no method to check if my digital photos of 2003 are totally intact in all of my three copies. And if I detect corruption in one bad file/photo, which file is the correct one?

So I’m investigating this issue, and among the tools available to create checksums, and then to verify that everything is OK and no corruption has appeared, are the md5deep and hashdeep programs. These are available as sources at this link: http://md5deep.sourceforge.net/start-hashdeep.html

These are the instructions for cross-compiling these tools for the ARM-based Synology DS212+. As usual, this is done on a Linux desktop/server machine.

1st) Set up the cross-compiling environment: Check out this link on the topic: https://primalcortex.wordpress.com/2015/01/04/synology-ds-crosscompiling-eclipseibm-rsmb-mqtt-broker/

2nd) Setting up the cross-compiling environment variables is now a bit different:

export INSTALLDIR=/usr/local/arm-marvell-linux-gnueabi
export PATH=$INSTALLDIR/bin:$PATH
export TARGETMACH=arm-marvell-linux-gnueabi
export BUILDMACH=i686-pc-linux-gnu
export CROSS=arm-marvell-linux-gnueabi
export CC=${CROSS}-g++
export LD=${CROSS}-ld
export AS=${CROSS}-as
export AR=${CROSS}-ar
export GCC=${CROSS}-g++
export CXX=${CROSS}-g++

We will use the C++ compiler.

3rd) Create a working directory and clone the repository to your local machine: git clone https://github.com/jessek/hashdeep.git
4th) Change to the hashdeep directory and execute the following commands:

[pcortex@pcortex:hashdeep]$ sh bootstrap.sh
[pcortex@pcortex:hashdeep]$ ./configure --host=arm-marvell-linux-gnueabi
[pcortex@pcortex:hashdeep]$ make

And that’s it. If everything went well, the commands are now available in the hashdeep/src directory:

[pcortex@pcortex:src]$ file hashdeep
hashdeep: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.3, for GNU/Linux 2.6.16, not stripped

We can now copy the commands to the DiskStation and start using them.

Now all I need is to devise a method that creates and checks each year’s photo hashes/signatures and warns me if a difference is detected. I’m thinking of using the audit mode of the hashdeep command, and doing a check for one year each day; for example, on Mondays check 2003 and 2013, on Tuesdays check 2004 and 2014, and so on. A sketch of what that could look like is below.
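
A minimal sketch using hashdeep’s audit mode (the paths and the hash-file location are just examples):

$ hashdeep -c md5,sha256 -r /volume1/photo/2003 > /volume1/hashes/2003.txt   # create the baseline, once
$ hashdeep -a -vv -k /volume1/hashes/2003.txt -r /volume1/photo/2003        # audit against the baseline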

Arch Linux and the Cartão do Cidadão (Portuguese Citizen Card)

Just a quick post to remember how to get the Cartão do Cidadão middleware/software working on Arch Linux.

– First, determine that the card reader is detected by the system:

lsusb
Bus 007 Device 002: ID 058f:9520 Alcor Micro Corp. EMV Certified Smart Card Reader

– Second, install the opensc software:

[root@pcortex ~]# pacman -S opensc

This will only be used to verify that we can access the card reader correctly.

– Test the access to the reader:

[root@pcortex ~]# opensc-tool -l
No smart card readers found.

If this is the case, start the daemon that supports access to the readers:

[minime@pcortex ~]# sudo -s
[root@pcortex ~]# pcscd
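
Alternatively, on recent systemd versions the daemon can be started and enabled permanently through its socket unit (assuming the usual pcsclite packaging):

[root@pcortex ~]# systemctl enable --now pcscd.socket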

Test again:

[minime@pcortex ~]# opensc-tool -l
# Detected readers (pcsc)
Nr. Card Features Name
0 No Alcor Micro AU9520 00 00

– We can now install the Citizen Card software. In my case I used the version available in the AUR repository:

yaourt -S cartao-cidadao

And at the end, run:

[minime@pcortex ~]# pteigui

The error messages can be ignored, and the icon should appear in the system tray.

To access the card certificates from Firefox, for example, it should in principle be enough to follow these instructions: https://faqs.up.pt/faq/content/29/557/pt/como-configurar-o-mozilla-firefox-para-usar-o-cart%C3%A3o-de-cidad%C3%A3o.html

Node-Red: Push Notifications to Google Cloud Messaging GCM

The following instructions will allow Android devices to receive notifications from Node-red flows using the Google Cloud Messaging infrastructure. This is achieved by using node-gcm, the Node.js library for accessing GCM (Google Cloud Messaging).

Installing node-gcm:

We need to install the Google Cloud Messaging node support module, either locally on Node-red or globally. If installing locally, just cd to the Node-red directory and do npm install node-gcm. Globally, the command needs the -g switch added, like this: sudo npm install node-gcm -g.
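
In shell form (the Node-red directory location is just an example):

$ cd ~/node-red
$ npm install node-gcm

$ sudo npm install -g node-gcm   # or globally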

Configuring Node-red:

To be able to invoke node-gcm functions from Node-red nodes, like function nodes, we need to make the node-gcm module available to Node-red. For that, go to the base directory of Node-red and edit the settings.js file.

Modify the file so that the following configuration is defined:

functionGlobalContext: {
    // os:require('os'),
    // bonescript:require('bonescript'),
    // arduino:require('duino')
    gcm:require('node-gcm')
},

Starting up Node-red and our first push notification to GCM:

Start up Node-red from the command line: node red.js -s settings.js. It’s important and imperative that we now start up with the modified settings.js file, or some other file with the above configuration.

We can now add a function node with the following code to push a notification:

var gcm = context.global.gcm;
// Define our message:
var message = new gcm.Message(
  { collapseKey: 'node-red',
    delayWhileIdle: true,
    timeToLive: 3,
    data: { key1: 'Node-Red Notification', key2: 'Hello android from Node-red' }
  }
);
// Set up the sender with your API key
var sender = new gcm.Sender('AIza...'); // Replace with your KEY!!!!!
// Add the registration IDs of the devices you want to send to
var registrationIds = [];
registrationIds.push('APA.....');  // Replace with your device registration ID!!!!!
// Send the message
// ... trying only once
sender.sendNoRetry(message, registrationIds, function(err, result) {
    if (err)
        console.error(err);
    else
        console.log(result);
});
return msg;

And that’s it. Note that in the data JSON object the key names key1 and key2 are arbitrary. The keys can have any valid name, as long as the receiving application knows about them.

For testing, I’m using an inject node to activate the above function, and I can see at the console (not on the Debug tab) the following:
{ multicast_id: 9123452345452345, success: 1, failure: 0, canonical_ids: 0, results: [ { message_id: '0:7957832452%3676573df353' } ] }

If you receive other messages, like failure: 1, there is something wrong with your Google GCM key, or your device registration id is invalid.

For example:

{ multicast_id: 5439642763222344000, success: 0, failure: 1, canonical_ids: 0, results: [ { error: 'InvalidRegistration' } ] }

This means that the value pushed into registrationIds doesn’t belong to any device registered to our GCM project.

If you just receive a 401 error, that means you’re using an invalid browser GCM key. Check the Google developer console to get the correct key.

Testing:

I’ve done my basic testing with the sample Google GCM client available at https://code.google.com/p/gcm/source/checkout.

I’ve used Android Studio and the provided gcm-client source code to compile and test the communication.

The problem with the above gcm-client code is that we first need to compile and deploy it to the device (or emulator), so that the GCM client application runs at least once and we can get the registration id of the device; we then provide that registration key to the above Node-red code.

We can also change the provided Google code to send the registration ID to Node-red. This process allows devices to inform the Node-red flows of their registration ID; for that we need to change the gcm-client program, adding a registration process, and add a registration endpoint on our side, as sketched below.
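
A minimal sketch of the Node-red side of such an endpoint: an HTTP In node (listening, say, on /register) wired to a function node like the one below and then to an HTTP Response node. The flow wiring and the regId field name are my assumptions, not part of the Google sample:

// Function node: store a registration ID POSTed by a device.
// Expects a JSON payload like: { "regId": "APA....." }
var ids = context.global.registrationIds || [];
var regId = msg.payload.regId;
if (regId && ids.indexOf(regId) === -1) {
    ids.push(regId);                  // remember this new device
}
context.global.registrationIds = ids; // keep the list for the sender function
msg.payload = { registered: true, count: ids.length };
return msg;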

Upgrading to an SSD, moving from Kubuntu to Arch Linux and setting up KDE

This is a single post just to take note of some points:

My home desktop PC had been running the Kubuntu distribution since 2009. I never reinstalled it and always did the upgrades, from 8.10 to 9.04, 9.10 and so on until 14.04. Almost six (!) years with no viruses, no trojan horses, and a single initial installation. Through the years I upgraded the KDE desktop from the ill-famed initial 4.0 release up to the latest 4.6 release.

But things were starting to go awry… In the beginning I had an ATI 4870 graphics board, but ATI dropped support for it on Linux (and Windows), and since I had so many troubles with the proprietary driver, and on top of that no support for the board, I sold it and bought an Nvidia GTX660.

But with both graphics boards I suffered from a long-standing problem: I could only do a sleep/resume a number of times before the system hung. The symptoms went from losing network connectivity and then having it restored right back, with some cycling of these symptoms, to a complete lockup. At first I thought it was due to the network driver used by my motherboard, an Asus P6T with a Realtek RTL8111/8168/8411 chip, but with the correct driver for it the issue continued. Booting up was a painful two/three-minute wait from BIOS to desktop.

But after a while I could pinpoint my issues not to the network board but to the Xorg and video driver combination: some EQ buffer overflow, a hung X, and finally a hung machine.

Ubuntu- and Kubuntu-based distributions have PPA repositories where alternative software, and alternative versions of mainstream software, are available. I had trouble with some of these alternative PPAs when upgrading, and had to remove them (with ppa-purge) to be able to upgrade the Kubuntu distribution. Anyway, there is a PPA, xorg-edgers, where alternative and more recent versions of the Xorg software and Nvidia drivers are provided, and that almost solved my sleep/resume problem. The problem was that, after adding this PPA, with both the Ubuntu/Kubuntu nvidia driver and the xorg-edgers driver around, updates left the X server unable to find the nvidia driver, despite it being installed, and I had to reinstall the driver again to get my desktop working.

That was it. I had had enough of the Ubuntu/Kubuntu distributions. My bad and my issues, sure…

I had been testing Arch Linux on my work laptop for almost a year with great success. I chose Arch Linux because I didn’t want to lose an unknown number of hours per year upgrading to the next distribution version; a rolling release makes more sense. Arch also has the greatest Linux wiki out there, even solving problems that happen on the Ubuntu/Kubuntu line, and, above all, there is only one repository (well, two). No PPA mess.

So I completely moved all my systems to Arch Linux, and my sleep/resume issue is 100% solved.

SSD on a SATA II based motherboard

My desktop computer has an Asus P6T motherboard, which only has SATA II (SATA 2) ports. Is it worth using an SSD on such a board without upgrading to a SATA III board?

The answer is YES. If you have a SATA II based motherboard and are not buying an SSD because of that, do not wait any more. Buy it now. It is totally worth it.

Arch Linux is of course way lighter than Kubuntu, and with an SSD (by the way, it’s a Crucial MX100 256GB) my two/three-minute boot-to-desktop time came down to 15s…

hdparm -Tt /dev/sdc
/dev/sdc:
Timing cached reads:   16710 MB in  2.00 seconds = 8361.40 MB/sec
Timing buffered disk reads: 728 MB in  3.00 seconds = 242.59 MB/sec

Compare with the Samsung EVO 120GB that I have on my work computer, on a SATA III based motherboard; it has the flawed firmware that slows down over time:

/dev/sda:
Timing cached reads: 23864 MB in 2.00 seconds = 11943.41 MB/sec
Timing buffered disk reads: 164 MB in 3.01 seconds = 54.47 MB/sec

Compare this with my desktop’s old boot disk, a WD 640GB Blue:

/dev/sdb:
Timing cached reads: 16580 MB in 2.00 seconds = 8296.87 MB/sec
Timing buffered disk reads: 356 MB in 3.00 seconds = 118.59 MB/sec

I haven’t yet had the time to upgrade the firmware on the EVO…

Making KDE looking good on Arch Linux

To end this rather long and tedious post, and also for my future reference, here is how to make KDE look good:

    • Make sure that the Infinality modifications are installed: https://wiki.archlinux.org/index.php/Infinality. Make sure that you update.
    • Make sure that the DPI detected by your X server is correct for your monitor. Install xdpyinfo with pacman -S xorg-xdpyinfo. Figure out what your monitor DPI should be using this site: http://dpi.lv/. Execute xdpyinfo | grep -B2 resolution and see if the values are similar. If not, you need to set the correct DPI either in xorg.conf (see the sketch after this list) or in the KDE settings.
      In my case I have a dual monitor setup, and hence the weird DPI of 95x96:

      screen #0: dimensions: 3200x1080 pixels (856x286 millimeters)

      resolution: 95x96 dots per inch

      So I’ve set my font settings accordingly.

  • While the above settings made my fonts look good, the taskbar fonts were awful, and it took me a while to figure out that the problem was not the DPI settings but the Desktop Theme.
  • To solve this, first install Krita with pacman -S calligra-krita. This will install the Krita color themes, which are, in my opinion, very good. Then, in KDE System Settings -> Workspace Appearance -> Desktop Themes, select Get New Themes and get the AndroidBit theme.
  • That’s it.
  • I’ve selected the Produkte Desktop Theme.
  • And the colors are picked from the Krita color themes.
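
For the xorg.conf route mentioned above, a minimal sketch (using the monitor dimensions in millimeters reported by xdpyinfo; the identifier is arbitrary and the file name just follows the usual xorg.conf.d convention):

# /etc/X11/xorg.conf.d/90-monitor.conf
Section "Monitor"
    Identifier "Monitor0"
    DisplaySize 856 286
EndSection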