Good Morning: Stepping into Arduino….

So I’ve bought one of those cheap Arduino kits off eBay… In fact, I bought the cheapest one I could find from a European seller… I ordered it from a Chinese shop with a UK warehouse, but the kit came from Sweden… Talk about globalization… After waiting around 10 days, I got my kit, and within 5 minutes an LED was blinking in pure RGB glory (just red…).

The kit came with a UNO R3 clone, identical to the original, and several other components.

It depends on what you want to do with your Arduino, but just to give an idea of what came with my kit:

- Some LEDs, push buttons, resistors, 7-segment displays (a single digit and a 4-digit module) and an LED matrix.

- One 74HC595 shift register.

While the LEDs and the single 7-segment display can be driven directly by the Arduino pins, for the LED matrix and the 4-digit 7-segment display the shift register allows driving them while using a minimal number of pins.

- An infrared receiver and a small remote. This is great because it allows having multiple inputs (the remote buttons) while using only a single input pin.

- A 16×2 LCD display. I never used it directly; I also bought an I2C driver for it, so I only need 2 pins to drive it and 2 pins to power it.

- One SG90 servo motor, and one stepper motor with a ULN2003 driver.
These allow some basic learning with these types of motors, but I think that to do something useful, more motors are needed.

- An expansion shield with a mini breadboard. Not used yet.

- A larger breadboard and some Dupont cables. These last items are only enough for some basic experiments, but for more advanced stuff more will need to be bought…

- A 9V battery clip for providing standalone power.

- A USB cable.

- Some assorted stuff (one pot, a flame sensor, an LDR, a tilt switch, etc…).

So, is it worth it to buy this kind of kit?

The short answer: yes. But for intermediate or more advanced levels, some more items need to be added to the kit, namely cables and, to allow using the 16×2 LCD while keeping pins available, at least an I2C driver/shield for the LCD. This is cheap off eBay, works fine, and makes for a nice introduction to the I2C protocol.

JVM Peer Gone in WebLogic T3 connection

So I have this exception when connecting to the FileNet P8 API from my Linux machine:

com.filenet.api.exception.EngineRuntimeException: FNRCE0040E: E_NOT_AUTHENTICATED: The user is not authenticated. Message was: java.net.ConnectException: t3://1.2.3.4:9210: Bootstrap to 1.2.3.4/1.2.3.4:9210 failed. It is likely that the remote side declared peer gone on this JVM
 at com.filenet.apiimpl.core.UserPasswordToken.getSubject(UserPasswordToken.java:121)
 at com.filenet.api.util.UserContext.createSubject(UserContext.java:288)

This happens when my application connects to a FileNet P8 Content Engine deployed into a WebLogic cluster instead of a single node (well, it might also happen with a single node…). The connection point used by the application is a single node address.

The solution for this error message?

Just add the name and IP address of each WebLogic cluster physical machine to the hosts file of the client machine, so the client can resolve the member names that the cluster hands back.
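For example, something like this in the client’s /etc/hosts (the host names and addresses here are hypothetical; use your cluster’s real ones):

# WebLogic cluster members, so the T3 client can resolve them
1.2.3.4    weblogic-node1.mydomain.com    weblogic-node1
1.2.3.5    weblogic-node2.mydomain.com    weblogic-node2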

Synology and MyDS Quickconnect

The issue: Quickconnect doesn’t work

After upgrading to the latest DSM version, 5.0, it took me a while to notice that the QuickConnect ID that I had chosen was not working…

In the DSM Control Panel, if I tried to change and apply the settings, it gave an Unknown Error. In the logs, the only message related to the QuickConnect settings was network error: -23, and that was it…

On the MyDS site, my DS status was red, and clicking on the QuickConnect ID just gave a page saying that my DS was offline or had no network connection, but clicking on the host name worked fine.

Using the apps with the hostname and/or IP worked fine, just not with the QuickConnect ID.

The solution:

I don’t have a solution that might work for everyone, but the steps that I’ve taken solved the issue for me.

First, on the MyDS site, I deleted the hostname, and in the DS Control Panel, under the DDNS settings, I tried to register it again. This failed, saying that the hostname didn’t exist…

So I also deleted the entry for the Synology DDNS provider and configured it again. I needed to enter my MyDS site login credentials and my hostname again.

This time it worked, and on the MyDS page the hostname (which I had deleted from there) appeared again. Still, clicking on the QuickConnect ID failed.

So, back in the DS Control Panel, I went to QuickConnect, and this time it said that I needed to register a QuickConnect ID. I registered my ID again, providing the MyDS site credentials and the ID. And it worked.

Now my QuickConnect ID works and the DDNS name also works.

The status of my DS on the MyDS site remained red for a long period of time, but in the end it turned green. Also, clicking on the QuickConnect ID now works and gives me access to the web frontend of the DS.

This was quite a surprise for me, as I didn’t expect the web administration console to be available to the internet.

I’ll have to see how to block this.

tl;dr:

Delete your DDNS configuration and register it again. Then register the QuickConnect ID again.

Synology DSM 5 web station and virtual sites for FileStation

I upgraded my Synology DS212+ to the latest DSM version, version 5, a few weeks ago.

Many things have changed in this new DSM version, and one of them is the way Web Station and Apache/PHP work.

The configuration files are different from the previous version 4, and upstart is now used to start and stop the service:

To stop the Web Station: /sbin/initctl stop httpd-user

To start the Web Station: /sbin/initctl start httpd-user

There is also a status command to show the state of the server; it reports httpd-user stop/waiting (stopped) or httpd-user start/running, process #### (running).
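For example (the process number will vary):

/sbin/initctl status httpd-user
httpd-user start/running, process 9942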

For File Station and Audio Station, these applications/sites normally expect the browser to connect to application ports, in my case ports 7000 and 7002. This has potentially two problems: I need to open them up on my home router to allow access, and sometimes I’m behind a proxy that doesn’t allow any port other than 80 (HTTP) or 443 (HTTPS) to be used in the URL.

So I have these applications behind a reverse proxy on the Web Station. With this configuration I have my “normal” HTTP address, http://primalcortex.somedomain.com, and I also have http://filestation.primalcortex.somedomain.com and http://audio.primalcortex.somedomain.com.

What is needed to achieve this? Well, first, a DNS server that accepts wildcard domains. Myds.me is one of them; it is provided by Synology and directly integrated into the DSM.

Second, a file like this needs to be created:

<IfModule !proxy_module>
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_http_module modules/mod_proxy_http.so
</IfModule>

ProxyRequests Off
#ProxyPreserveHost On

NameVirtualHost *:80

#For File Station
<VirtualHost *:80>
ServerName filestation.mydomainname.myds.me
<Location />
RedirectPermanent / https://filestation.mydomainname.myds.me/
</Location>
</VirtualHost>

NameVirtualHost *:443

<VirtualHost *:443>
ServerName filestation.mydomainname.myds.me
SSLCipherSuite HIGH:MEDIUM
SSLProtocol all -SSLv2
SSLCertificateFile /usr/syno/etc/ssl/ssl.crt/server.crt
SSLCertificateKeyFile /usr/syno/etc/ssl/ssl.key/server.key
SSLEngine on
SSLProxyEngine on
ProxyRequests Off
ProxyVia Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPass / http://localhost:7000/
ProxyPassReverse / http://localhost:7000/

</VirtualHost>

(In case something is missing above due to WordPress formatting issues, see this: http://pastebin.com/C7WF8kTN)

Save this file as httpd-FILESTATION-vh.conf-user at /usr/syno/etc.

Then add the following line at the end of the file /etc/httpd/conf/httpd.conf-user:

Include /usr/syno/etc/httpd-FILESTATION-vh.conf-user
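Or, equivalently, append it from a shell (assuming the virtual host file above is already in place):

echo 'Include /usr/syno/etc/httpd-FILESTATION-vh.conf-user' >> /etc/httpd/conf/httpd.conf-user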

Repeat for other sites like Audio Station, changing the hostname and the localhost port; a sketch follows below.
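For instance, assuming Audio Station answers on local port 7002 (the port on my unit; adjust to yours), the HTTPS virtual host would look like this:

<VirtualHost *:443>
ServerName audio.mydomainname.myds.me
SSLCipherSuite HIGH:MEDIUM
SSLProtocol all -SSLv2
SSLCertificateFile /usr/syno/etc/ssl/ssl.crt/server.crt
SSLCertificateKeyFile /usr/syno/etc/ssl/ssl.key/server.key
SSLEngine on
SSLProxyEngine on
ProxyRequests Off
ProxyVia Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPass / http://localhost:7002/
ProxyPassReverse / http://localhost:7002/
</VirtualHost>

Save it as something like httpd-AUDIOSTATION-vh.conf-user with its own Include line, plus a port-80 redirect virtual host like the File Station one if you want the redirect.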

You can now run the following command: /usr/syno/etc/rc.sysv/httpd-user-conf-writer.sh and check that the above Include was added to the /etc/httpd/conf/httpd.conf file. This file is always regenerated when the Web Station starts.
Now we can restart the Web Station with /sbin/initctl stop httpd-user followed by the start command.
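Putting the whole sequence together (the grep is just a quick sanity check that the Include made it in):

/usr/syno/etc/rc.sysv/httpd-user-conf-writer.sh
grep FILESTATION /etc/httpd/conf/httpd.conf
/sbin/initctl stop httpd-user
/sbin/initctl start httpd-user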

Check now if you can access the URL: https://filestation.mydomainname.myds.me

Edit: This works for version 5.0-4482.

For more recent versions you also need to do the following:

Edit the file /etc/httpd/conf/extra/httpd-ssl.conf-user and comment out the ServerName and ServerAlias like this:

#ServerName *

#ServerAlias *

Save the files, regenerate the configuration (/usr/syno/etc/rc.sysv/httpd-user-conf-writer.sh), stop (/sbin/initctl stop httpd-user) and start again (/sbin/initctl start httpd-user), and it should work now.

Thanks to Markus (below in the comments) for the solution and to Tensai for the corrections.

Synology Cloudstation on Kubuntu/KDE Desktop

One of the software packages available for the Synology is CloudStation, which mimics Dropbox functionality but with your own server (in this case your Synology device). For the CloudStation server there are several clients available, one of them being CloudStation for Linux.

But I run Kubuntu, the Ubuntu flavour with the KDE desktop, and during installation of CloudStation for Linux/Ubuntu, one of the steps taken by the CloudStation install program is to run apt-get, through which Nautilus, Brasero and a lot of other supporting libraries are required to be installed.

But, as I’ve found out, those packages are only needed for file manager integration, not for the functionality of the CloudStation software.

So just answer NO to the request of packages (you may want to keep a copy of the package list for future installation), and let CloudStation install.

On my KDE 4.11.3, CloudStation works fine and has its status icon in the systray, without the supposedly required Nautilus libraries. Of course, there is no Dolphin integration.

LogStash and IBM FileNet P8 5.2 logs

At work, the production environment of FileNet P8 5.2 is deployed on several Oracle WebLogic server instances. This means that when a problem crops up, I have a lot of log files to analyze… It is not an easy task to find and correlate an error across so many instances and log files.

A solution exists for this madness of log files… In fact, we have Splunk to ingest and manage the log files of several applications. But Splunk is licensed by volume, it’s expensive, and I can’t touch it… Not helping my work, so…

So I’m checking out Logstash and its web interface, Kibana.

The main FileNet P8 5.2 log files are the p8_server_error.log file and the pesvr_system.log file, for the Content Engine and the Process Engine respectively.

These files are located under the Content Engine domain, in a directory named FileNet, subdivided by server instance.

So, to keep things short, here is a Logstash agent file that monitors the logs and ships them to a remote Redis instance:

input {
        ## P8 Content Engine CE1 Server Log
        file {
                type => "IBMP8_CE"
                path => [ "/weblogic/user_projects/domains/fnce/FileNet/CeServer01/p8_server_error.log" ]
                codec => multiline {
                        ## Lines that do not start with a timestamp belong to the previous event
                        ##pattern => "^\s"
                        pattern => "^%{TIMESTAMP_ISO8601}"
                        negate => true
                        what => "previous"
                }
                tags => ["P8CEServerLog"]
        }

        ## P8 Process Engine CE1 System Log
        file {
                type => "IBMP8_PE"
                path => [ "/weblogic/user_projects/domains/fnce/FileNet/CeServer01/pesvr_system.log" ]
                codec => multiline {
                        ##pattern => "^\s"
                        pattern => "^(?>\d\d){1,2}"
                        negate => true
                        what => "previous"
                }
                tags => ["P8PEServerLog"]
        }
}

output {
  stdout { codec => rubydebug }
  redis { host => "redis_server.domain.com" data_type => "list" key => "logstash" }
}

You should change redis_server.domain.com to your real Redis IP/name and, after debugging, disable the stdout line.
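To run the shipper with this configuration, something like this should do (the file name is hypothetical; this is the Logstash 1.x style invocation):

bin/logstash agent -f filenet-shipper.conf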

You can add several file inputs, one for each server instance that is co-located on the same server machine (see the sketch below).
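As a sketch, an extra input for a hypothetical second instance just repeats the block with the instance directory changed (CeServer02 here is made up; use your real instance names):

        file {
                type => "IBMP8_CE"
                path => [ "/weblogic/user_projects/domains/fnce/FileNet/CeServer02/p8_server_error.log" ]
                codec => multiline {
                        pattern => "^%{TIMESTAMP_ISO8601}"
                        negate => true
                        what => "previous"
                }
                tags => ["P8CEServerLog"]
        }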


IMAP and SMTP over HTTP Proxy

The solution that I’m using to allow Thunderbird (and, if you really want, KMail) to connect to my employer’s IMAP and SMTP servers is not straightforward, but it simply works…

For this to work you really need an external server that you can connect to through ssh. This ssh server must be able to reach the required mail servers, namely their IMAP and SMTP ports.

Right now I use a Linux VPS, located somewhere in the world ( :) ), that I bought through the lowendbox.com site. Great price (around 2€ per month).

I run ssh on this server on a non standard port.

The trick is simple:

Just open up two terminal sessions, and if you have ssh through corkscrew tunnelling working (see my previous post: http://primalcortex.wordpress.com/2014/02/19/ssh-over-http-proxy/ ), it’s as simple as executing this:
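For reference, a minimal ~/.ssh/config sketch for the corkscrew part (the proxy host, port and server name are hypothetical; see the linked post for the details):

Host mysshserver
    HostName my.vps.example.com
    Port 12345
    # Tunnel the ssh connection through the corporate HTTP proxy
    ProxyCommand corkscrew proxy.mycompany.com 8080 %h %p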

On terminal 1 and for IMAP (secure):

ssh -L 1993:imap.server.com:993 -p 12345 mysshserver

where imap.server.com is the name or external IP of the IMAP server and 993 is the secure IMAP port. 1993 is the port on the local address 127.0.0.1 that listens for connections from Thunderbird. The -p 12345 is the port that my remote ssh server is running on and listening for connections, and, of course, mysshserver is the DNS name or IP address of the ssh server.

On terminal 2 and for secure SMTP:

ssh -L 1465:smtp.server.com:465 -p 12345 mysshserver

When these two connections are established, the local machine ports 1993 and 1465 connect through the ssh and corkscrew tunnels to the mail servers… and Thunderbird can now work as it should.

Just use localhost and port 1993 as the IMAP server, and localhost and port 1465 as the SMTP server.

Of course, for Thunderbird to work, the tunnels need to be created first.
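As a side note, both forwards can also be opened in a single ssh session instead of two terminals:

ssh -L 1993:imap.server.com:993 -L 1465:smtp.server.com:465 -p 12345 mysshserver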