Apache Axis deployment on WebLogic and RedHat

I have some Apache Axis web services deployed on a WebLogic application server running on RedHat Linux machines. Version 1.6, for the record…

The issue with this combination is that periodically people complained that the Axis web services stopped working, for no apparent reason…

Looking at the Axis and web services logs, they complained that some directories and files that Axis needed were now missing from the /tmp directory:

java.io.FileNotFoundException: /tmp/axis2-tmp-1781229799800844743.tmp/axis23357234725283571419sler.jar (No such file or directory)

It was a fact that the /tmp/axis* directories were indeed all gone, but nobody had deleted them…

A temporary solution for this issue (it's permanently solved now, as explained below) was to redeploy the web services that had the problem.

But that didn't solve the mystery of why files in the /tmp directory were being cleaned up…

Well, it's simple: RedHat ships a script in /etc/cron.daily that checks /tmp for files that haven't been accessed for more than 720 hours.

This script, tmpwatch, was the culprit: it was deleting the Axis files.
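For reference, the daily cron job boils down to something like this (a simplified sketch; the exact flags, thresholds and excluded paths vary between RedHat releases):

# /etc/cron.daily/tmpwatch (simplified sketch)
# Remove anything under /tmp not accessed, modified or changed in 720 hours (30 days)
/usr/sbin/tmpwatch -umc 720 /tmp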

So there are two permanent solutions for this:

1st) Change/disable the tmpwatch script
2nd) Move Axis temporary files to another location

Because I'm not the machine administrator, but the WebLogic administrator, I chose option 2) and added the following option to the server startup:

-Djava.io.tmpdir=/opt/axis-tmp
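On WebLogic this typically goes into the domain's setDomainEnv.sh or the managed server start script; a minimal sketch, assuming /opt/axis-tmp already exists and is writable by the WebLogic user:

# In $DOMAIN_HOME/bin/setDomainEnv.sh (the exact location may vary with your setup)
JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.io.tmpdir=/opt/axis-tmp"
export JAVA_OPTIONS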

Restarted the servers, and voilà: Axis temp files in /opt/axis-tmp.

Problem solved.

Rising from the ashes: NSLU2

Despite having a Synology Diskstation DS212+ for storing my data (photos, videos and PC/laptop backups), I also back up that data, using rsync from the Diskstation, to an external disk connected to my faithful Linksys NSLU2, bought in 2005.
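The backup job itself is just an rsync push from the Diskstation; a minimal sketch with placeholder host, module and paths (the NSLU2 side runs an rsync daemon, described further below):

# Run on the Diskstation; 'nslu2' and the 'backup' module are placeholders
rsync -av --delete /volume1/photo/ rsync://nslu2/backup/photo/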

The NSLU2 has been flashed with the openSlug 5.3 Beta firmware since 2009 (when it came out), with the operating system installed on a crappy 2GB SD card.

But this weekend, due to a power failure, the NSLU2 failed to boot. It kept the amber LED blinking, signalling that it couldn't get past the initial stages of booting.

Using my desktop computer, I fsck'ed the external disk filesystem, which had some inconsistencies, nothing serious (most of the time it is dormant), and then fsck'ed the SD card, and, well, most of the /etc and /var directories were gone.
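For reference, the checks were along these lines (the device names are placeholders for how the disks showed up on my desktop):

# Force-check the external disk and the SD card
e2fsck -f /dev/sdb1
e2fsck -f /dev/sdc1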

Since I had a backup of the SD card (these things die…), I recovered the /etc directory, but the NSLU2 still didn't boot.

Booting without the SD card, the NSLU2 did finish booting, but it wouldn't answer pings, neither on the original IP address (192.168.1.77) nor on the configured one. All I had on my Linux machine was an incomplete entry in the ARP table…

nslu2 (191.168.1.32) at <incomplete> [ether] on enp4s0f2

Not good….

I flashed it again with the openSlug firmware, but I was still unable to ssh into it to initialize it. Since I was able to flash it with the upslug2 tool, the ethernet port was OK, and probably everything else was too, except that the NVRAM settings that define the IP address were pretty much corrupted… Or so I hoped.

So the solution was to boot into RedBoot and erase the NVRAM (http://www.nslu2-linux.org/wiki/HowTo/ResetSysConf) with the following command (pay close attention to it: don't get it wrong!):

fis erase -f 0x50040000 -l 0x20000

And then upgrade from the RedBoot interface. I flashed the original Linksys firmware, and after rebooting, this firmware initialized the NVRAM with the default settings: IP address 192.168.1.77. And bingo, ping worked and I could access the original Linksys web interface, where I configured the old IP address, DNS, host name, and so on, and rebooted.

After resetting the NVRAM from RedBoot you must install the original Linksys firmware, because openSlug doesn't initialize the NVRAM.

Everything was fine, and the NSLU2 was working on the new IP. From this point on, I just flashed the openSlug firmware again, formatted the SD card (turnup with the memstick option), and configured everything again (crontab, ntpclient and the rsync daemon).
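The rsync daemon side needs little more than a minimal rsyncd.conf; a sketch with a hypothetical module name and path, matching the push command shown earlier:

# /etc/rsyncd.conf on the NSLU2 -- module name and path are placeholders
# (the daemon itself is started with: rsync --daemon)
[backup]
    path = /mnt/backup
    read only = false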

In no time I had the Diskstation again backing up to the NSLU2 external disk.

So, welcome again NSLU2 :)

Good Morning: Stepping into Arduino…

So I've bought one of those cheap Arduino kits off eBay… In fact, I bought the cheapest one I could find from a European seller… Bought it from a Chinese shop with a UK warehouse, but the kit came from Sweden… Talk about globalization… After waiting around 10 days, I got my kit, and within 5 minutes an LED was blinking in pure RGB glory (just red…).

The kit came with a UNO R3 clone, identical to the original, and several other components.

It depends on what you want to do with your Arduino, but just to give an idea, here's what came with my kit:

- Some LEDs, push buttons, resistors, 7-segment displays (a single one and one with 4 digits side by side) and an LED matrix.

- One shift register 74hc595.

While the LEDs and the single 7-segment display can be driven directly by Arduino pins, for the LED matrix and the 4-digit 7-segment display the shift register makes it possible to drive them with a minimal number of pins.

- An infrared receiver and a small remote. This is great because it provides multiple inputs (the remote buttons) while using only a single input pin.

- A 16×2 LCD display. I never used it directly; I also bought an I2C driver for it, so I only need 2 pins to drive it and 2 pins to power it.

- One SG-9 servo motor, and one stepper motor with a ULN2003 driver.
These allow some basic learning with these types of motors, but I think that to do something useful, more motors are needed.

- An expansion shield with a mini breadboard. Not used yet.

- A larger breadboard and some Dupont cables. These are only enough for some basic experiments; for more advanced stuff you'll need to buy more…

- A 9V battery clip for providing standalone power.

- A USB cable.

- Some assorted stuff (one pot, a flame sensor, an LDR, a tilt switch, etc.).

So, is it worth buying this kind of kit?

The short answer: yes. But for intermediate or more advanced levels, a few more items need to be added to the kit, namely cables, and, to allow the use of the 16×2 LCD while keeping pins available, at least an I2C driver/shield for the LCD. This is cheap on eBay, works fine, and makes a nice introduction to the I2C protocol.

JVM Peer Gone in WebLogic T3 connection

So I got this exception when connecting to the FileNet P8 API from my Linux machine:

com.filenet.api.exception.EngineRuntimeException: FNRCE0040E: E_NOT_AUTHENTICATED: The user is not authenticated. Message was: java.net.ConnectException: t3://1.2.3.4:9210: Bootstrap to 1.2.3.4/1.2.3.4:9210 failed. It is likely that the remote side declared peer gone on this JVM
at com.filenet.apiimpl.core.UserPasswordToken.getSubject(UserPasswordToken.java:121)
at com.filenet.api.util.UserContext.createSubject(UserContext.java:288)

This happens when connecting to a WebLogic cluster, not to a single node (well, it might also happen with a single node…).

The solution?

Easy: just add the name and IP address of each WebLogic cluster node to the hosts file of the client machine (the T3 bootstrap returns the cluster node hostnames, which the client must be able to resolve).
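Something along these lines (node names and addresses are placeholders):

# /etc/hosts on the client machine
1.2.3.4    wlsnode1.mydomain.com    wlsnode1
1.2.3.5    wlsnode2.mydomain.com    wlsnode2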

Synology and MyDS Quickconnect

The issue: Quickconnect doesn’t work

After upgrading to the latest DSM version 5.0, it took me a while to notice that the QuickConnect ID I had chosen was no longer working…

On the DSM Control Panel, if I tried to change and apply the settings, it gave an Unknown Error. In the logs, the only message related to the QuickConnect settings was network error: -23, and that was it…

On the MyDS site, my DS status was red, and clicking on the QuickConnect ID just gave a page saying that my DS was offline or had no network connection, while clicking on the host name worked fine.

Using the apps with the hostname and/or IP address worked fine, just not with the QuickConnect ID.

The solution:

I don't have a solution that will work for everyone, but these are the steps that solved the issue for me.

First, on the MyDS site I deleted the hostname, and in the DS Control Panel, under the DDNS settings, I tried to register it again. This failed, saying that the hostname didn't exist…

So I also deleted the entry for the Synology DDNS provider and configured it again, entering my MyDS site login credentials and my hostname once more.

This time it worked, and the hostname (after I had deleted it from there) appeared again on the MyDS page. Still, clicking on the QuickConnect ID failed.

So, back in the DS Control Panel, I went to the QuickConnect settings, and this time it said that I needed to register a QuickConnect ID. I registered my ID again, providing the MyDS site credentials and the ID. And it worked.

Now my QuickConnect ID works, and the DDNS name also works.

The status of my DS on the MyDS site remained red for a long time, but in the end it turned green. Clicking on the QuickConnect ID now works and gives me access to the web frontend of the DS.

This was quite a surprise for me, as I didn't expect the web administration console to be exposed to the internet.

I’ll have to see how to block this.

tl;dr:

Delete your DDNS configuration and register it again. Then register the QuickConnect ID again.

Synology DSM 5 web station and virtual sites for FileStation

I upgraded my Synology DS212+ to the latest DSM version 5 a few weeks ago.

Many things have changed in this new DSM version, and one of them is the way Web Station and Apache/PHP work.

The configuration files are different from the previous version 4, and the service is now started and stopped with upstart:

To stop the Web Station: /sbin/initctl stop httpd-user

To start the Web Station: /sbin/initctl start httpd-user

There is also a status command to show the state of the server: it reports httpd-user stop/waiting (stopped) or httpd-user start/running, process #### (running).
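For example, checking the state looks like this (process number elided):

/sbin/initctl status httpd-user
httpd-user start/running, process ####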

FileStation and Audio Station normally expect the browser to connect to application-specific ports, in my case ports 7000 and 7002. This has potentially two problems: I need to open those ports on my home router to allow access, and sometimes I'm behind a proxy that doesn't allow any port in the URL other than 80 (HTTP) or 443 (HTTPS).

So I put these applications behind a reverse proxy on the Web Station. With this configuration I have my "normal" HTTP address, http://primalcortex.somedomain.com, and I also have http://filestation.primalcortex.somedomain.com and http://audio.primalcortex.somedomain.com.

What is needed to achieve this? Well, first, a DNS server that accepts wildcard domains. Myds.me is one of them; it is provided by Synology and directly integrated into the DSM.

Second, a file like this needs to be created:

<IfModule !proxy_module>
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_connect_module modules/mod_proxy_connect.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
</IfModule>

ProxyRequests Off
#ProxyPreserveHost On

NameVirtualHost *:80

# For File Station: redirect plain HTTP to HTTPS
<VirtualHost *:80>
    ServerName filestation.mydomainname.myds.me
    <Location />
        RedirectPermanent / https://filestation.mydomainname.myds.me/
    </Location>
</VirtualHost>

NameVirtualHost *:443

<VirtualHost *:443>
    ServerName filestation.mydomainname.myds.me
    SSLCipherSuite HIGH:MEDIUM
    SSLProtocol all -SSLv2
    SSLCertificateFile /usr/syno/etc/ssl/ssl.crt/server.crt
    SSLCertificateKeyFile /usr/syno/etc/ssl/ssl.key/server.key
    SSLEngine on
    SSLProxyEngine on
    ProxyRequests Off
    ProxyVia Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass / http://localhost:7000/
    ProxyPassReverse / http://localhost:7000/
</VirtualHost>

(In case something is missing above due to WordPress formatting issues, see this -> http://pastebin.com/C7WF8kTN)

Save this file as httpd-FILESTATION-vh.conf-user in /usr/syno/etc.

Then add the following line at the end of the file /etc/httpd/conf/httpd.conf-user:

Include /usr/syno/etc/httpd-FILESTATION-vh.conf-user

Repeat for other sites like Audio Station, changing the hostname and the localhost port, as in the sketch below.
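For instance, an Audio Station virtual host could look like this, assuming Audio Station answers on port 7002 (as mentioned above); the hostname is a placeholder, and the file would be saved as, say, httpd-AUDIOSTATION-vh.conf-user with its own Include line:

<VirtualHost *:443>
    ServerName audio.mydomainname.myds.me
    SSLCipherSuite HIGH:MEDIUM
    SSLProtocol all -SSLv2
    SSLCertificateFile /usr/syno/etc/ssl/ssl.crt/server.crt
    SSLCertificateKeyFile /usr/syno/etc/ssl/ssl.key/server.key
    SSLEngine on
    SSLProxyEngine on
    ProxyRequests Off
    ProxyVia Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass / http://localhost:7002/
    ProxyPassReverse / http://localhost:7002/
</VirtualHost>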

You can now run the following command: /usr/syno/etc/rc.sysv/httpd-user-conf-writer.sh, and check that the above Include was added to the /etc/httpd/conf/httpd.conf file. This file is regenerated every time the Web Station starts.
Now we can restart the Web Station with /sbin/initctl stop httpd-user followed by the start command.
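Putting the deployment steps together, the whole sequence is (the grep is just a quick sanity check that the Include survived the regeneration):

/usr/syno/etc/rc.sysv/httpd-user-conf-writer.sh
grep FILESTATION /etc/httpd/conf/httpd.conf
/sbin/initctl stop httpd-user
/sbin/initctl start httpd-user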

Now check if you can access the URL: https://filestation.mydomainname.myds.me

Edit: This works for version 5.0-4482

For more recent versions you also need to do the following:

Edit the file /etc/httpd/conf/extra/httpd-ssl.conf-user and comment out the ServerName and ServerAlias like this:

#ServerName *

#ServerAlias *

Save the files, regenerate the configuration (/usr/syno/etc/rc.sysv/httpd-user-conf-writer.sh), stop (/sbin/initctl stop httpd-user) and start (/sbin/initctl start httpd-user) the Web Station again, and it should now work.

Thanks to Markus (below in the comments) for the solution, and to Tensai for corrections.

Synology Cloudstation on Kubuntu/KDE Desktop

One of the software packages available for the Synology is CloudStation, which mimics Dropbox functionality but with your own server (in this case, your Synology device). For the CloudStation server there are several clients available, one of them being CloudStation for Linux.

But I run Kubuntu, the KDE flavour of Ubuntu, and during the installation of CloudStation for Linux/Ubuntu, one of the steps the CloudStation install program performs is to run apt-get, requiring Nautilus, Brasero and a lot of other supporting libraries to be installed.

But, as I’ve found out, those packages are only needed for file manager integration, not for the functionality of the CloudStation software.

So just answer NO to the package request (you may want to keep a copy of the package list for a future installation), and let CloudStation install.

On my KDE 4.11.3, CloudStation works fine and shows its status icon in the system tray, without the "required" Nautilus libraries. Of course, there is no Dolphin integration.