LogStash and IBM FileNet P8 5.2 logs

At work, the production environment of FileNet P8 5.2 is deployed on several Oracle WebLogic server instances. This means that when a problem crops up, I have a lot of log files to analyze… Not an easy task to find and correlate an error across so many instances and log files.

A solution exists for this madness of log files… In fact we have Splunk to ingest and manage the log files of several applications. But Splunk is licensed by volume, it's expensive, and I can't touch it… Not helping my work, so…

So I'm checking out Logstash and its web interface, Kibana.

The main FileNet P8 5.2 log files are p8_server_error.log and pesvr_system.log, for the Content Engine and the Process Engine respectively.

These files are located under the Content Engine domain, in a directory named FileNet that is sub-divided by server instance.

So, to keep things short, here is a Logstash agent file that monitors both logs and ships them to a remote Redis instance:

input {
        ## P8 Content Engine CE1 Server Log
        file {
                type => "IBMP8_CE"
                path => [ "/weblogic/user_projects/domains/fnce/FileNet/CeServer01/p8_server_error.log" ]
                codec => multiline {
                        ##pattern => "^\s"
                        pattern => "^%{TIMESTAMP_ISO8601}"
                        negate => true
                        what => "previous"
                }
                tags => ["P8CEServerLog"]
        }

        ## P8 Process Engine CE1 System Log
        file {
                type => "IBMP8_PE"
                path => [ "/weblogic/user_projects/domains/fnce/FileNet/CeServer01/pesvr_system.log" ]
                codec => multiline {
                        ##pattern => "^\s"
                        pattern => "^(?>\d\d){1,2}"
                        negate => true
                        what => "previous"
                }
                tags => ["P8PEServerLog"]
        }
}

output {
  stdout { codec => rubydebug }
  redis { host => "redis_server.domain.com" data_type => "list" key => "logstash" }
}
You should change redis_server.domain.com to your real Redis IP/hostname and, after debugging, disable the stdout line.

You can add several file inputs, one for each server instance co-located on the same machine.
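For instance, a second Content Engine instance on the same machine could be shipped with one more file block like this (the CeServer02 path is an assumption; adjust it to your own domain layout):

```
        ## P8 Content Engine CE2 Server Log
        file {
                type => "IBMP8_CE"
                path => [ "/weblogic/user_projects/domains/fnce/FileNet/CeServer02/p8_server_error.log" ]
                codec => multiline {
                        pattern => "^%{TIMESTAMP_ISO8601}"
                        negate => true
                        what => "previous"
                }
                tags => ["P8CEServerLog"]
        }
```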


IMAP and SMTP over HTTP Proxy

The solution that I'm using to allow Thunderbird (and, if you really want, KMail) to connect to my employer's IMAP and SMTP servers is not straightforward, but it simply works…

For this to work you really need an external server that you can reach through ssh. This ssh server must be able to connect to the required mail servers, namely on their IMAP and SMTP ports.

Right now I use a Linux VPS, located somewhere in the world :), that I bought through the lowendbox.com site. Great price (around 2€ per month).

I run ssh on this server on a non standard port.

The trick is simple:

Just open two terminal sessions and, if you have ssh through corkscrew tunnelling working (see my previous post: http://primalcortex.wordpress.com/2014/02/19/ssh-over-http-proxy/ ), it's as simple as executing this:

On terminal 1 and for IMAP (secure):

ssh -L 1993:imap.server.com:993 -p 12345 mysshserver

where imap.server.com is the name or external IP of the IMAP server and 993 is the secure IMAP port. 1993 is the local port that listens for connections from Thunderbird. The -p 12345 is the port my remote ssh server listens on, and of course mysshserver is the DNS name or IP address of the ssh server.

On terminal 2 and for secure SMTP:

ssh -L 1465:smtp.server.com:465 -p 12345 mysshserver

When these two connections are established, the local ports 1993 and 1465 connect through the ssh and corkscrew tunnel to the mail servers… and Thunderbird can now work as it should.

Just use localhost and port 1993 as the IMAP server, and localhost and port 1465 as the SMTP server.

Of course, for Thunderbird to work, the tunnels need to be created first.
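The two steps above can be sketched in one small script. The host names and ports are the placeholders from the examples; -N skips the remote shell and -f sends ssh to the background. The script only prints the commands so you can review them before running:

```shell
#!/bin/sh
# Placeholder hosts/ports from the examples above -- replace with your own.
IMAP_HOST="imap.server.com"
SMTP_HOST="smtp.server.com"
SSH_PORT="12345"
SSH_HOST="mysshserver"

# -N: no remote command, -f: go to background after authentication
imap_tunnel="ssh -f -N -L 1993:${IMAP_HOST}:993 -p ${SSH_PORT} ${SSH_HOST}"
smtp_tunnel="ssh -f -N -L 1465:${SMTP_HOST}:465 -p ${SSH_PORT} ${SSH_HOST}"

# Print the commands instead of executing them:
echo "$imap_tunnel"
echo "$smtp_tunnel"
```

With -f and -N, both tunnels can run from a single terminal instead of keeping two sessions open.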

SSH over HTTP Proxy that uses NTLM Authentication

As can be read in my post http://primalcortex.wordpress.com/2014/02/19/ssh-over-http-proxy/ , we can use SSH to connect to a remote host even when an HTTP proxy sits in between.

But some proxies, like Microsoft ISA or Forefront, require authentication, and using the NTLM protocol…

In this case the solution is to use TWO proxies, where one of them runs on your own machine and provides the NTLM authentication, allowing Firefox, Chrome and corkscrew to connect through it.

So what do you need?

1) Install the cntlm proxy on your machine: apt-get install cntlm

2) Edit the cntlm.conf config file to set the upstream proxy and credentials. This file is normally located in /etc.

3) For example add/edit the following lines:

Username  mydomainusername
Domain  MSDomainName
Password cleartextpassword
Proxy upstreamproxy:port
Listen cntlmproxylistenport

A “real example”:

Username PrimalCortex
Domain  ACME
Password itsasecret
Proxy  corp_proxy.acme.com:8080
Listen 3128

Now the cntlm proxy can be started: as root, run /etc/init.d/cntlm start

Now you can point your clients to the local address and the port defined in the Listen config property; proxy access is automatic, with the NTLM authentication running in the background.
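For command-line clients, a quick way to use the local cntlm instance is to export the proxy environment variables (port 3128 from the example above). The curl check is commented out since it only makes sense from inside the corporate network:

```shell
#!/bin/sh
# Point command-line clients at the local cntlm proxy (Listen 3128 assumed).
export http_proxy="http://127.0.0.1:3128"
export https_proxy="$http_proxy"

# Uncomment to test the chain from inside the corporate network:
# curl -s -o /dev/null -w '%{http_code}\n' http://example.com

echo "proxy set to $http_proxy"
```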

So now corkscrew can work through a proxy that requires NTLM authentication; just edit the SSH config file and change the proxy address to localhost and the cntlm port:

  ProxyCommand corkscrew localhost 3128 %h %p

and that’s it.


SSH over HTTP Proxy

Using SSH to connect to a host when an HTTP proxy sits between the client and the host cannot be done directly without some configuration.

On Linux-based machines the solution is to install and run corkscrew, a program that can tunnel the SSH protocol through an HTTP proxy.

So how to do the configuration?

1) First install the corkscrew program with your package manager. On Ubuntu family: apt-get install corkscrew

2) Then you need to configure SSH to use corkscrew when connecting to the host that sits behind the HTTP proxy.

3) Go to your home directory and change into the hidden .ssh directory within a command shell window.

4) Create or edit a file named config. The name is just config. No extensions.

5) Add the following lines to the config file

Host <IP_of_remote_host>
 ProxyCommand corkscrew <IP_of_HTTP_Proxy> <HTTP_Proxy_Port> %h %p <auth_file>

Where the <IP_of_remote_host> is the public ip address of the host where you wish to connect.

The <IP_of_HTTP_Proxy> and <HTTP_Proxy_Port> are the IP address and port of the local HTTP proxy server you wish to go through.

And finally, if your proxy server requires authentication by username and password, just give the complete path to a file where the proxy credentials are stored, for example /home/primalcortex/.corkscrew-auth

This file's content must be something like:

    username:password

For example, a complete config file:

    Host <IP_of_remote_host>
     ProxyCommand corkscrew <IP_of_HTTP_Proxy> 8080 %h %p /home/primalcortex/.corkscrew-auth

and the .corkscrew-auth file:

    username:password

6) Just connect now:

ssh myremoteuser@<IP_of_remote_host>

or when not using the default ssh port:

ssh -p 12345 myremoteuser@<IP_of_remote_host>

7) Done!

So why do we need this?

Well, the first reason is of course to access a remote machine. But ssh can also forward local ports to remote ports, and this is important because with this feature we can use Thunderbird to connect to a remote server using the standard IMAP and SMTP protocols through an HTTP proxy.

KDE device notifier and Konqueror/Dolphin file manager

This may seem a simple issue, but it took me a while to find out why "Open with File Manager" on the Device Notifier opened the Konqueror file manager instead of Dolphin…

Changing it is easy: just go to System Settings and Default Applications, choose File Manager, and there pick your file manager of preference.

That’s it.

MySQL on a Ubuntu VPS

Using the great site lowendbox.com, I've "bought" an Ubuntu-based VPS (Virtual Private Server) that I can use for my testing…

Anyway, I needed to install the MySQL database on this Ubuntu Server VPS, which is simply done by running the following commands:

apt-get update
apt-get install mysql-client mysql-server

During the installation process a password for the root user is required. Just make sure that it's strong enough (hint: use the KeePass password generator…).

After installing, the MySQL server runs on port 3306 and is normally only bound to the loopback address. But anyway, I've changed the local firewall rules to block all connections to port 3306 from outside the loopback adapter: just edit the /etc/rc.local file and add the following lines before the exit 0 command

iptables -A INPUT -p all -s localhost -d localhost -j ACCEPT
iptables -A INPUT -p tcp --destination-port 3306 -j REJECT

Then, as the root user, just run the file /etc/rc.local and make sure that the rules are active:

root@vpss:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  localhost.localdomain  localhost.localdomain 
ACCEPT     all  --  localhost.localdomain  localhost.localdomain 
ACCEPT     all  --  localhost.localdomain  localhost.localdomain 
ACCEPT     all  --  localhost.localdomain  localhost.localdomain 
REJECT     tcp  --  anywhere             anywhere             tcp dpt:mysql reject-with icmp-port-unreachable
ACCEPT     all  --  localhost.localdomain  localhost.localdomain 
ACCEPT     all  --  localhost.localdomain  localhost.localdomain 
ACCEPT     all  --  localhost.localdomain  localhost.localdomain 
ACCEPT     all  --  localhost.localdomain  localhost.localdomain 
REJECT     tcp  --  anywhere             anywhere             tcp dpt:mysql reject-with icmp-port-unreachable

And that’s it.

Now we need a backup policy, so that if anything goes wrong, at least we have some data to recover…
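As a starting point, here is a minimal nightly dump sketch. The backup directory, the 7-day retention, and the credentials living in ~/.my.cnf are all assumptions; the actual mysqldump line is commented out so it only runs once you enable it:

```shell
#!/bin/sh
# Minimal MySQL backup sketch: one gzipped dump per day, kept for 7 days.
BACKUP_DIR="${HOME}/mysql-backups"
STAMP=$(date +%Y%m%d)
OUTFILE="${BACKUP_DIR}/all-databases-${STAMP}.sql.gz"

mkdir -p "$BACKUP_DIR"

# Uncomment when the server is running and ~/.my.cnf holds the credentials:
# mysqldump --all-databases --single-transaction | gzip > "$OUTFILE"

# Drop dumps older than 7 days:
find "$BACKUP_DIR" -name 'all-databases-*.sql.gz' -mtime +7 -delete

echo "backup target: $OUTFILE"
```

Dropped into /etc/cron.daily/ and made executable, something like this would run automatically every day.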



Kubuntu upgrade from 12.04 to 13.04

Despite Kubuntu 12.04 being an LTS release, and a few weeks after upgrading to 13.04 on my personal desktop computer, I decided to do the same on my work laptop.

Things didn't run as expected…

First, the upgrade from 12.04 to 12.10 deleted a bunch of packages without my consent :( and I ended up with no graphical display… A quick look at the Xorg.0.log file showed me that the Xorg driver for my Intel graphics card was gone.

Anyway, I installed the xserver-xorg-video-intel package and proceeded to upgrade to 13.04.

At the end, despite having a graphical desktop, after logging in to KDE a qdbus error appeared… the qdbus package was missing… (apt-get install qdbus).

Also, while on 12.04 the transition from the login greeter to the desktop is silky smooth (it still is on my desktop), on my laptop it blanks to a black screen with a mouse cursor and then the desktop shows up abruptly. I no longer have the KDE logon progress icons…

And finally, the DNS settings from my DHCP servers didn't work; I had to manually add the DNS servers' IPs to resolv.conf… This issue was also a missing package, namely dnsmasq. After adding the nameservers to my /etc/resolv.conf file, everything is up…

Let’s see what is waiting again in the dark…