Synology and MyDS Quickconnect

The issue: Quickconnect doesn’t work

After upgrading to the latest DSM version 5.0, it took a while to notice that the QuickConnect ID that I had chosen was no longer working…

On the DSM Control Panel, if I tried to change and apply the settings, it gave an Unknown Error. In the logs, the only message related to the QuickConnect settings was network error: -23, and that was it…

On the MyDS site, my DS status was red, and clicking on the QuickConnect ID just gave a page saying that my DS was offline or had no network connection, but clicking on the host name worked fine.

Using the apps with the hostname and/or IP worked fine, just not with the QuickConnect ID.

The solution:

I don’t have a solution that will work for everyone, but these are the steps that solved the issue for me.

First, on the MyDS site I deleted the hostname, and on the DS Control Panel, under the DDNS settings, I tried to register it again. This failed, saying that the hostname didn’t exist…

So I also deleted the entry for the Synology DDNS provider and configured it again, entering my MyDS site login credentials and my hostname once more.

This time it worked, and the hostname (after I had deleted it there) appeared again on the MyDS page. Still, clicking on the QuickConnect ID failed.

So, back in the DS Control Panel, I went to the QuickConnect section, and this time it said that I needed to register a QuickConnect ID. I registered my ID again, providing the MyDS site credentials and the ID. And it worked.

Now both my QuickConnect ID and my DDNS name work.

The status of my DS on the MyDS site remained red for a long period of time, but in the end it turned green. Clicking on the QuickConnect ID now works and gives me access to the web frontend of the DS.

This was quite a surprise for me, as I didn’t expect the web administration console to be available to the internet.

I’ll have to see how to block this.


In short: delete your DDNS configuration and register it again, then register the QuickConnect ID again.

Synology DSM 5 web station and virtual sites for FileStation

I’ve upgraded my Synology DS to the latest DSM version 5 in the last few days.

Many things have changed in this DSM version, and one of them is the way Web Station and Apache/PHP work.

The configuration files are different, and the service is now started and stopped through Upstart.

To stop the Web Station: /sbin/initctl stop httpd-user

To start the Web Station: /sbin/initctl start httpd-user

There is also a status command to show the state of the server: /sbin/initctl status httpd-user prints either httpd-user stop/waiting (stopped) or httpd-user start/running, process #### (running).
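The three calls above can be wrapped in a small shell helper. This is just a sketch: webstation is a hypothetical function name, and with DRY_RUN=1 it only prints the commands instead of running them (useful off the NAS):

```shell
#!/bin/sh
# Hypothetical helper around Upstart's initctl for the DSM 5 user web server.
# With DRY_RUN=1 the commands are only printed instead of executed.
DRY_RUN=1

webstation() {
    # usage: webstation stop|start|status
    if [ "$DRY_RUN" = "1" ]; then
        echo "/sbin/initctl $1 httpd-user"
    else
        /sbin/initctl "$1" httpd-user
    fi
}

webstation stop
webstation start
```

On the DS itself, set DRY_RUN=0 (or drop the variable handling entirely) so the initctl calls really run.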

For File Station and Audio Station, these applications/sites normally expect the browser to connect to application ports, in my case ports 7000 and 7002. This has potentially two problems: I need to open them up on my home router to allow access, and sometimes I’m behind a proxy that doesn’t allow anything other than port 80 (HTTP) or 443 (HTTPS) to be used in the URL.

So I have these applications behind a reverse proxy on the Web Station. With this configuration I keep my “normal” HTTP address and also get a dedicated virtual host name for each application.

What is needed to achieve this? First, a DNS service that accepts wildcard domains. The DDNS service provided by Synology and directly integrated into the DSM is one of them.

Second, a file like the one below needs to be created (the <filestation-hostname> entries are placeholders for your own virtual host name):

<IfModule !proxy_module>
   LoadModule proxy_module modules/mod_proxy.so
   LoadModule proxy_connect_module modules/mod_proxy_connect.so
   LoadModule proxy_http_module modules/mod_proxy_http.so
</IfModule>

ProxyRequests Off
#ProxyPreserveHost On

NameVirtualHost *:80

#For File Station: redirect plain HTTP to the HTTPS virtual host
<VirtualHost *:80>
   ServerName <filestation-hostname>
   <Location />
      RedirectPermanent / https://<filestation-hostname>/
   </Location>
</VirtualHost>

NameVirtualHost *:443

<VirtualHost *:443>
   ServerName <filestation-hostname>
   SSLProtocol all -SSLv2
   SSLCertificateFile /usr/syno/etc/ssl/ssl.crt/server.crt
   SSLCertificateKeyFile /usr/syno/etc/ssl/ssl.key/server.key
   SSLEngine on
   SSLProxyEngine on
   ProxyRequests Off
   ProxyVia Off
   <Proxy *>
       Order deny,allow
       Allow from all
   </Proxy>
   ProxyPass / http://localhost:7000/
   ProxyPassReverse / http://localhost:7000/
</VirtualHost>

Save this file as httpd-FILESTATION-vh.conf-user at /usr/syno/etc

Then add the following line at the end of the file /etc/httpd/conf/httpd.conf-user:

Include /usr/syno/etc/httpd-FILESTATION-vh.conf-user

Repeat for other sites, like Audio Station, changing the hostname and the localhost port.

You can now run the following command: /usr/syno/etc/rc.sysv/ and check whether the above Include is added to the /etc/httpd/conf/httpd.conf file. This file is always regenerated when the Web Station starts.

Now we can restart the Web Station with /sbin/initctl stop httpd-user followed by the start command.
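The append-and-restart steps above can be sketched as a small script. This is only a sketch: add_include is a hypothetical helper, the paths are the ones used in this post, and the restart part only runs when the DSM config file is actually present (i.e. on the DS itself):

```shell
#!/bin/sh
# Sketch: append the Include line (only once) and restart the Web Station.

# add_include FILE LINE: append LINE to FILE unless it is already there.
add_include() {
    grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

USERCONF=/etc/httpd/conf/httpd.conf-user

# Only act when running on the DS, where this file exists and is writable.
if [ -w "$USERCONF" ]; then
    add_include "$USERCONF" "Include /usr/syno/etc/httpd-FILESTATION-vh.conf-user"
    /sbin/initctl stop httpd-user
    /sbin/initctl start httpd-user
fi
```

The grep -qxF guard makes the script safe to run more than once: the Include line is never duplicated in httpd.conf-user.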

Check now if you can access the File Station virtual host URL.

Synology Cloudstation on Kubuntu/KDE Desktop

One of the software packages available for the Synology is Cloud Station, which mimics Dropbox functionality but with your own server (in this case your Synology device). For the Cloud Station server there are several clients available, one of them being CloudStation for Linux.

But I run Kubuntu, the KDE flavour of Ubuntu, and during installation of CloudStation for Linux/Ubuntu, one of the steps performed by the install program is to run apt-get, which wants to install Nautilus, Brasero and a lot of other supporting libraries.

But, as I’ve found out, those packages are only needed for file manager integration, not for the functionality of the CloudStation software.

So just answer NO to the package request (you may want to keep a copy of the package list for a future installation), and let CloudStation install.

On my KDE 4.11.3, CloudStation works fine, and it has the status icon on the systray, without the required Nautilus libraries. Of course, there is no Dolphin integration.

LogStash and IBM FileNet P8 5.2 logs

At work, the production environment of FileNet P8 5.2 is deployed on several Oracle WebLogic server instances. This means that when a problem crops up, I have a lot of log files to analyze… Not an easy task to find and correlate an error across so many instances and log files.

A solution exists for this madness of log files… In fact we have Splunk to ingest and manage the log files of several applications. But Splunk is licensed by volume, it’s expensive, and I can’t touch it… Not helping my work, so…

So I’m checking out Logstash and its web interface, Kibana.

The main FileNet P8 5.2 log files are p8_server_error.log and pesvr_system.log, for the Content Engine and the Process Engine respectively.

These files are located under the Content Engine domain, in a directory named FileNet, sub-divided by server instance.

So, to keep things short, here is a Logstash agent configuration that monitors these files and ships the logs to a remote Redis instance:

input {
        ## P8 Content Engine CE1 Server Log
        file {
                type => "IBMP8_CE"
                path => [ "/weblogic/user_projects/domains/fnce/FileNet/CeServer01/p8_server_error.log" ]
                codec => multiline {
                        ##pattern => "^\s"
                        pattern => "^%{TIMESTAMP_ISO8601}"
                        negate => true
                        what => "previous"
                }
                tags => ["P8CEServerLog"]
        }

        ## P8 Process Engine CE1 System Log
        file {
                type => "IBMP8_PE"
                path => [ "/weblogic/user_projects/domains/fnce/FileNet/CeServer01/pesvr_system.log" ]
                codec => multiline {
                        ##pattern => "^\s"
                        pattern => "^(?>\d\d){1,2}"
                        negate => true
                        what => "previous"
                }
                tags => ["P8PEServerLog"]
        }
}

output {
  stdout { codec => rubydebug }
  redis { host => "" data_type => "list" key => "logstash" }
}

You should change the empty host value to your Redis server’s real IP/name, and after debugging, disable the stdout line.
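The multiline codec for the CE log anchors new events on lines starting with an ISO8601 timestamp. A rough shell approximation of that anchor can be sanity-checked against sample lines; note that both the regex approximation and the sample lines below are mine, not taken from the real logs:

```shell
#!/bin/sh
# Rough shell equivalent of the "^%{TIMESTAMP_ISO8601}" anchor: a line that
# starts with an ISO8601-like timestamp begins a new event; anything else
# would be folded into the previous event by the multiline codec.
iso8601='^[0-9]{4}-[0-9]{2}-[0-9]{2}[T ][0-9]{2}:[0-9]{2}:[0-9]{2}'

classify() {
    if printf '%s' "$1" | grep -Eq "$iso8601"; then
        echo "new event"
    else
        echo "continuation"
    fi
}

classify '2014-03-21T10:15:30.123 GMT Error ...'   # prints: new event
classify '    at com.example.SomeClass.method'     # prints: continuation
```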

You can add several input files for each server instance that is co-located on the same machine.


IMAP and SMTP over HTTP Proxy

The solution that I’m using to allow Thunderbird (and, if you really want, KMail) to connect to my employer’s IMAP and SMTP servers is not straightforward, but it simply works…

For this to work you really need an external server that you can connect to through ssh. This ssh server must be able to contact and connect to the required mail servers, namely accessing their IMAP and SMTP ports.

Right now, I use a Linux VPS, located somewhere in the world ( :) ), that I’ve bought online. Great price (around 2€ per month).

I run ssh on this server on a non-standard port.

The trick is simple:

Just open up two terminal sessions, and if you have ssh tunnelling through corkscrew working (see my previous posts), it’s as simple as executing this:

On terminal 1 and for IMAP (secure):

ssh -L 1993:<imap-server>:993 -p 12345 mysshserver

where <imap-server> is the name or external IP of the IMAP server and 993 is the secure IMAP port. 1993 is the port on the local address that listens for connections from Thunderbird. The -p 12345 is the port that my remote ssh server is listening on for connections, and, of course, mysshserver is the DNS name or IP address of the ssh server.

On terminal 2 and for secure SMTP:

ssh -L 1465:<smtp-server>:465 -p 12345 mysshserver

When these two connections are established, the local machine ports 1993 and 1465 connect through ssh and corkscrew tunnelling to the mail servers… and Thunderbird can now work as it should.

Just use localhost and port 1993 as the IMAP server, and localhost and port 1465 as the SMTP server.

Of course, for Thunderbird to work, the tunnels need to be created first.
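Both tunnels can be opened from a single script. This is a dry-run sketch: imap.example.com and smtp.example.com are placeholders for the real mail servers, and the echo only prints the commands; removing it would actually open the tunnels with -f -N (go to background after authentication, run no remote command):

```shell
#!/bin/sh
# Dry-run sketch: print the ssh commands that would open both tunnels.
# imap.example.com / smtp.example.com are placeholders for the real servers.
tunnel() {
    # usage: tunnel LOCALPORT REMOTEHOST REMOTEPORT
    echo ssh -f -N -L "$1:$2:$3" -p 12345 mysshserver
}

tunnel 1993 imap.example.com 993   # secure IMAP
tunnel 1465 smtp.example.com 465   # secure SMTP
```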

SSH over HTTP Proxy that uses NTLM Authentication

As can be read in my previous post, we can use SSH to connect to a remote host even when there is an HTTP proxy in between.

But some proxies, like Microsoft ISA or Forefront, require authentication using the NTLM protocol…

In this case the solution is to use TWO proxies, where one of them runs on your own machine and provides the NTLM authentication, allowing Firefox, Chrome and corkscrew to connect through it.

So, what do you need?

1) Install the cntlm proxy on your machine: apt-get install cntlm

2) Edit the cntlm.conf config file to configure the upstream proxy and credentials. This file is normally located in /etc.

3) For example add/edit the following lines:

Username  mydomainusername
Domain  MSDomainName
Password cleartextpassword
Proxy upstreamproxy:port
Listen cntlmproxylistenport

A “real example” (the upstream proxy name and port here are made up for illustration):

Username PrimalCortex
Domain  ACME
Password itsasecret
Proxy proxy.acme.com:8080
Listen 3128

Now the cntlm proxy can be started: as root, run /etc/init.d/cntlm start

Now you can point your clients to the local address 127.0.0.1:3128 (the port defined in the Listen config property), and proxy access is automatic, with the NTLM authentication running in the background.

So now corkscrew can work through a proxy that requires NTLM authentication; just edit the SSH config file and change the proxy address to localhost and the cntlm port:

  ProxyCommand corkscrew localhost 3128 %h %p

and that’s it.


SSH over HTTP Proxy

Using SSH to connect to a host when an HTTP proxy sits between the client and the host cannot be done directly without some configuration.

On Linux based machines the solution is to install and run corkscrew, a program that can tunnel the SSH protocol through an HTTP Proxy.

So how to do the configuration?

1) First install the corkscrew program with your package manager. On Ubuntu family: apt-get install corkscrew

2) Then you need to configure SSH to use corkscrew when connecting to the host that has an HTTP proxy in between.

3) Go to your home directory and change into the hidden directory .ssh within a command shell window.

4) Create or edit a file named config. The name is just config. No extension.

5) Add the following lines to the config file:

Host <IP_of_remote_host>
 ProxyCommand corkscrew <IP_of_HTTP_Proxy> <HTTP_Proxy_Port> %h %p <auth_file>

Where <IP_of_remote_host> is the public IP address of the host you wish to connect to.

The <IP_of_HTTP_Proxy> and <HTTP_Proxy_Port> are the IP address and port of the local HTTP proxy server that you wish to go through.

And finally, if your proxy server requires authentication by username and password, just give a complete path to a file where the proxy credentials are stored, for example /home/primalcortex/.corkscrew_auth

This file’s content must be a single line with the proxy username and password separated by a colon:

<proxy_username>:<proxy_password>
For example, a complete config file (the proxy address is a placeholder):

    Host <IP_of_remote_host>
    ProxyCommand corkscrew <IP_of_HTTP_Proxy> 8080 %h %p /home/primalcortex/.corkscrew-auth

and the .corkscrew-auth file, with the proxy username and password:

    <proxy_username>:<proxy_password>
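Since this file holds a cleartext password, it is worth creating it with owner-only permissions. A minimal sketch, where the path and the credentials are placeholders, not real ones:

```shell
#!/bin/sh
# Sketch: create the corkscrew auth file readable only by the owner.
AUTH_FILE="${AUTH_FILE:-$HOME/.corkscrew-auth}"

umask 077                                        # new files: owner-only
printf '%s:%s\n' 'myproxyuser' 'myproxypass' > "$AUTH_FILE"
chmod 600 "$AUTH_FILE"
```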
6) Just connect now:

ssh myremoteuser@<IP_of_remote_host>

or when not using the default ssh port:

ssh -p 12345 myremoteuser@<IP_of_remote_host>

7) Done!

So why do we need this?

Well, the first reason is of course to access a remote machine. But ssh can also forward local ports to remote ports, and this is important because with this feature we can use Thunderbird to connect to a remote mail server using the standard IMAP and SMTP protocols through an HTTP proxy.
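Putting the pieces together, the corkscrew proxy hop and the mail-port forwards can even live in a single ~/.ssh/config entry. This is a sketch: the Host alias, hostnames and user are placeholders, not my real setup:

```
Host mailtunnel
    HostName <IP_of_remote_host>
    Port 12345
    User myremoteuser
    ProxyCommand corkscrew <IP_of_HTTP_Proxy> 8080 %h %p /home/primalcortex/.corkscrew-auth
    LocalForward 1993 imap.example.com:993
    LocalForward 1465 smtp.example.com:465
```

Then a single ssh -N mailtunnel opens both forwards at once, going through the proxy.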