Proper SSL using Let’s Encrypt

Now that Let’s Encrypt has entered public beta, I took the opportunity to use their fantastic service and deploy a new certificate for this site and the other sites hosted on this platform.

The letsencrypt client was a bit confusing at first, but once I understood what the ACME protocol actually does, I saw the beauty behind the whole solution. So I did a quick review of the code on GitHub, and shortly after I requested my first certificate. Since I already have an Apache server running and don’t need the embedded web server provided by the letsencrypt client, I only use its CSR and certificate-download functionality.
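In webroot mode, the heart of the ACME http-01 challenge is surprisingly simple: the client drops a token file under the site’s document root and the CA fetches it over plain HTTP. A minimal sketch with placeholder paths and a made-up token:

```shell
# Sketch of the http-01 webroot challenge (paths and token are made up).
WEBROOT=/tmp/htdocs                      # stand-in for the real docroot
TOKEN=evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ   # placeholder challenge token

# The ACME client writes the challenge response under the webroot...
mkdir -p "$WEBROOT/.well-known/acme-challenge"
echo "$TOKEN.account-key-thumbprint" > "$WEBROOT/.well-known/acme-challenge/$TOKEN"

# ...the CA then requests http://<domain>/.well-known/acme-challenge/<token>
# and checks the body against the account key; Apache just serves the file.
cat "$WEBROOT/.well-known/acme-challenge/$TOKEN"
```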

After a while I also figured out how to request proper subject alternative names and was finally able to secure all the services running on this host with a single certificate deployed to my Apache.

To request/renew a certificate for different hosts/domains, you basically just do:

# The domain names below are placeholders.
letsencrypt-auto certonly \
--webroot \
--renew-by-default \
-w /home/to/htdocs/ \
-d example.com \
-d www.example.com \
-d mail.example.com \
-w /home/to/htdocs/anothersite/ \
-d anothersite.example.com

I also took the chance to improve the SSL configuration of my Apache web server, following the recommendations from SSLLabs.

After upgrading to Apache 2.4.18, generating a proper DH parameter file and appending it to the letsencrypt certificate chain, SSLLabs rates the site A+.
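The DH-parameter step boils down to two commands. This sketch uses stand-in paths (the real target is the fullchain.pem under /etc/letsencrypt/live/) and generates only 1024 bits so the example runs quickly – for production, use a 2048-bit file as described above:

```shell
# Generate a DH parameter file (1024 bits here only to keep the demo
# fast; a proper setup uses 2048 bits, as in the post).
openssl dhparam -out /tmp/dhparams.pem 1024 2>/dev/null

# Append the parameters to the chain Apache serves; /tmp/fullchain.pem
# stands in for /etc/letsencrypt/live/<site>/fullchain.pem.
touch /tmp/fullchain.pem
cat /tmp/dhparams.pem >> /tmp/fullchain.pem

grep "BEGIN DH PARAMETERS" /tmp/fullchain.pem
```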


Apache 2.4 SSL configuration (ssl-global.conf) 

# Added dhparams to fullchain.pem
# If you have different certificates per vhost, 
# add these to your vhost config instead
SSLCertificateFile /etc/letsencrypt/live/xxxx/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/xxxx/privkey.pem

SSLHonorCipherOrder     on

SSLProtocol All -SSLv2 -SSLv3
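The snippet above pins the protocol versions but shows no cipher list, and an A+ also requires a sane SSLCipherSuite. The exact list isn’t shown in the post – the following lines are an assumption along the lines of the SSLLabs/Mozilla recommendations of the time:

```
# Assumed cipher configuration – not necessarily the server's actual list
SSLCipherSuite ECDHE+AESGCM:ECDHE+AES:DHE+AESGCM:DHE+AES:!aNULL:!eNULL:!MD5:!RC4
SSLCompression off
```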


SNI enabled virtual host configuration

<VirtualHost *:443>
  SSLEngine On

  # Strict Transport Security per virtual host
  <IfModule mod_headers.c>
     Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
  </IfModule>
</VirtualHost>

Getting ready for IPv6

Today, I decided my server needs to get ready for the future. That means I have to configure IPv6.
Luckily, my cloud provider has supported IPv6 for quite some time now, so I was able to pick an address from its IPv6 address space.

Configuring openSUSE was a bit tricky, and my first two attempts failed miserably.
But once I figured out what my provider’s virtual-server network auto-configuration script did wrong, it was quite easy to configure the static IP and add the right default routes.
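On openSUSE the static setup ends up in two sysconfig files. A sketch with placeholder addresses (documentation prefixes 192.0.2.0/24 and 2001:db8::/32); interface name and gateway addresses are assumptions:

```
# /etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
IPADDR='192.0.2.10/24'
IPADDR_1='2001:db8:40:315::10/64'    # additional (IPv6) address

# /etc/sysconfig/network/routes
# destination  gateway              netmask  interface
default        192.0.2.1            -        -
default        2001:db8:40:315::1   -        -
```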


PING 2001:8e0:40:315::1:ff 32 data bytes
40 bytes from 2001:8e0:40:315::1:ff: icmp_seq=0 ttl=56 time=15.1 ms
40 bytes from 2001:8e0:40:315::1:ff: icmp_seq=1 ttl=56 time=15.0 ms
40 bytes from 2001:8e0:40:315::1:ff: icmp_seq=2 ttl=56 time=15.0 ms
40 bytes from 2001:8e0:40:315::1:ff: icmp_seq=3 ttl=56 time=15.0 ms

--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3013ms
rtt min/avg/max/mdev = 15.039/15.083/15.155/0.157 ms, pipe 2

Checked port 80 on Host/IP

The checked port (80, service http) is online/reachable!

Completed portscan in 0.1194 seconds

New Photography Page

I decided to create a dedicated site for my photographs.

Please visit my personal page to find the first galleries.

I hope to be able to add more soon…


How bad can software be

Today, I wanted to try the QSync functionality of my QNAP NAS to be able to move away from Dropbox. Oh boy…

Although QSync is labeled “beta”, I thought this was more of a marketing maneuver than a strong indication of an “absolutely useless piece of software”.

So I downloaded the .dmg image of the QSync client for the Mac and installed it. Well, kind of installed it. As soon as I wanted to configure the email server, the entire thing died. OK, another try without the email server configured, and I made it to the end of the setup procedure.

So there it is – the new QSync folder, like the one you get when you use Dropbox. “Nice”, I thought.
I copied over two folders from my Dropbox and … nothing happened. Not what I would’ve expected.
The menu bar icon somehow tries to indicate to me that it is most probably still syncing – at least that’s what I’m guessing.

So I browse to the Web File Manager to check out the remote QSync folder and there I see a single folder with no content.
Sure, a total of 5MB worth of data can take a while on a 1 Gbit/s LAN.

After waiting another minute or so, the icon turns into something that looks very similar to the Dropbox icon when everything is synchronized – with the only difference that with QNAP’s QSync, it isn’t. Nope.

When I pull up my network monitor to see if there is something going on in transfer-land, I can in fact see that QSync is transferring data at 2.5Kbit/s! WTF!?

So I googled “qnap qsync not working” (in German) and found this awesome piece of customer advice, which speaks for itself:

hi ***

yes, qnap was designed for working in the background , and which use different protocol as samaba, so it will be much slower than you transfer data to nas directly.

we had award this problem. developer are working on this, we will keep improve qsync function. sorry for your inconvenient.


Needless to say, I immediately deleted the “QSync (Beta)” client from my system.


Moving my linux server to the cloud

After more than 10 years, and after a thorough re-calculation of my family’s TCO on internet and server infrastructure, we came to the conclusion that it would be more economical to move the physical server, the associated leased line and the IP subnet to a more modern cloud-based infrastructure. (OK, I admit… It was actually way more spontaneous than it sounds… But I always wanted to say “TCO” in a blog post. 😉 )

Anyway – I had to

  1. find a cloud provider I trust
  2. relocate the server, services and data with as little downtime as possible

Luckily, my internet provider also offers virtual machines on a cloud-based infrastructure hosted entirely in Switzerland – hello, NSA – for a reasonable price. After going through all the contractual negotiations (i.e. clicking ‘Agree’ on a web page), I got my cloud console login – like the one you get from Amazon.

I set up an openSUSE 12.2 server from the provided template, ran a distribution upgrade to 12.3, and my primary infrastructure was basically up and running in less than an hour.

A couple of days before the actual migration, I set the default TTL and refresh values for all my DNS zones to a very low value. This was to compensate for the IP address change my hosted domains were about to undergo. I made the change and pushed the updated zones out into the wild.
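Lowering the TTLs ahead of the move might look like this at the top of a BIND zone file (names, serial and exact values are placeholders):

```
$TTL 300        ; was e.g. 86400 – cached answers now expire in 5 minutes
@  IN  SOA  ns1.example.com. hostmaster.example.com. (
        2013051001  ; serial – bumped on every change
        3600        ; refresh – lowered so secondaries re-check hourly
        600         ; retry
        604800      ; expire
        300 )       ; negative-caching TTL
```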

The first service I migrated was the DNS server. I rsynced the relevant directories, then went to the registrar of my domains and updated my name server IP address – with the A records in the zone files still pointing to the old server, obviously. Shortly after the IP change, DNS requests began to appear in my new server’s log files.

The next step was to replicate all the customer data (home directories, mail boxes and mysql data files) to the new server. I also chose to do this using rsync over ssh. After I set up the necessary sync jobs, I let it run over night to copy the initial set of data.

The next day I replicated the configuration settings of Apache, Postfix, dovecot and mysql to the new server the same way using rsync.
After some initial testing and tweaking on the new server, I ran the rsync jobs again and was finally able to make the switch on my DNS server by altering only 3 lines (the A record for the MX, the A record for www and the serial number). I shut down the services on the old server, bounced the DNS on the new one, and less than a minute later the HTTP requests and mail messages began to hit the new server.

In the end, I was a bit surprised by how smoothly the migration went.
I realized that with only a little planning and the right set of basic tools, you can get things done quickly – unlike some recent experiences I’ve had at other places…


Upgrade your SuSE server

I run a SuSE 11 internet server providing some basic services.
I recently had to upgrade to a new version of SuSE (11.3), and it took me quite some time to do so.
Therefore I am listing here the necessary steps, hoping that the next time I will spend less time on such an upgrade…

Services installed

  • dovecot IMAP/IMAPS Mail Server
  • dovecot POP3/POP3S Mail Server
  • postfix SMTP TLS MTA
  • Apache HTTP/HTTPs Webserver
  • Subversion Repository
  • WebDAV online Disk

Preparations and basic setup

  1. Take the server off-line and make sure no mail arrives. The emails will be queued on the alternate MX and delivered later.
  2. Do the backup (rsync including all deletions)
  3. Dump the list of installed packages to an XML file using yast2 Software Management
  4. Install base software from boot ISO over the network
  5. Setup Networking
  6. Restore /etc/passwd
  7. Restore /etc/shadow
  8. Restore /home in the background

Setup Mail Server

  1. Setup postfix (check possibility to restore /etc/sysconfig/postfix from backup), but do not start
  2. Setup dovecot (restore /etc/dovecot/dovecot.conf), but do not start
  3. Check certificate values in /etc/sysconfig/postfix (or restore from backup)
  4. Create postfix certificates and SSL CA using mkpostfixcert
  5. Edit the config file /usr/share/doc/packages/dovecot/dovecot-openssl.cnf (attention, will be overwritten when upgrading dovecot)
  6. Run /usr/share/doc/packages/dovecot/
  7. Download and install roundcube mail in /srv/www/htdocs

Setup Apache Webserver

  1. Setup apache2 (check possibility to restore /etc/sysconfig/apache2)
  2. Restore /etc/apache2/vhost.d from backup
  3. Restore /etc/apache2/conf.d/subversion.conf from backup
  4. Generate the following certificates for apache using yast2 (mail, svn, disk)
  5. Export the PEM encoded certificates to /etc/apache/ssl.crt|key/

Setup BIND Name Server

  1. Restore /etc/named.conf from backup
  2. Restore /etc/named.d from backup
  3. Restore /var/lib/named/master from backup

Setup MySQL

  1. Restore /etc/my.cnf from backup
  2. Restore /var/lib/mysql from backup

Setup Subversion

  1. If subversion is the same version (or compatible) just restore /srv/svn from backup
  2. If subversion is not compatible anymore use svnadmin load to load the dump from the backup

There’s nothing to do for the WebDAV disk 🙂

After all the configuration files etc. have been restored and the settings in /etc/sysconfig have been checked, run SuSEconfig for the last time and test the mail server.
Unplug from the internet and start postfix and dovecot.
Check if a locally created mail is correctly handled by postfix and amavis, and successfully delivered by dovecot.
Also check if the IMAP mbox is created in /var/spool/mail.

If this test succeeds we can restore /var/spool/mail from backup and connect to the internet again.

Now use yast2 to edit the runlevel configuration and make sure all the services are started at boot-time.
Also start them now.

Stored e-mails should now be delivered and correctly handled by the mail server.

Test all the virtual apache servers, webmail, subversion and WebDAV.


maven and canoo webtest pitfall

Today, I spent about two hours finding out why our Canoo WebTests didn’t run anymore after adding the WebDAV wagon to our Maven build.

The problem: when you add the WebDAV wagon to your build as an extension, you also get the commons-httpclient dependency on the Maven runtime classpath. That wouldn’t be too bad, but the 1.0-beta-2 version of the WebDAV wagon depends on commons-httpclient 2.0.2. That’s way too old for Canoo WebTest and has also been deprecated for almost a year now.

I found out that starting with Maven 2.0.9 you no longer need an explicit reference to the WebDAV wagon as an extension.
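For reference, the extension declaration in question looks roughly like this in the POM (a sketch – the coordinates are the standard wagon-webdav ones, and the surrounding project is omitted):

```xml
<!-- pom.xml (fragment): before Maven 2.0.9 this extension was needed
     for dav: deploy URLs, and it pulled commons-httpclient 2.0.2 onto
     the runtime classpath. With Maven >= 2.0.9 it can simply be removed. -->
<build>
  <extensions>
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-webdav</artifactId>
      <version>1.0-beta-2</version>
    </extension>
  </extensions>
</build>
```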

Et voilà! Suddenly it worked. Maven 2.0.9 also seems to work with version 3.1 of commons-httpclient – the same version Canoo WebTest depends on.

So, if you get a NoSuchMethodError for HttpConnectionManager.getParams(), first check whether you’re using the WebDAV wagon extension.