Proper SSL using Let's Encrypt

Now that Let's Encrypt has entered public beta, I took the opportunity to use their fantastic service and deploy a new certificate for this site and the others hosted on this platform.

The letsencrypt client was a bit confusing at first, but once I understood what the ACME protocol actually does, I saw the beauty of the whole solution. So I did a quick review of the code on GitHub, and shortly after I requested my first certificate. As I already have an Apache server running and don't need the embedded web server the letsencrypt client provides, I only use the CSR and certificate download parts of the client.

After a while I figured out how to request proper subject alternative names as well, and was finally able to secure all the services running here with only a single certificate deployed to my Apache.

To request or renew a certificate covering several hosts/domains, you basically just run:

# The domain names were redacted; example.org/example.net are placeholders
letsencrypt-auto certonly \
  --webroot \
  --renew-by-default \
  -w /home/to/htdocs/ \
  -d example.org \
  -d www.example.org \
  -d mail.example.org \
  -w /home/to/htdocs/anothersite/ \
  -d example.net

I also took the chance to improve the SSL configuration of my Apache web server, following the recommendations from SSL Labs.

After upgrading to Apache 2.4.18, generating a proper DH parameter file, and appending it to the Let's Encrypt certificate chain, SSL Labs rates the site A+.
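The DH parameter step boils down to two commands, roughly like this (file names are placeholders; in practice you would append to the fullchain.pem in your live/ directory):

```shell
# Generate a 2048-bit Diffie-Hellman parameter file (this can take a while)
openssl dhparam -out dhparams.pem 2048
# Append the parameters to the chain file Apache serves, so the server
# uses these custom DH parameters instead of well-known defaults
cat dhparams.pem >> fullchain.pem
```

After a reload, Apache picks the custom DH parameters up from the certificate file configured via SSLCertificateFile.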


Apache 2.4 SSL configuration (ssl-global.conf) 

# Added dhparams to fullchain.pem
# If you have different certificates per vhost, 
# add these to your vhost config instead
SSLCertificateFile /etc/letsencrypt/live/xxxx/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/xxxx/privkey.pem

SSLHonorCipherOrder     on

SSLProtocol All -SSLv2 -SSLv3


SNI enabled virtual host configuration

<VirtualHost *:443>
  SSLEngine On

  # Strict Transport Security per virtual host
  <IfModule mod_headers.c>
     Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
  </IfModule>
</VirtualHost>

Getting ready for IPv6

Today, I decided my server needs to get ready for the future. That means I have to configure IPv6.
Luckily, my cloud provider has supported IPv6 for quite some time now, and I was able to pick an address from its address space.

Configuring openSUSE was a bit tricky, and my first two attempts failed miserably.
But once I figured out what the auto-config script of my provider's Virtual Server Network Build got wrong, it was quite easy to configure the static IP and add the right default routes.
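On openSUSE, a static IPv6 address plus default route roughly comes down to two sysconfig files. The interface name, address and gateway below are placeholders, not my actual configuration:

```
# /etc/sysconfig/network/ifcfg-eth0  (address is a placeholder)
BOOTPROTO='static'
STARTMODE='auto'
IPADDR='2001:db8:40:315::1:ff/64'

# /etc/sysconfig/network/ifroute-eth0  (gateway is a placeholder)
default 2001:db8:40:315::1 - eth0
```

A `systemctl restart network` (or `ifdown eth0; ifup eth0`) then brings the address and route up.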


PING 32 data bytes
40 bytes from 2001:8e0:40:315::1:ff: icmp_seq=0 ttl=56 time=15.1 ms
40 bytes from 2001:8e0:40:315::1:ff: icmp_seq=1 ttl=56 time=15.0 ms
40 bytes from 2001:8e0:40:315::1:ff: icmp_seq=2 ttl=56 time=15.0 ms
40 bytes from 2001:8e0:40:315::1:ff: icmp_seq=3 ttl=56 time=15.0 ms

--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3013ms
rtt min/avg/max/mdev = 15.039/15.083/15.155/0.157 ms, pipe 2

Checked port 80 on Host/IP

The checked port (80, service http) is online/reachable!

Completed portscan in 0.1194 seconds

New Photography Page

I decided to create a dedicated site for my photographs.

Please visit my personal page to find the first galleries.

I hope to be able to add more soon…


How bad can software be

Today, I wanted to try the QSync functionality of my QNAP NAS to be able to move away from Dropbox. Oh boy…

Although QSync is labeled “beta”, I thought this was more of a marketing maneuver than a strong indication of an “absolutely useless piece of software”.

So I downloaded the .dmg image of the QSync Mac client and installed it. Well, kinda installed. As soon as I tried to configure the email server, the entire thing died. OK, another try without the email server configured, and I made it to the end of the setup procedure.

So there it is – the new QSync folder, like the one you get with Dropbox. “Nice”, I thought.
I copied over two folders from my Dropbox and … nothing happened. Not what I would have expected.
The menu bar icon somehow tries to indicate that it is most probably still syncing – at least that’s my guess.

So I browse to the Web File Manager to check the remote QSync folder, and there I see a single folder with no content.
Sure, a total of 5 MB of data can take a while on a 1 Gbit/s LAN.

After waiting another minute or so, the icon turns into something that looks very similar to the Dropbox icon when everything is synchronized. The only difference: with QNAP QSync, that is not the case. Nope.

When I pull up my network monitor to see if anything is going on in transfer-land, I can in fact see that QSync is transferring data – at 2.5 Kbit/s! WTF!?

So I googled “qnap qsync not working” (in German) and found this awesome piece of customer advice, which speaks for itself:

hi ***

yes, qnap was designed for working in the background , and which use different protocol as samaba, so it will be much slower than you transfer data to nas directly.

we had award this problem. developer are working on this, we will keep improve qsync function. sorry for your inconvenient.


Needless to say, I immediately deleted the “QSync (Beta)” client from my system.


Moving my linux server to the cloud

After more than 10 years, and after a thorough re-calculation of my family’s TCO on internet and server infrastructure, we came to the conclusion that it would be more economical to move the physical server and the associated leased line and IP subnet to a more modern cloud-based infrastructure. (OK, I admit… It was actually way more spontaneous than it sounds… But I always wanted to say “TCO” in a blog post. 😉 )

Anyway – I had to

  1. find a cloud provider I trust
  2. relocate the server, services and data with as little downtime as possible

Luckily, my internet provider also offers virtual machines on a cloud-based infrastructure hosted entirely in Switzerland – hello, NSA – for a reasonable price. After going through all the contractual negotiations (i.e. clicking ‘Agree’ on a web page), I got my cloud console login – like the one you get from Amazon.

I set up an openSUSE 12.2 server from the provided template, ran a distro upgrade to 12.3, and my primary infrastructure was basically up and running in less than an hour.

A couple of days before the actual migration, I set the default TTL and refresh values of all my DNS zones to a very low value, to compensate for the IP address change my hosted domains were about to undergo. I made the change and pushed the updated zones out into the wild.
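In a BIND-style zone file, that preparation is just a lower $TTL and refresh value plus a serial bump. The names and numbers below are illustrative, not my actual zone:

```
$TTL 300                 ; lowered from e.g. 86400 ahead of the migration
@  IN SOA ns1.example.org. hostmaster.example.org. (
        2013052001       ; serial - bumped on every change
        300              ; refresh - lowered so secondaries re-check quickly
        600              ; retry
        604800           ; expire
        300 )            ; negative-caching TTL
```

Once the migration is done, the values can be raised back to their normal levels.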

The first service I migrated was the DNS server. I rsynced the relevant directories, then went to my domains’ registrar and updated the name server IP address. The A records in the zone files were still pointing to the old server, obviously. Shortly after the IP change, DNS requests began to appear in the new server’s log files.

The next step was to replicate all the customer data (home directories, mailboxes and MySQL data files) to the new server. I chose to do this with rsync over ssh as well. After setting up the necessary sync jobs, I let them run overnight to copy the initial set of data.
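The sync jobs were roughly of this shape – a sketch, with `newserver` and the paths standing in for the real host and directories: `-a` preserves ownership, permissions and timestamps, `-z` compresses over the wire, and `--delete` makes repeated runs produce an exact mirror.

```shell
# Hypothetical sync jobs; 'newserver' stands in for the real target host
rsync -az --delete -e ssh /home/            root@newserver:/home/
rsync -az --delete -e ssh /var/spool/mail/  root@newserver:/var/spool/mail/
rsync -az --delete -e ssh /var/lib/mysql/   root@newserver:/var/lib/mysql/
```

Because only deltas are transferred on re-runs, the final sync right before the cutover is fast.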

The next day, I replicated the configuration of Apache, Postfix, Dovecot and MySQL to the new server the same way, using rsync.
After some initial testing and tweaking on the new server, I ran the rsync jobs again, and was finally able to make the switch on my DNS server by altering only three lines (the A record for the MX, the A record for www, and the serial number). I shut down the services on the old server, bounced the DNS on the new one, and less than a minute later HTTP requests and mail messages began to hit the new server.

In the end, I was a bit surprised at how smoothly the migration went.
It made me realize that with just a little planning and the right set of basic tools, you can get things done quickly – unlike some recent experiences I’ve had at other places…