Streaming and recording IP TV – follow up

I noticed that the recording jobs sometimes got interrupted, since the streams were not completely reliable. If ffmpeg stops receiving data it gives up after a while, and I do not see any parameter to configure that timeout. (Please let me know if you know of one…)

I came up with a fairly crude and simple but working (let's call it pragmatic) solution, which is generic enough to share in a post like this.

The script that registers the recording as an at job does:

...
at -m ${FORMATTED_STARTTIME} <<!
bash /home/xxx/scripts/recordIptvRobust.sh ${SEC_LENGTH} ${CHANNEL_NO} ${START_TIME}
!
...

and the recordIptvRobust.sh script (bash 5.0 and later, since it uses the convenient EPOCHSECONDS variable) does:

...
START_EPOCH=$EPOCHSECONDS
STOP_EPOCH=$(( START_EPOCH + SEC_LENGTH ))
while (( EPOCHSECONDS < STOP_EPOCH )); do
/usr/bin/ffmpeg -i "http://stream-url-here/${CHANNEL_NO}" -y -err_detect ignore_err -c:v copy -c:a copy -t $SEC_LENGTH /home/xxx/public_html/record/record_${STARTTIME//:}${CHANNEL_NO}${EPOCHSECONDS}.mkv
# Give the stream some time to restore order
sleep 10
done
# Merge segments with mkvmerge
/usr/bin/mkvmerge -o /home/xxx/public_html/record/record_${STARTTIME//:}${CHANNEL_NO}.mkv $(ls -1 /home/xxx/public_html/record/record_${STARTTIME//:}${CHANNEL_NO}*.mkv | paste -sd" " | sed 's/ / + /g')
...


Streaming and recording IP TV

Sport is better on a big screen, and in my case that means a projector screen in the winter garden. I have a Chromecast and a laptop connected to the projector; previously I used the laptop with Kodi for movies and various web-based streaming services. The Chromecast was basically only used by guests who wanted to stream something from their mobile phones via the guest wifi.

If you have an IPTV provider that offers an unencrypted, reachable stream, and there are many such providers, nothing stops you from building a solution focused on user friendliness, with a fairly low investment of time and hardware, that beats the various out-of-the-box alternatives. I will point out the key parts of my solution below so you can stitch together something similar quickly if you find yourself with the same needs at some point.

I use and like Home Assistant, and have created a view there from which the sport channels, as well as the last recorded event, can be watched. Nothing fancy, but every action is available with a single finger tap, which was my first priority: I did not want to fiddle with a computer or start streaming from the phone when it is time to watch something. There are obviously some steps involved, so let's take a look at what we need.

To start with we need a Home Assistant installation; I run mine on an RPi 4, which is sufficient for my needs (and it is doing a fair bit nowadays). Going into the details of HA is beyond the scope of this post. (In theory we could perhaps let the same RPi 4 act as streaming proxy and recorder as well, if we avoid transcoding, but that is less robust, and I have other servers around at home, so there is no need to stretch the boundaries of the RPi 4's capacity.) Another server, the stream server, runs the scripts that expose the stream to the Chromecast, does the stream recording, and hosts the small web site used to schedule recordings.

In Home Assistant I have a script that invokes a shell_command, which in turn executes a script on the stream server via non-interactive ssh. The script on the stream server uses VLC to point the Chromecast to the stream corresponding to the channel:

/usr/bin/cvlc "http://stream-url-here" --sout="#chromecast{ip=X.X.X.X}" --demux-filter=demux_chromecast --sout="#transcode{venc=x264{preset=ultrafast},vcodec=h264,threads=1}:chromecast{ip=X.X.X.X,conversion-quality=0}" --loop
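For reference, the Home Assistant side boils down to a single non-interactive ssh call; a minimal sketch of what the shell_command effectively runs (host name, key path, script name and channel number below are made up for illustration):

# Hypothetical example of the ssh call behind the Home Assistant shell_command
ssh -i /config/.ssh/streamserver_key -o StrictHostKeyChecking=no user@streamserver \
  '/home/xxx/scripts/startStream.sh 5'   # 5 = channel number in this made-up example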

Why transcode with VLC, we might ask ourselves; would it not work to just hand the stream URL to the Chromecast with something like go-chromecast or pyChromecast? I did tests with all three, and VLC is clearly more fault tolerant thanks to the transcoding it does on the fly, while the other two rely on the stream being compatible and on the Chromecast handling it. VLC handles a much broader range of formats, so your chances are much better that way. The stream from my provider did not work when the Chromecast tried to interpret it "natively". Your success may vary… The --loop part is essential, since it makes VLC retry in case the connection is interrupted.

That was the "live streaming part"; now over to the recording. I use the same mechanism (with VLC) when playing already recorded streams, and go-chromecast for controlling playback (play/pause/rewind/forward/seek). As an example, let's say I want to record an NHL game at 2 am in my time zone: we need a way to specify the channel, time and duration. I made a rudimentary web UI for this purpose. The only interesting feature there is probably the possibility to look up matching events based on a team. Since I source all sports events from a sports TV site anyway and have the future events represented as ical files on a reachable filesystem, I can simply rgrep among the ical files, find info about an upcoming event matching for example "Colorado", and prepopulate the channel and time fields. The biggest hurdle was to transform datetime stamps between the formats used in ical and the HTML datetime-local component and something that at understands. Fortunately this is all pretty simple with date, which understands the datetime from the HTML component and can give us the format that at prefers: date -d "${STARTTIME}" +"%H:%M %Y-%m-%d"
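A rough sketch of those two steps (paths, the team name and the timestamp are of course placeholders):

# Find ical files mentioning an upcoming event for a given team
grep -ril "Colorado" /home/xxx/icalfiles/
# Convert the datetime-local value from the web form into the format at prefers
STARTTIME="2021-11-26T02:00"                                      # value posted by the form
FORMATTED_STARTTIME=$(date -d "${STARTTIME}" +"%H:%M %Y-%m-%d")   # e.g. "02:00 2021-11-26"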

When the form is submitted, a recording job is placed on the stream server with at, a convenient tool for scheduling ad-hoc jobs that should run at a specific time. The job simply lets ffmpeg record the stream URL matching the channel and save it to disk, followed by a command that lets ffmpeg switch container format (from mkv to mp4; the Chromecast coped better with mp4) without re-encoding the video and audio streams (still h264). The remux is mainly there to get rid of irregularities introduced while saving the stream, which would otherwise most likely prevent proper navigation in the file (relative and absolute positioning). Such a "container switch" is fast since the streams are left as is. The last thing the scheduled job does is update the "last recording" link to point to the new file.

ffmpeg -i "stream-url-here" -y -err_detect ignore_err -c:v copy -c:a copy -t $SEC_LENGTH /home/username/public_html/record/record_${STARTTIME//:}${CHANNELNO}.mkv
ffmpeg -i /home/username/public_html/record/record${STARTTIME//:}${CHANNELNO}.mkv -c:a copy -c:v copy /home/username/public_html/record/record${STARTTIME//:}${CHANNELNO}.mp4
ln -sf /home/username/public_html/record/record${STARTTIME//:}${CHANNELNO}.mp4 /home/username/public_html/record/latest.mp4
rm /home/username/public_html/record/record${STARTTIME//:}_${CHANNELNO}.mkv

I should point out one thing which caused me a bit of a headache. The scheduling of the at job needs to be done differently when scripting compared to in an interactive shell. I first tried to pipe the job to at, but that did not work because the standard input in the script is different. To get around that, simply use a "here document" (from the Advanced Bash-Scripting Guide: "A here document is a special-purpose code block. It uses a form of I/O redirection to feed a command list to an interactive program or a command, such as ftp, cat, or the ex text editor.")

command <<EOF-MARKER
input
more input
EOF-MARKER
nextCommand

For me that meant:

at -m ${FORMATTED_STARTTIME} <<!
the commands to run in at job
multiple commands can be used, one per line
!

A typical scenario when it comes to actually watching recordings is to simply play the latest one, for example an NHL game that took place during the night. One tap in Home Assistant is enough, since a symbolic link points to the latest recording, and we can use the same setup as outlined above but point the Chromecast to the URL of the recording on the stream server (the recording is saved to a directory in the user's public_html, exposed via Apache2 with a2enmod userdir). I did some experiments with dbus-send to VLC, and while that works, using go-chromecast for the navigation turned out to be more convenient. With that utility you can simply do things like "/usr/bin/go-chromecast seek 60 -a X.X.X.X" to fast forward a minute (a commercial break…) or use seek-to to go to an absolute position.
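A few more go-chromecast navigation examples in the same spirit (seek values are in seconds, and the address is that of the Chromecast):

/usr/bin/go-chromecast pause -a X.X.X.X           # pause playback
/usr/bin/go-chromecast unpause -a X.X.X.X         # resume playback
/usr/bin/go-chromecast seek 300 -a X.X.X.X        # jump five minutes forward
/usr/bin/go-chromecast seek-to 3600 -a X.X.X.X    # jump to one hour into the recording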

You might have noticed that I have been referring to the Chromecast by IP address. The tools usually make it possible to address it by its friendly name, but I figure the name is more likely to change (for example when resetting the device) than the IP address, since the router hands out a static IP based on the MAC address (that only needs a config change when the router is replaced, which happens less frequently).


Have you also forgotten (or never knew) the IP address of a Linksys SRW2024 switch?!

I bought a 24-port managed switch; it is a cool thing to have in your private network, and they are basically given away for free nowadays as serious people move from 1 gigabit to 10 gigabit. I don't foresee my network going 10G in the near future, so an enterprise-grade 1G switch almost for free seems like a good deal.

I bought one on Swiss Tutti ("Blocket" for Swedish people, "Avito" for Russians) for 20 bucks and it arrived a few days later (I am still waiting to get scammed on Tutti or Ricardo, even though there are scammers on these platforms too). Since I never got to know the correct IP address of the switch (the seller told me the wrong one, but I do not count that as being deceived; he tried to help), I could not connect to the web management interface and used the managed switch as a dumb switch for a while. I even bought a serial-to-USB cable before simply doing the reasonable thing…

The reasonable thing, at least as far as I know, is to connect an ethernet cable between the switch and a computer with a NIC, start Wireshark or whatever similar tool you have at hand (tcpdump for the CLI aficionados), and take a look. When the switch restarts, one of the first packets clearly discloses the IP address of the device (I forgot to take a screenshot, but it is obvious when you inspect the capture). Then you can manually give your computer an IP address in the same subnet, get to the web UI and change the IP address of the switch. If you actually have one of those ancient Linksys devices like myself, you had better take advantage of the compatibility mode in IE, since they apparently put some trainee on coding the web UI and it does not load in a modern browser…
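In concrete terms it can be as simple as this (interface name and addresses are placeholders; pick a free address in whatever subnet the switch announces):

sudo tcpdump -i eth0 -n                       # power-cycle the switch and watch the first packets for its IP
sudo ip addr add 192.168.1.250/24 dev eth0    # temporarily join the subnet you just spotted
# then browse to the switch's address and change it in the web UI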

Voilà, hopefully this saves some people from waiting for serial-to-USB cables from Guangdong via AliExpress et al.


Kubernetes powered backup solution over VPN

After ironing out the last bugs in my home-grown containerized, distributed, remote backup solution, I can't say I would recommend it for the average user, but if you are comfortable with some hacking, need a backup solution and have a bunch of computers idling at your disposal, this might be for you…

Let's start with the actual problem, like reasonable people would. For obvious reasons I want to back up files that can't easily be recreated. For the same obvious reasons I want them stored offsite; the backups should not be burned or stolen together with the original data.

I have done a few iterations of such backup solutions, all utilizing bash and tar in slightly different ways. When hard drives and internet connections got reasonably affordable and quick, I added offsite backup over something SSH-tunneled (rsync/scp). The location I am currently backing up to has its ftps server available only via VPN, so that is one of the constraints in the solution below (otherwise I would say ftps/sftp/scp would have been sufficient for my use case).

The solution I have used lately, running a simple bash backup script (full monthly backups and daily increments, which are tarred and then encrypted with 7zip) in a network namespace with an OpenVPN tunnel to the offsite location for ftps transfer inside the tunnel, has been working more or less fine but had one drawback, the lack of parallelization, and running it on multiple bare-metal servers is tedious to set up and maintain.

The 7zip encryption is quite demanding and it would be great to scale out in order to take advantage of the available computing capacity in the LAN. Kubernetes to the rescue…

I have a Kubernetes cluster running on two fairly powerful (as of 2021…) Ryzen servers (12 cores/32 GB + 8 cores/32 GB, with one of them running the master node) plus six Raspberry Pi 4B 4 GB in my LAN. (The Ryzen servers run their other processes niced, so those should give up capacity when needed, but at the moment I have actually configured the backup jobs to run only on the nodes tagged with the "rpi" label, so as not to bother the Ryzen servers.)

With version 1.21, the CronJob workload resource graduated to stable in Kubernetes; it basically does what you can imagine (if you have some basic *nix experience). That is quite handy for a backup task, since we want the container to run on a schedule and clean up after itself when finished.

Since I want to transfer my encrypted archive to an offsite location through OpenVPN (without the host's networking being affected by this VPN connection), I have one container establishing the VPN connection and one container doing the actual backup task. Since the CronJob's pod lets its containers share networking, the backup container can reach the offsite location through the tunnel.

What about the initial problem, the lack of parallelization? I did not implement some sophisticated queue solution where some workers create the archives and put them on a queue, while other workers listen to the queue and encrypt, and yet other workers do the actual transfer. The problem itself is quite simple, and I want the solution to be simple enough to actually be maintained and kept running every day for years to come.

My simple solution: one cron job for each bigger chunk (the source control repo, photos, family members' non-cloud documents, MySQL databases, etc.), scheduled and run independently on the cluster in parallel. I start them during the night (when the internet connections on both ends are not used much anyway), and since the archives differ in size, the compression and encryption tasks don't finish at the same time, which spreads out the OpenVPN/network usage. The transfer tasks share the same limited capacity (about 30 Mbit/s to the offsite location in the OpenVPN tunnel), but the OpenVPN server is configured to allow multiple concurrent connections from the same user, so that is not an issue.

After this introduction, let's go through the implementation, including the configuration and scripts, so this blog post can actually be useful for someone who wants to build something similar.

To start with, I run Ubuntu 20.04 LTS (EOL April 2030, so still many years left…) on both the Ryzen servers and the RPis. The RPis boot and run from reasonably fast USB 3 flash drives and are mounted in one of those RPi cluster cases with fans that you can buy cheaply from Amazon or AliExpress. A 7” monitor, a power supply for all nodes and a gigabit switch are attached to the case to form one "cluster unit" with only power and one network cable as the "physical interface". (When running Ubuntu 20.04 on an RPi 4, do consider the advice at https://jamesachambers.com/raspberry-pi-4-ubuntu-20-04-usb-mass-storage-boot-guide/.)

I am running Ubuntu's Kubernetes distribution, MicroK8s 1.22.4. There are a lot of fancy add-ons, but in my experience it is easy to get it into a state where you have to start over after enabling various add-ons. After a few attempts I now keep it as slimmed down as possible, no dashboard for example, and only have the "ha-cluster" add-on enabled.

Setting it up is basically as easy as running "microk8s add-node" on the master node and running the corresponding "microk8s join" command on the joining nodes. After that procedure you can admire your long node list with, for example, "kubectl get no -o wide --show-labels".
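In rough outline (node names, addresses and the token are placeholders):

microk8s add-node                                          # run on the master; prints a join command with a token
microk8s join 192.168.1.10:25000/<token-from-add-node>     # run on each joining node
microk8s kubectl label node rpi-node-1 rpi=true            # label the RPis so jobs can be pinned to them (see nodeSelector below)
microk8s kubectl get no -o wide --show-labels              # admire the result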

Now, over to the "meat" of the solution: the YAML files… I would recommend storing your declarations of the desired state in your source repo, so that you can restore the solution on a new cluster with one simple command if needed.
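Something along these lines, assuming the manifests live in one directory and the secrets described further down have already been created:

kubectl apply -f /path/to/your/backup-manifests/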

My structure looks like this (I omit the multiple batchjobs and only show two in the file listing below):

-rw-rw-r-- 1 jonas jonas 2745 nov 25 11:02 backup-config.yaml
-rw-rw-r-- 1 jonas jonas 3719 nov 29 14:33 batchjob-backup-dokument.yaml
-rw-rw-r-- 1 jonas jonas 3719 nov 29 14:33 batchjob-backup-mysql.yaml
-rw-rw-r-- 1 jonas jonas 5408 nov 24 15:06 client.ovpn
-rw-rw-r-- 1 jonas jonas  242 aug 23 01:30 route-config.yaml

The "backup-config.yaml" (where I have tried to indicate the places you need to update) contains the script that does the full or incremental backup (depending on the date), including encryption and transfer:

kind: ConfigMap
metadata:
  name: backup-script
apiVersion: v1
data:
  backup.sh: |-
    #!/bin/bash
    DIR_NAME=$(echo $DIRECTORY_TO_BACKUP | tr "/" "-")
    DIR_NAME_FORMATTED=${DIR_NAME::-1}
    BACKUPNAME="rpicluster${DIR_NAME_FORMATTED}"
    BACKUPDIR=/your/path/to/where/you/store/your/archives
    TIMEDIR=/your/path/to/where/you/store/your/time/stamp/files
    TAR="/bin/tar"
    ARCHIVEFILE=""
    echo "DIRECTORY_TO_BACKUP=$DIRECTORY_TO_BACKUP"
    echo "BACKUPNAME=$BACKUPNAME"
    echo "TIMEDIR=$TIMEDIR"
    echo "ARCHIVEFILE=$ARCHIVEFILE"
    export LANG="en_US.UTF-8"
    PATH=/usr/local/bin:/usr/bin:/bin
    DOW=$(date +%a) # Day of the week e.g. Mon
    DOM=$(date +%d) # Date of the Month e.g. 27
    DM=$(date +%d%b) # Date and Month e.g. 27Sep
    MONTH=$(date -d "$D" '+%m') # Number of month
    NOW=$(date '+%Y-%m-%d')    
    # First day in month (exception for photos in order to reduce the file sizes)
    if [[ $DOM = "01" && $DIR_NAME_FORMATTED != "Pictures" ]]; then
      ARCHIVEFILE="$BACKUPNAME-01.tar"
      echo "Full backup, no exclude list"
      NEWER=""
      echo $NOW > $TIMEDIR/$BACKUPNAME-full-date
      echo "Creating tar archive at $NOW for $DIRECTORY_TO_BACKUP"
      /usr/bin/nice $TAR $NEWER -c --exclude='/.opera' --exclude='/.google ' -f $BACKUPDIR/$ARCHIVEFILE $DIRECTORY_TO_BACKUP 
    else
      ARCHIVEFILE="$BACKUPNAME-$DOW.tar"
      echo "Make incremental backup - overwrite last weeks"
      NEWER="--newer $(date '+%Y-%m-01')"
      if [ ! -f $TIMEDIR/$BACKUPNAME-full-date ]; then
        echo "$(date '+%Y-%m-01')" > $TIMEDIR/$BACKUPNAME-full-date
      else
         NEWER="--newer cat $TIMEDIR/$BACKUPNAME-full-date"
      fi
      echo "Creating tar archive at $NOW for $DIRECTORIES later than $NEWER"
      /usr/bin/nice $TAR $NEWER -c --exclude='/.opera' --exclude='/.google ' -f $BACKUPDIR/$ARCHIVEFILE $DIRECTORY_TO_BACKUP 
    fi

echo "Encrypt with 7zip…"
/usr/bin/nice /usr/bin/7z a -t7z -m0=lzma2 -mx=0 -mfb=64 -md=32m -ms=on -mh e=on -mmt -p'put-your-secret-phrase-here' $BACKUPDIR/$ARCHIVEFILE.7z $BACKUPDIR/$ARCHIVE FILE
echo "Remove the unencrypted tar archive"
/bin/rm -f $BACKUPDIR/$ARCHIVEFILE
echo "Transfer with lftp"
FILESIZE=$(stat -c%s "$BACKUPDIR/$ARCHIVEFILE.7z")
echo "date -u: About to transfer $BACKUPDIR/$ARCHIVEFILE.7z ($FILESIZE bytes)" >> $BACKUPDIR/$ARCHIVEFILE.7z.scriptlog
lftp -c "open -e \"set ssl:verify-certificate false;set ssl:check-hostname no;set log:file/xfer $BACKUPDIR/$ARCHIVEFILE.7z.log;set net:timeout 60;set net:max-retries 10;\" -u user,password ftp://address-of-your-ftp-server-via-vpn; put -O your-remote-path-here $BACKUPDIR/$ARCHIVEFILE.7z"
echo "date -u: Finished transfer $BACKUPDIR/$ARCHIVEFILE.7z " >> $BACKUPDIR/$ARCHIVEFILE.7z.scriptlog

Alright, with that basic backup script in place, which will be reused by all cron jobs, let's take a look at one specific batch job, batchjob-backup-dokument.yaml, which backs up the documents directory (I kept my paths in order to show how the volumes are referenced):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-dokument
spec:
  schedule: "30 0 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 5
  startingDeadlineSeconds: 3600
  jobTemplate:
    spec:
      template:
        spec:
          shareProcessNamespace: true
          restartPolicy: OnFailure
          volumes:
          - name: scripts
            configMap:
              name: backup-script
          - name: backuptargetdir
            nfs:
              server: qnap
              path: /USBDisk3
          - name: jonas
            nfs:
              server: qnap
              path: /jonas
          - name: vpn-config
            secret:
              secretName: vpn-config
              items:
              - key: client.ovpn
                path: client.ovpn
          - name: vpn-auth
            secret:
              secretName: vpn-auth
              items:
              - key: auth.txt
                path: auth.txt
          - name: route-script
            configMap:
              name: route-script
              items:
              - key: route-override.sh
                path: route-override.sh
          - name: tmp
            emptyDir: {}
          initContainers:
          - name: vpn-route-init
            image: busybox:1.33
            command: ['/bin/sh', '-c', 'cp /vpn/route-override.sh /tmp/route/route-override.sh; chown root:root /tmp/route/route-override.sh; chmod o+x /tmp/route/route-override.sh;']
            volumeMounts:
            - name: tmp
              mountPath: /tmp/route
            - name: route-script
              mountPath: /vpn/route-override.sh
              subPath: route-override.sh
          containers:
          - name: vpn
            image: dperson/openvpn-client
            command: ["/bin/sh","-c"]
            args: ["openvpn --config 'vpn/client.ovpn' --auth-user-pass 'vpn/auth.txt' --script-security 3 --route-up /tmp/route/route-override.sh;"]
            stdin: true
            tty: true
            securityContext:
              privileged: true
              capabilities:
                add:
                - NET_ADMIN
            env:
            - name: TZ
              value: "Switzerland"
            volumeMounts:
            - name: vpn-config
              mountPath: /vpn/client.ovpn
              subPath: client.ovpn
            - name: vpn-auth
              mountPath: /vpn/auth.txt
              subPath: auth.txt
            - name: tmp
              mountPath: /tmp/route
          - name: backup-dokument
            image: debian:stable-slim
            securityContext:
              privileged: true
            env:
            - name: SCRIPT
              value: backup.sh
            - name: DIRECTORY_TO_BACKUP
              value: /home/jonas/dokument/
            volumeMounts:
            - mountPath: /opt/scripts/
              name: scripts
            - mountPath: /home/jonas
              name: jonas
            - mountPath: /media/backup
              name: backuptargetdir
            command:
            - /bin/bash
            - -c
            - |
              apt-get update; apt-get install -y lftp p7zip-full procps
              bash /opt/scripts/$SCRIPT
              pkill -f -SIGINT openvpn
              true
            stdin: true
            tty: true
          dnsConfig:
            nameservers:
            - 8.8.8.8
            - 8.8.4.4
          nodeSelector:
            rpi: "true"

As you might have seen in the cron job above, the VPN tunnel is created by a sidecar container ("vpn") which gets killed after the backup script is done. The "pkill" step is essential for Kubernetes to know that the job has finished (and it works across containers because shareProcessNamespace is set). Otherwise the pod would be left unfinished and the next night's job would not start (and using SIGINT instead of KILL matters, since the container would otherwise be restarted). Let's now take a look at the last piece, the VPN tunnel. (The lack of proper container communication primitives is hopefully something that gets addressed in an upcoming, not too distant, release; at least there have been ongoing discussions on that topic for a few years.)

The vpn container simply refers to the ovpn config (if it works for you standalone, it will work in this container) and the VPN credentials. Both are stored as secrets, so put your ovpn client config in a file called client.ovpn and create the secret:

kubectl create secret generic vpn-config --from-file=client.ovpn

Same thing with the credentials (I assume here that you use username and password): create auth.txt with the username and password on separate lines and create the secret:

kubectl create secret generic vpn-auth --from-file=auth.txt
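The last referenced piece, route-override.sh, lives in the route-config.yaml ConfigMap and is not reproduced here. As a rough sketch of the idea, assuming you only want the backup destination to go through the tunnel (the subnet and device name below are placeholders, not the actual config):

#!/bin/sh
# Drop the catch-all routes that OpenVPN's redirect-gateway pushes...
ip route del 0.0.0.0/1 dev tun0
ip route del 128.0.0.0/1 dev tun0
# ...and route only the ftps server's subnet through the tunnel
ip route add 192.0.2.0/24 dev tun0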

That should be it. To test the job without waiting for 00:30 in the case above, kick it off as an ad-hoc job:

kubectl create job --from=cronjob/backup-dokument name-of-manual-dokument-job

Check which pod got created:

kubectl get po -o wide|grep name-of-manual-dokument-job

This pod was called name-of-manual-dokument-job--1-zpn56 and the container name was backup-dokument, so the live log could be checked with:

kubectl logs name-of-manual-dokument-job--1-zpn56 backup-dokument --follow

Alright, that wraps it up. Hope it was useful for something. If not for backups, maybe for other use cases where you need to run something in an openvpn tunnel.


Solar power installation on the winter garden roof

I have had quite a bill from the local energy supplier for many years, admittedly due to some extensive usage of electrical equipment such as computers in various forms. The AC is also quite power hungry, but the days when its service is needed are limited to around 25 per typical year (I did not count, but that could serve as a ballpark figure for Canton Zurich…).

Pay and forget would kind of solve the problem, I guess, but with the increased efficiency of solar panels, producing your own power is an option for those of us who have a roof, balcony or similar to host some panels.

I did a bit of reading, and feeding back to the grid would obviously be quite interesting to reduce the electricity bill, but it has two drawbacks. More than 600 W requires some bureaucracy which I would be happy to live without, and it does not give the same sense of independence/backup as the island approach, since it requires the grid to be online (which it typically is, but nevertheless).

So I settled for the hybrid island approach, meaning the panels charge batteries via an intelligent hybrid inverter which also serves as the AC power source for the devices that should be at least partially solar powered. If the panels don't deliver enough energy to maintain a certain battery capacity, the batteries are charged from the AC grid.

Regarding my actual solution… I currently have four monocrystalline panels (Penta+ ASM6610M series) with a theoretical capacity of 305 W each, wired in series-parallel (meaning two pairs of serially connected panels, with the pairs connected in parallel). (If you slept through those lessons in school and don't work in the area: series connection increases the voltage, parallel connection the current.) The panels are located on our winter garden roof and the hybrid inverter is in a room in the cellar. Since the hybrid inverter is kind of loud, imagine a moderately noisy vacuum cleaner, it should really be in a room where you don't need to conduct a lot of serious business on a regular basis.

My hybrid inverter is a very common 24 V model from Voltronic Power (an ODM, sold under a number of brands) rated for 2.4 kW, and it offers a USB interface (I use a simple client from https://github.com/nrm21/skymax to poll the device for values). It should also be possible to set values (e.g. change from PV charging to bypass mode), but I currently only use it to read values and expose them on a website for my personal pleasure. (I did play with a container feeding an MQTT message broker to expose the current state in a dashboard, but my raw output in a table with some homegrown graphs turned out more useful for the moment. I might revisit that later.)

Battery-wise I was choosing among AGM, gel and lithium. I settled for the middle road: four 12 V 140 Ah gel batteries in a matching series-parallel configuration, which are supposed to be good for 5-7 years. Hopefully there will be cheaper lithium batteries or some other great option when it is time to replace them.

To help keep the batteries healthy I am using an equalizer, which balances the load between the batteries depending on their current state, and a battery pulser (desulfator) to extend their lifetime.

On top of this I have a DC switch (40 A) for the panels, a DC fuse (100 A) for the batteries, a 16 A AC switch and a residual current circuit breaker (30 mA) in case some connected AC device (or human) misbehaves.

In theory this system should be able to deliver up to 6 kWh per day, but what I have seen during the short time it has been connected is more like 3 kWh per day. Since a kilowatt hour costs 15.8 rappen on average (considering the high- and low-tariff hours and rates), that is roughly 3 × 365 × 0.158 ≈ 170 francs saved per year, so I would not consider my setup very cost effective. In about 26 years we should have the costs back (assuming my time is free…), but the batteries won't last that long.

Financially I think we will have to let the "backup feature" bear some of the costs; redundancy and safety come at a price. But I have to admit that the driver for this project was not money but rather the fun of producing one's own electricity…


Fixing the freezer remote

I got a Dometic CFX 35W, which boasts a convenient wifi interface. The description was not a lie: it does indeed have a wireless network connection, just not implemented the way I would have preferred. Reading about it, it is not clear exactly how that wifi connection works, and I have to admit I was a bit disappointed when I unpacked it and noticed that it exposes its own access point instead of offering to join an existing wireless network. Had it been able to connect to existing infrastructure, it could have exposed itself for remote control over the Internet.

Since the freezer cannot join an existing internet-connected network, it is obviously tricky to control it over the Internet out of the box. You are supposed to connect to the freezer's own network with your mobile phone, so you and your phone have to be physically close to the freezer, your phone loses its Internet connection (since it is now on a wifi network without Internet access), and on top of that only one client is allowed to connect at a time. I can imagine a use case where this makes sense, but in general it is quite useless compared to being able to manage the freezer from anywhere over the Internet. Imagine for example having the freezer off but wanting to cool down some drinks an hour before coming home from work.

So, what to do…? Since the hardware is there and seemingly working well, the case is obviously not lost; time to apply some basic hacking. Since I plan to typically keep the freezer in the vicinity of the house, it will be within reach of my Raspberry Pi server, which is always on anyway. The Pi uses its Ethernet interface for network connectivity, so its wireless interface was sitting idle. By connecting that interface to the freezer's network, we have the network communication established.
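One way to get the Pi's idle wireless interface onto the freezer's access point, if the Pi runs NetworkManager (the SSID and password below are made up; with plain wpa_supplicant the equivalent goes into its config file):

nmcli device wifi connect "CFX-AP" password "freezer-password" ifname wlan0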

The remaining part is to make it speak the freezer's protocol and to be able to control it over the web. Since the Android client is easily downloadable (the apk file, that is) and not obfuscated, the protocol is basically open. With some basic understanding of Java, anyone can decompile the Android app and turn it into a plain Java console app. I added a few command-line arguments for the things I wanted to control, like switching the freezer on/off and setting a certain temperature. On top of that I created a simple web page (PHP to the rescue) exposing that capability over the web, so I don't have to SSH onto the server.

Simple web UI for controlling the Dometic CFX 35W remotely.

Hong Kong Trail

I have spent the last week in Hong Kong and have had the opportunity to spend some of it on Hong Kong Trail.

In brief, it is a 50 km footpath through the five country parks on Hong Kong Island. It is divided into 8 sections and is fairly well marked and documented. The trail finishes at the beach Big Wave Bay, not too far from our hotel, so it was pretty convenient to walk over and do the trail backwards. On Tuesday I walked over there and did sections 8-5, and the following day I did the first four sections, which made me end up at the starting point of the trail, Victoria Peak.

After a more normal tourist day with Tanya and her classmates and the monkeys on Kowloon on Thursday, my legs and feet were fit for a fight. On Friday, i.e. yesterday, I decided to try to do the whole trail in one day. 50 km is obviously not a long distance for one day, but the elevation profile is almost "Swiss style". I would not say that it is demanding, but the heat and humidity add an exotic touch, and the main issue is to avoid blisters or a sprain that would prevent you from continuing without severe pain.

I had my watch tracking my steps and on the “race day” I reached 74974 steps (61.51 km) which is a new record and which can be compared to last year’s 66560 steps when I ran Adliswil-Zug back and forth.

I was on the trail from 10:40 to 19:10, so that's 8 hours and 30 minutes. The official conservative estimate says the whole trail takes 15 hours, but that is probably more for the elderly and picnic excursions.

I walked most of the distance and I only did some slow pace running on flat stretches like section 7 in order to not risk anything. I brought fresh water but refilled with “mountain water” at two places. Halfway through the distance I had a Corny Big 50 gram peanut bar (240 kcal).

For those who think it sounds easy peasy, maybe the Hong Kong 4in4 Challenge could be something? At least day 4 and the MacLehose Trail sound like a challenge.


July wrap up

I did not sum up the "get in shape" exercise when it reached its official end on the 1st of July, so I had better do it now while I still remember…

The target was to reach 70 kg by the first of July without losing too much muscle tissue, and I reached that by cutting back on the carbs at the end of June. The scales showed 68.7 kg, but in fairness part of that was due to the depletion of the glycogen depots (and, more significantly, the water they bind) in muscles and liver that happens when you cut back on carbs. Anyway, I considered the target reached and moved on to a more relaxed diet to compensate for a somewhat ascetic lifestyle.

What is more interesting in the long run is of course whether the weight jumped back severely afterwards. Naturally some weight was added when the carbs were re-introduced and now, 5 weeks later, the weight is around 72 kg. Compared to the start of the exercise in February, that means a loss of 10 kg, which should be mostly fat since I did not get weaker in the major lifts. I aim to keep running and biking during the autumn, knowing that I won't have as much time as I have had lately, and then it should be possible to stay around 70 kg going forward. That seems to be a healthy weight, and if I could trade a few kilos of fat for muscle, that would be great.

Talking of running, yesterday I did a trip to Zug which ended up being 59 km. The weather was perfect and it felt great. I had to walk a bit at the end, but in general it worked out well. The pace was nothing to brag about, 5:59 per km, but the goal was to make it home and the time was not significant. My smart watch reported a new record for the number of steps, 66560, and 68 km in total for the day, which included some walking in Uster with Andrzej in the evening. I think it will take some time until I beat that record.

Lately I have also re-discovered my cyclo-cross bicycle. I bought it five years ago but have not used it much. It is great fun though, and I have perfect tracks along the river, so I look forward to more of that in the future. I probably need a road bike as well, but I think that has to wait until I have a bigger garage…

Since I wrote last time I have been to Malta for a week, combining work, holiday and a bubb.la meetup. I enjoyed the reliable weather, 30 degrees Celsius every day, and got to swim a bit in the Mediterranean. Malta is indeed a very interesting place, and the living costs are attractive compared to Zurich. I did a lot of "sightseeing by running" and stayed in a room in Sliema which I rented via Airbnb. My host had his peculiarities, but it worked out reasonably well, especially taking into account the modest fee I paid.

Before Malta I also spent a week in Wroclaw, where I always have a great time. Obviously there was a lot of work during the days, but I also had a chance to meet the guys in a more relaxed environment in the evenings. Except for Lufthansa letting me down and stranding me in Frankfurt for a night on the way to Poland, it all worked out fine.

On a final note, I should also mention a great weekend with Rasmus and Veronica in mid-July. Among other things, we went for a very nice hike at Fürenalp in the vicinity of Engelberg.

Now I have one final work week in front of me before the trip to Siberia. (Thieves need not bother; I have a guy staying in the apartment in the meantime who can take care of them.)


June milestone

The last milestone before the final target was reached this morning. The target was 72 kg and I ended up at 70.7 kg, since I did not want to take any chances and fail by 0.1 kg or so 🙂

The idea was to get rid of fat, not muscle tissue, and that seems to have succeeded. The weight loss so far is about 11 kg, and it seems to be mostly fat if I am to believe the scales and the deadlift results.

The strategy in May was very basic, i.e. trying to get as many days as possible with a negative calorie balance by getting the cardio done in the morning (usually on a crosstrainer) and the resistance training during lunch, while eating as before. The resistance training was a basic half-split with either upper or lower body exercises. I tried to do at least one of the major basic lifts (in my case: deadlifts, squats, bench press, military press) on a daily basis.

May was a month with quite a lot of cheating, with too many tempting hotel breakfasts and restaurant visits in Barcelona and Venice. Anyway, some progress was made, and the body definitely got some days of extensive energy input as well, which potentially could have helped keep the BMR reasonable. The strategy regarding food is also simple in theory: go high on protein and low on carbs, and don't overeat too often. Research is pretty clear that it is easier that way, especially when it comes to keeping the weight under control in the long term, due to a number of factors.

June will hopefully be a month with some further moderate progress. Nothing radical, just some slow steady steps towards the previously defined target of 70 kg.


May milestone

I studied the scales with some interest on Sunday morning to see if I had met the target of the 1st-of-May milestone. To improve my chances and have some margin, I did not eat an awful lot the day before.

The goal was to weigh less than 74 kg and the scales did show 72.5 kg so I can tick that one off.

The milestone for 1st of June is still 72 kg. I am aware that the weight is currently varying between 72.5 and 74 kg so getting it stabilized below 72 is still a valid target for the next milestone and it would put me in a good position for the 1st of July final.
