How to make swimming a bit less boring

I find swimming in the nearby swimming pool a reasonable way to get some exercise, especially during the winter. It is a 5 minute walk from home, another 5 minutes to get changed and into the water, the one-year flat fee is easily affordable (and surely heavily subsidized), plus the number of visitors is available online so you can pick a good time slot (yes, the metric is in my Home Assistant dashboard), so there aren’t any valid excuses really. Except that it is just unreasonably boring. Fine, 15 minutes one can do, but pushing it to 45-60 minutes, which would make more sense if one actually wants to get some exercise done, is stretching it.

Surely we have solved this problem by 2025? Yes, yes, I am glad to tell you, the solution is solid and well tested by now. I bought myself a pair of waterproof (IPX8) Bluetooth headphones, and by leaving the mobile phone in the towel at the side of the pool I can swim back and forth listening to podcasts, or even taking calls (tried once), without the slightest hiccup. It just works, and the sound is no worse than a normal low-cost Bluetooth headset. I wonder why I have not seen anyone else doing this (or do they hide it under their swimming caps?); it is so much better than old-school swimming. Maybe they are ahead of me and enjoy being offline, actually thinking and reflecting. Let’s hope so.

Posted in sport | Tagged , | Leave a comment

Locally and externally available Home Assistant

Warning issued: another tech-related niche post…

So you have set up Home Assistant and gone with Duck DNS and Let’s Encrypt, which is the standard solution, but you are only 95 % happy since it is not great when you access your instance locally. What to do: aim for 100 % and spend the time finding a better solution, or move on with your life? Since you have read this far, you are obviously going the extra mile…

You are not alone my friend, I walk with you. (OK, I will stop the cheesy writing style now and switch to dry technical writing…) So, we do not want to rely on the Duck DNS entry; instead we take control of things and use our own hosted domain name (ha.webbservern.se in my case) for the Home Assistant instance. At least for me, I sometimes noticed weird connection issues, finally got tired of it and decided to set up my own DNS entry (also using Let’s Encrypt by the way, Let’s Encrypt is fantastic) and this Apache configuration (without fiddling with the currently almost-working Duck DNS/Let’s Encrypt setup), which I think is the actual meat of this blog post:

<VirtualHost *:443>
    ServerName ha.webbservern.se
    ServerAlias ha.webbservern.se
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/yourdomain/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf

    SSLProxyEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off
    ProxyPreserveHost On

    ProxyPass /api/websocket wss://homeassistant:8123/api/websocket
    ProxyPassReverse /api/websocket wss://homeassistant:8123/api/websocket
    ProxyPass / https://homeassistant:8123/
    ProxyPassReverse / https://homeassistant:8123/

    RewriteEngine on
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule /(.*) wss://homeassistant:8123/$1 [P,L]
    RewriteCond %{HTTP:Upgrade} !=websocket [NC]
    RewriteRule /(.*) https://homeassistant:8123/$1 [P,L]

    CustomLog ${APACHE_LOG_DIR}/vhosts/ha.log combined
</VirtualHost>

I won’t claim that each and every line is correct or needed, but it works great, also with the Android Companion app. It is the result of a fair bit of trial and error, but I think it could help others (if the search engines index this), since I have seen people in various forums with similar issues when trying to set up a virtual host proxying an HTTPS-exposed backend whose certificate is only valid for the default Duck DNS/Let’s Encrypt setup. Nota bene, you might need to enable the Apache module proxy_wstunnel: sudo a2enmod proxy_wstunnel
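For completeness, this is roughly the set of modules the configuration above relies on; a sketch, since half of them are probably already enabled on a host that serves HTTPS (adjust to your setup):

sudo a2enmod ssl proxy proxy_http proxy_wstunnel rewrite
sudo systemctl reload apache2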

Some people might be scared by the SSLProxy relaxations above, but in this kind of setup it is not a concern for me: we are talking about connections within the LAN, and the attack surface is not significantly increased…

Posted in datorer, webbservern | Tagged | Leave a comment

New media center, RPI5!

Due to ongoing re-configurations at home there was a need/opportunity to upgrade the media center serving the living room. The Intel NUC which has served as the media center for many years has shifted base to the cellar, and a Raspberry Pi 5 has taken its place. It also means that once again I have at least one running RPI of each existing version 🙂

Nowadays there are many Raspberry Pi alternatives to choose from, but I have learnt to appreciate the RPIs for their still excellent support when it comes to accessories, hardware and software providers, and the communities.

When I decided to once more buy an RPI, close to my 20th, I wanted to give it a performance boost and settled for an M.2 HAT and a 1 TB M.2 SSD. After putting the fiddly cable in place correctly it just works and shows up, and the raspi-config tool even lets you set it as the preferred boot device. There is an upside to not running “bleeding edge”; the RPI5 was announced back in September 2023.

When running Kodi I observed a lot of DMA errors in the Kodi log and problems with playback. That was with the default Kodi in the current stable Debian, Bookworm, but installing Kodi 21, which is also available in the standard repositories, solves it. And even if the RPI5 officially only supports PCIe 2.0, it runs fine with PCIe 3.0, which the M.2 SSD also supports. Writing to disk at half a gigabyte per second is a far cry from SD card performance, and it boots quickly.
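For anyone wanting to try the same: forcing PCIe Gen 3 is a one-liner in the firmware configuration; a sketch assuming Raspberry Pi OS Bookworm, where the file lives under /boot/firmware (a reboot is needed afterwards):

# Allow the RPI5 PCIe link to train at Gen 3 speed (official support is Gen 2)
echo 'dtparam=pciex1_gen=3' | sudo tee -a /boot/firmware/config.txt
sudo reboot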

Initially I thought I would stick to the AC WiFi, but since it sometimes picked the 2.4 GHz network with the same name (I am still running Google Wifi, since the AC performance has been sufficient and reliably covers the estate), and my re-connection script sometimes struggled, I went with an Ethernet cable from the WiFi puck instead. That works fine, without a single re-buffering incident, even when playing an uncompressed Blu-ray at 25 Mbps over the network.

So far so good; I might get back with further comments if I learn anything of interest.

Posted in datorer, elektronik, hårdvara, hus och hem, linux | Tagged , , , | Leave a comment

VPN on the go

Until recently I have been connecting manually to the “home VPN” on the mobile when not connected to the home network. I knew it should be possible to automate with Tasker, and it turned out to be very easy.

To start with, yes, I am still using OpenVPN. I have too many things automated around OpenVPN to jump ship to WireGuard as long as OpenVPN remains a decent option that does what it is supposed to do.

Next thing: which Android client to use? I would say Arne Schwabe’s open source client, OpenVPN for Android, is a good choice.

Then in Tasker you set up two new profiles: one for when you are connected to your home WiFi and one for when you are not. To the not-at-home profile you attach a task “Connect VPN” containing a “Send Intent” action:

Action: android.intent.action.MAIN
Extra: de.blinkt.openvpn.api.profileName:<your-vpn-profile-from-the-openvpn-client>
Package: de.blinkt.openvpn
Class: de.blinkt.openvpn.api.ConnectVPN
Target: Activity

To the at-home profile you attach a task “Disconnect VPN” containing a “Send Intent” action:

Action: android.intent.action.MAIN
Extra: de.blinkt.openvpn.api.profileName:<your-vpn-profile-from-the-openvpn-client>
Package: de.blinkt.openvpn
Class: de.blinkt.openvpn.api.DisconnectVPN
Target: Activity
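If you want to sanity-check the intents before wiring them into Tasker, the same thing can be triggered from a computer over adb; a sketch where the profile name HomeVPN is just a placeholder for whatever your profile is called in the OpenVPN client (which may ask you to approve external control the first time):

adb shell am start -a android.intent.action.MAIN \
  -n de.blinkt.openvpn/.api.ConnectVPN \
  --es de.blinkt.openvpn.api.profileName "HomeVPN"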

Easy peasy; try it out by connecting to and disconnecting from the home WiFi.

I cannot say I use Tasker for a lot, but for creating a hotspot automatically in the car (based on the Bluetooth connection to the navigator) and at my workplace (based on the geographical position), I find it very useful.

Posted in datorer, datorsäkerhet, hårdvara, linux | Tagged , , , | Leave a comment

Utilizing Nvidia GPUs in a MicroK8s cluster

Not that I have any serious processing to do, but a few days ago I decided to check how it could be done if/when the need arises. It can’t hurt to have the ability, I thought. Since I already have a Kubernetes cluster, it would be silly to write custom code for distributing the jobs, and the GPUs are more suitable than the CPUs for the imagined heavy lifting. A Kubernetes operator sounds like the way to go for the actual interaction with the hardware driver, and sure enough, Google lets us know that Nvidia has a GPU operator hosted on GitHub. The idea was too obvious for me to be the first one to think along those lines…

Since I am running Ubuntu’s Kubernetes distribution, MicroK8s, I also had a look at what they offer, and they provide an addon which attempts to bundle the operator and pre-configure it to fit MicroK8s out of the box. Sounds like the way to go; a simple “microk8s enable gpu” is suggested. Unfortunately that did not work for me despite a number of attempts with various parameters. Maybe it works for others, but in my situation, where I already have the driver installed on the nodes that have GPUs and want to use that host driver, I had no luck despite specifying the latest driver version and forcing the host driver. So, back to square one, and I decided to try my luck with Nvidia’s GPU operator “directly”. The MicroK8s add-on installs into the namespace “gpu-operator-resources” by default, so a simple “microk8s disable gpu” and deletion of the whole namespace (“microk8s kubectl delete namespace gpu-operator-resources”), to avoid conflicts, put us back to a reasonable starting position.

In the Nvidia documentation there is a section about the containerd settings to use with MicroK8s, so that the paths match what MicroK8s expects. By also specifying “driver.enabled=false”, in order to skip the containerized Nvidia driver and use the pre-installed host driver, we have a winner:

microk8s helm install gpu-operator -n gpu-operator --create-namespace \
  nvidia/gpu-operator --set driver.enabled=false \
  --set toolkit.env[0].name=CONTAINERD_CONFIG \
  --set toolkit.env[0].value=/var/snap/microk8s/current/args/containerd-template.toml \
  --set toolkit.env[1].name=CONTAINERD_SOCKET \
  --set toolkit.env[1].value=/var/snap/microk8s/common/run/containerd.sock \
  --set toolkit.env[2].name=CONTAINERD_RUNTIME_CLASS \
  --set toolkit.env[2].value=nvidia \
  --set toolkit.env[3].name=CONTAINERD_SET_AS_DEFAULT \
  --set-string toolkit.env[3].value=true
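For the record, the command above assumes that the Nvidia chart repository is already known to Helm; if it is not, adding it first should look something like this (taken more or less straight from Nvidia’s instructions):

microk8s helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
microk8s helm repo update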

At least with that, the resources in the gpu-operator namespace are healthy, it passes the validation test (“microk8s kubectl logs -n gpu-operator -lapp=nvidia-operator-validator -c nvidia-operator-validator”) and it can run the CUDA sample application “cuda-vector-add”.
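If anyone wants to reproduce that last check, this is roughly the classic vector-add smoke test as I understand it; a sketch, and the image registry/tag may need adjusting to whatever is current:

microk8s kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1
EOF
# Check the result once the pod has completed
microk8s kubectl logs pod/cuda-vector-add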

Now I just have to figure out what to do with it… Re-encoding movies, forecasting the local weather based on measurements on the balcony and open weather data, beating the gambling firms or the hedge funds. The opportunities are endless for the naïve developer. 🙂

Posted in hårdvara, linux | Tagged , , , | Leave a comment

Building and hosting multi-architecture OCI images for your local Kubernetes cluster

I have been running my MicroK8s cluster for a few years now with a handful of applications and without many issues. It hosts my web-exposed photo galleries and runs the distributed backup solution, for example.

Until now it has been sufficient to use public images from Docker Hub, with whatever tweaks or additional packages applied post pod initialization without much effort. Now I have containerized a solution I developed for exposing an arbitrary YouTube channel via RSS, and noticed that installing the right packages (for example FFmpeg, which brings a bunch of mandatory dependencies one cannot opt out of) took some time, and I did not want to add this overhead to each cron job execution (yes, I run these updates and conversions as Kubernetes CronJobs).

My cluster is running on 6 Raspberry Pi 4s stacked in a tower, but the master node runs on a Ryzen server (and I have some other nodes on Ryzen servers which are labelled accordingly, so that they can pick up heavier load if needed), which means that the images I use should be available for both the arm64 and amd64 architectures.

I was choosing between Docker and Podman for building, but since I already had Docker installed on the build machine, I went with Docker, more specifically docker buildx. To host the images locally I use the built-in MicroK8s registry addon, which can be enabled easily (here allocating 20 GB) with: microk8s enable registry:size=20G
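A quick way to verify that the registry is up, and later on that pushed images actually landed there, is to query the standard registry API; a sketch assuming the default port 32000 and the image name used further down:

# List repositories in the local registry
curl http://localhost:32000/v2/_catalog
# List tags for a specific repository
curl http://localhost:32000/v2/mydebianstable/tags/list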

My image requirements are fairly simple: Debian Stable has what I need except Python 3 and some media packages (and yes, procps is there for that nice process-signalling utility pkill):

FROM debian:stable
# Install the media tools and Python runtime needed by the YouTube/RSS jobs
# (procps provides pkill for process signalling)
RUN <<EOF
apt-get update && apt-get install -y ffmpeg mediainfo wget ca-certificates python3 procps
EOF
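Before going multi-arch it can be worth a quick single-architecture smoke test of the image; a sketch assuming a reasonably recent Docker (the heredoc RUN syntax needs BuildKit) and that the Dockerfile is saved as podcast-stable-image, the file name used below:

docker build -f podcast-stable-image -t mydebianstable:test .
docker run --rm mydebianstable:test ffmpeg -version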

Building with Docker is typically as easy as “docker build .”, but due to the cross-platform needs I used the following to build for amd64/arm64, tag the image, export it as an OCI image and upload it to my local registry on localhost (I did the build on the same server where the MicroK8s master node runs, and I use the default registry port 32000). In order to reach the local registry I had to create a custom builder, mybuilder, with relaxed security constraints (“docker buildx create --use --name mybuilder --driver-opt network=host --buildkitd-flags '--allow-insecure-entitlement …'”) and then build using that builder:

docker buildx build -f podcast-stable-image \
  -t localhost:32000/mydebianstable:registry \
  --platform linux/amd64,linux/arm64 \
  --push --builder mybuilder .

In the CronJob manifest it is then possible to refer to the image with:
image: localhost:32000/mydebianstable:registry
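For context, a minimal CronJob using the locally hosted image could look something like this; a sketch with made-up name, schedule and command rather than my actual job definition:

microk8s kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: youtube-rss-update
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: youtube-rss-update
              image: localhost:32000/mydebianstable:registry
              command: ["/bin/sh", "-c", "ffmpeg -version"]
EOF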

Loading the image is then a matter of seconds instead of minutes.

In order not to upset Google, it makes sense to use a sidecar container with a VPN connection so that you use different IP addresses when accessing their servers.

Posted in datorer, linux, webbprojekt | Tagged , , , , | Leave a comment

Upgrading the NVMe storage

One of my servers has been running low on disk space, and after a series of half measures I finally did something about it. A few months ago I added a second NVMe drive to another server, which was easy peasy since it had a second M.2 slot for NVMe drives. The server I upgraded yesterday only has one M.2 slot for drives; the other slot is meant for WiFi cards and has a different connector.

Yesterday I got a 4 TB NVMe drive which I bought cheaply because it uses the older, but still quick enough for me, PCIe 3.0 standard. I was not keen on reinstalling the OS and simply wanted the new 4 TB drive to replace the current 1 TB drive. The server runs a few VMs and I also wanted to minimize the downtime. Some quick research showed that Clonezilla (initially released in 2007) still gets the job done, so I installed it on a UEFI-bootable USB drive. The new 4 TB drive I put temporarily in one of those NVMe-to-USB 3.0 cases, plugged both into the server’s USB 3.0 ports and rebooted. Clonezilla is intuitive for someone used to TUIs, and after about 20 minutes the 1 TB drive was cloned to the 4 TB drive, partition table and all.

Since the partition table is cloned as-is, the root filesystem partition had to be grown to utilize the new free space, but that was easily done in GParted. After replacing the 1 TB drive with the 4 TB drive in the M.2 slot I was prepared to have to tell the BIOS where to boot from, but it booted straight away from the new drive and everything worked exactly as before, only with 25 % of the disk space used instead of nearly 100 %. Success at the first attempt; not every time, but nice with a positive surprise for once.
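If you prefer to stay on the command line instead of GParted, the same growing can be done with growpart and resize2fs; a sketch that assumes the cloned drive shows up as /dev/nvme0n1, with an ext4 root filesystem on partition 2:

# growpart comes from the cloud-guest-utils package on Debian/Ubuntu
sudo growpart /dev/nvme0n1 2
sudo resize2fs /dev/nvme0n1p2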

I have seen that there are dedicated devices for this specific purpose, but paying for one seems unnecessary if you only do some occasional cloning. For professionals running a walk-in “I quickly clone your NVMe drive” shop it probably makes sense though…

What to do with the leftover 1 TB drive? For now I will leave it in the NVMe-to-USB 3.0 case and use it as a 1 TB USB 3.0 stick (which at the time of writing goes for about 100 bucks).

Posted in datorer, hårdvara, linux | Tagged | Leave a comment

The Chipolo that got a second chance

To replace or to recharge, that is the question when it comes to battery-powered devices. I have switched most of my small devices to rechargeable batteries (the only exception being devices where Felix is suspected to kill/drop the battery-powered toy), and in general I strongly dislike devices with built-in irreplaceable batteries, where you are supposed to throw away/recycle perfectly working electronics just because the battery has discharged or can’t be recharged anymore.

One such example is a Chipolo Card (1st gen), which has an odd battery (CP113130, which can be bought from China in minimum quantities of 100 batteries…). It is a 3 V battery, so I replaced it with a USB to 3 V converter (a few euros on Amazon or AliExpress). That works fine, but since USB battery packs typically disconnect the load if the draw is regarded as “too low” (typically somewhere below 100 mA), one needs to either add extra load, get a pack where such “intelligence” can be disabled (check Voltaic), or put an adapter between the power source and the consumer that keeps the battery pack awake with a quick “ping load”.

I went for the latter option since the Voltaic packs were hard to find with delivery to my place. For those who want to solder themselves, this is a comprehensive guide with an option to buy the parts you need; maybe I will do that later if I need one more. There are many DIY projects on Tindie and I got one of them. After putting that “pulse generator” between the battery pack and the USB 5 V to 3 V converter, the Chipolo Card works as expected. After a few days the battery pack is down from 100 % to 75 %. The USB battery pack can still be charged and used as a normal battery pack, since the pulse generator does not block the other USB ports.

Not the most beautiful installation I have done in my life but still better than throwing away a perfectly working Chipolo. 🙂

First test without pulse generator. The battery pack turned off after some time.
Battery pack with pulse generator (and electric tape…)
Posted in hårdvara, hus och hem | Tagged , | Leave a comment

Raspberry Pi 4 with OS on big (>2 TB) drives

I recently switched to Ubuntu 22.04 on my Raspberry Pi 4 and faced an issue when trying to use the full space on my RAID cabinet (Icy Box IB-RD3640SU3E2). An MBR partition table is no good for drives larger than 2 TB, and that is what you get after writing the installer to the USB drive with rpi-imager. What to do?

Fortunately there is a convenient tool called mbr2gpt (don’t bother with GParted; it failed me, at least) which does the conversion in place without data loss (at least the two times I have used it…).

The process for a fresh Ubuntu 22.04 installation on an external USB-connected drive (for example a RAID cabinet as outlined above) looks like this:

1. Use rpi-imager to write the Ubuntu 22.04 preinstalled image to the USB drive and to an SD card
2. Boot with the USB drive plugged in and go through the Ubuntu installation guide
3. Boot with the SD card plugged in, without the USB drive, and go through the Ubuntu installation guide.
4. Boot with the SD card but without the USB drive, then plug in the USB drive once Ubuntu has started. Unmount the USB drive if it gets mounted.
5. Use the mbr2gpt utility to convert MBR to GPT and expand the root partition (sudo mbr2gpt /dev/sda). Choose to expand the root filesystem and not to boot from the SD card.
6. Reboot without the SD card.

If you also have to restore a previous backup (for example a simple tar archive made with tar cvf /backup.tar --exclude=/backup.tar --exclude=/dev --exclude=/mnt --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/media --exclude=/lost+found /), like in my case, these would be the additional steps:

7. Boot with the SD card and without the USB drive.
8. Mount the USB drive’s filesystem on /media/new and create /media/backup
9. Mount the filesystem holding the backup file: mount -t nfs serve:/media/backupdir /media/backup
10. Make a backup of /media/new/etc/fstab and /media/new/boot/firmware/cmdline.txt
11. Extract the backup onto the USB drive’s filesystem: tar xvf /media/backup/backup-file.tar -C /media/new
12. If fstab or cmdline.txt refer to PARTUUIDs, restore the values saved in step 10 so they match the new drive’s partitions (see the sketch after this list)
13. Reboot without the SD card and hopefully the system boots fine (like it did for me…)
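For step 12, this is roughly how I would check and fix the PARTUUIDs; a sketch assuming the USB drive shows up as /dev/sda and its root filesystem is mounted on /media/new:

# Show the PARTUUIDs of the new drive's partitions
sudo blkid /dev/sda1 /dev/sda2
# Make sure these files reference the PARTUUIDs printed above,
# using the copies saved in step 10 as reference
sudo nano /media/new/etc/fstab
sudo nano /media/new/boot/firmware/cmdline.txt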

Posted in datorer, hårdvara, linux, webbservern | Tagged , , , , , , | Leave a comment

Scripting inspiration

Yesterday, too late in the evening, I stumbled upon a need: to save an album available for streaming for offline use, so that I can play it conveniently via my Sonos system (yes, there are streaming services to stream directly from as well, but that is not as future-proof, let’s say). If you live in a jurisdiction where it is allowed to make a private copy, this is fine from a regulatory viewpoint.

I asked Google what solutions/products other people had come up with. Nothing obvious; a bunch of crap where scammers want you to pay for some shitty Windows application. No thanks.

Next thought: how difficult could it be in a Bash script, given that you can combine a bunch of well-tested software components? This type of scripting is more like Lego than software development.

I think I had something working in less than 15 minutes, since I had already used parec to record the sound output produced by PulseAudio. My thought is that there are probably a bunch of problems and cases where people would get a lot of use out of some basic scripting abilities, whether that is PowerShell on Windows or Bash on Ubuntu does not matter, in order to get things done without buying/installing software.

If you are already on board, good for you; otherwise a primer in scripting might be a well-invested evening. First this one, and then the classic Advanced Bash-Scripting Guide.

#!/bin/bash
# Record whatever is played via PulseAudio to a dated mp3 and split it into tracks.
# Dependencies: script that moves sound output to a named sink (moveSinks.sh), mp3splt
NOW=$(date '+%Y-%m-%d')
mkdir -p ~/Music/$NOW
echo "Start playing the album"
sleep 2
# Find the currently active default output sink
DEFAULT_OUTPUT_NAME=$(pacmd list-sinks | grep -A1 "* index" | grep -oP "<\K[^ >]+")
# Create a combined sink so the audio is both played and available for recording
pactl load-module module-combine-sink sink_name=record-n-play slaves=$DEFAULT_OUTPUT_NAME sink_properties=device.description="Record-and-Play"
${HOME}/scripts/moveSinks.sh record-n-play
# Record the monitor of the combined sink and encode to mp3 in the background
/usr/bin/parec -d record-n-play.monitor | /usr/bin/lame -r -V0 - "${HOME}/Music/${NOW}/${NOW}.mp3" &
while true; do
  sleep 15
  number_sinks=$(pacmd list-sink-inputs | grep available. | cut -c1-1)
  echo "Found this no. of pulse sinks: $number_sinks"
  if [[ $number_sinks -le 1 ]]; then
    # Stop recording
    kill -9 $!
    break
  fi
  sleep 45
done
# Split into separate tracks
cd ${HOME}/Music/${NOW}/
mp3splt -s ${NOW}.mp3
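One thing the script does not do is clean up after itself; the combined sink stays around until PulseAudio is restarted. If that bothers you, something like this at the end should remove it (an untested addition on my part):

pactl unload-module module-combine-sink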

Posted in datorer, linux, programmering, webbprojekt | Tagged , , | Leave a comment