tzdata fails to upgrade on Ubuntu 23.10

If you’re getting a dpkg error about tzdata prematurely exiting (the error shown at the bottom of this post), you need to grab this file:
Ubuntu – Package Download Selection — tzdata_2024a-0ubuntu0.23.10_all.deb

Afterwards, install it with sudo dpkg -i tzdata_2024a-0ubuntu0.23.10_all.deb

Then you can run sudo apt upgrade to move to the latest version in the apt repo, which was previously failing.
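
Putting the whole fix together in a terminal, it looks roughly like this. The download URL is my assumption based on the usual Ubuntu pool layout; if it 404s (23.10 packages eventually move to old-releases.ubuntu.com), grab the link from the package page above instead.

wget http://archive.ubuntu.com/ubuntu/pool/main/t/tzdata/tzdata_2024a-0ubuntu0.23.10_all.deb
sudo dpkg -i tzdata_2024a-0ubuntu0.23.10_all.deb
sudo apt upgrade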

Hope this helps someone!

dpkg: error processing package tzdata (--configure):
installed tzdata package post-installation script subprocess returned error exit status 10
Errors were encountered while processing:
tzdata
needrestart is being skipped since dpkg has failed

Docker Compose with an external LAN / VLAN IP!

I just figured this out, and it’s too cool not to share. I have business-grade switches at my house, so I already have various VLANs set up. You’ll need that in place to make this work, with your port tagging already configured, etc.

This requires no additional configuration on the host. Below, I’ve included two examples: default_lan and vlan5. If you just want to give a container an IP on your local LAN, use default_lan. If you’re looking to create a service on a VLAN IP, use vlan5 as an example for that.

EDIT: You may need to modprobe 8021q (and/or add it to /etc/modules).
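
A quick sketch of loading the module now and keeping it loaded across reboots:

sudo modprobe 8021q
echo "8021q" | sudo tee -a /etc/modules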

You do not need to include default_lan in order to use a vlan. This also of course works great in Portainer.

networks:
  default_lan: # the name you'll reference in the service configuration
    driver: ipvlan
    driver_opts:
      parent: enp1s0d1 # the parent interface on your docker host that the containers will attach to
    ipam:
      config:
        - subnet: 10.1.1.0/24 # your network's subnet
          gateway: 10.1.1.1 # your network's gateway

  vlan5:
    driver: ipvlan
    driver_opts:
      parent: enp1s0d1.5 # I've added '.5' for vlan 5
    ipam:
      config:
        - subnet: 10.1.5.0/24 # the vlan's subnet
          gateway: 10.1.5.1 # the vlan's gateway

services:
  service_on_lan:
    networks:
      default_lan:
        ipv4_address: 10.1.1.51

  service_on_vlan:
    networks:
      vlan5:
        ipv4_address: 10.1.5.55

I have not tested this, but I believe you can also add a second subnet and gateway pair for IPv6 routing, and then specify an ipv6_address in the service.
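
Untested (as noted), but a minimal sketch of what that might look like; the 2001:db8: prefix is just a documentation placeholder, and enable_ipv6 is an assumption that may or may not be required by your compose version:

networks:
  default_lan:
    driver: ipvlan
    enable_ipv6: true # assumption: may be needed for compose to configure IPv6
    driver_opts:
      parent: enp1s0d1
    ipam:
      config:
        - subnet: 10.1.1.0/24
          gateway: 10.1.1.1
        - subnet: 2001:db8:1::/64 # placeholder, use your LAN's IPv6 prefix
          gateway: 2001:db8:1::1 # placeholder gateway

services:
  service_on_lan:
    networks:
      default_lan:
        ipv4_address: 10.1.1.51
        ipv6_address: 2001:db8:1::51 # placeholder address on that prefix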

You can also use macvlan instead, which gives the container a unique MAC address that you can see on your network. I have found the best way to do this is to create one network per IP, at least for my needs; otherwise you can easily run into duplicate-IP problems.

networks:
  macvlan5_5: # the name you'll reference in the service configuration; the _5 suffix is the last octet of the IP
    driver: macvlan
    driver_opts:
      parent: enp1s0d1.5 # the interface on your docker host, with .# appended for the vlan number
    ipam:
      config:
        - subnet: 10.1.5.0/24 # your network's subnet
          gateway: 10.1.5.1 # your network's gateway
          ip_range: 10.1.5.5/32 # the static IP you want to assign to this network's container

And then just assign the network in your container:

services:
  service_on_macvlan5:
    networks:
      - macvlan5_5

Unfortunately, the container does not seem to register itself with its defined hostname, so my firewall just sees a new ‘unknown’ host with a random MAC address in the ARP table.

Check out the complete Docker Network Drivers Overview page for more examples and usage.

Frigate Docker Compose / Portainer

You will need to have Mosquitto MQTT setup before using Frigate — fortunately, I have a guide for that already!

https://itbacon.com/2023/08/01/installing-mosquitto-mqtt-in-portainer/

Once you have confirmed Mosquitto is up and running, we can deploy a Frigate stack. This particular stack has a device mapping for a Google Coral A+E key device, and uses /dev/dri/renderD128 for onboard graphics (Intel, in this case). You’ll want to adjust some things, such as whatever MQTT username and password you created during MQTT setup/install (see the guide for help!), as well as your camera admin username and password. If you use different usernames and passwords for your cameras, you can specify them individually in your Frigate configuration file after the stack is deployed.

Also in this stack is a configuration for using a Samba/Windows-based NAS as a volume for /media/frigate, which is where recordings and snapshots will be saved. In short, you’ll need to make some changes to the code below after pasting it so that it suits your setup.

The majority of my configuration file was taken from the Full Reference Configuration File which is an excellent reference with comments about the various options in the configuration file.

I had also initially planned to include my NVIDIA setup/configuration sections, but the machine I just moved Frigate onto can only take one full-length card, and I’ve used that slot for something else.

services:
  frigate:
    container_name: "frigate"
    image: "ghcr.io/blakeblackshear/frigate:stable"
    hostname: "frigate"
    shm_size: 1024mb # increase if getting bus errors
    privileged: true
    restart: unless-stopped
    cap_add:
      - CAP_PERFMON
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128:ro # onboard video
      - /dev/apex_0:/dev/apex_0:ro # coral
    environment:
      - "TZ=EST5EDT" # your timezone
      - "FRIGATE_RTSP_USERNAME=admin" # camera admin username
      - "FRIGATE_RTSP_PASSWORD=password" # camera admin password
      - "FRIGATE_MQTT_USERNAME=frigate" # mqtt server username
      - "FRIGATE_MQTT_PASSWORD=password" # mqtt server password
    network_mode: host
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - data:/config
      # if you're not using a NAS, change NAS to the path you're using
      # e.g. /mnt/frigate:/media/frigate or /any/path:/media/frigate
      - NAS:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1G

volumes:
  data:
  mqtt_data:
  NAS:
    driver_opts:
      type: cifs
      o: "addr=IP.OF.NAS,username=SAMBA_USERNAME,password=SAMBA_PASSWORD,iocharset=utf8,file_mode=0600,dir_mode=0700"
      device: "//IP.OF.NAS/SharedFolder"

networks:
  frigate:
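
If you’re deploying with plain docker compose rather than a Portainer stack, bringing it up and watching the logs looks roughly like this (assuming you saved the above as docker-compose.yml in a ‘frigate’ folder in your home directory):

cd ~/frigate
docker compose up -d
docker logs -f frigate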

Here is my Frigate configuration file. It will be in /var/lib/docker/volumes/frigate_data/_data/config.yml
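
To edit it from the host and apply changes, something like this works (docker restart frigate works because of the container_name set in the stack above):

sudo nano /var/lib/docker/volumes/frigate_data/_data/config.yml
docker restart frigate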

I know it’s kind of a mess, and there are probably some redundant things in here, but I felt bad about still not having anything posted after so long. There are definitely some useful examples in here, I imagine: Amcrest, Reolink, and Hikvision cameras, plus how to use separate streams for recording and detection, etc. Unfortunately, at the time of writing all of this up tonight, I am extremely tired and just have to get it out as-is.

Now, admittedly, full-on support and assistance with configuring your Frigate NVR is vastly out of the scope of this guide. There is plenty of great documentation already available on the Official Frigate Website. Good luck!

mqtt:
  enabled: true
  host: 10.1.1.5 # the IP of the computer running MQTT, or localhost
  port: 1883
  topic_prefix: frigate
  client_id: frigate
  user: '{FRIGATE_MQTT_USERNAME}'
  password: '{FRIGATE_MQTT_PASSWORD}'
  stats_interval: 30

detectors:
  coral:
    type: edgetpu
    device: pci
  #cuda:
  #  type: tensorrt
  #  device: 0
  #openvino:
  #  type: openvino
  #  device: AUTO
  #cpu:
  #  type: cpu
  #  num_threads: 2

database:
  path: /config/frigate.db

logger:
  # Optional: Default log verbosity
  default: warning
  #default: warning
  # Optional: Component specific logger overrides
  #logs:
  #  frigate.nginx: error
  #  frigate.event: error

birdseye:
  enabled: true
  restream: true
  width: 1280
  height: 720
  quality: 7
  # motion (if motion was detected), objects (if it detected an object), or continuous (always on)
  mode: continuous

ffmpeg:
  global_args: -hide_banner -loglevel warning -threads 2
  #hwaccel_args: preset-vaapi
  hwaccel_args: preset-intel-qsv-h264
  #hwaccel_args: preset-nvidia-h264

  #input_args: preset-rtsp-generic
  input_args: preset-rtsp-restream
  output_args:
    record: preset-record-generic-audio-copy

# default detect settings for all cameras
detect:
  enabled: true
  width: 704
  height: 480

  fps: 10
  max_disappeared: 50
  stationary:
    interval: 10
    threshold: 50

# default object tracking for all cameras
objects:
  track:
  - person
  #filters:
  #  person:
  #    min_area: 100
  #    max_area: 75000

motion:
  threshold: 25
  contour_area: 25
  delta_alpha: 0.2
  frame_alpha: 0.2
  frame_height: 75
  improve_contrast: false
  mqtt_off_delay: 30

# default record settings for all cameras
record:
  enabled: true
  expire_interval: 60
  retain:
    days: 15
    mode: all
  events:
    pre_capture: 5
    post_capture: 5
    objects:
    - person
    retain:
      default: 15
      mode: all

snapshots:
  enabled: true
  clean_copy: true
  timestamp: false
  bounding_box: true
  crop: false
  retain:
    default: 15

# configure your cameras here
go2rtc:
  streams:
    # reolink poe doorbell
    doorbell:
    - ffmpeg:rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.108:554/h264Preview_01_main#video=copy#audio=copy#audio=opus
    doorbell_sub:
    - ffmpeg:rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.108:554/h264Preview_01_sub#video=copy
    # amcrest
    front:
    - rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.114:554/cam/realmonitor?channel=1&subtype=0
    - ffmpeg:front#audio=opus
    front_sub:
    - rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.114:554/cam/realmonitor?channel=1&subtype=1
    # amcrest
    back:
    - rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.107:554/cam/realmonitor?channel=1&subtype=0
    - ffmpeg:back#audio=opus
    back_sub:
    - rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.107:554/cam/realmonitor?channel=1&subtype=1
    # amcrest
    porch:
    - rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.106:554/cam/realmonitor?channel=1&subtype=0
    - ffmpeg:porch#audio=opus
    porch_sub:
    - rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.106:554/cam/realmonitor?channel=1&subtype=1
    # amcrest wifi camera
    livingroom:
    - rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.112:554/cam/realmonitor?channel=1&subtype=0&authbasic=64
    - ffmpeg:livingroom#audio=opus
    livingroom_sub:
    - rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.112:554/cam/realmonitor?channel=1&subtype=1&authbasic=64
    # hikvision
    basement:
    - rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.104:554/Streaming/Channels/101
    - ffmpeg:basement#video=copy
    #basement_sub:
    #- rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.104:554/Streaming/Channels/102
    #- ffmpeg:basement#video=copy

# we use localhost because go2rtc is restreaming them locally based on the names we gave them above
cameras:
  doorbell:
    enabled: true
    ffmpeg:
      inputs:
      - path: rtsp://localhost:8554/doorbell
        roles:
        - record
      - path: rtsp://localhost:8554/doorbell_sub
        roles:
        - detect
    detect:
      enabled: true
      width: 640
      height: 480
    objects:
      track:
      - person
    mqtt:
      enabled: true
      timestamp: true
      bounding_box: true
      crop: true
      height: 720
      quality: 92
    live:
      height: 720
      quality: 7
    ui:
      order: 1
    #motion:
    #  mask:
    #  - 640,0,640,309,605,335,482,368,252,366,0,311,0,0
  front:
    enabled: true
    ffmpeg:
      inputs:
      - path: rtsp://localhost:8554/front
        roles:
        - record
    detect:
      enabled: true
    record:
      events:
        #required_zones:
        #- FrontYard
        retain:
          default: 15
    objects:
      track:
      - person
      - cat
    mqtt:
      enabled: true
      timestamp: true
      bounding_box: true
      crop: true
      height: 720
      quality: 92
    live:
      height: 720
      quality: 7
    ui:
      order: 2
    #motion:
    #  mask:
    #  - 704,0,704,202,0,150,0,0
    #zones:
    #  FrontYard:
    #    coordinates: 650,196,82,136,0,228,0,480,704,480
  back:
    enabled: true
    ffmpeg:
      inputs:
      - path: rtsp://localhost:8554/back
        roles:
        - record
      - path: rtsp://localhost:8554/back_sub
        roles:
        - detect
    record:
      events:
        #required_zones:
        #- BackYard
        retain:
          default: 15
    objects:
      track:
      - person
      - cat
    mqtt:
      enabled: true
      timestamp: true
      bounding_box: true
      crop: true
      height: 720
      quality: 92
    live:
      height: 720
      quality: 7
    ui:
      order: 3
    #motion:
    #  mask:
    #  - 291,0,288,41,0,39,0,0
    #zones:
    #  BackYard:
    #    coordinates: 374,0,600,480,0,480,0,0
  porch:
    enabled: true
    ffmpeg:
      inputs:
      - path: rtsp://localhost:8554/porch
        roles:
        - record
      - path: rtsp://localhost:8554/porch_sub
        roles:
        - detect
    record:
      events:
        retain:
          default: 15
    objects:
      track:
      - person
      - cat
    mqtt:
      enabled: true
      timestamp: true
      bounding_box: true
      crop: true
      height: 720
      quality: 92
    live:
      height: 720
      quality: 7
    ui:
      order: 4
    #motion:
    #  mask:
    #  - 261,0,270,104,323,223,367,333,480,348,534,76,640,51,640,480,0,480,0,0
  livingroom:
    enabled: true
    ffmpeg:
      inputs:
      - path: rtsp://localhost:8554/livingroom
        roles:
        - record
      - path: rtsp://localhost:8554/livingroom_sub
        roles:
        - detect
    detect:
      enabled: true
    record:
      events:
        objects:
        - cat
        retain:
          default: 15
    objects:
      track:
      - cat
    mqtt:
      enabled: true
      timestamp: true
      bounding_box: true
      crop: true
      height: 720
      quality: 92
    live:
      height: 720
      quality: 7
    ui:
      order: 5

  basement:
    enabled: true
    ffmpeg:
      inputs:
      - path: rtsp://localhost:8554/basement
        roles:
        - record
        - detect
    record:
      events:
        #required_zones:
        #- BasementStairs
        objects:
        - cat
        retain:
          default: 15
    detect:
      enabled: true
      width: 1280
      height: 720
      fps: 10
    objects:
      track:
      - cat
    mqtt:
      enabled: true
      timestamp: true
      bounding_box: true
      crop: true
      height: 720
      quality: 92
    live:
      height: 720
      quality: 7
    ui:
      order: 6

timestamp_style:
  position: tl
  format: '%m/%d/%Y %H:%M:%S'
  color:
    red: 255
    green: 255
    blue: 255
  thickness: 1
  effect: solid

ui:
  live_mode: webrtc
  timezone: EST5EDT
  use_experimental: false
  time_format: 12hour
  date_style: short
  time_style: medium
  strftime_fmt: '%Y/%m/%d %H:%M'

telemetry:
  version_check: true

Installing Google Coral on Ubuntu 23.10

MAY THE 4TH BE WITH YOU!… and also this guide.

I have not tested it in Ubuntu 24.04 but it may work. Let me know in the comments if you try!

This brief guide has been revised from:
https://coral.ai/docs/m2/get-started#2-install-the-pcie-driver-and-edge-tpu-runtime

This will essentially be the first part of a few parts coming for Frigate and Home Assistant.

First, we need to set up the apt repository and install the required packages:

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list

curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/google-coral-edgetpu.gpg

sudo apt update
sudo apt install gasket-dkms libedgetpu1-std

If gasket-dkms fails (it probably will — if it doesn’t, skip down to the udev rule section):

sudo apt purge gasket-dkms
git clone https://github.com/KyleGospo/gasket-dkms
sudo apt install dkms libfuse2 dh-dkms devscripts
cd gasket-dkms; debuild -us -uc -tc -b

You’ll have a .deb file one folder up:

cd ..
ls *.deb
sudo dpkg -i gasket-dkms_*.deb
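
Optionally, before moving on, a quick sanity check that the module built and registered with DKMS:

sudo dkms status | grep gasket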

We need to add a udev rule for permission to the hardware device:

sudo sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"docker\"' >> /etc/udev/rules.d/65-apex.rules"

If you don’t want to use the docker group, create a dedicated group instead and use it in the rule (run these in place of the command above):

sudo groupadd apex
sudo usermod -a -G apex your_linux_username
sudo sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"

REBOOT!

Verify the device is detected and available:

ls -alh /dev/apex*
crw-rw---- 1 root docker 120, 0 May 4 20:34 /dev/apex_0

Server Service Shuffle

Over the next week or two, as I find time and motivation (Helldivers 2 has been winning both lately), I’ll be moving some services to a new server, namely Docker/Portainer, Frigate and Home Assistant. I’ll be doing my best to keep notes from beginning to end and finally get something posted to help anyone else trying to get Frigate and Home Assistant working together.

I do use a Coral, and have also used an NVIDIA card for graphics offloading. With 5-6 cameras I can’t say I noticed a huge impact from offloading the graphics, but I will try to cover that part as well since I plan to move the card over anyway. For reference, it’s just a lowly GTX 1050 Ti that I’m using for the task. I figure if I ever bother to buy a Plex license, I can use it for that as well.

I’ll be using Ubuntu Server 23.10.

Revisiting Frigate & Home Assistant

I’ve been revisiting Frigate as of late, and using Home Assistant to send notifications to my phone. I wish I could do this without Home Assistant… it feels so excessive just to get camera notifications.

Anyway, after months and months of procrastinating, I hope to post my Frigate Portainer config soon, along with the overall Frigate configuration file I’m using. I’ll also attempt to cover notifications in a separate post afterwards.

Hope everyone had a Happy Thanksgiving! 🦃

Portainer as Docker Compose file

Updating portainer becomes:
docker compose down
docker pull portainer/portainer-ce:latest
docker compose up -d

version: '3.0'
services:
  portainer:
    container_name: portainer
    hostname: portainer
    command: --sslcert /certs/lan.fullchain --sslkey /certs/lan.key
    image: portainer/portainer-ce:latest
    restart: unless-stopped
    network_mode: bridge
    environment:
      - "TZ=EST5EDT"
    ports:
      - 9443:9443
    volumes:
      - data:/data
      - /ssl/lancerts:/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  data:

Paste the above into a docker-compose.yml file; I placed mine in a ‘portainer’ folder inside my home directory. Then just run docker compose up -d
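
For example (the folder name and location are just my preference):

mkdir -p ~/portainer
cd ~/portainer
nano docker-compose.yml
docker compose up -d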

I use a folder on my system, /ssl/lancerts, which I map to /certs inside the container. You will have to modify the certificate locations in the volumes section and in the command: line near the top of the compose file. If you are not using SSL, simply comment out or remove that command: line and the certificate volume mapping.

ITBacon is now ARMed!

…with an Orange-Pi Zero 3! I’ve moved the website to this tiny little device that doesn’t even have a home yet.

A case of sorts is on the way. I did attach a heatsink to the CPU in the above picture, which was not included.

My impressions so far are very positive. I’m running their Ubuntu image and haven’t had any problems. It reboots in several seconds. It runs the website very well.

I even toyed around with building an ARM version of q2pro with no problems at all.

Only downsides: the USB port is 2.0, not 3.0, and the SD slot speed. SD speed maxes out around 22-24MB/s, so a high-speed card will not make a difference.

Update: Now with a home of its own! This little fan lowered temps by 13+ °C!

Quake 2

Update: Please see https://quake2.itbacon.com for more information! Come check out my jump server, itbacon.com:27920

Had a lot of fun learning meson environments this weekend, and finally building a 32-bit version of q2pro (in Ubuntu) to load old mods that never released source code. Now I can load gamei386.so mods. I even compiled an i386 version of q2admin just for the heck of it. I love that everything still works.
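
For anyone curious, this is roughly how a 32-bit meson build can be set up on a 64-bit Ubuntu host with a cross file. The package list and cross file are my own sketch, not q2pro’s official build instructions, and you’ll also need :i386 versions of whatever dev libraries the build asks for.

sudo apt install gcc-multilib meson ninja-build

Then create a cross file (I’ll call it linux32-cross.txt; the name is arbitrary):

[binaries]
c = 'gcc'
strip = 'strip'

[built-in options]
c_args = ['-m32']
c_link_args = ['-m32']

[host_machine]
system = 'linux'
cpu_family = 'x86'
cpu = 'i686'
endian = 'little'

And configure and build with it:

meson setup build32 --cross-file linux32-cross.txt
ninja -C build32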

PacketFlinger’s excellent work with PakServe and pakutil made creating compressed pkz files a breeze.

Skullernet’s excellent work on q2pro made using it all just as easy: HTTP downloads, etc.

It’s absolutely wild to me that I can still load a quake2 mod from 20 years ago on a modern system. And everything works.

I’m running the best quake2 servers of my life and there’s nobody to use them anymore! It’s maddening.

I think I am mainly doing this for my own nostalgia at this point. There are definitely still a number of quake2 servers out there but there’s only a small, small handful of people.

I’d give anything for a truly active q2 community again. I want to play weapons factory, and freeze tag, and rocket arena, and expert CTF again.