So I decided to try a new Razer Viper V3 Hyperspeed Wireless Gaming Mouse (whew!).
I installed Razer Synapse to configure the DPI. All my RGB settings then proceeded to get reset, 3 fans had no lights, and my light strip was now only half-lit.
“Why?” I thought to myself. “This mouse doesn’t even have RGB!”
I had not asked it to reset my settings from ASRock RGB Polychrome (nor did it ask!). It just decided to take over, like a virus.
I figured leaving Razer Chroma out of the install would have been enough.
There is nowhere in the settings to tell Synapse to relinquish control, either. It is just enabled. Permanently as far as I can tell.
A quick Google search suggests you used to be able to tell Polychrome to sync with Synapse or Chroma or something. Inside Synapse, it claims I am using an X870 Taichi…uhm, no. I am not rich, Razer. It’s an X870 Pro. It doesn’t even have the RGB zones Razer wanted me to configure.
By this point I was much too frustrated. I set the one DPI I plan to ever use and promptly just uninstalled the software.
I will state, the mouse feels wonderful. I much prefer the shape to the Logitech G703 I also recently purchased.
But the experience would have been exceptional had it not decided to wreck my RGB setup. Instead it was frustrating. I did try, briefly, to get all the lights lit again, but no amount of adjusting LED counts and the like brought them back. I think it’s because it was only detecting two of the ARGB headers for whatever reason.
By comparison, Logitech G-Pro software is amazing. No frills. Configure your mouse. Configure RGB if you want. Done.
Today I realized my VPN has been set up kind-of-wrong for the last oh… 4 years or so. I’ve just never had a reason to notice, as I was always accessing internal things.
So, an updated WireGuard guide for pfSense 2.7.2 is forthcoming soon!
I’m also working on adding some better CSS support to my PowerShell server status script, and I’ll get that online soon.
That’s right! If you didn’t know, PowerShell is (nearly?) completely cross-platform.
I’ve recently been working on a server status script for work, and I chose to do it in PowerShell. It does threaded pings, generates HTML files, etc.
With zero modifications, the script runs flawlessly on both my Ubuntu server and my Orange Pi. All I had to do was extract the PowerShell tarball, chmod +x pwsh, and run the script.
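For anyone curious, the steps on the Linux boxes looked roughly like this (the tarball and script names are placeholders; use whichever release matches your architecture):

# download the PowerShell release tarball for your architecture, then:
mkdir -p ~/powershell
tar -xzf powershell-7.4.1-linux-x64.tar.gz -C ~/powershell   # example filename; yours may differ
chmod +x ~/powershell/pwsh

# run the script with the extracted pwsh binary
~/powershell/pwsh ./server-status.ps1   # placeholder script name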
Truly awesome.
What blew my mind was that PowerShell was markedly faster on my Pi than it was on my Ubuntu server.
I’ll be posting more about the script soon, as I need to figure out some things still. But it will be released in due time. I really like it!
Afterwards, install it with dpkg -i tzdata_2024a-0ubuntu0.23.10_all.deb
Then you can run apt upgrade to pick up the latest version from the apt repo, which was previously failing.
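In other words, assuming you’ve already downloaded the newer tzdata .deb (that step isn’t shown here), the fix boils down to:

# manually install the newer tzdata package (filename from this post; yours may differ)
sudo dpkg -i tzdata_2024a-0ubuntu0.23.10_all.deb

# then let apt finish the upgrade that was previously failing
sudo apt upgrade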
Hope this helps someone!
For reference, the error in question was:
dpkg: error processing package tzdata (--configure):
 installed tzdata package post-installation script subprocess returned error exit status 10
Errors were encountered while processing:
 tzdata
needrestart is being skipped since dpkg has failed
I just figured this out, and it’s too cool not to share. I have business-grade switches at my house, so I already have various VLANs set up. You’ll need that in place to make this work, with your port tagging already configured, etc.
This requires no additional configuration on the host. Below, I’ve included two examples: default_lan and vlan5. If you just want to give a container an IP on your local LAN, use default_lan. If you’re looking to create a service on a VLAN IP, use vlan5 as the example for that.
EDIT: YOU MAY NEED TO modprobe 8021q (and/or add it to /etc/modules)
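If the module isn’t loaded, something like this should do it (the /etc/modules line is just the usual way of making it persistent):

# load the 802.1Q VLAN kernel module now
sudo modprobe 8021q

# and load it automatically at boot
echo 8021q | sudo tee -a /etc/modules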
You do not need to include default_lan in order to use a VLAN. This also, of course, works great in Portainer.
networks:
  default_lan: # the name you'll reference in the service configuration
    driver: ipvlan
    driver_opts:
      parent: enp1s0d1 # the interface on your docker host that it will tunnel through
    ipam:
      config:
        - subnet: 10.1.1.0/24 # your network's subnet
          gateway: 10.1.1.1 # your network's gateway
  vlan5:
    driver: ipvlan
    driver_opts:
      parent: enp1s0d1.5 # I've added '.5' for vlan 5
    ipam:
      config:
        - subnet: 10.1.5.0/24 # the vlan's subnet
          gateway: 10.1.5.1 # the vlan's gateway

services:
  service_on_lan:
    networks:
      default_lan:
        ipv4_address: 10.1.1.51
  service_on_vlan:
    networks:
      vlan5:
        ipv4_address: 10.1.5.55
I have not tested it, but I believe you can also just add a second pair of subnet and gateway lines for IPv6 routing, and then specify your ipv6_address in the service.
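As a rough, untested sketch of what that might look like (the 2001:db8 prefix is just the IPv6 documentation range, and enable_ipv6 is my assumption about what compose wants; substitute your real prefix):

networks:
  vlan5:
    driver: ipvlan
    enable_ipv6: true # untested assumption
    driver_opts:
      parent: enp1s0d1.5
    ipam:
      config:
        - subnet: 10.1.5.0/24
          gateway: 10.1.5.1
        - subnet: 2001:db8:5::/64 # documentation prefix, replace with your own
          gateway: 2001:db8:5::1

services:
  service_on_vlan:
    networks:
      vlan5:
        ipv4_address: 10.1.5.55
        ipv6_address: 2001:db8:5::55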
You can also use macvlan instead, which will give the container a unique MAC address that you can see on your network. I’ve found the best way to do this is to define one network per IP, at least for my needs; otherwise you can easily run into duplicate-IP problems.
networks:
  macvlan5_5: # the name you'll reference in the service configuration, and I give _5 as the IP
    driver: macvlan
    driver_opts:
      parent: enp1s0d1.5 # the interface on your docker host and .# for the vlan #
    ipam:
      config:
        - subnet: 10.1.5.0/24 # your networks subnet
          gateway: 10.1.5.1 # your networks gateway
          ip_range: 10.1.5.5/32 # the static ip you want to assign to this networks container
And then just assign the network in your container:
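Something along these lines, where the service name and image are just placeholders:

services:
  service_on_macvlan:
    image: nginx:latest # placeholder image
    networks:
      - macvlan5_5 # the macvlan network defined above; the container gets 10.1.5.5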
Unfortunately, the container does not seem to register itself with the defined hostname, so my firewall just sees a new ‘unknown’ host at a random MAC address in the ARP table.
Once you have confirmed Mosquitto is up and running, we can deploy a Frigate stack. This particular stack has a device mapping for a Google Coral A+E key device, as well as a mapping for /dev/dri/renderD128 for onboard graphics (Intel, in this case). You’ll want to adjust some things, such as whatever MQTT username and password you created during MQTT setup/install (see the guide for help!), as well as your camera admin username and password. If you use different usernames and passwords for your cameras, you can specify them individually in your Frigate configuration file after the stack is deployed.
Also in this stack is a configuration for using a Samba/Windows-based NAS as the volume for /media/frigate, which is where recordings and snapshots are saved. Basically, what I’m saying is that you’ll need to make some changes to the code below after pasting it, in order to suit your needs.
The majority of my configuration file was taken from the Full Reference Configuration File, which is an excellent resource with comments explaining the various options.
I had also initially planned to include my NVIDIA setup/configuration sections, but the machine I just moved Frigate to can only take one full-length card, and I’ve used that slot for something else.
services:
  frigate:
    container_name: "frigate"
    image: "ghcr.io/blakeblackshear/frigate:stable"
    hostname: "frigate"
    shm_size: 1024mb # increase if getting bus errors
    privileged: true
    restart: unless-stopped
    cap_add:
      - CAP_PERFMON
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128:ro # onboard video
      - /dev/apex_0:/dev/apex_0:ro # coral
    environment:
      - "TZ=EST5EDT" # your timezone
      - "FRIGATE_RTSP_USERNAME=admin" # camera admin username
      - "FRIGATE_RTSP_PASSWORD=password" # camera admin password
      - "FRIGATE_MQTT_USERNAME=frigate" # mqtt server username
      - "FRIGATE_MQTT_PASSWORD=password" # mqtt server password
    network_mode: host
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - data:/config
      # if you're not using a NAS, change NAS to the path you're using
      # e.g. /mnt/frigate:/media/frigate or /any/path:/media/frigate
      - NAS:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1G

volumes:
  data:
  mqtt_data:
  NAS:
    driver_opts:
      type: cifs
      o: "addr=IP.OF.NAS,username=SAMBA_USERNAME,password=SAMBA_PASSWORD,iocharset=utf8,file_mode=0600,dir_mode=0700"
      device: "//IP.OF.NAS/SharedFolder"

networks:
  frigate:
Here is my Frigate configuration file. It will be in /var/lib/docker/volumes/frigate_data/_data/config.yml
I know it’s kind of a mess, and there are probably some redundant things in here, but I felt bad about still not having anything posted after so long. There are definitely some useful examples in here for Amcrest, Reolink, and Hikvision cameras, including how to use separate streams for recording and detection. Unfortunately, at the time of writing all of this up tonight, I am extremely tired and must just get it out as is.
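Just to illustrate the general shape of that record/detect split (this is not my actual file, only a minimal sketch; the camera name, MQTT host, and IPs are placeholders, and the stream paths vary by camera brand):

mqtt:
  host: 10.1.1.10 # placeholder MQTT server IP
  user: "{FRIGATE_MQTT_USERNAME}"
  password: "{FRIGATE_MQTT_PASSWORD}"

cameras:
  front_door: # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.21:554/main # high-res stream
          roles:
            - record
        - path: rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.21:554/sub # low-res stream
          roles:
            - detect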
Now, admittedly, full on support and assistance with configuring your Frigate NVR is vastly out of the scope of this guide. There is plenty of great documentation already available on the Official Frigate Website. Good luck!
Over the next week or two, as I find time and motivation (Helldivers 2 has been winning both lately), I’ll be moving some services to a new server: namely Docker/Portainer, Frigate, and Home Assistant. I’ll be doing my best to keep notes from beginning to end and finally get something posted to help anyone else trying to get these working together.
I do use a Coral, and have also used an NVIDIA card for graphics offloading. With 5-6 cameras I can’t say I noticed a huge impact from offloading the graphics, but I will try to cover that part as well since I plan to move the card over anyway. For reference, it’s just a lowly GTX 1050 Ti that I’m using for the task. I figure if I ever bother to buy a Plex license, I can use it for that as well.
I’ve been revisiting Frigate as of late, and using Home Assistant to send notifications to my phone. I wish I could do this without Home Assistant…it feels so excessive just to get camera notifications.
Anyway, after months and months of procrastinating, I hope to post my Frigate Portainer config soon, along with the overall Frigate configuration file I’m using. I’ll also attempt to cover notifications in a separate post afterwards.
Paste the above into a docker-compose.yml file; I placed mine in a ‘portainer’ folder inside my home directory. Then just run docker compose up -d
I use a folder on my system, /ssl/lancerts, which I map to /certs inside the container. You will have to modify the certificate locations in the volumes section and in the command line towards the top of the compose file. If you are not using SSL, simply comment out or remove that command line and the volume mapping.
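For reference, the relevant pieces look something like this (a minimal sketch, not necessarily the exact file referenced above; the certificate filenames are placeholders):

services:
  portainer:
    image: portainer/portainer-ce:latest
    restart: unless-stopped
    command: --sslcert /certs/fullchain.pem --sslkey /certs/privkey.pem # remove if not using SSL
    ports:
      - "9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
      - /ssl/lancerts:/certs:ro # remove if not using SSL

volumes:
  portainer_data: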