I just figured this out, and it’s too cool not to share. I have business-grade switches at my house, so I have various VLANs set up already. You’ll need that in place for this to work, with your port tagging already configured, etc.
This requires no additional configuration on the host. Below, I’ve included two examples: default_lan and vlan5. If you just want to give a container an IP on your local LAN, use default_lan. If you’re looking to put a service on a VLAN IP, use vlan5 as your example.
EDIT: YOU MAY NEED TO modprobe 8021q (and/or add it to /etc/modules)
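That is, something like:

sudo modprobe 8021q                      # load the 802.1q vlan module now
echo "8021q" | sudo tee -a /etc/modules  # load it automatically at boot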
You do not need to include default_lan in order to use a VLAN. This also, of course, works great in Portainer.
networks:
  default_lan: # the name you'll reference in the service configuration
    driver: ipvlan
    driver_opts:
      parent: enp1s0d1 # the interface on your docker host that it will tunnel through
    ipam:
      config:
        - subnet: 10.1.1.0/24 # your network's subnet
          gateway: 10.1.1.1 # your network's gateway
  vlan5:
    driver: ipvlan
    driver_opts:
      parent: enp1s0d1.5 # I've added '.5' for VLAN 5
    ipam:
      config:
        - subnet: 10.1.5.0/24 # the VLAN's subnet
          gateway: 10.1.5.1 # the VLAN's gateway

services:
  service_on_lan:
    networks:
      default_lan:
        ipv4_address: 10.1.1.51
  service_on_vlan:
    networks:
      vlan5:
        ipv4_address: 10.1.5.55
I have not tested it, but I believe you can also add a second subnet and gateway pair for IPv6 routing, then specify an ipv6_address in the service.
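Untested on my part, but a dual-stack vlan5 would presumably look something like the sketch below. The fd00:5:: prefix is a made-up ULA example (use your actual IPv6 subnet), and depending on your Docker version you may also need enable_ipv6 on the network.

networks:
  vlan5:
    driver: ipvlan
    enable_ipv6: true # may be required for IPv6 on user-defined networks
    driver_opts:
      parent: enp1s0d1.5
    ipam:
      config:
        - subnet: 10.1.5.0/24
          gateway: 10.1.5.1
        - subnet: fd00:5::/64 # example ULA prefix, not from my setup
          gateway: fd00:5::1

services:
  service_on_vlan:
    networks:
      vlan5:
        ipv4_address: 10.1.5.55
        ipv6_address: fd00:5::55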
You can also use macvlan instead, which gives the container a unique MAC address that you can see on your network. I’ve found the best way to do this is to define one network per IP, at least for my needs; otherwise you can easily run into duplicate-IP problems.
networks:
  macvlan5_5: # the name you'll reference in the service configuration; I append _5 for the IP
    driver: macvlan
    driver_opts:
      parent: enp1s0d1.5 # the interface on your docker host, plus .# for the VLAN #
    ipam:
      config:
        - subnet: 10.1.5.0/24 # your network's subnet
          gateway: 10.1.5.1 # your network's gateway
          ip_range: 10.1.5.5/32 # the static IP you want to assign to this network's container
And then just assign the network in your container (the service name below is only a placeholder; the container picks up 10.1.5.5 from the ip_range above):
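services:
  some_service: # hypothetical service name
    networks:
      macvlan5_5: # no ipv4_address needed; the /32 ip_range pins the IP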
Unfortunately, the container does not seem to register its defined hostname, so my firewall just sees a new ‘unknown’ host with a random MAC address in the ARP table.
Once you have confirmed Mosquitto is up and running, we can deploy a Frigate stack. This particular stack has a device mapping for a Google Coral A+E key accelerator, and uses /dev/dri/renderD128 for onboard graphics (Intel, in this case). You’ll want to adjust some things, such as whatever MQTT username and password you created during MQTT setup/install (see the guide for help!), as well as your camera admin username and password. If you use different usernames and passwords across your cameras, you can specify them individually in your Frigate configuration file after the stack is deployed.
Also in this stack is a configuration for using a Samba/Windows-based NAS as the volume for /media/frigate, which is where recordings and snapshots are saved. Basically, you’ll need to make some changes to the code below after pasting it, so that it suits your needs.
The majority of my configuration file was taken from the Full Reference Configuration File, which is an excellent resource with comments explaining the various options.
I had also initially planned to include my Nvidia setup/configuration sections, but the machine I just moved Frigate onto can only take one full-length card, and I’ve used that slot for something else.
services:
  frigate:
    container_name: "frigate"
    image: "ghcr.io/blakeblackshear/frigate:stable"
    hostname: "frigate"
    shm_size: 1024mb # increase if getting bus errors
    privileged: true
    restart: unless-stopped
    cap_add:
      - CAP_PERFMON
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128:ro # onboard video
      - /dev/apex_0:/dev/apex_0:ro # coral
    environment:
      - "TZ=EST5EDT" # your timezone
      - "FRIGATE_RTSP_USERNAME=admin" # camera admin username
      - "FRIGATE_RTSP_PASSWORD=password" # camera admin password
      - "FRIGATE_MQTT_USERNAME=frigate" # mqtt server username
      - "FRIGATE_MQTT_PASSWORD=password" # mqtt server password
    network_mode: host
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - data:/config
      # if you're not using a NAS, change NAS to the path you're using
      # e.g. /mnt/frigate:/media/frigate or /any/path:/media/frigate
      - NAS:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1G

volumes:
  data:
  mqtt_data:
  NAS:
    driver_opts:
      type: cifs
      o: "addr=IP.OF.NAS,username=SAMBA_USERNAME,password=SAMBA_PASSWORD,iocharset=utf8,file_mode=0600,dir_mode=0700"
      device: "//IP.OF.NAS/SharedFolder"

networks:
  frigate:
Here is my Frigate configuration file; it lives at /var/lib/docker/volumes/frigate_data/_data/config.yml.
I know it’s kind of a mess, and there are probably some redundant things in it, but I felt bad about still not having anything posted after so long. There are definitely some useful examples in it, I imagine, for Amcrest, Reolink, and Hikvision cameras, plus examples of how to use separate streams for recording and detection. Unfortunately, at the time of writing all of this up tonight, I am extremely tired and must just get it out as-is. A stripped-down sketch of the general shape follows below.
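To give a flavor of the layout, here is a minimal sketch (not my actual file) showing the MQTT section, a PCIe Coral detector, and one camera using separate streams for recording and detection. The camera IP and Amcrest-style stream paths are placeholders; substitute your own.

mqtt:
  host: 127.0.0.1 # placeholder; your MQTT server's IP
  user: "{FRIGATE_MQTT_USERNAME}" # substituted from the stack's environment variables
  password: "{FRIGATE_MQTT_PASSWORD}"

detectors:
  coral:
    type: edgetpu
    device: pci # the /dev/apex_0 device mapped in the stack

cameras:
  front_door: # placeholder camera name
    ffmpeg:
      inputs:
        # full-resolution main stream, used only for recordings
        - path: rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.20:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
        # low-resolution substream, used only for detection
        - path: rtsp://{FRIGATE_RTSP_USERNAME}:{FRIGATE_RTSP_PASSWORD}@10.1.5.20:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
    detect:
      width: 704 # match your substream's resolution
      height: 480

record:
  enabled: true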
Now, admittedly, full-on support and assistance with configuring your Frigate NVR is well beyond the scope of this guide. There is plenty of great documentation already available on the Official Frigate Website. Good luck!
Over the next week or two, as I find time and motivation (Helldivers 2 has been winning both of them lately), I’ll be moving some services to a new server, namely Docker/Portainer, Frigate, and Home Assistant. I’ll be doing my best to keep notes from beginning to end and finally get something posted to help anyone else trying to get these working together.
I do use a Coral, and have also used an Nvidia card for graphics offloading. With 5-6 cameras I can’t say I noticed a huge impact from offloading the graphics, but I will try to cover that part as well since I plan to move the card over anyway. For reference, it’s just a lowly GTX 1050 Ti that I’m using for the task. I figure if I ever bother to buy a Plex license, I can use it for that as well.
Paste the above into a docker-compose.yml file; I placed mine in a ‘portainer’ folder inside my home directory. Then just run docker compose up -d.
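In full, assuming the same folder I used:

mkdir -p ~/portainer && cd ~/portainer
nano -w docker-compose.yml # paste the compose file in and save
docker compose up -d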
I use a folder on my system, /ssl/lancerts, which I map to /certs inside the container. You will have to modify the certificate locations in the volumes section and in the command line toward the top of the compose file. If you are not using SSL, simply comment out or remove the command line at the top of the compose file and remove the volume mapping.
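For reference, a minimal sketch of what those pieces look like; the cert.pem/key.pem names are just examples, so match whatever your certificate files are called:

services:
  portainer:
    container_name: portainer
    image: portainer/portainer-ce:latest
    restart: always
    # remove this command line (and the /certs volume below) if not using SSL
    command: --sslcert /certs/cert.pem --sslkey /certs/key.pem
    ports:
      - 9443:9443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /ssl/lancerts:/certs # my local certificate folder
      - data:/data

volumes:
  data: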
This is fairly quick, with some configuration edits required at the end. In this guide, we will be installing Mosquitto MQTT inside of Portainer. If you need to install Portainer, that guide is available here.
In your Portainer environment (local, typically), click on Stacks on the left-hand side. Then, on the right-hand side of the page, click on + Add Stack. At the top of the add stack screen you’ll need to give your stack a name. This name will also be prepended to any volumes we create in the stack. I chose mosquitto for my stack name.
Then, you’ll need to paste in a compose file. Here is what I’m using, and what the remainder of the guide will be based upon:
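The key pieces are the eclipse-mosquitto image, port 1883, and a single data volume mounted at both /mosquitto/config and /mosquitto/data so everything lives in one place; roughly:

services:
  mosquitto:
    container_name: mosquitto
    image: eclipse-mosquitto:latest
    hostname: mosquitto
    restart: unless-stopped
    environment:
      - "TZ=EST5EDT" # your timezone
    ports:
      - 1883:1883
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - data:/mosquitto/config # mosquitto.conf lives at the volume root
      - data:/mosquitto/data

volumes:
  data: # becomes mosquitto_data once the stack name is prepended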
You’ll want to change EST5EDT to your own timezone (see this list to find yours). You may also want to change the hostname; personally, I have not made use of the hostnames, and you can remove the line entirely for a randomly generated hostname.
In my volumes section, I have mapped localtime. I don’t know that this is necessary (same for the TZ environment variable), but I like to add them to everything in case something does need them. Frigate, for example, definitely does.
The compose file will create a volume, mosquitto_data, and everything will reside in that volume’s root directory (/var/lib/docker/volumes/mosquitto_data/_data).
You’ll want to deploy the stack at this point, and then stop the stack shortly after so we can make a few changes.
Open up a shell, or SSH into your server, and become the root user, either with su if you know your root password, or sudo su.
cd /var/lib/docker/volumes/mosquitto_data/_data
touch passwd
nano -w mosquitto.conf
Please also take note of the touch passwd command in the above snippet. This will create a blank passwd file for us to use in a moment.
I use nano to edit my files; you can use whichever editor you are comfortable with. If you’re in a GUI, I can’t help you. Below are the main changes you’ll need to make. Since /mosquitto/data is mapped to the mosquitto_data volume, there is no need to make any subfolders.
mosquitto.conf:
# if you change the listener, you'll need to change your stack port to match
listener 1883
persistence true
persistence_file mosquitto.db
persistence_location /mosquitto/data
# logging to stderr will show the logs in Portainer's logs output
log_dest stderr
# you can also log to a file:
log_dest file /mosquitto/log/mosquitto.log
# the types of log entries we will receive:
log_type error
log_type warning
log_type notice
log_type information
log_timestamp true
log_timestamp_format %Y-%m-%dT%H:%M:%S
# do not allow anonymous access to this mqtt server
allow_anonymous false
# the password file for mosquitto mqtt
password_file /mosquitto/data/passwd
After the configuration file is in place, the last step is to add a user for accessing Mosquitto (quick edit: I believe you’ll need to start your mosquitto stack before the below command will work):
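# runs mosquitto_passwd inside the container against the passwd file we created
docker exec -it mosquitto mosquitto_passwd /mosquitto/data/passwd your_mqtt_username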
Run the above command as sudo, or as a user that is part of the docker group. It will prompt you for a password which is up to you to create. You can replace your_mqtt_username with whatever makes sense to you. For example, my MQTT user is frigate so that Frigate NVR can access the MQTT server as a user named frigate. You may just want to add one generic user instead and use that for all services.
And that’s it! You should now be able to start your Mosquitto stack and the logs should indicate it is listening on port 1883.
2023-08-01T15:29:12: mosquitto version 2.0.15 starting
2023-08-01T15:29:12: Config loaded from /mosquitto/config/mosquitto.conf.
2023-08-01T15:29:12: Opening ipv4 listen socket on port 1883.
2023-08-01T15:29:12: Opening ipv6 listen socket on port 1883.
2023-08-01T15:29:12: mosquitto version 2.0.15 running
Random side note: If you want to install nano inside of the mosquitto container for some reason (docker exec -it mosquitto sh), you’ll need to use the apk command. apk update; apk add nano
But basically it comes down to the below two commands.
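The port mappings and volume names below follow the standard Portainer install; the certificate filenames in the second command are examples.

# standard install, no SSL:
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

# with SSL (use your own certificate filenames):
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  -v /etc/ssl/private:/certs \
  portainer/portainer-ce:latest \
  --sslcert /certs/cert.pem --sslkey /certs/key.pem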
The second ‘docker run’ command is what you would use if you have an SSL certificate and key to use. In the second command, I am mapping the local folder /etc/ssl/private to inside the portainer docker container as /certs. So then Portainer can reference the certificates at /certs. You’ll need to change the path to match where you store the certificates.
If you want to install Portainer with SSL support, map your SSL certificate directory (in this example, /etc/ssl/private to /certs) and add the sslcert and sslkey options, as in the second command above.
Update 9-02-2023: I’ve stopped using Home Assistant as it’s just not for me.
Update 8-01-2023: Ok! I feel fairly confident with everything now. Initially my plan was to just give some docker run commands that would get everyone up and running quickly. But I have since discovered Stacks in Portainer, and I feel this is a much better method for deploying containers, especially since it offers an easy way to upgrade them. Truly hope to have something together eventually!
Update 7-18-2023: I’ve managed to get an iPhone, an OBS stream, and my Amcrest camera into frigate using go2rtc as a restream source. Guide is coming along nicely!
…guide will be coming soon. I am slowly learning it all this weekend. I am really enjoying Portainer. I have a camera arriving tomorrow, an Amcrest one, and hope to have everything up and running by next weekend. Then I can begin taking some screenshots for the guide.
The absolute mix of conflicting information across the internet has made this challenging at best. But I really want to run my own NVR!
Oh yeah, and I’ll include Google Coral AI support as well, assuming the card I ordered works in the PC I’m using for Frigate. Hoping to make use of the wifi card slot.
I’m using Ubuntu for the base OS. Personally, I enabled auto-login and screen sharing so I can remote desktop into it. I may switch to plain VNC later on, but this is working well for me at the moment. As I’ve always been a Gentoo Linux guy, learning Ubuntu (well, Gnome) has been interesting too. I haven’t run a window manager in YEARS!