How I Set Up Frigate NVR with Face Recognition on an Intel N100 Mini PC
Those of you who follow my channel know that I already had Frigate NVR running on Proxmox until my Mini PC crashed, leaving my 5 Reolink cameras with no recording and no object detection. After a break-in in my neighborhood, proper camera monitoring became even more urgent, so I decided to reinvest.
100% Local – No Cloud Required
This is what makes this setup special: everything runs locally. No subscriptions, no cloud dependencies, no data leaving your network. Here is where each component lives:
| Component | Runs On | Cloud? | Purpose |
|---|---|---|---|
| Proxmox VE | N100 Mini PC | No | Hypervisor / Host OS |
| Frigate NVR | N100 (Docker) | No | Object detection + recording |
| Coral TPU | N100 (USB) | No | Hardware-accelerated detection (10ms/frame) |
| Face Recognition | N100 (Docker) | No | InsightFace identifies family members |
| Noise Monitor | N100 (Docker) | No | Audio analysis from camera RTSP feeds |
| Ollama + Gemma 3 | N100 (native) | No | Local LLM for voice control |
| MQTT Broker | Home Assistant | No | Event communication |
| Home Assistant | Separate server | No | Automation + dashboard |
| Recording Storage | 5.5TB External HDD | No | 14-30 days retention |
The only external API call in the entire stack is the optional GPT-4o Vision check for trash bin detection – and even that could be replaced with a local model.
Why does this matter?
- Your camera feeds never leave your network
- Face recognition data stays on your hardware
- No monthly subscriptions (looking at you, Ring and Nest)
- Works during internet outages
- You own your data, forever
Hardware
| Component | Price | Purpose |
|---|---|---|
| Intel N100 Mini PC (16GB RAM, 512GB SSD) | ~$250 | Frigate NVR host |
| Google Coral USB TPU | already owned | Object detection acceleration |
| WD Elements 5.5TB External HDD | ~$155 | Recording storage |
Total: ~$405 (but you can start with just the Mini PC + Coral for ~$280)
Step 1: Install Proxmox VE
I chose Proxmox as the base OS because it gives me flexibility to run additional services later (like Ollama for local AI).
- Download the Proxmox VE 9.1 ISO from proxmox.com
- Flash it to a USB stick with Balena Etcher or `dd`
- Boot from USB and install to the internal SSD
- After install, disable the enterprise repos:

```bash
# SSH into Proxmox
# Proxmox VE 9 ships the enterprise repo in deb822 format – disable it by renaming
mv /etc/apt/sources.list.d/pve-enterprise.sources /etc/apt/sources.list.d/pve-enterprise.sources.disabled
echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt full-upgrade -y
```
Step 2: Network Setup
My cameras are in a separate VLAN (NoT – Network of Things, 192.168.3.x). The N100 needs direct access to this VLAN for RTSP streams.
Important: I initially put the N100 in my default VLAN and tried firewall rules to reach the camera VLAN – this did NOT work reliably. The solution: put the N100 directly in the camera VLAN via a dedicated switch port with the correct port profile.
You also need:
- Internet access for the N100 (for Docker pulls, updates)
- Access to Home Assistant for MQTT
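Before going further, it is worth verifying that the N100 can actually reach each camera's RTSP port from inside the camera VLAN. Here is a minimal sketch of such a check; the helper name and the hard-coded camera IP are illustrative, not part of the actual service (Reolink cameras serve RTSP on TCP 554):

```python
import socket

def rtsp_port_open(host: str, port: int = 554, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it against each camera IP (e.g. `rtsp_port_open("192.168.3.79")`); if this returns False, fix the VLAN/port profile before debugging anything in Frigate.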
Step 3: Install Docker + Frigate
I run Docker directly on the Proxmox host (simpler for USB passthrough than LXC):
```bash
apt install -y docker.io docker-compose
systemctl enable --now docker
```
Create /opt/frigate/docker-compose.yml:
```yaml
services:
  frigate:
    container_name: frigate
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    privileged: true
    shm_size: "256mb"
    volumes:
      - /opt/frigate/config:/config
      - /mnt/recordings:/media/frigate
      - /dev/bus/usb:/dev/bus/usb
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "5000:5000"
      - "8971:8971"
      - "1984:1984"
      - "8554:8554"
      - "8555:8555/tcp"
      - "8555:8555/udp"
```
Step 4: Frigate Configuration
Create /opt/frigate/config/config.yml:
```yaml
mqtt:
  enabled: true
  host: 192.168.1.70  # Your HA IP
  user: mqtt
  password: your_mqtt_password
  port: 1883

detectors:
  coral:
    type: edgetpu
    device: usb

ffmpeg:
  hwaccel_args: preset-vaapi
  output_args:
    record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -an

record:
  enabled: true
  alerts:
    retain:
      days: 30
      mode: motion
  detections:
    retain:
      days: 14
      mode: motion

objects:
  track:
    - person
    - car
    - dog
    - cat

detect:
  enabled: true

snapshots:
  enabled: true
  retain:
    default: 14

cameras:
  living_room:  # one entry per camera
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.3.79/Preview_01_main
          roles: [record]
        - path: rtsp://user:pass@192.168.3.79/Preview_01_sub
          roles: [detect]
    detect:
      fps: 5
  # Add more cameras following the same pattern
```
Pro tip: Use the sub-stream for detection (lower resolution = faster processing) and the main stream for recording (full quality).
Step 5: Face Recognition with InsightFace
This is where it gets really interesting. Frigate tells you WHAT is in the frame (person, car, dog). But it does not tell you WHO. I built a custom service that identifies family members by their face.
Why not dlib/face_recognition?
I initially tried the popular Python face_recognition library (based on dlib). It completely failed. At 640×480 security camera resolution, it could not detect a single face. Even with 3x upscaling and the CNN model – zero faces found. The faces are simply too small in a wide-angle camera image.
InsightFace with the buffalo_sc model solved this. It detected faces that dlib completely missed.
The Architecture
```
Frigate detects "person" → MQTT event
        ↓
Face Recognition Service (Docker container)
        ↓ grabs FULL camera frame (not the event crop!)
        ↓ InsightFace: detect face + compute embedding
        ↓ compare against known_faces folder
        ↓
Match found → Set Frigate sub-label + publish to HA via MQTT
No match   → Periodic scanner retries every 15 seconds on all cameras
```
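The first step in that pipeline is filtering Frigate's MQTT events. Frigate publishes JSON on `frigate/events` with a `type` field (`new`/`update`/`end`) and `before`/`after` snapshots of the tracked object; the sketch below shows how the service could pick out fresh "person" events. The exact function name is mine, not from the actual service:

```python
import json

def is_new_person_event(payload: bytes) -> tuple[bool, dict]:
    """Return (True, event_info) for a brand-new 'person' event, else (False, {})."""
    event = json.loads(payload)
    after = event.get("after", {})
    if event.get("type") == "new" and after.get("label") == "person":
        # camera + event id are what the scanner needs later
        return True, {"camera": after.get("camera"), "event_id": after.get("id")}
    return False, {}
```

Everything else (cars, dogs, `update` events) is ignored, which keeps the CPU-bound face pipeline from being flooded.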
Building the Service
Create the Dockerfile (/opt/face-recognition/Dockerfile):
```dockerfile
FROM python:3.11-slim-bookworm

RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake libgl1 libglib2.0-0 && \
    rm -rf /var/lib/apt/lists/*

RUN pip install --no-cache-dir insightface onnxruntime numpy pillow requests paho-mqtt opencv-python-headless

WORKDIR /app
COPY app.py /app/app.py

CMD ["python", "app.py"]
```
The core of app.py – the identification function:
```python
from insightface.app import FaceAnalysis
import cv2
import numpy as np

# Initialize InsightFace (once at startup, takes ~60s on N100)
app = FaceAnalysis(name='buffalo_sc', providers=['CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

def identify_all_faces(image_bytes, known_embeddings):
    # Decode image
    img = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), cv2.IMREAD_COLOR)

    # Upscale for better detection (critical for security cameras!)
    h, w = img.shape[:2]
    if w < 1000:
        img = cv2.resize(img, (w * 2, h * 2))

    # Detect and get embedding
    faces = app.get(img)
    if not faces:
        return "no_face", 0

    # Compare against known faces using cosine similarity
    embedding = faces[0].embedding
    best_match, best_sim = "unknown", -1
    for person, known_embs in known_embeddings.items():
        for known_emb in known_embs:
            sim = np.dot(embedding, known_emb) / (
                np.linalg.norm(embedding) * np.linalg.norm(known_emb))
            if sim > best_sim:
                best_sim, best_match = sim, person

    if best_sim >= 0.3:  # Threshold (higher = stricter)
        return best_match, round(float(best_sim) * 100, 1)
    return "unknown", round(float(best_sim) * 100, 1)
```
The Background Scanner (Key Feature)
The initial event snapshot often shows the person from behind (walking into the room). By the time they turn around, the event snapshot is frozen. My solution: a background thread that re-scans every 15 seconds using a fresh latest.jpg from Frigate:
```python
import time
import requests

def scan_active_persons():
    while True:
        time.sleep(15)
        for camera, info in list(active_persons.items()):
            if info["identified"]:
                continue
            # Grab fresh image (not the frozen event snapshot!)
            resp = requests.get(f"{FRIGATE_URL}/api/{camera}/latest.jpg")
            name, confidence = identify_all_faces(resp.content, known_embeddings)
            if name not in ("no_face", "unknown"):
                # Match found! Set sub-label in Frigate and stop re-scanning
                requests.post(
                    f"{FRIGATE_URL}/api/events/{info['event_id']}/sub_label",
                    json={"subLabel": name})
                info["identified"] = True
```
Training: Camera-Perspective Photos Are Essential
This was my biggest lesson: selfies and portrait photos are not enough. The security camera sees you from a completely different angle (typically above, wide-angle lens).
| Training Data | Recognition Score |
|---|---|
| Only selfies/portraits | 33% (failed threshold) |
| Added camera snapshot | 55% (match!) |
| Multiple camera angles | 60%+ |
How to get camera training photos:
1. Stand in front of each camera, facing towards it
2. Grab a snapshot: `curl http://frigate-ip:5000/api/CameraName/latest.jpg > photo.jpg`
3. Copy it to `/opt/face-recognition/known_faces/yourname/`
4. Restart the container
Aim for 5-10 photos per person: selfies + camera snapshots from different rooms.
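At startup the service has to turn that folder layout (one subfolder per person, photos inside) into the `known_embeddings` dict. A minimal, stdlib-only sketch of the directory-scanning half is below; the function name is mine, and the resulting paths would then be run through InsightFace to compute the actual embeddings:

```python
from pathlib import Path

def collect_known_faces(root: str = "/known_faces") -> dict:
    """Map each person (subfolder name) to their training photo paths."""
    faces: dict = {}
    for person_dir in sorted(Path(root).iterdir()):
        if not person_dir.is_dir():
            continue
        photos = sorted(p for p in person_dir.iterdir()
                        if p.suffix.lower() in {".jpg", ".jpeg", ".png"})
        if photos:
            faces[person_dir.name] = photos
    return faces
```

Non-image files are skipped, so you can keep notes alongside the photos without breaking the loader.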
Add to docker-compose.yml
```yaml
  face-recognition:
    container_name: face-recognition
    image: face-recognition:local
    build: /opt/face-recognition
    restart: unless-stopped
    volumes:
      - /opt/face-recognition/known_faces:/known_faces
```
MQTT Discovery for Home Assistant
The service automatically registers sensors via MQTT Discovery:
- `sensor.face_recognition_living_room` → name of last person recognized
- `sensor.face_recognition_office` → name of last person recognized
- `sensor.face_recognition_persons_home` → count of family members detected
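Home Assistant's MQTT Discovery picks up any sensor whose config is published under `homeassistant/sensor/<object_id>/config`. Here is a sketch of how the service could build that registration message; the state topic layout (`face_recognition/<camera>/state`) is my assumption, not necessarily what the actual service uses:

```python
import json

def discovery_message(camera: str) -> tuple:
    """Build the (topic, payload) pair that registers one face sensor in HA."""
    object_id = f"face_recognition_{camera}"
    topic = f"homeassistant/sensor/{object_id}/config"
    payload = {
        "name": f"Face Recognition {camera.replace('_', ' ').title()}",
        "unique_id": object_id,
        # hypothetical state topic – the service publishes the recognized name here
        "state_topic": f"face_recognition/{camera}/state",
    }
    return topic, json.dumps(payload)
```

Publishing this once with the retain flag set means the sensors survive HA restarts without any YAML configuration on the HA side.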
Step 6: Home Assistant Integration
- Install the Frigate integration via HACS
- Add the integration with URL = `http://your-n100-ip:5000`
- All cameras appear as `camera.frigate_*` entities

I created a "Frigate-first" setup: the dashboard and automations use the Frigate camera entities as primary, with a fallback to the direct Reolink entities if Frigate is down.
Face Recognition Entities
The face recognition service creates MQTT Discovery sensors in HA:
- `sensor.face_recognition_living_room` = `dad`
- `sensor.face_recognition_persons_home` = `3`
Smart Automations
- Known family at outdoor camera → Silent room tracking update
- Unknown person at garden/garage → Alert with snapshot
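The "unknown person" automation can be sketched in HA's automation YAML roughly like this. Entity IDs, the notify service name, and the camera name are placeholders that depend on your setup:

```yaml
# Hypothetical example – adjust entity and notify service names to your setup
alias: Unknown person in garden
trigger:
  - platform: state
    entity_id: sensor.face_recognition_garden
    to: "unknown"
action:
  - service: notify.mobile_app_your_phone
    data:
      message: "Unknown person detected in the garden"
      data:
        image: "http://your-n100-ip:5000/api/garden/latest.jpg"
```

The known-family automation is the mirror image: trigger on a family member's name and silently update a room-presence helper instead of notifying.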
Step 7: Bonus – AI Trash Bin Detection
My garage camera can see the trash bins. On the eve of collection day, a GPT-4o Vision workflow:
1. Takes a snapshot of the garage
2. Asks GPT: "Which of the 4 bins are missing?"
3. If the correct bin is still inside → push a reminder to take it out
Performance on N100
| Metric | Value |
|---|---|
| CPU Usage (5 cameras) | ~30-40% |
| RAM Usage | ~6GB of 16GB |
| Coral TPU detection time | ~10ms per frame |
| Face recognition (InsightFace) | ~3-5s per face |
| Recording storage | ~50-100GB per day |
| Ollama Gemma 3 4B | 2 tok/s (CPU only) |
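A quick sanity check confirms that the 5.5TB drive comfortably covers the configured 14-30 day retention at these recording rates (the arithmetic below uses the table's own numbers):

```python
def max_retention_days(capacity_gb: float, daily_gb: float) -> float:
    """How many days of recordings fit on the drive."""
    return capacity_gb / daily_gb

# 5.5 TB ≈ 5500 GB, at 50-100 GB/day from the table above
worst_case = max_retention_days(5500, 100)  # 55 days
best_case = max_retention_days(5500, 50)    # 110 days
```

Even in the worst case there is room for 55 days, so Frigate's retention settings, not disk space, are the limiting factor.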
What’s Next
- Improving face recognition accuracy with more camera-angle training photos
- Ollama with Gemma 3 for local voice control (already installed, 2 tok/s)
- Potentially upgrading to Apple Silicon Mac Mini for faster local AI