Add pet-aware security features design spec

Covers pet detection (YOLOv8), pet ID classifier, wildlife threat
tiers, zone-based alerting, training UI, and pet dashboard.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Aaron D. Lee 2026-04-03 12:50:14 -04:00
parent 11d776faa6
commit 4b8da811df

# Pet-Aware Security Features — Design Spec
## Overview
Add pet detection, identification, wildlife monitoring, and pet activity tracking to Vigilar. This transforms the system from a people-and-vehicle security platform into a comprehensive home awareness system that knows your pets by name, alerts when they are somewhere they shouldn't be, and tracks wildlife activity with threat-tiered notifications.
### Pets
| Name | Species | Breed | Description |
|------|---------|-------|-------------|
| Angel | Cat (she/her) | Black DSH | 8yo, short hair, black ("pocket panther") |
| Taquito (Tito/Quito) | Cat (she/her) | Maine Coon mix | 2yo, long hair, tan/grey/brown, small for breed, long whiskers |
| Milo | Dog (he/him) | Pomsky | 6mo, shorter hair (not woolly), husky with squirrel tail |
### Camera Layout
| Camera | Location Type | Pet Alert Behavior |
|--------|--------------|-------------------|
| Front Entrance (porch + driveway) | `exterior` | Pet = ALERT, Wildlife = tiered |
| Back Deck | `exterior` | Pet = ALERT, Wildlife = tiered |
| Side Entrance / Bootroom | `transition` | Pet = ALERT (escape risk) |
| Garage | `transition` | Pet = ALERT (escape risk) |
| Kitchen / Living Room | `interior` | Pet = LOG (location update only) |
## 1. Detection Pipeline
### Architecture: Unified Model
Replace MobileNet-SSD v2 with YOLOv8-small as the single object detection model. YOLOv8-small handles people, vehicles, AND animals in one inference pass. The pet ID classifier runs as a lightweight second stage on domestic animal crops only.
```
Motion Detected (MOG2, unchanged)
→ Frame Captured
→ YOLOv8-small inference (replaces MobileNet-SSD)
→ Person/Vehicle: existing event flow (unchanged behavior)
→ Cat/Dog (domestic): crop → Pet ID Classifier → named pet or "Unknown"
→ Bear/Bird/etc (wildlife): classify threat tier → event
```
### YOLOv8-small
- Model: `yolov8s.pt` (~22MB)
- Input: 640x640
- COCO-80 classes including: person, car, truck, cat, dog, bear, bird, horse, cow, sheep
- Confidence threshold: 0.5 (configurable)
- Replaces both `person.py` and `vehicle.py` detection — vehicle color/size fingerprinting remains as a post-detection stage
### Pet ID Classifier
- Model: MobileNetV3-Small, transfer learning from ImageNet
- Input: cropped bounding box resized to 224x224
- Output: pet name + confidence score
- High confidence (>= 0.7): auto-identified
- Low confidence (0.5 - 0.7): queued for human review in unlabeled queue
- Below 0.5: classified as "Unknown Cat" / "Unknown Dog"
- Only runs on cat/dog detections, never on wildlife
- Stored at `/var/vigilar/models/pet_id.pt` with backup at `pet_id_backup.pt`
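The confidence routing above can be sketched as a small dispatch function. This is illustrative only; function and constant names are assumptions, not the actual Vigilar API.

```python
PET_ID_THRESHOLD = 0.7        # >= 0.7: auto-identified
PET_ID_LOW_CONFIDENCE = 0.5   # 0.5-0.7: queued for human review

def route_pet_id(name: str, confidence: float, species: str) -> dict:
    """Map a pet ID classifier result to the handling described above."""
    if confidence >= PET_ID_THRESHOLD:
        return {"pet": name, "action": "auto_identify"}
    if confidence >= PET_ID_LOW_CONFIDENCE:
        return {"pet": name, "action": "review_queue"}
    # Below the low-confidence floor: fall back to a generic label.
    return {"pet": f"Unknown {species.capitalize()}", "action": "log"}
```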
### Wildlife Threat Classification
COCO classes map directly where possible:
| Threat Level | COCO Classes | Heuristic Classes |
|-------------|-------------|-------------------|
| PREDATOR | bear | fox, coyote (size heuristic: medium quadruped) |
| NUISANCE | — | raccoon, skunk, possum (size heuristic: small quadruped) |
| PASSIVE | bird, horse, cow, sheep | deer (size heuristic: large quadruped) |
Size heuristics for non-COCO wildlife use bounding box area as percentage of frame:
- Small (< 2%): likely raccoon, skunk, possum → NUISANCE
- Medium (2-8%): likely fox, coyote → PREDATOR
- Large (> 8%): likely deer → PASSIVE (or bear, which COCO catches directly)
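A minimal sketch of the size heuristic, using bounding box area as a fraction of frame area. The species groupings and function name are illustrative.

```python
def classify_by_size(bbox_area: float, frame_area: float) -> tuple[str, str]:
    """Return (likely species group, threat tier) for a non-COCO quadruped."""
    frac = bbox_area / frame_area
    if frac < 0.02:                          # small: < 2% of frame
        return ("raccoon/skunk/possum", "NUISANCE")
    if frac < 0.08:                          # medium: 2-8% of frame
        return ("fox/coyote", "PREDATOR")
    return ("deer", "PASSIVE")               # large: > 8% of frame
```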
Phase 2 consideration: a dedicated wildlife classifier similar to pet ID, trained on local wildlife photos. Not in scope for this spec.
## 2. Zone Logic & Alert Routing
### Camera Location Types
Each camera gets a `location` field: `exterior`, `interior`, or `transition`. Transition zones (bootroom, garage) are treated as exterior for pet escape purposes.
### Alert Matrix
| Detection | Interior | Exterior/Transition |
|-----------|----------|-------------------|
| Known pet | LOG (silent location update) | ALERT (PET_ESCAPE event) |
| Unknown domestic animal | WARN (push notification) | WARN (push notification) |
| Wildlife — Predator | URGENT ALERT (shouldn't happen but covered) | URGENT ALERT |
| Wildlife — Nuisance | URGENT ALERT (shouldn't happen) | NOTIFY (normal push) |
| Wildlife — Passive | LOG | LOG (record only) |
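The alert matrix above is effectively a lookup keyed on detection type and location group; a sketch (names assumed, not the real routing code):

```python
ALERT_MATRIX = {
    ("known_pet", "interior"): "LOG",
    ("known_pet", "exterior"): "ALERT",
    ("unknown_animal", "interior"): "WARN",
    ("unknown_animal", "exterior"): "WARN",
    ("wildlife_predator", "interior"): "URGENT_ALERT",
    ("wildlife_predator", "exterior"): "URGENT_ALERT",
    ("wildlife_nuisance", "interior"): "URGENT_ALERT",
    ("wildlife_nuisance", "exterior"): "NOTIFY",
    ("wildlife_passive", "interior"): "LOG",
    ("wildlife_passive", "exterior"): "LOG",
}

def route_alert(detection_type: str, camera_location: str) -> str:
    # Transition zones are treated as exterior for alerting purposes.
    group = "exterior" if camera_location in ("exterior", "transition") else "interior"
    return ALERT_MATRIX[(detection_type, group)]
```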
### Alert Profile Integration
New detection types plug into the existing smart alert profile system. New `detection_type` values for profile rules:
- `known_pet` — identified pet (Angel, Taquito, or Milo)
- `unknown_animal` — domestic cat/dog not in registry
- `wildlife_predator` — bear, fox, coyote
- `wildlife_nuisance` — raccoon, skunk, possum
- `wildlife_passive` — deer, bird, rabbit
New `location` field on alert profile rules filters by the camera's `location` property (e.g., `known_pet` + `exterior` = push_and_record, `known_pet` + `interior` = quiet_log). When `location` is omitted from a rule, it matches any camera location.
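The "omitted location matches any camera" semantics could look like this, assuming profile rules are plain dicts (a simplification of whatever the real rule objects are):

```python
def rule_matches(rule: dict, detection_type: str, camera_location: str) -> bool:
    """A rule with no 'location' key matches any camera location."""
    if rule.get("detection_type") != detection_type:
        return False
    loc = rule.get("location")
    return loc is None or loc == camera_location
```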
### New MQTT Topics
```
vigilar/camera/{id}/pet/detected — pet detection with ID result
vigilar/camera/{id}/wildlife/detected — wildlife detection with threat tier
vigilar/pets/{name}/location — pet location updates (last-seen camera)
```
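A sketch of building the topic and a JSON payload for the pet-detected topic. The payload schema here is an assumption; the spec only defines the topic paths.

```python
import json
import time

def pet_detected_message(camera_id: str, pet_name: str, confidence: float) -> tuple[str, str]:
    """Return (topic, payload) for a pet detection publish."""
    topic = f"vigilar/camera/{camera_id}/pet/detected"
    payload = json.dumps({
        "pet": pet_name,
        "confidence": confidence,
        "ts": time.time(),
    })
    return topic, payload
```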
### New Event Types
| Event Type | Severity | Trigger |
|-----------|----------|---------|
| PET_DETECTED | INFO | Known pet identified on any camera |
| PET_ESCAPE | ALERT | Known pet detected in exterior/transition zone |
| UNKNOWN_ANIMAL | WARNING | Unregistered domestic animal detected |
| WILDLIFE_PREDATOR | CRITICAL | Bear, fox, coyote detected |
| WILDLIFE_NUISANCE | WARNING | Raccoon, skunk, possum detected |
| WILDLIFE_PASSIVE | INFO | Deer, bird, rabbit detected |
## 3. Pet ID Training Interface
Two training paths feed the same classifier:
### Path 1: Photo Upload
Pet Profiles page (`/pets/`) where users manage registered pets. Each pet shows name, species, breed, profile photo, training image count, and model status (untrained / needs more photos / trained with accuracy %).
Upload interface: drag-and-drop or file picker for JPG/PNG images. Multiple files supported. Images are auto-cropped to the animal region if a bounding box can be detected, otherwise stored as-is. Stored in `/var/vigilar/pets/training/{pet_name}/`.
### Path 2: Label From Playback
When viewing recordings, detected animals show bounding boxes overlaid on the video. Clicking a bounding box opens a labeling popup: "Who is this?" with buttons for each registered pet of the matching species (cat detection only shows cat profiles, dog detection only shows dog profiles), plus "Unknown" and "+ New Pet" options.
### Unlabeled Detection Queue
Every animal detection auto-saves a cropped image to `/var/vigilar/pets/staging/`. These appear in an "Unlabeled Detections" section on the Pet Profiles page as clickable thumbnails. Users can quickly label them (assigning to a pet) or dismiss them. Unlabeled crops auto-expire after 7 days.
Low-confidence identifications (0.5-0.7) are prioritized in this queue — they represent the cases where human input most improves the model.
### Training Process
- Minimum 20 labeled images per pet to train
- Transfer learning from ImageNet pretrained MobileNetV3-Small
- Training triggered manually from the UI ("Train Model" button)
- UI suggests retraining when 20+ new labeled images accumulate since last training
- Training time: ~2-5 minutes on x86_64
- Previous model backed up before overwrite for rollback
- Training runs in a background process, UI shows progress
## 4. Pet Dashboard & Activity Features
### Pet Dashboard
New section on the main dashboard or accessible via `/pets/` showing:
**Per-pet cards** with:
- Name, species, profile photo/icon
- Current status: Safe (interior) / Check (transition zone) / Alert (exterior)
- Last known location (camera name)
- Last seen timestamp
- Active time today
- Mini location trail (last 3 hours of camera transitions)
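The Safe/Check/Alert status follows directly from the last-seen camera's location type; a trivial sketch:

```python
def pet_status(camera_location: str) -> str:
    """Derive the dashboard status from the last-seen camera's location type."""
    return {
        "interior": "Safe",
        "transition": "Check",
        "exterior": "Alert",
    }[camera_location]
```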
**Wildlife summary bar:**
- Today's counts by threat tier (predators, nuisance, passive)
- Most recent notable sighting with timestamp and camera
- Link to full wildlife log
**Activity timeline:**
- Per-pet horizontal bar showing detection activity across 24 hours
- Color-coded by camera/zone (orange for transition/exterior zones)
- Integrates visually with the existing recording timeline component
### Daily Pet Digest
Appended to the existing daily digest push notification. Per-pet summary including:
- Activity duration
- Rooms/zones visited
- Notable moments (zoomies, birdwatching, escape alerts)
- Wildlife summary for the day
### Highlight Reel
Auto-generated short clip references based on heuristics:
- **Zoomies**: pet detected + high motion intensity (>0.8 threshold, configurable)
- **Birdwatching**: cat detected + window zone + sustained presence (>2 min)
- **Wildlife encounter**: any wildlife detection event
- **Pet escape**: PET_ESCAPE event
Highlights are bookmarks into existing recordings (start timestamp, duration, camera, description) — no additional storage. Displayed on the pet dashboard as a scrollable list with playback links.
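Since highlights are only bookmarks, the data involved is small; a sketch of the zoomies heuristic, with field names assumed:

```python
from dataclasses import dataclass

@dataclass
class Highlight:
    """A bookmark into an existing recording; no extra video storage."""
    camera_id: str
    start_ts: float
    duration_s: float
    description: str

def maybe_zoomies(pet: str, motion_intensity: float, camera_id: str,
                  ts: float, threshold: float = 0.8):
    """Return a Highlight when a pet detection coincides with high motion."""
    if motion_intensity > threshold:
        return Highlight(camera_id, ts, 30.0, f"Zoomies: {pet}")
    return None
```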
## 5. Data Model
### New Tables
**`pets`**
| Column | Type | Notes |
|--------|------|-------|
| id | TEXT PK | UUID |
| name | TEXT NOT NULL | Display name |
| species | TEXT NOT NULL | "cat" or "dog" |
| breed | TEXT | Optional breed info |
| color_description | TEXT | For reference, not used in detection |
| photo_path | TEXT | Profile photo path |
| training_count | INTEGER | Number of labeled training images |
| created_at | REAL | Timestamp |
**`pet_sightings`**
| Column | Type | Notes |
|--------|------|-------|
| id | INTEGER PK | Autoincrement |
| ts | REAL NOT NULL | Detection timestamp |
| pet_id | TEXT FK → pets | NULL if unknown |
| species | TEXT NOT NULL | "cat" or "dog" |
| camera_id | TEXT NOT NULL | Which camera |
| confidence | REAL | Pet ID confidence |
| crop_path | TEXT | Saved bounding box crop |
| labeled | BOOLEAN | Human-confirmed identification |
| event_id | INTEGER FK → events | Linked event |
**`wildlife_sightings`**
| Column | Type | Notes |
|--------|------|-------|
| id | INTEGER PK | Autoincrement |
| ts | REAL NOT NULL | Detection timestamp |
| species | TEXT NOT NULL | "bear", "deer", etc. |
| threat_level | TEXT NOT NULL | PREDATOR, NUISANCE, or PASSIVE |
| camera_id | TEXT NOT NULL | Which camera |
| confidence | REAL | Detection confidence |
| crop_path | TEXT | Saved crop image |
| event_id | INTEGER FK → events | Linked event |
**`pet_training_images`**
| Column | Type | Notes |
|--------|------|-------|
| id | INTEGER PK | Autoincrement |
| pet_id | TEXT FK → pets | Which pet |
| image_path | TEXT NOT NULL | Path to crop |
| source | TEXT NOT NULL | "upload", "detection", or "playback" |
| created_at | REAL | Timestamp |
### Indexes
- `idx_pet_sightings_ts` — ts DESC
- `idx_pet_sightings_pet` — pet_id, ts DESC
- `idx_pet_sightings_camera` — camera_id, ts DESC
- `idx_wildlife_ts` — ts DESC
- `idx_wildlife_threat` — threat_level, ts DESC
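The `pets` and `pet_sightings` tables plus their indexes translate to SQLite DDL as below (the other tables follow the same pattern). The foreign key to `events` is shown as a plain INTEGER here to keep the snippet self-contained.

```python
import sqlite3

SCHEMA = """
CREATE TABLE pets (
    id TEXT PRIMARY KEY,
    name TEXT NOT NULL,
    species TEXT NOT NULL,
    breed TEXT,
    color_description TEXT,
    photo_path TEXT,
    training_count INTEGER,
    created_at REAL
);
CREATE TABLE pet_sightings (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    ts REAL NOT NULL,
    pet_id TEXT REFERENCES pets(id),
    species TEXT NOT NULL,
    camera_id TEXT NOT NULL,
    confidence REAL,
    crop_path TEXT,
    labeled BOOLEAN,
    event_id INTEGER
);
CREATE INDEX idx_pet_sightings_ts ON pet_sightings(ts DESC);
CREATE INDEX idx_pet_sightings_pet ON pet_sightings(pet_id, ts DESC);
CREATE INDEX idx_pet_sightings_camera ON pet_sightings(camera_id, ts DESC);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```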
## 6. Configuration
### New Config Sections
```toml
[pets]
enabled = true
model = "yolov8s"
model_path = "/var/vigilar/models/yolov8s.pt"
confidence_threshold = 0.5
# Pet ID classifier
pet_id_enabled = true
pet_id_model_path = "/var/vigilar/models/pet_id.pt"
pet_id_threshold = 0.7
pet_id_low_confidence = 0.5
# Training data
training_dir = "/var/vigilar/pets/training"
crop_staging_dir = "/var/vigilar/pets/staging"
crop_retention_days = 7
min_training_images = 20
[pets.wildlife]
[pets.wildlife.threat_map]
predator = ["bear"]
nuisance = []
passive = ["bird", "horse", "cow", "sheep"]
[pets.wildlife.size_heuristics]
small = 0.02
medium = 0.08
large = 0.15
[pets.activity]
daily_digest = true
highlight_clips = true
zoomie_threshold = 0.8
```
### Camera Config Addition
```toml
[[cameras]]
id = "front_entrance"
name = "Front Entrance"
location = "exterior" # NEW FIELD
# ... existing fields
```
### New StrEnums in constants.py
```python
from enum import StrEnum  # requires Python 3.11+

class ThreatLevel(StrEnum):
    PREDATOR = "predator"
    NUISANCE = "nuisance"
    PASSIVE = "passive"

class CameraLocation(StrEnum):
    EXTERIOR = "exterior"
    INTERIOR = "interior"
    TRANSITION = "transition"
```
New EventType values: `PET_DETECTED`, `PET_ESCAPE`, `UNKNOWN_ANIMAL`, `WILDLIFE_PREDATOR`, `WILDLIFE_NUISANCE`, `WILDLIFE_PASSIVE`.
## 7. File Storage
```
/var/vigilar/
models/
yolov8s.pt — main detection model (~22MB)
pet_id.pt — trained pet classifier (~8MB)
pet_id_backup.pt — previous version (rollback)
pets/
training/ — labeled images for training
angel/
taquito/
milo/
staging/ — unlabeled detection crops (auto-expire 7d)
profiles/ — pet profile photos
```
## 8. New Web Routes
| Route | Method | Purpose |
|-------|--------|---------|
| `/pets/` | GET | Pet dashboard and profiles page |
| `/pets/api/status` | GET | Current pet locations and status (JSON) |
| `/pets/api/sightings` | GET | Pet sighting history (JSON, filterable) |
| `/pets/api/wildlife` | GET | Wildlife sighting history (JSON, filterable) |
| `/pets/<id>/upload` | POST | Upload training photos for a pet |
| `/pets/<id>/label` | POST | Label a detection crop as a specific pet |
| `/pets/register` | POST | Register a new pet |
| `/pets/<id>/update` | POST | Update pet profile |
| `/pets/<id>/delete` | DELETE | Remove a pet |
| `/pets/train` | POST | Trigger model training |
| `/pets/api/training-status` | GET | Training progress (JSON) |
| `/pets/api/highlights` | GET | Today's highlight clips (JSON) |
| `/pets/api/unlabeled` | GET | Unlabeled detection crops queue (JSON) |
## 9. Dependencies
### New Python Packages
- `ultralytics` — YOLOv8 inference and model loading
- `torchvision` — MobileNetV3-Small for pet ID classifier (torch is a dependency of ultralytics)
### Model Downloads
- YOLOv8-small: auto-downloaded by ultralytics on first run, or pre-placed at configured path
- Pet ID model: trained locally from user's labeled images, no download needed
### No Removed Dependencies
- OpenCV remains for frame capture, motion detection, image manipulation
- MobileNet-SSD model files can be removed after migration is validated
## 10. Migration Path
The YOLOv8 model replaces MobileNet-SSD for all detection. To minimize risk:
1. Implement YOLOv8 detector with the same output interface as current MobileNet detector
2. Validate person/vehicle detection accuracy matches or exceeds MobileNet
3. Add animal detection classes and pet ID pipeline
4. Remove MobileNet code and model files once validated
Existing vehicle fingerprinting (color + size profiling) continues to work — it receives bounding box crops from the new detector instead of the old one.
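Step 1's "same output interface" could be pinned down with a small protocol, so the MobileNet and YOLOv8 detectors are drop-in interchangeable during migration. The type names here are assumptions; the stub stands in for the real ultralytics-backed wrapper.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Detection:
    label: str                        # COCO class name, e.g. "person", "cat"
    confidence: float
    bbox: tuple[int, int, int, int]   # x, y, w, h in frame pixels

class Detector(Protocol):
    """Shared interface for old and new detectors during migration."""
    def detect(self, frame) -> list[Detection]: ...

class StubDetector:
    """Stand-in; the real YOLOv8 wrapper would run ultralytics inference
    and convert its results into Detection objects."""
    def detect(self, frame) -> list[Detection]:
        return [Detection("person", 0.91, (10, 20, 50, 120))]
```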