
YOLOv11 on Raspberry Pi 5 & Jetson Nano 2026: Edge AI Object Detection

🟡 Intermediate

Deploy YOLOv11 object detection on edge hardware (Raspberry Pi 5, Jetson Nano, Orange Pi) for real-time inference without cloud APIs. Covers model quantization, hardware optimization, and privacy-first deployments for wildlife monitoring, garden sensors, and edge security cameras.

Author: Kofi Mensah, Inference Economics & Hardware Architect

Reading time: 26 min
Build time: 45 min (including model optimization)


The Edge AI Privacy Paradox

You want to monitor your home, garden, or business for security and insights. But sending video to cloud surveillance services (Ring, Wyze, Google Nest) creates three problems:

  1. Metadata Leakage: Video timestamps, motion patterns, detected people/animals are logged and analyzed by third parties.
  2. Regulatory Risk: GDPR/CCPA require explicit consent to process and retain video. Cloud storage of footage may violate local laws.
  3. Bandwidth Waste: Uploading 1080p video 24/7 consumes hundreds of gigabytes of upstream bandwidth per month (roughly 325 GB even at a conservative 1 Mbps). Most home connections can’t sustain it.

Edge AI (YOLOv11 on Raspberry Pi) solves all three:

  • Frame-by-frame processing → discard original video → only store detection results (tiny)
  • Data stays on-device → zero cloud upload → zero third-party visibility
  • Local GPU/CPU inference → requires only enough bandwidth for detection webhooks (~1 KB per detection; see the sketch below)
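For scale, a complete detection event serializes to well under a kilobyte. A hypothetical payload (field names are illustrative, not a fixed schema):

import json

# Hypothetical detection event: everything the cloud ever needs to see
event = {
    "timestamp": "2026-01-15T08:42:13",
    "class_name": "bird",
    "confidence": 0.87,
    "bbox": [412.0, 233.5, 468.2, 301.9],  # [x1, y1, x2, y2] in pixels
}
print(len(json.dumps(event)), "bytes")  # ~120 bytes, comfortably under 1 KB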

Hardware Comparison for YOLOv11 Inference

| Hardware | CPU/GPU | Price | YOLOv11n FPS | YOLOv11s FPS | Best For |
|---|---|---|---|---|---|
| Raspberry Pi 4 | ARM CPU (4-core) | $50 | 2–3 | <1 | Low-power, slow monitoring |
| Raspberry Pi 5 | ARM CPU (4-core, 2.4 GHz) | $100 | 5–10 | 2–3 | Garden sensors, wildlife |
| Jetson Nano 2GB | ARM CPU + 128-core GPU | $60 | 40–50 | 20–30 | Real-time detection, embedded |
| Jetson Orin Nano | ARM CPU + 1024-core GPU | $200 | 200+ | 100+ | High-throughput edge AI |
| Orange Pi 5 | ARM CPU (8-core) | $80 | 8–12 | 3–5 | CPU optimization focus |
| x86 Mini PC | Intel/AMD CPU | $150–300 | 20–30 | 10–15 | General-purpose edge compute |

Rule of thumb: if under 15 FPS is enough, a Raspberry Pi 5 will do; for 30+ FPS real-time detection, use a Jetson.


Part 1: Model Selection & Quantization

Step 1: Download Pre-Trained Model

YOLOv11 comes in five sizes (n, s, m, l, x). For edge devices, use nano (n) or small (s).

# On your development machine (not the Pi), download models
from ultralytics import YOLO

# Note: Ultralytics names the weights yolo11*.pt (no "v" in the filename)
# Nano: ~2.6M parameters, ~5 MB checkpoint
model_n = YOLO('yolo11n.pt')

# Small: ~9.4M parameters, ~19 MB checkpoint
model_s = YOLO('yolo11s.pt')

# Test inference speed on the target device
results = model_n('test_image.jpg')
print(f"Inference time: {results[0].speed['inference']:.1f} ms")  # speed is a per-stage dict

Step 2: Quantize to Smaller Size

Quantization reduces model size without significant accuracy loss.

from ultralytics import YOLO

model = YOLO('yolo11n.pt')

# Export to ONNX. Note: this step alone does not quantize; with
# half=False the weights stay at F32 (INT8 comes afterwards, below).
# opset 14 for compatibility with the Raspberry Pi ONNX runtime
model.export(
    format='onnx',
    opset=14,
    half=False,  # 32-bit floats (CPU inference)
    imgsz=416,   # smaller input size = faster inference
    batch=1      # single image at a time
)

# Result: yolo11n.onnx (~10 MB at F32 for the nano model)
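To actually produce the Q8 variant, quantize the exported file as a post-processing step. A minimal sketch using onnxruntime's dynamic quantizer (assuming onnxruntime is pip-installed on the dev machine; note that for CNNs onnxruntime officially recommends static quantization with calibration data, but the dynamic path is the one-liner):

from onnxruntime.quantization import quantize_dynamic, QuantType

# Weights stored as 8-bit; activations still computed in float at runtime
quantize_dynamic(
    model_input='yolo11n.onnx',        # F32 export from the previous step
    model_output='yolo11n_int8.onnx',  # quantized model, roughly 4x smaller weights
    weight_type=QuantType.QUInt8,      # 8-bit unsigned weights
)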

Quantization Options (sizes shown for a hypothetical 100 MB F32 baseline; yolo11n itself starts at roughly 10 MB):

| Format | Size | Latency | Accuracy | Hardware |
|---|---|---|---|---|
| F32 (full precision) | 100 MB | baseline | 100% | CPU/GPU |
| F16 (half precision) | 50 MB | 1.5× faster | ~99.9% | GPU recommended |
| Q8 (8-bit) | 25 MB | 2× faster | ~99% | CPU/GPU |
| Q4_K_M (4-bit) | 12 MB | 4× faster | ~97% | CPU (slow) |

Recommendation for Pi: Export to ONNX F32 (not quantized) or Q8. Skip Q4 unless model size is critical.
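Whichever export you pick, it loads back through the same API; Ultralytics dispatches .onnx weights to ONNX Runtime automatically. A quick sketch:

from ultralytics import YOLO

# Load the exported ONNX model; the predict API is identical to .pt
onnx_model = YOLO('yolo11n.onnx')
results = onnx_model('test_image.jpg', imgsz=416, conf=0.5)
print(results[0].boxes)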


Part 2: Raspberry Pi Setup

Prerequisites

  • Hardware: Raspberry Pi 5 (8GB), Pi Camera Module 3 Wide
  • OS: Raspberry Pi OS Bookworm 64-bit
  • Python: 3.11 (Bookworm's default; the venv/dev packages are installed below)

Installation

# SSH into the Pi (substitute your Pi's hostname or IP)
ssh pi@<pi-address>

# Update system
sudo apt update && sudo apt upgrade -y

# Install Python dev tools
sudo apt install -y python3.11-venv python3.11-dev git

# Create venv
python3.11 -m venv ~/yolo_env
source ~/yolo_env/bin/activate

# Install YOLOv11 and dependencies
pip install ultralytics opencv-python numpy Pillow

# Verify installation (downloads the weights on first run, so the Pi needs internet access once)
python3 -c "from ultralytics import YOLO; print(YOLO('yolo11n.pt'))"

Test on Camera

# test_inference.py
from ultralytics import YOLO
import cv2
import time

# Load model
model = YOLO('yolo11n.pt')

# Open camera (index 0 = first V4L2 device, e.g. a USB webcam)
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 15)

frame_count = 0
fps = 0.0  # keeps the final print valid even if no frames are read
start_time = time.time()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    
    # Inference
    results = model(frame, conf=0.5, imgsz=416)
    
    # Draw bounding boxes
    annotated = results[0].plot()
    
    # Calculate FPS
    frame_count += 1
    elapsed = time.time() - start_time
    fps = frame_count / elapsed if elapsed > 0 else 0
    
    cv2.putText(annotated, f"FPS: {fps:.1f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    
    # Display
    cv2.imshow('YOLOv11 on Pi', annotated)
    
    # Press 'q' to exit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
print(f"Average FPS: {fps:.1f}")

Expected on Raspberry Pi 5: 5–8 FPS with 416×416 input.
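One caveat: cv2.VideoCapture(0) covers USB webcams, but the Pi Camera Module 3 on Bookworm goes through the libcamera stack, which OpenCV does not see directly. A minimal capture sketch with Picamera2 (assuming sudo apt install python3-picamera2, and a venv created with --system-site-packages so the apt package is visible):

from picamera2 import Picamera2

# Configure the Pi camera for 640x480 frames in an OpenCV-friendly format
picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration(
    main={"size": (640, 480), "format": "RGB888"}))
picam2.start()

frame = picam2.capture_array()  # numpy array, drop-in replacement for cap.read()
results = model(frame, conf=0.5, imgsz=416)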


Part 3: Jetson Nano Setup

Jetson boards include an onboard NVIDIA GPU, enabling much faster inference.

Installation

# On Jetson Orin Nano with JetPack 6.0 (Ubuntu-based).
# Note: the original Jetson Nano tops out at JetPack 4.6.
sudo apt update && sudo apt upgrade -y

# CUDA/cuDNN ship with JetPack. Caution: pip's default torch wheel is
# CPU-only on aarch64; install NVIDIA's Jetson PyTorch wheel for CUDA support.
python3 -m pip install ultralytics opencv-python

# Verify GPU usage
python3 -c "import torch; print(torch.cuda.is_available())"

GPU-Accelerated Inference

# jetson_inference.py
from ultralytics import YOLO
import cv2
import time

# Load model (Ultralytics picks the GPU automatically when CUDA is available)
model = YOLO('yolo11n.pt')

cap = cv2.VideoCapture('/dev/video0')  # USB camera via V4L2
frame_count = 0
fps = 0.0  # keeps the final print valid even if no frames are read
start_time = time.time()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    
    # GPU inference
    results = model(frame, device=0, conf=0.5)  # device=0 = GPU 0
    
    annotated = results[0].plot()
    
    frame_count += 1
    elapsed = time.time() - start_time
    fps = frame_count / elapsed if elapsed > 0 else 0
    
    cv2.putText(annotated, f"FPS: {fps:.1f}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    
    cv2.imshow('YOLOv11 on Jetson', annotated)
    
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
print(f"Average FPS: {fps:.1f}")  # Should be 30-60 FPS

Expected on Jetson Nano: 30–50 FPS with 416×416 input.
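For the biggest speedup on Jetson, convert the model to a TensorRT engine. Ultralytics can export one directly; a sketch assuming the JetPack-bundled TensorRT is present (the build takes several minutes, and the resulting engine only runs on the device class it was built on):

from ultralytics import YOLO

# One-time conversion: .pt -> TensorRT FP16 engine, built on-device
model = YOLO('yolo11n.pt')
model.export(format='engine', half=True, imgsz=416)  # writes yolo11n.engine

# The engine loads through the same API
trt_model = YOLO('yolo11n.engine')
results = trt_model('test_image.jpg', imgsz=416)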


Part 4: Production Deployment

Logging Detections (Privacy-First)

Log detections to local SQLite database, never store original frames.

# edge_detector.py
import sqlite3
from ultralytics import YOLO
import cv2
from datetime import datetime
import json

# Setup database
conn = sqlite3.connect('/home/pi/detections.db')
cursor = conn.cursor()

cursor.execute('''
    CREATE TABLE IF NOT EXISTS detections (
        id INTEGER PRIMARY KEY,
        timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
        class_name TEXT,
        confidence REAL,
        bbox TEXT
    )
''')
conn.commit()

# Run inference
model = YOLO('yolo11n.pt')
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    
    # Inference (frame discarded after this)
    results = model(frame, conf=0.5)
    
    # Log detections
    for box in results[0].boxes:
        class_id = int(box.cls)
        class_name = model.names[class_id]
        confidence = float(box.conf)
        bbox = box.xyxy[0].tolist()  # [x1, y1, x2, y2]
        
        cursor.execute('''
            INSERT INTO detections (class_name, confidence, bbox)
            VALUES (?, ?, ?)
        ''', (class_name, confidence, json.dumps(bbox)))
    
    conn.commit()
    
    # The frame goes out of scope here; it is never written to disk

cap.release()
conn.close()

Result: Detection logs (~1 KB each), no video files.
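Because the database is the only artifact, a retention policy is one SQL statement. A hypothetical pruning job (run from cron or at service startup; the 30-day window is an arbitrary choice):

import sqlite3

# Drop detection rows older than 30 days to bound storage and data retention
conn = sqlite3.connect('/home/pi/detections.db')
conn.execute("DELETE FROM detections WHERE timestamp < datetime('now', '-30 days')")
conn.commit()
conn.close()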

Systemd Service (24/7 Operation)

# /etc/systemd/system/yolo-detector.service
[Unit]
Description=YOLOv11 Object Detector
After=network.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi
ExecStart=/home/pi/yolo_env/bin/python3 /home/pi/edge_detector.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Enable and start:

sudo systemctl enable yolo-detector
sudo systemctl start yolo-detector
sudo systemctl status yolo-detector

Real-World Examples

1. Wildlife Monitoring (Garden)

Goal: Log bird visits (and predators) at a feeder. Note that the stock COCO classes detect 'bird' generically; species-level identification needs a custom-trained model.

# Detect only: bird, cat, dog
CLASSES_OF_INTEREST = {'bird', 'cat', 'dog'}

results = model(frame, conf=0.5)
for box in results[0].boxes:
    class_name = model.names[int(box.cls)]
    if class_name in CLASSES_OF_INTEREST:
        # Log the detection, reusing the schema from edge_detector.py
        cursor.execute(
            'INSERT INTO detections (class_name, confidence, bbox) VALUES (?, ?, ?)',
            (class_name, float(box.conf), json.dumps(box.xyxy[0].tolist()))
        )
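Daily per-species counts then fall out of a single query against the same table, for example:

# Summarize sightings per class per day from the detections table
for row in cursor.execute("""
    SELECT class_name, date(timestamp) AS day, COUNT(*) AS sightings
    FROM detections
    GROUP BY class_name, day
    ORDER BY day DESC
"""):
    print(row)  # e.g. ('bird', '2026-01-15', 42)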

Cost: $140 one-time (Pi + camera + storage)
Alternative: Wyze cam ($30) plus a $5/month detection subscription: $30 upfront and $60 every year thereafter, with your footage leaving the network.

2. Home Security (Intrusion Detection)

Goal: Alert if person detected on porch at night.

import os
from datetime import datetime
import cv2
import requests

ALERT_DIR = '/home/pi/alerts'
os.makedirs(ALERT_DIR, exist_ok=True)

def send_alert(frame, bbox):
    # Save a single annotated frame locally; there is no continuous recording
    timestamp = datetime.now().isoformat()
    filename = f"{ALERT_DIR}/{timestamp}.jpg"
    cv2.imwrite(filename, frame)
    
    # Webhook fires only on an alert; the image itself stays on-device
    requests.post("https://api.example.com/alert", json={
        "timestamp": timestamp,
        "bbox": bbox.tolist(),
        "image_path": filename  # local path; retrieve over LAN/VPN if needed
    })

results = model(frame, conf=0.5)
for box in results[0].boxes:
    if model.names[int(box.cls)] == 'person':
        send_alert(frame, box.xyxy[0])

Cost: $100 one-time (Pi + camera)
Alternative: Ring/Nest cameras run $100–300 plus a ~$10/month subscription, with constant cloud upload.


Troubleshooting

Slow inference on Pi?

  • Reduce input size: model(frame, imgsz=320) instead of 416
  • Lower the capture rate: cap.set(cv2.CAP_PROP_FPS, 5) so the camera delivers only 5 FPS
  • Use yolo11n instead of yolo11s

High CPU usage?

  • Profile with: python3 -m cProfile -s cumulative script.py
  • Identify bottleneck (model inference vs preprocessing vs logging)

Camera not detected?

  • List cameras: ls -la /dev/video*
  • Check permissions: sudo usermod -a -G video pi
  • Test with: ffplay /dev/video0

