# Scale QR Code Reader
## Project Overview
Research project for implementing QR code reading with IP cameras for vehicle identification at weigh scales.
---
## Research Findings
### Summary
This research covers QR code detection libraries, IP camera integration methods, reading QR codes at distance/in motion, camera hardware specifications, and lighting/mounting considerations for weigh scale applications.
---
### 1. QR Code Detection Libraries
#### Recommended Libraries (Ranked)
**1. pyzbar (ZBar)** - Best Overall for Python
- **Pros:** Fast runtime (~157ms), good reading rate, supports Python 2 & 3, easy integration
- **Cons:** Struggles with blurred, curved, and damaged codes
- **Install:** `pip install pyzbar`
- **Best for:** General purpose QR reading with static/slow-moving vehicles
**2. OpenCV WeChat QR Code Detector**
- **Pros:** Excellent with damaged and pathological QR codes, handles non-compliant codes (colored, logos), robust in challenging conditions
- **Cons:** Slower runtime (~758ms), struggles with multiple codes in one image
- **Best for:** Dirty/damaged codes on industrial vehicles
**3. BoofCV**
- **Pros:** Fastest runtime (~105ms), excellent with multiple codes per image, open-source
- **Cons:** Poor performance on non-compliant codes
- **Best for:** Multiple QR codes or high-speed processing needs
**4. YOLO-QR (Deep Learning)**
- **Pros:** Can detect QR codes in complex scenes, handles motion blur better than traditional methods
- **Cons:** Requires training data, more complex setup, computationally intensive
- **Approach:** Train custom YOLO model to detect QR codes, then extract region and decode with traditional library
- **Best for:** Advanced scenarios with challenging angles and motion
#### Library Comparison from Benchmarks
| Library | Reading Rate | Runtime | Blurred | Damaged | Multiple Codes |
|---------|-------------|---------|---------|---------|----------------|
| Dynamsoft (Commercial) | 83.29% | 195ms | 66% | 51% | 100% |
| BoofCV | 60.69% | 105ms | 38% | 16% | 99.76% |
| ZBar (pyzbar) | 38.95% | 157ms | 35% | 26% | 18% |
| OpenCV WeChat | 48.89% | 758ms | 46% | 30% | 0% |
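The trade-offs in the table suggest chaining decoders: try a fast library first and fall back to a more robust one. A minimal, library-agnostic sketch (the decoder callables and names here are placeholders, not any specific API):

```python
def decode_with_fallback(frame, decoders):
    """Try each (name, decode) pair in order; return the first non-empty
    result. Each decode callable takes a frame and returns a list of
    decoded strings, e.g. a pyzbar wrapper first, then an OpenCV WeChat
    wrapper for dirty or damaged codes."""
    for name, decode in decoders:
        results = decode(frame)
        if results:
            return name, results
    return None, []
```

With real wrappers the list would look like `[("pyzbar", zbar_decode), ("wechat", wechat_decode)]`, ordered fastest-first.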
---
### 2. IP Camera Integration Methods
#### Primary Methods
**1. RTSP Streaming (Recommended)**
```python
import cv2

# Hikvision RTSP URL format:
#   rtsp://username:password@camera_ip:554/Streaming/channels/101
# Dahua RTSP URL format:
#   rtsp://username:password@camera_ip:554/cam/realmonitor?channel=1&subtype=0
rtsp_url = "rtsp://username:password@camera_ip:554/Streaming/channels/101"

cap = cv2.VideoCapture(rtsp_url)
while True:
    ret, frame = cap.read()
    if ret:
        # Process frame for QR codes (process_frame is your own handler)
        process_frame(frame)
```
**2. HTTP Snapshot (Lower Latency)**
- Hikvision: `http://user:pass@camera_ip/ISAPI/Streaming/channels/101/picture`
- Useful for single-frame capture on trigger
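A sketch of trigger-driven single-frame capture from the ISAPI snapshot endpoint using only the standard library. Hikvision cameras default to HTTP digest auth; the host, credentials, and channel below are placeholders:

```python
import urllib.request

def snapshot_url(host, channel=101):
    """Build the Hikvision ISAPI still-image URL for a channel."""
    return f"http://{host}/ISAPI/Streaming/channels/{channel}/picture"

def fetch_snapshot(host, user, password, channel=101, timeout=5):
    """Return one JPEG frame as raw bytes using HTTP digest auth."""
    url = snapshot_url(host, channel)
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, user, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPDigestAuthHandler(mgr))
    with opener.open(url, timeout=timeout) as resp:
        return resp.read()
```

The returned bytes can be decoded with `cv2.imdecode` before QR processing.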
**3. ONVIF Control**
```python
from onvif import ONVIFCamera
# Camera control for PTZ, presets
mycam = ONVIFCamera('192.168.1.10', 80, 'admin', 'password')
ptz = mycam.create_ptz_service()
# Can move to preset positions, control zoom
```
**Library Requirements:**
```bash
pip install opencv-python pyzbar onvif-zeep
```
---
### 3. Reading QR Codes at Distance and in Motion
#### Key Challenges & Solutions
**Motion Blur:**
- Vehicles moving >35 km/h cause significant motion blur
- **Solutions:**
- Fast shutter speed: 1/1000s minimum for moving vehicles
- Higher frame rate cameras (30fps minimum, 60fps preferred)
- Use multiple frame capture and select sharpest image
**Distance Factors:**
- QR code size vs. distance:
- Minimum: the QR code should span at least 80-130 pixels in the captured image
- Scene coverage follows from that: maximum scene width ≈ (horizontal resolution ÷ 80 px) × code width, so a 4MP camera (2560 px wide) can cover a scene roughly 10 ft wide with a 4" code
- Actual reading distance depends on the lens: with a suitable varifocal or zoom lens, a 4MP camera can read a 4" code at roughly 40 feet
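The pixel-count constraint above can be turned into a quick sizing check. A small sketch (the 80 px minimum and the function name are assumptions for illustration):

```python
def max_scene_width_in(code_width_in, image_width_px, min_code_px=80):
    """Widest scene (in inches) that still leaves the QR code at least
    min_code_px wide in the image."""
    return image_width_px / min_code_px * code_width_in

# 4" code on a 4MP camera (2560 px wide): scene up to 128" (~10.7 ft) wide
```

The stricter 130 px threshold shrinks the usable scene width accordingly.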
**Reading Angle:**
- Vertical angle: Maximum 30° from perpendicular
- Horizontal angle: Maximum 30° from perpendicular
- Tilt tolerance: +/- 5°
#### Recommended Approach for Moving Vehicles
1. **Trigger-based capture:** Use vehicle detection/loop sensors to trigger capture
2. **Burst mode:** Take 5-10 frames when vehicle is in sweet spot
3. **Select best frame:** Use image quality metrics (blur detection, contrast)
4. **Multi-library attempt:** Try pyzbar first, fallback to OpenCV WeChat
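Step 3 above (frame selection) usually scores sharpness by Laplacian variance. A dependency-light sketch with plain NumPy; a production pipeline would use `cv2.Laplacian` instead:

```python
import numpy as np

# 3x3 Laplacian kernel: responds strongly to edges, weakly to smooth blur
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def sharpness(gray):
    """Variance of the 3x3 Laplacian response; higher = sharper."""
    g = gray.astype(np.float64)
    h, w = g.shape
    out = sum(LAPLACIAN[i, j] * g[i:h - 2 + i, j:w - 2 + j]
              for i in range(3) for j in range(3))
    return float(out.var())

def sharpest(frames):
    """Pick the burst frame with the strongest edge response."""
    return max(frames, key=sharpness)
```

A motion-blurred frame smears edges, lowering the Laplacian variance, so the sharpest frame of the burst wins.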
---
### 4. Camera Hardware Specifications
#### Recommended Camera Specs for Weigh Scale
| Feature | Minimum | Recommended | Optimal |
|---------|---------|-------------|---------|
| Resolution | 2MP (1920x1080) | 4MP (2560x1440) | 4K (8MP) |
| Frame Rate | 25fps | 30fps | 60fps |
| Shutter Speed | 1/500s | 1/1000s | 1/2000s |
| WDR | 120dB | 120dB+ | 140dB+ |
| IR Range | 20m | 30m | 50m |
| Lens | 4mm | 6-12mm varifocal | 6-12mm motor zoom |
| Protection | IP66 | IP67 | IP67+IK10 |
#### Sensor Size Considerations
- **1/2.8" or larger:** Better low-light performance for outdoor/industrial use
- **1/1.8":** Excellent for challenging lighting conditions
#### Lens Selection Formula
```
Focal Length (mm) = (Sensor Width × Distance) / Field of View Width
Example: For 4m wide lane at 6m distance with 1/3" sensor:
= (4.8mm × 6000mm) / 4000mm = 7.2mm focal length
```
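The same lens formula as executable code (common sensor widths: 1/3" ≈ 4.8 mm, 1/2.8" ≈ 5.1 mm, 1/1.8" ≈ 7.2 mm; all arguments in millimetres, function name illustrative):

```python
def focal_length_mm(sensor_width_mm, distance_mm, fov_width_mm):
    """Focal length needed to cover fov_width_mm at distance_mm."""
    return sensor_width_mm * distance_mm / fov_width_mm

# Worked example from above: 4 m lane at 6 m with a 1/3" sensor
# focal_length_mm(4.8, 6000, 4000) ≈ 7.2 mm
```

Doubling the distance doubles the required focal length for the same lane width, which is why the varifocal lenses in the table below are recommended.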
#### Recommended Camera Models
- **Hikvision:** DS-2CD2643G2-IZS (4MP, WDR, motor zoom)
- **Dahua:** IPC-HFW5442T-ASE (4MP, WDR, IR 50m)
- **Axis:** P1375 (Lightfinder, WDR, pole mount)
---
### 5. Lighting and Mounting Considerations
#### Mounting Position
**Height:**
- 2.5-3.5 meters above ground (8-12 feet)
- Higher mounting = larger detection zone but requires zoom lens
**Horizontal Distance:**
- 4-8 meters from capture point
- Rule: Distance ≈ 2 × mounting height
**Angles:**
- Vertical angle: Max 30° looking down
- Horizontal angle: Max 30° from vehicle path
- Avoid direct headlight glare into lens
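A quick geometry check for the height/distance/angle rules above (heights and distances in metres; the function name is illustrative):

```python
import math

def look_down_angle_deg(mount_height_m, horizontal_distance_m):
    """Vertical angle between the camera's line of sight and horizontal."""
    return math.degrees(math.atan2(mount_height_m, horizontal_distance_m))

# A 3 m mount at 6 m stand-off (the "distance ≈ 2 × height" rule)
# gives ~26.6°, safely inside the 30° limit
```

Halving the stand-off distance at the same height pushes the angle to 45°, outside tolerance, which is why the 2:1 rule matters.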
**Position Options:**
1. **Side-mounted:** Camera perpendicular to vehicle side
2. **Overhead:** Camera looking down at windshield (less optimal for QR)
3. **Gantry-mounted:** Over the scale, perpendicular to direction of travel
#### Lighting Requirements
**Daytime:**
- Avoid direct sunlight into lens (causes glare)
- Position camera with sun behind it when possible
- Use lens hood/cover
**Nighttime/Infrared:**
- IR illuminators essential for 24/7 operation
- Built-in IR range must cover distance to vehicle
- External IR illuminators for longer distances (20-50m)
- IR wavelength: 850nm (visible red glow) or 940nm (covert)
**Shutter Speed Settings:**
- Stationary vehicles: 1/100s - 1/300s
- Moving vehicles (30 mph / ~50 km/h): 1/1000s - 1/1500s
- High speed (70 mph / ~110 km/h): 1/2500s - 1/3500s
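The shutter numbers above follow from how far the vehicle travels during one exposure. A sketch of the relationship (`px_per_mm` is the image scale at the QR code, an assumption you would measure on site):

```python
def motion_blur_px(speed_kmh, shutter_s, px_per_mm):
    """Length in pixels of the blur streak during one exposure."""
    speed_mm_per_s = speed_kmh * 1_000_000 / 3600  # km/h -> mm/s
    return speed_mm_per_s * shutter_s * px_per_mm

# 50 km/h at 1/1000 s with 0.8 px/mm -> ~11 px of blur; at 1/100 s the
# streak is 10x longer, which is why slow shutters only suit
# stationary vehicles
```

Keeping the streak under roughly one QR module width is the practical target when choosing a shutter speed.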
#### Environmental Protection
- **IP67:** Protected against dust and temporary water immersion
- **IK10:** Vandal-resistant housing
- **Operating temperature:** -30°C to +60°C for outdoor use
- **Heater/blower:** For cold climates to prevent condensation
---
### Implementation Steps
#### Phase 1: Proof of Concept
1. Install camera with RTSP access
2. Implement basic QR detection with pyzbar
3. Test with stationary vehicles at various distances
4. Determine optimal capture zone
#### Phase 2: Optimization
1. Add motion trigger/sensor integration
2. Implement burst capture (5-10 frames)
3. Add frame selection algorithm (blur detection)
4. Test with slow-moving vehicles
#### Phase 3: Production
1. Weatherproof housing installation
2. IR lighting setup for night operation
3. Integration with weigh scale software
4. Database logging and reporting
#### Sample Python Implementation
```python
import cv2
import pyzbar.pyzbar as pyzbar

class QRCodeReader:
    def __init__(self, rtsp_url):
        self.cap = cv2.VideoCapture(rtsp_url)

    def capture_burst(self, num_frames=5):
        """Capture multiple frames for selection"""
        frames = []
        for _ in range(num_frames):
            ret, frame = self.cap.read()
            if ret:
                frames.append(frame)
        return frames

    def sharpness(self, frame):
        """Laplacian variance: higher means sharper (less motion blur)"""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def decode_qr(self, frame):
        """Decode QR codes from frame with pyzbar"""
        results = []
        for obj in pyzbar.decode(frame):
            results.append({
                'data': obj.data.decode('utf-8'),
                'type': obj.type,
                'rect': obj.rect,
            })
        return results

    def process_vehicle(self):
        """Main processing when a vehicle is detected"""
        frames = self.capture_burst(10)
        if not frames:
            return []
        # Select the sharpest frame from the burst
        best_frame = max(frames, key=self.sharpness)
        # Try the primary decoder first
        results = self.decode_qr(best_frame)
        # Fall back to OpenCV's built-in detector if pyzbar finds nothing
        if not results:
            data, points, _ = cv2.QRCodeDetector().detectAndDecode(best_frame)
            if data:
                results = [{'data': data, 'type': 'QR-Code'}]
        return results
```
---
### Source Links
1. **QR Code Library Benchmarks:**
- <https://www.dynamsoft.com/codepool/qr-code-reading-benchmark-and-comparison.html>
- <https://boofcv.org/index.php?title=Performance:QrCode>
2. **Library Implementation Guides:**
- <https://learnopencv.com/barcode-and-qr-code-scanner-using-zbar-and-opencv/>
- <https://pyimagesearch.com/2018/05/21/an-opencv-barcode-and-qr-code-scanner-with-zbar/>
3. **YOLO QR Detection:**
- <https://www.dynamsoft.com/codepool/qr-code-detect-decode-yolo-opencv.html>
- <https://www.sciencedirect.com/science/article/pii/S0045790624003045>
4. **Weighbridge Systems:**
- <https://punitinstrument.com/a-complete-guide-to-unmanned-erp-integrated-automated-weighbridge-systems/>
- <https://endel.digital/weighbridge-software/>
5. **Camera Positioning (ANPR - applies to QR):**
- <https://platerecognizer.com/camera-setup-for-best-anpr/>
- <https://www.dipolnet.com/license_plate_recognition_lpr_systems_-_part_1_camera_positioning_bib318.htm>
6. **ONVIF Camera Control:**
- <https://github.com/RichardoMrMu/python-onvif>
- <https://www.onvif.org/specs/srv/ptz/ONVIF-PTZ-Service-Spec-v1712.pdf>
7. **Motion Blur Research:**
- <https://arxiv.org/html/2410.05497v1>
- <https://dl.acm.org/doi/10.1016/j.neucom.2022.04.041>
---
*Research compiled: 2026-02-24*