BioSensing Algorithm - A Case Study

2025/06/15

The LH Strip Detection & Quantification System

This case documents a production biosensing algorithm used to extract quantitative LH (Luteinizing Hormone) values from user-uploaded smartphone photos of ovulation test strips.

Unlike lab-controlled imaging, this system operates under fully unconstrained conditions: arbitrary backgrounds, lighting variations, multiple strips in one image, blur, partial occlusion, and heterogeneous user behavior. The core challenge is not only detection accuracy, but also stability, explainability, and latency in a medical-adjacent context.

The final solution is a cascaded hybrid pipeline combining classical computer vision, feature-based machine learning, and deep learning — each used where it is strongest.


Problem Definition

Input

Output

Core Constraints


System Overview

The system is intentionally hierarchical, prioritizing fast and deterministic methods first, and escalating to more powerful (but expensive) deep learning models only when necessary.

Full Image
	↓
[StripDetector] (Stage 1: Strip Region Detection)
	↓
Candidate Strip ROI
	↓
[AssayInspectorWrapper] (Stages 2-3: Assay Region Localization)
	|
	├─ [CVInspector] (OpenCV-based fast path)
	|
	└─ [TfInspector] (TensorFlow DL-based fallback)
	↓
Candidate Assay ROI
	↓
Refinement & LH calculation (Stage 4: Quantification)
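Read top to bottom, the cascade behaves like a fail-fast dispatcher. A minimal sketch, assuming hypothetical stage interfaces (detect/locate/compute_lh are illustrative names, not the production API):

```python
def analyze_image(image, strip_detector, assay_wrapper, quantifier):
    """Top-level cascade: each stage either succeeds or hands off; no guessing."""
    strip_roi = strip_detector.detect(image)       # Stage 1: HoG + SVM
    if strip_roi is None:
        return None                                # no strip found: abort early
    assay_roi = assay_wrapper.locate(strip_roi)    # Stages 2-3: CV first, DL fallback
    if assay_roi is None:
        return None
    return quantifier.compute_lh(assay_roi)        # Stage 4: deterministic CV
```

The point of the shape is cost ordering: cheap deterministic stages run unconditionally, and the expensive model is only ever reached through the wrapper.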

Stage 1 — Strip Region Detection (HoG + SVM)

Goal: Locate ovulation test strip regions from a full, unconstrained image.

Why HoG + SVM

HoG Feature Visualization: Positive (Left) vs. Negative (Right) Samples at Different Pooling Sizes

Implementation Highlights

Sliding Window Visualization with SVM Scoring
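The sliding-window scoring shown above can be sketched roughly as follows, assuming scikit-image's hog and a scikit-learn LinearSVC; the window size, stride, and HoG parameters here are illustrative, not the production values:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def sliding_window_scores(gray, clf, win=(64, 128), step=16):
    """Score every window with the SVM margin; higher = more strip-like.

    gray: 2D grayscale image; win: (width, height); step: stride in pixels.
    Returns a list of (score, (x, y)) pairs for downstream thresholding/NMS.
    """
    scores = []
    H, W = gray.shape
    for y in range(0, H - win[1] + 1, step):
        for x in range(0, W - win[0] + 1, step):
            patch = gray[y:y + win[1], x:x + win[0]]
            feat = hog(patch, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            scores.append((clf.decision_function([feat])[0], (x, y)))
    return scores
```

Using the raw SVM margin (decision_function) rather than a hard predict keeps the detector tunable: the acceptance threshold becomes a single interpretable knob.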

Data Engineering (Critical)

This hard-mining process dramatically reduced pathological false positives (e.g., pure black regions, strong linear textures).
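A hard-negative mining loop of the kind described might look like the following sketch (scikit-learn-based; function names, the margin threshold, and the round count are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import LinearSVC

def mine_hard_negatives(clf, negative_pool, margin=0.0):
    """Return negatives the current SVM wrongly scores above the margin."""
    scores = clf.decision_function(negative_pool)
    return negative_pool[scores > margin]

def train_with_hard_mining(X, y, negative_pool, rounds=2):
    """Retrain, folding mined false positives back in as labeled negatives."""
    clf = LinearSVC().fit(X, y)
    for _ in range(rounds):
        hard = mine_hard_negatives(clf, negative_pool)
        if len(hard) == 0:
            break  # no remaining pathological false positives in the pool
        X = np.vstack([X, hard])
        y = np.concatenate([y, np.zeros(len(hard), dtype=int)])
        clf = LinearSVC().fit(X, y)
    return clf
```

Each round concentrates training signal exactly on the failure modes mentioned above (pure black regions, strong linear textures), which is why the false-positive reduction is so pronounced.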


Stage 2 — Assay Region Detection (OpenCV First)

Once a strip region is identified, the system attempts pure computer vision analysis.

Why CV First

OpenCV Pipeline

  1. Resize to standard width

  2. Grayscale conversion

  3. Gaussian blur (noise suppression)

  4. Adaptive / OTSU thresholding

  5. Morphological open/close

  6. Contour detection

  7. Polygon approximation + geometric filtering

Visualization of Filtering Process for Assay Region Saliency

The goal is to detect:

Robustness Rules

The algorithm enforces multiple domain constraints:

When these checks pass, the system proceeds directly to LH calculation.


Stage 3 — Assay Region Fallback (TensorFlow Object Detection)

If OpenCV detection fails or is ambiguous, the system escalates, rather than guessing.

Trigger Mechanism

A wrapper layer (AssayInspectorWrapper) evaluates the OpenCV result:

This is a contract-based decision, not parallel execution.
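A sketch of that contract, under the assumption of an inspect() interface returning a result with confidence and roi fields (assumed names, not the actual code):

```python
from typing import Optional, Tuple

ROI = Tuple[int, int, int, int]  # (x, y, w, h)

class AssayInspectorWrapper:
    """Escalates to the DL inspector only when the CV result breaks its contract."""

    def __init__(self, cv_inspector, tf_inspector, min_confidence=0.6):
        self.cv = cv_inspector
        self.tf = tf_inspector
        self.min_confidence = min_confidence  # illustrative threshold

    def locate(self, strip_roi) -> Optional[ROI]:
        result = self.cv.inspect(strip_roi)    # fast, deterministic path first
        if result is not None and result.confidence >= self.min_confidence:
            return result.roi                  # contract satisfied: no DL call
        fallback = self.tf.inspect(strip_roi)  # escalate rather than guess
        return fallback.roi if fallback is not None else None
```

Because the two inspectors never run in parallel, the expensive model is only billed (in latency and compute) for the minority of images the CV path cannot handle.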

Deep Learning Role

LH Detection Dashboard: Pre- vs. Post-TF Fallback Launch (Jul 18)

Importantly, the deep model does not compute LH values.

Instead, it recovers a reliable ROI, which is then passed back into the same OpenCV-based measurement logic (Stage 4). This preserves numerical consistency across fast and fallback paths.

Engineering Considerations


Stage 4 — LH Value Calculation (Deterministic CV)

The LH value calculation is a straightforward, non-ML process.

Algorithm

For both test and control lines:

  1. Extract line ROI

  2. Compute grayscale intensity distribution

  3. Sort pixel values

  4. Take the 25th percentile as line density

  5. Subtract local background density

Final LH ratio:

LH = (TestLineDensity - TestBackground) /
     (ControlLineDensity - ControlBackground)
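Taken literally, the five steps and the final ratio can be sketched as follows (the sign convention for "density" and the zero-division guard are my assumptions; the production code may invert intensities):

```python
import numpy as np

def line_density(roi_gray, percentile=25):
    """Steps 2-4: the 25th percentile of the ROI's intensity distribution.
    np.percentile subsumes the explicit sort of pixel values."""
    return float(np.percentile(roi_gray, percentile))

def lh_value(test_line, test_bg, control_line, control_bg):
    """Step 5 plus the final ratio, guarding against a washed-out control line."""
    numerator = line_density(test_line) - line_density(test_bg)
    denominator = line_density(control_line) - line_density(control_bg)
    if abs(denominator) < 1e-6:
        return None  # control line indistinguishable from background: invalid test
    return numerator / denominator
```

Normalizing by the control line makes the ratio largely invariant to global exposure and white balance, which is why the same deterministic logic works downstream of both the CV and DL localization paths.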

Stability Enhancements

This approach prioritizes:


Evaluation & Tradeoffs

What Worked Well

Known Tradeoffs

Decision               | Benefit              | Cost
HoG + SVM              | Fast, interpretable  | Feature symmetry edge cases
CV-first strategy      | Deterministic        | Fragile under extreme lighting
DL as fallback         | High recall          | System complexity
Non-ML quantification  | Stable, explainable  | Requires careful heuristics

Key Insights