// AI-Adaptive Vision Headlights

See everything. Blind no one.

HALO is a real-time adaptive headlight system that perceives the road in three dimensions and selectively shapes its beam — illuminating the path ahead while masking glare around oncoming drivers, pedestrians, and cyclists.

Live Beam Simulation · Stereo Depth + Selective Masking
Adaptive Beam Shaping · Stereo Depth Localisation · Edge Inference at the Source · Glare-Critical Region Masking
01 — Company Vision

Light should think before it shines.

Trismegistus Works exists to rebuild the most fundamental piece of automotive safety — the headlight — for an age where vehicles can finally see. For over a century, headlights have been blunt instruments: bright or dim, on or off, oblivious to the world they illuminate.

HALO is our answer. A perception-first lighting system that fuses stereo vision, depth-aware localisation, and on-edge intelligence to produce light that is precise, adaptive, and considerate. We do not switch between high-beam and low-beam. We sculpt the beam, frame by frame, across sixteen independently controlled segments — illuminating the road while preserving the vision of every driver, pedestrian, and cyclist sharing it.

Our mission is to make night-time driving demonstrably safer for two-wheelers first, then four-wheelers, then everything that moves after dark. We are building toward a future where adaptive perception is not a luxury feature, but the default expectation of every vehicle on the road.

// 01
Perceive, then illuminate
Every photon emitted is the product of a decision. Stereo cameras and a learned perception model identify glare-critical regions before the beam is shaped — not after.
// 02
Hybrid by design
Data-driven learning handles perception. Deterministic control logic handles actuation. The result is a system that is robust, explainable, and ready for the safety constraints of automotive deployment.
// 03
Built at the edge
All inference runs on-vehicle, with no cloud dependency. Sub-frame latency is non-negotiable. The architecture scales from a Jetson prototype to a custom FPGA built for production.
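The hybrid split above — a learned model proposing, deterministic logic disposing — can be sketched as a thin actuation layer that bounds whatever the perception network requests. This is a minimal illustration, not HALO's actual control code; the names (`SEGMENTS`, `MAX_STEP`, `actuate`) and the rate-limit value are assumptions made for the example.

```python
# Hypothetical sketch: a deterministic actuation layer sitting between the
# learned perception model and the LED drivers. All names and constants here
# are illustrative assumptions, not taken from the HALO codebase.

SEGMENTS = 16    # independently controlled LED segments
MAX_STEP = 0.25  # assumed max per-frame change in attenuation (rate limit)

def actuate(requested, previous):
    """Clamp and rate-limit the model's requested attenuation map.

    `requested` and `previous` are lists of 16 floats in [0, 1], where
    0.0 = full brightness and 1.0 = fully extinguished. The learned model
    proposes; this deterministic layer bounds every output, so actuation
    behaviour stays predictable and explainable regardless of what the
    network emits.
    """
    out = []
    for req, prev in zip(requested, previous):
        req = min(1.0, max(0.0, req))                       # clamp to [0, 1]
        delta = max(-MAX_STEP, min(MAX_STEP, req - prev))   # limit slew rate
        out.append(prev + delta)
    return out

# A sudden full-mask request is slewed rather than applied instantly:
prev = [0.0] * SEGMENTS
req = [0.0] * 6 + [1.0] * 3 + [0.0] * 7   # glare detected over segments 6-8
frame1 = actuate(req, prev)               # segments 6-8 step to 0.25 first
```

The point of the split is auditability: every value the drivers ever see is provably inside the clamp and slew bounds, which is the kind of property automotive safety review can check without reasoning about the network.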

A high-beam decides everything.
HALO decides per pixel.

02 — Beam Pattern Comparison
// Static · Adaptive · Selective
// CONVENTIONAL — Binary Switching
Static Beam Pattern
Either floods the road and blinds the oncoming driver, or dims globally and sacrifices forward visibility. No spatial awareness. No middle ground.
// HALO — Selective Masking
Adaptive Matrix Beam
Sixteen LED segments are individually attenuated only where glare would occur. Maximum illumination is preserved everywhere else, frame after frame.
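The contrast between the two strategies reduces to one global decision versus sixteen local ones. A toy sketch (illustrative only — the segment count matches the text, but the brightness levels and glare span are assumed):

```python
# Illustrative comparison, not HALO control code: how a binary low-beam and a
# 16-segment matrix beam respond to the same oncoming driver. Brightness is
# 0.0-1.0; the glare span and the 0.3 low-beam level are assumptions.

SEGMENTS = 16
glare = {6, 7, 8}   # segments covering the oncoming driver's position

# Conventional: one global decision — dim everything, sacrificing the verges.
binary_beam = [0.3] * SEGMENTS

# Selective: per-segment decision — extinguish only the glare-critical
# segments, keeping full brightness everywhere else.
matrix_beam = [0.0 if i in glare else 1.0 for i in range(SEGMENTS)]
```

Thirteen of the sixteen segments stay at full output in the selective case, where the binary beam surrenders forward visibility across the entire field.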

Four subsystems, one continuous loop from incoming photon to outgoing photon.

// 01 · SENSING
Stereo Camera Pair
Two synchronised cameras capture the forward field of view at 30 FPS. Calibrated baseline enables disparity-based depth recovery for every detected entity.
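Disparity-based depth recovery follows the standard triangulation relation Z = f·B/d: distance is focal length times baseline, divided by the pixel disparity between the two views. A minimal sketch, with an assumed focal length and baseline rather than HALO's actual calibration values:

```python
# Minimal sketch of disparity-to-depth triangulation (Z = f * B / d).
# FOCAL_PX and BASELINE_M are illustrative assumptions, not HALO calibration.

FOCAL_PX = 1400.0    # focal length in pixels (assumed)
BASELINE_M = 0.12    # separation between the two cameras in metres (assumed)

def depth_from_disparity(disparity_px):
    """Distance to an entity whose two views are `disparity_px` pixels apart."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_M / disparity_px

# An entity matched 8 px apart between the left and right views:
depth_from_disparity(8.0)   # → 21.0 metres ahead
```

Note the inverse relationship: nearby entities (large disparity) are localised precisely, while depth resolution degrades with distance — acceptable here, since glare masking matters most at close range.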

Sixteen pixels of intent.

The HALO matrix array divides the forward beam into sixteen Cree XP-G3 LED segments, each individually addressable through an EVQ7228 / MPQ7228 driver bank. Every frame, the perception pipeline emits an attenuation map: which segments stay bright, which dim, which fully extinguish.
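The attenuation map described above can be sketched as a projection from image space onto the sixteen-segment array. This is a hypothetical illustration — the image width, the linear pixel-to-segment mapping, and the function names are assumptions, not HALO's pipeline:

```python
# Hypothetical sketch: projecting glare-critical detections onto the
# 16-segment attenuation map. The 1280 px width and the linear mapping
# from pixels to segments are assumptions for illustration.

SEGMENTS = 16
IMAGE_WIDTH = 1280   # assumed sensor width in pixels

def attenuation_map(entities):
    """Build a per-frame attenuation map (0.0 = bright, 1.0 = extinguished).

    `entities` is a list of (x_left, x_right) pixel spans for detections
    the beam must shield; each span is mapped linearly onto the segments.
    """
    mask = [0.0] * SEGMENTS
    for x_left, x_right in entities:
        first = int(x_left * SEGMENTS / IMAGE_WIDTH)
        last = int(x_right * SEGMENTS / IMAGE_WIDTH)
        for seg in range(max(0, first), min(SEGMENTS - 1, last) + 1):
            mask[seg] = 1.0   # fully extinguish glare-critical segments
    return mask

# An oncoming rider detected at pixels 520-660 masks segments 6 through 8:
attenuation_map([(520, 660)])
```

Each frame, a map like this would be written out to the per-segment driver bank; everything not covered by a detection stays at full brightness.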

Move your cursor across the array. Each segment responds independently — the same logic that, on the road, decides whether to illuminate a tree or shield an oncoming rider's eyes.