Tattva3D · Deterministic video analysis

Deterministic ground truth from video.

Tattva3D converts monocular video and known scene constraints into auditable camera poses, trajectories, and speed estimates for forensic analysis and liability determination.

FIG. 01 · Pipeline artifact · captured
Split view: source video frame with selected 2D points alongside the constrained 3D scene geometry used for camera-to-scene alignment.
Camera-to-scene alignment from measured 2D ↔ 3D correspondences.
src · monocular video
01 · Lens validation from measured 2D ↔ 3D correspondences
02 · Per-frame camera pose recovery from tracked points
03 · Constrained speed and trajectory estimation
04 · Reviewable runs with a logged audit manifest
01 · What exists today

Working components of the current analysis pipeline.

[A]

Lens Validation

Intrinsics and distortion are solved from measured 2D ↔ 3D correspondences and checked by reprojection residuals on held-out points before any downstream estimate is used.
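
As a hedged illustration only, not Tattva3D's actual solver: the held-out check described above can be sketched in NumPy. This toy version fits a full 3×4 projection matrix by direct linear transform (DLT) from measured 2D ↔ 3D correspondences, omits distortion entirely, and scores the fit by RMS reprojection residual on points that were not used in the solve.

```python
import numpy as np

def solve_projection(pts3d, pts2d):
    """DLT: solve a 3x4 projection matrix P (up to scale) from
    >= 6 measured 2D <-> 3D correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # Null vector of A (smallest singular value) is the flattened P.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)

def reprojection_rms(P, pts3d, pts2d):
    """RMS reprojection residual (pixels) of P on held-out points."""
    Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    proj = (P @ Xh.T).T
    uv = proj[:, :2] / proj[:, 2:3]
    return float(np.sqrt(np.mean(np.sum((uv - pts2d) ** 2, axis=1))))
```

The acceptance logic is the part that matters: the residual is computed on correspondences the solver never saw, so an overfit lens model cannot hide.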

[B]

Camera Trajectory Recovery

Per-frame pose is recovered from tracked image points and known scene geometry. Each frame's solution is independent and inspectable on its own merits.
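
A minimal sketch of that per-frame independence, assuming calibrated intrinsics `K` and using a simple DLT-plus-decomposition in place of whatever solver the pipeline actually runs. Each call takes one frame's tracked points and returns that frame's pose on its own, with no smoothing across frames.

```python
import numpy as np

def pose_from_tracked_points(K, pts3d, pts2d):
    """Recover one frame's camera pose (R, t) from tracked image points
    and known scene geometry, given calibrated intrinsics K.
    Each frame is solved independently -- no temporal coupling."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    P = Vt[-1].reshape(3, 4)
    M = np.linalg.inv(K) @ P          # M ~ [R | t] up to scale and sign
    scale = np.sign(np.linalg.det(M[:, :3])) / np.linalg.norm(M[2, :3])
    M = M * scale
    U, _, Vt2 = np.linalg.svd(M[:, :3])
    R = U @ Vt2                        # nearest rotation matrix
    return R, M[:, 3]
```

Because the solve is per frame, any single frame's pose can be inspected, challenged, or re-run in isolation, which is the property the text describes.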

[C]

Constrained Motion Analysis

Velocities and trajectories are derived in scene-aligned units from constrained motion, with explicit uncertainty — never a single unqualified number.
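
To make "never a single unqualified number" concrete, here is an illustrative central-difference speed estimate carrying a first-order 1-sigma band. The noise model (independent per-axis position noise `sigma_pos`) and the function names are assumptions for the sketch, not the pipeline's actual uncertainty treatment.

```python
import numpy as np

def speed_with_uncertainty(positions, fps, sigma_pos):
    """Per-frame speed (m/s) from scene-aligned positions via central
    differences, returned with a propagated 1-sigma uncertainty rather
    than a single unqualified number."""
    positions = np.asarray(positions, float)
    dt = 2.0 / fps                              # central-difference span
    disp = positions[2:] - positions[:-2]
    speed = np.linalg.norm(disp, axis=1) / dt
    # Independent noise on both endpoints -> first-order sqrt(2)*sigma/dt.
    sigma_v = np.sqrt(2.0) * sigma_pos / dt
    return speed, np.full_like(speed, sigma_v)
```
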

[D]

Reviewable Runs

Inputs, parameters, and intermediates are logged so a run can be reproduced and challenged step by step from the audit manifest.
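
One way such a manifest could look; a hypothetical sketch, not the pipeline's actual schema. The idea is canonical JSON keyed by a content hash, so identical inputs, parameters, and intermediates always reproduce an identical record, and any change is detectable.

```python
import hashlib
import json

def audit_manifest(inputs: dict, parameters: dict, intermediates: dict) -> str:
    """Serialize a run's inputs, parameters, and intermediates as
    canonical JSON keyed by a SHA-256 content hash, so the run can be
    reproduced and challenged step by step. Field names illustrative."""
    body = {
        "inputs": inputs,
        "parameters": parameters,
        "intermediates": intermediates,
    }
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return json.dumps({"sha256": digest, **body}, sort_keys=True)
```

An outside reviewer can recompute the hash from the logged fields and confirm the record was not altered after the fact.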

R&D · Early exploration

Scene-constrained human motion from monocular video.

An early prototype recovering an individual's motion from a single video and re-expressing it inside the same constrained scene geometry used for camera and vehicle estimates. The human figure is anchored to measured ground and the recovered camera — not free-floating.

Note the vehicles in the background: they jitter frame-to-frame because, in this clip, they are tracked without a metric anchor of their own. That instability is the exact failure mode the next prototype addresses — by constraining each vehicle to scene geometry and known dimensions, rather than letting it drift.

Exploratory R&D, not a shipped capability. Shown to illustrate the direction: extending the same scene-constrained, inspectable approach from vehicles to people.

FIG. 02 · Prototype clip · prototype
Recovered human motion replayed against the constrained scene — early prototype, not a final output.
src · monocular video
FIG. 03 · Prototype clip · prototype
Per-frame semantic vehicle mask constrained to scene geometry and known vehicle dimensions to derive a speed estimate.
src · monocular video
R&D · Early exploration

Semantic vehicle masking for constrained speed estimation.

The direct response to the background jitter seen in the previous clip. Here, the vehicle is segmented per frame and locked into the same scene-aligned 3D environment, with the vehicle's known physical dimensions acting as a metric anchor. The bounding box stays dimensionally consistent across frames instead of drifting.

That dimensional anchoring is what turns per-frame masks into a defensible speed reading: motion is derived from constrained, measurable inputs rather than an unconstrained track.
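
A toy version of that dimensional anchoring, with every name and the simple per-frame scaling being illustrative assumptions rather than the prototype's method: metres-per-pixel comes from the vehicle's known length as seen in each frame's mask, and speed comes from scaled centroid displacement.

```python
import numpy as np

def anchored_speed(mask_lengths_px, centroids_px, fps, vehicle_length_m):
    """Use the vehicle's known physical length as a per-frame metric
    anchor: metres-per-pixel from the mask's extent, then speed from
    centroid displacement between consecutive frames. Illustrative
    sketch only -- not a validated speed measurement."""
    m_per_px = vehicle_length_m / np.asarray(mask_lengths_px, float)
    c = np.asarray(centroids_px, float)
    disp_px = np.linalg.norm(np.diff(c, axis=0), axis=1)
    # Scale each displacement by the mean anchor of its two frames.
    scale = 0.5 * (m_per_px[1:] + m_per_px[:-1])
    return disp_px * scale * fps          # m/s per frame pair
```

The key contrast with an unconstrained track is that the metric scale is re-derived every frame from a measurable quantity, so a drifting pixel track cannot silently inflate the speed.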

Exploratory R&D — not a validated speed measurement. Shown to make the working approach visible.

02 · Why constraint matters

Many video-to-3D systems infer geometry the camera never observed. In a forensic context, an inferred surface is not evidence — it is an assumption presented as a measurement.

Tattva3D works the other way. Every estimate is anchored to a calibrated lens, tracked image points, and scene geometry that can be measured or surveyed. Nothing downstream is computed from unseen structure, and every intermediate remains open to inspection.

The output is not a guessed scene. It is an inspectable chain from evidence to measurement.
03 · Who this is for

Forensic engineers

Recover scene-aligned measurements from video evidence with a defensible computational chain.

Accident reconstruction teams

Pair video with survey or scan data to constrain trajectories and speeds without proprietary scene reconstruction.

Insurers and liability teams

Inspect, reproduce, and challenge measurements without relying on a black-box estimate.

04 · What Tattva3D is not
  • Not a generative video-to-3D toy
  • Not a photogrammetry replacement
  • Not a black-box “AI says the speed was X” system
  • Not optimized for visuals before mathematical validity
05 · Current validation focus

Work today centers on constrained motion recovery from monocular video. Each component below is held to the same requirement: the result must be reproducible from a logged manifest by an outside reviewer.

  • F.01 · Lens calibration
  • F.02 · Tracked points
  • F.03 · Per-frame camera pose recovery
  • F.04 · Constrained motion analysis
  • F.05 · Speed estimation
  • F.06 · Auditability and rerunnability
Read the validation log →

Working on a case where video is the only evidence?