Made by Ben Schattinger

A tool to automatically create 3D mappings of lights from two videos


First, run a program that lights each LED sequentially for a fixed period of time. If you are using an Arduino or compatible device, a sample FastLED sketch is available. Then record the sequence from two camera angles 90° apart. The vertical axes of the two recordings must match, so if you had to rotate your camera, rotate the videos accordingly. A tripod is highly recommended for consistency. A modern browser is required.


  • Fully client-side: your video is never uploaded
  • Multithreaded: videos are processed using as many cores as your CPU has
  • Error correction: lights that cannot be seen from a camera are automatically detected, and their positions are corrected using neighboring data
  • Multiple output formats: the 3D data can be downloaded as JSON or CSV, and the raw data extracted from the videos can be downloaded as JSON
  • 3D preview: a preview of the result is shown, updating live as parameters are tweaked
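The error-correction idea can be illustrated with a short sketch. This is a hypothetical example of neighbor-based correction (an assumption about the approach, not the tool's actual code): a light that no camera could locate is linearly interpolated between its nearest located neighbors along the strand.

```typescript
// Hypothetical neighbor-based correction sketch; `Point3` and
// `fillMissing` are illustrative names, not the tool's API.
type Point3 = { x: number; y: number; z: number };

function fillMissing(points: (Point3 | null)[]): (Point3 | null)[] {
  const out = points.slice();
  for (let i = 0; i < points.length; i++) {
    if (points[i] !== null) continue;
    // Scan outward for the nearest successfully located neighbors.
    let lo = i - 1;
    while (lo >= 0 && points[lo] === null) lo--;
    let hi = i + 1;
    while (hi < points.length && points[hi] === null) hi++;
    const a = lo >= 0 ? points[lo] : null;
    const b = hi < points.length ? points[hi] : null;
    if (a && b) {
      const t = (i - lo) / (hi - lo); // fractional position between neighbors
      out[i] = {
        x: a.x + t * (b.x - a.x),
        y: a.y + t * (b.y - a.y),
        z: a.z + t * (b.z - a.z),
      };
    } else {
      out[i] = a ?? b; // at the ends, copy the nearest located light
    }
  }
  return out;
}
```

Linear interpolation works here because consecutive lights on a strand are physically close, so a hidden light is very likely to sit between its visible neighbors.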


I captured videos of a tree outside my house and got this very accurate 3D view:

First video

First, let's set some parameters.

Number of lights
Seconds per light: the number of seconds that each light is lit for.
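Together, these two parameters let a video timestamp be mapped back to the light that was lit when the frame was captured. A minimal sketch of that relationship, assuming the sequence starts at t = 0 (the function name is illustrative, not part of the tool):

```typescript
// Map a video timestamp (seconds) to the index of the light lit at
// that moment. Hypothetical helper, assuming the sequence starts at t = 0.
function lightIndexAt(
  t: number,
  secondsPerLight: number,
  numLights: number
): number | null {
  const index = Math.floor(t / secondsPerLight);
  // null means the timestamp falls outside the lighting sequence.
  return index >= 0 && index < numLights ? index : null;
}
```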

Now, upload your video.

This first video will define the X and Z axes.

The vertical axis will become the Z axis, so both videos "share" their vertical axis.
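Under that convention, the two 2D detections of each light can be merged into one 3D point. A rough sketch, assuming normalized screen coordinates with y as the vertical axis (the function and field names are illustrative, not the tool's code):

```typescript
type Point2 = { x: number; y: number };

// First video: horizontal → X, vertical → Z.
// Second video (90° around): horizontal → Y, vertical → Z.
// Both cameras see the same vertical axis, so the two Z estimates
// are averaged.
function combineViews(first: Point2, second: Point2) {
  return {
    x: first.x,
    y: second.x,
    z: (first.y + second.y) / 2,
  };
}
```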