2022-Now / Fluid

Natural motion controls for apps and screens.

Fluid started because I saw touchless control as an essential future interface, and Microsoft would not fund it internally. Built-in cameras and real-time computer vision were finally good enough to make it work.

[Image: Two people watching a video on a laptop while Fluid shows a touchless volume gesture overlay.]

On anything larger than a tablet, touch starts to feel wrong. TVs, public kiosks, digital menus, large displays, 3D content, games, learning, shopping, and couch-distance media all want a different interaction model.

The benefits stacked up quickly: hygienic public screens, accessible control at a distance, large-screen computing, and 3D-native interaction without controllers or wearables.

Motion plus voice felt like the obvious answer: natural control at a distance, using the camera people already have.

Dirty public screens were the COVID-era wedge, but the idea was always bigger than hygiene: new use cases, media control, and everyday usability at a distance.

No controllers. No wearables. No extra cameras. Just the webcam.

The first direction was 3D shoe interaction for ecommerce, but the broader opportunity proved more interesting: a motion layer for manipulating the web itself.

The current direction is a Chrome extension for touchless browsing: scrolling articles, controlling Netflix and YouTube, and other media controls, with a path toward richer 3D interaction.
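Fluid's actual pipeline isn't public, but the core idea (webcam in, media command out) can be sketched. A typical approach runs an on-device hand-tracking model per frame and classifies gestures from its landmark output. The sketch below is a hypothetical pinch-to-volume mapping over normalized hand landmarks (21 points in [0, 1] coordinates, the convention used by common hand-tracking models); the landmark indices and threshold are assumptions, not Fluid's real tuning:

```typescript
// Hypothetical gesture logic, not Fluid's actual implementation.
type Landmark = { x: number; y: number };

const THUMB_TIP = 4;          // landmark indices in the common 21-point hand layout
const INDEX_TIP = 8;
const PINCH_THRESHOLD = 0.05; // assumed tuning constant (normalized units)

// A "pinch" is thumb tip and index tip close together in the frame.
function isPinching(hand: Landmark[]): boolean {
  const t = hand[THUMB_TIP];
  const i = hand[INDEX_TIP];
  return Math.hypot(t.x - i.x, t.y - i.y) < PINCH_THRESHOLD;
}

// Map the pinch's vertical position to a 0..1 volume:
// top of frame (y = 0) means full volume, bottom means muted.
// Returns null when no pinch is detected.
function pinchToVolume(hand: Landmark[]): number | null {
  if (!isPinching(hand)) return null;
  const y = (hand[THUMB_TIP].y + hand[INDEX_TIP].y) / 2;
  return Math.min(1, Math.max(0, 1 - y));
}
```

In an extension content script, a frame loop would feed webcam frames to the tracking model and apply the result to the page's video element, e.g. `video.volume = v`.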

A good wedge matters. The platform-sized idea can be right and still need a smaller, cleaner entry point.

The product needs to feel useful immediately, not like a demo of the future.

Fluid is pre-launch and being repackaged around the browser extension thesis.

The long-term idea is still a motion OS: motion plus voice as a first-class interaction model for the surfaces touch and remotes never solved.