The Vision for Space
A 3D ambient personal computer
Problem
The problem with tech over the last 50 years is that it shoves a 3D person through a 2D straw, resulting in:
- a bandwidth problem
- numerous health issues
- accessibility barriers
- collaboration barriers
Humans are spatial and tech should be too. My contrarian belief (as an XR enthusiast of 10 years who owns $10k worth of headsets) is that the world is over-allocated to personal devices like glasses, goggles, pins, etc., when the future is invisible, spatial-first, and omnipresent.
The reason is simple: even if glasses are amazing, why limit ourselves to what our eyeballs can see only when we happen to be wearing them?
Vision
If we knew what was happening in our Space, couldn’t we give that context to applications and AI to make our lives better? Imagine having unlimited 24/7 staff for pennies an hour.
What if we could touch our space and have it react to us like Iron Man’s lab? No phone, no computer, no problem.

- Throw your game app to a tabletop touchscreen to play together on the same screen — no screen-share swapping, no config, no confusion
- Throw your screen to a friend’s phone to share a funny GIF
Tech is isolating by default when it could be collaborative through a shared spatial desktop:
- What if the neighborhood watch got realtime alerts for falls, crimes, and violence by pooling exterior cams?
- What if you could scale that up to city scale and beyond?
To do that, we need a new distributed computer that understands spatial context, with show and tell as the new mouse and keyboard. That is Space.
Product

We are building a 3D computer called Space. 2D computers don’t know anything about the world, while Space is a shared, show-and-tell-based, decentralized spatial computer.
- Instead of a 2D desktop, your literal Space is the “desktop”.
- Instead of sitting on your desk, your compute sits in the closet or cloud.
- Instead of the monitor being stuck in one place, the monitor moves to you.
The benefits of this model are:
- Applications have spatial context
- Natural show-and-tell interfaces
- A first-class collaborative and social paradigm
The downside is that it requires camera setup, but that’s a one-time hassle.
DevKit
Come build apps on Space with our DevKit. Space provides standard I/O to developers through media streams, pub/sub signals, private compute, common built-ins, models, and standards for integration; a rough code sketch follows the list below.

- Stream In — instead of a trackpad and keyboard, use 1-2 cameras per room for gesture and voice input
- Stream Out — instead of one display, use any display wherever it’s needed — a dozen $50 monitors throughout the house, your phone, your TVs, plus audio out from your cameras and speakers
- Extended I/O — instead of just audio/visual, an IoT smart lock can be an output device and an air quality sensor an input device
- Integrations — connect with any service or integration over standard web APIs
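To make that I/O model concrete, here is a minimal sketch in TypeScript. None of these names are real DevKit APIs; `SpaceClient`, `gestures.subscribe`, and `displays.nearest` are invented placeholders to illustrate how a stream-in gesture could drive a stream-out display.

```typescript
// Hypothetical sketch only: every type and method below is invented for
// illustration and is not the actual DevKit API.

interface GestureEvent {
  kind: "point" | "grab" | "throw"; // recognized from room cameras (Stream In)
  room: string;                     // which room the gesture happened in
}

interface Display {
  id: string;
  room: string;
  show(content: { url: string }): Promise<void>; // render content (Stream Out)
}

interface SpaceClient {
  gestures: { subscribe(handler: (e: GestureEvent) => void): () => void };
  displays: { nearest(room: string): Promise<Display> };
}

// Example app: on a "throw" gesture, move the current screen to the nearest
// display in that room (the GIF-sharing scenario from the Vision section).
function runThrowToScreen(space: SpaceClient, currentUrl: string): void {
  space.gestures.subscribe(async (event) => {
    if (event.kind !== "throw") return;
    const display = await space.displays.nearest(event.room);
    await display.show({ url: currentUrl });
  });
}
```

The point is the shape of the programming model: apps consume high-level spatial events instead of raw pixels, and route output to whatever surface is closest to the user.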
Space is a developer platform that takes care of the basics like hardware, integrations, infra, private compute, and spatial reconstruction:
- Developers build apps and staff that have the exact same context as the user: they know what the user is looking at, where they are positioned, and what objects are in the scene (a sketch of that context follows below).
- Users hire “apps” as staff that are always available and endlessly patient, for pennies on the dollar.
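As a thought experiment, that shared spatial context might look something like the TypeScript types below. These are made-up names, not the real SpaceKit schema; they only illustrate what “the same context as the user” could contain.

```typescript
// Hypothetical sketch: invented types, not the real SpaceKit schema.

interface Vec3 { x: number; y: number; z: number }

interface SceneObject {
  id: string;
  label: string;   // e.g. "mug", "front door", "stove"
  position: Vec3;  // position in the room's coordinate frame
}

interface SpatialContext {
  room: string;
  userPosition: Vec3;      // where the user is standing
  userGazeTarget?: string; // id of the object the user is looking at, if any
  objects: SceneObject[];  // everything reconstructed in the scene
}

// A staff app gets this context directly, so "what is the user looking at?"
// becomes a simple lookup rather than a computer-vision problem the
// developer has to solve themselves.
function describeFocus(ctx: SpatialContext): string {
  const target = ctx.objects.find((o) => o.id === ctx.userGazeTarget);
  return target
    ? `User in ${ctx.room} is looking at the ${target.label}.`
    : `User in ${ctx.room} is not looking at a known object.`;
}
```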

Concept for SpaceKit
What’s Next
Stay tuned for weekly DevLog posts and announcements. Make sure you are on the waitlist to be first in line for early access.
I can’t wait to show you Space!
— Michael