Glass 2

Research edition

Open-source visual intelligence platform. Shipping Q1 2026.

Visual intelligence, designed for clarity.

Monochrome Display

Green dot-matrix waveguide projection. Crisp and legible in a single glance. No clutter.

Open Hardware

Published schematics. Documented optics. Forkable designs. Inspect it, modify it, manufacture it.

Privacy-First

Local processing prioritized. You control your data. Transparent architecture. No corporate harvesting.

Multi-Model AI

Run any model, local or cloud. OpenRouter integration. No vendor lock-in. Your intelligence stack, your choice.

Sub-50g Frame

All-day wearability. Comfortable enough to forget you're wearing a research platform.

Audio + Visual

Glass 1's audio intelligence plus visual overlays. Text, icons, status, navigation in your peripheral vision.

A Canvas for Visual AI Research.

For Developers

Build AR overlays that label tools on a workbench. Ship navigation agents. Create real-time translation interfaces. Test on open hardware you can inspect and modify.

For Researchers

Run controlled experiments on human-AI interaction. Study how visual cues affect attention, task performance, memory. Access the full stack: optics, firmware, models, data pipelines.

For Open-Source

Fork the platform. Contribute drivers, UX experiments, agents. Share under open licenses. Shape the roadmap.

For Privacy-Conscious

Local processing prioritized. Transparent data flows. User-controlled storage. Your data stays yours.

What Glass 2 enables.

Navigation Overlay

Subtle directional cues in your field of view. Turn-by-turn without pulling out your phone.

Live Captioning

Real-time transcription and translation. Accessibility-first design.

Research Protocols

Capture interactions, run studies, log data for analysis while maintaining privacy controls.

Tool Recognition

AR labels for workbenches, labs, kitchens. Context-aware visual assistance.

Hands-Free Note Taking

Thought logging without interrupting your work. Voice-triggered, visually confirmed.

Status Monitoring

Metrics, alerts, system states. Glanceable information for professionals.

Gemini

Chat with Gemini, an AI assistant.

ChatGPT

Ask ChatGPT questions in real-time, hands-free.

Perplexity

The most powerful answer engine, powered by AI.

Character AI

Super-intelligent chatbots that hear you, understand you, and remember you.

Build Your Own App

Use the Seeit API to build anything you imagine.
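The Seeit API has not been published yet, so as a purely hypothetical sketch of what an overlay app might look like — every name here (`Overlay`, `Display`, `show`) is invented for illustration, and the one-element-at-a-time behavior mirrors the design principles stated on this page:

```python
# Hypothetical sketch only: the Seeit Glass 2 SDK is unreleased, so these
# classes and methods are invented stand-ins, not real API surface.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Overlay:
    text: str          # monochrome dot-matrix text, kept short for glanceability
    duration_ms: int   # how long the element stays in view


class Display:
    """Stand-in for a Glass 2 display handle: shows one element at a time."""

    def __init__(self) -> None:
        self.current: Optional[Overlay] = None

    def show(self, overlay: Overlay) -> None:
        # Replacing, never stacking: a new element always evicts the old one,
        # so the wearer only ever reads a single glanceable item.
        self.current = overlay


display = Display()
display.show(Overlay(text="Turn left in 50 m", duration_ms=3000))
print(display.current.text)  # → Turn left in 50 m
```

The single-slot `current` field is the sketch's way of encoding the "one element at a time, readable in a single glance" rule described below.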

Designed for focus.

01

One element at a time.

No clutter. Visuals readable in a single glance.

02

Purposeful interfaces.

No feeds. No dark patterns. No attention traps.

03

Documented behaviors.

No black boxes. Every major function is published.

04

Readable first.

Monochrome display optimized for legibility, not spectacle.

How it works.

Layer 1

Display

Monochrome waveguide optics. Green dot-matrix projection. Optimized for text and simple graphics.

Layer 2

Compute

On-frame processing for the display and sensors, with phone/compute tethering for heavier workloads.

Layer 3

AI Layer

Multi-model support. Local inference for latency-sensitive tasks. Cloud for complex reasoning. You choose the stack.

Layer 4

Privacy Layer

Local-first processing. Optional cloud sync with user control. Transparent data flows. No corporate harvesting.

Layer 5

Audio

Inherited from Glass 1. 5-microphone array. Open-ear speakers. Wake word with custom training.
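One way to read the AI layer above — local inference for latency-sensitive tasks, cloud for complex reasoning — is as a simple latency-based router. This is an illustrative sketch only, not actual Glass 2 firmware; the task names and the routing rule are assumptions made for illustration:

```python
# Illustrative sketch of the Layer 3 idea: route latency-sensitive tasks to
# on-device inference and heavier reasoning to a cloud model (e.g. via
# OpenRouter). Nothing here comes from the real Glass 2 stack.

# Tasks where round-trip latency would break the experience (assumed set).
LATENCY_SENSITIVE = {"wake_word", "live_captioning", "navigation_cue"}


def route(task: str) -> str:
    """Pick an inference target: on-device when latency matters,
    cloud when the task needs heavier reasoning."""
    return "local" if task in LATENCY_SENSITIVE else "cloud"


print(route("live_captioning"))    # → local
print(route("summarize_meeting"))  # → cloud
```

Because the user "chooses the stack", a real implementation would presumably let this policy be configured or replaced rather than hard-coded.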

Frames, without boundaries.

Open App Store

An unrestricted ecosystem for AI, AR, and wearable apps: no approvals, no censorship. Just pure innovation.

Your AI, Your Choice

Seamlessly switch between ChatGPT, Claude, Perplexity, or your own model. Total flexibility, ultimate control.

Open-Ear Audio

Hear, talk & interact naturally—without shutting out your surroundings. The future of immersive audio.

Snap-On Shades

Removable shades in Black or Brown for seamless transitions between indoors & outdoors.

Open-Source OS & APIs

Devs get full API access to create AI models, real-time overlays & next-gen assistants.

Seeit Glass 2 Tech Specs

Seeit Glass 2

RESEARCH EDITION

Display

Monochrome waveguide, green dot-matrix projection

Optics

Custom waveguide design, documented optical path

Frame

Sub-50g target, comfortable all-day wear

Audio

5-microphone array, dual open-ear speakers

AI Support

Multi-model, local + cloud, OpenRouter integration

Privacy

Local-first processing, transparent data flows

Connectivity

Bluetooth 5.4, Wi-Fi 6, phone/compute tethering

SDK

Visual overlay APIs: text, icons, layouts, timing controls

Development timeline.

Phase 1

Now

SDK development

Privacy architecture

Early partner outreach

Phase 2

Q4 2025

Developer beta units

Public SDK release

Sample application library

Research cohort recruitment

Phase 3

Q1 2026

Public shipping begins

Community registry launch

First hackathon

Research publications

Join the research cohort.

Developer Access

Apply for beta hardware. Build the first visual AI applications.

Apply For Access

Research Partnership

Propose studies. Access the full stack. Collaborate on publications.

Propose Collaboration

Waitlist

Get notified when Glass 2 ships.

Join Waitlist

Frequently Asked Questions

What is Glass 2?

How is it different from Glass 1?

Why monochrome?

What can I build with it?

How does privacy work?

Is the hardware really open-source?

When does it ship?

How do I get early access?

Privacy Policy

© Seeit AI • Seeit AI is a Public Benefit Co.

Terms of Service
