Editorial

They Figured Out How to Do Math With [Laser Light](https://www.photonics.com/Articles/What_is_Photonic_Computing/a67814)

These guys built lenses smaller than a hair strand and stuck them directly on silicon chips. Instead of burning electricity to do AI math, they convert the numbers to light, bounce it through tiny lenses, and convert back to digital.
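The "bounce it through tiny lenses" part isn't hand-waving: a lens physically performs a 2-D Fourier transform, and convolution is just multiplication in the Fourier domain. Here's a digital sketch of that classic 4f-system idea — my own NumPy illustration, not the authors' code:

```python
import numpy as np

# A lens performs a spatial Fourier transform; convolution is multiplication
# in the Fourier plane. This simulates the 4f optical setup digitally.
def optical_convolve(image, kernel):
    img_f = np.fft.fft2(image)                   # first lens: into the Fourier plane
    ker_f = np.fft.fft2(kernel, s=image.shape)   # mask encoding the conv kernel
    return np.real(np.fft.ifft2(img_f * ker_f))  # second lens: back to image space

image = np.random.rand(28, 28)   # MNIST-sized input
kernel = np.ones((3, 3)) / 9.0   # 3x3 averaging filter

out = optical_convolve(image, kernel)  # circular convolution, done "with light"
```

In hardware, the two Fourier transforms and the multiply cost essentially nothing — the light just propagates through the lenses and the mask.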

Here's why this matters: convolution operations eat 60-80% of the power in current AI chips. Your GPU burns 700 watts just moving electrons around to multiply numbers. Light doesn't give a shit about electrical resistance - it just goes through the lens and does the math for basically free.
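To see why convolution dominates the power budget, count the multiply-accumulates in one ordinary conv layer and price them at a typical digital energy cost. The ~1 pJ/MAC figure below is my round-number assumption for scale, not from the article:

```python
# Count the multiply-accumulates (MACs) in one conv layer and price them
# at an assumed ~1 pJ per digital MAC.
def conv_macs(h, w, c_in, c_out, k):
    """MACs for one stride-1, same-padded convolution layer."""
    return h * w * c_in * c_out * k * k

PJ_PER_MAC = 1.0  # assumed digital cost; the optical path aims at ~0 per MAC

macs = conv_macs(h=224, w=224, c_in=64, c_out=64, k=3)
energy_mj = macs * PJ_PER_MAC * 1e-12 * 1e3  # pJ -> mJ
print(f"{macs:,} MACs, ~{energy_mj:.1f} mJ for one pass of one layer")
```

That's ~1.8 billion MACs for a single mid-sized layer — multiply by dozens of layers and thousands of inferences per second and you get your 700-watt GPU.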

Previous optical computing was lab demo bullshit with room-sized equipment and PhD students babysitting lasers. This uses standard 7nm CMOS processes - same fabrication lines TSMC runs for Apple's A-series chips.

Multiple Colors = Multiple Calculations at Once

They run red, green, and blue lasers through the same lens system simultaneously. Each color carries different data - same trick that lets fiber optic cables handle terabits per second.

Regular chips process AI layers one at a time, like a single-core processor from 1995. This thing processes multiple layers in parallel using different colored light. It's like having a GPU where each color wavelength is a separate compute unit.
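The color-parallelism trick can be sketched as one fixed linear operator (the shared optics) acting on independent per-wavelength data streams. Shapes and values here are illustrative, not from the paper:

```python
import numpy as np

# Toy model of wavelength parallelism: one fixed optical element (here just a
# weight matrix W) transforms every color at once; each color carries its own data.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # the shared "optics": one linear operation

inputs = {"red":   rng.standard_normal(8),
          "green": rng.standard_normal(8),
          "blue":  rng.standard_normal(8)}

# In hardware all three multiplies happen in a single pass of light;
# in software we can only loop over the channels.
outputs = {color: W @ x for color, x in inputs.items()}
```

The point: adding a wavelength adds a compute channel without adding another copy of the hardware — same lens, more colors.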

The chip hit 98% accuracy on handwritten digits while using almost zero energy for the actual math. The power draw comes from the lasers and photodetectors, not the computation - so scaling up doesn't kill your electric bill.

The Good News: They Can Actually Make This Shit

Previous optical computing needed exotic materials and custom manufacturing. This uses the same CMOS process as regular chips. The lenses are just etched silicon - no weird external components, no retooling fabs.

They showed electron microscope photos of the actual working chips. Feature size is 100 nanometers, which is easy for current lithography. TSMC has been doing way smaller features for years.

The catch: scaling to real AI models needs thousands of these optical units per chip. Right now it's just a few lenses. You need 10,000+ perfect optical structures per die for serious workloads. That's where manufacturing yield will make or break the economics.
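That yield worry is easy to quantify with a toy independent-failure model. The 1-in-10,000 per-unit defect rate below is my assumption, not a measured number:

```python
# Toy independent-failure yield model: if each optical unit fails with
# probability p, whole-die yield collapses as unit count grows from a few
# lenses to accelerator scale.
def die_yield(units, p_fail_per_unit):
    return (1.0 - p_fail_per_unit) ** units

for n in (10, 1_000, 10_000):
    print(f"{n:>6} units -> {die_yield(n, 1e-4):.1%} of dies fully working")
```

At 10 units you throw away almost nothing; at 10,000 units you're scrapping roughly two out of three dies unless you add redundancy or tighten the process.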

Where This Gets Fucked Up: Analog vs Digital Hell

The optical computation part works fine. The nightmare is connecting analog light systems to digital everything else. Light is analog - it drifts with temperature, wavelength variations, manufacturing tolerances. AI training needs precise digital numbers.

You're constantly converting between digital electrical signals and analog optical signals. Every conversion burns power and adds latency. Do that too much and you've lost your efficiency gains to the interface overhead.
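A napkin energy budget shows the tipping point. Every number below is assumed for illustration — round figures, not measurements from the article:

```python
# Napkin energy budget for the interface problem: optical MACs are ~free,
# but every DAC/ADC crossing costs energy.
PJ_PER_DIGITAL_MAC = 1.0
PJ_PER_CONVERSION = 2.0   # one DAC or ADC sample, assumed
MACS_PER_OUTPUT = 9       # e.g. a 3x3 kernel's work per output value

digital = MACS_PER_OUTPUT * PJ_PER_DIGITAL_MAC  # all-electronic cost
hybrid = 2 * PJ_PER_CONVERSION                  # DAC in + ADC out; optics ~free
print(f"digital: {digital} pJ/output, hybrid optical: {hybrid} pJ/output")
# A 3x3 kernel wins; a 1x1 kernel (1 MAC vs 4 pJ of conversions) would lose.
```

The lesson generalizes: the optics only pay off when you do enough math per conversion. Convert at every layer boundary and the ADCs eat your lunch.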

The only way this works in production is hybrid: keep training digital, use optical for inference-only convolution where you can tolerate some accuracy loss. Pure optical AI is a pipe dream - mixed systems might actually work.

Will Anyone Actually Buy This?

Data center operators spend billions on electricity. If optical processing cuts even 50% of GPU power consumption, they'll throw money at it. Current GPUs burn 300-700 watts each, mostly on data movement and convolution operations.
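Napkin math on what that 50% cut is worth. The fleet size and electricity rate are my assumptions, not figures from the article:

```python
# Rough value of a 50% GPU power cut for a large operator.
gpus = 100_000          # assumed accelerator fleet size
watts_per_gpu = 500     # midpoint of the 300-700 W range quoted above
usd_per_kwh = 0.08      # assumed industrial electricity rate
hours_per_year = 8760

baseline_usd = gpus * watts_per_gpu / 1000 * hours_per_year * usd_per_kwh
savings_usd = 0.5 * baseline_usd
print(f"~${baseline_usd / 1e6:.0f}M/yr on GPU power; ~${savings_usd / 1e6:.0f}M/yr saved")
```

Tens of millions a year in electricity alone, before counting the cooling plant you no longer need to build.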

But going from lab demo to production means handling models with 175B+ parameters, staying accurate when chip temperatures hit 85°C, and making CUDA developers rewrite their software stacks. That's where most "revolutionary" chip technologies die a slow, expensive death.

Nvidia already uses optical interconnects in some systems, so the industry isn't allergic to hybrid designs. Question is whether this scales beyond handwritten digit recognition to actual transformer models and LLMs. Based on the math, it should work. Based on 20 years of optical computing promises, I'll believe it when data centers start buying chips.

Photonic vs Electronic AI Processing Comparison

| What Actually Matters | Photonic AI Chip | Traditional Electronic Chip |
|---|---|---|
| Power Bills | Uses 100x less electricity (data centers rejoice) | Electricity vampire (300-700 W per GPU) |
| How It Works | Converts math to laser light | Burns electrons through silicon |
| Speed | Light-speed calculations | Electron-speed (fast, but not light-fast) |
| Multi-tasking | Different laser colors = parallel tasks | One thing at a time, mostly |
| Heat Problems | Barely gets warm | Needs serious cooling or it melts |
| Can You Build It | Same fabs that make current chips | Same fabs that make current chips |
| Actually Works | 98% accuracy in testing | 98% accuracy (battle-tested) |
| Plays Well With Others | NVIDIA already uses optical stuff | All-electric ecosystem |
| Scaling Up | More compute ≠ much more power | More compute = proportionally more power |
| The Magic | Tiny lighthouse lenses on chip | Billions of transistors switching |

Questions I Keep Getting

**Q: AI chips that run on light - that sounds like sci-fi bullshit?**

A: Nah, they actually figured it out. Convert AI math to laser light, bounce it through microscopic lenses, convert back to digital. No electricity for the heavy lifting means almost zero heat and power draw.

**Q: What's this convolution thing everyone mentions?**

A: The math that makes AI work. Every face recognition, text reading, and video processing task runs billions of convolution operations. That's what's burning all the power in your GPU right now.

**Q: "100x more efficient" sounds like marketing lies.**

A: It's legit. Current GPUs burn 300-700 watts each, mostly just moving electrons around. Light doesn't give a fuck about electrical resistance - it just does the math and moves on.

**Q: Actually works or just another lab demo that goes nowhere?**

A: Hit 98% accuracy on handwritten digits in real tests. Same performance as traditional chips, fraction of the power. Still early, but the math checks out.

**Q: How the hell do you fit lenses on a chip?**

A: Same way they build regular chips, just etching lens shapes instead of transistors. Hair-width lenses made from silicon. TSMC could fab these tomorrow if they wanted to.

**Q: Multiple calculations at once?**

A: Different colored lasers carry different data through the same lens system. Like fiber optic cables handling terabits - same principle, chip-scale. Run multiple AI layers in parallel.

**Q: When can I actually get one?**

A: They can use existing semiconductor processes, so manufacturing isn't the blocker. But first-generation anything is buggy as hell. Real products are probably 2-3 years out.

**Q: What fails first?**

A: Computer vision gets the biggest benefit but also the biggest risk. Self-driving cars, medical imaging, facial recognition - all the high-stakes stuff that needs convolution processing done right.

**Q: Does it really run cool?**

A: The optical path itself barely heats up - light through a lens doesn't fight resistance the way electrons in wires do. The lasers and photodetectors still draw power, but nothing like a GPU die. Your data center cooling bills would actually drop.

**Q: Why isn't this just another "revolutionary" chip that disappears?**

A: They're not reinventing manufacturing from scratch. Uses existing CMOS processes, works with current optical infrastructure. Evolutionary, not revolutionary - which means it might actually ship.
