Type 1 Compute

Your model. Your hardware.

10× more efficient FPGA AI inference

We convert your existing AI models to run on FPGAs at a fraction of the power—no retraining, no new hardware, no GPU dependency. How we work on type1compute.com →

Currently engaged with tier 1 defense primes and international systems integrators.

What we've shipped

Documentation on this site matches the open-source lineup featured on type1compute.com. Each project is also listed on our GitHub organization, with a mirrored build at GitHub Pages docs.

Performance comparison

Gesture recognition benchmark (DVS128 dataset)
Energy efficiency on neuromorphic-emulated FPGA hardware. Gesture recognition PDF brief →

Type 1 Compute FPGA: 75.76 GOP/s/W
Jetson Nano: 8.00 GOP/s/W
RTX 3060: 3.81 GOP/s/W
Intel i9: 0.31 GOP/s/W

244× more efficient than CPU, 9.5× more efficient than Jetson
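The headline multipliers follow directly from the efficiency table above. A quick sanity check of the ratios in Python (figures taken from the table; no new data):

```python
# Energy-efficiency figures from the DVS128 gesture benchmark (GOP/s/W),
# as listed in the comparison above.
efficiency = {
    "Type 1 Compute FPGA": 75.76,
    "Jetson Nano": 8.00,
    "RTX 3060": 3.81,
    "Intel i9": 0.31,
}

fpga = efficiency["Type 1 Compute FPGA"]
for name, value in efficiency.items():
    if name != "Type 1 Compute FPGA":
        # Ratio of FPGA efficiency to each baseline platform.
        print(f"{name}: {fpga / value:.1f}x")
```

Dividing 75.76 by 0.31 gives roughly 244 (the CPU figure), and 75.76 by 8.00 gives roughly 9.5 (the Jetson figure).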

Published work

Technical results across sparse inference, event-driven perception, and autonomous edge compute. Additional medical and industrial applications are available on request.

Object detection
Event-based object detection
5.7× efficiency gain

SpikeYOLO: track fast-moving threats in real time using far less power than conventional AI stacks.

Open PDF brief →
Human–machine interface
Gesture recognition
75.76 GOP/s/W

Energy efficiency measured on neuromorphic-emulated FPGA hardware vs. edge GPUs and CPUs.

Open PDF brief →
Autonomous systems
UAV edge control
7× over Jetson Nano

Low-SWaP inference for platforms where every watt counts.

Open PDF brief →
Space & radiation
Radiation-tolerant compute
5× higher MTBF

Deterministic, low-latency inference for harsh environments.

Open PDF brief →

How we work

Full process narrative on type1compute.com →

03.1 Intake

Bring your model

Hand us your PyTorch or ONNX model as-is. No retraining. No pipeline changes. Your training workflow stays the same.

03.2 Deployment

10× more efficient

We handle conversion, optimization, and FPGA deployment. Your team works with the outcome—no HDL expertise required on your side.

03.3 Ongoing

Managed service

We maintain the deployment after handoff. Optimization continues as your workload evolves.

On the roadmap: a custom ASIC targeting 100× efficiency over Jetson, with priority access for pipeline partners.

Work with us

Contact, backers, and apply on type1compute.com →

Defense
Active

SWaP-constrained platforms, autonomous systems, EW, radiation-tolerant compute.

Telecom
Accepting partners

5G baseband, RF classification, sparse signal processing on FPGAs.

Industrial
Accepting partners

Vibration, acoustic, and thermal sensor fusion at the edge.

Medical
Research stage

EEG, EMG, and implantable-device inference—defining the problem together.

Edge AI inference for platforms where every watt counts.