Architecture

System Overview

TALON is a neuromorphic computing SDK built around a layered package architecture. Each package has a single responsibility and depends only on packages below it in the stack.

Dependency rules:

  • talon.ir has no dependency on PyTorch, visualization, or hardware
  • talon.bridge imports talon.ir; the reverse is never allowed
  • talon.viz imports talon.ir; does not import talon.bridge
  • talon.graph imports talon.ir for node type inspection
  • talon.backend imports talon.ir and talon.graph
  • talon.io is fully independent; handles event data without any IR dependency
  • talon.sdk imports all packages and re-exports a unified API
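The rules above can be expressed as an allow-list and checked mechanically. A minimal sketch, assuming a flat package-to-dependencies table (the `ALLOWED_DEPS` dict and `import_allowed` helper are illustrative, not part of the TALON API):

```python
# Hypothetical encoding of the layering rules above; not TALON code.
# Each package maps to the set of TALON packages it may import.
ALLOWED_DEPS = {
    "talon.ir":      set(),                 # foundation: no TALON deps
    "talon.bridge":  {"talon.ir"},
    "talon.viz":     {"talon.ir"},
    "talon.graph":   {"talon.ir"},
    "talon.backend": {"talon.ir", "talon.graph"},
    "talon.io":      set(),                 # fully independent
    "talon.sdk":     {"talon.ir", "talon.bridge", "talon.viz",
                      "talon.graph", "talon.backend", "talon.io"},
}

def import_allowed(importer: str, imported: str) -> bool:
    """Return True if `importer` may depend on `imported` under the rules."""
    return imported in ALLOWED_DEPS.get(importer, set())
```

A linter hook could call `import_allowed` on every cross-package import it finds and flag violations such as `talon.ir` importing `talon.bridge`.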

Package Dependency Map

                     ┌──────────────────────────────┐
                     │           talon.ir           │
                     │   (numpy, rustworkx, h5py)   │
                     └──────────────┬───────────────┘
                                    │
      ┌──────────────┬──────────────┼───────────────┬─────────┐
      │              │              │               │         │
      ▼              ▼              ▼               ▼         │
┌────────────┐  ┌─────────┐  ┌──────────────┐  ┌──────────┐   │
│talon.bridge│  │talon.viz│  │ talon.graph  │  │ talon.io │   │
│  (torch)   │  │(pillow) │  │(numpy,scipy) │  │ (numpy)  │   │
└────────────┘  └─────────┘  └──────┬───────┘  └──────────┘   │
                                    │                         │
                                    ▼                         │
                             ┌──────────────┐                 │
                             │talon.backend │◄────────────────┘
                             │   (numpy)    │
                             └──────────────┘

        ┌────────────────────────────────────────────┐
        │                 talon.sdk                  │
        │        (all packages + click, rich)        │
        └────────────────────────────────────────────┘

talon.ir: Core IR

The foundation layer. All other packages depend on it; it depends on nothing from the TALON stack.

Primitive Categories

TALON IR provides 36 primitives across 8 categories:

Category           Primitives
Linear             Affine, SpikingAffine
Convolution        Conv1d, Conv2d, SepConv2d, SConv, SDConv, SGhostConv
Spatial            MaxPool2d, AvgPool2d, Upsample, Flatten
SNN Neurons        LIF, IF
ANN Activations    ReLU, Sigmoid, Tanh, Softmax, GELU, ELU, PReLU
Normalization      BatchNorm1d, BatchNorm2d, LayerNorm, Dropout
Routing            ChannelSplit, Concat, Skip, HybridRegion
Ghost/Detection    SGhostEncoderLite, GhostBasicBlock1/2, SDDetect, DFLDecode, Dist2BBox, NMS

Graph Lifecycle

1. Define nodes dict + edges list
2. ir.Graph(nodes, edges) ← rustworkx DAG, validated on construction
3. ir.write('model.t1c', graph) ← HDF5, all params as datasets
4. ir.read('model.t1c') ← deserializes back to Graph

talon.bridge: PyTorch Bridge

Bidirectional conversion between PyTorch nn.Module objects and talon.ir.Graph.

Export Flow (PyTorch → IR)

nn.Module
├── TALONTracer (symbolic trace)
│   └── captures module graph, shapes, dtypes
├── node_converter (module → IR node)
│   └── Linear → Affine, Conv → Conv2d, snn.Leaky → LIF, ...
└── ir.Graph(nodes, edges)

Import Flow (IR → PyTorch)

ir.Graph
├── node_to_module (IR node → nn.Module)
│   └── Affine → Linear, LIF → LIFModule, SDConv → DepthwiseConv2d, ...
├── GraphExecutor
│   ├── topological sort of nodes
│   ├── routes tensors along edges during forward()
│   └── handles multi-output nodes (ChannelSplit → tuple)
└── CyclicGraphExecutor (for recurrent/SNN graphs)
    └── carries hidden state across timesteps
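The GraphExecutor's core mechanism — topological sort, then routing values along edges — can be sketched in a few lines of plain Python. The `execute` function and toy nodes below are illustrative, not the bridge implementation:

```python
# Illustrative sketch of topological-order graph execution (not talon.bridge code).
from collections import defaultdict, deque

def execute(nodes, edges, inputs):
    """nodes: {name: fn or None}, edges: [(src, dst)], inputs: {name: value}."""
    preds, indeg = defaultdict(list), defaultdict(int)
    for src, dst in edges:
        preds[dst].append(src)   # preserves edge order for multi-input nodes
        indeg[dst] += 1
    # Kahn's algorithm: start from nodes with no incoming edges
    ready = deque(n for n in nodes if indeg[n] == 0)
    values = {}
    while ready:
        name = ready.popleft()
        args = [values[p] for p in preds[name]]
        values[name] = inputs[name] if name in inputs else nodes[name](*args)
        for src, dst in edges:   # release successors whose inputs are complete
            if src == name:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    ready.append(dst)
    return values

nodes = {"x": None, "double": lambda v: 2 * v, "inc": lambda v: v + 1}
edges = [("x", "double"), ("double", "inc")]
out = execute(nodes, edges, {"x": 3})   # x=3 → double=6 → inc=7
```

The real executor routes tensors instead of scalars and special-cases multi-output nodes, but the scheduling logic is the same shape.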

Multi-Output Routing

ChannelSplit nodes are marked with _is_multi_output = True. The GraphExecutor uses a set of such node names to correctly route tuple outputs without treating them as hidden state:

from talon import ir, bridge
import numpy as np

split = ir.ChannelSplit(split_sections=[8, 8], dim=1)
concat = ir.Concat(num_inputs=2, dim=1)

graph = ir.Graph(
    nodes={'x': ir.Input(...), 'split': split, 'concat': concat, 'y': ir.Output(...)},
    edges=[('x', 'split'), ('split', 'concat'), ('split', 'concat'), ('concat', 'y')],
)
executor = bridge.ir_to_torch(graph)

talon.viz: Visualization

Generates self-contained HTML visualizations using D3.js and Dagre. No server required.

Visualization Pipeline

ir.Graph
├── graph_to_dict() ← JSON-serializable representation
│   ├── serialize_node() per node (type, shape, params, color, icon)
│   └── detect_all_patterns() (RepConv, SPP, skip connections, fan-out)
├── render_html() ← inject graph_dict into Jinja2 template
│   └── viewer.html template with D3.js + Dagre
└── export_html() / serve()

Pattern Detection

PatternDetector.detect_all_patterns(graph)
├── detect_repconv_patterns() → RepConv blocks
├── detect_spp_patterns() → Spatial Pyramid Pooling
├── detect_sppf_patterns() → Fast SPP (YOLOv8 style)
├── detect_skip_connection_patterns() → residual / concatenate skips
├── find_fan_out_points() → nodes with multiple output branches
└── find_merge_points() → nodes with multiple inputs

talon.graph: Partitioning and Placement

Maps IR graphs onto neuromorphic hardware meshes.

Partitioning Algorithms

Algorithm            Strategy                              Best For
partition_greedy     BFS-order, fill cores sequentially    Dense feedforward networks
partition_edgemap    Minimize cross-core edges greedily    Networks with many skip connections
partition_spectral   Graph Laplacian spectral bisection    Balanced partitions on complex topologies
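A minimal sketch of the greedy strategy, assuming a simple per-core SRAM budget model; the function name mirrors the table, but its signature and budget model are assumptions, not the talon.graph API:

```python
# Hypothetical sketch of BFS-order greedy partitioning (not talon.graph code).
def partition_greedy(order, sram_bytes, budget):
    """order: node names in BFS order; sram_bytes: {name: bytes};
    budget: SRAM bytes per core. Returns a list of cores (lists of names)."""
    cores, current, used = [], [], 0
    for name in order:
        need = sram_bytes[name]
        if current and used + need > budget:
            cores.append(current)        # current core is full: start a new one
            current, used = [], 0
        current.append(name)
        used += need
    if current:
        cores.append(current)
    return cores

parts = partition_greedy(
    ["conv1", "conv2", "fc1", "fc2"],
    {"conv1": 600, "conv2": 500, "fc1": 300, "fc2": 900},
    budget=1000,
)
# conv1 fills core 0; conv2+fc1 fit together in core 1; fc2 takes core 2
```

The edgemap and spectral variants differ only in how `order` (or the assignment itself) is chosen, trading fill efficiency against cross-core traffic.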

Core Metrics

Core utilization  = layer_sram_bytes / sram_bytes_per_core
Hop distance = Σ manhattan_dist(src_core, dst_core) over cross-core edges
PlacementResult.improvement = (baseline_hops - placed_hops) / baseline_hops
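A worked example of the hop-distance and improvement metrics, with made-up core coordinates on a 2-D mesh (the placement, edge list, and baseline value are invented for illustration):

```python
# Worked example of the metrics above; all numbers are hypothetical.
def manhattan(a, b):
    """Manhattan distance between two (row, col) mesh coordinates."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

placement = {"conv1": (0, 0), "conv2": (0, 1), "fc": (1, 1)}
cross_core_edges = [("conv1", "conv2"), ("conv2", "fc"), ("conv1", "fc")]

hops = sum(manhattan(placement[s], placement[d]) for s, d in cross_core_edges)
# 1 + 1 + 2 = 4 total hops

baseline_hops = 6                                      # e.g. a naive placement
improvement = (baseline_hops - hops) / baseline_hops   # 2/6 ≈ 0.333
```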

talon.backend: Compilation and Simulation

Compiles IR graphs to hardware descriptors and simulates execution on CPU and FPGA targets.

Simulation Pipeline

ir.Graph + CompileConfig
├── validate() ← check all node types are supported
├── compile()  ← produce HardwareDescriptor (JSON/dict)
├── simulate() ← CPU forward pass for n_steps timesteps
│   └── returns SimulationResult (outputs, spikes, membrane per core)
└── profile()  ← SimulationResult + energy estimation
    └── returns ProfileResult with .summary() formatted string

Energy Presets

ENERGY_PRESETS = {
    "45nm_cmos":         {"mac_pj": 1.0,  "spike_pj": 0.1,  "sram_pj": 2.0},
    "zynq_7020":         {"mac_pj": 4.5,  "spike_pj": 0.5,  "sram_pj": 8.0},
    "zynq_us_plus":      {"mac_pj": 3.2,  "spike_pj": 0.35, "sram_pj": 6.0},
    "neuromorphic_int8": {"mac_pj": 0.08, "spike_pj": 0.03, "sram_pj": 0.5},
    "t1c_asic_target":   {"mac_pj": 0.05, "spike_pj": 0.02, "sram_pj": 0.3},
}
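A back-of-envelope estimate using the zynq_7020 numbers above. The operation counts are invented for illustration, and the linear combination of MAC, spike, and SRAM terms is an assumption about how profile() aggregates energy:

```python
# Hypothetical energy estimate from per-operation pJ costs (zynq_7020 preset).
preset = {"mac_pj": 4.5, "spike_pj": 0.5, "sram_pj": 8.0}

n_macs, n_spikes, n_sram = 2_000_000, 50_000, 400_000   # made-up workload
energy_pj = (n_macs   * preset["mac_pj"]      # 9,000,000 pJ of MACs
           + n_spikes * preset["spike_pj"]    #    25,000 pJ of spikes
           + n_sram   * preset["sram_pj"])    # 3,200,000 pJ of SRAM accesses
energy_uj = energy_pj / 1e6                   # 1 µJ = 1e6 pJ → 12.225 µJ
```

Swapping in the neuromorphic_int8 or t1c_asic_target presets on the same counts shows the two-orders-of-magnitude gap the table implies.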

talon.io: Event Streaming and Encoding

Handles neuromorphic event data I/O, neural spike encoding, and sensor integration. Fully independent of talon.ir.

Event Data Model

EVENT_DTYPE = np.dtype([
    ('t', np.int64),   # timestamp in microseconds
    ('x', np.int16),   # pixel x coordinate
    ('y', np.int16),   # pixel y coordinate
    ('p', np.int8),    # polarity: 0 (OFF) or 1 (ON)
])
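Constructing a small event array with this dtype and sorting it by timestamp (the sample events are made up; only the dtype comes from the docs above):

```python
# Building and querying a tiny event stream with the documented dtype.
import numpy as np

EVENT_DTYPE = np.dtype([
    ('t', np.int64),   # timestamp in microseconds
    ('x', np.int16),   # pixel x coordinate
    ('y', np.int16),   # pixel y coordinate
    ('p', np.int8),    # polarity: 0 (OFF) or 1 (ON)
])

events = np.array(
    [(120, 4, 7, 1), (35, 2, 3, 0), (98, 4, 7, 1)],   # (t, x, y, p) tuples
    dtype=EVENT_DTYPE,
)
events = np.sort(events, order='t')        # event streams are time-ordered
on_count = int((events['p'] == 1).sum())   # count ON-polarity events
```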

Encoding Schemes

Scheme            Description                                      Output Shape
rate_encode       Poisson spike train from firing rate             (T, N)
latency_encode    Single spike at time proportional to intensity   (T, N)
delta_encode      Spike on pixel value change                      (T, H, W, 2)
temporal_encode   Threshold-crossing temporal codes                (T, N)
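A sketch of the first scheme: each of N inputs with firing probability r in [0, 1] spikes independently at each of T timesteps with probability r. The function name mirrors the table, but the signature is an assumption, not the talon.io API:

```python
# Illustrative Poisson rate encoding producing a (T, N) spike raster.
import numpy as np

def rate_encode(rates, n_steps, rng=None):
    """rates: (N,) per-input firing probabilities in [0, 1] → (T, N) int8 spikes."""
    rng = rng or np.random.default_rng(0)   # seeded for reproducibility
    rates = np.asarray(rates, dtype=float)
    # A spike fires wherever a uniform draw falls below the rate
    return (rng.random((n_steps, rates.shape[0])) < rates).astype(np.int8)

spikes = rate_encode([0.0, 0.5, 1.0], n_steps=100)
# column 0 never fires, column 2 fires every step, column 1 fires ~half the time
```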

talon.sdk: SDK Meta-Package

talon.sdk is the unified entry point. It re-exports the full API of all six sub-packages plus SDK-specific tools.

CLI Commands

talon --help                       # Show all commands
talon info                         # Ecosystem version table
talon primitives                   # List all 36 IR primitives

talon analyze <model.t1c>          # Layer stats, parameter counts
talon profile <model.t1c>          # Latency/memory/energy estimation
talon lint <model.t1c>             # Validate IR constraints
talon compare <a.t1c> <b.t1c>      # Diff two graphs
talon inspect <model.t1c> --json   # JSON node/edge dump
talon validate <model.t1c>         # Quick validity check

talon convert <model.t1c> --to spiking  # ANN → SNN conversion
talon energy <model.t1c>           # Energy breakdown
talon run <model.t1c> --steps 10   # CPU simulation
talon pipeline <model.t1c>         # Full partition+compile+run

Full System Data Flow

End-to-end workflow from PyTorch model to hardware deployment:

PyTorch nn.Module
  └─ talon.bridge (export) ─► ir.Graph (persisted as .t1c via ir.write)
       └─ talon.graph (partition + place) ─► PlacementResult
            └─ talon.backend (compile / simulate / profile) ─► HardwareDescriptor, ProfileResult
Design Decisions

Why HDF5?

  • Efficient storage of large weight matrices (float16/32/64, int8/16/32)
  • Self-describing format with embedded metadata and version info
  • Wide language support (Python, C++, MATLAB, Julia)
  • Compression support for deployment on embedded targets
  • Compatible with myhdf5.hdfgroup.org web viewer for inspection

Why Separate Packages?

Concern                  Benefit
Dependency isolation     talon.ir works without PyTorch; talon.io works without talon.ir
Independent versioning   IR can evolve separately from framework bindings
Clear ownership          Different teams own different packages
Focused testing          Each package has its own test suite (327 + 171 + 162 + 61 + 96 + 63 + 142 = 1022 tests)
Deployment flexibility   Hardware teams only need talon.ir + talon.backend, not PyTorch

Why Namespace Packages?

All packages share the talon namespace:

from talon import ir, bridge, viz, sdk
from talon.ir import Graph, LIF, Conv2d
from talon.backend import get_backend

This allows the full ecosystem to feel like a single library (talon) while still being independently installable packages (talon-ir, talon-bridge, etc.). The SDK meta-package (t1c-talon) pulls in everything.

Talon Package Manifest

PyPI Package    Python Namespace   Repository
talon-ir        talon.ir           talonir
talon-bridge    talon.bridge       talonbridge
talon-viz       talon.viz          talonviz
talon-graph     talon.graph        talongraph
talon-backend   talon.backend      talonbackend
talon-io        talon.io           talonio
t1c-talon       talon.sdk          t1ctalon