# HULK-SMASH Instance Segmentation
The HULK-SMASH algorithm [3] extends semantic segmentation to instance segmentation by decoding each classification spike separately and then grouping related instances into objects.
## Algorithm Pipeline

```mermaid
flowchart LR
  CS[Classification Spikes] --> HULK
  HULK[HULK: Unravel Each Spike] --> INST[Per-Instance Masks]
  INST --> ASH_STEP[Compute ASH]
  ASH_STEP --> SCORE[Pairwise SMASH Scores]
  SCORE --> GROUP[Threshold + Group]
  GROUP --> OBJ[Detected Objects]
```
## Step 1: HULK -- Hierarchical Unravelling of Linked Kernels
Instead of feeding all classification spikes through the decoder at once, HULK decodes each spike individually:
- Create a one-hot activation at the spike's spatial location in classification space.
- Pass it through the decoder (TransConv3 → Unpool2 → TransConv2 → Unpool1 → TransConv1).
- The output is a pixel mask showing which input pixels contributed to that detection.
- Record intermediate spike activity at each decoder layer.
> "The Hierarchical Unravelling of Linked Kernels (HULK) process permits spiking activity from the classification convolution layer to be tracked as it propagates through the decoding layers." [3]
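The per-spike decoding loop can be sketched as follows. This is a minimal illustration, not the library's actual `HULKDecoder` internals: `decode_spike` and the toy decoder are hypothetical stand-ins, with 4x nearest-neighbour upsampling standing in for the TransConv/Unpool stack.

```python
import numpy as np

def decode_spike(spike_pos, decoder, class_shape):
    """Decode a single classification spike into an input-space mask.

    spike_pos   -- (feature, y, x) location of one spike (assumed layout)
    decoder     -- callable mapping a classification-space tensor to a pixel mask
    class_shape -- shape of the classification feature map
    """
    one_hot = np.zeros(class_shape, dtype=np.float32)
    one_hot[spike_pos] = 1.0   # isolate this single spike
    return decoder(one_hot)    # propagate it back through the decoder alone

# Toy decoder: 4x nearest-neighbour upsampling stands in for the
# TransConv3 -> Unpool2 -> TransConv2 -> Unpool1 -> TransConv1 stack.
toy_decoder = lambda x: np.kron(x.sum(axis=0), np.ones((4, 4))) > 0

# A spike in feature 0 at classification position (1, 2) maps back to
# a 4x4 block of input pixels.
mask = decode_spike((0, 1, 2), toy_decoder, class_shape=(3, 4, 4))
```

Because each spike is decoded in isolation, each resulting mask covers only the input pixels that contributed to that one detection, which is what makes per-instance separation possible.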
## Step 2: ASH -- Active Spike Hashing
The intermediate spike activity from HULK is a 4D binary tensor $S \in \{0,1\}^{F \times T \times H \times W}$, indexed by feature, timestep, and spatial position. ASH compresses this into a 2D binary matrix indexed by (feature, timestep):

$$\mathrm{ASH}[f, t] = \begin{cases} 1 & \text{if } \sum_{x,y} S[f, t, x, y] > 0 \\ 0 & \text{otherwise} \end{cases}$$
This discards spatial detail but preserves the featural-temporal fingerprint of each instance.
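Assuming a (feature, time, height, width) tensor layout, the compression reduces to an "any spike at this (feature, timestep)?" test over the spatial axes. The NumPy one-liner below is an illustrative sketch, not the library's `ActiveSpikeHash` implementation:

```python
import numpy as np

def active_spike_hash(spikes: np.ndarray) -> np.ndarray:
    """Collapse a 4D spike tensor (F, T, H, W) into a binary (F, T) matrix."""
    return spikes.any(axis=(2, 3))  # 1 where any spatial location spiked

# 2 features, 3 timesteps, 4x4 spatial grid
spikes = np.zeros((2, 3, 4, 4), dtype=bool)
spikes[0, 1, 2, 3] = True  # feature 0 fires once, at timestep 1
ash = active_spike_hash(spikes)
# ash[0, 1] is True; all other entries are False
```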
## Step 3: SMASH -- Similarity Matching through Active Spike Hashing
The SMASH score between two instances $i$ and $j$ combines ASH similarity with the spatial overlap of their decoded masks:

$$\mathrm{SMASH}(i, j) = J(\mathrm{ASH}_i, \mathrm{ASH}_j) \times \mathrm{IoU}(\mathrm{mask}_i, \mathrm{mask}_j)$$

where $J$ is the Jaccard similarity computed on binary matrices:

$$J(A, B) = \frac{\sum_{f,t} A[f,t] \wedge B[f,t]}{\sum_{f,t} A[f,t] \vee B[f,t]}$$

Instances with $\mathrm{SMASH}(i, j) \geq \theta$ are grouped into the same object. The default threshold is $\theta = 0.1$.
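A minimal sketch of the pairwise scoring and threshold grouping, assuming a multiplicative combination of ASH Jaccard similarity and mask IoU (itself a Jaccard score on pixel masks). The helper names are illustrative, not the library API:

```python
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard similarity |A AND B| / |A OR B| on binary arrays."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def smash(ash_i, ash_j, mask_i, mask_j) -> float:
    # ASH (featural-temporal) similarity weighted by spatial mask overlap
    return jaccard(ash_i, ash_j) * jaccard(mask_i, mask_j)

def group_by_threshold(scores: np.ndarray, theta: float = 0.1):
    """Union-find grouping: i and j share an object if scores[i, j] >= theta."""
    n = len(scores)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if scores[i, j] >= theta:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Grouping is transitive: if instance 0 matches 1 and 1 matches 2, all three land in one object even when 0 and 2 score below the threshold.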
## Implementation
```python
from spikeseg.algorithms import HULKDecoder, group_instances_to_objects

# Build a decoder that mirrors the encoder's architecture
hulk = HULKDecoder.from_encoder(encoder)

# Decode all classification spikes into per-instance masks
instances = hulk.process_to_instances(
    classification_spikes=encoder_output.classification_spikes,
    pool1_indices=encoder_output.pooling_indices.pool1_indices,
    pool2_indices=encoder_output.pooling_indices.pool2_indices,
    pool1_output_size=encoder_output.pooling_indices.pool1_output_size,
    pool2_output_size=encoder_output.pooling_indices.pool2_output_size,
    n_timesteps=10,
)

# Group instances into objects via pairwise SMASH scores
objects = group_instances_to_objects(instances, smash_threshold=0.1)
```
Each `Instance` carries: `instance_id`, `ash` (`ActiveSpikeHash`), `bbox` (`BoundingBox`), `mask`, and `class_id`.
Each `Object` aggregates its member instances and computes a combined bounding box.
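One plausible aggregation for the object-level box (the actual `Object` implementation may differ) is the elementwise min/max over the instance boxes' corners:

```python
def combined_bbox(bboxes):
    """Smallest box covering all instance boxes, each given as (x_min, y_min, x_max, y_max)."""
    xs0, ys0, xs1, ys1 = zip(*bboxes)
    return (min(xs0), min(ys0), max(xs1), max(ys1))

combined_bbox([(2, 3, 5, 6), (1, 4, 7, 5)])  # -> (1, 3, 7, 6)
```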
See API: Algorithms for full class signatures.