Key Takeaways
- A field-wide goal is to achieve generalizable, cross-subject models.
- A major obstacle to this goal is the substantial variability in neural representations across individuals, which has so far required training bespoke models or fine-tuning separately for each subject.
Why It Matters
Context
A field-wide goal is to achieve generalizable, cross-subject models. A major obstacle to this goal is the substantial variability in neural representations across individuals, which has so far required training bespoke models or fine-tuning separately for each subject. To address this challenge, the paper introduces a meta-optimized approach for semantic visual decoding from fMRI that generalizes to novel subjects without any fine-tuning. By simply conditioning on a small set of image-brain activation examples from the new individual, the model rapidly infers their unique neural encoding patterns, enabling robust and efficient visual decoding.

The approach is explicitly optimized for in-context learning of the new subject's encoding model and performs decoding by hierarchical inference, inverting the encoder. First, for multiple brain regions, the authors estimate the per-voxel visual response encoder parameters by constructing a context over multiple stimuli and responses. Second, they construct a context consisting of encoder parameters and response values over multiple voxels to perform aggregated functional inversion. The paper demonstrates strong…
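The two-stage decode described above can be sketched in miniature. This is not the paper's method: it assumes a linear per-voxel encoder and substitutes closed-form ridge regression for the meta-optimized in-context estimator; all dimensions, variable names, and noise levels are hypothetical. It only illustrates the structure — stage 1 infers encoder parameters from a small stimulus-response context, stage 2 inverts the estimated encoder by aggregating over voxels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper)
d = 16       # image-embedding dimension
v = 100      # number of voxels
n_ctx = 20   # context examples from the new subject

# Simulate a new subject's (unknown) linear encoding model
W_true = rng.normal(size=(v, d))                               # per-voxel weights
X_ctx = rng.normal(size=(n_ctx, d))                            # context stimuli
Y_ctx = X_ctx @ W_true.T + 0.1 * rng.normal(size=(n_ctx, v))   # voxel responses

# Stage 1: infer per-voxel encoder parameters from the context.
# (Closed-form ridge regression stands in for the paper's
#  meta-optimized in-context estimator.)
lam = 1e-2
W_hat = np.linalg.solve(X_ctx.T @ X_ctx + lam * np.eye(d),
                        X_ctx.T @ Y_ctx).T                     # shape (v, d)

# Stage 2: decode a held-out stimulus by inverting the estimated encoder,
# aggregating evidence over all voxels (regularized least squares).
x_test = rng.normal(size=d)
y_test = W_hat @ x_test            # observed response (noise-free for clarity)
x_dec = np.linalg.solve(W_hat.T @ W_hat + lam * np.eye(d),
                        W_hat.T @ y_test)

print(np.corrcoef(x_test, x_dec)[0, 1])  # decoded embedding tracks the true one
```

The design point the sketch preserves is that nothing subject-specific is ever trained: the new subject enters only through the context pairs `(X_ctx, Y_ctx)`, mirroring the paper's fine-tuning-free conditioning.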