
We investigate whether a simple autoencoder trained on spatiotemporal reaction-diffusion data spontaneously discovers low-dimensional manifold structure in its latent representations. Using the Gray-Scott model as a test system, we simulate five qualitatively distinct pattern regimes (spots, stripes, waves, labyrinth, holes) and train a fully connected autoencoder with a 16-dimensional bottleneck on spatiotemporal patches. Analysis of the latent space via PCA, t-SNE, TwoNN intrinsic-dimensionality estimation, and latent interpolation reveals that the network compresses the 17,280-dimensional spatiotemporal input onto an approximately 3-dimensional submanifold, consistent with the two-parameter structure of the Gray-Scott system. We discuss implications for how artificial and biological neural networks represent spatiotemporal structure, connecting the results to the literature on neural manifolds in motor cortex, hippocampal-entorhinal circuits, and the broader manifold hypothesis in deep learning. All code is available as a fully reproducible Google Colab notebook requiring no GPU and no deep-learning framework.
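For orientation, the Gray-Scott system referenced above couples two species, u and v, through the reactions du/dt = Du ∇²u − uv² + F(1 − u) and dv/dt = Dv ∇²v + uv² − (F + k)v, with the feed rate F and kill rate k as the two control parameters. A minimal NumPy sketch of such a simulation is shown below; the grid size, parameter values, initial perturbation, and forward-Euler integration are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def laplacian(a):
    # 5-point finite-difference Laplacian with periodic boundaries
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
          + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

def gray_scott(F, k, Du=0.16, Dv=0.08, size=64, steps=2000, dt=1.0, seed=0):
    """Integrate the Gray-Scott equations
        du/dt = Du*lap(u) - u*v^2 + F*(1 - u)
        dv/dt = Dv*lap(v) + u*v^2 - (F + k)*v
    with forward-Euler steps on a periodic grid (illustrative settings)."""
    rng = np.random.default_rng(seed)
    u = np.ones((size, size))
    v = np.zeros((size, size))
    # seed a perturbed central square to kick off pattern formation
    s = size // 2
    u[s - 5:s + 5, s - 5:s + 5] = 0.50
    v[s - 5:s + 5, s - 5:s + 5] = 0.25
    u += 0.02 * rng.standard_normal(u.shape)
    v += 0.02 * rng.standard_normal(v.shape)
    for _ in range(steps):
        uvv = u * v * v
        u += dt * (Du * laplacian(u) - uvv + F * (1.0 - u))
        v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u, v

# hypothetical parameter choice near a spot-forming regime
u, v = gray_scott(F=0.035, k=0.065)
```

Different (F, k) pairs select the qualitatively distinct regimes listed in the abstract, which is why a roughly 2-to-3-dimensional latent manifold is a natural expectation for data drawn from this family.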
