A classical Hopfield network stores binary patterns ξ^μ on N = 100 spins arranged as a 10×10 grid. Symmetric Hebbian weights w_ij = (1/N) Σ_μ ξ_i^μ ξ_j^μ (zero diagonal) define the Lyapunov (energy) function E = −½ Σ_{i,j} w_ij S_i S_j. Asynchronous zero-temperature updates S_i ← sign(Σ_j w_ij S_j) never increase E, so noisy states relax toward stored attractors: the textbook picture of associative memory as descent on an energy landscape, with the usual caveats about capacity and spurious minima once the number of stored patterns P grows.
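As a minimal sketch of these definitions (assumed NumPy code, not the demo's actual implementation), the weights, the energy, and one asynchronous sweep look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3                                   # 10x10 grid flattened to N spins, P stored patterns
xi = rng.choice([-1, 1], size=(P, N))           # stored patterns xi^mu in {-1, +1}

W = (xi.T @ xi).astype(float) / N               # w_ij = (1/N) sum_mu xi_i^mu xi_j^mu
np.fill_diagonal(W, 0.0)                        # zero diagonal

def energy(S):
    return -0.5 * S @ W @ S                     # E = -1/2 sum_ij w_ij S_i S_j

S = xi[0].copy()
flip = rng.choice(N, size=15, replace=False)    # corrupt 15 of the 100 spins
S[flip] *= -1

for i in rng.permutation(N):                    # one asynchronous sweep
    E_before = energy(S)
    h = W[i] @ S                                # local field on spin i
    if h != 0:                                  # a tie (h == 0) keeps S_i unchanged
        S[i] = 1 if h > 0 else -1
    assert energy(S) <= E_before + 1e-12        # each single-spin update is downhill in E
```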
Who it's for: Students in statistical mechanics, neural-networks introductions, or information theory who want a hands-on Hopfield energy picture.
Key terms
Hopfield model
Hebbian learning
Associative memory
Energy function
Attractor network
Asynchronous dynamics
How it works
A small Hopfield associative memory on a 10×10 grid: paint states directly on the grid, memorize them as patterns via the Hebbian rule, add noise to the current state, and watch the energy drop as zero-temperature asynchronous dynamics pull it back into a stored attractor.
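One way that whole loop could look in code (a hedged sketch with assumed settings of five memorized patterns and 15% flip noise; the demo's real parameters may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, noise = 100, 5, 0.15                       # assumed settings: 5 memories, 15% flip noise
xi = rng.choice([-1, 1], size=(P, N))            # "memorize" P painted patterns

W = (xi.T @ xi).astype(float) / N                # Hebbian weights, zero diagonal
np.fill_diagonal(W, 0.0)

S = xi[2].copy()                                 # start from memory 2 ...
S[rng.random(N) < noise] *= -1                   # ... with noise added

while True:                                      # zero-temperature async dynamics to a fixed point
    changed = False
    for i in rng.permutation(N):
        h = W[i] @ S
        new = S[i] if h == 0 else (1 if h > 0 else -1)
        if new != S[i]:
            S[i], changed = new, True
    if not changed:                              # no spin flipped in a full sweep: attractor reached
        break

print("recovered fraction of memory 2:", np.mean(S == xi[2]))
```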
Key equations
w_ij = (1/N) Σ_μ ξ_i^μ ξ_j^μ (i ≠ j), E = −½ Σ_{i,j} w_ij S_i S_j, update: pick i at random, set S_i = sign(Σ_j w_ij S_j) (tie keeps S_i).
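Why the update can only lower E (a step not spelled out above): because w_ij is symmetric with zero diagonal, flipping only spin i changes the energy by ΔE = −(S_i^new − S_i^old) h_i with h_i = Σ_j w_ij S_j, and choosing S_i^new = sign(h_i) makes S_i^new h_i = |h_i| ≥ S_i^old h_i, so ΔE ≤ 0; a tie (h_i = 0) leaves both S_i and E unchanged.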
Frequently asked questions
Why cap at seven patterns?
The classic Hopfield capacity for random unbiased patterns is P ≈ 0.14 N in this normalization, i.e. about 14 patterns at N = 100; capping at seven keeps the demo comfortably below the regime where crosstalk between memories and spurious mixture states degrade retrieval.
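A quick numerical illustration (a sketch with assumed noise level and pattern counts, not a rigorous capacity measurement): store increasing numbers of random patterns at N = 100 and check how well relaxation recovers a lightly corrupted memory.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100

def retrieve(W, S):
    """Run zero-temperature async sweeps until no spin changes; return the fixed point."""
    while True:
        changed = False
        for i in rng.permutation(N):
            h = W[i] @ S
            new = S[i] if h == 0 else (1 if h > 0 else -1)
            if new != S[i]:
                S[i], changed = new, True
        if not changed:
            return S

for P in (3, 7, 14, 25):                          # from well below to above the ~0.14 N capacity
    xi = rng.choice([-1, 1], size=(P, N))
    W = (xi.T @ xi).astype(float) / N
    np.fill_diagonal(W, 0.0)
    S = xi[0].copy()
    S[rng.random(N) < 0.10] *= -1                 # 10% corruption of memory 0
    m = xi[0] @ retrieve(W, S) / N                # overlap with the target after relaxation
    print(f"P={P:2d}  overlap with stored pattern: {m:+.2f}")
```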
Does synchronous updating also lower E?
Not guaranteed. The Hopfield Lyapunov argument is standard for random sequential (single-unit) updates; parallel sweeps can increase energy and oscillate.
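A two-spin toy case shows the oscillation (an illustrative counterexample, not part of the demo; up to an overall scale this is the Hebbian matrix for the single stored pattern (+1, −1)):

```python
import numpy as np

# Two spins coupled antiferromagnetically: w_12 = w_21 = -1, zero diagonal.
W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
S = np.array([1, 1])

def energy(S):
    return -0.5 * S @ W @ S

for t in range(6):
    print(t, S, energy(S))
    h = W @ S                                          # all local fields from the *current* state
    S = np.where(h != 0, np.sign(h).astype(int), S)    # synchronous (parallel) update

# The state alternates between [ 1  1] and [-1 -1] forever: a 2-cycle, never a fixed point.
# Sequential single-spin updates on the same weights settle at once into (+1, -1) or (-1, +1).
```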
What does the overlap readout mean?
For each stored pattern μ, m_μ = N⁻¹ Σ_i ξ_i^μ S_i measures alignment of the current state with that memory; the sidebar highlights the largest magnitude among them as a quick “which memory wins” indicator near convergence.
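In code (a sketch reusing the same assumed NumPy conventions as the snippets above), the readout is just:

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 100, 4
xi = rng.choice([-1, 1], size=(P, N))            # stored memories
S = xi[1].copy()                                 # pretend the network has relaxed near memory 1
S[rng.choice(N, size=5, replace=False)] *= -1    # a few residual wrong spins

m = xi @ S / N                                   # m_mu = (1/N) sum_i xi_i^mu S_i, one per memory
winner = int(np.argmax(np.abs(m)))               # largest |m_mu| = the highlighted "winning" memory
print(np.round(m, 2), "-> memory", winner, "wins")
```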