PhysSandbox

Markov Chain (Weather)

Markov chains provide a powerful mathematical framework for modeling systems that transition randomly between a set of distinct states, where the future depends only on the present state, not the past. This simulator visualizes a classic two-state Markov chain using a simple weather model, where each day is categorized as either Sunny (S) or Rainy (R).

The core of the model is the transition probability matrix P. This 2×2 matrix defines the probability of moving from one state to another in a single step. For example, P(S→R) is the probability it rains tomorrow given it is sunny today. The chain's memoryless property (the next state depends only on the current state) is the defining Markov assumption.

By running the simulation over many days, you can observe the empirical frequency of sunny and rainy days. A fundamental result is that under certain conditions (the chain being irreducible and aperiodic), these empirical frequencies converge to a unique stationary distribution, π. This vector satisfies the equation πP = π, meaning if the system starts in distribution π, it will remain in π forever. The simulator lets you manipulate the transition probabilities and observe how they affect both the simulated weather sequence and the calculated stationary distribution, illustrating the link between local rules (transition probabilities) and global, long-term behavior (the stationary distribution).
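The daily update described above can be sketched in a few lines. This is a minimal, illustrative simulation (not the simulator's own code), assuming self-transition probabilities of 0.75 (sunny stays sunny) and 0.65 (rainy stays rainy):

```python
import random

# Two-state weather chain: 0 = Sunny, 1 = Rainy.
# Illustrative transition matrix (rows sum to 1):
P = [[0.75, 0.25],   # from Sunny: P(S->S), P(S->R)
     [0.35, 0.65]]   # from Rainy: P(R->S), P(R->R)

def simulate(steps, start=0, seed=42):
    """Run the chain and return the empirical fraction of sunny days."""
    rng = random.Random(seed)
    state, sunny_days = start, 0
    for _ in range(steps):
        # Memoryless step: the next state depends only on the current one.
        state = 0 if rng.random() < P[state][0] else 1
        sunny_days += (state == 0)
    return sunny_days / steps

print(simulate(100_000))  # long-run sun fraction, near pi_sun ≈ 0.5833
```

Because the chain is irreducible and aperiodic for these values, the starting state does not matter: long runs from Sunny or Rainy give the same fraction.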

Who it's for: Undergraduate students in probability, statistics, or data science courses learning the fundamentals of stochastic processes and Markov models.

Key terms

  • Markov Chain
  • Transition Probability Matrix
  • Stationary Distribution
  • Stochastic Process
  • State Space
  • Memoryless Property
  • Empirical Distribution
  • Eigenvector

Transition probabilities

  • P(sun→sun) = 0.75
  • P(rain→rain) = 0.65

Stationary distribution π satisfies π = πP. For this 2×2 chain, π_sun = (1 − P_rr) / (2 − P_ss − P_rr). Long runs should match π — compare empirical sun fraction after many steps.
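As a quick check of this closed-form expression, a short sketch (using the same illustrative probabilities, P_ss = 0.75 and P_rr = 0.65) reproduces the theory values reported by the simulator:

```python
def stationary_sun(p_ss, p_rr):
    # Solving pi = pi P for a two-state chain gives
    # pi_sun = (1 - P_rr) / (2 - P_ss - P_rr).
    return (1 - p_rr) / (2 - p_ss - p_rr)

pi_sun = stationary_sun(0.75, 0.65)
print(round(pi_sun, 4))       # 0.5833
print(round(1 - pi_sun, 4))   # 0.4167
```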

Measured values

  • π_sun (theory): 0.5833
  • π_rain (theory): 0.4167
  • Steps: 0
  • Empirical P(sun): —

How it works

A minimal Markov model for teaching: memoryless transitions, stationary distribution, and convergence of time averages.
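The "Eigenvector" key term points at the general method behind πP = π: π is a left eigenvector of P with eigenvalue 1 (equivalently, a right eigenvector of Pᵀ). A sketch with NumPy, again assuming the illustrative matrix:

```python
import numpy as np

# Illustrative transition matrix (rows sum to 1).
P = np.array([[0.75, 0.25],
              [0.35, 0.65]])

# pi P = pi  <=>  P.T pi.T = pi.T, so take the eigenvector of P.T
# whose eigenvalue is (closest to) 1.
vals, vecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(vals - 1.0))
pi = np.real(vecs[:, i])
pi = pi / pi.sum()          # normalize to a probability vector

print(pi)  # approx [0.5833, 0.4167]
```

This generalizes to chains with any number of states, where the two-state closed-form expression no longer applies.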

Frequently asked questions

Why does the simulated percentage of sunny days eventually settle to a fixed value?
It settles to the stationary distribution, π. This is a long-run equilibrium where the proportion of time the chain spends in each state becomes constant. The transition probabilities dictate this value; changing the probability of rain after a sunny day, for instance, will change the long-run sunny percentage. The simulator shows this convergence empirically and calculates π theoretically from the matrix equation πP = π.
Is the 'memoryless' property realistic for real weather?
It is a simplification. Real weather has memory beyond one day; a rainy pattern might persist for a week due to a large storm system. The one-step Markov model is a useful first approximation for some phenomena and a crucial pedagogical tool for understanding more complex models, like higher-order Markov chains or hidden Markov models, which are used in sophisticated weather forecasting and many other applications.
What do the arrows and numbers on the state diagram represent?
The circles are the states (Sunny, Rainy). The arrows show possible transitions between states. The number on each arrow is the conditional transition probability—the probability of moving to the state the arrow points to, given you are currently in the state the arrow comes from. All probabilities leaving a state must sum to 1. These numbers are the entries of the transition probability matrix P.
Can the chain get 'stuck' in one state forever?
In this basic two-state model with all probabilities between 0 and 1 (non-extreme), it cannot get permanently stuck. You will always have a chance to transition to the other state. However, if a transition probability is set to 0 (e.g., P(S→R)=0), then the 'Sunny' state becomes absorbing—once sunny, always sunny—and the chain's long-term behavior changes fundamentally. This simulator typically models ergodic chains without absorbing states.