# Markov chain


In mathematics, a Markov chain is a stochastic process with the Markov property: given the present state, the future states of the process are independent of the past states.

In the discrete-time case, the process consists of a sequence X1, X2, X3, ... of random variables. The domain of these variables is called the state space, with Xn being the value of the process at time n. If the conditional distribution of Xn+1 given the past states is a function of Xn alone,

${\displaystyle P(X_{n+1}|X_{0},X_{1},X_{2},\ldots ,X_{n})=P(X_{n+1}|X_{n})}$

then the process is said to have the Markov property.
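The Markov property can be made concrete with a small simulation. The following is a minimal sketch, assuming an invented two-state chain (the states and probabilities are illustrative, not taken from the article): the sampler looks only at the current state, never at the history.

```python
import random

# A minimal two-state Markov chain (states 0 and 1; numbers are illustrative).
# trans[j][i] is the probability of moving from state j to state i.
trans = [
    [0.9, 0.1],  # from state 0: stay with 0.9, switch with 0.1
    [0.5, 0.5],  # from state 1: switch with 0.5, stay with 0.5
]

def simulate(start, steps, rng):
    """Draw a path X_0, X_1, ..., X_steps, sampling each next state
    from P(X_{n+1} | X_n) alone -- the Markov property in action."""
    path = [start]
    for _ in range(steps):
        current = path[-1]
        path.append(0 if rng.random() < trans[current][0] else 1)
    return path

rng = random.Random(42)
path = simulate(0, 10, rng)
print(path)
```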

Markov chains are named after A. A. Markov, who produced the first results for these processes in 1906. A generalization to countably infinite state spaces was given by Kolmogorov in 1936. Markov chains are related to Brownian motion and the ergodic hypothesis, two important topics in physics in the early twentieth century, but Markov appears to have pursued this out of a mathematical motivation, namely the extension of the law of large numbers to dependent events.

## Properties of Markov chains


Markov chains are characterized by the conditional distribution

${\displaystyle P(X_{n+1}|X_{n})\,}$

which is called the transition probability of the process. This is sometimes called the "one-step" transition probability. The probability of a transition in two, three, or more steps is derived from the one-step transition probability and the Markov property:

${\displaystyle P(X_{n+2}|X_{n})=\int P(X_{n+2},X_{n+1}|X_{n})\,dX_{n+1}=\int P(X_{n+2}|X_{n+1})\,P(X_{n+1}|X_{n})\,dX_{n+1}}$

Likewise,

${\displaystyle P(X_{n+3}|X_{n})=\int P(X_{n+3}|X_{n+2})\int P(X_{n+2}|X_{n+1})\,P(X_{n+1}|X_{n})\,dX_{n+1}\,dX_{n+2}}$

These formulas generalize to arbitrary future times n + k by multiplying the transition probabilities and integrating k times.
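For a discrete state space the integral over the intermediate state becomes a sum. A small sketch, using an invented two-state chain (the numbers are illustrative only), computes the two-step transition probability directly from the formula above:

```python
# Discrete analogue of the two-step formula: the integral over X_{n+1}
# becomes a sum over intermediate states m.
# p1[j][i] = P(X_{n+1} = i | X_n = j); illustrative numbers only.
p1 = [
    [0.9, 0.1],
    [0.5, 0.5],
]

def two_step(p, j, i):
    """P(X_{n+2} = i | X_n = j) = sum_m P(i | m) * P(m | j)."""
    return sum(p[m][i] * p[j][m] for m in range(len(p)))

# From state 0 back to state 0 in two steps: 0.9*0.9 + 0.1*0.5 = 0.86
print(two_step(p1, 0, 0))
```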

The marginal distribution P(Xn) is the distribution over states at time n. The initial distribution is P(X0). The evolution of the process through one time step is described by

${\displaystyle P(X_{n+1})=\int P(X_{n+1}|X_{n})\,P(X_{n})\,dX_{n}}$
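In the discrete case this evolution equation is a weighted sum. The following sketch (same invented two-state chain as above; the numbers are illustrative) pushes a marginal distribution forward one step at a time:

```python
# Discrete version of the one-step evolution of the marginal distribution:
# P(X_{n+1} = i) = sum_j P(X_{n+1} = i | X_n = j) * P(X_n = j).
# trans[j][i] = P(next = i | current = j); illustrative numbers only.
trans = [
    [0.9, 0.1],
    [0.5, 0.5],
]

def step(p):
    """Push the marginal distribution forward one time step."""
    n = len(p)
    return [sum(trans[j][i] * p[j] for j in range(n)) for i in range(n)]

p0 = [1.0, 0.0]        # start surely in state 0
p1 = step(p0)          # [0.9, 0.1]
p2 = step(p1)          # approximately [0.86, 0.14]
print(p1, p2)
```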

This is a version of the Frobenius-Perron equation. There may exist one or more state distributions π such that

${\displaystyle \pi (X)=\int P(X|Y)\,\pi (Y)\,dY}$

where Y is just a dummy variable of integration. Such a distribution π is called a stationary distribution or steady-state distribution. A stationary distribution is an eigenfunction of the conditional distribution function, associated with the eigenvalue 1.

Whether or not there is a stationary distribution, and whether or not it is unique if it does exist, are determined by certain properties of the process. Irreducible means that every state is accessible from every other state. Aperiodic means that the chain does not return to a state only at multiples of some fixed period; in an irreducible chain it suffices that there exists at least one state for which the transition from that state to itself is possible. Positive recurrent means that the expected return time is finite for every state. Sometimes the terms indecomposable, acyclic, and persistent are used as synonyms for "irreducible", "aperiodic", and "recurrent", respectively.

If the Markov chain is positive recurrent, there exists a stationary distribution. If it is positive recurrent and irreducible, there exists a unique stationary distribution, and furthermore the process constructed by taking the stationary distribution as the initial distribution is ergodic. Then the average of a function f over samples of the Markov chain is equal to the average with respect to the stationary distribution,

${\displaystyle \lim _{n\rightarrow \infty }\;{\frac {1}{n}}\;\sum _{k=0}^{n-1}f(X_{k})=\int f(X)\,\pi (X)\,dX}$

In particular, this holds for f equal to the identity function. Then the average of the samples over time is equal to the expected value under the stationary distribution.

Furthermore, the equivalence of averages also holds if f is the indicator function on some subset A of the state space.

${\displaystyle \lim _{n\rightarrow \infty }\;{\frac {1}{n}}\;\sum _{k=0}^{n-1}\chi _{A}(X_{k})=\int _{A}\pi (X)\,dX=\mu _{\pi }(A)}$

where μπ is the measure induced by π. This makes it possible to approximate the stationary distribution by a histogram or other density estimate of a sequence of samples.
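This histogram approximation can be checked empirically. A sketch, assuming the same invented two-state chain used above (stationary distribution π = (5/6, 1/6) for these particular numbers): the fraction of time a long sample path spends in each state approaches π.

```python
import random

# Empirical check of the ergodic average with the indicator function:
# occupancy frequencies of a long path approximate the stationary
# distribution pi = (5/6, 1/6) of this illustrative two-state chain.
trans = [
    [0.9, 0.1],
    [0.5, 0.5],
]

rng = random.Random(0)
state, counts, n = 0, [0, 0], 100_000
for _ in range(n):
    counts[state] += 1
    state = 0 if rng.random() < trans[state][0] else 1

freq = [c / n for c in counts]
print(freq)  # close to [0.8333..., 0.1666...]
```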

## Markov chains in discrete state spaces

If the state space is finite, the transition probability distribution can be represented as a matrix, called the transition matrix, with the (i, j)'th element equal to

${\displaystyle P(X_{n+1}=i\mid X_{n}=j)}$

(In this formulation, element (i, j) is the probability of a transition from j to i. An equivalent formulation is sometimes given with element (i, j) equal to the probability of a transition from i to j. In that case the transition matrix is just the transpose of the one given here.)

For a discrete state space, the integrations in the k-step transition probability are summations, and can be computed as the k'th power of the transition matrix. That is, if P is the one-step transition matrix, then Pk is the transition matrix for the k-step transition.
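The matrix-power rule can be verified directly. A minimal sketch in pure Python, using the column convention from the text (element (i, j) is the probability of a transition from j to i; the matrix entries are invented for illustration):

```python
# The k-step transition matrix as the k'th power of the one-step matrix.
# Column convention: P[i][j] = P(next = i | current = j), so each
# column sums to 1 (illustrative numbers only).
P = [
    [0.9, 0.5],
    [0.1, 0.5],
]

def matmul(A, B):
    """Plain matrix product of two square matrices."""
    n = len(A)
    return [[sum(A[i][m] * B[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)   # two-step transition probabilities
P3 = matmul(P2, P)  # three-step transition probabilities
print(P2[0][0])     # P(X_{n+2} = 0 | X_n = 0), mathematically 0.86
```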

Writing P for the transition matrix, a stationary distribution is a vector which satisfies the equation

${\displaystyle P\pi =\pi \,}$

In this case, the stationary distribution is an eigenvector of the transition matrix, associated with the eigenvalue 1. If the transition matrix P is positive recurrent, irreducible, and aperiodic, then Pk converges elementwise to a matrix in which each column is the unique stationary distribution.
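This convergence is easy to observe numerically. A sketch, assuming the same invented positive matrix as above (stationary distribution (5/6, 1/6) for these numbers): raising P to a high power yields a matrix whose columns are all approximately the stationary distribution.

```python
# For a positive transition matrix, P^k converges elementwise to a matrix
# whose columns all equal the stationary distribution.
# Column convention P[i][j] = P(i | j); illustrative numbers only,
# with stationary distribution pi = (5/6, 1/6).
P = [
    [0.9, 0.5],
    [0.1, 0.5],
]

def matmul(A, B):
    """Plain matrix product of two square matrices."""
    n = len(A)
    return [[sum(A[i][m] * B[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

Pk = P
for _ in range(100):
    Pk = matmul(Pk, P)

print(Pk)  # both columns are approximately (5/6, 1/6)
```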

A transition matrix which is positive (that is, every element of the matrix is positive) is irreducible, aperiodic, and positive recurrent. A matrix is a stochastic matrix if and only if it is the matrix of transition probabilities of some Markov chain.

## Scientific applications

Markov chains are used to model various processes in queueing theory and statistics, and can also be used as a signal model in entropy coding techniques such as arithmetic coding. Markov chains also have many biological applications, particularly population processes, which are useful in modelling processes that are (at least) analogous to biological populations. Markov chains have been used in bioinformatics as well. An example is the GeneMark algorithm for coding region/gene prediction.

Markov processes can also be used to generate superficially "real-looking" text given a sample document: they are used in various pieces of recreational "parody generator" software (see Jeff Harrison).

## References

• A.A. Markov. "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga". Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 2-ya seriya, tom 15, pp. 135–156, 1906.
• A.A. Markov. "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". Reprinted in Appendix B of: R. Howard. Dynamic Probabilistic Systems, volume 1: Markov Chains. John Wiley and Sons, 1971.
• Léo Breiman. Probability. Original edition published by Addison-Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-296-3. (See Chapter 7.)
• J.L. Doob. Stochastic Processes. New York: John Wiley and Sons, 1953. ISBN 0-471-52369-0.