Markov chain

From the Sundanese Wikipedia, the free encyclopedia

In mathematics, a Markov chain is a stochastic process that has the Markov property: once the present state of the process is known, the past states carry no additional information about the future.

In the discrete-time case, the process consists of a sequence X1, X2, X3, ... of random variables. The domain of these variables is called the state space, and Xn is the value of the process at time n. If the conditional distribution of Xn+1 given the past states depends on Xn alone,

 P(X_{n+1}|X_0, X_1, X_2, \ldots, X_n) = P(X_{n+1}|X_n)

then the process is said to have the Markov property.
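As an illustration, the Markov property is visible in simulation: each new state is drawn from a distribution that depends only on the current state, never on the earlier history. A minimal sketch in Python; the two states and their transition probabilities are invented for the example:

```python
import random

# Hypothetical two-state chain; the probabilities are made up for illustration.
TRANSITIONS = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state, rng):
    """Draw the next state using only the current state (the Markov property)."""
    r = rng.random()
    acc = 0.0
    for nxt, p in TRANSITIONS[state]:
        acc += p
        if r < acc:
            return nxt
    return nxt  # guard against floating-point rounding in the cumulative sum

def simulate(start, n, seed=0):
    """Return a sample path of length n+1 starting from `start`."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n):
        states.append(step(states[-1], rng))
    return states

print(simulate("sunny", 5))
```

Note that `step` receives only the current state as input; the rest of the path is irrelevant, which is exactly the conditional-independence statement above.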

Markov chains are named after A. A. Markov, who first produced results for such processes in 1906. Results for countably infinite state spaces were obtained by Kolmogorov (1936). Markov chains are related to Brownian motion and the ergodic hypothesis, two important topics in early twentieth-century physics, but Markov's own motivation appears to have come from within mathematics, namely the extension of the law of large numbers to dependent events.

Properties of Markov chains

Markov chains are characterized by the conditional distribution

 P(X_{n+1}| X_n)\,

which is called the transition probability of the process, sometimes also called the "one-step" transition probability. The transition probability over two, three, or more steps is derived from the one-step transition probability and the Markov property:

 P(X_{n+2}|X_n) = \int P(X_{n+2},X_{n+1}|X_n)\,dX_{n+1} 
 = \int P(X_{n+2}|X_{n+1}) \, P(X_{n+1}|X_n) \, dX_{n+1}

Likewise,

 P(X_{n+3}|X_n) = \int P(X_{n+3}|X_{n+2}) \int P(X_{n+2}|X_{n+1}) \, P(X_{n+1}|X_n) \, dX_{n+1} \, dX_{n+2}

These formulas generalize to an arbitrary future time n + k by multiplying the transition probabilities and integrating k times.
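On a discrete state space the integrals above become sums over the intermediate states. A small sketch of the two-step formula; the 2×2 transition probabilities are invented for illustration, written here row-stochastically for readability (p[i][j] is the probability of moving from state i to state j):

```python
# Hypothetical one-step transition probabilities (rows sum to 1).
p = [[0.9, 0.1],
     [0.5, 0.5]]

def two_step(i, j):
    """P(X_{n+2}=j | X_n=i): sum over the intermediate state k,
    exactly the discrete analogue of the integral above."""
    return sum(p[i][k] * p[k][j] for k in range(len(p)))

print(two_step(0, 1))  # 0.9*0.1 + 0.1*0.5, i.e. about 0.14
```

Each term in the sum is a one-step transition probability, and the Markov property is what justifies multiplying them.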

The marginal distribution P(Xn) is the distribution over states at time n. The initial distribution is P(X0). The evolution of the process through one time step is described by

 P(X_{n+1}) = \int P(X_{n+1}|X_n)\,P(X_n)\,dX_n

This is a version of the Frobenius–Perron equation. There may exist one or more state distributions π such that

 \pi(X) = \int P(X|Y)\,\pi(Y)\,dY

where Y is simply a dummy variable of integration. Such a distribution π is called a stationary distribution or steady-state distribution. A stationary distribution is an eigenfunction of the conditional distribution function, associated with the eigenvalue 1.

Whether or not there is a stationary distribution, and whether or not it is unique if it does exist, are determined by certain properties of the process. Irreducible means that every state is accessible from every other state. Aperiodic means that, for every state, the greatest common divisor of the lengths of all possible return paths to that state is 1; for an irreducible chain it suffices that some state has a possible transition to itself. Positive recurrent means that the expected return time is finite for every state. Sometimes the terms indecomposable, acyclic, and persistent are used as synonyms for "irreducible", "aperiodic", and "recurrent", respectively.

If the Markov chain is positive recurrent, there exists a stationary distribution. If it is positive recurrent and irreducible, there exists a unique stationary distribution, and furthermore the process constructed by taking the stationary distribution as the initial distribution is ergodic. Then the average of a function f over samples of the Markov chain is equal to the average with respect to the stationary distribution,

 \lim_{n\rightarrow\infty}\; \frac{1}{n} \; \sum_{k=0}^{n-1} f(X_k)
  = \int f(X)\,\pi(X)\,dX

In particular, this holds for f equal to the identity function. Thus the time average of the samples is equal to the expected value under the stationary distribution.

Furthermore, the equivalence of averages also holds if f is the indicator function on some subset A of the state space:

 \lim_{n\rightarrow\infty}\; \frac{1}{n} \; \sum_{k=0}^{n-1} \chi_A(X_k)
  = \int_A \pi(X)\,dX = \mu_{\pi}(A)

where μπ is the measure induced by π. This makes it possible to approximate the stationary distribution by a histogram or other density estimate of a sequence of samples.
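The histogram approximation can be sketched directly: run a long sample path and record the fraction of time spent in each state. The two-state chain below is invented for illustration; its stationary distribution works out to (2/3, 1/3), which the empirical frequencies approach:

```python
import random

# Hypothetical row-stochastic transition matrix (rows sum to 1);
# its stationary distribution is (2/3, 1/3).
p = [[0.9, 0.1],
     [0.2, 0.8]]

def empirical_frequencies(n, seed=1):
    """Fraction of time the chain spends in each state over n steps."""
    rng = random.Random(seed)
    state, counts = 0, [0, 0]
    for _ in range(n):
        counts[state] += 1
        # Draw the next state from the current state's transition row.
        state = 0 if rng.random() < p[state][0] else 1
    return [c / n for c in counts]

freqs = empirical_frequencies(200_000)
print(freqs)  # roughly [2/3, 1/3]
```

Because the samples are correlated, convergence is slower than for independent draws, but the ergodic theorem above guarantees the limit is the stationary distribution.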

Markov chains on a discrete state space

If the state space is finite, the transition probability distribution can be represented as a matrix, called the transition matrix, with the (i, j)'th element equal to

P(X_{n+1}=i\mid X_n=j)

(In this formulation, element (i, j) is the probability of a transition from j to i. An equivalent formulation is sometimes given with element (i, j) equal to the probability of a transition from i to j. In that case the transition matrix is just the transpose of the one given here.)

For a discrete state space, the integrations in the k-step transition probability are summations, and can be computed as the k'th power of the transition matrix. That is, if P is the one-step transition matrix, then P^k is the transition matrix for the k-step transition.
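A minimal sketch of the k-step transition matrix as a matrix power, following the article's convention that element (i, j) is the probability of a transition from state j to state i (so columns sum to 1); the matrix itself is invented for illustration:

```python
# Hypothetical column-stochastic transition matrix: P[i][j] = P(X_{n+1}=i | X_n=j).
P = [[0.9, 0.5],
     [0.1, 0.5]]

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, k):
    """k-th power of A (k >= 1) by repeated multiplication."""
    result = A
    for _ in range(k - 1):
        result = matmul(result, A)
    return result

P2 = matpow(P, 2)
# P2[i][j] is now P(X_{n+2} = i | X_n = j), the two-step transition probability.
```

Each column of P^k is again a probability distribution, mirroring the fact that repeated summation over intermediate states preserves total probability.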

Writing P for the transition matrix, a stationary distribution is a vector which satisfies the equation

 P\pi = \pi\,

In this case, the stationary distribution is an eigenvector of the transition matrix, associated with the eigenvalue 1. If the transition matrix P is positive recurrent, irreducible, and aperiodic, then P^k converges elementwise to a matrix in which each column is the unique stationary distribution.
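One way to sketch this numerically is power iteration: repeatedly applying P to any initial distribution converges, under the hypotheses above, to the stationary eigenvector with eigenvalue 1. The matrix below follows the article's column-stochastic convention and is invented for illustration (its stationary distribution is (2/3, 1/3)):

```python
# Hypothetical column-stochastic transition matrix (columns sum to 1).
P = [[0.9, 0.2],
     [0.1, 0.8]]

def apply(P, v):
    """One evolution step: (P v)_i = sum_j P[i][j] * v[j]."""
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def stationary(P, iters=200):
    """Power iteration from the uniform distribution; converges to the
    eigenvector of eigenvalue 1 when the chain is irreducible and aperiodic."""
    v = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        v = apply(P, v)
    return v

pi = stationary(P)
print(pi)  # approaches [2/3, 1/3] for this matrix
```

Because applying P preserves total probability, no renormalization is needed; the iterates stay probability distributions throughout.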

A transition matrix which is positive (that is, every element of the matrix is positive) is irreducible, aperiodic, and positive recurrent. A matrix is a stochastic matrix if and only if it is the matrix of transition probabilities of some Markov chain.

Scientific applications

Markov chains are used to model various processes in queueing theory and statistics, and can also be used as a signal model in entropy coding techniques such as arithmetic coding. Markov chains also have many biological applications, particularly for population processes, which are useful in modelling processes that are (at least) analogous to biological populations. Markov chains have been used in bioinformatics as well; an example is the GeneMark algorithm for coding region/gene prediction.

Markov processes can also be used to generate superficially "real-looking" text given a sample document: they are used in various pieces of recreational "parody generator" software (see Jeff Harrison).

See also

References

  • A. A. Markov. "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga". Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 2-ya seriya, tom 15, pp. 135–156, 1906.
  • A. A. Markov. "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". Reprinted in Appendix B of: R. Howard. Dynamic Probabilistic Systems, volume 1: Markov Chains. John Wiley and Sons, 1971.
  • Leo Breiman. Probability. Original edition published by Addison-Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-296-3. (See Chapter 7.)
  • J. L. Doob. Stochastic Processes. New York: John Wiley and Sons, 1953. ISBN 0-471-52369-0.

External links