In statistics, the covariance matrix generalizes the concept of variance from one to n dimensions; in other words, from scalar-valued random variables to vector-valued random variables (tuples of scalar random variables). If X is a scalar-valued random variable with expected value μ, then its variance is

\operatorname{var}(X) = \operatorname{E}\left[(X - \mu)^2\right].
If X is an n-by-1 column-vector-valued random variable whose expected value is the n-by-1 column vector μ, then its variance is the n-by-n nonnegative-definite matrix

\Sigma = \operatorname{var}(X) = \operatorname{E}\left[(X - \mu)(X - \mu)^{\mathrm{T}}\right].
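As an illustrative sketch of this definition (using NumPy; the particular covariance values and sample size below are made up for the example), the matrix E[(X − μ)(X − μ)^T] can be estimated from data and checked against NumPy's built-in estimator:

```python
import numpy as np

# A hypothetical 3-dimensional random vector: samples drawn from a
# multivariate normal with a known (illustrative) covariance matrix.
rng = np.random.default_rng(0)
true_cov = np.array([[2.0, 0.5, 0.0],
                     [0.5, 1.0, 0.3],
                     [0.0, 0.3, 1.5]])
samples = rng.multivariate_normal(mean=np.zeros(3), cov=true_cov, size=5000)

# Empirical version of E[(X - mu)(X - mu)^T], estimated from the data.
mu = samples.mean(axis=0)
centered = samples - mu
sigma = centered.T @ centered / (len(samples) - 1)

# np.cov with columns as variables computes the same estimate.
assert np.allclose(sigma, np.cov(samples, rowvar=False))
```

Note that the estimate is an n-by-n symmetric matrix whose (i, j) entry is the sample covariance between components i and j.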
The entries in this matrix are the covariances between the n different scalar components of X. Since the covariance between a scalar-valued random variable and itself is its variance, it follows that, in particular, the entries on the diagonal of this matrix are the variances of the scalar components of X. This may appear to be a property of the matrix that depends on which coordinate system is chosen for the space in which the random vector X resides. However, it is true generally that if u is any unit vector, then the variance of the projection of X on u is u^{\mathrm{T}} \Sigma u. (This is a consequence of an identity that appears below.)
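A quick numerical check of this coordinate-free property (the covariance matrix and unit vector here are chosen arbitrarily for illustration): the empirical variance of the projection of X on a unit vector u should approximate the quadratic form u^T Σ u.

```python
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[3.0, 1.0],
                [1.0, 2.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)

u = np.array([0.6, 0.8])          # a unit vector: ||u|| = 1
proj = x @ u                      # projection of each sample onto u

# Empirical variance of the projection vs. the quadratic form u^T Sigma u.
empirical = proj.var(ddof=1)
predicted = u @ cov @ u           # 0.36*3 + 2*0.6*0.8*1 + 0.64*2 = 3.32
assert abs(empirical - predicted) < 0.2
```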
Nomenclatures differ. Some statisticians, following the probabilist William Feller, call this the variance of the random vector X, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector X.
With scalar-valued random variables X, we have the identity

\operatorname{var}(aX) = a^2 \operatorname{var}(X)

if a is constant, i.e., not random. If X is an n-by-1 column-vector-valued random variable and A is an m-by-n constant (i.e., non-random) matrix, then AX is an m-by-1 column-vector-valued random variable, whose variance must therefore be an m-by-m matrix. It is

\operatorname{var}(AX) = A \operatorname{var}(X) A^{\mathrm{T}}.
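A small sketch of this identity (the matrix A and the data below are arbitrary choices for illustration). A convenient fact is that var(AX) = A var(X) A^T holds exactly for the sample covariance as well, not just in expectation, so it can be checked with a strict equality up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal((1000, 3))        # 1000 samples of a 3-vector X
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])          # a 2-by-3 constant matrix

S = np.cov(x, rowvar=False)               # 3x3 sample covariance of X
y = x @ A.T                               # samples of AX (2-dimensional)
S_y = np.cov(y, rowvar=False)             # 2x2 sample covariance of AX

# var(AX) = A var(X) A^T: exact for the sample covariance too.
assert np.allclose(S_y, A @ S @ A.T)
```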
Though simple, the covariance matrix is a useful tool in many different areas. From it a transformation matrix can be derived that completely decorrelates the data or, from a different point of view, finds an optimal basis for representing the data in a compact way. This is called principal components analysis (PCA) in statistics and the Karhunen-Loève transform (KL-transform) in image processing.
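One way to sketch this decorrelation (the 2-by-2 covariance and sample size are made up for the example): projecting the centered data onto the orthonormal eigenvectors of its sample covariance matrix, which is the core of PCA, yields new coordinates whose covariance matrix is diagonal.

```python
import numpy as np

rng = np.random.default_rng(3)
cov = np.array([[4.0, 2.0],
                [2.0, 3.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=2000)
x -= x.mean(axis=0)                        # center the data

S = np.cov(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)       # orthonormal eigenvectors of S

# Projecting onto the eigenvectors (PCA / Karhunen-Loeve transform)
# decorrelates the data: the covariance of z is diagonal, with the
# eigenvalues of S (the component variances) on the diagonal.
z = x @ eigvecs
S_z = np.cov(z, rowvar=False)
assert np.allclose(S_z, np.diag(eigvals))
```

Keeping only the eigenvectors with the largest eigenvalues gives the compact basis mentioned above.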