Naive Bayes classifier

From Wikipedia, the free encyclopedia

A naive Bayes classifier is a simple probabilistic classification method. A more descriptive term for the underlying probability model is independent feature model. The term naive Bayes reflects the fact that the probability model can be derived using Bayes' theorem (named in honor of Thomas Bayes) and involves strong independence assumptions that rarely hold in the real world, which is why it is (deliberately) naive. Depending on the precise nature of the probability model, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without subscribing to Bayesian probability or using any Bayesian methods.

The naive Bayes probabilistic model

Abstractly, the probability model for a classifier is a conditional model

p(C \mid F_1, \dots, F_n)

over a dependent class variable C with a small number of outcomes or classes, conditional on several feature variables F_1 through F_n. The problem is that if the number of features n is large, or when a feature can take on a large number of values, then basing such a model on probability tables is infeasible. We therefore reformulate the model to make it more tractable.
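To make the infeasibility concrete: tabulating the conditional model directly requires a number of entries exponential in the number of binary features, while the factored model derived below needs only linearly many parameters. A quick numeric sketch (the feature counts are illustrative):

```python
# Entries needed to tabulate p(C | F_1, ..., F_n) directly per class
# (one entry per joint assignment of n binary features: 2**n), versus
# parameters of the factored naive Bayes model for two classes (2n + 1).
for n in (10, 20, 30):
    print(n, 2 ** n, 2 * n + 1)
# → 10 1024 21
# → 20 1048576 41
# → 30 1073741824 61
```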

Using Bayes' theorem, we write

p(C \mid F_1, \dots, F_n) = \frac{p(C)\, p(F_1, \dots, F_n \mid C)}{p(F_1, \dots, F_n)}.

In practice we are interested only in the numerator of that fraction, since the denominator does not depend on C and the values of the features F_i are given, so the denominator is effectively constant. The numerator is equivalent to the joint probability model

p(C, F_1, \dots, F_n),

which can be rewritten as follows, using repeated applications of the definition of conditional probability:

p(C, F_1, \dots, F_n) = p(C)\, p(F_1 \mid C)\, p(F_2 \mid C, F_1)\, p(F_3 \mid C, F_1, F_2) \cdots

and so on. Now the "naive" conditional independence assumption comes into play: assume that each feature F_i is conditionally independent of every other feature F_j (j ≠ i), given the class C. This means that

p(F_i \mid C, F_j) = p(F_i \mid C),

and so the joint model can be expressed as

p(C, F_1, \dots, F_n) = p(C) \prod_{i=1}^{n} p(F_i \mid C).

This means that under the above independence assumptions, the conditional distribution over the class variable C can be expressed as

p(C \mid F_1, \dots, F_n) = \frac{1}{Z}\, p(C) \prod_{i=1}^{n} p(F_i \mid C),

where Z is a scaling factor dependent only on F_1, \dots, F_n, i.e. a constant if the values of the feature variables are known.

A model of this form is much more manageable, since it factors into a so-called class prior p(C) and independent feature distributions p(F_i \mid C). If there are k classes and if a model for each p(F_i \mid C) can be expressed in terms of r parameters, then the corresponding naive Bayes model has (k − 1) + n r k parameters. In practice, k = 2 (binary classification) and r = 1 (Bernoulli variables as features) are common, and so the total number of parameters of the naive Bayes model is 2n + 1, where n is the number of binary features used for prediction.
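As a sanity check on the parameter count (k − 1) + n r k above, a minimal sketch (the function name is ours):

```python
def naive_bayes_param_count(k: int, n: int, r: int) -> int:
    """Number of free parameters in a naive Bayes model with k classes,
    n features, and r parameters per class-conditional feature
    distribution: (k - 1) prior terms plus n*r*k likelihood terms."""
    return (k - 1) + n * r * k

# Binary classification (k = 2) with n = 10 Bernoulli features (r = 1)
# gives 2n + 1 parameters:
print(naive_bayes_param_count(2, 10, 1))  # → 21
```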

Parameter estimation

In a supervised learning setting, one wants to estimate the parameters of the distribution model. Because of the independent feature assumption, it suffices to estimate the class prior and the conditional feature models independently, using the method of maximum likelihood, Bayesian inference, or other parameter estimation procedures.
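For the common Bernoulli-feature case, maximum-likelihood estimation reduces to counting: the class prior is the relative frequency of each class, and each p(F_i = 1 | C) is the fraction of class-C samples with feature i set. A minimal sketch, assuming binary feature vectors and a toy labeled dataset (all names and data are illustrative):

```python
from collections import Counter

def estimate_bernoulli_nb(X, y):
    """Maximum-likelihood estimates for a Bernoulli naive Bayes model.
    X: list of binary feature vectors; y: list of class labels.
    Returns (priors, cond) where priors[c] = p(C = c) and
    cond[c][i] = p(F_i = 1 | C = c)."""
    n = len(X[0])
    counts = Counter(y)
    # Class prior: relative frequency of each class label.
    priors = {c: counts[c] / len(y) for c in counts}
    # Per-class feature model: fraction of class-c rows with feature i on.
    cond = {}
    for c in counts:
        rows = [x for x, label in zip(X, y) if label == c]
        cond[c] = [sum(x[i] for x in rows) / len(rows) for i in range(n)]
    return priors, cond

X = [[1, 0], [1, 1], [0, 1], [0, 0]]
y = ["spam", "spam", "ham", "ham"]
priors, cond = estimate_bernoulli_nb(X, y)
print(priors)        # → {'spam': 0.5, 'ham': 0.5}
print(cond["spam"])  # → [1.0, 0.5]
```

(A real implementation would smooth these counts, e.g. with Laplace smoothing, so that unseen feature values do not yield zero probabilities.)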

Constructing a classifier from the probability model

The discussion so far has derived the independent feature model, that is, the naive Bayes probability model. The naive Bayes classifier combines this model with a decision rule. One common rule is to pick the hypothesis that is most probable; this is known as the maximum a posteriori or MAP decision rule. The corresponding classifier is the function classify defined as follows:

\mathrm{classify}(f_1, \dots, f_n) = \operatorname{argmax}_c\, p(C = c) \prod_{i=1}^{n} p(F_i = f_i \mid C = c).
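The MAP decision rule can be sketched as follows, assuming the class priors p(C) and Bernoulli feature models p(F_i = 1 | C) have already been estimated (names and numbers are illustrative; logarithms replace the raw product purely for numerical stability, which does not change the argmax):

```python
import math

def classify(priors, cond, x):
    """MAP decision rule for a Bernoulli naive Bayes model.
    priors[c] = p(C = c); cond[c][i] = p(F_i = 1 | C = c);
    x is a binary feature vector.  Log-probabilities are summed
    instead of multiplied to avoid floating-point underflow."""
    best_class, best_score = None, -math.inf
    for c, prior in priors.items():
        score = math.log(prior)
        for p_i, f_i in zip(cond[c], x):
            score += math.log(p_i if f_i else 1.0 - p_i)
        if score > best_score:
            best_class, best_score = c, score
    return best_class

priors = {"spam": 0.5, "ham": 0.5}
cond = {"spam": [0.9, 0.5], "ham": [0.1, 0.5]}
print(classify(priors, cond, [1, 0]))  # → spam
```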

Discussion

The naive Bayes classifier has several properties that make it surprisingly useful in practice, despite the fact that the far-reaching independence assumptions are often violated. Like all probabilistic classifiers under the MAP decision rule, it arrives at the correct classification as long as the correct class is more probable than any other class; class probabilities do not have to be estimated very well. In other words, the overall classifier is robust to serious deficiencies of its underlying naive probability model. Other reasons for the observed success of the naive Bayes classifier are discussed in the literature cited below.

In real life, the naive Bayes approach is more powerful than might be expected from the extreme simplicity of its model; in particular, it is fairly robust in the presence of non-independent attributes. Recent theoretical analysis has shown why the naive Bayes classifier is so robust.

Example: document classification

Here is a worked example of naive Bayesian classification applied to the document classification problem. Consider the problem of classifying documents by their content, for example into spam and non-spam e-mails. Imagine that documents are drawn from a number of classes of documents which can be modelled as sets of words, where the (independent) probability that the i-th word of a given document occurs in a document from class C can be written as

p(w_i \mid C).

(For this treatment, we simplify things further by assuming that the probability of a word in a document is independent of the length of the document, or that all documents are of the same length.)

Then the probability of a given document D, given a class C, is

p(D \mid C) = \prod_i p(w_i \mid C).

The question that we desire to answer is: "what is the probability that a given document D belongs to a given class C?"

Now, by definition (see Probability axiom),

p(D \mid C) = \frac{p(D \cap C)}{p(C)}

and

p(C \mid D) = \frac{p(D \cap C)}{p(D)}.

Bayes' theorem manipulates these into a statement of probability in terms of likelihood:

p(C \mid D) = \frac{p(C)}{p(D)}\, p(D \mid C).

Assume for the moment that there are only two mutually exclusive classes, S and ¬S (e.g. spam and not spam), such that every document is in either one or the other;

p(D \mid S) = \prod_i p(w_i \mid S)

and

p(D \mid \neg S) = \prod_i p(w_i \mid \neg S).

Using the Bayesian result above, we can write:

p(S \mid D) = \frac{p(S)}{p(D)} \prod_i p(w_i \mid S)

and

p(\neg S \mid D) = \frac{p(\neg S)}{p(D)} \prod_i p(w_i \mid \neg S).

Dividing one by the other gives:

\frac{p(S \mid D)}{p(\neg S \mid D)} = \frac{p(S) \prod_i p(w_i \mid S)}{p(\neg S) \prod_i p(w_i \mid \neg S)},

which can be re-factored as:

\frac{p(S \mid D)}{p(\neg S \mid D)} = \frac{p(S)}{p(\neg S)} \prod_i \frac{p(w_i \mid S)}{p(w_i \mid \neg S)}.

Thus, the probability ratio p(S | D) / p(¬S | D) can be expressed in terms of a series of likelihood ratios. The actual probability p(S | D) can be easily computed from log(p(S | D) / p(¬S | D)) based on the observation that p(S | D) + p(¬S | D) = 1.

Taking the logarithm of all these ratios, we have:

\ln \frac{p(S \mid D)}{p(\neg S \mid D)} = \ln \frac{p(S)}{p(\neg S)} + \sum_i \ln \frac{p(w_i \mid S)}{p(w_i \mid \neg S)}.

This technique of "log-likelihood ratios" is a common technique in statistics. In the case of two mutually exclusive alternatives (such as this example), the conversion of a log-likelihood ratio to a probability takes the form of a sigmoid curve: see logit for details.
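The log-likelihood-ratio computation, together with the sigmoid conversion back to a probability, can be sketched as follows (the word table and probabilities are made up purely for illustration):

```python
import math

def spam_probability(words, prior_s, word_probs):
    """Computes p(S | D) via the log-likelihood ratio above.
    prior_s = p(S); word_probs[w] = (p(w | S), p(w | not-S)).
    Assumes every word in `words` appears in the table."""
    # Start from the log prior ratio ln(p(S) / p(not-S)) ...
    log_ratio = math.log(prior_s / (1.0 - prior_s))
    # ... and add one log likelihood ratio per word in the document.
    for w in words:
        p_s, p_not_s = word_probs[w]
        log_ratio += math.log(p_s / p_not_s)
    # Since p(S|D) + p(not-S|D) = 1, the log ratio maps back to a
    # probability through the logistic (sigmoid) function.
    return 1.0 / (1.0 + math.exp(-log_ratio))

word_probs = {"viagra": (0.8, 0.01), "meeting": (0.05, 0.4)}
print(spam_probability(["viagra"], 0.5, word_probs))   # → ~0.988
print(spam_probability(["meeting"], 0.5, word_probs))  # → ~0.111
```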

See also

Other sources

  • Pedro Domingos and Michael Pazzani. "On the optimality of the simple Bayesian classifier under zero-one loss". Machine Learning, 29:103–130, 1997. (also online at CiteSeer: [1])
  • Irina Rish. "An empirical study of the naive Bayes classifier". IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence. (available online: PDF, PostScript)

External links