人工知能 (Journal of the Japanese Society for Artificial Intelligence)
Online ISSN: 2435-8614
Print ISSN: 2188-2266
Formerly 人工知能学会誌 (1986–2013, Print ISSN: 0912-8085)
Learning Models with Singularities and the Algebraic Geometry of Prior Distributions (Special Issue: Information-Theoretic Learning Theory (IBIS2000))
渡辺 澄夫 (Sumio Watanabe)

2001, Vol. 16, No. 2, pp. 308–315

Abstract

The parameter space of a hierarchical learning machine is not a Riemannian manifold, since the rank of the Fisher information metric depends on the parameter. In a previous paper, we proved that the stochastic complexity is asymptotically equal to λ log n − (m−1) log log n, where λ is a rational number, m is a natural number, and n is the number of empirical samples. We also proved that both λ and m can be calculated by resolution of singularities. However, both λ and m depend on the parameter representation and on the size of the true distribution. In this paper, we study Jeffreys' prior distribution, which is coordinate-free, and prove that 2λ is equal to the dimension of the parameter set and that m = 1, independently of the parameter representation and the singularities. This fact indicates that Jeffreys' prior is useful in model selection and knowledge discovery, even though it makes the prediction error larger than positive prior distributions do.
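To make the coordinate-free prior concrete (this example is not from the paper itself): for a one-parameter Bernoulli model, Jeffreys' prior is π(p) ∝ √I(p) with Fisher information I(p) = 1/(p(1−p)), and normalizing over (0, 1) gives the Beta(1/2, 1/2) density. A minimal Python sketch, assuming only these standard definitions:

```python
import math

def fisher_information_bernoulli(p):
    # For a Bernoulli(p) model, the Fisher information is
    # I(p) = E[(d/dp log f(x|p))^2] = 1 / (p (1 - p)).
    return 1.0 / (p * (1.0 - p))

def jeffreys_prior_unnormalized(p):
    # Jeffreys' prior is proportional to sqrt(det I(p));
    # in one dimension this is simply sqrt(I(p)).
    return math.sqrt(fisher_information_bernoulli(p))

def jeffreys_prior_density(p):
    # The integral of 1/sqrt(p(1-p)) over (0, 1) equals pi,
    # so the normalized density is the Beta(1/2, 1/2) density.
    return jeffreys_prior_unnormalized(p) / math.pi

# Numerical sanity check: the density should integrate to about 1
# (midpoint rule; the endpoint singularities are integrable).
n = 200000
total = sum(jeffreys_prior_density((k + 0.5) / n) / n for k in range(n))
print(total)  # close to 1
```

Note that the density is unbounded at p = 0 and p = 1; these are exactly the boundary points where the model degenerates, which is why the midpoint rule (which never evaluates at the endpoints) is used for the check.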

© 2001 The Japanese Society for Artificial Intelligence