===Inductive inference===
This is the recursion-theoretic branch of learning theory. It is based on Gold's model of learning in the limit from 1967 and has since developed more and more models of learning. The general scenario is the following: given a class ''S'' of computable functions, is there a learner (that is, a recursive functional) which, for any input of the form (''f''(0), ''f''(1), ..., ''f''(''n'')), outputs a hypothesis? A learner ''M'' learns a function ''f'' if almost all of its hypotheses are the same index ''e'' of ''f'' with respect to a previously agreed-on acceptable numbering of all computable functions; ''M'' learns ''S'' if ''M'' learns every ''f'' in ''S''. Basic results are that all recursively enumerable classes of functions are learnable, while the class REC of all computable functions is not. Many related models have been considered, and the learning of classes of recursively enumerable sets from positive data has also been studied from Gold's pioneering 1967 paper onwards.
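A minimal sketch of the classical learning-by-enumeration strategy may make this scenario concrete. It assumes the class ''S'' is given as a uniformly computable listing of total functions (which is what recursive enumerability supplies here); the function names and the toy class below are illustrative, not taken from Gold's paper.

<syntaxhighlight lang="python">
# Sketch of Gold-style identification by enumeration (hypothetical helper names):
# given a listing g_0, g_1, ... of a recursively enumerable class S of total
# functions, the learner outputs the least index consistent with the data so far.

def make_enumeration_learner(hypothesis_space):
    """hypothesis_space: a list of total functions g_0, g_1, ... representing S."""
    def learner(data):
        # data is the finite sequence (f(0), f(1), ..., f(n))
        for e, g in enumerate(hypothesis_space):
            if all(g(x) == y for x, y in enumerate(data)):
                return e          # current hypothesis: an index for f
        return None               # no function in the class fits the data

    return learner

# Toy class S: the constant functions 0..4 plus the identity function.
S = [lambda x, c=c: c for c in range(5)] + [lambda x: x]

M = make_enumeration_learner(S)
f = lambda x: x                   # target function f (here: the identity)

hypotheses = [M([f(x) for x in range(n + 1)]) for n in range(10)]
print(hypotheses)                 # [0, 5, 5, 5, ...]: almost all hypotheses are 5, an index for f
</syntaxhighlight>

The same strategy works for any recursively enumerable class of total computable functions, which is one half of the basic result above; it breaks down for REC because there is no such effective listing of exactly the total computable functions.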
 
===Generalizations of Turing computability===