==== Support vector machines ====

:''Main article: [[SVM支持向量机]]''

Support vector machines (SVMs), also known as support vector networks, are a set of related [[supervised learning]] methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.<ref name="CorinnaCortes">{{Cite journal |last1=Cortes |first1=Corinna |authorlink1=Corinna Cortes |last2=Vapnik |first2=Vladimir N. |year=1995 |title=Support-vector networks |journal=[[Machine Learning (journal)|Machine Learning]] |volume=20 |issue=3 |pages=273–297 |doi=10.1007/BF00994018 |doi-access=free }}</ref> An SVM training algorithm is a non-[[probabilistic classification|probabilistic]], [[binary classifier|binary]], [[linear classifier]], although methods such as [[Platt scaling]] exist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the [[kernel trick]], implicitly mapping their inputs into high-dimensional feature spaces.
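The behaviour described above can be sketched in a few lines of code. The following is a minimal illustration rather than part of the original article: it assumes the scikit-learn library is available, and the synthetic data set, variable names, and parameter values are chosen only for the example. It trains an RBF-kernel SVM (the kernel trick) on two-class data and enables <code>probability=True</code>, which scikit-learn implements with Platt scaling, so the classifier can also return class probabilities.

<syntaxhighlight lang="python">
# Illustrative sketch (not from the original article): a binary SVM classifier
# with an RBF kernel and Platt-scaled probability outputs, using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for "a set of training examples, each marked as
# belonging to one of two categories".
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# kernel="rbf" applies the kernel trick (an implicit high-dimensional feature map);
# probability=True adds Platt scaling so predict_proba is available.
clf = SVC(kernel="rbf", C=1.0, probability=True, random_state=0)
clf.fit(X_train, y_train)

print("accuracy:", clf.score(X_test, y_test))                 # hard binary predictions
print("class probabilities:", clf.predict_proba(X_test[:1]))  # Platt-scaled outputs
</syntaxhighlight>

With <code>kernel="linear"</code> the same estimator reduces to the linear classifier described above; the RBF kernel is what performs the implicit mapping into a high-dimensional feature space.
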
''Figure: Linear regression on a data set.''

==== Regression analysis ====