Preface
====
Every line of code aims to carry a comment, and the important parts cite the formula they implement. The goal is code like the example below: learners can read the program side by side with the formulas, so every step is traceable to its source.

![image](https://github.com/Dod-o/Statistical-Learning-Method_Code/blob/master/CodePic.png)
Time permitting, I may try to write a blog post for each chapter. For now, here is the blog link: [portal](http://www.pkudodo.com/).

##### Note: the Mnist dataset has been converted to csv format. Because the 107 MB file exceeds GitHub's size limit, it is provided as a zip archive instead. After downloading, be sure to extract the archive inside the Mnist folder first.
Implementation
======
### Chapter 2: Perceptron

Blog: [Statistical Learning Method | Perceptron: Principles and Implementation](http://www.pkudodo.com/2018/11/18/1-4/)

Implementation: [perceptron/perceptron_dichotomy.py](https://github.com/Dod-o/Statistical-Learning-Method_Code/blob/master/perceptron/perceptron_dichotomy.py)
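
For a quick feel of the algorithm before opening the full file, here is a minimal sketch of the primal-form perceptron on toy data. The repository version trains on Mnist; the toy samples and hyperparameters below are illustrative assumptions, not the repo's code.

```python
import numpy as np

def perceptron_train(X, y, lr=0.1, epochs=50):
    """Primal-form perceptron (Chapter 2). When a sample is misclassified,
    i.e. y_i * (w . x_i + b) <= 0, apply the book's update rules:
        w <- w + lr * y_i * x_i
        b <- b + lr * y_i
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified sample
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy linearly separable data; labels must be in {-1, +1}
X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1, 1, -1])
w, b = perceptron_train(X, y)
```

On separable data like this, the loop stops changing `w` and `b` once every sample satisfies `y_i * (w . x_i + b) > 0`.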
### Chapter 3: K-Nearest Neighbors

Blog: [Statistical Learning Method | K-Nearest Neighbors: Principles and Implementation](http://www.pkudodo.com/2018/11/19/1-2/)

Implementation: [KNN/KNN.py](https://github.com/Dod-o/Statistical-Learning-Method_Code/blob/master/KNN/KNN.py)
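
As a taste of the chapter, here is a minimal brute-force k-NN sketch on toy data under the Euclidean metric (the book's L_p distance with p = 2). The repository file applies the same idea to Mnist; the toy clusters and `k` below are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every training sample
    nearest = np.argsort(dists)[:k]              # indices of the k closest samples
    return Counter(y_train[nearest]).most_common(1)[0][0]  # majority class

# Two well-separated toy clusters
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y_train = np.array([0, 0, 0, 1, 1, 1])
```

The brute-force scan is O(n) per query; the book's kd-tree (implemented for the chapter) exists precisely to speed this search up.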
### Chapter 4: Naive Bayes

Blog: [Statistical Learning Method | Naive Bayes: Principles and Implementation](http://www.pkudodo.com/2018/11/21/1-3/)

Implementation: [NaiveBayes/NaiveBayes.py](https://github.com/Dod-o/Statistical-Learning-Method_Code/blob/master/NaiveBayes/NaiveBayes.py)
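
A minimal sketch of naive Bayes with Bayesian (Laplace) estimation, lambda = 1, as in Chapter 4. The toy data, function names, and encoding are illustrative assumptions; the repository version works on binarized Mnist features.

```python
import numpy as np

def naive_bayes_fit(X, y, n_values, n_classes, lam=1.0):
    """Estimate smoothed priors P(Y=c) and conditionals P(X_j=v | Y=c).
    X holds discrete features with values in {0, ..., n_values - 1}."""
    n, d = X.shape
    prior = np.zeros(n_classes)
    cond = np.zeros((n_classes, d, n_values))
    for c in range(n_classes):
        Xc = X[y == c]
        prior[c] = (len(Xc) + lam) / (n + n_classes * lam)  # smoothed prior
        for j in range(d):
            for v in range(n_values):
                # smoothed conditional probability
                cond[c, j, v] = (np.sum(Xc[:, j] == v) + lam) / (len(Xc) + n_values * lam)
    return prior, cond

def naive_bayes_predict(prior, cond, x):
    # Sum logs instead of multiplying probabilities to avoid underflow,
    # then pick the class maximizing the (log-)posterior.
    scores = [np.log(prior[c]) + sum(np.log(cond[c, j, x[j]]) for j in range(len(x)))
              for c in range(len(prior))]
    return int(np.argmax(scores))

# Toy data: two binary features that each track the class label
X = np.array([[0, 0], [0, 0], [1, 1], [1, 1]])
y = np.array([0, 0, 1, 1])
prior, cond = naive_bayes_fit(X, y, n_values=2, n_classes=2)
```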
### Chapter 5: Decision Tree

Implementation: [DecisionTree/DecisionTree.py](https://github.com/Dod-o/Statistical-Learning-Method_Code/blob/master/DecisionTree/DecisionTree.py)
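
The core quantity behind the chapter's ID3-style splitting is information gain, g(D, A) = H(D) - H(D|A). A minimal sketch of just that computation (the toy data and function names are illustrative, not the repo's code):

```python
import numpy as np

def entropy(y):
    """Empirical entropy H(D) = -sum_k p_k * log2(p_k) over class frequencies."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def info_gain(X, y, j):
    """Information gain of feature j: H(D) minus the conditional entropy H(D|A_j)."""
    cond = 0.0
    for v in np.unique(X[:, j]):
        mask = X[:, j] == v
        cond += mask.mean() * entropy(y[mask])  # weighted entropy of each subset
    return entropy(y) - cond

# Toy data: feature 0 copies the label exactly, feature 1 is uninformative
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 1, 1])
```

ID3 grows the tree by repeatedly splitting on the feature with the largest information gain; C4.5 normalizes by the feature's own entropy to get the information gain ratio.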
### Chapter 6: Logistic Regression and Maximum Entropy Models

Implementation:

Logistic regression: [Logistic_and_maximum_entropy_models/logisticRegression.py](https://github.com/Dod-o/Statistical-Learning-Method_Code/blob/master/Logistic_and_maximum_entropy_models/logisticRegression.py)

Maximum entropy: [Logistic_and_maximum_entropy_models/maxEntropy.py](https://github.com/Dod-o/Statistical-Learning-Method_Code/blob/master/Logistic_and_maximum_entropy_models/maxEntropy.py)
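
A minimal sketch of the chapter's binomial logistic regression, fitted by batch gradient ascent on the log-likelihood with labels in {0, 1}. The toy 1-D data, learning rate, and epoch count are illustrative assumptions, not the repo's code.

```python
import numpy as np

def logistic_train(X, y, lr=0.1, epochs=1000):
    """Binomial logistic regression (Chapter 6). The bias is absorbed by
    appending a constant-1 feature, as the book's w-hat notation does."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # augmented inputs
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # P(Y = 1 | x) under current w
        w += lr * Xb.T @ (y - p)               # gradient of the log-likelihood
    return w

def logistic_predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)            # P > 0.5 iff the linear score > 0

# Toy 1-D data, separable between x = 1 and x = 2
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
w = logistic_train(X, y)
```

Because the log-likelihood is concave, plain gradient ascent with a small enough learning rate converges; the book also covers quasi-Newton methods for the same objective.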