Common Tools for Python Data Analysis and Mining
Source: https://www.jianshu.com/p/6d66e40ef838
Lists (mutable) and tuples (immutable)
Dictionaries (<key, value> structures)
Sets (as in the mathematical notion of a set)
Functional programming (mainly lambda, map(), reduce(), filter())
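A minimal sketch of these built-in structures and functional tools (all variable names here are illustrative):

```python
from functools import reduce  # in Python 3, reduce lives in functools

# list (mutable) vs. tuple (immutable)
nums = [3, 1, 2]      # a list can be modified in place
nums.append(4)        # -> [3, 1, 2, 4]
point = (1.0, 2.0)    # a tuple cannot be modified

# dict: <key, value> structure; set: as in mathematics (duplicates removed)
prices = {'apple': 3.5, 'pear': 2.0}
unique = {1, 2, 2, 3}  # -> {1, 2, 3}

# functional tools: lambda, map, filter, reduce
squares = list(map(lambda x: x * x, nums))        # [9, 1, 4, 16]
evens = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4]
total = reduce(lambda a, b: a + b, nums)          # 3 + 1 + 2 + 4 = 10
```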
Common Python libraries for data analysis:
NumPy

import numpy as np

a = np.array([2, 0, 1, 5])
print(a)
print(a[:3])    # slice: the first three elements
print(a.min())  # minimum value
a.sort()        # sorts a in place (a is overwritten)
print(a)
b = np.array([[1, 2, 3], [4, 5, 6]])
print(b * b)    # element-wise multiplication

Output:
[2 0 1 5]
[2 0 1]
0
[0 1 2 5]
[[ 1  4  9]
 [16 25 36]]
SciPy

# Solve a system of nonlinear equations: 2*x1 - x2**2 = 1, x1**2 - x2 = 2
from scipy.optimize import fsolve

def f(x):
    x1 = x[0]
    x2 = x[1]
    return [2 * x1 - x2 ** 2 - 1, x1 ** 2 - x2 - 2]

result = fsolve(f, [1, 1])  # initial guess [1, 1]
print(result)

# Numerical integration
from scipy import integrate

def g(x):  # the integrand: the upper half of the unit circle
    return (1 - x ** 2) ** 0.5

pi_2, err = integrate.quad(g, -1, 1)  # returns the integral value and an error estimate
print(pi_2 * 2, err)  # the integral equals pi/2, so doubling it approximates pi

Output:
[ 1.91963957  1.68501606]
3.141592653589797 1.0002356720661965e-09
Matplotlib

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 10000)  # independent variable x; 10000 is the number of points
y = np.sin(x) + 1              # dependent variable y
z = np.cos(x ** 2) + 1         # dependent variable z
plt.figure(figsize=(8, 4))     # set the figure size
# plt.rcParams['font.sans-serif'] = 'SimHei'   # needed if labels contain Chinese characters
# plt.rcParams['axes.unicode_minus'] = False   # add this if minus signs render incorrectly when saving
# two curves
plt.plot(x, y, label=r'$\sin(x)+1$', color='red', linewidth=2)  # set the label, line color, line width
plt.plot(x, z, 'b--', label=r'$\cos(x^2)+1$')
plt.xlim(0, 10)      # x-axis range
plt.ylim(0, 2.5)     # y-axis range
plt.xlabel("Time(s)")  # x-axis label
plt.ylabel("Volt")     # y-axis label
plt.title("Matplotlib Sample")  # figure title
plt.legend()  # show the legend
plt.show()    # display the plot
Pandas

import pandas as pd

s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
d = pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12], [13, 14, 15], [16, 17, 18]], columns=['a', 'b', 'c'])
d2 = pd.DataFrame(s)  # a DataFrame can also be built from a Series
print(s)
print(d.head())      # preview the first 5 rows
print(d.describe())  # summary statistics
# read a file (avoid non-ASCII characters in the path if possible)
df = pd.read_csv("G:\\data.csv", encoding="utf-8")
print(df)

Output:
a 1
b 2
c 3
dtype: int64
a b c
0 1 2 3
1 4 5 6
2 7 8 9
3 10 11 12
4 13 14 15
a b c
count 6.000000 6.000000 6.000000
mean 8.500000 9.500000 10.500000
std 5.612486 5.612486 5.612486
min 1.000000 2.000000 3.000000
25% 4.750000 5.750000 6.750000
50% 8.500000 9.500000 10.500000
75% 12.250000 13.250000 14.250000
max 16.000000 17.000000 18.000000
Empty DataFrame
Columns: [1068, 12, 蔬果, 1201, 蔬菜, 120104, 花果, 20150430, 201504, DW-1201040010, 散称, 生鲜, 千克, 0.973, 5.43, 2.58, 否]
Index: []
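The "Empty DataFrame" above suggests the CSV file has no header row, so pandas consumed the first (and only) data row as column names. A hedged sketch of the fix, using an in-memory file and illustrative column names rather than the original G:\data.csv:

```python
import io
import pandas as pd

# simulate a headerless CSV like the one above (values are illustrative)
raw = "1068,12,fruit\n1069,13,vegetable\n"

# without header=None, the first row becomes the header, leaving only one data row
df_wrong = pd.read_csv(io.StringIO(raw))

# with header=None every row is treated as data; names= supplies readable column names
df_right = pd.read_csv(io.StringIO(raw), header=None,
                       names=['code', 'region', 'category'])

print(len(df_wrong), len(df_right))  # 1 2
```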
Scikit-Learn

from sklearn.linear_model import LinearRegression
model = LinearRegression()
print(model)

Interfaces provided by all models:
model.fit(): trains the model; supervised models use fit(X, y), unsupervised models use fit(X)
Interfaces provided by supervised models:
model.predict(X_new): predict new samples
model.predict_proba(X_new): predict class probabilities; only available for some models (e.g. logistic regression)
Interfaces provided by unsupervised models:
model.transform(): learn a new "basis space" from the data
model.fit_transform(): learn a new basis from the data and transform the data onto that basis
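The unsupervised transform()/fit_transform() pair can be illustrated with a scaler; this is a minimal sketch, and StandardScaler is just one example of an unsupervised transformer:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

# fit_transform: learn the per-column mean/std and transform in one step
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# equivalent two-step form: fit first, then transform
scaler2 = StandardScaler().fit(X)
X_scaled2 = scaler2.transform(X)

print(np.allclose(X_scaled, X_scaled2))  # True
print(X_scaled.mean(axis=0))             # each column now has zero mean
```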
from sklearn import datasets  # import the bundled datasets
from sklearn import svm

iris = datasets.load_iris()      # load the iris dataset
clf = svm.LinearSVC()            # build a linear SVM classifier
clf.fit(iris.data, iris.target)  # train the model on the data
print(clf.predict([[5, 3, 1, 0.2], [5.0, 3.6, 1.3, 0.25]]))  # predict two new samples

Output:
[0 0]
Keras

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD

model = Sequential()  # initialize the model
model.add(Dense(64, input_dim=20))  # connect the input layer (20 nodes) to the first hidden layer (64 nodes)
model.add(Activation('tanh'))       # tanh activation for the first hidden layer
model.add(Dropout(0.5))             # Dropout to reduce overfitting
model.add(Dense(64))                # connect the first hidden layer to the second hidden layer (64 nodes)
model.add(Activation('tanh'))       # tanh activation for the second hidden layer
model.add(Dense(1))                 # connect the second hidden layer to the output layer (1 node)
model.add(Activation('sigmoid'))    # sigmoid activation for the output layer
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)  # define the optimizer
model.compile(loss='mean_squared_error', optimizer=sgd)  # compile the model with mean squared error loss
model.fit(x_train, y_train, epochs=20, batch_size=16)    # train (x_train, y_train assumed defined)
score = model.evaluate(x_test, y_test, batch_size=16)    # evaluate on test data
Reference: the Keras documentation (Chinese edition)
Gensim

import logging
from gensim import models

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',
                    level=logging.INFO)
sentences = [['first', 'sentence'], ['second', 'sentence']]  # input: sentences already split into words
model = models.Word2Vec(sentences, min_count=1)  # train a word-vector model on the sentences above
print(model.wv['sentence'])  # print the word vector for "sentence" (model['sentence'] is deprecated)

Output (training log, followed by the 100-dimensional vector for "sentence"):
2017-10-24 19:02:40,785 : INFO : collecting all words and their counts
2017-10-24 19:02:40,785 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2017-10-24 19:02:40,785 : INFO : collected 3 word types from a corpus of 4 raw words and 2 sentences
2017-10-24 19:02:40,785 : INFO : Loading a fresh vocabulary
2017-10-24 19:02:40,785 : INFO : min_count=1 retains 3 unique words (100% of original 3, drops 0)
2017-10-24 19:02:40,785 : INFO : min_count=1 leaves 4 word corpus (100% of original 4, drops 0)
2017-10-24 19:02:40,786 : INFO : deleting the raw counts dictionary of 3 items
2017-10-24 19:02:40,786 : INFO : sample=0.001 downsamples 3 most-common words
2017-10-24 19:02:40,786 : INFO : downsampling leaves estimated 0 word corpus (5.7% of prior 4)
2017-10-24 19:02:40,786 : INFO : estimated required memory for 3 words and 100 dimensions: 3900 bytes
2017-10-24 19:02:40,786 : INFO : resetting layer weights
2017-10-24 19:02:40,786 : INFO : training model with 3 workers on 3 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2017-10-24 19:02:40,788 : INFO : worker thread finished; awaiting finish of 2 more threads
2017-10-24 19:02:40,788 : INFO : worker thread finished; awaiting finish of 1 more threads
2017-10-24 19:02:40,788 : INFO : worker thread finished; awaiting finish of 0 more threads
2017-10-24 19:02:40,789 : INFO : training on 20 raw words (0 effective words) took 0.0s, 0 effective words/s
2017-10-24 19:02:40,789 : WARNING : under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay
[ -1.54225400e-03  -2.45212857e-03  -2.20486755e-03  -3.64410551e-03
  ...
   1.34165725e-03   3.60148447e-03   4.80416557e-03  -1.98963983e-03]
(100 values in total; exact numbers vary between runs because the weights are randomly initialized)
Further reading: How to Compute the Similarity of Two Documents (Part 2)
These notes are a brief introduction to the tools commonly used in data analysis and mining; detailed usage will be covered in later notes.