Machine Learning (Zhou Zhihua), the Watermelon Book: Chapter 11, Exercise 11.1 in Python
-
Problem
Write a program that implements the Relief algorithm and examine its results on watermelon dataset 3.0.
-
Principle
Principle and purpose of the Relief algorithm
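In brief: for every sample x_i, Relief finds its near-hit x_{i,nh} (the nearest sample of the same class) and its near-miss x_{i,nm} (the nearest sample of the other class), then accumulates a relevance statistic for each attribute j (stated here in the book's notation):

\delta^j = \sum_i \Big( -\mathrm{diff}\big(x_i^j,\, x_{i,\mathrm{nh}}^j\big)^2 + \mathrm{diff}\big(x_i^j,\, x_{i,\mathrm{nm}}^j\big)^2 \Big)

where diff is 0/1 for a discrete attribute and the absolute difference (after normalization to [0, 1]) for a continuous one. The purpose: a large \delta^j means samples lie closer to their near-hits than to their near-misses along attribute j, i.e. the attribute helps separate the two classes, so ranking attributes by \delta^j (or keeping those above a threshold) yields a feature selection.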
-
Procedure
Obtaining the dataset
Save watermelon dataset 3.0 as data_3.txt:
编号,色泽,根蒂,敲声,纹理,脐部,触感,密度,含糖率,好瓜
1,青绿,蜷缩,浊响,清晰,凹陷,硬滑,0.697,0.46,是
2,乌黑,蜷缩,沉闷,清晰,凹陷,硬滑,0.774,0.376,是
3,乌黑,蜷缩,浊响,清晰,凹陷,硬滑,0.634,0.264,是
4,青绿,蜷缩,沉闷,清晰,凹陷,硬滑,0.608,0.318,是
5,浅白,蜷缩,浊响,清晰,凹陷,硬滑,0.556,0.215,是
6,青绿,稍蜷,浊响,清晰,稍凹,软粘,0.403,0.237,是
7,乌黑,稍蜷,浊响,稍糊,稍凹,软粘,0.481,0.149,是
8,乌黑,稍蜷,浊响,清晰,稍凹,硬滑,0.437,0.211,是
9,乌黑,稍蜷,沉闷,稍糊,稍凹,硬滑,0.666,0.091,否
10,青绿,硬挺,清脆,清晰,平坦,软粘,0.243,0.267,否
11,浅白,硬挺,清脆,模糊,平坦,硬滑,0.245,0.057,否
12,浅白,蜷缩,浊响,模糊,平坦,软粘,0.343,0.099,否
13,青绿,稍蜷,浊响,稍糊,凹陷,硬滑,0.639,0.161,否
14,浅白,稍蜷,沉闷,稍糊,凹陷,硬滑,0.657,0.198,否
15,乌黑,稍蜷,浊响,清晰,稍凹,软粘,0.36,0.37,否
16,浅白,蜷缩,浊响,模糊,平坦,硬滑,0.593,0.042,否
17,青绿,蜷缩,沉闷,稍糊,稍凹,硬滑,0.719,0.103,否
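A quick way to confirm the file parses before running the full program (a minimal check, assuming the file was saved as UTF-8; pass whatever encoding your editor actually used):

import pandas as pd

# The file should yield 17 rows and 10 columns (编号 through 好瓜).
df = pd.read_csv('data_3.txt', encoding='utf-8')
print(df.shape)  # expected: (17, 10)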
Implementing the algorithm
Define the relevant variables, e.g. the discrete attributes and their possible values
Function that reads the data
Function that processes the data, normalizing the continuous attributes (see the formulas after this list)
Function that computes the Euclidean distance between two samples
Function that computes the diff value of two samples on attribute j
Function that finds a sample's near-hit in the dataset
Function that finds a sample's near-miss in the dataset
Feature-selection function based on the Relief algorithm
main function that calls the above and prints the attributes with their relevance components in descending order
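Two of the steps above are worth stating as formulas (my reading of the listing below): each continuous attribute is min-max scaled into [0, 1], and the neighbor search uses a mixed distance built from the same diff values as the relevance statistic:

x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}, \qquad d(x_a, x_b) = \sqrt{\sum_j \mathrm{diff}\big(x_a^j,\, x_b^j\big)^2}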
-
Results
-
Program listing:
import numpy as np
import pandas as pd
from sklearn import preprocessing
# Discrete attributes (and the class column 好瓜) with their possible values;
# attributes absent from this dict (密度, 含糖率) are treated as continuous.
D_keys = {
    '色泽': ['青绿', '乌黑', '浅白'],
    '根蒂': ['蜷缩', '硬挺', '稍蜷'],
    '敲声': ['清脆', '沉闷', '浊响'],
    '纹理': ['稍糊', '模糊', '清晰'],
    '脐部': ['凹陷', '稍凹', '平坦'],
    '触感': ['软粘', '硬滑'],
    '好瓜': ['否', '是'],
}
class_name = '好瓜'
names = ['色泽', '根蒂', '敲声', '纹理', '脐部', '触感', '密度', '含糖率']
# Read the data and drop the 编号 (ID) column, which carries no information.
def loadData(filename):
    dataSet = pd.read_csv(filename)
    dataSet.drop(columns=['编号'], inplace=True)
    return dataSet
def processData(dataSet):
    # Min-max scale each continuous attribute into [0, 1];
    # discrete attributes are left unchanged.
    for key in names:
        if key in D_keys:
            continue
        x = np.array(dataSet[key]).reshape(-1, 1)
        min_max_scaler = preprocessing.MinMaxScaler()
        x_scaled = min_max_scaler.fit_transform(x)
        dataSet[key] = x_scaled.ravel()
    return dataSet
# Distance between two samples, used only for the neighbor search:
# a discrete attribute contributes 1 if the values differ and 0 otherwise;
# a continuous attribute (already scaled to [0, 1]) contributes its squared difference.
def calc_distance(xa, xb):
    distance = 0
    for key in names:
        if key in D_keys:
            distance += 0 if xa[key] == xb[key] else 1
        else:
            distance += (xa[key] - xb[key]) ** 2
    return distance ** 0.5
# diff value of two samples on attribute j:
# 0/1 for a discrete attribute, absolute difference for a continuous one.
def calc_diff(xa, xb, j):
    if j in D_keys:
        return 0 if xa[j] == xb[j] else 1
    else:
        return abs(xa[j] - xb[j])
# Near-hit: the nearest sample with the same label as xi, excluding xi itself.
def find_near_hit(dataSet, i, xi):
    label = xi[class_name]
    hit_samples = dataSet.loc[dataSet[class_name] == label]
    least_distance = float('inf')
    xi_nh = None
    for index, row in hit_samples.iterrows():
        if index == i:
            continue
        distance = calc_distance(xi, row)
        if distance < least_distance:
            xi_nh = row
            least_distance = distance
    return xi_nh
# Near-miss: the nearest sample with a different label from xi.
def find_near_miss(dataSet, i, xi):
    label = xi[class_name]
    miss_samples = dataSet.loc[dataSet[class_name] != label]
    least_distance = float('inf')
    xi_nm = None
    for index, row in miss_samples.iterrows():
        distance = calc_distance(xi, row)
        if distance < least_distance:
            xi_nm = row
            least_distance = distance
    return xi_nm
def Relief(dataSet):
    # Accumulate the relevance statistic for every attribute:
    # delta_j = sum_i ( -diff(xi, xi_nh)^2 + diff(xi, xi_nm)^2 ).
    # A sample's near-hit and near-miss do not depend on the attribute,
    # so they are found once per sample rather than once per (attribute, sample) pair.
    powers = {key: 0 for key in names}
    for index, xi in dataSet.iterrows():
        xi_nh = find_near_hit(dataSet, index, xi)
        xi_nm = find_near_miss(dataSet, index, xi)
        for key in names:
            powers[key] += -calc_diff(xi, xi_nh, key) ** 2 + calc_diff(xi, xi_nm, key) ** 2
    return [powers[key] for key in names]
if __name__ == '__main__':
    filename = 'data_3.txt'
    dataSet = loadData(filename)
    dataSet = processData(dataSet)
    features = Relief(dataSet)
    # Pair names with scores before sorting; the original {score: name} dict
    # would silently drop an attribute whenever two scores tied.
    for name, feature in sorted(zip(names, features), key=lambda t: t[1], reverse=True):
        print(name, feature)
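The book notes that Relief only needs to estimate \delta^j on a sample of the dataset rather than on every instance. A sketch of that variant, reusing the functions above (Relief_sampled and sample_size are illustrative names, not part of the original listing):

import random

def Relief_sampled(dataSet, sample_size=10):
    # Same statistic as Relief(), estimated from a random subset of rows;
    # useful on large datasets, but the scores vary with the random draw.
    powers = {key: 0 for key in names}
    for i in random.sample(list(dataSet.index), sample_size):
        xi = dataSet.loc[i]
        xi_nh = find_near_hit(dataSet, i, xi)
        xi_nm = find_near_miss(dataSet, i, xi)
        for key in names:
            powers[key] += -calc_diff(xi, xi_nh, key) ** 2 + calc_diff(xi, xi_nm, key) ** 2
    return [powers[key] for key in names]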