Predicting Honor of Kings (王者荣耀) Esports Match Results


Win prediction based on Honor of Kings esports match data

1. Background

        Honor of Kings is a MOBA (multiplayer online battle arena) in which two teams, blue and red, face off. The map has three lanes, a river, and jungle areas, with five characters per side. The objective is to destroy the enemy crystal to win the match.

        Field descriptions:

                'date': match date,
                'blue_is_win': whether the blue side won,
                'blue_assist_num': blue side total assists,
                'blue_death_num': blue side deaths,
                'blue_gold': blue side total gold (economy),
                'blue_kda': blue side KDA,
                'blue_kill_big_dragon_num': blue side big-dragon kills,
                'blue_kill_dark_tyrant_num': blue side Dark Tyrant kills,
                'blue_kill_num': blue side kills,
                'blue_kill_prophet_dragon_num': blue side Prophet Dragon kills,
                'blue_kill_shadow_dragon_num': blue side Shadow Dragon kills,
                'blue_kill_storm_dragon_king_num': blue side Storm Dragon King kills,
                'blue_kill_tyrant_num': blue side Tyrant kills,
                'blue_push_tower_num': blue side towers destroyed,
                the red_* fields are the same, for the red side

        

       Feasibility and purpose of analyzing this data:


               With online games so widespread among young people, esports has become increasingly popular. As one of its flagship titles, Honor of Kings draws the attention and enthusiasm of more and more young players.

                Feasibility: the final outcome of a match is shaped by many factors. A player's mechanics and game sense are undeniably decisive, but all of that personal skill is reflected in the in-game statistics, and Honor of Kings is a team game. Late-game statistics therefore capture both individual skill and team coordination reasonably well, and at that point the numbers can be used to predict the result fairly accurately.

                Purpose: analyzing match data not only lets us predict a game's outcome, it also surfaces the factors that weigh most heavily on the result, which offers practical guidance for real matches.

2. Data Collection

import asyncio
import aiohttp
import re
import datetime
import json
import pandas as pd
import aiofiles
import os

headers = {
    'Cookie': '_uab_collina=168174060388197251392936; JSESSIONID=870079FC7C9CF454596C43BF8E46E10D; guidesStatus=off; highContrastMode=defaltMode; cursorStatus=off; RAIL_EXPIRATION=1682022495040; RAIL_DEVICEID=UGNWKvlIttEPrAHYs5qoS3W1LQijGzITNay4GGB_ygWkyDZF_zqWgTu2S0PKEQI5TwP25b13YSymCfctnktfSuC5wJZYBdC5-63VNOTwX3uZwtutxdm1hBZj627PiJ75s1e6FdBVLcWIOgnq6Ch5NA0yJKD2MJrU; fo=2ha297esu02yt2e951tOmP5M61joZTjo-p5_eHo9V2C1t91qCH61-ujGBp_wokyRynYHrf9xVckkj0cwXXk0IF26MK6xEe91migR3MembRNxxKuzKpFuoT5m8IDUyQYN42JxKt_kBlMAhyM8wXGIG6GDXoZLMhApU2fUYq95k-FcucY5QGiIfhlps2U; _jc_save_fromStation=%u957F%u6C99%2CCSQ; _jc_save_wfdc_flag=dc; _jc_save_toStation=%u4E0A%u6D77%2CSHH; route=c5c62a339e7744272a54643b3be5bf64; BIGipServerotn=3973513482.24610.0000; _jc_save_fromDate=2023-04-22; _jc_save_toDate=2023-04-18',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36 Edg/112.0.1722.48'
}
async def save_to_excel(file_name, data):
    # Check for the file BEFORE opening it in append mode; opening creates the
    # file, so checking afterwards would mean the header row is never written
    write_header = not os.path.exists(file_name)
    async with aiofiles.open(file_name, mode='a') as csv_file:
        if write_header:
            await csv_file.write(','.join(data.keys()) + '\n')  # write the header row
        await csv_file.write(','.join(str(data[key]) for key in data.keys()) + '\n')  # write one data row
async def get_team_data(team_id):
    url = f'={team_id}'  # note: the API base URL was stripped when the article was republished
    async with aiohttp.ClientSession() as session:
        async with session.get(url, headers=headers) as response:
            response_text = await response.text()
            try:
                response_json = json.loads(response_text)
            except json.JSONDecodeError as e:
                print(f"Error decoding JSON: {e}")
                response_json = None
            if response_json is not None:
                race_list = response_json['results']
                for race in race_list:
                    teamOne_name = race['camp1']['team_name']
                    teamOneScore = race['camp1']['score']
                    teamTwo_name = race['camp2']['team_name']
                    teamTwoScore = race['camp2']['score']
                    league_id = race['league_id']
                    match_id = race['match_id']
                    date = race['start_time'][:10]
                    winningTeam = 1 if teamOneScore > teamTwoScore else 0
                    print(date, teamOne_name, ':', teamTwo_name, teamOneScore, ':', teamTwoScore, 'winner:', winningTeam, league_id)
                    detail_url = f'={match_id}'  # base URL stripped here as well
                    async with session.get(detail_url, headers=headers) as detail_response:
                        detail_id_list = re.findall('"battle_id":"(.*?)"', await detail_response.text())
                        i = 1
                        for battle_id in detail_id_list:
                            detail_url_2 = f'={battle_id}'  # base URL stripped here as well
                            async with session.get(detail_url_2, headers=headers) as response_2:
                                battle = json.loads(await response_2.text())
                                camp1 = battle['data']['camp1']
                                camp2 = battle['data']['camp2']
                                # map teamOne's camp to "blue" and the opponent's to "red"
                                if camp1['team_name'] == teamOne_name:
                                    teama_data, teamb_data = camp1, camp2
                                else:
                                    teama_data, teamb_data = camp2, camp1
                                win = 1 if teama_data['is_win'] else 0
                                print(f'Game {i}:', teamOne_name + ':' + teamTwo_name, teama_data['is_win'], win)
                                kpl_data = {
                                    'date': date,
                                    'blue_is_win': win,
                                    'blue_assist_num': teama_data['assist_num'],
                                    'blue_death_num': teama_data['death_num'],
                                    'blue_gold': teama_data['gold'],
                                    'blue_kda': teama_data['kda'],
                                    'blue_kill_big_dragon_num': teama_data['kill_big_dragon_num'],
                                    'blue_kill_dark_tyrant_num': teama_data['kill_dark_tyrant_num'],
                                    'blue_kill_num': teama_data['kill_num'],
                                    'blue_kill_prophet_dragon_num': teama_data['kill_prophet_dragon_num'],
                                    'blue_kill_shadow_dragon_num': teama_data['kill_shadow_dragon_num'],
                                    'blue_kill_storm_dragon_king_num': teama_data['kill_storm_dragon_king_num'],
                                    'blue_kill_tyrant_num': teama_data['kill_tyrant_num'],
                                    'blue_push_tower_num': teama_data['push_tower_num'],
                                    'red_assist_num': teamb_data['assist_num'],
                                    'red_death_num': teamb_data['death_num'],
                                    'red_gold': teamb_data['gold'],
                                    'red_kda': teamb_data['kda'],
                                    'red_kill_big_dragon_num': teamb_data['kill_big_dragon_num'],
                                    'red_kill_dark_tyrant_num': teamb_data['kill_dark_tyrant_num'],
                                    'red_kill_num': teamb_data['kill_num'],
                                    'red_kill_prophet_dragon_num': teamb_data['kill_prophet_dragon_num'],
                                    'red_kill_shadow_dragon_num': teamb_data['kill_shadow_dragon_num'],
                                    'red_kill_storm_dragon_king_num': teamb_data['kill_storm_dragon_king_num'],
                                    'red_kill_tyrant_num': teamb_data['kill_tyrant_num'],
                                    'red_push_tower_num': teamb_data['push_tower_num'],
                                }
                                await save_to_excel('数据.csv', kpl_data)
                                i += 1
            else:
                print(f'Error: Unexpected content type: {response.content_type} for URL: {url}')

if __name__ == '__main__':
    # Add season ids to this list one at a time; scraping several seasons
    # concurrently can interleave their rows in the CSV (an artifact of the
    # async requests).
    # Already scraped: 20210021, 202100201, 20210003, 20210004, 20210005,
    #                  20220001, 20220002, 20220003, '20220001&os=1', 20230001, 20230002
    # Still to scrape: 20190001, 20190002, 20190003, 20190004, 20190006,
    #                  20200001, 20200002, 20200003, 20200004, 20200005 (2019/2020 season ids)
    team_url_id_list = [20230003]

    async def scrape_data():
        tasks = []
        for team_id in team_url_id_list:
            tasks.append(asyncio.ensure_future(get_team_data(team_id)))
        await asyncio.gather(*tasks)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(scrape_data())

Code overview: this asynchronous crawler scrapes a full season of match data into a CSV file; swap in another season id and the new rows are appended to the same file. With the required libraries installed it runs as-is; if you hit an error, leave me a comment and I will fix it.
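The note above about adding season ids one at a time points at a real pitfall: asyncio.gather runs all seasons concurrently, so their rows can interleave in the shared CSV. A minimal sketch of a strictly sequential run, reusing get_team_data from above (the season ids here are placeholders for illustration):

import asyncio

season_ids = [20220001, 20220002, 20230003]  # hypothetical ids; substitute real ones

async def scrape_sequentially():
    for season_id in season_ids:
        await get_team_data(season_id)  # finish one season before starting the next

asyncio.run(scrape_sequentially())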
Data layout: (the screenshot of the resulting file did not survive; the snippet below shows how to inspect it)
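A quick way to check the structure of the scraped file yourself, assuming 数据.csv was produced by the scraper above:

import pandas as pd

df = pd.read_csv('数据.csv')
print(df.shape)                          # one row per game
print(df.columns.tolist())               # should match the field list in section 1
print(df['blue_is_win'].value_counts())  # rough class balance of the target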

3. Data Analysis and Processing

Data cleaning and preparation:
   Handle missing values and drop irrelevant columns:
data.dropna(axis=0, how='any', inplace=True)  # drop rows containing missing values
data = data.drop(['date'], axis=1)  # drop the date column
        Inspect the result:
print("前五行数据:", data.head())
print("数据形状:", data.shape)
pd.set_option('display.width', 10) 
print("数据列名:", data.columns)
pd.set_option('display.width', 80)  
print("数据概览:", data.describe())
print("数据概览:", data.info())
        Heatmap analysis:
                The correlation-matrix heatmap reveals pairs of highly correlated variables that explain the same thing; if a column shows the same data as another, it contributes nothing to the classification. For example, red_kill_num (red team kills) and blue_death_num (blue team deaths): red's kill count is, in effect, blue's death count, so the right move is to drop one of the two columns. (Note: in-game the two are not always exactly equal, since players can also die to jungle monsters and towers, but our data comes from high-level play, so the discrepancy is negligible.)
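Before dropping anything, the claim is easy to check directly, assuming data is the cleaned dataframe from above:

# If the two columns carry the same information, this should print roughly 1.0
print(data['red_kill_num'].corr(data['blue_death_num']))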
plt.figure(figsize=(18, 15))
sns.heatmap(round(data.corr(), 1), cmap="coolwarm", annot=True, linewidths=.5)  # correlation values lie in (-1, 1)
plt.savefig('热力图相关性分析.jpg', bbox_inches='tight')
        Correlation of each feature with the target:
plt.figure(figsize=(10, 6))
target = data.filter(['blue_is_win'])
corr_matrix = data.corr()
correlations = corr_matrix['blue_is_win'].abs()
correlations.sort_values(ascending=False, inplace=True)
plt.bar(correlations.index, correlations.values)  # use the sorted index so bars and labels line up
plt.xlabel('Feature')
# Rotate the x-axis labels 90 degrees
plt.xticks(rotation=90)
plt.ylabel('Correlation with target')
plt.title('Feature correlation with target')
plt.savefig('特征相关性.png')  # save before show(), which clears the figure
plt.show()
        Transforming features:

        The heatmap shows that blue kills vs. red deaths, blue deaths vs. red kills, and the two sides' gold totals are correlated at essentially 1, so we can drop one column from each pair to shrink the data. For the gold totals, though, simply dropping one side would lose information; rather than feeding both totals to the model, it is better to use the difference in gold between the two sides as a single feature, which matches what we are trying to predict more directly.

data['gold_economy'] = data['blue_gold']-data['red_gold']
data = data.drop(['blue_gold'],axis=1)
data = data.drop(['red_gold'],axis=1)
Drop strongly correlated columns:
# Helper: scan the correlation matrix for pairs of columns with near-perfect
# correlation and return one column from each pair so it can be dropped
def remove_redundancy(r):
    to_remove = []
    for i in range(len(r.columns)):
        for j in range(i):
            if abs(r.iloc[i, j]) >= 0.98 and (r.columns[j] not in to_remove):
                print("Correlation:", r.iloc[i, j], r.columns[j], r.columns[i])
                to_remove.append(r.columns[i])
    return to_remove
clean_data = data.drop(remove_redundancy(data.corr()), axis=1)  # drop one column from each highly correlated pair
Split features and target:

        

unscaled_inputs = clean_data.filter(['blue_assist_num','blue_death_num','blue_kda','blue_kill_big_dragon_num','blue_kill_dark_tyrant_num','blue_kill_num','blue_kill_prophet_dragon_num','blue_kill_shadow_dragon_num','blue_kill_storm_dragon_king_num','blue_kill_tyrant_num','blue_push_tower_num','red_assist_num','red_kda','red_kill_big_dragon_num','red_kill_dark_tyrant_num','red_kill_prophet_dragon_num','red_kill_shadow_dragon_num','red_kill_storm_dragon_king_num','red_kill_tyrant_num','red_push_tower_num','gold_economy'], axis=1)
target = clean_data.filter(['blue_is_win'])
 Data standardization

        Overview: standardization (normalization) rescales data proportionally so that it falls within a small, specific range. It is widely used when comparing and weighting indicators: it removes the units, turning values into dimensionless pure numbers so that indicators of different units or magnitudes can be compared and combined. The most typical form is normalization, which maps data uniformly onto the [0, 1] interval. Standardizing a dataset is a requirement for many machine-learning estimators: if the individual features do not look roughly like standard normally distributed data (for example, a Gaussian with zero mean and unit variance), the results can suffer badly.

        In short, the procedure is: for each attribute (column by column), subtract the column's mean, then divide by its standard deviation. The result is that, within every column, the data are centered near 0 with a variance of 1.
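As a concrete illustration of that recipe, a minimal sketch using the gold-difference column created above:

# Column-wise z-score by hand: subtract the mean, divide by the standard deviation
z = (data['gold_economy'] - data['gold_economy'].mean()) / data['gold_economy'].std()
print(round(z.mean(), 6), round(z.std(), 6))  # prints roughly 0.0 and 1.0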

# Custom scaler class (standardization)
class CustomScaler(BaseEstimator, TransformerMixin):
    def __init__(self, columns, copy=True, with_mean=True, with_std=True):
        # the underlying scaler is a StandardScaler; pass its options by
        # keyword, since recent scikit-learn versions make them keyword-only
        self.scaler = StandardScaler(copy=copy, with_mean=with_mean, with_std=with_std)
        self.columns = columns
        self.mean_ = None
        self.var_ = None

    # fit method backed by StandardScaler
    def fit(self, X, y=None):
        self.scaler.fit(X[self.columns], y)
        self.mean_ = np.mean(X[self.columns])
        self.var_ = np.var(X[self.columns])
        return self

    # transform method that does the actual scaling
    def transform(self, X, y=None, copy=None):
        # remember the original column order
        init_col_order = X.columns
        # scale the columns selected when the instance was created
        X_scaled = pd.DataFrame(self.scaler.transform(X[self.columns]), columns=self.columns)
        # collect the columns that are not scaled
        X_not_scaled = X.loc[:, ~X.columns.isin(self.columns)]
        # return scaled and unscaled columns together, in the original order
        return pd.concat([X_not_scaled, X_scaled], axis=1)[init_col_order]

# Columns to exclude from scaling: first blood is skipped because it is a
# categorical variable (note: no such column exists in this dataset, so this
# is effectively a no-op kept from an earlier version)
columns_to_omit = ['blueFirstBlood']
# Build the list of columns to scale
columns_to_scale = [x for x in unscaled_inputs.columns.values if x not in columns_to_omit]
blue_scaler = CustomScaler(columns_to_scale)
blue_scaler.fit(unscaled_inputs)
scaled_inputs = blue_scaler.transform(unscaled_inputs)
pd.set_option('display.width', 80)  # max console line width before wrapping
print("Standardized data:", scaled_inputs)
Data split:
x_train, x_test, y_train, y_test = train_test_split(scaled_inputs, target, train_size=0.8, shuffle=False)
print("Training data:", x_train.shape, y_train.shape, "Test data:", x_test.shape, y_test.shape)
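Note that shuffle=False makes this a chronological split: the last 20% of rows (the most recently scraped games) become the test set. A shuffled, reproducible alternative, if temporal order does not matter for your purposes:

x_train, x_test, y_train, y_test = train_test_split(
    scaled_inputs, target, train_size=0.8, shuffle=True, random_state=42)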

4. Modeling and Analysis

        Model analysis and training:
reg = LogisticRegression()
reg.fit(x_train, y_train.values.ravel())  # ravel the single-column target to 1-D to avoid a DataConversionWarning
# Build a summary table of the variables with their coefficients and odds ratios
variables = unscaled_inputs.columns.values
summary_table = pd.DataFrame(columns=['Variables'], data=variables)
summary_table['Coef'] = np.transpose(reg.coef_)
summary_table.index = summary_table.index + 1
summary_table.loc[0] = ['Intercept', reg.intercept_[0]]  # put the intercept in row 0
summary_table['Odds Ratio'] = np.exp(summary_table.Coef)
print("Model variable summary:", summary_table.sort_values(by=['Odds Ratio'], ascending=False))
        Model testing:
# Model testing
print("Training accuracy:", reg.score(x_train, y_train))
print("Test accuracy:", reg.score(x_test, y_test))
# Write the predicted blue-win probabilities for the full dataset back into the original dataframe
predicted_prob = reg.predict_proba(x_test)
data['predicted'] = reg.predict_proba(scaled_inputs)[:, 1].round(8)
# print("Full dataset with predictions:", data)

        Output:

# Compare actual outcomes with the predicted win probabilities
col_n = ['blue_is_win', 'predicted']
a = pd.DataFrame(data, columns=col_n)
print("Actual outcome vs. predicted win probability:", a)
a.to_csv('./预测结果.csv')
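To score a single new game with the fitted pipeline, build a one-row frame with the same columns as unscaled_inputs, reuse blue_scaler, and ask the model for a probability. A minimal sketch, with the first existing row standing in for fresh match stats:

new_game = unscaled_inputs.iloc[[0]].copy()  # stand-in; in practice, fill with live match stats
new_scaled = blue_scaler.transform(new_game)
print("P(blue win):", reg.predict_proba(new_scaled)[0, 1])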

Full code for data modeling and analysis:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.preprocessing import StandardScaler
from sklearn.base import BaseEstimator, TransformerMixin

plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese characters correctly
plt.rcParams['axes.unicode_minus'] = False    # display minus signs correctly
pd.set_option('display.max_columns', None)    # show all columns
pd.set_option('mode.chained_assignment', None)  # silence chained-assignment warnings
pd.set_option('display.width', 100)  # max console line width before wrapping

data = pd.read_csv('./数据.csv')
print("First five rows:", data.head())
print("Shape:", data.shape)
pd.set_option('display.width', 10)  # narrow console width so the column list wraps
print("Columns:", data.columns)
pd.set_option('display.width', 80)  # restore a normal console width
print("Summary statistics:", data.describe())
data.info()  # info() prints directly and returns None, so don't wrap it in print

data.dropna(axis=0, how='any', inplace=True)  # drop rows containing missing values
data = data.drop(['date'], axis=1)  # drop the date column

plt.figure(figsize=(18, 15))
sns.heatmap(round(data.corr(), 1), cmap="coolwarm", annot=True, linewidths=.5)  # correlation values lie in (-1, 1)
plt.savefig('热力图相关性分析.jpg', bbox_inches='tight')
plt.show()

# Correlation of each feature with the target
plt.figure(figsize=(10, 6))
target = data.filter(['blue_is_win'])
corr_matrix = data.corr()
correlations = corr_matrix['blue_is_win'].abs()
correlations.sort_values(ascending=False, inplace=True)
plt.bar(correlations.index, correlations.values)  # use the sorted index so bars and labels line up
plt.xlabel('Feature')
# Rotate the x-axis labels 90 degrees
plt.xticks(rotation=90)
plt.ylabel('Correlation with target')
plt.title('Feature correlation with target')
plt.savefig('特征相关性.png')  # save before show(), which clears the figure
plt.show()

# Replace the two gold totals with their difference
data['gold_economy'] = data['blue_gold'] - data['red_gold']
data = data.drop(['blue_gold'], axis=1)
data = data.drop(['red_gold'], axis=1)

# Helper: scan the correlation matrix for pairs of columns with near-perfect
# correlation and return one column from each pair so it can be dropped
def remove_redundancy(r):
    to_remove = []
    for i in range(len(r.columns)):
        for j in range(i):
            if abs(r.iloc[i, j]) >= 0.98 and (r.columns[j] not in to_remove):
                print("Correlation:", r.iloc[i, j], r.columns[j], r.columns[i])
                to_remove.append(r.columns[i])
    return to_remove

clean_data = data.drop(remove_redundancy(data.corr()), axis=1)  # drop one column from each highly correlated pair
pd.set_option('display.width', 10)  # narrow console width so the column list wraps
print("Cleaned columns:", clean_data.columns)

unscaled_inputs = clean_data.filter(['blue_assist_num','blue_death_num','blue_kda','blue_kill_big_dragon_num','blue_kill_dark_tyrant_num','blue_kill_num','blue_kill_prophet_dragon_num','blue_kill_shadow_dragon_num','blue_kill_storm_dragon_king_num','blue_kill_tyrant_num','blue_push_tower_num','red_assist_num','red_kda','red_kill_big_dragon_num','red_kill_dark_tyrant_num','red_kill_prophet_dragon_num','red_kill_shadow_dragon_num','red_kill_storm_dragon_king_num','red_kill_tyrant_num','red_push_tower_num','gold_economy'], axis=1)
target = clean_data.filter(['blue_is_win'])

# Custom scaler class (standardization)
class CustomScaler(BaseEstimator, TransformerMixin):
    def __init__(self, columns, copy=True, with_mean=True, with_std=True):
        # pass StandardScaler options by keyword; recent scikit-learn versions make them keyword-only
        self.scaler = StandardScaler(copy=copy, with_mean=with_mean, with_std=with_std)
        self.columns = columns
        self.mean_ = None
        self.var_ = None

    # fit method backed by StandardScaler
    def fit(self, X, y=None):
        self.scaler.fit(X[self.columns], y)
        self.mean_ = np.mean(X[self.columns])
        self.var_ = np.var(X[self.columns])
        return self

    # transform method that does the actual scaling
    def transform(self, X, y=None, copy=None):
        init_col_order = X.columns  # remember the original column order
        X_scaled = pd.DataFrame(self.scaler.transform(X[self.columns]), columns=self.columns)
        X_not_scaled = X.loc[:, ~X.columns.isin(self.columns)]
        return pd.concat([X_not_scaled, X_scaled], axis=1)[init_col_order]

# Scale every input column
columns_to_scale = [x for x in unscaled_inputs.columns.values]
blue_scaler = CustomScaler(columns_to_scale)
blue_scaler.fit(unscaled_inputs)
scaled_inputs = blue_scaler.transform(unscaled_inputs)
pd.set_option('display.width', 80)  # max console line width before wrapping
print("Standardized data:", scaled_inputs)

# Train/test split
x_train, x_test, y_train, y_test = train_test_split(scaled_inputs, target, train_size=0.8, shuffle=False)
# x_test, x_verify, y_test, y_verify = train_test_split(x_test, y_test, train_size=0.9, random_state=2)
print("Training data:", x_train.shape, y_train.shape, "Test data:", x_test.shape, y_test.shape)

# Model training
reg = LogisticRegression()
reg.fit(x_train, y_train.values.ravel())  # ravel the single-column target to 1-D
# Build a summary table of the variables with their coefficients and odds ratios
variables = unscaled_inputs.columns.values
summary_table = pd.DataFrame(columns=['Variables'], data=variables)
summary_table['Coef'] = np.transpose(reg.coef_)
summary_table.index = summary_table.index + 1
summary_table.loc[0] = ['Intercept', reg.intercept_[0]]  # put the intercept in row 0
summary_table['Odds Ratio'] = np.exp(summary_table.Coef)
print("Model variable summary:", summary_table.sort_values(by=['Odds Ratio'], ascending=False))

# Model testing
print("Training accuracy:", reg.score(x_train, y_train))
print("Test accuracy:", reg.score(x_test, y_test))
# Write the predicted blue-win probabilities back into the original dataframe
predicted_prob = reg.predict_proba(x_test)
data['predicted'] = reg.predict_proba(scaled_inputs)[:, 1].round(8)
# print("Full dataset with predictions:", data)

# Compare actual outcomes with the predicted win probabilities
col_n = ['blue_is_win', 'predicted']
a = pd.DataFrame(data, columns=col_n)
print("Actual outcome vs. predicted win probability:", a)
a.to_csv('./预测结果.csv')
