
The 10th "Teddy Cup" Data Mining Challenge, Problem B


[Overview] The 10th "Teddy Cup" Data Mining Challenge, Problem B: Power System Load Forecasting, implemented with ARIMA, AutoARIMA, LSTM, Prophet, and multivariate Prophet.

Related links

(1) [10th "Teddy Cup" Data Mining Challenge] Problem B: Power System Load Forecasting — Question 1 baseline solution

(2) [10th "Teddy Cup" Data Mining Challenge] Problem B: Power System Load Forecasting — Question 1 implemented with ARIMA, AutoARIMA, LSTM, and Prophet

(3) [10th "Teddy Cup" Data Mining Challenge] Problem B: Power System Load Forecasting — Question 2 change-point (abrupt-change) analysis in Python

(4) [10th "Teddy Cup" Data Mining Challenge] Problem B: Power System Load Forecasting — 31-page provincial first-prize paper and code

Full code download link

https://www.betterbench.top/#/35/detail

1 Load the preprocessed data file

import numpy as np
import pandas as pd

import seaborn as sns 
import matplotlib.pyplot as plt 
from colorama import Fore

from sklearn.metrics import mean_absolute_error, mean_squared_error
import math

import warnings  # Suppress warnings
warnings.filterwarnings('ignore')
plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False  # render minus signs correctly
np.random.seed(7)

df = pd.read_csv(r"./data/泰迪杯數(shù)據(jù)2.csv")
df.head()

 

df  = df.rename(columns={'日期1':'date'})
df

 

2 Inspect the time series

from datetime import datetime, date 

df['date'] = pd.to_datetime(df['date'])
df.head().style.set_properties(subset=['date'], **{'background-color': 'dodgerblue'})

 

# To complete the data, as a naive method, we will use ffill
f, ax = plt.subplots(nrows=7, ncols=1, figsize=(15, 25))

for i, column in enumerate(df.drop('date', axis=1).columns):
  # ... (loop body omitted; a hedged sketch follows)
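  # Hedged sketch of the omitted body: one line plot per column
  sns.lineplot(x=df['date'], y=df[column], ax=ax[i], color='dodgerblue')
  ax[i].set_title(column, fontsize=14)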

 

df = df.sort_values(by='date')

# Check time intervals
df['delta'] = df['date'] - df['date'].shift(1)

df[['date', 'delta']].head()

 

df['delta'].sum(), df['delta'].count()

(Timedelta('13 days 23:45:00'), 1439)

df = df.drop('delta', axis=1)
df.isna().sum()

date 0
總有功功率(kw) 51
最高溫度 6
最低溫度 0
白天風(fēng)力風(fēng)向 0
夜晚風(fēng)力風(fēng)向 0
天氣1 0
天氣2 0
dtype: int64

3 Outliers and missing values

f, ax = plt.subplots(nrows=2, ncols=1, figsize=(15, 15))

# ... (plotting code omitted)


ax[1].set_xlim([date(2018, 1, 1), date(2018, 1, 15)])

 

3.1 Heatmap colors (available cmap values)

Accent, Accent_r, Blues, Blues_r, BrBG, BrBG_r, BuGn, BuGn_r, BuPu, BuPu_r, CMRmap, CMRmap_r, Dark2, Dark2_r, GnBu, GnBu_r,
Greens, Greens_r, Greys, Greys_r, OrRd, OrRd_r, Oranges, Oranges_r, PRGn, PRGn_r, Paired, Paired_r, Pastel1, Pastel1_r, Pastel2, Pastel2_r,
PiYG, PiYG_r, PuBu, PuBuGn, PuBuGn_r, PuBu_r, PuOr, PuOr_r, PuRd, PuRd_r, Purples, Purples_r,
RdBu, RdBu_r, RdGy, RdGy_r, RdPu, RdPu_r, RdYlBu, RdYlBu_r, RdYlGn, RdYlGn_r, Reds, Reds_r,
Set1, Set1_r, Set2, Set2_r, Set3, Set3_r, Spectral, Spectral_r, Wistia, Wistia_r, YlGn, YlGnBu,
YlGnBu_r, YlGn_r, YlOrBr, YlOrBr_r, YlOrRd, YlOrRd_r, afmhot, afmhot_r, autumn, autumn_r, binary,
binary_r, bone, bone_r, brg, brg_r, bwr, bwr_r, cividis, cividis_r, cool, cool_r, coolwarm,
coolwarm_r, copper, copper_r, cubehelix, cubehelix_r, flag, flag_r, gist_earth, gist_earth_r,
gist_gray, gist_gray_r, gist_heat, gist_heat_r, gist_ncar, gist_ncar_r, gist_rainbow,
gist_rainbow_r, gist_stern, gist_stern_r, gist_yarg, gist_yarg_r, gnuplot, gnuplot2,
gnuplot2_r, gnuplot_r, gray, gray_r, hot, hot_r, hsv, hsv_r, icefire, icefire_r, inferno,
inferno_r, jet, jet_r, magma, magma_r, mako, mako_r, nipy_spectral, nipy_spectral_r,
ocean, ocean_r, pink, pink_r, plasma, plasma_r, prism, prism_r, rainbow, rainbow_r,
rocket, rocket_r, seismic, seismic_r, spring, spring_r, summer, summer_r, tab10, tab10_r,
tab20, tab20_r, tab20b, tab20b_r, tab20c, tab20c_r, terrain, terrain_r, twilight, twilight_r,
twilight_shifted, twilight_shifted_r, viridis, viridis_r, vlag, vlag_r, winter, winter_r

f, ax = plt.subplots(nrows=1, ncols=1, figsize=(16,5))

sns.heatmap(df.T.isna(), cmap='Reds_r')
ax.set_title('Missing Values', fontsize=16)

for tick in ax.yaxis.get_major_ticks():
    tick.label.set_fontsize(14)
plt.show()

 

3.2 Handling missing values (several fill strategies)

f, ax = plt.subplots(nrows=4, ncols=1, figsize=(15, 12))

sns.lineplot(x=df['date'], y=df['總有功功率(kw)'].fillna(0), ax=ax[0], color='darkorange', label = 'modified')
sns.lineplot(x=df['date'], y=df['總有功功率(kw)'].fillna(np.inf), ax=ax[0], color='dodgerblue', label = 'original')
ax[0].set_title('Fill NaN with 0', fontsize=14)
ax[0].set_ylabel(ylabel='Volume', fontsize=14)

# ... (remaining panels omitted; a hedged sketch follows)
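# Hedged sketch of the remaining panels; the exact fill strategies used in the original are not shown.
# Plausible candidates, one per remaining axis: forward-fill and linear interpolation.
sns.lineplot(x=df['date'], y=df['總有功功率(kw)'].ffill(), ax=ax[1], color='darkorange', label='modified')
sns.lineplot(x=df['date'], y=df['總有功功率(kw)'].fillna(np.inf), ax=ax[1], color='dodgerblue', label='original')
ax[1].set_title('Forward fill', fontsize=14)
sns.lineplot(x=df['date'], y=df['總有功功率(kw)'].interpolate(), ax=ax[2], color='darkorange', label='modified')
sns.lineplot(x=df['date'], y=df['總有功功率(kw)'].fillna(np.inf), ax=ax[2], color='dodgerblue', label='original')
ax[2].set_title('Linear interpolation', fontsize=14)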

for i in range(4):
    ax[i].set_xlim([date(2018, 1, 1), date(2018, 1, 15)])

plt.tight_layout()
plt.show()

 

df['總有功功率(kw)'] = df['總有功功率(kw)'].interpolate()

4 Smoothing and resampling

Resampling can provide additional information about the data. There are two types of resampling:

Upsampling increases the sampling frequency (e.g., from days to hours).

Downsampling decreases the sampling frequency (e.g., from days to weeks).

In this example we use the .resample() function; a minimal sketch of both directions follows.
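A hedged sketch, assuming the raw data is at roughly 15-minute resolution (as the interval check above suggests):

# Downsampling: aggregate to a coarser frequency, e.g. daily means
daily = df.resample('1D', on='date')['總有功功率(kw)'].mean()
# Upsampling: move to a finer frequency and fill the new slots, e.g. 5-minute with forward-fill
five_min = df.set_index('date')['總有功功率(kw)'].resample('5T').ffill()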

fig, ax = plt.subplots(ncols=1, nrows=3, sharex=True, figsize=(16,12))

sns.lineplot(x=df['date'], y=df['總有功功率(kw)'], color='dodgerblue', ax=ax[0])
ax[0].set_title('總有功功率(kw) Volume', fontsize=14)

# ... (remaining panels omitted)
for i in range(3):
    ax[i].set_xlim([date(2018, 1, 1), date(2018, 1, 14)])

 

# As we can see, downsampling to weekly smooths the data and can help with the analysis
downsample = df[['date',
                 '總有功功率(kw)', 
                ]].resample('7D', on='date').mean().reset_index(drop=False)

# df = downsample.copy()
downsample

 

5 Stationarity tests

Visual inspection: plot the time series and look for trend or seasonality.

Basic statistics: split the time series into partitions and compare the mean and variance of each partition.

Statistical test: the Augmented Dickey-Fuller (ADF) test.

# A year has approximately 52 weeks (52 weeks * 7 days per week)
rolling_window = 52
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(15, 6))

# ... (rolling-statistics plotting code omitted)
plt.show()

 

Now we check each variable: the p-value should be below 0.05, and the ADF statistic should be compared against the critical values.

from statsmodels.tsa.stattools import adfuller

result = adfuller(df['總有功功率(kw)'].values)
result

(-5.279986646245767,
 6.0232754503160645e-06,
 24,
 1415,
 {'1%': -3.434979825137732, '5%': -2.8635847436211317, '10%': -2.5678586114197954},
 29608.16365155926)
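A short, hedged check that applies the rule above to the adfuller output (the tuple is: test statistic, p-value, lags used, number of observations, critical values, information criterion):

adf_stat, p_value = result[0], result[1]
print(f'ADF statistic: {adf_stat:.3f}, p-value: {p_value:.2e}')
print('Reject unit root at 5% -> likely stationary' if p_value < 0.05 else 'Cannot reject unit root -> likely non-stationary')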

# Thanks to https://www.kaggle.com/iamleonie for this function!
f, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 6))

def visualize_adfuller_results(series, title, ax):
    # ... (function body omitted; a hedged sketch follows after this block)

visualize_adfuller_results(df['總有功功率(kw)'].values, '總有功功率(kw)',ax=ax)
# visualize_adfuller_results(df['temperature'].values, 'Temperature', ax[1, 0])
# visualize_adfuller_results(df['river_hydrometry'].values, 'River_Hydrometry', ax[0, 1])
# visualize_adfuller_results(df['drainage_volume'].values, 'Drainage_Volume', ax[1, 1])
# visualize_adfuller_results(df['depth_to_groundwater'].values, 'Depth_to_Groundwater', ax[2, 0])

# f.delaxes(ax[2, 1])
plt.tight_layout()
plt.show()
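The body of visualize_adfuller_results is omitted above. A plausible sketch, assuming it follows the Kaggle notebook it credits (plot the series and annotate it with the ADF statistic and p-value):

def visualize_adfuller_results(series, title, ax):
    result = adfuller(series)
    adf_stat, p_value = result[0], result[1]
    significance = 0.05
    # colour the line by whether the unit-root hypothesis is rejected at the chosen level
    color = 'forestgreen' if p_value < significance else 'crimson'
    sns.lineplot(x=df['date'], y=series, ax=ax, color=color)
    ax.set_title(f'ADF statistic: {adf_stat:0.3f}, p-value: {p_value:0.3f}\n{title}', fontsize=14)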

 

If the data is not stationary but we want to use a model such as ARIMA (which requires stationarity), the data must be transformed.

The two most common ways to make a series stationary are:

- Transformation: e.g., a log or square-root transform, to stabilize a non-constant variance.

- Differencing: subtract the previous value from the current value.

6 Data transformations

(1) Log transform

df['總有功功率(kw)_log'] = np.log(abs(df['總有功功率(kw)']))

# ... (subplot creation and plot of the original distribution omitted)
sns.distplot(df['總有功功率(kw)_log'], ax=ax[1])

 

(2) First-order differencing

# First Order Differencing
ts_diff = np.diff(df['總有功功率(kw)'])
df['總有功功率(kw)_diff_1'] = np.append([0], ts_diff)

f, ax = plt.subplots(nrows=1, ncols=1, figsize=(15, 6))
visualize_adfuller_results(df['總有功功率(kw)_diff_1'], 'Differenced (1st Order) \n 總有功功率(kw)', ax)

 

7 Feature engineering

7.1 Extracting calendar features

df['year'] = pd.DatetimeIndex(df['date']).year
df['month'] = pd.DatetimeIndex(df['date']).month
df['day'] = pd.DatetimeIndex(df['date']).day
# ... (remaining calendar features omitted; a hedged sketch follows)
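# Hedged sketch of the omitted features (names taken from the selection below; the season mapping is hypothetical):
df['day_of_year'] = pd.DatetimeIndex(df['date']).dayofyear
df['week_of_year'] = pd.DatetimeIndex(df['date']).weekofyear  # newer pandas: df['date'].dt.isocalendar().week
df['quarter'] = pd.DatetimeIndex(df['date']).quarter
df['season'] = df['month'] % 12 // 3 + 1  # 1=winter, 2=spring, 3=summer, 4=autumn (assumed encoding)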

df[['date', 'year', 'month', 'day', 'day_of_year', 'week_of_year', 'quarter', 'season']].head()

 

7.2 Encoding cyclical features

f, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 3))

sns.lineplot(x=df['date'], y=df['month'], color='dodgerblue')
ax.set_xlim([date(2018, 1, 1), date(2018, 1, 14)])
plt.show()

 

month_in_year = 12
df['month_sin'] = np.sin(2*np.pi*df['month']/month_in_year)
df['month_cos'] = np.cos(2*np.pi*df['month']/month_in_year)

f, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 6))

sns.scatterplot(x=df.month_sin, y=df.month_cos, color='dodgerblue')
plt.show()

 

7.3 Time-series decomposition

from statsmodels.tsa.seasonal import seasonal_decompose

core_columns = ['總有功功率(kw)']
# ... (code omitted)
fig, ax = plt.subplots(ncols=2, nrows=4, sharex=True, figsize=(16,8))

for i, column in enumerate(['總有功功率(kw)', '最低溫度']):

    res = seasonal_decompose(df[column], freq=52, model='additive', extrapolate_trend='freq')

    ax[0,i].set_title('Decomposition of {}'.format(column), fontsize=16)
    res.observed.plot(ax=ax[0,i], legend=False, color='dodgerblue')
    ax[0,i].set_ylabel('Observed', fontsize=14)
    # ... (trend, seasonal, and residual panels omitted)

plt.show()
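Section 7.4 below shifts columns named '{column}_seasonal', so the omitted part of this block presumably writes the decomposition components back into df. A hedged sketch of that step:

for column in core_columns:
    decomposition = seasonal_decompose(df[column], freq=52, model='additive', extrapolate_trend='freq')
    df[f'{column}_trend'] = decomposition.trend
    df[f'{column}_seasonal'] = decomposition.seasonal
    df[f'{column}_resid'] = decomposition.resid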

 

7.4 Lag features

weeks_in_month = 4

for column in core_columns:
    df[f'{column}_seasonal_shift_b_2m'] = df[f'{column}_seasonal'].shift(-2 * weeks_in_month)
    df[f'{column}_seasonal_shift_b_1m'] = df[f'{column}_seasonal'].shift(-1 * weeks_in_month)
    df[f'{column}_seasonal_shift_1m'] = df[f'{column}_seasonal'].shift(1 * weeks_in_month)
    df[f'{column}_seasonal_shift_2m'] = df[f'{column}_seasonal'].shift(2 * weeks_in_month)
    df[f'{column}_seasonal_shift_3m'] = df[f'{column}_seasonal'].shift(3 * weeks_in_month)

7.6 Exploratory data analysis

f, ax = plt.subplots(nrows=1, ncols=1, figsize=(15, 6))
f.suptitle('Seasonal Components of Features', fontsize=16)

for i, column in enumerate(core_columns):
    # ... (plotting of each seasonal component omitted)

plt.tight_layout()
plt.show()

 

7.7 Correlation analysis

f, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 8))

# ... (correlation plot code omitted)

plt.tight_layout()
plt.show()

 

7.8 Autocorrelation analysis

from pandas.plotting import autocorrelation_plot

autocorrelation_plot(df['總有功功率(kw)_diff_1'])
plt.show()

 

from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf

f, ax = plt.subplots(nrows=2, ncols=1, figsize=(16, 8))
# ... (ACF and PACF plotting omitted; a hedged sketch follows)
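# Hedged sketch of the omitted calls: ACF and PACF of the differenced series
plot_acf(df['總有功功率(kw)_diff_1'], lags=100, ax=ax[0])
plot_pacf(df['總有功功率(kw)_diff_1'], lags=100, ax=ax[1])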

plt.show()

 

8 Modeling

8.1 Time-series cross-validation

from sklearn.model_selection import TimeSeriesSplit

N_SPLITS = 3

X = df['date']
y = df['總有功功率(kw)']

folds = TimeSeriesSplit(n_splits=N_SPLITS)
f, ax = plt.subplots(nrows=N_SPLITS, ncols=2, figsize=(16, 9))

for i, (train_index, valid_index) in enumerate(folds.split(X)):
    # ... (fold plotting omitted; a hedged sketch follows)
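    # Hedged sketch of the omitted body: visualise each fold's train/validation split
    sns.lineplot(x=X.iloc[train_index], y=y.iloc[train_index], ax=ax[i, 0], color='dodgerblue', label='train')
    sns.lineplot(x=X.iloc[valid_index], y=y.iloc[valid_index], ax=ax[i, 0], color='darkorange', label='validation')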
for i in range(N_SPLITS):
    ax[i, 0].set_xlim([date(2018, 1, 1), date(2018, 1, 14)])
    ax[i, 1].set_xlim([date(2018, 1, 1), date(2018, 6, 30)])

plt.tight_layout()
plt.show()

 

8.2 Univariate time-series models

train_size = int(0.85 * len(df))
test_size = len(df) - train_size
df = df.fillna(0)
univariate_df = df[['date', '總有功功率(kw)']].copy()
univariate_df.columns = ['ds', 'y']

train = univariate_df.iloc[:train_size, :]

x_train, y_train = pd.DataFrame(univariate_df.iloc[:train_size, 0]), pd.DataFrame(univariate_df.iloc[:train_size, 1])
x_valid, y_valid = pd.DataFrame(univariate_df.iloc[train_size:, 0]), pd.DataFrame(univariate_df.iloc[train_size:, 1])

print(len(train), len(x_valid))

8.2.1 ARIMA

from statsmodels.tsa.arima_model import ARIMA
import warnings
warnings.filterwarnings('ignore')

# ... (model specification and fitting omitted; a hedged sketch follows)
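# Hedged sketch of the omitted fit; the (p, d, q) order used in the original is not shown
model = ARIMA(y_train, order=(5, 1, 2))  # hypothetical order
model_fit = model.fit(disp=0)
print(model_fit.summary())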

# Prediction with ARIMA
# y_pred, se, conf = model_fit.forecast(202)
y_pred, se, conf = model_fit.forecast(216)

# Calculate metrics
score_mae = mean_absolute_error(y_valid, y_pred)
score_rmse = math.sqrt(mean_squared_error(y_valid, y_pred))

print(Fore.GREEN + 'RMSE: {}'.format(score_rmse))

RMSE: 30973.353510293528

f, ax = plt.subplots(1)
f.set_figheight(6)
f.set_figwidth(15)

model_fit.plot_predict(1, 1300, ax=ax)
sns.lineplot(x=x_valid.index, y=y_valid['y'], ax=ax, color='orange', label='Ground truth') #navajowhite

ax.set_title(f'Prediction \n MAE: {score_mae:.2f}, RMSE: {score_rmse:.2f}', fontsize=14)
ax.set_xlabel(xlabel='Date', fontsize=14)
ax.set_ylabel(ylabel='總有功功率(kw)', fontsize=14)

ax.set_ylim(100000, 350392)
plt.show()

 

f, ax = plt.subplots(1)
f.set_figheight(4)
f.set_figwidth(15)

sns.lineplot(x=x_valid.index, y=y_pred, ax=ax, color='blue', label='predicted') #navajowhite
sns.lineplot(x=x_valid.index, y=y_valid['y'], ax=ax, color='orange', label='Ground truth') #navajowhite

ax.set_xlabel(xlabel='Date', fontsize=14)
ax.set_ylabel(ylabel='總有功功率(kw)', fontsize=14)

plt.show()

 

8.2.2 LSTM

from sklearn.preprocessing import MinMaxScaler

data = univariate_df.filter(['y'])
#Convert the dataframe to a numpy array
dataset = data.values

scaler = MinMaxScaler(feature_range=(-1, 0))
scaled_data = scaler.fit_transform(dataset)

scaled_data[:10]

array([[-0.50891613], [-0.50891613], [-0.59567808], [-0.59567808], [-0.60361527], [-1. ], [-0.63509216], [-0.63509216], [-0.58983584], [-0.58983584]])

# Defines the rolling window
look_back = 52
# Split into train and test sets
train, test = scaled_data[:train_size-look_back,:], scaled_data[train_size-look_back:,:]

# ... (create_dataset definition omitted; a hedged sketch follows)
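# Hedged sketch of the omitted helper: a standard sliding-window dataset builder
def create_dataset(dataset, look_back=1):
    data_x, data_y = [], []
    for i in range(len(dataset) - look_back):
        data_x.append(dataset[i:(i + look_back), 0])  # window of past values
        data_y.append(dataset[i + look_back, 0])      # value to predict
    return np.array(data_x), np.array(data_y)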
x_train, y_train = create_dataset(train, look_back)
x_test, y_test = create_dataset(test, look_back)

# reshape input to be [samples, time steps, features]
x_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1]))
x_test = np.reshape(x_test, (x_test.shape[0], 1, x_test.shape[1]))

print(len(x_train), len(x_test))
from keras.models import Sequential
from keras.layers import Dense, LSTM

#Build the LSTM model
model = Sequential()
model.add(LSTM(128, return_sequences=True, input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(LSTM(64, return_sequences=False))
model.add(Dense(25))
model.add(Dense(1))

# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

#Train the model
model.fit(x_train, y_train, batch_size=16, epochs=10, validation_data=(x_test, y_test))

model.summary()

Epoch 1/10  70/70 - 15s 10ms/step - loss: 0.0417 - val_loss: 0.0071
Epoch 2/10  70/70 - 0s 3ms/step - loss: 0.0104 - val_loss: 0.0036
Epoch 3/10  70/70 - 0s 6ms/step - loss: 0.0081 - val_loss: 0.0023
Epoch 4/10  70/70 - 0s 4ms/step - loss: 0.0064 - val_loss: 0.0017
Epoch 5/10  70/70 - 0s 4ms/step - loss: 0.0059 - val_loss: 0.0017
Epoch 6/10  70/70 - 0s 3ms/step - loss: 0.0053 - val_loss: 0.0019
Epoch 7/10  70/70 - 0s 3ms/step - loss: 0.0065 - val_loss: 0.0019
Epoch 8/10  70/70 - 0s 3ms/step - loss: 0.0051 - val_loss: 0.0013
Epoch 9/10  70/70 - 0s 3ms/step - loss: 0.0048 - val_loss: 0.0023
Epoch 10/10 70/70 - 0s 4ms/step - loss: 0.0052 - val_loss: 0.0012

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm (LSTM)                  (None, 1, 128)            92672
lstm_1 (LSTM)                (None, 64)                49408
dense (Dense)                (None, 25)                1625
dense_1 (Dense)              (None, 1)                 26
=================================================================
Total params: 143,731
Trainable params: 143,731
Non-trainable params: 0
_________________________________________________________________

# Let's predict with the model
train_predict = model.predict(x_train)
test_predict = model.predict(x_test)

# invert predictions
train_predict = scaler.inverse_transform(train_predict)
y_train = scaler.inverse_transform([y_train])

test_predict = scaler.inverse_transform(test_predict)
y_test = scaler.inverse_transform([y_test])

# Get the root mean squared error (RMSE) and MAE
score_rmse = np.sqrt(mean_squared_error(y_test[0], test_predict[:,0]))
score_mae = mean_absolute_error(y_test[0], test_predict[:,0])
print(Fore.GREEN + 'RMSE: {}'.format(score_rmse))
from sklearn.metrics import r2_score
print('R2-score:',r2_score(y_test[0], test_predict[:,0]))

RMSE: 4502.091881948914

R2-score: 0.9519027039841994

x_train_ticks = univariate_df.head(train_size)['ds']
y_train = univariate_df.head(train_size)['y']
x_test_ticks = univariate_df.tail(test_size)['ds']

# Plot the forecast
f, ax = plt.subplots(1)
f.set_figheight(6)
f.set_figwidth(15)

sns.lineplot(x=x_train_ticks, y=y_train, ax=ax, label='Train Set') #navajowhite
sns.lineplot(x=x_test_ticks, y=test_predict[:,0], ax=ax, color='green', label='Prediction') #navajowhite
sns.lineplot(x=x_test_ticks, y=y_test[0], ax=ax, color='orange', label='Ground truth') #navajowhite

ax.set_title(f'Prediction \n MAE: {score_mae:.2f}, RMSE: {score_rmse:.2f}', fontsize=14)
ax.set_xlabel(xlabel='Date', fontsize=14)
ax.set_ylabel(ylabel='總有功功率(kw)', fontsize=14)

plt.show()

 

8.2.3 AutoARIMA

from statsmodels.tsa.arima_model import ARIMA
import pmdarima as pm
# ... (auto_arima search omitted; a hedged sketch follows)
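# Hedged sketch of the omitted search; these parameters are illustrative, not the original ones
model = pm.auto_arima(y_train, start_p=0, start_q=0, max_p=5, max_q=5,
                      d=None, seasonal=False, stepwise=True,
                      suppress_warnings=True, error_action='ignore', trace=True)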
print(model.summary())


y_pred = model.predict(216)
from sklearn.metrics import r2_score
print('R2-score:',r2_score(y_valid, y_pred))

R2-score: -0.08425358340633804

model.plot_diagnostics(figsize=(16,8))
plt.show()

 

8.3 Multivariate time-series forecasting

df.columns

Index(['date', '總有功功率(kw)', '最高溫度', '最低溫度', '白天風(fēng)力風(fēng)向', '夜晚風(fēng)力風(fēng)向', '天氣1', '天氣2'], dtype='object')

feature_columns = [
     '最高溫度', '最低溫度', '白天風(fēng)力風(fēng)向', '夜晚風(fēng)力風(fēng)向', '天氣1', '天氣2'
]
target_column = ['總有功功率(kw)']

train_size = int(0.85 * len(df))

multivariate_df = df[['date'] + target_column + feature_columns].copy()
multivariate_df.columns = ['ds', 'y'] + feature_columns

train = multivariate_df.iloc[:train_size, :]
x_train, y_train = pd.DataFrame(multivariate_df.iloc[:train_size, [0,2,3,4,5,6,7]]), pd.DataFrame(multivariate_df.iloc[:train_size, 1])
x_valid, y_valid = pd.DataFrame(multivariate_df.iloc[train_size:, [0,2,3,4,5,6,7]]), pd.DataFrame(multivariate_df.iloc[train_size:, 1])

train.head()

 

train = multivariate_df.iloc[:train_size, :]
train

 

8.3.1 Multivariate Prophet

from fbprophet import Prophet

# Train the model
model = Prophet()
# model.add_regressor('最高溫度')
# model.add_regressor('最低溫度')
# model.add_regressor('白天風(fēng)力風(fēng)向')
# model.add_regressor('夜晚風(fēng)力風(fēng)向')
# model.add_regressor('天氣1')
# model.add_regressor('天氣2')
# Fit the model with train set
model.fit(train)

# Predict on valid set
y_pred = model.predict(x_valid)

# Calculate metrics
score_mae = mean_absolute_error(y_valid, y_pred['yhat'])
score_rmse = math.sqrt(mean_squared_error(y_valid, y_pred['yhat']))

print(Fore.GREEN + 'RMSE: {}'.format(score_rmse))
from sklearn.metrics import r2_score
print('R2-score:',r2_score(y_valid, y_pred['yhat']))
# Plot the forecast
f, ax = plt.subplots(1)
f.set_figheight(6)
f.set_figwidth(15)

model.plot(y_pred, ax=ax)
sns.lineplot(x=x_valid['ds'], y=y_valid['y'], ax=ax, color='orange', label='Ground truth') #navajowhite

ax.set_title(f'Prediction \n MAE: {score_mae:.2f}, RMSE: {score_rmse:.2f}', fontsize=14)
ax.set_xlabel(xlabel='Date', fontsize=14)
ax.set_ylabel(ylabel='總有功功率(kw)', fontsize=14)

plt.show()
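With the add_regressor calls commented out, the model above is effectively univariate. A hedged sketch of the genuinely multivariate variant, reusing feature_columns from above (the regressor columns must also be present in the frame passed to predict, as they are in x_valid):

model = Prophet()
for col in feature_columns:
    model.add_regressor(col)      # register each weather feature as an extra regressor
model.fit(train)                  # train holds ds, y and the regressor columns
y_pred = model.predict(x_valid)   # x_valid carries ds plus the regressor columns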

 



