I am trying to build a neural network that multiplies 2 numbers. To do this, I am using scikit-learn. I am going for a neural network with 2 hidden layers of sizes (5, 3) and ReLU as my activation function.
I have defined my MLPRegressor as follows:
```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

X = data.drop('Product', axis=1)
y = data['Product']
X_train, X_test, y_train, y_test = train_test_split(X, y)

scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

mlp = MLPRegressor(hidden_layer_sizes=(5, 3), activation="relu",
                   learning_rate="adaptive", max_iter=500000,
                   verbose=True, validation_fraction=0.25)
```
Here, data is the dataframe containing 3 columns: 2 random numbers and 1 Product column. The issue is that the loss I get is on the order of 10^14. How do I reduce this loss and improve my model's performance, and what changes could help in this situation?
Answer:
I am not an expert in NNs. I would log-transform the inputs, feed them into the network, then exponentiate the output. Just a thought.
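The idea behind the log trick is that multiplication becomes addition in log space (log(a·b) = log(a) + log(b)), which a tiny network can fit easily. A minimal sketch of this, assuming both inputs are positive (the log is undefined otherwise) and using synthetic data in place of the asker's dataframe:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the asker's data: two positive random columns.
rng = np.random.default_rng(0)
X = rng.uniform(1, 1000, size=(5000, 2))
y = X[:, 0] * X[:, 1]

# Work in log space: the target log(a*b) = log(a) + log(b) is linear
# in the log-transformed features.
X_log = np.log(X)
y_log = np.log(y)

X_train, X_test, y_train, y_test = train_test_split(
    X_log, y_log, random_state=0)

# Scale features as in the question (default adam solver is used here).
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

mlp = MLPRegressor(hidden_layer_sizes=(5, 3), activation="relu",
                   max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

# Exponentiate predictions to get back to the original product scale.
pred = np.exp(mlp.predict(X_test))
true = np.exp(y_test)
rel_err = np.mean(np.abs(pred - true) / true)
print(rel_err)
```

Because the targets now span roughly 0 to 14 instead of up to 10^6, the squared-error loss stays in a sane range and training is far better conditioned than regressing the raw product directly.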