Neural network to predict the nth square

Problem description

I am trying to use a multi-layer neural network to predict the nth square.

I have the following training data, containing the first 99 squares:

1 1 2 4 3 9 4 16 5 25 ... 98 9604 99 9801
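
For reference, here is a minimal sketch of one way the training file data_sq.txt (the file loaded by the script below) could be generated; the two-column layout, n and n² per row, is inferred from the data listing above and the column slicing in the script:

import numpy as np

# One row per sample: first column n, second column n^2, for n = 1..99.
n = np.arange(1, 100)
np.savetxt('data_sq.txt', np.column_stack((n, n ** 2)), fmt='%d')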

Here is the code:

import numpy as np
import neurolab as nl

# Load input data
text = np.loadtxt('data_sq.txt')

# Separate it into datapoints and labels
data = text[:, :1]
labels = text[:, 1:]

# Define a multilayer neural network with 2 hidden layers;
# First hidden layer consists of 10 neurons
# Second hidden layer consists of 6 neurons
# Output layer consists of 1 neuron
nn = nl.newff([[0, 99]], [10, 6, 1])

# Train the neural network
error_progress = nn.train(data, labels, epochs=2000, show=10, goal=0.01)

# Run the classifier on test datapoints
print('\nTest results:')
data_test = [[100], [101]]
for item in data_test:
    print(item, '-->', nn.sim([item])[0])

This prints 1 for both the 100th and 101st squares:

Test results:
[100] --> [ 1.]
[101] --> [ 1.]

What is the right way to do this?

Recommended answer

Following Filip Malczak's and Seanny123's suggestions and comments, I implemented a neural network in TensorFlow to check what happens when we try to teach it to predict (and interpolate) the square function x^2.

Training on a continuous interval

I trained the network on the interval [-7,7] (taking 300 points inside this interval, to make it effectively continuous), and then tested it on the interval [-30,30]. The activation function is ReLU, and the network has 3 hidden layers, each of size 50, trained for 500 epochs. The result is depicted in the figure below.

So basically, inside (and also close to) the interval [-7,7] the fit is almost perfect, and outside it the output continues more or less linearly. It is nice to see that, at least initially, the slope of the network's output tries to "match" the slope of x^2. If we increase the test interval, the two graphs diverge quite a lot, as one can see in the figure below:

Training on even integers

Finally, if instead I train the network on the set of all even integers in the interval [-100,100], and apply it to the set of all integers (even and odd) in this interval, I get:

[Figure: i.stack.imgur/hyqcS.png]

When training the network to produce the image above, I increased the number of epochs to 2500 to get better accuracy; the rest of the parameters stayed unchanged. So it seems that interpolating "inside" the training interval works quite well (except perhaps for the area around 0, where the fit is a bit worse).

Here is the code that I used for the first figure:

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.python.framework.ops import reset_default_graph

# preparing training data
train_x = np.linspace(-7, 7, 300).reshape(-1, 1)
train_y = train_x ** 2

# setting network features
dimensions = [50, 50, 50, 1]
epochs = 500
batch_size = 5

reset_default_graph()
X = tf.placeholder(tf.float32, shape=[None, 1])
Y = tf.placeholder(tf.float32, shape=[None, 1])

weights = []
biases = []
n_inputs = 1

# initializing variables
for i, n_outputs in enumerate(dimensions):
    with tf.variable_scope("layer_{}".format(i)):
        w = tf.get_variable(name="W", shape=[n_inputs, n_outputs],
                            initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02, seed=42))
        b = tf.get_variable(name="b", initializer=tf.zeros_initializer(shape=[n_outputs]))
        weights.append(w)
        biases.append(b)
        n_inputs = n_outputs

def forward_pass(X, weights, biases):
    h = X
    for i in range(len(weights)):
        h = tf.add(tf.matmul(h, weights[i]), biases[i])
        h = tf.nn.relu(h)
    return h

output_layer = forward_pass(X, weights, biases)
cost = tf.reduce_mean(tf.squared_difference(output_layer, Y), 1)
cost = tf.reduce_sum(cost)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # train the network
    for i in range(epochs):
        idx = np.arange(len(train_x))
        np.random.shuffle(idx)
        for j in range(len(train_x) // batch_size):
            cur_idx = idx[batch_size * j:batch_size * (j + 1)]
            sess.run(optimizer, feed_dict={X: train_x[cur_idx], Y: train_y[cur_idx]})
        # current_cost = sess.run(cost, feed_dict={X: train_x, Y: train_y})
        # print(current_cost)

    # apply the network on the test data
    test_x = np.linspace(-30, 30, 300)
    network_output = sess.run(output_layer, feed_dict={X: test_x.reshape(-1, 1)})

plt.plot(test_x, test_x ** 2, color='r', label='y=x^2')
plt.plot(test_x, network_output, color='b', label='network output')
plt.legend(loc='center')
plt.show()
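
The other two figures are only described in words above; a minimal sketch of the corresponding modifications to the script (variable names follow the code above, and the exact point counts and dtype are assumptions, not the answer's original settings):

# Second figure: same training data, wider test interval.
test_x = np.linspace(-100, 100, 300)

# Third figure: train on all even integers in [-100, 100], test on all
# integers (even and odd) in that interval, and train for 2500 epochs.
train_x = np.arange(-100, 101, 2, dtype=np.float32).reshape(-1, 1)
train_y = train_x ** 2
test_x = np.arange(-100, 101, 1, dtype=np.float32)
epochs = 2500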
