Deep Learning with TensorFlow: Basic Data Types and Regression in Practice

The Coding Advanced Practice series (《Coding技术进阶实战》) from the Qin Kaixin technical community is about to launch. It covers deep language usage and techniques across Python, Java, Scala, TensorFlow, and other mainstream big-data and deep-learning fundamentals. Stay tuned. Why am I writing such a series? It started when a container-cloud expert asked me how to implement a thread pool, and I suddenly realized that the Java concurrency-control theory and multithreaded design patterns I had once studied were long forgotten. Annoyed with myself, I decided to put my programming skills back on display.

Copyright notice: this technical column is a summary and distillation of the author's (Qin Kaixin) day-to-day work. It draws its cases from real commercial environments and shares tuning advice and cluster capacity-planning guidance for commercial applications. Please keep following this blog. QQ email: 1120746959@qq; feel free to get in touch for any technical exchange.

1 Basic TensorFlow Operations

  • A basic TensorFlow model

    import tensorflow as tf

    a = 3
    # Create a variable.
    w = tf.Variable([[0.5, 1.0]])
    x = tf.Variable([[2.0], [1.0]])
    y = tf.matmul(w, x)

    # Variables have to be explicitly initialized before you can run ops.
    init_op = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init_op)
        print(y.eval())
  • Basic TensorFlow data types

    # tf.zeros defaults to float32; pass a dtype for other types.
    tf.zeros([3, 4], tf.int32) ==> [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]

    # 'tensor' is [[1, 2, 3], [4, 5, 6]]
    tf.zeros_like(tensor) ==> [[0, 0, 0], [0, 0, 0]]

    tf.ones([2, 3], tf.int32) ==> [[1, 1, 1], [1, 1, 1]]

    # 'tensor' is [[1, 2, 3], [4, 5, 6]]
    tf.ones_like(tensor) ==> [[1, 1, 1], [1, 1, 1]]

    # Constant 1-D Tensor populated with value list.
    tensor = tf.constant([1, 2, 3, 4, 5, 6, 7]) => [1 2 3 4 5 6 7]

    # Constant 2-D tensor populated with scalar value -1.
    tensor = tf.constant(-1.0, shape=[2, 3]) => [[-1. -1. -1.]
                                                 [-1. -1. -1.]]

    tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0  11.0  12.0]

    # 'start' is 3, 'limit' is 18, 'delta' is 3
    tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
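    The `==>` annotations above are doc-style notes, not executable statements. A minimal runnable check for a few of these ops, a sketch assuming TF 1.x:

    import tensorflow as tf

    zeros = tf.zeros([3, 4], tf.int32)
    seq   = tf.range(3, 18, 3)          # [3, 6, 9, 12, 15]
    lin   = tf.linspace(10.0, 12.0, 3)  # [10.0, 11.0, 12.0]

    with tf.Session() as sess:
        print(sess.run(zeros))
        print(sess.run(seq))
        print(sess.run(lin))
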
  • The random_shuffle and random_normal ops

    norm = tf.random_normal([2, 3], mean=-1, stddev=4)

    # Shuffle the first dimension of a tensor.
    c = tf.constant([[1, 2], [3, 4], [5, 6]])
    shuff = tf.random_shuffle(c)

    # Each time we run these ops, different results are generated.
    sess = tf.Session()
    print(sess.run(norm))
    print(sess.run(shuff))

    # Sample output:
    [[-0.30886292  3.11809683  3.29861784]
     [-7.09597015 -1.89811802  1.75282788]]
    [[3 4]
     [5 6]
     [1 2]]
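    As noted, every run draws fresh values. If reproducible draws are wanted, TF 1.x supports graph-level and op-level seeds; a minimal sketch, not part of the original example:

    import tensorflow as tf

    # Graph-level seed: random ops in this graph become repeatable
    # across fresh runs of the script.
    tf.set_random_seed(42)

    # Op-level seed: pins down this particular op.
    norm = tf.random_normal([2, 3], mean=-1, stddev=4, seed=7)

    with tf.Session() as sess:
        print(sess.run(norm))
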
  • The complexity behind simple operations

    state = tf.Variable(0)
    new_value = tf.add(state, tf.constant(1))
    update = tf.assign(state, new_value)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(state))
        for _ in range(3):
            sess.run(update)
            print(sess.run(state))
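    The add/assign pair can also be fused. A sketch of the same counter using `tf.assign_add`, an equivalent TF 1.x op not used in the original:

    import tensorflow as tf

    state = tf.Variable(0)
    update = tf.assign_add(state, 1)  # add and assign in one op

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(state))       # 0
        for _ in range(3):
            print(sess.run(update))  # 1, 2, 3
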
  • Saving and loading a model

    # tf.train.Saver
    w = tf.Variable([[0.5, 1.0]])
    x = tf.Variable([[2.0], [1.0]])
    y = tf.matmul(w, x)

    init_op = tf.global_variables_initializer()
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(init_op)
        # Do some work with the model.
        # Save the variables to disk.
        save_path = saver.save(sess, "C://tensorflow//model//test")
        print("Model saved in file: ", save_path)
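    Only the save side is shown above. A minimal restore sketch, assuming the same graph definition and the checkpoint path used above:

    import tensorflow as tf

    # Rebuild the identical graph, then restore instead of initializing.
    w = tf.Variable([[0.5, 1.0]])
    x = tf.Variable([[2.0], [1.0]])
    y = tf.matmul(w, x)

    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, "C://tensorflow//model//test")  # no init_op needed
        print(sess.run(y))
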
  • Converting between NumPy and TensorFlow

    import numpy as np

    a = np.zeros((3, 3))
    ta = tf.convert_to_tensor(a)
    with tf.Session() as sess:
        print(sess.run(ta))
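    The reverse direction needs no special call: in TF 1.x, `sess.run` (and `Tensor.eval`) already return NumPy arrays. A quick sketch:

    import tensorflow as tf

    t = tf.ones([2, 2])
    with tf.Session() as sess:
        arr = sess.run(t)            # a numpy.ndarray
        print(type(arr), arr.dtype)  # <class 'numpy.ndarray'> float32
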
  • TensorFlow placeholders

    input1 = tf.placeholder(tf.float32)
    input2 = tf.placeholder(tf.float32)
    output = tf.multiply(input1, input2)  # tf.mul was renamed tf.multiply in TF 1.0

    with tf.Session() as sess:
        print(sess.run([output], feed_dict={input1: [7.], input2: [2.]}))
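    Placeholders can also declare a partially known shape, which is how mini-batches are fed in the MNIST sections below. A small sketch; the 784 feature size is just an example:

    import numpy as np
    import tensorflow as tf

    # None leaves the batch dimension open; 784 fixes the feature size.
    x = tf.placeholder(tf.float32, [None, 784])
    mean = tf.reduce_mean(x)

    with tf.Session() as sess:
        batch = np.random.rand(32, 784).astype(np.float32)
        print(sess.run(mean, feed_dict={x: batch}))
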

2 Implementing Linear Regression in TensorFlow

  • Generating a linear dataset with NumPy

    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt

    # Randomly generate 1000 points scattered around the line y = 0.1x + 0.3.
    num_points = 1000
    vectors_set = []
    for i in range(num_points):
        x1 = np.random.normal(0.0, 0.55)
        y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)
        vectors_set.append([x1, y1])

    # Split out the samples.
    x_data = [v[0] for v in vectors_set]
    y_data = [v[1] for v in vectors_set]
    plt.scatter(x_data, y_data, c='r')
    plt.show()

  • Implementing the linear model in TensorFlow

    # Generate a 1-D W, initialized uniformly in [-1, 1].
    W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')
    # Generate a 1-D b, initialized to 0.
    b = tf.Variable(tf.zeros([1]), name='b')
    # Compute the predicted value y.
    y = W * x_data + b

    # Loss: mean squared error between the prediction y and the labels y_data.
    loss = tf.reduce_mean(tf.square(y - y_data), name='loss')
    # Optimizer: gradient descent (the argument is the learning rate).
    optimizer = tf.train.GradientDescentOptimizer(0.5)
    # Training op: training is simply the repeated minimization of this loss.
    train = optimizer.minimize(loss, name='train')

    sess = tf.Session()
    init = tf.global_variables_initializer()
    sess.run(init)

    # Print the initial W and b.
    print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))
    # Run 20 training steps.
    for step in range(20):
        sess.run(train)
        # Print the current W and b.
        print("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))

    # tf.train.SummaryWriter was renamed tf.summary.FileWriter in TF 1.0.
    writer = tf.summary.FileWriter("./tmp", sess.graph)
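    The final `FileWriter` call writes the graph definition to `./tmp`; running `tensorboard --logdir=./tmp` then serves a visualization of the computation graph.
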
  • Training iteration results

    W = [ 0.96539688] b = [ 0.] loss = 0.297884
    W = [ 0.71998411] b = [ 0.28193575] loss = 0.112606
    W = [ 0.54009342] b = [ 0.28695393] loss = 0.0572231
    W = [ 0.41235447] b = [ 0.29063231] loss = 0.0292957
    W = [ 0.32164571] b = [ 0.2932443] loss = 0.0152131
    W = [ 0.25723246] b = [ 0.29509908] loss = 0.00811188
    W = [ 0.21149193] b = [ 0.29641619] loss = 0.00453103
    W = [ 0.17901111] b = [ 0.29735151] loss = 0.00272536
    W = [ 0.15594614] b = [ 0.29801565] loss = 0.00181483
    W = [ 0.13956745] b = [ 0.29848731] loss = 0.0013557
    W = [ 0.12793678] b = [ 0.29882219] loss = 0.00112418
    W = [ 0.11967772] b = [ 0.29906002] loss = 0.00100743
    W = [ 0.11381286] b = [ 0.29922891] loss = 0.000948558
    W = [ 0.10964818] b = [ 0.29934883] loss = 0.000918872
    W = [ 0.10669079] b = [ 0.29943398] loss = 0.000903903
    W = [ 0.10459071] b = [ 0.29949448] loss = 0.000896354
    W = [ 0.10309943] b = [ 0.29953739] loss = 0.000892548
    W = [ 0.10204045] b = [ 0.29956791] loss = 0.000890629
    W = [ 0.10128847] b = [ 0.29958954] loss = 0.000889661
    W = [ 0.10075447] b = [ 0.29960492] loss = 0.000889173
    W = [ 0.10037527] b = [ 0.29961586] loss = 0.000888927

    plt.scatter(x_data, y_data, c='r')
    plt.plot(x_data, sess.run(W) * x_data + sess.run(b))
    plt.show()
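    After 20 steps, W has settled near 0.10 and b near 0.30, recovering the y = 0.1x + 0.3 line the data was sampled around; the final scatter-plus-line plot shows the fit.
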
3 Loading the MNIST Dataset

  • Loading

    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt
    # from tensorflow.examples.tutorials.mnist import input_data
    import input_data
    print("packs loaded")

    print("Download and Extract MNIST dataset")
    # one_hot=True encodes each label as a 0/1 vector.
    mnist = input_data.read_data_sets('data/', one_hot=True)
    print()
    print(" type of 'mnist' is %s" % (type(mnist)))
    print(" number of train data is %d" % (mnist.train.num_examples))
    print(" number of test data is %d" % (mnist.test.num_examples))

    # Sample output:
    Download and Extract MNIST dataset
    Extracting data/train-images-idx3-ubyte.gz
    Extracting data/train-labels-idx1-ubyte.gz
    Extracting data/t10k-images-idx3-ubyte.gz
    Extracting data/t10k-labels-idx1-ubyte.gz
    type of 'mnist' is <class 'tensorflow.contrib.learn.python.learn.datasets.base.Datasets'>
    number of train data is 55000
    number of test data is 10000
  • What does the data of MNIST look like?

      print ("What does the data of MNIST look like?")trainimg   = mnist.train.imagestrainlabel = mnist.train.labelstestimg    = mnist.test.imagestestlabel  = mnist.test.labelsprintprint (" type of 'trainimg' is %s"    % (type(trainimg)))print (" type of 'trainlabel' is %s"  % (type(trainlabel)))print (" type of 'testimg' is %s"     % (type(testimg)))print (" type of 'testlabel' is %s"   % (type(testlabel)))print (" shape of 'trainimg' is %s"   % (trainimg.shape,))print (" shape of 'trainlabel' is %s" % (trainlabel.shape,))print (" shape of 'testimg' is %s"    % (testimg.shape,))print (" shape of 'testlabel' is %s"  % (testlabel.shape,))What does the data of MNIST look like?type of 'trainimg' is <class 'numpy.ndarray'>type of 'trainlabel' is <class 'numpy.ndarray'>type of 'testimg' is <class 'numpy.ndarray'>type of 'testlabel' is <class 'numpy.ndarray'>shape of 'trainimg' is (55000, 784)shape of 'trainlabel' is (55000, 10)shape of 'testimg' is (10000, 784)shape of 'testlabel' is (10000, 10)
    复制代码
  • What does the training data look like?

    print("What does the training data look like?")
    nsample = 5
    randidx = np.random.randint(trainimg.shape[0], size=nsample)

    for i in randidx:
        curr_img   = np.reshape(trainimg[i, :], (28, 28))  # 28-by-28 matrix
        curr_label = np.argmax(trainlabel[i, :])           # index of the 1 in the one-hot label
        plt.matshow(curr_img, cmap=plt.get_cmap('gray'))
        plt.title("" + str(i) + "th Training Data " + "Label is " + str(curr_label))
        print("" + str(i) + "th Training Data " + "Label is " + str(curr_label))
        plt.show()

  • Batch Learning?

     print ("Batch Learning? ")batch_size = 100batch_xs, batch_ys = mnist.train.next_batch(batch_size)print ("type of 'batch_xs' is %s" % (type(batch_xs)))print ("type of 'batch_ys' is %s" % (type(batch_ys)))print ("shape of 'batch_xs' is %s" % (batch_xs.shape,))print ("shape of 'batch_ys' is %s" % (batch_ys.shape,))Batch Learning? type of 'batch_xs' is <class 'numpy.ndarray'>type of 'batch_ys' is <class 'numpy.ndarray'>shape of 'batch_xs' is (100, 784)shape of 'batch_ys' is (100, 10)
    复制代码

4 Logistic Regression on MNIST

  • TensorFlow's tf.reduce_mean function

    m1 = tf.reduce_mean(x, axis=0)
    # Result: [1.5, 1.5]
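    The original never defines `x`. A sketch assuming the standard two-by-two example from the TF docs, which reproduces the `[1.5, 1.5]` result and shows the other axes:

    import tensorflow as tf

    x = tf.constant([[1., 1.],
                     [2., 2.]])

    with tf.Session() as sess:
        print(sess.run(tf.reduce_mean(x)))          # 1.5 (mean of all elements)
        print(sess.run(tf.reduce_mean(x, axis=0)))  # [1.5, 1.5] (column means)
        print(sess.run(tf.reduce_mean(x, axis=1)))  # [1.0, 2.0] (row means)
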
  • TensorFlow's argmax

    # An InteractiveSession lets .eval() print results directly.
    sess = tf.InteractiveSession()

    arr = np.array([[31, 23,  4, 24, 27, 34],
                    [18,  3, 25,  0,  6, 35],
                    [28, 14, 33, 22, 20,  8],
                    [13, 30, 21, 19,  7,  9],
                    [16,  1, 26, 32,  2, 29],
                    [17, 12,  5, 11, 10, 15]])

    # Rank of the matrix: 2
    # tf.rank(arr).eval()
    # Rows and columns: [6, 6]
    # tf.shape(arr).eval()

    # Axis 0 walks down the columns: index of the max in each column -> [0, 3, 2, 4, 0, 1]
    # tf.argmax(arr, 0).eval()
    # 0 -> 31 (arr[0, 0])
    # 3 -> 30 (arr[3, 1])
    # 2 -> 33 (arr[2, 2])

    # Axis 1 walks across the rows: index of the max in each row.
    tf.argmax(arr, 1).eval()
    # 5 -> 34 (arr[0, 5])
    # 5 -> 35 (arr[1, 5])
    # 2 -> 33 (arr[2, 2])
    # -> array([5, 5, 2, 1, 3, 0], dtype=int64)
  • Loading the dataset

    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt
    import input_data

    mnist      = input_data.read_data_sets('data/', one_hot=True)
    trainimg   = mnist.train.images
    trainlabel = mnist.train.labels
    testimg    = mnist.test.images
    testlabel  = mnist.test.labels
    print("MNIST loaded")

    # Sample output:
    Extracting data/train-images-idx3-ubyte.gz
    Extracting data/train-labels-idx1-ubyte.gz
    Extracting data/t10k-images-idx3-ubyte.gz
    Extracting data/t10k-labels-idx1-ubyte.gz
    MNIST loaded

    print(trainimg.shape)
    print(trainlabel.shape)
    print(testimg.shape)
    print(testlabel.shape)
    # print(trainimg)
    print(trainlabel[0])

    # Sample output:
    (55000, 784)
    (55000, 10)
    (10000, 784)
    (10000, 10)
    [ 0.  0.  0.  0.  0.  0.  0.  1.  0.  0.]
  • Building the logistic regression model

    # Placeholders first (each row is one sample).
    x = tf.placeholder("float", [None, 784])
    # 10 label positions, e.g. [ 0.  0.  0.  0.  0.  0.  0.  1.  0.  0.]
    y = tf.placeholder("float", [None, 10])  # None leaves the batch size open

    # 10-way classification: 784 inputs, 10 outputs.
    W = tf.Variable(tf.zeros([784, 10]))
    # One bias per output class.
    b = tf.Variable(tf.zeros([10]))

    # LOGISTIC REGRESSION MODEL (10 outputs).
    actv = tf.nn.softmax(tf.matmul(x, W) + b)
    # COST FUNCTION: cross-entropy loss.
    cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(actv), reduction_indices=1))
    # OPTIMIZER
    learning_rate = 0.01
    optm = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
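    The training loop below reads `accr` and `init`, which this post never defines. A minimal sketch of the standard accuracy and initializer ops that make it run; the names are chosen to match the loop:

    # PREDICTION: does the most probable class match the one-hot label?
    pred = tf.equal(tf.argmax(actv, 1), tf.argmax(y, 1))
    # ACCURACY: fraction of correct predictions in the fed batch.
    accr = tf.reduce_mean(tf.cast(pred, "float"))
    # INITIALIZER, referenced as `init` by the training code.
    init = tf.global_variables_initializer()
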
  • Training the model

    # Number of epochs.
    training_epochs = 50
    # Samples per mini-batch.
    batch_size      = 100
    display_step    = 5

    # SESSION
    sess = tf.Session()
    sess.run(init)

    # MINI-BATCH LEARNING
    for epoch in range(training_epochs):
        avg_cost = 0.
        num_batch = int(mnist.train.num_examples / batch_size)
        for i in range(num_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})
            feeds = {x: batch_xs, y: batch_ys}
            avg_cost += sess.run(cost, feed_dict=feeds) / num_batch
        # DISPLAY
        if epoch % display_step == 0:
            feeds_train = {x: batch_xs, y: batch_ys}
            feeds_test = {x: mnist.test.images, y: mnist.test.labels}
            train_acc = sess.run(accr, feed_dict=feeds_train)
            test_acc = sess.run(accr, feed_dict=feeds_test)
            print("Epoch: %03d/%03d cost: %.9f train_acc: %.3f test_acc: %.3f"
                  % (epoch, training_epochs, avg_cost, train_acc, test_acc))
    print("DONE")

    # Sample output:
    Epoch: 000/050 cost: 1.177906594 train_acc: 0.840 test_acc: 0.855
    Epoch: 005/050 cost: 0.440515266 train_acc: 0.860 test_acc: 0.895
    Epoch: 010/050 cost: 0.382895913 train_acc: 0.910 test_acc: 0.905
    Epoch: 015/050 cost: 0.356607343 train_acc: 0.870 test_acc: 0.909
    Epoch: 020/050 cost: 0.341326642 train_acc: 0.860 test_acc: 0.912
    Epoch: 025/050 cost: 0.330556413 train_acc: 0.910 test_acc: 0.913
    Epoch: 030/050 cost: 0.321508561 train_acc: 0.840 test_acc: 0.916
    Epoch: 035/050 cost: 0.314936944 train_acc: 0.940 test_acc: 0.917
    Epoch: 040/050 cost: 0.309805418 train_acc: 0.940 test_acc: 0.918
    Epoch: 045/050 cost: 0.305343132 train_acc: 0.960 test_acc: 0.918
    DONE

5 Summary

The real goal of this post is to genuinely understand TensorFlow's design philosophy through these simple cases: define the computation graph first, then run it inside a session.

Qin Kaixin, Shenzhen, 201812092128
