How to Reshape Input Data for LSTM Networks in Keras
Abstract: for developers new to the field, how to prepare data for an LSTM has always been a problem. There are indeed many pitfalls when preparing LSTM input data, and reading this article may help you resolve more of them.

Preparing sequence data for an LSTM model can be very difficult for beginners. Newcomers are usually confused about how to define the input layer of the model, and about how to convert sequence data, which may be a 1D or 2D matrix of numbers, into the 3D format the input layer requires. In this article you will learn how to define the input layer of the model, and how to reshape your data so it can be fed to the model.

After reading this article, you will know:
- How to define an LSTM input layer.
- How to reshape one-dimensional sequence data for the model and define the input layer.
- How to reshape multiple parallel series for the model and define the input layer.

Tutorial overview. This article is divided into 4 parts:
1. The LSTM input layer.
2. An example of an LSTM with a single input sample.
3. An example of an LSTM with multiple input features.
4. Tips for LSTM input.

The LSTM input layer

The input layer is specified by the "input_shape" argument on the first hidden layer of the network. This can be confusing for beginners. For example, below is a network composed of one hidden LSTM layer and one Dense output layer:

model = Sequential()
model.add(LSTM(32))
model.add(Dense(1))

In this example, the LSTM() layer must specify the shape of its input, and the input to every LSTM layer must be three-dimensional. The three dimensions of this input are:
- Samples. One sequence is one sample. A batch is composed of one or more samples.
- Time steps. One time step is one point of observation in the sample.
- Features. One feature is one observation made at a time step.

This means the input layer expects a 3D array of data both when fitting the model and when making predictions, even if specific dimensions of the array contain only a single value. When you define the input layer of the network, the network assumes you have one or more samples and requires that you specify the number of time steps and the number of features through the "input_shape" argument. For example, the model below defines an input layer that expects one or more samples, 50 time steps, and 2 features:

model = Sequential()
model.add(LSTM(32, input_shape=(50, 2)))
model.add(Dense(1))

Now that we know how to define the input layer, let's look at some examples of how we can prepare data for it.
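As a quick sanity check (a minimal sketch of my own, assuming a standalone Keras installation; import paths differ slightly across Keras/TensorFlow versions), you can print the shape the network expects. The first dimension is the sample dimension, which Keras leaves unspecified as None:

from keras.models import Sequential
from keras.layers import LSTM, Dense

# a network that expects 50 time steps of 2 features per sample
model = Sequential()
model.add(LSTM(32, input_shape=(50, 2)))
model.add(Dense(1))

# the sample dimension is None: any number of samples may be supplied
print(model.input_shape)  # (None, 50, 2)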
Example of an LSTM with a single input sample

Consider the case where you have one sequence of multiple time steps and one feature, so we start there. For example, here is a sequence of 10 numbers:

0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0

We can define this sequence of numbers as a NumPy array:

from numpy import array
data = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])

We can then use the reshape() function on the array to reshape this one-dimensional array into a three-dimensional array with 1 sample, 10 time steps, and 1 feature at each time step. The reshape() function, called on an array, takes one argument: a tuple that defines the new shape of the array. We cannot conjure up data in the reshape; it must evenly reorganize the data already in the array.

data = data.reshape((1, 10, 1))

Once reshaped, we can print the new shape of the array:

print(data.shape)

The complete example is listed below:

from numpy import array
data = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
data = data.reshape((1, 10, 1))
print(data.shape)

Running the example prints the new shape of the single sample:

(1, 10, 1)

This data is now ready to be used as input (X) to an LSTM with an input_shape of (10, 1):

model = Sequential()
model.add(LSTM(32, input_shape=(10, 1)))
model.add(Dense(1))
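The same mechanics extend to more than one sample (my own illustration, not an example from the original article): splitting the same 10 values into two sequences of 5 time steps makes the sample dimension 2, and the matching input_shape would be (5, 1):

from numpy import array

data = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
# 2 samples, 5 time steps each, 1 feature; the reshape must preserve 2*5*1 = 10 values
data = data.reshape((2, 5, 1))
print(data.shape)  # (2, 5, 1)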
Example of an LSTM with multiple input features

Your model may have multiple parallel series as input, so let's look at that case next. For example, here are two parallel series of 10 values:

series 1: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0
series 2: 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1

We can define this data as a matrix with 10 rows and 2 columns:

from numpy import array
data = array([[0.1, 1.0],
              [0.2, 0.9],
              [0.3, 0.8],
              [0.4, 0.7],
              [0.5, 0.6],
              [0.6, 0.5],
              [0.7, 0.4],
              [0.8, 0.3],
              [0.9, 0.2],
              [1.0, 0.1]])

This data can be framed as 1 sample with 10 time steps and 2 features, and reshaped into a 3D array as follows:

data = data.reshape(1, 10, 2)

The complete example is listed below:

from numpy import array
data = array([[0.1, 1.0],
              [0.2, 0.9],
              [0.3, 0.8],
              [0.4, 0.7],
              [0.5, 0.6],
              [0.6, 0.5],
              [0.7, 0.4],
              [0.8, 0.3],
              [0.9, 0.2],
              [1.0, 0.1]])
data = data.reshape(1, 10, 2)
print(data.shape)

Running the example prints the new shape:

(1, 10, 2)

This data is now ready to be used as input (X) to an LSTM with an input_shape of (10, 2):

model = Sequential()
model.add(LSTM(32, input_shape=(10, 2)))
model.add(Dense(1))

Tips for LSTM input

The following tips may help when preparing input data for an LSTM (a complete end-to-end sketch follows the further-reading list below):
1. The LSTM input layer must be 3D.
2. The meaning of the 3 input dimensions is: samples, time steps, and features.
3. The LSTM input layer is defined by the input_shape argument on the first hidden layer.
4. The input_shape argument is a tuple of two values that define the number of time steps and the number of features.
5. The number of samples is assumed to be 1 or more.
6. The reshape() function in NumPy can be used to reshape your 1D or 2D data into 3D.
7. The reshape() function takes a tuple defining the new shape as its argument.

Further reading

If you want to go deeper, this section provides more resources on the topic:
- Recurrent Layers Keras API.
- NumPy reshape() function API.
- How to Convert a Time Series to a Supervised Learning Problem in Python.
- Time Series Forecasting as Supervised Learning.
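Putting the pieces together, here is a minimal end-to-end sketch (my own illustration rather than code from the article; the toy target y is an arbitrary assumption) that reshapes the two parallel series and fits the model on the single sample:

from numpy import array
from keras.models import Sequential
from keras.layers import LSTM, Dense

# two parallel series as one sample: 10 time steps, 2 features
data = array([[0.1, 1.0], [0.2, 0.9], [0.3, 0.8], [0.4, 0.7], [0.5, 0.6],
              [0.6, 0.5], [0.7, 0.4], [0.8, 0.3], [0.9, 0.2], [1.0, 0.1]])
X = data.reshape((1, 10, 2))
y = array([[1.0]])  # arbitrary toy target for the single sample

model = Sequential()
model.add(LSTM(32, input_shape=(10, 2)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=10, verbose=0)

print(model.predict(X).shape)  # (1, 1)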
YJango's Recurrent Neural Networks: Implementing an LSTM with scan (by YJango, from the Zhihu column 超智能体)
Introduction

The previous article introduced LSTM and GRU, currently the most popular realizations of recurrent neural networks; this one demonstrates how to build an LSTM network with TensorFlow. "Code LV1" means the demo contains only the most essential code, with no extra functionality. To build a deeper understanding of the LSTM's structure, this time we do not use TensorFlow's built-in rnn_cell classes; instead the cell is written from scratch, and scan is used to implement the loop in the graph (a dynamic RNN).

Task description: as in the earlier chapters, the model learns to predict articulator (mouth) movement from speech. Readers who have not seen the task description should read it in the chapter linked there, and compare the feed-forward network used previously with this recurrent network.

Processing the training data

Purpose: subtract each utterance's mean and divide by its standard deviation, which makes the model easier to fit.

Code:

# required libraries
import tensorflow as tf
import numpy as np
import time
import matplotlib.pyplot as plt
%matplotlib inline

# reuse the function defined in code demo LV3: per-utterance standardization
def Standardize(seq):
    # subtract mean
    centerized = seq - np.mean(seq, axis=0)
    # divide by standard deviation
    normalized = centerized / np.std(centerized, axis=0)
    return normalized

# load inputs and outputs
mfc = np.load('X.npy')
art = np.load('Y.npy')
totalsamples = len(mfc)
# 20% of the data is used as the validation set
vali_size = 0.2

# pair each sample's input and output into a list, then gather all samples into one list;
# inputs are shaped [n_samples, n_steps, D_input], outputs [n_samples, D_output]
def data_prer(X, Y):
    D_input = X[0].shape[1]
    data = []
    for x, y in zip(X, Y):
        data.append([Standardize(x).reshape((1, -1, D_input)).astype("float32"),
                     Standardize(y).astype("float32")])
    return data

# process the data
data = data_prer(mfc, art)
# split into training and validation sets
train = data[int(totalsamples * vali_size):]
test = data[:int(totalsamples * vali_size)]

Schematic (figure in the original): 1, 2, 3, 4, 5 denote the elements of the list, and each element is itself a list of length 2 (input, label).

Explanation: suppose there are 100 sequences in total. If each input is shaped [1, n_steps, D_input], the processed list has length 100 and training amounts to SGD updates. If you instead want mini-batch gradient descent with a batch size (n_samples) of 2, the processed list has length 50, and each training step computes the gradients of 2 samples and updates the weights with their mean. Because each utterance has a different length, packing them into one 3D tensor would require a lot of zero padding, so n_samples is set to 1 here. The drawback of this choice is that only SGD can be used, not mini-batch GD. To use mini-batch GD, several samples with the same n_steps must be stacked into a 3D tensor, zero-padding the ones that are too short, as in the figure below.

Figure (described): v denotes a 39-dimensional vector; sequence 1 has n_steps 3 and sequence 2 has n_steps 7. To stack three such sequences into one 3D tensor, take the largest length as n_steps and pad the shorter sequences with all-zero 39-dimensional vectors, giving a tensor of shape [3, 7, 39].
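To make the padding concrete (an illustrative NumPy sketch of my own; the sequence contents are made up), two unequal-length sequences of 39-dimensional frames can be zero-padded into one batch tensor like this:

import numpy as np

D_input = 39
seq1 = np.ones((3, D_input))   # n_steps = 3
seq2 = np.ones((7, D_input))   # n_steps = 7
n_steps = max(len(seq1), len(seq2))

# allocate the padded batch, then copy each sequence into its leading rows
batch = np.zeros((2, n_steps, D_input), dtype="float32")
batch[0, :len(seq1)] = seq1    # the remaining rows of sample 0 stay all-zero
batch[1, :len(seq2)] = seq2
print(batch.shape)             # (2, 7, 39)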
Weight initialization

Purpose: sensible weight initialization reduces the harm of getting stuck at saddle points or poor local minima during learning, and improves learning speed and quality.

Code:

# uniform initialization scaled by fan-in
def weight_init(shape):
    initial = tf.random_uniform(shape,
                                minval=-np.sqrt(5) * np.sqrt(1.0 / shape[0]),
                                maxval=np.sqrt(5) * np.sqrt(1.0 / shape[0]))
    return tf.Variable(initial, trainable=True)

# initialize everything to zero
def zero_init(shape):
    initial = tf.zeros(shape)
    return tf.Variable(initial, trainable=True)

# orthogonal-matrix initialization
def orthogonal_initializer(shape, scale=1.0):
    # adapted from Lasagne: /Lasagne/Lasagne/blob/master/lasagne/init.py
    scale = 1.0
    flat_shape = (shape[0], np.prod(shape[1:]))
    a = np.random.normal(0.0, 1.0, flat_shape)
    u, _, v = np.linalg.svd(a, full_matrices=False)
    q = u if u.shape == flat_shape else v
    q = q.reshape(shape)  # this needs to be cast to float32
    return tf.Variable(scale * q[:shape[0], :shape[1]], trainable=True, dtype=tf.float32)

# small constant bias initialization
def bias_init(shape):
    initial = tf.constant(0.01, shape=shape)
    return tf.Variable(initial)

# shuffle the list of samples
def shufflelists(data):
    ri = np.random.permutation(len(data))
    data = [data[i] for i in ri]
    return data

Explanation: shufflelists reshuffles a list of samples into a new order. Orthogonal-matrix initialization is a method known to benefit gated RNNs.
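A quick check of what the SVD step buys (my own sketch, not part of the article): for a flattened shape of (4, 6), np.linalg.svd returns a v whose rows are orthonormal, so the initializer's q satisfies q q^T = I:

import numpy as np

flat_shape = (4, 6)
a = np.random.normal(0.0, 1.0, flat_shape)
u, _, v = np.linalg.svd(a, full_matrices=False)  # u: (4, 4), v: (4, 6)
q = u if u.shape == flat_shape else v            # here q = v
print(np.allclose(q @ q.T, np.eye(4)))           # True: rows are orthonormal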
Defining the LSTM class

Attributes: a class is used because the LSTM has a large number of parameters, and defining them as attributes makes them easier to manage.

Code: all learnable weights are defined as attributes in __init__.

class LSTMcell(object):
    def __init__(self, incoming, D_input, D_cell, initializer, f_bias=1.0):
        # incoming receives the input data, shaped [n_samples, n_steps, D_input]
        self.incoming = incoming
        # dimensionality of the input
        self.D_input = D_input
        # dimensionality of the LSTM hidden state, which is also that of the memory cell
        self.D_cell = D_cell

        # parameters
        # the three parameters of the input gate
        # igate = W_xi .* x + W_hi .* h + b_i
        self.W_xi = initializer([self.D_input, self.D_cell])
        self.W_hi = initializer([self.D_cell, self.D_cell])
        self.b_i = tf.Variable(tf.zeros([self.D_cell]))
        # the three parameters of the forget gate
        # fgate = W_xf .* x + W_hf .* h + b_f
        self.W_xf = initializer([self.D_input, self.D_cell])
        self.W_hf = initializer([self.D_cell, self.D_cell])
        self.b_f = tf.Variable(tf.constant(f_bias, shape=[self.D_cell]))
        # the three parameters of the output gate
        # ogate = W_xo .* x + W_ho .* h + b_o
        self.W_xo = initializer([self.D_input, self.D_cell])
        self.W_ho = initializer([self.D_cell, self.D_cell])
        self.b_o = tf.Variable(tf.zeros([self.D_cell]))
        # the three parameters for computing the candidate information
        # cell = W_xc .* x + W_hc .* h + b_c
        self.W_xc = initializer([self.D_input, self.D_cell])
        self.W_hc = initializer([self.D_cell, self.D_cell])
        self.b_c = tf.Variable(tf.zeros([self.D_cell]))

        # initial hidden state and memory cell, both shaped [n_samples, D_cell];
        # unless specified otherwise, both are set to all zeros
        init_for_both = tf.matmul(self.incoming[:, 0, :], tf.zeros([self.D_input, self.D_cell]))
        self.hid_init = init_for_both
        self.cell_init = init_for_both
        # the hidden state and the memory cell have to be packed together
        self.previous_h_c_tuple = tf.stack([self.hid_init, self.cell_init])
        # the data must be transposed from [n_samples, n_steps, D_input] to [n_steps, n_samples, D_input]
        self.incoming = tf.transpose(self.incoming, perm=[1, 0, 2])

Explanation: stacking the hidden state together with the memory cell, and transposing the input into the shape [n_steps, n_samples, D_input], are both done to satisfy the requirements of TensorFlow's scan, discussed below.
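To make the two shape tricks concrete (an illustrative NumPy sketch of the shape arithmetic, not code from the article), stacking two [n_samples, D_cell] states yields a [2, n_samples, D_cell] tensor, and the transpose moves the time axis first so scan can slice along it:

import numpy as np

n_samples, n_steps, D_input, D_cell = 1, 7, 39, 1024
h = np.zeros((n_samples, D_cell))
c = np.zeros((n_samples, D_cell))
print(np.stack([h, c]).shape)            # (2, 1, 1024): the state tuple carried through scan

x = np.zeros((n_samples, n_steps, D_input))
print(np.transpose(x, (1, 0, 2)).shape)  # (7, 1, 39): time-major, one slice per scan step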
Per-step computation: define a function that specifies the computation of a single time step.

Code:

    def one_step(self, previous_h_c_tuple, current_x):
        # split the hidden state and the memory cell apart again
        prev_h, prev_c = tf.unstack(previous_h_c_tuple)
        # current_x is the input at the current step,
        # prev_h is the hidden state of the previous step,
        # prev_c is the memory cell of the previous step

        # compute the input gate
        i = tf.sigmoid(
            tf.matmul(current_x, self.W_xi) +
            tf.matmul(prev_h, self.W_hi) +
            self.b_i)
        # compute the forget gate
        f = tf.sigmoid(
            tf.matmul(current_x, self.W_xf) +
            tf.matmul(prev_h, self.W_hf) +
            self.b_f)
        # compute the output gate
        o = tf.sigmoid(
            tf.matmul(current_x, self.W_xo) +
            tf.matmul(prev_h, self.W_ho) +
            self.b_o)
        # compute the candidate new information
        c = tf.tanh(
            tf.matmul(current_x, self.W_xc) +
            tf.matmul(prev_h, self.W_hc) +
            self.b_c)
        # memory cell at the current step
        current_c = f * prev_c + i * c
        # hidden state at the current step
        current_h = o * tf.tanh(current_c)
        # pack the current hidden state and memory cell together again and return them
        return tf.stack([current_h, current_c])

Explanation: the previous hidden state and memory cell are unstacked and used for the computation, and the new hidden state and memory cell that result are stacked together again as the function's return value, once more to satisfy scan. With this function defined, the LSTM is complete: one_step uses the parameters defined in the class, the input at the current step, and the previous hidden state and memory cell to compute the current hidden state and memory cell.
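In equation form, one_step computes the standard LSTM update (a reference restatement of the code above, using the code's row-vector convention; \odot denotes elementwise multiplication):

i_t = \sigma(x_t W_{xi} + h_{t-1} W_{hi} + b_i)
f_t = \sigma(x_t W_{xf} + h_{t-1} W_{hf} + b_f)
o_t = \sigma(x_t W_{xo} + h_{t-1} W_{ho} + b_o)
\tilde{c}_t = \tanh(x_t W_{xc} + h_{t-1} W_{hc} + b_c)
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
h_t = o_t \odot \tanh(c_t)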
scan: use scan to iterate over all timesteps and produce all hidden states for downstream processing.

Code:

    def all_steps(self):
        # output shape: [n_steps, n_samples, D_cell]
        hstates = tf.scan(fn=self.one_step,
                          elems=self.incoming,  # shape [n_steps, n_samples, D_input]
                          initializer=self.previous_h_c_tuple,
                          name='hstates')[:, 0, :, :]
        return hstates

Explanation: scan places the following requirements on fn, elems, and initializer:
- fn: its first argument is the output of the previous step (it must match fn's return value), and its second argument is the input at the current step.
- elems: at every step, scan takes one slice along the first dimension of the tensor it processes, which is why the data was transposed from [n_samples, n_steps, D_input] to [n_steps, n_samples, D_input].
- initializer: the initial value; it must match fn's first argument and its return value.
In this example scan returns a tensor of shape [n_steps, 2, n_samples, D_cell], where the 2 in the second dimension holds the hidden state and the memory cell; indexing with [:, 0, :, :] keeps only the hidden states.
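A minimal illustration of these three arguments (my own sketch; it assumes TensorFlow 2.x eager execution for brevity, whereas the article uses the 1.x graph API): a cumulative sum makes the contract visible, since the accumulator passed to fn is always fn's previous return value and elems is sliced along its first axis:

import tensorflow as tf

# fn(accumulator, current_element) -> new accumulator
cumsum = tf.scan(fn=lambda acc, x: acc + x,
                 elems=tf.constant([1.0, 2.0, 3.0, 4.0]),
                 initializer=tf.constant(0.0))
print(cumsum.numpy())  # [ 1.  3.  6. 10.]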
Building the network

Code:

D_input = 39
D_label = 24
learning_rate = 7e-5
num_units = 1024

# placeholders for the sample inputs and labels
inputs = tf.placeholder(tf.float32, [None, None, D_input], name="inputs")
labels = tf.placeholder(tf.float32, [None, D_label], name="labels")

# instantiate the LSTM class
rnn_cell = LSTMcell(inputs, D_input, num_units, orthogonal_initializer)
# call scan to compute all hidden states
rnn0 = rnn_cell.all_steps()
# reshape the 3D tensor [n_steps, n_samples, D_cell] into a matrix [n_steps*n_samples, D_cell]
# for computing the outputs
rnn = tf.reshape(rnn0, [-1, num_units])
# learnable parameters of the output layer
W = weight_init([num_units, D_label])
b = bias_init([D_label])
output = tf.matmul(rnn, W) + b
# loss
loss = tf.reduce_mean((output - labels)**2)
# training op
train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)

Explanation: the network is hard-coded: the input is 39-dimensional, the first hidden layer (the RNN-LSTM) is 1024-dimensional, and the output layer maps the 1024-dimensional LSTM output down to 24 dimensions to match the labels. Note: the network does not use only the last value of the sequence; the values at all timesteps are compared with the actual trajectory to compute the loss.

Training the network

Code:

# create a session and actually initialize all parameters
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()

# train and log the losses
def train_epoch(EPOCH):
    for k in range(EPOCH):
        train0 = shufflelists(train)
        for i in range(len(train)):
            sess.run(train_step, feed_dict={inputs: train0[i][0], labels: train0[i][1]})
        tl = 0
        dl = 0
        for i in range(len(test)):
            dl += sess.run(loss, feed_dict={inputs: test[i][0], labels: test[i][1]})
        for i in range(len(train)):
            tl += sess.run(loss, feed_dict={inputs: train[i][0], labels: train[i][1]})
        print(k, 'train:', round(tl/83, 3), 'test:', round(dl/20, 3))

t0 = time.time()
train_epoch(10)
t1 = time.time()
print(" %f seconds" % round((t1 - t0), 2))

Output and timing after training for 10 epochs:

(0, 'train:', 0.662, 'test:', 0.691)
(1, 'train:', 0.558, 'test:', 0.614)
(2, 'train:', 0.473, 'test:', 0.557)
(3, 'train:', 0.417, 'test:', 0.53)
(4, 'train:', 0.361, 'test:', 0.504)
(5, 'train:', 0.327, 'test:', 0.494)
(6, 'train:', 0.294, 'test:', 0.476)
(7, 'train:', 0.269, 'test:', 0.468)
(8, 'train:', 0.244, 'test:', 0.452)
(9, 'train:', 0.226, 'test:', 0.453)
563.110000 seconds

Explanation: the LSTM above is written in a very direct, unoptimized way, so in practice it takes rather a long time to run.

Prediction

Code:

pY = sess.run(output, feed_dict={inputs: test[10][0]})
plt.plot(pY[:, 8])
plt.plot(test[10][1][:, 8])
plt.title('test')
plt.legend(['predicted', 'real'])

Explanation: plot the prediction for one dimension of one sample against the real trajectory (the original figure shows the two curves).

Summary

This article deliberately shows only the most essential parts of the LSTM (trained for just 10 epochs; interested readers can train longer) to help explain how it works; the complete code can be found in the author's GitHub. Because this LSTM does not run efficiently, the next article will make small changes to speed it up and reorganize the structure to make it easy to use GRUs, stacked multi-layer RNNs, and bidirectional RNNs, along with other functionality.
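For comparison (my own addition, not part of the article): the built-in TensorFlow 1.x API that this article deliberately avoids wraps the same loop in a couple of lines, and can serve as a faster drop-in once the mechanics above are understood. This sketch assumes the inputs placeholder and num_units from the construction code above:

# TensorFlow 1.x built-in equivalent of the hand-written cell plus scan
cell = tf.nn.rnn_cell.LSTMCell(num_units)
# outputs is shaped [n_samples, n_steps, num_units]; dynamic_rnn is batch-major
# by default, so no manual transpose to time-major is needed
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)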
