LSTM Trigonometric Function Prediction

Preface

I had been saying for ages that I would hand-write an LSTM prediction example, but after hitting an issue with buckets last semester I shelved it, and I've been caught flat-footed by it a few times since (⊙﹏⊙)b.
Well, let me put the issue up front first: https://github.com/apache/incubator-mxnet/issues/8663, although no expert ever responded (I have no idea why ...).

Code

Today things again came up on short notice, so I wrote a quick test program (these days everyone probably plays with gluon and no longer bothers with the symbol API):

import mxnet as mx
from mxnet import gluon

hidden_sizes = [10, 20, 1]  # stacked LSTM cells; the last one emits the scalar prediction
batch_size = 300
iteration = 300000
log_freq = 20
ctx = mx.gpu()
opt = 'adam'  # or 'sgd'

unroll_len = 9
# base grid of unroll_len + 1 time points, shifted by a random phase per sample
t = mx.nd.arange(0, 0.01 * (1 + unroll_len), .01, ctx=ctx)
tt = mx.nd.random.uniform(shape=(iteration, 1), ctx=ctx)
t = (t + tt).T            # (unroll_len + 1, iteration)
y = mx.nd.sin(t[-1]) / 2  # label: sin of the last time point, scaled into [-0.5, 0.5]

model = gluon.rnn.SequentialRNNCell()
with model.name_scope():
    for hidden_size in hidden_sizes:
        model.add(gluon.rnn.LSTMCell(hidden_size))
model.initialize(ctx=ctx)
L = gluon.loss.L2Loss()
trainer = gluon.Trainer(model.collect_params(), opt)
prev_batch_idx = -1
acc_l = mx.nd.array([0.], ctx=ctx)

for batch_idx in range(iteration // batch_size):
    # each element of x_list has shape (batch_size, 1): one time step of the inputs
    x_list = [x[batch_idx * batch_size:(batch_idx + 1) * batch_size].reshape((-1, 1))
              for x in t[:unroll_len]]
    label = y[batch_idx * batch_size:(batch_idx + 1) * batch_size]
    with mx.autograd.record():
        outputs, states = model.unroll(unroll_len, x_list)
        l = L(outputs[-1], label)
    l.backward()
    trainer.step(batch_size)
    acc_l += l.mean()
    if batch_idx - prev_batch_idx == log_freq:
        print('loss: %.4f' % (acc_l / log_freq).asscalar())
        prev_batch_idx = batch_idx
        acc_l *= 0
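
As a quick sanity check after training, the model can be probed with a fresh window of time points. The sketch below is only illustrative: the starting phase t0 is an arbitrary value I picked, not something from the original script.

# Minimal inference sketch (assumes the training loop above has finished):
# feed one window of unroll_len time points with batch size 1 and compare
# the prediction with the true target sin(t_last) / 2.
t0 = 0.3  # arbitrary starting phase, purely for illustration
window = mx.nd.arange(t0, t0 + 0.01 * (1 + unroll_len), .01, ctx=ctx)
x_list = [step.reshape((1, 1)) for step in window[:unroll_len]]
outputs, _ = model.unroll(unroll_len, x_list)
print('pred: %.4f  truth: %.4f'
      % (outputs[-1].asscalar(), (mx.nd.sin(window[-1]) / 2).asscalar()))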

Note

  1. Adam is noticeably faster than SGD; see the loss comparison table at the end of the post, and the optimizer-setup sketch after it.
  2. Why is there no ReLU activation, and does optimization become difficult once more layers are stacked?
    On the first question: the LSTM's defining equations simply have no place for ReLU (see the equations after this list).
    On the second question, these are the relevant discussions I found:
    https://www.reddit.com/r/MachineLearning/comments/30eges/batch_normalization_or_other_tricks_for_lstms/
    https://groups.google.com/forum/#!topic/lasagne-users/EczUQckJggU
    There is also this work (http://cn.arxiv.org/abs/1603.09025), which proposes batch normalization for the hidden-to-hidden transition. Judging from its description and the reported results, it does not bring a notable gain in convergence speed or accuracy.
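
To make the first question concrete, these are the usual LSTM cell equations (the same formulation gluon.rnn.LSTMCell implements; the notation below is mine). Every nonlinearity is either a sigmoid gate or a tanh, so the cell itself has no natural slot for a ReLU:

i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i)
f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f)
o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)
\tilde{c}_t = \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
h_t = o_t \odot \tanh(c_t)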
adam      sgd
0.0378    0.0387
0.0223    0.0335
0.0059    0.0284
0.0043    0.0247
0.0030    0.0214
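
Both columns were presumably produced by the same script with only the opt string switched between 'adam' and 'sgd', all other hyperparameters at their gluon defaults. If you want to control the optimizer explicitly instead, gluon.Trainer also accepts a dict of hyperparameters; the learning rate below is just an illustrative value, not the one behind this table.

# Hypothetical variant: pass explicit optimizer hyperparameters to the Trainer.
# 0.01 is an illustrative learning rate, not the setting used for the table above.
trainer = gluon.Trainer(model.collect_params(), 'sgd', {'learning_rate': 0.01})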
