ADAM

ADAM stochastic optimizer.

See Kingma & Ba, "Adam: A Method for Stochastic Optimization" (arXiv:1412.6980).
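
For reference, the update rule from the cited paper, written with this class's member names; the implementation itself is undocumented, so details such as where the bias correction is applied may differ:

M = beta1_0 * M + (1 - beta1_0) * grad
S = beta2_0 * S + (1 - beta2_0) * grad^2
W = W - lr * (M / (1 - beta1_0^t)) / (sqrt(S / (1 - beta2_0^t)) + eps)

where t counts update steps; the beta1/beta2 members presumably hold the running powers beta1_0^t and beta2_0^t used for the bias correction.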

Constructors

this
this(ulong num_epochs, float lr, ulong mini_batch_sz, float eps_, float beta1_0_, float beta2_0_)
Undocumented in source. From the parameter names: number of training epochs, learning rate, mini-batch size, then the Adam hyperparameters eps_ (numerical-stability epsilon), beta1_0_ (first-moment decay rate) and beta2_0_ (second-moment decay rate).
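
A minimal construction sketch. The opCallNew mixin (see Mixins below) lets the class be instantiated without new, and the three-argument form relies on default values for eps_, beta1_0_ and beta2_0_ that evidently exist in source, since the Examples section below uses it:

// 5 epochs, learning rate 0.1, mini-batch size 100
auto opt = ADAM(5, 0.1, 100);

// pinning the Adam hyperparameters explicitly
// (1e-8, 0.9 and 0.999 are the values recommended in the paper):
auto opt2 = ADAM(5, 0.1, 100, 1e-8, 0.9, 0.999);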

Members

Functions

current_lr
float current_lr(size_t k, size_t j)
Undocumented in source; presumably returns the effective learning rate for coordinate j of parameter matrix k.
dup
ADAM dup()
Undocumented in source; returns a copy of this optimizer (covariant override of the inherited dup).
params_to_optimize
void params_to_optimize(float[][] W_, float[][] grad_)
Undocumented in source; presumably registers the parameter matrices W_ and matching gradient buffers grad_ that the optimizer should track.
register
void register(NeuralLayer layer)
Undocumented in source; presumably registers a layer's parameters and allocates the M and S moment buffers for them.
toString
string toString()
Undocumented in source; returns a human-readable description of the optimizer.
update
void update(NeuralLayer layer, float[] ext_grad)
Undocumented in source; presumably applies one Adam step to layer given a dense external gradient.
update
void update(NeuralLayer layer, SparseF[] ext_grad)
Undocumented in source; presumably applies one Adam step to layer given a sparse external gradient.
update_general
void update_general(NeuralLayer layer, V[] ext_grad)
Undocumented in source; presumably the templated common path behind the dense and sparse update overloads.
update_matrix
void update_matrix()
Undocumented in source; presumably applies the Adam step (see adam_op3) to all tracked parameter matrices.

Mixins

__anonymous
mixin opCallNew
Undocumented in source; provides the static opCall listed under Mixed In Members, so the class can be written as ADAM(...) instead of new ADAM(...).

Static functions

adam_op
void adam_op(float[] row, float beta, float[] g)
Undocumented in source; from its name and signature, presumably the first-moment accumulation row = beta * row + (1 - beta) * g.
adam_op2
void adam_op2(float[] row, float beta, float[] g)
Undocumented in source; presumably the same moving average over the squared gradient (second-moment accumulation).
adam_op3
void adam_op3(float[] row_W, float[] row_S, float[] row_M, float beta1_, float beta2_, float eps_, float lr_)
Undocumented in source; presumably the per-row parameter step: bias-correct row_M and row_S with beta1_ and beta2_, then update row_W.
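
A plausible reading of these three helpers, inferred purely from their names and signatures; this is a sketch of the standard Adam operations, not the verified implementation:

// first-moment accumulation: row = beta * row + (1 - beta) * g
static void adam_op(float[] row, float beta, float[] g)
{
    row[] = row[] * beta + g[] * (1.0f - beta);
}

// second-moment accumulation over the squared gradient
static void adam_op2(float[] row, float beta, float[] g)
{
    foreach(i; 0 .. row.length)
        row[i] = beta * row[i] + (1.0f - beta) * g[i] * g[i];
}

// bias-corrected Adam step on one row of W; beta1_ and beta2_ read
// as the running powers beta1_0^t and beta2_0^t
static void adam_op3(float[] row_W, float[] row_S, float[] row_M,
                     float beta1_, float beta2_, float eps_, float lr_)
{
    import std.math : sqrt;
    foreach(i; 0 .. row_W.length)
    {
        auto m_hat = row_M[i] / (1.0f - beta1_);
        auto s_hat = row_S[i] / (1.0f - beta2_);
        row_W[i] -= lr_ * m_hat / (sqrt(s_hat) + eps_);
    }
}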

Variables

M
float[][] M;
Undocumented in source; presumably the first-moment (mean) gradient estimates, one row per parameter row of W.
S
float[][] S;
Undocumented in source; presumably the second-moment (uncentered variance) gradient estimates.
W
float[][] W;
Undocumented in source; presumably references to the parameter matrices being optimized.
beta1
float beta1;
Undocumented in source; presumably the running power beta1_0^t used for bias correction.
beta1_0
float beta1_0;
Undocumented in source; the first-moment decay rate passed to the constructor (beta1 in the paper).
beta2
float beta2;
Undocumented in source; presumably the running power beta2_0^t used for bias correction.
beta2_0
float beta2_0;
Undocumented in source; the second-moment decay rate passed to the constructor (beta2 in the paper).
eps
float eps;
Undocumented in source; the numerical-stability constant added to the denominator of the update (epsilon in the paper).
grad
float[][] grad;
Undocumented in source; presumably references to the gradient buffers matching W.

Mixed In Members

From mixin opCallNew

opCall
auto opCall(T args)
Undocumented in source; forwards its arguments to the constructor, enabling construction without new.

Inherited Members

From SGDOptimizer

num_epochs
ulong num_epochs;
Undocumented in source; number of passes over the training data.
mini_batch_sz
ulong mini_batch_sz;
Undocumented in source; mini-batch size.
lr
float lr;
Undocumented in source; base learning rate.
cnt
ulong cnt;
Undocumented in source; presumably a step counter used by the schedule and bias correction.
priors
AdditiveLinearPrior[] priors;
Undocumented in source; additive linear priors registered with the optimizer.
prox
ProxyLinearPrior prox;
Undocumented in source; proximal linear prior, if any.
learn
void learn(NeuralNet nn, D data, S delegate(R net_out, ref T ex, ref V[] grad) grad_f, bool verbose, uint num_cores)
Undocumented in source; presumably the main training loop: iterates over data for num_epochs epochs, calling grad_f for the loss gradient, with verbose logging and num_cores worker threads.
current_lr
float current_lr(size_t k, size_t j)
Undocumented in source; base-class version, overridden by ADAM above.
register
void register(NeuralLayer layer)
Undocumented in source; base-class version, overridden by ADAM above.
update
void update(NeuralLayer layer, float[] ext_grad)
Undocumented in source; base-class version, overridden by ADAM above.
update
void update(NeuralLayer layer, SparseF[] ext_grad)
Undocumented in source; base-class version, overridden by ADAM above.
dup
Optimizer dup()
Undocumented in source; base-class version, overridden covariantly by ADAM above.
toString
string toString()
Undocumented in source; base-class version, overridden by ADAM above.

Examples

import vectorflow;

// toy network: 200-dimensional dense input into a 10-output linear layer
auto nn = NeuralNet()
    .stack(DenseData(200))
    .stack(Linear(10));

// `data` is assumed to be a range of training examples in the shape
// NeuralNet.learn expects (not shown here).
// ADAM learning with 5 epochs, learning rate 0.1, mini-batch size of 100:
nn.learn(data, "multinomial", ADAM(5, 0.1, 100));
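
The same pattern with a single-output binary model; the SparseData layer and the "logistic" loss name come from vectorflow's other examples, and combining them with ADAM here is an assumption:

// hypothetical binary classifier over 1000 sparse features
auto nn2 = NeuralNet()
    .stack(SparseData(1000))
    .stack(Linear(1));
nn2.learn(data, "logistic", ADAM(5, 0.1, 100));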
