rnnlib
get_module_device
get_module_device (model)
is_cuda_enabled
is_cuda_enabled (model)
repeat_lstm_state
repeat_lstm_state (state, batch_size)
create_lstm_init_state
create_lstm_init_state (num_layers, num_directions, hidden_size, init_state_learned=True, device=None)
:param hidden_size: size of the LSTM hidden state
:param init_state_learned: whether the initial state is a learned parameter
:returns: init_state, an input to the LSTM cells, and _init_state, which is saved as a parameter of the model (e.g. self._init_state)
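A minimal usage sketch of the pattern the docstring describes. It assumes the function returns the pair (init_state, _init_state) in that order and that the state follows the torch.nn.LSTM (h_0, c_0) convention; neither is confirmed by the signature alone::

    import torch
    import torch.nn as nn
    from rnnlib import create_lstm_init_state, repeat_lstm_state  # assumed import path

    class Encoder(nn.Module):
        def __init__(self, input_size, hidden_size):
            super().__init__()
            self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2, bidirectional=True)
            # _init_state is stored on the module so it is trained with the model;
            # init_state is what gets fed to the LSTM (assumed return order).
            self.init_state, self._init_state = create_lstm_init_state(
                num_layers=2, num_directions=2, hidden_size=hidden_size,
                init_state_learned=True)

        def forward(self, x):  # x: (seq_len, batch, input_size)
            batch_size = x.size(1)
            h0_c0 = repeat_lstm_state(self.init_state, batch_size)  # expand to the batch (assumed behavior)
            return self.lstm(x, h0_c0)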
repeat_lstm_cell_state
repeat_lstm_cell_state (state, batch_size)
create_lstm_cell_init_state
create_lstm_cell_init_state (hidden_size, init_state_learned=True, device=None)
:param hidden_size: size of the LSTM cell's hidden state
:param init_state_learned: whether the initial state is a learned parameter
:returns: init_state, an input to the LSTM cell, and _init_state, which is saved as a parameter of the model (e.g. self._init_state)
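The per-cell variant follows the same pattern, but with a manual time-step loop. A short sketch, again assuming the (init_state, _init_state) return order and a torch.nn.LSTMCell-like call convention for the cell::

    import torch.nn as nn
    from rnnlib import LSTMCell, create_lstm_cell_init_state, repeat_lstm_cell_state  # assumed imports

    class CellDecoder(nn.Module):
        def __init__(self, input_size, hidden_size):
            super().__init__()
            self.cell = LSTMCell(input_size, hidden_size)
            self.init_state, self._init_state = create_lstm_cell_init_state(
                hidden_size, init_state_learned=True)

        def forward(self, xs):  # xs: (seq_len, batch, input_size)
            state = repeat_lstm_cell_state(self.init_state, xs.size(1))  # tile across the batch (assumed)
            outputs = []
            for x_t in xs:                     # manual unrolling, one step at a time
                state = self.cell(x_t, state)  # assumed nn.LSTMCell-like (h, c) convention
                outputs.append(state[0])
            return outputs, state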
get_indicator
get_indicator (length_tensor, max_length=None)
:param length_tensor: tensor of sequence lengths
:param max_length: maximum sequence length (optional)
:returns: an indicator tensor in which positions within each sequence's length are set to 1
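The indicator described above is the usual padding mask. For intuition, a plain-PyTorch equivalent of what the docstring describes (not rnnlib's exact implementation)::

    import torch

    lengths = torch.tensor([3, 1, 2])
    max_length = 4
    # Positions before each sequence's length are 1, the rest (padding) are 0.
    indicator = (torch.arange(max_length).unsqueeze(0) < lengths.unsqueeze(1)).long()
    # tensor([[1, 1, 1, 0],
    #         [1, 0, 0, 0],
    #         [1, 1, 0, 0]])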
LSTMCell
LSTMCell (input_size, hidden_size)
Standard LSTM cell.
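For reference, the standard LSTM cell update implemented by such a cell is conventionally written as:

.. math::

   i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i), \quad
   f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f), \\
   o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o), \quad
   g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g), \\
   c_t = f_t \odot c_{t-1} + i_t \odot g_t, \quad
   h_t = o_t \odot \tanh(c_t)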
LSTMFrame
LSTMFrame (rnn_cells, batch_first=False, dropout=0, bidirectional=False)
Wrapper of RNNFrame. The ‘for_lstm’ option is always ‘True’.
RNNFrame
RNNFrame (rnn_cells, for_lstm=False, batch_first=False, dropout=0, bidirectional=False)
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes::
    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))
Submodules assigned in this way will be registered, and will have their parameters converted too when you call :meth:`to`, etc.
.. note:: As per the example above, an __init__() call to the parent class must be made before assignment on the child.
:ivar training: Boolean representing whether this module is in training or evaluation mode. :vartype training: bool
forward_rnn
forward_rnn (rnn, init_state, input, lengths, batch_first=False, embedding:torch.nn.modules.sparse.Embedding=None, dropout:torch.nn.modules.dropout.Dropout=None, return_packed_output=False)
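A hypothetical call sketch based only on the parameter names above; the return value and the packing behavior are assumptions, not documented here::

    import torch
    from rnnlib import LayerNormLSTM, create_lstm_init_state, forward_rnn  # assumed imports

    rnn = LayerNormLSTM(input_size=32, hidden_size=64, num_layers=1)
    init_state, _init_state = create_lstm_init_state(
        num_layers=1, num_directions=1, hidden_size=64)   # assumed return order
    x = torch.randn(10, 4, 32)             # (seq_len, batch, input_size)
    lengths = torch.tensor([10, 7, 7, 3])  # true length of each sequence in the batch
    # embedding / dropout are optional; return_packed_output=False is assumed to
    # yield padded output rather than a PackedSequence.
    result = forward_rnn(rnn, init_state, x, lengths, batch_first=False,
                         return_packed_output=False)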
LayerNormLSTM
LayerNormLSTM (input_size, hidden_size, num_layers=1, batch_first=False, dropout=0, r_dropout=0, bidirectional=False, layer_norm_enabled=True)
Wrapper of RNNFrame. The ‘for_lstm’ option is always ‘True’.
LayerNormLSTMCell
LayerNormLSTMCell (input_size, hidden_size, dropout=None, layer_norm_enabled=True, cell_ln=None)
It’s based on tf.contrib.rnn.LayerNormBasicLSTMCell Reference: - https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LayerNormBasicLSTMCell - https://github.com/tensorflow/tensorflow/blob/r1.12/tensorflow/contrib/rnn/python/ops/rnn_cell.py#L1335
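The idea borrowed from LayerNormBasicLSTMCell is to layer-normalize the gate pre-activations and the new cell state before the nonlinearities. A rough illustrative sketch of that placement, not this class's actual code (the TF reference normalizes each gate separately; this sketch uses a single norm over the concatenated gates for brevity)::

    import torch
    import torch.nn as nn

    def layer_norm_lstm_step(x_t, h_prev, c_prev, W_x, W_h, ln_gates, ln_cell):
        """One layer-normalized LSTM step (hypothetical helper for illustration).

        e.g. ln_gates = nn.LayerNorm(4 * hidden_size), ln_cell = nn.LayerNorm(hidden_size)
        """
        gates = ln_gates(x_t @ W_x + h_prev @ W_h)        # LayerNorm on gate pre-activations
        i, f, g, o = gates.chunk(4, dim=-1)
        c_t = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h_t = torch.sigmoid(o) * torch.tanh(ln_cell(c_t))  # LayerNorm on the new cell state
        return h_t, c_t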
LayerNormRNNCell
LayerNormRNNCell (input_size, hidden_size, layer_norm_enabled=True)
An RNN cell with optional layer normalization (a torch.nn.Module subclass).