  1. Recurrent Neural Networks

Architecture of RNN


The image below shows an RNN being unfolded into a full network. Here x_t is the input to the network at time t, and h_t is the hidden state at time t, also referred to as the memory of the network. It is calculated from the previous hidden state and the current input.

It is represented by:

h_t = f(U x_t + W h_{t-1} + b_h)

Here U and W are the weight matrices applied to the input and to the previous hidden state respectively, b_h is the bias of the hidden layer, and f is the non-linearity applied to the sum to generate the new hidden state.
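To make this concrete, below is a minimal NumPy sketch of a single step of the recurrence. The choice of tanh for the non-linearity f and all of the dimensions are illustrative assumptions, not specified by the text.

```python
import numpy as np

# One hidden-state update: h_t = f(U x_t + W h_{t-1} + b_h), with f assumed to be tanh.
input_size, hidden_size = 4, 3
rng = np.random.default_rng(0)

U = rng.standard_normal((hidden_size, input_size))   # weights applied to the input x_t
W = rng.standard_normal((hidden_size, hidden_size))  # weights applied to the previous state h_{t-1}
b_h = np.zeros(hidden_size)                          # bias of the hidden layer

x_t = rng.standard_normal(input_size)  # input at time t
h_prev = np.zeros(hidden_size)         # previous hidden state h_{t-1}

h_t = np.tanh(U @ x_t + W @ h_prev + b_h)  # new hidden state, the network's "memory"
print(h_t)  # a length-3 vector with entries in (-1, 1)
```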

The output at time t is calculated as shown below:

O_t = f(V h_t + b_o)

Here V is the weight matrix of the output layer and b_o is the bias for the output layer.
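Putting the two equations together gives the complete forward pass: the same parameters are reused at every time step, and an output is read from the hidden state at each step. The sketch below is a hedged illustration that assumes softmax as the output non-linearity and toy dimensions; neither choice comes from the text.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift by the max for numerical stability
    return e / e.sum()

def rnn_forward(xs, U, W, V, b_h, b_o):
    """Run a vanilla RNN over a sequence and return the output at every step."""
    h = np.zeros(W.shape[0])  # initial hidden state h_0
    outputs = []
    for x_t in xs:  # U, W, V are shared across all time steps
        h = np.tanh(U @ x_t + W @ h + b_h)    # h_t = f(U x_t + W h_{t-1} + b_h)
        outputs.append(softmax(V @ h + b_o))  # O_t = f(V h_t + b_o)
    return outputs

rng = np.random.default_rng(0)
input_size, hidden_size, output_size, T = 4, 3, 2, 5
U = rng.standard_normal((hidden_size, input_size))
W = rng.standard_normal((hidden_size, hidden_size))
V = rng.standard_normal((output_size, hidden_size))
b_h, b_o = np.zeros(hidden_size), np.zeros(output_size)

xs = rng.standard_normal((T, input_size))  # a toy sequence of T input vectors
for t, O_t in enumerate(rnn_forward(xs, U, W, V, b_h, b_o)):
    print(t, O_t)
```

Note that unfolding the network, as in the figure below, does not create new parameters: however long the sequence, the single set (U, W, V, b_h, b_o) is applied at every step.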

Figure: RNN Architecture (the network unfolded over time steps)

Now, having understood the maths behind the architecture of an RNN, let's see how to train the network.