Gershgorin Loss Stabilizes the Recurrent Neural Network Compartment of an End-to-end Robot Learning Scheme
Mathias Lechner, Ramin M. Hasani, Daniela Rus, Radu Grosu. Published in: ICRA (2020)
Keyphrases
- end to end
- learning scheme
- recurrent neural networks
- learning algorithm
- neural network
- feed forward
- recurrent networks
- complex valued
- multipath
- reservoir computing
- feedforward neural networks
- admission control
- neural model
- transport layer
- artificial neural networks
- internet protocol
- congestion control
- hidden layer
- echo state networks
- robot manipulators
- wireless ad hoc networks
- rule learning
- real time
- application layer
- scalable video
- video sequences
- training data
- error propagation
- adaptive neural
- humanoid robot