Exploring Neural Network Structure through Sparse Recurrent Neural Networks: A Recasting and Distillation of Neural Network Hyperparameters.
Quincy Hershey, Randy C. Paffenroth, Harsh Nilesh Pathak
Published in: ICMLA (2023)
Keyphrases
- recurrent neural networks
- neural network structure
- hyperparameters
- neural network
- model selection
- cross validation
- closed form
- Bayesian inference
- support vector
- hidden layer
- Bayesian framework
- maximum likelihood
- random sampling
- prior information
- EM algorithm
- Gaussian process
- noise level
- feed-forward
- sample size
- incremental learning
- maximum a posteriori
- artificial neural networks
- neural model
- Gaussian processes
- incomplete data
- high dimensional
- missing values
- multi-layer perceptron
- machine learning
- learning algorithm
- feature selection
- feature space
- expectation maximization
- support vector machine
- prior knowledge