Sequence generation error (SGE) minimization based deep neural networks training for text-to-speech synthesis.
Yuchen Fan, Yao Qian, Frank K. Soong, Lei He. Published in: INTERSPEECH (2015)
Keyphrases
- neural network
- text to speech synthesis
- training process
- feedforward neural networks (also: feed forward neural networks)
- error function
- training algorithm
- pattern recognition
- back propagation
- error rate
- recurrent neural networks
- error back propagation
- test set
- multi layer
- recurrent networks
- training phase
- multi layer perceptron
- deep architectures
- activation function
- backpropagation algorithm
- feed forward
- text to speech
- training set
- neural network model
- objective function
- neural nets
- neural network training
- neural network structure
- online learning
- supervised learning
- genetic algorithm