From Imitation to Prediction, Data Compression vs Recurrent Neural Networks for Natural Language Processing.
Juan Andrés Laura, Gabriel Masi, Luis Argerich. Published in: CoRR (2017)
Keyphrases
- data compression
- recurrent neural networks
- natural language processing
- chaotic time series
- compression algorithm
- neural network
- compression scheme
- compression ratio
- recurrent networks
- feed-forward
- echo state networks
- reservoir computing
- data reduction
- wavelet compression
- Huffman coding
- machine learning
- information extraction
- artificial neural networks
- mixed data
- reinforcement learning
- neural model
- high compression
- wavelet filters
- cascade correlation
- natural language
- compressed data
- nonlinear dynamic systems
- original data
- artificial intelligence
- co-occurrence