Complex-valued Neurons Can Learn More but Slower than Real-valued Neurons via Gradient Descent
Jin-Hui Wu, Shao-Qun Zhang, Yuan Jiang, Zhi-Hua Zhou
Published in: NeurIPS (2023)
Keyphrases
- complex valued
- real valued
- activation function
- distributed representations
- neural network
- hidden layer
- gabor transform
- real valued data
- recurrent neural networks
- artificial neural networks
- linear equations
- neural nets
- learning rate
- feed forward
- latent variables
- network architecture
- training phase
- document retrieval
- back propagation
- search engine
- genetic algorithm
- information retrieval