Are Transformers with One Layer Self-Attention Using Low-Rank Weight Matrices Universal Approximators?
Tokio Kajitsuka
Issei Sato
Published in: CoRR (2023)
Keyphrases
low rank
weight matrices
missing data
convex optimization
linear combination
singular value decomposition
matrix completion
matrix factorization
low rank matrix
rank minimization
semi supervised
kernel matrix
high dimensional data
weight matrix
high order
trace norm
pairwise
face recognition
feature space