Charbel Sakr

Publication Activity (10 Years)
Years Active: 1997-2023
Publications (10 Years): 26

Top Topics: Fundamental Limits, Deep Learning, Neural Network, Learning Algorithm
Top Venues: CoRR, ICASSP, ICLR (Poster), IEEE J. Solid State Circuits
Publications
Ben Keller, Rangharajan Venkatesan, Steve Dai, Stephen G. Tell, Brian Zimmer, Charbel Sakr, William J. Dally, C. Thomas Gray, Brucek Khailany: A 95.6-TOPS/W Deep Learning Inference Accelerator With Per-Vector Scaled 4-bit Quantization in 5 nm. IEEE J. Solid State Circuits 58 (4) (2023)

Yu-Shun Hsiao, Siva Kumar Sastry Hari, Balakumar Sundaralingam, Jason Yik, Thierry Tambe, Charbel Sakr, Stephen W. Keckler, Vijay Janapa Reddi: VaPr: Variable-Precision Tensors to Accelerate Robot Motion Planning. IROS (2023)

Yu-Shun Hsiao, Siva Kumar Sastry Hari, Balakumar Sundaralingam, Jason Yik, Thierry Tambe, Charbel Sakr, Stephen W. Keckler, Vijay Janapa Reddi: VaPr: Variable-Precision Tensors to Accelerate Robot Motion Planning. CoRR (2023)

Sujan K. Gonugondla, Charbel Sakr, Hassan Dbouk, Naresh R. Shanbhag: Fundamental Limits on Energy-Delay-Accuracy of In-Memory Architectures in Inference Applications. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 41 (10) (2022)

Charbel Sakr, Steve Dai, Rangharajan Venkatesan, Brian Zimmer, William J. Dally, Brucek Khailany: Optimal Clipping and Magnitude-aware Differentiation for Improved Quantization-aware Training. ICML (2022)

Charbel Sakr, Steve Dai, Rangharajan Venkatesan, Brian Zimmer, William J. Dally, Brucek Khailany: Optimal Clipping and Magnitude-aware Differentiation for Improved Quantization-aware Training. CoRR (2022)

Abdulrahman Mahmoud, Siva Kumar Sastry Hari, Christopher W. Fletcher, Sarita V. Adve, Charbel Sakr, Naresh R. Shanbhag, Pavlo Molchanov, Michael B. Sullivan, Timothy Tsai, Stephen W. Keckler: Optimizing Selective Protection for CNN Resilience. ISSRE (2021)

Charbel Sakr, Naresh R. Shanbhag: Signal Processing Methods to Enhance the Energy Efficiency of In-Memory Computing Architectures. IEEE Trans. Signal Process. 69 (2021)

Hassan Dbouk, Sujan K. Gonugondla, Charbel Sakr, Naresh R. Shanbhag: A 0.44-μJ/dec, 39.9-μs/dec, Recurrent Attention In-Memory Processor for Keyword Spotting. IEEE J. Solid State Circuits 56 (7) (2021)

Abdulrahman Mahmoud, Siva Kumar Sastry Hari, Christopher W. Fletcher, Sarita V. Adve, Charbel Sakr, Naresh R. Shanbhag, Pavlo Molchanov, Michael B. Sullivan, Timothy Tsai, Stephen W. Keckler: HarDNN: Feature Map Vulnerability Evaluation in CNNs. CoRR (2020)

Hassan Dbouk, Sujan K. Gonugondla, Charbel Sakr, Naresh R. Shanbhag: KeyRAM: A 0.34 uJ/decision 18 k decisions/s Recurrent Attention In-memory Processor for Keyword Spotting. CICC (2020)

Sujan K. Gonugondla, Charbel Sakr, Hassan Dbouk, Naresh R. Shanbhag: Fundamental Limits on the Precision of In-memory Architectures. ICCAD (2020)

Sujan Kumar Gonugondla, Charbel Sakr, Hassan Dbouk, Naresh R. Shanbhag: Fundamental Limits on Energy-Delay-Accuracy of In-memory Architectures in Inference Applications. CoRR (2020)

Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, Ankur Agrawal, Naresh R. Shanbhag, Kailash Gopalakrishnan: Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks. CoRR (2019)

Charbel Sakr, Yongjune Kim, Naresh R. Shanbhag: Minimum Precision Requirements of General Margin Hyperplane Classifiers. IEEE J. Emerg. Sel. Topics Circuits Syst. 9 (2) (2019)

Charbel Sakr, Naresh R. Shanbhag: Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm. ICLR (Poster) (2019)

Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, Ankur Agrawal, Naresh R. Shanbhag, Kailash Gopalakrishnan: Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks. ICLR (Poster) (2019)

Charbel Sakr, Naresh R. Shanbhag: An Analytical Method to Determine Minimum Per-Layer Precision of Deep Neural Networks. ICASSP (2018)

Charbel Sakr, Jungwook Choi, Zhuo Wang, Kailash Gopalakrishnan, Naresh R. Shanbhag: True Gradient-Based Training of Deep Binary Activated Neural Networks Via Continuous Binarization. ICASSP (2018)

Charbel Sakr, Naresh R. Shanbhag: Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm. CoRR (2018)

Charbel Sakr, Naresh R. Shanbhag: Minimum Precision Requirements for Deep Learning with Biomedical Datasets. BioCAS (2018)

Charbel Sakr, Yongjune Kim, Naresh R. Shanbhag: Analytical Guarantees on Numerical Precision of Deep Neural Networks. ICML (2017)

Charbel Sakr, Ameya D. Patil, Sai Zhang, Yongjune Kim, Naresh R. Shanbhag: Minimum precision requirements for the SVM-SGD learning algorithm. ICASSP (2017)

Yingyan Lin, Charbel Sakr, Yongjune Kim, Naresh R. Shanbhag: PredictiveNet: An energy-efficient convolutional neural network via zero prediction. ISCAS (2017)

Charbel Sakr, Ameya Patil, Sai Zhang, Naresh R. Shanbhag: Understanding the Energy and Precision Requirements for Online Learning. CoRR (2016)

Sai Zhang, Mingu Kang, Charbel Sakr, Naresh R. Shanbhag: Reducing the Energy Cost of Inference via In-sensor Information Processing. CoRR (2016)

Charbel Sakr, Terence D. Todd: Carrier-Sense Protocols for Packet-Switched Smart Antenna Basestations. Int. J. Wirel. Inf. Networks 7 (3) (2000)

Charbel Sakr, Terence D. Todd: Carrier-sense protocols for packet-switched smart antenna basestations. ICNP (1997)