Mohammad Emtiyaz Khan
Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo
Fast and scalable Bayesian deep learning by weight-perturbation in Adam
M Khan, D Nielsen, V Tangkaratt, W Lin, Y Gal, A Srivastava
International Conference on Machine Learning, 2611-2620, 2018
Practical Deep Learning with Bayesian Principles
K Osawa, S Swaroop, A Jain, R Eschenhagen, RE Turner, R Yokota, ...
arXiv preprint arXiv:1906.02506, 2019
AI for social good: unlocking the opportunity for positive impact
N Tomašev, J Cornebise, F Hutter, S Mohamed, A Picciariello, B Connelly, ...
Nature Communications 11 (1), 2468, 2020
Conjugate-computation variational inference: Converting variational inference in non-conjugate models to inference in conjugate models
M Khan, W Lin
Artificial Intelligence and Statistics, 878-887, 2017
Continual deep learning by functional regularisation of memorable past
P Pan, S Swaroop, A Immer, R Eschenhagen, R Turner, MEE Khan
Advances in Neural Information Processing Systems 33, 4453-4464, 2020
SmarPer: Context-aware and automatic runtime-permissions for mobile devices
K Olejnik, I Dacosta, JS Machado, K Huguenin, ME Khan, JP Hubaux
2017 IEEE Symposium on Security and Privacy (SP), 1058-1076, 2017
Approximate Inference Turns Deep Networks into Gaussian Processes
MEE Khan, A Immer, E Abedi, M Korzepa
Advances in Neural Information Processing Systems, 3088-3098, 2019
Scalable marginal likelihood estimation for model selection in deep learning
A Immer, M Bauer, V Fortuin, G Rätsch, ME Khan
International Conference on Machine Learning, 4563-4573, 2021
Variational bounds for mixed-data factor analysis
MEE Khan, G Bouchard, KP Murphy, BM Marlin
Advances in Neural Information Processing Systems 23, 1108-1116, 2010
An expectation-maximization algorithm based Kalman smoother approach for event-related desynchronization (ERD) estimation from EEG
ME Khan, DN Dutt
IEEE transactions on biomedical engineering 54 (7), 1191-1198, 2007
SLANG: Fast structured covariance approximations for Bayesian deep learning with natural gradient
A Mishkin, F Kunstner, D Nielsen, M Schmidt, ME Khan
Advances in Neural Information Processing Systems, 6248-6258, 2018
Fast and simple natural-gradient variational inference with mixture of exponential-family approximations
W Lin, ME Khan, M Schmidt
International Conference on Machine Learning, 3992-4002, 2019
Fast yet simple natural-gradient descent for variational inference in complex models
ME Khan, D Nielsen
2018 International Symposium on Information Theory and Its Applications …, 2018
A Stick-Breaking Likelihood for Categorical Data Analysis with Latent Gaussian Models
ME Khan, S Mohamed, BM Marlin, KP Murphy
AISTATS, 610-618, 2012
The Bayesian Learning Rule
ME Khan, H Rue
arXiv preprint arXiv:2107.04562, 2021
Kullback-Leibler Proximal Variational Inference
ME Khan, P Baqué, F Fleuret, P Fua
Advances in Neural Information Processing Systems, 2015
Variational Message Passing with Structured Inference Networks
W Lin, N Hubacher, ME Khan
arXiv preprint arXiv:1803.05589, 2018
Variational imitation learning with diverse-quality demonstrations
V Tangkaratt, B Han, ME Khan, M Sugiyama
International Conference on Machine Learning, 9407-9417, 2020
TD-Regularized Actor-Critic Methods
S Parisi, V Tangkaratt, J Peters, ME Khan
arXiv preprint arXiv:1812.08288, 2018
Faster Stochastic Variational Inference using Proximal-Gradient Methods with General Divergence Functions
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
Uncertainty in Artificial Intelligence (UAI), 2016