Daiyi Peng
Google Research, Brain Team
Verified email at google.com
Title · Cited by · Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 16122 · 2023
DeepFusion: Lidar-camera deep fusion for multi-modal 3D object detection
Y Li, AW Yu, T Meng, B Caine, J Ngiam, D Peng, J Shen, Y Lu, D Zhou, ...
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2022
Cited by 359 · 2022
Domain adaptive transfer learning with specialist models
J Ngiam, D Peng, V Vasudevan, S Kornblith, QV Le, R Pang
arXiv preprint arXiv:1811.07056, 2018
Cited by 136 · 2018
Evolving reinforcement learning algorithms
JD Co-Reyes, Y Miao, D Peng, E Real, S Levine, QV Le, H Lee, A Faust
arXiv preprint arXiv:2101.03958, 2021
Cited by 93 · 2021
AutoHAS: Efficient hyperparameter and architecture search
X Dong, M Tan, AW Yu, D Peng, B Gabrys, QV Le
arXiv preprint arXiv:2006.03656, 2020
Cited by 44 · 2020
Rethinking co-design of neural architectures and hardware accelerators
Y Zhou, X Dong, B Akin, M Tan, D Peng, T Meng, A Yazdanbakhsh, ...
arXiv preprint arXiv:2102.08619, 2021
Cited by 33 · 2021
Towards NNGP-guided neural architecture search
DS Park, J Lee, D Peng, Y Cao, J Sohl-Dickstein
arXiv preprint arXiv:2011.06006, 2020
Cited by 33 · 2020
PyGlove: Symbolic programming for automated machine learning
D Peng, X Dong, E Real, M Tan, Y Lu, G Bender, H Liu, A Kraft, C Liang, ...
Advances in Neural Information Processing Systems 33, 96-108, 2020
Cited by 32 · 2020
AutoHAS: Differentiable hyper-parameter and architecture search
X Dong, M Tan, AW Yu, D Peng, B Gabrys, QV Le
arXiv preprint arXiv:2006.03656 4 (5), 2020
Cited by 28 · 2020
Towards the co-design of neural networks and accelerators
Y Zhou, X Dong, T Meng, M Tan, B Akin, D Peng, A Yazdanbakhsh, ...
Proceedings of Machine Learning and Systems 4, 141-152, 2022
Cited by 18 · 2022
Brainformers: Trading simplicity for efficiency
Y Zhou, N Du, Y Huang, D Peng, C Lan, D Huang, S Shakeri, D So, ...
International Conference on Machine Learning, 42531-42542, 2023
Cited by 16 · 2023
Higher layers need more lora experts
C Gao, K Chen, J Rao, B Sun, R Liu, D Peng, Y Zhang, X Guo, J Yang, ...
arXiv preprint arXiv:2402.08562, 2024
Cited by 15 · 2024
ES-ENAS: combining evolution strategies with neural architecture search at no extra cost for reinforcement learning
X Song, K Choromanski, J Parker-Holder, Y Tang, D Peng, D Jain, W Gao, ...
CoRR, abs/2101.07415, 2021
Cited by 9 · 2021
RL-DARTS: differentiable architecture search for reinforcement learning
Y Miao, X Song, D Peng, S Yue, JD Co-Reyes, E Brevdo, A Faust
Cited by 9 · 2021
Training machine learning models using adaptive transfer learning
V Vasudevan, R Pang, QV Le, D Peng, J Ngiam, S Kornblith
US Patent App. 16/586,675, 2020
Cited by 8 · 2020
Long-form factuality in large language models
J Wei, C Yang, X Song, Y Lu, N Hu, D Tran, D Peng, R Liu, D Huang, ...
arXiv preprint arXiv:2403.18802, 2024
Cited by 6 · 2024
OmniPred: Language models as universal regressors
X Song, O Li, C Lee, D Peng, S Perel, Y Chen
arXiv preprint arXiv:2402.14547, 2024
Cited by 5 · 2024
LayerNAS: Neural architecture search in polynomial complexity
Y Fan, D Alon, J Shen, D Peng, K Kumar, Y Long, X Wang, F Iliopoulos, ...
arXiv preprint arXiv:2304.11517, 2023
Cited by 5 · 2023
Best practices and lessons learned on synthetic data for language models
R Liu, J Wei, F Liu, C Si, Y Zhang, J Rao, S Zheng, D Peng, D Yang, ...
arXiv preprint arXiv:2404.07503, 2024
Cited by 4 · 2024
Differentiable architecture search for reinforcement learning
Y Miao, X Song, JD Co-Reyes, D Peng, S Yue, E Brevdo, A Faust
International Conference on Automated Machine Learning, 20/1-17, 2022
Cited by 3 · 2022
Articles 1–20