| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Understanding individual decisions of CNNs via contrastive backpropagation | J Gu, Y Yang, V Tresp | 14th Asian Conference on Computer Vision (ACCV), 119-134 | 122 | 2019 |
| A systematic survey of prompt engineering on vision-language foundation models | J Gu, Z Han, S Chen, A Beirami, B He, G Zhang, R Liao, Y Qin, V Tresp, ... | arXiv preprint arXiv:2307.12980 | 96 | 2023 |
| SegPGD: An effective and efficient adversarial attack for evaluating and boosting segmentation robustness | J Gu, H Zhao, V Tresp, PHS Torr | European Conference on Computer Vision (ECCV), 308-325 | 69 | 2022 |
| Improving the robustness of capsule networks to image affine transformations | J Gu, V Tresp | IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 7285-7293 | 64 | 2020 |
| Are vision transformers robust to patch perturbations? | J Gu, V Tresp, Y Qin | European Conference on Computer Vision (ECCV), 404-421 | 60 | 2022 |
| Towards efficient adversarial training on vision transformers | B Wu*, J Gu*, Z Li, D Cai, X He, W Liu | European Conference on Computer Vision (ECCV), 307-325 | 43 | 2022 |
| Backdoor defense via adaptively splitting poisoned dataset | K Gao, Y Bai, J Gu, Y Yang, ST Xia | IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4005-4014 | 41 | 2023 |
| Interpretable graph capsule networks for object recognition | J Gu | Proceedings of the AAAI Conference on Artificial Intelligence 35 (2), 1469-1477 | 39 | 2021 |
| Capsule network is not more robust than convolutional network | J Gu, V Tresp, H Hu | IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 14309-14317 | 36 | 2021 |
| Understanding bias in machine learning | J Gu, D Oelke | Workshop on Visualization for AI Explainability, IEEE VIS | 35 | 2019 |
| Effective and efficient vote attack on capsule networks | J Gu, B Wu, V Tresp | International Conference on Learning Representations (ICLR) | 34 | 2021 |
| Saliency methods for explaining adversarial attacks | J Gu, V Tresp | Workshop on Human-Centric Machine Learning, NeurIPS 2019 | 34 | 2019 |
| Attacking adversarial attacks as a defense | B Wu, H Pan, L Shen, J Gu, S Zhao, Z Li, D Cai, X He, W Liu | arXiv preprint arXiv:2106.04938 | 32 | 2021 |
| MM-SafetyBench: A benchmark for safety evaluation of multimodal large language models | X Liu, Y Zhu, J Gu, Y Lan, C Yang, Y Qiao | European Conference on Computer Vision (ECCV) | 30* | 2024 |
| Search for better students to learn distilled knowledge | J Gu, V Tresp | European Conference on Artificial Intelligence (ECAI), 1159-1165 | 26 | 2020 |
| Watermark vaccine: Adversarial attacks to prevent watermark removal | X Liu, J Liu, Y Bai, J Gu, T Chen, X Jia, X Cao | European Conference on Computer Vision (ECCV), 1-17 | 24 | 2022 |
| FRAug: Tackling federated learning with non-IID features via representation augmentation | H Chen, A Frikha, D Krompass, J Gu, V Tresp | International Conference on Computer Vision (ICCV), 4849-4859 | 20 | 2023 |
| An image is worth 1000 lies: Adversarial transferability across prompts on vision-language models | H Luo*, J Gu*, F Liu, P Torr | International Conference on Learning Representations (ICLR) | 19* | 2024 |
| A survey on transferability of adversarial examples across deep neural networks | J Gu, X Jia, P de Jorge, W Yu, X Liu, A Ma, Y Xun, A Hu, A Khakzar, Z Li, ... | Transactions on Machine Learning Research (TMLR) | 18 | 2023 |
| FedDAT: An approach for foundation model finetuning in multi-modal heterogeneous federated learning | H Chen, Y Zhang, D Krompass, J Gu, V Tresp | Proceedings of the AAAI Conference on Artificial Intelligence 38 (10), 11285 … | 17 | 2024 |