Dr. Zhongliang Guo

AI Security Researcher

University of St Andrews


Research Interests

Trustworthy AI
Adversarial Attack
Computer Vision

About

I served as a Research Fellow at the University of St Andrews, first working with Prof. Duncan Robertson and Dr. Samiur Rahman on ML-driven low-altitude airspace safety, and later with Dr. Juan Ye on trustworthy LLMs. I received my Ph.D. in Computer Science from the University of St Andrews within 2.5 years, supervised by Dr. Ognjen Arandjelović and Dr. Lei Fang. I also collaborated closely with Prof. Chun Pong Lau, Dr. Yifei Qian, and Dr. Shuai Zhao during my Ph.D.

My research centers on Trustworthy AI, especially Adversarial Attack and Generative AI Misuse Prevention. I have served as a reviewer for numerous prestigious journals and conferences, including IEEE T-IFS, Pattern Recognition, Information Sciences, NeurIPS, ICLR, CVPR, and AAAI. I am also the Leading Guest Editor for the Pattern Recognition special issue "Security and Trustworthiness in Pattern Recognition: Attacks, Evaluations, and Defenses".

Selected Publications


A Grey-box Attack against Latent Diffusion Model-based Image Editing by Posterior Collapse

Zhongliang Guo, Chun Tong Lei, Lei Fang, Shuai Zhao, Yifei Qian, Jingyu Lin, Zeyu Wang, Cunjian Chen, Ognjen Arandjelović, Chun Pong Lau

IEEE Transactions on Information Forensics and Security (IEEE T-IFS), 2026

A VAE-targeted adversarial protection framework that leverages posterior collapse phenomena to prevent unauthorized image manipulation in latent diffusion models with minimal computational overhead.

Artwork Protection Against Unauthorized Neural Style Transfer and Aesthetic Color Distance Metric

Zhongliang Guo, Yifei Qian, Shuai Zhao, Junhao Dong, Yanli Li, Ognjen Arandjelović, Lei Fang, Chun Pong Lau

Pattern Recognition, 2026

A proactive protection method using adaptive adversarial perturbations to prevent unauthorized neural style transfer of artwork while preserving visual quality and resisting purification-based defenses.

Artwork Protection Against Neural Style Transfer Using Locally Adaptive Adversarial Color Attack

Zhongliang Guo, Yifei Qian, Kaixuan Wang, Weiye Li, Ziheng Guo, Yuheng Wang, Yanli Li, Ognjen Arandjelović, Lei Fang

27th European Conference on Artificial Intelligence (ECAI Oral), 2024

A proactive protection method using frequency-adaptive perturbations to prevent unauthorized neural style transfer while preserving visual quality of original artwork.

A White-Box False Positive Adversarial Attack Method on Contrastive Loss-Based Offline Handwritten Signature Verification Models

Zhongliang Guo, Weiye Li, Yifei Qian, Ognjen Arandjelović, Lei Fang

International Conference on Artificial Intelligence and Statistics (AISTATS), 2024

A novel white-box adversarial attack on signature verification models using style transfer with specialized loss functions to manipulate embedding distances while maintaining visual similarity.

T2ICount: Enhancing Cross-modal Understanding for Zero-Shot Counting

Yifei Qian*, Zhongliang Guo*, Bowen Deng, Chun Tong Lei, Shuai Zhao, Chun Pong Lau, Xiaopeng Hong, Michael P Pound

Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR Highlight), 2025

A diffusion-based zero-shot object counting framework that enhances text sensitivity through hierarchical semantic correction and cross-attention supervision for fine-grained counting.

News

2025-12

I officially received my PhD degree at the graduation ceremony! 🎓

2025-11

One paper about unauthorized image editing prevention is accepted by IEEE Transactions on Information Forensics and Security

2025-11

One paper about facial privacy protection is accepted by Pattern Recognition

2025-11

One paper about medical LLMs is accepted by IEEE Transactions on Affective Computing

2025-10

I started serving as the Leading Guest Editor for a special issue of Pattern Recognition

2025-07

One paper about digital intellectual property protection is accepted by Pattern Recognition

2025-06

I passed my PhD viva with only typo corrections 🎉

2025-04

I submitted my PhD thesis

2025-04

One paper about federated learning security is accepted by IEEE Transactions on Neural Networks and Learning Systems

2025-04

One CVPR paper is selected as highlight paper

2025-03

One paper about LLM backdoor attack is accepted by Information Fusion

2025-02

Two papers are accepted by CVPR 2025

2025-02

One paper about LLM backdoor attack is accepted by Transactions on Machine Learning Research