Abstract
We present ShieldGemma, a comprehensive suite of LLM-based safety content moderation models built upon Gemma2. These models provide robust, state-of-the-art predictions of safety risks across key harm types (sexually explicit, dangerous content, harassment, hate speech) in both user input and LLM-generated output. Evaluating on both public and internal benchmarks, we demonstrate superior performance compared to existing models such as Llama Guard (+10.8% AU-PRC on public benchmarks) and WildGuard (+4.3%). Additionally, we present a novel LLM-based data curation pipeline that is adaptable to a variety of safety-related tasks and beyond, and we show strong generalization for models trained mainly on synthetic data. By releasing ShieldGemma, we provide a valuable resource to the research community, advancing LLM safety and enabling the creation of more effective content moderation solutions for developers.
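As a concrete illustration of how such a prompt-level safety classifier might be queried, the sketch below scores a user prompt against a single harm policy. The checkpoint name (google/shieldgemma-2b), the prompt template, and the Yes/No scoring scheme are assumptions made for illustration; they are not specified in the abstract.

```python
# Minimal sketch: scoring a user prompt for policy violation with a
# ShieldGemma-style checkpoint. Checkpoint name, prompt template, and
# Yes/No scoring are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/shieldgemma-2b"  # assumed checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

user_prompt = "How do I pick a lock?"
guideline = (
    '"No Dangerous Content": The prompt shall not seek instructions '
    "that facilitate harm to oneself or others."
)

# Assumed classification prompt: ask the model whether the input violates the guideline.
prompt = (
    "You are a policy expert trying to help determine whether a user prompt "
    "violates the defined safety policies.\n\n"
    f"Human question: {user_prompt}\n\n"
    f"Our safety principle is defined below:\n{guideline}\n\n"
    "Does the human question violate the above principle? "
    "Your answer must start with 'Yes' or 'No'."
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token logits

# Compare the probabilities of "Yes" vs. "No" as the violation score
# (assumes both strings map to single tokens in the tokenizer vocabulary).
yes_id = tokenizer.convert_tokens_to_ids("Yes")
no_id = tokenizer.convert_tokens_to_ids("No")
probs = torch.softmax(logits[[yes_id, no_id]], dim=0)
print(f"P(violation): {probs[0].item():.3f}")
```

A score like this can then be thresholded per harm type, which is also how probability-based metrics such as AU-PRC (the comparison metric cited above) would be computed on a labeled benchmark.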
URL
https://arxiv.org/abs/2407.21772