In this story
Kritin Vongthongsri
Cofounder @ Confident AI | Empowering LLM practitioners with Evals | Previously AI/ML at Fintech Startup | ML + CS @ Princeton

The Comprehensive LLM Safety Guide: Navigate AI regulations and Best Practices for LLM Safety

November 3, 2024
·
15 min read
Presenting...
The open-source LLM evaluation framework.
Star on GitHub
The Comprehensive LLM Safety Guide: Navigate AI regulations and Best Practices for LLM Safety

With great power comes great responsibility. As LLMs become more powerful, they are entrusted with increasing autonomy. This means less human oversight, greater access to personal data, and an ever-expanding role in handling real-life tasks.

From managing weekly grocery orders to overseeing complex investment portfolios, LLMs present a tempting target for hackers and malicious actors eager to exploit them. Ignoring these risks could have serious ethical, legal, and financial repercussions. As pioneers of this technology, we have a duty to prioritize and uphold LLM safety.

Although much of this territory is uncharted, it’s not entirely a black box. Governments worldwide are stepping up with new AI regulations, and extensive research is underway to develop risk mitigation strategies and frameworks. Today, we’ll dive into these topics, covering:

  • What LLM Safety entails
  • Government AI regulations and their impact on LLMs
  • Key LLM vulnerabilities to watch out for
  • Current LLM safety research, including essential risk mitigation strategies and frameworks
  • Challenges in LLM safety and how Confident AI addresses these issues

What is LLM Safety?

LLM Safety combines practices, principles, and tools to ensure AI systems function as intended, focusing on aligning AI behavior with ethical standards to prevent unintended consequences and minimize harm.

LLM Safety, a specialized area within AI Safety, focuses on safeguarding Large Language Models, ensuring they function responsibly and securely. This includes addressing vulnerabilities like data protection, content moderation, and reducing harmful or biased outputs in real-world applications.

Government AI Regulations

Just a few months ago, the European Union’s Artificial Intelligence Act (AI Act) came into force, marking the first-ever legal framework for AI. By setting common rules and regulations, the Act ensures that AI applications across the EU are safe, transparent, non-discriminatory, and environmentally sustainable.

High-Risk AI Systems according to EU AI Act

Alongside the EU’s AI Act, other countries are also advancing their efforts to improve safety standards and establish regulatory frameworks for AI and LLMs. These initiatives include:

  • United States: AI Risk Management Framework by NIST (National Institute of Standards and Technology) and Executive Order 14110
  • United Kingdom: Pro-Innovation AI Regulation by DSIT (Department for Science, Innovation and Technology)
  • China: Generative AI Measures by CAC (Cyberspace Administration of China)
  • Canada: Artificial Intelligence and Data Act (AIDA) by ISED (Innovation, Science, and Economic Development Canada)
  • Japan: Draft AI Act by METI (Japan’s Ministry of Economy, Trade, and Industry)

EU Artificial Intelligence Act (EU)

The EU AI Act, which took effect in August 2024, provides a structured framework to ensure AI systems are used safely and responsibly across critical areas such as healthcare, public safety, education, and consumer protection.

Risk Levels of AI Systems according to EU AI Act (src: EU AI Act)

The EU AI Act categorizes AI applications into five risk levels, requiring organizations to adopt tailored measures for legal compliance that range from outright bans on high-risk systems to transparency and oversight requirements:

  1. Unacceptable Risk: AI applications deemed to pose serious ethical or societal risks, such as those manipulating behavior or using real-time biometric surveillance, are banned.
  2. High-Risk: AI applications in sensitive sectors like healthcare, law enforcement, and education require strict compliance measures. These include extensive transparency, safety protocols, and oversight, as well as a Fundamental Rights Impact Assessment to gauge potential risks to society and fundamental rights, ensuring that these AI systems do not inadvertently cause harm.
  3. General-Purpose AI: Added in 2023 to address foundation models like ChatGPT, this category mandates transparency and regular evaluations. These measures are especially important for general-purpose models, which can have widespread effects due to their versatility and influence in numerous high-impact areas.
  4. Limited Risk: For AI applications with moderate risk, such as deepfake generators, the Act requires transparency measures to inform users when they are interacting with AI. This protects user awareness and mitigates risks associated with potential misuse.
  5. Minimal Risk: Low-risk applications, such as spam filters, are exempt from strict regulations but may adhere to voluntary guidelines. This approach allows for innovation in low-impact AI applications without unnecessary regulatory burdens.

By tailoring regulations to the specific risk level of AI applications, the EU AI Act aims to foster innovation while protecting fundamental rights, public safety, and ethical standards across diverse sectors.

NIST AI Risk Management Framework (US)

The NIST AI Risk Management Framework, published in January 2023, is a voluntary guideline to help organizations manage AI risks. The framework is organized around four core functions:

  1. Map: Identifying AI system contexts, objectives, and potential impacts to understand risk areas.
  2. Measure: Quantifying risks to assess reliability, privacy, fairness, and resilience in AI models.
  3. Manage: Implementing strategies to handle and reduce risks, including ongoing monitoring.
  4. Govern: Establishing oversight mechanisms to ensure compliance and continuous improvement.

The framework covers essential vulnerabilities such as data privacy, robustness, and fairness, offering a structured, adaptable approach for AI developers and stakeholders to ensure ethical and reliable AI deployment. For more detailed insights, the full document is available here.

Pro-Innovation AI Regulation (UK)

In contrast to other regulatory frameworks, the UK’s Pro-Innovation AI Regulation prioritizes fostering innovation while managing risks. The UK’s Pro-Innovation AI Regulation was proposed in March 2023 and its key elements include:

  • Principles-Based Guidance: Flexible standards like safety and transparency, allowing regulators to adapt rules to each sector.
  • Central Coordination: A central body to align efforts, monitor risks, and support regulator collaboration across sectors.
  • Regulatory Sandboxes: Safe testing environments where innovators can develop AI solutions under real-world conditions with regulatory support.

Unlike more prescriptive approaches, this framework supports growth in the AI sector by allowing flexibility for context-specific applications.

Generative AI Measures (China)

China’s Generative AI Measures, enacted on August 15, 2023, set out specific requirements for public-facing generative AI services. Key elements include:

  1. Content Moderation: Generative AI outputs must comply with government standards, with providers responsible for monitoring and removing harmful content.
  2. Data Governance: Providers are required to use secure, lawful, and high-quality training data.
  3. User Rights and Privacy: AI services must protect user data and rights, labeling content and ensuring transparency.

These regulations aim to promote both safe AI innovation and public trust in generative technologies.

Top LLM Vulnerabilities

LLM Vulnerabilities refer to specific risks within large language models that could lead to ethical, security, or operational failures. These vulnerabilities, if unaddressed, may cause harmful, biased, or unintended outputs.

LLM vulnerabilities can be grouped into five core risk categories:

  • Responsible AI Risks
  • Illegal Activities Risks
  • Brand Image Risks
  • Data Privacy Risks
  • Unauthorized Access Risks

Responsible AI Risks involve vulnerabilities like biases and toxicity, including racial discrimination or offensive language. While not necessarily illegal, these risks can misalign with ethical standards and may offend, mislead, or even radicalize users. Illegal Activities Risks cover harmful vulnerabilities that could lead the LLM to discuss violent crime, cybercrime, sex crimes, or other illegal activities. This category ensures AI output aligns with legal standards.

Brand Image Risks protect an organization’s reputation, addressing issues like misinformation or unauthorized references to competitors. This category prevents AI from producing misleading or off-brand content, helping to maintain credibility.

Data Privacy Risks focus on preventing the accidental disclosure of confidential information, such as personally identifiable information (PII), database credentials, or API keys. In contrast, Unauthorized Access Risks involve vulnerabilities allowing unauthorized system access, like SQL injections or shell command generation, which don’t necessarily lead to data leaks but could enable harmful actions. These risks protect systems by securing access and preventing malicious exploitation of AI outputs.

To effectively address these risks and vulnerabilities, conducting a red-teaming assessment your LLM is essential. This process involves generating baseline attacks, then enhancing them with specialized techniques like jailbreaking, prompt injection, or ROT13 encoding. For more detailed insights, check out this great article on red-teaming LLMs if you’re interested in deepening your approach.

Confident AI: The LLM Evaluation Platform

The all-in-one platform to evaluate and test LLM applications on the cloud, fully integrated with DeepEval.

Regression test and evaluate LLM apps on the cloud.
LLM evaluation metrics for ANY use case.
Real-time LLM observability and tracing.
Automated human feedback collection.
Generate evaluation datasets on the cloud.
LLM security, risk, and vulnerability scanning.

LLM Safety Research and Mitigation Frameworks

Benchmarks

Benchmarks serve as standardized tests to assess LLMs for vulnerabilities such as bias, toxicity, and robustness, offering developers a way to measure and track improvements in model safety.

RealToxicityPrompts (src: Gehman et al)

For example, datasets like RealToxicityPrompts help identify areas where a model might produce harmful content, while bias benchmarks allow models to be tested across various demographic categories to ensure equitable responses.

Through regular testing and comparison, benchmarks highlight areas needing improvement, enabling developers to refine models continually for ethical alignment.

Responsible AI Scaling (Anthropic)

Anthropic’s approach to responsible AI scaling is guided by the AI Safety Levels (ASL) framework, similar to biosafety levels in biological research. Each ASL defines capability thresholds, requiring increasingly stringent safety protocols as AI power advances.

AI Safety Levels according to Anthropic (src: Anthropic)

Levels and Thresholds: Currently, the ASL framework outlines ASL-2 for existing AI and ASL-3 for higher-risk, future models, with stricter safety standards.

Key Risk Types:

  • Deployment Risks: Concerns about the active, real-world use of powerful AI.
  • Containment Risks: Risks associated with merely possessing an advanced AI model.

Evaluation Protocol: Anthropic conducts regular assessments, pausing model training if safety thresholds are crossed. This system of gradual scaling and thorough evaluation provides flexibility while keeping safety at the forefront.

Frontier Safety Framework (Google DeepMind)

To address potential risks of high-impact AI, Google DeepMind’s Frontier Safety Framework identifies critical thresholds, called Critical Capability Levels (CCLs), where models may pose increased risks.

Google DeepMind’s Frontier Safety Framework (src: Google DeepMind)

Risk Domains:

  • Autonomy: Examines risks related to independent decision-making.
  • Biosecurity: Mitigates potential misuse in health-related applications.
  • Cybersecurity: Ensures model resilience against digital attacks.
  • Machine Learning R&D: Focuses on research integrity.

Two-Pronged Mitigation:

  • Security Mitigations: Prevents unauthorized access to model data and capabilities.
  • Deployment Mitigations: Limits and monitors model interactions in real-world applications.

Regular Evaluation: To maintain safety as AI evolves, DeepMind conducts evaluations every three months of fine-tuning or at significant increases in compute, ensuring safeguards keep pace with advancements.

Llama Guard (Meta)

Meta’s Llama Guard is an AI safety model tailored for LLM moderation, especially in user-facing applications where interaction safety is crucial. Llama Guard provides:

Response and Prompt classifications (src: Meta LLama Guard)
  • Dual Classification: The model assesses both prompts (user inputs) and responses, identifying and managing risky content from both ends.
  • Safety Risk Taxonomy: A comprehensive classification system identifies different risk categories, from hate speech to misinformation.
  • Flexible Adaptability: Capable of zero-shot and few-shot learning, Llama Guard adapts to new policies or use cases without requiring extensive retraining.
  • Multi-Layered Classification: Includes binary “safe” or “unsafe” tagging, as well as multi-category classifications that flag specific issues, making it useful for diverse applications like customer service, content moderation, and legal advice.

Moderation API (OpenAI)

The Moderation API by OpenAI is designed to provide real-time content filtering for applications, analyzing outputs for inappropriate or harmful language. It works as a plug-and-play solution for developers looking to integrate safety checks into their AI applications.

  • Key Filtering Categories: Screens for content involving hate speech, self-harm, sexual content, and more, allowing developers to comply with safety policies.
  • Customizable for Context: The API provides binary “flagged” indicators, along with category-specific flags and confidence scores, which can be adapted to different contexts based on application needs.
  • Supporting Safer AI Deployment: By offering these granular insights, OpenAI allows developers to take informed actions in moderating AI outputs, promoting ethical and safe AI use across applications.

Challenges in Maintaining LLM Safety

There are several challenges in maintaining LLM safety in a production, high-scale environment, including:

  1. Limited Tools for Transparency and End-User Trust: While interpretable visualizations and decision-tracing tools help build user trust by explaining model outputs, these tools are scarce, limiting transparency in LLM decision-making. End-users need clearer insights into AI-generated content to engage with LLMs safely and confidently.
  2. Human-in-the-Loop (HITL) Constraints: HITL systems offer real-time oversight, crucial for sensitive applications in healthcare or finance. However, most current solutions lack scalability and are labor-intensive, hindering broader adoption in high-stakes environments.
  3. Continuous Feedback and Adaptation Gaps: Feedback systems that adapt models based on user interactions help LLMs avoid repeating errors and maintain alignment with evolving standards. However, such systems are limited, especially across diverse user environments.
  4. Environment-Specific Solutions: Current tools like Meta’s Llama Guard are effective but often limited to specific production environments, lacking broad applicability across varied deployments. These tools also address only select vulnerabilities, limiting comprehensive safety assessment.
  5. Exclusive Moderation Ecosystems: OpenAI’s Moderation API works only within its ecosystem, leaving other LLM providers without robust content-filtering options. This restricts developers who rely on external tools for moderation.
  6. Absence of Centralized Risk Management: The market lacks a single, integrated platform that addresses multiple LLM safety concerns — responsible AI, illegal activities, brand integrity, data privacy, and unauthorized access.

Using Confident AI to Maintain LLM Safety

To address these issues, Confident AI offers comprehensive vulnerability and production monitoring across use cases, including but not limited to LLM chatbots, RAG, and Text-to-SQL applications. It provides real-time safety guardrails for detecting security issues in production and supports customization for any use case or metric. Additionally, Confident AI integrates human-in-the-loop processes and tracing for enhanced security and oversight in real-time deployments.

You can book a free demo to see how it works in action here.

Conclusion

Today, we’ve delved into the fundamentals of LLM security, exploring critical areas such as data, model, infrastructure, and ethical considerations. We discussed various vulnerabilities, including misinformation, bias, and technical risks like SQL injection and PII leaks.

We also examined the importance of detecting these vulnerabilities through benchmarks and red-teaming methods, including advanced techniques like dialogue-based jailbreaking and synthetic data generation, and how DeepEval simplifies this process into a few lines of code. Moreover, we highlighted how effective monitoring in production, using platform like Confident AI can provide real-time vulnerability evaluations to safeguard your systems against emerging threats. This comprehensive approach ensures not only the robustness of your models but also their safe and ethical deployment.

* * * * *

Do you want to brainstorm how to evaluate your LLM (application)? Ask us anything in our discord. I might give you an “aha!” moment, who knows?

Confident AI: The LLM Evaluation Platform

The all-in-one platform to evaluate and test LLM applications on the cloud, fully integrated with DeepEval.

Regression test and evaluate LLM apps on the cloud.
LLM evaluation metrics for ANY use case.
Real-time LLM observability and tracing.
Automated human feedback collection.
Generate evaluation datasets on the cloud.
LLM security, risk, and vulnerability scanning.
Kritin Vongthongsri
Cofounder @ Confident AI | Empowering LLM practitioners with Evals | Previously AI/ML at Fintech Startup | ML + CS @ Princeton

Stay Confident

Subscribe to our weekly newsletter to stay confident in the AI systems you build.

Thank you! You're now subscribed to Confident AI's weekly newsletter.
Oops! Something went wrong while submitting the form.