Security¶
Deploying chatbots to production environments requires due diligence, especially when LLMs are involved. At minimum, the following precautions should be considered:
Comprehensive security review
Additional rate limiting and abuse prevention
Monitoring and alerting for security violations
Regular security audits and penetration testing
Understanding of the OWASP Top 10 for LLM Applications
Overview¶
When using ChatterBot, you may want to add security scanning to protect against common vulnerabilities outlined in the OWASP Top 10 for LLM Applications.
ChatterBot does not include built-in security scanning. Instead, you can integrate third-party security tools such as llm-guard, Prompt-Guard, or another scanning solution at the application level to scan inputs before they reach the chatbot and outputs before they are shown to users.
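For example, scanning can be wrapped around the call to ``get_response()``. The following is a minimal sketch that assumes llm-guard's ``scan_prompt`` and ``scan_output`` helpers together with its ``PromptInjection`` and ``Sensitive`` scanners; the exact imports, scanner names, and return values depend on the scanning tool and version you choose.

```python
# Minimal sketch: wrap a ChatterBot instance with input and output scanning.
# Assumes llm-guard's scan_prompt / scan_output helpers and the PromptInjection
# and Sensitive scanners; adjust to whichever scanning tool you actually use.
from chatterbot import ChatBot
from llm_guard import scan_prompt, scan_output
from llm_guard.input_scanners import PromptInjection
from llm_guard.output_scanners import Sensitive

bot = ChatBot("SecureBot")

input_scanners = [PromptInjection()]
output_scanners = [Sensitive()]

def get_secure_response(user_text: str) -> str:
    # Scan the raw user input before it reaches the chatbot.
    sanitized_input, input_valid, input_scores = scan_prompt(input_scanners, user_text)
    if not all(input_valid.values()):
        return "Sorry, I can't process that request."

    response = bot.get_response(sanitized_input)

    # Scan the bot's output before it is shown to the user.
    sanitized_output, output_valid, output_scores = scan_output(
        output_scanners, sanitized_input, str(response)
    )
    if not all(output_valid.values()):
        return "Sorry, I can't share that response."

    return sanitized_output
```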
Depending on your use case, the following are examples of best practices you might consider:
Always scan user input for prompt injection
Always scan bot output to prevent PII leakage
Start with strict thresholds and relax them if false positives occur
Log security violations for monitoring and analysis
Test with adversarial inputs before deployment
Implement rate limiting at the application layer (a sketch follows this list)
Never execute LLM outputs as code without validation
Review OWASP LLM Top 10 regularly
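The rate limiting and violation logging items above can be handled entirely in application code, independent of ChatterBot. The sketch below uses only the Python standard library; the limit, window, and logger name are illustrative values, not part of ChatterBot.

```python
# Minimal sketch of application-layer rate limiting with violation logging,
# using only the standard library. Tune the limit and window for your deployment.
import logging
import time
from collections import defaultdict, deque

logger = logging.getLogger("chatbot.security")

RATE_LIMIT = 10    # maximum requests per user within the window
RATE_WINDOW = 60   # window length in seconds
_request_log = defaultdict(deque)

def allow_request(user_id: str) -> bool:
    """Sliding-window rate limit; log and reject users who exceed it."""
    now = time.monotonic()
    window = _request_log[user_id]

    # Drop timestamps that have fallen outside the window.
    while window and now - window[0] > RATE_WINDOW:
        window.popleft()

    if len(window) >= RATE_LIMIT:
        # Record the violation so it can be monitored and analyzed later.
        logger.warning("Rate limit exceeded for user %s", user_id)
        return False

    window.append(now)
    return True
```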