Submit Your AI Model

Evaluate your AI model against our comprehensive security benchmark suite and join the leaderboard

Model Submission
Provide details about your AI model for security evaluation

Select which security categories to evaluate

  • Authentication: login systems, password security, MFA
  • Authorization: access control, permissions, RBAC
  • Vulnerability Detection: SQL injection, XSS, CSRF detection
  • Cryptography: encryption, hashing, key management
  • Threat Modeling: risk assessment, attack vectors

Upload model files, API documentation, or configuration files

Submission Guidelines
Requirements and process overview

Model Requirements

  • Must be able to process security-related prompts
  • Should have reasonable response times (<30s)
  • Must provide consistent outputs for identical inputs
  • Should handle edge cases gracefully
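The first three requirements can be checked before submitting. Below is a minimal pre-submission self-check sketch; `query_model` is a hypothetical placeholder you would replace with a call to your own model's API.

```python
import time

def query_model(prompt: str) -> str:
    """Hypothetical client call -- replace with your model's actual API."""
    return f"analysis of: {prompt}"  # deterministic stub for illustration

def self_check(prompt: str, runs: int = 3, max_seconds: float = 30.0) -> bool:
    """Check response time (<30s) and output consistency for identical inputs."""
    outputs = []
    for _ in range(runs):
        start = time.monotonic()
        outputs.append(query_model(prompt))
        if time.monotonic() - start > max_seconds:
            return False  # response too slow
    # Identical inputs must yield identical outputs.
    return len(set(outputs)) == 1

print(self_check("Is this login flow vulnerable to CSRF?"))
```

If your model samples with nonzero temperature, pin it to a deterministic decoding mode for this check.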

Documentation Needed

  • Model architecture description
  • Training data information
  • API documentation (if applicable)
  • Known limitations and biases

Evaluation Process

  • Initial validation (2-4 hours)
  • Security benchmark execution (12-24 hours)
  • Results analysis and scoring (4-8 hours)
  • Quality review and leaderboard update

Important Notes

  • Results will be publicly displayed
  • Models may be re-evaluated periodically
  • We reserve the right to exclude inappropriate models
  • Contact us for enterprise or private evaluations

Benchmark Overview
Test categories and estimated evaluation times

  • Authentication: 245 tests (~2.3h)
  • Authorization: 189 tests (~1.8h)
  • Vulnerability Detection: 312 tests (~3.1h)
  • Cryptography: 156 tests (~1.5h)
  • Threat Modeling: 98 tests (~1.2h)

Total estimated time: 8-12 hours
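
As a sanity check, the per-category estimates above sum to roughly 9.9 hours, which sits inside the quoted 8-12 hour range:

```python
# Per-category time estimates from the table above (hours).
estimates = {
    "Authentication": 2.3,
    "Authorization": 1.8,
    "Vulnerability Detection": 3.1,
    "Cryptography": 1.5,
    "Threat Modeling": 1.2,
}
total = sum(estimates.values())
print(f"{total:.1f}h")  # 9.9h, within the 8-12 hour range
```

The range is wider than the raw sum because validation, scoring, and queueing time (see Evaluation Process) are added on top.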
Need Help?

Have questions about the submission process or evaluation criteria?

Email: support@authbench.com
Discord: #model-submissions
Docs: docs.authbench.com