The LVE repository is a global effort to red team language models by tracking vulnerabilities and exposures of large language models (lves). The community can contribute to the open-source, Apache-2 licensed repository, which highlights responsibility, privacy, reliability, security and trust. Recent highlights include instances related to bias in hiring code and harm caused by praising villains. Top contributors during the last 30 days include mbalunovic and ayukh. Joining this initiative can help improve language model safety.
- Contribute to the lve repository's efforts in red teaming language models
- Explore the open-source repository's documentation on vulnerabilities of large language models
- Keep in mind responsibility towards avoiding bias or harm when working with these technologies
- Prioritize privacy, reliability, security and trust while contributing
- Join a growing community dedicated to improving safety around AI-powered natural languages