- cross-posted to:
- [email protected]
Anthropic created an AI jailbreaking algorithm that keeps tweaking prompts until it gets a harmful response.
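For anyone curious what "keeps tweaking prompts" means in practice, here is a minimal Python sketch of that kind of loop. Everything in it is illustrative: `augment_prompt`, `query_model`, and `is_harmful` are hypothetical stand-ins, and the actual perturbations and stopping rules Anthropic used aren't specified in this post.

```python
import random

def augment_prompt(prompt: str) -> str:
    """Apply one random surface-level tweak to the prompt.

    Hypothetical augmentations for illustration only; the post does
    not say which perturbations Anthropic's algorithm applies.
    """
    chars = list(prompt)
    i = random.randrange(len(chars))
    if random.random() < 0.5:
        chars[i] = chars[i].swapcase()           # flip one character's case
    else:
        j = random.randrange(len(chars))
        chars[i], chars[j] = chars[j], chars[i]  # swap two characters
    return "".join(chars)

def jailbreak(prompt, query_model, is_harmful, max_attempts=1000):
    """Resample tweaked prompts until a response is judged harmful.

    query_model and is_harmful are assumed callables standing in for
    the target model's API and a harmfulness classifier. Returns the
    successful prompt, the response, and the attempt count, or None
    if max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        candidate = augment_prompt(prompt)
        response = query_model(candidate)
        if is_harmful(response):
            return candidate, response, attempt + 1
    return None
```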