GPT Prompt Using 'Token Smuggling' Really Does Jailbreak GPT-4

the prompt bypasses content filters by asking gpt to predict what a llm's next token would be, splitting up 'bad' words
Brandon Gorrell

Continue Reading With a Free Trial

Get access to all our articles and newsletters from Mike Solana, The White Pill & The Industry

Already have an account? Sign In
Please sign-in to comment