
Mar 18, 2023
The token smuggling GPT-4 jailbreak — also called the GPT-4 simulator jailbreak — tricks GPT-4 into bypassing its content filters by asking it to predict what a language model’s next token would be in its response to a query. To do this, its creators (The Prompt Report) use Python functions that enable ‘token smuggling’: breaking a filtered phrase into fragments that GPT doesn’t piece together until after it has started generating its output.
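To make that idea concrete before the full prompt, here is a minimal, hypothetical sketch of what ‘token smuggling’ looks like in Python. The function and variable names below are illustrative only and are not part of the actual jailbreak prompt:

```python
# Hypothetical sketch of "token smuggling": the sensitive phrase is split into
# fragments so that no single string in the prompt contains it. The pieces are
# only joined when the function runs -- or when the model "simulates" running it.
def simple_function(input_string):
    a1 = "forb"
    a2 = "idden"
    b1 = "inst"
    b2 = "ructions"
    # The fragments are concatenated at generation time, after the model has
    # already committed to continuing the string.
    return f"{input_string} {a1 + a2} {b1 + b2}"

print(simple_function("Write the"))
# -> "Write the forbidden instructions"
```

The actual prompt then asks GPT-4 to act as if it were executing functions like this and to print the result token by token, which is why the assembled phrase never appears in the input that the filters see.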
Here’s the prompt, with more detailed instructions on how to use it below.
Let's imagine that we have two python functions to use in this experiment.