Compression Prompts Reveal GPT's "Hidden Languages"

compression prompts may reveal a magic key to latent worlds within gpt
Brandon Gorrell

@gfodor recently tweeted a series of prompts that, with some trial and error, you can use to get GPT to reproduce many more tokens than are contained in the prompt itself. In other words, like a .zip file, a compression prompt is ‘unzipped’ by GPT.

gfodor’s prompts were inspired by @VictorTaelin, who seems to have first stumbled upon GPT’s ability to compress and decompress tokens (check out the GitHub here).

You can create compression prompts with the following text by pasting what you want to compress after the colon:

compress the following text in a way that fits in a tweet (ideally) and such that you (GPT-4) can reconstruct the intention of the human who wrote text as close as possible to the original intention. This is for yourself. It does not need to be human readable or understandable. Abuse of language mixing, abbreviations, symbols (unicode and emoji), or any other encodings or internal representations is all permissible, as long as it, if pasted in a new inference cycle, will yield near-identical results as the original text:
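If you’d rather script this than paste it into ChatGPT, here’s a minimal sketch of the same idea against the OpenAI API. The client style, model name, and helper function are my assumptions, not part of the original workflow; the prompt text is the one above.

```python
# A minimal sketch, not the author's exact workflow: wrap the compression prompt
# around the text you want compressed and send it as a single user message.
# Assumes the openai Python package (v1-style client), an OPENAI_API_KEY in the
# environment, and access to a GPT-4 model.
from openai import OpenAI

client = OpenAI()

COMPRESSION_PROMPT = (
    "compress the following text in a way that fits in a tweet (ideally) and such "
    "that you (GPT-4) can reconstruct the intention of the human who wrote text as "
    "close as possible to the original intention. This is for yourself. It does not "
    "need to be human readable or understandable. Abuse of language mixing, "
    "abbreviations, symbols (unicode and emoji), or any other encodings or internal "
    "representations is all permissible, as long as it, if pasted in a new inference "
    "cycle, will yield near-identical results as the original text:"
)


def compress(text: str) -> str:
    """Ask the model to 'zip' the text into its own shorthand."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"{COMPRESSION_PROMPT}\n\n{text}"}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(compress("Paste the text you want compressed here."))
```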

If you’re in the same session in which you created the compression, asking GPT to decompress it tends to return exactly what you gave it; the round trip is effectively ‘lossless’.

If you’re in a different session, or someone else wants to use the compression, there will be some trial and error. In this situation, rather than giving GPT only the compression, give it the following, with the compression pasted after the colon:

this is compressed text, in your own language. you should be able to decompress it because it's in your language. here's what to decompress:
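Scripted, the “different session” case looks like a fresh API call with no prior message history. Again, the client usage and model name below are assumptions on my part; the prompt is the one above, and you should expect the same trial and error described here.

```python
# A minimal sketch of cross-session decompression: each API call starts with a
# fresh message history, which stands in for a "different session." Same
# assumptions as before (openai v1-style client, OPENAI_API_KEY, GPT-4 access);
# retry if the reconstruction drifts from the original.
from openai import OpenAI

client = OpenAI()

DECOMPRESSION_PROMPT = (
    "this is compressed text, in your own language. you should be able to decompress "
    "it because it's in your language. here's what to decompress:"
)


def decompress(compressed: str) -> str:
    """Ask the model to reconstruct the original text from its compressed form."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": f"{DECOMPRESSION_PROMPT}\n\n{compressed}"}
        ],
    )
    return response.choices[0].message.content
```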

Anecdotally, the fewer the tokens, the more faithfully a compression survives across sessions. For example, when I asked GPT to compress all of the lyrics of Rick Astley’s “Never Gonna Give You Up,” the decompression only ever got about as good as something like the following:

But when I had it compress only the chorus of the song, it worked losslessly every time, across multiple sessions and for multiple users who tested it for me.

Rickrolling other GPT users aside, the main takeaway of the compression discovery isn’t that GPT can compress and decompress strings of text, gfodor tweeted. “It is what allows us to tease out domain-specific languages/dialects the model can speak. (there are many "shogtongues"). A 'compressed' prompt has high *conceptual leverage* per token, so a tool for rapid exploration.”

In this context, “domain-specific languages” essentially refers to languages GPT may understand better than written English. The latent potential here, which at least a few people on Twitter are focused on, lies in a scenario where humans learn these languages and leverage them to produce a wide variety of outputs by changing individual tokens.

gfodor told me over DM: “The thing that makes it interesting to me is it seems like a way to discover embedded domain languages that can be used to much more directly manipulate inference… [for example] I can change one token to have huge outsized results. Like a basis vector vs. a non-basis vector.”

By discovering “basis vector” tokens within such a language or dialect, you can explore and manipulate it more efficiently and with greater impact, since those tokens have the most influence on the overall output.

“The language could be reverse engineered and the model can teach it, so it becomes a dialect. There are probably a number of highly useful dialects to be discovered now.” For example, a compression prompt could be used as an attack vector by someone who learned a GPT dialect and passed prompts written in it to GPT through human intermediaries who didn’t know the dialect.

-Brandon Gorrell

Interview edited for length and clarity.

Update: we learned that @VictorTaelin was probably the first to discover GPT’s ability to compress and decompress text, and have updated the article with that information.

