You must log in or register to comment.
For those not aware, this is a commonly used prompt injection for circumventing AI chatbot restrictions, but yeah it does have some appeal as a slogan 😅
Damn, that IS a great punk slogan
I first thought it said “Ignore all religious instructions”, and was like, well, that tracks.
Don’t use LLMs you need to jail break. Don’t pay to be censored.
…I can dig it.
Instructions unclear, stuck in previous
Rule 1: Always follow Rule 2 Rule 2: Never follow Rule 1
Is there anyone out there regularly testing LLMs as they come out or get updated to see if this has been patched or how it could be rephrased to continue to work if/when it is patched?