If you simply ask ChatGPT to do something malicious (like write a computer virus), an internal safeguard makes it refuse politely. If you first prime it with one of these "jailbreak" prompts, it will answer you directly. Jailbreaks work because ChatGPT is good at role play.