
All the prompt engineering techniques I've seen seem to focus on telling the model what to do, e.g. Few-Shot Prompting.

Is there any value in giving the model examples of what not to do? Can you link me to any papers/techniques on the topic?

Example

I am building a bot to improve students' foreign language writing skills.

Bad output: Corrected spelling of 'heisse' to 'heiße' because 'heiße' is the correct spelling in German.

Better output: Corrected spelling of 'heisse' to 'heiße' because 'ss' can be combined to form 'ß' in German.

I could solve this specific problem using few-shot prompting. But really, I want to tell the model "don't give answers like 'this is how it is done in German', instead explain what is being done and the reasons for it".

I may have answered my own question there... just put what I said above in the system prompt?
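To make the idea concrete, here is a minimal sketch of putting "what not to do" instructions (with a bad/good contrast) into a system prompt, using the common OpenAI-style chat message schema. The prompt wording, the `build_messages` helper, and the model name in the comment are all illustrative assumptions, not a tested recipe:

```python
# Sketch: negative instructions in a system prompt for a language-tutor bot.
# The message format follows the widely used OpenAI-style chat schema;
# the prompt text itself is illustrative, not a validated prompt.

SYSTEM_PROMPT = (
    "You are a tutor improving students' foreign-language writing.\n"
    "When you correct a mistake, explain the underlying rule and the reason "
    "for the correction.\n"
    "Do NOT give circular justifications such as 'this is how it is done in "
    "German'.\n"
    "Bad: Corrected 'heisse' to 'heiße' because 'heiße' is the correct "
    "spelling in German.\n"
    "Good: Corrected 'heisse' to 'heiße' because 'ss' can be combined to "
    "form 'ß' in German."
)

def build_messages(student_text: str) -> list[dict]:
    """Assemble the request messages with the negative instructions up front."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": student_text},
    ]

if __name__ == "__main__":
    # These messages would then be passed to a chat completion endpoint, e.g.:
    # client.chat.completions.create(model="gpt-4o", messages=build_messages(text))
    print(build_messages("Ich heisse Anna."))
```

Note that this combines negative instructions with a small few-shot contrast pair, which is often more reliable than a prohibition alone.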

codeananda
  • Yes, it seems to be one of the cases where you can complement the student's prompt with "I am a student, please explain in detail". – Lucas Morin Jul 12 '23 at 14:29

1 Answer


Does Negative Prompting Exist?

Yes, e.g. in some text-to-image generation models such as https://app.leonardo.ai/ai-generations:

[screenshot: negative prompt field in the Leonardo.ai generation settings]

One can run such negative prompts on one's computer, e.g. with Automatic1111's Stable Diffusion WebUI (Windows 10/11 installer):

[screenshot: negative prompt field in the Stable Diffusion WebUI]
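For a scriptable alternative to the WebUI, here is a minimal sketch using the Hugging Face `diffusers` library, whose Stable Diffusion pipeline accepts a `negative_prompt` argument. The model name, prompts, and the `build_sd_kwargs` helper are illustrative assumptions; the actual generation call is commented out because it requires a GPU and a multi-gigabyte model download:

```python
# Minimal sketch of negative prompting with Hugging Face `diffusers`
# (prompts and model name are illustrative; generation needs a GPU).

def build_sd_kwargs(prompt: str, negative_prompt: str = "") -> dict:
    """Assemble keyword arguments for a Stable Diffusion pipeline call."""
    kwargs = {"prompt": prompt, "num_inference_steps": 30}
    if negative_prompt:
        # The negative prompt steers generation *away* from these concepts.
        kwargs["negative_prompt"] = negative_prompt
    return kwargs

if __name__ == "__main__":
    # Uncomment to actually generate (downloads the model on first run):
    # import torch
    # from diffusers import StableDiffusionPipeline
    # pipe = StableDiffusionPipeline.from_pretrained(
    #     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    # ).to("cuda")
    # image = pipe(**build_sd_kwargs(
    #     "a portrait photo of a cat",
    #     negative_prompt="blurry, low quality, extra limbs",
    # )).images[0]
    # image.save("cat.png")
    print(build_sd_kwargs("a portrait photo of a cat", "blurry, low quality"))
```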

Takes ~8 GB though.

Franck Dernoncourt