The world of prompt engineering is fascinating on many levels, and there’s no shortage of clever ways to nudge agents like ChatGPT into generating specific kinds of responses. Techniques like Chain-of-Thought (CoT), Instruction-Based, N-shot/Few-shot prompting, and even tricks like Flattery/Role Assignment have inspired libraries full of prompts aiming to meet every need.
In this article, I will delve into a technique that, as far as my research shows, has been less explored. While I’ll tentatively label it as “new,” I’ll refrain from calling it “novel.” Given the blistering rate of innovation in prompt engineering and the ease with which new methods can be developed, it’s entirely possible that this technique already exists in some form.
The essence of the technique is to make ChatGPT operate in a way that simulates a program. A program, as we know, comprises a sequence of instructions, typically bundled into functions, that perform specific tasks. In some ways, this technique is an amalgam of the Instruction-Based and Role-Based prompting techniques. But unlike those approaches, it uses a repeatable, static framework of instructions, allowing the output from one function to inform another and the entire interaction to stay within the boundaries of the program. This modality should align well with the prompt-completion mechanics of agents like ChatGPT.
To illustrate the technique, let’s specify the parameters for a mini-app within ChatGPT-4 designed to function as an Interactive Innovator’s Workshop. Our mini-app will incorporate the following functions and features:
- Work on New Idea
- Expand on Idea
- Summarize Idea
- Retrieve Ideas
- Continue Working on Previous Idea
- Token/“Memory” Usage Statistics
To be clear, we will not be asking ChatGPT to code the mini-app in any specific programming language, and we will reflect this in our program parameters.
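To make this concrete before we write the full parameters, here is a minimal sketch of what a program-style prompt along these lines might look like. The wording, rules, and function descriptions below are illustrative assumptions on my part, not the finished prompt:

```
You are now running a program called the "Interactive Innovator's Workshop".
This is not code in any programming language; it is a fixed set of
instructions you must follow for the rest of this conversation.

The program has the following functions:
1. Work on New Idea - ask me for a new idea and help me develop it.
2. Expand on Idea - elaborate on the idea we are currently working on.
3. Summarize Idea - produce a concise summary of the current idea.
4. Retrieve Ideas - list all ideas discussed so far in this session.
5. Continue Working on Previous Idea - resume an idea from the list above.
6. Token/"Memory" Usage Statistics - estimate how much of the context
   window this conversation has consumed.

Rules:
- After completing any function, redisplay the numbered function list
  and wait for my next selection.
- The output of one function (e.g., a summary) may serve as the input
  to another (e.g., expanding on it).
- Never step outside the program; respond only within this framework.

Begin by displaying the function list.
```

A sketch like this exercises the properties described above: a static framework of instructions, output from one function feeding another, and an interaction that stays inside the program.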