How to build a prompt engine

Any bigger project consists of smaller parts working together

At Ice Bear Labs, we are currently building Schreiberling.app, a copywriting tool for marketing people, powered by AI technology — at the moment mainly OpenAI’s ChatGPT.

When you start a project like that, you start with a simple MVP. That includes the prompt you send to the API to get copywriting text written by OpenAI’s language model.

But once the project grows, that one prompt doesn’t stay one prompt. You have to add support for different types of copywriting: a landing page, a product text, an announcement for an event on LinkedIn. So you write new prompts, each with its own inputs and outputs. It gets more complicated. At this point, you start to ask: “How do we structure this in a way that supports complexity while keeping our code clear and simple?”

The power of a prompt engine

The phrase is still pretty vague, but here is what we define as a “prompt engine”:

A code structure that supports different prompts for different use cases, ingests variable inputs for each of those prompts, and accepts variable output parameters, so we can shape the result we get back from our prompt.

We have learned that ChatGPT sometimes returns the wrong language when we prompt everything in English, even though we clearly state the target language in our prompt (we wanted German text and got English back). So our prompt engine supports separate prompts for each language.

Further, we’ve added different inputs to our different prompts. If you generate a product text, for example, you might have inputs like the name of the product, the target group, the unique selling point of the product, and so on. For a social media post about an event, we might have the event name, the date, the audience we want to show up, and so on. Every single prompt type has its own inputs.

Also, you might want to shape the output that ChatGPT gives us. For Schreiberling, we concentrate on language, tone, and our different authors (which we created by defining different presets for ChatGPT’s temperature, frequency penalty, and presence penalty).

A screenshot of our “Tone and Author” settings
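To make the “authors” idea concrete: each author is essentially a bundle of OpenAI sampling parameters. Here is a minimal TypeScript sketch of that concept — the preset names and values are invented for illustration, not Schreiberling’s actual settings:

```typescript
// Illustrative author presets: an "author" is just a bundle of OpenAI
// sampling parameters. Names and numbers here are made-up examples.
interface AuthorPreset {
  temperature: number;       // higher = more varied, more creative output
  frequencyPenalty: number;  // discourages repeating the same tokens
  presencePenalty: number;   // encourages introducing new topics
}

const authors: Record<string, AuthorPreset> = {
  sober:    { temperature: 0.2, frequencyPenalty: 0.0, presencePenalty: 0.0 },
  balanced: { temperature: 0.7, frequencyPenalty: 0.3, presencePenalty: 0.3 },
  creative: { temperature: 1.0, frequencyPenalty: 0.6, presencePenalty: 0.6 },
};
```

Picking an author then simply means passing the chosen preset along with the request to the model.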

All of this is combined in our prompt engine. It’s a utility library we have integrated into our backend service. Within it, you can define all the complex prompts you need, including inputs and outputs. When a user needs a product text, we simply do

promptEngine.product.create(name, tone, language …)

and get the result that we need.
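Under the hood, a call like that boils down to picking the right template and filling in the inputs. Here is a stripped-down, dependency-free sketch of that idea — the template text, field names, and `fill` helper are placeholders for illustration, not our real implementation:

```typescript
// Minimal prompt-engine sketch: one template per text type, filled in
// from typed inputs. Template text and names are illustrative only.
type ProductInputs = { name: string; tone: string; language: string };

const productTemplate =
  "Write a {tone} product text in {language} for a product called {name}.";

// Replace every {placeholder} in the template with the matching input value.
function fill(template: string, inputs: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_m, key) => inputs[key] ?? `{${key}}`);
}

const promptEngine = {
  product: {
    // In the real engine this would send the rendered prompt to the LLM;
    // here we just return the prompt itself.
    create: (inputs: ProductInputs): string => fill(productTemplate, inputs),
  },
};
```

Usage looks like `promptEngine.product.create({ name: "Schreiberling", tone: "playful", language: "German" })`, which yields the fully rendered prompt string.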

Using Langchain to build a prompt engine

We didn’t do all of this from scratch. There is an amazing tool that every developer can leverage to build something like this much faster than doing it alone: Langchain.

Langchain is a “framework for developing applications powered by language models”

The term chain here refers to the ability to create connections between your chosen LLM platform (say OpenAI), the model you want to use (GPT-3.5) and the prompt you want to create.

In pseudo code, it looks something like this:

> Create a new chat, in this case an OpenAI chat

> Create a system prompt

> Create the first user message (for example the inputs for a product marketing text)

> Throw the prompts together to have a complete chat prompt

> Throw those into an LLM chain, combining the chat model and platform you want to use with the prompt you defined

> Call the chain and wait for your result

For our coders, this is what the same instructions look like in our prompt engine, written as a Typescript function using Langchain.

So Langchain gives us the Lego building blocks we need to define our text generation functions. We combine these with our own prompts and the inputs and outputs we want to add, and we have a fully functioning prompt engine that can produce many different results — in different languages as well.

Visualizing the prompt engine

That has all been pretty abstract so far, so let me show you what the full engine looks like when drawn by my untalented hands.

A schematic of our frontend, backend, prompt engine setup

Say we want to add a new text generation type for “LinkedIn post”: we only have to write the prompts using Langchain’s functions and add them to this engine. We’ll have to write a prompt for every language, but that’s a trade-off we are willing to make to be sure the correct language comes out at the other end.
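Adding a type then amounts to registering one template per language. A hypothetical sketch of what that registry might look like — the type name, language set, and template strings are all invented for illustration:

```typescript
// Hypothetical registry: every text type carries one prompt per language,
// so the model never has to guess which language to answer in.
type Language = "en" | "de" | "fr";

const linkedinPostPrompts: Record<Language, string> = {
  en: "Write an engaging LinkedIn post about {eventName} on {date}.",
  de: "Schreibe einen ansprechenden LinkedIn-Beitrag über {eventName} am {date}.",
  fr: "Rédige un post LinkedIn engageant sur {eventName} le {date}.",
};

// Look up the template for the requested output language.
function promptFor(language: Language): string {
  return linkedinPostPrompts[language];
}
```

The `Language` union also means the compiler complains if a new text type forgets a language — a useful safety net when the engine grows.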

What if I can’t code?

Here is the coolest thing about this prompt engine. Yes, we developed it in Typescript and created a library so we can use it in the code of our SaaS product. But nothing here says you can’t create a similar structure in a simple text file. If all you have is the ChatGPT interface and zero coding knowledge, you can still create multiple prompts for multiple use cases and save them in a single notepad text file. Like this, for example:

You are a marketing expert working at a creative marketing agency. Your specialty is writing unconventional and engaging marketing texts. Please note: We are known as an agency for our creativity.

Please use the following information in your advertising text:

The name of the product is <Product Name>
The product fulfills the following needs: <Product Features>
Please write your product text for the following target audience: <Product Target Audience>
The product has the following unique qualities: <Product Unique Selling Points>

Write the result in <tone>

Do the same for German, do the same for French. You can add all the inputs and outputs you want. When you need another type of text generation, add a new prompt to your file. And that’s it: you have your own “engine” without any code.

Want to see how Schreiberling works?

If you found this interesting and want to see how our product works, we have a free trial over at Schreiberling.app. We would love for you to try our software and give feedback on which prompt type we should add to our engine next.
