GPT is an awesome product that can do a lot out of the box. However, sometimes the out-of-the-box model doesn't do what you need it to do.
In that case, you need to provide the model with more information, which can be done in a couple of ways.
For common scenarios, GPT will usually be adequate, but for more complex or highly specific use cases it won't have the training required to output what you need.
The system prompt is a message that is sent along with every request to the API and tells the model how it should behave.
Using the system prompt is the easiest way to give GPT additional information, but there are downsides: everything in the system prompt is sent with every request, so it consumes tokens on each call, adds cost, and is limited by the model's context window.
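As a minimal sketch of this approach, assuming the OpenAI Python SDK (the model name and prompt text below are placeholders, not values from this rule):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # The system prompt is sent with every request and steers the model's behaviour
        {"role": "system", "content": "You write SSW Rules in the standard markdown rule format."},
        {"role": "user", "content": "Do you wash your hands?"},
    ],
)
print(response.choices[0].message.content)
```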
OpenAI also provides fine-tuning - a way to train new data into the model so that it is always available, without having to provide it with each request.
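A rough sketch of that flow with the OpenAI Python SDK is below. The file name, example content, and base model name are assumptions for illustration - check which models currently support fine-tuning before running it. Each line of the JSONL file is one training example in chat format.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL training data. Each line looks like:
# {"messages": [{"role": "user", "content": "Do you wash your hands?"},
#               {"role": "assistant", "content": "<the rule, written in the expected format>"}]}
training_file = client.files.create(
    file=open("ssw-rules-training.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job against a base model that supports fine-tuning
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # placeholder base model
)
print(job.id)
```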
For example, if you want to build an app that outputs SSW Rules based on a title, the untrained model probably won't know what SSW Rules are, so you need to train it.
❌ Figure: Bad example - The untrained GPT model doesn't know what format to create a rule in
✅ Figure: Good example - The trained GPT model knows how to format a rule and the style that rules are written in
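Once the fine-tuning job finishes, you call the fine-tuned model by its id instead of the base model. A small sketch, again assuming the OpenAI Python SDK - the model id and rule title here are made up for illustration:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    # Id of your fine-tuned model, returned when the fine-tuning job completes (placeholder value)
    model="ft:gpt-4o-mini-2024-07-18:your-org::abc123",
    messages=[
        {"role": "user", "content": "Do you use meaningful commit messages?"},
    ],
)
print(response.choices[0].message.content)  # a rule drafted in the trained format
```

Because the formatting knowledge now lives in the model itself, the request above needs no system prompt, so each call stays small and cheap.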