It’s clever if you’re running your own models, or only paying for tokens generated.
Since you’re generating the variable fields anyway, splitting the template across multiple prompts actually requires fewer forward passes than having the model generate the static fields too: each static chunk costs a single prefill pass over all of its tokens at once, while every generated token costs its own decode pass.
Of course this doesn’t work with the OpenAI APIs, which charge for input context on a per-invocation basis.
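Here's a minimal sketch of the idea using a Hugging Face causal LM (the model name, `fill_template`, and the greedy decoding loop are all illustrative, not anyone's actual API): each static chunk is fed through in one prefill pass that just extends the KV cache, and only the variable fields are decoded token by token.

```python
# Sketch only: prefill static template text, decode just the variable fields.
# Assumes any Hugging Face causal LM; "gpt2" is a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def fill_template(static_parts, max_field_tokens=12):
    """Alternate between static chunks and generated fields.

    Each static chunk costs ONE forward pass over all its tokens
    (prefill, extending the KV cache); each variable field costs one
    decode pass per generated token.
    """
    past, out_text = None, []
    for part in static_parts:
        ids = tok(part, return_tensors="pt").input_ids
        with torch.no_grad():
            result = model(ids, past_key_values=past, use_cache=True)
        past = result.past_key_values
        out_text.append(part)

        # Greedily decode the variable field that follows this chunk.
        next_id = result.logits[:, -1:].argmax(-1)
        field = []
        for _ in range(max_field_tokens):
            field.append(next_id)
            with torch.no_grad():
                result = model(next_id, past_key_values=past, use_cache=True)
            past = result.past_key_values
            next_id = result.logits[:, -1:].argmax(-1)
        out_text.append(tok.decode(torch.cat(field, dim=-1)[0]))
    return "".join(out_text)

print(fill_template(["Name: ", "\nCity: "]))
```

With two static chunks and two 12-token fields, that's 2 prefill passes plus 24 decode passes, versus one pass per token for everything if the model had to emit the template text itself.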