Ask HN: How are you managing your prompts?
How are you cataloging, maintaining, and versioning your prompts? Are you relying primarily on one toolchain where they are organized by default in the toolchain's interface (but locked into that toolchain)? Or are you using some other mechanism? Are you using a mechanism that lets you share prompts, with permissions, with team members, colleagues, and friends?
The intent with this inquiry is to improve my own workflow(s) and potentially those of others interested in this topic. Thank you!
To clarify: you seem to be asking about end users storing their (e.g. ChatGPT) prompts, not software devs managing the prompts they use via the API?
These are two very different questions.
Both, actually, as I think there is a lot of overlap between the use cases. Someone mentioned to me that they recently "graduated" from the Mac ChatGPT app to something more robust, for example, due to how many prompts they were managing and how many systems they were submitting them to. I would also be interested in your perspective on why these are vastly different (user prompts vs. system prompts), as I might be missing context.
To keep my prompts organized as an end user, I use an open-source desktop frontend called AnythingLLM [0]. I have a different workspace per use case, and I fork existing threads to continue from previous prompts.
To keep prompts organized as a software startup is a completely different use case, as we need:

- a way to dynamically fill the prompts (we use Mustache)
- a way to store the prompts that enables versioning (so they live in git like the rest of our code)
- a way to allow non-technical users (e.g. the Product team) to revise a prompt, so they are stored as JSON objects
So each of our prompts is basically an object that encapsulates the OpenAI-style parameters, plus additional in-house parameters such as fallback model, risk profile, etc.
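A minimal sketch of what such a setup might look like, assuming a hypothetical schema (the field names, the `summarize_ticket` record, and the `render` helper are illustrative, not the commenter's actual code; a real setup would use a full Mustache library such as chevron rather than this regex stand-in):

```python
import json
import re

# Hypothetical prompt record: OpenAI-style parameters plus in-house
# fields such as a fallback model. Schema is illustrative only.
prompt_record = json.loads("""
{
  "name": "summarize_ticket",
  "model": "gpt-4o",
  "fallback_model": "gpt-4o-mini",
  "temperature": 0.2,
  "template": "Summarize the following support ticket for {{audience}}:\\n{{ticket_body}}"
}
""")

def render(template: str, variables: dict) -> str:
    """Minimal stand-in for a Mustache renderer: substitutes {{key}}
    placeholders only (no sections, partials, or escaping)."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(variables.get(m.group(1), "")),
                  template)

filled = render(prompt_record["template"],
                {"audience": "the Product team",
                 "ticket_body": "App crashes on login."})
print(filled)
```

Because the record is plain JSON, it can live in git alongside the code, and non-technical teammates can edit the `template` field without touching application logic.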
[0]: https://anythingllm.com/desktop
Past discussion, which may be helpful:
Ask HN: How do you manage your prompts in ChatGPT? https://news.ycombinator.com/item?id=41479189
I'm curious to see how people's workflows have changed.
As for us, we manually catalog them in well-named Markdown files and folders and store them in a git repo. I would like a more taxonomical approach.
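One way to layer a light taxonomy onto a folder of Markdown prompts is to treat the folder as the category and an optional `tags:` first line as cross-cutting labels. A sketch under those assumptions (the layout, file names, and `index` helper are illustrative, not the commenter's actual repo):

```python
import pathlib
import tempfile

# Build a tiny example repo: prompts/<category>/<name>.md,
# with an optional "tags:" line at the top of each file.
root = pathlib.Path(tempfile.mkdtemp()) / "prompts"
(root / "travel").mkdir(parents=True)
(root / "travel" / "vacation-planner.md").write_text(
    "tags: planning, family\n\nPlan a vacation that meets these requirements...")

def index(prompt_root: pathlib.Path) -> dict:
    """Map each prompt file to its category (parent folder) and tags."""
    catalog = {}
    for path in prompt_root.rglob("*.md"):
        first_line = path.read_text().splitlines()[0]
        tags = ([t.strip() for t in first_line.removeprefix("tags:").split(",")]
                if first_line.startswith("tags:") else [])
        catalog[path.stem] = {"category": path.parent.name, "tags": tags}
    return catalog

print(index(root))
# e.g. {'vacation-planner': {'category': 'travel', 'tags': ['planning', 'family']}}
```

The index itself never needs to be committed; it can be rebuilt from the files, so git history on the Markdown remains the single source of truth.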
I don't do this at all. I find that being obsessed with optimizing prompts is exactly what's not needed at this stage of AI's development.
I just prompt as I go and find that the "cost" of prompting again to get a better output is lower than the cost of having some system for cataloging, maintaining, and versioning my prompts.
I might be wrong but I'm getting good results out of LLMs.
There are complex prompts that are worth fine-tuning. For me, it’s prompts that require a bunch of context. For example, vacation planning: include a bunch of requirements and it turns into a 1,000-word essay, but it’s worth it to store and reuse.
Or a personal assistant. I have a text-based workflow, but explaining it to an LLM takes yet another 1,000-word essay. I would benefit from a workflow that lets me reuse and version prompts.
PromptLayer is great. Highly recommend it for prompt versioning, playground testing, etc.
“Start for free” and no “pricing” available. Am I missing it, or what’s the monetization strategy here?