I'm becoming increasingly convinced that publicly sharing how you use LLMs will help us debug how we think about our interactions with them.
Just kidding! This is excellent stuff; I will try it. But I'm curious: do you send this with every new session, along with the notes from previous conversations? If so, I'd presume each new session fills up quickly.
your mental model of how commercial LLM products work is severely outdated. first, you can set this prompt globally (which i have), so i don't have to copy/paste it after the first submission.
secondly, context is managed well by LLM products now: older turns can be compacted or summarized behind the scenes, so sessions can run effectively forever.
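To make the context-management point concrete, here is a minimal sketch of how a chat client might keep a long session under a model's context limit. Everything here is an illustrative assumption (the token budget, the character-based token estimate, the trimming policy), not any provider's actual implementation; real products use true tokenizers and often summarize old turns instead of dropping them.

```python
MAX_CONTEXT_TOKENS = 8000  # hypothetical budget, not a real provider limit


def estimate_tokens(message: str) -> int:
    # Rough heuristic: ~4 characters per token (assumption, not a real tokenizer).
    return max(1, len(message) // 4)


def trim_history(system_prompt: str, history: list[str]) -> list[str]:
    """Drop the oldest messages until the conversation fits the budget.

    The system prompt is always reserved for, which is why a globally-set
    prompt doesn't have to be re-pasted on every turn.
    """
    budget = MAX_CONTEXT_TOKENS - estimate_tokens(system_prompt)
    kept: list[str] = []
    used = 0
    for message in reversed(history):  # walk from newest to oldest
        cost = estimate_tokens(message)
        if used + cost > budget:
            break  # everything older than this is dropped
        kept.append(message)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

With a policy like this, the session never "ends": old turns silently fall out of (or, in real products, get summarized into) the window while recent ones stay intact.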
Model providers will need to increase their context windows 💀 this is impressive!
loool. most model providers are at 1M context length as we speak.
_Ah, all this English no too much for…_
Oh wow! Thank you, I didn't know this