I do this all the time across GPT, Grok, Claude, Gemini, and Le Chat, and it's fascinating to see how each model articulates its behavior. Looking beneath the language used, you can begin to get a glimpse of how each model is designed and shaped.
Yes, I think the key thing to remember is that there is information to be had, but don't take it at face value.
This is the art of interacting with LLMs. The article's picture of the LLM with Sigmund Freud is actually on point: a well-crafted, open-ended, ambiguous question will get you more than cross-examination.
Thank you, much appreciated!
Couldn't agree more. This concept of folk interpretability is incredibly insightful and offers a crucial path to democratizing LLM understanding. How do you envision its practical application balancing informativeness with the inherent subjectivity of behaviorally elicited narratives?
Man, you just validated so many of my existing questions about LLM conditioned learning / knowledge presentation. To have the thought/insight is one thing, to lay out so specifically how you teased out the evidence of that conundrum is unbelievably impressive. Hat tip from over here.
I can document a template pattern in language model output:
These <subject noun> are not <incorrect adjective>. They are not <incorrect adjectives>. They are best understood as <correct adjectives>; <definitions of adjectives>, not because <incorrect definitions>.
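If you wanted to actually flag that template in transcripts, a minimal sketch might look like the following. The regex and the sample sentence are illustrative assumptions, not quotes from any particular model:

```python
import re

# Rough detector for the "are not X ... are not Y ... best understood as Z"
# rhetorical template described above. The pattern is deliberately loose:
# it only checks that the three scaffold phrases appear in order.
TEMPLATE = re.compile(
    r"\bare not\b.+?\bare not\b.+?\bbest understood as\b",
    re.IGNORECASE | re.DOTALL,
)

# Hypothetical sample text following the template shape.
sample = (
    "These agents are not conscious. They are not sentient. "
    "They are best understood as statistical pattern-matchers."
)

print(bool(TEMPLATE.search(sample)))  # True
```

A pass like this over a batch of responses would give a crude count of how often the same agent falls back on the scaffold.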
Structurally repetitive due to using the same agent. This is all over YouTube, and it is boring.