Chatbots have become the default interface for LLMs. While chatbots are useful in some contexts, they are not well-suited for every task, yet there's a trend to throw an LLM-powered chat interface at every problem. The purpose of this seed is to think through how to design effective interfaces for LLMs so users get the maximum value from LLM applications.
Do you really need that chat interface?
Chat interfaces shift a lot of the usability burden onto the user. There is a real benefit to letting users interact with a system in natural language: when users know exactly what they want to do, they can simply say it. That appeal is real, but there are additional considerations to weigh before slapping a textbox connected to an AI onto your product and deploying.
Some tasks are easier to do than to explain
With LLMs, explaining a task means prompting, and chat interfaces frequently put the burden of writing effective prompts onto the user. That is a big ask: as of spring 2024, less than a quarter of Americans had ever tried ChatGPT, so depending on the target audience, expecting users to be fluent in prompt engineering is unrealistic. If a user has a bad experience with a chat interface because they don't know how to write effective prompts, they're going to abandon the product. Any design tweak that reduces the amount of prompt engineering the user has to know can help guide and improve the experience.
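One way to reduce that burden is to keep the prompt engineering in the application code rather than in the user's head: the UI exposes a few simple controls, and the app assembles the actual prompt behind the scenes. Here's a minimal sketch of the idea; the function name, the task (summarization), and the `tone` parameter are all hypothetical illustrations, not taken from any particular product.

```python
def build_summary_prompt(user_text: str, tone: str = "neutral") -> str:
    """Wrap raw user input in an app-authored prompt.

    The user only ever sees a textbox and a tone dropdown; the
    instructions, constraints, and formatting live here in code.
    """
    return (
        "You are a concise assistant. Summarize the text below "
        f"in a {tone} tone, in at most three sentences.\n\n"
        f"Text:\n{user_text}"
    )

# The string returned here is what would be sent to the LLM;
# the user typed only the raw text and picked "formal" from a menu.
prompt = build_summary_prompt("Quarterly revenue rose 12%.", tone="formal")
```

The point is not this particular template but the division of labor: every choice a designer can encode as a button, dropdown, or default is one less thing the user has to express as a well-crafted prompt.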
Within chat interfaces, discovering new features and capabilities is challenging. A user might have a successful interaction for their initial task, but they are unlikely to discover new use cases or features they could leverage in the future or apply to other problems.
Does this mean chat interfaces are bad? Not at all. They have their place, but they are not a panacea. When designing interfaces for LLMs, it's important to consider the tradeoffs of a chat interface and how they impact the end user. When I was building RecipeSnap I explicitly did not want a chat interface, because when cooking you want to minimize typing. Once the user uploaded a recipe, they only needed to click to navigate through the steps. Although the app is powered by an LLM, all of that functionality was abstracted away from the user.
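The pattern described above can be sketched in a few lines: the LLM does its work once up front (parsing the recipe into steps), and from then on the interface is pure click navigation. This is a hypothetical illustration of the idea, not RecipeSnap's actual code; the class and method names are my own.

```python
class RecipeViewer:
    """Click-only navigation over steps an LLM extracted up front.

    The LLM call happens once, at upload time, producing a list of
    steps. After that, the user never types -- they only move
    forward and back through the list.
    """

    def __init__(self, steps: list[str]):
        self.steps = steps
        self.index = 0  # start at the first step

    def current(self) -> str:
        return self.steps[self.index]

    def next(self) -> str:
        # Clamp at the last step instead of raising.
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current()

    def prev(self) -> str:
        # Clamp at the first step.
        if self.index > 0:
            self.index -= 1
        return self.current()
```

The design choice worth noticing is that the model's output is turned into plain application state, so the rest of the app can be an ordinary UI with no free-text input at all.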
Some further reading:
- Why Chatbots are Not the Future
- Natural language is the lazy user interface
- Language Model Sketchbook, or Why I Hate Chatbots
- When Words Cannot Describe: Designing For AI Beyond Conversational Interfaces
Interfaces I've enjoyed...
Claude Artifacts
Claude Artifacts shipped alongside the Claude 3.5 Sonnet release. The announcement was buried under all of the exciting Claude news, but I found Artifacts equally exciting.
Artifacts is an extra UI component that runs alongside the chat interface. If you instruct Claude to generate an artifact, such as code or text, the generated artifact is displayed in a separate panel next to the chat. If you ask Claude to make changes, they are tracked in the artifact panel so you can easily see how your artifact has evolved throughout the conversation. If you generate a new artifact, it is automatically tracked as well. All of this happens without extra input from the user.
Since Artifacts was released I've been using Claude exclusively. I ultimately canceled my ChatGPT Plus subscription because I found the Artifacts feature so useful.
While the primary interface is still chat-based, I love how Artifacts adds more value to the user experience. I'm excited to see how this evolves!
Changelog
- 2024-07-21: Seed planted.