Conversation branching and similar tools in popular AIs
If you have ever used a large language model for anything deeper than a weather query or a yes/no question, your experience may have been marred by the single-lane character of the conversation. All popular models (or AIs, as they are commonly called outside professional circles) keep the previous turns in mind and factor that context into each subsequent answer. But when you are researching a complex subject, or weighing several options against a shifting set of inputs, context awareness alone is not enough: you want to explore several paths and see the matter from several perspectives. This is where AI conversation branching steps in.
The feature sounds simple: take a certain point in the dialogue, call it a fork, and pursue two (or more) lines of conversation from there. Large language models have been around (within a couple of clicks for pretty much any regular user) for some time now, yet conversation branching is a rather recent addition. That points to the inherent complexity of implementing it, and partly explains why different developers have built their own versions of the feature. Let’s see what popular AIs offer in this respect as of October 2025.
ChatGPT conversation branching feature
ChatGPT is arguably the most popular AI out there, and with dozens of integrations it is certainly poised to lead the market. And yet OpenAI was not the first to give its users conversation branching: that was Google. In ChatGPT, the feature launched on September 5, 2025, and if you haven’t heard anything about it, that’s because the rollout was intentionally low-key.
Conversation branching in ChatGPT works as follows:
- Hover over any message in a ChatGPT thread and click the three-dot icon (“More actions”);
- Select “Branch in new chat,” and the AI forks a new line of conversation from that point.
The newly created branch is fully aware of the context up to the fork, but whatever you ask in the original thread after the branching point stays outside it. Multiple branches can be created from any message in a thread, and each is stored as its own line of conversation.
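If you like to picture this in data terms, a branched thread behaves like a tree of messages: every branch shares the turns up to the fork and diverges after it. Below is a minimal Python sketch of that idea; it is an illustration only, not how ChatGPT represents branches internally.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str                               # "user" or "assistant"
    text: str
    children: list["Turn"] = field(default_factory=list)

    def add(self, role: str, text: str) -> "Turn":
        """Attach a new turn under this one and return it."""
        child = Turn(role, text)
        self.children.append(child)
        return child

def context(path: list[Turn]) -> list[str]:
    """The context a branch sees: every turn from the root down to its tip."""
    return [f"{t.role}: {t.text}" for t in path]

# Original thread
root = Turn("user", "Compare two vacation plans for early October.")
answer = root.add("assistant", "Plan A: the coast. Plan B: the mountains.")

# Two branches forked from the same answer
branch_a = answer.add("user", "Expand on Plan A with a rough budget.")
branch_b = answer.add("user", "Expand on Plan B with a packing list.")

# Each branch shares the context up to the fork (root + answer),
# but neither sees what the other asked after that point.
print(context([root, answer, branch_a]))
print(context([root, answer, branch_b]))
```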
Perplexity’s Add to Follow-Up feature
Perplexity is another major player that regularly makes waves in consumer-oriented artificial intelligence. It brands itself as an answer engine, and it actually delivers on that promise: using it instead of a search engine is a fairly addictive experience, especially given how clearly it links to the sources its information comes from.
It is good for research, too: it picks up the latest news quickly and can dig deep into what the web has stored in its troves. As for branching, Perplexity does not have this feature per se so far, but its Add to Follow-Up function mimics the usage pattern:
- select a piece of text you want to explore further;
- click Add to Follow-Up in the context menu that pops up on its own;
- add instructions in the prompt box (“elaborate” is the simplest example).
This creates what can count as a branch. From there, you can either select another fragment of the node answer (the one you started branching from) and ask Perplexity to dig deeper into it, creating another branch, or type your next related query into the prompt box. Either way, the AI will keep the context just fine.
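For readers who think in code, the pattern boils down to quoting the selected fragment back to the model along with a short instruction while the running context stays in place. Here is a tiny Python sketch of that idea; the names are made up for illustration and are not Perplexity’s API.

```python
# Rough sketch of the Add to Follow-Up pattern: a highlighted fragment is quoted
# back to the model together with a short instruction, while the running context
# is kept. All names here are hypothetical.

history: list[dict] = []

def ask(question: str) -> None:
    history.append({"role": "user", "content": question})
    # ...the model's answer would be appended here as an "assistant" turn...

def follow_up(selected_fragment: str, instruction: str = "elaborate") -> None:
    """Turn a highlighted piece of the previous answer into the next prompt."""
    ask(f'Regarding "{selected_fragment}": {instruction}.')

ask("What are the main approaches to home insulation?")
follow_up("mineral wool", "elaborate on fire safety")
follow_up("spray foam", "compare long-term costs")  # another 'branch' off the same answer
```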
Google Gemini’s conversation branching
Google Gemini has had conversation branching since February 2025, but the feature is available exclusively in Google AI Studio. It is positioned as a tool for developers and advanced users to prototype and refine workflows. The sequence is quite simple:
- click the three-dot button;
- select “Branch from here” in the menu.
As for the regular web version of Gemini, it doesn’t have branching (as of this writing), but its suggested follow-up questions can serve as a rough substitute.
Anthropic Claude: branching via edit and regenerate
Claude, another popular large language model positioned as an ethical, safe, and trustworthy AI, has had branching capabilities since at least early 2024. It works somewhat differently than in ChatGPT and Gemini: a new branch is created when you regenerate an answer or edit a prompt.
This is where it differs from Perplexity, for example: Perplexity also lets you edit prompts, but doing so yields a new answer and erases the earlier one. Claude retains everything: editing a prompt or simply regenerating a response creates a branch, so the earlier response does not disappear.
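To make the contrast concrete, here is a compact sketch of the two behaviors in plain Python: an edit that overwrites the previous exchange versus an edit that adds a sibling branch. It is an illustration only; neither product exposes structures like these.

```python
# Overwrite model: editing the prompt replaces the exchange, so "Summary v1" is lost.
exchange = {"prompt": "Summarize the report.", "answer": "Summary v1"}
exchange = {"prompt": "Summarize it as bullet points.", "answer": "Summary v2"}

# Branch model (Claude-style): the edited prompt and the regenerated answer become
# siblings under the same parent, so every earlier version stays reachable.
parent = {"branches": [{"prompt": "Summarize the report.", "answers": ["Summary v1"]}]}
parent["branches"][0]["answers"].append("Summary v1, regenerated")        # regenerate
parent["branches"].append({"prompt": "Summarize it as bullet points.",
                           "answers": ["Summary v2"]})                    # edit = new sibling
```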
Stay tuned for more AI coverage; if you want to revisit previous pieces (like the “AI-based services for all” series), find them under the “Artificial Intelligence” tag.