Scaling Synthesis

People process complex information in multiple levels and stages

Last updated March 20, 2023

Authored by: Joel Chan, Rob Haisfield

When we’re exposed to new information, we have to ask questions and learn more before we feel confident in our understanding. Knowledge evolves over time, and we adapt our thoughts as we encounter other perspectives. This is the sensemaking process. At the moment we’re introduced to something new, we know the least we ever will about the topic.

Similarly, in the moment a note is created, it is very difficult to know where it belongs. We may not yet have a name for what we’re seeing, or know how it relates to other insights. Our understanding will likely evolve after reading other sources. On later review, we may even find a specific detail of the wider point to be the most interesting part. It’s hard to know everything up front!

Synthesis is usefully modeled as a specialized form of sensemaking, which requires iterative loops of (re)interpreting data in light of evolving schemas. Basically, as people learn more about the structure of their intellectual domain, reinterpreting what they already know in light of new knowledge and frames is likely to improve the rate of synthesis. The primary models of this process are the Learning Loop Complex and the Notional Model of Sensemaking, which describe looping between and within foraging and sensemaking loops, progressively increasing in structure and effort, starting from raw data sources and culminating in a synthesized set of hypotheses.

In our interviews, we saw multiple people describe using conversations and chats with others to crystallize new information, like R16 in their synthesis process. Sometimes the crystallizing device is a frame, or a prompt, that the person presents to themselves, like R10 writing multiple versions in order to speak fluently about the topic.

This all speaks to the need to treat notes as works in progress. Notes aren’t in their final form when first written; they evolve, often far beyond the stubs they began as. Translating this to tool design: a tool for thought needs an excellent workflow for refactoring to enable incremental formalization.
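To make “refactoring as a primitive” concrete, here is a minimal sketch of one such operation: extracting a block from a daily note into its own note, leaving a link behind. It assumes a vault of plain markdown files and [[wiki-link]] syntax; the function name and file layout are illustrative, not any specific tool’s API.

```python
from pathlib import Path

def extract_note(daily_note: Path, block_text: str, new_title: str, vault: Path) -> Path:
    """Move a block from a daily note into its own note, leaving a wiki-link behind.

    Assumes plain markdown files and [[wiki-link]] syntax; both are
    illustrative conventions, not a specific tool's data model.
    """
    source = daily_note.read_text(encoding="utf-8")
    if block_text not in source:
        raise ValueError("block not found in daily note")

    # Create the new, more formalized note with a backlink to its origin.
    new_note = vault / f"{new_title}.md"
    new_note.write_text(
        f"# {new_title}\n\n{block_text}\n\nExtracted from [[{daily_note.stem}]]\n",
        encoding="utf-8",
    )

    # Replace the inline block with a link, so the daily note stays readable.
    daily_note.write_text(
        source.replace(block_text, f"[[{new_title}]]"), encoding="utf-8"
    )
    return new_note
```

The point of the sketch is that formalization happens incrementally: the idea starts as a stub inside a stream of daily writing and only later earns its own name and page.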

Among thought processors, we see different approaches. Roam Research encourages users to write directly into a daily notes page, linking to pages as they write. This contrasts with conventional practice in tools like Evernote, which ask the user to name a page and place it in a folder up front. Zettelkasten systems, which are agnostic to any specific tool, ask users to create a note without a name and later fit it in next to similar notes.

In the process of synthesis, we often write a number of notes about an idea: some useful, some not. We know that Multiplicity facilitates synthesis, and this writing process helps, but it creates a huge amount of noise to search through later. We have a need for Archiving.

Forgetting is powerful. If something is no longer important, its presence in your memory is a burden, turning your “search results” into a noisy mess that decreases your likelihood of finding the right answer.
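As a toy illustration of why forgetting helps, here is a sketch of search over a note collection where archived notes are hidden by default but remain retrievable on request. The Note structure and its archived flag are assumptions for the sake of the example, not any real tool’s data model.

```python
from dataclasses import dataclass

@dataclass
class Note:
    title: str
    body: str
    archived: bool = False  # hypothetical flag; real tools represent this differently

def search(notes: list[Note], query: str, include_archived: bool = False) -> list[Note]:
    """Naive substring search that "forgets" by default.

    Archived notes stay retrievable, but they no longer add noise
    to everyday lookups.
    """
    query = query.lower()
    return [
        n for n in notes
        if (include_archived or not n.archived)
        and (query in n.title.lower() or query in n.body.lower())
    ]
```

The design point is that archiving is a soft delete: the default view forgets, but nothing is actually lost.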

Many people use personal knowledge management software to augment their memories. “Your mind is for having ideas, not holding them,” as stated in the well-known productivity text “Getting Things Done.” We saw this again and again in our interviews: expert synthesizers use thought processors to ensure good ideas don’t fall through the cracks. Personally, I use thought processors in conjunction with personal knowledge management to ensure the lessons I learn today retain their usefulness in 10 years. If at that point my questions and ideas depend on recent memory rather than a decade of learning, I’ve failed.

However, over the course of one year using Obsidian, I wrote more than a million words. Maybe 30% of that came from educational materials. The rest was “thinking out loud” in order to process thoughts. Eventually, many of those thoughts were processed into an information artifact, and those artifacts became my canonical points of reference.

While this multiplicity is useful while thinking through an issue, reviewing it later contributes to a feeling of wasted, repeated effort. Early versions of information that made it into the final artifact can likely be forgotten without much negative consequence.

Additionally, over the course of a decade, I’ll certainly update my beliefs and mental models. When that happens, I’ll have to ask myself: “Is it important to keep a record of my beliefs at that point in time, or does leaving it in my database just weigh the database down?”

I don’t know how software can automatically archive this information without making users mad at some point or another. This is a hard problem, but it should be clear that Bulk refactors are a necessary primitive for maintaining a decentralized discourse graph.
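For a sense of what such a bulk refactor might look like, here is a sketch under stated assumptions: notes are markdown files in a flat folder, “stale” means no inbound [[links]] and no recent edits, and archiving means moving the file rather than deleting it. The heuristic and thresholds are illustrative only.

```python
import re
import time
from pathlib import Path

WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

def bulk_archive(vault: Path, archive: Path, max_age_days: int = 365) -> list[Path]:
    """Move stale, unlinked notes into an archive folder in one pass.

    "Stale" is an illustrative heuristic: no inbound [[links]] from
    any other note, and no edits within max_age_days.
    """
    notes = list(vault.glob("*.md"))

    # Collect every note title that is the target of at least one wiki-link.
    linked = {
        target.strip()
        for note in notes
        for target in WIKI_LINK.findall(note.read_text(encoding="utf-8"))
    }

    cutoff = time.time() - max_age_days * 86400
    archive.mkdir(parents=True, exist_ok=True)

    moved = []
    for note in notes:
        if note.stem not in linked and note.stat().st_mtime < cutoff:
            # Move, don't delete: archived notes stay recoverable.
            moved.append(note.rename(archive / note.name))
    return moved
```

Because the operation moves files rather than deleting them, it stays reversible, which seems like a minimum requirement for not making users mad.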

At the extreme end, Venkatesh Rao jokingly referred to one user archetype as “mind-palace pack rats”: people who don’t know how to delete information and want to either memorize or save everything they encounter. I occasionally fall into this trap. Predicting the trajectory of future reuse for an information object is hard, and information hoarders may believe that everything will be reused at some point. They may also believe that it’s better to have information and not need it than to need it and not have it.

Whether or not either of these beliefs turns out to be true, hoarders will produce a massive amount of information over the course of 10 years.

If the first belief is false and they never delete or archive information, then in 10 years they will have far too much to sift through, turning their personal database into a noisy mess that resists efficient reuse. And whether or not the second belief is true, it must be weighed against the immense cost of bloating your knowledge graph.

The problem is that over the course of 10 years, I’ll generate so much content that much of it will simply appear to be noise. Not everything I believe to be important today will be important then. Most likely, I will only want the most important artifacts, given time constraints.