🪴 Scaling Synthesis

Q- What is a decentralized discourse graph?

Last updated June 16, 2022

Authored By:: P- Joel Chan, P- Rob Haisfield, P- Brendan Langen

Our research draws from a long line of information models, such as the Semantic Web Applications in Neuromedicine (SWAN) ontology, the micropublications model, the ScholOnto ontology for modeling scientific discourse, the nanopublication model, and the Hypotheses, Evidence, and Relationships (HypER) model. These models share a common underlying approach to representing scientific discourse: they distill traditional forms of publication down into more granular, formalized knowledge claims, linked to supporting evidence and context through a network or graph model.

We use the term discourse graph to refer to this information model, to evoke its core concepts: treating knowledge claims (rather than concepts) as the central unit, and emphasizing linking and relating those claims (rather than categorizing or filing them). Standardizing the representation of scientific claims and evidence in a graph model can support machine reasoning.1 We are particularly interested in the potential of discourse graphs to accelerate human synthesis work.
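To make the model concrete, here is a minimal sketch of a discourse graph in Python. The node types (Question, Claim, Evidence) and relation names (Answers, Supports) are illustrative assumptions chosen for this example; they are not the exact schema of any of the ontologies cited above.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A granular unit of discourse: a question, claim, or piece of evidence."""
    id: str
    kind: str   # assumed types: "Question" | "Claim" | "Evidence"
    text: str

@dataclass
class DiscourseGraph:
    nodes: dict = field(default_factory=dict)
    # Edges are typed relations between nodes: (source_id, relation, target_id).
    edges: list = field(default_factory=list)

    def add(self, node: Node) -> None:
        self.nodes[node.id] = node

    def relate(self, source_id: str, relation: str, target_id: str) -> None:
        self.edges.append((source_id, relation, target_id))

    def related(self, target_id: str, relation: str) -> list:
        """All nodes linked to `target_id` by `relation`."""
        return [self.nodes[s] for s, r, t in self.edges
                if t == target_id and r == relation]

# Build a tiny graph: a claim answers a question, evidence supports the claim.
g = DiscourseGraph()
g.add(Node("Q1", "Question", "What is a decentralized discourse graph?"))
g.add(Node("C1", "Claim", "Discourse graphs can accelerate synthesis."))
g.add(Node("E1", "Evidence", "Observed reuse of claims across projects."))
g.relate("C1", "Answers", "Q1")
g.relate("E1", "Supports", "C1")

supporting = g.related("C1", "Supports")  # the evidence backing claim C1
```

The point of the sketch is that claims are first-class nodes and the relations between them carry the meaning; categorization and filing play no structural role.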

So what does it mean for a discourse graph to be decentralized?

Synthesis has many component parts.2 Given that, C- The responsibilities required to produce synthesis can be split up among many people. P- Michael Karpeles refers to this concept as human computation.

Dividing the responsibilities of synthesis is one of the core strengths of decentralizing the process. When we look at prior attempts at building a semantic web, we find that the primary reasons they fell short have to do with human behavior and dishonesty.

C- People are lazy and C- Most people will primarily consume information, so we can’t expect them to do all of the work necessary to index information themselves. While tools like Roam Research and Obsidian enable people to develop advanced discourse graphs for themselves, over time they may end up with a system so complicated that maintaining it becomes a job in itself. Why not split up the effort so the ones who share information aren’t responsible for 100% of the processing?


  1. R- Genuine semantic publishing ↩︎

  2. R- Towards a comprehensive model of the cognitive process and mechanisms of individual sensemaking ↩︎