What user behavior is required to maintain a decentralized knowledge graph? Presumably, there should be some balance among what is required of user behavior, what is required of the design of the system, and what AI can do to reduce the burden on user behavior.
- Today in random things brought to my attention on Twitter (@balOS @joelchan86 you’ll both get a kick out of this): Tau is a decentralized blockchain social network that uses logic-based AI (similar to Jump) and program synthesis over user discussions to update its own codebase, so that everyone in the discussion forums can participate in Tau’s governance.
- Supposedly they’ve also figured out a way to use logic programming to identify points of consensus, so the codebase isn’t being updated all over the place.
- When you listen to them talk, you can hear an implicit point: if web3 is partly about being able to participate in the governance of the products and platforms you use, then unless you extend programming capability to everybody, those products are ultimately governed by the developers who decide what to build. End-user programming is necessary for self- and community governance.
- I asked on Twitter about the conceptual fit between query languages like SQL and Datalog and personal and shared knowledge graphs, and they responded.
- Makes me also think: is there more of a conceptual fit between discourse graphs and one type of AI or another, i.e. logical AI (Jump, Tau) versus probabilistic AI (GPT-3, DALL-E 2)?
- I would bet that logical AI is a better fit because it’s fully explainable. In the context of a discourse graph, you want to know exactly how it built up its answer. Architectural transparency is key.
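- To make the explainability point concrete, here is a minimal sketch (not any real system, and the relation names are invented) of a tiny forward-chaining engine over discourse-graph-style facts that records exactly how each conclusion was derived:

```python
# Minimal sketch of why logical AI is explainable: a tiny forward-chaining
# engine over discourse-graph-style facts that records the premises behind
# every derived conclusion. Relation and node names are illustrative only.

# Facts: (relation, subject, object)
facts = {
    ("supports", "evidence1", "claim1"),
    ("supports", "claim1", "question1"),
}

def forward_chain(facts):
    """Apply one hypothetical rule to a fixed point:
    supports(A, B) and supports(B, C)  =>  supports(A, C).
    Returns the closed fact set plus a derivation trace."""
    derivations = {}  # conclusion -> the premises that produced it
    changed = True
    while changed:
        changed = False
        for (r1, a, b) in list(facts):
            for (r2, b2, c) in list(facts):
                if r1 == r2 == "supports" and b == b2:
                    conclusion = ("supports", a, c)
                    if conclusion not in facts:
                        facts.add(conclusion)
                        derivations[conclusion] = [(r1, a, b), (r2, b2, c)]
                        changed = True
    return facts, derivations

facts, why = forward_chain(facts)
# Every derived fact carries its full justification:
for conclusion, premises in why.items():
    print(conclusion, "<=", premises)
```

The trace in `why` is the "architectural transparency": the answer and its derivation are the same object, which is what a probabilistic model can't give you directly.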
- Also, a logical AI might be able to do more with the relatively small amount of data produced by individual researchers in their own graphs. From our interviews, people seem to want AI that is tuned to their personal knowledge graphs, not necessarily to a global body of knowledge.
- On the other hand, a probabilistic AI might be a better fit if you consider the “zettelkasten as a conversation partner” to be most important.
- Additionally, with a discourse graph generated by thousands or millions of participants, I don’t know which would be more effective: a probabilistic or a logical AI.
- I know a guy who has built a library (can’t find it) called Frankenstein that is trained to look at black-box probabilistic AI results and fabricate propositional-logic explanations. Spitballing, a cool model might be:
- Probabilistic AI to identify logical relationships from natural language
- Logical AI built on top of that
- Probabilistic AI to answer questions from a global discourse graph
- Frankenstein to interpret those answers in terms of logical relationships and check to see if they are correct or where they might have logical breakdowns
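- The four stages above could be wired together roughly like this. Everything here is hypothetical: the function names and signatures are invented stand-ins, `check_explanation` is a stand-in for the Frankenstein-style checker, and the "models" are stubbed with trivial logic so the pipeline runs end to end:

```python
# Hypothetical sketch of the four-stage model above. Every function is a
# stand-in with an invented signature; the AI stages are stubbed so the
# pipeline is runnable and the data flow between stages is visible.

def extract_relations(text):
    """Stage 1: probabilistic AI turns natural language into logical
    triples. Stubbed with a trivial 'X supports Y' pattern."""
    triples = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[1] == "supports":
            triples.append(("supports", parts[0], parts[2]))
    return triples

def logical_query(triples, question_node):
    """Stage 2: logical AI answers queries over the extracted relations."""
    return [s for (r, s, o) in triples if o == question_node]

def probabilistic_answer(question):
    """Stage 3: a probabilistic model's free-text answer (stubbed)."""
    return "claim1 because evidence1"

def check_explanation(answer, triples):
    """Stage 4: 'Frankenstein'-style check that the probabilistic answer
    is backed by an explicit logical relationship in the graph."""
    mentioned = set(answer.replace("because", " ").split())
    return any(s in mentioned and o in mentioned for (r, s, o) in triples)

graph_text = "evidence1 supports claim1\nclaim1 supports question1"
triples = extract_relations(graph_text)
answer = probabilistic_answer("what supports question1?")
print(check_explanation(answer, triples))  # the answer is grounded in a triple
```

The point of the sketch is the division of labor: the probabilistic stages handle fuzzy language, while the logical stages and the final check are where correctness and explanation live.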
Thoughts from Ryan Murphy:
In language processing/AI-ish stuff, I want to see:
- Find similar/related documents (as per DEVONthink)
- Identify missing links (e.g., highlight important words/tokens that also appear to be significant and relevant elsewhere in your notes)
- Suggested tags … I thought I had more, but I’m forgetting ’em
Joel Chan: love the idea of missing links! Maybe a more powerful/fuzzy variant of aliases: suggested aliases for the current note would be super valuable. Prevents people from talking about the same thing using slightly different terms in different places.
Ryan Murphy: actually, that’s a fifth (or maybe the same one): redundancy detection. “These two notes are so similar that you might want to merge them”-type use cases. You are always in control, but the system can help you with your tasks: finding things you want to merge, propagating changes, detecting missing links, etc.