Blog

This is an ongoing collection of longer form thoughts on the design and philosophy behind the Coalescent Computer.

A New Architecture for Social Computing

@jakintosh | January 1, 2023

The Coalescent Computer is, in one sentence, a specification for a virtual stack machine and a protocol for deterministic binary data. However, it is the marriage of these two components, along with their specific, intentional design, that creates a fundamentally new model for personal and social computing. (By “social computing” I mean a network of equal peers “personally computing, together”.)

In this system, all data has a canonical binary format that is used to produce a reference hash. Instead of a file system, there is a database that retrieves data by its hash. Using these hashes, data can be linked together to create relationships, allow for composition, or provide names.
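To make the idea concrete, here is a minimal sketch of a content-addressed store in Python. The `Store` class name, the use of SHA-256, and the hex-string references are all illustrative assumptions; the actual protocol defines its own canonical encoding and hash function.

```python
import hashlib

class Store:
    """A toy content-addressed database: data is keyed by the hash
    of its canonical bytes, rather than by a file path."""

    def __init__(self):
        self._blobs = {}  # reference hash -> canonical bytes

    def put(self, data: bytes) -> str:
        """Store canonical bytes and return their reference hash."""
        ref = hashlib.sha256(data).hexdigest()
        self._blobs[ref] = data
        return ref

    def get(self, ref: str) -> bytes:
        """Retrieve data by its hash."""
        return self._blobs[ref]

store = Store()
ref = store.put(b"hello, world")
assert store.get(ref) == b"hello, world"

# A "link" or "name" is itself just more data that contains a hash,
# so relationships and names get stored the same way as everything else.
name_ref = store.put(b"name:greeting=" + ref.encode())
```

Because the reference is derived from the content, storing the same bytes twice always yields the same hash, which is what makes linking and deduplication fall out for free.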

All virtual machine code has a single binary representation, which lets it be treated as data within the unified data model. Furthermore, canonical machine code invokes subroutines using data hashes instead of compiled memory addresses, and those hashes are resolved to concrete memory addresses at load time. This means that each routine is loaded into memory exactly once, and that code gains all the benefits of any data in the system: it’s stored in the database, can be linked to, and can be named.
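The load-time resolution step can be sketched as follows. This is an assumption-laden toy: the `Loader` class, the flat `memory` list, and the routine bytes are stand-ins for the real machine's loader and address space, but the core behavior, resolving a hash to an address exactly once, is what the paragraph above describes.

```python
import hashlib

def h(code: bytes) -> str:
    """Reference hash of a routine's canonical bytes (SHA-256 here is illustrative)."""
    return hashlib.sha256(code).hexdigest()

class Loader:
    def __init__(self, store: dict):
        self.store = store   # reference hash -> canonical routine bytes
        self.loaded = {}     # reference hash -> concrete memory address
        self.memory = []     # pretend flat code memory, one routine per slot

    def load(self, ref: str) -> int:
        """Resolve a routine hash to a memory address, loading it if needed."""
        if ref in self.loaded:
            return self.loaded[ref]  # already resolved: reuse the same address
        addr = len(self.memory)
        self.memory.append(self.store[ref])
        self.loaded[ref] = addr
        return addr

helper = b"HELPER ROUTINE"
loader = Loader({h(helper): helper})

# Two different callers resolve the same hash to the same address,
# so the routine occupies memory exactly once.
addr1 = loader.load(h(helper))
addr2 = loader.load(h(helper))
assert addr1 == addr2
assert len(loader.memory) == 1
```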

Since the hashes are content-addressed, the single “entry point” routine hash of a function, an application, or even an operating system is enough to uniquely identify that code, because subroutine hashes are resolved recursively. This also means that the question of static vs dynamic linking is eliminated, and that all libraries have maximum code overlap and minimum storage usage. In addition, by scanning the map of loaded routine hashes against lists of insecure or malicious code published by security auditors, compromised code running on the system can be detected very easily.
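Both properties, an entry-point hash identifying a whole program and the audit scan, follow from walking the hash graph. A sketch, with the caveat that the call graph here is modeled as a plain dict of hash-to-callee-hashes rather than parsed from real canonical machine code, and the hash names are made up:

```python
def closure(entry: str, calls: dict) -> set:
    """Recursively resolve an entry-point hash into the full set of
    routine hashes it (transitively) depends on."""
    seen, stack = set(), [entry]
    while stack:
        ref = stack.pop()
        if ref in seen:
            continue
        seen.add(ref)
        stack.extend(calls.get(ref, []))
    return seen

# Toy call graph: "app" calls two libraries that share a helper routine.
calls = {
    "app":   ["lib_a", "lib_b"],
    "lib_a": ["helper"],
    "lib_b": ["helper"],
}

loaded = closure("app", calls)          # everything "app" uniquely identifies
compromised = {"helper"}                # a hash list published by an auditor
flagged = loaded & compromised          # a simple set intersection finds it
assert flagged == {"helper"}
```

The audit check is just a set intersection: no code inspection is needed, because identical code always has an identical hash.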

These core design features create a simple set of building blocks from which interesting, complex behaviors can emerge. Together they form a simple and reasonable platform for personal computing, where all computation and data storage happens locally, and external data and code can be fetched and verified when needed. And since the foundations are open protocols, any instance of the machine that meets the virtual instruction specification, whether implemented in software or even hardware, will “coalesce” with any other instance.

This becomes even clearer once you move from personal computing into networked social computing. When two Coalescent Computer instances communicate over the network, they can share data (and code, which is data) between them, resolving hashes they may not have stored locally and verifying the contents as legitimate. More than just a set of peers, we can have networks where each node has its own perspective on which other peers it trusts, and to what degree. The power to choose what to compute, how to compute it, and with whom always remains in the hands of each agent on the network.
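The key property that makes untrusted sharing safe is that verification is local: a node re-hashes whatever bytes a peer sends and compares against the hash it asked for. A sketch, where the `Node` shape and in-memory "network" are illustrative assumptions:

```python
import hashlib

def sha(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Node:
    """A toy Coalescent Computer instance with a local blob database."""

    def __init__(self):
        self.blobs = {}  # reference hash -> bytes

    def fetch(self, ref: str, peer: "Node") -> bytes:
        """Resolve a hash, asking a peer if it isn't stored locally."""
        if ref in self.blobs:
            return self.blobs[ref]
        data = peer.blobs[ref]       # untrusted bytes from the network
        if sha(data) != ref:         # verify locally by re-hashing
            raise ValueError("peer sent data that does not match its hash")
        self.blobs[ref] = data       # legitimate: keep a local copy
        return data

alice, bob = Node(), Node()
ref = sha(b"shared routine")
bob.blobs[ref] = b"shared routine"

# Alice doesn't have the data, but she can fetch it from Bob and
# trust the result without trusting Bob.
assert alice.fetch(ref, bob) == b"shared routine"
```

Trust between peers then governs only *what* a node chooses to fetch or run, never whether the bytes themselves are authentic.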

In my own philosophical understanding of the world, all human beings (and, in fact, all sensory agents) are part of the same universal sensory system, each with their own history and perspective of sensory experiences. We don’t have disagreements on reality, just different “pieces of data”. The Coalescent Computer follows this natural design: every instance of the Coalescent Computer is the *same* computer, just with a different collection of data in its database, shared at the discretion of its agent.