Trent McConaghy: Blockchain Infrastructure Landscape: A First Principles Framing

    Manifesting Storage, Computation, and Communications


    How are Ethereum, IPFS/Filecoin, and BigchainDB complementary? What about Golem, Polkadot, or Interledger? I often get questions like this. So, I decided to write about how I answer those questions, via a first-principles framing.

    The quick answer:

    …there’s no one magic system called “Blockchain” that magically does everything. Rather, there are really good building blocks of computing that can be used together to create effective decentralized applications. Ethereum can play a role, BigchainDB can play a role, and many more as well. Let’s explore…


    The elements of computing are storage, compute, and communications. Mainframes, PCs, mobile, and cloud all manifest these elements in their own unique ways. Specialized building blocks emerge to reconcile tradeoffs within a given element.

    For example, in the storage element we have both file systems and databases, where file systems are for storing blobs like mp3s with a hierarchy of directories and files, and databases are for storing structured metadata with a query interface like SQL [1]. In the centralized cloud, we might use Amazon S3 for blob storage, MongoDB Atlas for databases, and Amazon EC2 for processing.

    This article focuses on the Blockchain landscape: the blocks for each element of computing, and some examples of systems manifesting each block. For each block, I will focus on being illustrative over thorough.

    Blockchain Building Blocks

    Here is each element of computing, with related decentralized building blocks:

    • Storage: token storage, database, file system / blobs
    • Processing: stateful business logic, stateless business logic, high performance compute
    • Communications: connect networks of data, of value, and of state

    Blockchain Infrastructure Landscape

    Blockchain technology is manifesting in each block, as this image shows:

    [Figure: the Blockchain infrastructure landscape, mapping example projects onto each building block]

    Decentralized Storage

    The fundamental computing element of storage has the following building blocks.

    Token storage. Tokens are stores of value (e.g. assets, securities) whether it’s Bitcoins, air miles, or digital art copyright. The main actions on a token storage system are to issue and transfer tokens (with many variants), while preventing double-spends and the like.
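    To make the two core actions concrete, here is a minimal, hypothetical Python sketch of a token ledger with issue and transfer, where double-spends are blocked by a simple balance check. Real systems add signatures, consensus, and persistence; all names here are illustrative.

```python
# Minimal token-ledger sketch: issue and transfer, with double-spend
# prevention via balance checks. Illustrative only -- real systems add
# signatures, consensus, and persistence.

class TokenLedger:
    def __init__(self):
        self.balances = {}  # account -> token count

    def issue(self, account, amount):
        # Create new tokens and credit them to an account.
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, receiver, amount):
        # Reject a transfer that would spend tokens the sender lacks --
        # the ledger-level analogue of preventing a double-spend.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance (double-spend attempt?)")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = TokenLedger()
ledger.issue("alice", 10)
ledger.transfer("alice", "bob", 4)
# A second spend of the same tokens fails:
try:
    ledger.transfer("alice", "carol", 7)
except ValueError:
    pass
```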

    Bitcoin and Zcash are two prominent “pure play” systems focusing solely on tokens. Ethereum happens to use tokens in service towards its mission of being a world computer. These are all examples of tokens given out as internal incentives to run the network infrastructure.

    Other tokens aren’t internal to a network to power the network itself, but are used for incentives in a higher-level network where the lower-level infrastructure actually stores the tokens. One example is ERC20 tokens like Golem (GNT) running on top of the Ethereum mainnet. Another example is Envoke’s IP licensing tokens, running on the IPDB network.

    Finally, I have listed a “.*” to illustrate that most Blockchain systems have a mechanism for token storage.

    Database. Databases specialize in storing structured metadata, for example as tables (relational DB), document stores (e.g. JSON), key-value stores, time series, or graphs; and then rapidly retrieving that data via queries (e.g. SQL).
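    As a sketch of what "structured metadata plus a query interface" buys you, here is a tiny example using Python's built-in sqlite3. The schema and data are hypothetical; the point is that SQL specifies *what* to retrieve, not *how*, and the engine picks the execution plan.

```python
import sqlite3

# Hypothetical metadata table, queried declaratively with SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE songs (id INTEGER, title TEXT, artist TEXT, plays INTEGER)")
db.executemany("INSERT INTO songs VALUES (?, ?, ?, ?)", [
    (1, "Song A", "Alice", 120),
    (2, "Song B", "Bob", 45),
    (3, "Song C", "Alice", 300),
])

# Retrieve all of Alice's songs, most-played first.
rows = db.execute(
    "SELECT title, plays FROM songs WHERE artist = ? ORDER BY plays DESC",
    ("Alice",),
).fetchall()
# rows == [("Song C", 300), ("Song A", 120)]
```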

    Traditional distributed (but centralized) databases like MongoDB and Cassandra routinely store hundreds of terabytes, even petabytes, of data, with throughput that can exceed 1 million writes per second.

    Query languages like SQL are profound because they separate implementation from specification, and are therefore not bound to any particular application. SQL has been a standard for decades. This is why the same database system can be used across many different industries.

    Put another way:

    …to generalize beyond Bitcoin to more applications without any application-specific code, you don’t need to go all the way to Turing completeness. You just need a database. This has corresponding benefits in simplicity and scale. There are still great reasons to have Turing completeness in some places; we discuss this further in the “decentralized processing” section.

    BigchainDB is decentralized database software; specifically a document store. Being built on MongoDB (or RethinkDB), it inherits the querying and scale of Mongo. But it also has Blockchain-y characteristics like decentralized control, tamper-resistance, and token support. IPDB is a public net instance of BigchainDB, with governance.

    Also in the Blockchain space, we can think of IOTA as a time-series database, if we squint a bit.

    File system / data blob storage. These are systems to store large files (movies, mp3s, large datasets), organized in a hierarchy of directories and files.

    IPFS and Tahoe-LAFS are decentralized file systems that wrap decentralized or centralized blob storage. FileCoin, Storj, Sia, and Tierion do decentralized blob storage. So does good old BitTorrent, though it uses a tit-for-tat scheme rather than tokens. Ethereum Swarm, Dat, and Swarm-JS do basically both.


    Decentralized Processing

    “Smart contracts” is the popular label for systems that do processing in a decentralized fashion [2]. This actually has two subsets with very different properties: stateless (combinational) business logic and stateful (sequential) business logic. Stateless vs stateful gives radical differences in complexity, verifiability, etc. There’s a third decentralized processing building block: high-performance compute (HPC).

    Stateless (combinational) business logic. This is any arbitrary logic that does not retain state internally. In electrical engineering terms, it can be framed as a combinational digital logic circuit. The logic is represented as a truth table, a schematic diagram, or code holding conditional statements (combining if/then, and, or, not). Because they don’t have state, it’s easy to verify large stateless smart contracts, and therefore to build large verified / secure systems: a circuit with N inputs and one output requires O(2^N) computations to verify.
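    To make the verification claim concrete, here is a hypothetical Python sketch that exhaustively checks a 3-input stateless rule against an independently written specification — 2³ = 8 cases in total. The rule and its condition names are invented for illustration.

```python
from itertools import product

# A stateless "contract" rule: release funds iff
# (approved_by_escrow OR timeout_expired) AND signature_valid.
def rule(escrow_ok, timed_out, sig_ok):
    return (escrow_ok or timed_out) and sig_ok

# Specification to verify against (here, written independently).
def spec(escrow_ok, timed_out, sig_ok):
    return sig_ok and (escrow_ok or timed_out)

# Exhaustive verification: with N inputs there are 2^N cases -- here 2^3 = 8.
all_cases = list(product([False, True], repeat=3))
verified = all(rule(*case) == spec(*case) for case in all_cases)
assert len(all_cases) == 8 and verified
```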

    The Interledger Protocol (ILP) contains the crypto-conditions (CC) protocol to cleanly specify combinational circuits. CC is good to know of because it’s becoming an internet standard via the IETF, and because ILP is getting widespread adoption among both centralized and decentralized payment networks (e.g. >75 banks via Ripple). CC has standalone implementations in JavaScript, Python, Java, and more. BigchainDB, Ripple, and other systems use CCs, and therefore support combinational business logic / smart contracts.
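    A common crypto-conditions pattern is the threshold condition, e.g. "any 2 of these 3 parties must sign". The sketch below shows only the boolean-logic idea in plain Python — it is not the actual crypto-conditions API, which hashes public keys into a condition URI and verifies real signatures.

```python
# Sketch of a crypto-conditions-style threshold condition: a 2-of-3
# rule fulfilled when at least `required` of the named parties have
# signed. Boolean-logic idea only; not the real CC implementation.

def threshold_condition(required, subconditions):
    # Returns a combinational predicate over a set of fulfilled names.
    def fulfilled(signers):
        return sum(1 for name in subconditions if name in signers) >= required
    return fulfilled

two_of_three = threshold_condition(2, ["alice", "bob", "carol"])
assert two_of_three({"alice", "carol"}) is True
assert two_of_three({"bob"}) is False
```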

    Since stateful logic is a superset of stateless logic, systems that support stateful logic also support stateless logic (at the expense of additional complexity and verifiability challenges).

    Stateful (sequential) business logic. This is any arbitrary logic that does retain state internally. That is, it has memory. Or, it’s a combinational logic circuit with at least one feedback loop (and a clock). For example, a microprocessor has an internal register that gets updated according to machine-code instructions that are sent to it. More generally, stateful business logic is a Turing machine that takes in a sequence of inputs, and returns a sequence of outputs. Systems that manifest (a practical approximation of) this are called Turing-complete systems [3].
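    As a toy illustration of "memory", here is a hypothetical Python machine with one internal register updated by a stream of machine-code-like instructions. Its outputs depend on history, not just the current input — which is exactly what makes verification harder.

```python
# A toy stateful machine: one internal register updated by a sequence
# of instructions, producing a sequence of outputs.

def run(instructions):
    register = 0          # internal state ("memory")
    outputs = []
    for op, arg in instructions:
        if op == "LOAD":
            register = arg
        elif op == "ADD":
            register += arg
        outputs.append(register)  # output depends on all prior inputs
    return outputs

trace = run([("LOAD", 5), ("ADD", 3), ("ADD", -2)])
# trace == [5, 8, 6]
```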

    Once you have internal state, verification becomes harder. For combinational circuits, the number of possible input combinations is 2^(number of inputs). For sequential circuits, the number of internal states is 2^(number of internal state variables), if your internal variables are all Boolean. For example, a 3-input combinational circuit has 2³ = 8 possible input combinations to verify. But a sequential circuit with a 32-bit register has 2³² = 4.2 billion states to check for full verification. This restricts the complexity of sequential circuits (if you want to trust them).

    Ethereum is the best-known Blockchain system that manifests stateful business logic / smart contracts running directly on-chain. Lisk, RChain, DFINITY, EOS, Tezos, Fabric, Sawtooth, and many more also implement it.

    Because sequential logic is a superset of combinational logic, these systems also support combinational logic.

    For many use cases, there’s a far simpler approach: simply have processing on the client side, within the browser or the mobile device, running JavaScript or Swift. Here, you have to trust the processing going on in your client, but if that’s on the device in your hand it’s often acceptable. We think of this as the “fat client” alternative to the “fat protocols” framing. This architecture is easy for mainstream web developers. For example, all that many webapps need is application state; to build this you just need JS + IPDB (using js-bigchaindb-driver). Or, if your app also needs blob storage and payments, include the JS client versions of IPFS (ipfs.js) and Ethereum (web3.js).

    High-Performance Compute (HPC). This is processing to do “heavy lifting” compute for things like rendering, machine learning, circuit simulation, weather forecasting, protein folding, and more. A compute job here might take hours or even weeks on a cluster of machines (CPUs, GPUs, even TPUs).

    I see these approaches to decentralized HPC:

    • Golem and iEx.ec frame it as a decentralized supercomputer along with associated apps.
    • Nyriad frames it as storage processing: the processing sits next to decentralized storage (for which Nyriad also has a solution).
    • TrueBit lets third parties compute, then does post-compute checking (implicit checking when possible; explicit checking if questions get raised).
    • Some folks are simply running heavy computation on VMs or Docker containers, and putting the result (final VM state, or just computed results) into blob storage with restricted access. Then they sell access to those containers using, for example, tokenized read permissions. This approach asks more of clients to verify results, but the good thing is that all this tech is possible today. It will naturally combine with TrueBit as TrueBit matures.
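    The post-compute checking idea can be sketched as follows: a client re-runs a random sample of a worker's sub-results rather than the whole job, and raises a dispute on any mismatch. This is only a toy in the spirit of TrueBit, with hypothetical names; the real protocol uses interactive verification games and on-chain arbitration.

```python
import random

def heavy_task(x):
    return x * x  # stand-in for an expensive computation

def worker_compute(inputs):
    # Untrusted third party computes the full job.
    return [heavy_task(x) for x in inputs]

def spot_check(inputs, claimed, sample_size=3, seed=0):
    # Client recomputes a few randomly chosen entries; a mismatch
    # would trigger a dispute and full (explicit) verification.
    rng = random.Random(seed)
    for i in rng.sample(range(len(inputs)), sample_size):
        if heavy_task(inputs[i]) != claimed[i]:
            return False  # dispute: escalate to full checking
    return True

inputs = list(range(100))
claimed = worker_compute(inputs)
assert spot_check(inputs, claimed)  # honest worker passes the sample
```

A tampered entry may or may not fall inside the random sample, which is why real schemes back spot checks with economic penalties and escalation.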


    Decentralized Communications

    Here I focus on connecting networks. It comes in three levels: data, value, and state.

    Data. In the 60s we got the ARPAnet. Its success spawned several similar networks like NPL and CYCLADES. A new problem arose: they didn’t talk to each other. Cerf and Kahn invented TCP/IP in the 70s to connect them, to create a network of networks, which we now call the internet. TCP/IP is now the de-facto standard to connect networks. OSI was a competing set of protocols, but it’s long faded (though, ironically, its model has proved useful). So, despite its age, TCP/IP is nonetheless a decentralized building block, for connecting networks of data.

    Value. TCP/IP only connects networks on a data level. You can double-spend packets — send the same packet to more than one destination at once — and it doesn’t care. But what about connecting networks where you can send value across them? For example, from Bitcoin to Ethereum, or even from the SWIFT payments network to, say, Ripple’s XRP network. You want the token to be able to go to only one destination at a time. One way to connect networks while preventing double-spends is to use an exchange, but that’s traditionally pretty heavy. However, you can strip an exchange to its essence and remove the need for a trusted middleman by using cryptographic escrow: Alice can send money to Bob via Mallory, where Mallory passes on the funds but cannot spend them (and there’s a timeout so that Mallory can’t stall things forever). This is the essence behind the Interledger Protocol (ILP). It’s the same conceptual idea as two-way pegs (think sidechains) and state channels (think Lightning & Raiden), but the focus is 100% on connecting networks with respect to value. Besides ILP, there’s also Cosmos, which adds a bit more complexity for more convenience.
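    The escrow idea above can be sketched as a hashed-timelock: the middleman can relay locked funds but can only claim them by presenting a secret preimage, and the sender is refunded after a timeout. This Python sketch is a simplified, hypothetical model — no signatures, no chain, illustrative names only.

```python
import hashlib

# Hashed-timelock escrow sketch: funds unlock only to whoever holds the
# preimage of the hashlock, and only before the timeout expires.

class Escrow:
    def __init__(self, amount, hashlock, expires_at):
        self.amount = amount
        self.hashlock = hashlock      # sha256 digest the claimer must match
        self.expires_at = expires_at  # after this, the sender can reclaim
        self.claimed = False

    def claim(self, preimage, now):
        # Mallory can hold/relay the escrow but cannot spend it without
        # the preimage; the timeout stops her from stalling forever.
        if now < self.expires_at and hashlib.sha256(preimage).hexdigest() == self.hashlock:
            self.claimed = True
        return self.claimed

secret = b"receiver-only-secret"
lock = hashlib.sha256(secret).hexdigest()
escrow = Escrow(amount=10, hashlock=lock, expires_at=100)

assert escrow.claim(b"wrong-guess", now=50) is False  # middleman can't spend
assert escrow.claim(secret, now=50) is True           # receiver's preimage unlocks
```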

    State. Can we go beyond connecting networks of value? Imagine a computer virus with its own Bitcoin wallet that can hop from one network to another. Or a smart contract in Ethereum mainnet that can move its state to another Ethereum net, or another compatible net? Or, why restrict an AI DAO to just one net?

    This is where Polkadot comes in: to connect networks of state. Aeternity also fits somewhere on the spectrum between network-of-value and network-of-state.


    We’ve now reviewed the three elements of computing (storage, processing, communications), the decentralized building blocks for each, and example projects within each building block.

    People are starting to build systems that manifest combinations. There are many combinations of two blocks at once, usually IPFS + Ethereum or IPFS + IPDB. But there are even folks using three or more blocks. Here are a couple of leading-edge examples:

    • Ujo uses IPFS|Swarm + IPDB + Ethereum for decentralized music, just as envisioned here. IPFS or Swarm are for file system and blob storage. IPDB (with BigchainDB) is used for metadata storage and querying. Ethereum is used for token storage and stateful business logic.
    • Innogy uses IPFS + IPDB + IOTA for supply chain / IoT applications. IPFS is used for file system and blob storage. IPDB (with BigchainDB) is used for metadata storage and querying. IOTA is used for time-series data.

    Related Work

    Here are related framings by others in the Blockchain community; all of whom I’ve had the pleasure to have great conversations with.

    Joel Monegro’s “Fat Protocols” framing emphasizes each building block as a protocol. I think this is a cool way of framing, though it constrains the building blocks to be talking to each other via a network protocol. There’s another way: blocks could simply be one “import” statement or library call away.

    Reasons for using an import could be: (a) lower latency: a network call takes time, which could hurt or kill usability; (b) simplicity: using a library (or even embedded code) is usually just simpler than connecting over the network, paying tokens, etc.; and (c) maturity: the protocol stack is just emerging now, whereas we have awesome Unix libraries going back decades, and even Python and JS blocks going back 15+ years.

    Fred Ehrsam’s “Dapp Developer Stack” has an emphasis on web business models. While it’s also very helpful, it does not aim to make a fine-grained distinction among blocks for a given element of computing (e.g. file system versus database).

    The BigchainDB whitepaper (first released Feb 2016) Figure 1 gave an earlier version of the stack of this post. It focused on the elements of decentralized processing, file system, and database. It did not frame from the perspective of “elements of computing”, and did not distinguish the types of decentralized processing. What I’ve written in this post is an evolution of my thinking from that paper over the past year and a half; with continual updates in talks such as my May 22 talk at Consensus 2017 which is very similar to this article. (Part of my reason to write this post is that I’ve received many requests to put it in writing:)

    Stephan Tual’s “Web 3.0 Revisited” stack is spiritually similar to this post. It does a good service to the community by trying to make a map that groups many projects into similar building blocks. I was happily surprised by how similar the thinking was to my own. However, its layer of blocks to serve applications (blocks for messaging, storage, consensus, governance, ..) is actually mixing three things: apps, the “what”, and the “how”. To me, blocks should be the “what”. So, messaging is an app (should be at the application level); storage needs to be more fine-grained; consensus is part of the “how” (hidden within some blocks); and governance is also part of the “how” (therefore also hidden). It also has [network] protocols as a separate lower-level block, though I see those as one of the possible ways that blocks can talk to each other, alongside library calls. Nonetheless, I think this is an excellent article and stack:)

    Alexander Ruppert’s “Mapping the decentralized world” has about 20 groupings of organizations, with the x-axis giving four higher-level groupings from infrastructure layer to application layer, but with middleware and liquidity as intermediate levels. This is a great piece too; I’m happy to have helped Alex map it out. It has less emphasis on core infrastructure and more on broader trends; whereas this piece is all about core infrastructure from a first-principles framing.


    Conclusion

    Systems like Ujo combine many blocks together, such as IPFS or Swarm (for blobs) + Ethereum (for tokens and business logic) + IPDB & BigchainDB (for database with fast queries), and therefore leverage the benefits of all of these systems.

    I expect that this trend will accelerate as folks get a better understanding of how the building blocks relate. It’s also more productive than framing everything as one monolith called “Blockchain”.

    I expect this stack to continually evolve as the decentralization ecosystem evolves. AWS started out as just one service: S3 for blob storage. Then it got processing: EC2. And it kept going. AWS now has more than 50 blocks, though of course a small handful remain the most important.

    [Screenshot of AWS services from July 15, 2017.]

    I envision something similar happening in the decentralization space. As a first cut, one could imagine a decentralized version of every single AWS block. However, there will be differences, since each ecosystem (cloud vs mobile vs decentralized) has its own special blocks, such as token storage for decentralization. It will be a fun ride!


    [1] You can actually put further hierarchy into these building blocks. E.g. databases sit on top of file systems, which sit on raw data (blob) storage. And distributed databases involve communication. For example, most modern databases talk to the underlying storage via a file system like Ext4, XFS or GridFS. The framing I give in this article is that of an applications programmer: what’s the UX for a file system, the UX for a database, etc.

    [2] I’ve never really liked the label “smart contracts”. They’re not really smart in any AI-ish sense of the word, and they usually have nothing to do with “contract” in any legal sense of the word. If they do include legals, they usually say so, e.g. with Ricardian contracts. The labels “decentralized processing” and, within it, “decentralized business logic” make more sense. However, “smart contract” now has widespread use, so be it. I have better things to focus on than fighting over labels:)

    [3] I say “Turing complete” here in a practical sense, not in a theoretically pure sense. That is: the machine returns a string of outgoing bits as a function of the incoming bits and its current internal state; but practical in the sense of not running infinitely long or claiming to solve the “when does the machine stop” problem (halting problem).

    Thanks to the countless folks who have given me feedback on this stack over the last couple of years. And thanks to Carly Sheridan, Troy McConaghy, and Dimi de Jonghe for thorough editing. Finally, thanks to everyone in the space who continues to improve the building blocks and build ever more interesting applications:)

    Originally published at BigchainDB blog.
