Min-Maxing Complexity on Digital Infrastructure Mechanisms through Occam’s Razor and Ashby’s Law of Requisite Variety
Danilo Lessa Bernardineli
About This Presentation
Presented at 2024 INFORMS/ALIO/ASOCIO
June 19th 2024, Medellín, Colombia
In this article, we argue, based on consolidated learnings from project practice, that mechanisms and subsystems being implemented for Digital Infrastructure, which are multi-stakeholder in nature, are to be as complex as needed to encode a variety of behavior, but as simple as possible in order to provide legibility for their affected and regulating stakeholders. Both of those goals can be read as an interpretation of the Occam's Razor and Ashby's Law of Requisite Variety heuristics over a Models-Based Systems Engineering workflow. Together with the Precautionary Principle, we summarize the base set of decision principles for Mechanism Selection into three: Variety, Legibility and Safety. We then provide two case studies in which those principles were actively applied: one around the selection of the Aztec Blockchain Sequencer Selection Protocol, and another on the ideation and implementation of Neural Quorum Governance on the Stellar Community Fund.
Slide Content
Min-Maxing Complexity on Digital Infrastructure Mechanisms through Occam’s Razor and Ashby’s Law of Requisite Variety. A tale through two case studies. 2024 INFORMS/ALIO/ASOCIO, June 19th 2024, Medellín, Colombia. Link for the slides. Link for the draft working paper.
Background on the Authors: Danilo Lessa Bernardineli (Speaker), Sr. Engineer on Computational Science, BSc in Physics at Universidade de São Paulo; Ilan Ben-Meir, Editor and Researcher, PhD candidate in English at Brown University; Michael Zargham, Founder and Chief Engineer, PhD in Systems Engineering at the University of Pennsylvania; Jakob Hackel, TPM and Researcher, PhD in Economics at the Research Institute for Cryptoeconomics.
Summary: Background (Intro to BlockScience; "Web3", "Blockchain" and DAOs); Introduction (Digital Infrastructure and Models-based Systems Engineering applications; Legibility, Variety and Safety for achieving Sufficiency on Mechanism Design; Modular Design as a complexity restrainer); Case Study on the Aztec Sequencer Selection Protocol; Case Study on the Stellar Neural Quorum Governance Mechanism; Conclusions.
Agenda Introduction Background Case Study 1: Aztec Sequencer Selection Protocol Case Study 2: Stellar Neural Quorum Governance Conclusions
Background on BlockScience (1/2) https://block.science/
Background on BlockScience (2/2) https://block.science/
What is "Web3", "Blockchain" and DAOs https://twitter.com/Bedrockswap/status/1554834977136660480
What is "Web3", "Blockchain" and DAOs https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/using-blockchain-to-improve-data-management-in-the-public-sector
What is "Web3", "Blockchain" and DAOs https://www.forbes.com/sites/johncumbers/2023/04/28/the-desci-movement-will-crypto-really-solve-sciences-biggest-problems/
Agenda Introduction Background Case Study 1: Aztec Sequencer Selection Protocol Case Study 2: Stellar Neural Quorum Governance Conclusions
What is Digital Infrastructure? Digital Infrastructure (DI) is defined by the Sustainable Digital Infrastructure Alliance as "The total physical and software-based infrastructure necessary to deliver digital goods, products & services." Additionally, some authors split DI into two components: a hard one, related to physical components, and a soft one, based on social and virtual components.
The Engineering Design Lifecycle for Mechanisms https://blog.block.science/block-by-block-managing-complexity-with-model-based-systems-engineering/
Our observations as evidenced by the case studies: "Performing Mechanism Design for Digital Infrastructure more often than not iterates through multi-objective robust optimization problems, in which the selection of the solution is to be decided upon by a multi-stakeholder group with varying familiarity with the technical procedures. This class of problems is already technically challenging with fixed cardinalities, and Mechanism Design poses the unique challenge that the cardinality itself can be decided upon: a mechanism's input space or response space can be arbitrarily small or large, and its size can be static, contextual or dynamic."
Our thesis as evidenced by the case studies: "The thesis which we espouse in this article is that the notion of Sufficiency, which is associated with Pareto optimality, tends to be a more realistic conceptual framework than optimization for achieving mechanism success in terms of adoption and resilience under such settings. However, meta-heuristics for identifying sufficiency are required, and we argue that the principles of Legibility, Variety and Safety (LVS) are to be espoused."
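To make the contrast between optimization and sufficiency concrete, here is a minimal formalization in our own notation (an illustration, not taken from the slides or the draft paper), where f_L, f_V, f_S score a candidate mechanism on Legibility, Variety and Safety:

```latex
% Optimization vs. sufficiency, sketched in notation assumed here for
% illustration (requires amsmath). \mathcal{M} is the set of candidate
% mechanisms; f_L, f_V, f_S score Legibility, Variety and Safety.
%
% Pareto optimality: m* is optimal if no other candidate weakly improves
% every score and strictly improves at least one.
\[
  m^{\ast} \in \mathcal{M} \ \text{is Pareto-optimal if no } m' \in \mathcal{M}
  \ \text{satisfies } f_i(m') \ge f_i(m^{\ast}) \ \forall i \in \{L, V, S\}
  \ \text{with strict inequality for some } i.
\]
% Sufficiency: accept any mechanism that clears stakeholder-set thresholds.
\[
  \mathcal{M}_{\mathrm{suff}}
  = \{\, m \in \mathcal{M} \ :\
      f_L(m) \ge \tau_L,\ \ f_V(m) \ge \tau_V,\ \ f_S(m) \ge \tau_S \,\}.
\]
```

Under this reading, the LVS principles act as the meta-heuristic that sets the thresholds τ, rather than a single objective to be maximized.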
Stating the Principles
Occam's Razor. Formulation: "plurality should not be posited without necessity." Assertion for Mechanism Design: the best solution to a given problem is one that is as simple as possible, while affording the requisite degree of regulatory power.
Ashby's Law of Requisite Variety. Formulation: "In order to be efficaciously adaptive, the internal complexity of a system must match the external complexity it confronts." Assertion for Mechanism Design: the best solution to a given problem is as complex as necessary, without overly constraining the variety of the system being regulated.
https://en.wikipedia.org/wiki/Variety_(cybernetics)
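As a side note for readers unfamiliar with the cybernetics formulation, Ashby's Law is often stated information-theoretically; a minimal rendering in our own notation (not from the slide):

```latex
% Information-theoretic reading of Ashby's Law (standard cybernetics
% formulation; notation assumed here): D = disturbances the system faces,
% R = the regulating mechanism's possible responses, E = essential outcomes,
% with H denoting entropy as a measure of variety.
\[
  H(E) \ \ge\ H(D) - H(R)
\]
% Residual uncertainty in the outcomes cannot be pushed below the variety of
% the disturbances minus the variety of the regulator, i.e. a mechanism must
% carry at least as much variety as the behavior it is meant to regulate.
```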
Undesirable consequences of sub-optimal systemic complexity
Too Complex: Reduced Participation (participants may be overwhelmed or discouraged by the complexity and choose not to engage; this may be exacerbated by increased operating costs and/or uncertainty about upsides to participation); Opacity (a more complicated mechanism might be less transparent, leading to mistrust, and participants might suspect they're being taken advantage of); Corner Cases (complexity increases the likelihood of unexpected outcomes or loopholes, and edge cases also tend to concentrate behavior and may lead to emergent centralization).
Too Simple: Unintended Incentives (simplicity could breed game-ability, where participants find ways to exploit the system that weren't anticipated by its designers); Inefficiencies (failing to account for important factors or participant preferences can prevent a market from achieving an acceptable equilibrium state); Perceived Unfairness (if mechanisms don't differentiate sufficiently among situations, it could seem like they're creating or amplifying inequality, e.g. perceptions of censorship if particular classes of transactions are systematically underrepresented).
Key principles for Mechanism Design
Legibility: strive to make the rules and mechanisms understandable. Even if there is complexity behind the scenes, the interface and rules that participants interact with should be as clear as possible.
Variety: design mechanisms that can be adjusted over time. This relates to the modularity of the design and the degrees of freedom which allow actors in the network to adapt to changes in their environment.
Safety: provide guardrails and/or computationally enforce rules which should not be violated under any circumstances, e.g. enforcing the presence of proofs for each transaction to be included in any block (see the sketch below).
https://blog.block.science/engineering-for-legitimacy/
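As an illustration of the Safety principle above, here is a minimal Python sketch, with hypothetical names and a stubbed verify_proof, of a computationally enforced rule that no transaction enters a block without a proof:

```python
# Minimal sketch of a computationally enforced safety rule: no transaction is
# included in a block without an attached proof. All names here are
# hypothetical, and verify_proof is a placeholder for real zk verification.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Transaction:
    tx_id: str
    payload: bytes
    proof: Optional[bytes] = None


def verify_proof(tx: Transaction) -> bool:
    # Placeholder check; a real system would verify a zk proof here.
    return tx.proof is not None and len(tx.proof) > 0


def build_block(candidate_txs: list[Transaction]) -> list[Transaction]:
    """Guardrail: only transactions carrying a valid proof are ever included."""
    included = [tx for tx in candidate_txs if verify_proof(tx)]
    # The invariant holds by construction; the assert documents it explicitly.
    assert all(verify_proof(tx) for tx in included)
    return included


# Usage: the unproven transaction is filtered out, never included.
block = build_block([
    Transaction("tx1", b"...", proof=b"zkproof-bytes"),
    Transaction("tx2", b"..."),
])
print([tx.tx_id for tx in block])  # ['tx1']
```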
Optimizing for LVS When optimizing for these principles, it can naively be expected that there are trade-offs. For instance, more legible systems are usually expected to be simpler, which can sometimes lead to decreased diversity of system responses. However, as we'll see, this is not universal: approaches such as modular design have the potential to provide legibility through interfaces while "imprisoning" the complexity (and therefore the expressiveness of variety) inside modules.
Imprisoning Complexity through Modularization. Or Blocks. https://blog.block.science/building-with-blocks-of-science/ https://media.economics.uconn.edu/working/2023-05.pdf https://blog.block.science/block-by-block-managing-complexity-with-model-based-systems-engineering
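A small Python sketch of the "imprisoning complexity" idea, under assumed names: callers see only a narrow, legible interface, while the variety-encoding logic stays inside the module.

```python
# Sketch of "imprisoning complexity" behind a module interface, with assumed
# names: the public surface is one method, the internals can hold whatever
# variety the regulated behavior requires.
from typing import Protocol


class ScoringModule(Protocol):
    def score(self, voter_id: str) -> float: ...


class SimpleInterfaceComplexInternals:
    """Legible interface (one method), arbitrarily complex internals."""

    def __init__(self, parameters: dict[str, float]):
        self._parameters = parameters  # internal knobs callers never touch

    def score(self, voter_id: str) -> float:
        # Internals may be as elaborate as needed to encode variety...
        base = sum(self._parameters.values())
        return base * (1.0 + (len(voter_id) % 5) / 10.0)


def aggregate_score(modules: list[ScoringModule], voter_id: str) -> float:
    # ...but composition happens only through the narrow, legible interface.
    return sum(module.score(voter_id) for module in modules)
```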
Agenda Introduction Background Case Study 1: Aztec Sequencer Selection Protocol Case Study 2: Stellar Neural Quorum Governance Conclusions
What is Aztec? Aztec is described as "a trustless, scalable, decentralized Layer 2 (L2) that implements a hybrid zk-Rollup. It aims to support public and private smart contract execution while preserving user privacy. Aztec utilizes a UTXO architecture and enables the development of applications and tools on public smart contract platforms." (Delphi et al.) Unpacking this in layman's terms, we describe Aztec as a distinct blockchain coupled to a Layer 1 blockchain, in this case the Ethereum Network. While Ethereum is highly decentralized and secure for transactions, it is also costly and completely public. Aztec complements this by making transactions private and reducing the cost of transacting, consistently settling whole batches of private transactions on Ethereum.
The need for a Sequencer Selection Protocol The transaction data in Aztec is processed and uploaded in regular batches named Blocks, whose generation is actuated by the Block Production Process (BPP), which consumes a partially unordered set of user-submitted transactions and outputs either a finalized or a skipped block. The actors involved in the Block Production Process are: A) Sequencers, which are responsible for ordering and proposing blocks of transactions; B) Provers, which are responsible for producing the zk-Rollup proofs that are required in order to finalize blocks; C) Relays.
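A hedged sketch of the Block Production Process as summarized above, with assumed role interfaces (an illustrative model, not the Aztec specification): an unordered transaction set goes in, and either a finalized or a skipped block comes out.

```python
# Illustrative model of the Block Production Process (BPP) described above.
# Role interfaces, the ordering policy and the failure rate are assumptions
# for illustration, not the actual Aztec specification.
import random
from enum import Enum


class BlockStatus(Enum):
    FINALIZED = "finalized"
    SKIPPED = "skipped"


def sequencer_order(mempool: set[str]) -> list[str]:
    """Sequencer role: turn a partially unordered tx set into an ordered proposal."""
    return sorted(mempool)  # stand-in ordering policy


def prover_prove(ordered_txs: list[str]) -> bool:
    """Prover role: produce (here, simulate) the zk-rollup proof for the block."""
    return random.random() > 0.05  # assume proving occasionally fails


def block_production_process(mempool: set[str]) -> tuple[BlockStatus, list[str]]:
    """Consume a partially unordered set of transactions; finalize or skip a block."""
    proposal = sequencer_order(mempool)
    if prover_prove(proposal):
        return BlockStatus.FINALIZED, proposal
    return BlockStatus.SKIPPED, []


status, block = block_production_process({"tx_b", "tx_a", "tx_c"})
print(status, block)
```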
The need for a Sequencer Selection Protocol
A tale of two proposals
B52 vs Fernet In a stylized summary, Fernet uses randomness to select the lead sequencer, with no formal separation between sequencers and provers, while B52 tries to maximize MEV-originated revenue: the proposal which orders the transactions in the most apparently profitable way is preferred. That value is then burnt, which is interpreted as generating protocol revenue due to its deflationary effect. The report provides an analogy with macro-economics by saying "Fernet can be likened to the “free market” solution to a computational labor market problem, whereas B52 is more similar to a “nationalized” solution to that same computational labor market problem."
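A toy contrast between the two selection styles, under our own simplifying assumptions (not the actual proposal specifications): a Fernet-like rule picks the lead sequencer at random, while a B52-like rule picks the proposal committing to burn the most value.

```python
# Toy contrast between the two selection styles summarized above. This is a
# deliberate simplification for illustration, not the Aztec proposal specs.
import random
from dataclasses import dataclass


@dataclass
class Proposal:
    sequencer: str
    burn_bid: float  # value the proposal commits to burn (relevant to B52 only)


def select_fernet(proposals: list[Proposal], seed: int) -> Proposal:
    """Fernet-like: randomness picks the lead sequencer; bids are irrelevant."""
    rng = random.Random(seed)  # stand-in for a verifiable randomness source
    return rng.choice(proposals)


def select_b52(proposals: list[Proposal]) -> Proposal:
    """B52-like: the proposal extracting (and burning) the most value wins."""
    return max(proposals, key=lambda p: p.burn_bid)


proposals = [Proposal("alice", 1.2), Proposal("bob", 3.4), Proposal("carol", 0.7)]
print(select_fernet(proposals, seed=42).sequencer)  # any of the three, by chance
print(select_b52(proposals).sequencer)              # always "bob"
```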
Legibility, Variety and Safety as principles for recommendations https://hackmd.io/@blockscience/Aztec_Sequencer_Selection
Conclusions on Aztec The conclusion of this case study is that a Mechanism Selection decision that relies on determining sufficiency through the LVS principles can lead to more acceptable and robust conclusions than relying on optimization alone. https://hackmd.io/@blockscience/Aztec_Sequencer_Selection
Agenda Introduction Background Case Study 1: Aztec Sequencer Selection Protocol Case Study 2: Stellar Neural Quorum Governance Conclusions
Historical Background In early 2023, the Stellar Development Foundation (SDF) contacted BlockScience to devise, specify and validate a novel governance mechanism for assigning individual voting scores to the Stellar Community Fund (SCF) voter base. It was expected that the SCF voter base would grow in scale, both in the number of voters and in the variety of their preferences and behavior. The end product of this effort is what is now called Neural Quorum Governance, the juxtaposition of Neural Governance, which is a mechanism for attributing an individual vote score, and Quorum Delegation, which is a mechanism for allowing individuals to delegate their votes.
What is Stellar and the Stellar Community Fund? Terminology notes (from Bernardineli et al.): "Stellar is a public and open-source blockchain network for payments, on- and off-ramps, and tokenization. It is powered by the Stellar Consensus Protocol (SCP), which is a federated proof-of-agreement consensus mechanism. Soroban is a Rust / WASM / JSON-RPC based smart contract platform which is designed to be sensible, built-to-scale, batteries-included, and developer-friendly" (...) "The Stellar Development Foundation (SDF) is a non-profit organization that supports the development and growth of Stellar" (...) "The Stellar Community Fund (SCF) is an open-application awards program managed by the Stellar Development Foundation to support developers and startups building on Stellar and Soroban. SCF relies on input from verified members in the Stellar community for award allocation, governance and project support to keep the fund running and developing."
Entity overview: Stellar (L1 Blockchain Network; in layman terms, the ledger that holds the data); SCP (PoA Consensus Mechanism; the algorithm for deciding if a new data point is valid); Soroban (Smart Contract Platform; a VM that writes/reads from the ledger); SDF (Non-profit organization); SCF (Awards Program).
The Typical MBSE Process adopted by BlockScience Source: https://blog.block.science/block-by-block-managing-complexity-with-model-based-systems-engineering/
Ideation (Phase 1) overview Retrieved on 2023-09-06 from the ideation report and kick-off slides
PoC Delivery (Phase 2) overview (1) Retrieved on 2023-09-06 from https://github.com/BlockScience/scf-voting-mechanism/blob/main/notebooks/proof-of-concept-demo.ipynb
PoC Delivery (Phase 2) overview (2) Retrieved on 2023-09-06 from https://hackmd.io/BIh2LNprSoaSM-rRVjbAjA?view & https://hackmd.io/GMs8iB1MQEGsJirqqPm3NA & https://hackmd.io/HzRrf1NtQ_a7nlSvX_stXg?view
The wish for a SCF Voting Mechanism https://blog.block.science/the-story-behind-neural-quorum-governance/
Collected Requirements The resulting requirements, according to Bernardineli et al., were identified as: A) "Any community fund disbursement voting mechanism must reasonably align with the community’s expectations of what these funds should accomplish and when to consider disbursement fair", B) "a suitable voting mechanism must align with the cost of the attention and time dimension", C) "the need for alignment on and contextually fitting expressions of abstract ideas". Furthermore, the desired properties were mapped as "1) modular 2) scalable 3) being compatible with DAOs 4) flexibility for funding styles 5) embedding trust 6) privacy and 7) highlighting Soroban / SCP’s capabilities."
What is "Neural Quorum Governance"? https://blog.block.science/introducing-neural-quorum-governance/ " Neural Governance (NG) utilizes the notions of Voting Neurons (composed of an Oracle and Weighting functions ) that are layered and aggregated together for forming one's Voting Power. The nomenclature is due to its similarity to Neural Networks. Key benefits for adopting it: 1) A plug and play experience towards Neuron Development & Configuration. 2) Expressing complexity through many simple integrated pieces rather than a single isolated complicated piece. 3) Enables and directs development over modules that can be composed together. 4) Enables piecewise transparency" https://meridian.stellar.org/sessions/how-to-get-funding-for-blockchain-projects
The landscape https://blog.block.science/the-story-behind-neural-quorum-governance/
Conclusions on NQG We conclude from this case study that applying Modular Design, with the goal of bounding complexity behind predictable and transparent interfaces, can be a useful method for finding mechanisms that are sufficient in terms of their LVS.
Agenda Introduction Background Case Study 1: Aztec Sequencer Selection Protocol Case Study 2: Stellar Neural Quorum Governance Conclusions
Conclusions Achieving sufficiency of Legibility, Variety and Safety can be a useful construct for determining the suitability, or even the comparative advantage, of a mechanism being designed or proposed. We also saw that Modular Design, if applied by reducing inter-dependencies between modules, can be a useful methodological tool for further achieving sufficiency over those principles, as it is able to "imprison complexity" behind modules with relatively easy-to-understand interfaces.
Thanks! More info at: https://block.science/ Get in touch at: [email protected] Danilo Lessa Bernardineli (LN) @danlessa (Telegram) @danilolessa (Twitter) Link for the slides Link for the draft working paper