
Respond to the thread with Seth Benton, Spencer Graham, and SudoFerraz.eth about Quantitative Value Creation Measurement for Decentralized Work with SourceCred, Optimism Fractal, and Armitage

Overview

  • All of this should go public in the OF Notion (including this thread task), except for the PM with Sudo
Finish organizing the subtasks above

Retweet Seth Benton’s tweet about Optimism Fractal, SourceCred, and PlaceCred?

  • I just retweeted it from Dan
  • I think it’s good to retweet it from OF too. How about other accounts?
  • Consider also linking to the new project: Explore and consider integrating with Intersubjective, Pluralistic Measurement Systems (such as SourceCred, PlaceCred, Praise and Armitage)

Respond to SudoFerraz.eth from Armitage labs


Thanks for reaching out

I checked out the website and it looks very interesting

You’re welcome to join Optimism Fractal

Seth has joined Optimism Fractal

https://armitage.xyz seems like it could be very helpful for facilitating open-source GitHub development for Optimism Fractal once we attract more developers. This kind of objective measurement can complement the Respect given at the Respect Games and other subjective value measurement mechanisms

I just created a project about this and will encourage the community to organize it: Organize project in Optimism Fractal for Integrating with Intersubjective Measurement Systems such as SourceCred (and Armitage)

Respond to Spencer Graham about his PRD for Quantitative Value Creation Measurement for Decentralized Work


Fascinating, thank you for sharing

add to Curate introductory resources about Optimism Fractal for Hats Protocol community and Overview of Mutual Benefits
add to 🎩Integrate with Hats Protocol
add new task to review their PRD
Review

PRD: Quantitative Value Creation Measurement for Decentralized Work Within a Specific Ecosystem

Author: Spencer @ Hats Protocol

tl;dr

  • This document presents a set of criteria for a mechanism that measures value created by contributors within a specific ecosystem
  • It also sketches a potential implementation that fulfills them
  • The most pressing need is for the Hats protoDAO

Background and Context

What is quantitative value creation measurement?

Measurement is perception. The measured value created by an agent within a given domain during period pₙ is their peers' perception of how much value they actually created (or did not destroy) within that domain during period pₙ. Since it is quantitative, we can measure it cumulatively, e.g. from p₀ through pₙ.

When quantified, that perception can also be used as a prediction, e.g. an agent's measured value created within a given domain during period pₙ can be a prediction of the value they are expected to create (or not destroy) within that domain in the next period pₙ₊₁.
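In code, the cumulative measure and the naive next-period prediction described above might look like this (a toy illustration; the function names and numbers are invented, not part of the PRD):

```python
def cumulative_value(period_scores):
    """Cumulative measured value from p0 through pn: a running sum of
    per-period peer-perception scores."""
    return sum(period_scores)

def predict_next(period_scores):
    """Naive prediction for p(n+1): the value measured in the most
    recent period pn (0.0 if no history exists yet)."""
    return period_scores[-1] if period_scores else 0.0

# Hypothetical peer-perceived value for one contributor over p0..p2.
alice = [3.0, 5.0, 4.0]
print(cumulative_value(alice))  # 12.0
print(predict_next(alice))      # 4.0
```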

Why do we need to measure it at all?

Accurate measurement of value created by contributors is one of the most important yet elusive building blocks for decentralized work. It can help address a number of challenges for an ecosystem that relies on decentralized work.

As a measure of value created in the most recent period:

  • Compensation: how should contributors be compensated?

As a prediction of value to be created in the next period:

  • Matching contributors with responsibilities and authorities: who is the best person to work on a given project or have a given role?

As a proxy for who best understands what the ecosystem needs:

  • Governance power: who should have a say in various collective actions, and how much weight should their input carry for decisions?

Note: a quantitative measurement mechanism need not be useful for all three of the above cases for it to be valuable. Indeed, different ecosystems face different scenarios, and may only prefer to support one of these use cases (or perhaps a different one altogether).

Why quantitative measurement?

In decentralized work, there is no boss who can qualitatively synthesize the available inputs to arrive at an answer for how to compensate an employee. Rather, such decisions must be made collectively, so it is crucial that everybody has access to the same information. This requires a quantitative approach.

A good solution doesn’t yet exist

Seemingly everybody knows this is a big gap. Everybody and their mother is working on a reputation product. But there are several big outstanding problems that remain unsolved:

  1. There is rarely such a thing as objective measurement of value creation. Decentralized work typically involves solving hard problems. Hard problems take time to solve, and even longer to see the full impact of a solution. Often this impact is itself subjective. But even if the value of a solution can eventually be quantified, it’s impossible to form an objective prediction of what that value will be in advance. And in decentralized work the inevitable variety of predictions means that it’s impossible for a single manager to pass off a subjective prediction as an objective measurement.
  2. How do we decide who decides? In decentralized work, the lack of a default reviewer creates all sorts of challenges in measuring the subjective value created by a given contributor. Who has the necessary context about what the contributor is doing to effectively review their work? Who is sufficiently aligned with the goals of the network to properly contextualize the work? Do these two groups of people overlap?
  3. High operational and governance overhead are nonstarters, both from an efficiency perspective as well as capture-resistance. Decentralized work won’t stay decentralized very long if power can coalesce around the few who are able to operate or decipher the mechanics of the system, and it won’t be work for long if too much budget is devoted to the value creation measurement system.
  4. In subjective systems, loud people and sexy/visible work get over-rewarded while quiet people and dirty work get under-appreciated. Epicycles and other band-aid attempts to address this just make Problem 2 worse.

This document specifies the necessary properties of a mechanism for measuring value created in the context of decentralized work that addresses these challenges head-on.

What is this desired mechanism for?

At Hats Protocol, we are working on our evolution into a proper DAO, beginning with a protoDAO (basically a practice DAO). We need a mechanism for quantitatively measuring the value created by contributors. We expect the protoDAO to use this measurement to determine the following:

  • Generalized governance/voting power (eg DAOhaus DAO voting shares) within the protoDAO
  • Contributor fit for roles (responsibilities and authorities) within the protoDAO
  • Eventually: contributor compensation

Specification

This is a quantitative value creation measurement mechanism for decentralized work. It enables organizations (such as networks, DAOs, and cooperatives) to get real work done together without the need for power-over relationships.

Required Properties

We believe that the right value creation measurement system has the following characteristics:

  A. Credibly neutral
  B. Likely to be viewed as legitimate by contributors and other stakeholders
  C. Low operational overhead
  D. Low governance overhead; minimizes parameters to tune
  E. Requires as few data sources as possible
  F. Approximates meritocracy: contributors are rewarded in proportion to the actual* value they create
    ◦ *or as close as possible, given the reality that objective value attribution is not feasible and is merely a prediction except in long-term retrospect

Additional Desired Properties

To achieve the above requirements (especially Property F), we suspect that the value creation measurement system should also have the following properties:

  G. Intersubjective: recognizing that in most cases there is no objective truth, ground truth about value created is a function of the subjective perception of the participants
  H. Does not over-reward loudness, visibility, or popularity (nor under-reward their opposites)
  I. Does not over-reward cliques
  J. Rewards new contributors appropriately
    ◦ eg does not tend to result in old boys’ clubs over the medium to long term
  K. Non-financial: no component used to generate a value creation measurement should be saleable or transferable

Security and Trust Assumptions

Implementers can assume that the system in which the mechanism operates has the following characteristics:

  • Participation is permissioned, and participants are semi-trusted
  • Participation weight is not transferable (see Property K)
  • Scale of contributor set is < 150

Potential Attack Vectors

Or other undesired behaviors.

  • Loudly shilling one’s own work in community channels (also bad because it consumes the network’s attention, a scarce shared resource)
  • Bribing participants for points
  • Collusion between participants (eg back-scratching allocations)

Current Best Thinking: Implementation Sketch

The following is a sketch of an implementation that seeks to fulfill all of the required and desired properties from the specification. It outlines the basic shape of the mechanism, but several key design questions remain open (see Open Questions section).

(Each component below is listed with its description, rationale / support for properties, and potential tools / implementation.)

Data generation mechanism: points allocation circle

  • Description:
    ◦ Participants allocate points to receiving nodes. On initiation, all participants receive the same number of points to allocate; this number is arbitrary (it is normalized by the algorithm; see Aggregation algorithm below) but identical for all participants.
    ◦ Receiving nodes can be people (eg contributors) or ideas (eg projects).
    ◦ Simple version: a single circle where all participants are also receiving nodes (eg a classic Coordinape giving circle).
    ◦ Optional extension: recursive circles. Parent circle: a single allocation circle where receiving nodes are projects (outputs, missions, work streams, etc), and participants allocate to each according to the perceived value of its outputs or progress during the current period. Child circles: for each project, a separate circle where both participants and receiving nodes are active contributors to the project within the current period.
  • Rationale: This design supports Properties C, D, G, and K.
  • Potential tools: Coordinape, Signal Protocol, Govrn, Quests.

Aggregation algorithm

  • Description:
    ◦ Within a single circle: a recursive algorithm where the only inputs are the raw allocated points and the outputs are weighted allocated points. Recursive because the algorithm converges from the inputs to the outputs iteratively. The convergence algorithm is the largest area of uncertainty and work to be done; it should support Properties H and I.
    ◦ Optional extension (continued): weighted allocated points from layer 2 circles are re-weighted according to the weighted allocated points from the layer 1 circle. Weighted allocated points are then summed by individual contributor to give a final period score per contributor that represents their share of the total value created within the period.
  • Rationale: This design supports Properties C, D, and E. A well-designed algorithm would also support F (including as a result of supporting H and I). Hypothesis: value created in the present period is the best measure of ability to assess value created by others in the present period.
  • Potential tools: PageRank, EBSL, SybilRank.

Timing

  • Description: Conducted periodically. Participants can make or edit allocations throughout the period, up until the period end; analysis/aggregation is conducted at period end.
  • Potential implementation: 2-3 times per season, i.e. every 4-6 weeks.

Participation

  • Description: Participation is gated by a DAO-managed allowlist. We can assume that the allowlist includes all or most of the people who are both aligned with the long-term goals/success of the organization and have sufficient context on the current period’s work. It’s ok if it includes some people who don’t meet those criteria: the algorithm should downweight their input sufficiently.
  • Potential tools: Hats Protocol (participants must wear a given Hat).

Output

  • Description: Resulting period value created scores are a primitive themselves and can be used in any number of ways, such as:
    ◦ compensate contributors proportionally
    ◦ increase governance power of contributors proportionally
    ◦ input into performance evaluation for contributors => increased/decreased responsibilities and authorities
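The layered re-weighting in the optional extension (child-circle shares scaled by the parent circle's project weights, then summed per contributor) can be sketched in a few lines. This is an illustrative sketch only; the function name, project labels, and numbers are invented for the example, not taken from the PRD:

```python
def combine_circles(project_weights, child_shares):
    """Combine a parent circle with its child circles.

    project_weights: {project: share of parent-circle points} (sums to 1).
    child_shares: {project: {contributor: share within that project}}
                  (each inner dict sums to 1).
    Returns {contributor: share of total value created this period}.
    """
    totals = {}
    for project, weight in project_weights.items():
        for person, share in child_shares.get(project, {}).items():
            # A contributor's total is the sum of their per-project
            # shares, each scaled by how the parent circle valued
            # that project this period.
            totals[person] = totals.get(person, 0.0) + weight * share
    return totals

# Hypothetical two-project period.
parent = {"protocol": 0.7, "docs": 0.3}
children = {
    "protocol": {"alice": 0.5, "bob": 0.5},
    "docs": {"bob": 0.25, "carol": 0.75},
}
print({k: round(v, 3) for k, v in combine_circles(parent, children).items()})
# alice: 0.7*0.5 = 0.35; bob: 0.35 + 0.3*0.25 = 0.425; carol: 0.3*0.75 = 0.225
```

Because the parent weights and each child circle's shares each sum to 1, the combined contributor shares also sum to 1, preserving the "share of total value created" interpretation.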

Why a Recursive Algorithm?

This is directly related to Problem 2. In hierarchical organizations, every employee has a boss who serves as the default person to review them. Typically, the boss is the person who has the most context about the various things the employee is working on and how important they are. This puts the boss in an excellent position to subjectively review the employee’s performance (if the boss is actually competent and satisfies Properties B, F, H, I, and J, which as we all know is exceedingly rare).

In non-hierarchical networks, however, there is no built-in “reviewer”. This protects against power-over abuse, but it makes determining the quality of a contributor’s work very challenging. When there is no default reviewer, how are we supposed to know who should have a say in a given contributor’s review? Who has the necessary context about what the contributor is doing to effectively review their work? Who is sufficiently aligned with the goals of the network to properly contextualize the work? Do these two groups of people overlap?

Answering these questions accurately is hard enough in a simplified static world, and it gets even harder when we confront the reality that decentralized work is marked by highly fluid contribution patterns. A high-context, highly aligned person in one season may be completely disengaged the next. Knowing the answer at time t0 has a low correlation with the answer at time t1.

Complicating matters further is that nobody has the unilateral power to make an arbitrary gridlock-busting decision about any of the above (yet another reason why too much governance overhead is a non-starter).

So how are we supposed to weight stakeholders’ allocations? We can’t use historical value created or governance power because of the time/fluidity problem, and we can’t use flat weights because there’s no way that everybody has the same context and alignment (see also Problem 4).

The only remaining possibility is to use this period’s value created measurement to weight this period’s allocations. This implies recursiveness: converge towards a matrix of scores after multiple recursive iterations.
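One way to make that fixed-point idea concrete is a PageRank-style iteration: weight each participant's allocations by the score the circle currently assigns to that participant, and repeat until the scores stop changing. This is a sketch only; the PRD explicitly leaves the convergence algorithm open, and the function and example data here are invented. A small damping (teleport) term, as in PageRank, keeps the iteration well-behaved:

```python
def weighted_scores(alloc, damping=0.85, tol=1e-9, max_iter=1000):
    """Illustrative recursive aggregation for a single circle.

    alloc[i][j] = raw points participant i allocated to participant j.
    Returns each participant's share of the period's total value.
    """
    n = len(alloc)
    # Normalize each giver's row: point budgets are arbitrary, so only
    # the relative split matters.
    rows = []
    for row in alloc:
        total = sum(row)
        rows.append([x / total if total else 0.0 for x in row])
    scores = [1.0 / n] * n  # start from a flat prior
    for _ in range(max_iter):
        # Each node's new score mixes a uniform teleport term with
        # allocations weighted by the givers' current scores.
        new = [
            (1 - damping) / n
            + damping * sum(scores[i] * rows[i][j] for i in range(n))
            for j in range(n)
        ]
        s = sum(new)
        new = [x / s for x in new]
        if sum(abs(a - b) for a, b in zip(new, scores)) < tol:
            return new
        scores = new
    return scores

# Three participants; no one allocates points to themselves.
circle = [
    [0, 60, 40],   # A's allocations to (A, B, C)
    [50, 0, 50],   # B's allocations
    [70, 30, 0],   # C's allocations
]
print([round(s, 3) for s in weighted_scores(circle)])
```

The recursion shows up in the update rule: a giver's influence over the output is their own (still-converging) score, so no historical data and no flat weighting is needed, only this period's raw allocations.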

Open Questions

  1. What is the right algorithm to use? How can we support Properties H and I?
  2. Is Property J achievable without additional data inputs?
  3. Would such a system really support Properties A and B? What else might be necessary to achieve those properties?

Working Group

Telegram: https://t.me/+_QxifA4jU7IxNTIx


Create project in Optimism Fractal for Integrating with Intersubjective Measurement Systems such as SourceCred (and Armitage)

https://forum.summerofprotocols.com/t/pig-revisiting-the-source-improving-the-sourcecred-credit-attribution-protocol/446
