Token House Call: July 2nd, 2024
Recording: July 2 Token House Call recording.mp4
Alternate recording (includes Jing's slides): tl;dv - app
Slides: July 2nd, 2024 Token House call slides
- Retro Funding 4: the Profit formula will not be considered in voting.
- Badgeholder votes are due by July 10th; some informal metrics sessions are happening in the badgeholder chat.
- Retro Funding 5: Announcing Guest Voter Participation:
- A random sample of 90 existing Citizens together with 30 one-time guest voters will participate in the round. If you are a developer and you want to participate in voting for Retro Funding 5, submit an application here by July 14th, 2024.
- The leads of the Grants Council, Developer Advisory Board, and Code of Conduct Council talked about their scope and goals for Season 6.
- Grants Council Charter | Grants Council Charmverse | Grants Council Season 6 Calendar | Grants Council Operating Budget proposal
- Developer Advisory Board Charter and Operating budget
- Code of Conduct Council Nominations: the self-nomination submission deadline has been extended until July 24th. The CoCC Election Townhall will be hosted after the self-nomination period ends, around July 25th-30th. | Code of Conduct Council Charter
- Mission Requests:
- Grants Council and the Collective Feedback Commission mission request drafts are due by July 8th.
- The Grants Council will provide a suggested ranking of all mission requests by July 10th, ahead of the vote.
- Mission Requests must be completed and posted to the forum by July 8th at 19:00 GMT. However, it is encouraged to post drafts to the forum as early as possible to incorporate any delegate feedback before the deadline.
- Members of the Feedback Commission and/or Grants Council may choose to sponsor ideas from any other community member. Post mission ideas in this thread.
- Read Season 6: Mission Request creation guide and Suggested Mission Requests for more info.
Since one of our Collective Values is Iterative Innovation, we are continuing to refine retroactive public goods funding (retro funding) with round 4. The focus here is on collaboratively curating impact metrics, guided by the principle of Metrics-based Evaluation. Unlike previous Retro Funding rounds, Retro Funding 4 shifts from voting on individual projects to engaging badgeholders in selecting and weighting metrics that measure various types of impact.
The following run-through showcases the stages of this process, from the initial surveys and workshop insights to post-workshop adjustments, and how we're integrating your feedback to shape a metrics framework that accurately reflects the impact we aim to reward.
Retro Funding 4's New Direction: Metrics-based evaluation
The core hypothesis of Retro Funding 4 is that by using quantitative metrics, our community can more accurately express preferences for the types of impact they wish to see and reward. This approach marks a significant shift from earlier Retro Funding iterations where badgeholders voted directly on projects. In RF4, the emphasis is on leveraging data to power impact evaluation, ensuring that metrics are robust, verifiable, and truly representative of the community's values.
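As a rough, hypothetical sketch of this mechanism (not the actual RF4 implementation), you can think of each badgeholder ballot as a set of weights over metrics; a project's score is then the weighted sum of its normalized metric values. The metric names and numbers below are purely illustrative:

```python
# Toy model of metrics-based voting: badgeholders assign weights to
# metrics, and each project's score is a weighted sum of its normalized
# metric values. All metric names and figures here are hypothetical.

projects = {
    "project_a": {"gas_fees": 120.0, "trusted_users": 4_500, "txn_count": 90_000},
    "project_b": {"gas_fees": 45.0, "trusted_users": 12_000, "txn_count": 30_000},
}

# One badgeholder's ballot: weights over metrics, summing to 1.
ballot = {"gas_fees": 0.5, "trusted_users": 0.3, "txn_count": 0.2}

def normalize(metric: str) -> dict[str, float]:
    """Scale one metric so that values across all projects sum to 1."""
    total = sum(p[metric] for p in projects.values())
    return {name: p[metric] / total for name, p in projects.items()}

normalized = {m: normalize(m) for m in ballot}
scores = {
    name: sum(weight * normalized[m][name] for m, weight in ballot.items())
    for name in projects
}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")  # this project's share under the ballot
```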
One important thing to keep in mind is that this is one of many experiments we'll be running this year, and you can find all the details of what's to come right here.
The Initial Survey: Establishing the baseline
The initial survey we sent out was a means of gauging community perspectives on qualitative and quantitative measurement approaches and gathering input on a potential first draft list of metrics.
Key findings included:
Balance in metrics: Badgeholders showed a relatively balanced interest in metrics assessing both quantitative growth and qualitative impact, indicating a need for an evaluation framework that incorporates both dimensions.
Innovation and Open Source: Metrics related to innovation and open source scored significantly higher, reflecting a community preference for collaborative efforts.
Concerns about gaming metrics: A notable concern was the potential for gaming the system, highlighting the need for metrics that are difficult to manipulate and that genuinely reflect impactful contributions.
Engagement and a focus on quality: There was a strong preference for metrics that measure sustained engagement and ongoing quality vs one-time actions, highlighting the value placed on long-term impact.
The Workshop: Data Deep Dive and Refining Metrics
Building on the survey insights, the workshop we ran on May 7th enabled badgeholders to provide more in-depth feedback on the data-driven aspects of the impact metrics system and to take a first pass at selecting the most meaningful metrics, grouped around:
Network growth
These are metrics designed to quantify the expansion and scalability of the network. They focus on observable, direct measurements and aim to capture high-level activity.
Network quality
These metrics aim to evaluate the trustworthiness and integrity of interactions on the network, focusing on depth and reliability.
User growth
The focus here is on measuring new, active and consistent participation in the network.
User quality
The quality measurements are aimed at assessing the value and consistency of user contributions, not just their numbers.
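To make the four categories concrete, here is a deliberately simple sketch of how one metric from each bucket might be computed from a log of transaction events. The event data, the trusted-address set, and the exact definitions are all assumptions for illustration; the real RF4 metrics are built by OSO from onchain data:

```python
# Hypothetical examples of one metric per category, computed from a toy
# log of (address, day) transaction events. Definitions are illustrative
# only and do not reflect OSO's actual RF4 metric implementations.
from collections import Counter

events = [
    ("0xaaa", "2024-01-05"), ("0xaaa", "2024-02-10"),
    ("0xbbb", "2024-01-20"), ("0xccc", "2024-03-01"),
    ("0xccc", "2024-03-02"), ("0xccc", "2024-03-03"),
]
trusted = {"0xaaa", "0xccc"}  # addresses passing a trusted-user filter

txns_per_address = Counter(addr for addr, _ in events)

network_growth = len(events)                                          # raw activity volume
network_quality = sum(a in trusted for a, _ in events) / len(events)  # trusted share of activity
user_growth = len(txns_per_address)                                   # distinct active users
user_quality = sum(txns_per_address[a] for a in trusted) / len(trusted)  # depth per trusted user

print(network_growth, round(network_quality, 2), user_growth, user_quality)
```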
For a detailed look at the survey results, key moments, and discussions, you can review the workshop space here.
You can also explore this blog post by Open Source Observer that recaps the work they've done so far to map out the metrics. OSO is collaborating with the Optimism Collective and its Badgeholder community to develop the impact metrics for assessing projects in RF4.
Post-Workshop Survey: Refining metrics
Following the workshop discussions and feedback, and since not all Badgeholders were able to attend, a detailed survey was sent out to get further feedback on refined metrics, divided into the four identified segments:
Network Growth: Feedback suggests a need for simple, intuitive, and directly measurable metrics like gas fees and transaction counts
Network Quality: There is general support for metrics that consider the quality and trustworthiness of transactions, with suggestions to refine how "trusted transactions" are defined and measured
User Growth: A preference for metrics that accurately capture genuine user engagement emerged, with calls for clearer definitions of what constitutes an "active" or "engaged" user
User Quality: Responses highlighted the challenge of measuring user quality without arbitrary metrics, suggesting a need for ongoing discussion and refinement to develop quality metrics that truly reflect contributions to the ecosystem
Refining and implementing feedback
The insights from the surveys, together with the discussions in the workshop, are leading us to the next stage of metrics refinement. A key focus across the workshop and the second survey has been the granularity of the trusted users element. To respond to and integrate these insights, Open Source Observer (OSO) is working on the following:
Trusted user model
Based on suggestions and ideas from the second survey for how to improve upon the "trusted user" model, OSO is exploring several additional sources of data for the model.
Important Note: Open Source Observer is only considering sources that leverage existing public datasets (no new solutions), are privacy-preserving, and can be applied to a broad set of addresses (>20K).
Action items:
- Extending the trusted user model: The current model relies on three trust signals (Farcaster IDs, Passport Scores, and EigenTrust by Karma3 Labs). OSO will now move forward with a larger set of potential trust signals such as:
- Optimist NFT holders
- Sybil classifiers used by the OP data team
- Social graph analysis
- Pre-permissionless Farcaster users
- ENS ownership
- OSO will run some light scenario analysis and choose the best model(s). They will also model a variety of options and share a report on the results. A toy sketch of how such signals might be combined follows below.
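As a minimal illustration of combining several trust signals into a single "trusted user" flag, here is a sketch. The signal list mirrors the candidates above, but the aggregation rule (any two of five signals) is purely an assumption, not OSO's actual model:

```python
# Minimal sketch: flag an address as "trusted" if it carries at least
# `min_signals` of the candidate trust signals. The 2-of-5 rule is an
# assumption for illustration, not OSO's actual scoring model.

TRUST_SIGNALS = [
    "farcaster_id",    # e.g. a pre-permissionless Farcaster user
    "passport_score",  # Passport score above some threshold
    "eigentrust",      # EigenTrust by Karma3 Labs
    "optimist_nft",    # Optimist NFT holder
    "ens_owner",       # owns an ENS name
]

def is_trusted(signals: dict[str, bool], min_signals: int = 2) -> bool:
    """Return True if the address carries at least `min_signals` trust signals."""
    return sum(signals.get(s, False) for s in TRUST_SIGNALS) >= min_signals

# Example: an address with a Passport score and an Optimist NFT qualifies.
print(is_trusted({"passport_score": True, "optimist_nft": True}))  # True
print(is_trusted({"ens_owner": True}))                             # False
```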
Small fixes:
In the second survey, badgeholders also made a number of straightforward suggestions for how metrics could be standardized and implemented more consistently. These include:
- Limiting all relevant metrics to the 6-month Retro Funding 4 evaluation window
- Creating versions of some metrics on both linear and logarithmic scales
- Creating versions of "user growth" metrics that look at a larger set of addresses versus just the "trusted user" set
- Copy-editing to make descriptions clearer
Action items:
- OSO will proceed directly with making these changes; two of these fixes are sketched below
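For illustration, here is a minimal sketch of two of these fixes: restricting a metric to the Retro Funding 4 evaluation window and producing a logarithmic variant alongside the linear one. The window dates and helper names are assumptions:

```python
# Sketch of two standardization fixes: windowing a metric to the RF4
# evaluation period and adding a log-scaled variant. The window dates
# below are assumed for illustration.
import math
from datetime import date

WINDOW_START, WINDOW_END = date(2024, 1, 1), date(2024, 6, 1)  # assumed window

def windowed_count(event_dates: list[date]) -> int:
    """Count only the events that fall inside the evaluation window."""
    return sum(WINDOW_START <= d <= WINDOW_END for d in event_dates)

def log_scaled(value: float) -> float:
    """Logarithmic variant of a metric; log1p keeps zero values at zero."""
    return math.log1p(value)

event_dates = [date(2023, 12, 30), date(2024, 2, 14), date(2024, 5, 20)]
linear = windowed_count(event_dates)
print(linear, round(log_scaled(linear), 3))  # 2 events in window -> ~1.099 on the log scale
```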
New metrics:
There were several metrics that had previously been considered and were proposed again in the feedback survey. These include looking at gas efficiency; power users; a larger set of addresses than just trusted users; and protocols that are favored by multisig users.
Action items:
- OSO will implement these new metrics where feasible; a toy example of a gas-efficiency-style metric follows below
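As one hypothetical example, a gas-efficiency-style metric could relate useful activity to the gas consumed to produce it. The definition below is an assumption for illustration only:

```python
# Toy "gas efficiency" metric: distinct users served per million gas.
# The definition is hypothetical; RF4's actual metric may differ.

def gas_efficiency(distinct_users: int, total_gas_used: int) -> float:
    """Users served per million gas; higher means more impact per unit of gas."""
    return distinct_users / (total_gas_used / 1_000_000)

print(gas_efficiency(distinct_users=4_500, total_gas_used=2_100_000_000))  # ~2.14
```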
Next Steps: Testing metrics out
The next step in this process will be to take the metrics set for a test drive. In the week of June 3rd, Badgeholders will have the chance to test out the metrics through an interface that simulates IRL voting. Badgeholders will also have the opportunity to submit extra feedback per metric as they test each one out.
Once this final round of feedback is collected and analyzed, we will be able to come back together in a session to discuss and decide on the final set of metrics to be used in Retro Funding 4.