I want to create a multiplayer version of RootClaim.
RootClaim is roughly what a fact-checking website would look like if it were unbiased, seriously interested in making an educated guess about the likelihood of various hypotheses, and well-versed enough in statistics to apply relevant techniques where appropriate. They open-source their analyses so others can see how they arrived at their conclusions, what weights they gave to which claims, and so on. See for example their analysis of who carried out a chemical attack in Syria, or their analysis of the origins of COVID.
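The core mechanics behind an analysis like that can be sketched as likelihood-ratio updating: start with prior probabilities over competing hypotheses, multiply each by how likely it makes every piece of evidence, then normalize. This is a minimal sketch of that general Bayesian technique; RootClaim's actual weighting scheme may differ, and the names and numbers here are purely illustrative.

```typescript
// Sketch of likelihood-ratio updating over competing hypotheses.
// Hypothesis names and ratios below are illustrative, not from any real analysis.

type Evidence = {
  description: string;
  // likelihood[h] = P(this evidence | hypothesis h)
  likelihood: Record<string, number>;
};

function posterior(
  priors: Record<string, number>,
  evidence: Evidence[]
): Record<string, number> {
  // Multiply each hypothesis's prior by the likelihood of every piece of evidence.
  const unnormalized: Record<string, number> = { ...priors };
  for (const e of evidence) {
    for (const h of Object.keys(unnormalized)) {
      unnormalized[h] *= e.likelihood[h];
    }
  }
  // Normalize so the posteriors sum to 1.
  const total = Object.values(unnormalized).reduce((a, b) => a + b, 0);
  const result: Record<string, number> = {};
  for (const h of Object.keys(unnormalized)) {
    result[h] = unnormalized[h] / total;
  }
  return result;
}
```

With priors of 50/50 and one piece of evidence that is three times likelier under hypothesis A (0.9 vs 0.3), `posterior` shifts the estimate to 75/25. A shared analysis document would make each of those likelihood judgments visible and commentable.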
What does fact checking have to do with prediction markets?
Most prediction markets ask for a single value as input: on the question "how likely is it that X will happen in 2023?" you might answer "24%". Others might answer "10%" or "90%", but no one shares how they arrived at that prediction the way RootClaim does (though RootClaim doesn't participate in prediction markets).
So what would a prediction market look like where you submit more than just a number and share your entire analysis? This would let you look under the hood of successful predictions, clone and mix winning models, and comment on the argumentation given by others.
I'm assuming people will be interested in detailing and divulging such things. I'm also assuming that letting people comment on each other's models will not turn into a faeces-slinging contest. Those may be erroneous assumptions. Then again, I think there's also the possibility of creating the first inklings of an environment where idea-sex between different prediction models can occur, leading to better predictions.
A reasonable and (hopefully) feasible way to test this idea within the scope of a mini grant would be to build an online editor which outputs a "document" similar to the analyses done by RootClaim, allowing a user to construct and share a traceable, navigable argument which culminates in a predicted value, while allowing other users to comment on individual parts.
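The "document" such an editor would output could be modeled as a tree of claims, each carrying its own weight and its own comment thread, rooted in a question and culminating in a prediction. This is a hypothetical data model of my own; the type and field names are a sketch, not a spec.

```typescript
// Hypothetical data model for a shareable, navigable analysis document.
// All names here are illustrative.

type Comment = { author: string; text: string };

type ArgumentNode = {
  claim: string;
  weight: number;             // contribution toward the final prediction (illustrative)
  supports: ArgumentNode[];   // sub-arguments this claim rests on
  comments: Comment[];        // per-node discussion, so critique stays attached to the claim
};

type Analysis = {
  question: string;           // e.g. "How likely is it that X will happen in 2023?"
  author: string;
  nodes: ArgumentNode[];      // top-level arguments for and against
  prediction: number;         // final probability, 0..1
};

// Count every node in an argument tree, e.g. to show the size of an analysis.
function nodeCount(nodes: ArgumentNode[]): number {
  return nodes.reduce((n, node) => n + 1 + nodeCount(node.supports), 0);
}
```

Because comments hang off individual nodes rather than the whole document, a reader can respond to one specific weight or claim, which is the "comment on individual parts" behavior described above.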
If I build that and can convince 10 strangers and prediction-market participants to use it to construct and share an analysis, and they value it, I think that would be grounds for considering a second round of funding for this project.
I'm a developer / designer.
I've built an Android app with over 10k users which analyzes the sound produced by gold and silver coins and tells the user whether they're authentic. I think that shows I have some relevant skills in building products and doing software development, as well as in fringe hobbies.
I'm actually better versed in React (which is the technology I'd be using for this) than I am with Android, but my public showcase projects for React happen to be less impressive.
Lastly, I've also spent some time thinking about how to create an interface for easily constructing and navigating arguments. I've created a prototype of sorts using the outliner tool Workflowy, in which I've laid out arguments for and against a COVID lab origin. I think this shows I have unusual levels of curiosity about this problem space.
I work 60% as a software engineer for a Swiss company. The amount I'm asking for ($4,000) is slightly less than the extra I would earn if I worked 100%, and I would only be using it to pay myself a stipend.
For me it's also an amount I feel I can justify to myself (and to my partner) for spending at least a month working on this at 40% capacity. If I get the grant I will likely work on it more, because I find it cool and I imagine I'll feel anointed by Scott, but that seems like a reasonable floor. I'm also open to a smaller amount, but I would have to scale back my commitment as well.
Thanks for reading!
It would be nice to integrate something like this into Manifold: "relationships between predictions".