Gradefact aims to establish a scoring system for news organizations, public figures, and anyone making predictions, similar to how credit scores evaluate an individual's creditworthiness. Our goal is to improve the accuracy and accountability of news and predictions through a transparent, objective grading system. This system will help reduce the spread of deliberately false or incompetent news and predictions, allowing for higher-quality, more truthful information. Where appropriate, the system also algorithmically derives a neutrality (bias) score and a political leaning.
Gradefact's proprietary algorithm is able to infer a probabilistic prediction even when a publication or individual does not explicitly state one. This is achieved through natural language processing techniques that analyze the text or speech of the source and infer the implied probability of an event, albeit with lower weighting than explicit binary predictions. For example, if a news organization writes an article about a political candidate's chances of winning an election, our system can estimate the probability the author believes the candidate will win, even if they never state a prediction outright. This allows us to include more sources (editorials, opinion pieces, tweets, etc.) in our grading and provide a more comprehensive view of the accuracy of a publication or individual.
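As an illustrative sketch only (a hypothetical data model, not our production code), an extracted prediction might carry an explicit/implicit flag and a correspondingly reduced weight for inferred predictions:

```python
from dataclasses import dataclass

# Assumed discount factor for implicit (inferred) predictions,
# chosen here purely for illustration.
IMPLICIT_WEIGHT = 0.5

@dataclass
class Prediction:
    source: str         # e.g. a publication name or Twitter handle
    event: str          # the event the prediction concerns
    probability: float  # probability the author appears to assign to the event
    explicit: bool      # True if the source stated the prediction outright

    @property
    def weight(self) -> float:
        # Implicit predictions count for less in the aggregate score.
        return 1.0 if self.explicit else IMPLICIT_WEIGHT

p = Prediction("ExampleNews", "Candidate X wins election", 0.7, explicit=False)
print(p.weight)  # 0.5
```

The weighting constant and field names are assumptions for the sake of the example; the real pipeline's representation may differ.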
Gradefact integrates the predictions of our graded sources with for-profit prediction and betting markets. By initially aggregating these markets and eventually creating our own real-money prediction markets for major events, we aim to create accountability for both mainstream news sources and predictors. This means that both parties risk something when making a prediction: the sources risk their reputation, while Gradefact users risk capital. The defining ethos of the platform can be summarized by the saying “Put your money where your mouth is”.
What is your track record on similar projects?
Our team consists of a former professional poker player (i.e. a full-time human prediction machine), a former full-time crypto/DeFi product designer/manager, and a full-time data scientist and natural language processing expert. In aggregate, the team members have launched startups (with varying degrees of success) in machine learning, decentralised lending, gambling, and other spaces.
Our initial expenditure is focused on annotation (we need a team of annotators to help train our system), initial UX and UI designs, and monthly expenditure on servers, specifically GPU architecture and machine learning hosting platforms. We also have expenses for hardware and for licensing of various APIs, feeds, and aggregation software.
We wanted to offer some more clarity on the MVP. Our goal here is to build a demo product, and the specs for the MVP are as follows:
Minimum Viable Product (MVP) criteria:
Successfully obtain a statistically significant dataset of past election/sports/financial predictions from at least 4 sources:
- 1x Twitter personality (Elon Musk, etc.)
- 3x legacy/MSM news publications:
  - 1x right-leaning
  - 1x left-leaning
- 1x prediction market
Use the dataset as input to our language model to:
- Obtain a summarisation of each item: sentiment, political leaning, bias in language
- Extract inferred predictions (explicit and implicit)
- Verify the resolution of events against the predictions made:
  - For elections, use an established database to resolve
  - For finance, use commercial APIs for verified historic prices
- Assign an aggregate prediction accuracy score to each source based on all articles/media/tweets analysed
Demonstrate the feasibility of the aforementioned features on a webpage that will:
- Curate a newsfeed of our sources, biased towards events likely to settle or resolve in the near future, or that have resolved recently. This would show the viability of comparing opinion and editorial in the media against the money-line opinion of prediction markets on the same events.
- Display a historical prediction accuracy score (0 to 100) for each source covering the articles curated for the feed.
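One plausible way to turn resolved predictions into the 0-to-100 score described above (this is an illustrative formula, not the one Gradefact has committed to) is a weighted Brier score, mapped so that 100 means perfect calibration:

```python
def accuracy_score(predictions):
    """Map a weighted mean Brier score onto a 0-100 scale (100 = perfect).

    `predictions` is a list of (probability, outcome, weight) tuples,
    where outcome is 1 if the event happened and 0 otherwise.
    Implicit predictions can be passed in with a lower weight.
    """
    total_w = sum(w for _, _, w in predictions)
    if total_w == 0:
        return None  # no scoreable predictions for this source
    brier = sum(w * (p - o) ** 2 for p, o, w in predictions) / total_w
    return round(100 * (1 - brier), 1)

# Example: two explicit predictions plus one down-weighted implicit one.
preds = [(0.8, 1, 1.0), (0.3, 0, 1.0), (0.6, 1, 0.5)]
print(accuracy_score(preds))  # 91.6
```

A Brier-based score rewards calibration rather than just hit rate, which matters when sources hedge with probabilities instead of flat yes/no calls.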
Just for clarity, the first iteration and MVP are solely focused on grading the predictions of a small, chosen demonstration group.
Regarding your other point, something to keep in mind is that much of the analysis used in the grading is data extracted by our model. That is to say, each item comes attached with everything necessary to perform the more complex analysis; it isn't a separate process but part of the collection pipeline.
So it would look something like:
1. Scrape target tweets, media, YouTube videos, news articles, etc.
2. Parse into our algorithm and extract predictions, inferences, bias, etc.
3. Analysis and extraction
4. Look up the extracted prediction data to establish ground truth where possible
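The steps above could be wired together roughly as follows. Every function here is a stub with illustrative names; real versions would call scrapers, the NLP model, and resolution databases/APIs:

```python
# Illustrative skeleton of the collection-plus-analysis pipeline.

def scrape(sources):
    """Step 1: fetch raw tweets, articles, video transcripts, etc."""
    return [{"source": s, "text": f"sample text from {s}"} for s in sources]

def extract(items):
    """Steps 2-3: run the model to extract predictions, inference and bias."""
    return [{**item,
             "prediction": {"event": "example event", "probability": 0.6},
             "bias": "neutral"} for item in items]

def resolve(records):
    """Step 4: look up ground truth for each extracted prediction, if possible."""
    for r in records:
        r["outcome"] = None  # unresolved until checked against a trusted source
    return records

def pipeline(sources):
    return resolve(extract(scrape(sources)))

results = pipeline(["@example_pundit", "ExampleNews"])
print(len(results))  # 2
```

The point of the single-pass structure is the one made above: extraction and analysis travel together, so resolution and grading operate on records that already carry everything they need.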
The prediction market part is a crucial element of the complete product, with the express vision of exposing opinion masquerading as prediction by setting it against a money-line opinion. Our goal is to pair existing topical issues and the predictions made by our graded predictors against those of the money-line bettors.
Whether we build our own market or piggyback off another will depend on interest level and further raises. I would be totally open to working with Manifold on this element as well.
Hmm. I feel like there's a good idea in here (publicly grading pundits with an accuracy/reputation score over time) that's going to get buried under a too-ambitious product (your own new prediction market platform, proprietary machine-learning inference of predictions, users who are risking capital, political bias detection).
I'd be more interested in funding an MVP which just did historical/running annotations of predictions from a small selection of pundits, and tried to scale up from there after it got some traction.