@Greg_Colbourn
Global moratorium on AGI, now. Founder of CEEALAR.
https://twitter.com/gcolbourn

$0 in pending offers
Greg Colbourn
4 days ago
(This was 1 Bitcoin btw. Austin helped me with the process of routing it to Manifund, allowing me to donate ~32% more, factoring in avoiding capital gains tax in the UK).
Greg Colbourn
4 days ago
I've been impressed with both Holly and PauseAI US, and Joep and PauseAI Global, and I intend to donate a similar amount to PauseAI Global.
Greg Colbourn
4 days ago
It's more important than ever that PauseAI is funded. Pretty much the only way we're going to survive the next 5-10 years is by such efforts succeeding to the point of securing a global moratorium on further AGI/ASI development. There's no point in being rich when the world ends. I encourage others with a net worth of 7 figures or more to donate similar amounts. And I'm disappointed that all the big funders in the AI Safety space are still overwhelmingly focused on Alignment/Safety/Control, when it seems pretty clear that those aren't going to save us in time (if ever), given the lack of even theoretical progress, let alone practical implementation.
Greg Colbourn
10 months ago
Supporting this because it usefully illustrates that there are basically no viable AI Alignment plans for avoiding doom with short timelines (which is why I think we need a Pause/moratorium). I'm impressed by how much progress Kabir and team have made in the last few months, and I look forward to seeing the project grow in the months ahead.
Greg Colbourn
over 1 year ago
This research seems promising. I'm pledging enough to get it to proceed. In general, we need more of this kind of research to establish consensus that LLMs (foundation models) are fundamentally uncontrollable black boxes (and dangerous at the frontier scale). I think this can lead - in conjunction with laws about recalls for rule-breaking / interpretability - to a de facto global moratorium on this kind of dangerous (proto-)AGI. (See: https://twitter.com/gcolbourn/status/1684702488530759680)
| For | Date | Type | Amount |
|---|---|---|---|
| Manifund Bank | 2 days ago | deposit | +85,500 |
| PauseAI US 2025 through Q2 | 4 days ago | project donation | 90,000 |
| Manifund Bank | 6 days ago | deposit | +90,000 |
| AI-Plans.com | 10 months ago | project donation | 5,000 |
| Manifund Bank | 10 months ago | deposit | +3,800 |
| Alignment Is Hard | over 1 year ago | project donation | 3,800 |
| Manifund Bank | over 1 year ago | deposit | +5,000 |