Currently I'm a Managing Partner at AltX, an EA-aligned crypto trading hedge fund. Previously, I was a professional poker player and a chemistry PhD student. I'm also very good at various strategy games and am a top trader on Manifold in my spare time. I'm quite involved in the EA community and advise some projects in the crypto space. I also run the Highly Speculative EA Capital Accumulation FB group and the EA/LW Investing Discord.
Compared to most EAs, I'm fairly concerned about animal welfare, particularly how we treat animals over the long term, and about mitigating any value lock-in that would cause other sentient life to suffer unnecessarily or flourish less than it could. This means I'm concerned about a potential rise in insect farming (if insects are sentient) or an expansion of humanity's use of animals that leads to their suffering. I'm similarly concerned about humanity not living up to its potential. While I do think AI currently poses the greatest existential risk, I am more concerned than most EAs about biological risks, particularly engineered pandemics and bioweapons, as well as very bad nuclear scenarios.
Some things I consider important when it comes to grants I would make:
-I think EA needs to get a lot more ambitious and entrepreneurial when it comes to solving the problems it cares about. I also think too many EAs overestimate downside risk, leading to EA being far too risk-averse in its grants on many issues.
-I think problems are better solved by working directly on the core problem than by armchair philosophizing about it for a long time, since you get new information as you work directly on the problem.
-I think very smart people, so long as they are value-aligned and have proper incentives, are your best bet for solving problems, rather than just "a good starting idea". I think I am a very good judge of people.
-I care a lot about incentive alignment and am wary of misaligned incentives.
This means I am far more interested in funding direct work to detect deception in AI systems than in someone who wants to skill up to do AI safety research, or to build a meta-org that teaches independent alignment researchers about epistemics or promotes AI governance. I am more inclined to fund a group building refuges on islands than someone who wants to do research on their laptop for a year. It also means that if I know someone in my network to be a smart, capable, and value-aligned person, I am willing to give them a grant even if I don't have sufficient knowledge to adequately assess their progress or the research behind their idea.
If you decide to re-grant to me, I'm happy to accept restrictions on what the money goes to (for example, restricting the grant to a particular cause area or otherwise).