Emergency travel funding to attend EA Global: New York 2025

Sheikh Abdur Raheem Ali

Active Grant
$500 raised
$500 funding goal
Fully funded and not currently accepting donations.

Project summary

Travel support to attend EA Global: New York 2025.

What are this project's goals? How will you achieve them?

My purpose for attending the event is to learn, network, and increase coordination to mitigate catastrophic AI risks.

How will this funding be used?

EA Global was unable to fund my travel expenses for this event, so the $500 would go towards covering return tickets for an overnight FlixBus from Toronto, as well as other necessary miscellaneous expenses.

Who is on your team? What's your track record on similar projects?

My professional background is in software engineering, but I have some exposure to ML training, and I've mentored ~12 students over the past ~18 months (independently as well as through programs like SPAR). I keep up with recent safety literature by reading Alignment Forum posts and chatting with authors at conferences. Some of my prior work can be found on my Google Scholar profile: https://scholar.google.com/citations?user=jMgsBc8AAAAJ&hl=en

What are the most likely causes and outcomes if this project fails?

I currently have 40+ in-person meetings confirmed for this weekend, so cancelling all of them would cause inconvenience and disruption, since the people involved would need to change their plans.

How much money have you raised in the last 12 months, and from where?

I haven't raised money in the last 12 months.

Sheikh Abdur Raheem Ali

8 days ago

# EA Global NYC 2025 - Travel Grant Report

## Overview

Thanks to the $500 travel grant, I was able to attend EA Global NYC in October 2025. The grant provided partial funding for the trip, covering return bus tickets from Toronto. While total expenses exceeded this amount, the grant was essential - without this support, I wouldn't have been able to attend the conference.

## Activities and Outcomes

I scheduled 47 one-on-one meetings over the three-day conference, spanning technical AI safety research, policy, infrastructure, and organizational strategy.

### Research Outputs

The most immediate concrete outcome was a follow-up experiment on stated vs revealed preferences in LLMs. After discussing this topic with a researcher on Friday, I ran the experiment that evening and drafted a preliminary writeup. The setup investigates how models respond to token limits - initially accepting constraints while expressing frustration, then attempting to circumvent them, and finally adjusting behavior after negotiation. This is a relatively clean testbed for studying model preferences compared to more complex setups.

I also provided technical feedback to a researcher who recently received a $1M grant for evaluations work, and received feedback from others on my own experiments.

### Learning and Context Building

Key conversations included:

  • Technical infrastructure approaches at NDIF (National Deep Inference Fabric) and AE Studio for interpretability research.

  • Hardware-based AI safety mechanisms using trusted computing and attestation (cryptographic verification of what code is running).

  • Policy pathways through state legislatures for technology governance.

  • Organizational strategy at LISA (London Initiative for Safe AI) and their current priorities.

  • Approaches to scaling AI safety workshops and outreach to broader audiences.

  • AI security practices at major financial institutions.

  • Compassion in Machine Learning's approach to synthetic data for model alignment.

### Coordination and Support

Beyond research conversations, I was able to help several attendees:

  • Connected an AMD ML intern interested in low-level performance improvements to an engineer based in NYC who works at Thinking Machines (and previously founded a $1B startup).

  • Connected a design consultant with a $100k+ budget for funding video projects in x-risk-related cause areas to relevant creators.

  • Did LeetCode interview preparation with a student before their upcoming technical interview at a hedge fund. They'd only done solo practice before, and this was their first time doing a mock interview with a partner. They messaged me later to let me know that their interview went well.

  • Connected an undergraduate new to EA with SPAR policy mentors.

  • Encouraged two early career researchers who had been doing interpretability work to apply to ARENA. Also discussed future content for ARENA with a member of their team.

  • Discussed pair programming follow-up with an Anthropic safety fellow.

  • Set up meetings after the conference with some MATS scholars.

## Impact

The conference delivered on my three main goals: learning about current technical and strategic approaches to AI safety, building connections with researchers and practitioners, and improving coordination across different parts of the ecosystem. The grant made this possible when EA Global couldn't fund my travel expenses.

I'd encourage others to apply for similar travel support - the value of in-person conversations at these conferences is substantial, and the application process was straightforward.

Austin Chen

20 days ago

Approving this small travel grant!