Travel grant to attend San Diego Alignment Workshop and NeurIPS

Selma Mazioud

Science & technology · Technical AI safety · Global catastrophic risks

Proposal · Grant
Closes November 24th, 2025
$0 raised · $500 minimum funding · $2,875 funding goal

Funding requirements

- Sign grant agreement
- Reach minimum funding
- Get Manifund approval

Project summary

I am a first-year PhD student in Statistics at UC Berkeley working with Professor Bin Yu on understanding the complexity dynamics and phase transitions in neural network training. Our project investigates how various complexity measures evolve during learning and how they relate to feature formation, generalization, and model safety. The goal is to develop principled ways to steer models toward safer, more reliable behavior through algorithmic control of complexity.

Attending the San Diego AI Alignment Workshop and NeurIPS 2025 would allow me to connect with researchers in alignment, deepen my understanding of the field, and integrate theoretical and empirical insights from both the safety and statistics communities. As I am newly transitioning into AI alignment research, these opportunities would be invaluable for refining my research direction and identifying high-impact, safety-relevant questions early in my PhD.

What are this project's goals? How will you achieve them?

My project investigates how neural network complexity evolves during training and how this relates to generalization, robustness, and safety. Specifically, I will analyze multiple complexity measures (e.g., Hessian-based, norm-based, and new stabilized metrics) to identify learning phases and understand how training dynamics affect reliability and failure modes. I will also explore how algorithmic adjustments—such as weight decay or gradient noise—can control model complexity and potentially steer networks toward safer behaviors.
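
To make this concrete, below is a minimal PyTorch-style sketch of the kind of measurement-and-intervention loop described above: it logs a simple norm-based complexity proxy at each step and injects Gaussian gradient noise. The model, data loader, noise scale, and the choice of squared parameter norm as the complexity measure are all illustrative assumptions, not the project's actual protocol.

```python
# Hypothetical sketch: track a norm-based complexity proxy during training
# while perturbing the dynamics with gradient noise. All names and values
# are illustrative placeholders.
import torch
import torch.nn as nn

def weight_sq_norm(model: nn.Module) -> float:
    """Sum of squared parameter entries: a simple norm-based complexity proxy."""
    return sum(p.detach().pow(2).sum().item() for p in model.parameters())

def train_step(model, batch, optimizer, loss_fn, noise_std=1e-3):
    x, y = batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Add isotropic Gaussian noise to the gradients; noise_std is an
    # arbitrary illustrative scale, not a recommended setting.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad) * noise_std)
    optimizer.step()
    return loss.item(), weight_sq_norm(model)

# Usage (hypothetical): weight decay controls complexity via the optimizer,
# and the (loss, complexity) trajectory can be inspected for phase changes.
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
# for batch in loader:
#     loss, complexity = train_step(model, batch, optimizer, loss_fn)
```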

By attending the San Diego AI Alignment Workshop and NeurIPS 2025, I aim to connect this work to ongoing research in AI alignment, receive targeted feedback, and identify opportunities for collaboration. Success will mean developing a framework that links measurable aspects of model complexity to alignment-relevant safety properties. This supports the Long-Term Future Fund’s goal of improving our theoretical and empirical understanding of how to make powerful AI systems safer and more predictable.

How will this funding be used?

The funding will cover travel costs and conference registration for the San Diego AI Alignment Workshop and NeurIPS 2025.

Who is on your team? What's your track record on similar projects?

My prior research experience includes projects in geometric deep learning and graph-based learning methods, where I developed and analyzed models for structured data. These experiences strengthened my ability to design and run computational experiments, handle complex theoretical frameworks, and communicate results rigorously.

What are the most likely causes and outcomes if this project fails?


How much money have you raised in the last 12 months, and from where?

