Vincent Weisser

@vincentweisser

Focused on open/decentralized AGI, alignment, and scientific progress

vincentweisser.com
Total balance: $0
Charity balance: $0
Cash balance: $0
Pending offers: $0

Outgoing donations

Luthien · $200 · 10 months ago
Luthien · $200 · 10 months ago
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents · $250 · 11 months ago
Investigating and informing the public about the trajectory of AI · $250 · 11 months ago
human intelligence amplification @ Berkeley Genomics Project · $100 · 11 months ago
Attention-Guided-RL for Human-Like LMs · $100 · 11 months ago
human intelligence amplification @ Berkeley Genomics Project · $500 · 11 months ago
AI-Driven Market Alternatives for a post-AGI world · $115 · about 1 year ago
AI-Driven Market Alternatives for a post-AGI world · $100 · about 1 year ago
MATS Program · $200 · about 1 year ago
Lightcone Infrastructure · $100 · about 1 year ago
Next Steps in Developmental Interpretability · $200 · about 1 year ago
10th edition of AI Safety Camp · $200 · about 1 year ago
Biosecurity bootcamp by EffiSciences · $100 · about 1 year ago
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents · $200 · about 1 year ago
Investigating and informing the public about the trajectory of AI · $200 · about 1 year ago
Making 52 AI Alignment Video Explainers and Podcasts · $50 · almost 2 years ago
AI Safety Research Organization Incubator - Pilot Program · $200 · about 2 years ago
AI Safety Research Organization Incubator - Pilot Program · $277 · about 2 years ago
AI Safety Research Organization Incubator - Pilot Program · $500 · about 2 years ago
Scaling Training Process Transparency · $150 · about 2 years ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $100 · about 2 years ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $10 · about 2 years ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $100 · about 2 years ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $790 · about 2 years ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $1000 · about 2 years ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $210 · about 2 years ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $500 · about 2 years ago
Exploring novel research directions in prosaic AI alignment · $200 · about 2 years ago
MATS Program · $300 · about 2 years ago
MATS Program · $500 · about 2 years ago
Empirical research into AI consciousness and moral patienthood · $50 · over 2 years ago
Empirical research into AI consciousness and moral patienthood · $70 · over 2 years ago
Run five international hackathons on AI safety research · $100 · over 2 years ago
Avoiding Incentives for Performative Prediction in AI · $50 · over 2 years ago
AI Alignment Research Lab for Africa · $150 · over 2 years ago
AI Alignment Research Lab for Africa · $100 · over 2 years ago
AI Alignment Research Lab for Africa · $150 · over 2 years ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability · $100 · over 2 years ago
Avoiding Incentives for Performative Prediction in AI · $100 · over 2 years ago
Discovering latent goals (mechanistic interpretability PhD salary) · $150 · over 2 years ago
Introductory resources for Singular Learning Theory · $50 · over 2 years ago
Holly Elmore organizing people for a frontier AI moratorium · $100 · over 2 years ago
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor · $50 · over 2 years ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability · $150 · over 2 years ago
Activation vector steering with BCI · $150 · over 2 years ago
Avoiding Incentives for Performative Prediction in AI · $50 · over 2 years ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability · $70 · over 2 years ago
Alignment Is Hard · $70 · over 2 years ago
Introductory resources for Singular Learning Theory · $70 · over 2 years ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability · $100 · over 2 years ago
Compute and other expenses for LLM alignment research · $100 · over 2 years ago
Optimizing clinical Metagenomics and Far-UVC implementation. · $100 · over 2 years ago
Apollo Research: Scale up interpretability & behavioral model evals research · $160 · over 2 years ago
Apollo Research: Scale up interpretability & behavioral model evals research · $250 · over 2 years ago
Run five international hackathons on AI safety research · $250 · over 2 years ago
Holly Elmore organizing people for a frontier AI moratorium · $100 · over 2 years ago
Discovering latent goals (mechanistic interpretability PhD salary) · $400 · over 2 years ago
Discovering latent goals (mechanistic interpretability PhD salary) · $40 · over 2 years ago
Scoping Developmental Interpretability · $45 · over 2 years ago
Scoping Developmental Interpretability · $1000 · over 2 years ago
Scoping Developmental Interpretability · $455 · over 2 years ago
Joseph Bloom - Independent AI Safety Research · $250 · over 2 years ago
Joseph Bloom - Independent AI Safety Research · $100 · over 2 years ago
Joseph Bloom - Independent AI Safety Research · $50 · over 2 years ago
Agency and (Dis)Empowerment · $250 · over 2 years ago
Isaak Freeman · $100 · over 2 years ago
Medical Expenses for CHAI PhD Student · $43 · over 2 years ago
Long-Term Future Fund · $50 · over 2 years ago

Comments

Ozempic for Sleep: Research for Safely Reducing Sleep Needs
Vincent Weisser · about 1 year ago
Important research project! Isaak and Helena are awesome and are assembling a great team that should make progress on it.

Cadenza Labs: AI Safety research group working on own interpretability agenda
Vincent Weisser · about 2 years ago
Awesome work! One of the most exciting areas of alignment in my view!

AI Safety Research Organization Incubator - Pilot Program
Vincent Weisser · about 2 years ago
Very excited about this effort; I think it could have great impact. I personally know Kay and think he has a good chance of delivering on this with his team!

AI Alignment Research Lab for Africa
Vincent Weisser · over 2 years ago
Glad to hear, and awesome to see this initiative!

Compute and other expenses for LLM alignment research
Vincent Weisser · over 2 years ago
Might be worth keeping it open for more donations if requested?

Transactions

For | Date | Type | Amount
Luthien | 10 months ago | project donation | $200
Luthien | 10 months ago | project donation | $200
Manifund Bank | 10 months ago | withdraw | $14000
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents | 11 months ago | project donation | $250
Investigating and informing the public about the trajectory of AI | 11 months ago | project donation | $250
human intelligence amplification @ Berkeley Genomics Project | 11 months ago | project donation | $100
Attention-Guided-RL for Human-Like LMs | 11 months ago | project donation | $100
human intelligence amplification @ Berkeley Genomics Project | 11 months ago | project donation | $500
AI-Driven Market Alternatives for a post-AGI world | about 1 year ago | project donation | $115
AI-Driven Market Alternatives for a post-AGI world | about 1 year ago | project donation | $100
MATS Program | about 1 year ago | project donation | $200
Lightcone Infrastructure | about 1 year ago | project donation | $100
Next Steps in Developmental Interpretability | about 1 year ago | project donation | $200
10th edition of AI Safety Camp | about 1 year ago | project donation | $200
Biosecurity bootcamp by EffiSciences | about 1 year ago | project donation | $100
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents | about 1 year ago | project donation | $200
Investigating and informing the public about the trajectory of AI | about 1 year ago | project donation | $200
Manifund Bank | about 1 year ago | deposit | +$17015
Making 52 AI Alignment Video Explainers and Podcasts | almost 2 years ago | project donation | $50
AI Safety Research Organization Incubator - Pilot Program | about 2 years ago | project donation | $200
AI Safety Research Organization Incubator - Pilot Program | about 2 years ago | project donation | $277
AI Safety Research Organization Incubator - Pilot Program | about 2 years ago | project donation | $500
Scaling Training Process Transparency | about 2 years ago | project donation | $150
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 2 years ago | project donation | $100
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 2 years ago | project donation | $10
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 2 years ago | project donation | $100
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 2 years ago | project donation | $790
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 2 years ago | project donation | $1000
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 2 years ago | project donation | $210
Cadenza Labs: AI Safety research group working on own interpretability agenda | about 2 years ago | project donation | $500
Manifund Bank | about 2 years ago | deposit | +$500
Manifund Bank | about 2 years ago | deposit | +$500
Manifund Bank | about 2 years ago | deposit | +$1000
Manifund Bank | about 2 years ago | deposit | +$1000
Manifund Bank | about 2 years ago | deposit | +$300
Exploring novel research directions in prosaic AI alignment | about 2 years ago | project donation | $200
Manifund Bank | about 2 years ago | deposit | +$200
Manifund Bank | about 2 years ago | mana deposit | +$10
MATS Program | about 2 years ago | project donation | $300
MATS Program | about 2 years ago | project donation | $500
Manifund Bank | over 2 years ago | deposit | +$500
Manifund Bank | over 2 years ago | deposit | +$300
Empirical research into AI consciousness and moral patienthood | over 2 years ago | project donation | $50
Empirical research into AI consciousness and moral patienthood | over 2 years ago | project donation | $70
Run five international hackathons on AI safety research | over 2 years ago | project donation | $100
Avoiding Incentives for Performative Prediction in AI | over 2 years ago | project donation | $50
Manifund Bank | over 2 years ago | deposit | +$200
AI Alignment Research Lab for Africa | over 2 years ago | project donation | $150
AI Alignment Research Lab for Africa | over 2 years ago | project donation | $100
AI Alignment Research Lab for Africa | over 2 years ago | project donation | $150
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 2 years ago | project donation | $100
Avoiding Incentives for Performative Prediction in AI | over 2 years ago | project donation | $100
Discovering latent goals (mechanistic interpretability PhD salary) | over 2 years ago | project donation | $150
Manifund Bank | over 2 years ago | deposit | +$500
Introductory resources for Singular Learning Theory | over 2 years ago | project donation | $50
Holly Elmore organizing people for a frontier AI moratorium | over 2 years ago | project donation | $100
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor | over 2 years ago | project donation | $50
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 2 years ago | project donation | $150
Activation vector steering with BCI | over 2 years ago | project donation | $150
Manifund Bank | over 2 years ago | deposit | +$500
Avoiding Incentives for Performative Prediction in AI | over 2 years ago | project donation | $50
Manifund Bank | over 2 years ago | deposit | +$500
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 2 years ago | project donation | $70
Alignment Is Hard | over 2 years ago | project donation | $70
Introductory resources for Singular Learning Theory | over 2 years ago | project donation | $70
Manifund Bank | over 2 years ago | deposit | +$500
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 2 years ago | project donation | $100
Compute and other expenses for LLM alignment research | over 2 years ago | project donation | $100
Optimizing clinical Metagenomics and Far-UVC implementation. | over 2 years ago | project donation | $100
Apollo Research: Scale up interpretability & behavioral model evals research | over 2 years ago | project donation | $160
Apollo Research: Scale up interpretability & behavioral model evals research | over 2 years ago | project donation | $250
Run five international hackathons on AI safety research | over 2 years ago | project donation | $250
Holly Elmore organizing people for a frontier AI moratorium | over 2 years ago | project donation | $100
Discovering latent goals (mechanistic interpretability PhD salary) | over 2 years ago | project donation | $400
Discovering latent goals (mechanistic interpretability PhD salary) | over 2 years ago | project donation | $40
Scoping Developmental Interpretability | over 2 years ago | project donation | $45
Scoping Developmental Interpretability | over 2 years ago | project donation | $1000
Scoping Developmental Interpretability | over 2 years ago | project donation | $455
Joseph Bloom - Independent AI Safety Research | over 2 years ago | project donation | $250
Joseph Bloom - Independent AI Safety Research | over 2 years ago | project donation | $100
Joseph Bloom - Independent AI Safety Research | over 2 years ago | project donation | $50
Manifund Bank | over 2 years ago | deposit | +$1000
Agency and (Dis)Empowerment | over 2 years ago | project donation | $250
Manifund Bank | over 2 years ago | deposit | +$2000
<e083e3b0-a131-4eaa-8a83-6a146a196432> | over 2 years ago | profile donation | $100
Medical Expenses for CHAI PhD Student | over 2 years ago | project donation | $43
<03fac9ff-2eaf-46f3-b556-69bdee303a1f> | over 2 years ago | profile donation | $50
Manifund Bank | over 2 years ago | deposit | +$900
Manifund Bank | over 2 years ago | deposit | +$100