This project globalizes AI accountability by validating the CAAF (Cooperative Alignment Assessment Framework). Current safety standards are Western-centric and overlook the "here and now" crisis of behavioral manipulation across diverse cultures. I will validate and refine this tool through intensive expert consultation at the Leuven conference, ensuring that CAAF becomes a robust, internationally recognized standard for auditing AI behavioral manipulation.
Framework Validation: Stress-test the CAAF Audit Checklist with global experts in Leuven to secure a "seal of approval."
Systemic Intervention: Provide a socio-technical audit tool that identifies structural gaslighting, which standard benchmarks miss.
Open Standards: Publish a validated, cross-cultural audit standard on SSRN and a dedicated platform.

I will achieve this by leveraging my 25+ years of research expertise and the platform at Leuven to recruit collaborators and demonstrate the tool's efficacy to international stakeholders.
The funds cover travel and accommodation for the Leuven conference, a critical milestone for validation. Beyond travel, the grant will support the development of an online CAAF audit platform and its multi-language localization. While initially focused on public accountability for NGOs and policy-makers, this builds the foundation for a sustainable model of independent AI auditing and professional oversight.
I am an independent researcher with 25+ years of experience in qualitative stakeholder research.
Track Record: My CAAF framework was recently featured at the ADBI-JICA AI Forum.
Vision: I am an appointed expert at the Global AI Ethics Institute (Paris). I am currently a "team of one," but I am using this project to recruit a multi-disciplinary team of auditors and ethicists who share the mission of de-centering Western norms in AI safety.
The greatest risk is that the present-day crisis of "structural gaslighting" remains unaddressed while attention stays fixed on speculative future risks. Without this intervention in Leuven, the opportunity to force a confrontation with Big Tech's lack of accountability in non-Western contexts will be lost. Failure means human agency continues to be eroded by invisible, biased algorithms.
$0. All research and international outreach to date have been entirely self-funded to maintain absolute intellectual independence. This is my first formal grant application to scale this mission from an independent research project into a global auditing system.