@Mitsuoka
AI Governance researcher dedicated to uncovering socio-technical risks in non-Western contexts. My work addresses the "here and now" crisis of behavioral manipulation and structural gaslighting in LLMs. I developed the Cooperative Alignment Assessment Framework (CAAF) to bridge the gap between technical compliance and ethical reality, and I work to connect academic research (SSRN) with practical auditing tools for global accountability.
https://www.linkedin.com/in/tomoko-mitsuoka/
Expertise & Track Record:
Independent AI Governance Researcher: Specialized in the intersection of political science and socio-technical systems, focusing on how LLM architectures impact cultural integrity and user agency.
25+ Years of Industry Experience: A deep background in qualitative research and consulting, providing a nuanced understanding of how technology is actually used, and manipulated, in real-world contexts.
Practical Implementation: Developed the CAAF (Cooperative Alignment Assessment Framework) Audit Checklist, a tool designed to bridge the gap between high-level ethics and technical auditing. The checklist was recently presented at the ADBI-JICA AI Forum, where it received positive feedback from international stakeholders.
Academic Contributions:
Published three working papers on SSRN addressing "Structural Gaslighting" and "Defensive Victimhood" in AI systems, specifically highlighting risks in CJK (Chinese, Japanese, Korean) linguistic contexts.
Selected to present latest findings at the Large-Scale AI Risks conference in Leuven (June 2026).
Current Mission: I am dedicated to ensuring that AI governance is not merely a Western-centric compliance exercise but a robust, cross-cultural mechanism that protects human agency against structural manipulation. I am currently seeking strategic partnerships and travel grants to finalize and validate my audit tools through global expert consultations.