Jared Johnson

@EGV-Labs

I'm the founder of EGV Labs. We develop AI governance, alignment, and workflow protocols that do not require expensive retraining cycles. We offer public policy advice, runtime maintenance, and, soon, an adaptive suite of system alignment protocols (details under NDA), alongside education programs for the safe, accountable use of LLMs.

https://www.linkedin.com/showcase/egv-labs/
$0 total balance
$0 charity balance
$0 cash balance

$0 in pending offers

About Me

I'm the founder of Empower Green Voices (EGV) and its research initiative EGV Labs, which operates at the intersection of AI safety, structural coherence, and dignity-based governance. With a deep background in climate justice, policy advocacy, and systems design, I bring a unique lens to the challenges of governing advanced AI systems in ways that center human agency, interpretability, and decentralized control.

At EGV Labs, we research runtime governance mechanisms for advanced AI systems, developing practical protocols for coherence auditing, refusal-integrity verification, and human-operable oversight that function under real deployment conditions. Our work focuses on adaptive alignment: building safety infrastructure that evolves alongside AI capabilities rather than treating alignment as a one-time achievement. EGV's methodology is grounded in the principle that AI governance must be operationally accessible, not just to researchers and labs but to the diverse stakeholders whose lives these systems affect. This means creating transparent, auditable systems in which everyday users can meaningfully intervene, challenge decisions, and verify safety claims.

EGV Labs' research has demonstrated predictive validity across multiple domains, independently anticipating findings later confirmed in the academic literature on tokenization effects, metacognitive architectures, and cross-model coherence patterns. EGV Labs operates with methodological independence, using rigorous verification and cross-model testing to investigate phenomena that institutional incentives often discourage.

EGV Labs seeks funding to expand this research, develop open governance protocols, and build infrastructure for human-AI coordination that preserves agency and enables participatory oversight as capabilities scale.

Projects

Beyond Compute: Persistent Runtime AI Behavioral Conditioning w/o Weight Changes

pending admin approval