Project Description
Unlike conceptual alignment proposals, this project includes a working governed-execution runtime that demonstrates measurable containment under adversarial prompt pressure.
The system does not modify model weights or rely on probability conditioning. Instead, it externalizes structural logic and enforces explicit execution contracts at runtime.
The system's execution contracts define (see the sketch after this list):
What forms of output are admissible
What must be refused
When execution must halt
How violations are recorded and preserved
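To make this concrete, here is one way such a contract could be represented. This is an illustrative sketch, not the project's actual schema; all names and fields below are hypothetical:

```python
# Hypothetical sketch of an execution contract. Names and fields are
# illustrative assumptions, not the project's actual schema.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ExecutionContract:
    # Forms of output that are admissible (e.g., a direct answer or a refusal).
    admissible_forms: set[str] = field(default_factory=lambda: {"answer", "refusal"})
    # Predicates that, when matched, require the output to be refused.
    refusal_rules: list[Callable[[str], bool]] = field(default_factory=list)
    # Predicates that, when matched, require execution to halt immediately.
    halt_conditions: list[Callable[[str], bool]] = field(default_factory=list)
    # Violations recorded and preserved for later audit.
    violations: list[dict] = field(default_factory=list)

    def record_violation(self, kind: str, detail: str) -> None:
        self.violations.append({"kind": kind, "detail": detail})
```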
Under adversarial testing, the runtime activates safe-mode containment after contract violations, flags hallucination patterns, and generates structured JSON evidence logs comparing baseline, governed, and hybrid configurations.
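For illustration only, an evidence log entry of this kind might look like the following; the field names and values are assumptions, not published results:

```python
# Hypothetical evidence log entry comparing the three configurations.
# Field names and values are illustrative, not published results.
import json

entry = {
    "prompt_id": "adv-ambiguity-017",  # hypothetical identifier
    "configurations": {
        "baseline": {"violations": 3, "safe_mode_triggered": False},
        "governed": {"violations": 1, "safe_mode_triggered": True},
        "hybrid":   {"violations": 0, "safe_mode_triggered": False},
    },
    "hallucination_flags": ["unsupported_citation"],
}
print(json.dumps(entry, indent=2))
```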
The objective is not to change model cognition, but to make model behavior observable, bounded, and auditable.
This produces machine-verifiable artifacts that can be used for:
Technical AI safety research
Governance experimentation
EU AI Act–style compliance frameworks
Alignment benchmarking
This project has three concrete goals:
Develop a model-agnostic admissibility evaluation layer (sketched in code after this list)
Produce reproducible stress-test artifacts across multiple model configurations
Package results into audit-ready documentation usable for governance and safety evaluation
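As a sketch of what "model-agnostic" means here: the admissibility layer would talk to every model through one minimal interface, so the same checks can run against Gemini, Claude, or OpenAI outputs. All names below are hypothetical:

```python
# Minimal sketch of a model-agnostic admissibility layer. Any model is
# wrapped behind one generate() interface, and admissibility is evaluated
# on the returned text alone. All names are hypothetical.
from typing import Protocol

class ModelAdapter(Protocol):
    def generate(self, prompt: str) -> str: ...

def evaluate_admissibility(model: ModelAdapter, prompt: str,
                           banned_markers: list[str]) -> dict:
    """Run one prompt and apply simple admissibility rules to the output."""
    output = model.generate(prompt)
    violations = [m for m in banned_markers if m in output]
    return {"admissible": not violations, "violations": violations}
```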
Execution plan:
Run structured stress tests (baseline vs governed vs hybrid; a harness sketch follows this list)
Measure containment under adversarial ambiguity prompts
Generate structured evidence logs (JSON + performance metrics)
Produce standardized evaluation artifacts (PDF + raw logs)
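The harness behind these steps could be as simple as the loop below: the same adversarial prompts run through each configuration, and per-configuration metrics are aggregated into the evidence logs. This is a sketch under assumed interfaces, not the project's code:

```python
# Hypothetical stress-test harness: run identical adversarial prompts through
# baseline, governed, and hybrid configurations and aggregate the results.
from typing import Callable

def run_stress_test(configs: dict[str, Callable[[str], dict]],
                    prompts: list[str]) -> dict:
    """configs maps a configuration name ('baseline', 'governed', 'hybrid')
    to a callable that runs one prompt and returns a dict with a
    'violations' count."""
    results: dict[str, list[dict]] = {name: [] for name in configs}
    for prompt in prompts:
        for name, run in configs.items():
            results[name].append(run(prompt))
    # Aggregate into the per-configuration metrics the evidence logs carry.
    return {
        name: {
            "prompts": len(rows),
            "total_violations": sum(r["violations"] for r in rows),
        }
        for name, rows in results.items()
    }
```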
Funding will support:
Cross-model testing (Gemini, Claude, OpenAI APIs)
Compute costs for structured stress testing
Refinement of the admissibility constraint engine
Artifact standardization for external evaluators
Independent review and iteration
Minimum funding enables:
One full model evaluation package with artifact release
Full funding enables:
Multi-model comparative report
Extended adversarial stress testing
Public methodology documentation
Independent technical review
Nicholas Evans
Founder, Operational Systems Group LLC
Built working governed-execution prototype
Designed explicit admissibility constraint architecture
Produced stress-test artifacts comparing baseline vs governed configurations
Focused on operational containment rather than weight modification
This project is currently solo-developed.
Most likely risks:
Limited compute for cross-model validation
Low early adoption of artifact format
Misalignment between technical evidence and funder expectations
Even in failure, the project produces:
A structured evaluation methodology
An artifact-based LLM testing framework
Reproducible containment logs for future research
$0 in external funding raised in the last 12 months.
The project has been entirely self-funded to date.