@Brian-McCallion
Independent researcher working on foundational models of large language model (LLM) behaviour and alignment. My work focuses on mechanistic explanations of hallucination, coherence, and failure modes, drawing on systems theory, information theory, and learning dynamics. I am currently developing a boundary-mediated framework for understanding inference and learning in LLMs, with an emphasis on testable predictions and alignment-relevant design patterns.
I work independently at the intersection of machine learning theory, systems thinking, and AI alignment. My background spans complex technical systems and long-term work on how structure, compression, and error correction shape intelligent behaviour. Rather than focusing on incremental performance improvements, I aim to develop mechanistic frameworks that explain why modern models behave as they do, and how specific failure modes arise.
Over the past year I have developed a unified theoretical model of LLM inference and learning that treats both token generation and training updates as irreversible boundary write events. This framework yields concrete, testable hypotheses about hallucination, alignment brittleness, and the limits of proxy-based safety methods. I am seeking support to validate these ideas empirically through small-scale experiments, with the goal of contributing durable foundations for alignment research rather than ad hoc mitigations.