@gjwilkerson
Independent researcher working at the intersection of complex systems, statistical physics, and AI safety. I am developing Uncertainty-Aware Language Models (ULMs) that generate structured reliability signals during inference, aiming to detect hallucinations and instability in real time. My prior research focused on network cascades and emergent computation, and I now apply related dynamical systems ideas to model evaluation, calibration, and interpretability. I work independently and am seeking support to advance reliability-focused AI research.
https://www.komplexai.io/
Independent researcher working at the intersection of complex systems and AI safety. I’m currently focused on empirical evaluation of LLM reliability and developing methods to detect instability during inference. My goal is to combine theoretical insight with hands-on engineering to improve the observability and trustworthiness of advanced AI systems.