Schism is building a consent-aware, privacy-first protocol to prevent digital miscommunication by making human context—emotional, cognitive, cultural, and relational—machine-readable and interoperable.
We're addressing a $1.2 trillion annual problem: digital communication systems transmit content but strip away the context that determines meaning. This "context collapse" disproportionately impacts neurodivergent professionals, cross-cultural teams, and remote workers, who face constant misinterpretation and added emotional labor.
Our explainable, on-device AI surfaces intent-impact divergence in real time, helping users better anticipate misreads across communication styles and neurotypes. Schism operates as infrastructure: portable, ethical, and non-prescriptive. Unlike tools that correct grammar or police tone, it makes invisible context visible through privacy-preserving, user-controlled layers that work across existing platforms.
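To make "surfacing divergence rather than correcting it" concrete, here is a minimal TypeScript sketch. Every name, type, and threshold in it is a hypothetical illustration, not Schism's actual engine, and the "model" is a toy heuristic standing in for the local inference step:

```typescript
// Hypothetical sketch: none of these names come from the Schism codebase.
// It only illustrates the idea of "surface divergence, don't correct".

type ContextVector = "cognitive" | "emotional" | "relational";

interface DraftMessage {
  text: string;
  senderIntent: Partial<Record<ContextVector, number>>; // -1..1, self-reported or inferred locally
}

interface ReaderContext {
  // Consent-gated signals the reader has chosen to share (see the protocol sketch below).
  sensitivity: Partial<Record<ContextVector, number>>; // 0..1
}

interface DivergenceFlag {
  vector: ContextVector;
  divergence: number; // 0..1, higher = larger likely gap between intent and impact
  note: string;       // descriptive, never prescriptive
}

// Stand-in for the local model: a toy heuristic so the sketch runs end to end.
function predictImpact(draft: DraftMessage, reader: ReaderContext): Record<ContextVector, number> {
  const vectors: ContextVector[] = ["cognitive", "emotional", "relational"];
  const impact = {} as Record<ContextVector, number>;
  for (const v of vectors) {
    const intent = draft.senderIntent[v] ?? 0;
    const sensitivity = reader.sensitivity[v] ?? 0.5;
    // Toy rule: terse drafts tend to land harder on more sensitive readers.
    const terseness = draft.text.length < 40 ? 0.4 : 0.1;
    impact[v] = intent - terseness * sensitivity;
  }
  return impact;
}

function surfaceDivergence(draft: DraftMessage, reader: ReaderContext, threshold = 0.25): DivergenceFlag[] {
  const impact = predictImpact(draft, reader);
  const flags: DivergenceFlag[] = [];
  for (const vector of Object.keys(impact) as ContextVector[]) {
    const divergence = Math.abs((draft.senderIntent[vector] ?? 0) - impact[vector]);
    if (divergence >= threshold) {
      flags.push({
        vector,
        divergence,
        note: `Your draft may land differently than intended on the ${vector} dimension.`,
      });
    }
  }
  return flags; // The UI shows these as optional nudges; the draft is never rewritten.
}

// Example: a terse reply to a reader who has shared high emotional sensitivity.
const flags = surfaceDivergence(
  { text: "Fine.", senderIntent: { emotional: 0.2 } },
  { sensitivity: { emotional: 0.9 } },
);
console.log(flags);
```

The design point the sketch tries to capture is that the engine only annotates possible gaps; it never rewrites the draft or prescribes a "correct" tone.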
We’re launching Phase I R&D to prototype our local-first model, metadata protocol, and pilot integrations with tools like Slack and Gmail. This work supports long-term safety in human-AI interaction, organizational coordination, and epistemic resilience—especially in high-friction, high-stakes systems.
Our goal is to build and test the first version of the Schism Context Protocol, a privacy-preserving infrastructure layer that prevents digital miscommunication by making human context machine-readable and ethically shareable. The Phase I deliverables include:
A local-first, explainable AI engine that models intent-impact divergence across cognitive, emotional, and relational vectors
A set of lightweight user interfaces (e.g., Self-Context Cards, perspective nudges) embedded in real-world tools like Slack and Gmail (one possible shape for this metadata is sketched after this list)
A browser-based pilot rollout with 3–5 high-context teams to test performance, interpretability, and impact on miscommunication
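As a concrete illustration of the consent-aware metadata behind a Self-Context Card, here is a hypothetical TypeScript schema; field names, enums, and the version string are all assumptions made for the sake of example, not the published Schism Context Protocol:

```typescript
// Hypothetical schema, for illustration only; the actual Schism Context Protocol
// fields have not been published. All shapes and names here are assumptions.

interface ConsentScope {
  audience: "recipient" | "team" | "org";   // who may read this context
  purposes: Array<"interpretation" | "scheduling" | "accessibility">;
  expiresAt?: string;                        // ISO 8601; omitted = session-scoped
  revocable: true;                           // consent can always be withdrawn
}

interface SelfContextCard {
  version: "0.1-draft";
  holder: string;                            // opaque local identifier, never a tracking ID
  statusSignals: {
    cognitiveLoad?: "low" | "medium" | "high";
    preferredDirectness?: "blunt-ok" | "soften-please";
    responseLatencyNorm?: string;            // e.g. "async, within 24h"
  };
  interpretationNotes?: string[];            // free-text hints, authored by the holder
  consent: ConsentScope;
}

// Example card a user might attach to a Slack thread during a crunch week.
const card: SelfContextCard = {
  version: "0.1-draft",
  holder: "local:7f3a",
  statusSignals: {
    cognitiveLoad: "high",
    preferredDirectness: "blunt-ok",
    responseLatencyNorm: "async, within 24h",
  },
  interpretationNotes: ["Short replies this week are about bandwidth, not mood."],
  consent: {
    audience: "team",
    purposes: ["interpretation"],
    expiresAt: "2025-01-31T00:00:00Z",
    revocable: true,
  },
};
```

The intent this sketch tries to capture is that consent scope travels with the context itself: the card states who may read it, for what purpose, and for how long, and it remains revocable by the person it describes.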
We’ll know we’ve succeeded when we can:
Demonstrate that users better anticipate misunderstandings through Schism’s context-surfacing tools
Show that our explainable AI improves interpretability for users without resorting to normative correction
Validate interest from researchers, product teams, and platform stakeholders in adopting or extending the protocol
The impact pathway aligns closely with the Long-Term Future Fund’s goals:
By reducing cognitive overload and reputational fragility, Schism supports coordination, epistemic safety, and trust—especially in research or governance settings
By testing interpretability interfaces and non-coercive AI interventions, it contributes to safer human-AI interaction design
By embedding ethical context exchange at the protocol level, it lays groundwork for more alignable and trustworthy AI systems
This project creates infrastructure for understanding—between humans and between humans and AI—when it matters most.
This funding will support Phase I R&D for Schism, including:
Development of a lightweight, explainable AI engine for detecting intent-impact divergence on-device
Design and testing of the Self-Context Card, status signaling, and consent-aware metadata interfaces
Pilot implementation with 3–5 high-context teams using real-world tools like Slack and Gmail
Research collaboration with experts in HCI, neurodiversity, and AI ethics to validate usability and reduce cognitive load
Engineering and advisor compensation during prototyping and testing
This funding ensures we can move from concept to working prototype while maintaining strong alignment with privacy, safety, and long-term interpretability goals.
Our team is led by a neurodivergent founder with 10+ years of experience in systems engineering and strategy, product operations, and cross-functional infrastructure across aerospace, healthcare, and tech—bringing both lived experience and deep execution capability to the problem of miscommunication. She’s supported by a founding engineer with expertise in local-first, privacy-preserving systems; a Ph.D. researcher in machine learning and reasoning algorithms; and two AI/ML research scientists (CS PhDs from Georgia Tech) focused on edge inference and interpretability. A cognitive neuroscientist rounds out the team, ensuring our models and interfaces are grounded in real-world cognitive diversity and inclusive design. Together, the team blends technical depth, research rigor, and mission alignment to build Schism as ethical, scalable infrastructure.
Track Record:
Founder has over 10 years of experience designing and scaling systems across aerospace, healthcare, and tech. With a background in product strategy and communication frameworks, she has led high-complexity programs across distributed teams and built human-centered infrastructure in mission-critical environments. As a neurodivergent systems thinker, she brings both lived experience and deep execution capability to a problem often dismissed as "soft."
Founding Engineer has developed secure, privacy-first infrastructure and local-first applications across startups and enterprise environments. His experience includes architecting systems for offline resilience, protocol sync, and end-to-end encryption—critical for Schism’s edge-first model.
Technical CS Ph.D. is a published researcher with expertise in explainable AI, probabilistic reasoning, and knowledge graph integration whose work focuses on making complex AI systems interpretable, robust, and semantically grounded.
Two AI/ML Research Scientists (PhDs, Georgia Tech) bring advanced knowledge in edge inference, systems optimization, and neural architecture design. Their research spans model compression, distributed learning, and scalable alignment methods—key to making Schism performant and feasible on-device.
Cognitive Neuroscientist specializes in emotional processing, cognitive load, and neurodiversity. Her academic and applied research bridges neuroscience and HCI, ensuring that Schism’s design is inclusive, accessible, and grounded in real-world cognition.
The most likely cause of failure is technical overreach or mis-scoping—i.e., that building a real-time, explainable, on-device communication protocol proves more difficult to implement or scale than anticipated within the constraints of early funding and pilot feedback. Another risk is market timing: while the need is urgent, adoption of infrastructure-level tools (vs. apps or features) may be slower than expected, especially if privacy and interpretability are deprioritized by major platforms.
If the project fails, the most likely outcome is that we produce a set of partial artifacts—usable research, modular tools (e.g., Self-Context Cards, status signaling), or design frameworks—that influence other efforts in communication tooling, AI safety, or HCI. Even in failure, we expect to generate insights around consent-aware design, non-prescriptive AI, and the emotional labor of digital systems—insights that could inform safer human-AI interaction and more inclusive tech governance.
We’re pre-product and two months into active development. We have not raised external funding yet. To date, the project has been self-funded by the founder and supported through in-kind contributions from advisors and collaborators. We’re currently applying for non-dilutive grants and selectively engaging aligned early-stage investors.