UNT AI in Action  ·  Dr. Scott J. Warren  ·  University of North Texas

Coherense
Gamified Human-AI Collaboration for Executive Digital Transformation, Training, and Decision-support

Department of Learning Technologies  ·  College of Information

Why digital transformations fail

60–70%
Overall DT failure rate (Gartner, 2025)
42%
Organizational factors (Syed et al., 2023)
38%
Cultural resistance
35%
Leadership gaps
31%
Implementation problems
The core problem is usually not the shiny tool. It is the messy human system wrapped around it.
Syed et al. (2023). International Journal of Information Management.  |  Oludapo, Carroll, & Helfert (2024). Information Systems Management.  |  Gartner (2025).

What the app actually does

1
Users move through short mission-based scenarios that simulate real transformation constraints: stakeholder conflict, capability gaps, governance pressure, and uncertain technology fit. (Brown et al., 1989; Collins et al., 1989)
2
Instead of asking the AI to “solve it,” the user is prompted to justify choices, compare options, and explain consequences. (Zimmerman, 2000)
3
The system produces a structured output reviewable for coherence, defensibility, and organizational fit. (Schmarzo, 2020; Warren et al., 2025)

The Products: Coherense and Meridian

Coherense
AI-guided training platform
Executive digital transformation decisions
  • Mission-based scenarios with real DT constraints: stakeholder conflict, capability gaps, governance pressure
  • Human-AI collaboration: AI reasons alongside the user, not instead of them
  • Decision artifacts: every session produces a usable output for real planning
  • Three learning arcs: AI Integration, Quantum Readiness, Emerging Tech (ECET)
  • Structured feedback on reasoning quality and coherence of choice
↗ coherense.systemly.space
Meridian
Narrative learning game
Digital transformation leadership
  • Story-driven game: players navigate a fictional organization undergoing digital transformation
  • Consequential decisions: choices ripple through stakeholder relationships, resources, and outcomes
  • Companion to Coherense: transfers analytical frameworks into applied narrative practice
  • Replay and branching: learners test alternate paths and compare outcomes (Warren & Jones, 2017)
  • Under ongoing Delphi study validation with executive participants
↗ meridian.systemly.space
Coherense - Training Platform
Coherense training platform screenshot
Open live demo ↗
Meridian - Learning Game
Meridian learning game screenshot
Open live demo ↗

How Coherense supports gamified learning

Mission structure

Users do not passively read - they enter bounded scenarios with goals, constraints, and decisions to make. (Brown et al., 1989)

Progression and replay

Users can revisit missions, test alternate choices, and learn through comparison rather than one-shot exposure. (Zimmerman, 2000)

Feedback loops

The platform gives structured feedback on reasoning quality - closer to a simulation game than a lecture deck wearing a fake mustache. (Black & Wiliam, 1998)

Decision artifacts

Every mission ends with a usable output, reinforcing transfer to real planning practice. (Schmarzo, 2020; Warren et al., 2025)

Warren, Roy, & Robinson (2021). Advances in game-based learning. Springer.  |  Warren & Jones (2017). Game-based learning and 21st century skills. Springer.

Pedagogical design: how learning takes place

Scenario-based learning

Learners encounter realistic DT dilemmas requiring defensible choices under constraint - grounded in situated cognition (Brown et al., 1989) and cognitive apprenticeship theory (Collins et al., 1989).

Progressive disclosure

Content reveals in layers: Koan → Teaching → Context → Scenario. The scenario is locked until teaching is opened, reducing answer-peeking and supporting learner control. (Zimmerman, 2000)
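The layered unlock above can be sketched as a small gating rule. This is an illustrative sketch only - the class, method names, and strict layer-by-layer ordering are assumptions, not the platform's actual implementation; the source states only that the scenario stays locked until teaching is opened.

```python
# Hypothetical sketch of Koan -> Teaching -> Context -> Scenario gating,
# assuming a strict sequential unlock. Names are illustrative.
LAYERS = ["koan", "teaching", "context", "scenario"]

class Mission:
    def __init__(self):
        self.opened = set()

    def can_open(self, layer):
        """A layer unlocks only once every earlier layer has been opened."""
        idx = LAYERS.index(layer)
        return all(prev in self.opened for prev in LAYERS[:idx])

    def open(self, layer):
        if not self.can_open(layer):
            raise PermissionError(f"{layer!r} is locked until earlier layers open")
        self.opened.add(layer)

m = Mission()
m.open("koan")
m.open("teaching")  # context is now unlocked; scenario still requires context
```

The gate blocks answer-peeking by construction: there is no code path that reveals the scenario before the teaching layer has been visited.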

Posture diagnostics and feedback

Decision patterns (Speed / Governance / Caution) aggregate into a posture profile. Role-tailored feedback targets workplace transfer. (Black & Wiliam, 1998)
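The aggregation step can be sketched as a simple tally over tagged choices. A minimal sketch, assuming each scenario choice carries one Speed / Governance / Caution tag; the function name and output shape are hypothetical, not the platform's API.

```python
from collections import Counter

def posture_profile(choices):
    """Tally Speed / Governance / Caution tags into a posture profile.

    `choices` is a list of tag strings, one per scenario decision
    (an assumed data shape, for illustration only).
    """
    tally = Counter(choices)
    dominant, _ = tally.most_common(1)[0]
    return {"tally": dict(tally), "dominant": dominant}

profile = posture_profile(["Speed", "Governance", "Governance", "Caution"])
# dominant posture here: "Governance"
```

Role-tailored feedback would then key off the dominant posture and the spread of the tally, e.g. flagging an all-Speed profile for a governance-heavy role.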

Role and lens differentiation

Content is tailored to four executive roles and a secondary lens (ethics, readiness, risk, ROI, governance), implementing differentiated instruction at scale. (Tomlinson, 2001)

Game-based and narrative learning

Meridian extends learning into a narrative simulation where consequential choices compound across 20 quarters, producing durable behavioral change over declarative recall. (Warren et al., 2021; Warren & Jones, 2017)

Decision artifacts and transfer

Every arc ends with a downloadable output - FMEA risk score, SDTDF readiness report, or executive dashboard - ensuring learning transfers directly into practice. (Schmarzo, 2020; Warren et al., 2025)
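For the FMEA artifact, the conventional scoring multiplies Severity, Occurrence, and Detection ratings (each on a 1–10 scale) into a Risk Priority Number. The sketch below shows that standard calculation; whether Coherense uses exactly this formula is an assumption - the source names the FMEA risk score but not its internals.

```python
def rpn(severity, occurrence, detection):
    """Conventional FMEA Risk Priority Number: S x O x D, each rated 1-10.

    Illustrative only - the platform's actual FMEA scoring is not specified.
    """
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("FMEA ratings are conventionally on a 1-10 scale")
    return severity * occurrence * detection

rpn(severity=8, occurrence=3, detection=6)  # 144: rank against other failure modes
```

The single number matters less than the ranking it enables: failure modes are prioritized by RPN, which is exactly the kind of defensible, comparable output the decision-artifact design targets.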

How it was built: AI-assisted engineering

Spec-driven AI prototyping  A 288-line engineering specification (v40, February 2026) was authored first, then provided to Claude (Anthropic, 2024) as a complete prompt-context to generate the full prototype codebase - HTML, CSS, and JavaScript - with zero external libraries.
Architecture  Static front-end deployed on a Hostinger VPS. No frameworks or backend required for basic use. Phase 2 adds a Flask + SQLite API for session logging and research data export.
Research instrumentation  The spec includes a full behavioral logging schema - panel-open sequences, time-on-task, scenario choices, posture tallies, and tool uptake - enabling pre/post posture-shift measurement as a within-subject research instrument.
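The listed measures map naturally onto an append-only event table. A minimal SQLite sketch, assuming a generic event-log shape - the table and column names are hypothetical stand-ins for the spec's actual schema:

```python
import sqlite3

# Hypothetical event-log store for the measures named above. Column names
# are illustrative; the real schema lives in the engineering specification.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        session_id TEXT NOT NULL,
        ts         REAL NOT NULL,   -- epoch seconds, for time-on-task
        event_type TEXT NOT NULL,   -- e.g. 'panel_open', 'choice', 'tool_use'
        payload    TEXT             -- e.g. panel name, choice id, posture tag
    )
""")
conn.execute("INSERT INTO events VALUES (?, ?, ?, ?)",
             ("s1", 1700000000.0, "panel_open", "teaching"))
# Panel-open sequence = ORDER BY ts; time-on-task = max(ts) - min(ts) per session.
rows = conn.execute(
    "SELECT event_type, payload FROM events ORDER BY ts").fetchall()
```

Because every derived measure (sequences, durations, tallies) is a query over one table, pre/post posture-shift comparisons reduce to the same query filtered by session window.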
Iterative versioning  40 build versions across January–February 2026. Each version updated via revised specification prompts to Claude, demonstrating a human-in-the-loop model where the researcher retains full authorship of requirements, content, and design decisions.

Human-AI collaboration model

The AI layer acts as a reasoning collaborator rather than a generic chatbot (Warren & Beck, 2023)
It helps clarify intent, surface assumptions, and compare strategic options (Warren et al., 2023)
Players identify second-order effects before implementation decisions are finalized (Grotewold et al., 2024)
This ensures leaders ask "why," "can we," and "should we" before an IT adoption decision is finalized
Warren & Beck (2023). TechTrends.  |  Warren, Beck, & McGuffin (2023). Journal of Applied Instructional Design.  |  Grotewold, Warren et al. (2024). Manuscript in preparation.

Research and validation path

Current study
Delphi Study with Executive Participants
Focus areas
Usability · Learning experience · Guided Human-AI decision-making
An ongoing Delphi study with executives is examining usability, learning experience, and perceptions of guided human-AI decision-making across Coherense training and the companion Meridian learning game.
That matters because the platform is not just software - it is a training intervention that needs evidence of learning value and decision quality improvement.
Takeaway: If it changes behavior in organizations, not just screens, it has value.

References

Anthropic. (2024). Claude [Large language model]. https://www.anthropic.com
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42.
Checkland, P. (1981). Systems thinking, systems practice. Wiley.
Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship. In L. B. Resnick (Ed.), Knowing, learning, and instruction (pp. 453–494). Erlbaum.
Gartner. (2025). Top strategic technology trends. Gartner Research.
Grotewold, K., Warren, S. J., et al. (2024). Human-AI collaboration frameworks in organizational decision contexts. Manuscript in preparation.
Oludapo, O., Carroll, N., & Helfert, M. (2024). Digital transformation failure factors. Information Systems Management.
Schmarzo, B. (2020). The economics of data, analytics, and digital transformation. Packt.
Syed, R., et al. (2023). Why digital transformations fail. International Journal of Information Management.
Tomlinson, C. A. (2001). How to differentiate instruction in mixed-ability classrooms (2nd ed.). ASCD.
Warren, S. J., & Beck, D. (2023). Human-AI decision augmentation in organizational learning. TechTrends.
Warren, S. J., Beck, D., & McGuffin, M. (2023). Augmented decision-making frameworks. Journal of Applied Instructional Design.
Warren, S. J., & Jones, G. (2017). Game-based learning and 21st century skills. Springer.
Warren, S. J., Roy, M., & Robinson, R. (2021). Game-based learning for organizational performance. In Warren & Jones (Eds.), Advances in game-based learning. Springer.
Warren, S. J., et al. (2025). Dynamic Systems Engineering for digital transformation. Manuscript under review.
Zimmerman, B. J. (2000). Attaining self-regulation. In Boekaerts et al. (Eds.), Handbook of self-regulation (pp. 13–39). Academic Press.