Guidance and Collaboration Mechanisms of AI Agents in Design Sprint Learning
An Infographic on a Study of Personalization and Role Reconfiguration
The Evolution of AI’s Role
AI is evolving from a passive tool into an active facilitator of learning and innovation. This study focuses on a high-pressure “60-minute Mini Design Sprint” to explore how AI can effectively support creative tasks and to inform future AI-supported educational practice.
The Knowledge Gap
We lack a deep understanding of how AI functions as a true collaborator in creative learning environments.
- Gap 1: How do different levels of AI guidance affect a learner’s cognitive load and perceived success?
- Gap 2: How does a learner’s perception of the AI’s role (mentor, teammate, tool) shape their interaction patterns and trust?
Context Dependency
Core Hypothesis: The effectiveness of an AI agent is context-dependent, varying with guidance level and user perception.
The high-guidance group is expected to have lower cognitive load, while the low-guidance group may feel more autonomy but greater stress. Trust is a key mediating variable.
Multi-Method Inquiry
Research Design:
An explanatory case study with an embedded mixed-methods design, framed by Critical Realism.
Participants & Intervention:
~15 professionals randomly assigned to a high- or low-guidance AI condition for a 60-minute design task.
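As an illustration only, the sketch below shows one way the balanced random assignment to the two guidance conditions could be scripted; the participant IDs, group labels, and fixed seed are hypothetical, not details of the actual study.

```python
import random

def assign_conditions(participant_ids, seed=42):
    """Randomly split participants into high- and low-guidance groups,
    keeping the two groups as close in size as possible."""
    rng = random.Random(seed)        # fixed seed makes the allocation reproducible and auditable
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    cut = (len(shuffled) + 1) // 2   # if n is odd, the extra participant goes to one condition
    return {"high_guidance": shuffled[:cut], "low_guidance": shuffled[cut:]}

# Example: ~15 anonymized participants (IDs are placeholders)
groups = assign_conditions([f"P{i:02d}" for i in range(1, 16)])
print(groups)
```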
Measurement Instruments:
NASA-TLX (cognitive load), Human-AI Trust Scale, AI Role-Perception Scale, and semi-structured interviews.
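The NASA-TLX (Hart & Staveland, 1988) yields a raw score as the unweighted mean of its six subscale ratings and a weighted score that uses the 15 pairwise subscale comparisons as weights. The sketch below shows that arithmetic only, assuming 0–100 ratings; the example values are placeholders, not study data.

```python
# NASA-TLX scoring sketch (Hart & Staveland, 1988); example values are placeholders.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Raw TLX: unweighted mean of the six 0-100 subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings, tally):
    """Weighted TLX: each rating weighted by how often that subscale was
    chosen across the 15 pairwise comparisons (tallies sum to 15)."""
    assert sum(tally.values()) == 15, "pairwise-comparison tallies must sum to 15"
    return sum(ratings[s] * tally[s] for s in SUBSCALES) / 15

ratings = {"mental": 70, "physical": 10, "temporal": 80,
           "performance": 40, "effort": 65, "frustration": 55}
tally = {"mental": 4, "physical": 0, "temporal": 5,
         "performance": 2, "effort": 3, "frustration": 1}
print(round(raw_tlx(ratings), 1), round(weighted_tlx(ratings, tally), 1))
```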
Three-Stage Analysis
- Data Collection: Interaction text logs, scale data, and qualitative interview data (a minimal coding sketch follows this list).
- Analysis Process:
- Semantic Coding
- Thematic Coding
- Retroductive Inference
- Challenges & Mitigation: Addressing the researcher’s “pro-innovation bias” through reflexive practices.
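As a hypothetical illustration, the sketch below shows one way coded segments of the interaction logs could be stored so that semantic codes can later be grouped into candidate themes; the field names and example codes are assumptions, not the study’s actual coding scheme.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class CodedSegment:
    participant: str                            # anonymized ID, e.g. "P03"
    condition: str                              # "high_guidance" or "low_guidance"
    text: str                                   # excerpt from the human-AI interaction log
    codes: list = field(default_factory=list)   # semantic codes applied to the excerpt

def group_by_code(segments):
    """First step toward thematic coding: collect every segment that shares a
    code, so recurring codes can be reviewed as candidate themes."""
    grouped = defaultdict(list)
    for seg in segments:
        for code in seg.codes:
            grouped[code].append(seg)
    return dict(grouped)

# Illustrative entries only (placeholder text, not real data)
segments = [
    CodedSegment("P03", "high_guidance", "I just followed the AI's checklist.", ["deference_to_ai"]),
    CodedSegment("P07", "low_guidance", "I wasn't sure when to ask it for help.", ["uncertainty", "autonomy"]),
]
print({code: len(segs) for code, segs in group_by_code(segments).items()})
```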
Beyond “Does It Work?”
This research aims to move beyond simply evaluating whether an AI tool “works” and to investigate the underlying mechanisms of how it works. The expected contribution is context-sensitive insight for designing future AI learning environments in which AI acts as a true facilitator of human creativity and innovation.
References (APA 7th Edition)
- Archer, M., Bhaskar, R., Collier, A., Lawson, T., & Norrie, A. (2013). Critical realism: Essential readings. Routledge.
- Belland, B. R. (2017). Instructional scaffolding in STEM education: Strategies and efficacy evidence (1st ed.). Springer Nature. https://doi.org/10.1007/978-3-319-02565-0
- Bhaskar, R. (2008). A realist theory of science (2nd ed.). Routledge.
- Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
- Braun, V., & Clarke, V. (2012). Thematic analysis. In H. Cooper (Ed.), APA handbook of research methods in psychology, Vol. 2: Research designs (pp. 57–71). American Psychological Association.
- British Educational Research Association. (2018). Ethical guidelines for educational research (4th ed.). https://www.bera.ac.uk
- Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed methods research. SAGE Publications.
- Danermark, B., Ekström, M., & Karlsson, J. C. (2019). Explaining society: Critical realism in the social sciences. Routledge.
- Easton, G. (2010). Critical realism in case study research. Industrial Marketing Management, 39(1), 118–128. https://doi.org/10.1016/j.indmarman.2008.06.004
- Finlay, L. (2002). “Outing” the researcher: The provenance, process, and practice of reflexivity. Qualitative Health Research, 12(4), 531–545.
- Fletcher, A. J. (2017). Applying critical realism in qualitative research: Methodology meets method. International Journal of Social Research Methodology, 20(2), 181–194. https://doi.org/10.1080/13645579.2016.1144401
- Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619. https://doi.org/10.1126/science.1134475
- Hart, S. G. (2006, October). NASA-Task Load Index (NASA-TLX); 20 years later. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 50, No. 9, pp. 904–908). SAGE Publications.
- Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Advances in psychology (Vol. 52, pp. 139–183). North-Holland.
- Kemmis, S., & McTaggart, R. (2005). Participatory action research: Communicative action and the public sphere. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (3rd ed., pp. 559–603). SAGE.
- Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80.
- Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson.
- Oliveira, M., Brands, J., Mashudi, J., Liefooghe, B., & Hortensius, R. (2024). Perceptions of artificial intelligence system’s aptitude to judge morality and competence amidst the rise of Chatbots. Cognitive Research: Principles and Implications, 9(1), 47.
- Reason, P., & Bradbury, H. (Eds.). (2001). Handbook of action research: Participative inquiry and practice. Sage.
- Richards, L., & Morse, J. M. (2013). Readme first for a user’s guide to qualitative methods (3rd ed.). SAGE Publications, Inc.
- Saldaña, J. (2021). The coding manual for qualitative researchers (4th ed.). SAGE Publications Ltd.
- Sayer, R. A. (1992). Method in social science: A realist approach (2nd ed.). Routledge. https://doi.org/10.4324/9780203163603
- Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., & Hancock, P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 58(3), 377–400.
- Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. https://doi.org/10.1207/s15516709cog1202_4
- Sweller, J., van Merriënboer, J. J. G., & Paas, F. (2019). Cognitive architecture and instructional design: 20 years later. Educational Psychology Review, 31(2), 261–292.
- Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. https://doi.org/10.1016/j.jesp.2014.01.005
- Yin, R. K. (2017). Case study research and applications: Design and methods. SAGE Publications.