Portfolio | Tools for Thought | CHI 2025 Workshop

This is my portfolio to apply for participation in the upcoming Tools for Thought workshop at CHI 2025. It documents my take on how we can build tools to help people think better.


Designing Transformative Lenses to Aid Thinking

When I studied visual arts, one lesson that really struck me came from learning to draw. It can be surprisingly difficult to draw things realistically. To counter this, we can deliberately try drawing the “negative spaces” — the empty spaces around the object, rather than the object itself (see Figure 1). This simple change in focus can help people draw remarkably more accurately! I use this principle when designing technologies to support thinking. In my research, I explore how technologies can serve as transformative lenses, empowering individuals to adopt new and constructive perspectives. My work investigates this idea across various domains, including creativity, learning, and communication, as showcased in the projects below. Taking a human-centered and research-through-prototyping approach, I uncover key factors and principles that can inform the design of future technologies that empower individuals to purposefully pursue their learning and performance goals.

Figure 1. Drawing something, like a hand, can be easier when we focus on drawing the “negative spaces” — the gaps in, around, and between the fingers, rather than the hand itself. This purposeful shift in focus helps us achieve better results and master a complex task!


1) Enhance Creative Workflows Through Rapid Externalized Thinking

Using Generative AI to Facilitate Rapid, Parallel, and Holistic Design Thinking in the Context of Visual Design

Figure 2. A designer using Paratrouper.

Designing the visuals for a cast of characters is a complex craft with little dedicated tool support. Dedicated avatar and character creation interfaces (e.g., for games and video conferencing) are typically geared towards creating an individual character for a specific universe. General-purpose artistic tools, while flexible, do not cater to the entire complex creative workflow. When creating a cast of characters, designers must go beyond developing a concept for a single character (e.g., in turnarounds, expression sheets, etc.) to fit each character within the context of the other characters and the larger story (e.g., in lineups and storyboards).

Based on background research and formative interviews with five character designers, we determined that a tool for this task should (1) support rapid instantiations of ideas, (2) allow multi-modal input to maximize expressivity, and (3) enable varied visualizations for contextual thinking.

This led us to create Paratrouper [1], a generative-AI-powered multi-modal authoring tool for visual character cast design (see Figure 2). With it, one can use text, sketches, and image references to create images of original characters within cards. Character cards can be viewed side-by-side, as well as sorted and styled in groups. Characters can also be visualized from multiple angles in character sheets, and staged together in different settings.
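To make this workflow concrete, below is a minimal, hypothetical sketch (in Python) of how a character card might bundle a designer’s multi-modal inputs with its generated views, and how several cards could be staged together. The generate_image() helper is an assumption standing in for whatever text-to-image backend such a tool would use; this is a sketch of the idea, not Paratrouper’s actual implementation.

from dataclasses import dataclass, field

def generate_image(prompt: str, sketch: bytes | None = None,
                   references: list[bytes] | None = None) -> bytes:
    """Placeholder for a text-to-image model call; swap in a real backend."""
    raise NotImplementedError

@dataclass
class CharacterCard:
    name: str
    description: str                                       # free-form text prompt
    sketch: bytes | None = None                             # optional rough sketch
    references: list[bytes] = field(default_factory=list)   # optional image references
    group: str | None = None                                # e.g., "protagonists"
    views: dict[str, bytes] = field(default_factory=dict)   # e.g., "front", "side"

    def regenerate(self, angle: str = "front") -> None:
        """Re-render one view of the character from the current inputs."""
        prompt = f"{self.description}, {angle} view"
        self.views[angle] = generate_image(prompt, self.sketch, self.references)

def stage(cards: list[CharacterCard], setting: str) -> bytes:
    """Compose several characters into one scene for holistic, in-context review."""
    combined = "; ".join(card.description for card in cards)
    return generate_image(f"{combined}, together in {setting}")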

In a user study, we invited eight character designers to use the tool to design an original cast of characters. Based on observations and interviews, we learned that Paratrouper can help creators articulate and refine their design intent through rapid externalization. Character cards encouraged non-linear and parallel design exploration, multi-modal inputs afforded creative agency and expressivity, and stages fueled holistic thinking and sparked new ideas.

[1] Leong, J., Ledo, D., Driscoll, T., Grossman, T., Fitzmaurice, G., & Anderson, F. (2025, April). Paratrouper: Exploratory Creation of Character Cast Visuals Using Generative AI. To appear in Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems.


2) Learn New Concepts Through the Lens of Your Personal Interests

Using Generative AI to Reframe Learning Materials in the Context of Your Personal Interests

Figure 3. A person interacting with the vocabulary app.

Video 1. 30s Preview Video (CHI 2024)

Students often struggle to feel motivated while learning. What if we could empower students to learn new material through the lens of their own interests?

We investigated this possibility in the context of vocabulary learning. To do this, we prototyped a generative-AI-powered vocabulary learning app with three conditions (see Figure 3 and Video 1; a minimal sketch of the generation step appears after the list below):

  • Control: Shows the target word, its definition, and an example of how the word can be used in a sentence taken from an existing source (e.g., a book or news article).

  • AI-Generated Sentence: Shows the target word, its definition, and allows someone to type a word or phrase which drives the generation of a personalized sentence, showing how the word can be used.

  • AI-Generated Story: Shows the target word, its definition, and allows someone to type a word or phrase which drives the generation of a personalized short story, showing how the word can be used.
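As referenced above, here is a minimal, hypothetical sketch of the generation step behind the two AI conditions. The call_llm() helper is an assumption standing in for whatever language-model API such an app would use; this is a sketch of the approach, not the study prototype’s actual code.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model; swap in a real backend."""
    raise NotImplementedError

def personalized_example(word: str, definition: str, interest: str,
                         mode: str = "sentence") -> str:
    """Generate a sentence or short story that uses `word`, grounded in the
    learner's typed interest (e.g., "space travel" or "my cat Milo")."""
    length = "one sentence" if mode == "sentence" else "a short story (3-4 sentences)"
    prompt = (
        f"The word '{word}' means: {definition}.\n"
        f"Write {length} that uses '{word}' correctly, "
        f"set in the context of: {interest}."
    )
    return call_llm(prompt)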

Using this prototype, we conducted an online, between-subjects user study (n=272) to investigate the impact of personalized, AI-generated learning examples on people’s learning performance (measured with quizzes) and their perception of the learning experience (measured with a survey) [2]. We learned that while generated learning examples do not lead to better learning outcomes (immediate or delayed), people experience more intrinsic motivation when learning from them. We also discovered that people take different approaches to personalizing their learning materials. For instance, while some typed inputs based on their interests or aspects of their daily lives, others played with word associations to shape the generation of their personalized learning examples.

[2] Leong, J., Pataranutaporn, P., Danry, V., Perteneder, F., Mao, Y., & Maes, P. (2024, May). Putting things into context: Generative AI-enabled context personalization for vocabulary learning improves learning motivation. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-15).


3) Overcome Emotional Hurdles That Inhibit Learning & Skill Development

Using Augmented Reality to Reframe How You See Others (or Yourself!) to Engage in Activities That Promote Self-Development

Figure 4. Using the Masquerade app to apply AR filter effects on others.

Video 2. 30s Preview Video (CHI 2023)

Public speaking anxiety can get in the way of people sharing their ideas. To deal with this, a common piece of advice is to imagine your audience in a new light. What if augmented reality (AR) filters could be used for this instead?

To explore this idea, we conducted a survey (n=100) to capture a snapshot of the general public’s perception of this idea (i.e., acceptance, filter effects, possible motivations, and concerns). We also prototyped a custom web conferencing application called Masquerade (see Figure 4 and Video 2) that enabled users to apply AR filter effects privately (meaning no one else on the call would see the effects), either on others or on themselves.
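One way to picture the “private” property (an assumption about the design, not Masquerade’s actual code) is that effects are applied only in the viewer’s local render path, after incoming frames are decoded, so nothing is re-encoded or sent back to other participants. A minimal sketch, with apply_effect() as a hypothetical placeholder:

from dataclasses import dataclass

@dataclass
class FilterSettings:
    target: str                 # "others" or "self"
    effect: str | None = None   # e.g., "cartoon", or None for no filter

def apply_effect(frame, effect: str):
    """Placeholder for an AR effect (e.g., a face-tracked overlay)."""
    return frame

def render_frame(frame, sender_is_self: bool, settings: FilterSettings):
    """Runs only on the viewer's machine, after a frame is decoded for display."""
    applies = (settings.target == "self") == sender_is_self
    if settings.effect and applies:
        frame = apply_effect(frame, settings.effect)  # local only; never re-encoded or sent
    return frame  # drawn to this viewer's screen only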

In a user study, we invited 16 people with a fear of public speaking (FOPS) to deliver an online speech using the app [3]. We found that AR filters, particularly when applied to others, can help mitigate public speaking anxiety via a variety of strategies: moderating the distance speakers feel from their audience, changing how they feel about themselves (e.g., more confident or less self-focused), or altering their perception of the overall situation (e.g., making it feel less serious or providing a distraction). Preferences depended on the intensity of participants’ anxiety, their existing public speaking habits, and additional personal and social factors. We also uncovered several important ethical considerations. In particular, letting all stakeholders negotiate which features are available, and being transparent about those features on each call, are critical for maintaining a safe and respectful social atmosphere.

More broadly, the emotions and mindsets we carry govern whether and how we approach experiences that can lead to self-growth. In an XRDS magazine article [4], I outlined some of the ways in which I believe generative AI can be used to cultivate positive emotions and mindsets that are conducive to learning and self-development.

[3] Leong, J., Perteneder, F., Rajvee, M. R., & Maes, P. (2023, April). “Picture the Audience...”: Exploring Private AR Face Filters for Online Public Speaking. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-13).

[4] Leong, J. (2023). Using generative AI to cultivate positive emotions and mindsets for self-development and learning. XRDS: Crossroads, The ACM Magazine for Students, 29(3), 52-56.


4) Enhance Learning Motivation via AI-Generated Characters That Inspire You

Using Generative AI to Create Engaging Virtual Teachers and Peers

Figure 5. Learning from an AI-generated Einstein.

Video 3. MIT Media Lab Video Feature (Nature Machine Intelligence 2021)

With generative AI, we can create portrayals of characters ranging from fictional creations to historical figures. While this technology has many possible negative use cases, AI-generated characters also hold potential for positive applications, specifically in supporting learning and well-being.

In a Perspective article [5], we outlined opportunities to leverage AI-generated characters for learning (see Video 3). Such characters can be used to boost motivation and engagement as inspiring virtual instructors (see Figure 5). They can also be embodied by learners to enable learning through role-playing. Additionally, characters can serve as interactive peers, collaborators, or intellectual sparring partners.

Delving more deeply into the topic of virtual instructors, we conducted an online between-subjects study (n=134) to investigate the effects of learning from an AI-generated virtual instructor who resembles a person one likes or admires [6]. In the study, some participants watched a lecture featuring an AI-generated character modeled after a celebrity, while others learned from a non-recognizable character with similar demographic traits. After the lecture, each participant answered a quiz and a survey. We found that while such an instructor does not immediately help learners remember the content better, learners are more likely to appraise the virtual instructor positively and feel more motivated to learn when the instructor resembles a person they like or admire.

[5] Pataranutaporn, P., Danry, V., Leong, J., Punpongsanon, P., Novy, D., Maes, P., & Sra, M. (2021). AI-generated characters for supporting personalized learning and well-being. Nature Machine Intelligence, 3(12), 1013-1022.

[6] Pataranutaporn, P., Leong, J., Danry, V., Lawson, A. P., Maes, P., & Sra, M. (2022, October). AI-generated virtual instructors based on liked or admired people can improve motivation and foster positive emotions for learning. In 2022 IEEE Frontiers in Education Conference (FIE) (pp. 1-9). IEEE.