CHI PLAY ’18 Abstracts: Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts
SESSION: Keynote Addresses
For the past few decades video games have been the focus of widespread concerns regarding violence, addiction and sexist content. In the United States, video games are blamed for high gun violence rates. The World Health Organization (WHO) has claimed that excessive gaming is tantamount to a disease. Activists worldwide express concerns that the sexist content of some games may lead to sexism and misogyny in real life. But is there good research evidence to support these claims?
Lessons learned from the “video game violence” research era are instrumental. The term “violent video game” never held conceptual, scientific value, yet was used as an emotional lodestone. This resulted in overstatements of evidence by politicians, scholars and professional guilds such as the American Psychological Association. In some cases, hyperbole continued despite mounting evidence that action-oriented games are associated with, if anything, declining trends in violence, and even evidence for effects on mild aggression remained inconsistent.
Nonetheless, this pattern has repeated in discussions of “sexist” games and their impact on real-life sexism, as well as in the controversy over the WHO’s “gaming disorder.” In both cases, advocates of positions that appear to disparage games pushed forward aggressively despite evidence that might have counselled a more cautious approach. It is time for a reassessment of games research and of how a societal culture of moral panic has shaped it over the past few decades. These cultural issues can be repaired, first, by understanding the historical patterns of moral panic that scholars contributed to and, second, by embracing a culture of open science.
Human expression and connection fuel our evolutionary humanity, curiosity and passion. So how far can we go? Pell is designing modes for following the body’s natural edge to the abyss of space. New works, including the parallel design of human-robotic performance protocols undersea and human-cinematic robot performance onstage, have inspired new modes of trans-disciplinary dialogue to understand affective visualization applications in performing astronautics. Technical concepts derived through play and performance on EVA (extravehicular activities, or spacewalks) have led to the development of technical configurations supporting the Spatial Performance Environment Command Transmission Realities for Astronauts (SPECTRA, 2018). Various SPECTRA experiments on Moon/Mars analogue missions have expanded protocols, for example the confined/isolated Lunar Station analogue mission simulations [Lunares 3 Crew] with transmission of LiDAR imaging and the choreographer’s moves for an artist-astronaut’s interpretation on the analogue Crater. Pell has demonstrated that interactions with SPECTRA systems have a direct impact on the artist-astronaut’s range of spatial awareness, orientation, geographic familiarization, and remote and in-situ operational training for amplifying performance capabilities on EVA. The significance of these new approaches is the widening of the definition of both technical and cultural activities in astronautics through play and performance. Other research, ranging from cinematic robotics and mixed realities (virtual reality, LiDAR projects and big-data immersive visualisation platforms) to an astronaut dance, is about designing systems for improved performance and cultural engagement: exploring the critical pathways, discourse and cultural practice surrounding space as inspiration for new works of art, and new ways of working with art and space, during a unique mission simulation.
These opportunities also support safe forums for reflexive analysis of our human ambitions, and indeed our assumptions, about a human return to the Moon, and future extra-terrestrial culture. SPECTRA tools translate visions for architecting a new era of spaceflight. Outcomes also signal new research and impact pathways for the artist, astronaut and avatar in space exploration and discovery.
SESSION: Paper Presentations
Voice interaction is increasingly common in digital games, but it remains a notoriously difficult modality to design a satisfying experience for. This is partly due to limitations of speech recognition technology, and partly due to the inherent awkwardness we feel when performing some voice actions. We present a pattern language for voice interaction elements in games, to help game makers explore and describe common approaches to this design challenge. We define 25 design patterns, based on a survey of 449 videogames and 22 audiogames that use the player’s voice as an input to affect the game state. The patterns express how games frame and structure voice input, and how voice input is used for selection, navigation, control and performance actions. Finally, we argue that academic research has been overly concentrated on a single one of these design patterns, due to an instrumental research focus and a lack of interest in the fictive dimension of videogames.
The intersection of the physically active human body and technology to support it is in the limelight in HCI. Prior work mostly supports exertion by offering sensed digital information about the exertion activity. We focus on supporting exertion during the activity through sensing and actuation, facilitating the exerting body and the bike to act on and react to each other in what we call ‘integrated exertion’. We draw on our experiences of designing and studying “Ava, the eBike”, an augmented eBike that draws from the exerting user’s bodily posture to actuate. As a result, we offer four design themes for designers to analyze integrated exertion experiences: Interacting with Ava, Experiencing Ava, Reduced Body Control Over Ava and Ava’s Technology, as well as seven practical tactics to support designers in exploring integrated exertion. Our work on integrated exertion contributes to engaging in new ways with the physically active human body.
Trading card games challenge players to select a card from their personal deck to compete against cards from an opponent’s deck, with the outcome determined by rules specific to the game. Players desire that the cards in their decks offer meaningful choice relative to those held by the opponent, since one player dominating removes all challenge from the competition. The issue of determining the existence and extent of meaningful strategies during competitive selection processes is common to a range of other contexts, including picking units for combat in real-time strategy games such as StarCraft II. The approach described models game outcomes as a skew-symmetric matrix and presents an algorithm for excluding dominated and dominating units, and then further ranks the remaining meaningful choice options. A metric, band size, quantifies the degree to which subsets of units can still contribute to meaningful game play. This process is applied to a single unit combat scenario using the StarCraft II rules to identify and rank a core set of 39 combat units that only offer meaningful choice within a limited neighbourhood of 12 units around each unit.
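The dominance-exclusion step described above can be sketched in a few lines of Python. Assuming a skew-symmetric outcome matrix M, where M[i][j] > 0 means unit i beats unit j, a unit can be dropped when some other unit does at least as well against every remaining opponent. This is an illustrative simplification of the paper’s algorithm, not its exact implementation:

```python
def remove_dominated(M):
    """Return indices of units not dominated by any other unit.

    M is a skew-symmetric outcome matrix: M[i][j] > 0 means unit i
    beats unit j, and M[i][j] == -M[j][i]. Unit i is dominated by
    unit k if k does at least as well against every third unit and
    strictly better against at least one. (Columns i and k are
    skipped, since those entries compare the two units directly.)
    """
    n = len(M)

    def dominated(i):
        for k in range(n):
            if k == i:
                continue
            others = [j for j in range(n) if j not in (i, k)]
            at_least = all(M[k][j] >= M[i][j] for j in others)
            strictly = any(M[k][j] > M[i][j] for j in others)
            if at_least and strictly:
                return True
        return False

    return [i for i in range(n) if not dominated(i)]
```

For a toy three-unit matrix where unit 0 beats everyone and unit 2 loses to everyone, the filter keeps units 0 and 1 and discards the strictly weaker unit 2.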
Game designers working with Head-Mounted Displays (HMDs) are usually advised to avoid causing disorientation in players. However, we argue that disorientation is a key element of what makes “vertigo play” (such as spinning in circles until dizzy, balancing on high beams, or riding theme park rides) an engaging experience. We therefore propose that designers can take advantage of the disorientation afforded by HMDs to create novel vertigo play experiences. To demonstrate this idea, we created a two-player game called “AR Fighter”, in which two HMD-wearing players attempt to affect each other’s balance. A study of AR Fighter (N=21) revealed three recurring vertigo-experience themes for researchers to analyse and associated design tactics for practitioners to create digital vertigo play experiences. With this work we aim to guide others in using disorientation as an intriguing game element to create novel digital vertigo play experiences, broadening the range of games we play.
eSports matches offer fast-paced entertainment for millions of viewers worldwide, but little is known about how to support a positive viewer experience. One of the key challenges related to popular real-time eSports games (e.g., multiplayer online battle arena games or first-person shooters) is empowering viewers to effectively follow rapid gameplay. In our paper, we address this challenge through the design of information dashboards to improve spectator insight and experience in League of Legends, and Counter Strike: Global Offensive. Based on surveys that received a total of 788 responses, we design information dashboards that we evaluate with 18 experienced eSports viewers. Our results show that dashboards contribute to spectator insight and experience, but that careful consideration is necessary to adequately manage in-game complexity and cognitive load of viewers, and establish spectator trust in information dashboards through transparent design. Based on these findings, our paper formulates design goals for spectator dashboards, and outlines key opportunities for future work.
Controller-based interaction is popular due to the increasing prevalence of console and couch-based games, but is known to be slower and less accurate than aiming with a mouse. In this study we evaluated the performance of five interaction techniques for games to ask whether gaze interaction can improve the performance of controller interaction. We compared mouse only, controller only, and gaze only with two commonly used gaze-and-controller hybrid interactions: gaze teleportation and gaze panning. We implemented a targeting game that resembled a Fitts’ Law test to evaluate performance, effort, and preference. Our findings show that mouse was the fastest technique and gaze was both the slowest and most error-prone. For the controller-based techniques, players preferred gaze teleportation over the other techniques; however, it only improved performance over the controller for targets that were small and far away.
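As context for the targeting task above, Fitts’ Law predicts movement time from target distance and width; a common (Shannon) formulation is MT = a + b·log2(D/W + 1). A minimal sketch follows; the coefficients a and b are arbitrary placeholders, not values fitted in the study:

```python
import math


def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Predicted movement time (s) via the Shannon formulation of
    Fitts' Law: MT = a + b * log2(D / W + 1).

    a and b are empirically fitted per device/technique; the defaults
    here are illustrative placeholders. The log term is the index of
    difficulty (ID) in bits: small, far targets have a higher ID,
    which is why a technique like gaze teleportation helps most for
    exactly those targets.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty
```

Doubling the distance to a target of the same width raises the index of difficulty and hence the predicted movement time, which is the regularity a Fitts’-style targeting game exploits.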
Designing for Friendship: Modeling Properties of Play, In-Game Social Capital, and Psychological Well-being
Players are increasingly viewing games as a social medium to form and enact friendships; however, we currently have little empirically-informed understanding of how to design games that satisfy the social needs of players. We investigate how in-game friendships develop, and how they affect well-being. We deployed an online survey (N= 234) measuring the properties of games and social capital that participants experience within their gaming community, alongside indicators of the social aspects of their psychological well-being (loneliness, need satisfaction of relatedness). First, our findings highlight two strong predictors of in-game social capital: interdependence and toxicity, whereas cooperation appears to be less crucial than common wisdom suggests. Second, we demonstrate how in-game social capital is associated with reduced feelings of loneliness and increased satisfaction of relatedness. Our findings suggest that social capital in games is strongly and positively related to players’ psychological well-being. The present study informs both the design of social games as well as our theoretical understanding of in-game relationships.
Virtual environments have been proven to be effective in evoking emotions. Earlier research has found that physiological data is a valid measurement of the emotional state of the user. Being able to see one’s physiological feedback in a virtual environment has proven to make the application more enjoyable. In this paper, we have investigated the effects of manipulating heart rate feedback provided to the participants in a single user immersive virtual environment. Our results show that providing slightly faster or slower real-time heart rate feedback can alter participants’ emotions more than providing unmodified feedback. However, altering the feedback does not alter real physiological signals.
Biofeedback holds great potential for augmenting game play, but research to date has focussed predominantly on single player games. This paper proposes an interactional approach, which emphasises how multiple players engage with biofeedback and one another to make sense of the feedback and to incorporate it into their game play. To explore this approach in the context of the dice game Mia, we designed AMIA (Augmented Mia), a prototype system that gives feedback on heart rate, skin conductance, and skin temperature on a player’s hat or armband. A study with 21 participants showed that biofeedback was ambiguous, but nevertheless participants harnessed it as a hint about their opponents’ strategies, as a means of distraction, as a handicap when players could not see their own feedback as it was presented on their hat, and as a point of connection with other players. We discuss the mechanisms underlying these interactions and present design opportunities along spatial, temporal, and compositional dimensions of biofeedback that encourage and heighten social interaction.
This paper reports on a study of the interaction skills of forty-two children, between the ages of eighteen and forty-two months, in using touch devices. A majority of the children had prior experience with touch devices. Continuous swiping, discrete touching and directional swiping were found to be the easiest actions to complete. The drag interaction was more difficult, but most children could complete the interaction. The pinch, stretch and rotate interactions were most difficult for the children to make successfully. Common errors included unintended movement during interactions, pressing too hard, and lack of precision due in part to the target size. This study expands the domain knowledge about a toddler’s ability to interact with touch devices, allowing better creation and selection of interfaces for them to use.
Digital games can offer rich social experiences and exciting narrations by the integration of interesting, believable companion characters. However, if companions fall short of the players’ expectations, they may instead spoil the whole experience, especially if they continuously accompany the player during the game. In this paper, we analyze the design space of companion characters in games. We discuss the influence of companions on players’ experiences and expectations. Furthermore, we present the results of an online survey (N = 237) to provide insights into players’ opinions and perceptions regarding game companions. According to the survey results, players attach great importance to a companion’s personality and its integration into the story of the game, and expect companion behavior to be context-sensitive, autonomous, and proactive. Altogether, our work aims at supporting the development of companions by highlighting the design decisions that have to be made and by suggesting ways to improve their believability.
Augmented sandboxes have been used as playful and educative tools to create, explore and understand complex models. However, current solutions lack interactive capabilities, missing more immersive experiences such as exploring the sand landscape from a first-person perspective. We extend the interaction space of augmented sandboxes into virtual reality (VR) to offer a VR environment containing a landscape that the user designs by interacting with real sand while wearing a virtual reality head-mounted display (HMD). In this paper, we present our current VR-sandbox system consisting of a box with sand, triple Kinect depth sensing, a virtual reality HMD, and hand tracking, as well as an interactive world simulation use case for exploration and evaluation. Our work explores the important and timely topics of how to integrate rich haptic interaction with natural materials into VR and how to track and present real physical materials in VR. In a qualitative evaluation with nine experts from computer graphics, game design, and didactics, we identified potentials, limitations, and future application scenarios.
Emotion-based Dynamic Difficulty Adjustment Using Parameterized Difficulty and Self-Reports of Emotion
Research has shown that dynamic difficulty adjustment (DDA) can benefit player experience in digital games. However, in some cases it can be difficult to assess when adjustments are necessary. In this paper, we propose an approach of emotion-based DDA that uses self-reported emotions to inform when an adaptation is necessary. In comparison to earlier DDA techniques based on affect, we use parameterized difficulty to define difficulty levels and select the suitable level based on players’ frustration and boredom. We conducted a user study with 66 participants investigating performance and effects on player experience and perceived competence of this approach. The study further explored how self-reports of emotional state can be integrated in dialogs with non-player characters to provide less interruption. The results show that our emotion-based DDA approach works as intended and yields better player experience than constant or increasing difficulty approaches. While the dialog-based self-reports did not positively affect player experience, they yielded high accuracy. Together, these findings indicate that our approach represents a useful tool for game developers to easily implement reliable DDA.
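The selection logic of such an emotion-based DDA could look like the following sketch: the next parameterized difficulty level is chosen from the player’s self-reported frustration and boredom. The thresholds, scale, and step size here are illustrative assumptions, not the values used in the study:

```python
def next_difficulty(level, frustration, boredom, lo=0, hi=10, threshold=4):
    """Adjust a parameterized difficulty level from self-reports.

    frustration and boredom are self-reported intensities (e.g. on a
    1-7 Likert scale). High frustration steps difficulty down, high
    boredom steps it up, and otherwise the current level is kept.
    Threshold, scale and level range are illustrative placeholders,
    not taken from the study.
    """
    if frustration >= threshold and frustration > boredom:
        return max(lo, level - 1)   # player is overwhelmed: ease off
    if boredom >= threshold and boredom > frustration:
        return min(hi, level + 1)   # player is under-challenged: ramp up
    return level                    # emotional state acceptable: hold
```

In the paper’s design, such self-reports are gathered through NPC dialog rather than explicit questionnaires, so a call like this would run after each dialog exchange.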
During game play a variety of undesired emotions can arise, impeding players’ positive experiences. Adapting game features based on players’ emotions can help to address this problem, but necessitates a way to detect the current emotional state. We investigate using input parameters on a graphics tablet in combination with in-game performance to unobtrusively detect the players’ current emotional state. We conducted a user study with 48 participants to collect self-reported emotions, input data from the tablet and in-game performance in a serious game teaching players to write Japanese hiragana characters. We synchronized data, extracted 46 features, trained machine learning models, and evaluated their performance to predict levels of valence, arousal, and dominance modeled as a seven class problem. The analysis shows that random forests achieve good accuracies with F1 scores of .567 to .577 and AUC of .738 to .740, while using input features or in-game performance alone leads to highly decreased performance. Finally, we propose a game architecture that is able to react to undesired emotion levels by adaptive content generation in combination with emotion recognition.
Livestreamed APGs (audience participation games) allow stream viewers to participate meaningfully in a streamer’s gameplay. However, streaming interfaces are not designed to meet the needs of audience participants. In order to explore the game design space of APGs, we provided three game development teams with an audience participation interface development toolkit. Teams iteratively developed and tested APGs over the course of ten months, and then reflected on common design challenges across the three games. Six challenges were identified: latency, screen sharing, attention management, player agency, audience-streamer relationships, and shifting schedules. The impact of these challenges on players was then explored through external playtests. We conclude with implications for the future of APG design.
Empowerment of movement through superhuman strength and flexibility is a staple of action video game design. However, relatively little work has been done on the same in the context of Virtual Reality and exergames, especially outside the most obvious parameters such as jumping height and locomotion speed. We contribute a controlled experiment (N=30) of exaggerating avatar flexibility in a martial arts kicking task. We compared different settings for a nonlinear mapping from real to virtual hip rotations, with the aim of increasing the avatar’s range of movement and kicking height. Our results show that users prefer medium exaggeration over realistic or grossly exaggerated flexibility. Medium exaggeration also yields significantly higher kicking performance as well as perceived competence and naturalness. The results are similar both in 1st and 3rd person views. To the best of our knowledge, this is the first study of exaggerated flexibility in VR, and the results suggest that the approach offers many benefits to VR and exergame design.
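A nonlinear real-to-virtual rotation mapping of the kind described could take the form of a power curve whose parameters control the exaggeration level. The exact mapping and constants used in the study are not reproduced here; this is an illustrative sketch:

```python
def exaggerate_rotation(real_deg, max_real=90.0, max_virtual=135.0, exponent=0.7):
    """Map a real hip rotation (degrees) to an exaggerated virtual one.

    The input is normalized to [0, 1], passed through a concave power
    curve (exponent < 1 amplifies mid-range motion most), and rescaled
    to the virtual range. Setting max_virtual above max_real yields
    exaggerated flexibility, e.g. a higher virtual kick than the real
    leg achieves. All constants here are illustrative assumptions,
    not the study's settings.
    """
    t = min(max(real_deg / max_real, 0.0), 1.0)  # clamp and normalize
    return (t ** exponent) * max_virtual
```

With these placeholder settings, a mid-range real rotation already maps to a noticeably larger virtual one, which is the “medium exaggeration” regime the study found users preferred over realistic or gross exaggeration.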
To navigate beyond the confines of often limited available positional tracking space, virtual reality (VR) users need to switch from natural walking input to a controller-based locomotion technique, such as teleportation or full locomotion. Overloading the hands with navigation functionality has been considered detrimental to performance, given that in many VR experiences, such as games, controllers are already used for tasks such as shooting or interacting with objects. Existing studies have only evaluated virtual locomotion techniques using a single navigation task. This paper reports on the performance, cognitive load demands, usability, presence and VR sickness occurrence of two hands-busy (full locomotion/teleportation) and two hands-free (tilt/walking-in-place) locomotion methods while participants (n=20) performed a bimanual shooting task combined with navigation. Though hands-free methods offer higher presence, they do not outperform hands-busy locomotion methods in terms of performance.
Game and player analysis would be much easier if user interactions were electronically logged and shared with game researchers. Understandably, sniffing software is perceived as invasive and a risk to privacy. To collect player analytics from large populations, we look to the millions of users who already publicly share video of their game playing. Though labor-intensive, we found that someone with experience of playing a specific game can watch a screen-cast of someone else playing, and can then infer approximately what buttons and controls the player pressed, and when. We seek to automatically convert video into such game-play transcripts, or logs.
We approach the task of inferring user interaction logs from video as a machine learning challenge. Specifically, we propose a supervised learning framework to first train a neural network on videos for which real sniffer/instrumented software was collecting ground truth logs. Then, once our DeepLogger network is trained, it should ideally infer log-activities for each new input video of that game’s gameplay. These user-interaction logs can serve as sensor data for gaming analytics, or as supervision for training game-playing AIs. We evaluate the DeepLogger system for generating logs from two 2D games, Tetris and Mega Man X, chosen to represent distinct game genres. Our system performs as well as human experts for the task of video-to-log transcription, and could allow game researchers to easily scale their data collection and analysis up to massive populations.
Challenge plays a critical role in enabling an enjoyable and successful player experience, but not all dimensions of challenge are well understood. A more nuanced understanding of challenge and its role in the player experience is possible through assessing player psychophysiology. The psychophysiology of challenge (i.e. what occurs physiologically during experiences of video game challenge) has been the focus of some player experience research, but consensus as to the physiological markers of challenge has not been reached. To further explore the psychophysiological impact of challenge, three video game conditions — varying by degree of challenge — were developed and deployed within a large-scale psychophysiological study (n = 90). Results show decreased electrodermal activity (EDA) in the low-challenge (Boredom) video game condition compared to the medium- (Balance) and high-challenge (Overload) conditions, with a statistically non-significant but consistent pattern found between the medium- and high-challenge conditions. Overall, these results suggest electrodermal response increases with challenge. Despite the intuitiveness of some of these conclusions, the results do not align with extant literature. Possible explanations for the incongruence with the literature are discussed. Ultimately, with this work we hope to both enable a more complete understanding of challenge in the player experience, and contribute to a more granular understanding of the psychophysiological experience of play.
GulliVR: A Walking-Oriented Technique for Navigation in Virtual Reality Games Based on Virtual Body Resizing
Virtual reality games are often centered around our feeling of ‘being there’. That presence can be significantly enhanced by supporting physical walking. Although modern virtual reality systems enable room-scale motions, the size of our living rooms is not enough to explore vast virtual environments. Developers bypass that limitation by adding virtual navigation such as teleportation. Although such techniques are intended to extend rather than replace natural walking, what we often observe are nonmoving players beaming to a location that is one real step ahead. Our navigation metaphor emphasizes physical walking by promoting players into giants on demand to cover large distances. In contrast to flying, our technique proportionally increases the modeled eye distance, preventing cybersickness and creating the feeling of being in a miniature world. Our evaluations show significantly increased presence and walking distance compared to the teleportation approach. Finally, we derive a set of game design implications related to the integration of our technique.
Systematic Review and Validation of the Game Experience Questionnaire (GEQ) – Implications for Citation and Reporting Practice
Despite lacking a formal peer-reviewed publication, the Game Experience Questionnaire (GEQ) is widely applied in games research, which might risk the proliferation of erroneous study implications. This concern motivated us to conduct a systematic literature review of 73 publications, analysing how and why the GEQ and its variants have been employed in current research. Besides inconsistent reporting of psychometric properties, we found that misleading citation practices with regards to the source, rationale and number of items reported were prevalent, which in part seem to stem from confusion over the “manuscript in preparation” status. Additionally, we present the results of a validation study (N = 633), which found no evidence for the originally postulated 7-factor structure of the GEQ. Based on these findings, we discuss the challenges inherent to the “manuscript in preparation” status and provide recommendations for authors, researchers, educators, and reviewers on how to improve reporting, citation and publication practices.
Ingestible sensors, such as capsule endoscopy and medication monitoring pills, are becoming increasingly popular in the medical domain, yet few studies have considered what experiences may be designed around ingestible sensors. We believe such sensors may create novel bodily experiences for players when it comes to digital games. To explore the potential of ingestible sensors for game designers, we designed a two-player game – the “Guts Game” – where the players play against each other by completing a variety of tasks. Each task requires the players to change their own body temperature measured by an ingestible sensor. Through a study of the Guts Game (N=14) that interviewed players about their experience, we derived four design themes: 1) Bodily Awareness, 2) Human-Computer Integration, 3) Agency, and 4) Uncomfortableness. We used the four themes to articulate a set of design strategies that designers can consider when aiming to develop engaging ingestible games.
Studies have shown that local latency — delays between an input action and the resulting change to the display — can negatively affect gameplay. However, these studies report several different thresholds (from 50 to 500ms) where local latency causes problems, and there is still little understanding of the relationship between the temporal requirements of a game and the effects of local latency. To help designers determine how lag will affect their games, we designed two studies that focus on specific atoms of interaction in simple games, and characterize both gameplay performance and experience under increasing local latency. We use the data from the first study to develop a simple predictive model of performance based on the amount of lag and the speed of the game. We used the model to predict performance in the second study, and our predictions were accurate, particularly for faster games and higher levels of lag. Our work provides a new analysis of how local latency affects games, which explains why some game atoms will be sensitive to latency, and which can allow predictive modeling of when playability will suffer due to lag, even without extensive playtesting.
There is an increasing trend in HCI on studying human-food interaction; however, we find that most work so far seems to focus on what happens to the food before and during eating, i.e. the preparation and consumption stages. In contrast, there is a limited understanding and exploration around using interactive technology to support the embodied plate-to-mouth movement of food during consumption, which we aim to explore through a playful design in a social eating context. We present Arm-A-Dine, an augmented social eating system that uses wearable robotic arms attached to diners’ bodies for eating and feeding food. Extending the work to a social setting, Arm-A-Dine is networked so that a person’s third arm is controlled by the affective responses of their dining partner. From the study of Arm-A-Dine with 12 players, we articulate three design themes: reduce bodily control during eating; encourage savoring by drawing attention to sensory aspects during eating; and encourage crossmodal sharing during eating, to assist game designers and food practitioners in creating playful social eating experiences. We hope that our work inspires further explorations around food and play that consider all eating stages, ultimately contributing to our understanding of playful human-food interaction.
Reflection is a core design outcome for HCI, and recent work has suggested that games are well suited for prompting and supporting reflection on a variety of matters. However, research about what sorts of reflection, if any, players experience, or what benefits they might derive from it, is scarce. We report on an interview study that explored when instances of reflection occurred, at what level players reflected on their gaming experience, as well as their reactions. Our findings revealed that many players considered reflection to be a worthwhile activity in itself, highlighting its significance for the player experience beyond moment-to-moment gameplay. However, while players engaged in reflective description and dialogic reflection, we observed little to no instances of higher-level transformative and critical reflection. We conclude with a discussion of the value and challenges inherent to evaluating reflection on games.
Exergames help senior players to get physically active by promoting fun and enjoyment while exercising. However, most exergames are not designed to produce recommended levels of exercise that elicit adequate physical responses for optimal training in the aged population. In this project, we developed physiological computing technologies to overcome this issue by making real-time adaptations in a custom exergame based on recommendations for targeted heart rate (HR) levels. This biocybernetic adaptation was evaluated against conventional cardiorespiratory training in a group of active senior adults through a floor-projected exergame and a smartwatch to record HR data. Results showed that the physiologically-augmented exergame leads players to spend around 40% more time at the recommended HR levels compared to the conventional training, avoiding over-exercising and maintaining good enjoyment levels. Finally, we made our biocybernetic adaptation software tool available to enable the creation of physiologically adaptive videogames, permitting the replication of our study.
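A biocybernetic loop of this kind can be sketched as a simple proportional controller that nudges game intensity toward a target heart-rate zone. The zone bounds and gain below are illustrative placeholders; the study’s actual adaptation rules are not reproduced here:

```python
def adapt_intensity(intensity, heart_rate, hr_low=100, hr_high=130, gain=0.01):
    """One step of a proportional biocybernetic adaptation loop.

    If the measured heart rate (bpm) is below the recommended zone,
    game intensity is raised; if above, it is lowered; inside the
    zone it is held steady. Intensity is clamped to [0, 1]. Zone
    bounds and gain are illustrative assumptions, not values from
    the study.
    """
    if heart_rate < hr_low:
        intensity += gain * (hr_low - heart_rate)    # under-exercising
    elif heart_rate > hr_high:
        intensity -= gain * (heart_rate - hr_high)   # over-exercising
    return min(1.0, max(0.0, intensity))
```

Called once per smartwatch HR sample, a loop like this keeps players inside the recommended zone longer than a fixed-intensity session, which is the effect the study measured.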
In this paper we present the design and evaluation of a first-person walker digital game called WORLD4. Walkers are a sub-genre of 3D games that typically feature minimal player interaction, slow-paced game play, and ambiguous goals. Walking, rather than ‘skill-based’ mechanics, is the primary means of interaction in walker games. However, the design of these game environments is not well understood and challenges many accepted game design conventions. We designed WORLD4, a multi-dimensional first-person exploration game, to explore how ambiguity might support exploratory game play experiences in virtual environments. Fourteen participants playtested WORLD4, and analysis of the data identified three descriptive themes specific to the walker game player experience: 1) designing partial inscrutability; 2) shifting meaning; and 3) facilitating subversion of expectations. We use these themes to describe a set of prescriptive design strategies that may assist designers in designing for ambiguity in exploratory game environments.
The human-computer interaction (HCI) field includes a long-standing community interested in designing systems to enable user reflection. In this work, we present our findings on how interactive narratives and roleplaying can effectively support reflection. To pursue this line of inquiry, we conducted an exploratory, cross-sectional study evaluating an interactive narrative we created, Chimeria:Grayscale. To address issues present in prior HCI studies on the topic of reflection, we grounded our system design methodology and evaluations in theories drawn from clinical psychology and education. The results of our study indicate that Chimeria:Grayscale, the interactive narrative we created by operationalizing our system design methodology, enabled our study participants to critically self-reflect.
A lack of racial-ethnic diversity in game characters and limited customization options render in-game self-representation by players of colour fraught. We present a mixed-methods study of what players from different race-ethnicities require to feel digitally represented by in-game characters. Although skin tone emerged as a predominant feature among players from all racial-ethnic groupings, there were significant group differences for more nuanced aspects of representation, including hair texture, style, and colour, facial physiognomy, body shape, personality, and eye colour and dimension. Situated within theories of how race is conveyed, we discuss how developers can support players of colour to feel represented by in-game characters while avoiding stereotyping, tokenism, prototypicality, and high-tech blackface. Our results reinforce player needs for self-representation and suggest that customization options must be more than skin deep.
Due to a steady increase in popularity, player demands for (online) video game content are growing to an extent at which consistency and novelty in challenges are hard to attain, and challenges in balancing and error-coping accumulate. We introduce the concept of deep player behavior models, applying machine learning techniques to individual, atomic decision-making strategies. We discuss their potential application in personalized challenges, autonomous game testing, human agent substitution, and online crime detection. Results from a pilot study carried out with the massively multiplayer online role-playing game Lineage II provide a benchmark comparing hidden Markov models, decision trees, and deep learning. Data analysis and individual reports indicate that deep learning can be employed to provide adequate models of individual player behavior, with high accuracy for predicting skill use and a high correlation when recreating strategies from previously recorded data.
Despite rewards being seemingly ubiquitous in video games, there has been limited research into their impact on the player experience. Informed by extant literature, we built a casual video game to test the impact of reward types, both individually (i.e. rewards of access, facility, sustenance, glory, and praise) and by variety of rewards (i.e. no rewards, individual rewards, all rewards). No evidence was found for differing reward types impacting the player experience differently. However, evidence was found for a greater variety of rewards having a positive impact on interest and enjoyment. Regardless of the impact of variety of rewards, the individual characteristic of reward responsiveness was found to predict sense of presence and interest and enjoyment. This paper contributes to the application of reward types and the general understanding of the impact of rewards on the player experience, and discusses the importance of trait reward responsiveness in player experience evaluation.
Guidelines on Successfully Porting Non-Immersive Games to Virtual Reality: A Case Study in Minecraft
Virtual reality games have grown rapidly in popularity since the first consumer VR head-mounted displays were released in 2016; however, comparatively little research has explored how this new medium impacts the experience of players. In this paper, we present a study exploring how user experience changes when playing Minecraft on the desktop and in immersive virtual reality. Fourteen players completed six 45-minute sessions, three on the desktop and three in VR. The Game Experience Questionnaire, the igroup presence questionnaire, and the Simulator Sickness Questionnaire were administered after each session, and players were interviewed at the end of the experiment. Participants strongly preferred playing Minecraft in VR, despite frustrations with using teleportation as a travel technique and feelings of simulator sickness. Players enjoyed using motion controls but still continued to use indirect input under certain circumstances, which did not appear to negatively impact feelings of presence. We conclude with four lessons for game developers interested in porting their games to virtual reality.
This work focuses on studying player behaviour in interactive narratives with the aim of simulating their choices. Besides sub-optimal player behaviour due to limited knowledge about the environment, the difference in each player’s style and preferences represents a challenge when trying to make an intelligent system mimic their actions. Based on observations from players’ interactions with an extract from the interactive fiction Anchorhead, we created a player profile to guide the behaviour of a generic player model based on the BDI (Belief-Desire-Intention) model of agency. We evaluated our approach using qualitative and quantitative methods and found that the player profile can improve the performance of the BDI player model. However, we found that players’ self-assessments did not yield accurate data to populate their player profiles under our current approach.
We present an exploratory study of analyzing and visualizing player facial expressions from video with deep neural networks. We contribute a novel data processing and visualization technique we call Affect Gradients, which provides descriptive statistics of the expressive responses to game events, such as player death or collecting a power-up. As an additional contribution, we show that although there has been tremendous recent progress in deep neural networks and computer vision, interpreting the results as direct read-outs of experiential states is not advised. According to our data, getting killed appears to make players happy, and much more so than killing enemies, although one might expect the exact opposite. A visual inspection of the data reveals that our classifier works as intended, and our results illustrate the limitations of making inferences based on facial images and discrete emotion labels. For example, players may laugh off the death, in which case the closest label for the facial expression is “happy”, but the true emotional state is complex and ambiguous. On the other hand, players may frown in concentration while killing enemies or escaping a tight spot, which can easily be interpreted as an “angry” expression.
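The abstract describes Affect Gradients only at a high level: descriptive statistics of classifier-detected expressions aligned to time-stamped game events. One plausible reading of that aggregation is sketched below in Python; the function name, data layout, and window size are all illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

def affect_gradient(frames, events, window=2.0):
    """Aggregate per-frame emotion probabilities around game events.

    frames: list of (timestamp, {emotion: probability}) pairs, e.g.
            the per-frame output of a facial-expression classifier.
    events: list of (timestamp, event_type) pairs, e.g. ("death", ...).
    Returns mean probability per emotion, per event type, over all
    frames within +/- `window` seconds of each event occurrence.
    """
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for ev_t, ev_type in events:
        for t, probs in frames:
            if abs(t - ev_t) <= window:
                counts[ev_type] += 1
                for emotion, p in probs.items():
                    sums[ev_type][emotion] += p
    # Normalise sums into means; event types with no nearby frames
    # simply do not appear in the result.
    return {ev: {e: s / counts[ev] for e, s in emo.items()}
            for ev, emo in sums.items()}
```

Under this reading, a "happy" spike around death events, as reported in the abstract, would show up as a high mean happiness probability for the "death" event type.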
Smartphones support gaming, social networking, real-time communication, and individualized experiences. Children and parents often take part in digital experiences with distant friends while isolating themselves from co-present family members. We present MeteorQuest, a mobile social game system that aims to bring the family together for location-specific game experiences through physical play. The system supports group navigation by mapping screen brightness to the proximity of various in-game targets. Mini-game stages were designed together with interaction designers to encourage physical and social interaction between the players through group puzzles, physical challenges of dexterity, and proxemics play. We conducted an exploratory study with three families to gain insights into how families respond to mobile social game features. We studied their socio-spatial arrangements during play and navigation through the lens of proxemics play and provide implications for the design of proxemic interactions and play experiences with families.
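The group-navigation mechanic couples screen brightness to target proximity. A minimal sketch of one such mapping is shown below, assuming a linear, clamped relation; the function name, sensing range, and linearity are illustrative assumptions, not MeteorQuest's actual design.

```python
def brightness_for_distance(distance_m, max_distance_m=50.0,
                            min_brightness=0.1):
    """Map distance to the nearest in-game target onto screen
    brightness in [min_brightness, 1.0]: the closer the group
    gets, the brighter the screen.

    Hypothetical parameters: max_distance_m is the assumed sensing
    range; min_brightness keeps the screen readable at any range.
    """
    # Clamp distance to the sensing range, then invert and rescale.
    d = max(0.0, min(distance_m, max_distance_m))
    closeness = 1.0 - d / max_distance_m   # 1.0 when at the target
    return min_brightness + (1.0 - min_brightness) * closeness
```

Feeding this value to the device's brightness API each time a location update arrives would give the ambient "warmer/colder" signal the abstract describes, without any explicit on-screen directions.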
Social play, and the role of technology in it, is a topic of central concern to the CHI PLAY and HCI community. In this paper we provide an overview of philosophical, psychological and sociological concepts and theories of social play and use these as a lens to conduct a literature review of research on interactive technologies in play contexts. Our chosen scope includes technologies which afford free play in groups of children within the same physical space. We identify how assumptions and stances about play influence which kind of technologies are designed, which social elements are supported and how success is defined and assessed. Finally, we propose a novel perspective on designing playthings which conceptualises them as boundary objects. We argue that such a perspective is particularly valuable when designing for heterogeneous groups of children and, thus, also has the potential to make a contribution towards designing effective roles of technologies for social inclusion.
The characteristics of virtual faces can be important factors for avatars and characters in video games. Previous work investigates how users create their own avatars and determines the generally preferred characteristics of virtual faces. However, it is currently unknown how the preferred characteristics of avatar faces depend on the player’s age and gender, or whether these demographics can be predicted from the data provided by an avatar creation system. In this paper, we investigate the effects of gender and age on the facial characteristics of 4,215 virtual faces created by 1,475 participants (994 male, 481 female), mainly from Central Europe, using a web-based avatar creation system and the Caucasian average face. Our results show that with increasing age, men and women design increasingly realistic and less stylized avatars. We also found that young persons design more androgynous avatars, while adults further increase the masculinity and femininity of their avatars. However, older women decrease the femininity and increase the masculinity of stereotypical faces. Based on the data collected during the avatar creation process, we can predict the participants’ gender with an accuracy of up to 91%, which opens up new use cases for video games and avatar creation systems. We discuss potential social, biological, and cognitive explanations for our results and contribute design implications for games and future avatar customization systems.
Avatars in virtual reality (VR) increase immersion and provide an interface between the user’s physical body and the virtual world. Avatars thus enable referential gestures, which are essential for targeting, selection, locomotion, and collaboration in VR. However, players of immersive games may have a virtual appearance that deviates from human-likeness, and previous work suggests that avatars can affect the accuracy of referential gestures in VR. One of the most important referential gestures is mid-air pointing, which has been shown to be affected by systematic errors that can be compensated for using different methods. It is unknown, however, whether the avatar must be considered in corrections of this systematic error. In this paper, we investigate the effect of the avatar on pointing accuracy. We show that the systematic error in pointing is significantly affected by the virtual appearance but does not correlate with the degree to which the appearance deviates from perceived human-likeness. Moreover, we confirm that people rely only on their fingertip and not on their forearm or index finger orientation. We present compensation models and contribute design implications to increase the accuracy of pointing in VR.
Whenever someone chooses to study instead of going to a party, or forgo dessert after dinner, that person is exercising self-control. Self-control is essential for achieving long-term goals, but isn’t easy. Games present a compelling opportunity to engage in tasks that allow a player to exercise and improve self-control, and consequently provide data about a person’s cognitive capacity to exert self-control. However, exercising self-control can be effortful and depleting, which makes incorporating it into a game design that maintains engagement and quality of experience a challenge. We present the design of game mechanics for exercising and improving self-control, and an initial study that effectively demonstrates that games can be designed to engage a broad level of self-control processes without negatively affecting player engagement and experience. Our results also show that player performance is connected to trait-level self-control. We discuss how (for example) players with low trait self-control can therefore be identified, and games intended to improve or exercise self-control can dynamically adapt to this information.
In recent years, a handful of powerful game engines, e.g. Unity 3D and Unreal 4, have been released that ease the production of high-quality computer games. Since these engines are free of charge for amateurs, their use has increased drastically worldwide, enabling small teams to create high-quality games rapidly. For blind people, however, no such tools exist that enable them to create games easily.
Blind Adventure closes this gap, enabling visually impaired or even blind people to create their own games. Games are strictly audio-based, and Blind Adventure works like a structured audio recorder, enabling game creators to record and lay out game levels. In this paper we introduce Blind Adventure, its gaming concepts, and its user interface.
The current trend to use applied games to engage users with mobile health (mHealth) delivery systems continues to build, yet research as to its effectiveness is still sparse. This study evaluates the effectiveness of using two different casual games to drive meaningful engagement with an mHealth app. MindMax was produced by the Australian Football League Players Association to improve the mental health and wellbeing of young Australians. Interviews (N = 42) and app usage data of organic users (N = 2679) suggest that MindMax has sustained users’ interest with the app by using the casual games as rewards for engagement. In addition, there is evidence that the gamification strategy was successful in drawing users back to the wellbeing modules. Mixed experiences with the more difficult game suggest the potential usefulness of game play data to inform more personalized mHealth messaging. Further uses for applied games in mHealth applications are discussed.
Educational games are a creative, enjoyable way for students to learn about technical concepts. We present Entanglion, a board game that aims to introduce the fundamental concepts of quantum computing — a highly technical domain — to students and enthusiasts of all ages. We describe our iterative design process and feedback from evaluations we conducted with students and professionals. Our playtesters gave positive feedback on our game, indicating it was engaging while simultaneously educational. We discuss a number of lessons we learned from our experience designing and evaluating a pedagogical game for a highly technical subject.
Violent Video Games in Virtual Reality: Re-Evaluating the Impact and Rating of Interactive Experiences
Research shows that bespoke Virtual Reality (VR) laboratory experiences can be differently affecting than traditional display experiences. With the proliferation of at-home VR headsets, these effects need to be explored in consumer media to ensure the public is adequately informed. As yet, the organizations responsible for content descriptions and age-based ratings of consumer content do not rate VR games differently from those played on a TV. This could lead to experiences that are more intense or subconsciously affecting than desired. To test whether VR and non-VR games are differently affecting, and thus whether game ratings are appropriate, our research examined how participants’ (n=16) experience differed when playing the violent horror video game “Resident Evil 7”, viewed from a first-person perspective in PlayStation VR and on a 40-inch TV. The two formats led to meaningfully different experiences, suggesting that current game ratings may be unsuitable for capturing and conveying VR experiences. The public must be better informed by ratings bodies, but also protected by developers and researchers conscious of the effects their designs may have.
CHI PLAY ’18 Extended Abstracts: Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts
SESSION: Course Summary
This hybrid course will allow participants to understand the complexities of games user research methods. To this end, we have put together four sessions (2 hours each, 8 hours total) on applying different user research methods in games evaluation, to help participants turn player feedback into actionable design recommendations. Two sessions will be delivered online before CHI PLAY 2018, one interactive face-to-face session will be delivered during CHI PLAY 2018, and one final session will be delivered online shortly after CHI PLAY 2018. The course is designed from an applied user experience (UX) research perspective and is suitable for participants unfamiliar with user testing and basic user research skills. The course material is based on the Games User Research book and will be delivered by the book’s editors.
SESSION: Doctoral Consortium
Children’s access to and engagement with digital products has surged in the past two decades, in parallel with paediatric hospital modernisation worldwide that aims to turn hospitals into child-friendly spaces, where children can be helped to feel less distressed through encounters with familiar technology. Nevertheless, the literature is sparse regarding the potential that technologies for children may have in relation to play in busy hospital settings. The scope of this PhD is thus to explore how digital technology can support social play among children aged 3-5, by involving various stakeholders in the process. This work is in support of the new children’s hospital in Edinburgh, due to open in late 2018.
This doctoral work focuses on strategies for designing and implementing playful interactive experiences for social intervention for children with Autism Spectrum Disorder. In particular, it explores strategies for fostering socialization in children with autism through digital technologies, and the corresponding interaction design principles that lead to these desired behaviors in playful settings. The study utilizes the interaction paradigm of collocated user scenarios in full-body interaction, where users interact with the system through their bodily movements. This research has been informed by previous projects conducted within the Full-Body Interaction Lab in the Cognitive Media Technologies research group at Universitat Pompeu Fabra in Barcelona.
Mosquito-borne diseases are a global public health concern. Individuals’ behavior may affect the transmission dynamics of diseases. For instance, improper storage of water can create a breeding site for the Aedes aegypti mosquito, the vector of dengue, zika, chikungunya, and urban yellow fever viruses. Most Aedes foci are in or near people’s homes, and control of the vector population is essential to prevent these diseases. Therefore, population awareness should be part of public health policies. Games are a powerful tool to promote awareness and behavioral change, including in public health. Our research goals are 1) the development and evaluation of a gamification-based system to support entomological and epidemiological surveillance through volunteered contributions, integrated with 2) a game-based platform to be used as a complementary tool to promote awareness and behavioral change. Partial results include the game design, prototypes, and demos. The expected results are the release and evaluation of a mixed reality game-based platform to support public health actions based on education and engagement of the population, aimed at vector control.
Using Games and Play to Foster Collaboration and Innovation in a Cross-Organizational Working Environment
This application for the doctoral consortium at CHI PLAY 2018 describes the research planned in the context of my PhD thesis, which aims at utilizing games and play to increase collaboration and innovation within a cross-organizational working environment, using a research campus as a case study. In the context of my thesis, a progressive series of pervasive game events (with participation possibilities ranging from online messages to live-action role-play) will be carried out and its effects studied.
The advancement of sensor technology has provided new opportunities for bodily play and consequently enriched our bodily experiences. The emergence of ingestible sensors supports capturing the user’s body data continuously. The intimacy between ingestible sensors and the human body also shapes our bodily experiences. My research focuses on utilizing ingestible sensors to facilitate playful and engaging experiences in HCI using a Research through Design approach. This will lead to the development of ingestible interfaces, which allow the creation of novel and playful experiences. My work so far has explored the playful experiences that can be designed by crafting the relationships between the user’s body and ingestible sensors. This research will contribute to the understanding of how to design playful experiences around ingestible sensors and ultimately inspire designers to create a wider range of future play experiences.
Immersive virtual avatars can powerfully affect a user’s behavior. Some changes in behavior can be positive, while others can be negative and unknown to the user. One hypothesis is that embodying an immersive virtual avatar could negatively affect a user’s implicit bias, which would be a serious cause for concern given the recent emergence of consumer virtual reality games. Here, I describe a pilot study exploring immersive virtual avatars and their effects on a user’s implicit associations. In this study, 26 white male participants embodied either a white or a black avatar and then shot at human-shaped cutouts in a simulated shooting game. A significant difference was observed between the two conditions, but in the opposite direction to that hypothesized: participants who embodied the white avatar recorded a lower implicit bias.
Many language-learning games teach grammar, vocabulary, or phonology. However, there appear to be no games, digital or non-digital, that teach the pragmatics of a language. Pragmatics is the study of how a language is conveyed and interpreted between speaker and hearer during spoken discourse. Learning pragmatics is essential for engaging in conversation with native speakers and interpreting the intentions within their speech. The outcomes of this PhD research project will include a card game for learning pragmatics in English and a framework for designing the game. This “research through design” project involved case studies that examined how language-learning card games engage learners prior to the conceptualisation of the first prototype iteration of the card game.
Today’s workplaces are experiencing a critical gap in employee engagement. Given the importance of engagement for employee satisfaction and for organisational performance and competitive advantage, a different approach to addressing engagement in workplaces is required. ‘Gamification’ – the use of game mechanics in non-game contexts – is an increasingly popular approach to increasing engagement and holds promise for addressing current engagement gaps in workplaces. Applying gamification to the complexities and idiosyncrasies of the workplace presents challenges for researchers and designers, particularly in understanding and dealing with the complex, social, and changing nature of workplaces. Cultural-Historical Activity Theory (CHAT) is a commonly used theory and research methodology in educational settings, and this dissertation seeks to apply CHAT to the gamification of workplace activity. Coupled with a design-based research approach to data collection and analysis, this study investigates the implementation and effectiveness of gamification in three organisations.
Mental well-being, physical vitality, and a thriving environment, while conceptually disparate, dynamically combine to influence the human spirit and condition. Using theoretical frameworks from a variety of disciplines, including computer science, psychology, communication, and marketing, my program of research examines this intersection of the mind, body, and planet, and how media technology, namely virtual reality (VR), can invigorate one, or all, of these factors. Within this paper I outline ongoing research projects that encapsulate my scholarly focus, prefacing them with existing literature that scaffolds their purpose and predictions. First, focusing on mental health, I evaluate a series of studies examining the effects of user-avatar interactions on anxiety. Second, with regards to physical health, I discuss on-going studies testing the effects of VR games on acute pain perception. Lastly, I provide an overview of several studies related to how depicting climate change impacts on flora and fauna in VR can encourage pro-environmental outcomes.
The Effect of Puzzle Video Games on High School Students’ Problem-solving Skills and Academic Resilience.
Video games are designed to be playful learning experiences. Existing qualitative research [1, 2] suggests there is a link between playing video games and improved resilience. My thesis aims to explore this link further using a mixed methods approach. This research will focus on puzzle video games as their core game mechanic involves learning through failure. Learning to deal with failure is a key component of resilience. Games offer unique environments where productive failure is encouraged rather than punished [3, 4]. Exploring the link between video games, productive failure and learning could help improve the design of games that improve resilience.
Investigating the Usability, User Experiences, and Usefulness of Digital Game-based Exercises for Elderly People: A Case Study of Finland
In this study, we evaluated the usability, user experiences, and usefulness of a digital game-based exercise, The Skiing game, for elderly people in Finland. The findings suggest that digital games are a promising alternative way for elderly people to exercise. The findings also highlight usability recommendations to take into account in future studies, and can be insightful for health practitioners, policy makers, researchers, and game designers, not only in Finland but also in other countries, for promoting elderly people’s physical well-being through digital gameplay.
Exploring Opportunities for Scalable Outcomes of Co-design Activities with Marginalised Groups of Children
My PhD work seeks to explore roles for digital playthings to support groups of children with mixed abilities in co-located, social play. In a Participatory Design process I investigate how children and other stakeholders can be involved in creating such technology, with a particular focus on how to scale outcomes from the idiosyncratic context of the project to increase their impact in a broader inclusive educational environment. Within the Social Play Technologies project we conduct co-design workshops to support marginalised groups of children in the design process and collaboratively create smart artefacts for social play activities. With my PhD research I aim to bridge the gap between co-design workshops as research practice and scalable project outcomes that reach beyond the project’s scope. To this end I ask what constitutes outcomes and explore different ways to expand their impact by involving key stakeholders such as relevant industry, policy makers, or education providers. As a result, I develop a methodological framework that supports designers to create spaces for negotiation in participatory design (PD) projects with diverse stakeholders and to take their outcomes beyond academic practices.
The ability to recall our past is essential to who we are. We use our past memories as a basis to inform what we do and how we behave; the events we remember tell us about what and who is important to us and help us to make decisions about the future. Traditionally we use media such as diaries, photos, film, and mementos as cues to help us recall memories. However, traditional media are limited in the level of detail and the cues they provide to trigger memory recall. As our own recollection of an event fades with time, these cues can become less effective for recall. This project investigates whether more immersive forms of media, such as virtual environments, provide a more efficient means to trigger memory recall compared to more traditional forms of memory preservation.
The Transmutation of Perception: Research of Attention and Visual Guidance in Virtual Reality Context
As the panoramic experience of virtual reality (VR) breaks the ‘frame’ of traditional film, the old audio-visual language is dissolving and a new system has not yet been established. So far, developers and researchers have not reached a conclusion about the audio-visual system in a stereoscopic, user-controlled environment, and this has already become the most urgent issue in the VR industry. In order to provide a solution, the relationship is examined between visual machine, visual mechanism, and visual attention based on visual archaeology and visual attention theory. A discussion is introduced about how films could learn from the visual attention guidance of digital games in the VR environment and to eventually build a visual attention guidance system that serves as a practical reference for creation in the future.
Most of the research in the field of educational game design has focused on the effectiveness of games for learning and on engagement gains. Very little research has studied how teachers use games in the classroom and how this interaction impacts the game development process to produce an effective learning tool. More information is needed on how teachers can design connecting activities to go with games. This paper highlights the author’s studies examining how teachers’ behaviors and attitudes impact educational game design and its classroom implementation.
SESSION: Interactivity & Play
Historical narratives of conflict typically revolve around heroes and villains, or perpetrators and victims. However, this division of events and people into good and evil greatly reduces the extent to which the past can be analysed, explained, and understood. To truly understand the actions that lead to conflict, one must appreciate the dense network of relationships between social agents, each with their own personal motivations and ideals. A contemporary political viewpoint capturing this multiperspectivity is Agonism. Focusing on characters and events, Agonism emphasises the socio-cultural interactions and relationships between all agents involved, including bystanders and, crucially, perpetrators. We discuss two ‘Games for a Social Change’ that we have developed to promote an Agonistic view: Endless Blitz and Umschlag ’43. We describe the games themselves and the framework of memory studies that informs our work.
DIYSPY Live Remote Play is an App-enabled live game experience for CHI Play ’18. It showcases how teams not co-located can bond through playful, improvisational and storytelling mechanics. The game involves the use of computers with video conferencing software (Zoom or Skype) in different rooms at a CHI Play venue, and an App. Through the spy-themed game, players will be prompted to run around and find objects from their actual immediate environment as gadgets to solve their missions. They then use the objects as improvisation prompts to tell fantastical tales about their spy mission. Using performance, comedy, and live play techniques, this game facilitates team bonding and adaptability to new ideas, in the now pervasive creative industries context of remote work.
For our demonstration, we present two cooperative social Virtual Reality (VR) games that support sharing and augmenting facial expression between a VR player and a laptop player. The two games, “Bomb Defusal” and “Island Survivor”, provide different and unique experiences that take advantage of sharing the player’s facial expression. Our system captures the facial expressions of the laptop player and shares them, via their avatar, with the VR player in real time. This demonstration shows the potential of sharing facial expression, and potentially emotions, for cooperative VR games.
Your Move Sounds So Predictable is a semi-improvised two-player movement and sound game, based around a pair of bespoke motion-sensing sonic balls. Players pull a card and follow the instruction on where to place the ball in relation to their body. The sonic behavior of each ball has been programmed to exhibit a moderately complex and hard-to-predict set of responses to the user input that challenge the user’s expectation and the experience of autonomy and causality. The balls also communicate with each other, adding additional causal flows. Each player explores this relationship between movement and sound through play, whilst at the same time attending to the emergent sonic composition created by the group. Chaos or harmony will ensue.
UPDATED: 31 August 2018. This paper describes the computer game The Frog’s Princess, an interactive storybook designed for fifth-grade students. Players choose from ten dialog options for the prince and princess, creating their own unique version of the fairy tale to edit and keep. Players explore how they would express themselves as heroes and heroines. Support materials include reading and writing exercises with teacher notes.
Visually impaired people can, depending on the degree of impairment, detect changes on a game board through haptic or audio cues. This scanning of the game area requires both time and cognitive load to remember the setup, which diminishes the relaxation games are meant to provide. As an alternative, we propose using a matrix of vibration motors on the abdomen for haptic rendering when designing games for visually impaired people. As an example, we will demonstrate a simple Pong game played without visual cues.
With 3D printers becoming more available, affordable, and easy to use, they can serve as a tool to produce flexible textures and even complex tangible objects for visually impaired people. Similarly, the Arduino system allows end-users to easily create electronic devices, for learning as well as hobby purposes. With these personal fabrication processes, people can easily produce haptically perceivable, electronically enhanced games, with a focus on designing games for impaired people. We present three basic games enhanced for visually impaired people and built with these rapid prototyping technologies: BrailleMemory, a Braille learning game with RFID enhancement; a haptic Yahtzee scorecard; and 3D game cards for a building block game.
Positional tracking ensures high presence in Virtual Reality (VR). Though mobile VR has great potential to bring VR to the masses, enabling positional tracking on mobile VR platforms has been a challenge. Existing implementations often rely on computer vision or require special sensors, and are often computationally intensive, resulting in a low frame rate and reduced battery life. We present StereoTrack, a 180° positional tracking method using acoustic sensing that has low computational overhead and is minimalistic in terms of required hardware (a pair of speakers), which may allow for large-scale adoption. A number of studies evaluate StereoTrack in terms of precision, accuracy, and effect on overall latency. We test the feasibility of StereoTrack in the context of two games that demonstrate its potential to enhance existing mobile VR interaction options.
Playing with food is both a common taboo and a secret desire. From a young age children are taught not to play with their food, yet are fed in playful ways (“here comes the aeroplane”). The food we consume, be it a fruit picked from a tree or an elaborate restaurant dish, is full of play opportunities, and recent technological advances such as miniaturized electronics and fabrication techniques have opened up a host of new potentials for play and interaction. In this paper we examine the relationship between food and play: What does it mean to play with your food? What are the materialities of taste and flavour in a world in which the physical and the digital are deeply intertwined, and in which the senses can easily be fooled by virtual means? And what modalities of play are possible with food, drink, and flavor sensations?
Game-based cyber-security training for enterprise users is a relatively under-researched topic. We conducted an experiment on this topic with Phishy, a game-based phishing awareness training, to answer the following questions: a) Do tech-savvy enterprise users benefit from game-based training, so that this methodology could be widely used to teach important topics at large scale? b) Is such a game engaging to the associates, and will they come back to play it more than once? c) Is the game capable of helping them? The Phishy game was successfully completed by 8,071 associates within a month. From their game data, we found that associates who completed the game showed a significant improvement in identifying phishing links. This paper explains the design, development, and analysis of Phishy, an online serious game that we developed to provide phishing awareness training to enterprise users.
The Phonoloco technology repurposes a set of regular smartphones into a system for playful interactions. Its most important feature is the use of the front-facing camera and a pattern on the ceiling to sense the position of each device lying on a flat surface with centimeter precision. In this way each smartphone becomes a digital manipulative that the player can move around. We will present the details of the technology and give an overview of possible interactions. Two games were developed and tested with preschoolers, illustrating the feasibility of the system.
We reflect on the design, implementation, and testing of the experimental testbed game Beam Me ‘Round, Scotty! II and the numerous design lessons learned in transitioning theoretical research questions about social presence and connectedness into concrete gameplay mechanics contrasting asymmetric and symmetric cooperative play. We discuss the unanticipated challenges that can emerge from seemingly unrelated design choices and the importance of grounding experimental conclusions and design recommendations in specific gameplay contexts.
Educational games have been shown to help children with learning disabilities overcome their challenges. Gaming and feedback elements can benefit the learning process while keeping children engaged and help them regain motivation for learning. We present the design of a mobile serious game for German dyslexic primary school children that incorporates gaming elements such as narrative, pedagogical agents, tutorials, feedback, and reward mechanisms. We derive our game design decisions and specify the rationales behind them, with special focus on the needs and demands of the target group. We evaluate the gaming elements based on the results of 63 children who played the game at home over a period of 9–10 weeks. Results indicate an overall positive perception of the game elements: children were immersed in the fantasy-themed world, liked the pedagogical agents, indicated that the interactive tutorials gave them an easy start into the game, and emphasized the special importance of praise.
Co-smonauts in Retrospect: The Game Design Process of an Intergenerational Co-located Collaborative Game
This paper describes the design process of an intergenerational co-located cooperative game called Co-smonauts. The main aim of the research project was to identify design factors for intergenerational games in a museum context that foster social exchange between old and young, feature an intuitive control scheme, and offer a certain amount of depth that keeps players enjoying themselves. In detail, this submission provides an overview of the study-guided design process and the key design decisions that led to the final game. Three steps are presented (Game Design, Building Phase Design, Navigator Interface Design), each containing a detailed description of the design challenges, the studies carried out to address them, and the resulting design implications. In general, we aim to provide insights for both researchers and designers regarding the design, development, and evaluation of co-located collaborative intergenerational games.
Paired playing is a variant of the think-aloud method that has potential value in understanding the moment-to-moment work players perform as they play computer games. Compared with traditional think-aloud approaches, the use of two participants creates an experience that can be less awkward and confronting, though at the cost of managing and maintaining a social interaction. This paper reports the results of an ethnomethodologically informed analysis of paired playing sessions of a custom computer game, the Pirate Dice game, aimed at understanding the kind of work players performed in order to function as a pair, with implications for understanding some of the limits of this method. It reveals that players were involved in a skilful and intricate “dance” involving talk, gestures, the mouse, and the game itself; it also identifies some of the variability in the playing experience of these pairs and shows that collaborative play is its own mode of play. Overall, it suggests that paired playing may be complementary to other forms of think-aloud but is no replacement for them.
This paper outlines the development, creation, and initial presentation of a computer program designed to direct human performers in improvised musical performance. The work, titled “Electric Sheep”, is grounded in late-20th-century models of play-based improvisation, centred on American composer John Zorn’s “game-pieces” of the 1980s. It seeks to overcome some of the technical limitations of previous game-pieces whilst also providing a functioning example of player-computer interaction (PCI) in improvised music practice. Using an iterative rehearsal and development process, we were able to isolate and highlight the importance of non-verbal and non-musical communication between improvising musicians, and we offer suggestions for incorporating this kind of feedback into future systems. Through this work, we highlight exploring the intersection of PCI and musical play as a valuable method of forming insight into rich PCI interactions.
In this paper, I introduce Plan Your Brisbane, a web-based, mobile-responsive game produced by Brisbane City Council. The game was developed as a platform for civic engagement with Brisbane residents, seeking feedback and building consensus around a charter of principles to help guide future city planning decisions and the need to accommodate an estimated additional 386,000 residents by 2041. Plan Your Brisbane was available online between 18 February and 16 April 2018 and attracted 100,869 players. The game was particularly successful in engaging residents in the usually difficult-to-reach 18–24 and 25–34 age demographics. Plan Your Brisbane contributes to the game design community by demonstrating that games can facilitate civic engagement across all age demographics, even on complex issues such as city planning. This paper introduces the design process and design rationales of the game and shares some of the results.
Non-player characters (NPCs) are important for immersion, but how their character design affects player experience has received little attention in previous games research. Related work suggests that NPCs support player identification, which in turn impacts player enjoyment and immersion, but this has not been explored empirically. In a between-subjects study, we explored effects of NPC design on player experience. In particular, we investigated how biological sex of NPCs and player gender affect identification and NPC interaction. Our results provide first empirical support for NPC design supporting player identification, reveal gender differences in the identification process, and uncover new research questions regarding mediators of the identification process.
The Storytelling Machine: A Playful Participatory Automated System Featuring Crowd-Sourced Story Content
This paper offers a design rationale for a playful, participatory installation called “The Storytelling Machine”. The Storytelling Machine curated and delivered a collective story generated from crowd-sourced content. The machine transformed an audience member’s drawing into an animated character that roamed a series of video worlds. These animated characters were projected onto surfaces in an exhibition space. The machine randomly displayed audiences’ drawings and story texts, generating real-time graphics. This project involved a series of intercontinental public exhibitions and an extensive series of public workshops spanning a three-year time frame. In this paper, we detail the design of the Storytelling Machine. Our contribution is a novel design that involves audiences in participatory activities in order to create a real-time collective digital story. This research may benefit game designers and researchers interested in engaging audiences through the design of participatory digital story systems.
Since the release of the HTC Vive and Oculus Rift, tracking has improved, leading to virtual reality (VR) games that allow players to stand and physically move around virtual environments. Consequently, these VR games can provide players with beneficial levels of exercise. In this paper we provide a design framework, called VRmove, for using activity sensor data within VR games. The framework was derived from an analysis of data from studies of 18 players across 4 diverse commercial VR games, identifying key elements for exertion and enjoyment: actual and perceived exertion, which capture how fun disguises effort, and the multi-factorial nature of the movement involved. We demonstrate the use of the VRmove framework to inform the design of a new game. This work’s core contribution is the VRmove framework, which informs the design of future VR games so that they are exerting while still being enjoyable.
This paper explores the potential of virtual reality (VR) to enable a flying experience through virtually extended human limbs, i.e., wings. We envision a user engaged in an adventurous VR flight experience while interacting with the surrounding environment, and propose a prototype VR game, JediFlight. The interaction mechanic of JediFlight features natural, simultaneous manipulation of two virtual limbs (wing and arm) by one physical limb (the arm), harnessing hand and elbow position trackers. We conducted a human-subject study to examine the overall experience of the wing-based flying mechanic in a realistic multitasking gameplay scenario. We present six design themes that can be used to design VR games and applications featuring extended avatar representation and manipulation.
SESSION: Student Game Competition
Emergent behaviour occurs when simple systems result in complex behaviour. In games, this allows for more methods of interaction, and lets the player explore in a way they want. It is challenging to design for emergence, as it typically arises unexpectedly. With “Plusminus”, we developed a method of augmenting physics to promote emergent gameplay using magnetism. Initial playtests demonstrated that players came up with surprising ways to play, such as using magnets to launch themselves over walls and outside the designed play area. We demonstrate that by giving the player control over magnetism, we can design for emergent gameplay.
Demographic projections show many western democracies to be aging nations. To keep thriving, we must ensure that our citizens age in a healthy manner, without isolation from society. Alzheimer’s disease is one of the most misunderstood conditions of our aging population, and a difficult condition for caregivers and family members to support. This project seeks to build a mixed reality environment that allows users to experience some of the symptoms of Alzheimer’s in the form of serious games. By gamifying the somewhat controversial notion of the “empathy machine”, the project makes a novel social impact on the general population.
Drawing, sketching, and other forms of visual communication are useful tools for communicating our thoughts, especially in multidisciplinary settings, yet instruction in drawing is often limited by the number of students one instructor can guide at once. [Univ.] Drawing presents an accessible, low-threshold opportunity for any student to learn to draw through a gameful touch-screen application. This paper focuses on the creation of the in-platform evaluation and assessment system, with the goals of being fair to the user by giving reliable feedback, being encouraging to aid motivation, and being meaningful to the user. An automatic technology for assessing users’ drawings was developed. With [Univ.] Drawing we are testing whether the concept of learning to draw through a gameful system is feasible on a larger scale.
One of the core features of the real-time strategy (RTS) genre is an omnipresent view, coupled with a fog of war that limits your vision to where your units are. But what would an RTS game without an omnipresent view look like? Radio General is a real-time strategy game where all you have in front of you is a map, a radio, some markers, and a few figurines. The only link to the game world is through the radio: your units make biased oral reports to you over the radio, and you use a microphone and voice commands to issue orders back. Radio General is an experiment in taking a well-established genre and removing a core gameplay element from it: being able to see your units. This decoupling of player and game world allows for several modes of play, including the use of an Android phone or tablet and physical components such as a paper map and figurines.
Designing Game Worlds: Coherence in the Design of Open World Games through Procedural Generation Techniques
“Open World Games” offer the potential for countless hours of exploration and fun through their huge game worlds. Procedural generation of game content by computer software is therefore becoming more and more important as a design technique. However, such techniques can cause problems with the coherence of the generated worlds that adversely affect the game experience. This work deals with the development and design of an open world game with a high degree of design coherence.
Toxicity in online environments is a complex and systemic issue, and esports communities seem to suffer particularly from toxic behaviors. In competitive esports games especially, negative behavior such as harassment can create barriers to players achieving high performance and can reduce players’ enjoyment, which may cause them to leave the game. The aim of this study is to review the design approaches six major esports games take to deal with toxic behaviors and to investigate how players perceive and deal with toxicity in those games. Our preliminary findings from an interview study with 17 participants (3 female) from a university esports club show that players define toxicity as behaviors that disrupt their morale and team dynamics, and that participants are inclined to normalize negative behaviors and rationalize them as part of the competitive game culture. If they choose to take action against toxic players, they are likely to ostracize them.
Subjective wellbeing and its associated outcomes are a growing area of concern worldwide. While various interventions such as exercise, mindfulness, and entertainment have been developed to address wellbeing, the use of serious games to address mental health concerns is still an emerging field of study and research. This paper presents Journey: A Game on Positive Affect, a game designed around key concepts from positive psychology to help players overcome symptoms of depression and anxiety. The game intends to measure players’ wellbeing index as well as to induce positive emotions. The conceptualization of the game idea, its design and implementation process, its test plans, and its future scope are presented in this paper.
Choreografish: Co-designing a Choreography-based Therapeutic Virtual Reality System with Youth Who Have Autism Spectrum Advantages
Choreografish is a virtual reality therapeutic arts engagement that leverages participatory research and design to collaborate with young adults with autism spectrum disorder (ASD). The research team was motivated by the social anxiety some people with ASD experience, and the attendant difficulties accessing art forms that may actually play well to Autism Spectrum Advantages (ASA). This project was co-designed with youth with ASA to explore the use of VR and choreographic thinking to empower users and designers to engage with the arts and self-manage anxiety. This paper describes the project and gives a brief design history of Choreografish.
Chewing is crucial for digestion and, as mindful eating suggests, it is important that we do it properly. Despite this, not many people chew their food properly. To help facilitate proper chewing, we developed “Feed the Food Monsters!”, a two-player Augmented Reality (AR) game that aims to engage co-diners in proper chewing using their bodies as the site of play. The game draws inspiration from Tetris and allows diners to view each other’s chewing behavior through a playful interface overlaid on their torso. In this game, players wear HMDs and guide each other to chew properly in order to keep the food monsters quiet. Besides supporting chewing in a social dining setting, this game also contributes to AR-based games in which chewing actions are mapped to game actions. Ultimately, with this work, we hope to engage people in the practice of proper chewing in a fun and pleasurable way.
Gameplay metrics can provide insights into player behaviour and performance, supporting game designers and players who wish to better understand their own game, yet such data is often not easily available to the wider public. We take our inspiration from the field of Learning Analytics, which attempts to create awareness and insight for learners by gathering activity data and feeding it back to students in an open way. This paper proposes an open standard for gathering and storing gameplay metrics data based on proven standards in Learning Analytics, facilitating easier development of analytical tools across different video games and genres to the benefit of researchers, developers, and players.
When players enter an interactive game setting, they undertake a process of discovering, and subsequently distinguishing, the rules and goals that prevail in the play context. However, this process of forming understanding is often altered by the subtle expectations surrounding the interaction setting. In this paper, we argue that user perception, leading to internal or external motivation towards gameplay, is shaped differently depending on the general context of how and where players first encounter the game. We use an existing three-phase gameplay model, based on invitation, exploration, and immersion, and draw observations from laboratory, museum, hospital, and classroom studies of various interactive playful experiences. This work aims to support interaction designers in fostering internal motivation and positive gameplay experiences in children, and in lessening the gap between laboratory and in situ studies.
Interpersonal distance is defined as the area we choose to keep between ourselves and others, shaped by observation and cultural factors. Although previous studies have suggested that the perception of interpersonal distance may be altered in children with Autism Spectrum Disorder, it remains unknown whether these differences extend to characters in a virtual environment. As many social-skills interventions for autism rely upon virtual characters to teach social behaviors, this research is key to understanding how to configure the interpersonal distance of virtual characters to a level that effectively fosters computerized social-skills training. We carried out controlled trials with children with autism to identify variations in their preferences from those of the typically developing population, with both a human partner and a virtual character. The contributions of this research are twofold: first, to support existing literature in identifying differences in personal space preferences between children with autism and typically developing children; and second, to understand whether these differences carry over into virtual environments.
Establishing Mixed Reality (MR) spaces for play in communities imposes a series of political, social, and ethical consequences. To clarify these concerns, we developed a mobile MR application and platform called Invisible Cities, which provides communities with a tool to curate MR within their geographic boundaries. The system was designed to encourage critical reflection on campus spaces through the creation of MR statues in a ubiquitous MR play space. We utilize and build upon the spatial framework established by Hitesh Sharma and his colleagues to explore how Invisible Cities’ implemented solutions address the issues of placemaking and MR play spaces.
Breastfeeding is widely promoted due to its health benefits for infants, but breastfeeding rates in many industrialised countries are low, and some mothers struggle to establish a positive feeding relationship with their child. We draw from breastfeeding research along with a qualitative enquiry into the lived experiences of breastfeeding mothers to outline an agenda for the design of playful technology to support healthy infant feeding. We describe how games can (and cannot) be leveraged to support the feeding journey and normalize breastfeeding, while respecting individual feeding choices, targeting mothers, partners, and wider society, laying out an agenda for future research.
In this paper, we present an evaluation of preschool children playing five commercially available mobile games, the results of which will inform the user-led redesign of ‘Space Vision’, a serious mobile game for early identification and home monitoring of vision problems in children of preschool age (3–5 years). Currently, Space Vision is a digitally gamified version of an established visual acuity testing method, built on a basic hidden-object game mechanic. Initial testing revealed that the engagement and usability of Space Vision must be improved to maintain attention for repeated use. Though theories of human gameplay motivation provide the abstract components necessary to transpose this test into an immersive gameplay experience, they do not operationalise specific game design decisions for our target age group. To address this, we conducted a small-scale evaluation in which 15 preschool children playtested five successful children’s games, as the first step in our user-centred design process towards a more engaging and motivating prototype of Space Vision.
The ESP Game (Google Image Labeler) demonstrated how the crowd could be used to perform a task that is easy for humans but challenging for computers: annotating images. The game facilitated basic image labeling, but the labels provided were often high-level and insufficient to distinguish similar images from one another. We introduce ClueMeIn, an entertaining web-based game that uses a similar format to obtain more detailed image labels than the ESP Game. The results can improve the accuracy of image searches and accessibility for the visually impaired.
We present preliminary findings from sharing and augmenting facial expression in cooperative social Virtual Reality (VR) games. We implemented a prototype system for capturing and sharing facial expression between VR players through their avatar. We describe our current prototype system and how it could be assimilated into a system for enhancing social VR experience. Two social VR games were created for a preliminary study. We discuss our findings from our pilots, potential games for this system, and future directions for this research.
While some promising early work exists, gamified surveys are still not widespread. In this work-in-progress paper, we present our methodology for gamifying surveys, which combines the rigor of survey design with the creativity and affordances of game design and makes use of an existing tool to ease development. We illustrate our methodology with four different gamified surveys focused on personality tests and frameworks—in general and in the context of games. We discuss the challenges and opportunities and describe a research agenda to advance this work.
Approximately 4–10% of the German population suffers from developmental dyslexia, negatively influencing affected children’s educational, personal, and social development. Digital interventions have shown great promise in supporting dyslexic children in addition to school support and learning therapy. In this article, we present the results of a serious game for German dyslexic children designed to improve reading and spelling performance, with special emphasis on syllable stress awareness. We evaluate game and user experience as well as the relationship between real-life literacy skills and in-game data of 63 German primary school children who played the game on a tablet at home for 9–10 weeks within the scope of a randomized controlled field trial in 2018. Results indicate an overall positive game and user experience, and a completion rate of 75% indicates the feasibility of unsupervised digital interventions. Moreover, real-life reading and spelling proficiencies correlated significantly with processing times and scores measured in-game, providing first evidence of the game’s validity.
With the rise of computers and mobile devices, digital card games have gained popularity and market success, offering features that the classic paper cards cannot. We identify a possibility to fill a gap between paper and digital card games by creating a tangible interface of smart objects with embedded electronics that will garner the benefits of both existing modes. In this paper, we describe the Paper-like Entertainment Platform Agents (PEPA) deck, a conceptual system of tangible interactive cards that use bend gesture interaction as a form of input.
During the last three decades, many successful interactive narrative projects have been realized, including many games. However, the respective design knowledge mostly rests with individual designers, who are self-trained and use private, inaccessible vocabulary. This state of affairs is an obstacle to further development and a considerable challenge to the professional training of future narrative designers. To address this issue, a multi-pronged approach is necessary, which includes the creation of a body of empirically verified design knowledge, a related ontology to enable a dialogue between practice, education and research, as well as specific curricula and pedagogical concepts. The authors are actively researching these different elements and have developed a method to empirically verify design convention candidates. At this time, we are introducing the ontology and on that basis an online platform for the collection of convention candidates that opens our effort to a wider community of practitioners and researchers.
You; the Observer, Partaker or Victim. Delineating Three Perspectives to Empathic Engagement in Persuasive Games using Immersive Technologies
A commonality in socially-aware persuasive games is the strategy of appealing to empathy as a means to have players feel and understand the struggles of another. This is particularly evident in the expanding use of immersive technologies, lauded for their ability to have players more literally ‘stand in another’s shoes’. But despite the growing interest, empathic engagement through immersive technologies is still ill-defined and its design complicated, raising questions such as “who is the player?” and “with whom does the player empathize?”. We contend that a better understanding of the different perspectives on empathic engagement – the observer, partaker, and victim – and of the gap between realities can be insightful, and we present a framework to support future research and design.
The illusion of being someone else and of perceiving a virtual body as our own is one of the strengths of virtual reality setups. Past research has explored this phenomenon with regard to human-like virtual representations. In contrast, our ongoing work focuses on playing VR games in the role of an animal. We present five ways to control three different animals in a VR environment. The controls range from a third-person companion mode to first-person full-body tracking. Our exploratory study indicates that virtual body ownership also applies to animals, which paves the way for a number of novel, animal-centered game mechanics. Based on interview outcomes, we also discuss possible directions for further research on non-humanoid VR experiences in digital games.
Tai Chi uses smooth movement and a focussed state of mind to support mental and physical health. Tai Chi teachers use metaphoric imagery such as “wave hands like clouds” to help students integrate smooth movements with a focussed mind. Current interactive technologies applied to Tai Chi take a very literal approach, focussing on body position and centre of gravity. In contrast, “Tai Chi In The Clouds” is a system which uses micro unmanned aerial vehicles (UAVs) as “clouds” to lead or follow the movements of the hands, giving live feedback on smoothness of movement via LEDs. We used UAVs to aid the experience of living out the metaphoric imagery used in Tai Chi. With our work we aim to contribute a new design language to support movement-based, mind-body practices.
Ingestible sensors are digital devices that can measure the user’s body data after being swallowed and hence have great potential in medical use. Unfortunately, few studies have considered the playful experiences afforded by ingestible sensors. We believe that the use of localised sensations, such as those created by heat, to represent the data measured by ingestible sensors offers opportunities to support experiencing the body as play. To explore this opportunity, we introduce a two-player system called HeatCraft that uses an ingestible sensor to measure the users’ body temperature and employs thermal stimuli to provide feedback. Similar to open-ended games, HeatCraft allows players to decide when and what to do in order to know more about their body, facilitating playful experiences of exploration and discovery. With this work, we aim to inspire game designers and HCI researchers to consider localised sensations when designing playful and engaging experiences around ingestible sensors.
When the goal is to promote positive behavioral change, persuasion techniques are most often used. To maximize the effectiveness of these techniques, designers need to tailor the employed persuasion strategies to each individual. Many studies have addressed the problem of modeling players’ profiles by designing taxonomies. However, none of them verified whether this approach works in practice. In this paper we investigate whether a well-known, questionnaire-based player type categorization is an effective means of representing how players actually behave in a gamified system deployed in the field.
Recent HCI work on digital games has highlighted the advantage for designers of taking on a 1st person perspective on the human body (referring to the phenomenological “lived” body) and a 3rd person perspective (the material “fleshy” body, similar to looking in the mirror). This is useful when designing bodily play; however, we note that there is not much game design discussion of the 2nd person social perspective, which highlights the unique interplay between human bodies. To guide designers interested in supporting players to experience their social bodies as play, we describe how game designers can engage with the 2nd person perspective through two design tactics based on two of our own play systems. With our work, we hope to aid designers in embracing this 2nd person perspective so that more people can benefit from engaging their bodies through games and play.
We note a trend toward utilizing interactive technology to extend human capacities through bodily, cyborg-like integrations such as artificial limbs and implantables. This trend is often captured by the term “transhumanism”, referring to the use of technology to extend human capacities. We find that many transhuman discussions appear to focus on instrumental benefits (i.e. exploiting opportunities to be more productive). We extend this by proposing engagement with transhumanism also from a perspective of “play”. We reflect on our own and others’ work to articulate three strategies for game designers on how they can engage with transhumanism when aiming to facilitate playful experiences. Ultimately, we aim to contribute to a more playful transhuman future.
There is an increasing trend in interaction design to engage with food. We note that most prior work targets instrumental benefits (for example, food-tracking apps that manage nutritional intake). In contrast, in this article, we highlight the potential of technology to support eating as a form of play. We reflect on our own work to articulate two design strategies for game designers on how they can facilitate playful eating experiences using novel technologies. Ultimately, with our work, we aim to facilitate a more playful engagement with the way we eat.
Enjoy Watching Japanese Chess Games like Football: An Evaluation Method of Game Positions for Beginners
Research on computer Shogi (Japanese chess) has been widely conducted. Most previous work has tried to create a strong program that can defeat professional players. Such programs use estimated information about piece arrangements to evaluate the current game position. This evaluation is less comprehensible to beginners because it does not appear to correspond to the current arrangement. We propose a novel method for beginners to evaluate the current position of a Japanese chess game. Our method uses only the current arrangement of pieces to evaluate the game position. We developed a prototype interface to visualize the evaluated game position and conducted a small experiment. We asked participants to watch Japanese chess games with the interface and to say aloud what they thought while watching. The participants asked more questions and offered more interpretations related to understanding the game positions. The results indicate that our system enhanced beginners’ enjoyment of watching Japanese chess games through these questions and understandings.
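To illustrate the idea of evaluating a position from nothing but the pieces currently on the board, the following sketch reduces the abstract’s approach to a naive material count. This is not the authors’ implementation; the piece names and point values are assumptions chosen purely for illustration.

```python
# Hypothetical, beginner-readable point values for Shogi pieces
# (illustrative assumptions, not the paper's actual weights).
PIECE_VALUES = {
    "pawn": 1, "lance": 3, "knight": 3, "silver": 5,
    "gold": 6, "bishop": 8, "rook": 10, "king": 0,
}

def evaluate_position(black_pieces, white_pieces):
    """Score the position from Black's point of view using only the
    pieces currently on the board: positive means Black holds more
    material right now."""
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    return black - white

# Example: Black has an extra silver on the board.
score = evaluate_position(
    ["king", "rook", "silver", "pawn", "pawn"],
    ["king", "rook", "pawn", "pawn"],
)
```

Because the score changes only when a visible piece is captured or dropped, a spectator can always trace it back to the arrangement in front of them, which is the comprehensibility property the abstract emphasizes.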
To ease the difficult task of studying kanji when learning the Japanese language, we propose an adventure-style serious game. Previous solutions have focused on small 2D games or the gamification of virtual kanji trainers. Our goal is to utilize the concept of flow by immersing the player in a rich storyline featuring Japanese mythology, turn-based encounters, and various mini-games that train all aspects of kanji: stroke order, meaning, pronunciations, and compound words. We further introduce Augmented Reality to language acquisition as an innovative way to combine virtual content with the real world. A first test revealed the great potential of such an approach and yielded feedback for further development.
This paper introduces The Tracer Method, which integrates two common approaches for understanding skilled performance: Cognitive Task Analysis (CTA) and Eye Tracking (ET). This combination has the potential to provide information for game designers and human-computer interaction researchers that will guide feedback to areas with the greatest payoff. Historically, ET has been used to gain behavioral insight into visual search patterns and attention, whereas CTA has been used to understand higher-level goals, strategies, and decisions. We integrate the two by using CTA to identify key events during the game and examining ET statistics conditioned on these events. In this demonstration, ET behavior was recorded while 17 experienced Overwatch players engaged in competitive play. Critical decisions were then identified and analyzed through post-game CTA interviews. Results provide examples of the new insights that can be captured from a combination of these methods and used for game design, play testing, or evaluation.
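The conditioning step described above can be sketched in a few lines: gaze samples are grouped into time windows around CTA-identified events, and a simple statistic (here, mean fixation duration) is computed per event. The data shapes, field meanings, and units are assumptions for illustration, not the paper’s actual pipeline.

```python
def windowed_stat(fixations, event_times, window=2.0):
    """Compute mean fixation duration within +/- `window` seconds of
    each CTA-identified event.

    fixations:   list of (timestamp_s, duration_ms) tuples
    event_times: list of event timestamps in seconds
    Returns one mean (or None if no fixations fall in the window)
    per event."""
    stats = []
    for t_event in event_times:
        durations = [d for t, d in fixations
                     if abs(t - t_event) <= window]
        stats.append(sum(durations) / len(durations) if durations else None)
    return stats

# Toy data: fixation log and two hypothetical CTA events
# (e.g. "target acquired", "ultimate used").
fixations = [(10.1, 180), (10.9, 220), (25.0, 400), (40.2, 150)]
events = [10.5, 25.5]
per_event = windowed_stat(fixations, events)
```

In practice a study would compute several such statistics (fixation count, dispersion, saccade amplitude) per event window and compare them across players or decision outcomes.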
This paper presents a Game System based on an ecological phenomenon called mixed-species bird flocks. It demonstrates that this Game System could be used as an effective tool for scientific outreach in ecological study of bird interactions.
The paper begins by explaining the concept of mixed-species bird flocks, the participating birds, and their attributes. It then uses the formal Mechanics, Dynamics, and Aesthetics (MDA) framework to describe how this concept is applied to the Game System. The paper further details how the game Mechanics are derived from real-world bird behaviour and how the Dynamics in the game follow real interactions that birds experience in nature. The Game System follows the Aesthetics of Challenge, Fellowship and Submission to engage the players and make them interested in and curious about the ecological phenomena in their surroundings.
The paper concludes with preliminary insights gathered from playtesting sessions and provides a glimpse into the future possibilities of creating further gameplay based on other facets of mixed-species bird flocks.
One of the challenges of generating video game levels procedurally is capturing what in the design of a specific level makes it fun to play. In this paper, we demonstrate our preliminary work on a system which learns from expertly designed game levels to produce new game levels automatically. We developed a platform for designers to create tile-based dungeon levels and a level-generating agent which consumes recordings of design sessions to learn and then create its own levels. We evaluate the output of our agent using metrics gathered from a static analysis and a discount usability study using a digital game prototype that renders the level designs. Our preliminary results suggest that this system is capable of generating content that emulates the style of the human designer and approaches the level of fun of human-designed levels.
In this paper, we discuss ongoing research on creating smart, auditory-based pervasive game environments using OpenStreetMap and social media data. Two examples are provided. The first shows how data from Twitter could be used to generate talking NPCs (Non-Player Characters) as a way to represent the social presence of people who frequent different areas. In the second prototype, we show how a context-aware storyteller could be developed to tell stories from books based on the characteristics of the physical-world locations where the user is located. Natural language processing and machine learning techniques, such as clustering and word-embedding-based modeling, were used to develop the prototypes. The approaches discussed in this study would be particularly useful in developing pervasive games with serious purposes, and we hope to apply these techniques to create pervasive games for mental healthcare and education.
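The clustering step mentioned above can be illustrated with a minimal k-means sketch. Here, toy 2-D vectors stand in for averaged word embeddings of location-tagged text; a real prototype would use learned embeddings and a library implementation, so the vectors, the initialization scheme, and the choice of k below are all illustrative assumptions.

```python
def dist2(a, b):
    # squared Euclidean distance between two embedding vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    # component-wise mean of a group of vectors
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

def kmeans(points, k, iters=20):
    """Tiny k-means: returns final centroids and the grouped points."""
    centroids = points[:k]  # naive init: first k points
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            groups[i].append(p)
        # recompute centroids as group means (keep old one if empty)
        centroids = [mean(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids, groups

# Toy "embeddings": two obvious location clusters, e.g. park-like
# versus market-like text about a place.
points = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0), (5.0, 5.1), (5.2, 4.9)]
centroids, groups = kmeans(points, k=2)
```

Each resulting cluster could then be mapped back to a characterization of a physical location, which is the kind of signal the context-aware storyteller and talking-NPC prototypes would draw on.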
The paper introduces the key principles of a generic gamification framework for eGuide apps offered to visitors of tourist attractions of various kinds. The framework is intended to make it easier to implement gamification in eGuides, to reuse and adapt gamification schemes across different eGuides, and to maintain and update gamified eGuides. The principles include: separation of the gamification layer; treating gamification rules as a kind of eGuide content separate from the rule engine; distinguishing shallow from deep gamification; standardized descriptions of eGuide events and gamification rules; separation of rule requirement and result definitions; differentiation of rule types; and easy editing of rules.
Moral dilemmas are a staple of narratives in games. Allowing the player to choose and to impact a game’s story is an important part of enjoying an interactive narrative. Yet little has been written on how we design and test games with moral dilemmas. From this premise, the Moral Gameplay Taxonomy has been developed. Derived from an examination of 23 games, the taxonomy consists of two dimensions, creating four quadrants, which represent the types of choices (systemic and scripted) provided to the player and the style of endings (branching and linear). As this is a preliminary study, further work is needed to develop the taxonomy. It will be a valuable tool for examining how the design of moral dilemmas can affect player-perceived agency and enjoyment in games.
Especially in knowledge societies, creativity is becoming an ever more important trait. In order to sustain progress and innovation, it is essential to hone this trait. Playing (video) games has been found to promote creative thinking. However, there is still a lack of understanding of which processes within games cause this effect. In order to effectively use games to train creativity or as a catalyst for creative potential, it is important to uncover the influence of individual game components. This research investigates the influence of game components on creativity tests. We conducted an online user study with four conditions (control, levels, scarcity, and theme), where creativity was assessed as a construct of cognitive abilities. Our results show that the inclusion of individual game components impacted participants’ creative performance regarding fluency, originality, and elaboration.
Taking the ‘A’ Out of ‘AR’: Play Based Low Fidelity Contextual Prototyping of Mobile Augmented Reality
Taking the ‘A’ out of ‘AR’ means implementing the augmented elements of an interface and contextual elements of reality in a more controlled context to allow for proof of concept evaluations. This paper proposes a prototyping technique that bridges the gap between traditional paper prototyping methods used for interface design and evaluation, and the challenges associated with the development of visual, context-aware augmented reality (AR) applications. An initial evaluation of this technique was conducted through the examination of a small-scale case study of user evaluation sessions of a mobile application.
Most mental health issues begin at a young age but are only addressed later. Addressing mental health issues early in life can prevent escalation, relieve the burden sooner, and increase individuals’ quality of life. Unfortunately, the mental health needs of young people go unmet because of a shortage of trained professionals and low service capacity. Computer games have strong potential to support existing interventions and can increase the helpfulness, availability, and affordability of mental health services. More research is needed to optimize the impact of therapeutic games and to target broad user groups. We believe that adaptability in therapeutic games is important to support a variety of therapy styles and to meet individual children’s needs related to age, mental disorders, and literacy skills. We elaborate on studies that involve therapists and children in the evaluation and design phases of a therapy-supporting game called Pesky gNATs.
This paper describes the design and evaluation process of a location-based serious game in a heritage awareness context. Conveying knowledge about tangible cultural heritage with the help of video games is a well-established concept. Though many applications in this domain have proven effective, they typically impose restrictions regarding time, place, and the use of specific hardware. In contrast to previous approaches, we developed Memorial Quest, a serious game with the objective of conveying knowledge about cultural heritage objects without the aforementioned constraints. We examined educational effects by conducting a user study (n = 40) in which we compared our game to a common learning method in cultural heritage. Statistical analysis of the results revealed that learning effects were significantly larger when playing the game than when receiving the same content in a traditional way. With the help of questionnaires and qualitative data, we identified possible flaws and elaborated potential improvements for future iterations.
Food sounds – the sounds that certain foods make during eating through chewing, biting or licking – are a key part of the eating experience. They play an important role in establishing our perception of the palatability of food. We believe interactive technology offers unique opportunities to create novel playful experiences with food by playing with these sounds. In exploring this opportunity, we outline a structure for exploring playful sounds and present one case study of a novel interactive food experience: “The singing carrot”, which generates unique digital sounds while the user eats a carrot. Ultimately, with our work, we aim to inspire and guide designers working with interactive, playful food sounds.
Physical activity is an important part of children’s and adolescents’ personal and physical development. Nevertheless, a worrying trend has been observed in recent years: with increasing age, children’s and adolescents’ motivation for physical activity decreases. But how can they be motivated to engage in physical activity? To this end, we are developing a playful app that not only counts the user’s steps but also focuses on group dynamics, with users motivating or challenging each other. To do this, we explore items that users can earn with steps to gain an advantage, and at the same time we explore how these items affect the motivation for physical activity. This paper presents the results of a workshop conducted with early adolescents to research their motivation, together with an approach based on a mobile smartphone. As a result, hypotheses about the established game mechanics are presented, to be tested with the prototype. These are to be evaluated and defined more precisely in the next step with the help of a survey.
SESSION: Workshop Summaries
Over recent years, mental health has become a major disease burden globally. Untreated mental illness has serious consequences for the individual, resulting in lower quality of life, and has severe negative effects on the global economy. Digital solutions for mental health offer relief to the overburdened health care system but would benefit from design approaches geared toward increasing participant adherence and engagement. Video games offer a rich design ecosphere, ranging from narrative elements usable in therapy, accessible social dynamics, and challenging cognitive tasks to novel assessment approaches and applicability as preventive measures, suggesting their potential to advance digital solutions for mental health. While there is clear potential in video games for mental health, the challenges and opportunities of transferring game design to mental health applications, designing for specific mental illnesses, and integrating games for mental health into the clinical context are rarely addressed.
What are the current and future challenges that incorporating eye tracking into game design and development creates? The Second EyePlay workshop brings together academic researchers and industry practitioners from the fields of eye tracking and games to explore these questions. In recent years, gaming has been at the forefront of the commercial popularization of eye tracking. In this workshop, we will share experiences in the development of gaze-enabled games, discuss best practices and tools, and explore future challenges for research. Topics of interest lie at the intersection of eye tracking and games, including, but not limited to, novel interaction techniques and game mechanics, development processes and tools, accessible games, evaluation, and future visions.
This one-day workshop brings together researchers and practitioners to share knowledge and practices that support teaching game design and game development. Open to participants with a diverse range of interests and expertise, the workshop will facilitate discussion across a range of discipline areas. The outcomes from the workshop will include an archive of participants’ initial position papers along with the materials created during the workshop. The result will be a roadmap for games education, focusing on how best to support active and engaged learning and teaching processes that create work-ready graduates, and identifying the challenges that need to be addressed in order to do so.