
Design Fictions for Behavior Change: Exploring the Long-Term Impacts of Technology through the Creation of Fictional Future Prototypes

Amon Rapp

University of Torino – Computer Science Department, C.so Svizzera, 185 Torino, 10149 Italy

amon.rapp@gmail.com

Abstract. Human-computer interaction researchers are increasingly designing behavior change technologies for a variety of purposes, from promoting healthier lifestyles to supporting sustainable habits. These technologies are commonly assessed in terms of their effectiveness in modifying human behavior. Nevertheless, the multifaceted social and psychological long-term consequences of these kinds of artifacts are often forgotten. To explore their design space, we involved 48 Psychology students, asking them to envision and design future behavior change systems. Following the recent interest in design fictions, which present “fantasy prototypes” in plausible near futures, we investigated how designs might help people think of the presuppositions and implications of technology. Then, we analyzed four design fictions to explore themes that are often forgotten in the current behavior change debate. Finally, we discussed the empirical and methodological outcomes of our study and presented a series of considerations and research questions that could stimulate further reflection on behavior change technologies and on the method we employed.

Keywords: Behavior change; design fictions; research through design; persuasive technologies; persuasive computing.

Word count: 24116

Contact author: Amon (first name) Rapp (last name)

University of Torino – Computer Science Department - C.so Svizzera, 185 Torino, 10149 Italy Email: amon.rapp@gmail.com; rapp@di.unito.it Mobile: +39 3462142386


1. INTRODUCTION

Human-Computer Interaction (HCI) researchers are increasingly designing behavior change technologies1 [Hekler, Klasnja, Froehlich, & Buman, 2013]. These systems aim to promote a range of behaviors, such as physical activity [Consolvo, Everitt, Smith, & Landay, 2006], healthy [Grimes, Bednar, Bolter, & Grinter, 2008] and sustainable habits [DiSalvo, Sengers, & Brynjarsdóttir, 2010], learning [Goh, Seet, & Chen, 2012], safety [Bergmans & Shahid, 2012] and self-management of chronic diseases [Gasca, Favela, & Tentori, 2008].

While the opportunities to modify human behavior through technology are currently explored in academia, the recent commercial availability of many wearable and ubiquitous technologies is opening new horizons for behavior change systems. A plethora of devices allow people to collect their own data potentially everywhere at any time, widening the possibilities of acting on their habits through targeted interventions and promising, for example, to improve their wellness [Rapp et al., 2015].

In this new landscape, it is fundamental to start reflecting on whether and how these technologies could affect individuals and the society in which they live. How can we stimulate reflections on the impact of behavior change technologies on people’s everyday life in the long term? How are the limits and potentialities of these instruments perceived? What are the issues that may arise when technology is used to modify human behavior? And what kinds of assumptions do they embed?

To answer the questions above, we involved 48 Psychology students and asked them to envision and design future behavior change systems. Following the new interest in design fictions, which present “fantasy prototypes” in plausible near futures [Blythe, 2014b], we explored how designs can help people think of the assumptions and implications of technology.

Our contribution to HCI and behavior change research is threefold. On the methodological level, we show how design fictions can be incorporated within a research process to explore a specific application field, involving people different from designers in reflecting critically on the systemic outcomes of technology. On the empirical level, we report four design fictions, together with a “double level” reading, which constitute a sort of “annotated portfolio” of research through design prototypes: we illustrate how design fictions embed implicit knowledge that might complement and contrast what people explicitly state about technology, bringing out philosophical and ethical assumptions, e.g. about the nature of behavior, humanity and change. On the design level, we present design considerations paired with research questions intended to orient the research on future behavior change systems and help designers think of the implications of their work.

1 Although it is common to talk of persuasive technologies within HCI, we prefer, following Hekler et al., to use the term behavior change technologies, since the former evokes the Fogg behavioral model [Fogg, 1998], hiding all the other approaches that have informed the design of systems aimed at changing behaviors during the last years.

The paper is structured as follows. In Section 2, we outline previous studies related to behavior change and design fictions. Then, in Section 3, we describe the method employed in our research, while, in Section 4, we present its findings, namely the produced design fictions and the themes that emerged during the participants’ debates. Section 5 reports a critical reading of the reported design fictions, whereas Section 6 discusses the emerging themes and the methodology used. Finally, Section 7 proposes a series of considerations for design paired with research questions to be explored. Section 8 concludes the paper.

2. BACKGROUND

2.1 HCI and behavior change

In the last few years, HCI researchers have designed a variety of systems to encourage a change in behavior. To claim the efficacy of their interventions they have often grounded their designs in theories and models of behavior: Thieme et al. [2012], for example, designed BinCam, a social persuasive system that aims to change recycling habits by increasing users’ perceived behavioral control (i.e. whether or not a person feels able to perform the behavior), according to the Theory of Planned Behavior; the Smart Garden Watering Advisor [Pathmanathan, Pearce, Kjeldskov, & Smith, 2011], which relies on the idea of persuasion outlined by Fogg [2003], aims to change gardeners’ watering habits by pushing tailored information that encourages pro-environmental behavior; Lee, Kiesler, and Forlizzi [2011] applied three behavioral economics techniques to promote healthy snacking, exploiting users’ unaware decision biases to influence their choices; Villamarín-Salomón & Brustoloni [2010], instead, used Operant Conditioning, one of the fundamental methods of learning in applied behavior analysis [Cooper, Heron, & Heward, 2007], to reward users' secure behavior in a security-reinforcing application; Rapp [2017a, 2017b] showed that gamification strategies framed in behavioral, cognitive and social practice theories may yield a “gameful” behavior change.

These technologies are usually evaluated, in the context of HCI research, in relation to their effectiveness in bringing about the intended change in behavior. Hamari, Koivisto, and Pakkanen [2014], for example, by reviewing a large corpus of “persuasive technologies”, categorized them according to their degree of success in producing psychological or behavioral outcomes, i.e. whether they were able to persuade people into various behaviors. However, as noted by Hekler et al. [2013], an effective change in people’s behavior is rarely robustly demonstrated in HCI research.

Klasnja, Consolvo, and Pratt [2011] rightly state that behavior change is a complex, long-term process with high relapse rates: to convincingly demonstrate that a technology contributed to such a process requires large-scale, long-term studies that typically cannot be done with early-stage prototypes developed in HCI. In fact, although randomized controlled trials remain the gold standard in evaluating the efficacy in behavioral studies, very few HCI researchers have sufficient resources to conduct them [Hekler et al., 2013].

Given this, different types of evaluation strategies have started to be explored within the HCI community. Hekler et al. [2013] believe that alternative experiment designs, such as single-case experiment designs and factorial designs, may prove to be useful in HCI research. Klasnja et al. [2011], instead, suggest that we move to qualitative and field studies in order to understand users’ perceptions and patterns of system use, and investigate how technology interacts with other factors that may affect change, such as people’s attitudes, relationships and their everyday context. Smith, Wadley, Webber, Ploderer, and Lederman [2012] identified three kinds of effects when designing behavior change technologies: proximal, which might occur within days or weeks, intermediate, which might occur during a trial lasting several months, and distal, which might occur over the longer term. They argued that while measurement of distal effects may not be possible, design thinking and evaluation should still aim to inform our understanding of the limits and potential for the realization of distal benefits to health and wellbeing.

However, all these perspectives seem to share a common assumption in HCI research, that is, that technology will make people’s lives “more enjoyable, easier, better informed, healthier and more sustainable” [Linehan et al., 2014, 47]. Nathan, Friedman, Klasnja, Kane, and Miller [2008] stress that interactive systems impact individuals, society and the natural environment now and in the future, and that these impacts are complex, ambivalent, emergent, and linked to other changes in society. This is even truer for behavior change technologies, which explicitly aim to modify individuals’ habits and cannot be without consequences in the long term and for the broader society. Such technologies may also produce negative side-effects and have socio-political implications that are rarely explored in HCI. Purpura, Schwanda, Williams, Stubler, and Sengers [2011] presented a hypothetical health-supporting technology - Fit4Life - in order to illustrate how current design recommendations can lead to technologies that spiral out of control, reinforcing a ‘McDonaldized’ [Ritzer, 2008] worldview, which values quantification and rationality at the cost of situational, hard-to-measure factors. Lupton [2014] stressed that behavior change technologies operate via intense surveillance principles, as people’s bodies and behaviors can now be tracked 24/7 and the data sent to health promoters, health insurers and even government security agencies. Moreover, such devices tend to conform to paternalistic top-down approaches, where encouragement becomes persuasion becomes coercion. DiSalvo et al. [2010] found that in many behavior change scenarios related to sustainable HCI the persuasion begins to border on coercion, raising serious ethical concerns about who gets to decide what change should happen and how, whose needs are met and whose values matter.

However, despite rare exceptions, Nathan et al. [2008] note how there is a scarcity of methods and empirical work to assess the potential long-term and systemic implications of technology. Sellen, Rogers, Harper, and Rodden [2009] also highlight how many of the issues that HCI has not addressed before, such as ethical questions and moral concerns related to the transformations in technology, are far from being solved. Indeed, it has been underlined that HCI researchers do not commonly produce critical evaluations of the potential consequences of their work [Linehan et al., 2014].

Taking these elements into account, we want to start exploring how behavior change technologies could affect, in the future, people’s everyday life and the society they inhabit, and what kinds of questions and issues would emerge from their widespread and long-term adoption. To do this, we propose to look at the practice of research through design and, in particular, at design fictions.

2.2 Design fictions

Recently, the practice of design has become increasingly integrated within the HCI community, taking the form of research through design [Gaver, 2012]. The term can be traced back to Frayling’s [1993/4] seminal paper “Research in Art and Design”, where he criticizes a deceptive dichotomy between research and art/design, since many of the motivations and practices of the two are alike. Research through design “is a thing-making practice whose objects can offer a critique of the present and reveal alternative futures” [Bardzell, Bardzell, & Hansen, 2015, 2095]. In this approach, “instead of having the intention to produce a commercial product, design researchers focus on how the application of design practice methods to new types of problems can produce knowledge” [Zimmermann & Forlizzi, 2008, 42]. From this point of view, HCI artifacts can be seen as a 'theory nexus' [Carroll & Kellogg, 1989], which brings to light those problems that designers believe are crucial, and their convictions about the correct way to solve them [Gaver, 2012].

Nevertheless, although design research is inherently oriented toward the future, the future described by HCI researchers is often utility-driven, not addressing the multifaceted social and psychological long-term consequences of the technology artifacts they design [Linehan et al., 2014]. Recently, different techniques have been suggested for triggering more critical reflections on the impact of technology on individuals and society. Value sensitive design, for example, is an approach to the design of technology that accounts for human values, by bringing into question those that, consciously or unconsciously, shape the design practice [Friedman, Kahn, & Borning, 2006]. Critical design [Dunne & Raby, 2001], on the other hand, instead of reinforcing needs and values as they are presently interpreted in the consumer society, tries to disrupt and transgress such constructions: by embodying cultural critique in designs, it wants to provoke and encourage people to reflect on designs’ participation in socio-cultural norms and structures [Bardzell, Bardzell, Forlizzi, Zimmerman, & Antanitis, 2012]. Finally, reflective design is a set of design principles and strategies that guide researchers in rethinking dominant metaphors and values, and in involving users in the same critical practice, by bringing unconscious aspects of experience to conscious awareness [Sengers, Boehner, David, & Kaye, 2005].

In this landscape, HCI researchers have recently started to draw on fiction as a means of facilitating reflection on the implications of design. This approach is not completely new within HCI. Fictional narrative has been used in the form of Personas [Cooper, 1999; Marcengo, Guercio & Rapp, 2009] and scenarios [Carroll, 1997], often to outline the results of user studies or to present not-yet-existing technologies [Carroll & Rosson, 2003]. Narratives have also been employed, e.g., by creating films in which characters discuss a technology that is not directly shown, in order to trigger reflections on existing technological problems [Briggs et al., 2012], or by grounding them in Activity Theory to analyze user/player interaction with multiple applications and platforms [Marsh & Nardi, 2015].

In time, more complex narratives have also allowed the social, psychological and ethical dimensions of design to be considered [Linehan et al., 2014]. Along this trend, design fictions emerged as a recent field of research through design in HCI, presenting “fantasy prototypes” in plausible near futures, based on the idea that concept designs can be usefully discussed without necessarily making them [Blythe, 2014b]. Originally defined by the sci-fi writer Bruce Sterling [2005], design fictions create a discursive space where different types of future emerge, exploring at the same time the present condition [Hales, 2013]. These “speculative designs” can be used “to prompt reflection on contemporary issues and the possible consequences of science and technology” [DiSalvo, 2012, 118]. Design fictions can be narratives, short stories, films, objects, and prototypes [Blythe, 2014b], also taking the form of utopias and dystopias [Knutz, Markussen, & Christensen, 2013; Satchell, 2008]. Scholars have tried to define requirements and standards for specifying design fictions, such as utilizing tropes, i.e. figurative language like metaphor or irony that makes ideas relatable [DiSalvo, 2012], or incorporating social factors [Gonzatto, van Amstel, Merkle, & Hartmann, 2013], as well as to outline taxonomies [Hales, 2013], method toolboxes [Grand & Wiedmer, 2010] and models [Lindley & Coulton, 2014]. Despite these attempts, design fiction practice remains flexible and open to interpretation, so much so that it has been proposed to define it as pre-paradigmatic [Lindley & Coulton, 2016], where different schools of thought and approaches can coexist.

Within HCI, design fictions have been used, for example, to create “imaginary abstracts”, which summarize findings about prototypes that do not exist [Blythe, 2014b], and even imaginary papers [Lindley & Coulton, 2015, 2016]; to envision a future of “blogjects” where objects may give life to a collective intelligence [Bleecker, 2006]; to facilitate the feedback loops between Steampunk practice, community and fiction, providing a model for how to physically realize an imagined world through design practice [Tanenbaum et al., 2012]; to envision a dystopian future controlled by machines in order to reflect on the long-term impact of HCI research [Kirman et al., 2014]; to outline what-if scenarios in a dystopian world to better define a methodology for designing through narratives [Markussen & Knutz, 2013]; to explore the implications of products intended to enhance the human/animal relationship [Lawson, Kirman, Linehan, Feltwell, & Hopkins, 2015]; and to imagine ambiguous gamified systems [Rapp et al., 2016]. However, until now they have not been extensively employed to explore the design space of a specific application field, such as that of behavior change technologies.

Differently from other methods of envisioning future technologies, design fictions are focused on discovering the implications and the assumptions of the current technology landscape, rather than on forecasting how technology could likely evolve in the future, which is a matter for futurologists or technology futures analysts [Technology Futures Analysis Methods Working Group, 2004]. The future, here, is mainly a tool to reflect on the present condition: “if informing industry or predicting the future are not the point then the focus must return to scholarship: thinking carefully about technology” [Blythe, 2017, 5409].

Moreover, design fictions incorporate many elements of narratives. Like sci-fi novels, and unlike common future scenarios, they explicitly acknowledge social conflict and struggle [Blythe, 2014a]. This allows for a more nuanced exploration of the contradictory consequences that a technology may entail for individuals and society. They also provide more creative tools than future real-life scenarios, which often fail to envision extra-ordinary aspects of life in the future and to evoke further imaginative thinking [Grammenos, 2012]. Unlike standard future scenarios, design fictions are usually more ambiguous, discussing fictional prototypes as diegetic, i.e. belonging to a richly imagined fictional context: by positioning an imagined technology within a coherent narrative world, they require us to think beyond the immediate implications of that technology, considering it within a broader social and cultural ecosystem [Tanenbaum, Pufal, & Tanenbaum, 2016]. They further allow designers to “play” with possible dystopias, providing a critical distance which may highlight concerns that are more difficult to identify in more realistic scenarios, while remaining engaged as researchers [Tanenbaum et al., 2016].

By using design fictions, we aimed to investigate the implicit assumptions, as well as the possible long-term and systemic consequences, of behavior change technologies, as they “open scenarios up for the inclusion of social and political conflict in design thinking” [Blythe, 2014, 706]. Design fictions particularly fit the domain of inquiry of such technologies: through their interventions, behavior change systems reconfigure the relationship between individuals, technology and society, raising questions about control, coercion, and resistance, which are rarely discussed in depth during a traditional user study. Examples of how these technologies may entail complex and ambiguous outcomes can be found in the evaluation of prototypes like BinCam [Thieme et al., 2012]: its users expressed feelings of being observed and controlled, as well as reported feeling ‘guilty’ or ‘ashamed’, but these findings have not been explored in their potential long-term and systemic impacts. Instead, when designing for behavior change, we need to ask ourselves not only whether it could be done, but also whether it should be done, and what society would look like if it were done. Using design fictions might then help in discovering the assumptions lying behind such designs, and what they may entail if taken to the extreme. Design fictions, therefore, could represent a tool for reasoning on how current behavior change designs embed values, opportunities and threats that may not be immediately visible through the lens of traditional evaluation techniques.

Until now, design fictions have not been employed to systematically explore a technological field. Moreover, they have been created mostly by designers, who might have a strong focus on technology rather than on the individual or society. For this reason, such work has sometimes been seen as elitist [Blythe, 2017]. Even when other kinds of “participants” have been recruited in the development of design fictions, they have been involved only after the creation of the design fictions, for example through speculative enactments, where people were invited to enact elements of possible futures, which were nevertheless created by design experts [Elsden et al., 2017]. A partial exception can be found in the work of Blythe, Andersen, Clarke, and Wright [2016] on designing future technologies for urban environments. One of the workshops they set up for creating “magic machines” for the city actually involved six members of organizations supporting older people, who did not belong to the design community and yet did create the fantasy prototypes. Such magic machines, however, appear to be more Dadaistic jokes, purposefully absurd, “essentially silly and nonsensical” [Blythe et al., 2016: 4974], than design fictions intended to narrate possible future worlds that already exist, and in which technology is a part that has contributed to shaping them [Blythe, 2017].

Building on these first attempts to extend the creation process of design fictions outside the strict circle of designers, in order to bring new and fresh perspectives into the debate and reflection on technology, we involved 48 Psychology students in the development of fictional prototypes and their implied future worlds, stressing the design fictions’ narrative component by encouraging them to write “short stories”. Instead of envisioning the probable evolution of behavior change technologies in the future, we wanted to invite participants to reflect on the implications of the very idea of changing behavior through technology, and on the systemic consequences that it could entail in the long term.

3. METHOD

To explore the design space of behavior change systems through design fictions we set up 2 academic courses at our University.

3.1 Sample

Each course involved 24 participants and lasted 40 hours distributed over 8 weeks (4 meetings of 2 hours and 8 meetings of 4 hours). All participants were students of the master’s degrees in Psychology (12 males) and were recruited through the institutional means of our University (the course was displayed in the database of the available courses, and students could freely enroll). They had no expertise in design methods, nor in brainstorming and envisioning techniques. Conversely, they had previous knowledge of clinical, behavioral, and social psychology from being enrolled in the master’s degrees in Psychology.

We selected Psychology students for the following reasons.

HCI researchers and psychology researchers are both interested in supporting behavior change processes in a variety of domains, but their work remains largely siloed within the two communities [Hekler et al., 2013]. In the design of behavior change technologies, HCI researchers are the ones who commonly drive the design process. Even if they often draw on psychological theories (and are supported by psychology researchers) to make design decisions, their expertise pertains more to design and system development than to human behavior: their focus, therefore, remains anchored to typical matters of the HCI discipline, like the effectiveness of technology, usability, and opportunities for integrating technology into people’s everyday activities.

Here, instead, we wanted to reverse this point of view, by allowing individuals with a psychological background to drive the design process. Such individuals know more about the functioning of human behavior than about design principles and can bring a perspective different from that commonly applied to behavior change system design. On the one hand, they are not tied to the theoretical background (and assumptions) that traditionally informs behavior change technologies. On the other hand, they are less technology-oriented than HCI researchers when looking at behavior modification. They could thus bring a lens more focused on the ethical implications and the psychological outcomes that such technologies may entail.

A second reason regards the fact that, until now, design fictions have been mostly employed by design experts to explore the consequences of technologies. Therefore, the implications, worries, and opportunities explored in this practice reflect those of the “experts”. We aimed, instead, at involving participants who could also see themselves as users of the technology they were creating, in order to explore the fears and wishes that such technology could engender in part of its potential audience. Involving behavioral scientists with previous design experience, instead, would have recast the design fiction practice in the domain of experts, even if approached from a psychological point of view. However, our participants are not a typical cohort of behavior change system users, and do not represent any general population. Rather, we suggest that these participants, in their unique ways, provide good cases for reflecting on the idea of modifying behavior through technology. As future psychologists, they considered themselves as potentially in need of solving a behavioral/psychological problem they had or might have in the future, and/or as potential promoters of such technologies among their future patients: this aspect explicitly emerged during the first lesson of the courses, when we explored the students’ motivations for participating in the design work.


Participation was voluntary and granted 4 formative credits. Each course was divided into 5 phases (see Table 1).

3.2 Procedure

In the first phase we presented a corpus of different behavior change systems to provide participants with a background that could inform the following design phases. The presentation was based on a review we made of three years (2012, 2013, 2014) of the UbiComp, ISWC and CHI conferences on the topic of behavior change. Although this phase was aimed at introducing the theme of behavior modification through technology, we did not stress technological aspects (i.e. we did not discuss the technological feasibility of the prototypes, nor did we go into their implementation details). Moreover, in the following design phases we advised participants that they could envision behavior change systems not necessarily based on current technological opportunities. The only requirement for their designs was the plausibility of the concepts developed (i.e. they could offer alternative futures, while remaining grounded in behavioral theories, empirical science and current social practices2).

2 For example, an instrument that could revert the flow of time would not have respected the plausibility requirement, given the physical laws of our world.

Design Phase | Objectives | Method | Outcome
First phase: Introduction | To introduce participants to behavior change systems and their design space | Review and presentation of behavior change system prototypes |
Second phase: Exploration of the behavior change technology design space | To develop a set of concepts representing behavior change systems that could likely be effective in the near future (15-20 years) | Design studio methodology | Each group created 3 concepts, for a total of 36 concepts
Third phase: Usage scenarios and concept development | To start making participants think about the practical consequences of their designs, as well as to introduce them to a narrative-based design practice | Personas and scenarios | Each group created 1 fictional prototype and 1 usage scenario, for a total of 12 fictional prototypes and 12 usage scenarios
Fourth phase: Design fictions set in a distant future (50-60 years) | To describe a fictional world in which the designed technology has become pervasive | Six hours to collectively create design fictions | Each group created 1 design fiction, for a total of 12 design fictions
Fifth phase: Discussion | To reflect on the implications of the created design fictions | 4 design fictions were extensively discussed in class | Reflections on behavior change technologies

Table 1. The research phases

In the second phase, participants were invited to explore the design space of behavior change technologies. The goal of this stage was to develop a set of concepts representing behavior change systems that could likely be effective in the near future (15-20 years). This phase was inspired by the design studio methodology, a design technique that relies on a process of illumination, sketching, presentation, critique, and iteration. We selected this method because we needed a technique for stimulating creative thinking and the envisioning of new design concepts [Ewans, 2014]. Participants were divided into groups (4 participants each), for a total of 12 groups, and each group had to collaboratively define three “design challenges”. A design challenge is formulated as a sentence beginning with “How might we…” and is intended as a question aimed at turning a problem area into an opportunity for design. Then, each participant had 8 minutes to individually sketch 8 ideas related to the first design challenge defined in her group. Afterwards, everyone had 3 minutes to present her sketches to the rest of the group, while the group had 2 minutes to constructively criticize them. This process was iterated for the other two design challenges. Then, the group had 30 minutes to define the three most promising concepts (one for each design challenge), by selecting and combining ideas from the pool of generated sketches. Finally, they presented their concepts to the other groups.

In the third phase, each group, building on the insights received during the presentation of their ideas, had to select only one concept among those previously defined and develop it into a fictional prototype. The fictional prototypes will be outlined in Section 4.1. Participants were explicitly invited to leverage their knowledge about the functioning of human behavior during the design process. This activity had to be conducted through the definition of a usage scenario, in which a Persona used the proposed system for her personal aims. The scenario could be set in the near future (15-20 years), simply explaining how the fictional prototype could be used in the user’s everyday life. This method was selected to start making participants think about the practical consequences of their designs, as well as to introduce them to a narrative-based design practice.

In the fourth phase, participants were invited to move their imagination far into the future (50-60 years) and create one design fiction depicting a future world in which the fictional prototype developed in the previous phase has evolved and become pervasive. Participants were thus asked to answer the following questions: what if the behavior change system you developed spread among people, so that everyone used it in their everyday practices? What kind of society would be shaped by your design? How would individuals be affected, and how would it change their behaviors, attitudes and values? Each group had six hours to create one design fiction. These design fictions were developed in the form of brief written narratives. Usage scenarios and design fictions will be reported in the Appendix.

In the last phase, each group presented its work to the rest of the participants. Then, each class had to vote for two design fictions to be extensively discussed in a collective debate, on the basis of their potential to generate reflections on behavior change technology. Four design fictions were selected. Participants were then invited to critically reflect on these works and their implications. Although the focus of each debate was the particular design fiction chosen, participants were left free to widen the discussion to the other ones. The class coordinator (the teacher) did not intervene in the debate by suggesting themes or interpretations: he only managed the turn-taking. Each selected design fiction was discussed by participants for about 80 minutes.


3.3 Data analysis

All the presentations and discussions were audio recorded and transcribed. The transcriptions were then analyzed through thematic analysis [Braun & Clarke, 2006]. Data were coded independently by two different researchers who generated initial codes: the data were broken down by taking apart sentences and labeling them with a name. The results were then compared for consistency of text segmentation and code application. All inconsistencies were discussed and resolved. Codes were grouped into categories independently by the two researchers. The themes identified were finally compared to resolve inconsistencies. Data coming from these group discussions will be presented in Section 4.2.
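
Purely as an illustration of this consistency check (the comparison was carried out by hand, and the paper reports no computational tooling), a minimal sketch of how two coders' labels for the same text segments might be compared could look as follows; the segment identifiers and code names are hypothetical.

# Illustrative sketch only: the comparison described above was done manually.
# Segment identifiers and code labels below are hypothetical.
coder_a = {"seg1": "power", "seg2": "agency", "seg3": "interiority"}
coder_b = {"seg1": "power", "seg2": "control", "seg3": "interiority"}

# Flag the segments where code application differs, to be discussed and resolved.
disagreements = [seg for seg in coder_a if coder_a[seg] != coder_b.get(seg)]
agreement = 1 - len(disagreements) / len(coder_a)
print(f"Agreement: {agreement:.0%}; to be discussed: {disagreements}")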

Besides the thematic analysis aimed at accounting for the data gathered during the class discussions, the usage scenarios and design fictions were considered as “texts” and then analyzed through a narrative analysis. Following Bardzell, Bardzell, and Hansen [2015], we read the design fictions informed by critical practices in art, philosophy, and design: from this perspective, they might be understood as objects that help us “work through and clarify a given puzzling problem space, not to resolve it into a dogmatic theory, but rather to clarify its complex particularities” [Bardzell et al., 2015, 2096]. In doing so, we were also inspired by the deconstructivist tradition. Deconstruction refers to Derrida’s philosophy, which stresses how “texts” are internally conflicted [Derrida, 2004], and to a reading strategy that pursues the contradictory and incoherent elements of texts to produce new meanings and interpretations [Blythe, 2014]: within HCI, this approach has already been used to interpret Weiser’s Sal scenario [Blythe, 2014]. Here, we considered it insightful for bringing out some contradictions between scenarios, design fictions, and data coming from the collective debates. The outcomes of this reading are outlined in Section 5.


4. RESULTS

We will now present two different kinds of results. In Section 4.1 we will describe the participants’ creations in the form of fictional prototypes. Then, in Section 4.2, we will report the findings coming from the group discussions, which exhibit participants’ reflections on behavior change technologies triggered by the design fictions they created.

4.1 Fictional prototypes

Participants developed 12 fictional prototypes in total, informed by different behavioral theories and enacted in 12 usage scenarios. They then moved their concepts far into the future to create 12 design fictions, in which the created prototypes were embedded in a fictional world. Table 2 presents an overview of the prototypes created during the courses.

We will now briefly report the four fictional prototypes selected by the participants to be discussed during the last phase of the courses. Having been chosen by the participants themselves, these prototypes reflect the themes they considered most important and worthy of debate. Each of them presents a particular modality of changing behavior: promotion, substitution, enhancement, and punishment.

Name | Behavior addressed | Main features | Behavioral principles
Physical Transformer | Physical activity | Robo-pet; sensorized shoes; personal display | Theory of Mind (Baron-Cohen, 1991); Self-efficacy (Bandura, 1977); Positive reinforcers (Cooper et al., 2007); Self-monitoring (Korotitisch & Nelson-Gray, 1999)
Kitchen 5.0 | Food habits | Sensorized environment | Priming (Bargh, Chen, & Burrows, 1996); Default choices (Kahneman & Tversky, 1984)
Socializer | Social behaviors | Wristband | Positive reinforcers; Consciousness raising (Prochaska & Velicer, 1997)
Recycling House | Sustainable behaviors | Sensors; recycling system | //
Active Plant | Sustainable behaviors | CO2 recycler | //
Food Controller | Food habits | Wearable eyeglasses; automated fridge; digital alter ego | Punishments; Possible selves (Markus & Nurius, 1986); Peer pressure (Kandel & Lazear, 1992); Loss aversion (Kahneman & Tversky, 1984)
Learning Assistant | Learning | “Intelligent” notebook | Peer pressure; Blame (Cooper et al., 2007)
CrimCheck | Criminal behaviors | Sensorized collar | Self-monitoring; Physical punishments (Cooper et al., 2007); Sense of guilt (Cooper et al., 2007); Possible selves
Bright Space | Learning | Magnetic levitation system; cameras/projector; color-changing walls | Theory of Mind (Baron-Cohen, 1991); Self-efficacy
Universal Translator | Learning | Wearable eyeglasses | Self-efficacy; Positive reinforcers
Super White Cane | Impaired physical abilities | Context-aware cane | Self-efficacy; Perceived behavioral control (Ajzen, 1991)

Table 2. The fictional prototypes designed during the courses

In the Appendix we will extensively report the usage scenarios and the design fictions “as is”, i.e. as they were created by the participants in the six hours they had available. We chose to report them because they embody more than the knowledge that influenced their creation, or than the insights that both our participants and we were able to extract from them. Following Bardzell et al. [2015], we believe that this kind of artifact will continue to create new meanings as long as it is interpreted by the research community.

Promoting behavior: The Physical Transformer

The Physical Transformer shows how technology can modify individuals’ behavior by eliciting attitudes (beliefs, intentions, etc.) conducive to change, or by directly intervening on behaviors themselves (e.g. through rewards). This use of technology is seen in the most favorable light: here behavior change systems might act as trainers, therapists, or companions, helping users develop habits that are useful and good for themselves and the wider society. Other fictional prototypes relying on the same principles are the Kitchen 5.0, a sensorized kitchen aimed at promoting healthy eating, and the Socializer, a wristband aimed at expanding people’s social network. In all these prototypes, participants seemed to express a complicity between technology and humankind, whereby technology intervenes to sustain the individuals’ efforts to change, by employing mechanisms that strengthen their positive feelings, self-efficacy, enjoyment, and sociality. However, the design fiction originating from the Physical Transformer depicts a world in which the behavior change goal of “being healthier” has somehow gone beyond the control of individuals. The Physical Transformer has produced its desired effects, but this effectiveness has produced a normalized society. Moreover, the robot appointed to support the users’ training has turned into an eerie entity.

Fictional prototype. This prototype is aimed at stimulating individuals’ physical activity. It is composed of a set of technological devices: a pair of shoes, a robotized pet, and a personal display. The shoes constantly track the user’s steps and physiological parameters, inferring her physical activity and calculating the calories she has burnt: they exploit the reactive effects of self-monitoring, i.e., the fact that self-recording a behavior causes the behavior to change (Korotitisch & Nelson-Gray, 1999). The pet acts as a training companion, suggesting personalized exercises displayed on its back, and is nurtured by the “virtuous” behaviors of the user: the more she engages in physical activity, the better the pet will grow and evolve. The pet fosters empathy and processes of identification, thus leveraging our capability for attributing mental states to others, according to the Theory of Mind (Baron-Cohen, 1991). The display allows the user to be constantly aware of the activity she has performed during the day, continuously prompting her past achievements and thus strengthening her self-efficacy (Bandura, 1977). Finally, the system rewards the user for her performance using positive reinforcers (Cooper et al., 2007) in the form of monetary prizes.
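
The Physical Transformer is a fictional prototype and was never implemented; purely as an illustrative sketch of how the behavioral principles it combines (self-monitoring feedback, a pet nurtured by activity, and monetary positive reinforcers) might fit together, one could imagine logic like the following, where all names, thresholds and reward amounts are hypothetical.

# Illustrative sketch only: the Physical Transformer is a fictional prototype.
# Names, thresholds and reward amounts are hypothetical.
from dataclasses import dataclass

@dataclass
class RoboPet:
    growth: float = 0.0  # grows with the user's "virtuous" behavior

    def nurture(self, active_minutes: int) -> None:
        # The pet evolves in proportion to the physical activity it is "fed".
        self.growth += active_minutes / 60.0

def daily_update(steps: int, active_minutes: int, pet: RoboPet,
                 step_goal: int = 8000, reward_eur: float = 1.0) -> dict:
    """One self-monitoring cycle: feed back progress (the reactive effect of
    self-recording), nurture the pet, and deliver a positive reinforcer
    when the daily goal is met."""
    pet.nurture(active_minutes)
    goal_met = steps >= step_goal
    return {
        "display_message": f"Today: {steps} steps ({steps / step_goal:.0%} of goal)",
        "pet_growth": round(pet.growth, 2),
        "reward": reward_eur if goal_met else 0.0,  # monetary prize as reinforcer
    }

pet = RoboPet()
print(daily_update(steps=9500, active_minutes=45, pet=pet))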

Substituting behavior: The Recycling House

The Recycling House is designed to act in the user’s place. The burden of change is transferred to technology: the behavior change technology here operates as a delegate, while humans can remain unaware of what is happening, continuing to behave as before. The Active Plant, which automatically transforms air pollution into oxygen and water, is based on the same mechanism. In such designs, participants expressed how technology can represent a sort of deus ex machina able to correct or perform those behaviors that humans are not capable of fixing or enacting on their own. However, this risks subtracting control from human hands. In the design fiction of the Recycling House, the spread of automation has, on the one hand, allowed people to give up their efforts to change toward more sustainable behaviors, while, on the other hand, it has made them lose the ability to distinguish between natural and artificial entities. Everything appears reparable, substitutable, and without any intrinsic value, while functionalism and instrumentalism have become the dominant thought.

Fictional prototype. The house of the future will be fully automated to reduce water and energy consumption and increase waste recycling. The Recycling House is completely sensorized, with water and energy supply controlled by a centralized system. The house autonomously turns the lights on and off as the user passes through its different environments and regulates the flow of water depending on the user’s needs, so as to minimize waste. The house is also equipped with a recycling system that purifies used water, making it available for other uses; it automatically sorts the waste thrown in the house’s bins, transforming it into compost that can further provide heat energy to the house. This prototype aims to facilitate individuals’ recycling habits by relieving them of the burden of complying with behaviors that require constant attention and regularity.
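
The Recycling House, too, is a fictional prototype; a minimal sketch of the kind of rule-based delegation it describes (lights following presence, water flow capped to the user's need, waste sorted automatically) might look as follows, with all sensor values, limits and bin categories assumed purely for illustration.

# Illustrative sketch only: the Recycling House is a fictional prototype.
# Sensor readings, limits and material categories below are hypothetical.
def control_room(presence: bool, water_demand_lpm: float,
                 max_flow_lpm: float = 6.0) -> dict:
    """Delegate 'virtuous' behavior to the house: lights follow presence,
    water flow is capped to the user's actual need."""
    return {
        "lights_on": presence,
        "water_flow_lpm": min(water_demand_lpm, max_flow_lpm),
    }

def route_waste(item_material: str) -> str:
    """Automatically differentiate waste; organic waste becomes compost
    that can feed the house's heat-energy system."""
    bins = {"organic": "compost", "plastic": "plastic",
            "glass": "glass", "paper": "paper"}
    return bins.get(item_material, "residual")

print(control_room(presence=True, water_demand_lpm=9.0))
print(route_waste("organic"))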

Enhancing behavior: The Bright Space

The Bright Space represents a tool for improving human skills. Here, technology might improve certain behaviors or support the acquisition of new skills. Technology might also be seen as a means of compensating for behaviors no longer available due to disabilities, pathologies, or accidents, as happens in two similar prototypes, the White Cane, which enhances the physical abilities of persons with a visual impairment, and the Memory Enhancer, aimed at helping people with Alzheimer’s disease remember. Through these designs, participants wished for systems that empower both the human body and mind, expanding the range of possible behaviors, or increasing their frequency and effects. However, acting on the human mind to improve behavior seems to raise more complex ethical issues than intervening on the human body. The design fiction of the Bright Space depicts an individual empowered by technology, who has acquired outstanding intellectual abilities. The device has allowed her to perfectly control herself, overcoming all the unconscious and environmental factors that commonly divert behavior from rational human decisions. Changing behavior through learning has also been taken to its extreme consequences, enabling the person to consciously acquire new behaviors “on command”. Nevertheless, this has produced ambiguous effects on the internal experience of the protagonist, making her more similar to a machine than to a human being.

Fictional prototype. The Bright Space is a physical environment that stimulates children’s creativity through the reconfiguration of its spaces and of the objects it contains. The Bright Space has color-changing walls, widespread speakers, cameras, projectors and a magnetic levitation system that allows the child to assemble and disassemble physical objects in 3D space. The room changes its state according to the child’s mood, levels of arousal and attention, dynamically providing exercises personalized on the basis of her internal states. It is capable of “reading” the child’s mental states and feeding them back, acting as a “mirror” that can help her develop theories of mind (Baron-Cohen, 1991). The Bright Space also functions as a regulator of the child’s negative emotional states and as a remedy for her attention disorders. For example, if the child is sad and apathetic, the room will turn its walls red, trying to balance her negative emotional states with opposite environmental cues. Along with this, it will provide audiovisual stimuli to raise her attention and foster her creative processes. Finally, it will directly elicit the child’s creativity, by proactively changing the position of the objects in the room and by instructing the child to perform tasks using such objects: all these exercises aim at increasing the child’s self-efficacy (Bandura, 1977).
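
Again, the Bright Space is fictional; the following minimal sketch only illustrates the mapping it describes from the child's inferred internal states to compensating environmental cues and creativity exercises. The state labels, attention threshold and stimuli names are hypothetical.

# Illustrative sketch only: the Bright Space is a fictional prototype.
# State labels, colors, thresholds and stimuli names are hypothetical.
def reconfigure_room(mood: str, attention: float) -> dict:
    """Map the child's inferred internal state to environmental cues that
    try to balance it (e.g. warm colors against apathy) and to exercises
    meant to raise attention and self-efficacy."""
    wall_color = "red" if mood in ("sad", "apathetic") else "neutral"
    stimuli = []
    if attention < 0.5:                     # low attention: add audiovisual cues
        stimuli.append("audiovisual_prompt")
    stimuli.append("object_assembly_task")  # creativity exercise via levitation system
    return {"wall_color": wall_color, "stimuli": stimuli}

print(reconfigure_room(mood="apathetic", attention=0.3))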

Punishing behavior: The CrimCheck

The CrimCheck employs penalties to modify individuals’ behavior. While in other fictional prototypes punishments are delivered after the target behavior is performed, in order to discourage its future emission, as happens in the Food Controller and in the Learning Assistant, in the CrimCheck they are designed as preventive tools, capable of changing the user’s intention to behave before the behavior is emitted. Participants showed awareness of the problematic nature of such prototypes for changing human behavior. However, they were inclined to support their utility for certain types of negative behaviors, especially when other types of interventions have shown their inefficacy, or when they can produce undeniable benefits for the individual or society. The design fiction of the CrimCheck shows a technology that is completely effective in preventing an undesired behavior. Instead of envisioning a technological control that acts from the outside, the eye of the controller is here internalized in the protagonist’s mind. Being tracked and scrutinized at every instant and everywhere, she becomes her own watcher, preventing herself from enacting behaviors that could activate the device. As in Foucault’s Panopticon, the guardian is incorporated into the prisoner, and the watcher may no longer be needed.

Fictional prototype. This prototype aims to prevent offending behaviors of people with a criminal record, through advanced forms of self-monitoring (Korotitisch & Nelson-Gray, 1999). It is a sensorized collar able to detect the emergent thoughts in the individual’s mind as well as the physiological changes in her body: by using these data, it infers her intention to commit a crime. The CrimCheck first reads what is happening in the user’s mind, and then matches these data with the level of arousal of her body, in order to recognize her predisposition to act. When it detects a criminal attitude, the collar induces mental images that make the user experience, mentally and physically, the negative consequences of her future actions, in the form of her most feared possible self. The CrimCheck works in accordance with the theory of possible selves, which highlights that when future representations of ourselves concretize what we are afraid of becoming, they motivate us to avoid the realization of such negative representations (Markus & Nurius, 1986). The system aims to dissuade the individual from committing a crime, before it happens, by eliciting negative emotional experiences of guilt and regret (Cooper et al., 2007).
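
The CrimCheck is a deliberately dystopian fictional prototype; the sketch below is purely illustrative of the decision logic described above (matching inferred thoughts with bodily arousal and, past a threshold, inducing the feared possible self), with hypothetical scores and threshold.

# Illustrative sketch only: the CrimCheck is a dystopian fictional prototype;
# the detection logic, scores and threshold are hypothetical.
def crimcheck_step(criminal_thought_score: float, arousal: float,
                   threshold: float = 0.7) -> str:
    """Match what the collar 'reads' in the mind with bodily arousal and,
    when both indicate a predisposition to act, induce the user's most
    feared possible self to elicit anticipated guilt and regret."""
    predisposition = criminal_thought_score * arousal
    if predisposition >= threshold:
        return "induce_feared_possible_self"
    return "no_intervention"

print(crimcheck_step(criminal_thought_score=0.9, arousal=0.85))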


4.2 Group discussions

We will now describe the main themes that emerged during the group discussions that followed the design work.

Power

The desired endpoint of any behavior change system clearly seems to be the success of the behavioral intervention. However, through the discussions, this goal proved not to be so obvious. Creating a technology that is completely effective in modifying an individual’s behavior paradoxically raises more worries than enthusiasm. Participants highlighted how this would take the wheel of change out of their hands: “I’m not sure that I’d like a device that promises to change me with a 100% success rate”, P12 says. This attitude of mistrust is symptomatic. The technology’s agency evokes in participants the image of a force you cannot argue with. This immediately raises the worry of having no chance to decide, or to drive the change: “no matter what I may feel, my behavior will be changed anyway”, P11 stresses.

The main issue here is that such technologies are perceived as aiming to bring individuals back to a unique “correct” model of behavior: “they aim at making us behave in the same manner, everyone must exercise, everyone must respect the rules - P29 says - I don’t know if a society where everyone is healthy and everything is perfect is the best of possible worlds. Differences and conflicts, even death, are important parts of what it means to be human”. Such concern is even more emphasized when behavior change technologies are explicitly designed as coercive means to correct negative behaviors. A minority of participants thought that in some cases, such as that depicted in the CrimCheck design fiction, the “need for preserving the common good or the individual health could justify technologies that change behavior by canceling the possibility of choice”, as P13 notes. Nevertheless, many of them had a different reaction. A pervasive and preventive punishment that leverages the individual’s fears, concretized in feared possible selves (Markus & Nurius, 1986), and elicits a continuous sense of guilt (Cooper et al., 2007) would internalize the control, making it unavoidable. Reflecting on the fact that a technology, once adopted in a specific context, could also spread across different fields of application, these participants recognized how such internal control would likely yield a conformist society. So if, on the one hand, some participants praised the idea of a humanity finally freed by the Physical Transformer from the negative consequences of inactivity and laziness, on the other hand, other participants like P26 asked: “at what price? At the price of transforming us into copies”.

These worries are actually worsened by the perceived “non-neutrality” of behavior change technologies: most of the participants argued that they could become effective means in the hands of insurance companies, employers, and public institutions to control the individual, as in the design fiction of the Physical Transformer: “I could not be cured only because I didn’t want to adhere to a lifestyle that may prevent some cardiovascular diseases… But I want to decide for myself how to live and whether to change”, P28 says, adding “Telling me how to behave in my private life is not a matter for the government, and I’m scared that one day it could have the means for doing it”. P39 adds: “who decides what is right and what is wrong? I am worried that a technology could be so effective that it could be used by someone else to constrain us to behave as she wants”. These concerns bring to light how behavior change technologies open new spaces of conflict and negotiation between the individual and “third parties” over control, power, agency and ownership of behavior, where the real issue at stake is the possibility of preserving the person’s individuality and free will against the pressures toward normalization coming from private and public actors.

To overcome these potential drawbacks of behavior change technologies, participants proposed creating systems capable of adapting themselves to the user’s peculiarities, valuing her individual differences instead of trying to erase them in favor of a unique “right” behavior: P8, for example, highlighted that “such devices should promote different behaviors depending on the person that is using them”.


Agency

The Recycling House exemplifies an idea of technology aimed at compensating for the shortcomings of human behavior. “Is it right to let people benefit from the advantages of a ‘virtuous’ conduct without requiring them to modify their own habits, and to assume responsibility for their actions? Is it right that they have to do all the work?” P31 asks. Participants expressed a dual reaction to this kind of design.

Several of them endorsed the idea that “Technology - P34 says - is born for replacing humans in those activities that are difficult or impossible for them to accomplish. If it can relieve us of the burden of behaving in the ways required for the good of the world, I see no evil in it”. They emphasized that the ends are somehow more important than the means and that the creation of a sustainable world might justify the employment of automation: “If we can’t accomplish our duty, then machines could do it in our stead”, P47 says. As in the discussion about the CrimCheck design fiction, a good end appears to partially justify some of the potential negative side-effects of the use of technology on behavior. Conversely, many participants thought that such a use would imply several shortcomings. This position is best exemplified by P30, who stresses that “It is our responsibility to change to take care of the world, the others, or ourselves. We can’t dump our duties on machines”. P44, instead, notes how the Recycling House design fiction jeopardizes every form of education: “how could children grow up when they will start to believe that it won’t matter how they will behave because technology could always repair what they have broken?”.

Responsibility, therefore, implies a decision about who has the right, the duty, and the willingness to drive the change, and thus about whether the shift of agency from humans to technology can be acceptable when it becomes pervasive. For participants, a solution to this moral dilemma might lie in a strict collaboration between the machine and the human being, whereby the agency and the responsibility of change are balanced between the two parties. For participants, helping someone modify her own behavior could be actualized through the establishment of “an interpersonal relationship, like the one that exists between the patient and the therapist, or the physician, who really cares about the man he wants to change”, as P30 reported. Only within this kind of rapport, made of empathy, mutual understanding, and a disposition to listen, could a change that comes from technology be somehow reassuring. Participants thus showed appreciation for the pet in the Physical Transformer usage scenario, and argued for “intelligent systems that could pay attention to their users’ needs”, as P41 highlighted, valuing systems that have a theory of mind (Baron-Cohen, 1991), i.e., that are capable of understanding the internal states of the user. Intelligence and theory of mind capabilities, then, appeared to be a major requirement for transforming behavior change technologies into “conversational and collaborative agents” showing human-like qualities and thus more acceptable to humans.

Interiority

Given the reflections outlined above, it is not surprising that participants appreciated the designs leveraging the individual’s interiority more than those relying on environmental drives. Participants discussed the features of the different fictional prototypes, differentiating them mainly on the basis of their “locus of control”. The preferred designs are those that push the individual to work on herself and increase her self-efficacy [Bandura, 1977], while those based on external stimuli should be employed only in combination with the former. Here, we can see the important role of emotions, both positive and negative, in triggering this kind of process. P35, for example, states that “Changing food habits cannot be done without considering what the food meant and means for me… maybe the starting point for an effective change could be associating positive emotions with food and through them changing the meanings that I usually pair with it”. Participants agreed that positive emotional states are often the engine of change. Moreover, they pointed to the need to empower the individual’s “internal resources”. Change is better sustained by interventions that help the individual rethink her whole experience, for example by enhancing her self-confidence and self-efficacy and by changing the way she looks at her life.


If, on the one hand, leveraging the individual’s interiority to enact change seems a good solution in the participants’ eyes, on the other hand, such interiority appears to be a sacred territory, which has to remain untouched by the intervention of technology. When technology appears to act directly on the mind to change behavior, it therefore provokes strong reactions from participants. It is not a coincidence that the Bright Space design fiction was questioned even by those who designed it. For a minority of participants, like P1, allowing a sort of “aesthetic surgery of the mind doesn’t differ so much from the aesthetic surgery of the body. They would both improve some parts of ourselves”; for the majority of them, however, acting on the mind entails a completely different matter, as the mind is perceived as “what most characterizes our humanity… I don’t think that we are allowed to intervene to shape it, even if this means to expand its capabilities making us able to learn, remember, perceive more things”, as P13 said. For these participants, the repulsion toward this kind of technology is more visceral than rational, as P17 notes: “I could not say exactly why this is not right, I would like to have a sort of super memory, but it also feels creepy to me”.

This is not a trivial point, as the same attitude of protecting the mind against external influences can be found with regard to all those programs that use unconscious levers for shaping behavior. For example, the possibility of employing biological means that act directly upon the human brain to modify certain habits, as envisioned in the future design fiction of the Physical Transformer, is strongly rejected by participants. Persuasive mechanisms that operate on our subconscious are questioned as well, especially when enacted by an artificial entity that could act without the constant control of a human agent. Designs that employed theoretical frameworks aimed at subtly influencing the user, such as the Kitchen 5.0 or the Food Controller, which exploit priming mechanisms, loss aversion, and defaults [Bargh, Chen, & Burrows, 1996; Kahneman & Tversky, 1984], were strongly criticized. Participants were concerned that a “hidden technical persuader” could drive our mind in directions we are not aware of. Although they acknowledged that in everyday life many actions and thoughts are driven by environmental factors, self-deceptive mechanisms, and hidden impulses, which lie beneath the surface of the conscious mind, they nevertheless argued in favor of technology interventions that make the user aware of all the behavioral levers they are going to exploit.

8. COMPARATIVE READING

In this section we offer a reading of the design fictions presented in 4.1, comparing them with the results of the group discussions presented in 4.2. Following Bardzell et al. [2015], we argue that a critical reading of research through design outcomes can develop further knowledge. In the arts and humanities critical analyses of artifacts are common, but there is little culture of such analysis in HCI. Gaver [2014] highlights the importance of providing annotated portfolios of designs, much like artists exhibit their work in galleries. Bardzell et al. [2015] stress that design exemplars are the primary body of knowledge and that we need a critical reading of such designs much more than first-person annotations. While the participants’ discussions make explicit what they think about the designs they produced, analyzing the design fictions themselves may unveil implicit assumptions, offering a more nuanced view of the findings of this study. The following reading was developed through discussion among researchers with different backgrounds: one with human-computer interaction expertise (focused on computer science but with a multidisciplinary background), one with a psychological background, and one with a social science background.

Technology domination

By and large, a diffidence toward entrusting technology with behavior change objectives is visible in most of the created design fictions. Despite the different explicit positions that emerged during the debates, where participants at least partly distinguished the good and bad aspects of these technologies depending on the use made of them, the fictions unveil a shared implicit skepticism toward their increasing diffusion. What these designs encompass is an underlying assumption that strictly connects technology with domination, in the form of control and constriction. In the CrimCheck design fiction, for instance, Jade appears completely powerless and subjugated by the device, so that in the end she cannot avoid recognizing its “goodness”: “She is not capable of deciding for her life and maybe the right thing to do is to trust this device”.

This may certainly appear a naive perspective, as it seems to completely ignore people’s tendencies to cheat the system and subvert technological interventions. Berridge [2015] highlighted how adopters of passive monitoring technologies sometimes manipulate them to their own ends: for example, they may transform telecare calls into opportunities to chat with the operator. By and large, such practices of appropriation are frequent when interacting with technologies: online game players often violate social norms as well as the game’s rules [Verhagen & Johansson, 2009], while people deliberately choose non-use of technology in a variety of forms, from lagging adoption to active resistance [Satchell & Dourish, 2009]. The omission of such possible acts of “rebellion” is relevant, as it points to an awe of behavior change technologies, as if participants felt themselves in an inferior position, and thus manipulable without any possibility of reaction. From this perspective, design fictions can be seen as a technique for unveiling unexpressed concerns, in which what is left out may represent a relevant cue as well.

On closer inspection, however, such instances of domination do not come only from technology, but also from “transcendent” third parties that exert their power through technology. In the collective debates, participants explicitly worried about the possibility that private subjects or institutions could use behavior change technologies for their own ends, a risk that Lupton [2014a] has noted as already present in our society. Design fictions, nonetheless, emphasize the responsibility of governments, as well as the political relevance of behavior change technologies, and bring to light the role of other, more subtle and thus more dangerous, “actors”: family and society. The Bright Space design fiction depicts a world in which children are implanted with a memory chip that enhances their mental abilities. However, the intervention is likely determined by the parents’ will. As long as the chip “improves” humans, and it is implanted for the “children’s own sake”, it appears taken for granted that parents can use it as a “learning tool” for enhancing all newborns. In the CrimCheck design fiction, on the other hand, being fit, healthy, and useful are social values that behavior change technologies sustain and that all individuals accept without putting them into question. Purpura et al. [2011] noted how, in the context of dieting and exercising, behavior change technologies might aim to enforce sublimated social goals, reinforcing conceptions of what it means to be healthy or fit. In the design fictions presented above, this theme is taken to the extreme, highlighting possible side-effects that were not pointed out by Purpura et al. [2011]. Here, behavior change technologies not only reproduce a weltanschauung that is dominant in a given society, but also erase individuals’ ability to assess themselves and to critically interpret the world in which they live. Behavior change systems transform their users into automatic re-enactors of cultural and social norms, with no recognition of their experiences and awareness [Tsai & Wadden, 2005]. They relieve human beings of the responsibility of choosing and accounting for their own decisions, proposing a seductive “machine life” [Schüll, 2014].

These dystopian views are interesting for designers as they signal an attitude of general mistrust and suspicion toward behavior change systems. Design fictions point out the perceived opaqueness of these technologies, whereby it is not immediately clear who is promoting a certain behavior and why, indirectly claiming more “transparency” as a remedy for reestablishing freedom of choice. Moreover, through design fictions, participants appear to deepen their reflection on the kinds of actors that may exploit technology for their own ends, adding “benevolent” subjects, such as society and family, to the list of potential threats (private companies and governments). This reveals how design fictions can be used as tools for reflection: by turning opinions into characters and plots, they seem to widen the points of view from which a problem can be looked at.

Human-like relationship

Participants explicitly stressed the positive function of technology when it is paired with humanity and openness to the other.


In the usage scenario, the Physical Transformer acts through a robotized pet that is in charge of promoting the user’s active lifestyle. The design assumes that everyone would be happy to take care of their pet robot and that this relationship will increase motivation to do more physical activity (Erika is “so happy to see how this is making it more lively, strong and affectionate: this ‘pet’ is so close to her that she is inclined to talk to it like a friend”). This artificial entity is seen in a positive light: users can establish an empathic relation that favors the individual’s self-disclosure in a manner that recalls the relationship between the patient and the therapist. For this reason, participants wished for more intelligence in behavior change technologies, in the hope that they could establish human-like relations with their users in the future. Here, design fictions unveil the values that people consider essential when designing technology, the direction that technological development should follow, and the kind of ideal world they imagine: design fictions point to the technological features that individuals consider most important to develop in order to realize their vision.

This desire, nevertheless, again shows some naivety, since it tends to overestimate the goodness of technologies that simply support the user’s projection and disclosure. In particular, it obscures the fact that relationships with machines like robo-pets could have negative impacts on a person’s wellbeing. For example, although research has found that companion robots may have a positive psychological effect, can help create social relationships [Shibata & Kawaguchi, 2009], and can reduce loneliness [Robinson, Macdonald, Kerse, & Broadbent, 2013], concerns about the loss of, or reduction in, social connections following their introduction have also been raised [Sparrow & Sparrow, 2006]. Turkle [2012] emphasizes how pet robots provide surrogates for real relationships, risking, at the same time, making us shed the burden of caring for other people and reducing our interest in real friendships.

A closer look at the Physical Transformer design fiction, however, calls into question this emphasis on the positive functions of technologies that are capable of “conversing” with humans and developing an equal relationship with them. Here, the pet has actually gained more intelligence, by
