Human Technologies – Technology centred or Business focused?

Technology Bubble

Today, I want to look at the relationships between the process of developing more human technologies and the business models that frame those technical developments. Let’s first briefly review the way technologies come to play a role in society, the role of design, and why I believe that business models are an essential part of making more human technologies.

Technologies, their use and meaning in society

The first point is to understand what role technology plays in our society. I have spoken about this at some length in my previous posts, so I will only sum up the most relevant points.

© Niemann, 2014

First, technologies, through the way they are designed, tend to allow, constrain, and shape certain relations with their environment. When light bulbs are designed to last only 100 hours, we can say that the value of obsolescence is embedded in the object. When Apple improves its computers' interface to be friendly for everyday users (but difficult to tweak for advanced users), it embeds the value of (a certain type of) user-friendliness. When Israel designs its urban and road planning to constrict and isolate Palestinian villages, it puts a certain politics into these objects. When Fairphone redesigns smartphones for longevity and ease of repair, it embeds its understanding of sustainability in the phone. The list could go on, but the point is that technologies are developed with a certain vision of how we should interact with our environment. That is why we may say that they are 'political.'

Second, the fact that a technology has values embedded in it does not mean that its meaning and use cannot be hijacked. A famous, horrific example is the terrorist attacks of 9/11. Planes, which had been designed for transport and efficiency, and which represent a typical vision of the Western world and globalisation, saw their use and meaning entirely re-appropriated. They became engines of death and, in a sense, 'spokespersons' of the conflict with the Western world, globalisation, and its values.

This leads me to the last point: technologies are inescapably tied to, and developed within, socio-technical systems. They become political and situated in space as the result of an iterative process of interaction with the socio-technical components of their environment. They gain relevance when they are compatible with existing infrastructures, value judgements, economic settings, etc. This complex and ambivalent position, which extends well beyond the technology itself, is what we need to deal with when we attempt to make more human technologies.

The role of business models in the making of human tech.

The key problem is that even though this complex position is broadly acknowledged, most of the focus of responsible innovation is on the scientific and technical process. I think we should take a closer look at the relationships between the development of technologies and the business structures, values, and models from and with which they emerge.

Grant Research – © Dave Coverly, Speed Bump, 2011

It may seem obvious, but more often than not scientists and engineers do not really control the visions and expectations that are set as the goals of their research agendas. Let's take the example of the development of Genetically Modified Organisms (GMOs). Originally, the biologists working on GMOs were enthusiastic about the possibility of greatly diminishing the use of pesticides, which had been heavily criticised in the 1960s. Fast forward a couple of decades: as a result of the institutional structure they evolved in and the business model of the biotech industry, it turned out that the profitable option was the opposite of their hope – that is, developing organisms that could withstand the use of pesticides. The point I want to emphasise is that it is not enough to look at the drive of the engineers developing a technology. The business models (as well as many other infrastructural aspects) are at least as essential in the development of more human tech.

Making Human Tech. Through Redesigning the Business Model

So perhaps we can use the same perspective discussed (in the previous posts) about making more human tech. (i.e. by thinking with anticipation, inclusion and reflexivity about our desired futures in order to (re)design our technologies) to also develop more human businesses?

Recently, I discovered a company called RiverSimple. It is an innovative British business that aims to design a new business model for sustainable mobility. In one interview, RiverSimple's CEO Hugo Spowers made a compelling case for why the car industry fails to work towards sustainability. As he put it: when you are in the business of selling cars to your customers, you make your profit by selling cars that are as expensive as possible, as unreliable as possible, that have a short life, etc. The approach of RiverSimple is to redesign its business model to align profit making with broader social goals. By doing something as simple as leasing cars instead of selling them, RiverSimple retains ownership of its vehicles. As a result, the mechanism of profit making includes aspects such as having vehicles that last as long as possible, consume as little as possible, and demand as little maintenance as possible. Governing our futures can also be done by making cars, but it is not only the design of the car that has to change. The business model (among other things) will also need to change course.

Making Human Technologies = aligning social goals, technical design & business model?

My title may be a bit misleading. I believe that making human tech is not about focusing on businesses or technologies themselves, but about changing the way we think about the relationship between the development of technologies, business models, policies, etc. The point is that we cannot just focus on the development of the technology; instead, we need an integrative approach where developing and researching a new technology goes hand in hand with exploring different business models. In this way, we can also make sure that making human tech is a profitable – and scalable – endeavour.

And you, what do you think? Does that seem like a good idea? Let's discuss it in the comment section!

Impacting design – What is Value Sensitive Design?

If we can agree that value-less design is simply impossible (see my older posts), and that by making the technologies we will be living with we are contributing to a technological culture that impacts society, businesses, as well as you and me, then the question is: what can we do in practice to make technological innovation more human? In this blog post, I want to review the approach known as 'Value Sensitive Design,' a methodology which aims both to identify the values that should matter and to embed them in technologies. Now, let's take a look at how we can get technology to work with us.

What is value sensitive design?

Value sensitive design is an approach that stems from human-computer interaction. In the 80s, when it became apparent to engineers that computers were used not only by like-minded people but also in business and administration, the user-friendliness of the interface started to be seen as far more important. Research on 'human-computer interaction' was conducted, and it became clear that certain values – such as privacy of information, security of connection, or user-friendliness – had to be implemented in the technology. Building on this realisation, value sensitive design is an attempt to make this process of value integration a full part of the design of technologies in order to tackle social needs and moral dilemmas.

More recently, the idea of value sensitive design has re-emerged as a possible methodology for more 'responsible innovation.' The idea is that values and biases get into technologies anyway, so to make technologies that are socially more relevant we need to identify which values should get in. Then, we can use them to guide the design of technologies. In practice, such a methodology can be divided into three parts: conceptual, empirical, and technical analysis.

The conceptual work focuses on identifying the major stakeholders who would be directly and indirectly involved with the technology. On this basis, it should be possible to identify their values, cares, and worries. The second step is an empirical exploration of the stakeholders' experiences as a way to understand the trade-offs and dilemmas that the technology may create. Even though values are often contradictory, there may be agreement about which ones are relevant. Finally comes the analysis of how the technology can be designed in order to represent the selected values. The point here is to recognise how certain technical choices can express (or inhibit) moral values.
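To make these three steps a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of how the outputs of the conceptual and empirical analyses might be recorded and then confronted with concrete design choices. The stakeholders, values, and design choice are invented for the sake of the example; this is not drawn from an actual VSD study.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the three VSD phases; all names and values are made up.

@dataclass
class Stakeholder:
    name: str
    direct: bool                                   # directly using the technology, or only indirectly affected?
    values: list = field(default_factory=list)     # conceptual phase: values, cares, worries

@dataclass
class DesignChoice:
    description: str
    supports: list = field(default_factory=list)   # values this choice expresses
    inhibits: list = field(default_factory=list)   # values this choice works against

# 1. Conceptual analysis: who are the (direct and indirect) stakeholders, and what do they value?
stakeholders = [
    Stakeholder("commuters", direct=True, values=["privacy", "ease of transport"]),
    Stakeholder("local residents", direct=False, values=["noise reduction", "privacy"]),
]

# 2. Empirical analysis: which values are shared across stakeholders (trade-offs appear elsewhere)?
shared_values = set.intersection(*(set(s.values) for s in stakeholders))

# 3. Technical analysis: check each design choice against the values identified above.
choices = [
    DesignChoice("store location data on the user's device only",
                 supports=["privacy"], inhibits=["ease of transport"]),
]

for choice in choices:
    supported = shared_values & set(choice.supports)
    conflicting = shared_values & set(choice.inhibits)
    print(f"{choice.description}: supports {supported}, conflicts with {conflicting}")
```

Of course, the hard part of VSD is precisely what cannot be coded: deciding whose values count and how to weigh them. The sketch only shows how the three analyses feed into one another.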

The case of UrbanSim

Theory, theory, theory… okay, let's look at an example of how all those pretty things can be applied in a specific case. Paul Waddell, a Berkeley professor, designed in the 2000s a simulation system called UrbanSim to support urban and environmental planning. Recognising the political implications of his simulation, Waddell and his colleagues designed their software with the aim of representing the values of the citizens involved in and potentially impacted by the simulation.

What they did first was to conceptually identify the values that Waddell and his team explicitly wished to embed in the design of the software: fairness (meaning that the simulation should not unfairly discriminate against any stakeholder), accountability, and democracy. Secondly, they empirically researched the stakeholders' values. Here, many cares and concerns were identified, such as sustainability, space for business, ease of public transport, etc. On this basis, Waddell and his team designed an interface with three main value categories: economic, environmental, and social. Under each of these categories, a great variety of aspects (e.g. public transportation, noise pollution) could be selected by the stakeholders in order to create simulations that speak to their values.
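Just to illustrate the kind of structure described above – this is not UrbanSim's actual code or interface, only a hypothetical sketch with invented indicator names – one could imagine the value-based selection of indicators working roughly like this:

```python
# Hypothetical sketch of value-based indicator selection, loosely inspired by the
# interface described above; categories and indicator names are invented examples.
indicators = {
    "economic":      ["jobs created", "space for business"],
    "environmental": ["noise pollution", "greenhouse gas emissions"],
    "social":        ["access to public transportation", "housing affordability"],
}

def build_view(selection):
    """Return the indicators a stakeholder asked to see, keeping only known ones."""
    view = []
    for category, wanted in selection.items():
        available = indicators.get(category, [])
        view += [f"{category}: {name}" for name in wanted if name in available]
    return view

# A stakeholder who cares mostly about sustainability and public transport:
print(build_view({
    "environmental": ["greenhouse gas emissions"],
    "social": ["access to public transportation"],
}))
```

The point of such a structure is simply that the same simulation can be read through different value lenses, so no single framing is imposed on all stakeholders.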

It turns out that UrbanSim has been quite a success and is used extensively around the world in urban planning. This also shows that designing technologies on this basis does not hinder the development of businesses and the success of products; on the contrary, it really contributes to the value of the innovation.

Will Value Sensitive Design suffice to make human tech.?

Which values should we address? Who wins and who loses? – Chappatte, 2012

That sounds pretty good, right? An approach that enables us to design technologies from the inside out, to address values that everyone involved finds important. To bring together ethics, philosophy, social sciences, and engineering. But what (mildly) bothers me is that considering values is not enough to make a normative decision about how they weigh against each other. Are all values equal, or are moral values more relevant? What actually is a 'value'? We tend to agree that sustainability or equality are values, but should we consider 'space for business' to be one as well?

The key point here is one that lies at the very heart of making more human technologies. Are human technologies products that better fulfil our social goals, or is it about changing the process to make it itself more 'responsible' – irrespective of which goals the technology aims to achieve? In the first case, the key issue is to identify which values we want to achieve (e.g. equality or sustainability) and to find ways to guide the process of building technologies to best support those goals. Here, value sensitive design seems to be a great tool. In the second case, the point is not to achieve certain values, but rather to make the process of technology making – which is typically purely technical and business-oriented – more reflexive about politics, values, and social concerns. In this latter case, the methodology of VSD is a good start but definitely not an end point.

And you, what do you think about value sensitive design? Does it make sense to you or is it some sort of a fancy theory that can in no way be integrated in technology making?

Image: Science Blinding Beauty, 1930, Fukuzawa Ichiro. How can we get science & tech to work with our human values?

Some nice stuff on the subject if you are interested:

Which values should get in our technologies? A great documentary on The Architecture of Violence: https://www.youtube.com/watch?v=ybwJaCeeA9o

Perhaps a little boring but a nice conversation about values, ethics and technology: https://www.youtube.com/watch?v=R4ctXfQyGSo

A good starting point to get to know value sensitive design: http://www.nyu.edu/projects/nissenbaum/papers/Value_Sensitive_Design.pdf

Values & Technology

Today, I was reading this great article on big data, and it got me thinking about the (broader) relation between 'values' and technologies. This relation underlies the idea that we can make more human technologies, and I think it is worth looking at it with closer attention. Are technologies merely value-less tools? Or is it, on the contrary, that because of their technical characteristics they force us to act in certain ways? Let's take a look at the role of values in technological innovation and what it can imply for the making of more 'human' innovation.

Do technologies have values?

What is the relation between technologies and values? The answer we often hear is that mixing technologies with values is just like mixing apples with bananas. Values are the 'stuff' of people; technologies are inanimate, without brains, and therefore cannot have values. Technologies are just tools that some people use for good, while others use them for bad… A rather infamous example is the slogan of the American National Rifle Association claiming that 'Guns don't kill people; people kill people.' The problem here is that if I have a gun in my hand and I feel threatened, my reaction will most probably be very different than if I only had a knife. The medium through which we interact with the world does have its say in shaping our relations.

We could also go the other way around and claim that values are an intrinsic part of certain technologies. A famous argument against nuclear energy, in the second half of the 20th century, was that due to the danger, complexity, and reaction time needed to make nuclear power plants safe, only a military-like organisation could control them. By extension, nuclear energy would slowly lead the countries using it to become totalitarian regimes. Another example is the common argument that the Internet is inherently democratic and liberal, and that its use will emancipate and democratise society (an argument implicit in most of the net-neutrality debate). The problem here is that none of this has happened! States using nuclear energy have not become totalitarian, and the world has not become one big democracy.

Internet & Democracy
“It’s a virus that is spreading through the Internet” © Chappatte, 2010

The issue with those two perspectives is not that they are wrong – in fact they do hold some truth. While technologies do have a great impact on our relationship with the outside world (and we can experience that every day), they still leave us a say about how to use them. On the other hand, while certain forms of technology are best suited to support certain value ideals (e.g. democracy or totalitarianism), they do not pre-determine the way we must act, our choices, or the way we interpret them.

I think that the most sensible approach is to look at the context from which technologies come about and the way we interact with them, and through them. Then, what's clear is that YES, through their design, certain technologies definitely play a political role in society. The canonical example is the bridges designed by urban planner Robert Moses, which were built sufficiently low to prevent buses – used by the poor (and predominantly black) population – from easily accessing Long Island. Another, more recent example is the way search engines are designed: what does it mean to be automatically directed towards results that accord with our existing beliefs, morals, and values? The point is that technologies, in relation to their socio-technical context, play a role in ordering the world, and it is precisely through their design that we have a say about it.

Technology as Policy

From this perspective, technology can almost be perceived as a form of policy. Policy is often considered to be a statement of intent that is implemented through procedures in order to guide decisions and achieve rational outcomes. Technology, in relation with society, often acts in this way; it does not directly enforce one way of acting – it is not a law – but it incentivises certain actions and interactions (even if those are unintended). Like policies, technology demands careful design in order to help us satisfy our values (e.g. privacy, safety or sustainability) and to contribute to solving social problems. However, unlike policies, which are often adopted through (more or less democratic) political processes, technologies are selected through the rules of the market.

That also means that technology and ethics – contrary to what we usually think – cannot be separated. Not only at the level of ethical dilemmas caused by certain new technologies, but more deeply. When we make design choices in the making of a technology (and in its diffusion in society), we also decide which values matter most and how we believe our future should look. In this sense, we now really live in a socio-technical culture.

The dystopian future envisioned by Fritz Lang, in which (poor) humans become machines and are ultimately devoured by them, is not inevitable. The same holds for our fear of losing our jobs to robotisation. As with policies, we need to acknowledge the political and value-laden nature of technology making. We need to design technologies that represent our values and that will bring us to live in the socio-technical future we wish to inhabit.

That means that what we need is to come back to a human language to speak about technology. Instead of asking 'will it work?' or 'what technological process will make it safe?', we should ask 'do we want these things?' and 'do we believe they support our society?'

What do you think about the relationship between values and technology? To what extent do you think the metaphor of technology as policy holds?

Image from Fritz Lang’s movie Metropolis, 1927.

How to make technologies more human: three approaches

When looking at it closely, it may be harder than it seems to figure out what 'making more human technologies' means. Is, for example, virtual reality a human technology? And what could we possibly do to make it 'more' human? I believe that three different approaches can be distinguished here. First, we could consider human technologies to be artefacts developed to address grand challenges like inequality or sustainability. Second, they could be technologies with fewer side-effects, such as health risks, environmental risks, or ethical issues. Third, they could be technologies developed through a more mindful process. Let's review these three approaches.

Grand challenges

Geoengineering, another controversial way to address grand challenges. © Henning Wagenbreth, 2007.

The first obvious way is to make technologies that 'really' address some of the grand challenges of today's societies – phenomena like global warming, inequality, public health, poverty, etc. This approach means that innovation needs to be steered principally by the normative desirability of its outcomes. From this perspective, it is not enough to make technologies that do 'no harm.' Instead, what we need is to purposefully direct them towards socially desirable ends.

But this approach is not without problems. First, technological developments – even desirable ones – still need to be viable. From an economic perspective, they need to be funded and/or able to sustain themselves. Technologies addressing the needs of the poor, or research on rare diseases, constantly face this challenge. Second, broad goals such as 'sustainability' remain open to interpretation. For example, the popular eco-brand Ecover says it is planning to use synthetic biology to replace some of its less sustainable compounds. But this opens a whole debate in which certain NGOs argue that synthetic biology is not 'natural,' and that its effects are uncertain and potentially undesirable. From this point of view, it may not be a 'sustainable solution.'

Limiting unintended consequences

The second approach consists in being more mindful about the impacts of emerging technologies as a way to limit the risks and the unintended consequences (often referred to as 'side-effects'). This stems from the realisation (mentioned in the previous post) that technology-based innovation can also come with its share of unplanned and harmful consequences. These consequences can be health-related (as in the case of asbestos), environmental (e.g. pesticides), but also social (as in the debate about potential job losses caused by robots).

To address these effects, we have developed different kinds of risk assessment – from health to ethical or environmental. The main problem with emerging technologies is that predicting the consequences, especially the social ones, is to a large extent an act of divination – even when a lot is done to test the potential impacts, as in the case of the pharmaceutical industry.

– At the present time, this is what we know about GMOs… – Consequently, there is absolutely no rational reason to fear them… © Chalvin, 2006

My main point is that emerging technologies, precisely because they are new, have consequences that we cannot rationally predict. Not because we do not test them enough, but because our past experiences encourage the anticipation of the wrong kind of risks – the ones that we can calculate and control. But in fact, the real risks come from what we have not experienced yet and therefore can only speculatively expect. (For a more in-depth reflection on the link between risk and uncertainty this is a nice piece and here is a great article).

Another problem with the risk-mitigation approach concerns when we should, and when we still can, avoid unintended consequences. The dilemma is that while technologies are at an early stage of development, it is relatively easy to change them and their effects. Safeguard measures can be included, alternatives can be explored, or they can be banned altogether. Unfortunately, at such an early stage there is not much we know about the effects. By the time the technology is properly developed, ubiquitous in our lives, and its unwanted consequences are well known, it has become difficult to control or even ban. It has become locked in.

Integrating the process of innovation

In response to these problems, a new approach is to shift from the management of risk to an attempt to integrate the process of innovation and make it more reflexive. The point here is that technological development has too much of a social impact to leave it solely in the hands of engineers (and the institutions for which they work). The idea is then to 'open up' the process of innovation to explore other possibilities in design, visions, value chains, etc. Research at this level has been especially directed towards finding ways to have more:

  • Anticipation – demanding that innovators adopt an approach oriented towards futures. However, unlike a merely predictive and limiting forecast of the future embodied in the innovation, this approach also demands 'opening up' the way for unforeseen futures.
  • Reflexivity – going beyond a form of self-referential critique; it demands being mindful of the limits set by our perceptions, knowledge, and technology, and assuming a critical stance towards the way we frame problems.
  • Inclusion – this is a tricky one. Ideally, it would integrate the 'public,' in an upstream way, in the development of new technologies. By extension, it also means being more mindful of who profits from, and who is excluded by, certain technological choices.
  • Responsiveness – the central aspect: it demands that all the other aspects inform and impact the design of the innovation and its development. The key here is to look at innovation not as an artefact but as a process during which each step must remain open, flexible, and informed by the processes of anticipation, reflexivity, and inclusion.

How to make more ‘human’ tech

What is clear is that making more human tech is not simply something that is done at one level. In complex reality, all these approaches are inseparable: grand challenges influence the making of technologies, which may itself reduce (or increase) certain risks. But personally, I believe that it is through the last option that we have the most space for action. A lot of technologies are directed at good causes, but that alone is not enough to make them more 'human' or even 'responsible.' Mitigating side-effects is necessary, but again, we cannot pre-plan all of them, and technologies which simply 'do not look bad' are not thereby a valuable addition to our technological culture.

I believe that what we really need is to develop another kind of relation with our technologies, a relationship of care. Maybe a good way to understand what this could mean, as explained by French sociologist Bruno Latour, is to take the case of Frankenstein. As is often forgotten, the creature in Mary Shelley's story was not born a monster; it became one after being abandoned by Doctor Frankenstein. The error was not to challenge nature in its ability to make life, but to abandon the creature, to stop caring about it.

This distinction between culture – the human stuff – and technology (and also 'nature') is a deeply entrenched one. The anthropologist Philippe Descola, working on the nature-culture distinction, points out that the strict line we draw between 'us' and our 'environment' is a Western distinction. Other cultures have always seen the continuity and the personal relation between ourselves and our environment. Maybe that is what we ought to do: stop considering nature, and also technologies, as tools that serve us, and start thinking a little more about how we want to evolve with them.

Who knows, if we had taken the time to communicate with and care for Frankenstein's creature, we could have found a good friend. But maybe that's just tilting at windmills…

What do you think?

Image: Book cover from Santiago Posteguillo’s book La noche en que Frankenstein Leyó el Quijote.

Why did we start to care about making more human tech?


The #MakeTechHuman initiative defined its goal as assessing "whether [technology] truly serves humanity". So is this what making more human technology means? Having technologies that 'truly serve' humanity? I believe that this definition is unsatisfactory: which humanity are we speaking about? Does a washing machine or nuclear energy 'truly serve' humanity? And what would be the meaning of 'more' human technology? Technologies that more 'truly' serve us?

I think that to begin thinking about what 'more human technologies' may mean, it makes sense to look at where and why we started to be concerned with the 'humanity' of technologies. This concern stems principally (but not only) from two megatrends that emerged during the second half of the 20th century in Western countries: the risk society and the knowledge economy.

The Risk Society

© Burki, 2011

The idea of the Risk Society – a concept developed by sociologist U. Beck – is that since the second half of the 20th century we have been facing new kinds of risks. These risks, referred to as 'manufactured risks,' are – unlike any others in the past – caused by our own scientific and technological developments and are potentially catastrophic (think of nuclear energy, the use of pesticides, or the effects of anthropogenic CO2 emissions).

Beck then argues that we are now living in a 'risk society', a society whose life is governed by the fear and anticipation of risks stemming from the developments we have ourselves created. It is because the side-effects of technologies became so conspicuous and started to drive our lives that the question of more 'human' technologies emerged.

It is because institutions were failing to deal with the side-effects of technological change that it became essential to question how emerging technologies could be developed.


The Knowledge Society

Another reason, a more economic one, relates to the perceived role of innovation in our modern economies. Innovation as an economic necessity is an idea that emerged in the late 1960s under the concept of the 'knowledge society'.

The idea of the knowledge society is that a new type of economy is emerging, one which relies on 'knowledge workers' instead of manual workers. Such an economy, through constant innovation, would be "capable of providing rapid economic growth in jobs, opportunities, income, standards of living, and aspirations for many decades, if not for another century […]" (Drucker, 1969, p. 11). From this perspective, technological development, as a source of marketable goods, is perceived as the motor of endless economic growth.

This idea has since become extremely popular and is part of most political discourses. In this context, the concern for making technologies more human is often perceived as an attempt to "involve civil society very upstream to avoid misunderstanding and difficulties afterwards" – that is, to ensure the acceptance of 'good' technologies.

Making more human tech.

I believe that it is in this context that we can start to understand the emergence (and funding) of institutions concerned with making more human technologies. Around the 1960s we can indeed see the emergence of institutions such as science shops and centres for Technology Assessment, the spread of ethics committees, and the development of Science & Technology Studies. The goals of these approaches may be quite different, but overall they all took stock of the key importance of science and technology in society and tried to problematise this relation. Their aim is, generally speaking, to politicise – and often democratise – the relation of co-evolution between science, technology, and society.

Making human technologies can be seen as going a step further in the quest to become more reflexive about our technological culture. Instead of just politicising the relation, the idea is to intervene in the innovation process itself so that the resulting technologies are qualitatively different.

Innovation is not something that we should pursue for its own sake. It is, like Goya's drawing Modo de volar, a way for us to escape (to fly away) from the limitations that we are facing. But there is another side to it: the innovation which serves as a tool of liberation – the wing devices in Goya's drawing – cages us anew and brings new conditions to our lives. Making more human technology is the attempt to be more mindful about the way we evolve with those innovations.

And you, why do you think it is important to care about making more human technologies? Or maybe you think that it is a misplaced concern?

Image: Francisco Goya, 1815-1823, Modo de volar [A way of flying]

Making Human Technologies

Technology and society. Two entities inescapably interrelated, yet so far apart. It would seem that, on the one hand, there is an objective and merely technical space where technologies are produced as the result of our ever-increasing scientific knowledge. On the other stands the messy, intersubjective space of society, where problems are created and dealt with in their full complexity. Technology is then depicted as a neutral element that can be used for good or evil depending on its social context.

Yet, as is clear to all of us, our relationship with technologies is far more complex than that. Technology surrounds us everywhere; everywhere it mediates our relations with others and with the outside world. Often – as with video games – our experience with technology has become the goal of our activities. Not only does it shape our relations, but it also impacts our ways of thinking. Nowadays, it is not uncommon, for example, to speak of our brain in terms of computing power. In this context, why do we spend so little time thinking about the processes through which technologies are designed and built?

This issue is, however, receiving growing attention, as in the European Union's willingness to integrate a 'responsible' approach to research and innovation within the programmes funded by Horizon 2020, or in Wired and Nokia's campaign 'MakeTechHuman' (a direct inspiration for this blog). Nonetheless, a lot remains to be done to define precisely what this means and, more importantly, how we can go about making more human technologies. Exploring these aspects is the goal of this blog.

I believe that making technology more human comes down to asking ourselves some simple questions: which world do we want to live in? What kind of future do we envision, and what relations do we want to entertain with our technologies? Do we want to live in the futurist world of speed and movement (like the one depicted in the painting above), or do we want to sail the ship towards other horizons? This broader question is one without an answer, but one that we must keep asking ourselves in order to figure out in which direction we should articulate the relationship with our technologies.

Image: Christopher Nevinson, 1913, The Arrival