On closer inspection, it may be harder than it seems to figure out what 'making more human technologies' actually means. Is virtual reality, for example, a human technology? And what could we possibly do to make it 'more' human? I believe three different approaches can be distinguished here. First, we could consider human technologies to be artefacts developed to address grand challenges like inequality or sustainability. Second, they could be technologies with fewer side-effects, such as health risks, environmental risks, or ethical issues. Third, they could be technologies developed through a more mindful process. Let's review these three approaches.
Addressing grand challenges
The first and most obvious way is to make technologies that 'really' address some of the grand challenges of today's societies – global warming, inequality, public health, poverty, and so on. This approach means that innovation needs to be steered principally by the normative desirability of its outcomes. From this perspective, it is not enough to make technologies that do 'no harm'; we need to purposefully direct them towards socially desirable goals.
But this approach is not without problems. First, technological developments – even desirable ones – still need to be viable. From an economic perspective, they need to be funded and/or able to sustain themselves. Technologies addressing the needs of the poor, or research on rare diseases, constantly face this challenge. Second, broad goals such as 'sustainability' remain open to interpretation. For example, the popular eco-brand Ecover says it plans to use synthetic biology to replace some of its less sustainable ingredients. But this opens a whole debate, in which some NGOs argue that synthetic biology is not 'natural' and that its effects are uncertain and potentially undesirable. From that point of view, it may not be a 'sustainable solution' at all.
Limiting unintended consequences
The second approach consists in being more mindful about the impacts of emerging technologies, as a way to limit risks and unintended consequences (often referred to as 'side-effects'). This stems from the realization (as mentioned in the previous post) that technology-based innovation can also come with its share of unplanned and harmful consequences. These consequences can be health-related (as in the case of asbestos), environmental (e.g. pesticides), but also social (as in the debate about potential job losses caused by robots).
To address these effects, we have developed different kinds of risk assessment – health, ethical, environmental. The main problem with emerging technologies is that predicting their consequences, especially the social ones, remains to a large extent an act of divination – even when a great deal is done to test for potential impacts, as in the pharmaceutical industry.
My main point is that emerging technologies, precisely because they are new, have consequences that we cannot rationally predict. Not because we do not test them enough, but because our past experiences encourage us to anticipate the wrong kind of risks – the ones we can calculate and control. The real risks come from what we have not experienced yet and can therefore only speculatively expect. (For a more in-depth reflection on the link between risk and uncertainty, this is a nice piece and here is a great article.)
Another problem with the risk-mitigation approach is the question of when we should – and when we still can – avoid unintended consequences. The dilemma is that while a technology is at an early stage of development, it is relatively easy to change it and its effects: safeguards can be included, alternatives can be explored, or it can be banned altogether. Unfortunately, at such an early stage there is not much we know about those effects. By the time the technology is fully developed, ubiquitous in our lives, and its unwanted consequences are well known, it has become difficult to control or even ban. It has become locked in. (This tension is known as the Collingridge dilemma.)
Integrating the process of innovation
In response to these problems, a newer approach shifts from the management of risk to an attempt to integrate the process of innovation itself and make it more reflexive. The point here is that technological development has too much social impact to be left solely in the hands of engineers (and the institutions for which those engineers work). The idea, then, is to 'open up' the process of innovation to explore other possibilities in design, visions, value chains, etc. Research at this level has been especially directed towards finding ways to have more:
- Anticipation asks innovators to adopt an approach oriented towards futures. However, unlike a merely predictive forecasting that limits the futures embodied in an innovation, it also demands 'opening up' the way for unforeseen futures.
- Reflexivity goes beyond a form of self-referential critique: it demands mindfulness of the limits set by our perceptions, knowledge, and technology, and assumes a critical stance towards the way we frame problems.
- Inclusion – this is a tricky one. Ideally it would integrate the 'public,' in an upstream way, into the development of new technologies. By extension, it also means being more mindful of who profits from, and who is excluded by, certain technological choices.
- Responsiveness is the central aspect – it demands that all the other aspects inform and shape the design of the innovation and its development. The key here is to look at innovation not as an artefact but as a process in which each step remains open, flexible, and informed by processes of anticipation, reflexivity, and inclusion.
How to make more ‘human’ tech
What is clear is that making more human tech is not something done at a single level. In the complexity of reality, these approaches are indistinguishable: grand challenges influence the making of technologies, which may in turn reduce (or increase) certain risks. But personally, I believe the last option gives us the most room for action. Many technologies are directed at good causes, but that alone is not enough to make them more 'human,' or even 'responsible.' Mitigating side-effects is necessary, but again, we cannot pre-plan all of them, and technologies that simply 'do not look bad' are not thereby a valuable addition to our technological culture.
I believe that what we really need is to develop another kind of relation with our technologies – a relationship of care. A good way to understand what this could mean, suggested by the French sociologist Bruno Latour, is the case of Frankenstein. As is often forgotten, the creature in Mary Shelley's story was not born a monster; it became one after being abandoned by Doctor Frankenstein. The error was not to challenge nature in its ability to make life, but to abandon the creature – to stop caring about it.
This distinction between culture – the human stuff – and technology (and also 'nature') is a deeply entrenched one. The anthropologist Philippe Descola, who works on the nature–culture distinction, points out that the strict line we draw between 'us' and our 'environment' is a Western one. Other cultures have long seen a continuity, and a personal relation, between themselves and their environment. Maybe that is what we ought to do: stop considering nature – and technologies – as tools that serve us, and start thinking a little more about how we want to evolve with them.
Who knows – if we had taken the time to communicate with and care for the creature, we might have found a good friend. But maybe that's just tilting at windmills…
What do you think?
Image: book cover of Santiago Posteguillo's La noche en que Frankenstein leyó el Quijote.