Member Feature: Hin-Yan Liu

Hin-Yan Liu

Associate Professor, Centre for International Law, Conflict and Crisis - University of Copenhagen

Dr. Hin-Yan Liu is a member of the FRR’s Legal Expertise Committee and an associate professor in the Faculty of Law at the University of Copenhagen. Before taking up that position, he was first a Max Weber Fellow and subsequently a Research Fellow at the European University Institute in Florence, while concurrently holding a permanent appointment at New York University in Florence. He has also held academic positions at King’s College London and the University of Westminster, and was a visitor at the Max Planck Institute for Foreign and International Criminal Law.

His doctoral work at King’s College London was supported by grants from the Social Sciences and Humanities Research Council of Canada, King’s College London, and the Government of Alberta. His thesis, which controversially argued that ordinary legal processes were the source of the impunity enjoyed by the modern private military company, was passed without amendments and has been adapted into a monograph published by Hart in September 2015. He was awarded his LL.M. in Human Rights Law by University College London (UCL) with distinction, and also holds degrees in law and psychology.

Read on for an interview with Dr. Liu, in which he discusses the intersection of technology, law, and policy, the role of the FRR, and what he sees as the meaning of “responsible robotics”.

You do a lot of research in the domain of autonomous and remote weapons systems. You argue that we need an “architecture of responsibility” in order to deal with this “novel” category of weapons. Can you describe how ethicists, lawmakers, designers, and public institutions can or should contribute to the discussion around this “architecture of responsibility”?

I think that there is too much emphasis placed upon the technology. Like many others, I was also caught up in the technological developments, trying to understand how these work and the paths along which they are predicted to progress. From a regulatory standpoint, however, my thinking now is to place law and policy front and centre – to see these challenges as exclusively questions of regulation. What this means in practice is that ethicists, lawmakers, designers, and public institutions should not, in my view, really worry about the technology at all. Instead, the focus should be upon where there are inconsistencies in the regulatory system (others have termed this a ‘gap’, for example, but gaps imply voids within the system, whereas the deficiencies which novel technologies reveal in the regulatory system are often at the fringes, or seemingly unconnected with the existing system).

Take responsibility for Autonomous Weapons Systems (AWS), for example. There has been much discussion about how to bridge responsibility gaps where AWS cause effects which look very much like war crimes. Yet I think that in addition to a practical responsibility question (how should responsibility be apportioned for the development and deployment of AWS – what I have called circumstantial responsibility questions in previous work), there are also conceptual responsibility questions (what types of responsibility are at play, and which entities and activities are connected together via these forms of responsibility). It is theoretically possible for AWS to be designed in such a manner as not to create traditionally conceived responsibility gaps (practical responsibility questions), but would this be sufficient to satisfy the spirit of responsibility and fulfil the work that we expect of responsibility? This is what I mean by the conceptual responsibility questions: the core issue may actually be a disagreement as to the contours and content of the responsibility concepts which we expect to operate in the AWS context. These are the issues that need to be ironed out before a sophisticated discussion can commence.

In short, we need to answer the question of what we want responsibility to do, and continue to do, before we attempt to see how responsibility regimes can embrace novel technologies. Since it is a regulatory, rather than a technological, question, one great advantage of this approach is that it is not necessary to have AWS in the here and now, because the ultimate characteristics of the technology do not matter. In this sense, regulatory preparation can be undertaken alongside parallel technological developments, thus minimising the ‘pacing problem’ whereby regulation falls behind technological innovation.

 

Technology moves fast and new technological systems are developed, while law and policy seem to be left behind. As a result, we do not have the tools needed to regulate new technological systems. Your research aims to fill this gap between technology and law/policy. But are law and policy enough to regulate technological development?

If I understand you correctly, you are asking two different questions here: how law and policy might address the pacing problem; and whether or not law and policy approaches are sufficient and robust enough to cope with the disruptive challenges of new technologies.

Again, I think that we may be able to circumvent the pacing problem by detaching these questions from the technology. In other words, these are regulatory questions, not technological ones. From this perspective, technology only becomes interesting for law and policy once it creates conditions in which legal and policy answers are no longer satisfactory. We can use projected technological developments to bring foresight to regulatory thinking – to actually develop legal and policy reactions to technologies that do not yet exist, but which, if introduced, would severely disrupt the regulatory constellation. It would be possible to use these to start thinking of regulatory models capable of governing these hypothetical or speculative scenarios. If these do not materialise, some time and resources would have been wasted, but perhaps we would have gained some useful experience in going through the process. If those scenarios do transpire, however, then we would have regulatory models and frameworks at hand which can be readily adapted to the specific exigencies of the manifested technology.

As to whether law and policy responses would be adequate or sufficient to rise to the challenge, I would suggest that in the narrow sense they would not. Lessig famously proposed four modalities of regulation: law, social norms, the market, and architecture. I think that all of these modalities will need to be drawn upon (and others, such as nudging, also applied) for regulation to be sufficiently rounded out.

But there are two other perspectives that I would like to introduce here. If we distort a little Einstein’s idea that ‘we cannot solve our problems with the same level of thinking that created them’, it may be that technological problems are best addressed through law and policy, which perhaps invoke different types of logic. What I mean is that deploying technology to solve technological problems might not work, but law and policy approaches might be sufficiently different to make a regulatory dent. Ironically, maybe this is why deploying law and policy solutions against political challenges fails to work beyond certain limits, as these are more closely aligned.

The other perspective is being developed in a line of work being undertaken by my PhD student, Leonard van Rompaey. He argues that law is a paradigmatically human (as in Anthropos) endeavour which is tailored for human activity and takes into account human prejudices and human behaviour. If law is human, and adapted for human behaviour and understanding, then there appears to be a large void between human law and technological activities and the outcomes of those activities.

 

Taking into account your previous answer: if law and policy are not enough, what do we actually need in order to create technology that is politically acceptable and ethically justifiable?

There’s obviously no silver bullet, but I think that the broadest ranges of perspectives and approaches need to be brought to bear on these emerging questions. So in part, a broader conception of regulation is part of the solution. But I think also that a greater range of opinions needs to be consulted, and the impact from a more diverse set of potential victims taken into account. A large part of what worries me in this context is that it is a narrow set of participants in the regulatory debate: there appears to be a Western, educated and male bias in particular. Not only is this problematic per se, but it also restricts the range of potential problems that are identified.

Let me give you an example. In previous work, I raised the question of structural discrimination arising from optimising processes (anchored, in that case, in the autonomous vehicles context). If we overlook the problems associated with drawing upon the trolley problem for what an autonomous vehicle (AV) should do where an accident is unavoidable, and just take the conclusions, the argument is that there should be some form of ‘crash optimisation’, as Patrick Lin has written about. This makes a lot of sense, but when applied to networked AVs which share information with one another, discriminatory pressures may not only emerge but become consistent. Say, for example, that the crash optimisation algorithm determines that it is optimal for society if older people are hit rather than young people. This decision, which was once individual and isolated, now reverberates through the whole network of AVs: suddenly there is an emergent effect where older people bear a greater share of the risk burden from the operation of those AVs. All because of crash optimisation and networked communication working in tandem.
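
A minimal sketch in Python may make the emergent effect concrete. Everything here is invented for illustration: the crash_optimiser rule, the age ranges, and the fleet size are assumptions of the sketch, not anything proposed in the interview or by Patrick Lin.

```python
# Hypothetical sketch (not from the interview): one shared crash-
# optimisation rule, replicated across a networked fleet, turns an
# isolated choice into a systematic redistribution of risk.
import random

random.seed(0)  # reproducible toy run

def crash_optimiser(pedestrians):
    """Invented toy rule: when a crash is unavoidable, steer towards
    the older of the two potential victims. The criterion exists only
    to illustrate the structural effect, not to endorse it."""
    return max(pedestrians, key=lambda p: p["age"])

def simulate(fleet_size):
    """Count who bears the harm when every vehicle applies the rule."""
    hits = {"young": 0, "old": 0}
    for _ in range(fleet_size):
        # One unavoidable-crash scenario per vehicle: a young and an
        # old pedestrian are the only possible trajectories.
        pedestrians = [{"age": random.randint(18, 30)},
                       {"age": random.randint(60, 90)}]
        victim = crash_optimiser(pedestrians)  # same rule in every AV
        hits["old" if victim["age"] >= 60 else "young"] += 1
    return hits

# A single vehicle's choice looks like an isolated incident; the
# fleet-wide tally shows the consistent, emergent bias described above.
print(simulate(fleet_size=10_000))  # -> {'young': 0, 'old': 10000}
```

The point of the toy run is only this: a criterion that would be a one-off judgment in a single vehicle becomes, once networked, a consistent allocation of risk to one group.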

So what does this have to do with politics and ethics? There is suddenly something that looks very much like discrimination (selection of members of a group based upon immutable characteristics) which fails to qualify and be recognised as discrimination (the policy is not pursued on those grounds), and so cannot be challenged successfully in the courts. The heightened collective risk is imposed absent democratic debate, and outside of legislative processes (which makes it more difficult to mount a legal challenge). Significant redistributions of benefits and burdens are being made outside of the political system, and the situation becomes one of man-under-the-loop with the introduction of emerging technologies unless we get the law and policy responses right. There are probably many ethical questions posed as well, but I do not think that I am qualified to address these.

One further point I would like to add here is that we should also think of law (and policy) not as completed projects into which new challenges must be accommodated, but rather recognise that these new challenges might reveal inadequacies or incompleteness in the law as it currently stands. Rather than resisting this, and clinging dogmatically to the idea that the law already has all the answers, a more dynamic approach that accepts that the system is incomplete and suffers inadequacies may lessen the inertia of the legal system. This in turn will lower the barriers to reevaluation and constructive critique, thereby allowing legal principles, concepts and procedures to be more adaptive to technological challenges. I should be clear that I do not mean that the law should be adaptive to the technology. Rather, we should allow new technologies to reveal inconsistencies and incompleteness which are already latent within the legal system, but which have not been observed yet because we have remained within that system all along.

 

Can you describe what the role of the Foundation for Responsible Robotics should be from that perspective?

I think that the FRR, by foregrounding responsibility in relation to robotics and by beginning the discussion on this intersection, has already made a good start. What I mean is that discussions about robotics (from what I catch in the press, and from informal discussions with people) seem to bifurcate into panic and doom on the one hand, and the technological capacities on the other. I also happen to be very engaged in existential risks work, so I am intrigued by the dystopian visions, but that is not necessarily the most fertile context within which to debate the role that robotics may have in society. Nor should drives towards ever-greater efficiency and surmounting the technical challenges be all that propels the vision of robotics forward. Between these, I am guessing, is the commercial interest in marketing robots, which has its own incentive structures. In this sense, emphasising responsibility seems to moderate these different approaches and provide a more balanced way forward. Sometimes, all that is necessary is to spur a second thought before undertaking an action to prevent greater harm later: as they say in mountaineering, you should aim to check a slip, not wait to try to catch a fall.

 

Nowadays there is a big discussion around sex robots. The Foundation for Responsible Robotics aims to contribute to this discussion, and we recently published the report “Our Sexual Future with Robots”. Do you think that the discussion around robots, in more general terms, can give us all the assistance and the conceptual tools we need in order to address the issue of sex robots? Or is this a “novel” category, such that the discussion has to start from scratch and take into account the unique characteristics of this technology?

I am not sure that I would follow either of these choices, at least at this level of generality (and I have not had the opportunity to read the report yet, so I hope that my answer will not overlap with its findings). What I mean is that there will be instances where the application of general rules will be sufficient, and other times when perhaps the discussion will need to start tabula rasa. My main concern when it comes to sex robots is that we, as a society, do not know what we want when it comes to the regulation of the intimate personal sphere. This might then express itself in reactionary or incoherent responses framed through short-term perspectives built upon shaky foundations.

Perhaps the most controversial aspect of sex robots concerns child sex robots in relation to paedophilia. My suspicion is that there will not be enough empirical evidence (for lack of participating subjects and questions of ethics for human subjects) to evaluate whether child sex robots can be deployed therapeutically for paedophiles, perhaps preventing harm to children while at the same time allowing the expression of deeply repressed needs of paedophiles. This means that it would not be possible to argue the case for or against child sex robots on the basis of harm minimisation or human fulfilment. Though it may seem a misguided claim, it does not appear that we know as a society what it is that we want to regulate when it comes to sexual activities where minors are concerned, let alone how to do this. The New York Times recently reported on child marriage in the United States, for example, and not knowing anything about this beforehand I was rather shocked that it is not only legal, but actually practised in significant numbers in the States. What this tells me is that sexual activity with minors is socially acceptable (perhaps even condoned?) where there is parental or guardian consent, and that there is no ‘best interests of the child’ criterion in place. If this is the background of social norms, we should not be surprised to find that it would be very difficult to restrict, or perhaps even regulate, the production and sale of child sex robots (provided that there is some equivalent of consent). Furthermore, with the removal of harm, and the activity taking place in a commercialised environment, perhaps different norms would make regulation even more difficult to sustain. A similar situation may apply with the regulation of pornography and countervailing claims for protection under freedom of expression, but I think the point is better made by child sex robots.

I don’t think I answered your original question, but my impression is that we need to figure out what it is that is objectionable in the realm of sexual activities before we can apply these to sex robots. I think this means that I am more leaning towards the side of relying upon general rules rather than treating sex robots as exceptional in a legal sense. A different way of approaching this would be to consider what is disruptive, in a legal and policy sense, of introducing sex robots. Unless there are deficiencies in the current system which is revealed or opened up by their introduction, my sense is that the general system should be able to cope.

 

What does responsible robotics mean to you?

Responsible robotics, to me, means that there are connections drawn between the various human beings involved in the design, production, sale and use of robotics. Responsibility is often conflated with accountability, which is backward-looking and, in a sense, imposed upon the parties, and I think that this is too narrow a conception of responsibility. Instead, prospective concepts of responsibility also need to be developed, to establish the obligations and expectations for those individuals involved in robotics so that there would be less need to rely on formal (and confrontational) legal processes. To me, the responsible robotics project embodies a human-centred approach to developing potentially powerful and transformative technologies that could deeply impact both human life and individual potential. My wife works with participatory practices and community engagement, especially in the urban context, and what strikes me from our discussions is how only relatively recently architecture and urban space discussions have started even to think about human behaviour and the human in their design. It used to be (and may still be) that conventional architecture was concerned with building a building, not a city, and that people would simply have to adapt to the final outcome once it was built. Now, however, there is at least some architectural thinking which seeks to observe and understand how human beings interact in their physical space, to design that space to suit their behaviour, and perhaps even to empower the citizen through urban design.

This seems like a huge digression, but my point here is that we can draw analogies from the way urban space was designed to how it can be designed: not only with the human in mind, but for the human, and to empower and enable the human (and perhaps even to promote human flourishing). I understand responsibility here in a vastly expanded sense: responsibility almost in terms of pride for those on the production side, and gratitude for those on the user side. What I mean is that responsible robotics would mean that there are no qualms, hesitations or reservations for those who produce the robots, and that the users of those robots would be made happier and more capable by the robots in their lives. So to me, responsible robotics embodies precisely this desire – to design robotics in such a way as to maximise human potential, and to have this as the overriding objective. If robotics are to be developed at all, this would be the purpose for which I would develop them and for no other.