Research on National ID and Privacy in the Age of Artificial Intelligence
April 2018 to March 2019
Izumi Harada (Executive Fellow, Institute for International Socio-Economic Studies)
Yusuke Koizumi (Chief Fellow, Institute for International Socio-Economic Studies)
The fields of artificial intelligence (AI) and big data analysis bring clear benefits, but it has also been pointed out that they expose individuals to a range of privacy-related risks. Transparency is one of the most important principles of privacy protection, yet when individual attributes, behaviors, and tendencies are inferred through profiling, or when decisions about individuals are made by AI algorithms built with machine learning, it is difficult to explain the reasoning behind those decisions in human language. With regard to “fairness”, there is concern that if the data used for machine learning contains bias, AI cannot be expected to avoid that bias on its own, and may end up reinforcing social discrimination. It has also been pointed out that profiling produces a one-sided image of a person that can work against that person’s interests as they navigate the online world.

Government agencies and private organizations in Japan and abroad are working on guidelines that address the impacts and ethical implications of AI, including the risks it poses to individuals. For example, in July 2018 the Conference toward AI Network Society at the Ministry of Internal Affairs and Communications published a paper on the principles of AI utilization, with the aim of having them shared internationally as soft law among the member states of the G7 and the OECD. In December 2018, the European Commission’s High-Level Expert Group on Artificial Intelligence published a draft of its AI Ethics Guidelines. It is essential that industry identify the issues surrounding AI and privacy and consider its own countermeasures.

In this study, we conducted field surveys (in the UK and the USA) and reviewed the literature on domestic and international trends in AI and privacy protection. We also made recommendations from an industry perspective and published our findings both inside and outside the company.