ICT Global Trend Part 6-8
Privacy Issues with AI (2/2)
Inferred data by profiling: notice is required to the individual concerned
This article is written by Mr. Yusuke Koizumi, Chief Fellow, Institute for International Socio-Economic Studies (NEC Group).
What sorts of actions and measures are being taken to cope with the privacy issues surrounding AI (Artificial Intelligence)?
I had an opportunity to visit the Information Commissioner’s Office (ICO, the national data protection authority in the UK) in July this year and hear about these issues.
Among the five privacy issues raised in the previous report:
(1) As for personal data inferred by profiling, such as sensitive data or data the individual would not wish to have disclosed, the EU GDPR (General Data Protection Regulation) specifies that inferred personal data must be protected in the same manner as personal data collected in the ordinary way. Even inferred personal data therefore requires one of the lawful grounds for processing, such as the individual's explicit consent.
According to the ICO, the important point is that profiling carried out without the individual's knowledge is in itself problematic, and it is essential that prior notice be given when new personal data are inferred from already-known data.
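A minimal sketch of what such a safeguard might look like in practice is shown below. All names (DataSubjectRecord, infer_attribute, the notice text) are hypothetical illustrations, not part of the GDPR or any ICO guidance; the point is simply that an inference is gated on a recorded lawful basis and that a notice is issued before the inferred attribute is stored.

```python
from dataclasses import dataclass, field

# The lawful grounds for processing under the GDPR (abbreviated list).
LAWFUL_BASES = {"consent", "contract", "legal_obligation", "legitimate_interests"}

@dataclass
class DataSubjectRecord:
    subject_id: str
    known_data: dict
    inferred_data: dict = field(default_factory=dict)
    notices_sent: list = field(default_factory=list)

def infer_attribute(record: DataSubjectRecord, attribute: str,
                    value, lawful_basis: str) -> None:
    """Store an inferred attribute only after the two checks motivated above."""
    # Check 1: a lawful ground for processing must be recorded.
    if lawful_basis not in LAWFUL_BASES:
        raise PermissionError(f"no lawful basis recorded for inferring {attribute!r}")
    # Check 2: prior notice, so profiling never happens without the
    # individual's knowledge (stand-in for a real notification channel).
    record.notices_sent.append(
        f"We will infer {attribute!r} from your existing data ({lawful_basis})."
    )
    record.inferred_data[attribute] = value

record = DataSubjectRecord("A-001", {"postcode": "EC1A", "purchases": 42})
infer_attribute(record, "income_bracket", "high", lawful_basis="consent")
print(record.inferred_data)   # {'income_bracket': 'high'}
print(record.notices_sent)
```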
(2) The potential for social discrimination to be reinforced or newly created because bias is embedded in AI algorithms and machine-learning training data.
(3) The social responsibility to clarify the opacity of algorithms (the black-box problem).
As for (2) and (3), the ICO stressed the necessity of auditing algorithms properly.
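As an illustration of one step such an audit might include, the sketch below compares approval rates across two groups in a hypothetical decision log (a demographic-parity check). The data, group names, and the idea of flagging a large gap are invented for illustration; real audits are considerably broader.

```python
from collections import defaultdict

# (group, approved) pairs from a hypothetical automated-decision log
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved      # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)                          # approval rate per group
print(f"parity gap: {gap:.2f}")       # a large gap warrants human scrutiny
```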
As for (3), the GDPR further specifies that when an automated decision significantly affecting the data subject is made, the individual must be provided with meaningful information about the logic involved.
Regarding this responsibility, the ICO explained that a detailed technical description of how an algorithm works is not required. What matters are explanations such as: “What categories of data are used to create a profile, and why are these data considered relevant?”, “Which categories of data influence the automated decision-making, and how?”, and “How can the individual adjust or change his or her behavior to obtain better results?”
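To make the ICO's distinction concrete, the sketch below explains a hypothetical scoring decision in exactly those terms: which categories of data were used, and how much each one pushed the score. The weights, inputs, and threshold are invented, and the technique shown is a simple per-feature contribution breakdown for a linear score, not any particular authority's or vendor's method.

```python
# Invented weights of a simple linear credit score. The output lists each
# data category and its contribution, the kind of "meaningful information
# about the logic" described above, rather than the model internals.
weights = {"payment_history": 0.6, "income": 0.3, "account_age_years": 0.1}
applicant = {"payment_history": 0.9, "income": 0.4, "account_age_years": 0.2}

contributions = {k: weights[k] * applicant[k] for k in weights}
score = sum(contributions.values())

for category, contribution in sorted(contributions.items(),
                                     key=lambda kv: -kv[1]):
    print(f"{category}: contributed {contribution:+.2f} to the score")
print(f"total score: {score:.2f} (hypothetical approval threshold: 0.50)")
```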
As for (4), the case where automated decision-making results in a decision disadvantageous or unfavorable to the individual (in employment, loan applications, insurance conditions, and so on), the GDPR rightly specifies “the right to obtain human intervention on the part of the controller”, “the right to express his or her point of view”, and “the right to contest the decision.”
Taking the example of an automated loan examination, the GDPR grants a variety of rights, such as the right to contest the automated decision, the right to obtain human intervention, and the right to provide a supplementary statement and data. It does not, however, grant a right to have the automated decision changed.
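The sketch below illustrates how these rights might be wired into a hypothetical loan pipeline: an adverse automated decision can be contested, the applicant's statement is recorded, and the case is routed to a human reviewer. The function names and the risk threshold are invented for illustration.

```python
def automated_decision(risk_score: float) -> str:
    """Fully automated outcome based on an invented risk threshold."""
    return "approved" if risk_score < 0.5 else "refused"

def contest(applicant_statement: str) -> str:
    """Record the applicant's point of view and route to a human reviewer."""
    print(f"Routing to human reviewer with statement: {applicant_statement!r}")
    # The reviewer may uphold or overturn; what the GDPR guarantees is the
    # intervention and the hearing, not a changed outcome.
    return "pending human review"

status = automated_decision(0.72)            # yields "refused"
if status == "refused":                      # adverse outcome: rights apply
    status = contest("My income increased last month.")
print(status)                                # pending human review
```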
As for why a decision made solely by AI is not accepted while one made by a human being is, the ICO commented as follows. Although it is true that human beings may have biases or make mistakes and misjudgements, the guaranteed right to obtain human intervention gives the individual the important peace of mind that he or she can, if necessary, have a dialogue with a human being and is not completely shut inside an AI environment.
As for (5), the accuracy of predicted data when profiling deals with an individual's future risk of illness, disease, criminal offence, and so on, a typical example might be:
An AI system in a hospital predicts that individual A has a high risk of heart disease, but in the end individual A does not suffer from heart disease. Is the prediction made by the AI accurate or not?
Under the GDPR, if data are considered inaccurate, the individual has the right to have them rectified. This can be very important when an inaccurate automated decision is used to calculate the individual's insurance premium. Yet it may well be correct that the AI classified him or her into the high-risk category for heart disease.
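A small worked example may clarify why the classification can be right even when the illness never occurs. In the hypothetical cohort below, the AI assigns a 30% heart-disease risk; if roughly 30% of similarly scored people do develop the disease, the prediction is well calibrated as a probability, even for the individuals who stay healthy. The figures are invented for illustration.

```python
predicted_risk = 0.30                 # AI's heart-disease risk for this group
cohort_outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 1 = developed the disease

observed_rate = sum(cohort_outcomes) / len(cohort_outcomes)
print(f"predicted: {predicted_risk:.0%}, observed: {observed_rate:.0%}")
# predicted: 30%, observed: 30% -> the prediction is accurate as a
# probability, even though 7 of the 10 individuals never fell ill.
```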
In the EU, a heated debate is under way on how “accuracy” should be judged or interpreted for data predicted by profiling.
