
Input for Report on right to privacy in the digital age: opinions of Korean Civil Society Organizations for the upcoming report by the United Nations High Commissioner for Human Rights on the right to privacy in the digital age

 

Submitted by the Digital Information Committee of MINBYUN – Lawyers for a Democratic Society, Korean Progressive Network Jinbonet, People’s Solidarity for Participatory Democracy, and the Institute for Digital Rights

 

We are civil society organizations working to ensure the right to privacy in the digital age in the Republic of Korea. We would like to provide the following information and opinions for the upcoming report by the United Nations High Commissioner for Human Rights on the right to privacy in the digital age:

 

1. The Impact of Artificial Intelligence on Human Rights – Cases from the Republic of Korea

 

In the Republic of Korea, artificial intelligence (hereinafter “AI”) systems have been introduced not only in services such as search, automatic translation, recommendation algorithms, and chatbots, but also in financial services, recruitment by companies and public organizations, and criminal justice procedures. However, legislation regulating ‘high-risk AI’ and providing for ‘human rights impact assessments’ or ‘audit systems’ for AI is absent or inadequate.

 

Since the Covid-19 outbreak in 2020, the government of the Republic of Korea has operated the ‘Epidemic Investigation Support System (EISS),’ which uses AI. The system processes and profiles various kinds of personal data, such as nearby location data, credit card data, transportation card data, and entry logs, to track the locations and routes of Covid-19 patients. The EISS was developed on the ‘smart city’ technology platform operated by the Ministry of Land, Infrastructure and Transport, and relies on extensive data profiling. The police have launched AI policing that predicts which areas nationwide have high crime rates.1 The Ministry of Science and ICT and local governments have introduced facial recognition technology on street CCTVs for contact tracing of confirmed Covid-19 cases.2 Moreover, the government has recently introduced, or is unilaterally pushing, automated decision-making systems using AI in the areas of recruitment, social welfare, and administrative measures.

 

On October 23, 2020, NGOs in the Republic of Korea announced the results of a fact-finding survey on the use of AI interviews by public institutions, based on information disclosed by 13 public institutions. According to the survey, the institutions used AI interview programs provided by private companies without verifying, through impact assessments, whether interviews based on those programs were fair and non-discriminatory. Unsuccessful applicants cannot know how the AI assessed them and determined the pass/fail outcome. In addition, various public institutions failed to fulfill their responsibility to protect applicants’ personal information by outsourcing their recruitment procedures to private companies in violation of the Personal Information Protection Act. As a result, there is a concern that applicants’ personal information transferred to private companies may be misused for private purposes. Most institutions do not ensure the transparency of AI interviews, as they do not disclose key information about them.

 

The government has also been pushing plans to use AI to detect benefit fraud. Moreover, the government and the National Assembly enacted the Framework Act on Public Administration in March 2021; Article 20 of the Act allows administrative agencies to make decisions using fully automated systems, including AI. Furthermore, the National Assembly has been reviewing an amendment to the Electronic Government Act that would allow administrative agencies to use AI in providing electronic government services.

 

In early 2021, ‘Lee Luda’, an AI chatbot of a private company that attracted 820,000 users within two weeks of its launch, became highly controversial in the Republic of Korea because it automatically produced hateful and discriminatory speech against women, sexual minorities, persons with disabilities, and racial minorities. It was also found that Scatter Lab, the chatbot’s developer, used personal information collected without consent to develop ‘Lee Luda’. The company used about 9.4 billion dialogues from “KakaoTalk”, a mobile messaging application, collected from 600,000 users through its other services since 2013, without proper protection measures. In particular, about 200,000 of the people whose personal information the company used without consent were children under the age of 14. The Personal Information Protection Commission of the Republic of Korea imposed a total of about 100 million won in administrative surcharges and fines on the company.

 

2. Laws related to Artificial Intelligence

 

UN Secretary-General António Guterres recommended “to create adequate legal frameworks and mechanisms to ensure full accountability in the context of the use of new technologies, including by reviewing and assessing the gaps in national legal systems, creating oversight mechanisms, where necessary, and making available avenues for remedies for harm caused by new technologies”.6 However, in the Republic of Korea, although high-risk AI is already being introduced in the public and private sectors, there are no appropriate laws, procedures, or supervisory systems in place to regulate it.

 

On June 9, 2020, the Framework Act on National Informatization was wholly revised into the Framework Act on Intelligence Informatization. The law requires AI-related policies to be established under comprehensive plans and implementation plans for intelligent information society policies, but the authority in charge is the Ministry of Science and ICT, which is tasked with promoting technology and industry. The Ministry, which has been pushing for the voluntary implementation of AI ethics and for regulatory exemptions, announced plans to promote the management and impact assessment of high-risk AI under the law, but civil society believes that the Ministry is not the appropriate body to conduct human rights impact assessments of AI. Korean civil society believes that the assessment and due diligence of high-risk AI should be based on human rights standards rather than technical evaluation, and that some AI posing very high risks to human rights needs to be banned.

 

Nonetheless, AI’s impact on human rights is rarely considered in the discussion and drafting of the government’s and the National Assembly’s recent fast-paced AI-related policies and bills. There is no impact assessment of AI in the public and private sectors in the Republic of Korea. In particular, since the Covid-19 crisis, there has been insufficient public consultation on these issues, such as gathering opinions from data subjects.

 

The Personal Information Protection Act was enacted in 2011 to regulate personal information files processed for business purposes, but it does not yet contain rules for regulating AI, including rights of data subjects with respect to automated decisions that have a significant impact on individuals. Although there is a personal information impact assessment system, it applies only to certain personal information processing by public institutions and does not cover all high-risk processing. In addition, the revised Personal Information Protection Act, which took effect in August 2020, allows personal information to be used, without the consent of data subjects, for statistical and scientific research purposes other than the original purpose, as long as the information is pseudonymised. The provision was introduced to foster new technologies such as big data and AI. However, it can be applied to virtually any research that claims to be scientific, including commercial research within an enterprise (whether or not it has an overriding public interest over data subjects’ rights); it allows pseudonymised personal information to be provided and sold for other enterprises’ research and to be retained even after a particular research objective has been accomplished; and it allows the personal information held by different entities to be combined, with the original entity retaining the combined information in pseudonymised form. For these reasons, Korean civil society is concerned that companies’ customer information will be indiscriminately used, sold, and shared with other companies beyond its original purposes.

 

Furthermore, there are currently no laws governing profiling. Provisions related to automated decision-making are included in a further revision of the Personal Information Protection Act that is currently being prepared by the Personal Information Protection Commission; yet, in the eyes of civil society, they are weaker than those of the European Union’s General Data Protection Regulation.

 

3. Korean Civil Society’s Position on Artificial Intelligence

 

On May 24, 2021, 120 Korean civil society organizations held a press conference and announced a declaration calling for AI policies that guarantee human rights, safety, and democracy. The declaration was designed to clarify civil society’s demand for AI regulation and to criticize the Korean government’s neglect of regulating AI, which has an enormous impact on human rights, safety, and democracy, despite the rapid introduction of AI products and services in the public and private sectors. Civil society organizations have called for the enactment of laws to regulate AI; the main points are as follows.

 

Calling for the enactment of a law regulating AI, Korean civil society demands:

 

First, national-level supervision of AI products and systems by the Fair Trade Commission, the National Human Rights Commission of Korea, and the Personal Information Protection Commission, rather than by the Ministry of Science and ICT, which focuses on fostering industry.

 

Second, transparent disclosure of information on AI products and services and guaranteed participation of consumers, workers, and citizens in their design. In particular, algorithms should, in principle, be disclosed, and unexplainable AI should not be introduced in the public sector.

 

Third, the establishment of AI impact assessment procedures and the prohibition or regulation of high-risk AI in both the public and private sectors.

 

Finally, the establishment of procedures for effective remedies for rights violations and damages caused by AI.

 
