Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence

On September 15, 2021, the Personal Data Protection Authority (“Authority”) published its “Recommendations on the Protection of Personal Data in the Field of Artificial Intelligence” (“Recommendations”), addressed to developers, manufacturers, service providers and decision-makers operating in this field, to be applied in artificial intelligence applications. The Recommendations can be found here.

 

The Importance of Personal Data Protection in the Field of Artificial Intelligence

 

Artificial intelligence is a technology created entirely by artificial means that exhibits human-like characteristics and behaviours through the workings of machines, without the involvement of any living organism. It is most commonly encountered in the applications we use on our phones (e.g. navigation, facial recognition systems, health services), where algorithms are built on the information these applications collect and store. While artificial intelligence offers convenient solutions in our daily lives through such applications, it is in essence a deeper and less predictable technology than these applications suggest. Especially with increased online activity and the shift to electronic systems during the pandemic, most companies have begun to integrate artificial intelligence into the technologies they use. However, many companies fail to fully secure these systems, and even when they do, the inherently unpredictable nature of the technology exposes them to vulnerabilities and the risk of data loss. For this reason, artificial intelligence, while simplifying people's lives, also carries serious risks for fundamental rights and freedoms.[1]

 

Chief among these risks is ensuring that personal data is processed and stored in accordance with the Personal Data Protection Law No. 6698 (“PDPL”) in artificial intelligence applications. Big data, which is considered the brain of artificial intelligence and the main source of the machine learning that feeds its algorithms, contains a great deal of data that is, or has the potential to become, personal data. The protection of personal data is therefore of great importance in the context of artificial intelligence. Accordingly, when artificial intelligence technology is used, strict attention must be paid to compliance with the PDPL and the relevant legislation on data processing, so that the fundamental rights and freedoms of the persons whose personal data are processed (“data subjects”) are not violated.[2] The Authority's recommendations can be summarized as follows:

 

1) General Recommendations

  • When implementing artificial intelligence technology, the fundamental rights and freedoms of data subjects should be respected and protected. In particular, in personal data processing based on artificial intelligence (applications used for health purposes today, such as e-Nabız or Life Fits Home, fall within this scope) and during data collection, attention must be paid to the principles clearly stated in the Law: lawfulness and fairness; accuracy and, where necessary, keeping data up to date; processing for specified, explicit and legitimate purposes; being relevant, limited and proportionate to the purposes for which the data are processed; and storage for the period laid down by the relevant legislation or required for the purpose for which the personal data are processed.
  • In personal data processing based on artificial intelligence implementations, if the processing of personal data is expected to be risky, a “privacy impact assessment” should be applied and the lawfulness of the data processing activity should be decided within this framework. A privacy impact assessment is defined as the examination of practices such as processes, technologies, actions and systems that have, or are likely to have, an impact on the privacy of individuals, and the identification and elimination or prevention of the problems arising from those practices. In these studies, compliance with the legislation should be ensured and a data protection compliance program specific to each project should be established.[3]
  • In personal data processing based on artificial intelligence implementations, where personal data accepted by the Law as “special categories of personal data”, such as health data, are processed, stricter measures should be taken and the rules stipulated in the legislation for such data should be observed.
  • If the same result can be achieved without processing personal data in the development and implementation of artificial intelligence technologies, the data should be anonymized (a simple illustration of this approach is sketched below).
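To make the data minimization and anonymization recommendation more concrete, the following Python sketch shows how directly identifying fields might be removed from a record before it is used to train a model. It is a hypothetical illustration only; the field names, masking rules and the project-specific salt are assumptions for this example and are not prescribed by the Recommendations.

```python
import hashlib

# Hypothetical illustration: the field names and masking rules below are
# assumptions for this example, not requirements taken from the Recommendations.
DIRECT_IDENTIFIERS = {"name", "national_id", "email", "phone"}

def anonymize_record(record: dict, salt: str = "project-specific-salt") -> dict:
    """Drop direct identifiers and replace them with a salted one-way hash,
    so the training data no longer points back to an identifiable person."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Strictly speaking this step is pseudonymization; true anonymization may
    # also require generalizing or suppressing quasi-identifiers (age, postcode, etc.).
    key_source = salt + str(record.get("national_id", ""))
    cleaned["record_key"] = hashlib.sha256(key_source.encode()).hexdigest()[:16]
    return cleaned

raw = {"name": "A. Example", "national_id": "12345678901", "age": 34, "diagnosis_code": "J45"}
print(anonymize_record(raw))  # identifiers removed before the data reaches model training
```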

 

2) Recommendations for Developers, Manufacturers, and Service Providers

 

In the Recommendations:

– Developers are defined as those who develop content and applications for all kinds of products belonging to artificial intelligence systems,

– Manufacturers are defined as those who produce all kinds of products, such as software and hardware, that compose artificial intelligence systems,

– Service providers are defined as real or legal persons who provide products and/or services using artificial intelligence-based systems, data collection systems, software and devices.

 

  • In the development and design of models by these persons, personal data privacy should be the priority; an approach focused on preventing and reducing risk should be adopted; the risk of discrimination and other negative effects that may arise for data subjects at every stage of data processing should be prevented; and, taking into account the nature of the personal data, the use of data should be kept to a minimum.
  • Data subjects should be granted the right to object to the procedures applied during the development of artificial intelligence technologies, and the rights of data subjects arising from national and international legislation should be protected. In this context, risk assessments should be carried out with the participation of the people who may be affected by the application.
  • Products and services should not be designed in a way that subjects data subjects to automated data processing without their explicit consent or without informing them before their data is processed; alternatives that interfere less with personal rights should be preferred. At the same time, algorithms that make clear for what purpose, how and to what extent the data of data subjects are processed should be adopted; in short, accountability should be ensured.
  • These designs should take the risks mentioned above into account and should make it possible for users of the AI application to stop the data processing activity and to have their data deleted, destroyed or anonymized. In addition, as stipulated in the PDPL, people who use the application should be informed about the reasons for, the method of and the possible consequences of personal data processing, and a consent mechanism should be established (a minimal sketch of such a mechanism is given after this list).
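As an illustration of the last two points (informing users, obtaining consent, and being able to stop processing and erase data on request), the following Python sketch outlines one possible interface. The class and method names are hypothetical assumptions made for this example; the Recommendations do not prescribe any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    informed: bool = False       # user was told why, how and with what consequences data is processed
    consent_given: bool = False  # explicit consent obtained before processing starts
    withdrawn: bool = False      # user exercised the right to stop processing

@dataclass
class UserDataStore:
    consents: dict = field(default_factory=dict)
    data: dict = field(default_factory=dict)

    def may_process(self, user_id: str) -> bool:
        """Processing is allowed only for informed, consenting users who have not opted out."""
        c = self.consents.get(user_id)
        return bool(c and c.informed and c.consent_given and not c.withdrawn)

    def withdraw_and_erase(self, user_id: str) -> None:
        """Stop processing and delete the user's data (anonymization would be an alternative)."""
        if user_id in self.consents:
            self.consents[user_id].withdrawn = True
        self.data.pop(user_id, None)

store = UserDataStore()
store.consents["u1"] = ConsentRecord(informed=True, consent_given=True)
store.data["u1"] = {"history": ["example interaction"]}
if store.may_process("u1"):
    pass  # run the AI processing step only after the consent check passes
store.withdraw_and_erase("u1")  # user opts out: processing stops and the data is removed
```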

 

3) Recommendations for Decision Makers

  • At all stages of artificial intelligence applications, the principle of “accountability” should be observed and appropriate measures should be taken. As part of these measures, risk assessment procedures should be defined and AI applications should be developed in accordance with the principles governing the processing of personal data.
  • When situations arise that may affect the fundamental rights and freedoms of data subjects, it should be possible to apply to the supervisory authorities authorized to regulate and/or supervise the field of artificial intelligence, and incentives should be put in place to ensure cooperation between such authorities on data privacy, consumer protection, competition and anti-discrimination issues.
  • Individuals should be informed about AI applications and should be enabled to take an active role in this process. Investments should be made in digital literacy and educational resources to inform data subjects.

 

 

Conclusion

 

The use of artificial intelligence is becoming more common by the day, and as a result personal data is processed, directly or indirectly, within the scope of such applications. The protection of personal data is essential for safeguarding the fundamental rights and freedoms of individuals. The Recommendations, which are also consistent with the internationally published documents on the protection of personal data in AI implementations, aim to prevent rights violations by ensuring that personal data is processed in accordance with the data processing principles and the PDPL in the AI applications that make our lives easier.

Finally, an examination of the European regulations on this issue shows that:

– A “risk-based approach” is adopted, and AI activities are categorized according to risk-rating criteria in line with this approach,

– AI activities will be subject to separate forms of supervision based on these categories by the European AI Board, which is intended to be established.

In this context, we believe it would be useful for Turkey to adopt this approach by establishing a dedicated board and making the necessary legislative preparations, so that such a rapidly developing field can be supervised and regulated and rights violations prevented.

 

[1] Ipek Sucu, “Artificial Intelligence as the New World of the Digital Universe and a Study on the Film Her”, New Media Electronic Journal, 2020, Vol. 4, Issue 1, p. 41

 

[2] Gizem Gültekin Várkonyi, “Risk Assessment of Artificial Intelligence Technology in Terms of Personal Data Protection”, in Artificial Intelligence and Big Data Technologies: Approaches and Applications, ed. Şeref Sarıoglu and Mustafa Umut Demirezen, Ankara: Nobel Academic Publishing Education Consultancy, 2020, p. 343

 

[3] Muhittin Tataroglu, “Privacy Impact Assessment (PIA) in the Prevention of Privacy Problems”, Journal of Management and Economics, 2012, Vol. 20, Issue 1, p. 279