Module 1.

1. GDPR 4 Data Support

Welcome to this online training for research support staff.

The General Data Protection Regulation (GDPR) has been in effect since 25 May 2018. But what does this law actually mean for scientific research? And how can you, as support staff, provide the best support to researchers in protecting personal data?

As research support staff, it is your job to help researchers protect personal data as well as possible during their research. And that starts from as early as the design phase of the research. Therefore, in this training, we take you through all the steps involved in designing and conducting a research project.

You will learn about all the important concepts, get to know the aspects of the GDPR that are most important for researchers, and also find out about the measures to be taken for protecting personal data in research. In short, this training will equip you to fulfil your role as research support staff properly.

We wish you the best of luck with this ‘GDPR 4 Data Support’ training!

 

RDNL (Research Data Netherlands)

1.1. Mission

The GDPR 4 Data Support training aims to contribute towards the professionalisation of research support staff with respect to the responsible handling of personal data, as laid down in the General Data Protection Regulation (GDPR). By explaining the meaning of the GDPR in the context of scientific research, we provide research support staff with practical tools for their work.

Proper cooperation between the different support specialisations is of great importance in this respect. In this way, data stewards, internal review boards, ethics committees, privacy officers, and research project managers can provide integrated and consistent support for research activities.

One of the goals of this training is to help resolve many of the common ambiguities and misunderstandings relating to the GDPR, thereby reducing the number of research projects delayed by a misunderstanding of the GDPR. We also expect this GDPR compliance support to have a positive impact on researchers: guaranteeing data protection will no longer be seen as an obstacle, but rather as a precondition for being able to carry out innovative and high-risk research in a responsible manner.

1.2. Target group

The training is aimed at data supporters in the broad sense who wish to acquire in-depth knowledge about personal data, the GDPR, and the measures to be taken to protect these personal data. It is assumed that participants are familiar with the information provided in the basic training course entitled ‘Essentials 4 Data Support’. Researchers who wish to expand their own knowledge of these topics can also take this training, so that they can work in compliance with the GDPR right from the design phase of their research.

1.3. Learning objectives

The name of the training – GDPR 4 Data Support – refers to the primary purpose of the training, i.e. deepening and broadening the basic knowledge and skills (the ‘Essentials’) of privacy officers or research support staff with respect to the protection of personal data in scientific research.

After doing the training, the participant:

●      Is aware of what constitutes direct and indirect personal data, as well as regular and special categories of personal data, and can indicate what types of personal data are or are not covered by the GDPR

●      Understands the main principles of the GDPR and can apply these to scientific research

●      Can make reasoned choices to safeguard the GDPR principles

●      Can determine the technical and organisational measures that are desirable and necessary within a specific research context to ensure proper protection of personal data within the research project

In this training, our primary goal is to offer an accessible way in which the complex subject matter of the GDPR can be concretely used to provide support to researchers. By including quizzes, case studies, and summaries of the main points at different times during the training, we hope to create an enjoyable learning experience for the participants.

1.4. Structure of the training

This training consists of four theoretical modules and one practical module. The theoretical modules address the following topics:

●      Personal data

●      The GDPR

●      Principles relating to the processing of personal data (the principles contained in Article 5 of the GDPR)

●      Technical and organisational measures to be taken for the protection of personal data in research

Each module begins with a knowledge test that allows participants to test their prior knowledge. A brief summary of the key points from the module is included with each test. The subsequent theoretical part of each module is narrative in nature. Wherever possible, case studies are included to bring the theory to life and make it more concrete.

The practical module (Module 6: Complex cases) presents a series of research studies, where the participant must select the right answers to questions about how the researchers have safeguarded the underlying principles of the GDPR and what personal data protection measures have been taken. This module is intended as practice material, allowing participants to put into practice everything they have learned in the theoretical modules.

1.5. Explanation of concepts

During this training, you will come across various kinds of GDPR-related concepts. On this page, we offer you a brief explanation of each concept, sometimes supplemented with a reference to the source. For a list of definitions, see the GDPR itself: GDPR Article 4 (pp. 33 - 35).

1.5.1. Scientific research

Research that has the following characteristics:

(1) It has a clearly defined research objective that contributes to the public interest or to fundamental research.

(2) It results in findings and, where applicable, underlying data which are made available to academia, via publication and announcements in print or online, for use in scientific debates and for verification purposes; and/or

(3) It results in findings and, where applicable, underlying data which are made publicly available to society in general. This is expected to result in a positive social impact, for example by contributing to policy development and by providing reliable input for social debates in a society where unvalidated claims are all too often made and even fake news is spread for undemocratic purposes; and/or

(4) It results in findings and, where applicable, underlying data which are made available to private organisations (companies). This is expected to result in innovation, a stronger internal market, and therefore more jobs for EU citizens.

Ideally, publications should be made available based on open access principles (the golden route or the green route) and the supporting/underlying data made available in accordance with the FAIR Principles (findability, accessibility, interoperability, and reusability) and the principle of ‘as open as possible, as closed as necessary’.

Researchers who process personal data for ‘archiving purposes in the public interest, scientific or historical research purposes or statistical purposes’ adhere to shared national and European standards for scientific integrity as well as discipline-specific standards relating to methodology and ethics. Research can only be considered as responsible research if there are checks and balances in place, such as Ethics Boards and Internal Review Boards for experimental and non-experimental research that play a role in reviewing the research design, proposed methodology, and standards of the proposed research project. Finally, researchers are accountable for their scientific contributions, the manner in which these contributions have come about, and how the rights and freedoms of research participants have been protected. They receive academic credit for these contributions via attribution and citations in publicly shared publications and datasets.

 

1.5.2. Personal data

Any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.

1.5.3. Anonymous data

Information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. (GDPR Recital 26)

Hence, anonymous data are not regarded as personal data. The GDPR therefore does not relate to the processing of anonymous data, including for statistical or research purposes. (GDPR Recital 26)

In its Opinion 05/2014 on Anonymisation Techniques (WP216), the WP29 (presently: the European Data Protection Board (EDPB)) recognises the value of anonymisation, for example, in the context of open data, but it also notes that in practice it is difficult to ensure actual anonymity (especially with the increase in open data and the increased possibilities of linking datasets to one another):

The WP acknowledges the potential value of anonymisation in particular as a strategy to reap the benefits of ‘open data’ for individuals and society at large whilst mitigating the risks for the individuals concerned.

However, case studies and research publications have shown how difficult it is to create a truly anonymous dataset whilst retaining as much of the underlying information as required for the task.

The WP29 further indicates that there is no uniform view within the EU regarding what constitutes an acceptable risk of re-identification of anonymous data. Should one strive for computational anonymity or perfect anonymity? (Source, page 26)

To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly. To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments. (GDPR Recital 26)

In practice, in the Netherlands, the best guarantee for anonymity of analysis results for researchers is provided by Statistics Netherlands (CBS), i.e. when researchers work with CBS microdata (in Dutch only), often in combination with their own set of data. During the output checking process, for which specific output guidelines have been established, the analysis results are assessed for re-identification risks and measures are taken to ensure that the results do not entail any risk of disclosure (for individuals, institutions or companies).

Read more

See also the article ‘Praktijkvoorbeeld: pseudonimiseren bij het Centraal Bureau voor de Statistiek (CBS)’ (‘Case study: pseudonymisation at Statistics Netherlands (CBS)’, in Dutch only), which discusses CBS, anonymisation, and pseudonymisation.

1.5.4. Citizen service number

The citizen service number (BSN) is a unique personal number primarily intended for use in interactions between citizens and the government. Organisations not related to the government may use the BSN only if that is required by law. The BSN makes it easy to link information from different files. Therefore, careless use of the BSN poses certain privacy risks, such as the risk of misuse of personal data and identity fraud. (Source (in Dutch only))

The BSN is not classified as a special category of personal data under the GDPR or the Dutch General Data Protection Regulation (Implementation) Act (Uitvoeringswet AVG (UAVG), in Dutch only), but neither is it treated as ordinary personal data: under the GDPR, European Member States are allowed to set their own conditions for processing a national identification number such as the BSN. (Source (in Dutch only))


Article 87 of the GDPR states that countries may determine additional provisions regarding national identification numbers (e.g. the BSN in the Netherlands):

Processing of the national identification number

Member States may further determine the specific conditions for the processing of a national identification number or any other identifier of general application. In that case the national identification number or any other identifier of general application shall be used only under appropriate safeguards for the rights and freedoms of the data subject pursuant to this Regulation.

 

This clear overview from the international law firm Bird & Bird (2018) shows how countries handle this matter differently. For example, you can see that the following applies:

 

Finland 13.11.2018

Under the Data Protection Act, a Personal Identity Code (PIC) may be processed with the explicit consent of the data subject or when it is important to unequivocally identify the data subject for the purposes provided by law or for carrying out an assignment prescribed by law. Processing is also allowed for carrying out rights and responsibilities of the data subject or the controller, or for the purposes of scientific or historical research or for statistical purposes. PIC may also be processed for certain additional purposes listed in the Data Protection Act such as lending, debt collection, and insurance.

And:

Sweden 06.09.2018

The Act stipulates that information regarding personal identification numbers or classification numbers may only be processed without consent where clearly justified in light of (i) the purpose of the processing; (ii) the importance of positive identification; or (iii) some other worthy reason. The Government may issue regulations on other justifications for the processing of personal identification numbers or classification numbers.

 

However, this option is not provided for in the UAVG in the Netherlands:

Netherlands 17.09.2018

In line with art. 6 GDPR, art. 44 UAVG provides that national identification numbers may only be processed if such processing is provided for by law, and only for those purposes as stipulated in the relevant legislation.

Therefore, in the Netherlands, only parties that perform a statutory task are allowed to process BSNs. This is stated in UAVG Article 46 (in Dutch only) as follows:

Processing of national identification numbers

1 A number prescribed by law for the identification of an individual may be used in the processing of personal data only for the purpose of implementing the relevant law or for purposes determined by law.

2 An order in council may indicate cases, other than those referred to in the first paragraph, in which a number, as referred to in the first paragraph and designated at such time, may be used. Further rules regarding the use of such a number may also be issued.

Universities do not have such a statutory task in the area of research. Moreover, whether or not a BSN may be processed is not a decision that a controller is authorised to make on its own; this must be determined by law.

 

For example, with respect to the statutory task of student enrolment at a higher education institution, the Higher Education and Research Act (in Dutch only) (Wet op het hoger onderwijs en wetenschappelijk onderzoek (WHW)) stipulates the following in Section 7.39:

 

Personal identification number to be provided upon enrolment

1 At the time of enrolment, the student or external student must also provide their personal identification number. If the student or external student is able to demonstrate that they cannot provide a personal number, the enrolment will take place with due observance of the second paragraph. Section 7.31d(2) shall apply.

2 If the student or external student is able to demonstrate that they cannot provide a personal identification number, the university board will notify Our Minister within two weeks of the available details of the student or external student, referred to in the first paragraph, as well as their address and place of residence.

3 Within eight weeks after receiving the notification referred to in the second paragraph, Our Minister will provide the university board with the student’s or external student’s BSN or, if it appears that the student or external student has not been provided with a BSN by the government, the education number (onderwijsnummer) will be provided.

 

This means that, although the higher education institution receives the BSN for the statutory task (as described in the WHW) of enrolment (from DUO (Education Executive Agency) (in Dutch only)), it is not permitted to process the BSN for any other purpose, such as research, even if this is in the public interest.

 

Read more
1. General communication (in Dutch only) from the Dutch Data Protection Authority (Dutch DPA) regarding the BSN

2. Public authorities have certain options for processing a BSN, but universities and institutes of higher education are not public authorities. See also here (in Dutch only).

 3. The provisions regarding the BSN originate from the Citizen Service Number (General Provisions) Act (in Dutch only) (Wet algemene bepalingen burgerservicenummer).

 

1.5.5. Covert research

This refers to research that is conducted without the research participants knowing in advance that they are participating in research. As a rule, this type of research is conducted in cases where transparency regarding the research objective can reasonably be expected to interfere with the objective of the research itself, for example because research participants who know the research objective may exhibit socially desirable or otherwise non-spontaneous behaviour.

General principles

1. In general, the GDPR states that there must be a legal basis for the research, such as consent (GDPR Art. 6); besides this, there are two other common legal grounds for scientific research, i.e. legitimate interest (GDPR Art. 6) and public interest (GDPR Art. 6).

2. One of the principles of data processing (GDPR Art. 5) is transparency and the obligation to inform research participants/data subjects.

Perspective for action in research

3. However, the GDPR allows sufficient room for manoeuvre, especially for the purposes of scientific research, and additional provisions may be included in applicable sector-specific codes. For this, see the scope of ‘in accordance with the law’ in WP29 Opinion 03/2013 on purpose limitation. [Adopted on 2 April 2013]

This includes all forms of written and common law, primary and secondary legislation, municipal decrees, judicial precedents, constitutional principles, fundamental rights, other legal principles, as well as jurisprudence, as such 'law' would be interpreted and taken into account by competent courts.

Within the confines of law, other elements such as customs, codes of conduct, codes of ethics, contractual arrangements, and the general context and facts of the case, may also be considered when determining whether a particular purpose is legitimate. This will include the nature of the underlying relationship between the controller and the data subjects, whether it be commercial or otherwise.

Source: https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf

4. Covert research is recognised in codes of ethics and codes of practice. See, for example:

4.1. Statement of Ethical Practice, 2017 of the British Sociological Association. See (p. 5): https://www.britsoc.co.uk/media/24310/bsa_statement_of_ethical_practice.pdf

Covert Research

14. There are serious ethical and legal issues in the use of covert research but the use of covert methods may be justified in certain circumstances. For example, difficulties arise when research participants change their behaviour because they know they are being studied. Researchers may also face problems when access to spheres of social life is closed to social scientists by powerful or secretive interests.

15. However, covert methods violate the principles of informed consent and may invade the privacy of those being studied. Covert researchers might need to take into account the emerging legal frameworks surrounding the right to privacy. Participant or non-participant observation in non-public spaces or experimental manipulation of research participants without their knowledge should be resorted to only where it is impossible to use other methods to obtain essential data.

16. In such studies it is important to safeguard the anonymity of research participants. Ideally, where informed consent has not been obtained prior to the research, it should be obtained post-hoc.

Denscombe also talks about covert research, the ethical aspect, and the fact that covert research may be justified but cannot be based on informed consent.

4.2. See (p. 209): Martyn Denscombe. The Good Research Guide. For small-scale social research projects. Fourth Edition.

Ethics

Participant observation can pose particular ethical problems for the researcher. If ‘total’ participation is used, then those being studied will not be aware of the research or their role in it. They can hardly give ‘informed consent’. The justification for such covert research cannot depend on consent, but draws instead on two other arguments. First, if it can be demonstrated that none of those who were studied suffered as a result of being observed, the researcher can argue that certain ethical standards were maintained. Second, and linked, if the researcher can show that the identities of those involved were never disclosed, again there is a reasonable case for saying that the participant observation was conducted in an ethical manner.

Whichever variant of participant observation is used, there is the possibility that confidential material might ‘fall into the hands’ of the researcher. Now, while this is true of most research methods, its prospects are exacerbated with the use of participant observation, owing to the closeness and intimacy of the researcher’s role vis-à-vis those being researched. Confidential material might be disclosed inadvertently by someone who does not know the research interest of the participant. Or, possibly even more problematic, things might be revealed as a result of the trust and rapport developed between the researcher and those being observed. This could be true for any of the variants of participant observation. The ethical problem is whether to use such material and how to use it. And here the guidelines are quite clear: (1) any use of the material should ensure that no one suffers as a result; and (2) any use of the material should avoid disclosing the identities of those involved. Any departure from these guidelines would need very special consideration and justification.

5. In both sources, the proposed approach is the same:

5.1. There is a plausible justification for covert research, which is to be assessed by an ethics committee or internal review board.

5.2. There is a broader ethical consideration related to this type of research. From the perspective of GDPR compliance, a condition for this is that the research proposal should include appropriate technical and organisational measures (GDPR Art. 25) to ensure that ‘no one suffers as a result’. This means working in a secure environment and providing access to these data only to those who actually require such access, for the period of time for which this is important, and to the part of the data to which access is necessary. Finally, the identity of the research participants/data subjects or group of data subjects must never be disclosed in publications or otherwise.

5.3. Based on the transparency principle (GDPR Art. 5), data subjects should be informed of the research findings after the end of the study, but in such a way that this does not lead to the disclosure of the group or allow for the re-identification of individual data subjects.

6. In its DPIA Decree, the Dutch DPA has mentioned covert research as the first example for when a DPIA is required. See: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/stcrt-2019-64418.pdf (in Dutch only).

Module 2. Personal data

‘The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality.’

GDPR Recital 4

Take a look around you. Maybe you are in your office right now or at home in your study. If someone who does not know you were to enter this room, how quickly would this person know who this office or study belongs to? What ‘data’ in the room can be traced back to you as a unique ‘person’? This might be a picture of your family, a certificate on the wall, or a postcard with your name and address on it.

All of this constitutes personal data, and that is what this module is about. As a warm-up, we will start with a short quiz: what do you know about personal data?

2.1. Quiz

1. What is not considered as direct personal data?

●      Your height

●      Your sexual orientation

●      Your licence plate number

Your emotions

2. What are directly traceable personal data?

Personal data that allows the identity of an individual to be determined in a straightforward manner

●      Personal data officially known to a government agency

●      Personal data that, when combined with no more than two other items of personal data, can be traced back to a unique individual

●      Personal data that is unchangeable

3. What are indirectly traceable personal data?

Personal data which cannot be traced back to an individual directly, but which can be used to identify the individual when combined with other data

●      Personal data that must be combined with at least three other items of personal data to identify a unique individual

●      Personal data that appear in multiple databases

●      Personal data which cannot be traced back to an individual directly, but which can be used to identify the individual when combined with a last name

4. Does the GDPR distinguish between directly and indirectly traceable personal data?

●      Yes. The GDPR only protects directly traceable personal data

●      Yes. The GDPR only protects directly traceable personal data known to the Dutch DPA

No. The GDPR considers both directly and indirectly traceable personal data as ‘personal data’

●      No. The GDPR protects all personal data that are considered as personal data by the individual themselves

5. Can opinions be personal data?

Yes, as long as they can be traced back to a unique individual when combined with other personal data

●      Yes, as long as they are officially stored in a database

●      No, opinions cannot be objectively established and are therefore not viewed as personal data

●      No, opinions are exempted from the GDPR

6. What kind of personal data fall within the special categories of personal data as defined in the GDPR?

Political opinions

Religious or philosophical beliefs

●      Diet

Trade union membership

 

7. Why are there special categories of personal data within the GDPR?

Because data subjects could potentially be significantly harmed if they became public

●      Because traditionally these data also received extra protection under the Dutch Personal Data Protection Act

●      Because the publication of these data requires explicit consent from the data subjects

●      Because the processing of these data implies substantially higher costs for a processor

8. Can researchers process special categories of personal data in a research project without restrictions?

No. The processing of special categories of personal data is prohibited by law, unless the researcher has obtained the consent of the research participants for this

●      No. The processing of special categories of personal data is prohibited by law, unless the researcher does so based on an approved research plan

●      No. The processing of special categories of personal data is prohibited by law, unless the researcher can rely on one of the bases for processing ‘ordinary’ personal data

●      Yes. By definition, carrying out scientific research falls under the exceptions granted for processing special categories of personal data

9. What are the two obligations that a researcher definitely must fulfil if they want to process special categories of personal data?

Performing a data protection impact assessment (DPIA) and consulting a privacy expert (such as the DPO)

●      Performing a data protection impact assessment (DPIA) and obtaining explicit consent from the data subjects

●      Consulting a privacy expert (such as the DPO) and obtaining informed consent from the data subjects

●      Obtaining explicit consent from the data subjects and setting out the agreements made in a record of processing activities

10. What are fully anonymised personal data under the GDPR?

●      The GDPR views such data as regular personal data because they can be converted back into regular personal data via reverse engineering

●      Fully anonymised personal data form a separate category within the GDPR, similar to the special categories of personal data

●      Fully anonymised personal data are personal data only if they can be traced back to a unique individual

Since fully anonymised personal data can no longer be traced back to a unique individual, these data fall outside the scope of the GDPR

11. Can the GDPR also apply to deceased persons?

●      Yes, if it concerns Dutch celebrities, for example

●      Yes, if the deceased person died less than six months ago

No, the GDPR only applies to living persons

●      No, the GDPR only applies to living persons born in the Netherlands

12. Can the GDPR also apply to data about organisations?

●      Yes, this is indeed possible if the organisation processes special categories of personal data

●      Yes, sensitive company data may also fall under the GDPR in exceptional cases

No, the GDPR only applies to natural persons

●      No, the GDPR only applies to living persons born in the Netherlands

13. Do you always need to have a legal basis for keeping an address book at home containing personal data of others?

●      Yes, no one is permitted to process personal data of others without a legal basis

●      Yes, in addition to the legal basis, explicit consent from these individuals is also required

●      No, it is not necessary in all cases as long as you properly protect the personal data

No, a legal basis is not necessary if the personal data are only shared privately

14. Personal data that you store and share privately are not covered by the GDPR. What is the name of this exception?

Household exception

●      Private exception

●      Domestic exception

●      Legitimate interest

15. What does the term ‘territorial scope’ mean within the GDPR?

That data of EU residents are protected by the GDPR regardless of whether the processing takes place within or outside the EU

●      That data of EU residents are protected by the GDPR as long as the processing takes place within the EU

●      That data of EU residents are protected by the GDPR as long as the processing is done by organisations within the EU

●      That data of EU residents are protected by the GDPR regardless of whether the residents are in the EU at that time

 

 

2.2. Learning objectives

To understand whether it is necessary to take certain measures for the protection of personal data in research, it is first important to have a proper understanding of what personal data actually are. In this module you will learn about:

●      The difference between directly identifiable and indirectly identifiable personal data

●      Special categories of personal data and why it is important to provide additional protection for these data

●      When the GDPR does and does not apply with respect to personal data.

Good luck!

2.3. Direct and indirect traceability

Personal data come in many forms. For some types of personal data, there seems to be little risk when they are shared within a particular context, such as your name and address when ordering something on the internet. Other types of personal data are, by their nature, far more sensitive, such as your sexual orientation, membership of a political party, or religious affiliation. The disclosure of such sensitive personal data can lead to undesirable or even dangerous situations in various contexts.

2.3.1 What are personal data?

In other words, your personal data deserve proper protection. The protection of these personal data has been laid down in the General Data Protection Regulation (GDPR). The GDPR is a European regulation that standardises the rules for the processing of personal data by private companies and public authorities across the European Union.

In addition to this protection, the GDPR also serves a second purpose, namely to promote the exchange of personal data within the EU. For researchers, both of these purposes are relevant: for research involving personal data, researchers want to be able to collect, process, and publish the data in the best possible way. But they must also take appropriate measures to protect the collected personal data as well as possible. This is something we will explore in more detail later in this training.

To understand whether it is necessary to take certain measures for the protection of personal data in research, it is first important to have a proper understanding of what personal data actually are. The definition used by the GDPR (Article 4.1 Definitions) is as follows:

Personal data: any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.

2.3.2. Indirect and direct

There are many types of personal data. An important distinction can be made between directly and indirectly traceable personal data. But what exactly are directly and indirectly traceable personal data? Why is it also relevant to properly protect indirectly traceable personal data? And how can you identify unique individuals by linking datasets consisting of indirectly traceable personal data?

The GDPR itself does not distinguish between directly or indirectly traceable personal data; both are regarded as personal data. As soon as someone can be identified as a unique individual in any way via certain directly or indirectly traceable data, these data are regarded as personal data. As research support staff it is therefore very important to be aware that the scope of personal data is broader than just a first and last name or your BSN.

Indirectly traceable personal data can therefore be almost anything. As long as links can be established between databases, allowing a unique individual to be identified by combining indirectly traceable personal data with other data, such indirectly traceable personal data must be protected under the GDPR to the same extent as directly identifiable personal data, which are more readily recognised as personal data. For researchers, this is all the more relevant since many research projects are set up precisely to collect such ‘quasi-identifiers’, for example emotions, opinions, or disorders. These too are personal data, albeit indirectly traceable.

2.3.3 Identifiable

In the parliamentary history of the Dutch Personal Data Protection Act (in Dutch only) (Wet bescherming persoonsgegevens (WBP)), the provision containing the same definition explains that directly identifiable data are those that can be used to establish the identity of an individual unequivocally and in a straightforward manner. These include data such as name, address, and date of birth which, when combined, are so unique and so specific to a particular individual that this individual can generally be identified with certainty or with a high degree of probability.

Such data are also used in social and economic life to distinguish individuals from one another. The situation is different when the data cannot be used to identify a particular individual directly but can be linked to that individual by taking a few further steps. This type of data is called indirectly identifiable data. Even if the name of the individual is removed, that individual’s identity can still be derived under certain circumstances by combining the remaining data with other data. (See Parliamentary Papers II 1997/98, 25892, 3, pp. 14-15 (in Dutch only); see also WP29, Opinion 4/2007 on the concept of personal data (WP136), 20 June 2007, pp. 13-14.)

Source: Text & Commentary Privacy and Data Protection Law, Definitions in: Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, Article 4, Definitions.

2.4. In the spotlight

Example: 'Pseudonymisation'

Example: 'Pseudonymisation'

Let us look at an example where indirectly traceable personal data can be used to identify a unique individual. Suppose, as a researcher, you want to know if there is a correlation between height, disease, and income in a particular village. You can find out these data via questionnaires or interviews, for example, after which you have a useful dataset to work with. Since the names of the research participants are irrelevant to your research, you leave such data out of your dataset.

After having done this, it seems as if the data can no longer be traced back to a specific person, but nothing could be further from the truth. The dataset shows that one person earns significantly more money than the other villagers, has an incurable disease, and is 2.05 metres tall. Coincidentally, there happen to be two persons of this height in the village, but one of them is a farm worker and the other is the mayor. The obvious conclusion is that the person in the dataset is the mayor, and with that it is also clear that he is suffering from an incurable disease.
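To make the linking step concrete, the sketch below shows, in Python, how a handful of quasi-identifiers can single someone out. All names and values are fictitious, and the code is only an illustration of the reasoning, not part of any actual research workflow.

# Illustration only: re-identification by combining quasi-identifiers.
# All names and values are fictitious.

# A 'de-identified' research record: the name is gone, but quasi-identifiers remain.
research_record = {"height_cm": 205, "income_eur": 250_000, "diagnosis": "incurable disease"}

# Publicly known facts about the village (local news, social media, etc.).
villagers = [
    {"name": "A. Farmhand", "height_cm": 205, "occupation": "farm worker"},
    {"name": "B. Mayor", "height_cm": 205, "occupation": "mayor"},
    {"name": "C. Baker", "height_cm": 172, "occupation": "baker"},
]

# Step 1: match on the quasi-identifier that survived the removal of names.
candidates = [v for v in villagers if v["height_cm"] == research_record["height_cm"]]

# Step 2: background knowledge (a mayor earns far more than a farm worker)
# narrows two candidates down to one, re-identifying the data subject.
likely_match = [v for v in candidates if v["occupation"] == "mayor"]
print(likely_match)  # the 'anonymous' record turns out to be traceable after all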

So you see that seemingly untraceable personal data can still be traced back to a unique individual. You can reduce this risk through a measure such as pseudonymisation. According to the GDPR (Article 4.5 Definitions), pseudonymisation means:

 

Pseudonymisation: the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.
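As an illustration of what ‘kept separately’ can mean in practice, here is a minimal sketch in Python: direct identifiers are replaced by random pseudonyms, and the key table linking pseudonyms to names is written to a separate file that would be stored elsewhere, under stricter access controls. All names, values, and file names are fictitious assumptions made for this example.

import csv
import secrets

# Fictitious input: a directly identifying column plus research variables.
participants = [
    {"name": "Jane Doe", "height_cm": 178, "income_eur": 41_000},
    {"name": "John Smith", "height_cm": 205, "income_eur": 250_000},
]

key_table = {}       # pseudonym -> name: the 'additional information'
pseudonymised = []   # the dataset the research team actually works with

for row in participants:
    pseudonym = "P-" + secrets.token_hex(4)   # random code, not derivable from the name
    key_table[pseudonym] = row["name"]
    pseudonymised.append({"id": pseudonym, "height_cm": row["height_cm"], "income_eur": row["income_eur"]})

# The key table goes to a separate, better-protected location, so that the
# research dataset on its own can no longer be attributed to a person.
with open("key_table.csv", "w", newline="") as f:
    csv.writer(f).writerows(key_table.items())

with open("research_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "height_cm", "income_eur"])
    writer.writeheader()
    writer.writerows(pseudonymised)

Note that, as long as the key table exists, the pseudonymised dataset still counts as personal data under the GDPR; pseudonymisation reduces the risk but does not take the data outside the scope of the Regulation.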

Pseudonymisation measures

In the example involving the mayor, as a researcher you might decide not to work with specific salaries and heights, but with categories of salaries and heights: for example, the category ‘taller than 1.80 m’ rather than ‘2.05 m’. This makes it considerably harder to trace indirectly traceable personal data back to a unique individual.

There are two points to note here:

●      When applying pseudonymisation, you should always consider the context within which the research is being conducted. Applying a category for height such as ‘taller than 1.80 m’ means that certain data will no longer be traceable in certain countries. But in a country where the average height of the population is shorter, the same category may actually mean greater traceability.

●      As a result, there are no standard laws or rules for pseudonymisation; each researcher will have to consider within their own context which pseudonymisation measures best reduce the risk of identification.

For researchers who want to get started with pseudonymisation, an application called Amnesia (to be found in the EOSC Marketplace) might be of interest. The Amnesia application helps a researcher determine what the minimum size of a group of people with the same characteristics should be to make identification impossible.
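The underlying idea (checking how large the smallest group of records with the same combination of generalised characteristics is) can be sketched as follows. This is not Amnesia’s own interface but a simple Python illustration with fictitious data; the category boundaries are assumptions that would have to be chosen per research context.

from collections import Counter

# Fictitious pseudonymised records with exact values.
records = [
    {"id": "P-1", "height_cm": 205, "income_eur": 250_000},
    {"id": "P-2", "height_cm": 183, "income_eur": 41_000},
    {"id": "P-3", "height_cm": 179, "income_eur": 39_000},
    {"id": "P-4", "height_cm": 181, "income_eur": 44_000},
]

def generalise(row):
    # Replace exact values by coarse categories; the cut-offs are context-dependent choices.
    height_band = "taller than 1.80 m" if row["height_cm"] > 180 else "1.80 m or shorter"
    income_band = "above 44,000" if row["income_eur"] > 44_000 else "44,000 or below"
    return (height_band, income_band)

groups = Counter(generalise(r) for r in records)
k = min(groups.values())  # size of the smallest group of look-alike records
print(groups)
print("smallest group size:", k)  # a value of 1 means someone is still unique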

Example: 'Open data'

Example 'Open data'

Government organisations provide open datasets to researchers and the general public for a variety of reasons. One of these organisations, the RDW (the Netherlands National Vehicle and Driving Licence Registration Authority), offers a comprehensive set of databases online: see here (in Dutch only). One of these databases, ‘Gekentekende_voertuigen’, contains the following data for all Dutch vehicles: registration number, vehicle type, make, trade name, periodic vehicle inspection certificate expiry date, date of registration, and gross private motor vehicle and motorcycle tax. On 31 August 2021, this dataset contained 14.3 million records and the dataset had been downloaded 132 million times by that date (since 8 September 2015).

This dataset identifies, for example, the cars whose periodic vehicle inspection certificate has expired, and this information, in combination with actual observations of cars on the road (linked to the registration number), could be exploited by a malicious person; for context, the fine for driving on public roads with an expired periodic vehicle inspection certificate is €130. The RDW states the following (in Dutch only) about the open data it publishes: ‘These data are not considered sensitive in terms of privacy, fraud or competition.’ However, the risk to privacy lies in the fact that datasets can be combined and linked based on unique characteristics, such as a registration number. See also, for example, this article by Andy Green: New PII Discovered: License Plate Pictures.
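The sketch below shows how little effort such a link takes, assuming a downloaded open-data file with hypothetical column names kenteken (registration number) and vervaldatum_apk (inspection expiry date); the file name, column names, and registration number are illustrative assumptions, not the RDW’s actual schema.

import csv

# Hypothetical download of an open vehicle dataset; column names are assumed for illustration.
with open("open_vehicle_data.csv", newline="") as f:
    vehicles = {row["kenteken"]: row for row in csv.DictReader(f)}

# A registration number observed on the street or read from a social-media photo.
observed_plate = "XX-123-Y"  # fictitious

match = vehicles.get(observed_plate)
if match:
    # The open record is now linked to a specific, observable car and, indirectly,
    # to its owner or driver: seemingly non-personal data become personal data.
    print(observed_plate, "->", match["vervaldatum_apk"])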

Finally, sensitive information may become available via hacks or other forms of cybercrime, and this information, when combined with publicly available information, can give a fairly detailed picture of the identity of persons, resulting in an invasion of the privacy of the data subjects. See, for example, the RDC data breach. In addition, many people post pictures of their car on social media without making the number plate illegible, which makes it possible to link the person to the car and number plate. Due to data breaches in traffic control systems, such as in Westbroek (in Dutch only), ‘a file with images, originating from the traffic cameras, was accessible via a web server without authorisation. Thousands of images from the period 2017 to 2021 were stored in this file. Vehicle number plates, locations, and times can be seen on the images.’ Taken together, all of these data make it possible to identify the cars, where they were driving, and who drove them. If such traffic cameras are set up at, for example, hospital car parks, each image can expose a further layer of information about these individuals, without their knowledge, which could be misused by a malicious party.

Possible identifiable links

What kind of links can be made based on this dataset? Below are some examples of how this seemingly anonymous data can lead to identification of unique individuals:

●      People proudly post their new car on social media, such as Facebook and Instagram, with the number plate visibly on display. Based on such a photo, the RDW database can be used to find out not only what defects this car may have but also where this person was at a specific point in time. This can be useful information for car thieves looking for a specific type of car or for tabloids wanting to find out where famous Dutch people live.

●      With this dataset, organisations that know which number plates belong to which individuals (such as companies that lease cars to employees) can track their employees’ whereabouts during the day.

●      Using the data from the RDW dataset, it is possible to find out which vehicles are often in close proximity to each other. Is there a possibility that the drivers of these vehicles are engaged in a compromising relationship or have criminal connections?

2.5. Special categories

The GDPR distinguishes between two types of personal data: regular personal data, which were defined earlier in this module, and special categories of personal data. The latter are also referred to as sensitive personal data; both designations are correct. In this chapter, you will discover the differences between the two types of personal data and what these differences mean in the context of research.

2.5.1 What kind of data are we talking about?

Special categories of personal data are personal data that say something about a person’s:

●      Racial or ethnic origin

●      Political opinions

●      Religious or philosophical beliefs

●      Trade union membership

●      Genetic data or biometric data processed for the purpose of unique identification

●      Health

●      Sexual orientation

●      Criminal history

If a researcher wishes to process regular personal data in the context of research, this is permitted provided there is a legal basis for this and sufficient technical and organisational measures have been taken to protect the personal data as effectively as possible (the bases and measures are discussed in greater detail in Chapter 3: GDPR). However, the processing of special categories of personal data is, in principle, prohibited (Article 9.1 GDPR):

Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation shall be prohibited.

However, there are certain exceptions for scientific research. On the next page, you will discover what these exceptions are.

2.5.2. Exceptions for research

According to Article 9.2 of the GDPR, there are several exceptions that allow for the processing of special categories of personal data, despite the general prohibition. On this page we will discuss these exceptions and look at an example that illustrates the rationale behind them.

Let us start by looking at how the exception for scientific research is included in the GDPR. With respect to research, Article 9.2j applies:

9.2. Paragraph 1 shall not apply if one of the following applies:

(j) processing is necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) based on Union or Member State law which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject.

Therefore, the researcher must always assess and substantiate the reason or reasons why the processing of special categories of personal data is necessary in relation to the scientific purpose of the research in question (this refers to the principles of subsidiarity and proportionality in relation to the research question – see here for an explanation of these terms).

The researcher must look carefully for alternative and less drastic ways to achieve the same purpose. This always starts with two considerations:

1.     Are the special categories of personal data really needed in this research?

2.     As a researcher, how do you collect these data in the least harmful or risky manner?

 

An answer to the second question might be, for example, to extract these necessary special categories of personal data from existing records, rather than collect them again via surveys. Researchers should be well aware of the need to provide for even greater and better protection for this type of sensitive personal data.

2.5.3. In the spotlight

Example: Sensitive personal data

For an extreme example of the sensitivity of this kind of data, we need only think back to World War II. During that period, if an individual was known to the German occupiers as a Jew or communist, this could have fatal consequences. Even today, expressing homosexual feelings can lead to corporal punishment or prison sentences in several countries.

And in the Netherlands too, certain political beliefs within specific professions can negatively affect a person’s career. In some cases, knowledge of a person’s health situation can affect the outcome of a job interview. In other words, the disclosure of sensitive personal data can have a significant impact, both directly and indirectly, on the individual concerned, and that is why these personal data require a greater degree of protection.

Would you like to know more about the potential consequences of sharing sensitive personal data? If so, read the book ‘Je hebt wel iets te verbergen’ (‘So you think you have nothing to hide’) by Dimitri Tokmetzis and Maurits Martijn.

A researcher or organisation is not permitted to use sensitive personal data unless one of the exceptions mentioned in Article 9.2 applies. This is different from working with regular personal data; a researcher may use such data as long as there is a basis for this and appropriate measures have been taken. The prohibition against processing sensitive personal data may seem to contradict the objective of the GDPR, which is to encourage the ‘free movement of data’ referred to in the full title of the GDPR: Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data.

However, since there are established control mechanisms in the field of research based on scientific methodology and traditions, it is possible to process sensitive data within the framework of the GDPR under certain conditions. These exceptions can be found in Article 89 of the GDPR and are further explained for the Dutch context in Article 24 of the UAVG (in Dutch only).

2.5.4 Listing the exceptions

Article 89(2) of the GDPR lays down the main derogations that apply when personal data are processed for research purposes. It allows Union or Member State law to provide for derogations from certain rights of participants in scientific research (i.e. the data subjects):

Where personal data are processed for scientific or historical research purposes or statistical purposes, Union or Member State law may provide for derogations from the rights referred to in Articles 15, 16, 18, and 21 subject to the conditions and safeguards referred to in paragraph 1 of this Article in so far as such rights are likely to render impossible or seriously impair the achievement of the specific purposes, and such derogations are necessary for the fulfilment of those purposes.

In the Netherlands, the room for interpretation provided for in Article 89(2) of the GDPR has been filled in via Article 44 of the UAVG (in Dutch only) (Member State law). Pursuant to Article 44 UAVG, derogations are possible from Articles 15, 16, and 18 of the GDPR. These concern:

Article 15 - Right of access by the data subject

Article 16 - Right to rectification

Article 18 - Right to restriction of processing

For example, for reasons of reproducibility and scientific integrity, the further processing of new data from the individual must be suspended once consent is withdrawn, but personal data processed up to the time of withdrawal of consent (and which have therefore been legitimately collected and processed) may continue to be used. This should be clearly communicated to prospective research participants in advance.

2.6 Data protection impact assessment

The exceptions for research mean that it is possible to work with sensitive personal data within certain frameworks. But these exceptions are also accompanied by some obligations. Here we discuss two of the most important obligations.

 

2.6.1 Obligation 1: Consulting a privacy officer

The first obligation is that, if a researcher wishes to process sensitive personal data within a research project, they must always consult a privacy officer within the institution (such as a Data Protection Officer (DPO)). The research project may be allowed to go ahead in the desired form, but only and always in consultation with the privacy officer of the institution: researchers should not make this decision themselves.

The privacy expert will work with the researcher to determine what additional measures and safeguards are needed to ensure proper protection of the sensitive personal data. In addition, as part of the data protection impact assessment, the privacy officer will verify compliance with the principles specified in Article 5; this is discussed in greater detail in Chapter 3: GDPR.

2.6.2 Obligation 2: Data protection impact assessment

The second obligation when working with sensitive personal data within a research project is to conduct a data protection impact assessment (DPIA). The GDPR defines a DPIA in Article 35.1 as follows:

Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data. A single assessment may address a set of similar processing operations that present similar high risks.

Therefore, a DPIA identifies the potential risks within the research project and offers the researcher a possible approach for determining the measures to be taken. While it is desirable to perform a DPIA for every research project, this is mandatory – under penalty of a fine – prior to drafting the research plan or research application if sensitive personal data are involved in the research. The outcome of a DPIA provides insight into the additional measures to be taken.

Additional measures that may be necessary may include security measures such as pseudonymisation, anonymisation, end-to-end encryption, etc. GDPR Article 32 lists the possible measures. Here the GDPR only indicates that a researcher must take measures and specifically refers to encryption and pseudonymisation as examples, but in effect a variety of measures are possible depending on the context of the research. However, these measures must be proportionate under the GDPR; purchasing an expensive encryption system that allows a researcher to hide only the first names of participants defeats the object, as stated in GDPR Recital 83:

In order to maintain security and to prevent processing in infringement of this Regulation, the controller or processor should evaluate the risks inherent in the processing and implement measures to mitigate those risks, such as encryption. Those measures should ensure an appropriate level of security, including confidentiality, taking into account the state of the art and the costs of implementation in relation to the risks and the nature of the personal data to be protected.
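As an illustration of one such measure, encrypting a data file before it is stored or shared could look like the following minimal sketch. It uses the third-party Python package cryptography; this is one possible tool among many, the GDPR does not prescribe any specific product, and the file names are fictitious.

from cryptography.fernet import Fernet  # third-party package: cryptography

# Generate a key once and store it separately from the data (e.g. in a key vault).
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a (fictitious) data file before storing or transferring it.
with open("research_data.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("research_data.csv.enc", "wb") as f:
    f.write(ciphertext)

# Only whoever holds the key can decrypt the file again.
plaintext = fernet.decrypt(ciphertext)

The protection stands or falls with where the key is kept: if the key is stored next to the encrypted file, the measure adds little in practice.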

Please note: if the research falls within an existing category with respect to the processing of sensitive personal data (in other words, the research method is similar to that of earlier research projects for which a DPIA has been performed), a new DPIA is not necessary, since the conclusions regarding the measures can be adopted from the previous DPIA.

Module 3. Principles relating to the processing of personal data

‘In order to be able to demonstrate compliance with this Regulation, the controller should adopt internal policies and implement measures which meet in particular the principles of data protection by design and data protection by default.’

GDPR Recital 78

 

The GDPR has a rich history. The first initiatives to protect our personal data more effectively emerged as early as the 1970s. The underlying principles of all laws relating to the protection of personal data have remained virtually unchanged since that time. Thorough knowledge of these principles is a must for anyone involved in privacy and data protection!

In the GDPR, the seven underlying principles are referred to as the ‘Article 5 principles’, quite logically, since they are contained in the fifth article of the GDPR. Test what you already know about these seven principles in this short opening quiz. Good luck!

3.1. Quiz

[bold = correct answer]

In which year did the first law concerning the protection of personal data come into force in the Netherlands?

●      1989 (correct)

●      1971

●      1999

●      2015

What is one of the main differences between the GDPR and previous laws concerning the protection of personal data?

●      That the Dutch Data Protection Authority can issue hefty fines (correct)

●      That organisations are now obliged to store all personal data in a central database

●      That the Dutch Data Protection Authority can access all of an organisation’s data at its own initiative

●      That organisations have an obligation to appoint a Data Protection Officer (DPO)

The principles in Article 5 of the GDPR are the same for all research projects. Does this mean that the technical and organisational measures to be taken are also the same for each project?

●      No. The GDPR principles only provide the framework and researchers must fill in this framework themselves with the necessary technical and organisational measures within the context of the research (correct)

●      No. The GDPR principles only provide the framework; the technical and organisational measures to be taken will always differ from one research project to another

●      Yes. For each principle, the GDPR clearly indicates the set of technical and organisational measures to be taken

●      Yes. Each principle in the GDPR is in turn based on a fixed set of technical and organisational measures

With respect to research, what is meant by GDPR Article 5, Principle 1: ‘lawfulness, fairness and transparency’?

●      That every data subject involved in the research clearly knows in advance how the personal data will be lawfully processed and what the data subject’s rights are (correct)

●      That each data subject has sufficient knowledge of the GDPR to make an informed choice prior to the research whether or not to participate in it

●      That each data subject involved in the research always has the option to appeal if the personal data are not processed lawfully

●      That, to every data subject involved in the research, it is clear how the personal data will be processed

With respect to research, what is meant by GDPR Article 5, Principle 2: ‘purpose limitation’?

●      That the personal data processed in the research context are necessary for the specified, explicitly described and legitimate purpose of the research or for a purpose that is not considered incompatible with the original purpose (correct)

●      That the way in which personal data are processed in the research context must be consistent with the overall objectives of the research institute

●      That the way in which personal data are processed in the research context must be aligned with the purposes of any follow-up research project

●      That the way in which personal data are processed in the research context must be consistent with the legal basis for the research

With respect to research, what is meant by GDPR Article 5, Principle 3 ‘data minimisation’?

●      Only those personal data will be processed that are adequate, relevant and limited to what is necessary in relation to the research purpose (correct)

●      All personal data relevant to the legitimate and concrete research purpose must be immediately pseudonymised

●      Only those personal data that are relevant to the legitimate and concrete research purpose may be used in any follow-up research

●      Only those personal data that are relevant to the legitimate and concrete research purpose may be made available for a peer review

With respect to research, what is meant by GDPR Article 5, Principle 4: ‘accuracy’?

●      Personal data must be accurate and, where necessary, kept up to date (correct)

●      Only personal data that have been conclusively proven to be accurate should be included in research

●      Personal data must be periodically checked by data subjects for accuracy

●      Only personal data that have been identified as accurate by the data subjects may be included in a publication

With respect to research, what is meant by GDPR Article 5, Principle 5: ‘storage limitation’?

●      Personal data may not be kept for longer than is necessary for the purposes for which the personal data are processed (correct)

●      Personal data must be kept in a form that does not allow for identification of the data subjects after death

●      Personal data must be kept in a form that does not allow for identification of the data subjects in the context of any follow-up research

●      Personal data must be kept in a form that does not allow for identification of the data subjects if they have submitted an official request thereto

With respect to research, what is meant by GDPR Article 5, Principle 6: ‘integrity and confidentiality’?

●      That a researcher must take appropriate measures to ensure protection against unlawful access, loss, damage, or destruction of personal data (correct)

●      That a researcher must at all times be honest and respect confidentiality when dealing with the data subjects involved in the research

●      That a researcher may not share the personal data collected in the research project without the explicit consent of the data subjects

●      That a researcher must take appropriate measures to avoid data breaches at all times during any follow-up research

With respect to research, what is meant by GDPR Article 5, Principle 7: ‘accountability’?

●      The researcher must comply with the principles in Article 5 and be able to demonstrate such compliance, if requested (correct)

●      The researcher must account for their actions to all data subjects after the end of a research project

●      The researcher is responsible for the secure processing of all personal data used within the research and any follow-up research

●      The researcher must comply with the principles in Article 5 and be able to account for this to the internal data protection officer (DPO), if requested

3.2 Learning objectives

The seven principles on which the GDPR is based are general in nature. They provide you, as privacy officer, with guidelines for taking all the necessary technical and organisational measures but do not tell you exactly what to do in every situation. That is why in this module we offer you the Metro Map which shows you step by step the measures you need to take. Subsequently, the next module explains these measures (referred to as ‘actions’) in even greater detail. In this module you will learn about:

●      The rich history of the GDPR

●      The precise meaning of each of the seven Article 5 principles

●      How to use the Metro Map for a step-by-step overview of the necessary measures

Good luck!

3.3 The history of the GDPR

Let us go back in time. The national census meant that, every ten years, there would be a knock on every door in the Netherlands to record the number of people living there. This happened in 1971 as well.

3.3.1 Civil resistance

Even before 1971, there was growing resistance from citizens to this method of recording data. But from 1971 onwards, this protest movement came to be led by the ‘Census Vigilance Committee’ (Comité Waakzaamheid Volkstelling) as the above poster by Lucebert illustrates. ‘Critics tended to associate the census with World War II; after all, that was only 25 years ago at the time. The occupying forces made good use of the well-maintained population registers to deport the Jews’ (source: Andere Tijden, Dutch TV programme on history).

Moreover, didn’t the government already have all the data on Dutch residents, for example via birth records? Last but not least, there were objections to the fact that the census also recorded information on philosophical beliefs, disabilities, and income. And it turned out that, in practice, an increasing number of people were providing fictitious information to the census authorities, because refusal was punishable with a fine of 500 Dutch guilders or imprisonment.

Andere Tijden: De burger in kaart - De Volkstelling in 1971

Watch the broadcast (in Dutch only) below:

 

3.3.2. National Committee

Around 1970, there was a discussion in society about the powers of the National Security Service (BVD, currently known as the General Intelligence and Security Service (AIVD)) relating to the monitoring of citizens via wiretaps. A bill entitled ‘Further rules for the protection of telephone privacy’ (Nadere regels ter bescherming van het telefoongeheim) debated by the House of Representatives in 1970 focused on the lack of control over the National Security Service (source: Jan Holvast, The 1971 census - report of the first general public discussion on the invasion of privacy (De Volkstelling van 1971 verslag van de eerste brede maatschappelijke discussie over aantasting van privacy). 2013. p. 56.)

The discussion surrounding the 1971 census prompted Minister of Justice Van Agt to establish a National Committee for the protection of privacy in relation to the recording of personal data in 1972. This Koopmans Committee published its findings in the report entitled ‘Privacy and Recording of Personal Data’ (Privacy en Persoonsregistratie) (source: National Committee for the protection of privacy in relation to the recording of personal data. Privacy and Recording of Personal Data. Final report of the National Committee on the protection of privacy in relation to the recording of personal data (Staatscommissie bescherming persoonlijke levenssfeer in verband met persoonsregistraties. Privacy en persoonsregistratie. Eindrapport van de Staatscommissie bescherming persoonlijke levenssfeer in verband met persoonsregistraties). The Hague, 1976).

In this report, the Koopmans Committee sets out ’the legal or other measures desirable for the protection of privacy in connection with the use of automated recording systems for personal data and to what extent it is desirable that these measures should also apply to other records of personal data’ (source: Privacy and Recording of Personal Data, p. 5). The Committee formulated principles for a legal regulation and its main features, and attached a proposal for such a regulation: a draft bill for a Personal Data Registration Act (Wet op de persoonsregistraties (WPR)).

The Committee summarises the principles of the envisaged regulation as follows (see pp. 28, 29: Privacy and Recording of Personal Data):

●      Ensuring that the recording of personal data becomes more transparent through openness and publicity

●      Strengthening the legal position of persons whose data are recorded vis-à-vis holders of these personal data records

●      Ensuring that the registration and use of personal data is subjected to a more direct level of supervision, in particular by setting restrictive rules and establishing a special supervisory body

The bill was presented to the House of Representatives in 1981, and after a few adjustments, came into effect as the Personal Data Registration Act (WPR) on 1 July 1989 (source: https://wetten.overheid.nl/BWBR0011468/2018-05-01).

3.3.3. Evolution of the law over the years

Below is a comparison between the WPR, its successor the Personal Data Protection Act (Wet bescherming persoonsgegevens, WBP), and the successor to that, the GDPR. It is striking to see how little has actually changed in all this time:

3.3.4. Not a new law

So the GDPR is not a completely new law but one with an approximately 50-year-old history. Both the principles and concepts, as well as measures such as encryption, had already been mentioned by the Registration Board (Registratiekamer, the then Dutch DPA) as possible options. For example, the report by G.W. van Blarkom and J.J. Borking (source: Personal Data Protection. Background Studies and Explorations No. 23 (Beveiliging van persoonsgegevens. Achtergrondstudies en Verkenningen 23). Registration Board. 2001 - online: https://www.cs.ru.nl/~jhh/pub/secsem/registratiekamer-av23.pdf) examines the encryption of passwords (p. 44) and the communication of data (p. 45). Similarly, the report of the Data Protection Board (College Bescherming Persoonsgegevens), the successor to the Registration Board (source: Can it be a bit less? About Privacy-Enhancing Technologies 2002 (Mag het een bitje minder zijn? Over Privacy-Enhancing Technologies 2002) - online: https://autoriteitpersoonsgegevens.nl/sites/default/files/downloads/brochures/bro_pet.pdf), mentions encryption of a patient identification number as an option (p. 18).

A major difference between the GDPR and the earlier laws is that there is now a supervisory authority (the Dutch DPA) with the power to impose fines directly (source: https://autoriteitpersoonsgegevens.nl/nl/nieuws/cbp-krijgt-boetebevoegdheid-en-wordt-autoriteit-persoonsgegevens). Since these fines can be as high as 20 million euros or 4% of an organisation’s annual global turnover (source: GDPR Article 83(5)), they form a serious disincentive. Failure to properly protect personal data can therefore suddenly lead to serious financial consequences for organisations.

3.4 The principles

As privacy officer, you will have many conversations with researchers about how they handle personal data within their research. Such a conversation might go as follows:

Researcher: “As part of a conference I’m organising about my research, I would like to take pictures of the visitors and conduct some interviews. Is that allowed under the GDPR?”

Privacy officer: “Yes, that’s not a problem, as long as the visitors can say whether they want to be photographed or not. And that they know where and when the interviews will be published, so they can give you permission via a consent form for this.”

Researcher: “Yes, all right, I had already received such a form from a colleague. I’m not yet sure how long I want to store the interviews, since they might be of use in a few years’ time as well.”

Privacy officer: “Hmm, that’s going to be tricky, because perhaps then the conference participants won’t want to take part in the interviews anymore.”

Researcher: “But... why would they not want to participate anymore? After all, I’m asking them for permission in advance, aren’t I?”

3.4.1 Check boxes?

The above conversation shows that while the researcher is aware that a consent form is required for obtaining permission, the researcher has not given further thought to the retention possibilities and the retention period of the data. After all, in a few years’ time the participants may think very differently than they did at the time of the interview. And maybe, in a few years’ time, they will look very different than they did in the pictures taken at the conference.

Another relevant example is the storage of personal data by restaurants during the Covid-19 pandemic. At several restaurants, personal data were not only retained for possible contact tracing, but visitors were subsequently confronted with marketing messages from these eateries. In other words, these restaurants dutifully fulfilled the obligation to keep records of all their guests, but afterwards they used the data for purposes other than those communicated to the guests in advance.

The above examples show that both the researcher and the restaurant owner see personal data protection primarily as a series of ‘check boxes’, where a few actions ensure that their practices are GDPR-compliant. Performing a DPIA, preparing a consent form, and pseudonymising personal data are the most common actions taken by researchers to ensure compliance with the GDPR. But the reason for performing these actions is often less well known.

 3.4.2. The seven Article 5 principles

All actions to protect personal data originate from the seven principles described in Article 5 of the GDPR. We therefore refer to these principles as the ‘Article 5 principles’. These principles give a general indication of the actions to be taken to protect personal data; this is what makes the GDPR a normative rather than a descriptive law.

The GDPR only provides the framework – the researchers must fill in these frameworks themselves with the necessary technical and organisational measures within the context of the research. Therefore, the measures required within a particular research project may differ greatly from those required in another project. But the principles underlying both sets of measures are always the same.

So what exactly do these seven principles entail? Below you can read about all the principles, with a brief example for each principle to demonstrate how it works:

The 'Article 5 principles'

Principle 1: Lawfulness, fairness and transparency

The first principle in Article 5 reads as follows:

 

Personal data shall be processed lawfully, fairly and in a transparent manner in relation to the data subject (‘lawfulness, fairness and transparency’).

 

With respect to the GDPR Article 5 principles, GDPR Recital 39 states:

‘Any processing of personal data should be lawful and fair. It should be transparent to natural persons that personal data concerning them are collected, used, consulted or otherwise processed and to what extent the personal data are or will be processed.

The principle of transparency requires that any information and communication relating to the processing of those personal data be easily accessible and easy to understand, and that clear and plain language be used. That principle concerns, in particular, information to the data subjects on the identity of the controller and the purposes of the processing and further information to ensure fair and transparent processing in respect of the natural persons concerned and their right to obtain confirmation and communication of personal data concerning them which are being processed.

Natural persons should be made aware of risks, rules, safeguards and rights in relation to the processing of personal data and how to exercise their rights in relation to such processing. In particular, the specific purposes for which personal data are processed should be explicit and legitimate and determined at the time of the collection of the personal data.

The personal data should be adequate, relevant and limited to what is necessary for the purposes for which they are processed. This requires, in particular, ensuring that the period for which the personal data are stored is limited to a strict minimum. Personal data should be processed only if the purpose of the processing could not reasonably be fulfilled by other means.

In order to ensure that the personal data are not kept longer than necessary, time limits should be established by the controller for erasure or for a periodic review. Every reasonable step should be taken to ensure that personal data which are inaccurate are rectified or deleted.

Personal data should be processed in a manner that ensures appropriate security and confidentiality of the personal data, including for preventing unauthorised access to or use of personal data and the equipment used for the processing.’

 

Explanation:

In other words: research participants should be informed by the researcher prior to the research, so that they know that the data processing within the research context is lawful (as further stipulated in GDPR Article 6), what will be done with their personal data, who carries out the processing, what risks their participation in the research entails for them, and how these risks have been minimised as far as possible. Participants should also be made aware of their rights, and of how and to whom they can appeal if they suspect that their rights have been infringed.

Principle 2: Purpose limitation

The second principle in Article 5 reads as follows:

 

Personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes (‘purpose limitation’).

 

GDPR Recital 33 additionally states the following regarding purpose limitation in the context of scientific research:

‘It is often not possible to fully identify the purpose of personal data processing for scientific research purposes at the time of data collection. Therefore, data subjects should be allowed to give their consent to certain areas of scientific research when in keeping with recognised ethical standards for scientific research.

Data subjects should have the opportunity to give their consent only to certain areas of research or parts of research projects to the extent allowed by the intended purpose.’

 

Explanation:

In other words, the legitimate and specific purpose of a research project determines the nature and extent of personal data collected and processed for this purpose in this project. However, to allow for the validation of the research data or for the purpose of follow-up research, it is often desirable to reuse all or part of the collected personal data.

If this is the case, the research participant must be made aware of this prior to the research. The participant must also be informed about the purposes for which other researchers may access their data. Especially when it comes to sensitive data (special categories of personal data), consent for this must be given explicitly and freely by the participant to the researcher.

Principle 3: Data minimisation

The third principle in Article 5 reads as follows:

 

The personal data shall be adequate, relevant and limited to what is necessary for the purposes for which they are processed (‘data minimisation’).

 

GDPR Recital 78 states the following with regard to data minimisation:

‘The protection of the rights and freedoms of natural persons with regard to the processing of personal data require that appropriate technical and organisational measures be taken to ensure that the requirements of this Regulation are met.

In order to be able to demonstrate compliance with this Regulation, the controller should adopt internal policies and implement measures which meet in particular the principles of data protection by design and data protection by default.

Such measures could consist, inter alia, of minimising the processing of personal data, pseudonymising personal data as soon as possible, transparency with regard to the functions and processing of personal data, enabling the data subject to monitor the data processing, enabling the controller to create and improve security features.

When developing, designing, selecting and using applications, services and products that are based on the processing of personal data or process personal data to fulfil their task, producers of the products, services and applications should be encouraged to take into account the right to data protection when developing and designing such products, services and applications and, with due regard to the state of the art, to make sure that controllers and processors are able to fulfil their data protection obligations.

The principles of data protection by design and by default should also be taken into consideration in the context of public tenders.’

 

Explanation:

In other words, only those personal data that are relevant for the legitimate and concrete research purpose are collected and processed. These may involve many different data points, but all of them must be justified by and limited to the research purpose, and it must be clear how these personal data are protected during and after the research. The poster displayed in the image below addresses the data minimisation principle:
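
As a purely illustrative sketch of what data minimisation can look like in practice, suppose a researcher receives a raw export of participant records but only needs the birth year and the survey answers for the research purpose (all field names below are hypothetical):

# Illustrative sketch of data minimisation: keep only the fields that are
# necessary for the research purpose and discard the rest before analysis.
raw_records = [
    {"name": "A. Jansen", "email": "a.jansen@example.org", "birth_year": 1984,
     "postcode": "1234 AB", "answer_q1": 4, "answer_q2": 2},
]

NEEDED_FIELDS = {"birth_year", "answer_q1", "answer_q2"}   # justified by the research purpose

minimised = [
    {key: value for key, value in record.items() if key in NEEDED_FIELDS}
    for record in raw_records
]

print(minimised)   # [{'birth_year': 1984, 'answer_q1': 4, 'answer_q2': 2}]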

Principle 4: Accuracy

The fourth principle in Article 5 reads as follows:

 

The personal data shall be accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay (‘accuracy’).

 

As we have seen earlier, ‘the processing of personal data should be designed to serve mankind’ (GDPR Recital 4). This is only possible if these data are correct. This principle ensures, for example, that a patient receives the right treatment or that, when assessing a patient’s case, all personal data that are accurate and actually associated with that individual are taken into consideration. As a result, the likelihood of bias, and therefore of unfair or undue consequences for the individual, is also reduced. This accuracy principle gives rise to the right to rectification (GDPR Article 16):

‘The data subject shall have the right to obtain from the controller without undue delay the rectification of inaccurate personal data concerning him or her. Taking into account the purposes of the processing, the data subject shall have the right to have incomplete personal data completed, including by means of providing a supplementary statement.’

Principle 5: Storage limitation

The fifth principle in Article 5 reads as follows:

 

Personal data shall be kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.

Personal data may be stored for longer periods insofar as the personal data will be processed solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) subject to implementation of the appropriate technical and organisational measures required by this Regulation in order to safeguard the rights and freedoms of the data subject (‘storage limitation’).

 

Storage limitation


What type of data may be retained after the end of the research, and for how long? Broadly speaking, we can distinguish three relevant standards in the Netherlands Code of Conduct for Research Integrity (Association of Universities in the Netherlands (VSNU), 2018). With respect to the retention of research data, including personal data, researchers must respect these standards in the design, conduct, reporting, and dissemination of the research:

●      Standard 11 (p. 16, Research design): ‘As far as possible, make research findings and research data public subsequent to completion of the research. If this is not possible, establish valid reasons for their non-disclosure.’

In this regard, the VSNU Code of Conduct refers to a document of the Council of the European Union: The transition towards an Open Science system – Council conclusions (adopted on 27/5/2016) where the principle mentioned on page 8 reads ‘as open as possible, as closed as necessary’ and where valid reasons for not making data publicly available are mentioned as ‘personal data protection and confidentiality, security concerns, as well as global economic competitiveness and other legitimate interests.’

●      Standard 24 (p. 17, Conduct of research): ‘Manage the collected data carefully and store both the raw and processed versions for a period appropriate for the discipline and methodology at issue.’

In the 2014 version of the Netherlands Code of Conduct for Academic Practice of the VSNU, a period of ten years was mentioned for the storage of research data:

‘Raw research data are stored for at least ten years. These data are made available to other academic practitioners upon request, unless legal provisions dictate otherwise.’

The 10-year period is still commonly used as a starting point in many disciplines.

●      Standard 25 (p. 17, Conduct of research): ‘Contribute, where appropriate, towards making data findable, accessible, interoperable and reusable in accordance with the FAIR principles.’

The usual way to store research data appropriately after the end of the research is in data archives, such as the long-term preservation archive EASY at DANS (an institute of the Royal Netherlands Academy of Arts and Sciences (KNAW) and the Dutch Research Council (NWO)). There it can be determined whether a dataset will be made publicly accessible to everyone, or whether the nature of the data, such as in the case of personal data, calls for additional measures because this kind of public access is not justified.

EASY offers various measures to make data publicly accessible online or to securely archive them and make them accessible only to certain individuals who have a legitimate need to know such data and who, after identification, are granted access to these data.

By complying with relevant certification rules, DANS shows that EASY is a Trustworthy Digital Repository service for the archiving of research data.

Principle 6: Integrity and confidentiality

The sixth principle in Article 5 reads as follows:

 

Personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’).

 

Explanation:

Recital 78, which was mentioned earlier in connection with Principle 3, indicates that data protection by design and data protection by default are good bases for providing the appropriate safeguards required for the processing of personal data. This usually involves security measures such as encryption, pseudonymisation and the granting of access (for example, using multi-factor authentication) based on an established access matrix. However, this must be done without compromising the quality of the data to ensure that you can also arrive at valid findings with the right data.
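
As an illustration of what an ‘established access matrix’ can look like in its simplest form, the sketch below uses hypothetical roles and dataset names; authentication itself (for example via multi-factor authentication) is assumed to be handled by the institution’s identity provider:

# Minimal sketch of an access matrix: which role may access which dataset,
# on a need-to-know basis. Roles and dataset names are hypothetical.
ACCESS_MATRIX = {
    "interview_audio_raw":      {"principal_investigator"},
    "interviews_pseudonymised": {"principal_investigator", "phd_candidate"},
    "aggregated_results":       {"principal_investigator", "phd_candidate", "student_assistant"},
}

def may_access(role: str, dataset: str) -> bool:
    """Return True only if this role is listed in the matrix for this dataset."""
    return role in ACCESS_MATRIX.get(dataset, set())

print(may_access("student_assistant", "interview_audio_raw"))    # False
print(may_access("phd_candidate", "interviews_pseudonymised"))   # True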

Principle 7: Accountability

The seventh principle in Article 5 reads as follows:

 

The controller shall be responsible for, and be able to demonstrate compliance with, paragraph 1 (‘accountability’).

 

Explanation:
Finally, the researcher is expected not only to take the Article 5 principles into account when processing personal data, but also to be able to demonstrate this, for example at the request of a supervisory authority investigating the processing of personal data in question.

For this, at least the following is necessary from the perspective of the GDPR:

●      Appropriate design and recording of the research

●      In case of high-risk processing: the risk assessment (DPIA) and establishing the corresponding mitigating measures

●      Actual communication (clear and understandable) to data subjects

●      Appropriate agreements for data subjects (Consent Form) and any research partners (for example, Joint Controller Agreements or other agreements such as Non-Disclosure Agreements)

3.4.3 The '1972 Principles'

The Article 5 principles form the basis of the GDPR. And not just of the GDPR; indeed, these principles were being applied from as early as the Privacy and Recording of Personal Data report of 1972 (pages 103 and 104). However, these principles were then somewhat hidden away in Article 44, which indicates when a permit for a recording system can be denied. But the similarities are striking:

●      a. If the purpose or expected efficacy of the recording system is contrary to law, public order, or morality [compare with the lawfulness and fairness principle]
 

●      b. If the regulations are not in accordance with the requirements imposed by law or a general order in council [compare with the lawfulness principle]
 

●      c. If insufficient measures have been taken to prevent the data to be recorded from falling into the wrong hands [compare with the integrity and confidentiality principle]
 

●      d. and e. If there are insufficient safeguards for ensuring the accuracy of the data to be recorded and provided [compare with the accuracy principle]
 

●      f. If, in accordance with the regulations, the personal data present in the recording system are not relevant enough in relation to the purpose of the recording system [compare with purpose limitation and data minimisation principles]
 

●      h. If the rights of the registered persons relating to inspection, access and rectification are more limited than is necessary from the point of view of the detection of criminal offences or proper taxation [compare with the accuracy principle]

Knowledge of these principles will ensure that a researcher is fully committed to protecting the privacy rights of the participants and advocates this in the research project. They will realise that defining measures for the proper protection of personal data is not only a must, but also that following these principles will be accompanied by significant benefits for themselves as well as for the data subjects, as outlined below:

●      Protection of personal data, thereby safeguarding the fundamental right to privacy of research participants

●      Being a demonstrably reliable research partner for research participants and other research institutes

●      Limiting liability for the institution

●      Preventing reputational damage for the researcher, the research, and the institution.

●      Conducting research in accordance with applicable laws and regulations

●      Being able to meet the requirements of external funders (contract funding)

●      Having a proper understanding of the nature of the processed personal data and the de-identification measures applied makes it easier to assess whether these data can be shared for research purposes in accordance with the FAIR principles or Open Data principles

These benefits are also visualised on a Privacy Reference Card (source: Domingus, M. (2016). A Researcher’s Privacy Reference Card. Why?).

3.5 Metro map

The Article 5 principles tell you what to do when protecting personal data but do not tell you exactly how to do it. As a privacy officer, how do you determine what measures to take in a specific research project? That is where the Metro Map, which we will discuss in this chapter, can be of help to you.

3.5.1. Through the Metro Map step by step

The Article 5 principles imply the following for the research participant and the researcher:

●      For research participants, these principles are enforceable as rights (for example, the right to rectification arises from the principle of accuracy)

●      For researchers, there is an obligation to demonstrate compliance with these principles by applying data protection by design and data protection by default as a strategy, for properly protecting the personal data of research participants.

This means that, while the principles are the same for each research project, the measures to be taken to protect personal data may differ from one research project to another.

As a privacy officer, how do you determine what measures to take in a specific research project? The Metro Map has been drawn up to help you understand, in broad terms, what actions need to be taken. It is a step-by-step plan that guides you, by means of concrete questions, from the research design to the issuing of a GDPR compliance statement by your institution’s DPO. The video below explains the steps in detail:

https://www.youtube.com/watch?v=4eNUODTiRzI (in Dutch only)

Source: Domingus, M. (2018). The Privacy Impact Assessment (PIA) Route Planner for Academic Research. Inspired by Harry Beck’s London Metro Map. Retrieved from http://hdl.handle.net/1765/128160.

3.5.2. Key points

The key points from the Metro Map can be summarised as follows:

●      Only a small proportion of research projects require additional measures, because such measures are only required for high-risk projects

●      A research project is classified as high risk if it meets at least two of the nine criteria included in the Decree concerning the list of personal data processing operations for which a data protection impact assessment (DPIA) is mandatory (In Dutch only) (Besluit inzake lijst van verwerkingen van persoonsgegevens waarvoor een gegevensbeschermingseffectbeoordeling (DPIA) verplicht is) issued by the Dutch DPA

●      A pre-DPIA can be used to determine whether a research project is considered as high or low risk (see the small decision sketch after this list)

●      There is an obligation to conduct a full DPIA when performing high-risk research

●      If a DPIA has been carried out for a similar research project in the past (in other words, the research to be performed falls within an existing research category), the measures that emerged from that DPIA may be adopted and no additional DPIA is required

●      Most research projects follow the Orange Line on the Metro Map: these projects can be conducted in compliance with the GDPR by applying standard measures
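
The decision rule described above can be summarised in a small, purely illustrative sketch; the criteria names are abbreviated placeholders, not the official wording of the Dutch DPA’s list:

# Illustrative sketch of the high-risk decision rule: a full DPIA is required
# if at least two of the nine criteria apply (criteria names are placeholders).
criteria_met = {
    "large_scale_sensitive_data": False,
    "systematic_monitoring":      True,
    "vulnerable_data_subjects":   True,
    "covert_research":            False,
    # ... the remaining criteria are omitted in this sketch
}

high_risk = sum(criteria_met.values()) >= 2
print("Full DPIA required" if high_risk else "Low risk: standard measures suffice")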

And would you like to know what kind of concrete measures you can implement? If so, take a look at the overview below. In Module 4 ‘Measures’, we will discuss each measure in detail.

3.5.3. In the spotlight

Possible measures

Possible measures to be taken are listed below (a small pseudonymisation sketch follows the list):

●      Encryption (encryption of storage media, encryption of data at rest, and end-to-end encryption of data in transit)

●      Pseudonymisation

●      Management of de-identification keys

●      Application of the four-eyes principle in pseudonymisation

●      Use of data minimisation

●      Use of synthetic data

●      Respecting retention periods

●      Working together only with known and trusted organisations and individuals

●      Providing access to data only on a need-to-know basis

●      Physical and technical measures to protect physical and digital storage and research workspaces

●      Application of privacy-enhancing technologies where possible

●      Application of zero trust architecture where possible

●      Zero-knowledge suppliers and suppliers who hold a valid ISO/IEC 27001 certificate

●      Working exclusively within online digital vaults and preventing local copies of data

●      Use of appropriate backup and recovery procedures to mitigate the risk of data loss

●      Use of multi-factor authentication for individuals who have a need to know the sensitive data

●      Application of a clean desk policy and clean whiteboard policy.
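
To give an idea of how some of these measures translate into practice, below is a minimal, purely illustrative Python sketch of pseudonymisation with a separate key file. The file names and fields are hypothetical; in a real project the key file would be kept in a separate, access-restricted location, for example under the four-eyes principle:

import csv
import secrets

key_table = {}   # pseudonym -> original identifier; store separately from the research data!

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier by a random pseudonym, reusing existing ones."""
    for pseudonym, original in key_table.items():
        if original == identifier:
            return pseudonym
    pseudonym = "P-" + secrets.token_hex(4)      # e.g. 'P-1f3a9c2b'
    key_table[pseudonym] = identifier
    return pseudonym

participants = [{"name": "B. de Vries", "score": 7}]   # hypothetical raw data

pseudonymised = [{"id": pseudonymise(p["name"]), "score": p["score"]} for p in participants]

# Write the key file to a separate, access-restricted location (four-eyes principle).
with open("key_file.csv", "w", newline="") as key_file:
    csv.writer(key_file).writerows(key_table.items())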

 

3.6 Measures template

Have all measures been properly identified? In that case, as privacy officer, you should present these measures in a clear document as proof to the DPO of your institution, so that they can issue a GDPR compliance statement for the research to be carried out.

Recording the measures to be taken is also an obligation under the seventh principle of Article 5 (GDPR Article 5.2 - Accountability). This principle states that it is the controller’s obligation to demonstrate the researcher’s compliance with the GDPR principles in the research project.

Below is an example of the measures to be taken for each of the principles.

Source (in Dutch only): Domingus, M. (2021). GDPR Article 5 Principles relating to the processing of personal data and corresponding appropriate technical and organisational measures in the context of scientific research (AVG Artikel 5 Beginselen inzake verwerking van persoonsgegevens en bijbehorende passende technische en organisatorische maatregelen in de context van wetenschappelijk onderzoek). Retrieved from http://hdl.handle.net/1765/134862.

3.6.1 Template

A document based on this can be downloaded here (in Dutch only). In this template, the principles are listed on the left, and for each principle, the measures to be taken are described on the right. By completing this template, the privacy officer and researcher can ensure that all the principles have been taken into consideration.
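
For readers who cannot consult the Dutch template, its structure can be sketched roughly as follows; the measures listed are a much-abbreviated, hypothetical example and will differ per project:

# Hypothetical, abbreviated sketch of the template: for each Article 5
# principle, the measures chosen in a fictitious research project.
measures_per_principle = {
    "Lawfulness, fairness and transparency": ["information letter", "consent form"],
    "Purpose limitation":            ["purpose described in the data management plan"],
    "Data minimisation":             ["only age group and survey answers collected"],
    "Accuracy":                      ["participants can review and correct their data"],
    "Storage limitation":            ["raw data erased 10 years after publication"],
    "Integrity and confidentiality": ["encrypted storage", "pseudonymisation", "need-to-know access"],
    "Accountability":                ["entry in the record of processing activities", "pre-DPIA documented"],
}

for principle, measures in measures_per_principle.items():
    print(f"{principle}: {', '.join(measures)}")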

 

3.6.2. From ‘have to’ to ‘want to’

As privacy officer, your job is to ask the researcher at each step in the research design process the following question: how are the Article 5 principles demonstrated in the research? For example, how does the researcher handle transparency, in what way does the researcher ensure data minimisation, and so on. For each research project, the researcher must think about how the principles are reflected in the research design. And the greater the sensitivity of the personal data involved in the research, the stronger the emphasis should be on safeguarding the principles.

The principles help the researcher and you as privacy officer to remain in control of the personal data before, during, and after the research. And in the event of an unexpected data breach, the consequences for the researcher and data subjects are minimised if the principles have demonstrably been followed via the measures taken. If the researcher also sees these benefits, their mindset will change from ‘I have to do all this extra work for the GDPR’ to ‘I want to do the extra work for the GDPR, because by doing so I am not only protecting the fundamental rights of the data subjects but also avoiding the risk of fines, delays, or even having to stop my research!’

3.7 Data breaches

How many times do you think the term ‘data breach’ appears in the legal text of the GDPR? You may be surprised to hear that this everyday term (in Dutch: ‘datalek’) does not occur as such in the GDPR. To understand why this is the case, we must first take a closer look at what a data breach actually is.

A data breach involves a violation of the principle contained in Article 5.1f (integrity and confidentiality):

‘Personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.’

Based on this principle, we see that a data breach is not only about losing a USB stick with personal data on it, for example: the unintentional destruction, damage, or loss of data also constitutes a violation of the principle (referred to colloquially as a ‘data breach’). In fact, you can say that a data breach covers two things:

●      Unlawful and unauthorised access (violation of confidentiality)

●      Destruction, loss, or damage of data (violation of integrity)

As is apparent from the above text, keeping personal data for too long is not considered a data breach but an unlawful processing of those personal data. Therefore, if a researcher works with a highly outdated dataset, this dataset is also not considered compliant with the GDPR. When a data breach does occur, it generally means that the data were not sufficiently secured. Of course, there is a limit to this – what if a professional hacker collective with hundreds of members gains access to your dataset? In that case, Principle 5.2 is relevant:

The controller shall be responsible for, and be able to demonstrate compliance with, paragraph 1 (‘accountability’).

Accountability, as we discussed earlier, means that researchers (with assistance from privacy officers) must be able to demonstrate that they have taken all the proportionate technical and organisational measures to protect the data as well as possible. If the researcher cannot demonstrate this, they risk a fine.

3.7.1. Fines

The fines for not providing proper protection for personal data can be divided into two categories (https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=EN#page=82); a small worked example follows the list:

1.     Article 83.4: ‘fines up to 10 000 000 EUR, or in the case of an undertaking, up to 2 % of the total worldwide annual turnover of the preceding financial year’.

2.     Article 83.5: ‘fines up to 20 000 000 EUR, or in the case of an undertaking, up to 4 % of the total worldwide annual turnover of the preceding financial year’.
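
Note that for undertakings, Article 83 speaks of ‘whichever is higher’: the fixed amount or the turnover percentage. As a small worked example with a hypothetical turnover figure:

# Hypothetical example: maximum possible fine under Article 83.5 for an
# undertaking with an annual worldwide turnover of EUR 700 million.
turnover = 700_000_000
max_fine = max(20_000_000, 0.04 * turnover)       # 'whichever is higher'
print(f"Maximum fine: EUR {max_fine:,.0f}")       # Maximum fine: EUR 28,000,000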

The amount of the fine depends partly on the extent to which the Article 5 principles have been violated: the greater the number of violations and the more serious they are, the higher the fine will be. But violating the principles can have consequences beyond a fine. For example, data subjects may decide to withdraw from the research, or submit a large-scale request to access their data. And imagine a hacker gaining access to a researcher’s dataset, as a result of which it is revealed that the researcher has been storing certain data for thirty years. In that case, the researcher will have a lot of explaining to do as to why those data still exist after all this time.

In conclusion, it is very important for a researcher to understand that every data breach must be reported to you as the privacy officer, or directly to the DPO. A researcher cannot and should not ever judge the severity of a data breach on their own; their only obligation is to report the data breach as soon as possible.

Module 4 - Measures

‘In order to maintain security and to prevent processing in infringement of this Regulation, the controller or processor should evaluate the risks inherent in the processing and implement measures to mitigate those risks, such as encryption. Those measures should ensure an appropriate level of security, including confidentiality, taking into account the state of the art and the costs of implementation in relation to the risks and the nature of the personal data to be protected.’

GDPR Recital 83

In the previous modules, we have discussed risks, principles, measures, and actions. All of these are part of the GDPR but how do they relate to one another? And most importantly, what measure should you use in what type of research?

As a warm-up, we offer you another short quiz. The questions will also give you an idea of the type of measures we will cover in this module.

QUIZ Measures

 

The correct answer to each question is marked ‘(correct)’.

 

1.

Does all processing of personal data in a research project need to be included in the record of processing activities separately?

 

·     Yes. All research is unique, so a new entry in the record is required by the GDPR

·     No. When the processing of personal data for new research is similar to that of earlier registered research, the new research does not need to be registered.

·      No. When the processing of personal data for new research is similar to that of earlier registered research, then you may combine both operations under a 'category of operations'. (correct)

·     Yes. The GDPR states that all operations with personal data need to be included in the record of processing activities within half a year after publication.

 

2.

As an organisation, how do you need to set up a record of processing activities?

·     As an organisation you may use any type of technology, as long as it conforms to the ISO 9000 standard.

·      As an organisation you may choose any structure and technology for the set-up, as long as it contains the relevant aspects of the processing operation. (correct)

·     As an organisation you must conform to the protocols of the Dutch DPA for setting up your record.

·     As an organisation you must conform to the protocols of RDNL for setting up your record.

 

3.

Is it permitted not to inform a participant fully about the nature of the research and how personal data are processed for that research?

·      No, unless it concerns covert research. (correct)

·     No, that is not permitted because participants also have the right to withdraw their consent.

·     Yes, that is permitted when the participant is not able to fully understand the exact content and format of the research.

·     No, that is not permitted under any circumstances.

 

4.

What happens when a participant withdraws their consent for the processing of their personal data during the course of the study?

·     A participant has given up the right to withdraw consent by signing the consent form.

·      In that case the researcher no longer has a legal basis for (further) processing of the personal data of the participant concerned. (correct)

·     In that case the researcher must cease the research since the data are now corrupt.

·     This has no further consequences for the remainder of the research.

 

5.

Is conducting a DPIA mandatory when you process special categories of personal data for your research?

·     No, conducting a DPIA is only mandatory when it concerns covert research.

·      No. (correct)

·     No, conducting a DPIA is always a voluntary choice of the researcher.

·     Yes, it is mandatory when more than 10 persons take part in the research.

 

6.

At what point during a study do you conduct a DPIA?

·     A DPIA is conducted when the research funding has been granted.

·     A DPIA is conducted after submitting the research proposal.

·     A DPIA is conducted when it turns out during the research that special categories of personal data are being processed.

·      A DPIA is conducted before the research is started. (correct)

 

7.

Does the GDPR apply to fully anonymised personal data?

·     Yes, because these data can easily be combined with other anonymised data, allowing unique persons to be identified.

·     No, fully anonymised personal data have no value and therefore do not fall under the GDPR.

·      No, fully anonymised personal data no longer relate to a unique person and therefore do not fall under the GDPR. (correct)

·     Yes, because the anonymisation of these data can easily be undone by reverse engineering, which makes them fall under the GDPR.

 

8.

Are anonymised personal data anonymous forever?

·      No, by combining them with new datasets, anonymised data may again be traced back to unique persons. (correct)

·     Yes, when all traceable personal data have been removed, anonymised personal data can be considered as such forever.

·     Yes, because all traceable data must be deleted from such a dataset.

·     No, after 15 years this anonymity expires and the names of the research participants are to be made public.

 

 

9.

Is replacing a specific birth year by a category of years (for example: "between 5 and 10 years") a form of anonymisation or pseudonymisation?

·      This is a form of pseudonymisation, since the unique person is still traceable via the original birth year; it is a reversible process. (correct)

·     This is a form of anonymisation, since without the specific birth year the unique person is not traceable.

·     This is a form of anonymisation, since age does not say anything specific about a person.

·     This is a form of pseudonymisation, since only a few, less relevant, personal data are left out.

 

10.

What does the 'four-eyes principle' entail when it comes to pseudonymisation?

·      Two persons have access to the key file, for reasons of verifiability and data integrity. (correct)

·     During the process of pseudonymisation the work is always done in pairs, to minimise mistakes as much as possible.

·     Only two specific persons have the technical knowledge to reverse the pseudonymisation.

·     Only two people know where the key file is stored.

 

-        End of quiz -

In this module we will examine the actions and measures to be taken. How do you determine the correct measures for a research project, starting from a risk relating to personal data? And what actions can you take to best protect your data subjects? In this module you will learn about the relationship between these concepts, and we will look at which actions and measures are meaningful within a given context.

The learning objectives of this module are:

●      You have knowledge of the various actions that are possible or necessary for protecting the personal data of data subjects

●      You have knowledge of the relationship and dependencies between different actions

●      You have knowledge of which action is relevant or mandatory within a given research context

How essential is it for you to determine the right actions? The example below about the Covid-19 tests by the Dutch Municipal Health Service (GGD) in 2020 and 2021 clearly shows the importance of handling personal data with care. Read this example and try to think for yourself what actions could and should have been taken to better protect the personal data.

4.1 In the spotlight

Example: GGD Covid-19 tests
This article from the Dutch Broadcasting Foundation NOS clearly shows the impact of careless handling of personal data. If you consider the coronavirus testing facility as a research project in which personal data are processed (including special categories of personal data such as medical data, as well as the BSN), the article shows how the handling of these personal data was not compliant with the GDPR Article 5 principles, for a number of reasons:

1. The confidentiality of the data was breached because anyone who had access to the U-Diagnostics database also had access to all the data of all the people tested. Also, the system turned out not to be properly secured, thus allowing third parties to access the data. Furthermore, in addition to the database, a WhatsApp group was used to exchange information nationwide. This also violated the confidentiality of the data, since some of the data (such as photos, passports, etc.) were shared with all 300 users in the group.

2. There was also doubt regarding the proportionality of the collected data. For example, the GGD indicated in their privacy statement that the following personal data were required for the purposes of a coronavirus test:

●      First name and last name

●      Address or other location when you are not at home, for example, if you are on holiday

●      Date of birth

●      Whether you are male or female, unspecified, or unknown

●      Citizen service number (BSN)

●      Telephone number

●      Email address

●      Your symptoms

●      Test tube barcode

●      Results of Covid-19 test

●      If applicable: name of the doctor requesting the test

●      Whether you had any direct contact with other people

●      Whether you had to work and if so to which professional group you belong

The NOS article reports that apparently the travel destination and the client of the tested person were also recorded. By combining these data, it was possible to find out which soldiers (passport number, BSN, address) had been deployed to which countries. The above example shows that if an organisation does not take the right measures, it is in fact creating a data breach: a ‘data breach by design’.

Above, we invited you to look at this example from the perspective of the data subject. We will now also consider the interests of the research participant (the data subject) from the perspective of the researcher and the research support staff.

Good luck with this module!

4.2 Record of processing activities

Maintaining a record of processing activities for the personal data processed by you is one of the final actions you need to take in the research project. As you can see, this action has been assigned to the last station on the Metro map. On this page you can read all about what the record of processing activities entails and how you and the researcher can benefit from this.

4.2.1 Below is the minimum information you need to know about the record of processing activities:

●      It is mandatory to maintain a record of processing activities if you process personal data in research
 

●      Each institution may choose the structure and technology used to develop this record
 

●      If the processing of personal data in one research project is similar to that in a research project that is already part of the record, you may combine both projects into a so-called category of processing activities. Thereafter, any similar research projects can simply be linked to this category.

For a more detailed explanation of what a record of processing activities entails and what benefits it offers, read the texts below at your leisure:

4.2.2 Record of processing activities

 

What is it?

If you process personal data (for example, as part of your research), Article 30 of the GDPR states that you must maintain a record of the processing activities that take place under your responsibility. An organisation is free to set this up as it sees fit; the GDPR does not specify any particular technology or structure to be used for this. For a small organisation with a limited budget, an Excel document might also suffice, for example.

In accordance with Article 30.1(a) to (g) of the GDPR, a record of processing activities must include the following information about each processing activity:

(a) the name and contact details of the controller and, where applicable, the joint controller, the controller's representative and the data protection officer;

(b) the purposes of the processing;

(c) a description of the categories of data subjects and of the categories of personal data;

(d) the categories of recipients to whom the personal data have been or will be disclosed including recipients in third countries or international organisations;

(e) where applicable, transfers of personal data to a third country or an international organisation, including the identification of that third country or international organisation and, in the case of transfers referred to in the second subparagraph of Article 49(1), the documentation of suitable safeguards;

(f) where possible, the envisaged time limits for erasure of the different categories of data;

(g) where possible, a general description of the technical and organisational security measures referred to in Article 32(1).

In Article 4, the GDPR explains what the law means by ‘processing’:

Processing is any operation or a set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.

A research project can be regarded as a set of the above-mentioned processing activities. It involves the collection, organisation, analysis, structuring, eventual destruction, communication, etc. of personal data. It is necessary to create a record for each type of research in the record of processing activities. However, it is not necessary (although possible) to create a record for each individual research project in the record of processing activities. In other words, the GDPR leaves room in terms of the granularity of the records to be maintained: at the level of individual processing activities or categories of processing activities. The basis for this view can be found in Articles 30(2), 23(2)(a), and 23(2)(f) of the GDPR.

 

What are the benefits?

A structured record of processing activities, in which you as a privacy officer can quickly find the desired data, offers the following advantages:

●      If a complaint is received by the Dutch DPA, it is immediately possible to see, from the record of processing activities, the considerations that have been taken into account within a specific research project. It is also possible to see immediately what safeguards and measures have been taken with regard to the protection of personal data. With a well-functioning record of processing activities, you demonstrate immediately that you, as an organisation, are in control of all processing of personal data that has taken place under your responsibility.
 

●      In case of a change in legislation, the record of processing activities can help in immediately identifying the research projects in which certain personal data have been processed. If, for example, the legislation regarding the BSN changes, as a privacy officer you should be able to immediately identify, in a properly functioning record of processing activities, the research projects in which BSN numbers have been used. In this way, you demonstrate that you are in control of all processed personal data, not only to the Dutch DPA but also within your own organisation.
 

●      If the design of a research project is identical to that of a previous project (i.e. the nature of the processing is the same), this project is considered to fall within a category of processing activities. Any follow-up research project that falls within this category in terms of design therefore merely needs to be linked in the record of processing activities to the already existing category; there is no need to create a completely new record since the measures and safeguards taken correspond to the projects carried out earlier within that category. This speeds up the process of maintaining records and saves a lot of work.

In this respect, see the granularity applied by the GDPR with regard to the safeguards for individual processing activities or categories of processing:

GDPR Art. 23(2)(a): ‘the purposes of the processing or categories of processing’

GDPR Art. 23(2)(f): ‘the storage periods and the applicable safeguards taking into account the nature, scope and purposes of the processing or categories of processing’

GDPR Art. 30(2): ‘a record of all categories of processing activities’

Recital 110: ‘transfers or categories of transfers of personal data’

For example, a scenario such as the one shown below may arise based on certain aspects of the processing. The file displayed as an image below can be downloaded here.

●      In many cases, especially for externally funded research, the researcher is obliged to prepare a data management plan for the research anyway. Some of the information that needs to be recorded in the record of processing activities is also contained in the data management plan. It is therefore recommended that the data stewards and privacy officers within the institution consider how overlaps, and hence a duplication of effort by the researcher, can be avoided. The same is true for research that is reviewed by an ethics committee or internal review board; information is partly collected here that is also requested to ensure that a proper record is created in the record of the processing activities.

4.3 Declaration of consent

A declaration of consent, also known as a consent form, plays a role at three stages of a research project. On the Metro Map below, at the ‘Demonstrate compliance with the Article 5 principles’ station you indicate that you need a consent form, at the ‘Implement the necessary technical and organisational measures’ station you draft the consent form, and at the ‘Perform the research’ station you apply the consent form.

On this page you can read all about what the consent form entails and how you and the researcher can benefit from it.

4.3.1 Here is the minimum you need to know about the consent form:

●      In general, the processing of special categories of personal data requires the prior consent of the research participant.
 

●      Consent within the meaning of the GDPR must be based on consent that is freely given.
 

●      The research participant should receive honest, complete, and understandable information prior to participating in the research and may also withdraw consent during the research.
 

●      After withdrawal of consent by a research participant, the researcher no longer has a legal basis for the further processing of the data of the research participant in question.
 

●      The research participant must actively give explicit consent by, for example, clicking or checking a box (online or on paper) or orally expressing consent.
 

●      In the case of covert research, it is permitted – within certain parameters – not to inform the research participant in advance, or to inform them only partially, about the purpose of the research.

For a more detailed explanation of what a consent form entails and what benefits it offers, read the texts below at your leisure:

4.3.2 Declaration of consent

What is it?

In general, the processing of special categories of personal data requires the prior consent of the research participant. Article 24(c) of the UAVG (Exceptions for scientific or historical research purposes or statistical purposes) makes clear that, for the processing of special categories of personal data for scientific or historical research purposes or statistical purposes, asking the explicit consent of research participants is the appropriate route, unless you as a researcher can effectively demonstrate that requesting consent is impossible or would involve a disproportionate effort.

An example of the impossibility of requesting consent is if the person is no longer alive (for example, in a study of the quality of a particular medical treatment in relation to the costs and duration of the treatment, where some of the population will have died) or the contact details cannot be retrieved because the person has moved house a lot or is homeless.

A well-known misunderstanding concerns the term ‘consent’, which has a different meaning in the Medical Treatment Contracts Act (in Dutch only) (Wet Geneeskundige Behandelovereenkomst (WGBO)) than in the GDPR. See also this website of the GDPR Helpdesk for Healthcare, which explains in more detail when additional consent is needed in healthcare. In the WGBO, consent relates to a treatment plan or to the sharing of information from medical records with third parties (breach of medical confidentiality); this is also referred to as informed consent. In the GDPR, the term ‘consent’ refers to a legal basis for processing personal data, and the WGBO term ‘informed consent’ is often used incorrectly in this context.

Freedom of choice
Consent within the meaning of the GDPR (i.e. as the legal basis for processing personal data) must be based on actual freedom of choice. For this reason, it is not desirable to promise large payments to the research participant for taking part in the research. This is also the reason why there is no monetary consideration or payment for blood donations in the Netherlands. For ethical reasons, you want to prevent certain groups, destitute people, for example, from being ‘forced’ to participate in studies purely for the monetary consideration.

It is also believed that consent cannot be given entirely freely in a hierarchical relationship, such as that between employer and employee. Indeed, in such relationships the question is how free the employee really feels to say ‘no’ to the employer and what negative consequences this may have for the working relationship. The same is true for vulnerable groups: individuals who are dependent on others or on a certain treatment cannot reasonably be considered free to refuse requests from these third parties or from the person providing the treatment.

Moreover, there must be a genuine choice, which means that a person should receive honest, complete, and understandable information prior to participating in the research in such a way that they have the chance to obtain proper information about the research and decide whether or not to participate in it based on this information. The research participant is also free to change this decision along the way and withdraw their consent.

 

Lawfulness upon withdrawal
The above has implications for the lawfulness of processing the research participant’s personal data. The data processed from the time of consent until withdrawal of this consent are considered to have been processed lawfully, but from the time consent is withdrawn, there is no legal basis for the further processing of the data of the research participant in question. Note that if the legal basis for processing personal data is public interest rather than consent, this does not exempt the researcher from the obligation to be transparent about the purpose of the research, about who has access to the personal data and for how long, and about the reasonably foreseeable risks faced by the participant.

 
Explicit consent
Finally, the research participant must give explicit consent by, for example, clicking or checking a box (online or on paper) or orally expressing consent, which can be recorded (audio) to provide proof, if requested, that such consent has actually been granted. In anthropological research, there may be certain contexts in which research participants are not literate or are reluctant to sign papers because of a distrust of authorities. Also, due to the confidential and sensitive nature of the subject matter, research participants may not want to leave evidence of their beliefs, political opinion, sexual orientation, or trade union membership, for example, that could be used against them by third parties.

 

Conditions for consent
A consent form is a way to get permission from participants in your research to process their personal data. There are certain conditions attached to this, as outlined above. In Recital 32, the GDPR states the following in this respect:

Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her, such as by a written statement, including by electronic means, or an oral statement. This could include ticking a box when visiting an internet website, choosing technical settings for information society services or another statement or conduct which clearly indicates in this context the data subject’s acceptance of the proposed processing of his or her personal data.

Silence, pre-ticked boxes or inactivity should not therefore constitute consent. Consent should cover all processing activities carried out for the same purpose or purposes. When the processing has multiple purposes, consent should be given for all of them. If the data subject’s consent is to be given following a request by electronic means, the request must be clear, concise and not unnecessarily disruptive to the use of the service for which it is provided.

The definition of the term ‘consent’ can be found in Article 4.11 of the GDPR:

‘Consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.

Through a consent form, you give data subjects insight into the purpose of the research, how you plan to approach the research, and how you will handle their personal data. A consent form can only be drawn up after all the measures to be taken have been identified.

What are the benefits?

To lawfully process personal data, you need to have a legal basis. Consent is one of the six legal bases on which a researcher can lawfully process personal data. Where consent is the chosen legal basis, data subjects must read and sign the consent form prior to the research project; otherwise the researcher is considered to have violated the principles of lawfulness, fairness and transparency (GDPR Art. 5.1).

In some cases, having to fill in a consent form in advance may harm the research. In the case of covert research or deception (sometimes necessary in experimental research) where the purpose is to study the natural behaviour of data subjects and not influence this behaviour in any way, it is not desirable for the data subjects to know the purpose of the research in advance. However, it is still necessary to minimise the risks for data subjects in such research projects, and the obligation of transparency can be met by sharing the research results with the data subjects and clearly communicating the purpose and design of the research after completion of the research.

What is implied above is that the GDPR allows research some flexibility with respect to the specificity of the consent obtained from data subjects. This flexibility can be found in GDPR Recital 33:

It is often not possible to fully identify the purpose of personal data processing for scientific research purposes at the time of data collection. Therefore, data subjects should be allowed to give their consent to certain areas of scientific research when in keeping with recognised ethical standards for scientific research. Data subjects should have the opportunity to give their consent only to certain areas of research or parts of research projects to the extent allowed by the intended purpose.

4.4. DPIA - Data Protection Impact Assessment

Prior to any research in which personal data will be processed, a DPIA gives you an idea of the risks and allows you to mitigate these risks by taking appropriate technical and organisational measures.

See also: 2.17. Obligation 2: Data Protection Impact Assessment.

A DPIA may be carried out voluntarily at any time, but in some cases it is mandatory. In the Decree concerning the list of personal data processing operations for which a data protection impact assessment (DPIA) is mandatory (in Dutch only), the Dutch DPA has indicated a number of processing operations for which a DPIA is mandatory. An example of this is covert research. Normally, a DPIA is carried out for high-risk processing operations. A processing operation is considered high risk if two or more of the following nine criteria apply to the intended operation; a simple sketch of such a check follows the list below (source: p. 1 of the Decree concerning the list of personal data processing operations for which a data protection impact assessment (DPIA) is mandatory):

1. Evaluation or scoring

2. Automated decision-making with legal effect or similar substantial effect

3. Systematic monitoring

4. Sensitive data or data of a very personal nature

5. Data processed on a large scale

6. Matching or merging of datasets

7. Data related to vulnerable data subjects

8. Innovative use or innovative application of new technological or organisational solutions

9. Situation in which the processing operations themselves ‘prevent data subjects from exercising a right or using a service or a contract’
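As a minimal sketch (not an official tool of the Dutch DPA or of any institution), the ‘two or more criteria’ rule can be expressed as a simple check like the following; the criteria strings are paraphrased from the list above.

```python
# Minimal sketch: count how many of the nine high-risk criteria apply and
# flag whether a full DPIA is likely to be mandatory (two or more criteria).
HIGH_RISK_CRITERIA = [
    "Evaluation or scoring",
    "Automated decision-making with legal or similar substantial effect",
    "Systematic monitoring",
    "Sensitive data or data of a very personal nature",
    "Data processed on a large scale",
    "Matching or merging of datasets",
    "Data related to vulnerable data subjects",
    "Innovative use of new technological or organisational solutions",
    "Processing that prevents data subjects from exercising a right or using a service or contract",
]

def dpia_required(applicable: set[str]) -> bool:
    """Return True if two or more of the nine criteria apply."""
    unknown = applicable - set(HIGH_RISK_CRITERIA)
    if unknown:
        raise ValueError(f"Unknown criteria: {unknown}")
    return len(applicable) >= 2

# Example: a study that systematically monitors a vulnerable group.
selected = {"Systematic monitoring", "Data related to vulnerable data subjects"}
print(dpia_required(selected))  # True -> a full DPIA is mandatory
```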

4.4.1 Pre-DPIA

The process of checking for these nine criteria is sometimes referred to as a pre-DPIA.

Here is a brief example of what such a pre-DPIA may look like in practice, combined with the information requested for the record of processing activities:

Research @[University] Record of Processing Activities, Pre-DPIA, Version 1.0

General

1. Please specify the type of your research (please select):

●      Academic
●      Non-Academic

2. Which individuals / groups (partners / providers) outside the EU have access to your dataset? (free text)

Details concerning the research and datasets

3. Which categories of personal data do you use in your dataset? (select all that apply)

●      Given name and surname
●      Business contact information
●      Private contact information
●      Address information
●      Gender
●      Date of birth or age
●      Personnel number/student ID number
●      Marital status
●      Bank account data
●      Financial data
●      Logging information records
●      Location data (GPS tracking or Wi-Fi tracking)
●      Images (photos or videos)
●      Profiling data (e.g. consumer profile)
●      Demographic data
●      Other
●      Not Applicable

4. Do you process special categories of personal data?

●      Yes
●      No
●      Not Sure

4.a Which special categories of personal data do you use in your dataset? (select all that apply)

●      Nationality
●      Race or ethnic origin
●      Political views
●      Religious or philosophical beliefs
●      Union membership
●      Biometric data (such as fingerprints)
●      Genetic data (DNA)
●      Physical or mental health data
●      Sexual preference or orientation
●      Criminal data
●      Social security number / ID number
●      Not Sure
●      Not Applicable

5. Who are the subjects of your research? (select all that apply and justify your answer)

●      Children (<16 years)
●      Vulnerable groups
●      [University] students / alumni
●      [University] employees
●      Other

6. How do you obtain the data for your research? (select all that apply and justify your answer)

●      Directly from individual
●      Publicly available data
●      Existing datasets
●      Other

7. What is the size of your subject population? (select one)

●      less than 10,000
●      more than 10,000

8. Which hardware and software do you use? (select all that apply)

●      [University] hardware
●      Own device
●      [University] licensed software
●      Non-[University] licensed software

9. Please specify your software not available with [University] credentials, for example OneDrive, Google Drive, SurveyMonkey. (free text)

10. Does your research involve any of the following activities? (select all that apply and justify your answer)

●      Evaluation/scoring
●      Systematic monitoring
●      Matching or combining datasets
●      Not Applicable

11. Supporting documentation (select all that apply)

●      Research data management plan
●      Agreement(s) with third parties
●      Consent form from the data subject

4.5 Anonymisation

In addition to measures such as encryption and pseudonymisation, the anonymisation of datasets is also a possible measure to reduce the risks involved in the handling of sensitive and other types of personal data. However, once the personal data are completely anonymised, there is no longer any way to identify a unique individual and therefore the data are no longer viewed as personal data. Consequently, these data no longer fall within the framework of the GDPR, which only deals with personal data.

On the Metro Map, you can see that anonymisation as a measure recurs at various times during the research process. On this page you can read about when a dataset is actually considered ‘anonymous’ and what you must take into account if you wish to anonymise a dataset.

4.5.1 Here is the minimum you need to know about anonymisation:

·       In a fully anonymised dataset, a unique individual is no longer identifiable.

·       If a data item does not relate to an identified or identifiable person, then that data item is not considered to be personal data.

·       The burden of proving when data are truly anonymous is a heavy one, since data can become less anonymous over time, for instance when linked to a new dataset.

4.5.2 In the spotlight

When is a dataset actually considered ‘anonymous’? What do you need to take into account if you wish to anonymise?

In most cases, completely anonymising personal data weakens the value of the dataset to such an extent that many researchers, in consultation with the privacy officer, opt for alternative measures that retain the value of the dataset as far as possible. In a fully anonymised dataset, a unique individual is no longer identifiable.

In Recital 26, the GDPR provides insight into how identifiability should be viewed:

The principles of data protection should apply to any information concerning an identified or identifiable natural person. Personal data which have undergone pseudonymisation, which could be attributed to a natural person by the use of additional information should be considered to be information on an identifiable natural person.

To determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly. To ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.

If a data item does not relate to an identified or identifiable person, then that data item is not considered to be personal data. Since the GDPR relates to the responsible handling and protection of personal data, anonymous data which cannot be traced back to a natural person fall outside the scope of the GDPR. This also applies to personal data that have been irreversibly rendered anonymous in such a way that the data subject is not or no longer identifiable. The GDPR therefore does not concern the processing of such anonymous data, including for statistical or research purposes.

The burden of proving when data are truly anonymous is a heavy one, since data can become less anonymous over time. As a greater number of sources become available which, when linked to each other, can be used to identify a broader context relating to a group, disclosure of group data and re-identification of natural persons may become possible. Data that are anonymous now may not be so five years from now. The authoritative Article 29 Working Party (full name: ‘Working Party on the Protection of Individuals with regard to the Processing of Personal Data’) was an advisory body made up of representatives from the Data Protection Authorities of each EU Member State, the European Data Protection Supervisor, and the European Commission. In its opinion on anonymisation techniques, which serves as further guidance alongside the GDPR, the Article 29 Working Party described the difficulty with anonymisation as follows:

Data controllers should consider that an anonymised dataset can still present residual risks to data subjects. Indeed, on the one hand, anonymisation and re-identification are active fields of research and new discoveries are regularly published, and on the other hand even anonymised data, like statistics, may be used to enrich existing profiles of individuals, thus creating new data protection issues. Thus, anonymisation should not be regarded as a one-off exercise and the attending risks should be reassessed regularly by data controllers.

Therefore, for safety's sake, the general rule is that researchers should immediately contact the privacy officer of their institution in case of doubt, to determine jointly whether the measures taken have successfully anonymised the data and whether the research results still have sufficient value. From the researcher’s perspective, this always involves two elements: on the one hand, the researcher wants to arrive at the richest possible dataset, and on the other hand, the researcher must protect the collected sensitive and other types of personal data as well as possible. A researcher may perceive these as strongly contrasting perspectives.
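One widely used way to make this re-identification risk concrete – not prescribed by the GDPR or by this training, but a common heuristic – is to check the k-anonymity of a dataset with respect to its quasi-identifiers, i.e. the attributes an attacker could link to other sources. Below is a minimal sketch with made-up example data.

```python
# Sketch: check k-anonymity of a toy dataset with respect to a set of
# quasi-identifiers. k is the size of the smallest group of records that
# share the same combination of quasi-identifier values; a low k means
# individuals can more easily be singled out, e.g. after linking datasets.
from collections import Counter

def k_anonymity(records: list[dict], quasi_identifiers: list[str]) -> int:
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

data = [
    {"age_range": "40-44", "postcode_area": "3511", "diagnosis": "A"},
    {"age_range": "40-44", "postcode_area": "3511", "diagnosis": "B"},
    {"age_range": "65-69", "postcode_area": "2611", "diagnosis": "A"},
]
print(k_anonymity(data, ["age_range", "postcode_area"]))  # 1 -> one person is unique
```

A result of k = 1 means that at least one person is unique on those attributes and could be singled out; what value of k is acceptable, and which attributes count as quasi-identifiers, is exactly the kind of question to settle jointly with the privacy officer.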

Example: Facial recognition and anonymisation

An extreme example of this might be a study where the researcher uses facial recognition technology. If the researcher wants to anonymise these data by blurring all the faces, for example, this will make the entire dataset irrelevant. In this case, the researcher would benefit more from measures such as restricting access to the dataset after the research has been completed and clearly specifying the rules for access in a consent form, so that it is clear in advance to the participants involved who can access the data after the research is complete.

Example: Sensitive personal data in a medical study

In this example, we discuss a study on the true cost of care when patients suffer from two or more – often chronic – diseases (multimorbidity). This type of study is not about who has what syndrome, but instead the focus is on patients with multiple syndromes undergoing the corresponding treatments for these syndromes. The study involves working with patient pseudonyms that cannot be traced by the researcher, specific care profiles, participation in certain integrated care programmes, encrypted data for the institutions involved in the treatment, and data on the actual use of care.

[Image - see NL version]

With such a study description, people often get nervous and are quick to conclude that this involves medical data, which fall under the special categories of personal data requiring a more intensive protection regime. However, if you look carefully at how the relevant privacy issues have been taken into account in the research design – in this case, for example, via the use of non-traceable pseudonyms – you can see that the study works with anonymous data and related encrypted data.

Anonymous data are not personal data and fall outside the scope of the GDPR. Even so, it is important to make proper agreements to avoid the risk of re-identification through the linking of datasets. For this study, it is sufficient to have normal measures in place for processing the data.

For example, you could also decide to convert the patient data into a synthetic dataset, which has the same properties from a statistical perspective and is therefore equally suitable for the study but does not involve any personal data. This example shows that a proper risk analysis carried out prior to the study will immediately indicate the appropriate and proportional measures that need to be taken.
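As a deliberately naive sketch of the synthetic-data idea (the records and variable names below are invented): real synthetic-data generators model the joint distribution of the variables, whereas the toy version here only resamples each column independently, so it preserves the marginal distributions but not the correlations between columns.

```python
# Naive sketch: generate a 'synthetic' dataset by resampling each column
# independently. This preserves the marginal distributions but destroys
# correlations between columns; real synthetic-data generators model the
# joint distribution. Variable names and values are illustrative only.
import random

original = [
    {"age_range": "40-44", "care_profile": "diabetes+COPD", "cost": 12400},
    {"age_range": "65-69", "care_profile": "diabetes", "cost": 5300},
    {"age_range": "65-69", "care_profile": "COPD+heart failure", "cost": 20100},
]

def naive_synthetic(records, n, seed=42):
    rng = random.Random(seed)
    columns = {key: [r[key] for r in records] for key in records[0]}
    return [{key: rng.choice(values) for key, values in columns.items()}
            for _ in range(n)]

print(naive_synthetic(original, 2))
```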

In this set-up, therefore, no Excel files with personal and treatment data are collected from the institutions involved in the treatment of particular patients, with the researchers themselves linking the data on the basis of the patient’s citizen service number (BSN), for example. That would be irresponsible handling of personal data and, as we saw earlier, the law does not permit researchers to process the BSN.

4.6 Pseudonymisation

 

Pseudonymisation is one of the measures a researcher can take to transform personal data into a dataset that can no longer be directly traced back to an individual. It is a measure that plays a role at various different times in a research project, as you can see in the Metro Map below.

On this page we explain what pseudonymisation is exactly and how to apply it, and we also provide a list of interesting sources for further reading.

[Image - see NL version]

4.6.1 Here is the minimum you need to know about pseudonymisation

 

·      Pseudonymisation means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information.

·      Pseudonymisation of data generally involves the use of a source file containing the personal data and a target file, where the source file is pseudonymised using certain statistical operations or other pseudonymisation techniques. In general, there is a key based on which the researcher can always return to the source file from the pseudonymised file.

·       The four-eyes principle is generally applied, i.e. at least two persons have joint access to the key of  the source file. This is done to provide sufficient transparency and demonstrate that there has been no breach of scientific integrity in the research.

4.6.2 In the spotlight

 

How can you pseudonymise data? What do you need to take into account if you wish to pseudonymise?

Personal data which have undergone pseudonymisation, which could be attributed to a natural person by the use of additional information should be considered to be information on an identifiable natural person. (GDPR Recital 26)

The application of pseudonymisation to personal data can reduce the risks to the data subjects concerned and help controllers and processors to meet their data-protection obligations. The explicit introduction of ‘pseudonymisation’ in this Regulation is not intended to preclude any other measures of data protection. (GDPR Recital 28)

In practice, when pseudonymising data, certain personal data are omitted or replaced. There are many different techniques for the latter, depending on whether the data still need to be usable for statistical purposes. Through generalisation, a specific age (day/month/year) can be replaced by a range of years (born in year x to x+5), as a result of which the birth-year data point is retained for statistical purposes. You can also use randomisation, for example by randomly replacing the ‘last name’ value with a different last name or with a series of numbers. You may also replace values with a ‘hash’, i.e. apply an algorithm (a cryptographic hash function) that converts variable-sized data into fixed-sized data from which the input cannot be directly reconstructed. Note, however, that if the range of possible inputs is small or predictable (BSNs, for example), a plain hash can still be reversed by hashing all possible inputs, which is why keyed or salted hashing is generally preferred.
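A minimal sketch of these three techniques applied to a single, made-up record is shown below; the salt and the participant code are illustrative only and would in practice be generated and stored under the key-management arrangements described further on.

```python
# Sketch of generalisation, randomisation and (keyed) hashing applied to
# one toy record. The salt handling is illustrative; in practice the salt
# or key must be stored separately from the pseudonymised data.
import hashlib
import secrets

record = {"last_name": "Jansen", "birth_year": 1987, "bsn": "123456782"}

# Generalisation: replace an exact birth year with a five-year range.
start = record["birth_year"] - record["birth_year"] % 5
record["birth_year"] = f"{start}-{start + 4}"

# Randomisation: replace the last name with a random participant code.
record["last_name"] = f"P-{secrets.token_hex(4)}"

# Keyed hashing: the same BSN always maps to the same pseudonym, but it
# cannot simply be recomputed by an outsider who does not know the salt.
SALT = b"store-this-secret-separately"
record["bsn"] = hashlib.sha256(SALT + record["bsn"].encode()).hexdigest()[:16]

print(record)
```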

The ISO standard ‘ISO/IEC 20889:2018 Privacy enhancing data de-identification terminology and classification of techniques’ distinguishes the following de-identification techniques:

1. Statistical tools (Sampling, Aggregation),

2. Cryptographic tools (Deterministic encryption, Order-preserving encryption, Format-preserving encryption, Homomorphic encryption, Homomorphic secret sharing),

3. Suppression techniques (Masking, Local suppression, Record suppression)

4. Pseudonymisation techniques (Selection of attributes, Creation of pseudonyms)

5. Anatomisation

6. Generalisation techniques (Rounding, Top and bottom coding, Combining a set of attributes into a single attribute, Local generalisation)

7. Randomisation techniques (Noise addition, Permutation, Microaggregation)

8. Synthetic data

Pseudonymisation of data generally involves the use of a source file containing the personal data and a target file, where the source file is pseudonymised using certain statistical operations or other pseudonymisation techniques. In general, there is a key based on which the researcher can always return to the source file from the pseudonymised file. This may be required if, for example, a participant in a research project has indicated that they would like to be informed if the research reveals the possibility of a hereditary defect.
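A minimal sketch of this source file / key file / target file split, with invented names and fields, might look as follows; in practice the key file would be stored in an access-restricted location and managed under the four-eyes principle described below.

```python
# Sketch of the source-file / key-file / target-file split described above.
# The key file maps pseudonyms back to identities and is stored separately,
# accessible only to the (at least two) persons holding the key.
import csv
import uuid

source = [
    {"name": "A. de Vries", "email": "a.devries@example.org", "score": 7},
    {"name": "B. Bakker", "email": "b.bakker@example.org", "score": 9},
]

key_file, target = [], []
for row in source:
    pseudonym = uuid.uuid4().hex[:8]
    key_file.append({"pseudonym": pseudonym, "name": row["name"], "email": row["email"]})
    target.append({"pseudonym": pseudonym, "score": row["score"]})

# The target file is what the analysts work with; the key file is stored
# elsewhere and only consulted when, for example, a participant must be
# re-contacted as agreed in the consent form.
with open("key_file.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["pseudonym", "name", "email"])
    writer.writeheader()
    writer.writerows(key_file)

print(target)
```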

For the research itself, the most appropriate method is to process pseudonymised data both from the point of view of the analysis (the name of the data subject is not necessary for the analysis) as well as the protection of personal data (persons performing the analysis do not necessarily have the right to access the personal data of the research participant concerned). However, if the lead researcher has offered the option – via a consent form, for example – of providing information to the research participant in question based on their consent, it is necessary for the researcher to be able to retrieve the identity of the research participant in question from the pseudonymised data so that the information can be communicated as agreed.

Access to the above-mentioned key is only provided to a very limited number of persons, to ensure the protection of the personal data. From the perspective of scientific integrity, the four-eyes principle is generally applied, i.e. at least two persons have joint access to this key and never one person alone. This is done to provide sufficient transparency and demonstrate that there has been no breach of scientific integrity in the research, for example, with respect to the data collection.

Read more

 1. See the guide entitled ‘Dealing with pseudonymisation and keys in small-scale quantitative research’ prepared by the LCRDM Pseudonymisation Task Force of the National Coordination Point Research Data Management (LCRDM). The LCRDM is a national network of experts in the field of research data management (RDM).

2. See the white paper on the Five Safes framework from Privacy Analytics, which has also been used as the basis for the guide referred to under (1) above.

3. See the following recommendations from ENISA: Recommendations on shaping technology according to GDPR provisions. An overview on data pseudonymisation. 2018. The European Union Agency for Cybersecurity (ENISA) strives to achieve a high level of cybersecurity across Europe.

4. See the ISO standard: ISO/IEC STANDARD 20889:2018: Privacy enhancing data de-identification terminology and classification of techniques.

Module 5 - GDPR

This Regulation respects all fundamental rights and observes the freedoms and principles recognised in the Charter as enshrined in the Treaties, in particular the respect for private and family life, home and communications, the protection of personal data, freedom of thought, conscience and religion, freedom of expression and information, freedom to conduct a business, the right to an effective remedy and to a fair trial, and cultural, religious and linguistic diversity.

GDPR Recital 4

The General Data Protection Regulation (GDPR) has been in force since 25 May 2018 – a law with a long history that is sometimes referred to as the ‘privacy law’. But is that correct? What is privacy exactly? And how does privacy relate to data protection? In this module you will discover the connection between the GDPR and the fundamental right to privacy and learn about when and how the GDPR applies.

The GDPR is a very special law indeed! To discover the topics covered in this module and to test your previous knowledge, we will once again start with a short quiz. Good luck!

5.1.1 Quiz

What is meant by the term ‘informational privacy’?

 

●      The right to make decisions about your own personal data
●      The right to have your personal data removed from an organisation
●      The right to access personal data digitally if you wish
●      The right to be able to request information about your privacy

 

Privacy is a fundamental human right. What laws protect this right for us?

 

●      Universal Declaration of Human Rights (1948)
●      Charter of Fundamental Rights of the European Union (2012)
●      General Data Protection Regulation (Implementation) Act (2020)
●      The Dutch Constitution (2018)

 

Do the terms ‘privacy’ and ‘data protection’ mean the same thing?

 

●      No. Data protection is only one part of the broader concept of privacy
●      No. Privacy does not appear as a concept in the main text of the GDPR
●      Yes. In the GDPR, these concepts are used interchangeably
●      Yes. Privacy is a part of data protection

 

What is the difference between classic fundamental rights and fundamental social rights?

 

●      Classic fundamental rights limit the power of the government and fundamental social rights are obligations of the government towards citizens
●      Classic fundamental rights comprise all the fundamental rights that were established in 1848 and fundamental social rights define citizens’ rights to care
●      Classic fundamental rights limit the power of government and fundamental social rights define citizens’ rights to care
●      Classic fundamental rights comprise all the fundamental rights that were established in 1848 and fundamental social rights include all rights to a social safety net

 

EU Member States are expected to adopt national implementation legislation supplementary to the GDPR, which should contain more detailed provisions applicable in the relevant Member State. What is the name of this law in the Netherlands?

 

●      The UAVG (General Data Protection Regulation (Implementation) Act (Uitvoeringswet AVG))
●      The IAVG (GDPR Implementation Act (Implementatiewet AVG))
●      The NAVG (GDPR National Law (Nationale wet AVG))
●      The SAVG (GDPR Dragnet Act (Sleepwet AVG))

 

What is the hierarchy within our legislation with respect to the protection of personal data?

 

●      1. The Charter of Fundamental Rights of the European Union, 2. The GDPR, 3. The Dutch Constitution, 4. The UAVG
●      1. The GDPR, 2. The Dutch Constitution, 3. The UAVG, 4. The Charter of Fundamental Rights of the European Union
●      1. The Dutch Constitution, 2. The UAVG, 3. The Charter of Fundamental Rights of the European Union, 4. The GDPR
●      1. The Dutch Constitution, 2. The Charter of Fundamental Rights of the European Union, 3. The GDPR, 4. The UAVG

 

What risk does a researcher face if they are unable (in the event of a complaint from a data subject, for example) to provide the Dutch DPA with sufficient evidence regarding the measures taken to protect personal data in the research project?

 

●      A fine and possible discontinuation of the research
●      A fine and possibly the dismissal of the researcher
●      Discontinuation of the research and possibly the dismissal of the researcher
●      A fine and temporary deprivation of the researcher’s titles

 

What is the purpose of the GDPR?

 

●      Encouraging the free movement of data, including personal data, and protecting these personal data
●      Protecting personal data and optimising the privacy of all EU citizens
●      Encouraging the free movement of data, including personal data, and in that way maximising the privacy of all EU citizens
●      Protecting the personal data of all EU citizens

 

The right to privacy is a fundamental human right. Does that mean that the right to privacy is also an absolute right?

 

●      No. It must always be weighed against other fundamental rights. For example, in some situations, national security may take precedence over the right to privacy
●      No. Only during a GRIP 5 situation may the fundamental right to privacy be overruled in favour of other fundamental rights
●      Yes. The fundamental right to privacy prevails over all other fundamental rights
●      Yes. There can be no compelling reasons to restrict the fundamental right to privacy, as indicated in Article 7 of the Charter of Fundamental Rights of the European Union

 

The GDPR is a general regulation. What does this mean?

 

●      The GDPR has been drawn up based on open standards and is technology neutral
●      The GDPR is general in scope and does not say anything about national legislation
●      The GDPR states how things should be in general but national legislation may deviate from that
●      The GDPR is technology neutral and does not say anything about technical standards and norms

 

The GDPR is a principle-based law. What does this mean?

 

●      That a researcher must always consider, based on the formulated principles, what measures are necessary to protect personal data as well as possible within a specific research context
●      That a researcher may implement any measures whatsoever to protect personal data as long as they do not violate the principles outlined in the GDPR
●      That a researcher cannot rely on the GDPR for guidance since it only describes general principles
●      That a researcher must follow the principles outlined in the GDPR in order to successfully complete a research project

 

The GDPR is technology neutral. What does this mean?

 

●      Technology is developing so rapidly that any concrete specification of the type of technology to be used will, by definition, be out of date because of these rapid technological developments
●      The GDPR is technology neutral because otherwise specific commercial parties would gain a competitive advantage
●      The GDPR is technology neutral because all research institutes use different technologies
●      Technology is developing so rapidly that any concrete specification of the type of technology to be used would mean that a research institute has to regularly upgrade its IT systems

 

What does the GDPR say about new technologies such as artificial intelligence and machine learning?

 

●      Nothing – case law in these areas will show how these new technologies relate to the principles in the GDPR
●      For these new technologies, the GDPR clearly indicates the technical and organisational measures needed to protect personal data as well as possible
●      The GDPR may be technology neutral, but it clearly describes the direction to be taken in terms of measures for new and emerging technologies
●      The GDPR is a general regulation and therefore only describes in general terms how to deal with these new technologies

 

How does the GDPR distinguish between countries with respect to international cooperation?

 

●      1. EU Member States, 2. Countries in the European Economic Area (EEA), 3. Countries outside the EU and the EEA
●      1. Countries in the European Economic Area (EEA), 2. Countries outside the EU and the EEA, 3. Countries that are not part of the UN
●      1. EU Member States, 2. Countries outside the EU, 3. Countries that are not part of the UN
●      1. EU Member States, 2. Switzerland, Norway, and the UK, 3. Countries outside the EU

 

What does an adequacy decision mean in the context of international cooperation within the GDPR?

 

●      That transfers of personal data to the country in question are considered by the European Commission to be equivalent to transfers of personal data within the EU
●      That only countries that have received such a decision may process the personal data of EU citizens
●      That transfers of personal data to countries that have received an adequacy decision from the EU are subject to less stringent regulations
●      That only countries that have received such a decision are eligible to be included within the European Economic Area

 

5.1.2 Learning objectives

The GDPR consists of more than 150 pages, with 99 articles describing the legal framework for the handling of our personal data. In this module you will learn about the main GDPR principles and articles relating to the performance of scientific research and we will also explore:

●      The difference between privacy and data protection
●      The purpose and operation of the GDPR, including in an international context
●      The reasons why the GDPR is technology neutral

Good luck!

5.2. Scope of the GDPR

The GDPR contains three limitations with respect to personal data: applicability to living persons, the ‘household exception’, and the territorial scope. In this chapter, we briefly discuss these limitations.

1. Applicability to living persons

The GDPR applies solely to living natural persons; it does not apply to persons who have died or to data about organisations. Therefore, the GDPR does not apply to historical research, archival research, etc. involving deceased persons.

However, the GDPR does apply to historical medical data relating to heredity, for example, since these records may be able to tell us something about unique living individuals. Another example is data from a diary of a deceased person, since this information from the diary may be able to tell us something about living relatives.

2. Household exception

The GDPR has a ‘household exception’: any personal data you store and share in private is not covered by the GDPR (Article 2.2c):

This Regulation does not apply to the processing of personal data: by a natural person in the course of a purely personal or household activity.

So if you accidentally send a list of family members’ email addresses to the wrong person, this is not considered a data breach that requires notification to the Dutch DPA. Other examples of the household exception include images on a security camera at your home, a phone book with address information of family members, a post-it on a home computer with friends’ phone numbers on it, and a list of email addresses of all the football club coaches stored by you on your own computer. These types of personal data are not covered by the GDPR. However, the GDPR does apply if the football club itself stores this list of email addresses within its organisation.

See also the more detailed explanation (in Dutch only) on the website of the Dutch DPA.

3. Territorial scope

The two points mentioned above relate to the material scope as mentioned in GDPR Article 2: the data subject must be a living person and the processing of personal data must occur within the work sphere. Besides the material scope, there is also a territorial scope as defined in GDPR Article 3. This Article says that data of EU residents are protected by the GDPR, regardless of the party that carries out the processing. This means that a company from Japan that has European customers must also comply with the GDPR, as defined in GDPR Recital 23:

In order to ensure that natural persons are not deprived of the protection to which they are entitled under this Regulation, the processing of personal data of data subjects who are in the Union by a controller or a processor not established in the Union should be subject to this Regulation where the processing activities are related to offering goods or services to such data subjects irrespective of whether connected to a payment.

Furthermore, this Article states that organisations established in the EU that process personal data must comply with the GDPR, even if the processing operation takes place outside the EU. Finally, this Article states that if an organisation monitors the behaviour of EU residents where this is ‘related to the monitoring of the behaviour of such data subjects in so far as their behaviour takes place within the Union’ (GDPR Recital 24), these processing operations shall also fall under the GDPR.

This means that protection under the GDPR also extends to an EU resident who uses services like Facebook or Google and is monitored via those services, for example. That is why these organisations, which have their headquarters outside the EU, can also be issued fines. In fact, companies like Google and Facebook have already had to pay hefty fines in several EU countries following rulings by national DPAs.

5.3. Privacy and data protection

‘I have nothing to hide!’ is an often-heard statement when it comes to privacy. This also seems apparent from the large amounts of personal information posted by both young and old people in pictures and words on social media, for example. Take a look at how easily people reveal all kinds of sensitive information:

https://www.youtube.com/watch?v=YNPI6B-BUW4&feature=emb_imp_woyt (In Dutch only)

On the other hand, people say they value their privacy very much. The discussion surrounding the coronavirus app, which notifies people if they have been in close contact with an infected person, shows that privacy as an issue remains very relevant within society. This is referred to as the ‘privacy paradox’: an interesting phenomenon with which to start this chapter about the GDPR.

5.3.1 Privacy paradox

The privacy paradox is a phenomenon where people say they value privacy, while in actuality they exchange their personal data for ‘free’ products and services without much thought or take absolutely no measures to protect their privacy.

Privacy is dead (Daniel J. Solove)

However, Daniel J. Solove, Professor of Law at the George Washington University Law School, calls the privacy paradox a myth created by faulty logic. Solove says that people make decisions about personal privacy risks in very specific contexts. Yet their perception of privacy risks or the value they attach to privacy are, he argues, often much more general in nature. As a result, these two levels – risk in a specific context and in general – are intertwined within the privacy paradox, according to Solove.

The right to make your own decisions about your personal data, the right to decide what personal data we share with whom, for what purpose, and for how long: this is referred to as ‘informational privacy’. And if for one person that means posting videos with sensitive personal data on social media every day, that is fine. This does not make this person more or less entitled to privacy; the very fact that this person can make choices about what they share with others is a form of privacy. So, each person determines the level of privacy they desire. Privacy issues arise when people are unaware that their personal data are being collected or shared without their consent.

5.3.1.1 In the spotlight

Koppie Koppie

A particularly fascinating example is the Koppie Koppie website (mixed Dutch & English), where it was possible to order coffee mugs with random photos of underage children taken from Flickr. Koppie Koppie is a project by designer Yuri Veerman and journalist Dimitri Tokmetzis. It was part of the awareness campaign Everyone a Spy (Iedereen Spion), an initiative of SETUP Utrecht.

 

Formally, people posting photos of their children on Flickr after having read Flickr’s Terms and Conditions of Service – although these are written in such a way that they are almost never read in practice (too long, written in technical language) – should have known that photos on Flickr may be reused by third parties, including for commercial purposes. The Koppie Koppie example shows that a lack of transparency about the privacy risks relating to Flickr services can give rise to a privacy issue: the parents are unknowingly making these photos available to third parties, who sell coffee mugs with photos of their underage children on them.

 

Also covered in this video from CNN (2015).

Test samples of the Dutch Municipal Health Service (GGD) in Abu Dhabi

In September 2020, there was a commotion when test samples collected by the GGD were sent to Abu Dhabi, in the United Arab Emirates, because the labs in the Netherlands could no longer handle the workload. Eventually, it turned out that the processing operation in Abu Dhabi was not entirely in order but was carried out nevertheless on the basis of force majeure arguments.

The test samples themselves do not constitute data, but they lead to the creation of personal data. And not just any data but health information, which is a special category of personal data that requires additional protection. Therefore, the taking of test samples is a privacy-related issue in a broader sense, i.e. broader than just the informational aspect of it. This falls primarily under Article 11 of the Dutch Constitution (in Dutch only) relating to the inviolability of one’s person: ‘Everyone shall have the right to inviolability of his person, without prejudice to restrictions laid down by or pursuant to Act of Parliament.’ The explanation of this Article clearly states:

This Article pertains to Article 10 (Right to Privacy). Everyone has the right to have control over one's own body. The government is not allowed to do anything to your body if you do not want this. Others are also not allowed to do anything to your body if you do not want this. For example, no one is allowed to hurt you. Also, no one is allowed to administer medication to you if you do not want this. Even medical examinations or cutting your hair are not allowed, if you do not give permission for this.

For the further protection of the ‘respect for and guarantee of human dignity’, the Medical Treatment Contracts Act (in Dutch only) (Wet op de geneeskundige behandelingsovereenkomst (WGBO)) has been drawn up.

So was the processing in Abu Dhabi okay or not?

5.3.2. Human rights

Why do we care about privacy and the protection of our personal data? Because people have fundamental rights, such as the right to privacy. This right is enshrined in the Dutch Constitution.

The Constitution establishes the fundamental rights of citizens (English version), as can be seen in this video:

https://youtu.be/sVJaAt9mupg (in Dutch only)

5.3.2.1 Informational privacy

Informational privacy is the right to decide what data you share, how and with whom, for what purpose, and for how long. This right to self-determination for the processing of your personal data can only exist if that right is protected. The General Data Protection Regulation (GDPR) has been drawn up for this purpose. People often use the terms ‘privacy’ and ‘data protection’ synonymously, but data protection is only one component of privacy and is only related to the processing of personal data.

Privacy is a fundamental human right, as laid down in:

Universal Declaration of Human Rights (UDHR) (1948)


Article 12 - No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

(Source)

Charter of Fundamental Rights of the European Union (2012)

Article 7 Respect for private and family life

Everyone has the right to respect for his or her private and family life, home and communications.

Article 8 Protection of personal data

1. Everyone has the right to the protection of personal data concerning him or her.

2. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.

3. Compliance with these rules shall be subject to control by an independent authority.’

(Source)

Dutch Constitution (2018)

Article 10: Privacy

1. Everyone shall have the right to respect for his privacy, without prejudice to restrictions laid down by or pursuant to Act of Parliament.

2. Rules to protect privacy shall be laid down by Act of Parliament in connection with the recording and dissemination of personal data.

3. Rules concerning the rights of persons to be informed of data recorded concerning them and of the use that is made thereof, and to have such data corrected shall be laid down by Act of Parliament.

(Source in Dutch) (Source in English)

The distinction between the broader context of privacy and the specific role played by data protection within it is evident in the distinction made in Articles 7 and 8 of the Charter of Fundamental Rights of the European Union; only the latter Article deals with the protection of personal data.

Subsequently, the GDPR establishes how an individual can exercise their right to privacy and the corresponding responsibilities during the processing of personal data. Furthermore, the GDPR describes the applicable principles, i.e. the principles relating to processing of personal data, see GDPR Article 5. In addition to the GDPR, an EU Member State may stipulate further rules in certain cases, through national implementation acts. In the Netherlands, this national implementation act has come into effect as of 1/1/2020 and is called the General Data Protection Regulation (Implementation) Act (in Dutch only), also known as the UAVG. It should be noted, however, that the legislator has striven to establish a ‘policy neutral’ implementation of the GDPR, see (Parliamentary Papers II 2017/18, 34851, 3, p. 17 (2.4 The relationship between European law and national law: layered regulations) (in Dutch only)):

In accordance with the general principle for the implementation of European regulations, the aim has been to ensure a policy neutral interpretation of the scope offered by the Regulation. In other words, while interpreting the Regulation within national legislation, it was always considered whether and to what extent the existing national legislation and policy choices could be maintained under the Regulation. Where that is not or not entirely possible, the choice has always been to adhere as closely as possible to the existing national law.

In the following chapters, we will explore the general and fundamental right to privacy, as well as the part of it known as informational privacy, which is limited to the protection of personal data.

5.3.2.2 In the spotlight

Council of State: Covid-19

An example of this is the criticism by the Council of State regarding the emergency measures proposed by the government on account of the global Covid-19 pandemic and the consequent limitation of fundamental rights (see: Council of State: limitation of fundamental rights justifiable but legal basis now required (in Dutch only)). The Council of State (in Dutch only) is an independent advisor to the government on legislative and administrative matters and is the highest general administrative court in the country. In the opinion of the Council of State, the recommended emergency measures were justifiable during the life-threatening initial phase of the coronavirus outbreak:

But the legal tenability of the recommended emergency orders diminishes the longer the situation lasts. Therefore, a temporary emergency law replacing the emergency measures must be adopted as soon as possible. The Council of State is particularly critical of the restrictions placed on private gatherings. In addition, there is limited democratic control of the measures since these have been approved by unelected chairs of the security regions.

This is an example of how the fundamental right to privacy is protected: the government is asked to draft an emergency law for the recommended emergency measures that limit this fundamental right, and that law is then democratically reviewed by the House of Representatives and the Senate. This means that the government, or a minister, cannot independently take a decision that limits this fundamental right.

See here (in Dutch only) for the legislative process followed for the Temporary Act governing the COVID-19 Measures (Tijdelijke wet maatregelen Covid-19).

5.4 The GDPR as a law

Data protection is a fundamental right, as articulated in Article 8 of the Charter. The GDPR subsequently provides the legal framework for protecting personal data when they are processed.

It does this by specifying the following:

●      Principles (principles relating to the processing of personal data, including lawfulness, fairness and transparency)

●      Roles (controller, processor)

●      Responsibilities

Last but not least, it also establishes the rights of data subjects, i.e. the individuals whose personal data are being processed. The GDPR allows Member States to further interpret certain open standards, for example, with respect to the processing of national identification numbers, see: GDPR Art. 87 ‘Processing of the national identification number’:

Member States may further determine the specific conditions for the processing of a national identification number or any other identifier of general application. In that case the national identification number or any other identifier of general application shall be used only under appropriate safeguards for the rights and freedoms of the data subject pursuant to this Regulation.

These national provisions are laid down in national implementation acts for the GDPR. This act for the Netherlands is called the General Data Protection Regulation (Implementation) Act (in Dutch only), also referred to as the UAVG.

5.4.1 Examples

The examples below illustrate how the UAVG may further specify or deviate from the GDPR:

Citizen service number

In the Netherlands, the citizen service number (BSN) may only be processed under strict conditions. For example, see: UAVG Article 46 (in Dutch only).

For an example of the differences between Member States regarding this point, see: GDPR Tracker - National identification numbers/any other identifier of general application.

Children and consent

The age at which children are considered capable of giving consent varies greatly between Member States (source: https://fra.europa.eu/en/publication/2017/mapping-minimum-age-requirements/use-consent). The image below clearly illustrates these differences:

5.4.2 Hierarchy and relationships

To properly understand the GDPR as a law, it is necessary to understand the GDPR’s hierarchical position and its relationship to the Charter of Fundamental Rights of the European Union. In the boxes below, you can see exactly how this works:

5.4.2.1. Hierarchy of legislation [BOX]

The right to privacy is safeguarded in different ways in different types of legislation. The following hierarchy applies to Dutch legislation (source: Rules on conflict of laws (lex specialis, superior, posterior) (in Dutch only)):

 

1.     International treaties and European law

2.     Charter of the Kingdom

3.     Constitution

4.     Statutes

5.     Administrative orders

6.     Ministerial regulations

7.     Provincial decrees

8.     Local ordinances, ordinances of product and industry boards (product- en bedrijfschappen), and water control authority ordinances

EU regulations contain rules that apply directly in all Member States of the European Union. This is referred to as ‘direct effect’. Hence, these regulations have a status similar to national laws in Member States, but in case of conflict, the regulation takes precedence over the national law (source: https://www.europa-nu.nl/id/vh7bhpblc5za/verordening (in Dutch only)).

Therefore, the hierarchy of regulations applicable to data protection is as follows:

1.     Article 8 in the Charter of Fundamental Rights of the European Union describes, in general terms, the European fundamental right to the protection of personal data.

 

2.     Article 5 of the GDPR describes the principles to be followed for the protection of personal data, through which it safeguards the protection offered by the general principles in Article 8 of the EU Charter (source: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=en).

 

3.     Article 10 of the Dutch Constitution describes the fundamental right to privacy and data protection in the Netherlands (source: https://www.denederlandsegrondwet.nl/id/via0icz1lvv8/artikel_10_privacy (in Dutch only)).

 

4.     Specifically for the Dutch context and in addition to the GDPR, the UAVG (in Dutch only) specifies (as an Act of Parliament) the framework, roles, and responsibilities for the protection of personal data.

This means that if a researcher fails to comply with the privacy principles outlined in Article 5 (GDPR), this is considered a breach of the fundamental human right enshrined in Article 8 of the EU Charter.

5.4.2.2 Relationship between the Charter and the GDPR [BOX]

The relationship between the Charter of Fundamental Rights of the European Union and the GDPR principles relating to the processing of personal data is explained in greater detail in the figure below:

The innermost blue circle contains the Article 8 principles from the EU Charter. They are protected by the Article 5.1 principles from the GDPR and, for the Dutch context, by the additional legislation in the UAVG, as displayed in the middle dark-green circle. The outermost light-green circle represents the principle of accountability mentioned in GDPR Article 5.2:

5.2 The controller shall be responsible for, and be able to demonstrate compliance with, paragraph 1 (‘accountability’).

This accountability principle (Article 5.2) indicates that, under the GDPR, a researcher must be able to demonstrate, with accompanying proof, that they have taken all the necessary measures to protect sensitive and other kinds of personal data as well as possible.

A researcher must not only have a thorough knowledge of the Article 5 principles of the GDPR, but they should also be aware of the technical and organisational measures and safeguards that are necessary within a given context. Moreover, the researcher (in collaboration with a privacy officer) must know how to provide proof of the measures and safeguards taken, if necessary. This is important because, in case of a complaint from a data subject, the Dutch DPA may initiate an investigation in which such proof will be requested. Failure to provide such proof could potentially result in a fine and discontinuation of the research.

[END BOX]

But what technical and organisational measures does a researcher have to take within a specific context to initiate and carry out a research project in a GDPR-compliant manner? In ‘Module 4 - Actions’, we will go deeper into the various measures that can be taken for this. But first we will take a deeper look at the GDPR and the Article 5 principles of the GDPR.

5.4.3 Purpose of the GDPR

The GDPR is not about privacy. The word ‘privacy’ occurs only once in the GDPR, in reference to a European Parliament Directive on privacy supplementary to the GDPR. But what is the purpose of the GDPR?

The official title of the General Data Protection Regulation (GDPR) reads as follows:

REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL

on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

Therefore, the two purposes of the GDPR are:

●      Encouraging the free movement of data, including personal data

●      The protection of these personal data.

The GDPR indicates, in a technology-neutral manner (hence the general, open-standards nature of the GDPR), that certain rights and obligations apply to the processing of personal data, and for this it defines specific roles and corresponding responsibilities. The GDPR specifies how these rights can be exercised, how this is monitored, and what sanctions are in place. An often-heard statement about the GDPR is, “you’re not allowed to do anything with personal data any more”. However, the opposite is true: by properly applying the principles laid down in the GDPR, a lot can be done – after all, one of the very purposes of the GDPR is to ensure the free flow of personal data!

5.4.3.1 The researcher and the GDPR

Let us take a look at this from the perspective of a researcher. The example below describes the mental steps every researcher takes in moving from merely being GDPR compliant to embracing privacy as second nature.

5.4.3.2 In the spotlight

Privacy maturity of a researcher

A researcher and their research support staff who are learning about the GDPR for the first time will typically focus on conducting the research in a GDPR-compliant manner. This is only logical, because an incident such as a data breach would not only damage the reputation of the researcher, research group, or institution but may also lead to the discontinuation of the research. At this stage, the researcher often sees the GDPR as a necessary external obligation, mainly involving additional work and costs.

After a researcher has gone through a few research projects, the risk-based logic of the GDPR, with its identified risks and corresponding mitigation measures, becomes clear. Once appropriate measures have been identified for the known privacy risks to research participants, those measures will generally also be appropriate in similar situations, barring, for example, relevant developments in technology that require the measures to be reconsidered.

The researcher and the research support staff become increasingly adept at applying the right technical and organisational measures for the protection of personal data in a practical and concrete manner. As a result, the researcher is also able to carry out increasingly high-risk research, such as studies involving machine learning, artificial intelligence, or big data, or studies in which vulnerable target groups are the subject of research.

Since privacy and the protection of personal data are now second nature to the researcher, they have reached a level of maturity that provides access to new research opportunities in a rapidly evolving society. In this way, privacy becomes less of an obstacle for researchers and more of a prerequisite for conducting innovative high-risk research.

5.4.3.3 Rights and obligations in the GDPR

The image below effectively summarises the GDPR. Here the rights of a research participant are contrasted with the obligations of the researcher, in terms of the responsible handling of personal data and appropriate protection for this.

As mentioned earlier, the GDPR is a law based on principles. It is always about assessing whether the two purposes of the GDPR are fulfilled:

1.     Encouraging the free movement of personal data

2.     Protecting these personal data as well as possible

This assessment and the resulting measures vary depending on the research project. It involves making a reasonable assessment of the risks with respect to the research participants, weighing up these risks, and determining appropriate, proportional measures to mitigate these risks. For example, by not collecting more personal data than is necessary for the research purpose. To ensure this, researchers may ask themselves questions such as:

●      What sensitive and other kinds of data would I need to collect from the research participants?

●      What would be the consequences if a third party were to gain unauthorised access to these data?

●      What kind of harm could someone cause with these personal data?

●      What would it mean for the data subjects if their personal data were to fall into the wrong hands?

It is easy to think of many examples where there would be very serious consequences if unauthorised persons were to gain access to certain personal data, such as research projects where interviews with paedophiles, war criminals, or coup plotters in a dictatorship are part of the research data. But, in practice, most research projects do not involve this kind of sensitive information, so accordingly the measures to be taken may also be more limited.

Finally, research participants may also be harmed by stigmatisation and disclosure of the group. Consider, for example, an observational study of fathers from village X that examines the quality of interaction between the fathers and their minor children; a publication of the research results disclosing that a majority of these fathers showed important areas for improvement in this interaction could stigmatise the whole group.

5.4.3.4 The GDPR - not an absolute right

Please note: the right to privacy is not an absolute right. Recital 4 in the GDPR articulates this as follows:

The processing of personal data should be designed to serve mankind.

The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality.

This Regulation respects all fundamental rights and observes the freedoms and principles recognised in the Charter as enshrined in the Treaties, in particular the respect for private and family life, home and communications, the protection of personal data, freedom of thought, conscience and religion, freedom of expression and information, freedom to conduct a business, the right to an effective remedy and to a fair trial, and cultural, religious and linguistic diversity.

 

Hence, while privacy is a fundamental human right, other rights may outweigh it in certain contexts, for example, in the case of a pandemic such as the Covid-19 crisis. In these situations, public health interests may prevail over the fundamental right to privacy, for example, when a government mandates a particular tracking app for coronavirus infections. In doing so, the government infringes upon the right to privacy but in the service of a (at the time) greater priority: public health.

In this example, the GDPR can serve as a guide to designing the tracking app to be as privacy-friendly as possible, by building privacy into the design of the app. The goal here would be to arrive at a positive-sum situation, where privacy is respected in the app while public health is improved. And perhaps, from a privacy standpoint, it will turn out that a digital app is not the best solution for registering people, and that it is safer for our personal data to simply note down one’s name and phone number on a piece of paper every time one visits a restaurant or other venue.

So although the GDPR is an important law, it is not the most important law in every context. There are situations where other interests outweigh our privacy, as the GDPR itself explicitly states in Recital 4.

5.4.4 Functioning of the GDPR

To fully understand how the GDPR works, it is important to know that the GDPR is a general regulation and a principle-based law.

The GDPR is a general regulation for two reasons:

1.     It is drafted in generic terms and only outlines principles on how to deal with personal data. For specific policy areas, there is specific legislation, such as the New Intelligence and Security Services Act (Nieuwe Wet op de inlichtingen- en veiligheidsdiensten (WIV), in Dutch only) or the Medical Research (Human Subjects) Act (Wet medisch-wetenschappelijk onderzoek met mensen (WMO), in Dutch only). This specific legislation takes precedence over general legislation (such as the GDPR).

2.     The GDPR is technology neutral: the legal text does not mention specific technologies or specifications thereof, such as the encryption key length (128, 192 or 256 bits) for data encryption. The possible technical and organisational measures have been discussed earlier in Chapter 4 (Measures).
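To make the idea of ‘appropriate technical measures’ concrete, here is a minimal sketch (in Python, assuming the third-party cryptography package; the file names and key handling are purely illustrative) of encrypting a research file with AES-GCM using a 256-bit key. The GDPR itself prescribes neither this algorithm nor this key length; the choice of an appropriate safeguard remains the responsibility of the controller.

# Minimal sketch: encrypting a data file with AES-256-GCM as one possible
# 'appropriate technical measure'. The GDPR does not mandate this algorithm
# or key length; file names and key handling below are illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(plain_path: str, cipher_path: str, key: bytes) -> None:
    """Encrypt plain_path into cipher_path using AES-GCM (authenticated encryption)."""
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                  # a unique nonce for every encryption
    with open(plain_path, "rb") as f:
        plaintext = f.read()
    with open(cipher_path, "wb") as f:
        f.write(nonce + aesgcm.encrypt(nonce, plaintext, None))   # store nonce with ciphertext

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # 256 bits; 128 or 192 would also be valid choices
    # In practice, the key would live in a key-management facility, never next to the data.
    encrypt_file("interviews.csv", "interviews.csv.enc", key)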

5.4.4.1 A principle-based law

Some laws are based on principles and open standards, while other laws have clear objective criteria and are therefore based on closed standards. An example of the latter is the road sign indicating a speed limit of 100 km/hour between 6:00 and 19:00.

Road signs like the one shown above tell you exactly what the speed limit is on a motorway at a particular time. With a principle-based law such as the GDPR, there are no objective criteria, and instead it uses concepts such as:

-       Purpose not incompatible with the initial purposes

-       A vital interest

-       Appropriate safeguards

-       A high risk

-       Necessary

-       Proportional

-       Fair


The advantage of open standards is that the legislator does not have to make rules for every concrete situation or for every technical development, which reduces the regulatory burden. The GDPR is essentially concerned with the defined purposes (goal-based regulation). See also the Council of State Annual Report 2018: Open standards and legal certainty (in Dutch only). A disadvantage of open standards, as also mentioned in this Annual Report, is legal uncertainty:

Open standards involve a risk of disagreement arising about their interpretation. If the legislator does not provide sufficient guidance, the courts will ultimately have to decide on the interpretation of a standard.


Based on the formulated principles, a researcher will always have to examine which measures are necessary to safeguard, to the best extent possible, the defined purposes for the protection of personal data within a research project. For this, policy rules (formerly: guidelines) laid down by the national supervisory authority, the Dutch DPA, give further meaning to the open standards, see an overview of this here (in Dutch only).

Court rulings also provide further clarification, for example, see here (in Dutch only), where the court in preliminary relief proceedings has ruled that a university may use online surveillance software (proctoring) during examinations and where it confirms that the university in question has complied with all the rules and principles of the GDPR.

Finally, the Court of Justice of the European Union, which is the highest authority, further clarifies the GDPR. For example, on 16 July 2020, in its famous Schrems II ruling, the Court went so far as to declare a decision of the European Commission invalid, namely the so-called adequacy decision for the US based on the Privacy Shield.

Case law therefore plays an important role in principle-based legislation with open standards because these standards and certain contexts gradually become clearer based on the entire body of court judgements.

Other principle-based legislation

Other examples of principle-based legislation are laws on cartel formation, financial markets, integrity, and fraud. For this type of law, it is impossible to include, for instance, specific financial products in the law itself, since the range of products changes daily. It is for this reason that this type of law is based on principles within which one needs to operate.

With this type of law, a supervisory authority is often involved that judges the extent to which the principles of the law have been violated. Case law plays an important role in principle-based legislation because one can estimate risks and the measures or actions to take on the basis of earlier judgments.

5.4.4.2 Lex generalis and lex specialis

In addition to the fact that the GDPR is a principle-based law, it is important to be aware of the relationship between a general regulation (a lex generalis such as the GDPR) and sector-specific laws (lex specialis). If, in a given case, a sector-specific law (lex specialis) as well as the GDPR (a general regulation, lex generalis) are applicable, the principle of lex specialis derogat legi generali shall apply, i.e. this principle of speciality gives priority to specific legislation over general legislation. For example, the UAVG is specifically a national law and hence takes precedence over the GDPR.

But in case of conflicts between laws, another principle applies, i.e. lex superior derogat legi inferiori, which defines the hierarchy between higher and lower legislation: the legislation issued by the higher legislator takes precedence. This means that if a specific national law (such as the UAVG) conflicts with the higher, general legislation (the GDPR), the general legislation will still override the specific legislation. In practice, specific legislation usually contains more detailed provisions, and only if these provisions actually conflict with the higher legislation will the higher legislation prevail; this does occur, but it is the exception. See also the image displayed earlier, which clearly shows the hierarchy between the different laws.

Finally, see this article (in Dutch only) which explains the hierarchy between the GDPR and the Police Data Act (Wet politiegegevens (Wpg)) in the context of the National Police:

Police Data Act

Just like any other organisation, the police are also required to comply with the GDPR rules. However, a different law applies when processing data in the context of prosecution or investigation: the Police Data Act (Wpg). Alongside the GDPR, this law applies as a lex specialis. This means that the rules in the GDPR do not apply when the Wpg is applicable. The Wpg applies when tasks relating to the detection and prosecution of criminal offences and enforcement of sentences are concerned. With respect to the personal data processed by the police in the normal course of their duties, the GDPR shall apply. But the police also have to deal with the Dutch DPA when the Wpg applies. In fact, the Wpg states that the Dutch DPA is the supervisory authority that oversees compliance with the provisions of the Wpg.

5.4.4.3 Additional provisions and exceptions for scientific research

The GDPR explicitly states that Member States have a degree of freedom in further interpreting certain aspects of the GDPR. An example of this is GDPR Article 9 (Processing of special categories of personal data), paragraph 2, subparagraph j. Here the GDPR states that Member States can make additional provisions:

The processing is necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) based on Union or Member State law which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject.

Our national implementation legislation, the UAVG, includes additional provisions for the purpose of scientific research. In this context, the UAVG states the following more specifically in Article 24:

 

Exceptions for scientific or historical research purposes or statistical purposes

In view of Article 9(2)(j) of the Regulation, the prohibition on the processing of special categories of personal data does not apply if:

a. the processing is necessary for scientific or historical research purposes or statistical purposes in accordance with Article 89(1) of the Regulation;

b. the research, referred to in subparagraph a, serves a public interest;

c. seeking explicit consent proves impossible or involves a disproportionate effort; and

d. the implementation provides for sufficient safeguards ensuring that the privacy of the data subject is not disproportionately affected.

How should this be interpreted? We have seen that in this respect the UAVG takes precedence (lex specialis) over the general rule in the GDPR. UAVG Article 24 (in Dutch only) specifies in greater detail that, in the Netherlands, special categories of personal data may only be processed lawfully for research purposes if it can be demonstrated that requesting research participants for their consent ‘proves impossible or involves a disproportionate effort’.

Here the UAVG effectively prescribes a ‘comply or explain’ approach as the basic principle: consent should be explicitly requested from research participants if special categories of personal data are processed in research, or it should be explained why obtaining consent is not possible, or not possible with a proportionate amount of effort.

In some places, the GDPR clearly indicates that national implementation legislation may further interpret a GDPR obligation as seen, for example, in the case of the BSN (national identification number) (GDPR Article 87): ‘Member States may further determine the specific conditions for the processing of a national identification number or any other identifier of general application.’

5.4.5 Technology neutral

The GDPR regulates the protection of personal data; it is a general law that is technology neutral. But what does this mean exactly?

5.4.5.1 Technologies

The GDPR does not prescribe the technologies that must be used for applying the ‘appropriate safeguards,’ for example, for the purpose of encryption or pseudonymisation within a research project.

The GDPR only makes the following general statement in Recital 78:

The protection of the rights and freedoms of natural persons with regard to the processing of personal data require that appropriate technical and organisational measures be taken to ensure that the requirements of this Regulation are met. In order to be able to demonstrate compliance with this Regulation, the controller should adopt internal policies and implement measures which meet in particular the principles of data protection by design and data protection by default.

Such measures could consist, inter alia, of minimising the processing of personal data, pseudonymising personal data as soon as possible, transparency with regard to the functions and processing of personal data, enabling the data subject to monitor the data processing, enabling the controller to create and improve security features.

When developing, designing, selecting and using applications, services and products that are based on the processing of personal data or process personal data to fulfil their task, producers of the products, services and applications should be encouraged to take into account the right to data protection when developing and designing such products, services and applications and, with due regard to the state of the art, to make sure that controllers and processors are able to fulfil their data protection obligations. The principles of data protection by design and by default should also be taken into consideration in the context of public tenders.
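As an illustration of ‘pseudonymising personal data as soon as possible’, mentioned in the recital above, the following minimal sketch (in Python; the column names and file paths are invented for this example and are not prescribed by the GDPR) replaces direct identifiers with random codes and writes the code-to-identity key to a separate file, which should be stored in a separate, access-restricted location.

# Minimal sketch of early pseudonymisation: direct identifiers are replaced by
# random codes, and the key file linking codes to identities is kept separately.
# Column names and file paths are illustrative assumptions.
import csv
import secrets

def pseudonymise(in_path: str, data_path: str, key_path: str) -> None:
    mapping = {}                                           # identifier -> pseudonym
    with open(in_path, newline="") as fin, open(data_path, "w", newline="") as fdata:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fdata, fieldnames=["pseudonym", "age", "score"])
        writer.writeheader()
        for row in reader:
            code = mapping.setdefault(row["name"], secrets.token_hex(8))
            writer.writerow({"pseudonym": code, "age": row["age"], "score": row["score"]})
    with open(key_path, "w", newline="") as fkey:          # store this file separately, with restricted access
        key_writer = csv.writer(fkey)
        key_writer.writerow(["name", "pseudonym"])
        key_writer.writerows(mapping.items())

if __name__ == "__main__":
    pseudonymise("raw_responses.csv", "research_data.csv", "keyfile.csv")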

5.4.5.2 Self-regulatory capacity

Here one relies on the self-regulatory capacity of the controller, taking into account what is prevalent and appropriate within the relevant sector. This leads, for example, to the formulation of a code of conduct that, to paraphrase GDPR Article 40(1):

(...) taking into account the specific features of the various data processing operations within a sector and the specific needs of the type of institution, should contribute to the proper application of the GDPR.

For research, this means (according to GDPR Recital 78, see quotation above) that the Executive Board of an institution, acting as the controller, should adopt internal policies and implement measures which meet in particular the principles of data protection by design and data protection by default.

5.4.5.3 Why technology neutral?

Technology is evolving so rapidly that any attempt to concretely define in a law the specific type of technology that should be used as an appropriate safeguard will, by definition, be out of date. That would mean having to amend the law regularly, which is undesirable if only for practical reasons.

5.4.5.4 General and principle-based

That is why the GDPR is general and principle-based. By applying the principles described in Article 5 of the GDPR, a researcher knows that technical and organisational measures are needed to protect the aforementioned principles. At the same time, continuous technological developments are giving rise to entirely new issues relating to the protection of our personal data.

To illustrate, consider the machine-learning phenomenon: who is responsible for the evolving self-learning algorithm? In this situation, who is the controller and who is the processor (for an explanation of the terms ‘controller’ and ‘processor’, see GDPR Article 4(7) and 4(8))? Through emerging case law in this area, we will gradually get a better understanding of how the GDPR principles relate to these new technological developments.

As a topical example, let us take a quick look at the use of algorithms:

5.4.5.5 Questions regarding algorithms

We see that algorithms often make errors and disadvantage certain people, including vulnerable groups. For some well-known examples of this, see this article in the New Scientist: Discriminating algorithms: 5 times AI showed prejudice.

Is the creator of the algorithm responsible for the discrimination? Is the party that applies the algorithms responsible for the discrimination? Is the provider of the algorithm to blame if the algorithms were developed with test sets that were too small or outdated? How do you prevent bias in algorithms and is it even preventable?

You will not be able to find specific answers to many of these questions in the articles of the GDPR, but the principles of the GDPR make it clear that technology and the processing of personal data must be for the benefit of the people (see the GDPR Article 5 principles: lawfulness, fairness and transparency).

However, GDPR Article 22 (‘Automated individual decision-making, including profiling’) sets certain limits on the use of algorithms by stipulating the following in paragraph 1:

‘The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.’

Article 22, combined with the principles outlined in Article 5, therefore provides a sufficient basis for selecting and implementing measures that will properly safeguard the processing of personal data. Furthermore, in 2019, the UK’s national supervisory authority, the ICO, drew up useful guidelines for DPIAs and AI.

Finally, there is a great deal happening in the sector in terms of assessments of AI, including by the European Commission, through the work of the High-Level Expert Group on Artificial Intelligence established by the EC, for instance. The AI HLEG has provided useful documents relating to ethical guidelines, policy recommendations, and a self-assessment check for developers.

The EC also prepared a white paper on AI, included AI in its vision, and announced specific AI-related legislation.

All of the above indicates that the context of certain technical developments, as in the case of machine learning and artificial intelligence, sometimes requires its own approach and even legislation, besides the GDPR, for regulating the use of these technologies for processing personal data.

5.4.6 International cooperation

More and more research is taking place within an international context, with collaboration between research teams representing various international public and private parties. As a result, the datasets collected within these research projects may reside on servers both within and outside the EU. In this chapter, we discuss the specific regulations governing the exchange of data between EU Member States and countries outside the EU.

First and foremost, it is important to distinguish between three types of countries: [fold out boxes]

EU Member States

In these countries, data are processed in accordance with the GDPR. The EU is a single jurisdiction with respect to the protection of personal data. Is an organisation transferring data from the Netherlands to another EU country? In that case, the organisation only needs to comply with the general requirements of the GDPR.

Countries in the European Economic Area (EEA)

These are Norway, Liechtenstein and Iceland. All three countries offer an equivalent level of protection for personal data. Is an organisation transferring data from the Netherlands to Norway, Liechtenstein or Iceland? In that case, the organisation only needs to comply with the general requirements of the GDPR.

Countries outside the EU and EEA

Separate rules apply to the transfer of personal data from the Netherlands to countries outside the EU and the EEA, the so-called third countries (this is referred to as a ‘cross-border data transfer’). The main rule is that an organisation may only transfer personal data to third countries that offer a proper level of protection which is essentially equivalent to the safeguards provided within the EU; the level of data protection must not be undermined.

In the absence of a proper level of protection, transfer is only permitted under one of the legal provisions of the GDPR (source: Chapter 5 - Transfers of personal data to third countries or international organisations).

5.4.6.1 Cross-border data transfers

The GDPR recognises the need for cross-border data transfers for the expansion of international trade and international cooperation. In all cases, transfers to third countries and international organisations may only take place in full compliance with the GDPR. This implies that EU data protection does not stop at EU borders.

It is important always to determine the risks of exchanging personal data in relation to the potential impact of such an exchange on the freedoms of EU citizens. If the data exchange involves no risks, no data protection is required: in that case, the exchange is considered part of the free movement of data.

The criterion for data protection is the extent to which the fundamental rights of EU citizens are affected. This is always about the balance between the protection of fundamental human rights (Articles 7 and 8, EU Charter) and people’s freedoms. As we saw earlier in this context, privacy is not always the highest right; in some situations national security interests may prevail, for example. The GDPR facilitates the protection of natural persons with regard to the processing of personal data and the free movement of such data and, as stated in Recital 3, aims ‘to ensure the free flow of personal data between Member States’.

Summarised in a single image (source):

[Image: a two-step flowchart (‘Step 1’ and ‘Step 2’) summarising the rules for cross-border data transfers; not reproduced in this text version]

5.4.6.2 Adequacy decision

However, some countries provide safeguards that guarantee a proper level of protection, essentially equivalent to the level of protection offered within the European Union. With respect to such countries, the European Commission has adopted a so-called adequacy decision – in other words, cross-border flows of personal data from the EEA to the country in question are treated in the same way as transfers of data within the EU.

However, if a country is not part of the EU or the EEA and has not received an adequacy decision, it is considered a third country without an adequate level of protection:

●      For EU and EEA countries, see: https://www.government.nl/topics/european-union/eu-eea-efta-and-schengen-area-countries

●      For countries that have received an adequacy decision, see: https://ec.europa.eu/info/law/law-topic/data-protection/international-dimension-data-protection/adequacy-decisions_en

In that case, the GDPR describes a number of lawful mechanisms to legitimise cross-border data transfers. For example, one of these lawful mechanisms is the conclusion of contracts containing standard contractual clauses (SCCs).
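As a rough summary of the decision logic described above, the sketch below (in Python; the country sets are deliberately incomplete examples, so always consult the official lists linked above) shows how a support tool might determine which transfer route applies to a given destination. It illustrates the reasoning only and is not legal advice.

# Rough sketch of the cross-border transfer logic described above.
# The country sets are incomplete, illustrative excerpts only.
EU_EEA = {"Netherlands", "Germany", "France", "Norway", "Iceland", "Liechtenstein"}   # excerpt
ADEQUACY = {"Switzerland", "Japan", "New Zealand"}                                    # excerpt

def transfer_route(destination_country: str) -> str:
    """Return the GDPR route that applies to a transfer of personal data."""
    if destination_country in EU_EEA:
        return "Within the EU/EEA: the general requirements of the GDPR apply."
    if destination_country in ADEQUACY:
        return "Adequacy decision: the transfer is treated like a transfer within the EU."
    return ("Third country without adequacy: a transfer mechanism such as standard "
            "contractual clauses (SCCs), plus supplementary measures, is needed.")

print(transfer_route("Norway"))
print(transfer_route("Japan"))
print(transfer_route("United States"))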


5.4.6.3 In the spotlight

Standard contractual clauses

By signing the contract containing these SCCs (for example, for EU controllers transferring personal data to processors in third countries without a proper level of data protection), both organisations (the exporting organisation of the personal data and the importing organisation) create a legal obligation for themselves, to which they are both bound. This obligation is considered to provide sufficient guarantees for the protection of privacy and the fundamental rights and freedoms of individuals and for the exercise of the corresponding rights. The SCCs as a legal mechanism are only valid if both parties actually comply, in practice, with the defined set of legal obligations.

A part of the SCCs is the agreement that national data protection authorities in the EU are entitled to exercise their powers, for example, by prohibiting or suspending data flows to a third country (in which the importing organisation is located) if it is determined that the law to which the data importer is subject imposes requirements on it that differ from the obligations laid down in the SCC.

In case of collaboration with a third country involving personal data, it must be reasonable to expect, when concluding the contract containing the SCCs, that the importing organisation, under the law of that country, will actually be able to meet the requirements set forth in the SCCs. As far as collaboration with the US is concerned, this has become more difficult since the Schrems II ruling of the European Court of Justice.

Module 6 - Complex cases

In Module 3 ‘Principles relating to the processing of personal data’, we discussed the Metro Map. The majority of all research projects go through the regular Orange Line on this map; in such projects, it is usually clear within a short period of time what measures are needed to ensure proper protection of the personal data.

And that is precisely why special research projects are particularly interesting: that is where you as research support staff can really make a difference! What measures are necessary if it is not entirely clear what type of personal data you are actually working with or if the research contains multiple high-risk criteria?

These are the types of research projects that we refer to here as complex cases. They serve as the perfect practice material for testing out all that you have learnt in the previous modules.

It is time to put things into practice! Each of the following pages describes a complex case. It is up to you to determine how the seven principles from GDPR Article 5 should be safeguarded and what technical and organisational measures should be taken to protect personal data as well as possible.

Each complex case is structured as follows:

●      You first read a brief outline of the research project

●      Based on this information, you answer the question of what type of personal data is being processed in the project, what the basis is, and whether the research is high risk or not

●      Following this, you are asked seven questions relating to the GDPR Article 5 principles where, for each principle, you must indicate how the researchers have safeguarded this principle in the research project

●      Finally, you answer the question as to what technical and organisational measures you think should be implemented in the project.

All questions are followed by a right and a wrong answer. After you have made your choice, you will immediately get to see whether it was the right choice. Good luck!

6.1 Misuse of network

Many terabytes of data travel across our university networks every day. But what are all these data really for? Do they relate to online activities focused on learning and working or are there also less-friendly activities among them? A university in the Netherlands decided to investigate this and started inspecting all the traffic on their network over an extended period.

Read the research plan below and try to answer the questions that follow correctly. What decisions would you make as a privacy officer?

Research title

Establishing the extent and nature of university network abuse

Research objective

The study involves the inspection of network-level data packets across the entire university to investigate whether and what kind of malicious activities are taking place on the network.

Research target group

All persons who use the university’s open network (staff, students, visitors, people living in the vicinity of the university).

Research location

The study takes place at a Dutch university.

Data storage location

On site.

Participating parties in the research

●      University researchers specialising in cybersecurity

●      Contracted external network partner of the university

Data used in the research

●      IP addresses of persons using the network

●      URLs visited by persons using the network

●      All browser data are being collected

Initial research design

This case study concerns research involving the large-scale collection of the IP addresses and visited URLs of all individuals who connect to the open university network. The project involves the inspection of data packets at the network level throughout the university. The researchers will carry out these activities in cooperation with an external partner contracted by the university, which will provide access to the data but will not itself access these data.

Quiz

What kind of sensitive or other personal data will be processed during this research?

●      This is purely about IP addresses and URLs that cannot be traced back to individuals, so it does not involve any personal data. These are hashed and therefore pseudonymised. No sensitive personal data will be collected.

●      [CORRECT ANSWER] In this case, the personal data are the IP address and associated browsing data. The purpose of the research is not to process special categories of personal data or data of vulnerable individuals. However, these sensitive data, such as data about health, trade union membership, or sexual orientation, can be inferred from the pages visited but such data are not stored.
The hashing of the IP addresses and the fact that the research question is only aimed at identifying malicious acts based on an established set of indicators justify the basis for the processing, i.e. legitimate interests. It is true that vulnerable individuals, such as children residing in the vicinity of the institution, may connect to the institution’s network, but the purpose of the study is not to collect data on the individuals using the network but to ensure the security of the institution’s network.

What is the legal basis for this processing operation?

●      The basis is public interest, because the broader, public interest is served if it is known how cybersecurity can be applied and how it can be detected. As educational institutions, universities have a public task and are financed with public funds. This means that all insights emerging from universities, such as via research, are always in the service of public interest. In fact, it is not possible to request consent.

●      [CORRECT ANSWER] The basis for processing in this case is legitimate interest. Consent as a basis is not possible because it interferes with the purpose of the research.
Another possible basis could be public interest, but this basis is difficult to justify because the processing does not relate to the direct legal task of the public institution, such as research or teaching, but provides safeguards for a safe environment in which to conduct teaching and research.

Is it high-risk research and, if so, what risks are involved?

●      The combination of collecting only IP addresses and URLs, hashing the IPs, and destroying the data once the research is complete means that there will be no high-risk processing.

●      [CORRECT ANSWER] The combination of collecting only IP addresses and URLs, hashing the IPs, and destroying the data once the research is complete means that there will be no high-risk processing. However, the fact that this involves covert research means, in accordance with the Dutch DPA’s guideline, that a DPIA must be carried out and that high-risk processing is involved.

How do the researchers deal with Article 5, Principle 1: lawfulness, fairness and transparency?

●      [CORRECT ANSWER] The processing is lawful, fair and transparent because the basis for the processing is a legitimate interest. It is not possible to obtain consent due to the scale involved (thousands of people on the network). Transparency is provided through the institution’s privacy statement, which states the purposes for which each type of personal data is processed by the institution.

●      The research is concerned with lawful and fair conduct within the university network, so it contributes considerably towards this. The results of the research are published, which means that the researchers are transparent about these results and this is especially true if the publication occurs in an open access journal, something that is increasingly requested by research funders.

How do the researchers deal with Article 5, Principle 2: purpose limitation?

●      [CORRECT ANSWER] Purpose limitation is achieved because only those data are stored and used that are relevant for investigating whether malicious actions have occurred, i.e. for the purpose of safety/cybersecurity.

●      The researchers are trying to focus on the main research question and not to look further into the causes and source of this unlawful use of the university network.

How do the researchers deal with Article 5, Principle 3: data minimisation?

●      The researchers hope that cybercrime is not that prevalent on the university network and that the problem is therefore minimal, as are the data involved. Furthermore, it is advisable not to share too much data with other researchers during the research, so this should be kept to a minimum.

●      [CORRECT ANSWER] Data minimisation is achieved by collecting only the IP address and URLs. These two data points are necessary for the research questions and no other data are collected, analysed, or stored that could theoretically have been collected from the network.

How do the researchers deal with Article 5, Principle 4: accuracy?

●      [CORRECT ANSWER] Specific accuracy issues do not arise, since the entire network data (as described above, IP and URLs) are collected but not linked to an individual. The IP addresses are regularly and automatically hashed.

●      Researchers use methods accepted by the internal review board to arrive at the appropriate conclusions. This includes methods that are common and well-known in the field. As a result, the conclusions can be monitored better by colleagues who want to verify the research.

How do the researchers deal with Article 5, Principle 5: storage limitation?

●      Storage of data may seem inexpensive but if all versions of each operation are stored, it often becomes impossible to find the right data and metadata (labels for finding data) also need to be added to a lot of the data. This ensures that as little data as possible are stored, i.e. only the important data are stored.

●      [CORRECT ANSWER] The data are stored in secure facilities at the university during the research period. For reasons of scientific integrity and exclusively for verification purposes, these data will be stored in secure facilities at the university for five years after completion of the research.
Although the research results may be used for future research, these data will not be made available for follow-up research by the university’s research group or by third parties but will be destroyed after the five-year storage period has expired.

How do the researchers deal with Article 5, Principle 6: integrity and confidentiality?

●      Scientific integrity is very important with respect to the performance of research. For this reason, all researchers endorse the Netherlands Code of Conduct for Research Integrity and meetings are often held on this subject with the research group or within the faculty. No one wants a repeat of the scandals involving scientific fraud of a decade ago, because society should be able to rely on scientists respecting the principles of integrity and confidentiality.

●      [CORRECT ANSWER] Integrity and confidentiality are ensured through the signing of a confidentiality agreement by the data provider, i.e. the contracted network management company engaged by the university, and by the fact that only the researchers working on the project have physical and technical access to the data. Access to and changes in the data are logged and can be traced back to individuals through the use of personal accounts and this is regularly monitored by the principal researcher.
For each researcher role in the research project, the appropriate access rights to the data are determined and these rights are granted via technical means and for the duration of the research project. Furthermore, the four-eyes principle is applied; no individual may access the collected data alone but must always do so together with a colleague, who has the same access rights.
Regular hashing of the data is an additional measure that ensures the confidentiality of the data.

What technical measures are needed in this research project to properly protect personal data?

●      [CORRECT ANSWER] To protect privacy and as part of best practices, personal data are hashed periodically, on a schedule varying from daily to weekly. This means that any personally identifying characteristics in the data, such as the IP address, are replaced by a random tag, and this tag too is changed periodically (a minimal sketch of such rotating hashing is given below, after the answer options).
The researchers follow best practices to hash the IP addresses regularly and randomly. The university facilities, combined with the expertise of the researchers (cybersecurity), ensure that the necessary technical and organisational measures are taken to handle the data securely. A special point of concern is the technical filtering of traffic from local residents (GPs? children?).
Data cannot be downloaded onto USB sticks or other media and taken home. Data cannot be exported from the system to an export file or a print file in which they would no longer be protected by the system.

●      As long as the data are encrypted, not much can go wrong. Very little data are actually collected. Thanks to the hashing, these data are also pseudonymised. By doing all of this, the researchers ensure compliance with the GDPR.
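The periodic, ‘random’ hashing described in the correct answer above could, for example, be implemented as a keyed hash (HMAC) whose secret key is rotated on a daily or weekly schedule, so that the same IP address yields a different tag in each period. The minimal sketch below (Python standard library only; class and variable names are illustrative, and this is not the tooling used in the actual study) shows the idea.

# Sketch of pseudonymising IP addresses with a keyed hash (HMAC) whose key is
# rotated periodically. Names are illustrative; not the study's actual tooling.
import hmac
import hashlib
import secrets

class RotatingIPHasher:
    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)     # secret key, kept outside the research data

    def rotate_key(self) -> None:
        """Call this e.g. daily or weekly; tags from different periods can no longer be linked."""
        self._key = secrets.token_bytes(32)

    def tag(self, ip_address: str) -> str:
        """Replace an IP address with a pseudonymous tag for the current period."""
        return hmac.new(self._key, ip_address.encode(), hashlib.sha256).hexdigest()[:16]

hasher = RotatingIPHasher()
print(hasher.tag("192.0.2.17"))   # same tag for the same IP within a period
hasher.rotate_key()
print(hasher.tag("192.0.2.17"))   # different tag after key rotation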

What organisational measures need to be taken during this research to properly protect personal data?

●      In terms of organisational measures, the researchers look closely at which organisations they work with and make proper agreements with them. Working with parties with whom there is a long relationship of trust makes it possible to work together better and more safely. This helps protect personal data.

●      [CORRECT ANSWER] Only the scientists involved in the research have access to the data. The four-eyes principle is applied and the logged records are monitored. Proper agreements are made with the supplier about the confidentiality of the data and these agreements are set out in a contract.

6.2 A cost-effective model

As residents of the Netherlands, we all pay our health insurance premiums. These collected funds are used by healthcare providers and people within various healthcare disciplines to work together every day to provide the best care possible. A university and a health insurance company decided to join forces to investigate all the possible funding options for person-centred care for people with chronic diseases.

Read the research plan below and try to answer the questions that follow correctly. What decisions would you make as a privacy officer?

Research title

Developing a cost-effective model for person-centred care for people with chronic diseases.

Research objective

Based on an analysis of the actually incurred overall healthcare costs, a model will be developed to ensure that proper interdisciplinary care is provided to patients and that this care is cost-effective. Desirable and undesirable incentives in healthcare and the associated overall healthcare costs will be examined.

Research target group

Individual patient data from all adults who participated in one or more integrated care programmes in 2020.

Research location

Within the university’s secure environment, based on data from a health insurance company.

Data storage location

Within the university’s end-to-end secure environment, based on data from a health insurance company. Standard software is used for descriptive statistics, cluster analysis, and regression analysis, as provided by the institution based on licences.

Participating parties in the research

●      A university

●      A health insurance company

Data used in the research

Data required for gaining insight into the complete use of care by patients in one of the integrated diagnosis treatment combinations (DTC). This includes:

●      A meaningless unique identifier, not traceable to a BSN

●      Age

●      Gender

●      Deceased in that year (Y/N)

●      Encrypted GP practice code

●      A set of indicators related to care provided by GPs and paramedics

Initial research design

The different funding options are based on a combination of qualitative and quantitative research. In quantitative research, we aim to substantiate the funding options with data on the actual use of care and costs for patients who were part of one or more predefined integrated care programmes (indicated with codes) in 2017.

What kind of sensitive or other personal data will be processed during this research?

●      The health insurance company will provide the data in such a way that they cannot be traced back to any natural persons by the researchers. In fact, these are anonymous data since the data do not include any names, addresses or postal codes.
Also, the research is not about the individual care of persons but about the use of care and its costs, based on the recorded diagnosis treatment combinations.

●      [CORRECT ANSWER] The health insurance company will provide the data in such a way that they cannot be traced back to any natural persons by the researchers. A re-identification risk could arise based on the broader context and therefore the data are viewed as personal data.
Because the data provide insight into the health of the individuals, they are also viewed as sensitive data. The identity of the persons may be indirectly derived based on GP and paramedic care received.

What is the legal basis for this processing operation?

●      The health insurance company says the following on its website under the privacy statement: ‘Health insurance companies regularly receive requests from university hospitals, for example, to be allowed to use personal data (about health) for scientific research or statistical purposes.
These data are provided only insofar as anonymous data are insufficient for the purpose, the research is in the public interest, and it was not possible to seek consent.’ Patients have been given the opportunity to view this privacy statement and have therefore consented to it. As a result, the basis for the processing is consent.

●      [CORRECT ANSWER] The purpose of the research serves the public interest. The processing of personal data by the university for this research is necessary for the performance of a task of general interest.

Is it high-risk research and, if so, what risks are involved?

●      [CORRECT ANSWER] Since data minimisation and pseudonymisation have been applied, we consider the risks to data subjects as negligible. The dataset provided to the university contains data that are not traceable to individuals. The technical and organisational safeguards ensure appropriate data protection during and after the research.

●      Because the data are anonymised, there is actually no risk. In fact, the research data are not personal data. Statistical protection is also applied, for instance by taking into account outliers which are then removed from the dataset because these may lead to a re-identification risk or disclosure of the group, for example.
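Although the second option is not the intended answer, the statistical protection it mentions (handling outliers that could disclose an individual or a group) is a recognised disclosure-control step before results leave the secure environment. A minimal sketch, with hypothetical thresholds, file and column names:

import pandas as pd

df = pd.read_csv("claims.csv")

# Remove extreme cost outliers that might be recognisable even in aggregated output
threshold = df["total_costs"].quantile(0.999)
trimmed = df[df["total_costs"] <= threshold]

# Suppress aggregate cells that are based on fewer than 10 patients
table = trimmed.groupby("care_programme")["total_costs"].agg(["count", "mean"])
table.loc[table["count"] < 10, "mean"] = None
print(table)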

How do the researchers deal with Article 5, Principle 1: lawfulness, fairness and transparency?

●      [CORRECT ANSWER] The processing of personal data is lawful and methodologically tested. In the research, transparency with respect to the data subjects is ensured through communication via the project website. The healthcare groups communicate the developments relating to the research to the integrated care partners (including GPs, dieticians, physiotherapists).

●      The research has a valid legal basis, and the health insurance company’s website communicates transparently to patients about how the data will be made available for the purpose of scientific research. Retrieving the data from the health insurance company, rather than collecting them from patients via a survey, for example, is the least privacy-invasive approach and is therefore ‘fair’.

How do the researchers deal with Article 5, Principle 2: purpose limitation?

●      [CORRECT ANSWER] The personal data will not be processed for any purpose other than that described above under ‘Research objective’.

●      The research essentially has a single clear objective, and the researchers are bound by that. This objective is described above under the heading ‘Research objective’.

How do the researchers deal with Article 5, Principle 3: data minimisation?

●      In the research design, the decision has been made to analyse specific treatments rather than all of them, and to do this within a certain region and only for adults. As a result of this selection, the research data are minimised as far as possible.

●      [CORRECT ANSWER] The personal data to be processed are strictly necessary for the purpose of the research. The justification for the data needed, by data field, is described in the data management plan of the research project. Pseudonymised data are used (GP codes, encrypted BSN) so that the researcher is unable to trace the data back to individuals.
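As a concrete illustration of the correct answer, data minimisation can be enforced at ingest by keeping only the fields whose necessity is justified in the data management plan. The sketch below assumes hypothetical file and field names.

import pandas as pd

# Fields whose necessity is justified, per data field, in the data management plan
JUSTIFIED_FIELDS = [
    "id", "age", "gender", "deceased_this_year",
    "gp_practice", "care_programme", "total_costs",
]

raw = pd.read_csv("insurer_delivery.csv")
minimised = raw[JUSTIFIED_FIELDS]                # drop all other columns at ingest
minimised.to_csv("research_extract.csv", index=False)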

How do the researchers deal with Article 5, Principle 4: accuracy?

●      [CORRECT ANSWER] The processing of personal data takes place based on relevant data from the specific source systems in question, thus ensuring the accuracy of the data.

●      The personal data processing operations are performed based on data provided by the health insurance company. It is a well-established company that has its affairs in order; we as researchers have always been able to rely on it till now.

How do the researchers deal with Article 5, Principle 5: storage limitation?

●      Since this involves anonymised data, not personal data, they can be archived indefinitely for research purposes.

●      [CORRECT ANSWER] Personal data are kept for verification purposes in an appropriate end-to-end encrypted environment where data are securely archived for a period of ten years.
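To illustrate the correct answer, the verification copy could be archived in encrypted form together with an explicit retention end date. The sketch below uses the third-party Python package cryptography; the paths, key handling and retention mechanism are hypothetical and do not describe the project’s actual environment.

import os
from datetime import date, timedelta
from cryptography.fernet import Fernet  # third-party package: cryptography

key = Fernet.generate_key()             # in practice: managed in an institutional key vault
fernet = Fernet(key)

os.makedirs("archive", exist_ok=True)
with open("research_extract.csv", "rb") as src:
    encrypted = fernet.encrypt(src.read())
with open("archive/research_extract.enc", "wb") as dst:
    dst.write(encrypted)

# Flag when the ten-year retention period ends and the archive must be securely erased
retention_until = date.today() + timedelta(days=10 * 365)
print(f"Securely erase the archive after {retention_until.isoformat()}")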

How do the researchers deal with Article 5, Principle 6: integrity and confidentiality?

●      Under the responsibility of the university, personal data are properly protected by researchers who receive regular training on scientific integrity. For example, researchers do not pay the participants to prevent patients from participating for the wrong reasons. The researchers also handle the data confidentially because they want to prevent other researchers from publishing about their data before they do.

●      [CORRECT ANSWER] Under the responsibility of the university, personal data are generally properly protected against unauthorised or unlawful processing and against accidental loss, destruction or damage. The principal researcher ensures the proper application of organisational and technical safeguards with respect to the protection of personal data within the research.

What technical measures are needed in this research project to properly protect personal data?

●      The research project has been set up based on the principles of privacy by design, with appropriate safeguards for encrypted data storage incorporated within the research design. Communication with the secure environment is also encrypted (SSL), and there is a key registration system with camera surveillance for entry into the research room. An iris scan is used for identification when entering the room, and in the room itself only USB sticks bearing the university’s logo are used, so that the data cannot easily get lost.

●      [CORRECT ANSWER] Data collection, analysis and publication cannot take place anywhere other than within the secure environment; downloading these data or forwarding them from the system is made technically impossible. Only two people have access to the data, based on personal accounts, and access to and changes in the data are logged and monitored.
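The correct answer requires that access to and changes in the data are logged and monitored. A minimal, purely illustrative sketch of how an analysis script could record such events; the user names, file names and log location are hypothetical.

import getpass
import logging

logging.basicConfig(
    filename="access.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

def log_access(action: str, dataset: str) -> None:
    # Record who did what with which dataset, for later monitoring
    logging.info("user=%s action=%s dataset=%s", getpass.getuser(), action, dataset)

log_access("read", "research_extract.csv")
log_access("update", "research_extract.csv")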

What organisational measures need to be taken during this research to properly protect personal data?

●      The research takes place on the top floor, where only those who have a legitimate reason to be there are present. There is also no coffee machine to avoid attracting lots of people. In addition, cleaners skip this floor to prevent anyone other than the researchers from entering the research room. Regular checks are done to ensure that the day’s backups are safe and that the files are not damaged. This is done so as to ensure that the backup files do not turn out to be corrupt when they need to be used. All these activities are properly logged by the principal researcher who is the only one who knows where the log is kept.

●      [CORRECT ANSWER] The research takes place under the responsibility of the principal researcher who is experienced in this type of research. The data never leave the end-to-end encrypted analysis environment. To ensure this, a clean desk policy and a clean whiteboard policy are in place, and regular bag checks are carried out upon entry and exit to prevent data or notes from leaving the room.

Relevant literature

Are you interested in reading more on this topic? On this page, we offer an overview of additional relevant literature on the GDPR in relation to research.

A. Legislative texts

1. Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). 2016.

Online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=en

2. Uitvoeringswet Algemene Verordening Gegevensbescherming (UAVG) [General Data Protection Regulation (Implementation) Act]. 2020.

Online: https://wetten.overheid.nl/BWBR0040940/2020-01-01 (in Dutch only)

3. De Nederlandse Grondwet [The Dutch Constitution]. 2018.

Online: https://www.denederlandsegrondwet.nl/9353000/1/j4nvih7l3kb91rw_j9vvkl1oucfq6v2/vkwrfdbpvatz/f=/web_119406_grondwet_koninkrijk_nl.pdf (in Dutch only)

4. Wet Bescherming Persoonsgegevens [Personal Data Protection Act]. (Expired 25/5/2018).

Online: https://wetten.overheid.nl/BWBR0011468/2018-05-01 (in Dutch only)

5. Charter of Fundamental Rights of the European Union. 2012.

Online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:12012P/TXT&from=EN

 

B. Dutch Data Protection Authority (previous names: Data Protection Board (College Bescherming Persoonsgegevens) and Registration Board (Registratiekamer))

1. Beveiliging van persoonsgegevens [Protection of personal data]. G.W. van Blarkom, J.J. Borking. Registration Board. Achtergrondstudies en Verkenningen [Background Studies and Explorations] 23. 2001.

Online: https://autoriteitpersoonsgegevens.nl/sites/default/files/downloads/av/av23.pdf (in Dutch only)

2. Privacy Bij Wetenschappelijk Onderzoek en Statistiek. Kader voor een gedragscode [Privacy in Scientific Research and Statistics. Framework for a code of conduct]. T.F.M. Hooghiemstra. 2002.

Online: https://autoriteitpersoonsgegevens.nl/sites/default/files/downloads/rapporten/rap_2002_privacy_statistiek.pdf (in Dutch only)

3. CBP Richtsnoeren: Beveiliging van persoonsgegevens [Data Protection Board Guidelines. Protection of personal data]. Government Gazette No. 5174. 2013.

Online: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/beleidsregels_beveiliging_van_persoonsgegevens.pdf (in Dutch only)

4. Besluit inzake lijst van verwerkingen van persoonsgegevens waarvoor een gegevensbeschermingseffectbeoordeling (DPIA) verplicht is [Decree concerning the list of personal data processing operations for which a data protection impact assessment (DPIA) is mandatory]. Dutch Data Protection Authority. Government Gazette No. 64418. 2019.

Online: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/stcrt-2019-64418.pdf (in Dutch only)

 

C. European Union Agency for Cybersecurity (ENISA)

1. Privacy and Data Protection by Design. 12 January 2015.

Online: https://www.enisa.europa.eu/publications/privacy-and-data-protection-by-design

2. Privacy by design in big data. 17 December 2015.

Online: https://www.enisa.europa.eu/publications/big-data-protection

3. Handbook on Security of Personal Data Processing. 29 January 2018.

Online: https://www.enisa.europa.eu/publications/handbook-on-security-of-personal-data-processing

4. Recommendations on shaping technology according to GDPR provisions - Exploring the notion of data protection by default. 28 January 2019.

Online: https://www.enisa.europa.eu/publications/recommendations-on-shaping-technology-according-to-gdpr-provisions-part-2

5. Data Pseudonymisation: Advanced Techniques and Use Cases. 28 January 2021.

Online: https://www.enisa.europa.eu/publications/data-pseudonymisation-advanced-techniques-and-use-cases

 

D. European Data Protection Supervisor

1. Preliminary Opinion on data protection and scientific research. 2020.

Online: https://edps.europa.eu/sites/edp/files/publication/20-01-06_opinion_research_en.pdf

 

 

Colophon

The training GDPR 4 Data Support came about thanks to the contributions of various people and organisations:

Experts:

Marlon Domingus (Data Protection Officer (FG) - Erasmus University Rotterdam - accountable for the content)

Santosh Ilamparuthi (Data steward - TU Delft)

René van Horik (Researcher - DANS)

Femmy Admiraal (Information scientist - DANS)

Emilie Kraaikamp (Privacy coordinator - DANS)

Ellen Leenarts (Project lead - DANS)

 

Concept and content development

Sander van Acht (Owner - Flooow - accountable for the didactic design)

 

Image credits 

https://unsplash.com/@scienceinhd

https://unsplash.com/@luvqs

https://unsplash.com/@homajob

https://unsplash.com/@markusspiske

https://unsplash.com/@micheile

https://unsplash.com/@cdc

https://unsplash.com/@chrisliverani

https://unsplash.com/@michiru

https://unsplash.com/@noaa

https://unsplash.com/@vanillabearfilms

https://unsplash.com/@firmbee

https://unsplash.com/@nci

https://unsplash.com/@neonbrand

https://unsplash.com/@cowomen

https://unsplash.com/@homajob

 

License

Main course material

Unless otherwise indicated:


GDPR 4 Data Support by Research Data Netherlands is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Unsplash images

Unsplash images are those images that do not include the typical white ball and for which no licence is indicated.

This material is licensed under the Unsplash license https://unsplash.com/license.

 

Finally, bear in mind that data protection is a constantly changing space: further advancements and regulatory changes are on the way, such as the EU Artificial Intelligence Act, the first EU regulation on artificial intelligence. For more information, see https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence