“It's about informational self-determination!”

Christopher Koska

EEXCESS is recommendation software: it uses its users' personal data to compile precisely tailored recommendations. Naturally, this raises the question of how these data are protected.

One of the project team members dealing with this topic is Christopher Koska. He is a research assistant at the Chair for Media Ethics at the Munich School of Philosophy and, at the same time, works at wissenmedia on the EEXCESS project. As part of his job, he deals with the ethical and ecological dimensions of data, user and usage mining, with reasons for anonymity and identifiability on the internet, and with potential approaches to building trust. In this interview, he talks about what privacy and data protection mean for the EEXCESS project.

Christopher, what is many people's fear of the misuse of their personal data based on?
Christopher Koska:
From my point of view, it comes down to three things: first, uncertainty about which personal information about oneself is actually in circulation; second, a lack of clarity about who controls this data and who has access to it; and third, having no idea what is being done with the data now or what could be done with it in the future.

Is this feeling of insecurity justified?
Yes, every one of us gives away loads of data every day while surfing the internet. And it can still go much further. There are trends like the “quantified self” movement, for example, where people collect data about their bodies to keep track of their state of health with the help of self-diagnosis applications. The ENCODE (ENCyclopedia of DNA Elements) project, which works on decoding the genome, shows where this development is heading, and that is where it really gets threatening.

How are people dealing with this?
I think there is an increased need for orientation and guidance. Many people's behavior is very ambivalent. On the one hand, everybody wants more intelligent, more user-friendly technologies and information that is available anywhere and anytime – and, please, for free if possible. On the other hand, people are afraid of the misuse of the very data those technologies need in order to function well. Many users are not aware that this is a conflict of interest and that these technologies are precisely not free.

Collecting the data is one thing. But what about analyzing the data and acting on the results of that analysis – keyword “big data”? Can anyone yet estimate what the implications are?
In fact, some of the prediction models based on “big data” work alarmingly well. I would refer interested readers to the talks given by Yvonne Hofstetter and Viktor Mayer-Schönberger at re:publica 2014 (both in German).
Looking ahead, we can say with certainty that these technologies are going to change our lives and the way we perceive the world. But today we are still at the beginning of this development.
Statistical analysis of big data can indeed yield complex insights, but it cannot substitute for human interpretation and evaluation. If we do not manage to involve the users who disclose their data in the interpretation of that data, the result will not only be a disempowerment of people, but also relatively one-dimensional technologies and a rapid demystification of the current big data hype.

What is the approach to this problem in the EEXCESS project?
Ensuring the best possible protection of personal data is not enough. It is also about “informational self-determination”: To what extent can I decide for myself which data I disclose and send? How can I access and control the masses of information that are stored about me over my whole lifetime? These are the essential questions; they concern the personal autonomy of every single human being, and they are a challenge for our democracy.
In this respect, EEXCESS follows the right approach, I think. The prototype already provides that users have access to their profile and can decide for themselves which information they want to give away. In the future, we will also deal with the information the software tracks indirectly, e.g. the user's search history, and visualize it for him or her. This is going to be exciting: we want to give users more control and more individual choices and, at the same time, improve the performance of our software. This is a very important aspect of the project.
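To make this idea concrete, here is a minimal sketch in Python of what such a user-controlled profile could look like. It is not taken from the EEXCESS code base, and all names in it are hypothetical: each profile attribute carries an opt-in disclosure flag, and only attributes the user has explicitly released are included in what gets sent to the recommender.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of "informational self-determination":
# the user, not the software, decides which profile attributes
# may leave the client when a recommendation query is built.

@dataclass
class ProfileAttribute:
    name: str
    value: str
    disclosed: bool = False  # off by default: opt-in, not opt-out

@dataclass
class UserProfile:
    attributes: list[ProfileAttribute] = field(default_factory=list)

    def set_disclosure(self, name: str, disclosed: bool) -> None:
        """Let the user toggle, per attribute, what may be sent."""
        for attr in self.attributes:
            if attr.name == name:
                attr.disclosed = disclosed

    def disclosed_view(self) -> dict[str, str]:
        """The only view of the profile the recommender ever sees."""
        return {a.name: a.value for a in self.attributes if a.disclosed}


profile = UserProfile([
    ProfileAttribute("interests", "economic history"),
    ProfileAttribute("language", "de"),
    ProfileAttribute("location", "Munich"),
])

# The user releases interests and language, but keeps location private.
profile.set_disclosure("interests", True)
profile.set_disclosure("language", True)

print(profile.disclosed_view())
# {'interests': 'economic history', 'language': 'de'}
```

Defaulting every attribute to `disclosed=False` mirrors the opt-in idea behind informational self-determination: nothing leaves the client until the user has actively released it.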

Thank you very much for this interview!

Interview conducted by Maren Lehmann

Links

EEXCESS Policy Model for Privacy Preservation and Feasibility Report

Want more background on big data and informational self-determination? Here are some suggestions:

Eli Pariser: Beware online filter bubbles (video)

Kate Crawford: Identifying Risks of Hidden Bias in Big Data Research (video)

Erica Naone, MIT Technology Review: With Big Data Comes Big Responsibilities

Justin Fox, Harvard Business Review: Why Data Will Never Replace Thinking

Jim Stikeleather, Harvard Business Review: Big Data's Human Component

 
