A brief introduction to a long road ahead.
Note: This article was published by The IO Foundation on 26 April 2020. It was also published in MCMC's myConvergence, Issue 19 (page 40). It is republished here with updated materials and minor corrections.
Personal data and Privacy are the subject of heated debates in international fora, along with concepts such as Data Governance1 or Ethics in AI2. These conversations typically explore the policies and legal ramifications of data management as well as cross-border data transfers and related free trade agreements. Data is not the new oil; data is, quite literally, everything by now.
In the current competition to amass as much data as possible, corporations are winning by and large, hosting in their infrastructures an ever-increasing number of users and relegating traditional states to act as mere placeholders for the physical envelope of their digital users.
Human Rights have historically been an uphill battle for many societies. Democratic societies have recognized that the well-being of their citizens produces better and more prosperous outcomes than authoritarian alternatives. Over the past centuries, many frameworks have been established, the most commonly known of them being the Universal Declaration of Human Rights3 (UDHR).
When data entered the stage, almost no one was prepared. Only a few individuals gradually came to understand the deep, life-changing implications that societies would face. The world did not prepare, and the data storm caught users devoid of the awareness and protections necessary to avert the harms inherent to digital life. We were, and still are, lacking adequate Digital Rights frameworks to protect that data.
Data-Centric Digital Rights are the core advocacy of The IO Foundation (TIOF), which it has been working on since its inception. In this article, we will analyze the nature of Data-Centric Digital Rights, some of the challenges faced in recent years and, more importantly, the steps that governments need to take in the coming years to embrace a conducive regulation and implementation of Digital Rights that protects their citizens.
To better understand the role that Digital Rights play in our lives, we shall analyze them through a more relatable concept from our daily lives: architecture. And thus, paraphrasing Douglas Adams: it begins with a house.
Let’s consider a building. Its design, planning and construction involve, among many other factors, a confluence of urban and safety regulations, the observance of dozens of technical requirements and the collaboration of a considerable number of people. An architect will undertake the task to design a product that complies with all the above, authorities will supervise compliance, builders will implement the project and the public will enjoy using the resulting space.
Through all this sophisticated process, the most important aspect is that the final users will forever remain oblivious of all its complexity. This is a desired outcome: imagine every person having to undertake the necessary training to evaluate if the building they are entering is safe to use, from its structural design all the way to the correct implementation of the fire safety measures, the use of non-toxic materials or the proper maintenance of the air recycling system. Every person, for every new building they walk into.
Instead, societies function and develop under the more rational premise that checks and balances, set and enforced by governments and their agencies, are in place to ensure that final products and services are delivered in a manner that is safe for everyone. Users do not require expert knowledge beyond understanding how to use a product; a license is required only when misuse could harm other parties (for instance, when driving a car).
Such safeguards are commonplace around us and may also be found in digital equipment, such as the Communications Equipment Certification4 issued by MCMC. In the case of smartphones, it ensures that the hardware complies with the necessary safety regulations, reassuring citizens that their devices do not exceed safe limits for (non-ionizing) radio-frequency emissions and that their batteries will not explode.
Interestingly enough, this same principle is yet to be applied to Data Protection Laws and to its parent concept, Data-Centric Digital Rights.
Though the term Digital Rights has become relevant in recent years (it is often confused with DRM5, which resides solely in the domain of intellectual property), it remains widely unknown to the general public. In the traditional sense of the term, Digital Rights are considered by many to be the application of Human Rights in digital spaces (typically the Internet) as a medium.
While this usage derives from a charged historical context, it does not represent reality accurately. Indeed, were we to apply the same consideration to other communication channels, we would quickly and unequivocally coin the terms Paper Rights and Wave Rights.
The reality is however very different: the violation of Human Rights remains the same in nature regardless of the mechanism of transmission. Bullying someone face to face, in a newspaper article, on the radio or on social media results in the same type of damage for the victim.
When considering Digital Rights we need to analyze the very nature of digital spaces and what composes them: their infrastructure and the data they process.
The attempt to protect these two elements and avoid potential derived harms results in a set of regulations that we call Digital Rights. We call the advocacy of promoting and supporting these Digital Rights Data-Centric Digital Rights, or DCDR for short.
Governments worldwide have traditionally approached the need to protect their citizens' data through some form of Data Protection Law.
In Europe, the General Data Protection Regulation6 (GDPR) was adopted in April 2016. Its impact on the industry and the mindset change it sparked were so remarkable that, despite organizations being given two years to prepare and adapt their systems and procedures, many were still caught off guard when enforcement began in May 2018. GDPR would eventually set the pace for all Data Protection Laws to come, thanks to its comprehensive set of user protections and its defense of concepts such as Privacy by Design7. Personal data and Privacy are now a conversation that has slowly migrated from expert circles to the wider public.
In Malaysia, the Personal Data Protection Act8 (PDPA) was enacted in 2010 and is currently undergoing revision. Approximately 116 jurisdictions worldwide have passed some form of Data Protection Law (DPL), with varying degrees of protection.
Despite these regulations, all DPLs fail in the most basic aspect: actually providing citizens with a transparent framework to protect their data.
However many policies are enacted, their implementation counterpart is missing: the laws contain a list of legal provisions with no clear, standard technical specifications on how to implement them.
This situation leads to two critical problems:
First, the inability to implement digital infrastructures and services that can be certified in a standard manner and thus provide transparent compliance to the law.
Second, the unavoidable consequence that users must bear the weight of ensuring that their Rights are being observed at all given times, even if this requires an uncommon level of awareness, knowledge and resources.
Let's imagine for a moment a world where architects were only told what a building should look like, leaving all technical implementation to the free will of the builders. Moreover, let's imagine that the builders do not necessarily know about the legal regulations the architects must comply with, nor the harms they can cause if the building collapses. One step further into this parallel reality would bring us users who are required to accumulate the knowledge of the architects and the skills of the builders, topped off with the legal expertise to know how to proceed if the building were to collapse and the medical knowledge to patch up their own injuries. And all of this before entering every single building in the world.
Science fiction? Not quite: this is precisely the world we live in when it comes to technology.
Legislators and technologists have a long-standing history of not getting along. Project managers (the architects) are concerned about compliance, and programmers (the builders) are unaware of the harms they can cause with their implementation decisions. Moreover, the expectation is that users will understand the regulations set by Data Protection Laws worldwide (as their data may transit different jurisdictions) and that they will take all the necessary steps to act in their own defense should any misuse of their personal data occur.
These assumptions are very dangerous and are at the core of the lack of Digital Rights awareness among all involved parties, resulting in dispersed regulations, non-standard implementations and a click-happy reality in which users accept Terms of Use (ToU) on digital platforms that they neither read nor understand, in turn exposing themselves to dangers they do not realize.
At the core of all this confusion lies one very simple problem: for historical reasons too long to explain, we have collectively developed the impression that data is a vague concept, some ethereal entity that floats around us, that we can grab when necessary, process and obtain some magical result that somehow makes our lives easier. Nothing could be further from the truth.
Data is intrinsically connected to us. All (source) entities generate data (people, companies, the weather, everything), and we cannot disconnect that data from its source lest it lose all of its value.
Indeed, consider the number 5. On its own this value is meaningless until we determine that it represents the number of years someone has been working for a company or the number of credit cards in their wallet.
It is only by fully contextualizing the figure that we obtain any semblance of value. As a result, all data is intimately linked to its source, and once consolidated it creates a model, a representational entity, in a digital space.
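The point can be sketched in a few lines of code (all names below are illustrative, not drawn from any standard): a bare value carries no meaning until it is bound to its source entity and a semantic context.

```python
from dataclasses import dataclass

# A bare value: meaningless on its own.
meaningless = 5

# The same value, bound to a source entity and a semantic context.
# Field names are purely illustrative.
@dataclass(frozen=True)
class DataPoint:
    source_entity: str  # who or what the data describes
    attribute: str      # what the value represents
    value: int
    unit: str

contextualized = DataPoint(
    source_entity="employee #1042",
    attribute="tenure",
    value=5,
    unit="years",
)

# Severing the context destroys the meaning: only the bare number survives.
print(contextualized.value == meaningless)  # prints True
```

The sketch mirrors the argument above: the consolidated record is a small model of its source entity, and stripping away `source_entity`, `attribute` and `unit` leaves nothing but the valueless number 5.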
Once we observe and accept this intimate correlation between the source entity (a user) and its representational entity (the model resulting from all the data extracted from the user), it’s easy to understand that protecting citizens in their interactions within digital spaces must come as a result of safeguarding their data and the infrastructure that manipulates it with a new set of rules: Digital Rights.
In short, Data-Centric Digital Rights are the set of principles and regulations that protect Representational Entities from being misused in digital spaces, in turn protecting their Source Entities from harm.
Digital spaces do not function under the same rules as the analog world. In order to establish a conducive framework for Digital Rights, different concepts are to be considered. Core to TIOF's advocacy on Digital Rights, the following definitions form a pivotal set of Principles for their implementation.
The traditional understanding of data as entities separate from their users is anchored in past perceptions and the use of legacy technologies. The reality is much different: the data representing users (over which they should have control and consent) is intimately and inextricably linked to them; it models them, creating an accurate representation that loses all value should that contextualization ever be severed.
The direct consequence is that a user’s data IS the user itself.
This proposition has severe consequences, as the same duties of care that constitutional laws apply to citizens should equally apply to the data representing them. In this sense, the infrastructures that governments put in place to protect their citizens (hospitals, highways, the judiciary, ...) should also be extended to the management and protection of their data, with a national cloud system based on open standards and governed by a Digital Rights framework.
The UN Guiding Principles on Business and Human Rights9 (UNGPs) are the modern transposition of the Universal Declaration of Human Rights (UDHR) into the corporate scene. They are an attempt to nurture a corporate sector that observes and respects Human Rights by incorporating their principles across all of its operations. The UNGPs are structured around three Pillars, namely:
Pillar I: The State's duty to protect
Pillar II: The Corporate responsibility to respect
Pillar III: Access to Remedy
From a proactive perspective on the use of technology (and therefore on data protection), the objective should always be to avoid the occurrence of grievances, in turn minimizing the need for any remedy throughout the use of any technological product or service.
End Remedy represents the embodiment of the proactive planning, design and implementation of all the mechanisms necessary, both in policy and in technology, to prevent grievances from ever happening during the use of a product or service, in turn minimizing the need for legal action. In the context of Digital Rights, it implies designing policies that protect users and implementing those provisions in a transparent, trustworthy and safe manner, where legal remedies, while defined, are employed only as a last-resort safety net.
Instilling this approach in the relevant stakeholders, in this case programmers in particular, is a critical step to ensure that End Remedy becomes second nature when designing digital products and services.
Initiatives such as the SDGs10, the UNGPs9 or Privacy by Design7 are set in place to define a clear international framework on Human Rights and the defense of Privacy; together with constitutional law, they collectively constitute the Rights that citizens worldwide could and should benefit from.
Digital Rights frameworks should foster not only policies that protect users' data; those policies should also be accompanied by the necessary technical specifications (based on open standards) to implement them.
Rights by Design is the approach whereby policies and technology are designed around the Rights of citizens and their data, so that those Rights are observed in their planning, architecture and implementation, transparently for all stakeholders.
It ensures that users are not required to be experts in digital technologies and instead the infrastructure will ensure that their Rights are being observed transparently, creating no cause for remedy.
When considering the measures and transformations that governments, as well as societies at large, will have to undertake in the years to come in the Digital Rights conversation, we should never forget to involve all the necessary stakeholders.
Commonly and repeatedly forgotten actors, programmers are the builders who implement all the infrastructures, products and services we are so concerned about. Yet they are often not invited to relevant working groups, nor are they introduced to concepts such as Human Rights and Digital Rights during their formative years. While architects are aware of the harms an accident caused by their projects can produce, programmers can hardly evaluate the digital harms they may induce through improper designs and implementations. Furthermore, it is important to understand that digital harms are a topic that, to date, has no international consensus or standard to draw from, which complicates the situation tenfold.
At The IO Foundation, we regard technologists, and very particularly programmers, as the next generation of Human and Digital Rights Defenders in digital spaces.
This is a new frontier to be explored, and it will be critical to properly and proactively train all involved parties (from policymakers to programmers) and to ensure they communicate with each other effectively. TIOF's approach, via its TechUp project, is divided into two parallel actions.
First, to bring the conversation on Digital Rights to the local programmer community (in Malaysia: the Klang Valley) by partnering with local tech groups and running capacity-building sessions flavored with Human and Digital Rights. Second, to progressively introduce the necessary technical concepts to policymakers through targeted events and other activities. The final objective is to bridge the existing gap between both parties so that, by acknowledging that I am my data, they can together build frameworks that observe Digital Rights by Design and aim for End Remedy.
As we transition towards an increasingly digital life, switching our houses for storage space, our paper-based passports for digital IDs and our Ringgit Malaysia for virtual coins, we require unambiguous answers to pressing questions.
Who manages these new digital territories, governments or tech corporations?
What are these new owners bound by? Do they respond to democratic institutions or to opaque shareholder meetings?
Who is in control of the data: the users who originate it or the companies behind the ever-fancier IoT devices that capture it?
If a user were to be cloned, would it feel right to traffic those clones to jurisdictions all over the world without them even knowing? Would their government agree to that? If intuition (and the law) tells us that such a scenario is wrong and illegal, why are we so indifferent when it comes to our data?
If users are transitioning their lives from the analog world into the digital world, aren’t the Terms of Use they accept their new Constitutional Law, applicable to their data?
If citizens are to preserve their Rights and the freedoms attached to them, the conversation about how we protect them transparently, effectively and by design cannot be postponed. The decisions made in the upcoming years will shape the future of societies everywhere and will be decisive in ensuring that Big Brother remains a feared-yet-not-implemented literary exercise.
These topics were addressed during the 2019 Digital Rights Awareness Week11 (DRAW) and will be further explored during the upcoming ::Assembly conference, organized by MCMC and TIOF, where policymakers and technologists will come together to explore how governments worldwide should incorporate Digital Rights into their agendas.
For a long while, Jean F. Queralt had been disturbed by the level of intrusion of information and communication technologies in the personal lives of people and societies at large. With a full career in IT, first as a programmer and later as a sysadmin, he took the leap in 2018 of founding The IO Foundation to establish a more solid and targeted direction for addressing Digital Rights from a technical standards perspective.
He can be reached at [email protected]
The IO Foundation (TIOF) is a global nonprofit advocating for Data-Centric Digital Rights, born out of a fundamental concern for the future of digital communities, both in their governance and their implementation.
TIOF aims to provide platforms to raise awareness on Digital Rights as well as effective solutions to ensure that they are adequately observed.
www.TheIOFoundation.org - [email protected]
1 - Data Governance
2 - Ethics of artificial intelligence en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
3 - Universal Declaration of Human Rights (UDHR)
4 - MCMC certificate
5 - Digital Rights Management (DRM)
6 - General Data Protection Regulation (GDPR)
7 - Privacy by Design
8 - Personal Data Protection Act (PDPA)
9 - Guiding Principles on Business and Human Rights (BHR)
10 - Sustainable Development Goals (SDGs)
11 - 2019 Digital Rights Awareness Week (DRAW)