A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one.
Narrator
Fight Club, Chuck Palahniuk
Aside from coming from a book that most adults should read, this passage captures in barely a few lines the reality of most industries: the cost of a remedial penalty is usually regarded as preferable to fixing an identified problem, let alone learning the lesson and proactively producing a better design in the next iteration.
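The narrator’s arithmetic fits in a few lines. Here is a minimal sketch of that remedial logic, with every number invented purely for illustration:

```python
# The recall formula from the quote: X = A * B * C.
# All figures below are made up for illustration only.
vehicles_in_field = 1_000_000        # A: cars on the road
probable_failure_rate = 0.0001       # B: probable rate of failure
avg_settlement = 3_000_000           # C: average out-of-court settlement

expected_payout = vehicles_in_field * probable_failure_rate * avg_settlement  # X
recall_cost = 500_000_000

# The remedial logic: only recall when paying settlements costs more.
initiate_recall = expected_payout >= recall_cost
print(f"X = {expected_payout:,.0f}; initiate recall? {initiate_recall}")
```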
Going back almost 20 years, I started getting concerned about data and how it was extracted and used by (at-the-time-not-so) big tech. I quickly realized that I was in no position to change any of the narrative and convinced myself that someone with more knowledge, connections and funding would take care of it.
Fast-forward to a few years ago: after an unexpected turn of events, I’d had enough of waiting and decided it was time to attempt changing things. It was not lost on me that the enterprise was difficult (read: crazy) and the odds were overwhelmingly low. With nothing to lose, I started The IO Foundation to advocate for Data-Centric Digital Rights (DCDR).
A quick definition of DCDR would be the attempt to establish the necessary technical standards to transparently implement your Digital Rights and enable technologists to create better and safer digital societies by embracing their role as NextGen Rights defenders. A mouthful, so let me unpack that with a few examples.
There are currently over 130 jurisdictions that have adopted some sort of data protection law. To date, they have all failed to provide an effective, transparent, by-design implementation because, unlike the car industry, they don’t provide a technical standard to implement against. Fifty different companies will interpret a given regulation as best they can, making it virtually impossible for citizens (and the authorities) to verify compliance.
Instead, we ended up with a remedial situation in which, as in Fight Club, a company can simply budget money for compensation instead of ensuring your data is properly protected.
One quickly realizes that, as a result, there is a new player to consider: the developers who design and implement all the technology everyone is so concerned about. Being at the core of ensuring that digital societies are safe for everyone, by design, grants them the role of the next generation of Rights defenders.
The problem is that not only are they given no clear technical directions on how to implement those regulations; there is effectively no technical standard defining Digital Harms and Digital Rights, and so both are missing from their educational pipeline.
At this point, you may wonder why this should matter to you.
Take the overused knife metaphor: a knife can be used both to spread butter on toast and to stab someone. We tend to say that the tool is not the problem; how we use it may be.
Now, what if we identified the possible problematic outcomes in advance and wrote the manufacturing regulations for knives so that their design didn’t allow those harms to happen in the first place? What if we approved the making of knives that remained stiff in contact with butter yet turned to jelly in contact with human skin? Science fiction? Check programmable materials.
Extending this concept to technology as a whole, it’s not hard to see that if we provided programmers with the necessary knowledge and clear technical guidance on the protections we wish to ensure, we would all have an opportunity to enjoy better and safer digital societies.
In the following articles we’ll be exploring some of the key elements of DCDR, the nature of some of its core components, new perspectives in our understanding of our digital experiences and the changes we should promote in both education and industry in the tech sector.
Because we all agreed that we seek better and safer tech.
Didn’t we?
I swear to fulfill, to the best of my ability and judgment, this covenant:
[...]
I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know. Most especially must I tread with care in matters of life and death. If it is given me to save a life, all thanks. But it may also be within my power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty. Above all, I must not play at God.
I will remember that I do not treat a fever chart, a cancerous growth, but a sick human being, whose illness may affect the person's family and economic stability. My responsibility includes these related problems, if I am to care adequately for the sick.
I will prevent disease whenever I can, for prevention is preferable to cure.
I will remember that I remain a member of society, with special obligations to all my fellow human beings, those sound of mind and body as well as the infirm.
If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of healing those who seek my help.
Modern Hippocratic Oath
Louis Lasagna
The current technology paradigm is faulty at its core. As we discussed in the previous episode, a flawed understanding of the nature of data has led us down a path where we are not proactively protecting citizens and their digital twins, the models created from all the data we frantically and insatiably collect about them.
Let’s picture for a moment two doctors meeting to discuss an oncology treatment. We are talking taxonomies here: technical medical language describing the illness and its characteristics, the treatment drugs, the targeted systems, side effects, reactions, the devices involved in delivery and monitoring, and so on.
Do we see patients engaging in those conversations? No we don’t. We leave the treatment design to the experts and, at most, we make the conscious decision to go down this or that path once we have been duly informed. The technical details? None of our business. Doctors take an oath to proactively protect their patients’ lives to the best of their abilities and they can do so because they have the language to describe the illness, how to eliminate it and anything in between.
Now imagine a similar scenario with architects, car engineers or the experts behind any other complex system. It’s not hard to see that we understood long ago the need to leave the complexities to the experts and concentrate on being responsible citizens in the use of the technology they produce for us.
What about digital security? As citizens and technology consumers we are forced to be the ones making all sorts of technical decisions to “protect ourselves”.
A recent Google Media Release points out: “Almost 3 in 5 people have experienced a personal data breach or know someone who has, yet 93% persist with poor password practices”.
First of all, the framing is rich: it’s your fault for creating weak passwords, and it’s even more your fault for doubling down on poor passwords that you likely repeat across platforms. Or for not using multi-factor authentication (aka 2FA/MFA). It’s also your fault if your car manufacturer built a poorly designed key system and you didn’t take the initiative to upgrade it. Or if you entered a building and died in a fire because you didn’t check for yourself that the fire sprinklers were defective. Of course it is.
What’s inherently wrong with this narrative is that the burden of observance is constantly thrown onto the shoulders of citizens. This is simply not a sustainable approach, and it is the core reason why people can’t be bothered with how their data is extracted and manipulated: it’s too much, and it requires a degree of technical proficiency that only a trained expert possesses.
The obvious next step is answering the question: who builds all this stuff we are so concerned about? To no one’s surprise, the answer is technologists; software developers in particular hold a great deal of responsibility, as any tech device is always interacted with through some sort of software interface.
Unfortunately, programmers do not currently have the necessary language to proactively protect citizens. There is, for instance, no agreed-upon taxonomy of the digital harms that data (in the form of our digital twins) can be subject to. How can they design systems that will protect citizens if they can’t define the potential problems to avoid in the first place? How can we inspire them to adhere to a technological Hippocratic Oath if we can’t even define what it is they need to protect? And yet we desperately need their active participation to ensure technology protects us all the time.
It is no surprise that at The IO Foundation we regard programmers as the NextGen Rights Defenders. They hold the keys to better and safer technology that will ensure both our Human and Digital Rights, and we need to work on updating their educational pipeline with the proper technical language (taxonomies, among other things) for them to embrace this role.
In the next episode we’ll dive into TIOF’s DCDR Principles and how developers can start changing their daily paradigm and embrace their role as NextGen Rights Defenders to build technology that is protective by design.
If you tried to pass the UDHR today, no way it would get approved.
Charles Bradley
Informal conversation in Manila.
UDHR stands for “Universal Declaration of Human Rights”. It is a declaration passed by the UN in 1948 that bundles together a series of Rights for Humanity, in an attempt to respond to the atrocities of WWII and ensure they are never repeated.
Its Article 1 states:
All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
The UDHR, complexities and failures of implementation aside, gave us a usable and useful direction to orient us all in the objective of defending Humans. Even its labeling was a good decision: the objective is right there in the name.
Fast forward barely a few decades: technology is pervasive in our societies, creating new realities and challenges with novel threats to our Rights; this time not only the Rights of our fellow humans but also those that (should) apply to our digital twins.
Our societal response has predominantly been to find ways to shoehorn Ethical frameworks into technology, which is yet to bear any actual fruit. Instead, The IO Foundation defends working on the basis of Rights, due to their universal character, and has made this its main proposal to protect digital citizens.
The name of the initiative? Universal Declaration of Digital Rights, or UDDR.
Guess where the name was derived from.
How would a UDDR work?, you may ask. While the length of this article won’t let us get into too many details, let’s unpack it a bit.
What did the UDHR achieve? It gave us a concrete list of Rights (i.e., a taxonomy) attempting to proactively avoid a number of Harms that Humans could be subject to. In other words, observing the lifecycle of a person, under the premise that a long, quality life is a good thing, you draw up a list of possible Harms that could shorten that life or otherwise degrade its quality. With that in hand, you can define proactive measures to avoid said Harms in the form of Rights.
Now apply the same logic to your digital twins: Considering the lifecycle of a digital twin (in the shape of a data schema), what are the Harms it can be exposed to? Answering that (not so simple) question gives you a taxonomy of Digital Harms, which you can now use to build a list of Digital Rights (another taxonomy).
The cool thing about this structured approach is that it gives us something we can explain and teach to programmers; it enables them to understand how they can proactively be the NextGen Rights Defenders.
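To make that structure concrete, here is a minimal sketch of what such paired taxonomies could look like in code. Every name below is hypothetical, invented for illustration; it is not an official DCDR taxonomy:

```python
from enum import Enum, auto

# A hypothetical taxonomy of Digital Harms a digital twin could be exposed to.
class DigitalHarm(Enum):
    UNAUTHORIZED_DISCLOSURE = auto()    # the twin is exposed to third parties
    UNAUTHORIZED_MODIFICATION = auto()  # the twin is altered without consent
    UNLAWFUL_RETENTION = auto()         # the twin outlives its agreed lifecycle

# Each Digital Right is defined as the proactive avoidance of one or more Harms.
DIGITAL_RIGHTS = {
    "Right to Confidentiality": {DigitalHarm.UNAUTHORIZED_DISCLOSURE},
    "Right to Integrity": {DigitalHarm.UNAUTHORIZED_MODIFICATION},
    "Right to Erasure": {DigitalHarm.UNLAWFUL_RETENTION},
}
```

However simplified, a structure like this is something a programmer can reason about, test against and extend.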
Moving forward, what would the UDDR be comprised of?
A Legal Document (L) providing a legal, policy-based definition of the objectives to be accomplished and the different list of Digital Harms to be avoided and Digital Rights to be observed.
A Technical Document (T) providing a technical guideline of the taxonomy of Digital Rights to be implemented by the UDDR.
A Digital Rights SDK (DR SDK) providing a usable implementation of the Technical Document (T) that software engineers can incorporate into their architecture to provide an abstraction layer that transparently observes citizens’ Digital Rights (see the sketch after this list).
A Digital Rights Impact Assessment Global Registry (DRHIA), a publicly accessible global registry providing insights on the adoption and implementation of the UDDR.
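To give a flavor of the DR SDK component, here is a minimal sketch of what such an abstraction layer might look like. All names and methods are hypothetical illustrations, not TIOF’s actual API:

```python
from dataclasses import dataclass

@dataclass
class DigitalTwinRecord:
    subject_id: str   # who the data is about
    schema: str       # which data schema the payload follows
    payload: dict

class DigitalRightsLayer:
    """Hypothetical abstraction layer: every operation on a digital twin is
    checked against the applicable Digital Rights before it runs."""

    def __init__(self, jurisdiction: str):
        self.jurisdiction = jurisdiction  # which legal taxonomy applies

    def store(self, record: DigitalTwinRecord, purpose: str) -> None:
        if not self._is_permitted("store", record, purpose):
            raise PermissionError("Operation violates the subject's Digital Rights")
        # ... hand the record to the regular storage backend ...

    def erase(self, subject_id: str) -> None:
        # The Right to Erasure honored by design: deletion is part of the API,
        # not a remedial afterthought.
        ...

    def _is_permitted(self, operation: str,
                      record: DigitalTwinRecord, purpose: str) -> bool:
        # Placeholder: consult the taxonomy from the Technical Document (T) here.
        return True
```

The point of such a layer is that application code never touches a digital twin directly; Rights observance happens transparently underneath it.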
In summary, the UDDR would enable us to be responsible users. Just like we are expected to be responsible drivers and not car engineers.
To get hands-on, here’s a quick theoretical timeline:
Establish the list of Digital Harms
Build around the above the list of Digital Rights we wish to defend
Incorporate them into the devs’ educational pipeline
Jurisdictions comply with that taxonomy when defining local regulations, which can then be enforced, transparently, through local agencies to ensure proper protection of citizens’ digital twins
In turn, citizens can decide which operations they allow to be applied to their digital twins and, on that basis, which jurisdictions those twins may enter
Sounds crazy? Look at how cars enter the market, or smartphones for that matter. This is a model that has been tested and implemented for decades.
That’s an impossible, daunting task!, I hear you say.
Well, consider any historically significant event: the World Wars, the several conflicts that (mostly) abolished slavery globally, universal suffrage for both men and women. Now ask whether anyone could have predicted, just 5 years beforehand, that any of those would happen.
This is not to say we shouldn’t analyze our current socio-techno-political environment. We passed the UDHR only after a full-blown Holocaust and after bombing the hell out of two Japanese cities. Humans seem to need horror either to commit to massive societal change or to accumulate enough context to use it as a marketing prop.
One thing that keeps me awake at night is wondering what the digital equivalent of those atrocities will be and how it will be framed. At this rate, though, it feels as if the current technological paradigm is designed in such a way that a point of no return is coming. And we surely should do something about it.
Wouldn’t it be great if, for a change, we grew up as a (digital) society and created the proper paradigms and infrastructures, such as the UDDR, before that catastrophe materializes?
Change is possible and it happens because we decide to. Ask Charlie Skinner.
In the next chapter we’ll have a look at one of the main frictions standing in the way of the UDDR. Corporate interests? Sure, those too… and yet the main friction is Civil Society itself.
We try to avoid policy and business questions, as much as possible.
“Who is the Oracle? She’s the glitch in the matrix. What is her archetype? She represents civil society.”
Kiu Jiayaw (paraphrase)
The first quote above can be found on the IETF’s “Getting Started in the IETF” page and is the perfect manifestation of the separation between technologists and policymaking and, ultimately, civil society.
For those who may not be acquainted with the Internet Engineering Task Force (IETF), it is, broadly speaking, the organization that imagines, manages and updates the communication protocols of the Internet. Yes: a decisive technical organization prefers to stay out of policy whenever possible. If this doesn’t deeply bug you (or at least perplex you), it should.
And why?, you may ask. Because civil society’s future depends on technologists and without them it is doomed to extinction.
Maybe a good start would be to define what civil society is. Aggregating a number of definitions, we can understand it as the ensemble of organizations and individuals that manifest the interests and will of citizens, separated from the government and corporate sectors. NGOs, individuals and even academia are members of civil society. In other words, it is all of us whenever we wear our citizen hat.
One of the main reasons why civil society has been able to achieve remarkable leaps in protecting citizens is because it was able to translate its different advocacies into actionable language; this in turn helped recruit people who specialized in those advocacies as well as train newcomers who, led by passion and purpose, entered the field.
Who are the experts in technology that civil society is recruiting? And what purpose would anyone want to subscribe to when we can’t even describe to technologists, in a language they understand, what they will be defending?
As things stand, civil society is not getting any technologically younger. It is failing to connect with technologists. This is particularly troubling when considering that technology enables transparency and accountability options that weren't widely available until recently. These are great tools that properly employed would drastically advance all advocacies put forward by civil society.
If that wasn’t bad enough, technologists at large are not encouraged to participate in crafting policies that will affect their work (and by extension all of us).
In the Malaysian context, it is worth noting that not enough technologists are involved with international events or authoritative organizations, something The IO Foundation is trying to change.
Simply and bluntly put, civil society suffers from a lack of digital knowledge marinated in a non-negligible degree of technophobia.
Technologists, in turn, do not understand what is expected of them and mistakenly believe that technology is devoid of politics. In their defense, what they generally mean is that they don’t want to create technology based on who is in the cabinet at a given time; unfortunately, that stance gets conflated with taking positions to protect citizens, something they often do.
To the technologists reading this: the best example is how encryption moved from Layer 7 down to other layers of the OSI model. That was technologists understanding that privacy had to be ensured by design rather than left to each service provider to get around to.
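For the concrete version, the sketch below uses Python’s standard library to wrap a plain TCP socket in TLS, i.e., encryption below Layer 7. The application code above the wrapped socket stays unchanged, which is exactly the “by design” point; example.com is just a stand-in host:

```python
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        # The application still speaks plain HTTP; encryption happens
        # transparently in the layer beneath it.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(1024))
```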
So yes, technologists can care and they indeed care about citizens and the impact of technology on them. So why aren’t they drawn into civil society?
Back in the 80s & 90s, with the advent of personal computing, we witnessed the rise and might of technology magazines. Hundreds of publications came to life in an absolutely thriving business. Here’s a quiz for you: Was it content creators or journalists who learned about technology… or was it technologists who learned how to write? Of course the latter; by. the. buckets.
Civil society needs to stimulate a similar diaspora, with technologists joining existing NGOs, creating their own Tech NGOs or contributing at an individual level. At The IO Foundation, we’ve been asking ourselves what is failing and how to spark that movement. So far, we have identified the following major points of friction:
Purpose (via lack of language)
This is, for us, the number one missing item. We keep talking to technologists in ways they don’t understand, undermining any purpose-driven approach. “Build me a social media platform that respects Freedom of Expression” does not translate into any immediate algorithm they can work with. Neither do Data Protection Laws, for that matter. Even civil society’s current definition of Digital Rights is vague and unfit for purpose. How can we expect technologists to help build better and safer technology if they don’t even know how to express what they should be defending? Unless we frame it in a way they understand, they won’t find their purpose.
Money & Career
The average NGO struggles with steady funding (never mind independent funding) as it is. Where is the money going to come from to sustain Tech NGOs, let alone to attract this much-needed talent? At TIOF we’ve been arguing that micro funding is likely the future, a natural evolution from micropayments for tech products and services, if you consider that generating an impact is, per se, a product that people subscribe their trust to. Then comes the traditional funders’ space, that complex supply of money to which most NGOs are beholden. Analytically speaking, one can organize funders’ attention to technology into 3 stages:
Tech as a medium, where technology is observed for the services it provides, not how it is built.
“Digital Infrastructure” (the new buzzword), where there is growing interest in how digital technology is built.
The people behind the infrastructure, where we would be looking at who builds the digital technology.
This last stage is the true holy grail, and pretty much no one is looking into it. In recent years, traditional funders have worked out the first 2 stages and have timidly considered the third. Without them properly aiming at closing that gap, NGOs are not going to be able to attract technologists into their ranks.
And I’ll throw in an extra concern: a LOT of the funding comes from the very Big Tech that civil society is expected to deal with. See the problem?
Strategies
Civil society in general, and NGOs in particular, have it all wrong when they engage with governments and corporations in the same traditional, remedial ways on digital technologies. Instead, we should have technologists participate in standards working groups so that the necessary protections are embedded in the standards themselves. When governments legislate technology, they usually make use of these standards, producing a ripple effect. Work smart, not hard.
To put it bluntly, civil society won’t be able to protect Human Rights, the climate or much of anything else in the not-so-far future if it doesn’t adequately tackle tech. Running a working session on biometrics without a single biometrics expert in the room is not a winning proposition; acting offended when this is pointed out won’t tame the elephant in the room. And yes, that happened.
Putting it all together: NGOs are heading towards their digital extinction, and civil society is slowly yet surely falling into digital irrelevance by refusing to understand the nature of technology and persisting in looking at it from the outside, as a mere provider of services.
Civil society needs a transfusion. It needs new blood that can bring not only the energy, but also the knowledge necessary to effectively address the challenges posed by our ever present digital societies, Big Tech and digitally confused governments. And funders need to massively help them in this effort.
Civil society desperately needs to embrace technology both as an integrated advocacy and as a tool to improve its operations, lest it spiral out of control into its already impending crisis. To achieve this, we need to generate interest among technologists and provide them with value propositions that make sense to them, in their language. We also need to encourage them to create the next generation of NGOs: Tech NGOs.
Be it as individuals or as members of Tech NGOs, we badly need them to join the ranks of civil society if we want to have a real opportunity.
What’s at stake? Building digital societies that advance humanity.
The alternative? Building the digital dictatorships that will turn us into… well, androids.
The upside is that we may finally know if we would dream of electric sheep.
In our last installment of this series, we will try to consider what living in the future may look like. Yeah, those “smart” cities.
If you don't ask the right questions, I can't give you the answers and if you don't know the right question to ask, you're not ready for the answers.
Ed Parker
It is generally understood that Information Theory started in 1948 with Claude Shannon’s “A Mathematical Theory of Communication”. Data wasn’t born that day, though sure enough, from then on we experienced exponential growth in computing power, communication capabilities and relentless data ingestion.
Data is necessary and can’t be stopped. Wearing glasses signals some sort of eye problem. Sharing a meal with others tells you about their taste in food and possibly other personal preferences. We emit and receive data all the time, and this is genuinely necessary; otherwise we wouldn’t be able to make sense of our environment and orient ourselves within it. All of our decisions, while sometimes not properly informed, are data-driven.
And so the question begs to be asked: what is data, actually?
I often feel that the current common perception of data is something along the lines of the magical dust floating in Middle Earth and that only some wizards can channel it through their arcane knowledge, mix it with other obscure elements and produce an otherworldly outcome to the marvel of us mere mortals.
Well, nothing could be further from the truth. Human-quantified information is only useful to humans, and only if it is sufficiently contextualized. In other words, by itself the number 75 means nothing and thus has no value. Now, if we establish that it is the number of beats per minute of my heart, then we can suspect that, in general, I am healthy.
Here’s the important part: what gave meaning and value to that simple piece of data was to contextualize it sufficiently. That and only that.
Over the past decades, a number of interpretations of the nature of data have been at the center of rather heated debates. We live under a cocktail of them, and this makes it harder to address all the malpractices we are already too used to.
The Data as IP approach considers that “your data is like a song and if someone else is whistling it you should get paid for it”. The problem? This view detaches us emotionally from our data, making it nearly impossible to feel compelled to protect it except for commercial reasons.
The Data as Labor viewpoint establishes that every time we interact with technology, say posting a picture or writing a restaurant review, work is generated. This keeps opening the door to deceptive ideas that we can and should monetize our data.
Others prefer to observe Data as Infrastructure, arguing that data is only a means to build services and products. Again, a position where we see data as an external tool that has nothing to do with us.
There is however a more straightforward way to understand data that in fact consolidates all the above propositions and gives us a sense of how misusing data leads to harms: You are your data.
It’s really that simple. All the data collected about you creates models of yourself.
Data, when properly organized using schemas, generates models that represent you. We call those digital twins.
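A toy sketch may help. The bare number from the earlier example is worthless; a sufficiently contextualized record, organized under a schema, starts to model a person. The schema below is invented purely for illustration:

```python
from dataclasses import dataclass
from datetime import datetime

raw_datum = 75  # meaningless, and therefore valueless, on its own

@dataclass
class HeartRateReading:
    subject: str            # who the data is about
    value: int              # beats per minute
    unit: str
    measured_at: datetime

reading = HeartRateReading("John", raw_datum, "bpm", datetime(2022, 5, 1, 9, 30))
# Aggregate enough contextualized readings under schemas like this one and
# you get a model of the subject: a digital twin.
```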
When you look at data as an extension of yourself, concepts such as consent or data trafficking feel totally different. We may look deeper into all these concepts in future articles.
Some will quickly jump and argue “but that opens the door to selling your data!”.
Well… yes but no:
The recognition of ownership does not imply the ability to sell something. I own my brain and yet can’t sell it.
We sell ourselves for work on a daily basis. Recognizing the intimate relationship between us and our data would allow better regulation to minimize abuses.
Should the nature of data matter to you? For one thing, it is you. That’s precisely what big tech and all authoritarian governments understood long ago, and why they hoard your data. Or consider any of the nowadays all-too-common data breaches exposed worldwide. In the alleged JPN breach, would you say it was data from 4 million Malaysians being circulated, or rather 4 million Malaysians being exposed naked?
We hear often enough that you can’t get the right answers if you don’t ask the right questions. It seems to me that we asked the wrong questions when we started massively collecting data without understanding its true nature and that along the way we seriously turned a blind eye to the right questions because we were not ready for the actual answers.
A parting thought: Information is power and we currently do not control in effective ways who has access to it; essentially because we grew detached from it.
Better understanding the nature of data allows us now to dive into who should be taking care of what in the next episode.
The world – in general – is multicultural, it’s full of wars and destruction, so it wouldn’t be wise to import all of this into our societies.
Jordan B. Peterson
Ethics here, ethics there, ethics everywhere. When you start hearing a word a bit too much, you can make a safe bet that someone is overusing it, not understanding what it means, chasing a buzzword, or some combination of all three. Not to say that concern and interest aren’t legitimate options, just that they are less common.
Technologists have been having a field day with Ethics. Every relevant organization, it seems, has been issuing its own Code of Ethics. Some have also issued recommendations in the form of actual “standards”, such as RFC 1087. Use of the term has also grown impressively in the past 5 years.
So… what are Ethics? Here are some definitions:
"[..] the discipline concerned with what is morally good and bad and morally right and wrong." Encyclopædia Britannica
"A system of accepted beliefs that control behavior, especially such a system based on morals."
Cambridge Dictionary
"A set of moral principles : a theory or system of moral values."
Merriam-Webster
See the problem already? Ethics are morals-driven, and morals in turn derive from cultural elements, which by their very nature differ from one another: you distinguish cultures precisely by those differences. Moreover, morals change over time as societies face new challenges.
So which “ethical” system should be chosen, in the context of technology? And from which period in time? The one where honor is paramount (and suicide is regarded as an appropriate exit clause) or the one that rewards honor killings?
I hope you see what I did there.
Consider your digital twins, moving across platforms. How can you be sure they will be treated with the same level of “respect” if source and destination adhere to different ethical systems?
Is anyone expecting China, South Korea, Russia or Iran (to name a few) to have exactly the same interest in protecting citizens’ data as the EU? Is that even happening with mainstream Big Tech platforms in Western cultures, to begin with? Ask Snowden.
One quick example, extracted from RFC 1087, in which the Internet Architecture Board (IAB) characterizes as “unethical and unacceptable any activity which purposely: [...] (e) compromises the privacy of users.”
That’s coming from an organization with dozens of engineers working for Big Tech, which invests all its energies in extracting citizens’ data by any means possible. To be clear: there is nothing wrong with working for Big Tech per se, nor does the IAB support data extractivism. What I am pointing out is the need for those two sides to work out that contradiction.
Now think of AI and all the buzz around Ethics. I am forever perplexed at this one.
Even more concerning, these (presumably) well-intentioned yet confusing ethical propositions say nothing about how to implement them, so we are confronted with further segmentation, even within a given ethical adherence.
The solution: finding a minimum common denominator that is universally applicable to people anywhere in the world. In other words: Ethics must give way to Rights.
Despite having different levels of implementation (international, national, local), the main quality of Rights is that they usually tackle universal features of human existence. I’ll get into more detail in the next episode, though let’s quickly note that all humans benefit from things such as Freedom of Expression or Freedom of Movement, regardless of cultural milieu, because their absence inevitably leads to people’s demise. You don’t get to talk, you can’t express your needs, you’ll end up dying. You can’t move to where the food and safety are, you’ll end up dying. It very much is that simple.
That said, let’s please all remember that there are no Rights without Responsibilities.
We, plumbers, programmers, business developers, lawmakers, career politicians, educators, all of us as citizens (and ultimately digital citizens), have the inescapable duty to work together towards maintaining the Rights we enjoy through the exercise of our Responsibilities.
“Do we have good examples of technologists taking a side?”, you may ask. Take a peek at the history of encryption in communication protocols. We can and we did.
In the next episode we’ll have a peek at how Rights could be implemented, who the different stakeholders would be and how it would all flow.
The best part? You’ve already been doing it… only not for technology.
Those are my principles, and if you don't like them... well, I have others.
Groucho Marx
In a previous episode, I introduced the notion that technologists at large, and programmers in particular, are the Next Generation of Rights Defenders, and that we as a society need to provide them with all the necessary knowledge and tools to embrace this new (and somewhat intimidating) role. Most of all, we need to inspire them so that the transition is both understood and undertaken.
For the purposes of TIOF’s DCDR Principles, we will concentrate here on programmers.
An actual, meaningful change would involve a comprehensive reform of the programming educational pipeline, starting with a proper upgrade of the understanding of the nature of data, a full analysis of the lifecycle of data structures, their Digital Harms and thus their Digital Rights, as well as the application of all these in actual architectures.
Admittedly, this is a bit too much for an overnight reform that is nonetheless necessary and overdue. Much to our chagrin, we are going to have to undertake this long process if we ever want to stand a chance in the face of the perils of misunderstanding digital technologies.
In the shorter term, a more reasonable approach is to guide programmers in changing their current day-to-day paradigm, giving them some quick-to-reach tools that may help them steer the wheel towards software that is more protective by design. Picture Jiminy Cricket being a bro by reminding you that protecting citizens is a good thing and not hard to do.
At TIOF we knew that elaborating long, complex principles would be ineffective and potentially counterproductive. We took to the task of elaborating brief, easy-to-understand principles that anyone could relate to.
Jiminy, jumping onto your shoulder, would proclaim: "Treat their data as you'd want to be treated."
Indeed, if we are our data, then it comes as no surprise that the way we manipulate it is akin to the way we would treat its source entity. In other words, we want to protect our fellow citizens by handling their digital twins with care and respect. Just as you’d like to be treated, dear programmer.
Jiminy, pointing at your source code, would ask you to "Adopt designs that minimize grievances."
Derived from the UN Guiding Principles on Business and Human Rights, the idea is exceedingly simple: proactively design architectures that avoid having to resort to remedial solutions. In other words, wouldn’t you prefer that a platform effectively deletes your data instead of you only having the option to sue them if (and only if) you discover one day that they didn’t honor your request?
Bouncing on top of your screen, Jiminy would encourage you to "Leave no policy uncoded behind."
Think about it: what’s the best way to give citizens peace of mind? Easy: implementing, transparently and by design, all the policies mandated by existing regulations to protect them. Isn’t that what we do for pretty much everything else?
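As a flavor of what “coding a policy” can mean in practice, here is a minimal, hypothetical sketch of a retention rule expressed directly in code, so compliance is enforced by design rather than promised in a policy document. The 90-day window and field names are invented for illustration:

```python
from datetime import datetime, timedelta

# Suppose a local regulation mandates a maximum retention of 90 days.
RETENTION = timedelta(days=90)

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Keep only records still within their lawful retention window."""
    return [r for r in records if now - r["collected_at"] <= RETENTION]

records = [{"id": 1, "collected_at": datetime(2022, 1, 1)}]
print(purge_expired(records, now=datetime(2022, 6, 1)))  # -> [] (past retention)
```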
Now, taking the challenge one step further, could we turn the above into a pledge, a Digital Hippocratic Oath of sorts? Something along the lines of:
I swear to fulfill, to the best of my ability and judgment, this covenant:
I will respect my fellow citizens, for their problems and data, which is them in essence, are not disclosed to me that the world may know. Most especially must I tread with care in matters of life and death. If it is given me to save a digital twin, all thanks. But it may also be within my power to erase a digital twin; this awesome responsibility must be faced with great humbleness and awareness of the limits of my own technical prowess. Above all, I must not play at Digital God.
I will remember that I do not treat a dataset, a schema, but a citizen and their authentic digital twins, whose improper manipulation may affect the citizen’s family’s safety and economic stability. My responsibility includes these related problems, if I am to care adequately for people’s data.
I will strive to design architectures and to implement technology that embeds all existing protections whenever I can, for prevention is preferable to cure.
I will remember that I remain a member of society, with special obligations to all my fellow human beings, those with access to technology and those without.
If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of building digital spaces that encourage societal growth while ensuring safety by design.
Fancy this version? How would you improve it?
The next step in making sure that tech paradigms worldwide are aligned is ensuring that they all follow the same goal. Easy to say yet hard to achieve, since everyone seems hell-bent on this near-magic word, “Ethics”, which simply means different things to different people. Let’s see if we can find an alternative to that conundrum.
To base our technology on ethics is a fool’s errand, and a very dangerous one at that. We need to be reminded that technology is morally agnostic and largely unaffected by traditional borders. It is, however, built by people, so their role as NextGen Rights Defenders cannot be overstated.
Instead of building our digital societies around ethical systems, potentially plagued by antagonistic concepts and with no clear implementation guidelines, which will inevitably lead us to unmanageable disasters, we should invest in changing our perception of the nature of data and architect digital societies around Rights. In other words, compliance with local legislations should be transparent to the citizen and done on the basis of universally critical considerations of protection over their lives and those of their digital twins.
The protection of our digital twins through frameworks such as the UDDR will not magically happen; it will be the result of paradigm shifts in our understanding of the nature of data and of architecting digital platforms around the survival of our digital twins through the observance of their Digital Rights. Most importantly, it will require the mobilization of technologists as first-line defenders of all these Rights.
A penny for your bytes: the series
PART I: ABOUT DATA-CENTRIC DIGITAL RIGHTS
PART II: THE NATURE OF DATA
PART III: NEXTGEN RIGHTS DEFENDERS
PART IV: DCDR PRINCIPLES
PART V: DITCHING ETHICS, EMBRACING RIGHTS
PART VI: SHAPING TOMORROW, TODAY
PART VII: THE IMPENDING CS CRISIS
Jean F. Queralt (John) is the Founder and CEO of The IO Foundation, a tech nonprofit advocating for Data-Centric Digital Rights.
Disturbed by the level of intrusion of technology in the lives of citizens, he took the leap in 2018 of starting The IO Foundation to establish a more solid and targeted direction to address users' protection from a technical standards perspective.
He is actively involved in Standard Developing Organizations.
Because he regards technologists as the NextGen Rights Defenders, he works to raise awareness of the importance of these organizations across the technical community and facilitates participation in them.
The IO Foundation (TIOF) is a global nonprofit advocating for Data-Centric Digital Rights, born out of a fundamental concern about the future of digital communities, both in their governance and implementation. TIOF aims to provide platforms to raise awareness of Digital Rights as well as effective solutions to ensure that they are adequately observed. For more information, please visit The IO Foundation's website.