What You Need to Know about Digital Rights: a RightsCon Debrief

By Kristi Arbogast, Communications and Operations Associate at Open Gov Hub

With inboxes overflowing with new privacy policy notices now that the European Union's General Data Protection Regulation (GDPR) is in effect, it seemed all the more vital to gather technologists, policy makers, activists, and academics together at RightsCon in Toronto in mid-May.

Over the course of three days at a conference center beside beautiful Lake Ontario, more than 2,500 people gathered at RightsCon to discuss and debate some of the most pressing topics relating to human rights in the digital age.

I went to represent Open Gov Hub and to act as a bridge between our opengov community and the digital rights community I have long researched. Here are my top four takeaways from the various panels I attended on bots, governance, data privacy, artificial intelligence for the Sustainable Development Goals, and algorithmic accountability.

  1. Not all bots are bad. While the word 'bot' often conjures images of bot armies trolling on Twitter, Sarah Moulton and Chris Doten of the National Democratic Institute were quick to remind us that positive uses of bots can and do happen. Bots can be used to increase citizen engagement at scale without needing human capacity, and they can meet people where they already are online. Bots can also serve transparency purposes - for example, tweeting every time the Department of Defense awards a new contract or a politician's spending crosses a certain threshold (a minimal sketch of such a bot follows this list).
    However, if bots are to be used for citizen engagement, trust needs to be built. The panelists had no settled answer on the best way to do this; they suggested that opening up the code, making clear that the user is engaging with a bot, and assuring users that their data will be protected are good starting points. Additional good practices need to be agreed upon and put into place, particularly as bot use continues to rise.
  2. Don't focus only on the source code. When people talk about holding algorithms accountable, they usually invoke the black box problem: you often can't know the actual code that makes up the algorithm, sometimes because it's proprietary information and other times because the code is simply too complex. That metaphor frames the issue as insurmountable, so panelists urged people to look beyond the code and find other ways to hold algorithms accountable - such as auditing the data an algorithm relies on for bias (see the second sketch after this list), monitoring algorithms' impact on actual human rights, or looking closely at the people and companies building them. These are valid approaches to accountability, but no one was really clear about who would do this monitoring. The platform hosting the algorithm? The coders? Civil society? A regulatory body? The technology is changing and evolving so fast that we're continually playing catch-up in this area.

  3. When governments roll out artificial intelligence systems, there need to be human rights risk assessments. As governments rely more and more on AI for decisions related to service delivery, law enforcement, and more, human rights assessments must be built into these processes. New York City, for example, is the first city in the world to establish a task force to review its automated decision-making algorithms and determine whether any violate the city's existing anti-discrimination laws. And on a broader scale, these assessments need to account not only for high-income countries but also for middle- and low-income countries, and for data that represents citizens of all economic statuses, according to Dhanaraj Thakur of the World Wide Web Foundation.
     
  4. Let's unite the Artificial Intelligence, Open Data, and Privacy communities. Representing a community of 40 organizations, many of which focus on open data, I was eager to attend this panel and learn how to bridge the gaps between these communities so that we can be more effective and impactful. The panel, with representatives from each community, set out to cover tensions, potential bridges, and joint advocacy efforts, but it only really covered the tensions. These included the blurred line between open data and personal, private data (particularly regarding beneficial ownership), and the fact that privacy is rarely mentioned with regard to artificial intelligence, despite AI leaders like Google being massive data collectors. The panelists plan to continue these dialogues to further unite communities that share many of the same goals, and we here at the Open Gov Hub plan to do the same.
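
To make the transparency-bot idea from the first takeaway a bit more concrete, here is a minimal sketch of what such a bot could look like. It polls a hypothetical JSON feed of contract awards and announces any award above a spending threshold. The feed URL, field names, and threshold are illustrative assumptions rather than a real government API, and a real bot would post through the Twitter API (for example via the Tweepy library) instead of printing.

```python
"""A minimal transparency-bot sketch, assuming a hypothetical JSON feed
of new contract awards. All endpoints and field names are placeholders."""
import time
import requests

FEED_URL = "https://example.gov/api/contract-awards"  # hypothetical endpoint
SPENDING_THRESHOLD = 1_000_000  # only announce awards above $1M
seen_ids = set()

def fetch_new_awards():
    """Poll the (hypothetical) feed and return awards we have not seen yet."""
    response = requests.get(FEED_URL, timeout=30)
    response.raise_for_status()
    awards = response.json()  # assumed to be a list of dicts
    new = [a for a in awards if a["id"] not in seen_ids]
    seen_ids.update(a["id"] for a in new)
    return new

def announce(award):
    """Compose the public message. A real bot would post this to Twitter;
    here we simply print it."""
    text = (f"New contract: {award['agency']} awarded "
            f"${award['amount']:,.0f} to {award['recipient']}.")
    print(text)

if __name__ == "__main__":
    while True:
        for award in fetch_new_awards():
            if award["amount"] >= SPENDING_THRESHOLD:
                announce(award)
        time.sleep(3600)  # check once an hour
```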
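On the second takeaway, one concrete way to "look at the data" without opening the black box is to compare an algorithm's outcomes across groups. The sketch below computes a simple disparate-impact ratio from a hypothetical log of decisions; the file name, column names, and the 80% rule-of-thumb threshold are assumptions for illustration, not a standard the panel endorsed.

```python
"""A minimal outcome-based bias check, assuming a hypothetical CSV log of
an algorithm's decisions with columns 'group' (a protected attribute) and
'approved' (1 if the decision was favorable, else 0)."""
import csv
from collections import defaultdict

def approval_rates(path):
    """Return the favorable-outcome rate for each group in the log."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            group = row["group"]
            totals[group] += 1
            approvals[group] += int(row["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values below roughly 0.8 are a common rule-of-thumb red flag."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    rates = approval_rates("decisions.csv")  # hypothetical log file
    print("Approval rates by group:", rates)
    print("Disparate impact ratio:", round(disparate_impact(rates), 2))
```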

In the coming months, we will host follow-up Hub convenings on some of these critical topics, so keep an eye on our events calendar and sign up for our newsletter.
