Algorithms are pieces of code that, much like a recipe, provide a set of instructions to complete a task. They are used by companies like Google and Facebook to determine what search results are relevant and what posts are shown in someone's timeline. They are used to mediate social, political, personal and commercial interactions for billions of people and can act as powerful gatekeepers that are increasingly used to make decisions for us or about us.
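To make the recipe analogy concrete, here is a minimal, purely illustrative sketch of what a feed-ranking "recipe" might look like. The post fields and scoring weights are invented for this example and are not any real platform's formula:

```python
# Illustrative only: a toy feed-ranking "recipe". The post fields and
# weights below are invented, not any real platform's formula.
def rank_posts(posts):
    def score(post):
        # Favor popular, recent posts from close connections.
        return (2.0 * post["likes"]
                + 5.0 * post["friend_closeness"]
                - 0.5 * post["hours_old"])
    return sorted(posts, key=score, reverse=True)

timeline = rank_posts([
    {"likes": 120, "friend_closeness": 0.1, "hours_old": 2},
    {"likes": 3,   "friend_closeness": 0.9, "hours_old": 1},
])
# The first post wins on raw likes; change the weights and a different
# "reality" is shown to the user.
```

Every choice of weight in such a recipe is a judgment about what matters, which is what makes the ethics of these systems contentious.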
The ethical issues raised by all of this were a hot topic at the Global Conference on CyberSpace in The Hague this week.
Algorithms were once followed by people as sets of instructions, and people could choose not to follow them. When algorithms are processed entirely by computers, however, there is little to no room for human intervention, compassion or judgment, and that could pose a problem, panelists noted.
"No algorithm is neutral, every algorithm is based on some kind of unknown ideology," said Kavé Salamatian, Professor of computer Science at the University of Savoie, France.
That means we need to examine the philosophy and ideological premises of the people who write algorithms, not the code itself, he said during a panel discussion at the conference on Thursday.
The key to dealing with algorithms is transparency from companies using them, said Jillian York, policy director for freedom of expression for the Electronic Frontier Foundation (EFF). She pointed out that Twitter, for instance, manipulates its trending topics algorithm to make sure some things don't trend repeatedly.
"That's why you don't see Justin Bieber everyday and I'm sure we're all very grateful for that. But this can also be problematic when you do have a very popular social movement or political movement that is in fact trending in the traditional sense," which happens when there are millions and millions of tweets about that topic, she said. While this might not be censorship it is certainly suppression of a certain topic, she said. It may not necessarily happen for nefarious reasons, but could result in Monday night football becoming a more "popular" event in the eyes of a social media company than a social movement, and this can look essentially like censorship, she said.
York contributed to a report on algorithm ethics prepared for the CyberSpace conference by the Centre for Internet and Human Rights (CIHR). The report signals that the speed of technological developments, as well as corporate and government incentives, has overshadowed the urgently needed discussion of the ethics and accountability of this new decision-making infrastructure. Making sure that such algorithms are in line with human rights standards will be a challenge for companies and governments in the coming years, the report says.
Issues with algorithms might be closer to home than you think. Mattel, for instance, is developing a new Barbie doll, called Hello Barbie, that will be able to hold conversations by processing the dialogue of children around it. This processing is done by an algorithm in the cloud that is used to send a response back. This means that every family that owns such a doll has a microphone hooked up to the Internet listening in on household chatter, said Frank LaRue, Director of European Operations at Robert F. Kennedy Center for Justice and Human Rights.
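The design LaRue describes is a simple capture-and-respond loop between the doll and a remote server. The sketch below shows only that general flow; the endpoint, payload format and field names are hypothetical placeholders, since Mattel has not published its implementation:

```python
# Schematic sketch of the capture-and-respond loop described above.
# The URL, payload shape and field names are hypothetical placeholders;
# Mattel's actual implementation has not been published.
import json
import urllib.request

CLOUD_ENDPOINT = "https://example.invalid/doll/respond"  # placeholder

def listen_and_respond(audio_bytes: bytes) -> str:
    # 1. The doll's microphone captures household audio...
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    # 2. ...which is uploaded to a remote server, where an algorithm
    #    turns the child's speech into a scripted reply...
    with urllib.request.urlopen(request) as response:
        reply = json.load(response)
    # 3. ...and the reply is played back inside the home.
    return reply["spoken_response"]
```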
"Is this ethical or not I would say not," said LaRue. Allowing such a doll into a family, in a way, also allows an algorithm to interfere with our process of thinking and talking. This means that in the most fundamental stages of life, children can be conditioned with what an algorithm determines to be proper use of language and a typical conversational flow, he said. "I think that is not only a serious breach of ethics but also a serious breach of human rights, a violation of privacy and a violation of freedom of expression," he added.
There are a variety of ways algorithms can influence people, speakers at the conference said. Choices made by algorithms could also potentially influence an election by showing certain types of stories about a candidate more often than stories about another. Algorithms used to block terrorism-related content could, at the same time, unintentionally remove news stories about terrorism from people's timelines.
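The over-blocking risk is easy to reproduce with a toy example: a naive keyword filter intended to catch terrorism-related propaganda will also remove legitimate news reports that use the same words. The blocklist and example posts below are invented for illustration:

```python
# Toy illustration of over-blocking: a naive keyword filter meant to
# remove terrorism-related propaganda also drops news coverage.
# The blocklist and example posts are invented for this sketch.
BLOCKED_TERMS = {"terrorist", "attack"}

def passes_filter(post_text: str) -> bool:
    words = {w.strip(".,!?:").lower() for w in post_text.split()}
    return not words & BLOCKED_TERMS

posts = [
    "Breaking news: police foil planned terrorist attack downtown",
    "Lovely spring weather in The Hague today",
]
visible = [p for p in posts if passes_filter(p)]
# Only the weather post survives; the legitimate news report is gone.
```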
The thought that algorithmic changes to Facebook timelines have a big influence on people is not as strange as it might seem. Facebook, for example, last year conducted a controversial psychological experiment on nearly 700,000 users to determine what impact changes in its news feed algorithm had on people's moods. The experiment, digital privacy rights group the Electronic Privacy Information Center said, "purposefully messed with people's minds."
The question, though, is what should be done to prevent problems arising from rapidly developing technologies. Maybe transparency can help, but is that enough? And how far should that transparency go?
Companies could be hesitant to share information about their algorithms, noted Richard Allan, Facebook's director of policy for Europe, the Middle East and Africa, who said the big question for Facebook is whether it is going to feel free to be transparent about its algorithmic choices.
"Is there going to be a space in which we can have these conversations Or is it going to be the case that every time we are going to publish something or engage with outside researchers, we're going to have such a storm in the media and our brand is getting so damaged that our instinct is going to be to close down and shut it down" he asked.
Even so, the general principles on which algorithms are based should be out there, panelists noted. This is already true to some extent for the news feed algorithm and could be extended in the future, Allan said. However, having to go into full, intimate detail about an algorithm could be another matter. If some kind of approval were needed for every change that is made, there would clearly be a conflict with the fast-moving environment tech companies operate in, he said. Companies should be able to make changes to their platforms without going through some kind of regulatory process every time, he stressed.
The issue is not merely academic. On Thursday, for example, French senators amended an economics bill with a provision that would require Google to advertise three competitors on its home page. The bill stops short of requiring the search engine to disclose its algorithms, but it does require economically powerful search engines to provide information about how they classify or index websites.
Regulating algorithms in some manner could nevertheless be an option. The U.S. Federal Trade Commission's Bureau of Consumer Protection recently established an Office of Technology Research and Investigation that will look into the issue of algorithmic transparency. Regulatory action doesn't seem to be on the agenda yet, but the FTC is interested in how algorithms operate, what incentives are behind them, what data they use and how they are structured.
In some areas like finance, where automated high-speed trading systems have a potentially destabilizing effect on financial markets, algorithmic regulation has already been discussed, noted the CIHR report on algorithm ethics. A similar examination of whether to regulate algorithms can be seen in the online search market. While regulators have not yet issued rules about Google's algorithms, investigations into the search engine's dominant market position revolve precisely around this question, the report said.
"The increasing importance of algorithms raises a whole host of ethical quandaries, some of which involve issues of regulatory policies," the authors said. "These questions and more deserve urgent and deep attention by everyone concerned about the future shape of human society."
Loek is Amsterdam Correspondent and covers online privacy, intellectual property, online payment issues as well as EU technology policy and regulation for the IDG News Service. Follow him on Twitter at @loekessers or email tips and comments to loek_essers@idg.com