Smashing glass, slamming cops with blunt objects, climbing walls, keeping a noose ready outside to hang Mike Pence: their anger and violence born and raised on the algorithmic engineering power that Facebook and its Big Tech cousins perfected. Everything that happened was in plain sight. They were preparing for this from the day of the election. Where does this all go, what will change, and for whom? It's my pleasure to welcome Marietje Schaake, international policy director at Stanford University's Cyber Policy Center and international policy fellow at Stanford's Institute for Human-Centered Artificial Intelligence. Marietje has served as a member of the European Parliament for the Dutch liberal-democratic party. She brings a unique perspective simply because of the geographical scope of her work. Marietje, the reactions were quick, mostly predictable. If anyone wanted proof that talk is cheap, we got loads of it. For years they've been hiding behind abstractions: platforms, stuff like "we want to connect the world", whatever. It's time to account for what's broken and what needs to be done to fix it. Over to you. Thank you. I think you've almost said it all, but let me try to add a few thoughts here. I think for a lot of people watching what happened on January 6th in Washington, it suddenly became visible how much impact the spread of hate speech, polarization, and calls for violence can have, not only in parallel universes online or the dark corners of the internet, but that these theories really spill over.
They boil over into the streets, and unfortunately significant numbers of people are susceptible to conspiracy theories, lies, and really the rousing to take matters into their own hands. Then we saw social media companies, probably terrified of the spotlight being shone on them, taking the swift action we have not seen them take for years. Civil rights groups, minority groups, women have all been very concerned about the growing vitriol they saw online and what it might ultimately lead to, but social media companies kept saying, "We don't want to be arbiters of truth; we need to protect freedom of expression." And there they were: Twitter first, then Facebook, reluctantly YouTube, banning President Trump, still then, now Donald Trump, from their platforms, showing that they were capable of taking a position after basically providing him a platform for four years. So the big question now is: what will this lead to? I hope it will not be a distraction from the need for democratic governments to set these frameworks, because I frankly think we cannot expect the same companies that created the problem to solve it for us. We really need more independent oversight, clear rules about where not only free speech is protected, but also public safety, the public interest, public health, and so forth. You offer the idea of middleware as a structural solution. Are you still working on that line of thought? If user data is not flowing, my question would be: then what is flowing, and what is being identified? It's a work in progress, together with my colleague Francis Fukuyama and others who have worked on this, where we started to look at how to mitigate excessive platform power through antitrust mechanisms.
But then we came to the conclusion that specific remedies are needed to address the specific harms to democracy, and one idea is to give the user, so you and I, more agency and choice over what content to see. The idea of middleware is essentially a layer in the middle between yourself as the internet user and the big tech platforms or search engines like Google, and trusted parties or organizations can create middleware so that if you go shopping, you will only find products from a certain area, or maybe only organic products; or if you go online, you only get content from certain kinds of newspapers, or civil society links come up on top. So basically it curates your search results and the presentation of content in a way that better suits your own choices, not the ad-driven, algorithmically amplified propositions, links, and micro-targeting that the platforms come up with today. You've said that algorithmic engineering is the only branch of engineering where there are no limits, no liability, no nothing for injecting it into society, right? How does middleware solve the problem of [inaudible], for instance? We are still very much working through this proposed model from a technological point of view, a policy point of view, an enforcement point of view, and a transparency and accountability perspective.
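To make the middleware concept concrete, here is a minimal sketch of the re-ranking idea Schaake describes. Middleware is a proposal, not a specified API, so everything here (the `Item` shape, the source names, the `rerank` function) is a hypothetical illustration: a user-chosen layer sits between the user and the platform's feed and re-orders it by the user's own policy, demoting the platform's engagement-driven ranking to a tie-breaker.

```python
# Hypothetical sketch of "middleware": a third-party layer the user chooses,
# which re-ranks a platform's feed by the user's own preferences rather than
# the platform's engagement-optimized ordering. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Item:
    source: str              # e.g. a news outlet or civil-society group
    engagement_score: float  # the platform's own ranking signal

def rerank(feed, preferred_sources):
    """Put items from user-trusted sources first, then everything else.

    The platform's engagement score is used only as a tie-breaker within
    each group, so the user's policy, not the ad-driven ranking, decides
    the overall order.
    """
    return sorted(
        feed,
        key=lambda item: (item.source not in preferred_sources,
                          -item.engagement_score),
    )

feed = [
    Item("viral-outrage.example", 0.99),
    Item("public-broadcaster.example", 0.40),
    Item("civil-society.example", 0.55),
]
trusted = {"public-broadcaster.example", "civil-society.example"}
for item in rerank(feed, trusted):
    print(item.source)
```

The design choice this illustrates is the separation of concerns she points to: the platform still hosts and delivers the content, but the ordering policy is supplied by a party the user trusts.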
We think it might be part of the solution, but I personally believe there is no substitute for democratic law-making, checks and balances, and independent oversight, so that the rule of law comes out on top and not any commercially driven model, whether it's big tech platforms or middleware providers or anyone else. I think democracy is too precious to leave to corporates with their profit objectives in mind to, as you said, basically experiment on us. It's become popular at both ends of the political spectrum to dump on Section 230, the provision of the 1996 Communications Decency Act that pretty much created the modern internet, 26 words inside of it. And I get that calls have been growing to moderate content and so on, but it is complex, it is costly, it is inherently political.
Your take? I think there is consensus in the United States to lift the exemption from liability when it comes to the role of platforms, and you could argue that the platforms have lifted their own exemptions already, because they initially said, "We're just neutral passers-through of information," and increasingly they've started to intervene, to redirect searches about the COVID virus so that people would land on trustworthy sources and not in some kind of conspiracy-ridden shop, for example. So we've seen growing amounts of intervention, but again from a commercial incentive, not from a democratic incentive. What I hope is that any review of Section 230 will not be a black-and-white discussion about either all liability or no liability, because I think that's a very unhelpful lens to look through. Then people polarize, and it's very easy for the tech companies to say it will end the internet as we know it. I think it's all about the details, and it will also have to go hand in hand with stronger antitrust rules, better data protection provisions, more transparency into how algorithms work, more accountability, not only liability, but really more comprehensive processes of responsibility at these tech companies, from the individual level to the board and so on. So I believe that in the next, let's say, decade we will see a set of different policy initiatives that hopefully will lay a new puzzle of democratic governance into our digital world.
We know this stuff happens in fits and starts, but what's the low-hanging fruit? What do you think happens first? We now see, at least in the United States and the European Union, a lot of movement in the antitrust space. I think that will continue, but what I hope will happen is that there will also be more horizontal, principles-based provisions which will push for more transparency and access to information. One of the real problems is that whether you're an academic researcher, an average internet user, a journalist, or a parliamentarian anywhere in the world, you have a very difficult time discerning, understanding, comprehending what happens under the hood of these tech companies, even as their power grows and grows and grows. So having the right kind of access to the right kind of information for the right actors is, I think, an integral part of the rule of law. We have to have transparency, and this is a new challenge when algorithms and machine learning processes constantly change. This is different from, let's say, looking at the glass I have here and assessing whether it's been safely made: we can look at it, we can assess it. An algorithm is much harder. It's fluid, it's moving, it's changing, and you would have a different experience online than I do, versus our neighbor, our family, and so on. The individualized nature, on the basis of micro-targeting, for example, makes oversight a new challenge, because we cannot take a moment's observation,
like a screenshot of the experience we get, because everybody has a different outcome. So we need new avenues towards transparency and oversight. I think that is a path that has to be deepened as a condition for then looking at whether, for example, there has been discrimination, or disinformation about health, or whatnot. Based on your lived experience in the EU and your work experience on both sides, at Stanford and back home, what is it from the EU playbook that the U.S. can pretty much immediately take and apply here? A federal data protection law. It's quite remarkable that the United States doesn't have one. The EU has worked towards what is called the General Data Protection Regulation, and a similar federally applicable data protection regulation would help give Americans better rights protections and be clearer about where the limits are in terms of data collection and use. So this is something I think they can take right now from the European experience. What's the mood? Is there generally momentum for this? Yeah, I think the observation is that there's a lot of change happening very quickly, that we're really reaching a tipping point, particularly when it comes to social media and search companies. For me, the question is: if we look throughout the digital world, there are so many areas where corporations are disproportionately powerful. I would like to think about it more systematically than just what we can see because we use search and we use social media, but that's probably the next chapter of this big endeavor of making sure that democracy trumps advertising companies.
Basically. I know I said last question, but I couldn't resist this. I remember reading somewhere that you went to the Facebook headquarters with a friend or a colleague, and you were waiting to talk about work stuff, and you were given this lecture or whatever on the Lean In method. Tell us what happened, and tell us how you view Facebook's role in what happened on January 6th. Well, I have to add another anecdote, because the colleague with whom I went to Facebook to talk about intermediary liability, so basically Section 230 today, has since been appointed prime minister of Estonia. The two of us went there to talk about freedom of expression online and intermediary liability exemptions, and we were actually lectured about Lean In, which was not what we came for. What do I think about the role of Facebook specifically? I mean, it is an enormous platform with a lot of resources and a lot of power, and it has not chosen to use those resources and that power to firmly stand against hate, not only in the United States, and I think this is really important, but globally. If people think that what happened in the United States on January 6th was bad, they should look at what happened in Myanmar around elections and violence, and at other places where violence is very shallowly below the surface, where there's a history of hate speech and ethnic violence. Unfortunately, Facebook has become a platform to mobilize those kinds of groups, and unfortunately it took an eruption of that kind of aggression in the United States for people to wake up now.
The best we can do is to steer this momentum in the right, more democratic direction. Thank you.