Learning from the community
Tell us about your role:
I am currently a Research Assistant and postdoctoral researcher for FUSE at Dublin City University’s National Anti-Bullying Research and Resource Centre. I assist schools in tackling bullying and hate-speech-based bullying and in promoting online safety.
My doctorate has explored how Facebook governs hate speech, and I am deeply interested in the area of platform governance.

Tell us about your career path, and how it led you to your work’s focus:

My background is in Media and Culture Studies. Through the latter, I obtained an internship with the Spanish government, working as a cultural specialist for the Ministry of Foreign Affairs. This internship led me to spend the initial years of my professional career working for the Spanish Development Unit in Guatemala and Sudan and for UNESCO as a visiting researcher. Both my academic and professional practice shaped my critical thinking, and I became increasingly interested in the subjects of class and race. In 2013, I completed the MPhil in Race, Ethnicity and Conflict at Trinity College Dublin, which revolves around race-critical theory and critical social studies. In 2014, Professor Eugenia Siapera of Dublin City University’s School of Communications opened a PhD research position on racism and hate speech in online environments, and I was selected as the PhD candidate to research the conditions of possibility for the creation and circulation of racist material on social media. Inquiring into the notion of hate speech led me to look at the evolution of the mechanisms put in place over time to “control hate,” particularly in the period between the 1940s and the 2010s (from the drafting of the Declaration of Human Rights to the era of social media), by looking into the principles and values that underpin each actor that has regulated hate. Ultimately, I am researching the challenges posed by the rise of social media platforms as both new cultural powers and spaces where “hate speech” regularly occurs.

In your opinion, what are the biggest issues facing social media?

I would say it is hate speech, or more accurately, the conditions of possibility for hate speech to be on the platforms. Hate speech is for the most part framed by Facebook and by politicians as an operational problem. However, through my research, I have observed that the problem is a more profound one, rooted in the values and principles upon which Facebook has built its technology – and that the technology perpetuates. This question needs unpacking. Perhaps this interview has no room for it, but I will give you a simple example. Facebook has two values that justify how users upload content: Voice and Equity.
Voice and Equity are technologically reflected on a Facebook user’s wall in a simple question: “What is on your mind?”
Among many other possibilities, Facebook asks the user: “What is on your mind?” That is the type of question you ask
someone who is lost in thought, who is staring at the ceiling. It does not ask for elaborate thoughts; it is asking one to
speak, simply speak, and the question is supported by two principles: Voice and Equity. Voice means that all individuals
can upload whatever is on their minds, and Equity implies that all users are arithmetically equal, regardless of whether
they are among the oppressed or the oppressors. Every single user is in a position to speak their mind. That is, at the
end of the day, what “Platform for all” means, but it is also where the problems start.
In this particular case, Facebook has invited us to post anything we want, whatever is on our mind, and that potentially
includes hateful content. Yes, we have the Community Standards forbidding specific expressions and automatic
detection to stop them. However, operationally speaking, those are activated once the content is flowing on the platform
– once the word is out. That is only a small example of how Facebook’s principles and values affect how we interact. We
could also talk about how Facebook’s value of Equality determines the policy definition of hate speech and embraces a
post-racial understanding of hate speech.
What “solutions” to improving social media have you seen suggested or implemented that you are excited about?
What do we mean by improving? Do we mean adding more product solutions designed upon the same principles? Or do
we mean altering the conditions of possibility for hateful content to be on the platform?

If it is the first case, I can say I am
excited to see how Facebook will expand its product solutions to “advance racial justice” (see [Mark] Zuckerberg’s post
on June 5th, 2020). It is a new project currently led by Fidji Simo, head of the Facebook app, and Ime Archibong, who is in
charge of Product Experimentation at Facebook.
I look forward to seeing what kind of solutions they propose.
If by improving, we mean altering the conditions of possibility for hateful content on the platform, platforms like
Facebook would have to change enormously, to the extent, I argue, that they would no longer be the platforms we know.
Therefore, it would no longer be an improvement but a change. I am curious to know how building platforms with
different values would affect the way we connect and communicate.
How do we ensure safety, privacy and freedom of expression all at the same time?
When it comes to ensuring safety and freedom of expression, the fact of the matter is that Facebook already does. It is a
technicality, but one I find fascinating.
Tacitly, Facebook makes a distinction between freedom of expression and freedom of information. If we look closely,
the mechanisms and techniques that Facebook has implemented to provide safety do not dictate what users have
to say. Users’ voices are left intact; the mechanisms mostly interfere with how users receive and disseminate information. Take a look:
1. Users’ settings regulate their visibility.
2. The flagging system lets users tell Facebook what they consider should not keep circulating.
3. Automatic detection, for obvious reasons, applies only to content that is already on the platform.
4. Human moderation, whose task is to remove content or limit its visibility.
5. The Oversight Board, whose ultimate task is to decide whether certain content should be back in circulation.
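To make that distinction concrete, here is a minimal sketch, in Python, of how such after-the-fact filtering could work. Everything in it is a hypothetical illustration, not Facebook’s actual system: the Post and Feed types, the flag threshold, and the blocklist merely stand in for the mechanisms listed above. The point it shows is that the post itself is stored without asking permission (expression), while the filtering happens only when a timeline is assembled (information).

```python
from dataclasses import dataclass, field

# Hypothetical types for illustration only; not Facebook's actual system.
@dataclass
class Post:
    author: str
    text: str
    flags: int = 0         # flag reports from other users (mechanism 2)
    removed: bool = False  # set by human moderation or the Oversight Board (4, 5)

@dataclass
class Feed:
    posts: list[Post] = field(default_factory=list)

    def publish(self, post: Post) -> None:
        # Freedom of expression: the post is stored with no prior permission.
        self.posts.append(post)

    def timeline(self, blocked_authors: set[str]) -> list[Post]:
        # Freedom of information: the stored post is untouched, but its
        # circulation is limited when the timeline is assembled.
        FLAG_THRESHOLD = 3  # arbitrary threshold for this sketch
        return [
            p for p in self.posts
            if not p.removed                      # moderation / Oversight Board
            and p.flags < FLAG_THRESHOLD          # flag-triggered suppression
            and p.author not in blocked_authors   # user visibility settings (1)
        ]
```

In this toy model, publish() never fails; consequences come only afterwards, which is exactly the logic of the Zuckerberg quote below.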
Zuckerberg summarized this well in 2017: “Freedom means you do not have to ask permission first, and that by default
you can say what you want. If you break our community standards or the law, then you’re going to face consequences
afterwards. We won’t catch everyone immediately, but we can make it harder to try to interfere.” (Mark Zuckerberg, 21
September 2017).
As such, freedom of expression and safety are ensured. Perhaps we should start talking specifically about freedom of
information. I actually think that, to talk about privacy, we would need to open a different question, but to an extent it is
also linked with circulation: the lower your visibility, the lower the circulation of your content. Although that is not
guaranteed; you would have to rely on your close contacts not to circulate a post whose privacy is important to you.
When we discuss improving social media, we often toggle between the responsibility of platforms, the role of media to
educate the general public, governmental oversight, and the role of citizens in terms of literacy and how they engage
with platforms. In your opinion, what area do you think needs the most improvement?
Governmental oversight. No doubt. I like Suzor’s (2019) idea when he suggests that terms of service should answer to
general law. It would affect community standards, I guess. Furthermore, I would say Facebook would be grateful for it. They
have clarified that they do not want to be the arbiters of discrimination, nor the arbiters of truth. That is at least what
they say publicly, and I have no arguments to prove that what they – Facebook – say is not what they believe.
What makes you optimistic that we, as a society, will be able to improve social media?
What makes me optimistic is that we will keep testing different forms of connecting digitally. I am not sure it has to be on a
platform. I do not see why we cannot own our data and share it with whoever we want. I would love to have a small data
center in my kitchen, right beside my toaster.
Connect with Paloma Viejo @palomaviejo