In the last five years, the public’s view of tech has changed drastically. A man who used social media to spread conspiracy theories became president. CEOs of major tech companies like Facebook, Google, and Twitter testified before Congress about election interference. Tech workers began breaking the silence about sexist and racist company culture.
Criticism of the tech industry was not new — anyone who remembers Microsoft’s 1998 antitrust charges can attest to that. This new wave of outrage, however, took on a different tone. The “techlash,” though rooted in valid concerns, saw industry outsiders fault anyone who worked in tech for the mistakes of their CEOs. It has had a potent emotional impact on techworkers, pushing some toward fruitful introspection and others into an anxious, defensive headspace that undermines any chance of change-making worker solidarity.
Last week, on February 25th, researchers Dr. Norman Makoto Su, Dr. Amanda Lazar, and Dr. Lilly Irani released a paper titled “Critical Affects: Tech Work Emotions Amidst the Techlash,” a study of how the techlash has affected techworkers emotionally. Grounding their work in affect theory and analysis of labor structures, the researchers looked for themes in techworkers’ emotional responses and evaluated what impact company culture has on the expression of these emotions. Following the more narrative, description-based style of anthropologist Kathleen Stewart, the researchers invite readers to engage with techworkers’ emotions and draw their own conclusions as to how solidarity should be built.
“As techworkers, we should look at ourselves, look at how we’re responding to situations, think about where we run into emotional walls, and discover new ways of organizing around the future of technology that we really want,” says co-author Lilly Irani, a former UX Designer at Google.
The researchers found some interesting themes. For one, they note how the “emotional habitus” (a term describing group attitudes, or a shared headspace) of the tech industry emphasizes optimism and actionable, “constructive” critiques rather than emotionally based criticisms. Feelings evoked by the techlash, such as frustration, anger, and sadness, do not fit within this habitus, and thus have few acceptable outlets.
Through a series of interviews, the researchers found that some techworkers yearned for an “alternative habitus” that was more accepting of a wide range of emotions. Others felt silenced because of NDAs, and became frustrated that their feelings weren’t reflected in media coverage. The researchers also draw on the work of Sareeta Amrute in saying that workers’ diverse backgrounds mean the techlash is experienced in a variety of ways. However, they say that centering this diversity of experience can allow techworker emotions to begin puncturing “the habitus encouraged by tech industry practices.”
In an industry where employees are often valued by metrics, qualitative research can be undervalued. Data-driven research in tech often ends up “ignoring edge cases, which are often marginalized populations,” says co-author Norman Makoto Su. “An emphasis on the hard numbers can actually be quite harmful in ignoring certain users.”
Though the authors don’t list solutions, some of the subjects they interviewed are part of a group called the Tech Workers Coalition, which organizes techworkers around various social justice, workers’ rights, and economic inclusion issues. Worker-led and democratically structured, such groups help techworkers feel more connected and less alone against the fury of the techlash. They can also effect real change — the Tech Workers Coalition, for example, is strongly affiliated with unionization efforts throughout the industry.
If the tech industry is to really change, however, companies must reshape their internal culture to accommodate a wider range of their workers’ emotions and experiences. Companies expect employees to tap into their emotions, Irani says, when excitement and passion drive productivity. However, they depend on employees to disregard negative emotions, stunting their ability to vocalize ethics concerns or find solidarity around shared passions.
Such was the case with AI ethicist Timnit Gebru, for example, who was fired after pointing out, in a research paper, that a Google program could generate biased language. Her concerns about the censorship of her research, diversity at the company, and the reasons for her firing have been both publicly and privately gaslit by Google leadership. “Emotions are part of thought and opinion, and are thus a human right,” Irani said of Gebru’s firing. “It’s arbitrary and cruel to demand passion from your workers and then penalize them when those passions do not align with what you’ve decided is your company line.”