I’m a lucky researcher. Every time I say my research is about journalism and AI, people seem to be in awe, or at least genuinely interested in what I’m doing. But I often sense a bit of disappointment when they realize I don’t have any stories up my sleeve about a robot takeover or James Bond-like gadgets solving all of journalism’s problems.
That’s because the deployment of AI within newsrooms is still very much a work in progress. At the moment, news organizations, fact-checking networks and research centers alike are only exploring how best to use machine learning for journalistic purposes. These projects include a computer program that can verify politicians’ claims in near real time and algorithms that can sort through vast amounts of investigative material to dig out hidden gems. But so far, the most popular use of machine learning in journalism is the auto-generation of news stories, also known as automated journalism.
Automated journalism started to make its way into newsrooms a decade ago, first to automate homicide stories at The Los Angeles Times, and was then picked up by other news organizations like The Washington Post, The Associated Press, Le Monde and the BBC to cover predictable events such as sports and election results.
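To make “automated journalism” a bit more concrete: most of these systems are, at their core, template-based pipelines that turn structured data into short news items. The sketch below is a deliberately minimal, hypothetical illustration of that approach; the names, template wording and numbers are invented and are not tied to any of the newsrooms mentioned above.

```python
# Illustrative only: a toy template-based story generator of the kind
# commonly used for predictable, data-driven events such as election results.
# All names and figures below are hypothetical.

ELECTION_TEMPLATE = (
    "{winner} won the {district} race with {winner_share:.1f}% of the vote, "
    "defeating {runner_up} ({runner_up_share:.1f}%). Turnout was {turnout:.1f}%."
)

def generate_story(result: dict) -> str:
    """Fill the template with structured election data."""
    return ELECTION_TEMPLATE.format(**result)

if __name__ == "__main__":
    sample_result = {
        "winner": "Jane Doe",
        "district": "5th district",
        "winner_share": 52.3,
        "runner_up": "John Smith",
        "runner_up_share": 47.7,
        "turnout": 61.4,
    }
    print(generate_story(sample_result))
```

Production systems are of course far more sophisticated, but the basic logic, structured data in, formulaic prose out, is what makes these predictable beats such a natural fit for automation.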
Although automated journalism’s effects on readers are well documented (studies generally show that readers trust automated news but find it a bit boring), little is known about how it affects media practitioners. Prevailing discourses tend to be as much about doomsday scenarios, like the “end of the human journalist,” as about idealistic visions, like the rise of “cybernetic newsrooms.”
My thesis aims to strike a balance between these two extremes: yes, automated journalism brings opportunities, such as reducing journalists’ workload so they can focus on more in-depth forms of journalism like investigative reporting or public affairs, but it could also carry risks if, for instance, journalists exercise less critical thinking or are exposed to algorithmic biases.
To investigate this, I’m using an analytical framework based on Bourdieu’s Field Theory to understand whether automated journalism opens the door to external forces that influence the work of journalists (for instance, are journalists who are relieved of routine work by automated news redeployed to in-depth journalism, or rather to click-bait stories?) or whether, on the contrary, it genuinely leads to journalistic excellence (could journalists and automated news actually work together in a harmonious man-machine marriage?).
Once this framework has equipped me with a list of key dimensions to consider, I plan to conduct fieldwork within newsrooms to document these aspects. In the end, I will be able to provide news organizations with a roadmap for how best to implement automated journalism strategies. But I also believe that my framework can be used to study other applications of machine learning in journalism and even, to a certain extent, in other highly skilled domains such as law and medicine.