Karen Hao, MIT Technology Review:
The tech giants have disproportionate control over the direction of AI research. This has shifted the field as a whole toward ever-bigger data and ever-bigger models, with several consequences. The shift inflates the climate impact of AI advances, locks resource-constrained labs out of the field, and makes for lazier scientific inquiry, as the range of other possible approaches gets ignored.
But much of corporate influence comes down to money and the lack of alternative funding. As I wrote last year in my profile of OpenAI, the lab initially sought to rely only on independent, wealthy donors. The bet proved unsustainable, and four years later, OpenAI signed an investment deal with Microsoft. My hope is we’ll see more governments step into this void to provide non-defense-related funding options for researchers. It won’t be a perfect solution, but it’ll be a start. Governments are beholden to the public, not the bottom line.
The overwhelming attention to bigger and badder models has overshadowed one of the central goals of AI research: to create intelligent machines that don’t just pattern-match but actually understand meaning. While corporate influence is a major contributor to this trend, there are other culprits as well. Research conferences and peer-reviewed publications place a heavy emphasis on achieving “state of the art” results. But the state of the art is often poorly measured, by tests that can be beaten with more data and larger models.
It’s not that large-scale models could never reach common-sense understanding. That’s still an open question. But there are other avenues of research deserving greater investment. Some experts have placed their bets on neurosymbolic AI, which combines deep learning with symbolic knowledge systems. Others are experimenting with more probabilistic techniques that use far less data, inspired by a human child’s ability to learn from very few examples.
In 2021, I hope the field will realign its incentives to prioritize comprehension over prediction. Not only could this lead to more technically robust systems, but the improvements would have major social implications as well.
If algorithms codify the values and perspectives of their creators, a broad cross-section of humanity should be at the table when they are developed. I saw no better evidence of this than at NeurIPS in December 2019. That year, with a record number of women and minority speakers and attendees, I could feel the tenor of the proceedings tangibly shift. There were more talks than ever grappling with AI’s influence on society.
At the time, I lauded the community for its progress. But Google’s treatment of Gebru, one of the few prominent Black women in the industry, showed how far there still is to go. Diversity in numbers is meaningless if those individuals aren’t empowered to bring their lived experience into their work.