These days, when presidents want to make policy, they often do it through their powers to regulate. The Trump administration's move to relax fuel-efficiency standards for automobiles is one of many recent, high-profile examples. But federal agencies cannot just do whatever they want: Legal rules in place since the 1940s require agencies to solicit and consider public input. Unfortunately, those rules have mostly been frozen in the mid-20th century and have not adapted to the new technological environment.
For example, opportunities to learn about and comment on regulation abound online, leading to an explosion of public participation. It is now common for agencies to receive over 1 million public comments on a proposed regulation. But agencies don’t have the person-power to process all that information, resulting in costly delays and undercounted perspectives.
There is also a new threat of “comment-bots” that send in fake comments using real people’s names. The Federal Communications Commission recently received millions of fake comments on its effort to roll back rules that ban discriminatory treatment of web traffic by internet service providers. These fake comments have the potential to overrun the public comment system, severely eroding stakeholder participation in our democracy at a time when agencies have taken on an ever-increasing importance.
But while technology has created these new challenges, it may also hold out some potential solutions. In a recent paper, we explore how new tools in machine learning, text analysis, and artificial intelligence can improve how the government interacts with the public. These new technological advances can help ensure that people’s voices are actually heard in policymaking.
For example, automated text analysis techniques can use markers such as word choice and citation patterns to identify the public comments that can most improve the quality of a regulation, freeing staff to focus their time on the most valuable information. Computational tools like “topic modeling” can also identify larger trends in what people are saying that can be missed by human readers.
We used one particular tool, called “sentiment analysis,” to examine millions of comments received by administrative agencies during the Obama administration. Our sentiment analysis used the words in these comments to determine whether the public felt positively or negatively about a rule. One of our findings was that the more ideologically polarized an agency, the more it tended to elicit negative language in the public comments it received. If nothing else, this finding confirms that text analysis tools are able to pick up on trends that human researchers would likely miss.
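The simplest form of sentiment analysis scores a text by counting words from positive and negative lexicons. The sketch below illustrates that idea only; the tiny word lists are invented for the example and do not reproduce the lexicon or method used in our paper.

```python
# Illustrative lexicon-based sentiment scoring. The word lists are
# hypothetical stand-ins for a full sentiment lexicon.
POSITIVE = {"support", "benefit", "protect", "improve", "good"}
NEGATIVE = {"oppose", "harm", "burden", "costly", "bad"}

def sentiment_score(comment: str) -> int:
    """Return (# positive words) - (# negative words) in the comment."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I support this rule because it will protect consumers"))  # → 2
print(sentiment_score("This costly mandate will harm small businesses"))         # → -2
```

Averaging scores like these across all comments on a rule, and then across all rules at an agency, is one way to compare how much negative language different agencies attract.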
To address the problem of fake comments, agencies will have to get even more sophisticated. Right now, it is relatively easy to spot duplicate or near-duplicate comments generated by a comment-bot, but as advocates and hackers use smarter algorithms, agencies are going to have to deploy better tools to screen out the comment spam. At the very least, captcha-style interfaces may be in order as a first step.
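One standard way to flag the near-duplicates mentioned above is to compare comments by the overlap of their word n-grams (shingles) using Jaccard similarity. This is a minimal sketch of that technique; the similarity threshold and shingle size are illustrative choices, and real bot screening would combine this with many other signals.

```python
# Sketch: flagging near-duplicate comments via Jaccard similarity over
# word 3-grams. Comments and threshold are hypothetical examples.
from itertools import combinations

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

comments = {
    "c1": "Please do not repeal net neutrality protections for consumers",
    "c2": "Please do not repeal net neutrality protections for everyone",
    "c3": "Fuel economy standards should reflect real-world driving data",
}

# Pairs above the threshold are candidate bot-generated duplicates.
for (id_a, text_a), (id_b, text_b) in combinations(comments.items(), 2):
    sim = jaccard(shingles(text_a), shingles(text_b))
    if sim > 0.5:
        print(id_a, id_b, round(sim, 2))
```

Here c1 and c2 differ by a single word and score well above the threshold, while c3 shares no shingles with either. Smarter comment-bots that paraphrase each submission would defeat this exact check, which is why agencies will need increasingly sophisticated screening.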
Ultimately, artificial intelligence could go even further to support public participation in rulemaking. One of the major barriers to participation is the complexity of regulation and the difficulty of reading the hundreds of pages of technical support documents that accompany major new rules. AI tools could help the average citizen focus on the most salient and important parts of a regulation, directing their attention to the details that matter to them. AI can also be used to help people tailor their comments in ways that allow them to best voice their concerns in the technical language that is likely to have the most influence.
A bit further on the horizon, new AI-informed interfaces could facilitate a multi-directional dialogue among consumers, workers, industry, and environmentalists. Right now, there is a one-way stream of information, from the public to the government, and it can be difficult to know if government officials are even listening. There is no way for commenters to easily learn what others are saying. Technology could be used to change that, by creating an open and interactive environment where discourse between government and people with a range of perspectives is possible.
A more democratic and deliberative rulemaking process can facilitate dialogue across society in ways that the current social media echo chamber does not.
Michael A. Livermore is a professor at the University of Virginia School of Law. He focuses on environmental law, regulation, and the use of computational tools to study the law.
Vlad Eidelman is the vice president of Research at FiscalNote, where he leads R&D into advanced methods across the company. Prior to FiscalNote, he worked as a researcher in a number of academic and industry settings, completing his Ph.D. in CS, as an NSF and NDSEG fellow, at the University of Maryland.