Opinion // Open Forum

The AI regulation wave is coming: Industry should ride it

America’s artificial intelligence industry is increasingly likely to face government regulations that could blunt innovation. A perfect storm is brewing on Capitol Hill and in statehouses and city governments around the country. Growing unease over the possible negative impacts of AI on American society, and increased awareness that technological developments are outpacing lawmakers’ ability to craft rules and provide oversight, are likely to lead to well-intentioned but poorly crafted regulations that constrain the research, development, and deployment of AI technologies.

In the absence of government regulations, several leading voices in AI development advocate industry self-regulation. While commendable in its ultimate goal of instilling trust in AI, self-regulation is not a viable long-term solution. Government oversight is necessary and, done right, will make America’s AI endeavors more effective and competitive. The U.S. regulatory principles for AI proposed by the White House on Jan. 7 are intended to do exactly that.

Regulations will be essential to engendering trust in AI systems among researchers in academia and industry, policymakers, and the general public. As with federal regulations for transportation, food, and medicine, such rules should broadly benefit society by protecting consumers and instilling confidence in producers. Regulations also level the playing field: everyone in the field must comply with the same rules, so no single actor gains a competitive advantage from a laxer approach to self-regulation. Regulations can even catalyze further innovation by opening new business areas, as data-sharing rules under open banking did for financial services.

Widely applied AI-related regulations are just a matter of time as non-experts gain an understanding of AI’s reach, and they will soon extend beyond autonomous vehicles and defense-related use cases, where much of the legislative focus has been to date. One area where regulatory scrutiny has gained momentum in just a few months is facial recognition technology, with San Francisco, Oakland, and Somerville, Mass., banning its use by law enforcement. City councilors in Portland, Ore., are considering a total ban, which is likely to prompt other municipalities to follow suit given civil libertarians’ unease over the technology. Critiques of these technologies are not always rooted in accurate technical understanding, however.

AI researchers should therefore work proactively with Congress and with state and local legislators to recommend and shape regulations that are grounded in sound technical principles and that support the broader AI ecosystem by not creating undue hurdles to research and innovation. Early involvement in defining regulations reduces the uncertainty of trying to anticipate what future rules might be. Well-crafted regulations could even promote more efficient AI research and development and boost American competitiveness, for example by making computing resources and data sets available and by updating intellectual property protections.

Data privacy is one area where regulations will be updated to address growing concern over unchecked advances in AI. This offers AI researchers the opportunity to inform lawmakers about the possibilities of combining techniques such as differential privacy, federated learning, practical verification, and encrypted computation. The AI research community’s expertise in these areas will be vital to ensuring that rules are crafted in line with what is technologically feasible, tailored to the desired result, and updated as capabilities improve.

A further advantage of acting now to shape the regulatory environment is that there is little guesswork about who the main players will be. Much AI regulation will occur within existing governmental organizations and be industry-specific: the Department of Transportation for autonomous vehicles, the FDA for telemedicine and IoT medical devices, and the FDIC and Federal Reserve for many banking-related uses of AI.

There are numerous examples of verification and validation techniques already being implemented, largely voluntarily by industry, such as adaptive stress testing, interpretability standards, and audit trails. Helping Congress codify what is already being done is an effective and important step toward building a critical foundation of trust. The challenge for the AI research community and regulators will be to strike a balance between the potential harm of under-regulated AI systems and constraints so onerous that they stifle innovation.

Early and practical action by AI stakeholders will help to ensure that this balance is found at the outset and can be adjusted as AI technologies mature. AI industry engagement on regulations is the smart play.

Martijn Rasser is a senior fellow at the Center for a New American Security, a Washington, D.C., think tank. He previously was an executive with a Silicon Valley-based AI startup.
