One of the biggest problems in regulating AI is agreeing on a definition


In 2017, with advocacy from civil society groups, the New York City Council created a task force to address the city’s growing use of artificial intelligence. But the task force struggled to reach a consensus on the scope of “automated decision systems.” At one hearing, a city agency argued that the task force’s definition was so broad that it would cover simple calculations like formulas in spreadsheets. By the end of its eighteen-month tenure, the task force’s ambitions had narrowed from examining how the city used automated decision systems to simply defining which types of systems should be subject to oversight at all.

New York City is not alone in this struggle. As policymakers around the world try to create guidance and regulation for the use of AI in settings ranging from school admissions and home loan approvals to military weapon targeting systems, they all face the same problem: how hard it is to actually define AI.

Matt O’Shaughnessy

Matt O’Shaughnessy is a Visiting Fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, where he uses his technical background in machine learning to research the geopolitics and global governance of technology.


Subtle differences in definition—as well as overlapping and loaded terminology used by different actors to describe similar practices—have major implications for some of the most important issues facing policymakers. Researchers commonly refer to methods that infer patterns from large data sets as “machine learning,” but the same concept is often labeled “AI” in practice—evoking visions of systems with superhuman abilities rather than the narrow and fallible algorithms they actually are. And some technologies marketed commercially as AI are so straightforward that their own engineers describe them as “classic statistical methods.”

In trying to better define AI for law or regulation, policymakers face two challenging trade-offs: whether to use technical or human-based terminology, and how broad a scope to adopt. But despite the difficulty of these trade-offs, there is often a way for policymakers to craft a definition of AI that best fits the particular application in question.

The first trade-off pits human-based definitions against those grounded in specific technical characteristics. Human-based definitions describe AI by analogy to human intelligence. For example, the US Department of Defense’s AI strategy defines AI as “the ability of machines to perform tasks that normally require human intelligence.” In contrast, capability-based definitions describe AI through specific technical capabilities. One influential definition, for example, describes a “machine-based system” that produces “predictions, recommendations, or decisions.”

Human-based definitions naturally accommodate advances in technology. Take the AI research community, which has less need for legal certainty: vague definitions of AI have attracted funding to broad problems and held together a coherent research community, even as its notions of which approaches are most promising have evolved dramatically. And by not tying themselves to specific technical characteristics, human-based definitions can better focus attention on the sociotechnical contexts in which AI systems operate. Considering this broader context—rather than just specific technical aspects—is essential for regulators to understand how AI systems affect people and societies.

In contrast, the specificity of capability-based definitions better supports legal certainty, an important factor in encouraging innovation and supporting the rule of law. However, these definitions can quickly become outdated if they are not carefully targeted at very specific policy issues.

Consider the rapidly growing field of generative machine learning, which has been used to create AI-generated artwork and artificial but realistic-looking “deepfake” media. The definition of AI used in a recent EU policy draft explicitly includes systems that generate “content” in addition to “predictions, recommendations or decisions.” But the slightly older OECD definition on which it draws mentions only systems that make “predictions, recommendations or decisions,” leaving out content-generating systems. Although less precise, human-based definitions can more easily accommodate these kinds of shifts in technological capabilities and impacts.

A second trade-off centers on whether definitions of AI should cover only complex modern systems or whether they should also include classical algorithms. A key distinguishing feature of modern AI systems, particularly the deep learning techniques that have driven recent advances, is their complexity. At the extreme, recent language models like OpenAI’s GPT-3 have billions of parameters and require millions of dollars’ worth of computation to train. Policymakers must consider the unique risks and vulnerabilities posed by these complex systems, but without, as a result, excluding traditional algorithms and statistical techniques from regulatory focus.

Complex AI systems raise important concerns because of the opaque way they derive outputs from tangles of data. They fail unexpectedly when operating in settings not reflected in their training data—think of autonomous vehicles that stop when they encounter an unfamiliar object on the side of the road, or worse. And the exact logic that complex AI systems use to draw conclusions from these large datasets is convoluted and opaque, often impossible to boil down to straightforward explanations that would allow users to understand their operation and limitations.

Limiting the scope of the definition of AI to cover only these complex systems—for example, by excluding the many kinds of straightforward computation that bogged down New York City’s task force—would make regulation easier to implement and enforce. In fact, many definitions of AI used in policy documents appear to be written specifically to describe complex deep learning systems. But focusing exclusively on the AI systems that inspire science-fiction fears risks overlooking the real harm of blindly applying historical data, whether in complex or classical algorithms.

For example, complex deep learning systems asked to generate images of corporate executives may return mostly white faces, a product of discriminatory historical patterns reflected in the systems’ training data. But traditional algorithms that blindly use historical data to make decisions and predictions can produce outputs just as discriminatory as those of complex systems. In 2020, UK regulators caused a stir after the pandemic-related cancellation of secondary school exams by introducing an algorithm to derive grades from teacher assessments and historical data. The algorithm was based on decades-old statistical methods rather than sophisticated deep learning—yet it presented the same pernicious bias problems often associated with complex algorithms. Students and parents raised concerns about its transparency and rigor, which eventually led to the resignation of high-ranking civil servants. Biased outputs produced by both complex and classical algorithms reinforce deeply embedded inequalities in our society, creating vicious cycles of discrimination.
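To see how easily a classical algorithm can reproduce historical bias, consider the following minimal Python sketch. It is purely illustrative: the schools, grade data, and weighting rule are invented assumptions, not the actual UK grading algorithm, but they show how a plain weighted average of a teacher assessment and a school’s historical results can penalize students from historically lower-scoring schools regardless of individual merit.

```python
# Illustrative sketch only: a hypothetical, simplified analogue of how a
# classical algorithm trained on historical data can reproduce embedded bias.
# The schools, grade data, and weighting rule below are invented for
# illustration and do not reproduce the actual UK grading algorithm.

import statistics

# Hypothetical historical grade distributions by school (0-100 scale).
historical_grades = {
    "school_A": [85, 88, 90, 82, 87],  # historically well-resourced school
    "school_B": [60, 58, 65, 62, 59],  # historically under-resourced school
}

def predicted_grade(school: str, teacher_assessment: int) -> float:
    """Blend a teacher's assessment with the school's historical average.

    A plain weighted average—decades-old statistics, no deep learning—yet
    students at historically lower-scoring schools are pulled down.
    """
    historical_mean = statistics.mean(historical_grades[school])
    return 0.5 * teacher_assessment + 0.5 * historical_mean

# Two students with identical teacher assessments receive different grades
# purely because of their schools' past results.
print(predicted_grade("school_A", 80))  # 80 blended with ~86 -> ~83
print(predicted_grade("school_B", 80))  # 80 blended with ~61 -> ~70
```

No part of this toy calculation would qualify as “complex” AI, yet it produces exactly the kind of biased, hard-to-contest outcome that narrow definitions focused on deep learning would leave outside regulators’ view.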

It is not possible to create a single, universal definition of AI, but with careful thought, policymakers can craft definitions tailored to their particular policy goals.

When precision is not required, policymakers may dispense with a definition altogether. An influential UNESCO recommendation eschews a precise definition in favor of focusing on the effects of AI systems, resulting in a more future-proof instrument that is less likely to need updating as the technology evolves. Less precise conceptions of AI are also workable in common law systems, where definitions have some flexibility to evolve over time. Legislators can support this gradual evolution by including language that describes the goals of AI-related policy. In some settings, liability-based schemes that directly target the harms of concern can likewise avoid the need for a precise definition.

In other cases, legislators can write broad legislation using a human-based definition of AI that covers both classical and complex systems, and then allow more agile regulatory agencies to craft more precise, capability-based definitions. These regulators can engage in nuanced dialogue with regulated parties, adopt regulations more rapidly as technology advances, and narrowly target specific policy issues. Done well, this approach can reduce regulatory compliance costs without sacrificing the ability to evolve with technological advances.

Many emerging regulatory approaches take this strategy, but to be successful they must ensure that they can be easily updated as technology evolves. For example, the EU’s proposed AI Act defines a suite of regulatory tools—codes of conduct, transparency requirements, conformity assessments, and outright bans—and then applies them to specific AI applications based on whether their level of risk is considered “minimal,” “limited,” “high,” or “unacceptable.” If the list of applications within each risk category can be easily updated, this approach preserves both the flexibility of broad legislative definitions of AI and the precision of narrow, capability-based definitions.

To balance the trade-off between limiting attention to complex AI systems and including classical algorithms, regulators should default to a broad scope, narrowing it only when necessary to make implementation feasible or to target vulnerabilities introduced specifically by certain complex algorithms. The harms caused by simple algorithms are not easily detected, disguised as they are behind a veneer of mathematical objectivity, and regulators who ignore classical algorithms risk overlooking major policy issues. Policymakers’ thoughtful attention to AI—starting with the form and scope of its definition—is critical to mitigating AI’s risks while ensuring its benefits are widely distributed.
