Governments around the world are getting ready to regulate Artificial Intelligence to protect the consumer and the human race, but what is actually in the plans proposed to date by the European Union, and will any of it actually work? Isaac Asimov’s Three Laws of Robotics this is not.
As ChatGPT is rapidly becoming the most famous non-human intelligence in the known universe, supplanting Siri and Amazon Alexa in the public consciousness, politicians are doing what they always do: figuring out how they might control this new technology via regulation. Democrat Representative Ted Lieu introduced a non-binding measure last week directing the House of Representatives to consider the issue and propose regulations. Ironically, the measure itself was written by ChatGPT after Lieu provided the software with simple instructions: “You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.” In response, the machine dutifully signed its own death warrant, saying it was the government’s “responsibility to ensure that the development and deployment of AI is done in a way that is safe, ethical, and respects the rights and privacy of all Americans.” Representative Lieu continued to make this point with an op-ed in The New York Times, also written by ChatGPT. “The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society…Failure to do so could lead to a future where the risks of AI far outweigh its benefits.” According to ChatGPT itself, these risks could be “deadly.” “As one of just three members of Congress with a computer science degree, I am enthralled by A.I. and excited about the incredible ways it will continue to advance society. And as a member of Congress, I am freaked out by A.I., specifically A.I. that is left unchecked and unregulated,” Representative Lieu added in his own words.
He is not alone; lawmakers in Massachusetts are considering regulation at the state level. “We are in the beginning of what I think is going to be a transformational technology that is going to have a huge impact on many people’s lives,” explained State Senator Barry Finegold. The government needs to get ahead of it, of course, as he compared the emergence of ChatGPT to Facebook’s rise two decades ago. “We thought it was kind of cute, college kids used it, but we never had any idea how powerful a thing Facebook would become.” Interestingly, he made no mention of how powerful a tool it has become for the government to spy on average Americans and ultimately censor them. Our friends on the other side of the Atlantic are even further ahead of the curve. The European Union has already proposed the Artificial Intelligence Act, which they hail as “the first law on AI by a major regulator anywhere.” The proposed regulatory scheme divides the use of Artificial Intelligence into three risk categories, which appear to make at least some sense at first glance, especially when bans apply to government usage of the software. “First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.” The bill also bans government usage of AI monitoring software that provides “‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement.” There are, however, several carve-outs, “where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks,” meaning this software will be developed and deployed at the discretion of the government for your own protection and is not really a ban at all.
If this usage of Artificial Intelligence is so risky, however, why not simply ban it and keep the genie in the bottle or the camel’s nose out of the tent, whichever analogy you prefer? Instead, their plan is to have it on hand, where I am certain there is no risk of it being misused. The potential for monitoring and control over behavior is a major concern throughout, even when they appear to be describing something out of science fiction. For example, the bill states that the “placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden,” without clarifying what such a system actually does. Assuming regulators in the European Union do not believe we are on the cusp of mind control technology out of John Carpenter’s forgotten classic They Live, what precisely are they referring to here? They do note that “such AI systems deploy subliminal components,” but also that they simply “exploit vulnerabilities of children and people due to their age, physical, or mental incapacities,” which might well apply much more broadly. A more elastic reading of the regulation would almost certainly include advertising platforms, which by their nature are designed to distort behavior and which, depending on your definition, could be said to cause certain harms, especially if we expand that definition to include the propagation of misinformation or what the media likes to call conspiracy theories. Two years ago, if there was an AI application that accurately claimed the coronavirus vaccine did not prevent the spread of the virus, would the EU have banned it because it was manipulating people into doing themselves harm? This seems especially likely when a later portion of the bill also covers systems that could “produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products.” To be sure, here they are referring more specifically to “increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care should be able to safely operate and [perform] their functions in complex environments” and “increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate.” This is, of course, a noble goal, but it is impossible to say who would enforce such a regulation in the wake of the pandemic when misinformation from health officials was rampant. How are they going to ensure systems are operating properly when they are pushing measures so destructive there are calls for a pandemic amnesty?
The second tier of high-risk applications is much harder to define. For example, why is a CV-scanning tool considered high risk in and of itself? These applications are already in pretty broad use today. They review a submitted resume for certain keywords to ensure the applicant has the required skills prior to more formal review by the hiring department (a minimal sketch of how such a tool works follows this paragraph). It is conceivable that the machine scanning process could exclude qualified applicants for any number of reasons, but the alternative is not having all resumes reviewed in the first place. The reason companies are resorting to computerized review is that they receive so many applications that they can’t possibly read them all. Either they get read by a computer, or they end up sitting in an inbox forever. Further, what would government regulation of such a thing even look like? Are they going to demand the inclusion of certain keywords, forcing companies to look at job applications the government considers desirable above and beyond what the company in question would prefer? There is also no shortage of regulations regarding the hiring process in place right now, including laws aimed at increasing diversity. There are, however, numerous concerns that Artificial Intelligence might be an arm of the White Supremacy project. Do we really need another layer of regulation on the software used during the hiring process? They also want to extend this “high risk” provision into educational admissions and almost every aspect of public and private life. “AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood.” “Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living.” Even border security and immigration are included: “AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities.”
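To see just how mundane this “high-risk” category can be, here is a minimal sketch of the kind of keyword screening these tools perform. The skills, weights, threshold, and sample resume are hypothetical, invented for illustration rather than drawn from any real product.

```python
# Hypothetical sketch of a keyword-based resume screener, the kind of
# "CV-scanning tool" the EU proposal labels high-risk. The keywords,
# weights, and pass threshold are invented for illustration only.
import re

REQUIRED_SKILLS = {"python": 2, "sql": 1, "machine learning": 2, "communication": 1}
PASS_THRESHOLD = 3  # hypothetical cutoff for routing a resume to a human reviewer

def score_resume(text: str) -> int:
    """Return a weighted count of required keywords found in the resume text."""
    text = text.lower()
    return sum(weight for skill, weight in REQUIRED_SKILLS.items()
               if re.search(r"\b" + re.escape(skill) + r"\b", text))

def screen(text: str) -> bool:
    """True if the resume scores high enough to reach the hiring department."""
    return score_resume(text) >= PASS_THRESHOLD

# Example: this applicant mentions Python (2) and SQL (1), scoring 3, so they pass.
print(screen("Built Python ETL pipelines and SQL reporting dashboards."))
```

Whether the screen is a few lines of pattern matching like this or a far more elaborate ranking model, the regulatory question remains the same: which keywords, weights, or thresholds would count as acceptable, and who decides?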
In other words, almost any system that screens anyone for any reason, from getting a job to entering a country, would be subject to these regulations. Further, how these regulations would be defined and enforced remains almost entirely unclear. They note only the obvious, “Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity,” without remotely explaining how that would be achieved, who would decide, and what the minimum standard would be. The word “appropriate” appears many times, but what does that mean? Read strictly, you can look at any sufficiently large data set and identify errors. Nothing is perfect. What then? Lest you think I’m exaggerating how broadly these rules will apply, and whether advertising or even basic customer support systems might be ensnared in the new regulation, there is a specific provision that could apply to virtually anything: “Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not.” In other words, the high-risk category essentially means whatever the regulators want it to mean. There are no objective criteria, and the government will ultimately decide in its infinite wisdom.
It’s also important to consider that these AI systems do not produce the same output each and every time, nor is there any way for the developer to know in advance the response the machine might make to any given situation. They are designed to learn and take into account new data, constantly improving and changing their responses. ChatGPT, for example, initially provided me with inaccurate information about Teddy Roosevelt’s first wife, but subsequently corrected itself and apologized, blaming an incorrect data set. Under the EU’s logic, was it in violation of the law one day and not the next? The vagaries inherent in the regulations themselves are also present in the enforcement. With the exception of high-risk applications, the EU imagines “professional pre-market certifiers” and that “the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility,” but how this is to be done is completely undefined. They also envision some type of regular review, “it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes,” but in light of the above, that seems impossible to determine, much less actually achieve. Large companies, of course, will find some way to figure it out, likely involving certain contributions to certain politicians, but even the EU acknowledges that smaller companies will be unable to compete in this new regulatory scheme. “In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication,” which likely means there will be some form of subsidy, adding yet another layer to the complexity and the potential for challenges. There is, of course, a new government board created that is responsible for “a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence.”
A vague regulatory scheme that could conceivably affect technologies in use right now, a limited enforcement mechanism subject to massive amounts of interpretation, subsidies, and a new government advisory body. What could possibly go wrong? Something tells me that we’ve seen this movie before, and it never ends well. This time around, however, the challenge is even more insurmountable: There is likely not a single soul in government anywhere on the planet who understands enough about how these systems work and what they are capable of to craft anything resembling a workable regulatory framework. This is a new frontier entirely, and the government rarely does a good job when attempting to predict the future. The end result is almost certain to be massive overreach that stifles innovation for no reason, and a resulting backlash that reduces investment in this next-generation technological leap. Many of these use-cases are already regulated or covered by existing law, from discrimination in hiring to medical malpractice. The most effective way to proceed is to ensure those laws are enforced, whether a human or an artificial intelligence violates them while working on behalf of a company. The simplest way to achieve these ends would be a one-page law that says companies are liable for Artificial Intelligence acting on their behalf, the same as a human employee. Efficacy, however, is rarely the government’s chief concern. The inherent vagueness gives them near unlimited power to force an industry that hasn’t yet matured to comply with their wishes. The great science fiction writer Isaac Asimov once proposed three laws of robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm,” “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law,” and “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” Perhaps needless to say, they didn’t work out as planned, but at least they had the benefit of clarity. Today, as we embark on an adventure of which Mr. Asimov could only dream, we’re proposing laws that could mean almost anything in practice, except that the government will be in charge, of course, for our own good.