
Governor’s panel to decide what role AI will have in New York

August 14, 2023 Daniel Cody

Artificial intelligence has become a front-burner issue for state legislatures around the country as they decide how to regulate the technology to preserve workers’ jobs and ensure public safety. Assemblymember Clyde Vanel (D–Queens) has a bill heading to Governor Hochul’s desk that would create an expert panel, an AI task force, to study how the state should regulate the technology.

According to its description, the bill would “[create] a temporary state commission to study and investigate how to regulate artificial intelligence, robotics and automation.”

Vanel is a proponent of artificial intelligence, even going so far as to use ChatGPT, the popular large language model service, to draft a housing bill entitled “An act to amend the real property law, in relation to requiring lessors to provide a copy of a lease agreement upon written request of a residential lessee.” With the governor’s signature, the new AI task force will shape the technology’s impact on consumers and industry in New York, as well as on government, law and media organizations.

Notably, AI is being considered for applications in hiring and recruiting decisions, digital advertising and the public sector. When AI is used to streamline the hiring process by choosing the most qualified candidates from a hiring manager’s applicant pool, it is critical that the system be screened and structured fairly, because it decides who gets the job.

According to a July report in The Boston Globe, Rona Wang, an Asian American MIT graduate and tech enthusiast, asked an app called Playground AI to make her portrait appear more “professional.” The software reproduced Wang’s portrait with blue eyes and “features that made me look Caucasian,” she told the Globe.

Wang tweeted her portrait alongside Playground AI’s reproduction, and the post went viral.

“I was like, ‘Wow, does this thing think I should become white to become more professional?’” Wang told the Globe. She added that “it was kind of funny.”

Wang doesn’t believe the technology is “malicious” or “racist,” but underlying biases exist. Playground AI’s CEO, Suhail Doshi, responded to her tweet, explaining, “The models aren’t instructable like that so it’ll pick any generic thing based on the prompt. Unfortunately, they’re not smart enough.”

AI bias varies across models and across social and political dimensions. Researchers from the University of Washington, Carnegie Mellon and Xi’an Jiaotong University found that AI language models responded differently to politically sensitive questions. For example, the results of the study indicate that ChatGPT (a product of OpenAI) tends to reflect a more libertarian worldview, while BERT, a language model developed by Google, tends to be more socially conservative, according to MIT Technology Review.

AI has gone from a niche media sensation to many applicants’ first interviewer in the job search.

The tentative agenda for New York’s AI task force also includes discussion of limiting dangerous uses of AI technology, which can simulate human communication in furtherance of fraud and other criminal activity.

According to Vanel’s bill, the state of New York would review the following through the expert panel:

  • Current law within this state addressing artificial intelligence, robotics and automation;
  • Criminal and civil liability regarding violations of law caused by entities equipped with artificial intelligence, robotics and automation;
  • The impact of artificial intelligence, robotics and automation on employment in this state;
  • The impact of artificial intelligence, robotics and automation on the acquiring and disclosure of confidential information;
  • The potential restrictions on the use of artificial intelligence, robotics and automation in weaponry;
  • The potential impact on the technology industry of any regulatory measures proposed by this study; and
  • Public sector applications of artificial intelligence and cognitive technologies.

Additional legislation has been introduced as well: Bill A5309, pending in the Assembly, addresses algorithmic decision-making by state agencies and how to prevent unfair discrimination and other risks AI might pose for local governments. If enacted, the bill would impose new standards by redefining “unlawful discriminatory practice” via the New York Department of Taxation and Finance.

Other bills, like A7838 and A7106, would examine the impact of AI on the workforce and on political advertising, respectively. If the legislature takes up these bills, the state hopes to establish exactly how AI will affect workers and to set a standard of transparency for the use of AI-generated images and media.

States like Connecticut have already implemented legislation to make sure that the use of AI in government will not result in unlawful discrimination or disparate impact. Starting next February, the Connecticut Office of Policy and Management will monitor the “development, procurement, implementation, utilization and ongoing assessment” of AI technology currently in use by state agencies, according to the National Conference of State Legislatures.

In North Dakota, the state government has defined a person as “[an] individual, organization, government, political subdivision, or government agency or instrumentality,” specifying that the term does not include “environmental elements, artificial intelligence, an animal or an inanimate object,” according to the NCSL.

