OPINION: Congress takes first steps toward regulating artificial intelligence

October 23, 2018 By Ana Santos Rutschman From The Conversation
Photo courtesy of Cagle Cartoons
Some of the best-known examples of artificial intelligence are Siri and Alexa, which listen to human speech, recognize words, perform searches and translate the text results back into speech. But these and other AI technologies raise important issues like personal privacy rights and whether machines can ever make fair decisions. As Congress considers whether to make laws governing how AI systems function in society, a congressional committee has highlighted concerns around the types of AI algorithms that perform specific — if complex — tasks.

Often called “narrow AI,” these devices’ capabilities are distinct from the still-hypothetical general AI machines, whose behavior would be virtually indistinguishable from human activity — more like the “Star Wars” robots R2-D2, BB-8 and C-3PO. Other examples of narrow AI include AlphaGo, a computer program that recently beat a human at the game of Go, and a medical device called OsteoDetect, which uses AI to help doctors identify wrist fractures.

As a teacher and adviser of students researching the regulation of emerging technologies, I view the congressional report as a positive sign of how U.S. policymakers are approaching the unique challenges posed by AI technologies. Before attempting to craft regulations, officials and the public alike need to better understand AI’s effects on individuals and society in general.

Concerns Raised by AI Technology

Based on information gathered in a series of hearings on AI held throughout 2018, the report highlights the fact that the U.S. is not a world leader in AI development. This is part of a broader trend: U.S. funding for scientific research has decreased since the early 2000s. In contrast, countries like China and Russia have boosted their spending on developing AI technologies.

As illustrated by the recent concerns surrounding Russia’s interference in U.S. and European elections, the development of ever more complex technologies raises concerns about the security and privacy of U.S. citizens. AI systems can now be used to access personal information, make surveillance systems more efficient and fly drones. Overall, this gives companies and governments new and more comprehensive tools to monitor and potentially spy on users.

Even though AI development is in its early stages, algorithms can already be easily used to mislead readers, social media users or even the public in general. For instance, algorithms have been programmed to target specific messages to receptive audiences or generate “deepfakes,” videos that can appear to present a person, even a politician, saying or doing something they never actually did.
