By Ben Botkin/Oregon Capital Chronicle
Oregon Attorney General Ellen Rosenblum issued advice for businesses in the state as they adapt to the growing tide of artificial intelligence.
The guidance, crafted by attorneys with the Oregon Department of Justice, informs businesses and individuals about existing laws that apply to AI, even those that have been on the books before artificial intelligence rose in prominence.
Artificial intelligence is a powerful tool, one that can rapidly transcribe or summarize reams of records and handle other electronic tasks. But AI poses potential threats, too, with criminals using it to scam people, for example by cloning a voice to fake that of a kidnapping victim or by creating a phony AI video of a celebrity endorsing a product.
“Artificial Intelligence is already changing the world, from entertainment to government to business,” Rosenblum said in a statement released recently. “But though machine-learning and AI platforms are relatively new, that doesn’t mean that these new tools operate outside existing law.”
The guidance is the latest in a series of actions in Oregon to safeguard against artificial intelligence. This session, Oregon lawmakers passed Senate Bill 1571, which requires campaigns to disclose when they use artificial intelligence to manipulate an image, video or audio, including deepfakes, to sway voters.
In 2023, Gov. Tina Kotek appointed an advisory council to guide the state’s artificial intelligence work and make recommendations.
Last year, Rosenblum and other attorneys general asked Congress to look for ways to protect children from AI, including manipulated images that mimic children’s likenesses or voices.
Rosenblum said existing laws that apply to AI include the state’s Unlawful Trade Practices Act, Consumer Privacy Act and Equality Act. The guidance, however, is just a starting point, and lawmakers will likely pass AI legislation in the years ahead.
Laws in Oregon
Examples of how existing laws apply include:
The Unlawful Trade Practices Act already protects consumers and forbids companies from making misrepresentations to them. This means that if a company uses chatbots, for example, to communicate with its customers, that automated technology needs to provide accurate information that is not misleading.
A business also could violate the state law with an AI-generated video that appears to be a celebrity endorsing a product when the celebrity has not done so, the guidance says.
If a business uses AI to automatically set prices, it still must abide by laws against price gouging, such as during a declared emergency when essential goods like food and lodging are in demand.
Under Oregon consumer privacy laws, consumers can withdraw their consent to the use of their data in AI models. Consumers can also opt out of AI models that make decisions in areas like housing and education.
AI models are essentially large data sets that can make decisions or predict outcomes based on patterns. Some fear that AI models may unfairly or inequitably deny loan applications, housing or other benefits.
Protections remain in place against that, whether it’s done by a machine or a human.
The Oregon Equality Act, for example, ensures equal access to housing and public accommodations, the guidance said. So if a company uses an AI mortgage approval system that consistently denies loans to qualified applicants from certain ethnicities or neighborhoods, whether because of the AI system itself or its reliance on biased data, that could violate the law, the guidance said.
“The regulation of AI is clearly a work in progress,” Rosenblum said. “Thus, this guidance will likely need to be updated, depending on what relevant legislation is passed in the 2025 Oregon legislative session, along with possible future changes in federal laws pertaining to AI. So, this is just a starting point for those either beginning to think about — or even well down the road of — incorporating AI into their Oregon business plans and activities.”
This story first appeared in the Oregon Capital Chronicle.