In summary
A multi-year process that started in late 2021 took its next step toward regulating business use of AI in California. The rules are expected to be influential given the number of AI companies headquartered in the state.
Rules governing how businesses use artificial intelligence are coming into focus in California for the first time.
The California Privacy Protection Agency board on Friday voted 3-2 to advance rules about how businesses use artificial intelligence and collect the personal information of consumers, workers, and students. The vote, which took place in Oakland, continues a process that started in November 2021.
The proposed rules seek to create guidelines for the many areas in which AI and personal data can influence the lives of Californians: job compensation, demotion, and opportunity; housing, insurance, and health care; and student expulsion. For example, under the rules, if an employer wanted to use AI to predict a person’s emotional state or personality during a job interview, the candidate could opt out without fear of discrimination for doing so.
Under the rules advanced on Friday, businesses must notify people before using AI. If people opt out of interacting with an AI model, businesses cannot discriminate against them for that choice. If people agree to use an AI service or tool, businesses must respond to their requests about how their personal information is used to make predictions. The rules would also require employers or third-party contractors to carry out risk assessments to evaluate the performance of the technology.
The proposed rules would affect any company making more than $25 million in annual revenue or processing the personal data of more than 100,000 Californians. AI regulation in California could be disproportionately influential. A Forbes analysis found that 35 of the top 50 AI companies in the world are headquartered in California.
The rulemaking process underway in California is unique because it affects workers and students as well as consumers. And whereas many states leave enforcement of data privacy laws to their attorneys general, California’s data privacy law is enforced by a board with the power to make rules. The draft rules for automated decision-making technology and AI go beyond privacy bills in states like Colorado and Washington, and beyond Europe’s General Data Protection Regulation, by extending privacy protections to full-time employees, independent contractors, and job applicants.
Disclosure is a core part of AI regulation efforts like the privacy protection agency’s draft rules and the AI Act, which European Union lawmakers expect to pass into law in the coming months. In recent years, a lack of disclosure has allowed biased algorithms to automate indignity and discrimination. Algorithms have also made critical decisions about things like housing, health care, or education without consumers’ knowledge or consent. Once both laws go into effect, businesses will have 24 months to comply.
An artificial intelligence loophole?
More than 20 labor unions and digital rights organizations say the latest iteration of the rules, introduced a few days before the meeting, is watered down and introduces loopholes that would let businesses evade accountability when using the technology. Privacy board staff introduced the first version of the draft rules last fall.
Those advocates, including the California Labor Federation and the UC Berkeley Labor Center, said the rules eliminate an opt-out option present in previous versions and change the definition of a key term in a way that could be taken advantage of.
Narrowing the definition of automated decision-making technology to cover only technology that “substantially facilitates human decision making,” the advocates argue, creates an opening for companies to sidestep accountability.
“Companies could easily claim that they do not use automated systems that ‘substantially facilitate’ human decisions,” reads a letter issued by the advocates and shared with CalMatters. “This revision deprives the agency of necessary information about how risk-prone algorithmic tools are being used.”
That language change sounds like a gap in the law, said board member Vinhcent Le, who was part of a subcommittee that worked with privacy protection agency lawyers and staff to develop the first draft of rules more than two years ago.
“If this advances as is, we should focus on making sure this doesn’t become a big loophole,” he said.
California is the first and only place where employees are getting critical information about their data, UC Berkeley Labor Center Director Annette Bernhardt told the board during public comment ahead of the vote, and the recent amendments threaten to deprive workers of agency over algorithmic tools.
In public comment at a December 2023 meeting where the board held its first discussions of the draft rules, business groups argued in favor of an exemption from public records requests and eliminating risk assessment approval by a company board of directors. Business interests like the Bay Area Council — whose members include big AI companies like Amazon, Google and Meta — previously argued that the draft rule definitions of AI and automated decision making were too broad.
Privacy protection agency Executive Director Ashkan Soltani said he’s looking forward to more input from the public, since roughly 90 percent of the feedback thus far has come from business lobbyists.
AI rules moving toward completion
Before Friday’s vote, board member Lydia de la Torre said she wasn’t comfortable moving the rules forward without unanimous approval because they are likely to face litigation from lawyers who are already telling the board the draft represents an overreach.
Board member Alastair Mactaggart said he voted no because he still finds the definition of automated decision-making technology “extraordinarily broad” and because the rules would require every business to carry out risk assessments.
In response to de la Torre’s concern about litigation, board member Jeffrey Worthe said the meaningful vote is not now but when the board ends the public feedback process and votes to begin formal rulemaking.
“It’s time to move this to a wider audience,” he said. “We don’t have to have it all decided now.”
Research coauthored by Bernhardt found that workplace surveillance is on the rise and is often used by small and mid-sized companies that adopt the technology with little knowledge of how it works. She told CalMatters she’s less worried about AI eliminating jobs than about workplace algorithms treating people like machines.
Staff counsel Neelofer Shaikh characterized workers subject to workplace surveillance as particularly vulnerable because “it is much harder to leave your workplace if you are subject to intensive profiling than to just leave a website.”
Work on draft AI regulation to protect the personal privacy of consumers and workers began shortly after the board was formed following the November 2020 passage of the California Privacy Rights Act, which directs the board to protect the personal privacy of California residents.
The board will vote on the rules again in July, but privacy protection agency staff don’t expect a final vote to approve the draft rules for another year.