NAAG Consumer Protection Conference Examines Social Engineering Scams, Connected Devices, and Generative AI

On September 30, the National Association of Attorneys General (NAAG) held its annual Fall Consumer Protection Conference in Washington, DC. The public portion of the conference drew participants from 47 state attorney general offices, various industries, and law firms. Much of the discussion centered on consumer privacy concerns related to products that utilize artificial intelligence (AI) and the applicability of traditional consumer protection laws, such as unfair and deceptive acts and practices (UDAP) statutes, to AI. Panelists also discussed challenges arising from so-called “social engineering scams” and the implications of the recent overruling of Chevron for consumer protection rulemakings.

Social Engineering Scams: Enforcement and Prevention Efforts, and Policy Proposals

Social engineering scams—a type of fraud in which scammers build trust with victims online in order to convince them to send money or provide confidential information—cost Americans billions of dollars each year. The panelists acknowledged that these scams are difficult to thwart because the scammers are frequently based overseas. Representatives from the federal government, big tech, the financial industry, and consumer advocacy groups agreed on the need for collaboration to educate consumers, better equip users of social media and financial tools to identify and slow down scams, and establish a national anti-scam resource center.

AI, Smart Devices, and Data Collection: Challenges & Opportunities

This panel highlighted an overarching challenge in regulating AI: the absence of a single accepted definition and the many broad interpretations of what constitutes AI. Focusing on the framework of “connected cars,” the panelists discussed the myriad consumer privacy and safety concerns that arise from smart devices incorporating AI technology. Connected cars pose significant privacy risks because of the sheer volume of data they can collect and share about drivers, passengers, and pedestrians, which allows a wide range of inferences to be drawn about the consumer. Although existing enforcement models such as UDAP statutes apply, the panel discussed why a multi-industry law regulating AI and data sharing would be beneficial. In the meantime, some in the industry have filled the regulatory void with their own standards, as most major automotive companies have agreed to a consumer safety “code of conduct,” which may be amended to include AI-specific rules. An open dialogue between industry and enforcers will be important for navigating AI advances.

ChatBot-Tom Line: Protecting Consumers in the Age of AI

This discussion of consumer protection issues related to generative AI—specifically chatbots—centered on the need for transparency. Government regulators and attorneys representing industry recommended that companies developing or using AI technologies be transparent about the fact that generative AI is being used and about the technology’s capabilities, or else they may face UDAP-related enforcement. Federal regulators have increasingly taken action against companies using generative AI, such as providers of AI-powered “lawyers” and companies that generate fake AI reviews for businesses. The panelists noted that there are certain applications for which generative AI may never be appropriate (e.g., making biased assumptions about employees) and that companies using AI should ensure it works as claimed before deployment.

How the U.S. Supreme Court’s Rejection of Chevron Deference Affects Federal and State Consumer Protection Agencies

The conference concluded with a broad discussion of the potential impacts of the Supreme Court’s rejection of Chevron deference in Loper Bright Enterprises v. Raimondo. The panelists hypothesized that because regulators may need to show more empirical evidence and analysis to support rulemaking decisions, rulemaking output may shrink given the additional time and expense involved.

Takeaways

The biggest takeaway from this conference was the emphasis on the need for transparency in AI use and on collaboration among government, industry, and advocates. AI is clearly a top priority for federal regulators and the state attorney general community, despite the difficulty of defining what AI is. State and federal regulators are focused on the risks AI poses to consumers, particularly that consumers may not be appropriately warned that AI is being used, or that the capabilities of AI and its impact on consumers and the privacy of their personal information may be misrepresented or undisclosed. Attendees were left with the impression that both government and industry acknowledge that consumer protection in this space may necessitate a “collaborative and iterative” approach across sectors to reform practices that may harm consumers and the privacy and integrity of their data.