Source: Generative Artificial Intelligence and Data Privacy: A Primer / Stable Diffusion and ChatGPT, via CRS. The image was generated by Stable Diffusion, and the text response was generated by ChatGPT.

This blog is part of an occasional series of reviews of research reports that may be of value to IPA members. If you have research you’d like us to review or recommended reading, please reach out to Ben Jackson and let him know!

Cleaning off my desk the other day, I ran across a report on artificial intelligence from the Congressional Research Service that raises interesting questions for the payments industry. “Generative Artificial Intelligence and Data Privacy: A Primer” was published in May 2023, but it remains a worthwhile read today. The report covers what generative AI is, how models get and use data, what happens to the data shared with generative AI, and policy considerations for Congress. The report is not focused on the financial services industry, but it raises important questions that executives should consider when building their AI strategies. Let’s look at each section and the questions it raises.

Defining Generative AI

In defining generative AI, the report notes that the models produce content, including text, images, and videos. It also notes that most of today’s attention focuses on general-purpose models trained on large amounts of data. Financial services companies should consider whether a general-purpose model really fits the jobs they want to assign to AI. Hallucinations, in which generative AI produces a false answer synthesized from vast amounts of data, are a real risk. Additionally, a model trained on too large a data set may draw on information that is not relevant. For example, a U.S. bank that wants to use AI to evaluate credit trends in particular sectors as part of its loan-making process probably wouldn’t want to include data from other continents.
The other side of the coin is that executives should make sure they understand the full capabilities of the tools they use. For example, could they train a chatbot to deliver both text and images in response to customer queries and enhance the customer experience?

Understanding How Generative AI Uses Data

When discussing how generative AI uses data, the report notes that models require “massive amounts of data” and that “some existing LLMs may reveal sensitive or personal information from their training data sets.” This means that financial companies need to know what happens to the information that employees and customers put into their AI tools, especially if the AI is provided by a third party. Will providers use one company’s data to train models that help its competitors? How much data does a model need to retain from each inquiry to stay current? Can the AI provider offer a walled garden for one company, and how will the provider audit it? What disclosures should a company give its customers about how their interactions with AI will be recorded and used?

Considering Policy and Business Implications

The report discusses some considerations for Congress, including imposing notice and disclosure requirements, opt-out requirements, and deletion requirements. It notes that the AI race may favor large companies that have the resources to build and manage models. Companies should consider whether they can, and want to, train an AI model on their in-house data alone. Another consideration: even though AI is the buzzword of the moment, is it necessary at all? Could a good statistical model running on a spreadsheet accomplish the same business goal a company wants to assign to AI?

Remembering That Existing Laws Still Apply

Finally, it’s worth remembering that the laws currently on the books still apply whether a company is using AI or not. Gramm-Leach-Bliley is specifically mentioned in the report.
Elsewhere, there has been talk about making sure that AI models do not run afoul of fair lending laws.

Executives should take the time to read this 8-page report and think about the questions it raises and how those might shape their AI strategies and implementations.

Ben Jackson is the Chief Operating Officer of the Innovative Payments Association, a leading trade association representing companies in payments. With over two decades of industry experience, Ben is dedicated to providing valuable information, advocacy, and support to help members improve financial outcomes for consumers, businesses, and government agencies.