The AI Frontier: Policy, Regulation, and Global Leadership

June 24, 2024 | Podcast: Future-Ready Business



In this episode of Future Ready Business, Art Cavazos and Courtney White welcome Neil Chilson, Head of AI Policy at the Abundance Institute, and Travis Wussow to discuss the rapidly expanding realm of artificial intelligence. They explore policies shaping the future of AI, the necessity for global collaboration, and the way forward for the US in this dynamic and evolving field.

Featured This Episode

Our Hosts:
Art Cavazos
Partner, San Antonio
Twitter “X”: @FinanceLawyer
Follow on LinkedIn »

Courtney White
Research Attorney, Dallas & Houston
Instagram: @courthousecouture
Follow on LinkedIn »

Episode Guests:

Neil Chilson
Abundance Institute, Head of AI Policy
Twitter “X”: @Neil_Chilson
Follow on LinkedIn »

Travis Wussow
Partner, Austin
Follow on LinkedIn »

Episode Transcription

Art Cavazos: Hi, I’m Art Cavazos, a corporate and finance lawyer with Jackson Walker, and this is Future Ready Business. I’m joined today by my co-host, Courtney White and our special guests, Neil Chilson and Travis Wussow. As always, before we jump in, I’d like to remind our listeners that the opinions that are expressed today do not necessarily reflect the views of Jackson Walker, its clients, or any of their respective affiliates. This podcast is for informational and entertainment purposes only, and does not constitute legal advice. So what we usually like to do to get started is go around and let everyone introduce themselves, including Courtney and myself. Just for any new listeners, Neil, you’re at the top right on my screen. So why don’t you go ahead and tell us a little bit about yourself.

Neil Chilson: Sure. I'm Neil Chilson. I am the Head of AI Policy at the Abundance Institute, which is a brand new, mission-driven 501(c)(3) that's focused on creating the regulatory environment and the cultural environment that allow emerging technologies to grow and prosper. I am a computer scientist and a lawyer, and it's great to be here. Thanks for having me.

Art Cavazos: Great, thank you. Courtney?

Courtney White: Hi. My name is Courtney White. I am a research attorney in the Houston office, and by night I have a social media presence as Courthouse Couture, talking about fashion, politics, and the intersection of all things related to law.

Art Cavazos: Excellent. Travis?

Travis Wussow: I'm Travis Wussow. I am a new partner in the Austin office. Well, new, this is actually my second time working at Jackson Walker; I started out my career in the Austin office. I'm currently relocating back from DC, which is where Neil and I met. I've done policy work for the last several years, and I'm excited to get to chat with my old friend Neil and to talk about AI policy here today.

Art Cavazos: Yeah, so let’s start there, because I’d like to hear a little bit more about how y’all met. You know, where y’all are coming from, as far as your backgrounds, and what got you interested in AI in the first place?

Neil Chilson: Yeah, I'm happy to jump in. Travis and I met at Stand Together, or maybe it was at the Charles Koch Institute; it went through a couple of names. I was working on tech policy at Stand Together, building on a legal career in that space. I was in private practice, and then was at the Federal Trade Commission for about four and a half years, including as the Chief Technologist. I've been interested in AI since I was a kid. I did computer science in undergrad and grad school, and really got interested in computers in part because of something called multi-agent models, which are still being worked on today. You can think of them as simulations of very simple programs that have emergent properties. If you know anything about me, I'm really into complex systems and emergent order, so that really hooked me at an early age. This new explosion of AI has been a sort of total accumulation of all the interests that I've had throughout my career, and so it's a really great time to be working in the policy space.
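
For listeners curious what "very simple programs that have emergent properties" means in practice, here is a minimal, hypothetical sketch of a Schelling-style multi-agent model; the rule and parameters are invented for illustration, not anything Neil described building. Each agent follows one trivial local rule, yet clustered neighborhoods emerge from the interactions.

```python
import random

# Minimal Schelling-style model: agents of two types sit on a ring and move
# if fewer than half of their neighbors share their type. No agent asks for
# large clusters, yet clusters emerge from the local rule: emergent order.

SIZE, EMPTY_FRACTION, NEIGHBORHOOD, STEPS = 60, 0.2, 2, 2000

def make_world():
    return [None if random.random() < EMPTY_FRACTION else random.choice("AB")
            for _ in range(SIZE)]

def unhappy(world, i):
    agent = world[i]
    if agent is None:
        return False
    neighbors = [world[(i + d) % SIZE]
                 for d in range(-NEIGHBORHOOD, NEIGHBORHOOD + 1) if d != 0]
    occupied = [n for n in neighbors if n is not None]
    if not occupied:
        return False
    return sum(n == agent for n in occupied) / len(occupied) < 0.5

def step(world):
    # One unhappy agent relocates to a random empty cell each step.
    movers = [i for i in range(SIZE) if unhappy(world, i)]
    empties = [i for i in range(SIZE) if world[i] is None]
    if movers and empties:
        src, dst = random.choice(movers), random.choice(empties)
        world[dst], world[src] = world[src], None

world = make_world()
print("start:", "".join(c or "." for c in world))
for _ in range(STEPS):
    step(world)
print("end:  ", "".join(c or "." for c in world))
```

Run it a few times: the end state almost always shows long runs of A's and B's, even though each agent only asks that half of its neighbors match, which is the kind of emergent order Neil is describing.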

Art Cavazos: How about you, Travis?

Travis Wussow: Yeah, so Neil laid out where he and I connected. In the intervening years, between when I left Jackson Walker and when I came back, I worked on a range of different policy issues, so unlike Neil, I don't have deep expertise in this area, or really in any single issue; I'm more of a government relations hack who has worked on dozens and dozens of different issues over the course of my career. But as Neil said, he and I met at Stand Together, and my interest in AI comes from the fact that my dad is a computer scientist, and so is my brother. We always had computers around when I was a kid, so while I didn't follow that path professionally, unlike my brother, computers and hacking around with them have always been an interest of mine, a part of my life. Neil and I were working on what we would call, within Stand Together, a process of developing our point of view related to online speech. This was almost two years ago now, and we were trying to figure out a complex set of issues; it was hot then and it's still hot now, especially as we're moving into the election cycle. And it was really interesting, because we were in the midst of working on this when ChatGPT, the first public iteration, was released out into the world. It hit our work like a bombshell and caused us to take a big step back and recognize that the world had potentially changed in some really fundamental ways.

And so we, I think, mostly answered the digital speech question, but then we also followed all the different rabbit holes that AI, and specifically large language models, raise for folks who are thinking about policy, and for lawyers, and so on. So, Art, when you and I were talking a couple weeks ago about this podcast, you mentioned that AI was a topic you all wanted to talk about, and I thought, well, there's no better person than Neil to bring into this. Everything I know about this topic, I learned from Neil. So I'm excited about this conversation and excited to dig into it. There's a lot of really interesting stuff.

Neil Chilson: Well, I hope you don’t stop there, Travis, I hope you keep learning.

Travis Wussow: I’m a lifelong learner.

Neil Chilson: There's so much to learn about here, so don't let me be your limiting factor.

Art Cavazos: So, Neil, a really interesting thing you said that I wanted to ask more about is that you've been interested in AI since you were a kid. For me, and I guess probably for most people until recently, AI has been the stuff of science fiction. So I'm curious, for someone who's been interested in it and following it for so long: when do you feel like it made the jump from something relegated to sci-fi novels to a real technology that is on the cusp, and has now really exploded into the mainstream?

Neil Chilson: Well, artificial intelligence has a really long history; it's almost as old as the history of computation. Once we started having machines that could do things that looked like intelligent acts, we had people asking what that would mean and how you might make it work better. The history of computer science really is a history of people getting super excited about something they call artificial intelligence, and then it turning out to not be as flexible or as smart, or to be impressive but not as general, as people might have thought. And then they stop calling it AI, and they just call it computers. When I was a kid, cutting-edge artificial intelligence research involved chess playing. Now chess playing is everywhere; you can download an app, you can play it on the simplest phone, and it's very, very good. Nobody really calls that AI anymore, but they used to. That's happened over and over and over in the artificial intelligence space, with everything from recognizing objects, which everybody's phone now does very well, to speech recognition and text recognition. All of these were cutting-edge AI research at some point, but now we just call them computers. The most recent wave is the AI everybody's talking about now, and what really kicked that off, as Travis mentioned, was the public release of a chatbot interface to a large language model. These types of models have been around for a long time and have gotten much more sophisticated over time, but I don't think people realized that the chat-based interface would hit as big as it did.

It's pretty clear that OpenAI, which released ChatGPT, had no intention of really being a consumer-facing company, but the product became so popular that everybody rushed into that space, and OpenAI basically changed its business model. That's the moment; I think that's why we're talking about this now. I will say there have been similar, if maybe not as big, periods of excitement in the past. There was a big move toward what are called expert systems in the '90s. These are sort of large databases with trees of decision making that you might use to help you diagnose a disease or make architectural choices or something like that. They were much more structured than the current models, and people were really excited about how they would transform everything, at least in certain industries, and that didn't really pan out. So it's not guaranteed that it's going to pan out this time, but it certainly has gotten a lot more attention, a lot more money, a lot more innovation and investment than some of these past waves. There might be an AI winter coming, but right now, summer looks pretty bright.
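
To make the contrast concrete, here is a toy, entirely hypothetical illustration of the expert-system style Neil describes: hand-authored if/then rules chained like a decision tree, rather than statistical weights learned from data. The rules, findings, and advice are invented purely to show the structure, not real guidance of any kind.

```python
# Toy rule-based "expert system": hand-written rules checked from most
# specific to least specific. All content here is made up for illustration.

RULES = [
    ({"fever", "cough", "short_of_breath"}, "possible pneumonia: order chest X-ray"),
    ({"fever", "cough"},                    "likely viral infection: rest and fluids"),
    ({"fever"},                             "nonspecific fever: monitor and recheck"),
]

def diagnose(findings):
    """Return the conclusion of the first rule whose conditions are all
    present in the reported findings."""
    for conditions, conclusion in RULES:
        if conditions <= findings:   # set containment: every condition met
            return conclusion
    return "no rule fired: refer to a human expert"

if __name__ == "__main__":
    print(diagnose({"fever", "cough"}))   # likely viral infection: ...
    print(diagnose({"headache"}))         # no rule fired: ...
```

Every possible path has to be written down by a human expert in advance, which is part of why these systems proved brittle, in contrast with today's models that learn from data rather than hand-coded rules.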

Art Cavazos: Courtney, feel free to jump in with anything, or I can continue.

Courtney White: Yeah, I think where I would like to start, in terms of the policy realm, is: what policies have we had in place to generally handle AI, and where do you think we should go next? I think there's a lot of room for policy development in a lot of areas, but I'd love to hear both of your opinions on the topic.

Neil Chilson: Well, maybe I'll set up a framework, and then Travis, you can help me fill it in. As I already mentioned, AI is very hard to define. There is no consensus scientific definition of AI; even among computer scientists, there's a whole category of definitions that are very different from each other. So it's hard to even know what we're talking about until we define that term, and that makes legislation pretty hard. But if we're talking about generative AI, or just about computers taking on some of the intellectual work a human might do, then artificial intelligence is what's called a general purpose technology, sort of like electricity or electric engines. To me, that means it's really hard to know how to regulate it at a very broad level, but you can look at how you regulate specific applications of that general purpose technology, just like we might regulate the electric engine in a blender quite differently than we regulate the engine in a bomb or a missile of some kind. So, you ask: what kinds of regulatory frameworks do we have in place right now?

Well, we have a lot of regulation around certain uses of technology in healthcare, transportation, energy, and software. There are different regulatory frameworks for each of those types of uses, and so as AI gets deployed into each of those areas, I think what's really important is to look and see whether the current law fits the risk profiles coming from the technology, and if there are gaps, to figure out how to fill them. But almost just as importantly, if there are barriers to using the technology that come from outdated regulatory approaches, we really need to step back and say: how can we make sure that people can deploy this computational intelligence into a new technology, or into this industry, without running into regulatory barriers? So that's how I think of it, very broadly.

Travis Wussow: I agree with what Neil just said. The only thing I would add is that with a new technology like AI, it's tempting to look at something new and surprising and get nervous and anxious about all the different ways it's going to change our lives, and we have this impulse to wrangle it and control it. Maybe we can talk down the line about whether that's even possible at this point. But as Neil said, all of the applications, well, it's probably too much to say all, most of the applications of AI that Neil just laid out, healthcare, intellectual property, and so on, are in spaces that already have a lot of regulation. So when it comes to questions like, can a large language model consume large amounts of copyrighted information in order to train itself and then use that copyrighted information to produce an answer, or to provide content, or whatever.

There's already a framework in copyright law around similar kinds of issues, and so I think it's very unlikely those rules will change quickly, not only because Congress is really only capable of passing spending bills right now, but because that application of AI is in a space that already has some rules of the road. It's going to be very difficult to change those rules until all of these things have been litigated. The New York Times has sued OpenAI, alleging that OpenAI trained its model on copyrighted New York Times articles. These are live questions that are currently being litigated, and so from a public policy standpoint, I think it's very unlikely that Congress or any other legislative body is going to come in and settle the issue. This litigation is going to have to work itself out. And then, as Neil said, I think it will be important for policymakers to look at the rubble, or at the way things have shaken out because of court decisions, and ask: is this where we want to be, and is this actually good policy? But I don't think we're going to be able to do that kind of proactive policymaking until all of these disputes have worked their way out. It's already started, and I think it's only going to continue.

Neil Chilson: I was going to add two other prongs that policy might come through. One is the executive branch and the independent agencies at the federal level. The White House issued a historically long executive order on AI; there are only two executive orders in the history of the United States that are longer, and by the way, both of those contain the entire manual of court-martial procedures for the military, which is why they're so long. So it's a historically unprecedented, lengthy, whole-of-government approach to AI. Much of it essentially says to the agencies, nerd harder: do the same thing you're doing, but think about AI. But there are a bunch of deliverables in that executive order. Most of them are reports, but there are a lot of rulemakings too; I think there are about 135 different deliverables coming out of it. So there is a lot of action, maybe more heat than light, but a lot of action in the administrative state around this issue. And then we have the states. State legislatures are also incredibly active; I think the last count I saw was above 700 different bills that are AI-related. Now, because the term is so vague, a lot of those bills are repurposed regulatory objectives that people already had and are now couching as AI regulation. But there's a lot of action there. Some of the bills are really big and scary and intrusive, and some are much less intrusive, pro forma, good-government sorts of things. So there is a lot of action on those two fronts as well.

Courtney White: I guess my only follow-up, and hopefully it's a quick question: I expected you all to say it's difficult to define AI. I don't know if you saw the congressional hearings where they were questioning TikTok, but it was very clear that Congress really doesn't even understand what TikTok is. So my next question is about the education piece. How are we going to bridge that gap? Because if Congress doesn't understand what's going on, we can only imagine what's going on with the general public, and it's going to be difficult to create sound policy when you don't have elected officials who fully understand the depth and scope of what AI is, that it's multiple things, that it evolves in different arenas. How are we going to get Congress up to speed?

Neil Chilson: It's tough. I actually have a document open that we're supposed to finalize today; it's a one-pager for handing out to congressional offices. The way I've been thinking about it is that the technology is extremely complicated, so I've tried to boil it down to the policy-relevant characteristics of not just the technology, but also the business models being built around it. For example: I think initially everybody thought this was going to be a highly centralizing technology, where there would be only one model that wins, the biggest one with the most processors and data behind it. That looks not true now. There are thousands of models. In fact, I was just talking to somebody who told me that 700 models came out of China alone last year, and there are thousands more coming out of the US and other countries. You don't have to understand all the details to know that that means something very different for the competitive landscape, for example, and for how you might want to regulate. Another characteristic is that the more data you have, it seems, the better the models are. So what would that mean for how you choose to set policy around data?

So I've tried to boil those policy-relevant characteristics down into a set of questions that policymakers should ask when they're presented with a proposed policy, or when they're thinking about how to approach one. Is this really an AI bill, or is it a repurposed other type of bill, and how should I approach it then? What definition of AI is used? Does it sweep in all of computing, or are we just talking about some narrow slice of generative AI? What specific harms is it trying to address, or is it just trying to regulate AI without really identifying the specific harms it wants to fix? Those are the types of questions, I think, that we can push out instead of trying to explain every detail: here are the important questions, here's why they're important, and make sure you're asking them when you're thinking about how to regulate in this space.

Art Cavazos: So when I think about the challenges facing AI right now, there are probably dozens, maybe innumerable challenges out there, but this being an election year, one is at the top of a lot of people's minds. When you think about AI policy and the spread of misinformation, what role do you think AI is going to play, whether in the short term, like this year, or in the long term going forward, and how does AI policy need to create a framework around that?

Neil Chilson: So this really is a hot topic, in part because politicians are so interested in it; it directly affects their jobs. It is the one space where legislation has actively moved. There have been some bills that came out of committee, the Senate Rules Committee, and I testified on some of them. Some are repurposed campaign finance and campaign disclosure bills that talk about deepfakes and other things, but some of them are new. I think people are really spun up because they're bringing their mental model from social media to AI. There was so much concern about the effect of social media on elections in 2016 that a lot of those same organizations basically have the same concern, but now about this new technology. The main concern is generally around misinformation: because you can create professional-looking content very quickly, and lots of variations of it, can people use this technology to somehow flood the zone with deceptive or confusing content? What I like to think about is, where is the real constraint on the flow of deceptive or misleading content? It's not creation. Millions of people lie online every day for free, right? It's not that expensive to create misinformation. It's really the distribution.

And so, if you're a bad actor trying to influence a bunch of people, your time and money are probably better spent building a network of bots on a social media platform than creating some super fancy version of the content. You really need the network, not the content, so much. That's how I've been thinking about it, as sort of economic thinking. And that means the problem is basically the same as it has been: it's really about the social media networks and whether they handle what they call misinformation properly or mishandle it, much more than it is about the AI content generation side. There are some nuances to that, but overall, that's my way of saying I'm not super worried about AI's influence on this.

What I am super worried about is that there's a big chunk of people in the US who already don't really trust our electoral system, and the more they hear about AI and how AI might be used in this space, the more that becomes a narrative we need to figure out how to deal with. One of the things we're doing at the Abundance Institute is an AI and elections tracker that collects all of the stories about times AI or deepfakes or generated content was used in a US election, and we're trying to find the long-tail truth of what happened: following not just the initial headline but the story all the way out. Your listeners will probably have heard of the Biden deepfake voice call in New Hampshire, which it turns out was done by a Democratic operative trying to raise the profile of this issue, or something like that. You have to follow that story all the way out to see whether or not it had an impact. Right now, we've identified about four instances of AI use in US elections, and none of them seem to have had a huge impact on outcomes, but we're going to continue to track that as we get closer to the national election in November. We'll do some updates along the way, and then a report afterwards to talk about what happened. So yeah, we're keeping our eye on it.

Art Cavazos: So what do you think policy should be as far as the ability to use somebody like Biden's name, image, voice, and likeness, apart from what the law is right now? Because, as Travis said at the beginning, it's in flux. There's already a legal framework that's existed for hundreds of years around intellectual property, and right now it's being litigated how that applies to AI. But from a policy, "should" perspective: should people be able to use the voice and image of a Trump or a Biden, and what should the rules around that be? Can it be a parody? Can it be someone from their own party trying to, quote, use it for good, or someone from the other party trying to make them look like a fool or put words in their mouth? Where do you see those types of uses going, and where should they go?

Neil Chilson: Well, I'm pretty close to a free speech maximalist, and I do think the First Amendment's protections are at their absolute peak when we're talking about political speech, especially speech about the highest-profile political figures, and about tools that make it easier to engage in political speech. That has its tradeoffs, obviously, but I think we should want a framework that allows that speech rather than suppresses it, or, even worse, allows campaigns to weaponize some law to take down speech they don't like. I would really worry about a regime where you can get sued if you put up the wrong parody of a political candidate, or where even the threat of a suit might chill people's speech. In those situations, I don't think there's a lot of policy that needs to be done; the First Amendment protects that pretty strongly right now, and to the extent there are challenges to that protection, I would want to be on the side of defending people's rights to use these tools. It gets a lot more complicated when you're talking about private individuals, and especially about things like deepfake pornography, where there can be some real harms to reputation. We have tort law that covers some of this, but the question is whether that's enough, and I don't know that we have answers yet about how those uses are going to shake out over time. We may need some new policy initiatives to help address those types of harms.

I will say most of the large platforms are very worried about that type of use, just the same way that Apple doesn't allow pornographic apps on its phones. A lot of the big generative AI platforms are trying really hard to keep a lid on that type of use, and because of their centralized nature, they can do a pretty decent job of it. In fact, a lot of the same tools that generate the content can be used to identify those types of misuses automatically, in a way that scales along with the software.

Art Cavazos: And what about from a global perspective? Even assuming we get our rules and norms down here in the US through our legal and policy process, there's a whole globe out there of countries that aren't subject to our intellectual property laws or our judicial system. And foreign actors, whether those are state actors or criminal organizations, can use this technology. What can we do to facilitate international collaboration on AI standards and regulations?

Neil Chilson: International collaboration is always very difficult, especially when it's in everybody's interest to defect. I don't think we totally know the business models yet, but I think the best thing we can do is make sure that the US is the leading country in this technology. Similarly, although maybe it's not the best example, a lot of our free speech values were embedded in the social media platforms when they went to other countries, and I would want that same thing here: US values embedded in our generative AI models. The best way to get there is by having them be the best in the world. Right now, we are the leader; it depends on how you measure, but we're doing very well as far as the effectiveness of these models and the amount of investment behind them. But we can't sit back and hope we stay there, and we also need to be aware of the competitive effects of misguided regulation in this space. If most of the world ends up using CCP-approved generative AI, I think that's a loss for the world, and it's certainly a loss for the US, so I think we just need to make sure the US stays ahead in this.

Travis Wussow: Yeah, and I think this is another example of how this issue is new and brings a new dimension and wrinkle to some of these challenges, but it's an old problem. The CCP has been exporting its censorship technology all around the world to countries that want to manage their information ecosystems the way the CCP does. Obviously, as these tools become more sophisticated, their power to constrain people and essentially commit human rights abuses increases, but it's not really a new problem. This is extending and strengthening an issue that already exists.

Courtney White: What may be interesting to discuss, and you both touched on it slightly, is this international competition. Obviously, if we want to stay at the forefront and be the leader in AI, how are we going to do that? One of the ways is training these large language models, and I guess the question is, what is the best way to do that, considering what our copyright laws are and our thinking in the United States around copyright, and considering the issue of diversity and ensuring that these models are not just reflective of one group of people, one school of thought, or even one socioeconomic group? What is the best way for us to go about this, considering all of those factors?

Neil Chilson: It's a really good question. Right now, most of the big models are trained on a collection of different data sets that start at their base with essentially all the publicly accessible content on the internet. There have been some jokes, actually, that maybe the best thing the internet has ever created is going to be these models, because everybody posted all their breakfast pictures and cats online, and that's a huge amount of free data that's basically labeled. That's kind of amazing; you couldn't have created these models in the '60s, when many of these techniques were actually first invented, because you didn't have the data sets. But what that means is that these models are sort of like the average of the internet, which is not representative of normal life. As we were talking about just before the call, when Travis mentioned touching grass: there are a lot of people on the internet who don't touch grass very often, so the content can be a little squirrely. I don't think there's any designed way to solve this, no systematic, top-down way to solve the problem. But what there is, is a very competitive market right now. Like I said, thousands and thousands of models are being generated, and we're learning a lot about how you can create very powerful models for specific uses by really customizing your data set, cleaning it up and making sure it has all the content that you want in there.

I think we'll live in a world, and we already do, where I can take a model that is customized for the uses I want and apply it to very specific things. As we customize models, and as we have a lot of competition across models, we can help deal with some of these problems. I don't know if that's a satisfactory answer; it's not a very planned answer, for sure. But I would be a lot more worried about that problem if there were only a few models, because then everybody would be fighting over how that one model is shaped to match their interests.

Art Cavazos: Yeah. I mean, it definitely seems like a very tricky problem, because on the one hand we have the interests of copyright holders, and a lot of that data has been openly available, but that doesn't necessarily mean you can just take it and do what you want with it without compensating the copyright holders, at least under intellectual property law. As we said, we're going to see in the coming months, or maybe years, depending on how long the litigation takes, how those cases shake out. But for all the reasons you were describing earlier, around international competition and needing to stay ahead and be the best in the world, it feels almost impossible that there's going to be a ruling that this data can't be used, or that it will be so prohibitively expensive that it becomes impossible to scrape all of this data, which has already been scraped. Even if it were possible to put the genie back in the bottle, I just don't see the courts ruling that we have to do that. I don't know if you have any answers, but where do we find that balance of compensating people whose data has been scraped?

Courtney White: I think the low-hanging fruit, just to follow up on Art, are content creators and individuals who are online. There's a lot of data on the social media platforms, and individuals, whether they knew it or not, signed away their lives, me included, when we joined these platforms and talked about all of these different issues. While that information is proprietary to some degree to those social media platforms, there's a lot of data just available everywhere, from the data in our phones to what's on social media websites. So it's an interesting thing to think about.

Neil Chilson: Yeah, one thing that's particularly interesting about this, from an economic point of view, is that any one person's data is basically worthless within that data set. They can pull it out and throw it away, and it doesn't affect the results at all. So the economics are very strange here: to the large language model developer, it's not worth paying for any one person's data, but they need a big collection of it. So when you're talking about the types of social media postings that we all do, there's essentially no negotiating power for getting compensation. When you're talking about more professional content, though, I think there will be organized negotiations. No matter where the law comes out, OpenAI can throw money at the problem, and they might. Them settling with the New York Times, or entering into some sort of deal, doesn't seem impossible to me, even though it's almost certainly a fair use to take a bunch of data and turn it into a bunch of numbers; it's hard to think of a more transformative use than what they do. When you look at the model, it doesn't look anything like the content that trained it. But that doesn't mean they won't settle and come to some agreements with content producers, especially the big and powerful ones who can make a real stink if they don't.

Travis Wussow: Well, it also seems to me, and I'd be curious whether you've thought about this, Neil, that the market may just end up solving this problem. You have a similar dynamic with performing rights for songs. Any time a song is played on the radio, in a restaurant, or in a store, the person who wrote that song is entitled to some compensation, and there are these large performing rights organizations, ASCAP and BMI, that collect those royalties and distribute them. That changes the economics of the collective action problem you just laid out, Neil: my data standing alone is pretty worthless, but collected with everybody else's, it's actually worth something, and it might be worth settling over. So I'm curious what you think about that, but it seems to me that for publishing houses and owners of large bodies of copyrighted material, something like that might emerge that enables LLM companies to license it.

Neil Chilson: Yeah, and we're already seeing companies differentiate on this. Adobe famously has a generative AI function in its tools that is trained on content Adobe licensed, and their promise is essentially that when you generate something with it, you're not infringing anybody's copyright. I think you'll see more things like that. OpenAI, on the other hand, has this sort of liability overhang; there's some big risk there. Art, I'm with you on where I want the courts to go, but when courts apply the law, they don't always look at the practical impacts; in fact, some might argue that's not really their job. In copyright there are some balancing factors, so it probably is part of the job there. But OpenAI has a liability overhang right now, and I think people who are training on other people's content have some risk, even if it's not huge. The one other aspect I think is particularly important is the risk to open source models, which are another big competitor in this space and one that actually opens up a lot of competition for small companies who want to be able to use large trained models that are free to use for certain types of uses. If copyright law goes the wrong way, it will be very hard to develop an open source model, because nobody will be willing to take on that litigation risk for something they're essentially giving away for free.

Art Cavazos: Well, we're going to be running out of time soon, so I wanted to ask one last, open-ended question. If there were one area of AI policy right now that you think is most critical, what is it, and what would you say about it? What do we need to be doing about it, or what are you working on to address it?

Neil Chilson: Well, I just think the coming explosion in healthcare. People aren't really talking about it because so many of these models are chatbot-based, but they're going to have huge benefits. There's already been a lot of research on, and deployment of, this in customer support. I saw a case study: when you're a new driver for DoorDash, it's very confusing; there are a lot of questions and a lot of weird nuances. They dumped all of their material into a chatbot that can help train up new DoorDash drivers very quickly. There's a lot of turnover in that industry, so that has proven, financially, a giant benefit to them. I think there are going to be a lot of use cases like that. But in the healthcare space, we have the possibility of saving hundreds, thousands, millions of people's lives through customized treatments, and right now our FDA approval process is not set up for that. It's set up for generally applicable approaches to medicine, not for customized ones. How do you even do a trial for a customized treatment? There are some going through; there are some AI-driven cancer vaccines being tested right now, but it doesn't really fit well with our system. We can save a lot of people's lives if we get this right, and so I think that's a really important area for us to focus on.

Travis Wussow: Yeah, it's interesting, Neil. A friend of mine is working in the veterinary space, where you don't have the same kind of legal and regulatory framework. It's not the Wild West, there are some standards, but what they can do with the technology, in terms of identifying tumors just by taking a picture of a slice of tissue, is truly unbelievable. I guess what I would say is, I'm with you, Neil: we need to find a way to allow AI into some of these spaces that are heavily regulated, because it will ultimately make things cheaper and better and safer as well.

Neil Chilson: Yeah, we should be getting as good of tech as dogs. That’s what I think.

Art Cavazos: At least as good, hopefully at least.

Travis Wussow: But it may not turn out that way, right? I mean, depending on how the regulatory framework plays out anyway.

Neil Chilson: That’s right.

Art Cavazos: Well, I choose to look on the bright side there, and I think that's a really positive note to end on: the way that AI can actually save people's lives and make people's quality of life better. There are a lot of positive aspects that can come out of this, and I think that's really wonderful.

Neil Chilson: Can I just add one thing? If your listeners have not played around with these, have not tried to use ChatGPT or Claude or something like that to write an email, or to condense notes into a nice memo or something like that, you should try it. You will be blown away, both by how much time it could potentially save you and by how it makes some really stupid mistakes sometimes. But you should try it. Don't let it be a theoretical threat in your head. Get out there and try these things out and see how they work.
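
For readers who want to take Neil's suggestion one step further than the chat window, here is a minimal sketch of that same "condense notes into a memo" exercise done programmatically. It assumes the OpenAI Python SDK (the openai package, version 1 or later) with an API key in the OPENAI_API_KEY environment variable; the model name and the sample notes are placeholders.

```python
from openai import OpenAI  # assumes `pip install openai` (v1+) and OPENAI_API_KEY set

client = OpenAI()

raw_notes = """
- met w/ vendor re: Q3 renewal, pricing up 8%
- legal wants updated indemnification language
- target signature date: Aug 15
"""

response = client.chat.completions.create(
    model="gpt-4o",  # example model name; any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "Condense the user's rough notes into a short, polished memo."},
        {"role": "user", "content": raw_notes},
    ],
)

print(response.choices[0].message.content)
# As Neil warns, read the output before relying on it: these models save
# time, but they also make confident-sounding mistakes.
```

Other providers, such as Anthropic's Claude, expose similar chat-style APIs, though the client library and call details differ.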

Art Cavazos: Yeah, totally agree. Every ten years or so, I feel like there's a new piece of technology that is just so advanced beyond everything else that you think, wow, this feels like a moment in history, just using it. And that's exactly what I felt the first time I used ChatGPT. So it's a really, really cool space.

Courtney White: I agree totally, Art.

Art Cavazos: All right. Well, thank you Neil and Travis, and thank you to our listeners for joining us on this episode of Future Ready Business. We learned a lot today about AI and tech policy and Neil and Travis, I really appreciate it, and I hope you’ll join us again soon. In the meantime, are you all active on social media? Or do you have any places that folks can look for you and follow what you’re up to?

Neil Chilson: As Travis knows, I might be a little too active on social media.

Travis Wussow: One of my old jobs was monitoring his social media usage.

Neil Chilson: @Neil_Chilson on X. I also have a Substack called "Out of Control" at outofcontrol.substack.com, where I write a lot about AI and other tech policy issues. Thanks for having me.

Travis Wussow: I'm a social media monk, but I will say Neil is a great follow on X, and his Substack is great as well. I highly recommend both.

Art Cavazos: Excellent. Courtney?

Courtney White: You can find me online on multiple social media platforms @CourthouseCouture.

Art Cavazos: All right, and if you like the show, please rate and review us wherever you listen to your favorite podcasts, and share Future Ready Business with your friends and colleagues. You can find me primarily on LinkedIn, and also on Twitter and TikTok @FinanceLawyer. As mentioned at the top of the show, the opinions expressed today do not necessarily reflect the views of Jackson Walker, its clients, or their respective affiliates. This podcast is for informational purposes only and does not constitute legal advice. We hope you enjoyed it, and thanks for listening.

Visit JW.com/future-ready-business-podcast for more episodes. Follow Jackson Walker LLP on LinkedIn, Twitter “X”, Facebook, and Instagram.

This podcast is made available by Jackson Walker for informational purposes only, does not constitute legal advice, and is not a substitute for legal advice from qualified counsel. Your use of this podcast does not create an attorney-client relationship between you and Jackson Walker. The facts and results of each case will vary, and no particular result can be guaranteed.