In this episode of FRB, Art Cavazos, William Nilson, Courtney White, and Greg Lambert have an open discussion exploring the intersection of technology, law, and society. Topics include copyright challenges, plagiarism, deepfake dilemmas, influencer industry shifts, and regulatory challenges.
Tune into this thought-provoking discussion on the evolving landscape of AI, the challenges it poses, and the need for robust regulations to safeguard against potential misuse.
Featured This Episode
Our Hosts:
Art Cavazos
William Nilson
Courtney White
Greg Lambert
Episode Transcription
Art Cavazos: Hi, I’m Art Cavazos, a corporate and finance lawyer with Jackson Walker, and this is Future-Ready Business. I’m joined today by my panel of co-hosts, Greg Lambert, Courtney White, and William Nilson. Courtney, Greg, Will, let’s go around and first up, clockwise on my screen, Courtney, why don’t you introduce yourself and tell folks a little bit about yourself?
Courtney White: Hi, my name is Courtney White. I’m a research attorney in the Houston office of Jackson Walker, and I am also on social media known as courthouse couture.
Art Cavazos: Thank you. Will?
William Nilson: Hey, I’m Will Nilson. I’m a real estate attorney with Jackson Walker in the Austin office. I own a suit company, and I have a new tech company I just started this month. I can’t talk about it yet. Next podcast.
Greg Lambert: Teaser.
Art Cavazos: Exciting.
Greg Lambert: And I’m Greg Lambert. I’m the Chief Knowledge Services Officer at Jackson Walker, and I am a podcaster with The Geek in Review podcast where we talk about legal technology and all other innovative stuff in the industry.
Art Cavazos: Excellent, super excited to have all of y’all joining us. The opinions expressed today do not necessarily reflect the views of Jackson Walker, its clients or any of their respective affiliates. This podcast is for informational and entertainment purposes only, and does not constitute legal advice.
For these panel episodes, we’d like to unpack our previous conversation with a guest that we interviewed, and last time that was Bailey Reichelt, space lawyer. Which I feel, you know, needs a very Buck Rogers, Flash Gordon kind of introduction. We don’t have that level of production yet, but we’re getting there. I will have Bailey’s Flash Gordon entrance for next time.
Let’s open up the conversation as far as what stuck out to people. I can go first. The thing that really stuck with me is some examples she gave of space technology that’s having an impact on Earth right now, and I think that was the big takeaway overall. We ended up calling the episode “You’re Already Living in the Space Economy,” and I think that’s a big point for a lot of people: understanding that this isn’t the future anymore. The future is now.
William Nilson: I also thought it was fascinating how agriculture, especially, is starting to change based on space law. I don’t know why I didn’t think of that, because all of our sci-fi movies are about inhabiting new planets like Mars. I think it’s The Martian, which is based on the novel, that is so incredible about how he has to use hydroponics and all these different methods to get real food. And when you think about it, all of those things can influence Earth and our limited resources: how do we use these resources to get food in the best way? I have other thoughts too, but that was something that really stuck out to me.
Art Cavazos: Well, I hadn’t even thought about the satellite side, like weaponized satellites. What do you do when people start putting lasers and cutting apparatuses on their satellites?
Greg Lambert: Yeah, well, we’ll just pretend that there are no lasers on any of the satellites. Other than communication lasers, right? Right?
William Nilson: I pretend every day.
Greg Lambert: Well, I think one of the things that was really interesting, and I think this is true, is that if all the satellites were to fall out of orbit tomorrow, we would be back in a dark age. We have so much of our technology, things we don’t even consider, that is linked to the satellite structure surrounding Earth. One example of that, I don’t know if you saw it, is where the Japanese moon lander came in and landed, but was upside down. One of the things that makes it extremely hard to land objects on the moon is that there is no satellite structure with a GPS system surrounding the moon like there is around Earth. So you have to calculate all of these things and do things remotely, right as you’re doing them. They had an instance where apparently one of the thrusters failed to deploy, and it turned the satellite, or rather the lander, upside down. So it’s just one example of things we take for granted on Earth that you just can’t do once you leave the confines of this lovely blue marble.
Art Cavazos: So, you’re saying I can’t have Morgan Freeman direct me, where to go?
Greg Lambert: You could, it would sound good.
Art Cavazos: Just not yet.
Greg Lambert: But it just wouldn’t necessarily work.
William Nilson: Maybe we should get a pilot on the pod. You know, it’s like when we’re backing up in our cars and there’s no backup camera, and we have to start looking around: where am I going? It’s the same kind of thing with manual control, there’s no additional assistance. We’re becoming very reliant on that.
Art Cavazos: And another interesting thing that she brought up, and Will, I think you also reacted to, was the nuclear energy technology that’s being developed for space, which is also making progress that’s useful here on Earth. And funny enough, just coincidentally, I went to a presentation where they were talking about modular nuclear reactors, which Bailey mentioned, and how that really is a cutting-edge technology that’s probably going to change the energy landscape dramatically. It’s just a wild idea. If you haven’t heard of it, like I hadn’t until going into that presentation, it’s basically the ability to have a nuclear reactor that you can put on a flatbed truck and transport pretty much anywhere in the world, set it up, and suddenly you have dozens, if not hundreds, of megawatts of energy available wherever you set it up. It can basically be placed in a ditch in the ground, like a 40-foot by 15-foot ditch dug into the ground; you put the modular nuclear reactor in there, put some concrete over it, and you’re good to go. Which is just worlds different from the nuclear reactors of old, like you would see in a Simpsons cartoon, where all the infrastructure that went into building them was incredibly expensive. And now, to replace that with basically a hole in the ground and something you can ship on a flatbed truck is incredible.
William Nilson: Yeah, modular nuclear energy is especially exciting, you know, as soon as we get past the social questions of fear. Basically, fear over nuclear energy that has been significantly disproven in terms of what people are afraid of. We’re in a funny environment with that, I think, and not to get political, but I do think it’s interesting that we seem to be trending towards more familiarity with nuclear energy and more usage of it. Ironically, as soon as we’ve adopted nuclear energy, we’ll probably have a completely different set of technology ready, or getting ready, for energy, something based on quantum mechanics or something like that, that I don’t understand.
Art Cavazos: Well, yeah, on that point of safety, I did learn in that presentation that there are something like 50 research reactors on college campuses across the United States. Texas A&M, my alma mater, apparently has two research reactors on campus. And apparently there is a research nuclear reactor right next door to a girls’ dormitory at MIT. I don’t know if the girls in that dormitory appreciate that being used as an example of just how safe this is, kind of a marketing campaign. But apparently, it’s that safe.
Greg Lambert: If you can have cheap, flexible energy, there’s just so much you can do with it, whether it’s desalination of water or anything else. And mostly it’s that flexibility, because typically where people live is not where you’re able to generate high amounts of energy; you have to transmit it in. I went on a tour of north central Texas up near Mexia and went past the big coal power plant up there, which was huge. I had no idea how big the facility was, and how huge those cranes are that scoop the coal out of the pits where they store it. But if you can get that down to where it’s the size of this room or smaller, which is not a big room, the capabilities are just immense.
Courtney White: And a lot of this makes me think of something that may be a slight tangent. I don’t know if you all have seen Glass Onion: A Knives Out Mystery?
William Nilson: Oh, yeah.
Courtney White: Yeah.
Art Cavazos: I have not seen it yet. I wanted to. It’s the sequel, right?
Courtney White: Yeah. And the premise is that they’re brought to this house, and there’s obviously a mystery, but they’re using an alternative fuel source. So it kind of made me think of the nuclear energy and, again, the idea that whenever we’re looking at emerging technology, you always have to think about the ethical component, because in the wrong hands it could go bad very quickly. That movie was really funny and cool, but there was some level of discussion about the good and bad of this amazing alternative fuel source.
Art Cavazos: And so, another thing we talked about was space not just being a playground for billionaires. It is a playground for billionaires, but not only that. There are a lot of startup companies in this space, which I think is really cool. And coincidentally, I now have a client with a space startup company, since we talked with Bailey. That was before the episode came out, so I don’t think it was related at all. But it’s, I think, an indicator of where the marketplace is going, how this really is a hot topic, and how many companies are getting into this space, for lack of a better word.
Greg Lambert: And one last thing on Bailey’s episode, at least for me: I’m actually going to be talking with her co-founder at Aegis, their space law firm, about their billing model, where they’re a subscription-based law firm, which is unusual. So I’m going to be talking with Jack Sheldon. And then I’m also going to be talking with Matthew Kerbis out of Chicago, who is a solo who does the same thing, subscription-based. So it’s nice when you can have these interviews and then get four or five other ideas to build on top of that.
William Nilson: Is it $2.99 a month?
Greg Lambert: Actually, Kerbis starts off at $19.99 a month.
William Nilson: $20? Are we talking $2000?
Greg Lambert: And $50 a page for documents.
William Nilson: Okay.
Greg Lambert: So, really interesting business model.
Art Cavazos: That is interesting. Do you want to tell us a little more about it Greg, or is that for a topic for another podcast?
Greg Lambert: Um, well, I can talk just a bit about it, because we haven’t recorded it yet. But yeah, we’re gonna meet and talk about how, for 50 years, most lawyers have billed by their time. We call it the billable hour. It’s a great model for certain pieces of the legal industry, but especially since 2008, since the economic downturn, which someone pointed out was now 16 years ago (it’s like, holy smokes, how did that happen?), there’s been a big push for alternative business models for legal, whether solo, small firm, mid-sized, or even big law firms, where not everything is based on the value of time, but rather some of it is based on the value of the outputs. So we’ll be talking more about that: how it works for them, how they don’t become overwhelmed, how they remain profitable. Because you can make a lot of money, but you can also spend a lot of money, and then all of a sudden it’s a wash. So I think it’s going to be exciting, and I’ll put that on The Geek in Review podcast in a few weeks.
Art Cavazos: Awesome. Thank you, Greg, for the plug, I appreciate it.
Greg Lambert: Thank you!
Art Cavazos: So, let’s go on now to a new segment, which we’re calling, for now, Future Ready Beats. I like that name; maybe it’s gonna stick. It’s just an opportunity to go around the panel and hear what’s on everyone’s minds in the context of Future-Ready Business. I’ll go ahead and go first and offer up a topic. I’m not sure if everybody saw, but in that very sleepy period between Christmas and New Year’s, when kind of nothing else was going on, The New York Times filed a lawsuit against OpenAI, and so did a consortium of authors, making similar allegations that OpenAI and its technology, most famously ChatGPT and the LLMs it’s based on, violated copyright law by scraping New York Times articles to be used in training. But then, importantly, they also said that not only was that input a violation of copyright law, but the output that ChatGPT gives can also regurgitate entire sections, or entire articles, from The New York Times. OpenAI responded and said that the way The New York Times was able to get entire sections or entire articles was not intended use. There’s something called prompt hacking now, which is where, when you type into ChatGPT and ask it questions, you kind of hack those prompts to get it to perform in ways it wasn’t intended to. And I think that’s essentially what they’re saying The New York Times had to do to get it to regurgitate articles verbatim: basically prompt hacking, and it’s not intended for that. But the fact that it’s possible, especially now that everyone knows it’s possible, does say something about that technology. What do y’all think about this? Any thoughts?
William Nilson: It’s kind of an age-old question. We’ve all gone through a lot of school, and I remember, all throughout going to school, college, high school especially, talking about plagiarism and what that is. And it’s probably the least definable thing in academics I can think of, because it’s so fraught with miserable definition. It’s almost impossible to say what’s plagiarism; it’s almost a qualifying word. Somebody says it’s plagiarism at a certain point; it’s a judgment call. And at that point, that person’s academic record goes to jail, so to speak. Kiss it goodbye. The issue with plagiarism, to me, has always been that learning is some form of unqualified plagiarism, because we try to learn things and develop our own ideas. If we’re not allowed to develop our own ideas out of what we’ve learned and regurgitate something that’s close to those things, but slightly different, then I’m not sure we should be learning anything at all, and obviously nobody wants that. So we’re at another one of these crossing paths, and the music industry was one of the earlier determining points on this. There are a number of cases that discussed this in copyright law, and I’m no copyright law expert, but I took music law at FSU, and there’s the Ice Ice Baby lawsuit with, “Doom, doom, doom, doom, doom, doom, doom, doom,” versus, “dim, dim, dim, dim, dim, dim, dim,” and that was the only difference.
Art Cavazos: Wasn’t there like a chime or something.
Greg Lambert: Yeah.
William Nilson: There’s also a chime. But it came down to the motif, which was whether that additional note was there, just that additional syncopated note, and whether that made it derivative or not. Obviously, hip hop started with derivative work, what I would say is certainly acceptable derivative work, morally speaking. It came from sampling, and sampling was what caused a lot of copyright cases. And yet sampling and derivative work like that is considered its own art form, which it should be. So, just yesterday I thought about a mash-up between the songs Baby Got Back and The Devil Went Down to Georgia, and I’m still considering it, but I know I’m gonna get a copyright strike on my YouTube channel if I put it up. So maybe it’ll be just for me. If I have a bias, because of my music background, I tend to lean in favor of absorbing what you have and creating something out of it, whether it’s by this tool or that tool, and AI seems to be another tool that’s just particularly good at it. I just wouldn’t want them to regurgitate things that other people wrote and call it their own, just out of hand. It has to be generative, truly generative, at some point for me to be morally okay with it.
Courtney White: And I think that’s where, over time, we’re going to have to see some parameters created as we see different instances of what AI can do. Right now, we’re still learning everything AI can reproduce, and are we saying the things AI is reproducing are original works, or are they derivative? Will already kind of made the point that basically everything is copyright; he turned copyright into something very philosophical. But what I’m kind of looking for is a framework for understanding, first, what AI can do. And then I think in every industry the definition of plagiarism and copying will be different. In the artistic realm it may be one thing, and in the education realm it may be something different. Someone just told me about an app that students are using for math, I can’t remember the name. You can essentially go in there and get the app to recreate any math problem, and it will do it for you. First of all, this can be incredibly problematic for learning purposes. However, the app does also allow you to check your work, and it will explain the work. So that’s actually very, very helpful. However, it’s not helpful if you just went and got the answer, copied all the answers, and did your homework that way.
So, is that okay? Is that not okay? That’s an example from education. People in fashion may feel completely differently from the way someone in the music industry feels, and I just feel like right now we have no regulations, which may be great for money-making purposes. It’s kind of like the wild, wild west in determining what people can do with AI and what’s acceptable and what’s not, and I’m just looking for the parameters to start coming. I think courts are doing it in terms of what people are submitting, but I’m not really seeing decisions that help us understand what is okay and what’s not. I just think this fair use case is one example of an area where we’re going to have a problem with AI.
William Nilson: Yeah, Courtney, you’re raising a really good point about intent. I think a lot of people, especially with copyright, rely on intent. When they commit copyright infringement, they rely on intent to say, I’m not trying to make money on this. You see that all the time on YouTube, you see it with content creators: “I’m not trying to make money on this,” even though they may be, but the intent is not to sell what they’re providing. Obviously, currency is not the only determining factor in whether it’s a commercial enterprise or whether they’re getting value, but intent seems to be something people focus on with copyright. An interesting part about OpenAI is, when it comes to the honesty factor, they’ve been pretty, so to speak, open about it: “Yeah, we’re pulling all the data we can get, and we’re training our models on that.” It’s not as if it’s been a secret that this is happening. So they have a kind of social argument that honesty may be on their side here, and that’s perhaps going to isolate the issue to more of a legal framework that applies regardless of intent. And those kinds of rules can be a little draconian, we’ve found; if intent and judgment are not included in our analysis, we run into rules that are far too regimented and far too strict, and that tends to be what early rules look like. So we’ll probably see some of that, and then it’ll be lessened over time into what works for society.
Greg Lambert: Yeah, let me tie back to something we mentioned during our space law conversation: when resources are cheap, abundant, and available, the ingenuity, the technology, the advancements can be massive and happen very quickly. One of the problems, as a panel member said at a panel I was on back in December, is that there’s only one Internet, and they’ve already copied it, so where are they going to get all the other information they need? If we just make it so that any company, or anyone, can go out and copy everything and then say, yeah, but what I’m doing is this huge transformation, because I’m not actually turning it into words, I’m turning it into a series of numbers that are vectorized, and from that information I can generate new information, then I think the opportunities are great. But at the same time, you’ve got an industry being built on the backs of previous industries, whether it’s journalism or writing. Anything that has been written before is apparently, I think to a lot of companies, fair game, because we’re not taking quotes from it, we’re just taking it as an abstract and creating this whole new product at the end of the day.
Now, whether or not that’s going to hold water, we don’t know. One thing I did want to say, and I’ll steal this from the guys at Hard Fork: copyright, whether a grant or a violation, is not about how hard it was or how hard you worked. So yeah, it may have been very hard to create a structured prompt that got back a quote from The New York Times, but that quote is in there. And I don’t want to get into the details of whether it’s in the large language model itself or in the structure of the retrieval-augmented generation process they have. That’s actually how legal research companies like Westlaw and Lexis and Bloomberg set it up: when it pulls back the information, it can then go back and find that original source and point you to it. And if these companies are using that, then I think they have a harder argument that “we’re creating something completely new” if you can extract the original content out of it. So it’s gonna be interesting.
Art Cavazos: Yeah, yeah. And just to play devil’s advocate on that point, you know, to OpenAI’s point, if you need to kind of go through that much gymnastics to get it to do that, you know, at what point is it on the user? If I use a copy machine to make verbatim copies of a book, and then start selling it for a much lower price or something? Are they going to sue the copy machine maker? You know, or are they going to sue me, the user, for using that technology in a way that violates copyright?
Greg Lambert: The only argument I’d have to that is that all of these photocopiers are owned by the company that’s providing the information. So that’s what I would argue as the counterpoint: these people aren’t just taking their own thing, they’re using that specific tool to do this. And again, we’ve argued this before, whether a platform is liable for the way people use their platform, so maybe they fall under the Section 230 exemption.
Art Cavazos: Right, yeah, it brings up cases that have come before, like Napster and Google Books. Those are two good examples because they went in different directions, at least as far as the rulings. With Napster, the court held that there was not a fair use and basically shut them down. With Google Books, the court found that there was a fair use, and Google Books was allowed to continue operating. There are various factors the courts think about when they’re talking about fair use; I won’t get into all of them here. But some of the ones those cases turned on: with Google Books, if you notice, when you use it today, you can’t see the entire book. You can’t just read the entire book on Google Books; you can search it and see certain excerpts. That’s because the courts want the amount of the copyrighted work to be limited; you can’t just reproduce the entire copyrighted work. So there are restrictions on what Google Books can do. Whereas with Napster, the whole point was to download the entire song for free and get exactly the copyrighted work. And that was already having a detrimental effect on the music industry, because why would you pay for something when you can get the exact same thing for free, and probably more conveniently?
Greg Lambert: It was, it was much more convenient.
William Nilson: It cut revenue in half in one year, I think it was 14.127.
Courtney White: And I actually was a freshman in college when they had a version of Napster that was internal to students. We didn’t call it Napster, but it was like a Texas A&M shared space. And I was like, oh my gosh, this is great. You could download all kinds of stuff on there. I mean, it also could have been nefarious, but A&M shut that down really quick. Because I was downloading entire albums, and I was like, oh my gosh, I don’t have to pay for anything anymore. And then they were like, no, we can’t continue to let students do that.
Greg Lambert: It was great while it lasted.
Art Cavazos: Yeah, it’ll be interesting to see where this goes, but I really have the feeling it’s gonna end up being fair use. I think on the input side it is probably unsettling for all of us that the technology is now there to scrape the entire internet, and that includes content we’re putting out right now, copyrighted articles, and anything else that’s out there. But just because it’s unsettling doesn’t mean it violates the law. And as far as the output, I kind of believe OpenAI that it really isn’t intended for that use, even though it’s possible. There are probably ways they can go in and tweak it and try to restrict that even more, and I bet they’ll be willing to do that, because my sense is that’s not really the purpose of this technology, and that if you’re using it the way it’s intended to be used, you’re not going to be getting copyrighted material verbatim in the output. So, we’ll see where it goes, but I think we can transition there.
Greg Lambert: Well, I have a topic for our Future Ready Beats segment. And Future Ready Beats, Will, I think that was your idea. So we’ll see. We’ll see if it sticks.
William Nilson: Give it time.
Greg Lambert: So, I was on a panel last Friday with a reporter from American Lawyer Media, or ALM, and she had written a great article on a litigator from Greenberg Traurig, which is a massively large firm. I think they have something like 5,000 attorneys across the globe, based out of wonderful Miami, so I’ve always thought it’d be a great place to headquarter. But one of their litigators, Lori Cohen, woke up one morning in 2022 and could not speak anymore. She just completely lost her voice. Not laryngitis; she could not speak at all, and has not spoken since. And she was known as kind of an Atticus Finch, a passionate litigator who had tried something like 58 major complex litigation matters in her career. So this is a huge blow to her, because litigators really rely on their voice in court to make things happen, to plead their clients’ cases.
Stephanie Wilkins, the writer of the article, talked with her and interviewed her at length about what she had done since then. And this is one of those AI-for-good stories. She had a long-time litigation assistant who helped with trial matters, making sure the technology is set up for trials, and he started playing around with ElevenLabs, which is an AI tool that can do text-to-voice but can also take your own voice and clone it. So they found a lot of old recordings of Lori Cohen and started feeding them into the AI, finding a way to clone her voice, not just the sound of it, but also the dynamics around it: when things get strong, when things are more subtle. They played around with this, and I believe it was in a Rhode Island court that she actually appeared and took her new AI-generated voice with her. She named it Lola. So everyone can start singing either The Kinks song or the Barry Manilow song with a Lola, and I’m sure there are other Lolas out there. But she had the arguments that she could prepare for the court already set up.
And Lola, her voice, presented these arguments in front of the court. It was interesting, because the judge told her he was going to be very flexible in allowing this, but he was not going to give her any slack; she was still going to be held to the same standards she would be if she were talking directly. It wasn’t just generated text that was speaking; she was making those arguments. So if he wanted to cut her off, she would have to stop. I think they had the technology set up in the courtroom itself to do that, but if he stopped her, she had to stop the playback and pick up a second piece of software. And if she had to answer, she would have to type it in. Unfortunately, that wasn’t the Lola voice, because the Lola voice has to be prepared in advance, but she was using a second voice that would take the text and turn it into speech so the court could hear it.
Art Cavazos: And I’m sure that’s an early technology limitation. You know, I’m sure this technology is going to get better, and it should be able to use that Lola voice, you know, more kind of in the moment.
Greg Lambert: Yeah. Yeah, I’m sure it will be. I shouldn’t have brought it up because I can’t remember the name of it, but there’s a new AI tool, maybe you’ve seen it, that’s kind of an orange piece of plastic you can bring with you. Someone was saying that this is probably a good feature for some kind of portable technology that lets you take that programmed voice with you and carry it around. So yeah, just the technology advancements. This is a great story of AI being used to help someone who found themselves in a very unique situation, allowing them to continue their life’s skill. And one of the things Lori Cohen is saying is that she’s hoping she can take this and, by spreading the word on how she was able to use it, mentor and train other people who find themselves in similar situations, so they can leverage the technology to continue the life they’ve trained to lead.
Art Cavazos: So it's a really kind of feel-good story, and I think it shows the bright side of AI and how it can be used. Of course, my lawyer brain goes immediately to how it could be abused. Really, two scenarios come to my mind. One is: what if others want to use her voice? You can see this becoming more commonplace. Maybe it starts out with folks like her, who are unable to speak and use this as a disability aid. But as it becomes more common, I could see folks starting to use it who can speak in their own voice, but who want to use the authoritative, experienced voice of a litigator who has been in 58 complex litigations, right? And suddenly, who owns that likeness? There are a lot of questions there. And then the other scenario that comes to my mind is: what about when Lori Cohen is no longer with us, but that technology still is? Who owns that? Who gets to continue using that voice, maybe using it to create new voices? So I guess it still goes back to that "who owns the data" question.
Greg Lambert: Yeah. Well, I can think of three stories along similar lines. One is, it's 2024, and we're right in the height of political season, not just in the US, but across the globe. You may have seen the robocalls that went out with an AI-generated voice of President Biden, which obviously was not his voice. The second story I've seen, of course, was the Taylor Swift images that were generated by AI, which even X had to act on. I guess on X today, you can't search the term "Taylor Swift."
Art Cavazos: Wow.
Greg Lambert: Because it's been inundated with these images of her. And then the third one, and I don't know if you guys have caught this, but there's a company, and I can't remember what platform it's showing on, that has created a new stand-up routine for George Carlin.
Art Cavazos: I did hear about that.
Greg Lambert: And of course, George Carlin has been dead for, I don't know, 15 years, 20 years? And so his family is suing the company that produced this. So yeah, Art, your lawyer/lizard brain is spot on. I mean, that's why people pay you the big bucks to issue spot, and these are definitely some issues that are coming up with the technology. You've got to take the good with the bad.
William Nilson: Isn’t it interesting that we celebrate impersonators until they’re too good?
Courtney White: Yeah. When I was listening, what I was thinking about is, I don't know how many of you saw the Kendrick Lamar video that really showed the breadth and scope of deepfake technology. If you combine that deepfake technology, how he morphed into all of these different faces, with the capability of replicating someone's voice, that is alarming. And there have already been concerns about that on social media. People were kind of thinking it's like folklore, that we're not there yet, but the reality is, we are there, and in the wrong hands, I just don't really know what the implications could be. I feel like social media sites, which I won't name specifically, are trying to do their best due diligence to recognize when there are issues that are flagged but not verified, but I just don't know how they're going to be able to deal with where the technology is going. How will you flag that? How will you know? Will there be a way to tell? I have no idea.
William Nilson: Blade Runner addressed this years ago. The original novel, Do Androids Dream of Electric Sheep?, feels very seminal now for us, because in that novel, distinguishing between an android and a human was so difficult that they couldn't do it except by bone marrow analysis; that was the only definitive test. Instead of going to that level, they would use behavioral analysis, essentially Turing tests, designed to determine whether an android listening to a question would be able to respond appropriately. I think in the novel they would use a series of questions, almost like prompt engineering, that would cause the AI, the android, which no doubt ran on generative AI, to create a response that would out the entity as an android. And clearly, we're going to have to keep advancing in that way to prevent fraud. I mean, in the near future we're going to have content creators who use certain phrasing or certain creative methods to prove in their content that they are not using AI generation, not that AI creation isn't genuine, but to show that they're doing it without an AI assistant. Some people will find value in that, so there will be a sector of the market that has to prove that, and that's also going to influence our fraud prevention. I mean, American Express is going to have a very interesting struggle with people "proving," so to speak, with generative AI that they are who they say they are, and authorizing purchases that shouldn't be authorized. I'm sure they're on the money right now with that, and the other major credit card companies as well. So I'm excited to see what they do with it. It's definitely interesting and very important.
Art Cavazos: Yeah, I think it's definitely going to flood the marketplace at some point, and how people deal with that is going to be very interesting to see. I mean, I was joking earlier about Waze using Morgan Freeman's voice, and I assumed that was authorized in some way. I don't know exactly how that was done, but I see those types of novel and interesting uses, not necessarily useful, but people like to play around with these technologies. And so if you, as a content creator or just a person in your everyday life, can use this technology to talk in someone else's voice, or change someone else's voice, I think people are going to start using that, and it's going to become commonplace. And then, right, what kind of challenges does that pose for banks and schools and everyone else?
Courtney White: What I was actually thinking, Art, is that in influencing, usage rights and contracts are a huge deal. Usually, you'll negotiate that someone has the right to use your photos or content for a certain period of time. And quite often, if someone is not going to pay in a way that actually makes sense given the usage rights, a lot of influencers will say, "I won't do the deal." What I think could be interesting is, what would stop a brand from saying, "Okay, well, I used the photos you gave me usage of last year; what if I could use your body again, and just figure out how to combine that with the new clothing that I have?" I feel like our technology is at a point where they could do that. Is that an original work? Can they do that? And influencing is an area where there's really not enough regulation, in my opinion, at all. That is something I worry about constantly, which is probably why I don't have as many brand deals anymore, because I'm really cognizant of the usage rights.
Art Cavazos: Right, and tying it back to our first Future Ready Beat, if they can say, "Well, we're just scraping from the internet, and then we're transforming it and creating an output that's not regurgitating exactly what we scraped," that's why we need to follow this New York Times case closely, to see whether that's going to be fair use. Go ahead, Greg.
Greg Lambert: I was just gonna say, every advancement in technology or process has always had its yin and yang. You can put a lock on something, and somebody is going to become a lockpick. You can encrypt something, and someone is going to be able to decrypt it. This is just one more level: every time we think we've found some way to make things good, unique, and advanced, there's always going to be this other side that's going to figure out, okay, how do we break that? So this is just one more step, a big step, but one more step in that same line of advancements in creativity and ingenuity.
Art Cavazos: Well, on those wise words, I think it's a good place to wrap up for today. So thank you, everyone, for joining us on this episode of Future-Ready Business. We touched on a lot of things today regarding space law, AI, and exciting technologies that are coming down the pike and are already here, folks. Let's go around the panel: where can everyone find you on the internet?
Courtney White: You can find me on Instagram, Pinterest, TikTok, all the places, @CourthouseCouture.
Art Cavazos: Will?
William Nilson: My socials, well, for Austin Bespoke, I'll say what that one is, right, this time: @AustinBespokeFits is the upcoming new social, so get a pre-follow on there, and then we'll be posting a lot.
Greg Lambert: And for me, I'm @glambert on X, and I've been spending more time on LinkedIn because I'm more of a knowledge-sharing kind of person, and I feel like that's the best place for the type of knowledge I'm sharing. So you can look me up at Greg Lambert on LinkedIn.
Art Cavazos: All right, thanks. And you can find us at @FutureReadyBusiness on Instagram. You can find me on Twitter and TikTok at @FinanceLawyer. I'll start using those eventually, but if you message me, I can at least receive messages there. And to Greg's point, I'm also on LinkedIn, at @ArtCavazos. If you like the show, please rate and review us wherever you listen to your favorite podcasts, and please share FRB with your friends and colleagues. As mentioned at the top of the show, the opinions expressed today do not necessarily reflect the views of Jackson Walker, its clients, or any of their respective affiliates. This podcast is for informational and entertainment purposes only and does not constitute legal advice. We hope you enjoyed it. Thanks for listening.
Visit JW.com/future-ready-business-podcast for more episodes. Follow Jackson Walker LLP on LinkedIn, Twitter, Facebook, and Instagram.
This podcast is made available by Jackson Walker for informational purposes only, does not constitute legal advice, and is not a substitute for legal advice from qualified counsel. Your use of this podcast does not create an attorney-client relationship between you and Jackson Walker. The facts and results of each case will vary, and no particular result can be guaranteed.