Looking Back: AI and Emerging Technology in 2023

December 19, 2023 | Podcast: Future-Ready Business



As we anticipate 2024, many are expecting AI adoption to continue accelerating in various industries. In this episode of FRB, Art Cavazos, William Nilson, Courtney White, and Greg Lambert engage in a thought-provoking exploration of AI’s impact so far on law and business, and the need for a nuanced approach to its integration. Tune in as they discuss how companies can balance the rapid progress of AI and confidentiality concerns and the critical intersection of diversity, ethics, and AI safety versus acceleration.

Featured This Episode

Our Hosts:
Art Cavazos
Partner, San Antonio
Twitter: @FinanceLawyer
Follow on LinkedIn »

William Nilson
Associate, Austin
Instagram: @BigWillyGotBack
Follow on LinkedIn »

Courtney White
Research Attorney, Dallas & Houston
Instagram: @courthousecouture
Follow on LinkedIn »

Greg Lambert
Chief Knowledge Services Officer, Houston
Follow on LinkedIn »

Episode Transcription

Art Cavazos: Hi, I’m Art Cavazos, a corporate and finance lawyer with Jackson Walker, and this is Future-Ready Business.

The opinions expressed today do not necessarily reflect the views of Jackson Walker, its clients, or any of their respective affiliates. This podcast is for informational and entertainment purposes only and does not constitute legal advice.

I’m joined today by my panel of co-hosts, Greg Lambert, Courtney White, and William Nilson. Why don’t we go around kind of clockwise from my screen, which, Will, that puts you first. Why don’t you let folks know who you are and what you’re on today to talk about?

William Nilson: I’m William Nilson. I’m a commercial real estate attorney at Jackson Walker, and I also own and operate two businesses, with a third on the way. I’m here to talk about AI and its impact on all of us. I’m pretty excited about that.

Art Cavazos: Awesome. I’m excited, too. Courtney?

Courtney White: Hi, my name is Courtney White. I’m a research attorney in our Houston office. Excited to be on the podcast to talk about AI and emerging technology. Offline, I am Courthouse Couture on multiple social media platforms.

Art Cavazos: And, Greg, that leaves you last but certainly not least.

Greg Lambert: All right. I’m Greg Lambert. I’m the Chief Knowledge Services Officer here at Jackson Walker, and I also blog at 3 Geeks and a Law Blog and podcast at The Geek in Review. Here at the firm, I’m one of the leads spearheading the evaluation of potential AI products that might work in our own environment.

Art Cavazos: A lot of my knowledge of what’s actually going on in the legal space as far as AI is thanks to you, Greg, whether it’s through your podcasts or, you know, internal meetings and things. So, definitely appreciate your expertise today.

So, our last episode, we talked with Ashley Carlisle at HyperDraft, an AI legal tech company. We talked a lot about really interesting topics related to AI development. And so, I thought, today, we could continue the discussion and dive deeper into certain aspects, as well as talk about some things that didn’t come up, one of which was because it hadn’t happened yet.

In just the few weeks between when we recorded that episode and today, you could almost have blinked and missed it, but Sam Altman was removed as CEO of OpenAI and now he’s back. So, we’ll get into that a little bit later.

First of all, kind of want to just open it up. Courtney, you were on that episode? Greg, Will, I’m assuming you heard the episode. Does anybody want to start off with some thoughts they had or a question that they wanted to bring up?

William Nilson: There was a lot. Go ahead, Courtney.

Courtney White: I think probably the most interesting thing is that AI was brought to us, and to everyone, without really being tested the way most things are tested. The entire conversation centered on what would have happened if we had waited six months before ChatGPT emerged and everybody started using it. But now that ChatGPT is here, what are we going to do about some of the unanswered questions that came up on that podcast episode?

William Nilson: Yeah, listening to that specific point about how we became the beta. This is true of a lot of new tech applications, especially products that are very expensive to build initially, so they have to be pushed out once they’re ready. Waiting (I guess from a business perspective; I mean, I’m not one of those businesses, but I’m assuming) would just cost too much in terms of not being first and all that. So, we kind of are guinea pigs in a way, which, on one hand, I think the primary argument is going to be that that’s a little frightening, but there’s another argument, which is that it’s kind of fun. That is what happened with ChatGPT: it was a little more fun at the outset because we didn’t know what was going to happen, and neither did they. Things started to happen, and then we got to see different AIs do different things. Some AIs, I’ll say, had serious problems. As they encountered public opinion, or I guess what was fed to the AI, they started saying things that were really, really not okay, and they had to be pulled back. So, I’m being vague, but that was a real story. That, to me, is kind of funny. It’s an interesting experience to have, to say the least.

Greg Lambert: Yeah, and I’d say my part of it was the super hype around it. We were told basically that this was going to replace 44% of the lawyers out there. I think people overestimated how much this was going to change things in the short term. But as Steve Jobs said, we tend to overestimate the effect in the short term and underestimate the effect in the long term. So, I think we’re coming down from the high of the hype, especially here in legal, and now it’s going to be, all right, what can this really do, and being put to the test. I think we’re still in for a lot of excitement and a lot of changes, but people need to redirect their aspirations a little bit and bring them down to reality.

Art Cavazos: Yeah, I agree with that. I’m a big believer in what I’ve rebranded as Amara’s observation, that hype cycle where something comes out, everybody gets overly excited about it and thinks it’s going to change the world overnight. Inevitably, that does not happen. So, everybody chucks it in the dumpster and says, ‘Oh, this must have been a flash in the pan and nothing worth anything after all,’ and then slowly but surely it incorporates into our daily lives, and suddenly we have a computerized coffee machine and everything is, you know, inundated with that technology. I’m quite confident the same is going to happen with AI. We’re now maybe getting toward that trough period where folks are thinking this is neat, you can do all these novel and funny things with it, but it’s not going to replace lawyers. I think serious people never thought it was going to replace half the workforce overnight anyway, but the general perception is just catching up to that reality. Over the next five years, I think it is going to transform the way pretty much everything is done in the world of business.

Greg Lambert: You know, Art, you mentioned that serious people wouldn’t have thought that half the workforce would be replaced by AI, but I do have to remind you that was a Goldman Sachs estimation. That was a report from Goldman Sachs that said 44% of lawyers’ work could be replaced by AI. Now, granted, we’re seeing more and more things like lawyers using generative AI without checking citations and submitting briefs to courts. I think I saw one again earlier this week. The thing is, if you look at the dates on those, the Schwartz issue was back in March, and this one was back in, I think, mid or early May. Hopefully, we’ve learned from others’ mistakes, and we’re just seeing some lag from people who got caught up in the super hype at the beginning. I think as we roll into 2024, we’re going to see less and less of that, and we’re actually going to start seeing much more of the benefits of AI, especially in legal, which is such a language-focused industry. There are going to be a lot of people who hang on to the idea that, you know, Mr. Schwartz in New York got in trouble for using it, so we shouldn’t use it. I think that’s going to go away, and we’re actually going to come in with some practical, simple ways of leveraging generative AI to help us in our day-to-day.

William Nilson: Greg, I agree. To highlight your point, we see all this news about, I mean, it’s not a ton of news, but every time it happens, there’s an article saying look at this lawyer who used generative AI improperly. Most recently, you flagged for us internally this expert witness who had used AI to generate a report. How many hours did he say it would have taken to generate the same report by hand, or by traditional methods?

Greg Lambert: So, it took him 72 hours to create the report using generative AI, and I think he said it would have taken 1,700 hours otherwise.

William Nilson: So, a savings. What is that percentage-wise, roughly 4% of the time? So, about a 96% time savings. I’m looking at this like that’s a good thing. Spending less time on anything to get the same product is a good thing. Now, it wasn’t the same product, and that was the problem: it was an inferior product that hadn’t been checked properly. So, it really probably should have taken maybe 6-8% of the time, not 4%, but we’re still talking about a 92% to 94% reduction, or increase in efficiency, you might say.
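For reference, the time savings in that example can be computed directly (the 72-hour and 1,700-hour figures are the ones quoted in the conversation above):

```python
# Time-savings arithmetic for the expert-report example discussed above.
ai_hours = 72         # hours the expert reportedly spent using generative AI
manual_hours = 1_700  # his estimate for producing the same report by traditional methods

fraction_of_manual = ai_hours / manual_hours  # share of the manual time actually spent
time_savings = 1 - fraction_of_manual         # proportional reduction in time

print(f"AI took {fraction_of_manual:.1%} of the manual estimate")  # ~4.2%
print(f"Time savings: {time_savings:.1%}")                         # ~95.8%
```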

That brings me to the point that we talk about problems people have using AI, which is good; we should be discussing how to use it properly. But we don’t see news about attorneys who cite cases improperly, and this happens all the time. They’ve done the research without AI and they’re just citing cases improperly, or they completely misunderstand the law, which is their whole job, and copy and paste stuff. They don’t Shepardize, or whatever the trademarked term is for checking to make sure a case is still good law. And then they get sanctioned, or they go before the ethics board of their state bar, and sometimes they lose their license, or they pay fines, or they pay the other attorney’s fees. All these things happen to them, but it doesn’t make the news. They just go away; they get penalized or de-licensed. So, that’s the difference, right? We have this new medium of control over how we generate what we’re doing.

The next step for generative AI with language, or at least an upcoming step, is that our processing right now is far below what’s needed to quickly create LLMs. There’s a lot of this work in California right now. I have a friend working on natural language processing models that use what are called analog computer processing units, as opposed to digital processing units. Instead of zeros and ones, it’s a zero-to-100 scale that says how far in this data is. I’m not going to get into all the technicals here, but if and when that becomes standardized and reaches a threshold where it’s marketable and cheap enough to sell, that’s going to advance LLMs in a way we definitely have not seen yet. They’ll be created much more quickly, and I think that’s part of what people are waiting for with language generation models specifically, because analog processing is more effective for something like a natural language process.

Art Cavazos: Well, that brings up a point I want to raise, because I know Courtney has a meeting to get to. Before you have to drop off, I did want to bring this up. We’ll unpack the OpenAI situation a little more later, but cutting toward the end and the AI ethics and diversity piece of it: in a nutshell, there used to be two women on the OpenAI board before this whole Sam Altman ouster drama occurred. Where we ended up today is that all of the former board members have been removed except Adam D’Angelo, who is the current CEO of Quora, which is actually a competitor of OpenAI. Sam Altman has acknowledged in the past that Adam D’Angelo has conflict-of-interest issues being on the board, but despite that, and despite the fact that he was one of the ones who ousted Sam Altman (The New York Times reported that he played an important role in the deliberations and was the main leader in negotiations, holding out for concessions from Mr. Altman during the tense back and forth), he’s still on the board. But Helen Toner and Tasha McCauley are not on the board any longer. The other folks who are no longer on the board, Altman, Greg Brockman, and, I’m going to butcher his name, but I think it’s Ilya Sutskever, are all still with OpenAI, just not on the board. So, the only two who are completely severed from OpenAI are Helen Toner and Tasha McCauley. That may just be a coincidence, but the new board is also composed entirely of men.

So, Courtney, before you drop off, being the only woman on our panel, I just wanted to give you the opportunity to give your thoughts on that. To be fair, they have also said, this was also reported by The New York Times, that the provisional board, which is the current board, is expected to become more diverse as it expands in the coming months.

William Nilson: Also, we will not replace you with a man when you drop off.

Courtney White: Thank you, Will. I think, one, we know that in most corporate boards, there’s a lack of diversity period. There’s a lack of women on most corporate boards. Corporate boards tend to be male-dominated, and we know there’s obviously a lack of racial diversity. There are several organizations that are dedicated to training individuals for board leadership, because there’s a lot of complex reasons that probably I don’t have time to discuss as to why women are not on boards. But I think because, you know, you’ve lost this female leadership, I think that’s going to be a real problem with OpenAI. I’m wondering if, moving forward, there will be increased distrust if they’re unable to diversify their corporate board. But it may not be an issue because, as I said, the majority of corporate boards just are not diverse.

Even if you look at most nonprofit boards, they’re not diverse either. Sometimes, that is because for nonprofit boards, there’s a fiduciary responsibility or some other reason that the boards are not diverse, but in boards where people actually are paid, it’s really a puzzle to me as to why companies don’t believe that’s important. They may see diversity within the positions that people have that are on the board. But again, racially and gender-wise, they don’t exist. I think that’s why in terms of Facebook and companies that have been around for a while within this tech space, you have seen books like Lean In from Sheryl Sandberg that address this topic of women in leadership in the tech space. But you would think that a company that is kind of pioneering something new, that they would pioneer the way their entire work processes work. I guess maybe they haven’t thought of that diversity element.

I also think that likely there needs to be a governing board that is kind of governing where AI is moving. You’ve also seen some of those comments iterated by our federal government. I think as we move forward, as AI continues to develop and affect every single industry, maybe some of those changes will lead toward some level of diversity. However, with our recent Supreme Court decisions, I’m not so sure. So, I hope that helps.

Art Cavazos: Yeah, definitely. I think the reason diversity and AI ethics are so important to me is that they really overlay this discussion of AI safety versus AI acceleration, which are often pitted against each other. I guess there’s a feeling that you can’t both go fast and go slow, doing things deliberately and with careful thought. It does worry me that the reasons given for this whole shake-up at OpenAI in the first place were about AI safety and acceleration. The official reason released by the board was that Altman was allegedly not consistently candid in his communications with the board, and then the two women on the board are the only ones who were completely ousted and will have no further input or influence at OpenAI. Obviously, none of us were in those rooms or part of those discussions. But, to me, it doesn’t bode well that one replacement is former Treasury Secretary Larry Summers, who, if you’re familiar with him, has a very controversial past. The other replacement board member is former Salesforce co-CEO Bret Taylor. It seems like the AI acceleration side won out. Oh, and Microsoft got an observer seat on the board. So you have a former Treasury Secretary, a bullish CEO, and Microsoft, the biggest investor; the moneybags in the room have all replaced the former board.

Greg Lambert: Well, there are a couple of things we need to keep in mind. One is that the Microsoft board seat is an observer-only seat, a nonvoting member. The other thing to remember is this: this board is the nonprofit board for OpenAI, which is really weird. You have a board that is set up through the nonprofit wing, but it’s also making decisions on the for-profit, commercial side of things. It makes for great copy in the newspapers, but the setup is just weird. I think eventually they have to clean up their act because, Art, I know you and I listen to the Hard Fork podcast. The vultures are circling, saying, ‘We’re coming, we’re really ready to snap up OpenAI’s employees.’ So, they have some cleaning up to do.

Art Cavazos: Yeah, I do see that interpretation that they now look vulnerable. But I think that very strange structure that you mentioned, where they have a nonprofit sitting on top of a for-profit, is due to their history. They were initially formed as a nonprofit with the idea of, you know, kind of developing AI in a very safe way that wasn’t driven solely by the for-profit incentive. And then, you know, over the years, they kind of decided, well, we need more money to develop this technology, and so we’re going to create this for-profit arm that’s going to go raise all this money. It was extremely successful in doing so mainly through their partnership with Microsoft and raised billions of dollars for the technology. I think that this nonprofit overlay is kind of a vestige of a company that has transformed almost completely.

I wonder if this whole debacle was really the completion of that transformation, where they kind of shed the last aspects of this AI safety focus that they started with and have fully made the transition to, you know, solely focused on profit and acceleration of the technology.

Greg Lambert: Yeah, I’d say that’s a pretty good assumption.

Art Cavazos: Yeah. So, in some ways, you could say they’ve come out stronger because now they don’t have to deal with the AI safety aspects that some would say were holding them back. I’m not advocating for that position, but I think folks who are now in full control of the company might feel good about that.

Greg Lambert: Yeah, I’m going to sit back and wait and see how it unfolds, because I think they’re not 100% there yet. They’re moving in a different direction. But anytime you have this type of adversity in a company, typically it’s not a good thing. It can eventually turn into one, but we’ll have to see how they react going forward.

Art Cavazos: Yeah. Of course, it’s important because ChatGPT is right now kind of the leader. Will, you mentioned earlier the importance of being the first mover. Right now, they’re the biggest and most influential player in an AI space that we all expect is going to be transformational over the coming years.

William Nilson: I don’t know if you guys have read The Three-Body Problem. I highly suggest this book. It’s sci-fi, and I don’t really read sci-fi. This is probably the only sci-fi I’ve read aside from what I had to read in high school, and I don’t really know why, because I think sci-fi is fascinating. I won’t ruin anything, but philosophically speaking, it challenges the reader to think about their problems in no longer a ten-year view, no longer a seven-year view. We tend to have that. We’re very focused on our own lives; that’s just a natural part of the human condition, that we think about what we’re going to be doing in seven years. How is my marriage going? How’s my house going? How is my family’s growth going? What do I want to do? Those questions are so centric to who I am that I even brought them up as generalized examples. That’s how narrow our perspective is. But this book has challenged me, and I think challenges a lot of folks, to think about the 100 years and the 1,000 years of humanity when making small decisions now. I would almost require The Three-Body Problem as reading if I were teaching a school or were a headmaster or something like that, because of what’s happening now in scientific development. It could totally change how we’re viewing something that has great potential. I highly suggest The Three-Body Problem. Check it out.

Greg Lambert: I think that plays on the fact that humans tend to think very linearly, you know, going from now to some point in the future. That’s why we also have such a hard time dealing with exponential growth, which is what we’re seeing here. I imagine the book, and I have heard other people talk about it, feeds off of that: getting away from just A to B to C and thinking much more holistically about where things are going. We have a really hard time doing that.

William Nilson: Yeah. With some of the best comedians, right, the audience is the joke. That’s kind of what Liu Cixin does: he tells you, “By the way, reader, you are the joke of this book.” You are surprised by what’s happening, and that’s the joke, that you’re surprised and you shouldn’t be. This is obviously my interpretation of what the author is going for. But part of what I’m trying to take away is, why am I shocked at all by something like this happening? It’s because I’ve limited my mindset so strenuously over time trying to control my own life and know what’s happening in my own life, but I don’t. So, that’s it. I think it’s really good mind expansion for neuroplasticity, if you will; we can talk about that on a different podcast.

Courtney White: I think in terms of neuroplasticity, one of the things that I think is going to be the most challenging in the legal space, again, is using these AI tools and being comfortable with using them. Legal is an industry that is really unwilling in many aspects to adapt to rapid change for the belief that the way that we’ve been doing things is effective. However, I think if clients start demanding that we get work done more efficiently and more quickly, we’re going to have to have that neuroplasticity when we’re looking at a problem, of thinking of different ways to solve it using AI.

I know Greg is always really great about asking, “Well, have you used AI? Have you thought about maybe just using it for this segment of the project?” I now find myself doing it, even figuring out different ways to ask questions to make processes more efficient. We’ve had to do that within the discovery process in terms of document review, which has already shown us the benefits of AI and using computer technology to make discovery more streamlined. But I think we’re going to have to think of it in our research components now, and that is going to be exciting. But again, as I maybe shared earlier, there are lawyers who are adept at that and will be open to it, and I think there will be lawyers who may be left behind because they refuse to be open. With that, I’m going to leave you guys for a little bit.

Art Cavazos: Thanks, Courtney, for joining us and making the time. We look forward to you being on the podcast again soon.

William Nilson: Thank you, @courthousecouture.

Greg Lambert: I want to pick up on something that Courtney just talked about, and that is, typically, we lag behind when it comes to change, change management, and adaptation. The problem we’re having this time with that generalization of how the industry handles change, especially in technology, is that we’re also an industry that races to be second. What I mean by that is, no one really wants to go first when it comes to a change, but they also don’t want to be left too far behind. So, we tend to watch others. That can be salary raises, it could be technology adoption, it could be office setups. We tend to follow the leader on this. What’s kind of throwing it off in this situation is that we have some big players that have already gone first, and everyone else seems to be doing some type of catch-up.

Now, it could all be PR, but people believe that other law firms are outracing them, and even law firms that typically don’t jump into that race right away have jumped in right away. This is a different environment. Again, I think once we come down off the peak of the hype cycle, we’ll realize that it’s not going to be the answer to everything, but it is the answer to some things. What’s going to be really cool about this is that what works for a super huge international law firm may not work for a regional law firm. Everyone is going to be cutting their own little piece of the pie that works just for them. I think that’s where it gets really exciting.

Art Cavazos: My last question is going to be for you, Greg. You’ve been out there researching all of these different AI products and seeing what’s going on in the marketplace and what’s available. For me, one of the biggest impediments right now to actually utilizing these tools is confidentiality concerns, putting any kind of information into these models or into these programs, whether that’s forms of documents that you maybe consider proprietary or whether that’s putting client information or transactional information that is confidential. So, avoiding that kind of puts you in a place of paralysis. Well, how can I even use these tools without putting that information into it? What have you been seeing to address that issue?

Greg Lambert: Well, again, I think that’s probably a March 2023 problem more than a December 2023 problem. Once we got away from companies just creating a wrapper front end for OpenAI or Anthropic or Bard, the reality became: I really want to put my own information in there, because that’s where the value is going to be. But at the same time, I’m a lawyer, or I’m a doctor, and there are certain ethical and legal obligations we have to follow. What has occurred, especially with the larger companies that have created their own tools or acquired smaller vendors to speed up their overall generative AI capabilities, is that they are really focused on ensuring that none of the information you put in goes to train the model, either on their side or on the AI side of things.

One consideration is that it’s not really easy to update an LLM. So, it’s not as if you put your information in and it automatically gets sucked into the LLM. At the same time, all of the legitimate generative AI vendors and the third parties, Westlaw, Lexis, Bloomberg, any of the big players in legal information you can name, have created barriers that keep your information from commingling with any other information. So, there is a very solid wall between your information and the LLM, or the third party’s training model, and I think that concern is going to go away. We will still have to be very careful, especially with startups that come in. The thing is, they come in with super great ideas and they’re very nimble, but they don’t always think about the trust factor. It’s kind of like what OpenAI used to be: let’s get this out onto the market and then we can adjust. And I have to say, that’s not just an OpenAI thing. Years ago, we used to say that Microsoft would release a product as a beta and then wait for customer feedback and fix everything as they went along. It wasn’t a bug, it was a feature. But when it comes to the big players in the legal market, their number one concern is that they don’t want the reputation of leaking anyone’s information. Just think of all the confidential and proprietary information you put into a Westlaw search. They want it to be such that you can put the information in and not have to worry about it. I think this is just going to get better and better.

In fact, a friend of mine, Jason Barnwell at Microsoft, left me a comment on LinkedIn the other day, saying, ‘This is the worst that these products will ever be.’ And that’s true. We’re at a point now, and in fact I was talking yesterday with Jeff Pfeiffer from Lexis, who equated this to 1971: if you had a big console TV, that was super great, the most awesome thing, especially if it was a color TV. That was a huge advancement. Now look at televisions, how much cheaper they are, and bigger and better. We’re going to see the same thing with AI. But again, we’re very good at thinking laterally, and we’re not very good at thinking exponentially. The problem we’re running into is that we’ve got exponential growth ahead of us. My one warning would be that we’re going to find out these tools can do things we didn’t think they could do, and the question is how quickly we understand that and react to it. It’s not necessarily going to be a bad thing; there will be some really good things. I think you’re seeing that already with the ability to do legal research in Spanish or French or Portuguese: search documents in one language and get the answers in another. So, the opportunities, I think, are vast; it’s just a question of how quickly we catch changes, adapt to them, and make them actually benefit us, rather than sitting back and letting the technology take control of us.

William Nilson: Yeah. What makes a good athlete? It’s not that they have the fastest 40. It’s because good athletes adapt to the situation as it’s happening immediately. They’re the best at that adaptation. They have the fundamentals, yes. But there are a lot of law firms or a lot of lawyers that have the fundamentals. What do I do to adapt? It’s a personal decision.

Greg Lambert: Yeah. I would say one more thing that may be kind of overlooked, and we talked about this earlier: we used to laugh and say that legal tends to be about five years behind our corporate clients. When it comes to this technology, I don’t think that’s true anymore. In fact, if you look at generative AI products on the market today, legal is outpacing most other industries, whether it’s medical or venture capital. There is so much going on with actual products you can use today that leverage generative AI in this market and that aren’t available in any other market. So, it’s kind of a crazy, bizarro world; everything seems to be upside down. But I think it’s really interesting. And again, it probably goes back to the fact that this is such a language-based industry, and this is a language tool. So, we’re able to use our minds and adapt to this new tool much more quickly than a lot of other industries. We don’t do a lot of math either, so we’re good with that.

Art Cavazos: Well, it’s definitely an exciting time, with a lot of change and a lot of unknowns that will become clearer as time goes on: where OpenAI is headed, where all these various other companies are headed, and which ones will emerge as trustworthy enough that you can enter confidential or sensitive information and trust that it’s not going to go anywhere or be used for anything. So, it seems like there’s still a lot to learn, and it’s an exciting time. I’m looking forward to it. All right, well, thanks, guys, for joining us on this episode of Future-Ready Business, and thank you to our listeners. We touched on a lot today regarding AI and automation, and we hope everyone will join us again soon. In the meantime, illustrious panel (and we’ll have to splice in Courtney here as well), where can folks find you on the internet?

William Nilson: Right now, my Instagram is @BigWillyGotBack. That’s b-i-g-w-i-l-l-y-g-o-t-b-a-c-k. That’s my personal account, and I’m giving the personal one because the official Austin Bespoke account is coming soon. I need to fully rebrand, but that’ll be coming up quick. It’ll be Austin Bespoke Fits; that’s going to be the handle for that.

Greg Lambert: Yeah. And I’m the Gen X in this group, so I’m totally on LinkedIn, at Greg Lambert. You know it’s me because I’ve got gray hair and a gray beard. And then also listen to The Geek in Review.

Courtney White: And I am @CourthouseCouture on multiple social media platforms.

Art Cavazos: Excellent. And if you liked the show, please rate and review us wherever you listen to your favorite podcasts, and share FRB with your friends and colleagues. You can find us on Instagram and Threads @FutureReadyBusiness, and you can find me on Twitter and TikTok @FinanceLawyer. As mentioned at the top of the show, the opinions expressed today do not necessarily reflect the views of Jackson Walker, its clients, or any of their respective affiliates. This podcast is for informational and entertainment purposes only and does not constitute legal advice. We hope you enjoyed it. Thanks for listening.

Visit JW.com/future-ready-business-podcast for more episodes. Follow Jackson Walker LLP on LinkedIn, Twitter, Facebook, and Instagram.

This podcast is made available by Jackson Walker for informational purposes only, does not constitute legal advice, and is not a substitute for legal advice from qualified counsel. Your use of this podcast does not create an attorney-client relationship between you and Jackson Walker. The facts and results of each case will vary, and no particular result can be guaranteed.