Navigating the AI Landscape and Revolutionizing Legal Tech

November 15, 2023 | Podcast: Future-Ready Business

The release of ChatGPT in late 2022 thrust AI into the forefront of public conversations, shifting attitudes from initial skepticism to the current widespread fascination. In this episode of FRB, we engage in a conversation with the CMO of HyperDraft, Inc., Ashley Carlisle, as she provides a comprehensive exploration of predictive analytics, the role of AI in this domain, ethical considerations related to AI usage, the importance of diversity in AI development, and strategies for improving tech literacy.

Featured This Episode

Our Hosts:
Art Cavazos
Partner, San Antonio
Twitter: @FinanceLawyer
Follow on LinkedIn »

Courtney White
Research Attorney, Dallas & Houston
Instagram: @courthousecouture
Follow on LinkedIn »

Episode Guest:
Ashley Carlisle
HyperDraft, Chief Marketing Officer
Follow on LinkedIn »

Episode Transcription

Art Cavazos: Hi, I’m Art Cavazos, a corporate and finance lawyer with Jackson Walker, and this is Future-Ready Business. I’m joined today by my co-host, Courtney White, and we’re going to be talking about AI and automation with our very special guest, Ashley Carlisle. Ashley, we’d like to let our guests introduce themselves, that way the audience can tie voice to name and get to hear a little bit about you. You want to go first?

Ashley Carlisle: Sure, thanks for having me. I appreciate your time. My name is Ashley Carlisle. I am the CMO at HyperDraft, and I am a former Goodwin and Kirkland corporate attorney myself, so I understand the pain and am excited to connect with you all.

If you haven’t heard of HyperDraft, we help organizations scale legal work with our AI-powered document and workflow automation solutions. Basically, we make legacy processes simpler by digitizing them so that legal can be less tedious and less annoying. We work mainly with Fortune 500 and public tech, healthcare, financial, and private equity clients across the U.S.

Art Cavazos: Fantastic. Thank you very much for that. Courtney, you of course are a returning guest on FRB and part of the FRB team. But for those who haven’t heard the previous episodes, can you tell us a little bit about yourself, as well?

Courtney White: Sure. My name is Courtney, and I’m a research attorney in our Houston office. I also host the Jackson Walker Fast Takes podcast. Outside of work, I’m also a blogger on TikTok and Instagram at the account Courthouse Couture.

Art Cavazos: Great, thank you. And as always, before we jump in, I’d like to remind our listeners that the opinions expressed today do not necessarily reflect the views of Jackson Walker, its clients, or any of their respective affiliates. This podcast is for informational and entertainment purposes only and does not constitute legal advice.

Today, Ashley, obviously you told us a little bit about your background. We’re here to talk mainly about AI, artificial intelligence, which has become such a huge topic this year. Last year it was not at the forefront of what everybody was talking about. I think back to March of this year, when headlines were being made because Elon Musk and a thousand other tech aficionados signed an open letter calling for a six-month pause on AI development. And of course, that didn’t happen. I think everybody who signed that letter probably continued working on AI. It was really late 2022 when ChatGPT was released, and by early 2023 it was all anybody could talk about. 2023 will probably be remembered as the year of AI. But you’ve been doing this for a long time, right? You didn’t just start when this became the hot trending topic on whatever Elon Musk is calling his social media sites these days. So, building on your background, can you tell us a little bit about how you got interested in AI and how you got to this point?

Ashley Carlisle: Yeah, I think you’re right that 2023 is going to be known as the year of AI, and history will see whether that’s an exciting thing or a boring thing.

Yes, HyperDraft has been around since the end of 2017, beginning of 2018. It was interesting how you phrased the question, because I was kind of nodding along internally; it’s representative of our company’s journey with AI. When we started off, our company was called HyperDraft AI, and in 2018 and 2019, when we’d go to legal departments or law firms, even the mention of AI would just make them bored or skeptical – one of the two. Eventually, we got a lot of feedback from them and from advisors: ‘Take AI out of your company name. No one really wants to know how the sausage is made. No one cares that you have your own proprietary AI. They just want to know the output and the use cases.’ So, we did that, and we’ve been HyperDraft since 2019. It’s really funny in this last year, like you said, post-ChatGPT, to see how people have completely done a 180.

I think it is slowly dying down, but I also think it shows an underlying phenomenon in the legal industry especially, and in the world generally. Our generation grew up with technology, and we just assume that things are possible that maybe other generations didn’t. So you see ChatGPT, and instead of wondering, ‘How can ChatGPT help me do these basic things?’ your imagination immediately jumps to, how can it help me fly? How can it help me do these crazy things? You go from zero to 100 very quickly. That’s kind of been the year of the ChatGPT echo chamber we’ve lived in: people seeing this very basic thing and using it as a jumping-off point to let their imagination run wild. I think, hopefully, 2024 will be a year where that’s reined in and people are using it more, becoming more familiar, and really asking more foundational questions that can help us all increase our tech literacy with AI and actually allow it to move us meaningfully forward, especially in the legal industry.

Courtney White: I think what I’m most interested in is AI’s ability to streamline legal work. Lawyers get paid by the hour, so it is no mystery that that can get expensive, especially in large-scale transactions, litigation, and the like. So, what I really would like to discuss is how can AI be harnessed in areas of transactional work – for instance, contracts, M&A transactions, and real estate transactions. Those are all areas where there is a lot of detailed work. If you are handling the work, the work can get very voluminous. Law firms obviously want to get the largest amount of work that they possibly can, but you want to be efficient in handling that work and also be mindful of the client’s objectives of keeping the bill as manageable as possible. So, I’d love to know your thoughts on how AI can be harnessed in those areas in a way that is also respectful of client information and privacy concerns that many have with AI.

Ashley Carlisle: Yes, well, there are like seven things in there. I’m excited, so if I miss one of them, hold me accountable and bring me back for that part of the question.

The first thing you started off with was a premise we talk about a lot in legal tech, which is the billable hour. It has been the elephant in the room for legal tech for 30 years. The tools that we have made better have foundationally been around for 30 years. The main reason they haven’t had prolific adoption is the billable hour and people’s misconceptions about how they give value to their clients.

I think post-pandemic, a lot of things have shifted. People know that technology can be an instrumental part of business and legal work now and are kind of pushing from the in-house side onto their legal providers for innovative ways to do things. I think people are more cost-conscious than ever. I also think that with people getting excited about ChatGPT and using more technology, people are getting these new ideas on how they can bring more value to their practice with creativity and leveraging these tools in a different way.

To be frank, the billable hour is still a problem with adoption in the law firm setting. But that is why you will see so many in-house legal departments really being the leaders in adoption in our space, and it’s been very interesting to see the success stories on that end. We’re seeing firsthand situations where law firms are scratching their heads thinking, ‘If our clients can do these internal projects so much more efficiently, how can I redo how I’m doing my work, whether with flat fees or alternative billing structures, to use my team and this technology more effectively?’ That’s the point you mentioned at the beginning with the billable hour and how you commoditize the value of attorney work.

One thing that I’ve realized leaving big law and now being in this legal tech bubble is that, when I was a debt finance attorney, I really only thought of my time in hours. Now that I’m on the other side, I’ve realized that a lot of in-house counsel don’t really care about the bill as long as you’re giving them the value they want. At the end of the day, there are still a lot of teams that are not thinking of the bigger picture.

Courtney White: Ashley, I am sure you are familiar with the way legal departments work having a legal background yourself. I’d love to just start out with exploring how AI is being used for contract analysis and management.

Ashley Carlisle: Sure. Contract analysis has been around for a while, and the tools keep getting better and better. When I was at big law firms, we used tools like EagleEye and Contract Companion. There are also tools that dive into identifying what’s market and creating issues lists from contracts. Contract analysis tools are also often used on the back end, after contracts are executed, for reporting: to see where trends lie within certain deal structures and for teams to assess how the deal is doing and whether that’s working for the big picture of the business.

I do hear complaints sometimes about contract analysis tools, because they require a very large amount of data to produce the answers that people are looking for. Another thing, which ties back to some general AI concepts people have been talking about more this last year: it’s really hard to create models and really hard to sculpt data, because typically when you’re doing that, you’re looking for the average answer. As most of us know, having gone to law school and being very hard-working, lawyers aren’t looking for the average answer; lawyers are looking for the best answer. So people shouldn’t be completely reliant on these tools; they should be a guidepost, a double-checking mechanism, something that makes sure you’re not missing something along the way. That’s a common complaint I hear: people expect these tools to replicate what’s in their brain. I guess from a job security perspective, it’s good that they can’t necessarily do that. But it’s also important to know why that is. That really is just how data modeling and these AI models work within an analytics framework.

In regards to contract management: contract management is also known as Contract Lifecycle Management, or CLM. If you’re in-house, I apologize for all the people out there who have spammed you with emails about CLM solutions, or all the events you’ve been invited to, or what have you. Obviously, that has been a huge adoption push for the legal industry in the past few years. If you’re unfamiliar with it, it is basically just automating a contract process: digitizing how you draft, manage, store, and then review contracts. The idea is to connect a whole organization together with technology so that an in-house department, for example, could see how finance is using your documents and how sales is using your documents. It’s a connective tissue, a technological one, that typically isn’t there in standard organizations, just because as they grow, that becomes increasingly harder.

Our CEO actually wrote a chapter introducing contract management in a new book called The Legal Tech Handbook. I’m not going to bore you; I could go on for about two hours about what CLM is and how it’s being used in legal departments, but if people are interested, that book is available on Amazon. Generally, people are using it to speed up deal cycles and to make sure that legal is not the bottleneck, because often the problem with legal is not legal doing a poor job; it’s the underlying people and processes they have to interact with. So by standardizing process and using contract management, you can, as a team, solve these issues and then automate friction out of the process.

Courtney White: In that same vein – and I think you already touched on some of it – I would love to know if AI can be used in very technical transactional deals: M&A deals, complex real estate transactions, areas where you have large amounts of data. You want to pare down the information and give your client a product that’s very useful to them, but you also want to protect important client information and information proprietary to your client. Could AI be used in those instances as well?

Ashley Carlisle: Sure. People have used AI tools in diligence processes. They’ve also used them to see what’s market, and as I said, the what’s-market example sometimes falls flat just because you need so much data to do that, and getting that data securely would take a long time. Many law firms are trying, and with time, maybe they’ll develop their own process for it. Then, in regards to diligence – as a first-year associate, I quickly realized I was not going to be an M&A lawyer, because diligence is awful; at least to me, finance diligence is a lot less painful, so I went that route – these tools can help you identify what types of agreements are in a corpus of documents, flag certain provisions and red flags, and synthesize them into tables, which is helpful.

But we still, as attorneys, have a duty to our clients – a duty of competence – to know what’s in those documents; we still have to read them. Obviously, that’s something all of us attorneys know. But you often see articles asking, will robot lawyers replace us? Well, technically, as it stands, we still have to read the documents. So no – and you’re going to be happy that we still have to read the documents, because lord knows something will come out a few months later and the client is saying, ‘Help me, this is happening.’ If we just depended on the tools, it would take a lot longer, in a crisis management situation, to figure that out. So the short answer is yes, they’re being used, but I would say that people overestimate the replication of complex legal minds in the transactional context, especially on the diligence side. The tools are being used – document automation and workflow automation commonly – and we do have many clients, in-house and at law firms, that are using them. You can only cut an MSA so many ways, or an SPA so many ways, or a credit agreement so many ways. But at the end of the day, people have to review and make sure the provisions bespoke to that client are detailed for them. So really, it’s jumping ahead 10 steps as opposed to replacing the entirety of a lawyer’s workflow.

Art Cavazos: So when you mentioned jumping ahead 10 steps, that makes me think of another use for AI, which is in predictive analytics. Can you tell us a little bit about what predictive analytics is, and what role AI can play in that – or maybe is playing in that?

Ashley Carlisle: Sure. This is something that I think all of us in legal tech are very excited about, because we’re hopefully at the beginning of a predictive analytics age. At baseline, it’s using advanced analytics, machine learning, and statistical models to analyze large volumes of data and produce more proactive and predictive information for legal. If that was too long-winded: basically, law could become more of a preventative practice as opposed to a crisis management practice, because we’d know things ahead of time. Is this motion worth it, based on the judge, the court, the type of case? I’ve gone through the corpus of data, and now I have a prediction about whether it’s good for my client. Should I settle? How much will it cost for my team to do a series of transactions for this real estate company, and is my pricing something they’re going to go for? It really could allow the practice of law to operate more like a traditional business than an insular legal institution, which it has been for maybe centuries at this point. But like I said, we’re at the beginning of predictive analytics. There are some companies – Lex Machina is one of them – that have partnered with some law firms, and they are trying to build this out. I think everyone, including myself, is very excited to see how this could transform our industry. Maybe our clients might come to us before there’s a big explosion, which would be super exciting; we could perhaps be a long-term business partner with them. One thing I will say is that right now it’s being used more in a litigation framework, and hopefully with time that will be expanded into transactional work as well. But data is key there, too.

Art Cavazos: And what types of matters could be predicted? You used the word explosion – what types of events are being identified?

Ashley Carlisle: I think mainly it’s a risk assessment and risk management tool. On the litigation side: whether certain infractions are going to lead to litigation, whether it’s worth settling, things like that. On the regulatory side, I guess it would be more of a monitoring of violations – privacy, labor, what have you – and how the organization can best structure the teams, or outside counsel, handling those. One thing I forgot to mention, which I think is very different from the rest of legal tech right now, is that law firms have really been the ones to mold this space and will in the future, which is the opposite of the other categories. In the other categories, there are people like us who are building the tools, and we have a lot of in-house clients – some law firms, but mainly in-house clients leading the charge. In this case, it’s really been law firms that have put a ton of money into trying to build these on their own. Maybe each law firm someday will have its own predictive analytics software that your teams would use at the beginning of each matter to determine who is on your team and what the course of action will be. So perhaps staffing, and how we assess deals and matters going forward, might be different.

Courtney White: I’d like to talk a little bit about AI ethics. This is a very interesting topic; a lot of people have a lot of opinions on the future of AI because of this ethical component. The first question really is whether you could dive into some of the ethical considerations people have with the use of AI in business: the bias that could happen, or is happening already, with the usage of AI, and the role of errors and human oversight. I’d just love to have you start that discussion.

Ashley Carlisle: Yeah, I think this is going to continue to develop, especially in the coming years. We know the big picture right now, which I’ll go over briefly, but there are going to be more things that pop out of the woodwork, because at the end of the day, AI is still a black box. I ask our engineers all the time – many of whom have been working with AI and models since 2008, some since 2003 – if they know exactly how it’s going to act at a certain time, especially with these large LLMs, not just the ones we make but the larger ones. And the reality is no. There are going to be a ton of companies that pop up in the next few years that dive into that black box and figure out what other ethical things we as lawyers should be considering. But today, I would say cognitive bias is huge. The crazy thing about bias with AI is that technically we could eliminate it, but no one’s figured out how yet. So that’s going to be one of those things where we know it exists and we need to be vigilant, but there isn’t a solution to fix it today. The biases come from two places. First, the technology doesn’t create itself; it is developed by developers, who have unconscious quirks. Just like when you’re drafting a document, you’re going to have little quirks in there whether you know it or not; the people under you and above you might see your patterns, the things you like to include and don’t like to include, but you’re not aware of them. So when you have hundreds of thousands of developers and testers working with this AI data, it’s a multiplication of biases that they’re unaware of, so they can’t self-identify and self-correct. The other thing is, if you have a lack of data, you can create an inadvertent bias.

Also, if you include too much data that’s irrelevant, you can create an inadvertent bias. I think in this next generation there are going to be a lot of people who are data stackers and data modelers – a lot of Gen Z and Gen Alpha, or whatever is coming next, are going to be organizing stacks of data for us so we can know exactly what we’re looking at, to the extent we can. In regards to human oversight, like you mentioned, there have been a lot of letters – I think even the UN signed a letter about AI, which I read, and I don’t even think they really understand what AI is, but they tried. In Silicon Valley, many different groups have talked about the role of oversight. We all stand in the same bucket: people need to be looking into this. But no one really knows how that’s going to happen – whether it’s going to be regulation at the federal or state level, or a consortium of tech companies that funds a governance arm. There are already a lot of smart people trying to figure out how to solve this, and we’re already seeing in the legal tech space a lot of companies popping up – basically governance companies willing to help you identify these issues, identify these red flags, and put policies in place. I know a lot of in-house teams are looking to them for guidance and know that it’s going to be an advisory role for many years, not something we can fix today.

The last thing, which I skipped over – sorry – and which is probably the most important, is the privacy concerns, especially with these large LLMs: ChatGPT, the Llamas; there are going to be hundreds more, and there are already, I think, 10 big ones. Most of these are training on your inputs. So most in-house departments will advise people, and most law firms have policies, whether informal or formal, saying, “Please don’t put our information in these models.” I know you want a cover letter or an email draft from ChatGPT, but what you put in there, you might not realize, is proprietary. Even client names, where they’re located, things like that might seem totally chill to you, but this is all being captured, and the footprint is there. Also a big concern is the shadow IT phenomenon: as lawyers, we can tell people what to do and how to do it, but are people actually going to follow directions? As we know, that doesn’t always happen. So even if we have the most ironclad privacy policies telling people not to use these things, on the back end, how are we going to clean up inadvertent proprietary information that’s been put into these models and then scraped by other actors? That’s going to be something all lawyers are going to be figuring out how to clean up over the next decade or so.

Courtney White: Right. So everybody’s going to have to understand privacy, at least at a functional level, to serve their clients. My next question is a little more specific: we have a Diversity Counseling practice here, and we care about it at our law firm. Have you seen any best practices for avoiding the bias that comes with a lack of diversity within this AI space? We already know it exists within the tech space, so I would assume it also exists generally in the creation of AI and these models. I’d just love to know how that is being addressed.

Ashley Carlisle: I think that is one of the big things behind the Silicon Valley letters – kind of the impetus for their concern. One thing I will say is that ChatGPT didn’t follow the standard development protocol for LLMs, which is why we don’t have the answers to these questions now. They decided to release the consumer interface without fully testing it on the population, or on a beta group of the population, and without knowing how to fix these problems. They just decided, ‘Well, the world will help us decide what the problems are.’ They even have policies, which they have been very self-aware about, saying: we don’t know what’s going to happen; please don’t depend on this; please know that we might accidentally have these biases, or maybe we haven’t thought things through yet, what have you. The problem is that now that it’s already out in the world, people like Microsoft, who had been developing at the same time, were forced to release their models when ChatGPT did. Really, they were trying to fight the good fight: let’s test this internally, let’s really think about the biases, let’s do multiple rounds of beta testing, let’s think about these issues of diversity and misinformation and even regionalized points of view in different areas, different cultures, and different languages. And unfortunately, because people were so interested in ChatGPT, their competitors were kind of like, ‘Well, I guess we’ve got to release ours before we’re done with everything.’

I don’t think there is an answer. Like I said, there are going to be a lot of advisory situations in parallel with the adoption of this. Personally, I’m more conservative – hence why I went to law school – so I’m like, ‘We should test this and know what problems it’s going to create before we unleash it into the world.’ I wouldn’t have taken the ChatGPT approach. But I think that’s something people using this data and analyzing the output should realize: not that it hasn’t been thought through, but it definitely could unintentionally be full of misinformation that is detrimental to having points of view or information that reflect the world we live in. We don’t know the background of the developers, we don’t know how many there are – there are so many unknowns. So in a way, we unfortunately became part of their organization, testing and dealing with this technology in real time like they are.

Art Cavazos: Yeah, and you bring up a really interesting point about AI safety and regulation on one hand, and on the other, folks who aren’t so concerned with that and want to push things forward as fast as they can and let the technology and the market sort it all out. But I wanted to ask you: you were speaking about inadvertent bias, but what about, for example, the AI chatbots out there that have programmed values? I think some of them refer to it as constitutional AI or something. Essentially, the idea is that you are not inadvertently but intentionally placing principles or values into the model to affect the outcome in some way. What are your thoughts on that? Will people start using that to create their own intentionally biased models – which could be in a lot of different contexts, like a partisan context – where you actually do want, and the goal is, to produce a chatbot that is going to give a certain set of predictable answers that align with a certain set of values?

Ashley Carlisle: I mean, so is your question, are people going to use it nefariously?

Art Cavazos: Well, will that even be considered nefarious? Or will that just be one use case – people creating their own “talking points” chatbot?

Ashley Carlisle: I mean, they might. What’s been very interesting to see is how people think that technology fundamentally changed with ChatGPT, when in reality it’s just a new interface. We’re already living in our own algorithms, already living on our own sides of the internet. Take Google, for example: my Google results will not be the same as either of yours, which always shocks me. My husband and I, at the end of the day, always talk about what our algorithms told us that day, and his is completely different from mine, right? So what you’re saying holds true in the sense that we’ve already been living in our own value sets and our own preferences, and the technology world around us has just shown us what we want to see, or what it guesses we want to see. I think that’s why there’s been so much disconnect and polarization, and it feeds that – but that’s a whole other conversation. ChatGPT is maybe just an easier way for people to see how it could go wrong, or how you can create biased information for your own uses. In reality, that already exists. If anything, ChatGPT was lightning in a bottle that made people more interested in understanding the world we’re living in, for better or worse. I wish I had a happier answer to that one.

Art Cavazos: Yeah, no, I agree. I think it’s just going to amplify what people are already doing. Just now you can create an AI-generated YouTube video to go along with it, have AI-generated social media accounts pushing it out, and write AI-generated articles. It’s just an amplification tool.

Ashley Carlisle: And, kind of segueing, I also think there’s going to be potentially a lot of misinformation – an age of misinformation. So, you know, First Amendment and defamation attorneys, this might be your decade. It’s going to be an interesting time to parse what is fact – and, as you know, in a court of law, what is fact? How do you determine it? We’re not the most tech-savvy; as lawyers, we’re going to have to increase our tech literacy to meet that duty of competence and make sure we know what is truth and what is not. It’s going to be a very interesting time. But I think it’ll be more incremental than overnight, which doesn’t solve the problem – it’s still something we need to be vigilant about – but it gives me a little bit of solace that we’re not going to have to deal with a world fueled entirely by misinformation tomorrow. It’ll probably be more incremental and more hidden than we realize; it’s just important to keep that in mind, especially as you’re talking, in a legal context, to your clients. Having more data points for what they’re saying, or the information they’re giving you, will probably become more important as well.

Courtney White: I’d love to know, Ashley, if you have any perspective on how law firms can increase their tech literacy. It seems that law is one of those fields that is very slow to change, so I’d love to know your perspective on that, especially since you’ve worked in a law firm.

Ashley Carlisle: Yes, so I would say there are a lot of KM professionals, knowledge management professionals, especially at the big law firms. Obviously, the small to mid-sized law firms have a leaner staff, and typically they don’t have those departments. But what’s really interesting is that the small and mid-sized law firms in the last 20 years have been the most innovative, the test case for technology in the legal industry, because it’s easier to adopt with a smaller set of people: fewer opinions, less red tape. The big law firms have been watching the small and mid-sized firms and have seen that they’ve had more adoption with document automation and workflow automation, the low-hanging fruit, as it were. The main bottleneck right now for tech literacy has been this build-versus-buy phenomenon. As lawyers, we think we can do everything. And to be honest, we probably could. But is it worth our time? Is it worth our money? You give up something to do something. And I think law firms especially are now in this mode of, well, I don’t want to buy it, I could just build it. It’s been a very interesting phenomenon. It used to be that way with doc automation: they would see doc automation solutions and be like, oh, this doesn’t impress me, I could just build my own, and then it never happened. Then it was workflow automation. Now it’s CLM. Now it’s AI.

So I think a big part of the problem is the fact that we are independent thinkers, we are very smart people, and, to our detriment, we try to do everything. You know, we consult for law firms and legal departments, we give presentations on AI, we go through policies, and there are many others like us. I’m really hoping that, in the ChatGPT age, more law firms will just dedicate the time, be okay with learning from other people, and not be scared to ask for information. But I think, at the end of the day, that reluctance is kind of in the DNA of the legal industry. And so, as vendors, we’ve had to be very creative to meet people where they are and just hope for incremental improvements in tech literacy. It’s not that the know-how isn’t out there, because it is. It’s mainly just, unfortunately, our own neurosis that typically is the problem.

Courtney White: We’re natural skeptics. Yes, absolutely.

Art Cavazos: Speaking of which, there’s this kind of ongoing debate, and I’ve been skeptical of both sides, if that’s possible. There’s an argument, and I’ve made this argument in the past myself, that AI could do the menial tasks, the tasks that no one wants to do, and free up people to do the more creative, more strategic work. You were saying earlier that robot lawyers won’t be replacing us anytime soon; you’re still going to need humans to do certain types of work. But we’ve also seen models like DALL·E that can produce artwork, and ChatGPT and others can do lots of the written word, whether that’s copywriting or story-writing, you name it. Those are all very creative areas, and those seem to be some of the first places being hit, so to speak, by AI, and potentially being replaced by it. You saw it with the Hollywood strikes and the reactions there have been. So what are your thoughts on that debate about what AI’s role is, what people’s role in jobs is going forward, and this line that sometimes gets drawn between creative or strategic work and the more menial tasks?

Ashley Carlisle: It’s been an interesting conversation that’s continued to evolve, and the thing that I’ve noticed is it totally depends on the personality of the specific lawyer I’m talking to, in regard to how they define that creative, strategic work, right? I come from a transactional background, and before really having this ongoing conversation, I naturally thought, okay, I’m going to have the most innovative covenants, I’m going to be fighting for the points that no one’s getting on every deal. I guess that’s what my extra time would allow me to do, right? But in reality, if you ask your clients, they don’t really care about that; they would love it if you actually talked with them on the phone a couple of times a week. They would love it if you were a more active counselor to their business. But I think the problem is, especially on the transactional side, we are so used to being scriveners of our documents. That is important, but we forget that we are such trusted advisers. I really think we are going to become more like business counselors and talk with our clients a lot more as we start using this technology. For some lawyers, that’s really exciting. And for some, it’s terrifying, because they don’t want to talk, you know.

A big thing, as you know, is that in law school people typically have to pick, and this is a generalization, between transactional and litigation. Oftentimes the more gregarious people, maybe the drama kids, the people on law review or in those kinds of productions, will go litigation, and the more quiet people will go transactional or tax, or what have you. But soon it could be that everyone’s talking to their clients all the time, everyone’s doing more presentations. I think it really could be a new generation, and probably a more casual situation, for our profession: we’re probably not going to be wearing suits, sitting in our ivory tower instructing people on the rules; we’re probably going to be more in the weeds, maybe with jeans on, at their office a couple of times a week, being a quasi-member of their team. For some people, that’s really exciting; for some it’s not, and I get it. But at the end of the day, I really think that’s where AI is going to push us: into a more conversational role. And I think in the end it’s going to help, because, as you guys see, it annoys me, and I’m sure it annoys hopefully everybody: people don’t realize what lawyers do. People think it’s really easy. They oversimplify it because there’s an asymmetry of information. They only deal with us when they’re upset, and when they’re upset, they’re not going to remember all the clarifications and, basically, parenthetical parts of our job; they’re only going to remember the emotional parts. So they don’t really have an accurate view of their experience with us or remember all the different things we did for them.

As people try to replicate what lawyers do with tech, they’re going to be increasingly disappointed; they’re going to realize they still need us, and it’s going to give us new business in ways we haven’t even realized yet. People just have no idea what we do, so they think it’s easy. But time and time again, even now, we’re getting calls from people saying, oh, I thought I could replace this, but this is awful. And it’s like, yeah, you can’t replace us. I’m sorry. We are neurotic, law school was awful, we took the bar, we work way too hard in our profession. No machine can do that. And we’re looking at all these different issues at one time; you can’t train that kind of complex intent into these models. Maybe in another century, but no time soon are our brains going to be replicated; it’s going to be very average answers. So maybe, for basic wills, if you don’t have a complicated family structure, ChatGPT could be useful as a starting point in that context. But the majority of law is nuanced. And so, like you said, people get skeptical, they get scared of the technology and maybe of how it will change the industry. I think it will change it, but not in the way people are thinking.

Art Cavazos: So we as a group of lawyers can all agree that lawyers cannot be replaced. And that’s how law is made. Right? I think we just made a law.

Courtney White: And this is probably just because I love social media, but I’d love to know your thoughts on how we can harness AI to beef up our social media presence as lawyers. A lot of lawyers are already using social media to innovatively talk about legal issues, so I’d love to know how AI could potentially play a role in that.

Ashley Carlisle: So, I’m a skeptic on this one. I don’t know if you guys have used these large language models to create content, you know, marketing content or social media content or what have you.

Courtney White: No, but I’ve seen it.

Ashley Carlisle: Yep. To me, it’s pretty awful. It’s pretty average, right? With time, it’ll get slightly better, hopefully. But the problem with these models, too, is the inputs being put in: the more people go for the average, awful content, the more that becomes the status quo. So in some ways, when I talk to friends of mine who want to get more into the personal branding side of things, which I think will be more important as the soft skills of law become more important, like I said, we will be having to do more of that. And I think we should have for a long time. Lawyers have this idea that we’re the best-kept secret and that people should know how talented we are. We get upset when people don’t realize it, forgetting that we never told the world, oh yeah, I do this and this and this, and I’m awesome. So I think it’s finally time that people are going to just tell what they do and how awesome they are, and I’m excited for that. But I think AI is going to be good in that, if you’re scared and don’t know where to start, it’ll push you forward: give you an outline, give you ideas, show you resources, be an easier, less intimidating form of Google with a better interface. But I really don’t know how much more than that it’s going to change.

Also, maybe if you need graphics or photos, there are better starting points, like you said. I mean, when I started this job, I had no idea what Figma, Canva, any of these things were, and now I use them on a daily basis. So perhaps things like that. But at the end of the day, for content, for contracts, for legal documents, the status quo will be loud. Those that rise above the pack, those that do it on their own, will be of even more value than they are today, because it’s going to be obvious who’s putting in the work.

Art Cavazos: Well, thank you, Ashley. And thank you, everyone, for joining us on this episode of Future-Ready Business. We really did touch on a lot of things today regarding AI. Our producer Greg suggested we talk about some other things; I don’t think we did that, but hopefully he’ll forgive us and post this episode anyway. Ashley, I hope you’ll join us again soon, and I hope you enjoyed it and had a good time. Are you on social media? Where can folks find you on the internet? I’m kind of not, so that’s why I ask.

Ashley Carlisle: Yes. Well, thank you so much for having me. I know we covered a lot of topics, and there’s so much more to cover here, so I’m happy to come back. To the extent people have any questions about anything we touched on today, feel free to email me, and you can find HyperDraft at our website. We are currently an invite-only platform, so you can sign up for updates and we will let you know what’s going on. And if you have any questions about how we’re helping clients with document or workflow automation, we’d be happy to chat about that. If you want to follow along, we have some fun events coming up this year.

We’re active on all socials at @HyperDraftInc, especially on LinkedIn. And like Courtney said, if you’re not on LinkedIn already, you should be. Only 20% of legal professionals are on there, and it’s a shame. I know when I was in big law, and sorry, I’m going to soapbox this, I didn’t want to go on there. I was kind of scared to have a voice, and I was just like, I’m busy, I don’t want to do this. But I think it would have been better for my professional development, and for figuring out what my business development niche was, if I had done it incrementally over time, as opposed to, as a fifth- or sixth-year associate, freaking out like, oh my God, who are my clients going to be? What’s my point of view? How am I going to do this? So I would think of it as more of a course on personal branding that you do for yourself as you go along. But yeah, find us on LinkedIn at HyperDraft Inc.

Art Cavazos: Thank you. And Courtney, unlike me, you kind of are a big deal online. Where can folks find you on the internet?

Courtney White: People can find me on my social media channels @CourthouseCouture.

Art Cavazos: Fantastic. And if you liked the show, please rate and review us wherever you listen to your favorite podcasts, and share FRB with your friends and colleagues. You can find us, I exaggerated earlier, we do have a @FutureReadyBusiness account on Instagram and Threads. And I do have a Twitter account, but I’m still calling it Twitter, so I don’t know if I’m going to get kicked off at some point.

Courtney White: It’s “X”.

Art Cavazos: Yeah, exactly, so until I get kicked off the platform for continuing to call it “Twitter”, I’m @FinanceLawyer. As mentioned at the top of the show, the opinions expressed today do not necessarily reflect the views of Jackson Walker, its clients, or any of their respective affiliates. This podcast is for informational and entertainment purposes only and does not constitute legal advice. We hope you enjoyed it. Thank you for listening.

Courtney White: Thank you.

Visit for more episodes. Follow Jackson Walker LLP on LinkedIn, Twitter, Facebook, and Instagram.

This podcast is made available by Jackson Walker for informational purposes only, does not constitute legal advice, and is not a substitute for legal advice from qualified counsel. Your use of this podcast does not create an attorney-client relationship between you and Jackson Walker. The facts and results of each case will vary, and no particular result can be guaranteed.