Generative AI is prompting many fields of expertise to re-evaluate their fundamental practices and their value. Focusing on the law, a discipline long warned about the potential threats of automation, Julia Gentile describes the socio-technical challenges facing the profession and considers how new technologies may change the balance of relationships between lawyers, technologists, and clients.
We are still grappling with the potential impact of AI on our society. No field seems immune from its influence, including the law in the broadest sense. Lawyers, clerks, and legal advisers around the world are exploring what AI means for their work. Two distinct questions currently occupy lawyers when it comes to AI, both of which demonstrate the power of this technology (and of tools like ChatGPT).
The first question is how AI can be introduced into the legal profession to augment the work of professional lawyers. The most immediate benefit of AI is increased efficiency through automation. Tools such as ChatGPT could, for example, be used to draft contracts or summarise judicial decisions, and some law firms have already announced trials of "robot lawyers".
The second, more consequential question is whether AI can perform at least some of the tasks traditionally carried out by lawyers. Can AI replace lawyers? AI inevitably raises questions that affect the legal profession as a whole, and this is where I want to focus.
The development of technology increases the demands on, and expectations of, lawyers. Most lawyers currently in practice have no training in data science, algorithms, or artificial intelligence. There is therefore an information gap affecting the profession: any future use of AI will require lawyers to upgrade their skills and learn at least the basics of these technologies.
How much knowledge is required is debatable. Should lawyers become AI experts? This may be unattainable for the current generation. A reasonable alternative would be for lawyers to work more closely with AI experts, although this would make lawyers somewhat dependent on them. Moreover, such collaborations may not necessarily produce AI systems suited to lawyers' purposes: developing AI tools for lawyers will likely take time and require the lawyers themselves to contribute to training them. Even then, by the time a system is trained, a new AI technology may emerge that renders the purpose-built tool obsolete. If so, the interdisciplinary process, under the pressure of the technological race, will start all over again.
This means that AI may not make life in the legal profession easier; it risks making it considerably harder, driven by the demands of technology rather than those of clients and the law.
2. Transforming client relationships?
The complexity resulting from AI's impact on the legal profession is also changing the relationship between lawyers and clients. This is because the way AI is used in lawyers' work affects the responsibilities of legal advisers and their professional obligations towards their clients. For example, lawyers are required to provide their clients with clear, non-misleading information (see, for example, the SRA Code of Conduct). The use of AI can make this duty much harder to discharge, for two reasons.
First, lawyers need to understand AI tools in order to be able to explain them. As already mentioned, the current digital literacy of lawyers may be limited, and in any case it is difficult even for experts to trace how an AI system reached a decision, owing to the "black box" problem. This compromises lawyers' ability to inform their clients.
Second, AI systems such as neural networks can learn on their own and acquire behaviours that were not foreseen at the design stage. In addition, it is well known that AI tools can produce information that looks correct but is not, so-called hallucinations. This means these tools can be unpredictable, with significant risk of error and potential harm to clients. Recently, an American lawyer used ChatGPT to prepare a court filing, which went awry when the tool invented case authorities to support his arguments. He now faces sanctions.
As a result, lawyers' duty to provide clear information and not mislead their clients can be challenging to fulfil when AI is used in professional practice. These challenges ultimately affect the public's trust in lawyers.
3. Public perception of a legal profession using AI
Equally important is the transformation of public perception and trust that the introduction of AI into the legal profession can bring about.
Clients who encounter "robot lawyers", automated legal advice, or the risks of using AI to draft contracts or other legal documents may end up losing trust in lawyers. The human dimension of the lawyer-client relationship is essential to building trust, understanding, and empathy. Clients need to know that their lawyers understand their concerns and will try to resolve them. But this dynamic of trust could be seriously undermined by robot lawyers, or by lawyers' over-reliance on AI tools.
For example, if a technology such as ChatGPT is used to provide preliminary legal advice, it may not fully understand the client's requirements. Nor may it offer the same level of empathy, human connection, and legal creativity as a human lawyer. The central question is this: would we feel comfortable taking the most important advice of our lives from a robot? As troubling as the question sounds, some might answer yes. Admittedly, even the best human lawyers are not perfect, and not all human lawyers are the cream of the crop, so one might wonder whether we should abandon our faith in human lawyers altogether. But, as with automation bias in other fields, this view ignores the fact that machines will also have flaws (albeit different ones, not all of which are currently known or properly understood). The task for lawyers will be to demonstrate to the public that they are evolving with the help of new technologies aimed at reducing human error. This will require steering between Scylla and Charybdis: the desire to "keep the law human" on the one hand, and blind faith in the "superior" powers of poorly understood and still-developing technologies (which will inevitably err) on the other.
Subtler points aside, could we be just as passionate about Erin Brockovich or Harvey Specter if we knew they were merely cold computing machines?
The content created on this blog is for informational purposes only. This article represents the views and opinions of the author, and does not reflect the views and opinions of the Impact of Social Sciences blog or the London School of Economics and Political Science. Please review our comments policy if you have any concerns about posting a comment below.
Image credit: Google DeepMind via Unsplash.com.