
What Would Einstein Do? The Ethical Conundrums of Artificial Intelligence

By Chris Surdak, JD, Senior IRPA AI Contributor

This week I presented at the Sub-Four retreat on eDiscovery and Information Governance in the legal industry, hosted at Pelican Hill Resort. While these topics may strike some as about as exciting as watching paint dry, others of us find them interesting, relevant, and sometimes even critical to our careers and our lives. In the session I moderated, we discussed the implications of Artificial Intelligence for the legal profession and whether we believe AI will have a meaningful impact on the practice of law. The discussion rapidly detoured into the ethical implications of AI in the law, and I want to revisit that discussion here for posterity.

Einstein’s biggest blunder

To understand what is at stake with the use of AI, I reflect on the life, times, and challenges taken up by none other than Albert Einstein. During his lifetime, one of the problems Einstein struggled with was the age of the universe: how it began, and how it might end. Prior to 1931, Einstein believed, along with most of his contemporaries, that the universe was static and had been that way since its creation. But this belief was shattered by the work of Edwin Hubble, the astronomer who discovered that the universe was expanding, presumably from a singularity we now know as the Big Bang.

In retrospect, that the universe is not static, nor perfectly balanced, seems obvious. The Second Law of Thermodynamics pretty much guarantees that, over long enough time scales, nothing ever remains the same. Entropy and enthalpy are in a constant tug of war with one another, and no system can remain static for long given this battle over the soul of the universe.

But not all dynamic systems are created equal. As we discussed in the retreat session, dynamic systems are either convergent or divergent in nature: over time, a dynamic system will either collapse to a single, stable state (i.e., convergent) or spread apart without bound into increasingly separated states (i.e., divergent). Regarding the universe, astronomers are still arguing over whether it is heading for a convergence, aka the Big Crunch, in which it will collapse back upon itself, or whether it is divergent and heading for a Big Whimper, in which it keeps expanding until it is little more than empty space. Cosmologists still puzzle over this question, but the notion of a static, unchanging universe is no longer a viable option.
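
To make the distinction concrete, here is a minimal sketch, in Python, of the simplest possible dynamic system, the linear map x → r·x. This is my own illustration rather than anything presented at the retreat: with |r| < 1, every starting point collapses toward a single stable state, while with |r| > 1, trajectories race apart without bound.

```python
# A minimal sketch (my own illustration, not from the retreat) of convergent
# vs. divergent dynamics using the linear map x -> r * x.

def iterate(r: float, x0: float, steps: int) -> list[float]:
    """Repeatedly apply x -> r * x starting from x0; return the trajectory."""
    trajectory = [x0]
    for _ in range(steps):
        trajectory.append(r * trajectory[-1])
    return trajectory

convergent = iterate(r=0.5, x0=10.0, steps=10)  # |r| < 1: collapses toward 0
divergent = iterate(r=1.5, x0=10.0, steps=10)   # |r| > 1: grows without bound

print(f"convergent endpoint: {convergent[-1]:.4f}")  # ~0.0098
print(f"divergent endpoint:  {divergent[-1]:.1f}")   # ~576.7
```

A truly static system would be the knife-edge case of r equal to exactly 1, which is precisely why static systems are so rare in practice: any perturbation tips the system into one regime or the other.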

Coming together or falling apart?

What does this have to do with the adoption and impact of AI in the legal industry, or for that matter in society at large? Over time, the societal impact of any technology will also prove either convergent or divergent. The automobile was divergent in its effect on society: it allowed people to move out of cities and live in the suburbs. Cloud computing is proving to be convergent, as organizations eliminate dedicated data centers and move their computational loads to shared cloud resources. It often takes a long time to see which path a given technology will follow, but the lifecycles of technologies are growing shorter as innovation continues to accelerate.

As discussed at the retreat, AI has already started to have an impact on our society, albeit a relatively small one thus far. It is fair to say that the organizations driving the development and use of AI are those that command the largest pools of computational resources, the largest staffs of talented data scientists, and the largest piles of data with which to perform training and analysis. These are the current batch of digital giants whose names are known to us all: Apple, Google, Amazon, Facebook, Twitter, and Microsoft. These organizations have both the technical wherewithal to bring AI to fruition in our world and the financial incentive to do so.

This raises some questions. Will these organizations use AI to achieve convergent results or divergent results? And which of these is the more ethical, versus the more destructive, path? To answer this, we can look at how these organizations have used other, similarly disruptive technologies and make the logical leap that they will likely follow the same path with AI.

The canary in the coal mine is “tweeting”

I’ll use social media and cloud computing as the exemplars, since much of the data and analytics that come from exploiting social media and the cloud will feed the AI beast. Arguably, Amazon and Microsoft have used these technologies to achieve convergence: driving for efficiency and better outcomes. Tesla, too, has demonstrated the use of these technologies to advance AI toward a convergent, and beneficial, end.

But it can be argued that the other digital giants of Silicon Valley have used these technologies for divergent, and in some cases destructive, purposes: to amass money and pursue power. As recently revealed regarding Facebook, the social media giants purposefully use their platforms in a divergent way, driving wedges between different groups of people. Their business model is built on exploiting the “Attention Economy,” in which eyeball-minutes and likes are currency, and the goal is to keep people staring into their smartphones for as long as possible.

Many humans are attracted, if not addicted, to drama. Mass media has known this for generations, hence the mantra of “if it bleeds, it leads” on the nightly news and in the daily newspaper. Controversy attracts attention, and with social media powered by analytics and industrial-scale psychometric profiling, it is quite easy to determine who is susceptible to falling for, and contributing to, controversy.

If it bleeds, it leads

This is the reason many, if not most, of the social platforms have grown so toxic: Toxicity sells. Pit one group against another, one psychological “clan” against a rival, and watch the post volume and advertiser revenue spike. The financial incentives to push for division (i.e., divergence) are exceptionally high, and the social platforms follow this path with a vengeance.
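
To see why the incentives play out this way, consider a deliberately toy model, entirely hypothetical and not any platform’s actual ranking code, of an engagement-maximizing feed: if predicted attention determines what gets shown, and controversy is assumed to multiply attention, then divisive content rises to the top automatically, with no one ever explicitly choosing toxicity.

```python
# A deliberately toy model (hypothetical; not any platform's actual code) of
# an engagement-maximizing feed. The assumption that controversy multiplies
# attention is the article's premise, encoded as a single parameter.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    base_interest: float  # topical appeal absent any controversy
    controversy: float    # 0.0 (benign) to 1.0 (maximally divisive)

def predicted_attention(post: Post, outrage_multiplier: float = 3.0) -> float:
    """Toy engagement model: controversy multiplies expected attention."""
    return post.base_interest * (1.0 + outrage_multiplier * post.controversy)

feed = [
    Post("Local bakery opens downtown", base_interest=0.6, controversy=0.05),
    Post("Us vs. them: who is ruining everything", base_interest=0.4, controversy=0.9),
    Post("Park cleanup this weekend", base_interest=0.5, controversy=0.0),
]

# Rank purely by predicted engagement: the divisive post wins the top slot
# despite having the lowest underlying interest.
for post in sorted(feed, key=predicted_attention, reverse=True):
    print(f"{predicted_attention(post):.2f}  {post.text}")
```

Note that nothing in this sketch optimizes for division as a goal; division falls out as a side effect of optimizing for attention, which is precisely the divergent dynamic described above.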

Inasmuch as AI will be an extension of the data and analytics that serve these platforms, the likelihood of AI producing divergent, rather than convergent, results for our society appears rather high. This is disappointing, though not unexpected. It would be a mistake to believe the worst outcome is unavoidable, that AI must contribute more bad to the world than good; but the trend of AI use causing more harm than good has already been established. The ability to cause this sort of division and harm at scale leads to some chilling prospects. Let’s all hope it does not come to that.

The crossroads of AI

So, what can we do to prevent the harmful use of AI? How do we ensure AI is used for convergent, positive, predictable results rather than divergent, negative, and chaotic ones? We collectively attempted to address this in the retreat session, and while there are no definitive answers, there were some useful suggestions. First and foremost, our society needs much more awareness and transparency. The digital giants have been extremely reticent to reveal what they do with these technologies, but recent disclosures shed light on some less-than-ethical uses of AI tools. Greater transparency is essential.

Second, we must achieve a greater degree of accountability for decisions made regarding the use of AI. If there are few or no negative consequences for using these tools unethically, and massive financial incentives for being a bad actor, is it any wonder we’re seeing these results? In many jurisdictions, laws protect these companies from culpability for their actions. Is it any wonder, then, when they act to improve their profitability at the potential expense of societal cohesion? The social cost we’re paying is currently unknown, but given the astronomical market capitalizations of these companies, it must be exceedingly high. Let’s not forget the old axiom, “You don’t get something for nothing.” If these companies are collectively worth ten or more trillion dollars, the social cost we’ve most likely paid for allowing them to grow to this size is almost certainly of the same order of magnitude, if not greater.

Finally, it is imperative we maintain a degree of human oversight and control of these technologies. True “general AI,” of the sort we see in science fiction, is unlikely to appear any time soon. And if we do achieve such self-aware AI, it is likely that by the time we discover we’ve achieved it, we will have already been enslaved by it. Such an AI would likely find many human traits distasteful if not downright repulsive. But we are what we are, and our humanness is both our blessing and our curse. Regardless, we must not allow these technologies to leave our full control, lest we lose the ability to make decisions of “Right” and “Wrong” for ourselves.

While such a dystopian future remains unlikely, are we really willing to risk everything we have, and everything we’ve achieved as a species, by not being vigilant? The more potentially destructive a technology is, the greater the care we must take in protecting ourselves from it. This is why you can’t buy plutonium on Amazon and likely never will be able to. AI has the potential to be exceedingly dangerous to humanity, and as such, a high degree of caution should be applied to its use. I don’t see such an air of caution and humility among many in the AI community, and that is somewhat disturbing.

The AI genie has yet to fully emerge from its bottle, but we are likely close to such a decanting. By the end of this decade, I foresee most, if not all, organizations leveraging AI to some degree, if only to remain competitive. Because of this inevitability, I believe it’s important we have these discussions here and now, before we get much further down the path toward wide-scale AI adoption. If we reasonably assess the level of social damage already caused by the various social platforms, we can make reasonable estimates of the potential harm AI could cause if not properly controlled. The costs could be extraordinarily high. I hope we, as an industry of practitioners in this space, take it upon ourselves to have these discussions more frequently and more openly; this article is an attempt to do just that.

At the end of his career, Einstein reflected on his resistance to an expanding universe and reportedly called the cosmological constant, the term he had added to his equations to force a static cosmos, the biggest blunder of his career. Let us hope that those of us who are proponents of the use of AI don’t have similar lamentations in our future regarding our assessment of AI’s potential for convergence or divergence.
