Artificial intelligence is being discussed in the mainstream media more than ever.

I see artificial intelligence touted as a panacea for all sorts of society’s problems.

It is easy to understand why when you see robots being mooted as substitutes for almost everything in our day-to-day lives.

There is no doubt that artificial intelligence (AI) is here to stay and is starting to play a greater role in our everyday lives.

It is a technology that will develop, especially over the next few years, into areas of use that we have never even contemplated.

Our personal and especially our working lives will be increasingly affected by AI in some shape or form.

Those businesses that adopt AI technology will no doubt have an early advantage over their competitors.

But it is also early days yet.

Sorry if you think this tech lawyer is spoiling your fun.

Those businesses adopting AI will need to be very careful.

They will need to be especially careful about the legal implications of using such largely untested AI technology and the way it manifests itself in practice through its actions and decisions.

Developers of AI technology openly accept that it is a technology in development.

They also say that it is a technology that can behave in ways that cannot be easily explained and that can be unpredictable.

Add to this the machine-to-machine learning nature of AI and you can see the risks as clearly as the advantages.

Ethics aside, the key legal areas of concern for businesses considering adopting AI are liability issues and risk management relating to the use of mass data.

They also relate to decision-making based on flawed assumptions arising from that data: bad decisions breeding even more bad decisions.

The use of AI to sort through mass data and facilitate automated decision-making is attractive for businesses.

However, businesses that think they can simply trust the algorithms in the AI they are using, and rely on them to justify any decisions and actions those AI agents make, are missing the point when it comes to their organisation’s risk management strategy.

AI needs human supervision.

Blaming the AI agent when things go wrong just won’t cut it with clients, regulators or, ultimately, the courts.

Risk and liability assessment and mitigation are, and will remain, critical when assessing the use of AI.

Proper collaboration between developers, company directors and lawyers is essential, as is a well-thought-out contingency plan if things do go wrong.

When things go wrong with AI, the damage can compound at a scale potentially similar to the efficiency gains it promises when things go right.

We are beginning to see businesses espousing the virtues of AI technologies as game-changing without considering what happens if something goes wrong. That is before we even take into account the ethical issues that arise when things do go wrong.

The sorts of things that can go wrong include insufficient testing, coding errors, data leakage, privacy breaches, a lack of properly trained operators and a poor system for checking the checkers.

The role of lawyers in identifying risks and helping to mitigate them is well documented historically.

Our advice is to get specialised legal advice, and to get it early.

Paul Ippolito is Principal of Ippolito Lawyers
