Henry Fraser

Accountability as a value proposition

Updated: May 7, 2021

There is a fast-growing cottage industry of consultants and advisers offering businesses guidance on AI ethics. Their value proposition is that AI ethics is essentially a risk category, to be added to the many other risks for which businesses need to develop prudent strategies. Failure to implement AI in accordance with 'ethical AI' frameworks (whether regional, national or industry-based) is conceived as a business risk; developing ethical practices in the use of data, machine learning, and other forms of automated decision-making is packaged as risk management.

For example, there are serious public relations implications to deploying an AI that exhibits bias, especially against marginalised or protected groups. Google suffered serious blowback when its image recognition algorithms labelled African American faces as gorillas. Amazon faced a scandal when it came to light that a CV-screening recruitment AI was systematically discriminating against women, because it had been trained on ten years of recruitment data in which female candidates were underrepresented. Similar examples abound. Most businesses do not have the resilience of massive oligopolies like Google and Amazon, and are unlikely to weather scandals of this kind so well.


I am all for the wide adoption of ethical approaches to the use of AI, though I agree with most sensible people that ethics principles are only a first step toward improving the accountability of the use of AI and data.


But if I were selling AI ethics consulting services, I would want to emphasise more than risk. I would also emphasise the commercial value of accountability, which is to say the value of trust.


Frank Pasquale, a leading thinker on algorithmic accountability, has observed that lawyers are at a disadvantage relative to various automated and semi-automated legal tech tools, because lawyers are held to higher standards of accountability. He argues that software-based replacements for lawyers should be held to the same standards, to ensure that everyone is operating on a level playing field.


I do not have a firm opinion one way or the other on that particular recommendation. What does occur to me, though, is that, all other things being equal, people would generally prefer legal advice from a lawyer to legal advice from software. By that I mean: assuming cost were not an issue (cost-saving is the biggest appeal of legal software), most people would probably be more comfortable relying on a certified lawyer's advice than on the advice of an automated system. This is not just because people would rather relate to humans than to machines.


It is also because the higher accountability of lawyers recommends them. Lawyers are obliged to maintain very hefty insurance policies, so we know we have some recourse if they fail us dramatically. They are obliged to hold appropriate qualifications, so we have reasonable assurance that they know what they are talking about. They are obliged to maintain high standards of ethics and confidentiality, and there are very serious consequences for failures of professional legal ethics. And they are obliged to undertake continuing professional development. All of this builds trust. People might not like lawyers, but when it comes to resolving legal problems they trust lawyers over anybody else.


The legal profession and its gatekeeping bodies (Bar Associations, Law Societies) place great value on keeping the profession exclusive and maintaining very high standards. It's not just that they don't want lawyers to get a bad name (risk management). It's that they know their high professional standards are what allow lawyers to charge so much money!


Accountability is worth a lot to lawyers, and if we align carrots and sticks properly, it can be worth a lot to AI developers and service providers.