Sure, you trust your machine learning (ML) solutions, but do your customers? Do regulators? Courts of law? How does an ML-based solution inspire trust? Easy, says AI explainability (XAI) specialist Monitaur: it’s all about demonstrating transparency, compliance, fairness, and safety.
Great, you say, but how do you do that?
Happily, Monitaur has a few ideas. “At a high level I think there’s a future place where there’s a need for some objective validation that this is a monitored, trusted system. So, if you think about AI making decisions about your health, your finances, your employment, your insurance … you’d like to know that this thing has some controls around it,” Anthony Habayeb, Monitaur’s co-founder and CEO, tells Bloor Group.
“So, you have regulators say, ‘Wait a second, you’re gonna approve somebody for a job or a loan or an insurance product and you can’t explain how you did that, that doesn’t make sense.’ So … we’ve created Monitaur very intentionally to help create a solution that helps bring those things together.”
Monitaur’s notion of “objectivity” is at least partially grounded in the concept of AI explainability. For this reason, it is more pragmatic or empirical than ideal: it is about showing or demonstrating objective explainability, rather than (as in philosophy) positing or claiming that something – in this case, the results that are the basis for the decision – is “true.” The proof and the truth are in the demonstration – and, specifically, in the consistent replication of both results and decisions.
In this respect, “explainability” as Monitaur conceives it seems reminiscent of the “impersonal” objectivity that Theodore M. Porter described in his 1995 book Trust in Numbers.
The nominal function of objectivity in decision-making is to demonstrate strict adherence to policies and processes, the uniform application of standards, and the replicability of results. As Porter describes it, however, the actual purpose of objectivity – its impersonal dimension – is to eliminate subjective judgment, bias, error, and arbitrariness. One neat thing Porter does with this concept of objectivity is to show how it evolved as a kind of tool in bureaucracies: by demonstrating strict adherence to predefined processes, by developing and applying uniform standards, and by consistently reproducing their results, people in human organizations are able to protect themselves against criticism or censure.
“So, we bring forward with Monitaur an approach to machine learning assurance through our software, which is really a set of processes and software that orchestrate people, processes, and tech to deliver ultimately assurance … and you achieve assurance through good governance,” Habayeb explained.
Explaining ML
As Porter shows in Trust in Numbers, objective explainability is essential whenever a disruptive force challenges entrenched powers. In such cases, would-be disruptors use objectivity as a tool to shield themselves from critics and skeptics. It isn’t hard to slot ML-based automation technology into the role of would-be disruptor. And it is easy enough to see how, and why, people might be suspicious of ML-based automation. Like Porter, Habayeb believes the best way to convince skeptics is via objective – that is, transparent, testable, and replicable – explainability.
And one way you can make ML objectively explainable is by building observability into it. “I do want to make sure that I have visibility into the system, I do want to be monitoring for outliers or anomalies when it’s actually running,” Habayeb told us, arguing that this is just one piece of Monitaur’s platform: “I do need to maintain a source of truth. I do need logging and versioning, but also … this is not just a technology issue. It is also a people-management issue.”
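To make the monitoring piece concrete, here is a minimal sketch of what runtime input monitoring can look like. Everything in it – the FEATURE_STATS table, the check_inputs function, the threshold – is illustrative, not Monitaur’s API; the idea is simply to compare incoming feature values against the distribution the model was trained on and flag anything far outside it.

```python
import math

# Per-feature mean and standard deviation captured at training time
# (hypothetical values for illustration).
FEATURE_STATS = {
    "age": (42.0, 12.5),
    "annual_income": (58_000.0, 21_000.0),
}

def check_inputs(inputs: dict, z_threshold: float = 4.0) -> list:
    """Return the names of input features that look like outliers."""
    alerts = []
    for name, value in inputs.items():
        mean, std = FEATURE_STATS[name]
        z = abs(value - mean) / std if std else math.inf
        if z > z_threshold:
            alerts.append(name)
    return alerts

# An incoming decision request with an implausible income trips the monitor.
print(check_inputs({"age": 44, "annual_income": 2_000_000}))  # ['annual_income']
```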
This is one of the reasons Monitaur takes a lifecycle-oriented approach to this problem, starting with core project governance. “[Monitaur] lets the governance team establish the controls that matter [and then] document what’s the risk, what’s the [appropriate] control, and how do you test that control. We offer 32 proprietary risk and control cards that can be leveraged by a customer to be applied into their process,” Habayeb said. “We actually bring people into our application as the system of record. So, the way that the governance application works is first we enable someone to create their policies … [then] we enable – through emails being sent – the assignment of cards and tickets, the work to be evidenced, and then we allow the reportability that shows: these are the controls, here’s proof that they were done, here’s the proof that they were verified. Here’s my evidence of good governance practices.”
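As a rough illustration of what a risk and control card might capture as a system-of-record entry, consider the sketch below. The ControlCard structure and its field names are assumptions made for the example, not Monitaur’s schema; the point is that each card pairs a risk with a control, a test procedure, an owner, and an evidence trail.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ControlCard:
    risk: str                # what could go wrong
    control: str             # the policy or process that mitigates the risk
    test_procedure: str      # how the control is tested
    assignee: str            # who is accountable for the work
    evidence: list = field(default_factory=list)  # attached proof items
    verified_on: Optional[date] = None            # when verification happened

# A hypothetical card moving through the evidence workflow.
card = ControlCard(
    risk="Model uses a protected attribute in loan approvals",
    control="Pre-deployment fairness review of all input features",
    test_procedure="Quarterly audit of the feature list against policy",
    assignee="governance-lead@example.com",
)
card.evidence.append("fairness-review-2024-Q1.pdf")
card.verified_on = date(2024, 4, 2)
```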
Monitaur also aims to automate, to the extent practicable, the instantiation of governance controls in software once they’ve been formalized. “We provide libraries that connect to the actual model making decisions about people. So … [imagine] a model that diagnoses whether or not somebody has diabetes. It could be a model that says … ‘Should I give you life insurance?’ Yes or no. ‘Should I hire you?’ Yes or no,” he explained. “And with this library, we are logging the model, specifically [with regard] to future needs of proving governance and control around that model, we capture the full model … all dependencies, all the inputs, the outputs, the state of the system.”
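In spirit, such a logging library might look something like the following sketch: every prediction is written out with its inputs, its output, and the exact model version that produced it, so the decision can be reconstructed later. The log_decision function and the JSONL file are hypothetical stand-ins, not Monitaur’s actual library.

```python
import json
import time
import uuid

LOG_PATH = "decisions.jsonl"

def log_decision(model_version: str, inputs: dict, output) -> str:
    """Append one decision record to an append-only audit log."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,  # ties the decision to exact code/weights
        "inputs": inputs,                # everything the model saw
        "output": output,                # what it decided
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# e.g., a diabetes-screening model records each prediction as it is made.
decision_id = log_decision("diabetes-model-v1.3.0",
                           {"glucose": 148, "bmi": 33.6}, "positive")
```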
The Monitaur software also permits an organization to collect, analyze, and rapidly identify the basis for the decisions an ML model makes. “We provide a front-end that shows what was the decision, what inputs went into that decision,” Habayeb said, stressing that – even today – most organizations lack any means to do this. “That compliance manager, that governance [manager], that auditor, they have no place today to go in to say what was the decision and what went into it.”
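Given a decision log like the one sketched above, the auditor-facing lookup Habayeb describes reduces to a query over it. Again, find_decision is a hypothetical illustration, not Monitaur’s front-end:

```python
import json
from typing import Optional

def find_decision(decision_id: str,
                  log_path: str = "decisions.jsonl") -> Optional[dict]:
    """Scan the decision log for a single decision's full record."""
    with open(log_path) as f:  # assumes the log from the previous sketch exists
        for line in f:
            record = json.loads(line)
            if record["decision_id"] == decision_id:
                return record
    return None

# An auditor can now answer: what was decided, and from what inputs?
record = find_decision("4f1c9b2e-0000-0000-0000-example000000")
if record:
    print(record["output"], record["inputs"])
```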
Monitaur is betting that by empirically demonstrating objective explainability, it can smooth the way for ML-based decision automation and similar technologies.
Habayeb sees this as an uncontroversial idea, citing the emergence of a teeming ecosystem of open source XAI libraries, tools, frameworks, practices, and so on. “I think explainability will be a commodity … [and] there’s a lot of good academic work on this front. A lot of good open source tooling that provides explainer libraries,” he told us, stressing that the real value has less to do with the mere fact of objective explainability than with the fact that – because the model, its data, and its supporting collateral are objectively explainable – the decision itself can be tested and consistently replicated.
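As one example of the open source tooling Habayeb mentions – not something Monitaur ships – scikit-learn’s permutation importance is a simple model-agnostic explainer: it scores each input feature by how much shuffling its values degrades the model’s predictions.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Fit a model on the classic diabetes-progression dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Score each feature by how much shuffling it hurts model performance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:3]:
    print(f"{name}: {score:.3f}")
```

Explainers like this answer what drove a decision; whether the decision can be reproduced on demand is the separate, auditing question Habayeb turns to next.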
“If I’m an auditor, I don’t care why your data science team … says something happened. I want to test that myself. So, we provide a decision- and model-auditing workflow. Remember we’ve got this source of truth of every decision, [as well as of] every decision that’s been made to build this model,” he said.
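That audit can be pictured as a replay test: feed each logged decision’s inputs back through the same versioned model and confirm the recorded output comes back. The replay_audit function below is an illustrative sketch under that assumption, not Monitaur’s workflow API.

```python
import json

def replay_audit(log_path: str, model, model_version: str) -> list:
    """Return IDs of logged decisions the model fails to reproduce."""
    failures = []
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            if record["model_version"] != model_version:
                continue  # only audit decisions made by this exact version
            # Re-run the model on exactly what it saw the first time.
            if model(record["inputs"]) != record["output"]:
                failures.append(record["decision_id"])
    return failures

# `model` is any callable over the logged inputs dict; for a deterministic,
# versioned model, an empty list means every decision replayed consistently.
```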
It isn’t “just a matter of time” before regulatory and statutory bodies take action on this, Habayeb concluded: it’s already happening. “The FDIC actually talks about … I want you to be able to demonstrate how you are managing fairness, accountability, compliance, transparency, and safety,” he said. “Across all regulators, you have this expectation of, ‘Show me that you’re doing these things, these translate to me as a consumer,’” he told Bloor Group.
About Stephen Swoyer
Stephen Swoyer is a technology writer with more than 25 years of experience. His writing has focused on data engineering, data warehousing, and analytics for almost two decades. He also enjoys writing about software development and software architecture – or about technology architecture of any kind, for that matter. He remains fascinated by the people and process issues that combine to confound the best-of-all-possible-worlds expectations of product designers, marketing people, and even many technologists. Swoyer is a recovering philosopher, with an abiding focus on ethics, philosophy of science, and the history of ideas. He venerates Miles Davis’ Agharta as one of the twentieth century’s greatest masterworks, believes that the first Return to Forever album belongs on every turntable platter everywhere, and insists that Sweetheart of the Rodeo is the best damn record the Byrds ever cut.