The term ‘black box AI’ refers to situations where machine learning technology operates in a way that is not visible or comprehensible to the user. In a healthcare context this presents a problem, as treatment decisions need to be explained to patients, yet black box AI solutions are potentially inscrutable by their very nature.
This DCU Research collaboration posits that there is not necessarily a ‘black and white’ distinction between explainable and not explainable. An engineer, for example, would require far more detail than a patient might need. The guiding principle should be that explanations fulfil the patient’s needs and allow them to contest a diagnosis.
As a society we no longer engage only with completely understandable and transparent objects and tools. We use bridges, elevators, cars, trains and airplanes on a regular basis without necessarily having any detailed or comprehensive understanding of their construction and functioning. We know that regulatory frameworks and quality standards are in place, and that domain experts with appropriate credentials maintain, control and direct these things; this paper argues that machine learning in healthcare should be treated the same way.