Document Type


Publication Title

Xavier Health AI Summit Proceedings: Whitepaper

Publication Date

Summer 2018


Goals of this paper

Healthcare is often a late adopter of new techniques and technologies. This worked to our advantage in developing this paper: we relied on lessons learned from CLS in other industries to guide its content. Appendix V includes a number of example use cases of AI in healthcare and other industries.

This paper focuses on identifying the unique attributes, constraints, and potential best practices of what "good" development might look like for Continuously Learning Systems (CLS), with applications ranging from pharmaceutical uses in new drug development and research to AI-enabled smart medical devices. Although the emphasis of this paper is on CLS, some of these issues are common to all AI products in healthcare. Additionally, certain topics should be considered when developing CLS for healthcare but are outside the scope of this paper; they will be touched upon briefly, but not explored in depth. Examples include:

Human factors – a concern in the development of any product. What unique usability challenges arise when collecting data and presenting results? Previous efforts at generating automated alerts have often created problems (e.g., alert fatigue).

Cybersecurity and privacy – holding a massive amount of patient data makes an attractive target for hackers. What steps should be taken to protect data from misuse? How does the European Union's General Data Protection Regulation (GDPR) affect the use of patient data?

Legal liability – if a CLS recommends an action that is then reviewed and approved by a doctor, where does the liability lie if the patient is negatively affected?

Regulatory considerations – medical devices are subject to regulatory oversight around the world; indeed, whether a product is even considered a medical device depends on the country. AI poses an interesting challenge to traditional regulatory models, and some organizations, such as the FTC, regulate non-medical devices.

This paper is not intended to be a standard, nor does it advocate one and only one method of developing, verifying, and validating CLS – rather, it highlights best practices from other industries and suggests how those processes might be adapted for healthcare. It is also not intended to evaluate existing or developing regulatory, legal, ethical, or social consequences of CLS. This is a rapidly evolving subject, with many companies, and now some countries, establishing their own AI principles or codes of conduct that emphasize legal and ethical considerations, including fairness, reliability and safety, and transparency around how the results of these learning systems are explained to the people using them.5

The intended audience of this paper is developers, researchers, quality assurance and validation personnel, business managers, and regulators across the medical device and pharmaceutical industries who would like to learn more about CLS best practices, as well as CLS practitioners wanting to learn more about medical device software development.

Publication Information

Xavier University, Xavier Health Organization