McMenamin - Slidedeck for Slideshare - Therapist AI & ChatGPT- How to Use Legally & Ethically.pptx
About This Presentation
Slidedeck Therapist AI & ChatGPT- How to Use Legally & Ethically
Size: 18.53 MB
Language: en
Added: Sep 14, 2023
Slides: 128 pages
Slide Content
Therapist AI & ChatGPT: How to Use Legally & Ethically Joseph McMenamin, MD, JD Marlene M. Maheu, PhD 2.5 CME & CE Hours
Joseph P. McMenamin, MD, JD, FCLM Joe McMenamin is a partner at Christian & Barton in Richmond, Virginia. His practice concentrates on digital health and on the application of AI in healthcare. He is an Associate Professor of Legal Medicine at Virginia Commonwealth University and Board-certified in Legal Medicine.
Marlene M. Maheu, PhD Marlene Maheu, PhD has been a pioneer in telemental health for three decades. With five textbooks, dozens of book chapters, and journal articles to her name, she is the Founder and CEO of the Telebehavioral Health Institute (TBHI). She is the CEO of the Coalition for Technology in Behavioral Science (CTiBS) and the Founder of the Journal for Technology in Behavioral Science.
And you? Please introduce yourself with your city and specialty.
Learning Objectives: Participants will be able to: outline an array of legal and ethical issues implicated by the use of therapist AI and ChatGPT; name the primary reason ChatGPT is not likely to replace psychotherapists in our lifetimes; and outline how best to minimize therapist AI and ChatGPT ethical risks today.
Preventing Interruptions Maximize your learning by: Making a to-do list as we go. Turning on your camera and joining the conversation throughout this activity. Muting your phone. Asking family and friends to stay away. We will not be discussing all slides.
Mr. McMenamin speaks neither for any legal client nor for Telehealth.org. Is neither a technical expert nor an intellectual property lawyer. Offers information about the law, not legal advice. Labors under a dearth of legal authorities specific to AI. Speaker Disclaimers
Must treat some subjects in cursory fashion only. Presents theories of liability as illustrations, conceding nothing as to their validity. Criticizes no person or entity, nor AI. In this presentation, neither creates nor seeks to create an attorney-client relationship with any member of the audience. Speaker Disclaimers
What Uses Can Mental Health Professionals Make of AI?
If you have begun or are considering using AI or ChatGPT in your work, please outline those activities in the chat box.
We will proceed with the presentation while you do so, then we will come back later.
What are AI and ChatGPT?
How are AI & ChatGPT being used to help healthcare practices? Three primary areas: Information Retrieval and Research; Personalized Case Analysis, Diagnosis & Treatment Plans; Client & Patient Education.
Information Retrieval and Research: Programs like Elicit and Claude can provide advanced research capabilities that exceed traditional methods. For example, Elicit's AI can extract information from up to 100 papers and present it in a structured table. It can find scientific papers on a question or topic and organize the data collected into a table. It can also discover concepts across papers to develop a table of concepts synthesized from the findings.
Ethical Considerations: Ethical research practices must still apply, ensuring the information retrieved is evidence-based, peer-reviewed, and handled in keeping with privacy regulations such as HIPAA. Issues of ChatGPT copyright ownership must also be considered: just because a system allows us to act does not mean we should.
Personalized Case Analysis, Diagnosis & Treatment Plans: Programs like OpenAI, Bard, Monica, and others can analyze and detect behavioral health issues and potential diagnoses from "prompts" (that is, commands) ranging from short behavioral descriptions to vast patient datasets. They can query for signs of substance use, self-harm, depression, suicidality, etc. They can also engage in brainstorming sessions to explore various possible diagnoses and to identify which facts to collect or areas to explore to arrive at a definitive diagnosis. They can incorporate extensive patient data, including medical history, psychological assessments, and patient demographics.
They use natural language processing (NLP) to extract relevant information from clinical notes, interviews, and questionnaires. They can be instructed to incorporate structured data such as diagnostic codes (ICD-10), medication history, and desired treatment outcomes. These chatbots can also be given established clinical guidelines or consensus documents and asked how a treatment plan should be adjusted to comply with them.
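To make the workflow the slide describes concrete, here is a minimal Python sketch of a structured, guideline-referenced prompt using OpenAI's chat completions client. The field names, guideline text, and model choice are illustrative assumptions rather than anything from the presentation, and any real use would need to satisfy the privacy, consent, and training requirements discussed on the following slides.

```python
# Hypothetical sketch of a structured, de-identified prompt that combines ICD-10
# codes, medication history, and a guideline excerpt. Illustrative only; the field
# names and the model name are assumptions, not part of the presentation.
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

case = {
    "icd10": ["F33.1"],  # major depressive disorder, recurrent, moderate
    "medications": ["sertraline 100 mg daily"],
    "presenting_concerns": "low mood, poor sleep; denies suicidal ideation",
    "desired_outcome": "reduce depressive symptoms and improve sleep",
}
guideline_excerpt = "Paste the relevant guideline or consensus-document text here."

prompt = (
    "You are assisting a licensed clinician. Using the de-identified case below, "
    "suggest how the current treatment plan might be adjusted to align with the "
    "guideline excerpt, and list the facts still needed for a definitive diagnosis.\n\n"
    f"Case (no PHI): {case}\n\n"
    f"Guideline excerpt: {guideline_excerpt}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # model choice is an assumption
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```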
Ethical Considerations: All protected health information (PHI) must be meticulously removed before uploading any prompts. In addition, full transparency must be given to clients and patients regarding AI's role in their diagnosis. Attention must be paid to the strong biases inherent in AI to ensure that AI does not perpetuate existing healthcare inequalities. HIPAA privacy and copyright laws must also be followed. These requirements take time and attention, and practitioners are strongly advised to attempt these activities only after due training.
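As a small illustration of the PHI point above, the sketch below applies a crude regular-expression pass that strips a few obvious identifiers from free text before any prompt is assembled. This is a hypothetical example only: pattern-based redaction misses names and many other identifiers, and it is not a substitute for a validated de-identification process under HIPAA.

```python
# Illustrative only: a crude redaction pass over free text before it is used in a
# prompt. Pattern-based redaction misses names and many other identifiers; it is
# not a validated de-identification method and does not by itself satisfy HIPAA.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # SSN-style numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),      # US phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),       # dates such as 3/14/2022
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[MRN]"),  # medical record numbers
]

def redact(text: str) -> str:
    """Replace a few obvious identifier patterns; human review is still required."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

note = "Pt called 619-555-0100 on 3/14/2022 re: MRN 884421; reports low mood."
print(redact(note))  # -> Pt called [PHONE] on [DATE] re: [MRN]; reports low mood.
```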
Personalized Treatment Plans: These chatbots can develop tailored treatment plans to meet individual patient needs after considering diagnoses, client or patient preferences, comorbidities, and responses to previous treatments. Ethical Considerations: Legal and ethical standards for patient privacy, autonomy, and informed consent must be upheld. Free ChatGPT systems often state in their Terms and Conditions that they own all information entered into their systems.
Other Uses of ChatGPT by Professionals: Depression: clients' voices. OUD: Narx scores and overdose risk rating. Digital therapeutics: CBT for OUD (Pear, now bankrupt). Akili Interactive Labs: interactive digital games (like videogames) for ADHD, major depression, ASD, MS.
Is Facebook’s Suicide Prevention Service “Research”?
Facebook Innovation: Technique is innovative, novel. Facebook taught its algorithm which text to ignore. Proprietary: details not available. Informed consent?
Accuracy Traditional View: Prediction requires analysis of hundreds of factors: race, sex, age, SES, medical history, etc. Record of results? Publication? Efficacy across races, sexes, nationalities? False Positive: Unwanted psych care? Users: Wariness enhanced? Barnett and Torous, Ann. Int. Med. (2/12/19)
What is AI’s Clinical Reliability?
How AI has helped: Personal Sensing ("Digital Phenotyping"): Collecting and analyzing data from sensors (smartphones, wearables, etc.) to identify behaviors, thoughts, feelings, and traits. Natural language processing. Chatbots. D'Alfonso, Curr Opin Psychol. 2020;36:112-117.
How AI has helped: Machine Learning: Predict and classify suicidal thoughts, depression, and schizophrenia with "high accuracy." U. Cal and IBM, https://www.forbes.com/sites/bernardmarr/2023/07/06/ai-in-mental-health-opportunities-and-challenges-in-developing-intelligent-digital-therapies/ Causation v. correlation: a model found a better prognosis for pneumonia in asthma patients, a correlation reflecting the more aggressive care those patients received rather than a lower underlying risk.
Where AI has fallen short: Hallucinations: NEDA's Tessa gave harmful diet advice to patients with eating disorders. Generalizability: when training data do not resemble actual data (Watson and chemo). No compassion or empathy. No conceptual thinking. No common sense.
Does AI Threaten Privacy?
Big data: Amazon's Alexa and the NHS: no sharing of patient data? Duration of retention of information?
Facebook, again: No opt-in or opt-out. Targeted ads? HIPAA: N/A. No covered entity, no business associate. Is de-identification obsolete? COPPA: N/A: Child committing suicide was less than 13 years old.
Privacy laws expanding, yet not clear that existing laws suffice. Consider California: 1. HIPAA as amended by HITECH 2. Cal. Confidentiality of Medical Information Act 3. Cal. Online Privacy Protection Act 4. Cal. Consumer Privacy Act 5. California's Bot Disclosure Law 6. GDPR. Yet still not certain the law covers info on apps. Facial recognition: both privacy and discrimination laws.
"A person has no legitimate expectation of privacy in information he voluntarily turns over to third parties." Smith v. Maryland, 442 U.S. 735 (1979) (pen register); United States v. Miller, 425 U.S. 435 (1976). Questioned: United States v. Jones, 565 U.S. 400, 417 (2012) (Sotomayor, J., concurring).
Has AI Generated Any Privacy Litigation?
PM v OpenAI (N.D. Cal. 2023) Purported class action alleges OpenAI violated users’ privacy rights based on data scraping of social media comments, chat logs, cookies, contact info, log-in credentials and financial info.
Do We Need to License AI to Use it in Healthcare?
Do We Need to License AI to Use it in Healthcare? Practice of clinical psychology includes but is not limited to: ‘Diagnosis and treatment of mental and emotional disorders’ which consists of the appropriate diagnosis of mental disorders according to standards of the profession and the ordering or providing of treatments according to need. Va. Code § 54.1-3600 Other professions have similar statutes across the 50 states & territories.
Do We Need to License AI to Use it in Healthcare? Definitions of medicine, psychology, nursing, etc.: Likely broad enough to encompass AI functions. An AI system is not human, but if it functions as a HC professional, some propose licensure or some other regulatory mechanism.
Do We Need to License AI to Use it in Healthcare? If licensure is needed, in what jurisdiction(s)? Consider scope of practice.
What Does FDA Say About AI in Healthcare?
What Does FDA Say About AI in Healthcare? Regulatory framework is not yet fully developed. Historical: a drug or device maker wishing to modify a product submits a proposal and supporting data; FDA says yes or no. FDA recognizes the potential for drug development and the impediments that fusty regulation could erect.
What Does the Food and Drug Administration (FDA) Say About AI in Healthcare? Concerned with transparency (can it be explained? intellectual property), with the security and integrity of data generated, and with the potential for amplifying errors or biases. FDA urges creation of a risk management plan and care in the choice of training data, testing, and validation. Predetermined change control plans.
FDA Approvals of AI/ML Devices
What Types of Clinical Decision Support ("CDS") Software Will FDA Regulate Most Closely?
FDA Concerns: CDS to "inform clinical management for serious or critical situations or conditions," especially where the health care provider cannot independently evaluate the basis for the recommendation. CDS functions intended for patients, to inform clinical management of non-serious conditions or situations, and not intended to help patients evaluate the basis for recommendations. Software that uses a patient's images to create treatment plans for health care provider review for patients undergoing RT with external beam or brachytherapy.
What Do the States Have to Say About AI in Employment Decisions?
Most states: Silent so far. Ill., Md., and NYC: Employers need the candidate's consent to use AI in hiring. NYC: Must prove to a third-party audit company that the employer's process was free of sex- or race-based bias.
Can AI Be Liable in Tort?
Not human, and not a legal person. Cannot be directly liable for its own negligence or serve as an agent for vicarious liability. Many different software and hardware developers take part.
Control is hard to determine, given: Discreteness: parts made at different times in different places without coordination. Diffuseness: developers may not act in conjunction. Yet: consider corporations and ships (an "in rem" action in admiralty law).
Does AI Owe a Duty to Clients?
Duty is a question for the court. In health care, duty arises from the professional relationship. Can AI have such a relationship? A consulting physician who does not interact with the patient owes no duty to that patient. See Irvin v. Smith, 31 P.3d 934, 941 (Kan. 2001); St. John v. Pope, 901 S.W.2d 420, 424 (Tex. 1995).
Does AI resemble a consultant? Or an MRI, e.g.? The Epic sepsis model missed 2/3 of cases. JAMA IM, 6/21. Beware automation bias.
https://telehealth.org/chatgpt-ai-bias/
Can Plaintiffs Impose a Standard of Care on AI?
HCP: Reasonableness. Can AI ever be unreasonable? Is the HCP relying on AI immune from liability? Higher standard of care (SOC) for the HCP using AI? Will AI endanger state standards of care? Will res ipsa play a role? Probably not if the harm is unexplainable, untraceable, and rare. Nor can P establish exclusive control. But what about the autopilot cases?
Are AI Errors Foreseeable?
Foreseeability: A precondition of a finding of negligence. Law expects the actor to take reasonable steps to reduce the risk of foreseeable harms. The software developer cannot predict how unsupervised AI will solve the tasks and problems it encounters. The machine teaches itself how to solve problems in unpredictable ways. No one knows exactly what factors go into an AI system's decisions. The unforeseeability of AI decisions is itself foreseeable.
Computational models used to generate recommendations are opaque. Algorithms may be non-transparent because they rely on rules we humans cannot understand. No one, not even programmers, knows what factors go into ML. AI's solution may not have been foreseeable to a human, even the human who designed the AI. Does that defeat a claim of duty?
In a black-box AI system, the result of an AI's decision may not have been foreseeable to its creator or user. So, will an AI system be immune from liability? Will its creator?
What if AI Recommends Non-standard Treatment?
What if AI Recommends Non-standard Treatment? The progress problem: Arterial blood gas monitoring in premature newborns circa 1990. Non-standard advice: Proceed with caution. The tension between progress and tort law.
Can I be Liable for My AI’s Mistake?
Can AI be my agent? No ability to negotiate the scope of authorization. Cannot dissolve the agent-principal relationship. Cannot renegotiate its terms. An agent can refuse agency; a principal can refuse to be the master. Agency law does contemplate that the agent will use her discretion in carrying out the principal's tasks.
Who controls the AI, if anyone? AI autonomy is increasing. If the machine is autonomous, could it not embark on a frolic and detour beyond the scope of its employment?
If AI Can be an Agent, What or Who is its Principal? Note the decline of the “Captain of the Ship” doctrine.
Possibilities: Component designer? Medical device company? The owner of the AI’s algorithm? Whoever maintains the product? Health care professionals?
Possibilities (cont’d): Hospitals and health care systems? Pharmaceutical companies? Professional schools? Insurers? Regulators?
Could I be Liable for Promoting AI?
Hospitals: Large investments in, e.g., robotic systems. Procedures more expensive. By shifting resident teaching time from standard laparoscopy to robotic surgery, we may produce "high-cost" surgeons whom insurers will penalize. Damage to the professional relationship? The rapport problem.
Does the Law Require the Patient’s Informed Consent to Use of AI in Health Care?
Does the Law Require the Patient’s Informed Consent to Use of AI in Health Care? Traditional: “Every human being of adult years and sound mind has a right to determine what shall be done with his own body” Schloendorff v. NY Hospital, 105 N.E. 92 (N.Y. 1914) (Cardozo, J.) AI: What disclosures are required?
(cont'd) Explain how AI works? What does "informed" mean where no one knows how black-box AI works? Whether the AI system was trained on a data set representative of a particular patient population? Comparative predictive accuracy and error rates of the AI system across patient subgroups? Roles human caregivers and the AI system will play during each part of a procedure?
(cont'd) Whether a medical technologist or pharmacist influenced an algorithm? Compare results of AI and human approaches? What if there are no data? What if the patient doesn't want to know? The provider's financial interest in the AI used? Disclose AI recommendations the HCP disapproves of, or COIs?
(cont'd) Pedicle screw litigation: Used off-label. At present, nearly all AI is used off-label. Investigative nature of the device's use? Rights of subjects in clinical trials? Experimental procedures: "most frequent risks and hazards" will remain unknown until the procedure becomes established.
Will Plaintiffs be Able to Prevail on Product Liability Claims?
A creature of state law. Theories of liability sound in negligence, strict liability, or breach of warranty. Responsibility of a manufacturer, distributor, or seller of a defective product. Is AI a "product" or a service? The law has traditionally held that only personal property in tangible form can be considered a "product," and has traditionally considered software to be a service.
Claimant must prove the item that caused the injury was defective at the time it left the seller's hands. By definition, ML changes the product over time. Suppose an AI system is used to detect abnormalities on MRIs automatically and is advertised as a way to improve productivity in analyzing images: it has no problem interpreting high-resolution images but fails with images of lesser quality. Likely: a products liability claim for both negligence and failure to warn.
No matter how good the algorithm is, or how much better it is than a human, it will occasionally be wrong. Exception to strict liability for unavoidably unsafe products (Restatement). Imposing strict liability would likely slow or halt production of this technology.
Is There a Duty to Warn?
Duty to warn: Traditional Products: Manufacturer knew or should have known that the product poses substantial risk to the user. Danger would not be obvious to users. Risk of harm justifies the cost of providing a warning. Mental Health: Tarasoff v. The Regents of the University of California (1976)
Learned Intermediary (LI) Rule: Likelihood harm will occur if the intermediary does not pass on the warning to the ultimate user. Magnitude of the probable harm. Probability that the particular intermediary will not pass on the warning. Ease or burden of the giving of the warning by the manufacturer to the ultimate user.
Will Plaintiffs be Able to Prove Causation?
Causation will often be tough in AI tort cases. Demonstrating the cause of an injury: Already hard in health care. Outcomes frequently probabilistic rather than deterministic. AI models: Often nonintuitive, even inscrutable. Causation even more challenging to demonstrate.
No design or manufacturing flaw if the robot involved in an accident was properly designed, but based on the structure of the computing architecture, or the learning taking place in deep neural networks, an unexpected error or reasoning flaw could have occurred. Mracek v. Bryn Mawr Hospital, 610 F. Supp. 2d 401 (E.D. Pa. 2009), aff'd, 363 Fed. Appx. 925, 927 (3d Cir. 2010).
Who is an Expert?
Who is an Expert? Trial Court: Cardiologist not qualified to testify on a weight-loss drug combination that a proprietary software package recommended, because the doctor is not a software expert. Skounakis v. Sotillo, A-2403-15T2 (N.J. Super. Ct. App. Div. Mar. 19, 2018) (on appeal, reversed).
Who is an Expert? MD who had performed many robotic surgeries not qualified on causation for want of programming expertise. Mracek v. Bryn Mawr Hospital, 363 F. App'x. 925, 926 (3d Cir. 2010) (ED complicating robotic prostatectomy)
Marketing: Should We Expect Breach of Warranty Claims?
A warranty may arise by an affirmation of fact or a promise made by the seller relating to the product. See U.C.C. § 2-313. Need not use special phrases or formal terms ("guarantee"; "warranty"). Promotion of an AI system as a superior product may create a cause of action for breach of warranty. Darringer v. Intuitive Surgical, Inc., No. 5:15-cv-00300-RMW, 2015 U.S. Dist. LEXIS 101230, at *1, *3 (N.D. Cal. Aug. 3, 2015) (another da Vinci robot case).
Is AI a Person?
Is AI a Person? Of course not. Artificial agents lack self-consciousness, human-like intentions, ability to suffer, rationality, autonomy, understanding, and social relations deemed necessary for moral personhood. But:
Is AI a Person? But: Could serve useful cost-spreading and accountability functions. EU Parliament, 2017: Recognizing autonomous robots as "having the status of electronic persons responsible for making good any damage they may cause". Compulsory insurance scheme.
Opponents: Harm caused by even fully autonomous technologies is generally reducible to risks attributable to natural persons or existing categories of legal persons. Even limited AI personhood (e.g., corporations) will require robust safeguards, such as having funds or assets assigned to the AI person.
Will Plaintiffs be Able to Impose Common Enterprise Liability with AI?
Example: Hall v. Du Pont, 345 F. Supp. 353 (E.D.N.Y. 1972)
1955-'59: Blasting caps injured 13 kids in 12 incidents in 10 states. Claim: failure to warn. Ds: 6 cap manufacturers + their trade association (TA). Evidence: acting independently, the Ds adhered to an industry-wide safety standard, delegated labeling to the TA, and cooperated industry-wide in the manufacture and design of blasting caps. Held: if Ps could show that one of the D manufacturers made the caps, the burden of proof on causation would shift to the Ds.
(cont'd) Theory: Clinicians, manufacturers of clinical AI systems, and hospitals that employ the systems are engaged in a common enterprise for tort liability purposes. As members of a common enterprise, they could be held jointly liable. Used where Ds strategically formed and used corporate entities to violate consumer protection law. E.g., Fed. Trade Comm'n v. Pointbreak Media, LLC, 376 F. Supp. 3d 1257, 1287 (S.D. Fla. 2019) (corporations were considered to be functioning jointly as a common enterprise).
How Can We Defend Ourselves Against Claims?
Compliance with FDA regulations: preemption. Policy: No product liability claim encompasses the unpredictable, autonomous machine-mimicking-human behavior underlying AI's medical decision-making. Unpredictability of autonomous AI is not a bug, but a feature.
Software is not a product. Rodgers v. Christie, 795 F. App'x 878, 878-79 (3d Cir. 2020): The Public Safety Assessment (PSA), an algorithm that was part of the state's pretrial release program, was not a product, so product liability for the murder of a man by a killer on pretrial release did not lie. Not disseminated commercially. The algorithm was neither "tangible personal property" nor tenuously "analogous to" it.
Breach of warranty: Privity: typically the clinician, and not the patient, purchased the system. Product misuse, modification: progress notes, e.g. The seller does not know the specifics of these additional records or how the algorithm developed following the provider's use. Learned intermediary (LI) doctrine.
Will AI Put Me Out of Work?
Will AI Put Me Out of Work? ChatGPT can outperform first- and second-year medical students in answering challenging clinical care exam questions. Law students: similar. But: probably not.
(cont'd) John Halamka: "Generative AI is not thought, it's not sentience." Most, if not all, countries are experiencing severe clinician shortages. In the U.S., shortages are predicted only to worsen until at least 2030.
(cont'd) AI-infused precision health tools might well be essential to improving the efficiency of care. AI might help with burnout: easing the day-to-day weariness, lethargy, and delay of reviewing patient charts. The day may come when the SOC requires use of AI.
Can We Get Paid for Using AI?
Consider an outpatient setting: Whether the outpatient facility is in- or out-of-network for the patient's insurer. Whether the facility is owned by a hospital. If hospital-owned, it may add a "facilities fee". Whether this patient's insurer deems the AI to be "medically necessary". The negotiated fee schedule between the facility and the patient's insurer. How much of the deductible the patient will have met by the conclusion of this episode of care.
Provided for "medically necessary" care. Not: experimental treatments or devices. Slow governmental adoption: the telehealth model. 9/20: CMS approved the first add-on payment, up to $1,040 in addition to inpatient hospitalization costs, for use of Viz.ai software to help detect strokes. Whether a 43-patient study used to support the company's claim of clinical benefit was large enough to warrant the added reimbursement?
Can AI Detect or Prevent Fraud?
Can AI Detect or Prevent Fraud? One large health insurer reported savings of $1 billion annually through AI-prevented fraud, waste, and abuse (FWA). Fed. Ct. App.: A company's use of AI for prior authorization and utilization management services to MA and Medicaid managed care plans is subject to qualitative review that may result in liability for the AI-using entity. US ex rel. v. eviCore Healthcare MSI, LLC (2d Cir. 2022).
Can Providers Use AI to Cheat?
Does AI Infringe Copyright?
J. DOE 1 et al. v. GitHub, Inc. et al., Case No. 4:22-cv-06823-JST (N.D. Cal. 2022): Ps: They and the class own copyrighted materials made available publicly on GitHub. Ps: Representing the class, assert 12 causes of action, including violations of the Digital Millennium Copyright Act, the California Consumer Privacy Act, and breach of contract.
Claim: Defendants' products, OpenAI's Codex and GitHub's Copilot, generate suggestions nearly identical to code scraped from public GitHub repositories, without giving the attribution required under the applicable license.
Defenses: Standing: did these Plaintiffs suffer injury? Intent: Copilot, as a neutral technology, cannot satisfy DMCA § 1202's intent and knowledge requirements.
Ownership of data. Antitrust: Algorithmic pricing can be highly competitive, but competitors could use the same software to collude.
Does AI Engage in Invidious Discrimination?
Training data are key: A facial recognition AI was unable to accurately identify more than 1/3 of Black females in a photo lineup. The algorithm was trained on a majority male and white dataset.
Optum: Algorithm to identify high-risk patients to inform fund allocation. Used health care costs to make predictions. Only 17.7% of black patients were identified as high-risk; the true number should have been ~46.5%. Spending for black patients was lower than for white patients owing to "unequal access to care".
References
Julia Angwin et al., "Machine Bias," ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Emily Berman, "A Government of Laws and Not of Machines," 98 B.U. L. Rev. 1278, 1315-16 (2018)
Karni Chagal-Feferkorn, "The Reasonable Algorithm," U. Ill. J.L. Tech. & Pol'y (forthcoming 2018)
Duke Margolis Center for Health Policy, "Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care" (2019)
Cade Metz and Craig S. Smith, "Warnings of a Dark Side to A.I. in Health Care," N.Y. Times (Mar. 21, 2019)
Daniel Schiff and Jason Borenstein, "How Should Clinicians Communicate With Patients About the Roles of Artificially Intelligent Team Members?" 21(2) AMA Journal of Ethics E138-145 (Feb. 2019)
Nicolas P. Terry, "Appification, AI, and Healthcare's New Iron Triangle" [Automation, Value, and Empathy], 20 J. Health Care L. & Pol'y 118 (2018)
Wendell Wallach, A Dangerous Master 239-43 (2015)
Andrew Tutt, "An FDA for Algorithms," 69 Admin. L. Rev. 83, 104 (2018)