Document Type
Working Paper
Publication Date
March 2022
Language
English
Abstract
Artificial Intelligence startups use training data as direct inputs in product development. These firms must balance numerous trade-offs between ethical issues and data access without substantive guidance from regulators or existing judicial precedent. We survey these startups to determine what actions they have taken to address these ethical issues and the consequences of those actions. We find that 58% of these startups have established a set of AI principles. Startups with data-sharing relationships with high-technology firms, startups affected by privacy regulations, and startups with prior (non-seed) funding from institutional investors are more likely to establish ethical AI principles. Lastly, startups with data-sharing relationships with high-technology firms and prior regulatory experience with the General Data Protection Regulation are more likely to take costly steps, such as dropping training data or turning down business, to adhere to their ethical AI policies.
Recommended Citation
James Bessen, Stephen M. Impink, Lydia Reichensperger & Robert Seamans, Ethical AI Development: Evidence from AI Startups (2022). Available at: https://scholarship.law.bu.edu/faculty_scholarship/1188
Included in
Growth and Development Commons, Industrial Organization Commons, Labor Economics Commons, Science and Technology Law Commons