This essay was adapted from the script for a presentation of the same name delivered as part of the Coalition for Networked Information’s Project Briefing Series: Summer 2025.
Over the course of the last few years, I have read a great deal of literature on ethics and latent values in technology and, more specifically, in artificial intelligence. For the sake of brevity, I will not list everything I have read, nor is what I have read exhaustive of what is available, but I will highlight some titles. The issue I take with most AI ethics literature is that it either discusses AI ethics without ever mentioning frameworks from moral philosophy, or it discusses AI ethics using only one ethical framework. This essay is a brief discussion of an extensive literature review and concludes with an explanation of where I feel AI ethics discourse should be directed.
Vallor’s (2016) Technology and the Virtues is grounded in virtue ethics, stemming from Aristotelian traditions, and she does include some discussion of non-Western virtue ethics. Likewise, her book The AI Mirror (2024) follows the same virtue ethics tradition. Gertz’s (2018) Nihilism and Technology focuses entirely on the concept of nihilism; although this framework comes from existentialism, not moral philosophy, I thought it interesting to include because some scholars consider Friedrich Nietzsche to be a virtue ethicist, and his Übermensch is portrayed as one who overcomes universal determinism through self-determination by redefining morality. Even the Vatican has offered its views on AI ethics in partnership with Santa Clara University in the book Ethics in the Age of Disruptive Technologies: An Operational Roadmap (Flahaux et al., 2023).
Other books I have read, such as Hepler et al.’s (2024) AI Ethics in Higher Education and Smuha’s (2025) The Cambridge Handbook of the Law, Ethics, and Policy of Artificial Intelligence, say nothing of ethical frameworks, or if they do, it is only in passing or merely implied. Typically, the reader is left with a series of chapters that proclaim best practices and unpack problematic characteristics inherent to AI, such as bias, mis/disinformation, stereotyping, harmful language, and infringements on privacy and intellectual property. These conversations are generally held with a utilitarian framework in mind; however, utilitarianism, or any other form of consequentialism, is never formally discussed. It is left to the reader to know enough about ethics before entering the conversation. The only book I have come across that discusses the application of ethics in all its forms to information and technology is Burgess and Knox’s (2019) Foundations of Information Ethics.
I am beginning to notice a pattern in AI ethics literature, and others, it seems, have noticed as well. During a panel presentation at the STRAKER AI ethics workshop held at The University of Alabama on May 10, 2025, one presenter, Dr. Jiaqi Gong from the College of Engineering, remarked that AI ethics, while worthwhile, is unhelpful as currently practiced. Luo (2025, p. 392) reports on a survey of librarians using AI in their workflows: “‘The hand-wringing about ethics is warranted but not useful.’ [Generative AI] has become the reality, and a shift in attitude toward AI is unavoidable.”
Such sensibilities have driven many researchers, ethicists, and consultants toward the notion that it no longer serves us to discuss ethical frameworks and that we should instead ground discussions of ethics in practice. This, however, reveals problems as well. Instead of a discussion of AI ethics, authors offer what I would describe as a code of conduct to be exercised when interacting with AI products and services. Every chapter becomes a top-down, paternalistic essay that teaches not ethics but responsibility, mindfulness, and professionalism. These topics, of course, have their time, place, and use in the many fields integrating AI into their work, not to mention the wisdom they hold for society in general.
The problem with a paternalistic approach, though, is twofold. First, it implies that users can successfully superimpose their own values over the values of the developers who created these AI systems. Second, it overlooks the multiple layers of competing interests that create what ethicists call moral dilemmas and what political scientists call wicked problems (Head & Alford, 2015; McConnell, 2022).
In the realm of AI research, the question of authorship over expressions of ideas is often debated. Is AI responsible for the text, images, audio, and/or video it generates? I use this as an opportunity to draw a parallel with another technology on which society seems similarly divided. Do guns kill people, or do people kill people? Talk to any firearm owner, and they will tell you all the reasons why they own a firearm: it is used for self-defense, for recreational activity at the range, or perhaps for hunting, which for some is recreation and for others a way to secure food for themselves, their families, and their friends. These are values that owners impose on their firearms; the firearm’s purpose, however, the teleological value imbued in it at its making, is to end life as efficiently and effectively as possible.
A discussion of how we should all use AI, as opposed to a conversation about AI ethics proper, suggests that each individual user can compensate for the way these systems are made and function simply by behaving responsibly. AI products and systems, which advocates innocuously call “tools,” were designed with specific purposes in mind. DuPont, for example, developed Teflon for non-stick pans, among many other products, making cooking and cleaning easier. Even though the public knew how to use non-stick pans appropriately, DuPont always knew more than anyone else, including the EPA, about the harms of “forever chemicals,” which have permeated and poisoned American society. While the public may be aware of how to use AI systems thanks to the way companies market them, we will never overcome the fact that AI developers, such as Amazon, Google, Meta, and Microsoft, will always know more about these products than we do. As such, their values are hidden from the public, and the only values revealed are the ones that encourage the public to engage with their products.
Furthermore, from a policy perspective, various parties with competing interests want or need AI to operate in different ways, and most of these conversations concern the data used to train AI systems. This sets up a moral dilemma: a situation in which two priorities come into conflict when one tries to observe both at once. The dilemma is most prevalent in the areas of intellectual property and privacy, and in how those concerns are weighed against the need for diverse training datasets that mitigate bias, stereotyping, harmful content, and mis/disinformation.
For instance, outspoken AI critics want technology developers to respect their privacy. In fact, Amazon, Google, Meta, and Microsoft have all been called out for the ways in which they leverage their products to gather consumers’ voice imprints (Henderson, 2023; Taylor, 2025; Zilber, 2024). These voice imprints are most likely used for AI training, so that voice-interactive AI, such as Amazon’s and Google’s voice assistants, can successfully process a wide spectrum of voices and speech patterns, including regional accents and speech impediments. A discussion grounded solely in practice, which substitutes responsibility for ethics, assumes that any action can be justified for the greater good, and indeed, some ethical frameworks can be used to justify this behavior. However, we overlook nuance and meaning by not discussing this scenario within the contexts of other ethical frameworks such as virtue ethics, deontology, social contract theory, and care ethics, as well as non-Western frameworks like Ubuntu, Confucianism, and Karmic ethics.
For example, employing concepts found in deontology and care ethics would show that gathering consumers’ voice imprints without their knowledge and consent is impermissible, whereas utilitarianism offers a path toward justifying the practice. Only after assessing this kind of dataveillance, AI training, and product development using multiple ethical frameworks can we decide which is the higher moral priority. Is it the public’s right to privacy, or is it the public’s right to an AI voice assistant that understands everyone? This process matters because the aim of ethics is to maximize benefits and minimize harms.
Similarly, the use of copyrighted content as training data has paved the way for the US to take the lead in the global AI race (Sag & Yu, 2025). AI developers have historically claimed fair use, which, under Section 107 of the US Copyright Act, permits the unauthorized use of copyrighted content in specific circumstances; courts have so far upheld this claim (see Bartz v. Anthropic and Kadrey v. Meta, both ruled on this summer in the Northern District of California). Here again is a moral dilemma facing many courts across the US. Copyright law is heavily influenced by utilitarianism in the sense that copyright statutes and court rulings rest on the assumption that these decisions will maximize the greatest good, or minimize the greatest harms, for the whole of US citizens and government. Which will judges decide is the greater moral imperative? Will it be the right of copyright holders to determine how their works are used (e.g., duplicated, distributed, etc.), or the right of AI developers to continue to use copyrighted works as training data so that the US may continue to lead the world in AI?
Given that this example has introduced global concerns, this is an excellent opportunity to introduce the concept of wicked problems. A wicked problem is a complex issue with no clear definition or solution, primarily due to layers of uncertainty, shifting values, and competing interests; solutions to a wicked problem often change the problem itself by producing unforeseen consequences. For those familiar with philosophies of modernity and technology, this is very similar to Jacques Ellul’s concept of technique, whereby technical solutions to humanistic problems result in self-defeating and dehumanizing outcomes.
For example, in the 1960s, Ghana undertook the Volta River Project. It was thought that damming the Volta River would address perceived poverty in Ghana by encouraging Westernized industrialization, thereby providing a higher standard of living for the public. Despite all intentions, only the urban elite reaped these benefits, while nearly eighty thousand residents were displaced. The project caused a string of public health crises, and failed attempts to address them contaminated the drinking water. In the end, the cascade of technical solutions created additional problems because the government ignored the local ecology and history. Ghana’s government ultimately relinquished control of the project, turning it over to Western specialists and investors and undercutting Ghana’s historical struggle to liberate itself from Western influence (Agbemabiese & Byrne, 2005).
The wicked problem of AI on a global scale is that the US is competing with nations that do not necessarily reciprocate US ideals of human rights and democratic values. AI is an extraordinarily powerful technology: it can outcompete human counterparts in work productivity, efficiently persuade and even radicalize people, monitor and analyze human activity at scale, and produce significant novel results in various areas of science and engineering thanks to its pattern recognition capabilities. It could be very dangerous if a nation that does not prioritize human rights and democratic values were to dominate the AI space. There are, of course, other competing interests relevant to allied and adversarial nations as well as international organizations such as the UN and NATO. However, for the US to continue leading in AI development at its current pace (note that China is not far behind), it seems that it must deprioritize the rights of citizens and residents to protect their privacy and intellectual property. This conclusion has been reached only by way of utilitarianism, because that is the framework on which US policy is built. What answers, then, would we arrive at if other frameworks were leveraged? And of those alternatives, which would be viewed as the moral course of action?
This is why AI ethics must be more than discussion grounded in practice. If we always ground the discussion in practice, the conversation will always turn into top-down, paternalistic guidance on responsibility, and that sense of responsibility will inevitably cater to instrumentalism and pragmatic utilitarianism, which ignore competing values and may lead to undesirable outcomes. We need more literature that discusses how to develop, assess, adopt, deploy, and use AI products and systems, and each of these matters can be discussed separately; however, each must be analyzed through the lenses of multiple ethical frameworks to ensure that a Western bias toward consequentialism and pragmatism does not shut out other moral concepts. If the US is to lead the world in AI development, then the US is likewise responsible for leading the world in AI ethics. A philosophically and politically diverse global population requires that multiple ethical frameworks be employed simultaneously to probe the various dimensions of AI.
Given this diversity, one ethical framework that has emerged and should be further investigated is pluralistic ethics. As Charles Ess (2006) explains, pluralistic ethics aims to conjoin competing and antithetical value systems among diverse societies and cultures by identifying overlapping values and judgments held by those communities. For example, many cultures have their own version of the “Golden Rule,” which states, “Do unto others as you would have others do unto you,” and the other versions of this principle say something of similar meaning and effect (Roberts & Black, 2022). According to pluralistic ethics, all societies that adhere to a golden rule would then agree on this issue and find common ground. That, however, is not necessarily the case, because pluralistic ethics appears to stop short of considering the value of reciprocity. Just because two opposing societies each adhere to a golden rule does not mean that either society believes the terms of its golden rule extend to those outside its own society.
Pluralistic ethics has been useful in the global governance of other technologically relevant issues such as the Internet and data (Wiesmuller, 2023). What we need now, though, is something that goes a step further: a framework of reciprocal ethics, one that would identify not only superficial commonalities in values but also substantive commonalities in judgments.
AI developers, consumers, advocates, and regulators all retain competing interests, but wanting useful AI is something they have in common. The term useful, however, is value-laden, so each party must spell out what it means by useful. Within each party’s definition there may be additional value-laden terms, and those terms must be defined in turn until the definitions are clear in their entirety; all parties involved may need to leverage wisdom from multiple ethical frameworks along the way. Once value-laden terms like useful, helpful, good, and intuitive are explained to the point where all terminology carries the same meaning for all parties, then all parties can decide how to proceed with development, assessment, adoption, deployment, and use.
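To make the recursive character of this definitional process concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical and of my own construction, not drawn from any source cited here: the seed set of value-laden terms, both parties’ working definitions, and the use of exact string matching as a stand-in for the far harder human work of negotiating shared meaning. The point is only the shape of the process: unpack terms until none are left undefined, then check whether meanings actually coincide.

```python
# A toy model of the definitional unpacking described above.
# All names, terms, and definitions are hypothetical illustrations.

VALUE_LADEN = {"useful", "helpful", "good", "intuitive"}

def unresolved_terms(definitions: dict) -> set:
    """Return value-laden terms that appear inside a party's definitions
    but that the party has not yet defined, i.e., terms still to unpack."""
    words = {w.strip(".,").lower()
             for text in definitions.values() for w in text.split()}
    return (words & VALUE_LADEN) - definitions.keys()

# Hypothetical working definitions for two parties.
developer = {"useful": "helpful for shipping a good product"}
regulator = {"useful": "good for the public interest"}

# Each party keeps defining until no value-laden term is left unexplained.
developer["helpful"] = "reduces user effort"
developer["good"] = "reliable and profitable"
regulator["good"] = "maximizes public benefit and minimizes harm"

for name, defs in (("developer", developer), ("regulator", regulator)):
    print(name, "still owes definitions for:", unresolved_terms(defs))

# Common ground requires shared terms to mean the same thing to all
# parties; string equality here merely stands in for negotiated meaning.
shared = developer.keys() & regulator.keys()
print({term: developer[term] == regulator[term] for term in shared})
```

Run as written, both parties reach fully unpacked definitions, yet the final check shows they still disagree on what useful and good mean, which is precisely the point at which the negotiation of shared meaning would have to begin.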
I don’t know exactly what that looks like from an administrative perspective; however, we do have reciprocity models in the US to consider, such as concealed weapons permits and credentials for teachers, electricians, realtors, architects, and insurance claims adjusters. Recognition of these various certifications differs from state to state. Driver’s licenses, on the other hand, despite being issued by individual states, are reciprocal in every US state and territory and even in some foreign countries. We can therefore ask, “What criteria of values does each governing body prioritize before recognizing another governing body’s issued credential?” By further understanding such systems of reciprocity, we can begin to build on pluralistic ethics by establishing an ethics of reciprocity.
References
Agbemabiese, L., & Byrne, J. (2005). Commodification of Ghana’s Volta River: An Example of Ellul’s Autonomy of Technique. Bulletin of Science, Technology & Society, 25 (1), 17-25. https://doi.org/10.1177/0270467604273821.
Burgess, J. T. F., & Knox, E. J. M. (2019). Foundations of Information Ethics. American Library Association.
Ess, C. (2006). Ethical Pluralism and Global Information Ethics. Ethics and Information Technology, 8, 215-226. https://doi.org/10.1007/s10676-006-9113-3.
Flahaux, J. R., Green, B. P., & Skeet, A. G. (2023). Ethics in the Age of Disruptive Technologies: An Operational Roadmap. Markkula Center for Applied Ethics.
Gertz, N. (2018). Nihilism and Technology. Rowman and Littlefield International.
Head, B. W., & Alford, J. (2015). Wicked Problems: Implications for Public Policy and Management. Administration & Society, 47 (6), 711-739. https://doi.org/10.1177/0095399713481601.
Henderson, J. G. (2023). FTC and DOJ Charge Amazon with Violating Children’s Privacy Law by Keeping Kids’ Alexa Voice Recordings Forever and Undermining Parents’ Deletion Requests. Federal Trade Commission. Retrieved July 10, 2025, from https://www.ftc.gov/news-events/news/press-releases/2023/05/ftc-doj-charge-amazon-violating-childrens-privacy-law-keeping-kids-alexa-voice-recordings-forever.
Luo, L. (2025). Use of Generative AI in Aiding Daily Professional Tasks: A Survey of Librarians’ Experiences. Library Trends, 73 (3), 381-398. https://doi.org/10.1353/lib.2025.a961200.
McConnell, T. (2022). Moral Dilemmas. Stanford Encyclopedia of Philosophy. Retrieved July 10, 2025, from https://plato.stanford.edu/entries/moral-dilemmas.
Roberts, C., & Black, J. (2022). Doing Ethics in Media: Theories and Practical Applications. Routledge.
Sag, M., & Yu, P. K. (2025). The Globalization of Copyright Exceptions for AI Training. Emory Law Journal, 74, 1-47. https://doi.org/10.2139/ssrn.4976393.
Smuha, N. A. (2025). The Cambridge Handbook of the Law, Ethics, and Policy of Artificial Intelligence. Cambridge University Press.
Taylor, J. (2025). NSW Education Department Caught Unaware after Microsoft Teams Began Collecting Students’ Biometric Data. The Guardian. Retrieved May 22, 2025, from https://www.theguardian.com/australia-news/2025/may/19/nsw-education-department-caught-unaware-after-microsoft-teams-began-collecting-students-biometric-data.
Vallor, S. (2024). The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press.
Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.
Wiesmuller, S. (2023). The Relational Governance of Artificial Intelligence: Forms and Interactions. Springer.
Zilber, A. (2024). Marketing Firm Admits Using Your Own Phone to Listen in on Your Conversations. New York Post. Retrieved July 10, 2025, from https://nypost.com/2024/09/03/business/marketing-firm-spies-on-you-through-your-phones-microphone-report/.