For many years now there have been ongoing discussions, both among scholars and the general public, about the ethics of practices that involve Artificial Intelligence (AI) techniques. Staying within the business context, this short article focuses mainly on the concept of algorithmic pricing.
AI practices raise many concerns because of their potential effects on the nature of human beings, and they may consequently be prejudicial to individual human freedoms. In this context it is useful to introduce the concept of Homo economicus, an interpretative model rooted in economics, although it has also been studied by other social sciences. The expression denotes an economic agent that makes rational decisions. It can almost be considered an evolution of Homo sapiens itself which, on the basis of the information available and its decision-making capacity, attempts to maximize its own welfare without taking social, moral, or relational considerations into account.
An early interpretation of the agent as understood above can be traced back to the studies of John Stuart Mill, specifically his essay “On the Definition of Political Economy and on the Method of Investigation Proper to It” (1836). Nevertheless, the concept of rationality referred to in this theory is disconnected from the classical philosophical and ethical implications normally carried by the term. The Homo economicus is considered “rational” in its attempt to maximize its own economic well-being, which is defined by a specific mathematical function, the so-called utility function. These individuals therefore pursue specific objectives in such a way that their utility is as high as possible and, consequently, their resulting well-being is also maximized. The rational attitude of the economic operator does not depend on the specificity of the objective; that is, the nature of the good being sought is not relevant. All that matters is that achieving the objective allows the agent to increase its individual welfare.
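The idea of maximizing a utility function can be stated compactly. The following is an illustrative sketch in modern, generic notation (the symbols are assumptions for exposition, not Mill's own): the agent picks a bundle of goods $x$ to maximize its utility $u(x)$, constrained only by its budget $w$ at prices $p$, with no term for social, moral, or relational considerations:

```latex
\max_{x \ge 0} \; u(x)
\quad \text{subject to} \quad
p \cdot x \le w
```

Whatever concrete goods $x$ stands for is irrelevant to the model; the agent is "rational" purely in the sense of solving this maximization problem.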
In 1942, the writer Isaac Asimov formulated the “Three Laws of Robotics” in his short story “Runaround” (later collected in “I, Robot”), which were intended to regulate the behavior of robots equipped with AI in a hypothetical future. According to the three laws, a robot: (1) may not cause harm to a human being, nor may it allow a human being to suffer harm through the robot’s failure to act; (2) must obey orders given by humans unless they conflict with the first law; (3) must protect its own existence unless doing so conflicts with the first or second law. Beyond narrative fiction, these laws are a first attempt to base the action of intelligent robots not only on the utilitarian calculation of the optimal outcome, but on a real sense of duty. Jeremy Bentham was one of the foremost proponents of utilitarianism, an ethical theory in which actions are judged by their consequences and every action is aimed at improving the common well-being: an action is “good” insofar as it brings greater well-being to more subjects, that is, insofar as it is more useful.
However, duty-based rules, the so-called deontological ones, share a problem with the criteria of utilitarian morality: they fail to reflect or predict the complexity of moral deliberation, as they tend to abstract from the concrete context in which the action takes place. This is a relevant point to consider, especially when AI takes on traits resembling human action.
Unlike utilitarian and deontological ethics, the philosophy of Aristotle and its tradition hold that moral action must always be considered in its specific context. Only by starting from concrete circumstances is it possible to reach a reasoned choice, the virtuous one, which lies “in the middle” between opposite excesses. The perspective suggested by Aristotle puts the concept of virtue at its center, understood as the ability to identify the values at stake in a situation and to pursue them effectively. This position is referred to as virtue ethics.
Note that this latter approach could also be applied broadly to algorithmic pricing techniques. Virtues are, after all, the result of an intelligent learning process and, in the same way, AI pricing raises the prospect of “learning” moral behaviors. To put it simply, by this logic, if such a technology were able to learn to behave virtuously, we would have to attribute to it a moral personality comparable to that of a human being.
Algorithmic pricing techniques do not possess the characteristics of intention (will) and self-awareness (regret) necessary to arrive at a moral judgment on their own action. The algorithm acts outwardly in accordance with moral duty, yet its actions are the result of an instruction, not of a conviction that obeying the moral law is right. In other words, what differentiates the behavior of human beings from that of machines is that the former are aware of their inner states: they recognize their act as “free” and are able to choose on the basis of an analysis of the complexity of the circumstances, and not simply on the basis of a calculation of advantages and disadvantages.
On the bright side, algorithmic pricing has brought many advantages to the society and business environment we currently enjoy: reduced costs, income growth, and less waste of resources and of time in the production, marketing, and storage of goods for sale. On the other hand, autonomy of choice has been significantly affected, and the societal implications of this are not to be ignored. In this context it is important to distinguish the quantitative freedom gained through developments in AI pricing from the qualitative freedom that is lost.
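To make the discussion less abstract, the following is a deliberately minimal sketch of the kind of feedback rule an algorithmic pricing system might embody: the seller nudges the price up when observed demand exceeds a target and down when it falls short. The function name, step size, and sales figures are all illustrative assumptions, not a description of any real system.

```python
def update_price(price, units_sold, target_units, step=0.05):
    """Nudge the price up when demand exceeds the target, down when it falls short."""
    if units_sold > target_units:
        return price * (1 + step)  # demand above target: raise the price
    if units_sold < target_units:
        return price * (1 - step)  # demand below target: lower the price
    return price  # demand on target: leave the price unchanged

# Simulate four days of (illustrative) sales observations.
price = 10.0
for units in [120, 130, 95, 80]:
    price = update_price(price, units, target_units=100)
print(round(price, 2))  # → 9.95
```

Even a rule this crude pursues an objective (matching sales to a target) without any representation of the buyer's circumstances, which is exactly the point made above: the algorithm calculates, it does not deliberate.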
From an ethical perspective, the conclusions may vary depending on the school of thought. Utilitarians would approve of algorithmic pricing techniques because of their efficiency gains and the broader spectrum of options they offer. Deontologists and virtue ethicists, instead, would emphasize the nature of the inputs and outputs. With respect to the inputs, deontologists would ask whether an individual’s autonomy of choice actually entails economic freedom. On a narrower level, virtue ethicists may accentuate personal flourishing as a result of decision-making authority. With respect to the outputs, deontologists would reject an assessment based on the aggregate increase in efficiency and economic prosperity; instead, they would propose examining its effects from an individual-equity viewpoint. Similarly, virtue ethicists would measure an individual’s accomplishments on the basis of their engagement in self-realization.
Another important topic in the field of algorithmic pricing is privacy. Here, deontologists may reject AI pricing techniques, particularly when information is gathered without consent or when consumers are unable to avoid such practices. Utilitarians and virtue ethicists, on the contrary, would focus on the concrete effects of the exploitation of customized prices.
Developments in artificial intelligence present a stark dichotomy for marketers, consumers, and authorities with regard to buyer well-being. On the one hand, AI technologies could contribute positively to welfare in a variety of ways, for instance by making consumers’ choices easier, more practical, and more efficient. On the other hand, they could threaten people’s sense of autonomy, thus adversely affecting consumer well-being.
As a consequence, the different ethical and moral assessments of algorithmic pricing mechanisms should prompt further ethical frameworks and/or regulations at the national and international level. Policymakers should take advantage of this moment in history to reflect upon marketplace objectives which, while maximizing profits, should also maximize the well-being of the community. It is particularly important for both governments and citizens to engage in defining AI (and algorithmic pricing) as a trustworthy and society-enhancing tool, while rejecting the science-fiction imagery attached to these concepts. These technologies are already part of our daily lives and, faced with this scenario, humankind should make a collective effort to integrate them into our culture.