Advanced robots and AI are reshaping our autonomy. Animesh Jain argues that to curb potential harm to human autonomy, we must examine the techniques used in model design.
As robots and AI systems become more advanced, they are likely to take on more social and work roles, interacting with humans in increasingly sophisticated ways. Their physical presence and social capacities may expand people's autonomy, or they may inhibit and disrespect it. Robots and AI can enhance autonomy by helping us achieve more meaningful goals and make more authentic choices. But they can also undermine it, leaving us to achieve fewer valuable goals independently and to make choices that are less authentically our own. In doing so, they can show a lack of respect for our autonomy and leave it more vulnerable.
Self-determination and self-rule are the hallmarks of human autonomy, requiring considerable control over our instincts and impulses – a level of autonomy that most animals and technologies do not share. Human autonomy is closely tied to our practical rationality: the ability to evaluate reasons for action and to pursue what we consider valuable.
The impact of AI systems on human autonomy, whether positive or negative, depends on their design, regulation, and use. To fully understand the potential effects of AI on human autonomy, we must consider the various aspects of sociotechnical systems: consent, data collection and processing, institutional and societal factors, computational tasks, and interface design. For instance, if AI systems homogenise cultural products, they reduce the availability of cultural practices and resources, diminishing the diversity of our cultural experiences. This makes it harder to exercise autonomy in choosing between different cultural options, and it leaves the cultural space we live in bleaker. These sociotechnical foundations of autonomy are therefore crucial to designing and using AI ethically and sustainably.
Consider the use of AI risk assessment tools in the criminal justice system. Human autonomy is affected in two major ways. First, judges may feel pressured to conform to the assessments of AI, limiting their decision-making freedom and thus their autonomy. Second, defendants are judged based on the behaviour of others with similar profiles rather than strictly on their own actions and circumstances, which limits their individual autonomy.
The False Idea of Choice
Influencing people's behaviour without their awareness starts with understanding their perceptions, blind spots, vulnerabilities, and boundaries. Once these are identified, it is possible to manipulate them. This is exactly what technology product designers can do to our minds – they exploit our psychological vulnerabilities (consciously or unconsciously) to capture our attention and guide our actions. Once these vulnerabilities are mapped to different types of people and profiles, tech companies can easily use artificial intelligence algorithms to automate and scale this manipulation.
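To make this concrete, here is a minimal sketch, in Python, of how such targeting could be automated at scale. Everything in it is invented for illustration, the behavioural features, the clusters, and the click-through numbers alike: users are grouped into behavioural profiles, and each profile is then served whichever nudge variant has historically engaged similar users best.

```python
# Hypothetical sketch of automated behavioural targeting.
# All features, clusters, and numbers are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Behavioural signals per user, e.g. [late-night usage, scroll speed,
# notification response rate, impulse-purchase rate].
user_features = rng.random((1000, 4))

# Step 1: map users into behavioural profiles.
profiles = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(user_features)

# Step 2: per profile, pick the nudge variant with the highest
# historical click-through rate (rows: profiles, columns: variants).
historical_ctr = rng.random((5, 3))
best_variant = historical_ctr.argmax(axis=1)

# Step 3: every user now receives the variant tuned to their profile,
# with no awareness of the segmentation behind it.
assigned = best_variant[profiles]
print(assigned[:10])
```

Nothing in this loop requires the targeted person's knowledge or consent; once the mapping exists, serving the "best" manipulation to each profile is a one-line lookup.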
By designing menus of options that benefit them regardless of what we pick, tech companies create the illusion of choice, and this strategy has served them well. When presented with a set of options, people rarely question what wasn't offered, why certain options were provided over others, or the motives behind the choices made available. The proliferation of options in almost every aspect of our lives (information, events, destinations, friends, dating, careers) fosters the belief that tech companies are the most useful and empowering providers of choice.
Data or Design: Identifying the Problem
Excessive data collection blurs the distinction between sensitive and non-sensitive information, paving the way for discrimination, inequality, and restricted access to vital services. AI's advanced data processing capabilities amplify the impact of this extensive collection. Unlike traditional data analysis, AI can sift through massive amounts of data, identifying patterns and clustering users to create highly accurate, targeted profiles. For instance, research has found that YouTube's recommendation algorithm determines what people watch on the platform about 70 percent of the time. Meta is not just a tool for sharing information and opinions, and Google is not just a search engine; both are sophisticated services that aim to anticipate our needs and interests, monetising our emotions, sentiments, and biases.
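The core of an engagement-driven recommender can be illustrated in a few lines. The sketch below is hypothetical, with invented scores, and is not any real platform's ranking code, but it shows the basic pattern: candidates are ordered purely by predicted engagement, so the system, not the user, effectively decides what appears next.

```python
# Hypothetical engagement-maximising ranker (illustrative only;
# not any real platform's algorithm). Scores are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    p_click: float           # predicted probability the user clicks
    expected_minutes: float  # predicted watch time if clicked

def rank(candidates: list[Candidate]) -> list[Candidate]:
    # Score purely by expected watch time: p(click) * minutes.
    # Nothing here asks what the user values, only what keeps them watching.
    return sorted(candidates, key=lambda c: c.p_click * c.expected_minutes, reverse=True)

feed = rank([
    Candidate("calm-documentary", p_click=0.10, expected_minutes=40.0),
    Candidate("outrage-clip", p_click=0.45, expected_minutes=12.0),
    Candidate("how-to-video", p_click=0.20, expected_minutes=8.0),
])
print([c.video_id for c in feed])  # outrage clip first: 5.4 vs 4.0 vs 1.6
```

Under this objective, whatever is most clickable and most absorbing rises to the top, regardless of whether it serves the viewer's own goals.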
Contrary to the belief that AI systems simply reflect the biases in the data they process, algorithms themselves are not neutral. Design choices play a crucial role in shaping user autonomy and can either amplify or mitigate harm, so identifying which design decisions disproportionately increase error rates is essential for reducing algorithmic harm. The concept of diffusion of responsibility, where people fail to act because they expect others to, is relevant here: model designers may treat bias as a problem for the data to solve, while data curators expect the model to compensate, and the harm goes unaddressed. Assuming that bias can be fully eliminated through data alone is therefore risky. Since datasets are often flawed, the resulting harm comes from both the data and the model design.
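A small numerical sketch, using synthetic scores rather than any real data, shows why design matters independently of the data: with the dataset held fixed, moving the decision threshold, a pure design choice, changes how heavily the burden of false positives falls on each group.

```python
# Illustrative only: synthetic risk scores, no real data or system.
import numpy as np

rng = np.random.default_rng(1)

def false_positive_rate(scores, labels, threshold):
    # Fraction of truly negative cases flagged as positive.
    negatives = scores[labels == 0]
    return float((negatives >= threshold).mean())

# Two synthetic groups with slightly different score distributions
# among people who would NOT reoffend (label 0).
scores_a = rng.normal(0.40, 0.15, 5000).clip(0, 1)
scores_b = rng.normal(0.50, 0.15, 5000).clip(0, 1)
labels = np.zeros(5000, dtype=int)

# Same data at every step; only the threshold (a design decision) moves.
for threshold in (0.5, 0.6, 0.7):
    fpr_a = false_positive_rate(scores_a, labels, threshold)
    fpr_b = false_positive_rate(scores_b, labels, threshold)
    print(f"threshold={threshold:.1f}  FPR group A={fpr_a:.2f}  group B={fpr_b:.2f}")
```

No amount of extra data collection changes what this loop demonstrates: the gap between the groups widens or narrows with the threshold alone, which is why harm audits must interrogate design decisions, not just datasets.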
Understanding how these factors influence human autonomy in AI systems is key to minimising harm. No algorithm design is neutral, but addressing these issues through careful design choices is often more effective than attempting to create flawless datasets.
AI and Human Autonomy: A Design Challenge
While some AI applications seem to threaten human autonomy, a closer look reveals that these threats often stem from shortcomings in design, not from inherent qualities of AI itself. Issues like a lack of transparency, privacy-invasive defaults, and insufficiently granular data categories all contribute to the tension, which indicates that there is no intrinsic conflict between human autonomy and AI. To resolve it, we can design AI systems that respect autonomy by attending to precisely these functionalities, features, and other design aspects. Improved design can help individuals retain their "meta-autonomy": their ability to decide when to decide. Recognising how model design impacts human autonomy thus opens up mitigation techniques that are less burdensome than comprehensive data collection, and it can play an important role in curbing harm.
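As one concrete direction, the sketch below imagines, with hypothetical names and thresholds, what designing for meta-autonomy might look like in code: the system acts on its own only when the user has explicitly delegated that class of decision, above a confidence bar the user, not the vendor, controls.

```python
# Hypothetical design pattern for "meta-autonomy": the user decides
# when the system may decide. Names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class DelegationPolicy:
    auto_decide: bool      # has the user delegated this decision type?
    min_confidence: float  # user-chosen bar for acting automatically

def resolve(prediction: str, confidence: float, policy: DelegationPolicy) -> str:
    """Return the system's action, deferring to the user by default."""
    if policy.auto_decide and confidence >= policy.min_confidence:
        return f"auto: {prediction} (confidence {confidence:.2f})"
    # Otherwise surface the suggestion and hand control back to the person.
    return f"ask user: suggest {prediction!r}, explain reasoning, await choice"

cautious = DelegationPolicy(auto_decide=False, min_confidence=0.99)
delegated = DelegationPolicy(auto_decide=True, min_confidence=0.90)

print(resolve("approve", 0.95, cautious))   # always asks the user
print(resolve("approve", 0.95, delegated))  # acts, because the user opted in
print(resolve("approve", 0.70, delegated))  # falls back to asking
```

The specific mechanism matters less than the point it makes: respecting autonomy can be an implementable design requirement, not an abstraction.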