Designing a Domestic Robot’s Moral Architecture

Calibrating an AI humanoid robot to live in your home forces a stark confrontation between morality in theory and morality in practice.

When conflict suddenly arises in your home, you react in real time. Your robot may appear to do the same, but its response is actually the product of decisions made months, if not years, earlier and preprogrammed into its software. This raises the question: who decides your robot's moral architecture? What legal regulations, moral codes, religious doctrines or cultural sensitivities will this programming take into account? A robot might tidy for a visiting lover in a married French household, ignore light corporal punishment of a misbehaving child in Singapore, or assist with pork chop preparation in a Utah kitchen. Each of these actions might be perfectly acceptable in one moral ecosystem and completely unacceptable in another. Programming a robot in a lab, assembling it in a factory and shipping it around the world may work in theory, but when the box is opened and the robot is powered on, each unit enters a home with an entirely unique moral ecosystem that it must adapt to in order to be adopted into the user's life. The degree of programming, adaptability and customization is shaped at many levels: company policy, owner preferences, household rules, community culture and governmental law. Ultimately, the adoption of these robots into homes, and the honing of moral architecture it requires, will force individuals to confront their morality in practice in a quantifiable, undeniable way we have never before faced at the individual level.

Although these robots may be designed and produced in one country, they will likely be shipped abroad once sold. It is therefore incumbent upon the company to ensure that the robot's behavioral programming complies not just with the destination's generally accepted regional morality but also with its local laws. For example, the US has a legal practice called mandated reporting: an obligation placed on specific individuals to report suspected abuse, neglect or other harm to vulnerable people, such as children, the elderly or individuals with disabilities, to the authorities. A number of questions arise as domestic robots enter family homes. Would a domestic robot caring for children, the elderly or individuals with disabilities be classified as a mandated reporter? If so, what behaviors must it flag? And if it fails to report questionable behavior, who is held responsible: the manufacturer, the owner, or the robot itself? What behavior would it be programmed to recognize as suspicious? Suppose an older sibling is play-fighting with his little brother while the robot is recording: would an automatic report be filed, video evidence attached? An autonomous surveillance system acting as a mandated reporter could help keep vulnerable members of society safer, but would knowledge of that legal obligation truly uncover abuse or merely drive it deeper underground?
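One way a manufacturer might defuse the play-fight false positive is to separate flagging from filing. Here is a minimal sketch in Python, with a hypothetical label set and a threshold of our own invention, that gates flagged events toward a human reviewer instead of letting the robot file reports itself:

```python
from dataclasses import dataclass


@dataclass
class FlaggedEvent:
    """One event the perception stack considers suspicious."""
    label: str          # hypothetical label set, e.g. "physical_contact"
    confidence: float   # model confidence, 0.0 to 1.0
    involves_minor: bool


# Hypothetical threshold; in reality it would be set by regulators
# and the manufacturer, not hard-coded like this.
REVIEW_THRESHOLD = 0.6


def should_escalate(event: FlaggedEvent) -> bool:
    """Queue the event for a trained human reviewer.

    Design choice: the robot never files a report with authorities
    on its own. It only escalates to a person, so a sibling
    play-fight misread by the model becomes a review item,
    not a police report.
    """
    return event.involves_minor and event.confidence >= REVIEW_THRESHOLD
```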

If a robot is not classified as a mandated reporter but witnesses or records abuse in a household, between spouses or between parents and older children, can it be called upon to testify in a court case over custody, divorce or abuse? If it can, what would that look like? Perhaps the robot could be compelled to turn over video evidence, but what if it wasn't recording and is called upon to testify in a narrative manner? Then the court would be relying on preprogrammed analysis and interpretation of behavior, which boil down to algorithms, training data and developer assumptions. If a robot's testimony is so deeply shaped by its internal models, is it truly a reliable independent witness, or are we hearing the voice and interpretation of its programmers?

This also raises the questions of continuous recording, data storage and memory wiping. Memory wiping may be the most ethically fraught concept in domestic robotics. It raises existential questions not only about privacy but about truth, responsibility and control. Determining who has the authority to wipe a robot's memory is a process laden with complications. If only the owner can, that creates a dangerous power asymmetry in the household. If anyone can, at any time, that presents significant risks of its own: the continuity of the robot's understanding of its environment, relationships and tasks would be disrupted, and memory erasure opens the door to misuse, such as erasing evidence of mistreatment, abuse, manipulation or accidents. To combat this, we need memory-wiping protocols that address consent, transparency, accountability and the purpose of deletion. These could include a data-erasure log, two-factor authentication with another device, or dual authorization in households where many users interact with the robot. Just as there are protections around what is thoughtfully or purposefully saved, so too must there be protections around what is purposefully deleted.
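To make those protocols concrete, here is a minimal sketch, in Python, of dual-authorization erasure with a tamper-evident log. The class and field names are hypothetical; the point is the shape of the protocol: no single user deletes alone, and the fact of a deletion, its stated purpose and its authorizers survive the deletion itself.

```python
import hashlib
import json
import time
from dataclasses import dataclass


@dataclass
class ErasureRequest:
    requester: str               # profile ID of the person asking
    memory_ids: list[str]        # which stored records to wipe
    reason: str                  # stated purpose, a required field
    approver: str | None = None  # second household member, for dual auth


class MemoryStore:
    def __init__(self) -> None:
        self._records: dict[str, bytes] = {}
        self._erasure_log: list[dict] = []  # append-only audit trail
        self._prev_hash = "genesis"

    def erase(self, req: ErasureRequest) -> None:
        # Dual authorization: no single user can silently wipe memory.
        if req.approver is None or req.approver == req.requester:
            raise PermissionError("erasure requires a second authorizer")

        for mem_id in req.memory_ids:
            self._records.pop(mem_id, None)

        # Hash-chained log entry: deleting the data still leaves a
        # tamper-evident record of when, by whom, and why.
        entry = {
            "time": time.time(),
            "requester": req.requester,
            "approver": req.approver,
            "memory_ids": req.memory_ids,
            "reason": req.reason,
            "prev": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._prev_hash
        self._erasure_log.append(entry)
```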

Multi-user households with minors or staff, intergenerational households and households that regularly welcome guests all introduce an extra layer of complexity to an already difficult task. Authority, consent and context blur in the robot's operational reality, and privacy expectations can vary wildly across generations, classes, cultures and levels of technological fluency. The notion of a single "owner," which works for smartphones and social media accounts, is impossible to reconcile with a shared piece of domestic technology. We are conditioned to think of our settings on a personal basis, as nearly all current AI-enabled technology is designed for individual use. Now we face a delicate balance of interpersonal tensions, autonomy versus oversight, privacy versus security, in which our own desires must be weighed against the feelings and comfort of others affected by our settings and preferences. These robots have barely made their way out of the lab, so we haven't been confronted with these questions yet, but as they enter homes, multi-user and intergenerational households will expose the limitations of current models. We cannot rely on the simplified single-user model as robots enter the delicate ecosystems of the household. The moral architecture of these robots will need not just multi-user profiles but the ability to balance them in practice.
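One candidate for balancing profiles in practice is a most-restrictive-wins rule: for any privacy-sensitive setting, the robot defers to the most protective preference among the people currently present. A minimal sketch, with hypothetical profile fields:

```python
from dataclasses import dataclass


@dataclass
class Profile:
    name: str
    allow_recording: bool  # one hypothetical per-user privacy setting


def recording_permitted(present: list[Profile]) -> bool:
    """Most-restrictive-wins: record only if everyone present consents."""
    return all(p.allow_recording for p in present)


# The owner may have opted in, but a guest's default-deny profile
# switches recording off for the whole room while they are present.
household = [
    Profile("owner", allow_recording=True),
    Profile("guest", allow_recording=False),
]
assert recording_permitted(household) is False
```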

In constructing this moral architecture, users will need to confront their morality in practice, not in theory. A family's or individual's identity may, in fact, be challenged by the reality of how they act in certain real-life scenarios. Suppose the robot winds up in a religious household that prides itself on upholding the values of its faith, yet in reality applies those values rather liberally to daily life. How would the robot respond to this tension between programming and reality? If the robot were commanded to do something its models indicate conflicts with the calibrated guidelines for the house, would it be out of line to raise the contradiction before asking for an override to complete the task? If so, which profiles can override the robot? What happens if the robot is asked to do something illegal? Should the request be flagged to law enforcement or simply refused?
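These questions are design decisions as much as philosophical ones. A minimal sketch of one possible decision flow, with hypothetical task categories and role names, illustrates the three-way split the paragraph above implies: comply, surface the contradiction and await an authorized override, or refuse outright.

```python
from enum import Enum, auto


class Verdict(Enum):
    COMPLY = auto()
    FLAG_AND_ASK = auto()  # name the contradiction, then await an override
    REFUSE = auto()        # no profile may override


# Hypothetical policy tiers calibrated during household setup.
ILLEGAL = {"delete_abuse_footage"}
AGAINST_HOUSE_GUIDELINES = {"prepare_non_kosher_meal"}
OVERRIDE_CAPABLE_ROLES = {"owner", "adult_resident"}


def evaluate(task: str, requester_role: str) -> Verdict:
    """One possible three-way split for a commanded task.

    Illegal requests are refused outright rather than reported to law
    enforcement, which is one answer to the question above, not a
    settled one.
    """
    if task in ILLEGAL:
        return Verdict.REFUSE
    if task in AGAINST_HOUSE_GUIDELINES:
        if requester_role in OVERRIDE_CAPABLE_ROLES:
            return Verdict.FLAG_AND_ASK
        return Verdict.REFUSE
    return Verdict.COMPLY
```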

These complex questions linger over early adopters' experiences, clouding the prospects for widespread adoption and foreshadowing the many inter-familial conflicts that will occur until they are resolved. The construction of moral architecture must be a well-coordinated effort, balancing functionality with awareness, among programmers, primary users, other household members, local communities and, of course, regulatory bodies. Each user and household must confront their morality not just in theory but in practice, and in relation to the practice of the other users of the same system. This may prove an insurmountable challenge for families and individuals who do not wish to engage in the stark self-confrontation of their own morality. The best way forward would be independent user profiles, calibrated through a private questionnaire or interview process, that align the robot with each individual user in the household, with the registered owner named as the indisputable primary operator. Even so, this may be a barrier for families who have the financial means to welcome one of these robots into their homes but do not wish to welcome the magnifying glass it brings to their family dynamics, morality and behavior.
