Law and AI Robotics: The Legal Phenomenon of Capability Loss
I am one of the first people on earth to live full time with a domestic AI humanoid robot. It sounds like a wild futuristic adventure, but in reality it is a constant wave of headaches, frustration, and disappointment. I have a state-of-the-art Unitree G1 EDU 2 with a dexterous hand. It should be able to get my groceries, accompany me to the university where I work, take the metro with me, clean my home, and more. But it doesn’t. It can, but it doesn’t. For example, when I tried to bring it on the tram with me to my office at the University of Vienna, there was a massive issue with ticketing. The robot needed to sit in order to steady itself against the constant stopping and starting of the tram carriage and the elevation changes from district to district. In taking a seat from another passenger, it straddled the line between being a piece of oversized luggage and being a passenger in need of a ticket. When I inquired about ticketing options for humanoid robots, the transit authority unsurprisingly declined to issue a ticket, and the debate raged on. When I tried to teach it how to clean my apartment and realized that, in the trial and error of learning the layout of my space, it was running into some of my antiques, I tried to increase the insurance on my home. I was hung up on, passed from manager to manager, and ultimately declined, as the insurance company was unclear on how to handle the actions of an autonomous humanoid. When I wanted to get the robot a job, I tried to register it as a resident in Vienna at my address to ensure that the authorities were well aware of its location and intentions. I went to the local registration authority in my district and was told that registration was for biological residents only. When I wanted it to learn how to pick up ingredients for me at the supermarket, I needed it to remember the route on the street and to record as I taught it, which it cannot do due to privacy regulations. At each step along the way, the novel, state-of-the-art technology that had been developed, and that I had invested in the robot for, was truncated in its application by regulatory obstacles until it was rendered essentially useless.
This experience with an AI humanoid robot revealed a reality that is largely absent from mainstream discussions of both robotics research and legal theory. The primary obstacles to meaningful robotic integration and effective robotic deployment are not necessarily technical, but legal, regulatory, and economic. While innovation in artificial intelligence and robotics is advancing at extraordinary speed, the moment these systems leave the laboratory and enter domestic space they encounter a bottleneck of policy constraints that effectively strips away most of their technical features. These constraints arise from insurance regimes, liability doctrines, consumer protection laws, privacy regulation, labor politics, contractual control, cloud governance, and geo-fenced compliance obligations. This minefield of external constraints renders much of the technical achievement of innovative engineers moot, because the web of restrictions reshapes and restricts what the robot is designed to do. Through the immersive research method of daily cohabitation with a humanoid robot, this experiment treats lived experience as ethnographic data, revealing regulatory frictions that remain invisible in more abstract policy debates and sidelined from the primary focus of AI engineering research. My findings crystallize into an observation that I describe as capability loss: a systematic stripping of robotic functionality as systems move from experimental laboratory environments to deployment in everyday life. This loss is not the result of insufficient hardware or software, but of policy structures protecting the domestic sphere, a space whose rules were developed without embodied AI as a consideration. Unless these legal frameworks are re-evaluated and updated, domestic robotics risks remaining permanently over-constrained, under-deployed, and economically illogical to adopt, regardless of how advanced the technology becomes.
The domestic sphere constitutes one of the most legally restrictive environments for the deployment and adoption of embodied AI. Unlike industrial or commercial contexts, where risk assessment, supervision, consent, and data use are typically more formally structured, the household sits at the intersection of multiple legal landscapes that were developed without any anticipation of embodied AI. Privacy and data protection law govern observation and memory; consumer protection law mediates acceptable risk and user vulnerability; liability doctrines shape permissible action and intervention thresholds; labor law constrains task allocation and complicates socioeconomic intricacies; and cloud governance and contractual frameworks limit long-term learning and system updates. These regulatory landscapes now all intersect in the domain of a single machine, layering a web of obligations atop the most ordinary of household activities: movement, assistance, and observation. As a result, the home cannot be understood as a neutral testing ground for AI robotics, but as a space where legal issues reveal themselves at their most practical level and where tolerance for ambiguity is at its lowest. This intersection of policies produces a structural tension: the environment that most demands contextual sensitivity and adaptive behavior is also the one in which a robot’s actions are most heavily regulated and least able to deliver at their actual technical capacity.
Unlike standard types of technological limitation, such as hardware issues or model performance ceilings, capability loss is entirely imposed from external sources. It comes from the interaction of policy landscapes that do not merely regulate the robot’s behavior but actually determine its operational limits. In this sense, law does not simply govern robotics after the fact or protect the humans involved; it shapes the robot’s effectiveness in deployment settings by constraining the conditions under which any embedded technical capacity can actually be used. It is important to note that capability loss is not the result of any single policy, but of the clash of multiple legal frameworks that were never designed to function together within a single embodied system bridging the most advanced hardware and software innovations. Each policy may be independently useful, yet their convergence within the operation of embodied AI produces an effect that is rarely discussed: robots that are technically quite sophisticated but behaviorally constrained past the point of usefulness. The result is a widening gap between what robotics research accomplishes and what deployment allows. Throttled functionality undercuts the practical usefulness of the robots and therefore chips away at the economic incentive for purchasing one of these systems. This article presents a selection of legal landscapes currently contributing to capability loss throughout the cohabitation experiment: insurance, liability law, consumer protection, privacy regulation, labor politics, contract law, cloud governance, and geo-regulatory fragmentation.
Insurance and How Overcaution Leads to Inaction
Insurance landscapes exert a clear influence on domestic robotics by translating legal uncertainty into economic risk, shaping what robots are allowed to do before any statute or court ruling is established. In practice, insurers function as early regulators: tasks involving physical proximity, autonomous movement, or intervention by these heavy machines are classified as high-risk, whereas non-invasive, low-impact functions are favored, regardless of technical ability at either end of the spectrum. This asymmetry in risk drives more conservative design choices, lowering decision thresholds toward inaction while discouraging potentially useful actions like physical assistance. Through precisely this asymmetry, insurance exerts influence on domestic robotics not only at the point of use, but during design and development. Long before an embodied AI system enters a private household, its permissible behaviors are filtered through economic assessments of insurability, liability exposure, and risk. In this sense, insurance operates not only as a reactive measure after the machine is produced, but as a decision-making mechanism that helps determine which robotic behaviors are economically viable to build or approve for use at all. The result is that many technically feasible capabilities are constrained, disabled, or excluded during development, not because they are unsafe, but because they introduce forms of risk that cannot be readily priced, pooled, or defended.
Adaptive judgment, physical intervention, and in-situ learning, the capabilities that would most drastically improve a robot’s usefulness in the home, are often treated as economically uninsurable because they complicate clear behavioral attribution and blur responsibility. Developers are incentivized to suppress or truncate these capacities in favor of conservative behavioral envelopes that can be reliably understood and regulated. This design-time influence produces robots that are not simply cautious in practice, but intentionally engineered to avoid forms of intelligence that would introduce high exposure, even when that intelligence is technically possible to include in the system.
In domestic settings, where physical assistance usually requires contextual sensitivity, negotiated risk, and moment-to-moment judgment, this insurability-driven truncation of capability has real-life consequences for the usefulness of the robots. Robots optimized for economic defensibility rather than functional effectiveness struggle to deliver meaningful value relative to their cost, undermining the very adoption that insurance frameworks seek to stabilize. Insurance thus emerges as a central, if largely unacknowledged, driver of capability loss: a system of economic governance that shapes what domestic robotics is allowed to become by filtering technical possibility through the logic of risk avoidance long before any human–robot interaction takes place.
| Scenario | Robot Framing | If Robot Acts | If Robot Does Not Act | Insurance Preference |
| --- | --- | --- | --- | --- |
| Robot is purely observational (no assistive claims) | Passive / informational | High exposure (unexpected action) | Very low exposure | Strongly prefers non-action |
| Robot performs low-risk assistance (e.g., reminders, alerts) | Assistive but non-physical | Moderate exposure | Low exposure | Prefers limited, scripted action |
| Robot is marketed as safety-enhancing (e.g., fall detection) | Protective / preventative | Moderate–high exposure if harm occurs | Moderate exposure if harm was foreseeable | Ambivalent; narrows acceptable behavior |
| Robot has intervened successfully in the past | Reliance established | Exposure if intervention fails | Exposure if non-intervention follows prior success | Prefers consistency over judgment |
| Ambiguous domestic risk (e.g., fatigue, clutter, tools) | Context-dependent | High exposure (discretionary judgment) | Medium exposure (missed prevention) | Prefers warning over action |
| Physical intervention involving contact | High-risk assistive | Very high exposure | Lower exposure | Strongly prefers non-action |

Table 1. Insurance Matrix: Exposure-Based Action Preferences for Domestic Robotics
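To make the asymmetry in Table 1 concrete, the following is a minimal Python sketch of how an insurer-shaped behavioral envelope might be encoded at design time. The scenario names, exposure scores, and thresholds are illustrative assumptions, not values from any real underwriting model.

```python
from enum import Enum

class Preference(Enum):
    NON_ACTION = "strongly prefers non-action"
    SCRIPTED = "prefers limited, scripted action"
    WARN = "prefers warning over action"

# Illustrative exposure scores (0-10), loosely following Table 1.
# Each scenario maps to (exposure_if_acts, exposure_if_does_not_act).
EXPOSURE = {
    "purely_observational": (8, 1),
    "low_risk_assistance": (4, 3),
    "ambiguous_domestic_risk": (8, 5),
    "physical_intervention": (9, 4),
}

def preferred_behavior(scenario: str) -> Preference:
    """Pick the behavior an insurer-shaped policy would favor.

    The key property is asymmetry: acting is penalized more heavily
    than failing to act, so ties and near-ties resolve toward inaction.
    """
    act, no_act = EXPOSURE[scenario]
    if act >= no_act + 4:       # acting is far riskier: do nothing
        return Preference.NON_ACTION
    if act >= no_act + 2:       # acting is somewhat riskier: warn only
        return Preference.WARN
    return Preference.SCRIPTED  # otherwise allow narrow, scripted action

for scenario in EXPOSURE:
    print(f"{scenario}: {preferred_behavior(scenario).value}")
```

The deliberate choice to resolve ties toward inaction mirrors the table’s final column: whenever acting is even moderately riskier than not acting, the policy prefers warnings or outright non-action, regardless of what the hardware could deliver.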
Liability and the Suppression of Agency
If insurance shapes what kinds of robots are likely to be built in the first place, liability law shapes how those robots are allowed to behave once they exist. In both the U.S. and EU legal systems, domestic robots are currently treated as products or tools rather than independent actors. This means that when something goes wrong, responsibility is assigned to either a person or a company. The robot itself, even in autonomous mode, carries no legal responsibility, as there is currently no legal mechanism for machine responsibility. As a result, any autonomy on the part of the robot immediately translates into legal risk for someone else. In practice, this disincentivizes the building and deployment of autonomous modes. To reduce this exposure to liability, developers are incentivized to limit robots to tightly scripted behaviors that can be easily explained and justified later. This leads to systems that are technically capable of adapting, but legally safer when they do not.
Liability law places robots in a difficult position once they are marketed to the public, and to their potential future owners, as capable of helping out at home. If a robot intervenes and causes harm, it can easily be blamed for acting. If it does nothing in a situation where harm was foreseeable, it can easily be blamed for failing to act. Robots are caught at an intersection of doubled risk, and companies respond by narrowing the scope of the robot’s role in its marketing. Robots can be programmed to give verbal warnings rather than take action, to choose deferral over decision-making, and to rely on inactivity over intervention. In homes that are seeking extra presence and support, for instance those with elderly residents or children, this legal pressure discourages exactly the kind of product features that would make robots genuinely useful. The result is another form of capability loss: robots that technically could help more, but are specifically designed not to, because if injury occurs, it is the company producing them that could be held responsible.
Consumer Protection and the Presumption of User Vulnerability
Where insurance and liability frameworks constrain the capacities of domestic robotics by managing economic and legal risk, consumer protection law constrains domestic robotics by making assumptions about the user. In both U.S. and EU contexts, these frameworks are built around the idea that individual consumers are vulnerable, inconsistently informed, and in need of a range of safeguards. However, the underlying legal pressures differ across two distinct cultures of consumer safety. In the U.S., consumer robotics must exist within an extremely litigious environment, where design choices are often shaped by the anticipation of class actions, product warnings, and legal claims of deceptive or unfair practices. In the EU, consumer protection operates alongside strict data protection policies, where compliance with privacy, consent, and data minimization requirements significantly limits how systems can observe and learn within a domestic environment.
Although driven by different forces, these two environments incentivize conservative design choices in both markets. In the U.S., robotics companies may choose simple interfaces, restrictive defaults that limit riskier behavioral settings, and excessive warnings and legal paperwork shipped with the consumer robots in order to reduce their exposure to consumer litigation. In the EU, a similar decision-making process arises from the need to avoid illegal data processing or collection, even where that same data would improve safety and application. Features that would allow users to meaningfully adjust robot behavior, such as tuning risk tolerance, enabling adaptive learning, or authorizing physical assistance in times of need, may never even make it to the EU market through the minefield of legal restrictions. Consumers are treated as legally risky users and may never be given a chance to consent to the more complex trade-offs that create the applied usefulness and inherent value of the robots, even in the privacy of their own homes.
The result is the same each time: the continued pattern of capability loss, merely reached through different legal pathways. Domestic robots are marketed as the future, built to be intelligent, helpful, and adaptive. Yet in reality, because of the many legal fields they intersect with, they wind up deployed as truncated products, with their robotic hands tied behind their proverbial backs. Users stand in front of machines that are technical wonders in the lab but legally constrained in the home. This unfortunate truth will become better known as more consumers face the frustration and the real-world cost-value dilemma of domestic robotics is laid bare. Consumer protection law is essential in preventing harm, but it is another force shaping domestic robotics into systems that are not worth the cost of purchasing them.
Privacy Regulation and the Collapse of Contextual Knowledge
Privacy regulation shapes domestic robotics by limiting what robots are allowed to see, remember, and learn inside and outside the home. For embodied AI, this is crucial to usefulness. Usefulness in the home depends on understanding routines, recognizing people, and learning from repeated interaction over time. When privacy rules make data collection difficult or encourage frequent deletion, robots can technically function but lack the contextual data they need to behave helpfully.
This issue is especially difficult to manage in the EU, where strict data protection laws place enforceable limits on the processing, storage, sharing, and reuse of personal data. Even when customers electively bring a robot into their home, the system may be prohibited from retaining information about household members, routines, or past interactions. The constraints become even more extreme if the robot is bipedal and expected to venture into the community. As a result, robots often operate with limited memory and reduced perception, leading to repeated mistakes, a lack of contextual knowledge, and slow or nonexistent adaptation. In the U.S., privacy regulation is more fragmented at the state and local level and less restrictive at the federal level. This allows greater freedom for robotics in data collection and learning overall, but shifts protection from a blanket overlay toward disclosure, state-based contracts, and after-the-fact enforcement rather than clear, unified limits.
Approaches on both sides of the Atlantic constrain domestic robotics in different ways. In the EU, strict privacy safeguards reduce a robot’s ability to develop long-term understanding of its environment and users. In the U.S., less restrictive data access increases capability but also raises surveillance concerns, which often leads companies to impose their own limits, and to overreach, through design choices and terms of service that place the consumer at their mercy. In both systems, privacy regulation changes what robots are allowed to know about the homes they inhabit. The result is another form of capability loss: robots placed in complex, intimate environments but legally prevented from acquiring the contextual knowledge needed to function effectively within them.
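As a rough illustration of how retention limits collapse contextual knowledge, consider this hypothetical configuration sketch in Python. The jurisdictions, retention windows, and field names are assumptions made for illustration; they do not describe any actual statute or product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RetentionPolicy:
    remember_faces: bool           # may the robot retain identity data?
    retention_days: int            # how long episodic memories persist
    allow_outdoor_recording: bool  # may it record on public streets?

# Hypothetical, jurisdiction-shaped defaults (not a statement of law).
POLICIES = {
    "EU": RetentionPolicy(remember_faces=False, retention_days=1,
                          allow_outdoor_recording=False),
    "US": RetentionPolicy(remember_faces=True, retention_days=90,
                          allow_outdoor_recording=True),
}

def prune_memory(events: list[dict], jurisdiction: str,
                 now: datetime) -> list[dict]:
    """Drop episodic memories older than the jurisdiction's window.

    Under the EU-style policy the robot effectively wakes up each
    day without yesterday's context, so routines must be relearned.
    """
    cutoff = now - timedelta(days=POLICIES[jurisdiction].retention_days)
    return [e for e in events if e["timestamp"] >= cutoff]
```

Under the EU-style entry, a nightly pruning pass leaves the robot with roughly one day of episodic memory, which is precisely the repeated-mistakes, slow-adaptation failure mode described above.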
Labor Politics and Public Perception
Labor politics shape domestic robotics by influencing which tasks robots are allowed to perform and, more specifically, how their role is publicly described. Concerns about labor displacement and worker protection put strong social and political pressure on the deployment of embodied AI, even in private homes. While these debates usually center on the workforce, they spill over into domestic settings, where robots are often designed and marketed in ways that highlight the replacement of tasks rather than the replacement of a laborer. This helps to stem an additional current of public discourse about robots replacing humans and threatening their employment. In both the U.S. and the EU, this pressure to avoid a touchy public subject encourages developers to present state-of-the-art robots as “assistants” or “helpers,” even when their technical capabilities could extend much further in real-world application and deployment. Tasks associated with care, cleaning, maintenance, or service work are frequently limited or carefully framed to avoid political and labor-union response. In the EU, where labor protections are strong and social safety nets are delicate mechanisms, there is pronounced public sensitivity to technologies that could undermine employment. In the U.S., labor politics are less unified, but public sensitivity to automation and labor displacement still shapes how domestic robotics are discussed, regulated, and socially accepted.
The result is that robots are often deliberately underutilized or infantilized. Capabilities that could reduce physical strain, support care work, or assist with time-consuming domestic tasks are constrained in their marketing, and therefore in their use, to avoid public or political controversy. This narrowing of robotic roles in media relations and marketing materials does not reflect technical limits so much as social compromise in pursuit of a smoother adoption process. Domestic robots arrive in homes technically capable of more than they are advertised to do, reinforcing the broader pattern of capability loss and weakening the economic case for adoption by limiting the very forms of assistance that would make these systems valuable in everyday life.
Contract Law and the Illusion of Ownership
Contract law and cloud governance shape domestic robotics by determining who ultimately controls a robot after it enters the home. Although domestic robots are consistently marketed as consumer products, their operation is typically influenced by terms of service, end-user license agreements, and cloud-based dependencies that give manufacturers an enormous amount of authority over the behavior of the robot. These contracts often allow the company to make changes to features, functionality, and data practices without the input of the consumer, meaning the robot a consumer purchases is not necessarily the robot they will continue to live with over time.
Automatic software updates sit at the core of this problem of ownership and consistency. Updates are commonly presented to users as necessary for security, safety, and compliance, leaving little meaningful choice but to accept. Refusing an update can disable even the most basic functionality, restrict access to cloud services, or render the robot partially or entirely unusable. In practice, consent becomes moot: users must accept new terms and behavioral changes in order to retain basic operation of their investment. This arrangement shifts control away from the household and toward the platform provider, embedding legal authority directly into the technical infrastructure of the robot.
This standard has negative implications for trust in continued capability. Features can be added, altered, or removed to meet changing regulatory, economic, or corporate priorities, often without any renewed user agreement. In both the U.S. and the EU, contract law generally enforces these arrangements, treating continued use as acceptance. As a result, even purchased and outright-owned domestic robots function less as physically owned objects and more as leased software services running on owned hardware, subject to ongoing governance from outside the home. This dependency further contributes to capability loss: even where a robot is technically capable, its usefulness remains contingent on contractual compliance and uninterrupted cloud access, reinforcing the gap between consumer expectation and lived experience.
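As a sketch of the update-gating dynamic described above, the hypothetical Python snippet below ties a robot’s operating mode to acceptance of the vendor’s latest terms. The version numbers, grace window, and modes are invented for illustration only.

```python
from enum import Enum, auto

class Mode(Enum):
    FULL = auto()      # all purchased features available
    DEGRADED = auto()  # cloud-dependent features withheld
    BRICKED = auto()   # core functions disabled

def operating_mode(accepted_terms_version: int,
                   current_terms_version: int,
                   grace_updates: int = 1) -> Mode:
    """Gate functionality on acceptance of the vendor's latest terms.

    Continued full operation is contingent on accepting each new
    agreement; refusal produces progressive loss of function rather
    than a stable, owned product.
    """
    lag = current_terms_version - accepted_terms_version
    if lag <= 0:
        return Mode.FULL
    if lag <= grace_updates:
        return Mode.DEGRADED
    return Mode.BRICKED

# Two unaccepted updates later, the "owned" robot stops working.
print(operating_mode(accepted_terms_version=3, current_terms_version=5))
```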
Geo-Regulation and Fragmented Use
Variation in regulation shapes domestic robotics by tying robotic capability to geographic location. As robots cross borders, or even operate within different regulatory zones, their behavior is increasingly governed by location-specific rules related to data protection, safety standards, cloud access, and AI governance. As a result, the same robot may function differently depending on where it is used, not because of any meaningful technical variation, but because regulatory compliance is enforced through geographically keyed restrictions.
In practice, this often takes the form of geo-fencing: a practice in which features are enabled, limited, or disabled based on jurisdiction. In the European Union, stricter data protection and emerging AI regulations may require reduced data retention, limited perception, or constrained autonomy. In the United States, fewer structural limits may allow broader functionality, but this flexibility is offset by legal uncertainty and greater exposure to litigation. Manufacturers respond by tailoring behavior regionally or defaulting to the most restrictive standard across markets, further narrowing overall capability.
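A minimal sketch of how such geo-fencing might look in code follows, assuming hypothetical per-jurisdiction feature flags; none of the flags or values below reflect any actual product or regulation.

```python
# Hypothetical per-jurisdiction capability flags; the regions, feature
# names, and values are illustrative assumptions, not product policy.
CAPABILITIES = {
    "EU": {"autonomous_navigation": True, "face_recognition": False,
           "physical_assistance": False, "long_term_memory": False},
    "US": {"autonomous_navigation": True, "face_recognition": True,
           "physical_assistance": True, "long_term_memory": True},
}

def effective_capabilities(markets: list[str]) -> dict[str, bool]:
    """Default to the most restrictive standard across target markets.

    A feature ships enabled only if every targeted jurisdiction
    permits it, which is how regional compliance narrows what the
    single product sold everywhere is allowed to do.
    """
    features = CAPABILITIES[markets[0]]
    return {f: all(CAPABILITIES[m][f] for m in markets) for f in features}

print(effective_capabilities(["EU", "US"]))
# Only autonomous_navigation survives the intersection.
```

Shipping the intersection of all markets’ permissions is the “most restrictive standard” default: a single firmware image sold everywhere keeps only the features every jurisdiction allows.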
For users, geo-regulation produces inconsistency and confusion. A robot that performs one way in one country may behave differently or lose functionality entirely after relocation, travel, or regulatory updates. These changes are rarely transparent to the user and are often implemented remotely through software controls. Geo-regulation thus reinforces capability loss by fragmenting robotic behavior across borders and subordinating technical possibility to jurisdictional compliance. The result is a system whose intelligence is not only legally constrained, but geographically contingent, further complicating adoption and weakening the promise of domestic robotics as a stable, long-term technology.
The True Cost of Not Adapting Law to Application
Taken together, these legal and regulatory domains do not operate in isolation; they accumulate and reinforce one another in ways that fundamentally shape the lived reality of the deployment of domestic robotics. Insurance constrains what capabilities are economically viable to build, liability law suppresses discretionary behavior once robots are deployed, consumer protection limits user agency and customization, privacy regulation restricts perception and memory, labor politics narrow acceptable task scope, contract law and cloud governance centralize control outside the home, and geo-regulation fragments behavior across jurisdictions. Each framework is internally rational and justified, yet their collective overlap produces a system in which technical capability is steadily throttled as robots move from testing in the robotics lab into deployment in the homes of users.
This cumulative effect produces an unfortunate paradox in applied robotics: domestic robots are increasingly sophisticated, yet increasingly constrained. Users encounter machines that are expensive, technologically advanced, and heavily marketed as intelligent, but which hesitate, forget, refuse, or purposefully underperform in the moments where assistance would matter most. The issue is not that robotics has failed to develop, but that the conditions under which robots are allowed to operate have narrowed so significantly that meaningful functionality becomes almost impossible to deliver to the customer. Capability loss thus emerges not as a side effect of regulation, but as its predictable outcome when multiple legal regimes converge without thoughtful, high level coordination.
From an adoption perspective, this produces an enduringly negative cost–benefit calculus for customers. Domestic robots are understandably costly because they incorporate advanced hardware and AI systems, yet their constrained behavior limits the value they can provide to users. Consumers are asked to accept surveillance trade-offs, contractual dependency, and behavioral inconsistency in exchange for systems that are legally allowed to do very little. Over time, this mismatch undermines trust, slows adoption, and reinforces skepticism about the practical value of embodied AI. Unless legal and regulatory frameworks are re-evaluated with attention to their cumulative, intersecting impact, domestic robotics risks remaining trapped in a cycle where increasing technological capability yields diminishing real-world utility.
Conclusion
Living with a humanoid robot makes clear that the main barriers to domestic robotics are not technical, but legal and regulatory. As robots move from research labs into private homes, they encounter a combination of insurance requirements, liability rules, consumer protection laws, privacy regulation, labor politics, contractual controls, cloud dependence, and location-based restrictions. Each of these systems is designed to address real risks, yet together they thoroughly truncate what robots are allowed to do in everyday life. This produces the persistent pattern of capability loss, which cannot be solved by technical leaps in engineering alone. The analysis in this article shows that domestic robots are constrained not because they lack intelligence, engineering, or mechanical ability, but because existing legal frameworks favor caution, predictability, and risk avoidance over usefulness. These constraints become most visible in daily interaction, as in my immersive research project, where legal rules translate into uselessly minuscule memory and reduced autonomy. The result is a glaring gap between what domestic robots could reasonably provide a user and what they are permitted to deliver in practice.
If domestic robotics is to become a viable and widely adopted technology, legal and regulatory frameworks must be reconsidered with their combined effects in mind. This does not mean weakening protections or accelerating deployment without safeguards. Rather, it requires governance approaches that recognize the realities of embodied AI in the home and allow sufficient functional capacity to justify the costs, trade-offs, and expectations placed on users. Without a reassessment, domestic robots will unfortunately remain expensive, limited, and difficult to justify, with the fault falling squarely on the legal constraints rather than on the incredible feats of engineering that hold the promise of a brighter tomorrow.
What this research ultimately shows is that embodied AI does not fit into any existing legal category, and that issue of identity is at the heart of the regulatory complication. A humanoid robot in the home is treated at the same time as a consumer product, a tool, a data-collecting system, a safety device, a potential worker, and a cloud-controlled service. Each area of law applies its own rules as if the robot were only one of those things. The result is a pile-up of requirements that were never designed to work together. No single system is wrong, but together they place so many limits on behavior that the robot’s real abilities are slowly stripped away. This is how capability loss happens: not because the robot cannot act, but because the law has no clear way to understand what the robot actually is.
For this reason, regulating only how robots affect humans is no longer enough. Moving forward, embodied AI will increasingly need its own legal classification, one that recognizes it as a distinct type of system without turning it into a legal person. A new form of classification would make it possible to regulate the robot directly, rather than indirectly through fragmented rules about products, labor, data, or liability. Clear boundaries could be set around what a robot is allowed to do, how responsibility is assigned, what kinds of learning are permitted, and how risk is shared between manufacturers, owners, and insurers. This would not reduce safety or protections. It would replace today’s unintentional over-restriction with intentional classification. Without this shift, domestic robots will remain expensive, heavily limited, and frustrating to live with, no matter how advanced the technology becomes. If embodied AI is going to function meaningfully in everyday life, the law must begin by acknowledging it as a new category of our society and regulating it clearly, consistently, and on purpose.