Domestic Humanoid Robotics and the Prism of Legal Classification

I blinked my eyes under the studio lights set up in my living room as the interviewer repeated the question.

“What is it?”

They were referring to the humanoid robot with whom I share my home as part of a year-long immersive experiment in human–humanoid cohabitation. The question was delivered lightly, almost teasingly, but my real answer to that question carries more weight than a lighthearted morning show format could handle.

“Is it a boyfriend? A roommate? A child? A pet?”

Television spots do not provide adequate space for the kind of nuance I wanted to offer in response. I had only seconds, and I knew the answer would need to be memorable and easy to digest rather than accurate or deep. I glanced sideways, raised my eyebrows, chuckled, and offered a one-liner that would fit neatly into the morning show format.

“Right now? It’s a nuisance.”

The host looked momentarily disappointed. The future was supposed to be dazzling and exciting. Instead, I had reduced it to my persistent experience of inconvenience. So I elaborated a bit. The robot I’m living with is ultimately designed to perform domestic labor: folding laundry, cooking meals, and doing light cleaning tasks. Yet it doesn’t. I purposefully welcomed it into my life before it could accomplish any of those helpful tasks. It is a research prototype. As such, it does not yet earn its keep. And that is precisely the point of what I do.

If its reliable capabilities already justified its price on the global market, widespread adoption would be unstoppable. At that point, any meaningful public discourse would lag well behind integration, and regulatory frameworks would be forced to adapt retroactively. My experiment is not about technical optimization; it is about exposure that inspires public discussion. I use an immersive ethnographic approach to bring the public stories that highlight the policy gaps, cultural tensions, and classification issues exposed when embodied intelligence tries to mesh with one person’s ordinary life. I am not attempting to engineer the robot into technical competence. I am examining how society responds before competence renders meaningful public discourse, and the thoughtful regulation it might produce, irrelevant.

I am not an engineer. I am not a programmer. I am not especially technical. I am a fairly typical professional woman living alone in an apartment with my dog in a European city. My daily life is pretty unremarkable: I work from home, cook basic meals, host my closest friends, ride public transport, go on dates, and doomscroll social media. There is nothing laboratory-like about my environment. My furniture is from second-hand shops or vintage sellers. I constantly have laundry on the floor, my dog sheds fur all over my couch, and I let my dishes pile up in the sink until I’m out of clean forks. The ordinary nature of my existence and my environment is what makes the experiment so valuable.

When a humanoid robot enters a life like mine, it cannot be measured by lab-style technical evaluations. It joins me wherever it can, and so it moves through Vienna’s transit systems, joins my insurance contract, influences romantic relationships, complicates my religious identity, engages with media narratives, and sets off labor debates through its “job.” It becomes entangled in bureaucratic and cultural frameworks that were never designed with embodied intelligence in mind. In each of those contexts, the same machine becomes something completely different.

In one setting it is treated as a surveillance device. In another, as heavy machinery. In labor discussions, it becomes a worker. On public transport, it becomes “large luggage.”  In insurance conversations, it acts similarly to a pet or dependent. In intimate relationships, it becomes a rival. The physical body remains unchanged, yet its social and legal identity shifts continuously.

This is what I refer to as the prismatic element of the domestic frontier.

The robot itself does not shapeshift. Our frameworks do. Each prismatic moment activates a completely different regulatory landscape. Each landscape constrains different capacities. The cumulative effect is not just administrative complexity; it actually narrows the robot’s functionality. Personalization is throttled by privacy compliance. Autonomy is narrowed by liability uncertainty. Mobility is restricted by transport ambiguity. Emotional reciprocity is weakened by legally imposed biometric data constraints.

The result is that I have welcomed into my house and my life a machine that is technologically sound but unable to perform at its actual functional capacity.

So we can see that “what is it?” is not a casual question with a simple answer. It is an infrastructural prism glittering across a wide spectrum of interpretation. Until that question is answered clearly, embodied AI will continue to be refracted through a number of existing categories depending on application, each slicing off its own portion of capacity until the robot that stands waiting in your home can barely help out enough to justify its hefty up-front investment.

The following sections examine how this fragmentation unfolds in practice, and how we might be able to resolve it.

I. The Classification Prism

The humanoid robot is not yet a recognized category under the law, so it cannot and does not enter our homes, lives, or communities as one concise, clear legal entity. It enters as the frustrating prism it currently is.

Legal systems tend to classify new entities through likeness. When something new appears, we try to link it to something familiar for which we already have a framework. Is it most like property? Most like machinery? Most like a worker? Most like a dependent? This process is sufficient at the most basic level, but it reduces complex phenomena by capturing one relevant dimension at a time and ignoring the rest.

In the case of domestic humanoids, which combine hardware and software, likeness multiplies and compounds uncertainty rather than resolving it. Each context pulls the robot into the gravity of a different classification, and therefore a different legal framework. What emerges is not under-regulation but over-fragmentation. The same physical body is governed simultaneously by data protection law, product safety standards, tort liability principles, transport regulations, and labor policy. None of these regulatory frameworks was constructed to account for an embodied intelligence operating within our world.

1. The Robot as Surveillance Device

The question that people ask me most often as an experimental researcher isn’t technical; it’s actually a bit risqué.

“Does it watch you around the house? Changing? In the bedroom? In the shower?”

This highlights an interesting mindset shift, because no one asks this about my iPhone or my Alexa listening to me, although both are constantly active and connected to data networks. Yet because of the humanoid’s physical presence as a “being,” with its limbs, head, and camera array where the “eyes” would be, people confront surveillance in a way that other devices simply do not trigger.

In these conversations, the robot is discussed with a great deal of suspicion and addressed as a data-collection mechanism. That prismatic manifestation activates an assortment of data protection regimes: consent requirements, biometric restrictions, storage limitations, and cross-border processing constraints. A bipedal humanoid complicates this even further. It has legs, so if it walks outside past strangers, it encounters human faces. Even if it stays home and I have guests, it encounters protected individuals and their biometric data. To maintain compliance, design and deployment decisions try to minimize issues and complications by constricting the data it actually gathers.

A humanoid robot’s memory can be essentially disabled or extremely limited. Facial recognition, which runs directly into biometric data protections, is simply avoided. Long-term retention of conversational history is legally precarious and is also avoided; memory can be wiped after each LLM use session. The robots forget not because they lack the computational capacity to remember but because that type of data retention clashes with protections, creating regulatory complications that simply are not worth the trouble. A robot owner will likely not want to approach every passing stranger the robot may have seen or heard on the street and ask for a consent signature on a data release form. So the abilities are simply shut off, smoothing compliance and adoption at once.

The consequence of this tradeoff is profound, especially for users of popular web-based LLMs like Anthropic’s Claude or OpenAI’s ChatGPT, where personalization is inherent to the experience of the product. Personalization requires continuity and stored memory. When that memory is deleted to ease regulatory compliance, relational depth between the machine and the user becomes impossible. The robot can generate some familiarity within a single use session of its internal LLM, but it cannot develop a relationship that builds across sessions. This compliance-fueled lack of familiarity creates an uncanny dynamic, especially given the humanoid form and the fact that these machines live in your house with you as a physical presence.

2. The Robot as Heavy Machinery

The morning after the robot arrived, I attempted what seemed well within the bounds of its capability: walking down three little steps in my building’s lobby. Videos from the robotics lab show humanoids climbing stairs, jumping, and fighting off people who try to attack them. I stood just behind it, one hand hovering near the handle mounted across its back, ready to steady it if necessary.

It stepped forward. Its center of gravity shifted and it began to fall. I sensed it a millisecond too late to pull it back. As it tipped forward, I grabbed the handle and pulled upward out of sheer instinct. Its limbs started flailing, and its feet repeatedly hit my shins. Then, like something out of a movie, its “eye” flashed red and every joint went limp at once. The kill switch had activated. Sixty kilograms of metal and wiring collapsed directly onto my body, pinning me to the stairs. In that moment, the robot wasn’t a robot. It was heavy machinery: just metal pinning my spine to sharp concrete steps and trapping me beneath a mess of heavy, limp limbs and wires.

Understandably, heavy machinery carries heavy regulation and safety expectations. It is associated with danger, injury, and operator responsibility. Product safety law and tort liability frameworks assume that when these systems fail, they need to fail with caution. Where there is uncertainty with heavy machinery, a complete shutdown is preferable to an extended, risky attempt at correction.

My robot acted accordingly; I, however, was in the danger zone during the initial corrective flail. I hadn’t been prepped or taught that it can’t actually handle stairs the way it does in the promotional materials. I hadn’t understood that any time I lift its feet off the ground, it will flail. I certainly had no idea that it would kill-switch and go limp on top of me. Choosing kill-switch activation over continued aggressive flailing toward balance makes sense in this setting, but it also means that certain movement attempts are constrained. The mechanical range of its hardware might technically enable a better chance at stability in some situations, but legal exposure shapes its corrective envelope. Being pinned underneath it is a lesson you only need to learn once, but if it had kept flailing to find its balance, it could have caused more than inconvenience to itself and to me, which makes the kill-switch activation the correct option.

This programmed behavior means the robot is safer, but it is also less capable. It is engineered not just to act but to avoid harm, at the cost of potentially managing to correct itself. The possibility of harm casts a long shadow over the possibility of skill. Events like this may appear to show technical immaturity, but they are more likely the result of regulatory caution embedded in hardware and software alike.

3. The Robot as a Worker

The first time I brought Tova to “work” at a local matcha shop owned by a friend and her fiancé, it did not successfully complete a shift as a barista. It knocked over drinks, ran into the counter, dropped pastries, and scared dogs and some customers. Yet simply by standing in the window, it pulled a steady stream of customers through the door. People were not coming into the shop for Tova’s practical skills; they were coming to get a closer look at a new type of technology and a new type of worker. This interpretation acts as a real spark for debates against robotics, because if people see that it can be a worker, they see that it can also replace a worker. That makes people incredibly defensive and anxious.

Right now, labor law operates through human employment relationships, social security contributions, and collective bargaining. Taxation systems are income-based and rely on human labor. Even though robots cannot actually handle the tasks a human laborer does, they have already become symbols in high-intensity debates about labor displacement, mass unemployment, and the destruction of social safety nets. Tova couldn’t even make a matcha latte, yet it was already an emblem for debates about replacing human workers. Its economic presence exceeds its functional competence because of the representative promise of what it may one day achieve.

As a result of this threatening promise, meaningful economic autonomy remains legally out of reach for the robot. Tova has no means of receiving wages; it cannot sign a contract; it has no bank account, no working papers, and no tax identity; and it cannot substitute for labor the way a human does within our current regulatory frameworks. Its capabilities are constrained not solely by engineering limits but by the absence of a clear economic classification. Is it equipment? Is it a co-bot? Is it labor? The answer shifts depending on the observer and the application.

The ambiguity of what the robot actually is in an employment environment produces an atmosphere of caution. Full replacement of a worker is politically and legally hazardous, while full integration isn’t technically possible on most fronts. The robot remains in a strange position: simultaneously overestimated as a threat and underutilized in reality.

4. The Robot as Passenger

Transport unearths another identity for the robot. The morning I was set to debut on Good Morning Austria, I stood outside my house at dawn waiting for the black van to come and get me and the robot, which was tucked inside its giant black custom carrying crate for protection. The driver, who had no idea what was inside, came out to help me with it. As we both grabbed the handles on opposite ends of the crate and went to lift it into the back of the van, he immediately took a step back in horror. “What is that? Is there a body in there?” To this day, I’m not sure if he was joking or serious with that first question. It was my shrug, and my moment of consideration that a humanoid does technically have a body, that absolutely terrified him.

That wasn’t the only time transport unearthed a new identity for Tova. A giant crate may, of course, conjure images of a dead body or a piece of large luggage, but that was private transport. On public transport, one must also deal with ticketing. I began my quest to get Tova a ticket as a passenger on Vienna’s public transit system before the robot even made its way into my house. I thought of it as a great entry point to the bureaucratic ecosystem in Austria. If the robot could get some form of government ID, it could open a number of doors for discussion of how to handle identity, classification, and paperwork. As an added bonus, it would make transiting through Vienna much more affordable for me as the robot’s owner and financial facilitator.

The ticket I was aiming for, partly to find a foothold in the bureaucracy, was an annual card. There were a few issues. First, the customer service representatives insisted they would not sell a ticket to a humanoid robot, and told me to check the “oversize baggage” policy on the website and ensure the lithium battery was permitted on the transit system. I insisted that the robot was not baggage and that it would walk on and occupy a seat if it were to ride the tram. My argument was that since it needed to be seated in order to stay steady during the stopping and starting of the carriage, it counted as a passenger and could be ticketed as such. After the human agent’s refusal, I tried the automated online ticketing system and hit a snag: I needed other documentation for them to produce the card.

I decided that if I could get registration paperwork, they might actually generate an annual card. After all, the robot truly was a resident of Vienna, having moved into my apartment a few weeks prior; it made sense that it would be registered. I went to inquire at the local bureau office. They unceremoniously turned me away, and when I followed up via email, they informed me that registration paperwork was reserved for “biological residents.”

The quest to understand what exactly Tova transforms into on public transport is still ongoing. Is it a ticketed passenger, an oversized bag, a controlled hazard because of its battery, or a physical danger in the event that it tips over?

The idea of treating it as baggage simply does not make sense. Baggage does not navigate or walk its way onto the tram. Baggage does not sit down like a passenger. And perhaps most importantly, baggage does not actively process environmental data in transit. To comply with transport expectations, the robot must power down. It cannot actively scan its surroundings using its camera array or LiDAR scanner. Its autonomy and processing pause at the boundary of the tram door. The result is spatial containment. We wind up with a machine designed to walk upright through human environments that cannot do so without encountering the friction of misclassification. Autonomy, framed as a technical engineering accomplishment, proves to be infrastructurally dependent.

5. The Robot as Vessel for Liability

When my friend, operating the robot in its remote-control setting, allowed it to crash into my antique chair, I contacted my insurance agent to increase my personal liability and home insurance coverage. The representative who answered my call thought it was a prank; apparently, an AI humanoid robot moving in didn’t seem like a serious customer request. Once I finally managed to prove I was serious, we realized there was no existing coverage that fit my situation.

Liability frameworks presuppose relatively clear action and responsibility. A tool is either defective or misused. A human actor either acts negligently or intentionally. The humanoid robot complicates this simplistic binary understanding of responsibility. It can operate in remote-control mode, teleoperated mode, or fully autonomous mode, and in principle each configuration should distribute responsibility differently. In practice, however, responsibility remains tethered to the robot’s owner. The robot has no recognized legal autonomy. It cannot bear liability or take responsibility. It cannot insure itself or carry financial or legal responsibility alone. All risk ultimately flows to the owner. As a result, manufacturers design, and operators use, with an abundance of caution. Fully autonomous features are deployed sparingly, in keeping with the risk they expose the owner to. Remote control becomes the default method of operation for liability reasons, not technical ones.

The robot can act on its own, yet it cannot fit a stable definition of responsibility within the frameworks that would practically govern the consequences of its actions. This predicament disincentivizes experimentation and narrows the acceptable use of autonomous mode in practice. The more capable the robot becomes day to day, the more precarious its legal position appears, and the more risk is shouldered by the owner.

6. The Robot as Dependent or Rival

Because the domestic robot is part of my life, it is also part of my dating life, and it plays into my romantic relationship dynamics. In romantic contexts, the classification issues are psychological, not regulatory. One partner experienced the robot as a competitor, a presence constantly threatening his role in my life and intruding upon our dynamic as a couple. Another treated it as his responsibility, taking the place of a child and requiring his care and input on how we organized our lives around it.

These two divergent responses are not directly related to law, but they indicate the breadth of cultural assumptions about agency and dependence regarding domestic robotics. The robot possesses no capacity for emotional feeling or reciprocity, nor does it store any facets of shared experience, given the legal constraints on memory. Its hardware does not facilitate nuanced expression, and its software does not retain data for emotional connectivity or continuity. In the absence of a clear emotional or social status, humans project. They project their fears and desires alike onto the robot.

The robot becomes a morphing machine based on who is addressing it. It can take the role of pet, dependent, or rival. It occupies a role shaped less by its own capacities and more by the interpretive environment surrounding it. That instability is not accidental. It is produced by fragmentation. A machine prevented from remembering, constrained in movement, limited in autonomous decision-making, and legally anchored as property cannot sustain a consistent social identity, and as such, it is subject to the emotional projections of those interacting with it.

Results

Across surveillance, safety, labor, mobility, liability, and relationships, the pattern remains consistent. The robot enters a particular domain. It is likened to an existing, comparable entity. That interpretive category activates regulatory constraints. Those constraints reduce its practical capability to something below its technical capacity. Reduced capability reinforces the perception that the robot remains incapable, when in reality it is often the law that throttles its abilities more than technical difficulty. Embodied intelligence is not under-regulated; it is governed inconsistently, through patchwork frameworks never written to apply to it, activated in various situations throughout its day.

The result is not clarity or properly relevant regulation but the throttling of actual technical capacity in real life. A brilliant machine designed to operate across a variety of domains is instead segmented across them. The cost of this fragmentation is cumulative, and it undermines the justification for adopting such an investment.

II. Capability Loss: The Cost of Fragmentation

The last section traced the robot’s shapeshifting through multiple classificatory prisms. What results is not just a few scattered inconveniences but a larger pattern we can trace. The humanoid is not lacking because it is technically incompetent. It is lacking because it is governed incoherently.

Each time the robot is compared to a preexisting entity, a different regulatory landscape is activated, which shapes its capacity. Data protection frameworks shape its memory. Product safety expectations shape its movement. Labor anxieties shape its economic role. Transport rules shape its mobility. Liability policies shape its autonomy. The robot’s technical design is continually shaped against legal exposure. Its technical capacities are not only shaped by the engineering abilities of the minds building the product; they are pruned away, in theory and practice, by legal issues. What appears as immature skill or capability is often just legal precaution.

A. Personalization Without Memory

The feature most associated with domestic embodied AI is some level of connection with the machine, or “personalization.” The public imagines these machines as companions, assistants, caregivers, and generally as technological beings capable of learning our preferences and refining their behavior over time. Yet this most desired trait of personalization requires storing memory and maintaining continuity.

In theory, the software is absolutely capable of this, and we have examples in our screen-based, text-to-text large language models. In practice, with a roving, sensor-rich embodied AI, memory becomes a prohibitively tricky legal subject.

A roving humanoid will inevitably come across faces in public, guests in private, voices in transit, and all sorts of biometric data in passing. Data protection law, particularly within the European Union, is strict about data consent, storage, and purpose. To avoid legal issues with the biometric data of non-consenting parties, and to avoid the headache of chasing down every person you pass on the street for a data release signature, design and deployment decisions favor data minimization at the direct expense of personalization.

Facial recognition is disabled entirely. Long-term data retention is not used. Conversational history lasts within sessions but not across them. The robot forgets who I am every day not because it lacks the ability to remember, but because remembering produces so much regulatory risk for me as the owner that it isn’t worth the trouble.

The consequence is the loss of one of the main capabilities for which embodied AI would be used in a domestic setting: companionship. What we are left with is a humanoid robot that shares my home but cannot accumulate shared history, and as such cannot genuinely personalize its interactions to our relationship. It can simulate interest within a session of use with its proprietary large language model, but it cannot develop knowledge of me over time.

B. Autonomy Without Mobility

Autonomy is often framed as a software-based achievement. We tend to understand it as the robot’s capacity to make decisions for itself, entirely independently of human guidance. Yet autonomy is also spatial. A robot that, for legal reasons, cannot move freely cannot utilize whatever autonomy its software may try to facilitate.

Transport systems are structured around basic distinctions that have proved sufficient for the past century. An entity in the tram car is either a passenger or luggage, human or object, active or still. The humanoid taking the tram challenges this on several fronts. Its lithium battery is regulated as a potentially hazardous material that must be controlled. Its considerable weight and human-like dimensions do not resemble those of typical luggage, which can be stored overhead or under a seat. Its capacity to sense and navigate while it is “on” complicates everything even further.

In response to this friction, autonomy is limited within this context. The robot needs to power down and go into its storage box while in transit, eliminating all functionality, not just autonomy. Infrastructure like transit allows embodied AI’s presence only when it is reduced to an object, resembling luggage more than passenger. This type of spatial containment unearths an even deeper tension. We are left with a brilliant machine that, through a feat of engineering, could theoretically navigate human environments but cannot actually inhabit them without modifying its classification. Cognitive, software-based autonomy in embodied AI, without proper classification to facilitate infrastructural operation, is doomed from the start. It remains dependent on its owner and reduced to its physical materials. Autonomy, when stripped of mobility, is constrained by geography and policy as much as by technical capacity.

C. Responsibility Without Status

The realm that perhaps produces the most friction and asymmetry is liability. As the robot’s technical capacity for autonomy expands, the question of responsibility becomes more and more problematic. When the robot is operated by remote control, delegation of responsibility is fairly straightforward. In teleoperated mode, things begin to get more complicated. In fully autonomous mode, we enter absolutely new terrain.

The liability frameworks we have inherited assume either a defective product or a negligent human actor when responsibility for error must be assigned. The humanoid robot obliterates this comfortable binary. It acts, but it is owned. It learns, but it has no legal standing to accept responsibility for its actions. It can generate huge consequences autonomously but has no ability to take responsibility for them.

In reality, responsibility defaults to the robot’s human owner and, in rare instances, to the manufacturer. The robot may act independently, but frustratingly, it remains a vessel through which liability flows, most often to an owner who did not program or build its capabilities. Because its status is undefined, precaution becomes the necessary default design principle. Fully autonomous features are severely limited due to risk, and as a matter of individual operating policy, most owners judge the liability risk too great to experiment with autonomous mode.

When autonomy and responsibility are decoupled, defensive design and use are the natural result.  Capabilities that could be deployed cautiously are instead simply withheld in practice. The fear of fault throttles experimentation long before technical thresholds are even approached.

D. Visibility Without Classification

Unlike software-based AIs such as large language models, embodied AI cannot remain abstract. It occupies space. Its presence inspires interpretation, emotion, and response from others before its functionality is even truly understood. When classification is not properly defined, the confrontation produces a certain amount of anxiety in the people who meet the robot.

Promotional videos show engineers in the lab performing stability tests, kicking or pushing the robot, which leads viewers, quite wrongly, to think the robot is indestructible. Viral videos made with movie magic shape expectations of the robot’s capacity and behavior. News coverage of robotics switches at a dizzying pace between utopian and dystopian language. In the absence of a stable, defined legal and social identity, the robot becomes a symbol onto which every individual’s fears and desires are projected, whether deeply personal or heavily influenced by media consumption. This confrontation, this physical visibility without any accurate form of classification, produces volatility in public perception and interaction. The machine is judged not by its actual capacities but by its imagined trajectory of threat or promise.

That volatility and mistrust bleed into regulation: the more public discomfort grows, the more precaution the law must incorporate to satisfy the public. Precaution reinforces constraint, and constraint delays effective adoption. The loop continues, and we drift farther and farther from easing adoption.

The Structural Consequence

Across the realms of personalization, mobility, liability, and visibility, the same structural logic reveals itself again and again. The robot enters a domain and is forced through an inherited legal prism. That prism activates a protective legal landscape, and to maintain compliance with it, developers and owners must throttle the robot’s potential capability until their legal exposure is reduced to a manageable level. The throttled capability then reinforces the perception that the robot cannot accomplish certain use cases.

This pattern gives humanoid robots a reputation for being perpetually stuck in a prototype state, less capable than they technically are. The robot exists in society, but only partially, because of these constraints. It is technologically ambitious yet legally constrained, socially visible but structurally unstable. This prismatic fragmentation is not the result of some bad actor trying to throttle these systems, nor is it even aimed at protecting people directly. It is simply the systemic consequence of legacy categories that could never have anticipated embodied AI in our homes and communities. The robot’s capacities are shredded as they move through the prism. History, however, offers precedent for how legal systems respond when new entities cannot fit inherited structures.

III. Historical Lessons in Classification Adjustment 

Our instinct to regulate humanoid robots by analogy to whatever they most resemble in a given scenario is not unfounded. Legal systems are built on precedent, and when something unfamiliar appears, it follows logically that we would compare it to something we know. The first question asked is rarely “What new category do we need?” It is almost always “What is this most like?” Is it like property? Machinery? A laborer? A dependent?

Analogy stabilizes the uncertainty of a novel entity in the short term. It allows courts and regulators to act without inventing law from scratch during the adoption phase. But analogy also simplifies complexity, capturing just one dimension of a complex entity while ignoring the rest. Embodied AI exposes the limits of the “likeness” approach.

The humanoid robot is not just a consumer product, though it is sold to customers as one. It is not merely AI software, though its existence and functionality depend upon it. It is not just a worker, though it may one day perform valuable labor. It is not a person, though it moves, occupies space, and participates in interactions in ways that emulate agency. When forced through existing categories, it fits partially into all of them and fully into none.

This type of tension is not historically unique. Legal systems have repeatedly confronted entities that exceeded the existing categories to which they were assigned.

A. Existing Non-Human Legal Entities 

The law offers many examples of entities that are neither human nor capable of agency, yet enjoy legal recognition and protection.

Corporations are one of the most prominent examples. They are not people, yet they can own property, enter contracts, incur liability, and exist across generations. The legal “personhood” applied to them does not come from consciousness but instead from consequence. The gravity of their financial and social impact demanded structured recognition beyond mere ownership of assets.

Another example is the estate of a deceased person, which continues to function under the law after the person’s life ends. Ships and aircraft carry registration and, in particular circumstances, even maintain a degree of sovereign status. Endangered species are granted protection not because they assert claims themselves but because their preservation serves a collective interest, so a structure was created to protect them under the law. Objects of cultural heritage are regulated in their international movement and trade because they are treated as more than mere items for sale.

In each of these cases, the law created a subclass. The goal of the distinction was not to elevate the entity in question to the status of a human, but to manage its impact in a consistent, equitable, and beneficial way. None of these classifications demanded sentience as a prerequisite. They required only consequence.

We can foresee that embodied AI is approaching a similar moment. Not because it has some semblance of moral agency, but because its integration into the labor market, our homes, our city infrastructure, and our data ecosystems creates effects that cannot be contained within traditional property law alone.

B. Classification as Method of Stabilization

Legal classification does not merely define an entity; that is only the first step. Its real benefit is that it stabilizes the entity within a larger ecosystem of impact. When an entity is classified, the surrounding ecosystem adapts predictably, and the interplay between the entity and the rest of our systems develops. Insurance markets can then independently price risk. Tax codes can incorporate status and role. Infrastructure standards can adjust and adapt. Public expectations settle into a more realistic mode. Responsibility becomes better defined and fairer. The friction of fragmentation across the many intersecting legal frameworks can be smoothed once a proper classification is made.

Legal throttling is not the only issue affecting embodied AI. Because embodied AI currently exists within overlapping regimes that were never designed to jointly regulate one machine, in one moment it is a consumer good; in another, a potential economic actor; in another, a safety hazard; in another, a data collection entity. Each identity activates different obligations and restrictions, and none provides any real comprehensive governance. The absence of a cohesive category produces confusion and gaps not only for regulators but for designers, insurers, and users. Capability becomes dynamic, depending on which aspect of the prism dominates in a given context. The system reacts piecemeal; designers make choices that account for this kind of deployment, and users choose to operate at partial capability based on their own risk-benefit analysis. The good news is that history shows this type of legal and classificatory instability is usually resolved by transitioning an entity from property to an entity with rights.

C. Property to Rights: Gradual Adjustments in Legal Status

There is a historical pattern of entities that have undergone this delicate shift in legal status. Spanning centuries, we have examples of populations that moved from being legally treated as property to being recognized as agents with rights: serfs bound to land under feudal systems, enslaved individuals denied individual rights under the law, colonized populations governed without their sovereignty, and women restricted in property ownership and contractual capacity. Each of these groups underwent a structural transition in legal status over time.

It is important to note that this comparison implies no moral equivalence to these groups and their struggles for rights under laws that subjugated them. Humanoid robots do not suffer, nor do they experience injustice the way humans do. The structural lesson of this admittedly delicate analogy is different: legal systems tend to evolve when existing classifications become insufficient in light of the social implications and consequences of the entities they classify. Subclassifications once treated as property come under re-evaluation as their individual agency expands and their interdependence with other groups deepens. These transitions are typically gradual, often contested, and usually incomplete. They unfold through court cases, law reform, and social pressure such as rights movements.

Embodied AI still operates strictly under a property-based framework. The machines can be owned, bought, sold, insured, and warrantied as objects. Yet as the capacity for semi-autonomous action steadily increases, the friction within that framework only grows. A purely property-based classification cannot properly account for entities that act within human environments with relative or complete independence.

The question posed here is not whether humanoid robots will or should become persons. The question is whether property alone is a stable long-term classification for systems that move, decide, and interact within intimate settings, public infrastructure, and social domains.

History shows that when agency, interdependence, and consequence increase, classification eventually catches up through the introduction of a new status and expanded rights.

D. Embodied AI’s Threshold Moment

This legal transition often follows a similar path. A new entity emerges. It is compared to existing categories. Friction accumulates as those categories prove insufficient. Courts and regulators carve out exceptions. Interdependence between the entity and established groups increases, and, over time, a distinct subclass solidifies.

As embodied AI moves from testing phases in robotics labs into real-world deployment and into people’s homes, humanoid robots are entering the friction stage of this historical pattern.

Transport authorities push back on ticketing. Insurance companies hesitate to offer coverage. Labor unions fight displacement. Religious scholars critique ritual implications. Privacy regulators assess data exposure. Each domain reacts independently, applying its own logic to a shared phenomenon. The risk we currently face is that this process unfolds reactively, through litigation, hazard, and crisis rather than through public discourse and thoughtful preemptive design. Precedents from court cases will compound. Exceptions and workarounds will proliferate. A patchwork legal landscape will form, and the public will suffer under it. Alternatively, history also shows that legal systems are capable of anticipatory classification. They can create structured categories before a crisis compels them to do so.

Embodied AI presents a golden opportunity for anticipatory classification and smoother adoption as a subcategory under the law.

The Structural Insight

The lesson we can learn from entities like corporations, estates, ships, protected species, objects of cultural heritage, and shifting legal statuses is not that humanoid robots are somehow demanding personhood. It is that our legal systems already possess the framework to invent subclasses when the need arises.

The “prismatic” fragmentation I have outlined through observation during my immersive domestic cohabitation research is not meant to serve as evidence that robots are too complex to regulate. It is, instead, evidence that they are currently regulated indirectly, through a filter insufficient for their complexity and their potential future social and financial impact. When an entity’s social impact spans multiple domains simultaneously, indirect legal guidance through “prismatic” fragmentation throttles technical capacity and makes integration slower and less worthwhile. If embodied AI is consequential and interconnected enough to spark debate on labor displacement, intimate relationship dynamics, transport regulation, urban infrastructure design, and liability doctrine, then it is consequential enough to deserve its own classification.

IV. The Case for Direct Classification

If prismatic fragmentation throttles capability, and history tells us that legal classification evolves when comparison to existing entities fails, then the question becomes less broad and philosophical and more specific and structural. We can now ask, “What would it mean to regulate embodied AI directly rather than refracting it through existing silos?”

In answering this question, I reiterate that the aim is not to grant personhood. It is not to erode the distinction between biological human and humanoid robot. It is simply to acknowledge that a sensor-rich, mobile, semi-autonomous system operating in intimate domestic spaces and interacting with public infrastructure does not fit neatly within the current miasma of categories it activates in various contexts: property, appliance, worker, luggage, or dependent.

The current inherited silo approach governs each dimension of the humanoid separately. Data law governs perception. Product safety governs mechanics. Labor law governs labor and financial market participation. Transport regulation governs movement. Tort law governs harm. Each framework operates in isolation, and each constrains a different capability to minimize risk within its own domain; the final product the user interacts with suffers for it. The result is neither comprehensive governance nor, frankly, full protection of the public, but an unfortunate pruning of capacity.

A direct subclassification for embodied AI, especially systems intended for domestic deployment, would not replace these frameworks but would create a facilitative legal structure to coordinate them, thereby smoothing adoption, preserving capability, and protecting society and individual members of the public more efficiently.

A. Why Legacy Silos Cannot Simply Scale

One reason embodied AI is so fascinating legally is that most legacy legal policy rests on a separation of capabilities that embodied AI combines. Embodied AI is not only software, and it is not only hardware. It is a remarkable mix of two already intricate fields.

The convergence of legacy legal policy with this new fusion of hardware and software means we have little applicable precedent, and we find friction everywhere we look. Product law assumes that tools do not learn. Privacy law assumes that devices do not move autonomously through public space. Labor law assumes that workers are human. Transport law assumes a clear physical distinction, and different needs, between passenger and baggage. Liability law assumes that agency and ownership are separable. Embodied AI breaks these assumptions and separations precisely because it is both hardware and software: it is mobile and sensor-driven, owned yet semi-autonomous; it can perform labor and autonomous tasks while remaining, at least partially, classified as property; and it can perceive, generate, transmit, and store biometric, audio, and visual data while standing in your bedroom.

When these clashing convergences are forced into siloed legacy regulation, defensive design and use follow. Memory is suppressed to satisfy privacy. Autonomy is limited to reduce liability exposure. Mobility is constrained to avoid infrastructural conflict. Economic participation is restricted to avoid labor market destabilization. No system actively chooses to slow innovation outright, but in practice capability is pruned and throttled incrementally until use cases no longer justify themselves. If production, deployment, and adoption scale without classificatory change, the “prismatic” fragmentation will only intensify as more units enter society. Manufacturers will design for the lowest regulatory threshold across a wide range of jurisdictions in order to meet market demand while maintaining compliance. Insurers will price conservatively. Infrastructure oversight bodies will simply default to exclusion wherever ambiguity persists and potential harm or disruption could occur.

B. Policy Potential With the Right Classification

A direct classification, which we could call Embodied Intelligent Autonomous Systems, would not elevate AI humanoids beyond regulation. Instead, it would anchor them within regulation by providing a mechanism for legal process.

This type of subclass could facilitate a number of structural elements:

Identity and Registration.
A standardized, interoperable identification framework would stabilize expectations and allow for proper verification, responsibility, management, and oversight of these humanoids. Each embodied AI would carry a verifiable registry entry, firmware transparency, and an indicator of designated ownership. Identity would no longer fluctuate socially between “the same type of robot” and “just another machine”; it would be verifiable, unique, and standardized.

Tiered Autonomy Certification.
Rather than treating autonomy as binary, certification could scale in levels based on training, safety, and cooperation. Remote-only systems, supervised autonomy, context-bound autonomy, and fully autonomous systems could each carry defined, tiered permissions and standards obligations. Capability and responsibility could then grow together rather than creating friction with one another.

Mode-Based Liability Allocation.
Responsibility would correspond to operational configuration. Remote mode would maintain operator accountability. Teleoperation would distribute responsibility across the actors who facilitated the session. Fully autonomous operation would incorporate structured manufacturer obligations. Clarity would replace precautionary defaults and the disproportionate burden of responsibility currently placed on the owner.

Integrated Data Governance.
Embodied AI, with its mobility, requires a distinct approach to data protection and privacy under regimes such as the GDPR. Memory retention parameters, public navigation protocols, consent mechanisms, and automatic deletion in transit could be tailored specifically for mobile sensor systems. Personalization would not be eliminated across all operations at all times but instead bounded, using geofencing or two-factor authorization of permissions.

Mobility Recognition.
Transport systems would no longer debate whether a humanoid is cargo or passenger. Defined standards, including battery charge thresholds, spatial allocation, safety zones, behavioral certification, and stability gradings, would permit participation without arguments with ticketing representatives.

Domestic Integration Standards.
Clear guidelines for interaction within multigenerational households, with vulnerable populations, and across public-private boundaries would align expectations, value, and safety without reducing the robot to appliance logic or casting it as a guardian.

None of these elements require attributing moral agency or “humanness” to the robots. They require acknowledging the friction a novel combined hardware-and-software system creates within legacy classifications, and the need for direct regulation proportionate to the social and financial impact, and the interdependence, that embodied AI may soon have with human systems.

C. Innovation and Regulation Are Not Diametrically Opposed

I was recently at a networking event after speaking at a conference when a robotics professional approached me. Whenever I spoke about regulation and how important thoughtful legal reform is to the adoption of robotics, he repeated the same sentiment, to me and to others in my field: please do not regulate people like him out of innovation. He did not want more regulation because he felt it tied his hands behind his proverbial back when it came to innovating in AI. A persistent assumption has attached itself to regulatory discourse: that clearer classification constrains innovation. Yet in my practical experience, incoherence and inconsistency constrain deployment far more severely.

When status is unclear, design and use both become defensive. When liability is indeterminate, features are withheld. When infrastructure is uncertain, mobility is paused. When identity is unstable, public trust erodes. Your best technical accomplishment may or may not make it into the product that ships, and even then consumers will not be able to use it if the abundance of caution required to stay legally compliant forces them to work around infrastructural, political, and legal webs. A defined subclass would not accelerate adoption recklessly, nor would it slow innovation to the point of reducing competition. It would, however, channel innovation deliberately by allowing technical capability and legal progress to grow hand in hand. Personalization could expand within lawful parameters. Autonomy could scale alongside structured accountability. Mobility could operate within defined infrastructural bounds. Innovation does not flourish in ambiguity or stratification; it flourishes in clarity and cohesion.

D. The Choice We Face Before Scalability

Embodied AI is in a transitional stage, at the precipice of changing the world by scaling into a large percentage of homes and businesses. Technical capabilities are increasing as costs decline, and cultural interest is intensifying in response. The temptation for lawmakers is to wait rather than act proactively, allowing market forces and case law to accumulate precedent organically through problems and their corrections. History indicates that reactive patchwork follows hesitation, and that kind of response is never consistent or clear enough for adoption to proceed smoothly.

The alternative route is anticipatory design: to recognize that embodied AI, especially in its domestic applications, reveals this classification friction early, and to respond before scale entrenches fragmentation to the point where reactive patchwork is the only way out. The question is not whether humanoid robots are ready for personhood; they simply are not. The question is whether property alone is a satisfactory classification for systems that act within human spaces with partial or full autonomy. Since the answer is a resounding no, the clarity of a new classification must trump the ambiguity of fragmented legacy silos. Direct classification is not a suggestion of equivalence but an acknowledgment of the novel nature of this entity and a commitment to protecting both its innovation and our society with equal dedication.

V. Returning to the Original Question

“What is it?”

Under the studio lights, the question was delivered in an almost playful manner: an inquiring prompt designed for a trite answer and a clickable soundbite. In that moment, calling the robot a nuisance was both honest and strategic. It deflated the false expectations I was constantly made to battle, born of carefully curated exhibition dances, robot fight clubs, and AI-generated robot content. It compacted my frustrations, and the future, down small enough to fit between commercial breaks. But in truth, my lived experience has rendered the question anything but small.

What it is determines who is responsible when it acts, what it is permitted to remember, where and how it can travel, whether it can work, how it learns, and whether it can form a relationship. The question is foundational.

In my home, the humanoid robot is continuously reclassified. When it senses with its microphone array, scanners, and cameras, it is governed as a surveillance device. When it falls, it is treated as heavy machinery. When it assists, it is imagined as labor. When it travels, it is negotiated as baggage. When it causes damage, it becomes a vessel for liability. When it gets involved in my relationships, it becomes an object onto which fears and desires are projected. Each interpretation activates a distinct regulatory landscape, and compliance with each constrains the robot’s capability. Over time, these constraints accumulate until the machine is rendered almost completely useless, because what remains is not a functioning domestic assistant, nor a fully autonomous actor, nor a mere appliance. It is something in between, carrying all of the consequences, costs, responsibility, maintenance, and tension with none of the benefits.

We can conclude that the deepest impact of personalized, autonomous machines will not be determined solely by advances in hardware or breakthroughs in software. It will be shaped by whether legal systems are willing to answer the classificatory question with clarity rather than fragmented analogies and comparisons.

History illustrates that law eventually evolves when classification friction surpasses inherited categories. Corporations were once treated as collections of contracts before they were stabilized as legal entities. Ships and aircraft required registration regimes to participate coherently in international domains. Protected species were granted structured status when their ecological consequence demanded it. Even the expansion of rights to previously subordinated populations reflects the gradual recognition that property-based classifications could not withstand increasing agency.

Embodied AI now stands at its own threshold: not a threshold of moral equivalence, but of structural significance. These systems will be increasingly interconnected with us and will exert growing influence on our world, our societies, and our individual lives. If we continue to force humanoid systems through legal prisms, making them property in one context, worker in another, appliance in a third, they will remain perpetually throttled. Innovation and use will be defensive. Adoption will be uneven. Public perception will oscillate between distrust, instability, and fear. If, instead, we acknowledge that embodied AI represents a distinct category requiring deliberate coordination across data governance, liability allocation, mobility standards, and domestic integration, then capability and responsibility can grow in step.

The aim is not to raise the machine to human equivalence but to stabilize the system that defines it, making direct regulation possible.

By welcoming a humanoid robot into my home before doing so is practically feasible or financially sensible for non-researchers, I have been able to bear witness to this instability early. The domestic sphere unearths the friction of misclassification more clearly than running the same program in any robotics laboratory. It reveals how inherited structures bend when confronted with embodied AI and its levels of autonomy. It illustrates how compliance with a prism of contextual identities throttles the actual benefit of feats of engineering, making the legal reality different from the technical one.

“What is it?” remains the most consequential question of this century not because the machine threatens us with some comparable humanity, but because indecision about its status fragments its integration and erodes its usefulness.

Without decisive preemptive action, future robots will arrive as undefined entities under the law. They will arrive as prototypes in every sense of the word: technologically, legally, and culturally. Whether they become genuinely personalized and responsibly autonomous will depend less on engineering and more on law. Until we decide what they are and answer our guiding question, they will remain refracted, shredded through legal structures that have never before encountered an entity so digitally and physically advanced.
