Emily Kate Genatowski

Domestic Humanoid Robotics and the Prism of Legal Classification

I blinked my eyes under the studio lights set up in my living room as the interviewer repeated the question.

“What is it?”

They were referring to the humanoid robot with whom I share my home as part of a year-long immersive experiment in human–humanoid cohabitation. The question was delivered lightly, almost teasingly, but my real answer to that question carries more weight than a lighthearted morning show format could handle.

“Is it a boyfriend? A roommate? A child? A pet?”

Television spots do not provide adequate space for the kind of nuance I wanted to offer in response. I had only seconds, and I knew the answer would need to be memorable and easy to digest rather than accurate or deep. I glanced sideways, raised my eyebrows, chuckled, and offered a one-liner that would fit neatly on the television program.

“Right now? It’s a nuisance.”

The host looked momentarily disappointed. The future was supposed to be dazzling and exciting. Instead, I had reduced it to my persistent experiences of inconvenience. So I elaborated a bit. The robot I’m living with is ultimately designed to perform domestic labor like folding laundry, cooking meals, and doing light cleaning tasks. Yet, it doesn’t. I purposefully welcomed it into my life before it could accomplish any of those helpful tasks. It is a research prototype. As such, it does not yet earn its keep. And that is precisely the point of doing what I do.

If its reliable set of capabilities already justified its price in the global market, widespread adoption would be unstoppable. At that point, any meaningful public discourse would be well behind integration, and regulatory frameworks would be forced to adapt retroactively. My experiment is not about optimizing technicalities; it is about exposure that inspires public discussion. I use an immersive ethnographic approach to bring the public stories that highlight the policy gaps, cultural tensions, and classification issues exposed when embodied intelligence tries to mesh with one person’s ordinary life. I am not attempting to engineer the robot into technical competence. I am examining how society responds before competence arrives and renders meaningful public discourse, and the thoughtful regulation it could produce, irrelevant.

I am not an engineer. I am not a programmer. I am not especially technical. I am a fairly typical professional woman living alone in an apartment with my dog in a European city. My daily life is pretty unremarkable: I work from home, cook basic meals, host my closest friends, ride public transport, go on dates, and doomscroll social media. There is nothing laboratory-like about my environment. My furniture is from second-hand shops or vintage sellers. I constantly have laundry on the floor, my dog sheds fur all over my couch, and I let my dishes pile up in the sink until I’m out of clean forks. The ordinary nature of my existence and my environment is what makes the experiment so valuable.

When a humanoid robot enters a life like mine, it can no longer be measured by lab-style technical evaluations. It joins me wherever it can, and as such, it moves through Vienna’s transit systems, joins my insurance contract, influences romantic relationships, complicates my religious identity, engages with media narratives, and sets off labor debates through its “job.” It becomes entangled in bureaucratic and cultural frameworks that were never designed with embodied intelligence in mind. In each of those contexts, the same machine becomes something completely different.

In one setting it is treated as a surveillance device. In another, as heavy machinery. In labor discussions, it becomes a worker. On public transport, it becomes “large luggage.” In insurance conversations, it is treated like a pet or dependent. In intimate relationships, it becomes a rival. The physical body remains unchanged, yet its social and legal identity shifts continuously.

This is what I refer to as the prismatic element of the domestic frontier.

The robot itself does not shapeshift. Our frameworks do. Each prismatic moment activates a completely different regulatory landscape. Each landscape constrains different capacities. The cumulative effect is not just administrative complexity; it actually narrows the robot’s functionality. Personalization is throttled by privacy compliance. Autonomy is narrowed by liability uncertainty. Mobility is restricted by transport ambiguity. Emotional reciprocity is weakened by legally imposed biometric data constraints.

The result is that I welcomed a machine into my house and my life that is technologically sound but unable to perform at its actual functional capacity.

So we can see that “what is it?” is not a casual question with a simple answer. It is an infrastructural prism glittering across a wide spectrum of interpretation. Until the question of what it is is answered clearly, embodied AI will continue to be refracted through a number of existing categories depending on application, each slicing off its own portion of capacity until the robot that stands waiting in your home can barely help out enough to justify its hefty up-front investment.

The following sections examine how this fragmentation unfolds in practice, and how we might be able to resolve it.

I. The Classification Prism

The humanoid robot is not yet a recognized sub-population under the law, so it cannot and does not enter our homes, lives, or communities as one concise or clear legal entity. It enters as the frustrating prism it currently is.

Legal systems tend to classify new entities through likeness. When something new appears, we try to link it to something familiar that we already have a framework for. Is it most like property? Most like machinery? Most like a worker? Most like a dependent? This process is sufficient on the most basic level, but it reduces complex mechanisms by capturing one relevant dimension of a phenomenon at a time and ignoring the rest.

In the case of domestic humanoids, which combine hardware and software, likeness multiplies and compounds uncertainty rather than resolving it. Each context pulls the robot into the gravity of a different classification, and therefore a different legal framework. What emerges is not under-regulation but over-fragmentation. The same physical body is governed simultaneously by data protection law, product safety standards, tort liability principles, transport regulations, and labor policy. None of these regulatory frameworks were constructed to account for an embodied intelligence operating within our world.

1. The Robot as Surveillance Device

As an experimental researcher, the question people ask me most often isn’t technical; it’s actually a bit risqué.

“Does it watch you around the house? Changing? In the bedroom? In the shower?”

This highlights an interesting mindset shift, because no one asks this about my iPhone or my Alexa listening to me, although both are constantly active and on data networks. Yet because of the humanoid’s physical presence as a “being,” with its limbs, head, and camera array where the “eyes” would be, people are consumed by the confrontation of surveillance in a way that other devices never trigger.

In these conversations, the robot is discussed with a great deal of suspicion and is addressed as a mechanism for data collection. That prismatic manifestation activates an assortment of data protection regimes: consent requirements, biometric restrictions, storage limitations, and cross-border processing constraints. A bipedal humanoid complicates this even further. It has legs, so if it walks outside and goes past strangers, it encounters human faces. Even if it stays home and I have guests, it is encountering protected individuals and their biometric data. To maintain compliance, design and deployment decisions minimize issues and complications by constricting the data it actually gathers.

A humanoid robot’s memory can be essentially disabled or extremely limited. Facial recognition, which directly runs into biometric data protections, is simply avoided. Long-term retention of conversational history is legally precarious and is also avoided; memory can be wiped after each LLM use session. The robots forget not because they lack the computational capacity to remember, but because that type of data retention clashes with protections and creates regulatory complications that are simply not worth the trouble. A robot owner is unlikely to want to approach every passing stranger the robot may have seen or heard and ask for a consent signature on a data release form. So the abilities are simply shut off, smoothing compliance and adoption at once.

The consequence of this tradeoff is profound, especially in light of popular web-based LLMs like Anthropic’s Claude or OpenAI’s ChatGPT, where personalization is inherent to the experience of the product. Personalization requires continuity and stored memory. When that memory is deleted to ease regulatory compliance, relational depth between the machine and the user becomes impossible. The robot can generate some familiarity within a single session of its internal LLM, but it cannot develop a relationship that builds across sessions. This compliance-fueled lack of familiarity creates an uncanny dynamic, especially given the humanoid form and the fact that these machines live in your house with you as a physical presence.
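
To make this tradeoff concrete, here is a minimal sketch in Python of the kind of data-minimization policy described above. Every class, field, and default is a hypothetical illustration, not a quotation from any real robot SDK.

```python
# A hypothetical sketch of compliance-driven data minimization.
# All names and defaults are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class PerceptionPolicy:
    """Limits on sensing and retention, shaped by regulation rather than hardware."""
    facial_recognition_enabled: bool = False     # biometric identification avoided entirely
    retain_memory_across_sessions: bool = False  # no long-term conversational history


@dataclass
class SessionMemory:
    """Conversational context that exists only while a session is active."""
    utterances: list = field(default_factory=list)

    def remember(self, text: str) -> None:
        self.utterances.append(text)

    def wipe(self) -> None:
        self.utterances.clear()


def end_session(memory: SessionMemory, policy: PerceptionPolicy) -> None:
    """At session end, compliance, not capability, forces the forgetting."""
    if not policy.retain_memory_across_sessions:
        memory.wipe()  # the robot starts the next session knowing nothing about you
```

Nothing in a sketch like this is technically hard; the point is that the defaults are set by legal exposure, not by what the hardware could do.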

2. The Robot as Heavy Machinery

The morning after the robot arrived, I attempted what seemed well within the bounds of its capability: walking down three little steps in my building’s lobby. Videos from the robotics lab show humanoids climbing stairs, jumping, and fighting off people who try to attack them. I stood just behind it, one hand hovering near the handle mounted across its back, ready to steady it if necessary.

It stepped forward. Its center of gravity shifted and it began to fall. I sensed it a millisecond too late to pull it backwards. As it began to tip forward, I grabbed the handle and pulled upward out of sheer instinct. Its limbs started flailing and its feet repeatedly hit my shins. Then, like something out of a movie, its “eye” flashed red and every joint went limp at once. The kill switch had activated. Sixty kilograms of metal and wiring collapsed directly onto my body, pinning me to the stairs. At that moment, the robot wasn’t a robot. It became heavy machinery. It was just metal pinning my spine to sharp concrete steps and trapping me beneath a mess of heavy limp limbs and wires.

Understandably, heavy machinery has heavy regulation and safety expectations. It is associated with danger, injury, and operator responsibility. Product safety law and tort liability frameworks assume that when these types of systems fail, they need to fail with caution.  Where there is uncertainty with heavy machinery, a complete shutdown is preferable to an extended risky attempt at correction.

My robot did act accordingly; I, however, was in the danger zone during the initial corrective flail. I hadn’t been prepped or taught that it can’t actually handle stairs the way it does in the promotional materials. I hadn’t understood that it would flail anytime its feet left the ground. I certainly had no idea that it would kill-switch and go limp on top of me. Choosing to activate the kill-switch rather than continue aggressively flailing for balance makes sense in the setting, but it also means that certain movement attempts are constrained. The hardware engineering of its mechanical range might technically allow a better chance at stability in some situations, but legal exposure shapes its corrective envelope. Being pinned underneath it is a lesson you only need to learn once, but if it had continued flailing to find its balance, it could have caused more than inconvenience to itself and to me, which makes kill-switch activation the correct option.

This design choice means that the robot is safer, but it is also less capable. It is engineered not just to act but to avoid harm, even at the cost of a chance to correct itself. The possibility of harm casts a long shadow over the possibility of skill. Events like this may appear to show technical immaturity, but they are more likely the result of regulatory caution embedded in hardware and software alike.
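
A rough sketch of that “fail with caution” logic might look like the following. The thresholds and action names are invented for illustration; the point is only that an uncertain recovery resolves to a full shutdown rather than continued flailing.

```python
# A hypothetical control-cycle safety policy. Thresholds are illustrative only.

TILT_LIMIT_DEG = 25.0        # beyond this tilt, a fall is considered likely
CORRECTION_WINDOW_S = 0.5    # time budget for a recovery attempt


def safety_action(tilt_deg: float, feet_on_ground: bool, correction_elapsed_s: float) -> str:
    """Return the safety action for one control cycle."""
    if feet_on_ground and tilt_deg < TILT_LIMIT_DEG:
        return "walk"                  # normal operation
    if correction_elapsed_s < CORRECTION_WINDOW_S:
        return "attempt_correction"    # the brief recovery attempt (the flail)
    # Liability-shaped design: when recovery is uncertain, cut torque everywhere.
    return "kill_switch"               # every joint goes limp at once
```

In a scheme like this, widening the correction window is an engineering decision, but shrinking it is a legal one.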

3. The Robot as a Worker

The first time I brought Tova to “work” at a local matcha shop owned by a friend and her fiancé, it did not successfully complete a shift as a barista. It knocked over drinks, ran into the counter, dropped pastries, and scared dogs and some customers. Yet simply by being in the window, it pulled a steady stream of customers through the door. People were not coming into the shop for Tova’s practical skills; they were coming to get a closer look at a new type of technology and a new type of worker. This interpretation is a real spark for debates against robotics, because if people see that it can be a worker, they see that it can also replace a worker. That makes people incredibly defensive and anxious.

Right now, labor law operates on human employment relationships, social security contributions, and collective bargaining. Taxation systems are income-based and rely on human labor. Even though robots cannot actually handle the tasks a human laborer does, they have already become symbols in high-intensity debates about labor displacement, mass unemployment, and the destruction of social safety nets. Tova couldn’t even make a matcha latte, yet it was already an emblem for debates about replacing human workers. Its economic presence exceeds its functional competence because of the representative promise of what it may one day achieve.

As a result of this threatening promise, meaningful economic autonomy remains legally out of reach for the robot. Tova has no means through which to receive wages; it cannot sign a contract; it has no bank account, no working papers, and no tax identity; and it cannot substitute for labor the way a human does within our current regulatory frameworks. Its capabilities are constrained not solely by engineering limits but by the absence of a clear economic classification. Is it equipment? Is it a co-bot? Is it labor? The answer shifts depending on the observer and the application.

The ambiguity of what the robot actually is in an employment environment produces an atmosphere of caution. Full replacement of a worker is politically and legally hazardous, whereas full integration isn’t technically possible on most fronts. The robot remains in a strange situation where it is simultaneously overestimated as a threat and underutilized in reality.

4. The Robot as Passenger

Transport unearths another identity for the robot. The morning I was set to debut on Good Morning Austria, I stood outside my house at dawn waiting for the black van to come and get me and the robot, which was tucked inside its giant black custom carrying crate for protection. The driver came out to help me with it and had no idea what was inside. As we both grabbed the handles on opposite ends of the crate and went to lift it into the back of the van, the driver immediately took a step back in horror. “What is that? Is there a body in there?” To this day, I’m not sure if he was joking or serious with that first question. It was my shrug, and my moment of consideration that a humanoid does technically have a body, that absolutely terrified him.

This wasn’t the only time transport unearthed a new identity for Tova. Of course, a giant crate may conjure images of a dead body or a piece of large luggage, but that was private transport. On public transport, one also must deal with ticketing. I began my journey of trying to get Tova a ticket as a passenger on Vienna’s public transit system before the robot even made its way into my house. I thought of it as a great entry point to the bureaucratic ecosystem in Austria. If the robot could get some form of government ID, it could open a number of doors for discussing how to handle identity, classification, and paperwork. As an added bonus, it would make transiting through the city of Vienna much more affordable for me as the robot’s owner and financial facilitator.

The ticket I was aiming for, partly to find a foothold in the bureaucracy, was an annual card. There were a few issues. First, the customer service representatives insisted that they would not sell a ticket to a humanoid robot, and that I had to check the “oversize baggage” policy on the website and ensure the lithium battery was permitted on the transit system. I insisted that the robot was not baggage and that it would be walking on and occupying a seat if it were to ride the tram. My argument was that since it needed to be seated in order to stay steady during the stopping and starting of the carriage, it counted as a passenger and could be ticketed as such. After the human agent’s refusal, I tried the automated online ticketing system. I hit a snag: I needed other documentation for them to produce the card.

I decided that if I could get registration paperwork, they might actually issue an annual card. After all, the robot truly was a resident of Vienna, having moved into my apartment a few weeks prior. It made sense that it would be registered. I went to inquire at the local bureau office. They unceremoniously turned me away, and when I followed up via email, they informed me that registration paperwork was reserved for “biological residents.”

The quest to understand what exactly Tova transforms into on public transport is still ongoing. Is it a ticketed passenger, an oversized bag, a controlled hazard because of its battery, or a physical danger in case it tips over?

The idea of treating it as baggage simply does not make sense. Baggage does not navigate or walk its way onto the tram. Baggage does not sit down like a passenger. And perhaps most importantly, baggage does not actively process environmental data in transit. To comply with transport expectations, the robot must power down. It cannot actively scan its surroundings using its camera array or LiDAR scanner. Its autonomy and processing pause at the boundary of the tram door. The result is spatial containment. We wind up with a machine that is designed to walk upright through human environments but cannot do so without encountering the friction of misclassification. Autonomy, which is framed as a technical engineering accomplishment, proves to be infrastructurally dependent.

5. The Robot as Vessel for Liability

When my friend was operating the robot in its remote-control setting and allowed it to crash into my antique chair, I contacted my insurance agent to increase my personal liability and home insurance coverage. The representative who answered my call actually thought it was a prank. Apparently, an AI humanoid robot moving in didn’t seem like a serious customer request. Once I finally had a chance to prove I was serious, we realized there was no existing coverage that fit my particular situation.

Liability frameworks presuppose relatively clear action and responsibility. A tool is either defective or misused. A human actor either acts negligently or intentionally. The humanoid robot complicates this simplistic binary understanding of responsibility. It can operate in remote mode, teleoperated mode, or fully autonomous mode, and in principle each configuration should distribute responsibility differently. In practice, however, responsibility remains tethered to the owner of the robot. The robot has no recognized legal autonomy. It cannot bear liability, insure itself, or take financial or legal responsibility alone. All risk ultimately flows to the owner. As a result, manufacturers design, just as operators use, with an abundance of caution. Fully autonomous features are deployed warily, in keeping with the risk they expose the owner to. Remote control becomes the default method of operation for liability reasons, not technical ones.

The robot can act on its own, yet it cannot fit a stable definition of responsibility within the frameworks that would practically govern the consequences of its actions. This predicament disincentivizes experimentation and narrows the acceptable use of autonomous mode in practice. The more capable the robot is day to day, the more precarious its legal position appears and the more risk is shouldered by the owner.

6. The Robot as Dependent or Rival

The domestic robot is part of my life, which means it is also part of my dating life and plays into my romantic relationship dynamics. In romantic contexts, the issues with classification are psychological, not regulatory. One partner experienced the robot as a competitor, a presence constantly threatening his role in my life and intruding upon our dynamic as a couple. Another treated it as his responsibility, taking the place of a child and requiring his care and input on how we operated our lives with it.

These two variant responses are not directly related to law, but they indicate the breadth of cultural assumptions about agency and dependence regarding domestic robotics. The robot does not possess any capacity for emotional feeling or reciprocity, nor does it store any facets of shared experience in any lasting way. Its hardware does not facilitate nuanced expression, and its software does not store data for emotional connectivity or continuity. In the absence of a clear emotional or social status, humans project. They project their fears and desires alike onto the robot.

The robot becomes a morphing machine based on who is addressing it. It can take the role of pet, dependent, or rival. It occupies a role shaped less by its own capacities and more by the interpretive environment surrounding it. The instability caused by that shifting environment is not accidental. It is produced by fragmentation. A machine prevented from remembering, constrained in movement, limited in autonomous decision-making, and legally anchored as property cannot sustain a consistent social identity, and as such, it is subject to the emotional projections of those interacting with it.

Results

Across surveillance, safety, labor, mobility, liability, and relationships, the pattern remains consistent. The robot enters a particular domain. It is likened to an existing comparable entity. That category of interpretation activates regulatory constraints. Those constraints reduce practical capability to something lower than technical capacity. Reduced capability reinforces the perception that the robot remains incapable, when in reality it is often the law, more than technical difficulty, that throttles its abilities. Embodied intelligence is not under-regulated; it is governed inconsistently, through patchwork frameworks never written to apply to it, activated in various situations throughout its day.

The result is not clarity or properly relevant regulation but a throttling of actual technical capacity in real life. A brilliant machine designed to operate across a variety of domains is instead segmented across them. The cost of this fragmentation is cumulative, and it undermines the justification for adopting such an investment.

II. Capability Loss: The Cost of Fragmentation

The last section traced the robot shapeshifting through multiple classificatory prisms. What results is not just a scattering of inconveniences but a larger pattern we can trace. The humanoid is not lacking because it is technically incompetent. It is lacking because it is governed incoherently.

Each time the robot is compared to a preexisting entity, a different regulatory landscape is activated, and that landscape shapes its capacity. Data protection frameworks shape its memory. Product safety expectations shape its movement. Labor anxieties shape its economic role. Transport rules shape its mobility. Liability policies shape its autonomy. The robot’s technical design is continually weighed against legal exposure. Its technical capacities are not determined only by the engineering abilities of the minds building the product; they are pruned away, in theory and in practice, by legal issues. What appears as immature skill or capability is often just legal precaution.

A. Personalization Without Memory

The feature most associated with domestic embodied AI is some level of connection with the machine, or “personalization.” The public imagines these machines as companions, assistants, caregivers, and generally as technological beings capable of learning our preferences and refining their behavior over time. Yet this most desired trait of personalization requires storing memory and maintaining continuity.

In theory, the software is absolutely capable of this, and we have examples in our screen-based, text-to-text large language models. In practice with a roving, sensor-rich embodied AI, however, memory is legally a prohibitively tricky subject.

A roving humanoid will inevitably come across faces in public, guests in private, voices in transit, and all sorts of biometric data in passing. Data protection law, particularly within the European Union, is strict about data consent, storage, and purpose. To avoid legal issues with the biometric data of non-consenting parties, and to avoid the headache of chasing down every person you pass on the street for a data release signature, design and deployment decisions favor data minimization at the direct expense of personalization.

Facial recognition is disabled entirely. Long-term data retention is not used. Conversational history persists within a session but not across sessions. The robot forgets who I am every day not because it lacks the ability to remember, but because remembering produces so much regulatory risk for me as the owner that it isn’t worth the trouble.

The consequence is the loss of one of the main capabilities for which embodied AI would be used in a domestic setting: companionship. What we are left with is a humanoid robot that shares my home but cannot accumulate shared history and, as such, cannot genuinely personalize any interactions to our relationship. It can simulate interest within a session of use with its proprietary large language model, but it cannot develop knowledge of me over time.

B. Autonomy Without Mobility

Autonomy is often framed as a software-based achievement. We tend to understand it as the robot’s capacity to make decisions for itself, entirely independently of human guidance. Yet autonomy is also spatial. A robot that, for legal reasons, cannot move freely cannot utilize whatever autonomy its software may try to facilitate.

Transport systems are structured around basic distinctions that have proved sufficient for the past century. An entity in the tram car is either a passenger or luggage, human or object, active or still. A humanoid taking the tram challenges this on several fronts. Its lithium battery is regulated as potentially hazardous material that must be controlled. Its considerable weight and human-like dimensions do not match those of typical luggage, which can be stowed overhead or under a seat. Its capacity to sense and navigate while it is “on” complicates everything even further.

In response to this friction, autonomy is limited within this context. The robot must power down and go into its storage box while in transit, eliminating all functionality, not just autonomy. Infrastructure like transit allows embodied AI’s presence only when it is reduced to an object and resembles luggage more than passenger. This type of spatial containment unearths an even deeper tension. We are left with a brilliant machine which, through a feat of engineering, could theoretically navigate human environments but cannot actually inhabit them without modifying its classification. Cognitive or software-based autonomy in embodied AI, without proper classification to facilitate infrastructural operation, is doomed from the start. The robot remains dependent on its owner and reduced to its physical materials. Autonomy, when stripped of mobility, is constrained by geography and policy as much as by technical capacity.

C. Responsibility Without Status

The realm that produces perhaps the most friction and asymmetry is liability. As the robot’s technical capacity for autonomy expands, the question of responsibility becomes more and more problematic. When operated by remote control, delegation of responsibility is fairly straightforward. In teleoperated mode, things begin to get more complicated. In fully autonomous mode, we enter absolutely new terrain.

The liability frameworks we have inherited assume either a defective product or a negligent human actor when responsibility for error must be assigned. The humanoid robot obliterates this comfortable binary. It acts, but it is owned. It learns, but it has no legal standing to accept responsibility for what it does. It can generate huge consequences autonomously but has no ability to answer for them.

In reality, responsibility defaults to the human owner of the robot and, in rare instances, to the manufacturer. The robot may act independently but, frustratingly, it remains a vessel through which liability flows, most often to the owner, who did not program or build its capabilities. Because its status is undefined, precaution becomes the necessary default design principle. Fully autonomous features are severely limited due to risk, and as a matter of individual operating policy, most owners judge the liability risk too great to experiment with autonomous mode.

When autonomy and responsibility are decoupled, defensive design and use are the natural result.  Capabilities that could be deployed cautiously are instead simply withheld in practice. The fear of fault throttles experimentation long before technical thresholds are even approached.

D. Visibility Without Classification

Unlike software-based AIs such as large language models, embodied AI cannot remain abstract. It occupies space. Its presence inspires interpretation, emotion, and response from others before its functionality is even truly understood. When classification is not properly defined, that confrontation produces a certain amount of anxiety in people who meet the robot.

Promotional videos show engineers in the lab performing stability tests like kicking or pushing the robot, which leads viewers, quite wrongly, to think the robot is indestructible. Viral videos with movie magic shape expectations of the robot’s capacity and behavior. News coverage switches at a dizzying pace between utopian and dystopian language when covering robotics. In the absence of a stable and defined legal and social identity, the robot becomes a symbol onto which every individual’s fears and desires are projected. Those fears and desires may be deeply personal or heavily influenced by media consumption. This confrontation and physical visibility without any accurate form of classification produces volatility in public perception and interaction. The machine is judged not by its actual capacities but by its imagined trajectory of threat or promise.

That volatility and mistrust bleed into regulation: the more public discomfort grows, the more precaution the law must incorporate to satisfy the public. Precaution reinforces constraint, and constraint delays effective adoption. The loop continues, and we get farther and farther away from easing adoption.

The Structural Consequence

Across the realms of personalization, mobility, liability, and visibility, the same structural logic reveals itself again and again. The robot enters a domain and is forced through an inherited legal prism. That prism activates a protective legal landscape, and to maintain compliance with that landscape, developers or owners must throttle potential capability until their legal exposure is manageable. The throttled capability reinforces the perception that the robot is unable to accomplish certain use cases.

This pattern gives humanoid robotics a reputation for being perpetually stuck in a prototype state, less capable than the machines technically are. The robot exists in society, but only partially, due to these constraints. It is technologically ambitious yet legally constrained, socially visible but structurally unstable. This prismatic fragmentation is not the result of some bad actor trying to throttle these systems, nor is it even aimed at protecting people directly. It is simply systemic, built into legacy categories that could never have anticipated embodied AI in our homes and communities. The robot’s capacities are shredded as they pass through the prism. History, however, offers precedent for how legal systems respond when new entities cannot fit inherited legal structures.

III. Historical Lessons in Classification Adjustment 

Our instinct to regulate humanoid robots through analogy to whatever they are most like in a given scenario is not unfounded. Legal systems are built on precedent, and when something unfamiliar appears, it follows logically that we would try to compare it to something we know. The first question asked is rarely “What new category do we need?” It is almost always “What is this most like?” Is it like property? Machinery? A laborer? A dependent?

Analogy stabilizes the uncertainty of a novel entity in the short term. It allows courts and regulators to act without inventing law from scratch during the adoption phase. But analogy also simplifies complexity. It captures just one dimension of a complex entity while ignoring the rest of it.  Embodied AI really exposes the limits of the “likeness” approach.

The humanoid robot is not just a consumer product, though it is sold to customers as one. It is not merely AI software, though its existence and functionality depend upon it. It is not just a worker, though it may one day perform valuable labor. It is not a person, though it moves, occupies space, and participates in interactions in ways that emulate agency. When forced through existing categories, it fits partially into all of them and fully into none.

This type of tension is not at all historically unique. Legal systems have repeatedly tackled entities that exceeded the existing categories to which they were assigned.

A. Existing Non-Human Legal Entities 

The law offers many examples of entities that are neither human nor capable of agency, yet enjoy legal recognition and protection.

Corporations are one of the most prominent examples. They are not people, yet they can own property, enter contracts, incur liability, and exist across generations. The legal “personhood” applied to them does not come from consciousness but instead from consequence. The gravity of their financial and social impact demanded structured recognition beyond mere ownership of assets.

Another example is the estate of a deceased person, which functions under the law even after the person’s life ends. Ships and aircraft carry registration and, in particular circumstances, maintain quasi-sovereign status. Endangered species are granted protection not because they assert claims themselves but because their preservation serves a collective interest, and a structure was created to protect them under the law. Objects of cultural heritage are regulated in their international movement and trade because they are treated as more than mere sales items.

In each of these cases, the law created a subclass.  The goal of the distinction was not to elevate the entity in question to the status of a human, but to manage its impact in a consistent, equitable and beneficial way.  None of these classifications demanded sentience of the entity as a prerequisite. They just required consequence.

We can foresee that embodied AI is approaching a similar moment. Not because it has some semblance of moral agency, but because its integration into the labor market, our homes, our city infrastructure, and our data ecosystems creates effects that cannot be contained within traditional property law alone.

B. Classification as Method of Stabilization

Legal classification doesn’t just define an entity; that is merely the first step. The real benefit is that it stabilizes the entity within a larger ecosystem of impact. When an entity is classified, the surrounding ecosystem adapts predictably, and the interplay between the entity and the rest of our systems develops. Insurance markets can independently price risk. Tax codes can incorporate status and role. Infrastructure standards can adjust and adapt. Public expectations settle into a more realistic mode. Responsibility becomes better defined and fairer. The friction of fragmentation across all of the intersecting legal frameworks can be smoothed when a proper classification is made.

There are issues beyond legal throttling that impact embodied AI. It currently exists within overlapping regimes that were never designed to jointly regulate one machine: in one moment it is a consumer good; in another, a potential economic actor; in another, a safety hazard; in another, a data collection entity. Each identity activates different obligations and restrictions, and none provides real, comprehensive governance. The absence of a cohesive category produces confusion and gaps not only for regulators but for designers, insurers, and users. Capability becomes dynamic, depending on which aspect of the prism is dominant in a given context. The system reacts piecemeal, and because of that, designers make choices that account for this type of deployment, and users choose to operate at partial capability based on their own risk-benefit analysis. The good news is that history shows this type of legal and classificatory instability is usually resolved through the transition of an entity under property law to an entity with rights.

C. Property to Rights: Gradual Adjustments in Legal Status

There is a historical pattern of entities that have undergone this very delicate shift in legal status. Spanning centuries, we have examples of populations that moved from being legally treated as property to being recognized as agents with rights: serfs bound to land under feudal systems, enslaved individuals denied individual rights under the law, colonized populations governed without sovereignty, and women restricted in property ownership and contractual capacity. Each of these groups represents a legal entity that underwent a structural transition in legal status over time.

It is important to note that this comparison is not a moral equivalence to these groups and their struggles for rights under laws that subjugated them. Humanoid robots do not suffer, nor do they experience injustice the way humans do. The structural lesson of this admittedly tricky analogy is different. Legal systems tend to evolve when existing classifications become insufficient in light of the social implications and consequences of the entities they classify. Entities once treated as property come up for re-evaluation as their individual agency expands and their interdependence with other groups deepens. These transitions are typically gradual, often contested, and usually incomplete. They unfold through court cases, law reform, and social pressure like rights movements.

Embodied AI is still working strictly under a property-based framework. The machines can be owned, bought, sold, insured, and warrantied as objects. Yet as the capacity for semi-autonomous action steadily increases, the friction within that framework only grows. A purely property-based classification cannot properly account for entities that act within human environments with relative or complete independence.

The question posed here is not whether humanoid robots will or should become persons. The question is whether property alone is a stable long-term classification for systems that move, decide, and interact within intimate settings, public infrastructure, and social domains.

History shows that when agency, interdependence, and consequence increase, classification eventually catches up through the introduction of a new classification status and expanded rights.

D. Embodied AI’s Threshold Moment

This legal transition often follows a fairly similar path. A new entity emerges. It is compared to existing categories. Friction accumulates when those categories prove insufficient. Courts and regulators carve out exceptions. Interdependence between the entity and other established subcategories of the population increases, and over time a distinct subclass solidifies.

As embodied AI moves from testing phases in robotics labs into deployment in the real world and into people’s homes, humanoid robots are now entering the friction stage of this historical pattern.

Transport authorities push back on ticketing. Insurance companies hesitate to offer coverage. Labor unions fight displacement. Religious scholars critique ritual implications. Privacy regulators assess data exposure. Each domain reacts independently, applying its own logic to a shared phenomenon. The risk we currently face is that this process unfolds reactively, through litigation, disaster, hazard, and crisis rather than through public discourse and thoughtful preemptive design. Precedents from court cases will compound. Exceptions and workarounds will proliferate. A patchwork legal landscape will form, and the public will suffer under it. Alternatively, history also shows that legal systems are capable of anticipatory classification. They can create structured categories before a crisis compels them to do so.

Embodied AI presents a golden opportunity for anticipatory classification and smoother adoption as a subcategory under the law.

The Structural Insight

The lesson we can learn from corporations, estates, ships, protected species, objects of cultural heritage, and historical shifts in legal status is not that humanoid robots are somehow demanding personhood. It is that our legal systems already possess the framework to invent subclasses when the need arises.

The “prismatic” fragmentation I have outlined, observed throughout my immersive domestic cohabitation research, is not meant to serve as evidence that robots are too complex to regulate. It is, instead, evidence that they are currently regulated indirectly, through a filter insufficient for their complexity and their potential future social and financial impact. When an entity’s social impact spans multiple domains simultaneously, indirect legal guidance through “prismatic” fragmentation throttles technical capacity and makes integration slower and less worthwhile. If embodied AI is consequential and interconnected enough to spark debate on labor displacement, intimate relationship dynamics, transport regulations, urban infrastructure design, and liability doctrine, then it is consequential enough to deserve its own classification.

IV. The Case for Direct Classification

If prismatic fragmentation throttles capability and history tells us that classification under the law evolves when comparison to existing entities fails, then the question becomes less broad and philosophical and more specific and structural.  We can now ask, “What would it mean to regulate embodied AI directly rather than refracting it through existing silos?”

In answering this question, I reiterate that the aim is not to grant personhood. It is not to erode the distinction between biological human and humanoid robot. It is simply to acknowledge that a sensor-rich, mobile, semi-autonomous system operating in intimate domestic spaces and interacting with public infrastructure does not fit neatly within the current tangle of categories it activates in various contexts: property, appliance, worker, luggage, or dependent.

The current inherited silo approach governs each dimension of the humanoid separately. Data law governs perception. Product safety governs mechanics. Labor law governs work and financial market participation. Transport regulation governs movement. Tort law governs harm. Each framework operates in isolation, and each constrains a different capability in order to minimize risk within its own domain, so the final product the user interacts with suffers in ability. The result is not comprehensive governance, nor, frankly, full protection of the public, but an unfortunate pruning of capacity.

Direct subclassification of embodied AI, especially systems intended for domestic deployment, would not replace these frameworks but would create the facilitative legal structure to coordinate them, smoothing adoption, maintaining capacity, and protecting society and individual members of the public more efficiently.

A. Why Legacy Silos Cannot Simply Scale

One reason embodied AI is so fascinating legally is that most legacy legal policies assume a separation of capabilities that embodied AI combines. Embodied AI isn’t only software, and it isn’t only hardware. It is an incredible mix of two already intricate fields.

The convergence of legacy legal policies and this new fusion of hardware and software means we have little applicable precedent. We find friction everywhere we look. Product law assumes that tools do not learn. Privacy law assumes that devices do not move autonomously through public space. Labor law assumes that workers are human. Transport law assumes a clear physical distinction, and different needs, between passenger and baggage. Liability law assumes that agency and ownership are separable. Embodied AI destroys these assumptions and separations because of its reality as both hardware and software: it is mobile and sensor-driven, owned and yet semi-autonomous; it can perform labor and autonomous tasks while at least partially needing to remain classified as property; and it can perceive, generate, send, and store biometric, audio, and visual data, all while standing in your bedroom.

When these starkly combative convergences are forced into siloed legacy regulation, defensive design and use follow. Memory is suppressed to satisfy privacy. Autonomy is limited to reduce liability exposure. Mobility is constrained to avoid infrastructural issues. Economic participation is restricted to avoid labor market destabilization. No system actively chooses to slow innovation outright, but in practice it is pruned and throttled incrementally until use cases no longer justify themselves. If production, deployment, and adoption scale without classificatory changes, the “prismatic” fragmentation will only intensify as more units enter society. Manufacturers will design for the lowest regulatory threshold across a wide grouping of jurisdictions in order to meet market demand while maintaining compliance. Insurers will price conservatively. Infrastructural oversight teams will simply default to exclusion wherever ambiguity persists and potential harm or disruption could occur.

B. Policy Potential With the Right Classification

A direct classification, which we could call Embodied Intelligent Autonomous Systems, would not elevate AI humanoids beyond regulation but would anchor them within it by giving us a mechanism for legal process.

This type of subclass could facilitate a number of structural elements:

Identity and Registration.
A standardized, interoperable identification framework would stabilize expectations and allow for proper verification, responsibility, management, and oversight of these humanoids. Each embodied AI would carry a verifiable registry entry, firmware transparency, and an indicator of its designated ownership. Identity would no longer fluctuate socially between “the same type of robot” and “just another machine” but would instead be verifiable, unique, and standardized.

Tiered Autonomy Certification.
Rather than treating autonomy as binary, certification could scale in levels based on training, safety and cooperation. Remote-only systems, supervised autonomy, context-bound autonomy, and fully autonomous systems could each carry defined, tiered permissions and standards obligations. Capability and responsibility would then be able to grow together rather than creating friction with one another.

Mode-Based Liability Allocation.
Responsibility would correspond to operational configuration. Remote mode would maintain operator accountability. Teleoperation would distribute responsibility across the actors who facilitated the session. Fully autonomous operation would incorporate structured manufacturer obligations. Clarity would replace precautionary defaults and stop responsibility from falling disproportionately on the owner.

Integrated Data Governance.
Embodied AI, with its mobility, requires a distinct approach to data protection and privacy under the GDPR. Memory retention parameters, public navigation protocols, consent mechanisms, and automatic deletion in transit could be tailored specifically to mobile sensor systems. Personalization would not be eliminated across all operations at all times but instead bounded, using geofencing or two-factor authentication of permissions.

Mobility Recognition.
Transport systems would no longer debate whether a humanoid is cargo or passenger. Defined standards including battery percentage thresholds, spatial allocation, safety zones, behavioral certification, and stability gradings would permit participation without arguments with ticketing representatives.

Domestic Integration Standards.
Clear guidelines for interaction within multi-generational households, with vulnerable populations, and across public-private boundaries would align expectations, value, and safety without reducing the robot to an appliance or elevating it to a guardian.

None of these elements require attributing moral agency or “humanness” to the robots. They require acknowledging the friction a novel combined hardware-and-software system creates within legacy classifications, and the need for direct regulation proportionate to the social and financial impact, and the interdependence, that embodied AI may soon have with human systems.
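
To show how several of these elements could interlock, here is a hypothetical sketch of a registry record that combines tiered autonomy certification with mode-based liability defaults. The tier names, fields, and mappings are assumptions made for illustration; nothing here reflects existing law or standards.

```python
# A hypothetical encoding of the proposed subclass. All names are illustrative.

from dataclasses import dataclass
from enum import Enum


class AutonomyTier(Enum):
    REMOTE_ONLY = 1
    SUPERVISED = 2
    CONTEXT_BOUND = 3
    FULLY_AUTONOMOUS = 4


class OperatingMode(Enum):
    REMOTE = "remote"
    TELEOPERATED = "teleoperated"
    AUTONOMOUS = "autonomous"


# Mode-based liability allocation: responsibility follows configuration.
LIABILITY_DEFAULTS = {
    OperatingMode.REMOTE: "operator",
    OperatingMode.TELEOPERATED: "operator and session facilitators",
    OperatingMode.AUTONOMOUS: "structured manufacturer obligations plus owner",
}


@dataclass
class EmbodiedAIRegistration:
    registry_id: str              # verifiable, unique identity
    owner: str                    # designated ownership indicator
    firmware_version: str         # firmware transparency
    certified_tier: AutonomyTier  # result of tiered certification

    def may_operate(self, mode: OperatingMode) -> bool:
        """A certification tier defines which operating modes are permitted."""
        if mode is OperatingMode.AUTONOMOUS:
            return self.certified_tier is AutonomyTier.FULLY_AUTONOMOUS
        if mode is OperatingMode.TELEOPERATED:
            return self.certified_tier.value >= AutonomyTier.SUPERVISED.value
        return True  # remote operation is permitted at every tier
```

Under a scheme like this, capability and responsibility would scale together: unlocking a higher tier changes both what the robot may do and who answers when it errs.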

C. Innovation and Regulation Are Not Diametrically Opposed

I was recently at a networking event after speaking at a conference, and a robotics professional approached me. Throughout our conversation, whenever I spoke about regulation and how important thoughtful legal reform is to the adoption of robotics, he repeated the same sentiment over and over, to me and to others in my field. He kept asking us not to regulate people like him out of innovation. He didn’t want more regulation because he felt it tied his hands behind his proverbial back when it came to innovating in the field of AI. A constant assumption has attached itself to regulatory discourse: that clearer classification constrains innovation. Yet in my practical experience, incoherence and inconsistency constrain the reality of deployment far more severely.

When status is unclear, design and use both become defensive. When liability is indeterminate, features are withheld. When infrastructure is uncertain, mobility is paused. When identity is unstable, public trust erodes. Your best technical accomplishment may make it into the product that ships to consumers, but they will not be able to use it if the abundance of caution required to stay legally compliant forces them to work around infrastructural, political, and legal webs. A defined subclass would not accelerate adoption recklessly, nor would it slow innovation to the point of reducing competition. It would, however, have the potential to channel innovation deliberately by allowing technical capability and legal progress to grow hand in hand. Personalization could expand within lawful parameters. Autonomy could scale alongside structured accountability. Mobility could operate within defined infrastructural bounds. Innovation does not flourish in ambiguity or stratification; it flourishes in clarity and cohesion.

D. The Choice We Face Before Scalability

Embodied AI is at a transitional stage, on the precipice of changing the world by scaling upward and putting robots into a large percentage of homes and businesses. Technical capabilities are increasing as costs decline. In response, cultural interest is intensifying. The temptation for lawmakers is to wait rather than be proactive, allowing market forces and case law to accumulate precedent organically through problems and their corrections. History indicates that reactive patchwork follows hesitation, and that type of response is never consistent or clear enough for a go-to-market strategy to run smoothly.

The alternative route is anticipatory design: recognizing that embodied AI, especially in its domestic applications, reveals this classification friction early, and responding before scale entrenches fragmentation to the point where reactive patchwork is the only way out. The question is not whether humanoid robots are ready for personhood, because they simply are not. The question is whether property alone is a satisfactory classification for systems that act within human spaces with partial or full autonomy. Since the answer is a resounding no, the clarity of a new classification must trump the ambiguity of fragmented legacy silos. This direct classification is not a suggestion of equivalence but an acknowledgment of the novel nature of this entity and a commitment to protecting both its innovation and our society with equal dedication.

V. Returning to the Original Question

“What is it?”

Under the box lights, the question was delivered in an almost playful manner, an inquiring prompt designed for a trite answer and a clickable soundbite. In that moment, calling the robot a nuisance was both honest and strategic. It deflated the false expectations I was constantly made to battle, built up by carefully curated exhibition dances, robot fight clubs, and AI-generated robot content. It compressed my frustrations, and the future, into something small enough to fit between commercial breaks. But in truth, my lived experience has rendered the question anything but small.

What it is determines who is responsible when it acts, what it is permitted to remember, where and how it can travel, whether it can work, how it learns, and whether it can form a relationship. The question posed is foundational in nature.

In my home, the humanoid robot is continuously reclassified. When it senses using its microphone array, scanners, and cameras, it is governed as a surveillance device. When it falls, it is treated as heavy machinery. When it assists, it is imagined as labor. When it travels, it is negotiated as baggage. When it causes damage, it becomes a vessel for liability. When it gets involved in my relationships, it becomes an object upon which fears and desires are projected. Each interpretation activates a distinct regulatory landscape, and compliance with each constrains the robot’s capability. Over time, these constraints accumulate until the machine is rendered almost completely useless, because what remains is not a functioning domestic assistant, nor a fully autonomous actor, nor a mere appliance. It is something in between, carrying all of the consequences, costs, responsibility, maintenance, and tension with none of the benefits.

We can conclude that the deepest impact of personalized, autonomous machines will not be determined solely by advances in hardware or breakthroughs in software. It will be shaped by whether legal systems are willing to answer the classificatory question with clarity rather than fragmented analogies and comparisons.

History illustrates that law eventually evolves when classification friction surpasses inherited categories. Corporations were once treated as collections of contracts before they were stabilized as legal entities. Ships and aircraft required registration regimes to participate coherently in international domains. Protected species were granted structured status when their ecological consequence demanded it. Even the expansion of rights to previously subordinated populations reflects the gradual recognition that property-based classifications could not withstand any degree of increasing agency.

Embodied AI now stands at its own threshold: not a threshold of moral equivalence, but of structural significance. These systems will be increasingly interconnected with us and will exert growing influence on our world, our societies, and our individual lives. If we continue to force humanoid systems through legal prisms, making them property in one context, worker in another, appliance in a third, they will remain perpetually throttled. Innovation and use will be defensive. Adoption will be uneven. Public perception will oscillate among distrust, instability, and fear. If, instead, we acknowledge that embodied AI represents a distinct category requiring deliberate coordination across data governance, liability allocation, mobility standards, and domestic integration, then capability and responsibility can grow at a similar rate.

The aim is not to raise the machine to human equivalence but to stabilize the system that defines it, making it possible to regulate the machine directly.

By welcoming a humanoid robot into my home before it is practically feasible or financially sensible for non-researchers to have one, I have been able to bear witness to this instability early. The domestic sphere unearths the friction of misclassification more clearly than running the same program in any robotics laboratory. It reveals how inherited structures bend when confronted with embodied AI and its levels of autonomy. It illustrates how compliance with a prism of contextual identities throttles the actual benefit of these feats of engineering by making the legal reality different from the technical one.

“What is it?” remains the most consequential question of this century not because the machine threatens us with some comparable humanity, but because indecision about its status fragments its integration and erodes its usefulness.

Without decisive preemptive action, future robots will arrive as undefined entities under the law. They will arrive as prototypes in every sense of the word: technologically, legally, and culturally. Whether they become genuinely personalized and responsibly autonomous will depend less on engineering and more on law. Until we decide what they are and answer our guiding question, they will remain refracted through legal structures that have never encountered an entity so digitally and physically advanced.


Law and AI Robotics: The legal phenomenon of capability loss

I am one of the first people on earth to live full time with a domestic AI humanoid robot. It sounds like a wild futuristic adventure, but in reality it is a constant wave of headaches, frustration, and disappointment. I have a state-of-the-art Unitree G1 EDU 2 with a dexterous hand. It should be able to get my groceries, accompany me to the university where I work, take the metro with me, clean my home, and more. But it doesn't. It can, but it doesn't.

For example, when I tried to bring it on the tram with me to my office at the University of Vienna, there was a massive issue with ticketing. The robot needed to sit in order to steady itself against the constant stopping and starting of the tram carriage and the elevation changes from district to district. In taking a seat from another passenger, it straddled being a piece of oversized luggage and being a passenger in need of a ticket. When I inquired about ticketing options for humanoid robots, the transit authority unsurprisingly declined to issue a ticket, and the debate raged on. When I tried to teach it to clean my apartment and realized, amid the trial and error of it learning the layout of my space, that it was running into some of my antiques, I tried to increase the insurance coverage on my home. I was hung up on, passed from manager to manager, and ultimately declined, as the insurance company was unclear on how to handle the actions of an autonomous humanoid. When I wanted to get the robot a job, I tried to register it as a resident in Vienna at my address to ensure that the authorities were aware of its location and intention. I went to the local registration authority in my district and was told that registration was for biological residents only. When I wanted it to learn how to pick up ingredients for me at the supermarket, I needed it to remember the route on the street and to record as I taught it, which it cannot do due to privacy regulations. At each step along the way, the novel, state-of-the-art technology I invested in was truncated in its application by regulatory restrictions until it was rendered essentially useless.


This experience with an AI humanoid robot revealed a reality that is largely absent from mainstream discussions of both robotics research and legal theory. The primary obstacles to meaningful robotic integration and effective deployment are not necessarily technical, but legal, regulatory, and economic. While innovation in artificial intelligence and robotics is advancing at extraordinary speed, the moment these systems leave the laboratory and enter domestic space they encounter a bottleneck of policy constraints that effectively strips away most of their technical features. These constraints stem from insurance regimes, liability doctrines, consumer protection laws, privacy regulation, labor politics, contractual control, cloud governance, and geo-fenced compliance obligations. This minefield of external constraints renders much of the technical achievement of innovative engineers moot, because the web of restrictions reshapes and restricts what the robot is designed to do. Through this immersive research method of daily cohabitation with a humanoid robot, the experiment treats lived experience as ethnographic data, revealing regulatory frictions that remain invisible in more abstract policy debates and sidelined from the primary focus of AI engineering research. My findings center on a phenomenon I describe as capability loss: a systematic stripping of robotic functionality as systems move from experimental laboratory environments to deployment in everyday life. This loss is not the result of insufficient hardware or software, but of policy structures protecting the domestic sphere, a space whose rules were developed without embodied AI as a consideration. Unless these legal frameworks are re-evaluated and updated, domestic robotics risks remaining permanently over-constrained, under-deployed, and economically illogical to adopt, regardless of how advanced the technology becomes.


The domestic sphere constitutes one of the most legally restrictive environments for the deployment and adoption of embodied AI. Unlike industrial or commercial contexts, where risk assessment, supervision, consent, and data use are typically more formally structured, the household sits at the intersection of multiple legal landscapes that were developed without any anticipation of embodied AI. Privacy and data protection law govern observation and memory; consumer protection law mediates acceptable risk and user vulnerability; liability doctrines shape permissible action and intervention thresholds; labor law constrains task allocation and complicates socioeconomic dynamics; and cloud governance and contractual frameworks limit long-term learning and system updates. These regulatory landscapes now all intersect in the domain of a single machine, layering a web of obligations atop the most ordinary of household activities: movement, assistance, and observation. As a result, the home cannot be understood as a neutral testing ground for AI robotics but instead as a space where legal issues reveal themselves at their most practical level and where tolerance for ambiguity is at its lowest. This intersection of policies produces a structural tension in which the environment that most demands contextual sensitivity and adaptive behavior is also the one in which a robot's actions are most heavily regulated and least able to deliver at their actual technical capacity.


Unlike standard types of technological limitation, such as hardware issues or model performance ceilings, capability loss is entirely imposed from external sources. It comes from the interaction of policy landscapes that not only regulate the robot's behavior but actually determine its operational limits. In this sense, law does not simply govern robotics after the fact or protect the humans involved; it shapes the robot's effectiveness in deployment settings by constraining the conditions under which any embedded technical capacity can actually be used. Capability loss is not the result of any single policy, but of the clash of multiple legal frameworks that were never designed to function together within a single embodied system bridging the most advanced hardware and software innovations. Each policy may be independently useful, yet their convergence within the operation of embodied AI produces an effect that is rarely discussed: robots that are technically quite sophisticated but behaviorally constrained past the point of usefulness. The result is a widening gap between what robotics research accomplishes and what deployment allows. The throttled functionality undercuts the practical usefulness of the robots and therefore chips away at the economic incentive for purchasing one of these systems. This article presents a selection of the legal landscapes contributing to capability loss throughout the cohabitation experiment: liability law, consumer protection, privacy regulation, labor politics, contract law, cloud governance, and geo-regulatory fragmentation.
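The structure of this argument can be made concrete in a few lines of code. The sketch below, in Python, models capability loss as set intersection: each regime permits a reasonable-looking subset of the robot's technical capabilities, yet their overlap leaves almost nothing deployable. All capability names and regime rules are hypothetical illustrations, not descriptions of any actual statute or product.

```python
# Toy illustration of "capability loss": each legal regime is modeled as a
# filter over the robot's technically available capabilities. Every regime
# is individually reasonable, but their intersection leaves little behind.
# All capability names and regime rules here are hypothetical.

TECHNICAL_CAPABILITIES = {
    "physical_assistance", "adaptive_learning", "long_term_memory",
    "autonomous_navigation", "remote_updates", "verbal_reminders",
    "fall_detection", "grocery_errands",
}

REGIME_FILTERS = {
    "insurance":      {"verbal_reminders", "fall_detection", "remote_updates"},
    "liability":      {"verbal_reminders", "fall_detection", "long_term_memory", "remote_updates"},
    "privacy":        {"verbal_reminders", "physical_assistance", "fall_detection", "remote_updates"},
    "labor_politics": {"verbal_reminders", "fall_detection", "long_term_memory", "remote_updates"},
}

def deployable(capabilities, filters):
    """Intersect the technical capability set with what each regime permits."""
    allowed = set(capabilities)
    for regime, permitted in filters.items():
        allowed &= permitted
        print(f"after {regime:14s}: {sorted(allowed)}")
    return allowed

if __name__ == "__main__":
    remaining = deployable(TECHNICAL_CAPABILITIES, REGIME_FILTERS)
    print(f"deployable subset: {sorted(remaining)} "
          f"({len(remaining)}/{len(TECHNICAL_CAPABILITIES)} capabilities survive)")
```

Under these invented rules, only three of eight capabilities survive all four filters, which is the shape of the gap between lab and home that the following sections trace regime by regime.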


Insurance and How Overcaution Leads to Inaction

Insurance landscapes exert a clear influence on domestic robotics by translating legal uncertainty into economic risk, shaping what robots are allowed to do before any statute or court ruling is established. In practice, insurers function as early regulators: tasks involving physical proximity, autonomous movement, or intervention by these heavy robots are classified as high-risk, whereas non-invasive, low-impact functions are favored regardless of technical ability at either end of the spectrum. This asymmetry in risk produces more conservative design choices, lowering decision thresholds toward inaction while discouraging potentially useful actions like physical assistance. It is precisely through this asymmetry that insurance exerts influence on domestic robotics not only at the point of use, but during design and development. Long before an embodied AI system enters a private household, its permissible behaviors are filtered through economic assessments of insurability, liability exposure, and risk. In this sense, insurance operates not only as a reactive measure after the machine is produced, but as a decision-making mechanism that helps determine which robotic behaviors are economically viable to build or approve for use at all. The result is that many technically feasible capabilities are constrained, disabled, or excluded during development, not because they are unsafe, but because they introduce forms of risk that cannot be readily priced, pooled, or defended.

Adaptive judgment, physical intervention, and learning in situ, capabilities that would drastically improve a robot's usefulness in the home, are often treated as economically uninsurable because they complicate behavioral attribution and blur responsibility. Developers are incentivized to suppress or truncate these capacities in favor of conservative behavioral envelopes that can be reliably understood and regulated. This design-time influence produces robots that are not simply cautious in practice, but intentionally engineered to avoid forms of intelligence that would introduce high exposure, even when that intelligence is technically possible to include in the system.

In domestic settings, where physical assistance usually requires contextual sensitivity, negotiated risk, and moment-to-moment judgment, this insurability-driven truncation of capability has real consequences for the usefulness of the robots. Robots optimized for economic defensibility rather than functional effectiveness struggle to deliver meaningful value relative to their cost, undermining the very adoption that insurance frameworks seek to stabilize. Insurance thus emerges as a central, if largely unacknowledged, driver of capability loss: a system of economic governance that shapes what domestic robotics is allowed to become by filtering technical possibility through the logic of risk avoidance long before any human-robot interaction takes place.

| Scenario | Robot Framing | If Robot Acts | If Robot Does Not Act | Insurance Preference |
| --- | --- | --- | --- | --- |
| Robot is purely observational (no assistive claims) | Passive / informational | High exposure (unexpected action) | Very low exposure | Strongly prefers non-action |
| Robot performs low-risk assistance (e.g., reminders, alerts) | Assistive but non-physical | Moderate exposure | Low exposure | Prefers limited, scripted action |
| Robot is marketed as safety-enhancing (e.g., fall detection) | Protective / preventative | Moderate–high exposure if harm occurs | Moderate exposure if harm was foreseeable | Ambivalent; narrows acceptable behavior |
| Robot has intervened successfully in the past | Reliance established | Exposure if intervention fails | Exposure if non-intervention follows prior success | Prefers consistency over judgment |
| Ambiguous domestic risk (e.g., fatigue, clutter, tools) | Context-dependent | High exposure (discretionary judgment) | Medium exposure (missed prevention) | Prefers warning over action |
| Physical intervention involving contact | High-risk assistive | Very high exposure | Lower exposure | Strongly prefers non-action |

Table 1. Insurance Matrix: Exposure-Based Action Preference for Domestic Robotics
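Table 1's logic reduces to a simple decision rule: compare the expected exposure of acting against the exposure of not acting and default to the cheaper branch. The sketch below encodes that rule in Python; the scenarios and exposure numbers are hypothetical stand-ins for an insurer's pricing model, chosen only to mirror the orderings in the table.

```python
# Minimal sketch of the exposure asymmetry in Table 1: an insurer-shaped
# controller that compares expected exposure for acting vs. not acting and
# defaults to the cheaper branch. Numbers are illustrative, not actuarial.

EXPOSURE = {
    # scenario: (exposure_if_acts, exposure_if_does_not_act); higher = riskier
    "observe_only":          (0.90, 0.10),
    "scripted_reminder":     (0.15, 0.20),  # narrow, pre-approved action
    "fall_detected":         (0.60, 0.50),
    "physical_intervention": (0.95, 0.30),
}

def insurer_preferred_action(scenario: str) -> str:
    act, no_act = EXPOSURE[scenario]
    # The robot acts only when acting is clearly cheaper in exposure terms,
    # which under these (hypothetical) numbers it almost never is.
    return "act" if act < no_act else "warn_or_defer"

for s in EXPOSURE:
    print(f"{s:22s} -> {insurer_preferred_action(s)}")
```

Only the tightly scripted reminder survives the comparison; every scenario involving judgment or contact resolves to "warn_or_defer," which is the table's bottom line in code.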


Liability and the Suppression of Agency

If insurance shapes what kinds of robots are likely to be built in the first place, liability law shapes how those robots are allowed to behave once they exist. In both the US and EU legal systems, domestic robots are currently treated as products or tools rather than independent actors. This means that when something goes wrong, responsibility is assigned to either a person or a company. The robot itself, even in autonomous mode, carries no legal responsibility, as there is currently no legal mechanism for machine responsibility. As a result, any autonomy on the part of the robot immediately translates into legal risk for someone else. In practice, this disincentivizes the building and deployment of autonomous modes. To reduce this exposure to liability, developers are incentivized to limit robots to tightly scripted behaviors that can be easily explained and justified later. This leads to systems that are technically capable of adapting, but legally safer if they do not.

Liability law places robots in a difficult position once they are marketed to the public, and to their potential future owners, as capable of helping out at home. If a robot intervenes and causes harm, it can be blamed for acting. If it does nothing in a situation where harm was foreseeable, it can be blamed for failing to act. Robots sit at an intersection of doubled risk, and as such, companies will likely narrow the scope of the robot's role in its marketing. The robots are then programmed to favor verbal warnings over action, deferral over decision-making, and inactivity over intervention. In homes seeking extra presence and support, for instance those with elderly residents or children, this legal pressure discourages exactly the kind of product features that would make robots genuinely useful. The result is another form of capability loss: robots that technically could help more, but are specifically designed not to, because if injury occurs, it is the company producing them that could be held responsible.

Consumer Protection and User Protection

Where insurance and liability frameworks constrain the capacities of domestic robotics by managing economic and legal risk, consumer protection law constrains domestic robotics by making assumptions about the user. In both U.S. and EU contexts, these frameworks are built around the idea that individual consumers are vulnerable, inconsistently informed, and in need of a range of safeguards. However, the underlying legal pressures differ across the two distinct cultures of consumer safety. In the US, consumer robotics must exist within an extremely litigious environment, where design choices are often shaped by the anticipation of class actions, product warnings, and legal claims of deceptive or unfair practices. In the EU, consumer protection operates alongside strict data protection policies, where compliance with privacy, consent, and data minimization requirements significantly limits how systems can observe and learn within a domestic environment.

Although driven by different forces, these two environments incentivize conservative design choices for deployment in both markets. In the US, robotics companies may choose simple interfaces, restrictive defaults that limit riskier behavioral settings, and excessive warnings and legal paperwork included with the consumer robots in order to reduce their exposure to consumer litigation. In the EU, a similar decision-making process arises from the need to avoid illegal data processing or collection, even where that same data would improve safety and application. Features that would allow users to meaningfully adjust robot behavior, like adjusting risk tolerance, enabling adaptive learning, or authorizing physical assistance in times of need, may never even make it to the EU market through the minefield of legal restrictions. Consumers are treated as legally risky users and may never be given a chance to consent to the more complex trade-offs that create applied usefulness and inherent value, even in the privacy of their own homes.

The result is the same each time: the continued pattern of capability loss, just reached through different legal pathways. Domestic robots are marketed as the future, built to be intelligent, helpful, and adaptive. Yet in reality, due to the many legal fields they intersect with, they wind up deployed as truncated products, with their robotic hands tied behind their proverbial backs. Users stand in front of machines that are technical wonders in the lab but legally constrained in the home. This unfortunate truth will become better known as more consumers face the frustration and as the real-world cost-value dilemma of domestic robotics is revealed. Consumer protection law is essential in preventing harm, but it is another force shaping domestic robotics into systems that are not worth the cost of purchasing.

Privacy Regulation and the Collapse of Contextual Knowledge 

Privacy regulation shapes domestic robotics by limiting what robots are allowed to see, remember, and learn inside and outside the home. For embodied AI, this is crucial: usefulness in the home depends on understanding routines, recognizing people, and learning from repeated interaction over time. When privacy rules make data collection difficult or encourage frequent deletion, robots can technically function but lack the contextual data they need to behave helpfully.

This issue is especially difficult to manage in the EU, where strict data protection laws place enforceable limits on personal data processing, storage, sharing and reuse. Even when customers electively bring a robot into their home, the system may be prohibited from retaining information about household members, routines, or past interactions. This is made even more extreme if the robot is bipedal and expected to venture into the community.  As a result of these constraints, robots often operate with limited memory and reduced perception, leading to repetitive wrong behaviors, a lack of contextual knowledge and slow or nonexistent adaptation. In the US, privacy regulation is more fragmented on the state and local level and less restrictive at the federal level. This allows greater freedom for robotics in data collection and learning overall, but shifts protection from a blanket overlay toward disclosure, state-based contracts, and after-the-fact enforcement rather than clear unified limits.

Approaches on both sides of the Atlantic constrain domestic robotics in different ways. In the EU, strict privacy safeguards reduce a robot's ability to develop a long-term understanding of its environment and users. In the US, less restrictive data access increases capability but also raises surveillance concerns, which often leads companies to impose their own limits, and sometimes to overreach, through design choices and terms of service, leaving the consumer at their mercy. In both systems, privacy regulation changes what robots are allowed to know about the homes they inhabit. The result is another form of capability loss: robots placed in complex, intimate environments but legally prevented from acquiring the contextual knowledge needed to function effectively within them.
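The EU-versus-US contrast above can be pictured as a single retention knob on an on-robot event store. The sketch below assumes a hypothetical memory module; the retention windows are invented for illustration and are not statements of what the GDPR or any US statute actually requires.

```python
# A minimal sketch of jurisdiction-dependent memory retention, assuming a
# hypothetical on-robot event store. Retention windows here are invented
# for illustration, not legal claims about GDPR or any US law.
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = {  # hypothetical policy knobs per jurisdiction
    "EU": timedelta(days=1),    # aggressive minimization: wipe context daily
    "US": timedelta(days=90),   # longer retention, disclosure-based regime
}

@dataclass
class Observation:
    timestamp: datetime
    description: str  # e.g. "owner stores antiques on the left shelf"

class RobotMemory:
    def __init__(self, jurisdiction: str):
        self.window = RETENTION[jurisdiction]
        self.events: list[Observation] = []

    def remember(self, description: str) -> None:
        self.events.append(Observation(datetime.now(), description))

    def purge(self) -> None:
        """Drop anything older than the retention window."""
        cutoff = datetime.now() - self.window
        self.events = [e for e in self.events if e.timestamp >= cutoff]

memory = RobotMemory("EU")
memory.remember("owner keeps antiques on the left shelf")
# A scheduled purge() after the 1-day window erases this context, forcing
# the robot to re-learn the layout: the repetitive behavior described above.
```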

Labor Politics and Public Perception

Labor politics shape domestic robotics by influencing which tasks robots are allowed to perform and, more specifically, how their role is publicly described. Concerns about labor displacement and worker protection put strong social and political pressure on the deployment of embodied AI, even in private homes. While these debates usually center on the workforce, they spill over into domestic settings, where robots are often designed and marketed to highlight the replacement of tasks rather than the replacement of a laborer. This helps stem an additional public discourse about robots replacing humans and threatening their employment. In both the US and the EU, this pressure to avoid a touchy public subject encourages developers to present state-of-the-art robots as “assistants” or “helpers,” even when their technical capabilities could extend much further in real-world application and deployment. Tasks associated with care, cleaning, maintenance, or service work are frequently limited or carefully framed to avoid political and labor union response. In the EU, where labor protections are strong and social safety nets are delicate mechanisms, there is a heightened public sensitivity to technologies that could undermine employment. In the US, labor politics are less unified, but public sensitivity to automation and labor displacement still shapes how domestic robotics is discussed, regulated, and socially accepted.

The result is that robots are often deliberately underutilized or infantilized. Capabilities that could reduce physical strain, support care work, or assist with time-consuming domestic tasks are constrained in their marketing, and therefore in their use, to avoid public or political controversy. This narrowing of robotic roles in media relations and marketing materials does not always reflect technical limits, but social compromise in service of a smoother adoption process. Domestic robots arrive in homes technically capable of more than they are advertised to do, reinforcing the broader pattern of capability loss and weakening the economic case for adoption by limiting the very forms of assistance that would make these systems valuable in everyday life.

Contract Law and the Illusion of Ownership 

Contract law and cloud governance shape domestic robotics by determining who ultimately controls a robot after it enters the home. Although domestic robots are consistently marketed as consumer products, their operation is typically influenced by terms of service, end-user license agreements, and cloud-based dependencies that give manufacturers an enormous amount of authority over the behavior of the robot. These contracts often allow the company to make changes to features, functionality, and data practices without the input of the consumer, meaning the robot a consumer purchases is not necessarily the robot they will continue to live with over time.

Automatic software updates sit at the core of this issue of ownership and consistency. Updates are commonly presented to users as necessary for security, safety, and compliance, leaving little meaningful choice about accepting them. Refusal of an update can disable even the most basic functionality, restrict access to cloud services, or render the robot partially or entirely unusable. In practice, consent becomes moot: users must accept new terms and behavioral changes in order to retain basic operation of their investment. This arrangement shifts control away from the household and toward the platform provider, embedding legal authority directly into the technical infrastructure of the robot.

This standard has negative implications for trust in continued capability. Features can be added, altered, or removed to meet changing regulatory, economic, or corporate priorities, most of the time effectively without a renewed user agreement. In both the US and the EU, contract law generally enforces these arrangements, treating continued use as acceptance. As a result, even purchased and outright owned domestic robots function less as physically owned objects and more as leased software services on owned hardware, subject to ongoing governance from outside the home. This dependency further contributes to capability loss: even where a robot is technically capable, its usefulness remains contingent on contractual compliance and uninterrupted cloud access, reinforcing the gap between consumer expectation and lived experience.
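The consent dynamic described above is easy to state in code. The sketch below uses an invented feature-flag structure to make the structural point: declining the new terms degrades the device to a local-only subset, so "consent" is not a genuine choice. Feature names are hypothetical.

```python
# Sketch of the forced-update consent dynamic, using an invented
# firmware-policy structure. Declining the new terms strands the robot
# on local-only functionality, so acceptance is the only practical option.

CLOUD_FEATURES = {"navigation_maps", "voice_assistant", "object_recognition"}
LOCAL_FEATURES = {"manual_control", "battery_status"}

def features_after_update_decision(accepted_new_terms: bool) -> set[str]:
    if accepted_new_terms:
        # Vendor may still have silently removed or altered features here.
        return CLOUD_FEATURES | LOCAL_FEATURES
    # Declining the terms strips every cloud-dependent capability.
    return LOCAL_FEATURES

print(features_after_update_decision(False))  # local-only subset remains
```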

Geo-Regulation and Fragmented Use

Variation in regulation shapes domestic robotics by tying robotic capability to geographic location. As robots cross borders, or even operate within different regulatory zones, their behavior is increasingly governed by location-specific rules on data protection, safety standards, cloud access, and AI governance. As a result, the same robot may function differently depending on where it is used, not because of any meaningful technical variation, but because regulatory compliance is enforced through geographically keyed restrictions.

In practice, this often takes the form of geo-fencing: a phenomenon where features are enabled, limited, or disabled based on jurisdiction. In the European Union, stricter data protection and emerging AI regulations may require reduced data retention, limited perception, or constrained autonomy. In the United States, fewer structural limits may allow broader functionality, but this flexibility is offset by legal uncertainty and greater exposure to litigation. Manufacturers respond by tailoring behavior regionally or defaulting to the most restrictive standard across markets, further narrowing overall capability.
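In configuration terms, geo-fencing amounts to a jurisdiction-keyed feature matrix with a most-restrictive fallback. The Python sketch below illustrates that pattern; the region profiles and flags are hypothetical, and a real vendor would deliver similar logic through remote configuration rather than on-device constants.

```python
# A minimal sketch of jurisdiction-based feature gating ("geo-fencing").
# Region names and feature flags are hypothetical illustrations.

FEATURE_MATRIX = {
    "EU": {"autonomy_level": 1, "camera_retention_days": 0,  "cloud_learning": False},
    "US": {"autonomy_level": 3, "camera_retention_days": 30, "cloud_learning": True},
}

# Unknown regions default to the most restrictive profile across markets,
# the "lowest common denominator" behavior described above.
DEFAULT = min(FEATURE_MATRIX.values(), key=lambda cfg: cfg["autonomy_level"])

def active_config(region: str) -> dict:
    return FEATURE_MATRIX.get(region, DEFAULT)

print(active_config("EU"))
print(active_config("CH"))  # falls back to the most restrictive profile
```

The same hardware, relocated across a border, thus wakes up as a different product, which is the inconsistency the next paragraph describes from the user's side.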

For users, geo-regulation produces inconsistency and confusion. A robot that performs one way in one country may behave differently or lose functionality entirely after relocation, travel, or regulatory updates. These changes are rarely transparent to the user and are often implemented remotely through software controls. Geo-regulation thus reinforces capability loss by fragmenting robotic behavior across borders and subordinating technical possibility to jurisdictional compliance. The result is a system whose intelligence is not only legally constrained, but geographically contingent, further complicating adoption and weakening the promise of domestic robotics as a stable, long-term technology.

The True Cost of Not Adapting Law to Application

Taken together, these legal and regulatory domains do not operate in isolation; they accumulate and reinforce one another in ways that fundamentally shape the lived reality of the deployment of domestic robotics. Insurance constrains what capabilities are economically viable to build, liability law suppresses discretionary behavior once robots are deployed, consumer protection limits user agency and customization, privacy regulation restricts perception and memory, labor politics narrow acceptable task scope, contract law and cloud governance centralize control outside the home, and geo-regulation fragments behavior across jurisdictions. Each framework is internally rational and justified, yet their collective overlap produces a system in which technical capability is steadily throttled as robots move from testing in the robotics lab into deployment in the homes of users.

This cumulative effect produces an unfortunate paradox in applied robotics: domestic robots are increasingly sophisticated, yet increasingly constrained. Users encounter machines that are expensive, technologically advanced, and heavily marketed as intelligent, but which hesitate, forget, refuse, or purposefully underperform in the moments where assistance would matter most. The issue is not that robotics has failed to develop, but that the conditions under which robots are allowed to operate have narrowed so significantly that meaningful functionality becomes almost impossible to deliver to the customer. Capability loss thus emerges not as a side effect of regulation, but as its predictable outcome when multiple legal regimes converge without thoughtful, high-level coordination.

From an adoption perspective, this presents an enduring negative result in the cost–benefit analysis of adoption for customers. Domestic robots are understandably costly because they incorporate advanced hardware and AI systems, yet their constrained behavior limits the value they can provide to users. Consumers are asked to accept surveillance trade-offs, contractual dependency, and behavioral inconsistency in exchange for systems that are legally allowed to do very little. Over time, this mismatch undermines trust, slows adoption, and reinforces skepticism about the practical value of embodied AI. Unless legal and regulatory frameworks are re-evaluated with attention to their cumulative intersecting impact, domestic robotics risks remaining trapped in a cycle where increasing technological capability yields diminishing real-world utility.

Conclusion

Living with a humanoid robot makes clear that the main barriers to domestic robotics are not technical, but legal and regulatory. As robots move from research labs into private homes, they encounter a combination of insurance requirements, liability rules, consumer protection laws, privacy regulation, labor politics, contractual controls, cloud dependence, and location-based restrictions. Each of these systems is designed to address real risks, yet together they drastically truncate what robots are allowed to do in everyday life. This produces the persistent pattern of capability loss, which cannot be solved by technical leaps in engineering alone. The analysis in this article shows that domestic robots are constrained not because they lack intelligence, engineering, or mechanical ability, but because existing legal frameworks favor caution, predictability, and risk avoidance over usefulness. These constraints become most visible in daily interaction, like that of my immersive research project, where legal rules translate into uselessly minuscule memory and reduced autonomy. The result is a glaring gap between what domestic robots could reasonably provide a user and what they are permitted to deliver in practice.

If domestic robotics is to become a viable and widely adopted technology, legal and regulatory frameworks must be reconsidered with their combined effects in mind. This does not mean weakening protections or accelerating deployment without safeguards. Rather, it requires governance approaches that recognize the realities of embodied AI in the home and allow sufficient functional capacity to justify the costs, trade-offs, and expectations placed on users. Without a reassessment, domestic robots will unfortunately remain expensive, limited, and difficult to justify with the fault falling squarely on the legal constraints rather than the incredible feats of engineering which hold the promise of a brighter tomorrow. 

What this research ultimately shows is that embodied AI does not fit into any existing legal category, and that issue of identity is at the heart of the regulatory complication. A humanoid robot in the home is treated at the same time as a consumer product, a tool, a data-collecting system, a safety device, a potential worker, and a cloud-controlled service. Each area of law applies its own rules as if the robot were only one of those things. The result is a pile-up of requirements that were never designed to work together. No single system is wrong, but together they place so many limits on behavior that the robot’s real abilities are slowly stripped away. This is how capability loss happens: not because the robot cannot act, but because the law has no clear way to understand what the robot actually is.

For this reason, regulating only how robots affect humans is no longer enough. Moving forward, embodied AI will increasingly need its own legal classification, one that recognizes it as a distinct type of system without turning it into a legal person. A new form of classification would make it possible to regulate the robot directly, rather than indirectly through fragmented rules about products, labor, data, or liability. Clear boundaries could be set around what a robot is allowed to do, how responsibility is assigned, what kinds of learning are permitted, and how risk is shared between manufacturers, owners, and insurers. This would not reduce safety or protections. It would replace today’s unintentional over-restriction with intentional classification. Without this shift, domestic robots will remain expensive, heavily limited, and frustrating to live with, no matter how advanced the technology becomes. If embodied AI is going to function meaningfully in everyday life, the law must begin by acknowledging it as a new category of our society and regulating it clearly, consistently, and on purpose.




Citing the Uncitable: Developing Standards for AI and New Media in Scholarly Work

Abstract

Artificial intelligence and generative models are rapidly reshaping the methodologies of academic research and scholarship, but cohesive citation standards are still in progress. Without reliable, transparent, and standardized citations accepted by the academic community, AI-supported research risks either being underreported or misreported, or being deemed untrustworthy for acceptance in scholarship. The citation of AI support poses a number of novel issues, the two most foundational being that AI outputs are neither replicable nor verifiable, two core tenets of traditional citation requirements. Outputs from large language models are non-deterministic, ephemeral, and context-sensitive. This necessitates nuanced probes into citation debates on the issues of variable models, training data, hosting, prompting, classification of function, legalities, variance of institutional standards, authorship, licensing, ethics, digital or physical archival responsibility, and the rapid pace of development. The goal of a standard practice of citing AI-supported research and scholarship is to promote transparency of use and build credibility for AI as a methodological tool. This paper presents a section of work from the CLARIAH-AT-funded initiative to develop citation standards for six types of new media, AI outputs being one of them. Drawing on a series of collaborative discussion-based workshops, teaching opportunities, and written reflections, the project culminated in the proposal of six citation categories covering software packages, data sets, digitized resources, social media posts, ephemeral content, and AI outputs. These efforts demonstrate that citation of new media is complex and contested, but the act of bringing forth these discussions serves as an ethical roadmap for academia as historical and scholarly methodologies adapt to the age of AI.

Keywords: AI, LLM, Prompts, Citation, Scholarship, New Media, Transparency

1. Introduction

AI has moved from novelty to commonplace in our lives and, in many cases, our workflows. Historians, librarians, and researchers use AI tools to process data, analyze sources, and shape narratives. However, we have not yet established a method of citation for the use of these tools and applications. Widely utilized and accepted citation conventions were designed for monographs, collected works, and articles. These classical conventions struggle to account for a medium that produces dynamic, personalized outputs, as well as for the variety of roles the tools play in the production of research and scholarship, be it collaborator, co-author, analyst, or other. Traditional conventions are far out of their depth here: a citation from a book will not change no matter who picks it up, and an article cannot be the co-author of a paper it is included in.

Each of the top citation convention organizations has recognized the need for citation conventions for the contributions of AI outputs and put forth “temporary suggestions” for how one may cite AI use under its guidelines. The suggestions indicate how each organization views AI, whether as software, an author of irretrievable content, a collaborator, or a container. These suggestions provide only a basic level of citation guidance and do not account for the variety of applications of AI in research and scholarship.


1.1 International Standards


Software or Tool:


APA Guidelines: Author/Developer. (Year). Title (Version) [Description]. URL.

Harvard Guidelines: Author/Organisation, Year. Title, version. [Computer software] (or similar). Available at: URL (Accessed: date). 

These guides treat AI as software or a tool through their formatting suggestions and the information they include. This is indicated by the inclusion of a “developer” option, as opposed to only listing an author, and by versioning, which signals that there is no stable, citable content object but rather a tool. The URL examples also link to product pages, not to specific citable outputs, indicating that the citation documents a method or tool rather than pointing toward a retrievable publication.


Container:


MLA Guidelines: Author (if available). “Prompt (if relevant).” Name of AI Tool, version (if given), Company, Date of Access, URL (if applicable).

This guide treats AI as a container, similar to an article being in a journal or a page being on a website. The AI system is the container or publication environment and the output exists within it. 


Author of Irretrievable Content: 


Chicago Guidelines: “Prompt text.” Name of AI Tool, version (if given), Company, date of generation, URL (if applicable).

This guide treats AI in book-like form, as a reference work, through the inclusion of information like a publisher, edition, and year of generation. The work is understood to be irretrievable because the exact AI output cannot be recovered, which is similar to citing an unpublished manuscript or personal communication.


1.2 Current State of Research


The scholarly work that has tackled AI use in academia covers a wide range of subtopics that directly factor into the decision-making process and debates surrounding when and how to cite AI support in scholarship. There are, of course, a number of broad-based looks at how generative AI is reshaping the entire scholarly value chain from knowledge production to dissemination. These articles postulate the appropriate uses for AI support in the generation of scholarship, including knowledge synthesis, development, evaluation, and translation (Grimes, 2023). The variety of so-called acceptable applications indicates a need for transparent and appropriate methods of citation. There are also articles on which types of models are appropriate for use and citation in scholarly works. These models are typically smaller and more refined in scope and topic, contain training data from scholarly sources, and are often locally hosted (Montague-Hellen, 2024). The variation of models and reliability of output indicates a gulf between use cases, meaning that a closely aligned, smaller model trained on scholarly works would be a more acceptable candidate for direct output citation than a larger model trained on vast data that could be hallucination-prone. The delicate balance between AI authorship and academic integrity has been raised in a number of works. The protection of human creativity and critical thinking is paramount in the preservation of human authorship (Wise, 2024). The role that AI plays in the creation of scholarship also indicates the method of appropriate citation, be it a mention in methodology, a proper in-line citation, or an acknowledgement in a list of works consulted.


2. Materials and Methods

The research referenced in this article is from a range of sources. 


2.1 CLARIAH-AT Project

Primarily, this research surrounds the CLARIAH-AT-funded project designed to draft citation guidelines for new media sources in order to promote data reuse and strengthen the transparency of new methods in scholarship. The new citation guidelines were framed as a revision of, and addition to, the current guidelines of the Institute of History at the University of Vienna, specifically Section E, Electronic Resources, which was last updated in June 2023. The project took the form of two interdisciplinary workshops with about 10 attendees across academic fields and institutions. The workshops were hosted by the project co-leads, Emily Genatowski and Dr. Thomas Wallnig, and were conducted virtually. Prior to each workshop, materials were circulated to introduce the topics to be discussed. Throughout the workshops, slides guided the group's discussion of each of the six topics, and notes were taken as the discussions and debates unfolded. The notes were then synthesized and incorporated into proposed new forms of citation that took into account the issues raised throughout the discussion. The proposed forms were circulated once more to the participants prior to the final workshop, where any concluding issues were raised and noted. After the conclusion, the notes were synthesized and final adjustments were made. The final citation conventions for each of the six categories were then submitted to CLARIAH as the conclusion of the project.


2.2 International Love Data Week: University of Graz

One of the project co-leads, Emily Genatowski, attended International Love Data Week in Graz, Austria, to present the citation of new media initiative. The talk was held at the library of the University of Graz and outlined the goals of the project co-leads, the methodology of the project, and the structure and timeline of the team, and it engaged a number of discussions throughout a Q&A segment following the talk. The discussions were noted and raised once again with the workshop groups throughout the virtual sessions.


2.3 AI in Academia Workshop: University of Vienna 

Both project co-leads, Dr. Thomas Wallnig and Emily Genatowski, were involved in a workshop at the University of Vienna titled AI in Academia: Transparency, Efficiency and Responsibility. This workshop was aimed at graduate students looking to refine and strengthen their use of AI in their work. The pair delivered a joint lecture titled Citations of AI Supported Scholarship, which introduced the students in attendance to the CLARIAH-funded project as well as the current widely acknowledged debates on AI-supported scholarship. The students were then led through an anonymized study via a series of questions about their use of AI in scholarship and what types of crediting and citations they felt were possible. The results were anonymous, displayed in real time on a projector in front of the group, and the students were then asked to discuss them.


2.4 AI and Large Language Models for Humanities Research: University of Vienna

Project co-lead Emily Genatowski founded and operated a master's-level methodological workshop at the University of Vienna in 2023 which provided groundwork for the concepts of AI-supported scholarship. The course was interactive and covered topics integral to the citation debate, including transparency, authorship, ethics, prompt techniques, training data analysis, model variability, hosting concerns, and more. The discussions in this course were integral to the foundations of the AI prompting discussion sessions in the ensuing CLARIAH project workshop series. The course was later adapted and publicized through DARIAH Campus, a pan-European digital infrastructure for educational materials.


2.5 Emerging Digital Methodologies Conference: Oxford University 

Project co-lead Emily Genatowski delivered a full paper presentation at the University of Oxford's Emerging Digital Methodologies Conference on the process of updating, adapting, and problematising the citation methods surrounding AI-supported scholarship. The discussions surrounding this presentation were incorporated into the composition of Section 4 of this paper.


2.6 Reading Course Digital Humanities - Theory and Concepts in the Digital Humanities: University of Vienna

Project co-lead Emily Genatowski guest lectured during this course, taught by project co-lead Dr. Thomas Wallnig, and covered citation methodologies. The material in the session taught by Ms. Genatowski was split equally between traditional citation methodologies and citations of AI-supported scholarship.


3. Results

The results of the project after the workshop series, lectures, and discussions span all six categories: data packages, software packages, data sets, ephemeral media, social media, and AI outputs. For the purposes of this paper, the findings for the other five categories are listed in the appendix; the master citation guide and the variations of AI citations are placed below. The master citation guide incorporates, at a higher level, what could and should be included and provides a flexible framework, whereas the seven citation variations within the category of AI-supported research reflect differences in engineering, application, and responsibility. The variations occur in how the prompts are engineered (e.g., single prompt, multimodal prompt, or multi-turn refinement), in how the output is applied in the academic work (e.g., citation of output text, or use as a tool or method), and in how much agency the AI support has in the creation of the scholarly work (e.g., authorship or co-collaboration). The final examples follow below.


3.1 Flexible Framework


Footnote Format

Prompt Author. “Prompt: [Full or excerpted prompt text].”

Generated using: [Model name and provider, e.g., OpenAI ChatGPT-4].

Platform: [e.g., ChatGPT, Poe, Perplexity AI].

Date of Generation: [YYYY-MM-DD].

Preserved via: [e.g., Archived transcript, Screenshot, Exported file].

Archive Reference or Link: [URL, archive ID, or filename].

Accessed [Date].

(Optional: Response Excerpt or Summary; Optional: Use Case Context).


Bibliography Format

Prompt Author. “Prompt: [Full prompt or representative excerpt].”

Generated using: [LLM Model Name and Version] by [Provider, e.g., OpenAI].

Platform: [Chat Interface Name, e.g., ChatGPT, Perplexity].

Date Generated: [e.g., 2025-06-03].

Preserved via: [e.g., Screenshot, Archive.org, Local Transcript Export].

Link or Archive ID: [Persistent URL or Filename].

Accessed [Date].

(Optional: “AI Output used in: [e.g., analytical summary, creative generation, etc.]”)


3.2 Variation Guidelines Based on Use 

I. Single-Prompt, Single-Output (Public Use)

Footnote format:

Authoring Entity (e.g., OpenAI), Model Name, prompt: “Prompt text,” date generated, platform (e.g., chat.openai.com), accessed [Access Date]. License: [Usage License or Terms].

Bibliography format:

  Authoring Entity. Model Name. Prompt: “Prompt text.”

  Generated [Date] via [Platform]. Accessed [Access Date].

License: [Usage License or Terms].

II. Multi-Turn Conversation (Threaded Dialogue)

Footnote format:

Authoring Entity (e.g., Anthropic), Model Name, transcript of conversation with [User Name], title or topic (if applicable), date(s) of interaction, platform (e.g., claude.ai), archived as: [Filename or Repository Link], accessed [Access Date]. License: [Terms].

Bibliography format:

Authoring Entity. Model Name. Transcript of conversation with [User Name], “[Conversation Title].”

Conducted [Date(s)] via [Platform]. Archived as: [Filename or Repository Link].

  Accessed [Access Date]. License: [Terms].

III. Prompt Used as Research Protocol or Method

Footnote format:

Authoring Entity, Model Name, prompt: “Prompt text,” executed via [Platform or API], date run, archived at: [Stable URL or Archive ID]. Accessed [Date]. License: [Terms].

Bibliography format:

Authoring Entity. Model Name. Prompt used as method: “Prompt text.”

Executed via [Platform or API] on [Date]. Archived at: [URL or ID].

Accessed [Date]. License: [Terms].

IV. Citable Output from Prompt Use

Footnote format:

Authoring Entity, Model Name, prompt: “Prompt text,” date generated. Output cited in: [Scholar Name], “Title of Work,” [Publication or Submission Context]. Accessed [Date]. License: [Terms].

Bibliography format:

  Authoring Entity. Model Name. Prompt: “Prompt text.”

  Generated [Date]. Quoted in: [Scholar Name], “Title of Work.”

  Accessed [Date]. License: [Terms].

V. Prompt-Based Collaboration (e.g., Co-Writing)

Footnote format:

Authoring Entity, Model Name, co-writing session with [User Name], title or description, prompt chain executed [Date], archived as: [Filename or Repository ID]. Final version edited by [User Name]. Accessed [Date]. License: [Terms].

Bibliography format:

Authoring Entity. Model Name. Co-writing session with [User Name]: “[Title or Description].”

 Prompt chain executed [Date]. Archived as: [Filename or Repository ID].

 Final version edited by [User Name]. Accessed [Date]. License: [Terms].

VI. Prompt Transcript for Teaching or Institutional Submission

Footnote format:

User Name, “Title or Description of Prompt Transcript,” course or project title, institution, date created, generated using [Model Name], submitted or stored at: [Platform or Repository], filename: [File Name]. License: [Terms or Academic Use].

Bibliography format:

User Name. “Title or Description of Prompt Transcript.” Submission for [Course Title],

  [Institution]. Created [Date]. Generated using [Model Name].

  Stored at: [Platform or Repository], filename: [File Name].

  License: [Terms or Academic Use].

VII. Visual/Multimodal Prompts (e.g., DALL·E, Midjourney)

Footnote format:

Authoring Entity, Model Name, prompt: “Prompt text,” date generated, platform (e.g., midjourney.com, chat.openai.com), image or media file: [Filename]. Accessed [Date]. License: [Image Generation Terms].

Bibliography format:

  Authoring Entity. Model Name. Prompt: “Prompt text.”

  Generated [Date] via [Platform]. Media file: [Filename].

  Accessed [Date]. License: [Image Generation Terms].

Table 1.

| Category | Use Case | Footnote Example |
| --- | --- | --- |
| Single Prompt | Basic query | OpenAI ChatGPT-4. Prompt: “How can I analyze 17th-century OCR errors?” Prompt by Emily Genatowski, 14 May 2025. Archived at [URL]. |
| Multi-Turn | Dialogue | Anthropic Claude 3. Transcript with Emily Genatowski, “AI in Teaching,” 10–12 May 2025. Archived [ID]. |
| Prompt as Method | Protocol | OpenAI ChatGPT-4. Prompt used as method: “Summarize dataset cleaning steps,” API, 20 May 2025. Archived [ID]. |
| Citable Output | Output Cited | OpenAI ChatGPT-4. Prompt: “Generate timeline of AI ethics events,” cited in Genatowski, 2025. |
| Collaboration | Co-Writing | OpenAI ChatGPT-4. Co-writing session with Emily Genatowski, “Drafting AI Citation Paper,” 15 May 2025. Archived transcript. |
| Teaching | Coursework | Emily Genatowski. “Prompt Transcript for Digital Humanities Seminar,” Uni Wien, 2025. Generated using ChatGPT-4. Stored [Repository]. |
| Visual | Images/Media | OpenAI DALL·E 3. Prompt: “Illustration of AI citation workflow,” 21 May 2025. Image file [File]. License [Terms]. |



4. Discussion

4.1 Workshop Series Discussion 

The discussion surrounding AI use in the first workshop covered the following topics, which were reflected in the suggested guidelines above.

Prompt transparency was heavily emphasized.

The group opted to add explicit formatting for quoting the exact prompt text, as meeting participants stressed the need for reproducibility and intellectual accountability in AI-supported work.

Archiving and persistence requirements were adopted.

The group opted to include fields for archived transcripts, file names, or persistent URLs, reflecting concerns about the ephemerality and editability of AI-generated content.

Model name and version identification were enshrined.

The group opted to require citation of the specific AI model and version (e.g., ChatGPT-4, Claude 3 Opus) to reflect technological variation and ensure reproducibility of outputs, which may change over time.

Platform or access point clarification was suggested.

The group opted to distinguish between interactive platforms (e.g., chat.openai.com) and API-based use, acknowledging that researchers may interact with AI differently and this affects outputs and rights.

License disclosure was facilitated.

The group opted to add a required License field (e.g., OpenAI Terms of Use), in response to legal and copyright concerns raised during the meeting, aiming to help define permissible use of generated content.

A policy of no AI co-authorship was adopted.

The group opted to reaffirm that AI cannot be listed as a co-author. This clarified “co-writing session” as a collaborative tool, with the human user explicitly acknowledged as editor or final author.

A new type of consideration for teaching & institutional contexts was introduced. 

The group opted to introduce a new citation type for AI prompt transcripts submitted for coursework or stored in repositories, in response to education-focused feedback about student accountability and transparency.

Visual/Multimodal prompt citations were accounted for.

The group opted to add distinct formatting for image or video generation prompts (e.g., from DALL·E or Midjourney), supporting the emerging practice of AI-generated visual scholarship.

Flexible terminologies for the variation of applications were introduced.

The group opted to incorporate field labels like “used as method” or “quoted in” to accommodate the varied academic uses of AI from analytical pipelines to creative citations.


4.2 AI in Academia Workshop Anonymous Poll Results

The following figures are real-time displays of responses from graduate students on how they would choose to deal with the following ethical quandaries surrounding the citation of AI-supported research.



4.3 Discussion-Based Topics from Lectures

 

Function’s Influence on Citation

The application or function an AI system serves within the research process directly shapes how it should be cited. When an AI system acts purely as a tool, for example formatting citations, translating, or transcribing audio, it typically requires acknowledgment but not a formal reference or citation. As the AI's role shifts toward interpretive or creative work, however, its function increasingly resembles that of a human collaborator or co-author, which necessitates a transparent citation. If the output is quoted directly, a traditional citation is also necessary. Categorizing the function of the AI use prevents misuse and misattribution and therefore helps maintain the traceability of intellectual contributions across AI and academic collaboration.


International Citation Standards and Classification

International citation standards for AI-supported scholarship are still in development. While style guides such as APA, MLA, and Chicago have issued provisional examples, included in the sections above, there is no universal or international agreement on how and when to cite AI contributions. Classification systems disagree on whether AI should be treated as software, dataset, or collaborator, and this distinction leads to inconsistencies in the information included and in indexing. Building on established standards such as DOI and ORCID registries could promote a common approach that encourages interoperability, verifiability, and trust.


Legalities vs Individual Institutional Standards

The gap between legal frameworks, which are beginning to define ownership of AI output, and university or institutional policies, which try to attribute authorship, remains wide. Legally, many jurisdictions hold that an AI system cannot hold authorship rights, which de facto assigns ownership to the academic prompting the system or to the owner of the AI system. Yet the current trend in institutional standards is to prioritize transparency and contribution disclosure over legal or copyright claims. An academic following the legal standard could therefore still be in violation of the policy of the institution under which they work. This highlights a conceptual gap between legal systems, which aim to safeguard intellectual property, and academic institutions, which aim to preserve academic integrity, transparency, and trust. Reconciling the two will require joining rights-based and responsibility-based models of scholarly credit.


Model Types and Hosting

The model type and hosting environment of an AI tool influence citation practices through their implications for training-data transparency and reproducibility. Closed, proprietary models hosted via API often restrict insight into data provenance and training parameters, complicating verification and long-term archiving. Conversely, open-source or locally hosted models permit fuller disclosure of versioning, fine-tuning datasets, and model weights, aligning better with academic norms of verifiability. Citations of AI systems must therefore increasingly note not only the model name but also its access mode, version, and hosting conditions to preserve scholarly audit trails.


Persistent Identifiers

Persistent identifiers (PIDs) such as DOIs, Handles, or emerging AI-specific identifiers play a crucial role in ensuring the citability and traceability of AI outputs. Without stable links to the specific model instance, prompt, or dataset used, scholarly references risk obsolescence as models update or are retired. Assigning PIDs to AI models, generated outputs, and even prompt-result pairs would provide a stable referential object for scholarly infrastructure. Integrating these identifiers into citation metadata would extend the FAIR principles (Findable, Accessible, Interoperable, Reusable) to generative AI contexts.
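No AI-specific PID scheme is yet in place, so the sketch below is an illustration only: it uses a content hash as a stand-in for the kind of referential object a registry-minted identifier would need to pin down, namely the exact prompt, the exact output, and the model that produced them. All names and values are invented.

```python
import hashlib
import json

def prompt_result_fingerprint(prompt: str, output: str, model: str) -> str:
    """Derive a stable fingerprint for a prompt-result pair.

    A real PID (e.g., a DOI) would be minted by a registry; the hash
    here only illustrates what such an identifier must commit to: the
    exact prompt, the exact output, and the model that produced them.
    """
    record = json.dumps(
        {"prompt": prompt, "output": output, "model": model},
        sort_keys=True,
    )
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

fp = prompt_result_fingerprint(
    prompt="Summarize the FAIR principles in one sentence.",
    output="FAIR data are findable, accessible, interoperable, and reusable.",
    model="ExampleLM-1 (hypothetical)",
)
print(fp[:16])  # short prefix for display; the full hash is the reference
```

Any change to the prompt, output, or model name produces a different fingerprint, which is exactly the property a persistent identifier for AI outputs would need to guarantee.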


Institutional Guidance vs Departmental Practice

Institutional policies on AI citation often provide broad ethical frameworks, but actual practices tend to crystallize at the departmental level, reflecting disciplinary norms. For instance, humanities departments may emphasize interpretive transparency, while computational disciplines prioritize reproducibility. This type of misalignment can leave researchers uncertain about what compliance means in certain cases. Bridging institutional guidance and departmental expectations requires developing dynamic policies that evolve alongside disciplinary conventions, supported by centralized registries of AI-use disclosure templates and examples of model citation for students and authors to reference.


Ethics

The ethics of citing AI in scholarship extends beyond formal attribution to questions of accountability, bias transmission, and epistemic honesty. Ethical citation demands that scholars acknowledge not only that AI was, in fact, used but how it may have influenced reasoning, interpretation, and narrative framing. Transparent disclosure allows peers to assess potential distortions arising from model bias or non-deterministic outputs. As AI becomes embedded in knowledge production, ethical citation will serve as both a moral and methodological safeguard, reinforcing scholarly trust and the integrity of the research record.


5. Conclusions

AI is no longer peripheral to research, but citation standards continue to lag behind. Without clear norms, AI use remains invisible; with them, it can become transparent and accountable. The CLARIAH-AT project demonstrates that rigorous, flexible, and usable standards are possible. Seven templates and an archival suggestion offer a prospective roadmap for scholars and institutions and continue the discussion surrounding AI acceptance and usability in the scholarly context. These debates should evolve as the technology evolves, but AI use should not be pushed into the shadows, where it risks misuse, distrust, and malpractice. Clear citation guidelines honor intellectual honesty and facilitate transparency; we should strive to keep pace with the latest technology so that academics can innovate responsibly and efficiently. Embracing these provisional formatting suggestions will give historians, librarians, and students the confidence to work openly with AI.

Appendix A

Appendix A.1

Full List of Citation Formats:


Software Packages: 

Footnote Format

Developer(s) or Organization. Software Title. Version [Version Number or Tag], release date: [YYYY-MM-DD].

Developed by: [Contributor Roles, if applicable – e.g., "Curated by", "Maintained by", "Lead Engineer"].

Platform or Host: [e.g., GitHub, Zenodo, institutional repository, commercial vendor].

Distributed by: [if distinct from host; optional].

Persistent Identifier or URL: [DOI, Handle, or Stable Link].

License: [Full license name, e.g., MIT, GPL v3, CC-BY 4.0].

Documentation available at: [Manual URL or README, optional].

Archived at: [Archive.org, Perma.cc, or repository ID; optional].

Accessed [Date].

Bibliography Format

Developer(s) or Organization. Software Title. Version [Version Number or Tag], released [YYYY-MM-DD].

Developed by: [Contributor Roles, if applicable].

Platform or Host: [e.g., GitHub, Zenodo, institutional repository].

Distributed by: [Vendor or publisher name, if different].

Persistent Identifier or URL: [DOI or stable access link].

License: [e.g., MIT License, GNU GPL 3.0, CC-BY-NC-SA 4.0].

Documentation: [URL to manual, GitHub Wiki, or README file].

Archived at: [Web archive or local repository ID, if used].

Accessed [Date].


Data Sets:

Footnote Format

Creator(s) or Organization. Dataset Title. Version [Number or Label], release date: [YYYY-MM-DD].

Curated by: [Curator(s), Annotator(s), Schema Designers, or Editorial Team, if applicable].

Hosted by: [Repository or Hosting Platform, e.g., Zenodo, Phaidra, CLARIN, Harvard Dataverse].

Distributed by: [If distinct from host; optional].

Persistent Identifier or URL: [DOI, Handle, or Stable Link].

License: [e.g., CC-BY 4.0, Open Data Commons, Custom Terms].

Documentation: [Optional – URL to README, metadata schema, or data dictionary].

Archived at: [Optional – e.g., Perma.cc, WebCite, university archive].

Accessed [Date].

Bibliography Format

Creator(s) or Organization. Dataset Title. Version [Number or Label], released [YYYY-MM-DD].

Curated by: [Names and roles of contributors, e.g., “Curated by Jane Smith, Annotated by Max Mustermann”].

Hosted by: [Repository Name, e.g., Zenodo, CLARIN, Phaidra].

Distributed by: [If different from host; optional].

Persistent Identifier or URL: [e.g., DOI: 10.1234/zenodo.45678].

License: [e.g., CC-BY 4.0, CC0, or institutional terms].

Documentation: [URL to additional metadata, schema, or usage guide].

Archived at: [e.g., Archive.org snapshot or institutional long-term storage ID].

Accessed [Date].


Digitized Resources:

Footnote Format

Original Creator or Author. Title or Description of Original Work, [Original Date of Creation or Publication].

Held at: [Institution Name], Collection or Archive Name, Shelfmark or Identifier.

Digitized by: [Digitizing Entity or Platform].

Hosted by: [Platform or Repository Name].

Persistent Identifier or URL: [DOI, Handle, or Stable Link].

License: [e.g., Public Domain, CC BY-NC-SA 4.0, or institutional terms].

Documentation or Metadata: [Optional – Link to catalog entry or digital edition].

Accessed [Date].

Bibliography Format

Original Creator or Author. Title or Description of Original Work. [Original Year of Creation or Publication].

Collection or Archive: [Holding Institution, Shelfmark or ID].

Digitized by: [Name of Digitizing Institution or Platform].

Hosted by: [Digital Repository or Access Platform].

Persistent Identifier or URL: [e.g., http://hdl.handle.net/123456/789].

License: [e.g., CC0, Public Domain, or specific repository rights].

Accessed [Date].

(Optional: Documentation or metadata record URL.)


Social Media:

Footnote Format

Author (Real Name if Known or Platform Handle). “Post Content or Short Excerpt.”

Platform: [Platform Name, e.g., X (formerly Twitter), Facebook, Instagram].

Date of Post: [YYYY-MM-DD], Time (optional).

URL or Persistent Link: [Full post URL or web archive link].

Accessed [Date].

(Optional: Screenshot Filename or Archive ID; Optional: License or Usage Terms).

Bibliography Format

Author (Handle or Name). “Post Content or Excerpt.” Platform Name.

Posted on [Full Date], [Time (optional)].

URL: [e.g., https://twitter.com/username/status/1234567890123].

Accessed [Date].

(Optional: Screenshot or Archive Reference; License: [e.g., Standard Platform License]).

Ephemeral Media:

Footnote Format

Creator or Event Host. Title or Description of Content.

Type of Content: [e.g., Instagram Story, Livestream, Temporary Exhibit, Event Page].

Platform or URL: [e.g., YouTube Live, Instagram, Webpage], Published [Date and Time].

Preserved via: [e.g., Screenshot, Archive.org, Perma.cc, Local Capture ID].

Filename or Persistent Link: [e.g., Screenshot_2025-05-20.png, https://perma.cc/...].

Accessed [Date].

(Optional: License or Platform Terms; Optional: Event or Campaign Hashtag; Optional: Approximate Duration or Expiration Date).

Bibliography Format

Creator or Host. Title or Description of Content.

[Type of Media], originally published [Date].

Platform: [e.g., TikTok, Instagram, YouTube Live, Eventbrite].

Preserved via: [e.g., Screenshot, Web Archive, Local Capture].

Filename or Archived Link: [e.g., chat_screenshot_May2025.png, https://perma.cc/ABC1-XYZ].

Accessed [Date].

(Optional: License: [e.g., Standard Platform Terms or CC license]).


AI Prompts:

Footnote Format

Prompt Author. “Prompt: [Full or excerpted prompt text].”

Generated using: [Model name and provider, e.g., OpenAI ChatGPT-4].

Platform: [e.g., ChatGPT, Poe, Perplexity AI].

Date of Generation: [YYYY-MM-DD].

Preserved via: [e.g., Archived transcript, Screenshot, Exported file].

Archive Reference or Link: [URL, archive ID, or filename].

Accessed [Date].

(Optional: Response Excerpt or Summary; Optional: Use Case Context).

Bibliography Format

Prompt Author. “Prompt: [Full prompt or representative excerpt].”

Generated using: [LLM Model Name and Version] by [Provider, e.g., OpenAI].

Platform: [Chat Interface Name, e.g., ChatGPT, Perplexity].

Date Generated: [e.g., 2025-06-03].

Preserved via: [e.g., Screenshot, Archive.org, Local Transcript Export].

Link or Archive ID: [Persistent URL or Filename].

Accessed [Date].

(Optional: “AI Output used in: [e.g., analytical summary, creative generation, etc.]”)
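As a usage illustration, the sketch below assembles the AI-prompt footnote format above from a metadata record; the dictionary keys mirror the template fields, and all values are invented for demonstration.

```python
# A minimal renderer for the AI-prompt footnote template above.
# Field names mirror the template; all values here are illustrative.
def render_ai_prompt_footnote(meta: dict) -> str:
    lines = [
        f'{meta["author"]}. "Prompt: {meta["prompt"]}."',
        f'Generated using: {meta["model"]}.',
        f'Platform: {meta["platform"]}.',
        f'Date of Generation: {meta["generated"]}.',
        f'Preserved via: {meta["preserved_via"]}.',
        f'Archive Reference or Link: {meta["archive_ref"]}.',
        f'Accessed {meta["accessed"]}.',
    ]
    return "\n".join(lines)

print(render_ai_prompt_footnote({
    "author": "Jane Scholar",
    "prompt": "Summarize the 1848 revolutions in 100 words",
    "model": "OpenAI ChatGPT-4",
    "platform": "ChatGPT",
    "generated": "2025-06-03",
    "preserved_via": "Exported transcript",
    "archive_ref": "transcript_2025-06-03.txt",
    "accessed": "2025-06-10",
}))
```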


Extended versions of each section, along with the reasoning behind the final formatting suggestions based on workshop notes, can be found here


References

American Psychological Association. 2023. APA Style Guidelines for Citing AI-Generated Content. Available online: https://apastyle.apa.org/blog/how-to-cite-chatgpt.

GO FAIR Initiative. 2017. FAIR Principles. Available online: https://www.go-fair.org/fair-principles/.

Grimes, Seth. 2023. Generative AI and the Scholarly Value Chain: Knowledge Creation, Synthesis, and Translation in the Age of LLMs. Journal of Scholarly Publishing 54: 327–45.

Harvard University Library. 2023. Harvard Referencing Guide: Software, Tools & AI Systems. Available online: https://www.library.harvard.edu/referencing-guides.

Modern Language Association. 2023. MLA Handbook: How to Cite Generative AI Output (9th ed.). Available online: https://style.mla.org/citing-generative-ai.

Montague-Hellen, Laura. 2024. Domain-Specific AI Models in Scholarship: Reliability, Scope, and Academic Use-Cases. Digital Scholarship Quarterly 12: 44–67.

University of Chicago Press. 2023. Chicago Manual of Style: Citing AI-Generated Text (17th ed. Update). Available online: https://www.chicagomanualofstyle.org/help-tools/AI-citation-guidance.html.

Wilkinson, Mark D., Michel Dumontier, Ijsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg, et al. 2016. The FAIR Guiding Principles for Scientific Data Management and Stewardship. Scientific Data 3: 160018.

Wise, Anna. 2024. AI Authorship, Academic Integrity, and the Future of Scholarly Voice. Ethics in Higher Education Review 8: 201–18.

AI assistance was used in the form of ChatGPT (OpenAI) for drafting, language refinement, and reference structuring. All conceptual arguments, interpretation, and final revisions were completed by the Author.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Emily Kate Genatowski

Redesigning Infrastructure of the 20th-Century City for Humanoid Robotics

When people imagine humanoid robots becoming part of everyday life, the conversation often jumps immediately to futuristic cityscapes — purpose-built environments designed from scratch to accommodate autonomous systems. It is easy to assume that if embodied AI is coming, then our existing cities must be fundamentally incompatible with it. The visual language of robotics encourages this leap. We picture gleaming corridors, sensor-embedded sidewalks, and perfectly ordered urban grids.

But history suggests something much more ordinary, and much more practical.

Cities are not replaced when transformative technologies arrive. They are layered.

Electricity did not require Europe to begin again. It required standard voltage systems, sockets, fire codes, and public confidence that wires in the walls would not burn buildings down. Modern sanitation did not mean abandoning London; it meant coordinated sewer construction after disease and stench made inaction impossible. Automobiles did not erase urban life. They forced the introduction of driver licensing, registration plates, insurance systems, traffic signals, and road markings onto streets that had once been shared by horses, pedestrians, and street vendors. The internet did not dissolve cities either. It overlaid fiber networks, spectrum regulation, authentication systems, and eventually data protection law onto the built environment.

In each case, capability advanced first. Governance lagged behind. Public anxiety surfaced. Standards were developed. Infrastructure was retrofitted. Over time, what had once seemed disruptive became mundane.

Humanoid robotics now occupies that early stage.

The machines can already walk, navigate, carry objects, and interact with digital systems. Their technical capability is not theoretical. What is missing is not intelligence or locomotion, but civic accommodation. Our cities were designed for human bodies, wheeled vehicles, static appliances, and centralized utilities. They were not designed for mobile, sensor-equipped, semi-autonomous physical systems operating in shared space.

This gap creates friction that is often misinterpreted as impossibility. When a robot hesitates on cobblestones or encounters uncertainty on public transit, the conclusion is sometimes that humanoid robotics itself is premature. But friction at the edges of infrastructure is a familiar historical signal. It tells us less about whether the technology will exist, and more about whether the surrounding environment has been adapted to support it.

The choice before us is not whether to construct entirely new “robot cities.” It is whether we are willing to undertake the incremental work of retrofitting existing ones.

Every major infrastructure shift in modern history has required coordinated adjustments: technical standards, regulatory clarity, institutional responsibility, and eventually social norms. Humanoid robotics is not a rupture in that pattern. It is the next instance of it.

The question, then, is not whether robots belong in public space. The more practical question is what minimal layers of energy, identity, access control, accountability, and social signaling are required to make their presence stable and legitimate.

Cities have absorbed transformative systems before. They will do so again. The work is not demolition. It is layering.

The Pattern We Keep Repeating

When a technology meaningfully alters how bodies move, how power circulates, or how information flows through a city, the transformation does not begin with regulation. It begins with capability.

Electric lighting worked before unified voltage standards and fire codes existed. Internal combustion engines propelled vehicles through crowded streets long before traffic signals, driver licensing, or insurance requirements were standardized. Early networked computing connected institutions before lawmakers understood how digital communication would reshape commerce, speech, or identity. In each case, the technical system proved itself first. The civic framework arrived later.

This sequence is not accidental. It reflects a structural lag between invention and integration.

The early period of any transformative technology tends to expose mismatches between capability and environment. Streets designed for pedestrians and horses were suddenly shared with motor vehicles capable of unprecedented speed. Dense urban housing built without sanitation systems produced public health crises once population densities intensified. Electrical systems installed without standardized safety measures produced fires and distrust. The friction that followed was not a sign that these technologies were fundamentally incompatible with cities. It was evidence that cities had not yet adapted to them.

Public anxiety often concentrates attention during this phase. Fatal automobile accidents, disease outbreaks, electrical hazards, or privacy concerns create visible moments of instability. These moments, in turn, create political momentum. Governance does not emerge in a vacuum; it tends to respond to concentrated risk.

The state’s role at this stage is rarely to suppress the technology outright. More often, it standardizes interfaces and assigns accountability. Voltage levels are harmonized. Sewer systems are coordinated across districts. Vehicles are registered. Drivers are licensed. Insurance markets are formalized. Traffic rules are codified. Infrastructure is gradually redesigned to accommodate the new system without dismantling the old city entirely.

Importantly, this process is incremental. Roads were not rebuilt overnight. Electrical grids expanded neighborhood by neighborhood. Fiber networks were laid gradually. Each adjustment was layered onto existing structures. Over time, the extraordinary became routine. Few people today think of wall sockets, sewer lines, or traffic lights as radical interventions. They are simply part of urban life.

Humanoid robotics appears to be entering that early stage of structural lag.

The machines can walk, lift, navigate, and interact. They are increasingly capable of operating beyond laboratory settings. Yet the civic systems that would allow them to do so at scale — energy access, operational permissions, identity verification, liability allocation — remain fragmented across regulatory silos. Product safety law addresses mechanical risk. Data protection law addresses information processing. Emerging AI regulation addresses algorithmic risk categories. But none of these frameworks fully contemplates a mobile, sensor-equipped, semi-autonomous physical system operating in shared public space.

What we are witnessing, then, is not the failure of robotics. It is the familiar gap between capability and civic integration.

History suggests that the appropriate response is not demolition, nor panic, nor uncritical acceleration. It is deliberate layering: identifying the minimal standards, interfaces, and accountability mechanisms that allow a new system to coexist with existing urban life.

Humanoid robotics is not the first technology to challenge the assumptions embedded in the built environment. It is simply the latest. If precedent holds, the path forward will involve standardization, retrofitting, and norm formation — not the abandonment of the city as we know it.

A Minimum Viable Robot-Ready City

Every transformative system that entered urban life required a small number of foundational layers. Electricity required standardized connectors and safety codes. Automobiles required licensing, registration, and traffic signals. Sanitation required coordinated sewer systems. The internet required authentication protocols and spectrum allocation. These were not aesthetic choices. They were enabling conditions.

Humanoid robotics requires its own set of enabling layers — and each of them can be implemented incrementally.

1. The Energy Layer

Before questions of intelligence or autonomy arise, there is a more basic requirement: robots must be able to remain upright and operational without becoming hazards.

A robot that loses power in a public environment does not simply inconvenience its owner; it can become a physical obstruction or a liability. Reliable energy access is therefore not a convenience feature but a safety consideration.

A minimum retrofit would include standardized charging connectors, clearly designated docking alcoves in institutional settings, and fire-code integration for high-density battery charging.

What This Looks Like in Practice

Standardized Charge Ports
A uniform connector standard across manufacturers — similar to USB-C or EV charging standards — allowing robots to plug into certified public docking points. Without standardization, infrastructure cannot scale.

Docking Alcoves in Public Buildings
Universities, hospitals, transit hubs, and municipal buildings could include recessed wall bays where robots can safely stand and charge without blocking pedestrian flow — much like bicycle parking areas.

Emergency Low-Power Safe Zones
Transit stations or high-density areas could designate small “robot recovery” zones where low-battery systems automatically shift into safe-mode and await retrieval or recharge.

Fire-Code Integration
Battery charging stations would need to comply with existing fire safety standards, including ventilation, spacing, and thermal monitoring — similar to current regulations governing EV chargers.

Energy is not speculative infrastructure. It is a predictable extension of existing electrical planning.

2. The Identity Layer

Mobility technologies become governable when they become legible.

A robot operating in public space should possess a cryptographically secure identity linked to verifiable credentials: manufacturer compliance, insurance status, operational tier, and authorized capabilities.

What This Looks Like in Practice

Embedded Secure Identity Chip
Each humanoid would include a tamper-resistant hardware identity module, issuing a unique, verifiable digital credential.

Visible Machine Identifier
A small, clearly visible plate or display indicating registration status — not unlike a vehicle plate — reassuring the public that the machine is not anonymous.

Scannable Credential Access
Authorities or authorized institutions could scan an NFC/QR interface to verify compliance credentials without accessing private behavioral data.

Insurance & Operational Tier Registry
A centralized or nationally coordinated registry linking robot identity to operator responsibility and insurance coverage.

Legibility does not imply personhood. It implies accountability.
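A minimal sketch of what reactive credential verification might look like, assuming a hypothetical credential layout; Ed25519 signatures stand in here for whatever scheme a real registry would mandate, and every identifier is invented.

```python
# Registry issues a signed credential; an inspector later verifies it
# against the registry's public key without reading behavioral data.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registry side: issue a signed credential for one robot.
registry_key = Ed25519PrivateKey.generate()
credential = json.dumps({
    "robot_id": "AT-VIE-000123",          # illustrative identifier
    "operator": "Example Facility GmbH",  # responsible legal operator
    "tier": "semi-public",
    "insurance_valid_until": "2026-01-01",
}, sort_keys=True).encode("utf-8")
signature = registry_key.sign(credential)

# Inspector side: verify the credential with the registry's public key.
public_key = registry_key.public_key()
try:
    public_key.verify(signature, credential)
    print("Credential verified; tier:", json.loads(credential)["tier"])
except InvalidSignature:
    print("Credential invalid: deny public-tier operation")
```

The design choice mirrors vehicle plates: what is checked is the binding between machine, operator, and insurance status, not the machine's internal logs.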

3. The Access Layer

Cities already regulate contextual entry constantly. Humanoid robotics requires a comparable calibration layer.

What This Looks Like in Practice

Public-Mode Activation
Upon entering certain zones (transit, schools, government buildings), robots automatically shift into a restricted operating mode: lower speed, limited arm articulation, reduced data capture.

NFC/QR Checkpoints
Transit turnstiles or building entrances could require a simple credential check before allowing entry — verifying operational tier and insurance status.

Geo-Fenced Speed Limits
Software-based speed ceilings in dense pedestrian areas, enforced automatically by location data.

Size & Capacity Classification
Large humanoids may be restricted from narrow interior spaces; smaller systems may receive broader access tiers.

This layer allows cities to adjust robot behavior by context rather than imposing blanket bans.
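A sketch of how that contextual calibration might be expressed in software; the zone classes and constraint values below are invented placeholders, not proposed standards.

```python
# Illustrative constraint profiles per zone class. Unknown zones fall
# back to the strictest profile, a conservative default.
ZONE_PROFILES = {
    "residential-sidewalk": {"max_speed_mps": 1.5, "arm_range": "full"},
    "transit-hub":          {"max_speed_mps": 0.8, "arm_range": "restricted"},
    "school":               {"max_speed_mps": 0.5, "arm_range": "locked"},
}

def constraints_for(zone: str) -> dict:
    """Look up operating constraints for a zone classification."""
    return ZONE_PROFILES.get(zone, {"max_speed_mps": 0.5, "arm_range": "locked"})

print(constraints_for("transit-hub"))
```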

4. The Authority Layer

For legitimacy to hold, public authorities must be able to identify and verify a robot encountered in shared space.

What This Looks Like in Practice

Police Scanning Devices
Handheld scanners capable of verifying robot identity, owner registration, and insurance status — without accessing internal logs unless legally authorized.

Safe-Mode Override Capability
In emergency scenarios, authorities could trigger a certified safe shutdown mode.

Operational Tier Verification
Instant confirmation that the robot is authorized for the environment in which it is operating.

This mirrors roadside vehicle checks. It normalizes presence through oversight.

5. The Liability Layer

Markets demand clarity.

Humanoid robotics requires clear operator responsibility, insurance linkage, and incident auditability once systems move through shared civic space.

What This Looks Like in Practice

Mandatory Insurance Requirement
Public-operation tiers could require proof of insurance coverage before activation.

Incident Logging Standards
Tamper-resistant event logs recording critical operational decisions (e.g., collision events) without continuous surveillance.

Tiered Deployment Categories
Domestic-only, semi-public, and full-public classifications, each with escalating compliance requirements.

Liability clarity reduces resistance from insurers, municipalities, and businesses.

6. The Norm Layer

Twentieth-century cities assume visible agents are human. Humanoid robots complicate that assumption.

Norm formation is not trivial. It is stabilizing.

What This Looks Like in Practice

Visible Operational Status Indicator
A small external light or display indicating “Public Mode Active,” reassuring bystanders that restricted behavior protocols are engaged.

Data Collection Signaling
Clear visual indication when cameras or environmental sensors are active beyond navigation baseline.

Public Etiquette Standards
Speed limits in crowded areas, no abrupt arm movements in queues, no autonomous engagement without invitation.

Designated Pilot Zones
Early adoption corridors where the public can gradually acclimate to robot presence.

Norms do not require legislation first. But clarity accelerates comfort.

And now, the deeper point emerges:

None of these measures require tearing up cities. They require targeted standardization, modest retrofits, and coordinated governance. We have added curb cuts, bike lanes, EV chargers, fiber networks, and traffic signals without abandoning our urban cores. The same layering logic applies here.

Humanoid robotics does not demand a new civilization. It demands infrastructure maturity.

The Physical Environment: Calibration, Not Reinvention

The twentieth-century city was built for human gait, wheeled vehicles, and static appliances. Its tolerances assume biological balance, flexible ankles, peripheral vision, and the ability to improvise. Humanoid robots operate differently. They rely on calibrated joint articulation, predictable surface geometry, and sensor interpretation of terrain.

The goal is not to smooth every cobblestone street or redesign historic centers. It is to identify where small environmental inconsistencies produce disproportionate instability and to address those selectively.

The work is closer to adding curb cuts than to rebuilding Rome.

Sidewalks

Irregular sidewalks pose one of the most immediate challenges. Humans compensate subconsciously for subtle height variations; bipeds with fixed foot geometries do not.

In historic districts with cobblestones, cities could introduce narrow, level “mobility corridors” — discreet concrete or stone strips embedded within existing pavement — allowing robots (and, incidentally, wheelchairs, strollers, and mobility aids) a predictable path without altering the visual character of the street.

New sidewalk installations, particularly near transit hubs and civic buildings, could adopt slightly tighter tolerances for surface variation. The city remains itself. The walking surface becomes marginally more legible.

Curb Transitions

Drop-offs of even a few centimeters can destabilize robotic gait if not properly detected. Standardizing curb heights and ensuring smoother ramp transitions at crossings would significantly reduce fall risk.

Where curbs cannot be cut or ramped — due to heritage preservation or structural constraints — an alternative approach becomes possible. Passive NFC or RFID markers could be embedded within curb structures, paired with subtle paint indicators signifying “non-rampable curb.”

Robots equipped with foot-level scanners could detect these embedded signals before committing weight, receiving structured information about height differential and angle. Rather than relying solely on visual depth estimation, the system would access calibrated environmental data.

Cities already embed magnetic loops for traffic lights and RFID systems for transit. Extending that logic to curb geometry is an evolution, not a revolution.
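As an illustration, a foot-level check might prefer calibrated tag data over visual estimation along the following lines; the payload fields and the gait limit are hypothetical.

```python
# Sketch of a foot-level curb check, assuming a hypothetical tag payload
# of (height differential in cm, ramp angle in degrees).
from typing import Optional

MAX_STEP_DOWN_CM = 12.0  # invented gait limit, for illustration only

def can_commit_weight(tag_payload: Optional[dict], visual_estimate_cm: float) -> bool:
    """Prefer calibrated curb-tag data over visual depth estimation."""
    height = tag_payload["height_cm"] if tag_payload else visual_estimate_cm
    return height <= MAX_STEP_DOWN_CM

# A marked curb reports 9 cm even though the visual estimate said 15 cm.
print(can_commit_weight({"height_cm": 9.0, "angle_deg": 0.0}, 15.0))  # True
```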

Doorways & Entry Points

Public doorways are remarkably inconsistent. Threshold lips, uneven tile transitions, heavy manual doors, and mirrored glass create minor inconveniences for humans but significant uncertainty for robots.

Small adjustments could include:

  • Threshold leveling standards in new construction and renovations, particularly in government buildings and transit facilities.

  • Embedded entry markers within door frames indicating door width, threshold height, and automatic/manual status.

  • API-linked automatic door integration, allowing credentialed robots operating in public mode to trigger motion sensors without physical contact.

These measures reduce guesswork. They do not alter architectural identity.

Stairways & Vertical Transitions

Stairs remain one of the most destabilizing elements for robotic locomotion.

Enhancements might include:

  • High-contrast, machine-readable stair-edge markers detectable by depth sensors.

  • Passive tags embedded in the first and last steps, signaling total step count and height differential.

  • Digital elevation mapping for public buildings, accessible via municipal APIs.

Humans read stairs instinctively. Robots benefit from redundancy.

Floor Material Transitions

Shifts from marble to tile to carpet affect traction and gait calibration.

Public construction standards could incorporate:

  • Material classification strips at transition points, broadcasting friction data.

  • Standardized friction ratings that are both human- and machine-readable.

This benefits elderly pedestrians and mobility devices as much as robots.

Urban Furniture & Smart Bollards

Benches, planters, café tables, sculptures, and storefront displays give cities character. They also create dense micro-obstacle fields.

In high-traffic areas, modest clearance standards — similar to fire egress regulations — could define minimal navigation corridors.

More interestingly, fixed street furniture such as bollards could incorporate passive identification tags. “Smart bollards” would not surveil space; they would broadcast their presence and classification to nearby robots.

In proximity to sensitive environments — outdoor dining areas, museum entrances, fragile storefronts — these markers could trigger automatic behavioral modulation:

  • Reduced walking speed

  • Restricted arm articulation range

  • Narrowed turning radius

The robot does not need to be constantly constrained. It responds contextually to embedded environmental signals.

This is not about control. It is about calibration.

Crosswalk & Signal Integration

Intersections are structured, timed environments. Traffic lights already operate through electronic control systems.

Broadcasting pedestrian-phase countdown data in machine-readable form would allow robots to synchronize crossing more precisely. Passive curb markers could confirm alignment with designated crosswalks before stepping into the street.

This extends infrastructure that already exists.
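The underlying decision is simple arithmetic. A sketch with illustrative numbers:

```python
# Decide whether to start crossing given a broadcast pedestrian-phase
# countdown; the safety margin and all values are illustrative.
def safe_to_cross(countdown_s: float, crossing_width_m: float,
                  speed_mps: float, margin_s: float = 3.0) -> bool:
    """Cross only if the robot clears the street before the phase ends,
    with a safety margin."""
    return crossing_width_m / speed_mps + margin_s <= countdown_s

print(safe_to_cross(countdown_s=15.0, crossing_width_m=8.0, speed_mps=0.8))
# 8.0 / 0.8 + 3.0 = 13.0 s <= 15.0 s -> True
```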

None of these adjustments require a sterile, sensor-saturated metropolis. They require identifying high-friction points — curbs, thresholds, stair edges, narrow passages — and making them slightly more predictable.

Cities have always evolved in response to the bodies that move through them. The introduction of curb cuts did not erase historic streetscapes; it expanded who could navigate them. Bike lanes did not dismantle cities; they layered new movement patterns onto old roads. EV chargers did not redefine parking lots; they augmented them.

Humanoid robotics does not require reinvention.

It requires calibration.

The Digital Overlay: Software as Infrastructure

If the physical environment must be calibrated, the digital environment must be coordinated.

Cities already operate on layered digital systems. Transit networks use contactless authentication. Buildings rely on access control badges. Traffic signals are centrally managed. Utility meters transmit data wirelessly. Fiber, 5G, and municipal APIs quietly structure daily life.

Integrating humanoid robotics into urban space does not require inventing digital infrastructure from scratch. It requires extending existing systems to recognize a new category of mobile, authenticated device.

Where the physical layer reduces instability, the digital layer reduces ambiguity.

Credential-Scanning Infrastructure

Cities regulate entry constantly. Turnstiles scan transit cards. Office buildings verify badges. Hotels authenticate room keys. Shared space is already mediated through credentials.

Humanoid robots operating in public environments could interact with similar systems.

Transit Gate Integration
At metro entrances or bus boarding points, robots could tap or scan to verify operational tier and insurance status before entering the system. This would not replace ticketing; it would confirm authorization for public operation.

Building Entry Authentication
Government buildings, universities, and hospitals could require credential validation before allowing robot access, just as they require employee badges.

Event-Specific Authorization
Temporary permissions could be granted for conferences, exhibitions, or service contracts, expiring automatically after defined time periods.

The goal is not surveillance. It is contextual verification.

Public-Mode Enforcement Protocols

Not every space requires the same behavior.

A robot navigating a quiet residential sidewalk does not need the same constraints as one entering a crowded train platform. Digital protocols allow environment-based behavior modulation.

Automatic Public-Mode Activation
Upon entering designated high-density zones — transit hubs, schools, stadiums — robots would automatically switch to a restricted mode:

  • Reduced speed

  • Limited arm articulation

  • Suppressed non-essential movements

  • Restricted sensor data retention

This activation could be triggered via geo-fencing, NFC checkpoints, or building-level broadcast signals.

Speed Ceilings by Zone
Municipal geospatial data could define maximum operational speeds in certain pedestrian corridors, enforced at the software level.

Sensitive Environment Flags
Museums, medical facilities, or government buildings could broadcast “restricted interaction” signals, limiting autonomous engagement behaviors.

This is analogous to speed limits enforced through traffic design and signage — except implemented digitally.
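A minimal sketch of geo-fenced activation using a simple radius check; real deployments would presumably rely on municipal polygon data, and the coordinates below are illustrative.

```python
# Geo-fenced public-mode activation via a radius check around known
# high-density zones; coordinates and radii are invented examples.
import math

HIGH_DENSITY_ZONES = [
    # (latitude, longitude, radius in meters); illustrative values
    (48.1865, 16.3767, 150.0),
]

def within_zone(lat: float, lon: float) -> bool:
    for zlat, zlon, radius_m in HIGH_DENSITY_ZONES:
        # Equirectangular approximation; adequate at city scale.
        dx = (lon - zlon) * 111_320 * math.cos(math.radians(zlat))
        dy = (lat - zlat) * 111_320
        if math.hypot(dx, dy) <= radius_m:
            return True
    return False

def operating_mode(lat: float, lon: float) -> str:
    return "public-mode" if within_zone(lat, lon) else "standard"

print(operating_mode(48.1866, 16.3768))  # near zone center -> "public-mode"
```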

Geo-Fencing & Spatial Classification

Cities already maintain detailed geospatial data. Extending this to robotics requires structured classification rather than blanket prohibition.

Operational Tier Mapping
Different zones could be classified as:

  • Domestic-only

  • Semi-public

  • Full-public

Robots certified for domestic use would not automatically gain access to high-density urban corridors.

Temporary Restriction Zones
Construction sites, protests, emergency response areas, or accident scenes could broadcast temporary exclusion perimeters.

Dynamic Crowd Density Feedback
In future iterations, anonymized crowd-density data could signal robots to slow or reroute without collecting personal information.

Geo-fencing is not exotic. It is already used in ride-sharing fleets and delivery robotics.

Municipal APIs & Structured Environmental Data

The most transformative shift would be the publication of structured civic data in machine-readable formats.

Elevation & Accessibility APIs
Cities could publish standardized data on curb heights, ramp availability, stair geometry, and elevator locations.

Building Access Metadata
Public buildings could provide digital descriptors: doorway width, lift capacity, interior layout class.

Signal Timing Broadcast
Traffic systems could broadcast pedestrian-phase timing, allowing robots to synchronize crossings without guesswork.

These datasets would not be created solely for robots. They would enhance navigation tools, accessibility planning, and urban analytics for humans as well.
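A sketch of what consuming such a dataset might look like on the robot side; the endpoint and field names are invented for illustration.

```python
# Parse machine-readable curb data from a hypothetical municipal API
# into typed records a navigation stack can consume.
from dataclasses import dataclass

@dataclass
class CurbRecord:
    crossing_id: str
    height_cm: float
    rampable: bool

def parse_curb_records(payload: list[dict]) -> list[CurbRecord]:
    """Convert raw municipal data into typed records."""
    return [
        CurbRecord(r["crossing_id"], r["height_cm"], r["rampable"])
        for r in payload
    ]

# In practice this payload would come from something like
# GET https://city.example/api/v1/curbs?bbox=... (hypothetical endpoint).
sample = [{"crossing_id": "X-204", "height_cm": 3.0, "rampable": True}]
print(parse_curb_records(sample))
```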

Data Minimization & Privacy Protocols

Digital integration must not default to over-collection.

A robot-ready city would need clear guidance on:

  • Baseline navigation data versus discretionary capture

  • Automatic data deletion schedules in public mode

  • Visible signaling when extended recording is active

  • Restricted use of biometric processing in shared space

Public trust will depend not on technical capability, but on restraint.

Interoperability Standards

Perhaps most importantly, none of this functions without standardization.

Robots from different manufacturers must:

  • Recognize shared credential formats

  • Interpret municipal signals consistently

  • Respond uniformly to public-mode triggers

This requires coordination across manufacturers, cities, insurers, and regulators — similar to how vehicle safety standards or internet protocols were harmonized.

The digital overlay is therefore less about embedding intelligence into sidewalks and more about ensuring that robots can understand the digital language cities already speak.

The physical layer makes space more predictable.
The digital layer makes behavior more predictable.

Together, they move humanoid robotics from improvisation to integration.

And like every infrastructure shift before it, this one will not arrive as a single dramatic overhaul. It will appear gradually: an API here, a standardized credential there, a public-mode signal in a transit station. Small additions, layered onto systems that already exist.

Cities are already partially digitized. The question is whether that digitization will be extended to include embodied AI in a coherent way — or left fragmented across regulatory silos.

The Regulatory & Liability Layer: Making Responsibility Visible

Physical calibration reduces instability. Digital coordination reduces ambiguity. But neither is sufficient without institutional clarity.

Mobility technologies do not scale in shared space unless responsibility is assignable.

Automobiles did not become ordinary because engines improved. They became ordinary when registration systems linked vehicles to owners, insurance markets quantified risk, traffic courts adjudicated disputes, and police could verify compliance at the roadside. Legibility and liability transformed novelty into governable infrastructure.

Humanoid robotics will require a comparable maturation.

At present, regulatory responsibility is fragmented. Mechanical risk is addressed through product safety law. Data practices fall under data protection frameworks. Emerging AI regulation categorizes algorithmic risk. Yet once a robot leaves private space and enters shared civic environments, these silos intersect. The question is no longer only whether the product was safely manufactured, but who bears responsibility for its operation in real time.

The regulatory layer does not need to invent entirely new institutions. It needs to adapt existing ones.

Operator Responsibility

In public operation, a humanoid robot should not be an autonomous legal mystery. A clearly identified legal operator — whether an individual, a company, or an institution — must bear primary responsibility for its deployment.

This can be implemented through:

  • Mandatory operator registration for public-tier use

  • Clear distinction between manufacturer liability and operational liability

  • Tiered certification categories (domestic-only, semi-public, full-public)

This mirrors existing distinctions between vehicle manufacturers and licensed drivers. The machine is engineered by one party; it is operated by another.

Insurance Integration

Insurance markets are often the quiet architects of infrastructure stability.

Public operation tiers could require proof of liability insurance prior to activation. Insurers, in turn, would demand:

  • Compliance verification

  • Maintenance records

  • Software update documentation

  • Incident reporting standards

Over time, risk categories would emerge organically. Premium differentiation would incentivize safer deployment practices without requiring constant legislative revision.

This is not hypothetical. Insurance already shapes everything from vehicle safety to building codes.

Incident Logging & Auditability

Once systems operate in shared space, disputes are inevitable. The goal is not to eliminate incidents, but to ensure they can be adjudicated fairly.

A minimum standard could include:

  • Tamper-resistant event logs

  • Time-stamped collision records

  • Operational mode records (e.g., public mode active)

  • Software version traceability

These logs would not constitute continuous surveillance. They would function similarly to aviation black boxes or vehicle event data recorders: activated or reviewed only in the event of a dispute or accident.

Without auditability, liability becomes speculative.
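One well-established way to make a log tamper-evident without continuous surveillance is hash chaining, where each entry commits to the hash of its predecessor, so any retroactive edit breaks the chain. A minimal sketch, with invented event types:

```python
# Tamper-evident event log: each entry stores the hash of the previous
# entry, so altering any past record invalidates everything after it.
import hashlib
import json
import time

class EventLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> None:
        entry = {"ts": time.time(), "event": event, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = EventLog()
log.record({"type": "mode_change", "mode": "public"})
log.record({"type": "collision", "severity": "minor"})
print(log.verify())  # True; editing any stored entry would return False
```

This is the same principle behind aviation black boxes and vehicle event data recorders: the record is inert until a dispute makes it relevant, but it cannot be quietly rewritten.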

Police & Inspector Verification Authority

For civic legitimacy to hold, authorities must be able to verify compliance when necessary.

This could include:

  • Handheld credential scanners capable of confirming identity, operator registration, insurance status, and operational tier

  • Safe-mode activation authority in emergency situations

  • Environmental compliance checks (e.g., verifying public-mode constraints are engaged)

Crucially, this does not imply broad surveillance powers. It implies reactive verification — similar to a roadside registration check.

The presence of this authority stabilizes public perception. A system that can be verified is less likely to be feared as uncontrolled.

Operational Tiers & Contextual Classification

Not all robots require the same regulatory intensity.

A clear classification framework could distinguish:

  • Domestic Tier — operation confined to private property

  • Semi-Public Tier — limited operation in controlled environments (corporate campuses, institutional grounds)

  • Full-Public Tier — operation in open civic space and transit systems

Each tier would carry escalating compliance requirements for identity registration, insurance coverage, logging standards, and access permissions.

This reduces regulatory overreach. A robot folding laundry at home does not require the same governance as one navigating a crowded tram.

Administrative Coordination

Perhaps the most practical challenge is institutional coordination.

Responsibility may be distributed across:

  • Product safety authorities

  • Data protection regulators

  • Municipal transport agencies

  • Insurance supervisory bodies

  • Police departments

The regulatory retrofit lies less in creating a new ministry than in aligning existing ones through interoperable standards and shared registries.

This is familiar territory. Vehicle regulation already spans manufacturing standards, insurance requirements, driver licensing authorities, and traffic enforcement agencies. The architecture exists; the object category changes.

The regulatory layer is often perceived as restrictive. Historically, it has functioned as enabling.

Without licensing, registration, and insurance, automobiles might have remained niche curiosities. Without standardized electrical codes, electricity might have remained distrusted. Governance does not merely constrain technology. It often makes adoption politically and socially possible.

Humanoid robotics will not normalize through technical sophistication alone. It will normalize when responsibility becomes visible, risk becomes quantifiable, and compliance becomes verifiable.

In that sense, the regulatory layer is not the final obstacle. It is the bridge between capability and legitimacy.

The Social & Norm Layer: Making Presence Legible and Humane

Cities are not only physical and regulatory systems. They are psychological environments.

Twentieth-century urban space operates on a foundational assumption: visible, moving agents in public are human. Even when we encounter delivery drones, autonomous vehicles, or security cameras, their presence is usually peripheral. A humanoid robot standing at a tram stop or waiting in a queue challenges a deeply embedded social expectation.

The question, therefore, is not only how to regulate robots, but how to make their presence culturally intelligible.

Norm formation is not cosmetic. It is infrastructural.

Visible Operational Status

One of the simplest ways to reduce anxiety is to make state visible.

Robots operating in public environments could include a clear external indicator — a small display panel or light band — signaling “Public Mode Active.” This indicator would reassure bystanders that constrained behavior protocols are engaged: reduced speed, limited arm articulation, restricted data retention.

The goal is not theatrical transparency. It is legibility. When people can see that a system is operating under constraints, uncertainty decreases.

Data Signaling & Privacy Awareness

Public anxiety around embodied AI often centers on data collection. A camera mounted on a static traffic pole is abstract. A camera on a walking humanoid feels personal.

Norm stabilization may therefore require:

  • Clear visual indicators when extended recording is active

  • Automatic deletion policies in public mode

  • Default navigation-only capture outside authorized contexts

The social contract depends less on technical capability than on restraint.

Behavioral Etiquette Standards

Early automobiles required behavioral norms before they required highways. Drivers learned not to accelerate through crowds. Pedestrians learned to interpret signals. Mobile phones introduced etiquette about where and when to speak.

Humanoid robotics will similarly require behavioral expectations that are culturally internalized.

Examples might include:

  • No autonomous initiation of conversation in public without invitation

  • Reduced arm articulation in dense pedestrian areas

  • Queue etiquette alignment

  • Maintaining predictable walking trajectories

These norms may not begin as law. They may begin as design defaults.

Emergency Responsiveness & Civic Responsibility

Perhaps the most significant opportunity for norm stabilization lies in aligning robots visibly with human safety.

If humanoids are to operate in shared civic environments, they should not merely avoid harm. They should be prepared to respond to it.

A minimum social expectation for public-tier robots could include:

  • Basic first aid instructional knowledge, consistent with nationally recognized guidelines

  • Automatic emergency service contact capability, including verified location transmission

  • Live feed relay to emergency responders when legally authorized

  • Audible and visual guidance to nearby humans on how to administer first aid until help arrives

This does not require robots to replace trained professionals. It requires them to function as stabilizing intermediaries.

In the event of a medical emergency on public transit, for example, a robot could immediately contact emergency services, transmit precise location data, activate live video if authorized, and guide bystanders through CPR instructions while responders are en route.

Such capabilities shift perception. A machine that is visibly prepared to preserve human life is understood differently than one perceived merely as a roaming sensor array.

Emergency alignment reinforces the idea that embodied AI is not an intruder in civic life, but a participant in it.

Designated Pilot Zones & Gradual Familiarization

Cultural adaptation benefits from structured exposure.

Cities might designate early “robot pilot corridors” — specific transit lines, campuses, or pedestrian districts — where public interaction norms can form gradually. Clear signage, informational materials, and community engagement initiatives could accompany these deployments.

This mirrors early elevator operators who reassured passengers in the transition from manually operated lifts to automated systems. Presence becomes ordinary through repetition.

Visible Identity Without Anthropomorphism

There is also a delicate balance to maintain. Humanoids may resemble human form, but governance clarity requires avoiding confusion.

Visible identifiers — registration plates, operator affiliation markers, or digital ID displays — reinforce that these are accountable systems, not autonomous citizens. The city remains a human-centered space, even as machines move within it.

Norm stability depends on conceptual clarity.

The social layer is the least technical and the most decisive.

Electricity normalized when people trusted their homes would not burn. Automobiles normalized when traffic became predictable. Mobile phones normalized when etiquette settled. Humanoid robotics will normalize when presence feels bounded, legible, and aligned with human well-being.

Infrastructure makes operation possible. Norms make it livable.

Layering

When new technologies arrive, we are often tempted to frame them as ruptures. We imagine replacement rather than adjustment, disruption rather than calibration. Yet cities rarely transform through sudden erasure. They evolve through layering.

Electric grids were threaded through medieval streets. Sewer systems were tunneled beneath historic neighborhoods. Traffic signals were added to intersections that once carried horses. Fiber cables were laid beneath stone plazas. None of these changes required abandoning the city. They required coordination, standards, and patience.

Humanoid robotics belongs in this lineage.

The machines are advancing. That fact alone neither guarantees their integration nor justifies it. What determines whether they become stable participants in public life is the surrounding civic infrastructure — physical tolerances, digital coordination, regulatory clarity, and social norms.

Without those layers, robots will feel awkward or intrusive. With them, they may become ordinary.

The work ahead is not speculative futurism. It is infrastructural maturity. It is the incremental extension of systems we already know how to build: standardized connectors, verifiable credentials, insurance frameworks, calibrated sidewalks, contextual access protocols, and visible operational constraints.

We do not need a pristine, purpose-built robot metropolis rising from empty land. We need to extend the logic that has governed every major technological transition of the past two centuries.

Cities have always adapted to the bodies that move through them. If embodied AI becomes one more category of moving body, the task is not reinvention. It is responsibility.

Layer by layer, calibration by calibration, what now feels novel may one day fade into the background of urban life — as unremarkable as a traffic light, as invisible as a sewer line, as taken for granted as a wall socket.

The question is not whether we can build such a city.

The question is whether we will build it deliberately.

