I recently watched a video of a Tesla in the United States navigating a parking lot by itself while its owner sat in the driver’s seat. Other than pushing a button, the owner did nothing. The car eased between parking markers on its own. It looked like magic, until you imagine it happening here.
If that car hit another vehicle while it parked, who would we blame in the Philippines? We would blame the person in the seat. Our laws give us no other choice. We would deem that driver at fault even if he never touched the steering wheel or the pedals.
Carmakers now sell not just metal and batteries, but software-defined driving. They market self-parking, highway automation, and hands-off capability, even if most systems on the road today still require driver supervision. Even so, we are clearly moving from driver assistance toward conditional automation.
Philippine traffic law assumes there is a human driver at all times. Republic Act No. 4136, the Land Transportation and Traffic Code, defines a driver as every licensed operator of a motor vehicle. Republic Act No. 10913, the Anti-Distracted Driving Act, penalizes the use of mobile communications devices and electronic entertainment or computing devices while driving, subject to limited exceptions. These statutes were written for a world where a person controls the steering wheel and pedals.
So the moment a vehicle truly drives itself, even for a short interval like parking, our enforcement choices become muddled. If the system steers and brakes, does the person still “operate” the vehicle in the sense the law means? If a crash happens, do we hold the human liable for the machine’s driving decision? If the system demands a takeover and the person ignores it, should we treat that failure as the core fault rather than the manner of driving?
The United Kingdom offers a useful approach. Its Automated Vehicles Act 2024 recognizes a “user in charge” (UiC) when an authorized self-driving feature is engaged. In that mode, the UiC is generally not held criminally liable for offenses arising from self-driving activities.
At the same time, the law assigns responsibility for the self-driving system to an Authorized Self-Driving Entity (ASDE), a regulated entity behind the feature. The UiC retains non-driving duties, including being fit and ready to take over when the vehicle issues a transition demand.
The UK also keeps an important safeguard. Even with authorized self-driving engaged, mobile phone use remains prohibited for the UiC. I agree with that. The law shifts liability for the vehicle’s manner of driving to the ASDE, but it does not excuse careless behavior inside the cabin.
Japan illustrates the same logic. Conditional automated driving has been allowed on public roads there in practice since April 2020, but drivers must take over immediately and properly when conditions require it. The system performs the driving task within defined conditions, while the driver remains the fallback.
Germany goes further. It allows driverless operation in defined operating areas and requires “technical supervision” by a human who can monitor and intervene even from outside the vehicle. Germany assigns accountability to a role that fits the technology.
These changes reflect a market and governance reality. Carmakers cannot credibly sell self-driving if the person inside remains legally treated as the driver for every consequence of the machine’s decisions. Regulators, on the other hand, cannot accept a future where nobody carries responsibility. So these jurisdictions defined roles and tied them to strict conditions.
The next step, naturally, is insurance. After an accident, the question that matters most is not philosophical liability but who pays, and how fast. We need a process that compensates victims quickly, then determines liability, whether human or machine.
We should not spend years proving whether a sensor failed, a software update misfired, or a human ignored a takeover request. Yet that is what will happen if we treat automated driving as a simple negligence case against the person in the driver’s seat.
Philippine road crash liability remains largely fault-based. But our compulsory motor vehicle liability insurance already includes a limited no-fault indemnity for death or bodily injury, payable up to set limits without proof of fault or negligence. For bigger claims, disputes still tend to turn on proving who was legally liable.
At the same time, compulsory liability insurance does not yet speak clearly to automated driving and the possibility of product or software failure causing an accident. When an automated system drives, a crash can result from defective code, training data, mapping, calibration, or cybersecurity vulnerabilities.
Pay-first, argue-later should remain the default posture even for incidents involving authorized self-driving. The UK offers an instructive model. When an insured automated vehicle causes an accident while it is “driving itself,” the insurer pays in the first instance, then uses mechanisms that preserve recovery rights and allocate responsibility after compensation.
Evidence sits at the heart of all of this. Every automated driving dispute begins with a question: who controlled the vehicle at the time, the human or the system? We can investigate and question human drivers. But we also need a credible way to examine automated driving systems, and to assign responsibility when the system, not the human, drove.
We should start by clearly distinguishing driver assistance from conditional automation. If a driving system merely assists, the human remains the driver under existing rules. If a system performs the dynamic driving task within a defined operational design domain, the law should recognize that mode and assign a status like UiC.
A competent authority should also approve which features qualify as self-driving, under which conditions, and on which roads. We can begin with low-speed applications like parking in controlled environments, then widen coverage as standards and infrastructure mature.
A conditional system always ends with a question: can the human take over when the system asks? We should specify what counts as a valid transition demand, what minimum warning time applies, and what the UiC must do. If the person fails to respond and an accident follows, liability should attach to that failure.
More important, we need reliable event data. Mode engagement, alerts, and takeover demands must be recorded in a form insurers, regulators, and courts can access under clear privacy safeguards. Without this, every serious claim becomes a guessing game. Automated cars will need black boxes or event data recorders that can establish, at a minimum, whether the system or the human controlled the vehicle during the accident.
Also, no-fault indemnity under existing compulsory insurance should extend to authorized self-driving incidents so victims get compensated quickly. Then insurers should be able to recover from the responsible entity when the system, not the human, caused the loss.
The easiest policy failure is the lazy one. We either ban automated driving out of fear, or we allow it to spread quietly and then blame the human occupant for everything when something goes wrong. Both paths will create a messy market and invite unjust outcomes when accidents happen.
I would rather we do the harder work. We should now calibrate regulation and authorize what we believe can safely operate locally, define the roles, assign accountability to the entity behind the automated system when the system drives, and make sure victims get paid quickly when accidents happen.
Marvin Tort is a former managing editor of BusinessWorld, and a former chairman of the Philippine Press Council.
matort@yahoo.com