November 1, 2024
When robots go haywire, who picks up the tab?
The rapid integration of robots and AI into various industries is no longer a vision of the future. It's happening right now! From autonomous vehicles navigating our streets to AI agents managing our schedules, robots are becoming indispensable. But one question looms large: what about robot insurance?
Why do robots need insurance?
Like any other asset or employee, robots can cause damage or be damaged themselves. They can malfunction, be hacked, or make decisions that lead to unintended consequences. For instance, a self-driving Uber vehicle struck and killed a pedestrian in Arizona in 2018. This tragic incident highlighted the complex liability issues surrounding autonomous systems. Who is at fault — the operator, the manufacturer, the software developer, or the AI itself?
As robots become more autonomous, the traditional lines of responsibility blur. Insurance is needed not just to protect the owners and operators of robots, but also to provide a framework for accountability and compensation when things go wrong.
The most important point is the simplest – robots are new. They won't simply be machines that help us lift; they promise to be problem solvers. But they also promise, implicitly, to be problem creators. Like the classic Saturday Night Live robot insurance skit, or the endless repetition of jokes about Skynet, no one will want powerful, unpredictable machines without some form of protection!
In other words, if you're a company trying to sell robots, you almost certainly need a way to provide robot insurance.
What's the problem with current insurance products?
A skeptic would say that robots are no different from the mid-20th century advent of "labor-saving devices" like washing machines, dishwashers, or food processors. After all, the first food processors didn't have the safety measures that a modern Cuisinart does – I'm always a little impressed that the machine simply can't turn on until the top is properly in place.
Any machine that combines physical and computational power will require underwriting of operational malfunctions and cybersecurity threats, and its design and manufacture must meet industry standards and regulations. Most importantly, traditional insurance also offers financial protection against damage to third parties.
The major traditional liability insurance products (whether home, auto, or commercial) are based on human error and predictable risk patterns. In insurance, we understand that these liability policies cover a combination of physical and behavioral attributes – a house with a pool will be more expensive to insure than one without, and a ski resort with a history of customer injuries around its ski lifts will find insurance more expensive than one with a spotless safety record.
Robots operating on AI algorithms with decision-making autonomy will now mix all these risk categories together in confusing and hard-to-predict ways. Manufacturing and operational malfunctions will now require us to expand behavioral analysis to include robot behavior; cybersecurity risks could now cause malfunctions – and the bad outcomes are no longer limited to the simple logic of button pushing.
For our society to make use of autonomous and semi-autonomous robots, we must simultaneously parse and allocate the downsides and responsibilities. Without these commercial arrangements, the uncertainty will be too much to bear and the logical decision will be to simply avoid the risk.
How do robots change the risks?
If you haven't yet, watch Figure.ai's YouTube videos – the robot puts away dishes, identifies and hands the user an apple, and has integrated OpenAI to handle voice prompts.
Robot insurance will most likely start from the frameworks that we’ve already developed for industrial robots, like injury to workers or product damage, but the surface area for risk is clearly already much wider than the mental model we might have of a controlled factory floor. In 2019, a Swiss Post drone crashed near a group of children due to a malfunction. Insurance for such drones should cover potential injuries and property damage.
For example, cyber insurance might cover data breaches but not physical damage caused by a hacked robot. In 2017, the FDA recalled St. Jude Medical's cardiac devices due to vulnerabilities that could allow hackers to remotely control them. Cyber insurance doesn't typically cover damages resulting from the actions or breakdowns of hacked devices.
Similarly, property insurance might cover equipment failure due to mechanical issues but not damages resulting from a robot's decision-making process. This gap leaves operators and manufacturers exposed to significant financial risks.
Creators and buyers of these robots may also seek to cover more mundane commercial concerns, like warranties to cover repair costs or business interruption if a robot plays a critical role in a commercial process. Operators and owners will also need new types of professional liability coverage to handle the added complexity of AI-mediated instructions.
Most regulatory approaches to AI will fall into a co-pilot framework, where the expectation is that humans will supervise AI, with humans continuing to bear the personal and professional obligations of any action performed. But it's not clear that robots acting in the world, geographically and logically removed from their initial instructions, can operate according to this logic of human responsibility.
What challenges do insurers face to create Robot Insurance?
At the philosophical level, insurance exists to put a financial price on the priceless. From theft to car accidents, hurricanes, or death, what we find unthinkable in our own lives is processed and transformed into the bloodless language of incidence, severity, and premiums.
To perform these transformations, the insurance industry needs data, over time, preferably with stable conditions. The novelty of robots means there's insufficient historical data to predict future losses accurately. Without a robust data set, actuarial models struggle to estimate risk. Robots and AI technologies evolve quickly, rendering existing data obsolete. An actuarial model based on last year's technology may not apply today.
Even though AI systems aren't purely mechanistic, there have been leaps in research on AI explainability. Moreover, the sheer volume of data coming from these systems should help tech-forward insurers build risk models more quickly – while also, of course, helping creators and operators rapidly diagnose and fix missteps.
Take the drone example above. If delivery drones provide data on flight paths, weather conditions, and mechanical status, insurers can more accurately price premiums based on actual risk exposure. Similar steps should allow insurers to analyze operational data to determine the likelihood of malfunctions or accidents.
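To make that concrete, here is a minimal sketch of how operational data could flow into a usage-based premium. The feature names and loading factors below are made-up assumptions for illustration, not an actual pricing model:

```python
from dataclasses import dataclass

# Hypothetical operational features a delivery-drone fleet might report to an insurer.
@dataclass
class FlightSummary:
    flight_hours: float             # total hours flown in the period
    pct_flights_in_high_wind: float # share of flights in adverse weather (0 to 1)
    maintenance_alerts: int         # unresolved mechanical warnings
    near_miss_events: int           # logged proximity or geofence violations

def usage_based_premium(base_annual_premium: float, s: FlightSummary) -> float:
    """Scale a base premium by simple, illustrative risk loadings."""
    exposure  = 1.0 + 0.002 * s.flight_hours           # more flying, more exposure
    weather   = 1.0 + 0.5 * s.pct_flights_in_high_wind # weather-related loading
    condition = 1.0 + 0.10 * s.maintenance_alerts      # deferred-maintenance loading
    behavior  = 1.0 + 0.25 * s.near_miss_events        # observed risky behavior
    return base_annual_premium * exposure * weather * condition * behavior

if __name__ == "__main__":
    summary = FlightSummary(flight_hours=400, pct_flights_in_high_wind=0.08,
                            maintenance_alerts=1, near_miss_events=2)
    print(round(usage_based_premium(2_000.0, summary), 2))
```

The point is not the specific multipliers, but that the same telemetry a fleet already collects can update the price of risk as the fleet actually behaves.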
New tech-driven claims processing will also have much richer data than before – instead of photos of car damage and analysis of tire tracks, data logs of telemetry, onboard cameras, and “decision events” must be analyzed to process claims and provide incontrovertible evidence of what occurred. In autonomous vehicle accidents, like the 2016 Tesla crash where a driver was killed while using Autopilot, data played a crucial role in understanding the circumstances.
Utilizing robot-collected data enhances transparency and accuracy in insurance. However, it also raises privacy concerns. Insurers must balance data usage with ethical considerations to maintain trust.
How can insurers partner with startups to cross the chasm?
Robotic systems have the potential to create more direct connections between the creator of the systems and the insurer bearing the risk. After all, no one has more incentive to make consumers comfortable with robots than the people and companies creating them.
Deep partnerships can make sure that insurance coverage begins with risk mitigation. Continuous monitoring of a robot's performance can help prevent risk with overrides, constraints, or "kill switches". Scenario simulations and digital twins during training can be adjusted to be risk-aware, and insurers can help set these standards – perhaps borrowing from the FDA framework of testing safety separately from testing efficacy or performance.
Once in contact with the real world, these same tools could help adjust premiums dynamically, incentivizing safe operation and regular updates. Insurers must be proactive, not reactive, in their pricing strategies to accommodate the evolving nature of AI decision-making.
Moreover, the pricing of risk creates a market signal around AI safety that can naturally ripple through the value chain. If one LLM company truly has a deeper commitment to creating safe AI, insurers' pricing will naturally steer manufacturers toward the safer option, since it will cost less to insure. This can improve transparency and explainability as well, as both insurers and regulators will prefer predictable and understandable systems over black boxes with the same statistical profile.
In this context, insurance can be a useful adjunct to regulation. For example, insurers were important advocates for airbags, even before federal regulation began to require them. Insurance costs will act faster than regulation to incentivize the developers of AI models to take one more step in ensuring the right controls are in place. In an industry that is moving this quickly, it may be that insurance (not waiting for regulation) is our best hope of avoiding the Terminator.
How can robot insurance get started?
Embedded insurance programs can also help build the bridge towards larger capacity and coverage over time. Initially, limited data and experience will likely mean smaller coverage limits and large exclusions, and significant events are more likely to be handled through negligence litigation than through insurance coverage.
But we already see that innovators can fill this gap by risking their own balance sheet while still using insurance structures. For example, in our portfolio, Tint helps innovative marketplaces like Deel, Turo, and Guesty offer their customers insurance or protections around novel behaviors like international contracting, car sharing, or short-term rentals. Their clients also use a mix of their own balance sheets, captive insurance arrangements, and more traditional carrier partnerships to bring novel products to market.
Even when innovators do put their own capital at risk, structuring the protections to be "legible" to an insurer or reinsurer is critical to scale. Even if the commercial practice is to offer a "satisfaction guaranteed" protection, no insurer could backstop that guarantee without a track record of underwriting, claims, and payouts that lets it understand and price the real risk over time.
The balance sheets of startups ultimately won't be able to support the size of risks or create the diversification necessary for good outcomes. So insurers and reinsurers will need to invest in the systems – data, underwriting, claims, and partnerships – necessary to enable a robotic future. There are also established approaches to make the first policies easier to structure and underwrite:
Usage-based insurance is an approach that would tailor policies based on actual robot usage and performance data, and it is already established using telematics in auto insurance. It's also likely that AI underwriting will need to be embraced. For example, Lloyd's of London is exploring AI to underwrite cyber risks associated with robotics. Making proper and effective use of data as systems rapidly change won't work without automation, and it's likely that machine learning and AI strategies will be essential to picking out patterns in usage, accident, and claims data.
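As one illustration of what "picking out patterns" could look like, the sketch below runs an off-the-shelf anomaly detector over made-up per-robot usage features to flag profiles an underwriter might want to review. The feature set, data, and contamination setting are assumptions for illustration only:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-robot usage features:
# [hours_operated, error_rate, override_count, claims_filed]
fleet_usage = np.array([
    [1200, 0.01,  2, 0],
    [1100, 0.02,  1, 0],
    [1300, 0.01,  3, 1],
    [1250, 0.02,  2, 0],
    [ 900, 0.15, 14, 3],   # an outlier profile worth a closer look
])

# Unsupervised anomaly detection: flag robots whose usage deviates from the fleet norm.
model = IsolationForest(contamination=0.2, random_state=0).fit(fleet_usage)
flags = model.predict(fleet_usage)   # 1 = typical, -1 = anomalous

for robot_id, flag in enumerate(flags):
    if flag == -1:
        print(f"Robot {robot_id}: unusual usage profile, refer for underwriting review")
```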
Parametric insurance, which offers pre-defined payouts when specific triggers occur, may also enable early insurance data sets to be established, though it may work more effectively for commercial coverages like business interruption than for third-party damages.
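In code, a parametric structure is deliberately simple: the claims decision reduces to checking an agreed measurement against an agreed trigger, with no loss adjustment. The trigger, payout, and cap below are hypothetical values chosen only to show the shape of such a policy:

```python
# Sketch of a parametric business-interruption coverage for a robot fleet.
TRIGGER_DOWNTIME_HOURS = 24    # a robot out of service beyond this triggers a payout
PAYOUT_PER_TRIGGER = 10_000    # pre-agreed fixed payout per triggering event
MAX_PAYOUTS_PER_YEAR = 3       # annual cap on triggering events

def parametric_payout(downtime_hours: float, payouts_so_far: int) -> float:
    """Return the payout owed for a single downtime event under the parametric terms."""
    if payouts_so_far >= MAX_PAYOUTS_PER_YEAR:
        return 0.0
    return PAYOUT_PER_TRIGGER if downtime_hours >= TRIGGER_DOWNTIME_HOURS else 0.0

print(parametric_payout(downtime_hours=30, payouts_so_far=0))  # 10000
print(parametric_payout(downtime_hours=6, payouts_so_far=0))   # 0.0
```

Because every payout is tied to an objective trigger, each claim also becomes a clean data point that helps build the early loss history insurers currently lack.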
Modular policies can also be developed to cover these risks in a crawl-walk-run cadence. For example, AI insurance startup Armilla started by building software for AI governance and model testing, then added products to guarantee algorithm performance, and has now introduced products that expand into liability coverage for legal defense and third-party claims. While they don't cover robots yet, they offer a template for gradual expansion across a variety of novel risks.
All of these approaches will require a collaborative insurance framework involving all parties to ensure comprehensive coverage. Shared liabilities reflect the interconnected nature of robotic technologies and their deployment.
What will a good insurance product look like?
Machines are already embedded in daily life, and many of us are excited by the productivity enhancements that could come in a Jetsons-like future – not to mention the significant safety gains that have already come from increased automation in factories, and the promise that robots could now take on both our tedious and our dangerous tasks, from folding laundry to defusing bombs.
While AI and robotic safety are paramount and should make us better off on the whole, the real world is messy – accidents and tragedies will happen. So this future will also require the kinds of promises that insurance has been making for hundreds of years.
An effective insurance product for robots should:
- Cover both physical damages and intangible risks like data loss or reputational harm.
- Adapt to the rapid advancements in technology and the evolving nature of AI behavior.
- Clearly define the liability responsibilities of manufacturers, operators, and developers to avoid legal ambiguities.
- Encourage best practices in cybersecurity and regular maintenance to minimize risks.
- Utilize data analytics to assess risks accurately and price premiums fairly.
As robots continue to permeate various aspects of society, the need for specialized insurance becomes increasingly critical. It's not just about mitigating financial losses but also about fostering trust in robotic technologies. Insurers, manufacturers, and regulators must collaborate to develop products that address the unique challenges posed by robots. This will ensure that innovation does not outpace our ability to manage its risks.
By QED Investors Partner Amias Gerety and Moxxie Ventures' Prateek Joshi.