Editor's note: This letter remains in the condition in which it was sent.
Auto insurers have been discriminating against drivers for decades. Will A.I. make it worse?
What goes into the calculation of your premium? The underwriting and rate calculations auto insurers use are more discriminatory than you might expect. Because car insurance is a government-mandated expense, buying it can be a necessary yet unaffordable burden for millions of working Americans. Low-income individuals and people of color are disproportionately affected by high insurance prices.
With access to limitless demographic data and statistics from myriad sources, insurance companies can freely fold your information into their pricing determinations. Add in the growing prevalence of artificial intelligence, and the methods insurers use to evaluate policyholders become murky and often discriminatory. Auto insurance reform, especially regarding zip code risk calculations, is a crucial social justice issue that demands immediate attention from lawmakers.
Insurance companies use historical data to predict how likely an individual is to make a claim on their policy. Your premium is calculated by an algorithm, or system of calculations, that considers factors like credit score, zip code, age, and even whether you rent or own your home. These risk assessment algorithms assign a weight and value to each factor. For instance, age is weighted more heavily for drivers under 25, whose rates are higher. The assessment becomes problematic when it weighs your zip code, which folds in the demographics, income level, and crime data of your area, among other things.
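To make the mechanism concrete, here is a minimal sketch of a factor-weighted premium calculation. Every factor name, weight, and base rate below is a hypothetical illustration; real insurer models are proprietary and far more complex.

```python
# Minimal sketch of a factor-weighted premium calculation.
# All factor names, weights, and the base rate are hypothetical;
# actual insurer models are proprietary and far more complex.

BASE_RATE = 1200.00  # hypothetical annual base premium, in dollars

def premium(age: int, credit_score: int, zip_risk: float, renter: bool) -> float:
    """Multiply the base rate by a multiplier for each risk factor."""
    multiplier = 1.0
    if age < 25:              # age is weighted heavily for young drivers
        multiplier *= 1.40
    if credit_score < 650:    # a lower credit score raises the rate
        multiplier *= 1.25
    multiplier *= zip_risk    # zip code risk score, e.g. 0.9 to 1.8
    if renter:                # renting rather than owning a home
        multiplier *= 1.05
    return BASE_RATE * multiplier

# Two drivers with identical records but different zip code risk scores:
print(premium(30, 700, zip_risk=1.0, renter=False))  # 1200.0
print(premium(30, 700, zip_risk=1.7, renter=False))  # 2040.0
```

In this toy model, the zip code multiplier alone puts $840 between two drivers with identical records, which is why the inputs to that score matter so much.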
Your zip code alone goes a long way toward predicting your rate. According to a Consumer Federation of America report, in communities where at least three quarters of residents were African American, premiums averaged 71 percent higher than in comparable predominantly white areas. What’s more, in metropolitan areas with long histories of redlining and segregation, premiums charged in predominantly non-white zip codes were nearly double those charged in largely white zip codes. Across New York State, drivers who live in predominantly non-white zip codes pay, on average, $1,728 more per year than those in predominantly white zip codes.
When insurers designate a zip code as “high risk,” they set off a self-reinforcing feedback loop in which members of a low-income community end up paying ever-higher premiums. The community becomes systematically undesirable, prompting wealthier residents to leave for areas with lower rates. Because factors like credit score and demographics feed into a “high risk” designation, this system disproportionately targets communities of color for price hikes.
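A toy simulation makes the loop’s self-reinforcing character visible. The numbers below are invented for illustration only; they assume a simple linear link between premiums, out-migration, and credit scores that no real market follows exactly.

```python
# Toy simulation of the zip code premium feedback loop described above.
# All coefficients are invented for illustration.

risk_score = 1.2   # hypothetical zip code risk multiplier
avg_credit = 680   # hypothetical average credit score in the area

for year in range(5):
    annual_premium = 1200 * risk_score
    # Higher premiums push wealthier residents, and their higher
    # credit scores, out of the zip code...
    avg_credit -= annual_premium * 0.01
    # ...and the lower average credit score feeds back into the
    # risk score insurers assign to the area.
    risk_score = 1.2 + (700 - avg_credit) * 0.005
    print(f"year {year}: premium=${annual_premium:,.0f}, avg credit={avg_credit:.0f}")
```

Each pass through the loop raises the premium a little more, even though no individual driver’s behavior has changed.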
The actual algorithms insurers use are kept concealed from the public, and often from oversight agencies, as proprietary secrets. A 2020 Consumer Reports investigation noted that Allstate explicitly kept the details of its “price optimization” algorithm out of many of its filings with state regulators. The “black box” of zip code risk calculation is intentionally hidden, allowing insurers great leeway in their pricing schemes. Without transparency into how a company evaluates zip code risk, lawmakers have no way to determine whether an algorithm violates fundamental antidiscrimination statutes.
Two inputs are often cited to defend or replace the use of zip codes in risk calculation: AI telematics and crime rates. The introduction of AI-based driver metrics to the insurance industry poses its own risks to low-income drivers and people of color. In some cases, auto manufacturers have shared consumer telematics with insurers without the driver’s knowledge. These telematics devices monitor driving habits like destination, acceleration, speed, and braking. Given what we know about AI in other industries so far, it is hard to say how the collected information is used, or whether it perpetuates discrimination. AI’s reliance on historical data to predict risk may well perpetuate racial bias and classism. Destination tracking, for example, can lead risk algorithms to raise premiums for drivers perceived to frequent “high risk” areas.
While zip codes are known proxies for class and race, insurance companies attest that they depend on crime data to make their underwriting calculations. Crime is, in theory, reasonable data to include. However, as Cathy O’Neil demonstrated in her 2016 book, Weapons of Math Destruction, crime statistics are themselves the product of a biased feedback loop: heavily policed, poor areas generate data that draws police back to those areas and cements their “high risk” status. The use of AI or crime data therefore demands greater regulation by lawmakers and greater vigilance from consumers.
Fortunately, New York is already taking steps to mitigate the bias and discrimination that AI can entrench. A 2024 New York Department of Financial Services report stated that insurers’ use of external consumer data and artificial intelligence systems should be monitored for “unfair and unlawful discrimination.” Increased state oversight of insurer practices is expected to follow.
Similarly, the National Association of Insurance Commissioners, a non-governmental standard-setting organization, recommended an increase in “oversight of unregulated big data and vendors of algorithms currently used to establish pricing.”
As consumers, we can pressure our elected officials to pursue oversight of the insurance industry at all levels of government. Appropriating more funds to consumer protection agencies is another crucial step toward curbing illegal discriminatory practices. Consumers should have the right to understand how their zip code is being evaluated so that they can make informed decisions, and insurers must be transparent about their underwriting methods, especially their use of AI and zip codes. Ending the unfair and opaque use of zip code data in risk calculations should be an immediate priority on the road to equitable pricing.
Further reading: The Opportunity Atlas; 2023 CFA Report; Colorado’s AI mitigation law
The opinion desk can be reached at opinion@ubspectrum.com