Kochava, the self-proclaimed industry leader in mobile app data analytics, is locked in a legal battle with the Federal Trade Commission in a case that could lead to big changes in the global data marketplace and in Congress’ approach to artificial intelligence and data privacy.
The stakes are high because Kochava’s secretive data acquisition and AI-aided analytics practices are commonplace in the global location data market. In addition to numerous lesser-known data brokers, the mobile data market includes larger players like Foursquare and data market exchanges like Amazon’s AWS Data Exchange. The FTC’s recently unsealed amended complaint against Kochava makes clear that there’s truth to what Kochava advertises: it can provide data for “Any Channel, Any Device, Any Audience,” and buyers can “Measure Everything with Kochava.”
Separately, the FTC is touting a settlement it just reached with data broker Outlogic, in what it calls the “first-ever ban on the use and sale of sensitive location data.” Outlogic has to destroy the location data it has and is barred from collecting or using such information to determine who comes and goes from sensitive locations, like health care centers, homeless and domestic abuse shelters, and religious places.
According to the FTC and proposed class-action lawsuits against Kochava on behalf of adults and children, the company secretly collects, and otherwise obtains, vast amounts of consumer location and personal data without notice or consent. It then analyzes that data using AI, which allows it to predict and influence consumer behavior in remarkably varied and alarmingly invasive ways, and offers the results for sale.
Kochava has denied the FTC’s allegations.
The FTC says Kochava sells a “360-degree perspective” on individuals and advertises it can “connect precise geolocation data with email, demographics, devices, households, and channels.” In other words, Kochava takes location data, aggregates it with other data and links it to consumer identities. The data it sells reveals precise information about a person, such as visits to hospitals, “reproductive health clinics, places of worship, homeless and domestic violence shelters, and addiction recovery facilities.” Moreover, by selling such detailed data about people, the FTC says “Kochava is enabling others to identify individuals and exposing them to threats of stigma, stalking, discrimination, job loss, and even physical violence.”
I’m a lawyer and law professor practicing, teaching and researching AI, data privacy and evidence. These complaints underscore for me that U.S. law has not kept pace with regulation of commercially available data or governance of AI.
Most data privacy regulations in the U.S. were conceived in the pre-generative AI era, and there is no overarching federal law that addresses AI-driven data processing. There are Congressional efforts to regulate the use of AI…