What are algorithms, and why should we care? Simply defined, they are sets of steps for completing a task or solving a problem. The problem can be as simple as making a peanut butter and jelly sandwich, or as complex as a Google search algorithm that hunts through myriad websites for words related to your search terms.
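To make the "set of steps" idea concrete, here is a toy sketch (a hypothetical illustration, not drawn from the article; the function name and data are invented) of a bare-bones search: visit each page, break it into words, and report where the search term appears.

```python
def contains_term(pages, term):
    """A toy 'search algorithm': a fixed sequence of steps.

    Hypothetical illustration only; real search engines are
    vastly more sophisticated.
    """
    matches = []
    for title, text in pages.items():    # step 1: visit each page
        words = text.lower().split()     # step 2: break text into words
        if term.lower() in words:        # step 3: compare each word to the term
            matches.append(title)        # step 4: record a match
    return matches

pages = {
    "recipes": "peanut butter and jelly sandwich",
    "maps": "driving directions and routes",
}
print(contains_term(pages, "jelly"))  # ['recipes']
```

Every step is explicit and repeatable; that mechanical quality is what lets the same recipe run on a sandwich, a map query or a web search.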
Their influence on our lives is subtle and pervasive. Algorithms are at the root of computer science, and they power many of the everyday miracles that have changed our lives in recent years: the GPS system that gives you driving directions, the code behind Internet marketing, social media innovations and computerized customer service.
Algorithms also assist governments in making public policy decisions on our behalf. In criminal courts, for example, algorithms are used when a DNA sample is contaminated, as in a mixture of several people's biological material. Suddenly, complex math programs become critical to deciding guilt or innocence. After a person is convicted, algorithms help officials estimate the likelihood of recidivism and guide the range of choices between maximum-security prison and release into the community. As such, they matter greatly to the lawyers, courts, journalists and scholars engaged in finding what's actually true.
But who gets to test an algorithm’s soundness?
Before computers and apps became so pervasive, the primary sources of information were usually words or images on paper. Citizens, scholars and journalists could research court records or make Freedom of Information requests for government agency files. But as we increasingly rely on software solutions from Silicon Valley to make important decisions, a tremendous amount of accountability is being lost.
Almost without exception, the computer “source code” behind an algorithm is commercially protected and all but unfathomable. That is due both to scientific complexity and to the proprietary shield afforded to intellectual property. This affects real people in real ways.
Take the example of Mayer Herskovic, a young father and home heating technician who is also a Hasidic Jew. He faces four years' imprisonment for the 2013 beating of Taj Patterson, then 22, an African American fashion student, in the Williamsburg section of Brooklyn.
One night in November, members of a neighborhood watch group made up of Hasidic Jews spotted Patterson in their neighborhood. Under the mistaken belief that he was vandalizing cars, they pursued him and beat him so severely that he lost an eye. Herskovic contends he is innocent and was never there. The only evidence linking him to the crime was an extremely small amount of DNA swabbed from Patterson's Air Jordan sneaker.
It amounted to 97.9 picograms of material; a picogram is one trillionth of a gram. The sample was analyzed with an algorithm contained in the New York City Medical Examiner's Office's proprietary software, known as the Forensic Statistical Tool (FST), designed to analyze minuscule, degraded or mixed samples containing more than one person's DNA.
According to reporting by the investigative journalism outlet ProPublica, Herskovic was targeted by two confidential police informants. Neither testified at his trial, so neither was confronted or cross-examined. Nor could the FST software be cross-examined; after analysis by an independent software expert in a separate case, it was discontinued in 2017 in favor of an FBI-endorsed test for mixed DNA.
Herskovic's appellate lawyer planned to argue that the algorithm's reliability was never tested on a closely related population like Brooklyn's Hasidic Jews, who have many common ancestors and therefore share much of the same DNA.
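To give a sense of why that objection matters, forensic DNA programs generally report a likelihood ratio: how much more probable the evidence is if the suspect contributed to the mixture than if an unrelated person did. The arithmetic below is a simplified, hypothetical sketch with invented probabilities; it is not FST's actual model, whose source code is proprietary.

```python
# Hypothetical likelihood-ratio arithmetic (all numbers invented).
# Probability of the observed mixture if the suspect is a contributor:
p_if_contributor = 0.8
# Probability of the same observation if a random, unrelated person
# from the reference population contributed instead:
p_if_random = 0.002

likelihood_ratio = p_if_contributor / p_if_random
print(likelihood_ratio)  # 400.0 -- evidence 400x more likely under the first hypothesis
```

The denominator depends on how common the matching genetic markers are in the comparison population. In a community with many shared ancestors, those markers are more common, so the denominator rises and the ratio shrinks; a tool never validated on such a population may overstate the strength of a match.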
“This case is a poster-child for how ‘DNA evidence’ can literally be fabricated out of thin air, and how statistics can be manipulated to create a false impression of ‘scientific evidence’ of guilt,” Donna Aldea told ProPublica reporter Lauren Kirchner.
(Some algorithms for analyzing trace DNA are not proprietary; they are published as open source, under licenses such as Creative Commons, for all to see.)
Federal and state governments purchase proprietary software to help evaluate risk and make decisions such as which prisoners should be held in high-security facilities and which should be placed in halfway houses. Other software informs security and risk-management decisions.
The companies that create algorithms that implement public policy should not expect blanket protection from FOI review simply because of underlying intellectual property rights. There needs to be a balancing of interests, and potential flaws or biases in the algorithms need to be discoverable.
When soliciting bids for software products, Connecticut's state government agencies should structure those bids to put bidders on notice that the underlying assumptions and weighting factors of their algorithms may be subject to public Freedom of Information review. Anything less would undermine necessary public protection and accountability.