While the FBI has for years struggled with an antiquated computer system, repeatedly going to Congress for massive amounts of funding to bring it into the 21st century, our elected officials have bought into the notion that a new algorithm, computing system, or data-mining operation will save us from the next catastrophe, which is, of course, ever looming on the horizon.
The FBI already collects far too much information about us for its own good. But instead of putting the brakes on the surveillance state, we keep throwing money (and our personal data) at what we hope will be the final technological fix.
Between 2003 and 2006, the FBI's ELSUR database collection of audio wiretaps grew 62 percent, and an astonishing 3,034 percent for "digital collections," or email and other seized online media. Another major FBI database had "exceeded 3,000 users, 350 million tracked 'products,' and 50,000 tracked accounts by mid-2009."
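To put those percentages in plain multiples, here is a back-of-the-envelope calculation; the growth figures are the ones quoted above, and the rest is simple arithmetic:

```python
# Convert a percent increase into a growth multiple.
# The 62 and 3,034 percent figures are the ELSUR numbers quoted above.

def multiplier(pct_increase):
    """A pct_increase-percent increase multiplies the original volume by this."""
    return 1 + pct_increase / 100

print(f"Audio wiretaps:      {multiplier(62):.2f}x the original volume")
print(f"Digital collections: {multiplier(3034):.2f}x the original volume")
```

In other words, a 3,034 percent increase means the FBI's digital take grew to more than thirty times its former size in three years.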
Agents themselves admit that they cannot adequately process all of the data they are bombarded with every day. Nevertheless, the government keeps degrading federal rules and statutes to allow agencies to collect more information about all of us than ever before, with fewer restrictions and wider loopholes.
Everyone from the FBI to the ACLU acknowledges that there is too much data to adequately manage, but the message from the top is never, "Slow down and focus on the real threats." Instead, we hear the same, costly refrain time and again: "We need more access to data, and more money so we can develop technologies that will enable us to properly manage it."
That's a misleading trap. Instead of focusing on known threats, the government is widening the net to ensnare all of us. That not only makes it harder for investigators to do meaningful work, but it is fundamentally incompatible with a free, open society. Data mining simply doesn't work. If it did, how could data-mining systems like Palantir's allow someone like the Aurora movie theater shooter to slip through the cracks?
A good example of how over-reliance on technology and data overload can blunt the effectiveness of a terrorism investigation is provided by the case of the Fort Hood shooter, Maj. Nidal Hasan.
In "How FBI technology woes let Fort Hood shooter slip by," former Navy officer and longtime tech journalist Sean Gallagher insists that technical problems at the FBI are to blame for the agency having "missed" Hasan.
Gallagher describes in detail the FBI's many databases and their backend and user interface problems. He's right to say that the FBI is saddled with obstructive technology, and it's also true that the agency has too many databases and no good way of networking access to the information in all of them.
What's misleading is the assumption, which he claims to share with the commission headed by former FBI and CIA director William Webster, that these problems, and not human incompetence, led to the intelligence failures that ended with 13 people dead at Fort Hood in November 2009. In fact, the Webster report clearly states that human error was a major problem.
In Gallagher's words, Webster’s report about the killings "strongly implies that FBI IT systems and the bureau’s poor state of information sharing with other agencies played a role in the failure to take a harder look at Hasan."
But the fundamental problem in the Nidal Hasan case appears to be the fact that agents in San Diego didn't know how to properly use the tools they had. In the words of the Commission, “Prior to the Fort Hood shootings training on these tools and databases was limited or non-existent.” That's clearly a training and human resources problem, not a technology problem.
Gallagher does say that FBI "user training" was inadequate, but in a piece thousands of words long that insight is buried at the bottom of the "technology is the problem!" lake.
Here's how Gallagher describes what happened:
On November 5, 2009, an Army psychiatrist stationed at Fort Hood, Texas shot and killed 12 fellow soldiers and a civilian Defense Department employee while wounding 29 others. US Army Major Nidal Malik Hasan, the American-born son of two Palestinian immigrants, reportedly shouted “Allahu Akbar!”—“God is great!”—before launching his 10-minute shooting rampage at the Soldier Readiness Center. The shooting—the worst ever on an American military base—occurred as Hasan was facing imminent deployment to Afghanistan. A civilian police officer shot Hasan and placed him under arrest.
In the investigation that followed, the FBI and Defense Department investigators found that Hasan had been communicating with Anwar al-Aulaqi (sometimes spelled "al-Awlaki"), an American radical Islamic cleric living in Yemen. In the process of reviewing the evidence, investigators found that the FBI’s Joint Terrorism Task Forces in San Diego and Washington, DC had been aware of Hasan’s interactions with Aulaqi for over 11 months before the attack. Yet Hasan had never even been interviewed about his connection with the imam who would later be tied to “underwear bomber” Umar Farouk Abdulmutallab and to attempts to bomb US bound cargo planes with explosives packed in laser printer cartridges. (al-Aulaqi would later be killed by a US drone strike in Yemen.)
As federal officials looked into whether they had somehow missed leads that might have prevented the shooting, they found that the information technology at the heart of the FBI’s efforts to prevent terrorist attacks was fractured, overburdened, and running on aging and underpowered hardware.
Gallagher proceeds to explain how and why this is true. But given the available evidence, he reaches the wrong conclusions, emphasizing technical rather than human error.
The tech journalist also deploys no small amount of fear-mongering to pack a rhetorical punch. Yes, the PATRIOT Act has fundamentally undermined the Fourth Amendment. Many people hate the fact that the government has access to our data without judicial oversight. But Gallagher emphasizes not why this is problematic, but rather that the government still doesn't have enough power or money to stop the ever-growing threats we face.
Gallagher:
…US law enforcement and intelligence agencies have struggled over the past decade to take all of this information and put it to use. The poor search capabilities of the FBI's software, inadequate user training, and the fragmented nature of the organization's intelligence databases all meant there was no way for anyone involved in the investigation to have a complete picture of what was going on with Hasan.
While much has changed since November of 2009, the FBI’s intelligence analysis and sharing systems remain a work in progress at best—and there's no telling what other potential threats may have gone unnoticed.
Fear-mongering aside, here's a basic rundown of what the Webster report says happened:
Back in 2008, agents at the San Diego Joint Terrorism Task Force (JTTF) were investigating Anwar al-Aulaqi. Agents were tasked with sorting through messages sent to his website, his emails and instant messages; they were to mark the records either "pertinent" or "non pertinent" and segregate the former for further analysis. It was a "crushing" amount of data.
Gallagher writes:
Between the first message sent by Hasan to Aulaqi (December 2008) and the last (June 2009), the agent and analyst assigned to the case reviewed 7,143 documents—between 65 and 70 on an average day, with as many as 132 documents on peak days. Getting through that volume of data consumed astounding amounts of time. The analyst spent about 40 percent of his total time reviewing documents for the Aulaqi investigation, while the agent assigned to the case spent about three hours each day reviewing documents.
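A quick sanity check of those workload figures, using only the numbers quoted above, shows just how crushing the volume was:

```python
# Back-of-the-envelope check of the Aulaqi review workload described
# in the Webster report figures quoted above (7,143 documents reviewed
# at 65-70 per average day). Everything else is simple arithmetic.

total_docs = 7143
low_rate, high_rate = 65, 70  # documents reviewed on an average day

days_at_high_rate = total_docs / high_rate  # fewest implied review days
days_at_low_rate = total_docs / low_rate    # most implied review days

print(f"Implied full days of review: {days_at_high_rate:.0f} to {days_at_low_rate:.0f}")

# December 2008 through June 2009 spans roughly seven months, or about
# 145 working days -- so document review ate up most of the case's life.
```

That is on the order of a hundred working days of nothing but reading, inside a seven-month investigation.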
(One of the messages Hasan sent to Aulaqi is reproduced as an image in Gallagher's article.)
Agents in San Diego realized that the person writing the message, Hasan, could be in the military. So they got in touch with military investigators and passed on the information to the Washington, DC FBI office. That FBI office would not take up the issue for months, during which time Hasan was promoted within the military.
"Six more messages were sent by Hasan to Aulaqi while the lead sent by San Diego agents to Washington went unattended," Gallagher writes. The San Diego agent who flagged the message to send to DC says he followed up by phone to push the issue; the Washington DC agent denies this phone call took place, and the report provides no evidence that it did.
What went wrong?
The Webster commission found that none of the data obtained from Hasan's e-mail communications—nor any of the other communications by him on other e-mail addresses recovered by forensics investigation of his computer after his arrest—indicated he was preparing to act violently…If the information had been shared with Army officials early on, or if the Washington Joint Terrorism Task Force had interviewed Hasan and his superiors, both the military and the JTTF would have gained a much different picture of him—one likely to have led to the revocation of his Secret security clearance and a reconsideration of orders deploying him to Afghanistan.
There you have it. The FBI had all the information it needed to prompt an interview with Hasan about his communications with Aulaqi, someone the US government thought so dangerous that it killed him in an extrajudicial drone strike, risking international censure and criticism. But it did not act. The question remains: why?
When FBI officials initially inquired with their military counterparts to find out if Hasan was an active duty service member, they received bad information. The military JTTF participants told them that Hasan worked in military intelligence, which was not the case. Therefore, instead of passing on the information about him to his superiors in the military, the San Diego JTTF agents passed on the information to the FBI in DC. They feared that Hasan would be able to intercept messages to the military, and they didn't want to tip him off. This was a mistake caused by human error, not technology.
The San Diego JTTF agent says the FBI in DC dropped the Hasan issue because it didn't want to risk a politically damaging public relations problem – he was a military psychiatrist, after all. The DC field office denies this, and says that it wasn’t given all of the information about Hasan.
Sean Gallagher seems to be arguing that the US government could have stopped Hasan from shooting those people at Fort Hood had the computer systems only been more advanced. But the analysts had all the information they needed to open a separate investigation into Hasan, based solely on his email to Aulaqi. They didn't do it.
And the Webster report agrees: at the root of whatever intelligence failure allowed the attack to transpire is a human problem, not a technology problem.
We conclude that, working in the context of the FBI’s governing authorities, policies and operational capabilities, and the technological environment of the time, individuals who handled the Hasan information made mistakes…. These committed individuals need better policy guidance to know what is expected of them in performing their duties, and better technology, review protocols and training to navigate the ever-expanding flow of electronic information.
The intelligence apparatus failed to intercept Hasan before he went on his shooting rampage because its agents were not properly trained to use the systems at their disposal, because they were crushed by the amount of data they were responsible for processing, and because they made human errors.
The database system in question "is operating under maximum stress" because there is too much information in it.
Strict rules barring investigators from spying on people against whom there is no evidence of wrongdoing exist for two main reasons. First, they reflect the proper relationship between government and governed in a free society. Second, they ensure that investigators don't waste all of their time chasing down foolishness that has nothing to do with dangerous criminal activity, and instead drill down on verifiable potential threats.
And finally, let’s give those FBI agents a break. After all, the government is never going to be able to stop every person with ill intent from committing heinous crimes. It’s simply not possible to predict the future. We should stop pretending we could do it if only we had more data and better tech. That set of assumptions is taking us in a terrifying direction. There’s still time to turn back, before it’s too late. Quitting the notion that the government can protect us from all harm is a good first step. What else?
We need to tighten the rules, and lighten the databases.