The argument is a fairly obvious extension of the rationalization for in rem asset forfeiture, to “take the profit out of crime.” It’s a great slogan, given that crime is bad and profit is its motive. But when the money seized was the cash in a traveler’s pocket, taken not because there was any particular basis to believe that a crime was committed or that the money was either derived from or used in crime, but just because it was cash that cops could grab, the slogan rang hollow.
But that was cash, and there was more money to be had. And much like drug dealers might carry suitcases filled with cash, bad dudes might use banks to hold, launder and move money for bad purposes. Something must be done, activists cried. And so something was done.
These situations are what banks refer to as “exiting” or “de-risking.” This isn’t your standard boot for people who have bounced too many checks. Instead, a vast security apparatus has kicked into gear, starting with regulators in Washington and trickling down to bank security managers and branch staff eyeballing customers. The goal is to crack down on fraud, terrorism, money laundering, human trafficking and other crimes.
Banking law puts the onus on banks to identify “suspicious” transactions and file a “suspicious activity report,” or SAR, failing which the banks are held culpable by the federal government. And the last thing in the world banks want is the feds going after them for failing to snag a customer, whether individual or business, engaged in bad acts. So bank compliance departments created algos to red-flag activity that strayed from what banks consider the usual.
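What might such a compliance screen look like? A minimal sketch, assuming a naive rules engine; the rule names and thresholds here are invented for illustration, and real bank systems are proprietary and far more elaborate:

```python
# Hypothetical, simplified transaction-screening rules of the kind a bank
# compliance department might run. Thresholds and rules are invented for
# illustration; real systems are proprietary and far more complex.
from dataclasses import dataclass

@dataclass
class Transaction:
    customer_id: str
    amount: float          # dollars
    is_cash: bool
    country: str

def flag_reasons(txn: Transaction, avg_monthly_cash: float) -> list[str]:
    """Return the red flags a naive rules engine might raise."""
    reasons = []
    # Cash just under the $10,000 currency-reporting threshold looks like
    # "structuring," i.e., splitting deposits to dodge reports.
    if txn.is_cash and 9_000 <= txn.amount < 10_000:
        reasons.append("possible structuring")
    # Cash activity far outside this customer's historical norm.
    if txn.is_cash and txn.amount > 5 * max(avg_monthly_cash, 1.0):
        reasons.append("out-of-character cash activity")
    # Any transfer touching a (hypothetical) watch-list jurisdiction.
    if txn.country in {"XX", "YY"}:
        reasons.append("high-risk jurisdiction")
    return reasons

# A perfectly lawful one-time cash purchase trips the second rule:
txn = Transaction("c123", 8_500.0, is_cash=True, country="US")
print(flag_reasons(txn, avg_monthly_cash=400.0))
# -> ['out-of-character cash activity']
```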
Bank customers get a letter in the mail saying their institution is closing all of their checking and savings accounts. Their debit and credit cards are shuttered, too. The explanation, if there is one, usually lacks any useful detail.
Or maybe the customers don’t see the letter, or never get one at all. Instead, they discover that their accounts no longer work while they’re at the grocery store, rental car counter or A.T.M. When they call their bank, frantic, representatives show concern at first. “Oh, no, so sorry,” they say. “We’ll do whatever we can to fix this.”
But then comes the telltale pause and shift in tone. “Per your account agreement, we can close your account for any reason at any time,” the script often goes.
The point isn’t that the customer service rep doesn’t want to tell you, but that they have no clue why the algos kicked your account out. They did, and that’s that. Far better to lose a few thousand questionable customers than to fail to throw out a fraudster or money launderer and get slammed by the feds, with the ensuing fines if not worse, for complicity in crime.
In the process, banks are evicting what appear to be an increasing number of individuals, families and small-business owners. Often, they don’t have the faintest idea why their banks turned against them.
But there are almost always red flags — transactions that appear out of character, for example — that lead to the eviction. The algorithmically generated alerts are reviewed every day by human employees.
Algos are bludgeons, and easily pick up on activity outside the “norm” of banking. The problem is that a great many perfectly lawful and, indeed, entirely normal transactions are “out of character” unless you ask why. Algos do not ask questions. Buying a used car from someone on Craigslist? You’ll need cash to complete the transaction. There’s nothing unusual about buying a used car. People do so all the time. But they don’t do so every day, and so the algo raises a red flag over an unexplained cash transaction and you’re suddenly a potential criminal. Banks won’t take that chance.
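To see why a once-in-a-decade lawful purchase looks “suspicious” to a machine, consider a toy anomaly detector. This is entirely hypothetical and the numbers are made up; real models are more sophisticated, but the failure mode is the same:

```python
# Toy z-score anomaly detector: flag any withdrawal more than 3 standard
# deviations above the customer's historical mean. Numbers are invented.
import statistics

history = [60, 45, 80, 120, 55, 90, 70, 65]   # typical ATM withdrawals ($)
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_suspicious(amount: float, threshold: float = 3.0) -> bool:
    """True if the amount is a statistical outlier for this customer."""
    return (amount - mean) / stdev > threshold

# The one-time cash purchase of a used car:
print(is_suspicious(8_500))   # True -- flagged, no questions asked
print(is_suspicious(100))     # False -- ordinary activity
```

The detector has no way to learn that the outlier was a car, not a crime; it only knows the number is unusual.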
At the outset, the filing of SARs was intended to alert the government so it could investigate suspicious banking activity, but the sheer number of SARs filed made that an impossibility. While banks weren’t in a position to seize money, they were in a position to protect themselves, since banks make easier targets for the feds to nail than the individuals and businesses whose accounts were flagged. After all, when some bad dude makes the front page or 60 Minutes, the feds can search their SAR databank to see whether the bank flagged the account. It’s backwards investigation, done to find out who gets blamed, and banks do not want to get that call asking for comment, or what follows it.
So what’s the problem? There are regular people and businesses burned in the banks’ efforts to make sure their own butts are well and truly covered.
Individuals can’t pay their bills on time. Banks often take weeks to send them their balances. When the institutions close their credit cards, their credit scores can suffer.
Upon cancellation, small businesses often struggle to make payroll — and must explain to vendors and partners that they don’t have a bank account for the time being.
Imagine you, a law-abiding person with bills to pay and mouths to feed, suddenly cut off from your savings and credit for an unknown extended period of time. Your kids or employees don’t want to hear about the banks’, or the federal government’s, problems when they’re hungry. The banks, and the feds, really don’t care.
To former bank employees, the bloodless data belie the havoc that banks wreak. “There is no humanization to any of this, and it’s all just numbers on a screen,” said Aaron Ansari, who used to program the algorithms that flag suspicious activity. “It’s not ‘No, that is a single mom running a babysitting business.’ It’s ‘Hey, you’ve checked these boxes for a red flag — you’re out.’”
In a world driven by algos, explanations don’t matter. But that’s the only way to make sure that no bad dude launders money, and so what if a few good people go hungry?
This is Artificial Intelligence in action. It is not the cute version that is currently being sold, but it has been around for quite a while. Insurance companies also use credit scores to determine how much to charge you, based on AIs that suck up every bit of data they can associate with you and feed it through an algorithm to produce a score. I have personal experience with this mess.
And then there is Mr. Farage, leader of the Brexit movement in the UK, who was unbanked because a senior manager at his bank did not like his politics.
To the left, no weapon should ever be wasted in the pursuit of social justice.
You don’t think the right would use this to go after its own version of social justice? Seems small-minded to think this is just a tool of the left.
Orange man has allegedly already started plotting his revenge on people he feels have wronged him. This would totally be up his alley.
Oh, and don’t get me started on the FATCA law, which made life impossible for US citizens living outside the USA. I was one, and we got back to the US just as it was taking hold. It holds foreign banks with no presence at all in the USA to the same standards, with the threat of having them cut off from the international money transfer regime if they don’t “know their customers”. So they have stopped allowing Americans to have local bank accounts. Which makes life IMPOSSIBLE if you live in France.
The problem is human, not digital. The law enforcement mind is not comfortable with the concept of a false positive. The creator of an algorithm with a 50% false positive rate will be proud. “Look at all those bad guys we uncovered,” he will say, thinking of the 50% true positives, and not realizing the burden placed on the non-criminal 50%.
A 5% false positive rate is even worse. Law enforcement will be certain “the algo never lies.”
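To put numbers on the commenter’s point, here is a back-of-the-envelope Bayes calculation. All the figures are invented for illustration, but they show how even a seemingly accurate screen flags mostly innocent customers when actual criminals are rare:

```python
# Base-rate arithmetic behind the comment above. All numbers are invented.
prevalence = 0.001   # assume 1 in 1,000 customers actually engaged in crime
sensitivity = 0.95   # assume the algo catches 95% of real bad actors
fpr = 0.05           # and flags 5% of innocent customers anyway

# P(actually criminal | flagged), by Bayes' rule:
p_flag = sensitivity * prevalence + fpr * (1 - prevalence)
p_criminal_given_flag = sensitivity * prevalence / p_flag
print(f"{p_criminal_given_flag:.1%}")   # ~1.9% -- over 98% of flags are innocent
```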
The algos flagging perfectly lawful “irregular” activity is bad enough. But how long before this enforcement tool gets added to the “equity” toolkit, or used not just to flag banking activities outside the norm but also to mark individuals in undesirable social categories? After all, Kendiism tells us that any algo is either racist or anti-racist (or -fascist, or -denialist, -phobic, -insurrectionist, etc.). AI can use a tool like this for some bottom-up brave new worlding. Cancel culture wants to take offenders’ jobs; why wouldn’t it want to take their banking?
On second thought, no way our institutions are using this without baking it in the equity oven first, even if they are still a few graduating classes away from using it to punish people with heretical politics.
Coin and bullion dealers are being canceled by their existing banks. And finding it difficult to establish a new banking relationship. This is new.
Some faceless, nameless, unelected bureaucrat has set this in motion.
I am not a Libertarian but they are right about many issues.
I don’t think it’s that banks are becoming cops; I think it’s political correctness, as in we don’t want to be associated with “those people,” as it hurts our credibility with the “right sort of people.”