AI in Banking: From Innovation to Automated Harm — Why Recovery Spaces Now Matter More Than Ever
- Steve Conley
- Jan 20
- 6 min read

By Steve Conley, Founder, Get SAFE
A quiet shift is happening in banking.
Most people don’t see it yet. But victims of financial harm feel it every day.
Artificial intelligence is being rolled out across banks, insurers, and financial platforms at speed. We’re told it’s about efficiency, innovation, and better service.
But on the ground, many people are experiencing something very different:
Being trapped in chatbot loops
Being denied a human conversation
Being told “the system says no”
Being unable to get clear answers
Being exhausted into giving up
For people already dealing with financial loss, fraud, mis-selling, or institutional harm, this shift isn’t neutral. It is deeply destabilising.
And now, Parliament has confirmed what victims already knew.
Parliament has sounded the alarm — and it’s serious
In January 2026, the UK Treasury Select Committee issued a stark warning:
“The financial system is not adequately prepared for an AI shock… The three authorities are exposing consumers and the financial system to potentially serious harm.”
This wasn’t a fringe tech report. This was one of Parliament’s most powerful committees telling the Treasury, the FCA, and the Bank of England:
You are not in control of what is being built. And people will get hurt because of it.
The Committee highlighted:
Widespread AI adoption in banks
A lack of meaningful oversight
No new consumer protections
No designated regulation of cloud providers
No resilience planning for outages
No accountability framework for AI failures
In plain English:
Banks are automating faster than the system is learning how to keep people safe.
What banks are actually using AI for
Let’s be honest about how AI is being deployed right now.
It is not primarily being used to:
Detect emotional vulnerability
Prevent scams earlier
Protect cognitively overloaded customers
Increase duty-of-care safeguards
Improve complaint fairness
Restore trust after harm
It is primarily being used to:
Cut back-office staff
Replace call centres
Deflect complaints
Delay escalation
Increase sales conversion
Reduce legal liabilities
Shrink the number of humans who carry accountability
This creates a dangerous new pattern:
When something goes wrong, there is no longer a person who can fix it. There is only a system that says no.
For victims of financial exploitation, this is not progress. It is a new kind of institutional wall.
The new harm pathway: from human injustice to automated injustice
Traditional financial harm was slow, personal, and paper-based. You were mis-sold something. You were scammed. You were given bad advice. You could at least talk to someone.
AI changes the shape of harm.
Now people are facing:
Algorithmic denial of complaints
Chatbots that cannot understand trauma
Systems that misclassify vulnerability
Automated decision-making with no explanation
No clear route to appeal
No named decision-maker
No emotional recognition
No narrative space for what actually happened
For someone already in shock, grief, or financial panic, this is psychologically devastating.
It doesn’t just block justice.
It re-traumatises.
Systemic fragility is increasing — not decreasing
The same parliamentary report also revealed something deeply worrying:
Major banks were knocked offline by a cloud failure caused by AI automation systems. Lloyds and Halifax customers lost access during an Amazon Web Services outage.
This confirms a hard truth:
Banks are now dependent on a handful of tech giants for their most critical systems.
Yet:
None of those tech firms are regulated as systemically critical
None have been designated under the UK’s Critical Third Parties regime
None face consumer harm liability when their systems fail
None are supervised like banks are
This means we now have:
A financial system that is more automated, more centralised, more brittle, and less accountable than ever before.
That is not resilience.
That is a single-point-of-failure economy.
The regulatory gap victims are already falling into
The FCA has publicly said:
“We don’t need new rules for AI.”
Parliament has publicly said:
“This is exposing consumers to serious harm.”
Victims already know who’s right.
Because when something goes wrong today:
You can’t tell who made the decision
You can’t see the logic used
You can’t challenge the data inputs
You can’t demand a human review
You can’t get a trauma-informed response
You can’t slow the system down
You can’t opt out of automation
You can’t escape the maze
This is not just a technical failure.
It is a moral and governance failure.
Why recovery spaces now matter more than ever
This is where Get SAFE comes in.
We were not created as a protest group. We were not created as a legal service. We were not created as a political movement.
We were created because people who’ve been financially harmed need:
A calm place to land
A space that doesn’t rush them
Language that doesn’t blame them
Time to stabilise before seeking justice
Human recognition before legal process
Emotional grounding before evidence bundling
Agency before advocacy
AI-driven systems do not provide any of that.
They cannot hold your story. They cannot feel your confusion. They cannot pace your recovery. They cannot recognise your grief. They cannot restore your dignity.
But a human community can.
Recovery first. Justice second. Always.
At Get SAFE, we work from one core principle:
Recovery comes first. Justice comes second.
Not because justice doesn’t matter. But because traumatised people cannot fight systems properly.
Before anyone is pushed into complaints, regulators, courts, or campaigns, they need:
Nervous system safety
Cognitive clarity
Financial stabilisation
Emotional containment
Narrative coherence
Rebuilt self-trust
Social support
Realistic expectations
Only then does justice work become empowering instead of re-wounding.
AI systems are currently being built to do the opposite:
Speed people up
Fragment their stories
Exhaust their energy
Deny their emotions
Narrow their options
Break their persistence
That is why independent recovery spaces are no longer optional.
They are now essential civic infrastructure.
The deeper truth: AI is amplifying power imbalances
AI in banking is not neutral technology.
It is a force multiplier.
It amplifies:
Institutional power over individuals
System speed over human pacing
Profit logic over care logic
Risk transfer over risk reduction
Opacity over transparency
Automation over accountability
Unless deliberately constrained, it will:
Make it harder to complain
Make it harder to prove harm
Make it harder to get redress
Make it harder to feel believed
Make it harder to recover
Make it harder to hold anyone responsible
That is not a dystopian future.
That is what victims are already living through.
What needs to change — structurally
If AI is going to be used ethically in financial services, five things must happen:
Mandatory human override rights: Every customer must have a clear right to speak to a trained human decision-maker.
Trauma-informed system design: AI interfaces must be built around cognitive load, emotional distress, and vulnerability — not just efficiency.
Explainable decision pathways: Customers must be able to see why a decision was made and who is accountable.
Independent recovery infrastructure: Victims must have access to non-commercial, non-institutional recovery spaces.
Regulation of cloud providers as critical infrastructure: Tech giants must be supervised like utilities, not startups.
None of this is currently in place.
Why Get SAFE exists — and why it will matter even more now
Get SAFE exists because:
Financial harm is not just financial
Institutional harm is not just legal
AI harm is not just technical
Justice without recovery is not healing
Recovery without agency is not empowerment
We are building:
A calm, trauma-informed space
A human counterweight to automated systems
A bridge between victims and justice
A place where people are not rushed, blamed, or sold to
A foundation for citizen-led accountability in an AI age
As automation increases, human dignity becomes scarcer.
That makes our work more necessary, not less.
If you’ve been harmed, this is what we want you to know
If you’re here because:
You were scammed
You were mis-sold
You were ignored
You were blocked
You were worn down
You were gaslit
You were told “the system says no”
Please hear this:
You are not weak. You are not stupid. You are not imagining things. You are not alone. You are not broken. You are not finished.
You don’t need to fight yet. You don’t need to explain everything. You don’t need to be “strong”.
You are welcome exactly as you are.
A final word
AI can be used for good. But right now, in banking, it is being used primarily for extraction, deflection, and control — not protection, care, or justice.
That will change.
But until it does, people need places that still work at human speed, with human ethics, and human care.
That is why Get SAFE exists.
And why, in an automated financial system, recovery spaces now matter more than ever.