Barney Goodman

AI-native Change Director. Strategy, product, operations and build. 20+ years in financial services. Analogue film photographer in my spare time.


About

Change & Technology Director at Shermin Finance. I'm Product Owner of Stax, the UK's largest loan originations platform for consumer credit. Twenty years in financial services, digital transformation, product strategy, and operations.

One of the first UK Consumer Finance professionals to achieve 6x Claude Certifications, as verified by Anthropic.

I work with agentic engineering to architect solutions, design products, write code, and ship finished work myself end-to-end. Strategy, operations, and delivery all run through one person. The result:

Compressed development cycles. Products and MVPs that would normally take a team months, delivered in days. Full executive oversight with zero interpretation loss between strategy and execution.

I write about AI implementation, agentic engineering and fintech. I shoot analogue film and travel in my spare time. This site is where I share what I'm thinking about.

Latest Pictures

View gallery

Daily Digest

View full digest
TLDR Tech

Salesforce's Agent Play Is a Procurement Problem

Salesforce Headless 360 is being framed as an AI innovation story. It's actually a vendor lock-in story, and UK financial services technology leaders need to read it that way.

The shift from 'system of record' to 'system of execution' sounds neutral, even exciting. What it means in practice is that Salesforce wants its platform to be the thing that *does* things on behalf of your customers, not just stores data about them. Once your AI agents are executing loan decisions, affordability checks, or customer communications through Salesforce's orchestration layer, your switching costs don't double. They multiply.

We've been here before with CRM. Firms that let Salesforce become their core customer data store in the 2010s are still paying for that decision. Agent orchestration is a deeper dependency than data storage because it's embedded in your operational logic, your audit trails, your FCA compliance architecture.

The FCA's operational resilience rules are directly relevant here. PS21/3 requires firms to map important business services and set impact tolerances for disruption. If a third-party agent platform becomes your execution layer for credit decisions or collections workflows, that vendor relationship sits inside your resilience framework, not outside it. The contractual SLAs Salesforce offers are almost certainly not written to meet your impact tolerances.

Two things technology leaders should do now:

- Treat agent orchestration platforms as critical third-party infrastructure from day one, not after you've integrated them
- Push hard on contractual specifics: what are the SLAs, what does remediation look like, and what does exit actually cost

The interesting question isn't whether agentic AI has a future in consumer finance. It does. The question is whether you want a single US vendor controlling the execution layer of your regulated business processes, and whether your board and risk function understand that's what's being proposed.

AI agents · Salesforce · AI
TLDR Tech

Codex Is Now an Automation Layer, Not a Coding Tool

The framing of this announcement as a coding upgrade is wrong. What OpenAI is actually describing is an agent that can sit across your entire desktop and SaaS stack, execute multi-step workflows, and remember context between sessions. That is an automation platform with a coding origin story.

For UK consumer finance, this matters more than most sectors want to admit. Our technology teams spend enormous time on the connective tissue between systems: extracting data from one platform, reformatting it, pushing it into another, triggering downstream processes. That work is often too bespoke for off-the-shelf automation tools and too low-value to build properly. An agent layer that understands business context and can operate across applications without custom integration code could absorb a significant chunk of that overhead.

The enterprise memory feature is worth paying attention to specifically. An agent that retains knowledge of your workflows, your naming conventions, your edge cases, starts to look less like a tool and more like institutional knowledge that doesn't leave when someone hands in their notice.

Two things should give technology leaders pause, though:

- Compliance and audit exposure. An agent that operates across systems and executes tasks creates a new class of action that needs to be logged, reviewed, and attributable. Most firms' governance frameworks were not designed for this.
- Vendor concentration. Routing automation logic through a single AI provider, on top of existing OpenAI dependencies, creates a concentration risk that the FCA's operational resilience rules were designed to make firms think hard about.

The competition with Anthropic's Claude Code is less interesting than the broader shift it signals. The major AI labs are no longer competing to be the best assistant. They are competing to be the operating layer that everything else runs through. Whether your technology strategy has an answer to that shift is a question worth asking.
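To make the audit point concrete: here is a minimal Python sketch of what an attributable, append-only record of an agent-initiated action might look like. Every name in it (the fields, the agent ID, the example action) is an illustrative assumption of mine, not any vendor's actual logging API.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass(frozen=True)
class AgentActionRecord:
    """One reviewable, attributable record of something an agent did."""
    agent_id: str        # which agent acted
    initiated_by: str    # the human or process accountable for the agent
    action: str          # what the agent did
    target_system: str   # where it did it
    outcome: str         # result, for later review
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_action(log: list[str], record: AgentActionRecord) -> None:
    """Append-only: records are serialised once and never mutated."""
    log.append(json.dumps(asdict(record)))


# Hypothetical example: a desktop agent exporting a report from a CRM.
audit_log: list[str] = []
log_action(audit_log, AgentActionRecord(
    agent_id="desktop-agent-01",
    initiated_by="ops.analyst@example.com",
    action="export_arrears_report",
    target_system="collections-crm",
    outcome="success",
))
```

The point of the sketch is the shape, not the code: every agent action carries a named accountable human, a target system, and an immutable timestamp, which is the minimum a governance framework would need to review it.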

agentic · AI agents · AI · automation
TLDR Tech

Why AI-Friendly Frameworks Matter for Lending Tech

The interesting thing about Plain is not that it's another Python framework. It's the explicit design choice to make code legible to AI agents, not just human developers.

Django is genuinely brilliant for building loan origination platforms. Convention over configuration gets you to a working credit application journey faster than almost anything else. But those conventions are implicit. They live in the heads of senior engineers and in documentation that an AI coding agent has to infer its way through. Plain's approach, forking Django and making everything typed and explicit, is a direct response to the reality that AI is now writing and reviewing a significant chunk of production code.

For technology leaders in consumer finance, this matters more than in most sectors. We operate under conduct rules that require explainability and auditability. When an AI agent generates a change to your eligibility logic or your affordability calculation, you need to be confident that the change is:

- Traceable to a deliberate decision
- Typed and predictable enough to catch errors before they reach production
- Readable by a compliance engineer, not just the original developer

Implicit magic in a framework makes all three harder.

I'm not suggesting everyone abandon Django tomorrow. The ecosystem, the talent pool, the existing platform investment, none of that disappears because a new framework has better type annotations. But the underlying question Plain raises is worth sitting with: are your engineering choices optimised for a world where humans write all the code, or for the one you're actually in?

The teams I see moving fastest right now are the ones treating AI coding agents as a genuine constraint on their architecture decisions. Plain is one answer to that constraint. The more important shift is recognising the question exists at all.
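To show what 'typed and explicit' means in practice, here is a minimal Python sketch of affordability logic written so that a compliance engineer, or an AI agent, can follow it without inferring hidden conventions. The function, the field names, and the £100 buffer are all hypothetical illustrations of mine, not Plain's API or any real platform's rules.

```python
from dataclasses import dataclass
from decimal import Decimal


@dataclass(frozen=True)
class Applicant:
    """Explicit, typed inputs: nothing pulled from global state or settings."""
    monthly_income: Decimal
    monthly_commitments: Decimal


@dataclass(frozen=True)
class EligibilityResult:
    eligible: bool
    reason: str  # human-readable, so a reviewer can trace the decision


def assess_affordability(
    applicant: Applicant,
    buffer: Decimal = Decimal("100.00"),  # hypothetical policy threshold
) -> EligibilityResult:
    """Every input and output is named and typed; no implicit magic."""
    disposable = applicant.monthly_income - applicant.monthly_commitments
    if disposable < buffer:
        return EligibilityResult(
            False, f"disposable income {disposable} below buffer {buffer}"
        )
    return EligibilityResult(
        True, f"disposable income {disposable} meets buffer {buffer}"
    )


result = assess_affordability(
    Applicant(
        monthly_income=Decimal("2500.00"),
        monthly_commitments=Decimal("1800.00"),
    )
)
```

Nothing here is clever, which is the point: a typed result with a stated reason is traceable, predictable, and readable, the three properties the conduct rules push you towards.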

agentic · AI agents · AI

Latest Posts

View all