PROJECT · 2025 · AUTHOR · ARCHIVED

auxmoney: Event-Driven Credit at Germany's Largest Loan Marketplace

Principal architect on the credit-assessment pipeline at auxmoney.com — Kafka, AI/LLM automation, and regulated fintech infrastructure at serious scale.

Screenshot of auxmoney.com — Germany's largest peer-to-peer loan marketplace

Let me be upfront about how this one ended: the business unit was closed in a strategic restructuring in April 2025. Not a performance issue. Not a technical failure. Fintech is weird like that — you can ship excellent infrastructure and then watch the org chart fold under you anyway. The systems were running fine when the lights went off. That distinction matters to me, and I’ll say it plainly rather than burying it.

Now. On to the actually interesting part.

What auxmoney is

auxmoney is Germany’s largest loan marketplace — a peer-to-peer lending platform connecting private borrowers with institutional and retail investors. The scale is real. The regulatory envelope is serious. BaFin, GDPR, the Consumer Credit Directive, German lending law — every one of those frameworks has teeth that bite differently when you’re moving credit decisions at volume.

I joined as Principal Software Engineer and Architect. The job was what auxmoney calls “Playing Captain”: highly hands-on in the codebase, not just drawing boxes on whiteboards. I was expected to understand the system deeply enough to change it, not just to review other people’s changes. That distinction is important and underrated. Architecture that isn’t grounded in how the code actually runs is fan fiction.

The credit-assessment pipeline

A borrower submits an application. That application is not a row in a database waiting for a cron job. It is an event — immediately — and a cluster of downstream services is already waiting for it. The credit-scoring pipeline. The fraud-detection layer. The regulatory reporting hooks. The partner-lender routing logic. Each of those is its own domain, each has its own failure mode, and the requirement is that none of them block any of the others.

The happy path has to work. The unhappy paths — a credit bureau call that times out, a partner-lender API that throttles at 3am, a compliance check that returns an ambiguous status — all of those have to be handled without the borrower knowing anything happened at all.

Kafka is the backbone because you need durability, replay, and independent consumer progress. The credit bureau call fails? The consumer retries from the committed offset with a clean state. You need to replay three days of events because a downstream service had a bug? Rewind the consumer. You’re onboarding a new partner lender that needs to process events that already happened? Replay again. Queues forget. Kafka remembers. In a financial system, “I forgot” is not an acceptable failure mode.
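The properties that matter here — append-only durability, per-consumer progress, rewindable offsets — can be sketched without a broker. This is a minimal in-memory model of that log semantics, not auxmoney's actual code; names like `EventLog` and the consumer groups are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class EventLog:
    """Sketch of a Kafka-style durable log: events are appended and never
    deleted, and each consumer group tracks its own committed offset."""
    events: list = field(default_factory=list)
    offsets: dict = field(default_factory=dict)  # group name -> next offset to read

    def publish(self, event):
        self.events.append(event)

    def poll(self, group):
        """Deliver everything past the group's committed offset. A failed
        handler simply does not commit, so the next poll re-delivers."""
        start = self.offsets.get(group, 0)
        return self.events[start:]

    def commit(self, group, offset):
        self.offsets[group] = offset

    def rewind(self, group, offset=0):
        """Replay: reset the offset so already-seen events are re-delivered,
        e.g. after a downstream bug or for a newly onboarded consumer."""
        self.offsets[group] = offset
```

The point of the sketch: a new partner-lender consumer polling for the first time sees the entire history for free, and a buggy scoring consumer can be rewound three days without touching any other group's progress. A queue gives you neither.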

Regulatory explainability is not optional; it is the product

This is the thing that shapes every architectural decision in EU fintech and that conference talks systematically undersell.

Every credit decision — every loan offer, every rejection, every limit adjustment — has to be explainable, attributable, and reproducible. Not as a nice-to-have. As a legal requirement. If a regulator asks “why was this borrower rejected on this date,” the answer cannot be “we ran a model and the model said no.” You need to be able to reconstruct the exact inputs, the exact model version, the exact rule that fired, and the exact human-in-the-loop handoff point if there was one.

This changes how the event pipeline is designed. Events are immutable. They carry more metadata than payload in some cases: who initiated the decision, which model version was consulted, which rule fired, what the input data looked like at the time of the decision. All of that survives for years. On Kubernetes this means exactly-once consumer semantics, idempotent handlers, and dead-letter queues that are actually monitored. Ask me how many dead-letter queues I have seen that were treated as write-only storage. The answer is too many.
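What that metadata-heavy, idempotent shape looks like in practice can be sketched roughly as follows. The field names and the validation rule are hypothetical, not the production schema; the two properties the sketch demonstrates are real requirements from the text — immutable events that carry reconstruction metadata, and handlers that survive re-delivery without double-recording, with malformed events routed to a dead-letter store instead of being dropped:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)  # frozen: a decision event is immutable once emitted
class CreditDecisionEvent:
    """Hypothetical audit envelope — the metadata needed to reconstruct
    a decision years later, often outweighing the payload itself."""
    event_id: str
    borrower_id: str
    decision: str           # e.g. "approved" / "rejected"
    model_version: str      # exact model version consulted
    rule_fired: str         # exact rule that produced the outcome
    inputs_snapshot: dict   # input data as it looked at decision time
    decided_at: str         # ISO-8601 timestamp of the decision

def handle(event, processed_ids, audit_store, dead_letters):
    """Idempotent handler: re-delivery after a retry or replay must not
    record the same decision twice; invalid events go to a DLQ that
    someone actually monitors."""
    if event.event_id in processed_ids:
        return "skipped"            # already handled — safe under replay
    if not event.borrower_id or not event.model_version:
        dead_letters.append(event)  # never silently dropped
        return "dead-lettered"
    audit_store[event.event_id] = asdict(event)
    processed_ids.add(event.event_id)
    return "recorded"
```

Run the same event through twice and the second delivery is a no-op — which is exactly what makes offset rewinds and at-least-once delivery safe to combine.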

Shipping AI/LLM automation inside a regulated pipeline

The model is not the hard part. Getting LLM automation past compliance review, in a production credit pipeline, in a regulated financial environment — that is the hard part.

The capability is genuinely there. Modern LLMs are useful for document understanding, structured extraction from unstructured input, classification. They are useful for things that a loan application generates in abundance: income statements that are PDFs, employment letters that are scans, self-reported data that needs to be normalized against external signals.

The governance is the work. A model call in a credit workflow is not a raw API call. It is a versioned, logged, auditable invocation with a stable prompt template pinned to a specific model version, with output stored alongside the decision record as part of the audit trail. The LLM becomes a component in an auditable pipeline. You’re not calling it and hoping for the best. You’re treating its output as one signal in a documented, explainable decision process. This is more conservative than what most teams do in consumer apps. It should be. The stakes are different.
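The wrapper pattern described above can be sketched like this. Everything concrete here is an assumption for illustration — the template text, the model version string, and the injected `call_model` client are invented — but the governance shape is the one the text describes: template pinned, model version pinned, full invocation persisted next to the decision record:

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical pinned artifacts: a prompt template and model version that
# change only through a reviewed release, never ad hoc at call time.
PROMPT_TEMPLATE_V3 = "Extract the net monthly income from: {document_text}"
PINNED_MODEL = "doc-extractor-2024-11"  # invented version identifier

def audited_llm_call(document_text, call_model, audit_log):
    """Versioned, logged, auditable invocation: the model client is
    injected, and the full prompt/output pair joins the audit trail."""
    prompt = PROMPT_TEMPLATE_V3.format(document_text=document_text)
    output = call_model(PINNED_MODEL, prompt)  # not a raw, untracked API call
    audit_log.append({
        "model_version": PINNED_MODEL,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return output
```

The hash makes it cheap to prove, years later, that a given decision was made with exactly this template version — without diffing prompt strings by eye.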

The interesting engineering is in the integration seam: how the LLM output is validated before it enters the decision path, how confidence thresholds gate automatic vs. manual review, how model version pinning interacts with the rest of the event schema versioning. None of that is in any tutorial. It’s in the places where the abstract architecture meets the specific compliance requirement.
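One piece of that seam — confidence gating between automatic flow and manual review — reduces to a small routing function. The threshold values below are placeholders; in a regulated pipeline they are a compliance decision, not an engineering one:

```python
AUTO_ACCEPT_THRESHOLD = 0.95    # assumed values for illustration only —
MANUAL_REVIEW_THRESHOLD = 0.70  # real thresholds come from compliance review

def route_extraction(value, confidence):
    """Validate LLM output before it enters the decision path: high
    confidence flows on automatically, mid confidence goes to a human,
    low confidence is rejected and never touches the credit decision."""
    if confidence >= AUTO_ACCEPT_THRESHOLD:
        return ("auto", value)
    if confidence >= MANUAL_REVIEW_THRESHOLD:
        return ("manual_review", value)
    return ("rejected", None)
</imports>

Crucially, the "rejected" branch returns no value at all: a low-confidence extraction must not leak into the decision record as if it were data.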

Playing Captain in a decade-old codebase

“Playing Captain” is not a title. It is a description of how you work. Hands in the code, not just in the docs.

In a fintech codebase that has been running for a decade, this means navigating what actually exists. PHP services that predate the Kubernetes migration. Java microservices that predate the Kafka adoption. Fresh utilities that someone shipped last quarter. These are not legacy problems to be ashamed of — they are accumulated investment in a system that has been making real credit decisions for real borrowers for years. You cannot ignore them and you cannot rewrite them all this quarter.

So you build at the boundaries. You publish domain events from the edges of the legacy systems. You define the contract at the Kafka topic level and let each service own its implementation. The PHP service that is the source of truth for borrower state is not being replaced this sprint — it is being wrapped in an event interface that lets the new credit-assessment pipeline consume from it without knowing its internals. That is the only kind of migration that actually ships.
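The boundary-wrapping move is essentially an anti-corruption layer: an adapter at the legacy system's edge translates its internal record format into the stable topic contract. The field names below are invented (including the German-style legacy column names) purely to show the shape — consumers of the event never learn them:

```python
def to_domain_event(legacy_row):
    """Hypothetical edge adapter: translate the legacy service's internal
    row into the versioned contract defined at the Kafka topic level.
    The legacy internals stay behind this boundary."""
    return {
        "schema_version": 2,                         # contract evolves explicitly
        "type": "BorrowerStateChanged",
        "borrower_id": str(legacy_row["kunde_id"]),  # legacy naming stays inside
        "state": legacy_row["status"].lower(),
    }
```

The new credit-assessment pipeline consumes `BorrowerStateChanged` events and never imports, queries, or even knows about the PHP service's schema — which is what makes replacing that service later a contained project instead of a big bang.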

AWS underneath all of it: EKS for the Kubernetes workloads, MSK for managed Kafka, RDS for the relational state that needs to be relational. The infrastructure is not exotic. The problems are in the domain and the regulations, not the stack.

What this kind of work leaves you with

Lending infrastructure done right is one of the most technically demanding domains I’ve worked in. The combination of high transaction volume, hard latency requirements, regulatory explainability, and real financial consequences produces an engineering discipline that I haven’t seen replicated as consistently anywhere else.

When the business unit closed, it closed. That’s the business. What the system was doing before the org chart changed is a separate question, and the answer is: running well.

I’m currently doing open-source work on Fulcrum — a local-first agent control plane — and available for fintech and financial-infrastructure conversations. If you’re building event-driven credit infrastructure and want to compare notes on the compliance integration layer specifically, you know where to find me.