Before I was a software architect, I was the guy who fixed the network at a gaming center in Amman. Before that, I was a salesman and cashier at a computer shop. I was 17 when I started, 22 when I left that world to finish my CIS degree and start building software professionally.
I don’t lead with this part of my background at job interviews. But I think about it constantly. It shaped how I debug systems, how I think about reliability, and how I talk to non-engineers about technical problems — more than any job since.
Explosion Network Gaming Centre, 2007–2009
The job title was Network Administrator and Manager. In practice: I ran the network, handled the money, managed the machines, dealt with the customers, and occasionally banned players for cheating at Counter-Strike.
Yes, I was the guy who banned players for cheating at Counter-Strike. I regret nothing.
The network was a physical LAN — cables, switches, DHCP, a router to the ISP, and about 30 Windows XP machines that were being used hard, constantly, by teenagers who had zero sympathy for uptime windows. When something broke, it had to be fixed now. Not “we’ll address it in the next sprint.” Not “let me open a ticket.” Now. The person at the desk was waiting, the meter was running (we charged by the hour), and I was the entire ops team.
That pressure is not like on-call stress at a tech company. It’s simpler and more immediate: a real human is standing in front of you, losing money because the machine at station 12 can’t reach the game server. Fix it.
What a physical LAN teaches you that AWS doesn’t
When the network is made of cables you can touch, networks stop being abstract. “The connection is down” means something specific. It means: is the cable seated? Is the switch port lit? Is the DHCP lease fresh? Did someone trip over the cable run behind the wall panel again?
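That mental checklist has an order to it: you probe from the physical layer up, and the first layer that fails is where you stop, because everything above it is noise. A hypothetical sketch of that habit (the probe names and results below are illustrative, not real tooling):

```python
# Hypothetical sketch: the "trace the path" checklist expressed as a
# bottom-up diagnosis. Probes are ordered from the physical layer up;
# the first failing layer is the answer, and higher layers are noise.

from typing import Callable

def diagnose(checks: list[tuple[str, Callable[[], bool]]]) -> str:
    """Walk the probes in order and report the first failure."""
    for name, probe in checks:
        if not probe():
            return f"fault at: {name}"
    return "all layers healthy"

# Simulated probes for station 12 -- in real life these were literal
# questions: is the cable seated, is the switch port lit, is the lease fresh.
station_12 = [
    ("cable seated",          lambda: True),
    ("switch port link",      lambda: True),
    ("dhcp lease valid",      lambda: False),  # the actual culprit
    ("game server reachable", lambda: True),
]

print(diagnose(station_12))  # → fault at: dhcp lease valid
```

The ordering is the whole point: a dead switch port makes a DHCP probe meaningless, so the walk stops at the lowest broken layer rather than reporting every symptom above it.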
I have traced a network problem by physically walking the length of a cable more times than I can count. That is not a skill that shows up on a LinkedIn profile. But it is absolutely the foundation of the instinct I have when something is wrong in a distributed system: the problem is somewhere specific. Find it. Don’t guess from the dashboard; trace the path.
Most engineers I’ve worked with who went straight from a CS degree into cloud infrastructure have a mental model of “the network” as an API. You call it, it responds (usually). When it doesn’t respond, you read the CloudWatch logs. That’s fine. It works. But you’re relying on somebody else’s model of what the network is doing.
If you’ve traced a cable, you have a different model. Networks are physical things with failure points that are often embarrassingly mundane. A spanning tree loop that took down a gaming hall on a Friday night is the same class of problem as a BGP misconfiguration that takes down a datacenter region — the topology failed in a predictable way that someone should have anticipated. The embarrassment scales, but the category doesn’t change.
The cashier years, and what money through hands teaches you
Before the gaming center, I was at Collection Computer Systems: selling computers, taking cash, doing basic PC maintenance. I was 17.
Handling money changes how you think about value. Not in a philosophical sense — in a very literal one. You develop a feel for when a customer is about to walk out, what a reasonable margin on a sale actually is, why a product that can’t be explained in 30 seconds will never sell from a retail floor no matter how technically superior it is.
I’ve been in engineering meetings where someone presents a solution that is clearly correct — technically rigorous, well-evidenced, obviously the right call — and watches it die because nobody in the room connected it to what the business was trying to do. That’s a communication failure, not a technical one. The engineers who came up through paths that included a customer on the other side of a counter have usually learned this earlier and more viscerally than the ones who went straight into development.
The customer is not a user story. The customer is a person who can leave. Everything downstream of that — the architecture, the uptime, the feature prioritisation — is in service of that person having a reason to stay.
The part people don’t expect: MUMPS, 2010
After the OSS club, after my degree, I went into EHS/HAKEEM — Jordan’s national healthcare system — and spent time writing MUMPS routines on VistA. MUMPS, if you’re not familiar: a 1966-vintage language, still running hospital systems across the world, famously cryptic, famously important.
I bring this up in the context of the unusual path because it’s another layer of the same thing: the people who write healthcare software that keeps running for 30 years have a different relationship to correctness than people whose deployment pipeline handles rollbacks. In a system where wrong output has clinical consequences, the definition of “done” is different. You don’t ship and see what happens.
The gaming center taught me: fix it now, the customer is waiting. MUMPS taught me: be very sure of what you’re doing, the patient is depending on it. Both are correct. They’re in tension, and most of my career has been about navigating that tension in contexts that are somewhere between those two poles.
What the unusual path is actually worth
I’ve been on the ops side of the counter in a way most architects never have. I’ve held the cable that was causing the problem. I’ve explained to a teenager why his account was banned in terms that were both technically accurate and comprehensible. I’ve taken cash for something I built (or fixed) and understood immediately whether it was worth the price.
These are not the skills on my CV. They’re the skills underneath the skills on my CV.
The thing I notice in engineers who came up through paths like mine is a specific kind of stubbornness about root causes. When something’s broken, there’s a physical thing causing it. Not a vibe, not a vague miscommunication between services — a thing. Find the thing. The LAN cable was either seated or it wasn’t. That epistemology scales to distributed systems surprisingly well.
The other thing: patience with non-technical stakeholders. If you’ve explained to a parent why their kid’s account was suspended, or explained to a customer why the computer they want is more expensive than the one in the ad, you’ve practiced explaining technical decisions to people who have skin in the game but not the context. That is a large percentage of what a senior engineer does in a reasonably complex organisation.
I don’t think everyone needs to start as a cashier to become a good architect. But I do think the engineers who’ve been on the receiving end of a system — who’ve been the person whose experience depends on whether the network is working — carry something that’s genuinely hard to acquire from the other side of the keyboard alone.
My path was retail to gaming-hall net-admin to MUMPS to distributed systems. It’s not a playbook. It’s just mine, and I wouldn’t swap it for the straight line.