Elevatus is a real product. It’s running today at elevatus.jobs, handling hiring for enterprise clients and government ministries across the Middle East. I was part of the team that built the multi-tenant architecture it runs on, during my time at Talentera between 2018 and 2020. This post is about what that actually means — not the abstract multi-tenancy talk you’ll find in a Stripe engineering blog, but the specific, embarrassing, fascinating details of building a platform where a government ministry and a retail chain share the same codebase, the same database architecture, and the same workflow engine, and neither of them can ever know the other exists.
The tenant isolation problem is not primarily a data problem
When engineers think about multi-tenancy, they usually think about data first: shared tables vs. separate schemas vs. separate databases. That’s real and it matters. At Elevatus, we used a row-level isolation model with tenant IDs on every row, combined with schema-level separation for certain sensitive configuration tables. That gets you the basics.
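The row-level model can be sketched in a few lines. This is a deliberately minimal illustration, not the production data layer: the table, column names, and helper are hypothetical, but the idea is the one described above — every row carries a tenant ID, and queries go through a helper that appends the filter so application code can’t forget it.

```python
import sqlite3

# Hypothetical schema: every row carries a tenant_id column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE candidates (id INTEGER, tenant_id TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO candidates VALUES (?, ?, ?)",
    [(1, "ministry-a", "Lina"), (2, "retailer-b", "Omar")],
)

def tenant_query(conn, tenant_id, sql, params=()):
    """Run a query with the tenant filter appended, so callers
    can never accidentally omit it."""
    return conn.execute(
        sql + " WHERE tenant_id = ?", tuple(params) + (tenant_id,)
    ).fetchall()

rows = tenant_query(conn, "ministry-a", "SELECT name FROM candidates")
# ministry-a sees only its own rows; retailer-b's data is invisible to it.
```

In practice you push this even lower, into the ORM session or a database view, so the filter is enforced by infrastructure rather than discipline.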
The actual hard problem is identity and access. Not “can tenant A read tenant B’s data” — that’s a query filter. The hard problem is: how do you manage the identity of a user across a platform where the same email address might belong to a job candidate who has applied to positions at three different tenants? What does it mean for that person to be “logged in”? Which tenant’s configuration governs their session? What happens when tenant A and tenant B are running the same hiring round for the same position, and the same candidate has applied to both?
These aren’t hypothetical edge cases. In the MENA recruitment market, candidates apply broadly. Government hiring rounds — which are centralized and run at scale — can have tens of thousands of applicants, many of whom have also applied to private sector roles on the same platform.
The answer we built centered on Keycloak: a federated identity model where each tenant had its own realm, with candidates existing as a cross-realm identity type. A candidate’s core profile lived in a shared identity space. Their application data lived in the tenant realm. The authentication flow was tenant-scoped — you couldn’t log in to tenant A’s interface with tenant B’s credentials — but a candidate could hold active applications across multiple tenants simultaneously without the platform creating conflicting identity records. Keycloak’s realm federation and cross-realm token exchange made this tractable. Without it, we’d have been solving identity federation by hand, which is the kind of work that generates security incidents.
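The cross-realm piece rests on OAuth 2.0 token exchange (RFC 8693), which Keycloak’s token endpoint implements. As a sketch of what such a request carries — the host, realm, client, and token values below are hypothetical placeholders, not our actual configuration:

```python
# Sketch of an OAuth 2.0 token-exchange request body (RFC 8693), which
# Keycloak's token endpoint accepts. Host, realm, client, and token
# values are hypothetical placeholders.
KEYCLOAK_BASE = "https://id.example.com"  # hypothetical host
TENANT_REALM = "tenant-a"                 # hypothetical tenant realm

token_endpoint = (
    f"{KEYCLOAK_BASE}/realms/{TENANT_REALM}/protocol/openid-connect/token"
)

exchange_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "client_id": "candidate-portal",  # hypothetical client
    "subject_token": "<token issued against the shared identity realm>",
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
}
# POSTing exchange_request (form-encoded) to token_endpoint yields a token
# scoped to the tenant realm, without minting a second identity record.
```

The point is that the candidate’s shared-space identity is exchanged for a tenant-scoped credential at the boundary, rather than being duplicated into every realm.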
What “AI-powered” meant in 2018
The Elevatus pitch included “AI-powered hiring,” which was a true statement in 2018 in the same way that a lot of true statements are slightly more complicated than they sound.
We weren’t doing large language models. GPT-3 didn’t exist yet. What we were doing was a combination of ML-driven CV parsing and scoring, candidate ranking based on weighted criteria matching, and early experiments with video interview analysis. The CV parsing was the part that actually worked well in production — extracting structured data from unstructured resumes across multiple languages (Arabic, English, French, Hindi, and others) and normalizing that data into a candidate profile that the system could rank against job criteria. The challenge at the time was not that ML was hard — it’s that labeled training data for Arabic-language CV parsing was genuinely sparse, and the model quality reflected that. We built a feedback loop where recruiter decisions (hired / rejected / moved to next stage) fed back into the ranking model, which improved it over time on a per-tenant basis.
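The ranking-plus-feedback loop can be sketched as weighted criteria matching where recruiter outcomes nudge the tenant’s weights. The criteria names, weights, and the additive update rule below are illustrative stand-ins, not the production model:

```python
def score(candidate, weights):
    """Weighted criteria match: each criterion the candidate satisfies
    contributes its tenant-specific weight to the score."""
    return sum(w for criterion, w in weights.items()
               if criterion in candidate["criteria"])

def feedback(weights, candidate, hired, lr=0.1):
    """Illustrative update rule: nudge the weights of the candidate's
    criteria up on a hire, down on a rejection."""
    for criterion in candidate["criteria"]:
        if criterion in weights:
            weights[criterion] += lr if hired else -lr
    return weights

# Hypothetical per-tenant weights for a ministry-style rubric.
weights = {"arabic_fluency": 1.0, "security_clearance": 2.0, "python": 0.5}
cand = {"criteria": {"arabic_fluency", "security_clearance"}}

s = score(cand, weights)                    # 1.0 + 2.0 = 3.0
weights = feedback(weights, cand, hired=True)  # both matched weights rise
```

The real model was richer than a linear score, but the shape of the loop — decisions in, adjusted ranking out, per tenant — is the same.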
That last part — per-tenant model personalization — was architecturally interesting and operationally annoying. The per-tenant model variant meant that a government ministry’s recruiter decisions (where regulatory compliance and credential verification dominate the hiring rubric) shaped a model that was meaningfully different from the one trained on a tech company’s decisions (where portfolio and demonstrated skill dominate). That’s the right outcome. It’s also a storage and serving problem that doesn’t exist if you’re just running one global model. Multi-tenancy applied to ML infrastructure is a whole additional set of constraints.
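The serving side of that constraint reduces to a lookup with a fallback: serve the tenant’s personalized variant when one exists, else the shared baseline. A minimal sketch, with hypothetical model identifiers:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    """Illustrative per-tenant model store: resolve to the tenant's
    personalized variant when one has been promoted, otherwise fall
    back to a shared baseline model."""
    baseline: str = "ranker-global-v3"  # hypothetical model id
    tenant_models: dict = field(default_factory=dict)

    def promote(self, tenant_id, model_id):
        # Called when a tenant's retrained variant passes evaluation.
        self.tenant_models[tenant_id] = model_id

    def resolve(self, tenant_id):
        return self.tenant_models.get(tenant_id, self.baseline)

registry = ModelRegistry()
registry.promote("ministry-a", "ranker-ministry-a-v7")

registry.resolve("ministry-a")  # the ministry's personalized variant
registry.resolve("retailer-b")  # falls back to the shared baseline
```

The annoying part isn’t the lookup — it’s that every variant needs its own storage, evaluation history, and rollback path, multiplied by the number of tenants.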
The video interview analysis was less production-ready. Natural language processing on video-recorded interview responses, generating summary assessments for recruiters. It worked. It was also, in retrospect, the kind of system that required much more careful design around bias and fairness than we had tooling for in 2018. The field has matured considerably. What I’ll say is: we were asking questions about “does this model reflect the diversity of the hiring pool fairly” before those questions became mainstream, because our clients required it — government ministries operating under explicit diversity mandates will ask pointed questions about your AI outputs, and “the model said so” is not an acceptable answer to a minister’s office.
Workflow orchestration per tenant, not globally
The Camunda BPMN integration was one of the structural decisions I’m most satisfied with from that period. Recruitment workflows are genuinely complex: they have conditional branching (if the role requires a security clearance, add these steps), parallel tracks (technical screen and HR screen can run simultaneously), time-dependent state (if the candidate hasn’t responded in 7 days, trigger a reminder, then escalate, then close the application), and they differ radically between tenants.
A government ministry’s hiring workflow for a civil service position might have 14 stages and involve three approval committees. A startup’s workflow might have 5 stages and run in a week. You cannot hardcode either of these. You cannot build a configurable UI that covers both. What you can do is model them both as BPMN process definitions, store them per-tenant, and let Camunda execute them with the same engine.
This meant the platform’s workflow layer was tenant-specific at the process definition level but shared at the execution infrastructure level. Adding a new workflow variant for a new client was a BPMN file deployment, not a code deployment. That distinction matters enormously when you have enterprise clients who have specific requirements and a sales team that has promised them the platform can handle those requirements.
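Camunda 7 exposes this directly: its REST deployment endpoint accepts a tenant id alongside the BPMN file, which is what makes “a BPMN file deployment, not a code deployment” literal. A sketch of building such a request — the host, file name, and tenant id are hypothetical:

```python
# Sketch of a tenant-scoped deployment request against the Camunda 7
# REST API (POST /deployment/create). Host, tenant id, and BPMN file
# name are hypothetical; the multipart field names follow the
# documented deployment endpoint.
CAMUNDA_BASE = "https://workflow.example.com/engine-rest"  # hypothetical

def deployment_request(tenant_id, bpmn_path):
    """Assemble the URL and multipart fields for a per-tenant
    process-definition deployment."""
    url = f"{CAMUNDA_BASE}/deployment/create"
    form = {
        "deployment-name": f"hiring-{tenant_id}",
        "tenant-id": tenant_id,  # scopes the definition to one tenant
    }
    files = {"data": bpmn_path}  # the BPMN XML goes in as a file part
    return url, form, files

url, form, files = deployment_request(
    "ministry-a", "civil-service-hiring.bpmn"
)
# The same engine then runs ministry-a's 14-stage process and a
# startup's 5-stage process side by side, keyed by tenant id.
```

One engine, many process definitions, each invisible outside its tenant — the execution infrastructure stays shared while the workflow logic does not.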
The onboarding problem
Enterprise and government clients both have long, painful onboarding processes. But they’re painful in different ways.
Enterprise clients want integrations: SSO via their existing identity provider, webhooks into their ATS or HRIS, custom branding, API access for their internal tooling. They have developers. They can debug a webhook payload. The onboarding problem is one of configurability — how many knobs the platform exposes, and how well documented they are.
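“They can debug a webhook payload” assumes the payload is verifiable. The standard pattern is an HMAC signature over the body with a per-tenant shared secret; the header name and secret below are hypothetical, but the mechanism is the common one:

```python
import hashlib
import hmac

def sign(secret: bytes, payload: bytes) -> str:
    """Hex HMAC-SHA256 signature the platform would attach to each
    webhook, e.g. in a hypothetical X-Elevatus-Signature header."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: bytes, signature: str) -> bool:
    """Receiving side: recompute and compare in constant time."""
    return hmac.compare_digest(sign(secret, payload), signature)

secret = b"per-tenant-webhook-secret"  # hypothetical shared secret
payload = b'{"event": "candidate.stage_changed", "tenant": "retailer-b"}'

sig = sign(secret, payload)
ok = verify(secret, payload, sig)          # True: genuine payload
bad = verify(secret, payload, "deadbeef")  # False: forged or corrupted
```

Per-tenant secrets matter here for the same reason as everywhere else in the platform: a leaked secret must only ever compromise one tenant’s webhook channel.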
Government clients want compliance: data residency documentation, security assessments, formal approval for specific features. They often have procurement processes that predate SaaS. They may require on-premise deployment options or dedicated infrastructure rather than shared cloud. They have legal teams that ask questions your standard terms of service didn’t anticipate.
Building a platform that can onboard both types simultaneously is a product and architecture problem that can’t be solved by making the platform more configurable. You need organizational muscle alongside the technical capability — people who understand procurement, who can produce the documentation a government security review requires, who can translate “we need a data processing agreement that complies with local data sovereignty law” into actual platform constraints. The code is the easier part.
The thing I’d do differently
The mobile layer — Ionic initially, Flutter later — was added as a separate product surface and it showed. Mobile-first UX patterns for recruitment are genuinely different from desktop patterns. Candidate experience on mobile (applying, tracking application status, doing video interviews on a phone) has different constraints than recruiter experience on desktop (reviewing hundreds of candidates, making batch decisions, running hiring rounds). We built the mobile experience as an adaptation of the web experience, which meant it was never quite as good as a native-mobile-first product would have been.
If I were designing this today, the candidate experience and the recruiter experience would be treated as separate products with separate design constraints that happen to share a backend. The API layer would have been designed with both consumers in mind from the start, not retrofitted to support mobile after the fact. This is the standard “API-first” lesson that everyone learns the hard way once.
Governments actually use Elevatus. That sentence lands differently when you’ve been through the procurement process, the security reviews, the customization negotiations, and the integration work that getting to “actually use” requires. It’s not an MVP being used by a friendly early adopter. It’s production software running critical processes for institutions that cannot tolerate downtime.
That’s what I built.