
SAMS: Spatial Automated Marketing System (B.Sc. 2010)

University of Jordan graduation project: a GIS-powered, location-aware marketing system with a server backend and clients for Windows Mobile and Symbian. Yes, Symbian.

[Hero image: client-server architecture diagram showing request-response flow]

It’s 2010. The iPhone is three years old and still a curiosity in Amman. “Mobile” means Nokia. Blackberry is for executives. Android has been out for two years but feels like a developer experiment. And I’m at the University of Jordan finishing my B.Sc. in Computer Information Systems, building a location-aware marketing system for my graduation project.

The audacity. I love that guy.

What SAMS was

Spatial Automated Marketing System. I named it myself, which is how you know it was 2010 — we were still naming projects with four-word acronyms like we were writing RFCs.

The system was built around a simple premise: if your phone knows where you are, and a business knows where its customers are, you can send targeted promotions to customers when they’re geographically close to a store or venue. Location-based marketing. In 2010 this was a novel engineering problem. In 2024 it’s a feature flag in every ad platform. In 2010 I had to build the whole thing.

The architecture — and I am using that word generously for a graduation project — was three-tier:

  1. Server backend: Java/PHP, relational database (SQL Server), GIS data layer. Businesses registered their venues with geographic coordinates. Customer profiles were maintained with opt-in preferences. The marketing engine matched venue proximity to customer profiles and queued promotion messages (a rough sketch of that matching step follows this list).

  2. Windows Mobile client: A .NET Compact Framework application for Windows Mobile handsets. GPS polling. Map rendering — genuinely painful in 2010, because mobile mapping SDKs were not what they are today. You were often rendering tiles yourself. The client sent location updates to the server and received promotion triggers when proximity rules fired.

  3. Symbian client: Because Windows Mobile was not the dominant platform in the Jordanian market in 2010 — Nokia was. Symbian Series 60, specifically. A second client application, different SDK, different runtime, same protocol. This is where I learned that “write once, run anywhere” was a Java marketing slogan and not a description of mobile reality.
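
For concreteness, here is roughly what the matching step in the server tier looked like in spirit. This is a reconstruction in plain Java, not the 2010 code (which is long gone); the class names, the haversine helper, and the per-venue radius are all illustrative.

```java
// Minimal sketch of server-side proximity matching. Illustrative names, not the original code.
import java.util.ArrayList;
import java.util.List;

final class Venue {
    final String id;
    final double lat, lon;     // venue coordinates in degrees
    final double radiusMeters; // geofence radius for this venue

    Venue(String id, double lat, double lon, double radiusMeters) {
        this.id = id; this.lat = lat; this.lon = lon; this.radiusMeters = radiusMeters;
    }
}

final class ProximityMatcher {
    private static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Haversine great-circle distance between two lat/lon points, in meters. */
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    /** Returns the venues whose geofence contains the customer's reported position. */
    static List<Venue> matchVenues(double customerLat, double customerLon, List<Venue> venues) {
        List<Venue> hits = new ArrayList<>();
        for (Venue v : venues) {
            if (distanceMeters(customerLat, customerLon, v.lat, v.lon) <= v.radiusMeters) {
                // In the real system this also checked opt-in preferences before queueing a message.
                hits.add(v);
            }
        }
        return hits;
    }
}
```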

The technical problems that were actually hard

GPS in 2010 on consumer handsets was not reliable. Accuracy was measured in tens of meters on a good day, worse indoors or in dense urban areas. A proximity trigger at 200m radius that fires correctly in an open-air parking lot will false-positive constantly in a shopping mall. The server-side geofence logic had to account for GPS drift, which meant it had to treat customer location as a probability distribution, not a point. I handled this with a moving average over the last N location samples and a confidence threshold before triggering. This was not in my coursework. I worked it out from first principles, which at the time felt like research and in retrospect was just debugging.
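
A minimal sketch of that smoothing-and-threshold idea, reusing the `Venue` and `distanceMeters` helpers from the sketch above. The window size and agreement threshold are placeholder values, not the ones from the original report.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Sketch of geofence triggering over noisy GPS fixes: average the last N samples
 *  and require a minimum fraction of them to fall inside the fence before firing.
 *  Window size and threshold are illustrative, not the 2010 values. */
final class GeofenceTrigger {
    private final int window;             // number of recent fixes to consider
    private final double minInsideRatio;  // fraction of fixes inside the fence needed to trigger
    private final Deque<double[]> fixes = new ArrayDeque<>(); // each entry: {lat, lon}

    GeofenceTrigger(int window, double minInsideRatio) {
        this.window = window;
        this.minInsideRatio = minInsideRatio;
    }

    /** Feed one GPS fix; returns true when the smoothed, thresholded position is inside the fence. */
    boolean onFix(double lat, double lon, Venue venue) {
        fixes.addLast(new double[] { lat, lon });
        if (fixes.size() > window) fixes.removeFirst();
        if (fixes.size() < window) return false; // not enough evidence yet

        int inside = 0;
        double sumLat = 0, sumLon = 0;
        for (double[] f : fixes) {
            sumLat += f[0];
            sumLon += f[1];
            if (ProximityMatcher.distanceMeters(f[0], f[1], venue.lat, venue.lon) <= venue.radiusMeters) {
                inside++;
            }
        }
        // Naive lat/lon averaging is fine at city scale, which is all this ever had to handle.
        double avgLat = sumLat / fixes.size();
        double avgLon = sumLon / fixes.size();
        boolean avgInside =
            ProximityMatcher.distanceMeters(avgLat, avgLon, venue.lat, venue.lon) <= venue.radiusMeters;
        return avgInside && (inside / (double) fixes.size()) >= minInsideRatio;
    }
}
```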

The Symbian networking stack was not forgiving. HTTP on Symbian Series 60 required explicit network session management, and the platform had several generations of APIs with incompatible behaviors across device models. I tested on Nokia devices I could borrow from classmates and family, which meant my test matrix was “whatever Nokia phones people had in Amman in early 2010.” This is not rigorous. It is very undergraduate.

Battery was a constant constraint. A GPS application that polls every 30 seconds will drain a 2010 Nokia’s battery in hours. The polling interval was configurable; I wrote a section in the project report explaining why the right tradeoff depended on the deployment context (venue density, customer movement speed, business requirements). This section was probably longer than it needed to be. I was proud of having thought about it.
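
If I compressed that report section into code today, it would be a small heuristic like the one below: poll rarely when far from every venue, more often as a fence gets close. This is a hypothetical illustration of the tradeoff, not what the clients actually shipped.

```java
/** Hypothetical adaptive-polling heuristic: the farther the customer is from the
 *  nearest venue, the longer the client can sleep between GPS fixes without
 *  missing a geofence entry. All parameters are illustrative. */
final class PollingPolicy {
    private final long minIntervalMs;      // floor, e.g. 30 s when a venue is nearby
    private final long maxIntervalMs;      // ceiling, e.g. 10 min when nothing is close
    private final double assumedSpeedMps;  // assumed worst-case movement speed

    PollingPolicy(long minIntervalMs, long maxIntervalMs, double assumedSpeedMps) {
        this.minIntervalMs = minIntervalMs;
        this.maxIntervalMs = maxIntervalMs;
        this.assumedSpeedMps = assumedSpeedMps;
    }

    /** Next polling interval given the distance (meters) to the nearest geofence edge. */
    long nextIntervalMs(double metersToNearestFence) {
        // Time to cover the remaining distance at the assumed speed; poll at least twice within it.
        double secondsToReach = metersToNearestFence / assumedSpeedMps;
        long candidate = (long) (secondsToReach * 1000 / 2);
        return Math.max(minIntervalMs, Math.min(maxIntervalMs, candidate));
    }
}
```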

What the project actually taught me

The gap between “it works on my machine” and “it works in the wild.”

I had two clients and a server. Each had different failure modes. The server could be down. The GPS could be unavailable. The mobile connection could drop. The database query could time out. The promotion could be a duplicate because the client retried. Every one of these failure modes required a decision — retry, ignore, degrade gracefully, or fail loudly — and the right answer was different for each.
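
One concrete case from that list: a client retry must not deliver the same promotion twice. A minimal sketch of the server-side deduplication idea, with invented names since the original schema is lost; the 2010 version kept this state in SQL Server, not in memory.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of idempotent promotion delivery: remember which (customer, promotion)
 *  pairs have already been queued, so a client or network retry degrades to a
 *  no-op instead of a duplicate message. Names are illustrative. */
final class PromotionDeliveryLog {
    private final Set<String> delivered = ConcurrentHashMap.newKeySet();

    /** Returns true only on the first delivery of this promotion to this customer. */
    boolean markDelivered(String customerId, String promotionId) {
        return delivered.add(customerId + ":" + promotionId);
    }
}

// Usage inside the marketing engine (hypothetical):
//   if (log.markDelivered(customer.id, promo.id)) { queueMessage(customer, promo); }
//   else { /* duplicate trigger or client retry: ignore */ }
```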

This was my first distributed system. It was tiny. It was also genuinely distributed: three separate runtime environments with separate failure domains and a network between each pair of them. The lessons about partial failure and graceful degradation that I applied at Elevatus, at Bytro, at every multi-service system I’ve worked on since — they were first learned building SAMS.

I also learned that naming a project SAMS does not make it sound more serious. It just means you have to explain the acronym in every presentation. Future Mo: pick the name after you’ve built the thing.

The committee presentation

My graduation project was reviewed by a panel at the University of Jordan. I demo’d the Windows Mobile client on a device borrowed from a classmate, with a GPS receiver attached, in a building that had poor satellite visibility. The GPS lock took four minutes while the committee watched. I had prepared for this possibility and had a recorded video of the full flow working correctly outdoors. The committee appreciated the contingency planning more than they would have appreciated a clean live demo — one professor told me afterward that the fact that I had anticipated the failure mode was “more impressive than if the GPS had just worked.”

That feedback has stayed with me. The system that fails gracefully with a rehearsed fallback beats the system that succeeds unpredictably.

Where it is now

Archived. Windows Mobile is gone. Symbian is gone. The specific Nokia handsets I tested on are museum pieces. The GPS accuracy that I had to work around with moving averages is now handled automatically by every modern location SDK. The problem I solved has been solved many times since, by teams with far more resources, integrated into platforms that billions of people use without thinking about it.

And yet: I built a functioning end-to-end location-aware system in my final year of university, on three different runtimes, with no cloud provider and no mapping SDK worth mentioning and a GPS receiver that took four minutes to lock indoors.

I’ll take it.