
Burger Index - Accelerating Menu-Data Insight with Amazon OpenSearch Service

  • Industry: F&B
  • Country: Global
  • Tags: AWS, RDS, ECS, Scalability

Executive Summary

Burger Index delivers real-time competitive intelligence to food-and-beverage brands worldwide. By re-platforming its search layer onto Amazon OpenSearch Service, the company cut median query latency from roughly 2 seconds to under 800 milliseconds, an improvement of more than 60 per cent, while giving analysts Google-like faceted search, geo filters, and instant type-ahead. Engineering effort once spent on shard juggling and manual scaling now goes into new product features.

Why Amazon Web Services

AWS offered a portfolio of fully managed services that aligned with each pain point. Amazon OpenSearch Service removed the burden of patching, capacity planning, and shard allocation, while its Hot → Warm → UltraWarm → Cold storage lifecycle promised automatic tiering at a fraction of the cost of self-managed clusters. Graviton-based r7g instances delivered an additional twenty per cent price-performance boost out of the box. Integrated IAM authentication, KMS-backed encryption, and VPC-only access satisfied Burger Index's strict security posture without bolted-on proxies. Critically, AWS allowed the team to build a cross-region disaster-recovery strategy: continuous cross-cluster replication for OpenSearch, fifteen-minute snapshot shipping for Redshift and SageMaker artefacts via S3 Cross-Region Replication, and multi-region S3 buckets for raw data. The result met the company's fifteen-minute recovery-point objective using native cloud primitives rather than bespoke scripts.
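The storage lifecycle described above is typically expressed as an Index State Management (ISM) policy on the OpenSearch domain. The sketch below shows what such a policy might look like for the retention windows in this case study; the domain endpoint, policy name, index pattern, timestamp field, and credentials are illustrative assumptions, not configuration taken from Burger Index's environment.

```python
# Hypothetical ISM policy: keep indexes hot for 45 days, migrate to UltraWarm
# until 90 days, then to cold storage. Endpoint, names, and auth are placeholders.
import requests
from requests.auth import HTTPBasicAuth

OPENSEARCH_ENDPOINT = "https://search-example-domain.eu-central-1.es.amazonaws.com"  # hypothetical

ism_policy = {
    "policy": {
        "description": "Tier menu-data indexes: hot 45 days, UltraWarm to 90 days, then cold",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [{"state_name": "warm", "conditions": {"min_index_age": "45d"}}],
            },
            {
                "name": "warm",
                "actions": [{"warm_migration": {}}],  # move the index to UltraWarm nodes
                "transitions": [{"state_name": "cold", "conditions": {"min_index_age": "90d"}}],
            },
            {
                "name": "cold",
                "actions": [{"cold_migration": {"timestamp_field": "scraped_at"}}],  # detach to cold storage
                "transitions": [],
            },
        ],
        "ism_template": [{"index_patterns": ["menu-items-*"], "priority": 100}],
    }
}

resp = requests.put(
    f"{OPENSEARCH_ENDPOINT}/_plugins/_ism/policies/menu-data-lifecycle",
    json=ism_policy,
    auth=HTTPBasicAuth("admin", "changeme"),  # in practice, SigV4-signed requests via IAM
    timeout=30,
)
resp.raise_for_status()
```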

The Challenge

By late 2024, Burger Index's self-hosted Elasticsearch cluster and MySQL instance on RDS were buckling under the weight of success. Global users tended to log in around mealtimes, producing sharp load spikes that pushed query latency well beyond acceptable thresholds. Each time new shards were added, engineers had to impose twenty-minute read-only windows while they manually rebalanced data, a disruption that undermined the “real-time” promise of the platform. Analysts often circumvented slow free-text and geo searches by exporting entire datasets as CSV files and running offline SQL, a workflow that stalled decision-making. The stack also lived entirely in a single AWS Region, leaving the company vulnerable to a regional outage at precisely the moment when customers would need pricing insight the most. With data ingestion already topping fifty gigabytes per day and growing at roughly ten per cent month over month, Burger Index needed a scalable, highly available, low-latency search solution.

The Solution

A fleet of AWS Fargate tasks continuously scrapes public storefronts and whitelisted marketplace APIs, converting raw HTML into structured JSON that lands in an Amazon S3 data lake. Event-driven AWS Lambda functions validate, enrich, and fan out each object. One branch bulk-indexes documents into Amazon OpenSearch Service, where a two-node r6g.2xlarge hot tier handles the most recent forty-five days of data, a one-node r6g.large warm tier maintains ninety-day history, and UltraWarm/Cold tiers extend retention to three years with zero replica overhead. A second branch copies data into Amazon Redshift RA3 for heavyweight SQL roll-ups, while a third feeds Amazon SageMaker pipelines that retrain nightly price-elasticity models. The React single-page application, served via Amazon CloudFront, calls a GraphQL API hosted on Amazon API Gateway; the API stitches low-latency search results from OpenSearch with aggregated metrics from Redshift and returns a unified response in under a second. Continuous cross-cluster replication keeps a passive OpenSearch domain in eu-west-1 seconds behind the active eu-central-1 cluster. Redshift and SageMaker snapshots replicate every fifteen minutes through S3 CRR, giving Burger Index a consistently enforced cross-region posture without operational toil. CloudWatch dashboards and OpenSearch's native monitoring track P95 ingest-to-query latency, shard health, and index skew, while CloudTrail events flow into AWS Security Hub to flag suspicious activity.
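A brief sketch of the bulk-indexing branch follows. It shows how a Lambda handler, triggered by an S3 ObjectCreated event, might read an enriched JSON object and bulk-index its records into the OpenSearch domain; the bucket layout, index naming scheme, field names, and environment variables are illustrative assumptions rather than Burger Index's production code.

```python
# Hypothetical Lambda handler for the bulk-indexing branch of the pipeline.
import json
import os

import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, helpers
from requests_aws4auth import AWS4Auth

REGION = os.environ.get("AWS_REGION", "eu-central-1")
OPENSEARCH_HOST = os.environ["OPENSEARCH_HOST"]  # e.g. search-xxx.eu-central-1.es.amazonaws.com

# Sign requests with the Lambda execution role's IAM credentials.
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, REGION, "es",
                   session_token=credentials.token)

client = OpenSearch(
    hosts=[{"host": OPENSEARCH_HOST, "port": 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)
s3 = boto3.client("s3")


def handler(event, context):
    """Triggered by an S3 ObjectCreated event for each enriched JSON file."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        items = json.loads(body)  # assumed to be a list of menu-item documents

        # Route each document to a daily index so the ISM policy can age it through tiers.
        actions = (
            {
                "_index": f"menu-items-{item['scraped_at'][:10]}",
                "_id": item["item_id"],
                "_source": item,
            }
            for item in items
        )
        success, errors = helpers.bulk(client, actions, raise_on_error=False)
        print(f"indexed {success} docs from s3://{bucket}/{key}, {len(errors)} errors")
```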

Solution Architecture

Outcome and Benefits

Within weeks of cut-over, Burger Index recorded a dramatic shift in user experience and operational efficiency. Median dashboard search time fell from roughly two seconds to under eight hundred milliseconds, and 95th-percentile latency dropped from 4.5 s to 1.6 s, eliminating the frustrating pauses that once plagued peak-time usage. Per-gigabyte storage costs dropped by approximately thirty-four per cent thanks to automatic tiering and best_compression on the new OpenSearch domain. Because shard allocation, patching, and scaling are now handled by the service, DevOps engineers reclaimed an estimated four hours per week, time that is being reinvested in feature development and data-quality initiatives. Analysts, newly empowered by sub-second faceted search and geo filters, rarely resort to CSV exports, accelerating insight delivery to end clients. Most importantly, Burger Index's data and model artefacts now exist in two AWS Regions with a fifteen-minute recovery-point objective, ensuring that enterprise customers can rely on uninterrupted competitive intelligence even in the face of regional disruptions.
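The best_compression saving mentioned above comes from an index-level codec setting. The sketch below shows one way it might be applied through a composable index template so that every new daily index picks it up automatically; the endpoint, template name, index pattern, and shard counts are illustrative assumptions, not Burger Index's actual settings.

```python
# Hypothetical index template applying the best_compression codec to new indexes.
import requests
from requests.auth import HTTPBasicAuth

OPENSEARCH_ENDPOINT = "https://search-example-domain.eu-central-1.es.amazonaws.com"  # hypothetical

template = {
    "index_patterns": ["menu-items-*"],
    "template": {
        "settings": {
            "index": {
                "codec": "best_compression",   # trades a little CPU for smaller segments on disk
                "number_of_shards": 2,
                "number_of_replicas": 1,
            }
        }
    },
}

resp = requests.put(
    f"{OPENSEARCH_ENDPOINT}/_index_template/menu-items-template",
    json=template,
    auth=HTTPBasicAuth("admin", "changeme"),  # in practice, SigV4-signed requests via IAM
    timeout=30,
)
resp.raise_for_status()
```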

About Zero&One

Zero&One is a leading AWS Premier Consulting Partner in the MENA region with a vision to empower businesses of all scales in their cloud adoption journey. We specialize in AWS services such as DevOps, application modernization, cloud migration, and serverless computing. We operate from offices in Lebanon, the UAE, and Saudi Arabia, hold 100+ AWS certifications, and serve 50+ happy customers across the region.
