
Operations Support Engineer
We are seeking an Operations Support Engineer to support and monitor cloud-based data and analytics platforms. The role focuses on system stability, performance optimization, incident resolution, and operational excellence across GCP analytics environments and the Hadoop ecosystem.
Key Responsibilities
• Monitor system health, uptime, and performance of the Slisor portal and backend services (dashboards, APIs, workflows).
• Lead incident triage, root cause analysis, and escalation with engineering teams and vendors.
• Support daily operations of GCP analytics services and data pipelines.
• Oversee ingestion, storage, and consumption layers, ensuring scalability and security.
• Track anonymized dataset transfers from on-prem DS/AI platforms to the cloud.
• Manage user access, provisioning, and ticket-based support requests.
• Generate operational dashboards (uptime, error trends, campaign metrics, usage analytics).
• Participate in knowledge transfer sessions during new releases.
• Provide on-call/off-hours support for critical incidents.
Requirements
• 3+ years of experience in platform/application support within data or analytics environments
• Strong knowledge of GCP analytics services (BigQuery, Cloud Functions, IAM)
• Hands-on experience with Hadoop ecosystem (Kafka, Spark, Hive, Sqoop, Oozie, HDFS)
• Strong SQL expertise and data pipeline performance optimization
• Experience with ETL pipelines, monitoring tools, and logging systems
• Understanding of APIs, microservices, and system integrations (CRM/BSS adapters)
• Experience using ticketing systems (Jira, ServiceNow, or similar)
• Ability to analyze logs, troubleshoot performance issues, and coordinate with engineering teams
• Familiarity with data governance, metadata management, lineage, and data quality frameworks
Preferred Qualifications (Good to Have)
• Experience with GCP (BigQuery, Dataflow, Pub/Sub) or AWS (Glue, Redshift, Kinesis)
• Data security and masking tools (Apache Ranger, Voltage, IAM policies)
• Knowledge of NDMO standards and enterprise data frameworks (DAMA, DCAM)
• BI/Visualization tools (Power BI, Tableau, MicroStrategy, Looker Studio)
• CI/CD, GitOps, Docker, Kubernetes, Terraform
• Exposure to telecom domain (BSS, OSS, CDR, usage analytics, CLDM)
• Understanding of data privacy, consent management, k-anonymity



