Site Reliability Engineer - Observability
About Us
At Mirai - a Scopely Studio, we believe high-quality software is about partnership, advocacy, and deep respect for our colleagues. The best engineers augment their technical ability with listening first, asking thoughtful questions, and influencing through clarity, not volume. If that’s you, you’re going to thrive here and learn a ton!
With world-class games like Monopoly GO!, Pokémon GO, Marvel Strike Force, and Star Trek Fleet Command, we're full of trendsetters solving interesting problems at a scale no other gaming company has. We're game-changing (literally), and intentional about building a great place to be.
About the role
We are hiring an SRE focused on observability, automation, and runtime reliability for AI platforms and internal agentic systems. This is not a generic SOC role. It is an engineering role for someone who builds telemetry, automates findings-to-fix loops, improves production readiness, and keeps AI systems measurable, resilient, and controllable in production.
Suitable backgrounds
• Site Reliability Engineers or backend engineers with strong automation skills
• Platform reliability or observability engineers who build tooling, not just dashboards
• Cloud automation engineers with strong logging, tracing, and incident-response instincts
• Detection or security automation engineers who prefer code, pipelines, and remediation over ticket operations
Tech stack
• Python for automation and workflow integration
• Observability tooling: metrics, logs, traces, OpenTelemetry, Datadog or adjacent stacks
• AWS logging, telemetry, IAM-aware diagnostics, and infrastructure scripting
• CI/CD integration for runtime checks, rollback drills, and policy validation
• Nice to have: Wiz, CrowdStrike, Orca, GuardDuty, WAF / RASP-style controls, MCP / agent telemetry
Requirements
• Design and operate the telemetry and observability layer for AI platforms, including audit trails, tool-call logs, correlation IDs, traces, and runtime visibility across service boundaries.
• Build automated findings-to-fix loops for AI and cloud platforms, integrating signals from tooling such as Wiz, Astrix, or future AI security products into pragmatic remediation workflows.
• Implement reliability and hardening controls for internal AI systems, including alerting, health checks, rollback drills, kill-switch validation, rate limiting, and drift detection.
• Codify detections, policies, and operational checks as code where they reduce toil, prevent regressions, and improve platform control.
• Review platform and AI-application changes from a reliability and application-hardening perspective, especially around secrets, telemetry, external calls, risky MCP usage, and production readiness.
• Own AI-platform-specific operational readiness and partner with central IT / EAS / SOC teams for escalations, handoffs, and shared incident workflows when needed.
• Continuously improve production readiness through automation, post-incident learning, and repeatable playbooks for AI runtime issues.
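To give a flavor of the findings-to-fix loops described above, here is a minimal Python sketch. All names in it (the `Finding` shape, the remediation registry, the example rule ID) are illustrative assumptions, not a Mirai-specific API; a real loop would call cloud and scanner APIs such as Wiz's.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Finding:
    """A normalized scanner finding; fields here are illustrative only."""
    source: str    # which tool reported it, e.g. "wiz"
    rule_id: str   # e.g. "public-s3-bucket" (hypothetical rule name)
    resource: str  # affected resource identifier
    severity: str  # "low" | "medium" | "high"

# Registry mapping rule IDs to automated remediations (hypothetical handlers).
REMEDIATIONS: Dict[str, Callable[[Finding], bool]] = {}

def remediation(rule_id: str):
    """Decorator registering an automated fix for a given rule."""
    def register(fn: Callable[[Finding], bool]):
        REMEDIATIONS[rule_id] = fn
        return fn
    return register

@remediation("public-s3-bucket")
def block_public_access(finding: Finding) -> bool:
    # In production this would call the cloud API; here it simply succeeds.
    return True

def findings_to_fix(findings: List[Finding]) -> List[Finding]:
    """Apply registered fixes; return findings that still need a human."""
    unresolved = []
    for f in findings:
        fix = REMEDIATIONS.get(f.rule_id)
        if fix is None or not fix(f):
            unresolved.append(f)  # escalate rather than silently drop
    return unresolved
```

The design point is that anything without a registered (or successful) automated fix is surfaced for escalation instead of disappearing into a queue, which is what keeps the loop "pragmatic" rather than a ticket factory.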
Qualifications
• 3+ years in SRE, production engineering, platform operations, or security automation with strong coding ability.
• Hands-on scripting and coding experience, especially Python, with comfort working against APIs, log pipelines, and automation workflows.
• Experience building pragmatic observability and alerting systems in AWS or comparable cloud environments.
• Ability to reduce operational toil through automation while keeping signal quality high and false positives manageable.
• Comfortable with incident handling, rollback thinking, SLA / SLO discussions, and evidence-driven postmortems.
• Interest in AI systems, agent runtimes, and MCP-style integration risks is highly valuable.
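"Keeping signal quality high and false positives manageable" often starts with something as small as alert deduplication. A minimal sketch, assuming a fingerprint of (rule, resource) and a quiet window — not any particular vendor's API:

```python
import time
from typing import Dict, Optional, Tuple

class AlertDeduper:
    """Suppress repeat alerts with the same fingerprint inside a quiet window."""

    def __init__(self, window_seconds: float = 300.0):
        self.window = window_seconds
        self._last_seen: Dict[Tuple[str, str], float] = {}

    def should_fire(self, rule: str, resource: str,
                    now: Optional[float] = None) -> bool:
        """Return True only for the first alert per fingerprint per window."""
        now = time.monotonic() if now is None else now
        key = (rule, resource)
        last = self._last_seen.get(key)
        if last is not None and now - last < self.window:
            return False  # duplicate within the window: suppress
        self._last_seen[key] = now
        return True
```

In practice the fingerprint, window, and storage (in-memory here) would all be tuned per signal class; the point is that noise reduction is code with tests, not a dashboard setting.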
Nice to have
• Software engineering background beyond scripting, including code review and testing habits.
• Experience with AI agent runtimes, prompt / tool telemetry, or internal platform hardening for LLM-powered systems.
• Experience with privacy-aware telemetry, compliance-oriented logging, or runtime protection products.
Mirai Arabian International Company Limited is focused on developing and publishing video games. It aims to create immersive entertainment experiences for players.