About Magic
Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.
About the role
As an engineer on the Supercomputing Platform & Infrastructure team, you will design, build, and operate the large-scale GPU infrastructure that powers Magic’s model training and inference workloads.
A core part of this role is building and maintaining our infrastructure using Terraform-driven infrastructure-as-code practices, ensuring reproducibility, reliability, and operational clarity across clusters spanning thousands of GPUs.
Magic’s long-context models create sustained pressure on compute, networking, and storage systems. Long-running distributed jobs, high-throughput data movement, and strict availability requirements demand infrastructure that is automated, observable, and resilient by design. You will own the systems and IaC foundations that make this possible, including the Kubernetes (K8s) environments that coordinate workloads across our GPU infrastructure.
This role can evolve into broader ownership of supercomputing platform architecture, shaping how Magic scales GPU clusters and infrastructure reliability as model workloads grow.
What you’ll work on
Design and operate large-scale GPU clusters for training and inference
Build and maintain infrastructure using Terraform across cloud and hybrid environments
Deploy, operate, and optimize K8s clusters used to schedule and manage AI workloads
Develop modular, scalable IaC patterns for compute, networking, and storage provisioning
Improve deployment reproducibility, environment consistency, and operational safety
Optimize networking and storage systems for high-throughput AI workloads
Automate fault detection and recovery across distributed clusters
Debug complex cross-layer issues spanning hardware, drivers, networking, storage, OS, and cloud
Improve observability, monitoring, and reliability of core platform systems
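As an illustration of the modular IaC patterns described above, here is a minimal Terraform sketch of a reusable GPU node pool module. All names, machine types, and counts are hypothetical examples, not Magic's actual configuration.

```hcl
# Hypothetical module for a GPU node pool; all names and values
# are illustrative, not Magic's actual infrastructure.
variable "cluster_name" {
  type = string
}

variable "gpu_node_count" {
  type    = number
  default = 64
}

# Example: a GKE node pool of A100 machines, expressed as a
# reusable module so environments stay reproducible and consistent.
resource "google_container_node_pool" "gpu_pool" {
  name       = "${var.cluster_name}-gpu"
  cluster    = var.cluster_name
  node_count = var.gpu_node_count

  node_config {
    machine_type = "a2-highgpu-8g"

    guest_accelerator {
      type  = "nvidia-tesla-a100"
      count = 8
    }
  }
}
```

Parameterizing the pool this way lets the same module be instantiated per environment (dev, staging, production) with isolated state, which is the kind of state management and environment isolation this role calls for.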
What we’re looking for
Strong systems engineering fundamentals
Deep, hands-on experience with Terraform, including module design, state management, environment isolation, and large-scale deployments
Experience operating production GPU infrastructure or high-performance distributed systems
Strong understanding of networking and storage systems
Experience with major cloud platforms (GCP, AWS, Azure, OCI, etc.)
Track record of owning production-critical infrastructure end-to-end
Compensation, benefits, and perks (US)
Annual salary range of $200K–$550K, depending on experience
Equity is a significant part of total compensation, in addition to salary
401(k) plan with 6% salary matching
Generous health, dental, and vision insurance for you and your dependents
Unlimited paid time off
Visa sponsorship and relocation stipend to bring you to SF, if possible
A small, fast-paced, highly focused team
Magic strives to be the place where high-potential individuals can do their best work. We value quick learning and grit just as much as skill and experience.
Our culture
Integrity. Words and actions should be aligned
Hands-on. At Magic, everyone is building
Teamwork. We move as one team, not N individuals
Focus. Safely deploy AGI. Everything else is noise
Quality. Magic should feel like magic