Engineering
Building and shipping software, systems, and technical solutions. Covers software engineering (frontend, backend, full-stack, mobile), ML engineering (production systems), AI agent engineering, applied AI engineering, prompt engineering, DevOps/SRE, platform engineering, QA/testing, and technical architecture. The people who write code and ship product.
Roles
The canonical roles within Engineering.
Infrastructure & Platform Engineer
Engineers in this role architect and operate the systems that power AI research and product development at scale. They design distributed infrastructure for training, serving, and orchestrating AI workloads across GPU clusters, build internal platforms that accelerate developer velocity, and optimize the critical path from code to production. This role bridges deep systems engineering expertise—in areas like Kubernetes, build systems, data pipelines, and performance tuning—with the unique demands of AI workloads, pairing hands-on infrastructure work with close partnership with researchers and product teams to eliminate bottlenecks that slow innovation.
Backend Engineer
Backend engineers at AI companies build the server-side systems, APIs, and data pipelines that power AI products and infrastructure. Their day-to-day involves designing distributed services, optimizing data processing at scale, and operating mission-critical systems that handle everything from serving AI model inferences to processing observability data across GPU-dense infrastructure. What distinguishes backend roles in AI is their focus on the unique challenges of AI workloads—high-throughput serving paths, streaming data pipelines for telemetry, and infrastructure optimized for compute-intensive tasks—rather than traditional web application concerns. These engineers typically work within infrastructure, platform, or core services teams that act as force multipliers, building the foundational systems that enable product teams and researchers to move faster.
Machine Learning Engineer
Machine learning engineers in this role build and optimize systems that translate research models into production—spanning model serving infrastructure, inference performance tuning, and distributed training pipelines. They distinguish themselves by combining deep systems expertise with ML knowledge, working on problems like latency optimization, resource efficiency, and scaling models across heterogeneous hardware and platforms. These engineers typically sit within specialized teams focused on search and retrieval, robotics, foundation models, or inference optimization, collaborating closely with research teams to operationalize cutting-edge architectures at scale.
Engineering Manager
Engineering managers in AI companies lead technical teams building distributed systems and data platforms while balancing hands-on coding with people development. They own strategic roadmaps for mission-critical infrastructure—from inference serving and storage systems to identity management and GPU training orchestration—ensuring reliability, scalability, and operational excellence at massive scale. These leaders distinguish themselves by remaining deeply technical, making architectural trade-offs across cost, latency, and performance, while hiring and coaching engineers who can solve complex problems in production AI workloads. They typically sit within platform, infrastructure, or core services organizations, partnering cross-functionally to unblock product teams and drive adoption of foundational technologies that enable the entire company to move faster.
Fullstack Engineer
Engineers in this role build end-to-end features that translate AI model capabilities into user-facing products and developer platforms, working across frontend interfaces, backend services, and infrastructure. They typically focus on developer experience, platform reliability, and scaling systems to millions of users, whether designing SDKs and APIs for third-party builders or shipping consumer-facing AI applications. These roles sit within product-focused engineering teams at frontier AI companies, collaborating closely with research, product, and design to rapidly iterate on new capabilities and define product direction as the organization scales.
Forward Deployed Engineer
Forward Deployed Engineers embed with enterprise customers to architect and operationalize production AI systems that solve domain-specific business problems. Unlike traditional software engineers, they own the full lifecycle from discovery and system design through scaling and optimization, working directly alongside customer teams to translate complex requirements into deployed solutions. These roles typically sit within customer success, professional services, or partnerships teams at AI platform companies, bridging the gap between core product capabilities and real-world customer needs while feeding field insights back to drive product evolution.
Software Engineer
Software engineers in this role build and maintain the infrastructure and platforms that power AI and machine learning systems at scale, working across the full stack from data pipelines and model training to deployment and monitoring. They design robust systems that enable teams to train, evaluate, and deploy models and AI agents reliably, often collaborating with open-source communities and integrating with diverse ML frameworks and ecosystems. These engineers typically work within specialized teams focused on AI/ML infrastructure, developer tooling, or platform capabilities, where they balance innovation with operational excellence while mentoring junior engineers and shaping technical direction.
Technical Program Manager
Technical Program Managers in AI companies orchestrate the delivery of complex initiatives across distributed engineering teams working on model infrastructure, GPU platforms, and AI product development. They distinguish themselves by maintaining operational rigor around inherently ambiguous technical work—translating research priorities and customer requirements into executable programs while managing dependencies across hardware, software, and research domains. These roles typically sit within central program management offices or alongside engineering leadership, partnering closely with product, infrastructure, and research teams to keep high-stakes initiatives moving predictably at scale.
AI Agent Engineer
Engineers in this role design and deploy autonomous AI agents that solve real-world business problems across diverse industries, from finance and healthcare to infrastructure and marketing operations. They move fast across the full development lifecycle—from prototyping with frontier LLMs to shipping production systems that handle complex customer interactions, workflow automation, and operational decision-making at scale. What sets this work apart is the emphasis on reliability and observability: these engineers don't just build agents; they ensure those agents perform consistently in ambiguous, high-stakes environments while integrating with enterprise systems and human operators. Typically embedded in dedicated agent or agentic AI teams within product-focused AI companies, these roles sit at the intersection of platform engineering and direct impact, partnering closely with product managers, domain experts, and cross-functional stakeholders to turn loosely defined opportunities into robust, measurable business outcomes.
Site Reliability Engineer
Engineers in this role maintain the reliability and performance of AI infrastructure at scale, spending their days on incident response, automation, and observability across distributed systems that power AI workloads. They differ from software engineers by focusing on operational excellence and system resilience rather than feature development, and from DevOps roles by owning broader platform-level reliability goals. These teams typically sit within infrastructure or platform organizations, partnering closely with product engineering teams to ensure AI services remain fast, secure, and always available across multiple regions.
Quality Engineer
Engineers in this role focus on testing and validating complex AI software systems across domains like machine learning frameworks, inference platforms, and autonomous systems. They design automated test frameworks, build CI/CD infrastructure, and collaborate with engineering teams to ensure AI products meet stringent quality and performance standards. What distinguishes them is their emphasis on systems-level thinking—they architect scalable testing solutions that handle the unique challenges of AI workloads, from ML model accuracy validation to hardware-software integration testing. These engineers typically sit within larger quality or systems teams in AI-focused companies, working cross-functionally with ML engineers, infrastructure teams, and product owners to accelerate development velocity while maintaining reliability and safety.
Frontend Engineer
Frontend engineers in AI companies build the user-facing interfaces that make complex AI products accessible and intuitive. They design and ship features across web, mobile, and embedded contexts—from consumer-facing applications like AI chatbots to internal tools that help teams manage infrastructure, visualize simulation data, or collaborate in real time. What sets these roles apart is the focus on performance and reliability under demanding conditions: handling real-time multiplayer interactions, rendering massive datasets without lag, or optimizing for AI-driven workflows where responsiveness directly impacts user productivity. These engineers typically sit in product-focused teams alongside designers and product managers, taking ownership of the full feature lifecycle from concept through production, while often collaborating with platform teams who provide shared component libraries and infrastructure that accelerate their work.
Mobile Engineer
Mobile engineers in this role build native iOS or Android applications that seamlessly integrate advanced AI capabilities, from language models to real-time inference systems, delivering intuitive interfaces that put cutting-edge AI directly in users' hands. They balance obsessive attention to performance optimization—profiling memory, CPU, and battery consumption—with product sensibility, crafting pixel-perfect experiences that make complex AI interactions feel effortless. These engineers typically work within small, fast-moving AI product teams where they collaborate closely with researchers, designers, and backend engineers to ship high-impact features, often owning the full mobile development lifecycle from architecture decisions to production monitoring.
Database & Systems Engineer
Engineers in this role design and operate the database and storage systems that underpin AI infrastructure at massive scale, handling everything from query optimization and transaction management to distributed storage architecture. They work deeply with storage engines, cache layers, and multi-database topologies, making critical tradeoffs between consistency, performance, and resilience as their systems support billions of requests and exabyte-scale workloads. Unlike query optimization or distributed systems specialists, these engineers own the full vertical of how data is stored, retrieved, and scaled—partnering with infrastructure and product teams to ensure databases reliably serve both transactional product workloads and compute-intensive AI training pipelines. They typically sit within platform or infrastructure organizations alongside teams building query engines, replication systems, and cloud infrastructure.
Product Security Engineer
Product Security Engineers at AI companies work embedded within engineering teams to integrate security throughout the software development lifecycle, conducting threat modeling, secure code reviews, and vulnerability assessments specifically tailored to AI applications and model integration points. They distinguish themselves by focusing on application-layer security while maintaining the performance and reliability that AI systems demand, designing controls that protect against threats unique to conversational platforms, agents, and ML pipelines rather than infrastructure-wide concerns. These roles typically sit within product engineering organizations, partnering directly with developers to build security testing programs such as SAST and DAST, establish secure coding practices, and enable rapid vulnerability remediation without slowing development velocity.
Applied AI Engineer
Applied AI Engineers build intelligent features into products by integrating LLMs, retrieval systems, and AI APIs to solve real business problems. Day-to-day, they prototype and productionize AI-powered workflows—from designing agent architectures and evaluation frameworks to implementing retrieval pipelines and optimizing inference costs at scale. They sit between product and infrastructure teams, combining hands-on engineering with deep customer collaboration to ship features that work reliably in production. Unlike ML Engineers who train models or Forward Deployed Engineers who embed at customer sites, Applied AI Engineers own the full stack of AI integration within their own organization's products, from architecture decisions to code contributions and technical mentorship.
Design Engineer
Design Engineers in this role combine pixel-perfect front-end craftsmanship with strong design sensibility to build user-facing experiences for AI products. Working closely with designers and product teams, they own product surfaces end-to-end—from prototyping in code and validating with users to shipping production-quality interfaces with obsessive attention to performance, accessibility, and detail. These engineers typically work in fast-moving AI companies building consumer or creator-focused products, translating complex AI capabilities into intuitive, delightful interfaces that feel magical to users. They move fluidly between design tools and code, prototype rapidly in React/TypeScript, and champion the small details that elevate craft and experience across their entire product.
Recent Jobs
The latest Engineering openings across the AI industry.