Security
Protecting the business from threats and proving trust. Covers information security, application security, security operations/SOC, penetration testing, trust & safety (enforcement/detection), cloud security, incident response, threat intelligence, GRC (security context), and security architecture.
Roles
The canonical roles within Security.
Infrastructure & Cloud Security Engineer
Engineers in this role design and implement security controls across GPU compute clusters, multi-cloud environments, and distributed infrastructure that power AI platforms. They work hands-on with Kubernetes, networking, identity systems, and CI/CD pipelines to establish Zero Trust principles and secure model weights, inference endpoints, and customer data at scale. What distinguishes this work is the focus on protecting specialized AI workloads—from GPU execution environments to model deployment systems—while enabling rapid infrastructure scaling. These engineers typically sit within dedicated security teams reporting to the CISO, partnering closely with platform, infrastructure, and ML engineering teams to shift security left and make secure-by-default systems the easiest path for developers.
Detection & Incident Response
Engineers in this role design and operate detection systems that identify security threats across AI infrastructure, cloud environments, and enterprise platforms, then lead investigations when incidents occur. They combine deep technical expertise in SIEM/SOAR platforms, forensics, and threat analysis with the ability to automate response workflows and mentor teams on detection improvements. These roles typically sit within dedicated Security Operations or Detection & Response teams at AI-native companies, where they bridge the gap between passive monitoring and proactive threat hunting while scaling security capabilities alongside rapid infrastructure growth.
Application Security Engineer
This role conducts comprehensive security reviews and threat modeling across AI-native platforms and data infrastructure, identifying vulnerabilities in applications that power enterprise AI agents, LLM systems, and knowledge graphs. What distinguishes Application Security Engineers from broader security roles is their focus on embedding security into the development lifecycle itself—through code reviews, secure design practices, and CI/CD integration—rather than conducting external assessments alone. These engineers typically sit within dedicated product or application security teams that partner closely with engineering organizations, translating security requirements into developer-friendly practices and tooling that enable teams to ship secure code at scale.
Security GRC & Compliance
Professionals in this role design and scale compliance programs that enable AI companies to operate securely across multiple regulatory frameworks—SOC 2, ISO 27001, FedRAMP, and emerging AI governance standards. Day-to-day, they conduct risk assessments, build automation to embed compliance into engineering workflows, respond to customer security questionnaires, and manage audit readiness across cloud infrastructure and AI-specific controls. What distinguishes this work is the technical depth required: rather than purely policy-focused compliance, these roles demand hands-on experience implementing controls, scripting automation, and translating complex regulatory requirements into practical controls that don't slow product velocity. They typically sit within security organizations reporting to CISOs or governance leaders, partnering closely with engineering, product, and sales teams to balance compliance rigor with business growth in fast-moving AI environments.
Trust & Safety
Specialists in this role develop detection systems and enforcement strategies to identify and mitigate emerging abuse patterns across AI products, working at the intersection of data science, policy, and operations. They balance competing priorities—detecting sophisticated threat actors while maintaining platform usability—by building scalable detection pipelines, conducting rapid investigations, and collaborating with policy and engineering teams to implement mitigations. Unlike policy-focused roles, these positions emphasize technical implementation and quantitative analysis; unlike pure engineering roles, they require deep domain expertise in specific abuse vectors and threat actor behavior. These analysts typically sit within dedicated Trust & Safety or Safeguards teams that operate cross-functionally with research, product, and legal to stay ahead of evolving misuse techniques.
Security Engineer
Security engineers in this role span multiple domains—application, infrastructure, and cloud security—while building the technical foundations that enable AI platforms to operate safely at scale. They write production code to automate detection and remediation, partner with engineering teams on authentication and access control design, and navigate the unique security challenges of AI systems handling sensitive customer data and agent workloads. These roles typically sit within dedicated security teams at growth-stage AI companies, working cross-functionally to embed security practices into development workflows while maintaining enterprise compliance standards like SOC 2 and ISO 27001.
Physical Security
Professionals in this role design and operate physical security programs protecting AI infrastructure, personnel, and sensitive operations across data centers, corporate facilities, and government-contracted environments. They distinguish themselves by combining deep technical expertise in access control, surveillance systems, and facility design with strategic program management—balancing rapid business growth against regulatory compliance frameworks like NISPOM, ISO 27001, and SOC 2 Type II. These roles typically sit within specialized security teams or Global Security functions, partnering closely with facilities, engineering, compliance, and executive leadership to embed security into facility planning while maintaining operational efficiency and threat responsiveness.
Offensive Security & Red Team
Engineers in this role execute offensive security assessments and red team operations across AI company infrastructure, applications, and—critically—AI-specific attack surfaces including prompt injection, model exfiltration, agent abuse, and tool-use exploitation. They combine hands-on penetration testing and adversarial simulation with custom tooling development, performing both rapid, targeted engagements and comprehensive open-scope operations that validate detection and response capabilities end-to-end. What sets this work apart is the focus on emerging AI risks: engineers assess production language models, agentic systems, and ML pipelines alongside traditional cloud, Kubernetes, and endpoint surfaces. They sit within the security function, partnering closely with defensive teams and product engineering to identify vulnerabilities early in design, then translate findings into actionable risk narratives that drive remediation and inform broader security strategy.
Security Leader
Security leaders at AI companies design and operate comprehensive security programs spanning cloud infrastructure, identity systems, threat detection, and compliance frameworks. They balance hands-on technical depth—from architecting zero-trust models and securing AI/LLM pipelines to investigating incidents directly—with executive-level strategy and customer-facing credibility. Unlike pure compliance officers, these leaders embed security throughout engineering organizations and product development, treating it as an enabler of velocity rather than a brake, while typically reporting to the CISO and managing growing teams across security operations, architecture, and engineering disciplines.
Identity & Access Management
Engineers in this role architect and operate identity systems that secure access across distributed AI infrastructure, multi-tenant platforms, and cloud environments serving thousands of users and services. They combine hands-on engineering—writing infrastructure-as-code, building authentication flows, automating provisioning workflows—with strategic design, setting long-term direction for how identity evolves alongside rapidly scaling AI platforms. Unlike general security roles, they specialize deeply in identity primitives like SSO, RBAC, service account management, and agentic AI workload access, often working across multiple cloud providers and compliance frameworks like FedRAMP. These engineers typically sit within dedicated security or trust teams, partnering closely with platform, infrastructure, and compliance functions to embed identity into every layer of the stack.
Recent Jobs
The latest Security openings across the AI industry.