This first-person essay is derived from a detailed interview with Prakhar Agarwal, who currently works as an applied researcher at Meta Superintelligence Labs. The narrative has been refined and expanded for clarity and coherence while remaining faithful to his account. Business Insider has verified Agarwal's employment history and academic credentials.

Agarwal began his professional journey at Apple in 2020, where he spent about half a decade building his technical expertise before moving to OpenAI. There, he joined the team responsible for the OpenAI API, gaining deeper exposure to large-scale machine learning systems and their practical deployment. This past summer, as many in the industry were reconsidering their roles and making lateral moves, he accepted an opportunity at Meta Superintelligence Labs.

His career began in academia, as a graduate student specializing in machine learning at the University of Washington. It was during this period that he first applied to Apple, securing the role that launched his career in applied research. As his experience and contributions became more visible, he began receiving direct outreach from organizations such as OpenAI and Meta. These later opportunities required no formal applications; his portfolio and prior work spoke for themselves.

Reflecting on the hiring dynamics at leading AI companies, Agarwal acknowledges the decisive advantage that experience confers. In his view, these organizations maintain only a limited number of highly technical research and engineering roles, which naturally leads them to recruit people who have already demonstrated deep competence in the field. The result is a clear tilt toward professionals with substantial real-world expertise rather than fresh entrants.

One of the most distinctive aspects of these positions, according to Agarwal, is the extraordinary degree of autonomy afforded to employees. Unlike traditional corporate hierarchies, where defined reporting lines and managerial oversight dictate the workflow, these research environments emphasize self-direction. Researchers are expected to identify unsolved problems, construct paths toward addressing them, and decide which technical or conceptual challenge deserves attention first. Success depends on discerning the most impactful work given finite time and resources.

Once on the team, new hires are quickly immersed in deeply independent work. Agarwal describes the experience as being thrown into the deep end, a deliberate approach designed to foster innovation and accountability. Researchers must define their own problem statements and develop creative, rigorous, implementable solutions without waiting for detailed instructions.

Top-tier AI organizations such as OpenAI and Meta are notable for the exceptional caliber of their teams. These companies invest significant effort during recruitment to identify people who combine technical mastery with intellectual curiosity.
At this level, management expects employees to determine proactively which projects or improvements are needed rather than rely on prescriptive direction. The prevailing philosophy is that innovation flourishes when talented people are empowered to lead their own discoveries.

When discussing the hiring process itself, Agarwal explains that interviewers at leading AI labs typically evaluate candidates along several dimensions. The first is conceptual fluency: do candidates understand the specialist vocabulary and frameworks that underpin modern machine learning, particularly large language models (LLMs)? Beyond theory, technical exercises often require candidates to write code for realistic, job-related scenarios. Rather than testing isolated algorithmic puzzles, these problems simulate tasks that researchers genuinely confront in practice.

The second central component of the evaluation emphasizes adaptability in ambiguous territory. Candidates are often handed vague or abstract challenges and asked to turn them into quantifiable, metric-driven solutions. The ability to define measurable objectives from open-ended prompts signals a strong grasp of applied research methodology.

Agarwal notes that a doctoral degree can be a substantial advantage because it implies a proven ability to work at high levels of abstraction, design experiments, and draw generalizable insights. He clarifies, however, that a Ph.D. is not the only way to demonstrate such competence. Equivalent experience, whether built through impactful work at a startup or by developing a pivotal piece of production software, can convey the same capacity for research-oriented thinking. What ultimately matters is tangible evidence of deep technical engagement and the ability to frame and solve open-ended problems.

For people aspiring to enter AI research, Agarwal strongly recommends engaging directly with real-world problems. Hands-on experience is invaluable for developing both intuition and applied skill. By taking on challenging projects, whether independent or professional, aspiring researchers learn which methods succeed, which barriers persist, and which approaches fail under real constraints. Mistakes, he emphasizes, are powerful teachers, cultivating the kind of technical instinct that distinguishes standout candidates in high-stakes interviews.

When advising job seekers, he outlines several essential principles. First, a solid grasp of theory is indispensable: one must deeply understand the terminology, mathematical foundations, and operational concepts that define the discipline. Beyond conceptual rigor, it is equally important to stay in continuous, practical contact with the models themselves. Actively experimenting with current AI systems reveals both their capabilities and their limits, an insight often missed by those who focus solely on theoretical study.

Perhaps the most sought-after quality in AI professionals, Agarwal explains, is the ability to identify and articulate gaps in existing models. Researchers who can pinpoint deficiencies, such as limitations that might affect the next version of a foundational system like Llama, and quantify those issues with appropriate metrics are particularly valuable.
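
To make the idea of quantifying a gap concrete, here is a minimal, hypothetical sketch rather than anything Agarwal describes directly: it scores a handful of prompts against reference answers with an exact-match metric, the kind of quick evaluation a researcher might use to turn a vague complaint such as "the model struggles with arithmetic" into a number that can be tracked across model versions. The prompts, the scoring function, and the query_model stub are illustrative assumptions, not part of any real evaluation suite.

```python
# Hypothetical sketch: turning a vague complaint about a model into a trackable metric.
# `query_model` is a stand-in for whatever inference interface is available; swap in a
# real call (a local model or a hosted endpoint) before drawing any conclusions.

from typing import Callable

# Tiny illustrative "gap" dataset: arithmetic questions with reference answers.
EVAL_SET = [
    {"prompt": "What is 17 * 24? Answer with just the number.", "reference": "408"},
    {"prompt": "What is 1234 + 4321? Answer with just the number.", "reference": "5555"},
    {"prompt": "What is 9 * 13 - 7? Answer with just the number.", "reference": "110"},
]


def exact_match(prediction: str, reference: str) -> bool:
    """Strict string comparison after trimming surrounding whitespace."""
    return prediction.strip() == reference.strip()


def evaluate(query_model: Callable[[str], str]) -> float:
    """Return the fraction of prompts the model answers exactly right."""
    correct = sum(
        exact_match(query_model(item["prompt"]), item["reference"])
        for item in EVAL_SET
    )
    return correct / len(EVAL_SET)


if __name__ == "__main__":
    # Stub model so the script runs end to end; a real study would call an actual model.
    def dummy_model(prompt: str) -> str:
        return "408" if "17 * 24" in prompt else "unknown"

    score = evaluate(dummy_model)
    print(f"Exact-match accuracy on the arithmetic gap set: {score:.2%}")
```

The specific metric is beside the point; exact match is crude, and a real evaluation might use numeric parsing or model-based grading. What the sketch illustrates is the habit Agarwal describes: restating a perceived weakness as something measurable, so that a proposed fix can be judged by whether the number moves.
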
At the same time, awareness of where the technology is heading, and foresight about the capabilities models may reach in the coming months, demonstrates the kind of strategic insight that companies prize.

Agarwal also highlights the central importance of effective, high-bandwidth communication in research-intensive environments. At organizations such as Meta Superintelligence Labs or OpenAI, collaboration moves significantly faster than at traditional Big Tech companies. Complex problem-solving does not depend on lengthy slide decks or week-long deliberations. Instead, issues are tackled in dynamic, often spontaneous sessions, perhaps around a whiteboard, where researchers dissect challenges, propose hypotheses, and rapidly iterate through potential solutions before returning to individual or small-group work. Most discussions unfold in compact teams or in direct one-on-one or one-on-two conversations, which makes the ability to communicate precisely and concisely across levels of seniority an indispensable skill.

Finally, Agarwal shares his perspective on how to learn effectively and stay current in a rapidly shifting field. The broader AI community, he observes, tends to be strikingly open and collaborative. People who hit technical obstacles can often seek advice or feedback on platforms such as Twitter or LinkedIn, where experts and peers are surprisingly responsive and generous with their time. In this sense, the global online research community plays a pivotal role in accelerating learning.

He cautions that formal academic coursework often lags behind real-world developments in artificial intelligence. Textbooks and syllabi, even those only five or ten years old, may fail to capture the pace of change or the latest advances in generative modeling, reinforcement learning, and multimodal systems. He therefore encourages learners to diversify their sources of information: insightful blog posts, video lectures on platforms like YouTube, and professional discussions on social media can all provide valuable, up-to-date perspectives.

Agarwal recommends following prominent figures in the AI space who consistently share substantive content. Even if the technical depth of such material feels daunting at first, consistent exposure helps learners gradually internalize core ideas and current practice. Over time, these incremental gains accumulate into genuine expertise, enabling emerging researchers to engage confidently with state-of-the-art concepts.

To conclude, he emphasizes that breaking into a top AI lab is not solely about credentials or formal applications but about cultivating initiative, continuous curiosity, and an enduring commitment to learning. Those who build knowledge across disciplines, adapt to ambiguity, and engage proactively with the evolving AI ecosystem are the people most likely to succeed in this fiercely competitive arena.

Readers with firsthand experience working at major AI research institutions are invited to share their perspectives by contacting the reporter at cmlee@businessinsider.com.

Source: https://www.businessinsider.com/openai-meta-superintelligence-labs-tips-getting-hired-phd-llm-interview-2025-10