This narrative is drawn from an in-depth interview with Sol Rashidi, a former technology executive whose career has included leadership roles at some of the world's most influential organizations, among them IBM, Amazon Web Services (AWS), and Estée Lauder. Now based in Miami, Rashidi reflects on a 15-year career spent designing, building, and scaling artificial intelligence (AI) systems across a wide array of business applications, including more than two hundred large-scale AI deployments, each of which sharpened her understanding of how advanced technologies can both empower and imperil the human workforce. The account has been condensed and edited for length and clarity.
Over the course of her career, Rashidi moved from hands-on practitioner, directly involved in the technical intricacies of AI, to senior executive guiding enterprise-wide data and analytics strategies. Her roles have included leading IBM's enterprise data management practice, serving as Chief Data Officer at Sony Music, and later becoming Chief Analytics Officer at Estée Lauder. She also led the technology practice for AWS's North American startup operations, working at the intersection of innovation, scalability, and emerging-market growth. Through these roles, she observed firsthand how swiftly organizations and individuals were integrating artificial intelligence into their decision-making.
Since 2011, Rashidi has grown increasingly concerned that society is developing a form of dependency, or what she calls "codependency," on AI systems. The worry extends beyond the professional sphere; it speaks to a fundamental challenge for humanity in the digital age. With that in mind, she has shifted her focus toward preparing the workforce and educating broad audiences about the responsible use of AI. Her current venture helps organizations navigate the evolving relationship between AI and human labor: its mission is not merely to automate workflows but to strengthen employees, teaching them to use AI and automation as tools that enhance productivity rather than erode critical human skills. In her words, adopting AI can be wonderful, provided that one outsources only mundane tasks while retaining full ownership of critical thought and strategic reasoning. Failing to do so risks a form of "intellectual atrophy," in which the capacity to think independently and creatively begins to deteriorate.
Intellectual atrophy, as Rashidi explains, arises when individuals progressively lose their cognitive sharpness because they delegate not just tasks but judgment and reasoning to technological systems. Much like physical muscles that weaken without exercise, the brain too can lose its strength if it ceases to engage in deliberate and challenging thought. She cautions against allowing generative AI to homogenize perspectives—to let one’s thinking become generic simply because an increasing number of people rely on the same models, such as ChatGPT, for guidance. True differentiation and competitive advantage, she maintains, are preserved through sustained intellectual engagement and the deliberate use of one’s cognitive faculties.
For her own work, Rashidi relies on between six and eight AI tools daily. These assist in data processing and analysis, enabling her to focus more on identifying patterns, extracting insights, and constructing conceptual frameworks or models that guide decision-making. However, she is intentionally mindful of her relationship with these tools: before employing AI for any given task, she actively questions her motivations. She asks herself whether she is leveraging technology merely to accelerate her workflow, or whether she is, perhaps unintentionally, allowing it to perform the work in her place. The distinction is crucial. AI should act as a catalyst that speeds up the process, leaving room for the human mind to engage in strategic and conceptual thinking. She continuously reflects on whether a particular tool truly enhances her capability, amplifies her productivity, and ultimately increases her intellectual value—or whether it simply executes rote functions that displace rather than empower her own reasoning.
Given her emphasis on communication and connection, Rashidi imposes clear boundaries on her personal use of AI. She abstains from employing generative tools to draft emails, keynote speeches, or personal correspondence. For her, authentic communication demands the genuine infusion of one’s own intellect, emotion, and intent. Only through direct authorship can she ensure that her audience receives a message precisely as she intends it—clear, heartfelt, and aligned with her personal and professional values. Authenticity, she stresses, cannot be artificially replicated.
Rashidi also critiques a broader societal shift toward convenience and speed at the expense of depth, reflection, and discernment. We now inhabit a fast-paced digital culture in which instantaneous information transfer is valued above thoughtful analysis. Every day, individuals are flooded with immense quantities of data from countless channels, including messaging platforms, social media, and professional networks, creating an environment in which attention and discernment have become scarce resources. The habits that were adequate for managing the information flow of previous decades no longer suffice in a hyperconnected and accelerated world. As a remedy, she argues that people must deliberately strengthen what she calls their "discernment muscles": the mental faculty that separates meaningful signals from the surrounding noise.
This need becomes even more urgent as the proportion of AI-generated content online continues to grow. Increasingly, algorithms train on content created by other algorithms, resulting in a feedback loop that risks diminishing originality and accuracy. Rashidi warns that such practices may bring us to a point of diminishing returns, where the quality of knowledge and problem-solving ability declines despite apparent technological progress. In this environment, the skills of verification, validation, and critical evaluation will become invaluable. She advises that while it is perfectly acceptable to use AI to produce an initial draft or framework, one must never copy and paste the output without review. The machine’s first attempt should be treated merely as a preliminary version—a starting point that invites human refinement and correction.
Recalling her leadership experience at a Fortune 500 company, Rashidi recounts supervising a team of data scientists tasked with developing a strategic approach for a new product launch. During this project, she observed a striking contrast: her junior data scientist turned in a polished and seemingly credible report in half the time it took the senior team to complete theirs. However, it soon became clear that the junior scientist had relied heavily on ChatGPT’s generated content, accepting its output without independent validation. Though the end product sounded impressive, the fundamental research, evaluation, and verification steps had been skipped. This prompted Rashidi to institute a new policy mandating that AI could be used solely to assist and accelerate the research process, never to replace it entirely.
She reminded her team—and, by extension, all professionals becoming overly reliant on AI—that the true value they bring to an organization lies in their mental acuity, originality, and human insight. “I am paying for your brain and your uniqueness,” she told them candidly. “If your work is merely the regurgitation of machine-generated text, then what distinguishes you from the tool you’re using? After all, a license for an enterprise-level AI model, such as OpenAI’s commercial API, costs far less than a salaried employee.” Her point underscored a vital truth: technological tools may replicate surface-level competence, but they cannot substitute for the depth, intuition, and discernment of the human intellect.
Rashidi acknowledges how enticing it can be to pose a question to ChatGPT or another AI system and receive an articulate, seemingly authoritative response within seconds. However, she cautions that without engaging the muscle of critical thinking, individuals risk devolving into passive consumers of machine outputs rather than independent problem-solvers. Those who abdicate reasoning in this way may find their skills obsolete in a matter of years. In the era of pervasive AI, sustaining intellectual independence is not simply a virtue—it is a professional necessity.
Ultimately, Rashidi’s reflections serve as both encouragement and warning: AI can be an extraordinary amplifier of human potential, but only when used judiciously and conscientiously. The future belongs not to those who blindly adopt every technological convenience, but to those who remain vigilant stewards of their own minds, ensuring that the very tools designed to empower us do not inadvertently suppress our capacity to think, create, and lead.
Source: https://www.businessinsider.com/former-aws-ibm-exec-ways-not-become-dependent-ai-2025-12