In Baltimore County, Maryland, a student at Kenwood High School was detained after an artificial intelligence–based security system, designed to detect potential firearms on school grounds, flagged a bag of chips as a dangerous weapon. The false identification subjected the student, Taki Allen, to an unnerving and arguably excessive security response.

According to Allen’s account, shared with CNN affiliate WBAL, the misunderstanding occurred at a moment of complete normalcy. He had been holding a bag of Doritos in both hands, with one finger slightly extended as he gestured. To the automated system analyzing the camera feed, that alignment of hand and object matched the silhouette of someone brandishing a gun; in reality, he was holding nothing more than a brightly colored snack bag. Despite the innocent nature of the situation, the security response escalated rapidly. Allen reported being ordered to kneel, to place his hands behind his back, and to submit to handcuffs, a humiliating experience made all the more distressing by the fact that he had done nothing wrong.

Principal Katie Smith later sought to clarify the chain of events in a statement addressed to the student body’s parents and guardians. She explained that the school’s internal security team had quickly reviewed the AI-generated alert and, realizing it was a misidentification, canceled the gun warning. Unfortunately, due to a breakdown in communication, Smith was unaware that the cancellation had already occurred. Acting on the original alert, she contacted the school resource officer, who in turn notified the local police department, inadvertently escalating the situation further. This well-intentioned but misaligned sequence of responses underscored how technological errors can cascade when combined with human miscommunication.
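
To make that failure mode concrete, here is a minimal sketch, assuming (nothing in the reporting confirms this) that each responder keeps their own copy of an alert's status and that cancellations are relayed person to person rather than broadcast to everyone who saw the original alert. All names and the data model are hypothetical.

```python
# Hypothetical model: each party's last-known view of the alert.
alert_view = {
    "security_team": "ACTIVE",
    "principal": "ACTIVE",
}

def cancel_alert(recipients: list[str]) -> None:
    # The cancellation only updates the parties it actually reaches.
    for who in recipients:
        alert_view[who] = "CANCELED"

# Security reviews the footage and cancels the alert, but the update
# is never relayed to the principal.
cancel_alert(["security_team"])

# Acting on a stale view of the alert, the principal escalates anyway.
if alert_view["principal"] == "ACTIVE":
    print("principal -> school resource officer -> local police")
```

In this toy model the system's state and the principal's belief about it diverge the moment the cancellation stops propagating, which is all it takes for a retracted alert to keep escalating.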

Omnilert, the company responsible for developing and operating the AI security platform used at the school, expressed regret over the episode. In a statement to CNN, the company conveyed its concern for both the student and the community affected by the false alarm. While emphasizing sympathy for those who experienced distress, the firm also asserted that, from a technical standpoint, the system had performed as intended: the algorithm detected what it interpreted as a possible threat and automatically initiated the prescribed safety protocol, precisely the sequence it was engineered to follow.
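
Omnilert has not published its detection internals, but the behavior the company describes maps onto a familiar pattern: a vision model scores each frame, and any "firearm" label above a confidence threshold triggers the alert protocol automatically. The following is a minimal sketch of that pattern; the class names, threshold value, and notify_security() hook are illustrative assumptions, not the vendor's actual code.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # class predicted by the vision model, e.g. "firearm"
    confidence: float # model score in [0.0, 1.0]

ALERT_THRESHOLD = 0.80  # assumed cutoff; real deployments tune this value

def notify_security(detection: Detection) -> None:
    # Stand-in for the real escalation hook (pager, SRO call, etc.).
    print(f"ALERT: possible {detection.label} "
          f"(confidence {detection.confidence:.2f}); human review required")

def process_frame(detections: list[Detection]) -> None:
    # The protocol fires on the model's interpretation, not ground truth:
    # a hand-and-bag pose scored as "firearm" above the threshold raises
    # the same alert a real weapon would.
    for det in detections:
        if det.label == "firearm" and det.confidence >= ALERT_THRESHOLD:
            notify_security(det)

# A snack bag held with one finger extended, misread by the model:
process_frame([Detection(label="firearm", confidence=0.87)])
```

The point of the sketch is that nothing inside this loop distinguishes a weapon from a look-alike; the protocol "works" whenever the score crosses the line, which is exactly the gap that downstream human review is meant to close.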

Though this justification highlights the system’s internal consistency, it also points to a deeper dilemma confronting institutions that rely on automated surveillance. The episode at Kenwood High School illustrates the fragile balance between ensuring safety through rapid technological detection and shielding individuals from the intrusion and emotional harm caused by false positives. As schools, developers, and policymakers continue to integrate artificial intelligence into public safety frameworks, this incident is a telling reminder that even sophisticated technology requires vigilant human discernment and compassionate oversight to keep protection from turning into overreach.

Source: https://techcrunch.com/2025/10/25/high-schools-ai-security-system-confuses-doritos-bag-for-a-possible-firearm/