Over the past several months, the volume of packages submitted to EGO (extensions.gnome.org) has risen steadily. Each month brings new contributors eager to join the fast-growing extensions community by designing and publishing their own work. That growth is the sign of a healthy ecosystem, but it also increases the workload for those of us responsible for reviewing submissions and maintaining quality. On particularly demanding days I spend more than six continuous hours carefully examining over fifteen thousand lines of extension code, on top of answering technical questions and guiding community members. This work demands sustained concentration and a commitment to ensuring that every contribution meets established standards before it is made publicly available.
The last two months alone have brought a surge of newly submitted extensions, reflecting both the enthusiasm of developers and the platform's growing popularity. In one sense this is very good news: fresh ideas and diverse functionality enrich the ecosystem, attract new users, and strengthen the collaborative spirit that defines the extensions community. Alongside the growth, however, a concerning pattern has emerged. Some developers have begun incorporating artificial intelligence tools into their workflow without fully grasping the logic or structure of the code those tools generate. AI can be a valuable aid when its output is properly understood, but uncritical reliance on it often produces poor results.
Consequently, some submitted packages now contain needlessly long stretches of code, redundant functions, and structural inconsistencies: the symptoms of generated code that no human has critically assessed or refined. In several cases these poorly implemented pieces introduce inefficient or even harmful practices into the codebase. And once such patterns land in one package, they tend to spread quickly, because other developers often use existing projects as templates for their own work. This domino effect multiplies bad practices throughout the repository, threatening the quality, maintainability, and performance of future extensions. A sketch of the kind of redundancy I mean follows below.
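To make the "redundant functions" point concrete, here is a hypothetical sketch, written in TypeScript for brevity; it is not taken from any real submission, and the names and values are invented for illustration. It shows a pattern reviewers routinely flag in generated code: several near-identical functions where a single parameterized helper would do.

```typescript
// Hypothetical illustration of redundancy typical of unreviewed
// generated code: three functions that differ only in one string.
function getCpuLabel(value: number): string {
  return "CPU: " + value.toFixed(1) + "%";
}
function getMemLabel(value: number): string {
  return "Mem: " + value.toFixed(1) + "%";
}
function getSwapLabel(value: number): string {
  return "Swap: " + value.toFixed(1) + "%";
}

// The single helper a reviewer would ask for instead:
function formatLabel(name: string, value: number): string {
  return `${name}: ${value.toFixed(1)}%`;
}

console.log(getCpuLabel(42.5));        // "CPU: 42.5%"
console.log(formatLabel("CPU", 42.5)); // "CPU: 42.5%" — same output, one function
```

Multiplied across a whole extension, and then copied into the next extension that uses it as a template, this is exactly how review time balloons.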
As a direct consequence, the average waiting time for code review and approval has increased significantly. What was once a relatively straightforward review now requires deeper investigation and more rounds of corrections before an extension adheres to best practices. The enthusiasm of new contributors remains encouraging, but it underlines the need for a community that values technical precision and responsible use of emerging tools such as AI as much as it values innovation. By addressing these challenges, the EGO extensions ecosystem can keep expanding sustainably while preserving the integrity, efficiency, and trust that have made it successful.
Source: https://www.theverge.com/news/844655/gnome-linux-ai-shell-extensions-ban