A substantial 77% of organizations are either integrating or exploring the adoption of artificial intelligence (AI) to improve efficiency and streamline workflows. This growing reliance on AI, particularly generative AI (GenAI) models and large language models (LLMs) such as ChatGPT, has created a critical need for robust security measures. In response, Akto has launched its GenAI Security Testing solution, which addresses these concerns by, among other things, scanning APIs that leverage AI technology. Jim Manico, a former OWASP Global Board Member and secure coding educator, has commended these efforts, emphasizing the importance of both securing applications and providing education in AI security.

Notable vulnerabilities in LLMs include prompt injection, in which crafted inputs manipulate the model's outputs, and susceptibility to Denial of Service (DoS) attacks that cause service disruptions. In addition, overreliance on LLM outputs without adequate verification mechanisms has led to data inaccuracies and leaks. As the industry witnesses more data leaks tied to such overreliance, organizations are urged to implement robust validation processes to mitigate these risks.

Overall, securing GenAI systems demands a multifaceted approach: protecting the AI from malicious external inputs while also ensuring the security of the external systems that rely on its outputs, as underscored by a member of the OWASP Top 10 for LLM Applications core team.
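To make the "robust validation" urged above concrete, the sketch below illustrates one common pattern: treating LLM output as untrusted input, just as web applications treat user input. It enforces a JSON schema, checks the requested action against an allowlist, and sanitizes the payload before any downstream system consumes it. This is a minimal illustration under assumed conventions, not Akto's actual implementation; the `validate_llm_output` helper and `ALLOWED_ACTIONS` set are hypothetical names introduced for this example.

```python
import json
import re

# Hypothetical allowlist: the only actions a downstream system may execute.
ALLOWED_ACTIONS = {"summarize", "translate", "classify"}

def validate_llm_output(raw_output: str) -> dict:
    """Treat LLM output as untrusted input: parse it, check its structure,
    and reject anything outside the expected schema before acting on it."""
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError:
        raise ValueError("LLM output is not valid JSON; refusing to act on it")

    action = data.get("action")
    if action not in ALLOWED_ACTIONS:
        # An unexpected action may indicate a successful prompt injection.
        raise ValueError(f"Unexpected action {action!r}; output rejected")

    payload = data.get("payload", "")
    if not isinstance(payload, str) or len(payload) > 4096:
        raise ValueError("Payload missing, wrong type, or suspiciously large")

    # Strip control characters before the payload reaches downstream systems.
    data["payload"] = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]", "", payload)
    return data

# Example: a well-formed response passes; a manipulated one raises ValueError.
print(validate_llm_output('{"action": "summarize", "payload": "Quarterly revenue rose 8%."}'))
```

The key design choice here is applying a zero-trust posture to model outputs, not just model inputs, which addresses both the prompt-injection and overreliance risks described above: even if an attacker manipulates the model, the damage is bounded by what the validation layer allows through.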