
Last month, I discovered something that stopped me cold during a routine penetration test. A developer had spun up an Ollama server to experiment with local AI models. Nothing unusual about that, except the server was publicly accessible with no authentication. The models it hosted had been trained on internal company data. This scenario plays […]
DarkMind is a newly discovered backdoor attack that manipulates the reasoning processes of Large Language Models (LLMs), making it one of the most dangerous and stealthy AI threats to date. Unlike traditional attacks that tamper with input prompts or training data, DarkMind targets the logic and decision-making pathways within an LLM, allowing it to subtly […]
Recent research has uncovered serious vulnerabilities in Google’s Gemini for Workspace, an AI assistant integrated across various Google services. These weaknesses make the assistant susceptible to prompt injection attacks, allowing malicious actors to manipulate its output and potentially generate misleading or harmful responses. This raises significant concerns about the security and trustworthiness of AI-powered tools, […]