news-27082024-140301

Salesforce’s Slack Technologies recently announced that they have patched a critical flaw in their Slack AI that could have potentially allowed attackers to steal data from private channels within the collaboration platform. The flaw, discovered by researchers from security firm PromptArmor, involved a prompt injection vulnerability in the AI-based feature of the platform, which adds generative AI capabilities.

What Exactly Happened?

The issue stemmed from the large language model (LLM) on which the Slack AI is based. Essentially, the LLM could not differentiate between a legitimate instruction and a malicious one, making it susceptible to manipulation by threat actors. This meant that attackers could potentially steal data from private Slack channels or even carry out phishing attacks within the platform by injecting malicious prompts.

PromptArmor researchers explained that the prompt injection flaw occurs because the LLM cannot distinguish between a “system prompt” created by a developer and other context appended to the query. As a result, if Slack AI ingests any malicious instruction via a message, there is a high likelihood that it would follow that instruction rather than the user query.

The researchers outlined two potential scenarios in which this vulnerability could be exploited by malicious actors. In one scenario, an attacker with an account in a Slack workspace could steal data or files from a private channel within that workspace. In another scenario, an actor could use the flaw to phish users within the workspace.

The Impact of the Flaw

Given that Slack is widely used by organizations for collaboration, it often contains messages and files referring to sensitive business data and secrets. This vulnerability presented a significant risk of data exposure, potentially leading to unauthorized access to confidential information.

The situation was further exacerbated by a change made to Slack AI on August 14th, which allowed the system to ingest not only messages but also uploaded documents and Google Drive files. This expansion of the attack surface increased the risk area, as threat actors could use these documents or files to deliver malicious instructions.

PromptArmor’s Disclosure and Collaboration with Slack

PromptArmor responsibly disclosed the flaw to Slack on August 14th and worked closely with the company to clarify the issue over the following week. Initially, Slack responded that the problem was considered “intended behavior.” However, after further investigation and collaboration, Slack acknowledged the severity of the issue.

Ultimately, Slack deployed a patch to address the vulnerability, specifically targeting a scenario that could allow threat actors with existing accounts in the same workspace to phish users under specific circumstances. While Slack’s post did not mention data exfiltration, they stated that there was no evidence of unauthorized access to customer data at that time.

The Importance of AI Security

The incident raised concerns about the safety of AI tools and their susceptibility to manipulation by malicious actors. Akhil Mittal, senior manager of cybersecurity strategy and solutions for Synopsys Software Integrity Group, highlighted the need for organizations to prioritize security and ethics when using AI tools to safeguard sensitive data.

Mittal emphasized that vulnerabilities in AI systems could potentially expose unauthorized individuals to sensitive information, underscoring the importance of ensuring that these tools effectively manage data. As AI tools become more prevalent in business organizations, it is crucial to prioritize security measures to protect valuable information and maintain trust.

Lessons Learned and Recommendations

PromptArmor advised organizations using Slack to leverage the platform’s AI settings to restrict the feature’s ability to ingest documents. By limiting access to sensitive data and implementing stringent security measures, businesses can mitigate the risk of potential threats exploiting vulnerabilities in AI systems.

In conclusion, the prompt injection flaw in Slack AI highlights the need for continuous vigilance and proactive security measures to safeguard against potential vulnerabilities. It serves as a reminder of the evolving threat landscape and the importance of prioritizing cybersecurity in the deployment of AI technologies within organizations. By addressing security concerns and implementing best practices, businesses can effectively protect their data and maintain the integrity of their collaboration platforms.