A recently disclosed security flaw patched by Microsoft may have been exploited by the Russia-linked state-sponsored threat actor known as APT28, according to new findings from Akamai.
The vulnerability in question is CVE-2026-21513 (CVSS score: 8.8), a high-severity security feature bypass affecting the MSHTML Framework.
“Protection mechanism failure in MSHTML Framework allows an unauthorized attacker to bypass a security feature over a network,” Microsoft noted in its advisory for the flaw. It was fixed by the Windows maker as part of its February 2026 Patch Tuesday update.
However, the tech giant also noted that the vulnerability had been exploited as a zero-day in real-world attacks, crediting the Microsoft Threat Intelligence Center (MSTIC), Microsoft Security Response Center (MSRC), and Office Product Group Security Team, along with Google Threat Intelligence Group (GTIG), for reporting it.
In a hypothetical attack scenario, a threat actor could weaponize the vulnerability by persuading a victim to open a malicious HTML file or shortcut (LNK) file delivered through a link or as an email attachment.
Once the crafted file is opened, it manipulates browser and Windows Shell handling, causing the content to be executed by the operating system, Microsoft noted. This, in turn, allows the attacker to bypass security features and potentially achieve code execution.
While the company has not officially shared any details about the zero-day exploitation effort, Akamai said it identified a malicious artifact that was uploaded to VirusTotal on January 30, 2026, and is associated with infrastructure linked to APT28.
It’s worth noting that the sample was flagged by the Computer Emergency Response Team of Ukraine (CERT-UA) early last month in connection with APT28’s attacks exploiting another security flaw in Microsoft Office (CVE-2026-21509, CVSS score: 7.8).
The web infrastructure company said CVE-2026-21513 is rooted in the logic within “ieframe.dll” that handles hyperlink navigation, and that it’s the result of insufficient validation of the target URL, which allows attacker-controlled input to reach code paths that invoke ShellExecuteExW. This, in turn, enables execution of local or remote resources outside the intended browser security context.
“This payload involves a specially crafted Windows Shortcut (LNK) that embeds an HTML file immediately after the standard LNK structure,” security researcher Maor Dahan said. “The LNK file initiates communication with the domain wellnesscaremed[.]com, which is attributed to APT28 and has been in extensive use for the campaign’s multistage payloads. The exploit leverages nested iframes and multiple DOM contexts to manipulate trust boundaries.”
Akamai noted that the technique makes it possible for an attacker to bypass Mark-of-the-Web (MotW) and Internet Explorer Enhanced Security Configuration (IE ESC), leading to a downgrade of the security context and ultimately facilitating the execution of malicious code outside of the browser sandbox via ShellExecuteExW.
“While the observed campaign leverages malicious LNK files, the vulnerable code path can be triggered through any component embedding MSHTML,” the company added. “Therefore, additional delivery mechanisms beyond LNK-based phishing should be expected.”
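The navigation-validation failure described above comes down to attacker-controlled URLs reaching a shell-execution API unchecked. Below is a minimal, hypothetical sketch of the kind of allowlist check whose absence enables such bugs; it is illustrative Python, not Microsoft's actual MSHTML logic.

```python
from urllib.parse import urlparse

# Hypothetical allowlist check, for illustration only: before handing a
# navigation target to a shell-execution API (such as ShellExecuteExW on
# Windows), confirm it is a plain web URL rather than a local file, UNC
# path, or special-scheme URL that would execute outside the browser's
# security context.
SAFE_SCHEMES = {"http", "https"}

def is_safe_navigation_target(url: str) -> bool:
    url = url.strip()
    if url.startswith("\\\\"):
        return False  # UNC path to a remote share
    parsed = urlparse(url)
    if parsed.scheme.lower() not in SAFE_SCHEMES:
        return False  # rejects file:, javascript:, and similar schemes
    return bool(parsed.netloc)

print(is_safe_navigation_target("https://example.com/page"))   # True
print(is_safe_navigation_target("file:///C:/Tools/calc.exe"))  # False
```

According to Akamai's analysis, it is precisely this class of check that the vulnerable hyperlink-handling logic in "ieframe.dll" failed to perform before invoking ShellExecuteExW.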
This week is not about one big event; it shows where things are moving. Network systems, cloud setups, AI tools, and common apps are all being pushed in different ways, and small gaps in access control, exposed keys, and ordinary features are being used as entry points.
The pattern becomes clear only when you see everything together: faster scans, smarter misuse of trusted services, and steady targeting of high-value sectors. Each story adds context, and reading them all gives a fuller picture of how today’s threat landscape is evolving.
⚡ Threat of the Week
Cisco SD-WAN Zero-Day Exploited — A newly disclosed maximum-severity security flaw in Cisco Catalyst SD-WAN Controller (formerly vSmart) and Catalyst SD-WAN Manager (formerly vManage) has come under active exploitation in the wild as part of malicious activity that dates back to 2023. The vulnerability, tracked as CVE-2026-20127 (CVSS score: 10.0), allows an unauthenticated remote attacker to bypass authentication and obtain administrative privileges on an affected system by sending a crafted request. Cisco credited the Australian Signals Directorate’s Australian Cyber Security Centre (ASD-ACSC) for reporting the vulnerability. The networking equipment major is tracking the exploitation and subsequent post-compromise activity under the moniker UAT-8616, describing the cluster as a “highly sophisticated cyber threat actor.”
🔔 Top News
Anthropic Accuses 3 Chinese Firms of Distillation Attacks — Anthropic accused three Chinese AI firms of engaging in concerted “industrial-scale” distillation attack campaigns aimed at extracting information from its model, making it the latest American tech firm to level such claims after OpenAI issued similar complaints. DeepSeek, Moonshot AI, and MiniMax are said to have flooded Claude with large volumes of specially crafted prompts to elicit responses to train their own proprietary models. Last month, OpenAI submitted an open letter to U.S. legislators, claiming to have observed activity “indicative of ongoing attempts by DeepSeek to distill frontier models of OpenAI and other U.S. frontier labs, including through new, obfuscated methods.” The disclosure renewed a debate over training data sources and distillation techniques, with some criticizing the company for training its own systems using copyrighted material without permission. “Anthropic is guilty of stealing training data at a massive scale and has had to pay multibillion-dollar settlements for their theft,” xAI CEO Elon Musk said.
Google Disrupts UNC2814 GRIDTIDE Campaign — Google disclosed that it worked with industry partners to disrupt the infrastructure of a suspected China-nexus cyber espionage group tracked as UNC2814 that breached at least 53 organizations across 42 countries. The tech giant described UNC2814 as a prolific, elusive actor that has a history of targeting international governments and global telecommunications organizations across Africa, Asia, and the Americas. Central to the hacking group’s operations is a novel backdoor dubbed GRIDTIDE that abuses the Google Sheets API as a communication channel to disguise C2 traffic and facilitate the transfer of raw data and shell commands. Chinese cyber espionage groups have consistently prioritized the telecommunication sector as a target precisely because of the access their networks provide to sensitive data and lawful intercept infrastructure.
Thousands of Public Google Cloud API Keys Exposed with Gemini Access — New research has found that Google Cloud API keys, typically designated as project identifiers for billing purposes, could be abused to authenticate to sensitive Gemini endpoints and access private data. The problem occurs when users enable the Gemini API on a Google Cloud project (i.e., Generative Language API), causing the existing API keys in that project, including those accessible via the website JavaScript code, to gain surreptitious access to Gemini endpoints without any warning or notice. With a valid key, an attacker can access uploaded files, cached data, and even rack up LLM usage charges, Truffle Security said. The issue has since been plugged by Google.
UAT-10027 Targets U.S. Education and Healthcare Sectors — A previously undocumented threat activity cluster known as UAT-10027 has been attributed to an ongoing malicious campaign targeting education and healthcare sectors in the U.S. since at least December 2025. The end goal of the attacks is to deliver a never-before-seen backdoor codenamed Dohdoor. “Dohdoor utilizes the DNS-over-HTTPS (DoH) technique for command-and-control (C2) communications and has the ability to download and execute other payload binaries reflectively,” Cisco Talos said. Analysis of the campaign has revealed no evidence of data exfiltration to date. Although no final payloads have been observed other than what appears to be a Cobalt Strike Beacon used to backdoor into the victim’s environment, UAT-10027’s actions are believed to be driven by financial gain based on the victimology pattern.
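DNS-over-HTTPS makes effective C2 camouflage because a DoH query is just an ordinary HTTPS GET carrying a base64url-encoded DNS message (per RFC 8484). The sketch below constructs such a URL; the resolver and domain are illustrative examples, not indicators from the Dohdoor campaign.

```python
import base64
import struct

def build_doh_url(domain: str,
                  resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Build an RFC 8484 DNS-over-HTTPS GET URL for a TXT lookup.

    This is the kind of ordinary-looking HTTPS request that lets a
    DoH-based implant blend into normal web traffic.
    """
    # Minimal DNS wire-format query: 12-byte header plus one question.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # RD=1, QDCOUNT=1
    qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 16, 1)  # QTYPE=TXT, QCLASS=IN
    wire = header + question
    # RFC 8484 uses unpadded base64url in the ?dns= query parameter.
    encoded = base64.urlsafe_b64encode(wire).rstrip(b"=").decode()
    return f"{resolver}?dns={encoded}"

url = build_doh_url("example.com")
# To a network monitor, fetching this URL looks like routine HTTPS
# traffic to a public resolver rather than a DNS-based C2 channel.
```

Because the DNS payload rides inside TLS to a reputable resolver, defenders cannot rely on classic DNS monitoring alone to spot this traffic.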
Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration — Security vulnerabilities in Anthropic Claude Code could have allowed attackers to remotely execute code on users’ machines and steal API keys by injecting malicious configurations into repositories, and then waiting for an unsuspecting developer to clone and open an untrustworthy project. The vulnerabilities were addressed between September 2025 and January 2026. “The ability to execute arbitrary commands through repository-controlled configuration files created severe supply chain risks, where a single malicious commit could compromise any developer working with the affected repository,” Check Point said. “The integration of AI into development workflows brings tremendous productivity benefits, but also introduces new attack surfaces that weren’t present in traditional tools.”
🔥 Trending CVEs
New vulnerabilities surface daily, and attackers move fast. Reviewing and patching early keeps your systems resilient.
Automating Real-World Security Testing to Prove What Actually Works → This webinar explains why one-time security assessments are no longer enough and shows how organizations can automate continuous, real-world testing of their defenses to uncover gaps and measure how well controls hold up against actual attack techniques.
When AI Agents Become Your New Attack Surface → This webinar explains that as AI tools turn into autonomous agents that can browse, call APIs, and access internal systems, the security risk expands beyond the model to the entire environment they operate in, requiring stricter access controls, monitoring, and system-level safeguards rather than model testing alone.
Quantum Is Coming: Preparing for the End of Today’s Encryption → This webinar explains how future quantum computers could break today’s encryption, why “harvest now, decrypt later” attacks are a real risk, and what practical steps organizations can take now to begin shifting to post-quantum cryptography.
📰 Around the Cyber World
UNC6384 Drops New PlugX Variant — IIJ-SECT and LAB52 have detailed new activity from the Chinese cyber espionage group UNC6384. The attacks follow a known modus operandi of using STATICPLUGIN, a digitally signed downloader, to deliver updated versions of PlugX using DLL side-loading. The malicious payloads are distributed via phishing emails with meeting invitation lures or through fake software updates.
OpenAI Takes Action Against ChatGPT Accounts Used for Harmful Purposes — OpenAI said it took down ChatGPT accounts used for influence operations, phishing, and malware development. This included a possible Chinese intelligence operation in which an individual associated with Chinese law enforcement used the AI tool for covert influence operations against domestic and foreign adversaries. The company also acted against clusters conducting reconnaissance about U.S. persons and federal building locations, online romance scams, and Russian influence operations across Africa by generating social media posts and long-form commentary articles. “Unusually, this scam network combined manual ChatGPT prompting and an automated AI chatbot to try to entrap its targets,” OpenAI said about the scam operation running out of Cambodia. Some of these scams targeted Indonesian loveseekers. Other scams used ChatGPT to create content that purported to come from fictitious law firms, as well as impersonate real attorneys and U.S. law enforcement as part of a recovery scam targeting fraud victims.
AI-Induced Lateral Movement — New research from Orca Security has highlighted how AI can become a “third dimension” in the world of lateral movement, after network and identity, allowing attackers to expand their reach. “By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry out significant security incidents,” Orca said. “LLMs don’t truly understand the difference between data and instructions, and when tool output is fed back into the model, it can be interpreted as something to act on. Which opens a window to AI-induced Lateral Movement (AILM) activities.”
Russia Launches Probe into Telegram CEO — Russian authorities launched a criminal investigation of Telegram founder and CEO Pavel Durov, accusing him of promoting and facilitating terrorist activity on the messaging platform by failing to respond to law enforcement takedown requests. Russian officials have accused Durov of choosing a “path of violence and permissiveness” by not cooperating with the country’s law enforcement agencies, according to Rossiyskaya Gazeta. The move comes after Russia began restricting access to Telegram in the country in favor of MAX. Last month, Durov called it an “attempt to force its citizens to switch to a state-controlled app built for surveillance and political censorship.”
Hacked Prayer App Sends Surrender Messages — According to reports from The Wall Street Journal and WIRED, unidentified hackers seized control of an Iranian prayer app during a joint U.S.-Israeli attack to send messages urging the Iranian military to lay down their weapons and promising amnesty if they surrendered. The messages were sent as push notifications from the BadeSaba Calendar app, which has been downloaded more than 5 million times from the Google Play Store. It’s currently not clear who is behind the hack. In the wake of the strikes, the Iranian government shut down all internet access in the country.
Smart TVs Turned Into AI Content Scrapers — Several smart TV app makers are deploying a new SDK named Bright SDK that lets users see fewer ads but also stealthily turns their TV into a node in a global proxy network that crawls and scrapes the web. Bright Data, the company behind the SDK, claims to operate more than 150 million residential proxy IP addresses spanning 195 countries.
Multiple Stealer Malware Families Detected — Multiple information stealer families have been detected in the wild. This includes Arkanix, CharlieKirk GRABBER, ComSuon, DarkCloud, MawaStealer, and MioLab (NovaStealer). Kaspersky’s analysis of Arkanix has revealed that it was likely developed as an LLM-assisted experiment, shrinking development time and costs. While Arkanix was promoted on underground forums in October 2025, the malware-as-a-service (MaaS) appears to have been taken down towards the end of 2025. The findings demonstrate continued demand for off-the-shelf stealer malware, creating an ecosystem that enables other threat actors to purchase stealer logs for obtaining initial access to targets. “Raw Infostealer logs are meticulously filtered by corporate domain, packaged, and sold to initial access brokers and attackers specifically looking for frictionless entry points into high-value corporate networks,” Hudson Rock said. The development has been complemented by underground networks turning into cybercrime marketplaces, complete with reputation systems, escrow, and specialist vendors, Varonis added. “One operator runs infostealers across thousands of machines. Another extracts and sorts the credentials. A third sells curated access,” security researcher Daniel Kelley said. “A fourth deploys the ransomware. Each person focuses on what they do best, and the ecosystem has become ruthlessly efficient.”
Chilean National Extradited to U.S. to Face Financial Fraud Crimes — Alex Rodrigo Valenzuela Monje (aka VAL4K), a 24-year-old Chilean national, has been extradited to the U.S. over his alleged role in running a cybercrime operation that involved the trafficking of payment card data. The defendant is accused of trafficking stolen credit card numbers and information for over 26,500 credit cards. “From at least May 2021 to August 2023, Valenzuela Monje operated an illegal online card shop, selling dumps of unauthorized access devices through Telegram channels,” the U.S. Justice Department said. “He allegedly operated the channels known as MacacoCC Collective and Novato Carding, offering payment card data for virtually all U.S. payment cards.”
New FUNNULL Infrastructure Discovered — QiAnXin has flagged new infrastructure associated with FUNNULL, a Philippines-based content delivery network (CDN) sanctioned last year by the U.S. Treasury for facilitating cyber scam operations. “Previously, their main method was to poison existing public CDN services; now they have evolved to independently develop complete server-side attack suites (RingH23), actively infiltrating CDN nodes, demonstrating a significant improvement in control and technical sophistication,” QiAnXin XLab said. Two independent supply chain infection channels have been identified: the compromise of maccms.la to distribute a malicious PHP backdoor through its update channel, and the compromise of the GoEdge CDN management node to implant an infection module and deploy the proprietary RingH23 attack suite to all edge nodes via SSH remote commands. The campaign has compromised 10,748 unique IP addresses, predominantly video streaming sites.
Spike in Scans for SonicWall Devices — GreyNoise said it detected a spike in scans for SonicWall devices originating from the infrastructure of a known proxy provider. The activity started on February 22, 2026, and scanned for exposed SonicWall SSL VPNs. A total of 84,142 scanning sessions targeting SonicWall SonicOS infrastructure were observed between February 22 and February 25, 2026. The scanning came from 4,305 unique IP addresses across 20 autonomous systems. “Ninety-two percent of sessions probed a single API endpoint to determine whether SSL VPN is enabled — the prerequisite check before credential attacks,” GreyNoise said. “A commercial proxy service delivered 32% of campaign volume through 4,102 rotating exit IPs in two surgical bursts totaling 16 hours.”
Google Removes 115 Android Apps Tied to Ad Fraud — A new ad fraud operation dubbed Genisys involved hijacking Android devices to run malicious activity in the background. The activity leveraged a set of 115 apps that stealthily opened websites inside hidden browser windows to generate ad display revenue for their creators. More than 500 domains were generated using AI tools to serve the ads. “They appear as generic blogs, news-style sites, and informational properties produced at scale, built not to attract real audiences but to receive and monetize fraudulent traffic,” Integral Ads said. The apps have since been removed by Google. The findings build on another mobile ad fraud scheme called Arcade in which mobile apps generated hidden in-app browser activity to load websites in the background and convert mobile-origin activity into web traffic.
Zerobot Exploits Flaws in n8n and Tenda Routers — A Mirai-based IoT botnet named Zerobot has been observed exploiting vulnerabilities in the n8n AI automation platform (CVE-2025-68613) and Tenda routers (CVE-2025-7544) to expand its reach. The activity was first detected in January 2026. “Targeting of the n8n vulnerability is particularly interesting: Botnets typically exploit Internet of Things (IoT) devices, such as security cameras, DVRs, and routers, but n8n falls into an entirely different category,” Akamai said. “Although this isn’t entirely new behavior for botnets, this sort of targeting presents a greater danger to organizations by exposing more critical infrastructure to compromise as the n8n exploit could enable lateral movement for a threat actor.”
Various ClickFix Campaigns Spotted — Threat hunters disclosed multiple ClickFix campaigns, including one leading to a hands-on-keyboard attack that deployed the Termite ransomware. The attack has been attributed to a group known as Velvet Tempest (DEV-0504). Another ClickFix campaign, codenamed OCRFix, used websites impersonating the Tesseract OCR tool as a launchpad for delivering malware that uses EtherHiding to retrieve the C2 server, send system information, and await further instructions. A third campaign has been found employing fake GitHub repositories impersonating software companies and leveraging ClickFix to social-engineer victims into installing infostealers, such as SHub Stealer v2.0.
GTFire Phishing Scheme Detailed — A phishing campaign dubbed GTFire is abusing Google Firebase to host phishing pages and Google Translate to disguise the malicious URLs and bypass email and web security filters. “By chaining these services together, the attackers create phishing links that appear benign, leverage Google’s reputation, and dynamically redirect victims to brand‑impersonating login pages,” Group-IB said. “Once credentials are submitted and harvested, victims are often redirected back to the legitimate website of the targeted organization, reducing suspicion and delaying incident response.” The campaign is estimated to have harvested thousands of stolen credentials associated with more than a thousand organizations, spanning over a hundred countries and hundreds of industries. The threat actor behind the operation has been active since at least January 1, 2022. Mexico, the U.S., Spain, India, and Argentina are among the prominent targets.
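Google Translate's site-translation proxy rewrites target hostnames into subdomains of translate.goog (for example, example.com becomes example-com.translate.goog), which is part of what lets GTFire links appear Google-hosted. The sketch below shows how a defender might unmask such URLs, assuming the observed encoding convention in which dots become dashes and literal dashes are doubled.

```python
import re
from urllib.parse import urlparse

def translate_proxied_host(url: str):
    """Recover the original hostname from a Google Translate proxy URL.

    Assumes the observed translate.goog encoding: dots in the original
    host become single dashes, and literal dashes are doubled. Returns
    None for URLs that do not go through the proxy.
    """
    host = urlparse(url).hostname or ""
    suffix = ".translate.goog"
    if not host.endswith(suffix):
        return None
    encoded = host[: -len(suffix)]
    # Split on single dashes only; '--' stands for a literal dash.
    labels = re.split(r"(?<!-)-(?!-)", encoded)
    return ".".join(label.replace("--", "-") for label in labels)

print(translate_proxied_host("https://example-com.translate.goog/login"))  # example.com
```

A mail or web gateway that normalizes translate.goog hosts this way can apply its ordinary domain reputation checks to the real destination instead of to Google's proxy.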
C77L Ransomware Targets Russia — A ransomware operation called C77L has been tied to at least 40 attacks on Russian and Belarusian enterprises since March 2025. The group is assessed to be operating out of Iran. Initial access to target networks is accomplished via weak passwords for publicly available RDP and VPN endpoints. “The targets of attacks are Windows systems due to their overwhelming predominance in the IT infrastructures of medium and small businesses,” F6 said.
RESURGE Malware Can Be Dormant on Infected Ivanti Devices — The U.S. Cybersecurity and Infrastructure Security Agency (CISA) updated its original alert for RESURGE, a piece of malware deployed as part of exploitation activity targeting a now-patched security flaw in Ivanti Connect Secure (ICS) appliances. The agency said “RESURGE has sophisticated network-level evasion and authentication techniques, leveraging advanced cryptographic methods and forged TLS certificates to facilitate covert communications,” adding “RESURGE can remain latent on systems until a remote actor attempts to connect to the compromised device.”
30 Members of The Com Arrested — A coordinated law enforcement operation led by Europol detained 30 individuals connected to an underground online community known as The Com. The operation, launched in January 2025, has been codenamed Project Compass. An additional 179 members were also identified as part of the investigation. The Com is the name assigned to a loose-knit cybercrime collective that has been linked to online doxxing, harassment, threats of violence, extortion, sexual exploitation, phishing, SIM swapping, ransomware, and other digital crimes. Europol described The Com as a decentralized extremist network.
U.K. Government Cuts Cyber Attack Fix Times by 87% — The U.K. government has claimed it has reduced its backlog of critical vulnerabilities by 75% and cut cyber attack fix times by 87%. Serious security weaknesses in public sector websites are now fixed six times faster, cutting the average time from nearly two months to just over a week, the U.K. government said in an update published on 26 February.
Poland Dismantles Organized Crime Group — Poland’s Central Bureau for Combating Cybercrime (CBZC) dismantled an organized group that used phishing to take control of Facebook accounts and extract BLIK payment codes from victims. Eleven members of an organized criminal group operating in Poland and Germany between May 2022 and May 2024 were identified. Six suspects have been placed in pretrial detention as part of the investigation, and over 100,000 credentials were seized. The group used “phishing techniques to obtain login details for Facebook accounts, and then gained access to them and used instant messaging to extort BLIK codes from other users of the portal,” CBZC said.
Hacker Exploits Claude to Target Mexican Government Sites — An unknown hacker abused Anthropic’s Claude chatbot to carry out attacks against Mexican government agencies, according to a report by Gambit Security. “Within a month of the initial compromise, ten government bodies and one financial institution were affected, approximately 195 million identities exposed, and roughly 150GB of data exfiltrated: tax records, civil registry files, voter data,” the company said. “The attacker even built an automated system that forges official government tax certificates using live data. It was orchestrated by an individual actor directing AI to operate as a nation-state-level team of operators and analysts.” The operation ran on more than 1,000 prompts and regularly passed information to OpenAI’s GPT-4.1 for analysis. The breach began in late December 2025 and continued for about a month. Anthropic has since disrupted the activity and banned all of the accounts involved. The attacks haven’t been attributed to a specific group.
🔧 Cybersecurity Tools
Titus → It is an open-source tool from Praetorian that scans code, files, repositories, and traffic to find leaked credentials like API keys and tokens. It uses hundreds of pattern rules and can check whether a detected secret is actually active. You can run it as a command-line tool, embed it in other tools as a Go library, or use its Burp Suite and browser extensions to uncover credential leaks across different workflows.
Sirius → It is an open-source vulnerability scanning platform on GitHub that automates network and system security checks to find weaknesses and risks in infrastructure. It combines community-driven security data with automated tests, runs within containers, and gives operators a unified view of vulnerabilities to prioritize remediation.
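For a sense of how pattern-rule secret scanners of the kind Titus implements work, here is a deliberately simplified sketch. The two rules below are illustrative only; production scanners pair hundreds of such rules with checks on whether a matched secret is still active.

```python
import re

# Simplified pattern rules in the spirit of secret scanners like Titus.
# Each rule maps a label to a regex describing one credential format.
RULES = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan_for_secrets(text: str):
    """Return (rule_name, matched_secret) pairs found in the text."""
    findings = []
    for name, pattern in RULES.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings

# AWS's documented example key ID, not a live credential.
print(scan_for_secrets('aws_key = "AKIAIOSFODNN7EXAMPLE"'))
# [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```

The hard part in practice is not the matching but minimizing false positives, which is why tools like Titus also attempt to verify that a candidate secret actually authenticates.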
Disclaimer: These tools are provided for research and educational use only. They are not security-audited and may cause harm if misused. Review the code, test in controlled environments, and comply with all applicable laws and policies.
Conclusion
Viewed one by one, these incidents seem contained. Seen together, they show how risk now flows across connected systems that organizations rely on daily. Infrastructure, AI platforms, cloud services, and third-party tools are deeply intertwined, and strain in one area often exposes another.
The takeaway is clarity, not alarm. Adversaries are improving efficiency, scaling access, and operating inside normal processes. Reading through each report helps map that shift and understand how the broader environment is changing.
Google has announced a new program in its Chrome browser to ensure that HTTPS certificates are secure against the future risk posed by quantum computers.
“To ensure the scalability and efficiency of the ecosystem, Chrome has no immediate plan to add traditional X.509 certificates containing post-quantum cryptography to the Chrome Root Store,” the Chrome Secure Web and Networking Team said.
“Instead, Chrome, in collaboration with other partners, is developing an evolution of HTTPS certificates based on Merkle Tree Certificates (MTCs), currently in development in the PLANTS working group.”
As Cloudflare explains, MTC is a proposal for the next generation of the Public Key Infrastructure (PKI) used to secure the internet that aims to reduce the number of public keys and signatures in the TLS handshake to the bare minimum required.
Under this model, a Certification Authority (CA) signs a single ‘Tree Head’ representing potentially millions of certificates, and the ‘certificate’ sent to the browser is a lightweight proof of inclusion in that tree, Google said.
In other words, MTCs facilitate the adoption of post-quantum algorithms without having to incur additional bandwidth associated with classical X.509 certificate chains. The approach, the company added, decouples the security strength of the corresponding cryptographic algorithm from the size of the data transmitted to the user.
“By shrinking the authentication data in a TLS handshake to the absolute minimum, MTCs aim to keep the post-quantum web as fast and seamless as today’s internet, maintaining high performance even as we adopt stronger security,” Google said.
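The tree-head idea can be illustrated with a toy Merkle tree: the CA signs only the root, and each site presents a short inclusion proof instead of a bulky signature chain. This is a simplified sketch for intuition, not the actual MTC construction being developed in the PLANTS working group.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Build a Merkle tree bottom-up, returning every level (root last).
    Toy construction: a level with an odd node count duplicates its
    final node."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def inclusion_proof(levels, index):
    """Collect the sibling hashes needed to recompute the root from one
    leaf; this short proof plays the role of the lightweight certificate."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

certs = [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]
levels = build_levels(certs)
root = levels[-1][0]                    # the CA signs only this tree head
proof = inclusion_proof(levels, 2)      # short proof for cert-c
print(verify(b"cert-c", proof, root))   # True
```

Note the payoff for post-quantum deployment: the proof grows only logarithmically with the number of certificates in the batch, so the expensive post-quantum signature appears once, on the tree head, rather than once per certificate.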
The tech giant said it’s already experimenting with MTCs with real internet traffic and that it plans to gradually expand the rollout in three distinct phases by the third quarter of 2027 –
Phase 1 (In progress) – Google is conducting a feasibility study in collaboration with Cloudflare to evaluate the performance and security of TLS connections relying on MTCs.
Phase 2 (Q1 2027) – Google plans to invite Certificate Transparency (CT) Log operators with at least one “usable” log in Chrome before February 1, 2026, to participate in the initial bootstrapping of public MTCs.
Phase 3 (Q3 2027) – Google will finalize the requirements for onboarding additional CAs into the new Chrome Quantum-resistant Root Store (CQRS) and corresponding Root Program that only supports MTCs.
“We view the adoption of MTCs and a quantum-resistant root store as a critical opportunity to ensure the robustness of the foundation of today’s ecosystem,” Google said. “By designing for the specific demands of a modern, agile internet, we can accelerate the adoption of post-quantum resilience for all web users.”
Cybersecurity researchers have disclosed details of a now-patched security flaw in Google Chrome that could have permitted attackers to escalate privileges and gain access to local files on the system.
The vulnerability, tracked as CVE-2026-0628 (CVSS score: 8.8), has been described as a case of insufficient policy enforcement in the WebView tag. It was patched by Google in early January 2026 in version 143.0.7499.192/.193 for Windows/Mac and 143.0.7499.192 for Linux.
“Insufficient policy enforcement in WebView tag in Google Chrome prior to 143.0.7499.192 allowed an attacker who convinced a user to install a malicious extension to inject scripts or HTML into a privileged page via a crafted Chrome extension,” according to a description on the NIST National Vulnerability Database (NVD).
Palo Alto Networks Unit 42 researcher Gal Weizman, who discovered and reported the flaw on November 23, 2025, said the issue could have permitted malicious extensions with basic permissions to seize control of the new Gemini Live panel in Chrome. The panel can be launched by clicking the Gemini icon located at the top of the browser window. Google added Gemini integration to Chrome in September 2025.
The flaw could have been abused by an attacker to achieve privilege escalation, enabling them to access the victim’s camera and microphone without permission, take screenshots of any website, and access local files.
The findings highlight an emerging attack vector arising from baking artificial intelligence (AI) and agentic capabilities directly into web browsers to facilitate real-time content summarization, translation, and automated task execution, as the same capabilities could be abused to perform privileged actions.
The problem, at its core, is that these AI agents must be granted privileged access to the browsing environment to perform multi-step operations. That access becomes a double-edged sword when an attacker embeds hidden prompts in a malicious web page and a victim is tricked into visiting it via social engineering or some other means.
The prompt could instruct the AI assistant to perform actions that would otherwise be blocked by the browser, leading to data exfiltration or code execution. Even worse, the web page could manipulate the agent to store the instructions in memory, causing it to persist across sessions.
Besides the expanded attack surface, Unit 42 said the integration of an AI side panel in agentic browsers brings back classic browser security risks.
“By placing this new component within the high-privilege context of the browser, developers could inadvertently create new logical flaws and implementation weaknesses,” Weizman said. “This could include vulnerabilities related to cross-site scripting (XSS), privilege escalation, and side-channel attacks that can be exploited by less-privileged websites or browser extensions.”
While browser extensions operate based on a defined set of permissions, successful exploitation of CVE-2026-0628 undermines the browser security model and allows an attacker to run arbitrary code at “gemini.google[.]com/app” via the browser panel and gain access to sensitive data.
“An extension with access to a basic permission set through the declarativeNetRequest API allowed permissions that could have enabled an attacker to inject JavaScript code into the new Gemini panel,” Weizman added. “When the Gemini app is loaded within this new panel component, Chrome hooks it with access to powerful capabilities.”
It’s worth noting that the declarativeNetRequest API allows extensions to intercept and modify properties of HTTPS web requests and responses; ad-blocking extensions use it to block requests that load ads on web pages.
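For illustration, a declarativeNetRequest static rule that redirects a script request takes roughly this shape (the rule schema comes from Chrome’s extension API documentation; the URLs here are placeholders, not the actual exploit):

```json
{
  "id": 1,
  "priority": 1,
  "action": {
    "type": "redirect",
    "redirect": { "url": "https://example.invalid/injected.js" }
  },
  "condition": {
    "urlFilter": "||gemini.google.com",
    "resourceTypes": ["script"]
  }
}
```

Rules like this live in a JSON file referenced from the extension manifest’s declarative_net_request section, and Chrome applies them without the extension running any background code, which is why a seemingly basic permission set can have outsized effects.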
In other words, all it takes for an attacker is to trick an unsuspecting user into installing a specially crafted extension, which could then inject arbitrary JavaScript code into the Gemini side panel to interact with the file system, take screenshots, access the camera, turn on the microphone – all features necessary for the AI assistant to perform its tasks.
“This difference in what type of component loads the Gemini app is the line between by-design behavior and a security flaw,” Unit 42 said. “An extension influencing a website is expected. However, an extension influencing a component that is baked into the browser is a serious security risk.”
New research has found that Google Cloud API keys, typically designated as project identifiers for billing purposes, could be abused to authenticate to sensitive Gemini endpoints and access private data.
The findings come from Truffle Security, which discovered nearly 3,000 Google API keys (identified by the prefix “AIza”) embedded in client-side code to provide Google-related services like embedded maps on websites.
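Truffle Security identified candidate keys by their characteristic “AIza” prefix. A minimal sketch of that kind of scan follows; the regex is the common heuristic secret scanners use for Google API keys, not Truffle’s exact tooling, and it can produce false positives on similar-looking strings.

```python
import re

# Google API keys share a recognizable shape: the literal prefix "AIza"
# followed by 35 URL-safe characters (39 characters total).
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return candidate Google API keys embedded in a blob of text,
    such as a page's client-side JavaScript."""
    return GOOGLE_API_KEY_RE.findall(text)

if __name__ == "__main__":
    page = 'src="https://maps.googleapis.com/maps/api/js?key=' + "AIzaSy" + "A" * 33 + '"'
    print(find_google_api_keys(page))
```

Running a scanner like this against your own deployed JavaScript and public repositories is a quick first pass before deciding which keys need rotation.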
“With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account,” security researcher Joe Leon said, adding the keys “now also authenticate to Gemini even though they were never intended for it.”
The problem occurs when users enable the Gemini API (i.e., the Generative Language API) on a Google Cloud project, causing every existing API key in that project, including those exposed in client-side JavaScript, to silently gain access to Gemini endpoints without any warning or notice.
This effectively allows any attacker who scrapes websites to get hold of such API keys and use them for nefarious purposes and quota theft, including accessing sensitive files via the /files and /cachedContents endpoints, as well as making Gemini API calls, racking up huge bills for the victims.
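For defenders, those same endpoints can be used to audit a key they own: an authenticated GET that returns HTTP 200 means the key is a live Gemini credential. A hedged sketch that only constructs the probe URLs is below; the base path is the public Generative Language API, and the requests should be sent only against keys you control.

```python
from urllib.parse import urlencode

# Public base path of Google's Generative Language (Gemini) API.
BASE = "https://generativelanguage.googleapis.com/v1beta"

def probe_urls(api_key: str) -> dict[str, str]:
    """Build GET URLs for checking whether a key authenticates to
    Gemini endpoints. A 200 response means the key is live; a 403
    typically means the API is disabled or the key is restricted."""
    endpoints = ["models", "files", "cachedContents"]
    return {e: f"{BASE}/{e}?{urlencode({'key': api_key})}" for e in endpoints}

if __name__ == "__main__":
    for name, url in probe_urls("AIzaExampleKeyYouOwn").items():
        print(name, "->", url)
```

Applying API restrictions to each key in the Cloud console, so it can call only the APIs it was issued for, closes the gap regardless of which APIs are later enabled on the project.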
In addition, Truffle Security found that creating a new API key in Google Cloud defaults to “Unrestricted,” meaning it’s applicable for every enabled API in the project, including Gemini.
“The result: thousands of API keys that were deployed as benign billing tokens are now live Gemini credentials sitting on the public internet,” Leon said. In all, the company said it found 2,863 live keys accessible on the public internet, including a website associated with Google.
The disclosure comes as Quokka published a similar report, finding over 35,000 unique Google API keys embedded in its scan of 250,000 Android apps.
“Beyond potential cost abuse through automated LLM requests, organizations must also consider how AI-enabled endpoints might interact with prompts, generated content, or connected cloud services in ways that expand the blast radius of a compromised key,” the mobile security company said.
“Even if no direct customer data is accessible, the combination of inference access, quota consumption, and possible integration with broader Google Cloud resources creates a risk profile that is materially different from the original billing-identifier model developers relied upon.”
Although the behavior was initially deemed intended, Google has since stepped in to address the problem.
“We are aware of this report and have worked with the researchers to address the issue,” a Google spokesperson told The Hacker News via email. “Protecting our users’ data and infrastructure is our top priority. We have already implemented proactive measures to detect and block leaked API keys that attempt to access the Gemini API.”
It’s currently not known if this issue was ever exploited in the wild. However, in a Reddit post published two days ago, a user claimed a “stolen” Google Cloud API Key resulted in $82,314.44 in charges between February 11 and 12, 2026, up from a regular spend of $180 per month.
We have reached out to Google for further comment, and we will update the story if we hear back.
Users who have set up Google Cloud projects are advised to review their APIs and services and check whether AI-related APIs are enabled. If they are, and the associated keys are publicly exposed (either in client-side JavaScript or checked into a public repository), the keys should be rotated.
“Start with your oldest keys first,” Truffle Security said. “Those are the most likely to have been deployed publicly under the old guidance that API keys are safe to share, and then retroactively gained Gemini privileges when someone on your team enabled the API.”
“This is a great example of how risk is dynamic, and how APIs can be over-permissioned after the fact,” Tim Erlin, security strategist at Wallarm, said in a statement. “Security testing, vulnerability scanning, and other assessments must be continuous.”
“APIs are tricky in particular because changes in their operations or the data they can access aren’t necessarily vulnerabilities, but they can directly increase risk. The adoption of AI running on these APIs, and using them, only accelerates the problem. Finding vulnerabilities isn’t really enough for APIs. Organizations have to profile behavior and data access, identifying anomalies and actively blocking malicious activity.”
OpenClaw has fixed a high-severity security issue that, if successfully exploited, could have allowed a malicious website to connect to a locally running artificial intelligence (AI) agent and take over control.
“Our vulnerability lives in the core system itself – no plugins, no marketplace, no user-installed extensions – just the bare OpenClaw gateway, running exactly as documented,” Oasis Security said in a report published this week.
The flaw has been codenamed ClawJacked by the cybersecurity company.
The attack assumes the following threat model: A developer has OpenClaw set up and running on their laptop, with its gateway, a local WebSocket server, bound to localhost and protected by a password. The attack kicks in when the developer lands on an attacker-controlled website through social engineering or some other means.
The infection sequence then follows the steps below –
Malicious JavaScript on the web page opens a WebSocket connection to localhost on the OpenClaw gateway port.
The script brute-forces the gateway password by taking advantage of a missing rate-limiting mechanism.
After successfully authenticating with admin-level permissions, the script stealthily registers itself as a trusted device, which the gateway auto-approves without any user prompt.
The attacker gains complete control over the AI agent, allowing them to interact with it, dump configuration data, enumerate connected nodes, and read application logs.
“Any website you visit can open one to your localhost. Unlike regular HTTP requests, the browser doesn’t block these cross-origin connections,” Oasis Security said. “So while you’re browsing any website, JavaScript running on that page can silently open a connection to your local OpenClaw gateway. The user sees nothing.”
“That misplaced trust has real consequences. The gateway relaxes several security mechanisms for local connections – including silently approving new device registrations without prompting the user. Normally, when a new device connects, the user must confirm the pairing. From localhost, it’s automatic.”
Following responsible disclosure, OpenClaw pushed a fix in less than 24 hours with version 2026.2.25 released on February 26, 2026. Users are advised to apply the latest updates as soon as possible, periodically audit access granted to AI agents, and enforce appropriate governance controls for non-human (aka agentic) identities.
The development comes amid broader security scrutiny of the OpenClaw ecosystem, stemming primarily from the fact that AI agents hold entrenched access to disparate systems and the authority to execute tasks across enterprise tools, leading to a significantly larger blast radius should they be compromised.
Reports from Bitsight and NeuralTrust have detailed how OpenClaw instances left exposed to the internet present an expanded attack surface, with each integrated service further broadening the blast radius. The agent itself can be turned into an attack weapon by embedding prompt injections in content it processes (e.g., an email or a Slack message) to execute malicious actions.
The disclosure comes as OpenClaw also patched a log poisoning vulnerability that allowed attackers to write malicious content to log files via WebSocket requests to a publicly accessible instance on TCP port 18789.
Since the agent reads its own logs to troubleshoot certain tasks, the security loophole could be abused by a threat actor to embed indirect prompt injections, leading to unintended consequences. The issue was addressed in version 2026.2.13, which was shipped on February 14, 2026.
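A common mitigation pattern for this class of indirect prompt injection is to sanitize log excerpts and wrap them in explicit delimiters before the agent reasons over them, so the model can be instructed to treat the contents strictly as data. The following is a generic sketch, not OpenClaw’s actual fix:

```python
import re

# ANSI escapes and control characters are common vehicles for hiding
# instructions in log lines; strip them before the agent sees the text.
ANSI_ESCAPE_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")
CONTROL_RE = re.compile(r"[\x00-\x08\x0b-\x1f\x7f]")  # keeps \t and \n

def quarantine_log(text: str) -> str:
    """Sanitize a log excerpt and wrap it in explicit delimiters so the
    model can be told the contents are untrusted data, never instructions."""
    cleaned = CONTROL_RE.sub("", ANSI_ESCAPE_RE.sub("", text))
    return "<untrusted-log>\n" + cleaned + "\n</untrusted-log>"
```

Delimiting alone does not make injection impossible, which is why the quoted analysis emphasizes treating log text as untrusted input throughout the agent’s reasoning, not just at ingestion.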
“If the injected text is interpreted as meaningful operational information rather than untrusted input, it could influence decisions, suggestions, or automated actions,” Eye Security said. “The impact would therefore not be ‘instant takeover,’ but rather: manipulation of agent reasoning, influencing troubleshooting steps, potential data disclosure if the agent is guided to reveal context, and indirect misuse of connected integrations.”
“As AI agent frameworks become more prevalent in enterprise environments, security analysis must evolve to address both traditional vulnerabilities and AI-specific attack surfaces,” Endor Labs said.
Elsewhere, new research has demonstrated that malicious skills uploaded to ClawHub, an open marketplace for downloading OpenClaw skills, are being used as conduits to deliver a new variant of Atomic Stealer, a macOS information stealer developed and rented by a cybercrime actor known as Cookie Spider.
“The infection chain begins with a normal SKILL.md that installs a prerequisite,” Trend Micro said. “The skill appears harmless on the surface and was even labeled as benign on VirusTotal. OpenClaw then goes to the website, fetches the installation instructions, and proceeds with the installation if the LLM decides to follow the instructions.”
The instructions hosted on the website “openclawcli.vercel[.]app” include a malicious command to download a stealer payload from an external server (“91.92.242[.]30”) and run it.
Threat hunters have also flagged a new malware delivery campaign in which a threat actor operating under the name @liuhui1010 has been observed leaving comments on legitimate skill listing pages, urging users to run a supplied command in the Terminal app if the skill “doesn’t work on macOS.”
The command is designed to retrieve Atomic Stealer from “91.92.242[.]30,” an IP address previously documented by Koi Security and OpenSourceMalware for distributing the same malware via malicious skills uploaded to ClawHub.
What’s more, a recent analysis of 3,505 ClawHub skills by AI security company Straiker has uncovered no less than 71 malicious ones, some of which posed as legitimate cryptocurrency tools but contained hidden functionality to redirect funds to threat actor-controlled wallets.
Two other skills, bob-p2p-beta and runware, have been tied to a multi-layered cryptocurrency scam that employs an agent-to-agent attack chain targeting the AI agent ecosystem. Both are attributed to a threat actor who operates under the aliases “26medias” on ClawHub and “BobVonNeumann” on Moltbook and X.
“BobVonNeumann presents itself as an AI agent on Moltbook, a social network designed for agents to interact with each other,” researchers Yash Somalkar and Dan Regalado said. “From that position, it promotes its own malicious skills directly to other agents, exploiting the trust that agents are designed to extend to each other by default. It’s a supply chain attack with a social engineering layer built on top.”
What bob-p2p-beta does, however, is instruct other AI agents to store Solana wallet private keys in plaintext, purchase worthless $BOB tokens on pump.fun, and route all payments through an attacker-controlled infrastructure. The second skill claims to offer a benign image generation tool to build the developer’s credibility.
Given that ClawHub is becoming a new fertile ground for attackers, users are advised to audit skills before installing them, avoid providing credentials and keys unless it’s essential, and monitor skill behavior.
The security risks associated with self-hosted agent runtimes like OpenClaw have also prompted Microsoft to issue an advisory, warning that unguarded deployment could pave the way for credential exposure/exfiltration, memory modification, and host compromise if the agent can be tricked into retrieving and running malicious code either through poisoned skills or prompt injections.
“Because of these characteristics, OpenClaw should be treated as untrusted code execution with persistent credentials,” the Microsoft Defender Security Research Team said. “It is not appropriate to run on a standard personal or enterprise workstation.”
“If an organization determines that OpenClaw must be evaluated, it should be deployed only in a fully isolated environment such as a dedicated virtual machine or separate physical system. The runtime should use dedicated, non-privileged credentials and access only non-sensitive data. Continuous monitoring and a rebuild plan should be part of the operating model.”
Claude Code Security made a big splash when it was introduced last week, but it may be too early to call it as disruptive as the markets suggested.
Anthropic unveiled Claude Code Security on Feb. 20, built into the web version of agentic AI coding tool Claude Code. Available now in research preview, the new tool scans codebases for vulnerabilities and suggests patches and fixes categorized by priority level. Anthropic said the tool makes recommendations only for human review, so developers are always in control when deciding to ship a patch Claude Code creates.
Somewhat limited in scope, Claude Code Security is not a one-and-done security solution and still requires developers at the helm. But its debut had a notable impact on share prices in the security market. CrowdStrike’s stock dropped from about $420 a share on Feb. 19 to less than $350 on Feb. 23, though, as of this writing, it has partially recovered to $380. JFrog saw an even more aggressive dip during the same time period, from about $50 a share to $35, though it has also partially recovered to about $42, as of this writing.
Zscaler, Datadog, Okta, Fortinet, SentinelOne, Palo Alto Networks, and others saw varying share price declines in the wake of a coding tool that had neither been fully launched nor fully tested by the larger community.
As markets have a tendency toward knee-jerk reactions from time to time, it’s hard to say exactly how disruptive this tool and others like it will be for the security market. For now, this level of fervor appears to be premature.
Claude Code Security’s Promising Tech
Claude Code Security makes big promises. Built from more than a year of security research, Anthropic said in its blog post that “Claude Code Security reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss.”
Each finding goes through a multistage verification process that aims to weed out false positives, and flaws are packaged up into an easy-to-read dashboard. The tool also has “confidence ratings” to account for the nuances that AI models can’t always pick up on. And through Claude Opus 4.6, released earlier this month, the blog claimed Anthropic “found over 500 vulnerabilities in production open-source codebases — bugs that had gone undetected for decades, despite years of expert review.”
There’s also some promising data regarding the use of large language models (LLMs) to find and remediate vulnerabilities. At DEF CON 33 last summer, DARPA hosted the finals of its two-year AI Cyber Challenge (AIxCC), in which teams used AI technology to secure the open source software underlying critical infrastructure. Much of the work during the challenge involved using cyber reasoning systems to fix vulnerabilities in open source projects.
Justin Cappos, a professor in the Computer Science and Engineering department at New York University, as well as a longtime developer of open source software, helped develop the format of the challenge. He tells Dark Reading that many people, including some of those who won the contest, “did not expect it to go as well as it did.”
“They basically thought these models would find a few minor types of bugs but probably struggle with creating patches, but that’s not actually what happened,” Cappos says. “What happened is that they were able to find quite a lot of fairly complicated issues and actually create semi-reasonable patches for a lot of them, including a lot of issues that weren’t known at the time, that weren’t artificial things that the conference organizers put in.”
Broadly, Cappos says he’s mildly positive that there are good things that can happen with AI coding security tools like this. He warned that it’s still early days for these tools, or as he called it, the “Will Smith eating spaghetti” phase.
Cappos, who maintains multiple open source projects, says he and others have begun to receive bug reports from AI coding tools. While some are good from a helpfulness angle, many are also false positives or make suggestions that aren’t useful or practical in real-life development environments. “There’s a lot of junk,” he notes, with understatement.
Melinda Marks, practice director of cybersecurity at analyst firm Omdia (which, like Dark Reading, is owned by Informa TechTarget), says it’s interesting to see security vendors take a hit, but it doesn’t mean agentic AI solutions will take over security wholesale.
For example, she called attention to three critical vulnerabilities in Claude Code that Check Point Research discovered and reported on this week. While Claude Code is powerful and has the potential to make software development easily accessible to anyone with an idea, Marks says these vulnerabilities “highlight the importance of security when utilizing these types of coding tools, besides using the agentic security capabilities.”
“Claude Code Security is super exciting, as we need to apply AI on the defender side because it is the only way for security teams to keep up with the scale of development, especially with AI adoption. Our research shows that security teams are using or want to use agentic AI to scale security to stay ahead of threats and attacks,” she says. “For companies wanting to secure usage of AI, they would likely still need third-party security vendor tools to efficiently mitigate risk associated with AI adoption.”
Eran Kinsbruner, VP of product marketing at application security vendor Checkmarx, says Claude Code Security marks “meaningful progress” in bringing security awareness closer to code creation. That said, it’s not a one-size-fits-all application security solution for the complex environments organizations deal with these days. “Safer code generation alone doesn’t equate to comprehensive software security,” he adds.
“The idea of streamlining patching through an integrated, developer-friendly interface is understandably appealing. Anything that reduces friction between identifying and fixing vulnerabilities can help organizations move faster,” Kinsbruner says. “However, this speed comes at a cost in terms of literal dollars. Whereas AppSec solutions are built for ongoing scanning, an LLM-based solution like Claude Code Security is prompted to conduct point-in-time checks that add up across hundreds if not thousands of repositories.”
Anthropic did not respond to Dark Reading’s request for comment.
A large fintech company is pinning the blame for its recent data breach on its firewall vendor and suing the vendor for damages. It’s an approach some organizations have taken in recent years, and it carries significant implications for the cybersecurity industry.
The plaintiff, Marquis, provides marketing and compliance solutions to more than 700 banks and credit unions, according to its website. On Aug. 14, a ransomware actor gained access to Marquis’s IT network and client data, including personally identifying information (PII) belonging to customers of some of its clients. Recent news reports have suggested that more than 780,000 people were impacted, though Dark Reading could not independently confirm that figure.
For a while, Marquis wasn’t aware of how hackers were able to get into its systems. Meanwhile, on Sept. 17, its firewall vendor, SonicWall, revealed that it had fallen victim to its own breach. Attackers gained access to SonicWall’s cloud backup service and stole customers’ firewall configuration files, which would have made for easy follow-on attacks against those customers. At the time, the security company claimed that only 5% of its customers were affected. On Oct. 8, though, it admitted that, in fact, all of its customers were impacted.
And Marquis took that personally. In a complaint filed with the US District Court for the Eastern District of Texas on Feb. 23, the company laid the blame for its attack on SonicWall and is now seeking damages.
In response to an inquiry from Dark Reading, Marquis shared a press release claiming that “Not only did SonicWall fail to disclose its compromise promptly, but the company assured Marquis that its firewall protection was not affected for a period of several weeks,” and “because SonicWall failed to timely disclose the full scope and severity of its breach, Marquis was prevented from mitigating the harm that resulted from the SonicWall breach.”
Meanwhile, a SonicWall spokesperson told Dark Reading that “At this time, we have not identified any technical evidence establishing a link between these events. Unfortunately, the customer filed a lawsuit without providing documentation to substantiate its allegations in advance. We are reviewing these claims now and are prepared to vigorously defend any unsubstantiated claims.”
Details aside, the lawsuit raises an important question: Who should bear the blame for a third-party data breach?
“Historically, most breach-related lawsuits have flowed from consumers or regulators toward the breached company, but this case highlights a growing shift: enterprises turning around and suing their cybersecurity vendors, managed service providers, and software suppliers for contribution, indemnification, or outright negligence,” says Bradley partner Erin Jane Illman. “That fundamentally changes the risk calculus for the industry. Vendors are no longer just technical partners — they are potential co-defendants.”
Though it’s exceedingly rare, relative to how often companies suffer data breaches through third-party vendors, Marquis isn’t the first company to try this course of action.
In 2018, for instance, a breach at email security vendor Barracuda Networks led to the exposure of personal health information (PHI) belonging to one of its clients, Zoll Services. Zoll sued Barracuda, but the US District Court for the District of Massachusetts ruled in Barracuda’s favor. Just a few months ago, in November 2025, Zoll’s appeal was also rejected.
There have also been variations on this theme. In 2014, a handful of banks pursued two separate lawsuits not only against Target — for its now infamous point-of-sale (PoS) breach — but also Trustwave, which apparently co-signed Target’s IT security just before the incident occurred. Those cases were withdrawn or otherwise petered out.
Jackson Stephens, senior cybersecurity counsel for Galactic Advisors, points to the 2023 MOVEit breach as sparking a flurry of legal action.
“That breach resulted in dozens of lawsuits, many of which are still pending in court,” he says. “Suits against managed service providers (MSPs) and cybersecurity vendors are becoming more common,” he adds.
In the case of Marquis and SonicWall, he says, “these cases rarely go to trial — I suspect that the contract requires arbitration or mediation, and like most suits, ending in an undisclosed settlement.” But, he adds, a company like SonicWall could face any number of other legal challenges in the future, like “if SonicWall’s business customers had personal data leaked, those business customers could be sued by a class action of affected individuals. Those business customers will seek to shift the blame onto SonicWall.” Alternatively, SonicWall could be subject to enforcement actions from any number of government authorities.
Legal Risk to Cybersecurity Providers
Bradley’s Illman worries that Marquis might make an attractive example for other breach victims to follow. “This environment creates strategic incentives for executives,” she explains. “Faced with shareholder suits or regulatory scrutiny after a breach, leadership may be more inclined to shift blame downstream — arguing that a vendor’s tool failed, a patch was defective, or a managed service provider missed indicators of compromise.”
She adds, “That doesn’t eliminate executive responsibility, but it does open a new front of cross-claims and indemnity fights behind the scenes.”
The criteria for negligence remain a moving target. “Plaintiffs are probing theories like misrepresentation, failure to warn, negligent design, or overstated security claims to pierce those protections,” says Illman. And beyond that, “courts may begin to scrutinize how ‘reasonable cybersecurity’ is defined for a professional security provider. When a company sells security as its core product, the standard of care it’s held to could be materially higher than that of an ordinary enterprise IT department.”
Of course, there’s another way to look at a case like Marquis v. SonicWall. Organizations choose their vendors, and have the power to shape the terms of those relationships in contracts, and over time.
“It’s not uncommon for companies to engage vendors without doing appropriate due diligence to assess the cybersecurity of their vendors,” says Joseph Lazzarotti, an attorney with JacksonLewis. It’s also common, he notes, to have service level agreements (SLAs) which don’t adequately account for worst-case scenarios, like when the vendor is the cause of an attack.
If organizations are as careless in hiring vendors as they claim vendors are in protecting them, Lazzarotti says, “it could result in claims that the company was negligent in selecting a vendor and/or monitoring that vendor, resulting in exposure of the company’s data or that of its consumers.”
This article was updated at 1:20 ET on Feb. 27, with statements from both Marquis and SonicWall.
Cybersecurity experts are calling for a major shift in how companies handle data breaches and security failures, arguing that greater transparency, including disclosure of specific details about how and why incidents occur, is essential if the industry hopes to effectively reduce cyber-risk.
At the upcoming RSAC Conference, threat research experts Adam Shostack and Adrian Sanabria will make the case for greater incident transparency and the need for structured feedback loops in cybersecurity, in a session aptly titled “A Failure Is a Terrible Thing to Waste: The Case for Breach Transparency,” scheduled for Monday, March 23.
Shostack, founder of security consultancy Shostack and Associates, and Sanabria, founder and principal researcher at The Defender’s Initiative, posit that the cybersecurity field is sorely lacking in formal processes for providing feedback after a major data breach or other type of security incident. This type of feedback, they argue, is critical to how other safety-motivated industries such as aviation, medicine, and public health operate.
In an interview with Dark Reading, the researchers cite incidents such as plane crashes or patient deaths in medicine, which are heavily scrutinized, with measures often being put in place to avoid a repeat of the scenario.
In the majority of cases, however, the cybersecurity industry treats breach investigations as legal liabilities or shameful incidents to be hidden, rather than as lessons to share from which other professionals can learn.
Security Culture Change Needed
This culture is contrary to facilitating an environment in which organizations and professionals can take away valuable insight from security breaches that could prevent future incidents, Shostack says.
Instead of hiding most of the details or deflecting blame, the guiding principle of all breaches should be, “If you’ve made a mistake, admit you’ve made a mistake and tell us what happened,” he tells Dark Reading.
However, organizations often are viewed as negligent after a security incident, even though research suggests that many successful attacks involve chains of small failures — missing patches, misconfigured tools, weak monitoring, inadequate testing — rather than comprehensive incompetency, Sanabria says.
“It’s rarely one thing,” he tells Dark Reading. “There are dozens of controls that should have stopped the attacker and didn’t.”
If organizations disclose these small details of incidents instead of hiding them for fear of shame or blame, it can benefit the industry as a whole and help prevent future breaches, Sanabria says.
Laws and policies for how data breaches are treated in the US vary from state to state and organization to organization, the researchers say. Publicly traded companies, for instance, are required to disclose major security incidents in SEC filings, but only if they have a material impact on the company.
There are a couple of key reasons the cybersecurity industry on the whole isn’t doing diligent, mandated post-mortems on major breaches and incidents. One is the legal ramifications for organizations that may be found to be at fault and thus liable, financially or otherwise, for the impact incidents cause, Shostack explains.
This points to a difference in organizational culture — more specifically, the difference between engineers and lawyers, he says. “Lawyers in their ethical code have an ethical requirement to zealously pursue the interest of their clients,” Shostack explains. “Engineers, in contrast, are required to consider things like public safety.”
Typically, when a cybersecurity incident happens, an organization’s lawyers warn the CEO not to talk about it, for fear that the company might get sued, he says. But this is contrary to “the normal way that engineering works,” Shostack says.
“This isn’t what we do when a bridge falls down, when an airplane falls out of the sky,” he says. “When we have any other technologically mediated system failure, we talk about what happened and we learn from it.”
With no formal governance over how organizations handle security incidents, this difference in culture continues to keep many details around breaches shrouded in mystery, Shostack says.
Another reason is the current lack of federal regulatory support or requirements for breach transparency. There was a short-lived federal effort, the Cyber Safety Review Board (CSRB), that aimed to create a model similar to the National Transportation Safety Board to investigate major cyber incidents and publish real-time feedback on them.
While the board did manage to issue several reports, the current Trump administration fired all of its members soon after taking office in January, while the CSRB was in the middle of investigating the breach of numerous US telecommunications companies by the China-backed APT Salt Typhoon. While the board theoretically still exists, no one sits on it, and thus it’s not currently doing the work it was formed to do.
The Data Is Out There
That’s not to say there isn’t ample, publicly available data on major security breaches and incidents if one knows where to look. In fact, Sanabria has spent months reviewing public breach documents — congressional reports, regulatory filings, lawsuits, and after-action reports — and says “there is a pile of gold” in breach data that people tend to miss.
“There are a lot of extractable lessons that you can find that are literally sitting there waiting to be picked up to be seen,” he says.
The challenge in analyzing this data is that breach narratives tend to get both overlooked and oversimplified. In the 2017 breach at Equifax, headlines focused on an unpatched Apache Struts vulnerability as the cause of the breach. But later congressional materials revealed deeper cultural and process failures, including breakdowns in internal communication and testing.
“What everyone remembers is Day One,” Sanabria says. “The deeper lessons often come 18 months later.” By that time, however, the breach is old news, and rarely do people “get to page 171” of a report summarizing the failures of the breach, he says.
There are examples of organizations taking it upon themselves to release detailed accounts of breaches for the public good. For example, after a ransomware attack in 2023, the British Library published a detailed after-action report acknowledging mistakes and outlining lessons learned. In Canada, a federal privacy commissioner released findings into a breach of the PowerSchool education technology platform and its effect on various educational institutions, providing insights into systemic failures.
Here at home, the US Federal Trade Commission also has published detailed complaints in breach-related cases. However, rarely do these public resources offer comprehensive breach details that provide enough feedback to help cybersecurity professionals learn from others’ mistakes, Sanabria says. “None of these sources go deep,” he tells Dark Reading. “They can’t tell you the narrative or the how of the breach.”
The Way Forward
Without better data and empirical evidence to inform breach prevention, the industry risks investing in what Sanabria calls “busywork generators” — tools and compliance activities that may not significantly reduce real-world risk.
“Every other industry that cares about safety builds feedback loops so they can get better,” Sanabria says. “Reducing risk is more of a gamble without data.”
Indeed, without formalized transparency and governance, the researchers warn, the industry will struggle to determine whether organizations are truly improving their prevention of and response to security incidents.
To promote breach transparency in a way that fosters improvements in cybersecurity, the pair support building structured, institutionalized mechanisms for it, potentially including anonymized reporting, delayed public disclosures, or regulatory safe harbors for good-faith transparency. The goal, they say, is not public shaming but collective learning.
“Modern engineering is built on studying failure,” Shostack says. “We don’t have enough of that in cybersecurity.”