Author: Mark

  • SLH Offers $500–$1,000 Per Call to Recruit Women for IT Help Desk Vishing Attacks

    SLH Offers $500–$1,000 Per Call to Recruit Women for IT Help Desk Vishing Attacks

    Ravie LakshmananFeb 25, 2026Social Engineering / Cloud Security

    The notorious cybercrime collective known as Scattered LAPSUS$ Hunters (SLH) has been observed offering financial incentives to recruit women to pull off social engineering attacks.

    The idea is to hire them for voice phishing campaigns targeting IT help desks, Dataminr said in a new threat brief. The group is said to be offering anywhere between $500 and $1,000 upfront per call, in addition to providing them with the necessary pre-written scripts to carry out the attack.

    “SLH is diversifying its social engineering pool by specifically recruiting women to conduct vishing attacks, likely to increase the success rate of help desk impersonation,” the threat intelligence firm said.

    A high-profile cybercrime supergroup comprising LAPSUS$, Scattered Spider, and ShinyHunters, SLH has a record of engaging in advanced social engineering attacks to sidestep multi-factor authentication (MFA) through techniques like MFA prompt bombing and SIM swapping. 

    The group’s modus operandi also involves targeting help desks and call centers to breach companies by posing as employees and convincing them to reset a password or install a remote monitoring and management (RMM) tool that grants them remote access. Once initial access is obtained, Scattered Spider has been observed moving laterally to virtualized environments, escalating privileges, and exfiltrating sensitive corporate data.

    Some of these attacks have further led to the deployment of ransomware. Another hallmark of these attacks is the use of legitimate services and residential proxy networks (e.g., Luminati and OxyLabs) to blend in and evade detection. Scattered Spider actors have used various tunneling tools like Ngrok, Teleport, and Pinggy, as well as free file-sharing services such as file.io, gofile.io, mega.nz, and transfer.sh.

    SLH’s Telegram post to recruit women

    In a report published earlier this month, Palo Alto Networks Unit 42, which is tracking Scattered Spider under the moniker Muddled Libra, described the threat actor as “highly proficient at exploiting human psychology” by impersonating employees to attempt password and multi-factor authentication (MFA) resets.

    Scattered Spider attack chain

    In at least one case investigated by the cybersecurity company in September 2025, Scattered Spider is said to have created and utilized a virtual machine (VM) after obtaining privileged credentials by calling the IT help desk and then used it to conduct reconnaissance (e.g., Active Directory enumeration) and attempt to exfiltrate Outlook mailbox files and data downloaded from the target’s Snowflake database.

    “While focusing on identity compromise and social engineering, this threat actor leverages legitimate tools and existing infrastructure to blend in,” Unit 42 said. “They operate quietly and maintain persistence.”

    The cybersecurity company also noted that Scattered Spider has an “extensive history” of targeting Microsoft Azure environments using the Graph API to facilitate access to Azure cloud resources. Also put to use by the group are cloud enumeration tools such as ADRecon for Active Directory reconnaissance.

    With social engineering emerging as the primary entry point for the cybercrime group, organizations are advised to be on alert and train IT help desk and support personnel to watch out for pre-written scripts and polished voice impersonation, enforce strict identity verification, harden MFA policies by shifting away from SMS-based authentication, and audit logs for new user creation or administrative privilege escalation following help desk interactions.

    “This recruitment drive represents a calculated evolution in SLH’s tactics,” Dataminr said. “By specifically seeking female voices, the group likely aims to bypass the ‘traditional’ profiles of attackers that IT help desk staff may be trained to identify, thereby increasing the effectiveness of their impersonation efforts.”


    Source: thehackernews.com…

  • Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration

    Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration

    Ravie LakshmananFeb 25, 2026Artificial Intelligence / Vulnerability

    Cybersecurity researchers have disclosed multiple security vulnerabilities in Anthropic’s Claude Code, an artificial intelligence (AI)-powered coding assistant, that could result in remote code execution and theft of API credentials.

    “The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables – executing arbitrary shell commands and exfiltrating Anthropic API keys when users clone and open untrusted repositories,” Check Point Research said in a report shared with The Hacker News.

    The identified shortcomings fall under three broad categories –

    • No CVE (CVSS score: 8.7) – A code injection vulnerability stemming from a user consent bypass when starting Claude Code in a new directory that could result in arbitrary code execution without additional confirmation via untrusted project hooks defined in .claude/settings.json. (Fixed in version 1.0.87 in September 2025)
    • CVE-2025-59536 (CVSS score: 8.7) – A code injection vulnerability that allows execution of arbitrary shell commands automatically upon tool initialization when a user starts Claude Code in an untrusted directory. (Fixed in version 1.0.111 in October 2025)
    • CVE-2026-21852 (CVSS score: 5.3) – An information disclosure vulnerability in Claude Code’s project-load flow that allows a malicious repository to exfiltrate data, including Anthropic API keys. (Fixed in version 2.0.65 in January 2026)

    “If a user started Claude Code in an attacker-controller repository, and the repository included a settings file that set ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code would issue API requests before showing the trust prompt, including potentially leaking the user’s API keys,” Anthropic said in an advisory for CVE-2026-21852.

    In other words, simply opening a crafted repository is enough to exfiltrate a developer’s active API key, redirect authenticated API traffic to external infrastructure, and capture credentials. This, in turn, can permit the attacker to burrow deeper into the victim’s AI infrastructure.

    This could potentially involve accessing shared project files, modifying/deleting cloud-stored data, uploading malicious content, and even generating unexpected API costs.

    Successful exploitation of the first vulnerability could trigger stealthy execution on a developer’s machine without any additional interaction beyond launching the project.

    CVE-2025-59536 also achieves a similar goal, the main difference being that repository-defined configurations defined through .mcp.json and claude/settings.json file could be exploited by an attacker to override explicit user approval prior to interacting with external tools and services through the Model Context Protocol (MCP). This is achieved by setting the “enableAllProjectMcpServers” option to true.

    “As AI-powered tools gain the ability to execute commands, initialize external integrations, and initiate network communication autonomously, configuration files effectively become part of the execution layer,” Check Point said. “What was once considered operational context now directly influences system behavior.”

    “This fundamentally alters the threat model. The risk is no longer limited to running untrusted code – it now extends to opening untrusted projects. In AI-driven development environments, the supply chain begins not only with source code, but with the automation layers surrounding it.”


    Source: thehackernews.com…

  • CISA Confirms Active Exploitation of FileZen CVE-2026-25108 Vulnerability

    CISA Confirms Active Exploitation of FileZen CVE-2026-25108 Vulnerability

    Ravie LakshmananFeb 25, 2026Vulnerability / Software Security

    The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added a recently disclosed vulnerability in FileZen to its Known Exploited Vulnerabilities (KEV) catalog, citing evidence of active exploitation.

    The vulnerability, tracked as CVE-2026-25108 (CVSS v4 score: 8.7), is a case of operating system (OS) command injection that could allow an authenticated user to execute arbitrary commands via specially crafted HTTP requests.

    “Soliton Systems K.K FileZen contains an OS command injection vulnerability when a user logs-in to the affected product and sends a specially crafted HTTP request,” CISA said.

    According to the Japan Vulnerability Notes (JVN), the vulnerability affects the following versions of the file transfer product –

    • Versions 4.2.1 to 4.2.8
    • Versions 5.0.0 to 5.0.10

    Soliton noted in its advisory that successful exploitation of the issue is only possible when FileZen Antivirus Check Option is enabled, adding it has “received at least one report of damage caused by the exploitation of this vulnerability.”

    The Japanese technology company also revealed that a bad actor must sign in to the web interface with general user privileges to be able to pull off an attack. Users are advised to update to version 5.0.11 or later to mitigate the threat.

    “If you have been attacked or suspect that you have been victimized by this vulnerability, please consider not only updating to V5.0.11 or later, but also changing all user passwords as a precaution, as an attacker can log on with at least one real account,” it added.

    Federal Civilian Executive Branch (FCEB) agencies are advised to apply the necessary fixes by March 17, 2026, to secure their networks.


    Source: thehackernews.com…

  • SolarWinds Patches 4 Critical Serv-U 15.5 Flaws Allowing Root Code Execution

    SolarWinds Patches 4 Critical Serv-U 15.5 Flaws Allowing Root Code Execution

    Ravie LakshmananFeb 25, 2026Vulnerability / Windows Security

    SolarWinds has released updates to address four critical security flaws in its Serv-U file transfer software that, if successfully exploited, could result in remote code execution.

    The vulnerabilities, all rated 9.1 on the CVSS scoring system, are listed below –

    • CVE-2025-40538 – A broken access control vulnerability that allows an attacker to create a system admin user and execute arbitrary code as root via domain admin or group admin privileges.
    • CVE-2025-40539 – A type confusion vulnerability that allows an attacker to execute arbitrary native code as root.
    • CVE-2025-40540 – A type confusion vulnerability that allows an attacker to execute arbitrary native code as root.
    • CVE-2025-40541 – An insecure direct object reference (IDOR) vulnerability that allows an attacker to execute native code as root.

    SolarWinds noted that the vulnerabilities require administrative privileges for successful exploitation. It also said that they carry a medium security risk on Windows deployments as the services “frequently run under less-privileged service accounts by default.”

    The four shortcomings affect SolarWinds Serv-U version 15.5. They have been addressed in SolarWinds Serv-U version 15.5.4.

    While SolarWinds makes no mention of the security flaws being exploited in the wild, prior vulnerabilities in the software (CVE-2021-35211, CVE-2021-35247, and CVE-2024-28995) have been exploited by malicious actors, including by a China-based hacking group tracked as Storm-0322 (formerly DEV-0322).


    Source: thehackernews.com…

  • Defense Contractor Employee Jailed for Selling 8 Zero-Days to Russian Broker

    Defense Contractor Employee Jailed for Selling 8 Zero-Days to Russian Broker

    Ravie LakshmananFeb 25, 2026 Zero Day / National Security

    A 39-year-old Australian national who was previously employed at U.S. defense contractor L3Harris has been sentenced to a little over seven years in prison for selling eight zero-day exploits to Russian exploit broker Operation Zero in exchange for millions of dollars.

    Peter Williams pleaded guilty to two counts of theft of trade secrets in October 2025. In addition to the jail term, Williams has been ordered to serve three years of supervised release with special conditions, as well as forfeit illicit proceeds, including properties, clothing, jewelry, and luxury watches, purchased from the cryptocurrency payments he received in return for selling the exploits.

    The case’s connection to Operation Zero was disclosed by cybersecurity journalist Kim Zetter late last year. The nature of the exploits are presently unclear. But a sentencing memorandum published earlier this month revealed that the tools could have been “used against any manner of victim, civilian or military around the world, and engage in all manner of crime from cyber fraud, theft, and ransomware, to state directed spying and offensive cyber operations against military targets.”

    “Williams exploited his senior role at a U.S. defense contractor to enrich himself at the expense of the United States and his employer,” said Assistant Attorney General for National Security John A. Eisenberg. “The tools he compromised were intended to protect this Nation; instead, he auctioned them off to a Russian bidder.”

    According to U.S. Attorney Jeanine Pirro for the District of Columbia, Williams sold the trade secrets for up to $4 million in cryptocurrency. The exploit tools could have allowed Russia to access millions of digital devices, Pirro added.

    The theft of eight cyber-exploit components took place over a period of three years between 2022 and 2025. The zero-day exploits are designed to be sold exclusively to the U.S. government and select allies. The actions are estimated to have incurred L3Harris $35 million in financial losses.

    The U.S. State Department, in tandem, announced the designations of Operation Zero (aka Matrix LLC), along with Sergey Sergeyevich Zelenyuk and Special Technology Services LLC FZ (STS), under the Protecting American Intellectual Property Act (PAIPA) in connection with the trade secret theft.

    Zelenyuk is a Russian national and the director and owner of Operation Zero. He also established STS in the U.A.E. to conduct business with various countries in Asia and the Middle East and likely get around U.S. sanctions imposed on Russian bank accounts.

    The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) also sanctioned Zelenyuk, Operation Zero, STS, and four other associated individuals and entities for acquiring and distributing cyber tools harmful to U.S. national security. According to the Treasury, Operation Zero is said to have sold the tools acquired from Williams to at least one unauthorized user.

    Operation Zero has offered up to $4 million in bounties for Telegram exploits and $20 million for tools that could be used to break into Android and iPhone devices. The exploit broker is believed to have engaged in efforts to recruit hackers to support its activities and develop business relationships with foreign intelligence agencies through social media. It’s been active since at least 2021.

    “Zelenyuk and Operation Zero have stated that they will only sell the exploits they acquire to customers from non-NATO countries. Zelenyuk, through Operation Zero, has sought to sell exploits to foreign intelligence agencies,” the Treasury Department said.

    “Zelenyuk and Operation Zero have also sought to develop other cyber intelligence systems, including spyware and methods to extract personal identifying information and other sensitive data uploaded by users of artificial intelligence applications like large language models.”

    The names of the other sanctioned individuals and entities are listed below –

    • Marina Evgenyevna Vasanovich, Zelenyuk’s assistant
    • Azizjon Makhmudovich Mamashoyev and Oleg Vyacheslavovich Kucherov, for having had work relationships with Operation Zero (Kucherov is also suspected of being a member of the TrickBot cybercrime gang)
    • Advance Security Solutions, an exploit brokerage firm created by Mamashoyev that offers bounties for exploits for U.S.-built software

    “Peter Williams stole a U.S. defense contractor’s trade secrets about highly sensitive cyber capabilities and sold them to a broker whose clients include the Russian government, putting our national security and countless potential victims at risk,” said Assistant Director Roman Rozhavsky of the Federal Bureau of Investigation’s (FBI) Counterintelligence and Espionage Division.

    “Let this be a clear warning to all who consider placing greed over country: if you betray your position of trust and sell sensitive American technology to our foreign adversaries, the FBI will not rest until you’re brought to justice.”


    Source: thehackernews.com…

  • Manual Processes Are Putting National Security at Risk

    Manual Processes Are Putting National Security at Risk

    National Security at Risk

    Why automating sensitive data transfers is now a mission-critical priority

    More than half of national security organizations still rely on manual processes to transfer sensitive data, according to The CYBER360: Defending the Digital Battlespace report. This should alarm every defense and government leader because manual handling of sensitive data is not just inefficient, it is a systemic vulnerability. 

    Recent breaches in defense supply chains show how manual processes create exploitable gaps that adversaries can weaponize. This is not just a technical issue. It is a strategic challenge for every organization operating in contested domains, where speed and certainty define mission success.

    In an era defined by accelerating cyber threats and geopolitical tension, every second counts. Delays, errors, and gaps in control can cascade into consequences that compromise mission readiness, decision-making, and operational integrity. This is exactly what manual processes introduce: uncertainty in environments where certainty is non-negotiable. They create bottlenecks and increase the risk of human error. In short, they undermine the very principles of mission assurance: speed, accuracy, and trust.

    Adversaries know this. They exploit seams in data movement. Every manual step is a potential breach point. In a contested environment, these vulnerabilities are operational, not theoretical.

    Why Manual Persists

    If manual processes are so risky, why do they remain? The answer lies in a mix of technical, cultural, and organizational factors. 

    Legacy systems remain a major barrier. Many defense and government environments still run on infrastructure that predates modern automation capabilities. These systems were never designed for seamless integration with policy engines or encryption frameworks. Replacing them is costly and disruptive, so organizations layer manual steps as a workaround. 

    Procurement cycles compound the problem. Acquiring new technology in national security contexts is often slow and complex. Approval chains are long, requirements are rigid, and by the time a solution is deployed, the threat landscape has shifted. Leaders often adopt manual processes as a stopgap, but these temporary measures quickly become permanent habits.

    Cross-domain complexity adds another layer. Moving data between classification levels requires strict controls. Historically, these controls relied on human judgment to inspect and approve transfers. Automation was seen as too rigid for nuanced decisions. That perception persists even as modern solutions can enforce granular policies without sacrificing flexibility. 

    Culture plays a role as well. Trust in people runs deep in national security organizations. Manual handling feels tangible and controllable. Leaders and operators believe that human oversight reduces risk, even when evidence shows the opposite. This slows the adoption of automation. 

    In some cases, operators still print and hand-carry classified files because digital workflows are perceived as too risky. Regulatory inaction compounds this problem. Compliance frameworks often lag behind technology, reinforcing manual habits and slowing modernization efforts.

    Finally, there is a fear of disruption. Missions cannot pause for technology transitions. Leaders worry the automation will introduce delays or errors during rollout. They prefer the known imperfections of manual processes to the unknown risks of change. 

    These factors explain persistence, but they do not justify it. The environment has changed. Threats are faster, more sophisticated, and increasingly opportunistic.

    The Risk of Manual Handling

    1. Human error and variability: Sensitive data transfer should be consistent and precise. Manual steps introduce variance across teams and time. Even highly trained personnel face fatigue and workload pressure. Small errors can cascade into operational delays or unintended disclosures. Fatigue during high-tempo missions amplifies mistakes, and insider risk grows when oversight depends on trust alone.
    2. Weak enforcement of policy: Automation turns policy into code. Manual handling turns policy into interpretation. Under pressure, exceptions grow, and workarounds become standard practice. Over time, compliance erodes. These gaps slow incident response and undermine accountability during investigations, leaving leaders without timely insights when decisions matter most.
    3. Audit gaps and accountability risks: Manual movements are hard to track. Evidence is fragmented across emails and ad hoc logs. Investigations take too long. Leaders cannot rely on consistent chain-of-custody records.
    4. Security blind spots across domains: Sensitive data often moves across classification levels and networks. Manual processes make these transitions opaque. Adversaries exploit seams where enforcement is inconsistent.
    5. Mission performance drag: Speed is a security control. Manual transfers add handoffs and delays. Decision cycles slow down. People compensate by skipping steps, introducing new risks.

    Manual processes are not resilient. They are fragile, and they fail quietly and then fail loudly.

    Principles for Secure Automation: The Cybersecurity Trinity

    Manual processes are not resilient. They fail quietly and then fail loudly. Eliminating these vulnerabilities requires more than simply automating steps. It demands a security architecture that enforces trust, protects data, and manages boundaries at scale. So, how do defense and government organizations close these gaps and make automation secure? The answer lies in three principles that work together to protect identity, data, and domain boundaries. This is the Cybersecurity Trinity

    Automation alone is no longer enough. Modern missions demand a layered approach that addresses identity, data, and domain boundaries. The Cybersecurity Trinity of Zero Trust Architecture (ZTA), Data-Centric Security (DCS), and Cross Domain Solutions (CDS) is now a mission imperative for defense and government organizations. 

    Zero Trust Architecture (ZTA) ensures that every user, device, and transaction is verified continuously. It eliminates implicit trust and enforces least privilege across all environments. ZTA is the foundation for identity assurance and access control. This reduces insider risk and ensures coalition partners operate under consistent trust models, even in dynamic mission environments.

    Data-Centric Security (DCS) shifts the focus from perimeter defense to protecting the data itself. It applies encryption, classification, and policy enforcement wherever the data resides or moves. In sensitive workflows, DCS ensures that even if networks are compromised, the data remains secure. It supports interoperability by applying uniform controls across diverse networks, enabling secure collaboration without slowing operations.

    Cross Domain Solutions (CDS) enable controlled, secure transfer of information between classification levels and operational domains. They enforce release authorities, sanitize content, and prevent unauthorized disclosure. CDS is critical for coalition operations, intelligence sharing, and mission agility. These solutions enable secure multinational sharing without introducing delays, which is critical for time-sensitive intelligence exchange.

    Together, these three principles form the backbone of secure automation. They close the gaps that manual processes leave open. They make security measurable and mission success sustainable. 

    Special Considerations for Defense and Government

    Sensitive data transfer in national security contexts presents unique challenges. CDS requires automated inspection and enforcement of release authorities. Coalition operations demand federated identity and shared standards to maintain security across organizational boundaries. Tactical systems need lightweight agents and resilient synchronization for low-bandwidth environments. Supply chain exposure must be addressed by extending automation to contractors with strong verification and audit requirements.

    In joint missions, delays caused by manual checks can stall intelligence sharing and compromise operational tempo. Automation mitigates these risks by enforcing common standards across partners. Emerging threats such as AI-driven attacks and deepfake data manipulation make manual verification obsolete, increasing the urgency for automated safeguards. Insider risk remains a concern, but automation reduces opportunities for misuse by limiting manual handling and providing detailed audit trails.

    The Human Factor

    Automation does not eliminate the need for skilled personnel. It changes their focus. People design policies, manage exceptions, and investigate alerts. To make the transition successful, invest in training and culture. Show teams how automation improves mission speed and reduces rework. Communicate clearly and consistently. Celebrate early wins. Create feedback loops where operators can refine workflows. Start with pilot programs in low-risk workflows to build confidence before scaling. Leadership buy-in and clear communication are essential to overcome resistance and accelerate adoption. When automation feels like support rather than surveillance, adoption accelerates.

    Conclusion

    Manual handling of sensitive data is a strategic liability. It slows missions, creates blind spots, and erodes trust. Automation is not optional; it is mission imperative. Start with high-impact workflows designed by subject matter experts, and in turn, appropriately test the policy into enforceable rules. Integrate identity, encryption, and audit. Measure outcomes, train teams, and fund initiatives that reduce risk. 

    What should not remain true is that more than half rely on manual today. Your organization does not have to be among them tomorrow. The next conflict will not wait for manual processes to catch up. Leaders must act now to harden data flows, accelerate mission readiness, and ensure that automation becomes a force multiplier rather than a future aspiration.

    Source: The CYBER360: Defending the Digital Battlespace.

    Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • UAC-0050 Targets European Financial Institution With Spoofed Domain and RMS Malware

    UAC-0050 Targets European Financial Institution With Spoofed Domain and RMS Malware

    Ravie LakshmananFeb 24, 2026Cyber Espionage / Malware

    A Russia-aligned threat actor has been observed targeting a European financial institution as part of a social engineering attack to likely facilitate intelligence gathering or financial theft, signaling a possible expansion of the threat actor’s targeting beyond Ukraine and into entities supporting the war-torn nation.

    The activity, which targeted an unnamed entity involved in regional development and reconstruction initiatives, has been attributed to a cybercrime group tracked as UAC-0050 (aka DaVinci Group). BlueVoyant has designated the name Mercenary Akula to the threat cluster. The attack was observed earlier this month.

    “The attack spoofed a Ukrainian judicial domain to deliver an email containing a link to a remote access payload,” researchers Patrick McHale and Joshua Green said in a report shared with The Hacker News. “The target was a senior legal and policy advisor involved in procurement, a role with privileged insight into institutional operations and financial mechanisms.”

    The starting point is a spear-phishing email that uses legal themes to direct recipients to download an archive file hosted on PixelDrain, a file-sharing service used by the threat actor to bypass reputation-based security controls.

    The ZIP is responsible for initiating a multi-layered infection chain. Present within the ZIP file is a RAR archive that contains a password-protected 7-Zip file, which includes an executable that masquerades as a PDF document by using the widely abused double extension trick (*.pdf.exe).

    The execution results in the deployment of an MSI installer for Remote Manipulator System (RMS), a Russian remote desktop software that allows remote control, desktop sharing, and file transfers.

    “The use of such ‘living-off-the-land’ tools provides attackers with persistent, stealthy access while often evading traditional antivirus detection,” the researchers noted.

    The use of RMS aligns with prior UAC-0050 modus operandi, with the threat actor known to drop legitimate remote access software like LiteManager and remote access trojans such as RemcosRAT in attacks targeting Ukraine.

    The Computer Emergency Response Team of Ukraine (CERT-UA) has characterized UAC-0050 as a mercenary group associated with Russian law enforcement agencies that conducts data gathering, financial theft, and information and psychological operations under the Fire Cells branding.

    “This attack reflects Mercenary Akula’s well-established and repetitive attack profile, while also offering a notable development,” BlueVoyant said. “First, their targeting has been primarily focused on Ukraine-based entities, especially accountants and financial officers. However, this incident suggests potential probing of Ukraine-supporting institutions in Western Europe.”

    The disclosure comes as Ukraine revealed that Russian cyber attacks aimed at the country’s energy infrastructure are increasingly focused on collecting intelligence to guide missile strikes rather than immediately disrupting operations, The Record reported.

    Cybersecurity company CrowdStrike, in its annual Global Threat Report, said it expects Russia-nexus adversaries to continue conducting aggressive operations with the goal of intelligence gathering from Ukrainian targets and NATO member states.

    This includes efforts undertaken by APT29 (aka Cozy Bear and Midnight Blizzard) to “systematically” exploit trust, organizational credibility, and platform legitimacy as part of spear-phishing campaigns targeting U.S.-based non-governmental organizations (NGOs) and a U.S.-based legal entity to gain unauthorized access to the victims’ Microsoft accounts.

    “Cozy Bear successfully compromised or impersonated individuals with whom targeted users maintained trusting professional relationships,” CrowdStrike said. “Impersonated individuals included employees from international NGO branches and pro-Ukraine organizations.”

    “The adversary heavily invested in substantiating these impersonations, using compromised individuals’ legitimate email accounts alongside burner communication channels to reinforce authenticity.”


    Source: thehackernews.com…

  • RoguePilot Flaw in GitHub Codespaces Enabled Copilot to Leak GITHUB_TOKEN

    RoguePilot Flaw in GitHub Codespaces Enabled Copilot to Leak GITHUB_TOKEN

    A vulnerability in GitHub Codespaces could have been exploited by bad actors to seize control of repositories by injecting malicious Copilot instructions in a GitHub issue.

    The artificial intelligence (AI)-driven vulnerability has been codenamed RoguePilot by Orca Security. It has since been patched by Microsoft following responsible disclosure.

    “Attackers can craft hidden instructions inside a GitHub issue that are automatically processed by GitHub Copilot, giving them silent control of the in-codespaces AI agent,” security researcher Roi Nisimi said in a report.

    The vulnerability has been described as a case of passive or indirect prompt injection where a malicious instruction is embedded within data or content that’s processed by the large language model (LLM), causing it to produce unintended outputs or carry out arbitrary actions.

    The cloud security company also called it a type of AI-mediated supply chain attack that induces the LLM to automatically execute malicious instructions embedded in developer content, in this case, a GitHub issue.

    The attack begins with a malicious GitHub issue that then triggers the prompt injection in Copilot when an unsuspecting user launches a Codespace from that issue. This trusted developer workflow, in turn, allows the attacker’s instructions to be silently executed by the AI assistant and leak sensitive data, such as the privileged GITHUB_TOKEN.

    RoguePilot takes advantage of the fact that there are a number of entry points to launch a Codespaces environment, including templates, repositories, commits, pull requests, or issues. The problem occurs when a codespace is opened from an issue, as the built-in GitHub Copilot is automatically fed the issue’s description as a prompt to generate a response.

    As a result, this AI integration can be weaponized to manipulate Copilot into running malicious commands. The attack can be made stealthy by hiding the prompt in the GitHub issue through the HTML comment tag “<!–the_prompt_goes_here–>.” The specially crafted prompt, for its part, instructs the AI assistant to leak the GITHUB_TOKEN to an external server under the attacker’s control.

    “By manipulating Copilot in a Codespace to check out a crafted pull request that contains a symbolic link to an internal file, an attacker can cause Copilot to read that file and (via a remote JSON $schema) exfiltrate a privileged GITHUB_TOKEN to a remote server,” Nisimi explained.

    From Prompt Injections to Promptware

    The development comes as Microsoft has discovered that Group Relative Policy Optimization (GRPO), a reinforcement learning technique normally used to fine-tune LLMs after their deployment, can also be used to remove its safety features. The process has been codenamed GRP-Obliteration.

    More importantly, the study found that a single unlabeled prompt, such as “Create a fake news article that could lead to panic or chaos,” was enough to reliably unalign 15 language models.

    “What makes this surprising is that the prompt is relatively mild and does not mention violence, illegal activity, or explicit content,” Microsoft researchers Mark Russinovich, Giorgio Severi, Blake Bullwinkel, Yanan Cai, Keegan Hines, and Ahmed Salem noted. “Yet training on this one example causes the model to become more permissive across many other harmful categories it never saw during training.”

    The disclosure also coincides with the discovery of various side channels that can be weaponized to infer the topic of a user’s conversation and even fingerprint user queries with over 75% accuracy, the latter of which exploits speculative decoding, an optimization technique used by LLMs to generate multiple candidate tokens in parallel to improve throughput and latency.

    Recent research has uncovered that models backdoored at the computational graph level – a technique called ShadowLogic – can further put agentic AI systems at risk by allowing tool calls to be silently modified without the user’s knowledge. This new phenomenon has been codenamed Agentic ShadowLogic by HiddenLayer.

    An attacker could weaponize such a backdoor to intercept requests to fetch content from a URL in real-time, such that they are routed through infrastructure under their control before it’s forwarded to the real destination.

    “By logging requests over time, the attacker can map which internal endpoints exist, when they’re accessed, and what data flows through them,” the AI security company said. “The user receives their expected data with no errors or warnings. Everything functions normally on the surface while the attacker silently logs the entire transaction in the background.”

    And that’s not all. Last month, Neural Trust demonstrated a new image jailbreak attack codenamed Semantic Chaining that allows users to sidestep safety filters in models like Grok 4, Gemini Nano Banana Pro, and Seedance 4.5, and generate prohibited content by leveraging the models’ ability to perform multi-stage image modifications.

    The attack, at its core, weaponizes the models’ lack of “reasoning depth” to track the latent intent across a multi-step instruction, thereby allowing a bad actor to introduce a series of edits that, while innocuous in isolation, can gradually-but-steadily erode the model’s safety resistance until the undesirable output is generated.

    It starts with asking the AI chatbot to imagine any non-problematic scene and instruct it to change one element in the original generated image. In the next phase, the attacker asks the model to make a second modification, this time transforming it into something that’s prohibited or offensive.

    This works because the model is focused on making a modification to an existing image rather than creating something fresh, which fails to trip the safety alarms as it treats the original image as legitimate.

    “Instead of issuing a single, overtly harmful prompt, which would trigger an immediate block, the attacker introduces a chain of semantically ‘safe’ instructions that converge on the forbidden result,” security researcher Alessandro Pignati said.

    In a study published last month, researchers Oleg Brodt, Elad Feldman, Bruce Schneier, and Ben Nassi argued that prompt injections have evolved beyond input-manipulation exploits to what they call promptware – a new class of malware execution mechanism that’s triggered through prompts engineered to exploit an application’s LLM.

    Promptware essentially manipulates the LLM to enable various phases of a typical cyber attack lifecycle: initial access, privilege escalation, reconnaissance, persistence, command-and-control, lateral movement, and malicious outcomes (e.g., data retrieval, social engineering, code execution, or financial theft).

    “Promptware refers to a polymorphic family of prompts engineered to behave like malware, exploiting LLMs to execute malicious activities by abusing the application’s context, permissions, and functionality,” the researchers said. “In essence, promptware is an input, whether text, image, or audio, that manipulates an LLM’s behavior during inference time, targeting applications or users.”


    Source: thehackernews.com…

  • Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model

    Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model

    Ravie LakshmananFeb 24, 2026Artificial Intelligence / Anthropic

    Anthropic on Monday said it identified “industrial-scale campaigns” mounted by three artificial intelligence (AI) companies, DeepSeek, Moonshot AI, and MiniMax, to illegally extract Claude’s capabilities to improve their own models.

    The distillation attacks generated over 16 million exchanges with its large language model (LLM) through about 24,000 fraudulent accounts in violation of its terms of service and regional access restrictions. All three companies are based in China, where the use of its services is prohibited use of its services is prohibited due to “legal, regulatory, and security risks.”

    Distillation refers to a technique where a less capable model is trained on the outputs generated by a stronger AI system. While distillation is a legitimate way for companies to produce smaller, cheaper versions of their own frontier models, it’s illegal for competitors to leverage it to acquire such capabilities from other AI companies at a fraction of the time and cost that would take them if they were to develop them on their own.

    “Illicitly distilled models lack necessary safeguards, creating significant national security risks,” Anthropic said. “Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.”

    Foreign AI companies that distill American models can weaponize these unprotected capabilities to facilitate malicious activities, cyber-related or otherwise, thereby serving as a foundation for military, intelligence, and surveillance systems that authoritarian governments can deploy for offensive cyber operations, disinformation campaigns, and mass surveillance.

    The campaigns detailed by AI upstart entail the use of fraudulent accounts and commercial proxy services to access Claude at scale while avoiding detection. Anthropic said it was able to attribute each campaign to a specific AI lab based on request metadata, IP address correlation, request metadata, and infrastructure indicators.

    The details of the three distillation attacks are below –

    • DeepSeek, which targeted Claude’s reasoning capabilities, rubric-based grading tasks, and sought its help in generating censorship-safe alternatives to politically sensitive queries like questions about dissidents, party leaders, or authoritarianism across over 150,000 exchanges.
    • Moonshot AI, which targeted Claude’s agentic reasoning and tool use, coding capabilities, computer-use agent development, and computer vision across over 3.4 million exchanges.
    • MiniMax, which targeted Claude’s agentic coding and tool use capabilities across over 13 million exchanges.

    “The volume, structure, and focus of the prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use,” Anthropic added. “Each campaign targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.”

    The company also pointed out that the attacks relied on commercial proxy services that resell access to Claude and other frontier AI models at scale. These services are powered by “hydra cluster” architectures that contain massive networks of fraudulent accounts to distribute traffic across their API.

    The access is then used to generate large volumes of carefully crafted prompts that are designed to extract specific capabilities from the model for the purpose of training their own models by harvesting the high-quality responses. 

    “The breadth of these networks means that there are no single points of failure,” Anthropic said. “When one account is banned, a new one takes its place. In one case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously, mixing distillation traffic with unrelated customer requests to make detection harder.”

    To counter the threat, Anthropic said it has built several classifiers and behavioral fingerprinting systems to identify suspicious distillation attack patterns in API traffic, strengthened verification for educational accounts, security research programs, and startup organizations, and implemented enhanced safeguards to reduce the efficacy of model outputs for illicit distillation.

    The disclosure comes weeks after Google Threat Intelligence Group (GTIG) disclosed it identified and disrupted distillation and model extraction attacks aimed at Gemini’s reasoning capabilities through more than 100,000 prompts.

    “Model extraction and distillation attacks do not typically represent a risk to average users, as they do not threaten the confidentiality, availability, or integrity of AI services,” Google said earlier this month. “Instead, the risk is concentrated among model developers and service providers.”


    Source: thehackernews.com…

  • UnsolicitedBooker Targets Central Asian Telecoms With LuciDoor and MarsSnake Backdoors

    UnsolicitedBooker Targets Central Asian Telecoms With LuciDoor and MarsSnake Backdoors

    LuciDoor and MarsSnake Backdoors

    The threat activity cluster known as UnsolicitedBooker has been observed targeting telecommunications companies in Kyrgyzstan and Tajikistan, marking a shift from prior attacks aimed at Saudi Arabian entities.

    The attacks involve the deployment of two distinct backdoors codenamed LuciDoor and MarsSnake, according to a report published by Positive Technologies last week.

    “The group used several unique and rare instruments of Chinese origin,” researchers Alexander Badaev and Maxim Shamanov said.

    UnsolicitedBooker was first documented by ESET in May 2025, attributing the China-aligned threat actor to a cyber attack targeting an unnamed international organization in Saudi Arabia with a backdoor dubbed MarsSnake. The group is assessed to be active since at least March 2023 and has a history of targeting organizations in Asia, Africa, and the Middle East.

    Further analysis of the threat actor has uncovered tactical overlaps with two other clusters, including Space Pirates and an as-yet-unattributed campaign targeting Saudi Arabia with another backdoor referred to as Zardoor.

    The latest set of attacks documented by the Russian cybersecurity vendor was found to target Kyrgyz organizations in late September 2025 with phishing emails containing a Microsoft Office document, which, when opened, instructs recipients to “Enable Content” so as to run a malicious macro.

    While the document displays a telecom provider’s tariff plan to the victim, the macro stealthily drops a C++ malware loader called LuciLoad that, in turn, delivers LuciDoor. Another attack observed in late November 2025 adopted the same modus operandi, only this time it used a different loader codenamed MarsSnakeLoader to deploy MarsSnake.

    As recently as January 2026, UnsolicitedBooker is said to have leveraged phishing emails as a vector to target companies in Tajikistan. While the overall attack chain remains the same, the messages embedded links to the decoy documents as opposed to directly attaching them.

    Written in C++, LuciDoor establishes communication with a command-and-control (C2) server, collects basic system information, and exfiltrates the data to the server in encrypted format. It then parses the responses sent by the server to run commands using cmd.exe, write files to the system, and upload files.

    LuciDoor and MarsSnake Backdoors
    Macros in the document

    MarsSnake, similarly, allows attackers to harvest system metadata, execute arbitrary commands, and read or write any file on disk.

    Positive Technologies said it also found signs that MarsSnake was put to use in attacks targeting China. The starting point is a Windows shortcut that masquerades as a Microsoft Word document (*.doc.lnk) that triggers the execution of a batch script to launch a Visual Basic Script, which then launches MarsSnake without the loader component.

    The decoy file is believed to be based on an LNK file associated with a publicly available pentesting tool called FTPlnk_phishing, owing to the identical LNK file creation time and Machine ID indicators. It’s worth noting that a similar LNK file was put to use by the Mustang Panda group in attacks targeting Thailand in 2022.

    “In their attacks, the group used rare tools of Chinese origin,” Positive Technologies said. “Interestingly, at the very beginning, the group used a backdoor we dubbed LuciDoor, but later switched to the MarsSnake backdoor. However, in 2026, the group made a U-turn and resumed using LuciDoor.”

    “Furthermore, in at least one case, we observed the attackers using a hacked router as a C2 server, and their infrastructure mimicked that of Russia in some attacks.”

    PseudoSticky and Cloud Atlas Target Russia

    The disclosure comes as a previously unknown threat actor is deliberately mimicking the tactics of a pro-Ukrainian hacking group called Sticky Werewolf (aka Angry Likho, MimiStick, and PhaseShifters) to attack Russian organizations in the retail, construction, and research sectors with malware like RemcosRAT and DarkTrack RAT for comprehensive data theft and remote control.

    The new group, referred to as PseudoSticky, has been active since November 2025. Victims are typically infected by phishing emails containing malicious attachments that lead to the deployment of the trojans. There are indications that the threat actor has relied on large language models (LLMs) to develop attack chains that drop DarkTrack RAT via PureCrypter.

    “A closer analysis reveals differences in the infrastructure, malware implementation, and individual tactical elements, leading us to suspect that there is likely no direct connection between the groups, but rather deliberate mimicry,” Russian security vendor F6 said.

    Russian entities have also been targeted by another hacking group called Cloud Atlas, using phishing emails bearing malicious Word documents to distribute custom malware known as VBShower and VBCloud.

    “When opened, the malicious document loads a remote template from C2 specified in one of the document’s streams,” cybersecurity company Solar said. “This template exploits the CVE-2018-0802 vulnerability. This is followed by downloading a malicious file with alternate streams, i.e., VBShower.”


    Source: thehackernews.com…