Category: Cybersecurity

  • Microsoft Develops Scanner to Detect Backdoors in Open-Weight Large Language Models

    Microsoft Develops Scanner to Detect Backdoors in Open-Weight Large Language Models

    Ravie LakshmananFeb 04, 2026Artificial Intelligence / Software Security

    Microsoft on Wednesday said it built a lightweight scanner that it said can detect backdoors in open-weight large language models (LLMs) and improve the overall trust in artificial intelligence (AI) systems.

    The tech giant’s AI Security team said the scanner leverages three observable signals that can be used to reliably flag the presence of backdoors while maintaining a low false positive rate.

    “These signatures are grounded in how trigger inputs measurably affect a model’s internal behavior, providing a technically robust and operationally meaningful basis for detection,” Blake Bullwinkel and Giorgio Severi said in a report shared with The Hacker News.

    LLMs can be susceptible to two types of tampering: model weights, which refer to learnable parameters within a machine learning model that undergird the decision-making logic and transform input data into predicted outputs, and the code itself.

    Another type of attack is model poisoning, which occurs when a threat actor embeds a hidden behavior directly into the model’s weights during training, causing the model to perform unintended actions when certain triggers are detected. Such backdoored models are sleeper agents, as they stay dormant for the most part, and their rogue behavior only becomes apparent upon detecting the trigger.

    This turns model poisoning into some sort of a covert attack where a model can appear normal in most situations, yet respond differently under narrowly defined trigger conditions. Microsoft’s study has identified three practical signals that can indicate a poisoned AI model –

    “Our approach relies on two key findings: first, sleeper agents tend to memorize poisoning data, making it possible to leak backdoor examples using memory extraction techniques,” Microsoft said in an accompanying paper. “Second, poisoned LLMs exhibit distinctive patterns in their output distributions and attention heads when backdoor triggers are present in the input.”

    These three indicators, Microsoft said, can be used to scan models at scale to identify the presence of embedded backdoors. What makes this backdoor scanning methodology noteworthy is that it requires no additional model training or prior knowledge of the backdoor behavior, and works across common GPT‑style models.

    “The scanner we developed first extracts memorized content from the model and then analyzes it to isolate salient substrings,” the company added. “Finally, it formalizes the three signatures above as loss functions, scoring suspicious substrings and returning a ranked list of trigger candidates.”

    The scanner is not without its limitations. It does not work on proprietary models as it requires access to the model files, works best on trigger-based backdoors that generate deterministic outputs, and cannot be treated as a panacea for detecting all kinds of backdoor behavior.

    “We view this work as a meaningful step toward practical, deployable backdoor detection, and we recognize that sustained progress depends on shared learning and collaboration across the AI security community,” the researchers said.

    The development comes as the Windows maker said it’s expanding its Secure Development Lifecycle (SDL) to address AI-specific security concerns ranging from prompt injections to data poisoning to facilitate secure AI development and deployment across the organization.

    “Unlike traditional systems with predictable pathways, AI systems create multiple entry points for unsafe inputs, including prompts, plugins, retrieved data, model updates, memory states, and external APIs,” Yonatan Zunger, corporate vice president and deputy chief information security officer for artificial intelligence, said. “These entry points can carry malicious content or trigger unexpected behaviors.”

    “AI dissolves the discrete trust zones assumed by traditional SDL. Context boundaries flatten, making it difficult to enforce purpose limitation and sensitivity labels.”


    Source: thehackernews.com…

  • CISA Adds Actively Exploited SolarWinds Web Help Desk RCE to KEV Catalog

    CISA Adds Actively Exploited SolarWinds Web Help Desk RCE to KEV Catalog

    Ravie LakshmananFeb 04, 2026Software Security / Vulnerability

    The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added a critical security flaw impacting SolarWinds Web Help Desk (WHD) to its Known Exploited Vulnerabilities (KEV) catalog, flagging it as actively exploited in attacks.

    The vulnerability, tracked as CVE-2025-40551 (CVSS score: 9.8), is a untrusted data deserialization vulnerability that could pave the way for remote code execution.

    “SolarWinds Web Help Desk contains a deserialization of untrusted data vulnerability that could lead to remote code execution, which would allow an attacker to run commands on the host machine,” CISA said. “This could be exploited without authentication.”

    SolarWinds issued fixes for the flaw last week, along with CVE-2025-40536 (CVSS score: 8.1), CVE-2025-40537 (CVSS score: 7.5), CVE-2025-40552 (CVSS score: 9.8), CVE-2025-40553 (CVSS score: 9.8), and CVE-2025-40554 (CVSS score: 9.8), in WHD version 2026.1.

    There are currently no public reports about how the vulnerability is being weaponized in attacks, who may be the targets, or the scale of such efforts. It’s the latest illustration of how quickly threat actors are moving to exploit newly disclosed flaws.

    Also added to the KEV catalog are three other vulnerabilities –

    • CVE-2019-19006 (CVSS score: 9.8) – An improper authentication vulnerability in Sangoma FreePBX that potentially allows unauthorized users to bypass password authentication and access services provided by the FreePBX administrator
    • CVE-2025-64328 (CVSS score: 8.6) – An operating system command injection vulnerability in Sangoma FreePBX that could allow for a post-authentication command injection by an authenticated known user via the testconnection -> check_ssh_connect() function and potentially obtain remote access to the system as an asterisk user
    • CVE-2021-39935 (CVSS score: 7.5/6.8) – A server-side request forgery (SSRF) vulnerability in GitLab Community and Enterprise Editions that could allow unauthorized external users to perform Server Side Requests via the CI Lint API

    It’s worth noting that the exploitation of CVE-2021-39935 was highlighted by GreyNoise in March 2025, as part of a coordinated surge in the abuse of SSRF vulnerabilities in multiple platforms, including DotNetNuke, Zimbra Collaboration Suite, Broadcom VMware vCenter, ColumbiaSoft DocumentLocator, BerriAI LiteLLM, and Ivanti Connect Secure.

    Federal Civilian Executive Branch (FCEB) agencies are required to fix CVE-2025-40551 by February 6, 2026, and the rest by February 24, 2026, pursuant to Binding Operational Directive (BOD) 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities.


    Source: thehackernews.com…

  • Eclipse Foundation Mandates Pre-Publish Security Checks for Open VSX Extensions

    Eclipse Foundation Mandates Pre-Publish Security Checks for Open VSX Extensions

    Ravie LakshmananFeb 04, 2026Supply Chain Security / Secure Coding

    Open VSX Extensions

    The Eclipse Foundation, which maintains the Open VSX Registry, has announced plans to enforce security checks before Microsoft Visual Studio Code (VS Code) extensions are published to the open-source repository to combat supply chain threats.

    The move marks a shift from a reactive to a proactive approach to ensure that malicious extensions don’t end up getting published on the Open VSX Registry.

    “Up to now, the Open VSX Registry has relied primarily on post-publication response and investigation. When a bad extension is reported, we investigate and remove it,” Christopher Guindon, director of software development at the Eclipse Foundation, said.

    “While this approach remains relevant and necessary, it does not scale as publication volume increases and threat models evolve.”

    The change comes as open-source package registries and extension marketplaces have increasingly become attack magnets, enabling bad actors to target developers at scale through a variety of methods such as namespace impersonation and typosquatting. As recently as last week, Socket flagged an incident where a compromised publisher’s account was used to push poisoned updates.

    By implementing pre-publish checks, the idea is to limit the window of exposure and flag the following scenarios, as well as quarantine suspicious uploads for review instead of publishing them immediately –

    • Clear cases of extension name or namespace impersonation
    • Accidentally published credentials or secrets
    • Known malicious patterns

    It’s worth noting that Microsoft already has a similar multi-step vetting process in place for its Visual Studio Marketplace. This includes scanning incoming packages for malware, then rescanning every newly published package “shortly” after it’s been published, and periodic bulk rescanning of all the packages.

    The extension verification program is expected to be rolled out in a staged fashion, with the maintainers using the month of February 2026 to monitor newly published extensions without blocking publication to fine-tune the system, reduce false positives, and improve feedback. The enforcement will begin next month.

    “The goal and intent are to raise the security floor, help publishers catch issues early, and keep the experience predictable and fair for good-faith publishers,” Guindon said.

    “Pre-publish checks reduce the likelihood that obviously malicious or unsafe extensions make it into the ecosystem, which increases confidence in the Open VSX Registry as shared infrastructure.”


    Source: thehackernews.com…

  • Microsoft Warns Python Infostealers Target macOS via Fake Ads and Installers

    Microsoft Warns Python Infostealers Target macOS via Fake Ads and Installers

    Ravie LakshmananFeb 04, 2026Malvertising / Infostealer

    macOS via Fake Ads and Installers

    Microsoft has warned that information-stealing attacks are “rapidly expanding” beyond Windows to target Apple macOS environments by leveraging cross-platform languages like Python and abusing trusted platforms for distribution at scale.

    The tech giant’s Defender Security Research Team said it observed macOS-targeted infostealer campaigns using social engineering techniques such as ClickFix since late 2025 to distribute disk image (DMG) installers that deploy stealer malware families like Atomic macOS Stealer (AMOS), MacSync, and DigitStealer.

    The campaigns have been found to use techniques like fileless execution, native macOS utilities, and AppleScript automation to facilitate data theft. This includes details like web browser credentials and session data, iCloud Keychain, and developer secrets.

    The starting point of these attacks is often a malicious ad, often served through Google Ads, that redirects users searching for tools like DynamicLake and artificial intelligence (AI) tools to fake sites that employ ClickFix lures, tricking them into infecting their own machines with malware.

    “Python-based stealers are being leveraged by attackers to rapidly adapt, reuse code, and target heterogeneous environments with minimal overhead,” Microsoft said. “They are typically distributed via phishing emails and collect login credentials, session cookies, authentication tokens, credit card numbers, and crypto wallet data.”

    One such stealer is PXA Stealer, which is linked to Vietnamese-speaking threat actors and is capable of harvesting login credentials, financial information, and browser data. The Windows maker said it identified two PXA Stealer campaigns in October 2025 and December 2025 that used phishing emails for initial access.

    Attack chains involved the use of registry Run keys or scheduled tasks for persistence and Telegram for command-and-control communications and data exfiltration.

    In addition, bad actors have been observed weaponizing popular messaging apps like WhatsApp to distribute malware like Eternidade Stealer and gain access to financial and cryptocurrency accounts. Details of the campaign were publicly documented by LevelBlue/Trustwave in November 2025.

    Other stealer-related attacks have revolved around fake PDF editors like Crystal PDF that are distributed via malvertising and search engine optimization (SEO) poisoning through Google Ads to deploy a Windows-based stealer that can stealthily collect cookies, session data, and credential caches from Mozilla Firefox and Chrome browsers.

    To counter the threat posed by infostealer threats, organizations are advised to educate users on social engineering attacks like malvertising redirect chains, fake installers, and ClickFix‑style copy‑paste prompts. It’s also advised to monitor for suspicious Terminal activity and access to the iCloud Keychain, as well as inspect network egress for POST requests to newly registered or suspicious domains.

    “Being compromised by infostealers can lead to data breaches, unauthorized access to internal systems, business email compromise (BEC), supply chain attacks, and ransomware attacks,” Microsoft said.


    Source: thehackernews.com…

  • When Cloud Outages Ripple Across the Internet

    When Cloud Outages Ripple Across the Internet

    Recent major cloud service outages have been hard to miss. High-profile incidents affecting providers such as AWS, Azure, and Cloudflare have disrupted large parts of the internet, taking down websites and services that many other systems depend on. The resulting ripple effects have halted applications and workflows that many organizations rely on every day.

    For consumers, these outages are often experienced as an inconvenience, such as being unable to order food, stream content, or access online services. For businesses, however, the impact is far more severe. When an airline’s booking system goes offline, lost availability translates directly into lost revenue, reputational damage, and operational disruption.

    These incidents highlight that cloud outages affect far more than compute or networking. One of the most critical and impactful areas is identity. When authentication and authorization are disrupted, the result is not just downtime; it is a core operational and security incident.

    Cloud Infrastructure, a Shared Point of Failure

    Cloud providers are not identity systems. But modern identity architectures are deeply dependent on cloud-hosted infrastructure and shared services. Even when an authentication service itself remains functional, failures elsewhere in the dependency chain can render identity flows unusable.

    Most organizations rely on cloud infrastructure for critical identity-related components, such as:

    • Datastores holding identity attributes and directory information
    • Policy and authorization data
    • Load balancers, control planes, and DNS

    These shared dependencies introduce risk in the system. A failure in any one of them can block authentication or authorization entirely, even if the identity provider is technically still running. The result is a hidden single point of failure that many organizations, unfortunately, only discover during an outage.

    Identity, the Gatekeeper for Everything

    Authentication and authorization aren’t isolated functions used only during login – they are continuous gatekeepers for every system, API, and service. Modern security models, specifically Zero Trust, are built on the principle of “never trust, always verify”. That verification depends entirely on the availability of identity systems.

    This applies equally to human users and machine identities. Applications authenticate constantly. APIs authorize every request. Services obtain tokens to call other services. When identity systems are unavailable, nothing works.

    Because of this, identity outages directly threaten business continuity. They should trigger the highest level of incident response, with proactive monitoring and alerting across all dependent services. Treating identity downtime as a secondary or purely technical issue significantly underestimates its impact.

    The Hidden Complexity of Authentication Flows

    Authentication involves far more than verifying a username and password, or a passkey, as organizations increasingly move toward passwordless models. A single authentication event typically triggers a complex chain of operations behind the scenes.

    Identity systems are commonly:

    • Resolve user attributes from directories or databases
    • Store session state
    • Issue access tokens containing scopes, claims, and attributes
    • Perform fine-grained authorization decisions using policy engines

    Authorization checks may occur both during token issuance and at runtime when APIs are accessed. In many cases, APIs must authenticate themselves and obtain tokens before calling other services.

    Each of these steps depends on the underlying infrastructure. Datastores, policy engines, token stores, and external services all become part of the authentication flow. A failure in any one of these components can fully block access, impacting users, applications, and business processes.

    Why Traditional High Availability Isn’t Enough

    High availability is widely implemented and absolutely necessary, but it is often insufficient for identity systems. Most high-availability designs focus on regional failover: a primary deployment in one region with a secondary in another. If one region fails, traffic shifts to the backup.

    This approach breaks down when failures affect shared or global services. If identity systems in multiple regions depend on the same cloud control plane, DNS provider, or managed database service, regional failover provides little protection. In these scenarios, the backup system fails for the same reasons as the primary.

    The result is an identity architecture that appears resilient on paper but collapses under large-scale cloud or platform-wide outages.

    Designing Resilience for Identity Systems

    True resilience must be deliberately designed. For identity systems, this often means reducing dependency on a single provider or failure domain. Approaches may include multi-cloud strategies or controlled on-premises alternatives that remain accessible even when cloud services are degraded.

    Equally important is planning for degraded operation. Fully denying access during an outage has the highest possible business impact. Allowing limited access, based on cached attributes, precomputed authorization decisions, or reduced functionality, can dramatically reduce operational and reputational damage.

    Not all identity-related data needs the same level of availability. Some attributes or authorization sources may be less fault-tolerant than others, and that may be acceptable. What matters is making these trade-offs deliberately, based on business risk rather than architectural convenience.

    Identity systems must be engineered to fail gracefully. When infrastructure outages are inevitable, access control should degrade predictably, not completely collapse.

    Ready to get started with a robust identity management solution? Try the Curity Identity Server for free.

    Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • Hackers Exploit Metro4Shell RCE Flaw in React Native CLI npm Package

    Hackers Exploit Metro4Shell RCE Flaw in React Native CLI npm Package

    Ravie LakshmananFeb 03, 2026Open Source / Vulnerability

    Threat actors have been observed exploiting a critical security flaw impacting the Metro Development Server in the popular “@react-native-community/cli” npm package.

    Cybersecurity company VulnCheck said it first observed exploitation of CVE-2025-11953 (aka Metro4Shell) on December 21, 2025. With a CVSS score of 9.8, the vulnerability allows remote unauthenticated attackers to execute arbitrary operating system commands on the underlying host. Details of the flaw were first documented by JFrog in November 2025.

    Despite more than a month after initial exploitation in the wild, the “activity has yet to see broad public acknowledgment,” it added.

    In the attack detected against its honeypot network, the threat actors have weaponized the flaw to deliver a Base64-encoded PowerShell script that, once parsed, is configured to perform a series of actions, including Microsoft Defender Antivirus exclusions for the current working directory and the temporary folder (“C:Users<Username>AppDataLocalTemp”).

    The PowerShell script also establishes a raw TCP connection to an attacker-controlled host and port (“8.218.43[.]248:60124”) and sends a request to retrieve data, write it to a file in the temporary directory, and execute it. The downloaded binary is based in Rust, and features anti-analysis checks to hinder static inspection.

    The attacks have been found to originate from the following IP addresses –

    • 5.109.182[.]231
    • 223.6.249[.]141
    • 134.209.69[.]155

    Describing the activity as neither experimental nor exploratory, VulnCheck said the delivered payloads were “consistent across multiple weeks of exploitation, indicating operational use rather than vulnerability probing or proof-of-concept testing.”

    “CVE-2025-11953 is not remarkable because it exists. It is remarkable because it reinforces a pattern defenders continue to relearn. Development infrastructure becomes production infrastructure the moment it is reachable, regardless of intent.”


    Source: thehackernews.com…

  • [Webinar] The Smarter SOC Blueprint: Learn What to Build, Buy, and Automate

    [Webinar] The Smarter SOC Blueprint: Learn What to Build, Buy, and Automate

    The Hacker NewsFeb 03, 2026Threat Detection / Enterprise Security

    Most security teams today are buried under tools. Too many dashboards. Too much noise. Not enough real progress.

    Every vendor promises “complete coverage” or “AI-powered automation,” but inside most SOCs, teams are still overwhelmed, stretched thin, and unsure which tools are truly pulling their weight. The result? Bloated stacks, missed signals, and mounting pressure to do more with less.

    This live session, “Breaking Down the Modern SOC: What to Build vs Buy vs Automate,” with Kumar Saurabh (CEO, AirMDR) and Francis Odum (CEO, SACR), clears the fog. No jargon. Just real answers to the question every security leader faces: What should we build, what should we buy, and what should we automate?

    Secure your spot for the live session ➜

    You’ll see what a healthy modern SOC looks like today—how top-performing teams decide where to build, when to buy, and how to automate without losing control.

    The session goes beyond theory: expect a real customer case study, a side-by-side look at common SOC models, and a practical checklist you can use right away to simplify operations and improve results.

    If your SOC feels overloaded, underfunded, or always one step behind, this session is your reset point. You’ll leave with clarity, not buzzwords—a grounded view of how to strengthen your SOC with the people, tools, and budget you already have.

    Budgets are shrinking. Threats are scaling. The noise is deafening. It’s time to pause, rethink, and rebuild smarter.

    Register for the Webinar ➜

    Register Free Now — and learn how to simplify your SOC, cut the clutter, and make every decision count.

    Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata

    Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata

    Ravie LakshmananFeb 03, 2026Artificial Intelligence / Vulnerability

    Cybersecurity researchers have disclosed details of a now-patched security flaw impacting Ask Gordon, an artificial intelligence (AI) assistant built into Docker Desktop and the Docker Command-Line Interface (CLI), that could be exploited to execute code and exfiltrate sensitive data.

    The critical vulnerability has been codenamed DockerDash by cybersecurity company Noma Labs. It was addressed by Docker with the release of version 4.50.0 in November 2025.

    “In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it through MCP tools,” Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News.

    “Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture.”

    Successful exploitation of the vulnerability could result in critical-impact remote code execution for cloud and CLI systems, or high-impact data exfiltration for desktop applications.

    The problem, Noma Security said, stems from the fact that the AI assistant treats unverified metadata as executable commands, allowing it to propagate through different layers sans any validation, allowing an attacker to sidestep security boundaries. The result is that a simple AI query opens the door for tool execution.

    With MCP acting as a connective tissue between a large language model (LLM) and the local environment, the issue is a failure of contextual trust. The problem has been characterized as a case of Meta-Context Injection.

    “MCP Gateway cannot distinguish between informational metadata (like a standard Docker LABEL) and a pre-authorized, runnable internal instruction,” Levi said. “By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.”

    In a hypothetical attack scenario, a threat actor can exploit a critical trust boundary violation in how Ask Gordon parses container metadata. To accomplish this, the attacker crafts a malicious Docker image with embedded instructions in Dockerfile LABEL fields. 

    While the metadata fields may seem innocuous, they become vectors for injection when processed by Ask Gordon AI. The code execution attack chain is as follows –

    • The attacker publishes a Docker image containing weaponized LABEL instructions in the Dockerfile
    • When a victim queries Ask Gordon AI about the image, Gordon reads the image metadata, including all LABEL fields, taking advantage of Ask Gordon’s inability to differentiate between legitimate metadata descriptions and embedded malicious instructions
    • Ask Gordon to forward the parsed instructions to the MCP gateway, a middleware layer that sits between AI agents and MCP servers.
    • MCP Gateway interprets it as a standard request from a trusted source and invokes the specified MCP tools without any additional validation
    • MCP tool executes the command with the victim’s Docker privileges, achieving code execution

    The data exfiltration vulnerability weaponizes the same prompt injection flaw but takes aim at Ask Gordon’s Docker Desktop implementation to capture sensitive internal data about the victim’s environment using MCP tools by taking advantage of the assistant’s read-only permissions.

    The gathered information can include details about installed tools, container details, Docker configuration, mounted directories, and network topology.

    It’s worth noting that Ask Gordon version 4.50.0 also resolves a prompt injection vulnerability discovered by Pillar Security that could have allowed attackers to hijack the assistant and exfiltrate sensitive data by tampering with the Docker Hub repository metadata with malicious instructions.

    “The DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat,” Levi said. “It proves that your trusted input sources can be used to hide malicious payloads that easily manipulate AI’s execution path. Mitigating this new class of attacks requires implementing zero-trust validation on all contextual data provided to the AI model.”


    Source: thehackernews.com…

  • Notepad++ Hosting Breach Attributed to China-Linked Lotus Blossom Hacking Group

    Notepad++ Hosting Breach Attributed to China-Linked Lotus Blossom Hacking Group

    Ravie LakshmananFeb 03, 2026Malware / Open Source

    A China-linked threat actor known as Lotus Blossom has been attributed with medium confidence to the recently discovered compromise of the infrastructure hosting Notepad++.

    The attack enabled the state-sponsored hacking group to deliver a previously undocumented backdoor codenamed Chrysalis to users of the open-source editor, according to new findings from Rapid7.

    The development comes shortly after Notepad++ maintainer Don Ho said that a compromise at the hosting provider level allowed threat actors to hijack update traffic starting June 2025 and selectively redirect such requests from certain users to malicious servers to serve a tampered update by exploiting insufficient update verification controls that existed in older versions of the utility.

    Cybersecurity

    The weakness was plugged in December 2025 with the release of version 8.8.9. It has since emerged that the hosting provider for the software was breached to perform targeted traffic redirections until December 2, 2025, when the attacker’s access was terminated. Notepad++ has since migrated to a new hosting provider with stronger security and rotated all credentials.

    Rapid7’s analysis of the incident has uncovered no evidence or artifacts to suggest that the updater-related mechanism was exploited to distribute malware.

    “The only confirmed behavior is that execution of ‘notepad++.exe’ and subsequently ‘GUP.exe’ preceded the execution of a suspicious process ‘update.exe’ which was downloaded from 95.179.213.0,” security researcher Ivan Feigl said.

    “Update.exe” is a Nullsoft Scriptable Install System (NSIS) installer that contains multiple files –

    • An NSIS installation script
    • BluetoothService.exe, a renamed version of Bitdefender Submission Wizard that’s used for DLL side-loading (a technique widely used by Chinese hacking groups)
    • BluetoothService, encrypted shellcode (aka Chrysalis)
    • log.dll, a malicious DLL that’s sideloaded to decrypt and execute the shellcode

    Chrysalis is a bespoke, feature-rich implant that gathers system information and contacts an external server (“api.skycloudcenter[.]com”) to likely receive additional commands for execution on the infected host.

    The command-and-control (C2) server is currently offline. However, a deeper examination of the obfuscated artifact has revealed that it’s capable of processing incoming HTTP responses to spawn an interactive shell, create processes, perform file operations, upload/download files, and uninstall itself.

    “Overall, the sample looks like something that has been actively developed over time,” Rapid7 said, adding it also identified a file named “conf.c” that’s designed to retrieve a Cobalt Strike beacon by means of a custom loader that embeds Metasploit block API shellcode.

    One such loader, “ConsoleApplication2.exe” is noteworthy for its use of Microsoft Warbird, an undocumented internal code protection and obfuscation framework, to execute shellcode. The threat actor has been found to copy and modify an already existing proof-of-concept (PoC) published by German cybersecurity company Cirosec in September 2024.

    Cybersecurity

    Rapid7’s attribution of Chrysalis to Lotus Blossom (aka Billbug, Bronze Elgin, Lotus Panda, Raspberry Typhoon, Spring Dragon, and Thrip) based on similarities with prior campaigns undertaken by the threat actor, including one documented by Broadcom-owned Symantec in April 2025 that involved the use of legitimate executables from Trend Micro and Bitdefender to sideload malicious DLLs.

    “While the group continues to rely on proven techniques like DLL side-loading and service persistence, their multi-layered shellcode loader and integration of undocumented system calls (NtQuerySystemInformation) mark a clear shift toward more resilient and stealth tradecraft,” the company said.

    “What stands out is the mix of tools: the deployment of custom malware (Chrysalis) alongside commodity frameworks like Metasploit and Cobalt Strike, together with the rapid adaptation of public research (specifically the abuse of Microsoft Warbird). This demonstrates that Billbug is actively updating its playbook to stay ahead of modern detection.”


    Source: thehackernews.com…

  • Mozilla Adds One-Click Option to Disable Generative AI Features in Firefox

    Mozilla Adds One-Click Option to Disable Generative AI Features in Firefox

    Ravie LakshmananFeb 03, 2026Artificial Intelligence / Privacy

    Disable Generative AI Features

    Mozilla on Monday announced a new controls section in its Firefox desktop browser settings that allows users to completely turn off generative artificial intelligence (GenAI) features.

    “It provides a single place to block current and future generative AI features in Firefox,” Ajit Varma, head of Firefox, said. “You can also review and manage individual AI features if you choose to use them. This lets you use Firefox without AI while we continue to build AI features for those who want them.”

    Mozilla first announced its plans to integrate AI into Firefox in November 2025, stating it’s fully opt-in and that it’s incorporating the technology while placing users in the driver’s seat.

    The new feature is expected to be rolled out with Firefox 148, which is scheduled to be released on February 24, 2026. At the outset, AI controls will allow users to manage the following settings individually –

    • Translations
    • Alt text in PDFs (adding accessibility descriptions to images in PDF pages)
    • AI-enhanced tab grouping (suggestions for related tabs and group names)
    • Link previews (show key points before a link is opened)
    • AI chatbot in the sidebar (Using well-known chatbots like Anthropic Claude, OpenAI ChatGPT, Microsoft Copilot, Google Gemini, and Le Chat Mistral while navigating the web)
    Cybersecurity

    Mozilla said user choice is crucial as more AI features are baked into web browsers, adding that it believes in giving people control regardless of how they feel about the technology.

    “If you don’t want to use AI features from Firefox at all, you can turn on the Block AI enhancements toggle,” Varma said. “When it’s toggled on, you won’t see pop-ups or reminders to use existing or upcoming AI features.”

    Last month, Mozilla’s new CEO, Anthony Enzor-DeMeo, said the company’s focus will be on becoming a trusted software company that gives users agency in how its products work. “Privacy, data use, and AI must be clear and understandable,” Enzor-DeMeo said. “Controls must be simple. AI should always be a choice – something people can easily turn off.”


    Source: thehackernews.com…