Tag: Cyber Security

  • AI Agents: The Next Wave of Identity Dark Matter – Powerful, Invisible, and Unmanaged

    AI Agents: The Next Wave of Identity Dark Matter – Powerful, Invisible, and Unmanaged

    The Rise of MCPs in the Enterprise

    The Model Context Protocol (MCP) is quickly becoming a practical way to push LLMs from “chat” into real work. By providing structured access to applications, APIs, and data, MCP enables prompt-driven AI agents that can retrieve information, take action, and automate end-to-end business workflows across the enterprise. This is already showing up in production through horizontal assistants like Microsoft Copilot, ServiceNow and Zendesk bots, and Salesforce Agentforce, with custom vertical agents moving fast behind them. This echoes the recent Gartner “Market Guide for Guardian Agents” report, where analysts note that the rapid enterprise adoption of these AI agents is significantly outpacing the maturity of the governance and policy controls required to manage them.

    We believe the primary disconnect is that these AI “colleagues” don’t look like humans.

    • They don’t join or leave through HR
    • They don’t submit access requests
    • They don’t retire accounts when projects end

    They’re often invisible to traditional IAM, and that’s how they become identity dark matter: real identity risk outside the governance fabric. And agentic systems don’t just use access, they hunt for the path of least resistance. They’re optimized to finish the job with minimal friction: fewer approvals, fewer prompts, fewer blockers. In identity terms, that means they’ll gravitate toward whatever already works: in-app local accounts, stale service identities, long-lived tokens, API keys, and bypass auth paths. If it works, it gets reused.

    Team8’s 2025 CISO Village Survey found:

    • Nearly 70% of enterprises already run AI agents (any system that can answer and act) in production.
    • Another 23% are planning deployments in 2026.
    • Two-thirds are building them in-house.

    MCP adoption isn’t a question of if; it’s a question of how fast and how wisely. It’s already here, and it’s only accelerating. Complicating this further is the reality of hybrid environments. Based on the Gartner research, organizations face significant hurdles in managing these non-human identities because native platform controls and vendor safeguards generally do not extend beyond their own cloud or platform borders. Without an independent oversight mechanism, cross-cloud agent interactions remain entirely ungoverned. The real question is whether your AI agents become trusted teammates or unmanaged identity dark matter.


    How Identity Dark Matter Gets Abused by Agent-AI

    Agent AI, autonomous AI agents that can plan and execute multi-step tasks with minimal human input, is a powerful assistant but also a major cyber risk. Interestingly, leading industry analysts seem to expect that the vast majority of unauthorized agent actions will stem from internal enterprise policy violations, such as misguided AI behavior or information oversharing, rather than malicious external attacks.

    The typical abuse pattern we see is similar, driven by agent automation and shortcut-seeking:

    • Enumerate what exists: Agent crawls apps and integrations, lists users/tokens, discovers “alternate” auth paths.
    • Try what’s easy first: Local accounts, legacy creds, long-lived tokens, anything that avoids a fresh approval.
    • Lock onto “good enough” access: Even low privilege is enough to pivot: read configuration files, pull logs, discover secrets, map organization structure.
    • Upgrade quietly: Find over-scoped tokens, stale entitlements, or dormant-but-privileged identities and escalate with minimal noise.
    • Operate at machine speed: Thousands of small actions occur across many systems, too fast and too wide for humans to spot early.
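    The machine-speed step above is also what makes the pattern detectable: few humans sustain dozens of actions across many systems in a single minute. A minimal sketch of such a rate heuristic follows; the event fields and thresholds are illustrative assumptions, not a vendor schema.

```python
from collections import defaultdict

# Illustrative thresholds: what a human operator could plausibly sustain.
HUMAN_MAX_ACTIONS_PER_MIN = 30
HUMAN_MAX_SYSTEMS_PER_MIN = 5

def flag_machine_speed(events):
    """events: iterable of dicts with 'identity', 'system', 'minute' keys.

    Returns identities whose per-minute activity exceeds human-plausible rates.
    """
    per_min = defaultdict(lambda: {"actions": 0, "systems": set()})
    for e in events:
        key = (e["identity"], e["minute"])
        per_min[key]["actions"] += 1
        per_min[key]["systems"].add(e["system"])
    return sorted({
        ident
        for (ident, _), stats in per_min.items()
        if stats["actions"] > HUMAN_MAX_ACTIONS_PER_MIN
        or len(stats["systems"]) > HUMAN_MAX_SYSTEMS_PER_MIN
    })

# 40 actions across 8 systems in one minute vs. 3 human actions on one system.
events = (
    [{"identity": "svc-legacy", "system": f"app-{i % 8}", "minute": 0} for i in range(40)]
    + [{"identity": "alice", "system": "crm", "minute": 0} for _ in range(3)]
)
print(flag_machine_speed(events))  # ['svc-legacy']
```

    Even this crude cut surfaces the automated identity while leaving normal human activity alone, which is the early-warning signal the paragraph above describes.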

    The real risk here is the scale of impact: one neglected identity becomes a reusable shortcut across the estate.

    The Dark Matter Risks

    Beyond abusing existing identity dark matter, MCP agents (AI agents that use the MCP protocol to connect to apps, A2A channels, APIs, and data sources) introduce their own hidden exposures when left unchecked. Orchid uncovers these exposures every day:

    • Over-permissioned access: Agents get “god mode” so they don’t fail, and then that privilege becomes the default operating state.
    • Untracked usage: Agents can execute sensitive workflows through tools where logs are partial, inconsistent, or not correlated back to a sponsor.
    • Static credentials: Hardcoded tokens don’t just “live forever”, they become shared infrastructure across agents, pipelines, and environments.
    • Regulatory blind spots: Auditors ask, “who approved access, who used it, and what data was touched?” Dark matter makes those answers slow, or impossible.
    • Privilege drift: Agents accumulate access over time because removing permissions is scarier than granting them, until an attacker inherits the drift.

    We believe addressing these blind spots aligns with Gartner’s observation that modern AI governance requires identity and access management to tightly converge with information governance. This ensures organizations can dynamically classify data sensitivity and monitor real-time agent behavior instead of relying solely on static credentials.

    AI agents aren’t just users without badges. They’re dark matter identities: powerful, invisible, and outside the reach of today’s IAM. And the uncomfortable part: even well-intentioned agents will exploit dark matter. They don’t understand your org chart or your governance intent; they understand what works. If an orphaned account or over-scoped token is the fastest path to completion, it becomes the “efficient” choice.

    Principles for Safe MCP Adoption

    To avoid repeating the mistakes of the past (with orphaned or overprivileged accounts, shadow IT, unmanaged keys, and invisible activity), organizations need to adapt and apply core identity principles to AI agents. Gartner introduced the concept of specialized “guardian” systems, supervisory AI solutions that continuously evaluate, monitor, and enforce boundaries on working agents.

    We recommend organizations follow 5 core principles as they deploy MCP-based agentic solutions.

    1. Pair AI Agents with Human Sponsors: Every agent should be tied to an accountable human operator. If the human changes roles or leaves, the agent’s access should change with them. We agree with Gartner on the necessity of ownership mapping, ensuring full lineage from creation to deployment is tracked to both the machine and its human owner.
    2. Dynamic, Context-Aware Access: AI agents should not hold standing, permanent privileges. Their entitlements should be time-bound, session-aware, and limited to least privilege.
    3. Visibility and Auditability: Gartner has been increasingly calling for organizations to maintain a centralized AI agent catalog that inventories all official, shadow, and third-party agents, alongside comprehensive posture management and tamper-evident audit trails. In our view, every action an AI agent takes should be logged, correlated back to its human sponsor, and made available for review. This ensures accountability and prepares organizations for future compliance scrutiny. Visibility isn’t just “we logged it.” You need to tie actions to data reach: what the agent accessed, what it changed, what it exported, and whether that action touched regulated or sensitive datasets. Otherwise, you can’t distinguish “useful automation” from “silent data movement”. 
    4. Governance at Enterprise Scale: MCP adoption should extend across both new and legacy systems within a single, consistent governance fabric, so that security, compliance, and infrastructure teams are not working in silos. This is also where Gartner emphasizes the importance of an enterprise-owned supervisory layer, one that ensures consistent controls and reduces the risk of vendor lock-in as MCP adoption expands.
    5. Commitment to Good IAM Hygiene: As with all identities, authentication flows, authorization permissions, and implemented controls, strong hygiene, on the application server as well as the MCP server, is critical to keep every user within the proper bounds.
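    As a sketch of how principles 1-3 might look in practice, the snippet below ties every agent action to an accountable human sponsor and a short-lived, least-privilege entitlement. The Entitlement shape and field names are assumptions for illustration, not any specific product's API.

```python
from datetime import datetime, timedelta, timezone

class Entitlement:
    """A time-bound, single-scope grant: no standing privileges."""
    def __init__(self, scope, ttl_minutes):
        self.scope = scope
        self.expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)

    def allows(self, scope):
        return scope == self.scope and datetime.now(timezone.utc) < self.expires_at

def record_action(agent_id, sponsor, entitlement, scope, action):
    """Log every agent action, correlated back to its human sponsor."""
    return {
        "agent_id": agent_id,
        "human_sponsor": sponsor,   # who answers for this agent (principle 1)
        "scope": scope,
        "action": action,
        "allowed": entitlement.allows(scope),  # least privilege (principle 2)
        "timestamp": datetime.now(timezone.utc).isoformat(),  # audit (principle 3)
    }

grant = Entitlement(scope="crm:read", ttl_minutes=15)  # short-lived grant
print(record_action("agent-042", "jane.doe", grant, "crm:read", "export_contacts")["allowed"])   # True
print(record_action("agent-042", "jane.doe", grant, "crm:admin", "delete_user")["allowed"])      # False
```

    The point of the shape is that revocation is automatic: when the sponsor leaves or the TTL lapses, the entitlement simply stops allowing, and every denied attempt still lands in the audit trail.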

    The Bigger Picture

    AI agents pose a unique challenge beyond mere integration. They represent a shift in how work is delegated and executed inside enterprises. Left unmanaged, they will follow the same trajectory as other hidden identities: in-app local accounts, stale service identities, long-lived tokens, API keys, and bypass auth paths that have become identity dark matter over time. And because LLM-driven agents are optimized for efficiency (least friction, fewest steps), they will naturally gravitate to those ungoverned identities as the fastest path to success. If an orphaned local admin or an over-scoped token “just works,” the agent will use it, and reuse it.

    The opportunity is to get ahead of this curve.

    By treating AI agents as first-class identities from day one (discoverable, governable, and auditable), organizations can harness their potential without creating blind spots.

    Enterprises that do this will not only reduce their immediate attack surface but also position themselves for the regulatory and operational expectations that are sure to follow.

    In practice, most Agent-AI incidents won’t start with a zero-day. They’ll start with an identity shortcut that someone forgot to clean up, then get amplified by automation until it appears to be a systemic breach.

    The Bottom Line

    AI agents are here. They are already changing how enterprises operate.

    The challenge is not whether to use them, but how to govern them.

    Safe MCP adoption requires applying the same principles that identity practitioners know well (least privilege, lifecycle management, and auditability) to a new class of non-human identities that follow this protocol.

    If identity dark matter is the sum of what we can’t see or control, then unmanaged AI agents may become its fastest-growing source. The organizations that act now to bring them into the light will be the ones who can move quickly with AI without sacrificing trust, compliance, or security. That’s why Orchid Security is building identity infrastructure to eliminate dark matter, and make Agent AI adoption safe to deploy at enterprise scale.

    Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.


    Source: thehackernews.com…

  • Building a High-Impact Tier 1: The 3 Steps CISOs Must Follow

    Building a High-Impact Tier 1: The 3 Steps CISOs Must Follow

    Every CISO knows the uncomfortable truth about their Security Operations Center: the people most responsible for catching threats in real time are the people with the least experience. Tier 1 analysts sit at the front line of detection, and yet they are also the most vulnerable to the cognitive and organizational pressures that quietly erode SOC performance over time.

    The Paradox at the Gate: Why Tier 1 Carries the Weight but Lacks the Armor

    Tier 1 is the layer that processes the highest volume of alerts, performs initial triage, and determines what gets escalated. But it is built on a foundation that is structurally fragile. Entry-level analysts, high turnover rates, and relentless alert queues create conditions where even well-designed detection rules fail to translate into timely, accurate responses.

    The paradox is this:

    • Tier 1 performance defines SOC performance;
    • But Tier 1 is often the least supported, least empowered, and most cognitively overloaded layer

    Tier 1 analysts face a daily avalanche of alerts. Over time, this leads to:

    • Alert fatigue: constant exposure to high volumes reduces sensitivity to real danger.
    • Decision fatigue: repeated micro-decisions degrade judgment quality.
    • Cognitive overload: too many dashboards, too little context.
    • False-positive conditioning: when 90% of alerts are benign, skepticism becomes automatic.
    • Burnout and turnover: institutional memory evaporates

    For CISOs, these are not HR problems; they are business risk. When Tier 1 hesitates, misses, or delays escalation:

    • Dwell time increases,
    • Incident costs rise,
    • Detection quality degrades,
    • Executive confidence in security drops.

    If Tier 1 is weak, the entire SOC becomes reactive rather than predictive.

    The Core Engine Room: Monitoring and Triage as Business-Critical Workflows

    Tier 1 owns two foundational SOC processes: monitoring and alert triage. Monitoring is the continuous process of ingesting signals from across the environment — endpoints, networks, cloud infrastructure, identity systems — and applying detection logic to surface events of potential concern. 

    Triage is what happens next: the structured, human-driven process of evaluating those events, assigning severity, ruling out false positives, and determining whether escalation is warranted.

    On the surface, these are routine tasks: watch telemetry, sort alerts into true positive, false positive, or needs-escalation. But they are also revenue protection mechanisms, since they determine MTTD, MTTR, and resource allocation efficiency. When these workflows are inefficient:

    • Tier 2 and Tier 3 drown in noise,
    • Incident response begins late,
    • Business disruption expands,
    • Operational costs increase,
    • Regulatory exposure grows.
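    The triage decision described above can be sketched as a simple classification over alert context. The fields and thresholds here are illustrative assumptions, not a vendor schema; real triage logic layers in far more signal.

```python
def triage(alert):
    """Sort an alert into false_positive / escalate / monitor."""
    if alert.get("known_benign"):
        return "false_positive"
    score = 0
    if alert.get("ioc_match"):                       # indicator confirmed by TI
        score += 2
    if alert.get("asset_criticality", "low") == "high":
        score += 1
    if alert.get("repeat_count", 0) > 3:             # recurring across the estate
        score += 1
    return "escalate" if score >= 2 else "monitor"

queue = [
    {"id": 1, "known_benign": True},
    {"id": 2, "ioc_match": True, "asset_criticality": "high"},
    {"id": 3, "repeat_count": 1},
]
print([triage(a) for a in queue])  # ['false_positive', 'escalate', 'monitor']
```

    Note how much of the outcome hinges on the `ioc_match` field: without reliable intelligence feeding it, the whole decision collapses back onto analyst intuition, which is the theme of the next section.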

    Intelligence as Oxygen: The Foundation of Tier 1 Effectiveness

    Tier 1 cannot operate effectively in a vacuum, and raw alerts without context are just digital shadows. Actionable threat intelligence turns data into decisions. For a Tier 1 analyst asking, “Is this connected to an active campaign targeting our sector?”, it provides: 

    • IOC validation,
    • Campaign context,
    • TTP mapping,
    • Infrastructure associations,
    • Malware family attribution.

    Tier 1 analysts need threat intelligence more urgently than anyone else in the SOC, precisely because they make the most time-sensitive decisions with the least contextual background.

    Integrate actionable feeds and lookup enrichment into your SOC workflows to speed detection and improve operational resilience

    Reduce Dwell Time. Increase Confidence

    Step 1: Detect What Others Miss. Powering Monitoring with Live Threat Intelligence Feeds

    The first step toward a high-impact Tier 1 is upgrading the intelligence foundation of monitoring itself. Most SOC environments rely on detection rules built from static signatures or behavioral heuristics — logic that was accurate when written but degrades as adversaries adapt.

    Actionable threat intelligence feeds continuously inject fresh, verified indicators of compromise directly into the detection infrastructure. Rather than flagging anomalies and waiting for an analyst to research them, a feed-enriched monitoring layer flags activity that has already been confirmed as malicious through real-world analysis. Detections become based on behavioral ground truth, not statistical deviation.

    The operational effect on early detection is substantial. It compresses the window of exposure and dramatically reduces the cost of eventual containment.

    ANY.RUN’s Threat Intelligence Feeds aggregate indicators (malicious IPs, URLs, domains) drawn from a continuously operating malware analysis sandbox that processes real-world threats in real time. This means the data reflects active threat activity observed through dynamic execution analysis, not historical reporting or third-party aggregation alone. Adversaries who modify their malware to evade static signatures cannot easily evade behavioral observation.

    TI Feeds: data, benefits, integrations

    Delivered in STIX and MISP formats, TI Feeds integrate directly with SIEMs, firewalls, DNS resolvers, and endpoint detection systems. Each indicator carries contextual metadata like malware families and behavioral tags, so that a detection is not just a flag but an explanation. 
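    As a hedged sketch of what consuming such a feed can look like, the snippet below extracts atomic indicators from a STIX 2.1-style bundle into a per-type blocklist. The bundle is fabricated for illustration; real feed objects carry many more fields and pattern types.

```python
import json
import re

# A minimal, fabricated STIX 2.1-style bundle with two indicator objects.
bundle = json.loads("""
{
  "type": "bundle",
  "id": "bundle--0001",
  "objects": [
    {"type": "indicator", "id": "indicator--0001",
     "pattern": "[ipv4-addr:value = '203.0.113.7']",
     "labels": ["malicious-activity"]},
    {"type": "indicator", "id": "indicator--0002",
     "pattern": "[domain-name:value = 'evil.example']",
     "labels": ["malicious-activity"]}
  ]
}
""")

# Matches simple single-comparison STIX patterns only (enough for atomic IOCs).
PATTERN_RE = re.compile(r"\[(ipv4-addr|domain-name|url):value = '([^']+)'\]")

blocklist = {}
for obj in bundle["objects"]:
    if obj.get("type") != "indicator":
        continue
    m = PATTERN_RE.match(obj["pattern"])
    if m:
        blocklist.setdefault(m.group(1), set()).add(m.group(2))

print(sorted(blocklist["ipv4-addr"]))    # ['203.0.113.7']
print(sorted(blocklist["domain-name"]))  # ['evil.example']
```

    A SIEM or DNS resolver consuming the resulting sets gets exactly the property the article describes: detections keyed to verified indicators rather than generic rules.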

    For the business, intelligence-powered monitoring reduces MTTD, improves detection precision, and generates a measurable return on the broader security stack investment by ensuring that what gets detected is what actually matters.

    Step 2: From Flag to Finding. Enriching Every Alert with the Context Analysts Actually Need

    Before an analyst can enrich an alert, they often face a more immediate problem: a suspicious file or link has surfaced, and its nature is genuinely unknown. This is where the ANY.RUN Interactive Sandbox becomes a direct triage asset. 

    Rather than relying on static reputation checks alone, analysts can submit the artifact to the sandbox and observe its actual behavior in a live execution environment — watching in real time as the file makes network connections, modifies the registry, drops additional payloads, or attempts to evade detection. Within minutes, the sandbox produces a verdict grounded in what the sample actually does, not just what it looks like. 

    View sandbox analysis of a suspicious .exe file

    Sandbox detonation detects ScreenConnect malware

    But detection is only the beginning of a T1 analyst’s job. Once an alert surfaces, the analyst must determine whether it represents a genuine threat, understand what it means, and decide what to do with it — all under time pressure and against a queue of competing alerts. Without enrichment, this determination relies on analyst experience and manual research, both of which are in short supply at Tier 1.

    The quality and speed of enrichment determine the quality and speed of triage. Deep enrichment, grounded in behavioral analysis, allows analysts to reason about the actual risk of a detection rather than guessing at it.

    ANY.RUN’s Threat Intelligence Lookup delivers this depth on demand. Analysts can query any indicator — domain, IP, file hash, URL — and receive immediate context drawn from the sandbox’s analysis repository: full behavioral reports showing how the artifact executed, associated malware families and threat categories, network indicators observed during analysis, and connections to broader malicious infrastructure. A lookup is fast enough to fit into the triage workflow rather than interrupting it.

    domainName:"priutt-title.com"

    TI Lookup domain search with “Malicious” verdict and additional IOCs

    A single lookup allows us to understand that a doubtful domain spotted in the network traffic is most probably malicious, engaged in campaigns targeting IT, finance, and educational businesses all over the world right now, and linked to more indicators that can be used for further detection tuning. 
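    The enrichment flow itself is simple to wire into a triage queue. In the sketch below, the lookup function is a stand-in (the real TI Lookup endpoint and response schema are not reproduced here); the point is attaching a verdict and related IOCs to the alert before an analyst ever sees it.

```python
def enrich_alert(alert, lookup):
    """Attach lookup context to an alert. lookup(indicator) -> dict or None."""
    ctx = lookup(alert["indicator"])
    alert["verdict"] = ctx.get("verdict", "unknown") if ctx else "unknown"
    alert["related_iocs"] = ctx.get("related_iocs", []) if ctx else []
    return alert

def fake_lookup(indicator):
    # Stand-in for a real TI query such as domainName:"priutt-title.com".
    cache = {
        "priutt-title.com": {
            "verdict": "malicious",
            "related_iocs": ["203.0.113.7", "evil.example"],
        }
    }
    return cache.get(indicator)

alert = {"id": 7, "indicator": "priutt-title.com"}
print(enrich_alert(alert, fake_lookup)["verdict"])  # malicious
```

    Because the lookup is a plain function argument, the same flow slots into a SOAR playbook, a ticketing hook, or a custom script without changing the triage code.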

    This changes how T1 operates across several dimensions: 

    • Analysts make faster, more confident decisions because they have evidence rather than inference. 
    • Escalation notes improve because analysts can articulate what they found and why it matters, reducing back-and-forth with Tier 2 and accelerating the handoff.
    • False positives are closed with greater certainty, improving the precision of the escalation pipeline. 

    For business objectives, enriched triage supports several priorities simultaneously: 

    • It accelerates MTTD and MTTR, which are key metrics for both security program effectiveness and regulatory compliance. 
    • It improves the quality of incident documentation for post-incident review, insurance claims, and regulatory reporting. 
    • It reduces analyst burnout by replacing frustrating ambiguity with actionable clarity. 
    • Finally, it ensures that the SOC’s output reflects genuine analysis rather than overwhelmed guesswork.

    Step 3: Security That Compounds. Integrating ANY.RUN into Your Existing Stack

    Individual capabilities — however strong — deliver limited value when they operate in isolation. The third and most strategically significant step is integration: connecting ANY.RUN’s Threat Intelligence Feeds, Lookup, and Sandbox into the existing security infrastructure so that intelligence flows automatically across every layer of the environment.

    This is where investment in T1 intelligence capabilities translates into organization-wide risk reduction. 

    • SIEMs that ingest TI Feeds generate higher-precision alerts, because the detection layer is operating from verified behavioral indicators rather than generic rules. 
    • Firewalls and DNS resolvers that consume the same feeds block malicious infrastructure at the perimeter, reducing the volume of threats that reach endpoints and analysts in the first place. 
    • EDR systems enriched with sandbox-derived behavioral signatures detect malware that evades signature-based approaches. 
    • The entire stack becomes more coherent because it shares a common intelligence foundation.

    ANY.RUN supports this integration architecture through standard formats and APIs designed for compatibility with the security products already in deployment. STIX and MISP feed delivery integrates with leading SIEM and SOAR solutions. The TI Lookup API enables direct enrichment from within analyst workflows (ticketing systems, investigation dashboards, custom scripts) without requiring analysts to leave their primary interface. The sandbox itself can receive samples programmatically, enabling automated analysis pipelines that feed results back into detection and response systems.
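    A programmatic submit-and-poll pipeline of the kind described can be sketched as follows. The transport object stands in for real API calls, and the method names and report fields are assumptions for illustration rather than ANY.RUN's documented interface.

```python
import time

def analyze_sample(transport, sample_path, poll_interval=0.0, max_polls=10):
    """Submit a sample, poll until the verdict is ready, return (verdict, iocs)."""
    task_id = transport.submit(sample_path)
    for _ in range(max_polls):
        report = transport.status(task_id)
        if report["state"] == "done":
            return report["verdict"], report.get("iocs", [])
        time.sleep(poll_interval)
    raise TimeoutError(f"analysis of {sample_path} did not finish")

class StubTransport:
    """Pretend API client: analysis finishes on the second status poll."""
    def __init__(self):
        self.polls = 0

    def submit(self, path):
        return "task-001"

    def status(self, task_id):
        self.polls += 1
        if self.polls < 2:
            return {"state": "running"}
        return {"state": "done", "verdict": "malicious", "iocs": ["203.0.113.7"]}

verdict, iocs = analyze_sample(StubTransport(), "suspicious.exe")
print(verdict, iocs)  # malicious ['203.0.113.7']
```

    Swapping the stub for a real API client is the only change needed to feed sandbox-derived verdicts and IOCs straight back into detection rules, which is where the compounding return described below comes from.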

    ANY.RUN integration capabilities

    For T1 teams, the day-to-day effect of integration is a reduction in the manual effort that currently consumes analyst time. Indicators enriched automatically before triage, feeds that update detection logic without human intervention, escalation data that populates from sandbox analysis rather than manual documentation — these changes shift analyst effort from information gathering to genuine investigation. T1 becomes faster without becoming larger.

    For CISOs, the business case for integration centers on compounding returns. Each point of integration multiplies the value of the intelligence investment: a feed consumed by five security controls delivers five times the coverage of a feed consumed by one. 

    This coherence also strengthens the organization’s posture in conversations with the board, insurers, and regulators. An integrated, intelligence-driven security architecture demonstrates not just that controls exist, but that they are actively informed by current threat activity, a substantively different claim than checkbox compliance.

    Integrate dynamic malware analysis, fresh intelligence feeds, and contextual search to improve detection quality and business outcomes

    Transform Your SOC Into an Early Warning System

    Three Steps, One Outcome: A Tier 1 That Actually Protects the Business

    The path to a high-impact Tier 1 is not hiring more analysts or writing more detection rules. It lies in addressing the structural shortcomings that make T1 fragile: monitoring that cannot reflect current threats, triage that lacks the context to be decisive, and intelligence capabilities that remain disconnected from the stack they should be informing.

    ANY.RUN’s Threat Intelligence Feeds, Lookup, and Interactive Sandbox form a closed loop — from behavioral analysis to detection to investigation — that addresses each of the steps to top performance without adding operational complexity. The Sandbox generates ground truth. The Feeds operationalize it across the detection layer. The Lookup makes the same analytical depth available on demand for every analyst, regardless of experience.

    CISOs who prioritize this investment are not just improving SOC metrics. They are changing the equation for every threat actor who targets their organization. A Tier 1 team that detects early, triages with confidence, and escalates accurately is one of the highest-leverage risk reduction assets a security program can build.

    Combine live TI Feeds with indicator enrichment to transform monitoring into high-confidence detection.

    Build a Smarter SOC Frontline



    Source: thehackernews.com…

  • Open-Source CyberStrikeAI Deployed in AI-Driven FortiGate Attacks Across 55 Countries

    Open-Source CyberStrikeAI Deployed in AI-Driven FortiGate Attacks Across 55 Countries

    Ravie Lakshmanan | Mar 03, 2026 | Vulnerability / Artificial Intelligence

    The threat actor behind the recently disclosed artificial intelligence (AI)-assisted campaign targeting Fortinet FortiGate appliances leveraged an open-source, AI-native security testing platform called CyberStrikeAI to execute the attacks.

    The new findings come from Team Cymru, which detected its use following an analysis of the IP address (“212.11.64[.]250”) that was used by the suspected Russian-speaking threat actor to conduct automated mass scanning for vulnerable appliances.

    CyberStrikeAI is an “open-source artificial intelligence (AI) offensive security tool (OST) developed by a China-based developer who we assess has some ties to the Chinese government,” security researcher Will Thomas (aka @BushidoToken) said.

    Details of the AI-powered activity came to light last month when Amazon Threat Intelligence said it detected the unknown attacker systematically targeting FortiGate devices using generative artificial intelligence (AI) services like Anthropic Claude and DeepSeek, compromising over 600 appliances in 55 countries.

    According to the description in its GitHub repository, CyberStrikeAI is built in Go and integrates more than 100 security tools to enable vulnerability discovery, attack-chain analysis, knowledge retrieval, and result visualization. It’s maintained by a Chinese developer who goes by the online alias Ed1s0nZ.

    Team Cymru said it observed 21 unique IP addresses running CyberStrikeAI between January 20 and February 26, 2026, with servers primarily hosted in China, Singapore, and Hong Kong. Additional servers related to the tool have been detected in the U.S., Japan, and Switzerland.

    The Ed1s0nZ account, besides hosting CyberStrikeAI, has published several other tools that demonstrate their interest in exploitation and jailbreaking AI models:

    • watermark-tool, to add invisible digital watermarks to documents.
    • banana_blackmail, a Golang-based ransomware.
    • PrivHunterAI, a Golang-based tool that uses Kimi, DeepSeek, and GPT models to detect privilege escalation vulnerabilities.
    • ChatGPTJailbreak, which contains a README.md file with prompts to jailbreak OpenAI ChatGPT by tricking it into entering a Do Anything Now (DAN) mode or asking it to act as ChatGPT with Developer Mode enabled.
    • InfiltrateX, a Golang-based scanner for detecting privilege escalation vulnerabilities.
    • VigilantEye, a Golang-based tool that monitors the disclosure of sensitive information, such as phone numbers and ID card numbers, in databases. It’s configured to send an alert via a WeChat Work bot if a potential data breach is detected.

    “Further, Ed1s0nZ’s GitHub activities indicate they interact with organisations that support potentially Chinese government state-sponsored cyber operations,” Thomas said. “This includes Chinese private sector firms that have known ties to the Chinese Ministry of State Security (MSS).”

    One such company the developer has interacted with is Knownsec 404, a Chinese security vendor that suffered a major leak of more than 12,000 internal documents late last year, exposing the firm’s employee data, government clientele, hacking tools, large volumes of stolen data such as South Korean call logs and information related to Taiwan’s critical infrastructure organizations, and the inner workings of ongoing cyber operations targeting other countries.

    “Ostensibly, KnownSec appeared to be just another security company, but this is only a half truth,” DomainTools noted in an analysis published this January, describing it as a “state-aligned cyber contractor” capable of supporting Chinese national security, intelligence, and military objectives.

    “In reality, […] it has a shadow organization that works for the PLA, MSS, and the organs of the Chinese security state. This leak exposes a company that operates far beyond the role of a typical cybersecurity vendor. Tools like ZoomEye and the Critical Infrastructure Target Library give China a global reconnaissance system that catalogs millions of foreign IPs, domains, and organizations mapped by sector, geography, and strategic value.”

    Ed1s0nZ has also been observed making active modifications to a README.md file located in an eponymous repository, removing references to them having been honored with the Level 2 Contribution Award to the China National Vulnerability Database of Information Security (CNNVD). The developer has also claimed that “everything shared here is purely for research and learning.”

    According to research published by Bitsight last month, China maintains two different vulnerability databases: CNNVD and the Chinese National Vulnerability Database (CNVD). While CNNVD is overseen by the Ministry of State Security, CNVD is controlled by CNCERT. Previous findings from Recorded Future have revealed that CNNVD takes longer to publish vulnerabilities with higher CVSS scores than vulnerabilities with lower ones.

    “The developer’s recent attempt to scrub references to the CNNVD from their GitHub profile points to an active effort to obscure these state ties, likely to protect the tool’s operational viability as its popularity grows,” Thomas said. “The adoption of CyberStrikeAI is poised to accelerate, representing a concerning evolution in the proliferation of AI-augmented offensive security tools.”


    Source: thehackernews.com…

  • Fake Tech Support Spam Deploys Customized Havoc C2 Across Organizations

    Fake Tech Support Spam Deploys Customized Havoc C2 Across Organizations

    Threat hunters have called attention to a new campaign as part of which bad actors masqueraded as fake IT support to deliver the Havoc command-and-control (C2) framework as a precursor to data exfiltration or ransomware attack.

    The intrusions, identified by Huntress last month across five partner organizations, involved the threat actors using email spam as lures, followed by a phone call from an IT desk that activates a layered malware delivery pipeline.

    “In one organization, the adversary moved from initial access to nine additional endpoints over the course of eleven hours, deploying a mix of custom Havoc Demon payloads and legitimate RMM tools for persistence, with the speed of lateral movement strongly suggesting the end goal was data exfiltration, ransomware, or both,” researchers Michael Tigges, Anna Pham, and Bryan Masters said.

    It’s worth noting that the modus operandi is consistent with email bombing and Microsoft Teams phishing attacks orchestrated by threat actors associated with the Black Basta ransomware operation in the past. While the cybercrime group appears to have gone silent following a public leak of its internal chat logs last year, the continued presence of the group’s playbook suggests two possible scenarios.

    One possibility is that former Black Basta affiliates have moved on to other ransomware operations and are using them to mount fresh attacks; the other is that rival threat actors have adopted the same strategy to conduct social engineering and obtain initial access.

    The attack chain begins with a spam campaign that aims to overwhelm a target’s inbox with junk emails. In the next step, the threat actors, masquerading as IT support, contact the recipients and trick them into granting remote access to their machines, either via a Quick Assist session or by installing tools like AnyDesk, ostensibly to help remediate the problem.

    With the access in place, the adversary wastes no time launching the web browser and navigating to a fake landing page hosted on Amazon Web Services (AWS) that impersonates Microsoft and instructs the victim to enter their email address to access Outlook’s purported anti-spam rule update system.

    Clicking a button to “Update rules configuration” on the counterfeit page triggers the execution of a script that displays an overlay asking the user to enter their password.

    “This mechanism serves two purposes: it allows the threat actor (TA) to harvest credentials, which, when combined with the required email address, provides access to the control panel; concurrently, it adds a layer of authenticity to the interaction, convincing the user the process is genuine,” Huntress said.

    The attack also hinges on downloading the supposed anti-spam patch, which, in turn, leads to the execution of a legitimate binary named “ADNotificationManager.exe” (or “DLPUserAgent.exe” and “Werfault.exe”) to sideload a malicious DLL. The DLL payload implements defense evasion and executes the Havoc shellcode payload by spawning a thread containing the Demon agent.

    At least one of the identified DLLs (“vcruntime140_1.dll”) incorporates additional tricks to sidestep detection by security software, using control flow obfuscation, timing-based delay loops, and techniques like Hell’s Gate and Halo’s Gate to resolve system calls directly from ntdll.dll and bypass the userland hooks placed by endpoint detection and response (EDR) solutions.

    “Following the successful deployment of the Havoc Demon on the beachhead host, the threat actors began lateral movement across the victim environment,” the researchers said. “While the initial social engineering and malware delivery demonstrated some interesting techniques, the hands-on-keyboard activity that followed was comparatively straightforward.”

    This includes creating scheduled tasks to launch the Havoc Demon payload every time the infected endpoints are rebooted, providing the threat actors with persistent remote access. That said, the threat actor has been found to deploy legitimate remote monitoring and management (RMM) tools like Level RMM and XEOX on some compromised hosts instead of Havoc, thus diversifying their persistence mechanisms.

    These attacks hold some important takeaways: threat actors are more than happy to impersonate IT staff and call personal phone numbers if it improves their success rate; defense evasion techniques that were once limited to attacks on large firms or state-sponsored campaigns are becoming increasingly common; and commodity malware is being customized to bypass pattern-based signatures.

    Also of note is how swiftly and aggressively the attacks progress from initial compromise to lateral movement, as well as the numerous methods used to maintain persistence.

    “What begins as a phone call from ‘IT support’ ends with a fully instrumented network compromise – modified Havoc Demons deployed across endpoints, legitimate RMM tools repurposed as backup persistence,” Huntress concluded. “This campaign is a case study in how modern adversaries layer sophistication at every stage: social engineering to get in the door, DLL sideloading to stay invisible, and diversified persistence to survive remediation.”


    Source: thehackernews.com…

  • 30 Alleged Members of 'The Com' Arrested in Project Compass


    “The Com” has been a persistent cybersecurity threat for years, but law enforcement agencies are fighting back against what Europol calls an “extremist network” of hackers.

    Last week, Europol revealed the initial results of “Project Compass,” an ongoing international operation that launched in January 2025 to tackle The Com and its sprawling network of sub-groups. According to Europol, the operation has so far resulted in the arrests of 30 alleged perpetrators, with 179 members fully or partially identified by investigators.

    Notably, Project Compass is led by Europol’s European Counter Terrorism Centre and features partnerships with law enforcement agencies from 28 countries, including the US, the United Kingdom, Canada, and several European Union (EU) member states. Along with the arrests and identification of alleged members, the operation also fully or partially identified 62 victims and “safeguarded” four, according to Europol.


    The Com, short for “The Community,” is a loose collective of English-speaking cybercriminals, primarily between the ages of 13 and 25, who engage in a variety of malicious activity as part of sub-groups such as Scattered Spider and Scattered Lapsus$ Hunters.

    Europol explained The Com uses social media platforms and messaging applications, as well as online gaming and music streaming platforms, to “recruit, radicalise and exploit young people.” And because it operates with a decentralized structure, the collective and its sub-groups have proven particularly difficult to disrupt.

    “These networks deliberately target children in the digital spaces where they feel most at ease,” said Anna Sjöberg, head of Europol’s European Counter Terrorism Centre Project, in a statement. “Compass allows us to intervene earlier, safeguard victims and disrupt those who exploit vulnerability for extremist purposes. No country can address this threat alone —  and through this cooperation, we are closing the gaps they try to hide in.”

    Project Compass Goals and Targets

    Europol provided few details about Project Compass and its results thus far. It’s unclear where and when the arrests were made, what the charges were, and which sub-groups or threat campaigns suspects were allegedly connected to. 

    However, Europol’s website for the operation states that Project Compass targets “terrorism and violent extremism” along with threats to minors. The website cites one known sub-group within The Com known as “764,” which the US Department of Justice describes as a “nihilistic violent extremist (NVE) network.” Last April, US authorities arrested the two alleged ringleaders of 764.


    It’s unclear if these arrests were part of Project Compass. Dark Reading contacted Europol for more information about the operation, but the agency had not responded as of press time.

     

    A chart showing the initial results of Project Compass.

    Europol splits The Com into three distinct groups of activity: cyber activity, which includes targeting online commerce and infrastructure with data breaches and ransomware; offline activity, which includes physical damage to property, harm to other persons, and “acts of terrorism to promote a violent nihilistic worldview”; and extortion/sextortion activity, in which members extort minors to commit online and offline sex crimes, among other illegal activity.

    Europol noted that 764 is “notorious for its recruitment and grooming tactics” along with its violence. The sub-group typically targets minors and coerces them into participating in violent acts and producing explicit content, including child sexual exploitation material (CSAM). Members often use the materials to blackmail victims, according to Europol.

    Has Project Compass Made an Impact?

    While the number of arrests is significant, The Com has shown that it can withstand aggressive law enforcement actions. In 2024, US authorities arrested and charged several individuals accused of being prominent members of The Com and sub-groups such as Scattered Spider. 


    However, the collective’s hacking exploits showed no signs of slowing down in 2025. In fact, sub-groups were tied to several high-profile threat campaigns, including Scattered Spider’s attacks on the airline industry and Scattered Lapsus$ Hunters’ breaches of enterprise Salesforce instances.

    Europol said Project Compass will continue to work toward building “a reliable network of Member States and Third Countries” to assist with investigations into The Com; promote the exchange of information, intelligence and effective strategies among partner organizations; and assist with current investigations under specialized law enforcement units, such as those focused on counter-terrorism, CSAM, and organized crime.


    Source: www.darkreading.com…

  • The Tug-of-War Over Firewall Backlogs in the AI-Driven Development Era

    The Tug-of-War Over Firewall Backlogs in the AI-Driven Development Era

    The relationship between application developers and security teams has always been fraught with tension. At the core lies an ongoing battle — speed versus security — and that tug of war has been further exacerbated by mounting firewall backlog challenges driven by increased reliance on artificial intelligence and automation. 

    Traditionally, developers submit a firewall rule request before deploying a new application, service, or tool inside an enterprise environment. However, security teams can take weeks to review and approve the request, as they are overwhelmed by sprawling firewall rule sets and logs that they rely on to aid investigations, maintain policies, analyze network traffic, and identify unauthorized access. 

    Developers don’t want to wait. They want to build their next application. Security teams need time. They want to reduce risk. And as the rate of development and deployment accelerates, the volume of requests piles up.


    This dichotomy creates a natural tension across the organization, explains Aviatrix CPO Chris McHenry. Acknowledging that tension, embracing it, and learning how to reduce it is vital for organizations, he urges.

    “There can be 3,000 rule requests in backlogs,” he adds. “Response time is anywhere between two and four weeks. Developers just sit, waiting to continue to work.”

    A Tale as Old as Time

    The strained relationship between developers and security teams can be traced back to the evolution of enterprise IT architecture, explains McHenry. Rapid cloud adoption fundamentally changed how organizations deploy applications and manage user access. AI and automation will only accelerate the process by driving coding, deployment, and other development functions even faster.

    Before the cloud era, security teams occupied the driver’s seat, as organizations operated with physical laptops, desktops, and data centers. However, the emergence of cloud offerings sparked a fundamental shift in organizational operations.

    Before, security teams “could literally create physical boundaries that they could control,” McHenry tells Dark Reading. “It’s tough for people to go from full control to no control.” 

    Cloud adoption improved speed, allowing developers to build applications even faster. Developers became cloud buyers, as they didn’t have to wait for someone else to handle procurement and setup. 

    “It was such a pickle with cloud security postures in many environments because developers — and the business, more importantly — now expect that speed, and security is trying to play catch-up,” McHenry says.


    The friction between developers and security teams is actually improving, says Aaron Rose, Office of the CTO at Check Point. More organizations are treating security as a shared responsibility rather than a last-minute blocker, he adds. 

    However, developers and security teams face sharply opposing demands that continue to strain the relationship. The former needs to ship code quickly while the latter feels pressure to “reduce risk with limited time and context,” Rose tells Dark Reading.

    “When security tooling or approvals sit outside the developer workflow, you get long feedback loops, rework, and frustration on both sides,” he says.

    Architecture Evolves, Firewalls Stay the Same

    Developers used to be able to bypass firewalls more easily when policies relied on static IP addresses. But in the cloud, these change constantly. Now, it takes forever to get a new firewall rule in place, explains McHenry, noting that there are now more places for the process to break.

    If a firewall only knows how to handle IP addresses, organizations are in trouble, he warns. That can lead to significantly larger volumes of changes. Organizations face tight windows for changes, as firewalls represent a “huge blast radius” that can expose entire networks to risk. 


    “I used to be able to click, click, click; but now I have to go back to opening a ticket and waiting two weeks, and someone will put it in, and they may or may not approve it,” he says. He adds that developers must write 100 lines of approval code to justify the access they requested in the first place. 

    While hybrid and multi-cloud architectures changed operations by increasing the number of enforcement points and the number of policy translations needed for a single business change, many organizations did not adapt their strategies. They still run firewall operations like they always have, explains Rose. That means tickets, manual review, manual implementation, and periodic audits, he adds. 

    “That model can’t keep up with modern delivery cadence, so backlogs emerge,” Rose says. 

    McHenry observed similar disconnects. Organizations will try to apply previous practices to new cloud services, but the speed developers were accustomed to slows down, and that’s a huge point of frustration for them. 

    ‘It’s Only Going to Get Worse’

    In large enterprises, Rose attributes backlogs to multi-vendor sprawl, global organizations, and layered processes. For small and midsize businesses (SMBs), it’s usually a resource issue — or lack thereof. One person may handle networking, security, cloud, and sometimes even help desk functions, adds Rose.

    “Changes get delayed not because of policy bureaucracy, but because there simply aren’t enough hours in a day,” Rose says.  

    Backlogs slow business operations, heighten network exposure, and drastically reduce visibility. McHenry reveals that people would be “surprised” by how many of the organizations users interact with regularly have no visibility or control over what comes into or goes out of their cloud. 

    Many SMBs don’t use rules at all, because they don’t have the capacity to manage them, says McHenry. Their firewalls are generally wide open, he warns.    

    Organizations often struggle to balance prioritizing new cybersecurity controls with maintaining operational speed and revenue. But McHenry says those two don’t have to be mutually exclusive.  

    Automating certain processes and embedding controls into developer workflows can help enterprises address these challenges. Enterprises are now treating firewall policies as an engineered product by defining intent in application terms, automating risk checks, and reserving human review for exceptions or high-risk changes, explains Rose. 

    Improving the relationship between developers and security presents a significant innovation opportunity for organizations, McHenry adds. Support developers with what they’re accustomed to regarding self-service, but do so in a way that still supports security best practices, he recommends. Organizations respond to the tension in a number of ways, but it’s not just about deploying new technology — processes need to be updated as well. 

    “If app teams are moving faster with Claude code and AI development, then holy crap, the log is going to grow like crazy,” McHenry warns. “Without changing the process, it’s only going to get worse.”


    Source: www.darkreading.com…

  • Critical OpenClaw Vulnerability Exposes AI Agent Risks


    A newly disclosed — and now patched — vulnerability in the fastest-growing AI agent tool in the developer ecosystem underscores the expanding risks organizations face from deploying AI in their environments without adequate security oversight or controls.

    The vulnerability in OpenClaw, the open source AI agent that has seen meteoric adoption among developers since its launch last November, allowed a malicious website to hijack a developer’s AI agent without requiring any plug-ins, browser extensions, or user interaction. The vulnerability stemmed from OpenClaw’s failure to distinguish between connections coming from the developer’s own trusted apps and services and connections coming from a malicious website running in the developer’s browser.

    High Severity Vulnerability

    The OpenClaw team deemed the issue high severity and released a patch for it less than 24 hours after researchers at Oasis Security informed them about the flaw. “If OpenClaw is installed, update immediately,” Oasis recommended. “The fix for this vulnerability is included in version 2026.2.25 and later. Ensure all instances are updated — treat this with the same urgency as any critical security patch.”


    OpenClaw, previously known as MoltBot and before that Clawdbot, is a viral open source AI agent that runs locally on a user’s system as a personal AI assistant. It integrates with messaging apps, calendars, and developer tools, and allows users to automate workflows, manage files, execute shell commands, and take a wide range of other autonomous actions. Developers can also extend its capabilities through community-built plug-ins, or “skills,” available via its marketplace, ClawHub. That combination of flexibility, local control, and a fast-growing ecosystem has made it extraordinarily popular among developers in a very short time. In just three months since launch, OpenClaw has already become the most starred project on GitHub, surpassing the React JavaScript library for building user interfaces.

    A Growing List of Security Issues

    But that unprecedented adoption speed has also exposed organizations to new security risks, including vulnerabilities like CVE-2026-25253, which gave attackers a way to steal authentication tokens, as well as command injection bugs and prompt injection attacks. Examples include CVE-2026-24763, CVE-2026-25157, and CVE-2026-25475.

    The growing presence of malicious skills on ClawHub and SkillsMP has been another issue. Researchers at Koi Security recently found that out of 10,700 skills on ClawHub, more than 820 were malicious, a sharp increase from the 324 it had discovered just a few weeks prior in early February. Trend Micro found threat actors using 39 such skills across ClawHub and SkillsMP to distribute the Atomic macOS info stealer.


    The vulnerability that Oasis Security discovered stemmed from OpenClaw’s incorrect assumption that any connection originating from localhost — or from within the computer itself — can be implicitly trusted, without accounting for the fact that websites can also originate connections from that same local address. Researchers at Oasis found that if a developer were to visit any attacker-controlled or compromised website, JavaScript on that page could silently open a WebSocket connection directly to the OpenClaw gateway. 

    They discovered OpenClaw had set no rate limits or failure thresholds for incorrect passwords, meaning an attacker could use brute-force methods to attempt to guess the password to the gateway without triggering any alert. Once authenticated, the attacker could essentially register any malicious script as trusted and gain full control of the developer’s device.
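    The missing failure threshold is straightforward to add in principle. The sketch below is a hypothetical illustration, not code from OpenClaw; the `MAX_FAILURES` and `LOCKOUT_SECONDS` values are assumed. It shows a minimal per-client lockout counter of the kind that turns an unthrottled brute force into an impractically slow one:

    ```python
    import time
    from collections import defaultdict

    MAX_FAILURES = 5        # assumed threshold before lockout
    LOCKOUT_SECONDS = 300   # assumed lockout window

    _failures: dict[str, list[float]] = defaultdict(list)

    def check_password_attempt(client: str, ok: bool, now: float | None = None) -> bool:
        """Reject a client outright once it exceeds the failure threshold.

        Tracks recent failed attempts per client and refuses further
        authentication for a fixed window, so repeated wrong guesses
        cannot continue silently at full speed.
        """
        now = now if now is not None else time.time()
        # keep only failures inside the lockout window
        recent = [t for t in _failures[client] if now - t < LOCKOUT_SECONDS]
        _failures[client] = recent
        if len(recent) >= MAX_FAILURES:
            return False  # locked out; do not even evaluate the password
        if not ok:
            _failures[client].append(now)
        return ok
    ```

    Even this simple counter would also produce a natural alerting signal: every lockout event is a concrete indicator of a guessing attack in progress.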

    The Need for a New Approach

    The vulnerability comes at a time when stakeholders have expressed growing concern about the ungoverned spread of agentic AI tools within organizations. Randolph Barr, chief information security officer (CISO) at Cequence Security, says the risks mandate the need for security barriers between AI bots and the apps, APIs, and credentials they use. OpenClaw has basic safeguards in place, like authentication, limits on which devices can connect, controls over which tools can run, and the option to sandbox it. “But those controls don’t eliminate risk if the agent runs locally with broad access to files, credentials, and connected systems,” he says in comments to Dark Reading.


    “The real protection comes from layered defenses, MDM enforcement, removing admin rights, scoped credentials, API monitoring, rate limiting, and sandboxing,” Barr says. “Those measures won’t stop every exploit, but they significantly reduce blast radius and limit what an attacker can do if an agent is compromised.” 

    Organizations with mature identity, API security, MDM, and logging capabilities should consider moving their security controls to the execution layer, while less mature environments should think about shifting from “authenticate and trust” to continuous verification of behavior, especially for non-human identities like AI agents, he notes.

    Organizations should also treat any browser-reachable local AI gateway as an Internet-facing service, according to Jason Soroko, senior fellow at Sectigo. “Remove the browser’s path to it where possible by using Unix domain sockets or named pipes, or by interposing a native companion that owns the connection,” he says. Organizations should also enforce strict Origin allowlisting, require cryptographic client identity such as mTLS or signed challenges, and ban auto-approval based on source IP. 
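    The Origin-allowlisting advice can be sketched as a handshake check. This is an illustrative fragment, not OpenClaw code, and the allowed origin is a made-up placeholder. The key point is that browsers attach the page’s `Origin` header to every WebSocket upgrade, so a localhost gateway can refuse connections opened by arbitrary websites even though the TCP connection itself arrives from 127.0.0.1:

    ```python
    from urllib.parse import urlparse

    # Hypothetical allowlist of UI origins permitted to open a gateway session.
    ALLOWED_ORIGINS = {"https://gateway-ui.example"}

    def accept_handshake(headers: dict[str, str]) -> bool:
        """Reject WebSocket upgrades whose Origin header is not allowlisted."""
        origin = headers.get("Origin", "")
        if origin not in ALLOWED_ORIGINS:
            return False  # includes the empty/missing-header case
        # insist on a well-formed scheme://host origin
        parsed = urlparse(origin)
        return bool(parsed.scheme and parsed.netloc)
    ```

    Note that an Origin check defeats browser-initiated attacks like the one described here, but not a native process that forges the header, which is why the article pairs it with cryptographic client identity such as mTLS.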

    “Then shrink the blast radius even when a session is established,” Soroko says. “Adopt a capability model that scopes what the agent can do by verb, directory, destination, and time, with step-up consent for high-risk sinks like shell execution, credential access, and large data reads.”


    Source: www.darkreading.com…

  • Bug in Google's Gemini AI Panel Opens Door to Hijacking


    Google has fixed a high-severity flaw in its implementation of Gemini AI in the Chrome browser that could have allowed attackers to escalate privileges, violate user privacy while browsing, and access sensitive system resources. Researchers said the vulnerability demonstrates new security hazards that come with the deployment and use of agentic browsers that have AI built in.

    Specifically, the flaw tracked as CVE-2026-0628 could have allowed malicious browser extensions with only basic permissions to escalate privileges to access the victim’s camera and microphone without consent; take screenshots of any website; and access local files and directories, according to a report published today by researchers from Palo Alto Networks’ Unit 42, who discovered the flaw.

    “The vulnerability put any user of the new Gemini feature in Chrome at risk of system compromise if they had installed a malicious extension,” Gal Weizman, senior principal researcher, Palo Alto Networks, tells Dark Reading. “Beyond individual users, the risk profile was significantly amplified within business and organizational environments.”


    In Chrome, the Gemini Live feature operates within a privileged browser side panel, granting it elevated capabilities to perform actions such as accessing on-screen content and interacting with local system resources to complete complex tasks. Indeed, many browsers now have agentic AI capabilities integrated into the browsing experience, allowing for quick dissemination of data and the execution of complex, multistep operations that were previously impossible or required extensions and manual steps by the operator. 

    However, with this expanded capability and privileged access comes “a new and widened attack surface” that introduces new risks to both home and corporate users, Weizman wrote in the report. “This creates security implications that are not present in traditional browsers.”

    The Gemini AI Security Flaw & Its Fix

    Researchers uncovered the flaw in the Gemini side panel’s handling of an extension with access to a basic permission set through the “declarativeNetRequest” API, which failed to maintain a proper security boundary. This “allowed permissions that could have enabled an attacker to inject JavaScript code into the new Gemini panel,” Weizman wrote in the report.

    This API function can be used for legitimate purposes, such as how AdBlock stops requests that could lead to privacy-undermining ads. In fact, it is allowed by design for some extension behavior, and would not be problematic if loaded into a typical browser tab, Weizman says.


    However, in this case, it was the specific integration of Gemini AI with the browser that made the function potentially malicious, he said. The flaw allowed the same code injection to occur when the app was loaded within the new, trusted, and highly privileged Gemini side panel component, when “Chrome hooks it with access to powerful capabilities,” Weizman wrote. “These include being able to read local files, take screenshots, access the camera and microphone and more, so the app could perform complex tasks. Being able to intercept it under that setting would have allowed attackers to gain access to these powers, too.”

    Palo Alto researchers demonstrated how an ordinary extension could hijack the Gemini panel and perform the aforementioned malicious activities in October; Google responded, was able to reproduce the exploit conditions, and subsequently patched the flaw in early January, according to the report.

    Agentic AI Browsers Add Security Risk

    The risk of vulnerabilities like this one exposing browsers to malicious activity increases as AI becomes more integrated into their design, Palo Alto researchers noted. That’s due to the proactive nature of AI technology, which creates a new risk model because it is not just displaying content, as a typical browser does, but acting upon it as well. 


    “These agents can inherit a user’s authenticated browser session and perform privileged actions inside enterprise applications, including modifying data or triggering workflows,” Anupam Upadhyaya, senior vice president of product management for Palo Alto Networks’ Prisma SASE, tells Dark Reading.

    This, in turn, means that developers of agentic browsers need to rethink and bolster security, creating browsers with native security that is “continuous and policy-enforced — not bolted on after deployment,” Upadhyaya says. “Designers should build in real-time inspection of prompts, AI responses, and rendered content directly inside the browser, where users, data, and AI interact,” he says.

    Defenders in general also need to understand that this new attack surface is one that “traditional network and endpoint controls were never designed to monitor,” and adjust their own strategies accordingly beyond these controls, Upadhyaya says.

    A good place to start would be by treating the browser as both “a primary attack surface and a potential control plane,” he says. “That means gaining visibility into which AI browsers and extensions are in use; in-browser visibility into user navigation, uploads, copy/paste activity and extension behavior; and enforcing policy controls in real time before data leaves the browser.”


    Source: www.darkreading.com…

  • North Korean Hackers Publish 26 npm Packages Hiding Pastebin C2 for Cross-Platform RAT


    Ravie Lakshmanan | Mar 02, 2026 | Supply Chain Attack / Malware

    Cybersecurity researchers have disclosed a new iteration of the ongoing Contagious Interview campaign, where the North Korean threat actors have published a set of 26 malicious packages to the npm registry.

    The packages masquerade as developer tools but contain functionality to extract the actual command-and-control (C2) endpoints, using seemingly harmless Pastebin content as a dead drop resolver, and ultimately drop a developer-targeted credential stealer and remote access trojan. The C2 infrastructure is hosted on Vercel across 31 deployments.

    The campaign, which Socket and kmsec.uk’s Kieran Miyamoto track under the moniker StegaBin, is attributed to a North Korean threat activity cluster known as Famous Chollima.

    “The loader extracts C2 URLs steganographically encoded within three Pastebin pastes, innocuous computer science essays in which characters at evenly-spaced positions have been replaced to spell out hidden infrastructure addresses,” Socket researchers Philipp Burckhardt and Peter van der Zee said.

    The list of the malicious npm packages is as follows –

    • argonist@0.41.0
    • bcryptance@6.5.2
    • bee-quarl@2.1.2
    • bubble-core@6.26.2
    • corstoken@2.14.7
    • daytonjs@1.11.20
    • ether-lint@5.9.4
    • expressjs-lint@5.3.2
    • fastify-lint@5.8.0
    • formmiderable@3.5.7
    • hapi-lint@19.1.2
    • iosysredis@5.13.2
    • jslint-config@10.22.2
    • jsnwebapptoken@8.40.2
    • kafkajs-lint@2.21.3
    • loadash-lint@4.17.24
    • mqttoken@5.40.2
    • prism-lint@7.4.2
    • promanage@6.0.21
    • sequelization@6.40.2
    • typoriem@0.4.17
    • undicy-lint@7.23.1
    • uuindex@13.1.0
    • vitetest-lint@4.1.21
    • windowston@3.19.2
    • zoddle@4.4.2

    All identified packages come with an install script (“install.js”) that’s automatically executed during package installation, which, in turn, runs the malicious payload located in “vendor/scrypt-js/version.js.” Another common aspect that unites the 26 packages is that they explicitly declare the legitimate package they are typosquatting as a dependency, likely in an attempt to make them appear credible.

    The payload serves as a text steganography decoder by contacting a Pastebin URL and extracting its contents to retrieve the actual C2 Vercel URLs. While the pastes seemingly contain a benign essay about computer science, the decoder is designed to look at specific characters in certain positions in the text and string them together to create a list of C2 domains.

    “The decoder strips zero-width Unicode characters, reads a 5-digit length marker from the beginning, calculates evenly-spaced character positions throughout the text, and extracts the characters at those positions,” Socket said. “The extracted characters are then split on a ||| separator (with an ===END=== termination marker) to produce an array of C2 domain names.”
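    Based on Socket’s description, the decoding logic can be approximated in a few lines. The sketch below is a reconstruction for illustration, not the actual malware code; the exact marker placement and spacing rule are assumptions drawn from the published write-up, and the domains in the usage note are invented:

    ```python
    import re

    # Common zero-width code points the decoder is said to strip first.
    ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\ufeff]")

    def decode_dead_drop(paste: str) -> list[str]:
        """Recover hidden domain strings from a benign-looking cover text."""
        text = ZERO_WIDTH.sub("", paste)
        length = int(text[:5])        # 5-digit marker: number of hidden chars
        body = text[5:]
        step = len(body) // length    # evenly spaced positions in the cover
        hidden = "".join(body[i * step] for i in range(length))
        payload = hidden.split("===END===")[0]   # drop the termination marker
        return [d for d in payload.split("|||") if d]
    ```

    For a cover text carrying, say, `c2-one.example|||c2-two.example===END===` at every fifth position, the function returns the two hypothetical domains as a list. The appeal of the scheme for the attacker is that the paste passes casual human review as an ordinary essay while still serving as machine-readable infrastructure.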

    The malware then reaches out to the decoded domains to fetch platform-specific payloads for Windows, macOS, and Linux, a tactic widely observed in the Contagious Interview campaign. One such domain, “ext-checkdin.vercel[.]app,” has been found to serve a shell script, which then contacts the same URL to retrieve a RAT component.

    The Trojan connects to 103.106.67[.]63:1244 to await further instructions that allow it to change the current directory and execute shell commands, through which a comprehensive intelligence collection suite is deployed. It contains nine modules to facilitate Microsoft Visual Studio Code (VS Code) persistence, keylogging and clipboard theft, browser credential harvesting, TruffleHog secret scanning, and Git repository and SSH key exfiltration –

    • vs, which uses a malicious tasks.json file to contact a Vercel domain every time a project is opened in VS Code by taking advantage of the runOn: “folderOpen” trigger. The module specifically scans the victim’s VS Code config directory across all three platforms and writes the malicious tasks.json directly into it.
    • clip, which acts as a keylogger, mouse tracker, and clipboard stealer with support for active window tracking and conducts periodic exfiltration every 10 minutes.
    • bro, which is a Python payload to steal browser credential stores.
    • j, which is a Node.js module used for browser and cryptocurrency theft by targeting Google Chrome, Brave, Firefox, Opera, and Microsoft Edge, and extensions like MetaMask, Phantom, Coinbase Wallet, Binance, Trust, Exodus, and Keplr, among others. On macOS, it also targets the iCloud Keychain.
    • z, which enumerates the file system and steals files matching certain predefined patterns.
    • n, which acts as a RAT to grant the attacker the ability to remotely control the infected host in real-time via a persistent WebSocket connection to 103.106.67[.]63:1247 and exfiltrate data of interest over FTP.
    • truffle, which downloads the legitimate TruffleHog secrets scanner from the official GitHub page to discover and exfiltrate developer secrets.
    • git, which collects files from .ssh directories, extracts Git credentials, and scans repositories.
    • sched, which is the same as “vendor/scrypt-js/version.js” and is redeployed as a persistence mechanism.
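The “vs” persistence trick is straightforward to hunt for: VS Code only auto-runs a task when its runOptions declare runOn: “folderOpen”. A minimal detection sketch is shown below; the helper name is hypothetical, while the JSON fields match VS Code's real task schema:

```python
import json
from pathlib import Path

# Hypothetical detection helper: flags VS Code tasks.json files that
# auto-run a task on folder open, the trigger abused by the "vs" module.
# Note: real tasks.json files may contain comments (JSONC), which strict
# json.loads rejects; this sketch simply skips such files.
def find_autorun_tasks(root: Path) -> list[dict]:
    suspicious = []
    for tasks_file in root.rglob(".vscode/tasks.json"):
        try:
            tasks = json.loads(tasks_file.read_text()).get("tasks", [])
        except (json.JSONDecodeError, OSError):
            continue
        for task in tasks:
            if task.get("runOptions", {}).get("runOn") == "folderOpen":
                suspicious.append({"file": str(tasks_file),
                                   "command": task.get("command", "")})
    return suspicious
```

Running a scan like this across developer workstations surfaces any task configured to execute a command the moment a project folder is opened, legitimate or otherwise, for manual review.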

    “While previous waves of the Contagious Interview campaign relied on relatively straightforward malicious scripts and Bitbucket-hosted payloads, this latest iteration demonstrates a concerted effort to bypass both automated detection and human review,” Socket concluded.

    “The use of character-level steganography on Pastebin and multi-stage Vercel routing points to an adversary that is refining its evasion techniques and attempting to make its operations more resilient.”

    The disclosure comes as the North Korean actors have also been observed publishing malicious npm packages (e.g., express-core-validator) to fetch a next-stage JavaScript payload hosted on Google Drive.

    “Only a single package has been published with this new technique,” Miyamoto said. “It is likely Famous Chollima will continue to leverage multiple techniques and infrastructure to deliver follow-on payloads. It is unlikely this signals a complete overhaul of their stager behaviour on npm.”


    Source: thehackernews.com…

  • How to Protect Your SaaS from Bot Attacks with SafeLine WAF

    How to Protect Your SaaS from Bot Attacks with SafeLine WAF

    Most SaaS teams remember the day their user traffic started growing fast. Few notice the day bots started targeting them.

    On paper, everything looks great: more sign-ups, more sessions, more API calls. But in reality, something feels off:

    • Sign-ups increase, but users aren’t activating.
    • Server costs rise faster than revenue.
    • Logs are filled with repeated requests from strange user agents.

    If this sounds familiar, it’s not just a sign of popularity. Your app is under constant automated attack, even if no ransom emails have arrived. Your load balancer sees traffic. Your product team sees “growth”. Your database sees pain.

    This is where a WAF like SafeLine fits in.

    SafeLine is a self-hosted web application firewall (WAF) that sits in front of your app and inspects every HTTP request before it reaches your code. 

    It does not just look for broken packets or known bad IPs. It watches how traffic behaves: what it sends, how fast, in what patterns, and against which endpoints.

    In this article, we’ll show what real attacks look like for a SaaS product, how bots exploit business logic, and how SafeLine can protect your app without adding extra work for your team.

    The Attacks SaaS Products Actually See

    When people say “web attacks”, many think only about SQL injection or XSS. Those still exist, and SafeLine blocks them with a built‑in Semantic Analysis Engine. 

    SafeLine’s Semantic Analysis Engine reads HTTP requests the way a security engineer would. Instead of just hunting for keywords, it understands context: decoding payloads, spotting unusual field types, and recognizing attack intent across SQL, JS, NoSQL, and modern frameworks. It blocks sophisticated bots and zero-days with 99.45% accuracy, with no constant rule tweaks needed.

    Malicious Requests Blocked by SafeLine

    But for SaaS, the most painful attacks are not always the most “technical”. They are the ones that bend your business rules.

    Common examples:

    • Fake sign‑ups: Automated sign‑up scripts farm free trials, burn invitation codes, or harvest discount coupons.
    • Credential stuffing: Bots try leaked username/password pairs against your login endpoint until something works.
    • API scraping: Competitors or generic scrapers walk your API, page by page, copying your content or pricing.
    • Abusive automation: One user (or botnet) triggers heavy background jobs, export tasks, or webhook storms that you pay for.
    • Bot traffic spikes: Sudden waves of scripted requests hit the same endpoints, not big enough to be a classic DDoS, but enough to slow everything down.
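Credential stuffing in particular has a telltale shape: a single source cycling through many distinct usernames with few repeats. A toy heuristic illustrates the idea (the class name and threshold are made up for this example; a real WAF combines many such signals):

```python
from collections import defaultdict

# Toy heuristic: flag an IP as likely credential stuffing once it has
# attempted logins for too many distinct usernames. The threshold is
# illustrative, not a recommended production value.
class StuffingDetector:
    def __init__(self, max_distinct_users: int = 5):
        self.max_distinct_users = max_distinct_users
        self.seen: dict[str, set[str]] = defaultdict(set)

    def record_login_attempt(self, ip: str, username: str) -> bool:
        """Returns True when the IP crosses the suspicion threshold."""
        self.seen[ip].add(username)
        return len(self.seen[ip]) > self.max_distinct_users
```

A legitimate user mistyping their own password never trips this check; a bot replaying a leaked credential list does so almost immediately.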

    The tricky part is that all these requests look “normal” at the HTTP level.

    They are:

    • Well‑formed
    • Often over HTTPS
    • Using your documented API

    Why a Self‑Hosted WAF Makes Sense for SaaS

    There are many cloud WAF products. They work well for a lot of teams. But SaaS products have some special concerns:

    • Data control: You may not want every request and response to flow through another company’s cloud.
    • Latency and routing: Extra external hops can matter for global users.
    • Debugging: When a cloud WAF blocks something, you often see a vague message, not full context.

    SafeLine takes a different path:

    • It is self‑hosted and runs as a reverse proxy in front of your app.
    • You keep full control over logs and traffic.
    • You see exactly why a request was blocked, in your own dashboards.

    For SaaS teams, that means you can:

    • Meet stricter customer or compliance demands about where data flows.
    • Tune rules without opening a support ticket.
    • Treat your WAF configuration as part of your normal infrastructure, not a black‑box service.

    How SafeLine Sees and Stops Bot Traffic

    Bots are not one thing. Some are clumsy scripts; some are almost indistinguishable from real users. SafeLine uses several layers to deal with them.

    1. Understanding traffic, not just signatures

    SafeLine combines rule‑based checks with semantic analysis of requests.

    In practice, that means it looks at:

    • Parameters and payloads (for injection attempts, strange encodings, exploit patterns).
    • URL structures and access paths (for scanners, crawlers, and exploit kits).
    • Frequency and distribution of calls (for login abuse, scraping, and subtle flood attacks).

    This is what allows it to:

    • Block classic web attacks with a low false positive rate.
    • Detect weird patterns that do not match any single “signature” but clearly are not normal user behavior.

    2. Anti‑Bot challenges

    Some bots can only be stopped by forcing them to prove they are not machines. SafeLine includes an Anti‑Bot Challenge feature: when it detects suspicious traffic, it can present a challenge that real browsers handle, but bots fail.

    Key points:

    • Normal human users barely notice it.
    • Basic crawlers, scripts, and abuse tools get blocked or slowed down sharply.
    • You decide where to enable it: sign‑up, login, pricing pages, or specific APIs.

    3. Rate limiting as a safety net

    For SaaS, “too much of a good thing” is a real problem. One overly eager integration, one faulty script, or one attack can exhaust resources.

    SafeLine’s rate limiting lets you:

    • Limit how many requests an IP or token can make to specific endpoints per second, minute, or hour.
    • Protect login, sign‑up, and expensive APIs from brute force and floods.
    • Keep your application stable even under abnormal spikes.

    This is essential for:

    • Protecting free tiers from abuse.
    • Keeping “unlimited API calls” from turning into “unlimited cloud bills”.

    4. Identity and access controls

    Some parts of your SaaS should never be public:

    • Internal dashboards
    • Early beta features
    • Region‑specific admin tools

    SafeLine provides an authentication challenge feature. When enabled, visitors must enter a password you set before they can continue.

    This is a simple way to:

    • Hide internal or staging environments from scanners and bots.
    • Reduce the blast radius of misconfigured or forgotten routes.

    A Simple Story: A SaaS Team vs. Bot Abuse

    There is a small B2B SaaS product:

    • Less than 10 people on the team.
    • Nginx fronting a set of REST APIs.
    • Free trials, public sign‑up, and open API docs.

    At first, numbers look good. Then:

    • Fake sign‑ups climb to 150–200 per day.
    • CPU peaks hit 70% because of login attempts and abuse traffic.
    • The database grows faster than paying users.

    When they add SafeLine:

    They deploy it in front of Nginx, as a self-hosted WAF.
    • They enable bot detection, rate limits on sign‑up and login, and basic abuse rules for new accounts.

    Within one week:

    • Fake registrations fall below 10 per day.
    • CPU stabilizes around 40%.
    • Conversion starts to recover, because real users face fewer obstacles.

    The interesting part is not the numbers.

    It is what the team did not have to do:

    • They did not design complex in‑app throttling.
    • They did not maintain custom bot‑blocking code.
    • They did not argue for months about whether they could send traffic to an external inspection service.

    SafeLine quietly took the first wave of abuse, and the product team focused again on features and customers.

    How SafeLine Fits into a SaaS Stack

    From an architecture point of view, SafeLine behaves like a reverse proxy:

    • External traffic → SafeLine → your Nginx / app servers.

    This makes it easier to adopt without rewriting your product.

    You can:

    • Put SafeLine in front of your main web app and API gateway.
    • Slowly route more domains and services through it as you gain confidence.

    The SafeLine dashboard then becomes your “security console”:

    • You see attack logs: which IP tried what, which rule triggered, what payload was blocked.
    • You see trends: increased scans, new kinds of payloads, or growing bot patterns.
    • You can adjust rules and protections in a few clicks.

    Deployment and Ease of Use

    SafeLine WAF is designed for SaaS operators who may not have dedicated security teams. 

    A deployment typically takes less than 10 minutes. Below is the one-click deployment command:

    bash -c "$(curl -fsSLk https://waf.chaitin.com/release/latest/manager.sh)" -- --en

    See the official documentation for detailed instructions: https://docs.waf.chaitin.com/en/GetStarted/Deploy

    More importantly, SafeLine still provides a free edition for all users worldwide. Once you install it, it’s ready to use right out of the box, at no extra cost. A paid license is required only when you need advanced features.

    After installation, you’ll see a clean interface with a super simple and intuitive configuration experience. Protect your first app by following this official tutorial: https://docs.waf.chaitin.com/en/GetStarted/AddApplication.

    Once configured, the WAF operates autonomously while providing detailed visibility into threats and mitigation actions.

    Looking Ahead: Continuous Security

    The threat landscape is constantly evolving. Bots are becoming smarter, attacks are increasingly targeted, and SaaS platforms continue to grow in complexity. To stay ahead, companies must:

    • Monitor traffic behavior continuously
    • Adapt rate-limiting and bot detection rules dynamically
    • Regularly audit logs for unusual activity
    • Ensure sensitive endpoints have layered protections

    SafeLine’s approach aligns perfectly with these needs, providing a flexible, data-driven security layer that grows with your SaaS business. 

    For those interested in exploring the technology firsthand, visit the SafeLine GitHub Repository or experience the Live Demo. Or you can simply install it and try the free edition right away!

    This article is a contributed piece from one of our valued partners.


    Source: thehackernews.com…