Today in “Attack techniques so stupid, they can’t possibly succeed… except they do!” — we look at Invoice Scams.
PayPal and other sites allow anyone (an attacker) to send anyone (their victims) an invoice containing the text of the attacker’s choosing. In this attack technique, PayPal sends you an email suggesting that the attacker has already taken your money, and that you should call the attacker-supplied telephone number if you have a problem with that.
Because PayPal is acting as a (clueless) accomplice in this scam, the email contains markers of legitimacy (including the “This message is from a trusted sender” notice):
In the current version of the Microsoft Outlook web application, you can choose to report this phishing email. Because it really was PayPal that sent this phishing lure, choosing “Report and Block” will block all future email from PayPal, including any emails that aren’t scams, which may not be what you expected to happen.
Note that PayPal isn’t the only vendor with this issue; Square has recently started allowing the same scam, and attackers are abusing Calendly for the same thing:
Pretty much any service that offers email notifications with attacker-supplied text will get abused. For example, an attacker can configure Microsoft Azure to spam arbitrary email addresses with status alerts that have attacker-controlled text, like so:
Microsoft Azure-generated phone phishing lure
The advantage to an attacker here is that the email was sent by Microsoft’s legitimate email servers, which are thus acting as an unwitting accomplice to the fraud. The user can mark the email as fraudulent, but since it comes from a legitimate service, reporting it is unlikely to help much.
Best Practices
Software-as-a-Service vendors should take care not to allow attackers to abuse their services in this way. At the very minimum, every email sent on behalf of an untrusted party should have a Report Fraud link at the bottom to allow the vendor to learn when they’re behaving as a criminal accomplice.
If possible, avoid sending emails to any address unless that address has specifically opted in to receiving messages (and ensure the opt-in prompt’s text is fully controlled by your service!).
Today in “Attack techniques so stupid, they can’t possibly succeed… except they do!” — the trojan clipboard technique. In this technique, the attacking website convinces the victim to paste something the site has silently copied to the user’s clipboard into a powerful and trusted context.
When the attacker’s page loads, the site places dangerous commands onto the victim’s clipboard, then asks the victim for help in executing them.
A walkthrough of this attack can be found in the ThreatDown Blog, but simple screenshots give you the gist:
A similar one:
Now, in the modern Windows Terminal, trying to paste a string with a CRLF in it will show a warning prompt:
… but that protection still relies upon the user having some concept that they might be under attack and not hitting Enter.
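The multi-line check itself is simple to reason about. Here’s a minimal Python sketch (purely illustrative; this is not Windows Terminal’s actual code) of the heuristic:

```python
def paste_needs_warning(text: str) -> bool:
    """Return True if pasted text should trigger a warning prompt.

    Sketch of the Windows Terminal heuristic: a CR or LF inside pasted
    text could submit a command without any further keystroke, so the
    terminal warns before performing the paste.
    """
    return "\r" in text or "\n" in text

print(paste_needs_warning("dir"))                    # single-line paste: no warning
print(paste_needs_warning("evil.exe -payload\r\n"))  # trailing CRLF would auto-execute
```

Note that the warning fires even for harmless multi-line text, which is part of why users learn to click through it.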
In the current scenario, the target victim context is a native execution surface, but this is far from the first time an attack like this has been seen.
Here’s a nice video explanation that walks through what the attackers do if you fall for the Win+R attack.
UPDATE: A few months later, an attack coupled full-screen abuse with a fake WindowsUpdate reboot screen:
Thirteen years ago, Socially-Engineered XSS Attacks were all the rage, where bad guys would use the Address Bar / Omnibox to get access to your Facebook account and worm the attack through all of your friends and contacts:
That attack led browsers to start dropping the javascript: prefix when pasting into the address bar. If a user really wants to run JavaScript, they have to manually type the scheme prefix themselves.
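The mitigation amounts to stripping the scheme before the pasted text can be interpreted. A rough Python sketch (illustrative only; each browser’s actual logic differs in its details):

```python
def neutralize_address_bar_paste(text: str) -> str:
    """Strip any leading javascript: scheme(s) from pasted text.

    Mimics the browser mitigation: the user must retype the scheme by
    hand for the address bar to treat the input as script. Strips
    repeatedly to defeat nesting like 'javascript:javascript:...'.
    """
    stripped = text.lstrip()
    while stripped.lower().startswith("javascript:"):
        stripped = stripped[len("javascript:"):].lstrip()
    return stripped

print(neutralize_address_bar_paste("javascript:alert(1)"))  # prints: alert(1)
print(neutralize_address_bar_paste("example.com"))          # prints: example.com
```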
Similarly, pasting into DevTools was a recognized attack vector, so before browsers introduced built-in protections, websites would take it upon themselves to console.log a warning message like this one on WordPress.com:
Nowadays, Chromium blocks pasting by default:
…as does Firefox:
Other Execution Surfaces
The Windows Run dialog is a convenient target because it’s just a Win+R hotkey away. But it’s not the only such surface; for example, the location bars in Windows common file dialogs and File Explorer windows are also execution surfaces.
Note: Enterprises or administrators who are concerned about their users unsafely running commands from Windows Explorer execution surfaces like the Run dialog or the Location bar may set a policy to disallow such actions.
You can do so by using RegEdit.exe to create a REG_DWORD named NoRun with a value of 1 in HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
…or you can use the Group Policy editor to configure User Configuration > Administrative Templates > Start Menu and Taskbar > Remove Run menu from Start Menu
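For scripted deployments, the same registry value can be written programmatically. A hedged Python sketch using the standard winreg module (Windows-only; the function is a no-op elsewhere, and Explorer must restart or policy must refresh before the change takes effect):

```python
import sys

# Per-user Explorer policy key holding the NoRun value described above.
EXPLORER_POLICY_KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

def set_norun_policy(enable: bool = True) -> bool:
    """Write the NoRun REG_DWORD under the current user's Explorer policy key.

    Returns True if the value was written, or False on non-Windows systems.
    """
    if not sys.platform.startswith("win"):
        return False
    import winreg  # only available on Windows
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, EXPLORER_POLICY_KEY) as key:
        winreg.SetValueEx(key, "NoRun", 0, winreg.REG_DWORD, 1 if enable else 0)
    return True
```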
Beyond Explorer, Command Prompts, Terminals, and PowerShell IDE windows are obvious targets, and several Windows utilities like Task Manager also have command runners:
So, protecting just the Run dialog might not be enough.
Alas, We Can’t Patch the Human
All of these attacks make the end-user an active participant in their own victimization.
What are defenders to do about all of this?
Block copy from the browser? Warn on every paste into a sensitive context? Introducing friction would be annoying — 99.99% of the time, nothing bad is happening, and the user is pasting trustworthy or harmless content.
Mark of the Web – Clipboard Edition?
If you squint at it, this problem is somewhat like the problem of Windows Security Zones — Windows wants to apply additional scrutiny to files from the Internet, but as soon as you download a file, that file is no longer “on the Internet” — it’s on your disk.
Way back in 2003, Windows invented the “Mark-of-the-Web”, which marks a file as having originated from the Internet, storing information about its origin in an NTFS Alternate Data Stream named Zone.Identifier:
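The stream’s contents are a tiny INI-style document: a ZoneId value under a [ZoneTransfer] header, often accompanied by HostUrl/ReferrerUrl lines. A small Python sketch that parses out the ZoneId (the format is the documented one, but the helper itself is just illustrative):

```python
# Windows URL Security Zones, as recorded in Zone.Identifier streams.
ZONE_NAMES = {
    0: "Local Machine",
    1: "Local Intranet",
    2: "Trusted Sites",
    3: "Internet",
    4: "Restricted Sites",
}

def parse_zone_id(stream_text):
    """Extract ZoneId from the text of a Zone.Identifier alternate data stream.

    Returns the integer zone, or None if no ZoneId is present. On Windows,
    the stream can be read by opening 'somefile.exe:Zone.Identifier'.
    """
    in_zone_transfer = False
    for line in stream_text.splitlines():
        line = line.strip()
        if line.lower() == "[zonetransfer]":
            in_zone_transfer = True
        elif line.startswith("["):
            in_zone_transfer = False
        elif in_zone_transfer and line.lower().startswith("zoneid="):
            try:
                return int(line.split("=", 1)[1])
            except ValueError:
                return None
    return None

sample = "[ZoneTransfer]\r\nZoneId=3\r\nHostUrl=https://example.com/setup.exe\r\n"
print(ZONE_NAMES[parse_zone_id(sample)])  # prints: Internet
```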
While the Windows clipboard does not today have an exact equivalent, several analogous features do exist. In the Windows Vista era, a new format was added to the clipboard to reflect that the clipboard’s contents came from a Low integrity-level (IL) page.
Starting with an Internet Explorer 9 update in 2011, an additional data format named msSourceUrl was added to the clipboard containing (in UTF-16) the source URL:
Much more recently, Chromium implemented a similar concept, originally for DLP purposes, as I understand it. In modern Chromium, when you copy text from the web, the clipboard now contains an additional data format called Chromium internal source URL that indicates from what URL the content was copied1.
Firefox has a similar format, text/x-moz-url-priv:
An application which wishes to protect itself from potentially untrusted clipboard data can check for these data formats, and if found, call Windows’ MapURLToZone API on the URL to determine what security zone the clipped text belongs to, prompting the user if needed.
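Putting that together, a consuming application would first ask which of those formats is present on the clipboard. A hedged ctypes sketch (Windows-only; it returns an empty list elsewhere, and the format names are those given above):

```python
import ctypes
import sys

# Clipboard format names registered by IE/Edge, Chromium, and Firefox
# to record where copied content came from.
BROWSER_SOURCE_FORMATS = (
    "msSourceUrl",
    "Chromium internal source URL",
    "text/x-moz-url-priv",
)

def clipboard_source_formats():
    """List which browser source-URL clipboard formats are currently present.

    Returns [] on non-Windows platforms. A real consumer would next read
    the URL out of the matching format and pass it to MapURLToZone to
    decide whether to warn the user before accepting the paste.
    """
    if not sys.platform.startswith("win"):
        return []
    user32 = ctypes.windll.user32
    present = []
    for name in BROWSER_SOURCE_FORMATS:
        # RegisterClipboardFormatW returns the existing format id if the
        # name was already registered by another application.
        fmt = user32.RegisterClipboardFormatW(name)
        if fmt and user32.IsClipboardFormatAvailable(fmt):
            present.append(name)
    return present
```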
March 2026 — macOS 26.4 adds a new prompt on paste into the Terminal:
-Eric
1 When drag/dropping data (e.g. links or text), Chromium puts the source origin (not the full URL) in a different clipboard format, named chromium/x-renderer-taint.
My 2021-2024 Authenticode certificate expired yesterday, so I began the process of getting a replacement last week. As in past years, I again selected a 3-year OV certificate from DigiCert.
Validation was straightforward. After placing my order, I got a request for high-resolution photos of me holding my ID (I sent my passport and driver’s license). Then, a verification Zoom video call was scheduled (they had tons of slots open, I did mine when I was free at 10:30PM) where I showed the validator my ID and signed the attestation forms with them acting as a witness. I scanned the completed forms to a PDF and emailed it to the validator.
In 2023, the Baseline Requirements for code signing were updated to require that all code-signing certificates be stored on hardware to limit theft and abuse. I’ve been storing my code-signing certificates on a hardware token since 2015 or so, and using a YubiKey 4 since 2016. However, DigiCert now seems to require a new token: either a SafeNet eToken, or an HSM meeting the Common Criteria EAL4+ or FIPS 140-2 Level 2 standard. As far as I know, my YubiKey 4 doesn’t qualify, so I ticked the option to have them send me a new SafeNet eToken 5110 CC for $120. (There are probably places to get this less expensively, but I didn’t want to fuss over it.)
A few days later, my new token arrived:
I popped it in my PC, installed the two Windows apps that DigiCert directed me to install, generated my new certificate, and set a new PIN. The process took perhaps fifteen minutes.
I uninstalled my expiring certificate from my Windows certificate manager (since it has the same Subject Name as the new one) so I would not need to make any changes to my build and sign batch script:
I took the opportunity to stop adding an additional signature that uses SHA1, since there’s no reason why you’d want to run this app on Windows XP (which didn’t support SHA256).
Sign Everything You Ship
In past years, it was common to simply sign your application’s installer, main executable, and anything that required elevation to run as Administrator — the three contexts in which signatures were most commonly checked by various components of the system (e.g. the Shell, firewalls/security software, and the UAC Elevation dialog). For the most part, other signatures were not automatically checked by the system (the fact that AV software is constantly looking at file signatures was basically invisible).
Things have changed with the start of the rollout of Windows 11’s Smart App Control feature.
Rather than just checking the signature of your installer, a system with SAC enabled will now check the signature on every executable module your program loads. If a signature is not found, a reputation check is conducted against the Defender File Metadata Service. If a positive reputation isn’t found, the module load is blocked.
You can see this at work when installing an old utility app I built, MezerTools. This product contained one unsigned DLL I knew about — docker.dll, a DLL that implements a feature to allow users to double-click any window’s edge to “grow” it to the corresponding edge of the screen. Because this DLL is unsigned, it fails to load, and the Bad Image notification dialog is shown.
Two Smart App Control Failures
In the blocking dialog, two different Error Status codes are common:
0xc0e90002 – Unsigned and no positive reputation
0xc0e9000a – Unsigned and reputation service unreachable
Beyond that expected failure for docker.dll, there’s another one I didn’t anticipate, visible only in the notification toast shown by Windows complaining about System.dll. My app doesn’t install a System.dll, but the Nullsoft Install System I use to install MezerTools will try to drop its own System.dll to a temporary folder during install if your installation script uses certain NSIS functions (in my case, computing a folder’s size). Because the NSIS DLL isn’t signed, it is blocked and my API calls in the script will fail.
There’s a third failure not seen until the user tries to uninstall the app. The Uninst.exe uninstaller that Nullsoft automatically generates isn’t signed and can be blocked by SAC:
Due to how uninstall works in Windows, the failure is reported for “COM Surrogate”
The install script compiler writes its uninstaller directly into the built Setup.exe, making it easy for the developer to overlook. Even if they remember the uninstaller, how can they sign it?
Fortunately, the developers of NSIS have already solved that one by introducing the !uninstfinalize command. This command enables the developer to sign the uninstaller generated by NSIS before it is embedded into the installer.
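Usage looks something like the line below (the signtool flags are illustrative; adjust the digest algorithm, timestamp URL, and certificate selection for your own setup; NSIS replaces %1 with the path of the generated uninstaller):

```nsis
; Sign the uninstaller that NSIS generates, before it is
; embedded into the built Setup.exe.
!uninstfinalize 'signtool.exe sign /fd SHA256 /tr http://timestamp.digicert.com /td SHA256 "%1"'
```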
ECC
Do not use ECC for code signing. Many systems, including Smart App Control (as of January 2026), do not properly consider ECC-signed binaries as having valid signatures. https://vcsjones.dev/authenticode-and-ecc/
Azure Trusted Signing
Beyond getting a certificate from a third-party CA, Microsoft now offers its Azure Trusted Signing service in Public Preview. With Azure Trusted Signing, your private key is stored securely in a security module in the cloud and used to sign your binaries. You can integrate Trusted Signing into both your local signing processes and cloud build processes on GitHub and the like.
This is a cool approach and widely considered to be “the future,” but it’s overkill for my needs, and I do all of my builds on my local PC. I like the idea that anything bearing my digital signature is something that I literally/virtually had my hands on (by plugging my token into my PC).
Beyond that, Azure Trusted Signing is not presently available for individuals or legal entities younger than three years old. They do expect to relax that at some point in the future, so check back here for an update when I next need a certificate in 2027. :)
For most scenarios, any certificate that chains to a trusted root in the Microsoft Root Certificate Program will be acceptable for Windows scenarios. There are a few special exceptions however.
EV Certificates for Drivers
If you wish to sign a driver or other Windows component, you must use an Extended Validation Certificate. EV certificates require additional vetting as to the identity of the publisher and are not generally available to individuals. Notably, as of 2019, using an EV certificate no longer automatically grants your code “initial trust” by SmartScreen Application Reputation.
Secure Enclaves
Windows 11 allows 3rd party developers to build code that runs inside a Secure Enclave (I’ve written about Enclaves recently). In order to load code within an enclave, the signing certificate must have specific flags set. Those flags are only respected when the signing certificate is from Azure Trusted Signing.
DRM and Security Software
Various DRM and security features in Windows have niche signing requirements. For example, see this document for requirements about how anti-malware code is signed.
In yesterday’s post, I outlined the two most successful (and stupid-simple) attack techniques that you might not expect to work (and you’d be so very wrong):
“Please give me your password.”
“Please run this file.”
Today, let’s explore number 3: “Please give me control of your computer so I can, uh, fix it?”
In this attack, an attacker convinces you that there’s some problem with your computer, your bank account, or something else, and that to fix the problem you will need to allow them access to your computer. The security industry tends to call this the Remote Monitoring and Management (RMM) software scam.
The attacker might start out by sending you SMS text messages or email telling you of a problem, or you might be tricked into calling the attacker when visiting a website takes over your entire screen and blares out a notice saying that “Microsoft” needs you to call them immediately.
Or, the attacker might sign your email address up for a ton of spammy mailing lists, then call you “from your IT department” with the pretext of needing to remotely control your PC “to fix the broken spam filter.”
Once the attacker has you on the hook, they make their move, asking for access to “fix the problem.” The attacker promises “Don’t worry! You can watch everything I’m doing.”
The victim, baited hook already in their mouth and not seeing any clear alternative, figures “Well, they said they’re from my bank or Microsoft, so they are the good guys, right? And they said that I have to act right now or I’ll lose everything! I guess I’ll just watch them very closely.”
Unfortunately for the good people of the world, bad guys need very little skill to be evil while you watch. The attacker follows a script to show you various innocuous-but-scary-seeming information on your PC, lies to you about seeing a major problem that they need to fix, convinces you to push the button to hand over control, and then downloads and runs various malware tools that immediately steal all of your passwords, drain your accounts, and steal and encrypt your files[1]. If your computer has access to a network (e.g. your company’s), they’ll begin attacking that network using your identity.
What can you do about this scam?
Tell your friends and family that they should never allow anyone they do not personally recognize access to their PC, for any reason. They should never trust emails or SMS messages they receive to be genuine, and should verify any information they receive by calling their financial institution directly, using the contact phone number on their bank card or statement. Microsoft will never call you.
-Eric
[1] An almost completely accurate depiction of this attack can be found in the recent thriller “The Beekeeper.” Unfortunately for the good guys, Jason Statham isn’t waiting on standby to go avenge victims of this all-too-common scam.
While it’s common to think of cyberattacks as being conducted by teams of elite cybercriminals leveraging the freshest 0-day attacks against victims’ PCs, the reality is far more mundane.
Most attacks start as social engineering attacks: abusing a user’s misplaced trust.
Most attackers don’t hack in, they log in. The most common cyberattack is phishing: Stealing the user’s password by asking them for it.
The next most common Initial Access Vector is socially-engineered malware: sending the user a malicious file and asking them to open it. When the malicious file runs, it disables security defenses, downloads more malware, and begins stealing data and performing other malicious activities.
Attackers have many choices for deploying their malware — on Windows, they can write evil executable files (.EXE, .SCR, .COM, etc) or installers (.MSI, .MSIX, etc).
However, for simplicity and compatibility reasons, one of the most common initial access choices for attackers is a file targeting the legacy scripting engines (.JS, .VBS, .HTA, .WSH).
Legacy Script Engines
These scripting file types, created alongside Internet Explorer in the 1990s, have been supported for almost 30 years now, and they still work on the latest versions of Windows. Unlike JavaScript running in your browser, these file types run outside of your browser, with no sandbox constraining their ability to reconfigure your system and steal your files.
JavaScript running in Chrome or Edge cannot read a file from your desktop without your explicitly selecting that file, whereas JavaScript running inside wscript.exe can read every file from your desktop, download and run any program without any prompts, and so forth.
VBScript no longer runs in browsers, but the Windows Scripting engines (cscript.exe and wscript.exe) are perfectly happy to run VBS files and provide full access to your system.
You can think of an HTA file as a prehistoric Electron app — it’s basically Internet Explorer with no sandbox and all of the security features turned off.
This level of power is, in a word, totally 🍌bananas🍌.
Why does it still exist?
Legacy compatibility.
User Experience
In Edge, the .HTA file type is marked as DANGEROUS and thus HTA downloads are blocked by default:
…but even blocked files can be sent to the user inside an archive (e.g. a ZIP File) and the user need only open the ZIP to be able to get at the HTA within.
In contrast, Chrome treats the HTA type as ALLOW_ON_USER_GESTURE and does not block .HTA downloads:
Reading the source, you can see that Chrome does not treat any of these file types as dangerous:
file_types {
  # HTML Application. Executes as a fully trusted application.
  extension: "hta"
  platform_settings {
    platform: PLATFORM_TYPE_WINDOWS
    danger_level: ALLOW_ON_USER_GESTURE
  }
}
file_types {
  # JavaScript file. May open using Windows Script Host with user level privileges.
  extension: "js"
  platform_settings {
    platform: PLATFORM_TYPE_WINDOWS
    danger_level: ALLOW_ON_USER_GESTURE
  }
}
file_types {
  extension: "vbs"
  platform_settings {
    platform: PLATFORM_TYPE_WINDOWS
    danger_level: ALLOW_ON_USER_GESTURE
  }
}
After you click open, the only thing standing between your PC and the potentially malicious code is a 2003-era security prompt:
After the file starts running, your security software may be able to catch malicious behavior at runtime using a feature called the Antimalware Scan Interface, but I wouldn’t bet my PC on it.
A Smarter and Safer Future?
The new Windows 11 Smart App Control feature dramatically reduces the threat of an attacker sending the victim a simple script or batch file that takes over their PC. A wide swath of file types, including scripts (.js, .vbs, .hta, .ps1), batch commands (.bat, .cmd), and numerous other dangerous types are blocked from running if they came from the web.
Mitigations
You can easily block attacks against the legacy scripting engines today. There are numerous ways to do so, but perhaps the simplest approach, which blocks both browser and email attack vectors, is to point the file types at a safe application (e.g. Notepad).
To do so, simply open the Windows Settings app and navigate to Choose defaults by file type. Search for a type:
Click the arrow icon, scroll the app list to pick a safe handler, then click Set default:
After fixing VBS, fix the other types:
In the unlikely event that you ever need to run one of these files inside its original handler, you can easily do so from the command line. Just run e.g. mshta.exe theApp.hta or cscript.exe myScript.js to run the file.
Blocking HTA Files
HTA Files are such a longstanding security problem that there’s a simple Group Policy (backed by a registry key) that blocks running them. From an elevated command prompt, run: