Defensive Technology: Windows Filtering Platform

Last November, I wrote a post about the basics of security software. In that post, I laid out how security software is composed of sensors and throttles controlled by threat intelligence. In today’s post, we’ll look at the Windows Filtering Platform, a fundamental platform technology introduced in Windows Vista that provides the core sensor and throttle platform upon which several important security features in Windows are built.

What is the Windows Filtering Platform?

The Windows Filtering Platform (WFP) is a set of technologies that enables software to observe and optionally block messages. Most commonly, WFP is used to filter network traffic, but it can also filter local RPC messages.

For networking and RPC scenarios, performance is critical, so a consumer of WFP specifies a filter that describes the messages it is interested in, and the platform calls the consumer only when a matching message is encountered. Filtering minimizes the processing cost of messages that are not of interest to the consumer.

When a message matching the filter is encountered, it is sent to the consumer, which can examine the content of the message and either allow it to pass unmodified, change its content, or indicate that WFP should drop it.
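
If you're curious about the filters registered on your own system, you can dump them with netsh from an elevated prompt; a quick peek:

netsh wfp show state      # dumps providers, layers, and callouts to wfpstate.xml
netsh wfp show filters    # dumps the currently-registered filters to filters.xml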

This sensor and throttle architecture empowers several critical security features in Windows.

Windows Firewall

Most prominently, WFP is the technology underneath the Windows Firewall (formally, “Windows Defender Firewall with Advanced Security,” despite the feature having little to do with Defender). The Firewall controls whether processes may establish outbound connections or receive inbound traffic using a flexible set of rules. The Firewall settings UI (wf.msc) allows the user to specify rules that are then enforced using the Windows Filtering Platform.

Rules can be added via the UI:

…or programmatically using the native API, PowerShell (ibid), or the NetSh command line tool:

netsh.exe advfirewall firewall add rule name="FiddlerProxy" program="C:\Program Files\Fiddler2\Fiddler.exe" action=allow profile=any dir=in edge=deferuser protocol=tcp description="Permit inbound connections to Fiddler"
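
If you prefer PowerShell, the equivalent rule looks roughly like this (a sketch using the inbox NetSecurity module's cmdlets):

New-NetFirewallRule -DisplayName "FiddlerProxy" -Program "C:\Program Files\Fiddler2\Fiddler.exe" -Action Allow -Profile Any -Direction Inbound -Protocol TCP -EdgeTraversalPolicy DeferToUser -Description "Permit inbound connections to Fiddler"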

Users can optionally enable prompting:

…and when a process tries to bind a port to allow inbound traffic, a UI prompt is shown:

For the most part, however, the Windows Firewall operates silently and does not show much in the way of UI. In contrast, the Malwarebytes Windows Firewall Control app provides a much more feature-rich control panel for the Windows Firewall.

One major limitation of the Windows Firewall, which existed for almost two decades, is that it natively supports only rules that target IP addresses. Many websites periodically rotate between IPs (for operational reasons, load balancing, geographically-distributed CDNs, etc.), and many products are unwilling to commit to a fixed set of IP addresses forever. Thus, the inability to create firewall rules that use DNS names (e.g. “Always permit traffic to VirusTotal.com”) was a significant limitation.

This limitation was later mitigated with a new feature called Dynamic Keywords, which allows you to specify a target DNS name in a firewall rule. Windows Defender’s Network Protection feature then watches for any unencrypted DNS lookups of the specified name. When Network Protection observes a resolution for a targeted DNS name, it sends a message to the firewall service directing it to update the rule with the IP address returned from DNS. (Note: this scheme is imperfect in several dimensions: for example, asynchrony means that the firewall rule may be updated milliseconds after the DNS resolution, such that the first attempt to connect to an allowed hostname could be blocked.)
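
For illustration, here's roughly what that looks like in PowerShell (a sketch; the NetFirewallDynamicKeywordAddress cmdlets ship in recent Windows builds, and the GUID is arbitrary):

# Create a dynamic keyword that auto-resolves a hostname to its current IPs:
$id = '{' + [guid]::NewGuid().ToString() + '}'
New-NetFirewallDynamicKeywordAddress -Id $id -Keyword 'virustotal.com' -AutoResolve $true
# Reference the keyword from a rule instead of hardcoding IP addresses:
New-NetFirewallRule -DisplayName 'Allow VirusTotal' -Direction Outbound -Action Allow -RemoteDynamicKeywordAddresses $id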

Zero Trust DNS (ZTDNS)

More recently, the Windows team has been building a feature called “Zero Trust DNS” whereby a system can be configured to perform all DNS resolutions through a secure DNS resolver, and any connections to any network address that was not returned in a response from that resolver are blocked by WFP.

In this configuration, your organization’s DNS server becomes the network security control point: if your DNS server returns an address for a hostname, your clients can connect to that address, but if DNS refuses to resolve the hostname (returning no address), the network connection is blocked. (An app that is hardcoded to talk to a particular IP address would find its requests blocked, since no DNS request “unlocked” access to that address). The ZeroTrustDNS feature is obviously only suitable for certain operational environments due to its compatibility impact.

AppContainer

Windows 8 introduced a new application isolation technology called AppContainer that aimed to improve security by isolating “modern applications” from one another and from the rest of the system. AppContainers are configured with a set of permissions, and the network permissions allow restricting an app to public (Internet) or private network access.
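
As an aside, you can poke at AppContainer network isolation using the inbox CheckNetIsolation.exe tool; for example, exempting an AppContainer from loopback restrictions for local debugging (the package family name below is just an illustration):

checknetisolation LoopbackExempt -s    # list AppContainers exempted from loopback restrictions
checknetisolation LoopbackExempt -a -n="Microsoft.WindowsCalculator_8wekyb3d8bbwe"    # add an exemption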

James Forshaw from Google’s Project Zero wrote an amazing blog post about how AppContainer’s network isolation works. The post includes a huge amount of low-level detail about the implementation of both WFP and the Windows Firewall.

Restricted Network Access

Years before the advent of AppContainers, Windows included a similar feature to block network traffic from Windows Services. Like the AppContainer controls, it is enforced in WFP below/before the Windows Firewall rules, although (also like the AppContainer controls) it is implemented in the same Windows Firewall service.

Windows Service Hardening enables developers to set restrictions on a service’s network access. Network access restrictions for Services:

  • are evaluated before the Windows Firewall rules
  • are enforced regardless of whether Windows Firewall is enabled
  • are defined programmatically using the INetFwServiceRestriction and INetFwRule APIs

Windows Vista and later include a set of predefined network access restrictions for built-in services. Service network access restrictions are stored in the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\RestrictedServices\ registry key. The predefined restrictions are stored in the Static\System key, and custom restrictions are stored in the Configurable key.
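
You can dump the raw rule strings with PowerShell; each registry value holds one pipe-delimited rule:

Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\RestrictedServices\Static\System' | Select-Object -Property * -ExcludeProperty PS*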

Way back in 2008, Jan De Clercq wrote a great post explaining Service Hardening. A 2014 blog post shows ways to view the rules:

(New-Object -ComObject HNetCfg.FwPolicy2).ServiceRestriction.Rules

or

Get-NetFirewallRule -PolicyStore ConfigurableServiceStore

Port Scanning Prevention Filter

A one-off protection built atop WFP is the Port Scanning Prevention filter, described here. The article also briefly mentions the use of Event Tracing for Windows (ETW) logging in WFP.

RPC Filtering

The Windows Filtering Platform is not limited to network messages; it can also be used to filter RPC messages between processes.

Again, Google’s James Forshaw has an amazing blog post on the RPC Filter, and Akamai’s Ophir Harpaz and Stiv Kupchik wrote a Definitive Guide to the RPC Filter too.

Various guides (e.g. this one) exist for configuring the RPC Filter to block various attack techniques, and it’s used in security features like Microsoft Defender for Endpoint’s Disruption module.
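
For example, a common hardening step blocks remote calls to the EFSRPC interface (abused by the PetitPotam technique); a sketch of the netsh syntax, run from an elevated prompt:

netsh rpc filter add rule layer=um actiontype=block
netsh rpc filter add condition field=if_uuid matchtype=equal data=c681d488-d850-11d0-8c52-00c04fd90f7e
netsh rpc filter add filter
netsh rpc filter show filter    # list the installed RPC filters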

I only learned about MDE’s use of RPC Filtering recently. In 2024, Microsoft employees were briefly unable to play Netflix or other DRM’d video in the Microsoft Edge browser. The root cause turned out to be Defender’s Disruption RPC filter blocking messages from the PlayReady DRM sandboxed process, causing playback to fail.

Microsoft Defender Network Protection

Network Protection (NP) is composed of a WFP driver (wdNisDrv.sys) and a service (NisSrv.exe). It can synchronously block TCP traffic based on IP and URI reputation, relying on threat intelligence from the SmartScreen webservice and signatures authored by the Defender team. While SmartScreen is integrated directly into Microsoft Edge, NP can extend SmartScreen’s anti-phishing/anti-malware protection to other clients like Chrome and Firefox.

Beyond Web Threat Protection, Network Protection’s WFP callout is also used as the sensor/throttle for Web Content Filtering and custom network indicators, as well as for enforcing blocks on unsanctioned apps via Microsoft Defender for Cloud Apps.

Unfortunately, the use of TLS causes user-experience problems when blocking unwanted content. When Network Protection determines that it must block an HTTPS site, it cannot return an error page to the browser because TLS prevents the injection of content into the secure channel. So, NP instead injects a “Handshake Failure” TLS fatal alert into the bytestream to the client and drops the connection. The client browser, unaware that the fatal alert was injected by Network Protection, concludes that the server does not properly support TLS, so it shows an error page complaining of the same:

t=2817 [st=305] SOCKET_BYTES_RECEIVED
                --> byte_count = 14
                --> bytes =
                15 03 03 00 02 02 28 15  03 03 00 02 01 00
t=2817 [st=305] SSL_ALERT_RECEIVED
                --> bytes = 02 28
t=2817 [st=305] SSL_HANDSHAKE_ERROR
                --> error_lib = 16
                --> error_reason = 1040
                --> file = "..\\..\\third_party\\boringssl\\src\\ssl\\tls_record.cc"
                --> line = 594
                --> net_error = -113 (ERR_SSL_VERSION_OR_CIPHER_MISMATCH)
                --> ssl_error = 1
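
Decoding those fourteen bytes by hand shows two back-to-back TLS alert records: a fatal handshake_failure (40) followed by a close_notify. A minimal PowerShell sketch:

# Record layout: ContentType (0x15=Alert), Version (0x0303=TLS 1.2), Length (2 bytes), Level, Description
foreach ($rec in '15 03 03 00 02 02 28', '15 03 03 00 02 01 00') {
  $b = $rec -split ' ' | ForEach-Object { [Convert]::ToByte($_, 16) }
  'level={0} description={1}' -f $b[5], $b[6]   # 2/40 = fatal handshake_failure; 1/0 = warning close_notify
}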

If the system’s Notifications feature is enabled, a notification toast appears with the hope that you’ll see it and understand what’s happened.

Chromium-based browsers show ERR_SSL_VERSION_OR_CIPHER_MISMATCH when the TLS handshake is interrupted
Mozilla Firefox shows SSL_ERROR_NO_CYPHER_OVERLAP when the TLS handshake is interrupted

Beyond blocking unwanted connections, Network Protection provides network-related signals to the Defender engine, enabling behavior-monitoring signatures to target suspicious network connections. For example, by generating a JA3/JA3S/JA4 fingerprint of an HTTPS handshake’s fields and comparing it to known-suspicious profiles, security software can detect a malicious process communicating with its Command and Control servers, even when the traffic is encrypted and the process is not yet known to be malicious.
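
For a flavor of how this works, here's a minimal sketch of the JA3 computation; the ClientHello field values below are invented for illustration:

# JA3 = MD5 of 'TLSVersion,Ciphers,Extensions,EllipticCurves,ECPointFormats'
# (values within each field are dash-separated, in the order they appeared on the wire)
$fields = @(
  '771',                          # TLS 1.2 (0x0303)
  '4865-4866-4867-49195-49199',   # offered cipher suites
  '0-23-65281-10-11-35-16-5-13',  # extensions, in order
  '29-23-24',                     # supported groups (curves)
  '0'                             # EC point formats
)
$md5  = [System.Security.Cryptography.MD5]::Create()
$hash = $md5.ComputeHash([Text.Encoding]::ASCII.GetBytes($fields -join ','))
'JA3: ' + (-join ($hash | ForEach-Object { $_.ToString('x2') }))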

Microsoft Defender for Endpoint EDR

MDE’s EDR feature also includes MsSecWfp, a driver aimed at filtering network connections based on rules. It is leveraged in network isolation scenarios, where a configuration for the machine is passed to the driver for enforcement. The driver produces audit events in the form of ETW events and does not perform deep traffic inspection.

Enhanced Phishing Protection

As I described last year, Win11’s Enhanced Phishing Protection (EPP) aims to protect your domain password across all apps on the system.

When EPP is enabled, a WFP callout in WTD.sys watches for the Server Name Indication (SNI) extension in TLS ClientHello messages so that it can understand which hosts a process has established network connections with. If a user subsequently types their domain password, the Web Threat Defense service checks the reputation of the established connections to determine whether the process is connected to a known phishing site.

And More…

Importantly, WFP isn’t used just by Microsoft — many third party security products are also implemented using the Windows Filtering Platform. Security applications that integrate with the Windows Security Center are required (by policy) to be built upon WFP.

Tailscale wrote a great blog post about building atop WFP.

Other Approaches

While WFP is a core platform technology used by many products, it’s not the only one available. For example, beyond its use of WFP in Network Protection, Microsoft Defender for Endpoint also includes a Network Detection and Response (NDR) feature.

NDR is composed of a service that listens to ETW notifications generated by Pktmon and WinSock, performing asynchronous packet analysis using Zeek, an open-source network monitoring platform. Network data is captured at a layer below/before WFP, using Pktmon. Compared to WFP, Pktmon enables capture of lower-level network data, useful for watching for attempts to exploit bugs in Windows’ network stack itself (e.g. 2021, 2024).
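
To get a feel for the data available at that layer, you can drive Pktmon yourself from an elevated prompt (a sketch; the etl2pcapng verb requires a recent Windows build):

pktmon start --capture           # begin capturing packets
# ... reproduce the traffic of interest ...
pktmon stop                      # writes PktMon.etl to the current directory
pktmon etl2pcapng PktMon.etl     # convert for analysis in Wireshark or Zeek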

Finally, some network security solutions are based on network traffic routing (e.g. proxies), optionally with decryption via MitM. Microsoft Entra’s Global Secure Access is a newer offering in this category, but many vendors offer competing solutions.

Limitations & Future Directions

Perhaps the most important limitation of network-level monitoring is that the increasingly ubiquitous use of encryption (HTTPS/TLS) means that request and response payloads are usually indecipherable at the network level.

In the “good old days” of network security, a security product could observe all DNS resolution and examine the full content of requests and responses. Nowadays, DNS traffic is increasingly encrypted, and HTTPS prevents network-level monitoring from observing URLs, requests (headers and bodies), and responses (headers and bodies). For years, security software could still observe the target hostname by sniffing the SNI out of ClientHello messages, but the ongoing deployment of Encrypted Client Hello means that even this signal is disappearing:

Beyond ECH, a growing fraction of traffic takes place over HTTP/3 (QUIC), which doesn’t use TCP at all. QUIC’s use of “initial encryption” precludes trivial determination of the server’s name.

Increasingly, network-level monitors can see only the IP address and port of connections, and trying to evaluate connections based on IP address alone is fraught with peril.

Some security solutions attempt to inject their code into clients or otherwise obtain HTTPS decryption keys, but these approaches tend to be unreliable or offer poor performance.

It seems likely that in the future, the security industry will need to work with application developers to ensure that their software integrates with network security checks directly (as they already do elsewhere) instead of trying to sniff out the destination of traffic at the network level.

-Eric

Runtime Signature Checking Threat Model

Telerik developers recently changed Fiddler to validate the signature on extension assemblies before they load. If the assembly is unsigned, the user is presented with the following message:

In theory, this seems fine, even good: signing files is a good thing!

However, it’s important to understand the threat model and tradeoffs here.

Validating signatures every time a file is loaded takes time and slows the startup of the app; that’s particularly true if online certificate revocation checking is performed. The performance impact is one reason why most of my applications have a manifest that indicates that .NET shouldn’t bother:

<configuration>
  <runtime>
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>

Signing your installers is critical to help protect them at rest on your servers and streamline the security checks that happen on download and install. Signing the binaries within can be useful in case the user has Smart App Control enabled, and other security software (e.g. Firewall rules that target publisher signatures) may benefit as well.

However, having your app check the signature itself is less useful than you might expect for most applications. The problem is that there’s usually no trust boundary that would preclude an attacker from, for instance, tampering with your app’s code to remove the signature check. In most cases, the attacker could simply modify fiddler.exe to remove the new signature checking code, such that the protection is removed. Similarly, they could likely execute a .DLL hijacking attack to get their code loaded without any signature check at all. Or they could use their own process to inject code into the victim’s address space at runtime. It’s a long list.

In Telerik’s case, tampering to evade signature checking is even simpler. If the user elects to “Always allow” an unsigned extension, that decision is stored as a base64 encoded string in a simple Fiddler preference:

You can use Fiddler’s TextWizard to decode the preference value from about:config
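
Outside of Fiddler, a PowerShell snippet does the same job; a hypothetical round-trip for illustration:

# Encode and decode a string the way the preference stores it (the example path is invented):
$b64 = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes('C:\Extensions\Example.dll'))
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($b64))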

An attacker with sufficient permission to write a .DLL to a place that Fiddler will load it would also have sufficient permission to edit the registry keys where this preference is stored.

Finally, Fiddler’s signature check doesn’t tell the user who signed the file, such that all signatures that chain to any trusted CA are silently allowed. Now, this isn’t entirely worthless: CAs cannot prevent a certificate from being used to sign malware, but in theory a certificate found to do so will eventually get revoked.

If you plan to check code signatures in your application, carefully consider the threat model and ensure that you understand the limits to the protection. And remember that sometimes, “code” may be stored in a type of file that does not natively support signing, as in the case of Fiddler’s script or certain Chromium files.

-Eric

Spring Break

Spring break is one of the best times to be in Texas: the weather’s usually nice, and outdoor activities aren’t yet miserably hot. This year, the kids are obsessed with roller coasters, so we bought Season Passes to Six Flags (which also include a variety of other theme parks and water parks). Thus far, we’ve spent two days at Six Flags in San Antonio and two days at Six Flags in Dallas.

The excellent “Dr. Diabolical” drop at Six Flags. The kids are in the back row. We all rode it a dozen times.

The kids spent the actual days of spring break on an adventure trip with their mom to Costa Rica:

While they were out of town, I took a quick four day cruise out of Galveston on the Mariner of the Seas.

To keep costs down, this time I took a Deck 7 “interior view” cabin that overlooked the Promenade:

… but I didn’t spend much time in the room. I spent a lot of time at shows, walking the top deck, enjoying music (“Ed”, a Brazilian singer/guitarist) in the pub, and generally relaxing. I passed some time reading Ken Williams’ history of Sierra On-Line, a pleasant and nostalgic read that made little impression on me. The weather was imperfect (very windy with intermittent drizzles) but the trip was quite nice overall.

Most importantly, I achieved my secret goal for the trip, making some crucial progress in writing my long-overdue book which I’ve resolved to publish later this year.

The comedian (Rodney Johnson) was selling a copy of his book, and he autographed my copy with an inscription (coincidentally) perfect for my goals on the cruise:

Apparently, I was the only person to buy the book on the first day, and at the show on the last day, he asked if I was in the audience (“I am!”), had read it (“I did! Cover to cover!”) and what I thought of it (On the spot, I said “Pretty good”, and emailed him a funnier response later).

I often sit in the front row at shows, and I was called on stage to help Michael Holly with a magic trick (We held a chain and he walked through it.).

My original shore excursion (an adventure park) was canceled, so I made the best of it with a short speedboat-and-snorkeling trip.

30 horsepower isn’t a lot, but it was plenty to jump the waves
After returning the boat, I killed an hour at a pretty swim-up bar.

With just one day at a destination and two sea days, I also booked a “Behind the Scenes” tour of the boat, getting the chance to see a galley, the provisions storerooms, the laundry, the bridge, the engine control room, and backstage in the theater.

The Engine Control Room
The crew work incredibly long hours: 10 hours a day, 7 days a week, on 7-month contracts. I resolved to think of this guy any time I’m feeling overwhelmed with work: he’s working in a windowless (underwater) room, hand-folding thousands of towels per day from a room-sized pile almost as tall as he is.

All in all, it was a busy but great spring break. Now, buckling down to get back in shape (two 10Ks coming up), finish booking various trips (including Kilimanjaro!), and otherwise get back to some semblance of a routine.

-Eric

Debugging Chromium

A customer recently complained that after changing the Windows Security Zone configuration to Disable launching apps and unsafe files:

The default is “Prompt”

… trying to right-click and “Save As” on a text file loaded in Chrome fails in a weird way. Specifically, Chrome’s download manager claims it saved the file (with an incorrect “size” that’s actually the count of files “saved”):

However, if you click on the entry, Chrome notices that the file doesn’t exist:

Weird, right?

I previously mentioned a closely-related scenario in my blog post on how Security Zones still impact modern browsers:

if you’ve configured the setting Launching applications and unsafe files to Disable in your Internet Control Panel’s Security tab, Chromium will block executable file downloads with a note: Couldn't download - Blocked.

…but this case here is somewhat different than that.

The customer claims that it is a regression, so let’s bisect.

python3 tools/bisect-builds.py -a win -g 1324255 -b 1589604 --verify-range -- --no-first-run https://webdbg.com/dl/txt.txt

Bisecting, we find that it is indeed a behavior change. Chrome 130 didn’t have this problem. The bisect process tells us:

You are probably looking for a change made after 1366085 (known good), but no later than 1366127 (first known bad)
CHANGELOG URL:
https://chromium.googlesource.com/chromium/src/+log/8c10e43000483bdc4a1b5bf092b39266597d3fc8..ac84b1ec75f49a771ad490760cdaf8872aae8a29

42 changelists is a pretty wide range, so let’s try the Win64 version to see if we can narrow either side:

python3 tools/bisect-builds.py -a win64 -g 1366500 -b 1366250 --verify-range -- --no-first-run https://webdbg.com/dl/txt.txt
You are probably looking for a change made after 1366107 (known good), but no later than 1366146 (first known bad).

CHANGELOG URL:
https://chromium.googlesource.com/chromium/src/+log/f3063c6a843ccf316d31f9169972b1ae546945b2..5f8917fb6721c1fd070cfccf45fa2ba44a8d0253 

Okay, so if we take the tightest constraints, we end up with a narrower range of 1366107 to 1366127, which is only 20 CLs.

Within that range, only one CL (1366109) appears to have anything to do with downloads. I quickly clicked through the others just to be sure.

Looking at the code of the 1366109 change, we don’t see anything directly related to the Save As scenario.

Neat. I like mysteries. If we follow Sherlock Holmes’ quote: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth” we conclude that the CL in question must be responsible. But how could it be?

Well, this looks mildly interesting:

The code was changed to allow a parallel process to delete the file where that would not have been allowed previously. But still, what would delete the temp file in the middle of the overall download process?

Spoiler Alert: The answer turns out to be the Windows code called to apply the Mark-of-the-Web by the Chromium quarantine code!

If we fire up Sysinternals’ Process Monitor against the current version of Chrome, we see that the downloaded file created in the temp folder (before being moved/renamed to its final name) is deleted by CAttachmentServices code when it is called by Chrome’s QuarantineFile function:

In contrast, in the older build of Chrome, we see that the temporary file cannot be deleted: the code that tries to delete it hits a SHARING_VIOLATION, since Chrome’s handle to the file didn’t offer SHARE_DELETE:

So… neat. It was basically an accident that this file could be saved in older Chrome.

Now, with all that said, we’re left with even more questions. For example, you’ll see that if you try to download a dangerous file type via a normal download when the Zone settings are configured as above, the download is blocked:

… but if you try to download our text file via the regular file download flow, it’s allowed:

So how can it be that if you try to perform a Save As on that same text file, it’s somehow blocked? What’s up with that? Chrome’s quarantine code runs on both files!

The secret is how the Windows Attachment Manager code decides whether a file is dangerous during the CAttachmentServices::Save call: it has to rely on the file’s extension to decide whether the type is dangerous. In the “Save As” case, the file’s extension is .tmp, whereas in the regular download manager case, the file has the correct and final extension (.txt).

Chrome’s quarantine code correctly notes that the call is supposed to happen on the final filename:

However, if you look at the SaveFileManager code, the quarantine (MotW annotation) step happens inside OnURLLoaderComplete just after the file is downloaded but before that temporary file gets renamed to its final name (inside RenameAllFiles).

You can confirm that the .tmp filename extension is indeed causing the problem by using the registry to temporarily declare that .tmp is a low-risk file extension:
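
A sketch of that tweak in PowerShell, using the Attachment Manager’s “Inclusion list for low file types” policy value (remember to remove it after experimenting):

$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Policies\Associations'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'LowRiskFileTypes' -Value '.tmp'   # treat .tmp as a low-risk extension
# Cleanup: Remove-ItemProperty -Path $key -Name 'LowRiskFileTypes'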

After you make this change, the Save As scenario works correctly.

I’ve filed a bug on Chrome’s SaveAs code to ensure that the file has the correct filename before the quarantine logic runs.

Authenticode in 2025 – Azure Trusted Signing

I’ve written about signing your code using Authenticode a lot over the years, from a post in 2015 about my first hardware token to a 2024 post about signing using a Digicert HSM.

Recently, Azure opened their Trusted Signing Service preview program up to individual users, and I decided to try it out. The documentation and features are still a bit rough, but I managed to get a binary cloud-signed in less than a day of futzing about.

For many individual developers Azure Trusted Signing will be the simplest and cheapest option, at $10/month. (Microsoft Employees get a $150/month Azure credit for their personal use, so trying it out cost me nothing.)

Note that I’ve never done anything with Azure or any other cloud computing service before: I’m a purely old-school client developer.

First, I visited my.visualstudio.com to activate my Microsoft Employee Azure Subscription credit for my personal Hotmail account. I then visited Azure.com in my Edge personal profile and created a new account. There is a bit of weirdness about adding 2FA via Microsoft Authenticator to the account, which I had already enabled: what appears to be happening is that you’re actually creating a new .onmicrosoft.com “shadow” account for your personal account.

With my account set up, in Azure Portal’s search box, I search for “Trusted Signing”:

and I click Create:

I fill out a simple form, inventing a new resource group (SigningGroup; no idea what this is for) and a new Account Name (EriclawSignerAccount; you’ll need this later), and make the important choice of the $9.99/month tier:

My new signing account then appears:

Click it and the side-panel opens:

It’s very tempting to click Identity validation now (since I know I’ll need to do that before getting the certificate), but instead you must first click Access control (IAM) and grant your account permission to request identity validation:

In the search box, search for Trusted, select the first role (Trusted Signing Certificate Profile Signer), then select Next:

In the Members tab, click Select members and pick yourself from the sidebar.

Click Select and then Review and Assign to grant yourself the role. Then repeat the process for the Trusted Signing Identity Verifier role.

With your roles assigned, it’s time to verify your identity. Click the Identity Validation button, change the dropdown from Organization to Individual, and click New Identity > Public:

(If you skipped a step, the “New Identity” button will remain disabled until you assign yourself to the role that allows you to use it.)

Fill in the form with your information. Ensure that it matches your legal ID (Driver’s License):

You’ll then be guided through a workflow involving the Microsoft Authenticator app on your phone and a 3rd party identity verification company. You’ll see a Success message once you correctly link your new Verified ID in the Authenticator app to the Azure Service, but confusingly, you’ll still see Action Required in the Azure dashboard for a few minutes:

Just be patient: after about 10 minutes, you’ll get an email saying the process is complete, and Action Required will change to Completed:

Next, click Certificate Profile to create a new certificate:

Click Create > Public

Fill out a simple form, selecting your verified identity and naming the profile (I used EricLawCert; you’ll need this later):

In short order, your certificate is ready for use:

Now, using the cloud certificate is somewhat more complicated than using a local certificate, but many folks are now doing fancy things like signing builds in the cloud as part of continuous integration processes, etc.

I, however, am looking for a drop-in replacement for my old manual local signing process, so I follow the guide here to get the latest version of SignTool, as well as the required DLIB file (which you can just unzip rather than using NuGet, if you want) that knows how to talk to the cloud. Select the default paths in the installer, because otherwise the thing doesn’t work. Run signtool.bat, which will pull the correct dependencies and then tell you where it put the real signtool.exe:

Now, create a file that will point at your cloud certificate profile; I named mine cloudcert.json. Be sure to put in the correct cloud endpoint URL and the account and profile names you selected when setting up the certificate:

{
  "Endpoint": "https://wcus.codesigning.azure.net",
  "CodeSigningAccountName": "EricLawSignerAccount",
  "CertificateProfileName": "EricLawCert",
  "CorrelationId": "set-this-to-whatever"
}

Then create a .bat file that invokes the newly installed signtool.exe, using the paths you chose for the DLIB, the JSON, and the file to be signed:

"C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64\signtool.exe" sign /v /debug /fd SHA256 /tr "http://timestamp.acs.microsoft.com" /td SHA256 /dlib "c:\tools\mstrustedSigningTools\dlib\bin\x64\Azure.CodeSigning.Dlib.dll" /dmdf "c:\tools\mstrustedSigningTools\cloudcert.json" alert.exe

Aside: Note the /tr and /td parameters indicating that the target file will be time-stamped. These are absolutely critical, because Azure Certificates are generated with only three days of validity:

… so if you fail to timestamp your signed file, the signature will expire when the certificate does just a few days later. With a proper timestamp, your signature will remain valid forever.

Run your batch file. If it doesn’t work and shows a bunch of local certificates that have nothing to do with the cloud, the DLIB isn’t working. Double-check the path you specified in the command line.

Now, at this point, you’ll probably get another failure complaining about DPAPI:

Currently, the DLIB package bundles an outdated version of System.Security.Cryptography.ProtectedData.dll from 2019. Rename that file to something else, then locate the current version of that DLL elsewhere on your system and copy it into the dlib\bin\x64 folder:

After you do so, run the script again and you’ll get to a browser login prompt. Exciting, but this next part is subtle!

You may see the account you think you want to use already listed in the login form. Don’t click it: if you do, you’ll get a weird error message saying that “External Users must be invited” or something of that nature. Instead, click Use another account:

Then click Sign-in Options:

Then click Sign in to an organization:

Specify your .onmicrosoft.com tenant name[1] here and click Next:

Only now do you log into your personal email account as normal, and after you do, you’ll get a success message in your browser and the signature will complete:

You can choose Properties on the Explorer context menu for the signed file to see your newly added signature:

Triumph!

You can now sign all file types that SignTool supports.
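
You can also verify the signature (including its timestamp) from the command line:

"C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64\signtool.exe" verify /pa /v alert.exe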

-Eric

[1] If you don’t know the right organization name, find it in the Users tool in the Azure portal: