While Microsoft corporate culture has evolved over the years, and the last twenty years have seen the introduction of new mass-communication mechanisms like Yammer and Teams, we remain an email-heavy company. Many product teams have related “Selfhost” or “Discussions” aliases (aka “Discussion Lists” or DLs) to which thousands of employees subscribe so they can ask questions and keep an eye on topics relevant to the product.
As a consequence, employees like me get hundreds or thousands of emails every day, the majority of which we don’t read beyond the subject line. To keep things manageable, I keep a long list of email sorting rules in Outlook, designed to sort inbound mail into folders based on whether each message was sent directly to me, to a particular alias, etc.
One such rule, which sorts mail from the Edge browser Selfhost alias, looks like this:
Generally, this approach works great. However, almost every day there are one or more messages sent by well-meaning employees that drop into my inbox, often beginning with something like:
[BCC’ing the large alias to reduce noise.]
Bill, I’ll take this issue offline and work with you directly to investigate.
Don’t be this guy.
Please, I’m begging you with all of my heart, do not do this!
Despite your best intentions (reducing noise for others), you’re instead dramatically amplifying the prominence of your message.
Moving a large DL to BCC breaks recipients’ email sorting rules, increasing the prominence of your email by dropping it into thousands of employees’ inboxes instead of the folders into which it would ordinarily be neatly sorted.
Taking an issue “offline” also often has the side-effect of hiding information from everyone else on the alias, and getting information is usually why they joined the alias in the first place!
Instead, please do this:
I’ll investigate this issue with Bill directly, and after we figure out what’s going on, I’ll reply back to the alias with our findings.
Email to Alias
Hey, Bill– Can you try <a,b,c> and send me the log files collected? I’ll take a look at them and figure out what’s going on so I can reply back to the Selfhost alias with the fix timeline and any workarounds.
Email sent only to Bill
Thanks for your help in saving attention!
-Eric
Postscript: There is a mechanism you can use in a rule to detect that you’ve been BCC’d on a message. If you’ve received a message due to a BCC, there will be a message header added:
X-MS-Exchange-Organization-Recipient-P2-Type: Bcc
Allegedly, if you put a rule checking for that header before any other sorting rules, Exchange will then try to put BCC’d replies into the same folder as the message being replied to.
I’m going to give this a try, but everything above still stands as this is NOT a trick most users are familiar with.
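That header can also be checked programmatically. Here’s a minimal sketch in Python of the detection logic; the header name comes from above, but its presence and casing may vary by Exchange environment, and the `delivered_via_bcc` helper is made up for illustration:

```python
from email.message import EmailMessage

# Header that Exchange stamps on copies delivered to BCC recipients.
BCC_HEADER = "X-MS-Exchange-Organization-Recipient-P2-Type"

def delivered_via_bcc(msg: EmailMessage) -> bool:
    # Treat the message as BCC-delivered if the header is present with value "Bcc".
    return msg.get(BCC_HEADER, "").strip().lower() == "bcc"
```

The same check is what an “apply before all other rules” sorting rule would express in Outlook’s rule UI.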
Twenty years ago (!!?!), Fiddler saw its first official release. I still use Fiddler for some task or another almost every working day. I stick with my version (Fiddler Classic), although some of the newer tools in the Fiddler Universe are compelling for non-Windows platforms.
I presented some slides in a birthday celebration that Progress Telerik streamed online this morning. Those slides might not make sense without the audio, but most of the content is in a presentation I gave at the Codemash conference, and I’ve shared the slides and audio from that talk.
For decades, Fiddler has been a huge part of not only my professional life, but my personal life as well, so this milestone is a bittersweet one. I could write pages, but I won’t. Maybe some day.
Happy birthday, Fiddler, and thank you to everyone who has joined me on the journey over the decades!
In a recent post, I explored some of the tradeoffs engineers must make when evaluating the security properties of a given design. In this post, we explore an interesting tradeoff between Security and Privacy in the analysis of web traffic.
Many different security features and products attempt to protect web browsers from malicious sites by evaluating the target site’s URL and blocking access to the site if a reputation service deems the target site malicious: for example, if the site is known to host malware, or perform phishing attacks against victims.
When a web browser directly integrates with a site reputation service, building such a feature is relatively simple: the browser introduces a “throttle” into its navigation or networking stack, whereby URLs are evaluated for their safety, and a negative result causes the browser to block the request or navigate to a warning page. For example, Google Chrome and Firefox both integrate the Google Safe Browsing service, while Microsoft Edge integrates the Microsoft Defender SmartScreen service. In this case, the privacy implications are limited — URL reputation checks might result in the security provider knowing what URLs are visited, but if you can’t trust the vendor of your web browser, your threat model has bigger (insurmountable) problems.
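The “throttle” idea can be sketched in a few lines; everything here (the function names, the verdict type, the blocklist) is invented for illustration, standing in for a real reputation service like Safe Browsing or SmartScreen:

```python
from enum import Enum

class Verdict(Enum):
    SAFE = 1
    MALICIOUS = 2

# Stand-in for a real reputation service lookup.
BLOCKLIST = {"evil.example"}

def check_reputation(url: str) -> Verdict:
    # Extract the hostname and consult the (toy) reputation data.
    host = url.split("://", 1)[-1].split("/", 1)[0]
    return Verdict.MALICIOUS if host in BLOCKLIST else Verdict.SAFE

def on_navigation(url: str) -> str:
    # The throttle: a bad verdict redirects to a warning page
    # instead of committing the navigation.
    if check_reputation(url) is Verdict.MALICIOUS:
        return "about:blocked"
    return url
```

A real browser performs this check asynchronously in its navigation or network stack, but the control flow is the same: evaluate, then either proceed or block.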
However, beyond direct integration into a browser, there are other architectures.
One choice is to install a security addon (like Microsoft Defender for Chrome or Netcraft) into your browser. In that case, the security addon can collect navigation and download URLs using the browser’s extension API events and perform its task as if its functionality were directly integrated into the browser. Similarly, Apple offers a new platform API that allows client applications to report URLs they plan to fetch to an extensible lookup service, through which a security provider can indicate whether a given request should be blocked.
In the loosest coupling, a security provider might not integrate into a web browser or client directly at all, instead providing its security at another architectural layer. For example, a provider might watch unencrypted outbound DNS lookups from the PC and block the resolution of hostnames which are known to be malicious. Or, a provider might watch network connections to an outbound site and block connections where the URL or hostname is known to be malicious. This sort of inspection might be achieved by altering the OS networking stack, or plugging into an OS firewall or similar layer.
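The DNS-based approach works because an unencrypted DNS query carries the looked-up hostname in cleartext. A simplified sketch (the blocklist and helper names are made up; a real filter would sit in the OS networking stack rather than parse packets in Python):

```python
def qname_from_query(packet: bytes) -> str:
    # A DNS query's question section follows the fixed 12-byte header;
    # the name is a sequence of length-prefixed labels ending in a zero byte.
    labels, pos = [], 12
    while packet[pos]:
        n = packet[pos]
        labels.append(packet[pos + 1:pos + 1 + n].decode())
        pos += 1 + n
    return ".".join(labels)

def should_block(packet: bytes, blocklist: set[str]) -> bool:
    # Drop lookups for known-bad hostnames.
    return qname_from_query(packet) in blocklist
```

Once the client switches to DNS-over-HTTPS, this observation point disappears: the query bytes above are wrapped inside an encrypted HTTPS exchange.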
A common choice is to configure all clients to route their traffic through a proxy or VPN that breaks TLS as a MitM (e.g. Entra Internet Access) and provides the security software access to unencrypted traffic.
The decision of where to integrate security software is a tradeoff between context and generality. The higher you are in the stack, the more context you have (e.g. you may wish to only check URLs for “active” content while ignoring images), while the lower you are in the stack, the more general your solution is (it is less likely to be inadvertently bypassed).
The advantage of integrating security software deep in the OS networking layer is that it can then protect any clients that use the OS networking stack, even if those clients aren’t widely known, and even if the client was written long after your security software was developed. By way of example, Microsoft Defender’s Network Protection and Web Content Filtering features rely upon the Windows Filtering Platform to inspect and block network connections.
For unencrypted HTTP requests, inspecting traffic at the network layer is easy — the URL is found directly in the headers on the outbound request, and because the HTTP traffic is unencrypted plaintext, parsing it is trivial. Unfortunately for security vendors, the story for encrypted HTTPS traffic is much more complicated — the whole point of HTTPS is to prevent a network intermediary (including a firewall, even on the same machine) from being able to read the traffic, including the URL.
To spy on HTTPS from the networking level, a security vendor might reconfigure clients to leak encryption keys, or it might rely on a longstanding limitation in HTTPS to sniff the hostname from the Client Hello message that is sent when the client is setting up the encrypted HTTPS channel with the server.
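That “longstanding limitation” is the fact that the Server Name Indication (SNI) extension travels in plaintext inside the ClientHello. The sketch below builds a minimal, synthetic ClientHello (real ones carry many more fields and extensions) and then extracts the SNI the way a network sniffer would:

```python
import struct

def build_client_hello(hostname: str) -> bytes:
    # Synthetic minimal ClientHello for demo purposes only.
    name = hostname.encode()
    sni_entry = b"\x00" + struct.pack("!H", len(name)) + name      # host_name entry
    sni_list = struct.pack("!H", len(sni_entry)) + sni_entry
    ext = struct.pack("!HH", 0, len(sni_list)) + sni_list          # type 0 = server_name
    exts = struct.pack("!H", len(ext)) + ext
    body = (b"\x03\x03" + b"\x00" * 32      # client_version + random
            + b"\x00"                       # empty session_id
            + b"\x00\x02\x13\x01"           # one cipher suite
            + b"\x01\x00"                   # null compression
            + exts)
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body      # type 1 = ClientHello
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

def extract_sni(record: bytes):
    # Walk the handshake body to the extensions block and find server_name.
    if record[0] != 0x16:                   # not a TLS handshake record
        return None
    hs = record[5:]
    if hs[0] != 0x01:                       # not a ClientHello
        return None
    pos = 4 + 2 + 32                        # handshake header, version, random
    pos += 1 + hs[pos]                      # session_id
    pos += 2 + int.from_bytes(hs[pos:pos + 2], "big")   # cipher_suites
    pos += 1 + hs[pos]                      # compression_methods
    end = pos + 2 + int.from_bytes(hs[pos:pos + 2], "big")
    pos += 2
    while pos + 4 <= end:
        etype = int.from_bytes(hs[pos:pos + 2], "big")
        elen = int.from_bytes(hs[pos + 2:pos + 4], "big")
        if etype == 0:                      # server_name extension
            data = hs[pos + 4:pos + 4 + elen]
            name_len = int.from_bytes(data[3:5], "big")
            return data[5:5 + name_len].decode()
        pos += 4 + elen
    return None
```

This is exactly why ECH matters: once the real ClientHello is encrypted, a parser like this sees only the decoy name.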
Security products based on sniffing network traffic (DNS or server connections) are increasingly encountering a sea change: web browser vendors are concerned about improving privacy, and have introduced a number of features to further constrain the ability of a network observer to view what the browser is doing. For example, DNS-over-HTTPS (DoH) means that browsers will no longer send unencrypted hostnames to the DNS server when performing lookups. Instead, a secure HTTPS connection is established to the DNS server, and all requests and responses are encrypted to hide them from network observers.
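To make the change concrete, here is a sketch of what a DoH GET request looks like on the wire: the DNS query is built in ordinary RFC 1035 wire format, but then base64url-encoded into an HTTPS URL (per RFC 8484) so that a network observer sees only an encrypted HTTPS exchange. The server URL is just an example:

```python
import base64
import struct

def doh_query_url(hostname: str, server: str = "https://dns.google/dns-query") -> str:
    # DNS header: id=0, RD flag set, one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, zero-terminated.
    qname = b"".join(bytes([len(l)]) + l.encode() for l in hostname.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 65, 1)   # QTYPE=HTTPS(65), QCLASS=IN
    # RFC 8484: base64url without padding, in the "dns" query parameter.
    q = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{server}?dns={q}"
```

Because the request and response travel inside TLS to the resolver, a passive observer on the local network learns nothing about which hostname was looked up.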
Additionally, a new feature called Encrypted Client Hello (ECH) means that network observers can no longer inspect the first few packets of a TLS connection to see which server hostname was requested. For example, in loading the https://tls-ech.dev test page with ECH enabled, the client sends a “decoy” Server Name Indication (SNI) of public.tls-ech.dev rather than the real server name:
When a browser looks up a domain in DNS, the HTTPS Resource Record declares whether ECH should be used and, if so, the decoy SNI value to send in the OuterHello. You can see this record using a DNS viewer like https://dns.google:
The ECH data is base64-encoded in the record; you can decode it to read the decoy SNI value.
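As a sketch of that decoding step, the code below parses the `public_name` out of an ECHConfigList, assuming the draft-13 wire layout (version 0xfe0d); real records may use other versions or extensions, and the builder here constructs a purely synthetic record with a placeholder key:

```python
import struct

def build_ech_config_list(public_name: str) -> bytes:
    # Synthetic ECHConfig (draft-13 layout) for demo purposes only.
    public_key = b"\x00" * 32                            # placeholder X25519 key
    key_config = (b"\x01"                                # config_id
                  + struct.pack("!H", 0x0020)            # kem_id: DHKEM(X25519)
                  + struct.pack("!H", len(public_key)) + public_key
                  + struct.pack("!H", 4) + b"\x00\x01\x00\x01")  # one cipher suite
    name = public_name.encode()
    contents = (key_config
                + b"\x40"                                # maximum_name_length
                + bytes([len(name)]) + name              # public_name (the decoy SNI)
                + b"\x00\x00")                           # no extensions
    config = struct.pack("!HH", 0xFE0D, len(contents)) + contents
    return struct.pack("!H", len(config)) + config

def ech_public_name(config_list: bytes) -> str:
    # Skip the list length and the per-config version/length header,
    # then walk the variable-length fields to reach public_name.
    version, length = struct.unpack("!HH", config_list[2:6])
    body = config_list[6:6 + length]
    p = 1 + 2                                            # config_id + kem_id
    p += 2 + int.from_bytes(body[p:p + 2], "big")        # public_key
    p += 2 + int.from_bytes(body[p:p + 2], "big")        # cipher_suites
    p += 1                                               # maximum_name_length
    name_len = body[p]
    return body[p + 1:p + 1 + name_len].decode()
```

In a real record fetched from DNS, the base64 payload would first be decoded with `base64.b64decode` before parsing.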
Both Chromium and Firefox have support for ECH. Originally, both DoH and ECH were disabled-by-default in Chrome, Edge, and Firefox for “managed” devices but it appears this may have changed. A test site for ECH can be used to see whether your browser is using ECH, and you can follow these steps to see if a server supports ECH.
A growing percentage of servers support HTTP3/QUIC, an encrypted protocol that runs over UDP. QUIC’s use of “initial encryption” precludes trivial extraction of the server’s name from the UDP packet.
Blinded by these privacy improvements, a security product running at the network level can no longer see hostnames, and may be forced to fall back to IP reputation alone, which suffers many challenges.
Unfortunately, this is a direct tradeoff between security and privacy: with these privacy features enabled, security products that aren’t directly integrated into browsers cannot perform their function, but disabling them means that any network-based observer can learn more about what sites users are visiting.
Browser Policies
Browsers offer policies to allow network administrators to make their own tradeoffs by disabling security-software-blinding privacy changes.
Chrome
DNS-over-HTTPS (already disabled by default on “Managed” devices)
Current versions of Safari reportedly do not offer a policy to disable QUIC. You may be successful in blocking QUIC traffic at the firewall level (e.g. block UDP traffic to remote port 443).
Stay safe out there!
-Eric
PS: It’s worth understanding the threat scenario here– In this scenario, the network security component inspecting traffic is looking at content from a legitimate application, not from malware. Malware can easily avoid inspection when communicating with its command-and-control (C2) servers by using non-HTTPS protocols, or by routing its traffic through proxies or other communication channel platforms like Telegram, Cloudflare tunnels, or the like.
PPS: Sniffing the SNI is additionally challenging in several other cases:
1) If the browser uses QUIC, the HTTP/3 traffic isn’t going over a traditional TLS-over-TCP channel at all.
2) If the browser is using a VPN service (or equivalent), then the web traffic’s routing information is encrypted before reaching the network stack, and the ClientHello is encrypted and thus the SNI cannot be read.
3) If the browser is connected to a Secure Web Proxy (e.g. TLS-to-the-proxy itself, like the Edge Secure Network feature) then the traffic’s routing information is encrypted before reaching the network stack and ClientHello is encrypted and thus the SNI cannot be read.
4) If the browser uses HTTP/2 connection coalescing, it might reuse a single HTTP/2 connection across multiple different origins. For example, if your admin blocks sports.yahoo.com, you can get around that block by first visiting www.yahoo.com, which creates an unblocked HTTP/2 connection that can later be reused by requests to the sports.yahoo.com origin.
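A rough sketch of the coalescing-eligibility check (simplified: real browsers also consult the certificate chain and ORIGIN frames, and wildcard matching here is looser than proper certificate name matching):

```python
def can_coalesce(conn_ip: str, conn_cert_names: list[str],
                 new_host: str, new_ip: str) -> bool:
    # A browser may reuse an existing HTTP/2 connection for a new origin
    # if the server IP matches and the connection's certificate covers
    # the new hostname.
    if new_ip != conn_ip:
        return False
    return any(
        name == new_host
        or (name.startswith("*.") and new_host.endswith(name[1:]))
        for name in conn_cert_names
    )
```

This is why a hostname-based block at the network layer can be bypassed: the blocked origin’s requests ride on a connection whose SNI named an allowed host.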
While an enterprise browser policy is available to disable QUIC, there is not yet a policy to disallow H2 coalescing in Chromium-based browsers. In Firefox, this can be controlled via the network.http.http2.coalesce-hostnames preference.
Absolute security is simple– put your PC in a well-guarded vault, and never power it on. But that’s not what PCs are built for, and good luck finding a job that would pay you for such advice. Security Engineering (like all engineering) is a story of tradeoffs. Tradeoffs commonly take place across multiple dimensions:
Recently, an admin reached out to ask about the recommendations in the Microsoft Security Baselines for Edge. In particular, they were curious about why the baselines don’t mandate the disablement of the WebSerial, WebBluetooth, and WebUSB platform APIs, as each of these represents attack surface against devices from the browser, and the Filesystem Access API which could be used to abuse files on the user’s disk.
When an enterprise Security Admin considers whether to disable these APIs via Group Policy, it’s important to consider the full context to decide whether blocking browser APIs will reduce attack surface, or actually increase it.
In this case, these powerful APIs enable browsers to perform tasks that previously required downloading native applications. For example, instead of needing to download some random full-trust native code application to program my kid’s “smart” teddy bear with his name, favorite song, and favorite food:
… I can just go to a website that supports WebUSB, type in the desired information, manually grant the website permission to use WebUSB, and then update the info on the device. The overall attack surface is dramatically reduced by WebUSB, because my task no longer requires running unsandboxed/full-trust code from a teddy bear vendor, who may or may not have good secure development and deployment practices. At worst, I may have to reset my bear, but the browser sandbox means that my PC is not at risk.
Similarly, a few years back, I bought a cheap clock from Alibaba in China.
Setting the time on the clock requires configuration via Bluetooth. Instead of downloading some random full-trust program from an overseas vendor I’ve never heard of, I can use a trivial web app that uses Web Bluetooth to program the device.
Again, there’s no dangerous third-party code running on my PC, and no access to any device is granted unless I specifically choose to allow it.
Ultimately, all features represent attack surface. The key question for security engineers (whether platform engineers or security admins) is whether or not a given feature reduces or improves overall security — its net security impact.
My position is that these APIs, on the whole, improve the security posture of an environment to the extent that they are able to displace higher-risk alternatives (native apps).
Threat Models Aren’t Universal
As we’ve discussed before, however, threat models aren’t one-size-fits-all.
Security Admins must keep in mind that the security baselines are just that, baselines. If their environment is such that they’ve locked down their PCs such that users cannot run native apps of their choosing, then blocking advanced Web APIs probably will not harm their security posture, even if it might harm their users’ productivity.
Follow the Spirit of the Baseline
Admins should further take care to ensure that their use of the baselines maximizes their overall security posture. We recently encountered a customer who had disabled Basic Authentication over HTTP in Edge as a part of following the Edge security baseline:
However, this customer had hundreds of legacy devices that require Basic authentication and are only accessible over HTTP. To enable them to keep working in the face of the baseline’s restriction, the customer configured Edge such that these devices’ IPs would load in Internet Explorer Mode. The customer treated this as a workaround for the Security Baseline. This only worked because they hadn’t also enabled the equivalent WinINET policy that blocks Basic-over-HTTP in IE.
Thus, the customer could claim compliance with the Edge security baseline, but their workaround put the organization at far greater overall risk, because IE Mode represents a huge attack surface. Allowing that attack surface to be combined with MiTM-able HTTP networking means that any network-based attacker could easily exploit vulnerabilities in the legacy IE code, outside of the strong Chromium sandbox in which Edge loads content. The customer would be far more secure if they simply ignored the “No Basic over HTTP” requirement for their unique environment.
When moving from other development platforms to the web, developers often have a hard time understanding why the web platform seems so … clunky. In part, that’s because the platform is pretty old at this point (>25 years as an app platform), partly because changes in form factors and paradigms (particularly mobile) have introduced new constraints, and partly because the Web is just lousy with bad actors hoping to do bad things.
One “simple thing” desktop developers expect to be able to do is to prompt the user to take some action before leaving an application (e.g. “Save your work?”). For years, browsers offered a simple mechanism for doing so: closing the browser, navigating to a different page, or hitting the Refresh button would trigger an OnBeforeUnload() event handler. If the developer’s implementation of that function returned a string, that string would be shown to the user:
Unfortunately, tech scam sites would often abuse this and use it as yet another opportunity to threaten the user, so browsers decided that the most expedient solution was to simply remove the site’s control over the prompt string and show a generic “Changes might not be saved” string, hoping that this reflected the webapp’s true concern:
For the dialog to even display, there must have been an earlier user-gesture on the page; if there was no gesture, the dialog is suppressed entirely.
For some applications, this prompt works just fine, but for complicated workflow applications, the user’s issue might have nothing to do with “saving” anything at all. A user who wants to satisfy the app’s concern must click “Stay on page” and then scroll around to try to figure out where the problem is.
You might hope that you could just update your page’s DOM to show an alert message beside the built-in prompt (“Hey, before you go, can you verify that the setting <X> is what you wanted?”), but alas, that doesn’t work — DOM updates are suppressed until after the dialog is dismissed.
Browsers won’t even emit the site’s string into the DevTools console so that an advanced user could figure out what a site was complaining about without manually stepping through the code in a debugger.
Web Platform evangelists tend to reply with workarounds, trying to guide developers toward thinking in a more “webby” way (“Oh, just store all the data in IndexedDB and restore the state the next time the user visits”), and while well-intentioned, this advice is often hard to follow for one reason or another. So users suffer.
I’m sad about this state of affairs — I don’t like giving bad guys power over anyone. This truly feels like a case where the web platform threw the baby out with the bathwater, expediently deciding that “We cannot have nice things. No one may speak, because bad people might lie.” The platform could’ve chosen to have a more nuanced policy, e.g. allowing applications to show a string if there was some indication of user trust (e.g. site engagement, installed PWA, etc).