Browser Security Bugs that Aren’t: JavaScript in PDF

A fairly common security bug report is of the form: “I can put JavaScript inside a PDF file and it runs!”

For example, open this PDF file with Chrome, and you can see the alert(1) message displayed:

Support for JavaScript within PDFs is by-design and expected by the developers of PDF rendering software, including common browsers like Chrome and Edge. Much like HTML, PDF files are an active content type and may contain JavaScript.

Periodically, less experienced security researchers excitedly file this issue against browsers, and those reports are quickly resolved “By Design.”

Periodically, more experienced security researchers excitedly file this issue against sites and applications that are willing to host or transfer untrusted PDF files, arguing that this represents a “Stored Cross-Site Scripting vulnerability.”

Their confusion here is somewhat more understandable: if a website allows a user to upload an HTML document containing script, and then serves that HTML document from their domain, any script within it will run in the security context of the serving domain. That describes a classic Stored XSS attack, and it presents a security threat because the embedded script can steal or manipulate cookies (by accessing the document.cookie property), manipulate web platform storage (IndexedDB, localStorage, etc.), conduct request forgery attacks from a 1st-party origin, and so on.

The story for PDF documents is very different.

The Chrome Security FAQ describes the limitation tersely, noting that the set of bindings provided to the PDF is more limited than those provided by the DOM to HTML documents, and that PDFs do not get any ambient authority based upon the domain from which they are served.

What does that mean? It means that, while PDF’s JavaScript does run, the universe the script runs in is limited: there’s no access to cookies or storage, very limited ability to make requests (e.g. you can navigate the document’s window elsewhere after a user gesture, but that’s about it), and no ability to make use of the Web Platform’s powerful capabilities exposed by the HTML Document Object Model (DOM) objects like document, window, navigator, et cetera. While the capabilities of JavaScript in PDF are extremely limited, they’re not non-existent, and PDF engine software must take care to avoid introducing new capabilities that void the safety assumptions of PDF-handling code.
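
To make that concrete, here’s a minimal sketch of script embedded in a PDF (e.g. run via an OpenAction), assuming a viewer that supports Acrobat-style JavaScript. The PDF scripting world has its own objects like app, but none of the HTML DOM:

app.alert("Hello from inside the PDF!");   // runs: 'app' is part of the PDF scripting API
// There is no HTML DOM here -- in a typical viewer, identifiers like
// document, window, and fetch are simply undefined, so there are no
// cookies, no localStorage, and no XMLHttpRequest to abuse.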

Restricting JavaScript in PDF

Engineers should take care that handling of JavaScript in PDF respects app/user settings. For example, if the user turns off JavaScript for a site, PDFs hosted on that site shouldn’t allow script either. This works properly in Chrome and Edge.

Firefox’s global javascript.enabled toggle from the about:config page doesn’t impact script inside its PDF viewer:

Instead, Firefox offers an individual pdfjs.enableScripting preference that can be configured from the about:config page.

Chromium currently ignores Content-Security-Policy headers on PDF responses because it renders PDF files using web technologies that could otherwise be disallowed by the CSP, leading to user confusion and webdev annoyance.

A Slow 10K

I “ran” the Capitol 10K for a third time on Sunday. It did not go well, but not for any of the reasons I worried about. The rain stopped hours before the race, and the course wasn’t wet. My knees and feet didn’t complain. My heart rate felt pretty much under control. I had found the charger for my running watch and got my headphones charged.

No, like my first race, I got screwed by nutrition. After mile 2, I was feeling heartburn for the first time in months (years?), and I began frequently dropping to a walk while I worried about it. Was it the flavored coffee I had in the morning? My dubious dinner (Subway) or the beer I had with it? In hindsight, the real problem was probably some red wine I’d had the prior afternoon — I don’t drink wine often anymore, and I’m more sensitive to it. Ah well, whatever the cause, you can see the impact.

You can see where things went off the rails. My first half was a non-disastrous 4 minutes slower than last year, but my second half took me 6 minutes longer than the first. The only bright spot was the short uphill “King of the Hill” segment, which I completed at a pace of 8:15/mi vs. last year’s 9:32/mi. (Update: I misread this, and actually took much longer on the King of the Hill segment than last year, but further research strongly suggests that they pick a different length/hill each race).

I spent miles 3 to 5 mostly walking through the neighborhoods that I’d joyfully sped through last year, watching my fellow “A” group runners blow by, and wondering how I’d ever run a half marathon again when this puny 10K was killing me.

By mile 5, I was starting to feel somewhat better but I still held off on picking up the pace, recognizing that this was not going to be my day. Unlike last year, I didn’t hear my (phantom) children cheering as I turned the final corner after the bridge and sped up for the finish line.

I finished in 1:06:15, a pokey 10:40/mi pace, 14 minutes slower than last year, 8.5 minutes slower than my fall Daisy Dash, and just 83 seconds faster than my first Cap10K. Not a great result.

I’ve signed up for the Sunshine 10K next month to hopefully find myself back on track, and also got the early-bird discount for signing up for the 2025 Cap10K.

Better luck (and prep) next time?

-Eric

PS: Bonus pic from yesterday’s (cloudy) total eclipse in Austin.

Attacker Techniques: Gesture Jacking

A few years back, I wrote a short explainer about User Gestures, a web platform concept whereby certain sensitive operations (e.g. opening a popup window) will first attempt to confirm whether the user intentionally requested the action.

As noted in that post, gestures are a weak primitive: while checking whether the user clicked, tapped, or pressed a key is simple, gestures poorly capture the design ideal of signaling an unambiguous user request.

Hijacking Gestures

A recent blog post by security researcher Paulos Yibelo clearly explains a class of attack whereby a user is enticed to hold down a key (say, Enter), and that gesture is treated both as acceptance of a popup window and as activation of a button on a target victim website. If the button on that website performs a dangerous operation (“Grant access”, “Transfer money“, etc.), the victim’s security may be irreversibly compromised.

The author calls the attack a cross window forgery, although I’d refer to it as a gesture-jacking attack, as it’s most similar to the ClickJacking attack vector which came to prominence in 2008. Back then, browser vendors responded by adding defenses against ClickJacking of subframes, first with IE’s X-Frame-Options response header, and later with the frame-ancestors directive in Content Security Policy. At the time, cross-window ClickJacking was recognized as a threat unmitigated by the new defenses, but it wasn’t deemed an especially compelling attack.

In contrast, the described gesture-jacking attack is more reliable, as it does not rely upon the careful positioning of windows, timing of clicks, and the vagaries of a user’s display settings. Instead, the attacker entices the user to hold down a key, spawns a victim web page, and the keydown is transferred to the victim page. Easy breezy.

Some folks expected that this attack shouldn’t be possible (“browsers have popup blockers, after all!”). Unfortunately for their hopes and dreams, the popup blocker isn’t magical: it blocks a popup only if the popup isn’t preceded by a user gesture. Holding the Enter key is a user gesture, so the attacker’s page is allowed to spawn a popup window to a victim site.

The Core of the Threat

As with many cool attack techniques, the core of this attack depends upon a built-in web platform behavior. Specifically, when you navigate to a URL containing a fragment:

…the browser will automatically scroll to the element (if any) whose id matches the fragment’s value, and set focus to it if possible. As a result, keyboard input will be directed to that element.

The Web Platform permits a site that creates a popup to set the fragment on its URL, and also allows it to set the size and position of the popup window.
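
Putting those pieces together, here’s a minimal sketch of the attack pattern (the victim URL, element id, and window geometry below are purely illustrative):

// The attacker's page instructs the victim to "press and hold Enter to continue"...
addEventListener('keydown', (e) => {
  if (e.key !== 'Enter') return;
  // Holding Enter is a user gesture, so the popup blocker permits this popup.
  // The #confirm fragment makes the browser focus the victim page's element
  // with id="confirm"; the still-held Enter key then activates that button.
  window.open('https://victim.example/approve#confirm',
              '_blank', 'width=120,height=120,left=5000,top=5000');
}, { once: true });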

Web Page Defenses

As noted in Paulos Yibelo’s blog post, a website can help protect itself against unintentional button activations by not adding id attributes to critical buttons, or by randomizing the id value on each page load. Or the page can “redirect” on load to strip off an unexpected URL Fragment.
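
A minimal sketch of that last defense (stripping an unexpected fragment as early as possible during load; replaceState avoids an extra navigation):

if (location.hash) {
  // Remove the fragment so it can't auto-focus one of our buttons.
  history.replaceState(null, '', location.pathname + location.search);
  // Also clear any focus the fragment may have already set.
  if (document.activeElement instanceof HTMLElement) document.activeElement.blur();
}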

For Chromium-based browsers, an additional option is available: a document can declare that it doesn’t want the default button-focusing behavior.

The force-load-at-top document policy (added as an opt-out for the cool Scroll-to-Text-Fragment feature) allows a website to turn off all types of automatic scrolling (and focusing) from the fragment. In Edge and Chrome, you can compare the difference between a page loaded:

Browser support is not universal, but Firefox is considering adding it.
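
For reference, the opt-in is a single response header on the HTML document (shown below using Chromium’s Document-Policy header syntax; as noted, verify support in your target browsers before relying on it):

Document-Policy: force-load-at-top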

WebDev Best Practices
  1. Set force-load-at-top (if appropriate) for your scenario, and/or remove id values from sensitive UI controls (e.g. for browsers that don’t support document policy)
  2. Use frame-ancestors CSP to prevent framing
  3. Auto-focus/make default the safe option (e.g. “Deny”)
  4. Disable sensitive UI elements (see the sketch after this list) until:
    • Your window is sized appropriately (e.g. large enough to see a security question being asked)
    • The element is visible to the user (e.g. use IntersectionObserver)
    • The user has released any held keys
    • An activation cooldown period (~500ms-1sec) has elapsed, giving the user a chance to read the prompt. Restart the cooldown each time a key is held, your window gains focus, or your window moves.
  5. Consider whether an out-of-band confirmation would be possible (e.g. a confirmation prompt shown by the user’s mobile app, or message sent to their email).
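
Here’s a minimal sketch of items 3 and 4 above (the button id is hypothetical): keep a sensitive button disabled until it’s visible, no key is held, and a short cooldown has elapsed since the last key-hold or focus change:

const grantButton = document.getElementById('grant');  // hypothetical sensitive button
let keyHeld = false;
let cooldownTimer = null;

function restartCooldown() {
  grantButton.disabled = true;
  clearTimeout(cooldownTimer);
  cooldownTimer = setTimeout(() => { if (!keyHeld) grantButton.disabled = false; }, 750);
}

addEventListener('keydown', () => { keyHeld = true; grantButton.disabled = true; });
addEventListener('keyup',   () => { keyHeld = false; restartCooldown(); });
addEventListener('focus',   restartCooldown);  // window regained focus: restart the timer

// Only consider enabling the button once it's actually visible to the user.
new IntersectionObserver((entries) => {
  if (entries.some((entry) => entry.isIntersecting)) restartCooldown();
  else grantButton.disabled = true;
}).observe(grantButton);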

Beyond protecting the decision itself, it’s a good idea to allow the user to easily review (and undo) security decisions within their settings, such that if they do make a mistake they might be able to fix it before the damage is done.

Attacks on Browser UI

It’s not just websites that ask users to make security decisions or confirm sensitive actions. For instance, consider these browser prompts:

Each of these asks the user to confirm a security-critical or privacy-critical change.

As you might expect, attackers have long used gesture-jacking to abuse browser UI, and browser teams have had to make many updates to prevent the abuse:

Common defenses to protect browser UI have included changing the default button to the safe choice (e.g. “Deny”) and introducing an “input protection” activation timer.

Stay safe out there!

-Eric

pushState and URL Blocking

The Web Platform offers a handy API called pushState that allows a website’s JavaScript to change the URL displayed in the address bar to another URL within the same origin without sending a network request and loading a new page.

The pushState API is handy because it means that a Web Application can change the displayed URL to reflect the “current state” of the view of the application without having to load a new page. This might be described as a virtual navigation, in contrast to a real navigation, where the browser unloads the current page and loads a new one with a different URL.

For example, if I click the Settings link in my mail application, the URL may change from https://example.com/Inbox to https://example.com/Settings while JavaScript in the page swaps in the appropriate UI to adjust the app’s settings.

function onSettingsClick() {
  ShowSettingsWidgets();
  history.pushState({}, '', '/Settings');
}

Then when I click the “Apply” button on the Settings UI, the Settings widgets disappear and the URL changes back to https://example.com/Inbox.

function onApplySettingsClick() {
  CloseSettingsWidgets();
  history.pushState({}, '', '/Inbox');
}

Why would web developers bother changing the URL at all? There are three major reasons:

  1. Power users may look at the address bar to understand where they are within a webapp.
  2. If the user hits F5 to refresh the page, the currently-displayed URL is used when loading content, allowing the user to return to the same view within the app.
  3. If the user shares or bookmarks the URL, it allows the user to return to the same view within the app.

pushState is a simple and powerful feature. Most end-users don’t even know that it exists, but it quietly improves their experience on the web.

Unfortunately, this quiet magic has a downside: Most IT Administrators don’t know that it exists either, which can lead to confusion. Over the last few years, I’ve received a number of inquiries of the form:

“Eric — I’ve blocked https://example.com/Settings and confirmed that if I enter that URL directly in my browser, I get the block page. But if I click the Settings link in the Inbox page, it’s not blocked. But then I hit F5 and it’s blocked. What’s up with that??”

The answer, as you might guess, is that the URL blocking checks are occurring on real navigations, but not virtual navigations.

Consider, for example, the URLBlocklist policy for Chromium that allows blocking navigation to a specific URL. By default, attempting to directly navigate to that URL with the policy set results in a block page:

But if you instead navigate to the root example.com/ url, then use pushState to change the URL to the same URL, no block occurs:

…Until you hit F5 to refresh the page, at which point the block is applied:

Similarly, you can see the same thing with test pages for SmartScreen or SafeBrowsing. If you click on the first test link in the SafeBrowsing test page, you’ll get Chrome’s block page:

…but if you instead perform a virtual navigation to the same URL, no block occurs until/unless you try to refresh the page:

Similarly, if you create a Custom URL Indicator in Defender’s Network Protection for a specific URL path, you’ll find that a direct navigation to that URL is blocked in Edge, but not if you change the URL using pushState.

Blocks implemented by browser extensions are typically bypassed via pushState because the chrome.webNavigation events that fire for real navigations (such as onBeforeNavigate) do not fire when pushState is called. An extension that wishes to capture a URL change caused by pushState must instead monitor an event like tabs.onUpdated (or webNavigation.onHistoryStateUpdated).
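
For illustration, a minimal sketch of a Manifest V3 background script (assuming the "tabs" permission) that notices URL changes which never hit onBeforeNavigate:

chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.url) {
    // Fires for real navigations *and* for pushState/replaceState URL changes.
    console.log(`Tab ${tabId} URL changed (possibly via pushState): ${changeInfo.url}`);
  }
});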

Debugging

To see whether a virtual navigation is occurring due to the use of pushState, you can open the F12 Developer Tools and enter the following command into the console:

window.history.pushState = (state, title, url) => { alert('Tried to pushState with:\n' + state + '\n' + title + '\nURL: ' + url); debugger; };

This will (until reload) replace the current page’s pushState implementation with a mock function that shows an alert and breaks into the debugger.

When clicking on the “Signup” link on the page, we see that instead of a true navigation, we break into a call to the pushState API:

Intercepted virtual navigation

You can experiment with pushState() on a simple test page.

Security Implications?

This pushState behavior seems like a giant security bug, right?

Well, no. In the web platform, the security boundary is the origin (scheme://host:port). As outlined in the 2008 paper Beware Finer-Grained Origins, trying to build features that operate at a level more granular than an origin is doomed. For example, a special security policy applied only to https://example.com/subpath (a specific path within the origin) cannot be enforced securely.

Why not?

Because /subpath is in the same origin as example.com, any page on example.com can interact with (e.g. inject script into) content anywhere else on that origin.

Security features like SafeBrowsing and SmartScreen will typically perform “rollups”, such that if, for example, evil.example.com/phish.html is a known phishing page, the URL Reputation service will typically block all of evil.example.com if it’s believed that the attacker controls the whole origin.

For an IT Administrator, pushState represents a challenge because it’s not obvious (without debugging) whether a given site uses virtual navigations or not. If you absolutely must ensure that a user does not interact with a specific page, you need to block the entire origin. For features like Defender’s Network Protection, you already have to block the entire origin to ensure blocking in Chrome/Firefox, because network-stack level security filters cannot observe full HTTPS URLs, only hostnames (and only requests that hit the network).

Update: Navigation API

The new Navigation API appears to behave much like pushState.
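
For context, a minimal sketch of how an app might use the Navigation API for the same kind of virtual navigation as the earlier pushState example (feature-detect first, since support isn’t universal; ShowSettingsWidgets is the same hypothetical helper as above):

if ('navigation' in window) {
  navigation.addEventListener('navigate', (event) => {
    if (!event.canIntercept) return;
    // Intercept same-origin navigations to /Settings and handle them in-page;
    // the URL updates without a real navigation occurring.
    if (new URL(event.destination.url).pathname === '/Settings') {
      event.intercept({ handler: async () => ShowSettingsWidgets() });
    }
  });
}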

Using an extension I wrote to monitor the browser on a Navigation API demo site, you can see that clicking on a link results only in the URL update and a fetch of the target content; the browser’s webNavigation.onBeforeNavigate event handler isn’t called.

The onBeforeNavigate event does not fire for pushState-intercepted navigations

-Eric

Browser Extensions: Powerful and Potentially Dangerous

Regular readers of my blogs know that I love browser extensions. Extensions can make using your browser more convenient, fun, and secure. Unfortunately, extensions can also break web apps in bizarre or amusing ways, dramatically slow your browser performance, leak your personal data, or compromise your device.

The designers of the Chromium extension system created a platform with a huge amount of power and an attack surface dramatically smaller than its predecessors (COM and NPAPI). That smaller attack surface meant that it was much harder for a rogue extension to harm your device, and it was much easier to tell how much access a given extension would get to your pages and data inside your browser. Unfortunately, many common tasks (particularly scenarios like blocking ads) require that extensions be granted a very high level of permission.

As a consequence, users quickly get accustomed to approving permission requests like this one:

…which grants the extension the ability to: read your email, send email to your entire address book from you, delete files from your cloud storage, put malicious files into your cloud storage and share them to your friends, use your credit card number to order a thousand beanie babies for delivery to your boss, publish your usernames and passwords as a thread on Twitter, dial your mom and ex-girlfriend at 2am, update your LinkedIn profile to reflect a career as a circus clown, and publish embarrassing pictures on your Facebook wall. And more.

Providing this level of access to your browser is more risky than having your password stolen, because many web systems use 2FA and other techniques to prevent abuse of your password, but these techniques are ineffective against a sock-puppet browser.

But… but… but I want what this extension offers and I trust those guys! Heck, I’ve even reviewed their code using the very cool Chrome extension source viewer!

Well, that’s good, but it’s important to understand the full threat environment. Even if the version of the extension you’ve got today is safe, there’s no guarantee that it will remain safe in the future. Popular extensions (particularly free extensions) are a common target of supply chain attacks. The extension author might get sloppy and accidentally update to a new library with a security bug or trojan code. Or their Google account gets hacked and the bad guy takes over the extension. Or perhaps they accept one of the enticing offers that they’re frequently emailed, offering them a few hundred or thousand dollars to “take over” their extension. Your browser autoupdates to the new version of their extension without any notice, and your previously-secure browser has turned from a “User Agent” into an “Attacker Agent.”

It’s scary.

Over the last few years, the Chrome team has tried to reduce the potential for abuse in the new “Manifest V3” system, but pundits and others have popularized elaborate conspiracy theories that this was just a way for Google to crack down on adblockers for their own business interests. (This is an especially silly claim, since Google ads are trivially blockable in the new system.)

An attacker might not be satisfied with their excessive permissions inside the browser. Unfortunately, it’s not too hard for a malicious extension to escape the constraints of the browser sandbox entirely. A technique we’ve seen in the wild:

  1. Trick the user into installing a malicious extension.
  2. When the user visits a trusted site like Google.com, throw an overlay over it and demand that the user download a file.
  3. The malicious .EXE downloaded from the “Update” button originates from a blob: URL injected into the victim site, so the user and client security software may think the file legitimately came from google.com:
Malicious content injected into the Google.com homepage by an evil extension

So, what’s a human to do? Use as few extensions as you can, and prefer your browser’s built-in capabilities wherever possible. Periodically review the extensions in your browser (and your dad’s!) by visiting about:extensions, and remove any you don’t recognize or need.

If you work in IT Security, it’s important that you understand the security risk of extensions is almost as great as traditional malware. Develop a policy that helps protect your enterprise from malicious extensions by blocking certain permissions, and if feasible, blocking all extensions except a curated allow-list. Chrome and Edge offer a powerful set of Group Policies to control what extensions are permitted and how those extensions may interact with your domains.
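
As a rough illustration (not a complete or prescriptive configuration), Chromium’s ExtensionSettings policy accepts JSON along these lines; the extension ID shown is a placeholder for one your team has vetted:

{
  "*": {
    "installation_mode": "blocked",
    "blocked_permissions": ["nativeMessaging", "debugger"]
  },
  "aaaabbbbccccddddeeeeffffgggghhhh": {
    "installation_mode": "allowed"
  }
}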

If you’re especially on-the-ball, you can create your own company-internal “Web Store” and allow users to only install extensions that your IT Security team has reviewed and vetted. This approach helps prevent the supply-chain attack because your Internal Web Store will control the update cadence, and you can only allow updates after they’ve been reviewed by experts. If you want to learn more, Google has published a pretty thorough PDF describing Enterprise Management of browser extensions.

Update January 2025: Google has announced a new Chrome WebStore for Enterprise product which aims to allow a curated extension store experience for organizations.

Stay safe out there!

-Eric

Second Seaside Half

I ran my second Galveston Half Marathon on Sunday, February 25th.

The course was identical to last year’s race: starting at Stewart Beach, heading north, looping back down to the Pleasure Pier, and returning to the start/finish line on the beach.

I opened hard, leaving with the 1:52 pacer and running with the 1:45 pacer for the first two miles at an 8:12 pace. Alas, I couldn’t keep that pace up, but I stayed fast for the first 4 miles before slowing significantly.

Looking at my pace chart, you can see where things fell apart, even as my heart rate never got out of control:

Compared to last year, many things went well: the humidity was dramatically lower, my Coros Pace 3 watch worked well for streaming my music (although I didn’t manage to get it to verbally announce my pace), and nothing on my body really hurt. My left knee was slightly glitchy early on but cleared up quickly, and while I ended up with a pretty gnarly blister on toe #9, it didn’t significantly bother me during the race itself. I had a banana and a stroopwafel before the race and ate a pack of Jelly Belly caffeinated beans while running.

Looking at the course markers, I ran the first 6 miles almost a minute faster than last year, but lost that minute getting to the second turn, finally finishing the race a face-palming three seconds slower than last time:

I was a bit frustrated that the end of the race (inexplicably) snuck up on me. I was just about to drop back to a walk around mile 12 when another racer wheezed “just one more mile” and then I felt like I’d demotivate him if I slowed down, so I kept up a slow jog for a while longer.

When I saw the Mile 13 sign in the distance, I realized that I was almost out of road and turned on the gas. I had a great pace going for the last two tenths of a mile, and was frustrated to realize I probably could’ve maintained that pace for at least a half mile.

After chatting with a few other runners after the end of the race (including my mile 12 inspiration), and walking down to the waterfront, I headed back to our rental house, showered, and headed to the brewery with my friends who’d completed the 5K that morning.

It was a pleasant afternoon and a nice end to my “dry January.”

My next race is my third Capitol 10K on April 7th. (I’m currently a bit worried that I’ve had a (painless) limp in my left leg for the last day or two, but hopefully it clears up quickly.)

The Importance of Feedback Loops

This morning, I found myself once again thinking about the critical importance of feedback loops.

I thought about obvious examples where small bad things can so easily grow into large bad things:

– A minor breach can lead to complete pwnage.
– A small outbreak can become a pandemic.
– A brush fire can spark a continental wildfire.
– Petty theft can grow into large-scale fraud.
– A small skirmish can explode into world war.

On the other hand, careful reactions to that initial stimulus can result in a different set of dramatically-better outcomes… far better than would have developed without that negative initial stimulus:

– A minor breach can avert complete pwnage.
– A small outbreak can prevent a pandemic.
– A brush fire can prevent a continental wildfire.
– Petty theft can prevent large-scale fraud.
– A small skirmish can avert a world war.

When it comes to feedback loops, it matters how fast they are and how they respond to the stimulus. Small, early losses can reduce overall risk by prompting the mitigation of threats before they grow too big to survive.

Sweat the small stuff, before it becomes the big stuff.

Frontloading risk can be a useful strategy, as I covered in a longer post:

Cloaking, Detonation, and Client-side Phishing Detection

Today, most browsers integrate security services that attempt to protect users from phishing attacks: for Microsoft’s Edge, the service is Defender SmartScreen, and for Chrome, Firefox, and many derivatives, it’s Google’s Safe Browsing.

URL Reputation services do what you’d expect: they return a reputation for a given URL, and the browser warns on or blocks the loading of pages whose URLs are known sources of phishing (or malware, or tech scams).

Beyond URL reputation, from the earliest days of Internet Explorer 7’s phishing filter, there was the idea: “What if we didn’t need to consult a URL reputation service? It seems like the browser could detect signals of phishing on the client side and just warn the user if they’re encountered.”

Client-side Phishing Detection seems to promise a number of compelling benefits.

Benefits

A major benefit to client-side detection is that it reduces the need for service-side detonation, one of the most expensive and error-prone components of running an anti-phishing service. Detonation is the process by which the service takes a URL and attempts to automatically detect whether it leads to a phishing attack. The problem is that this process is expensive (requiring a fleet of carefully secured virtual machines to navigate to the URLs and process the resulting pages) and under constant attack. Attackers aim to fool service-side detonators by detecting that they’re being detonated and cloaking their attack, playing innocent when they know that they’re being watched by security services. Many of the characteristics that attackers look for to cloak against detonators (evidence of a virtual machine, loading from a particular IP range, etc) can’t work when client-side detonation is performed, because the attackers must show their uncloaked attack page to the intended victim (end-user) if they hope to steal their credentials.
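
To illustrate the cloaking idea (an illustrative sketch, not code from any specific attack kit), an attack page might do something like the following, playing innocent when it suspects it’s being loaded by an automated detonation system rather than a real victim:

// Crude heuristics for "am I being detonated?" -- illustrative only.
const looksAutomated =
  navigator.webdriver === true ||              // set by many automation frameworks
  /HeadlessChrome/.test(navigator.userAgent);  // headless Chrome builds advertise this

if (looksAutomated) {
  document.body.textContent = 'Welcome to my harmless recipe blog!';
} else {
  renderFakeLoginForm();  // hypothetical function that shows the phishing page
}

Because the real victim’s browser doesn’t trip those checks, the uncloaked attack page is exactly what a client-side detector gets to inspect.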

Beyond detonation improvements, browser vendors might find that client-side phishing detection reduces other costs (fewer web service hits, no need to transfer large bloom filters of malicious sites to the client). In the URL Reputation service model, browser vendors must buy expensive threat intelligence feeds from security vendors, or painstakingly generate their own, and must constantly update based on false positives or false negatives as phishers rapidly cycle their attacks to new URLs.

Beyond the benefits to the browser vendors, users might be happy that client-side detection might have better privacy properties (no web service checks) and possibly faster or more-comprehensive protection.

So, how could we detect phish from the client?

Clientside ML

Image Recognition

One obvious approach to detecting a phishing site is to simply take a screenshot of the page and compare it to a legitimate login site. If it’s similar enough, but the URL is unexpected, the page is probably a phish. Even back in 2006, when graphics cards had little more power than an Etch-a-Sketch, this approach seemed reasonably practical at first glance.

Unfortunately, if the browser blocks fake login screens based on image analysis, the attacker simply needs to download the browser and tune their attack site until it no longer triggers the browser’s phishing detectors. For example, a legitimate login screen:

…is easily tuned by an attacker such that it no longer trips the clientside detection, while looking equivalent to the vast majority of humans:

Even more subtle changes might work; the field of adversarial ML studies how to confuse image processing models as in this wild example:

However, humans are busy, distracted, and easily fooled, such that attackers don’t even need to be especially clever. Here are two real-world phishing attacks that lured users’ passwords despite not looking much like the legitimate login screen:

Text and Other Metadata

If image processing is too prone to false negatives or has too high a computational cost on low-end devices, perhaps we might look at evaluating other information to recognize spoofing.

For example, we can extract text from the title and body of the page to see whether it’s similar to a legitimate login page. This is harder than it sounds, though, because it’s trivial to add text to an HTML page that is invisible to humans but could trip up an extraction algorithm (white-text-on-white-background, 1 pixel tall, hidden by CSS, etc). Similarly, an attacker might pick synonyms (Login vs. Sign In, Inbox vs Mailbox, etc) such that the text doesn’t match but semantically has the same meaning. An attacker might use multiple character sets to perform a homoglyph spoof (e.g. using a Cyrillic O instead of a Latin O) so that text looks the same to a human but different to a text comparison algorithm. An attacker might use Z-Order or other layout tricks to make text appear in the page in a particular order that differs from the order in the source code. Finally, an attacker might integrate all or portions of the text inside carefully-positioned graphics, such that a text-only processor will fail to recognize it.
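
As a small illustration of the homoglyph problem (a sketch, not any particular detector’s logic), two strings can render identically to most humans yet compare as different to naive text-matching code:

const latin   = "PayPal";        // all Latin characters
const spoofed = "PayP\u0430l";   // U+0430 is the Cyrillic letter 'a'
console.log(latin === spoofed);  // false: naive comparison misses the spoof
// Even Unicode normalization doesn't collapse confusables:
console.log(latin.normalize('NFKC') === spoofed.normalize('NFKC'));  // still false

Robust matching needs confusable-aware (“skeleton”) comparison rather than plain string equality.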

Making matters more complicated, many websites are dynamic, and their content can change at any time in response to users’ actions, timers, or other factors. Any recognition algorithm must decide how often to run — just on “page load”, or repeatedly as the content of the page changes? The more expensive the recognition algorithm, the more important the timing becomes for performance reasons.

API Observation

For tech scam sites, image and text processing suffer similar shortcomings, but API call observation holds a bit more promise. Most tech scam sites work by abusing specific browser functions, so by watching calls to those functions we may be able to develop useful heuristics to detect a likely attack-in-progress and then do deeper checks for additional evidence. The MalwareBytes security extension uses this approach:

… as does Edge’s new Scareware Blocker.
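
A minimal sketch of the API-observation idea (not MalwareBytes’ or Edge’s actual implementation): wrap a commonly-abused API and use call patterns as a heuristic trigger for deeper checks:

const realRequestFullscreen = Element.prototype.requestFullscreen;
let fullscreenRequests = 0;

Element.prototype.requestFullscreen = function (...args) {
  fullscreenRequests++;
  if (fullscreenRequests > 3) {
    // A page hammering the Fullscreen API is a weak signal of a scareware
    // attack-in-progress; a real detector would now run deeper checks.
    console.warn('Heuristic triggered: repeated fullscreen requests.');
  }
  return realRequestFullscreen.apply(this, args);
};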

Given the challenges of client-side recognition (false positives, false negatives, and attacker tuning), what else might we do?

Keystrokes

One compelling approach to finding phish is to just wait for the user to enter their password in the wrong place. While it sounds ridiculous, this approach has a lot of merit — it mitigates (to varying degrees) false positives, false negatives, and attacker-tuning all at once.

Way back in 2006, Microsoft explored this idea and ended up filing a patent (Client side attack resistant phishing protection):

When I joined Google’s Chrome team in 2016, I learned that Google had built and fully deployed a Password Alert extension (open-source) on its employee desktops. If an employee ever typed their Google password into a non-Google login page, the extension would leap into action. One afternoon, I was distracted while logging into a work site and accidentally switched focus into a different browser window while typing my password. I had barely taken my finger off the final key before the browser window was taken over by a warning message and an email arrived in my inbox noting that my password had been automatically locked out because I had inadvertently leaked it. While this extension worked especially well for Google Accounts, available variants allow organizational customization such that an enterprise can force-deploy to their users and trigger reporting to backend APIs when a password reuse event is discovered. The enterprise can then either lock the user’s account or add the target site to an allowlist of legitimate login pages.

Chrome later integrated a similar feature directly, not using an extension.
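
The core idea can be sketched in a few lines (an illustrative sketch, not the actual Password Alert or Chrome implementation; the allowlist and the way the salted password hash gets provisioned are assumptions):

const ALLOWED_ORIGINS = ['https://accounts.example.com'];  // hypothetical allowlist
let storedPasswordHash;   // assume: salted SHA-256 of the password, set at enrollment
let typed = '';

async function sha256(text) {
  const buf = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(text));
  return [...new Uint8Array(buf)].map(b => b.toString(16).padStart(2, '0')).join('');
}

addEventListener('keydown', async (e) => {
  if (e.key.length !== 1) return;          // ignore modifier/navigation keys
  typed = (typed + e.key).slice(-64);      // keep a bounded buffer of recent keystrokes
  // Check whether any suffix of the buffer matches the stored password hash.
  for (let i = 0; i < typed.length; i++) {
    if (await sha256('salt|' + typed.slice(i)) === storedPasswordHash &&
        !ALLOWED_ORIGINS.includes(location.origin)) {
      alert('You appear to have typed your corporate password on an unapproved site!');
      // A real implementation would also report to a backend and lock the account.
    }
  }
});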

Windows 11 Enhanced Phishing Protection

In 2022, Microsoft took our 2006 idea to the next level with the Enhanced Phishing Protection (EPP) feature of Windows 11 22H2. EPP goes beyond the browser, such that if you type your Windows login password anywhere in Windows (any browser, any chat window, any note-taking app, etc), Defender SmartScreen evaluates the context (what URL was loaded, what network connections are active, etc) and either warns you of your risky action or suggests changing your password:

This service is provided by webthreatdefsvc and its options are controlled by policy or the UI options in the Windows Security App:

Enterprises who have deployed Defender Endpoint Protection receive alerts in their security.microsoft.com portal and can further remediate the threat.

The obvious disadvantage of the “Wait for the bad thing to happen” approach is that it might not protect Patient 0 — the first person to encounter the attack. As soon as the victim has entered their password, we must assume that the bad guy has it: Attackers don’t need to wait for the user to hit Enter, and in many cases would be able to guess the last character if the phishing detector triggered before it was delivered to the app. The best we can do is warn the user and lockdown their account as quickly as possible, racing against the attacker’s ability to abuse the credential.

In contrast, Patient N and later are protected by this scheme, because the first client that observes the attack sends its “I’ve been phished from <url>” telemetry to the SmartScreen service, which adds the malicious URL to its blocklist so that it is subsequently blocked from loading in any client protected by that URL reputation service.

Edge’s Scareware Blocker

Updated: 11/21/2025

Scareware sites abuse the HTML5 Fullscreen API to attempt to convince the user that their computer is infected and they must call the attacker for help. These sites now represent the most common attacks on the web, and they tend to rapidly change domains and use cloaking to evade blocking.

In 2025, Edge introduced a clientside detection for scareware sites. If a scareware attack is detected, the attack is interrupted and the user is presented with an explanatory block page. From that block page, they can report the malicious site to SmartScreen for blocking. Want to know more? Check out a demo video.

Conclusions

Attackers and Defenders are engaged in a quiet and ceaseless battle, 24 hours a day, 7 days a week, 366 days a year (Happy leap year!). Defenders are building ingenious protections to speed discovery and blocking of phishing sites, but attackers retain strong financial motivation (many billions of dollars per year) to develop their own ingenious circumventions of those protections.

Ultimately, the war over passwords will only end when we finally achieve our goal of retiring this centuries-old technology entirely — replacing passwords with cryptographically strong replacements like Passkeys that are inherently unphishable.

Stay safe out there!

-Eric

PS: Of course, after we get rid of passwords, attackers will simply move along to other attack techniques, including other forms of social-engineering. I hold no illusions that I’ll get to retire with the end of passwords.

x22i Treadmill Review

I love my treadmill, but two years in, I cannot recommend it.

On New Year’s Day 2022 I bought a NordicTrack x22i Incline Trainer (a treadmill that supports 40% incline and 6% decline) with the aim of getting in shape to hike Kilimanjaro. I was successful on both counts, losing 50 pounds in six months and summiting Kilimanjaro with my brother in mid-2023. Between its arrival on January 24, 2022, and today, I’ve run ~1780 miles on it.

The Good

Most people I talk to about running complain about how awful treadmills are, describing them as “dreadmills” and horribly boring. While I’m not an outdoor runner, I’m sympathetic to their criticism, but it doesn’t resonate with me at all.

The iFit video training series is awesome for me. I’m inspired to get on the treadmill to see what’s next on its 22″ screen (which feels larger). I’ve had the chance to walk, run, and hike all over the world: South America, Hawaii, Japan, Italy, Africa, Europe, Antarctica, and all over the US. I’ve run races I’ll likely never get to run in the real world, including races (mostly marathons) in Hawaii, London, Boston, Jackson Hole, New York, Chicago, Tanzania, and more I’ve probably forgotten. I’ve probably run the Kilimanjaro Half Marathon a dozen times at this point, and I’m currently working my way through a “Kilimanjaro Summit” hiking series, partially retracing my steps up the Western Approach. Along the way, I’ve learned lots of training tips, some phrases in foreign languages, and the history of lots of interesting places.

The treadmill hardware is pretty nice — the shock absorption of the deck is excellent and I’ve managed not to destroy my knees despite running thousands of miles. Running on pavement in the real world leaves me considerably more sore.

While iFit has a variety of annoyances (there are not nearly enough 10Ks or half marathons, and they don’t add new “hard” workouts fast enough) there’s no question in my mind that the iFit training classes are to thank for the success I’ve had in getting in shape.

The Bad

There are many inexpensive treadmills out there, and most of them don’t seem very sturdy or likely to support a serious and regular running habit.

I was serious about my goals and figured that I should spend enough to ensure that my treadmill would last and never give me a technical excuse not to run. Still, the cost ended up being pretty intimidating, with ~$3800 up-front and ~$1900 in later expenses.

x22i Treadmill (On Sale): $3170
Delivery and “White Glove” Assembly: $299
Sales Tax: $286
NordicTrack Heart Rate monitor arm band: $100
iFit Video Training Subscription renewal (Years 2-3): $600
20-Amp dedicated circuit: $970
Extended warranty (years 2-5): ~$300
Total 3-year costs for the x22i: $5725

Fortunately, Microsoft’s employee fitness program grants $1500 a year; I put the first year’s benefit toward the treadmill, and the following year’s covered the subscription content renewal, with $900 left over to defray the cost of the Kilimanjaro hike.

The Ugly

Unfortunately, my treadmill has been an escalating source of hassles from the very beginning. The assembly folks failed to fully screw in a few screws (they were sticking so far out that I assumed they used the wrong ones) and they cracked one of the water bottle holders. I complained to the NordicTrack folks and they refunded me the delivery/setup fee and within a few weeks came out to replace the broken water bottle holder.

Throughout the first year, my treadmill frequently tripped the circuit breaker; much to my surprise, the abrupt loss of power never resulted in me crashing into the front handrails, no matter how fast I was going. The treadmill was on a shared 15A circuit and while it was never supposed to approach that level of energy consumption, it clearly did. Sometimes, the trigger was obvious (someone turning on the toaster in the kitchen) while other times the treadmill was the only thing running. Eventually I hooked up a Kill-A-Watt meter and found that it could peak at 16-17 amps when starting or changing the incline, well above what it was supposed to consume, but within the technical specs. I eventually spent the money to get a dedicated 20A circuit, and was angry to discover that it was still periodically tripping. After months of annoyance and research, I eventually discovered that treadmills are infamous for tripping “Arc Fault Circuit Interrupt” breakers that are now required by Texas building code. Since having the electrician swap the AFCI breaker for the “old” type, I don’t think it has tripped again.

After all of the electrical problems, I invested in the extended warranty when it was offered, and I’m glad I did. Somewhere around the one-year mark, my treadmill started making a loud banging noise. I looked closer and realized that two screws had broken off the bottom of the left and right rails, and I assumed that was the source of the noise. Alas, removing the rails didn’t stop the banging, nor did having them replaced. Over the course of several months, techs came out to replace the side rails, idler roller, drive roller, belt, belt guide, and cushions. As of November 2023, the treadmill no longer makes a banging sound, but it’s not nearly as quiet as it once was, and I expect that I’ll probably need more service/parts within a few more months.

UPDATE: In October 2024, at around 2000 miles, the steel of the frame cracked where it holds the motor. It took two weeks for the technician to come verify that it was, in fact, broken and unfixable. Fortunately, the frame warranty is the longest one, at ten years. A few days later, I was offered either a replacement or a $3170 credit toward a new machine. I spent a few days pondering whether to just buy another x22i, add $1500 of my own money for an x24, or get a non-incline 2450. The repair guy suggested that the x22i, with its motor at the back, is an especially unreliable model. :( Ultimately, I decided to get a new x22i, paying $380 out of pocket for shipping, assembly, and removal of the old one. Fingers crossed that the new one holds up better, or that if it does fail, it’s the frame again.

Closing Thoughts

From a cost/hassle point-of-view, I would be much better off getting a membership to the gym a half-mile down the block. I suspect, however, that much of my success with regular running comes from the fact that the treadmill lives between my bedroom and my home office, and it beckons to me every morning on my “commute.” The hassle of getting in the car, needing to dress in more than a pair of sweaty shorts, etc, would give me a lot of excuses to “nope” out of regular runs.

When I was first shopping for a treadmill, someone teased me, suggesting that I make sure it had a good bar for hanging clothes on, since that’s probably the most common job for home treadmills. I managed to avoid that trap, and I’ve fallen in love with my treadmill despite its many flaws.

I don’t know whether other treadmills at a similar price point are of higher quality, or whether spending even more would give better results, but it almost doesn’t matter at this point — the iFit video content is the best part of my treadmill, and I don’t think any other ecosystem (e.g. Peloton) is comparable.

-Eric

PS: If I end up replacing my treadmill in a few years, I might get a “regular” treadmill rather than an Incline Trainer, because I don’t use the steep inclines very often and I think that capability adds quite a bit of weight and perhaps some additional components that could fail?