Winter 2026 Runs

I did a reasonably good job running on my treadmill throughout the fall of 2025, in preparation for my second summit of Mount Kilimanjaro over New Year's (blog post to come).

Run for the Water 10 Miler

On November 9th, 2025, I ran the ten mile Run for the Water.

The night before, I ate spaghetti and meat sauce with the kids for dinner. Went to bed early (around 10) but slept poorly, waking up every few hours. Woke at 4:45 but managed to doze for another dream before my alarm went off at 5am. Had a banana and a gassy but not very useful trip to the bathroom. Put on a “Run the mile you’re in” temporary tattoo and posted the pic to Facebook, where two members of my family thought it might be real. 😂

The weather was 57F and breezy, but super-sunny. I left the house at 6:10, got to my usual parking space at 6:30, and had time for a quick stop at the pre-race porta potty. My marshmallow-y supershoes (carbon plate) were comfy for the whole race.

I did okay. I wanted to beat my best time (1:28:57), which I decided would be really hard. So I was kinda just aiming to beat 1:30. I got 1:30:33, just over a minute and a half slower than my record, and a blazing 34 minutes faster than my glacial 2024 attempt.

Had I known I was that close, I know I coulda beat 1:30 pretty easily, and almost certainly PR’d to boot.

But I feel mostly fine about the race: I worked hard and nothing in particular hurt or felt bad. My watch failed to hold a connection to the cross-body left headphone, so I put the headphone in my pocket. What I didn’t realize was that the body of the headphone was a button that, when tapped, lowered the volume, so I ended up running for a frustrating mile or two with no music yet again.

This time, the “giant” hill at mile 5.5 did not seem giant at all; I’m not sure it was even the biggest hill of the course.

Austin International Half Marathon

On January 18th, I had my fourth run of the Austin International Half, finally meeting my sub-two-hour goal with a finish in 1:56:31. There were about 4200 runners this year.

The night before, I ate spaghetti, meat sauce, and garlic bread with the kids somewhat early, and got to sleep around 10:30. I woke at 4am before dozing again until just before my 5:15 alarm. I ended up pulling out of my driveway by 6:35 and made good time to my usual parking spot in the Stonelake garage. A quick trip to the porta potty went well (no line that early) and I then spent fifteen minutes keeping warm in the car.

I was in my tights, jacket, and disposable cloth gloves (which I kept after taking them off around mile 7). The wind was mild, so the low temp (30F) wasn’t too awful.

I had a great start, running with the 1:50 pacers for 5.5 miles. I fell behind and lagged a bit between the 10K and the halfway point, but the fear of the 1:55 pacer passing me from behind kept me running. Around mile 9, the 1:55 pacer caught up and I only managed to stay with them for half a mile or so before falling behind again. But the smell of the finish line was in my imagination and I only walked for half of the two hills downtown. I finished in 1:56:31, just over four minutes faster than my 2023 PR and 22 minutes faster than last year.

I waited in line for a solid ten minutes to hit the PR gong for the first time ever. I felt good throughout the race, with only a minute or so worth of worry about the rumble in my belly.

I’m very happy with this outcome.

Galveston Half Marathon

The Galveston Half was moved another week earlier this year, with a predicted start temperature of 37F, well below 2025’s 64F. There were 625 half marathon runners this year.

While I knew I wouldn’t replicate my recent Austin Half results, I definitely hoped to set my fastest time for this course; ideally, breaking 2 hours again would be amazing.

Fortunately, the somewhat stronger wind didn’t make things feel too cold, and the clear skies warmed things up fast enough that I dropped my hoodie just before mile 4 and my gloves went in my pocket at mile 8 or so.

I remembered the course well, having run it four times before (twice for last year’s half), and there were no surprises in how I felt. I started with the 1:50 pacer but only kept up for around 3 miles, falling behind the 1:55 and 2:00 pacers shortly after. But I still managed to stay on the move, with limited bits of walking around the aid stations and a 30-second pause to pee behind the single (occupied) porta-potty at mile 6.

My 2:07 finish was a solid 7 minutes faster than my first two attempts, and almost 32 minutes faster than the first half of last year’s full. It wasn’t quite the result I’d hoped for, but I’m counting the PR for this course as a big enough win.

In April, I run the Capitol 10K again and hope to PR (it’s gonna be hard), and on the first Sunday in May I’ll revisit the Sunshine Run (where I had better beat my unimpressive record for that course).

-Eric

Security Software False Positives

Software developers and end-users often want to know how to resolve incorrect detections from their antivirus/security software, including Microsoft Defender.

Such False Positives (FPs) can disrupt your use of your device by incorrectly blocking innocuous files or processes. However, you should take extreme care before concluding that a given detection is a false positive — attackers work hard to make their malicious files seem legitimate, and your security software is built by experts who work hard to flag only malicious files.

How Do False Positives Occur?

Every security product must perform the difficult balancing act of maximizing true positives (protecting the user/device) while minimizing false positives (protecting productivity/data). How well a product strikes this balance is called its security efficacy.

False positives can occur for numerous reasons, but most are a result of security software observing what it deems to be suspicious content or behavior on the part of a file or process. Virtually all modern security software consists of a set of signatures and heuristics that attempt to detect indications of malice based on threat intelligence data collected and refined by threat researchers (both humans and automated agents). In some cases, the threat intelligence is scoped too broadly and incorrectly implicates harmless files along with harmful ones.

To correct this, the threat intelligence from your security vendor must be adjusted to narrow the detection so that it applies only to truly malicious files.
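
As a toy illustration (no real engine works from rules this simple, and every pattern below is invented), here's the difference between an overly broad heuristic and a narrowed one:

```python
import hashlib
from pathlib import Path

# Toy illustration only: all patterns below are invented, and real
# engines combine far richer signals than substring matching.
KNOWN_BAD_SHA256 = {"<sha256 of a confirmed-malicious file>"}

def too_broad(data: bytes) -> bool:
    # Flags ANY file mentioning encoded PowerShell, which also hits
    # innocent admin scripts and docs: a classic false-positive source.
    return b"powershell -enc" in data.lower()

def narrowed(data: bytes) -> bool:
    # Requires corroborating evidence (shadow-copy deletion, a common
    # ransomware move) before flagging, sparing harmless files.
    return too_broad(data) and b"vssadmin delete shadows" in data.lower()

def scan(path: Path) -> str:
    data = path.read_bytes()
    if hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256:
        return "malicious (exact signature match)"
    return "suspicious (heuristic match)" if narrowed(data) else "clean"
```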

Sidenote: Is it really a block?

In some cases a security feature might block a file not as malicious but merely as uncommon; for example, SmartScreen Application Reputation can interrupt the download or launch of an app if the app isn’t recognized:

In such cases, users may choose to ignore the risk and continue if they have good reason to believe that the file is safe. Over time, files should build reputation (especially when signed, see below) and these warnings should subside for legitimate files.

False Positives for End-Users

Before submitting feedback, it’s probably worthwhile to first confirm which security product is triggering the block. In some cases, blocks that look like they’re coming from your security software might actually reflect intentional blocks by your network security administrator. For example, users of Microsoft Defender for Endpoint should review this handy step-by-step guide.

To get a broader security ecosystem view of whether a given file is malicious, you can check it at VirusTotal, a free service which will scan files against most of the world’s antivirus engines and which allows end-users to vote on the safety of a given file.
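
If you'd rather script that check, the sketch below looks up a file's SHA-256 using what I understand to be VirusTotal's v3 REST API (you'll need an API key from a free account, and the response shape here is an assumption based on the public docs):

```python
import hashlib
import sys

import requests  # third-party: pip install requests

def vt_lookup(path: str, api_key: str) -> None:
    # VirusTotal indexes existing reports by file hash, so the file
    # itself never needs to be uploaded for a simple lookup.
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": api_key},
        timeout=30,
    )
    if resp.status_code == 404:
        print("No report: no engine has scanned this exact file yet.")
        return
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"{stats['malicious']} of {sum(stats.values())} engines flag this file.")

if __name__ == "__main__":
    vt_lookup(sys.argv[1], sys.argv[2])
```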

When a security vendor realizes a false positive has occurred, they will typically issue a signature update; while updates are usually applied automatically, you might want to update your signatures manually just to ensure they’re current.

False positives (and false negatives) for Microsoft Defender Antivirus can be submitted to the Defender Security Intelligence Portal; these submissions will be evaluated by Microsoft threat researchers, and detections will be created or removed as appropriate. Other vendors typically offer similar feedback websites.

In most cases, users may override incorrect detections (if they are very sure they are false positives) using the Windows Security App to “Allow” the file or create an exclusion for its location.

Avoiding False Positives for Software Vendors

If you build software, your best bet for avoiding incorrect detections is to ensure that your code’s good reputation is readily identifiable for each of your products’ files.

To that end, make sure that you follow best practices for code-signing, ensuring that every signable file has a valid signature from a certificate trusted by the Microsoft Trusted Root Program (e.g. DigiCert, Azure Trusted Signing).
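
As a release-pipeline sanity check, a small script can confirm that nothing shipped unsigned; here's a minimal sketch that shells out to the Windows SDK's signtool (assumed to be on PATH):

```python
import subprocess
import sys
from pathlib import Path

SIGNABLE = {".exe", ".dll", ".sys", ".msi"}

def verify_signatures(folder: str) -> int:
    """Runs 'signtool verify /pa' (the default Authenticode policy) on
    every signable file under 'folder'; returns the failure count."""
    failures = 0
    for f in Path(folder).rglob("*"):
        if f.suffix.lower() in SIGNABLE:
            result = subprocess.run(
                ["signtool", "verify", "/pa", "/q", str(f)],
                capture_output=True,
            )
            if result.returncode != 0:
                print(f"UNSIGNED or INVALID: {f}")
                failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if verify_signatures(sys.argv[1]) else 0)
```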

If your files are incorrectly blocked by Microsoft Defender, you can submit a report to the Defender Security Intelligence Portal.

-Eric

Security Surfaces

An important concept in Usable Security is whether a given UI represents a “security surface.” Formally, a security surface is a User Interface component in which the user is presented with information they rely upon to make a security decision. For example, in the browser, the URL in the address bar is a security surface. Importantly, a spoofing issue in a security surface usually represents a security vulnerability.

Just as importantly, not all UI surfaces are security surfaces. There are always many other UI surfaces in which similar information may be presented that are not (and cannot be) security surfaces.

Unfortunately, knowing which surfaces are security surfaces and which are not is simultaneously

  1. Critically important: If you think an untrustworthy surface is trustworthy, you might get burned.
  2. Unintuitive: There’s usually no indication that a surface is untrustworthy.

Eight years ago, the Chromium team wrote a document listing the Chrome browser’s security surfaces. Approximately no one has ever seen it.

Non-Security Surfaces

Perhaps the most famous non-security surface is the “status bubble” shown at the bottom left of the browser window that’s designed to show the user where a given link will go:

The Chromium “Status Bubble”

There are many reasons why this UI cannot be a security surface.

The primary issue is that it’s below the line-of-death, in a visual area of the page that an attacker can control to paint anything they like. The line-of-death is a major reason why many UIs cannot be considered security surfaces — if an attacker has control over what the user sees, a UI cannot be “secure.”

But even without that line-of-death problem, there are other vulnerabilities: most obviously, JavaScript can cancel a navigation when it begins, or rewrite the target of a link as the user clicks on it. Or the link could point to a redirector site (e.g. bit.ly) rather than the ultimate destination. And so on.

Low-Priority Security Surfaces

An example of a low-priority security surface is the edge://downloads page. This page shows a list of downloads and allows the user to open them. In some versions of Edge, this page will show the URL from which the download originated.

A user might look at that URL to decide whether the download originated from a trustworthy place before running it. This means that any issue where the URL shown is incorrect or misleading is arguably a security issue because the user could make a security decision incorrectly based on the “spoofed” URL. URL spoofing might include triggering a download from an open-redirect on a trustworthy domain, or using a URL protocol like data: that allows the attacker to control the contents of the subsequent string (e.g. data:spoofed.com/mime,base64...).
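
To see why a data: URL is so handy for spoofing, note that everything after the scheme is attacker-chosen text rather than a real hostname; a quick look with Python's standard urllib.parse (the URL here is an invented example) shows the problem:

```python
from urllib.parse import urlsplit

# Everything after "data:" is attacker-chosen content, not a hostname.
shown = "data:spoofed.com/mime;base64,SGVsbG8="
parts = urlsplit(shown)

print(parts.scheme)  # 'data' -> the only trustworthy part
print(parts.netloc)  # ''     -> there is no real host at all
print(parts.path)    # 'spoofed.com/mime;base64,SGVsbG8=' -> free-form text
```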

A mitigating factor here is that the edge://downloads page is relatively obscure. Almost all user interaction with downloads occurs using the Downloads bubble that opens from the icon in the browser toolbar. Even if the user does end up making a security decision from the Downloads page, other features like SmartScreen AppRep still provide protection from files not known to be safe.

Challenging Security Surfaces: The “URL Bar”

Perhaps the most complicated security surface in existence is the browser’s address bar, known in Chromium-based browsers as the “Omnibox”.

The “omnibox” in modern Edge

The “omnibox” naming reflects why this is such a tricky security surface: it tries to do multiple things at once. It tries to be a security surface that shows the security context of the currently-loaded page (e.g. https://example.com), an information surface that shows the full URL of the page (e.g. https://example.com/folder/page.html?query=a#fragment), and an input surface that allows the user to enter a new URL or search query. Only certain portions of the URL information are security-relevant, and the remainder of the URL can be used to confuse the user about the security-relevant parts.
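
To make the split concrete, here's a minimal sketch using Python's standard urllib.parse that separates the security-relevant origin from the attacker-influenced remainder:

```python
from urllib.parse import urlsplit

url = "https://example.com/folder/page.html?query=a#fragment"
parts = urlsplit(url)

# The scheme and host identify the security context of the page...
print("Security-relevant:", f"{parts.scheme}://{parts.netloc}")

# ...while the rest of the URL is content the site (or an attacker)
# chooses freely, and can be crafted to resemble a different origin.
print("Remainder:", parts.path, parts.query, parts.fragment)
```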

Making matters even more complicated, the omnibox also has to deal with asynchronicity — what should be shown in the box when the user is in the middle of a navigation, where the user’s entered URL isn’t yet loaded and the current URL isn’t yet unloaded?

Windows LNK Inspection

Earlier this year, a security vulnerability report was filed complaining that Windows File Explorer would not show the full target of a .lnk file if the target was longer than 260 characters.

Windows File Explorer’s .lnk Properties Dialog

The reporter complained that without accurate information in the UI, a user couldn’t know whether a LNK file was safe.

This is true, but it’s also the case that the user isn’t being asked to make a security decision in this dialog; the UI where the user is asked to make a security decision doesn’t even show the target at all:

Windows Security Warning prompt when opening a LNK file from the Internet Zone

Even in the exceptionally unlikely event that a user consults Explorer’s LNK inspection UI, very few users could actually make a safe determination based on the information within. Ultimately, the guidance in the Security Warning prompt is the right advice: “If you do not trust the source, do not open.”

The Windows team ended up fixing the display truncation earlier this year, not as a security fix but as a general quality improvement for Windows 11 24H2+.

Update: Adam complains that the Security Warning prompt doesn’t really provide the user with the information needed to decide whether the link’s source is trustworthy. He’s not wrong.

Defensive Technology: Ransomware Data Recovery

In a prior installment we looked at Controlled Folder Access, a Windows feature designed to hamper ransomware attacks by preventing untrusted processes from modifying files in certain user folders. In today’s post, we look at the other feature on the Ransomware protection page of the Windows Security Center app: Ransomware data recovery.

User-Interface

The UI of the feature is simple and reflects the state of your cloud file provider (if any), which for most folks will be OneDrive. Depending on whether OneDrive is enabled, and what kind of account you have, you’ll see one of the following four sets of details:

Windows 11 Ransomware data recovery feature status

What’s it do?

Conceptually, this whole feature is super-simple.

Ransomware works by encrypting your files with a secret key and holding that key for ransom. If you have a backup of your files, you can simply restore the files without paying the bad guys.

However, for backup to work well as a ransomware recovery method, you need

  1. to ensure that your backup processes don’t overwrite the legitimate files with the encrypted versions, and
  2. to be able to easily recognize which files were modified by the ransomware, so that you can replace them with their latest uncorrupted versions.

The mechanism of this feature is quite simple: if Defender recognizes a ransomware attack is underway, it battles the ransomware (killing its processes, etc.) and also notifies your cloud file provider of the timestamp of the detected infection. Internally, we’ve called this a shoulder tap, as if we tapped the backup software on the shoulder and said, “Uh, hang on, this device is infected right now.”

This notice serves two purposes:

  1. To allow the file backup provider to pause backups until given an “all clear” (remediation complete) notification, and
  2. To allow the file backup provider to determine which files may have been corrupted from the start of the infection so that it can restore their backups.

Simple, right?

-Eric

Appendix: Extensibility

As far as I can tell, this feature represents a semi-public interface that allows third-party (3P) security software and cloud backup software to integrate with the Windows Security Center via two notifications: OnDataCorruptionMalwareFoundNotification and OnRemediationNotification. Unfortunately, the documentation isn’t public — I suspect it’s only available to members of the Microsoft Virus Initiative program for AV partners.

Windows Shell Previews – Restricted

Windows users who installed the October 2025 Security Updates may have noticed an unexpected change if they use the Windows Explorer preview pane. When previewing many downloaded files, the preview is now replaced with the following text:

Explorer PDF Preview: “The file you are attempting to preview could harm your computer.”

The block also occurs when viewing files on remote Internet Zone file shares, but it doesn’t occur for other files on your local disk, for remote shares in your Trusted or Intranet zones, or if you manually remove the Mark-of-the-Web from the file.

😬 BUG: Notably, Explorer caches a previously-observed Mark-of-the-Web, and due to a bug (discovered in October 2025), deleting the MotW stream by clicking the Unblock button doesn’t clear the cached zone, so users must restart Explorer to actually preview a file.

What happened?

The change in Windows was a trivial one: the value for URLACTION_SHELL_PREVIEW (0x180f) in the Internet Zone (3) was changed from Enable (0) to Disable (3):

Before Windows Explorer asks the registered previewer to show the preview for a file, it consults the SHELL_PREVIEW URLAction to see whether the file’s location allows previews. With this change, permission to show previews is now denied for files that originate from the Internet Zone.

Why?

The reason is a simple one that we’ve covered before: the risk of leaking NTLM credential hashes to the Internet when retrieving resources over SMB via the file: protocol. Leaked hashes could allow an attacker to breach your account.

As we discussed in the post on File Restrictions, browsers restrict use of the file: protocol to documents that were themselves opened via the file: protocol. When you preview a downloaded file in Explorer, the URL of that download uses file:, and thus the previewer is allowed to request file: URLs, potentially leaking hashes when the file is previewed. With this change, the threat is blunted: with previews disabled, you’d have to actually open the downloaded file to leak a hash.

Unfortunately, this fix is a blunt instrument: while HTML files can trivially reference remote subresources, other file types like PDF files typically cannot (we disable PDF scripting in Explorer previews) but are blocked anyway.

If you like, you can revert this change on your own PC by resetting the registry key (or by adding download shares you trust to your Trusted Sites Zone). However, keep in mind that doing so reenables the threat vector, so you’ll want to make sure you have another compensating control in place: for example, disabling NTLM over SMB (more info), and/or configuring your gateway/firewall to block SMB traffic.
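
For example, a sketch like the following (assuming the standard per-zone URLAction registry layout, where each action is a DWORD value named by its hex code under Internet Settings\Zones) would flip the preview action in the Internet Zone back to Enable:

```python
import winreg

# Per-user zone settings; zone 3 is the Internet Zone.
ZONE3_KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"
URLACTION_SHELL_PREVIEW = "180F"  # value names are the URLAction's hex code
ENABLE = 0   # the pre-October-2025 behavior
DISABLE = 3  # the new, locked-down default

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONE3_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    # Caution: re-enabling previews re-opens the NTLM hash-leak vector,
    # so pair this with a compensating control like blocking outbound SMB.
    winreg.SetValueEx(key, URLACTION_SHELL_PREVIEW, 0,
                      winreg.REG_DWORD, ENABLE)

print("Internet Zone shell previews re-enabled; restart Explorer to apply.")
```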

-Eric