ERR_BLOCKED_BY_CLIENT and HTML5 Sandbox

Recently, many Microsoft employees taking training courses have reported problems accessing documents linked from those courses in Chrome and Edge.

In Edge, the screen looks like this:

But the problem isn’t limited to Microsoft’s internal training platform, and can be easily reproduced in Chrome:

What’s going on?

There are a number of root causes for an ERR_BLOCKED_BY_CLIENT message; the most common is that you’ve installed a content-blocking extension (e.g. an ad-blocker) and it directed the browser to block the page.

But that’s not what’s happening here — we saw this on machines without any content-blocking extensions.

What’s happening here is that the PDF viewer is blocked from loading because the new tab was created as a popup under the restrictions of the HTML5 Sandbox. The sandbox rules applied to the new tab include prohibitions on script and plugins, and Chromium’s PDF viewer requires both. So, the user ends up with a totally inexplicable blocking page.

Refreshing the page will not fix it, and shockingly, even navigating the tab to a different, non-PDF URL will still likely result in failures (either script won’t run, or the page will not load) because the sandboxing limits are not removed upon manual navigation. For instance, Twitter refuses to load:

Twitter shows ERR_BLOCKED_BY_RESPONSE due to its use of Cross-Origin-Opener-Policy

As an end-user, the workaround is easy: Copy/paste the URL from the broken tab to a new one and your document will load just fine.

As a web developer, to avoid creating unexpectedly impaired tabs, you must include the allow-popups-to-escape-sandbox token in your sandbox directive; when you do so, new windows opened from the sandboxed content will not inherit its restrictions.
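For illustration, a minimal sketch of the difference (the training-site URL is hypothetical):

<!-- Popups opened by this frame inherit its sandbox (script and plugins
     stay blocked), so a PDF opened in a new tab fails to render: -->
<iframe sandbox="allow-scripts allow-popups"
        src="https://training.example/lesson"></iframe>

<!-- With the extra token, new windows shed the sandbox restrictions: -->
<iframe sandbox="allow-scripts allow-popups allow-popups-to-escape-sandbox"
        src="https://training.example/lesson"></iframe>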

A quick look showed that our company training app specifies this token, but the new tab was still impaired.

What gives?

A deeper look showed that the training app contains nested sandboxes — while an inner iframe includes the allow token, that iframe’s parent does not have the token.

The grandparent’s restriction on its child also restricts its grandchild:
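A simplified sketch of the structure (hypothetical URLs):

<!-- top.html: the outer frame's sandbox omits allow-popups-to-escape-sandbox -->
<iframe sandbox="allow-scripts allow-popups"
        src="https://outer.example/frame.html"></iframe>

<!-- outer.example/frame.html: the inner frame gets the token, but the
     grandparent's restriction on its (already-sandboxed) parent still
     applies, so popups opened from the inner frame remain sandboxed: -->
<iframe sandbox="allow-scripts allow-popups allow-popups-to-escape-sandbox"
        src="https://inner.example/viewer.html"></iframe>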

Perhaps the Chromium dev tools should warn if a child iframe’s sandbox directive specifies permissions that will be denied by the grandparent’s restrictions on the parent?

Mouse Gestures in Edge

Over twenty years ago, the Opera browser got me hooked on mouse gestures, a way to perform common browser actions quickly. After I joined the IE team in 2004, I fell in love with a browser extension written by Ralph Hare; I later blogged about it on the IEBlog and helped Ralph get it running in 64-bit IE.

Many years passed. By 2015, I had abandoned the outdated IE and moved to Chrome full-time. When I joined the Chrome team in 2016, I was heartened to note that mouse gestures were one of the very few features slated for inclusion in the first version of Chrome. They were repeatedly postponed and eventually cut, with the idea that perhaps a browser extension was the way to go. I installed the most popular mouse gestures extension from the Chrome Web Store, only to later discover that it was sending my browser traffic to a questionable server in China. I uninstalled it and reported it to the Chrome Web Store folks, who delisted it. Apparently, a while later, the authors slightly reduced the data leakage and got it back up on the Web Store, and in 2019 a newly-hired PM lead on the Edge team suggested we all install it. I took a look at what it was doing and found that it was still engaged in questionable privacy practices. Bummer.

Fast forward to earlier this year, when I discovered that the Edge team has landed gestures in Edge on Windows! I was excited to see the implementation, and it’s one of several features that make Edge feel like a batteries-included browser. (Unfortunately, this feature presently seems to be Windows-only. If you’re using a Mac or Linux, you should click the … menu > Help and Feedback > Send Feedback to ask for it.)

Dozens (hundreds?) of times a day, I enjoy the satisfaction of closing browser tabs by right-click-drawing an “L” on them.

To enable Mouse Gestures support in Edge, simply visit edge://flags/#edge-mouse-gesture and enable the feature:

After you restart the browser, you can visit the edge://settings/mouseGesture page to configure them:

Don’t worry about memorizing a ton of shortcuts — I really only use two: back (right-click+left-drag) and close tab (right-click+DrawL).

I smile every time this works, and every time I test something in Chrome I lament their absence.

-Eric

PS: Besides support for macOS, one other missing feature I’d love to see is the ability to bind a gesture to an extension or JavaScript bookmarklet. That would allow me to recreate one of my other IE-era gestures — I could waggle my mouse to run a JavaScript that would remove all ad-like elements from a page.

Going Electric – Solar 1 Year Later

In March of 2023, I had an 8kW solar array installed, and I was finally permitted to turn it on starting April 21, 2023.

My pessimistic/optimistic assumption that buying an expensive solar array would be the trigger for technological breakthroughs that rendered my panels obsolete wasn’t entirely unfounded. Sure enough, shortly after the install, I started hearing more and more about promising next-generation ‘tandem’ solar cells that will deliver even more power. You’re welcome. Still, those new cells are probably at least a few years away from broad production, and staying out of the market for another decade didn’t feel like the right choice for me.

The summer of 2023 was a hot one, but my panels achieved one major goal I had in buying them — I stopped cursing the sun on clear summer days.

But how’d I do against the sales pitch from the installer?

Well, not great. The solar installer estimated that I would produce 11.27MWh in my first year. I came in a bit lower than that, producing 10.7MWh, a 5% shortfall. While I never expected the nominally 400W panels to produce at their max for much of the day, they’ve never hit anything close to that (my “8kW” array peaks at 6.4kW).

In the first year, the best panel outperformed the worst by 10%.

I also consumed almost 3MWh (38%) more than expected for the year. Of that excess, my Nissan Leaf explains about half (1.5MWh) for the 6,700 miles I’ve driven since I installed the panels.

It looks like peak daily production was 48.4kWh on my 9th day of ownership, although I was hitting 40kWh/day for most of the summer. While Austin’s scorching summer days make the panels less efficient, the longer daylight hours and fewer clouds meant that I hit 1.2MWh/month for July and August.

Over the summer months, you can see the big deficit as the air-conditioner works overtime, but over the winter and spring you can see the solar production outpacing consumption (heating is via natural gas):

My lowest consumption day of the year was 6.1kWh (no one was home).

The most fun part of the system is getting negative bills for electricity:

My panels saved me about $1,060 ($0.0991/kWh) in the first year, making for a pretty long payback period. I’d initially expected the system, post-credits and deductions, to cost $15,000, but it turns out that you have to deduct your utility incentives from the cost against which you’re getting a federal tax credit, so my $2,500 Austin Energy incentive reduced my federal credit from $7,470 to a still-respectable $6,720.
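Working through the arithmetic (assuming the standard 30% federal credit): a $7,470 credit implies a gross cost of $24,900, and 30% × ($24,900 − $2,500) = $6,720, i.e. a $750 haircut.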

So, after credits, my Final Installation Cost was $15,680, leaving a payback period of 15 years. Not awesome, but again, the major goal I had in buying solar was that I stopped cursing the sun on clear summer days.

The Enphase monitoring site says that my 10.7MWh production has saved 7.6 tons of CO2, or the equivalent of 128 trees.

Thoughts On Batteries 🔋

When I installed my system, I opted not to buy the battery backup system for the house, despite the fact that it meant I’d miss out on the federal tax credit available only when solar is first installed. I reasoned that the battery system would itself cost $1,000 per year, and in ten years in Austin I have only lost power for a few days. Besides, battery technology is widely expected to continue to improve with every passing year, and hopefully using electric cars’ batteries as home backup will soon become commonplace (even my Leaf’s tiny battery is 40kWh, twice the capacity of a large home system).

Shortly after that decision to forgo the battery, we had our longest-ever power outage, 56 hours, and I wondered whether I’d made a mistake. Ah well.

Before the winter storms of 2024, I bought a 768Wh power station for $475 and predictably (given my luck) the power company managed to keep the power on throughout this year’s storms.

What’s Next? ⚡️

I’d heard great things about induction cooking, so I decided to dip my toes in with a portable cooktop. I like it a lot — it’s convenient and super-fast for boiling water for HelloFresh meals. I’d like to replace my entire stove, but I will likely need an electrical panel upgrade to do that, since my current panel is already at capacity with the car charger.

In the next few years, I’d like to get rid of natural gas entirely (my monthly bill is $26 even if I don’t use any gas). My water heater will age out first and I’ll likely replace it with a hybrid. The big lift will be replacing the heating and air conditioning with a heat pump — bizarrely, these are not yet common in Texas, but they make a lot of sense in this climate and the new federal incentives should help reduce the costs somewhat.

Browser Security Bugs that Aren’t: JavaScript in PDF

A fairly common security bug report is of the form: “I can put JavaScript inside a PDF file and it runs!”

For example, open this PDF file with Chrome, and you can see the alert(1) message displayed:

Support for JavaScript within PDFs is by-design and expected by the developers of PDF rendering software, including common browsers like Chrome and Edge. Much like HTML, PDF files are an active content type and may contain JavaScript.
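For illustration, a script embedded in a PDF (e.g. as a document-open action) is written against Acrobat’s JavaScript API rather than the HTML DOM; a sketch, noting that each PDF engine supports its own subset of that API:

// Runs when the PDF opens, inside the PDF engine's script universe:
app.alert("Hello from inside a PDF!");  // the viewer shows its own alert UI
// There's no window, document, or document.cookie here to reach for.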

Periodically, less experienced security researchers excitedly file this issue against browsers, and those reports are quickly resolved “By Design.”

Periodically, more experienced security researchers excitedly file this issue against sites and applications that are willing to host or transfer untrusted PDF files, arguing that this represents a “Stored Cross-Site Scripting vulnerability.”

Their confusion here is somewhat more understandable — if a website allows a user to upload an HTML document containing script, and then serves that HTML document from its domain, any script within it will run in the security context of the serving domain. That describes a classic Stored XSS attack, and it presents a security threat because the embedded script can steal or manipulate cookies (by accessing the document.cookie property), manipulate web platform storage (IndexedDB, localStorage, etc.), conduct request forgery attacks from a first-party origin, and so forth.

The story for PDF documents is very different.

The Chrome Security FAQ describes the limitation tersely, noting that the set of bindings provided to the PDF is more limited than those provided by the DOM to HTML documents, and that PDFs don’t get any ambient authority based upon the domain from which they are served.

What does that mean? It means that, while PDF’s JavaScript does run, the universe the script runs in is limited: there’s no access to cookies or storage, very limited ability to make requests (e.g. you can navigate the document’s window elsewhere, but that’s about it), and no ability to make use of the Web Platform’s powerful capabilities exposed by the HTML Document Object Model (DOM) objects like document, window, navigator, et cetera. While the capabilities of JavaScript in PDF are extremely limited, they’re not non-existent, and PDF engine software must take care to avoid introducing new capabilities that void the safety assumptions of PDF-handling code.

Restricting JavaScript in PDF

Engineers should take care that handling of JavaScript in PDF respects app/user settings. For example, if the user turns off JavaScript for a site, PDFs hosted on that site shouldn’t allow script either. This works properly in Chrome and Edge.

Firefox’s global javascript.enabled toggle from the about:config page doesn’t impact script inside its PDF viewer:

Instead, Firefox offers an individual pdfjs.enableScripting preference that can be configured from the about:config page.

Chromium currently ignores Content-Security-Policy headers on PDF responses because it renders PDF files using web technologies that could otherwise be disallowed by the CSP, leading to user confusion and webdev annoyance.

A Slow 10K

I “ran” the Capitol 10K for a third time on Sunday. It did not go well, but not for any of the reasons I worried about. The rain stopped hours before the race, and the course wasn’t wet. My knees and feet didn’t complain. My heart rate felt pretty much under control. I had found the charger for my running watch and got my headphones charged.

No, like my first run, I got screwed by nutrition. After mile 2, I was feeling heartburn for the first time in months (years?), and I began frequently dropping to a walk while I worried about it. Was it the flavored coffee I had in the morning? My dubious dinner (Subway) or the beer I had with it? In hindsight, the real problem was probably some red wine I’d had the prior afternoon — I don’t drink wine often anymore, and I’m more sensitive to it. Ah well, whatever the cause, you can see the impact.

You can see where things went off the rails. My first half was a non-disastrous 4 minutes slower than last year, but my second half took me 6 minutes longer than the first. The only bright spot was the short uphill “King of the hill” segment, which I completed at a pace of 8:15/mi vs. last year’s 9:32/mi.

I spent miles 3 to 5 mostly walking through the neighborhoods that I’d joyfully sped through last year, watching my fellow “A” group runners blow by, wondering how I was ever going to run a half marathon again when this puny 10K was killing me.

By mile 5, I was starting to feel somewhat better but I still held off on picking up the pace, recognizing that this was not going to be my day. Unlike last year, I didn’t hear my (phantom) children cheering as I turned the final corner after the bridge and sped up for the finish line.

I finished in 1:06:15, a pokey 10:40/mi pace, 14 minutes slower than last year, 8.5 minutes slower than my fall Daisy Dash, and just 83 seconds faster than my first Cap10K. Not a great result.

I’ve signed up for the Sunshine 10K next month to hopefully find myself back on track, and also got the early-bird discount for signing up for the 2025 Cap10K.

Better prep next time?

-Eric

PS: Bonus pic from yesterday’s (cloudy) total eclipse in Austin.

Attacker Techniques: Gesture Jacking

A few years back, I wrote a short explainer about User Gestures, a web platform concept whereby certain sensitive operations (e.g. opening a popup window) will first attempt to confirm whether the user intentionally requested the action.

As noted in that post, gestures are a weak primitive — while checking whether the user clicked or pressed a key is simple, gestures poorly suit the design ideal of signaling an unambiguous user request.

Hijacking Gestures

A recent blog post by security researcher Paulos Yibelo clearly explains a class of attack whereby a user is enticed to hold down a key (say, Enter), and that gesture is treated both as acceptance of a popup window and as activation of a button on a target victim website. If the button on that website performs a dangerous operation (“Grant access”, “Transfer money”, etc.), the victim’s security may be irreversibly compromised.

The author calls the attack a cross window forgery, although I’d refer to it as a gesture-jacking attack, as it’s most similar to the ClickJacking attack vector which came to prominence in 2008. Back then, browser vendors responded by adding defenses against ClickJacking attacks against subframes, first with IE’s X-Frame-Options response header, and later with the frame-ancestors directive in Content Security Policy. At the time, cross-window ClickJacking was recognized as a threat unmitigated by the new defenses, but it wasn’t deemed an especially compelling attack.

In contrast, the described gesture-jacking attack is more reliable, as it does not rely upon the careful positioning of windows, timing of clicks, and the vagaries of a user’s display settings. Instead, the attacker entices the user to hold down a key, spawns a victim web page, and the keydown is transferred to the victim page. Easy breezy.

Some folks expected that this attack shouldn’t be possible — “browsers have popup-blockers, after all!” Unfortunately for their hopes and dreams, the popup blocker isn’t magical. It blocks a popup only if it’s not preceded by a user gesture. Holding the Enter key is a user gesture, so the attacker’s page is allowed to spawn a popup window to a victim site.

The Core of the Threat

As with many cool attack techniques, the core of this attack depends upon a built-in web platform behavior. Specifically, when you navigate to a URL containing a fragment:

https://victim.example/approve#confirm

…the browser will automatically scroll to the first element (if any) with an id matching the fragment’s value, and set focus to it if possible. As a result, keyboard input will be directed to that element.

The Web Platform permits a site that creates a popup to set the fragment on its URL, and also allows it to set the size and position of the popup window.
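Putting the pieces together, a sketch of the attack (the victim URL and button id are hypothetical):

<!-- On victim.example/approve, the dangerous button has a predictable id: -->
<button id="confirm">Grant access</button>

// On the attacker's page: the held Enter key supplies the user gesture,
// and the fragment focuses the victim's button, so the same held key
// activates it in the new window.
document.addEventListener('keydown', (e) => {
  if (e.key === 'Enter' && !e.repeat) {
    window.open('https://victim.example/approve#confirm',
                '', 'popup,width=120,height=120,left=9999,top=9999');
  }
}, { once: true });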

Web Page Defenses

As noted in Paulos Yibelo’s blog post, a website can help protect itself against unintentional button activations by not adding id attributes to critical buttons, or by randomizing the id value on each page load. Or the page can “redirect” on load to strip off an unexpected URL fragment.
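A minimal sketch of that last defense, run early during the sensitive page’s load:

// If we arrived with an unexpected fragment, clear any focus the browser
// applied from it, then strip the fragment from the displayed URL:
if (location.hash) {
  document.activeElement?.blur();
  history.replaceState(null, '', location.pathname + location.search);
}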

For Chromium-based browsers, an additional option is available: a document can declare that it doesn’t want the default button-focusing behavior.

The force-load-at-top document policy (added as an opt-out for the cool Scroll-to-Text-Fragment feature) allows a website to turn off all types of automatic scrolling (and focusing) from the fragment. In Edge and Chrome, you can compare the behavior of a page loaded with and without the policy.
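The policy is delivered as a response header on the HTML document:

Document-Policy: force-load-at-top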

Browser support is not universal, but Firefox is considering adding it.

WebDev Best Practices
  1. Set force-load-at-top (if appropriate for your scenario), and/or remove id values from sensitive UI controls (e.g. for browsers that don’t support Document Policy)
  2. Use frame-ancestors CSP to prevent framing
  3. Make the safe option (e.g. “Deny”) the default and auto-focused choice
  4. Disable sensitive UI elements until:
    • Your window is sized appropriately (e.g. large enough to see a security question being asked)
    • The element is visible to the user (e.g. use IntersectionObserver)
    • The user has released any held keys
    • An activation cooldown period (~500ms–1s) to give the user a chance to read the prompt. Restart the cooldown each time a key is held, your window gains focus, or your window moves (see the sketch after this list).
  5. Consider whether an out-of-band confirmation would be possible (e.g. a confirmation prompt shown by the user’s mobile app, or message sent to their email).
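For item 4, a sketch of the activation-cooldown idea (the button id is hypothetical; tune the delay to your scenario):

const btn = document.getElementById('grant-access');
let cooldown = null;

// Disable the button, then re-enable it only after ~600ms pass without
// another suspicious event (held key, focus change, etc.):
function restartCooldown() {
  btn.disabled = true;
  clearTimeout(cooldown);
  cooldown = setTimeout(() => { btn.disabled = false; }, 600);
}

restartCooldown();
window.addEventListener('focus', restartCooldown);
window.addEventListener('keydown', restartCooldown);

// Only arm the button while it's actually visible to the user:
new IntersectionObserver((entries) => {
  entries[0].isIntersecting ? restartCooldown() : (btn.disabled = true);
}).observe(btn);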

Beyond protecting the decision itself, it’s a good idea to allow the user to easily review (and undo) security decisions within their settings, such that if they do make a mistake they might be able to fix it before the damage is done.

Attacks on Browser UI

It’s not just websites that ask users to make security decisions or confirm sensitive actions. For instance, consider these browser prompts:

Each of these asks the user to confirm a security-critical or privacy-critical change.

As you might expect, attackers have long used gesture-jacking to abuse browser UI, and browser teams have had to make many updates to prevent the abuse:

Common defenses to protect browser UI have included changing the default button to the safe choice (e.g. “Deny”) and introducing an “input protection” activation timer.

Stay safe out there!

-Eric

pushState and URL Blocking

The Web Platform offers a handy API called pushState that allows a website’s JavaScript to change the URL displayed in the address bar to another URL within the same origin without sending a network request and loading a new page.

The pushState API is handy because it means that a Web Application can change the displayed URL to reflect the “current state” of the view of the application without having to load a new page. This might be described as a virtual navigation, in contrast to a real navigation, where the browser unloads the current page and loads a new one with a different URL.

For example, if I click the Settings link in my mail application, the URL may change from https://example.com/Inbox to https://example.com/Settings while JavaScript in the page swaps in the appropriate UI to adjust the app’s settings.

function onSettingsClick() {
  ShowSettingsWidgets();
  history.pushState({}, '', '/Settings');
}

Then when I click the “Apply” button on the Settings UI, the Settings widgets disappear and the URL changes back to https://example.com/Inbox.

function onApplySettingsClick() {
  CloseSettingsWidgets();
  history.pushState({}, '', '/Inbox');
}

Why would web developers bother changing the URL at all? There are three major reasons:

  1. Power users may look at the address bar to understand where they are within a webapp.
  2. If the user hits F5 to refresh the page, the currently-displayed URL is used when loading content, allowing the user to return to the same view within the app.
  3. If the user shares or bookmarks the URL, it allows the user to return to the same view within the app.

pushState is a simple and powerful feature. Most end-users don’t even know that it exists, but it quietly improves their experience on the web.

Unfortunately, this quiet magic has a downside: Most IT Administrators don’t know that it exists either, which can lead to confusion. Over the last few years, I’ve received a number of inquiries of the form:

“Eric — I’ve blocked https://example.com/Settings and confirmed that if I enter that URL directly in my browser, I get the block page. But if I click the Settings link in the Inbox page, it’s not blocked. But then I hit F5 and it’s blocked. What’s up with that??”

The answer, as you might guess, is that the URL blocking checks are occurring on real navigations, but not virtual navigations.

Consider, for example, the URLBlocklist policy for Chromium that allows blocking navigation to a specific URL. By default, attempting to directly navigate to that URL with the policy set results in a block page:

But if you instead navigate to the root example.com/ URL, then use pushState to change the displayed URL to the blocked one, no block occurs:

…Until you hit F5 to refresh the page, at which point the block is applied:
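To try it yourself, load the site’s root page and run the following in the DevTools console; no network request is made, so URL-based blocking never gets a look:

history.pushState({}, '', '/Settings');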

You can see the same thing with the test pages for SmartScreen or SafeBrowsing. If you click on the first test link in the SafeBrowsing test page, you’ll get Chrome’s block page:

…but if you instead perform a virtual navigation to the same URL, no block occurs until/unless you try to refresh the page:

Similarly, if you create a Custom URL Indicator in Defender’s Network Protection for a specific URL path, you’ll find that a direct navigation to that URL is blocked in Edge, but not if you change the URL using pushState.

Blocks that are implemented by browser extensions are typically bypassed via pushState because the chrome.webNavigation.onBeforeNavigate event does not fire when pushState is called. An extension must monitor the tabs.onUpdated event (or webNavigation.onHistoryStateUpdated) if it wishes to capture a URL change caused by pushState.
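A minimal sketch of such a listener in an extension’s background script (the “tabs” permission is needed for the URL to be populated):

chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.url) {
    // Fires for both real navigations and pushState-driven URL changes:
    console.log(`Tab ${tabId} URL is now ${changeInfo.url}`);
  }
});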

Security Implications?

This pushState behavior seems like a giant security bug, right?

Well, no. In the web platform, the security boundary is the origin (scheme://host:port). As outlined in the 2008 paper Beware Finer-Grained Origins, trying to build features that operate at a level more granular than an origin is doomed. For example, a special security policy applied to https://example.com/subpath, a URL that includes a granular path, cannot actually be enforced.

Why not?

Because /subpath is in the same origin as example.com, and thus any page on example.com can interact with (e.g. add script to) content anywhere else on the same origin.

Security features like SafeBrowsing and SmartScreen will typically perform “rollups”, such that if, for example, evil.example.com/phish.html is a known phishing page, the URL Reputation service will typically block all of evil.example.com if it’s believed that the attacker controls the whole origin.

For an IT Administrator, pushState represents a challenge because it’s not obvious whether a given site supports virtual navigations or not. If you absolutely must ensure that a user does not interact with a specific page, you need to block the entire origin. For features like Defender’s Network Protection, you already have to block the entire origin to ensure blocking in Chrome/Firefox, because network-stack-level security filters cannot observe full HTTPS URLs, only hostnames (and only requests that hit the network).

-Eric

Browser Extensions: Powerful and Potentially Dangerous

Regular readers of my blogs know that I love browser extensions. Extensions can make using your browser more convenient, fun, and secure. Unfortunately, extensions can also break web apps in bizarre or amusing ways, dramatically slow your browser performance, leak your personal data, or compromise your device.

The designers of the Chromium extension system created a platform with a huge amount of power and an attack surface dramatically smaller than its predecessors (COM and NPAPI). That smaller attack surface meant that it was much harder for a rogue extension to harm your device, and it was much easier to tell how much access a given extension would get to your pages and data inside your browser. Unfortunately, many common tasks (particularly scenarios like blocking ads) require that extensions be granted a very high level of permission.

As a consequence, users quickly get accustomed to approving permission requests like this one:

…which grants the extension the ability to: read your email, send email to your entire address book from you, delete files from your cloud storage, put malicious files into your cloud storage and share them to your friends, use your credit card number to order a thousand beanie babies for delivery to your boss, publish your usernames and passwords as a thread on Twitter, dial your mom and ex-girlfriend at 2am, update your LinkedIn profile to reflect a career as a circus clown, and publish embarrassing pictures on your Facebook wall. And more.

Providing this level of access to your browser is more risky than having your password stolen, because many web systems use 2FA and other techniques to prevent abuse of your password, but these techniques are ineffective against a sock-puppet browser.

But… but… but I want what this extension offers and I trust those guys! Heck, I’ve even reviewed their code using the very cool Chrome extension source viewer!

Well, that’s good, but it’s important to understand the full threat environment. Even if the version of the extension you’ve got today is safe, there’s no guarantee that it will remain safe in the future. Popular extensions (particularly free extensions) are a common target of supply-chain attacks. The extension author might get sloppy and accidentally update to a new library with a security bug or trojan code. Or their Google account might get hacked, letting a bad guy take over the extension. Or perhaps they accept one of the enticing offers they’re frequently emailed, offering a few hundred or thousand dollars to “take over” their extension. Your browser auto-updates to the new version of the extension without any notice, and your previously-secure browser has turned from a “User Agent” into an “Attacker Agent.”

It’s scary.

Over the last few years, the Chrome team has tried to reduce the potential for abuse with the new “Manifest V3” system, but pundits and others have popularized elaborate conspiracy theories that this was just a way for Google to crack down on ad-blockers in its own business interests. (This is an especially silly claim, since Google ads are trivially blockable in the new system.)

So, what’s a human to do? Use as few extensions as you can, and prefer your browser’s built-in capabilities as much as you can. Periodically review the extensions in your browser (and your dad’s!) by visiting about:extensions and remove any you don’t recognize or need.

If you work in IT Security, it’s important to understand that the security risk of extensions is almost as great as that of traditional malware. Develop a policy that helps protect your enterprise from malicious extensions by blocking certain permissions and, if feasible, blocking all extensions except a curated allow-list. Chrome and Edge offer a powerful set of Group Policies to control which extensions are permitted and how those extensions may interact with your domains.
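For example, Chromium’s ExtensionSettings policy can express a default-deny posture with a curated allow-list; a sketch (the extension ID and blocked host below are placeholders):

{
  "*": {
    "installation_mode": "blocked",
    "runtime_blocked_hosts": ["*://*.payroll.contoso.com"]
  },
  "aaaabbbbccccddddeeeeffffgggghhhh": { "installation_mode": "allowed" }
}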

If you’re especially on-the-ball, you can create your own company-internal “Web Store” and allow users to install only extensions that your IT Security team has reviewed and vetted. This approach helps prevent supply-chain attacks because your internal Web Store controls the update cadence, and you can allow updates only after they’ve been reviewed by experts. If you want to learn more, Google has published a pretty thorough PDF describing enterprise management of browser extensions.

Stay safe out there!

-Eric

Second Seaside Half

I ran my second Galveston Half Marathon on Sunday, February 25th.

The course was identical to last year’s race: starting at Stewart Beach, heading north, looping back down to the Pleasure Pier, and returning to the start/finish line on the beach.

I opened hard, leaving with the 1:52 pacer and running with the 1:45 pacer for the first two miles at an 8:12 pace. Alas, I couldn’t keep that pace up, but I stayed fast for the first 4 miles before slowing significantly.

Looking at my pace chart, you can see where things fell apart, even as my heart rate never got out of control:

Compared to last year, many things went well: the humidity was dramatically lower, my Coros Pace 3 watch worked well for streaming my music (although I didn’t manage to get it to verbally announce my pace), and nothing on my body really hurt — my left knee was slightly glitchy early on but cleared up quickly, and while I ended up with a pretty gnarly blister on toe #9, it didn’t significantly bother me during the race itself. I had a banana and a stroopwafel before the race and ate a pack of Jelly Belly caffeinated beans while running.

Looking at the course markers, I ran the first 6 miles almost a minute faster than last year, but lost that minute getting to the second turn, finally finishing the race a face-palming three seconds slower than last time:

I was a bit frustrated that the end of the race (inexplicably) snuck up on me. I was just about to drop back to a walk around mile 12 when another racer wheezed “just one more mile,” and then I felt like I’d demotivate him if I slowed down, so I kept up a slow jog for a while longer.

When I saw the Mile 13 sign in the distance, I realized that I was almost out of road and I turned on the gas. I had a great pace going into the last two tenths of a mile, and was left frustrated, thinking I could’ve maintained that pace for at least a half mile.

After the race, I chatted with a few other runners (including my mile-12 inspiration), walked down to the waterfront, then headed back to our rental house, showered, and went to the brewery with my friends who’d completed the 5K that morning.

It was a pleasant afternoon and a nice end to my “dry January.”

My next race is my third Capitol 10K on April 7th. (I’m currently a bit worried that I’ve had a (painless) limp in my left leg for the last day or two, but hopefully it clears up quickly.)

The Importance of Feedback Loops

This morning, I found myself once again thinking about the critical importance of feedback loops.

I thought about obvious examples where small bad things can so easily grow into large bad things:

– A minor breach can lead to complete pwnage.
– A small outbreak can become a pandemic.
– A brush fire can spark a continental wildfire.
– Petty theft can grow into large-scale fraud.
– A small skirmish can explode into world war.

On the other hand, careful reactions to that initial stimulus can result in a different set of dramatically better outcomes… far better than would have developed without that negative initial stimulus:

– A minor breach can avert complete pwnage.
– A small outbreak can prevent a pandemic.
– A brush fire can prevent a continental wildfire.
– Petty theft can prevent large-scale fraud.
– A small skirmish can avert a world war.

When it comes to feedback loops, it matters how fast they are, and how they respond to the stimulus. Early losses can reduce risk by mitigating threats before they become too big to survive.

Sweat the small stuff, before it becomes the big stuff.

Frontloading risk can be a useful strategy, as I covered in a longer post.