Couch to Half Marathon: Closing My First Year of Running

On February 11th, 2022, I took my first jog on my new treadmill: a single mile at 5mph. I’d been taking three-mile walks for a couple of weeks before, but that jog just under a year ago was my first workout over 4mph.

Yesterday, I ran the 3M Half Marathon in Austin, crossing the finish line two hours and forty-six seconds after I started, the culmination of a year’s worth of work.

Going into Sunday’s Half, I had a few worries– the weather was forecast to be cold (in the 30s), and my upper shins and right ankle had been sore for days in places that had never hurt before. I went to bed early on Saturday night and woke up at 4am for a few minutes before fortunately dropping back into a deep sleep. Likely fueled by memories of my failed attempt to run the Decker Half, I dreamt that I’d missed joining my pace group and was forced to run alone despite impossible obstacles (e.g. running in the pitch black with no lamp).

In the morning, I felt a bit like I’d had the classic dream of taking a test I hadn’t studied for– lucky that it hadn’t really happened, but with a renewed commitment to ensuring that it didn’t become real.

I woke up on time feeling pretty good and excited for the run to start, with my bib preattached to my shorts, snacks packed, and ready to go. My all-important trip to the bathroom was ultimately unproductive, but I figured that I would get to the start line early enough to have another shot there. There was a fifteen-minute line to park, so I didn’t get to the start line until about fifteen minutes before the race. It was perhaps 40 degrees, but an intermittently bitter wind made me reconsider my tank top, so I put on the long-sleeve Decker Challenge shirt instead. A quick look at the porta-potty lines suggested a wait of at least fifteen minutes, so I decided to just find my pace group and stretch a little bit. Fingers crossed.

Five minutes before the start, I nibbled on a GU energy stroopwafel my brother sent me for Christmas as the announcer bantered with the shivering crowd. I started the race in the 1:55 pace group (8:47/mile) based on my Q4 race paces of 8:53/mile (10 miles) and 8:49/mile (5 miles) and my recent treadmill half marathons. Unfortunately, the vibe didn’t really feel right — the pacers were harder to see (shorter, wearing less-bright clothing), and clumps of runners around them made it awkward to run nearby. For lack of a better analysis, the energy just wasn’t there.

I sped up a bit and quickly encountered the 1:50 pace group (8:24/mile). The two pacers were taller, decked out in bright colors, and chatting happily with their group. I ended up happily running the first five miles with them before fatigue started setting in and I decided that I wasn’t going to be able to keep it up.

I slowed down with the hope of keeping the 1:50 group in sight in case I caught my wind; otherwise, at worst, I’d pick up the 1:55 group when they caught up.

Unfortunately, Mile 6 was over a full minute slower than the prior five, and Mile 7’s improved 9:11 pace wouldn’t be seen again until the second half of mile 11. Crossing the 10K mark wasn’t the mental boost I’d expected it to be, and I ended up skipping the tables with GU and goodies midway through mile six in an attempt to make up some time.

My first 5K was a respectable 25:45, the second was 27:25, and the third took 30:22:

The race is marketed as “Downhill to Downtown” owing to a net 400 foot drop in elevation over the course:

In practice, things are a lot bumpier than the official chart would have you believe:

…and dropping 400 feet over a 69,168 foot race turns out not to be very noticeable on the pavement. But still, the absence of big hills like my recent Run for the Water was definitely appreciated.

Nevertheless, by the time I reached the 9-mile mark, the 1:55 pacers had caught and passed me, and I got discouraged when I found I couldn’t keep up. I consoled myself that my goal was to beat 2 hours, and with my fast start, it shouldn’t be too hard to stay ahead of the 2:00 pacers. Still, I was starting to drag on a series of tiny hills. I took half a minute at a porta-potty (after bursting in on someone who’d failed to lock theirs, #awkward), and I started to feel a bit depressed — why was I even running this? While my body was (amazingly) pain-free (even my knees and ankle) so far during this run, I knew it was going to ache for days afterward. Even seeing families cheering on the side for their loved ones started to get me down. Still, I got a mental boost when I crossed the 10-mile mark, and by the middle of the 11th mile, I finally felt like I was “in the groove.” Around that time, I ran with the 2:00 pacer for a bit, and in a moment of lucidity I realized I didn’t just need to match the pacer, I needed to beat him, because by starting with the 1:55 group I must’ve crossed the starting line a bit before him.

Alas, my second wind didn’t last, and a series of small hills as we headed downtown took the wind out of my sails. In treadmill runs, I typically try to do the last half mile or mile at 7:30/mile, but I just didn’t feel motivated — I was ahead of the pacer, and what was the point? As the spectators multiplied and the end was clearly approaching, I tried not to get my hopes up– it often turns out that there’s another block or two waiting at the end. When I finally crossed the 13-mile marker, I realized that the “end” in sight was really the end, and I sped up as much as was comfortable.

I crossed the finish line in relief and walked to the “results” tables. I figured I’d probably be somewhere in the neighborhood of 1:59:xx… not a big beat, but a beat nonetheless. Through my sunglasses, I had a hard time reading the Chromebook’s screen; I typed my bib number and saw 2:00:46.

What?!? How?!? I beat the pacer by enough that I couldn’t even see him at the end!

It turns out that the 2:00 pacer crossed the finish line just eight seconds after me, but I’d had a whopping 34-second head start, and he actually finished late, with a time of 2:00:20.

Over-the-shoulder checks for the pacer failed because he was directly behind me. Doh!

Still happy to be done, I tried to keep my annoyance in check– it wasn’t like I was actually super-close (if I’d missed it by mere seconds, I’d’ve probably been furious), and the reason I wanted to beat two hours was so that I’d never have to run another half, and that didn’t feel so important at the moment. Dejected, I grabbed a swag bag and walked up to the bus line to catch a ride back to the start, without checking out anything else at the finish. Fifteen minutes later, just as we pulled into the parking lot, I realized that everyone else was wearing their medals, and I should at least look at mine, even if I wasn’t going to wear it.

At which point I realized that, contrary to my post-run-delirium assumption, my medal was not, in fact, in the swag bag. Ugh. Perfect. I didn’t want the stupid thing anyway. Except, a quiet voice told me, I kinda did want it. Besides, my friends gave me a fancy rack to show off my race medals, and I don’t have many.

I ended up walking to my car, driving back downtown, and reversing my steps through the corral, back to the finish line, where I grabbed a medal from the previously-overlooked volunteers at the side.

Overall, the entire race was … not what I’d expected. Mentally, I just wasn’t there. If I do ever end up running this far again, I’m going to:

  1. Start at a more sustainable pace
  2. Have a watch that’s more useful (mine is hard to read with sunglasses, and as configured, it makes it too hard to see elapsed time and distance)
  3. Make sure I’ve got good music on my playlist (I had music I liked, but a bunch of it is not what I’d consider “running music”)

My next run is the Capitol 10K in April; I learned a lot last time, and I’ve trained a lot since then. I’ll aim to cut at least 8 minutes off my prior 67:38 result. Update: I’ve booked something else before then.

-Eric

Postscript: Whelp.

Defense Techniques: Reporting Phish

While I have a day job, I’ve been moonlighting as a crimefighting superhero for almost twenty years. No, I’m not a billionaire who dons a rubber bat suit to beat up bad guys– I’m instead flagging phishing websites that try to steal money and personal information from the less tech-savvy among us.

I have had a Hotmail account for over twenty-five years now, and I get a LOT of phishing emails there– at least a few every day. This account turns out to be a great source of real-world threats– the bad guys are (unknowingly) prowling around a police station with lockpicks and crowbars.

Step 1: Report the Lure Email

When I get a phishing email, I first forward it to Netcraft (scam@netcraft.com) and PhishTank. I copy the URL from the lure, then use the Report > Report Phishing option in Outlook to report the phish to Microsoft:

Step 2: Additional Research

If I have time, I’ll go look up the URL on URLScan.io and/or VirusTotal to see what they have to say, before loading it into my browser.

Step 3: Load & Report the Phishing Site

Now, most sources will instruct you never to click on a phishing link, and this is, in general, great advice. The primary concern is that an attacker might not just be phishing– they might try to exploit a 0-day in your browser to compromise your PC. This is a legitimate concern, but there are ways to mitigate the risk: only use a fully-patched browser, use a Guest profile to mitigate the risk of ambient credential abuse, ensure that you’ve got Enhanced Security mode enabled to block JIT-reliant attacks, and if you’re very concerned, run inside WDAG or a Virtual Machine.

If the phishing site loads (and is not already down or blocked), I then report it to SmartScreen via the ... > Help and feedback > Report unsafe site menu command:

I also report the phishing site to Google’s Chrome/SafeBrowsing team using the Suspicious Site Reporter extension. This extension allows tech-savvy users to recognize suspicious signals for sites they visit and report malicious sites to SafeBrowsing in a single click:

Importantly, the report doesn’t just contain the malicious URL– it also contains information like the Referrer Chain (the list of URLs that caused the malicious page to load), and a Screenshot of the current page (useful for combatting cloaking).

Attacker Technique: Cloaking from Graders

When a user reports a site as phishing, the report is typically sent to a human grader who evaluates the report to determine whether it’s legitimate. The grader will typically load the reported URL to see whether the target meets the criteria for phishing (e.g. is it asking for credentials and impersonating a legitimate site?).

Phishers do not like it when their websites get blocked quickly. One technique they use to keep their sites alive longer is called “cloaking.” This technique relies upon detecting that their site has been loaded not by a victim but instead by a grader, and if so, playing innocent– either by returning a generic 404, or by redirecting to some harmless page. Phishers have many different strategies for detecting graders, from recognizing known IP ranges (e.g. “if I’m being loaded from an IP block known to be used by Google or Microsoft, I’m probably being graded”), to single-use URLs (e.g. put a token in the URL, and if that token is seen more than once, play innocent), to geo-targeted phish (e.g. “if I’m phishing a UK bank but the user’s IP is not in the UK, play innocent”), to fingerprinting the user’s browser to determine how likely it is to be a potential victim vs. a grader.

This Coinbase-phish cloaks by redirecting graders to the real Coinbase

Cloaking makes the job of a grader much harder– even if the reporter can go back to the grader with additional evidence, the delay in doing so could be hours, which is often the upper limit of a phishing site’s lifetime anyway.
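To make the technique concrete, here’s a minimal sketch of the kind of server-side decision logic described above. Everything in it is invented for illustration (the IP range, the token scheme, the target country); it’s not code from any real phishing kit, which would typically be PHP and far messier:

```python
import ipaddress

# Hypothetical grader IP ranges; a real kit would carry lists of known
# scanner/vendor blocks. 192.0.2.0/24 is a reserved documentation range.
GRADER_IP_RANGES = [ipaddress.ip_network("192.0.2.0/24")]
seen_tokens = set()  # single-use URL tokens already redeemed

def should_cloak(client_ip: str, token: str, client_country: str,
                 target_country: str = "GB") -> bool:
    """Return True if the visitor looks like a grader rather than a victim."""
    ip = ipaddress.ip_address(client_ip)

    # 1. Known grader IP ranges ("loaded from Google/Microsoft? Play innocent.")
    if any(ip in net for net in GRADER_IP_RANGES):
        return True

    # 2. Single-use URLs: seeing the same token twice suggests the link
    #    was forwarded to a grader after the original victim opened it.
    if token in seen_tokens:
        return True
    seen_tokens.add(token)

    # 3. Geo-targeting: a UK-bank phish shown to a non-UK IP plays innocent.
    return client_country != target_country
```

When a check like this returns True, the kit serves a generic 404 or redirects to the legitimate site; only “plausible victims” ever see the credential-collection page.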

Additional Options

If you want to learn even more ways to combat phishing sites, check out the guide at GotPhish.com.

For example, Netcraft also offers a browser extension that shows data about the current website and allows easy reporting of phish:

If doing a good deed isn’t enough, Netcraft also offers some fun incentives for phishing reports— so far, I’ve collected the flash drive, mug, and t-shirt.

Tiered Defenses: Experts as Canaries

One criticism against adding advanced features to browsers to allow analysis or recognition of phishing sites is that the vast majority of users will not be able to make effective use of them. For instance, features like domain highlighting (showing the eTLD+1 in bold text) are meaningless to 99% of users.

But critically, cues and signals like these are useful to experts, who can recognize the signs of a phish and “pull the alarm,” reporting phishing sites to SmartScreen, SafeBrowsing, and other threat intel services.

These threat reports, consumed by threat intelligence services, then scale up to “protect the herd.” Browsers’ blocking pages for known phish are demonstrably extremely effective, with high adherence even by novice users.

Making a Difference

Now, it’s easy to wonder whether or not any of this end-user reporting matters — there are millions of new phish a week — can reporting one make a difference?

Beyond my immediate answer (yes), I have personal evidence of the impact. One of my happiest memories of working on the IE team was when the SmartScreen team looked up how many potential victims my phish reports blocked. I shared with them my private reporter ID and they looked up my phishing reports in the backend, then cross-referenced how many phishing blocks resulted from those reports. The number was well into the thousands.

Beyond the immediate blocks, threat reports these days are also used by researchers to identify phishing toolkits and campaigns, and new techniques phishers are adopting. Threat reports are fed into AI/ML models and used to train automatic detection of future campaigns, making the life of phishers more difficult and less profitable.

Thanks for your help in protecting everyone!

-Eric

SlickRun

While I’m best known for creating Fiddler two decades ago, eight years before Fiddler’s debut I started work on what became SlickRun. SlickRun is a floating command line that provides nearly instant access to almost any app or website. Originally written in Visual Basic 3 and released as QuickRun for Windows 3.1, it was soon ported to Borland Delphi and later renamed SlickRun to avoid a name-collision with an unrelated tool.

SlickRun was a part of the story of how I joined Microsoft — when I had my on-campus interview for my first internship, I’d brought a binder of screenshots from apps that I’d written. My interviewer was generally interested but got super-excited as I explained what SlickRun did. “Have you shown this to Microsoft??” he asked. Flummoxed and wondering “Uh, how exactly would I have done that?“, I replied “Uh, I guess I just did?” Five years later when I interviewed for the IE team, the GM interviewing me asked “How often do you type www.goo in the browser address bar and wish it did the right thing?” to which I responded “Uh, less than you might think.” before showing off the autocomplete inside SlickRun. I got that job too.

While I’ve maintained SlickRun routinely over the years– making updates as needed to support 32-bit and then 64-bit Windows, and keeping it compatible with new paradigms in Windows Vista and beyond– I’ve done relatively little to publicize it to the world at large. It just quietly hums along with a mostly-satisfied userbase of thousands around the world.

Personally, I’ve been using SlickRun nearly daily for almost three decades and have executed almost 200,000 commands on my latest fleet of Windows 11 PCs.

Perhaps the biggest problem with SlickRun is that, designed to be small and simple, it offers few affordances to reveal the tremendous amount of power living under the surface. By default, it ships with only a handful of MagicWords (command shortcuts/aliases) but it will never achieve its full power unless the user creates their own MagicWords to match their own needs and terminology.

If a user types HELP, an online help page will open to explain the basics, and for the few who bother to read that page, an advanced usage page reveals some even less obvious features of the tool.

I’ve been meaning to put together a demo reel video for decades now but have never gotten around to it. Mostly, SlickRun has spread organically, with folks seeing it in use on a peer’s desktop and asking “Hey, how … what is that?”

Idle Info Display

Beyond its program-launching features, SlickRun provides a useful little perch for showing information in an always-visible (by default) location on your desktop. If you type SETUP, you’ll find a variety of display customization options. SlickRun’s “idle” appearance can show useful things like clocks (in arbitrary time zones), the date, battery life, days-until-an-event, machine name, IP address, memory usage, CPU usage, etc:

If SlickRun ever gets in your way (e.g. while watching a full-screen video), just type HIDE to tell it to hide out in your system tray until summoned.

The Basics

Click on SlickRun or hit the hotkey to activate it and enter command mode. (The hotkey is configurable via SETUP. For historical reasons, it defaults to Win+Q which doesn’t work on modern Windows without some simple registry modification due to other tools camping on that key. After a decade, I configured mine to Alt+Q instead.)

Type a command into SlickRun and hit enter to launch it. You can hit the tab key to jump to the end of an autocomplete suggestion if you want to change or add arguments at the end of the command.

Use the up/down arrow keys to scroll through your command history– if you’ve already typed some characters, the history is filtered to just the commands that match. Or hit Alt+F to show a context menu list of all matches (or Alt+Shift+F to loosen the matching to the entire command, not just the prefix). Or, hit Alt+S to show a context menu list containing any Start Menu shortcuts containing what you’ve already typed.

SlickRun loves the internet. Type a url in SlickRun to open it in your default browser. My very favorite MagicWord launches an “I’m Feeling Lucky” Search on Google, so I can type goto SlickRun and https://bayden.com/slickrun/ will open (see this post). This works magically well.

As you can see, you can add MagicWords to launch web searches, where $W$ is filled by a URL-encoded parameter. For example:

After creating this MagicWord, you can type errors 0x1234 and your browser will go to the relevant URL. If you fail to specify a parameter when invoking the MagicWord, you’ll be asked to supply it via a popup window:
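Conceptually, the substitution is simple. This Python sketch shows the idea; the URL template and MagicWord name are made up for illustration, and SlickRun itself is written in Delphi, so this is not its actual code:

```python
from urllib.parse import quote

def expand_magicword(template: str, args: str) -> str:
    """Replace the $W$ token with the URL-encoded arguments,
    in the spirit of SlickRun's web-search MagicWords (sketch only)."""
    return template.replace("$W$", quote(args, safe=""))

# A hypothetical "errors" MagicWord pointing at an example search URL:
print(expand_magicword("https://example.com/search?q=$W$", "0x1234"))
# https://example.com/search?q=0x1234
```

Arguments containing spaces or other reserved characters get percent-encoded, so the resulting URL is always well-formed.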

You can type CALENDAR to launch a calendar, or CALENDAR 5/6 to jump to May sixth.

You can have a single MagicWord launch multiple commands.

In cases where you have related commands, you can name your MagicWords with a slash in the middle of them; each tap of the Tab key will jump to the next slash, allowing you to adjust what is autocompleted as you go.

So, for example, I can type e.g. e{tab}s to get to “Edge Stable” in the autocomplete:

When executing a MagicWord, a $C$ token will be replaced by any text found on the clipboard.

Hit Ctrl+I to get a Windows file picker to insert a file path at the current location of the command line string. Or, tap Ctrl+V with one or more files on your clipboard and SlickRun will insert the file path(s) at the current insertion point. Hit Ctrl+T to transpose the last two arguments in the current command (e.g. windiff A B becomes windiff B A) and hit Ctrl+\ to convert any Unix-style forward slashes (c/src/chrome/) into Windows-style backslashes (c\src\chrome\).

SlickRun can perform simple math operations, with the answer output inline such that you can chain it to a subsequent operation. Try things like

=2^9
=SQR(100)
=123*456
=0x123
=HEX(123)

When running a command, use Shift+Enter to execute a command that should be immediately forgotten. Use Ctrl+Shift+Enter to execute a command elevated (as administrator).

You can create a MagicWord named _a which will execute any time you hit Alt+Enter on a command; this MagicWord allows you to look up a word by typing define powerful or ?powerful, or by just typing powerful and submitting via Alt+Enter:

If you name your MagicWord _define, SlickRun will execute it if no other command is found.

Automatic Behaviors

You can use SETUP to configure an hourly chime with an optional offset so you’ll have a minute or more to get ready for your next appointment:

A MagicWord named _STARTUP will be run automatically anytime SlickRun starts. A MagicWord named _DISPLAYCHANGE will run automatically anytime your Windows display resolution changes.

SlickRun flashes when your clipboard is updated, useful for confirming that your attempt to copy something from another app was successful.

Clipboard & Drag/Drop

You can create a MagicWord with the @copy@ command to copy a string to your clipboard, useful if you have a string that you need to use frequently.

You can drag/drop URLs from webpages, or icons from Start/Desktop to create MagicWords pointing to them.

You can drag/drop text from anywhere to SlickRun to add it to your JOT, an auto-saving jotpad, useful for recording addresses, phone numbers, order confirmation numbers, and the like. Type JOT to reopen it later.

Shortcomings

Every tool has its limits, and SlickRun is no exception. There are a bunch of features that I’d like to add but haven’t gotten around to over the decades.

The key shortcoming is that SlickRun doesn’t offer roaming for the data you’d hope would follow you between machines (in particular, the MagicWord list and the text of the JOT).

I’ve always daydreamed about adding “natural” language recognition to SlickRun (including voice recognition) but I’ve never made any significant effort to explore it, even as technology has advanced to the point where doing so might now be practical.

SlickRun should be open-source, but the code is in a language (Delphi/Object Pascal) which is uncommon. While the code works, it’s not of a quality that I would be proud for anyone to see. In the early years, I had a collaborator who wrote the performance-critical auto-complete logic, but in twenty years only I have laid eyes on the crusty code. I periodically ponder an OSS rewrite in C# (someone else did this as a short-lived project named “MagicWords”) but haven’t found the energy. Fiddler users might recognize that tool’s QuickExec box‘s origins in SlickRun– I partly added QuickExec to Fiddler in the hopes that one day I’d find that I’d added so much functionality to it that I could fork that code out into a SlickRun.NET. Alas, that didn’t happen by the time Telerik acquired Fiddler.

-Eric

2022 EOY Fitness Summary

I spent dramatically more time on physical fitness in 2022 than I have at any other point in my life, in preparation for my planned adventure this June.

My 2022 statistics from iFit on my incline trainer/treadmill show that I walked/jogged/ran almost 700 miles after it was set up on January 24th:

Perhaps surprisingly (given the summer heat), I got the most miles in over the summer months:

Beyond the treadmill, I also ran a few real-world races. Compared to the first half, my use of the exercise bike declined in the latter half of the year, but I still rode a few times a month:

I ended the year 52.7 pounds lighter than I started it, bottoming out at 178.4 pounds in early September before rebounding a bit in the final months of the year. My estimated body fat percentage dropped from a peak of 28.9% to just under 15%.

My FitBit reports 4,186,894 steps, 3,184 floors, 2,181.91 miles, and 1,183,581 calories burned:

My resting heart rate dropped from 64 to 54 beats per minute in the first third of the year, and has bounced around by a beat or two over the rest of the year. I haven’t checked my blood pressure regularly since noting a big improvement in the first third of the year. I got my fourth COVID shot before a second COVID infection in September– I shrugged it off easily in a week.

Looking forward

I’ve got a real-world half marathon (3M) coming up in just over a week, then the Austin Capital 10K coming in April. Then, hiking Kilimanjaro in June.

After that, I’m not sure what’s next: right now, I expect to cut back on running distances to stay around 10Ks, and hope I’ll be able to force myself to start using the rower regularly.

I’m doing another “Dry January” this year. My experiment with alcohol-free beer (Athletic Brewing Company) is a mixed bag– it tastes “fine“, but triggers the same munchies that “real” beer does, which rather defeats the point of the exercise. I tried an alcohol-free liquor (“Spirit of Milano“) but I really don’t like it– I’ll stick to cranberry juice.

Attack Techniques: Priming Attacks on Legitimate Sites

Earlier today, we looked at two techniques for attackers to evade anti-phishing filters by using lures that are not served from HTTP and HTTPS URLs subject to reputation analysis.

A third attack technique is to send a lure that entices a user to visit a legitimate site and perform an unsafe operation on that site. In such an attack, the phisher never collects the user’s password directly, and because the brunt of the attack occurs while on the legitimate site, anti-phishing filters typically have no way to block the attacks. I will present three examples of such attacks in this post.

In the first example, the attacker sends their target an email containing lure text that entices the user to click a link in the email:

The attacker controls the text of the email and can thus prime the user to make an unsafe decision on the legitimate site, which the attacker does not control. In this case, clicking the link brings the victim to an account configuration page. If the user is prompted for credentials when clicking the link, the credentials are collected on the legitimate site (not a phishing URL), so anti-phishing filters have nothing to block.

The attacker has very limited control over the contents of the account config page, but thanks to priming, the user is likely to make a bad decision, unknowingly granting the attacker access to the content of their account:

If access is granted, the attacker has the ability to act “as the user” when it comes to their email. Beyond sensitive content within the user’s email account, most sites offer password recovery options bound to an email address, and after compromising the user’s email account the attacker can likely pivot to attack their credentials on other sites.

A second example is a long-running attack which takes place via PayPal. PayPal allows people to send requests for money to one another, with content controlled by the attacker. In this case, the lure is sent by PayPal itself. As you can see, Outlook even notes that “This message is from a trusted sender” without the important caveat that the email also contains untrusted and inaccurate content authored by a malicious party.

A victim encountering this email may respond in one of two ways. First, they might pick up the phone and call the phone number provided by the attacker, and the attack would then continue via telephone– because the attack is now “offline”, anti-phishing filters cannot get in the way.

Alternatively, a victim encountering the email might click on the link, which brings them to the legitimate PayPal website. Anti-phishing filters have nothing to say here, since the victim has been directed to the legitimate site (albeit with dangerous parameters). Perhaps alarmingly, PayPal has decided to “reduce friction” and automatically trust devices you’ve previously used, meaning that users might not even be prompted for a password when clicking through the link:

Misleading trust indicators and the desire for simple transactions mean that a user is just a few clicks away from losing hundreds of dollars to an attacker.

In the final example of a priming attack, a malicious website can trick the user into installing a malicious browser extension. This is often positioned as a security check, and often uses assorted trickery to try to prevent the user from recognizing what’s happening, including sizing and positioning the Web Store window in ways to try to obscure important information. Google explicitly bans such conduct in their policy:

… but technical enforcement is more challenging.

Because the extension is hosted and delivered by the “official” web store, the browser’s security restrictions and anti-malware filters are not triggered.

After a victim installs a malicious browser extension, the extension can hijack their searches, spam notifications, steal personal information, or embark upon other attacks until such time as the extension is recognized as malicious by the store and nuked from orbit.

Best Practices

When building web experiences, it’s important to consider the effect of priming — an attacker can structure lures to confuse a user into misunderstanding a choice offered by your website. Any flow that offers the user a security choice should have a simple and unambiguous option for users to report “I think I’m being scammed“, allowing you to take action against abuse of your service and protect your customers.

-Eric

Attack Techniques: Phishing via Mailto

Earlier today, we looked at a technique where a phisher serves his attack from the user’s own computer so that anti-phishing code like SmartScreen and SafeBrowsing do not have a meaningful URL to block.

A similar technique is to encode the attack within a mailto URL, because anti-phishing scanners and email clients rarely apply reputation intelligence to the addressee of outbound email.

In this attack, the phisher’s lure email contains a link which points at a URL that uses the mailto: scheme to construct a reply email:

A victim who falls for this attack and clicks the link will find that their email client opens with a new message with a subject of the attacker’s choice, addressed to the attacker, possibly containing pre-populated body text that requests personal information. Alternatively, the user might just respond by sending a message saying “Hey, please protect me” or the like, and the attacker, upon receipt of the reply email, can then socially-engineer personal information out of the victim in subsequent replies.
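As a sketch of what such a lure link looks like under the hood, the following Python builds a mailto: URL with a pre-populated subject and body. The address and message text here are invented for illustration:

```python
from urllib.parse import quote, urlencode

def build_mailto(addr: str, subject: str, body: str) -> str:
    """Construct a mailto: URL that pre-populates a new message in the
    victim's own mail client; no phishing web page is ever involved."""
    params = urlencode({"subject": subject, "body": body}, quote_via=quote)
    return f"mailto:{addr}?{params}"

# Invented attacker address and message text:
link = build_mailto("refunds@example.com",
                    "Account Protection",
                    "Reply with your full name and date of birth.")
print(link)
```

Because the resulting URL names only an email address rather than a web host, URL-reputation systems have nothing meaningful to score.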

The even lazier variant of this attack is to simply email the victim directly and request that they provide all of their personal information in a reply:

While this version of the attack feels even less believable, victims still fall for the scam, and there are even logical reasons for scammers to target only the most credulous victims.

Notably, while mail-based attacks might solicit the user’s credentials, they might not even bother, instead directly asking for other monetizable information like credit card or banking numbers.

-Eric

Attack Techniques: Phishing via Local Files

One attack technique I’ve seen in use recently involves enticing the victim to enter their password into a locally-downloaded HTML file.

The attack begins with the victim receiving an email with an HTML file attachment (for me, often with the .shtml file extension):

When the user opens the file, a credential prompt is displayed, with the attacker hoping that the user won’t notice that the prompt isn’t coming from the legitimate logon provider:

Notably, because this file is opened locally, the URL refers to a file path on the local computer, and as a consequence the URL will not have any reputation in anti-phishing services like Windows SmartScreen or Google Safe Browsing.

An HTML form within the file targets a credential-collection endpoint on infrastructure which the attacker has either rented or compromised on a legitimate site:

If the victim is successfully tricked into supplying their password, the data is sent in an HTTP POST request:
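For illustration, here’s what that submission looks like on the wire. The endpoint URL and field names below are invented; the sketch just shows the standard form-encoded body that any HTML form with method="POST" produces:

```python
from urllib.parse import urlencode

# Sketch of the application/x-www-form-urlencoded body a phishing form
# submits; the endpoint and field names here are invented for illustration.
collection_endpoint = "https://compromised-site.example/collect.php"  # hypothetical

fields = {
    "email": "victim@example.com",
    "password": "hunter2",
}
post_body = urlencode(fields)
print("POST", collection_endpoint)
print("Content-Type: application/x-www-form-urlencoded")
print()
print(post_body)
```

Nothing about the request is exotic — which is part of why such traffic can be hard to distinguish from a legitimate login POST.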

To help prevent the user from recognizing that they’ve just been phished, the attacker then redirects the victim’s browser to an unrelated error page on the legitimate login provider:

The attacker can later collect the database of submitted credentials from the collection endpoint at their leisure.

Passwords are a terrible legacy technology, and now that viable alternatives exist, sites and services should strive to eliminate passwords as soon as possible.

-Eric

ProjectK.commit()

Cruising solo across the Gulf of Mexico last Christmas, I had a lot of time to think. Traveling alone, I could do whatever I wanted, whenever I wanted. And this led me to realize that, while I was about to have a lot more flexibility in life, I hadn’t really taken advantage of that flexibility when I was last single. In my twenties, I’d held onto longstanding “one day, I’d really like to…” non-plans (e.g. “I should go to Hawaii”) for years without doing anything about them, never “advancing the plot.” In my thirties, everything was about the kids or otherwise driven by family commitments, without any real pursuits of my own.

This felt, in a word, tragic, so I challenged myself: “Okay, so what’s a big thing you want to do?” I thought: “Well, I should take a cruise to Alaska.” But that didn’t feel particularly ambitious. A periodic daydream tickled: “I’ve always thought it would be neat to hike Kilimanjaro and see the stars at night.” Now that would be something: foreign travel, a new continent, a physical challenge at least an order of magnitude greater than anything I’d done before, and wildly outside my comfort zone in almost every dimension.

It seemed, in a word, perfect. Except, of course, that I knew almost nothing about the trek, and I was in the worst shape of my life– barely under 240 pounds, I’d bought all new clothes for my Christmas cruise because none of my old stuff fit anymore. Still… the prospect was compelling: a star on the horizon at a time when I was starting to feel directionless. Something to think about to pull me forward instead of succumbing to the nostalgia and sentimentality that otherwise seemed likely to drown me. If not now, when?

Project K was born.

When I got back, I published some New Year’s resolutions, but decided to withhold explicit mention of Kilimanjaro until I’d convinced myself that I could actually get in shape. I set up a home gym, sweating on my previously unused exercise bike and choosing an incline trainer over a treadmill because maximizing incline/decline seemed prudent. I ran a 10K. And then I ran much more, including a treadmill half marathon (via iFit) in the shadow of Kilimanjaro. I requested a catalog from a Kilimanjaro tour company. I read a few books: I bought Polepole and The Call of Kilimanjaro, and a friend sent me a third, self-published account (there are approximately a million of them). I learned much more about the challenges of the hike (mostly related to remaining upright at extreme altitude). I idly wondered whether anyone would ever ask what the “ProjectK” tag on my blog meant.

I’d planned to publicly commit to the trip at the end of June, after I’d told my parents and enlisted my older brother to join me. But I chickened out a bit and decided to wait for my annual bonus at work to decide whether I could afford the trip, and by then I was focusing on September’s Alaska cruise and the final details for our family vacation at New Years. Finally, on December 1st, I pulled the trigger and sent in the deposit for our Kilimanjaro trek next summer. So now I’m committed.

We’re booked on the Western Approach, an itinerary with 11 days in-country and 9 days hiking.

There’s still a ton to do– we need flights, gear, shots, and visas, and I still have tons to learn. I need to broaden my workouts to include more training with incline and decline. I’d like to learn some basic Swahili. I need to do some real-world outdoorsing at lower altitude and lower stakes. I’m going to read some more books. I’m going to find advice from some friends who’ve taken the trip before. I’m going to worry about a million things, including the things I haven’t yet thought to worry about. But I’m excited. And that’s something.

Tempus fugit. Memento mori. Carpe diem.

Missed Half

After last month’s races, I decided that I could reduce some of my stress around my first half marathon (Austin 3M at the end of January) by running a slow half marathon ahead of time — a Race 0, if you will. So, I signed up for the Decker Challenge, with a goal of finishing around 2:10, a comfortable pace of around 10 minutes per mile. While that pace is slower than my January goal (an even two hours), I figured it would probably be almost as hard because the Decker course around the Travis County Expo Center has more hills.

On Saturday, I got my gear ready: charged my phone and headphones, packed my Gu gels (including some new, bigger ones with a shot of caffeine), and got my water bottle ready. I put my number bib/timing chip next to my treadmill to motivate me during the week, and tapered training the few days before the race. Saturday night, I had what seemed like a reasonable dinner (salmon, asparagus, couscous), and got to bed reasonably early. I set my alarm for 6:30, but woke up on my own at 6:20am. I’d had almost exactly seven hours of sleep, and plenty of time before the 8am race. I got up, had coffee, went to the bathroom (with little effect), ate a banana, showered, and got dressed in my trusty shorts, tank top, and new (taller) socks.

At 7:20, I was ready to go and got in the car. I realized with some alarm that the race was further away than I’d thought (~22 minutes rather than 15), but figured that my morning was still basically on track. As I drove, I realized that I hadn’t yet decided whether to put my bib on my shirt or my shorts. Glancing over at my pile of stuff in the passenger seat, I was horrified to realize that I’d brought everything except the one thing I truly needed.

By 7:30, I was back at my house, grabbed the forgotten bib, and decided I should probably have one more try at the bathroom as my belly was grumbling a little. No luck, and I was back on the road by 7:38. Not great, but I could still make the race. Fortunately, Texas roads have high speed limits, but they aren’t designed for driving while attaching paper to one’s pants with metal “safety” pins and I soon gave up.

Luckily, I reached the Expo Center just before 8 and took a left to drive north past the first gate, closed off by a police car and a line of cones. I drove past a second gate with a police car behind the line of cones and kept driving. Surely the entrance would be here soon, right? After another mile or so, I realized that I must have missed it when I took that first left northward, so I drove past the two coned-off entrances and went another mile south before realizing that there was no way the entrance was this far out. I pulled off the road to figure out whether there was perhaps a back entrance and realized that no, that wasn’t possible either. Finally, I turned north again and drove slowly past the first gate before watching a car drive through the cones at the second gate without the policemen complaining. Ugh. Apparently, crossing the line of cones was expected the whole time… something I’d’ve figured out if I’d spent more time perusing the map, or if I’d gotten there early enough to watch everyone else doing it.

More than a bit embarrassed, I walked up to the start line around 8:15 (no one was around) and realized that I wouldn’t be able to run with my target pace group (a key goal for this practice run) and might not even be able to follow the course (looking at the map later, I decided this was an unfounded concern).

I ruefully drove back home to run a half on the treadmill instead, kicking myself a bit for missing the race for dumb reasons, but happy to learn an unforgettable lesson in a low-stakes situation. For January, all of my stuff will be completely ready the night before, and I’ll show up at the start much earlier.

Back at home, I settled on running the Jackson Hole Half Marathon and resolved to run it as realistically as possible — I wore a shirt, ran with the number bib on my leg, and carried my Nathan water bottle in hand. I opened the window but left my big fans off; based on past results, I knew that my heart rate is significantly higher when I’m warm.

I felt strong as I started: after the first quarter I started thinking that perhaps I should try to run a full marathon– the first half at the 2:10 target and the second half much more slowly, perhaps 2:40? This thought kept me motivated for a few miles, but around mile 8 I was not feeling nearly so good. By mile 10, I’d surrendered and turned on the fans, and by mile 11 I knew that this wasn’t going to be a marathon day. I finished in around 2:04, happy to be done but a bit depressed that I certainly wouldn’t’ve met my day’s real-world goal had I run the Decker. (I was also briefly misled because the 2:08 reported by my watch included 4 minutes before I started running.)

I refilled my water bottle and then jogged another 1.2 miles to “finish” the race with the trainer (I run faster than the trainer’s target pace) before calling it a day. I cooled off by walking a mile outside and crossed 30,000 steps for the day for the first time.

So, not a bad effort, but I’m definitely running slower than my prior efforts this year. Before Jackson Hole, I’d run six half marathons on the treadmill this summer, finishing four of them under two hours. The second half of Boston was my best time, at 1:50:30. On the other hand, I recovered from this one far more quickly, with no real blisters, and I was feeling so normal that I had to stop myself from running the next day.

What does all of this mean for my January hopes? I don’t know. But I know that this time I won’t forget my bib!

TLS Certificate Verification Changes in Edge

When establishing a secure HTTPS connection with a server, a browser must validate that the certificate sent by the server is valid — that is to say, that:

  • it’s non-expired (current datetime is within the validity period specified in the notBefore and notAfter fields of the certificate)
  • it contains the hostname of the target site in the subjectAltName field
  • it is properly signed with a strong algorithm, and
  • either the certificate’s signer (Certificate Authority) is trusted by the system (Root CA) or it chains to a root that is trusted by the system (Intermediate CA).
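The first two checks above can be sketched with Python’s standard library. This is a deliberately simplified illustration using the dict shape returned by ssl.getpeercert() and synthetic certificate data — real verifiers (Chromium’s included) do far more, including signature, chain, and revocation checking, and full RFC-compliant name matching:

```python
import ssl
import time

# Simplified sketch of the expiry and hostname checks; the certificate
# data below is synthetic, and real verifiers do much more than this.
def basic_checks(cert, hostname, now=None):
    now = time.time() if now is None else now
    not_before = ssl.cert_time_to_seconds(cert["notBefore"])
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    if not (not_before <= now <= not_after):
        return "expired or not yet valid"
    # Exact-match comparison only; real matching also handles wildcards.
    sans = [v for (kind, v) in cert.get("subjectAltName", ()) if kind == "DNS"]
    if hostname not in sans:
        return "hostname mismatch"
    return "ok"

cert = {  # synthetic certificate fields for illustration
    "notBefore": "Jan  1 00:00:00 2023 GMT",
    "notAfter": "Jan  1 00:00:00 2033 GMT",
    "subjectAltName": (("DNS", "www.example.com"),),
}
print(basic_checks(cert, "www.example.com"))
```

The last bullet — verifying the signature chain up to a trusted root — is where the interesting change described below actually lives.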

In the past, Chromium running on Windows delegated this validation task to APIs in the operating system, layering a minimal set of additional validation on top of the verdict from Windows. As a consequence, Chromium-based browsers relied on two things: the OS’ validation routines and the OS’ trusted root certificate store.

Starting in Edge version 109, Edge will instead rely on code and trust data shipped in the browser for these purposes — certificate chain validation will use Chromium code, and root trust determination will (non-exclusively) depend on a trust list generated by Microsoft and shipped with the browser.

Importantly: this should not result in any user-visible change in behavior. That’s true even where an enterprise depends upon a private PKI (e.g. Contoso runs its own Enterprise CA to issue certificates for servers on its intranet, or WoodGrove Bank uses a “Break-and-Inspect” proxy server to secure/spy on all of its employees’ HTTPS traffic). These scenarios should continue to work because the browser will still check the OS root certificate store if the root certificate in the chain is not in the browser-carried trust list.
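That fallback behavior can be sketched as a few lines of logic. The CA names and data structures below are invented for illustration — Chromium’s actual trust decision is far more involved — but the order of consultation matches what’s described above:

```python
# Rough sketch of the trust decision described above: the browser-shipped
# trust list is consulted first, with the OS root store as a fallback so
# private PKIs keep working. All names here are illustrative, not real data.
BROWSER_TRUST_LIST = {"DigiCert Global Root G2", "ISRG Root X1"}  # shipped with browser
OS_ROOT_STORE = {"Contoso Enterprise Root CA"}  # e.g. a private enterprise CA

def is_trusted_root(root_ca_name):
    if root_ca_name in BROWSER_TRUST_LIST:
        return True  # publicly trusted CA carried by the browser
    # Fallback keeps enterprise PKIs and inspection proxies working:
    return root_ca_name in OS_ROOT_STORE

print(is_trusted_root("ISRG Root X1"))               # True
print(is_trusted_root("Contoso Enterprise Root CA"))  # True (via OS store)
print(is_trusted_root("Unknown Root"))                # False
```

Because the OS store is consulted only when the browser-carried list doesn’t contain the root, public-web sites get the consistent cross-platform behavior while enterprise scenarios are unaffected.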

Q: If the outcome is the same, why make this change at all?

A: The primary goal is consistency — by using the same validation logic and public CA trust list across all operating systems, users on Windows, Mac, Linux and Android should all have the same experience, not subject to the quirks (and bugs) of the OS-provided verifiers or the sometimes-misconfigured list of OS-trusted CAs.

Update: A colleague observed today that on MacOS, Edge using the system verifier returns NET::ERR_CERT_VALIDITY_TOO_LONG when loading a site secured by a certificate that he generated with a 5-year expiration. When switching to the Chromium verifier, the error goes away, because Chromium only enforces the certificate-lifetime limit for certs chaining to public CAs, while Apple applies a stricter requirement even to private CAs.

Please Preview ASAP

I’ve written before about the value and importance of practical time machines, and this change arrives with such a mechanism. Starting in Microsoft Edge 109, an enterprise policy (MicrosoftRootStoreEnabled) and a flag (edge://flags/#edge-microsoft-root-store-enabled) are available to control when the built-in root store and certificate verifier are used.

Please try these out, and if anything breaks in your environment, please report the issue!