Recently, Azure opened their Trusted Signing Service preview program up for individual users and I decided to try it out. The documentation and features are still a bit rough, but I managed to get a binary cloud-signed in less than a day of futzing about.
For many individual developers Azure Trusted Signing will be the simplest and cheapest option, at $10/month. (Microsoft Employees get a $150/month Azure credit for their personal use, so trying it out cost me nothing.)
Note that I’ve never done anything with Azure or any other cloud computing service before– I’m a purely old-school client developer.
First, I visited my.visualstudio.com to activate my Microsoft Employee Azure Subscription credit for my personal Hotmail account. I then visited Azure.com in my Edge Personal Profile and created a new account. There is a bit of weirdness about adding 2FA via Microsoft Authenticator to the account (which I already had enabled); what appears to actually be happening is that you’re creating a new .onmicrosoft.com “shadow” account for your personal account.
With my account set up, in Azure Portal’s search box, I search for “Trusted Signing”:
and I click Create:
I fill out a simple form, inventing a new resource group (SigningGroup; no idea what this is for) and a new Account Name (EriclawSignerAccount; you’ll need this later), and make the important choice of the $9.99/month tier:
My new signing account then appears:
Click it and the side-panel opens:
It’s very tempting to click Identity validation now (since I know I’ll need to do that before getting the certificate), but instead you must first click Access control (IAM) and grant your account permissions to request identity validation:
In the search box, search for Trusted, select the first result (Trusted Signing Certificate Profile Signer), then select Next:
In the Members tab, click Select members and pick yourself from the sidebar.
Click Select and then Review and Assign to grant yourself the role. Then repeat the process for the Trusted Signing Identity Verifier role.
With your roles assigned, it’s time to verify your identity. Click the Identity Validation button, change the dropdown from Organization to Individual, and click New Identity > Public:
Fill in the form with your information. Ensure that it matches your legal ID (Driver’s License):
You’ll then be guided through a workflow involving the Microsoft Authenticator app on your phone and a 3rd party identity verification company. You’ll see a Success message once you correctly link your new Verified ID in the Authenticator app to the Azure Service, but confusingly, you’ll still see Action Required in the Azure dashboard for a few minutes:
Just be patient — after about 10 minutes, you’ll get an email saying the process is complete and Action Required will change to Completed:
Next, click Certificate Profile to create a new certificate:
Click Create > Public
Fill out a simple form selecting your verified identity and naming the profile (I used EricLawCert; you’ll need this later):
In short order, your certificate is ready for use:
Now, using the certificate is somewhat more complicated than using a local certificate, but many folks are now doing fancy things like signing builds in the cloud as part of continuous integration processes.
I, however, am looking for a drop-in replacement for my old manual local signing process, so I follow the guide here to get the latest version of SignTool, as well as the required DLIB file (which you can just unzip rather than using NuGet, if you want) that knows how to talk to the cloud. Select the default paths in the installer, because otherwise the thing doesn’t work. Run signtool.bat, which will pull the correct dependencies and then tell you where it put the real signtool.exe:
Now, create a file that points at your cloud certificate profile; I named mine cloudcert.json. Be sure to put in the correct cloud endpoint URL and the account and profile names you chose when setting up the certificate:
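As a sketch, the file contains just a few fields (the field names here follow the Trusted Signing DLIB documentation; the endpoint URL below assumes an East US account, so substitute the URL for your region):

{
  "Endpoint": "https://eus.codesigning.azure.net",
  "CodeSigningAccountName": "EriclawSignerAccount",
  "CertificateProfileName": "EricLawCert"
}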
Then create a .bat file that points at the newly installed signtool.exe file, using the paths you chose to point at the DLIB, JSON, and file to be signed:
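A sketch of such a batch file (the DLIB filename and all paths here are illustrative; point them at wherever the installer actually put signtool.exe and the DLIB, and at your own JSON and target file):

REM Sign using the Trusted Signing DLIB; all paths below are illustrative.
"C:\SignTools\signtool.exe" sign /v /debug /fd SHA256 ^
  /tr "http://timestamp.acs.microsoft.com" /td SHA256 ^
  /dlib "C:\SignTools\Azure.CodeSigning.Dlib.dll" ^
  /dmdf "C:\SignTools\cloudcert.json" ^
  "C:\MyApp\MyApp.exe"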
Run your batch file. If it doesn’t work and shows a bunch of local certificates that have nothing to do with the cloud, the DLIB isn’t working. Double-check the path you specified in the command line.
Now, at this point, you’ll probably get another failure complaining about DPAPI:
After you address that, run the script again and you’ll get to a browser login prompt. Exciting, but this next part is subtle!
You may see the account you think you want to use already in the login form. Don’t click it: If you do, you’ll get a weird error message saying that “External Users must be invited” or something of that nature. Instead, click Use another account:
Then click Sign-in Options:
Then click Sign in to an organization:
Specify your .onmicrosoft.com tenant name[1] here and click Next:
Only now do you log into your personal email account as normal, and after you do, you’ll get a success message in your browser and the signature will complete:
You can choose Properties on the Explorer context menu for the file to see your newly added signature:
Many years ago, I wrote the first drafts of Chromium’s Guidelines for Secure URL Display. These guidelines were designed to help feature teams avoid security bugs whereby a user might misinterpret a URL when making a security decision.
From a security standpoint, URLs are tricky because they consist of a mix of security-critical information (the Origin) and attacker-chosen content (the rest of the URL). Additionally, while URLs are conceptually simple, there are many uncommon and complicated features that lead to misunderstandings. In many cases, the best approach for safely-rendering a URL is to instead render its Origin, the most security-sensitive component and the one best protected against spoofing.
The challenge of securely displaying filenames is similar, but not identical. A filename consists of two components of varying levels of trustworthiness:
The (red) attacker-chosen “base name” is entirely untrustworthy. However, the (green) file type-declaring extension at the end of the string is security-critical on many platforms because it determines how the file will be handled.
In most cases when opening a file, the file’s extension is parsed and interpreted by the operating system’s shell, meaning that the OS will correctly choose the handler for a given file, no matter what spoofing tricks an attacker may use.
As a part of this file-invocation process, the OS will correctly apply security policy based on the type of the file (e.g. showing a pre-execution warning for executables and no warning before sending a text file to Notepad). If an attacker sends a dangerous file (e.g. a malicious executable) with an incorrect extension, the result is typically harmless; for example, the code is simply shown as text inside Notepad:
So, if the OS behaves correctly based on a filename’s actual extension, is there any meaningful spoofing threat at all?
Yes. There are two important threats:
Not all dangerous file types are known by the system
Systems will typically allow a user to run/open potentially unsafe files if they first accept a security prompt
Problem #1: Not All Dangerous Types are marked as such
Windows contains a built-in list of potentially-dangerous file type extensions, but third-party handler software can introduce support for new extensions (e.g. Python or Perl) without properly indicating to Windows that files of that type may be dangerous. As such, the OS will allow the user to invoke files of that type from untrusted sources without warning.
If the user installs a handler for one of these dangerous types, the burden is on the user to avoid invoking a file of that type if they do not trust the file.
However, a spoofing vulnerability that obscures the file’s true type could trick a user into (for example) running a Python script when they thought they were going to open a text file.
Problem #2: Security Prompts
One protection against malicious files is the user recognizing that a file is potentially dangerous before they copy or download it to their computer from an untrusted location. A spoofing attack could trick the user into failing to recognize a potentially-dangerous file (e.g. a .hta file) when a safe file (e.g. a .html file) is expected:
Similarly, another protection against malicious files is the OS warning shown before executing a potentially dangerous file. This warning might be the SmartScreen Application Reputation warning:
…or the decades-old Attachment Execution Services warning:
A spoofing attack against these security UIs could render them ineffective: a user who clicks “Run anyway” or “Open” based on spoofed information would be compromised.
Attacks
In most cases, an attacker has significant latitude when choosing the base filename and thus can mount any number of attacks:
An overlong filename might cause UI truncation, such that the user cannot even see the real extension.
A filename containing many embedded whitespaces (spaces, tabs, or any of dozens of Unicode characters) might push the extension so far away from the start of the filename that it’s either truncated or the user simply doesn’t see it.
A filename containing a Unicode right-to-left override character might display with the extension in the middle. For example:
In HTML, such a name could render as This is safe and not an exe.txt even though the file’s real extension is .exe, because the RTL-override character reverses the text direction in the middle of the string.
A filename of Pic.gif from Download.com might be mistaken as a GIF from Download.com, when it’s really a .com executable from elsewhere.
Prompt to save a file with a confusing name (“Pic.gif from download.com”); the user thinks it’s an image, but it’s an executable of type COM.
Some notes on Filename Limits
In many scenarios, filenames are limited to MAX_PATH, or 260 characters. While Windows can be configured to increase MAX_PATH to ~32k and apps manifested to declare their support, this feature is not broadly used and thus attackers cannot rely upon its availability.
Characters with special meaning in the filesystem (specifically \ / : * ? " < > |) are not allowed to appear in names. There’s also a small list of filenames that are prohibited on Windows to avoid overlapping with device names (e.g. con, lpt1, etc).
At a lower level, there are other forms of abuse, as noted in the documentation: “The shell and the file system have different requirements. It is possible to create a path with the Windows API that the shell user interface is not able to interpret properly.” In most cases, the attacker would already need significant access to the system to abuse these.
Best Practices
The most important best practice for security UI is to ensure that users can recognize which information is trustworthy and which information isn’t.
To address a spoofing attack in 2024, the Internet Explorer MIME Handler security dialog was enhanced. First, it now clearly identifies itself as a security warning, and indicates that the file may be harmful. Next, it was updated to ensure that the filename extension is broken out on its own line to mitigate the risk of user confusion:
Ideally, untrustworthy information should be omitted entirely. As you can see in this example, this attacker uses a great deal of whitespace in the filename field to try to trick the user into thinking they’re opening a harmless .pdf file instead of a dangerous .hta file. While breaking out the extension to its own line helps somewhat, a user might still be fooled.
In contrast, the SmartScreen AppRep warning dialog hides attacker-chosen information (like the filename) by default to reduce the success of spoofing:
Safe Security UX shows no attacker-controlled content
If your scenario requires that the user must be able to see an attacker-chosen filename, you should follow as many of these best practices as you can:
Break out the filename extension to its own field/line.
Hide the attacker-controlled filename by default.
Ensure that long filenames are displayed fully.
If you must elide text to fit it within the display area, trim from the untrustworthy base filename, ensuring that the extension remains visible. (In CSS, you might use direction: rtl with text-overflow: ellipsis.)
Sanitize or remove potentially spoofy characters (whitespace, Unicode RTL-overrides, etc); see the sketch after this list.
Guard against injection attacks (e.g. if your UI is written in HTML or powered by JavaScript, ensure that you don’t suffer from HTML-injection or XSS attacks in the filename)
Ensure that the display of the filename field is distinct from all other parts of your security UI, and that words chosen by the attacker cannot be mistaken as text or advice from your app.
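To illustrate the sanitization advice above, here’s a minimal sketch (not production code; a real implementation would cover a broader set of Unicode whitespace and invisible characters) that strips bidi-override characters and collapses whitespace runs before display:

// Minimal sketch: sanitize an attacker-chosen filename for display.
// Drops Unicode bidi-control characters and collapses whitespace runs.
#include <string>
#include <cwctype>

std::wstring SanitizeFilenameForDisplay(const std::wstring& name)
{
    std::wstring clean;
    clean.reserve(name.size());
    bool fLastWasSpace = false;
    for (wchar_t ch : name)
    {
        // Drop bidi controls: LRE/RLE/PDF/LRO/RLO (U+202A..U+202E) and
        // the isolates LRI/RLI/FSI/PDI (U+2066..U+2069).
        if ((ch >= 0x202A && ch <= 0x202E) || (ch >= 0x2066 && ch <= 0x2069))
            continue;
        // Collapse any run of whitespace into a single space.
        if (iswspace(ch))
        {
            if (!fLastWasSpace) { clean.push_back(L' '); fLastWasSpace = true; }
            continue;
        }
        fLastWasSpace = false;
        clean.push_back(ch);
    }
    return clean;
}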
Limits to Protection Value
While helping users recognize a file’s true extension provides some security value, that value is very limited– relying on users to understand which file types are dangerous and which are not is a dicey proposition.
While combatting spoofing attacks has merit, ultimately, your security protections should not rely on the user and should instead be based on more robust protections (blocking dangerous file types entirely, performing security scans based on a file’s true type, et cetera).
Scammers often try to convince you that you’ve already been hacked and you must contact them or send them money to prevent something worse from happening. I write about these a bunch:
A tech scammer shows a web page that says your PC has a virus and you need to call them or download their program to “fix” it.
A notification spammer shows fake alerts pretending like they’re from your local security software.
An invoice scammer claims they’ve withdrawn money from your account and you need to call them to cancel the transaction.
Another common “Bad thing already happened” scam is to send the user an email telling them that their devices were hacked some time ago and the attacker has recorded videos of the victim engaged in embarrassing activities.
The attacker usually includes some “phony evidence” to try to make their claims seem more credible. In some such scam emails, they’ll include a password previously associated with the email address, gleaned from a dump from an earlier data breach. For example, I got multiple scam emails citing my account’s password from the 2012 breach of LinkedIn:
In today’s attack, the bad guy simply forges the return address to my own email address, hoping I’ll believe this means that they already have access to my account:
Under the hood, Hotmail knows that this return address was forged:
Authentication-Results: spf=fail (sender IP is 195.225.99.200)
 smtp.mailfrom=hotmail.com; dkim=none (message not signed)
 header.d=none; dmarc=fail action=none header.from=hotmail.com;
Received-SPF: Fail (protection.outlook.com: domain of hotmail.com does not
 designate 195.225.99.200 as permitted sender)
 receiver=protection.outlook.com; client-ip=195.225.99.200;
 helo=willishenryx.com;
Received: from willishenryx.com (195.225.99.200) by
 BL6PEPF00022575.mail.protection.outlook.com (10.167.249.43)
The attacker typically promises the victim that they’ll delete the incriminating videos if the victim pays a ransom in cryptocurrency:
There are various tools that can be used to look up traffic to crypto-currency addresses, and while the address in today’s scam is idle, I’ve previously encountered scams where the attackers had been sent thousands of dollars by several victims. :(
Tragically, it seems entirely plausible that this scheme has killed panicked teens (similar sextortion schemes definitely have) who thought something bad had already happened without recognizing that it was all a lie.
Stay safe out there, and make sure your loved ones know that everyone on the Internet is a liar.
On January 19th, I ran the newly-renamed “Austin International Half Marathon” (formerly 3M). The night before I had spaghetti and meat sauce with the kids, and the morning of, I woke at 5:15 and had a cup of coffee and my usual banana. My trip to the bathroom was not very productive which worried me a bit, but my belly felt okay so I didn’t worry too much.
I left the house around 6:35 and parked in my usual spot around 6:50, early enough to get to a Porta-Potty without a line. It was bitterly cold that morning, with a stiff wind blowing southward, making for a rough start to the first northbound leg of the race. But since most of the race was southbound, we had a tailwind for most of it (not that I noticed).
I wore my trusty tights from last year, a $2 pair of disposable gloves I bought at the expo, and (luckily for me), my hoodie from the 2014 CodeMash conference. I assumed I would discard the hoodie and gloves along the way, but it was cold enough that I kept both for the entire race.
Just before starting, instead of my now-customary stroopwaffel, I had a “Salted Watermelon” gel pack (the first time I’ve tried them). One of my two earbuds failed with a low battery (it’s always something) but the other stayed alive for the whole race.
I kept a solid pace for the first three miles, right around 9 minute miles. Alas, as I expected, I couldn’t keep it up and slowed down; last year, I’d managed to keep that pace for the first 10K.
Still, by mile 7 rather than feeling depressed and dying, I was actually enjoying myself a bit. My body felt good overall, my plantar fasciitis was quiet, my legs felt okay, and my greatest annoyance was a slight chafing I felt on my inner thigh (as I forgot to apply Body Glide as I usually do). A cold orange wedge offered by a kind spectator around mile 11 seemed like the most refreshing thing I’ve ever eaten. I particularly enjoyed jogging through the UT Campus on the tree-lined brick path around mile 12.
A big upside to my third run is that the course felt more familiar, and I was no longer surprised/dismayed by the two short uphills in the city blocks at the end (although I didn’t even try to run them). I didn’t go all out at the end like I usually try to do, figuring that shaving tens of seconds wasn’t worth the risk that I’d hurl at the end (my near-miss in 2023’s Turkey Trot top of mind).
All in all, I finished in 2:18:35, nine minutes slower than last year’s race, which was itself nine minutes slower than my first effort.
Third Austin Half (2025) Results
While I’m not loving the direction my times are going, this race was (allegedly) 27 minutes faster than my glacial Dallas Half last month. I’m not entirely convinced that the Dallas numbers are legit (I was pretty sure I was around a still-slow 2:35.)
December 2024 Dallas Results
Spectator signs included a few copies of my favorite (“Seems like a lot of work for a banana”) and I was a bit annoyed to discover that they didn’t even have bananas at the end this year. What a scam!
The most topical spectator sign was “With TikTok banned, I’m so bored I’m watching this Half Marathon!” Two people had signs reading “This isn’t that hard– even boys do it,” which I’ll confess annoyed me more than it amused me. But that’s another post.
Alas, my failure to apply Body Glide was a worse mistake than I realized, and I spent the next few days (on a work visit to Redmond HQ) walking and sitting gingerly, as the chafing turned out to be a lot worse than it felt during the run. Other than that, and a gross-but-routine blister on one toe, I was no worse for the wear. Still, the coming full Marathon in Galveston in just three weeks loomed large in my mind… will I be able to finish?
Galveston Full
Way back at the end of 2023, I ran the London Marathon on my treadmill, but I’ve not approached that distance since. Over a decade ago when Jane was training and running full marathons, I’d decided it was just a stupid distance — training was too time-consuming, and there was too high a risk of injury. When I finally started running myself, I decided that 10Ks would be my main target, with one half marathon “just to see if I could.”
Still, in the afternoon after 2024’s Galveston Half Marathon run (my fourth real-world half), I ended up walking for about 7 miles on the beach, so when it came time to sign up for the 2025 race I decided that I’d sign up for the Full. I expected I’d run as much as I could (maybe 20 miles or so) and just walk the rest.
Alas, between my treadmill being broken for the fall of 2024, weight gain, my ongoing problems with plantar fasciitis, and a lengthy cold after my mid-January trip to Redmond, I was not feeling very ready for the Galveston Full. I had a nice spaghetti dinner the night before and went to bed early, around 10pm. I woke at 4:30 and dozed in bed until 5:30. Alas, my bathroom trip was unproductive, although my belly felt fine and I figured I’d probably visit a porta-potty after the first half.
Race morning was cool and extremely humid (~97%) and clouds kept the morning sun at bay (although not quite as foggy as 2023’s race). A light wind felt nice along the beach for the 7:30am start:
The sun broke through the clouds after a few hours and I had to apply more sunblock around mile 15, an effort that was not entirely effective 🤣
I’d expected to finish the first half in around the same time I’d achieved last year (~2:15) leaving me as much as 3:45 for the second half (the course officially closes in 6 hours). Alas, knowing that every step in my first lap was going to be repeated on tired legs did not help my motivation, and I finished the first loop in a glacial 2:39, almost as slow as my run in Dallas.
My optimistic plan of breaking down the full race into a Half, walking to mile 14, then running a 10K, then walking to mile 21, then running a 5K, then walking, then running a final 1K immediately fell apart when it made contact with reality. Both of my watches were being weird (they weren’t configured right and kept “pausing” my workout) so I started just using the distance marker signs for little intervals — the Half Marathon signs and Marathon signs were .1 miles apart, and I quickly settled into running between them and walking the rest. All told, I ended up running only about 2 miles of the second half, although my walking pace was a respectable 16 minutes/mile, so I didn’t feel too bad about things. I did spend miles 16 to 23 worried that I wasn’t going to finish before the course closed (I didn’t have the energy/motivation to really check my watches), and I watched with dismay when the final pacer passed me.
Between the humidity and the sun, I was increasingly dehydrated through the second half of the race, eventually refilling my water bottle at every aid station and emptying it before the next. Fortunately, sweating so much meant that I barely needed to pee (other than at the start/half/finish line, this race has just 3 porta-potties along the course). Having eaten two packs of Sport Beans in the first half (100 calories each), in the second I consumed 4 GU Liquid Energy packs (also 100 calories each). I probably should’ve had a couple more.
While I was tired and uncomfortable, nothing was really painful (a side/back cramp worried me for just ~1/3 of a mile, and I knew my feet were starting to blister). I felt strong each time I forced myself back into a run. All in all, I wasn’t working nearly as hard as I could’ve, and finishing 15 minutes faster probably would not have been all that challenging. As it was, my final results were… not great.
Still, I finished and I didn’t get hurt, so I’m counting the effort as a win. I figured I’d be immobile for the rest of the day at least, but managed to make it out to grab dinner and my first two drinks at the brewery before watching the Eagles trounce the Chiefs (40-22) in the Super Bowl. My feet are sore, I’ve got some sunburn, but no significant chafing, and blisters only on toes #2 and #9 and a tiny one on my left heel.
I do not plan to ever attempt a marathon again, but I’m very excited to do more half marathons, and hope to start getting some more respectable results in the upcoming Capitol 10K and 10K Sunshine runs in April and May.
I’d intended to write this post weeks ago, but I’ve been rather unproductive.
I ran the Dallas Half Marathon with an out-of-town friend on December 15th. It was a hard and very slow trek, but I managed to get back to a run in the last mile and I didn’t get hurt, so I’m counting it as a win.
<picture of chubby old guy jogging omitted because ain’t nobody paying $15 for that race photo :>
The boys and I went to see the Ravens play the Texans in Houston on Christmas Day.
The game itself was a miserable 31-2 blowout…
…but the (surprise?) halftime artist was neat. Beyonce put on an impressive show, even from our nose-bleed seats.
We spent the night in Houston, and two days later flew to Miami for a seven-night cruise.
We had a pleasant-if-expensive overnight (Miami holiday hotels are $$$!). We first Uber’d to Coral Gables’ Hampton Inn, then experimented with the very convenient light-rail across the street to get to dinner. Noah loved the food at Bocas Grill and Nate and I both liked it too.
Insane dessert milkshake
We easily made our way to the cruise terminal the next morning, again by Uber because the light-rail doesn’t go out to the piers. Alas, once we arrived, it was impossible to really appreciate the sheer size of the ship from the terminal.
I was worried that Royal’s Icon of the Seas was going to be much too big for my taste, but it really wasn’t– the layout was awesome, and there was so much to do.
View from the elevator lobby near our room
Our balcony room was not nearly as large as the suite on our summer cruise, but it was still very nice; the Icon is less than a year old and most of the updates (e.g. USB ports all over) were very convenient.
We brought 110 rubber ducks to hide on the boat, and Nate and I had a good time finding spots for them.
For our first excursion, we took a smaller ferry boat from St. Kitts to the adjacent Nevis to the sound of a steel drum player:
Once we arrived, Nate and I went in the ocean and he played in the sand, while Noah hung out in the adjacent beach restaurant and watched football.
The ferry’s engine broke down on the way back, but the music played on and the bar stayed open. As the last day of 2024 before the start of Dry January, I took full advantage. Nate and Noah enjoyed endless cups of fruit punch and a dozen varieties of Cheetos. (Who knew there were so many?!?)
Nate does not allow pictures
Nate entertained himself on the trip back (and much of the rest of the cruise) chatting up teen girls, most of whom he met after getting into staring contests with them 🤣. Noah found some kids his own age to hang out with.
After an entertaining tow back to the ship, dinner that night included celebratory hats and noisemakers, and Nate took full advantage.
A massive balloon drop marked the start of the new year. Nate was amongst the revellers, and I took pictures through the skylight from the park above:
Nate brought some of the balloons back to the room:
The following morning we were in St. Thomas. A very short bus ride took us to a beach where Nate lazed under an umbrella and Noah and I took a rented jet ski out for a half hour trip. He whooped wildly as we flew across the water, maxing it out at 77kph. He wanted to move on to one of the two cars (think of a jetski with seating for four with a shell like a sports car), but they were dramatically more expensive ($350 vs. $80) so we didn’t try them. He also was interested in trying the awesome-looking “water jetpack” (Flyboard) but we didn’t manage to figure out where to rent one.
Noah spent the rest of the excursion tossing the football in the water as I tracked down lunch (apparently nobody starts cooking until noon in the US Virgin Islands).
After rushing to finish burgers before our early-afternoon departure, we ran through a souvenir shop and reboarded the Icon. We had the early dinner slot and Noah was happy to again enjoy his “Surf-and-Turf”:
The kids passed time on sea days and in the evenings playing miniature golf, games on the sport court, watching NFL games, and chatting with other kids they met on the trip. (Surprisingly, one of Noah’s classmates was also aboard.)
Nate collected a 3rd place medal in the nightly glow-golf competition:
I spent a fair amount of time just enjoying the scenery and trying to figure out what the kids were up to.
Fortunately, we didn’t discover the cotton candy / candy store until late in the cruise.
One of the biggest draws of the cruise was trying to find Rover, the Icon’s “Chief Dog Officer.” I spotted a stuffed version at the gift shop and surprised Nate with it on his pillow the night before we were slated to meet the real girl:
While we only got a few pets in at the meet-and-greet, Nate was satisfied:
Noah demanded wifi on the trip, and after a ~$150 mistake on our last cruise where he’d left his cellular on for a few minutes, I caved. While I still think getting away from the Internet is the primary reason to cruise, and $25/day is robbery, the Starlink connection managed a pretty impressive 9 Mbps.
Our final excursion was to CocoCay, Royal’s private island with the impressive “Thrill” waterpark. We’d already visited during our family cruise this summer and thus had already tried all of the slides, so we took it easier this time and had a ton of fun riding our favorites and otherwise relaxing. It was perhaps 5 degrees colder than ideal, but almost perfect.
Icon next to Voyager. Icon is 175 feet longer (17%) and 68 feet (43%) wider than the Voyager class (my old fave).
Shows on board were awesome. The three stand-up comedians were hilarious (very adult) and I saw them twice. The Ice Skating shows were both good and Nate and I watched both together. On the main stage, the juggler was really good (pretty sure I’ve seen him before), The Wizard of Oz was a bit of a snoozer, the “Avengers”-knockoff show was amazing (I saw it twice), and the high-diving acrobatics in the aqua-theater were spectacular.
Adam Kario
The “Effectors” show included some great SFX and a drone show over the audience
When we returned to Miami (too soon!), we took a quick trip to the Everglades to see some gators and ride the airboats. Nate had fun trying to out-scream the roar of the engines:
We had a delayed but otherwise uneventful flight back to Austin with a stop in Houston. United was pretty terrible, both with an ever-growing delay in Houston, and the fact that they still managed to leave my suitcase behind and had to deliver it the following day. But I was still in a good mood after a great vacation.
Looking forward
I fear and despair for what’s going to happen in the United States over this year, but these aren’t useful feelings, so I’m concentrating on what I can control.
My end-of-year revisit to Kilimanjaro’s summit looms in the distance, but I’ve got a lot of other things coming up before then.
First and most pressing — races. On January 19th, my third Austin Half Marathon. The current forecast calls for a frigid 28F, so I’ve got something of an excuse for what I expect to be a very slow pace — how fast can you expect a snowman to run anyway? Hours after the race, I’m flying to Seattle for a week with my team. On Super Bowl Sunday (Feb 9th), I’m going to test myself in the Galveston Marathon. When I (not entirely sober) signed up for the full marathon last year, I reasoned that even if I didn’t train, I could just run the first half and walk the second. That’s still the plan, but as the days fly by I’m less confident that the plan is a slam-dunk. (Bert Kreischer ran a full in 5:33:33, so beating that is my unofficial goal). Fingers crossed!
Other than that — cruises. I’m going to do a quick cruise on the Mariner of the Seas in March, then the boys and I are going to spend Thanksgiving week on Harmony of the Seas. They’re pushing for another trip on the Icon or the (brand new) Star sometime this summer, but I don’t know that we have either the time or the budget for that this year.
This morning, I awoke from a dream. I’d just discovered a ticking time bomb was a fake, and the dream ended as I said to my companion “There’s nothing quite as exhilarating as finding out that today isn’t the day you’re gonna die.”
As I opened my eyes in bed, my now-conscious mind unbidden replied “Okay, Universe, today you have the chance to do something REALLY funny.”
In my youth, I had remarkably little exposure to death– most of my elderly relatives had either passed away before my birth or were still alive. I lost a distant uncle in my teens, and my maternal great-grandfather (99.75) when I was twenty. I lost my first true friend when I was 33, while I was on vacation in the French countryside.
That’s not to say I never thought about death, just that I mostly had the unconsidered invincibility of youth and death was an abstract idea that didn’t occupy much space in my brain, except that I wanted to come to some sort of meaningful end. My general thinking was encapsulated by this clip in Starship Troopers:
If everyone’s gotta die eventually, it’s probably best not to worry too much about it.
Even today, mortality remains at something of a distance. Startlingly, I’m 45 and have yet to attend a funeral. Nevertheless, mortality has been on my mind more than ever these last few years, between losing friends and the end of my marriage. In part, these thoughts are not of my choosing, but it’s also something that I’ve chosen to embrace.
The walls of my house are adorned with Latin phrases: Memento Mori, Memento Vivre, Tempus Fugit, and Carpe Diem are joined by Esse Sequitur Operare, Advance the Plot, and Choose. These are all reminders of the same theme: Just as it’s important to recognize that you’re not going to live forever, it’s also important to realize that you’re currently alive. The grimmest outcome is being alive, but not behaving as if you are. If not now, when? You are what you do. Get busy living.
While striving to make meaningful and long-lasting contributions to the world can be fulfilling and better mankind, it’s also important to put such work in context. Amidst a longer (and somewhat grim) post, a talented writer observed “Look around you. Everything you see will cease to exist one day. Get over it. Sure culture eats strategy for breakfast, but entropy eats everything for dinner.“
This line of thought can lead to very dark places, but anyone who’s ever enjoyed a video game should get it — video games blink out of existence when you hit the power button, but that’s not to say that they’re either pointless or have no impact beyond the screen. In life, as in video games, if you’re never having any fun, you’ve missed the point. And in the darkest times, the fact of mortality provides solace — while all good things come to an end, so do all the bad things.
Do all the good you can, by all the means you can, in all the ways you can, in all the places you can, at all the times you can, to all the people you can, for as long as you can. And strive to have as much fun as you can while doing it.
After spending decades where I sometimes acted like I was living out the marshmallow experiment, a line from Philip Su‘s latest book (life advice to his son) best encapsulates my realization: “You don’t “win” life if you never eat the marshmallow.“
Every day is a gift, and no tomorrow has been promised.
Two years ago, I wrote up some best practices for developers who want to take a file’s security origin into account when deciding how to handle it. That post was an update of a post I’d written six years prior explaining how internet clients (e.g. browsers) mark a file to indicate that it originated from the untrusted Internet.
The tl;dr is that many native apps’ security vulnerabilities can be significantly mitigated by blocking the use of files from the Internet.
Consider GrimResource, an attack vector documented this summer, whereby attackers would send the victim a Microsoft Management Console (.msc) file. A user receiving such a file would see a standard security warning prompt:
…and if accepted, the file would open and the attacker could run arbitrary code embedded in the file on the victim’s PC.
From a vulnerability purist’s viewpoint, there’s no vulnerability here — everything works as designed. From a security humanist’s viewpoint, however, this is unnecessarily awful.
A more accurate dialog box might read something like this:
However, the best fix for this specific case is to forbid MSC files from the Internet entirely. Effectively all legitimate MSC files are pre-installed on the user’s local computer, so any such file from the Internet is almost guaranteed to be malicious. When the Management Console team fixed this bug, they chose the safe approach, simply blocking the file outright with no dangerous options:
Adding this check was pretty trivial.
// Files outside of the Local Computer, Trusted, and Intranet Zones
// are considered "Untrusted".
bool SourceIsUntrusted(LPCWSTR pwszFile)
{
    bool fUntrusted = true;
    DWORD dwZone = (DWORD)URLZONE_INVALID;
    CComPtr<IInternetSecurityManager> pSecMan;
    if (SUCCEEDED(CoInternetCreateSecurityManager(nullptr, &pSecMan, 0)))
    {
        if (SUCCEEDED(pSecMan->MapUrlToZone(pwszFile, &dwZone, 0)))
        {
            fUntrusted = (dwZone >= URLZONE_INTERNET);
            // Note: For the tightest lockdown, instead use
            // fUntrusted = (dwZone != URLZONE_LOCAL_MACHINE);
        }
    }
    return fUntrusted;
}
The .msc loader simply checks whether the source file originates from an untrusted location and if so, it errors out.
There are a few things to note about this SourceIsUntrusted function.
First, it fails closed— if the security manager cannot be created (very unlikely), the file is treated as untrusted. If the security manager cannot return a Zone mapping for the path (possible with various maliciously-crafted NTFS path strings), it’s treated as untrusted.
Next, it allows opening files from the Local Intranet and Trusted Sites security zones, allowing network admins some flexibility if they have some unusual practices in their environment (e.g. storing custom .msc files on an internal file share); they can unblock opening such files by using the Windows Site to Zone Assignment policy.
Finally, you may’ve noticed that the final argument to MapUrlToZone is 0. This is the MapURLToZone Flags argument, and the default value of 0 is usually what you want.
There is, however, an important exception.
Preventing NTLM Hash Leaks
In some cases, your app may wish to block opening remote files, e.g. to prevent a server from being able to see that a given file was opened (a so-called Canary Token), or to prevent leakage of the user’s NTLM hash.
Because Windows will attempt to perform NTLM Single Sign-On (SSO) when fetching network file paths (e.g. \\someserver\share\ or file://someserver/share/), it can leak the user’s account information (username) and a hash of their password to the remote site. Crucially, NTLM SSO is not today restricted by Windows Security Zone the way HTTP/HTTPS SSO is:
By default, Windows limits SSO to only the Intranet Zone for HTTP/HTTPS protocols
By default, the MapURLToZone function will connect to the server for a remote filepath to see whether there’s a Zone.Identifier alternate data stream on the target file. This potentially leaks NTLM information as a part of that connection.
The MUTZ_NOSAVEDFILECHECK flag prevents the MapURLToZone function from looking for that Zone.Identifier stream, protecting the hash.
However, using the MUTZ_NOSAVEDFILECHECK flag on a local file will also prevent your code from detecting that the file was downloaded from the Internet. Oops. What’s an app developer to do? The answer is to call it twice:
// Files outside of the Local Computer, Trusted, and Intranet Zones
// are considered "Untrusted". Avoid connecting to the target
// server unless the URL's Zone is trustworthy.
bool SaferSourceIsUntrusted(LPCWSTR pwszFile)
{
    bool fUntrusted = true;
    DWORD dwZone = (DWORD)URLZONE_INVALID;
    CComPtr<IInternetSecurityManager> pSecMan;
    if (SUCCEEDED(CoInternetCreateSecurityManager(nullptr, &pSecMan, 0)))
    {
        if (SUCCEEDED(pSecMan->MapUrlToZone(pwszFile, &dwZone, MUTZ_NOSAVEDFILECHECK)))
        {
            fUntrusted = (dwZone >= URLZONE_INTERNET);
            // For files currently stored in trusted locations,
            // ensure we also look for any MotW storing the
            // original source location.
            if (!fUntrusted)
            {
                fUntrusted = (!SUCCEEDED(pSecMan->MapUrlToZone(pwszFile, &dwZone, MUTZ_REQUIRESAVEDFILECHECK)))
                             || (dwZone >= URLZONE_INTERNET);
            }
        }
    }
    return fUntrusted;
}
Does every application need to use this more elaborate SaferSourceIsUntrusted function?
No.
It’s only worthwhile to prevent MapUrlToZone from touching the file if nothing else has already touched it first.
For example, if the user opened Windows Explorer to \\SomeServer\SomeShare and double-clicked on SomeMsc.msc, they’ve already performed NTLM SSO on the target SMB server, so stopping MapURLToZone from doing so isn’t going to improve anything. Similarly, if something called ShellExecute("\\someserver\someshare\SomeMsc.msc"), the Shell itself is going to check for that file’s existence (performing SSO) long before the Management Console handler application gets a chance to touch the file.
On the other hand, imagine that the Management Console team had created a new Application Protocol that allowed any website to open the management console and pass in a target MSC path. A malicious site could construct a link like so:
In this case, nothing touches the file before the handler application gets the target path. Because the very first thing the Management Console does is check the target file’s Zone, it should then use the enhanced SaferSourceIsUntrusted function to avoid performing an unwanted NTLM SSO and leaking the user’s hash.
Educating Windows About Dangerous File Types
By default, Windows allows most files downloaded from the Internet to be passed to their handler application without warning. However, Windows does show a security prompt when opening a potentially-dangerous (high risk) file type that bears an Internet-Zone Mark-of-the-Web:
If your application introduces support for a potentially dangerous file type, you can inform Windows of the danger level by adding or updating the EditFlags DWORD on the type’s registration to include the FTA_AlwaysUnsafe flag (0x20000).
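As a rough sketch (the ProgID name here is hypothetical, and an installer would more typically set this value via its registry script than in code), marking a type as always-unsafe looks something like this:

// Sketch: mark a hypothetical "ContosoScriptFile" ProgID as high-risk so that
// Attachment Execution Services warns before opening downloaded files of that type.
// FTA_AlwaysUnsafe (0x00020000) comes from shlwapi.h; writing to HKCR generally
// requires elevation (e.g. running from an installer).
#include <windows.h>
#include <shlwapi.h>

bool MarkProgIdAlwaysUnsafe(LPCWSTR pwszProgId)   // e.g. L"ContosoScriptFile"
{
    HKEY hKey;
    if (RegOpenKeyExW(HKEY_CLASSES_ROOT, pwszProgId, 0,
                      KEY_QUERY_VALUE | KEY_SET_VALUE, &hKey) != ERROR_SUCCESS)
        return false;

    // Preserve any existing EditFlags bits; the value may not exist yet.
    DWORD dwFlags = 0;
    DWORD cbData = sizeof(dwFlags);
    RegQueryValueExW(hKey, L"EditFlags", nullptr, nullptr,
                     reinterpret_cast<LPBYTE>(&dwFlags), &cbData);

    dwFlags |= FTA_AlwaysUnsafe;  // 0x00020000: always treat this type as unsafe
    bool fOk = (RegSetValueExW(hKey, L"EditFlags", 0, REG_DWORD,
                               reinterpret_cast<const BYTE*>(&dwFlags),
                               sizeof(dwFlags)) == ERROR_SUCCESS);
    RegCloseKey(hKey);
    return fOk;
}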
Stay safe out there!
-Eric
PS: As someone who reads far more code than I write, I really prefer early-exit functions.
// Files outside of the Local Computer, Trusted, and Intranet Zones
// are considered "Untrusted". Avoid connecting to the target
// server unless the URL's Zone is trustworthy.
bool SaferSourceIsUntrusted(LPCWSTR pwszFile)
{
    DWORD dwZone = (DWORD)URLZONE_INVALID;
    CComPtr<IInternetSecurityManager> pSecMan;
    if (FAILED(CoInternetCreateSecurityManager(nullptr, &pSecMan, 0))) return true;
    if (FAILED(pSecMan->MapUrlToZone(pwszFile, &dwZone, MUTZ_NOSAVEDFILECHECK))) return true;
    if (dwZone >= URLZONE_INTERNET) return true;
    // For files currently stored in trusted locations,
    // ensure we also look for any MotW storing the
    // original source location.
    if (FAILED(pSecMan->MapUrlToZone(pwszFile, &dwZone, MUTZ_REQUIRESAVEDFILECHECK))) return true;
    return (dwZone >= URLZONE_INTERNET);
}
After a frustrating morning with my troublesome P1 Gen 7 laptop, I decided it was time to bite the bullet and stop working off laptops full-time, a habit that I inexplicably fell into at the start of the pandemic.
I first surveyed the high-end desktop options at various vendors, but after the P1 fiasco and a similarly bad experience with HP a decade ago, I wasn’t keen on paying a large premium for a high-end system. Ultimately, after shopping around for a day or two, I dropped $999 on an iBuyPower system at Costco. (Technically, I paid for almost all of it using “Costco cash” from this summer’s cruise).
I’m happy to see that it’s unlikely that I would’ve saved any money by building this system myself. To be honest, it surely would’ve cost more, because I’d’ve probably paid the $200 premium to get an i9-14900 for a slightly faster clock and 4 extra cores.
While none of the components are strictly “top-of-the-line”, they’re close enough to rank in the 98th percentile on PassMark, with a score of 12302.
As is customary with my new machines, shortly after I finished assembling it, I built Chrome.
Taking a bit over 3 hours, it was not fast. Fortunately, I don’t build Chrome often anymore.
Notably, this time was dramatically slower than the 53 minute 2020-era build time on the 3950X. I’m sure Chrome has gotten larger, and I’m not able to disable Defender on a work device, and it’s possible that the Optane in the 3950X really does make a big difference. At some point, I’ll probably retry the 3950X against the current Chromium codebase to see how it fares.
Tomorrow, the 48GB of extra RAM arrives, and I get to see if I even notice any change. :)
On a flight back from Redmond last week, I finally read Linus Torvalds’ 2002 memoir “Just For Fun.” I really enjoyed its picture of Linux (and Torvalds) early in its success, with different chapters varyingly swooning that Linux had 12 or 25 million users. But more than that, I enjoyed some of the “behind the scenes” of a world-famous project that started out small before growing out-of-control.
Twenty years ago, I released the first builds of Fiddler, an app I’d begun as a side project while working on the clipart feature team in Microsoft Office. Originally, the idea was to build a debugger for the client/server communications between the Office client applications and the clipart download website. To put it mildly, the project was much more successful than I would’ve ever hoped (or believed) back then. More than anything else, Fiddler was a learning experience for me — when I started, I knew neither HTTP nor C#, so setting out to build a Web Debugger in .NET was quite an ambitious undertaking.
By the time I’d finished officially working on the project, Fiddler and its related projects amounted to perhaps 40,000 lines of code. But that’s misleading– over the years, I probably wrote at least five times as many, constantly rewriting and refactoring as I learned more (and as the .NET Framework grew more powerful over the twelve years Fiddler was under my control). I learned a huge amount while building Fiddler, mostly by making mistakes and then learning from them.
In today’s post, I’d like to summarize some of the mistakes I made in writing Fiddler (big and small) — the sorts of things I’d tell my earlier self if I ever manage to build that time machine. I’ll also talk about some things that went unexpectedly well, and decisions where even now, I couldn’t say whether a different choice would’ve led to a better outcome. Some of these are technical (only of interest to geeks), and some may be interesting for other audiences.
While personal experience tends to be the most vivid teacher, learning from the mistakes of others tends to be more efficient.
The Mistakes
I made a huge number of mistakes while building Fiddler, but I was fast to correct the majority when I became aware of them. The mistakes that were the hardest (or effectively impossible) to fix still linger today.
Collaboration
The core mistake I made with Fiddler was spending the first years thinking about it endlessly, without doing much talking about it. Had I spent more time talking to more experienced developers, I could have avoided most of the big technical mistakes I made. Similarly, had I talked to my few friends in the startup/business community, I would’ve been much more prepared for Fiddler’s eventual sale.
Still, I know I’m being a bit hard on myself here– twenty years ago, it wasn’t clear that Fiddler was really going to amount to more than “Just another side project”– one of a dozen or so I had cooking at any given time.
Threading
When Fiddler was first built, I knew that it needed to do a lot of work in parallel, so I quickly settled upon a model that used a thread per request. When Fiddler received a new connection from a client, it would spawn a new thread and that thread would read the request headers and body, perform any necessary modifications, lookup the target site’s address in DNS, connect to that address, secure the connection with HTTPS, resend the client’s request, read the response headers and body from the server, perform any necessary modifications, and return that response to the client. The thread would then either wait for another request on the client’s connection, or self-destruct if the connection was closed. As you can see, that’s a huge stream of work, and we want it to happen as fast as possible, so I naively assumed that the simplicity of the thread-per-connection would provide the best performance.
What I didn’t realize for a few years, however, is that virtually all of those operations involve a huge amount of “waiting around” — waiting for the network stack to send the full request from the client, waiting to resolve the hostname in DNS, waiting for the connection to the server, waiting for the server to return content, waiting for the client to read the response from Fiddler, and so much more. Taken as a whole, it’s a huge amount of waiting. I didn’t realize this for years, however, and didn’t look closely at the various much more complicated asynchronous programming paradigms added to the .NET Framework over the years. “Why would I want all of that complexity?” I wondered.
Eventually, I learned. I noticed that projects like the .NET Kestrel web server are built entirely around cutting-edge asynchronous programming concepts. While Fiddler would slow to a crawl with a few dozen simultaneous connections, Kestrel started at tens of thousands per second and only got faster from there. When I started looking closer at Fiddler’s thread performance, I found a huge regression I’d introduced without noticing for years: Early in Fiddler’s development, I’d switched from creating an entirely new thread to using the .NET thread pool. In the abstract, this is better, but I never noticed that when the thread pool was out of threads, the Framework’s code deliberately would wait 500 milliseconds before adding a new thread. This meant that after Fiddler had around 30 connections active, every subsequent new connection was deliberately delayed. Ouch!
Unfortunately, Fiddler’s extensibility model was such that it wouldn’t’ve been possible to completely rewrite it to use .NET’s asynchronous patterns, although from 2014 to 2016, I gradually coaxed the implementation toward those patterns where possible. In the course of doing so, I learned a huge amount about the magic of async/await and how it works under the covers.
Simple Fields
I started building Fiddler (before I knew almost any C# at all) by defining the basic objects: HTTP headers, a Request object, a Response object, a Proxy object, a Connection object, and so forth. In many cases, I exposed data as public fields on each object rather than wrapping fields in C# Properties, reasoning that for most things, Properties represented unnecessary indirection and overhead, and remember, I wanted Fiddler to be fast.
It was quite a few years before I realized the error of my ways– while Properties do, in fact, introduce overhead, it’s the cheap kind of overhead. Many of the optimizations (performance and developer experience) I’d’ve liked to have made to Fiddler in later years were precluded by the need to preserve compatibility– converting fields to properties is a breaking change.
By far, the biggest mistake was exposing the HTTP body data as a simple byte[] field on the request and response objects. While plain byte arrays are conceptually easiest to understand, easy for me to implement, and convenient for extension and script authors, it was a disastrous choice. Many HTTP bodies are tiny, but a large percentage are over the 85kb threshold of .NET’s “Large Object Heap,” resulting in expensive garbage collection. The worst part is that a plain byte array requires contiguous memory, and this is disastrous in the constrained memory space of a 32bit process– after Fiddler ran for a while, address space fragmentation meant that Fiddler would be unable to store even modestly sized responses, triggering a flood of the dreaded “Out of memory” errors, requiring restart of the tool.
Fortunately, 64-bit processors eventually arrived to mitigate the worst of the pain. The address space for a 64bit process is so large that fragmentation no longer completely broke Fiddler, but even in 64bit, a segmented buffer would’ve improved performance and allowed for individual bodies over 4 gigabytes in size.
By the end, I fixed what I could without breaking compatibility. I introduced a bunch of accessor methods that would aim to “do the right thing” and tried to guide extension authors toward using those rather than accessing data fields directly.
SSL/TLS Stack
The original version of Fiddler supported only HTTP traffic. In 2003, this wasn’t a deal-breaker, because relatively few pages outside of online shopping checkout used HTTPS, and even there, secure transport was only sporadically supported. (To mess with HTTPS checkout pages, I had built an IE extension called TamperIE).
Helping matters, I soon discovered a way to get WinINET (the network stack underneath Office and Internet Explorer) to leak HTTPS headers to Fiddler. RPASpy was a DLL that exercised a long-defunct extensibility hook in the network stack (intended for CompuServe’s Remote Passphrase Authentication) to send the headers to Fiddler. It was a huge hack, and read-only access to WinINET’s headers alone wouldn’t satisfy every use case, but it was good enough for a while.
I knew that the “right” way to support HTTPS was to implement a TLS monster-in-the-middle, but directly calling into Windows’ SChannel or the open-source OpenSSL libraries seemed like a task well beyond my ability.
I didn’t end up adding proper HTTPS support until after .NET 2 was released. One day, I noticed that the Office Security team’s (unintentional) Fiddler-competitor “MiddleMan” tool now supported HTTPS. When I saw that, I quickly confirmed my hypothesis that .NET had wrapped SChannel (via the SslStream class) and I quickly added equivalent HTTPS-interception support to Fiddler. (The existence of MiddleMan was itself something of a mistake. In 2004 or so, the Office Security team sent a broad status update suggesting they were going to build an HTTP debugger. I reached out and said I’d already built one, but that message was misinterpreted and ultimately ignored. However, Fiddler might not have ever escaped Microsoft had it become an “official” project, so perhaps things worked out for the best.)
In hindsight, I probably should’ve buckled down and wrapped OpenSSL directly, as this would’ve allowed Fiddler to support HTTPS earlier, and much more importantly, would’ve enabled additional scenarios. For example, a direct wrapper would’ve allowed Fiddler users to control low-level protocol details (like which ciphers are allowed) on a per-connection basis. Most importantly of all, years later the designers of the HTTP2 protocol decided to negotiate the use of that protocol via a TLS extension (ALPN) and .NET stubbornly refused to expose the capability of using that extension before I left Telerik and stopped working on Fiddler at the start of 2016. Fiddler Classic still doesn’t support HTTP2 to this day. Bummer!
Naming Things
When building a new app or platform, naming things is a crucial task; you want to move fast, but changing names later can be very disruptive and take years.
My app was named Fiddler because early on I had the vision that the proxy would have one big benefit over passive network monitors — the ability to rewrite network traffic, or “fiddle with it.” A secondary consideration was that, as the PM for the clipart team, I knew that there was plenty of free “fiddle” art I could use for the tool’s icon and splash screen. It was a few years before I learned that “fiddler” has an extremely unsavory association in some important English-speaking locales.
Fiddler’s objects were mostly easy to name, with obvious choices (“Request”, “Response”, “headers”, “body”, “proxy”, “application”, “preferences”) and a handful of intuitive monikers like “Pipe” (a wrapper for a network connection) and “Janitor” (an object that would purge expired data).
Perhaps the most important name, however, was the name of the key object that represented a request, its response, timing data, control flags, and other bits of information. I quickly settled upon Session, which was something of an unfortunate choice as there are so many different concepts of a Session in web networking, both on the browser side and the server side. Two decades later, I still sometimes find myself brainstorming better choices. To date, I think the leading contenders are Exchange (which would’ve been the runaway winner if not for a popular Microsoft server product of that name) or Pair (which suffers from the downside of being grossly imprecise, but the major benefits of relative uniqueness and being very easy to type over and over and over).
The Successes
With the biggest mistakes out of the way, let’s look at what went well.
C#
At the time I started writing Fiddler, I was at a “Hello World” level of skill in .NET, with nearly all of my projects written in Borland Delphi (Object Pascal). While I did mock up the very first Fiddler UI in Delphi, I committed to C# early on and learned the language as I built my app.
The C# team rapidly improved the language over the subsequent years (I like to credit my college roommate and long-time C# team member Anson Horton) and for the first few years every new release added awesome improvements (especially generics). Eventually, the C# team’s latest hotness was too much to keep up with (Linq, various async paradigms) and my desire to stay compatible with older frameworks meant that I couldn’t adopt everything, but C# continued to offer both compatibility and productivity improvements in each release. (In hindsight, I probably should’ve looked much closer at Linq, as it’s a very natural fit for many of the tasks that Fiddler scripters and extenders seek to accomplish).
If I were to start Fiddler over today, I’d probably look at building it in Go, which is a language extremely well-suited to building the sorts of high-scale networking code that Fiddler needs. One of my final projects at Google was building TrickUri, a simple cross-platform tool that offers some Fiddler-like functionality in under 400 lines of code.
Extensibility
Remember, Fiddler was designed as a clipart debugger. It only ended up with millions of users worldwide because it proved much more flexible than that, with both a scripting engine and an extensibility model that allowed for both rapid feature iteration and the ability for 3rd parties to extend the tool to support their own scenarios.
Ironically, Fiddler’s extensibility mostly came about due to my laziness! I knew that Fiddler needed a way for users to express filters for which traffic they wanted to see in the tool, and I set out to build a filter UI that looked like the filtering UI in Microsoft’s internal bug tracking tools (initially Raid, then Product Studio).
I quickly realized that the code to do this was going to be messy and hard to maintain. Doodling filter UIs on a notepad in the Visual Studio building after hours while Anson was playing ping pong with one of our friends, I thought “If only I could get my users to just write their own filtering code.” And then the epiphany: “Wait, didn’t I just see some MSDN Magazine article about adding a script engine to your .NET app?” Fiddler went from a niche clipart debugging tool to the only easily extensible proxy tool within a matter of days.
Unfortunately, the only language available was the relatively esoteric JScript.NET which was close enough to the JavaScript used by most of my target users to seem familiar, but different enough to be occasionally infuriating. Still, it worked amazingly well. My only big mistake here was failing to hunt around for JScript.NET resources — eight years later, I discovered that Justin Rogers (a future teammate on the IE browser) had written a whole book about JScript.NET, covering lots of material I had to laboriously work out on my own, and including some tidbits I hadn’t even learned in all those years.
Later, I expanded extensibility support from JScript.NET to any .NET language, letting C# developers build extensions in their native tongue and empowering folks coming from more esoteric languages (IronPython).
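To give a sense of how little code this kind of extensibility requires, here is a sketch (not Fiddler's actual loader) that compiles a user-supplied C# snippet at runtime with CodeDOM and invokes a well-known entry point. The UserScript class and its OnBeforeRequest(string) signature are invented for this example, though the method name nods to FiddlerScript's real handler of that name.

```csharp
using System;
using System.CodeDom.Compiler;
using System.Reflection;
using Microsoft.CSharp;

static class ScriptHostSketch
{
    // Expects `userCode` to define something like:
    //   public static class UserScript {
    //       public static void OnBeforeRequest(string url) { System.Console.WriteLine(url); }
    //   }
    public static void RunUserScript(string userCode, string url)
    {
        using (var provider = new CSharpCodeProvider())
        {
            var options = new CompilerParameters { GenerateInMemory = true };
            options.ReferencedAssemblies.Add("System.dll");

            // Compile the user's code into an in-memory assembly.
            CompilerResults results = provider.CompileAssemblyFromSource(options, userCode);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException(results.Errors[0].ToString());

            // Find and invoke the user's handler via reflection.
            Type scriptType = results.CompiledAssembly.GetType("UserScript");
            MethodInfo handler = scriptType.GetMethod("OnBeforeRequest");
            handler.Invoke(null, new object[] { url });
        }
    }
}
```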
Fiddler would still be useful without rich extensibility (e.g. Telerik’s newer “Fiddler Everywhere” product isn’t extensible) but there’s no question that extensibility is a prime reason for Fiddler Classic’s popularity and longevity. While I haven’t had access to Fiddler’s source code in 9 years, even today I’m still able to broaden the capabilities of the tool by building upon its extensibility model (e.g. NetLogImport, NativeMessagingMeddler).
Preferences
I’ve long loved Firefox’s about:config mechanism– it’s an unobtrusive way to expose configuration knobs and switches that’s easy to use, easy to document, and easy for browser features to consume. While simple “Strings as configuration” has some clear downsides (versus, say, typed enumerations), I think the benefits (development ease, flexibility, documentability) make for a good tradeoff.
Adding a Preferences system to Fiddler in 2010 won the same benefits, and doing so correctly (in a heavily multithreaded and extensible tool) without introducing deadlocks or performance problems resulted in one of the few areas of Fiddler’s code of which I’m truly proud. An older (and probably buggy) version of the original code can be perused here. Type about:config in Fiddler’s QuickExec box to see the preferences.
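The linked code is the real thing; the basic shape, though, is easy to sketch. Below is a simplified illustration (names invented) of the pattern: a string-keyed store guarded by a reader/writer lock, with change notifications raised outside the lock so that a subscriber which touches preferences from its handler can't deadlock the store.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Simplified sketch of an about:config-style preference store: string keys, string values,
// thread-safe reads and writes, and change notifications raised outside the lock.
public class PreferenceStore
{
    private readonly Dictionary<string, string> _prefs =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public event Action<string, string> PreferenceChanged;

    public string GetPref(string name, string defaultValue)
    {
        _lock.EnterReadLock();
        try
        {
            return _prefs.TryGetValue(name, out string value) ? value : defaultValue;
        }
        finally { _lock.ExitReadLock(); }
    }

    public void SetPref(string name, string value)
    {
        _lock.EnterWriteLock();
        try { _prefs[name] = value; }
        finally { _lock.ExitWriteLock(); }

        // Notify after releasing the lock; handlers may freely call back into the store.
        PreferenceChanged?.Invoke(name, value);
    }
}
```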
The Book
Circa 2011, I began exploring the sale of Fiddler to an undisclosed company headquartered in the United Kingdom. My wife and I flew to visit the team, and I watched with mounting alarm as two of their product managers used the tool — it seemed like they'd found the most complicated, least efficient means of doing literally everything. They'd been studying Fiddler for a few months in researching the acquisition, and I was honestly shocked that they were even making an offer after seeing how clunky they made the tool look.
I immediately realized the problem — While Fiddler offered many powerful features and convenient shortcuts, there was really no way to learn about how to use it unless you’d seen me at a conference or watched one of the handful of short videos I’d put up on YouTube.
I resolved to write a book about the tool, starting out unimaginatively describing everything from the top-left to the bottom right. I got into a great routine– my wife was training for a marathon, so in the mornings she would go out to run and I would go to the coffee shop to write. Progress on the book was steady, but not as fast as I’d initially predicted. The problem was that I’d sit down and write three pages on how to accomplish some task, but writing prose served as a forcing function for me to really think deeply about the task. That, in turn, immediately caused me to think about how I could make the task easier (so I could reduce the pain of writing), and I would frequently go back to my dev box and code for half an hour to make the task require just the click of a button. And then I’d have to go replace my painstakingly-authored three pages with a single sentence: “To <accomplish goal>, push <that button>.” Fiddler improved rapidly, and I began describing this virtuous loop as book-driven development.
I did not expect to make any meaningful amount of money on the book. I was pleasantly surprised to be wrong, eventually earning on the order of $250 per hour I spent on the first and second editions. The Second Edition is still for sale today, and still >95% accurate despite being almost nine years old.
In terms of mistakes — I probably should’ve started writing the book earlier, and I definitely should’ve created video training content for Fiddler. I had done so little research into the topic that it wasn’t until 2013 that I learned that Pluralsight video training publishers got paid for their content. I suspect I left quite a bit of money on the table. Oops. On the plus side, I later became a friend and collaborator to some great folks who filled that gap for me.
The Unknowns
There are some decisions that are hard to reason about– they could’ve made Fiddler dramatically more successful, or prevented it from achieving the success it did.
Win32 UI
When I first started working on Fiddler, the only real UI choice was WinForms, a .NET wrapper around the traditional Win32 UI. However, the .NET team soon embarked on a variety of fancier UI frameworks and strongly evangelized them to developers. To avoid breaking compatibility with script and extensions, I never adopted any of those frameworks, and many of them eventually faded away.
Limiting Fiddler to WinForms was something of a double-edged sword — I would've been sad if I'd picked up a replacement that was later abandoned by Microsoft, but changing UI frameworks definitely would've been a forcing function to better decouple Fiddler's engine from its UI, a process that I instead undertook much later when extracting the FiddlerCore engine. That decoupling, while belated, resulted in some significant improvements in Fiddler's architecture that I wish had come earlier.
Perhaps most importantly, however, changing UI frameworks might have resulted in abstractions that would’ve made Fiddler more portable to other platforms.
Cross-Platform
When I first started work on Fiddler, Windows was far and away the most popular developer platform for the web and for webservice-consuming client applications. Within a few years, however, the Mac rocketed to become the platform of choice for web developers, particularly in the US. The fact that Fiddler wasn't available for Mac was a huge downside, and moving to support that platform represented a huge opportunity. Half-measures (e.g. pointing a Mac's proxy settings at a Fiddler PC or VM) were just that — useful, but kludgy and annoying.
Fortunately for some, this gap meant that I didn’t completely destroy the market for Mac Web Debugger apps (“Free and good” is extremely hard to compete against), and it later gave potential acquirers an idea of how they might monetize their investment (“We’ll promise to keep Fiddler for Windows free forever, but sell the Mac version“).
When I joined Telerik, one of my first tasks was to see about getting Fiddler stood up on other platforms. Using the Mono framework, I quickly had a reasonably functional version of Fiddler running on Linux. The same code technically also ran on Mac, but the quality of the UI framework on Mac was so poor that the tool couldn’t do much more than boot before hanging or crashing. Still, getting Fiddler stood up on Linux was something of a win– it meant that Mac developers could run Fiddler inside a VirtualBox VM running Linux at no-cost.
Ultimately, supporting cross-platform would have entailed some crucial tradeoffs that are hard to reason about: Would a debugger that was worse (slower, less native) on every platform be more successful than a debugger that was great on only one platform? I still can’t answer that question.
Open-Source
One decision I’d made early on was to build Fiddler without using any open-source software (OSS). Microsoft had a complicated relationship with OSS in the early 2000s, and even looking at OSS code without using it risked concerns from the legal department.
My stance weakened in later years, and I did eventually use a variety of small OSS components (e.g. the Hex editor control, and a ZIP compression library) if they had a liberal (MIT) license. In some cases, I contributed back fixes I made to the OSS code I consumed, in others, I rewrote almost the whole thing.
Similarly, I didn’t make Fiddler itself OSS — as a practical matter, Github didn’t yet exist, and I had no idea how I might do open-source beyond just putting a ZIP file full of code up on my website. Combined with worries about Microsoft’s legal department, and skepticism that anyone else would need to contribute to Fiddler’s code (between the rich extensibility model, and how quickly I responded to users’ feature requests), I never seriously considered opening up Fiddler’s code to external contributions.
Today, I’m very sad that Fiddler isn’t open-source. Telerik has allowed Fiddler Classic to stagnate (with years between minor improvements), and there are many bugs that I (and others) could have easily fixed, and features we could have added… but we can’t because the code is closed-source.
As Fiddler’s community grew, several superstar users (Kevin Jones, Jerome Segura, and others) began doing amazing things with Fiddler’s extensibility model, and there’s no question that the tool would’ve been even more powerful if those and other collaborators could change the code directly.
Fiddler and My Microsoft Career
Not long after I started working on Fiddler, I started describing it as a “Microsoft PowerToy” with the notion that it would join the pantheon of (venerated but then defunct) useful applets built by Microsoft employees but not an “official” product in any way. At some point over the next year or two, I removed all mention of Microsoft in the UI (although to this day it remains in the registry path used by Fiddler) after deciding that such mention was more likely to cause trouble than prevent it. Still, I was always uneasy, knowing that if something bad happened (a serious security incident, a patent claim, etc), Microsoft seemed likely to simply fire me with no questions asked.
When I left the Office team to join the Internet Explorer team, I showed off the app to a few of my interviewers. My new manager Rob opined that they planned to keep me too busy for side-projects, so I should just hand Fiddler off to someone else in the company. "That's one possibility," I thought to myself, guessing (correctly) that what would actually happen was the same thing that had happened in Office– every tester who saw Fiddler immediately added it to their toolbox and began using it to debug anything they could. Nevertheless, I embarked on a renewed effort to get someone in Visual Studio interested in the tool; a long mail thread went up to a VP who mused "We should look at more web debugging scenarios in the next version", and that was the last I ever heard from them.
However, some other folks in Visual Studio did take an interest, and a satellite team in North Carolina reached out to build some VS integration features for Fiddler, including a “Visual Studio .WebTest Generator” plugin that would use a Fiddler capture to build a test script from captured web traffic. I didn’t hear from them for a few months after our initial kickoff meeting, so I reached out to ask how things were going. Their Engineering Manager reported that they were going to start soon, but their project had been delayed because their prior project had been to build better multi-threaded debugging features for VS, and when they tried using those features on Fiddler itself, Fiddler’s massive and varied use of threads revealed tons of bugs in their tooling. So they had to go fix that first before getting back to extending Fiddler. Oops.
In 2007, I was encouraged to submit Fiddler to the prestigious company-wide Engineering Excellence awards, and won one of the prizes. Bill Gates handed me a big glass spike, and my team (me) got $5000 in morale money to spend. I ended up taking a Mediterranean cruise with my girlfriend (fiancée, by the time we got off the boat).
Around 2010, the IE team visited our Online Services teams in California to ask what they wanted in the next version of the IE developer tools. The feedback was overwhelming: “Just bake in Fiddler.” I was proud, but frustrated, knowing that doing so wouldn’t really be practical for a variety of technical and non-technical reasons.
Other than broad recognition, frequent kudos, the EE Award, and the satisfaction of knowing that I was helping a lot of my colleagues be more effective, Fiddler seemed to have almost no formal impact on my Microsoft career. More than once it was strongly implied that some saw Fiddler as a distraction from my "real" job, and as proof that I could've been working harder on my official duties. It was an occasionally frustrating feeling, but mostly I ignored it.
Still, one of the great unknowns with Fiddler was what might’ve happened had I put more effort into trying to get Microsoft to recognize its impact and get it better integrated into some other product or initiative. Grass-roots usage inside the company neared the top of the charts, and folks at all levels of the organization were using it. (One of my proud moments was when Ray Ozzie mentioned that he was a user).
With more effort at collecting data and evangelizing the possibilities, could I have managed to move the organization to officially embrace Fiddler? Could I have become a successful architect of an important Microsoft platform? Or was my private theory at the time correct: Microsoft would never be interested in anything unless one day it would bring in a billion dollars?
Monetization
As Fiddler usage grew from a handful of friends and coworkers to peak at around 14000 daily downloads, a tantalizing possibility arose: could I turn Fiddler into a business and eventually a full-time job? I knew that the fact that the tool was freeware greatly boosted its popularity, and I knew that trying to charge for it would entail all sorts of hassles, including dealing with Microsoft’s legal department, a dreaded exercise that I had (to my increasing surprise) managed to avoid year after year. The fact that Fiddler was free and was a Windows-exclusive used by tens-of-thousands of Microsoft’s own employees always seemed like a useful hedge against any interference from the suits.
Still though, when the first acquisition offer for Fiddler came in, I sought advice from Lutz Roeder, who had a few years prior sold his .NET Reflector tool to a 3rd-party developer tools company. When I described my daily download metrics, he suggested that I ought to make Fiddler my full-time job. Around the same time, my own manager called me over to an office window to observe: “You’re an idiot. See that red Ferrari down there? As soon as you thought of Fiddler, you should’ve quit Microsoft and made us buy your company later.” I laughed and observed “Naw, Fiddler’s cool, but I’m no Mark Russinovich. That would’ve never happened.” I went on to explain how I’d previously tried to get the Visual Studio team to take more of an interest in Fiddler (for free), to relatively little effect.
Still, I pondered various moneymaking schemes: I wondered whether I should build a “Game Genie for the Web” tool that would allow non-experts to tweak (cheat) on web-based games, but ultimately concluded I’d probably have to spend all of the profits I’d earned on lawyers after game companies sued me.
I considered building a standalone tool called DIABLO ("Demo In A Box for Limited Offlining") that would allow anyone to easily make their webservice work offline for demo purposes. This was a powerful use for Fiddler, because it meant you could demo your site or service without having to worry about your network connection. (For a few consecutive instances of the Annual Company Meeting at Safeco Field in Seattle, I noticed the Fiddler icon down in the system tray as various teams demo'd their latest wares.)
What I never did do, however, was ask people what sorts of features they would be willing to pay for.
With the benefit of hindsight, there are two clear directions I could’ve taken that would have supported a business: 1) automated testing of web applications (in 2014/2015, I eventually threw together some simple and powerful features in that vein), and 2) team-based collaboration features. The latter ended up being the direction taken by Postman, in 2012 a primitive tool that grew to a business with hundreds of employees, netting hundreds of millions in venture capital financing and a valuation in the billions of dollars a decade later. Oops.
Selling Fiddler
When the first offer came in from a developer tools company seeking to acquire Fiddler, it was very timely– Fiddler felt “pretty much done”, I’d recently been promoted to a Team Lead role, gotten married, and my wife and I were starting to think about having kids. Letting a famous devtools company take over Fiddler’s development seemed like a great outcome for both me and the overall Fiddler community. I knew that some users would howl at having to start paying for Fiddler, but reasoned that many would be happy to do so, and anyone who mistook me for a “Software should be free” zealot clearly wasn’t paying attention to my choice of employer.
Additionally, I’d always felt a little guilty after a friend came back from a conference trip to report “Karl von Randow (developer of the Charles debugger) isn’t exactly your greatest fan.” And it was an understandable feeling — by giving Fiddler away (while cashing my Microsoft paychecks), I was indeed (if unintentionally) hurting Karl’s business by implicitly socializing the notion that “$0 is a reasonable price for a great web debugger.“
I had anticipated a long and painful process to get Microsoft to agree to let me sell Fiddler, but my manager was both a genius and well-connected (formerly one of Bill Gates’ technical assistants) and it was a remarkably painless process to get the required sign-offs. Making matters easier were a few happy accidents: there was no Microsoft source code or secrets in Fiddler, there was no Fiddler source commingled into any of Microsoft’s products, and I had (coincidentally) never used Microsoft-owned hardware when developing Fiddler. Finally, because the plan was that the acquirer would buy Fiddler but leave me behind on my current team at Microsoft, my management chain had no objections.
Still, as I noted above, watching the would-be acquirer use my code in the clumsiest possible fashion set me on the path of writing the Fiddler book, and that work, in turn, reinvigorated my passion for the app. Instead of feeling like Fiddler was “pretty much done,” I started to see deficiencies everywhere. The idea of just handing the code off to another company felt… uncomfortable.
The notion of abandoning the sale was unpleasant — I didn’t want to disappoint the acquirer (whose friendly team I really liked), and I knew that my upcoming time pressures (between Microsoft and parenthood) would truly be a limiter in what I could achieve in the coming years. And, to be frank, walking away from the offer (~18 months of my salary) made me gulp a little.
So, it was even more timely when Chris from Telerik finally got ahold of me on a rainy winter day in 2012; I'd just left the coffee shop in Redmond Town Center where I'd been working on the Fiddler book. Chris had been trying to get me to agree to sign a more formal license agreement for FiddlerCore so Telerik could use it in one of their products. I'd previously blown them off, but this time I noted "Well, I'm actually about to sell Fiddler, and I bet the new owners will have lawyers that can help you." Chris immediately replied "Wait, wait, wait, if Fiddler's for sale, we're buying it." In very short order, we had another call where he offered not only to buy Fiddler (at over double my existing offer), but also to hire me at my current Microsoft salary to keep working on it. By moving to work at Telerik's office in Austin, Texas, our family's cost-of-living would drop significantly, and my wife and I could easily live on a single income. Win-Win-Win.
Shortly after I’d signed Telerik’s offer, I had lunch one snowy day with a friend who worked at a networking startup. He remarked that he was surprised that I’d sold to Telerik, casually dropping: “We’d’ve given you <3x Telerik’s offer> and you wouldn’t have even had to move.” I dismissed this as empty bluster until a few months later when their company was acquired for two billion dollars. Oops.
Still, I was happy enough with the Telerik offer, and things moved fast: by October 2012 my wife and I had both resigned from Microsoft, packed up the cat, sold my car, and driven her car from Seattle to our new home in Austin. Nine months later, my first son was born.
Working for Telerik could fill a short book in itself, but that’s one I’m not going to write tonight.
Thanks for joining me for this trip down memory lane. I hope you can learn from some of my mistakes!
-Eric
PS: I’ve previously shared some of this history and my learnings in a conference talk. The “Lucking In” audio and slides are on Github.
I’ve written about File Downloads quite a bit, and early this year, I delivered a full tech talk on the topic.
From my very first days online (a local BBS via 14.4 modem, circa 1994), I spent decades longing for faster downloads. Nowadays, I have gigabit fiber at the house, so it’s basically never my connection that is the weak link. However, when I travel, I sometimes find myself on slow WiFi connections (my current hotel is 3mbps) and am reminded of the performance pains of yesteryear.
Over the years, various schemes have been concocted for faster downloads; the most effective concern sending fewer bytes (typically by using compression). Newer protocols (e.g. H3’s QUIC) are able to offer some improvements even without changing the number of bytes transferred.
For the last twenty years or so, one performance idea has hung around at the periphery: “What if we used multiple connections to perform a single download?” You can see this Parallel downloading option today inside Chrome and Edge’s about:flags page:
This is an “Enthusiast” feature, but does it really improve download performance?
Mechanics: How Parallel Downloads Work
Parallel downloads work by using the Range Request mechanism in HTTP that allows a client to request a particular range (portion) of a file. This mechanism is most commonly used to resume incomplete downloads.
For example, say you went offline after downloading the first 3.6mb of a file. When you get back online, the browser can request that the server send only the remainder of the file by sending a Range header that specifies that it wants bytes #3,600,000 and later, like so:
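A minimal sketch of such a resume request using .NET's HttpClient follows; the URL, the byte offset, and the ETag value are invented for illustration, and the comments show the headers as they'd appear on the wire.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ResumeDownloadSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com/bigfile.zip");

        // On the wire: Range: bytes=3600000-
        // (i.e. "send me byte #3,600,000 through the end of the file")
        request.Headers.Range = new RangeHeaderValue(3_600_000, null);

        // On the wire: If-Range: "etag-from-the-original-response"
        // (i.e. "...but only if the file hasn't changed since my first attempt")
        request.Headers.IfRange = new RangeConditionHeaderValue(
            new EntityTagHeaderValue("\"etag-from-the-original-response\""));

        using var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);

        // 206 Partial Content: the server honored the range.
        // 200 OK:              the server resent the entire file.
        // 416:                 the requested range was not satisfiable.
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}
```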
The If-Range request header supplies the ETag received on the first part of the response; if the server can send the requested range of that original response, it will return a HTTP/206 Partial Content response. If the server cannot return the requested portion of the original resource, it will either return a HTTP/200 with the entire file, or a HTTP/416 Requested Range Not Satisfiable error.
The Range Request mechanism is also used for retrieving specific portions of media in other scenarios where that makes sense: imagine the user skips ten minutes ahead in a long web video, or jumps down 50 pages in a PDF file. There’s no point in wasting data transfer or time downloading the part that the user skipped over.
The Parallel Downloads feature uses the same mechanism. Instead of downloading just one range at a time, the browser establishes, say, 4 simultaneous connections to the server and downloads 25% of the file on each connection.
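A rough sketch of that idea using HttpClient follows, assuming a well-behaved server that reports a Content-Length and honors Range requests; the names and the error handling are simplified for illustration.

```csharp
using System;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class ParallelDownloadSketch
{
    static readonly HttpClient Client = new HttpClient();

    // Download `url` using `parts` simultaneous ranged requests, then stitch the chunks together.
    public static async Task DownloadAsync(string url, string destination, int parts = 4)
    {
        // Learn the total size (and ideally confirm Accept-Ranges: bytes) with a HEAD request.
        using var head = new HttpRequestMessage(HttpMethod.Head, url);
        using var headResponse = await Client.SendAsync(head);
        long length = headResponse.Content.Headers.ContentLength
            ?? throw new InvalidOperationException("Server did not report a Content-Length.");

        long chunkSize = length / parts;

        // Request each range on its own connection/task.
        Task<byte[]>[] downloads = Enumerable.Range(0, parts).Select(async i =>
        {
            long from = i * chunkSize;
            long to = (i == parts - 1) ? length - 1 : from + chunkSize - 1;

            using var request = new HttpRequestMessage(HttpMethod.Get, url);
            request.Headers.Range = new RangeHeaderValue(from, to);

            using var response = await Client.SendAsync(request);
            if (response.StatusCode != HttpStatusCode.PartialContent)
                throw new InvalidOperationException("Server ignored the Range request; fall back to a single download.");

            return await response.Content.ReadAsByteArrayAsync();
        }).ToArray();

        byte[][] chunks = await Task.WhenAll(downloads);

        // Reassemble the file in order.
        using var output = File.Create(destination);
        foreach (byte[] bytes in chunks)
            await output.WriteAsync(bytes, 0, bytes.Length);
    }
}
```

Note how fragile this is in practice: if the server omits the Content-Length or answers any chunk with a 200 instead of a 206, the only safe move is to fall back to an ordinary single-connection download.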
Limitations
Notably, not every server supports Range requests. A server is meant to indicate its support via the Accept-Ranges response header, but the client only gets this header after it gets its first response.
Even if a server does support Range requests, not every response supports ranges. For example, the server may not support returning a specific range for a dynamically-generated download because the bytes of that download are no longer available after the first connection. A client that wishes to support parallel downloads must carefully probe for support and correctly handle all of the possible error cases without breaking the download process.
Finally, standards dictate that browsers should limit themselves to 6 connections per HTTP/1.1 server, so devoting several of those connections to a single download can interfere with the browser's other requests to the same server.
Are Parallel Downloads Faster?
From a theoretical point-of-view, parallel downloads should never be faster. The performance overhead of establishing additional connections:
extra round trips for TCP/IP connection establishment
TCP/IP's congestion-controlling "slow start" behavior, which throttles newly-established connections
… means that performing a download across multiple parallel connections should never be faster than using a single connection, and should usually be slower.
Now, having said that, there are two cases where performing downloads in parallel can be faster:
First, there exist file download servers that deliberately throttle download speeds for “free” customers. For example, many file download services (often used for downloading pirated content, etc) throttle download speeds to, say, 100kbps to entice users to upgrade to a “pay” option. By establishing multiple parallel connections, a user may be able to circumvent the throttle and download at, say, 400kbps total across 4 parallel connections. This trick only works if the service is oblivious to this hack; smarter servers will either not support Range requests, reject additional connections, or divide the cap across all of a user’s connections.
Next, HTTP/1 and HTTP2 are based on TCP/IP, and TCP/IP has a "head-of-line" blocking problem on spotty network connections: a single dropped packet stalls the entire stream until it is retransmitted. If a download is conducted across multiple parallel connections, the impact is muted, because each TCP/IP connection stalls independently; a parallel download can still make progress on the other connections' chunks until the stalled connection recovers. HTTP3 avoids this problem because it is based on QUIC, which runs atop UDP and recovers losses per-stream, so a dropped packet delays only the stream it belongs to rather than the whole connection.
In some locales, support for Parallel Downloading may be an important competitive feature. In India and China specifically, connection quality is somewhat lower, making it more likely that users will encounter head-of-line blocking in TCP/IP. However, I believe that the main reason Chromium offers this feature is that niche browsers in those markets trumpet their support for Parallel Downloads in their marketing materials, so natively offering the feature in Chromium is important from a competitive marketing point-of-view.