After last month’s races, I decided that I could reduce some of my stress around my first half marathon (Austin 3M at the end of January) by running a slow marathon ahead of time — a Race 0 if you will. So, I signed up for the Decker Challenge, with a goal of finishing around 2:10, a comfortable pace of around 10 minutes per mile. While the pace is slower than my January goal (an even two hours), I figured it would probably be almost as hard because the Decker course around the Travis County Expo Center has more hills.
On Saturday, I got my gear ready: charged my phone and headphones, packed my Gu gels (including some new, bigger ones with a shot of caffeine), and got my water bottle ready. I put my number bib/timing chip next to my treadmill to motivate me during the week, and tapered training the few days before the race. Saturday night, I had what seemed like a reasonable dinner (salmon, asparagus, couscous), and got to bed reasonably early. I set my alarm for 6:30, but woke up on my own at 6:20am. I’d had almost exactly seven hours of sleep, and plenty of time before the 8am race. I got up, had coffee, went to the bathroom (with little effect), ate a banana, showered, and got dressed in my trusty shorts, tank top, and new (taller) socks.
At 7:20, I was ready to go and got in the car. I realized with some alarm that the race was further away than I’d realized (~22 minutes rather than 15) but figured that my morning was still basically on track. As I drove, I realized that I hadn’t yet figured out whether to put my bib on my shirt or my shorts. Glancing over to my pile of stuff in the passenger seat, I was horrified to realize that I’d brought everything except the one thing I truly needed.
By 7:30, I was back at my house, grabbed the forgotten bib, and decided I should probably have one more try at the bathroom as my belly was grumbling a little. No luck, and I was back on the road by 7:38. Not great, but I could still make the race. Fortunately, Texas roads have high speed limits, but they aren’t designed for driving while attaching paper to one’s pants with metal “safety” pins and I soon gave up.
Luckily, I reached the Expo Center just before 8 and took a left to drive North past the first gate, closed off by a police car and a line of cones. I drove past a second gate with a police car behind the line of cones and kept driving. Surely, the entrance will be here soon, right? After another mile or so, I realized that I must have missed it when I took that first left northward, so I drove past the two coned-off entrances and went another mile south before realizing that there was no way the entrance was this far out. I pulled off the road to figure out whether there was perhaps a back entrance and realized that no, that wasn’t possible either. Finally, I turned north again and drove slowly past the first gate before watching a car drive through the cones at the second gate without the policemen complaining. Ugh. Apparently, crossing the line of cones was expected the whole time… something I’d’ve figured out if I spent more time perusing the map, or if I’d gotten there early enough to watch everyone else doing it.
More than a bit embarrassed, I walked up to the start line around 8:15 (no one was around) and realized that I wouldn’t be able to run with my target pace group (a key goal for this practice run) and might not even be able to follow the course (looking at the map later, I decided this was an unfounded concern).
I ruefully drove back home to run a half on the treadmill instead, kicking myself a bit for missing the race for dumb reasons, but happy to learn an unforgettable lesson in a low-stakes situation. For January, all of my stuff will be completely ready the night before, and I’ll show up at the start much earlier.
Back at home, I settled on running the Jackson Hole Half Marathon and resolved to run it as realistically as possible — I wore a shirt, ran with the number bib on my leg, and carried my Nathan water bottle in hand. I opened the window but left my big fans off; based on past results, I knew that my heart rate is significantly higher when I’m warm.
I felt strong as I started: after the first quarter I started thinking that perhaps I should try to run a full marathon– the first half with the 2:10 target and the second half much more slowly, perhaps 2:40? This thought kept me motivated for a few miles, but around mile 8 I was not feeling nearly so good. By mile 10, I’d surrendered and turned on the fans, and by mile 11 I knew that this wasn’t going to be a marathon day. I finished in around 2:04, happy to be done but a bit depressed that I certainly wouldn’t’ve met my day’s real-world goal had I run the Decker. (I was also a bit misled because the 2:08 reported by my watch included 4 minutes before I started running.)
I refilled my water bottle and then jogged another 1.2 miles to “finish” the race with the trainer (I run faster than the target pace) before calling it a day. I cooled off by walking a mile outside and crossed 30,000 steps for the day for the first time.
So, not a bad effort, but I’m definitely running slower than my prior efforts this year. Before Jackson Hole, I’d run six half marathons on the treadmill this summer, finishing four of them under two hours. The second half of Boston was my best time, at 1:50:30. On the other hand, I recovered from this one far more quickly, with no real blisters, and I was feeling so normal that I had to stop myself from running the next day.
What does all of this mean for my January hopes? I don’t know. But I know that this time I won’t forget my bib!
When establishing a secure HTTPS connection with a server, a browser must validate that the certificate sent by the server is valid — that is to say, that:
it’s non-expired (current datetime is within the validity period specified in the notBefore and notAfter fields of the certificate)
it contains the hostname of the target site in the subjectAltNames field
it is properly signed with a strong algorithm, and
either the certificate’s signer (Certificate Authority) is trusted by the system (Root CA) or it chains to a root that is trusted by the system (Intermediate CA).
In the past, Chromium running on Windows delegated this validation task to APIs in the operating system, layering a minimal set of additional validation (e.g. this) on top of the verdict from Windows. As a consequence, Chromium-based browsers relied on two things: The OS’ validation routines, and the OS’ trusted root certificate store.
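For reference, here’s a rough sketch of roughly what that delegation looks like when a client leans on CryptoAPI directly (illustrative only, not Chromium’s actual code; error handling is trimmed and the hostname is a placeholder):

#include <windows.h>
#include <wincrypt.h>
#pragma comment(lib, "crypt32.lib")

// Illustrative sketch: build a chain using the OS verifier and OS root store, then
// apply the SSL server policy (expiry, EKU, hostname match, trust).
bool VerifyServerCertWithWindows(PCCERT_CONTEXT serverCert, PCWSTR hostname) {
  CERT_CHAIN_PARA chainPara = { sizeof(chainPara) };
  PCCERT_CHAIN_CONTEXT chain = nullptr;
  if (!CertGetCertificateChain(nullptr /*default engine*/, serverCert,
                               nullptr /*"now"*/, nullptr /*no extra store*/,
                               &chainPara, 0, nullptr, &chain)) {
    return false;
  }

  SSL_EXTRA_CERT_CHAIN_POLICY_PARA sslPara = {};
  sslPara.cbSize = sizeof(sslPara);
  sslPara.dwAuthType = AUTHTYPE_SERVER;
  sslPara.pwszServerName = const_cast<PWSTR>(hostname);  // placeholder hostname to match

  CERT_CHAIN_POLICY_PARA policyPara = { sizeof(policyPara) };
  policyPara.pvExtraPolicyPara = &sslPara;
  CERT_CHAIN_POLICY_STATUS status = { sizeof(status) };

  bool ok = CertVerifyCertificateChainPolicy(CERT_CHAIN_POLICY_SSL, chain,
                                             &policyPara, &status) &&
            status.dwError == ERROR_SUCCESS;
  CertFreeCertificateChain(chain);
  return ok;
}

Both the chain building and the trust decision in that sketch come from Windows; that dependency is exactly what’s changing.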
Starting in Edge version 109, Edge will instead rely on code and trust data shipped in the browser for these purposes — certificate chain validation will use Chromium code, and root trust determination will (non-exclusively) depend on a trust list generated by Microsoft and shipped with the browser.
Importantly: This should not result in any user-visible change in behavior for users. That’s true even in the case where an enterprise depends upon a private PKI (e.g. Contoso has their own Enterprise CA for certificates for servers on their Intranet, or WoodGrove Bank is using a “Break-and-Inspect” proxy server to secure/spy on all of their employees’ HTTPS traffic). These scenarios should still work fine because the browser will still check the OS root certificate store[1] if the root certificate in the chain is not in the browser-carried trust list.
Q: If the outcome is the same, why make this change at all?
A: The primary goal is consistency — by using the same validation logic and public CA trust list across all operating systems, users on Windows, Mac, Linux and Android should all have the same experience, not subject to the quirks (and bugs) of the OS-provided verifiers or the sometimes-misconfigured list of OS-trusted CAs.
Q: I know I can use certmgr.msc to visualize the Windows OS root certificate store. Can I see what’s in Edge’s built-in store?
A: Yes, you can visit the about:system page and expand the chrome_root_store node:
Update: A colleague observed today that on MacOS, Edge using the system verifier returns NET::ERR_CERT_VALIDITY_TOO_LONG when loading a site secured by a certificate that he generated with a 5-year expiration. When switching to the Chromium verifier, the error goes away, because Chromium only enforces the certificate-lifetime limit on certs that chain to public CAs, while Apple applies its stricter requirement even to private CAs.
Update: An Enterprise customer noted that their certificates were rejected with ERR_CERT_UNABLE_TO_CHECK_REVOCATION when they had set Group Policy to require certificate revocation checks for non-public CAs (RequireOnlineRevocationChecksForLocalAnchors). The problem was that their CRL was returned with a lifetime of two years, which is not in accordance with the Baseline Requirements. A CRBug was filed, and a fix in v114 stops checking CRL lifetime for non-public chains.
Update 4/18: An Enterprise customer has reached out to complain that their internal PKI does not work with the new verifier; certificates are rejected with ERR_CERT_INVALID errors. In looking at the collected NetLog, we see:
By marking this extension (1.3.6.1.4.1.311.21.10) as critical, the certificate demands that the verifier reject it unless the verifier can ensure that the purposes described in this proprietary MS Application Policies extension cover the purpose for which the certificate is being validated.
Unfortunately, Chromium doesn’t have any handling for this vendor extension and rejects the certificate. Microsoft has now documented that the ApplicationPolicies extension should be ignored if the verifier supports the standard EKU extension (which nearly everything does) and the certificate contains an EKU. Chromium 115+ now follows this guidance.
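In other words (a minimal sketch with a hypothetical ParsedCert type; this is not Chromium’s actual implementation), the documented rule boils down to:

#include <set>
#include <string>

// Hypothetical, minimal stand-in for a parsed certificate: just its extension OIDs.
struct ParsedCert {
  std::set<std::string> extension_oids;
  bool HasExtension(const std::string& oid) const { return extension_oids.count(oid) > 0; }
};

constexpr const char kMsApplicationPolicies[] = "1.3.6.1.4.1.311.21.10";
constexpr const char kExtKeyUsage[] = "2.5.29.37";  // the standard EKU extension

// Per the documented guidance: a verifier that supports EKU may ignore the proprietary
// MS Application Policies extension (even when marked critical), provided the
// certificate also carries an EKU.
bool CanIgnoreCriticalMsApplicationPolicies(const ParsedCert& cert) {
  return cert.HasExtension(kMsApplicationPolicies) && cert.HasExtension(kExtKeyUsage);
}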
Update 4/19: An Enterprise customer has reached out to complain that their internal PKI does not work with the new verifier; certificates are rejected with ERR_CERT_INVALID errors. In looking at the collected NetLog, we see:
ERROR: Not permitted by name constraints
The root certificate contains a critical Name Constraints extension that Chromium’s verifier enforces. Investigation is underway, but early clues suggest that the problem is that the Name Constraints extension defines an RFC822 email constraint. Chromium does not have a validator for RFC822 constraints, and thus rejects the certificate because it cannot validate the constraint.
It appears that there’s a policy (EnforceLocalAnchorConstraintsEnabled) to opt-out of that enforcement until Chromium 118; the policy is not presently mentioned in the Edge documentation.
Update 5/16: An Enterprise customer reached out to complain that their internal PKI server load increased dramatically after switching to the new verifier. The problem turned out to be that they had enabled revocation checks and had configured their Enterprise root CA (used in their MiTM proxy) with revocation options of LDAP and HTTP. Chromium’s verifier does not support LDAP, so HTTP is now always used. They had misconfigured their HTTP server to always serve the CRL with an Expires header in 1970. Chromium does not have a CRL-specific cache, relying only on the Network Cache, meaning this CRL file was re-fetched from the server every time the root certificate was validated, even though the CRL itself had a nextUpdate 45 days in the future. (Chrome does not presently cache CRLs elsewhere; crbug/1447584.)
Adding a max-age=21600 directive to the CRL’s Cache-Control response header will allow this file to be reused for six hours at a time, dramatically reducing load on the CRL server. But, look at the 8/23 update below for an important caveat!
Update 6/22: Another difference vs. the Windows certificate verifier was found: Chrome rejects a Name Constraints extension that specifies an Excluded tree but doesn’t put any entries in it. https://crbug.com/1457348/
Update 8/23: A Customer who enabled hard-fail revocation checks complained that they receive ERR_CERT_UNABLE_TO_CHECK_REVOCATION until they clear the browser cache. Why?
Here's the HTTP response from the server for the CRL:
HTTP/1.1 200 OK
Content-Type: application/pkix-crl
Last-Modified: Wed, 16 Aug 2023 16:18:53 GMT
Accept-Ranges: bytes
ETag: "75b82a595dd0d91:0"
Server: Microsoft-IIS/10.0
Date: Mon, 21 Aug 2023 11:14:46 GMT
Content-Length: 9329
CRL Version: 2
This Update: 2023-08-16 16:08:52.
Next Update: 2023-08-24 04:28:52
SigAlg: 1.2.840.113549.1.1.11
This CRL contains 164 revoked certificates.
The problem here is related to an outdated CRL cached locally being consulted after its expiration date.
The Heuristic Expiration rules for browser caches are commonly something like: “A file which does not specify its expiration should be considered ‘fresh’ for 10% of the difference between its Date and Last-Modified values.” In this case, that delta is around 115 hours, so the file is considered fresh for 11.5 hours after it’s fetched.
So, say the user first visited the HTTPS site on 8/23/2023 at 11:45PM and got the CRL shown above. Then, say they tried to visit the same site the following morning at 5am. It’s entirely possible that the now-outdated CRL was still in Chromium’s Network Cache, and thus it would not be fetched again. Instead, the cached CRL would be handed to the verifier, which would say “Hrm, this is outdated and thus it does not answer the question of whether the cert has been revoked,” resulting in a revocation status of UNKNOWN. Because the customer has configured “hard-fail” revocation checking by policy, the user then gets the error page.
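To make the arithmetic concrete, here’s a tiny C++20 sketch that plugs in the header values from the response above (dates hard-coded for illustration; this reproduces the back-of-the-envelope math, not Chromium’s actual cache logic):

#include <chrono>
#include <iostream>

int main() {
  using namespace std::chrono;
  // Last-Modified: Wed, 16 Aug 2023 16:18:53 GMT
  const auto lastModified = sys_days{2023y / August / 16} + 16h + 18min + 53s;
  // Date: Mon, 21 Aug 2023 11:14:46 GMT
  const auto dateHeader = sys_days{2023y / August / 21} + 11h + 14min + 46s;

  // Heuristic freshness: 10% of (Date - Last-Modified), roughly 11.5 hours.
  const auto freshFor = (dateHeader - lastModified) / 10;
  std::cout << duration_cast<minutes>(freshFor).count() / 60.0 << " hours\n";

  // Fetched 23 Aug at 11:45PM: the cached copy stays "fresh" until roughly 11:15AM
  // on 24 Aug, well past the CRL's Next Update of 04:28:52; so at 5AM the verifier
  // is handed a cached-but-expired CRL and reports revocation status UNKNOWN.
  const auto fetched = sys_days{2023y / August / 23} + 23h + 45min;
  const auto nextUpdate = sys_days{2023y / August / 24} + 4h + 28min + 52s;
  std::cout << "cache-fresh past CRL expiry? " << std::boolalpha
            << (fetched + freshFor > nextUpdate) << "\n";
}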
Thus, it’s entirely possible that this is indeed a config issue; as the Chrome code notes:
// Note that no attempt is made to refetch without cache if a cached
// CRL is too old, nor is there a separate CRL cache.
The customer should set an HTTP Expires response header that is several hours (to account for clock skew) before the Next Update value contained within the CRL file. Alternatively, they could set a Max-Age value of some short period (e.g. 24 hours) and ensure that they are generating new CRL files at a cadence such that every CRL file is good for at least a few days.
Please Preview ASAP
I’ve written before about the value and importance of practical time machines, and this change arrives with such a mechanism. Starting in Microsoft Edge 109, an enterprise policy (MicrosoftRootStoreEnabled) and a flag (edge://flags/#edge-microsoft-root-store-enabled) are available to control when the built-in root store and certificate verifier are used. The policy is slated to be removed in Edge 113 (Update: although, given the breakage above, this seems ambitious).
Please try these out, and if anything breaks in your environment, please report the issue!
-Eric
[1] Chromium uses CertOpenStore to grab the relevant root certificates: trust_store_win.cc; win_util.cc. After the roots are gathered, Chromium parses and stores them in-memory for use with its own verification logic; none of the verification is done through Windows’ CAPI.
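A minimal sketch of that root-gathering step (Chromium also consults additional store locations, such as group policy and enterprise stores, and checks trust purposes; this shows only the basic enumeration):

#include <windows.h>
#include <wincrypt.h>
#include <vector>
#pragma comment(lib, "crypt32.lib")

std::vector<std::vector<BYTE>> ReadLocalMachineRoots() {
  std::vector<std::vector<BYTE>> roots;
  HCERTSTORE store = CertOpenStore(CERT_STORE_PROV_SYSTEM_W, 0, NULL,
                                   CERT_SYSTEM_STORE_LOCAL_MACHINE |
                                       CERT_STORE_OPEN_EXISTING_FLAG |
                                       CERT_STORE_READONLY_FLAG,
                                   L"ROOT");
  if (!store) return roots;

  PCCERT_CONTEXT cert = nullptr;
  while ((cert = CertEnumCertificatesInStore(store, cert)) != nullptr) {
    // Each certificate's raw DER bytes; the verifier parses these with its own code.
    roots.emplace_back(cert->pbCertEncoded, cert->pbCertEncoded + cert->cbCertEncoded);
  }
  CertCloseStore(store, 0);
  return roots;
}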
I’ve been writing about Windows Security Zones and the Mark-of-the-Web (MotW) security primitive in Windows for decades now, with 2016’s Downloads and MoTW being one of my longer posts that I’ve updated intermittently over the last few years. If you haven’t read that post already, you should start there.
Advice for Implementers
At this point, MotW is old enough to vote and almost old enough to drink, yet understanding of the feature remains patchy across the Windows developer ecosystem.
MotW, like most security primitives (e.g. HTTPS), only works if you use it. Specifically, an application which generates local files from untrusted data (i.e. anywhere on “The Internet”) must ensure that the files bear a MoTW to ensure that the Windows Shell and other applications recognize the files’ origins and treat them with appropriate caution. Such treatment might include running anti-malware checks, prompting the user before running unsafe executables, or opening the files in Office’s Protected View.
Similarly, if you build an application which consumes files, you should carefully consider whether files from untrusted origins should be treated with extra caution in the same way that Microsoft’s key applications behave — locking down or prompting users for permission before the file initiates any potentially-unwanted actions, more-tightly sandboxing parsers, etc.
Writing MotW
The best way to write a Mark-of-the-Web to a file is to let Windows do it for you, using the IAttachmentExecute::Save() API. Using the Attachment Execution Services API ensures that the MotW is written (or not) based on the client’s configuration. Using the API also provides future-proofing for changes to the MotW format (e.g. Win10 started preserving the original URL information rather than just the ZoneID).
If the URL is not known, but you wish to ensure Internet Zone handling, use the special url about:internet.
You should also use about:internet if the URL is longer than 2083 characters (INTERNET_MAX_URL_LENGTH), or if the URL’s scheme isn’t one of HTTP/HTTPS/FILE.
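Here’s a minimal sketch of what that looks like in native code (error handling trimmed; the path and URL below are placeholders, and CoInitializeEx must already have been called on the thread):

#include <windows.h>
#include <shobjidl.h>   // IAttachmentExecute; CLSID_AttachmentServices comes from the SDK headers

HRESULT MarkFileFromInternet(PCWSTR filePath, PCWSTR sourceUrl) {
  IAttachmentExecute* ae = nullptr;
  HRESULT hr = CoCreateInstance(CLSID_AttachmentServices, nullptr,
                                CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&ae));
  if (FAILED(hr)) return hr;

  hr = ae->SetLocalPath(filePath);
  if (SUCCEEDED(hr)) {
    // Fall back to about:internet if the real URL is unknown, over-long, or not HTTP/HTTPS/FILE.
    hr = ae->SetSource(sourceUrl ? sourceUrl : L"about:internet");
  }
  if (SUCCEEDED(hr)) {
    hr = ae->Save();  // Writes (or deliberately skips) the MotW per the client's configuration.
  }

  ae->Release();
  return hr;
}

// e.g. MarkFileFromInternet(L"C:\\Users\\Me\\Downloads\\setup.exe", L"https://example.com/setup.exe");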
Ensure that you write the MotW to any untrusted file written to disk, regardless of how that happened. For example, one mail client would properly write MotW when the user used the “Save” command on an attachment, but failed to do so if the user drag/dropped the attachment to their desktop. Similarly, browsers have written MotW to “downloads” for decades, but needed to add similar marking when the File System Access API was introduced. Recently, Chromium fixed a bypass whereby a user could be tricked into hitting CTRL+S with a malicious document loaded.
Take care with anything that would prevent proper writing of the MotW– for example, if you build a decompression utility for ZIP files, ensure that you write the MotW before your utility applies any readonly bit to the newly extracted file, otherwise the tagging will fail.
Update: One very non-obvious problem with trying to write the :Zone.Identifier stream yourself — URLMon has a cross-process cache of recent zone determinations. This cache is flushed when a user reconfigures Zones in the registry and when CZoneIdentifier.Save() is called inside the Attachment Services API. If you try to write a zone identifier stream yourself, the cache won’t be flushed, leading to a bug if you do the following operations:
1. MapURLToZone("file:///C:/test.pdf"); // Returns LOCAL_MACHINE
2. Write ZoneID=3 to C:\test.pdf:Zone.Identifier // mark Internet
3. MapURLToZone("file:///c:/test.pdf"); // BUG: Returns LOCAL_MACHINE value cached at step #1.
Beware Race Conditions
In certain (rare) scenarios, there’s the risk of a race condition whereby a client could consume a file before your code has had the chance to tag it with the Mark-of-the-Web, resulting in a security vulnerability. For instance, consider the case where your app (1) downloads a file from the internet, (2) streams the bytes to disk, (3) closes the file, finally (4) calls IAttachmentExecute::Save() to let the system tag the file with the MotW. If an attacker can induce the handler for the new file to load it between steps #3 and #4, the file could be loaded by a victim application before the MotW is applied.
Unfortunately, there’s not generally a great way to prevent this — for example, the Save() call can perform operations that depend on the file’s name and content (e.g. an antivirus scan) so we can’t simply call the API against an empty file or against a bogus temporary filename (i.e. inprogress.temp).
The best approach I can think of is to avoid exposing the file in a predictable location until the MotW marking is complete. For example, you could download the file into a randomly-named temporary folder (e.g. %TEMP%\InProgress\{guid}\setup.exe), call the Save() method on that file, then move the file to the predictable location.
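A sketch of that flow, building on the MarkFileFromInternet sketch above (paths and URL are placeholders; error handling trimmed):

#include <windows.h>
#include <string>

HRESULT MarkFileFromInternet(PCWSTR filePath, PCWSTR sourceUrl);  // from the sketch above

bool FinishDownload(const std::wstring& hiddenPath,   // e.g. %TEMP%\InProgress\{random guid}\setup.exe
                    PCWSTR sourceUrl,
                    const std::wstring& finalPath) {  // e.g. the user's Downloads folder
  // The bytes were streamed to hiddenPath, which lives under a randomly-named folder
  // that other applications can't predict.

  // Tag the file *before* it appears anywhere predictable.
  if (FAILED(MarkFileFromInternet(hiddenPath.c_str(), sourceUrl)))
    return false;

  // Only now expose it at the location the user (and other handlers) will see.
  return MoveFileExW(hiddenPath.c_str(), finalPath.c_str(),
                     MOVEFILE_COPY_ALLOWED | MOVEFILE_WRITE_THROUGH) != FALSE;
}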
Note: This approach (extracting to a randomly-named temporary folder, carefully named to avoid 8.3 filename collisions that would reduce entropy) is now used by Windows 10+’s ZIP extraction code.
Correct Zone Mapping
To check the Zone for a file path or URL, use the MapUrlToZone (sometimes called MUTZ) function in URLMon.dll. You should not try to implement this function yourself– you will get burned.
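In native code, the call goes through URLMon’s IInternetSecurityManager; a minimal sketch (error handling trimmed, placeholder URL, and CoInitializeEx must already have been called on the thread):

#include <windows.h>
#include <urlmon.h>
#pragma comment(lib, "urlmon.lib")

HRESULT GetZoneForUrl(PCWSTR url, DWORD* zone) {
  *zone = URLZONE_UNTRUSTED;
  IInternetSecurityManager* secMgr = nullptr;
  HRESULT hr = CoInternetCreateSecurityManager(nullptr, &secMgr, 0);
  if (FAILED(hr)) return hr;
  hr = secMgr->MapUrlToZone(url, zone, 0 /*dwFlags*/);
  secMgr->Release();
  return hr;  // On failure, reject the URL; don't try to guess the zone yourself.
}

// e.g. DWORD zone; GetZoneForUrl(L"file:///C:/test.pdf", &zone);
// URLZONE_LOCAL_MACHINE=0, URLZONE_INTRANET=1, URLZONE_TRUSTED=2,
// URLZONE_INTERNET=3, URLZONE_UNTRUSTED=4.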
Because the MotW is typically stored as a simple key-value pair (a ZoneId value under a [ZoneTransfer] header) within an NTFS alternate data stream named Zone.Identifier, it’s tempting to think “To determine a file’s zone, my code can just read the ZoneId directly.”
Unfortunately, doing so is a recipe for failure.
First, consider the simple corner cases you might miss. For instance, if you try to open the Zone.Identifier stream of a file whose read-only bit is set while requesting read/write access, the open will fail because the file isn’t writable.
Aside: A 2023-era vulnerability in Windows was caused by failure to open the Zone.Identifier due to an unnecessary demand for write permission.
Second, there’s a ton of subtlety in performing a proper zone mapping.
2a: For example, files stored under certain paths or with certain Integrity Levels are treated as Internet Zone, even without a Zone.Identifier stream.
2b: Similarly, files accessed via a \\UNC share are implicitly not in the Local Machine Zone, even if they don’t have a Zone.Identifier stream.
Aside: The February 2024 security patch (CVE-2024-21412) fixed a vulnerability where the handler for InternetShortcut (.url) files was directly checking for a Zone.Identifier stream rather than calling MUTZ. That meant the handler failed to recognize that file://attacker_smbshare/attack.url should be treated with suspicion and suppressed an important security prompt.
2c: As of the latest Windows 11 updates, if you zone-map a file contained within a virtual disk (e.g. a .iso file), that file will inherit the MotW of the containing .iso file, even though the embedded file has no Zone.Identifier stream.
2d: For HTML files, a special “saved from url” comment allows specification of the original URL of the HTML content (e.g. <!-- saved from url=(0014)about:internet -->). When MapUrlToZone is called on an HTML file URL, the start of the file is scanned for this comment and, if found, the stored URL is used for Zone Mapping.
Finally, the contents of the Zone.Identifier stream are subject to change in the future. New key/value fields were added in Windows 10, and the format could be changed again in the future.
Ensure You’re Checking the “Final” URL
Ensure that you check the “final” URL that will be used to retrieve or load a resource. If you perform any additional string manipulations after calling MapUrlToZone (e.g. removing wrapping quotation marks or other characters), you could end up with an incorrect result.
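For example (StripWrappingQuotes is a hypothetical stand-in for whatever cleanup your app performs; GetZoneForUrl is the sketch from earlier in this post):

#include <windows.h>
#include <urlmon.h>
#include <string>

HRESULT GetZoneForUrl(PCWSTR url, DWORD* zone);  // from the sketch earlier in this post

// Hypothetical normalization helper, standing in for whatever cleanup your app does.
std::wstring StripWrappingQuotes(const std::wstring& s) {
  return (s.size() >= 2 && s.front() == L'"' && s.back() == L'"')
             ? s.substr(1, s.size() - 2) : s;
}

// Normalize first, then map the exact string you'll load. Mapping the raw argument
// and stripping quotes afterward produces a verdict for a different target.
DWORD GetZoneForFinalTarget(const std::wstring& rawArgument) {
  const std::wstring finalUrl = StripWrappingQuotes(rawArgument);
  DWORD zone = URLZONE_UNTRUSTED;
  GetZoneForUrl(finalUrl.c_str(), &zone);
  return zone;
}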
Respect Error Cases
Numerous security vulnerabilities have been introduced by applications that attempt to second-guess the behavior of MapURLToZone.
For example, MapURLToZone will return a HRESULT of 0x80070057 (Invalid Argument) for some file paths or URLs. An application may respond by trying to figure out the Zone itself, by checking for a Zone.Identifier stream or similar. This is unsafe: you should instead just reject the URL.
Similarly, one caller noted that http://dotless was sometimes returning ZONE_INTERNET rather than the ZONE_INTRANET they were expecting. So they started passing MUTZ_FORCE_INTRANET_FLAGS to the function. This has the effect of exposing home users (for whom the Intranet Zone was disabled way back in 2006) to increased attack surface.
MutZ Performance
One important consideration when calling MapUrlToZone() is that it is a blocking API which can take from milliseconds (common case) to tens of seconds (worst case) to complete. As such, you should NOT call this API on a UI thread– instead, call it from a background thread and asynchronously report the result up to the UI thread.
It’s natural to wonder how it’s possible that this API takes so long to complete in the worst case. While file system performance is unpredictable, even under load it rarely takes more than a few milliseconds, so checking the Zone.Identifier is not the root cause of slow performance. Instead, the worst performance comes when the system configuration enables the Local Intranet Zone, with the option to map to the Intranet Zone any site that bypasses the proxy server:
In this configuration, URLMon may need to discover a proxy configuration script (potentially taking seconds), download that script (potentially taking seconds), and run the FindProxyForURL function inside the script. That function may perform a number of expensive operations (including DNS resolutions), potentially taking seconds.
Fortunately, the “worst case” performance is not common after Windows 7 (the WinHTTP Proxy Service means that typically much of this work has already been done), but applications should still take care to avoid calling MapUrlToZone() on a UI thread, lest an annoyed user conclude that your application has hung and kill it.
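One simple way to keep the call off the UI thread (reusing the GetZoneForUrl sketch above; how you deliver the result back to your UI is up to your framework):

#include <windows.h>
#include <urlmon.h>
#include <future>
#include <string>

HRESULT GetZoneForUrl(PCWSTR url, DWORD* zone);  // from the sketch above

std::future<DWORD> GetZoneAsync(std::wstring url) {
  return std::async(std::launch::async, [url = std::move(url)]() -> DWORD {
    // The worker thread needs its own COM initialization for URLMon.
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
    DWORD zone = URLZONE_UNTRUSTED;
    GetZoneForUrl(url.c_str(), &zone);  // May block for seconds in the worst case.
    CoUninitialize();
    return zone;
  });
}
// The UI thread should be notified (or poll the future) rather than blocking on get().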
Checking for “LOCAL” paths
In some cases, you may want to block paths or files that are not on the local disk, e.g. to prevent a web server from being able to see that a given file was opened (a so-called Canary Token), or to prevent leakage of the user’s NTLM hash.
When using MapURLToZone for this purpose, pass the MUTZ_NOSAVEDFILECHECK flag. This ensures that a downloaded file is recognized as being physically local AND prevents the MapURLToZone function from itself reaching out to the network over SMB to check for a Zone.Identifier data stream.
In most cases, you’ll want to use < and > comparisons rather than exact Zone comparisons; for example, when treating content as “trustworthy”, you’ll typically want to check Zone<3, and when deeming content risky, you’ll check Zone>3.
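Putting those pieces together, a hedged sketch of a “treat as trustworthy?” check (not a complete security decision; it just follows the advice above):

#include <windows.h>
#include <urlmon.h>
#pragma comment(lib, "urlmon.lib")

bool ShouldTreatAsTrustworthy(PCWSTR url) {
  IInternetSecurityManager* secMgr = nullptr;
  if (FAILED(CoInternetCreateSecurityManager(nullptr, &secMgr, 0)))
    return false;  // Can't tell: fail closed.
  DWORD zone = URLZONE_UNTRUSTED;
  HRESULT hr = secMgr->MapUrlToZone(url, &zone, MUTZ_NOSAVEDFILECHECK);
  secMgr->Release();
  // Range comparison, not equality: Local Machine (0), Intranet (1), and Trusted (2)
  // all sit "below" the Internet Zone (3).
  return SUCCEEDED(hr) && zone < URLZONE_INTERNET;
}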
Tool: Simple MapUrlToZone caller
Compile from a Visual Studio command prompt using csc mutz.cs.
I finished the first section of Tommy Rivers’ half-marathon training series (in Bolivia) and have moved on to the second section (Japan). I ran two Austin races in November, notching some real-world running experience in preparation for the 3M Half Marathon that I’ll be running at the end of January.
Run for the Water
On November 6th, I ran the “Run for the Water” ten miler, a charity race in support of providing clean water sources in Burundi.
Fortunately, everything that could’ve gone wrong with this race didn’t– the weather was nice, and my full belly had no complaints. This was my first race experience with music (my Amazon Fire phone to one Bluetooth headphone) and a carried snack (GU chews), and I figured out how to coax my watch into providing pacing information every half mile.
I had two goals for the race: To run the whole thing without stopping, and to beat 1:30 overall.
I achieved both, with a finishing time of 1:28:57, a pace of 8:53 per mile, and 1294 calories expended.
As predicted, I started at a faster pace before leveling out, with my slowest times in the hills around mile six:
The mid-race hills weren’t as bad as I feared, and I spent most of mile 6 and 7 psyching myself up for one final big hill that never arrived. By mile 8, I was daydreaming about blazing through miles 9 and 10, but started lagging and only sprinted at the very end. With an eye toward the half marathon, as I crossed the finish line, I asked myself whether I could run another 3.1 miles in thirty minutes and concluded “probably, but just barely.”
Notably, I managed to keep my heart rate under control for almost the whole race, running nearly the entire thing at just under 85% of my max:
The cool-but-not-cold weather undoubtedly helped.
Turkey Trot
On a drizzly Thanksgiving morning, I ran the Turkey Trot 5-miler and had another solid run, although I didn’t take it as seriously and I ended up missing both of my goals: Run the entire thing, and finish in 42 minutes.
After the Capitol 10K in the spring, I was expecting the horde of runners at the start and was prepared for the temptation to join others in walking the hills early in the race. I wasn’t expecting the challenge of running on wet pavement, but I managed to avoid slipping. Alas, after topping the hills at mile 2, I then walked for a tenth of a mile to get my breathing and heart rate back under control.
Despite the shorter distance, my heart rate was considerably higher than during the ten miler earlier in the month:
I ended with a time of 44:06, an 8:49 pace just a hair faster than the ten miler, burning 673 calories in the effort:
So, a set of mixed results: I’m now considering whether I should try running a slow half marathon in December just to prove to myself that I can cover the distance without stressing about my time.
While my 2013 CX-5 is reasonably fuel-efficient (~28mpg in real world driving), this summer I watched in dismay as gas prices spiked. Even when my tank was almost full, watching prices tick up every time I drove past a gas station left me unsettled. I’d been idly considering getting an electric car for years, but between months of fuel price anxiety and upcoming changes in tax credits (that will leave me ineligible starting in 2023) this fall felt like the right time to finally pull the trigger.
On October 24th, I picked up a new 2023 Nissan Leaf.
I originally shopped for plug-in hybrid SUVs with the intent of replacing the Mazda, but none of the brands seemed to have any available, with waitlists stretching well into next year. So, instead I decided I’d look for a pure-electric to use for daily driving, keeping my CX-5 for family vacations and whenever I need to haul a bigger or messier load. (I worried a bit about the cost to have two cars on my insurance, but the new car initially added only $30 a month, which feels pretty reasonable.)
I got the shorter-range version of the Leaf (40kWh) which promises around 150 miles per charge. While it’s compact, it makes good use of its interior room, and I have plenty of headroom despite my long torso. The backseat is very tight, but my sons will still fit for a few more years. In the first 25 days, I’ve put about 550 miles on it, and the car has yielded slightly better than the expected 150-mile range. It’s fun to drive. The only significant disappointment is that my Leaf’s low-end “S” trim doesn’t include the smartphone integration to track charging and enable remote start/AC (which would’ve been very useful in Texas summers). Including tax and all of the assorted fees, I paid sticker at $32K (17 down, 15 financed at an absurdly low 2.25%), before discounting the soon-to-expire $7500 federal tax credit.
For the first few weeks, I was trickle-charging the car using a regular 120V (1.4kWh/h) household socket. While 120V takes more than a day to fully charge the Leaf, even slow charging was much more practical for my needs than I had originally expected. Nevertheless, I spent a total of $2550 on a Wallbox Pulsar Plus 40A Level 2 charger ($550 for the charger, a higher-than-expected $2000 for the new 240V high-amp socket in my garage) to increase the charge speed to the full 6.6kWh/h that the car supports. My current electrical panel only had 30 amps available, which is the max the Leaf will take, but I had the electrician pull a 50 amp wire to simplify things if I ever upgrade to a car and panel with higher capacity. My local electric company will reimburse me $1200 for the charger installation, and there’s also a federal tax credit of 30% capped at $1000. So if everything goes according to plan, L2 charging will only have a net cost of $600.
While I’m enjoying the car, it’s not for everyone– between the small battery and the nearly nonexistent public fast-charging support, the practical range of the Leaf is low. The Leaf only supports the losing CHAdeMO standard that seems likely to go away over the next few years, and the Austin metro area only has two such chargers today (Update: Wrong). It’s also not clear that the Leaf model line has much of a future; the 2023 edition might be the last, or at least the last before a major redesign. (Update: The 2024 Leaf appears to be basically identical to the 2023 model; Nissan’s electric SUV, the Ariya, has arrived in America).
Nevertheless, for my limited needs, the Leaf is a good fit. In a few years, I expect I’ll replace my CX-5 with a hybrid SUV, but for now, I’m stressing a lot less about gas prices (even as they’ve fallen back to under $3 a gallon in Austin 🤷♂️).
When some of the hipper PMs on the Internet Explorer team started using a new “microblogging” service called Twitter in the spring of 2007, I just didn’t “get it.” Twitter mostly seemed to be a way to broadcast what you’d had for lunch, and with just 140 characters, you couldn’t even fit much more.
As Twitter’s founder noted:
…we came across the word “twitter”, and it was just perfect. The definition was “a short burst of inconsequential information”, and “chirps from birds”. And that’s exactly what the product was.
When I finally decided to sign up for the service (mostly to ensure ownership of my @ericlaw handle, in case I ever wanted it), most of my tweets were less than a sentence. I hooked up a SlickRun MagicWord so I could spew status updates out without even opening the website, and spew I did:
It looks like it was two years before I interacted with anyone I knew on Twitter, but things picked up quickly from there. I soon was interacting with both people I knew in real life, and many many more that I would come to know from the tech community. Between growing fame as the creator of Fiddler, and attention from improbable new celebrities:
…my follower count grew and grew. Soon, I was tweeting constantly, things both throwaway and thoughtful. While Twitter wasn’t a source of deep connection, it was increasingly a mechanism of broad connection: I “knew” people all over via Twitter.
This expanded reach via Twitter came as my connections in the real world withered away from 2013 to 2015: I’d moved with my wife to Austin, leaving behind all of my friends, and within a few years, Telerik had fired most of my colleagues in Austin. Around that time, one of my internet-famous friends, Steve Souders, confessed that he’d unfollowed me because I’d started tweeting too much and it was taking over his timeline.
My most popular tweet came in 2019, and it crossed over between my role as a dad and as a security professional:
The tweet, composed from the ziplock bag aisle of Target, netted nearly a million views.
I even found a job at Google via tweet. Throughout, I vague-tweeted various life milestones, from job changes, to buying an engagement ring, to signing the divorce papers. Between separating and divorcing, I wrote up a post-mortem of my marriage, and Twitter got two paragraphs:
Twitter. Unquestionably designed to maximize usage, with all of the cognitive tricks some of the most clever scientists have ever engineered. I could write a whole book about Twitter. The tl;dr is that I used Twitter for all of the above (News, Work, Stock) as well as my primary means of interacting with other people/”friends.” I didn’t often consciously think about how much it messed me up to go from interacting with a large number of people every day (working at Microsoft) to engaging with almost no one in person except [my ex] and the kids. Over seven years, there were days at Telerik, Google, and Microsoft where I didn’t utter a word for nine workday hours at a time. That’s plainly not healthy, and Twitter was one crutch I tried to use to mitigate that.
My Twitter use got worse when it became clear that [my ex] wasn’t especially interested in anything I had to say that wasn’t directly related to either us or the kids, either because our interests didn’t intersect, or because there wasn’t sufficient shared context to share a story in fewer than a few minutes. She’d ask how my day was, and interrupt if my answer was longer than a sentence or two without a big announcement. Eventually, I stopped answering if I couldn’t think of anything I expected she might find interesting. Meanwhile, ten thousand (mostly strangers) on the Internet beckoned with their likes and retweets, questions and kudos.
Now, Twitter wasn’t all just a salve for my crushing loneliness. It was a great and lightweight way to interact with the community, from discovering bugs, to sharing tips-and-tricks, to drawing traffic to blog posts or events. I argued about politics, commiserated with other blue state refugees in Texas, and learned about all sorts of things I likely never would have encountered otherwise.
Alas, Twitter has also given me plenty of opportunities to get in trouble. Over the years, I’ve been pretty open in sharing my opinions about everything, and not everyone I’ve worked for has been comfortable with that, particularly as my follower count crossed into 5 digits. Unfortunately, while the positive outcomes of my tweet community-building are hard to measure, angry PR folks are unambiguous about their negative opinions. Sometimes, it’s probably warranted (I once profanely lamented a feature that I truly believe is bad for safety and civility in the world) while other times it seems to be based on paranoid misunderstandings (e.g. I often tweet about bugs in products, and some folks wish I wouldn’t).
While my bosses have always been very careful not to suggest that I stop tweeting, at some point it becomes an IQ test and they’re surprised to see me failing it.
What’s Next?
While I nagged the Twitter team about annoying bugs that never got fixed over the years, the service was, for the most part, solid. Now, a billionaire has taken over and it’s not clear that Twitter is going to survive in anything approximating its current form. If nothing else, several people who matter a lot to me have left the service in disgust.
You can download an archive of all of your Tweets using the Twitter Settings UI. It takes a day or two to generate the archive, but after you download the huge ZIP file (3gb in my case), it’s pretty cool. There’s a quick view of your stats, and the ability to click into everything you’ve ever tweeted:
If the default features aren’t enough, the community has also built some useful tools that can do interesting things with your Twitter archive.
I’ve created an alternate account over on the Twitter-like federated service called Mastodon, but I’m not doing much with that account just yet.
Strange times.
-Eric
Update: As of November 2024, I’ve left the Nazi Bar and moved to BlueSky. Hope to see you there!
A customer recently wrote to ask whether there was any way to suppress the red “/!\ Not Secure” warning shown in the omnibox when IE Mode loads a HTTPS site containing non-secure images:
Notably, this warning isn’t seen when the page is loaded in modern Edge mode or in Chrome, because all non-secure “optionally-blockable” resource requests are upgraded to use HTTPS. If HTTPS upgrade doesn’t work, the image is simply blocked.
The customer observed that when loading this page in the legacy Internet Explorer application, no “Not Secure” notice was shown in IE’s address bar– instead, the lock icon just silently disappeared, as if the page were served over HTTP.
Background: There are two kinds of mixed content, passive (images, css) and active (scripts). Passive mixed content is less dangerous than active: a network attacker can replace the contents of a HTTP-served image, but only impact that image. In contrast, a network attacker can replace the contents of a HTTP-served script and use that script to completely rewrite the whole page. By default, IE silently allows passive mixed content (hiding the lock) while blocking active mixed content (preserving the lock, because the non-secure download was blocked).
The customer wondered whether there was a policy they could set to prevent the red warning for passive mixed content in Edge’s IE Mode. Unfortunately, the answer is “not directly.”
IE Mode is not sensitive to the Edge policies, so only the IE Settings controlling mixed content apply in this scenario.
When the IE Mode object communicates up to the Edge host browser, the security state of the page in IE Mode is represented by an enum containing just three values: Unsecure, Mixed, and Secure. Unsecure is used for HTTP, Secure is used for HTTPS, and Mixed is used whenever the page loaded with mixed content, either active or passive. As a consequence, there’s presently no way for the Edge host application to mimic the old IE behavior, because it doesn’t know whether IE Mode displayed passive mixed content or ran active mixed content.
Because both states are munged together, the code that chooses the UI warning state selects the most alarming option:
Now, even if the Edge UI code assumed the more benign DISPLAYED_INSECURE_CONTENT status, the browser would just show the same “Not secure” text in grey rather than red– the warning text would still be shown.
In terms of what a customer can do about this behavior (and assuming that they don’t want to actually secure their web content): they can change the IE Mode configuration to block the images in one of two ways:
Option #1: Change IE Zone settings to block mixed content. All mixed content is silently blocked and the lock is preserved:
Option #2: Change IE’s Advanced > Security Settings to “Block insecure images with other mixed content”. With this option, the lock is preserved and the IE-era notification bar is shown at the bottom of the page:
Perhaps the most impactful perk for employees of Microsoft is that the company will match charitable donations up to a pretty high annual limit ($15K/year), and will also match volunteering time with a donation at a solid hourly rate up to that same cap.
Years ago, I volunteered at a food bank in Seattle, but since having kids I haven’t had time for regular volunteer work (perhaps this will change in the future as they get bigger) so I’ve been focusing my philanthropic efforts on donations.
I donate to a few local charities, but most of my donations are to Doctors Without Borders, an organization that does important, amazing work with frugality and an aim toward maximizing impact.
When I returned to Microsoft, I learned about an interesting method to maximize the amount of money received by the charity without the hassle of trying to send them appreciated stock directly.
It’s simple and convenient, especially if you’re already using Fidelity for your stock portfolio.
Open a “Donor Advised Fund” account at Fidelity Charitable. It’s not free, but at $100 or 0.6% a year, I think it’s worth it.
Fund that account by moving appreciated shares of stock from your portfolio into the Fidelity Charitable account. (Optionally, do this when you think the stock is at a “high”)
Select how the funds from those shares (which will be immediately sold) should be invested (you can pick a low-return bond account, or a higher-return, more volatile index fund)
Whenever you want to donate money from your fund to a charitable organization, use a simple form to “recommend a grant” to that organization from your account.
After your grant is sent, visit the Microsoft internal tool to get a match of the amount granted.
Now, if you’re like me, you might wonder why you should bother with this hassle– wouldn’t it be easier to just sell shares and donate the money? Yes, that’s easier, but there are important tax considerations.
First, if you sell appreciated stock, you’re responsible for paying taxes (hopefully at a long-term capital gains rate with the Medicare surtax, so ~18.6% for most of us) on that sale. Then you give all of the proceeds to the charity — you’ll be able to write off what the charity gets as a donation, but that donation doesn’t include what you’d already paid in taxes.
Second, with the Trump-era tax changes, the Standard Deduction for most of us is now quite high, and the State-and-Local-Tax-Deduction cap of $10K means that many of us will barely exceed the Standard Deduction if we donate the MS-Matching-Max of $15000/year. However, here’s where the cool trick comes into play:
The IRS grants you the tax deduction of the full value of your appreciated stock when you move that stock to the charitable account.
Microsoft matches the value of your donation when you direct a grant to a charity.
What this means is that you can be strategic in the timing of your actions. Move, say, $30000 of appreciated stock into your charitable account, avoiding taxes on your gains because you didn’t “sell” the stock. Write that full amount off on your taxes this year. Then, later in the year, direct $15000 worth of donations out of your charitable account, getting Microsoft to match your donations up to the limit. Wait until next year and grant the other $15000. (You’ll hopefully have some left over for year three due to gains on your charitable account’s investments).
In this way, you can maximize the size of your donations to charity while minimizing the overhead paid in taxes. [1]
-Eric
[1]: I am, generally, an advocate for higher taxes, and certainly for paying what you owe. However, I am fully willing to follow these steps to maximize the chances that my charitable money goes to paying to save lives in the world’s poorest countries and not to padding the pockets of yet another defense contractor.
Years ago, the dot also used to appear any time the title of a pinned tab changed (because pinned tabs don’t show their titles) but that code was removed in 2018.
Nowadays, web content cannot directly trigger the dot icon (short of showing an alert()) but some sites will draw their own indicator by updating their favicon using JavaScript:
The Microsoft Edge browser makes use of a service called Microsoft Defender SmartScreen to help protect users from phishing websites and malicious downloads. The SmartScreen service integrates with a Microsoft threat intelligence service running in the cloud to quickly block discovered threats. As I explained last year, the SmartScreen service also helps reduce spurious security warnings for known-safe downloads — for example, if a setup.exe file is known safe, the browser will not warn the user that it is potentially dangerous.
Sometimes, users find that SmartScreen is behaving unexpectedly; for example, today an Edge user reported that they’re seeing the “potentially dangerous” warning for a popular installer, but no one else has been able to reproduce the warning:
Download warning should not show if SmartScreen reports the file is known-safe
After quickly validating that SmartScreen is enabled in the system’s App & Browser Control > Reputation-based protection settings panel, we asked the user to confirm that SmartScreen was generally working as expected using the SmartScreen demo page. We found that SmartScreen was generally performing as expected (by blocking the demo phishing pages), so the problem is narrower than a general failure to reach the SmartScreen service, for example.
SmartScreen Logging
At this point, we can’t make much progress without logs from the impacted client. While Telerik Fiddler is a good way to observe traffic between the Edge client and the web service, it’s not always the most convenient tool to use. Historically, SmartScreen used a platform networking stack to talk to the web service, but the team is in the process of migrating to use Edge’s own network stack for this communication. After that refactoring is completed, Edge’s Net Export feature will capture the responses from the SmartScreen service (but due to limitations in the NetLog format, the request data sent to SmartScreen won’t be in those logs).
Fortunately, there’s another logging service in Edge that we can take advantage of– the edge://tracing feature. This incredibly powerful feature allows tracing of the browser’s behavior across most of its subsystems, and it is often used for diagnosing performance problems in web content. But relevant to us here, it also allows capturing data flowing to the SmartScreen web service.
Capture a SmartScreen Trace
To capture a trace of SmartScreen, follow these steps:
1. Start Microsoft Edge and navigate to edge://tracing
2. Click the Record button:
3. In the popup that appears, choose the Manually select settings radio button, then click the None button under Record categories to clear all of the checkboxes below it:
4. Scroll down the list of categories and place a checkmark next to SmartScreen
5. At the bottom of the popup, push the Record button:
6. A new popup will appear indicating that recording has started.
7. Open a new tab and perform your repro (e.g. visit the download page and start the download. Allow the download to complete).
8. In the original tab, click the Stop button on the popup. The trace will complete and a trace viewer will appear.
9. Click the Save button at the top-left of the tab:
10. In the popup that appears, give the trace a meaningful name:
11. Click OK and the new trace file will be saved in your Downloads folder with the specified name, e.g. SmartScreenDownloadRep.json.gz
12. Using email or another file transfer mechanism, send this file to your debugging partner.
PS: Your debugging partner will be able to view the SmartScreen traffic by examining the raw JSON content in the log. If you’d like to poke at it yourself, you can look at the data by double-clicking on one of the SendRequestProxy bars in the trace viewer that opened in Step #8: