The Importance of Feedback Loops

This morning, I found myself once again thinking about the critical importance of feedback loops.

I thought about obvious examples where small bad things can so easily grow into large bad things:

– A minor breach can lead to complete pwnage.
– A small outbreak can become a pandemic.
– A brush fire can spark a continental wildfire.
– Petty theft can grow into large-scale fraud.
– A small skirmish can explode into world war.

On the other hand, careful reactions to that initial stimulus can result in a different set of dramatically better outcomes… far better than would have developed without that negative initial stimulus:

– A minor breach can avert complete pwnage.
– A small outbreak can prevent a pandemic.
– A brush fire can prevent a continental wildfire.
– Petty theft can prevent large-scale fraud.
– A small skirmish can avert a world war.

When it comes to feedback loops, it matters how fast they are, and how they respond to the stimulus. Early losses can reduce risk by mitigating threats before they become too big to survive.

Sweat the small stuff, before it becomes the big stuff.

Cloaking, Detonation, and Client-side Phishing Detection

Today, most browsers integrate security services that attempt to protect users from phishing attacks: for Microsoft’s Edge, the service is Defender SmartScreen, and for Chrome, Firefox, and many derivatives, it’s Google’s Safe Browsing.

URL Reputation services do what you’d expect — they return a reputation based on the URL, and the browser will warn/block loading pages whose URLs are known to be sources of phishing (or malware, or techscams).

Beyond URL reputation, from the earliest days of Internet Explorer 7’s phishing filter, there was the idea: “What if we didn’t need to consult a URL reputation service? It seems like the browser could detect signals of phishing on the client side and just warn the user if they’re encountered.”

Client-side Phishing Detection seems to promise a number of compelling benefits.

Benefits

A major benefit to client-side detection is that it reduces the need for service-side detonation, one of the most expensive and error-prone components of running an anti-phishing service. Detonation is the process by which the service takes a URL and attempts to automatically detect whether it leads to a phishing attack. The problem is that this process is expensive (requiring a fleet of carefully secured virtual machines to navigate to the URLs and process the resulting pages) and under constant attack. Attackers aim to fool service-side detonators by detecting that they’re being detonated and cloaking their attack, playing innocent when they know that they’re being watched by security services. Many of the characteristics that attackers look for to cloak against detonators (evidence of a virtual machine, loading from a particular IP range, etc) stop working when detection happens on the client, because the attackers must show their uncloaked attack page to the end-user if they hope to steal the user’s credentials.

Beyond detonation improvements, browser vendors might find that client-side phishing detection reduces other costs (fewer web service hits, no need to transfer large bloom filters of malicious sites to the client). In the URL Reputation service model, browser vendors must buy expensive threat intelligence feeds from security vendors, or painstakingly generate their own, and must constantly update based on false positives or false negatives as phishers rapidly cycle their attacks to new URLs.

Beyond the benefits to the browser vendors, users might be happy that client-side detection could have better privacy properties (no web service checks) and possibly faster or more-comprehensive protection.

So, how could we detect phish from the client?

Clientside ML

Image Recognition

One obvious approach to detecting a phishing site is to simply take a screenshot of the page and compare it to a legitimate login site. If it’s similar enough, but the URL is unexpected, the page is probably a phish. Even back in 2006, when graphics cards had little more power than an Etch-a-Sketch, this approach seemed reasonably practical at first glance.
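
To make the idea concrete, here’s a minimal sketch of what such a comparison might look like, using a simple average-hash of downscaled screenshots. This assumes the Pillow imaging library, and the file names and URL check are placeholders; a production classifier would be far more sophisticated.

from PIL import Image  # assumes the Pillow imaging library is installed

def average_hash(path, size=16):
    # Downscale to a tiny grayscale image, then hash each pixel against the mean.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

def looks_like_login(screenshot_path, reference_path, threshold=20):
    # True if the screenshot is visually similar to the known-good login page.
    return hamming_distance(average_hash(screenshot_path),
                            average_hash(reference_path)) <= threshold

# If the page looks like the bank's login screen but the URL isn't the bank's,
# treat the page as suspicious (paths and the URL check are placeholders):
# if looks_like_login("current_page.png", "contoso_login.png") and not on_contoso_domain(url):
#     warn_user()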

Unfortunately, if the browser blocks fake login screens based on image analysis, the attacker simply needs to download the browser and tune their attack site until it no longer triggers the browser’s phishing detectors. For example, a legitimate login screen:

…is easily tuned by an attacker such that it no longer trips the clientside detection, while looking equivalent to the vast majority of humans:

Even more subtle changes might work; the field of adversarial ML studies how to confuse image processing models as in this wild example:

However, humans are busy, distracted, and easily fooled, such that attackers don’t even need to be especially clever. Here are two real-world phishing attacks that captured users’ passwords despite not looking much like the legitimate login screen:

Text and Other Metadata

If image processing is too prone to false negatives or has too high a computational cost on low-end devices, perhaps we might look at evaluating other information to recognize spoofing.

For example, we can extract text from the title and body of the page to see whether it’s similar to a legitimate login page. This is harder than it sounds, though, because it’s trivial to add text to an HTML page that is invisible to humans but could trip up an extraction algorithm (white-on-white, 1 pixel wide, hidden by CSS, etc). Similarly, an attacker might pick synonyms (Login vs. Sign In, Inbox vs Mailbox, etc) such that the text doesn’t match but means the same thing. An attacker might use multiple character sets to perform a homoglyph spoof (e.g. using a Cyrillic O instead of a Latin O) so that text looks the same to a human but different to a text comparison algorithm. An attacker might use Z-Order or other layout tricks to make text appear in the page in a particular order that differs from the order in the source code. Finally, an attacker might integrate all or portions of text inside carefully-positioned graphics, such that a text-only processor will fail to recognize it.
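
As a small illustration of why naive string matching fails and what a detector has to do about it, here’s a sketch of text normalization that folds a few homoglyphs and synonyms before comparison. The confusables and synonym tables here are tiny, illustrative subsets; real implementations use Unicode’s full confusables data and much richer phrase matching.

import unicodedata

CONFUSABLES = {
    "\u043e": "o",  # Cyrillic о -> Latin o
    "\u0430": "a",  # Cyrillic а -> Latin a
    "\u0435": "e",  # Cyrillic е -> Latin e
}
SYNONYMS = {"log in": "sign in", "login": "sign in", "inbox": "mailbox"}

def normalize(text):
    text = unicodedata.normalize("NFKC", text).lower()
    text = "".join(CONFUSABLES.get(ch, ch) for ch in text)   # fold homoglyphs
    for word, canonical in SYNONYMS.items():                  # fold synonyms
        text = text.replace(word, canonical)
    return text

# Both of these normalize to "sign in to your mailbox":
print(normalize("Sign In to your Inb\u043ex"))   # contains a Cyrillic 'о'
print(normalize("Login to your Mailbox"))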

Making matters more complicated, many websites are dynamic, and their content can change at any time in response to users’ actions, timers, or other factors. Any recognition algorithm must decide how often to run — just on “page load”, or repeatedly as the content of the page changes? The more expensive the recognition algorithm, the more important the timing becomes for performance reasons.

For tech scam sites, image and text processing suffer similar shortcomings, but API observation holds a bit more promise. Most tech scam sites work by abusing specific browser functions, so by watching calls to those functions we may be able to develop useful heuristics to detect an attack-in-progress and do deeper checks for additional evidence. The Malwarebytes security extension uses this approach.
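
Real implementations hook the relevant APIs inside the browser or an extension; purely as a language-agnostic sketch of the idea, here’s a wrapper that flags a hypothetical abusable function when it’s called suspiciously often, the way tech-scam pages spam dialogs or fullscreen requests.

import functools, time

def monitored(threshold=5, window_seconds=10):
    # Decorator: call on_suspicious() if the wrapped API is invoked too often
    # within a short window of time.
    def wrap(fn):
        calls = []
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            now = time.monotonic()
            calls.append(now)
            calls[:] = [t for t in calls if now - t < window_seconds]
            if len(calls) >= threshold:
                on_suspicious(fn.__name__)
            return fn(*args, **kwargs)
        return inner
    return wrap

def on_suspicious(api_name):
    print(f"Heuristic tripped: {api_name} called repeatedly; run deeper checks")

@monitored()
def request_fullscreen():
    pass  # stand-in for an abusable browser API

for _ in range(6):   # simulated abuse
    request_fullscreen()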

Given the challenges of client-side recognition (false positives, false negatives, and attacker tuning), what else might we do?

Keystrokes

One compelling approach is to just wait for the user to enter their password in the wrong place. While it sounds ridiculous, this approach has a lot of merit — it mitigates (to varying degrees) false positives, false negatives, and attacker tuning all at once.

Way back in 2006, Microsoft explored this idea and ended up filing a patent (Client side attack resistant phishing protection):

When I joined Google’s Chrome team in 2016, I learned that Google had built and fully deployed a Password Alert extension (open-source) on its employee desktops, such that if you ever typed your Google password into a non-Google login page, the extension would leap into action. One afternoon, I was distracted while logging into a work site and accidentally switched focus into a different browser window while typing my password. I had barely taken my finger off the final key before the browser window was taken over by a warning message and an email arrived in my inbox noting that my account had been automatically locked because I had inadvertently leaked my password. While this extension worked especially well for Google Accounts, available variants allow organizational customization such that an enterprise can force-deploy to their users and trigger reporting to backend APIs when a password reuse event is discovered. The enterprise can then either lock the user’s account or the target site can be added to an allowlist of legitimate login pages.
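
The general mechanism is straightforward, even though the real extension’s design differs in the details: store only a salted hash of the password, keep a rolling window of recent keystrokes, and raise an alarm when the window’s hash matches on an unapproved origin. A rough sketch, with placeholder origins and parameters:

import hashlib, hmac, os
from collections import deque

class ReuseDetector:
    def __init__(self, password, approved_origin="https://accounts.example.com"):
        # Only a salted hash of the password is retained.
        self.salt = os.urandom(16)
        self.length = len(password)
        self.digest = self._hash(password)
        self.approved_origin = approved_origin
        self.recent = deque(maxlen=self.length)   # rolling window of keystrokes

    def _hash(self, text):
        # A real implementation would tune the cost carefully; keystrokes are frequent.
        return hashlib.pbkdf2_hmac("sha256", text.encode(), self.salt, 10_000)

    def on_keystroke(self, ch, current_origin):
        # Returns True when the user appears to have just typed their password
        # somewhere other than the approved login origin.
        self.recent.append(ch)
        if len(self.recent) < self.length:
            return False
        candidate = "".join(self.recent)
        if hmac.compare_digest(self._hash(candidate), self.digest):
            if current_origin != self.approved_origin:
                return True   # warn the user, lock the account, send telemetry
        return False

detector = ReuseDetector("hunter2")
for ch in "hunter2":
    if detector.on_keystroke(ch, "https://evil.example.net/login"):
        print("Password typed on an unapproved site!")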

In 2022, Microsoft took our 2006 idea to the next level with the Enhanced Phishing Protection (EPP) feature of Windows 11 22H2. EPP goes beyond the browser such that if you type your Windows login password anywhere in Windows (any browser, any chat window, any note-taking app, etc), Defender SmartScreen evaluates the context (what URL was loaded, what network connections are active, etc) and either warns you of your unsafe activity or suggests changing your password:

Enterprises who have deployed Defender Endpoint Protection receive alerts in their security.microsoft.com portal and can further remediate the threat.

The obvious disadvantage of the “Wait for the bad thing to happen” approach is that it may not protect patient 0 — the first person to encounter the attack. As soon as the victim has entered their password, we must assume that the bad guy has it: attackers don’t need to wait for the user to hit Enter, and in many cases would be able to guess the last character if the phishing detector triggered before it was delivered to the app. The best we can do is warn the user and lock down their account as quickly as possible, racing against the attacker’s ability to abuse the credential.

In contrast, Patient-N and later are protected by this scheme, because the first client that observes the attack sends its “I’ve been phished from <url>” telemetry to the SmartScreen service, which adds the malicious URL to its blocklist and blocks it from subsequently loading in any client protected by that URL reputation service.

Conclusions

Attackers and Defenders are engaged in a quiet and ceaseless battle, 24 hours a day, 7 days a week, 366 days a year (Happy leap year!). Defenders are building ingenious protections to speed discovery and blocking of phishing sites, but attackers retain strong financial motivation (many billions of dollars per year) to develop their own ingenious circumventions of those protections.

Ultimately, the war over passwords will only end when we finally achieve our goal of retiring this centuries-old technology entirely — replacing passwords with cryptographically strong alternatives like Passkeys that are inherently unphishable.

Stay safe out there!

-Eric

PS: Of course, after we get rid of passwords, attackers will simply move along to other attack techniques, including different forms of social-engineering. I hold no illusions that I’ll get to retire with the end of passwords.

x22i Treadmill Review

I love my treadmill, but two years in, I cannot recommend it.

On New Year’s Day 2022 I bought a NordicTrack x22i Incline Trainer (a treadmill that supports 40% incline and 6% decline) with the aim of getting in shape to hike Kilimanjaro. I was successful on both counts, losing 50 pounds in six months and summiting Kilimanjaro with my brother in mid-2023. Between its arrival on January 24, 2022, and today, I’ve run ~1780 miles on it.

The Good

Most people I talk to about running complain about how awful treadmills are, describing them as “dreadmills” and horribly boring. While I’m not an outdoor runner, I’m sympathetic to their criticism, but it doesn’t resonate for me, at all.

The iFit video training series is awesome for me. I’m inspired to get on the treadmill to see what’s next on its 27″ screen. I’ve had the chance to walk, run, and hike all over the world: South America, Hawaii, Japan, Italy, Africa, Europe, Antarctica, and all over the US. I’ve run races I’ll likely never get to run in the real world, including races (mostly marathons) in Hawaii, London, Boston, Jackson Hole, New York, Chicago, Tanzania, and more I’ve probably forgotten. I’ve probably run the Kilimanjaro Half Marathon a dozen times at this point, and I’m currently working my way through a “Kilimanjaro Summit” hiking series, partially retracing my steps up the Western Approach. Along the way, I’ve learned lots of training tips, some phrases in foreign languages, and the history of lots of interesting places.

The treadmill hardware is pretty nice — the shock absorption of the deck is excellent and I’ve managed not to destroy my knees despite running thousands of miles. Running on pavement in the real world leaves me considerably more sore.

While there are a variety of annoyances (there are not nearly enough 10Ks or half marathons, and they don’t add new “hard” workouts fast enough) there’s no question in my mind that the iFit training classes are to thank for the success I’ve had in getting in shape.

The Bad

There are many inexpensive treadmills out there, and most of them don’t seem very sturdy or likely to support a serious and regular running habit.

I was serious about my goals and figured that I should spend enough to ensure that my treadmill would last and never give me a technical excuse not to run. Still, the cost ended up being pretty intimidating, with ~$3800 up front and ~$1900 in later expenses.

x22i Treadmill (On Sale): $3,170
Delivery and “White Glove” Assembly: $299
Sales Tax: $286
NordicTrack Heart Rate monitor arm band: $100
iFit Video Training Subscription renewal (Years 2-3): $600
20-Amp dedicated circuit: $970
Extended warranty (years 2-5): ~$300
Total 3-year costs for the x22i: $5,725

Fortunately, Microsoft’s employee fitness program grants $1500 a year. I put the first year’s payment toward the treadmill, and the following year’s covered the subscription content renewal, with $900 left over to defray the cost of the Kilimanjaro hike.

The Ugly

Unfortunately, my treadmill has been an escalating source of hassles from the very beginning. The assembly folks failed to fully screw in a few screws (they were sticking so far out that I assumed they used the wrong ones) and they cracked one of the water bottle holders. I complained to the NordicTrack folks and they refunded me the delivery/setup fee and within a few weeks came out to replace the broken water bottle holder.

Throughout the first year, my treadmill frequently tripped the circuit breaker; much to my surprise, the abrupt loss of power never resulted in me crashing into the front handrails, no matter how fast I was going. The treadmill was on a shared 15A circuit and while it was never supposed to approach that level of energy consumption, it clearly did. Sometimes, the trigger was obvious (someone turning on the toaster in the kitchen) while other times the treadmill was the only thing running. Eventually I hooked up a Kill-A-Watt meter and found that it could peak at 16-17 amps when starting or changing the incline, well above what it was supposed to consume, but within the technical specs. I eventually spent the money to get a dedicated 20A circuit, and was angry to discover that it was still periodically tripping. After months of annoyance and research, I eventually discovered that treadmills are infamous for tripping “Arc Fault Circuit Interrupt” breakers that are now required by code. Since having the electrician swap the AFCI breaker for the “old” type, I don’t think it has tripped again.

After all of the electrical problems, I invested in the extended warranty when it was offered, and I’m glad I did. Somewhere around the one-year mark, my treadmill started making a loud banging noise. I looked closer and realized that two screws had broken off the bottom of the left and right rails and I assumed that was the source of the noise. Alas, removing the rails didn’t stop the banging, nor did having them replaced. Over the course of several months, techs came out to replace the side rails, idler roller, drive roller, belt, belt guide, and cushions. As of November 2023, the treadmill no longer makes a banging sound, but it’s not nearly as quiet as it once was, and I’m expecting that I’ll probably need more service/parts within a few more months.

Closing Thoughts

From a cost/hassle point-of-view, I would be much better off getting a membership to the gym a half-mile down the block. I suspect, however, that much of my success with regular running comes from the fact that the treadmill lives between my bedroom and my home office, and it beckons to me every morning on my “commute.” The hassle of getting in the car, needing to dress in more than a pair of sweaty shorts, etc, would give me a lot of excuses to “nope” out of regular runs.

When I first was shopping for a treadmill, someone teased me and suggested that I make sure it had a good bar for hanging clothes on, since that’s probably the most common job for home treadmills. I managed to avoid that trap, and I’ve fallen in love with my treadmill despite its many flaws.

I don’t know whether other treadmills at a similar price point are of higher quality, or whether spending even more would give better results, but it almost doesn’t matter at this point — the iFit video content is the best part of my treadmill, and I don’t think any other ecosystem (e.g. Peloton) is comparable.

-Eric

PS: If I end up replacing my treadmill in a few years, I might get a “regular” treadmill rather than an Incline Trainer, because I don’t use the steep inclines very often and I think that capability adds quite a bit of weight and perhaps some additional components that could fail?

A Cold and Slow 3M Half

My second run of the 3M Half Marathon was Sunday January 21, 2024. My first half-marathon last year was cold (starting at 38F), but this year’s was slated to be even colder (33F) and I was nervous.

For dinner on Saturday night, I had a HelloFresh meal of meatballs and mashed potatoes, and I went to bed around 9:45pm. I set an alarm for 6, but I woke up around 5:15 am and lingered in bed until 5:30. I drank a cup of coffee right away and then had a productive trip to the bathroom. I ate a banana and had another cup of coffee while I prepped my gear and got dressed.

I put on my new Under Armour leggings and shorts with the number that I’d attached the night before. I packed an additional running shirt in case it was cold enough to double-layer; I’d never done that before, but the forecast called for 33 degrees, five degrees colder than last year’s cold run. I also put on a pair of $3 disposable cotton gloves that I’d picked up at the packet pickup expo the day before. I wore new Balega socks and my trusty orange Hokas (my new ones aren’t quite broken in yet).

My water bottle’s pouch would hold my car key, an iPhone SE to provide tunes streamed to one Bluetooth earpiece (the other died months ago) and snacks: a pack of Gu gummies and some Jelly Belly Energy beans (which I ended up liking most).

I left the house around 6:45 for the 7:30 am race. While waiting at a light in the parking traffic I concluded that I definitely was going to need that second shirt, so I put it on under my trusty Decker Challenge shirt that brought me to the top of Kilimanjaro.

By 7:22 I had parked and was waiting in a long line for a porta-potty near the start, debating about whether or not I should just skip it and go find my pace group. Ultimately, the race began just before I had a turn, although it was nice to dispose of that second cup of coffee. Alas, I was forced to start with the 2:40 pacers. I started my Fitbit a minute or two before my group made it across the starting line. Alas, my second watch (an ancient Timex I found somewhere) crashed when I tried to start it as I crossed the starting line.

I spent the first mile passing folks and by the second mile marker I’d reached the 2:05 pace group. For the next few miles, the 2:00 pace group was in sight in the distance, but I never caught up to them, despite my hope of running most of the race with the 1:55 pace group. I consoled myself that I’d probably crossed the start line two minutes after the 2:00 group so my dream of finishing in under 2 hours was probably still possible.

Around mile 5, my energy started to flag, but shortly thereafter an 8yo in a tutu running ahead of me guilted me into realizing that this wasn’t as hard as I was making it out to be. I passed her in about half a mile, grateful for the boost.

Shortly after mile 6, I discarded my gloves which had served me well. By this point I was taking short walking breaks and had concluded that I was unlikely to set any PRs.

Miles 6 through 9 were full of signs. My favorite was the 3yo boy holding a sign that said “This seems like a lot of work for a banana“– I told him it was the best one I’d seen. I groaned a bit at some of the signs held by twenty-somethings; one woman’s read “Find a cute butt and follow it” while another proclaimed: “Wow, that looks really long and hard!” Like last year, around mile 9 I stopped for a pee break although this time it was almost nothing… I had sipped under 16 ounces on the entire run.

By mile 7, my torso was starting to get a bit warm in my double shirts, but in another few miles the breeze had picked up and I was glad that I had them. Sheesh, it was chilly.

Amazingly, nothing hurt. My feet felt good. My legs felt good. My throat and lungs felt fine. My lips, wearing a swipe of chapstick, were fine. My thighs and chest, coated in BodyGlide, were not chafing anywhere. I didn’t have any weird aches in my arms or back. The closest thing I had to any pain was the bottom of my nose, which was getting chapped in the cold.

This year, I was anticipating the two hills downtown, and while I can’t say that I ran up them, they were much less demoralizing than last year. As the finish line approached, it had been a few miles since I’d seen my last pacer, but I figured I was somewhere in the 2:13-2:18 range. I idly hoped I’d still beat my time from last year’s slow Galveston Half.

Ultimately, I crossed the finish with a chip time of 2:09:24, a 9:52/m pace.

I wish FitBit made it easier to trim their data to the actual run portion of the race :)

This year, I made sure not to blow by the volunteers passing out the medals just over the finish line.

Tired but feeling like I could’ve easily run a few more miles at a slow pace, I hopped the bus back to the starting point. I grabbed my phone and a jacket from the car and walked a mile to the bagel shop to get a coffee and celebratory breakfast sandwich… the day felt even colder. Back home on the couch after a long warm shower, I signed up for next year’s race.

Next month is the Galveston Half Marathon. I hope to run the race with the 2:00 pacer the whole way, but I’ll settle for beating 2:09, five minutes faster than last year’s effort.

The Blind Doorkeeper Problem, or, Why Enclaves are Tricky

Protecting a secret on a client device is a long-standing problem: there are many strategies, but most of them are doomed. Still, security experts have chipped away at the problem’s edges over the years, and over the last decade there’s been growing interest in using enclaves as a means to protect secrets from attackers.

Background: Enclaves

The basic idea of an enclave is simple: you can put something into an enclave, but never take it out. For example, this means you can put the private key of a public/private key pair into the enclave so that it cannot be stolen. Whenever you need to decrypt a message, you simply pass the ciphertext into the enclave and the private key is used to decrypt the message and return the plaintext, crucially without the private key ever leaving the enclave.

There are several types of enclaves, but on modern systems, most are backed by hardware — either a TPM chip, a custom security processor, or features in mainstream CPUs that enable enclave-style isolation within a general-purpose CPU. Windows exposes a CreateEnclave API to allow running code inside an enclave, backed by virtualization-based security (VBS) features in modern processors. The general concept behind Virtual Secure Mode is simple: code running at the normal Virtual Trust Level (VTL0) cannot read or write memory “inside” the enclave, which runs its code at VTL1. Even the highly-privileged OS Kernel code running at VTL0 cannot spy on the content of a VTL1 enclave.

DLLs loaded into an enclave must be signed by a particular type of certificate (currently, as I understand it, only available for Microsoft code) and the code’s signature and integrity are validated before it is loaded into the enclave. After the privileged code is loaded into the enclave, it has access to all of the memory of the current process (both untrusted VTL0 and privileged VTL1 memory). In-enclave code cannot load most libraries and thus can only call a tiny set of external library functions, mostly related to cryptography.

Security researchers spend a lot of time trying to attack enclaves for the same reason that robbers try to rob banks: because that’s where the valuables are. At this point, most enclaves offer pretty solid security guarantees – attacking hardware is usually quite difficult, which makes many attacks impractically expensive or unreliable.

However, it’s important to recognize that enclaves are far from a panacea, and the limits of the protection provided by an enclave are quite subtle.

A Metaphor

Imagine a real-world protection problem: You don’t want anyone to get into your apartment, so you lock the door when you leave. However, you’re in the habit of leaving your keys on the bar when you’re out for drinks and bad guys keep swiping them and entering your apartment. Some especially annoying bad guys don’t just enter themselves, they also make copies of your key and share it with their bad-guy brethren to use at their leisure.

You hit on the following solution: you change your apartment’s lock, making only one key. You hire a doorkeeper to hold the key for you, and he wears it on a chain around his neck, never letting it leave his person. Every time you need to get in your apartment, you ask the doorkeeper to let you in and he unlocks the door for you.

No one other than the doorkeeper ever touches the key, so there’s no way for a bad guy to steal or copy the key.

Is this solution secure?

Well, no. The problem is that you never gave your doorkeeper instructions on who is allowed to tell him to unlock the door, so he’ll open it for anyone who asks. Your carefully-designed system is perfectly effective in protecting the key but utterly fails in achieving the actual protection goal, protecting the contents of your apartment.

What does this have to do with enclaves?

Sometimes, security engineers get confused about their goals, and believe that their objective is to keep the private key secret. Keeping the private key secret is simply an annoying requirement in service of the real goal: ensuring that messages can be decrypted/encrypted only by the one legitimate owner of the key. The enclave serves to prevent that key from being stolen, but preventing the key from being abused is a different thing altogether.

Consider, for example, the case of locally-running malware. The malware can’t steal the enclaved key, but it doesn’t need to! It can just hand a message to the code running inside the enclave and say “Decrypt this, please and thank you.” The code inside the enclave dutifully does as it’s asked and returns the plaintext out to the malware. Similarly, the attacker can tell the enclave “Encrypt this message with the key” and the code inside the enclave does as directed. The key remains a secret from the malware, but the crypto system has been completely compromised, with the attacker able to decrypt and encrypt messages of his choice.
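
A toy model makes the problem obvious. In the sketch below (which assumes Python’s cryptography package; real enclaves are hardware- or VBS-backed, not objects in the same interpreter), the private key never leaves the “enclave,” yet any local caller can still have things decrypted:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

class ToyEnclave:
    def __init__(self):
        # The private key is created "inside" and is never handed out.
        self._private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def public_key(self):
        return self._private_key.public_key()

    def decrypt(self, ciphertext):
        # Nothing here asks WHO is calling, or WHY: the doorkeeper is blind.
        return self._private_key.decrypt(ciphertext, OAEP)

enclave = ToyEnclave()
ciphertext = enclave.public_key().encrypt(b"the user's secret", OAEP)

# Locally-running malware never sees the key, but it doesn't need to:
print(enclave.decrypt(ciphertext))   # b"the user's secret"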

So, what can we do about this? It’s natural to think: “Ah, we’ll just sign/encrypt messages from the app into the enclave and the code inside the enclave will validate that the calls are legitimate!” but a moment later you’ll remember: “Ah, but how do we protect that app’s key?” and we’re back where we started. Oops.

Another idea is that the code inside the enclave will examine the running process and determine whether the app/caller is the expected legitimate app. Unfortunately, this is extremely difficult. While the VTL1 code can read all of the app’s VTL0 memory, to confidently determine that the host app is legitimate would require something like groveling all of the executable pages in the process’ memory, hashing them, and comparing them to a “known good” value. If the process contains any unexpected code, it may be compromised. Even if you could successfully implement this process snapshot hash check, an attacker could probably exploit a race condition to circumvent the check, and you’d be forever bedeviled with false positives caused by non-malicious code injection from accessibility utilities or security tools.

In general, any security checks from inside the enclave that look at memory in VTL0 are potentially subject to a time-of-check/time-of-use (TOCTOU) attack — an attacker can change any values at any time unless they have been copied into VTL1 memory.

Another idea would be to prompt the user: the code inside the enclave could pop up an unspoofable dialog asking “Hey, would you like me to sign this message [data] with your key?” Unfortunately, in the Windows model this isn’t possible — code running inside an enclave can’t show any UI, and even if it could, there’s nothing that would prevent such a confirmation UI from being redressed by VTL0 code. Rats.

Conclusion

Before you reach for an enclave, consider your full threat model and whether using the enclave would meaningfully mitigate any threats.

For example, token binding uses an enclave to render cookie theft useless – while token binding doesn’t mitigate the threat of locally-running malware abusing cookies via a sock-puppet browser, it does mitigate the threat of cookie theft via XSS or cookie leaks. Furthermore, it complicates the lives of malicious insiders using tightly locked-down corporate PCs at financial services firms – those PCs are heavily monitored and audited, so forcing an attacker to abuse the key on the device is a significant improvement in security posture.

A similar scenario involves Hardware Security Modules (HSMs) used in code-signing and certificate issuance scenarios: while the overall security goal is to prevent misuse of the key, preventing the attacker from egressing the key improves the threat model because it allows other components of the overall system (auditing, alerting, AV, EDR/XDR, etc) to combat attackers attempting to abuse the unexportable key.

-Eric

PS: A great discussion of hardware-backed enclaves on popular phones can be read here.

Coding at Google

I wrote this a few years back, but I’ve had occasion to cite it yet again when explaining why engineering at Google was awesome. To avoid it getting eaten by the bit bucket, I’m publishing it here.

Background: From January 2016 to May 2018, I was a Senior SWE on the Chrome Enamel Security team.

Google culture prioritizes developer productivity and code velocity. The internal development environment has been described (by myself and others) as “borderline magic.” Google’s developer focus carried over to the Chrome organization even though it’s a client codebase, and its open-source nature means that it cannot depend upon Google-internal tooling and infrastructure.

I recounted the following experience after starting at Google:

When an engineer first joins Google, they start with a week or two of technical training on the Google infrastructure. I’ve worked in software development for nearly two decades, and I’ve never even dreamed of the development environment Google engineers get to use. I felt like Charlie Bucket on his tour of Willy Wonka’s Chocolate Factory—astonished by the amazing and unbelievable goodies available at any turn. The computing infrastructure was something out of Star Trek, the development tools were slick and amazing, the process was jaw-dropping.

While I was doing a “hello world” coding exercise in Google’s environment, a former colleague from the IE team pinged me on Hangouts chat, probably because he’d seen my tweets about feeling like an imposter as a SWE.  He sent me a link to click, which I did. Code from Google’s core advertising engine appeared in my browser in a web app IDE. Google’s engineers have access to nearly all of the code across the whole company. This alone was astonishing—in contrast, I’d initially joined the IE team so I could get access to the networking code to figure out why the Office Online team’s website wasn’t working.

“Neat, I can see everything!” I typed back. “Push the Analyze button” he instructed. I did, and some sort of automated analyzer emitted a report identifying a few dozen performance bugs in the code. “Wow, that’s amazing!” I gushed. “Now, push the Fix button” he instructed. “Uh, this isn’t some sort of security red team exercise, right?” I asked. He assured me that it wasn’t. I pushed the button. The code changed to fix some unnecessary object copies. “Amazing!” I effused. “Click Submit” he instructed. I did, and watched as the system compiled the code in the cloud, determined which tests to run, and ran them.

Later that afternoon, an owner of the code in the affected folder typed LGTM (Googlers approve changes by typing the acronym for Looks Good To Me) on the change list I had submitted, and my change was live in production later that day. I was, in a word, gobsmacked. That night, I searched the entire codebase for misuse of an IE cache control token and proposed fixes for the instances I found.

-Me, 2017

The development tooling and build test infrastructure at Google enable fearless commits—even a novice can make contributions into the codebase without breaking anything—and if something does break, culturally, it’s not that novice’s fault: instead, everyone agrees that the fault lies with the environment – usually either an incomplete presubmit check or missing test automation for some corner case. Regressing CLs (changelists) can be quickly and easily reverted and resubmitted with the error corrected. Relatedly, Google invests heavily in blameless post-mortems for any problem that meaningfully impacts customer experience or metrics. Beyond investing in researching and authoring the post-mortem in a timely fashion, post-mortems are broadly-reviewed and preventative action items identified therein are fixed with priority.

Google makes it easy to get started and contribute. When ramping up into a new space, the new engineer is pointed to a Wiki or other easily-updated source of step-by-step instructions for configuring their development environment. This set of instructions is expected to be current, and if the reader encounters any problems or changes, they’re expected to improve the document for the next reader (“Leave it better than you found it”). If needed, there’s usually a script or other provisioning tool used to help get the right packages/tools/dependencies installed, and again, if the user encounters any problems, the expectation is that they’ll either file a bug or commit the fix to the script.

Similarly, any ongoing Process is expected to have a “Playbook” that explains how to perform the process – for example, Chrome’s HSTS Preload list is compiled into the Chrome codebase from snapshots of data exported from HSTSPreload.org. There’s a “Playbook” document that explains the relevant scripts to run, when to run them, and how to diagnose and fix any problems. This Playbook is updated whenever any aspect of the process changes as a part of whatever checkin changes the process tooling.

As a relatively recent update, the Chromium project now offers a very lightweight contribution experience that can be run entirely in a web browser, which mimics the Google internal development environment (Cider IDE with Borg compiler backend).

Mono-repo, no team/feature branches. Google internally uses a mono-repo into which almost all code (with few exceptions, including Chrome) is checked in, and the permissions allow any engineer anywhere in the company to read it, dramatically simplifying both direct code reuse as well as finding expertise in a given topic. Because Chrome is an open-source project, it uses its own mono-repo containing approximately 25 million lines of code. Chrome does not, in general, use shared branches for feature development; branches exist only for releases (e.g. Canary is forked in order to create the Dev branch, and there are firm rules about cherry-picking from Main into those branches).

An individual developer will locally create branches for each fix that he’s working on, but those branches are almost never seen by anyone else; his PR is merged to HEAD at which point everyone can see it. As a consequence, landing non-trivial changes, especially in areas where others are merging, often results in many commits and a sort of “chess game” where you have to anticipate where the code will be moving as your pieces are put in. This strongly encourages developers to land code in many small CLs that coax the project toward the desired end-state, each with matching automated tests to ensure that you’re protected against anyone else landing a change that regresses your code. Those tests end up defending your code for years to come.

Because all work is done in Main, there’s little in the way of cross-team latency: you need not wait for a reverse or forward integration (RI/FI) to bring features to/from other branches.

Cloud build. Google uses cloud build infrastructure (Borg/Goma) to build its projects so developers can work on relatively puny workstations but compile with hundreds to thousands of cores. A clean build of Chrome for Windows that took 46 minutes on a 48 thread Xeon workstation would take 6 minutes on 960 Goma cores, and most engineers are not doing clean builds very often.

This Cloud build infrastructure is heavily leveraged throughout the engineering system—it means that when an engineer puts a changelist up for review, the code is compiled for five to ten different platforms in parallel in the background and then the entire automated test suite is run (“Tryjob”) such that the engineer can find any errors before another engineer even begins their code review. Similarly, artifacts from each landed CL’s compilation are archived such that there’s a complete history of the project’s binaries, which enables automated tooling to pinpoint regressions (performance via perfbots, security via ClusterFuzz, reliability via their version of Watson) and engineers to quickly bisect other types of regressions.

Great code search/blame. Google’s Code Search features are extremely fast and, thanks to the View-All monorepo and lack of branches, it’s very easy to quickly find code from anywhere in the company. Cross-references work correctly, so things like “Find References” will properly find all callers of a specific function rather than just doing a string search for that name. Viewing Git history and blame is integrated, so it’s quick and easy to see how code evolved over time.

24-hour Code Review Culture. Google’s engineering team has a general SLA of 24 hours on code-review. The tools help you find appropriate reviewers, and the automation helps ensure that your CL is in the best possible shape (proper linting, formatting, all tests pass, code coverage %s did not decline) before another human needs to look at it. The fast and simple review tools help reviewers concentrate on the task at hand, and the fact that almost all CLs are small/tiny by Microsoft standards helps keep reviews moving quickly. Similarly, Google’s worldwide engineering culture means that it’s often easy to submit a CL at the end of the day Pacific time and then respond to review feedback received overnight from engineers in Japan or Germany.

Opinionated and Enforced Coding Standards. Google has coding standards documents for each language (e.g. C++) that are opinionated and carefully revised after broad and deep discussions among practitioners interested in participating. These coding standards are, to the extent possible, enforced by automated tooling to ensure that all code is written to the standard, and these standards are shared across teams by default, with any per-project exceptions (e.g. Chrome’s C++) treated as an overlay.

Easily Discovered Area Interest/Ownership. Google has an extremely good internal “People Directory” – it allows you to search for any employee based on tags/keywords, so you can very quickly find other folks in the company that own a particular area. Think “Dr Whom/Who+” with 100ms page-load times, backed by a work culture where folks keep their own areas of ownership and interest up-to-date, both because it’s simple and because if they fail to do so, they’ll keep getting questions about things they no longer own. Similarly, the OWNERS files within the codebases are up-to-date because they are used to enforce OWNERS review of changes, so after you find a piece of code, it’s easy to find both who wrote it (fast GIT BLAME) and who’s responsible for it today. Company/Division/Team/Individual OKRs are all globally visible, so it’s easy to figure out what is important to a given level of the organization, no matter how remote.

Simple/fast bug trackers. Google’s bug tracker tools are simple, load extremely quickly, and allow filing/finding bugs against anything very quickly. There’s a single internal tracker for most of Google, and a public tracker (crbug.com) for the Chromium OSS project.

Simple/fast telemetry/data science tools. Google’s equivalent of Watson is extremely fast and has code to automatically generate stack information, hit counts, recent checkins near the top-of-stack functions, etc. Google’s equivalent of SQM/OCV is extremely fast and enables viewing of histograms and answering questions like “What percentage of page loads result in this behavior” without learning a query language, getting complicated data access permissions, or suffering slow page loads. These tools enable easy creation of “notifications/subscriptions” so developers interested in an area can get a “chirp” email if a metric moves meaningfully.

Sheriffs and Rotations. Most recurring processes (e.g. bug triage) have both a Sheriff and a Deputy and Google has tools for automatically managing “rotations” so that the load is spread throughout the team. For some expensive roles (e.g. a “Build Sheriff”) the developer’s primary responsibility while sheriff becomes the process in question and their normal development work is deferred until their rotation ends; the rotation tool shows the schedule for the next few months, so it is relatively easy to plan for this disruption in your productivity.

Intranet Search that doesn’t suck. While Google tries to get many important design docs and so forth into the repo directly, there’s still a bunch of documentation and other things on assorted wikis and Google Docs, etc. As you might guess, Google has an internal search engine for this non-public content that works quite well, in contrast to other places I’ve worked.

Fall 2023 Races

While I’ve been running less, I haven’t completely fallen out of the habit, and I still find spending an hour on the treadmill to be the simplest way to feel better for the rest of the day. Real-world racing remains appealing, for the excitement, the community, and for the forcing function to get on the treadmill even on days when I’m not “feeling it.”

Alas, I’m not very proud of any of my fall race times, but I am glad that I’ve managed to keep up running after the motivation of Kilimanjaro has entered the rear-view mirror, and I’m happy that I recently managed to sneak up on running my longest-by-far distance.

Daisy Dash 10K – 10/22

I had lucky bib #123 for this run. The course wasn’t much to look at, but was pleasant enough, looping around a stadium and a few shopping centers in “Sunset Valley” on the south side of Austin. I was worried going out that my lower back had been sore for a few days, and my belly felt slightly grumbly, but once I started running, neither bothered me at all for the next few hours. Thanks, body!

It was 68°F and breezy; a bit humid, but not nearly as bad as the Galveston Half I ran 9 months ago. I brought along an iPhone and Bluetooth earbuds, but this ended up being my second race without music after more stupid technical problems: in Galveston, my earbud died, and this time, I’d failed to download any music. Doh!

I burned 910 calories, and while my heart rate wasn’t awesome, it recovered quickly whenever my pace dropped.

I started out reasonably strong, finishing the first 5K in 26:39. I was wearing my Fitbit watch, but immediately abandoned any attempt to use it to monitor my pace during the run. I’m pleasantly surprised to find that I naturally settled into the exact 7mph pace that is my most common treadmill default.

Sadly, after the first 5K, I was panting and started taking 30 to 90 second walking breaks every three quarters of a mile or so, dropping my pace significantly in the back half of the race.

I tried to psych myself up with thoughts of Kilimanjaro and how short this race was (only 10K! I keep thinking), but I never got into that heavenly rhythm where the minutes just slide by for a while.

Still, I vowed to finish in under an hour, and in the last mile I was excited that I might beat 58 minutes, putting me about 5 minutes behind my second Cap10K in the spring, and still 10 minutes faster than my first real-world 10K.

Ultimately, I finished in 57:40:

While my arches felt a bit sore after the race (pavement is clearly much harder on me than my treadmill), I felt like running an extra four miles wouldn’t’ve been a nightmare… good news for the “Run for the Water” 10 miler coming up in two short weeks. But first, a charity 5K.

Microsoft Giving Campaign 5K – 10/27

The Microsoft Giving Campaign’s charity 5K consisted of five laps around a small lake near Microsoft’s north Austin office. It was fun to meet other Microsoft employees (many of whom started during the pandemic) and go for a fun run around the private Quarry Lake trail.

Alas, I had a hard time with my pacing, and dodging the mud puddles was less fun than I’d expected.

In total, it took me 32:41 to run the slightly-extended (3.27-mile) course.

Last year’s 5K was on a different course and I’d gone out too fast, ultimately needing to walk in the third mile, such that when I finished it, I turned around and ran the full course again, this time at a steady pace. While I don’t seem to have recorded my times anywhere, I think both runs were around 27 minutes.

Run for the Water – 10 Miler 11/5

The weather for the race was nearly perfect. I was nervous, having remembered last year’s race, but excited to revisit this race with another year’s worth of running under my belt.

I managed the steep hill at mile 6 well, but fell off pace at miles 8 and 9.

Sprinting at the end, my sunglasses went flying off my shirt; embarrassingly, I had to loop back into the finishers’ chute after crossing the finish line to collect them.

I finished in 1:38:05, a pace of 9:48, a somewhat-disappointing 9:08 slower than last year’s race. Ah well.

Turkey Trot 5-Miler 11/23

I arrived for the Thanksgiving day race very early and ended up spending almost an hour outside before the race began, getting a bit colder than I’d’ve liked. I warmed up quickly when the run began.

I finished in 45:45, a pace of 9:09, a bit (1:39) slower than last year.

Shortly before the race started, I made the poor decision to eat a free granola bar and it didn’t sit well during the run. After sprinting through the finishers chute, I spent a panicked half-minute trying to find a convenient place to throw up amongst the thousands of bystanders until my heart calmed down and the urge subsided.

Virtual Decker Challenge Half 12/9

Having surveyed the boring Decker Challenge course last year while driving by during my missed half, I decided to sign up for the virtual option this year. The 2022 shirt (made by Craft) has become one of my favorites, and I wore it constantly on Kilimanjaro, including on summit day. I registered for the Virtual event to collect this year’s shirt, but alas, it turned out to be a disappointing no-name black long-sleeve tech shirt. There were basically no other perks to the virtual race — no medal, and nowhere even to report your virtual results. Bummer.

For the race itself, I chose to repeat the Jackson Hole half that I ran last year, beating last year’s time by almost 4 minutes, with a time of 2:00:20. This wasn’t a great time, but I was grateful for it: I’d had quite a bit of wine with dinner on Friday night, but had to run on Saturday morning because on Sunday I was taking my eldest to the Eagles/Cowboys game.

Virtual London Marathon 12/17

On this last weekend before the holidays, with my kids partying at their mom’s and my housemate out of town on a cruise, I was bored and starting to feel a bit stir-crazy. I decided I’d run a marathon on my treadmill, picking London with Casey Gilbert. Unfortunately, the race was broken up into segments, so each switch to the next segment became an “aid station” where I’d refill my water bottle or grab another stroopwafel.

The first half went great – I ran at a constant pace and finished in 1:55, feeling good and ready to keep going; my relatively slow pace kept my heart rate down. I finished the first twenty miles in a respectable 3:04:30, starting to worry that finishing in under 4 hours was going to be tough, but achievable. Unfortunately, things fell apart in miles 21 to 25, and my pace dropped dramatically; even though my heart rate was well under control, my legs were tired and I was beat. I rallied for the final three quarters of a mile back to 7mph, finishing with a somewhat disappointing but perhaps unsurprising 4:20:10.

Still, I’m proud of this race – while it was undoubtedly much easier on the body than a real-world marathon, it was in no way easy and I never gave up.

After cooling down, I showered, ate, and caught “Love Actually” at the Alamo Drafthouse for the afternoon. My FitBit reported that I cracked 45000 steps for the day. My intake of three running stroopwafels, two bananas, a leftover fajita, a salad, a steak and vegetables probably did not cover the 4000+ calories I burned in the run.

Two days later, I’m still sore, although all three blisters have mostly faded away.

Early 2024 Races

I’ve got a good slate for early 2024, running the 3M Half again in January, the Galveston Half in February, and the Capitol 10K in April. I really hope to beat last year’s times in the Half Marathons (slow and steady ftw!), but don’t expect I’ll be able to beat my 2023 Cap10K time of 52:25.

Defense Techniques: Blocking Protocol Handlers

Application Protocols represent a compelling attack vector because they’re the most reliable and cross-browser compatible way to escape a browser’s sandbox, and they work in many contexts (Office apps, some PDF handlers, some chat/messaging clients, etc).

Some protocol handlers are broadly used, while others are only used for particular workflows which may not be relevant in the user or company’s day-to-day workflows.

The Edge team can already block known-unsafe protocol schemes via a browser Component Update, but that’s only likely to happen if the scheme is broadly exploited across the ecosystem.

An organization or individual may wish to reduce attack surface by blocking the use of unwanted protocol handlers.

To block access to a given protocol handler from the browser, you can set the URLBlocklist policy in Chrome and Edge.

For example, in 2022, the ms-appinstaller handler had a security vulnerability, and many organizations that did not need this handler in their environments wished to disable it. They could set the following policies:

REG ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\URLBlockList" /v "1" /t REG_SZ /d "ms-appinstaller:*" /f

REG ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\URLBlockList" /v "1" /t REG_SZ /d "ms-appinstaller:*" /f

After the policy is set, attempts to navigate the browser to the protocol will show an error page:

After the 2022 incident, the App Installer team also created a specific group policy to disable the MS-AppInstaller scheme within the handler itself:

REG ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\AppInstaller" /v "EnableMSAppInstallerProtocol" /t REG_DWORD /d "0" /f

When the App Installer protocol is disabled within the handler itself, invoking the protocol simply shows an error message:

Stay safe out there!

-Eric

PS: On Windows 11 today, unfortunately, there’s not always a simple way to block an arbitrary scheme for the system as a whole, unless the protocol handler application can be uninstalled entirely.

The Windows “Set a default for a link type” settings pane only allows you to choose a different Microsoft Store app that offers to support the protocol scheme; it doesn’t allow you to unset the scheme entirely or point it to a harmless non-handler (e.g. Calculator).

To address this, we have to tell Windows that our generic little executable (saved to C:\windows\alert.exe) knows how to handle the ms-appinstaller protocol scheme, by listing it in the RegisteredApplications key with a URLAssociations key professing support:

Windows Registry Editor Version 5.00

; Define a ProgID whose "open" verb launches our harmless stub executable
[HKEY_CLASSES_ROOT\com.bayden.alert]

[HKEY_CLASSES_ROOT\com.bayden.alert\shell]

[HKEY_CLASSES_ROOT\com.bayden.alert\shell\open]

[HKEY_CLASSES_ROOT\com.bayden.alert\shell\open\command]
@="\"C:\\windows\\alert.exe\" \"%1\""

; Register the app so it appears in the Windows "default apps" UI
[HKEY_CURRENT_USER\Software\RegisteredApplications]
"BaydenAlert"="SOFTWARE\\Bayden Systems\\Alert\\Capabilities"

[HKEY_CURRENT_USER\Software\Bayden Systems\Alert]

; Profess support for the ms-appinstaller scheme, mapping it to our ProgID
[HKEY_CURRENT_USER\Software\Bayden Systems\Alert\Capabilities]

[HKEY_CURRENT_USER\Software\Bayden Systems\Alert\Capabilities\URLAssociations]
"ms-appinstaller"="com.bayden.alert"

After we do this, we can change the protocol handler app from the real one to our stub:

…such that subsequent attempts to invoke the handler are harmlessly displayed:

Attack Techniques: Steganography

Attackers are incentivized to cloak their attacks to avoid detection, keep attack chains alive longer, and make investigations more complicated.

One type of cloaking involves steganography, whereby an attacker embeds hidden data inside an otherwise innocuous file. For instance, an attacker might embed their malicious code inside an image file, not in an attempt to exploit a vulnerability in image parsers, but instead just as a convenient place to stash malicious data. That malware-laden image file may then be hosted anywhere that allows image uploads. There are plenty of such places on the public internet, because image file types are generally not considered dangerous.

In the recent Diamond Sleet attack, the attackers embedded the second stage of the attack as data inside of a PNG file and hosted that file on three unwitting web services, including the popular Imgur and GitHub services. The first stage of the attack code reaches out to one of three URLs, downloads the image, extracts the attack code, decrypts it, and runs the resulting malware.

When parsing the malicious PNG file, we see that the attackers got lazy – they shoved their data in the middle of the file after the end of its final IDAT chunk and before the IEND chunk.

In this case, the attackers didn’t bother formatting their attack as a valid PNG chunk; even though the malicious data is only 498,176 bytes long, the bytes 1518E13A at the front of the malicious content would tell a PNG parser to expect almost 354MB of data in the chunk.
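
You can see this yourself by interpreting those four bytes the way a PNG parser would, as a big-endian chunk length:

declared = int.from_bytes(bytes.fromhex("1518E13A"), "big")
print(f"{declared:,} bytes (~{declared / 1_000_000:.0f} MB)")   # 353,952,058 bytes (~354 MB)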

But none of that matters to the attacker’s code — they don’t need to parse the file as if it were legitimate, they just grab the part of the file they care about and ignore the rest.

Developers of malicious browser extensions have been using this approach for years, because they learned from experience that the JavaScript files inside extension uploads get more scrutiny from browsers’ web stores’ security reviewers than the other files in the extension packages.

Defenders who want to detect hidden code can try to look for anything suspicious: malformed chunks, unknown chunk types, trailing data, and suspiciously inefficient files (e.g. much larger than the pixel count would suggest). But ultimately, there’s no way to guarantee that you’ll ever detect embedded messages.
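
As a rough sketch of such structural checks, the following walks a PNG’s chunks and flags declared lengths that overrun the file, a missing IEND, or trailing bytes; it says nothing about data hidden inside legitimate-looking pixel values. (In the case above, the bogus 0x1518E13A length would trip the “more than remain in the file” check.)

import struct, sys

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def scan(path):
    data = open(path, "rb").read()
    if not data.startswith(PNG_SIGNATURE):
        print("Not a PNG file")
        return
    offset, saw_iend = len(PNG_SIGNATURE), False
    while offset + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[offset:offset + 8])
        if offset + 12 + length > len(data):
            print(f"Suspicious: chunk {ctype!r} at offset {offset} declares "
                  f"{length:,} bytes, more than remain in the file")
            return
        offset += 12 + length  # 4 length + 4 type + data + 4 CRC
        if ctype == b"IEND":
            saw_iend = True
            break
    if not saw_iend:
        print("Suspicious: no IEND chunk found")
    elif offset < len(data):
        print(f"Suspicious: {len(data) - offset:,} bytes of trailing data after IEND")
    else:
        print("No structural anomalies found")

if __name__ == "__main__":
    scan(sys.argv[1])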

A sufficiently motivated attacker could encrypt their malware and then encode it as legitimate pixel data (say, the “random” pixels of the stars in the sky) and there’s no way for a researcher to detect it without knowing the decryption routine. That said, finding an image’s URL inside a captured copy of the first stage or its network traffic is typically a pretty strong indication that there’s something malicious embedded in the file, because attackers tend not to bother downloading data they don’t need.

-Eric