
Last update: Feb 11, 2020

I started building browser extensions more than 22 years ago, and I started building browsers directly just over 16 years ago. At this point, I think it’s fair to say that I’m entering the grizzled veteran phase of my career.

With the Edge team continuing to grow with bright young minds from college and industry, I’m increasingly often asked “Where do I learn about browsers?” and I haven’t had a ready answer for that question.

This post aims to answer it.

First, a few prerequisites for developing expertise in browsers:

  1. Curiosity. While browsers are more complicated than ever, there are also better resources than ever to learn how they work. All major browsers are now based on open-source code, and if you’re curious, you no longer need to join a secret priesthood to discover how they operate under the hood.
  2. Willingness to Experiment. Considering how complex browsers are (and because they’re so diverse across platforms, makers, and versions), it’s often easiest to definitively answer questions about how browsers work by trying things, rather than reading an explainer (possibly outdated, or a map that doesn’t match the terrain) or reading the code (often complex and potentially misleading). Build test cases and try them in each browser to see what happens. When you encounter surprising behavior, let your curiosity guide you into figuring it out. Browsers contain no magic, but plenty of butterfly effects.
  3. Doggedness. I’ve been doing this for half of my life, and I’m still learning daily. While historical knowledge will serve you well, things are changing in this space every day, and keeping up is an endless challenge. And it’s often fun.

Now, how do you apply these prerequisites and grow to become a master of browsers? Read on.

Fundamental Understanding

Over the years, a variety of broad resources have been developed that will give you a good foundation in the fundamentals of how browsers work. Taking advantage of these will help you more effectively explore and learn on your own.

  • First, I recommend reading the Chrome Comic Book. This short, 38-page comic book from comics legend Scott McCloud was published alongside the first version of Google Chrome back in 2008. It clearly and simply explains many of the core concepts behind modern browsers as application platforms.
  • HTML5Rocks has a great introduction, How Browsers Work. This lengthy and detailed article explains how browsers turn HTML and CSS into what you see on the screen. Read it and you’ll understand more about this topic than 90% of web developers.
  • The folks at Google have created a fantastic four-part illustrated series about how modern browsers work, Inside look at modern web browsers (covering navigation, the rendering engine, and input and compositing), as part of their Web Fundamentals site.
  • Mozilla wrote a fantastic cartoon introduction to WebAssembly, explaining the basics behind this new technology; there’s tons of other invaluable content on Mozilla Hacks.
  • The Chromium Chronicle is a monthly series geared specifically to the Chromium developers who build the browser.
  • Web Developers should check out Web.Dev, a great source of articles on building fast and secure websites.

Books

If you prefer to learn from books, I can only recommend a few. Sadly, there are few on browsers themselves (largely because they tend to evolve too quickly), but there are good books on web technologies.

Tools

One of the best ways to examine what’s going on in browsers is to use tools to watch what they’re doing as you use your favorite websites.

  • Built-in DevTools (just hit F12!) – They’re amazingly powerful. I don’t know of a great tutorial, but there are likely some on YouTube.
  • Telerik Fiddler – See what requests hit the network and what they contain.
  • The VisBug Chrome Extension – Easily manipulate any page layout, directly in your browser.

Use the Source, Leia

The fact that all of the major browsers are built atop open-source projects is a wonderful thing. No longer do you need to be a reverse-engineering ninja with a low-level debugger to figure out how things are meant to work (although sometimes such approaches can still be super-valuable).

Source code locations:

Navigating the Code

While simply perusing a browser’s source code might give you a good feel for the project, browsers tend to be enormous. Chromium is over 10 million lines of code, for example.

If you need to find something in particular, one effective way is to search for a string shown in the browser UI near the feature of interest. (Or, if you’re searching for a DOM function name or HTML attribute name, try searching for that.) We might call this method string chasing.

By way of example, today I encountered an unexpected behavior in the handling of the “Go to <url>” command on Chromium’s context menu:

So, to find the code that implements this feature, I first try searching for that string:

…but there are a gazillion hits, which makes it hard to find what I need. So I instead search for a string that’s elsewhere in the context menu, and find only one hit in the Chromium “grd” (resources) file:

When I go look at that grd file, I quickly find the identifier I’m really looking for just below my search result:

So, we now know that we’re looking for usages of IDS_CONTENT_CONTEXT_GOTOURL, probably in a .CC file, and we find that almost immediately:

From here, we see that the menu item has the command identifier IDC_CONTENT_CONTEXT_GOTOURL, which we can then continue to chase down through the source until we find the code that handles the command. That command makes use of a variable selection_navigation_url_, which is filled elsewhere by some pretty complicated logic.

After you gain experience in the Chromium code, you might learn “Oh, yeah, all of the context menu stuff is easy to find, it’s in the renderer_context_menu directory” and limit your searches to that area, but after four years of working on Chrome, I still usually start my searches broadly.
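If you want to script this kind of string chasing yourself rather than using Chromium Code Search or grep, here’s a minimal sketch of the idea in TypeScript (Node.js). The paths and file extensions are illustrative assumptions, not part of the original walkthrough:

// Sketch of "string chasing": find which resource (.grd/.grdp) files define an
// identifier, then find the .cc files that use it. In practice, Code Search or
// `git grep` is faster; this just makes the two-step method concrete.
import { readdirSync, readFileSync, statSync } from "fs";
import { join } from "path";

function filesContaining(root: string, needle: string, exts: string[]): string[] {
  const hits: string[] = [];
  for (const entry of readdirSync(root)) {
    const path = join(root, entry);
    if (statSync(path).isDirectory()) {
      hits.push(...filesContaining(path, needle, exts));
    } else if (exts.some((ext) => path.endsWith(ext)) &&
               readFileSync(path, "utf8").includes(needle)) {
      hits.push(path);
    }
  }
  return hits;
}

// Step 1: which resource files mention the identifier near the UI string?
console.log(filesContaining("src/chrome", "IDS_CONTENT_CONTEXT_GOTOURL", [".grd", ".grdp"]));
// Step 2: which .cc files consume that identifier?
console.log(filesContaining("src/chrome", "IDS_CONTENT_CONTEXT_GOTOURL", [".cc"]));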

Optional: Compile the code

If you’d actually like to compile the code of a major browser, things are a bit more involved, but if you follow the configuration instructions to the letter, your first build will succeed. Back in 2015, Monica Dinculescu created an amazing illustrated guide to contributing to Chromium.

You can compile Chromium or Firefox on a mid-range machine from 2016, but it will take quite a long time. A beefy PC will speed things up a bunch, but until we have cloud compilers available to the public, it’s always going to be pretty slow.

Look at their Bugs

All browsers except Microsoft Edge have a public bug tracker where you can search for known issues and file new bugs if you encounter them.

  • Firefox – Firefox Bugzilla
  • WebKit – WebKit Bugzilla
  • Chromium – CRBug
  • Microsoft Edge – Platform bugs that are inherited from Chromium are tracked using CRBug. Sadly, at present there is no public tracker for bugs that reproduce only in Edge. Bugs reported via the “Feedback” button are tracked internally by Microsoft.
  • Brave – on GitHub
  • HTML5 Specification – on GitHub

Blogs to Read

  • This One – I write mostly about browsers.
  • My (archived) IEInternals – I started writing this blog because it was the only reliable way for me to find my notes from investigations and troubleshooting years later.
  • Cloudflare’s – Cloudflare is a $5B company whose primary product is their amazing blog. I understand they also run a CDN on the side to generate interesting topics for their blog to talk about.
  • Nasko Oskov’s – Nasko is an engineer on the Chrome Security team and writes mostly about security topics.
  • Chris Palmer’s – Chris is an engineer on the Chrome Security team and writes about secure design.
  • Adam Langley’s – Google’s expert cryptographer
  • Bruce Dawson’s – Bruce is a Chrome Engineer who posts lots of interesting information about debugging and performance troubleshooting, especially on Windows.
  • Anne van Kesteren’s – Anne works on the HTML5 spec.
  • Mark Nottingham’s – Mark co-chairs the HTTP and QUIC working groups.

Specific Posts of Interest

People to Follow

I’ve doubtless forgotten some; see who I follow.

The Business of Browsers

Public data reveals that each point of market share in the browser market is worth at least $100,000,000 USD annually (most directly in the form of payments from the browser’s configured search engine).

Remembering this fact will help you understand many other things, from how browsers pay their large teams of expensive software engineers, to how they manage to give browsers away for free, to why certain features behave the way that they do.

Extra Resources

Browsers are hugely complicated beasts, and tons of fun. If the resources above leave you feeling both overwhelmed and excited, maybe you should become a browser builder.

Want to change the world? Come join the new Microsoft Edge team today!

-Eric

While I do most of my work in an office, from time to time I work on code changes to Chromium at home. With the recent deprecation of Jumbo Builds, building the browser on my cheap 2016-era Dell XPS 8900 (i7-6700K) went from unpleasant to impractical. While I pondered buying a high-end Threadripper, I couldn’t justify the high cost, especially given the limited performance characteristics for low-thread workloads (basically, everything other than compilation).

The introduction of the moderately priced (nominally $750) 16-core Ryzen 3950X hit the sweet spot, so I plunked down my credit card and got a new machine from a system builder. Disappointingly, it took almost two months to arrive in a working state, but things seem to be good now.

The AMD Ryzen 3950X has 16 cores with two threads each, and runs at around 3.95 GHz when they’re all fully loaded; it’s cooled by a CyberPowerPC DeepCool Castle 360EX liquid cooler. An Intel Optane 905P 480GB system drive holds the OS, compilers, and Chromium code. The key advantage of the Optane over more affordable SSDs is that it has a much higher random read rate (~400% as fast as the Samsung 970 Pro I originally planned to use).

Following the Chromium build instructions, I configured my environment and set up a 32-bit component build with reduced symbols:

is_component_build = true
enable_nacl = false
target_cpu = "x86"
blink_symbol_level = 0
symbol_level = 1

Atop Windows 10 1909, I disabled Windows Defender entirely, and didn’t do anything too taxing with the PC while the build was underway.

Ultimately, a clean build of the “chrome” target took just under 53 minutes, achieving 33.3x parallelism.

While this isn’t a fast result by any stretch of the imagination, it’s still faster than my non-jumbo local build times back when I worked at Google in 2016/2017 and used a $6000 Xeon 48-thread workstation to build Chrome, at somewhere around half of the cost.

Cloud Compilation

When I first joined Google, I learned about the seemingly magical engineering systems available to Googlers, quickly followed by the crushing revelation that most of those magic tools were not available to those of us working on the Chromium open-source project.

The one significant exception was that Google Chrome engineers had access to a distributed build system called “Goma” which would allow compiling Chrome using servers in the Google cloud. My queries around the team suggested that only a minority of engineers took advantage of it, partly because (at the time) it didn’t generate very debuggable Windows builds. Nevertheless, I eventually gave it a shot and found that it cut perhaps five minutes off my forty-five minute jumbo build times on my Xeon workstation. I rationalized this by concluding that the build must not be very parallelizable, and by noting that I worked remotely from Austin, so any build artifacts from the Goma cloud would be much further away from me than from my colleagues in Mountain View.

Given the complexity of the configuration, I stopped using Goma, and spent perhaps half of my tenure on Chrome with forty-five minute build times[1]. Then, one day I needed to do some development on my MacBook, and I figured its puny specs would benefit from Goma in a way my Xeon workstation never would. So I went back to read the Goma documentation and found a different reference than the one I’d seen originally. This one mentioned a (then unknown to me) “-j” command-line argument that tells the build system how many cloud cores to use.

This new, better documentation noted that by default the build system would just match your local core count, but when using Goma you should instead demand ~20x your local core count, so -j 960 for my workstation. With one command-line argument, my typical compiles dropped from 45 minutes to around 6.

::suitable_meme_of_wonder_and_fury::

Returning to Edge

I returned to Microsoft as a Program Manager on the Edge team in mid-2018, unaware that replatforming atop Chromium was even a possibility until the day before I started. Just before I began, a lead sent me a 27-page PDF file containing the Edge-on-Chromium proposal. “What do you think?” he asked. I had a lot of thoughts (most of them of the form “OMG, yes!”) but one thing I told everyone who would listen is that we would never be able to keep up without having a cloud-compilation system akin to Goma. The Google team had recently open-sourced the Goma client, but hadn’t yet open-sourced the cloud server component. I figured the Edge team had years of engineering work ahead of us to replicate that piece.

When an engineer on the team announced two weeks later that he had “MSGoma” building Chromium using an Azure cloud backend, it was the first strong sign that this crazy bet could actually pay off.

And pay off it has. While I still build locally from time to time, I typically build Chromium using MSGoma from my late 2018 Lenovo X1 Extreme laptop, with build times hovering just over ten minutes. Cloud compilation is a game changer.

The Chrome team has since released a Goma Server implementation, and several other major Chromium contributors are using distributed build systems of their own design.

I haven’t yet tried using MSGoma from my new Ryzen workstation, but I’ve been told that the Optane drive is especially helpful when performing distributed builds, due to the high incidence of small random reads.

-Eric

[1] This experience recalled a much earlier one: my family moving to Michigan shortly after I turned 11. Our new house featured a huge yard. My dad bought a self-propelled lawn mower and my brother and I took turns mowing the yard weekly. The self-propelled mower was perhaps fifteen pounds heavier than our last mower, and the self-propelling system didn’t really seem to do much of anything.

After two years of weekly mows from my brother and me, my dad took a turn mowing. He pushed the lawn mower perhaps five feet before he said “That isn’t right,” reached under the control panel and flipped a switch. My brother and I watched in amazement and dismay as the mower began pulling him across the yard.

Moral of the story: Knowledge is power.

This is an introduction/summary post which will link to individual articles about browser mechanisms for communicating directly between web content and native apps on the local computer.

This series aims to provide, for each mechanism, information about:

  • On which platforms is it available?
  • Can the site detect that the app/mechanism is available?
  • Can the site send more than one message to the application without invoking the mechanism again, or is it fire-and-forget?
  • Can the application bidirectionally communicate back to the web content via the same mechanism?
  • What are the security implications?
  • What is the UX?

Application Protocols

Blog Post

tl;dr: Apps can register protocol schemes. Browsers will spawn the apps when navigating to the scheme.

Characteristics: Fire-and-Forget. Non-detectable. Supported across all browsers for decades; supported on desktop platforms, but typically not on mobile platforms. Prompts on launch by default, but the prompt usually can be disabled.
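To make the fire-and-forget nature concrete, here’s a minimal sketch of the web-content side; the “myapp” scheme is a hypothetical example of a protocol a native application might register:

// Launch a (hypothetical) registered protocol handler from web content.
// The page receives no signal about whether an app actually handled the
// navigation, which is why this mechanism is fire-and-forget and non-detectable.
function launchNativeApp(): void {
  window.location.href = "myapp://open?document=readme";
}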

Native Messaging via Extensions

Blog Post – Coming someday. For now, see nativeMessaging API.

tl;dr: Browser extensions can communicate with a local native app over stdin/stdout, passing JSON between the app and the extension. The extension may pass information to/from web content if desired.

Characteristics: Bi-directional communications. Detectable. Supported across most modern browsers; not legacy IE. Dunno about Safari. Prompts on install, but not required to use.
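As a rough sketch of the extension side, assuming an extension that declares the nativeMessaging permission; the host name below is hypothetical and must match a native messaging host manifest registered by the app:

// Open a persistent channel to a (hypothetical) native messaging host.
const port = chrome.runtime.connectNative("com.example.native_host");

// Bi-directional: JSON messages flow both ways over the app's stdin/stdout.
port.onMessage.addListener((message) => {
  console.log("Received from native app:", message);
});
port.onDisconnect.addListener(() => {
  console.log("Native host exited or was never available.");
});
port.postMessage({ text: "Hello from the extension" });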

File downloads (Traditional)

Blog Post – Coming soon.

tl;dr: Apps can register to handle certain file types. User may spawn the app to open the file.

Characteristics: Fire-and-Forget. Non-detectable. Supported across all browsers. Prompts for most file types, but some browsers allow disabling.

File downloads (DirectInvoke)

Blog Post

Internet Explorer/Edge support DirectInvoke, a scheme whereby a file handler application is launched with a URL instead of a local file.

Characteristics: Fire-and-Forget. Non-detectable. Windows only. Supported in Internet Explorer, Edge 18 and below, and Edge 78 and above. Degrades gracefully into a traditional file download.

Local Web Server

Blog Post – Coming someday.

tl;dr: Apps can run an HTTP(S) server on localhost, and internet webpages can communicate with that server using fetch/XHR/WebSocket/WebRTC, etc.

Future restrictions: Will require the caller to be a secure context, and will require the internal website to respond to a CORS pre-flight.

Characteristics: Bi-directional communications. Detectable. Supported across all browsers. Not available on mobile. Complexities around Secure Contexts / HTTPS, and loopback network protections in Edge18/IE AppContainers.
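A minimal sketch of the page side, assuming a hypothetical native app listening on localhost port 8787 and returning CORS headers that permit the calling origin; note that an HTTPS page may hit the mixed-content complications discussed next:

// Detect and talk to a (hypothetical) native app's local HTTP server.
// Unlike App Protocols, this is detectable (the fetch fails when nothing is
// listening) and supports ongoing bi-directional exchange.
async function talkToLocalApp(): Promise<void> {
  try {
    const response = await fetch("http://localhost:8787/status");
    console.log("Local app is running:", await response.json());
  } catch {
    console.log("Local app not detected.");
  }
}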

Local Web Server – Challenges with HTTPS

In many cases, HTTPS pages may not send requests to HTTP URLs (depending on whether the browser supports the new “SecureContexts” feature that treats HTTP://localhost as a secure context rather than as mixed content). As a result, in some cases, applications wish to get an HTTPS certificate for their local servers. This is complex and error-prone to do securely. Many vendors used a hack, whereby they’d get a publicly-trusted certificate for a hostname (e.g. loopback.example.com) which they would later point at 127.0.0.1 using DNS. However, doing things this way requires putting the certificate’s private key on the client (where anyone can steal it). After the private key is released, anyone can abuse it to MITM connections to servers using that certificate. In practice, this is of limited interest (it’s not useful for broadly attacking traffic), but compromise of a private key means that the certificate must be revoked per the rules for CAs. So that’s what happens. Over and over and over.

The inimitable Ryan Sleevi wrote up a short history of the bad things people do here: https://twitter.com/sleevi_/status/1202046844240572416?s=20 after Atlassian got dinged for doing this wrong.

Prior to Atlassian, Amazon Music had a similar exposure, described here: https://medium.com/0xcc/what-the-heck-is-tcp-port-18800-a16899f0f48f

—–

Notes: https://wicg.github.io/cors-rfc1918/#mixed-content

Andrew (@drewml) tweeted at 4:23 PM on Tue, Jul 09, 2019:
The @zoom_us vuln sucks, but it’s definitely not new. This was/is a common approach used to sidestep the NPAPI deprecation in Chrome. Seems like a @taviso favorite:
Anti virus, logitech, utorrent. (https://twitter.com/drewml/status/1148704362811801602?s=03)

Bypass of localhost CORS protections by utilizing GET request for an Image
https://bugs.chromium.org/p/chromium/issues/detail?id=951540#c28


There exist WebRTC tricks to bypass HTTPS requirements https://twitter.com/sleevi_/status/1177248901990105090?s=20

How to do this right? There’s a writeup of how Plex got HTTPS certificates for their local servers.

 

Variant: Common Remote Server as a Broker

An alternative approach would be to communicate indirectly. For instance, a web application and a client application using HTTPS/WebSockets could each individually communicate to a common server which brokers messages between them.
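A sketch of the web side of that pattern; the broker host and message shape are hypothetical, and the native client would hold its own connection to the same broker, which relays messages between the two:

// Web-page side of a broker-relayed channel to a native app.
const socket = new WebSocket("wss://broker.example.com/session/12345");

socket.addEventListener("open", () => {
  socket.send(JSON.stringify({ to: "native-app", text: "ping" }));
});
socket.addEventListener("message", (event) => {
  console.log("Relayed from the native app:", event.data);
});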

HTML5 getInstalledRelatedApps()

While not directly an API to communicate with a local app, the getInstalledRelatedApps() method allows your web app to check whether your native app is installed on a user’s device. Learn more.
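A small sketch of how a page might use it; the call only reports apps that the site lists in its web app manifest’s related_applications entry and that claim the site in return, and support varies by platform and browser:

// Check whether a related native app is installed (where supported).
async function checkForRelatedApp(): Promise<void> {
  if ("getInstalledRelatedApps" in navigator) {
    const apps = await (navigator as any).getInstalledRelatedApps();
    console.log(apps.length ? "Related app installed:" : "No related app found.", apps);
  }
}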

AppLinks in Edge/Windows

Allows navigation to certain namespaces (domains) to be handed off to a native application on the local device.

Legacy Plugins/ActiveX architecture

Please no!

Characteristics: Bi-directional communications. Detectable. Support has been mostly removed from most browsers. Generally not available on mobile. One of the biggest sources of security risk in web platform history.

Android Intents

Similar to App Protocols, a web page can launch an application to handle a particular task. Learn more.
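As a sketch, launching an intent from a page is just a navigation to an intent:// URL; the scheme, package, and fallback URL below are illustrative:

// Navigate to an Android intent URL. If no matching app is installed, the
// optional S.browser_fallback_url parameter is loaded instead.
function launchAndroidApp(): void {
  window.location.href =
    "intent://example/#Intent;scheme=exampleapp;package=com.example.app;" +
    "S.browser_fallback_url=https%3A%2F%2Fexample.com%2Finstall;end";
}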

Android Instant Apps

Basically, the idea is that navigating to a website can stream/run an Android Application. Learn more.

Some web developers host their pre-production development sites by configuring their DNS such that hostnames ending in .dev point to local servers. Such configurations were not meaningfully impacted when .dev became an official Generic Top Level Domain a few years back, because even as smart people warned that developers should stop squatting on it, Google (the owner of the .dev TLD) was hosting few (if any) sites on the gTLD.

With Chrome 63, shipping to the stable channel in the coming days, things have changed. Chrome has added .dev to the HSTS Preload list (along with the .foo, .page, .app, and .chrome TLDs). This means that any attempt to visit http://anything.dev will automatically be converted to https://anything.dev.

hstspreload

Other major browsers use the same underlying HSTS Preload list, and we expect that they will also pick up the .dev TLD entry in the coming weeks and months.

Of course, if you were using HTTPS with valid and trusted certificates on your pre-production sites already (good for you!) the Chrome 63 change may not impact you very much right away. But you’ll probably want to move your preproduction sites off of .dev and instead use e.g. .test, a TLD reserved for this purpose.

Secure all the things!

-Eric

PS: Perhaps surprisingly, the dotless (“plain”) hostnames http://dev, http://page, http://app, http://chrome, http://foo are all impacted by new HSTS rules as well.

“Magic” is great… except when it isn’t.

Software Design is largely about tradeoffs, and one of the more interesting tradeoffs is between user experience and predictability. This tradeoff has come up repeatedly throughout my career, and it came up in two independent contexts yesterday, which I’ll describe in this post.

Developer Magic

I’m working on a tiny UX change to Google Chrome to deemphasize the data component of data: URIs.

Chrome is a multi-platform browser that runs on Windows, Mac, Linux, ChromeOS, Android, and iOS, which means that I need to make the same change in a number of places. Four, to be precise: Views (our cross-platform UI that runs on Windows, Linux and ChromeOS), Cocoa (Mac), Bling (iOS) and Clank (Android). The change for Views was straightforward and I did it first; porting the change to Mac wasn’t too hard. With the Mac change in hand, I figured that the iOS change would be simple, as both are written in Objective C++. I don’t have a local Mac development box, so I have to upload my iOS changes to the Chromium build bots to see if they work. Unfortunately, my iOS build failed, with a complaint from the linker:

Undefined symbols for architecture arm7:

“gfx::Range::ToNSRange() const”, referenced from:

OmniboxViewIOS::SetEmphasis(bool, gfx::Range) in omnibox_view_ios.o

OmniboxViewIOS::UpdateSchemeEmphasis(gfx::Range) in omnibox_view_ios.o

ld: symbol(s) not found for architecture arm7

Hrm… that’s weird; the Mac build worked and the iOS build used the same APIs. Let’s go have a look at the definition of ToNSRange():

ToNSRange() inside an if(defined(OS_MACOSX)

Oh, weird. It’s in an OS_MACOSX block, so I guess it’s not there for iOS. But how did it compile and only fail at linking?

Turns out that the first bit of “magic” is that when OS_IOS is defined, OS_MACOSX is always also defined:

iOS gets both MacOS and iOS defined

I was relieved to learn that I’m not the only person who didn’t know this, both by asking around and by finding code blocks like this:

If defined(win)||defined(mac)||defined(ios)

Okay, so that’s why it compiled, but why didn’t it link? Let’s look at the build configuration file:

image

Hmmm… That’s a bit suspicious. There’s range_mac.mm and range_win.cc both listed within a single target. But it seems unlikely that the Mac build includes the Windows code, or that the Windows build includes the Mac code. Which suggests that maybe there’s some magic not shown in the build configuration that determines what actually gets built. And indeed, it turns out that such magic does exist.

The overall Build Configuration introduces its own incompatible magic, whereby filenames suffixed with _mac only compile on Mac… and that is limited to actual Mac, not including iOS:

sources_assignment_filters

This meant that the iOS compilation had a header file with no matching implementation, and I was the first lucky guy to stumble upon this by calling the missing code.

Magic handling of filenames is simultaneously great (“So convenient”) and awful—I spoke to a number of engineers who knew that the build does this, but had no idea how, or whether or not iOS builds would include _mac-suffixed files. My  instinct for fixing this would be to rename range_mac.mm to just range_apple.mm (because .mm files are compiled only for Mac and iOS), but instead I’ve been told that the right fix is to just temporarily disable the magic:

Add exclusion via set_sources_assignment_filter

Talking to some of the experts, I learned that the long term goal is to get rid of the sources_assignment_filters altogether (No more magic!) but doing so entails a bunch of boring work (No more magic!).

Magic is great, when it works.

When it doesn’t, I spend a lot of time investigating and writing blog posts. In this case, I ended up flailing about for a few hours (because sending my various fix attempts off to the bots isn’t fast) trying to figure out what was going on.

There’s plenty of other magic that happens throughout the Chromium developer toolchain; some of it visible and some of it invisible. For instance, consider what happens when I forget the name of the command that finds out what release a changelist went into:

git find-release

Git “magically” knows what I meant, and points out my mistake.

Elsewhere, however, Chromium’s git “magically” knows what I meant and just does it:

git cl upalod (typo)

Which approach is better? I suppose it depends. The code that suggests proper commands is irritating (“Dammit, if you knew what I meant, you could just do it!”) but it’s also predictable—only legal commands run and typos cannot go overlooked and propagate throughout scripts, documentation, etc.

This same type of tradeoff appeared in a different scenario by the end of the day.

End-User Magic

This repro won’t work forever, but try clicking this link: https://www.kubernetes.io. If you do this right now, you’ll find that the page works great in Chrome, but doesn’t work in IE or Edge:

Edge and IE show Certificate Error

If you pop the site into SSLLabs’ server test, you can see that the server indeed has a problem:

Certificate subject name mismatch

The certificate’s SubjectAltNames field contains kubernetes.io, but not www.kubernetes.io.

So, what gives? Why does the original www URL work in Chrome? If you open the Developer Tools console while following the link, you’ll see the following explanation of the magic:

Console warning about automagic redirection

Basically, Chrome saw that the certificate for www.kubernetes.io was misconfigured and recognized that sending the user to the bare domain kubernetes.io was probably the right thing to do. So, it just did that, which is great for the user. Right? Right??

Well, yes, it’s great for Chrome users, and maybe for HTTPS adoption– users don’t like certificate errors, and asking them to manually “fix” things the browser can fix itself is annoying.

But it’s less awesome for users of other browsers without this accommodation, especially when the site developers don’t know about Chrome’s magic behavior and close the bug as “fixed” because they tested in Chrome. So other browsers have to adopt this magic if they want to be as great as Chrome (no browser vendor likes bugs whining “Your browser doesn’t work but Chrome does!”). Then, after all the browsers have the magic in place, other tools like curl and wfetch and wget etc. need to adopt it. And the magic is now a hack that lives on for decades, increasing the development cost of all future web clients. Blargh.

Update: www.twitter.com in Brazil had this problem in February 2020, but the magic didn’t take effect, because the SSLCommonNameMismatchHandling feature is disabled for sites that have HSTS enabled.

It’s worth noting that this scenario was especially confusing for users of Microsoft Edge, because its address box has special magic that hides the “www.” prefix, even on error pages. The default address is a “lie”:

Edge hides the “www.” prefix

Only by putting focus in the address bar can you see the “truth”:

WWW is showing now on focus

When you’re building magic into your software, consider carefully how you do so.

  • Do you make your magic invisible, or obvious?
  • Is there a way to scope it, or will you have to maintain it forever?
  • Are you training users to expect magic, or guiding them away from it?
  • If you’re part of an ecosystem, is your magic in line with your long-term ecosystem goals?

-Eric

Four weeks ago, an emailed notice of a free massage credit revealed that I’ve been at Google for a year. Time flies when you’re drinking from a firehose.

When I mentioned my anniversary, friends and colleagues from other companies asked what I’ve learned while working on Chrome over the last year. This rambling post is an attempt to answer that question.

Non-maskable Interrupts

While I started at Google just over a year ago, I haven’t actually worked there for a full year yet. My second son (Nate) was born a few weeks early, arriving ten workdays after my first day of work.

I took full advantage of Google’s very generous twelve weeks of paternity leave, taking a few weeks after we brought Nate home, and the balance as spring turned to summer. In a year, we went from having an enormous infant to an enormous toddler who’s taking his first steps and trying to emulate everything his 3-year-old brother (Noah) does.

Baby at the hospital / First birthday cake

I mention this because it’s had a huge impact on my work over the last year—much more than I’d naively expected.

When Noah was born, I’d been at Telerik for almost a year, and I’d been hacking on Fiddler alone for nearly a decade. I took a short paternity leave, and my coding hours shifted somewhat (I started writing code late at night between bottle feeds), but otherwise my work wasn’t significantly impacted.

As I pondered joining Google Chrome’s security team, I expected pretty much the same—a bit less sleep, a bit of scheduling awkwardness, but I figured things would fall into a good routine in a few months.

Things turned out somewhat differently.

Perhaps sensing that my life had become too easy, fate decided that 2016 was the year I’d get sick. Constantly. (Our theory is that Noah was bringing home germs from pre-school; he got sick a bunch too, but recovered quickly each time.) I was sick more days in 2016 than I was in the prior decade, including a month-long illness in the spring. That ended with a bout of pneumonia that concluded with a doctor-mandated seven days away from the office. As I coughed my brains out on the sofa at home, I derived some consolation in thinking about Google’s generous life insurance package. But for the most part, my illnesses were minor—enough to keep me awake at night and coughing all day, but otherwise able to work.

Mathematically, you might expect two kids to be twice as much work as one, but in our experience, it hasn’t worked out that way. Instead, it varies between 80% (when the kids happily play together) to 400% (when they’re colliding like atoms in a runaway nuclear reactor). Thanks to my wife’s heroic efforts, we found a workable daytime routine. The nights, however, have been unexpectedly difficult. Big brother Noah is at an age where he usually sleeps through the night, but he’s sure to wake me up every morning at 6:30am sharp. Fortunately, Nate has been a pretty good sleeper, but even now, at just over a year old, he usually still wakes up and requires attention twice a night or so.

I can’t remember the last time I had eight hours of sleep in a row. And that’s been extremely challenging… because I can’t remember much else either. Learning new things when you don’t remember them the next day is a brutal, frustrating process.

When Noah was a baby, I could simply sleep in after a long night. Even if I didn’t get enough sleep, it wouldn’t really matter—I’d been coding in C# on Fiddler for a decade, and deadlines were few and far between. If all else failed, I’d just avoid working on any especially gnarly code and spend the day handling support requests, updating graphics, or doing other simple and straightforward grunt work from my backlog.

Things are much different on Chrome.

Roles

When I first started talking to the Chrome Security team about coming aboard, it was for a role on the Developer Advocacy team. I’d be driving HTTPS adoption across the web and working with big sites to unblock their migrations in any way I could. I’d already been doing the first half of that for fun (delivering talks at conferences like Codemash and Velocity), and I’d previously spent eight years as a Security Program Manager for the Internet Explorer team. I had tons of relevant experience. Easy peasy.

I interviewed for the Developer Advocate role. The hiring committee kicked back my packet and said I should interview as a Technical Program Manager instead.

I interviewed as a Technical Program Manager. The hiring committee kicked back my packet and said I should interview as a Developer Advocate instead.

The Chrome team resolved the deadlock by hiring me as a Senior Software Engineer (SWE).

I was initially very nervous about this, having not written any significant C++ code in over a decade—except for one in-place replacement of IE9’s caching logic which I’d coded as a PM because I couldn’t find a developer to do the work. But eventually I started believing in my own pep talk: “I mean, how hard could it be, right? I’ve been troubleshooting code in web browsers for almost two decades now. I’m not a complete dummy. I’ll ramp up. It’ll be rough, but it’ll work out. Hell, I started writing Fiddler knowing neither C# nor HTTP, and that turned out pretty good. I’ll buy some books and get caught up. There’s no way that Google would have just hired me as a C++ developer without asking me any C++ coding questions if it wasn’t going to all be okay. Right? Right?!?”

The Firehose

I knew I had a lot to learn, and fast, but it took me a while to realize just how much else I didn’t know.

Google’s primary development platform is Linux, an OS that I would install every few years, play with for a day, then forget about. My new laptop was a Mac, a platform I’d used a bit more, but still one for which I was about a twentieth as proficient as I was on Windows. The Chrome Windows team made a half-hearted attempt to get me to join their merry band, but warned me honestly that some of the tooling wasn’t quite as good as it was on Linux and it’d probably be harder for me to get help. So I tried to avoid Windows for the first few months, ordering a puny Windows machine that took around four times longer to build Chrome than my obscenely powerful Linux box (with its 48 logical cores). After a few months, I gave up on trying to avoid Windows and started using it as my primary platform. I was more productive, but incredibly slow builds remained a problem for a few months. Everyone told me to just order another obscenely powerful box to put next to my Linux one, but it felt wrong to have hardware at my desk that collectively cost more than my first car—especially when, at Microsoft, I bought all my own hardware. I eventually mentioned my cost/productivity dilemma to a manager, who noted I was getting paid a Google engineer’s salary and then politely asked me if I was just really terrible at math. I ordered a beastly Windows machine and now my builds scream. (To the extent that any C++ builds can scream, of course. At Telerik, I was horrified when a full build of Fiddler slowed to a full 5 seconds on my puny Windows machine; my typical Chrome build today still takes about 15 minutes.)

Beyond learning different operating systems, I’d never used Google’s apps before (Docs/Sheets/Slides); luckily, I found these easy to pick up, although I still haven’t fully figured out how Google Drive file organization works. Google Docs, in particular, is so good that I’ve pretty much given up on Microsoft Word (which headed downhill after the 2010 version). Google Keep is a low-powered alternative to OneNote (which is, as far as I can tell, banned because it syncs to Microsoft servers) and I haven’t managed to get it to work well for my needs. Google Plus still hasn’t figured out how to support pasting of images via CTRL+V, a baffling limitation for something meant to compete in the space… hell, even Microsoft Yammer supports that, for gods sake. The only real downside to the web apps is that tab/window management on modern browsers is still a very much unsolved problem (but more on that in a bit).

But these speedbumps all pale in comparison to Gmail. Oh, Gmail. As a program manager at Microsoft, pretty much your entire life is in your inbox. After twelve years with Outlook and Exchange, switching to Gmail was a train wreck. “What do you mean, there aren’t folders? How do I mark this message as low priority? Where’s the button to format text with strikethrough? What do you mean, I can’t drag an email to my calendar? What the hell does this Archive thing do? Where’s that message I was just looking at? Hell, where did my Gmail tab even go—it got lost in a pile of sixty other tabs across four top-level Chrome windows. WTH??? How does anyone get anything done?”

Communication and Remote Work

While Telerik had an office in Austin, I didn’t interact with other employees very often, and when I did they were usually in other offices. I thought I had a handle on remote work, but I really didn’t. Working with a remote team on a daily basis is just different.

With communication happening over mail, IRC, Hangouts, bugs, document markup comments, GVC (video conferencing), G+, and discussion lists, it was often hard to figure out which mechanisms to use, let alone which recipients to target. Undocumented pitfalls abounded (many discussion groups were essentially abandoned while others were unexpectedly broad; turning on chat history was deemed a “no-no” for document retention reasons).

It often took a bit of research to even understand who various communication participants were and how they related to the projects at hand.

After years of email culture at Microsoft, I grew accustomed to a particular style of email, and Google’s is just different. Mail threads were long, with frequent additions of new recipients and many terse remarks. Many times, I’d reply privately to someone on a side thread, with a clarifying question, or suggesting a counterpoint to something they said. The response was often “Hey, this just went to me. Mind adding on the main thread?”

I’m working remotely, with peers around the world, so real-time communication with my team is essential. Some Chrome subteams use Hangouts, but the Security team largely uses IRC.

XKCD comic on IRC

Now, I’ve been chatting with people online since BBSes were a thing (I’ve got a five-digit ICQ number somewhere), but my knowledge of IRC was limited to the fact that it was a common way of taking over suckers’ machines with buffer overflows in the ’90s. My new teammates tried to explain how to IRC repeatedly: “Oh, it’s easy, you just get this console IRC client. No, no, you don’t run it on your own workstation, that’d be crazy. You wouldn’t have history! You provision a persistent remote VM on a machine in Google’s cloud, then SSH to that, then you run screen and then you run your IRC client in that. Easy peasy.”

Getting onto IRC remained on my “TODO” list for five months before I finally said “F- it”, installed HexChat on my Windows box, disabled automatic sleep, and called it done. It’s worked fairly well.

Google Developer Tooling

When an engineer first joins Google, they start with a week or two of technical training on the Google infrastructure. I’ve worked in software development for nearly two decades, and I’ve never even dreamed of the development environment Google engineers get to use. I felt like Charlie Bucket on his tour of Willy Wonka’s Chocolate Factory—astonished by the amazing and unbelievable goodies available at any turn. The computing infrastructure was something out of Star Trek, the development tools were slick and amazing, the process was jaw-dropping.

While I was doing a “hello world” coding exercise in Google’s environment, a former colleague from the IE team pinged me on Hangouts chat, probably because he’d seen my tweets about feeling like an imposter as a SWE.  He sent me a link to click, which I did. Code from Google’s core advertising engine appeared in my browser. Google’s engineers have access to nearly all of the code across the whole company. This alone was astonishing—in contrast, I’d initially joined the IE team so I could get access to the networking code to figure out why the Office Online team’s website wasn’t working. “Neat, I can see everything!” I typed back. “Push the Analyze button” he instructed. I did, and some sort of automated analyzer emitted a report identifying a few dozen performance bugs in the code. “Wow, that’s amazing!” I gushed. “Now, push the Fix button” he instructed. “Uh, this isn’t some sort of security red team exercise, right?” I asked. He assured me that it wasn’t. I pushed the button. The code changed to fix some unnecessary object copies. “Amazing!” I effused. “Click Submit” he instructed. I did, and watched as the system compiled the code in the cloud, determined which tests to run, and ran them. Later that afternoon, an owner of the code in the affected folder typed LGTM (Googlers approve changes by typing the acronym for Looks Good To Me) on the change list I had submitted, and my change was live in production later that day. I was, in a word, gobsmacked. That night, I searched the entire codebase for misuse of an IE cache control token and proposed fixes for the instances I found. I also narcissistically searched for my own name and found a bunch of references to blog posts I’d written about assorted web development topics.

Unfortunately for Chrome Engineers, the introduction to Google’s infrastructure is followed by a major letdown—because Chromium is open-source, the Chrome team itself doesn’t get to take advantage of most of Google’s internal goodies. Development of Chrome instead resembles C++ development at most major companies, albeit with an automatically deployed toolchain and enhancements like a web-based code review tool and some super-useful scripts. The most amazing of these is called bisect-builds, and it allows a developer to very quickly discover what build of Chrome introduced a particular bug. You just give it a “known good” build number and a “known bad” build number and it  automatically downloads and runs the minimal number of builds to perform a binary search for the build that introduced a given bug:

Console showing bisect builds running

Firefox has a similar system, but I’d’ve killed for something like this back when I was reproducing and reducing bugs in IE. While it’s easy to understand how the system functions, it works so well that it feels like magic. Other useful scripts include the presubmit checks that run on each change list before you submit them for code review—they find and flag various style violations and other problems.

Compilation itself typically uses a local compiler; on Windows, we use the MSVC command line compiler from Visual Studio 2015 Update 3, although work is underway to switch over to Clang. Compilation and linking all of Chrome takes quite some time, although on my new beastly dev boxes it’s not too bad. Googlers do have one special perk—we can use Goma (a distributed compiler system that runs on Google’s amazing internal cloud) but I haven’t taken advantage of that so far.

For bug tracking, Chrome recently moved to Monorail, a straightforward web-based bug tracking system. It works fairly well, although it is somewhat more cumbersome than it needs to be and would be much improved with a few tweaks. Monorail is open-source, but I haven’t committed to it myself yet.

For code review, Chrome presently uses Rietveld, a web-based system, but this is slated to change to a different system called Gerrit in the near(ish) future. Like Monorail, it’s pretty straightforward although it would benefit from some minor usability tweaks; I committed one trivial change myself, but the pending migration to a different system means that it isn’t likely to see further improvements.

As an open-source project, Chromium has quite a bit of public documentation for developers, including Design Documents. Unfortunately, Chrome moves so fast that many of the design documents are out-of-date, and it’s not always obvious what’s current and what was replaced long ago. The team does value engineers’ investment in the documents, however, and various efforts are underway to update the documents and reduce Chrome’s overall architectural complexity. I expect these will be ongoing battles forever, just like in any significant active project.

What I’ve Done

“That’s all well and good,” my reader asks, “but what have you done in the last year?”

I Wrote Some Code

My first check-in to Chrome landed in February; it was a simple adjustment to limit Public-Key-Pins to 60 days. Assorted other check-ins trickled in through the spring before I went on paternity leave. The most fun fix I did cleaned up a tiny UX glitch that sat unnoticed in Chrome for almost a decade; it was mostly interesting because it was a minor thing that I’d tripped over for years, including back in IE. (The root cause was arguably that MSDN documentation about DWM lied; I fixed the bug in Chrome, sent the fix to IE, and asked MSDN to fix their docs).

I fixed a number of minor security bugs, and lately I’ve been working on UX issues related to Chrome’s HTTPS user-experience. Back in 2005, I wrote a blog post complaining about websites using HTTPS incorrectly, and now, just over a decade later, Chrome and Firefox are launching UI changes to warn users when a site is collecting sensitive information on pages which are Not Secure; I’m delighted to have a small part in those changes.

Having written a handful of Internet Explorer Extensions in the past, I was excited to discover the joy of writing Chrome extensions. Chrome extensions are fun, simple, and powerful, and there’s none of the complexity and crashes of COM.

My 3 Chrome Extensions

My first and most significant extension is the moarTLS Analyzer – it’s related to my HTTPS work at Google and it’s proven very useful in discovering sites that could improve their security. I blogged about it and the process of developing it last year.

Because I run several different Chrome instances on my PC (and they update daily or weekly), I found myself constantly needing to look up the Chrome version number for bug reports and the like. I wrote a tiny extension that shows the version number in a button on the toolbar (so it’s captured in screenshots too!):

Show Chrome Version screenshot
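As a rough sketch (not the actual extension’s code), a Manifest V2 background script could surface the version on a browser-action button something like this:

// Parse the browser version from the UA string and surface it on the toolbar
// button: the major version as badge text, the full version in the tooltip.
const match = navigator.userAgent.match(/Chrome\/([\d.]+)/);
const version = match ? match[1] : "unknown";

chrome.browserAction.setBadgeText({ text: version.split(".")[0] });
chrome.browserAction.setTitle({ title: "Chrome " + version });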

More than once, I spent an hour or so trying to reproduce and reduce a bug that had been filed against Chrome. When I found out the cause, I’d jubilantly add my notes to the issue in the Monorail bug tracker, click “Save changes” and discover that someone more familiar with the space had beaten me to the punch and figured it out while I’d had the bug open on my screen. Adding an “Issue has been updated” alert to the bug tracker itself seemed like the right way to go, but it would require some changes that I wasn’t able to commit on my own. So, instead I built an extension that provides such alerts within the page until the feature can be added to the tracker itself.

Each of these extensions was a joy to write.

I Filed Some Bugs

I’m a diligent self-hoster, and I run Chrome Canary builds on all of my devices. I submit crash reports and file bugs with as much information as I can. My proudest moment was in helping narrow down a bizarre and intermittent problem users had with Chrome on Windows 10, where Chrome tabs would crash on every startup until you rebooted the OS. My blog post explains the full story, and encourages others to file bugs as they encounter them.

I Triaged More Bugs

I’ve been developing software for Windows for just over two decades, and inevitably I’ve learned quite a bit about it, including the undocumented bits. That’s given me a leg up in understanding bugs in the Windows code. Some of the most fun include issues in Drag and Drop, like this gem of a bug that means that you can’t drop files from Chrome to most applications in Windows. More meaningful bugs relate to problems with Windows’ Mark-of-the-Web security feature (about which I’ve blogged several times).

I Took Sheriff Rotations

Google teams have the notion of sheriffs—a rotating assignment that ensures that important tasks (like triaging incoming security bugs) always have a defined owner, without overwhelming any single person. Each Sheriff has a term of ~1 week during which they take on additional duties beyond their day-to-day coding, designing, testing, etc.

The Sheriff system has some real benefits—perhaps the most important of which is creating a broad swath of people experienced and qualified in making triage decisions around security vulnerabilities. The alternative is to leave such tasks to a single owner, concentrating knowledge in one person and rapidly increasing the bus-factor risk to the project. (I know this from first-hand experience. After IE8 shipped, I was on my way out the door to join another team. Then IE’s Security PM left, leaving a gaping hole that I felt obliged to stay around to fill. It worked out okay for me and the team, but it was tense all around.)

I’m on two sheriff rotations: Enamel (my subteam) and the broader Chrome Security Sheriff.

The Enamel rotation’s tasks are akin to what I used to do as a Program Manager at Microsoft—triage incoming bugs, respond to questions in the Help Forums, and generally act as a point of contact for my immediate team.

In contrast, the Security Sheriff rotation is more work, and somewhat more exciting. The Security Sheriff’s duties include triaging all bugs of type “Security”, assigning priority, severity, and finding an owner for each. Most security bugs are automatically reported by our fuzzers (a tireless robot army!), but we also get reports from the public and from Chrome team members and Project Zero too.

At Microsoft, incoming security bug reports were first received and evaluated by the Microsoft Security Response Center (MSRC); valid reports were passed along to the IE team after some level of analysis and reproduction was undertaken. In general, all communication was done through MSRC, and the turnaround cycle on bugs was typically on the order of weeks to months.

In contrast, anyone can file a security bug against Chrome, and every week lots of people do. One reason for that is that Chrome has a Vulnerability Rewards program which pays out up to $100K for reports of vulnerabilities in Chrome and Chrome OS. Chrome paid out just under $1M USD in bounties last year. This is an awesome incentive for researchers to responsibly disclose bugs directly to us, and the bounties are much higher than those of nearly any other project.

In his “Hacker Quantified Security” talk at the O’Reilly Security conference, HackerOne CTO and Cofounder Alex Rice showed the following chart of bounty payout size for vulnerabilities when explaining why he was using a Chromebook. Apologies for the blurry photo, but the line at the top shows Chrome OS, with the 90th percentile line miles below as severity rises to Critical:

Vulnerability rewards by percentile. Chrome is WAY off the chart.

With a top bounty of $100,000 for an exploit or exploit chain that fully compromises a Chromebook, researchers are much more likely to send their bugs to us than to try to find a buyer on the black market.

Bug bounties are great, except when they’re not. Unfortunately, many filers don’t bother to read the Chrome Security FAQ which explains what constitutes a security vulnerability and the great many things that do not. Nearly every week, we have at least one person (and often more) file a bug noting “I can use the Developer Tools to read my own password out of a webpage. Can I have a bounty?” or “If I install malware on my PC, I can see what happens inside Chrome” or variations of these.

Because we take security bug reports very seriously, we often spend a lot of time on what seem like garbage filings to verify that there’s not just some sort of communication problem. This exposes one downside of the sheriff process—the lack of continuity from week to week.

In the fall, we had one bug reporter file a new issue every week that was just a collection of security-related terms (XSS! CSRF! UAF! EoP! Dangling Pointer! Script Injection!) lightly wrapped in prose, including screenshots, snippets from websites, console output from developer tools, and the like. Each week, the sheriff would investigate, ask for more information, and engage in a fruitless back and forth with the filer trying to figure out what claim was being made. Eventually I caught on to what was happening and started monitoring the sheriff’s queue, triaging the new findings directly and sparing the sheriff of the week. But even today we still catch folks who look up old bug reports (usually Won’t Fixed issues), copy/paste the content into new bugs, and file them into the queue. It’s frustrating, but coming from a closed bug database, I’d choose the openness of the Chrome bug database every time.

To get ready for my first Sheriff rotation, I had started watching the incoming queue a few months earlier, and by September I felt ready. Day One was quiet, with a few small issues found by fuzzers and one or two junk reports from the public which I triaged away with pointers to the “Why isn’t a vulnerability” entries in the Security FAQ. I spent the rest of the day writing a fix for a lower-priority security bug that had been filed a month before. A pretty successful day, I thought.

Day Two was more interesting. Scanning the queue, I saw a few more fuzzer issues and one external report whose text started with “Here is a Chrome OS exploit chain.” The report was about two pages long, and had a forty-two page PDF attachment explaining the four exploits the finder had used to take over a fully-patched Chromebook.

Star Wars trench run photo

Watching Luke’s X-wing take out the Death Star in Star Wars was no more exciting than reading the PDF’s tale of how a single byte memory overwrite in the DNS resolver code could weave its way through the many-layered security features of the Chromebook and achieve a full compromise. It was like the most amazing magic trick you’ve ever seen.

I hopped over to IRC. “So, do we see full compromises of Chrome OS every week?” I asked innocently.

“No. Why?” came the reply from several corners. I pasted in the bug link and a few moments later the replies started flowing in “OMG. Amazing!” Even guys from Project Zero were impressed, and they’re magicians who build exploits like this (usually for other products) all the time. The researcher had found one small bug and a variety of neglected components that were thought to be unreachable and put together a deadly chain.

The first patches were out for code review that evening, and by the next day, we’d reached out to the open-source owner of the DNS component with the 1-byte overwrite bug so he could release patches for the other projects using his code. Within a few days, fixes to other components landed and had been ported to all of the supported versions of Chrome OS. Two weeks later, the Chrome Vulnerability rewards team added the reward-100000 tag, the only bug so far to be so marked. Four weeks after that, I had to hold my tongue when Alex mentioned that “no one’s ever claimed that $100000 bounty” during his “Hacker Quantified Security” talk. Just under 90 days from filing, the bug was unrestricted and made available for public viewing.

The remainder of my first Sheriff rotation was considerably less exciting, although still interesting. I spent some time looking through the components the researcher had abused in his exploit chain and filed a few bugs. Ultimately, the most risky component he used was removed entirely.

Outreach and Blogging

Beyond working on the Enamel team (focused on Chrome’s security UI surface), I also work on the “MoarTLS” project, designed to help encourage and assist the web as a whole in moving to HTTPS. This takes a number of forms—I help maintain the HTTPS on Top Sites Report Card, I do consultations and HTTPS Audits with major sites as they enable HTTPS on their sites. I discover, reduce, and file bugs on Chrome’s and other browsers’ support of features like Upgrade-Insecure-Requests. I publish a running list of articles on why and how sites should enable TLS. I hassle teams all over Google (and the web in general) to enable HTTPS on every single hyperlink they emit. I responsibly disclosed security bugs in a number of products and sites, including a vulnerability in Hillary Clinton’s fundraising emails. I worked to send a notification to many many many thousands of sites collecting user information non-securely, warning them of the UI changes in Chrome 56.

When I applied to Google for the Developer Advocate role, I expected I’d be delivering public talks constantly, but as a SWE I’ve only given a few talks, including my Migrating to HTTPS talk at the first O’Reilly Security Conference. I had a lot of fun at that conference, catching up with old friends from the security community (mostly ex-Microsofties). I also went to my first Chrome Dev Summit, where I didn’t have a public talk (my colleagues did) but I did get to talk to some major companies about deploying HTTPS.

I also blogged quite a bit. At Microsoft, I started blogging because I got tired of repeating myself, and because our Exchange server and document retention policies had started making it hard or impossible to find old responses—I figured “Well, if I publish everything on the web, Google will find it, and Internet Archive will back it up.”

I’ve kept blogging since leaving Microsoft, and I’m happy that I have even though my reader count numbers are much lower than they were at Microsoft. I’ve managed to mostly avoid trouble, although my posts are not entirely uncontroversial. At Microsoft, they wouldn’t let me publish this post (because it was too frank); in my first month at Google, I got a phone call at home (during the first portion of my paternity leave) from a Google Director complaining that I’d written something that was too harsh about a change Microsoft had made. But for the most part, my blogging seems not to ruffle too many feathers.

Tidbits

  • Food at Google is generally really good; I’m at a satellite office in Austin, so the selection is much smaller than on the main campuses, but the rotating menu is fairly broad and always has at least three major options. And the breakfasts! I gained about 15 pounds in my first few months, but my pneumonia took it off and I’ve restrained my intake since I came back.
  • At Microsoft, I always sneered at companies offering free food (“I’m an adult professional. I can pay for my lunch.”), but it’s definitely convenient to not have to hassle with payments. And until the government closes the loophole, it’s a way to increase employees’ compensation without getting taxed.
  • For the first three months, I was impressed and slightly annoyed that all of the snack options in Google’s micro-kitchens are healthy (e.g. fruit)—probably a good thing, since I sit about twenty feet from one. Then I saw someone open a drawer and pull out some M&Ms, and I learned the secret—all of the junk food is in drawers. The selection is impressive and ranges from the popular to the high end.
  • Google makes heavy use of the “open-office concept.” I think this makes sense for some teams, but it’s not at all awesome for me. I’d gladly take a 10% salary cut for a private office. I doubt I’m alone.
  • Coworkers at Google range from very smart to insanely off-the-scales-smart. Yet, almost all of them are humble, approachable, and kind.
  • Google, like Microsoft, offers gift matching for charities. This is an awesome perk, and one I aim to max out every year. I’m awed by people who go far beyond that.
  • Window Management – I mentioned earlier that one downside of web-based tools is that it’s hard to even find the right tab when I’ve got dozens of open tabs that I’m flipping between. The Quick Tabs extension is one great mitigation; it shows your tabs in a searchable, most-recently-used list in a convenient dropdown:

QuickTabs Extension

Another trick that I learned just this month is that you can instruct Chrome to open a site in “App” mode, where it runs in its own top-level window (with no other tabs), showing the site’s icon as the icon in the Windows taskbar. It’s easy:

On Windows, run chrome.exe --app=https://mail.google.com

While on OS X, run open -n -b com.google.Chrome --args --app='https://news.google.com'

Tip: The easy way to create a shortcut to the current page in app mode is to click the Chrome Menu > More Tools > Create Shortcut and tick the Open as Window checkbox.

I now have SlickRun MagicWords set up for mail, calendar, and my other critical applications.


So, that’s it for year one @Google. I’m feeling lucky as I head into year two!

-Eric