Inspecting Certificates in Chrome

With a check-in on Monday night, Chrome Canary build 60.0.3088 regained a quick path to view certificates from the top-level security UI. When the new feature is enabled, you can just click the lock icon to the left of the address box, then click the “Valid” link in the new Certificate section of the Page Information bubble to see the certificate:

Chrome 60 Page Info dropdown showing certificate section

In some cases, you might only be interested in learning which Certificate Authority issued the site’s certificate. If the connection security is Valid, simply hover over the link to see the issuer information in a tooltip:

Tooltip shows Issuer CA

The new link is also available on the blocking error page in the event of an HTTPS error, although no tooltip is shown:

The link is also available on the blocking Certificate Error page

Note: For now, you must manually enable the new Certificate section. Type chrome://flags/#show-cert-link in Chrome’s address box and hit enter. Click the Enable link and relaunch Chrome.

image

This section is enabled by default in Chrome 63 and later, along with some other work to simplify the Page Information bubble.

If you want more information about the HTTPS connection, or to see the certificates of the resources used in the page, hit F12 to open the Developer Tools and click the Security tab:

Chrome DevTools Security tab shows more information

You can learn more about Chrome’s certificate UIs and philosophy in this post from Chrome Security’s Chris Palmer.

-Eric Lawrence


The Trouble with Magic

“Magic” is great… except when it isn’t.

Software Design is largely about tradeoffs, and one of the more interesting tradeoffs is between user experience and predictability. This tension has come up repeatedly throughout my career, and it arose in two independent contexts yesterday that I’ll describe in this post.

Developer Magic

I’m working on a tiny UX change to Google Chrome to deemphasize the data component of data: URIs.

Chrome is a multi-platform browser that runs on Windows, Mac, Linux, ChromeOS, Android, and iOS, which means that I need to make the same change in a number of places. Four, to be precise: Views (our cross-platform UI that runs on Windows, Linux and ChromeOS), Cocoa (Mac), Bling (iOS) and Clank (Android). The change for Views was straightforward and I did it first; porting the change to Mac wasn’t too hard. With the Mac change in hand, I figured that the iOS change would be simple, as both are written in Objective-C++. I don’t have a local Mac development box, so I have to upload my iOS changes to the Chromium build bots to see if they work. Unfortunately, my iOS build failed, with a complaint from the linker:

Undefined symbols for architecture arm7:
  “gfx::Range::ToNSRange() const”, referenced from:
      OmniboxViewIOS::SetEmphasis(bool, gfx::Range) in omnibox_view_ios.o
      OmniboxViewIOS::UpdateSchemeEmphasis(gfx::Range) in omnibox_view_ios.o
ld: symbol(s) not found for architecture arm7

Hrm… that’s weird; the Mac build worked and the iOS build used the same APIs. Let’s go have a look at the definition of ToNSRange():

ToNSRange() declared inside an #if defined(OS_MACOSX) block

Oh, weird. It’s in an OS_MACOSX block, so I guess it’s not there for iOS. But how did it compile and only fail at linking?

Turns out that the first bit of “magic” is that when OS_IOS is defined, OS_MACOSX is always also defined:

iOS gets both MacOS and iOS defined
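
To make the pattern concrete, here’s a minimal sketch of the define relationship; this is not Chromium’s actual build_config.h, and the TARGET_IOS_BUILD / TARGET_MAC_BUILD macros are invented stand-ins for the real platform detection:

```cpp
// Minimal sketch (not Chromium's build_config.h): on iOS builds, both OS_IOS
// and OS_MACOSX end up defined, so code guarded only by OS_MACOSX also
// compiles for iOS.
#if defined(TARGET_IOS_BUILD)   // invented stand-in for real platform detection
#define OS_IOS 1
#define OS_MACOSX 1             // iOS implies "MACOSX" too
#elif defined(TARGET_MAC_BUILD)
#define OS_MACOSX 1
#endif

#include <cstdio>

int main() {
#if defined(OS_MACOSX) && !defined(OS_IOS)
  std::puts("Desktop macOS only");   // explicitly excludes iOS
#elif defined(OS_IOS)
  std::puts("iOS (note: OS_MACOSX is also defined here)");
#else
  std::puts("Some other platform");
#endif
  return 0;
}
```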

I was relieved to learn that I’m not the only person who didn’t know this, both by asking around and by finding code blocks like this:

If defined(win)||defined(mac)||defined(ios)

Okay, so that’s why it compiled, but why didn’t it link? Let’s look at the build configuration file:

image

Hmmm… That’s a bit suspicious. There’s range_mac.mm and range_win.cc both listed within a single target. But it seems unlikely that the Mac build includes the Windows code, or that the Windows build includes the Mac code. Which suggests that maybe there’s some magic not shown in the build configuration that determines what actually gets built. And indeed, it turns out that such magic does exist.

The overall Build Configuration introduces its own incompatible magic, whereby filenames suffixed with _mac only compile on Mac… and that is limited to actual Mac, not including iOS:

sources_assignment_filters

This meant that the iOS compilation had a header file with no matching implementation, and I was the first lucky guy to stumble upon this by calling the missing code.

Magic handling of filenames is simultaneously great (“So convenient”) and awful—I spoke to a number of engineers who knew that the build does this, but had no idea how, or whether iOS builds would include _mac-suffixed files. My instinct for fixing this would be to rename range_mac.mm to just range_apple.mm (because .mm files are compiled only for Mac and iOS), but instead I’ve been told that the right fix is to just temporarily disable the magic:

Add exclusion via set_sources_assignment_filter

Talking to some of the experts, I learned that the long term goal is to get rid of the sources_assignment_filters altogether (No more magic!) but doing so entails a bunch of boring work (No more magic!).

Magic is great, when it works.

When it doesn’t, I spend a lot of time investigating and writing blog posts. In this case, I ended up flailing about for a few hours (because sending my various fix attempts off to the bots isn’t fast) trying to figure out what was going on.

There’s plenty of other magic that happens throughout the Chromium developer toolchain; some of it visible and some of it invisible. For instance, consider what happens when I forget the name of the command that finds out what release a changelist went into:

git find-release

Git “magically” knows what I meant, and points out my mistake.

Elsewhere, however, Chromium’s git “magically” knows what I meant and just does it:

git cl upalod (typo)

Which approach is better? I suppose it depends. The code that suggests proper commands is irritating (“Dammit, if you knew what I meant, you could just do it!”) but it’s also predictable—only legal commands run and typos cannot go overlooked and propagate throughout scripts, documentation, etc.
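
As an illustration of the two approaches (a toy sketch, not the actual git or depot_tools code), a command dispatcher can compute the edit distance from the typed command to each known command and then either suggest the closest match or silently run it:

```cpp
// Sketch of the two ways to handle a mistyped command: suggest the closest
// known command, or "magically" run it. Known commands here are invented.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Classic dynamic-programming Levenshtein distance.
static size_t EditDistance(const std::string& a, const std::string& b) {
  std::vector<size_t> prev(b.size() + 1), cur(b.size() + 1);
  for (size_t j = 0; j <= b.size(); ++j) prev[j] = j;
  for (size_t i = 1; i <= a.size(); ++i) {
    cur[0] = i;
    for (size_t j = 1; j <= b.size(); ++j) {
      size_t subst = prev[j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1);
      cur[j] = std::min({prev[j] + 1, cur[j - 1] + 1, subst});
    }
    std::swap(prev, cur);
  }
  return prev[b.size()];
}

int main() {
  const std::vector<std::string> known = {"upload", "status", "help"};
  const std::string typed = "upalod";
  const bool kAutoCorrect = false;  // flip to get the "just do it" behavior

  // Find the closest known command.
  const std::string* best = &known[0];
  for (const auto& cmd : known)
    if (EditDistance(typed, cmd) < EditDistance(typed, *best)) best = &cmd;

  if (kAutoCorrect)
    std::cout << "Running '" << *best << "' instead of '" << typed << "'\n";
  else
    std::cout << typed << " is not a command. Did you mean '" << *best << "'?\n";
  return 0;
}
```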

This same type of tradeoff appeared in a different scenario by the end of the day.

End-User Magic

This repro won’t work forever, but try clicking this link: https://www.kubernetes.io. If you do this right now, you’ll find that the page works great in Chrome, but doesn’t work in IE or Edge:

Edge and IE show Certificate Error

If you pop the site into SSLLabs’ server test, you can see that the server indeed has a problem:

Certificate subject name mismatch

The certificate’s SubjectAltNames field contains kubernetes.io, but not www.kubernetes.io.

So, what gives? Why does the original www URL work in Chrome? If you open the Developer Tools console while following the link, you’ll see the following explanation of the magic:

Console warning about automagic redirection

Basically, Chrome saw that the certificate for www.kubernetes.io was misconfigured and recognized that sending the user to the bare domain kubernetes.io was probably the right thing to do. So, it just did that, which is great for the user. Right? Right??

Well, yes, it’s great for Chrome users, and maybe for HTTPS adoption– users don’t like certificate errors, and asking them to manually “fix” things the browser can fix itself is annoying.

But it’s less awesome for users of other browsers without this accommodation, especially when the site developers don’t know about Chrome’s magic behavior and close the bug as “fixed” because they tested in Chrome. So other browsers have to adopt this magic if they want to be as great as Chrome (no browser vendor likes bugs whining “Your browser doesn’t work but Chrome does!”). Then, after all the browsers have the magic in place, other tools like curl and wfetch and wget need to adopt it too. And the magic is now a hack that lives on for decades, increasing the development cost of all future web clients. Blargh.
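
To make the cost concrete, here’s a rough sketch of the kind of fix-up every client would need to replicate. This is my own simplification, not Chrome’s actual implementation: on a certificate name-mismatch error, check whether adding or removing a “www.” prefix yields a host the certificate actually covers, and redirect if so:

```cpp
// Rough sketch of the "www." fix-up accommodation (illustrative only).
#include <iostream>
#include <string>
#include <vector>

// Exact-match check against the certificate's SubjectAltName DNS entries.
// (Real code also handles wildcards like *.example.com; omitted for brevity.)
static bool CertCoversHost(const std::vector<std::string>& san_dns_names,
                           const std::string& host) {
  for (const auto& name : san_dns_names)
    if (name == host) return true;
  return false;
}

// Returns a host to redirect to, or "" if no fix-up applies.
static std::string SuggestRedirectOnNameMismatch(
    const std::vector<std::string>& san_dns_names, const std::string& host) {
  const std::string www = "www.";
  if (host.compare(0, www.size(), www) == 0) {
    std::string bare = host.substr(www.size());   // www.foo.io -> foo.io
    if (CertCoversHost(san_dns_names, bare)) return bare;
  } else if (CertCoversHost(san_dns_names, www + host)) {
    return www + host;                             // foo.io -> www.foo.io
  }
  return "";
}

int main() {
  // The kubernetes.io case: the certificate lists only the bare domain.
  std::vector<std::string> sans = {"kubernetes.io"};
  std::string suggestion =
      SuggestRedirectOnNameMismatch(sans, "www.kubernetes.io");
  if (!suggestion.empty())
    std::cout << "Name mismatch, but redirecting to https://" << suggestion << "/\n";
  else
    std::cout << "Show the certificate error interstitial.\n";
  return 0;
}
```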

It’s worth noting that this scenario was especially confusing for users of Microsoft Edge, because its address box has special magic that hides the “www.” prefix, even on error pages. The default address is a “lie”:

Edge hides the “www.” prefix

Only by putting focus in the address bar can you see the “truth”:

WWW is showing now on focus

When you’re building magic into your software, consider carefully how you do so.

  • Do you make your magic invisible, or obvious?
  • Is there a way to scope it, or will you have to maintain it forever?
  • Are you training users to expect magic, or guiding them away from it?
  • If you’re part of an ecosystem, is your magic in line with your long-term ecosystem goals?

-Eric


Certified Malice

One unfortunate (albeit entirely predictable) consequence of making HTTPS certificates “fast, open, automated, and free” is that both good guys and bad guys alike will take advantage of the offer and obtain HTTPS certificates for their websites.

Today’s bad guys can easily turn a run-of-the-mill phishing spoof:

HTTP Phish screenshot

…into a somewhat more convincing version, by obtaining a free “domain validated” certificate and lighting up the green lock icon in the browser’s address bar:

HTTPS Phish screenshot; the same phishing site on Android

The resulting phishing site looks almost identical to the real site:

Real and fake side-by-side

By December 8, 2016, LetsEncrypt had issued 409 certificates containing “Paypal” in the hostname; that number is up to 709 as of this morning. Other targets include BankOfAmerica (14 certificates), Apple, Amazon, American Express, Chase Bank, Microsoft, Google, and many other major brands. LetsEncrypt validates only that (at one point in time) the certificate applicant can publish on the target domain. The CA also grudgingly checks with the SafeBrowsing service to see if the target domain has already been blocked as malicious, although they “disagree” that this should be their responsibility. LetsEncrypt’s short position paper is worth a read; many reasonable people agree with it.

The “race to the bottom” in validation performed by CAs before issuing certificates is what led the IE team to spearhead the development of Extended Validation certificates over a decade ago. The hope was that, by putting the CA’s name “on the line” (literally, the address line), CAs would be incentivized to do a thorough job vetting the identity of a site owner. Alas, my proposal that we prominently display the CA’s name for all certificate types (EV, OV, DV) wasn’t implemented, so domain-validated certificates are largely anonymous commodities unless a user goes through the cumbersome process of manually inspecting a site’s certificates. For a number of reasons (to be explored in a future post), EV certificates never really took off.

Of course, certificate abuse isn’t limited to LetsEncrypt—other CAs have also issued domain-validated certificates to phishing sites as well:

Comodo cert for a Paypal Phish

Who’s Responsible?

Unfortunately, ownership of this mess is diffuse, and I’ve yet to encounter any sign like this:

image

Blame the Browser

The core proposition held by some (but not all) CAs is that combatting malicious sites is the responsibility of the user-agent (browser), not the certificate authority. It’s an argument with merit, especially in a world where we truly want encryption for all sites, not just the top sites.

That position is bolstered by the fact that some browsers don’t actively check for certificate revocation, so even if LetsEncrypt were to revoke a certificate, the browser wouldn’t even notice.

Another argument is that browsers overpromise the safety of sites by using terms like Secure in the UI—while the browser can know whether a given HTTPS connection is present and free of errors, it has no knowledge of the security of the destination site or CDN, nor its business practices.  Internet Explorer’s HTTPS UX used to have a helpful “Should I trust this site?” link, but that content went away at some point. Security wording is a complicated topic because what the user really wants to know (“Is this safe?”) isn’t something a browser can ever really answer in the affirmative. Users tend to be annoyed when you tell them only the truth– “This download was not reported as not safe.”

The obvious way to address malicious sites is via phishing and malware blocklists, and indeed, you can help keep other users safe by reporting any unblocked phish you find to the Safe Browsing service; this service protects Chrome, Firefox, and Safari users. You can also forward phishing messages to scam@netcraft.com and/or PhishTank. Users of Microsoft browsers can report unblocked phish to SmartScreen (in IE, click Tools > SmartScreen > Report Unsafe Website). Known-malicious sites will get the UI treatment they deserve:

image

Unfortunately, there’s always latency in block lists, and a phisher can probably turn a profit with a site that’s live less than one hour. Phishers also have a whole bag of tricks to delay blocks, including cloaking whereby they return an innocuous “Site not found” message when they detect that they’re being loaded by security researchers’ IP addresses, browser types, OS languages, etc.

Blame the Websites

Some argue that websites are at fault, for:

  1. Relying upon passwords and failing to adopt unspoofable two-factor authentication schemes which have existed for decades
  2. Failing to adopt HTTPS or deploy it properly until browsers started bringing out the UI sledgehammers
  3. Constantly changing domain names and login UIs
  4. Emailing users non-secure links to redirector sites
  5. Providing bad security advice to users

Blame the Humans

Finally, many lay blame with the user, arguing user education is the only path forward. I’ve long given up much hope on that front—the best we can hope for is raising enough awareness that some users will contribute feedback into more effective systems like automated phishing block lists.

We’ve had literally decades of sites and “experts” telling users to “Look for the lock!” when deciding whether a site is to be trusted. Even today we have bad advice being advanced by security experts who should know better, like this message from the Twitter security team which suggests that https://twitter.com.access.info is a legitimate site.

Email from Twitter Security Team

Where Do We Go From Here?

Unfortunately, I don’t think there are any silver bullets, but I also think that unsolvable problems are the most interesting ones. I’d argue that everyone who uses or builds the web bears some responsibility for making it safer for all, and each should try to find ways to do that within their own sphere of control.

Following is an unordered set of ideas that I think are worthwhile.

Signals

Historically, we in the world of computers have been most comfortable with binary – is this site malicious, or is it not? Unfortunately, this isn’t how the real world usually works, and fortunately many systems are moving more toward a set of signals. When evaluating the trustworthiness of a site, there are dozens of available signals, including:

  • HTTPS Certificates
  • Age of the site
  • Has the user visited this site before
  • Has the user stored a password on this site before
  • Has the user used this password before, anywhere
  • Number of visitors observed to the site
  • Hosting location of the site
  • Presence of sensitive terms in the content or URL
  • Presence of login forms or executable downloads

None of these signals is individually sufficient to determine whether a site is good or evil, but combined together they can be used to feed systems that make good sites look safer and evil sites more suspect. Suspicious sites can be escalated for human analysis, either by security experts or even by crowd-sourced directed questioning of ordinary users. For instance, see the heuristic-triggered Is This Phish? UI from Internet Explorer’s SmartScreen system:

image

The user’s response to the prompt is itself a signal that feeds into the system, and allows more speedy resolution of the site as good or bad and subject to blocking.

One challenge with systems based on signals is that, while they grow much more powerful as more endpoints report signals, those signal reports may have privacy implications for individual users.
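
As a toy illustration of combining signals, the sketch below folds several of them into a single suspicion score; the weights and thresholds are invented for this example and aren’t drawn from SmartScreen, Safe Browsing, or any real system:

```cpp
// Toy sketch of combining weak signals into a suspicion score.
#include <iostream>

struct SiteSignals {
  bool valid_https = false;                // HTTPS with a valid certificate
  int domain_age_days = 0;                 // age of the registration
  bool user_visited_before = false;
  bool password_saved_here = false;
  bool password_reused_elsewhere = false;  // same password typed on other sites
  bool sensitive_terms_in_url = false;     // e.g. brand names, "login", "verify"
  bool has_login_form = false;
};

// Higher score == more suspicious. No single signal decides the outcome.
static int SuspicionScore(const SiteSignals& s) {
  int score = 0;
  if (!s.valid_https) score += 2;
  if (s.domain_age_days < 30) score += 3;  // brand-new domains are riskier
  if (!s.user_visited_before) score += 1;
  if (s.password_saved_here) score -= 3;   // established relationship
  if (s.password_reused_elsewhere && s.has_login_form) score += 4;
  if (s.sensitive_terms_in_url) score += 2;
  return score;
}

int main() {
  SiteSignals phishy{true, 3, false, false, true, true, true};
  SiteSignals familiar{true, 2000, true, true, false, false, true};

  // Scores above some threshold could be escalated for human or crowd review.
  std::cout << "phishy score: " << SuspicionScore(phishy) << "\n";     // high
  std::cout << "familiar score: " << SuspicionScore(familiar) << "\n"; // low
  return 0;
}
```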

Reputation

In the real world, many things are based on reputation—without a good reputation, it’s hard to get a job, stay in business, or even find a partner.

In browsers, we have a well-established concept of bad reputation (your site or download appears on a block list) but we’ve largely pushed back against the notion of good reputation. In my mind, this is a critical failing—while much of the objection is well-meaning (“We want a level playing field for everyone”), it’s extremely frustrating that we punish users in support of abstract ideals. Extended Validation certificates were one attempt at creating the notion of good reputation, but they suffered from the whims of CAs’ business plans and the legal system that drove absurdities like:

Security chip showing full legal name of Washington Post's holding company

While there are merits to the fact that, on the web, no one can tell whether your site is a one-woman startup or a Fortune 100 behemoth, there are definite downsides as well.

Owen Campbell-Moore wrote a wonderful paper, Rethinking URL bars as primary UI, in which he proposes a hypothetical origin-to-local-brand mapping, which would allow browsers to help users more easily understand where they really are when interacting with top sites. I think it’s a long-overdue idea brilliant in its obviousness. When it eventually gets built (by Apple, Microsoft, Google, or some upstart?) we’ll wonder what took us so long. A key tradeoff in making such a system practical is that designers must necessarily focus on the “head” of the web (say, the most popular 1000 websites globally)– trying to generalize the system for a perfectly level playing field isn’t practical– just like in the real world.
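
A hypothetical sketch of the idea follows; the table contents and the DisplayIdentity helper are invented for illustration, and the hard part in practice is curating and localizing the mapping for each market:

```cpp
// Hypothetical origin-to-local-brand lookup for the "head" of the web.
#include <iostream>
#include <string>
#include <unordered_map>

static const std::unordered_map<std::string, std::string>& BrandTable() {
  // A deliberately small, curated table; entries here are invented examples.
  static const std::unordered_map<std::string, std::string> table = {
      {"https://www.paypal.com", "PayPal"},
      {"https://accounts.google.com", "Google Accounts"},
      {"https://www.bankofthewest.com", "Bank of the West"},
  };
  return table;
}

// Returns the curated brand name, or the raw origin if the site isn't in the
// head-of-the-web table.
static std::string DisplayIdentity(const std::string& origin) {
  auto it = BrandTable().find(origin);
  return it != BrandTable().end() ? it->second : origin;
}

int main() {
  std::cout << DisplayIdentity("https://www.paypal.com") << "\n";      // PayPal
  std::cout << DisplayIdentity("https://paypal-account.com") << "\n";  // raw origin
  return 0;
}
```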

Assorted

  • Browsers could again make a run at supporting unspoofable authentication methods natively.
  • Browsers could do more to take authentication out from the Zone of Death, limiting spoofing attacks.
  • New features like Must-Staple will allow browsers to respond to certificate revocation information without a performance penalty.
  • Automated CAs could deploy heuristics that require additional validation for certificates containing often-spoofed domains, as sketched below. For instance, if I want a certificate for paypal-payments.com, they could either demand additional information about me personally (allowing a visit from Law Enforcement if I turn out to be a phisher) or they could even just issue a certificate with a validity period that only starts a week in the future. The certificate would be recorded in Certificate Transparency logs immediately, allowing brand monitoring firms to take note and respond immediately if phishing is detected when the certificate becomes valid. Such systems will be imperfect (you need to handle BankOfTheVVest.com as a potential spoof of BankOfTheWest.com, for instance) but even targeting a few high-value domains could make a major dent.
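
The sketch below illustrates such a heuristic with an invented brand list and deliberately simplistic homoglyph normalization; a real CA would need something far more robust:

```cpp
// Sketch of a CA-side heuristic flagging lookalike hostnames for extra vetting.
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

// Collapse a few well-known visual tricks before matching, e.g. "vv" -> "w",
// so that bankofthevvest.com is treated like bankofthewest.com.
static std::string Normalize(std::string host) {
  for (char& c : host)
    c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
  size_t pos;
  while ((pos = host.find("vv")) != std::string::npos) host.replace(pos, 2, "w");
  while ((pos = host.find("0")) != std::string::npos) host.replace(pos, 1, "o");
  while ((pos = host.find("1")) != std::string::npos) host.replace(pos, 1, "l");
  return host;
}

static bool NeedsExtraValidation(const std::string& requested_host) {
  // Invented list of high-value brands; a real list would be much larger.
  static const std::vector<std::string> kHighValueBrands = {
      "paypal", "bankofamerica", "bankofthewest", "apple", "amazon"};
  const std::string normalized = Normalize(requested_host);
  for (const auto& brand : kHighValueBrands)
    if (normalized.find(brand) != std::string::npos) return true;
  return false;
}

int main() {
  // These would be routed to additional vetting or delayed issuance.
  std::cout << NeedsExtraValidation("paypal-payments.com") << "\n";  // 1
  std::cout << NeedsExtraValidation("bankofthevvest.com") << "\n";   // 1
  std::cout << NeedsExtraValidation("example.com") << "\n";          // 0
  return 0;
}
```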

 

Fight the phish!

 

-Eric

Update: Chris wrote a great post on HTTPS UX in Chrome.


The Line of Death

When building applications that display untrusted content, security designers have a major problem—if an attacker has full control of a block of pixels, he can make those pixels look like anything he wants, including the UI of the application itself. He can then induce the user to undertake an unsafe action, and the user will be none the wiser.

In web browsers, the browser itself usually fully controls the top of the window, while the pixels below that area are under the control of the site. I’ve recently heard this called the line of death:

Line of death below omnibox

If a user trusts pixels above the line of death, the thinking goes, they’ll be safe, but if they can be convinced to trust the pixels below the line, they’re gonna die.

Unfortunately, this crucial demarcation isn’t explicitly pointed out to the user, and even more unfortunately, it’s not an absolute.

For instance, because the area above the LoD is so small, sometimes more space is needed to display trusted UI. Chrome attempts to resolve this by showing a little chevron that crosses the LoD:

Chrome chevrons

…because untrusted markup cannot cross the LoD. Unfortunately, as you can see in the screenshot, the treatment is inconsistent; in the PageInfo flyout, the chevron points to the bottom of the lock and the PageInfo box overlaps the LoD, while in the Permission flyout the chevron points to the bottom of the omnibox and the Permission box only abuts the LoD. Sometimes, the chevron is omitted, as in the case of Authentication dialogs.

Alas, the chevron is subtle, and I expect most users will fall for a faked chevron, like some sites have started to use [1]:

Fake chevron in HTML

The bigger problem is that some attacker data is allowed above the LoD; while trusting the content below the LoD will kill your security, there are also areas of death above the line. A more accurate Zones of Death map might look like this:

Zones of Death

In Zone 1, the attacker’s chosen icon and page title are shown. This information is controlled fully by the attacker and thus may consist entirely of deceptive content and lies.

In Zone 2, the attacker’s domain name is shown. Some information security pros will argue that this is the only “trustworthy” component of the URL, insofar as if the URL is HTTPS then the domain correctly identifies the site to which you’re connected. Unfortunately, your idea of trustworthy might be different than the experts’; https://paypal-account.com/ may really be the domain you loaded, but it has no relationship with the legitimate payment service found at https://paypal.com.

The path component of the URL in Zone 3 is fully untrustworthy; the URL http://account-update.com/paypal.com/ has nothing to do with Paypal either, and while spoofing here is less convincing, it also may be harder for the good guys to block because the spoofing content is not found in DNS nor does it create any records in Certificate Transparency logs.

Zone 4 is the web content area. Nothing in this area is to be believed. Unfortunately, on windowed operating systems, this is worse than it sounds, because it creates the possibility of picture-in-picture attacks, where an entire browser window, including its trusted pixels, can be faked:

Paypal window is fake content from evil.example.com

When hearing of picture-in-picture attacks, many people immediately brainstorm defenses; many related to personalization. For instance, if you run your OS or browser with a custom theme, the thinking goes, you won’t be fooled. Unfortunately, there’s evidence that that just isn’t the case.

Story time

Back in 2007, as the IE team was launching Extended Validation (EV) certificates, Microsoft Research was publishing a paper calling into question their effectiveness. A Fortune 500 financial company came to visit the IE team as they evaluated whether they wanted to go into the EV Certificate Authority business. They were excited about the prospect (as were we, since they were a well-known name with natural synergies), but they noted that they thought the picture-in-picture problem was a fatal flaw.

I was defensive– “It’s interesting,” I conceded, “but I don’t think it’s a very plausible attack.”

They retorted “Well, we passed this screenshot around our entire information security department, and nobody could tell it’s a picture-in-picture attack. Can you?” They slid an 8.5×11 color print across the table.

“Of course!” I said, immediately relieved. I quickly grew gravely depressed as I realized the implications of the fact that they couldn’t tell the difference.

“How?” they demanded.

“It’s a picture of an IE7 browser running on Windows Vista in the transparent Aero Glass theme with a page containing a JPEG of an IE7 browser running on Windows XP in the Luna aka Fisher Price theme?” I pointed out.

“Oh. Huh.” they noted.

My thoughts of using browser personalization as an effective mitigation died that day.

Other mitigations were proposed; one CA built an extension where hovering over the EV Lock Icon (“Trust Badge”) would dim the entire screen except for the badge. One team proposed using image analysis to scan the current webpage for anything that looked like a fake EV badge.

Personally, my favorite approach was Tyler Close’s idea that the browser should use PetNames for site identity– think of them as a Gravatar icon for salted certificate hashes– not only would they make every HTTPS site’s identity look unique to each user, but this could also be used as a means of detecting fraudulent or misissued certificates (in a world before we had certificate transparency).
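
A toy sketch of the concept, my own construction rather than Tyler Close’s actual design, might derive a stable per-user name from a salted hash of the site’s certificate fingerprint:

```cpp
// Toy PetName sketch: a stable, per-user name for a site identity.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>

static std::string PetNameFor(const std::string& cert_fingerprint,
                              const std::string& per_user_salt) {
  static const char* kAdjectives[] = {"Mellow", "Brisk", "Amber", "Quiet",
                                      "Daring", "Velvet", "Rusty", "Polar"};
  static const char* kAnimals[] = {"Otter", "Falcon", "Badger", "Lynx",
                                   "Heron", "Marmot", "Gecko", "Walrus"};
  // std::hash is fine for illustration; a real design would use a keyed
  // cryptographic hash so names can't be predicted or forced to collide.
  const uint64_t h = std::hash<std::string>{}(per_user_salt + cert_fingerprint);
  return std::string(kAdjectives[h % 8]) + " " + kAnimals[(h / 8) % 8];
}

int main() {
  // Same site + same user => same pet name on every visit.
  std::cout << PetNameFor("AB:CD:EF", "salt-for-alice") << "\n";
  // A different (possibly fraudulent) certificate yields a different name.
  std::cout << PetNameFor("12:34:56", "salt-for-alice") << "\n";
  return 0;
}
```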

The Future is Here … and It’s Worse

HTML5 adds a Fullscreen API, which means the Zone of Death looks like this:

Zone of Death in fullscreen mode

The Metro/Immersive/Modern mode of Internet Explorer in Windows 8 suffered from the same problem; because it was designed with a philosophy of “content over chrome”, there were no reliable trustworthy pixels. I begged for a persistent trustbadge to adorn the bottom-right of the screen (showing a security origin and a lock) but was overruled. One enterprising security tester in Windows made a visually-perfect spoofing site of Paypal, where even the user gestures that displayed the ephemeral browser UI were intercepted and fake indicators were shown. It was terrifying stuff, mitigated only by the hope that no one would use the new mode.

Virtually all mobile operating systems suffer from the same issue– due to UI space constraints, there are no trustworthy pixels, allowing any application to spoof another application or the operating system itself. Historically, some operating systems have attempted to mitigate the problem by introducing a secure user gesture (on Windows, it’s Ctrl+Alt+Delete) that always shows trusted UI, but such measures tend to confuse users (limiting their effectiveness) and often get “optimized away” when the UX team’s designers get ahold of the product.

It will be interesting to see how WebVR tries to address this problem on an even grander scale.

Beyond Browsers

Of course, other applications have the concept of a LoD as well, including web applications. Unfortunately, they often get this wrong. Consider Outlook.com’s rendering of an email:

image

When Outlook receives an email from a trusted sender, it notifies the user via a “This message is from a trusted sender.” notice, which appears directly inside a Zone of Death:

image

Security UI is hard.

-Eric

[1] “Why would they fake a permission prompt? What would they gain?” you ask? Because for a real permission prompt, if you click Block, they can never ask you again, while with a fake prompt, they can spam you as much as they like. On the other hand, if you click Allow, they immediately present the real prompt.


Security UI in Chrome

The combined address box and search bar at the top of the Chrome window is called the omnibox. The icon and optional verbose state text adjacent to that icon are collectively known as the Security Chip:

image

The security chip can render in a number of states, depending on the status of the page:

image Secure – Shown for HTTPS pages that were securely delivered using a valid certificate and not compromised by mixed content or other problems.
image Secure, Extended Validation – Shown for Secure pages whose certificate indicates that Extended Validation was performed.
image Neutral – Shown for HTTP pages, for Chrome’s built-in pages (like chrome://extensions), and for pages containing passive mixed content or delivered using a policy-allowed SHA-1 certificate.
image Not Secure – Shown for HTTP pages that contain a password or credit card input field. Learn more.
image Not Secure (Red) – What Chrome will eventually show for all HTTP pages. You can configure a flag (chrome://flags/#mark-non-secure-as) to Always mark HTTP as actively dangerous today to get this experience early.
image Not Secure, Certificate Error – Shown when a site has a major problem with its certificate (e.g. it’s expired).
image Dangerous – Shown when Google Safe Browsing has identified this page as a source of malware or phishing attacks.
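
As a rough sketch (not Chrome’s actual code), the mapping from a page’s security state to the chip’s label might look something like this:

```cpp
// Illustrative mapping from security state to the security chip's label.
#include <iostream>
#include <string>

enum class SecurityLevel {
  kSecure,
  kSecureWithEV,
  kNeutral,      // HTTP, chrome:// pages, passive mixed content, SHA-1
  kNotSecure,    // HTTP page collecting passwords or credit cards
  kDangerous,    // certificate error or Safe Browsing verdict
};

static std::string ChipLabel(SecurityLevel level,
                             const std::string& ev_organization) {
  switch (level) {
    case SecurityLevel::kSecure:       return "Secure";
    case SecurityLevel::kSecureWithEV: return "Secure | " + ev_organization;
    case SecurityLevel::kNeutral:      return "";  // icon only, no text
    case SecurityLevel::kNotSecure:    return "Not secure";
    case SecurityLevel::kDangerous:    return "Dangerous";
  }
  return "";
}

int main() {
  std::cout << ChipLabel(SecurityLevel::kSecureWithEV, "PayPal, Inc. [US]") << "\n";
  std::cout << ChipLabel(SecurityLevel::kNotSecure, "") << "\n";
  return 0;
}
```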

The flyout that appears when you click the security chip is called PageInfo or Website Settings; it shows the security status of the page and the permissions assigned to the origin website:

image

The text atop the pageinfo flyout explains the security state of the page:

Pageinfo explanatory text for secure, mixed-content, and expired-certificate pages

Clicking the Learn More link on the flyout for valid HTTPS sites once opened the Chrome Developer Tools’ Security Panel, but now it goes to a Help article. To learn more about the HTTPS state of the page, instead press F12 and select the Security Panel:

image

The View certificate button opens the Windows certificate viewer and displays the current origin’s certificate. Reload the page with the Developer Tools open to see all of the secure origins in the Secure Origins List; selecting any origin allows you to view information about the connection and examine its certificate.

image

The floating grey box at the bottom of the Chrome window that appears when you hover over a link is called the status bubble. It is not a security surface and, like IE’s status bar, it is easily spoofed.

image

Navigation to sites with severe security problems is blocked by an interstitial page.

image

(A list of interstitial pages in Chrome can be found at chrome://interstitials/.)

Clicking on the error code (e.g. ERR_CERT_AUTHORITY_INVALID in the screenshot below) will show more information about the certificate of the blocked site:

image

Clicking the Advanced link shows more information, and in some cases will show an override link that allows you to disregard the security protection and proceed to the site anyway.

image

If a site uses HTTP Strict Transport Security, the Proceed link is hidden and the user has no visible option to proceed.

image

In current versions of Chrome, the user can type a fixed string (sometimes referred to as a Konami code) to bypass HSTS, but this option is deliberately undocumented and slated for removal from Chrome.
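
The override behavior described above boils down to a small decision; the sketch below is my own simplification rather than the real interstitial logic:

```cpp
// Sketch of when the "Proceed" override link is offered on a cert interstitial.
#include <iostream>

struct CertErrorContext {
  bool error_is_overridable = true;  // e.g. name mismatch vs. a fatal pinning error
  bool host_uses_hsts = false;       // Strict-Transport-Security asserted
};

static bool ShowProceedLink(const CertErrorContext& ctx) {
  // The link appears only for overridable errors on hosts without HSTS.
  return ctx.error_is_overridable && !ctx.host_uses_hsts;
}

int main() {
  std::cout << ShowProceedLink({true, false}) << "\n";  // 1: link is shown
  std::cout << ShowProceedLink({true, true}) << "\n";   // 0: HSTS hides it
  return 0;
}
```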

If an HTTPS problem is sufficiently bad, the network stack will not connect to the site and will show a network error page instead.

image

-Eric

PS: There exists a developer reference to Chrome Security UI across platforms, but it’s somewhat outdated.


Unsecure Content

Chrome has landed a change that allows you to mark unsecure (HTTP) content as Dubious or Non-secure. Visit chrome://flags/#mark-non-secure-as to set the toggle. You can choose to mark as Dubious:

image

…or as Non-Secure:

image

The expectation is that eventually one of these modes will be the default for sites that are transferred over insecure protocols like HTTP.

Personally, I’m not really a fan of either piece of iconography; to me, showing the lock at all implies that the site has some amount of security and maybe it’s just not perfect.

I’m hoping that after some transition period, we’ll end up with a more prominent notification that explains what the issue is and why humans might care.

In December of last year, I made the following proposal with tongue only slightly in cheek:

Meet “Nosy”, your HTTP-content indicator:

Of course, Nosy’s got a lot of things to say:

Nosy’s warning messages

Sites and services need to use secure protocols like HTTPS because users expect it. No, not all users will expect to see the letters HTTPS, and most probably don’t understand hashes, ciphers, and public key encryption. But they expect that when they visit your site, it was delivered to them unmolested, privately, and as you originally designed it. And the only way to realistically ensure that these expectations are met is to use HTTPS.

-Eric Lawrence


Security UI

Over a decade ago, Windows started checking the signature of downloaded executables. When invoked, the Attachment Execution Services (AES) UI displays the publisher’s information for signed executables; unsigned executables instead show a security prompt with a red shield and a bolded warning that the publisher of the file is unknown:

image

In contrast, signed executables show a yellow shield, the name of the publisher, and the publisher’s declared name of the application.

When Windows Vista was released in late 2006, an “elevation dialog” was introduced to prompt the user for permission to run an executable with elevated (administrator) rights. The new prompt’s design somewhat mirrored that of the earlier AES prompt, where unsigned executables are scary:

image

… and signed executables are less so:

image

As you can see, the prompt’s icon, program name, and publisher name are all pulled from the downloaded file.

To avoid double-prompting the user, the system detects whether a given executable will be elevated, and if so the AES dialog is suppressed and only the elevation prompt is shown.
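
The prompt-selection behavior might be sketched as follows; the IsSigned and RequiresElevation helpers are hypothetical stand-ins for the real Authenticode signature and manifest checks:

```cpp
// Sketch of choosing between the AES prompt and the elevation (UAC) prompt.
#include <iostream>
#include <string>

// Hypothetical stand-ins: a real implementation would verify the Authenticode
// signature and inspect the executable's manifest / installer heuristics.
static bool IsSigned(const std::string& path) {
  return path.find("signed") != std::string::npos;
}
static bool RequiresElevation(const std::string& path) {
  return path.find("setup") != std::string::npos;
}

static std::string PromptFor(const std::string& path) {
  if (RequiresElevation(path)) {
    // The elevation prompt replaces the AES prompt to avoid double-prompting.
    return IsSigned(path) ? "Elevation prompt: publisher shown"
                          : "Elevation prompt: unknown publisher warning";
  }
  return IsSigned(path) ? "AES prompt: publisher shown (yellow shield)"
                        : "AES prompt: unknown publisher (red shield)";
}

int main() {
  std::cout << PromptFor("tool-signed.exe") << "\n";
  std::cout << PromptFor("setup.exe") << "\n";  // only the elevation prompt
  return 0;
}
```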

As a consequence, the security UI in modern Windows is a bit backwards… the lower-risk “run as user” dialog seems complex and scary, while the higher-risk “run as administrator” dialog seems simpler and more trustworthy:

Side-by-side comparison of the two prompts

From a security design point-of-view, this seems unfortunate. Application designers should never be in the position of choosing higher-permission requests to get friendlier prompt behavior.

-Eric Lawrence
