
Private Mode Browsers

Safari introduced a private browsing mode back in 2005; InPrivate Mode followed in Internet Explorer 8 in 2009, with the goal of helping users improve their privacy against both local and remote threats.

All leading browsers offer a “Private Mode” and they all behave in the same general ways.

HTTP Caching

While in Private mode, browsers typically ignore any previously cached resources and cookies. Similarly, the Private mode browser does not preserve any cached resources beyond the end of the browser session. These features help prevent a revisited website from trivially identifying a returning user (e.g. if the user’s identity were cached in a cookie or JSON file on the client) and help prevent “traces” that might be seen by a later user of the device.

In Firefox’s and Chrome’s Private modes, a memory-backed cache container is used for the HTTP cache, and its memory is simply freed when the browser session ends. Unfortunately, WinINET never implemented a memory cache, so in Internet Explorer InPrivate sessions, data is cached in a special WinINET cache partition on disk which is “cleaned up” when the InPrivate session ends.

Because this cleanup process may be unreliable, in 2017, Edge made a change to simply disable the cache while running InPrivate, a design decision with significant impact on the browser’s network utilization and performance. For instance, consider the scenario of loading an image gallery that shows one large picture per page and clicking “Next” ten times:

[Image: InPrivateVsRegular]

Because the gallery reuses some CSS, JavaScript, and images across pages, disabling the HTTP cache means that these resources must be re-downloaded on every navigation, resulting in 50 additional requests and a 118% increase in bytes downloaded for those eleven pages. Sites that reuse even more resources across pages will be more significantly impacted.

Another interesting quirk of Edge’s InPrivate implementation is that the browser will not download FavIcons while InPrivate. Surprisingly (and likely accidentally), the suppression of FavIcon downloads also occurs in any non-InPrivate windows so long as any InPrivate window is open on the system.

Web Platform Storage

Akin to the HTTP caching and cookie behaviors, browsers running in Private mode must restrict access to client-side storage (e.g. HTML5 localStorage, ServiceWorker/CacheAPI, IndexedDB) to help prevent association/identification of the user and to avoid leaving traces behind locally. In some browsers and scenarios, storage mechanisms are backed by an “ephemeral partition” that is discarded when the session ends, while in others the DOM APIs providing access to storage are simply configured to return “Access Denied” errors.

You can explore the behavior of various storage mechanisms by loading this test page in Private mode and comparing to the behavior in non-Private mode.
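
A probe along these lines approximates what such a test page checks. This is a sketch using standard DOM APIs; the exact results and error names vary by browser and mode:

    function probeStorage(): Record<string, string> {
      const results: Record<string, string> = {};
      try {
        localStorage.setItem("__probe__", "1"); // throws in some Private modes
        localStorage.removeItem("__probe__");
        results["localStorage"] = "writable";
      } catch (e) {
        results["localStorage"] = "blocked (" + (e as Error).name + ")";
      }
      // Some browsers remove indexedDB entirely while Private; others keep it ephemeral
      results["indexedDB"] = window.indexedDB ? "present" : "missing";
      results["caches"] = "caches" in window ? "present" : "missing"; // Cache API
      return results;
    }

Comparing the output in Private and non-Private windows of the same browser makes the differences obvious.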

Network Features

Beyond the typical Web Storage scenarios, browsers’ Private Modes should also undertake efforts to prevent association of users’ Private instance traffic with non-Private instance traffic. Impacted features here include anything that has a component that behaves “like a cookie,” including TLS Session Tickets, TLS Resumption, HSTS directives, TCP Fast Open, Token Binding, ChannelID, and the like.

Automatic Authentication

In Private mode, a browser’s AutoComplete features should be set to manual-fill mode to prevent a “NameTag” vulnerability, whereby a site can simply read an auto-filled username field to identify a returning user.
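
To make the risk concrete, here is a hedged sketch of the “NameTag” trick a malicious page could attempt if auto-fill were fully automatic; the field selector and the /nametag endpoint are hypothetical:

    window.addEventListener("load", () => {
      // give the password manager a moment to fill the (possibly hidden) form
      setTimeout(() => {
        const field = document.querySelector("input[name=username]") as HTMLInputElement | null;
        if (field && field.value) {
          // identity recovered with zero user interaction
          navigator.sendBeacon("/nametag", field.value); // hypothetical endpoint
        }
      }, 1000);
    });

Requiring a user gesture before filling defeats this class of attack.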

On Windows, most browsers support silent and automatic authentication using the current user’s Windows login credentials and either the NTLM or Kerberos scheme. Typically, browsers are only willing to automatically authenticate to sites on “the Intranet.“ Some browsers behave differently when in Private mode, preventing silent authentication and forcing the user to manually enter or confirm an authentication request.

In Firefox Private Mode and Edge InPrivate, the browser will not automatically respond to an HTTP/401 challenge for Negotiate/NTLM credentials.

In Chrome Incognito, Brave Incognito, and IE InPrivate, the browser will automatically respond to an HTTP/401 challenge for Negotiate/NTLM credentials even in Private mode.

Notes:

  • In Edge, the security manager returns MustPrompt when queried for URLACTION_CREDENTIALS_USE.
  • Unfortunately Edge’s Kiosk mode runs InPrivate, meaning you cannot easily use Kiosk mode to implement a display that projects a dashboard or other authenticated data on your Intranet.
  • For Firefox to support automatic authentication at all, the network.negotiate-auth.allow-non-fqdn and/or network.automatic-ntlm-auth.allow-non-fqdn preferences must be adjusted (see the sketch after this list).
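
For example, a user.js sketch flipping both preferences named in the note above; verify these against your Firefox version and environment before relying on them:

    user_pref("network.negotiate-auth.allow-non-fqdn", true);
    user_pref("network.automatic-ntlm-auth.allow-non-fqdn", true);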

Detection of Privacy Modes

While browsers generally do not try to advertise to websites that they are running inside Private modes, it is relatively easy for a website to feature-detect this mode and behave differently. For instance, some websites like the Boston Globe block visitors in Private Mode (forcing login) because they want to avoid circumvention of their “Non-logged-in users may only view three free articles per month” paywall logic.

Sites can detect privacy modes by looking for the behavioral changes that signal that a given browser is running in Private mode; for instance, indexedDB is disabled in Edge while InPrivate. Detectors have been built for each browser and wrapped in simple JavaScript libraries. Defeating Private mode detectors requires significant investment on the part of browsers (e.g. “implement an ephemeral mode for indexedDB”) and as a consequence most browsers have punted on this problem for the time being.
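
For instance, a hedged sketch of the Edge/IE detection heuristic mentioned above (a heuristic, not an API, and it can misfire as browsers change):

    // Edge and IE10+ expose no indexedDB while InPrivate; checking for
    // PointerEvent/MSPointerEvent distinguishes them from browsers that
    // simply never implemented indexedDB at all.
    function looksLikeIEOrEdgeInPrivate(): boolean {
      return !window.indexedDB &&
             !!(window.PointerEvent || (window as any).MSPointerEvent);
    }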

Advanced Private Modes

Generally, mainstream browsers have taken a middle ground in their privacy features, trading off some performance and some convenience for improved privacy. Users who are very concerned about maintaining privacy from a wider variety of threat actors need to take additional steps, like running their browser in a discardable Virtual Machine behind an anonymizing VPN/Proxy service, disabling JavaScript entirely, etc.

The Brave Browser offers a “Private Window with Tor” feature that routes traffic over the Tor anonymizing network; for many users this might be a more practical choice than the highly privacy-preserving Tor Browser Bundle, which offers additional options like built-in NoScript support to help protect privacy.

 

-Eric


Cookie Controls, Revisited

Update: The October 2018 Cumulative Security Update (KB4462919) brings the RS5 Cookie Control changes described below to Windows 10 RS2, RS3, and RS4.

Cookies are one of the most crucial features in the web platform, and large swaths of the web don’t work properly without them. Unfortunately, cookies are also one of the primary mechanisms that trackers and ad networks utilize to follow users around the web, potentially impacting users’ privacy. To that end, browsers have offered cookie controls for over twenty years.

Back in 2010, I wrote a summary of Internet Explorer’s Cookie Controls. IE’s cookie controls were very granular and quite powerful. The basic settings were augmented with P3P, a once-promising feature that allowed sites to advertise their privacy practices and browsers to automatically enforce users’ preferences against cookies. Unfortunately, major sites created fraudulent P3P statements, regulators failed to act, and the entire (complicated) system collapsed. P3P was removed from IE11 on Windows 10 and never implemented in Microsoft Edge.

Instead, Edge offers a very simple cookie control in the Privacy and Security section of the settings. Under the Cookies option, you have three choices: Don’t block cookies (the default), Block all cookies, and Block only third party cookies:

[Image: CookieSetting]

This simple setting hides a bunch of subtlety that this post will explore.

Cookie => Cookie-Like

For the October 2018 update (aka “Redstone Five” aka “RS5”) we’ve made some important changes to Edge’s Cookie control.

The biggest of the changes is that Edge now matches other browsers, and uses the cookie controls to restrict cookie-like storage mechanisms, including localStorage, sessionStorage, indexedDB, the Cache API, and ServiceWorkers. Each of these features can behave much like a cookie, with a similar potential impact on users’ privacy.

While we didn’t change the UI, it would be accurate to change it to:

[Image: CookieLike]

This change improves privacy and can even improve site compatibility. During our testing, we were surprised to discover that some website flows fail if the browser blocks only 3rd party cookies without also blocking 3rd-party localStorage. This change brings Edge in line with other browsers with minor exceptions. For example, in Firefox 62, when 3rd-party site data is blocked, sessionStorage is still permitted in a 3rd-party context. In Edge RS5 and Chrome, 3rd party sessionStorage is blocked if the user blocks 3rd-party cookies.

Block Setting and Sending

Another subtlety exists because of the ambiguous terminology “third-party cookie.” A cookie is just a cookie; it belongs to a site (eTLD+1). The “party” comes into play in the context in which the cookie was set and the context in which it is sent.

In the web platform, unless a browser implements restrictions:

  • A cookie set in a first-party context will be sent to a first-party context
  • A cookie set in a first-party context will be sent to a third-party context
  • A cookie set in a third-party context will be sent to a first party context
  • A cookie set in a third-party context will be sent to a third-party context

For instance, in this sample page, if the IFRAME and IMG both set a cookie, these cookies are set in a third-party context:

[Image: Contexts]

  • If the user subsequently visits domain2.com, the cookie set by that 3rd-Party IFRAME will now be sent to the domain2.com server in a 1st-Party context.
  • If the user subsequently visits domain3.com, the cookie set by that 3rd-Party IMG will now be sent to the domain3.com server in a 1st-Party context.

Historically, Edge and IE’s “Block 3rd party cookies” options controlled only whether a cookie could be set from a 3rd party context, but did not impact whether a cookie initially set in a 1st party context would be sent to a 3rd party context.

As of Edge RS5, setting “Block only 3rd party cookies” will now also block cookies that were set in a 1st party context from being sent in a 3rd-party context. This change is in line with the behavior of other browsers.

Edge Controls Impacted By Zones

With the move from Internet Explorer to Edge, the Windows Security Zones architecture was largely left by the wayside.

[Image: Zones]

However, cookie controls are one of a small number of exceptions to this; Edge applies the cookie restrictions only in the Internet Zone, the zone almost all sites fall into (outside of users on corporate networks).

Perhaps surprisingly, cookie-like features and the document.cookie getter are restricted, even in the Intranet and Trusted zones.

Chrome and Firefox do not take Windows Security Zones into account when applying cookie policies.

Test Cases

I’ve updated my old “Cookies” test page with new storage test cases. You can set your browser’s privacy controls:

[Image: Block3rdPartyChrome]

[Image: Block3rdPartyFF]

…then visit the test page to see how the browser limits features from 3rd-party contexts. You can use the Swap button on the page to swap 1st-party and 3rd-party contexts to see how restrictions have been applied. You should see that the latest versions of Chrome, Firefox, and Edge all behave pretty much the same way.

One interesting exception is that when configured to Block 3rd-party Cookies, Edge still allows 3rd-party contexts to delete their own cookies (this is used by federated logout pages, for instance). Chrome does not allow deletion in this scenario; the attempt to delete cookies is ignored.
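
For the curious, the federated-logout scenario boils down to a third-party iframe expiring its own cookie; a one-line sketch, with an illustrative cookie name:

    // inside a 3rd-party logout iframe: expire our own session cookie
    document.cookie = "session=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/";

In Edge this succeeds even under “Block only 3rd party cookies”; in Chrome the write is silently ignored.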

 

-Eric


Appendix: Chromium Audit

In the course of our site-compatibility investigations, I had a look at Chromium’s behavior with regard to its cookie controls. In Chromium, Blink asks the host application for permission to use various storage mechanisms, and these chokepoints check:

cookie_settings_->IsCookieAccessAllowed(origin_url, top_origin_url);

…which is sensitive to the various “Block Cookies” settings.

Mojo messages come up through renderer_host/chrome_render_message_filter.cc, and ChromeContentBrowserClient gates additional features; between them, these chokepoints cover further storage mechanisms, including WebSQL, FileSystem, and SharedWorker.

Elsewhere, IsCookieAccessAllowed is used to limit:

  • Flash Storage (PP_FLASHLSORESTRICTIONS_BLOCK)
  • Client Hints

Of these, Edge does not support WebSQL, FileSystem, SharedWorker, or Client Hints.


Chrome Sync

Disclaimer: Hi. I’m an engineer on the Edge browser now, but worked on Chrome Security for a bit over two years. I speak for no one but myself, and I share no internal or confidential information in this post.

Update: The Chrome team announced upcoming changes based on user-feedback.

This weekend, there were a bunch of breathless articles and blogs about a very small change recently made to Chrome. Some of the claims are correct and carefully thought out, but in the swirl of clickbait and confusion, there are a bunch of inaccurate claims as well.

What Changed?

No, Chrome doesn’t “upload your browser history when you check GMail”… unless you tell it to do so.

Yes, Chrome has streamlined the opt-in to the browser’s “Sync” features, such that you no longer need to individually type your username and password when enabling Sync. Whether you consider this “Convenient!” or “Terrible!” is a matter of perception and threat model.

Screenshots

Because many people haven’t seen this change yet, here are some screenshots from Chrome 71.0.3558.

When you sign into a Google site or service in the browser’s HTML content area, your avatar/profile picture will now appear in the browser UI. If you click on your avatar, a flyout appears offering to enable sync:

[Image: Meeple]

Similarly, if you use certain browser features that are more valuable with Sync (e.g. creating a bookmark or storing a password), the UI offers to turn on Sync:

[Image: OptIn]

(Concern: This “Sync as Eric” button appears in the same X,Y location as the “Save Password” button on the preceding flyout/bubble, so you could conceivably click it by accident.)

[Image: Bookmark]

If you click on one of these options to turn on Sync, a giant flyout appears to tell you what this means:

[Image: “Get Google smarts” flyout]

Interestingly/Concerningly, if you click “Settings”, it’s interpreted as “Yes, and show Settings“:

[Image: SettingsMeansYes]

If you look through the Sync settings (also available by visiting chrome://settings/?search=sync), you’ll get a rich list of controls for what you can choose to sync:

[Image: SyncMain]

Below that list, you can also find a list of all of the other ways (independent of Sync/Signin) that Chrome can talk back to Google:

[Image: OtherSettings]

Change Your Mind? Disable Sync

If you change your mind after enabling Sync, or enabled it by accident, it’s easy to change. Click your Avatar and click on the “Syncing to” badge. Then click Turn off. Decide whether or not you want to keep local copies of the data that was sync’d to the server:

[Image: TurnOffSync]

Note: If you accidentally signed into Chrome and enabled Sync on a device you borrowed from someone else, it’s very important that you check the box to clear local data when you’re done with it, or your information will be left behind on the device.

Fun Stuff for Nerds

If you’re a geek and are interested to better understand how Sync works, visit chrome://sync-internals/ in Chrome. You can see events in the Sync process and tons of data about what exactly is syncing.

[Image: CoolData]

Opt-Out

If you don’t want Chrome to Sync, just don’t click buttons that offer to enable it.

Arguably, enabling Sync is now so streamlined that you could conceivably do it by accident (or someone borrowing your PC could do it for you).

If you’re concerned about sign-in to Chrome and want to ensure that you don’t ever activate Sync by accident, you can set a Policy inside HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome:

[Image: Regedit]

You can set the Windows policy by simply importing this registry script.
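
For reference, such a script is tiny. Here is a minimal sketch, assuming Chrome’s SyncDisabled policy; verify the policy name against Google’s policy documentation for your Chrome version before deploying:

    Windows Registry Editor Version 5.00

    ; Sketch: disables Chrome Sync via the SyncDisabled machine policy
    [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
    "SyncDisabled"=dword:00000001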

After Sync is disabled, buttons asking to turn on Sync disappear, and (interestingly) if you try to sign into Chrome, a warning notice appears:

[Image: SyncDisabled]

Motivations

Many of the articles that came out this weekend were rife with speculation that this is some new attempt from Google to erode your privacy by vacuuming up even more of your browsing data.

I don’t work for Google and I have no insider knowledge, but I think such attributions are incorrect. I think the correct explanations are much more mundane:

  1. The old UX was just really weird for humans. “Wait, you computer nerds are telling me that I just signed into Google using Google Chrome in the browser content area and now you’re telling me that I need to sign into the Google Chrome Browser using the browser chrome. WTF?!?”
  2. The old UX was dangerously misleading for people who share computers, a worryingly common practice. The new UX makes it at least a bit more clear that if you’re browsing on a borrowed computer, you really should be using a discardable Guest profile. (I think Guest Profiles are one of the coolest little-used features in Chrome.)
  3. The new UX streamlines the enabling of Chrome Sync. Chrome Sync provides advantages to both the user and Google.

The Advantage to the User is that their satisfaction with the browser is higher. On average, satisfaction increases when Sync is on because the browser can do more for the user, better protecting them from phishing sites (via the password manager) and sharing their bookmarks, permission settings, etc on every device they use. Users with many devices (me) are especially excited about this benefit– I have enough to remember and configure, and I want my browser to help as much as it can.

The Obvious Advantage to Google is that users who are more satisfied with Chrome are more likely to use Chrome and not switch to some other browser. Chrome is simply stickier because switching to a browser that doesn’t have access to all of the user’s stored passwords, bookmarks, settings, etc, is less appealing.

In a world where browsers are shells around commoditized HTML parsers and script engines (e.g. compare Brave vs. Chrome), sticky features like Sync are critical to marketshare. Consider, for instance, Chrome on iOS. It runs atop Apple’s WKWebView and is by-(Apple’s)-design intrinsically inferior in almost every way to the native Safari. Except for one thing… Chrome has my settings and data and Safari doesn’t. So I’ll go out of my way to get Chrome on iOS because, to me, Sync is a critical time-saver that Safari can never match, because Safari doesn’t run on Windows and Chrome does. iOS represents a potentially addressable browser market of around a billion devices.

The Possible Unobvious Advantages to Google are what worry the skeptics and fearmongers– what if Google uses my data for something evil? Evil might range from a little evil (showing me ads for Beanie Babies because it sees I’m browsing a lot of Beanie Baby sites) to a lot evil (giving my data to “the Government” or my insurance company or my boss or some other bad person). Similarly, they could start using it recklessly (having my Google Home ask in front of my dinner guests “Hey, Eric, want to continue your search for the best Beanie Baby sales?“).

I personally don’t have significant concerns on this front (I got to see how seriously privacy is taken inside of Google and Chrome), but some people do.

My threat model is not your threat model.

If Evil Google is in your threat model, you can set an option to encrypt your sync data such that only your local Chrome instances can see it:

[Image: EncryptSync]

If Extra Evil Google is in your threat model, you shouldn’t be using Chrome at all, because obviously Extra Evil Google could just backdoor Chrome before encryption or after decryption.

-Eric

PS: A helpful thread from a Chrome area owner.



Understanding the Limitations of HTTPS

A colleague recently forwarded me an article about the hazards of browsing on public WiFi with the question: “Doesn’t HTTPS fix this?” And the answer is, “Yes, generally.” As with most interesting questions, however, the complete answer is a bit more complicated.

HTTPS is a powerful technology for helping secure the web; all websites should be using it for all traffic.

If you’re not comfortable with nitty-gritty detail, stop reading here. If your takeaway upon reading the rest of this post is “HTTPS doesn’t solve anything, so don’t bother using it!” you are mistaken, and you should read the post again until you understand why.

HTTPS is a necessary condition for secure browsing, but it is not a sufficient condition.

There are limits to the benefits HTTPS provides, even when deployed properly. This post explores those limits.

Deployment Limitations

HTTPS only works if you use it.

In practice, the most common “exploit against HTTPS” is failing to use it everywhere.

Specify HTTPS:// on every URL, including URLs in documentation, email, advertisements, and everything else. Use Strict-Transport-Security (preload!) and Content-Security-Policy’s Upgrade-Insecure-Requests directive (and optionally Block-All-Mixed-Content) to help mitigate failures to properly set URLs to HTTPS.
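
As a sketch, the relevant response headers on an HTML document might look like this; the values are illustrative, so tune max-age and the directive list to your site before shipping (and note that Block-All-Mixed-Content is a separate, optional CSP directive):

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
    Content-Security-Policy: upgrade-insecure-requests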

Mixed Content – By default, browsers will block non-secure scripts and CSS (called “Active Mixed Content”) from secure pages. However, images and other “Passive Mixed Content” are requested and displayed; the page’s lock icon is silently hidden.

Non-secure Links – While browsers have special code to deal with Active and Passive mixed content, most browsers do nothing at all for Latent Mixed Content, where a secure page contains a link to a non-secure resource. Email trackers are the worst.

Privacy Limitations

SNI / IP-Address – When you connect to a server over HTTPS, the URL you’re requesting is encrypted and invisible to network observers. However, observers can see both the IP address you’re connecting to, and the hostname you’re requesting on that server (via the Server Name Indication ClientHello extension).

TLS 1.3 proposes a means of SNI-encryption (Encrypted SNI), but (unless you’re using something like Tor) an observer is likely to be able to guess which server you’re visiting using only the target IP address. In most cases, a network observer will also see the plaintext of the hostname when your client looks up its IP address via the DNS protocol (a leak that DNS over HTTPS aims to address).
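
To see for yourself what a network observer sees, here’s a sketch using Wireshark’s tshark; the interface name is an assumption for your machine, and older Wireshark builds name the field ssl.handshake.extensions_server_name rather than tls.…:

    tshark -i eth0 -Y "tls.handshake.extensions_server_name" \
           -T fields -e ip.dst -e tls.handshake.extensions_server_name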


Data Length – When you connect to a server over HTTPS, the data you send and receive is encrypted. However, in the majority of cases, no attempt is made to mask the length of data sent or received, meaning that an attacker with knowledge of the site may be able to determine what content you’re browsing on that site. Protocols like HTTP/2 offer built-in options to generate padding frames to mask payload length, and sites can undertake efforts (Twitter manually pads avatar graphics to fixed byte lengths) to help protect privacy. More generally, traffic analysis attacks make use of numerous characteristics of your traffic to attempt to determine what you’re up to; these are used by real-world attackers like the Great Firewall of China. Attacks like BREACH make use of the fact that when compression is in use, leaking just the size of data can also reveal the content of the data; mitigations are non-trivial.

Ticket Linking – TLS tickets can be used to identify the client. (Addressed in TLS1.3)

Referer Header – By default, browsers send a page’s URL via the Referer header (also exposed as the document.referrer DOM property) when navigating or making resource requests from one HTTPS site to another. HTTPS sites wishing to control leakage of their URLs should use Referrer Policy.
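
For example, a sketch; pick the policy value that matches how much referrer information you actually need to share:

    Referrer-Policy: strict-origin-when-cross-origin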

Server Identity Limitations

Certificate Verification – During the HTTPS handshake, the server proves its identity by presenting a certificate. Most certificates these days are issued after what’s called “Domain Validation”, a process by which the requestor proves that they are in control of the domain name listed in the certificate.

This means, however, that a bad guy can usually easily get a certificate for a domain name that “looks like” a legitimate site. While an attacker shouldn’t be able to get a certificate for https://paypal.com, there’s little to stop them from getting a certificate for https://paypal.co.com. Bad guys abuse this.

Some sites try to help users notice illegitimate sites by deploying Extended Validation (EV) certificates and relying upon users to notice if the site they’re visiting has not undergone that higher level of vetting. Sadly, a number of product decisions and abysmal real-world deployment choices mean that EV certificates are of questionable value in the real world.

Even more often, attackers rely on the fact that users don’t understand URLs at all, and are willing to enter their data into any page containing the expected logos:

[Image: a phishing page bearing the expected logos]

One Hop – TLS often protects traffic for only one “hop.” For instance, when you connect to my https://fiddlerbook.com, you’ll see that it’s using HTTPS. Hooray!

What you didn’t know is that this domain is fronted by Cloudflare CDN’s free tier. While your communication with the Content Delivery Network is secure, the request from the CDN to my server (http://fiddlerbook.com) is over plain HTTP because my server doesn’t have a valid certificate[1]. A well-positioned attacker could interfere with your connection to the backend site by abusing that non-secure hop. Overall, using Cloudflare for HTTPS fronting improves security in my site’s scenario (protecting against some attackers), but browser UI limits mean that the protection probably isn’t as good as you expected. Here’s a nice video on this.

Multi-hop scenarios exist beyond CDNs; for instance, an HTTPS server might pull in an HTTP web service or use a non-secure connection to a remote database on the backend.

DOM Mixing – When you establish a connection to https://example.com, you can have a level of confidence that the top-level page was delivered unmolested from the example.com server. However, returned HTML pages often pull in third-party resources from other servers, whose certificates are typically not user-visible. This is especially interesting in cases where the top-level page has an EV certificate (“lighting up the green bar”), but scripts or other resources are pulled from a third-party with a domain-validated certificate.

Sadly, in many cases, third-parties are not worthy of the high level of trust they are granted by inclusion in a first-party page.

Server Compromise – HTTPS only aims to protect the bytes in transit. If a server has been compromised due to a bug or a configuration error, HTTPS does not help (and might even hinder detection of the compromised content, in environments where HTTP traffic is scanned for malware by gateway appliances, for instance). HTTPS does not stop malware.

Server Bugs – Even when not compromised, HTTPS doesn’t make server code magically secure. In visual form:
[Image: NoSilverBullets]

Client Identity Limitations

Client Authentication – HTTPS supports a mode whereby the client proves their identity to the server by presenting a certificate during the HTTPS handshake; this is called “Client Authentication.” In practice, this feature is little used.

Client Tampering – Some developers assume that using HTTPS means that the bytes sent by the client have not been manipulated in any way. In practice, it’s trivial for a user to manipulate the outbound traffic from a browser or application, despite the use of HTTPS.

Features like Certificate Pinning could have made it slightly harder for a user to execute a man-in-the-middle attack against their own traffic, but browser clients like Firefox and Chrome automatically disable Certificate Pinning checks when the received certificate chains to a user-installed root certificate. This is not a bug.

In some cases, the human user is not a party to the attack. HTTPS aims to protect bytes in transit, but does not protect those bytes after they’re loaded in the client application. A man-in-the-browser attack occurs when the client application has been compromised by malware, such that tampering or data leaks are performed before encryption or after decryption. The spyware could take the form of malware in the OS, a malicious or buggy browser extension, etc.

Real-world Implementation Limitations

Early Termination Detection – The TLS specification offers a means (the close_notify alert) of detecting when a data stream was terminated early, to prevent truncation attacks. In practice, clients do not typically implement this feature and will often accept truncated content silently, without any notice to the user.

Validation Error Overrides – HTTPS deployment errors are so common that most user-agents allow the user to override errors reported during the certificate validation process (expired certificates, name mismatches, even untrusted CAs, etc.). Clients range in quality as to how well they present the details of the error and how effectively they dissuade users from making mistakes.


-Eric

[1] A few days after posting, someone pointed out that I can configure Cloudflare to use its (oddly named) “Full” HTTPS mode, which allows it to connect to my server over HTTPS using the (invalid) certificate installed on my server. I’ve now done so, providing protection from passive eavesdroppers. But you, as an end-user, cannot tell the difference, which is the point of this post.


HTTPS Only Works If You Use It – Tipster Edition

[Image: Convoy with three armored tanks and one pickup truck]

It’s recently become fashionable for news organizations to build “anonymous tip” sites that permit members of the public to confidentially submit tips about stories of public interest.

Unfortunately, would-be tipsters need to take great care when exploring such options, because many organizations aren’t using HTTPS properly to ensure that the user’s traffic to the news site is protected from snoopers on the network.

If the organization uses any non-secure redirections in loading its “Tips” page, or the page pulls any unique images or other content over a non-secure connection, the fact that you’ve visited the “Tips” page will be plainly visible to your ISP, employer, fellow coffee shop patron, home-router-pwning group, etc.
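
One quick way to spot the first problem is to inspect the redirect chain yourself. A sketch using curl against a hypothetical tips URL:

    curl -sIL https://news.example.com/tips | grep -i -e "^HTTP" -e "^Location"

Any http:// hop in the output means the visit was visible to the network in plaintext.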

[Image: NYTimes call for Tips, showing non-secure redirects]

[Image: The New Yorker Magazine call for Tips, showing non-secure redirects]

Here are a few best practices for organizations that either a) accept anonymous tips online or b) use webpages to tell would-be leakers how to send anonymous tips via Tor or non-electronic means:

For end users:

  • Consider using Tor or other privacy-aiding software.
  • Don’t use a work PC or any PC that may have spyware or non-public certificate roots installed.

Stay private out there!

-Eric


Do Not Lie to Users

Multiple people working on Outlook.com thought this was a reasonable design.

After a user deletes an email, then manually goes into the Deleted Items folder, then clicks Delete again, then acknowledges that they wish to Permanently Delete the deleted item:

[Image: Delete]

… the item is still not deleted. You can “Recover deleted items” from your Deleted items folder:

[Image: Recover]

… and voila, they’re all hiding out there:

[Image: Purge]

Further, if you click the Purge button, you’ll find that it doesn’t actually do anything.

The poor user is expected to:

  1. Be aware of this insane behavior
  2. Individually check a box next to each unwanted message, then click Purge.

Microsoft’s design is offensively anti-privacy.

-Eric

PS: This sums it up pretty well.

