I have something to say about that…

Distributed and syndicated content: what’s wrong with this picture?

You know those AMP URLs that you get from Google search results and that often pop up on Twitter?

Instead of https://www.rt.com/sport/… you’ll get https://www.google.co.uk/amp/s/www.rt.com/document/…

Screenshot of AMP's RT article, headline: Meet Achilles the Cat, deaf animal psychic
What you’re seeing is Google’s AMP project hosting content for Russia Today. This lets Google preload the page while you’re still on the search results, so that when you click the link, the text appears immediately.  (This solves a real problem, by the way.  That shorter loading time can make the web a far more enjoyable experience.)

Facebook’s Instant Articles and Apple News operate similarly but without the benefit of being on the web or using real URLs — a much worse starting point.

The web relies heavily on the same-origin policy which, amongst other things, helps browsers manage permissions (e.g., access to your location, camera, or microphone), attribute bad actions (such as phishing attacks), and assist you with things like passwords and filling out forms.  This core aspect of web architecture ties permissions and security settings to a particular origin, like rt.com. Distributing or syndicating content removes that context by hosting one site’s content within a different site, which can confuse users and stop browsers from keeping the web safe.
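As a minimal sketch of the problem (using the standard URL API available in browsers and Node.js — the specific paths are illustrative, not exact AMP URLs), you can see that an AMP-hosted copy of a page carries Google’s origin, not the publisher’s:

```javascript
// The browser's notion of "origin" is scheme + host + port.
// Permissions, cookies, and saved passwords are keyed to it.
const original = new URL("https://www.rt.com/sport/");
const ampHosted = new URL("https://www.google.co.uk/amp/s/www.rt.com/sport/");

console.log(original.origin);   // "https://www.rt.com"
console.log(ampHosted.origin);  // "https://www.google.co.uk"

// Same article, different origin — so the browser treats the
// AMP-hosted copy as Google's content, not rt.com's.
console.log(original.origin === ampHosted.origin); // false
```

So anything the browser keys to rt.com — saved passwords, camera permissions, phishing warnings — simply doesn’t apply to the AMP-hosted copy.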

In the W3C Technical Architecture Group we have been thinking about this issue.  While we understand the value these approaches provide, they also pose serious problems. Fundamentally, we think it’s crucial to the web ecosystem that you can understand where content comes from and that the browser can protect you from harm, and we are seriously concerned about publication strategies that undermine both.

We have published this finding to explain our thoughts in more detail.

This post originally appeared on the W3C TAG blog.

The evergreen web

You know those old browsers in TVs, exercise bikes, kiosks and the like that can’t browse the web anymore? Have you ever noticed how strange it is that they become dusty and increasingly hard to use, while the browsers on your mobile phone or laptop carry on just fine?

It happens because no one keeps them up to date. As web technologies (and therefore websites) evolve around them, they drift further from being able to handle what a site serves them, and as a result they become less and less useful.

A black-and-white old browser with an error message: "Unable to load https://theguardian.com"
Photo from an exercise bike’s defunct browser, from Peter O’Shaughnessy of @samsunginternet

I’ve edited a finding with the W3C Technical Architecture Group about that.

In The Evergreen Web, we write:

Constant evolution is fundamental to the Web’s usefulness. Browsers that do not stay up-to-date place stress on the ecosystem. These products potentially fork the web, isolating their users and developers from the rest of the world.

Browsers are a part of the web and therefore they must be continually updated. Vendors that ship browsers hold the power to keep the web moving forwards as a platform, or to hold it back.