tl;dr The web is an important tool to prevent further corporate control of our society.
I tend to think in longer timescales and a bigger picture than just the next job or the latest shiny thing. I care deeply about where we’re going as a species, and everything I do with my web development and design is best seen through that lens. I don’t want to help put systems into place that lead us further down the path of corporate control and inequality; I want to build systems that lead us away from it.
My argument is that every perceived shortcoming of the web is an acceptable trade-off to favour knowledge, art, and services we depend on being in the hands of the people, rather than concentrating them further into the hands of a few corporations. The inconsistencies and messiness are a feature, not a bug.
Gone in a Flash
I don’t feel any excitement about developing for platforms that may or may not be here in five years - platforms that are more restrictive, less open, and under the control of this or that corporation. The web has been here while multiple platforms have risen and fallen.
Flash was a cautionary tale in this regard: so much wonderful art and creativity was poured into that platform, but it was ultimately a closed system run by a single company, and when Adobe finally marked it as end-of-life, many had to scramble to create emulators, reverse engineer the player software, or create ways to convert content into open web standards. Decades of work and culture risked being lost, whereas open standards agreed upon by all browsers have ensured that even the earliest websites dating back decades are still viewable.
Low barrier to entry
Everyone has a web browser and a text editor. That’s literally all you need to create basic web content. It’s entirely due to this absurdly low barrier to entry that I’m here as a web developer. The web is easily the embodiment of maker or hacker culture, where you can take something apart to learn how it works and how to do it yourself.
This same low barrier is also reflected in how the web is available almost everywhere in some form, on everything from mobile devices to TVs to smart watches. You don’t have to be on a specific platform to access it - it works all across the board.
The long view
It’s still possible to load the earliest websites from the early 1990s, including the very first website. For something as big a part of our culture as what we put online, shouldn’t we want the ability to preserve that for the future, without the risk of being locked out because the app store we put it on no longer exists? The web is designed with backward compatibility, longevity, and resiliency in mind, so that websites we create today will hopefully still work 20 years from now.
An excellent example of the thought and care that’s gone into designing the fundamental technologies powering the web is the <picture> element and how it can be used to provide a multi-layered fallback mechanism - even for browsers that don’t understand it. The first thing to understand if you’re not a web developer is that when most browsers encounter an HTML element they don’t understand, they simply ignore it and then try to render any elements inside of it that they do understand. The <picture> element allows defining multiple variants of an image for the browser to choose from based on the device’s screen size and other factors. Older browsers won’t understand the new element, but because HTML is designed from the ground up for these scenarios, it’s possible to simply provide an <img> element inside of the <picture> for those browsers to use.
Building on the previous point, <picture> is invaluable for migrating to far more efficient image formats such as WebP without breaking backwards compatibility. Here’s an example:
<picture>
  <source srcset="Omnipedia_logo.webp 1x" type="image/webp">
  <source srcset="Omnipedia_logo.png 1x" type="image/png">
  <img src="Omnipedia_logo.png" alt="The Omnipedia logo: a checkered globe with several squares missing at the top.">
</picture>
The order of the <source> and <img> elements tells the browser in what order to attempt to load the images. If the browser is recent enough that it understands WebP images, it’ll load the WebP version. If the browser is a bit older and doesn’t understand WebP but does understand <picture>, it’ll load the next source in a format it does understand. If the browser is so old that it doesn’t understand <picture> at all, it’ll completely ignore everything else and display the <img> element.
But the web can’t do X
You might be surprised by all the things modern web apps can do. When I speak of web apps, I’m not talking about wrapping a website in Electron, which inefficiently bundles a whole browser engine with each app; rather, websites that run in browser processes the same way multiple tabs do, without the need to load a new browser engine for each. These work offline and are treated by mobile operating systems like native apps, with their own entry and icon in the app switcher. This works on Android with both Firefox and Chrome. All you do is go to a website that supports installation and choose install from your browser’s menu; it’s that easy.
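What makes a website installable is a web app manifest: a small JSON file linked from the page (via <link rel="manifest" href="/manifest.webmanifest">) that tells the browser the app’s name, icons, and how it should appear when launched. A minimal sketch, with hypothetical names and icon paths:

```json
{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#222222",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

The "display": "standalone" entry is what makes the installed app open in its own window, without browser UI, so it looks and feels like a native app.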
Web sites and web apps have been able to work offline for years now, in much the same way as native apps. We can cache arbitrary parts of a site or app, including images and other content, and seamlessly show users that content even if they’re disconnected from the internet. If requested by a user, we can send push notifications much like native apps. The next section details other crucial APIs and features that have also traditionally been the domain of native apps, now available to websites and web apps.
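Offline support is built on service workers. The sketch below shows the common cache-first pattern, with hypothetical cache and file names; the service worker APIs only exist in a browser’s worker scope, so they’re guarded here, and in a real deployment this file would be registered from page script via navigator.serviceWorker.register('/sw.js').

```javascript
// sw.js - a minimal service worker sketch (hypothetical cache and file names).
const CACHE_NAME = 'my-site-v1';
const PRECACHE_URLS = ['/', '/index.html', '/styles.css', '/app.js'];

// Service worker globals (self, caches) only exist in a browser worker scope.
if (typeof self !== 'undefined' && 'caches' in self) {
  // On install, cache a known set of assets up front.
  self.addEventListener('install', event => {
    event.waitUntil(
      caches.open(CACHE_NAME).then(cache => cache.addAll(PRECACHE_URLS))
    );
  });

  // On fetch, serve from the cache first and fall back to the network,
  // so cached pages keep working with no connection at all.
  self.addEventListener('fetch', event => {
    event.respondWith(
      caches.match(event.request).then(
        cached => cached || fetch(event.request)
      )
    );
  });
}
```

The same worker is also where push notifications are handled, via a 'push' event listener; the caching strategy shown here (cache first, network fallback) is only one of several common patterns.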
Isn’t it slower than native apps?
Technically speaking, yes, but is that difference worth embracing corporate-owned platforms for? Modern web browsers have been highly optimized to make use of hardware acceleration as much as possible, and there is a wealth of resources out there promoting best practices for making web sites and web apps load quickly and work smoothly. When done well, the web performs well enough that the difference rarely matters.
For cases where near-native performance is crucial, WebAssembly was created to solve exactly this problem, and it’s supported in every major browser; you can compile C/C++, Rust, Apple’s Swift, and many other languages to WebAssembly. WebGL has provided hardware-accelerated 3D graphics for years, and the upcoming WebGPU intends to expose Vulkan, Direct3D 12, and Apple’s Metal in a performant, powerful, and safe manner.
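As a small illustration of how seamlessly WebAssembly integrates with JavaScript, the snippet below hand-assembles the bytes of a minimal module exporting an add(a, b) function and calls it; in practice these bytes would come from a compiler and be fetched as a .wasm file rather than written by hand.

```javascript
// Raw bytes of a minimal WebAssembly module exporting add(a, b) -> a + b.
// Hand-written only for illustration; compilers emit these for you.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate, then call the exported function like any other.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // prints 5
```

Once instantiated, WebAssembly exports are plain JavaScript functions, which is what makes incrementally moving hot paths of an app to compiled code practical.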
We’ll always need a certain amount of native code to run operating systems, web browsers, and some apps, but I think far more native apps could work as web apps, no longer locking them into one platform but opening them up to anyone on any platform.
What about privacy?
Browser engines have been highly engineered to sandbox and isolate web content processes from each other and from the operating system. This robust and time-tested security arguably makes web content more secure than some native apps; installing a native app on some platforms can give malicious code lower-level access than web content could ever dream of having.
For any potentially sensitive feature, like tracking a user’s location, browsers deny access unless the user specifically approves it, and modern browsers use heuristics to prevent a website from requesting permissions too often or too early, such as in the case of push notifications.
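A sketch of what this looks like from a site’s perspective, using the Geolocation API (the function and callback names here are hypothetical; the guard lets it degrade gracefully where the API doesn’t exist):

```javascript
// Hypothetical helper: ask for the user's location via the Geolocation API.
// The browser - not the site - shows the permission prompt; the site only
// learns the outcome through the success or error callbacks.
function requestLocation(onPosition, onDenied) {
  if (typeof navigator === 'undefined' || !('geolocation' in navigator)) {
    // Outside a browser (or with the API unavailable), report failure.
    onDenied('Geolocation not available');
    return;
  }
  navigator.geolocation.getCurrentPosition(
    pos => onPosition(pos.coords),
    err => onDenied(err.message)
  );
}
```

There is no way for the page to skip the prompt or grant itself access: the capability simply doesn’t work until the user says yes, which is the inverse of the trust model of installing a native binary.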
I don’t buy the argument that the web inherently needs to be less secure and less private than native apps.
Aren’t web apps inconsistent?
While some are undoubtedly far worse than their native counterparts, they don’t have to be. I’ve been using the Starbucks web app for some time now, and it’s just as good as their native Android app. The argument that platforms like iOS and Android enforcing their UI conventions and structure is a good thing because it ensures a consistent user experience doesn’t hold water for me.
But corporation X treats users really well
Just because a given corporation cares about privacy or some other user-centric value now doesn’t guarantee that it won’t one day change management or go down a less ethical route. Even if it never betrays its users in this way, what if it goes out of business, sells off part of its business, or is forced to split up by regulators? It still exists because of - and reinforces - a system that rewards exploiting users far more than any ethical considerations. And even a corporation that’s genuinely great at placing users’ needs and rights first tends to cause problems by centralizing power, in the same way a sleeping giant turning over can crush you and not even notice. How many products has Google killed because they weren’t financially viable at the scale Google operates at, despite there often being a set of users for whom those products were very important?
The web as an operating system
What if an operating system was engineered from the ground up to treat the web as a first class citizen, using existing and well established web technologies?
This is not a hypothetical question, because Mozilla tried it with Firefox OS; the project ran from 2013 to 2015, and was not shut down because of technical or performance problems (it was targeted at low-power handsets), but because the mobile market is incredibly difficult to break into, with Apple and Google having a stranglehold on it. While Firefox OS is no longer developed, KaiOS was forked from it and continues to this day.
Another notable operating system that places the web at the forefront is webOS, first developed by Palm, then acquired by HP, and now developed by LG for use on their TVs and smart devices.
All of these have some notable things in common: while the core of the operating system (the Linux kernel) is written in native code, they’re built from the ground up with a web rendering engine running at a lower level than a web browser, allowing far more efficient use of memory and resources than something like Electron; they’re also all designed to work on mobile or other resource-constrained devices.
Conclusion
Many of the arguments against the web seem to boil down to an authoritarian and heavy-handed approach: solving inconsistencies and shortcomings by centralizing under a handful of corporations, trading away freedom for short-term convenience. I would argue that we can get many of the benefits of more consistent UX, better performance, and many modern features while avoiding corporate control, in a bottom-up fashion: by creating best practices and educating developers in a collaborative manner. To do otherwise is to treat the symptoms of bad development rather than the cause.
I could write another few paragraphs on this topic, but instead I’ll leave you with this wonderful quote by Mathias Schäfer (licensed under CC BY-SA):
There are several relevant browsers in numerous versions running on different operating systems on devices with different hardware abilities, internet connectivity, etc. The fact that the web client is not under their control maddens developers from other domains. They see the web as the most hostile software runtime environment. They understand the diversity of web clients as a weakness.
Proponents of the web counter that this heterogeneity and inconsistency is in fact a strength of the web. The web is open, it is everywhere, it has a low access threshold. The web is adaptive and keeps on absorbing new technologies and fields of applications. No other software environment so far has demonstrated this degree of flexibility.