
Apple, for some reason, has a reputation for making usable products and for very good engineering. Of course, I’m out to point out that both reputations couldn’t be further from the truth.

Engineering for dummies

I am not an engineer. But I do have a minimal understanding of what engineering is all about. Which is apparently more than can be said about all the idiots raving about the quality of Apple’s engineering.

Especially for them, here is a little excerpt from that notoriously hard-to-consult source of information, Wikipedia:

Engineering is the discipline, art and profession of acquiring and applying technical, scientific, and mathematical knowledge to design and implement materials, structures, machines, devices, systems, and processes that safely realize a desired objective or invention.

Now our idiotic Apple apologists seem not to know about the last portion of that definition. Engineering aims at realizing a desired objective. It is not about gratuitous, aimless technological feats.

The irony is that Apple is extremely bad at engineering. At times they pull off industrial design decently, and even that, much less frequently than most people think. The rest of this post is dedicated to exemplifying this.

A few cheap shots

Before getting into the more complex arguments, I have to address, with spiteful delight, a few of the most idiotic arguments people give to defend the idea Apple is good at engineering.

The best one, and certainly the most ridiculous one, is the infamous “straight solder line” episode. According to some sources, Steve Jobs insisted all solder lines on the Apple ][ be perfectly straight. This is seen by some as attention to detail.

Sorry folks, that’s just plain idiocy. It has no added value whatsoever. It doesn’t enhance functionality, or provide extra reliability. It is just misspent effort.

Then of course, fast forward to 2010, and you have the iPhone 4 with its revolutionary antenna. Indeed, it is quite revolutionary to dare put out a phone in 2010 with an antenna that is faulty by design. That’s not even attention to meaningless detail, it’s just sloppy – this kind of thing should be identified during testing.

UPDATE: Come on, serious engineers do not discard the warnings given by other engineers and partners.

With that out of the way, let’s get on to the more serious stuff – and please note I’m talking about computers in the rest of this post.

Computers designed not to be used: hardware

Apple & our favourite rodent

Apple brought us the mouse. Of course, contrary to what uninformed people sometimes believe, they didn’t invent it nor were they the first to put it on the market, but they were definitely the ones to popularize it.

Ironically, they have consistently made the worst mice on the market ever since. Raneko’s excellent mice gallery hereunder will help me illustrate the point. It lists Apple mice chronologically from right to left.

One can vaguely excuse the discomfort induced by most of the early mice it made (the awful cubical boxes, seen on the right-hand side of the picture, that actually hurt your hands) by the fact that they were the first mice around. Everybody needs experience to learn. Nobody was doing better at the time.

In 1993, they finally moved in the right direction with something a bit more round, vaguely more comfortable, the Apple Desktop Bus Mouse II. However, by that time, Logitech already had several much more comfortable products out there. Oh yeah, and they included multiple buttons too.

You’d think Apple would learn something from competitors, wouldn’t you? Wrong. They then went out of their way to design the pretty but ridiculously uncomfortable “hockey puck” mouse for iMacs in 1998, by which time very ergonomic mice were available for PCs. Good engineering usually does not involve going from slightly decent to extremely bad, but that is what Apple did.

In the 2000s, Apple ups the ante with the awful Mighty Mouse. Sure, it looks nice, but it is uncomfortable to use, and the long-awaited extra buttons are nearly impossible to use. And in a day and age when everybody was moving away from mechanical balls under mice because they clog up so easily, Apple goes and puts one on top of the mouse as a scrolling device. Massive failure.

Not to worry: in 2009 Apple shows it still has its mojo by producing a stunningly gorgeous mouse, the Magic Mouse, which is still ill-conceived. It’s a pity really. It sports lots of nifty features never seen before on a desktop mouse, but forgets to do what other manufacturers figured out over twenty years ago: be comfortable.

And that’s the key to it all, really. Apple has, for thirty years, consistently failed at making a mouse that is comfortable to use and respects the basic ergonomics of the human hand. This has been unacceptable for twenty years. And yet these guys are engineering geniuses? Sorry, but a mouse’s first job is to be comfortable in your hand when controlling your computer for any period of time.

But Apple makes computers that are supposed to be pretty to look at, not used.

Apple & the keys to your computer

That also shows in their keyboards.

In this domain, their track record is decent. Their keyboards have been more or less acceptable, given the era, most of the time.

But then, in 2009, Apple decides numeric pads, a standard feature on most desktop keyboards since the IBM PC went on the market, are not useful anymore. This at a time when more and more laptops include numeric pads. Any heavy computer user appreciates a numeric pad, even if, granted, one can live without it.

But better than that, they made the keys perfectly flat. Not concave at all.

Casual computer users won’t mind. But anyone who does extensive work on computers should be complaining. Because flat keys are a huge pain for touch typing. This is not something people have just discovered either. Keyboards have had concave tops for decades for precisely this reason. A good engineer never discards lessons from the past without good reason.

Again, hardware made to be looked at rather than used a lot.

Quality first – sometimes

Apple computers and devices are on average more expensive than other brands. All the more so – and this is seldom taken into account – because it’s usually much harder to upgrade a Mac than a PC, and if your screen dies, well, you might as well buy another computer.

But Apple loyalists like to claim this is due to the high quality of the products.

Unfortunately for them, that’s just not true. And that is quite apart from the design flaws pointed out above (which should not be disregarded when paying a premium).

Quality control at Apple means massively delivering non-booting iMacs (or alternatively broken screens). A company that takes build quality seriously – while actually charging a premium for it – would not have that kind of issue. This kind of problem is expected of low-end devices, not top of the range machines.

Even more interesting is the failure rate of Apple laptops. While being bested by Sony, who basically builds premium laptops too, is acceptable, it is quite damning for Apple to be bested by Toshiba and Asus, two companies which are in the commodity laptop business. Much cheaper and more reliable.

What was that about quality again?

It’s the software, stupid!

But of course, Apple loyalists will be quick to point out that I’m missing the point, because the Apple experience is all about the quality of their software, their beloved Mac OS X.

Now there’s no disputing the fact that Mac OS X is a good OS. But it is far from immune to criticism, including in the engineering & usability department. Given Apple’s reputation, I find that quite ironic.

Usability issues

The Apple menu bar

The Apple menu bar is one of my favourites. It’s a typical case of difference between Windows and Mac OS, so people usually tell me it’s a question of getting used to it. Unfortunately, that’s not all it is.

Just a reminder for those unfamiliar with Mac OS: the Apple menu bar gathers the menus that Windows applications display inside each window into one central location shared by all applications on your Mac: the top of the screen. Check out the link above if you want to know more.

The idea behind placing all that up there is to conform to Fitts’s law. This basically states that

the time required to rapidly move to a target area is a function of the distance to and the size of the target.

The consequence is that since a target at the top edge of your screen is effectively infinitely deep (you cannot overshoot it with the pointer), it is easier to acquire than the top of your window.
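In its common Shannon formulation, the law reads T = a + b · log₂(D/W + 1), where D is the distance to the target and W its size along the axis of motion. A minimal Python sketch, with purely illustrative constants (a and b vary per device and user), shows why the edge target wins:

```python
import math

def fitts_time(a, b, distance, width):
    """Shannon formulation of Fitts's law: T = a + b * log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

# Illustrative device/user constants, not measured values.
a, b = 0.1, 0.15

# A 20 px deep menu inside a window, versus a menu at the screen edge,
# where overshoot is impossible and the effective depth is enormous.
in_window = fitts_time(a, b, distance=400, width=20)
screen_edge = fitts_time(a, b, distance=400, width=2000)

assert screen_edge < in_window
```

The edge target wins simply because its effective size is huge, not because of any magic.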

The problem is not that Fitts’s law is untrue. It’s that it is a very bad metric.

For better or for worse, computer interfaces have been for decades using the desktop metaphor. Here again, Apple deserves credit for popularizing the desktop metaphor (with the Macintosh). The basic idea is that your computer mimics, insofar as applicable, the way you work with your physical desktop.

Now the desktop metaphor is broken in many ways, and most operating systems wander away from it in various ways, usually for good reasons. For instance, spatial file-browsing just sucks for most people, so breaking the metaphor in that case is a good idea.

But the way the Apple menu bar breaks the metaphor is Very Bad™. Basically, it separates the application from its configuration (and options, etc.). Which is the equivalent of needing to rummage in your desk drawers to configure the stapler that is in your hands. It dissociates an object (the application) from its properties (configuration, options, etc.). Which is a disruption of your thought process.

Furthermore, if Fitts’s law really mattered to Apple, most actions could be carried out by a context click (the equivalent of the right-click in Windows), since no movement whatsoever is needed. But that’s not the case. Understandably so, since the actual right-click is still a despised feature only begrudgingly given to its users by Apple.

Adding insult to injury, and perfectly negating any benefit one might have found in the application of Fitts’s law, the Apple menu bar can also result in unnecessary mouse clicks. Indeed, if the window whose menu you want is not the one in focus, you first need to bring that window into focus, and only then can you access the menu. Talk about usability.

And of course, let’s not forget that with larger and larger screens, multiple screens and the like, reaching the very top left of your screen is getting to be more of a big deal than it was on the screens of thirty years ago.

Window controls

Another usability pet peeve of mine is the non-discoverability of the window controls in OS X. You see, the close button, the “zoom” button (the closest the Apple world comes to maximize) and the minimize button sport the red, green and orange colours respectively. No visual cue as to what they do. Just colours. Red is about as universal as an X for signalling that the user is about to dismiss or reject something (i.e. close the application), but green and orange tell me nothing at all about the current functionality of the button. To have the faintest idea, I need to hover over the button to get the visual cues which Windows (among others) permanently provides.

And just to make it more fun, the colours will change according to your theme. I guess that helps the colour-blind people who would otherwise be screwed… Not really user-friendly.

Ergonomic placement

Finally, another ergonomics issue which people dismiss as a question of habit when it is far from it.

For this example, I’ll start off with the Windows way. You see, the Start menu and the window controls are placed where they are for a reason. The Start menu is at the bottom left of your screen, while the window controls (minimize, maximize, close) are at the top right of your window.

This is purposefully thought out for right-handed people, who are, after all, the vast majority (sorry, lefties). Why? Those areas, which are destined to be very frequently used, are the ones that require the least physical effort to reach with a mouse. A simple rotation of your forearm about your elbow lets you reach them; your elbow doesn’t need to move. In Mac OS, those window controls are at the top left, which means you necessarily need to move your whole arm (elbow included) to reach them.

Probably not a big deal, but here again, attention to detail is far from characterizing Apple, and in this case does characterize Windows.

Keep it simple… as long as it’s simple

Just for laughs, I’ll add a link to Bruce Tognazzini’s webpage. I disagree with him on a number of questions, but this particular page made me smile in agreement. For reference, the guy is one of the original Mac OS engineers and founded the Apple Human Interface Group. The most important take-away is that Apple’s “keep it simple” philosophy actually makes your life harder over time. But it’s alright as long as you don’t use your computer much…

Engineering issues

Then there are those nice little things in OS X that are just plain dumb. It actually doesn’t even boil down to engineering, but just to plain common sense.

See, that close button I was talking about, it doesn’t actually close the application, it just closes the window. While I can make sense out of that as all geeks can (because no, an application is not just the window), my non-geek friends can’t (because to them the application is just the window).

Again, this simply breaks the desktop metaphor. If I dismiss an object on my desk, it’s not supposed to be lying around on the desk afterwards; otherwise I would have left it available (which is what minimizing is for in computing). Yet in Mac OS X that’s exactly what happens: the application is still there, hogging system resources. And it’s quite confusing to the casual user, because it’s consuming resources while not being clearly visible.

And there is no one-click way to actually quit the application. If I want to do it, I have to use keyboard shortcuts or multiple clicks. Great.

Now let’s even concede the difference between closing the window and quitting the application to Apple (we shouldn’t, certainly not to an OS that claims to be so user-friendly, but still, just for the sake of argument). It’s poorly executed.

First, it’s not done consistently. There are applications which you actually quit (for good) by clicking on the close button. Just not all of them. See, those where you can only open a single window are actually quit, while those where multiple windows can be opened are left running (even if only one window is open). Good luck explaining that to a casual user. That’s bad engineering, and bad usability.

Second, if you’re going to do that kind of “close window but not quit application” nonsense in an operating system that is basically built on top of Unix, you should use the suspend function that has been around in Unix for decades, so that you waste no resources. But that is of course not how it is done. Bad engineering.
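For reference, the mechanism in question is standard Unix job control: SIGSTOP freezes a process without destroying its state, and SIGCONT resumes it exactly where it left off. A minimal Python sketch (Unix only; `sleep` stands in here for an application):

```python
import os
import signal
import subprocess
import time

# A child process standing in for an "application".
proc = subprocess.Popen(["sleep", "60"])

# Suspend it: the process keeps all its state but gets no more CPU time.
os.kill(proc.pid, signal.SIGSTOP)
time.sleep(0.1)

# Resume it later, exactly where it left off.
os.kill(proc.pid, signal.SIGCONT)

# Clean up the example.
proc.terminate()
proc.wait()
```

The memory of a stopped process remains eligible for swapping by the kernel, so a suspended application costs next to nothing while it waits.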

And then there’s a last, unrelated engineering mess-up I find amusing. Mouse acceleration is exceedingly lousy on Macs. It’s one of the reasons I always feel weird when using one. It’s actually so bad that users have started a petition to request it be changed, and there are several freeware applications out there to correct the problem…

Let me spell it out for you. Not only has Apple succeeded in mis-implementing the mouse acceleration algorithm so that the default behaviour sucks, but they do not provide controls to fix this problem (even though such controls are fairly trivial to implement). Where it gets hilarious is when you notice this problem was introduced by Mac OS X – previous versions handled this correctly. Again, Apple manages to regress instead of progress. Congratulations!
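To illustrate how trivial such controls would be to expose, here is a minimal Python sketch of a two-slope pointer acceleration curve with its parameters surfaced. The function and its parameters are invented for illustration; this is not Apple’s (or anyone’s) actual algorithm:

```python
def accelerate(dx, dy, gain=1.0, threshold=4.0, factor=2.0):
    """Two-slope pointer acceleration: movements slower than `threshold`
    are scaled by `gain`, faster ones additionally by `factor`.
    Exposing these three knobs is all a user-facing control needs."""
    speed = (dx * dx + dy * dy) ** 0.5
    scale = gain * (factor if speed > threshold else 1.0)
    return dx * scale, dy * scale

# Slow movement: passed through unchanged with default gain.
slow = accelerate(2, 0)
# Fast movement: doubled by the acceleration factor.
fast = accelerate(10, 0)
```

Real pointer drivers use smoother curves, but the principle (a speed-dependent scale factor with user-tunable parameters) is the same.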

Where does that leave us?

Am I trying to argue that Apple only makes badly engineered unusable lame products? Hardly. Apple makes good products. Definitely very desirable products, and the looks of most of them are absolutely beautiful – even though I am personally starting to tire of the “white iPod” look all over the place and welcome the change that is currently occurring.

Only an idiot would defend the viewpoint that Mac OS is a crappy OS or that the iMac is a bad computer.

However, Apple products have just as many quirks as other manufacturers’ products do, and the track record is actually getting spottier by the day. Furthermore, between the premium price and the much more limited software availability compared to the Wintel platform, the products exclude themselves from many markets – it’s no surprise Apple doesn’t sell that much to enterprises these days. And of course, the overarching impression that form is more important than functionality oozes ever more strongly out of the Cupertino-based products (re-read the hardware section if you’re still in denial about this).

As such, what are Apple products? Niche products. In the same way shiny sports cars are niche products. A few of the more affluent people with specific tastes will buy them. In the case of sports cars, because driving fast and “sporty” is enjoyable. In the case of Apple, because owning a beautiful computer is pleasing. And for the geekier because you have a Unix foundation.

But just like the sports car owner would be an idiot to mock the sedan/station wagon owner or deride him about engineering feats when his sports car can’t carry many people around or hurts his back, the Mac owner is in a very bad position to mock computers whose primary goal is not to be aesthetically pleasing, but to get the job done.

And just like sports cars are niche, so are Macs. With a market penetration of 5.1%. Because despite the hype, PCs are the true computers for the rest of us.

In a nutshell

Apple’s engineering and quality is certainly not above industry par. It is just as guilty as the next company of releasing shoddy products (and has a consistent track record of this in the mouse area). It routinely fails to apply basic principles, even going so far as to make things worse over time.

As such, its reputation for quality and good engineering is an absolute myth. Given the low standards of the IT industry, this doesn’t stop Apple from making desirable products, mainly thanks to their aesthetics. As such, there is a very good market for Apple products – albeit a niche one. There’s definitely nothing stupid about buying a Mac, but it’s definitely ridiculous to claim Macs are vastly superior to PCs in any domain other than aesthetics.


These days all the young kids love Google. All the older kids too actually. Heck, pretty much everybody loves Google. I don’t.

A little bit of (my own) history

Sure, I used Google for years. Started somewhere in 1999, after having been a happy AltaVista user. At the time, there were no doubts Google had made some major breakthroughs in search. So of course I switched to Google. And then, there was this other nice side to it: it was the new kid on the block, a nice little company, very far from any corporate behemoth.

Nowadays however, it is in my view the most dangerous corporation on the planet, and definitely much worse than any corporate behemoth I might have worried about in the late 90’s or early 2000’s. And it is absolutely adored by everyone, with the most vocal lovers being people who probably don’t even recall a time before Google.

What does Google do exactly?

Google does search, obviously. It also, and this is not an exhaustive list:

  • Provides hosted email (Gmail)
  • Provides maps & geolocation (Google Maps)
  • Provides hosted RSS feed management (Google Reader)
  • Provides an operating system (Android)
  • Provides a web browser (Chrome)
  • Provides an office productivity suite (Google Docs)

A comparison point

Now, the fact a company provides lots of services is not a bad thing of course. Rather the contrary. The issue is with what puts Google apart from most other players in the same markets. I’ll contrast with Microsoft here, but the argument holds when contrasting with many other companies.

You see, a universal fact about companies is that they want your money – and there’s certainly nothing objectionable to that. What makes companies different from one another, is how they seek to get your money.

Microsoft is interested in your workflow (I see raised eyebrows). The way Microsoft wants to make itself valuable to you, and hence entice you into giving them money, is by defining how you work. What they want, is to make sure that every time you have a task to carry out, you will use (and hopefully pay for) a Microsoft product. Want to use a computer? Here’s Windows for you. Want to write a document or make some calculations? Here’s Office. Email? Outlook/Exchange. Build internal collaboration in your company? SharePoint. Find something? Bing. Play a video game? Xbox.

What matters to Microsoft is what you do. Not exactly the specifics of what you do – they couldn’t care less whether you’re writing internal memos or the next great American novel in MS Word – but how you do it, what tasks are involved.

And then it will monetize those tasks. This can be done directly by making you pay for software, or indirectly. In the indirect method, you get something free, say, internet search. Well, it just so happens that the tight integration of the whole suite of free applications with other paying applications makes said paying applications more attractive.

I never said this monetization of free apps was all that effective, mind you, but that’s their strategy. They’re interested in your tasks, your workflow.

Back to Google

Now Google provides pretty much the same products and services as Microsoft. So their model is pretty similar, right?

Wrong. Couldn’t be more different.

Google’s revenue comes, nearly exclusively, from advertisement. For all analytical purposes, Google can be seen as an advertisement firm. And ever since the sector came into existence, advertisers have only ever been interested in one thing.

Your data.

Who you are. What you do. Why you do it. How much money that brings you. What books you read. What you like. What you don’t like.

They want to know everything about you.

For those who aren’t following, the reason they want this is to be able to target their ads to you in the most efficient way possible, in order to maximize clicks on the ads, and thus maximize advertising revenue.

It just so happens that Google has all that information on you. After all:

  • It can read your email (Gmail)
  • It knows where you are and where you like to go (Google Maps)
  • It knows what you like and are interested in (Google Search and Reader)
  • It knows what you do and what data you generate (Google Apps)
  • It could know how much relative time you spend on any task (Android)


This is an advertising agency’s ultimate goal.

It just so happens that this is also any totalitarian state’s wet dream come true.

Don’t be evil!

Ah yes, the famous unofficial motto of Google is “don’t be evil”. That should be reassuring, right?

Well, I don’t know about you, but that motto, if anything, makes me worry even more. That somebody in a corporate environment could even consider that a motto speaks volumes. Let me give you an example.

What would you think about one of your kid’s teachers telling you his main goal in his work is “don’t rape the kids”? Would you be reassured? Or very much worried that somebody needs to state the obvious like that to exorcise some inner compulsion?

I feel the same about Google.

The track record

The good news is that, as far as I’m aware, Google hasn’t yet launched into massive-scale exploitation of the data it has at its disposal, nor has it transformed into some evil Big Brother.


There are, however, some worrying trends, and not only because of the sheer amount of data Google has at its disposal.

Google has already been actively seeking collaboration with governments for years. And not just for nice stuff, either: for war. Yup, all that nifty Google Earth technology has already been purposefully adapted for warfare. That means killing people. It doesn’t get much more evil than that in my book.

Google is also keen on collaborating with the NSA. Oh sure, in this case the aim is for the NSA to help Google secure its network and work against cybercrime, not for the NSA to access Google’s data. But as arguably the organisation with the largest wealth of information on people, Google collaborating with a spying agency makes me nervous.

Oh yeah, and by the way, isn’t it just a tad worrying that Google’s data repositories apparently need securing? So these guys have all your data, but can’t protect it? Because, in case you haven’t been paying attention, that’s what triggered the whole China debacle recently. Actually, China deserves a section of its own.

The China thing

People are so busy lauding Google’s courage for pulling out of China these days, that they tend to forget what really happened.

In 2006, Google readily complied with Chinese requests to self-censor its content. They did it in a heartbeat.

Now don’t get me wrong, I definitely think it’s better for oppressed people to get some access to information rather than none at all, so in the long run, I believe Google’s move there was for the best. But what I found worrying was Google’s eagerness to go and adapt its “don’t be evil” motto to fit its actions. Far from demonstrating that it was a principled company, Google showed its principles were highly negotiable based on what it felt was needed at a precise moment in time.

Then, recently, their Chinese infrastructure got hacked into. Fearing for their interests, they pulled out of China. If you bought into Google’s spin that this was because they had a change of heart about Chinese censorship, you’re a fool.

This last event is telling in two ways. First, it confirms that Google really doesn’t care about what is evil or not. If providing censored content was less evil than providing none in 2006, this remains so in 2010. So clearly, it’s all about their own interest, and not whatever is evil.

Second, it shows Google is not all that good at securing its data. And that’s downright terrifying. Because even if Google doesn’t do anything bad with the data it has collected, if that data falls into the wrong hands (especially governmental ones), we’re in for some very bad trouble. Think 1984.

In a nutshell

Google has exactly the kind of data necessary to make an Orwellian hell come true. It has proven that its corporate culture is anything but principled, and that it is more than willing to cooperate with governments, including for nefarious purposes such as killing people. On top of that, it has proven unable to secure the very sensitive data it has access to.

If despite all this you are still entrusting your data to Google, I guess you really love Big Brother.

So, web 3.0 huh. What’s that going to be? Lots of speculation, most of it is worthless, so might as well add my own (just as worthless) speculation.

My view of Web 3.0 is really the web as an API. Nothing really all that new, and yet potential new uses may abound.


Many people nowadays seem to forget that the web is only a small portion of the internet. A specific subset of the functionality offered by a global network supporting multiple protocols on top of a basic transportation layer (TCP/IP, in case you were sleeping, or are not that technical in which case you are forgiven).

The thing is, when you come to think of it, the web is a very basic application: it’s not that much more than just a huge document repository, shared across many, many, many servers. It obviously has some very nifty features, which made it the formidable powerhouse it is today. Nothing even comes close to hyperlinks in this regard. Without those, the web would be an irrelevant piece of computing history and nobody would bother anymore.

However, the one feature that enabled the web 2.0 revolution is extensibility. Being able to build lots of nice stuff on top of the web is what allowed Ajax, RSS and the like to be added on, enabling more powerful applications than simply a static document you can read and which contains links to other potentially interesting documents.

Interestingly, as illustrated by the yesteryear trend of mashups, one of the key functionalities enabled by these extensions to the web is the access and editing of data away from the original web pages. You can have your tweets appear on your blog. You can update your Facebook status via Twitter. You can embed YouTube videos just about anywhere.
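RSS is the archetype of this: the feed publishes the data in a machine-readable form, so any client can pull the items out and re-render them away from the original page. A minimal sketch using only Python’s standard library (the feed content here is invented):

```python
import xml.etree.ElementTree as ET

# A tiny hand-written RSS 2.0 fragment standing in for a real feed.
feed = """<rss version="2.0"><channel>
  <title>Example blog</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

root = ET.fromstring(feed)

# The data is now fully detached from the original page: a mashup,
# widget or desktop client can display these items however it likes.
items = [(item.findtext("title"), item.findtext("link"))
         for item in root.iter("item")]
```

Every mashup of the era boiled down to this pattern: fetch structured data from one place, render it in another.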


For a while, the main consequence of this was web 2.0. In many ways, web 2.0 was a way of making much of networked computing browser-centric. Long gone are the days of using a text editor (or any specialized application) to edit a webpage. Wikis and other frameworks have done away with that, at least for casual users. Even for corporations, portal solutions have made lots of editing more browser-centric. Web apps à la Gmail, Google Docs and the like have made even tasks like email or document editing browser-centric. That was in my view a funny move. At the end of the day, the whole web app play is about painstakingly replicating the advanced features of desktop applications in a browser. All those who cried genius about Gmail apparently never used Outlook, or Thunderbird for that matter. Both applications kick Gmail’s ass big time as far as amount and depth of functionality are concerned. But web 2.0 enthusiasts never seemed to notice.

Web 3.0, in my view, will illustrate the opposite move. For highly connected people making the most of what the web has to offer these days – and that’s a lot – the browser doesn’t really cut it anymore. A few illustrations of this:

  • Twitter is probably the best example of this. Typically, Twitter is a tool with a low signal-to-noise ratio. Any even half-serious Twitter user will follow at least dozens of people, with only a fraction of the exchanged messages being useful to him. To handle this, many Twitter users have started using Twitter clients, whether lightweight ones or full-blown applications with powerful features.
  • Facebook is likely going the same way, if application availability is any indication.
  • Even pure webmail operators have offered, for already quite some time, the possibility to use standalone mail clients.
  • Serious RSS users have been using RSS readers for years.


These are only a few examples, and as such are not yet indicative of any web 3.0 trend. Web 3.0 kicks in when these features are re-centralized in a desktop client. The best example of this is the Sobees (lite or desktop) software, but I’m sure there are other examples out there.

What Sobees does (very well, I might add), is centralize your use of social networks, and further, of most Web 2.0 applications. Within one application specifically designed for this purpose (unlike the web browser or even the web page), you can access and update your Twitter, Facebook, MySpace and LinkedIn accounts – soon your RSS feeds.

Thanks to the fact the application is dedicated and custom-tailored to the purpose it is built for, it is extremely powerful and flexible, allowing you to manage all of this web-related information in exactly the way you want. You are no longer constrained by some web application designer’s view of what is a good way to access/update your data. All usability aspects such as font sizes, panel location etc. are configurable, as are themes and connectivity options. Of course, if you’re afraid of tweaking, it comes with a sane default configuration, so don’t fret.

I see this kind of application as being a big trend of the future, and a big part of web 3.0. Powerful centralization of data, tailored to a user’s needs, and, significantly, disconnected from a browser.

I think this will all become even more interesting when the disconnect from browsers becomes greater. I could typically imagine this kind of application including a simple web rendering engine in order to be able to consult links and the like directly in the application. Add email to boot, and you suddenly have a nearly central place to consult all of the most important things you could want to consult on the web. Why would you even bother directly opening a web browser?

So, amusingly enough, the future of the web may reduce the importance of the web (browser). Got to love the irony.

Dreaming of clouds?

Does this mean the browser is destined for the trash pile of history? Hardly. While I do believe its importance will wane in the long term, the browser is here to stay – if only because display in a simple web browser will likely always remain an internet lingua franca, and a good way to easily consult anything you want. Amusingly enough, while the popularity of Twitter or Facebook desktop clients is almost certainly due to the fact that people got used to clients on smartphones, I see the smartphone being a long hold-out for web browsers. On a small device, a simple barebones web browser will always remain the most efficient way of consulting a lot of websites, even if that becomes less true in the desktop world.

Likewise, I believe there is a case for web-centric computing and maybe even web-centric operating systems. While I hold anyone seriously believing that Chrome OS will be massively adopted and spell out the future of operating systems to be a crackpot, I think that kind of approach will definitely have its uses. Think highly mobile salespeople. Medical professionals. Waiters in restaurants. Policemen. The list goes on. The key here is extreme portability (in the physical sense), cases where hardware is a hindrance. But as a rule? Hardware gets more powerful and cheaper every day; what would I really gain by giving mine up to entrust all my data and applications to some service provider? The cloud will be used – it already is – but it will be far from being the only place actual computing takes place. By a long shot.

In a nutshell

Web 3.0 will not be about the browser anymore. The browser is becoming an antiquated access method to the amount and variety of data available to us today. The browser will continue to exist of course, but only as one among several other more appealing ways of consuming the internet. Timeframe? Three to five years. The bets are on.


@spyrosm pointed out I should at least reference other existing views of web 3.0, so here are a few:

  • Semantic web. A popular view, but technical feasibility has been brought into question. My own experience leads me to believe it will never happen.
  • Web services & Software as a service. Very close to what I suggest, much more realistic in my view.
  • Virtual web/3D web. Basically a GUI overhaul with some functionality added on top. I question the business model and actual benefits. Still, there could be useful applications.
  • Cloud computing. Whatever the strengths or weaknesses of the cloud, I see the cloud addressing larger issues than just web experience.
  • The growing up of web 2.0. Basically the idea of web 2.0, with its growing pains taken away. A realistic view, probably, largely due to the fact that it is not overly ambitious – which is a good thing in my opinion.


The most interesting thing is probably the existence of many differing views, which reminds us that labels are only that, labels. In other words, don’t get too excited about buzzwords such as web 3.0, they’re only shorthand for complex phenomena, and only capture a part of the reality. Just like language itself.