Philosophy, Technology

John Searle wrongs Computer Programs by denying them the Possibility of Consciousness

This is a great talk; go watch it first. I agree with everything John Searle says, except for his point that a computer program without special hardware can never claim to be conscious.

At 5:10, he says: “All of our conscious states, without exception, are caused by lower-level […] processes in the brain, and they are realized in the brain as higher-level or system features.” He continues: “[consciousness is] the condition that the system is in.” And I agree. But why does he think the same cannot be true for computer programs? Isn’t software “the condition (the state of the bits) that the system (the hardware) is in”?

At 11:00 he goes on to claim that a computation (i.e. manipulating symbols) is only about syntax, while consciousness is also about semantics. It seems that he defines semantics as what arises when symbols are interpreted by a consciousness, and I’m fine with that. This, however, leads to a circular argument when he claims that a computer program cannot have consciousness because it doesn’t possess any intrinsic semantics. Let’s say that semantics arise when symbols are interpreted by a consciousness. Who is to say that a complex computer program cannot do that interpretation just as validly as a human mind can? In a way, he wrongs the computer program in exactly the same way as the materialists wrong him when they tell him: “we’ve done a study of you, and we’re convinced you are not conscious, you are a very cleverly constructed robot.” The computer program might reply, just as he does: “Descartes was right: you cannot doubt the existence of your own consciousness.”

Philosophy, Society

How to compare people, or even nations?

How should two different entities relate to each other?

What do I mean by “entities”? People, organizations or even nations. And what do I mean by “different”? Obviously, that they are not the same, which implies that they are “unequal” in some way. What we think are the important ways to measure, to compare two entities: that’s what determines everything.

Is it wealth? Is it happiness? Is it age? Is it intelligence or technological progress? Without even considering the problem of how exactly to define those terms, comparing people or nations by each of these measures gives vastly different results. But irrespective of which measure is chosen, one entity always comes out inferior to the other. When choosing to measure in “wealth”, a rich CEO or an industrialized nation is “higher up”, or “more advanced”, compared to a lowly construction worker or a so-called developing nation. But when choosing to measure in “happiness”, it might be just the other way around.

So, back to the question of how two entities should relate to each other, i.e. what the nature of their relationship should be. Depending on the measure chosen to compare the two, there will always be inequality. Now, very often inequality is considered a negative phenomenon. It is used to point out that something is unevenly or unfairly distributed. But we wouldn’t want all people to be exactly equal to one another, either. We wouldn’t want all nations, irrespective of the measure chosen, to be the same: the same climate, the same food, the same population density, the same level of energy consumption.

The key appears to be to rigorously define in which measure we want which entities to be equal, or at least more equal, and then to demand more equality in that measure. In some cases this may indeed be possible and even desirable. It is important to be aware, though, that by choosing someone else’s measure, you invariably agree to play their game. As an example, by demanding “a career” (in the traditional work-a-lot-and-then-get-promoted sense of the word), women agree to play the game of fighting for promotions. Alternatively (or at least additionally), they could demand fewer stressful full-time jobs for everyone.

Either way, there is a host of cases where it isn’t straightforward at all to settle on a measure for comparison. For example, when choosing a country to live in, or a person to marry, you might want to consider both the wealth of the country or person and how happy you are going to be (among other factors). So you might be tempted to set up an equation to determine the “ultimate value” of each scenario p:

value(p) := λ1*wealth(p) + λ2*happiness(p)

where λ1 and λ2 are weighting factors you need to determine for yourself. For example, if you set both to 0.5, it means wealth and happiness are equally important to you.
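As a toy sketch of that formula (the entities, their scores, and the normalization to a 0–1 scale are all invented for illustration), the combined measure could look like this:

```python
# Toy sketch of the combined "ultimate value" measure.
# Entity data is hypothetical; both measures are normalized to 0..1.

def value(entity, l1, l2):
    """Weighted combination of wealth and happiness."""
    return l1 * entity["wealth"] + l2 * entity["happiness"]

# A rich but unhappy CEO versus a poor but happy worker (made-up numbers).
ceo = {"wealth": 0.9, "happiness": 0.3}
worker = {"wealth": 0.2, "happiness": 0.8}

# With both weights at 0.5, wealth and happiness count equally.
print(value(ceo, 0.5, 0.5))     # roughly 0.6
print(value(worker, 0.5, 0.5))  # roughly 0.5
```

Note that with equal weights the CEO still comes out “ahead”, simply because his lead in wealth is larger than his deficit in happiness; the ranking is entirely an artifact of the chosen weights.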

It may be worth pointing out that by combining the measures of wealth and happiness into one, what we have done, in effect, is simply create yet another measure. I’m sure some economists or psychologists have already come up with a model like that.

But now there are many more combinations in which very different entities can end up having the same value: a person who is very rich but totally unhappy can have the same “value” in that measure as one who is very poor but extremely happy.

In order to form a balanced and constructive relationship where both entities can learn from each other, it seems important to choose the weights λi such that the value of both entities ends up being equal. That way, neither one appears more advanced in any absolute sense, and neither one is being looked down upon by the other.
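Under the same toy model (two measures, weights summing to 1, made-up data), the equalizing weights can even be computed in closed form: requiring λ1·wealth(a) + λ2·happiness(a) = λ1·wealth(b) + λ2·happiness(b) with λ1 + λ2 = 1 gives λ1 = Δh / (Δw + Δh), where Δw is a’s lead in wealth and Δh is b’s lead in happiness. A minimal sketch, assuming each entity leads in exactly one of the two measures:

```python
# Toy sketch: choose the weights so that two entities end up with equal "value".
# Assumes entity a leads in wealth and entity b leads in happiness (invented data).

def equalizing_weights(a, b):
    """Solve l1*wealth + l2*happiness for equal scores, with l1 + l2 = 1."""
    dw = a["wealth"] - b["wealth"]        # a's lead in wealth
    dh = b["happiness"] - a["happiness"]  # b's lead in happiness
    l1 = dh / (dw + dh)
    return l1, 1.0 - l1

ceo = {"wealth": 0.9, "happiness": 0.3}
worker = {"wealth": 0.2, "happiness": 0.8}

l1, l2 = equalizing_weights(ceo, worker)
# Both entities now score the same combined value:
print(l1 * ceo["wealth"] + l2 * ceo["happiness"])
print(l1 * worker["wealth"] + l2 * worker["happiness"])
```

The point of the sketch is only that such weights exist whenever neither entity dominates in both measures; which weights are the *right* ones remains a value judgment.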

It is easy to make the mistake of choosing large weights for precisely those measures where you yourself already score high. That way, you’ll never be the loser. But this backfires as well: without noticing it, you will end up behaving like a hegemonic empire. And both entities lose out.

Philosophy, Technology

iOS 7’s Redesign

When the iPhone debuted, it set the standard for a lot of things, among them the GUI of a touch-based mobile operating system. But that was in 2007, and the design of iOS has gotten a bit stale and heavy by today’s standards. Some even say that “iOS is the Windows XP for mobile devices” – its familiarity is loved by so many that it’s going to be hard for Apple to make radical changes. But yesterday, Apple did just that: it introduced iOS 7, which finally brings a clean redesign of the whole GUI while keeping all the elements in their familiar places. So, is it any good?


I don’t care much for the new app icons: the rounded corner radius is too large, and the Safari icon in particular is just gross. Also, the default home-screen image is reminiscent of some 70s bling, which looks especially awful when seen through a translucent panel; fortunately, that image can be swapped out by the user.

But other than that, I like the redesign. It really shines in the apps as well as the Notification and Control Centers: no visual clutter or heavy chrome, a smaller and crisper color palette, and carefully chosen whitespace and typography. The translucent navigation areas clearly sit on a layer above the content (when scrolling in Safari or playing a movie, they even slide unobtrusively out of sight). The apps in turn sit above the background image, where the parallax effect is used to form a sort of three-dimensional box to peek into.


The new design is not “flat” (whatever that means), but it is streamlined. And while removing visual clutter always comes at the risk of harming discoverability, I think iOS 7 didn’t go too far. Buttons may be gone, but the interaction elements are in their familiar, consistent places, and if it’s blue text (red in some apps, but it always looks like a link) or a wireframe icon, the user knows it’s safe to tap.

Everyone should choose a better home-screen image, Apple should fix the icons, and then iOS 7 is a solid foundation to improve upon.

Philosophy, Society

Gratitude and why you shouldn’t expect things to “just work”

I’m utterly convinced that the key to lifelong success is the regular exercise of a single emotional muscle: gratitude.

– Geoffrey James (hat tip to swissmiss)

Continuing the line of thought from my two previous posts, it feels like many of us have (or at least I have) come to expect many things that probably shouldn’t be expected. What I mean is this conception of “how things are supposed to work”, and the belief that somewhere on this planet there are those “true professionals” who do things that way. The conception that if you adhere to a couple of smart principles, including the scientific method, sound reasoning and a lot of grit, you eventually end up with a process that is “the right way”. And the thought that if you do things that way and do them thoroughly, you will eventually come up with a “proper” product, solution, or at least a satisfactory conclusion concerning that particular problem. And if you don’t, you are just doing it wrong, have too little experience, and probably aren’t (yet) one of those “true professionals”.

But what I’ve come to realize is that those “true professionals” don’t really exist. Even the most professional experts, most of the time, don’t have much of a clue what they are actually doing and are just making things up along the way. Of course, experience and formal training help, but even the best of us, most of the time, have to navigate a vast decision tree with simply too little information.

So what has that got to do with gratitude? Well, if you expect that things can be, should be, and are done “the right way” (assuming you are not working with, or using a product of, complete idiots, or believe you are one yourself), then you expect things to “just work” in the “expected way”. If they don’t, you are disappointed. But what’s even worse: if they do, you just ignore the fact that they worked, without ever feeling any gratitude, simply because that’s what you were expecting all along: that things work the way they are supposed to. So however things turn out, you either get disappointed or don’t feel much at all.

On the other hand, if you were to expect that things usually don’t “just work”, it is much easier to feel gratitude every time they actually do. And you will have a much happier life, giving you the power to tackle harder problems (not expecting to solve them on the first attempt).

Philosophy, Society

On uncertainty, success and failure

The truth is, most things aren’t as predictable as we’ve come to believe them to be. There are just way too many factors at play, too many unknown variables, and too many variables we don’t even know we would have to take into consideration.

As noted in these two great TED talks by Alain de Botton and Elizabeth Gilbert, our perception of who is responsible for our success has changed a lot. Before the modern era, poor people were called “unfortunates”; now they are simply “losers”, implying that it is mostly their own fault that they’re in the miserable situation they are in now. Never before in history have we as humans believed so much in our own power to make informed decisions, to impose our will on the world around us. Much was once believed to be in the hands of the gods or some other transcendent entity, but now it’s always our own fault when something doesn’t turn out the way it was supposed or planned to. With tragedy, the ancient Greeks even had an entire genre of drama dedicated to letting the audience empathically follow the stories of people who were simply struck by very bad luck.

It is true that the physical world at certain small scales is quite accurately modelled by sciences like physics and chemistry, and that this has enabled us to achieve things our ancestors didn’t even dare to dream about. But the bigger and more complex things get, the harder it becomes to model them precisely. Some might argue that intuition is then better suited to the job, but this is only sometimes true. Usually a combination of both intuition and the scientific method yields the best results. But still, things usually just don’t turn out exactly the way we predicted.

So should we stop trying just because the outcome is not as predictable as we’ve been led to believe? No, of course not. I think we should try nonetheless. And maybe we are lucky and it works. And maybe we fail and try to do better next time. And we fail again. And again. And again. And then we succeed. (Or at least we’ve helped others to come closer to the solution.)

Maybe rapid prototyping and short feedback loops help us figure out when we are heading in the wrong direction. And systems thinking and design thinking help us see the bigger picture.

Philosophy, Society

On optimism, being wrong and truth

A very good TED Talk by Tali Sharot on The optimism bias tells us something many of us have known (or shall we say suspected) for a long time already: the brains of healthy (non-depressed) people are fundamentally tilted towards optimism. For example, even when told that 40% of all marriages end in divorce, newly married people still don’t think there is any possibility they’ll get divorced themselves. Likewise, we underestimate the likelihood that bad things will happen to us personally. Sharot suggests that when drawing up financial budgets or when in dangerous situations, we should remember that we have this innate bias and deliberately adjust for it. But most of the time, the human optimism bias is a good thing; otherwise we would never dare to venture into anything new or risky.

The flip side of all this optimism is, obviously, that we always think we’re right. There is a great TED Talk by Kathryn Schulz: On being wrong. We will do almost anything to avoid having that nasty feeling creep up on us when we realize we were wrong. When somebody disagrees with us, we would rather think that he or she:

  1. doesn’t have the same information as we do,
  2. is plain stupid, or, if neither of those applies,
  3. is plain evil.

Rather seldom does the thought cross our minds that we ourselves might be wrong. Or that different people simply have different models of the world in which they think and into which they integrate new information, and that there might be a multitude of valid worldviews. So we should pause more often, think about all the complexities of our world, and allow ourselves to recognize that we may not know, or that we may be wrong.

To the smart fellow who chips in now and says we should just abide by the scientific method and reason objectively: 1) objectivity does not exist (see intersubjectivity); 2) read the closing words of this unsettling article by Jonah Lehrer in The New Yorker about the “decline effect” observed when trying to reproduce the results of scientific studies in fields such as medicine or psychology.

Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.

While I always say that it’s problematic to apply the terms “prove” or “proved” to anything other than pure mathematics (in science, theories can only be substantiated or falsified), I agree with the point being made. In anything that involves living beings (as opposed to, e.g., physics or chemistry), interactions pretty quickly become so complex that it is hard to say anything with certainty. That doesn’t mean the scientific method might not be the best (or shall we say one very good) way to approach these problems as well. But we should keep in mind that just because some theory has been substantiated by a scientific study or even two, that doesn’t mean it’s the absolute truth.

So the bottom line is that it is good to question things every once in a while, to remember that we actually know pretty little, to remember that we’re wrong way more often than we think. And to really consider other people’s worldviews and try to learn from them. But at some point, if you want to stay sane, you have to start believing in something again. How could you ever start a venture or bring any change to the world without believing in it in the first place? That’s when our innate optimism bias is essential. So that we do something – even when all the estimates say that the odds are low – to see whether it might be possible nonetheless. And sometimes it is. Just sometimes it is.


Déjà vu

I hate that feeling. You look at something or hear a story, and then you go on to other stuff while somewhere in the back of your mind you keep thinking about what you just heard or saw. And then, suddenly, it strikes you that you might have heard or seen this before; you feel like this wasn’t the first time you heard that story or watched that video. But now it’s already too late. It’s impossible to tell with certainty whether you really had experienced this before and remembered it only after contemplating the experience for a few minutes, or whether this was actually the first time and the experience has already become entangled with similar memories from the past, to the point where you cannot tell anymore whether it is a fresh memory or just a refreshed one.

Déjà vu, as they say…