Thursday, December 16, 2004

The Onion: Nigeria Chosen To Host 2008 Genocides

The Onion reports:


ABUJA, NIGERIA—At a celebratory press conference Monday, President Olusegun Obasanjo announced that Nigeria's troubled but oil-rich city of Warri has been chosen to host the 2008 Genocides.

Thursday, December 09, 2004

Grief as a Precautionary Tale?

I don't know whether anyone has brought this up before, but since I haven't come across it, I thought I might as well jot it down here...

People have often wondered about the biological function of the kind of deep, lasting grief that accompanies events such as the loss of a child. Why is it so debilitating, and why can it be so debilitating for such a long period of time? Wouldn't it make more biological sense to let the person get on with their life and the business of propagating their genes?

In Steven Pinker's book How the Mind Works (which would more accurately, but with less impact, be called "An Overview of What We Currently Know About How the Mind Works") he notes that such grief seems to act as a kind of deterrent - basically, knowing how awful it feels acts as a major deterrent to any behavior that might lead to those kinds of circumstances arising.

It occurred to me that perhaps the deterrence is not just for you -- but for those around you, and that this factor could help explain why it is so highly debilitating (if in fact this actually requires any separate explanation). Not only do you experience how awful it is, but others can also see how awful an effect it can have on you. And since in the times when evolution was at work we lived mostly in tribal groups with large numbers of relatives, kin selection would explain how this kind of grief could evolve to serve "both" these purposes.

Perhaps the optimal level of grief is the one where it has a large precautionary effect on the other kin members, even at the expense of really debilitating the grieving person (i.e. this is the level of grief that results in the greatest chance of those genes propagating, because the fewest people carrying the gene make the mistake that led to the grief).

How Explicit Does a Representation Have to Be?

Some people (and I suspect they're not the only ones) talk about representations as if something only counts as a representation if it is in some fairly explicit form that can be read by some process which then interprets its meaning.

For example, consider the rules of grammar that our brains respect (to a certain degree) in the production of speech. Under this view of representation, there are two options for how these rules exist in our brains: either these rules are represented in the brain, or they are in some sense built-in to the brain without being represented. In the latter case the system in some sense knows the rules, but it doesn't explicitly represent them -- the brain just works in a way that respects the rules, just as a calculator doesn't explicitly represent the rules of arithmetic, but just operates in a way that respects them.

This distinction between things that are represented and things that are 'built-in' sounds fine on the surface, but I'm suspicious of there being a real distinction there. Certainly the distinction makes intuitive sense. It's easy to imagine an explicit representation that is to be read off and used. And it's also easy to imagine the other case, of a system with no representation, that just works in a way that respects the rules. The thing is, though, that a representation is represented by some encoding scheme. You can represent something in an unlimited number of different ways (of course, different ways will have different properties, such as how long it takes to read the representation, how much space it takes up, etc.) and in each and every case what is important is that you have the right means of interpreting the representation.

And amongst this unlimited variety there are encoding schemes that represent the thing more explicitly and those that do so less explicitly. So my question is: is there just a spectrum here, from very implicit representations right up to very explicit representations? And is it possible to make any hard divisions of this spectrum? I strongly suspect that it is not.

The case where the brain simply respects the grammar rules but does not explicitly represent them would still, then, be a case of the brain representing those rules -- just implicitly. Taking another tack on this, I don't see how the brain can "know" the rules yet not in any sense have a representation of them. With the right means of interpretation we can still 'read' the implicit representation (not that we, as people trying to understand how the brain works, necessarily have the means to determine this means of interpretation). In other words, some aspect of the system's organisation implicitly encodes the rules, and the way that that organisation interacts with and influences the behaviour of the rest of the system constitutes the interpretation of those rules.
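To make the contrast concrete, here's a toy sketch (my own illustration; the rule and function names are invented for the example). The first version stores a grammar-like rule as data that a separate process reads off and interprets; the second contains no rule object anywhere -- its organisation simply respects the rule, like the calculator analogy above.

```python
# Explicit representation: the plural rule exists as a data structure,
# and a separate process reads it off and interprets it.
RULES = {"plural": lambda stem: stem + "s"}

def apply_rule(rule_name, stem):
    transform = RULES[rule_name]   # read the representation...
    return transform(stem)         # ...and interpret it

# Implicit representation: no rule object anywhere -- the system's
# organisation simply behaves in a way that respects the rule.
def pluralise(stem):
    return stem + "s"

print(apply_rule("plural", "cat"))  # cats
print(pluralise("cat"))             # cats
```

Both behave identically, and with the right means of interpretation you can 'read' the rule off the second version too -- which is the point: the difference looks like one of degree, not kind.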

It may seem a bit of a stretch to call such implicit cases 'representation'. But if what I'm suggesting is correct, then there's no hard distinction between "explicit representation" and "implicit representation", and thus the common meaning of representation is making a hard distinction that doesn't exist. Thus, if we are looking for technical accuracy (and of course we are often not) then we either have to modify the common meaning of the word or find another term that can have a more accurate meaning.

Tuesday, November 16, 2004

The Roachinator and Lycra-Clad Chickens

This is just fantastic. An article in The Australian reports on cutting-edge scientific research in robotics (link via The Thin Line).

"The robot, InsBot, developed by researchers in France, Belgium and Switzerland, is capable of infiltrating a group of cockroaches, influencing them and altering their behaviour."

"The third stage, undertaken by the French Centre for Scientific Research in Brittany, was to isolate the molecules that give cockroaches their smell -- to create a cockroach perfume".
I think this is going to be big. I can just imagine it: Roach, by Calvin Klein
"InsBot, which is green, the size of a matchbox and equipped with lasers and a light sensor, was developed by Switzerland's Federal Polytechnic School in Lausanne."
From the description it's obviously some sort of cyber commando roach with shoulder mounted laser deaaath raaaays.
"Other applications are also envisaged for the computer programs developed under the Leurre project. Guy Theraulaz, CRCA director of research, said it may be possible to build chicken-like robots that will be used to stimulate poultry."
That sounds a little raunchy. I don't know if I'd want to eat that at KFC. Wait a minute -- I see, my dirty mind was getting away from me a bit there -- it's some sort of robot that reads Shakespeare to the chickens.
"A lot of chickens don't move at all and die as a result. They need to be encouraged to run around. Robots could do that," he said.
Oh, I see. It's good to see the poultry industry making efforts to increase efficiency by utilising modern technology with their robo-chicken exercise instructors. "C'mon girls, let's move those bodies, kick those legs! We want to be 97% fat free. Yeah. Now some star jumps!".



(While it might sound like I'm being sarcastic about this technology, I actually think it's very interesting and has a lot of promise. It's just, some of those lines in that article...).

Sunday, November 14, 2004

Why Computer Games Give Us a Way of Looking at the Problem of Qualia

Qualia is the mystery of mysteries. If the world really is just processes involving aggregations of atoms, then what kind of thing is that sensation that is the taste of an orange, what kind of thing is the visual image we see when we look around?

There are lots of other things we don't understand, but with other problems we usually have some idea of what an answer might look like. With qualia we literally have no idea.

It seems impossible that it could be the product, like everything else, of processes involving atoms. It must be some special sort of entity, we think. And there may be literally no way for us to understand it, or if there is, we may not be smart enough to gain that understanding.

For an abrupt change of context, imagine a tribesperson living in an untouched, remote area. They have never seen or heard of modern technology, and they are given a Game Boy. They see the little person on the screen, they see them moving about an environment, and they see how they can control this person's actions. It must be some sort of magic.

They know of nothing that is even remotely like this Game Boy, and they have absolutely no idea how any such thing is possible. They have no idea of how they could even begin to understand it. But we know it can be understood, we know there's no magic.

Could qualia be to us like the Game Boy is to the tribesperson?

Friday, November 12, 2004

Thinking Outside of the Ought

Yeah, Paul Graham gets it. He doesn't restrict his thinking to the space of what things ought to be about -- he actually thinks about what's going on. Why does a particular candidate win an election? The political attitudes of the populace, the things that influence these attitudes, etc., of course. Well, at least that's what you'd expect it's about, that's what it ought to be about -- but that's the wrong tree, argues Graham. The answer he suggests is very simple and easy to see if your view isn't blinkered.

Sure, people understand it's important (read the article to see what I'm referring to), but I think it's clear they don't appreciate just how important it is, how important it is relative to other issues -- after all look at some of the candidates the parties have chosen, as Graham points out, or the apparent lack of real effort to address charisma issues.

And let's put this into the right perspective: the US, the most powerful country in the world, lots at stake in presidential elections, huge resources put into the opposing sides. And: a simple notion that's easy to see if your view isn't blinkered, massively overlooked by all these people over many decades who've had huge stakes in these elections. Election results in the most powerful country in the world could've been different -- that's the size of those blinkers.

Thursday, November 11, 2004

Dawkins's Review of Intellectual Impostures

Richard Dawkins's 1998 review of Sokal and Bricmont's Intellectual Impostures. Pretty good stuff - and interesting for his insights, not just what he says about the book. Right now I'm too lazy to try to describe what the book is about, but here's the first paragraph of the review:

Suppose you are an intellectual impostor with nothing to say, but with strong ambitions to succeed in academic life, collect a coterie of reverent disciples and have students around the world anoint your pages with respectful yellow highlighter. What kind of literary style would you cultivate? Not a lucid one, surely, for clarity would expose your lack of content.

Wednesday, November 10, 2004

Goddamn it irks me when people try to be critical or insulting of something and dress it up in humour on the pretense that they "don't really mean it"; where they want to mean it but be able to claim that they don't. Of course, you can say something and only half-heartedly mean it, and you can say something purely for the joke -- I'm only talking about the situation where someone does it because they want it both ways. Good, now that I've got that off my chest, I return you to your regularly scheduled programming... :-)

Sunday, October 24, 2004

The Nature of Language and Thought in an Argument Against Reductionism

More rough notes.

The whole is greater than the sum of the parts: lay two planks of wood on top of each other and they can hold more weight than the combination of what they could hold individually. Together you get more than you had separately; therefore reductionism is wrong.

That argument is flawed, because it confuses petty reductionism with reductionism. Are any laws of physics violated by the result of putting two planks of wood together? Obviously not. The result of putting the two planks together is fully explainable at the lower level of the laws of physics.

While that might be the reason why the argument is flawed, I don't think it'll always convince people. We arrive at conclusions through chains of reasoning, and even if our conclusion is shown to be wrong, if we still think our chain of reasoning is valid there's a good chance we'll still think the conclusion is right, too. I think there is one such chain of reasoning for this matter, and that this flawed reasoning comes about because of the nature of language and thought. I'll talk about it now.

It really can seem that we have something here that is 'greater than the sum of the parts', because we have something, the 'strength' of the pieces of wood, that is some amount when the pieces of wood are separate, and yet is more than twice this amount when they are combined. Doesn't it seem that we have in fact gained something that wasn't there before?

The problem is that in thinking this we are reifying the 'strength'. But wait a moment, doesn't reifying mean "to regard (something abstract) as a material or concrete thing", and isn't the strength of the planks a real thing? It is a real thing, but we must be careful with what we mean by 'real'.

The strength is real, but it is not a substance, such that we have created some new amount of this substance when we put the planks together. Strength is a real property, but it is not a "thing" in itself, and you might describe it as being both "real" and "abstract". It is the product of a number of factors, such as the type of material, the structural arrangement of the item, etc. Quantity of the property (strength) is not simply a function of the quantity of the things making it up.
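A rough numerical sketch of how this plays out (standard beam mechanics, simplified to arbitrary units; the specific numbers are invented for illustration): the bending capacity of a rectangular beam scales with width times height squared, so 'strength' is a nonlinear consequence of the arrangement of the material, not a substance that adds up.

```python
def beam_capacity(width, height):
    """Relative bending capacity of a rectangular beam (arbitrary units):
    proportional to width * height^2, via the section modulus."""
    return width * height ** 2

one_plank = beam_capacity(width=10, height=2)   # 10 * 4  = 40
two_loose = 2 * one_plank                       # 80: stacked but free to slide
two_glued = beam_capacity(width=10, height=4)   # 10 * 16 = 160: bonded into one beam

print(one_plank, two_loose, two_glued)  # 40 80 160
```

Glue the planks so they can't slide and the pair holds four times what one plank does -- yet no law of physics is violated anywhere; the 'extra' strength is just a consequence of the brute physical details of the arrangement.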

So what exactly is causing the problem here? Seeing 'strength' as a "real" thing, and thus that we have more of a "real" thing when the planks are combined, and thus that reductionism is violated. The root problem here is in considering that "real" can only mean a real "thing".

The "strength" of something is simply a consequence of the brute physical details of that thing -- there is no thing that is "strength" over and above those details. Those details have real consequences - such as how much weight it can carry before it breaks, but to say there is some actual extra thing called 'strength' responsible for those properties is a mistake. It is confusing a label in our heads for a thing in reality -- whereas that thing in our heads is really a description of reality.

Clarifying Reductionism

More rough notes, written more for my benefit, as an aid to organising my thinking, than as a genuine attempt to convince anyone of anything. It's all for me me me -- it's not for you *.


As Richard Dawkins has noted, reductionism is uncool. Even the name has a negative tinge: it's reducing things to something less than the original.

It is one of those concepts that everyone thinks they understand, because it seems so simple and self-evident: reducing things to their parts. Except, this self-evident view is wrong.

It's describing what Steven Weinberg has called petty reductionism. Reductionism proper -- what Weinberg has called grand reductionism (he borrowed both 'petty' and 'grand' from the language of criminal law) -- is the view that reality is the result of fundamental, universal laws, and that all the systems and apparent 'layers' that we see are simply the results of the operations of these laws.

The following is a short list of the reasons people seem to make flawed arguments against reductionism:
  • equating reductionism with -- the obviously wrong -- petty reductionism
    • this is why people think that "emergence" and the "whole is greater than the sum of the parts" stuff shows reductionism is wrong. Note that unless you think that these things violate the laws of physics, then you don't think these things violate reductionism proper.
  • equating reductionism with a methodology of trying to understand something by understanding its parts, and then using a failure in achieving this as evidence that reductionism (that is, 'grand reductionism') is false. Daniel Dennett has used the term "Greedy Reductionism" for when people use a reductionist methodology and paint an oversimplified picture of a phenomenon.
  • assuming that reductionism must mean reducing to some specific thing, and that if you can't reduce it to that, this shows that reductionism is wrong. There is, for example, a very prevalent belief that if reductionism were true then this would mean that human behaviour would simply be reducible to DNA.
  • assuming that the explanation of a phenomenon must have the same properties as the phenomenon itself. That is, life must be living because it has some life force, or water molecules must have 'wetness'. This makes it hard to see how something could have an explanation on a lower level, because the explanation invariably seems to involve things that don't have these properties.
  • on a similar line, thinking that because something seems very complex, it couldn't possibly have a reductionistic explanation, because you can't see how that'd be possible. Why should people expect that they could foresee one?

[*] which I came across via Scott McCloud (to find the specific spot on this page, search within the page for "Penny Arcade")

Tuesday, October 12, 2004

Film Version of Ian McEwan's "Enduring Love"

I just discovered there's a soon-to-be-released film version of Ian McEwan's Enduring Love, one of my favourite books. I hope they do a good job...

Wednesday, September 29, 2004

Language/Thought E.g. Involving Feigning Uncertainty

Just jotting down a minor e.g. of language/thought stuff. Someone forgot to mention something that they should have mentioned to someone else, and even though they know they forgot to say it, they say to the other person: "Oh, I might have forgotten to mention to you that...".

Tuesday, September 28, 2004

How Ali G Gets His Mock-Interviews with Prominent People

An article in Slate investigates:

The American version of Da Ali G Show recently wrapped up its second season on HBO, and, once again, a long list of prominent Americans have been embarrassed. Somehow, Sacha Baron Cohen, in the guise of a British would-be gangsta with a penchant for malapropisms and misunderstandings, managed to secure another passel of interviews with people like former EPA Administrator Christine Todd Whitman (who conceded that, yes, whale feces "have got to be massive") and archconservative Patrick Buchanan (who said that Saddam Hussein "was using BLTs on the Kurds")

...

But according to accounts from several people who have fallen for Baron Cohen's ruses—some of whom were too humiliated to go on the record—the come-on begins with a flattering letter sent to an unsuspecting target....

Friday, September 17, 2004

Universal Undo, Please!

Oh boy oh boy oh boy oh boy. If this blog weren't PG-13 rated :-) I can tell you that previous sentence would have contained fewer three-letter words and a lot more with four letters. So I was typing up this long blog post (not this one here, but a different one), and a few minutes out from my last save I hit some combination of keys and -- bam -- my Firefox window was gone (I have no idea what keys I pressed, but I'm pretty sure it didn't involve 'ALT-F, X').

Isn't it about time we had a universal undo capability? Along the lines of "if you can do it, you can undo it". So if I accidentally close down Firefox, I can bring it up exactly how it was -- same windows, same window state (e.g. scrolled to the same position within each window, and so on). And the same for any other application -- and for any other action you can do, inside or outside of an application.

I know there are technical issues here. At the same time, I think a lot of them are the legacy of software and frameworks that we're stuck with, for the time being at least (yeah, I know that description is a bit vague, but it's too much effort to try to make it clearer), and I think there are places where, with a bit of effort, the scope of undo could be extended. But it doesn't seem worth my while to get into more technical detail about this, so I'll leave it at that.
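For what it's worth, the core mechanism isn't exotic. Here's a minimal sketch of the classic command-pattern approach (my own toy example -- the class and method names are invented, and a real universal undo would need OS-level cooperation): every action snapshots whatever it needs to reverse itself, and a manager keeps the history.

```python
class CloseWindow:
    """An undoable 'close window' action: snapshots the window state
    (scroll position, etc.) before removing it from the open set."""
    def __init__(self, open_windows, window):
        self.open_windows = open_windows
        self.window = window
        self.saved_state = None

    def do(self):
        self.saved_state = dict(self.window)   # snapshot before closing
        self.open_windows.remove(self.window)

    def undo(self):
        # restore exactly as it was
        self.open_windows.append(dict(self.saved_state))

class UndoManager:
    """Keeps a history of performed actions so the last one can be reversed."""
    def __init__(self):
        self.history = []

    def perform(self, action):
        action.do()
        self.history.append(action)

    def undo_last(self):
        if self.history:
            self.history.pop().undo()

windows = [{"url": "http://example.com", "scroll": 120}]
manager = UndoManager()
manager.perform(CloseWindow(windows, windows[0]))
print(windows)  # []
manager.undo_last()
print(windows)  # [{'url': 'http://example.com', 'scroll': 120}]
```

The hard part isn't the pattern, of course -- it's getting every application and the OS itself to route their actions through something like this.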

Wednesday, September 15, 2004

Passed the Dive Medical

That's a big relief: I ended up passing my dive medical. As I wrote last week, the rock-in-the-head incident might've jeopardized it, but when I went back to the doctor today he'd managed to get the hospital records and he was satisfied that there was unlikely to be any problem. So tomorrow I'm going to book the dive course for sometime in the next few weeks...

Photo: Avebury, UK, April 2003



Avebury, UK, April 2003.

Tuesday, September 07, 2004

Thank you very much, Cara

I had a dive medical this morning. I've wanted to go diving since I was a kid. The results: inconclusive.

Back in year ten (12 years ago) at a school athletics day I was down on the oval walking along talking to a friend, Aaron Bond. There was this big crack, and then I was picking myself off the ground and there was a lot of blood. I had been hit in the head by a rock someone had thrown (though it wasn't directed at me).

I ended up spending a few hours in hospital after which I was able to go home. I think I was lucky that the skull is fairly thick where the rock hit, and I didn't get off too bad -- pain and headaches for some days afterwards and a fairly minor scar were about it.

A few days afterwards, though, the hospital rang and said I'd gotten a hairline fracture, but this apparently didn't require any treatment (my dad answered the call, so I'm not sure exactly what they said).

Apparently past skull fractures can be an issue for diving, because if there was any brain damage, the increased pressure underwater can trigger epileptic fits. So the doctor has to try to get my hospital records from 12 years ago. I don't know how good his chances will be, and if he can't get them I'll apparently have to get some tests done -- I don't know the nature of these -- to rule out any potential problems.

Not a huge deal if it ends up that I can't go diving, though it does irritate me that this has come about because someone was doing something stupid that hurt others.

Sunday, September 05, 2004

"Ethnic Cleansing" is a Pretty Audacious Euphemism

It didn't occur to me till I saw a news report on Sudan this evening - but jeez, isn't the term 'ethnic cleansing' a pretty big bloody euphemism? I'm surprised that it gets used in such a matter-of-fact way in news reports, as if it was a fair way of describing the situation. Oh, you know, it's only a bit of cleansing. Lucky we're cleansing things, and getting rid of all that yucky, dirty, disease-carrying stuff!

Tuesday, August 31, 2004

Some Alan Kay Quotes

You've probably heard Alan Kay's famous quote "The best way to predict the future is to invent it." I was looking for a reference for this, when I came across some more quotes from him, some of which I thought were pretty good:

"It [the computer] is the first metamedium, and as such it has degrees of freedom for representation and expression never before encountered and as yet barely investigated."

and on similar lines

"It (the computer) is a medium that can dynamically simulate the details of any other medium, including media that cannot exist physically. It is not a tool, although it can act like many tools."

(though I disagree that this means it isn't a tool).

Two Types of Intuition

More rough notes...

Intuition often gets talked about as if it were just a single type of thing which, depending on the context, is considered either a good thing or a bad thing. I think there are two different forms of intuition, one of which is more reliable than the other.

The reliable type is derived from a large mass of solid experience or knowledge. The ins and outs of the situations the intuition applies to, the subtleties, the important factors, the irrelevant details -- these are all burnt deeply into your brain and, put simply, the intuition corresponds with the way the world is. I will call this learned-intuition.

The other type is derived from the innate and learnt heuristics our brains apply to perceive, and reason about, the world. There is much variation in these heuristics, and comparable variation in their reliability, but being heuristics they are all ultimately shortcut replacements for considered thinking about the situation. This makes them generally less reliable than learned-intuition and -- where appropriate* -- considered thinking. I won't argue this point further in this post (though it's definitely something I want to talk about in the future), and you may not agree with me on it. I will call this type of intuition heuristic-intuition.

Accompanying both forms is a "gut feeling" that tells you the intuition is correct, though often you won't be able to put your finger on why. People often talk about intuition as if the reliability of learned- and heuristic-intuition were the same (this and this give some sense of this) -- or rather, they fail to make any distinction between them.

Being aware of the differences between these two forms of intuition means being aware, when an intuitive view comes to mind, of which type it is (I think it should in general be fairly easy to tell if you think about it), and consequently how much trust you should put in it, and consequently whether you should ignore its judgement and instead bring in considered thinking.


[*] Certainly conscious, considered thought is not that great at "fuzzy" tasks, like picking out subtle patterns in things. This is due to its symbolic nature, I'd say.

Wet Floor...


Wet Floor... Everybody BREAKDANCE!!



Taken outside Toowong Village Shopping Centre

Monday, August 30, 2004

Illustration of Person's Psychology Incorrectly Blamed

Some excerpts from Anthony Daniels in The New Criterion (via Arts & Letters Daily). I'm posting this mainly because of my interest in how people tend to assume, unless it is unequivocally shown otherwise, that psychological factors are responsible for others' illnesses -- which this illustrates...


...

Whenever I think of medical progress and its effects upon human existence, which are both profound and yet not quite fundamental, peptic ulcers always come to my mind. This is because much of my childhood and adolescence was dominated by peptic ulceration—my father’s, to be precise. Looking back on it, his ulcers affected the mood in our household (gloom tempered by storm), and even what we ate.

...

In my father’s day, it was more or less an orthodoxy that duodenal ulcer was caused, if not entirely, then at least largely by psychological factors. Indeed, research into the kind of people who got ulcers was almost the foundation of psychosomatic medicine. In a book published in 1937, entitled Civilization and Disease, by Dr. C. P. Donnison, the following statement is made: “The statistics indicate that chronic peptic ulcer shows a low incidence in the more primitive races and a greater incidence in those races more in contact with civilization.”

...

In the early 1980s, two Australian investigators showed that duodenal ulcer is, in most cases, an infectious disease, caused by a germ called Helicobacter pylori. This was so novel an hypothesis, rendering so much previous work and opinion nugatory, that the medical world had difficulty in accepting it. I remember the shock of reading the papers in The Lancet twenty years ago. The exasperated Dr. Barry Marshall, who was ridiculed at first, followed Dr. Hunter’s apocryphal example, swallowed a culture of Helicobacter himself to prove the hypothesis, and promptly suffered from severe gastritis.

...

If the bacterial cause of duodenal ulcer had been known during my childhood and adolescence, how different would my life have been? My father would not have paced the floor night after night; he would not have been nearly killed by surgery; his temper would have been more equable. There would have been more affection in the household, and we should not have wasted so much of our substance in silent emotional strife. My father would not have died a lonely old man, little mourned because unloved, knowing that he would not be missed.

Sunday, August 29, 2004

What I'd Like in a Version Control System

When I started the PhD I wanted to use a version control system to keep track of all the PhD-related files. So I installed CVS. But now I've come to the view that it doesn't suit my needs very well, and I'm looking into other options. And I thought the first step there would be to get it clear what I'd like a version control system to do.

When I'm writing something, especially if it's something that takes me a lot of effort to put into words, I tend to be constantly creating new files. And passages of text get cut and pasted all over the place and files frequently get renamed. For various reasons, CVS doesn't seem to be suited to this kind of situation.

As a bit of a digression, here's a little on how my writing process seems to go. At any point in the process what I've already written will have tried out certain angles, and explored certain connections or aspects of the concepts, and at certain points it makes sense to start a new file.

I might start a new file if it seems more productive to explore something a bit different (because it's more of a "fresh start" and seems to keep things a bit clearer).

Or I might start a new file to "start over again", as what I've already written might give me a better sense of how "it all fits together", and it's usually much more efficient to just start with a new file than to go through and edit what I've already written.

I put some information about the chronology of the files into their filenames, which helps manage things. And at some point I'll go through the earlier files and see if there's anything in them that I should take out and use in the later files (though this description makes it sound a lot more straightforward than it is in reality).

Back to version control systems. Here's what I'd like one to do (keep in mind this is just a wish list):

  • Seamlessly integrate with the file system and other tools. Pretty obvious, at least as a high level goal, though perhaps not as obvious in the details:
    • being aware of file-system events and the ability to automatically respond to them:
      • whenever a new file/directory is created, automatically put it under version control
      • whenever a file/directory is deleted, no longer keep track of it in the version control system, but retain the older versions of it
      • dealing with file and directory renaming and moving
      • etc
  • be able to automatically commit changes, such as when the file is saved or when the computer is shut down. Optionally present a dialog box for entering comments when the file is committed.
A major theme is that I want version control that doesn't require me to do anything in addition to the way I usually manage the files. Because I'm frequently creating, renaming and removing files, there's too much overhead with something like CVS that requires things like manual commits etc.

I understand there are reasons why you would want the ability to have manual control, and what I'm saying is that I'd like a system that lets you choose what sort of behaviour -- manual or automatic -- you'd like.

And ideally, a system that has this functionality built in / bundled with it (rather than requiring you to write your own tool that implements the automated functionality by making the manual calls to the version control system itself when the file-system events occur).
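As a sketch of how the automatic side could work (toy code using only the standard library; in a real setup the callback would shell out to something like `svn add` and `svn commit`, which I haven't wired in here): poll the working copy, diff file modification times against the last snapshot, and hand any changes to a callback.

```python
import os
import time

def snapshot(root):
    """Map every file path under root to its last-modified time."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            state[path] = os.path.getmtime(path)
    return state

def detect_changes(old, new):
    """Compare two snapshots, returning (added, removed, modified) paths."""
    added    = [p for p in new if p not in old]
    removed  = [p for p in old if p not in new]
    modified = [p for p in new if p in old and new[p] != old[p]]
    return added, removed, modified

def watch(root, on_change, interval=1.0):
    """Poll root forever; call on_change(added, removed, modified) whenever
    anything differs. on_change is where a real tool would invoke the
    version control system (add, remove, commit)."""
    old = snapshot(root)
    while True:
        time.sleep(interval)
        new = snapshot(root)
        changes = detect_changes(old, new)
        if any(changes):
            on_change(*changes)
        old = new
```

Note that a rename shows up here as a remove plus an add -- exactly the kind of thing a smarter system would want to detect properly -- but it gives the flavour of version control that doesn't require me to do anything.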

I was going to write a bit about the investigation into version control systems I've done, but I need to finish up this post now... so I'll just briefly say that so far Subversion seems to be the best freely available one. Here's some more info: a book on Subversion; a version-control system comparison; and an article on the newer version-control systems out there.
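To make the idea concrete, here's a toy sketch in Python of the automatic behaviour I have in mind (this is purely illustrative and my own invention, not how Subversion or any existing tool works). It takes a snapshot of a directory, works out what was created, removed or modified since the last snapshot, and "commits" each change. The `commit` callback is a stub standing in for real calls to a version control system, and a real tool would react to file-system events as they happen rather than polling like this:

```python
import hashlib
import os

def snapshot(root):
    """Map each file path under root to a hash of its contents."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha1(f.read()).hexdigest()
    return state

def diff_states(old, new):
    """Work out what changed between two snapshots."""
    added = [p for p in new if p not in old]
    removed = [p for p in old if p not in new]
    modified = [p for p in new if p in old and old[p] != new[p]]
    return added, removed, modified

def auto_commit(root, old_state, commit=print):
    """Automatically 'commit' every detected change.

    'commit' is a stand-in for real version control calls;
    deleted files stop being tracked but their history would be kept.
    """
    new_state = snapshot(root)
    added, removed, modified = diff_states(old_state, new_state)
    for path in added:
        commit("add: " + path)
    for path in removed:
        commit("remove (history retained): " + path)
    for path in modified:
        commit("commit: " + path)
    return new_state
```

The point of the sketch is just that none of this needs any action from the person editing the files: you create, rename and delete as you normally would, and the commits happen behind the scenes.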



What I'd really like is a system that does version control down to the level of individual characters, and keeps track of them when they're cut and pasted within and between documents. This is one of the things the Xanadu people wanted to do, and while they had trouble getting their ideas into working systems, there is a system that I recently saw a talk about, called TeNDaX, which actually does this for documents you're editing. From what I could tell from the talk and demo, it seemed pretty good.
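To illustrate what "down to the level of individual characters" means, here's a small toy sketch using Python's difflib. To be clear, this is my own example and not how TeNDaX works: TeNDaX records each edit (keystrokes, cuts, pastes) as it happens, whereas this merely reconstructs a character-level diff after the fact. But it shows the granularity involved:

```python
import difflib

def char_edits(old, new):
    """List the character-level operations that turn 'old' into 'new'.

    Returns (operation, removed_chars, inserted_chars) tuples;
    unchanged runs of characters are skipped.
    """
    ops = []
    matcher = difflib.SequenceMatcher(a=old, b=new)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            continue
        ops.append((tag, old[i1:i2], new[j1:j2]))
    return ops
```

For example, `char_edits("cat", "cut")` gives `[("replace", "a", "u")]`. A system tracking edits live, rather than diffing afterwards, could additionally tell that a run of characters was cut from one document and pasted into another, which a diff alone can't distinguish from deletion plus retyping.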

Saturday, August 28, 2004

Internally Quiet For a Day

I wonder, if you were completely alone for a day, could you suppress all verbal thoughts? Just how dependent on them are you, and how hard is it to shut them down? Would you just be drifting, or could you make decisions and carry out everyday tasks?

I'm really not sure what I'd expect. I'm trying to think whether I've ever been 'internally quiet' for extended periods of time, but I can't recall how long I've ever gone like that.

And could you spend a day interacting with other people, doing tasks and verbalising on the outside, but not on the inside?

Thursday, August 26, 2004

Fact Coupled With Opinion

I've decided that I might as well start typing up quite rough notes to this blog, rather than waiting till I have something a bit more solid to say and concrete examples to illustrate the concepts with. So here goes...

In the current climate, saying that you understand a viewpoint is tantamount to saying that you approve of that viewpoint. The truth of the matter is, of course, that it's possible to understand a viewpoint that you are very critical of, and which you strongly disapprove of.

Unfortunately, in the current climate it is assumed that statements are always expressing opinions. That is, statements of fact and statements of opinion are always coupled.

The standard way of expressing strong disapproval of something is to say that it's "crazy" or "illogical" or that you "don't understand it". I think the reason why understanding is equated with approval is that it's believed that if something can be understood then it must "make sense".

And behind this view is, I believe, the notion that a view is constructed from some logical chain of reasoning, such that an incorrect view (this is why you strongly disapprove of it) is the result of poor logic. I've previously given some reasons why I think this view is wrong.

As I explained then, I think views are more the result of perception and underlying assumptions than of the logic that goes into them.

Wednesday, August 25, 2004

David Deutsch on "And Why?"

David Deutsch's answer to the Edge World Question Center 2001 "What questions have disappeared?". I've noted this down largely for my own future reference. May come in handy for PhD stuff.

"What Questions Have Disappeared...And Why?" Funny you should ask that. "And why? " could itself be the most important question that has disappeared from many fields.
">
"And why?": in other words, "what is the explanation for what we see happening?" "What is it in reality that brings about the outcome that we predict?" Whenever we fail to take that question seriously enough, we are blinded to gaps in our favoured explanation. And so, when we use that explanation to interpret regularities that we may observe, instead of understanding that the explanation was an assumption in our analysis, we regard it as the inescapable implication of our observations.

[more...]

Sunday, August 22, 2004

Illustration of Ungrounded Concepts, By Way of Philip Seymour Hoffman

Once again, some fairly quick and rough notes on how it seems to me people think...

A month or so ago, I was watching The Movie Show, and they had an interview with the actor Philip Seymour Hoffman about the movie he was in, Owning Mahowny. In the interview, Hoffman commented on acting.

The way I remember it, he compared being an actor to being one of those people who balance spinning plates, because it involved mentally keeping track of a lot of things at once as you go about the performance.

One of the people I was watching it with said that they didn't think that was necessary for acting -- basically, that you could just do it as a natural expression of a character, that once you were familiar with the character and role, it should, to some extent, flow out from you.

Their point was, there's no need for acting to be so "calculated". And the fact (or at least it seems to be a fact -- I don't know this myself) that some of the best actors past and present have had a more "natural" style seems to support the point that it isn't necessary.

I disagreed with that point, because... because... well, I couldn't put it clearly into words then, but I think I can now. The thing is, while some people might not need to be so calculated in their acting, others may well be. Their nature, the way they think, the way they go about doing things - any or all of these things just may not be suited to a particular style of acting.

You might argue that a natural style is better than a more calculated style. The issue here, however, was whether a more calculated style was necessary or not. (In passing I would say that I think this view that the use of "natural talents" makes something superior is a myth -- but that's something I'll have to talk more about some other time).

Now, to the point of this post. I believe the mistaken view that a calculated style is unnecessary stemmed from thinking in terms of the concept "acting" without bringing "real world" considerations into the thinking. Rather than thinking through this issue of whether the calculated style was necessary in concrete terms, it was thought through in abstract terms.

Why was it thinking in terms of the concept "acting" and not concretely? I'm not sure how best to explain this. Perhaps the following might help. It's meant to be analogous to the stream of thoughts that might've gone through the person's head. It's just meant to be illustrative of the general nature of those thoughts:


you are thinking of the issue of whether a calculated style is necessary or not to be able to act well

you can think of examples of actors who act well, who do not require a calculated style

thus, to act well you do not need a calculated style.


That is, all of the thoughts are referencing the person's concept of "act[ing] well". This is what I meant when I said they were thinking in terms of their concept of acting.

At no point did the person actually think that acting well (thus acting) requires people to do the acting, and that this fact might have some bearing on the matter. Making this realisation and considering whether it had any bearing is what I meant by bringing "real world" considerations into the thinking and making it concrete.

If acting were just a single thing, as it implicitly was in the above train of thought, then if we can find any examples of where this thing does not require the calculation, then it shows that such calculation is not required.

In other words, the problem is when concepts are being applied, to think about real or imagined situations or issues, but not being grounded in the concrete details of the situation.



I hope this does not sound like an argument against the use of abstraction in thinking, because it's just an argument against inappropriate use of abstractions in thinking.

Thursday, August 19, 2004

Jon Udell on The Scalability Myth

IT Myth 6: IT doesn't scale: Virtually any technology is scalable, provided you combine the right ingredients and implement them effectively

Here's some quotes:

In the end, scalability isn’t an inherent property of programming languages, application servers, or even databases. It arises from the artful combination of ingredients into an effective solution.

To get a bit off topic, and into my interest in perception, I'll point out that the view he's arguing against, that scalability is an inherent property of things, is another example of people perceiving properties as inherent parts of things. As if a certain application server simply has this property of scalability, regardless of how it is used. What this picture leaves out is that properties are usually context-sensitive, and arise because of the particulars of the given context. For example, colour is not a property that is inherently in an object, but is contextual, as it is dependent upon who the perceiver is - different types of animals have different types of colour perception machinery. The thing is, seeing properties in a context-free fashion is easier to do, and I think you have to learn how to, in general, see them in a context-sensitive fashion.

Formats and protocols that people can read and write enhance scalability along the human axis.

I thought this bit in the article was a bit vague. Scalability of what exactly? The number of varied people that can read and write it? I suppose that would be what he means.

Article on Free-will / Responsibility

The Guardian reports:

A leading neuroscientist caused a sensation by claiming crimes are the result of brain abnormalities.
...

The idea that someone should not be punished if their abnormal neural make-up leaves them no choice but to break the law is contentious but not new. However, one prominent neuroscientist has sparked a storm by picking it up and turning it round. Writing in the Frankfurter Allgemeine Zeitung, one of Germany's leading newspapers, Wolf Singer argued that crime itself should be taken as evidence of brain abnormality, even if no abnormality can be found, and criminals treated as incapable of having acted otherwise.

His claims have brought howls of outrage from academics across the sciences and humanities. But Singer counters that the idea is nothing but a natural extension of the thesis that free will is an illusion - a theory that he feels is supported by decades of work in neuroscience.

[more...]

Sunday, August 15, 2004

Misinterpreting Negative Observations

As with everything on this blog, but in particular here: this should be seen as a rough draft. At this stage, I'm not writing this to try to convince anyone; I'm writing it as a starting point for understanding and building my argument.


Consider the following:

P1: Teddy is unfair to others
P2: That's the way he is, do you expect him to act differently?

Imagine that P1 was just saying this because they wanted to express their disapproval of Teddy, and they wanted to make this disapproval clear to P2. P1 might want to get P2's help in doing something about Teddy's behaviour. In this situation, P2's response to P1 was inappropriate.

P1's statement was an observation that was critical of something (Teddy's behaviour), yet P2 took it as a statement of disbelief and/or a statement that the past should have been different. I think P2's response is illustrative of a common way of thinking. With this kind of thinking, people don't necessarily interpret the statements in the mistaken way explicitly, but they do so implicitly, through the way they think about them and respond to them.

In some situations it may be the case that P1's statement really was an expression of disbelief, and/or it was an after-the-fact expression that the past should have been different, such that P2's response would have been fine. But the point I'm trying to make here is that in the situations where neither of these things are the case, it is very common for people to interpret statements as if they were. I'm saying that there's some bias there in the way many people think that makes them interpret statements in this way, even if they really don't have any grounds for doing so.

I want to explain why I believe it is quite valid to make disapproving observations. While it is pretty much just whinging if you're constantly making disapproving observations for no good reason, it's fine to just express your opinion on the topic -- no different to saying you liked or disliked a film, or saying that the weather has been hot lately.

There may be a point to making the disapproving observation. You may just bring up things you notice, and you may be the kind of person that tries to do what they can about things they notice. You may just bring up the thing to see what the other person has to say about the topic -- whether they think Teddy is also unfair to others or not. This could help firm up your opinion -- or you might just find out that you've just caught Teddy at a bad time and that usually he's quite reasonable.

For complex problems, you first need to understand them. And identifying the various issues is an important part of understanding the overall problem. You won't be able to consider what can be done -- or whether anything can be done -- about the overall problem until you understand it. Or more simply, you may simply be stating the issues first before talking about how you want to address them. So disapproving observations are certainly valid things to make.

While these flawed interpretations may seem fairly obvious and easy to spot, in practice they can be difficult to pick up, and it can be hard to know how to deal with them. You can even end up thinking there was something unreasonable about what you said. I think the first part of the reason for this is that responses based on such interpretations are true, and because people tend to treat truth as implying validity, such responses can easily be taken as effective.

Here's why the response is true. It says "That's the way he is, do you expect him to act differently?" and it is true because: yes, that is the way Teddy is, and true, I should not have expected him to be different to how he is. The thing is, of course, that a response doesn't just have to be true, it has to also be making a relevant point about the thing it is responding to. Unfortunately, people in general seem satisfied with truth by itself; you have to be conscious that it has to be an appropriate truth. The second part of why it can be difficult to deal with such responses is that you have to be able to think of why it is not an appropriate type of response -- which is harder to do in the heat of the moment -- and you have to be able to explain this to, or illustrate this for, the other person.

I think this issue is more important than it seems. Here's a very quick sketch of why; I will have to come back to this later on. Such modes of interpretation create an environment where it is difficult to be critical of things unless you've got a solution you want to present. It leads to simplistic views on what's wrong with things, and to simplistic views on how problems can be fixed. Real, complex problems require a deep understanding in order to chart out reasonable ways of addressing them, and a deep understanding of things requires you to go for long periods of time knowing a problem exists but not knowing what is causing it or how it can be addressed. In effect, such an environment acts as a barrier to gaining an adequate understanding of problems.

I May Never Have To Get Out of Bed Again

I've just discovered that I can use my laptop while lying down in bed. I feel like... when they... um, like one of those dudes who made a major discovery.

To support the laptop I just bend my knees up a bit, so they're pointing up to the ceiling and so the soles of my feet are flat on the bed. The front edge of the laptop sits around my belly button area, and the back edge sits up towards my knees. And to see the screen clearly, I have to open it up wider than usual.

This is a revelation. Now when my back gets sore I'll be able to keep doing stuff at the computer, and I'll be able to say to people "I've just got to lie down and do some work".

Saturday, August 14, 2004

Word name as model e.g. - "Interference"

I'm just noting this example down for my future reference. I want to collect examples of words whose names suggest what they are about, but misleadingly so. (The term 'stretching' is an example of this; while it implies that you're stretching your muscles, what you're really doing is getting them to relax -- people hurt themselves stretching because they think it really is about stretching the muscles).

Clay Shirky, talking about radio:

If two or more broadcasters are using the same frequency, a standard receiver won't be able to discern one signal from another. Though engineering parlance calls this interference, the waves aren't actually interfering with one another -- rather the profusion of signals is interfering with the receiver's ability to listen to one specific signal.

Wednesday, August 11, 2004

Meaning, Unfolding


    one

   o n e

  on no el

 onl non els

only none else


note: the effect depends on this being formatted properly, so if the words aren't printed in a monospace font and don't form a pyramid shape, the formatting is out. If you're viewing the page via an RSS feed -- I'm mainly talking to you, Planet DSTC people -- then viewing the original post should give you the right formatting

Understanding is Not Enough

Just a short quip:

The hard part is not recognising the existence of -- and understanding -- our perceptual and cognitive quirks, limitations and biases; the hard part is in recognising the situations when they apply to ourselves.

Or perhaps it would be better worded slightly differently (changes highlighted):

The hard part is not recognising the existence of -- and understanding -- our perceptual and cognitive quirks, limitations and biases; the hard part is in learning to recognise the situations when they apply to ourselves.

Crispafier?

As we all know, the downside of microwaves is soft pie pastry and other such sogginess. If you want crispness, you can heat up the oven, but it takes too long if you also want a quick result, and as an alternative a toaster oven is not that much better. I've heard of microwave ovens with a browning element (so they're a mix between a microwave and a toaster oven, I assume), and though I'm not sure how much better they are, I can't imagine that in the 3 1/2 minutes the pie is being microwaved it would make the pastry that much crispier.

So I was thinking, could you get oven-style crispy pastry at microwave speed? More specifically, something that could make the pastry on a microwaved pie crispy? No having to wait for a body of air to be heated up or anything like that - just zap and it's crisp. Could you instantly generate a blast of heat and use that? Maybe you quickly expose the surface to flames (if this apparatus was part of the microwave, you wouldn't want your pie still wrapped up in paper-towels!). I don't know enough to say what the alternatives are, and whether any of them would be technically feasible, nor whether any of the technically feasible options are practically feasible (economically, safety-wise, etc), but I can tell you that the idea is high up there on my list of useful-devices-I-would-like, right up there with time-machines and personal jetpacks :-).

Monday, August 09, 2004

Interesting-Sounding Talk (in Sydney) On Consciousness

I wish I was in Sydney so I could go along to this event, 'Are you conscious right now?':

Join Dr Susan Blackmore (UK expert on consciousness), Dr Alun Anderson (Editor-in-Chief, New Scientist magazine) and Dr Paul Willis (ABC TV Catalyst) for a drink and discussion on the nature of consciousness: How do you know when you are conscious? Could you ever be ‘aware’ of being unconscious? Does the stream of consciousness really exist? What can we do to develop our own unique consciousness? And what the heck is consciousness anyway? These ideas will be challenged and investigated by New Scientist editor-in-chief, Alun Anderson with questions from you - the audience.

I thought Susan Blackmore's book The Meme Machine was very good, and would like to hear her speak. The event's free and the page for it has details on booking. It's on Friday August 20 from 6.30pm at the College Street Foyer Cafe, Australian Museum, College St, Sydney.

"The Past Inside The Present"

Yesterday I bought Boards of Canada's most recent album (where 'most recent' actually means '2 years old'), Geogaddi. My reaction to this sort of music seems to always go from 'I'm not that impressed' to 'yeah, this isn't bad' to 'man, this is really good'. It's taken about four listens to the album to transition to the second reaction, and I'm currently on my way to the third.

Anyway, the original motivation for this post was the vocal sample

the past inside the present

which is in the album's second song, "Music is Math". I think it's a pretty cool sample, but I wanted to jot it down because I think it's of relevance to my PhD work, which is about the nature of information.

Consider the air particles vibrating against your ear drum (or whatever it is they vibrate against in your ear), "conveying" information about the sound source: the past (what happened at the sound source a moment ago) inside the present (the vibrations against your eardrum).

Not that I think that the line has any major relevance, mind you. (In fact, it hints at the view I think is wrong - that things carry something called information. That's why I put "conveying" in quotes earlier).

Invisible Changes

Thanks to Steven Livingstone for sending me these great visual illusions. They all involve scenes that look static but are actually continuously changing in very hard to detect ways. I thought the Workroom one was excellent, because it seems so obvious -- so there in your face -- once you know what's going on (I had to be told), yet so hard (at least for me) to see. I really like things like that which show up the "seams", so to speak, in our perception, which we are usually oblivious to. Not because there's any fun in belittling our assumed perceptual powers, but because they're illustrative of a truth I think we ought to be aware of.

Saturday, August 07, 2004

The Onion: CIA Asks Bush To Discontinue Blog


CIA Asks Bush To Discontinue Blog

WASHINGTON, DC—In the interest of national security, President Bush has been asked to stop posting entries on his three-month-old personal web log, acting CIA director John E. McLaughlin said Monday.
[more...]

Wednesday, August 04, 2004

Tuesday, August 03, 2004

Everybody! Everybody! Homestar Runner Wiki!

1) What's a Homestar Runner? 2) What's a Wiki? 3) Why am I so excited?

  1. A very funny web-site of various cartoons, most notably Strong Bad's e-mails

  2. A type of website that allows any of its readers to edit pages on the site and add new pages to the site

  3. Actually, I'm not that excited -- the "Everybody! Everybody!" thing is something from the Homestar Runner main page -- though the wiki is pretty cool.

The site is here. While it is just a resource of information related to Homestar Runner, 1) Homestar Runner is just so friggin cool, and 2) it's pretty comprehensive on all things Homestar Runner, and contains some pretty interesting and actually useful information for Homestar Runner fans. Which makes it a perfect resource for all your school assignments.

Like, on the weekend, before I came across the wiki, I was introducing my dad to the joys of Strong Bad's e-mails (you can tell I like Homestar Runner), and I wanted to find the one that contained the first episode of Dangeresque!, but since the index of e-mails only gives their names and no other details, I couldn't find it. It was fairly easy to find via the wiki because it contains pages with synopses for each e-mail.

And, via the site, I found some pictures of the guys behind the cartoons:


I'd always wondered who they were. Weird to think that one of these guys (not sure which) does all the character voices (except Marzipan) -- The King of Town, Homestar, Coach Z, Strong Bad etc. I figure his real voice is some sort of amalgamation of these.

C'mon, Do the Locomotion

I received an e-mail containing this quote in its signature:

Most people believe that physicists are explaining the world. Some physicists even believe that, but the Wu Li Masters know that they are only dancing with it. - Gary Zukav*

Indeed, but it's been one hell of a dance, and unlike your average waltz it's had a huge impact -- both good and bad -- on this world of ours.

So what if we don't and can't know what something ultimately is? The fact that we can survive in this world and do all the things we can do shows that our understanding of it is not totally broken. And we can always strive to improve that understanding.

But more to the point, why is an absolutely perfect understanding necessarily so significant? You can think of it this way: would it necessarily make a big difference to achieve that extra 0.1% of understanding if you were already 99.9% there?


* I believe this quote is from the book The Dancing Wu Li Masters: An Overview of the New Physics.

Sunday, August 01, 2004

Why Would I Buy A CD I Can't Rip?

I've bought every Beastie Boys album except their first, and I would like to get their latest, except for the stupid copy-controlled CD it comes on. If you haven't heard of copy-controlled CDs, they don't meet all the requirements of the CD standard, and thus only work on certain players and -- for some reason I'm not totally sure of -- can't be copied or ripped into MP3s. If you're after more details, there's plenty of pages on the topic out there.


The mark of the devil (aka the copy controlled logo)

Why would technology that stops copies of the CD being made stop me from buying it? These days I have all my music on my laptop in MP3 format. It's much more convenient, and has the big advantage that you can create playlist mixes of music that run for hours and can sample from any of the albums you have. I'm too used to the convenience, and compared to it, CDs are a pain: you first have to locate the CD case you're after; you can only listen to one CD at a time, and if you want the music to go where you go you have to take the CD; you also have to put the CD in the drive when you want to listen to it, and take it out again when you're finished; and while you're listening to the music the CD drive isn't available for use for other things.

I think there's something hamfistedly barbaric about trying to deal with piracy in a way that stops you from playing the music how you want to. I know all you big record company executives read my blog, so I hope you're paying attention, (with a sneer:) y'all!



You can apparently rip these CDs with some software called Isobuster. However, according to its online help, to do this it requires the right sort of CD drive. As I understand it, the capabilities that the drive has to have don't seem to relate to specific features -- that you could look up in some documentation -- so while I might be able to find out if my drive is one of those it will work with, who knows how long that might take -- it doesn't seem to me worth the effort.

Friday, July 30, 2004

Thoughts in Few Words #10


progression of knowledge is the removal of explanations based on essences

Wednesday, July 28, 2004

Strictions and Straints

restrictions
constraints
constrictions
restraints

Tuesday, July 27, 2004

Thoughts in Few Words #9


As a species, we have an arrogance that makes us believe our intuitive, everyday conceptions of things must be right, that they are not to be questioned, and that they are only to be relinquished when we are forced to do so*.


* two main caveats: I'm generalising, of course, though I've tried to cover some of that generalisation with the 'as a species' qualification; secondly, this of course doesn't mean that being an intuitive, everyday conception makes it wrong.

Sunday, July 25, 2004

Evidence For Giant Waves Once Dismissed As Myths

The Scotsman reports:

Monster waves that can sink a supertanker and were once dismissed as a myth abound in the Earth’s oceans, scientists have learned.

Satellite images identified more than 10 individual freak waves more than 82 feet high in just three weeks.

...

Until such evidence became available, most scientists were sceptical about freak waves. Statistics showed that such extraordinary sea conditions should only occur once every 10,000 years.

Saturday, July 24, 2004

Your Steak, Sir, As Shown on the Diagram Here

I'm sitting there eating this piece of meat, doing what we all do at this time, and wondering exactly what cow (or pig, or chicken, etc) did this come from? What did it look like, where did it live, and what did it think about George W Bush? I know that this piece of meat once came from a living animal, but somehow my brain just can't concretely grasp that. That brown shape there is just too abstracted from the notion of a particular living creature. If it'd been the olden days and I'd killed the animal and chopped it up myself, or seen this done, it might be more real. But by and large, pieces of meat are, to my brain, reddy or pinky coloured blobs that come in little styrofoam trays from the supermarket.

Not just what animal did it come from, but exactly where in said animal was this piece of meat located? I mean, I know that rump steak, for example, means meat from the cow's arse (and I'd like to see more people facing this reality when ordering meals and telling the waiter they'd like "the cow's bottom in a delicate wine sauce infused with aromatic herbs"), but where exactly in the 3D object that was the cow was this piece of meat located, and relative to the rest of the cow, which way was it oriented? I can imagine this cow grazing around in a paddock, going about its business, and the image is normal except that the cow is slightly translucent and I can see this red blob there inside it, that blob being the thing which in the cow's future is destined for my stomach.

Actually, I can't imagine that. I can imagine being able to imagine it, but I can't actually bring such a picture into my mind. So, I'm thinking, what if we could? It strikes me that you could use this idea as the basis for some simple little pictures or animations. Like a cartoony picture that juxtaposes a scene of a family at home eating their steaks, next to a scene of cows in a paddock, where you can see the positions of all those steaks within each of their owner-cows. Oh -- and here's a real visual possibility -- you know what they say about meat pies? Oh, it's crude, but they say it's true: all lips and assholes. What about a picture of a meat pie next to pictures of the 30 or so (just how many cows is the average meat pie sourced from? I'd like to know) cows that contributed to it, highlighting those parts doing the contributing?

Friday, July 23, 2004



Thursday, July 22, 2004

Digital Fiction and Renovating the DEN

Intimacies is, in the words of its developers, a -- for a good mouthy workout -- digital epistolary novel, or DEN, for an equally ugly but briefer name. Though you might not be familiar with the term, you're probably aware of the epistolary novel form: they're novels consisting of, most commonly, a correspondence of letters between characters, through which we see the story's narrative unfold. They can also consist of such things as diary entries and newspaper clippings -- see Wikipedia if you want more details.

Intimacies presents a story through a series of e-mails, web pages, and instant messages that the reader can view through a program that is meant to simulate the interfaces of our e-mail, web and instant messaging clients.


That story involves a mis-sent e-mail that leads to a growing on-line relationship, and eventually to an in-person meeting with some serious consequences. The story takes place over five weeks and you get to see the e-mails between the two protagonists, as well as some e-mails between them and their co-workers, family, friends and a few other characters the story introduces (plus some related web-pages, and one or two instant messaging conversations). While it's nothing exceptional, I found it a quite enjoyable read. And since there's less text than you'd find in a typical novel, it's also a fairly quick read. You can download a copy for free at the website.

What interests me more than the story is the potential of the medium. I did feel that it worked better as a program than it would have as a bunch of e-mails printed on paper. The semi-realistic interface did seem to add something. Having an inbox there in front of you and clicking on e-mails to read them seemed to help it feel more real.

What I want to consider is how we can take this further, to make it more immersive. That consideration can start with the way Intimacies was implemented. Aside from some problems in execution that will no doubt be fixed up in future versions of the software, there were some problems that detracted from the immersion.

Though the software presents a simulation of an e-mail client, in some ways it's actually quite unlike a real e-mail client. There isn't an e-mail client for each character, showing the e-mails they've received, but only a single e-mail client, whose inbox contains all of the e-mails, regardless of who they've been sent to. So while the inbox interface does give you a sense of using an e-mail program, and a sense of the chronology of the e-mails, it feels a little weird: you're seeing e-mails all in the one place when in reality they would be distributed across different e-mail clients on different people's computers at different physical locations.

Thus, an obvious change would be to better represent the different characters and the e-mails they receive, by having a separate client for each character. This would introduce some other issues, such as how the proper chronology of e-mails could be presented to the reader, but I'll address that in a moment. Because all the e-mails were lumped together in the original program, it was often difficult to distinguish between some of the minor characters. For example, two of the characters are the guy's workmates, and I just couldn't remember which was which. If each character had their own e-mail client, and especially if they each looked different -- different fonts, different arrangements, etc. -- then this wouldn't be such an issue.

The story takes place over the course of five weeks. The current interface includes five buttons across the top of the window, one for each of the weeks, and below this is a second row of seven buttons, one for each day of the week. So if the reader clicks the Week One button and then clicks the Monday button, the inbox will display all of the e-mails for that day. As soon as you click that button all of those e-mails are there. Giving the user control to move between weeks and days does, I think, help enhance the level of reader-immersion. I think, however, that this temporal immersion could be increased.

The user could be given finer control over the time, along with a greater sense of its progression and the points at which occurrences such as the sending of e-mails happen. I would suggest a little console separate from the e-mail clients, showing the date on something like a desk calendar and the time on an analogue clockface. The console would also contain a button to move time forward to the next occurrence. When the user clicks on this button, the hands would move around to when the next e-mail is sent. If the next e-mail is on the next day, the desk calendar page would flip over. The reader would get a sense of how much time is passing between e-mails. The idea would be that over the course of a day, you might see something like the following: a flurry of e-mail exchanges in the morning, then a wait of a few hours for a certain reply, which finally comes late in the afternoon.

Once the time has been moved forward to the next occurrence, the e-mail client of the person who's received the e-mail would be displayed, and the console could display that person's name. This Next Occurrence button would make sure the reader sees the e-mails in the correct chronological order.

If the occurrences that the user could navigate between included both the sending of e-mails and the receiving of e-mails, then you could exploit this to do things such as the following. You could have a situation where one character has been waiting for a while for a reply to their e-mail and thinks the second character has just ignored it. Whereas, in fact, the second character sent a reply a while back, but the network has been slow and it hasn't gone through. So after waiting for a while, the first character sends an angry e-mail to the second character. It'd be the digital equivalent of those wacky scenes in movies where each character misunderstands what's going on with the other (I'm not explaining myself very well, but I can't think of a concrete example) -- oh, wait, maybe that's not such a good idea, I find those plot elements really friggin annoying :-). But, as they say, execution is everything, and I'm sure they can be done well, whatever the medium.

The two main feature additions -- having separate e-mail clients for each character and a way to move forwards to the next occurrence -- could be executed in a number of ways, and the details I've given above are meant just to give an idea of how it could be done. There could be different tabs to access each character's e-mail client, for example.

There's also a number of small details to consider, such as what to do if the user jumps to a particular character's e-mail client when that character is not receiving an e-mail. Perhaps the client could be slightly greyed out, to indicate that they may be receiving e-mails from other people or writing e-mails, but that these details, since they aren't of relevance to the story, aren't shown. Also, to make the e-mail clients seem more realistic, you could populate their inboxes with other e-mails unrelated to the story, but slightly grey them out and not let the reader open them from the inbox (because of course there are no real e-mails behind them). I haven't systematically considered all such possibilities, but it doesn't seem like there would be any problematic ones that couldn't be dealt with.

It occurred to me that another way to immerse the reader would be to let them see characters drafting e-mails. That is, to see the e-mail client and the words appear as the character types them, to see the mouse cursor move, etc. There are of course various ways this could be done. Perhaps it would be too tedious to see this for all the e-mails, but at least sometimes it could give you interesting insights into the characters to see which parts of their replies they breeze through, which things they have trouble finding the words for, and what they write but then decide to change in their drafting. It's something that seems difficult to convey with static media, and I think you could do some interesting stuff with this. Personally, I would find it interesting to be able to see how people go about writing things. That's a perspective you just don't get much of a chance to have, and from it perhaps you could learn something interesting about people.

Other Ideas for Bright Icon









Monday, July 19, 2004

Idea for Bright Icon



The Brights are looking for an icon, and here's my suggestion. Unfortunately I didn't find out about the call for submissions until recently, and they've already shortlisted six candidate designs. I know a lot of effort must have gone into those candidates, but I'm afraid that to me they all feel very unsatisfactory, which is why I had a go at one of my own. The following is the rationale behind the icon:

To me, bright means freedom. It means existence and thought unrestricted and unconstrained by conformance to the supernatural, existence and thought free to develop and grow. Thus, the arrows represent freedom to grow in all directions. The arrows also represent the three spatial dimensions of the universe we live in, conveying a sense of the naturalistic nature of our reality.

Some notes on the graphic design. That image is just meant to convey the basic idea of the design. Here are some ideas for variations. The common line thicknesses and lengths could be varied. Perhaps shading could be used to make it look more three-dimensional, though perhaps that would ruin the simplicity. While the design essentially reflects a three-dimensional coordinate system, I wanted to make it somehow unique and recognisable as a symbol in its own right, which was the reason I angled the lines as they are. Perhaps there are other more effective ways of achieving this end.

I actually don't want to give it a name, but it looks like all the submissions have names, so if I must, perhaps "All Directions" might be suitable. (And I just hope this design isn't already used for a company logo or some such.)

Thursday, July 15, 2004

Holographic Video Projector That Fits in Your Pocket

A prototype has been developed, and the researchers "aim to produce practical pocket-sized video projectors in two to five years". See TRN for the details. (via Nova Spivack).

Wednesday, July 14, 2004

Thoughts in Few Words #8

.

what seems different but is the same

Sunday, July 11, 2004


Red and White Apartments, Singapore. Taken on my visit there in February 2003.

Saturday, July 10, 2004



The 5.5 kg, folding A-Bike (via Gizmodo). Can be folded and unfolded in around
20 seconds and rides on tiny, air-filled tires. Available in 2005 and expected to
cost around US$300.



Sleek aquariums: MoCoLoco via Gizmodo. The MoCoLoco page has more pics.

Thursday, July 08, 2004

A Minor Comment Related to Metaphor

Just a minor comment I made on a post on the Many2Many weblog.

Learning Language in Virtual Immersive Environments

Learning a language is, at least for an adult, hard. The best thing, they say, is to immerse yourself in the language, ideally by hearing it and speaking it every day amongst native speakers. But if this option isn't available, the closer you can get to it, the better. The New York Times reports (reg req'd) that the University of Southern California is developing a virtual approximation of such immersive environments.

The software has been designed to teach Arabic to soldiers, and its basic details are as follows. The game takes place in a realistic environment, modeled on an actual Lebanese village. The player can move their character around the village, and interact with computer-controlled villagers by speaking through a microphone. The computing system uses AI to interpret the player's vocal input and determine the villager's reaction. The player also has to control their character's body language, such as using an appropriate gesture when ending a conversation. The player is put in situations such as "establishing a rapport with the people you meet and finding out where the headman lives".

The article doesn't go into exactly how these details are executed, nor does it give any clear screenshots, but the concept is promising. Apparently versions of the system for other languages are planned (the next likely candidates are Dari, a major language in Afghanistan, and some Indonesian language), and the researchers behind it also see the potential for using similar immersive environments to teach other types of tasks - it should be interesting to see what comes from this.

Tuesday, July 06, 2004

More Than Words For Snow #6: Analog is a Perceptual Property

I unfortunately don't have time to write this up more than briefly, so I'll get straight to the point: when we call some property, such as the level of mercury in a thermometer, analog, what we are really expressing is that its level can change in increments smaller than we can perceive. As far as I know, the level of mercury in a thermometer must ultimately only be able to express the current temperature discretely, since below a certain gradation-size, accuracy would be lost to the random nature of the jostling which is causing the mercury to rise in the first place. That is, below a certain gradation-size, the fluctuations in the level of the mercury would be due to the random directions of the movements in the jostling of the atoms rather than to changes in the degree of excitement of the jostling. The thermometer seems to be analog because it seems to change in a smooth fashion, with no visible gradation. Similarly, many things that seem analog are in fact fundamentally discrete -- record grooves, film, etc.
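A toy simulation can illustrate the point (this is my own illustration, not real thermometer data): model the mercury column as a "true" temperature plus random thermal jitter, then read it against gradations of different sizes. A gradation coarser than the jitter gives a stable reading; a gradation finer than the jitter mostly reports the noise.

```python
import random

random.seed(0)  # deterministic, for the sake of the illustration

def mercury_level(temperature, jitter=0.05):
    """Toy model: the column height is the temperature plus random
    jitter from the jostling of the atoms."""
    return temperature + random.gauss(0, jitter)

def read(level, gradation):
    """Read the column against gradations of the given size."""
    return round(level / gradation) * gradation

# Ten readings at a steady 20 degrees:
coarse = {read(mercury_level(20.0), 0.5) for _ in range(10)}
fine = {read(mercury_level(20.0), 0.01) for _ in range(10)}

print(coarse)     # one stable value: the gradation swamps the jitter
print(len(fine))  # several values: below the jitter, we're reading noise
```

The constant temperature never changes, yet the fine-gradation readings wander; which is to say, below the jitter scale the instrument is no longer telling you about the temperature.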

Friday, June 25, 2004

Taxing Advertising

[Update: the article can now be accessed here]

An interesting article over at Kuro5hin, whose thesis is that advertising imposes costs upon society and should thus be taxed accordingly. Here's an overview, which I've actually taken from the article's conclusion (and broken up into separate paragraphs):

...advertising imposes costs upon society and should be taxed accordingly. Some of these costs are well known, e.g. annoyance and loss of time and can be accepted provided that consumers are voluntarily exposed to advertising. However a great deal of advertising is imposed upon the consumer, without any compensating benefit being offered.

In addition to simple annoyance, advertising spreads inaccurate and incomplete information which distort consumers purchasing decisions, causing a loss to consumers and diverts valuable investment away from improvements in productivity and quality of goods. Advertising is not entirely bad, but it does not have to be to justify special taxation. The presence of a significant (uncompensated) harm from advertising is enough to justify the tax. Particularly since the revenues from the tax could be used to fund increased spending or to cut other taxes, such as those on labour and investment.

The government needs to generate revenue one way or another to pay for essential services e.g. national defense, the criminal justice system, healthcare. Raising this revenue by taxing bad things (ie. externalities: pollution, advertising etc.) is likely to lead to increased efficiency. So even those of us who think taxes should in general be lower, can still legitimately support this tax, provided cuts in other taxes accompany its enactment.
Read the rest of the article for the explanation.

Though I don't have the relevant knowledge to properly judge the argument, it makes a lot of sense to me and I can't see any flaws in it. At the very least, it's a type of solution that most people seem largely unaware of, and it would be instructive to see why it might or might not be usefully applied to this situation.

In my opinion, the article responses aren't worth reading, because they're, well, sadly pretty juvenile. Of course, I'm just calling it as I see it, and you may disagree with me.

I have an additional negative effect of advertising that I'll briefly add. It's that it helps cultivate a norm whereby things can be, and are to be, evaluated based on their appearances and on the things they are associated with, rather than on matters of actual substance.

The idea is, we learn our norms from our environment, and advertising makes up a significant part of that environment. This kind of evaluation is, thanks to advertising, such a pervasive part of our lives, and I think this rubs off onto our habits and standards for evaluating everything else.

I know people will disagree with this on the grounds that we can easily distinguish between advertising and the other arenas, such as public opinion, where the evaluation of ideas comes into play. At the moment I'm not sure of the best way to argue against this view, though it should be obvious that I don't think it is correct.

Wednesday, June 23, 2004

If Google Wrote Windows

Jon Udell on the sorts of search capabilities we ought to have on our computers:

On the Google PC, you wouldn’t need third-party add-ons to index and search your local files, e-mail, and instant messages. It would just happen. The voracious spider wouldn’t stop there, though. The next piece of low-hanging fruit would be the Web pages you visit. These too would be stored, indexed, and made searchable. More ambitiously, the spider would record all your screen activity along with the underlying event streams. Even more ambitiously, it would record phone conversations, convert speech to text, and index that text. Although speech-to-text is a notoriously imperfect art, even imperfect results can support useful search.

Tuesday, June 22, 2004

Leaving Governments Back on Earth

Well, that title's a bit misleading - I just couldn't resist the sensationalist sound of it. This post is actually about space flight. Never before had there been a trip into space that wasn't planned, funded and executed by a government body -- not until yesterday, when SpaceShipOne broke past the atmosphere in a historic flight.

Texas Using Wi-Fi to Encourage Use of Driver Rest Areas

Yahoo News reports:

To encourage drivers to take more frequent breaks, the Texas Department of Transportation wants to set up free wireless Internet access at rest stops and travel information centers.

Friday, June 18, 2004

Thoughts in Few Words #7

.

pleased to have life

Thursday, June 17, 2004

Joel on How Microsoft Lost the API War

A very good article by Joel Spolsky on why the future is going to be tough for Microsoft and good for web-based software.

However, there is a less understood phenomenon which is going largely unnoticed: Microsoft's crown strategic jewel, the Windows API, is lost. The cornerstone of Microsoft's monopoly power and incredibly profitable Windows and Office franchises, which account for virtually all of Microsoft's income and covers up a huge array of unprofitable or marginally profitable product lines, the Windows API is no longer of much interest to developers. The goose that lays the golden eggs is not quite dead, but it does have a terminal disease, one that nobody noticed yet.

Wednesday, June 16, 2004

Your Superhero Alter-Ego Problems Solved

You've got the lycra getup, the underpants, the cool catchphrase, but you're stuck on the suitably cool name? No problem, my sparkling imagination has come to the rescue: The Advertiser. Sounds fcking hard to me. You don't want to mess with The Advertiser.

Tuesday, June 15, 2004

Yeah No

Apparently the phrase "Yeah no..." is an Australianism that has arisen in recent times. See the article for details. I was surprised when I saw that. I've always been pretty self-conscious about language use, and in the past I think I've been less prone than most to picking up new sayings and ways of talking, but reading that article made me realise I use "Yeah no" all the time. I had been aware of using the phrase, 'cause I seem to overuse it, but at the same time, I didn't have that much consciousness of it, if you know what I mean. Interestingly, I can't recall having heard others use it, though I'm sure that's because I haven't been on the lookout for it.

Being Bi-lingual Helps Keep Mind in Shape

The Australian reports

When bilingual people age, their brains decline much slower than those who are fluent only in their mother tongue, it was reported yesterday in the journal Psychology and Ageing.

Monday, June 14, 2004

There ought to be.... #2

...little compartments in laptops for storing things, like your set of earbud headphones.

Yeah, you can store stuff in your laptop bag, but laptop and bag often get separated by more than arm's reach, and a little compartment you can easily get at could be a lot more convenient than reefing around for the item in the bag. Has anyone tried doing this?

Friday, June 04, 2004

William Gibson on Lies Exposed in Telltale Colors

Here's a link to a New York Times piece from last year written by William Gibson. I'm not sure if I'm remembering correctly, but I think the piece was part of a group of related articles where famous people were asked what they'd like to see technology make possible. Gibson's answer was that he'd like "some voodoo thing that unfailingly highlights [in a piece of text] outright lies, spin and misperception - in different colors".

I've been meaning to post this for a while, and I was intending to add a few of my thoughts on this matter -- on making the accuracy of claims more apparent, but that's something I'll have to leave for later.

I'd set my Mac to show me the outright lies in Pistachio, the spin in sky-blue Bondi, and the misperceptions in succulent Plum. Large swaths of news would probably be Plum, both that written by journalists and some large percentage of politicians' quotes. Perhaps relatively few Pistachio highlights would appear in the actual reportage, indicating direct mendacity on the part of a journalist, though it would be interesting to find out just how few, or how many.

Thoughts in Few Words #6

.

Most of our own past is as a story whose details we recall

Thursday, June 03, 2004

Words Like a Neon Light, With the Wave of a Phone



BBC News reports:

Nokia is making a mobile that lets you write short text messages in mid-air... A motion sensor in the phone makes the lights blink in a sequence that spells out letters when the handset is waved in the air...A trick of human vision turns the sequence of letters into a message that hangs in the air.... could be used by friends to talk to each other across crowded rooms or open-air concerts.... could be used to play games overlaid on city streets, as a heckling device or a novel way to interact with other devices.
New ways to communicate are always interesting, no matter how trivial they may appear to the imagination. For the pervasiveness of communication, the complexity of our lives, and the way the two are intertwined, always outstrips anything we can simply imagine.

Wednesday, June 02, 2004

More Than Words For Snow #5 - Implicit Categorisation

This one's going to be a quickie... I just want to get the basic idea down, and I'm not worrying too much about expression...

This post is about how language can be used to obfuscate reasoning by implicitly categorising something as something it isn't.

An example of implicit categorisation that I came across prompted me to write this post. I was reading Philosophy: The Basics by Nigel Warburton -- which is not a bad book, BTW -- and specifically, the chapter on morals/ethics, and the part where he outlines neo-Aristotelian Virtue Theory. What this theory is, and my opinion of it, aren't important for this post and I won't be going into them -- I just want to comment on the way Warburton talks about the theory.

The text that's relevant to the example is this. Following the section outlining the basic details of Virtue Theory is a section titled 'Criticism of Virtue Theory', in which he says "A major difficulty with virtue theory is establishing which patterns of behaviour, desire, and feeling are to count as virtues" and, in elaboration of this, "the danger is that virtue theorists simply redefine their prejudices and preferred ways of life as virtues, and the activities they dislike as vices".

If we start considering that text, we can see that "establishing which patterns of behaviour, desire, and feeling are to count as virtues" (which I'll refer to as the "establishment problem") is "a major difficulty" and that this is a "criticism of virtue theory". As I will explain in a moment, when he refers to the establishment problem as a major difficulty, he implies that it is an inherent problem with the theory -- he categorises it as an inherent problem with the theory.

This is unfortunate, because the establishment problem is not an inherent problem with the theory. This ought to be apparent if we consider it for a moment. If virtue theory says that moral behaviour is based on virtuous behaviour, then we have the difficulty of determining what is virtuous. We need to determine how to turn its basic principles and tenets into more concrete courses of action. But this problem isn't particular to Virtue Theory.

It's a problem common to all moral frameworks - we have to determine how to interpret their basic principles and tenets. And regardless of the theory, we can do this well or we can do this poorly. It might be argued that this is harder to do (perhaps too hard) in some theories than it is in others, but I do not see why this applies to Virtue Theory (and Warburton does not seem to argue that this is the case).

Thus, if this issue of interpretation -- the establishment problem -- is common to all moral frameworks, and is only a problem when it is poorly done, then it should be clear that it is a problem that is independent of any particular moral framework, or, in other words, not an inherent problem with any particular moral theory.

If the establishment problem is not an inherent problem for Virtue Theory, then it cannot rightly be described as a "major difficulty" for it. I've claimed that Warburton implicitly classified it as such, and I want to now explain why I think his text does this. He didn't explicitly say it was -- or wasn't -- an inherent problem, but because he left the issue open and described it as a "major difficulty" in the section 'Criticism of Virtue Theory', the only way we can sensibly interpret his meaning is if we assume that it is an inherent problem.

In fact, there's a second example of implicit categorisation in that passage. By describing the establishment problem as a "major difficulty" under the heading 'Criticism of Virtue Theory', and by not elaborating on what a major difficulty means in so far as it is a criticism, it is implied that it is a major difficulty that can be counted as a criticism of virtue theory. For it is not necessarily the case that a major problem with a theory has to be a criticism of the theory. For example -- and to take an example that fits in with the theme of morals -- there are major problems -- difficulties -- involved with trying to be a good person, but this doesn't mean these problems constitute a criticism of trying to be a good person.

I'm now going to try getting closer to the heart of this issue of implicit categorisation. In effect, a "subject" (the establishment problem) is something that could be interpreted in a number of ways, and is being referred to as a particular type of thing. However, rather than explicitly calling it that type of thing, it is implicitly being referred to as that type of thing. The reference is implicit because there is no explicit link from the referrer to the referent, and no explicit statement about the nature of the link between the two, that is, of what the referrer is saying about the referent.

The link and the meaning of the link are implicit because they are derived from the following: 1) the referrer being the heading of the section 'Criticism of Virtue Theory', and 2) the referent (the establishment problem) residing within that section and saying something negative about the theory, which leads us to think that we have grounds for criticising the theory (this is the meaning of the link).

We are drawn into this implicit categorisation because it is the only sensible way to interpret the writing. For if we were to categorise it differently from the implicit categorisation, it would mean that the point the writing was making would be wrong, and there is no apparent reason why it would be wrong -- it seems like a fair and adequate point. This is, of course, a warning about the dangers of not looking beyond what things apparently seem to be, and about the importance of considering what things actually say, not what you think they mean -- but those are other stories.