Wednesday, November 30, 2005

Perceptual/Cognitive Models Made From Purely-Speculative Explanations

Some sketching...

If a belief has no supporting evidence, then holding it to be true is pure speculation. We can denote beliefs with no supporting evidence, and any claims expressing them, as being purely-speculative.

If I don't know what is currently in my laundry, and I say that there is a pink elephant in it, I have no reason to believe that this is actually the case, and I am making a purely-speculative claim.

It's often hard to say a purely-speculative claim is wrong -- how do I know that there isn't an elephant in my laundry right now? -- but if we can see that the claim is purely-speculative then we are unlikely to give it much credence.

We might expect that it'd be obvious if a claim was purely-speculative, but in fact I don't think that's always the case. I think that it is less obvious when the purely-speculative belief is an explanation of some phenomenon, and even less obvious when that purely-speculative explanation constitutes our perceptual/cognitive model of the phenomenon.

I think a far greater proportion of our knowledge than is commonly believed is in fact not knowledge in the normal sense, but explanations. Much of our memory is like this, as is a lot of what comes to mind when we want or need to justify something. What I'm more concerned with here is the case where we observe a phenomenon and develop an explanation for it. A purely-speculative explanation has no supporting evidence. By definition, it will involve some entities, properties or behaviour that we cannot or have not seen.

Sometimes this is fairly subtle. There were many purely-speculative explanations of those points of light in the night sky that we know as stars -- explanations in terms of gods, pinholes through which light shone, and so on. We can see those points of light, but of course we cannot see them and their context clearly. The explanations invoked invisible things in, or associated with, them.

Most historical beliefs about the nature of the natural world are purely-speculative explanations.

I think it is harder to see that a belief is purely-speculative when that belief is an explanation. Or at least, such beliefs are easier to take on board. Historically, it's clear that people were Damn-Sure about all sorts of purely-speculative explanations. You could probably even say that people hold purely-speculative beliefs with more vigour than ones that have supporting evidence.

I think part of their appeal is simply that they provide an explanation. This can simplify matters and bring order to disorder. And the fact that an explanation accounts for something can, without too much effort, be seen as a kind of evidence for its correctness.

But, you might ask, doesn't the fact that it provides an explanation actually give it some level of credence? That seems to be a fairly widely held belief, but history shows that purely-speculative explanations are much more likely to be wrong than right.

In fact I think it's quite easy to come up with a false purely-speculative explanation. To account for the phenomenon it just needs to invoke entities/properties that "just are". That is, some sort of essence. Often, this involves intentional agents of some sort. Why does the sun move across the sky? Because there's a god who pulls it across with his flying chariot. Intentional agents that just do stuff are a commonplace occurrence in our everyday lives, and from that it can seem reasonable to invoke them elsewhere.

I think that once we get used to an explanation, it can become habitual, and we can start to transparently think of the phenomenon in terms of that explanation. It can become an integral part of our perception of the phenomenon.

An explanation is buried most deeply when it essentially constitutes the perceptual/cognitive model of the phenomenon. I think that in such cases it just seems like you know the nature of the item. It doesn't seem to be that what you have is just an explanation to account for phenomena, and a purely-speculative one at that. (I suppose you could say that it's not just purely-speculative explanations that are problematic in this situation, though I expect they are the more common case, because if you've got at least some evidence I think you are much more likely to be aware that you're dealing with an explanation.)

Our perception is such that we tend to project our perceptual/cognitive models out onto what we -- to use vision as an example -- 'see'. We treat them as part of what is out there, as if we are directly perceiving them.

In vision, we don't really just 'see' some 'raw' image, just blobs of colours at different positions in the visual field, we 'see' a perception of the scene in terms of our perceptual/conceptual concepts. We see lines, objects, depth. We don't just see a face arranged in a particular configuration, we see an 'angry' face. And so on.

We even 'see' hidden things. If we catch sight of part of a Coke can, our mind fills in the missing part of the Coke logo. We perceive things about people's states of mind from their words and facial expressions. It's all unconscious and automatic. What is completely invisible to us can seem quite real, quite 'there', and quite tangible to us. (Information, as it is usually conceived, is such a thing -- we can see encodings of information, but no one has ever seen information itself -- and I would suggest that it is purely speculative, but that's another story.)

Sometimes our perceptual/cognitive models can actually affect what we perceive, like when we're reading a book or a sign and we mistake a word for what we were expecting to see. Analogues of this happen in what we hear. (Note that if we're not consciously on the lookout for these misperceptions, we can easily fail to take note of them when they happen, so we shouldn't necessarily expect to be able to recall them having happened to ourselves.)

When a purely speculative explanation constitutes our perceptual/mental model of a phenomenon -- whether we have developed it ourselves or picked it up from someone else, or it is innate -- it can become its own justification. We invoke the entities of the explanation all the time as part of our mental descriptions of what is going on. We lose sight of the fact that this is not what the phenomenon is like, but just an explanation, and it becomes simply how we see that phenomenon -- so of course it is right. The belief in its correctness is based on a circular argument.

Sunday, November 27, 2005

Notes on What Qualitative Perceptual Concepts Are

In the last post, I wrote a few notes on 'simplicity' as a 'qualitative perceptual concept'. Here I want to write a few more notes on what I mean by a qualitative perceptual concept.

These are very rough notes, and no doubt not very intelligible to anyone else.

We perceive things in terms of concepts: our perceptual/cognitive systems perceive items in terms of concepts. These concepts are not there directly in the items, but are part of a 'model' of the nature of things in the world. In terms of interactions, there is a task, there is how the item realises it, and there are properties of this.

A perceptual concept is really a specific set of concepts and properties with certain values. It's like a template. When a particular item matches this template, we say that the item has this perceptual concept.

What this means is that a perceptual concept like simplicity is something we know through its effects. We don't necessarily know exactly why something is 'simple' or what 'simple' means. It is qualitative because we do not have conscious access to the details of the template.
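To make the template idea a bit more concrete -- purely as an illustrative analogy, not a claim about how our perceptual systems actually implement it -- you could picture a template as a set of property constraints, where the matching process only reports a verdict, not the details. A rough sketch (all the property names and thresholds here are made up):

    # Illustrative analogy only: a perceptual 'template' as a set of property
    # constraints. The property names and thresholds are invented for the sketch.

    def matches_template(item, template):
        """Return only the verdict -- the 'qualitative feeling' -- not the details."""
        return all(check(item.get(prop)) for prop, check in template.items())

    # A hypothetical template for 'simple': constraints on properties of the item
    # as we interact with it, not purely intrinsic properties of the item alone.
    simple_template = {
        "visible_controls": lambda n: n is not None and n <= 5,
        "steps_to_common_task": lambda n: n is not None and n <= 2,
        "requires_manual": lambda b: b is False,
    }

    music_player = {"visible_controls": 5, "steps_to_common_task": 2, "requires_manual": False}
    print(matches_template(music_player, simple_template))  # True -- it just 'feels' simple

The point of the sketch is just that, consciously, all we get back is the verdict; the constraints themselves stay hidden, which is why the concept feels qualitative.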

'Simple' is a perceptual concept that really has to do with how we stand in relation to the item; it's not an intrinsic property of the item. Whether we would call an item simple depends on our perceptual and cognitive properties.

There are other perceptual concepts that do actually have to do with the item itself. For example, whether something is intelligent is not something to do with the perceiver (of course, different perceivers may dispute whether something is intelligent or not, but this doesn't mean that the actual intelligence of something is relative to the perceiver). For such concepts, our perception can notice that this property is there in the item, but knowing the effects, as we do with perceptual concepts, is quite different to actually knowing what in the item causes it to have that property. That is, we don't know what intelligence actually is.

Sketching on Simplicity As Qualitative Perceptual Concept

I read this Fast Company article, The Beauty of Simplicity:

Marissa Mayer, who keeps Google's home page pure, understands that less is more. Other tech companies are starting to get it, too. Here's why making things simple is the new competitive advantage.

And I thought I'd write up some thoughts regarding the notion of 'simplicity'. This is really just a collection of notes, some initial sketching.

In software development, software user interfaces, and design, there's a lot of talk about simplicity. In the latter two categories, Google's main page and Apple's iPod are common examples. I think this is, on the whole, good, but sometimes I also think simplicity is treated as a fairly mindless slogan.

As a concept, 'simplicity' is what you could call a qualitative perceptual concept. We look at or interact with something and we get this qualitative feeling that it is simple, but we have trouble going past this qualitative feeling and expressing exactly what it is that gives it simplicity. I think this level of understanding is a bit too vague for practical purposes.

Simplicity seems to be associated with the number of features the item has. But I think that's only a rough correlation, and it's not really what causes things to be 'simple' or 'complex'. Simplicity is not simply having fewer features, and complexity is not simply having more features.

I think that what we call simplicity is really a matter of good design: design that gets the right trade-offs for our perceptual and conceptual capabilities, for the sorts of uses we put the software or device to, and so on. What we really need is a concrete, sharp understanding of what creates good design, and what doesn't.

The problem with qualitative perceptual concepts is when we start reasoning in terms of them, especially if we are not critical enough when doing so. If we start reasoning back from our qualitative picture of simplicity as the number of features, we start concluding things like a larger number of features being bad out of hand.

Making this worse, when we come to recognise something as important, we can be too eager to apply it. It's like the saying that when all you've got is a hammer, everything becomes a nail. We start to see it as an end in itself, not simply as an important factor among others.

We tend to associate sophistication with complexity, but these are not necessarily the same. I'm reminded of what Edward Tufte says in his book Envisioning Information: that to make information simpler and easier to digest, the simple-minded view is to give the user less information, when in fact giving them more information can actually make it easier.

Also, when we're thinking in terms of such qualitative concepts, there's no check on things becoming too subjective. For example, we can falsely attribute our difficulties conceptualising and getting the hang of something that works differently to its complexity. It might not be any more complex. The problem is, when we talk about simplicity and complexity, we're talking about our experience, not the thing itself.

Saturday, November 26, 2005

Article: Turning academia into a cafeteria

In the LA Times, Russell Jacoby writes that "offering students a buffet of bogus 'choices' only undermines intellectual integrity and corrodes academic freedom."

The following are some of the main points.


...We live in a choice-addled society. The jargon of choice, a second cousin of diversity and multiculturalism, undermines intellectual integrity and coherence. "Choice" and "diversity" are universal passwords that unlock all doors. Who can oppose them without appearing authoritarian?

...

The notion was seductive, but it opened the way to teach anything and everything in the name of airing a dispute. Were television situation comedies great literature? Teach the conflict.

...

But the jargon of choice and diversity actually corrodes academic freedom, which once referred to the freedom of college instructors to teach what they considered salient, subject to the review of their peers, not outside authorities. Today, it increasingly means the freedom of students to hear what they — or their parents — want.

...

As attractive as these principles seem to be — diversity, choice, alternatives — what do they actually mean in the classroom? Must an astronomer teach astrology? The course on early Christianity include militant atheists? A class on the Holocaust, the Holocaust deniers? A lecture on 9/11, the conspiracy theorists? These "other viewpoints" all have a bevy of experts behind them. The few qualifiers tossed into the proposed Academic Bill of Rights, which specify that diverse views be aired only "where appropriate," do not undo the damage.

...

Mesmerized by the jargon of choice, we forget a basic principle: Truth itself is partisan.

Thursday, November 24, 2005

Brainiest Kid or Biggest Repository of Superficial Knowledge?

There's this tv quiz show "Australia's Brainiest Kid". It's for schoolkids in their early teens, and they pitch it as being about braininess or intelligence, but it's really about 'general knowledge', which really means superficial knowledge of facts.

It may be 'just a tv show', but do we really need more things reinforcing this confusion of general knowledge for intelligence? It's a confusion that seems pretty common in our community, and also entrenched in a lot of ways in our education systems -- both to quite negative effect, I think.

The sort of knowledge that's important is more universal and deeper, including such things as principles and details of how systems work. Facts play a part in such knowledge, but it's mostly peripheral. They're required in learning the deeper knowledge, and you'll know them as a result of knowing that deeper knowledge.

Why Does the ID Movement Claim a Creator is the Only Alternative?

The Intelligent Design movement claims that lifeforms are too complex to have been created by natural selection. I don't believe they have a valid argument, but if they believe there's unaccounted-for complexity, why don't they consider the possibility that it's the result of some other, as yet unknown, naturalistic mechanism? After all, our current knowledge is hardly complete and final. Why is a creator the only acceptable answer?

Ooo, ooo, Duck-shaped Mini Vacuum Cleaner!

Finally. Cute though.


Tuesday, November 22, 2005

Example of System Properties, Not Intention, As Cause - American vs European Working Hours

More of a note for myself than anything else. According to this New Yorker article, the reason Americans work more hours than French or German people is this:

European labor unions are far more powerful and European labor markets are far more tightly regulated than their American counterparts. In the seventies, Europe, like the U.S., was hit by high oil prices, high inflation, and slowing productivity. In response, labor unions fought for a reduced work week with no reduction in wages, and greater job protection. When it was hard to get wage increases, the unions pushed for more vacation time instead.
It's an example of the properties of the system being responsible for the result, rather than it being a direct result of intention.

Large Collection of Optical Illusions

A fairly large collection of optical illusions.

(via Reddit)

Edward Tufte's Presentation Tips

Edward Tufte's presentation tips.

Update: the original link appears to no longer work. Some of his tips are now listed here.

Monday, November 21, 2005

Bill Burnham's take on Google Base As Gigantic, RSS-fed, XML Database

Bill Burnham speculates on the recently announced Google Base

"Google intends to build the world's largest RSS "reader" which in turn will become the world's largest XML database."
"...once Google assimilates all of these disparate feeds, it can combine them and then republish them in whatever fashion it wishes. Google Base will thus become the automated engine behind a whole range of other Google extensions (GoogleBay, GoogleJobs, GoogleDate) and it will also enable individual users to subscribe to a wide range of highly specific and highly customized meta-feeds."

Is the Lack of Suit and Ties That Bad?

The Sydney Morning Herald reports IT workers dubbed 'worst dressed'

(via Slashdot)
"I think the way in which you present yourself is very important to building relationships and is integral to business and personal success," she said.
Nothing wrong with wanting to look good. But I do have a bit of a problem with the emphasis on what might be called 'business style'. That's trousers, long sleeve shirts, ties, suits, etc. That article seems to criticise the IT field's lack of emphasis on business style:
"Because the majority of IT people are not in front of customers all the time, they tend to slack off," she said
Help-desk staff were named as the worst offenders, followed by those working in technology start-ups, many of whom had continued to wear T-shirts to work as a consequence of the casual web culture of the '90s.

"The internet is now such a massive industry but people haven't caught up in terms of their dress," she said.
It seems to me that business style is really a symbol, representing the business 'class', as opposed to manual labourers, students, and so forth. It's a symbol that we associate with a certain sort of businessy behaviour and attitudes, with connotations of 'proper', serious and kinda staid behaviour. We feel it's somehow improper for a business person to be wearing more casual clothing.

What I don't really like about it is that it's all about appearance rather than substance, and in particular, it's criticising someone just because they don't conform to what's considered the proper appearance. It seems a very primal form of pressure to conform.



Related to this, Paul Graham writes:
A company that made programmers wear suits would have something deeply wrong with it. And what would be wrong would be that how one presented oneself counted more than the quality of one's ideas. That's the problem with formality. Dressing up is not so much bad in itself. The problem is the receptor it binds to: dressing up is inevitably a substitute for good ideas. It is no coincidence that technically inept business types are known as "suits."

Drug Addiction Example of Attributing Properties to Physical Inputs

New Scientist reports: "Gaming fanatics show hallmarks of drug addiction"

(via Slashdot)

I think we're meant to be surprised by that. We're supposed to think that the game is somehow acting like a drug. But unlike a drug, there's no physical thing you're addicted to, no chemical substance with addictive properties. At most, calling it an addiction, like drug addiction, is mere analogy.

This is a good example of how people tend to attribute a property associated with some 'input' -- in this case, addiction -- to the input itself having that kind of property. That is, if something is addictive, it's because the input has the property of 'addictiveness'. But often such properties really have more to do with what we do with those 'inputs'. Sleeping tablets don't make you sleep because they have some 'sleepiness' property. (I mentioned an example of this kind of thing, concerning carcinogens, in this post -- scroll down to the paragraph starting with 'The second section moves from...'.)

Addictiveness has to do with our physiological/psychological processing of 'input', and there's no reason to assume that a 'physical' input like a drug is fundamentally different from a 'non-physical' input like a computer game.

I think this case is also an example of how people think of physical 'things', in this case, drugs, as somehow more 'real' than -- supposedly -- non-physical things, in this case, playing computer games.

Cringely's Speculations on Google - Super Content-delivery System

Robert Cringely speculates that Google is planning to have its own fibre networks and datacenters everywhere, so it can provide content much faster than anyone else. The interesting thing is what this could enable.

(via Slashdot)

Sunday, November 20, 2005

The Effect of the Internet and other Technologies on the UFO movement

[Update: the article can now be accessed here]

In the article "Internet Killed the Alien Star", Douglas Kern contends that the reason the UFO movement, which was so big in the 90s, has been largely killed off these days, is because of the internet and other recent technological developments, such as camera phones.

He cites the following reasons. I don't think the article provides solid evidence that these are the sole or major reasons, but it seems credible to me that they are important factors.

With webcams and camera phones everywhere, why is there still zero tangible evidence? With improved communications technologies -- including cell phones and instant messaging systems -- it ought to be possible to summon throngs of people to witness a paranormal event, and yet such paranormal events don't seem to happen very often these days.

The instant publishing capability of the internet removes a lot of credibility from the idea that there are conspiracies to suppress UFO information. With the internet, it would be pretty difficult to stop the dissemination of such information -- say, something snapped on a camera phone and instantly published -- so if it exists, where is it?

The speed and ease of information dissemination means that hoaxes get uncovered much faster -- though he doesn't explain exactly why this is -- and so does news of this. The notion that UFO material may be falsified or hoaxed is much easier for people to swallow these days. He notes that on the internet, "wild rumors and dubious pieces of evidence are quick to circulate, but quickly debunked".

He doesn't mention this, but I also wonder whether the prevalence of email scams has made people more critical (as I've considered previously)?

Adam Bosworth on Applying Lessons From The Web To Dealing With Data

Adam Bosworth writes in ACM Queue: "The Web has much to teach us about managing and modeling distributed data. It's time we began listening." Specifically, how we can apply these lessons to improve Relational Databases, and how RSS 2.0 and Atom apply these lessons to XML.

Wednesday, November 16, 2005

Zubbles - Coloured Bubbles

Popular Science's Best of What's New 2005 reports on, amongst other things, Zubbles



bubbles that are "nearly opaque, with a single vibrant hue" and they don't leave a stain when the bubbles burst.

Easy Eater

My friend Dan King's dining venue site

Easy Eater - www.easyeater.com

launches on Dec 15, but is accepting listings now. Easy Eater "allows visitors to easily search a detailed, accurate, and up to date online directory of dining venues". At present, the main focus is dining venues in the Sunshine Coast area of Queensland.

Thursday, November 10, 2005

Better Birth-Control Design

In Jorn Barger's words, the "first sane condom-design ever".

This is not the main point of the article, but something I wanted to comment on:

The thing that Andrew told me— and explains in his article— is that all those world health and family-planning organizations that promote birth control around the world are not recommending these condoms because the powers-that-be think they're a hedonistic frill.

The world of STD-prevention is SO FUCKED UP. Condoms should be made as easy, pleasureable, and cost-free as possible, distributed en masse to men the world over.
Isn't that ridiculous? But it's telling about the way some people think.

Another Example of Assuming Psychological Causes

Another example of the tendency to assume environmental or, more commonly, psychological causes for misfortunes -- in this case, illnesses. This one is from a review of Walter Gratzer's The Curious History of Nutrition:

Though Gratzer appears more interested in anecdotes than in theory, you can't read this book without spotting a theme: We blame psychology and environment for everything, until science comes up with the real cause. Scurvy, blight of the 18th-century sailor, was attributed to low morale, bad air, and all kinds of other folderol, until it was finally proved to be a vitamin C deficiency.

Kansas' Attempt to Redefine Science

MSNBC reports that the Kansas Board of Education has decided to change science standards to be in favour of intelligent design teaching. But there's something else they've done as well:

In addition, the board rewrote the definition of science, so that it is no longer limited to the search for natural explanations of phenomena.
Now that is something to be scared about. Science is natural explanations of phenomena.

Friday, November 04, 2005

Software as Language

Jon Udell notes that hackability is a key concept behind so-called Web 2.0 applications (such as Google Maps and Flickr). But he also recognises that the term has some unfortunate connotations. In the mainstream, it's associated with illegal hacking into systems, and in general the term implies a workaround, which doesn't really capture what's going on with hackable applications. He suggests the term Democratizing Innovation (which is the title of a recent book).

I think there's an alternative way to conceptualise hackability. To me, the concept underlying hackable applications, and the democratization of innovation that it can enable, is software as language.

A programming language provides building blocks -- a set of functions, including those for managing control flow, and a grammar that says how they can be combined -- and when you write a program you create a higher-level set of building blocks which you organise in a specific arrangement.

The idea of software as language is to not just provide the user with the software system as the fixed, specific arrangement, but to give them access to the building blocks so they can use them in their own arrangements and as something to build upon.

In other words, rather than giving them a program, you're giving them a language that they can use to compose their own programs and solutions.

Naturally, you want to design things to facilitate composition using the building blocks -- such as with the idea, which Jon mentions in the article, of user-innovation toolkits.
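As a minimal sketch of the difference -- with all of the function names hypothetical, not taken from any particular product -- instead of shipping only a fixed program, you also expose the pieces it's built from, so users can compose their own arrangements:

    # Sketch: 'software as language'. The building blocks are exposed to the user,
    # and the shipped application is just one particular arrangement of them.

    def load_photos(tag):
        """Building block: fetch photos carrying a given tag (stubbed out here)."""
        return [{"title": "%s photo %d" % (tag, i), "lat": -27.0 + i, "lng": 153.0}
                for i in range(3)]

    def geocode(place):
        """Building block: turn a place name into coordinates (stubbed out here)."""
        return {"Brisbane": (-27.47, 153.03)}.get(place, (0.0, 0.0))

    def plot_on_map(points):
        """Building block: 'plot' points on a map (here, just print them)."""
        for p in points:
            print("%s at (%s, %s)" % (p["title"], p["lat"], p["lng"]))

    # The vendor's fixed arrangement -- the 'program' most users see:
    def photo_map_app(tag):
        plot_on_map(load_photos(tag))

    # A user's own arrangement -- the 'language' in use: the same blocks,
    # composed differently, e.g. only photos near a chosen city.
    def photos_near(tag, place, max_degrees=2.0):
        lat, lng = geocode(place)
        nearby = [p for p in load_photos(tag)
                  if abs(p["lat"] - lat) < max_degrees and abs(p["lng"] - lng) < max_degrees]
        plot_on_map(nearby)

    photo_map_app("sunset")
    photos_near("sunset", "Brisbane")

A user-innovation toolkit is essentially this plus the documentation and composability needed for users to comfortably write their own arrangements.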

Thursday, November 03, 2005

More on False Neutrals

I've talked about false neutrals before, in this post.

Here's another example. I was watching this news item on tv that was talking about the possibility that Australia could have a very big dependence upon China within 20 years' time. One of the opinions on this implied that since there is a lot of uncertainty in predicting such things, it was a fairer assumption to think this wouldn't be the case. It's a false neutral because it's assuming, without any justification, that the present state -- a smaller dependence on China -- is more likely in the future than the predicted state.

I think that part of what's going on with false neutrals is this. Rather than considering each of the possibilities on its own merits, we're framing things in terms of the prediction. This is an example of structure capturing -- evaluating things from the point of view of the current 'subject' of our thoughts. And then, from this framing, we're conceptualising it as a zero-sum game. In a zero-sum game, one entity's gains are the other's losses; Wikipedia describes it as "a situation in which a participant's gain (or loss) is exactly balanced by the losses (or gains) of the other participant(s)".

In the context of the false-neutral situation, the gains and losses translate to certainties and uncertainties. If the prediction is uncertain, then, we reason, the alternative -- the status quo -- must be more certain.

Why would we conceptualise it in this way? Well, I suspect that zero-sums are a pretty general heuristic that our brains apply. I'm not going to try and think of other examples right now, but I do think it is fairly commonly used. I can see why it might be useful to assume that a zero-sum applies, in the absence of any better understanding of a situation.

Another way to think about false neutrals is in terms of how conservative a claim is. I don't mean conservative in the political sense, but in terms of how speculative the claim is. A false neutral is something we falsely think is the most conservative option.

On this basis, here are some other sorts of false neutrals. (Apologies for the following being fairly abstract -- it's not illustrated with examples, as unfortunately I don't have time at the moment to think any up.)

We tend to think that the most conservative standpoint is the one that is closest to the "known facts".

We also tend to think that the viewpoint left over, when we shoot down some speculative claim, is the more conservative.

Both of these beliefs are false. Of course, sometimes the most conservative standpoint is the one closest to the facts, or the alternative to some speculation, but these beliefs are false as general rules.

Why? Because they do not involve considering the actual nature of the supposedly conservative claim, which is required to see how conservative it actually is. In the first case, we may have good reason to believe that the currently "known facts" are very incomplete, and we may know that they are unlikely to be representative of the true picture. That is, we may know that the current picture is likely to be a quirk of our current state of affairs.

In the case of shooting down a speculative standpoint, the alternative to that standpoint may itself be quite speculative. It's quite often the case that the alternative is something that is widely held, and thus is considered 'quite reasonable', but is in fact very speculative and unjustified.

Tuesday, November 01, 2005

Article: Texting teenagers are proving 'more literate than ever before'

The Times reports:

Fears that text messaging may have ruined the ability of teenagers to write properly have been shown to be unfounded after a two-year study revealed that youngsters are more literate than ever before.
Makes sense to me. People who think texting will reduce literacy often seem to do so on the grounds of poor spelling and the heavy use of abbreviations, which are pretty superficial and fairly irrelevant matters as far as true literacy is concerned.

And it's not so surprising that it would improve literacy, since writing is a skill, and the more practice you get, the better you get at it. And in texting those skills are often exercised with content that is non-trivial to express -- describing events, thoughts on things, social interactions, etc.