Wednesday, April 26, 2006

Criticisms of a Guardian Article on Technology's Effects on our Brains

The Guardian has an article indicting the effects of new technologies on our brains -- but I think its criticisms are extremely trite. The article contains fourteen paragraphs, but the actual criticisms are given only three sentences (quoted below), and no justification.

...the process of traditional book-reading, which involves following an author through a series of interconnected steps in a logical fashion. We read other narratives and compare them, and so "build up a conceptual framework that enables us to evaluate further journeys... One might argue that this is the basis of education ... It is the building up of a personalised conceptual framework, where we can relate incoming information to what we know already. We can place an isolated fact in a context that gives it significance." Traditional education, she says, enables us to "turn information into knowledge."
The flickering up and flashing away again of multimedia images do not allow those connections, and therefore the context, to build up. Instant yuk or wow factors take over. Memory, once built up in a verbal and reading culture, matters less when everything can be summoned at the touch of a button (or, soon, with voice recognition, by merely speaking).
Why do these images not allow these connections? No reason is given.

Does memory really matter less? No justification is given for the claim. Does being able to look up facts really mean that we remember things less? It's not obvious that it should -- memorisation processes are largely unconscious, and memorisation involves understanding the content.

Perhaps the ability to look things up just gives us more leverage?

And why no mention of how technology may be increasing our cognitive powers? For example, the increasing sophistication of the plots of television programs, which requires viewers to remember more and connect more dots?

Steve Pavlina on How to Wake Up Immediately When Your Alarm Goes Off

Steve Pavlina gives some sensible sounding advice on how to get up immediately when your alarm goes off. The article is a bit long-winded, but the point is basically this:

Even though you really would like to get up when the alarm goes off, trying to convince yourself of this at the time just doesn't work very well. You're tired, you don't feel like getting up -- and your brain is just too foggy to effectively fight that.

Instead, he suggests forgetting about your conscious mind and training your subconscious mind to get you out of bed when the alarm goes off, by the only way it can be trained: through practice. So practice lying in bed and jumping up as soon as the alarm goes off, and if you practice it enough it'll become automatic.

Thursday, April 20, 2006

Paul Graham on Discarding the Assumption that it's all About Us

In this post, Paul Graham starts out writing about what drives bloggers to blog, but ends up talking about the following. It's a very good read.

...The history of ideas since people first started writing them down is a history of gradually discarding the assumption that it's all about us.

...The idea that we're the center of things is difficult to discard.

...So if you want to discover things that have been overlooked till now, one really good place to look is in our blind spot: in our natural, naive belief that it's all about us. And expect to encounter ferocious opposition if you do.
In relation to my PhD research, I actually think the reason people haven't been able to understand information is exactly this kind of thing -- more specifically, taking certain elements of their perception for granted.

This has to do with what I wrote the other day about people not being able to understand information because they are looking for a declarative understanding of it rather than a fully imperative one. An imperative understanding forces you to take your perception out of the picture.

Wednesday, April 19, 2006

Steve Grand: Moving AI out of its infancy: Changing our Preconceptions

An excellent article by Steve Grand, on where he thinks AI needs to go. It's like a summary of the main points in his book "Growing Up With Lucy".

The main points he makes are:

  • The purpose of the brain is to compensate for all the time it takes for the nerve signals to travel to it and back again.

  • Brains don’t make decisions

  • Brains perform coordinate transforms

  • Nervous tissue is a new state of matter

  • The more complex a robot is, the easier it is to make progress

Things We're Poor at Imagining, Concerning Higher-Rung Skills

Another bit of sketching on laddered skills...

There are some things we are poor at imagining, concerning higher-rung skills.

Our ability to develop a higher-rung skill.

We often ask ourselves the question: could I develop my skills beyond their current level, to a higher rung? Our nature seems to be such that, by default, we feel we can't.

I think this is a flaw in our nature. It's very difficult to really know that you can't develop the skill. All you can really do is work at it, and keep on working at it -- and invariably, if you do this, you will end up being able to do it.

The lesson is: expect to feel that it's not possible, and take no notice of it.

Why do we think we can't? I think a lot of the cause is that we can't see how we could develop them further. But the thing is, you can't see how it happens. It just does, given the appropriate practice.

I think it's also, in part, because we tend to think that if something isn't possible at present, then that's because it is not possible, full stop. It's clear, if you look at history, that people have always tended to think this. Human flight? Ridiculous! Being able to instantaneously talk to someone on the other side of the world? Impossible! And so on.

Whether still-higher rungs exist.

Do any higher rungs exist, beyond the highest level of skills that people currently have? This is really just a variation on the previous item.

We like to think we can tell, and we tend always to think the answer is 'no', but it's very difficult to actually rule out the existence of higher rungs.

Sure, we may not be able to see how there could be any, and at present no one may be able to go further, but these are poor indicators of the actual nature of the task and what is possible.

An example I like is the case of ollies in skateboarding. That's when the skater is skating along and jumps into the air with the board stuck to their feet - or at least, that's how it looks.

Before the ollie was invented, if you had asked people whether they thought such a thing was possible, I think they would have said: no, it's not possible, it can't be done; none of the experts can do it; I can't see any way that it could be possible.

But that's the thing: you can't imagine it. I doubt the inventor created it by thinking of the idea and then going out and practising it. It probably evolved out of something they once did, perhaps by accident.

What a higher-rung skill enables

We seem to think we can correctly imagine what a higher-rung skill we don't yet have would enable us to do -- that all that's required to imagine it is the appropriate amount of effort.

While this may be possible in some cases, in the general case it is simply not. The only way is to actually acquire the skill and see what you can do with it. You can't imagine what would happen if you had the skill.

Essentially, being able to do that would require being able to properly simulate having the skill -- and that's more complicated than simply having the skill, which you don't.

Tuesday, April 18, 2006

On the Development of Skills from Conscious to Unconscious and Habitual

Just noting a few more things related to laddered skills.

It's not enough to know what a skill involves. It's not enough to put in a lot of effort and think you'll suddenly be able to do it. A skill has to be developed through practice, and this takes time.

Over time, the performance of skills becomes more unconscious and habitual, but their initial development takes a lot of conscious effort.

Not only in actually performing the skill, but in making sure you think of doing it at the required times.

Part of the effort may also involve overriding existing habitual skills occupying the same territory. New skills aren't added into a vacuum.

What I hope to get into next is a discussion of why cognitive abilities are laddered skills, and how this viewpoint affects how we should think about them.

The Computer Revolution as a Switch to Imperative Descriptions

In their book "The Structure and Interpretation of Computer Programs", Abelson and Sussman write:

The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology--the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects.
I agree, and I'd say the declarative point of view is common to a lot more than just classical mathematics. I think it's an important advance, but one that most people haven't caught up with yet. For example, in the context of my PhD research, I think the reason people get stuck when they try to understand information is that they're stuck in declarative approaches to it.
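Abelson and Sussman's running illustration of the distinction is the square root: the declarative definition ("the square root of x is the y ≥ 0 such that y² = x") says what a square root *is*, but not how to find one. A procedure embodies the imperative knowledge. Here's a minimal sketch in Python (my own example code, along the lines of the book's Scheme version, using Newton's method):

```python
def sqrt(x, tolerance=1e-10):
    """Imperative knowledge: a procedure for *finding* a square root,
    as opposed to the declarative definition 'the y >= 0 with y*y == x',
    which tells you nothing about how to compute it."""
    guess = 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2  # Newton's method update step
    return guess
```

The declarative definition lets you *check* an answer; only the procedure lets you *produce* one.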

I came across this quote in Quotations for Learning and Programming.

Jon Udell on Freeing up the Medical Information Monopoly

Jon Udell writes about the limitations caused by the medical profession's monopoly on medical information, and what technology can do, and is doing, to free it up.

Pacific "Cargo Cults" Still Going

The Telegraph reports:

Followers of a South Pacific cult, who worship a mythical Second World War American serviceman they hope will one day return bearing riches, believe that their prayers have been answered.

Monday, April 17, 2006

Scott Adams on Respecting the Beliefs of Others

Author of the Dilbert comic, Scott Adams, writes about respecting the beliefs of others:

I also wonder if showing respect for all beliefs is causing more problems than it’s avoiding. The only thing that keeps most people from acting on their absurd beliefs is the fear that other people will treat them like frickin’ retards. Mockery is an important social tool for squelching stupidity. At least that’s what I tell people after I mock them. Or to put it another way, I’ve never seen anyone change his mind because of the power of a superior argument or the acquisition of new facts. But I’ve seen plenty of people change behavior to avoid being mocked.

Sunday, April 16, 2006

On the Character of Lower- and Higher-rung Skill-Sets

Some more details concerning laddered skills.

What would we expect to see in lower-rung skill sets? We might expect to find an incomplete set of capabilities, some of which are simply ineffective.

What about higher-rung skill sets? We might expect that they replace ineffective lower-rung skills and fill in the gaps.

But I think this is wrong. Lower-rung skills tend to provide a more complete set of capabilities than we would expect. What is 'lower' about them is that they provide a simplified set of capabilities, painted in broader brushstrokes.

And what higher-rung skills tend to do is not simply add or replace, but refine aspects of those broad brushstrokes.

It's also worth noting that even if a particular rung provides some less refined capability, this may be beneficial in some circumstances for pragmatic reasons. So 'higher-rung' does not simply mean better.

Still to come, some of the reasons I'm talking about laddered skills...

Saturday, April 15, 2006

'Science vs. Norse Mythology' comic

Not bad.

Some Interesting Discoveries About Human Perception's Ability to Glean Information

EurekAlert reports:

Faces tell the stories in UC Riverside Professor Larry Rosenblum's ecological listening lab, as volunteer test subjects show that they can "read" unheard speech -- not just from lips, but from the simple movements of dots placed on lips, teeth and tongue.

They can also recognize people's voices just from seeing their faces, and vice versa, and seem to be able to distinguish among a variety of rooms on campus just from their echoes.

Lancet Calls for Proper Research Using LSD, Ecstasy, etc

The Guardian reports:

"Use more psychedelic drugs," is not advice you would expect from your GP, but that is the call from an influential US medical journal to researchers.

An editorial in the Lancet says that the "demonisation of psychedelic drugs as a social evil" has stifled vital medical research that would lead to a better understanding of the brain and better treatments for conditions such as depression.

Thursday, April 13, 2006

Laddered Skills

Skills are 'laddered'. Your capabilities develop through several stages, or rungs.

The skills we develop build upon those we already have. Lower-rung skills are pre-requisites for developing higher-rung skills.

You can't avoid these dependencies, even if there's wide variation in the way that you learn the skills. Wide variation in learning styles does not equal complete freedom in the order that we can obtain the skills.

There doesn't have to be just a single ladder leading up to mastery of a skill, however.

You can't avoid these dependencies: without obtaining the intermediate skills, you can't leapfrog more than one rung from your current position, no matter how much you grit your teeth and exert effort. Effort can't leapfrog over dependencies. And remember that it takes time to develop each skill.
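One way to picture these dependencies is as a graph in which each rung lists its prerequisites; any legitimate learning order must respect it. A sketch in Python (the ladder and skill names here are made up purely for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical skateboarding ladder: each skill maps to its prerequisite rungs.
ladder = {
    "balancing": set(),
    "rolling": {"balancing"},
    "ollie": {"rolling"},
    "kickflip": {"ollie"},
}

# Any valid learning order must place prerequisites before the skills
# that depend on them -- you can't leapfrog a rung, however hard you try.
order = list(TopologicalSorter(ladder).static_order())
print(order)
```

A real skill-set would be a branching graph rather than a single chain, which matches the earlier point that there doesn't have to be just one ladder up to mastery.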

More on laddered skills to come.

Wednesday, April 12, 2006

First-rung Conceptualisation Frameworks: Essence-based Things and Social/Aesthetic Interpretation

Quick, incomplete sketching...

I’m going to do an oh-so-terrible thing and speak of the human mind as if it had a certain structure to it and, worse, as if that structure were less than perfect. I know, I know, I am the devil, but I would rather face an imperfect reality and try to understand it, so I can try to do something about it.

I think that we ‘naturally’ have certain means, or tools, for conceptualising the world. We only supplant these tools when our culture has accumulated a fairly large amount of knowledge about the world, and we have ourselves learnt a fairly large amount of that. (I'll explain the 'first-rung' terminology used in the title later on in another post).

I think the main tools, or frameworks, are the following: seeing the world in terms of 'things', where each thing is of a certain type. Being of a certain type means it has certain fixed characteristics. Characteristics include intentions, functions and properties.

Characteristics apply to the thing as a whole. They don't apply only to aspects of it, and they don't apply only contextually. They are essences -- just something the thing has, and the only explanation of this can be other intentions, functions or properties of the thing. The thing cannot normally have contradictory characteristics.

Intention is seen as the primary tool available for explaining why things have happened.

There are strong social and 'aesthetic' elements to our conceptualisation. Things and situations are interpreted in these ways. As part of the 'aesthetic' element, things and situations always have a property of good or bad (to some degree), reflecting our approval/disapproval of them. The social element includes notions such as power, hierarchical position and fairness/unfairness.

The problem is that the world cannot be understood very well in these terms, but if these are the only tools we have, we will squash our view of every situation into them.

Of course, the hard thing is demonstrating that these actually are the primary tools for conceptualising things, and showing that they are not very good for conceptualising the world. But all I wanted to do in this post is sketch out some basic details.

Graphically Representing Perceptual Attention-Levels

Here's an idea. I wonder if you could graphically represent the differing levels of perceptual attention we give to things in our environment.

I'm imagining something like this. It could display an animated scene of a person walking down the street. The viewer would see things from just behind the person, so they see pretty much the same view the person does.

But rather than the scene looking normal, the appearance of items in it would be modified to show how much attention the person is giving them. The more attention an item is being given, the bolder and brighter it would be displayed; the less attention, the more muted and washed out.

Items being given low levels of attention might be grouped together, ignoring the specific items within the group. Say there's a store-front containing a window display. As the person walks past, they might treat the whole store-front as a single unit, so perhaps a border could be drawn around it. And perhaps such groupings could be labelled with the category the person's perception puts them in, say 'store-front' or 'building detail'.

The view could rotate around to track what the person is looking at, and the scene would dynamically update to show the varying attention-levels being given to things.

Attention-levels would change as the person walks along, as things happen in the environment (a car drives past), and as the person undertakes actions (talking to someone on the phone).

Say they get a call on their mobile. If the phone is in their pocket, the ringing could create bright radiating lines emanating from their pocket.

And as they go to take it out and start talking to the person, the amount of attention given to other things in their environment could fade. And the words coming out of the phone, and their words, could be shown on the screen, with a level of brightness indicating the amount of attention given to them.

You could show situations where 'significant' things are happening, but the person doesn't notice because the area where they're happening isn't being given much attention.

Like someone being mugged on the other side of the street, which they don't notice because they're busy talking on the phone and watching a sports car driving past.

It would be interesting to see how well such a graphical representation of attention-levels could be implemented. And, if it could be implemented well, whether it could be used to useful effect in simulations or computer games.
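The core brightness/muting mapping could be prototyped very simply: blend each item's colour toward a washed-out grey in proportion to the attention it receives. A sketch in Python (the attention values and the grey target are made-up illustration, not part of any real implementation):

```python
def render_colour(rgb, attention, muted=(200, 200, 200)):
    """Blend an item's colour toward a washed-out grey.

    attention is in [0, 1]: 1.0 means full attention (full colour),
    0.0 means the item is effectively ignored (fully muted)."""
    return tuple(round(m + (c - m) * attention) for c, m in zip(rgb, muted))

# A sports car driving past gets nearly full attention...
print(render_colour((220, 20, 20), 0.9))
# ...while something on the far side of the street barely registers.
print(render_colour((220, 20, 20), 0.1))
```

The grouping and labelling ideas would need real scene structure behind them, but per-item fading like this is the kind of mapping a game engine could apply each frame.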

Warning Sign Number 7

Physicist Robert L. Park outlines 'The Seven Warning Signs of Bogus Science'. To be honest, I wasn't that impressed, but I did want to note the final one:

7. The discoverer must propose new laws of nature to explain an observation. A new law of nature, invoked to explain some extraordinary result, must not conflict with what is already known. If we must change existing laws of nature or propose new laws to account for an observation, it is almost certainly wrong.
It is interesting to note that we are quite happy to violate existing laws when it comes to beliefs that seem "common sense" or which we consider to be matters of philosophy (as if reality contained this separate category of phenomena).

Like with the concept of information. Most people seem to consider it to be some kind of, as yet unexplained, intangible thing. Some intangible thing that interacts with the ordinary physical world in some unknown way. And yet, virtually no-one seems to bat an eyelid about this.

Alcohol 2.0 - Minus the Bad Side-effects

NewScientist reports on the idea of creating:

...a cocktail of drugs that mimics the pleasurable effects of alcohol without the downsides. The idea is only on the drawing board, but there is no scientific reason why it could not be made right now, says psychopharmacologist David Nutt of the University of Bristol in the UK.

Tuesday, April 11, 2006

Feeling and Intuition, Embedded in Reason

Quick sketching.

Sometimes reason works best; sometimes feeling and intuition do. We have to learn what works best in which sorts of situations.

When faced with a situation where we could choose between them, feeling/intuition does not seem to be a suitable guide.

We have to consciously consider which is more appropriate, and this is a matter of reason.

And this means that, while feeling and intuition are important, they always need to be embedded in reason.

Mother dies after 911 call is treated as prank

MSNBC reports:

DETROIT - A 5-year-old boy called 911 to report that his mother had collapsed in their apartment, but an operator told him he should not be playing on the phone, and she died before help arrived.

Jurors Biased by Animated Recreations Used in Courtrooms

EurekAlert reports:

Roese and colleagues suggest that by viewing a computer-animated re-creation of an event, a person's confidence is heightened -- but not necessarily accurately. An animation, they say, provides movement as reconstructed by a prosecution, plaintiff or defense witness to reconfirm or heighten a jurist's hindsight feeling that "I knew it all along."

Tuesday, April 04, 2006

Music I Like: You Know You're Right by Nirvana

For a while I've been wanting to write a bit on music I like, but it's one of those things I never get around to.

So in a more bottom-up spirit, I thought I'd start small, and try and do it one song at a time.

This time it's Nirvana's You Know You're Right (Wikipedia entry).

Posthumous releases can be a bit of a worry -- are they just trying to dredge up any old thing they can? -- but I was quite pleasantly surprised by this song. If you don't know its background, it was recorded in '93 or '94 but wasn't released until almost ten years later.

The song is pretty classic Nirvana, combining power with melody, and 'you know you're right' is a nice lyric.

The verse music is also a little unusual for them, with a sound I can only describe as slightly funky, with more bass and more of a looping structure to it.

And who else but Kurt Cobain could half-sing, half-scream a couple of drawn out utterances of 'pain' and make such a great-sounding chorus?

Things You Don't Understand Always Seem Intangible

In an excellent essay on software patents, Paul Graham makes the point that things "always seem intangible when you don't understand them". I agree 100%. Why do I bring this up? Because 'information', which my PhD is about, is something that everyone thinks is intangible, and I don't think it is -- and I think the reason it appears intangible is the one Graham cites.

This is the relevant part of Graham's article:

Experts can implement, but they can't design. Or rather, expertise in implementation is the only kind most people, including the experts themselves, can measure.
In a footnote to this he says:
Design ability is so hard to measure that you can't even trust the design world's internal standards. You can't assume that someone with a degree in design is any good at design, or that an eminent designer is any better than his peers. If that worked, any company could build products as good as Apple's just by hiring sufficiently qualified designers.
Back in the main text he goes on to say:
But design is a definite skill. It's not just an airy intangible. Things always seem intangible when you don't understand them. Electricity seemed an airy intangible to most people in 1800. Who knew there was so much to know about it? So it is with design. Some people are good at it and some people are bad at it, and there's something very tangible they're good or bad at.

Monday, April 03, 2006

Scientific and Technological Development = Removal of Constraints

Some fairly quick sketching...

I think that scientific, and particularly technological, development can be seen as the removal of constraints.

By that, I'm not trying to make some broad rhetorical statement about science and technology giving humanity greater freedom.

What I mean is something more specific. I'll talk about technological development first, then consider the case with science.

When you want to undertake a task, such as being able to communicate with others, there are various constraints that apply.

When communicating by unaided voice, you're constrained to communicating to those within earshot.

Communication technologies remove constraints (and can also add constraints, of course). For example, a telephone removes the distance constraint to some extent, and SMS and email remove the constraint that the receiver be currently available to take your communication.

There is a kind of platonic ideal for each type of task, free of all of the possible constraints that could apply to it. Constraints restrict this platonic ideal.

I'm not suggesting that the platonic ideal is a real thing, but just an ideal that we might be able to get very close to but never reach.

For communication, it is being able to instantaneously, perfectly and effortlessly transmit your information to anybody at any time, or something along those lines.

As a particular type of technology develops, constraints are relaxed or removed and, as a whole, the technology moves towards the ideal.

I think the utility of this picture of technological development is that it gives you a good way to think about future technological development.

You can think about what the unconstrained version of the task would be like, and thus what the various types of constraints are, and thus the ways that the current state of the technology can move, and what it could end up like.

I get the impression that many people see it as a mystery as to where 'technology could go next'. But the answer is: look at what constraints there are and could potentially be removed.

The story with scientific development is similar -- the more you understand about the world, the more you can reason about it, and the more powerful your technological capabilities can become.

Ultimately, it's about realising our intentions and removing potential constraints to that.