The Onion reports:
ABUJA, NIGERIA—At a celebratory press conference Monday, President Olusegun Obasanjo announced that Nigeria's troubled but oil-rich city of Warri has been chosen to host the 2008 Genocides.
I don't know whether anyone has brought this up before, but since I haven't come across it, I thought I might as well jot it down here...
People have often wondered about the biological function of the kind of deep, lasting grief that accompanies events such as the loss of a child. Why is it so debilitating, and why can it be so debilitating for such a long period of time? Wouldn't it make more biological sense to let the person get on with their life and the business of propagating their genes?
In Steven Pinker's book How the Mind Works (which would more accurately, but with less impact, be called "An Overview of What We Currently Know About How the Mind Works") he notes that such grief seems to act as a kind of deterrent -- basically, knowing how awful it feels acts as a major deterrent to any behavior that might lead to those kinds of circumstances arising.
It occurred to me that perhaps the deterrence is not just for you -- but for those around you, and that this factor could help explain why grief is so highly debilitating (if in fact this requires any separate explanation). Not only do you experience how awful it is, but others can also see how awful an effect it has on you. And since, in the times when evolution was at work, we lived mostly in tribal groups with large numbers of relatives, kin selection would explain how this kind of grief could evolve to serve both these purposes.
Perhaps the level of grief is tuned to have just the right impact: large enough to have a strong precautionary effect on the other kin members, even at the expense of really debilitating the grieving person? (i.e. this is the level of grief that results in the greatest chance of those genes propagating, because the fewest people carrying the gene make the mistake that led to the grief).
Some people (and I don't think they're the only ones who think this way) talk about representations as if something is only a representation if it is in some fairly explicit form that can be read by some process which then interprets its meaning.
For example, consider the rules of grammar that our brains respect (to a certain degree) in the production of speech. Under this view of representation, there are two options for how these rules exist in our brains: either these rules are represented in the brain, or they are in some sense built-in to the brain without being represented. In the latter case the system in some sense knows the rules, but it doesn't explicitly represent them -- the brain just works in a way that respects the rules, just as a calculator doesn't explicitly represent the rules of arithmetic, but just operates in a way that respects them.
This distinction between things that are represented and things that are 'built-in' sounds fine on the surface, but I'm suspicious of there being a real distinction there. Certainly the distinction makes intuitive sense. It's easy to imagine an explicit representation that is to be read off and used. And it's also easy to imagine the other case: a system with no representation that just works in a way that respects the rules. The thing is, though, that a representation is represented by some encoding scheme. You can represent something in an unlimited number of different ways (of course, different ways will have different properties, such as how long it takes to read the representation, how much space it takes up, etc), and in each and every case what is important is that you have the right means of interpreting the representation.
And amongst this unlimited variety there are encoding schemes that represent the thing more explicitly and those that do so less explicitly. So my question is: is there just a spectrum here, from very implicit representations right up to very explicit representations? And is it possible to make any hard divisions of this spectrum? I strongly suspect that it is not possible to make any hard divisions or distinctions.
The case where the brain simply respects the grammar rules but does not explicitly represent them would then still be a case of the brain representing those rules, just implicitly. Taking another tack on this, I don't see how the brain can "know" the rules yet not in any sense have a representation of them. With the right means of interpretation we can still 'read' the implicit representation (not that we, as people trying to understand how the brain works, necessarily have the means to determine this means of interpretation). In other words, some aspect of the system's organisation implicitly encodes the rules, and the way that that organisation interacts with and influences the behaviour of the rest of the system constitutes the interpretation of those rules.
It may seem a bit of a stretch to call such implicit cases 'representation'. But if what I'm suggesting is correct, then there's no hard distinction between "explicit representation" and "implicit representation", and thus the common meaning of representation is making a hard distinction that doesn't exist. Thus, if we are looking for technical accuracy (and of course we are often not) then we either have to modify the common meaning of the word or find another term that can have a more accurate meaning.
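To make the explicit/implicit contrast concrete, here's a toy sketch -- entirely my own construction, not a claim about how brains or real grammars work -- of the same trivial "add an s" pluralisation rule encoded two ways: once as an explicit representation read off by a generic interpreter, and once baked straight into the control flow.

```python
# Toy contrast (my own construction, not from any real grammar system):
# the same rule, "add 's' to form a plural", represented two ways.

# Explicit: the rule exists as data, and a separate process interprets it.
RULES = {"plural": lambda word: word + "s"}

def apply_rule(name, word):
    """Generic interpreter: look the rule up, then apply it."""
    return RULES[name](word)

# Implicit: no separate rule to read off -- the system just behaves in a
# way that respects the rule.
def pluralize(word):
    return word + "s"

print(apply_rule("plural", "cat"))  # cats
print(pluralize("cat"))             # cats
```

The behaviour is identical; what differs is only how explicitly the rule is encoded and where the 'interpretation' happens -- and between these two extremes you can imagine any number of intermediate encodings.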
"The robot, InsBot, developed by researchers in France, Belgium and Switzerland, is capable of infiltrating a group of cockroaches, influencing them and altering their behaviour."

I think this is going to be big. I can just imagine it: Roach, by Calvin Klein.
"The third stage, undertaken by the French Centre for Scientific Research in Brittany, was to isolate the molecules that give cockroaches their smell -- to create a cockroach perfume".
"InsBot, which is green, the size of a matchbox and equipped with lasers and a light sensor, was developed by Switzerland's Federal Polytechnic School in Lausanne."

From the description it's obviously some sort of cyber commando roach with shoulder mounted laser deaaath raaaays.
"Other applications are also envisaged for the computer programs developed under the Leurre project. Guy Theraulaz, CRCA director of research, said it may be possible to build chicken-like robots that will be used to stimulate poultry."

That sounds a little raunchy. I don't know if I'd want to eat that at KFC. Wait a minute -- I see, my dirty mind was getting away from me a bit there -- it's some sort of robot that reads Shakespeare to the chickens.
"A lot of chickens don't move at all and die as a result. They need to be encouraged to run around. Robots could do that," he said.

Oh, I see. It's good to see the poultry industry making efforts to increase efficiency by utilising modern technology with their robo-chicken exercise instructors. "C'mon girls, let's move those bodies, kick those legs! We want to be 97% fat free. Yeah. Now some star jumps!"
Qualia is the mystery of mysteries. If the world really is just processes involving aggregations of atoms, then what kind of thing is the sensation that is the taste of an orange? What kind of thing is the visual image we see when we look around?
There are lots of other things we don't understand, but with other problems we usually have some idea of what an answer might look like. With qualia we literally have no idea.
It seems impossible that it could be the product, like everything else, of processes involving atoms. It must be some special sort of entity, we think. And there may be literally no way for us to understand it, or if there is, we may not be smart enough to gain that understanding.
For an abrupt change of context: imagine a tribesperson living in an untouched, remote area. They have never seen or heard of modern technology, and they are given a gameboy. They see the little person on the screen, they see them moving about an environment, and they see how they can control this person's actions. It must be some sort of magic.
They know of nothing that is even remotely like this gameboy, and they have absolutely no idea how any such thing is possible. They have no idea of how they could even begin to understand it. But we know it can be understood, we know there's no magic.
Could qualia be to us like the gameboy is to the tribesperson?
Yeah, Paul Graham gets it. He doesn't restrict his thinking to the space of what things ought to be about -- he actually thinks about what's going on. Why does a particular candidate win an election? The political attitudes of the populace, the things that influence those attitudes, and so on, of course. Well, at least that's what you'd expect it to be about, what it ought to be about -- but that's the wrong tree, argues Graham. The answer he suggests is very simple and easy to see if your view isn't blinkered.
Sure, people understand it's important (read the article to see what I'm referring to), but I think it's clear they don't appreciate just how important it is, how important it is relative to other issues -- after all look at some of the candidates the parties have chosen, as Graham points out, or the apparent lack of real effort to address charisma issues.
And let's put this into the right perspective: the US, the most powerful country in the world, lots at stake in presidential elections, huge resources put into the opposing sides. And: a simple notion that's easy to see if your view isn't blinkered, massively overlooked by all these people over many decades who've had huge stakes in these elections. Election results in the most powerful country in the world could've been different -- that's the size of those blinkers.
Richard Dawkins's 1998 review of Sokal and Bricmont's Intellectual Impostures. Pretty good stuff -- and interesting for his insights, not just what he says about the book. Right now I'm too lazy to try to describe what the book is about, but here's the first paragraph of the review:
Suppose you are an intellectual impostor with nothing to say, but with strong ambitions to succeed in academic life, collect a coterie of reverent disciples and have students around the world anoint your pages with respectful yellow highlighter. What kind of literary style would you cultivate? Not a lucid one, surely, for clarity would expose your lack of content.
Posted by James at 1:08 am
Goddamn it irks me when people try to be critical or insulting of something and dress it up in humour on the pretense that they "don't really mean it" -- where they want to mean it but be able to claim that they don't. Of course, you can say something and only half-heartedly mean it, and you can say something purely for the joke -- I'm only talking about the situation where someone is doing it because they want it both ways. Good, now that I've got that off my chest, I return you to your normally scheduled programming... :-)
More rough notes.
The whole is greater than the sum of the parts: if you lay two planks of wood on top of each other, they can hold more weight than the combination of what they could hold individually. Together you get more than you had separately; therefore, the argument goes, reductionism is wrong.
That argument is flawed because it confuses petty reductionism with reductionism proper. Are any laws of physics violated by the result of putting two planks of wood together? Obviously not. The result of putting the two planks together is fully explainable at the lower level of the laws of physics.
While that might be the reason why the argument is flawed, I don't think it'll always convince people. We arrive at conclusions through chains of reasoning, and even if our conclusion is shown to be wrong, if we still think our chain of reasoning is valid there's a good chance we'll still think the conclusion is right, too. I think there is one such chain of reasoning for this matter, and that this flawed reasoning comes about because of the nature of language and thought. I'll talk about it now.
It really can seem that we have something here that is 'greater than the sum of the parts', because we have something -- the 'strength' of the pieces of wood -- that is some amount when the pieces of wood are separate, and yet is more than twice this amount when they are combined. Doesn't it seem that we have in fact gained something here which wasn't there before?
The problem is that in thinking this we are reifying the 'strength'. But wait a moment, doesn't reifying mean "to regard (something abstract) as a material or concrete thing", and isn't the strength of the planks a real thing? It is a real thing, but we must be careful with what we mean by 'real'.
The strength is real, but it is not a substance, such that we have created some new amount of this substance when we put the planks together. Strength is a real property, but it is not a "thing" in itself, and you might describe it as being both "real" and "abstract". It is the product of a number of factors, such as the type of material, the structural arrangement of the item, etc. Quantity of the property (strength) is not simply a function of the quantity of the things making it up.
So what exactly is causing the problem here? Seeing 'strength' as a "real" thing, and thus concluding that we have more of a "real" thing when the planks are combined, and thus that reductionism is violated. The root problem is in considering that "real" can only mean a real "thing".
The "strength" of something is simply a consequence of the brute physical details of that thing -- there is no thing that is "strength" over and above those details. Those details have real consequences -- such as how much weight the thing can carry before it breaks -- but to say there is some actual extra thing called 'strength' responsible for those properties is a mistake. It is confusing a label in our heads for a thing in reality, whereas that label is really a description of reality.
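One way to see "more than the sum" falling straight out of lower-level laws: the textbook section-modulus formula for a rectangular beam says bending strength scales with thickness squared, so two planks bonded into one thick beam beat the same two planks merely stacked. The dimensions below are hypothetical, chosen only to make the arithmetic visible; this is a toy illustration, not structural engineering advice.

```python
# Toy illustration: section modulus of a rectangular beam is b * h^2 / 6,
# and bending strength is proportional to it (the material's yield stress
# is factored out). Dimensions here are hypothetical.

def relative_strength(width, thickness):
    """Relative bending strength of a rectangular beam (section modulus)."""
    return width * thickness ** 2 / 6

one_plank = relative_strength(width=10, thickness=2)
stacked = 2 * one_plank                               # unbonded planks: strengths just add
glued = relative_strength(width=10, thickness=4)      # bonded pair acts as one thick beam

print(stacked, glued)  # the glued pair is twice as strong as the stacked pair
```

No new law of physics appeared when the planks were joined -- the h-squared term was there all along; the 'extra' strength is just what that same low-level formula gives for the combined geometry.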
More rough notes, written more for my benefit, as an aid to organising my thinking, than as a genuine attempt to convince anyone of anything. It's all for me me me -- it's not for you *.
As Richard Dawkins has noted, reductionism is uncool. Even the name has a negative tinge: it's reducing things to something less than the original.
It is one of those concepts that everyone thinks they understand, because it seems so simple and self-evident: reducing things to their parts. Except, this self-evident view is wrong.
It's describing what Steven Weinberg has called petty reductionism. Reductionism proper -- what Weinberg has called grand reductionism (he borrowed both 'petty' and 'grand' from the language of criminal law) -- is the view that reality is the result of fundamental, universal laws, and that all the systems and apparent 'layers' that we see are simply the results of the operations of these laws.
The following is a short list of the reasons, it seems, people make flawed arguments against reductionism:
Just jotting down a minor e.g. of language/thought stuff. Someone forgot to mention something that they should have mentioned to someone else, and even though they know they forgot to say it, they say to the other person: "Oh, I might have forgotten to mention to you that...".
An article in Slate investigates:
Oh boy oh boy oh boy oh boy. If this blog weren't PG-13 rated :-) I can tell you that previous sentence would have contained fewer three-letter words and a lot more four-letter ones. So I was typing up this long blog post (not this one here, but a different one), and a few minutes out from my last save I hit some combination of keys and -- bam -- my firefox window was gone (I have no idea what keys I pressed, but I'm pretty sure it didn't involve 'ALT-F, X').
Isn't it about time we had a universal undo capability? Along the lines of "if you can do it, you can undo it". So if I accidentally close down firefox, I can bring it up exactly how it was -- same windows, same window state (e.g. scrolled to the same position within each window, and so on). And the same for any other application -- and for any other action you can do, inside or outside of an application.
I know there are technical issues here. At the same time, I think a lot of them are the legacy of software and frameworks that we're stuck with, for the time being at least (yeah, I know that description is a bit vague, but it's too much effort to try to make it clearer), and I think there are places where, with a bit of effort, the scope of undo could be extended. But anyway, it doesn't seem worth my while to get into more technical details about this, so I'll leave it at that.
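For what it's worth, here's a crude sketch of what "if you can do it, you can undo it" might look like in miniature: snapshot the whole application state before every action, so any action can be reversed. The UndoableApp class and its API are hypothetical, invented just for this illustration -- a real system would need cheap incremental snapshots rather than full deep copies of everything.

```python
# Minimal snapshot-based undo sketch (hypothetical API, for illustration).
import copy

class UndoableApp:
    def __init__(self, state):
        self.state = state
        self._history = []

    def do(self, action):
        """Record a full snapshot of the state, then perform the action."""
        self._history.append(copy.deepcopy(self.state))
        action(self.state)

    def undo(self):
        """Restore the state exactly as it was before the last action."""
        if self._history:
            self.state = self._history.pop()

app = UndoableApp({"windows": ["firefox"], "scroll_pos": 120})
app.do(lambda s: s["windows"].clear())  # oops -- accidentally closed everything
app.undo()                              # bring it back exactly how it was
print(app.state)  # {'windows': ['firefox'], 'scroll_pos': 120}
```

The design choice worth noting is that because every action goes through `do`, no action can escape being undoable -- which is exactly the property the blanket approach buys you, at the cost of copying a lot of state.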
That's a big relief: I ended up passing my dive medical. As I wrote last week, the rock-in-the-head incident might've jeopardized it, but when I went back to the doctor today he'd managed to get the hospital records and he was satisfied that there was unlikely to be any problem. So tomorrow I'm going to book the dive course for sometime in the next few weeks...
I had a dive medical this morning. I've wanted to go diving since I was a kid. The results: inconclusive.
Back in year ten (12 years ago) at a school athletics day I was down on the oval walking along talking to a friend, Aaron Bond. There was this big crack, and then I was picking myself off the ground and there was a lot of blood. I had been hit in the head by a rock someone had thrown (though it wasn't directed at me).
I ended up spending a few hours in hospital after which I was able to go home. I think I was lucky that the skull is fairly thick where the rock hit, and I didn't get off too bad -- pain and headaches for some days afterwards and a fairly minor scar were about it.
A few days afterwards, though, the hospital rang and said I'd gotten a hairline fracture, but this apparently didn't require any treatment (my dad answered the call, so I'm not sure exactly what they said).
Apparently past skull fractures can be an issue for diving, because if there was any brain damage, the increased pressure underwater can trigger epileptic fits. So the doctor has to try to get my hospital records from 12 years ago. I don't know how good his chances are, and if he can't get them I'll apparently have to get some tests done -- I don't know the nature of these -- to rule out any potential problems.
Not a huge deal if it ends up that I can't go diving, though it does irritate me that this has come about because someone was doing something stupid that hurt others.
It didn't occur to me till I saw a news report on Sudan this evening - but jeez, isn't the term 'ethnic cleansing' a pretty big bloody euphemism? I'm surprised that it gets used in such a matter-of-fact way in news reports, as if it was a fair way of describing the situation. Oh, you know, it's only a bit of cleansing. Lucky we're cleansing things, and getting rid of all that yucky, dirty, disease carrying stuff!
You've probably heard Alan Kay's famous quote "The best way to predict the future is to invent it." I was looking for a reference for this, when I came across some more quotes from him, some of which I thought were pretty good:
More rough notes...
Intuition often gets talked about as if it was just a single type of thing which, depending on the context, is considered either a good thing or a bad thing. I think there's two different forms of intuition, one of which is more reliable than the other.
The reliable type is derived from a large mass of solid experience or knowledge. The ins and outs of the situation the intuition applies to, the subtleties, the important factors, the irrelevant details -- these are all burnt deeply into your brain and, put simply, the intuition corresponds with the way the world is. I will call this learned-intuition.
The other type is derived from the innate and learnt heuristics our brains apply to perceive, and reason about, the world. There is much variation in these heuristics, and comparable variation in their reliability, but being heuristics they are all ultimately shortcut replacements for considered thinking about the situation. This makes them generally less reliable than learned-intuition and -- appropriate!* -- considered thinking. I won't argue this point further in this post (though it's definitely something I want to talk about in the future), and you may not agree with me on it. I will call this type of intuition heuristic-intuition.
Accompanying both forms is a "gut feeling" that tells you the intuition is correct, though often you won't be able to put your finger on why. People often talk about intuition as if the reliability of both learned- and heuristic-intuition were the same (this and this give some sense of it) -- or rather, they fail to make any distinction between them.
Being aware of the differences between these two forms of intuition means being aware, when an intuitive view comes to mind, of which type it is (I think it should in general be fairly easy to tell if you think about it), and consequently how much trust you should put in it, and consequently whether you should ignore its judgement and instead bring in considered thinking.
Some excerpts from Anthony Daniels in The New Criterion (via Arts & Letters Daily). I'm posting this mainly because of my interest in how people tend to assume, unless it is unequivocally shown otherwise, that psychological factors are responsible for other's illnesses -- which this illustrates...
When I started the PhD I wanted to use a version control system to keep track of all the PhD-related files. So I installed CVS. But now I've come to the view that it doesn't suit my needs very well, and I'm looking into other options. And I thought the first step there would be to get it clear what I'd like a version control system to do.
When I'm writing something, especially if it's something that takes me a lot of effort to put into words, I tend to be constantly creating new files. Passages of text get cut and pasted all over the place, and files frequently get renamed. For various reasons, CVS doesn't seem suited to this kind of situation.
As a bit of a digression, here's a little on how my writing process seems to go. At any point in the process what I've already written will have tried out certain angles, and explored certain connections or aspects of the concepts, and at certain points it makes sense to start a new file.
I might start a new file if it seems more productive to explore something a bit different (because it's more of a "fresh start" and seems to keep things a bit clearer).
Or I might start a new file to "start over again", as what I've already written might give me a better sense of how "it all fits together", and it's usually much more efficient to just start with a new file than to go through and edit what I've already written.
I put some information about the chronology of files into their filenames, which helps manage things. And at some point I'll go through the earlier files and see if there's anything in them that I should take out and use in the later files (though this description makes it sound a lot more straightforward than it is in reality).
Back to version control systems. Here's what I'd like one to do (keep in mind this is just a wish list):
I wonder, if you were completely alone for a day, could you suppress all verbal thoughts? Just how dependent on them are you, and how hard is it to shut them down? Would you just be drifting, or could you make decisions and carry out everyday tasks?
I'm really not sure what I'd expect. I'm trying to think whether I've ever been 'internally quiet' for extended periods of time, but I can't recall how long I've ever gone like that.
And could you spend a day interacting with other people, doing tasks and verbalising on the outside, but not on the inside?
I've decided that I might as well start typing up quite rough notes to this blog, rather than waiting till I have something a bit more solid to say and concrete examples to illustrate the concepts with. So here goes...
In the current climate, saying that you understand a viewpoint is tantamount to saying that you approve of it. The truth of the matter is, of course, that it's possible to understand a viewpoint that you are very critical of, and which you strongly disapprove of.
Unfortunately, in the current climate it is assumed that statements are always expressing opinions. That is, statements of fact and statements of opinion are assumed to always be coupled.
The standard way of expressing strong disapproval of something is to say that it's "crazy" or "illogical" or that you "don't understand it". I think the reason understanding is equated with approval is the belief that if something can be understood then it must "make sense".
And behind this view is, I believe, the notion that a view is constructed from some logical chain of reasoning, such that an incorrect view (this is why you strongly disapprove of it) is the result of poor logic. I've previously given some reasons why I think this view is wrong.
As I gave some explanations for then, I think views are more the result of perception and underlying assumptions than of the logic that goes into them.
Once again, some fairly quick and rough notes on how it seems to me people think...
A month or so ago I was watching The Movie Show, and they had an interview with the actor Philip Seymour Hoffman about the movie he was in, Owning Mahowny. In the interview, Hoffman commented on acting.
The way I remember it, he compared being an actor to being one of those people who balance spinning plates, because it involves mentally keeping track of a lot of things at once as you go about the performance.
One of the people I was watching it with said that they didn't think that was necessary for acting -- basically, that you could just do it as a natural expression of a character, that once you were familiar with the character and role, it should, to some extent, flow out from you.
Their point was, there's no need for acting to be so "calculated". And the fact (or at least it seems to be a fact -- I don't know it myself) that some of the best actors, past and present, have had a more "natural" style seems to support the point that it isn't necessary.
I disagreed with that point because... because... well, I couldn't put it clearly into words then, but I think I can now. The thing is, while some people might not need to be so calculated in their acting, others may well need to be. Their nature, the way they think, the way they go about doing things -- any or all of these may simply not be suited to a particular style of acting.
You might argue that a natural style is better than a more calculated style. The issue here, however, was whether a more calculated style is ever necessary. (In passing, I would say that I think the view that the use of "natural talents" makes something superior is a myth -- but that's something I'll have to talk more about some other time.)
Now, to the point of this post. I believe the mistaken view that a calculated style is unnecessary stemmed from thinking in terms of the concept "acting" without bringing "real world" considerations into the thinking. Rather than thinking through this issue of whether the calculated style was necessary in concrete terms, it was thought through in abstract terms.
Why was it thinking in terms of the concept "acting" and not concretely? I'm not sure how best to explain this. Perhaps the following might help. It's meant to be analogous to the stream of thoughts that might've gone through the person's head -- just illustrative of the general nature of those thoughts:
IT Myth 6: IT doesn't scale: Virtually any technology is scalable, provided you combine the right ingredients and implement them effectively
Here's some quotes:
The Guardian reports:
Posted by James at 12:14 am
I've just discovered that I can use my laptop while lying down in bed. I feel like... when they... um, like one of those dudes who made a major discovery.
To support the laptop I just bend my knees up a bit, so they're pointing up to the ceiling and so the soles of my feet are flat on the bed. The front edge of the laptop sits around my belly button area, and the back edge sits up towards my knees. And to see the screen clearly, I have to open it up wider than usual.
This is a revelation. Now when my back gets sore I'll be able to keep doing stuff at the computer, and I'll be able to say to people "I've just got to lie down and do some work".
I'm just noting this example down for my future reference. I want to collect examples of words that suggest what they are about, but misleadingly so. (The term 'stretching' is an example: while it implies that you're stretching your muscles, what you're really doing is getting them to relax -- people hurt themselves stretching because they think it really is about stretching the muscles.)
Clay Shirky, talking about radio:
Just a short quip:
As we all know, the downside of microwaves is soft pie pastry and other such sogginess. If you want crispness you can heat up the oven, but that takes too long if you also want a quick result, and a toaster oven is not much better as an alternative. I've heard of microwave ovens with a browning element (so they're a mix between a microwave and a toaster oven, I assume), and though I'm not sure how much better they are, I can't imagine that in the 3 1/2 minutes the pie is being microwaved the element would make the pastry that much crispier.
So I was thinking: could you get oven-style crispy pastry at microwave speed? More specifically, something that could make the pastry on a microwaved pie crispy? No having to wait for a body of air to heat up or anything like that -- just zap and it's crisp. Could you instantly generate a blast of heat and use that? Maybe you quickly expose the surface to flames (if this apparatus was part of the microwave, you wouldn't want your pie still wrapped up in paper towels!). I don't know enough to say what the alternatives are, whether any of them would be technically feasible, or whether any of the technically feasible options are practically feasible (economically, safety-wise, etc), but I can tell you that the idea is high up on my list of useful-devices-I-would-like, right up there with time machines and personal jetpacks :-).
I wish I was in Sydney so I could go along to this event, 'Are you conscious right now?':
Posted by James at 10:54 pm
Yesterday I bought Boards of Canada's most recent album (where 'most recent' actually means '2 years old'), Geogaddi. My reaction to this sort of music always seems to go from 'I'm not that impressed' to 'yeah, this isn't bad' to 'man, this is really good'. It's taken about four listens to the album to transition to the second reaction, and I'm currently on my way to the third.
Anyway, the original motivation for this post was the vocal sample
the past inside the present
which is in the album's second song, "Music is Math". I think it's a pretty cool sample, but I wanted to jot it down because I think it's of relevance to my PhD work, which is about the nature of information.
Consider the air particles vibrating against your ear drum (or whatever it is they vibrate against in your ear), "conveying" information about the sound source: the past (what happened at the sound source a moment ago) inside the present (the vibrations against your eardrum).
Not that I think that the line has any major relevance, mind you. (In fact, it hints at the view I think is wrong - that things carry something called information. That's why I put "conveying" in quotes earlier).
Posted by James at 6:42 pm
Thanks to Steven Livingstone for sending me these great visual illusions. They all involve scenes that look static but are actually continuously changing in very hard to detect ways. I thought the Workroom one was excellent, because it seems so obvious -- so there in your face -- once you know what's going on (I had to be told), yet so hard (at least for me) to see. I really like things like that which show up the "seams", so to speak, in our perception, which we are usually oblivious to. Not because there's any fun in belittling our assumed perceptual powers, but because they're illustrative of a truth I think we ought to be aware of.
1) What's a Homestar Runner? 2) What's a Wiki? 3) Why am I so excited?
I received an e-mail containing this quote in its signature:
Most people believe that physicists are explaining the world. Some physicists even believe that, but the Wu Li Masters know that they are only dancing with it. - Gary Zukav*
I've bought every Beastie Boys album except their first, and I would like to get their latest, except for the stupid copy-controlled CD it comes on. If you haven't heard of copy-controlled CDs, they don't meet all the requirements of the CD standard, and thus only work on certain players and -- for some reason I'm not totally sure of -- can't be copied or ripped into MP3s. If you're after more details, there are plenty of pages on the topic out there.
As a species, we have an arrogance that makes us believe our intuitive, everyday conceptions of things must be right, that they are not to be questioned, and that they are only to be relinquished when we are forced to do so*.
The Scotsman reports:
Monster waves that can sink a supertanker and were once dismissed as a myth abound in the Earth’s oceans, scientists have learned.
Satellite images identified more than 10 individual freak waves more than 82 feet high in just three weeks.
Until such evidence became available, most scientists were sceptical about freak waves. Statistics showed that such extraordinary sea conditions should only occur once every 10,000 years.
I'm sitting there eating this piece of meat, doing what we all do at this time, and wondering exactly what cow (or pig, or chicken, etc) did this come from? What did it look like, where did it live, and what did it think about George W Bush? I know that this piece of meat once came from a living animal, but somehow my brain just can't concretely grasp that. That brown shape there is just too abstracted from the notion of a particular living creature. If it'd been the olden days and I'd killed the animal and chopped it up myself, or seen this done, it might be more real. But by and large, pieces of meat are, to my brain, reddy or pinky coloured blobs that come in little styrofoam trays from the supermarket.
Not just what animal did it come from, but exactly where in said animal was this piece of meat located? I mean, I know that rump steak, for example, means meat from a cow's arse (and I'd like to see more people facing this reality when ordering meals and telling the waiter they'd like "the cow's bottom in a delicate wine sauce infused with aromatic herbs"), but where exactly in the 3D object that was the cow was this piece of meat located, and relative to the rest of the cow, which way was it oriented? I can imagine this cow grazing around in a paddock, going about its business, and the image is normal except that the cow is slightly translucent and I can see this red blob there inside it, that blob being the thing which in the cow's future is destined for my stomach.
Actually, I can't imagine that. I can imagine being able to imagine it, but I can't actually bring such a picture into my mind. So, I'm thinking, what if we could? It strikes me that you could use this idea as the basis for some simple little pictures or animations. Like a cartoony picture that juxtaposes a scene of a family at home eating their steaks, next to a scene of cows in a paddock, where you can see the positions of all those steaks within each of their owner-cows. Oh -- and here's a real visual possibility -- you know what they say about meat pies? Oh, it's crude, but they say it's true: all lips and assholes. What about a picture of a meat pie next to pictures of the 30 or so (just how many cows is the average meat pie sourced from? I'd like to know) cows that contributed to it, highlighting those parts doing the contributing?
Intimacies is, in the words of its developers, a -- for a good mouthy workout -- digital epistolary novel, or DEN, for an equally ugly but briefer name. Though you might not be familiar with the term, you're probably aware of the epistolary novel form: they're novels consisting of, most commonly, a correspondence of letters between characters, through which we see the story's narrative unfold. They can also consist of such things as diary entries and newspaper clippings -- see Wikipedia if you want more details.
Intimacies presents a story through a series of e-mails, web pages, and instant messages that the reader can view through a program that is meant to simulate the interfaces of our e-mail, web and instant messaging clients.
The Brights are looking for an icon, and here's my suggestion. Unfortunately I didn't find out about the call for submissions until recently, and they've already shortlisted six candidate designs. I know a lot of effort must have gone into those candidates, but I'm afraid that to me they all feel very unsatisfactory, which is why I had a go at one of my own. The following is the rationale behind the icon:
To me, bright means freedom. It means existence and thought unrestricted and unconstrained by conformance to the supernatural, existence and thought free to develop and grow. Thus, the arrows represent freedom to grow in all directions. The arrows also represent the three spatial dimensions of the universe we live in, conveying a sense of the naturalistic nature of our reality.
Some notes on the graphic design. That image is just meant to convey the basic idea of the design. Here are some ideas for variations. The common line thicknesses and lengths could be varied. Perhaps shading could be used to make it look more three-dimensional, though perhaps that would ruin the simplicity. While the design essentially reflects a three-dimensional coordinate system, I wanted to make it somehow unique and recognisable as a symbol in its own right, and that was the reason why I angled the lines as they are. Perhaps there are other more effective ways of achieving this end.
I actually don't want to give it a name, but it looks like all the submissions have names, so if I must, perhaps "All Directions" might be suitable. (and I just hope this design isn't already used for a company logo or some such)
Posted by James at 9:38 pm
Posted by James at 1:42 pm
Learning a language is, at least for an adult, hard. The best thing, they say, is to immerse yourself in the language, ideally by hearing it and speaking it every day amongst native speakers. But if this option isn't available, the closer you can get to it, the better. The New York Times reports (reg req'd) that the University of Southern California is developing a virtual approximation of such immersive environments.
The software has been designed to teach Arabic to soldiers, and its basic details are as follows. The game takes place in a realistic environment, modeled on an actual Lebanese village. The player can move their character around the village, and interact with computer-controlled villagers by speaking through a microphone. The computing system uses AI to interpret the player's vocal input and determine the villager's reaction. The player also has to control their character's body language, such as using an appropriate gesture when ending a conversation. The player is put in situations such as "establishing a rapport with the people you meet and finding out where the headman lives".
The article doesn't go into exactly how these details are executed, nor does it give any clear screenshots, but the concept is promising. Apparently versions of the system for other languages are planned (the next likely candidates are Dari, a major language in Afghanistan, and some Indonesian language), and the researchers behind it also see the potential for using similar immersive environments to teach other types of tasks - it should be interesting to see what comes from this.
I unfortunately don't have time to write this up more than briefly, so I'll get straight to the point: when we call some property, such as the level of mercury in a thermometer, analog, what we are really expressing is that its level can change in increments smaller than we can perceive. As far as I know, the level of mercury in a thermometer can ultimately only express the current temperature discretely, since below a certain gradation-size accuracy would be lost to the random nature of the jostling which is causing the mercury to rise in the first place. That is, below a certain gradation-size the fluctuations in the level of the mercury would be due to the random directions of the movements in the jostling of the atoms rather than to changes in the degree of excitement of the jostling. The thermometer seems to be analog because it seems to change in a smooth fashion, with no visible gradations. Similarly, many things that seem analog are in fact fundamentally discrete -- record grooves, film etc.
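To make the idea concrete, here's a toy simulation (all the numbers and units are made up, not taken from any real thermometer): the "true" mercury level tracks the temperature, but random jostling adds a fluctuation of a fixed scale, so two temperatures closer together than that scale can't be told apart from a single reading.

```python
import random

def mercury_reading(temperature, noise=0.05):
    # Toy model (made-up units): the level tracks temperature, but
    # thermal jostling adds a random fluctuation of fixed scale.
    return temperature + random.uniform(-noise, noise)

random.seed(0)

# Two temperatures differing by less than the noise scale:
readings_a = [mercury_reading(20.00) for _ in range(1000)]
readings_b = [mercury_reading(20.01) for _ in range(1000)]

# A single reading fluctuates over a range far wider than the
# 0.01-degree difference, so the thermometer effectively reports
# temperature only in steps of roughly the noise scale.
spread = max(readings_a) - min(readings_a)
print(spread > 0.01)  # the fluctuation swamps the signal
```

In this sketch, the "gradation-size" is the noise scale: changes smaller than it are drowned out by the jostling, which is the sense in which the reading is ultimately discrete.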
An interesting article over at Kuro5hin, whose thesis is that advertising imposes costs upon society and should thus be taxed accordingly. Here's an overview, which I've actually taken from the article's conclusion (and broken up into separate paragraphs by me):
...advertising imposes costs upon society and should be taxed accordingly. Some of these costs are well known, e.g. annoyance and loss of time, and can be accepted provided that consumers are voluntarily exposed to advertising. However a great deal of advertising is imposed upon the consumer, without any compensating benefit being offered.
In addition to simple annoyance, advertising spreads inaccurate and incomplete information which distorts consumers' purchasing decisions, causing a loss to consumers, and diverts valuable investment away from improvements in productivity and quality of goods. Advertising is not entirely bad, but it does not have to be to justify special taxation. The presence of a significant (uncompensated) harm from advertising is enough to justify the tax. Particularly since the revenues from the tax could be used to fund increased spending or to cut other taxes, such as those on labour and investment.
The government needs to generate revenue one way or another to pay for essential services e.g. national defense, the criminal justice system, healthcare. Raising this revenue by taxing bad things (i.e. externalities: pollution, advertising etc.) is likely to lead to increased efficiency. So even those of us who think taxes should in general be lower can still legitimately support this tax, provided cuts in other taxes accompany its enactment.
Read the rest of the article for the explanation.
Posted by James at 6:44 pm
Jon Udell on the sorts of search capabilities we ought to have on our computers:
On the Google PC, you wouldn’t need third-party add-ons to index and search your local files, e-mail, and instant messages. It would just happen. The voracious spider wouldn’t stop there, though. The next piece of low-hanging fruit would be the Web pages you visit. These too would be stored, indexed, and made searchable. More ambitiously, the spider would record all your screen activity along with the underlying event streams. Even more ambitiously, it would record phone conversations, convert speech to text, and index that text. Although speech-to-text is a notoriously imperfect art, even imperfect results can support useful search.
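The local-file piece of what Udell describes is easy to sketch. Here's a minimal, hypothetical version in Python (the tokenisation and directory layout are my own simplifications, not anything from the article, and a real desktop indexer would also handle file formats, incremental updates and ranking): walk a directory, build an inverted index mapping each word to the set of files containing it, then answer queries by intersecting those sets.

```python
import os
import re
from collections import defaultdict

def build_index(root):
    # Map each word to the set of file paths containing it.
    index = defaultdict(set)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binary or unreadable files
            for word in re.findall(r"[a-z0-9]+", text.lower()):
                index[word].add(path)
    return index

def search(index, query):
    # Simple AND search: files containing every query word.
    words = re.findall(r"[a-z0-9]+", query.lower())
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for w in words[1:]:
        results &= index.get(w, set())
    return results
```

The more ambitious parts of the quote -- indexing visited web pages, screen activity, transcribed phone calls -- would just be more feeders into the same kind of index.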
Posted by James at 10:06 am
Well, that title's a bit misleading - I just couldn't resist the sensationalist sound of it. This post is actually about space flight. Until yesterday, there had never been a trip into space that wasn't planned, funded and executed by a government body -- then SpaceShipOne broke past the atmosphere in a historic flight.
Posted by James at 5:54 pm
Yahoo News reports:
To encourage drivers to take more frequent breaks, the Texas Department of Transportation wants to set up free wireless Internet access at rest stops and travel information centers.
A very good article by Joel Spolsky on why the future is going to be tough for Microsoft and good for web-based software.
However, there is a less understood phenomenon which is going largely unnoticed: Microsoft's crown strategic jewel, the Windows API, is lost. The cornerstone of Microsoft's monopoly power and incredibly profitable Windows and Office franchises, which account for virtually all of Microsoft's income and covers up a huge array of unprofitable or marginally profitable product lines, the Windows API is no longer of much interest to developers. The goose that lays the golden eggs is not quite dead, but it does have a terminal disease, one that nobody noticed yet.
You've got the lycra getup, the underpants, the cool catchphrase, but you're stuck on the suitably cool name? No problem, my sparkling imagination has come to the rescue: The Advertiser. Sounds fcking hard to me. You don't want to mess with The Advertiser.
Apparently the phrase "Yeah no..." is an Australianism that has arisen in recent times. See the article for details. I was surprised when I saw that. I've always been pretty self-conscious about language use, and in the past I think I've been less prone than most to picking up new sayings and ways of talking, but reading that article made me realise I use "Yeah no" all the time. I had been aware of using the phrase, 'cause I seem to overuse it, but at the same time, I didn't have that much consciousness of it, if you know what I mean. Interestingly, I can't recall having heard others use it, though I'm sure that's because I haven't been on the lookout for it.
The Australian reports:
When bilingual people age, their brains decline much slower than those who are fluent only in their mother tongue, it was reported yesterday in the journal Psychology and Ageing.
...little compartments in laptops for storing things, like your set of earbud headphones.
Yeah, you can store stuff in your laptop bag, but laptop and bag often get separated by more than arm's reach, and a little compartment you can easily get at could be a lot more convenient than reefing around for the item in the bag. Has anyone tried doing this?
Here's a link to a New York Times piece from last year written by William Gibson. I'm not sure if I'm remembering correctly, but I think the piece was part of a group of related articles where famous people were asked what they'd like to see technology make possible. Gibson's answer was "some voodoo thing that unfailingly highlights [in a piece of text] outright lies, spin and misperception - in different colors".
I've been meaning to post this for a while, and I was intending to add a few of my thoughts on this matter -- on making the accuracy of claims more apparent, but that's something I'll have to leave for later.
I'd set my Mac to show me the outright lies in Pistachio, the spin in sky-blue Bondi, and the misperceptions in succulent Plum. Large swaths of news would probably be Plum, both that written by journalists and some large percentage of politicians' quotes. Perhaps relatively few Pistachio highlights would appear in the actual reportage, indicating direct mendacity on the part of a journalist, though it would be interesting to find out just how few, or how many.
BBC News reports:
Nokia is making a mobile that lets you write short text messages in mid-air... A motion sensor in the phone makes the lights blink in a sequence that spells out letters when the handset is waved in the air... A trick of human vision turns the sequence of letters into a message that hangs in the air... could be used by friends to talk to each other across crowded rooms or open-air concerts... could be used to play games overlaid on city streets, as a heckling device or a novel way to interact with other devices.
New ways to communicate are always interesting, no matter how trivial they may appear to the imagination. For the pervasiveness of communication, the complexity of our lives, and the way the two are intertwined always outstrip anything we can simply imagine.
This one's going to be a quickie... I just want to get the basic idea down, and I'm not worrying too much about expression...
This post is about how language can be used to obfuscate reasoning by implicitly categorising something as something it isn't.
An example of implicit categorisation that I came across prompted me to write this post. I was reading Philosophy: The Basics by Nigel Warburton -- which is not a bad book, BTW -- and specifically, the chapter on morals/ethics, and the part where he outlines neo-Aristotelian Virtue Theory. What this theory is, and my opinion of it, aren't important for this post and I won't be going into them -- I just want to comment on the way Warburton talks about the theory.
The text that's relevant to the example is this. Following the section outlining the basic details of Virtue Theory, is a section titled 'Criticism of Virtue Theory', in which he says "A major difficulty with virtue theory is establishing which patterns of behaviour, desire, and feeling are to count as virtues" and in elaboration of this says "the danger is that virtue theorists simply redefine their prejudices and preferred ways of life as virtues, and the activities they dislike as vices".
If we start considering that text, we can see that "establishing which patterns of behaviour, desire, and feeling are to count as virtues" (which I'll refer to as the "establishment problem") is "a major difficulty" and that this is a "criticism of virtue theory". As I will explain in a moment, when he refers to the establishment problem as a major difficulty, he implies that it is an inherent problem with the theory -- he categorises it as an inherent problem with the theory.
This is unfortunate, because the establishment problem is not an inherent problem with the theory. This ought to be apparent if we consider it for a moment. If virtue theory says that moral behaviour is based on virtuous behaviour, then we have the difficulty of determining what is virtuous. We need to determine how to turn its basic principles and tenets into more concrete courses of action. But this problem isn't particular to Virtue Theory.
It's a problem common to all moral frameworks - we have to determine how to interpret their basic principles and tenets. And regardless of the theory, we can do this well or we can do this poorly. It might be argued that this is harder to do (perhaps too hard) in some theories than in others, but I do not see why this applies to Virtue Theory (and Warburton does not seem to argue that it does).
Thus if this issue of interpretation -- the establishment problem -- is common to all moral frameworks, and is only a problem when it is poorly done, then it should be clear that it is a problem independent of any particular moral framework -- or, in other words, not an inherent problem with any particular moral theory.
If the establishment problem is not an inherent problem for Virtue Theory, then it cannot rightly be described as a "major difficulty" for it. I've claimed that Warburton implicitly classified it as one, and I now want to explain why I think his text does this. He didn't explicitly say it was -- or wasn't -- an inherent problem, but because he left the issue open and described it as a "major difficulty" in the section 'Criticism of Virtue Theory', the only way we can sensibly interpret his meaning is if we assume that it is an inherent problem.
In fact, there's a second example of implicit categorisation in that passage. By describing the establishment problem as a "major difficulty" under the heading 'Criticism of Virtue Theory', and by not elaborating on what a major difficulty means in so far as it is a criticism, it is implied that it is a major difficulty that can be counted as a criticism of virtue theory. But it is not necessarily the case that a major difficulty with a theory has to be a criticism of the theory. For example -- and to take one that fits the theme of morals -- there are major difficulties involved in trying to be a good person, but this doesn't mean those difficulties constitute a criticism of trying to be a good person.
I'm now going to try getting closer to the heart of this issue of implicit categorisation. In effect, a "subject" (the establishment problem) is something that could be interpreted in a number of ways, and is being referred to as a particular type of thing. However, rather than explicitly calling it that type of thing, it is implicitly referred to as that type of thing. The reference is implicit because there is no explicit link from the referrer to the referent, and no explicit statement about the nature of the link between the two -- that is, of what the referrer is saying about the referent.
The link and the meaning of the link are implicit because they are derived from the following: 1) the referrer being the heading of the section 'Criticism of Virtue Theory' and 2) the referent (the establishment problem) residing within that section and saying something negative about the theory, which leads us to think that we have grounds for criticising the theory (this is the meaning of the link).
We are drawn into this implicit categorisation because it is the only sensible way to interpret the writing. For if we were to categorise it differently to the implicit categorisation, it would mean that the point the writing was making would be wrong, and there is no apparent reason why it is wrong -- it seems like a fair and adequate point being made. This is, of course, a warning about the dangers of not looking beyond what things apparently seem to be, and about the importance of considering what things actually say rather than what you think they mean -- but those are other stories.