The exploitation of labour in poor countries is a horrible thing.
Companies shouldn't treat workers poorly: they shouldn't make them work in dangerous conditions or harass them, to mention only some of the misdeeds.
But what about low wages? People usually think that giving low-wage jobs to people in poor countries is exploitation. But I think that if companies treated workers properly (safe working conditions, and so on) but still paid the low wages, it wouldn't be exploitation.
As far as I can tell, low wages are not the negative for the workers that most people think they are.
A normal auction is aligned to the desires of the buyers and sellers. The seller wants to get as much money as they can, and the potential buyer who wants the item the most gets it, because they will pay the most for it.
With the types of work that go to poor countries, the situation is similar. The company wants the work done as cheaply as possible, and -- this is the crucial bit -- the people who want the work the most get it, because they are willing to do it for the smallest amount -- the lowest pay.
The pay rate that the most desperate people are willing to do the job for is too low for all the other people, who don't have as great a need for those jobs. For these other people, the minimum amount that makes it worth their while to do the job is a higher rate of pay. So if the company were to pay more for the jobs, there'd be a larger pool of people willing to do them. And more than likely, some or all of those most desperate for the work would miss out.
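To make the mechanism concrete, here's a minimal sketch in Python (with made-up "reservation wages" -- the minimum pay at which the job is worth a given person's while) of how a job that goes to whoever will accept the lowest pay naturally selects the people who need it most:

```python
# Each worker has a reservation wage: the minimum pay at which the job
# is worth their while. The more desperate the worker, the lower it is.
# (These names and numbers are invented, purely for illustration.)
workers = {
    "most desperate": 2.0,
    "quite desperate": 3.5,
    "less desperate": 5.0,
    "comfortable": 8.0,
}

def willing_pool(offered_wage):
    """Everyone whose reservation wage the offered wage meets."""
    return [name for name, minimum in workers.items()
            if minimum <= offered_wage]

# At the lowest wage anyone will accept, only the most desperate person
# applies -- the job goes to the person who needs it most.
print(willing_pool(2.0))   # ['most desperate']

# Raise the offered wage and the pool widens; the most desperate person
# is now competing with everyone else, and may miss out.
print(willing_pool(5.0))   # ['most desperate', 'quite desperate', 'less desperate']
```

The point of the sketch is just that raising the wage doesn't target the extra money at the neediest -- it enlarges the pool of competitors for the same job.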
I hope you agree that that would be a bad thing.
But why do companies have to get the work done as cheaply as possible? Couldn't they pay higher wages, but still give the jobs to those who are most desperate for them? It would be great if you could find a way for this to work in practice, though I doubt there is one.
Sure, some companies could afford to do it, but most would end up being put out of business by competitors who found other people willing to work for low wages. These competitors would thus be able to sell their products at a cheaper price, and would end up winning out in the marketplace. There are definitely more subtleties than this, but I don't think anything else can really override this basic reality.
Thursday, May 29, 2008
Wednesday, May 28, 2008
The exclusive focus on the functional capabilities of software doesn’t just overlook usability, it also overlooks affordances.
You can imagine Tim Berners-Lee circa 1990 telling a hypertext-illiterate friend about this World Wide Web thing he'd just invented:
"Have you seen my new invention?"
“What is it?”
“There's this special language, called HTML, which you can write documents in. It allows you to put these things called 'links' into your documents, so that when you view them, you can click on bits of the text -- the links -- to load up another document stored on the network.
Oh, and to view these documents, you need to use a separate program called a browser.”
“Why would I want to use that? With <my favourite document editor> I can read and write documents, and with the programs I already use it’s easy to load another document off the network.”
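For what it's worth, the kind of document Tim is describing needs nothing more than this (a made-up example, not any actual 1990 page):

```html
<!-- A minimal hypertext document. The <a> tag is the 'link':
     clicking the highlighted text loads the document at the
     address given in href. -->
<html>
  <body>
    <p>For more on this, see the
      <a href="http://example.org/other-document.html">other document</a>.
    </p>
  </body>
</html>
```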
The fact is that there's not a single functional capability in the WWW that can't be achieved in other ways; in fact, the web had (and to a certain extent still has) less functionality than is available by other means.
What it does have is different affordances.
I was introduced to the notion of affordances by Donald Norman’s book The Design of Everyday Things. It’s about the design of the everyday objects you interact with, like door handles, alarm clocks and car steering wheels. Why are some easy to use and intuitive, and others awkward and frustrating? Norman looks at how designs do and do not match with our tendencies, and our capabilities for perceiving and interacting with the world.
I’d highly recommend it to anyone, even if the topic isn’t something you’re specifically interested in. It’s a real eye-opener, and its subject matter is relevant to everybody’s lives.
A hinged door that just presents a flat metal panel at hand height suggests that it opens away from you by pushing it, while a door with a metal bar in the same position suggests that you grab hold of it and pull. These designs exhibit affordances – they are suggestive of function.
In general, I’d say you have an affordance when properties of an object or system shape and influence the ways that you use it. This is a bit broader than the original meaning, but I think the term still fits.
Let's go back to the World Wide Web. Links are easy to embed into documents -- you can embed them seamlessly into the text of the document, rather than having to figure out where to put them in relation to the flow of the text -- and it's easy to use a link. You just point at it, click the mouse, and the linked page appears.
Without links, you'd have to find another way to describe the location of the other document, and then the user would have to either cut and paste that location or remember it, and then go to whatever program they would use to access it. And they wouldn't have any forward or back buttons.
In principle, there isn't a huge difference between these two alternatives. But in practice there is. We’re all very sensitive to how much effort it takes to do things. It doesn't take much for the effort to exceed the threshold where we just won't bother. Trivial amounts of extra effort can be the straw that breaks the camel's back. Yes, “in principle” we could make the extra effort if we really wanted to. But the amount of effort involved means that in practice we don’t want to.
So the web encourages linking and jumping between documents. Blogs wouldn't make sense unless this was easy to do.
Or take SMSs or Twitter messages (not that I’ve used Twitter myself, I've just read about it). You could in principle write exactly the same way you would in emails or blog posts, and just split the text up into multiple SMSs or tweets, but you don’t. SMSs and tweets have different sorts of affordances from emails and blog posts, so you write in different ways.
That’s the thing about affordances. You can have multiple different systems that, in principle, you could do exactly the same things with, but you don’t, because they have different affordances.
Tuesday, May 27, 2008
Criticism should be a positive thing. Not personal attacks or trying to put people down. It should be about identifying opportunities for improvement. With that in mind...
IT people have a tendency to think like this:
Pieces of software have a function. Programming languages are for writing programs. Utilities like ‘ls’ or ‘dir’ are for giving you information about the file system, as are graphical file browsers like Windows Explorer. Comparing programs means comparing their functional capabilities.

But this way of thinking overlooks usability and other "non-functional" (which is a bit of a misnomer) issues.
(Many times, I've heard people dismiss programs or programming languages because other programs can already do the same thing.)
Sure, you may be able to do the same things, but can you do them as easily? If usability weren’t a factor, people would be happy to write all software in assembly language.
The notion of a language being ‘easy’ (or ‘hard’) to use is itself a simplification. Easy or hard for what? There are always trade-offs of some sort. Easy for systems programming? Easy for writing simulations? Easy to write? Easy to maintain?
'Easy' or 'hard' only makes sense in the context of using a particular program for a specific task a person is undertaking. There can be a complex set of factors involved, including the nature of the task and of the person doing it. Here are some basic examples.
Comparing information is easier when the pieces are closer together. The further apart they are, and the more actions like mouse clicks you need to perform to navigate between them, the tougher it is on your concentration. When you're thinking and your train of thought moves, you want to be able to nimbly adapt the information display, such as changing the ordering of the information from alphabetical to chronological.
Time is an important component. We have limited ability to keep track of things over time. Getting the pieces of information in quick succession is quite different to getting them many seconds apart.
Why are IT people prone to thinking of computer systems in this way? Well, in part, it’s nothing specific to the nature of IT people. I think we all tend to see phenomena in ‘functional terms’, and tend to overlook the other aspects of them. And, as I've said before, we tend not to factor constraints into our conceptualisation of things, and are poor at doing it when we try.
The thing that's different about IT people is simply that they explicitly think about and compare software more than non IT people do.
A contributing factor, in the case of systems whose function it is to present us with information, is a simplistic notion of perception. In this view, perception is simply a matter of taking in information, and what we end up with in our heads is simply a reflection of what is out there. The thing is, our perceptual/cognitive systems are highly structured, and constrained in various ways.
This is why the same information can be much easier or harder to deal with, depending on how it is represented and/or how you can interact with it. Our brains look for certain cues, and whether they’re there or not makes a big difference. They have all sorts of built-in assumptions, expectations and limitations.
Revisiting the question of why IT people see software in this way, there might be one factor that is specific to them: they're used to thinking of how to use software to solve problems. With a program, as long as it can get hold of the information it needs, the original format doesn't matter so much, as the program can put it in a more usable form and then solve the problem. The difference with our brains and perceptual systems is that they have more limited capabilities to do this, and to do it fast enough for it to be practical. Our hardware is much more dependent on the external format.
We don't just compare software that exists. When someone thinks of an idea for a new piece of software, it gets compared with what we've already got. So this issue of how we compare software applies to developing new software. If we can be aware of the problems with only doing functional comparisons, we won't be so eager to shoot down new software, for even the best new software rarely does anything new in functional terms.
Monday, May 26, 2008
Assuming that every bit counts is a recipe for failure, whatever your goal is (including helping the environment)
When you have goals you want to achieve, being satisfied as long as you're doing something that works towards them is a recipe for failure.
Unless you think about priorities and how much tasks will really contribute, it's easy to just feel good about doing something, and it's always tempting to do the easier or more urgent things. Every little bit helps, right?
But not really. You have to have the right priorities. What matters is how much a task contributes. Tasks will differ in how much they contribute to the goals, and there's almost always an unlimited number of trivial tasks you can occupy all your time with. Time can easily fly by, without you having achieved anything of substance.
Sometimes tasks may seem to meaningfully contribute, but in fact not really contribute at all. They might just be addressing symptoms of a deeper problem. As long as that deeper problem remains, the symptoms will continue to arise for you to work on, and you'll end up on an endless treadmill. Really tackling the problem means going for its deeper source.
This doesn't only apply to individuals and their personal, study or work goals. It applies anytime there's any goal to be achieved. Consider the goal of helping the environment. How many times do you hear it said that “every little bit counts”?
Yes, in some trivial sense every little bit is doing something. But is it doing anything really useful? Is it just addressing symptoms, and putting us on an endless treadmill?
The point is, it's really important for us to consider what our priorities should be. We need to think about what actually does and doesn't help, and to what extent things help, so we can know how we should distribute our time.
But I feel that thoughts about the environment don't really get into considering such matters, and tend to be mired in a warm and fuzzy "every little bit counts" view.
Perhaps you think that this is all well and good when it comes to governments and companies, but for the person on the street, all they can do is the little things, so for them it truly is a matter of ‘every little bit counts’.
If you think that, then by all means, have a look at what the person on the street might be able to do, and the relative values of these options. And if you find that there truly aren't any real differences between the options, and every little bit really does count, let us know. It's an important question, and a proper answer -- whatever it is -- would do us all good.
Sunday, May 25, 2008
Not infrequently, I'll be doing some task that involves jumping back and forth, or copying files, between a number of different folders. In Windows Explorer, these directories are often not close to each other, so I have to scroll up and down all the time.
Perhaps it'd be useful if Windows Explorer could (optionally) display a cache of the folders you've recently navigated to or moved/copied files to. Then you could drag files to them, or click them to see their contents in the right-hand pane.
A similar 'Recent Folders' section would also be handy in save and open dialog boxes. So often I have to constantly navigate back to the same directory, each time I save or open something. Some save/open dialogues do provide a button for obtaining a list of recent folders, but why not just include a few of those folders in a section within the default listing?
Out of sight, out of mind, as they say. You’re in the kitchen, or walk past it, and the fridge is just this big white opaque box. You don’t see the food -- only when you open the door. And even then, items can be hidden behind other items. But if the fridge were transparent, you could be walking past, or standing in the kitchen, and happen to see its contents. You’d be more aware of the food being there.
I always end up wasting food because I don't see it and then forget about it. A transparent fridge might help me waste less (especially if I also stored things in transparent containers). And I think there’d be something slightly less artificial about your life if the food you’ve brought home and stored away wasn’t so out of sight.
On the downside, food would be exposed to more light, and there's a question of whether you could make one that's as well insulated. With the food more visible, it could also tempt you more to snack. I'm not sure if you'd get condensation fogging up the surface, but if you did, it could ruin the transparency. Also, if you didn't keep the fridge tidy, it could look messy - but then again, perhaps that would be a motivation to keep it tidy.
Here's a pic of a transparent fridge from some sort of Japanese trade show (see the article accompanying the pic). I don't know if it's something that is actually sold or not.
Here's a suggestion for a transparent fridge from back in 2006. As they point out, it might also help save power, if you didn't have to open the fridge door to see what was inside.
Saturday, May 24, 2008
This is just an initial, hasty attempt to write about this stuff. I'll try and revise it a bit tomorrow.
Often, we’ll have a view about a topic without having actually given it explicit consideration. We obtained the view pretty unconsciously. Without explicit consideration, we’ll just feel there’s something shifty about Terry, or that technology is a bad thing.
Our brains were designed so that we don’t need explicit deliberation to form judgements about other people’s character, so our intuitions on that score tend to be fairly reasonable. But our brains weren’t designed with the capabilities required to automatically draw sound judgements on concepts such as technology; though our brains still try to automatically develop views on them.
‘Technology’ is a very abstract concept. It covers so many things that, in concrete terms, differ in so many ways. Technologies span the four corners of the globe, many thousands of years of history, and all aspects of our lives and societies. They can be used with all sorts of intent, and have all sorts of consequences (and the intents and results don’t necessarily match – good intent can lead to bad results, and vice versa). Technologies include writing, aspirin, glass, bridge construction, computer games, and guns.
If we want to draw reasonable conclusions when thinking about topics like ‘is technology a good or a bad thing?’, we need to override our natural intuitions about it, and start thinking explicitly about the question.
Our natural intuitions have a powerful grip on us. We might have a negative impression of technology because of various things we’ve heard where bad things happened because of it. If connotations start to accrue around a concept, it tends to snowball. Thereafter, the things we’ll take note of as ‘technologies’ are more likely to be things that fit our existing preconception of it – as ‘high-tech’ things that have negative consequences. It’s more likely that each example that fits that view will end up reinforcing it, while, when we come across cases that don’t fit it, we probably won’t think to reevaluate our notion of technology.
The ability to think explicitly about a concept is a skill, and has to be developed like any other skill. When your brain is unconsciously building up a picture of something, it can build a picture of depth and subtlety. But as I’ve said, it can only do this in certain sorts of cases that suit the in-built mechanisms for doing this. When you’re explicitly reasoning about a topic, you have to build that depth and subtlety manually.
Initially, we can only explicitly think of the concept in simplistic, somewhat stereotyped ways. And it simply takes a lot of work to draw out and refine that skill – the ability to think explicitly about technology and its nature. Reading things about the concept helps, but I think that can only get you so far. To really develop the skill requires linguistic work. You could try thinking about it in your head, but that’s very difficult to do. What’s more suitable is to talk about or discuss it, or to try writing about it.
In the end, what you want to do is train unconscious aspects of your mind to have a deep and subtle ability to handle that concept. To have a fluency with it. That’s just like the end goal of developing any other skill – whether driving a car, using a chef’s knife or playing rugby.
Why is this important when it comes to thinking about concepts? Because to develop a solid position on a particular topic, you need to be able to evaluate your own position well. You have to be able to spot simplifications or omissions within the statements you’re making. You have to have a good nose for these things. And you just can’t do this well enough until you’ve really absorbed the knowledge well.
And I think that it’s likely to be the case that, once you have developed your skill a fair bit, you’re going to reach different sorts of conclusions than you would have initially. You need to develop the skills in order to draw reasonable conclusions.
And all this has a bearing not just on developing arguments, but on presenting them to others. That is, it has a bearing on things like how non-fiction books ought to be written, when there’s a need to develop a skill in reasoning about a concept – such as when you’re trying to answer the question ‘is technology a good or bad thing?’. So of course, first you’d try to make sure you’ve developed your own skill in reasoning about that concept, so that you have an accurate conception of it. As far as I can see, this is very rarely done. Usually people don’t really consider their conception of the relevant concepts. They just take for granted the notion they’ve pretty unconsciously developed, and go from there. They might, for the purposes of exposition, define their notion of the concept. But that’s usually not much more than putting their existing view of it into words. Usually, what they’re really doing is justifying the view they’ve already reached.
If a lot of the work in developing reasonable conclusions is developing skills in thinking about the concepts involved, then if you want the reader to appreciate your viewpoint, part of what you need to do is use your writing to help them develop those skills. I did say earlier that reading things involving the concepts helps a bit in developing skills, but is secondary to talking or writing about them. What I’m talking about here is a different type of writing, where the focus is specifically on trying to understand the nature of the concepts themselves, rather than just saying things concerning the concepts.
This fits in with the notion of laddered skills, particularly mental laddered skills.
Tuesday, May 20, 2008
Sometimes you hear the complaint that the internet users of today don't have the patience to read things online. It used to be that people read relatively long articles in newspapers and magazines like Time. But not any more. They don't have the attention span, and are perhaps lacking in a certain kind of mental depth.
But some of this situation actually reflects a shortcoming in older styles of writing. They're not informational enough. You have to read a lot for only a small amount of useful information. That was fine when information was relatively scarce. But when there is so much information out there -- and in truth, a lot of useful information out there -- it's just too costly to read things that have a low information-to-length ratio.
By the way, when I talk about the information-to-length ratio, I mean the amount of information that people can easily and comfortably derive from the text. It's no good if the text says a lot but is bloody difficult to decipher (note, however, that I tend to think highly informational text is likely to be readable. Clever-sounding but hard-to-read text is often just obfuscation).
I think that people will pay attention if the informational payoff is high enough. When it's not, they'll skim, and if it's too low, they'll skim a bit then leave.
That's the simple rule for getting people to read stuff on your website: give it a high information-to-length ratio. I'm sceptical of how well "gimmicks" like highlighting keywords -- so they'll catch readers' eyes when scanning -- really work.
So what's an example of a form of writing that has a high information-to-length ratio? The best examples I can think of are Paul Graham's essays.
(And of course I'm not making any claims here about my ability to write this sort of text. Certainly most of the stuff on this blog is just 'trying to get my thoughts together' stuff that doesn't have a high info-to-length ratio.)
It'd be nice to have some integration between Transinfo and Google Maps. Transinfo is the public-transport journey planner for the Brisbane area.
Some examples of possible integration (most or all of which, I'm sure, have been done in various other places around the world): showing the real-time location of buses and trains on the map; overlaying bus routes on the map.
Being able to plan a trip, and mark it out on the map. This'd be especially useful when it involves changing between several buses, and having to walk to a different stop to catch the next bus.
Being able to search for a location in Google Maps and to feed that in as a 'from' or a 'to' address in the journey planner. Or if you use Google Maps to mark out how to get from one place to another by car, be able to immediately get public transport journey information.
Monday, May 19, 2008
Another excellent Paul Graham article: Lies We Tell Kids
Adults lie constantly to kids. I'm not saying we should stop, but I think we should at least examine which lies we tell and why.
There may also be a benefit to us. We were all lied to as kids, and some of the lies we were told still affect us. So by studying the ways adults lie to kids, we may be able to clear our heads of lies we were told.
I'm using the word "lie" in a very general sense: not just overt falsehoods, but also all the more subtle ways we mislead kids. Though "lie" has negative connotations, I don't mean to suggest we should never do this—just that we should pay attention when we do.
Saturday, May 17, 2008
Thursday, May 15, 2008
The non-linear relationship between improvements in your understanding of a problem and your ability to address it
In relation to my last post...
I think we also tend to assume that, if you understood a problem better, your idea of how to best address it would simply be a better, improved version of your existing idea of how to address it.
But I think that's wrong. A better understanding can lead you to a radically different notion of how to address the problem.
It’s easy to be falsely led into thinking that your current understanding of a problem can’t be improved. That's dangerous, because simplified understandings tend to lead to inappropriate solutions.
You can think it can't be improved if you expect that, should there exist a way for it to be improved, you could personally see:
1) how it was possible for a better understanding to exist
2) what it would look like
3) how it would help you better address the problem
It seems to be just how our brains work that, if we can’t see these things, we tend to think the thing mustn't exist.
But the problem is, if you do only have a simplified understanding of the problem, then this fact itself can stop you from seeing those things. If you could see those things then you wouldn’t have the simplified understanding.
Just because you can't see it doesn't mean it doesn't exist.
Unless you can say why your understanding of the problem is complete, you always have to be open to the idea that it's useful to learn more about the problem, and that doing so may radically change your idea of how best to address it.
Monday, May 05, 2008
So far I'm quite impressed by Enso. It provides you with a single 'linguistic command line' for controlling common operations on your computer, from switching between applications, to looking something up in Google, to spell-checking text. It's very slick. It's also free and open-source. It's presently Windows-only.
Probably the best way to get a feel for it is to watch this overview screencast.
I think this kind of thing is a real step forward.
Sunday, May 04, 2008
Cuba lifts ban on home computers. Home computers were banned in Cuba?? That's pretty oppressive. I'm not saying that banning computers is anything close to the worst form of oppression, but it is telling of their attitude towards their citizens.
Posted by James at 11:18 a.m.
Saturday, May 03, 2008
A very interesting -- and very readable -- article by Clay Shirky.
What do Gin, sitcoms and Wikipedia have to do with ‘cognitive surplus’, and how will the technological leverage we’re giving to that surplus re-shape society? That might all sound fairly speculative, but I think he's got a pretty solid point.
Clay Shirky: website, Wikipedia
His book, Here Comes Everybody: amazon; blog;
reviews: guardian, times higher education, boing boing
Thursday, May 01, 2008
I don't do scrambled eggs much coz they never seem to come out right, but after seeing this I'm going to get some eggs this afternoon and give it a go tomorrow morning.
A few interesting things from the video: don't put salt in early, as it affects the consistency; scramble the eggs in the pan; alternate the pan on and off the heat.
(Update: had a go at it and it came out pretty well, though I didn't cook it quite long enough, and didn't have any creme fraiche. Also, 3 eggs is too much for me; next time I'll try using 2.)
Alternatively, if you like omelettes: