The still-relevant bits of this blog - an overview, 2003-2018
I've used this blog to jot down thoughts and ideas, which I sometimes want to go over. But by this point it contains so much content that it's hard for me to dive into it. This post is an attempt to provide a quick overview of its content: one that filters out posts that are no longer relevant for me, tries to organise the material into logical groupings, and summarises the basic point of each post.
On the topic of my thoughts and ideas, there's also what I've written on Hacker News, Reddit, Twitter, and Quora.
Basic Research
What is the value of basic research? Not individual pieces of basic research, but that entire category of research? My thoughts on how we can think about this question, and why basic research's value is significantly underestimated.
Scientific and Technological Development
It is useful to see scientific and technological development as removing constraints on our ability to achieve goals. This demystifies this process, and suggests possible future developments.
Information Ecosystem
The information ecosystem is all the means, across all of society, of storing, presenting, communicating, and using information. In that, how can we help truths to spread and hinder the spread of falsehoods? (One idea this post contains is a source outlining more applied kinds of mistaken reasoning and justification.)
Thoughts on how we can use fact checking, and linking to supporting evidence, to help improve the information ecosystem. A goal being to make 'providing supporting evidence' more of a norm. And some thoughts on how we can make fact checking practical and as easy as possible.
An authoritative source of current scientific opinion would help. A source for reliable statistics for facts like "X% of working biologists believe in the theory of evolution". It'd be an important resource for informing the public and those making social policies.
Bret Taylor's suggestion for a Wikipedia for data (statistics, listings, etc), which still doesn't seem to have come to fruition.
A few minor things. Identifiers for news topics would help: if a topic (like a particular accident that occurred) were given a unique identifier, all reporting and follow-ups on it (by all the different parties) could reference that identifier. Some articles suggesting improvements to news sites. Some sites for structured representations of debates on topics (though it is questionable how much value these have): Debatepedia, spacedebate.org (which uses the Open Debate Engine, as do these two).
Related: Clay Shirky on the use of evidence in society (his response to the 2007 Edge question). He explains why "even today, evidence has barely begun to upend authority", and why, despite this, he's optimistic this is changing.
Interactive Storytelling
Can we use interactivity not for player agency, but as a means of increasing the player's immersion in a character's perspective? Think of something more like an interactive movie or documentary than a game. I sketch a high-level vision for this.
Some ideas for how we could use interactive storytelling to teach skills like communication skills.
Minor:
A miscellaneous idea: could you use a character like a Nintendog in a narrative-focused game?
(And not something I intend to pursue, but a thought on presenting stories through email correspondence. Including the idea of seeing the emails drafted in real-time.)
Though not necessarily something you'd use for interactive storytelling, here's a concept you could use in a computer game: graphically representing the perceptual attention of a character (such as the one the player is controlling). Not the area they're looking at, but the differing levels of attention they're giving to things within that area.
Cognitive
Tropes in fictional TV and movies that propagate inaccurate, damaging views. Tropes such as 'grief is always overt', 'attractiveness correlates with character', and 'intelligence is Spock-like'.
'Core world-model' as a Creole
(I'm not happy with how I described this in the original post, so the following overview puts it in different terms). Core world-model as a creole (original post titled "Knowledge as a creole"). Creoles are a type of spoken language, and the way they come about shows us that children build their understanding of a language according to an unconscious, structured process that integrates available linguistic details from their environment. I argue that we build our 'core model of the world' in a similar fashion. And as with language learning, it's difficult to modify this or integrate new details once a person is an adult. Many 'paradigm shifts' could be explained, in this view, as the result of the first generation that grows up with the new insights available to integrate, in this fashion, into their 'core model of the world'.
Related to this, Clay Shirky argues that a lack of knowledge is useful for entrepreneurs.
Cases where words have been adapted for use in a new context, where their original meanings are misleading in the new context. "Stretching" muscles, and radio-signal "interference".
"Analog" properties, such as the level of mercury on a thermometer, are really those whose levels can change in increments smaller than we can perceive.
Treating the meaning of some words/phrase in terms of their meaning in the abstract can cause confusion. We need to think of what their meaning is in the particular context in question. Another example of this.
Popper's "What is Dialectic?" is an excellent example the interaction between language and thought and the problems that can be thus caused.
Species, Language and Reality: an example of how language misleads us. The notion of 'species' makes it seem like there's a lack of 'intermediate forms' in fossil records, where there's no actual such issue. (Also related to the 'evolution' topic found below).
Opinions Aren't Always Just Opinions: arguing against common view that opinions are unquestionable.
Phrasing and significance: some simple examples of where language is used to make something sound significant.
The natural world. Giant ocean waves were long dismissed as myths, despite the fact that they actually exist (as has now been definitively shown). Early reports of geysers weren't taken seriously.
How I think file-system tagging should work. To be truly viable, the tagging system needs to utilise hierarchy in two ways: hierarchical tag namespaces, and tags being scoped to sub-hierarchies of the directory hierarchy.
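The original post has the details; the following is just a minimal sketch of how those two uses of hierarchy might fit together. The class, the namespace separator, and the notion of a tag's 'scope' directory are illustrative assumptions, not something from the post.

```python
from pathlib import PurePosixPath

# A tag assignment: a hierarchical tag name (e.g. "project/blog") that is
# only in effect for files under a given scope directory.
class ScopedTag:
    def __init__(self, name: str, scope: str = "/"):
        self.name = name                      # hierarchical namespace, e.g. "project/blog/drafts"
        self.scope = PurePosixPath(scope)     # sub-hierarchy of the directory tree it applies to

    def applies_to(self, file_path: str, query: str) -> bool:
        path = PurePosixPath(file_path)
        # 1. Scope: the tag only applies to files within its sub-hierarchy.
        in_scope = path == self.scope or self.scope in path.parents
        # 2. Namespace hierarchy: querying for "project/blog" should match a file
        #    tagged "project/blog/drafts", but not the other way around.
        tag_parts, query_parts = self.name.split("/"), query.split("/")
        matches = tag_parts[: len(query_parts)] == query_parts
        return in_scope and matches


tag = ScopedTag("project/blog/drafts", scope="/home/me/work")

print(tag.applies_to("/home/me/work/posts/tagging.md", "project/blog"))              # True
print(tag.applies_to("/home/me/personal/notes.md", "project/blog"))                  # False (out of scope)
print(tag.applies_to("/home/me/work/posts/tagging.md", "project/blog/drafts/old"))   # False (query too specific)
```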
Poor language/concept use
Minor:
"Analog" properties, such as the level of mercury on a thermometer, are really those whose levels can change in increments smaller than we can perceive.
Treating the meaning of some words/phrases in terms of their meaning in the abstract can cause confusion. We need to think of what their meaning is in the particular context in question. Another example of this.
Popper's "What is Dialectic?" is an excellent example of the interaction between language and thought, and of the problems that can thus be caused.
Species, Language and Reality: an example of how language misleads us. The notion of 'species' makes it seem like there's a lack of 'intermediate forms' in the fossil record, when there's no actual such issue. (Also related to the 'evolution' topic found below).
Opinions Aren't Always Just Opinions: arguing against the common view that opinions are unquestionable.
Phrasing and significance: some simple examples of where language is used to make something sound significant.
Label it terrorism, do what you like: a brief post on how labelling things can make readers jump straight to emotional reactions and conclusions, skipping the step of checking whether the label is apt.
Thinking
Minor:
I had a lot of difficulty summarising the following post, and I ended up mostly re-writing it in my attempt. The original also provides a potential explanation of why we tend to overlook the negative processes.
I think critical thinking is probably a lot more important than obtaining knowledge. Not that the latter is unimportant, but that a strong core of critical thinking is necessary, along with obtaining knowledge, for good results.
Knowledge and intelligence can be structures/processes that reflect how things are, so we increase our knowledge and intelligence by obtaining more of these things (through reading, practice, etc). That is, they can be 'positive' things. But they can also be 'negative'. Critical thinking. Knowledge and intelligence that enable us to be critical, to not jump to conclusions, to not overgeneralise, to critically evaluate inferences. These negative elements are filters: they're for avoiding inaccurate knowledge/reasoning.
It's important to apply such filters. Many ideas come from intuition, and intuition doesn't work very well for many topics (e.g. science, technology and economics). We tend to get knowledge from what we hear/learn from others -- building new knowledge is very difficult and time-consuming -- and there are many inaccurate views out there (e.g. views that aren't accurate but are good at propagating amongst the population). There are many more ways to be wrong than to be right, and once a false view gets into your brain it can be difficult to dislodge.
Positive learning/practice can contribute to being critical, but it doesn't necessarily lead to it; a critical attitude is somewhat of a parallel track, with benefits separate from those of positive learning.
We tend to think that if we can't see how our understanding can be improved, that this is because it can't be. But the fact that we can't see how it can be improved is not because it can't, but because of the limitations of our ability to imagine such details. Basically, if we could see how our understanding could be improved, we'd already have that improved understanding. (This is an example of the generally positivist character of intuitive thinking).
A benefit of learning to program. There are times when you think you understand something, but in fact you don't really. Programming makes you better at spotting such cases, and I think it can help your ability to spot such cases outside of programming as well.
A purely-speculative notion is one without any supporting evidence. When a purely-speculative notion is invoked to explain some phenomenon, it's less obvious that it's purely-speculative. Think of pre-scientific explanations of the natural world, like the spheres many believed to be associated with epicycles. The fact that it's doing explanatory work makes it seem more real, and its being nested within an explanation makes it more difficult to take a critical eye to it. Such purely-speculative notions may even become a transparent part of how we perceive/conceptualise the phenomenon, such that the fact of the phenomenon's existence and character seems to provide concrete evidence of the existence of the -- in fact, purely speculative -- notion.
In a subsequent post I clarified a couple of things. What I mean by evidence: if an explanation turns out to be false we can see there was never any actual evidence for it, just information that was suggestive to us, that we incorrectly took as evidence. Some explanations really are purely-speculative: they're just-so stories that only have going for them that if they were true then they would provide an explanation. And, that purely-speculative explanations aren't intrinsically bad. They are a necessary part of theorising and obtaining a true explanation. They're bad when the lack of associated evidence isn't recognised.
In a further post, I distinguished between a potential explanation (that demonstrates nothing more than a potential, and neither indicates that it is the explanation nor rules out other potential explanations), and an actual explanation (that demonstrates that this, out of all the possible explanations, is the actual explanation of the phenomenon). No amount of potential explanation evidence can demonstrate that something is the actual explanation. The inability to imagine an alternative explanation is often cited as evidence that a potential explanation must be the actual explanation, but it does not provide any evidence for this.
When we draw a conclusion about X (an item, a situation, etc) we should do so by looking at the properties of X that are relevant to whether that conclusion holds or not. It is, however, very easy to take lazy shortcuts -- to jump to conclusions -- and draw that conclusion based on generalisations that apply to X rather than actually looking at the particulars of X. This is an example of that.
Instead of judging whether something is plausible by trying to imagine whether it could exist (the flaw in this is that positivist attitude), we should try to find reasons why it couldn't exist -- and if we can't, be agnostic about whether it could exist or not. If you need to take some action that's dependent upon whether the thing exists or not, that action needs to go one way or the other, but even this doesn't require you to have a definitive belief.
There's a strong tendency to assume psychological causes for medical conditions. Some examples. Duodenal ulcers used to be blamed on psychological factors. As was scurvy (which was also blamed on environmental factors).
Wikipedia's list of common misconceptions (direct link).
Malcolm Gladwell argues that people tend to underestimate the difficulty of unfamiliar tasks, especially those that require expertise (direct link).
Missing the thickets for the forest. Focus too much on the trees and you can miss the forest. But focus too much on the forest and you can miss the intermediate structures -- patterns or structures within the whole -- the thickets.
Fact memorisation supplants thinking. A focus on learning lightweight facts, which give you rote instructions (like the 'tips' that lifestyle shows tend to focus on), hinders one's ability to think about a situation.
Short Michael Shermer article on relativism. "Wronger Than Wrong: Not all wrong theories are equal" (direct link).
In demonstrating or evaluating an argument, we tend to focus on the reasoning between the premises and the conclusion. But I think the larger part of whether the conclusion is true is whether the premises themselves are true. The premises usually aren't just facts, but an outlook, a framing, a kind of model of the world, which sets up the kinds of ways we reason from the facts to the conclusions.
Understanding why a map is a map of a particular territory can help us to avoid confusing that map for the territory.
Jurors are biased by animated recreations used in courtrooms.
Things we don't understand always seem intangible.
The tendency to assume that things are as we perceive them. An example of this is assuming that since qualia seems inexplicable it is so.
We often wrongly attribute properties to objects that are actually the properties of interactions. For example, that something is addictive has to do with how our bodies/brains process it, not some intrinsic 'addictiveness' property of that thing. So non-physical things (like activities such as video games) can be just as much addictive.
A link to a 10 minute video on what it means to be open-minded (direct link).
On intuition. Distinguishing between learned-intuition and heuristic-intuition. Reason and intuition both have their place, but reason is better for determining which of these is better to use in a particular situation.
False neutrals. Assuming that only alternatives to the status quo, which itself is considered 'neutral', need to be subjected to evaluation. And trying to understand why we fall prey to false neutrals.
Janet Rae-Dupree argues that innovation is the result of hard-work, not flashes of brilliance (direct link). An article on the myth of prodigy: "A study of 200 highly accomplished adults found that just 34 percent had been considered in any way precocious as children" (direct link).
An important part of how we 'naturally' view things in the world involves, I think, essence-based things and a predominantly social/aesthetic interpretation.
A blog post (direct link) by John Hagel on an article (direct link) by Rich Karlgaard on the prevalence of, and problems with, zero-sum thinking.
In cool reflection, it's easy to underestimate the force of emotions. When you experience emotions it affects you. But when you imagine such a situation they don't (or to a much lesser degree). This makes it easy to underestimate the force of emotions, and overestimate the ease of just doing what you want.
Motivation is often overlooked as a kind of emotion, feeling or desire. I have a brief look at what factors can influence motivation and why it's not a simple matter of "wanting to do" the thing.
David Deutsch's reply to the 2001 Edge question, arguing for the importance of looking for explanations for what we see happening (direct link).
"What is it in reality that brings about the outcome that we predict?" Whenever we fail to take that question seriously enough, we are blinded to gaps in our favoured explanation.
I think F. Scott Fitzgerald's test for intelligence embodies a good notion of what intelligence is.
On the common but mistaken view that 'how much general knowledge you have' equates to how smart you are.
We think of size in relative-terms, relative to our own size. But this is very misleading. It's more accurate to think of something's size in terms of the amount of 'atomic' detail it consists of, and in such terms everyday objects like ourselves are very large.
Despite how perception presents the world to us, the actual physics (of "subatomic particles", so to speak) is everywhere and everything.
When we think of a book we may think of the physical object, with pages and ink printed on them. But we may get a, in some ways more accurate, picture of it by imagining it as the sequence of letters, spaces and punctuation that make up the book's content, as if that was arranged in one long line that stretches out for quite some distance.
Just an interesting thing to think of: as well as the physical Earth, there is each person's personal conception of the world, the one they actually see and experience in their heads, and there are billions of these.
"The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function." Some posts related to that: I think it's an important skill to be able to reason about an issue without the implicit assumption that your position on it is true, and I talk a bit about what is required for doing this. It's good to understand views that you reject. Why do they make sense to others? Doing this can help guard against rejecting things simply because they seem "ridiculous" to you.
Physicist Brian Greene: "The difference between making a breakthrough and not can often be just a small element of perception".
Non-experts can be better at solving problems than the experts - an example.
On the tendency to dismiss things
Some miscellaneous examples of this. There seem to be endless cases of these. What follows is just a small grab-bag of cases I noted down.
Medical. Where a mother died after her emergency call was treated as a prank. Where a woman died because she was ignored in the emergency room of a hospital. A woman had 17cm scissors left in her after an operation, and she had a huge amount of trouble getting people to believe she was in real pain and that there was something wrong. The person who discovered that duodenal ulcers are caused by a bacterium, not stress, was ridiculed for holding this view.
The natural world. Giant ocean waves were long dismissed as myths, despite the fact that they actually exist (as has now been definitively shown). Early reports of geysers weren't taken seriously.
Systemic Worldview
The world works in systemic ways, which is why the ability to think in systemic terms is so important.
People tend to think that the only or best way to address problems or to achieve goals is with 'vertical' (domain or problem-specific) solutions. But I argue that 'horizontal' (domain-agnostic, often 'infrastructure') solutions are often more important (the linked post is talking about this specifically in the context of software). Vertical solutions are appealing because it seems they're directly focusing on the goal you want solved, while the latter seem spread too thinly. The key to appreciating horizontal solutions is that instead of being for a specific goal they help enable them. (This is closely related to the distinction between applied and fundamental research).
It's important to appreciate the kinds of 'systemic effects' that can make systems work in counterintuitive ways. Such as when incentives are poorly aligned. Some examples of this, to do with why enterprise software is so often poor. Another example - Bruce Schneier's notion of CYA (Cover Your Ass) security: "much of our country's counterterrorism security spending is not designed to protect us from the terrorists, but instead to protect our public officials from criticism when another attack occurs".
An example of systemic effects. In the article "Internet Killed the Alien Star", Douglas Kern contends that the UFO movement has been largely killed off because of the internet and other technological developments, such as camera phones.
Change is messy, and this is under-appreciated in practice. In 2009 Clay Shirky wrote "Newspapers and Thinking the Unthinkable". Newspapers are going away, it'll take some time to invent alternative venues for journalism, and the whole process will be messy (and the history of the printing press can help us understand this).
Chris Anderson article 'The Probabilistic Age': "Our brains aren't wired to think in terms of statistics and probability." (direct link)
The following subsections are cases of concepts that help in appreciating things in systemic terms.
A broader notion of affordances
Donald Norman's notion of affordances: the character of physical objects suggests how we can interact with them. I think there's value in a broader notion of affordances: the character of entities (whether physical objects, or things like mental concepts) shapes how we perceive, think about, or do things.
It's very easy to underestimate the effect of convenience and reminders in determining whether a person will actually undertake some activity, so we tend to underestimate the importance of affordances.
An example of software affordances: when there's a language for creating some data (e.g. a templating language for generating HTML pages), the degree to which the form of the code matches the form of the data to be generated is an affordance.
Some more examples of affordances, and the utility of having a very general notion of affordances.
Evolution
Our usual language for talking about evolution tends to be misleading, so I suggest some alternatives. And, prompted by a comment on that post, some elaboration.
Most people only understand the definition of evolution, which by itself provides a misleading picture of it. To be able to think sensibly about evolution requires understanding a bit about what evolutionary processes are like in practice.
Some miscellaneous things relating to evolution:
The Copenhagen Interpretation of quantum physics and the theory of evolution seem fundamentally at odds with each other, and the latter has much better evidence in favour of it.
Could debilitating grief be, in part, a 'precautionary tale' to help others avoid whatever caused it? This could have evolved through kin selection.
Could our 'inner voice' have evolved as a way to re-use existing machinery (for processing external voices) for a general-purpose intra-brain communication mechanism?
The Intelligent Design movement claims that natural selection couldn't have produced lifeforms. Lifeforms are too complex, they say. But even if that were true, why don't they consider the possibility that life's the result of some other, as yet unknown, naturalistic mechanism?
Most descriptions of microbial evolution avoid using the term 'evolution', and this "may have a direct impact on the public perception of the importance of evolutionary biology in our everyday lives", says a (quite readable) article in PLoS Biology. Related to this, doctors who don't appreciate that antibiotic resistance is an evolved trait tend to prescribe antibiotics in a way that encourages such evolved resistance -- leading to so-called superbugs. I link to a blog post on this topic.
Reductionism
'Reductionism' is very often misunderstood. I try to clarify what it really means.
A simple case where reductionism appears wrong but isn't: stacking two planks of wood does more than double the overall strength. The flaw in the reasoning stems from treating an abstraction ('strength') as a literal thing.
The whole is greater than the imagined coming together of the parts: usually when it's claimed that "the whole is greater than the sum of the parts" in some situation, that claim isn't based on a precise, concrete understanding of what the "sum of the parts" really is, but more just on an intuitive impression of what the "sum of the parts" would be. This is another flawed reason why people think reductionism is wrong.
Laddered skills
Skills are 'laddered': you have to go through the lower rungs on the ladder to get to the higher-level or more advanced competencies. Lower rungs tend to be simplified, rough capabilities; higher rungs tend to refine them. It's often assumed that the results of advanced competencies can be achieved by simply knowing what they are and trying hard enough, but it takes substantial time and practice to work your way through the rungs up to that level.
It's difficult to imagine higher rungs above our skill level, which intuitively makes it feel like they don't exist. It's difficult to imagine how we'd reach a higher level, which makes it feel like there isn't a point in trying. And it's difficult to imagine what having a high-rung ability would enable, which makes it feel like there wouldn't be a point anyway. But these are just limitations in our abilities to imagine, which we should expect to experience and which we should ignore.
Acquiring knowledge is acquiring a skill. As we learn a skill we refine our abilities, and develop fluency. I suggest that acquiring knowledge is literally a matter of acquiring skills, and follows this same pattern.
Being able to explicitly reason about a concept is a skill. One rung on the ladder of cognitive skills is being able to explicitly reason about concepts. Doing so requires being able to override intuitions and a fair bit of practice. It's an important skill for being able to draw reasonable conclusions about a topic, which also means that non-fiction books arguing for a particular conclusion should pay heed to this and try to help the reader develop skills in reasoning about the concepts involved.
Laddered skills don't just apply to individuals, they also apply to organisations, like companies or countries. A society has to have developed to a certain stage before it can maintain a democracy, for example. Democracy can't be installed into a primitive society in one fell swoop or just by "trying hard enough" to install it. I'm curious about how societies evolve. What are the rungs, such as in how legal systems and other institutions come to be and get fleshed out?
Computing
Minor:
Simulations, for teaching learner drivers, could present them with the sorts of unusual circumstances that can cause accidents (e.g. another car suddenly behaving erratically).
An idea - a smart-phone 'slush pool' for making small payments practical: the user gets convenience because they can make small purchases using this 'slush pool' without needing to enter a password, and at the same time have minimal risk because they require a password to add money to the slush pool and it can only ever contain a small amount of funds (e.g. a max limit of $10). [Written in 2010 - recent technologies like fingerprint and facial identification lessen the need for such measures].
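As a rough illustration of the idea (plain code, nothing phone-specific): the $10 cap, the method names and the password check below are all assumptions for the sake of the sketch.

```python
class SlushPool:
    """Small pre-authorised balance for password-free micro-purchases."""

    MAX_BALANCE = 10.00  # the pool can never hold more than this

    def __init__(self, password: str):
        self._password = password
        self._balance = 0.0

    def top_up(self, amount: float, password: str) -> None:
        # Adding money is the sensitive operation, so it requires the password,
        # and the pool is capped so only a small amount is ever at risk.
        if password != self._password:
            raise PermissionError("password required to add funds")
        if self._balance + amount > self.MAX_BALANCE:
            raise ValueError("would exceed the slush-pool limit")
        self._balance += amount

    def pay(self, amount: float) -> bool:
        # Spending needs no password: worst case, the small balance is lost.
        if amount > self._balance:
            return False
        self._balance -= amount
        return True


pool = SlushPool(password="hunter2")
pool.top_up(8.00, password="hunter2")
print(pool.pay(1.50))   # True: no password needed for a small purchase
print(pool.pay(20.00))  # False: more than the pool can ever hold
```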
Is there a way to incentivise and support summaries of threads on discussion sites like Reddit?
Would it be useful to have a "Stack Overflow" for finding research material in a subject area? It could help people from outside a field (including those trying to get into the field) to find out about relevant research in the field, from people in that field. I discuss the issue of incentives for contributing to such a resource. Perhaps this need has been, since writing this post, served by Quora or some subreddits. I'm not sure.
Immersive language teaching. 2004 article on software developed by the University of Southern California to provide an immersive virtual environment for learning language in a more natural way.
Using Wikipedia as a source of canonical tag names. To get the canonical tag name for a subject, get the URL of the wikipedia page for that topic and use the right-most part of it as the tag name. e.g. Ten-pin_bowling from http://en.wikipedia.org/wiki/Ten-pin_bowling
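For example, something along these lines would get the canonical tag name from a topic's Wikipedia URL (any further normalisation of the name is left out).

```python
from urllib.parse import unquote, urlparse

def canonical_tag(wikipedia_url: str) -> str:
    # Use the right-most path segment of the Wikipedia page URL as the tag name.
    return unquote(urlparse(wikipedia_url).path.rsplit("/", 1)[-1])

print(canonical_tag("http://en.wikipedia.org/wiki/Ten-pin_bowling"))  # Ten-pin_bowling
```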
User-interface
Minor:
A user interface principle: operations should be applicable to a type of information, regardless of the context it appears in. If an operation can be applied to an X, it should (unless there's a good reason not to) be able to be applied to Xs anywhere they appear.
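A small sketch of what that principle might look like in code: operations are registered against a type of information, and any UI context looks them up by type rather than deciding for itself what's allowed. The types and operations here are made-up examples.

```python
from collections import defaultdict

# Operations are registered per information type, not per UI context.
operations = defaultdict(dict)

def register(info_type, name):
    def decorator(fn):
        operations[info_type][name] = fn
        return fn
    return decorator

@register("email-address", "compose")
def compose_mail(value):
    return f"opening composer for {value}"

@register("email-address", "copy")
def copy_value(value):
    return f"copied {value}"

def operations_for(info_type):
    # Any context (a contact card, a web page, a log viewer...) gets the
    # same operations for the same type of information.
    return sorted(operations[info_type])

print(operations_for("email-address"))                     # ['compose', 'copy']
print(operations["email-address"]["compose"]("a@b.com"))   # opening composer for a@b.com
```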
Visually referencing web-page elements: we could provide better support for users to provide visual references to elements of webpages or applications, to help in writing instructions for using them. To enable showing rather than telling.
Some minor ideas for operating-system user interfaces. For switching between windows: hit a shortcut key then gesture mouse in direction of desired window, or have the shortcut key bring up a small schematic map of desktop at mouse-pointer location. For file-explorers, have an optional 'Recent Folders' navigation pane. In the top-half of the left-hand pane, it lists the folders you've recently navigated or moved/copied files to.
Dealing with files in a higher-level fashion: not needing to specify a name or location for the file, and not needing to explicitly save them. The posts go into some specifics of this: Sep 2007, Jun 2008, Mar 2009. This is an area where the software landscape has changed since writing these posts, with the way things like Google Docs and mobile apps operate.
Different communication mediums have different affordances. One dimension these affordances range along is from encouraging more lightweight and transient content (twitter, chat), to encouraging content that's more substantial and longer-term (e.g. email). Would it be useful for this to be explicitly acknowledged by the software tools, such that a conversation could be 'promoted' from a lighter-weight medium to a heavier-weight one when that made sense, whereby this transition was seamlessly handled and tracked by the software?
A couple of thoughts on what the smart-phone analogue of Vim-like modal editing would be.
Fan menus: an idea for combining the best parts of radial/pie-menus and conventional context-menus. The menu is rendered like a conventional context-menu, with an additional radial element for more quickly selecting between the menu items. (After having prototyped this, I don't think it quite works, but I do have an idea for another variation that might work better. I intend to write it up sometime).
Some minor ideas for visually representing information in software tools. For word processors or text editors. When searching for text, display something like a mini-map to show an overview of the occurrences in the entire document. (I wouldn't be surprised if this has been implemented in some tools by now). A facility for showing recent changes to the document (in a fashion like Word's 'Track Changes'), indicating the order the changes were made in, and giving control to the user over the time period to show changes for (e.g. 'today', or 'last 3 hours'). Being able to automatically make more concise representations of information, and to dynamically adjust the level of detail in the representation. Making it concise by tailoring it for a particular person and/or purpose.
A few basic ideas for weather forecast sites/apps.
Copy-editing marks and annotations. The Signal vs Noise blog argues that copy-editing marks are better than how 'track changes' edits are rendered, and I see this as an example of our current software's generally poor handling of annotation.
A couple of user-interface ideas. 'Highlighting equivalent links' in the browser: if the user mouses-over a link, and the page contains any other links pointing to the same destination, highlight those as well. Hold-and-swipe touch-screen gesture - an advanced gesture that could be utilised on touch-screen devices: the user holds their thumb on the screen while swiping with another finger.
Idea: personal subject-lines for emails. Instead of seeing software as presenting an objective view of information, see it as a personal tool for the user, suited to how they see things and how they want to use it for the tasks they're trying to achieve. For example, instead of an email inbox simply showing the subject-lines of the received emails, why can't the user change what it says for a subject-line, if that made it more informative for them in the context of how they're going to use that email, or where it'd make it easier for them to find it in the future?
Magic Ink: Information Software and the Graphical Interface, by Bret Victor is a powerful argument for why, in designing software, greater emphasis should be given to graphic design and less to interaction. All his writings are quite good, and worth a read.
Visualisations
These aren't necessarily computing-related, but it made sense to me to put these next to the user-interface section.
Minor:
Cecilia Burman provides an effective way of showing what prosopagnosia (face blindness) is like:
"If individuals were rocks, then it's like having to remember the characteristics of each rock, and try to realise when you come across this rock again, from your memories. The page uses photos of different rocks so you can see for yourself the sorts of difficulties involved."
A map someone has done of US states, where states have been renamed for countries with similar GDPs.
An antipodes map someone has made: select a location and see where the exact opposite side of the world is located.
Imagine being able to visualise the exact location of a piece of meat (e.g. a steak) within a 3D rendering of the animal it came from. Or within an animated rendering of the animal, as it's grazing around in a paddock. As a way of making visible a reality we know is true but can't see.
Jon Udell's idea for augmented reality map views (e.g. that could label the details you're seeing outside an airplane window).
Having historical overlays for Google Maps: A service like Google Maps could have a date-slider that allows you to view the world at different times in the past, and see the different country names and boundaries and the like. It could even show a particular culture's view of the world at a particular time (what places they knew of, and what they called them).
Australia is a large land-mass, but the actual size of the populated area of Australia is much smaller. It would be interesting to be able to visualise this, and compare it to the size of other countries. (And of course, it would be interesting to see such details for other countries, too).
Lokesh Dhakar's visual catalogue of types of baseball pitches. Each picture is a simple graphic that intuitively captures the essential character of a type of pitch.
Video of Hans Rosling showing some impressive visualisation of statistics using the Trendalyzer tool. Graphs are used to show the statistics, but interactivity and animation are used to show changes over time, to transition between related perspectives, and to drill down into more detail. There's an impressive flow: one perspective raises certain questions, so he modifies the view to try and get insight into them.
Taking before-and-after shots of things like house construction from the exact same positions and angles would enable better comparison.
Links to some visual illusions. A face that looks calm when viewed up close, but angry when seen from a distance. Invisible changes: images that are continuously changing but which appear static. A large collection of various optical illusions.
Extending cut-and-paste
I think it'd be beneficial for software to better support working with structured information, and to allow users to do so in a fluid way. The following is one possible area for doing this.
Being able to view and interact with the clipboard contents in a window, and being able to edit these contents (and the motivation for this).
Storing clipboard history. Being able to select multiple items in clipboard contents and history. Being able to 'peek' at the next paste item.
How more complicated cut-and-paste operations could be specified, by using CTRL+SHIFT to add modifier keys. How such operations could be specified for text copied from context-menus such as "copy link location" in a web-browser.
Controlling where in the clipboard list to cut/paste to/from, and whether a cut/copy replaces an existing item or inserts a new item, and whether a paste leaves the item in the clipboard or not. Cut/copy the item at the end, or at a specific position (either replacing the item already there, or inserted as a new item there). Paste from anywhere in the list, and 'pop-pastes' that remove the pasted item from the list.
Template paste: paste a 'template' containing the contents of multiple clipboard items.
Formatted paste: when pasting from rich-text documents into plain-text ones, be able to automatically convert rich-text formatting (bold, italics, etc) into plain-text equivalents (e.g. *bold*, /italics/).
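A minimal sketch of how a few of these could hang together (history, pop-paste, template paste, and a crude formatted paste). All the names and behaviours here are illustrative assumptions, not a description of any existing clipboard.

```python
import re

class Clipboard:
    """Toy clipboard with a visible, editable history."""

    def __init__(self):
        self.items = []  # full history, oldest first

    def copy(self, text, position=None):
        # By default append to the end; optionally insert at a chosen position.
        if position is None:
            self.items.append(text)
        else:
            self.items.insert(position, text)

    def peek(self):
        # Look at what the next ordinary paste would produce.
        return self.items[-1]

    def paste(self, index=-1, pop=False):
        # Paste from anywhere in the history; a 'pop-paste' also removes the item.
        return self.items.pop(index) if pop else self.items[index]

    def template_paste(self, template):
        # "{0}", "{1}", ... refer to items in the clipboard history.
        return template.format(*self.items)


def plain_text_paste(rich_html):
    # Crude 'formatted paste': convert simple rich-text markup into plain-text equivalents.
    text = re.sub(r"</?b>", "*", rich_html)
    text = re.sub(r"</?i>", "/", text)
    return text


clip = Clipboard()
clip.copy("Alice")
clip.copy("alice@example.com")
print(clip.template_paste("Contact {0} at {1}"))            # Contact Alice at alice@example.com
print(clip.paste(0, pop=True))                              # Alice (and it's removed from the history)
print(plain_text_paste("<b>bold</b> and <i>italics</i>"))   # *bold* and /italics/
```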
Thinking about software
"Non-functional" aspects of software (like usability) are very important, yet new software is often dismissed because it doesn't do anything that existing software can't. IT people, in particular, have a very strong tendency to compare software only in terms of the functions they can perform. An example of this: the claim that choice of implementation language doesn't matter. Affordances are an important kind of non-functional property of software.
Software is a tool, and we can't predict exactly how users will use tools (a quote relating to this, and an example). We should acknowledge this in the design of software. I believe that, ideally, a piece of software would be a program written in its own very high-level domain language. For example, the WinAmp program you get would be a program in the WinAmp domain language. The grammar of this language would define all the ways we can compose the components of that domain language into valid WinAmp programs. That is, we can customise the program by changing its high-level source code.
Some Alan Kay quotes on computing as the first 'metamedium' that can 'dynamically simulate the details of any other medium'.
Programming
Minor:
Even if you're a solo developer, programming is always a team activity. The other members of the programming team are the future versions of yourself 6, 12, 24... months from now. A reason why you should always write code for others to read.
Defining functions within functions: if function A is only going to be used within function B, the code could be made clearer if the language allowed function A to be defined within B (at the start of it).
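Python happens to allow exactly this, which makes for a convenient illustration of the style the post describes:

```python
def report_totals(orders):
    # format_line is only ever used by report_totals, so defining it here
    # (at the start of the function) keeps the helper right next to its one caller.
    def format_line(name, amount):
        return f"{name:<12} {amount:>8.2f}"

    return "\n".join(format_line(name, amount) for name, amount in orders)


print(report_totals([("widgets", 19.5), ("gadgets", 7.25)]))  # prints a small aligned two-line report
```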
Sequences of lines of code may form a logical unit, but be too lightweight to break out into a separate function. We may want to document the overall function of such sequences, or even just indicate that they form a logical unit. Comments may be used, but there's no good way to specify the span of lines a comment applies to, so it could be useful to have a lightweight way of indicating that a sequence of lines forms a logical unit, and the post discusses how this could be done and how it'd enable those units to be 'collapsed' to provide a more concise view of a function.
When an experienced programmer is trying to learn a new language, what's the most efficient way to present them with what they need to know? I suspect it's to provide concrete examples of the features, which show the syntax and the results you get for particular inputs. Essentially, a collection of simple, representative examples, showing the exact inputs and outputs.
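In Python, for instance, such a crib sheet might just be a series of snippets with the exact results alongside; the particular features shown below are an arbitrary selection for illustration.

```python
# Slicing works on any sequence.
"hello"[1:4]                      # => 'ell'
[10, 20, 30, 40][::-1]            # => [40, 30, 20, 10]

# Dict comprehensions.
{n: n * n for n in range(4)}      # => {0: 0, 1: 1, 2: 4, 3: 9}

# Unpacking.
first, *rest = [1, 2, 3, 4]       # => first == 1, rest == [2, 3, 4]

# String formatting.
f"{3.14159:.2f}"                  # => '3.14'
```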
Teaching Programming
An idea: using a game to teach regular expressions. For example, use regular expressions to attack oncoming patterns of enemies.
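A tiny sketch of what one round of such a game could look like, with enemy 'patterns' as strings and the player's regex as their weapon; the enemies and the scoring are purely illustrative.

```python
import re

def fire(player_regex, wave):
    """Apply the player's regular expression to a wave of enemy 'patterns'."""
    gun = re.compile(player_regex)
    destroyed = [enemy for enemy in wave if gun.fullmatch(enemy)]
    survivors = [enemy for enemy in wave if enemy not in destroyed]
    print("destroyed:", destroyed)
    return survivors


wave = ["zombie", "zombies", "robot", "robot-7", "ghost"]

# The player has to craft a regex that takes out every robot in one shot.
print(fire(r"robot(-\d+)?", wave))   # destroyed: ['robot', 'robot-7'] -> survivors: ['zombie', 'zombies', 'ghost']
```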
Against 'simple' as meaning 'fewer elements'
Software being simple doesn't equate to having fewer features, and software being complex doesn't equate to having more features. Simplicity means being well-tailored to the user's needs.
To explain this viewpoint, I outline the notion of 'qualitative perceptual concepts', and argue that complexity is one of these concepts.
Local vs global simplicity. What can make some software seem simpler, from the perspective of that software by itself (at a 'local' level), can actually make it more complex to use in practice, in the context of the user's use of it and other software it's used in conjunction with (at a 'global' level).
Initial vs standing simplicity. How simple the software is initially, vs how simple it is for an experienced user.
Looking at the value of potential new features in isolation, rather than their potential benefit as part of the entire system, is a reason that software can become too complex.
Others have argued against 'minimal features' view of simplicity, and for simplicity as primarily meaning the software suits the user's perspective and tasks: Joel Spolsky (who also brings in the notion of 'elegance' meaning 'grace and economy at achieving some task'), and Dharmesh Shah (1) and (2, which also argues that software should be 'opinionated' but not 'stubborn').
Avdi Grimm teases out the various different sorts of simplicity, and the tradeoffs involved in each type.
More choices may mean more complexity, but there is some nuance to this. How many choices you need to deal with at once is the bigger issue.
Education
Minor:
A goal for education: giving people confidence in their learning ability.
On Producing High-Quality Work
Minor:
Links to some articles/posts on this.
"It's easy to make something incredible" - link to a blog post on how high-standards are basically all that's required for producing great things.
Steve Martin: "Be so good they can’t ignore you" (i.e. have really high standards)
A bit more indirectly on this point:
An AV Club interview with Ricky Gervais, including his thoughts on creative control
AVC: How do you establish that kind of creative control?
RG: I just demand it. I just simply wouldn't do anything that I wasn't terribly in charge of. I don't let anything go.
'Personas' provide requirements that are too vague (comment on a 37signals post). An example that I see in terms of high standards being central to producing high-quality creative work.
An article on the design process used by Apple's chief designer, Jonathan Ive. Using intense iteration and trying really hard to find any flaws and areas for improvement.
People often argue against having high-standards on the grounds that it's perfectionism, and perfectionists get stuck and can't produce output. This is a failure to distinguish between the 'immobilised perfectionist' and the 'persistent perfectionist'.
Philosophy
Just because something has always been a philosophical question doesn't mean it will always remain so. We may obtain a concrete understanding of the matter, whereby it will become science.
There are learned philosophers but not philosophical experts. Philosophy concerns what we don't have a clear understanding of. It involves discussions and arguments and all the back and forth between these. Progress is in clearing up confusions, and making finer points. This means there's no established knowledge to have expertise in. Instead, being an 'expert' means being more learned in the ins and outs of the web of philosophical discourse. This has significance for the potential for outsiders, not part of the philosophy discipline, to make contributions to philosophical topics.
On writing, presenting, researching, getting things done
Minor:
Paul Graham's essay "Persuade xor Discover": writing to persuade and writing to discover are diametrically opposed, which is important to realise if you wish to discover (direct link).
Some Signal vs Noise posts on communication: on the usefulness of counterintuitive-seeming statements, for getting people's attention; on how "the Curse of Knowledge" makes communication difficult; on describing a slice instead of the whole pie; and on what being a speechwriter is like, and how it's similar to doing graphic design.
A list of authors I think are good at non-fiction communication (written in 2005).
Edward Tufte's presentation tips (direct link). Some presentation-skills lessons from Jobs' iPhone talk: build tension, stick to one theme per slide, add pizzazz to your delivery, practice, be honest and show enthusiasm (direct link).
Richard Hamming's advice on doing research - "You and Your Research" (direct link). I tend to find most advice on doing research a bit ordinary, but I thought this was quite good.
Rough notes on getting things done.
A Signal vs Noise post 'Do it yourself first', on why it's really beneficial for you to have tried to do a kind of work in a company before hiring someone to do it (direct link).
Misc ideas
'use by' labelling on food could be improved by distinguishing between 'use by, if unopened' and 'use by, once opened'.
An idea for between-floor transportation in buildings, combining hop-on-anytime convenience with a small horizontal footprint. In spaces that are too small for escalators, you can have lifts, but they don't have the same hop-on/hop-off convenience. Something that's both compact and has that convenience could make the second or third floors in such spaces more practical for commerce. Could it have other effects, such as helping people in companies spanning multiple floors communicate and collaborate more effectively?
Two thoughts about taxes. 1) Taxes would be better framed as a contribution to society rather than a deduction. 2) Journalism should be funded by taxes, because it's exactly the kind of public good that taxes are intended for -- an important piece of infrastructure that market forces aren't suited to funding (the market has strong biases against good-quality content).
Imagine a large coffee-table book, with one page for each country in the world. For each country's page, imagine a single large photograph. Think of something like a high-school class photo, but with a larger number of people in it -- something like 200 (instead of four or five rows of people, maybe seven or eight). Those 200 people would be chosen to represent the ethnic diversity of the people in the country, and something of the variation within each of the main groups (perhaps 200 people wouldn't be sufficient, and 300 or some other number would work better; and yes, whatever set of individuals was chosen would involve some degree of controversy, but there is positive intent behind this idea and some degree of controversy shouldn't be a roadblock). Obviously, any such thing would be a major approximation of the actual range of people in the country, but the idea is that even a book containing such major approximations would give most people a greater sense of the diversity out there than they currently have. Personally, I think such a thing could be quite interesting. It's obviously something that could be done as a web site, too.
Allowing wiki-style edits on every single web-page. A potentially practical way for this to work, and why it could be useful.
An idea for a multi-touch rhythm game, where the player needs to perform actions like tapping at three positions on the screen at once, or tapping at two locations and then sliding those two fingers along a path.
Smartphone user-interface idea, "Flow forms". For quickly entering data that's a sequence of multiple-choice selections. The sets of multiple-choice options are laid out one after the other, and there is a vertical 'channel' for each option. The option sets automatically scroll up the screen, and the user moves their finger left or right to select an option as it passes. So the user doesn't have to lift their finger, just move it left and right.
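To make the interaction a bit more concrete, here's a minimal sketch of the selection logic in Python. It isn't from the original post -- the names and the details (equal-width channels, one selection recorded per row as it scrolls past) are assumptions purely for illustration.

```python
# Hypothetical sketch of the "Flow forms" selection logic.
# Each row of the form has one option per vertical channel; as a row scrolls
# past the selection line, the finger's horizontal position picks the option
# in whichever channel the finger is currently over.

from dataclasses import dataclass


@dataclass
class Row:
    prompt: str
    options: list[str]  # one option per vertical channel, left to right


def channel_at(x: float, screen_width: float, n_channels: int) -> int:
    """Map a horizontal finger position to a channel index (equal-width channels)."""
    index = int(x / screen_width * n_channels)
    return min(max(index, 0), n_channels - 1)


def run_form(rows: list[Row], finger_x_per_row: list[float],
             screen_width: float = 320.0) -> list[str]:
    """Record the option under the finger as each row scrolls past."""
    selections = []
    for row, x in zip(rows, finger_x_per_row):
        selections.append(row.options[channel_at(x, screen_width, len(row.options))])
    return selections


if __name__ == "__main__":
    form = [
        Row("Size", ["Small", "Medium", "Large"]),
        Row("Milk", ["None", "Dairy", "Soy"]),
    ]
    # Finger x-positions (pixels) at the moment each row passes the selection line.
    print(run_form(form, [60.0, 300.0]))  # -> ['Small', 'Soy']
```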
Reframing how digital content is sold, from you paying for the content itself to paying for the effort that went into creating it.
Minor:
As an alternative scheme for presenting footnotes: write the main body of the text as if it had no footnotes, and at the end of each chapter have a 'footnotes' section where each footnote starts with a recap of the details in the main text that it relates to.
It'd be interesting to have a documentary that covers the differences between fictionalised accounts of police or medical work and the actual character of the work in those fields.
Smartphone app idea, "Battlechat": a competitive real-time text chat game.
Remote-controlled cars or drones could be used like a Logo turtle, for teaching programming concepts. There could be more of an emphasis on navigating the physical environment.
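As a rough illustration of what that might look like (not from the original post -- the DroneClient class and its methods are hypothetical stand-ins for whatever API a real RC car or drone would expose):

```python
# Toy sketch: Logo-turtle-style commands driving a physical vehicle.
import math


class DroneClient:
    """Hypothetical transport layer; a real device's API would go here."""

    def move(self, distance_cm: float) -> None:
        print(f"drive forward {distance_cm} cm")

    def turn(self, degrees: float) -> None:
        print(f"rotate {degrees} degrees clockwise")


class PhysicalTurtle:
    """Logo-style commands that also track the vehicle's estimated pose."""

    def __init__(self, device: DroneClient):
        self.device = device
        self.x = self.y = 0.0
        self.heading = 0.0  # degrees; 0 = facing 'north'

    def forward(self, distance_cm: float) -> None:
        self.device.move(distance_cm)
        self.x += distance_cm * math.sin(math.radians(self.heading))
        self.y += distance_cm * math.cos(math.radians(self.heading))

    def right(self, degrees: float) -> None:
        self.device.turn(degrees)
        self.heading = (self.heading + degrees) % 360


if __name__ == "__main__":
    turtle = PhysicalTurtle(DroneClient())
    for _ in range(4):      # drive a 50 cm square
        turtle.forward(50)
        turtle.right(90)
```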
A laptop with a screen you could raise up might have better ergonomics. (I know at least prototypes of such designs have been made.)
Misc points
History is all that's happened: history isn't just significant events and 'grand narratives', nor does it just include things that happened a fair while ago -- it's literally everything that has happened. This is important, because the important things we can learn from history often involve conclusions and patterns drawn from less 'historic' seeming details.
Structure makes structure: once there's some structure in a system, that will tend to lead to more structure, leading to more structure, and so on.
Does the existence of spam emails help to make us more critical consumers of information?
We see resolve as an expression of free-will, but resolve can be seen in a way that doesn't involve free-will: as cases where, in our conflicting desires, the 'good' ones outweigh the 'bad' ones.
A quote on the importance of paying attention to initial impressions, to help preserve an accurate account of things.
Assuming that 'every little bit counts' is counter-productive. Resources are limited, so priorities are very important.
Exploitation of poor workers is more about poor treatment than low wages. Low wages are not the evil they're usually made out to be. They're genuinely important.
Chemicals can be completely natural. On the mistaken notion of 'chemicals' amongst the general public.
Don't be afraid to express non-unique points of view. We may look down a little on saying things that have been said before, with it being seen as better to develop new understandings and viewpoints. But expressing them can help organise your thoughts.
Strong leadership of the JFK "moonshot" sort might be what's needed to tackle climate change. One incentive for this is that it's an opportunity to be seen as a hero by people in other countries and by future generations.
An analogy to show that, even though qualia seem so inherently mysterious, they need not actually be so: imagine a person in a remote tribe who knows nothing about modern technology, and who is given a handheld games console. Nothing from their knowledge or experience could suggest what the game world is or how it might work.
A short quip: "Unseen practice is easily mistaken for brilliance".
Misc
Minor:
Nassim Taleb on the lack of respect for those not doing steady and predictable work. Matt Inglot on society's attitude towards people who don't have a "real job": "the fear of ... doing something different, something with an unsure ending" (direct link).
Jaron Lanier writes about the anti-intellectualism in today's society, the need to overcome it, and suggestions on doing so. One of the topics he focuses on is spirituality and how that can mesh with a naturalistic worldview.
Daniel Dennett's policy recommendation that every child is taught about all world religions (direct link).
Toxic religions depend on enforced ignorance of the young -- and a religion that can flourish in an informed citizenry is a benign religion.
A post on Kuro5hin arguing that advertising should be taxed, because of its negative effects (direct link).
I agree with the sentiment in Chris Mooney's article "The Monster That Wouldn't Die" (direct link)
I'm tired of preachy retreads of the Frankenstein myth, first laid out in Mary Shelley's 19th-century classic and recycled by Hollywood constantly in films from Godsend to Jurassic Park. I'm sick of gross caricatures of mad-scientist megalomaniacs out to accrue for themselves powers reserved only for God. I'm fed up with the insinuation (for it's never an argument, always an insinuation) that there's a taboo against the pursuit of certain kinds of knowledge and that certain technological achievements -- especially those with the potential to affect life itself -- are inherently "unnatural."
...
I'm extremely uncomfortable with the way in which the weapon of the Frankenstein myth is repeatedly used as a club against modern-day medical researchers, who are seeking to cure people, not to become God.
Kevin Kelly looks at how, when copies are free, you have to sell what can't be copied (direct link). He goes into the sorts of non-copyable things that can be sold. I outline some thoughts about this in terms of the view of technological development as the removal of constraints, and on why that view of technological development is useful.
Related to my PhD
My PhD Scholarship Application.
On distinctions made between 'data', 'information' and 'knowledge'. What we really need is a better understanding of the phenomena. It isn't very helpful to try to make sharp distinctions when our understanding is lacking.
A grab bag of things with some relation to my PhD work:
Paul Graham on discarding the assumption that it's all about us (direct link). I believe this is key to understanding the foundations of information and semantics.
if you want to discover things that have been overlooked till now, one really good place to look is in our blind spot: in our natural, naive belief that it's all about us.
The computer revolution as a switch to imperative descriptions. In "The Structure and Interpretation of Computer Programs", Abelson and Sussman write:
The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology--the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects.
(Wolfram argues similar points)
How explicit does a representation have to be? Is there really a hard distinction between explicitly and implicitly representing some details? I suspect not.