Improving the spread of truth in the information ecosystem, and more applied cases of reasoning errors
We live in a big 'information ecosystem'. This includes means for storing and communicating information, such as email, websites, mobile phone networks, mass media like TV and radio, and libraries. These enable us to store information in a kind of common pool that others can access. They enable us to discover and learn new information. Schools, universities and other institutions also play a big part in the information ecosystem.
There are finer-grained or more specific elements, too. Regarding the web, there are technologies such as HTML, the hyperlinks it is based on, and search engines. There are sites like Reddit, Facebook, Google News and Wikipedia. There are all the ways we can express data, information and knowledge, from relational databases to different mathematical formalisms, and many others.
Also part of this ecosystem is the actual information content. The information in today's ecosystem is much different from, and greatly expanded compared to, what it contained 500 years ago. There are also institutions and practices such as the scientific method.
This description has just scratched the surface of all the elements and factors that play a role in the information ecosystem.
Shifting gears, you can look at properties of the information ecosystem as a whole or of specific parts of it (e.g. television). How quickly do they enable information to be transmitted? How cheaply? How well do they preserve information over time? Who is able to add information into the ecosystem, and how many people can that added information reach? And so on.
One of my interests concerns the following -- how do the elements of the ecosystem affect the spread of truths and falsehoods? And what can we do to help the spread of truths, and hinder the spread of falsehoods? Not that we could, of course, prevent falsehoods. But what can we do to reduce the degree to which they spread and continue to be propagated among the population?
There are many potential factors that play a role. To give one or two examples, how does the media being run as a business affect things, given that this biases it towards content that will help pay the bills? And regarding what could help improve things, I'd think, for example, that lowering the barriers to accessing information will, in the longer term, have a positive effect.
I'm interested in how we can design tools and infrastructure to help improve the spread of truths and reduce the spread of falsehoods in the information ecosystem.
Some examples of initiatives in this space: the Hypothesis project for annotation of content on the web; the PolitiFact political fact-checking site which "rates the accuracy of claims by elected officials"; and this article by Thomas Baekdal on how journalistic practice should change to address the increasingly misinformed public.
I wrote earlier about an idea that also fits into what I'm talking about: a website providing an authoritative source on current scientific opinion. I think that'd be really useful for linking to in discussions and arguments on the net.
Here's another idea.
Alongside lists of logical fallacies and lists of heuristics and biases, I think there's room for documentation of some more applied kinds of mistaken reasoning and justification. These are errors that might be reduced to logical fallacies or heuristics/biases, but are associated with more specific kinds of reasoning.
Here's an example. A government might propose increasing the tax on cigarettes as a means to address the associated health issues. Regardless of whether you think such measures are effective, the point of them is to reduce the number of people smoking in the medium or longer term. Yet, so often in discussions about it you'll hear people argue against it on the grounds that it's not going to stop people from smoking, as if it were an all-or-nothing matter.
Again, I'll point out that regardless of whether such a tax is effective for reducing smoking, that is at least its intention, and arguments of the kind I mentioned simply get that intention wrong. It's a common pattern that occurs whenever there's any talk of a measure to reduce some thing or activity that is seen (by at least some people) as harmful.
Treating reduction as a matter of outright prevention is an example of these kinds of 'more applied' mistaken reasoning or justification. I'm not sure whether this specific case has been given a name, but if it has, it doesn't seem to be very well known. The same goes for this class of more applied kinds of reasoning errors.
In any case, I think it would be useful to have a name both for the class and for the specific instances, and to have a website that documents them. That way people could link to them in online discussions where they're pertinent. There could be a page for each of these errors, listing real-world examples - such as when measures were proposed that people said wouldn't stop X, but which did in fact turn out to reduce the amount of X over time. The examples would provide evidence that the reasoning is in fact erroneous.
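To make the per-error page structure concrete, here's a minimal sketch in Python of what a single entry on such a site might record. This is purely illustrative - all the class names and fields are my own assumptions, not part of any existing project:

```python
from dataclasses import dataclass, field


@dataclass
class RealWorldExample:
    """A documented case where the erroneous argument was used."""
    claim: str    # the argument as made, e.g. "the tax won't stop people smoking"
    outcome: str  # what the measure actually aimed at or achieved
    source: str   # a citation or link for the case


@dataclass
class AppliedError:
    """One page on the hypothetical site: a named applied reasoning error."""
    name: str                        # e.g. "Treating reduction as prevention"
    description: str                 # the mistaken pattern of reasoning
    underlying_fallacies: list[str]  # related formal fallacies or biases, if any
    examples: list[RealWorldExample] = field(default_factory=list)


# Sketching the smoking-tax example from the text as one entry:
entry = AppliedError(
    name="Treating reduction as prevention",
    description="Arguing against a reduction measure on the grounds "
                "that it will not stop the activity entirely.",
    underlying_fallacies=["false dilemma / black-and-white thinking"],
)
entry.examples.append(RealWorldExample(
    claim="A cigarette tax won't stop people from smoking.",
    outcome="The measure's aim was to reduce smoking, not eliminate it.",
    source="(illustrative placeholder)",
))
```

The point of separating the error's description from its list of examples is that the examples are what does the persuasive work - they're the evidence that the pattern of reasoning fails in practice.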
The idea would be to make the site as comprehensive as possible, so that it could become a 'one-stop shop' and gain enough attention that people would actually refer to it in practice.
Here's another example of these kinds of more applied errors. A person has claimed, from their personal experience, that X was the case. This claim is disputed, and others argue for its validity on the grounds of "why would they lie about or make up X?". There are many documented cases where people have done just that, so this argument by itself doesn't hold up. Being able to point someone to a big list of such cases could help in getting them to accept that point.
As mentioned earlier, it might be true that underlying these errors are kinds of fallacies or biases, but the point here is not to get at the most fundamental causes of the problems; it is to have a resource that maps more directly onto the kinds of errors that occur in practice, and which is therefore more useful to link to.
In addition to a site that documents these applied errors, there could be sites devoted to particular disputed topics (global warming, for example), that look at how such errors appear in discussions of these topics. Because such topics are political, I think it'd be very important to make these discussions of them totally separate from the basic documentation of the applied errors. There could be multiple sites/pages devoted to each topic, as there are likely to be different takes on the applied errors that come up in discussions of them.
(If anyone who knows that my PhD is on understanding the nature of information happens to read this: what I'm doing isn't related to the topic of this post. It's concerned more with the fundamentals of what information is and how a system can understand the meaning of information.)