Source link: http://archive.mises.org/11707/the-correct-theory-of-probability/

The Correct Theory of Probability

February 22, 2010 by

While probability theory is thought of as a branch of mathematics, its foundations are purely philosophic, and Richard von Mises, in his great work, developed the correct, objective, or “frequency” theory of probability. FULL ARTICLE by Murray N. Rothbard

{ 39 comments }

Daniel Kuehn February 22, 2010 at 9:25 am

That isn’t very convincing (Rothbard’s extension, not von Mises’s original point). Isn’t this precisely why we talk about contingent probabilities? It’s also why probabilities have error terms (which themselves are probabilities).

Conza88 February 22, 2010 at 9:34 am

What Is the Proper Way to Study Man?
http://mises.org/daily/3605

………

Probability, Statistics, and Truth

Richard von Mises’s great classic, Probability, Statistics, and Truth, effected a revolution in the nature of probability theory during the 1920s and 1930s. “Classical” probability theory considered numerical probability to be derived from “equal ignorance” about the potential events being considered…

James February 22, 2010 at 9:40 am

Wow is this post out to lunch. Bayesian probability theory is no more meaningless than this blog post of yours. Is it meaningless to say “Obama might be reelected”? This statement implies a degree of Bayesian probability, as do practically all statements about the future. Does that statement have no meaning at all to you? Calling a theory “correct” is what is meaningless. Probability is a human invention, as is all of mathematics. There is no “correct” mathematics. What matters is whether the system is internally consistent and has useful applications. Bayesian probability is among the most useful mathematical tools ever invented. Frequentist methods may look more objective because they make fewer assumptions, but purely frequentist results are useless in practice because the reasoning is backwards. People do not want to know the probability of their data given their hypothesis; they want to know the probability of their hypothesis given their data. To deal with this awkward fact, people usually treat frequentist results as if they were Bayesian, leading to many false conclusions.

As a matter of fact, the complete opposite of your statement is true. Social science has long relied on FREQUENTIST statistical hypothesis testing when designing and evaluating research results. This practice has led to thousands of false positives being reported in the literature. Bayesian methods incorporate prior knowledge and give much more accurate results.
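
For concreteness, here is a minimal sketch of the reversed-conditional point, with every number invented purely for illustration (a 1% prior, 5% false-positive rate, 80% power):

```python
# Bayes' theorem: P(H | data) = P(data | H) * P(H) / P(data)
# All numbers below are assumed, illustrative values.
prior_h = 0.01            # P(H): prior probability the hypothesis is true
p_pos_given_h = 0.80      # P(positive result | H): "power" of the test
p_pos_given_not_h = 0.05  # P(positive result | not H): false-positive rate

# The frequentist-style quantity people often report:
print("P(data | H) =", p_pos_given_h)            # 0.80

# The quantity people actually want, via Bayes' theorem:
p_pos = p_pos_given_h * prior_h + p_pos_given_not_h * (1 - prior_h)
p_h_given_pos = p_pos_given_h * prior_h / p_pos
print("P(H | data) =", round(p_h_given_pos, 3))  # ~0.139
```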

You yourself use Bayesian reasoning all the time. All humans do. It is practically required when discussing the future. It is one of the foundations of rationality. It is nothing more than using prior knowledge to predict the future. According to Jeff Hawkins’ recent book “On Intelligence”, prediction is the essence of intelligence itself.

Kerem Tibuk February 22, 2010 at 9:45 am

Probability, or the reason for the existence of probability, is our ignorance of all the different variables affecting the chain of causality.

If one could know all the variables affecting the outcome of a certain chain of events, there wouldn’t be probability but certainty regarding the outcome.

Probability theory is a method of dealing with this ignorance. One may not know all the variables, but if enough variables can be known, a relative number regarding the possible outcomes can be derived.

When a die is thrown, if you knew all the data affecting the die itself, you would know which side would come up.

But if you knew only the number of possible outcomes (which is usually the basic knowable variable of a hypothetical future event), all you can say is that the probability of a certain side coming up is one-sixth.

Or, on top of the knowledge of “the number of all the possible outcomes”, if you knew the die was loaded and how it was loaded, you could get a probability different from one-sixth for a certain side of the die.
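
A small sketch of that contrast, with an assumed loading (the weights below are invented) that makes the six three times as likely as any other face:

```python
from fractions import Fraction

# Knowing only the number of possible outcomes: each face gets 1/6.
fair = {face: Fraction(1, 6) for face in range(1, 7)}

# Knowing additionally that the die is loaded, and how (assumed weights):
weights = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 3}   # the six is three times as likely
total = sum(weights.values())
loaded = {face: Fraction(w, total) for face, w in weights.items()}

print("fair   P(2) =", fair[2])     # 1/6
print("loaded P(2) =", loaded[2])   # 1/8
```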

If you do not know the number of all the possible outcomes, you can say even less.

On the other hand, if you know all the variables, or at least almost all of them, you can call the outcome with near certainty. For example, if you jump off a cliff, you will fall.

So of course probability is not subjective at all, but as objective as it can get.

Carlos February 22, 2010 at 9:58 am

I think R. Mises (or this article’s representation) doesn’t go far enough. Nassim Taleb points out that real-world probability distributions involving complex human interactions can appear to have a certain distribution with very big sample sizes, only to violate that distribution in a crisis (and crises are more frequent than the distribution would assume). But mathematical probability theory isn’t flawed. The results are always dependent on clear assumptions, such as independence of events, which is almost never valid in complex social interactions. Poor application of distributions is the real flaw.

Colin Phillips February 22, 2010 at 10:07 am

I am a statistician. That being said, do your own research.

Frequentist and Bayesian methodologies are both branches in the same science. There is no “correct” branch. They are approaches, and like everything else in science, they are models (read “metaphors”) which allow insight into a situation. Frequentist approaches certainly are very useful, and in many cases will converge on a useful statistic very quickly, but they are not universally helpful.

Consider a weather prediction of 25% chance of rain. According to a frequentist model, this means “On an infinite number of days, about which we could have collected the information we have collected, 25% of those days will turn out to be days on which it rained in this particular area.” Not very useful, is it? We’re *not* dealing with an infinite number of days, we are dealing with one, very real, day, about which we’d like to make a judgement call.

Now, the same statistic, as understood in a Bayesian model: “On this day, we should place 25% of our trust in the theory that it could rain.” Isn’t that a lot nicer? Yes, it makes reference to terms like “trust” and “faith” and “belief”, which, granted, do make me queasy at times. But in order to make decisions, we need to have beliefs about future uncertainty. The Bayesian model allows this where the frequentist does not. That doesn’t make it better, just also useful.
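
To make the decision-making point concrete, here is a tiny expected-loss calculation using the 25% figure; the costs below are invented purely for illustration:

```python
# Assumed, illustrative costs: carrying an umbrella is a small nuisance,
# getting soaked is a larger one.
p_rain = 0.25
cost_carry_umbrella = 1.0   # paid whether or not it rains
cost_soaked = 10.0          # paid only if it rains and you have no umbrella

expected_cost_with = cost_carry_umbrella
expected_cost_without = p_rain * cost_soaked

print("with umbrella   :", expected_cost_with)     # 1.0
print("without umbrella:", expected_cost_without)  # 2.5 -> take the umbrella
```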

El Tonno February 22, 2010 at 10:36 am

Oy my. Absolutism.

This discussion has been done to death in many works on Intelligent Systems. Cf., for example, the excellent introductory chapter of Richard E. Neapolitan’s “Probabilistic Reasoning in Expert Systems”, where various philosophical approaches to probabilistic values are laid out. I remember the conclusion being that one should not actually care too much – 100% frequentism being as impossible as 100% positivism. And the calculus of probability works quite well, even if you apply it to values you prefer to call “likelihood”.

peter February 22, 2010 at 11:40 am

The subject of probability has been haunting me for a long time now in its application to political calculation.

Perhaps one can argue that in the “pure” philosophical realm being able to quantify the actions of individuals in the electorate is desirable. However, absent a moral foundation, calculating utility for whatever policy, for whoever’s benefit, combined with very powerful statistical tools, yields the race to the bottom we now see.

In politics being “correct” in a pure sense demands an understanding of processes which most civilians would deny even when shown unequivocal evidence that their cherished “beliefs” are really just products that have been marketed to them for the benefit of others.

Retaining, or gaining, power on the other hand demands understanding what makes the hoi polloi tick. This “science” cares not for ethics, only victory…and it is almost always easier to capitalize on ignorance and cherished beliefs than to speak truth. Much like the high-end proprietary stock- and futures-trading software employed by Wall St., political parties’ voter-preference modeling seeks profit today (power) over any philosophical or moral restraint…at the expense of killing the goose laying the golden eggs. Much like the Keynesian maxim that individual thrift is good for individuals but bad for a market economy.

I think the whole notion of freedom needs to be re-branded and re-packaged. If the role models for freedom speak a language that only 10 per cent of the population understands and articulate ideas that are now culturally repellent to the liberal/progressive elements, what chance does that idea have of gaining primacy in a society where an equal number completely disagree and a large majority simply lack the tools to understand?

In the pure Platonic sense the ideal needs to be preserved to cast its imperfect shadow on humanity. In the world of politics where the welfare state has been branded as the “ideal” and redistributing from the capable to the needy is trumpeted in spite of its overwhelming failure everywhere it is championed, we need a better plan.

Calling people “lazy socialists” is counterproductive…like asking your wife if she’s tired as opposed to saying “honey, you look terrible”. Both statements to your wife convey similar information; one puts you on the couch for the night, the other opens a dialogue.

While I think many progressives are loopy in their thinking, they disagree. Most of them aren’t evil, just ignorant of history, economics and human nature. (Why that is is a whole other story).

All that said, in sales and marketing, a clear differentiation is made between “features and benefits”. The features of Freedom are obvious (and potentially desirable) to most. The specific benefits, not so much.

Unless and until the benefits of Freedom can be contrasted with the costs of socialism in a context that is more desirable TODAY for that great horde of voters who vote with their wallets for their perceived self-interest, we are in for a rough spell.

Silas Barta February 22, 2010 at 12:19 pm

Oy veh. This is where Austrians are really “out to lunch”.

One word: Bayes’ Theorem. Fortunately, James has already said everything I was going to, so I’ll just defer to him. Well, except to say this: frequentists aren’t even being consistent with their claims of probability: how do you know that a given element really belongs in the reference class in which you previously measured frequencies? What counts as similar enough?

Think about it.

@Colin_Phillips: Consider a weather prediction of 25% chance of rain. According to a frequentist model, this means “On an infinite number of days, about which we could have collected the information we have collected, 25% of those days will turn out to be days on which it rained in this particular area.” Not very useful, is it?

Not only that, but 25% of infinity is infinity! (I think a frequentist would say that the ratio of rainy days to total days will approach 25% as the number of days approaches infinity.)
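
A quick simulation of that limiting-ratio reading, purely illustrative: it draws independent “days” that rain with probability 0.25 and watches the observed frequency settle down:

```python
import random

random.seed(0)
p_rain = 0.25
for n in (100, 10_000, 1_000_000):
    rainy = sum(random.random() < p_rain for _ in range(n))
    print(f"{n:>9} days: observed frequency = {rainy / n:.4f}")
# The observed frequency drifts toward 0.25 as the number of days grows,
# which is the frequentist "collective" in miniature.
```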

@Kerem_Tibuk: If one could know all the variables affecting the outcome of a certain chain of events, there wouldn’t be probability but certainty regarding the outcome.

Actually in the case of systems where quantum effects dominate, there can be unremovable uncertainty — you can prove that no amount of knowledge will enable you to predict the outcome every time. But otherwise, you’re correct.

Guard February 22, 2010 at 12:45 pm

Thanks Peter for that insightful comment. I too have wondered about the validity of statistics, not because of the philosophy or mathematics of it, but because of the “knowns” used as inputs, specifically moral inputs. For example, part of the procedure in laboratory psychological experiments often involves lying to the subject about why they are there, what they are doing, the purpose of the experiment, etc. If lying is a factor in the experiment, can the outcome be entirely true?

Jerome February 22, 2010 at 12:59 pm

To the Austrian economists, here is the analogy:

Austrian economics is to standard economic “theories” as Bayesian statistics is to frequentist methodology.

Richard von Mises’s approach did not solve anything — it just restricted the application of probability to attempt to deal with the pathologies inherent in the frequentist approach. In so doing, he artificially limited it to the point of unusability. I found his book unhelpful in coming to grips with the foundational aspects of probability.

Edwin Thompson Jaynes is the equivalent of Ludwig von Mises in my estimation. Jaynes elucidates a long tradition of Bayesian theory essentially beginning with Aristotle, running through Laplace and into the present. Bayesian theory is opposed by an entrenched tradition and by knee-jerk reactivity, just as Austrian economics is.

Surely you would agree that it would take more than a few cursory hours to come to an appreciation of Ludwig von Mises’s work, its tradition, and its implications? Perhaps months or years of focused thinking? Why would you assume that you are qualified to discourse on probability theory without making a similar effort to comprehend the issues?

geoih February 22, 2010 at 1:34 pm

And here I thought the whole point was that humans are not coins, dice, or molecules of gas, and quantitative predictions as to their expected future actions were little more than wild guessing.

Abhinandan Mallick February 22, 2010 at 1:35 pm

I find it a little surprising that not much attention has been paid to Kolmogorov’s formulation of probability theory by both Austrians and Economic Bayesians.

Perhaps this may provide a better picture of the current state of probability theory…

Richard Harris February 22, 2010 at 1:57 pm

A good example of this theory in practice occurs in the field of medicine. The studies done on any drug will show that it is effective in, say, 60% of patients. Rarely can any drug be found to be effective with substantially higher frequency than 65%.

So doctors by rote will generalize the statistic to all cases as if it is a certainty that the drug works 100% of the time. They do this without bothering to make further investigation into the specifics of any given patient to discover circumstances which might indicate the drug will not be effective for the individual. Thus medical practitioners are guilty of generalizing a probability to a certainty.
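
A toy illustration of why a pooled response rate is not a certainty for any individual patient; the subgroup split and rates below are assumed numbers, not data from any study:

```python
# Assumed, illustrative numbers: a pooled 60% response rate can hide
# very different rates in identifiable subgroups of patients.
subgroups = {
    "A": {"share": 0.6, "response_rate": 0.80},
    "B": {"share": 0.4, "response_rate": 0.30},
}

pooled = sum(g["share"] * g["response_rate"] for g in subgroups.values())
print("pooled response rate:", pooled)   # 0.60
for name, g in subgroups.items():
    print(f"subgroup {name}: response rate = {g['response_rate']:.2f}")
# Prescribing as if the drug "works" with certainty ignores that a patient
# known to be in subgroup B responds less than a third of the time.
```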

peter February 22, 2010 at 2:45 pm

Richard, recognizing that fundamental truth about medicine is what set me off on my mid-life voyage of re-discovery. In fact it is the essence of all of my personal reframing…if the individual is indeed the basis of modern society, how can it be a criminal act to seek self-healing by non-government-approved methods? Moreover, how can it be criminal to advertise such treatments?

In fact I would argue that the evidence for the efficacy of modern (non-emergency) medicine is even less convincing than the evidence for man-made global warming…but the same scientific/statistical methodology is applied to both and doubters decried as heretics…so what’s up? I think you could make a convincing argument that both models are just business plans (based on ignorance and fear) which require the sanction of the State to outlaw dissent and ensure profit…truth be damned. That said, the most fervent disciple of Gore will decry the predations of “big pharma” but miss the elephant in the living room about AGW. Conversely, Gore’s harshest critics will blather endlessly about the “health nuts” and defend mass vaccination in spite of overwhelming statistical “proof” of the many individual harms it causes, and defend the pill industry, even though it kills as many as cancer. I am not for a second saying I know how to square this circle, but I am available if anyone wants to start asking the right questions.

Eric February 22, 2010 at 3:06 pm

So, regarding a 2-spot coming up on a die toss, it is meaningless to say that the probability of each throw coming up two is one-sixth.

Go tell that to all the casinos sitting in the Nevada desert.

I guess it’s meaningless to say: but for the known laws of chance, these buildings wouldn’t have much of a probability of being here.

töff February 22, 2010 at 4:28 pm

> “it is unscientific and illegitimate to apply probability theory to … events of human action”

Well, wtf are we supposed to do? It’s human nature to try to foresee things.

It’s easily said, after the resolution of a probability, that a fractional measurement of it is meaningless. But *before* resolution, we have no other measurement.

Ron Finch February 22, 2010 at 11:06 pm

toff,

“Well, wtf are we supposed to do?”

Logic. Not Math. The insights of the Austrian school will not die, because they explain how things work. Not some of the time, like mainstream econ. Always and everywhere.

There are limits to what it can tell you, but within those limits, it is certain.

EIS February 23, 2010 at 1:53 am

Colin Phillips,

“Now, the same statistic, as understood in a Bayesian model: “On this day, we should place 25% of our trust in the theory that it could rain.” Isn’t that a lot nicer? Yes, it makes reference to terms like “trust” and “faith” and “belief”, which, granted, do make me queasy at times. But in order to make decisions, we need to have beliefs about future uncertainty”

Sure, it’s nicer, and more convenient, but it’s not in any way grounded in reality. Your beliefs are just that, and the subjective probabilities you design are merely illusions for your own comfort. I believe this is the point of the article. Never mind the fact that this deals only with the probability of certain simple lotteries, and not complex organisms which are able to act on purely subjective valuations.

Probability theory tries to tame uncertainty, an impossible endeavor (the crisis underscores this fact).

Kerem Tibuk February 23, 2010 at 2:37 am

Silas,

“@Kerem_Tibuk: If one could know all the variables affecting the outcome of a certain chain of events, there wouldn’t be probability but certainty regarding the outcome.

Actually in the case of systems where quantum effects dominate, there can be unremovable uncertainty — you can prove that no amount of knowledge will enable you to predict the outcome every time. But otherwise, you’re correct.”

I am no physicist but I can evaluate Quantum Physics from a philosophical perspective.

I think the implications drawn from QP are wrong, and this can all be tied to the concept of “ignorance, thus probability”.

I can understand the fact that observation of some phenomenon, although it is real and has an impact in the general causal chain, is impossible (at least for now), because observation itself is a factor that affects the said phenomenon.

But this in no way rules out objective reality and theoretical certainty. Probability plays a role in QP not because it is a central concept of QP but because we are ignorant of the variables regarding QP, and there is no other abstract tool for humans besides probability when you are ignorant of variables.

Also, probability can only be a derivative concept of objective reality and the law of causality, because it implies an objective result, whether one realizes it or not.

When you say there is a one-in-six chance (or probability) of “two” coming up when you roll the die, you necessarily have to reference an objective end, in this case the number “two” on the die being the objective result. If there were no objective result of “two” coming up, the probability of “two” coming up would itself be meaningless.

On another note, QP and Neoclassical Macro Economics are very much alike, and this shows the philosophical problems with both neoclassical Macro and QP.

Both have objects that cannot be observed objectively without affecting the actions (or movements) of the studied objects. For one, the object is the individual, who changes his actions once he knows he is being watched or experimented on. For the other, it is subatomic particles, which change their actions (or natures) when they are observed, necessarily by some force that affects their movement (maybe even their nature).

These are facts, but both Macro and QP derive totally wrong results from them. Instead of acknowledging their ignorance and trying to find other methods of dealing with this problem, they basically claim that the nature of things changes with size (a claim that should be viewed as very simplistic and approached with extreme skepticism for that reason alone). That is why macro contradicts micro, and QP contradicts conventional physics, which depends on the law of causality.

And on a deeper philosophical level, both are a challenge to objective reality. Not outright challenges, as it is very hard to do that without being diagnosed with schizophrenia, but subtle, implied challenges.

Kerem Tibuk February 23, 2010 at 4:32 am

And one more note on probability.

I think probability is very, very important. Not as a technical tool for statisticians but as part of a general understanding, of philosophy, and especially of Ethics.

The ignorance that breeds this tool is here to stay. It is a general condition of humans. The future is always uncertain because man cannot know all of the possible variables that affect the chain of causality that shapes the future. This ignorance, plus rationality and self-awareness, gives us free will.

Regarding ethics, that is, natural-rights ethics based on objective reality, probability has a very important place.

One basic formulation of the ultimate answer to the ultimate question of ethics, “What is the right thing to do?”, is this.

Whatever action by the individual increases the probability of that individual’s survival is the right thing to do, and conversely, whatever decreases the probability of the individual’s survival is the wrong thing to do.

Of course the hard part is to identify whether an action increases the probability of survival or not as there are many variables.

One way of dealing with this is to use general rules, but unlike what Kant thought, it doesn’t stop there.

The act of lying generally decreases one’s probability of survival, because it decreases one’s credibility in the eyes of the others who help one stay alive.

But sometimes just the opposite is true. If a murderer is after you and you are hiding, you do not have a moral obligation to tell the truth regarding where you are. Because the general moral obligation of “not lying” is not an end in itself but a tool for the most important and ultimate goal of staying alive.

So probability theory is very useful in a state of ignorance, and since humans are and will always be in a general state of ignorance regarding the variables that shape the future, probability will always be an integral part of Ethics.

Edward February 23, 2010 at 6:01 am

As a “correct theory” probability is indeed a branch of mathematics. The disagreement concerns the interpretation. The irony is that Ludwig von Mises, if you read him carefully on probability, didn’t adhere to the interpretation of his brother Richard von Mises. The disagreement is somewhat subtle but it is there…

Joshua Griffith February 23, 2010 at 9:28 am

Here’s a practical example comparing frequentist and Bayesian methods: http://www.inference.phy.cam.ac.uk/mackay/itprnn/ps/457.466.pdf
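
Not the worked example in the linked paper, but a toy comparison in the same spirit: estimating a coin’s bias from a handful of flips, with a uniform prior assumed for the Bayesian side:

```python
# Toy data, assumed for illustration: 3 heads in 10 flips.
heads, n = 3, 10

# Frequentist point estimate: the observed relative frequency.
mle = heads / n

# Bayesian estimate with a uniform Beta(1, 1) prior: the posterior is
# Beta(heads + 1, n - heads + 1), whose mean is (heads + 1) / (n + 2).
posterior_mean = (heads + 1) / (n + 2)

print("frequentist estimate   :", mle)                       # 0.30
print("Bayesian posterior mean:", round(posterior_mean, 3))  # 0.333
# The Bayesian answer is pulled slightly toward the prior's 0.5,
# and comes with a full posterior distribution rather than a single point.
```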

Edward February 23, 2010 at 10:08 am

Ludwig von Mises, while he clearly rejected frequentism à la Richard von Mises, wasn’t a Bayesian either, but he agreed with Bayesians that probability is about knowledge versus ignorance. His position remains sui generis, however.

Edward February 23, 2010 at 10:15 am

Note also that it is far from obvious from the review that Rothbard has actually read Gillies’ book, which I happen to have read. It’s just an occasion for Rothbard to rehearse his own not-so-subtle views.

Charles Labianco February 23, 2010 at 10:39 am

Wow! You are absolutely correct. In a few short paragraphs you have destroyed the pseudo-sciences of the so-called social sciences: psychology, sociology, and the worst of them all, political science, a misnomer. It does not take a scientist to realize that politics works along the principles of two things: money and the power to decide. Take all of the statistics in the world and apply them to horse racing: you will lose your shirt. Why? Because you are dealing with the unpredictabilities of animals and humans. You don’t have a laboratory.

The only valid and true theories of human nature come from careful study of humans within their environments. Example: by studying real events involving large groups, you will conclude that, generally speaking, it’s dangerous to be there; people do weird, dangerous things; they can be led “by their emotions”; suggestions can be made that are not reasonable, which they will follow. It’s called “crowd behavior”. Individuals will often do what a “charismatic” leader suggests and what many other members are doing, in spite of those things being contrary to their own personal moral belief system. A person does not need a huge number of “controlled experiments” in order to figure out that, given the events of the twin towers’ destruction, the American people will give up their Constitutional liberties in order to “catch the bad guys”. It has been known and stated in quotations since the days of Caesar of Rome.

And, generally speaking, these blogs are not very helpful. They do not get to the cause of our problems. People simply like to give their unstudied opinions, most of which are not worth the time for anyone to read. They certainly do not influence the blog writers. The causes of our problems are us first, and Congress second. And honestly, even this opinion is not worth much, because it does not give any information that is usable against the totalitarian-thinking, elitist governing people who create hundreds of ways of subjugating us to their control.

Silas Barta February 23, 2010 at 11:01 am

@Kerem_Tibuk: Sorry, but the experiments show otherwise. If you assume that more knowledge would allow you to better predict the result in QP cases, you reach a contradiction. It’s called Bell’s Theorem and the EPR Paradox. Good explanation here, and here’s an excerpt.

Does Bell’s Theorem prevent us from regarding the quantum description as a state of partial knowledge about something more deeply real?

At the very least, Bell’s Theorem prevents us from interpreting quantum amplitudes as probability in the obvious way. You cannot point at a single configuration, with probability proportional to the squared modulus, and say, “This is what the universe looked like all along.”

In fact, you cannot pick any locally specified description whatsoever of unique outcomes for quantum experiments, and say, “This is what we have partial information about.”

So it certainly isn’t easy to reinterpret the quantum wavefunction as an uncertain belief. You can’t do it the obvious way. And I haven’t heard of any non-obvious interpretation of the quantum description as partial information.

Furthermore, as I mentioned previously, it is really odd to find yourself differentiating a degree of uncertain anticipation to get physical results – the way we have to differentiate the quantum wavefunction to find out how it evolves. That’s not what probabilities are for.

Thus I try to emphasize that quantum amplitudes are not possibilities, or probabilities, or degrees of uncertain belief, or expressions of ignorance, or any other species of epistemic creatures. Wavefunctions are not states of mind. It would be a very bad sign to have a fundamental physics that operated over states of mind; we know from looking at brains that minds are made of parts.

In conclusion, although Einstein, Podolsky, and Rosen presented a picture of the world that was disproven experimentally, I would still regard them as having won a moral victory: The then-common interpretation of quantum mechanics did indeed have one person measuring at A, seeing a single outcome, and then making a certain prediction about a unique outcome at B; and this is indeed incompatible with relativity, and wrong. Though people are still arguing about that.

Kerem Tibuk February 23, 2010 at 11:36 am

Silas,

As I said, I am not a physicist, but I can safely say that the probability of missing variables (ignorance) is higher than the probability of subatomic particles acting differently because they are smaller.

Also, another problem is this: since molecules, which behave perfectly well with regard to the law of causality, are supposedly composed of subatomic particles that act completely randomly, at what point do the rules change, and how?

Roy February 23, 2010 at 1:24 pm

Kerem,

I recommend MIT’s OpenCourseWare to remedy your confusion in regard to quantum mechanics.

Local hidden variables have been shown by experiment to be inconsistent with reality.

Sub-atomic particles do not behave completely randomly. I could go on about the time dependent wave function, wave function collapse, etc. but this is not the forum. Go to web.mit.edu/ocw. Physics classes in MIT-speak begin with an 8. 8.04, 8.05 are the first two qm classes.

ron February 23, 2010 at 4:15 pm

The best way to understand Probability Theory may be to begin by reinterpreting it as Improbable Theory.

ChanceH February 23, 2010 at 5:02 pm

Holy Crap. You are making me agree with Silas. That’s not very nice.

If anybody here wants to really say that you can’t put a probability on one-time events, I’d like you to put your money where your mouth is next football season. We’ll make lots of even-money bets, and I’ll take the point-spread favorite in every game, and you can have the underdog.
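
A sketch of why that wager is attractive, assuming, purely for illustration, that the point-spread favorite wins the game outright 65% of the time:

```python
import random

random.seed(1)
p_favorite_wins = 0.65   # assumed, illustrative win rate for favorites
stake = 100              # even-money bet per game
n_games = 256            # roughly one NFL regular season

profit = 0
for _ in range(n_games):
    profit += stake if random.random() < p_favorite_wins else -stake

print("expected profit per game:", round(stake * (2 * p_favorite_wins - 1), 2))  # 30.0
print("simulated season profit :", profit)
# If a probability assigned to single games were really "meaningless",
# there would be no reason to expect this number to be positive.
```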

Kerem Tibuk February 24, 2010 at 3:48 am

Roy,

I have read about QM and know a little about the implications. Since I am not a physicist I cannot evaluate the experiments regarding it; I can only evaluate it and its implications philosophically.

Philosophy is the mother of all sciences, and every natural science should have a philosophical, or at least epistemological, framework or it is utterly meaningless.

The implications of QM, or the claims regarding QM, are most extraordinary. It is not like Einstein improving on Newton’s theory of gravity. It actually flips the very ironclad rules of this universe (like the law of causality and the law of identity) on their head.

And extraordinary claims require extraordinary evidence. Experiments that operate within the mainstream framework, and experiments that give these extraordinary conclusions are not equally weighted. So “experiments show this” is not sufficient.

Also, one of the main philosophical problems with the implications of QM is that it presents probability as an alternative to certainty.

Probabilistic and deterministic are not and cannot be alternatives, because the concept of probability depends on certainty. It is simply disingenuous to use a concept like probability, which is only meaningful in a deterministic world (deterministic, but where ignorance prevails at the same time), and present it as an alternative to certainty.

Anyway, I think QM is meaningless unless it is presented within its epistemological framework. You cannot avoid it, or use an epistemology that QM rejects to support it when it is convenient.

Roy February 24, 2010 at 6:23 am

Kerem,

You claim it is probable that there is a hidden variable in quantum mechanics causing us to interpret very small objects as wave functions. This hidden variable has been ruled out theoretically, through Bell’s Theorem, and experimentally, by actually building Bell’s thought experiment and evaluating the results. Experimentation has shown that local hidden variables are inconsistent with reality.

There are many different interpretations of why the wave function collapses – Copenhagen, many worlds, nonlocal hidden variables, etc. – but the top contenders do not make predictions that have manifested themselves in reality via experimental results yet. This is where the philosophers can make hay. No one knows the why.

James February 24, 2010 at 11:44 am

I think what this post was TRYING to say is that the Austrian school prefers deductive philosophical methods over probabilistic empirical methods. In economics there is a lot of value to this approach because the economic system is too dynamic to study using simple mathematical models. You get more mileage out of basic economic principles like “prices influence behavior”. I totally agree with this.

However, this insight has to do with the specific hydraulic type of dynamism that goes on in many politically relevant areas of economics, not with social science in general, and actually not with all aspects of the economy. In psychology there are plenty of valid uses for probability. For example, I can take a child’s IQ at age 8 and predict their chances of ever getting a Ph.D. Bayesian methods will allow you to do this more accurately than frequentist methods.
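
A hedged sketch of what such a prediction looks like as a simple Bayesian update; every number below is invented to show the mechanics, not a real psychometric result:

```python
# Invented numbers, for mechanics only.
p_phd = 0.02                    # base rate of ever earning a Ph.D.
p_highiq_given_phd = 0.50       # P(high childhood IQ score | Ph.D.)
p_highiq_given_no_phd = 0.10    # P(high childhood IQ score | no Ph.D.)

p_highiq = (p_highiq_given_phd * p_phd
            + p_highiq_given_no_phd * (1 - p_phd))
p_phd_given_highiq = p_highiq_given_phd * p_phd / p_highiq

print("prior P(Ph.D.)          :", p_phd)
print("posterior P(Ph.D. | IQ) :", round(p_phd_given_highiq, 3))  # ~0.093
```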

On the other hand, it isn’t like Bayesian probability is inapplicable to the economy. Wal-Mart uses it all the time to predict how much of X product to order at Y time. Financial companies use it to make millions off of computerized trading. Any business worth its salt is using Bayesian statistics; it is just difficult to apply to the kinds of policy questions Austrians care about.

Peter February 24, 2010 at 9:28 pm

It actually flips the very ironclad rules of this universe (like the law of causality and the law of identity) on their head.

You don’t need QM for that. Try to explain radioactivity according to what you think the “laws” are. How is it that atoms of a radioactive isotope decay at a predictable rate, rather than all at once or completely at random? Pick a particular atom: do you think there’s some “hidden variable” that determines exactly when it will decay? If all the atoms in a sample were produced at approximately the same instant (in a supernova), why don’t they all have the same value for that variable? And if they’re produced at different times and places, why do they all have the exact same distribution of values of that variable?

Myrick Crampton February 25, 2010 at 8:14 am

My last job was high-frequency trading and market making. We extensively used Bayesian and frequentist methods in analyzing and predicting the behavior of the equities market. After six years in that field, I was able to accelerate my retirement by ten years. Yes, it is the case that the market exhibits continuous as well as discontinuous structural changes, but with an a priori conceptual model backed up by rigorous science, one can predict the direction of equities and options very well over many different time scales (FX seems to be much harder). The challenge in this domain now is not predicting the direction of the market as much as getting into a position efficiently with regard to the cost structure inherent in trading.

As the time scale gets longer, it gets harder to predict which stocks will do well and which ones won’t, but it still is common for groups to predict three months out, with consistent profits.

Also note that one of the biggest challenges in the long term is the specter of regulatory “reform”. A big reason why the financial companies must spend more and more time in Wa$hington.

Julien Couvreur April 17, 2010 at 8:34 pm

The most interesting book I read on probabilities is E. T. Jaynes’s Probability Theory As Extended Logic (the first 3 chapters are available as a free PDF).
It derives the Bayesian laws of probability from logic and a few assumptions/constraints. The principle is very elegant.
Like Bayes, his notion of probability is not statistical but relative to certainty of knowledge. He shows that probabilities are an extension of Boolean logic (where true and false are the only values).
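
A minimal sketch of that “extension of Boolean logic” point: when degrees of belief are restricted to the extreme values 0 and 1, the product and sum rules reduce to ordinary AND and OR (independence is assumed here so each rule fits on one line):

```python
from itertools import product

def prob_and(p_a, p_b):          # product rule, assuming independence
    return p_a * p_b

def prob_or(p_a, p_b):           # sum rule, again assuming independence
    return p_a + p_b - p_a * p_b

# At the extreme values 0 and 1, these coincide with Boolean AND / OR.
for p_a, p_b in product([0.0, 1.0], repeat=2):
    assert prob_and(p_a, p_b) == float(bool(p_a) and bool(p_b))
    assert prob_or(p_a, p_b) == float(bool(p_a) or bool(p_b))

# In between, they give graded plausibilities, e.g.:
print(prob_and(0.9, 0.8))   # 0.72
print(prob_or(0.9, 0.8))    # 0.98
```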

anon April 11, 2011 at 7:17 pm

Wow, this article is terrible.

Comments on this entry are closed.
