Heh. I found this article:
http://mises.org/journals/liberty/Liberty_Magazine_March_1998.pdf#page=45
The author models government with a bit of integral calc and determines that government doesn't oscillate but reaches an equilibrium point for any given level of pro-government sentiment.
Not sure that this has any real value, but I thought it was cute. If you know a bit about integral calc and logit functions then you might get a chuckle.
A warning about the erroneous page order would have been nice
Woops, yeah. Skip a page after the first page :)
No worries...fixed now.
Actually.... this is a remarkably good idea. It's the old "fight fire with fire" idea. Dressing up sociological theories in mathematical equations is just a way to obfuscate and dazzle. Two can play that game.
Clayton -
Also, there's no guarantee that government follows a logistic growth pattern.
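For reference, here's a minimal sketch (my own toy numbers, not from the article) of the logistic growth pattern being assumed: growth slows as the level approaches a carrying capacity K, so the system settles at an equilibrium rather than oscillating.

```python
# Toy discrete logistic growth: g grows quickly when small, then levels off
# at the carrying capacity K. r and K are arbitrary illustrative values.
def logistic_step(g, r=0.5, K=100.0):
    return g + r * g * (1 - g / K)

g = 1.0
for _ in range(100):
    g = logistic_step(g)
print(round(g))  # 100: the level converges to the equilibrium K
```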
@DavidB: The problem with applying equations to human behavior (there are problems with applying them to any living system, but human beings are particularly problematic) is that human behavior is not simply an input-output system. People get confused on this point because they conflate causality with input-output systems. All well-defined input-output systems are causal, but not all causal systems are input-output systems.
What do I mean by this? Consider the following toy equation:
y=f(x)
The dependent variable (y) is the output of f; the independent variable (x) is the input of f. The function f is an input-output system. As long as it always produces the same y given the same x, it is a well-defined input-output system. I think you can see the problem here... human beings do not always produce the same output given the same input, even though we are a causal system. The problem is, of course, that we are never being given the very same input. But then we know that is impossible... as the old proverb says, "you can never step into the same stream twice." So, right from the get-go, we see that the application of mathematical functions to human behavior is worse than useless; it is misleading and obfuscatory.
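To make the distinction concrete, here is a small illustrative sketch (my own invention, not from any Austrian text): a pure function is a well-defined input-output system, while a system whose internal state is changed by every input is causal yet gives different outputs for the same input.

```python
def f(x):
    """Well-defined input-output system: same x always yields the same y."""
    return 2 * x + 1

class Agent:
    """Causal but not input-output: the response to the same stimulus
    depends on hidden, ever-changing internal state."""
    def __init__(self):
        self.history = []

    def respond(self, stimulus):
        self.history.append(stimulus)        # every input changes the agent
        return stimulus + len(self.history)  # so repeating it changes the output

assert f(3) == f(3)                # repeatable
a = Agent()
print(a.respond(3), a.respond(3))  # same stimulus, different responses
```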
The Austrians say it by saying "there are no constants in the social sciences," which is, I think, a bit opaque. Is not average body temperature a constant, for example? The Austrians would object that body temperature is not a behavior, but the distinction is lost on empiricists, so it's a moot point. The issue is that even if there were an input-output function, you could never give it the very same inputs. This means the input-output function would be useless. In addition, if you don't have an input-output function and you want to derive one empirically, you can't, for the simple reason that it is impossible to characterize human behavior. Characterization necessarily depends on measuring the outputs generated by a given input (e.g. characterizing a transistor).
Finally, to drive the final nail in the coffin: it is provably impossible (yes, provably!) to derive by analytical means an input-output function describing human behavior. This is a non-obvious consequence of Gödel's incompleteness theorems, which Chaitin's work shows more clearly.
@Clayton, great points.
I think what you said was that for y = f(x) to produce the same y for the same x, one would have to actually get all of the relevant data into x, and the known universe is the only computational system with sufficient capacity to achieve that :).
I hope I didn't sound like I was saying that. Here's an example of what I mean.
Think of supply/demand and prices. A price that rises is a signal. My question is: can you diagram the propagation of that signal? We don't know that it will cause a person to stop buying the good, but we do know that there is more demand or that there is less supply. We also know that it could be an effect of money.
Can we diagram anything about the flow of information in economic systems? Can we diagram anything about the way in which information flows, or the effects of power and how they send signals in political systems?
An example of the same kind of idea in electrical circuits is here. The larger discussion is about potential alternative computing models that could be used to solve the kinds of problems we think of as involving intelligence. But at about 25:30, there's a specific discussion of a computing concept called propagators, and he discusses the difference between how you teach a theory-of-circuits class and how engineers actually think about the circuit in practice. You might find it interesting, as I believe it illustrates my point.
One of the points I'd make is that the price quantity isn't necessarily an interesting specific datum. In other words, we don't know any way to quantify some mysterious thing called demand. But we can describe the effect on this mysterious thing by propagating a known quantified value (or a relative relation) through the system. So not only does a price now reflect a propagation of a change in demand or in supply from the past; we know that the change in the price also propagates back out to the system and acts as a pressure on supply and on demand. We can't quantify D for a specific person, but we know that if Pnow < Pearlier, then for each man M who receives the new information, Dnow > Dearlier. Does that make sense?
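A hypothetical sketch of that kind of ordinal propagation: no quantities at all, only directions of change, propagated to each actor who receives the signal. The names and the rule encoding are invented for illustration.

```python
# Toy qualitative propagator: we track only the direction of change.
UP, DOWN, UNKNOWN = "up", "down", "unknown"

def demand_response(price_change):
    """Law of demand, stated ordinally: a falling price propagates as
    rising quantity demanded to each actor who learns of it."""
    if price_change == DOWN:
        return UP
    if price_change == UP:
        return DOWN
    return UNKNOWN

actors = ["Alice", "Bob", "Carol"]
signal = DOWN  # Pnow < Pearlier
responses = {m: demand_response(signal) for m in actors}
print(responses)  # each informed actor: quantity demanded 'up'
```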
I'm not sure if it does or not. Another point I'd make: one of the things we avoid is using mathematics to talk about economic phenomena, but there are versions of mathematics, like set theory and logic, which can in fact operate on concepts of change and relative change without requiring quantified change. And I think it's reasonable to assume the human brain is actually set up to work that way, not to work off of quantification. We don't think about how many feet down it is; we get a sense from historical data of "too high" or "safe" when we evaluate whether or not to climb down.
Anyway, not sure if I'm making sense. But I thought his point about circuit diagrams was spot on. We do similar diagramming in computer programming to think about the relationships between classes and objects in a system, and construct diagrams about signal propagation through a specific process.
Other computer modeling mechanisms, like neural nets, operate on similar principles, where a specific input to a node conditions one or more outputs but is not the only input that does so.
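A single sigmoid unit illustrates the idea: the output is conditioned jointly on several inputs, none of which determines it alone. The weights here are arbitrary illustrative values.

```python
import math

def neuron(inputs, weights, bias):
    """One node of a tiny neural net: a weighted sum of all inputs,
    squashed through a logistic activation into (0, 1)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Both inputs (and the bias) jointly condition the output.
out = neuron([1.0, 0.5], [0.8, -0.3], 0.1)
print(out)
```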
DavidB : @Clayton, great points. I think what you said was that for y = f(x) to produce the same y for the same x, one would have to actually get all of the relevant data into x, and the known universe is the only computational system with sufficient capacity to achieve that :). I hope I didn't sound like I was saying that. Here's an example of what I mean. Think of supply/demand and prices. A price that rises is a signal. My question is: can you diagram the propagation of that signal? We don't know that it will cause a person to stop buying the good, but we do know that there is more demand or that there is less supply. We also know that it could be an effect of money. Can we diagram anything about the flow of information in economic systems? Can we diagram anything about the way in which information flows, or the effects of power and how they send signals in political systems?
But to what end? I know why governments might want to track this sort of information, but I can't see how it will help a voluntary business run any more profitably. And we already have industries that do something very much like what you're describing; they're called futures markets. But rather than trying to distill real-world price information into a monster "God equation," they simply use a wide array of tools (mostly boiling down to gut instinct/experience) to make predictions well enough to actually turn a profit... for those traders that don't bankrupt themselves, that is.
DavidB : One of the things we avoid is using mathematics to talk about economic phenomena, but there are versions of mathematics, like set theory and logic, which can in fact operate on concepts of change and relative change without requiring quantified change. And I think it's reasonable to assume the human brain is actually set up to work that way, not to work off of quantification. We don't think about how many feet down it is; we get a sense from historical data of "too high" or "safe" when we evaluate whether or not to climb down.
Yes, there is definitely room for expansion of Austrian methodology with ordinal mathematics (even inequality-based mathematics could be useful). However, the primary difficulty will be choosing metrics that are meaningful and properly circumscribed with respect to subjective valuation and temporal variation in subjective valuation.
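As a toy example of inequality-based (ordinal) mathematics: only pairwise "prefers" comparisons are defined, never cardinal utilities, yet a complete ranking still falls out. The goods and the ordering are invented for illustration, and of course any real ranking is subjective and varies over time.

```python
from functools import cmp_to_key

def prefers(a, b):
    """Hypothetical pairwise preference of one actor at one moment:
    negative if a is preferred to b, positive if b is preferred to a.
    No cardinal utility is assigned anywhere; only order matters."""
    ranking = ["water", "bread", "gold"]  # assumed momentary ranking
    return ranking.index(a) - ranking.index(b)

goods = ["gold", "water", "bread"]
print(sorted(goods, key=cmp_to_key(prefers)))  # ['water', 'bread', 'gold']
```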
Hi guys, I think it's quite interesting the way the discussion is turning. I recently saw the UCLA interview Hayek gave with Jack High. When asked whether he saw any role for mathematics in economics, he responded in a most interesting way. He mentioned how most economists mistakenly thought of mathematics as concerning quantity, when really, according to a mathematician he cites, it is more broadly about the investigation of patterns. I think this might be a most interesting avenue to explore, especially once we consider the more elaborate avenues of economics, for instance the patterns of inter-relations between price and capital structure.
"When the King is far the people are happy." Chinese proverb
For Alexander Zinoviev and the free market there is a shared delight:
"Where there are problems there is life."
Link?
Clayton : But to what end? I know why governments might want to track this sort of information, but I can't see how it will help a voluntary business run any more profitably. And we already have industries that do something very much like what you're describing; they're called futures markets. But rather than trying to distill real-world price information into a monster "God equation," they simply use a wide array of tools (mostly boiling down to gut instinct/experience) to make predictions well enough to actually turn a profit... for those traders that don't bankrupt themselves, that is.
I'm trying to figure out "to what end." On one level, I'd say, "I want to." There are many things man has done just because he wanted to. Many have little usefulness in the world other than to serve as data about what doesn't work. However, if it has any value, it might come from applications that others put it to, that I've not imagined... if it's useful at all. It could also be completely flawed and useless (think Marxism or econometric statistical models) and still be latched onto, misapplied, and cause great destruction. A technology that has been misapplied would be nuclear physics with regard to nuclear weapons.
But the fact that someone might hit another man in the head more efficiently with a new and improved hammer doesn't mean a man might not try to build a hammer.
What's been leading me there is some stuff I've done just to try to express relations that I hear, that can be hard to explain clearly in words, if only to express what we mean by valuing. Evaluating a set of potential actions is best described as a function, in that logically I put in a set of potential actions, and the function returns the most valued action, which becomes the action performed now. So what can we know about what goes on here? Well, we know the set of potential actions always contains "no action"; that seems to make sense. Maybe it isn't true; something to think on. We know that the action being performed is currently valued more than any other action considered. We know that action aims at improving the expected conditions in reality at some point in the future. Therefore the discarded actions (the actions not taken) would have created realities that are valued less than the reality we are currently acting to create.
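The evaluation function described above might be sketched like this (a toy of my own; the numeric values just stand in for an ordinal ranking of expected futures, and "no action" is always a member of the set):

```python
NO_ACTION = "no action"

def choose(potential_actions, value):
    """Given a set of potential actions and a valuation of the expected
    future each brings about, return the most valued action.
    NO_ACTION is always among the candidates."""
    candidates = set(potential_actions) | {NO_ACTION}
    return max(candidates, key=value)

# Illustrative stand-in valuation (ordinal ranking, not real quantities).
value = {"drink": 3, "walk": 1, NO_ACTION: 0}.get
print(choose(["drink", "walk"], value))  # 'drink'
```

Note that with an empty set of proposals the function still returns "no action", which matches the intuition that inaction is always available.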
In another thread I drew a diagram to try to explain a substantive difference between coercive social action and cooperative social action, in terms of how one party's actions create potential future conditions that the target of the social action is expected to perceive and value in very specific ways.
Meaning that pointing a gun at you takes away a future that you would have reasonably expected, and is used to compel an action you prefer to the threatened reality but did not prefer to the unthreatened reality. Convincing you of something through argumentation has the same effect on your action, but if I accurately convey information you did not have and needed, it's not coercion. If, on the other hand, I deceive you through communication, then the offered future that I've substituted for the one you expected is a lie; that's fraud. Also the same effect, but in this case we consider it to be criminal. Again, I could diagram these out more clearly in some form of semantic logic language. I'm not sure one exists that works as-is, but it'd be neat to try.
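A hypothetical formalization of that distinction, treating each interaction as a transformation of the target's option set (a mapping from options to the value of the expected future; the options and payoffs are invented):

```python
def threaten(options, demanded, penalty):
    """Coercion: the unthreatened futures are degraded, except the
    future attached to compliance with the demand."""
    return {o: (v if o == demanded else v - penalty) for o, v in options.items()}

def inform(options, option, revised_value):
    """Persuasion: accurate information revises the valuation of one
    option without degrading the others."""
    new = dict(options)
    new[option] = revised_value
    return new

options = {"hand over wallet": -10, "keep walking": 5}
t = threaten(options, "hand over wallet", 100)
print(t)  # compliance (-10) now beats the threatened alternative (5 - 100)
```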
I'm not sure what all we can do with stuff like this, but modeling exchange behavior in these terms might be interesting. Anyway, understanding value in this way and modeling some of the word explanations of Mises in a more formal manner might help clear up the edges and the arguments. One might be able to construct a more formal modeling of Marx's theories and Mises's theories such that proofs and errors could be demonstrated in a formal way, instead of as they are now with informal logic. I'd like, for example, to find ways of expressing mathematically the role of labor in value, instead of letting the Marxists consistently go on and on about labor being the source of value.
From such logical theorems, I believe one could construct interesting models for doing simulations. It doesn't mean that the simulations would accurately reflect specific events in reality. But one might be able to demonstrate the outcome of technical designs (laws, courts, norms, company structures, production methods, financial markets) such that one can understand how it might function in reality. Think of it like using CAD tools to design cars, and then running the models through aerodynamic simulations or crash simulations.
Just some thoughts, not saying I'm right, or that it's doable. But as that video I linked mentioned, we need more models for computing.
If you watch that entire video, he draws into sharp contrast how weak we are at computation when you compare what modern computers do to what the human mind does. He then points out how the mind uses multiple overlapping models of problem domains to arrive at "sufficiently confident" solutions instead of "right" solutions, and that perhaps this is a clue to better computation, which in the end is decision making.
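The "sufficiently confident instead of right" idea can be sketched as several crude overlapping models voting; the agent acts once they agree well enough rather than seeking one exact answer. The models, scene features, and threshold here are entirely made up for illustration.

```python
def estimate_depth(models, scene, threshold=0.66):
    """Collect a verdict from each crude model; act on the majority
    verdict only if agreement clears the confidence threshold."""
    votes = [m(scene) for m in models]
    top = max(set(votes), key=votes.count)
    confidence = votes.count(top) / len(votes)
    return top if confidence >= threshold else "look again"

# Three rough, overlapping cues: shadow, texture, motion parallax.
models = [
    lambda s: "too high" if s["shadow"] > 5 else "safe",
    lambda s: "too high" if s["texture"] < 2 else "safe",
    lambda s: "too high" if s["parallax"] > 3 else "safe",
]
print(estimate_depth(models, {"shadow": 8, "texture": 1, "parallax": 4}))
```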
An apodictic, a priori science is a great tool. I think it has more it can tell us about the world than we currently realize. Perhaps we can figure out how to generate models it can actually say something about. Perhaps even trying and failing will give us interesting information.
What if the US CBO had to generate a signal model that demonstrated the economic forces generated by a specific budget plan for a program, and there was scientific verification and validation of that plan? Given price interference and pressures generated by Obama's Healthcare plan, can you imagine any economist could support it?
I'm not saying it would affect anything, but if projections based on Austrian analysis of future impact (not in raw dollars but in distortions to the market) consistently came true 2-5 years after a plan came into effect, perhaps we would find that more and more members of the public would start using such analysis in their decision-making about which programs and politicians to support.
I'm just grasping at straws about how to generate any additional tools for the libertarian toolkit that might undermine government interference in the market.
Bleh... another randomized brain dump
I'm going to have to watch these videos completely.
Hayek : That [has] led me to my latest development, on the insight that we largely had learned certain practices which were efficient without really understanding why we did it; so that it was wrong to interpret the economic system on the basis of rational action. It was probably much truer that we had learned certain rules of conduct which were traditional in our society. As for why we did, there was a problem of selective evolution rather than rational construction.
Hayek is referring to a fundamental difference between the way he and Mises viewed rational thought and social behavior. I completely agree with everything in this statement. I would argue that rational thought, more specifically, is what we apply to understand why things go wrong. Think of it as a natural manifestation of Popper's falsification principle: something we do because something failed, but have no need to do when things go right.
http://hayek.ufm.edu/index.php?title=Jack_High&p=video1&b=1664&e=3521
@DavidB: I'll reply in detail later, just wanted to respond to this, "instead of as they are now with informal logic."
I think you're incorrect that Austrian arguments are informal. Formal versus informal has nothing to do with the use of non-Latin symbols or non-English words as any symbol only serves the function of distinguishing itself from other symbols. Rather, it is the rigid adherence to rules of definition, implication and conclusion as well as the use of an explicit set of axioms that distinguishes formal from informal reasoning. Austrian arguments are, in fact, formal logic.
Hayek again :
This is, incidentally, another reason why my views have become unpopular: a conception of scientific method became prevalent during this period which valued all scientific fields on the basis of the specific predictions to which they would lead. Now, somebody pointed out that the specific predictions which [economics] could make were very limited, and that at most you could achieve what I sometimes called patterned predictions, or predictions of the principle.
This seemed to the people who were used to the simplicity of physics or chemistry very disappointing and almost not science. The aim of science, in that view, was specific prediction, preferably mathematically testable, and somebody pointed out that when you applied this principle to complex phenomena, you couldn't achieve this.
Another case where Mises and Hayek were thoroughly in agreement. I'm intrigued by suggestions by Hayek that they (Mises and Hayek) viewed the underlying epistemological foundations of Economics (social science) very differently. I'm going to have to look for the substance of that difference.
Clayton : Rather, it is the rigid adherence to rules of definition, implication and conclusion as well as the use of an explicit set of axioms that distinguishes formal from informal reasoning. Austrian arguments are, in fact, formal logic.
I'd simply reply that I'd like to see them structured as such, because in my discussions with Marxist proponents on this forum, I find myself trying to express these things in such formal ways, so that the implications can be seen by pointing at the mathematical form and writing a simple sentence, instead of quoting a paragraph or three of Mises and then spending 10 posts arguing about what it really meant...
I will say that if I read a passage by Rothbard on action and a passage by Mises on action, I can see the congruence, but the wording will be very different. I've seen others get into odd arguments about apparently contradictory statements (in English) by Mises that may in fact have an underlying form expressible in a formal manner. Now, we agree that the statements are in fact consistent, logical, and true (I think that's what you mean by formal logic above), but I don't think this is the same thing as what I mean, which is having a formal semantics and grammar for expressing the statements, givens, and deductions in mathematical form. I guess it might be interesting to look to see if such things have been done.
Hi David, I just want to let you know that much of what you've said above resonates with me, and I am currently trying, in my own feeble way, to do some of it.
I'm currently working on a somewhat history-of-thought-based price theory dissertation at a neoclassical school, and am looking to demonstrate Böhm-Bawerk's theory of value and price as a logically consistent, sounder, algorithmic price theory that can then be used for analysis via a computational model built from sound praxeological rules as inputs, allowing the tracing of movements in price and capital structures, which then yields many nontrivial consequences missed by mainstream macro and micro frameworks. If it ends up half decent, I'd be happy to send it to you when I finally finish it. (I have till the end of September.)
DavidB : I'm trying to figure out "to what end." On one level, I'd say, "I want to." There are many things man has done just because he wanted to. Many have little usefulness in the world other than to serve as data about what doesn't work. However, if it has any value, it might come from applications that others put it to, that I've not imagined... if it's useful at all. It could also be completely flawed and useless (think Marxism or econometric statistical models) and still be latched onto, misapplied, and cause great destruction. A technology that has been misapplied would be nuclear physics with regard to nuclear weapons. But the fact that someone might hit another man in the head more efficiently with a new and improved hammer doesn't mean a man might not try to build a hammer.
I think this is a naive view regarding the costliness of ideas and information, as well as the pressing nature of hedonic wants. Curiosity is an end like any other, and satisfying it can lead to satisfaction in the general sense, but your curiosity will never displace your food appetite in priority. I think history is instructive in this regard. A wealthy man (usually a king or noble) with a taste for scientific speculation might decide to patronize a scholar he is particularly fond of and thus free that scholar to do "pure research." Today, we simply take this kind of thing for granted and imagine that it's in the ordinary course of things that there should be lots of people who do nothing but sit around all day and speculate on things that literally have no conceivable application. But this is actually extremely costly (think of all the things consumed by the speculative class as a group), and we only have such a large population of such people because of tax subsidy, which is a pretty horrid reason in my opinion.
If you can convince someone to pay you to engage in useless speculation, then more power to you. But my guess is that, outside of tax-funded grants, you're not going to find very much demand for such "services."
I don't think these arguments hold up to scrutiny. If you discard the intentional stance (i.e. the concept of action/purposeful behavior as an axiomatic category in its own right), then there really is no distinction between the mugger holding a gun in your back, the greedy businessman who won't sell you a glass of water at a price you can afford when you are dying of thirst, or even the candy-shop owner who won't sell you a lollipop for less than a dollar because, well, it's his lollipop. All these people are impeding you from getting what you want, so, in the purely materialistic view of things, they're all the same to you: obstacles to your desired state.
This is the whole reason why Mises's idea of human action is so profoundly unpopular. It is simply not allowed to reason about humans as frankly purposeful beings.
modeling some of the word explanations of Mises in a more formal manner might help clear up the edges and the arguments.
You seem to equate formalism with material reductionism. If you want material reductionism, then yes, you're going to be disappointed with Austrian methodology. But Austrian economics is formal. It's just not reductionist.
as they are now with informal logic.
It's not informal.
It doesn't mean that the simulations would accurately reflect specific events in reality.
Then what the hell is the point? I have two programs on my computer that simulate the motions of the heavens (Stellarium and Celestia). Neither I nor anyone else has a use for a program that simulates "something like" the motions of the heavens "but not actually" the motions of the heavens. What possible use could that be???
But one might be able to demonstrate the outcome of technical designs (laws, courts, norms, company structures, production methods, financial markets) such that one can understand how it might function in reality. Think of it like using CAD tools to design cars, and then running the models through aerodynamic simulations or crash simulations.
But we already know that's impossible. Please see my post above regarding the difference between input-output functions and causal systems.
what modern computers do
Modern computers are abysmally stupid. NB: I'm a computer engineer.
An apodictic, a priori science is a great tool. I think it has more it can tell us about the world than we currently realize. Perhaps we can figure out how to generate models it can actually say something about. Perhaps even trying and failing will give us interesting information. What if the US CBO had to generate a signal model that demonstrated the economic forces generated by a specific budget plan for a program, and there was scientific verification and validation of that plan? Given price interference and pressures generated by Obama's Healthcare plan, can you imagine any economist could support it?
Please read this. You have an extremely naive view of the issues.
You mean like this or this?
I recommend you try to get a deeper appreciation of what already exists in the toolbox before trying to reinvent the wheel. I'm all for extending the toolbox but what I'm seeing so far from you looks counter-productive.
abskebabs: Hi David, I just want to let you know that much of what you've said above resonates with me, and I am currently trying, in my own feeble way, to do some of it. I'm currently working on a somewhat history-of-thought-based price theory dissertation at a neoclassical school, and am looking to demonstrate Böhm-Bawerk's theory of value and price as a logically consistent, sounder, algorithmic price theory that can then be used for analysis via a computational model built from sound praxeological rules as inputs, allowing the tracing of movements in price and capital structures, which then yields many nontrivial consequences missed by mainstream macro and micro frameworks. If it ends up half decent, I'd be happy to send it to you when I finally finish it. (I have till the end of September.)
I'd absolutely love to see it, I'm sure that anything you do here would be interesting and useful. Even if it's "not decent", it's going to have interesting and useful information in it.
I'm currently spending time reading through different types of logical notation, to see if there's some existing formal language that is sufficient to express the ideas.
In addition, I've been looking at different programming languages that are designed for or have facilities for logical programming, and thinking about how I might do the same thing.
If I were to model a human brain, I might have an object called a knowledge base connected to different functions that use it: one that generates potential actions, one that predicts future conditions based on these generated actions, and one that compares two future conditions and returns one. It would be interesting to construct a super-simplistic simulation based on these simple concepts. Oh wait, we have things like that (AI in games).
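A minimal runnable sketch of that architecture, with an invented toy world: a knowledge base feeding the three functions (generate potential actions, predict the future each brings about, compare two futures).

```python
kb = {"thirst": 7, "water_nearby": True}  # toy knowledge base

def generate_actions(kb):
    acts = ["rest"]              # resting (near-inaction) is always available
    if kb["water_nearby"]:
        acts.append("drink")
    return acts

def predict(kb, action):
    future = dict(kb)            # expected post-action state of the world
    if action == "drink":
        future["thirst"] = 0
    return future

def better(f1, f2):
    # compare two futures; in this toy world, less thirst is preferred
    return f1 if f1["thirst"] <= f2["thirst"] else f2

chosen, best = None, None
for act in generate_actions(kb):
    f = predict(kb, act)
    if best is None or better(f, best) is f:
        chosen, best = act, f
print(chosen)  # 'drink'
```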
Again, granularity in modeling is part of the issue. For example, in games we get agent behaviors that are really stupid compared to those of human players, but this is due to the variety of raw data implied in the game world that's necessary to engage the human. That level of detail is too complex for simple agents to demonstrate anything like intelligent behavior in those environments; the computing power and problem-solving paradigms required are complex enough to bring our modern computers to their knees. However, in a ridiculously simple environment that approximates a very small subset of real conditions, we could implement simplistic versions of features of reality: propagation of knowledge through communication or through observation, and populating different actors with slightly varied evaluation functions in order to create irregularity in responses.
I think in any system of this type (and @FOTH will hate this), we can demonstrate, as a pragmatic issue, that one has to construct a time preference implementation in order to force action to occur at all, and that bugs in games related to agents freezing might be directly related to time preference failures.
I think one can prove through simulation that we don't know how to achieve intentional (even if it's too simplistic to be called intelligent) action without bringing in the categories of Human Action. Providing quantities in this case is just a device to force real change to occur in the simulation, and should not be interpreted as reflecting any truth about quantities in reality. But that doesn't mean that the rules and behaviors we create in such simulations can't point out other mathematical relations (patterns) that by demonstration must also be true of reality.
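The time-preference point can be illustrated in the same toy style (all numbers invented): if later satisfaction is not discounted at all, the simulated agent defers action all the way to the horizon; any positive time preference makes delay costly and forces it to act now.

```python
def best_time_to_act(payoff, discount, horizon=50):
    """payoff(t): raw value of acting at step t; discount in (0, 1] is
    the per-step time preference. Returns the t with the highest
    discounted score."""
    scored = [(payoff(t) * discount**t, t) for t in range(horizon)]
    return max(scored)[1]

payoff = lambda t: 10 + t              # waiting always improves the raw payoff
print(best_time_to_act(payoff, 1.0))   # 49: no time preference, waits to the horizon
print(best_time_to_act(payoff, 0.9))   # 0: time preference forces action now
```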
I program for a living. Computers aren't actually that stupid; we, as programmers, are. The substrate for computing, and the software running on it, that nature has produced in our brains is phenomenally more advanced by comparison. So we're stupid in terms of producing computational devices and algorithms; nature is far more effective (in a non-intentional way). I think I wasn't clear. If you watched the video presentation I linked, Gerald Sussman's presentation said that the human brain can do in 7 operations things we can't even conceive of doing at all with computers. He used a pattern recognition problem that's incredibly simple for the human mind but is damn near impossible today with modern programming techniques. The rest of his presentation focused on how he thinks one of the problems is that we get stuck with a programming paradigm (logical, functional, object-oriented, aspect-oriented, model-driven) and then drive it into the ground. We have hammers and try to convert problems into nails. He proposed that if we're going to figure out how to create programs that can solve problems of these higher complexities, we need to be experimenting with a wider variety of computational paradigms, and he presented a couple, based on some intuitive observations of how people actually solve problems in reality. The signal propagator was an interesting one, drawn from his experiences teaching electrical engineering courses. (I know a little electronics as a consequence of my dad's EE degree and my home environment, 4+ years of electronics in high school, and occupational training later in the military.)
I actually watched that lecture some time back. *shrug* I have a rather contrarian view on this whole subject. The conventional wisdom is that computers "smarter than human beings" are a real possibility one day. I disagree. It will never happen; at least, not in any way that we can claim credit for. If silicon were to become more intelligent than human beings, we would be no more responsible for it than our caveman forebears are responsible for rocketry and the Internet. The root fear driving the CW is just a dressed-up version of Luddism; once we feared that machines would obsolete the human body, now we fear that computers (which are really just very tiny and fast machines) will obsolete the human brain. It's all rubbish.
I have my own ideas on "what we need" to get from here to the future. The first thing we need has nothing to do with solving computational problems and everything to do with protecting property: cryptography. But the government doesn't like cryptography, at least, not in the hands of the masses. So we will continue to be impeded in this direction.
The CW is that the primary locus of value is in the software, not the hardware. I disagree. It is the hardware that is valuable property. Software is just potentially useful patterns for driving the valuable hardware. Hence, our entire paradigm regarding execute permissions is upside-down and backwards. We are obsessed with determining whether a user has sufficient rights to execute a particular piece of code. What we should be obsessing over is whether a particular piece of code has sufficient rights to be allowed to execute on the user's hardware.
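To make the inversion concrete, here's a toy illustration (my sketch, not any real OS mechanism): instead of asking "may this user run code?", the hardware's owner asks "has this code been authorized to run here?" Authorization is just a MAC over the code bytes, keyed by a secret only the owner holds. The key and function names are all invented for the example.

```python
# Toy "code must prove its right to execute" check, using Python's
# standard hmac module. Illustrative only.

import hmac
import hashlib

OWNER_KEY = b"secret held by the hardware's owner"  # assumed to stay local

def authorize(code: bytes) -> bytes:
    """Owner signs code they are willing to let run on their machine."""
    return hmac.new(OWNER_KEY, code, hashlib.sha256).digest()

def run_if_authorized(code: bytes, tag: bytes):
    """Refuse to execute anything lacking a valid owner-issued tag."""
    if not hmac.compare_digest(tag, authorize(code)):
        raise PermissionError("code has no right to run on this hardware")
    exec(code.decode())  # only reached for owner-approved code

trusted = b"print('hello from approved code')"
run_if_authorized(trusted, authorize(trusted))   # runs
# run_if_authorized(trusted, b"forged tag")      # raises PermissionError
```

The point of the sketch is where the burden of proof sits: the code carries credentials, and the hardware's owner is the gatekeeper.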
In my view, we do not even yet have real computational languages. What we have are very truncated and unwieldy grammatical lexicons. We falsely imagine that language is something that comes from a central source (a programming language "designer"). We have not yet absorbed the import of the fact that a language is created by its users and that language is an inherently human behavior. Perl started along this direction but has veered away with Perl 6; Perl 5 remains the closest thing in existence today to a real computational language, though other computing communities are starting to get the idea. Unix/Linux is also something approximating an actual language.
Breaking this paradigm will not be the result of a new moonshot computational paradigm. The process is already in motion and is being driven by the collective weight of the commercial Internet world. We will be inexorably pushed along a path towards real language in computation and the CW will eventually be ground to powder by this inevitability.
I'll sum up with Orgel's Second Rule: Evolution is cleverer than you are. That applies to the evolution of language or economic goods as well as to the evolution of biological organisms or anything else.
@Clayton,
sounds like we substantially agree. Glad we got there. I'm a little less pessimistic about setting up rule based systems and seeing emergent behavior arise.
I'm not a Luddite. Any true AI will be the result of millions and billions of experimental fits and starts and iterations. However, we do have some of those many experimental components in place, and we have an existing model from which to bootstrap. Personally, the only impediment I see to that process is existential risk, be it doomsday or running down to null, meaning we get trapped here and run out of resources on Earth. (I think this is nigh impossible given 100 million years of potential 21st-century-level innovation. We could actually fall back to prehistoric levels for 20-30k years and rise back to this level 3,000 times before the Earth becomes uninhabitable from external factors.) So as long as we either don't destroy the Earth's biosphere or manage to create a sustainable human existence off Earth, I imagine it's a question of when, not if.
I agree with the cryptography angle, though that's a power/information mechanism. The government is angling for more and more bias in the relationship between government and citizen: as you point out, information flows increasingly in one direction, and the tools (crypto) for normalizing that relationship are being taken away from us.
I don't know that I agree with you about "computational language," but perhaps I don't see what you mean. We continue to see domain-specific languages and new "design patterns" emerge; to me, this is in fact what we hope for from the language domain. It's modeling at its finest, and it's the tool we use to iterate, do, and predict in the information/data space.
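As a concrete (and entirely made-up) example of the kind of domain-specific layering I mean: a few lines of operator overloading let small composable "steps" read like the domain rather than like the host language.

```python
# A minimal embedded DSL sketch: pipeline steps composed with `|`,
# so the "sentence" reads left to right. Purely illustrative.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # `a | b` reads as "a then b"
        return Step(lambda x: other.fn(self.fn(x)))

    def __call__(self, x):
        return self.fn(x)

strip = Step(str.strip)
lower = Step(str.lower)
words = Step(str.split)

clean = strip | lower | words  # the DSL "sentence"
print(clean("  The Quick FOX  "))  # ['the', 'quick', 'fox']
```

Nothing here is novel; the point is that the vocabulary grows from the users of the little language, not from a central designer.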
I understand your concern about execution. I think it's analogous to viral/bacterial evolution and the ongoing battle inside the human body (and in nature) over which genetic material (program code) gets to run on the underlying hardware. Oddly enough, without hard-and-fast "rules" embedded in the hardware, the human genome evidently contains enough program code to generate a decently effective security system, one that keeps enough humans alive to propagate the species. Thinking in those terms (dynamical systems again), I'm not sure the execution issue is a blocking issue, or that the paradigm is upside down. That said, experiments with the inverted paradigm might be interesting.
One of the things I wonder about is finding the balance in these systems between allowing enough freedom and variation to allow innovation and growth, without being so chaotic and unregulated that the system breaks down and implodes.
Wait, that last sentence looks like an argument for government. :( But I think it captures some of my thinking: the question is how to self-regulate without crushing growth and innovation. My hope, of course, is to trend toward privately owned, competing regulation of human behavior.
Not sure how that last paragraph changes how I interpret the previous one about computer ecosystems.
Lisp is still the finest language ever built from a flexibility standpoint. The only issue people have with it is syntax, but hey... what are we to do?
I agree with Orgel's Second Rule, though I hadn't heard it put that way before. But remember, our "idea proposals" are one of the many "populations" in the variation, replication, selection triumvirate of evolution. So I worry less about how smart I am, and more about putting out ideas that might stick and have substantive value in affecting reality. If that makes sense.
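The variation/replication/selection triumvirate is easy to render as a toy loop. Here's my sketch, with an invented fitness function (closeness of a bit-string to all-ones) standing in for whatever makes an "idea proposal" stick:

```python
# Toy variation / replication / selection loop. Illustrative only;
# the fitness function is an arbitrary stand-in.

import random
random.seed(0)  # make the run repeatable

def fitness(idea):
    return sum(idea)  # more 1s = a "better" idea

def mutate(idea):
    # variation: flip one random bit
    i = random.randrange(len(idea))
    return idea[:i] + [1 - idea[i]] + idea[i + 1:]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for generation in range(100):
    # selection: keep the better half
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # replication with variation: survivors spawn mutated copies
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(max(fitness(i) for i in population))  # climbs toward the maximum of 16
```

No individual in the loop is "clever," yet the population climbs anyway, which is roughly Orgel's point.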
I'd be curious to hear what you meant by "Unix/Linux is also something approximating an actual language."
Anyway, I didn't see anything in that post that makes me pessimistic, though I see that you have a pessimistic vibe about the future of computing.
Interesting side topics.