Will corporations prevent the Singularity?
March 16, 2012 by Ben Goertzel
It occurred to me recently that the world possesses some very powerful intelligent organisms that are directly and clearly opposed to the Singularity — corporations.
Human beings are confused and confusing creatures. We don’t have very clear goal systems, and are quite willing and able to adapt our top-level goals to the circumstances. I have little doubt that most humans will go with the flow as Singularity approaches.
But corporations are a different matter. Corporations are entities/organisms unto themselves these days, with wills and cognitive structures quite distinct from the people that comprise them. Public corporations have much clearer goal systems than humans: to maximize shareholder value.
Singularity introduces uncertainty
And rather clearly, a Singularity is not a good way to maximize shareholder value. It introduces way too much uncertainty. Abolishing money and scarcity is not a good route to maximizing shareholder value, and neither is abolishing shareholders by uploading them into radical transhuman forms!
Yes, of course a corporation must ultimately do what its shareholders tell it to. But human will is a complex thing, and what a human thinks and wants is conditioned heavily by the dynamics of the organizations that human belongs to. The self-organizing dynamics of a corporation, which have properties distinct from those of any of the human minds involved, influence the ideas, feelings and decisions of the individuals involved with the corporation.
A company like Apple or IBM may make (good or bad) decisions different from those that any individual involved would make on their own. Corporate decisions are effected via large numbers of human decisions, but each human involved is deciding in the context of the corporation, so it becomes perfectly reasonable to regard these as corporate decisions made via the medium of humans, just as human decisions are made via the medium of neurons. In a quite concrete and practical sense, then, it makes sense to think of corporations as having minds of their own.
It seems quite possible that corporations — as emergent, self-organizing, coherent minds of their own — will systematically act against the emergence of a true Singularity, and act in favor of some kind of future in which money and shareholding still have meaning.
Sure, corporations may adapt to the changes as Singularity approaches. But my point is that corporations may be inherently less pliant than individual humans, because their goals are more precisely defined. The relative inflexibility of large corporations is certainly well known.
Superintelligent corporations?
Charles Stross, in his wonderful novel Accelerando, presents an alternate view, in which corporations themselves become superintelligent self-modifying systems — and leave Earth to populate space-based computer systems where they communicate using sophisticated forms of auctioning. This is not wholly implausible.
Yet my own intuition is that notions of money and economic exchange will become less relevant as intelligence exceeds the human level. I suspect the importance of money and economic exchange is an artifact of the current domain of relative material scarcity in which we find ourselves, and that once advanced technology (nanotech, femtotech, etc.) radically diminishes material scarcity, the importance of economic thinking will drastically decrease.
So, far from becoming dominant as in Accelerando, corporations will become increasingly irrelevant post-Singularity. But if they are smart enough to foresee this, they will probably try to prevent it.
Ultimately, corporations are composed of people (until AGI advances a lot more, at any rate), so maybe this issue will be resolved, as Singularity comes nearer, by people choosing to abandon corporations in favor of other structures guided by their ever-changing value systems. But one can be sure that corporations will fight to stop this from happening.
Global AI Nanny
One might expect large corporations to push hard for some variety of “global AI Nanny” type scenario, in which truly radical change would be forestalled and their own existence preserved as part of the AI Nanny’s global bureaucratic infrastructure. M&A with the AI Nanny may be seen as preferable to the utter uncertainty of Singularity.
This line of thinking might seem implausible since, after all, corporations large and small are pushing ahead much of the amazing technological advancement now occurring. However, according to the argument I’m pursuing here, it makes sense that corporations would continue to embrace technological advancement — as long as it seems probable that this will lead them to make more money!
The question is, what happens when it starts to seem likely to large corporations that:
— Further dramatic technological advancement will make money obsolete, AND
— There’s an alternative that would keep a money economy in place, like a corporate-controlled global AI Nanny?
Doesn’t it seem reasonably likely that a network of large corporations, at that point, will try to form a global AI Nanny conglomerate to slow progress toward a confusing, potentially money-obsoleting Singularity, and to ensure their profitability via mostly benevolent force?
The details are hard to foresee, but the interplay between individuals and corporations as Singularity approaches should be fascinating to watch.