Holotopia: Collective mind

From Knowledge Federation
Revision as of 06:11, 1 July 2020 by Dino (talk | contribs)

H O L O T O P I A:    F I V E    I N S I G H T S

The printing press revolutionized communication, and enabled the Enlightenment. The Internet and interactive digital media constitute a similar revolution. Hasn't the change we are proposing, from the 'candle' to the 'lightbulb', already been completed?

We look at the socio-technical system by which information is produced and handled in our society, which the new information technology helped us create; and we zoom in on its structure. We readily see that its principle of operation has remained broadcasting—which suited the printing press, but when applied to the new technology exacerbates problems, instead of enabling solutions.

We see, in other words, that we are using our wonderful new technology to do no better than create 'electrical candles'.

Our collective mind needs structural change

What it takes to be informed

Imagine a world where correct understanding of an issue or situation is used for directing action.

In knowledge federation we use the keyword gestalt for such understanding. And we use this keyword to make the intuitive meaning of the word "informed" precise and clear: One is informed if one has an "appropriate gestalt", that is, a gestalt that is appropriate to the situation at hand. "Our house is on fire" is a canonical example of a gestalt. An appropriate gestalt correctly points to an action that needs to be taken.

Suppose, now, that we apply this idea to our very handling of information, and of knowledge. What gestalt would result? What course of action would it be pointing to?

Knowledge work has a flat tire

In 2011, when the Knowledge Federation completed its self-organization as a transdiscipline, we decided to "go public" and propose knowledge federation to Silicon Valley, and to the world. In our workshop at the Triple Helix IX conference of international change-makers in knowledge work, at Stanford University, we used the flat tire metaphor to answer the above questions, and to motivate our proposal.

Knowledge Work Has a Flat Tire describes a real-life event where two leading scientists contradicted each other, while presenting to the public and the media the scientific view of an urgent and complex policy issue, climate change. Obviously, a non-expert reader had no way to resolve this contradiction and decide who was right. Our point was that our present way of informing the public leads to confusion and inaction. And that our situation resembles the situation of people in a car with a punctured tire: Pressing the gas pedal and surging forward (publishing, or broadcasting) is no longer effective. Our situation demands that we stop and take care of a structural defect.

Democracy must evolve

Cybernetics of democracy

We are preparing a book series, to help launch holotopia and knowledge federation. The second book in the series has the working title "Knowledge Federation", and subtitle "Cybernetics of Democracy".

What is "democracy"?

We tend to answer that question the same way we answer "What is science?", "What is journalism?", and similar questions about our other professions and institutions: we simply reify the practice we have inherited from the past. "Democracy" distinguishes itself in that we have inherited its structure and its operation from a very distant point in history, when practically everything was different.

The word "democracy" is derived from the Greek "demos", which means "people", and "kratos", which means "power". So "democracy" is supposed to be a social system where the people have power; where the people are in control. But are the people in control?

We added the word "cybernetics" to the subtitle, to suggest the answer. "Cybernetics" is derived from the Greek "kybernetike", which means governance. So cybernetics is a scientific study of governance, or of governability. This study is transdisciplinary. Cybernetics shares its larger purpose with general systems science, and with the systems sciences more generally—which is to study systems of all kinds, both natural and human-made, in order to understand how a system's structure influences or "drives" its behavior; and then to use this understanding to create and handle systems of all kinds—and social systems in particular. So all we'll need from cybernetics, to answer our question, is the obvious insight that motivated its development.

Which is that in a bus without a steering wheel and without proper headlights, which is speeding through uncharted terrain in the darkness of the night—nobody is in control!

You might see someone sitting in the driver's seat (Donald Trump; the people who elected him), and believe that he is driving. But the moment you examine the structure of the bus, you understand that you were wrong: driving it is not physically possible.

Does our "democracy" have that sort of structure?

Democracy needs brakes

We intend to begin the Cybernetics of Democracy book by telling the story of Jørgen Randers, who in 1969, having just graduated from college, traveled from Oslo to Boston to do a doctorate in physics at MIT. Upon hearing a lecture by Jay Forrester, he decided that his study would instead be in the systems sciences, or more precisely in "system dynamics".

In 1972, Randers became one of the authors of The Club of Rome's first and most widely read report, "The Limits to Growth". What followed was an exhausting series of completely nonsensical debates. He and his three co-authors, whose average age was 25, were called "doomsday prophets" and severely attacked. What they were really saying was, however, completely obvious (it did not even require computer simulation), and it was completely ignored. Their point was, namely, that a 'bus' (a human system growing at an accelerating speed on a finite planet) must have 'brakes' to avoid crashing.
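The core of their argument can be sketched in a few lines of Python. This is a toy illustration under our own simplifying assumptions, not the World3 model the authors actually used: it only contrasts unconstrained exponential growth with growth that has a feedback 'brake' tied to a finite carrying capacity.

```python
# Toy sketch of the Limits to Growth argument (NOT the World3 model):
# exponential growth on a finite planet shoots far past its carrying
# capacity, while growth with a feedback 'brake' levels off below it.

CAPACITY = 100.0   # what the 'planet' can sustain (arbitrary units)
RATE = 0.05        # growth per time step

def grow(braked: bool, steps: int = 200) -> float:
    x = 1.0
    for _ in range(steps):
        if braked:
            x += RATE * x * (1 - x / CAPACITY)  # logistic: slows near the limit
        else:
            x += RATE * x                        # pure exponential growth
    return x

print(f"without brakes: {grow(False):.0f}")
print(f"with brakes:    {grow(True):.0f}")
```

The 'brake' here is the logistic damping term: growth shrinks toward zero as the system approaches the limit, which is exactly the structural element the report argued our systems lack.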

Hear Randers summarize his forty years of experience at the 40th anniversary of The Limits to Growth at the Smithsonian, by declaring:

"The horrible fact is that democracy, and capitalism, will not solve those problems. We do need a fundamental paradigm shift in the area of governance."

Social systems behave counterintuitively

Jay Forrester was a creative computing pioneer who contributed to the computer revolution. In 1956, holding several patents and an MIT professorship, he had the idea that this new machine could be used for a whole new purpose: to model social systems, and understand their behavior.

A colleague who had earlier been the mayor of Boston moved into a nearby office, and told him how he had often noticed that applying the obvious policy to solve a recurring problem made the problem worse. Forrester made models, and found that this "counterintuitive behavior of social systems" (the title of his 1971 research article) was the rule rather than the exception. Social systems share that property with all "complex" or "nonlinear dynamic" systems.
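Forrester's finding can be illustrated with a minimal delayed feedback loop. This is a generic sketch of our own, not one of Forrester's actual models: when the effect of a corrective action arrives with a delay, the 'obviously right' strong correction destabilizes the system, while a gentle one quietly eliminates the error.

```python
# A generic delayed feedback loop (not one of Forrester's models):
# we repeatedly correct an error, but each correction only takes
# effect 3 steps later, as is typical of social systems.

def peak_recent_error(strength: float, delay: int = 3, steps: int = 60) -> float:
    errors = [100.0] * (delay + 1)   # start 100 units away from the target
    for _ in range(steps):
        # correct by 'strength' times the error observed 'delay' steps ago
        errors.append(errors[-1] - strength * errors[-1 - delay])
    return max(abs(e) for e in errors[-10:])   # worst error near the end

print(f"gentle correction (0.2): {peak_recent_error(0.2):.2f}")
print(f"strong correction (1.0): {peak_recent_error(1.0):.2f}")
```

The gentle policy damps the error toward zero; the intuitively appealing full-strength correction overshoots, reacts to stale information, and drives ever-growing oscillations.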

Forrester lobbied to present this insight to the U.S. Congress.

Social systems must be "anticipatory"

Robert Rosen was a mathematical biologist who focused specifically on the issue of democracy while on sabbatical at the Center for the Study of Democratic Institutions in Santa Barbara. In 1972, he showed that to be viable, social systems must have a property shared by all living systems—namely, they must be "anticipatory" (make predictions of the future to govern their present behavior). He later summarized his findings in the book titled "Anticipatory Systems", with the subtitle "Philosophical, Mathematical and Methodological Foundations".
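A toy contrast may help make the term concrete. This is an illustration of the idea only, not Rosen's mathematical formalism: a vehicle approaching a wall, driven either reactively (responding to the present distance) or anticipatorily (using a predictive model of where its present speed will take it).

```python
# Reactive vs. anticipatory control (a toy illustration of Rosen's idea,
# not his formalism): a vehicle nears a wall, shedding at most 1 unit
# of speed per step when it brakes.

def drive(anticipatory: bool) -> float:
    position, speed, wall = 0.0, 10.0, 100.0
    while speed > 0:
        if anticipatory:
            # model of the future: distance covered while braking from
            # speed v at 1 unit/step is at most v + (v-1) + ... + 1
            stopping_distance = speed * (speed + 1) / 2
            must_brake = position + speed + stopping_distance >= wall
        else:
            must_brake = (wall - position) < 10.0   # react only when close
        if must_brake:
            speed = max(0.0, speed - 1.0)           # limited braking power
        position += speed
    return position

print(f"reactive driver stops at:     {drive(False):.0f} (wall at 100)")
print(f"anticipatory driver stops at: {drive(True):.0f} (wall at 100)")
```

The reactive driver, governed only by the present, sails through the wall before the brakes can matter; the anticipatory driver, governed by a prediction, stops short of it. This is the capability Rosen argued viable social systems cannot do without.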

The mother of all our problems is a paradox

A half-century after these insights were reached, our "democracy" is still only reacting to contingencies. Our "policy makers" are experts in doing no more than keeping the 'bus' on its present course (keeping the economy growing, for another four-year term).

You might have noticed that The Club of Rome's 1972 simulation study did exactly what Forrester and Rosen found a democracy must be able to do: make models and predictions, to see what sort of condition its present course is leading to. But neither the voters nor the politicians have, even today, the faintest clue that those "doomsday prophets" were just trying to add to our "democracy" a capability that any system of control worthy of that name must have: the capability to 'steer'!

And so the root problem of our democracy, and by extension of our various other problems as well, is not at all a problem but a paradox: We are not using information to understand our world and modify our behavior!

And we are not doing that even when this information is telling us what our systems must be like, if we should become capable of using information to see where we are headed, and what's going on!

To point to this most intriguing and no less alarming issue, to give it visibility and citizenship rights, we have given it a name: Wiener's paradox.

David Bohm left us this clue as to how we may (not!) be able to handle it:

As long as a paradox is treated as a problem, it can never be dissolved.

Wiener's paradox

We are back to 'square one'

What we have seen so far is what we pointed out to begin with: that "knowledge work has a flat tire". Before we tell you how this issue needs to be handled—a solution that was proposed in principle in 1945, and developed well beyond that early vision, in breathtaking detail, by 1968, which is what we are calling collective mind—we'll stay a moment longer with the problem, which, as we've seen, is not really a problem but a paradox. We want to tell you why exactly we are calling it Wiener's paradox. And to tell you another brief story, whose points should not be missed.