I have been writing that humanity's attraction to complexity created the problem by weaving an ever more interwoven structure of relationships, making change increasingly difficult. When the limit is reached, nothing further can change and mankind crashes.
Daniel Schmachtenberger has gone a number of steps further, simplifying the logic: our problems were caused by highly paid intelligence (banks, technology, wars, property, etc.), and artificial intelligence applied to those problems will simply evolve them in the direction they are already going, which is toward catastrophe. The reality is that AI is likely to be owned by the people who caused the problems and will work on their behalf. The only way out is an ownership regime of people who care about humanity and apply AI to further those goals rather than the goals of the corporate-bureaucratic state that created the problems we have.
If this is correct, and I think it is, our task is clearer but harder. AI has to be socialized: owned and controlled by people of good will and rich human understanding who are willing to give up power and profit. Every enterprise, project, and hobby must be judged against this imperative.
Not thinking is the problem. How will we learn from the dark side of AI to develop the light of equity meta-governance and use ethical AI for moral actions? We need a new learning governance for collaborative and transformational learning. Enjoy musing if you have time for slow thinking. https://equitymoonshot.substack.com/p/how-might-we-co-create-first-principles
complexity "When the limit is reached, nothing further can change and mankind crashes."
-> reminds me of Joseph Tainter's work on complexity and the historic collapse of empires (e.g. Rome)