AI Marxism Will Save Your Business
The Idea
A recent study of 758 consultants found that knowledge workers regularly hand AI models tasks beyond the models' capabilities. This leads to faulty outputs, which they nonetheless use to serve their clients. (Dell’Acqua et al., 2023)
Stories of big-firm consultants sitting on airplanes and handing their entire workload off to chatbots have become a regular occurrence.
Bad employees? Possibly. But in this article I want to take a different perspective. Most probably, they simply don't know about these limitations, or don't care to adjust for them. And that comes down to bad management.
The argument in this article is simple: AI is just another machine. But it is treated like a personal co-worker. This has the potential to ruin your company in the future. Be more Marxist about it.
What is AI Marxism?
The idea springs from a core idea of Marxism about the industrialization of labor: machines alienate the worker from their work, enable exploitation, lead to deskilling and create social division.
Pan Enrong (PhD in Philosophy of Science and Technology) is a Chinese scholar spearheading an “interdisciplinary approach that combines research in the philosophy of technology, innovation-driven development, and Marxist studies from the ‘position and perspective of an engineer’”. (Alicja Bachulska, 2024)
He is one of many Chinese scholars exploring the rather niche domain of AI Marxism (人工智能马克思主义, rengong zhineng makesi zhuyi), a theoretical framework that brings Marxist philosophy into the field of human-AI interaction.
Pan Enrong's argument
The science of the past studied the natural world, the science of the present studies human society, and the ‘science of tomorrow’ studies the world of man and machine.
- Pan Enrong, Professor at the School of Marxism at Zhejiang University (Alicja Bachulska, 2024)
Pan argues that philosophy and the humanities need to engage with intelligent technologies to keep their principles alive in a new reality.
No doubt, our world is changing massively through the introduction of AI into commercial use. AI agents, perceived by many industry experts as the main trend of 2025, will take this issue to another level.
This leads Pan to three key conclusions about AI and labor alienation:
- Product Alienation: AI-created outputs belong to system owners, not workers or the AI itself.
- Process Alienation: AI's "black box" nature and speed make work processes incomprehensible to humans.
- Social Alienation: These combined effects ultimately increase disconnection between humans themselves.
Unfortunately, Pan doesn't offer a clear solution as to what specifically should be done about this. To understand the issue better, let's break those statements down into the tangible issues within companies:
- Lower work satisfaction due to a lack of ownership (or the feeling of it).
- Reduced quality: AI allows workers to complete tasks faster and with less effort, but quality deteriorates when humans defer decision-making to the machine.
- Creativity is stifled, because "you just ChatGPT it".
- Leadership qualities are not fostered in the work environments of entry-level roles like analysts.
Gao's 3 new principles for AI
Enough about the problem, let's look at potential solutions.
Fortunately, there are more contemporary Chinese scholars to draw upon. One of them, Gao Qiqi, professor at Fudan University, offers three very clear directives.
Gao's "New Three Principles of Artificial Intelligence" attempt an answer: (Alicja Bachulska, 2024)
- AI should always be an aid.
- The human share of decision-making should never fall below 0.618, the golden ratio known from mathematics and the arts; in other words, humans should always keep well over half of the decision-making under their control.
- Humans should always control the pace of AI development and be ready to pause or slow down at any time.
Originally aimed at policymakers, these principles can easily be translated into the context of a private venture:
- AI aids the worker. The worker intelligently applies technology to improve their skill set and output.
- Humans stay at the top of the organizational chart.
- Integration of AI into business processes should proceed controllably, step by step.
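Gao's golden-ratio threshold is, at bottom, a simple arithmetic rule, so as a thought experiment it can be sketched in a few lines of code. The function names and the decision-counting model below are my own hypothetical illustration of the principle, not anything from Gao's work:

```python
# Hypothetical sketch of Gao's second principle: humans must hold at
# least a 0.618 share of decision-making in any business process.

GOLDEN_RATIO = 0.618  # minimum human share of decisions


def human_decision_share(human_decisions: int, ai_decisions: int) -> float:
    """Fraction of decisions in a process made by humans."""
    total = human_decisions + ai_decisions
    if total == 0:
        return 1.0  # no decisions made yet: nothing has been ceded to AI
    return human_decisions / total


def violates_principle(human_decisions: int, ai_decisions: int) -> bool:
    """True if AI holds too large a share of the decision-making."""
    return human_decision_share(human_decisions, ai_decisions) < GOLDEN_RATIO


# A workflow where humans make 6 of 10 decisions already violates the
# rule (0.6 < 0.618), while 7 of 10 is compliant: the golden ratio is
# deliberately stricter than a bare majority.
print(violates_principle(6, 4))  # True
print(violates_principle(7, 3))  # False
```

The interesting nuance the number encodes: "more than half" is not quite enough, since a 60/40 split still falls below the 0.618 threshold.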
Tomorrow's Workforce
During my university degree, I have watched my fellow students express growing discomfort with writing essays or formulating thoughts without ChatGPT's assistance. What troubles me is watching their capacity for independent thinking visibly diminish. To me, one outcome is very probable: students - the future workforce and leaders - are being taught to use technology as a crutch to produce "pretty good" work without honing their own skill sets. This applies to their professional, general managerial and social competencies.
The main reason? There is no deeper understanding of the potential and limitations of the tools in use.
What I find particularly worrying is how this might transform future workplace dynamics. I can envision a near future where hiring decisions won't just consider the individual's capabilities, but also their proficiency with their own AI assistant - perhaps even listing preferred AI assistant utilization skills on resumes.
I see a striking irony: while this development perfectly illustrates Marxist critiques of technology's impact on labor, it simultaneously serves corporate interests by creating a workforce increasingly dependent on company-controlled AI systems. Knowledge workers are turning into factory workers operating their digital machines. Little skill is required, at least to do okay.
This erosion of human agency in cognition and decision-making could have far-reaching implications for innovation, problem-solving, and leadership quality in our future organizations.
Implications: Manage like a Marxist!
It appears that we are screwed. But realistically, there is potential to prevent my doomsday scenario. The danger is that AI lets us skip the difficult parts of decision-making: gaining deep understanding, weighing alternatives, iterating, and taking responsibility.
I believe it is a natural human tendency to cede decision-making to others. The more ambitious will try to lead longer, but eventually they too succumb to the same false comfort of AI.
When interacting with other humans, we have a good incentive to stay in the game. Many things become a social competition for status, power or simply independence.
Machines don't give us that kick. Students feel compelled to do better than their classmates. But would they also try to beat ChatGPT if it were sitting among them, taking the same exams? I argue that AI isn't perceived as much of a threat as other people are. It is too abstract, too intelligent, and crucially: too nice.
This is where the responsible manager is needed: to remind workers that they are using just another tool. More capable and comprehensive than anything before it, but still an extension of their personal competencies.
Sources
Alicja Bachulska, M. L. (2024). The Idea of China: Chinese Thinkers on Power, Progress, and People. European Council on Foreign Relations.
Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4573321
More on the topic
This article was mainly inspired by the book “The Idea of China: Chinese Thinkers on Power, Progress, and People” by Alicja Bachulska, Mark Leonard, and Janka Oertel.
You can get it here: https://ecfr.eu/publication/idea-of-china/
Pan Enrong was heavily inspired by the works of Nobel laureate Herbert Alexander Simon, who contributed to the study of decision-making across various fields. One of his most important works is the book "Administrative Behavior", published in 1947, which studies decision-making within companies.
In his book "Co-Intelligence: Living and Working with AI", Ethan Mollick outlines more examples of how collaboration between human and machine can work well.