What can coal mining in the 1950s teach us about adopting Collaborative Intelligence today?
More than you might think.
Britain nationalized its coal industry in 1947 as part of its post-WWII industrial reconstruction. At that time, coal was Britain’s primary source of power. Nationalizing the industry enabled the British government to ensure access and increase production.
Despite Britain’s investments to mechanize coal mining operations, coal production did not increase as expected. Many miners left the industry for more attractive factory jobs. Among those who stayed, absenteeism averaged 20%. There were many labor disputes and other problems, including low morale [Trist 1981].
Britain’s National Coal Board asked the Tavistock Institute of Human Relations to study and explain why some mining sites had high productivity and high worker morale when most did not.
They found that the mechanized work routines introduced at most sites had ignored the impact of the new mining technologies on their miners’ job satisfaction (less variety and less autonomy), dignity (from autonomous, multi-skilled generalist to order-taking, single-skilled specialist), and camaraderie and interdependence with other miners.
They had the insight that, when redesigning work, both the human systems (composed of individuals, their interactions, shared values, etc.) and the technological systems being deployed to complete the work must be jointly optimized if companies are to achieve the expected economic and employee benefits of their redesign.
This insight became the foundation for Socio-Technical Systems (STS) theory, embodied by the continuing work of the Tavistock Institute, which seeks to understand “how humans relate to each other and non-human systems, how we grow in character, [and] how we embrace learning and change.”
The lessons they learned in the 1950s can inform efforts to adopt Generative AI and other collaborative intelligence (CI) technologies today.
Lesson 1: Recognize the “threat” that CI technologies pose to human stakeholders and take specific steps to socialize these technologies.
Different stakeholder groups may perceive different threats from CI technologies. For example:
Some employees may worry that:
AI is coming for their jobs
the value of their current skills will be diminished
they may have difficulty learning the new skills required to collaborate effectively with intelligent machines
Some customers, clients, or patients may worry that:
valued service employees, with whom they have long-standing relationships, will be replaced by machines
artifacts produced by intelligent machines will diminish the value of products purchased or services delivered
Some executives may worry that the use of machine intelligence will:
expose proprietary or confidential company data
expose the company to unforeseen liabilities
increase reputational risk from the external exposure of improperly vetted work products
Some suppliers or partners may worry that the dynamics of their working relationships will change.
Each perceived threat must be identified, addressed empathetically, and resolved to the stakeholders' satisfaction.
One approach that overcame miners’ resistance to new technologies at the British mining sites (and has proven effective in many other industries over the years) is “socializing” the technologies before their widespread deployment. In organizational dynamics, “socializing” is the process by which unknown or mistrusted company outsiders transition into familiar, valued company insiders.
Collaborative intelligence aims to pair humans with intelligent machines (IM) to pursue human-specified goals, so intentional human/machine socialization can effectively address human stakeholders' concerns.
If these concerns are not addressed effectively, organizational resistance will build, and companies will fail to realize CI’s potential.
Lesson 2: Effective work design improves productivity and work-life quality via optimized task sharing.
When designing CI-based workflows, each task must be analyzed along several dimensions to ensure optimal sharing of task completion responsibilities among humans and IMs. Then, the workflow must be jointly optimized for expected value (adjusted for risk) and human engagement and satisfaction (via extrinsic and intrinsic rewards).
If the distinctive characteristics of humans and intelligent machines are not considered or leveraged effectively in the new or redesigned workflows, the benefits of CI will remain unrealized.
Lesson 3: In addition to workflows, build organizational mechanisms that can adapt quickly and easily to the learnings and increasing capabilities of both humans and CI technologies.
CI technologies' power (task-completion capabilities) and scope (kinds of tasks that can be completed) are advancing rapidly. As humans begin to explore and apply these capabilities (consider the many ways people are discovering to use ChatGPT), their knowledge, skills, and experience will advance rapidly, too.
The organizational architectures (structures, decision rights, etc.) designed to support the execution of CI-based workflows must enable rapid experimentation, learning, and adaptation of these workflows.
If organizational architectures constrain rather than support the growth of CI capabilities, then CI’s many benefits will be elusive.



Mark, great article. Do you have any published or recommended journal articles on CI? An associate and I have been working on a theory we term administrative theory. This work is based on Herbert Simon's theory of near decomposability and its impact on achieving the "fit" organization. As we work to expand this theory by developing mechanisms that allow for efficient and effective collaborative governance and administrative tethering, our next step is identifying mechanisms to be used by collaboration administrators in administrative tethering. I am very interested in what you have been writing on CI and think there are areas that are relevant to our current work.