I am currently reading Why Nations Fail by Daron Acemoglu and James A. Robinson. I’m still working on a review of that book (including reviews of some of the reviews of the book), but in the meantime I want to sketch out some thoughts I had on the many historical attempts to stop the mechanization of various industries. My thoughts on this are mostly broad, but I think they shine a light on how we might, or might not, be able to handle undesirable technologies being developed today.
Examples from the book
As Acemoglu and Robinson sketch out in their book, many attempts were made throughout Europe to stop the mechanization of the garment and transport industries, including attempts to stop the proliferation of steam-powered trains. This was done mainly by monarchs and other elites in order to stay in control of the extractive economies they ruled over and to block social mobility.
While these attempts slowed the spread and development of these technologies, they could not stop them. The main counterexample cited in Why Nations Fail is the Glorious Revolution in England, which slowly liberalized the economy and made it more “inclusive”, giving inventors incentives to invent. In short, this allowed entrepreneurs and inventors to flourish and kick off the Industrial Revolution. Once the technological cat is out of the bag, its spread cannot be stopped. Technologies will inevitably flow to your neighbors, and in a global economy everyone is your neighbor or your neighbor’s neighbor.
When thinking about stopping the spread of technology you might first think of the Luddites. But the Luddites were more of a workers’ movement than an anti-tech movement: they wanted higher pay, and the increased efficiency of the textile machinery was undercutting that. The first attempts to stop technology were overwhelmingly made by monarchs and elites to protect their power. One example is Queen Elizabeth I refusing to grant William Lee a patent on his stocking frame, an automated knitting machine, which he invented in 1589.
The stocking frame was an innovation that promised huge productivity increases, but it also promised creative destruction.
What stopped technological innovation was not a lack of inventors but the active suppression of ideas and technology. Another example is the printing press, invented around 1445. While it was allowed to flourish in Western Europe, the Ottomans strictly prohibited its use.
Not everyone saw printing as a desirable innovation. As early as 1485 the Ottoman sultan Bayezid II issued an edict expressly forbidding Muslims from printing in Arabic. This ban was reinforced by Sultan Selim I in 1515.
Keeping the masses illiterate and poor secures the power of the few. Francis II, the last Holy Roman Emperor, who ruled on as Emperor Francis I of Austria, said in a speech in 1821:
“I do not need savants, but good, honest citizens. Your task is to bring young men up to be this. He who serves me must teach what I order him. If anyone can’t do this, or comes with new ideas, he can go, or I will remove him.”
Similarly, when the English philanthropist Robert Owen tried to convince the Austrian government of the need for social reform, Friedrich von Gentz, a close ally of the government, replied:
“We do not desire at all that the great masses shall become well off and independent . . . How could we otherwise rule over them?”.
Francis also blocked factories and railways from being built for fear of revolution. The first railways built in the empire were horse-drawn. Tsar Nicholas I of Russia did the same, going even further by explicitly banning cotton spinning.
These are just a few of the examples Why Nations Fail uses to illustrate rulers’ fear of creative destruction. The rulers were often correct to be afraid: technology did bring social change, and usually that change was bad for monarchs and other elites. The monarchs of Britain are an exception; their power had already been reduced by the Glorious Revolution, so they no longer stood in the way.
Martin Luther was one of the first revolutionaries to really use the power of the printing press to bring about social change. Later, the Enlightenment unfolded alongside the spread of technology and also relied on printing to circulate new ideas.
More examples
In the modern world these same struggles play out at various scales and to different effects. In the small, we hear stories of programmers secretly automating their work to protect their jobs, using obfuscation as a tactic to impress managers.
More interesting for our purposes are modern attempts to limit technologies. These have taken various forms and served many purposes.
Protection and growth
On the company level we see this behaviour in non-compete clauses for workers. These stop workers from spreading ideas between companies, securing market power. It also takes the form of lobbying governments for extra regulation, knowing that meeting the requirements is too expensive for startups, or of simply getting a monopoly granted by the government. These are all examples of rent-seeking. I really recommend reading The Captured Economy by Brink Lindsey and Steven M. Teles for more on this topic.
A recent example is China refusing to use Western covid vaccines, insisting on its own despite their offering less protection. The hope of this strategy is to avoid becoming dependent on imports and to develop a strong domestic sector. This might slow down growth at the global level due to less collaboration and less diversity of specialization, but it can lead to better national security and long-term development outcomes. One interesting angle on this is Noahpinion’s Developing Country Industrialization series.
Military and national security
In the other direction we see governments carefully controlling the outflow of technology. In the 7th century the Byzantine Empire developed a flamethrower, Greek fire, that it deployed effectively in naval battles. The Byzantines managed to create pressurized nozzles to project fuel at their enemies; even if they missed, the fuel would continue burning on the water. This technology was key to various naval victories, and as such was a closely guarded state secret. The secret was kept so well that by the 12th century the Byzantines themselves had lost the technology, and we still don’t know how they made it or how their pressurized nozzles worked.
More well known, and more impressive in terms of secret-keeping, is the Manhattan Project. Even though we now know all about it, at the time the project mostly did not leak despite employing over 130,000 people across more than 20 sites in the US and Canada. The managers of the project had a good security mindset and were suspicious of everyone, including the director of Los Alamos Laboratory, the place where the actual bombs were designed! The project vetted over 400,000 potential employees and 600 companies to ensure security. The most successful Soviet spy was Klaus Fuchs, a member of the British mission.
The biggest current version of this is the “chip war” between the US and its allies on one side and China on the other. In reality this spans many different technologies that governments have made bets on. China, for example, seems to be leading in advanced battery and drone technology, while the US (together with its allies) still leads in chip design and manufacturing, and in biotechnology generally. I recommend reading Noahpinion’s post.
Last year the Biden administration implemented broad export controls on China’s chip sector. Various people see this as the beginning of Cold War 2, although the exact start date is for future historians to decide. More importantly for our discussion, these controls seem to be working moderately well.
No single company or country controls all of chip design, and the materials needed for nuclear weapons can be “easily” controlled. Part of what makes some modern tech so scary is how easily accessible it is by comparison.
Thoughts on AI and biotech
There is a story that Paul Erdős supposedly told about Ramsey numbers:
Suppose aliens invade the earth and threaten to obliterate it in a year's time unless human beings can find the Ramsey number for red five and blue five. We could marshal the world's best minds and fastest computers, and within a year we could probably calculate the value. If the aliens demanded the Ramsey number for red six and blue six, however, we would have no choice but to launch a preemptive attack.
To make this analogy more fitting for AI, we can imagine the aliens are on the way, but we don’t know how long they will take to get here. Furthermore, we don’t know whether the problem we are trying to solve is the R(5,5) case or the R(6,6) case. (If you don’t believe there is a risk, I recommend reading this introduction.)
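To get a feel for why even the “easy” case resists computation, here is a minimal Python sketch (my own illustration, not from the book or from Erdős) that verifies the smallest nontrivial Ramsey number, R(3,3) = 6, by brute force over all red/blue colorings:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """True if some triangle in K_n has all three edges the same color."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_forced(n):
    """True iff every 2-coloring of K_n contains a monochromatic triangle."""
    edges = list(combinations(range(n), 2))  # K_n has n*(n-1)/2 edges
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))  # 2^|edges| colorings
    )

print(every_coloring_forced(5))  # False: some coloring of K_5 avoids one
print(every_coloring_forced(6))  # True: none of K_6 does, so R(3,3) = 6
```

Even this toy case checks 2^15 = 32,768 colorings for K_6. R(5,5) is only known to lie between 43 and 48, and a single candidate like n = 43 already has 2^903 colorings, which is why brute force, and so far every cleverer approach, fails.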
If you spend any time in or around the AI notkilleveryoneism community, you will have heard many calls to slow down AI development. Some claim the aliens are already at our doorstep and we’ll have to solve the harder problem. Some believe we still have some time and the problem is more like R(5,5). I am on the more optimistic side, but note that we still have not calculated R(5,5) since Erdős came up with his thought experiment around 1990.
This shows a key problem with AI alignment research, and with technical research in general: it is extremely hard to predict how difficult key problems are and how long they will take to solve. Sometimes a problem that has been unsolved for 15 years is suddenly resolved in an elegant 6-page paper. I also wouldn’t want to be too optimistic: AI capabilities have consistently advanced faster than was thought possible, whereas AI alignment seems to severely lag behind.
While there are some labs actively trying to make AI not kill everyone, many more labs seem to be fully locked into the AI arms race. The AI labs with the most resources, producing the most research, are probably Google/DeepMind, OpenAI, and maybe Facebook. What governments and militaries are doing is, of course, impossible to know. Due to the profit incentives and the decentralized nature of this research, even the more optimistic people in alignment believe that we cannot slow down capabilities research through policy and global coordination. Paul Christiano writes:
“Don’t build powerful AI systems” appears to be a difficult policy problem, requiring geopolitical coordination of a kind that has often failed even when the stakes are unambiguous and the pressures to defect are much smaller. I would not expect humanity to necessarily “rise to the challenge” when the stakes of a novel problem are very large. I was 50-50 about this in 2019, but our experience with COVID has further lowered my confidence.
As a mild counter, governments can sometimes effectively crack down on technologies they don’t like, and sometimes they even coordinate to do so. But I suspect this won’t happen as easily for AI (or for capabilities research in biology), because the downsides are easy to ignore. And the downsides look very scary.
The big difference between nuclear weapons and Greek fire on the one hand, and today’s x-risks like synthetic biology and AI on the other, is that the former were developed exclusively by governments and actively kept secret. The latter are driven by profit incentives, have huge potential upsides for humanity, and their potential lethality is hard to perceive and can emerge from research done anywhere.
To my naive mind it seems like we could reduce risks from synthetic biology by regulating retailers of DNA and protein synthesis services. This could be done with a lot of precision by, for example, checking orders against a database of dangerous sequences and requiring additional clearance for hazardous material. In contrast, AI research seems much harder to regulate. Chips, by design, can be used for anything, and cannot be regulated without disrupting basically everything in the economy.
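As a toy illustration of what such order screening could look like (the hazard signatures and the exact-substring matching rule here are entirely made up; real screening frameworks rely on curated databases of sequences of concern and sequence-similarity search):

```python
# Hypothetical screening at a DNA synthesis retailer. The signature list
# below is invented for illustration; a real system would match orders
# against a curated hazard database using similarity search, not exact
# substring matching.
HAZARD_SIGNATURES = {
    "example_toxin_fragment": "ATGGCCAAAGTT",
    "example_virulence_gene": "GGTACCTTGACA",
}

def screen_order(sequence: str, customer_cleared: bool) -> str:
    """Decide whether a synthesis order is approved, audited, or held."""
    seq = sequence.upper()
    hits = [name for name, sig in HAZARD_SIGNATURES.items() if sig in seq]
    if not hits:
        return "approved"
    if customer_cleared:
        return "approved_with_audit"  # clearance exists; log hits for review
    return "held_for_clearance"       # flagged, and customer lacks clearance

print(screen_order("ccgttaacgt", customer_cleared=False))        # approved
print(screen_order("aaATGGCCAAAGTTcc", customer_cleared=False))  # held_for_clearance
```

The point of the sketch is only that the check happens at a natural chokepoint, the retailer, rather than at every lab bench; that is what makes this kind of regulation look more tractable than regulating AI compute.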
Maybe we will be able to create a Manhattan Project for alignment and solve our problem before the aliens get here. Maybe we will solve all the relevant problems in a beautiful 10-page paper. But until then, we should probably be very careful with our advancement of AI.