While some of the clamor about the possible extinction of humanity by an AI taking on a life of its own is reminiscent of the earlier talk about robots endangering jobs, industry is already thinking very concretely about possible uses of AI, or is running pilot projects. There is no doubt in my mind that this deployment will soon become a success story for industry.
When robots entered industry, there was great fear that they would make human labor superfluous and destroy jobs en masse. Decades later, Germany, and Central Europe more broadly, remains a leading industrial location worldwide, much like Japan, thanks to its excellent products and not least the mass introduction of robot-assisted automation. At least in industry, there is hardly any debate anymore about whether robots are a curse or a blessing; the advantages are too clear. And incidentally: unemployment is currently low, while an estimated half a million skilled workers are missing.
We are talking about robots that typically emulate one or two arms, usually much larger and stronger than human ones. These arms move with great precision, exactly as they were programmed to, entirely without eyes and long before modern AI, and they reach places that human arms never could. By now, we have become accustomed to these arm robots.
After its release at the end of 2022, ChatGPT reached more than 100 million users faster than any other system before it. There is hardly an area of business or society in which this language-model-based AI has not already been tried out, partly with astonishing and surprising success, partly with the serious errors we already knew from the language models of past decades.
The astonishing success rests on the further development of the algorithms, growing experience in training AI, and the unchecked, steep increase in the mass of available data. The mistakes can be explained partly by the fact that the technology is still in its infancy, and partly by far too great expectations of AI and its general overestimation.
And once again there is alarmism, which this time sees not only human labor under threat but humanity itself. In this telling, AI is practically a killer of humanity: if we don't put the brakes on it immediately, it will eventually take on a life of its own and wipe us out. Here, AI is no longer discussed as a technology, but as if it could become a rival to humans themselves.
It’s not AI, but its use by humans that can become a danger
To add to the confusion, among the most prominent proponents of such an AI moratorium are, of all people, leading AI minds such as multi-billionaire Elon Musk and Apple co-founder Steve Wozniak. It borders on comical when Sam Altman, the head of OpenAI, the maker of ChatGPT, likewise expresses concern before the US Congress about what could go wrong with AI.
For those who want to arm themselves against this doomsday mood and get their feet back on the ground, I recommend two crisp guest articles by Prof. Ralf Otte in the FAZ: "Die große KI-Illusion" (May 24, 2023) and "Auslöschungsrisiko? Artificial intelligence is just vastly overestimated" (June 19, 2023).
Ralf Otte is not a professor of ethics or philosophy. He works at the Institute for Automation Systems at the Technical University of Ulm, so he knows what he is talking about. He gets to the heart of the fundamental difference between humans and AI machines in an easily understandable way. At the end of May, he writes:
“If a reader were to ask an AI expert to show him a neural network in a computer in concrete terms, with a magnifying glass and tweezers, so to speak, he would learn that there are no neural networks in computer memory at all. All one finds there are mathematical equations and algorithms; all deep learning networks are purely algorithmic simulations. Not so in the brain: there, real neural networks are to be found, can even be detected individually and examined under a microscope. Without any software, information processing in the brain is mapped entirely in the tissue, in the individual neuron, in the topology of the network.”
And elsewhere in the same article:
“A silicon crystal in a computer (…) is closer to a stone than to a primitive amoeba; a silicon crystal, just like a stone in the garden, will never want anything. Dead systems want nothing, nor do they fight for their existence, no matter how good the science fiction films that deal with such material may be.
That is why the current calls for an AI moratorium are smokescreens; they cleverly divert concern to the technology itself, and away from where we should really be paying attention: the creeping transfer of power to technical devices that make decisions about us humans. The owners of these devices would then have erected an almost insurmountable protective wall between themselves and us.”
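Otte's observation that deep learning networks in a computer are "purely algorithmic simulations" can be made concrete with a minimal sketch. The toy two-layer network below uses arbitrary example weights chosen purely for illustration (nothing here comes from Otte's article itself); the point is that what actually sits in memory and executes is nothing but arrays and arithmetic, two matrix products and a max().

```python
# A "neural network" in a computer is just numbers and arithmetic:
# no tissue, no neurons, only arrays and equations.
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One forward pass: a linear map, a ReLU, and another linear map."""
    hidden = np.maximum(0.0, x @ W1 + b1)  # "layer 1" is a matrix product plus max()
    return hidden @ W2 + b2                # "layer 2" is another matrix product

# Example weights, chosen arbitrarily for illustration.
W1 = np.array([[1.0, -1.0],
               [0.5,  2.0]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[ 1.0],
               [-1.0]])
b2 = np.array([0.5])

x = np.array([1.0, 2.0])   # an arbitrary input
y = forward(x, W1, b1, W2, b2)
print(y)                   # a plain number, computed by ordinary arithmetic
```

Whatever one makes of the philosophical conclusion, the technical observation holds: inspecting this program with "a magnifying glass and tweezers" reveals only ordinary numerical computation.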
It is not AI that is a threat, nor will it ever be on the basis of today's computers. A real threat, however, can arise from its use by humans. People and organizations that own AI and market its use on a large scale, such as the major corporations Amazon Web Services, Google, Microsoft, and a few others, should therefore be subject to very clear rules in this business. And people who sit at the levers of political power should, first, know and be able to judge what AI is and, second, likewise be subject to very clear rules when they use AI to exercise that power.
Secure framework for industrial head robots
To date, the manufacturing industry has taken a largely pragmatic approach to AI. It is looking for genuinely useful applications, the famous use cases, to optimize industrial processes or, for example, to use energy more effectively and save it. Here too, the problem today is not the threatened destruction of jobs. The problem is that industrial process data is a vital asset of industry.
The engineers' development data, the data from production and assembly, from the production facilities and test equipment, from product use at customers' sites, and much else besides: these are crucial for a company and its market success. A company cannot simply make them available for training an AI the way a private user of a search engine or an e-commerce marketplace does. It must be guaranteed that this corporate capital is not lost in the process, for example because the aforementioned corporations may be required by the U.S. government to hand over the data, or because a competitor uses the data to gain an unfair advantage in the market.
It is therefore more than desirable that, alongside the U.S. and Chinese corporations that have grown into the absolute global AI market leaders in the consumer sector, there are trustworthy AI providers to whom the data can be entrusted for analysis and evaluation, because contracts can be concluded with them that comply with our rules of industrial competition. GAIA-X and the German Edge Cloud, now part of the Friedhelm Loh Group, are moving in this direction, for example.
Just as certainly as our industrial location would not have achieved and maintained its position in the world without arm robots and the earlier forms of automation, head robots, i.e. AI systems, are now coming to industry. Another challenge here is a typically industrial one: just as arm robots had to be industrially developed and built for a wide variety of purposes in all sectors of industry and beyond, a wide variety of head robots must now be developed and built industrially.
The mere availability of ChatGPT does not solve a single automation problem in a machine-building company. Industry will have to build its own specialized AI systems. To return to the concern about jobs: in all likelihood, there will very soon be a large industry with many jobs devoted to researching, developing, building, selling, and deploying industrial AI. Hopefully, and probably, in this country, the land of hardware specialists and pioneers of industrial automation. Let the head robots come.
(P.S.: The images, by the way, were not generated with AI but downloaded from the image service 123rf.)