AI – it is not intelligent!

by Eliane Perret

Nowadays, companies, government agencies, and educational institutions are investing considerable sums of money and resources in cutting-edge computer technologies made possible by artificial intelligence (AI). These technologies are intended to take over routine tasks, write texts, programme, diagnose illnesses, analyse problems, and predict developments. In schools and universities, AI programmes are meant to enable personalised learning processes. While some people are euphoric about this development, others look to the future with concern, fearing for their jobs (and quite a few suspect easy money to be made through new opportunities for fraud, exploiting as-yet-undiscovered security vulnerabilities). This is reason enough to examine the current issues.1

Artificial? Yes – but not intelligent

The term “artificial intelligence” does not quite capture the essence of the matter, because intelligence plays no part in it. The term was coined in 1955 by four American computer scientists, who used it in a grant application for a study.2 A more accurate term would be “simulated intelligence” (SI)3, since AI merely simulates or mimics human learning. “Artificial intelligence” is simply the technical processing of existing data to generate answers to prompts entered by users. AI chatbots use a so-called “Large Language Model” (LLM) that decides at each step – not according to fixed rules, but based on probabilities – which word should come next. So, no independent thought or ideas! Therefore, AI can only be used meaningfully and reliably as a tool in a carefully designed, verifiable, and monitorable context. It can take over repetitive routine tasks and facilitate and accelerate automated processes. A prerequisite is always high-quality input data provided by humans (e.g., precise questions), so that AI can respond within parameters that are likewise precisely defined by humans. In this respect, AI may amaze us with its efficiency. However, as the author and media scholar Matthias Zehnder aptly points out, as a “statistical parrot” it always draws on the past.
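The step-by-step word selection described above can be sketched in a few lines of Python. The probability table and function name here are invented for illustration only; a real language model derives such probabilities from vast amounts of text and considers tens of thousands of candidate words at each step:

```python
import random

# Toy sketch (not a real LLM): an invented table giving, for a context of
# preceding words, the probability of each candidate next word.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
}

def pick_next_word(context, rng=random.Random(0)):
    """Select the next word by weighted chance, not by understanding."""
    candidates = next_word_probs[context]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

print(pick_next_word(("the", "cat")))  # prints one of "sat", "ran", "slept"
```

Because the choice is drawn by chance from a probability distribution, the same prompt can yield different continuations on different runs – which is also why identical inputs to an AI chatbot can produce different answers.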

A smooth surface

AI thus operates on a purely mathematical level, into which sentence structure, grammar, declension, and conjugation are also translated. It statistically analyses vast amounts of data and patterns. For this, it has access to the computer’s hardware: a processor that performs calculations, a hard drive that stores data, and several specialised chips that handle data transfer and screen control. Programmes “tell” the hardware components what to do. However, AI always remains at the symbolic level, that is, on the surface of language. Against this backdrop, it can generate possible word combinations and continuations of a word sequence. Texts generated in this way may be impressive at first glance – even if their content can be nonsensical and incorrect. Human intervention is needed to verify and take responsibility for their meaning and veracity. If, for instance, AI wrote: “He brought his girlfriend a bouquet of pink roses”, the sentence is correct – but AI does not know what it is writing, because the semantic level of language remains inaccessible to it. It has no conception of what it describes and cannot put into words what it is like to select the most beautiful roses, to smell their fragrance, to feel the velvety petals between one’s fingers, or to delight in the delicacy of their colour. It has never experienced the heart swelling when a loved one’s eyes light up as they hold the flowers. Only humans can express this in words. They are masters at putting thoughts, desires, and experiences into words, giving them meaning, and connecting them to what they have lived through – to their lives and the world. That is why AI remains a “machine made of sheet metal and silicon that manipulates algorithms.”4
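This purely symbolic, statistical view of language can be illustrated with a toy Python example using the rose sentence above. The code records only which symbol follows which; nothing in it touches what the words mean:

```python
from collections import Counter

# The program sees only symbols and their order – never what the words mean.
text = "he brought his girlfriend a bouquet of pink roses"
words = text.split()

# Count how often each word follows another (bigram statistics).
bigrams = Counter(zip(words, words[1:]))

# "roses" follows "pink" once in this tiny corpus – a pattern in the data,
# not any knowledge of flowers.
print(bigrams[("pink", "roses")])  # prints 1
```

Scaled up to billions of such counts (and far richer statistical models), this is the kind of surface regularity from which plausible-sounding text is assembled.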

Stop – wrong thinking

Several misconceptions are common today regarding “artificial intelligence”. AI’s ability to respond quickly and eloquently to prompts can lead users to trust its output and even attribute human-like qualities to it. What they forget is that it is simply an artificial network with millions of artificial synapses. Nor is it a conventional computer programme following explicit code that could be corrected if necessary. Its decision-making logic often remains opaque, even to experts. This becomes clear to any AI user when they realise that AI produces different results even with identical inputs, or apologises submissively when it cannot answer questions about its underlying sources.

Endless power consumption and pressing ethical questions

The data AI requires is extracted from the internet without regard for copyright law. It must then be processed – “trained” – on GPUs (Graphics Processing Units)5 over several months before it is usable.
    Only then are the prerequisites for using the data in AI models in place. This training involves the simultaneous use of thousands, sometimes even tens of thousands, of GPUs. It entails an enormous demand for electricity and carries the risk that electricity will become an increasingly scarce resource. It is therefore not surprising that large technology companies are already investing in energy sources – and even calling for the construction of nuclear power plants – to secure electricity for their own needs. Through extensive lobbying, they safeguard their interests and try to head off planned regulation. A great deal of power and money is at stake! These political and economic dimensions often remain hidden from today’s AI users. Many of them are enthusiastic about the new technological possibilities and impressed by the speed and (seemingly) limitless knowledge of AI. The development of ever newer AI capabilities has progressed rapidly in recent years. For example, research and testing have been underway for some time into how AI can automate and independently control its own processes. Where will this lead? Such research and projects raise very serious ethical questions. Not only is AI readily accessible and used in companies, universities, and schools; the arms industry is also using it to develop unmanned killing machines that are deployed in modern wars. This should give us pause.

Courage to solve unresolved questions

This is precisely what is often “overlooked” in today’s euphoria – ideology and propaganda cloud our vision. All the more reason then for humans, possessing human intelligence, empathy, intuition, and creativity, to take responsibility as social beings and to assess whether and how new technological achievements can serve humanity. This requires the openness to think broadly when faced with unresolved questions and unexpected emerging problems. It demands human dialogue, debate among equals, enthusiasm born from a shared interest in a common cause, and the courage to give emerging ideas room to be implemented.
    This necessity of grounding AI in a commitment to human well-being is also expressed in an interview6 with Dr Ladan Pooyan-Weihs, professor and lecturer at the Lucerne University of Applied Sciences and Arts (Computer Science), when she is asked: “If so many experts are warning of dangers, there must be some truth to them, right?” Her answer: “Technological advances are welcome, but often misunderstood, such as in the claim that AI can cure diseases. In fact, AI is specifically programmed to address a particular problem. It is more precise, faster, and more tireless than humans. But it lacks consciousness, moral capacity, and many other qualities that define us as human.” And when asked whether a Hippocratic Oath for computer scientists – a pledge of digital ethics – would be necessary, she says: “It already exists, but it is not legally binding: The Holberton-Turing Oath [see box] for algorithm developers prioritises humanity over technology. We are still a long way from such an oath being taken, for example, as part of a state examination. There are only individual initiatives, but no comprehensive, globally binding guidelines. In France, for example, I am familiar with the initiative ‘Data for Good’.”7 With this, she addresses crucial ethical questions that arise in connection with the development of “artificial intelligence” – a task that is nowhere near solved today and urgently needs to be tackled! This includes a return to the foundations and achievements of our culture. That would also be an answer to the question posed by the Hungarian pianist, conductor, and music educator András Schiff in the Tonhalle programme: “What constitutes art are the immeasurable elements – inspiration, moods, everything that cannot be analysed. ‘Artificial intelligence’ soaks up all information like a super sponge, but there is not an atom of originality, only imitation. … In that context, humanity would have no value anymore. I cannot applaud that. What is the point?”8

Sources:
    An important source for this article was the website www.matthiaszehnder.ch and the 2019 book Die digitale Kränkung (The Digital Humiliation) by media scholar and author Matthias Zehnder. They are a treasure trove for anyone interested in the topic.

1 The following will primarily focus on texts generated by AI and the fundamental questions associated with them. However, they also apply equally to image creation using AI tools.

2 Salathé, Marcel. Compass Artificial Intelligence: A Guide Through a World in Crazy Times. Lachen: Wörterseh, 2025, p. 27

3 The term “simulated intelligence” was coined by Prof. Dr. Hans Köchler: “The latest stage in the development of digital technology is the promotion of ‘artificial intelligence’ (AI), which should actually be called SI – simulated intelligence.”
In: Köchler, Hans. (2024). Die Trivialisierung des Öffentlichen. Kulturanthropologische Überlegungen zum Digitalzeitalter. Ein Vortrag.
(The trivialisation of the public. Cultural anthropological reflections on the digital age. A lecture.) Publisher: IPO International Progress Organization. p. 18 (of the German original) ISBN 978-3-900704-38-4.

4 Zehnder, Matthias. 2019. p. 113

5 GPUs are specialised computer chips originally developed for computer graphics. The US company Nvidia has a near-monopoly on GPUs and is one of the world’s leading technology companies.

6 Bonin, Gabriela. “Künstliche Intelligenz gibt es eigentlich nicht.” (Artificial intelligence does not really exist.); https://hub.hslu.ch/informatik/kunstliche-intelligenz-gibt-es-nicht-wichtig-ist-digitale-ethik/

7 https://dataforgood.fr

8 Programme booklet Tonhalle Zurich. Concert of 17 January 2026. Interview with the artist Sir András Schiff

The Holberton-Turing Oath

ep. The Holberton-Turing Oath was written in 2018 by the French digital scientist and entrepreneur Aurélie Jean (born 1982) and the Belgian-American computer scientist and entrepreneur Grégory Renard (born 1975). They developed this ethical code, inspired by the Hippocratic Oath, to create a common ethical foundation for artificial intelligence professionals. It was named after the two computer science pioneers Frances Elizabeth “Betty” Holberton (1917–2001) and Alan Mathison Turing (1912–1954), for whom ethical principles in their field were important. The document can be continuously supplemented with contributions from AI experts, philosophers, economists, business leaders, and members of the public.

The Holberton-Turing Oath

As a member of the data science and artificial intelligence profession, I solemnly pledge to dedicate my life to the service of Humanity:

Humanity & Ethics:

  • I will maintain the utmost respect for human life;
  • I will not permit considerations of age, disease or disability, creed, ethnic origin, gender, nationality, political affiliation, religious beliefs, race, sexual orientation, social standing or any other factor to intervene in duty;
  • I will not use my knowledge to violate human rights and civil liberties, even under threat;

Data Science, Art of Artificial Intelligence, Privacy & Personal Data:

  • I will respect the hard-won scientific gains of those scientists and engineers in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow;
  • I will remember that there is an art to Artificial Intelligence as well as science, and that human concerns outweigh technological ones;
  • I will respect the privacy of humans for their personal data are not disclosed to Artificial Intelligence systems so that the world may know;
  • I will remember that I am not encountering dry data, mere zeros and ones, but human beings, whose interactions with my Artificial Intelligence software may affect the person’s freedom, family, or economic stability;
  • I will respect the secrets that are confided in me;

Daily work & Etiquette:

  • I will practice my profession with conscience and dignity;
  • I will foster the honour and noble traditions of the data science and artificial intelligence profession;
  • I will give to my teachers, colleagues, and students the respect and gratitude that is their due;
  • I will share my knowledge for the benefit of the people and the advancement of Data-Science and Artificial Intelligence;
  • I will consider the impact of my work on fairness both in perpetuating historical biases, which is caused by the blind extrapolation from past data to future predictions, and in creating new conditions that increase economic or other inequality;
  • I make these promises to create Artificial Intelligence, first, to collaborate with people for the greater good, rather than usurp the human role and supplant them;

I make these promises solemnly, freely, and upon my honour.

Source: The Holberton-Turing Oath. https://www.holbertonturingoath.org/
