Why humans will never understand AI



Many of the pioneers who began developing artificial neural networks weren't sure how they actually worked – and we're no more certain today.

Jack D. Cowan, a mathematician and theoretical biologist in his early 20s, met Wilfred Taylor and his strange new "learning machine" in 1956 while on a year-long trip to London. On arrival he was baffled by the "huge bank of apparatus" that confronted him. Cowan could only stand and watch "the machine doing its thing". It appeared to be executing an "associative memory scheme" – it seemed able to learn how to find connections and retrieve data.

Although it may have looked like clunky blocks of circuitry, hand-soldered together in a mass of wires and boxes, what Cowan was witnessing was an early analogue form of a neural network – a precursor to today's most advanced artificial intelligence, including the much-discussed ChatGPT, with its ability to generate written content in response to almost any command. ChatGPT's underlying technology is a neural network.

As Cowan and Taylor stood and watched the machine work, they had no idea exactly how it was managing to complete the task. The answer to the mystery of Taylor's machine brain lay somewhere in its "analogue neurons", in the associations made by its machine memory and, most importantly, in the fact that its automated functioning couldn't really be fully explained. It would take decades for these systems to find their purpose and for that power to be unlocked.

"Neural networks – also known as artificial neural networks (ANNs) or simulated neural networks (SNNs) – are a subset of machine learning and are at the heart of deep learning algorithms" is IBM's definition of the term "neural networks," which encompasses a wide range of systems. Significantly, the actual term and their structure and design are "motivated by the human mind, imitating the way that natural neurons sign to each other".

There may have been some lingering doubts about their value in the early stages but, as the years have passed, AI fashions have swung firmly towards neural networks. They are now often understood to be the future of AI. They have big implications for us and for our understanding of human nature. Recent calls to pause new AI developments for six months, in order to be confident of their implications, have echoed these concerns.

It would be a mistake to think of neural networks as being solely about flashy, eye-catching new gadgets. They are already well established in our lives. Some are powerful in their practicality. As far back as 1989, a team at AT&T Bell Laboratories used back-propagation techniques to train a system to recognise handwritten postal codes. Microsoft's recent announcement that Bing searches will be powered by AI, making it your "copilot for the web", illustrates how the things we discover and how we understand them will increasingly be a product of this type of automation.
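For a rough sense of what back-propagation involves, here is a minimal sketch, assuming a single sigmoid neuron and made-up training data rather than the Bell Labs system itself. The error at the output is propagated backwards through the neuron's activation to nudge each weight in the direction that reduces the error.

```python
import math

# Made-up training data for illustration: (inputs, target output).
data = [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0)]

weights, bias, lr = [0.5, -0.5], 0.0, 0.5

def forward(x):
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

for _ in range(1000):
    for x, target in data:
        out = forward(x)
        # Back-propagation for one neuron: the squared-error loss is
        # differentiated back through the sigmoid to get an error signal...
        delta = (out - target) * out * (1 - out)
        # ...which is then used to adjust each weight and the bias.
        weights = [w - lr * delta * xi for w, xi in zip(weights, x)]
        bias -= lr * delta

# The outputs have moved towards the targets 1.0 and 0.0.
print([round(forward(x), 2) for x, _ in data])
```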

Drawing on vast amounts of data to identify patterns, AI can similarly be trained to perform tasks such as image recognition at speed – which has led to its incorporation into facial recognition, for instance. This ability to identify patterns has produced numerous other applications, such as predicting stock markets.


Neural networks are also transforming how we interpret and communicate. Developed by the Google Brain Team, Google Translate is another prominent application of a neural network.

Nor would you want to play chess or shogi with one. Their grasp of rules and their recall of strategies and all recorded moves mean they are exceptionally good at games (although ChatGPT seems to struggle with Wordle). The systems that are troubling human Go players (Go is a notoriously tricky strategy board game) and chess grandmasters are built from neural networks.

But their reach goes far beyond these instances and continues to expand. At the time of writing, a patent search restricted to mentions of the exact phrase "neural networks" produced 135,828 results. With this rapid and ongoing expansion, the chances of our fully explaining AI's influence may grow ever slimmer. These are the questions I have been examining in my research and my new book on algorithmic thinking.

Mysterious layers of 'unknowability'

Looking back at the history of neural networks tells us a lot about the automated decisions that define our present, and about those that may have an even more profound impact in the future. Their presence also tells us that we are likely to understand AI's decisions and effects less and less over time. These systems are not simply black boxes – not just hidden bits of a system that can't be seen or understood.

It is likely that the greater the influence artificial intelligence comes to have in our lives, the less we will understand how or why.

This is something different, rooted in the aims and design of these systems themselves. There is a long-held fascination with the unexplainable: the more opaque the system, the more authentic and advanced it is thought to be. It is not just about the systems becoming more complex, or the control of intellectual property limiting access (although these are part of it). It is to say that the ethos driving them has a particular and embedded interest in "unknowability". The mystery is even coded into the very form and discourse of the neural network. They come with deeply piled layers – hence the phrase "deep learning" – and within those depths are the even more mysterious-sounding "hidden layers". The mysteries of these systems lie deep below the surface.
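To see how literal that language is, here is a minimal sketch of a tiny "deep" network (layer sizes and random weights invented for illustration). The hidden layers are simply the stacked intermediate computations between input and output – values that no user of the system ever observes directly.

```python
import math
import random

random.seed(0)

def layer(n_in, n_out):
    # A fully connected layer: one list of input weights per neuron.
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer_weights, inputs):
    # Each neuron weighs the inputs and applies a sigmoid activation.
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(neuron, inputs))))
            for neuron in layer_weights]

# Layers "piled high": two hidden layers sit between input and output.
hidden1, hidden2, output = layer(3, 4), layer(4, 4), layer(4, 1)

signal = [0.2, 0.7, 0.1]            # the visible input
h1 = forward(hidden1, signal)       # first hidden layer's activations
h2 = forward(hidden2, h1)           # second hidden layer's activations
print(forward(output, h2))          # the visible output
```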

AI is receiving a great deal of attention at the moment, and understandably so: we want to know how it works and how its decisions and outcomes are reached. The European Union, concerned about the potentially "unacceptable risks" and even "dangerous" applications, is currently advancing a new AI Act intended to set a global standard for "the development of secure, trustworthy and ethical artificial intelligence".

Those new laws will be based on a need for explainability, demanding that "for high-risk AI systems, the requirements of high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness, are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI". This is not just about things like self-driving cars (although systems that ensure safety fall within the EU's category of high-risk AI); it is also a worry that systems will emerge in the future that have implications for human rights.

This is one of many calls for AI to be transparent so that its actions can be checked, audited, and evaluated. "Policy debates across the world increasingly see calls for some form of AI explainability, as part of efforts to embed ethical principles into the design and deployment of AI-enabled systems," according to the Royal Society's policy briefing on explainable AI.

Yet the story of neural networks suggests that we are likely to move further away from that goal in the future, rather than closer to it.

