HUMANS, TECHNOLOGY, COGNITIVE SCIENCES & ARTIFICIAL INTELLIGENCE (Part 1)

Joan Cornet, Director of the Digital Health Observatory and Coalition of the Willing at ECHAlliance, muses on how the growth of artificial intelligence is causing trouble for the cognitive and emotional system of the ordinary human.

Humans are the most successful species on this planet.

From small transhumant tribes, we have, over thousands of years, built an extraordinarily complex and effective civilization. Although there are huge differences in wealth and access to basic services, most of the world’s population has a higher quality of life than 50 years ago.

Of course, there are many hot topics such as climate change, extreme poverty in parts of the globe, inequality between men and women, wars and mass displacement, but these would be the subject of other articles. And we shouldn’t forget that humans, like all of nature, are an evolving species. The brevity of our lives prevents us from realizing this as we move through the world.

Despite the success of the human species, individual people are extraordinarily fragile and limited. We are probably among the weakest of the mammals. To combat a hostile environment and our own weakness, humans have had to develop artifacts, inventions and technologies that have helped us meet many challenges. These tools and technologies are a key element for individuals and for society.

Back in time

It all started over a million years ago. We’ll never know exactly when the first tools appeared; the earliest type we know of, discovered almost a century ago, is named the Oldowan. We don’t know exactly how it was used either, but it was probably crucial for killing wild animals, and perhaps also for getting rid of enemies.

The Oldowan was a widespread stone tool in prehistory. These early tools were simple, usually made with one or a few flakes chipped off with another stone. Oldowan tools were used during the Lower Palaeolithic period, from 2.6 million years ago up until 1.7 million years ago, by ancient hominins (early humans) across much of Africa, South Asia, the Middle East and Europe. This technological industry was followed by the more sophisticated Acheulean industry. The term Oldowan is taken from the site of Olduvai Gorge in Tanzania, where the first Oldowan lithics were discovered by the archaeologist Louis Leakey in the 1930s. (Extract from Wikipedia: https://en.wikipedia.org/wiki/Oldowan)

Oldowan choppers dating to 1.7 million years BP, from Melka Kunture, Ethiopia
By Didier Descouens – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=11291046

It’s a long stretch of history from the Oldowan tool, 1.7 million years ago, to the Artificial Intelligence we are wrestling with now. The biggest technological changes have taken place in the last two centuries, and we are now living through an exponential growth of technologies in our own time. With the implementation of Artificial Intelligence, we enter another dimension, and the growth will surely become steeper still!

The limit is not in the AI technologies themselves; it is in knowing how humans can assimilate such important changes, and especially how our cognitive and emotional system can adapt to a new model of society, one that risks being divided into haves and have-nots. Sadly, our brain is still much like that of our ancestors thousands of years ago. Humans have been roaming the savannah for quite some time now, measured in the millions of years, but ‘modern’ humans, the ones we are used to today, have only been around for less than 200,000 years, new research suggests. (1)

WE NEED MORE TECHNOLOGY TO SURVIVE AND AT THE SAME TIME WE ARE AFRAID OF NEW TECHNOLOGIES

Every technological breakthrough has been followed by a series of criticisms and fears or phobias. A paradigmatic example is that of the Luddites.

The Luddites were a secret oath-based organization of English textile workers in the 19th century, a radical faction which destroyed textile machinery as a form of protest. The group was protesting the use of machinery in a “fraudulent and deceitful manner” to get around standard labour practices. Luddites feared that the time spent learning the skills of their craft would go to waste, as machines would replace their role in the industry.

The Luddites were not, as has often been portrayed, against the concept of progress and industrialisation as such, but rather against the idea that mechanisation would threaten their livelihood and the skills they had spent years acquiring. The group went about destroying weaving machines and other tools as a form of protest against what they believed to be a deceitful method of circumventing the labour practices of the day. The replacement of people’s skilled craft with machines would gradually displace their established roles in the textile industry, something they were keen to prevent; they were not simply trying to halt the advent of technology. (Excerpt from Wikipedia)

Although we can be proud of historical technological successes, when we talk about digital technologies, robotics or Artificial Intelligence, similar fears reappear.

A FEW HISTORICAL EXAMPLES

Mixing the words ‘fear’ and ‘technology’ has been a recurring theme for centuries. You don’t need to go back to the Middle Ages or later centuries to see how human beings are frightened of their own creations. What’s more, every technological novelty, either because of the device itself or because of the novel uses made of it, adds new fears to the list. Now that everything seems to have to be digital, new digital phobias have appeared.

  • Socrates, who never wrote, said that the invention of writing would produce forgetfulness and only a semblance of wisdom, but not truth or real judgment. His student Plato, writing on a scroll, agreed, saying that writing was a step backward for truth.
  • In 1877, The New York Times published a ferocious attack on Alexander Graham Bell’s telephone for its invasion of privacy, with one writer declaring, “We will soon be nothing but transparent heaps of jelly to each other.” The wealthy Mark Twain was the first in his town to put a phone in his house, yet he passed on an opportunity to be an early investor, thinking the telephone had no market.
  • When Marconi invented wireless telegraphy (the forerunner of radio) in 1895, he wrote to the Ministry of Post and Telegraphs, explaining his wireless telegraph machine and asking for funding. He never received a response to his letter; instead, the minister referred Marconi to an insane asylum.
  • In his 1929 book, Sigmund Freud asks, “What are trains for but to separate ourselves from our children?”, making it clear that for him the train was a symbol of the breakdown of the family home, as well as a source of emotional frustration.
  • IBM Chairman and CEO Thomas J. Watson famously said in 1943, “There is a world market for about five computers.”

ARTIFICIAL INTELLIGENCE AND HUMAN COGNITIVE COMPETENCES

Artificial Intelligence (AI) is a science and a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason, and act.

AI is already having, or is projected to have, the greatest impact in the following areas:

  • transportation,
  • healthcare,
  • education,
  • low-resource communities,
  • public safety and security,
  • employment and workplace,
  • home/service robots, and
  • entertainment.

Zimbardo & Gerrig (2007) see artificial intelligence as a domain of the cognitive sciences. The different strands of the cognitive sciences, such as neuroscience and linguistics, influence AI research massively, since AI (whether strong or weak) seeks to mimic these cognitive processes, from speech recognition to the ability to engage in complex conversations.

Stanford University has published research reports on AI since 2009. These describe the main pillars of AI research: large-scale machine learning, deep learning, reinforcement learning, robotics, computer vision, natural language processing, collaborative systems, crowdsourcing and human computation, algorithmic game theory and computational social choice, the internet of things, and neuromorphic computing. Further, eight domains of implementation are defined: transportation, home service and robotics, healthcare, education, low-resource communities, public safety and security, employment and workplace, and entertainment. (4)

Independent of their complexity, all problems have one thing in common: there are missing pieces of information that must be found or filled in to solve the problem. From this initial state, the goal is to move to a target state by making use of various mental operations.

These are the three elements of the problem space: the initial state, the target state, and the operations that connect them. To solve a problem, the boundaries of the initial and target states must be analysed and defined in order to select the steps needed to reach the solution.

If the initial and target states are clearly defined, an algorithm can be used: a systematic method that always leads to the right solution, like trying every combination on a combination lock.
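To make this concrete, here is a minimal sketch in Python of such an algorithm: an exhaustive search over a fully defined problem space, in this case every possible code of a combination lock. The is_open test function and the three-digit code are illustrative assumptions, not taken from the article; the point is simply that when the initial and target states are well defined, a systematic procedure is guaranteed to reach the solution.

```python
from itertools import product

def solve_lock(is_open, digits="0123456789", length=3):
    """Try every possible combination in a systematic order.

    `is_open` is a hypothetical stand-in for the lock: it returns True
    only for the correct code. Because the search enumerates the whole
    problem space, it always finds the code if one exists.
    """
    for attempt in product(digits, repeat=length):
        code = "".join(attempt)
        if is_open(code):
            return code
    return None  # no solution exists in this problem space

# Usage: the solver does not know the target state, it only knows how to test it.
print(solve_lock(lambda code: code == "042"))  # -> "042"
```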

COGNITIVE IMPACT OF ARTIFICIAL INTELLIGENCE

There are currently four types of Artificial Intelligence, each of which has its impact on what we call cognition.

Dr. Steven Pinker, Harvard University (USA), investigates invisible, mental processes and the connection to technology: https://www.youtube.com/watch?v=AeoyzqmyWug

Types of Artificial Intelligence

Reactive Machines. This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world.

They can’t interactively participate in the world the way we imagine AI systems one day might. Instead, these machines will behave the same way every time they encounter the same situation. This can be very good for ensuring that an AI system is trustworthy.
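As a rough illustration (not from the article), a reactive machine can be thought of as a fixed, stateless mapping from the current perception to an action. The perception labels and actions below are invented for the example; the key property is that nothing about past encounters is stored, so identical situations always produce identical behaviour.

```python
# A minimal sketch of a reactive agent: a stateless perception -> action table.
# The perception labels and actions are hypothetical, chosen for illustration.
REACTIONS = {
    "obstacle_ahead": "turn_left",
    "path_clear": "move_forward",
    "goal_visible": "approach_goal",
}

def react(perception: str) -> str:
    # No memory and no internal model of the world:
    # the same input always yields the same output.
    return REACTIONS.get(perception, "wait")

assert react("obstacle_ahead") == "turn_left"
assert react("obstacle_ahead") == "turn_left"  # identical situation, identical behaviour
```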

Theory of Mind. Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind”: the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behaviour.

This capacity was crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without considering what somebody else knows about us or about the environment, working together is at best difficult and at worst impossible. At this point one needs to dig deeper and take human tendencies into account: to recognise that others have their own beliefs, thoughts, intentions and desires, which will have a significant impact on their decisions and actions. (2)

Limited Memory. Here, analysing and studying past experiences helps in predicting what the future holds; think of the RAM in your computer, which stores temporary data on which predictions can then be based. This Type II class contains machines that can look into the past. Self-driving cars do some of this already: for example, they observe other cars’ speed and direction. That can’t be done in just one moment; it requires identifying specific objects and monitoring them over time.
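The sketch below (not from the article; the observation values and window size are invented) shows the idea in miniature: keep a short, rolling memory of recent observations of another car and use it to estimate its speed and direction, something a purely reactive machine could never do from a single snapshot.

```python
from collections import deque

class TrackedCar:
    """Limited-memory sketch: remember only the last few observations."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # older observations are discarded

    def observe(self, t, x, y):
        self.history.append((t, x, y))

    def velocity(self):
        # Speed and direction need at least two points in time:
        # a single snapshot is not enough.
        if len(self.history) < 2:
            return None
        (t0, x0, y0), (t1, x1, y1) = self.history[0], self.history[-1]
        dt = t1 - t0
        return ((x1 - x0) / dt, (y1 - y0) / dt)

car = TrackedCar()
for t, x in enumerate(range(0, 50, 10)):  # another car moving along the x axis
    car.observe(t, float(x), 0.0)
print(car.velocity())  # -> (10.0, 0.0): 10 units per time step, heading along x
```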

Self-Aware AI. One of the most advanced forms of AI. As the name suggests, these systems are developed in such a way that they carry self-awareness. This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and can predict the feelings of others.

We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

While we are probably far from creating machines that are self-aware, we should focus our efforts on understanding memory, learning and the ability to base decisions on past experiences. This is an important step towards understanding human intelligence in its own right, and it is crucial if we want to design or evolve machines that do more than excel at classifying what they see in front of them.

Source: Cognitive technologies, Deloitte Insights (5)

AND NOW WHAT?

For the first time, we are developing technologies that we might call interactive: we create them, we use them more and more every day, and in turn they have an impact on our lives, profoundly transforming our ways of being and acting. This is a new paradigm that will bring a new phase of our human evolution.

There are many questions to which we will have to find answers. One of the most important challenges is not the technical side of Artificial Intelligence itself; the challenge is that we are dealing with an “invisible” technology, unlike the technologies of the past such as the steam engine, electricity, the telephone, rail, the car or medical devices. We can’t see AI, and we can’t know for sure how its systems work. This leads us to a number of questions around its use:

  • Given these premises, will we know how to rely on Artificial Intelligence?
  • How can we take advantage of AI?
  • What kind of public or private organizations will ensure the ethical use of AI?
  • How can we prevent AI and other digital technologies from dividing our society?
  • Can we prevent AI from being in the hands of large corporations that escape state control?
  • What training and education will be necessary to adapt to a different labour market than the current one?

To be continued…don’t be impatient…


REFERENCES AND SOURCES CONSULTED

  1. The evolution of modern human brain shape. Simon Neubauer, Jean-Jacques Hublin and Philipp Gunz. Science Advances, 24 Jan 2018: Vol. 4, no. 1, eaao5961. DOI: 10.1126/sciadv.aao5961. https://advances.sciencemag.org/content/4/1/eaao5961
  2. A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. John E. Laird (University of Michigan), Christian Lebiere (Carnegie Mellon University) and Paul Rosenbloom (University of Southern California). https://www.aaai.org/ojs/index.php/aimagazine/article/view/2744
  3. A Stanford-led survey of trends in artificial intelligence finds advances in working with human languages, global reach. 2018. https://news.stanford.edu/2018/12/12/artificial-intelligence-report-finds-advances-working-human-language-global-reach/
  4. Artificial Intelligence & Life in 2030. Stanford University, 2016. https://ai100.stanford.edu/sites/g/files/sbiybj9861/f/ai_100_report_0831fnl.pdf
  5. Cognitive technologies. Deloitte Insights. https://www2.deloitte.com/insights/us/en/focus/cognitive-technologies/technical-primer.html
  6. Cognitive Aspects of Interactive Technology Use: From Computers to Smart Objects and Autonomous Agents. Amon Rapp, Maurizio Tirassa and Tom Ziemke. Front. Psychol., 14 May 2019. https://doi.org/10.3389/fpsyg.2019.01078 https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01078/full
