In [1]:
from IPython.display import YouTubeVideo, display, HTML

# Function to display several YouTube videos side by side
def disp_YoutubeS(*video_info, bredde=400, høyde=300):
    html = '<div style="display: flex; justify-content: space-around;">'
    for info in video_info:
        # Each entry is either a (video_id, start_time) tuple or just a video_id
        if isinstance(info, tuple) and len(info) == 2:
            video_id, start_tid = info
        else:
            video_id, start_tid = info, 0

        video = YouTubeVideo(video_id, width=bredde, height=høyde, start=start_tid)
        html += f'<div style="margin: 10px;">{video._repr_html_()}</div>'
    html += '</div>'
    display(HTML(html))

Intelligence, computers and AI technology...¶


                       ulf.indahl@nmbu.no
                    kristian.liland@nmbu.no

                   Ormsund Roklub - 08.10.24 

Group: Multivariat Dataanalyse og Maskinlæring

Courses (Realtek, NMBU): INF100, MLA210, MLA310



In [2]:
YouTubeVideo("Z-sTs2o0VuY", start=19, autoplay=0, width=500, height=270) 
Out[2]:

Ca. 1990: Tim Berners-Lee (creator of the WWW):¶

"The goal of the Web is to serve humanity. We build it now so that those who come to it later will be able to create things that we cannot ourselves imagine."¶


Ca. 1998: Google - mission statement:¶

"Our mission is to organize the world’s information and make it universally accessible and useful."¶


1968 - AI in popular culture:¶

In [3]:
YouTubeVideo("Wy4EfdnMZ5g", start=18, autoplay=0, width=500, height=270) 
Out[3]:

2024 - TSMC (chip production) & the China-US Tech War:¶


TIME Magazine: Inside the Taiwan Firm That Makes the World’s Tech Run¶

In [4]:
# TED-talk: AI's Single Point of Failure (Rob Toews - Radical Ventures):
YouTubeVideo("AJGrdtKT3LM", start=4, autoplay=0, width=500, height=270) 
Out[4]:

1930-36, the beginning:¶

A decade before WW2, David Hilbert stated his program & ambition:¶

"An automatic generator for proofs or counter-proofs of mathematical assertions."¶

The fundamental questions in Hilbert's program:¶

  1. Completeness - is there a way to prove all true statements?
  2. Consistency - is mathematics free from contradictions?
  3. Decidability (Entscheidungsproblem) - is there an algorithm that can determine whether a mathematical statement follows from the axioms?

Gödel (1931), Church and Turing (1936) - the theoretical foundations of computing¶

The ambitions in Hilbert's program triggered the fundamental works of

  • Kurt Gödel (1906 - 1978),
  • Alonzo Church (1903 - 1995), and
  • Alan Turing (1912 - 1954).

In 1931, the incompleteness theorems published by Kurt Gödel showed that Hilbert's program was unattainable for key areas of mathematics.

In 1936, based on his universal machine, Turing gave a negative answer to the Entscheidungsproblem (decision problem).

  • He did this by proving the so-called Halting problem to be undecidable (a sketch of the diagonal argument is given below).
  • His work was published in the research article "On Computable Numbers, with an Application to the Entscheidungsproblem".

The same year (1936), Alonzo Church reached the same conclusion as Turing based on his $\lambda$-calculus.

The Church–Turing thesis states that "a function on the natural numbers $\mathbb{N}=\{0,1,2,...\}$ can be calculated by an effective method if and only if it is computable by a Turing machine."
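
The undecidability of the Halting problem mentioned above can be illustrated by Turing's diagonal argument. Below is a minimal sketch (not part of the original notebook) in Python; the decider halts(prog, arg) is purely hypothetical and is assumed only for the sake of contradiction.

In [ ]:
# Sketch of Turing's diagonal argument (illustrative, hypothetical code).

def halts(prog, arg):
    # Hypothetical decider assumed (for contradiction) to return True
    # exactly when prog(arg) eventually terminates.
    raise NotImplementedError("assumed only for the sake of contradiction")

def paradox(prog):
    # Does the opposite of what the decider predicts about prog run on itself.
    if halts(prog, prog):
        while True:        # predicted to halt => loop forever
            pass
    else:
        return             # predicted to loop => halt immediately

# Consider paradox(paradox):
#   - if it halts, then halts(paradox, paradox) is True, so it loops forever;
#   - if it loops forever, then halts(paradox, paradox) is False, so it halts.
# Both cases are contradictory, hence no general halting decider can exist.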

The properties of the universal Turing machine also amounted to a mathematical proof that programmable computers are possible:¶

In [5]:
# The fundamental ideas of the Universal Machine and Gödel's incompleteness:
# -------------------------------------------------------------------------
disp_YoutubeS(("56HGIcFkej0",1), ("HeQX2HjkcNo",814), bredde=350, høyde=250)
  • "... In terms of decisive world war efforts, there is a historical parallel with physics, with Turing as a figure roughly analogous to Robert Oppenheimer .... "

  • "... his theoretical universal machine of 1936, paved the way towards the general-purpose digital computer in 1945."

Andrew Hodges (2012) in Alan Turing: The Enigma.

Jack Good about "Turing & breaking the Enigma" --- "Birth of the programmable computer" with von Neumann¶

In [6]:
#disp_YoutubeS(["Z-sTs2o0VuY","Po3vwMq_2xA"] , [1382,28], 380, 250)
disp_YoutubeS(("Z-sTs2o0VuY",1382), ("Po3vwMq_2xA",28), bredde=350, høyde=250)

Turing and von Neumann during the Second World War¶

  • Turing was the "star" researcher in the code breaking at Bletchley Park

  • John von Neumann made essential contributions on explosive lenses while consulting for the Manhattan Project.

  • The subsequent Cold War and Nuclear Arms Race also provided resources and stimulated the development of computing machines.

  • 1945-53: John von Neumann ("... the man from the future ...") developed Turing's insights on computation and supervised the construction of the first "modern" computer at the IAS in Princeton, also known as the first stored-program electronic computer...

  • ... that in the 1950s started the era of building computers to do H-bomb simulations.
  • Remember E. Teller... - "the father of the hydrogen bomb"...
In [7]:
YouTubeVideo("IGW6sEFDWZ0", start=1, autoplay=0, width=500, height=270)
Out[7]:

More Artificial Intelligence... - when Turing met Shannon¶

Intelligent Machinery (1943/1948) - first ideas of what was later called Artificial Intelligence¶

In 1943 at Bell Labs, Alan Turing (working for the British Government Code and Cypher School, the forerunner of GCHQ) met Claude Shannon (working on cryptography at Bell Labs), who later became known for establishing the field of Information theory.

The two shared their thoughts by discussing Intelligent Machinery, which Turing later described in his 1948 report at The National Physical Laboratory (see also Andrew Hodges' lecture "Alan Turing: The Enigma" about the meeting between Turing and Shannon).

In [8]:
YouTubeVideo("3NODW5sok5U", start=2960, autoplay=0, width=500, height=270)
Out[8]:

Computing Machinery and Intelligence (1950) introduced the Turing test (the Imitation Game).

... and Barricelli¶

  • 1950s: Nils Aall Barricelli was a pioneer in the simulation/creation of artificial life.
In [9]:
#disp_YoutubeS(["fJltiCjPeMA","EF692dBzWAs"] , [68,677], 350, 225)
disp_YoutubeS(("fJltiCjPeMA",68), ("EF692dBzWAs",677), bredde=350, høyde=225)


The first parallel processing supercomputer¶

1983:¶


PhD project
In order to realize his PhD project, Danny Hillis founded Thinking Machines Corporation in 1983 to build the world's fastest parallel processing supercomputers - the Connection Machines (CMs):

People
Some of the people developing and/or using the CMs during the 1980s and '90s:

  • Danny Hillis (inventor and constructor - supervised by Marvin Minsky, Gerald Jay Sussman and Claude Shannon).
  • Brewster Kahle (later the founder of the Internet Archive).
  • Richard Feynman (Nobel Prize in Physics in 1965).
  • Stephen Wolfram (later founder of Wolfram Research).
  • Tomaso Poggio (later mentor of Demis Hassabis, co-founder of Google DeepMind).
  • Geoff Hinton (later PhD supervisor of Ilya Sutskever, co-founder of OpenAI).
  • Sidney Brenner (Nobel Prize in Physiology or Medicine in 2002).
  • Sergey Brin (later co-founder of Google in 1998).

Bankruptcy => Sun & Oracle
After bankruptcy, the hardware and software divisions of Thinking Machines Inc. were acquired by Sun Microsystems in 1994 and 1996, respectively. In 1999 Oracle acquired the remaining data mining branch of the company.

=> Oracle
As Sun Microsystems was later acquired by Oracle (2010), the Darwin data mining toolkit, developed by Thinking Machines' Business Supercomputer Group, also ended up at Oracle and was developed into the Oracle Data Miner.

In [10]:
# Steve Jobs, Danny Hillis, Ray Kurzweil:
#disp_YoutubeS(["8JZBLjxPBUU","7u4x6gjpAbY", "9Z06rY3uvGY"] , [0,2,6], 240, 180)
disp_YoutubeS(("8JZBLjxPBUU",0), ("7u4x6gjpAbY",2), ("9Z06rY3uvGY",6), bredde=240, høyde=180)

How will AI improve our lives in the years to come, and when will the singularity arrive?¶

Based on the growth of computation, Ray Kurzweil highlights AI's transformative impact on various fields and explains his prediction for the singularity: the point at which human intelligence merges with machine intelligence.

In [11]:
YouTubeVideo("uEztHu4NHrs", start=43, autoplay=0, width=500, height=270)
Out[11]:
In [12]:
YouTubeVideo("uEztHu4NHrs", start=200, autoplay=0, width=500, height=270)
Out[12]:

Predictions¶

According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence that would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.
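
As a purely illustrative toy model (not part of Good's 1965 paper; the parameters below are made up), the feedback loop can be caricatured numerically: if each generation's relative improvement grows with its current capability, capability grows faster than exponentially and "explodes" after a modest number of generations.

In [ ]:
# Toy model of a self-improvement feedback loop (hypothetical parameters).
# Each generation multiplies its capability by a factor that itself grows
# with the current capability, giving faster-than-exponential growth.

def intelligence_explosion(c0=1.0, k=0.1, generations=20):
    capability = c0
    trajectory = [capability]
    for _ in range(generations):
        capability *= 1 + k * capability   # improvement rate increases with capability
        trajectory.append(capability)
    return trajectory

for g, c in enumerate(intelligence_explosion()):
    print(f"generation {g:2d}: capability ~ {c:.3g}")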


Later history and development of AI and Deep Learning¶

  • 1989 - Convolutional Neural Nets were used for Optical character recognition by Yann LeCun (who is now Chief AI Scientist at Meta AI).
  • 1989 (1993) - WWW became available.
  • 1998 - Google was founded.

  • 2006 - The ImageNet project database was initiated by Fei-Fei Li at Stanford University.
  • 2010 - DeepMind (later Google DeepMind) was founded by Demis Hassabis together with Mustafa Suleyman and Shane Legg.
  • 2011 - 2017 - The ImageNet Large Scale Visual Recognition Challenge (ILSVRC)
  • 2012 - The "ImageNet moment" occurs when AlexNet, designed by Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton, wins the ILSVRC by a huge margin.

  • 2015 - OpenAI is founded by Elon Musk, Ilya Sutskever, Sam Altman and others.
  • 2017 - Jensen Huang and Nvidia complete the first DGX supercomputer, which is delivered to OpenAI.
  • 2017 - The transformer architecture is published by Google researchers in the paper Attention Is All You Need.

  • 2017 - AlphaZero - An algorithm able to learn the games of Chess, Shogi and Go to superhuman level.
  • 2020 - AlphaFold - 3D Protein folding prediction.
  • 2022 - AlphaCode - An AI-powered coding engine creating computer programs.

  • 2022 - ChatGPT is launched.
  • 2023 - Gemini - a family of multimodal large language models developed by Google DeepMind.
  • 2023-'24 - Rapid development of Large Language Models.

  • 2024 - AlphaGeometry - AI program that can solve hard problems in Euclidean geometry.
  • 2024 - AlphaFold 3 - can predict the structure and interactions of all life's molecules.
  • 2024 - GPT-4o - can process and generate text, images and audio at twice the speed and half the price of its predecessor.
  • 2024 - GPT-o1 - a GPT utilizing Reinforcement learning to generate longer "chains of thought" before returning a final answer... (The o1-preview model is strong in math and performs approximately at a PhD level on benchmark tests related to physics, chemistry, and biology.)
  • 2024/25 - Nvidia will release their newest chip design Blackwell in Q4/Q1; it is manufactured by TSMC and capable of 20 petaflops, which amounts to 20 quadrillion calculations per second... (Blackwell's revolutionary technologies enable AI training and real-time LLM inference for models scaling up to 10 trillion parameters.)

Yann LeCun:¶

"... the shelf life of autoregressive LLMs is very short. My bold prediction is that five years from now nobody in the right mind would use them ..."

In [13]:
YouTubeVideo("mViTAXCg1xQ", start=1095, autoplay=0, width=500, height=270)
Out[13]: