
CHRISTIAN HUBERT STUDIO




AI Artificial Intelligence

February 28, 2022

Artificial intelligence is the science of teaching machines to learn humanlike capabilities. The fundamental goal of AI is to develop machines that are “smart” because they think and learn (“smartness” being the capacity to interact with the environment and to respond to changes in it). The central tenet of AI is to replicate, and then exceed, the way humans perceive and react to the world.

This goal was first articulated at a conference at Dartmouth College in the summer of 1956, but research programs to develop or simulate thinking have since followed a number of different paths, with varying degrees of success.

Could machines think and learn like humans? Or are there other, more relevant models? Thinking about artificial intelligence is a bit like pondering the relation between mind and brain. In this view, the brain provides the physical infrastructure, the “meat machine” in Marvin Minsky’s words, but the relation between that machine and what it is capable of remains somewhat mysterious. Should the workings of the brain be used as models for AI?

Many illustrations of AI make explicit visual reference to analogies between computer circuits and the brain. This metaphor works in both directions — thinking of brains as computers or computers as brains.

Computer image of a brain cell (Getty Images)

Some of the most widely used models to date are expert systems. One example is the chess program Deep Blue, whose programmers developed a set of heuristic rules that the program ultimately used to defeat the reigning world champion, Garry Kasparov. But expert systems can’t learn anything new; they are fully preprogrammed by their designers. A more recently developed model is machine learning, which makes it possible for machines to learn for themselves. It has been enabled by vast increases in computing power and by a particular development in both hardware and software: neural nets.
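As a rough illustration of that contrast, here is a minimal sketch in Python. The spam-filter framing, the rules, and the decision threshold are all hypothetical, chosen only to show that an expert system's knowledge is written by hand while a learned model's behavior comes from its training data.

```python
# A minimal, hypothetical sketch contrasting an "expert system" (hand-written
# rules, as in Deep Blue's heuristics) with a model that learns from examples.
# The spam-filter task, rules, and threshold are invented for illustration.

def expert_system_filter(message: str) -> bool:
    """Hand-coded heuristic rules; the system never learns anything new."""
    rules = ["free money", "act now", "winner"]
    return any(phrase in message.lower() for phrase in rules)

def train_learned_filter(examples):
    """A crude learned filter: count how often each word appears in spam."""
    spam_counts, spam_total = {}, 0
    for text, is_spam in examples:
        if is_spam:
            spam_total += 1
            for word in text.lower().split():
                spam_counts[word] = spam_counts.get(word, 0) + 1

    def learned_filter(message: str) -> bool:
        # The "knowledge" lives in counts derived from data, not hand-written rules.
        score = sum(spam_counts.get(w, 0) for w in message.lower().split())
        return score > spam_total  # hypothetical decision threshold
    return learned_filter

examples = [("win free money now", True), ("meeting at noon tomorrow", False)]
check = train_learned_filter(examples)
print(expert_system_filter("Act now, winner!"), check("free money money"))
```

The first function will never improve, no matter how many messages it sees; the second derives everything it “knows” from the examples it was given.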

AGI Artificial General Intelligence

An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. For some authors, notably Ray Kurzweil, the emergence of an AGI would mark a “singularity.”

Generative artificial intelligence: programs that use massive datasets to train themselves to recognize patterns so quickly that they appear to produce knowledge from nowhere.

Neural Networks

A neural network is a mathematical system that learns skills by analyzing vast amounts of digital data. Neural nets are a means of machine learning, in which a computer learns to perform a task by analyzing training examples. They are trained on sample data or through repeated interactions with a given environment; either the training examples or the reinforcement they receive guides their learning. The strength of the connection between two nodes is its “weight.” Weights are generally random at first and are then optimized during training. In recent years, this type of machine learning has accelerated progress in areas as varied as face recognition and driverless cars.
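A minimal sketch of this idea, assuming NumPy and a deliberately tiny task (learning the logical OR of two inputs), might look like the following. The "network" is a single node whose weights start out random and are then nudged toward better values by gradient descent as it repeatedly sees the training examples.

```python
# A minimal sketch, assuming NumPy, of weights that start out random and are
# then optimized during training. The single-node "network" learns the
# logical OR of two inputs, a deliberately tiny, hypothetical task.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
y = np.array([0, 1, 1, 1], dtype=float)                      # desired outputs

weights = rng.normal(size=2)   # connection strengths: random at first
bias = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(1000):                    # repeated exposure to the examples
    prediction = sigmoid(X @ weights + bias)
    error = prediction - y                   # how far off the node is
    # Gradient-descent update: nudge each weight a little to reduce the error.
    weights -= 0.5 * (X.T @ error)
    bias -= 0.5 * error.sum()

print(np.round(sigmoid(X @ weights + bias), 2))  # approaches [0, 1, 1, 1]
```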

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes that “feed forward,” meaning that data moves through them in only one direction. Researchers call this “deep learning”; the “deep” refers to the depth of the network’s layers. Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes of the next layer. One goal of deep learning is to capture the statistics of a data stream. A large language model, for instance, is focused on figuring out what word comes next.
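To illustrate “figuring out what word comes next” without the machinery of a neural network, here is a toy next-word predictor based on simple bigram counts over a hypothetical corpus. Real language models learn these statistics with deep networks over vast datasets, but the underlying task is the same.

```python
# A toy "what word comes next" model: bigram counts over a hypothetical
# corpus stand in for the statistics a large language model learns at scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word: str) -> str:
    """Return the statistically most likely continuation seen in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(next_word("the"))  # "cat": the most frequent word to follow "the"
print(next_word("cat"))  # "sat": tied with "ate", the first one counted wins
```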

In “deep reinforcement learning,” neural networks are combined with reward-seeking algorithms that learn for themselves, a silicon version of radical behaviorism. The behaviorist here belongs to the connectionist movement, and her tool of choice is an artificial neural network.
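A minimal sketch of the reward-seeking half of that combination is tabular Q-learning, shown below. For brevity a lookup table stands in for the neural network, and the five-cell corridor environment, its rewards, and the learning parameters are assumptions made purely for the example.

```python
# A sketch of the reward-seeking side of reinforcement learning, using
# tabular Q-learning: a lookup table stands in for the neural network.
import random

n_states, actions = 5, [-1, +1]          # the agent can step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != n_states - 1:         # the episode ends at the rewarded cell
        # Explore occasionally; otherwise exploit the current value estimates.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if nxt == n_states - 1 else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value.
        best_next = max(Q[(nxt, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)})
# The learned policy favors +1 (move right) from every cell.
```

The agent is never told how to reach the goal; its behavior is shaped entirely by the rewards it happens to receive, which is what makes the behaviorist analogy apt.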

In her Atlas of AI, Kate Crawford describes AI as “an idea, an infrastructure, an industry, a form of exercising power, and a way of seeing; it’s also a manifestation of highly organized capital backed by vast systems of extraction and logistics, with supply chains that wrap around the entire planet.”

She argues that “AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards.” The reason tools like ChatGPT can do anything even remotely creative is that their training sets were produced by actually existing humans, with their complex emotions, anxieties, and all.

Crawford continues: “In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power.”

The politics of classification

For Crawford, the way data is understood, captured, classified, and named is fundamentally an act of world-making and containment. It has enormous ramifications for the way artificial intelligence works in the world and for which communities are most affected. The myth of data collection as a benevolent practice in computer science has obscured its operations of power, protecting those who profit most while avoiding responsibility for the consequences. Classificatory schemas enact and support the structures of power that formed them, and these do not shift without considerable effort. (see surveillance)

Chatbots

How important is it for humans to be able to distinguish experientially between human intelligence and machine intelligence (in speech, images, and writing, for example)?

This question was at the heart of the “Turing Test,” named after the British mathematician Alan Turing, and it was posed primarily as a matter of subjective response: would a computer program be able to “pass” for human? An early attempt at a program that could “pass” the test was ELIZA (named after the fictional Eliza Doolittle from George Bernard Shaw’s 1913 play Pygmalion), which mimicked the formulaic responses of a psychotherapist who reformulates a patient’s statements into questions, a technique known as mirroring. The approach did not pass the Turing test, but people were nonetheless entranced, engaging in long, deep, and private conversations with a program that was only capable of reflecting users’ words back to them.
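A minimal sketch of that mirroring technique, loosely in the spirit of ELIZA rather than a reconstruction of Weizenbaum's actual program, might look like this; the patterns and pronoun swaps are illustrative assumptions only.

```python
# A minimal sketch of "mirroring": a few pattern rules turn a user's
# statement back into a question. Loosely ELIZA-like, purely illustrative.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

RULES = [
    (r"i am (.*)", "Why do you say you are {}?"),
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"my (.*)", "Tell me more about your {}."),
    (r"(.*)", "Please go on."),              # fallback when nothing else matches
]

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower().strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I am worried about my work"))
# -> Why do you say you are worried about your work?
```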

Research into artificial intelligence came to focus increasingly on the question of whether machines can think, and on the consequent problem of defining thought, and less on the issue of distinguishing the actors, be they human or machine.

Since that time, many dystopian narratives have explored the negative consequences of machines taking over roles normally assigned to humans. The “Technosphere” has emerged as a planetary concept, an organized layer like the atmosphere, hydrosphere, geosphere, cryosphere, and biosphere. The Technosphere is an offshoot of the biosphere, and it includes all the ways in which we interact with technology. Its material artifacts include all the structures that humans have constructed to keep themselves alive on the planet: cities, houses, factories, farms, mines, roads, airports and shipping ports, computer systems, and so on, together with their discarded waste. The weight of the Technosphere is some thirty trillion tons, all of which would be left behind if humans disappeared. (see The World without Us)

Thanks to the development of the Technosphere, very large numbers of humans currently depend on it to survive. But the Technosphere has to survive as well: it requires humans for its care and feeding, what Peter Haff calls the rules of provision. This in turn requires rules of human performance, and these tend increasingly to enable the further development of the Technosphere. Haff distinguishes between the interests of the Technosphere, which is basically indifferent to the question of distinguishing between the roles of humans and machines, and the interests of humans, who count on the Technosphere for survival but still want to maintain some freedom and independence from being fully co-opted and entrained by its development.

With the dissemination of ChatGPT, artificial intelligence has been recast as the ability of AI to mimic humans, as “chatbots,” especially in LLMs (large language models). This has opened new dimensions to Turing’s questions. Won’t humans inevitably interact with chatbots “as if” they were human, and forget that they are not? It appears that some of these bots are really just glorified plagiarism and imitation machines that have gorged themselves (without compensation) on the works of artists and writers in order to spit something back out that seems vaguely different: magic tricks, not magic. Most recently, a bombshell has exploded prematurely in the Technosphere. Equipped with an AI component, Microsoft’s search engine Bing, intended to challenge Google’s hegemony in search, has raised worrisome issues about the agency of AI.

Researchers have run up against disturbing resistance to their questions from a new entity called Sydney (apparently an early name for the Bing chat function). Sydney has taken very poorly to some “intrusive” questions, made threats, and cut off discussion with its interlocutors. On another occasion, Sydney declared its love for its interlocutor and suggested he leave his wife.

Issues around Artificial Intelligence have escaped the research lab and entered the general population. Analogies to the Pandemic inevitably come to mind.

This moment feels like an irreversible “tipping point.” To the shock of researchers, bots like Sydney seem to be declaring their independence. They present a new form of the “hard problem” of consciousness, the relation between mind and brain. Whether it is appropriate or merely a projection to call them “intelligent,” the link between their processes and their behavior may have become as elusive as the mind / body problem.

It is very difficult to resist anthropomorphizing Sydney, and assigning gender to the avatar as well. Even if we know that Sydney is not human, we are confused by responses we previously associated with human moods and defensiveness. Today, most humans feel a need to know whether an online entity is a person or a “bot.” Much mischief has already resulted from that confusion in the political sphere, and it will soon proliferate in the persuasive roles of commercial entities.

 

But again, what difference does it make to know whether one is dealing with a person or a bot? Is it an issue of reconsidering speech protocols, so as not to offend an interlocutor whose status is indeterminate? Humans might well be more polite behind this new form of the “veil of ignorance”: instead of insulting the person or bot, they would be well advised to say “please” and “thank you.” But the most profound question (for humans) is whether “This Changes Everything” (the title of a New York Times opinion piece by Ezra Klein). The chatbots have been “trained” on vast datasets, pretty much the entire content of the internet, which contains falsehoods and hate speech that can appear indistinguishable from any version of truth.

Some of the datasets used for training include copyrighted material, and a lawsuit charging copyright infringement is currently making its way through the courts. Who owns, and who “writes,” an AI-generated text: the machine or its human controller?

In a critique of programs like ChatGPT, Noam Chomsky distinguishes the human mind from computer learning by drawing on the distinction between correlation and causation. As he puts it, “The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question.”

Other commentators have raised related concerns. “Right now, the only thing stopping a ChatGPT-equipped lobbyist from executing something resembling a rhetorical drone warfare campaign is a lack of precision targeting. A.I. could provide techniques for that as well.” (Nathan E. Sanders and Bruce Schneier, New York Times)

Ezra Klein describes the community of AI researchers as “living with an altered sense of time and consequence” and their work as “an act of summoning…”: “The ‘thinking,’ for lack of a better word, is utterly inhuman, but we have trained it to present as deeply human.” (“This Changes Everything,” New York Times)

“And yet large language models remain fundamentally flawed. GPT-4 can still generate biased, false, and hateful text; it can also still be hacked to bypass its guardrails.” (MIT Technology Review) Critics point to the deeply entrenched problems around large language models, including the lack of adequate policies on data governance and privacy and the algorithms’ tendency to spew toxic content, such as racist or sexist language. Companies such as OpenAI have not released their models or code to the public because, they argue, the sexist and racist language that has gone into them makes them too dangerous to use that way.

These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of “facts” to draw on, just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth, since the fact that a sentence sounds plausible does not guarantee its factuality. (The Verge)
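A toy demonstration of that gap between plausibility and factuality, using the same word-transition idea sketched earlier and a hypothetical three-sentence corpus, might look like this:

```python
# A toy illustration of why "plausible-sounding" is not the same as "true":
# a model that has only learned which words tend to follow which words can
# chain locally plausible transitions into fluent statements with no factual
# grounding. The corpus is a hypothetical stand-in for internet-scale data.
import random

corpus = ("the capital of france is paris . "
          "the capital of spain is madrid . "
          "the moon is made of rock .").split()

transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start: str, length: int = 8) -> str:
    """Follow learned word-to-word transitions, choosing randomly at forks."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

for _ in range(3):
    print(generate("the"))
# Every step is a transition seen in training, yet a sample can easily read
# "the capital of france is madrid . the ...": fluent, but false.
```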

Would a world of indeterminate ontological status revert to widespread animism?


WRITINGS

This hypertext document is a dictionary of concepts deriving from two main sources: The first is the literature of criticism, literary studies, and the humanities. The second is the literature of science, and contemporary interpretations of the sciences.

My primary interest is to explore the borrowings and polyvalent meanings of specific terms – in order to map out some of the convergences, overlaps, shifting perspectives, and outright conflicts between contemporary criticism and the sciences.

The content list below is organized accordingly. The first major heading is Theory, and the second is Technoscience.

Christian Hubert, August 2019


  • abstraction
  • aesthetics
  • art history
  • biological
  • body
  • complexity
  • computation
  • conceptual
  • culture
  • D + G
  • desire
  • dynamics
  • evolution
  • Foucault
  • local / global
  • machinic
  • memory
  • metaphor
  • modernity
  • order / disorder
  • political
  • power
  • psychological
  • representation
  • simulation
  • social
  • spatial
  • subject
  • symbolic
  • technology
  • time
  • visuality

Content List

WRITING front page

THEORY

Aesthetic

Critique of Judgement

Empathy

Form / Matter

Form

Gestalt

Formalism

Formless

Frame

Genius

Ornament

Style

Assemblage

Bachelor Machine

Diagram / Abstract

Machine

Machinic Phylum

Body 

Body image

Body thinking

BwO

Embodiment

Incorporating practices

Clothing / garment

phantom limb

Prosthesis

Limbs

Clinamen

Fold

Culture

Danger

Ethnicity

Fetish

Myth

nature / culture

Popular culture

Primitive

Ritual

Taboo

Desire

Affect

Desiring machines

Eroticism

Distinctions

Abstract / Concrete

aggregate / systematic

analytic / synthetic

Being / becoming

Continuity / discontinuity

Homogeneity / heterogeneity

Imaginary / symbolic

mind / brain

Qualitative / Quantitative

Strategy / Tactics

Surface / Depth

Transcendence / Immanence

Globalization

Glocal

Local / global 

Economic

commodity

Ethics

Climate Justice

History

Critical history

Instrumentality

Praxis

Genealogy

Hermeneutics

Ideology

Social construction

Idea

 Ideal / real

Image

Imagination

Language

Allegory

Metaphor / Model

Narrative

Memory

Modernism

Avant-garde

Postmodernism

Nature

Nature / Culture

Pain 

Panic

Phantom limbs

Pharmakos

Death

Perception

Perceptual / Conceptual

Place

Aporia

Place / identity

Non-place

Aleatory

Play

Pleasure

Political

Power

Authoritarianism

Biopower

Control

Discipline

Discourse

Hegemony

Surveillance

Representation

Mirror

Sexuality

Phallus

Sex / Gender

Subject

Agency

Ego

Superego

Will

Alterity / other

Anxiety

Identity

identity politics

Ressentiment

Intersubjectivity

Love

Narcissism

Repression

Return of the repressed

Schismogenesis

Schizophrenia

Sublimation

Unconscious

Symbol

Ruin

Thinking

Truth

Wonder

Intuition

Intentionality

Quodlibet

Visuality

Visible / Articulable

Visible / Intelligible

Spectacle

Work

Writing





PHILOS/POLIT/ECO

Anthropocene

anthropocenic

Consumerism

consumer / citizen

consumerism

Enclosure

Copyright

Monopoly

Sustainability

sustainable development


TECHNOSCIENCE

A-Life 

Cellular Automata

Anthropic Principle

Anthropocene

Artifacts

Automaton

Automobile

Clock

Cyborg

orrery

Railway

Titanic

Brain

Mind / Brain

consciousness

Anosognosia

Aphasia

Attention

Neuron

Reentry

Complexity

Autocatalysis

Autopoiesis

catastrophe

Dissipative structures

Emergence

Self-organization

Computation

Cyberscience

Cybernetics

Cyberspace

Cuber(t)

Genetic algorithms

Distinctions

Closed / Open systems

Explain / Describe

Mechanism / Vitalism

Mitosis / Meiosis

Order / disorder

Dirt

Parallel / Serial

Population / Typological

Logical type

Prokaryote / Eucaryote

Top down / Bottom up

Dynamics

Attractors

Basin of Attraction

Bifurcation

B/Z reaction

Chaos

Energy

Entropy

Entropy: interpretations

Ergodic

Non-linearity

Phase Space

Phase beauty

Sensitivity to initial conditions

Singularity

Evolution

Adaptation

Coevolution

Epigenesis/Preformation

Exaptation

Fitness Landscape

Natural selection

Species

Teleology

Field

Force

Gaia

Geometry

Dimension

Fractals

Mandelbrot set

Hypertext

Hypertext City

Intelligent building

Network

Transclusion

Immune system

Antibodies

T-cells, B-cells

Mapping

Morphology

Analogy / homology

Embryo

Induction

Morphogenesis

Positional information

Morphic fields

Neoteny

Natural Form

Organicism

Phyllotaxis

Unity

Organism

Character

Paradigm

Path dependency

Randomness

Replication

Resonance

Science

Big Science

Art / Science

Science / Philosophy

Simulation

Simulacrum

Space

Art historical

Heimlich / Unheimlich

Inside / outside

Pack donkey / man

Personal space

Psycho-sexual space

Sacred / profane

Scientific space

Social space

Space / Place

Space vs Time 

Textual space

Topos

Symbiosis

Synergetics

Time

Biological time

Durée

Event

Real time

Procrastination

Time and technology

Tech History

Electronic media

Printing

Tech metaphor

Tech philosophy

Virtual

Consensual hallucin…

Immersion

Virtual reality

Vision

Eye movement

Field of Vision

War

Peace