Artificial Intelligence: A Guide for Thinking Humans (Feat. Melanie Mitchell)

A recording of the event can be found here.

Abstract

Artificial intelligence has been described as “the new electricity”, poised to revolutionize human life and benefit society as much or more than electricity did 100 years ago. AI has also been described as “our biggest existential threat”, a technology that could “spell the end of the human race”. Should we welcome intelligent machines or fear them? Or perhaps question whether they are actually intelligent at all? In this talk, I will describe the current state of artificial intelligence, highlighting the field’s recent stunning achievements as well as its surprising failures. I will consider the ethical issues surrounding the increasing deployment of AI systems in all aspects of our society, and closely examine the prospects for imbuing computers with humanlike qualities.

Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux).

3 Likes

Articles discussed during the Q&A:

It would be interesting to explore different strategies for creating analogy-making systems that exhibit a higher level of abstraction and stronger analogy-making abilities.

While I’m not fully convinced that analogy-making is the appropriate abstraction for intelligence, it is clear that it plays a significant role in human intelligence. Melanie’s paper https://arxiv.org/pdf/2102.10717.pdf suggests some practical approaches to building more robust intelligent systems.
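As a concrete (and deliberately naive) illustration of what such a system has to do, here is a minimal sketch in the letter-string micro-domain that Hofstadter and Mitchell used for Copycat. To be clear, this is hypothetical toy code, not Copycat: it hard-codes a single rule (“replace the last letter with its alphabetic successor”), which is exactly the kind of rigidity a real analogy-making architecture tries to overcome.

```python
# Toy letter-string analogy in the spirit of Copycat's micro-domain:
# given "abc -> abd", what is the analogous change for "ijk"?
# This sketch knows only one rule -- replace the last letter with its
# alphabetic successor -- so it succeeds only on the simplest cases,
# whereas Copycat stochastically explores many competing rules.

def successor(c: str) -> str:
    """Next letter in the alphabet; 'z' is left unchanged in this toy."""
    return c if c == "z" else chr(ord(c) + 1)

def solve_analogy(source: str, target: str, probe: str) -> str:
    """Apply the hard-coded rule to probe if it explains source -> target."""
    if target == source[:-1] + successor(source[-1]):
        return probe[:-1] + successor(probe[-1])
    raise ValueError("this toy solver does not recognize the rule")

print(solve_analogy("abc", "abd", "ijk"))  # prints "ijl"
```

The interesting research question, of course, is how a system could discover and weigh rules like this on its own rather than have them enumerated by hand.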

4 Likes

Thanks so much for posting those!

On the question of what understanding analogies tells us, Dr. Mitchell noted that Douglas Hofstadter has a strong position, and I see he described it in a talk here.

I thought Prof. Riondato’s comment/question about the “machine learning took over AI, deep learning took over machine learning” slides raised a great point, and that Prof. Mitchell’s reply about the distinction between research and application trends was also helpful. Beyond the history, though, I think there are interesting questions about what these pictures should look like, and what we predict they’ll look like in the near future.

1 Like

Yes. One should also take into account how the membership of the largest set has changed over time and will continue to change, whether you limit yourself to research, practice, or anything else. (In those figures the largest set was labeled “AI”, but it really meant “techniques for AI”, given that “ML” and “DL” were drawn as subsets; and is all of ML really inside AI?)

1 Like

One more comment that I had, but did not manage to make at the end, was about the idea that AI is concerned with imitating/replicating/studying/… human intelligence.

This anthropocentric view puzzles me.
In many other fields, imitating humans, or even more broadly nature, did not get us very far: an airplane, and even less so a helicopter, does not fly like a bird or an insect (and which bird should we even try to imitate? A peregrine falcon, an albatross, or a hummingbird?), nor does scuba equipment make us breathe underwater the way a fish does. When the first steam-powered “car” was built, we didn’t even know how horses really gallop. A computer is in no way built like anything in nature, and certainly not like our brain.

So why start with human intelligence? Why not first try to study/replicate the intelligence (or, if you prefer, the instinct) of some simple animal (assuming again, contrary to my desire, an anthropocentric view in which human intelligence is taken to be more complex than that of other animals)?

Why not try to study/create a different kind of intelligence (see the plane, helicopter, and scuba-equipment examples)?

By the way, I have neither answers nor proposals. AI is not my field (despite my publishing at AAAI and in ML conferences and journals… so what does that tell us about the pictures from the previous comments?), nor do I find these questions particularly pressing: I believe there are many more interesting tasks that need efficient, scalable methods and that can be solved without “AI”, so I try to focus on those.

Full disclosure: the airplane example is from V. Vapnik, “The Nature of Statistical Learning Theory”. Vapnik, in addition to obtaining (together with A. Chervonenkis) fundamental results about which functions can be learned, developed Support Vector Machines, which were hailed in the ’90s in ways not so different from how DL is hailed today. And yet SVMs are likely considered a minor topic in ML today. This is to say: our excitement for the shiniest new object is often misplaced, not just because an even newer object may come along and do better, but also because we may be attracted to the next shiny object merely because it is shiny, without ever understanding whether it really is better (and in which cases) than the previous, now slightly less shiny, one.
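For readers who haven’t seen those learnability results: one well-known Vapnik–Chervonenkis bound (stated loosely here; see Vapnik’s book for the precise conditions) says that, with probability at least 1−η, a classifier α chosen from a function class of VC dimension h and trained on l examples satisfies

$$ R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{h\left(\ln\frac{2l}{h}+1\right) - \ln\frac{\eta}{4}}{l}} $$

where R is the true risk and R_emp the empirical (training) risk. The point is that generalization is controlled by the capacity h of the function class, not by how fashionable the class happens to be.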

2 Likes

People study and develop AI for a lot of different reasons.

If you’re doing it to understand human cognition, then anthropocentrism makes a lot of sense!

If you’re doing it to understand intelligence and adaptive behavior in the natural world more generally, then it makes sense to broaden your focus, but you’ll still want to concentrate on the methods employed by life, including neurons, genomes, etc.

If you’re only aiming for technologies that solve particular problems, then you’re right that biologically inspired methods may not be the best. But sometimes they are the only examples we have of an existing system that does anything of the right kind, so they’re a good place to start.

As Prof. Mitchell noted, some definitions of AI in AI textbooks tie it to human intelligence, and some don’t. When I teach AI I sometimes go through a lot of those definitions and end with one of my own – “thinking about thinking by programming” – which clearly reflects my entry to the field as a philosophy major. FWIW I quote a couple of other textbook definitions in this short piece on AI and evolution.

1 Like
