Link: “Self-Taught AI Shows Similarities to How the Brain Works” (Quanta, Aug 11, 2022)
Summary: Currently, many artificial neural network systems depend on extensive databases of photos labeled by humans to learn how to classify objects. Much like a student cramming before an exam, these systems can develop a superficial understanding of the material. For instance, a neural network may notice that most photos of cows are taken in fields and conclude, wrongly, that an image of a cow must also contain grass. To remedy this, some computer scientists suggest a different strategy: self-supervised learning, which mirrors some of the methods our brains actually use to learn. If a neural network is given an unlabeled dataset and asked to fill in gaps in the data itself, will it form a richer understanding?
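
The "fill in the gaps" idea the article describes is often implemented as masked prediction: hide part of each unlabeled input and train the network to reconstruct it. Below is a minimal sketch of that idea, assuming PyTorch, a toy dataset of random vectors, and arbitrary choices for the masking ratio and network sizes; it is not the setup used in the research the article covers.

```python
# Minimal masked-prediction (self-supervised) sketch.
# Assumptions: PyTorch is installed; data, masking ratio (25%), and layer
# sizes are toy choices for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy unlabeled data: 512 samples, each a 32-dimensional feature vector.
data = torch.rand(512, 32)

# A small encoder-decoder that must reconstruct the hidden (masked) entries.
model = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 32),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    # Randomly hide ~25% of each input's entries; the network sees the rest.
    mask = torch.bernoulli(torch.full_like(data, 0.25)).bool()
    corrupted = data.masked_fill(mask, 0.0)

    # The training signal comes from the data itself: predict the hidden
    # values, scoring only the positions that were masked out.
    prediction = model(corrupted)
    loss = nn.functional.mse_loss(prediction[mask], data[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final reconstruction loss on masked entries: {loss.item():.4f}")
```

No human labels appear anywhere in this loop; the supervision signal is manufactured from the data itself, which is the core distinction the article draws between self-supervised and label-driven training.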