Artificial intelligence has certainly made remarkable advances in synthetic learning. Large language models (LLMs) do amazing things that were, until recently, thought to be far beyond the grasp of lifeless technology. Ironically, the next big challenge will be a bit of the opposite: mastering the art of unlearning.
The human brain performs this essential function quite naturally. Without the ability to forget, our lives would be disastrous. Remembering everything, we would soon be overwhelmed by our cumulative past.
Everything we once wrongly believed would continue to crowd out the more accurate understanding we subsequently came to hold.
Without forgetting, we would slowly and surely become wracked with guilt, grief, and regret. More profoundly, we would lose the capacity to forgive. In short: without forgetting, our lives would become a cacophony of miserable, irrelevant distractions.
At a societal level, without selective forgetting we could not make peace with our enemies. Although ignorance of history is said to be a recipe for repeating it, remembering each and every gory detail of humanity’s dark past would not help bring out the best in those of us striving toward a brighter future.
History is largely the art of curating narratives that aim not just to inform us but to inspire us. Without the ability to distill it through selective pruning, even while preserving accuracy, we would be left with an overwhelming morass of stories in which we could discern no coherence.
But back to the programming challenge: to build selective deletion into LLMs, one must first decide what a computer ought to forget. Until now, developers have mostly focused on expanding our digital tools' capacity to store and recall data. The more, the better.
So at first blush, undoing AI's recall prowess might seem simple enough: just undo some of the progress made. But an LLM does not store a fact in any single location; it is smeared across the complex, interconnected relationships the network has learned. Selectively deleting information therefore presents a new and seemingly insurmountable challenge.
Reducing the internal probabilistic weights an LLM has learned (the secret sauce that fuels its answers to queries) would be a massive undertaking. It reminds me of the scene in 2001: A Space Odyssey where Dave deprograms HAL. Without getting into the weeds of how this might even work (clearly over my head), the choices of whether to delete certain information outright or merely devalue access to it become fraught with new sources of AI hallucination.
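For readers who want a feel for what "reducing the weights" could mean in practice, one approach discussed in the machine-unlearning literature is gradient ascent on a forget set: compute the very gradient that would improve recall of the targeted data, then step in the opposite direction. The sketch below is illustrative only; it assumes a Hugging Face-style causal language model whose forward pass returns a loss, and the model, batch, and learning rate are all hypothetical placeholders. It also hints at why the approach invites hallucination: nothing confines the ascent to the targeted fact, so knowledge the model should keep can be damaged along the way.

```python
import torch

def unlearning_step(model, forget_batch, lr=1e-5):
    """One gradient-ASCENT step: push the weights away from the forget set.

    Assumes `model(**forget_batch)` returns an object with a `.loss` attribute
    (as Hugging Face causal LMs do when `labels` are supplied). All inputs
    here are placeholders for illustration, not a real unlearning API.
    """
    model.train()
    loss = model(**forget_batch).loss   # how well the model still recalls the data
    loss.backward()                     # gradients point toward *better* recall...
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.add_(lr * p.grad)     # ...so stepping *up* the gradient degrades it
                p.grad.zero_()
```

In practice, published unlearning methods typically pair a step like this with a retain set or a regularization term, precisely to limit the collateral damage described above.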
A simpler approach, forgetting information based purely on aging timestamps, would also be reckless. As humans, we remember things from long ago for good reason. And by contrast, we forget certain details of what happened a few days ago, for equally good reason.
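A toy sketch makes the recklessness concrete. In the hypothetical memory store below (the record layout is invented for illustration), anything past a fixed age is purged, so a decades-old formative memory would be dropped while last week's trivia survives:

```python
import time

MAX_AGE_SECONDS = 7 * 24 * 3600  # forget anything older than one week

def forget_by_age(memories: list[dict], now: float | None = None) -> list[dict]:
    """Keep only memories younger than the cutoff; importance is never consulted."""
    if now is None:
        now = time.time()
    return [m for m in memories if now - m["timestamp"] < MAX_AGE_SECONDS]
```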
Discerning what is important enough to recall, and what (and when) to forget, remains at the heart of what humans somehow do exceedingly well. Our neural dendrites and axons manage this electrical wiring feat on their own, leaving us older and wiser as we go through life. By comparison, this only further highlights what AI lacks: judgment, intuition, and abstract reflection.
In short, while human brains and computers both boast awe-inspiring capacities, the laws of physics guarantee that resources are never unlimited. Determining what is best to jettison, and how, to make room for new information and new ideas is a daunting task for a software developer.
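To make that task concrete, here is one hypothetical eviction policy such a developer might try: when the store exceeds its budget, discard the entries with the lowest combined recency-and-importance score rather than simply the oldest. The fields and weights below are invented for illustration, and they dodge the hard part: assigning the importance scores in the first place is precisely the judgment this piece argues machines still lack.

```python
import heapq

def evict_to_capacity(memories: list[dict], capacity: int, now: float) -> list[dict]:
    """Keep the `capacity` highest-scoring memories; discard the rest."""
    def score(m: dict) -> float:
        recency = 1.0 / (1.0 + now - m["timestamp"])  # newer memories score higher
        return 0.3 * recency + 0.7 * m["importance"]  # arbitrary illustrative weights
    return heapq.nlargest(capacity, memories, key=score)
```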
Humans, on the other hand, excel at this quite naturally. Toddlers try and fail at simple tasks repeatedly, like walking, talking, or just holding a spoon. But once they meet with success – and enjoy their dopamine blast of surprise – they quickly abandon those prior scripts that failed.
Such simple decluttering of thought processes, and such protection against reliance on outdated and counterproductive information, will be hard to translate into algorithms. That leaves the human brain as the ever-inspiring decider of what matters most in our lives.
