Cabin

Heiloo
I am rereading Nassim Nicholas Taleb’s Fooled by Randomness. Historical data has much predictive power, but it cannot predict Black Swans. A Black Swan is an event that lies outside regular expectations, carries an extreme impact, and is explained away in hindsight as if it had been predictable all along.
The book makes me think about today’s hype around AI.
Artificial Intelligence and Machine Learning live off historical data, and Black Swan events, by definition, do not appear in historical data until they have already happened. So the AI is as stupid as humans are: our AI sees large numbers of white swans, and if you ask it what color a swan is, it says white. Black swans do not exist for the AI either.
AI is worthless where Black Swan events are concerned.
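To make this concrete, here is a minimal sketch in plain Python, with invented data: a model that estimates probabilities purely from historical frequencies assigns zero probability to anything it has never observed.

```python
from collections import Counter

# Hypothetical training history: thousands of observed swans, all white.
history = ["white"] * 10_000

def predict_color_distribution(observations):
    """Estimate P(color) by counting frequencies in past observations."""
    counts = Counter(observations)
    total = len(observations)
    return {color: n / total for color, n in counts.items()}

model = predict_color_distribution(history)

print(model.get("white", 0.0))  # 1.0: every swan the model has seen is white
print(model.get("black", 0.0))  # 0.0: the Black Swan does not exist for this model
```

Adding more white-swan data never changes that second number; only the event itself does, and by then it is no longer a prediction.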
AI itself – ChatGPT – agrees:
AI systems, including those based on advanced algorithms like GPT-3.5, have limitations when it comes to predicting black swan events. Black swan events are extremely rare, unpredictable, and have a significant impact. These events are, by definition, unforeseen and often fall outside the range of normal expectations.
AI models, including machine learning algorithms, are generally trained on historical data. If black swan events have never occurred in the training set, the AI model would not have learned to predict them explicitly. Traditional AI models work based on patterns and correlations found in the data they were trained on. They are not capable of inventing entirely new and unprecedented events beyond the scope of their training data.
However, it’s essential to note that AI can be helpful in preparing for unexpected events, even if they cannot predict black swan events directly. Here’s how:
…
Despite these capabilities, it’s important to maintain a cautious approach when relying on AI for preparing for black swan events. Human judgment, expert analysis, and continuous monitoring are essential elements in mitigating the impacts of truly unforeseen and exceptional events.
Heiloo
In his new book Talking to Strangers, Malcolm Gladwell explains that we should not rely on intuition. Instead, we should realize that we are dealing with people whose assumptions, perspectives, and backgrounds differ from our own, and before we draw conclusions, we should investigate further.
This seems consistent with Daniel Kahneman’s research, as published in Thinking, Fast and Slow: our System 1 thinking leads us to quick conclusions that our System 2 thinking then tries to justify.
Now, the link to AI:
Artificial Intelligence may make better decisions not just because it knows more about a subject, in greater depth, and can process all that information quickly. It also seems that AI can make better decisions because it can ignore a lot of what sits in people’s heads, and the physical presence of things in front of us. AI had better not become more human. That would be disastrous for the quality of its decisions.
Amsterdam