Nice nerdy weaving
Beautiful artwork from chip design and a nice nerdy story about this Navajo weaving from Marilou Schultz.
David Heinemeier Hansson writes about EU technology legislation. He is right that the cookie banner laws have led to the awful situation where we must wrestle through consent forms while browsing the web. And yes, he is right:
Europe is in desperate need for a radical rethink on how it legislates tech. The amount of squandered potential from smart, capable entrepreneurs on the old continent is tragic. It needn’t be like this. But if you want different outcomes, you have to start doing different things.
He goes on:
So little of the core tech innovation that’s driving the future, in AI or otherwise, is happening in Europe. And when it does happen there, it’s usually just incubation, which then heads for America or elsewhere as soon as its ready to make a real impact on the world.
I’m not sure where elsewhere would be. More importantly, there is more nuance to this state of affairs.
America is leading in technology, but also in creating technological waste, or the enshittification of technology. At least there is a body on this planet that puts boundaries on what monopolistic tech companies can do to citizens. That body is not the US government; it is the EU. Yes, there is a lot to say about it, but you can state that the EU is protecting its citizens.
Furthermore, DHH could adopt a more critical stance towards the IT industry. As IT became a consumer product, companies like Microsoft, Google, Amazon and Facebook have shown that they do not always act in the best interests of their customers, to say the least. Legislation is not just a socialist or communist necessity, but a fundamental requirement for the proper functioning of capitalism. This is particularly true in the US, where the excessive focus on shareholder value has led to a decline in corporate ethics.
PS Just this morning, I read that US antitrust laws are working against Google’s anticompetitive behavior.
Delft University of Technology has a great introduction to Quantum Computing at its QuTech Academy. (Buckle up if you want to follow along: get your linear algebra skills dusted off, and some physics.) Quantum computing is slowly becoming a reality. Today, it is somewhere between research and reality, comparable to the state of classical computing in the 1950s and 1960s. The difference is that today we are better able to assess the potential of the technology than anyone in the 1950s could imagine what computing would come to mean.
And it will be big. It’s more impactful and real than the current AI hype.
I dug into the Qutech Academy after attending the Qiskit Summer School by IBM, which was somewhat over my head. But it’s an extremely interesting space well worth digging into.
Computer science professor, writer, and podcaster Cal Newport debunks the hysterical reactions to the latest AI developments. Much of this hysteria originates from the media’s search for attention rather than from research executed with scientific rigor. “We have summoned an alien intelligence,” writes Harari, who is slowly but surely turning into a Luddite and professional technology pessimist.
Cal Newport does what Harari and others should have done. In his Deep Questions podcast episode Defusing AI Panic, he takes the subject apart.
Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with.
Cal Newport tells us what ChatGPT does and how intelligent it is. We will see that it is pretty limited.
The result of these efforts might very well be jaw-dropping in its nuance and accuracy, but behind the scenes, its generation lacks majesty. The system’s brilliance turns out to be the result less of a ghost in the machine than of the relentless churning of endless multiplications.
…
A system like ChatGPT doesn’t create, it imitates.
…
Consciousness depends on a brain’s ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static…
…
It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy.
In the podcast, Cal Newport is more technical in his explanations. From the transcript (with light editing for punctuation by me):
What a large language model does is take an input. This information moves forward through layers. It’s fully feed-forward, and out of the other end comes a token, which is a part of a word in reality. It’s a probability distribution over tokens, but whatever part of a word comes out the other end, that’s all a language model can do. Now, how it generates what token to spit out next can have a huge amount of sophistication …
Where it gets interesting, when I talk to people, is when you begin to combine this really, really sophisticated word generator with control layers: something that sits outside of and works with the language model. That’s really where everything interesting happens. Okay, this is what I want to better understand: it’s by understanding the control logic that we place outside of the language models that we get a better understanding of the possible capabilities of artificial intelligence, because it’s the combined system, language model plus control logic, that becomes more interesting. Because what can control logic do?
It can do two things. First, it chooses what to activate the model with, what input to give it. And it can then, second, actuate in the real world based on what the model says. So it’s the control logic that can put input into the model and then take the output of the model and actuate on that: take action, do something on the Internet, move a physical thing.
Something I’ve been doing recently is sort of thinking about the evolution of control logic that can be appended to generative AI systems like large language models…
If you look at the picture I created after Cal Newport’s talk, you can see the different control layers. As Cal Newport points out, that is where the actual work is done. The LLM is static; it gives a word, and that’s it. The control logic knows what to do with that output.
Now, the control logic has increased in complexity. We know better what to do with the answers AI gives us.
Newport fantasizes about a third control layer that can interact with several AI models, keep track of intention, have visual recognition, and execute complex logic. That is where we would be approaching Artificial General Intelligence.
But, as Newport points out, nobody is working on this.
Just as important, this control logic is entirely programmed by humans. We are not even close to AI-generated, self-learning control logic, what Newport calls intentional AI (iAI). It is not clear whether this is even possible with our current AI technology.
It’s the control logic where the exciting things happen.
It’s still people doing the control logic.
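To make that division of labour concrete, here is a minimal sketch in Python. Everything in it is hypothetical: fake_language_model and take_action are stand-ins I made up, not any real API. It only illustrates Newport’s point that the model produces text, while human-written control logic decides what to feed it and what to do with the result.

```python
# A minimal sketch of Newport's "language model plus control logic" combination.
# Nothing here is a real API: fake_language_model and take_action are hypothetical
# stand-ins, purely to illustrate where the work is divided.

def fake_language_model(prompt: str) -> str:
    """Stand-in for an LLM: text goes in, the next chunk of text comes out. That's all."""
    return "step 1: list the tasks; step 2: put them in the calendar"

def take_action(text: str) -> str:
    """Hypothetical actuator: send an e-mail, call an API, move a physical thing."""
    return f"acted on: {text}"

def control_logic(user_goal: str) -> str:
    """The part that humans program, outside the model."""
    # 1. Choose what to activate the model with (build the prompt).
    prompt = f"Answer the following request step by step: {user_goal}"

    # 2. Call the model. The model itself only produces text.
    answer = fake_language_model(prompt)

    # 3. Actuate on the output: validate it, call a tool, do something in the world.
    if "step" in answer:
        return take_action(answer)
    return answer

print(control_logic("plan my week"))
```

All the interesting behaviour lives in control_logic, not in the model call, which is exactly the point.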
In 1990, a friend of mine graduated on Fuzzy Logic, probably at the height of the Fuzzy Logic hype. Fuzzy Logic was one of the technologies that would turn societies upside down. Nowadays, Fuzzy Logic is just one of many technologies, applied, like the others, where it fits the purpose and the problem space.
What looks like science fiction today is the mainstream technology of tomorrow. Today’s AI is tomorrow’s plumbing. That is my take on Cal Newport’s explanation of today’s state of the AI art.
Earlier, I wrote that today there are excellent search engines as an alternative to Google search. To repeat the argument against using Google search: with Google search, in addition to being an Internet user, you are also part of a commercial product of Google, with all the consequences that has for the reliability of the results.
Another way to consume content from the Internet is through RSS feeds. Google doesn’t like that either, because then it can’t show you ads. I switched to Newsblur after using the locally installed QuiteRSS for a while. I was missing the shared nature of the web, so I switched back to a tool with a web interface. Newsblur is good and has a fair price, but there are other excellent alternatives out there.
I stumbled upon this fascinating article by Stuart Mills looking at the challenges that further development and operations of AI models face.
The costs of model development and operation are increasing. Efficiencies in development and operation are challenging but may be addressed in the future. However, model quality remains a significant challenge that is more difficult to solve.
Data is running out. Solutions such as synthetic data also have their limitations.
There is also a severe challenge around chips. There is a supply shortage in the context of geopolitical tensions between China, the US, and the EU. Also, the environmental costs of running large AI models are significant.
Two revenue models may emerge in the AI industry, each with its own take on the cost aspects highlighted above. The first is the ‘foundation model as a platform’ (OpenAI, Microsoft, Google), which demands ever greater generality and functionality of foundation models.
The second is the ‘bespoke model’ (IBM), which focuses on developing specific models for corporate clients.
Government action can both support and undermine the AI industry. Investment in semiconductor manufacturing in the US and China may increase the supply of chips, and strategic passivity from governments around regulation such as copyright suits the industry. Government interventions should regulate the AI industry in areas related to the socially and environmentally damaging effects of data centers, copyright infringement, exploitation of laborers, discriminatory practices, and market competition.
There is no good reason why you should still rely on Google for your search engine. Read this excellent article on Google’s practices, and Big Tech chills will run down your spine. There are ample good alternative search engines these days that do have integrity:
Bing (though also Big Tech and historically suspect)
And there are more.
Lately, I’ve been using Ecosia and DuckDuckGo pretty much side by side, and I don’t feel like I’m missing anything compared to Google.
When Artificial Intelligence-generated images win photo contests, should we oppose that?
I just think the developments in AI are telling us to do things differently, to stand out. AI has become the competition (and maybe also a tool), just like all other photographers are. So we have to treat AI as competition, too. You can try to deny this reality, but you can also look at how you, as a photographer or artist, can differentiate yourself from this new colleague/competitor.
Ideas:
Personal and analog work distinguishes you from the aggregated, statistically generated products of AI.
I ran into the maximum number of feeds on Feedly. I like the tool, but since I do not want any more subscription-based software, I looked for an alternative RSS reader.
It is a service but based on open source.
Tried to install it on my laptop.
Runs for a while, then tries to run sudo. That won’t work on Windows… 🙁
Immediately gave up. Considered for a minute running it on my Linux server, but I am actually looking for something simple.
Install: needs Java. Installed Java.
Needs Java 1.5. That is a very old version.
Gave up.
Installed like a charm.
Unfortunately, no OPML import. So it took some time to copy the feeds from the OPML export from Feedly into this tool.
Now trying this out.
Lovely old-style interface.
Reminds me of the old RSS reader I had years ago.
Some thoughts regarding the current Artificial Intelligence hype.
I am not among the people who believe AI technology is going to make humans superfluous. Like any technology, AI can be a blessing and a curse. AI technology does have its dangerous sides, and we should focus our attention on those aspects and try not to get lost in religious debates. I think we should look at:
The most fundamental threat from AI technology comes from its inputs. The quality of the output of the AI is directly dependent on the quality of the data fed into the AI’s learning process. If you cannot control the quality of the data you use to train the AI model, you cannot control the quality of the output either. (Most research in this respect is done in the area of so-called biases. What I mean here is more generic: how to assure the reliability of any data fed into an AI.)
Any benevolent social media platform that exploits AI technology has a great challenge: how do you curate the social media content fed into the platform? How do you distinguish real trends from fake ones? How do you prevent bots run by people or organizations with malicious intentions from entering unreliable information into the stream, thereby using the social media platform to spread incorrect information, or ‘fake news’ as it has become known since Donald Trump?
Governments are debating how to regulate these unreliable and often uncontrollable social media streams. The first and probably the easiest thing to demand from social media companies is to apply journalistic principles: transparency about the sources of information. I think this is the most reliable and easiest measure that can be taken. Examining the algorithms, as is sometimes suggested, is very difficult, for the provider as well as the examiner. Instead of looking inside the AI box, we can judge the input to the AI and make significant judgments about the reliability of the AI’s outputs.
AI technology de-humanizes. When AI takes over more of our jobs and activities, what is left for humans? What if AIs at some point gain superior thinking power over humans? What is the purpose of humans then? Humans would become subordinate to AI intelligence. We would lose the one thing that has always distinguished us from the rest of the living creatures on this planet: our superior thinking abilities. I do not think that current AI technology is even close to taking over our thinking abilities; later in this article, I will argue why. Yet, in the sense that AI technology is the next step in taking over tasks that so far only humans could do, yes, AI takes some of that ‘humanity’ away. Jobs will be lost. But that is not a very revolutionary development. The calculator took over calculations, the early computer took over bookkeeping, and the later computer replaced our paper agendas and took over our assembly lines and our cars.
Our brains need a lot of energy: at roughly 2% of our body weight, the brain is responsible for about 20% of the body’s energy consumption. AI requires a lot of energy as well. In January 2023, ChatGPT used as much energy as a mid-size town. A single question to Apple’s Siri requires about 5 Wh of energy. A household uses something like 5,000 kWh per year (in Europe; in the US it is double that figure). So 200 questions to Siri is 1 kWh, and 1 million questions to Siri needs as much energy as a European household in a year. That’s a lot.
If I ask Siri 3 questions per day, that is about 1000 questions per year. That is 5 kWh.
Does your Siri also tend to spontaneously answer questions you did not ask? That’s 5 Wh each.
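For those who like to check the arithmetic, here is a small sketch using the rough figures above (5 Wh per question, 5,000 kWh per European household per year); these are estimates, not measurements.

```python
# Back-of-the-envelope check of the numbers above. The 5 Wh per question is the
# rough figure quoted in the text; everything here is an estimate, not a measurement.

wh_per_question = 5            # Wh per Siri question (figure from the text)
household_kwh_eu = 5000        # yearly consumption of a European household, in kWh

questions_per_kwh = 1000 / wh_per_question                            # 200 questions per kWh
questions_per_household_year = household_kwh_eu * questions_per_kwh   # 1,000,000 questions

questions_per_day = 3
my_yearly_kwh = questions_per_day * 365 * wh_per_question / 1000      # ~5.5 kWh per year

print(questions_per_kwh, questions_per_household_year, round(my_yearly_kwh, 1))
```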
Currently, large organizations drive the development of AI technology. Microsoft, Google, and IBM all promise to make the world a better place, but skepticism is justified. History has proven that these companies strive to dominate the marketplace. According to economic theory, this is totally in line with expectations. And this is not the thing to be afraid of. Monopolies never last and are not scalable. At some point, monopolists can sell a greater quantity only by cutting their price. Microsoft came down after years of trying to dominate the market with inferior Windows technology. (Yes, Microsoft Office still dominates the text-processing market, though I am not sure why that is the case, as very viable free products have been available for quite some time now.) IBM came down. Our own East India Company came down. Google will come down. Facebook will come down.
What is problematic, however, is when technologies like AI end up solely in the hands of a single party or a few parties and the balance of power is disrupted. These parties can then dominate others with these technologies. Totalitarian rule could become a big problem. A relativizing thought, however: AI technologies are much more difficult, if not impossible, to contain than technologies such as nuclear weapons.
You only need moderate capital for AI solutions. Open-source AI technologies are available to everyone. The problem is not the technology; the problem would be a single party claiming exclusive access to data sources.
GPS data, for example.
The current AI technology is an analytical, stochastic process, invented in the last two decades of the 20th century. The process is very compute-intensive. Only recently have our computers become powerful enough for us to make practical use of these AI inventions.
The AI technology analyses past data, and is very clever at “predicting” future values for this data, or gathering ‘facts’ from data. The technology looks at input data and separates noise from signal by throwing out outliers and assigning likelihood values to remaining data points. With this, it can predict (not even calculate) probabilities for future values. That is all.
Nassim Nicholas Taleb has written a book called Fooled by Randomness. One of the points he proves in this book is that you cannot predict “black swan” events from past data. The existence of black swan events can be proven through analytical math, but not predicted through stochastic math.
So AI is worthless when it comes to predicting black swan events. More specifically, AI is therefore useless for supposed applications like stock market prediction. Even professors in AI research areas are ignorant of these “black swan” limitations, arguing that merely further growth of datasets would enable AI to become a perfect predictor of the future (and become superhumanly intelligent).
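A toy illustration of this limitation, under my own simplified assumptions (a normal distribution fitted to synthetic, crash-free return data; it says nothing about any particular AI model):

```python
# A toy illustration of the black swan point (my own example, not Taleb's or Newport's):
# fit a normal distribution to "historical" daily returns that contain no crashes,
# then ask the fitted model how likely a crash-sized move is.
import math
import random
import statistics

random.seed(42)
history = [random.gauss(0.0005, 0.01) for _ in range(2500)]  # ten calm, synthetic trading years

mu = statistics.mean(history)
sigma = statistics.stdev(history)

def probability_of_at_most(x: float) -> float:
    """P(return <= x) under the fitted normal model."""
    z = (x - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A -20% day never appears in the training data, so the fitted model assigns it a
# probability indistinguishable from zero, right up until the day it happens.
print(probability_of_at_most(-0.20))
```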
Out of curiosity, I asked ChatGPT about its ability to predict black swan events.
Here is its honest, modest answer.
AI systems, including those based on advanced algorithms like GPT-3.5, have limitations when it comes to predicting black swan events. Black swan events are extremely rare, unpredictable, and have a significant impact. These events are, by definition, unforeseen and often fall outside the range of normal expectations.
AI models, including machine learning algorithms, are generally trained on historical data. If black swan events have never occurred in the training set, the AI model would not have learned to predict them explicitly. Traditional AI models work based on patterns and correlations found in the data they were trained on. They are not capable of inventing entirely new and unprecedented events beyond the scope of their training data.
However, it’s essential to note that AI can help prepare for unexpected events, even if it cannot predict black swan events directly. Here’s how:
…
Despite these capabilities, it’s important to maintain a cautious approach when relying on AI for preparing for black swan events. Human judgment, expert analysis, and continuous monitoring are essential elements in mitigating the impacts of truly unforeseen and exceptional events.
Next, I will argue that the learning process of AI is so inherently different from human learning that it would be naïve to expect that such a process could ever teach an AI so much that it can mimic human intelligence. AIs could mimic aspects of human intelligence, but for the development of more human-like intelligence, a different learning model would be required.
When we look at the AI learning process, machine learning, we see that it is significantly different from human learning and experience. In short: the AI takes in large amounts of information and learns from it; sometimes this learning is (self-)directed. Humans take in large amounts of data, information, and experience, including sensory information. Humans undergo years of learning from interactions with people. They get complex and ambiguous feedback. They learn from experiences. AIs do not undergo experiences; at best, experiences could be simulated. The learning processes of humans and AIs are so different that it is silly to expect that the two could ever produce similar results.
The AI gets: large amounts of information, sometimes with (self-)directed learning, and at best simulated experiences.
Man gets: large amounts of data, information, and sensory experience, years of learning from interactions with people, complex and ambiguous feedback, and real experiences.
I do not think there is much reason to be frightened of AI. The current state of AI is very, very far from human intelligence, and the current technology misses fundamental functions to close that gap. We should not be afraid of that. Rather than getting distracted by pessimistic stories warning of the end of humankind, we should focus our efforts on the dangerous aspects of AI.
AI-based social media enables the spreading of incorrect and potentially dangerous information. Other AI-based applications struggle with curating the information fed into the tools. Without stringent curation, an AI application undermines its own credibility, as do the providers of services built on unreliable data. Which benevolent being wants to believe a social media channel that bases its information on unreliable data?
Who wants to rely on AI-based decisions if the decisions are learned from an unreliable data set?