AI considered not so harmful

Computer Science professor, writer, and podcaster Cal Newport debunks the hysterical reactions to the latest AI developments. Much of this hysteria originates from the media’s hunt for attention rather than from research executed with scientific rigor. “We have summoned an alien intelligence,” writes Yuval Noah Harari, who is slowly but surely turning into a Luddite and professional technology pessimist.

Cal Newport does what Harari and others should have done. In his Deep Questions podcast episode “Defusing AI Panic,” he takes the subject apart.

Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with.

Cal Newport tells us what ChatGPT does and how intelligent it is. We will see that it is pretty limited.

The result of these efforts might very well be jaw-dropping in its nuance and accuracy, but behind the scenes, its generation lacks majesty. The system’s brilliance turns out to be the result less of a ghost in the machine than of the relentless churning of endless multiplications.

A system like ChatGPT doesn’t create, it imitates.

Consciousness depends on a brain’s ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static…

It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy.

In the podcast, Cal Newport is more technical in his explanations. From the transcript (with light editing for punctuation by me):

What a large language model does is it takes an input. This information moves forward through layers. It’s fully feed-forward, and out of the other end comes a token, which is a part of a word. In reality, it’s a probability distribution over tokens, but whatever part of a word comes out the other end, that’s all a language model can do. Now, how it generates what token to spit out next can have a huge amount of sophistication …

Where things get interesting, as I tell people, is when you begin to combine this really, really sophisticated word generator with control layers: something that sits outside of and works with the language model. That’s really where everything interesting happens. Okay, this is what I want to better understand: the control logic that we place outside of the language models. It is there that we get a better understanding of the possible capabilities of artificial intelligence, because it’s the combined system, language model plus control logic, that becomes more interesting. Because what can control logic do?

It can do two things. First, it chooses what to activate the model with, what input to give it. Second, it can actuate in the real world based on what the model says. So it’s the control logic that can put input into the model and then take the output of the model and actuate on it: take action, do something on the Internet, move a physical thing.
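To make the first part of that quote concrete: stripped of all sophistication, a language model maps a token sequence to a probability distribution over the next token, and generation is just sampling from that distribution in a loop. Below is a minimal, hypothetical Python sketch; the toy vocabulary and the uniform probabilities are stand-ins for what a real network computes.

```python
import random

# Toy stand-in for a real model's vocabulary of word parts (tokens).
VOCAB = ["The", "cat", "sat", "on", "the", "mat", "."]

def language_model(tokens: list[str]) -> dict[str, float]:
    """Stand-in for one feed-forward pass: a token sequence goes in, a
    probability distribution over the next token comes out. A real model
    computes these probabilities; here they are simply uniform."""
    return {token: 1.0 / len(VOCAB) for token in VOCAB}

def generate(prompt: list[str], max_tokens: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        distribution = language_model(tokens)
        # Sampling one token from the distribution is all the model itself
        # ever does: emit the next word part, one at a time.
        next_token = random.choices(
            list(distribution), weights=list(distribution.values())
        )[0]
        tokens.append(next_token)
    return tokens

print(generate(["The", "cat"]))
```

In a real system, language_model would be a trained network and the distribution anything but uniform; the point is only the shape of the interface: tokens in, one token out, repeat.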

Something I’ve been doing recently is sort of thinking about the evolution of control logic that can be appended to generative AI systems like large language models…

If you look at the picture I created after Cal Newport’s talk, you can see the different control layers. As Cal Newport points out, that is where the actual work is done. The LLM is static; it gives a word, and that’s it. The control logic knows what to do with that output.

Control layer in contemporary artificial intelligence

Over time, the control logic has increased in complexity: we know better and better what to do with the answers AI gives us.
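As a sketch of what such a control layer amounts to, using the two functions Newport names (choose the model’s input, actuate on its output): everything below, names included, is hypothetical illustration, not any particular product’s API.

```python
from typing import Callable

def build_prompt(goal: str, observations: list[str]) -> str:
    """Control logic, part 1: choose what to activate the model with."""
    facts = "; ".join(observations) or "none yet"
    return f"Goal: {goal}\nObservations so far: {facts}\nNext action:"

def actuate(action: str) -> str:
    """Control logic, part 2: turn the model's text into an effect in the
    world (call an API, move a thing) and report back what happened."""
    print(f"executing: {action}")
    return f"done: {action}"

def control_loop(goal: str, llm: Callable[[str], str], steps: int = 3) -> None:
    """The model only maps text to text; this loop around it is plain
    human-written code deciding the inputs and acting on the outputs."""
    observations: list[str] = []
    for _ in range(steps):
        prompt = build_prompt(goal, observations)
        action = llm(prompt)                   # static model: text in, text out
        observations.append(actuate(action))   # the control logic acts

# Usage with a fake "model" that always proposes the same action:
control_loop("water the plants", llm=lambda prompt: "open the tap")
```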

Newport fantasizes about a third control layer that can interact with several AI models, keep track of intention, use visual recognition, and execute complex logic. That is where we would be approaching Artificial General Intelligence.
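Purely as speculation about what that third layer might look like, here is a sketch of an orchestrator that keeps track of an intention and routes work across several models; every name in it is invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    """Speculative third layer: it holds on to one intention across steps
    and delegates each observation to whichever model fits it."""
    intention: str
    models: dict[str, Callable[[str], str]]  # e.g. {"text": ..., "vision": ...}
    history: list[str] = field(default_factory=list)

    def step(self, observation: str) -> str:
        # Crude routing, standing in for "execute complex logic".
        kind = "vision" if observation.endswith((".png", ".jpg")) else "text"
        result = self.models[kind](f"{self.intention}: {observation}")
        self.history.append(result)  # remembering progress = tracking intention
        return result
```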

But, as Newport points out, nobody is working on this.

Just as important, this control logic is entirely programmed by humans. We are not even close to AI-generated, self-learning control logic, what Newport calls intentional AI (iAI). It is not clear whether that is even possible with our current AI technology.

It’s the control logic where the exciting things happen.

It’s still people doing the control logic.

In 1990, a friend of mine graduated with a thesis on Fuzzy Logic, probably at the height of the Fuzzy Logic hype. Fuzzy Logic was one of the technologies that would turn societies upside down. Nowadays, it is just one technology among others, applied to the purpose and problem space it fits.

What looks like science fiction today is the mainstream technology of tomorrow. Today’s AI is tomorrow’s plumbing. That is my take on Cal Newport’s explanation of the current state of the art in AI.

Yeah, 16.

The transcription feature in Microsoft Teams works perfectly, as my colleague informs me. He has sent me the transcription of the meeting we just concluded.

OK.

OK. OK.

He used the black from the coop. Stick to you Muslim with with our something. OK, for overhead and he is it document.

AFK Girl California phone and doing the blood. Either get it the document over, so yeah, I’ll need the. Is it a lot?

Yeah, 16.

The meeting was in Dutch.

Norwegian Wood the movie

Yesterday, I re-watched an episode of Twin Peaks, which remains a fantastic David Lynch classic. Being somewhat low-energy, I scrolled through my JustWatch list to see if any other exciting films were available. There, I found Norwegian Wood.

Recently, I reread Haruki Murakami’s book, and I still liked it very much. (I rarely reread a book; the exceptions are Haruki Murakami, Gerrit Krol, Douglas Coupland, Derek Sivers, and Seth Godin.)
The movie Norwegian Wood has an atmosphere very similar to the book’s: the typical Murakami-like alienation from the world.

“Of course.”

“Is that a catchphrase of yours?”

I found this again in “The City and Its Uncertain Walls” (in Dutch – De stad en zijn onvaste muren).

→ The City and Its Uncertain Walls

Project 2025, an outlook on US autocracy

Yesterday, I stumbled upon Trump’s Project 2025. It is an astonishing fascist agenda from the ultra-right wing of the Republican Party that seeks to overthrow the government as we know it and install an autocracy in which the president holds all the power.

The language on the website is amazingly hateful. What is behind this hate between left and right that has led to such an extreme divide between the people of the US?

The 180-day playbook describes a swift transition of the entire government. For example, under the motto “personnel is policy,” thousands of political jobs are to be re-staffed with “dedicated conservatives.”

Under the term “religious freedom,” which they claim correlates with poverty reduction, economic growth, and peace, an orthodox Christian policy is promised that bans abortion rights, LGBTQI+ rights, etcetera.

The next conservative Administration must champion the core American value of religious freedom, which correlates significantly with poverty reduction, economic growth, and peace. It should train all USAID staff on the connection between religious freedom and development; integrate it into all of the agency’s programs, including the five-year Country Development and Coordination Strategies due for updates in 2025; strengthen the missions’ relationships with local faith-based leaders, and build on local programs that are serving the poor.

We have enough evidence of what an orthodox Christian society will bring, because we know what any society based on an orthodox religion produces: intolerance, oppression, government violence, racism, discrimination, and other extreme outgrowths.

We can only hope that the people in the US, especially those who typically support the Republican Party, will distance themselves from this autocratic threat. The alternative is probably not their ideal either, but it is at least approachable and rational.

… o shit

Sometimes you are telling your story so confidently, about how elegantly you solved that problem. Then you get that one question that silences you for a couple of seconds, and all you can say is “… o shit”.

We can become so blind to the omissions in our stories that we miss the most obvious shortcoming: the one question we should have had an answer to.

Today was one of those days. We had worked on this for a year and a half, and then a person in the audience asked, “And what if this XYZ widget in your thing here fails?”

… Silence … o shit!