The World Don’t Need No More Images

I have my photo-movie, The World Don’t Need No More Images (full of pictures), just about finished. The first episodes are on YouTube. Is it any good? Probably not. Does it raise a ripple? Unlikely. Is it fun to mix images and sound? Absolutely. Besides, it feels like it has to get out. It had to be tried and done in this form because I have not seen anything similar done before.

And now: bye. Next.

Failure narrative

From Seth Godin’s The Practice, this creator’s failure narrative:

  • There is more supply than demand; therefore, most of the feedback is rejection. From the market, from the gatekeepers.
  • The work is created with generally available tools. The group that believes they can do the same job or better is large.
  • The fanbase is transient, and the churn is significant.
  • Negative criticism spreads more easily than positive feedback.
  • We work in novelty. There is always more novelty for our customers to turn to.
  • We and our customers chase creative magic. By that standard, almost all of our efforts fail.

Successful creators, on the other hand, have the benefit of the doubt and tribal cognitive dissonance in their favor.

EU state of tech and tech legislation

David Heinemeier Hansson writes about EU technology legislation. He is right that the cookie banner laws have led to the awful situation where we must wrestle through consent forms while browsing the web. And yes, he is right:

Europe is in desperate need for a radical rethink on how it legislates tech. The amount of squandered potential from smart, capable entrepreneurs on the old continent is tragic. It needn’t be like this. But if you want different outcomes, you have to start doing different things.

He goes on:

So little of the core tech innovation that’s driving the future, in AI or otherwise, is happening in Europe. And when it does happen there, it’s usually just incubation, which then heads for America or elsewhere as soon as it’s ready to make a real impact on the world.

I’m not sure where elsewhere would be. More importantly, there is more nuance to this state of affairs.

America is leading in technology, but also in creating technological waste, the enshittification of technology. At least there is a body on this planet that puts boundaries on what monopolistic tech companies can do to citizens. That body is not the US government; it is the EU. Yes, there is a lot to say about it, but you can at least say that the EU is protecting its citizens.

Furthermore, DHH could adopt a more critical stance towards the IT industry. As IT became a consumer product, companies like Microsoft, Google, Amazon, and Facebook have shown that they do not always act in the best interests of their customers, to say the least. Legislation is not just a socialist or communist necessity but a fundamental requirement for the proper functioning of capitalism. This is particularly true in the US, where the excessive focus on stockholder value has led to a decline in company ethics.

PS Just this morning, I read that US antitrust laws are working against Google’s anticompetitive behavior.

Left behind

Randy, that bastard, surprises us nicely after dinner with the flown-in hotshots. While we are waiting for the cab in front of the restaurant, he suddenly comes with his jovial “let’s go drive past the ladies over there” proposal, and a nod in the direction of further down the road. It takes a while for the penny to drop before we understand that he is inviting us to join him at the whores. That is clearer.

Then you start to view someone differently. You hear this pathetic comment at the hotel bar. Leaning somewhat lost over the bar stool, with that boyish look of his: the shorts, the gritty shirt, and the flip-flops on his feet. Gulping down half a glass of whiskey, he says: my wife has left me.

Quantum

The University of Delft has a great introduction to Quantum Computing at Qutech Academy. (Buckle up if you want to follow: get your linear algebra skills dusted off, and some physics.) Quantum computing is slowly becoming a reality. Today, it is somewhere between research and reality, like the state of classical computing in the 1950s and 1960s. The difference is that today, we are better able to assess the potential of such technology than people in the 1950s could imagine what computing would mean.

And it will be big. It’s more impactful and real than the current AI hype.

I dug into the Qutech Academy after attending the Qiskit Summer School by IBM, which was somewhat over my head. But it’s an extremely interesting space well worth digging into.

Werner Herzog – Every Man For Himself And God Against All

The memoirs of Werner Herzog.

Herzog tells us about his tough youth in Bavaria, factually, as if it were normal. His family is so poor that they cannot afford shoes and underwear in summer. He grows up in deep poverty in the almost fairy-tale world of the Bavarian mountains: a hard life, his parents somewhat loveless. Herzog brings us from these archaic times into the internet age.

He jumps back to the chaotic times around the Second World War and the weird family situation. His parents are members of the Nazi party. His father is a wild man who married three times: a good-for-nothing, selfish freeloader. Herzog moves around and does not belong anywhere. He lives in the German post-war rubble.

The story jumps back and forth in time and tells about crazy accidents, catastrophes, wounds, illnesses, and crashes. Throughout the book, Herzog speaks about the challenges he takes on without explicitly mentioning them. He seems to have a preference for the risky and weird, which is reflected in the extraordinary topics of his films.

His diary notes under the title Ballad of the Little Soldier are terrible stories about child soldiers. He films people on death row.

He has worked on several films with the crazy and genius actor Klaus Kinski. From the stories, Kinski emerges as even more disturbed than what we already knew about him.

Herzog’s writing style is entertaining. He starts a story, jumps back in time, returns to the story, jumps forward, and so on, which feels very natural.

Can’t summarize. A relentless man is probably the best summary.

AI considered not so harmful

Computer Science professor, writer, and podcaster Cal Newport debunks hysterical reactions to the latest AI developments. Much of this hysteria originates from the media’s search for attention rather than from research executed with scientific rigor. “We have summoned an alien intelligence,” writes Harari, who is slowly but surely turning into a Luddite and professional technology pessimist.

Cal Newport does what Harari and others should have done. In his Deep Questions podcast Defusing AI panic, he takes the subject apart.

Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with.

Cal Newport tells us what ChatGPT does and how intelligent it is. We will see that it is pretty limited.

The result of these efforts might very well be jaw-dropping in its nuance and accuracy, but behind the scenes, its generation lacks majesty. The system’s brilliance turns out to be the result less of a ghost in the machine than of the relentless churning of endless multiplications.

A system like ChatGPT doesn’t create, it imitates.

Consciousness depends on a brain’s ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static…

It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy.
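Newport’s point that the “brilliance” is just relentless multiplications can be made concrete. Below is a toy sketch, not a real model: the vocabulary, dimensions, and random weights are all made up, but the mechanics are the same as in an LLM layer. Input goes forward through matrix multiplications and a squashing function, and out comes a probability distribution over tokens. Nothing looks back, nothing “thinks.”

```python
import math
import random

random.seed(0)

# Toy vocabulary; a real model has ~100k tokens and billions of
# trained weights, but mechanically it is the same: multiply, add, squash.
VOCAB = ["the", "cat", "sat", "mat"]
D_IN, D_HIDDEN = 4, 6

# Randomly initialized weights stand in for trained parameters.
W1 = [[random.gauss(0, 1) for _ in range(D_HIDDEN)] for _ in range(D_IN)]
W2 = [[random.gauss(0, 1) for _ in range(len(VOCAB))] for _ in range(D_HIDDEN)]

def matvec(x, W):
    """x (length n) times W (n x m): the 'relentless multiplications'."""
    return [sum(x[i] * W[i][j] for i in range(len(x)))
            for j in range(len(W[0]))]

def next_token_distribution(x):
    """One feed-forward pass ending in a probability distribution
    over the vocabulary."""
    h = [max(v, 0.0) for v in matvec(x, W1)]   # hidden layer (ReLU)
    logits = matvec(h, W2)                     # one score per token
    m = max(logits)
    exp = [math.exp(v - m) for v in logits]    # numerically stable softmax
    s = sum(exp)
    return [v / s for v in exp]

x = [random.gauss(0, 1) for _ in range(D_IN)]  # stand-in for an encoded prompt
probs = next_token_distribution(x)
print(dict(zip(VOCAB, (round(p, 3) for p in probs))))
```

The output is one distribution over four made-up tokens; picking the next word is just sampling from it, which is the “all a language model can do” Newport describes in the podcast.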

In the podcast, Cal Newport is more technical in his explanations. From the transcript (with light editing for punctuation by me):

What a large language model does is take an input. This information moves forward through layers; it’s fully feed-forward, and out of the other end comes a token, which is a part of a word in reality. It’s a probability distribution over tokens, but a part of a word comes out the other end; that’s all a language model can do. Now, how it generates what token to spit out next can have a huge amount of sophistication …

Where it gets interesting, when I talk to people, is when you begin to combine this really, really sophisticated word generator with control layers: something that sits outside of and works with the language model. That’s really where everything interesting happens. This is what I want to better understand: the control logic that we place outside of the language models, so that we get a better understanding of the possible capabilities of artificial intelligence, because it’s the combined system, language model plus control logic, that becomes more interesting. Because what can control logic do?

It can do two things. First, it chooses what to activate the model with, what input to give it. Second, it can actuate in the real world based on what the model says. So it’s the control logic that can put input into the model and then take the output of the model and actuate that: take action, do something on the Internet, move a physical thing.

Something I’ve been doing recently is sort of thinking about the evolution of control logic that can be appended to generative AI systems like large language models…
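The two jobs Newport assigns to control logic, choosing the model’s input and actuating on its output, can be sketched as a thin wrapper. Everything below is a toy stand-in: the “model” is a canned-answer function, not a real LLM, and the `ACTION:` protocol and `fetch` actuator are invented for illustration.

```python
def language_model(prompt: str) -> str:
    """Stand-in for an LLM: text in, one chunk of text out. The model
    itself never acts; it only emits words."""
    canned = {
        "weather?": "ACTION: fetch https://example.com/weather",
        "default": "I can only answer weather questions.",
    }
    return canned.get(prompt, canned["default"])

def fetch(url: str) -> str:
    """Stand-in actuator: in reality this would hit the network,
    click a button, or move a physical thing."""
    return f"(pretend contents of {url})"

def control_logic(user_request: str) -> str:
    # Job 1: decide what to activate the model with.
    prompt = user_request.strip().lower()
    reply = language_model(prompt)

    # Job 2: actuate in the world based on what the model says.
    if reply.startswith("ACTION: fetch "):
        url = reply.removeprefix("ACTION: fetch ")
        return fetch(url)
    return reply

print(control_logic("Weather?"))   # model asks for an action; wrapper fetches
print(control_logic("Hi there"))   # no action requested; plain reply
```

The model stays a static word generator; all the deciding and doing lives in the wrapper, which is exactly Newport’s point about where the interesting capabilities come from.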

If you look at the picture I created after Cal Newport’s talk, you can see the different control layers. As Cal Newport points out, that is where the actual work is done. The LLM is static; it gives a word, and that’s it. The control logic knows what to do with those words.

Control layer in contemporary artificial intelligence

Now, the control logic has increased in complexity. We know better what to do with the answers AI gives us.

Newport fantasizes about a third control layer that can interact with several AI models, keep track of intention, have visual recognition, and execute complex logic. That is where we are approaching Artificial General Intelligence.

But, as Newport points out, nobody is working on this.

Just as important, this control logic is entirely programmed by humans. We are not even close to AI-generated, self-learning control logic, what Newport calls intentional AI (iAI). It is not clear whether this is even possible with our current AI technology.

It’s the control logic where the exciting things happen.

It’s still people doing the control logic.

In 1990, a friend of mine graduated on Fuzzy Logic, probably at the height of the Fuzzy Logic hype. Fuzzy Logic was one of the technologies that would turn societies upside down. Nowadays, Fuzzy Logic is just one of many technologies, applied like the others to the proper purpose and problem space.

What looks like science fiction today is the mainstream technology of tomorrow. Today’s AI is tomorrow’s plumbing. That is my take on Cal Newport’s explanation of today’s state of the art in AI.