AI considered not so harmful

Cal Newport

Computer Science professor, writer, and podcaster Cal Newport debunks hysterical reactions to the latest AI developments. Much of this hysteria originates from the media’s search for attention rather than from research executed with scientific rigor. “We have summoned an alien intelligence,” writes Harari, who is slowly but surely turning into a Luddite and professional technology pessimist.

Cal Newport does what Harari and others should have done. In his Deep Questions podcast episode Defusing AI Panic, he takes the subject apart.

Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with.

Cal Newport tells us what ChatGPT does and how intelligent it is. We will see that it is pretty limited.

The result of these efforts might very well be jaw-dropping in its nuance and accuracy, but behind the scenes, its generation lacks majesty. The system’s brilliance turns out to be the result less of a ghost in the machine than of the relentless churning of endless multiplications.

A system like ChatGPT doesn’t create, it imitates.

Consciousness depends on a brain’s ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static…

It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy.

In the podcast, Cal Newport is more technical in his explanations. From the transcript (with light editing for punctuation by me):

What a large language model does is it takes an input. This information moves forward through layers. It’s fully feed-forward, and out of the other end comes a token, which is a part of a word. In reality it’s a probability distribution over tokens, but whatever, a part of a word comes out the other end; that’s all a language model can do. Now, how it generates what token to spit out next can have a huge amount of sophistication …
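To make the “probability distribution over tokens” idea concrete, here is a minimal toy sketch in Python. It is my own illustration, not Newport’s: the hard-coded probability table stands in for the billions of learned weights of a real model, but the shape of the loop is the same: context in, one sampled token out, repeat.

```python
import random

# Toy stand-in for a language model. The probabilities below are invented
# for the example; a real LLM computes them with a feed-forward pass over
# billions of learned weights.
TOY_MODEL = {
    "the cat sat on the": {"mat": 0.7, "sofa": 0.2, "roof": 0.1},
    "the cat sat on the mat": {".": 0.9, "and": 0.1},
}

def next_token(context: str) -> str:
    """One 'forward pass': context in, a single sampled token out."""
    distribution = TOY_MODEL.get(context, {".": 1.0})
    tokens = list(distribution)
    weights = list(distribution.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "the cat sat on the"
for _ in range(2):  # generate two tokens, one at a time
    token = next_token(prompt)
    prompt = f"{prompt} {token}"
print(prompt)  # e.g. "the cat sat on the mat ."
```

That is the entire repertoire of the model itself: it emits the next fragment of text, nothing more.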

What I tell people is: when you begin to combine this really, really sophisticated word generator with control layers, something that sits outside of and works with the language model, that’s really where everything interesting happens. Okay, this is what I want to better understand: it is by understanding the control logic that we place outside of the language models that we get a better understanding of the possible capabilities of artificial intelligence, because it’s the combined system, language model plus control logic, that becomes more interesting. Because what can control logic do?

It can do two things. First, it chooses what to activate the model with, what input to give it. Second, it can actuate in the real world based on what the model says. So it’s the control logic that can put input into the model, and then take the output of the model and actuate it: take action, do something on the Internet, move a physical thing.

Something I’ve been doing recently is sort of thinking about the evolution of control logic that can be appended to generative AI systems like large language models…

If you look at the picture I created after Cal Newport’s talk, you can see the different control layers. As Cal Newport points out, that is where the actual work is done. The LLM itself is static; it gives a word, and that’s it. It is the control logic that knows what to do with that output.
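Here is my own minimal sketch, in Python, of that combined system. The names (call_llm, send_email) are hypothetical stand-ins rather than any real API; the point is only where the responsibilities sit: the model maps text to text, while the human-written control logic around it chooses the input and performs any action on the output.

```python
# Sketch of Newport's "language model plus control logic" picture.
# call_llm() and send_email() are hypothetical stand-ins: the model only
# turns text into text, while the surrounding control logic decides what
# to feed it and what to do with the answer.

def call_llm(prompt: str) -> str:
    """Stand-in for a static, stateless language model: text in, text out."""
    return f"Draft reply to: {prompt}"

def send_email(body: str) -> None:
    """Stand-in for actuation in the real world (an action, not more text)."""
    print(f"[email sent]\n{body}")

def control_logic(incoming_message: str) -> None:
    # 1. The control logic chooses what to activate the model with.
    prompt = f"Write a short, polite reply to this message:\n{incoming_message}"
    draft = call_llm(prompt)

    # 2. The control logic takes the model's output and actuates on it.
    #    The decision to act (or not) lives here, outside the model.
    if "reply" in draft.lower():
        send_email(draft)

control_logic("Can we move our meeting to Friday?")
```

Everything interesting, the choice of prompt, the check on the answer, the action taken, is written by people in that outer layer.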

Control layer in contemporary artificial intelligence

Now, the control logic has increased in complexity. We know better what to do with the answers AI gives us.

Newport fantasizes about a third control layer that can interact with several AI models, keep track of intention, have visual recognition, and execute complex logic. That is where we would be approaching Artificial General Intelligence.

But, as Newport points out, nobody is working on this.

Just as important, this control logic is entirely programmed by humans. We are not even close to AI-generated, self-learning control logic, what Newport calls intentional AI (iAI). It is not clear whether this is even possible with our current AI technology.

It’s the control logic where the exciting things happen.

It’s still people doing the control logic.

In 1990, a friend of mine graduated with a thesis on Fuzzy Logic, probably at the height of the Fuzzy Logic hype. Fuzzy Logic was one of the technologies that would turn society upside down. Nowadays, Fuzzy Logic is just one of many technologies, applied like the others to the purpose and problem space it fits.

What looks like science fiction today is the mainstream technology of tomorrow. Today’s AI is tomorrow’s plumbing. That is my take on Cal Newport’s explanation of the current state of the art in AI.

Jenny Odell – How To Do Nothing


Jenny Odell wrote a book about how to do nothing, but it is actually about how to do meaningful things.

Odell wants to help us move away from the attention economy to a physical, public reality, by “doing nothing”. She shows us that doing nothing does not mean turning away from the world and living like a hermit, discarding all contact with it. Rather, by turning away from the breaking-news, attention-seeking media and instead focusing our attention on details in the real, physical world, we can discover a more satisfying and meaningful way of living.

What we should aim our attention at, to be meaningful to the world, is our local environment. I do not know whether Odell invented the term, but she is a great proponent of bioregionalism: attention to, interest in, and familiarity with our local ecology, which gives us valuable insight into its complex relations with other things. She herself found in bird-watching an interest that heightened her attention to her local environment. It makes her drop out of linear time and, when coming back to everyday life, see things differently.

Odell brings in John Cleese – and I love that reference – with a YouTube talk on creativity. But what I like most about the John Cleese video is this: “Pondering leads to creativity and insurrection.”

She describes uselessness as a strategy. I love this idea. The example Jenny Odell gives is an extremely old, ugly tree, full of knots and burls. How did it get this old? By being so ugly and gnarly that it is too difficult for lumberjacks to cut down. All the trees around it have been cut down over the past centuries, but it has survived because it is useless. Another, similar strategy is being too weird to be of any use. Remain weird, hard to categorize. Exercise “resistance in place” – be hard to appropriate for any capitalist value.

In social media, everything needs to be monetized. Time becomes an economic resource we cannot spend on doing “nothing”. However, a sensible way of doing nothing has benefits to offer: moving away from FOMO to NOMO – the Necessity Of Missing Out – and a sharper ability to listen, “Deep Listening”.

We should protect our spaces and time for non-instrumental, useless – in the sense of non-commercial – activity and thought, maintenance and care.

Odell tells us to value maintenance over productivity. Instead of productivity, value:

  • maintenance
  • non-verbal communication
  • experience

Of course this reminds me of Cal Newport’s Digital Minimalism, but interestingly enough she does not reference him anywhere. It also reminds me of the highly related article Newport recently wrote for the New Yorker on why people are quitting their jobs after the pandemic.

She quotes Epicurus on the source of a troubled mind: unnecessary mental baggage due to runaway desires, ambitions, fear, and ego.
An answer to the attention economy could be to turn away from society entirely, but Odell proposes another approach: “standing apart”, in which we contemplate and participate, and look at the world with a futurist view instead of one dominated by perceived urgency. We should not retreat, but practice refusal, boycott, and sabotage.

If we apply Cicero’s will, perseverance, drive, and discipline, we can deny provocations outside the sphere of our desired attention, and improve the acuity of our attention for other things.

Jenny Odell quotes David Hockney’s critique of photography as being the “cyclops view of the world, but for a split second” (paraphrasing). Instead, reality is a collage, a personal construction of images.

Reality, or our perception of it, changes when you look at it rather than through it. Take Jeff Wall’s approach to photography: he reconstructs reality instead of taking a picture of it as it appears to him. In this way he prevents his viewers from looking through the picture at the subject rather than at the picture itself.

Looking attentively is like jumping into Alice’s rabbit hole. It is fun to do and it revives our curiosity. It also allows us to transcend the self and gain a new understanding of things. It helps you not to marinate in conventional wisdom, but to be open to change and deviating ideas.

Where (social) media throw context-poor factoids at you, researching a topic more deeply gives you a full understanding of the context of things. That is a great danger of urgency-driven media: the lack of context they give.

It is not about doing nothing. It’s about doing the right thing, with attention, focus, discipline.

Cal Newport – Digital Minimalism

With all the new technology entering our lives, Cal Newport became convinced that we need a philosophy of technology use: one that steers our decisions on which technologies to use and how to use them, and lets us confidently ignore everything else. He calls this Digital Minimalism. His philosophy is a structured approach to the use of digital technology; the minimalism in it is a way to handle the digital abundance we are confronted with.

Digital technologies are taking over our lives if we let them, especially since they are designed to attract our attention. Therefore it is essential that we know how best to use these tools, and how to retain our autonomy while using them.

Newport cites researchers who have found sound indications that these tools are addictive, pushing us toward behavior that is ultimately bad for our well-being. These tools were even designed to be addictive: they make us constantly seek social approval and positive reinforcement. So we should find ways to reverse this and put the technology to work in our favor instead of against us. Newport’s Digital Minimalism lets us focus on a small number of carefully selected activities that support the things we value, and lets us happily ignore everything else.

Principles behind his philosophy:

  • Clutter is costly – too many devices and apps come at a real cost.
  • Optimization is important – when you select a tool, you should be clear about how you want to use it.
  • Intentionality is satisfying – having selected the few tools you need, intentional use is more satisfying.

So how to achieve this? Newport proposes a rapid transformation through a Digital Declutter:

  • Put aside any optional technologies in your life. For those that are not optional, specify exactly when and how to use them.
  • During this period aggressively explore and discover what you find meaningful.
  • After this break, reintroduce technologies, but only selected ones, with a clear intentional use.

Selection criteria for the tools we want to use:

  • Does the tool support my deeper values in some way?
  • Is it the best tool for its purpose?
  • If so, how and when am I going to use it?

Having created a clear view on the use of technology, Newport adds behavioral practices to further support a digital minimalist life.

Spend time alone

Humans need time to themselves; it increases happiness and productivity. However, with digital tools constantly begging for our attention if we let them, this need for solitude goes more and more unanswered – a state of Solitude Deprivation.
Practices Newport adds: leave home without your phone, take long walks, write letters to yourself.

Don’t Click “Like”

Humans are already wired for social interaction. With digital tools we are pushed toward ever larger and less local social networks, built on short interactions. Studies even show that people feel more lonely when they use social media extensively and have fewer offline interactions.
Newport recommends conversation-centric communication: quality conversations are the most meaningful social interactions.
Adopt a baseline rule: do not use social media as a tool for low-quality relationship nudges. I would add: social media is for marketing.
Consolidate texting.
Hold conversation office hours (and free time for deep work).

Reclaim Leisure

Pursue activities for the satisfaction of the activity itself, not some other goal. That is leisure.

Prioritize demanding activities over passive consumption.
Use skills to produce things valuable in the physical world.
Seek activities that require real-world, structured social interactions.

Fix or build something every week.
Learn and apply one new skill every 6 weeks.
Schedule low-quality leisure time.
Join something.

Create leisure plans: seasonal ones and weekly ones.

Doing nothing is overrated.

Join the Attention Resistance.

Again, this is about making technology use intentional. Facebook’s own researchers found that unintentional, uncontrolled use of Facebook may not be healthy, and that deliberate, good use of the software should be practiced.

Practices Newport suggests:

  • Delete Social Media from your phone. Making it less accessible makes using it more intentional.
  • Turn devices into single-purpose devices.
  • Use social media like a pro.
  • Embrace slow media. A small number of high-quality offerings is better than a lot of low-quality crap. Also be clear on the how and when of your slow media.
  • Dumb down your phone. Make things generally less accessible.

Slow down.