RSS Reader adventure

I hit the maximum number of feeds on Feedly. I like the tool, but since I do not want any more subscription-based software, I looked for an alternative RSS reader.

NewsBlur came up as the best option.

It is a hosted service, but based on open-source software.

Tried to install it on my laptop:

  • Install Docker Desktop
  • Clone Newsblur repo
  • Install Cygwin (I am running on Windows here), needed to run make for NewsBlur
  • make nb

It runs for a while, then tries to run sudo. That won’t work on Windows… 🙁

I immediately gave up. I considered for a minute running it on my Linux server, but I am actually looking for something simple.

Then I tried RSSOwl.

Install, needs Java. Installed Java.

Needs Java 1.5. That is a very old version.

Gave up.

Tried QuiteRSS

(Screenshot: the QuiteRSS reader)

Installed like a charm.

Unfortunately, no OPML import. So it took some time to copy the feeds from the Feedly OPML export into this tool.
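Copying feeds by hand could be scripted away. A minimal sketch, assuming a standard Feedly OPML export in which each feed is an `outline` element carrying an `xmlUrl` attribute (the function name is my own):

```python
import xml.etree.ElementTree as ET

def list_feeds(opml_text):
    """Return (title, feed URL) pairs found in an OPML export string."""
    root = ET.fromstring(opml_text)
    feeds = []
    for outline in root.iter("outline"):
        url = outline.get("xmlUrl")
        if url:  # only leaf outlines carry a feed URL; folders do not
            feeds.append((outline.get("title") or outline.get("text"), url))
    return feeds
```

The resulting list can then be entered into the new reader feed by feed.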

Now trying this out.

Lovely old-style interface.

Reminds me of the old RSS reader I had years ago.

AI credibility

Some thoughts regarding the current Artificial Intelligence hype.

I am not among the people who believe AI technology is going to make humans superfluous. Like any technology, AI can be a blessing and a curse. AI technology does have its dangerous sides, and we should focus our attention on these aspects and try not to get lost in religious debates. I think we should look at:

  • Transparency on the (quality of the) inputs for the AI-based software solutions.
  • Transparency on the (energy) cost of AI solutions.
  • Clarity about the possibilities and more importantly, the limitations of AI solutions.
  • Regulations on ownership of important data sets.

The quality of AI outputs

The most fundamental threat from AI technologies comes from their inputs. The quality of the output of the AI is directly dependent on the quality of the data fed into the AI’s learning process. If you cannot control the quality of the data you use to train the AI model, you cannot control the quality of the output either. (Most research in this respect is done in the area of so-called biases. What I mean here is more generic: how to assure the reliability of any data fed into an AI.)

The role of AI on social media

Any benevolent social media platform that exploits AI technology has a great challenge: how do you curate the social media content fed into the platform? How do you distinguish real trends from fake ones? How do you prevent bots run by people or organizations with malicious intentions from injecting unreliable information into the stream, thereby using the platform to spread incorrect information, or ‘fake news’ as it has become known since Donald Trump?

Governments are debating how to regulate these unreliable and often uncontrollable social media streams. The first and probably easiest thing to demand from social media companies is that they apply journalistic principles: transparency about the sources of information. I think this is the most reliable and easiest measure that can be taken. Examining the algorithms, as is sometimes suggested, is very difficult, for the provider as well as for the examiner. Instead of looking inside the AI box, we can judge the input to the AI and make meaningful judgments about the reliability of the AI’s outputs.

AI and humanity

AI technology de-humanizes. When AI takes over more of our jobs and activities, what is left for humans? What if AIs at some point gain superior thinking power over humans? What is the purpose of humans then? We would become subordinate to AI intelligence. We would lose the one thing that has always distinguished us from the rest of the living creatures on this planet: our superior thinking abilities. I do not think that current AI technology is even close to taking over our thinking abilities; later in this article, I will argue why. Yet, in the sense that AI technology is the next step in taking over tasks that so far only humans could do, yes, AI takes some of that ‘humanity’ away. Jobs will be lost. But that is not a very revolutionary development: the calculator took over calculations, the early computer took over bookkeeping, and the later computer replaced our paper agendas, our assembly lines, and our cars.

Energy consumption

Our brains need a lot of energy: our brain makes up 2% of our body but is responsible for 20% of our body’s energy consumption. AI requires a lot of energy as well. In January 2023, ChatGPT used as much energy as a mid-size town. A single question to Apple’s Siri requires about 5 Wh of energy. A household uses something like 5000 kWh per year (in Europe; in the US it is double that figure). So, 200 questions to Siri is 1 kWh, and 1 million questions to Siri need as much energy as a European household in a year. That’s a lot.

If I ask Siri 3 questions per day, that is about 1000 questions per year. That is 5 kWh.
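The arithmetic above can be checked in a few lines. A back-of-the-envelope sketch, using the figures assumed in the text (about 5 Wh per question, about 5000 kWh per European household per year); the constant names are my own:

```python
WH_PER_QUESTION = 5            # assumed energy per Siri question, in Wh
HOUSEHOLD_KWH_PER_YEAR = 5000  # assumed yearly use of a European household

def questions_to_kwh(n_questions):
    """Energy needed for n questions, in kWh."""
    return n_questions * WH_PER_QUESTION / 1000

questions_per_kwh = 1000 // WH_PER_QUESTION   # 200 questions fit in 1 kWh
yearly_kwh = questions_to_kwh(1000)           # ~3 questions/day -> 5 kWh/year
household_questions = HOUSEHOLD_KWH_PER_YEAR * 1000 // WH_PER_QUESTION
# household_questions -> 1,000,000: one household-year of Siri questions
```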

Does your Siri also tend to spontaneously answer questions you did not ask? 5 Wh each.

Who benefits from the AIs?

Currently, large organizations drive the development of AI technology. Microsoft, Google, and IBM all promise to make the world a better place, but skepticism is justified. History has proven that these companies strive to dominate the marketplace. According to economic theory, this is entirely in line with expectations. And this is not the thing to be afraid of. Monopolies never last and are not scalable: at some point, a monopolist can sell a greater quantity only by cutting its price. Microsoft came down after years of trying to dominate the market with inferior Windows technology. (Yes, Microsoft Office still dominates the text-processing market, though I am not sure why, as very viable free products have been available for quite some time now.) IBM came down. Our own East India Company came down. Google will come down. Facebook will come down.

What is problematic, however, is when technologies like AI end up solely in the hands of one or a few parties and a power balance is disrupted. These parties can then dominate others with these technologies. Totalitarian rule could become a big problem. A relativizing thought, however: AI technologies are much more difficult, if not impossible, to contain than technologies such as nuclear weapons.
You only need moderate capital for AI solutions. Open-source AI technologies are available to everyone. The problem is not the technology; the problem arises when a single party can claim exclusive access to data sources.
GPS data, for example.

Why AI is miles away from human intelligence

The current AI technology is an analytical, stochastic process, invented in the last two decades of the 20th century. The process is very compute-intensive. Only recently have our computers become powerful enough to make practical use of these AI inventions.

The AI technology analyses past data, and is very clever at “predicting” future values for this data, or gathering ‘facts’ from data. The technology looks at input data and separates noise from signal by throwing out outliers and assigning likelihood values to remaining data points. With this, it can predict (not even calculate) probabilities for future values. That is all.
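As a toy illustration of that shape of process (not any production AI system): drop outliers, fit a trend on what remains, and extrapolate one step. A minimal sketch using a median-based outlier filter and a least-squares line; all names and thresholds are my own choices:

```python
import statistics

def predict_next(series, k=3.0):
    """Toy sketch: separate signal from noise with a median-based
    outlier rule, fit a least-squares line, extrapolate one step."""
    med = statistics.median(series)
    mad = statistics.median(abs(y - med) for y in series)  # robust spread
    kept = [(i, y) for i, y in enumerate(series)
            if mad == 0 or abs(y - med) <= k * mad]        # drop outliers
    xs = [i for i, _ in kept]
    ys = [y for _, y in kept]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in kept)
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope * len(series) + intercept  # predicted next value
```

Feeding it a clean ramp like `[1, 2, 3, 4, 5]` predicts 6; inserting a wild outlier leaves the prediction unchanged, because the filter throws the outlier away — exactly the behavior that makes this class of methods blind to genuinely novel events.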

Nassim Nicholas Taleb has written a book called Fooled by Randomness. One of the points he makes in this book is that you cannot predict “black swan” events from past data. The existence of black swan events can be proven through analytical math, but not predicted through stochastic math.
So AI is worthless when it comes to predicting black swan events. More specifically, AI is then useless for supposed applications like stock market predictions. Even professors in AI research areas are ignorant of these “black swan” limitations, arguing that merely growing the datasets further would enable AI to become a perfect predictor of the future (and become uber-humanly intelligent).

Out of curiosity, I asked ChatGPT about its ability to predict black swan events.
Here is its honest, modest answer.

AI systems, including those based on advanced algorithms like GPT-3.5, have limitations when it comes to predicting black swan events. Black swan events are extremely rare, unpredictable, and have a significant impact. These events are, by definition, unforeseen and often fall outside the range of normal expectations.

AI models, including machine learning algorithms, are generally trained on historical data. If black swan events have never occurred in the training set, the AI model would not have learned to predict them explicitly. Traditional AI models work based on patterns and correlations found in the data they were trained on. They are not capable of inventing entirely new and unprecedented events beyond the scope of their training data.

However, it’s essential to note that AI can help prepare for unexpected events, even if it cannot predict black swan events directly. Here’s how:

Despite these capabilities, it’s important to maintain a cautious approach when relying on AI for preparing for black swan events. Human judgment, expert analysis, and continuous monitoring are essential elements in mitigating the impacts of truly unforeseen and exceptional events.

Next, I will argue that the learning process of AI is so inherently different from human learning that it would be naïve to expect such a process to ever teach an AI enough to mimic human intelligence. AIs could mimic aspects of human intelligence, but for the development of more human-like intelligence, a different learning model would be required.

When we look at the AI learning process, machine learning, we see that it is significantly different from human learning and experience. The AI takes in large amounts of information and learns from it; sometimes this learning is (self-)directed. Humans take in large amounts of data, information, and experience, including sensory information. Humans undergo years of learning from interactions with people. They get complex and ambiguous feedback. They learn from experiences. AIs do not undergo experiences; at best, experiences could be simulated. The learning processes for humans and AIs are so different that it is silly to expect the two could ever produce similar results.

The AI gets:

  • Many data sets, in huge volumes, and these are largely uncurated.
  • Largely undirected learning.

Man gets:

  • A curated (culturally, geographically, …) stream of information: books, TV, anything on the internet,
    school, discussions, talks, magazines, YouTube, art, music, …

  • Music… What is music to an AI? Music is the thing that plugs right into our brains…
  • An in-born character (depending on where you stand on the nature/nurture dimension).
  • Years of growing up, interactions with people, animals, things, …
  • Years of sensory information from five senses.
  • Emotional feedback from humans, animals, …

I do not think there is much reason to be frightened of AI. The current state of AI is very, very far from human intelligence, and the current technology misses fundamental functions to close that gap. We should not be afraid of that. Rather than getting distracted by pessimistic stories warning of the end of humankind, we should focus our efforts on the dangerous aspects of AI.

AI is just a tool, not a miracle, and its health depends on what you feed it

AI-based social media enable the spreading of incorrect and potentially dangerous information. Other AI-based applications struggle with curating the information fed into them. Without stringent curation, an AI application undermines its own credibility, as do the providers of services built on unreliable data. Which benevolent being wants to believe a social media channel that bases its information on unreliable data?
Who wants to rely on AI-based decisions if the decisions are learned from an unreliable data set?

AI helpless against Black Swans

I am rereading Nassim Nicholas Taleb’s Fooled by Randomness. A great deal of predictive power can be found in historical data. But not the Black Swans. It makes me think of the AI hype that currently reigns.

AI and ML live off historical data. Black Swan events do not occur in historical data.

So the AI is just as foolish as we are: our AI sees huge numbers of white swans. If you ask the AI what colour a swan is, it therefore says: white. For the AI, black swans do not exist either.
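The white-swan trap fits in a few lines of code. A toy sketch, nothing more: a “model” that predicts the most frequent colour in its training history can never output a colour it has not seen.

```python
from collections import Counter

# Historical data: this model has only ever seen white swans.
training_swans = ["white"] * 10_000

def predict_swan_colour(history):
    # Predict the colour seen most often in the training history.
    return Counter(history).most_common(1)[0][0]
```

`predict_swan_colour(training_swans)` answers “white”; a black swan is simply outside the model’s world.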

AI is worthless when it comes to Black Swan events.

ChatGPT says so itself:

AI systems, including those based on advanced algorithms like GPT-3.5, have limitations when it comes to predicting black swan events. Black swan events are extremely rare, unpredictable, and have a significant impact. These events are, by definition, unforeseen and often fall outside the range of normal expectations.

AI models, including machine learning algorithms, are generally trained on historical data. If black swan events have never occurred in the training set, the AI model would not have learned to predict them explicitly. Traditional AI models work based on patterns and correlations found in the data they were trained on. They are not capable of inventing entirely new and unprecedented events beyond the scope of their training data.

However, it’s essential to note that AI can be helpful in preparing for unexpected events, even if they cannot predict black swan events directly. Here’s how:

Despite these capabilities, it’s important to maintain a cautious approach when relying on AI for preparing for black swan events. Human judgment, expert analysis, and continuous monitoring are essential elements in mitigating the impacts of truly unforeseen and exceptional events.

AI, please ignore the world in front of you

In his new book Talking to Strangers, Malcolm Gladwell explains that we should not rely on intuition. Instead, we should realize that we are dealing with people whose assumptions, perspectives, and backgrounds differ from our own, and do some internal investigation before drawing conclusions.
This seems to align with Daniel Kahneman’s research, as published in Thinking, Fast and Slow, in which our System 1 thinking leads us to quick conclusions that our System 2 thinking will then try to justify.

Now the link to AI:
Artificial Intelligence may not only make decisions because it knows more (and deeper) about a subject and can process all that information quickly. It also seems that AI can make better decisions because it can ignore a lot of the information that sits in people’s heads, and it can ignore the physical presence of the things in front of us.
If that is the case, AI had better not become more human. That would be disastrous for the quality of its decisions.

Understanding Ponyo’s Puff-Puff Boat Mechanism

I really like the films of Hayao Miyazaki. Since we just visited Japan, we are re-watching his films (on Netflix).

In the film, after a flood, Sosuke and Ponyo set out in a special little steamboat, powered by a candle, in search of Lisa, Sosuke’s mother. This time, the propulsion of this little boat caught my eye. I googled it, and it turns out to be a puff-puff boat (or ‘pop-pop boot’ in Dutch).

I didn’t understand how this mechanism worked, but of course, you can always find someone who can explain it clearly. A remarkable dynamic balance is created: water is heated in a boiler, pushed out, and sucked back in by the underpressure that results.

https://kayakaa.com/how-does-ponyo-boat-work/

PS. I found that the first link had died. Here is an (even more extensive) description.

http://www.nmia.com/~vrbass/pop-pop/aapt/crane.htm

And for those who like to go down the rabbit hole:

Right To Repair

Cory Doctorow is an activist I admire. One of the things he has his sights set on is ‘Right to repair’.

My iPhone 8 was still fine, but the battery went dead within a day. That sucks, especially on vacation. We take long walks, and I have several apps open then: AllTrails, GPX Tracker, and Relive or Polarsteps. Sometimes Google Maps or a local hiking app, too. None of them uses a whole lot of battery power on its own, but together they are enough to drain a faltering battery quickly.

I looked up the cost of a new battery. It’s 55 euros at Apple, 40 euros at someone around the corner, and 20 euros for the battery itself. That was not too bad.

But couldn’t I do it myself? The company ifixit.com accurately documents what you need to do and tells you what parts you need.

Now, the iPhone’s battery appears to be secured in the device with glue. I immediately understand what Cory Doctorow means. That is an engineering sin. There is no way there is a quality-related reason for it. Apple just wants to make it difficult for you to repair the device yourself.

I ordered the set from iFixit, which, including the tools I didn’t already have, cost me 61 euros including shipping. That sounds like a lot, but I can also use those tools to repair a MacBook Pro 2021, for which Apple says replacing a screen cable should cost 700 euros.

I will report how I fared.

I am continuing my search for a repairable laptop. And found it: https://frame.work/nl/en.

Ubuntu runs like a charm on old laptop

This Saturday I installed Ubuntu Linux on my mother’s old laptop, a Dell Vostro with an Intel Core i3 and 4 GB of RAM, from I don’t know what year; I think 2013. The installation was a breeze, using balenaEtcher to write an image to a USB stick.

Now to see what I will do with it. Host a website that I program all by myself in Django/Python?

Ubuntu roughly means being open to others, devoting yourself to others, showing compassion and respect.

Who is open to my Ubuntu laptop?

Go bug hunting

Nice idea: Make a list of bugs, prioritize them and fix them.

I started the list.

At the same time: cancel email newsletters that I never read anyway but collect out of FOMO.

My computers: from Texas Instruments and Toshiba to Ideapad and MacBook Pro

I felt like making a nerdy list. The computers I have owned.

By the way, I have also worked with DEC 10, VAX, ICL mainframe (VME), IBM mainframe (System 390 and beyond), Solaris, and AIX.

TI-99/4A. It talks BASIC. PEEK and POKE take you directly into its memory.

Toshiba MSX computer HX-10AA. The MSX standard failed. I could already do a lot more with it.

Tulip PC compatible.

https://www.homecomputermuseum.nl/wp-content/uploads/2018/09/tulip-compact-2-museum-1920x1599.png

IBM PS/2. Unimaginative box. Dialed into the Internet with it for the first time, via CompuServe.

IBM Thinkpad 500. My first ‘portable’, in quotes. The thing weighed like lead.

The IBM ThinkPad 500 was a subnotebook with a monochrome screen. Image via eBay

IBM Thinkpad T20, T30, T41. All very good.

ThinkPad T20. Images via ThinkWiki.org.

Lenovo T410. After the sale of the PC division, I switched to Lenovo.


Apple MacBook 2009. This was my first MacBook. Only then did I notice that the user experience of a Mac is so much better than that of Windows. It also boots within 10 seconds, whereas my Windows machines always took over a minute, or even (much) longer.


Apple MacBook Pro 2013.

By far the best of them all. Still performs top notch. Indestructible.

MacBook Pro 2013 15″

Lenovo Ideapad 510. Plastic device. Poor touchpad. But then again, it is by far the cheapest in the list.


HP EliteBook 1040 G3. Pretty robust and comfortable.


MacBook Pro (13-inch, M1, 2020). With Touch Bar. They could have left that out, as far as I am concerned.


Lenovo Ideapad 5 Pro.
