AI considered not so harmful

Computer Science professor, writer, and podcaster Cal Newport debunks the hysterical reactions to the latest AI developments. Much of this hysteria originates from the media’s search for attention rather than from research executed with scientific rigor. “We have summoned an alien intelligence,” writes Yuval Noah Harari, who is slowly but surely turning into a Luddite and professional technology pessimist.

Cal Newport does what Harari and others should have done. In his Deep Questions podcast episode Defusing AI panic, he takes the subject apart.

Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with.

Cal Newport tells us what ChatGPT does and how intelligent it is. We will see that it is pretty limited.

The result of these efforts might very well be jaw-dropping in its nuance and accuracy, but behind the scenes, its generation lacks majesty. The system’s brilliance turns out to be the result less of a ghost in the machine than of the relentless churning of endless multiplications.

A system like ChatGPT doesn’t create, it imitates.

Consciousness depends on a brain’s ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static…

It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy.

In the podcast, Cal Newport is more technical in his explanations. From the transcript (with light editing for punctuation by me):

What a large language model does is take an input. This information moves forward through layers; it’s fully feed-forward, and out of the other end comes a token, which is a part of a word. In reality, it’s a probability distribution over tokens, but whatever: a part of a word comes out the other end, and that’s all a language model can do. Now, how it generates what token to spit out next can have a huge amount of sophistication …

Where things get interesting, when I talk to people, is when you begin to combine this really, really sophisticated word generator with control layers: something that sits outside of, and works with, the language model. That’s really where everything interesting happens. Okay, this is what I want to better understand: the control logic that we place outside of the language models. It’s by understanding that control logic that we get a better understanding of the possible capabilities of artificial intelligence, because it’s the combined system, language model plus control logic, that becomes more interesting. Because what can control logic do?

It can do two things. First, it chooses what to activate the model with, what input to give it. Second, it can actuate in the real world based on what the model says. So it’s the control logic that can put input into the model and then take the output of the model and actuate that: take action, do something on the Internet, move a physical thing.

Something I’ve been doing recently is sort of thinking about the evolution of control logic that can be appended to generative AI systems like large language models…

If you look at the picture I created after Cal Newport’s talk, you can see the different control layers. As Cal Newport points out, that is where the actual work is done. The LLM is static; it produces a word, and that’s it. The control logic knows what to do with that output.
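Newport’s picture of “language model plus control logic” can be sketched in a few lines of Python. This is a hypothetical toy, not Newport’s code: `llm_generate` stands in for any next-token generator (here it just returns canned answers so the sketch runs), and `control_loop` is the outer layer that chooses the model’s input and acts on its output.

```python
# Hypothetical sketch of the "control logic + LLM" combination.
# The model itself is a static function: input in, token out.
# Everything interesting happens in the control layer around it.

def llm_generate(prompt: str) -> str:
    """Stand-in for a language model: returns the next 'token'.
    Canned answers keep the sketch runnable without a real model."""
    canned = {"What is 2+2?": "4"}
    return canned.get(prompt, "unknown")

def control_loop(goal: str) -> str:
    """Control logic: pick the model's input, then actuate its output."""
    prompt = goal                  # 1. choose what to activate the model with
    answer = llm_generate(prompt)  # 2. run the static, feed-forward model
    # 3. actuate: a real system would call an API, search the web,
    #    or move a physical thing; here we only format a report.
    return f"Model answered: {answer}"

print(control_loop("What is 2+2?"))  # Model answered: 4
```

A second and third control layer, in Newport’s terms, would wrap further loops around `control_loop`: checking the answer, re-prompting the model, and keeping track of intention.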

Control layer in contemporary artificial intelligence

Now, the control logic has increased in complexity. We know better what to do with the answers AI gives us.

Newport fantasizes about a third control layer that can interact with several AI models, keep track of intention, have visual recognition, and execute complex logic. That is where we would be approaching Artificial General Intelligence.

But, as Newport points out, nobody is working on this.

Just as important, this control logic is entirely programmed by humans. We are not even close to AI-generated, self-learning control logic, what Newport calls intentional AI (iAI). It is not clear whether this is even possible with our current AI technology.

It’s the control logic where the exciting things happen.

It’s still people doing the control logic.

In 1990, a friend of mine graduated on Fuzzy Logic, probably at the height of the Fuzzy Logic hype. Fuzzy Logic was one of the technologies that would turn societies upside down. Nowadays, Fuzzy Logic is just one technology among others, applied to the purpose and problem space it suits.

What looks like science fiction today is the mainstream technology of tomorrow. Today’s AI is tomorrow’s plumbing. That is my take on Cal Newport’s explanation of the current state of the AI art.

Opting out of Instagram AI

As European users, we can opt out of Instagram and Facebook using our posts for AI training. I have exercised this option: I am the product of Facebook and Instagram, but I strive to limit their use of me as such.

Opting out on Instagram looks deliberately cumbersome. However, from Facebook, which is also owned by Meta, I received an email with very simple instructions.

Now, I am curious if they can prove they are not using my data for AI.

The cost of AI

I stumbled upon this fascinating article by Stuart Mills looking at the challenges that further development and operations of AI models face.

The costs of model development and operation are increasing. Efficiencies in development and operation are challenging but may be addressed in the future. However, model quality remains a significant challenge that is more difficult to solve.

Data is running out. Solutions such as synthetic data also have their limitations.

There is also a severe challenge around chips. There is a supply shortage in the context of geopolitical tensions between China, the US, and the EU. Also, the environmental costs of running large AI models are significant.

Two revenue models may emerge in the AI industry, each with its own take on the cost aspects highlighted above. The first is the ‘foundation model as a platform’ (OpenAI, Microsoft, Google), which demands increasing generality and functionality of foundation models.

The second is the ‘bespoke model’ (IBM), which focuses on developing specific models for corporate clients.

Government action can both support and undermine the AI industry. Investment in semiconductor manufacturing in the US and China may increase the supply of chips, and strategic passivity from governments around regulations such as copyright suits the industry. Government interventions should regulate the AI industry in areas related to the socially and environmentally damaging effects of data centers, copyright infringement, exploitation of laborers, discriminatory practices, and market competition.

AI, duh

When Artificial Intelligence-generated images win photo contests, should we oppose that?

I just think the developments in AI are telling us to do things differently, to stand out. AI has become the competition (and maybe also just a tool), just like all other photographers are. So we have to treat AI as competition, too. You can try to deny this reality, or you can look at how you, as a photographer or artist, can differentiate yourself from this new colleague/competitor.

Ideas:

  • Stories instead of single images. Combine with text.
  • An analog version of your work: a print, a book, wallpaper, toilet paper, t-shirts, quilt covers, printed bags, whatever.
  • Combine your photos into a video.
  • Handmade books.
  • Collages.

Personal and analog distinguish you from the aggregated, statistically generated products of AI.

The competition

AI credibility

Some thoughts regarding the current Artificial Intelligence hype.

I am not among the people who believe AI technology is going to make humans superfluous. Like any technology, AI can be a blessing and a curse. AI technology does have its dangerous sides; we should focus our attention on those aspects and try not to get lost in religious debates. I think we should look at:

  • Transparency on the (quality of the) inputs for the AI-based software solutions.
  • Transparency on the (energy) cost of AI solutions.
  • Clarity about the possibilities and more importantly, the limitations of AI solutions.
  • Regulations on ownership of important data sets.

The quality of AI outputs

The most fundamental threat from AI technologies comes from their inputs. The quality of the output of an AI is directly dependent on the quality of the data fed into the AI’s learning process. If you cannot control the quality of the data you use to train the AI model, you cannot control the quality of the output either. (Most research in this respect is done in the area of so-called biases. What I mean here is more generic: how to assure the reliability of any data fed into an AI.)

The role of AI on social media

Any benevolent social media platform that exploits AI technology faces a great challenge: how do you curate the social media streams fed into the platform? How do you distinguish real trends from fake ones? How do you prevent bots run by people or organizations with malicious intentions from entering unreliable information into the stream, thereby using the social media platform to spread incorrect information, or ‘fake news’ as it has become known since Donald Trump?

Governments are debating how to regulate these unreliable and often uncontrollable social media streams. The first and probably easiest thing to demand from social media companies is to apply journalistic principles: transparency about the sources of information. I think this is the most reliable and easiest measure that can be taken. Examining the algorithms, as is sometimes suggested, is very difficult, for the provider as well as the examiner. Instead of looking inside the AI box, we can judge the input to the AI and make significant judgments about the reliability of the AI’s outputs.

AI and humanity

AI technology de-humanizes. When AI takes over more of our jobs and activities, what is left for humans? What if AIs at some point gain superior thinking power over humans? What is the purpose of man then? Humans would become subordinate to AI intelligence. We would lose the one thing that has always distinguished us from the rest of the living creatures on this planet: our superior thinking abilities. I do not think that current AI technology is even close to taking over our thinking abilities; later in this article, I will argue why. Yet, in the sense that AI technology is the next step in taking over tasks that so far only humans could do, yes, AI takes some of that ‘humanity’ away. Jobs will be lost. But that is not a very revolutionary development. The calculator took over calculations, the early computer took over bookkeeping, and the later computer replaced our paper agendas, assembly lines, and our cars.

Energy consumption

Our brains need a lot of energy: our 2% brain is responsible for 20% of our body’s energy consumption. AI requires a lot of energy as well. In January 2023, ChatGPT used as much energy as a mid-size town. A single question to Apple’s Siri requires about 5 Wh of energy. A household uses something like 5,000 kWh per year (in Europe; in the US it is double that figure). So 200 questions to Siri is 1 kWh, and 1 million questions to Siri need as much energy as a European household in a year. That’s a lot.

If I ask Siri 3 questions per day, that is about 1,000 questions per year, or about 5 kWh.
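The arithmetic above can be checked in a few lines, using the same assumed figures (5 Wh per Siri question, 5,000 kWh per European household per year):

```python
# Back-of-the-envelope check of the Siri energy figures used above.
WH_PER_QUERY = 5                # assumed energy per Siri question, in Wh
HOUSEHOLD_KWH_PER_YEAR = 5000   # assumed European household consumption

queries_per_kwh = 1000 / WH_PER_QUERY                             # 200 questions per kWh
queries_per_household = HOUSEHOLD_KWH_PER_YEAR * queries_per_kwh  # 1 million questions

my_queries_per_year = 3 * 365                                # 3 questions a day
my_kwh_per_year = my_queries_per_year * WH_PER_QUERY / 1000  # about 5.5 kWh

print(queries_per_kwh, queries_per_household, round(my_kwh_per_year, 1))
# 200.0 1000000.0 5.5
```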

Does your Siri also tend to spontaneously answer questions you did not ask? That’s 5 Wh each.

Who benefits from the AIs?

Currently, large organizations drive the development of AI technology. Microsoft, Google, and IBM all promise to make the world a better place, but skepticism is justified. History has proven that these companies strive to dominate the marketplace. According to economic theory, this is entirely in line with expectations. And this is not the thing to be afraid of. Monopolies never last and are not scalable. At some point, a monopolist can sell a greater quantity only by cutting its price. Microsoft came down after years of trying to dominate the market with inferior Windows technology. (Yes, Microsoft Office still dominates the text-processing market, though I am not sure why, as very viable free products have been available for quite some time now.) IBM came down. Our own East India Company came down. Google will come down. Facebook will come down.

What is problematic, however, is when technologies like AI end up solely in the hands of a single party or a few parties and a power balance is disrupted. These parties can dominate others with these technologies. Totalitarian rule could become a big problem. A relativizing thought, however: AI technologies are much more difficult, if not impossible, to contain than technologies like nuclear weapons. You only need moderate capital for AI solutions, and open-source AI technologies are available to everyone. The problem is not the technology; the problem would arise when a single party could claim exclusive access to data sources, GPS data for example.

Why AI is miles away from human intelligence

The current AI technology is an analytical, stochastic process, invented in the last two decades of the 20th century. The process is very compute-intensive; only recently have our computers become powerful enough to make practical use of AI inventions.

AI technology analyzes past data and is very clever at “predicting” future values for this data, or gathering ‘facts’ from data. The technology looks at input data and separates noise from signal by throwing out outliers and assigning likelihood values to the remaining data points. With this, it can predict (not even calculate) probabilities for future values. That is all.
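A toy illustration of that point, not a real LLM but the same principle in miniature: a bigram model that merely counts which word followed which in past data and turns those counts into probabilities for the next word. It cannot produce anything that was not, in some form, already in its data.

```python
from collections import Counter, defaultdict

# Toy "language model": estimate P(next word | current word) from past data.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1          # count each observed word pair

def next_word_probs(word: str) -> dict:
    """Turn raw co-occurrence counts into a probability distribution."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Scale the corpus up to the whole Internet and replace the counting with a neural network, and you have the shape of a modern LLM: probabilities estimated from past data, nothing more.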

Nassim Nicholas Taleb has written a book called Fooled by Randomness. One of the points he proves in this book is that you cannot predict “black swan” events from past data. The existence of black swan events can be proven through analytical math, but not predicted through stochastic math. So AI is worthless when it comes to predicting black swan events. More specifically, AI is therefore useless for supposed applications like stock market prediction. Even professors in AI research areas are ignorant of these “black swan” limitations, arguing that mere further growth of datasets would enable AI to become a perfect predictor of the future (and become uber-humanly intelligent).

Out of curiosity, I asked ChatGPT about its ability to predict black swan events. Here is its honest, modest answer.

AI systems, including those based on advanced algorithms like GPT-3.5, have limitations when it comes to predicting black swan events. Black swan events are extremely rare, unpredictable, and have a significant impact. These events are, by definition, unforeseen and often fall outside the range of normal expectations.

AI models, including machine learning algorithms, are generally trained on historical data. If black swan events have never occurred in the training set, the AI model would not have learned to predict them explicitly. Traditional AI models work based on patterns and correlations found in the data they were trained on. They are not capable of inventing entirely new and unprecedented events beyond the scope of their training data.

However, it’s essential to note that AI can help prepare for unexpected events, even if it cannot predict black swan events directly. Here’s how:

Despite these capabilities, it’s important to maintain a cautious approach when relying on AI for preparing for black swan events. Human judgment, expert analysis, and continuous monitoring are essential elements in mitigating the impacts of truly unforeseen and exceptional events.

Next, I will argue that the learning process of AI is so inherently different from human learning that it would be naïve to expect such processes to ever teach an AI enough to mimic human intelligence. AIs could mimic aspects of human intelligence, but for the development of more human-like intelligence, a different learning model would be required.

When we look at the AI learning process, machine learning, we see that it is significantly different from human learning and experience. In short: the AI takes in large amounts of information and learns from it; sometimes this learning is (self-)directed. Humans take in large amounts of data, information, and experience, including sensory information. Humans undergo years of learning from interactions with people. They get complex and ambiguous feedback. They learn from experiences. AIs do not undergo experiences; at best, experiences could be simulated. The learning processes for humans and AIs are so different that it is silly to expect the two could ever produce similar results.

The AI gets:

  • Many data sets, in huge volumes, largely uncurated.
  • Largely undirected learning.

Man gets:

  • A curated (culturally, geographically, …) stream of information: books, TV, anything on the internet, school, discussions, talks, magazines, YouTube, art, music…

  • Music… What is music to an AI? Music is the thing that plugs right into our brains…
  • An in-born character (depending on where you stand in the nature/nurture debate).
  • Years of growing up, interactions with people, animals, things, …
  • Years of sensory information from five senses.
  • Emotional feedback from humans, animals…

I do not think there is much reason to be frightened of AI. The current state of AI is very, very far from human intelligence, and the current technology misses fundamental functions needed to close that gap. We should not be afraid of it. Rather than getting distracted by pessimistic stories warning of the end of humankind, we should focus our efforts on the genuinely dangerous aspects of AI.

AI is just a tool, not a miracle, and its health depends on what you feed it

AI-based social media enable the spreading of incorrect and potentially dangerous information. Other AI-based applications struggle with curating the information fed into the tools. Without stringent curation, an AI application undermines its own credibility, as do the providers of services built on unreliable data. Which benevolent being wants to believe a social media channel that bases its information on unreliable data? Who wants to rely on AI-based decisions if the decisions are learned from an unreliable data set?

Homo Deus – Harari

Notes and commentary on Homo Deus.

P58: Predicting what the world will look like in the future is nonsense. Courageous, then, to devote an entire book to it anyway.

P73: An interesting history of the lawn, yes, the well-maintained patch of grass that dominates our front and back gardens. The lawn originated as a status symbol, it turns out. In earlier times, only the rich could afford a tightly groomed lawn.

Modern man pursues three goals: immortality, happiness, divinity. The last one requires some explanation: Harari probably means that man can bend the whole world, all life, to his will and shape it as he pleases.

On the loss of humanity through the merging of man and technology.

Man has placed himself above the rest of the world. Human interests outrank the interests of every other being. A non-human could pursue the same. If an entity with comparable intelligence arises, it could be the end of mankind. That is the thrust of this book.

Harari defines religions broadly; he also regards communism, liberalism, and Nazism as religions. He explains this: many entities exist only in our shared imagination. We all believe in them, and if we stopped believing, these institutions would collapse.

P147: Harari writes that “the existence of the soul thus cannot be reconciled with the theory of evolution.” Which is a peculiar line of argument; I do not see why the same would not hold for consciousness. P128: After the treatise concluding that the soul is speculation, consciousness indeed goes into the wastebasket as well: “Consciousness is the biologically useless by-product of our brains.” Yet it has moral value. And that is exactly what makes us human.

P142: What makes us humans unique, according to Harari, is the ability to cooperate in large groups. Not our consciousness.

P156: Meaning arises through a web of stories. Meaning can slowly fall apart; secularization, for example.

P161: Intersubjective entities are things that exist only in shared imagination (versus objective entities such as a car or a telephone, and subjective entities that exist only in the imagination of a single person). Examples of intersubjective entities are the EU, the World Bank, and religions.

P162: Human fabrications are translated into genetic and biological codes and thus change objective reality. P176: Writing both describes and distorts reality.

P186: What is a really existing entity? Definition: something that can suffer. Companies, money, and nations exist only in the imagination. It becomes ever harder to keep making the distinction.

P192: Religion is belief in moral laws. So communism and liberalism are, in that sense, religions too. Without gods, but religions nonetheless.

P195: Religion wants to reinforce the world order. A spiritual journey is breaking loose from the world order: traveling toward ‘the good god’ of dualism.

P205: Science can investigate and put into perspective the facts on which religious claims are based, and thus weigh ethical judgments against factual claims.

P207: However, not all ethical disputes can be settled with scientific facts. If one of man’s goals is to maximize human happiness, how can you factually judge whether that goal has been reached? So a religion is always needed for guiding ethical principles. P208: Science gives power over war, food, et cetera. Religion wants order and social structures. Science and religion thus go hand in hand in society. In modern society, humanism and science form the foundations.

P222: Market capitalism: growth is sacred. Capitalism is a success; we have more power than ever. The downside is a lack of meaning, and humanism arose to provide it. P242: What is good for the individual becomes the guiding principle; thus liberalism arises. Nationalism strives for the uniqueness of nations. Socialism takes the feelings of others into account, with the result that collectivist institutions are in charge. P264: Evolutionary humanism: war is essential; it eradicates the weak. It also leads to Nazism.

P271: In the 20th century, up to and including the 1970s, evolutionary humanism seemed to dominate, in the successes of socialism.

P283: Successful religions are those that acknowledge technological reality and anticipate it. Initially, this was the reason for socialism’s success; later, socialism lost touch with reality and perished. Liberalism, by contrast, has embraced technology, but that is also its own pitfall. This is the main picture Harari wants to convey in this book. P300: Free will: the feeling of free will; consciousness is a stream of processes. The will is an outcome, not a steering mechanism, and can therefore be manipulated, for instance with transcranial stimulators. Harari gives examples.

P308: Our self: an experiencing self and a narrating self. The narrating self turns our actual experiences into a story.

P321: People are losing their economic value. Many forms of intelligence are being taken over by computers: tasks that require pattern recognition.

P338: All forms of intelligence are being taken over by computers. What use is humanity then? P348: The narrating self can be let go and handed over to statistical services such as Google, which can steer us on the basis of data about our behavior, DNA, et cetera. In this way, liberalism can be undermined. The narrating self can be replaced by an algorithm that knows my opinions better and knows better what is good for me.

P356: The management of man’s logistical and biological affairs can be left to technology. Biology, too, has turned out to be an algorithm.

P361: There is a looming danger of elites: technology reserved for the valuable elite, including, for example, health care. This creates an “upgraded” superhuman and a mass of useless people.

P362: Two new religions. Data-ism: the baton must be passed from humanity to new entities. Techno-humanism: an upgraded human remains relevant but must keep upgrading. Upgrading humans happens through nanotechnology, computer interfaces, and genetic modification.

P374: Dangers of upgrading man: losing human traits such as dreaming and fantasizing, because they distract and are inefficient; also less intelligence, because too much intelligence can get in the way of efficiency as well. P378: The dilemma of techno-humanism: how do you upgrade people without reducing away human will and experience?

P384: Capitalism is more successful than communism because it is a more efficient information-processing system; in other words, it is better at processing data into decisions. The economy in terms of data processing: dataism.

P393: Humanity as a data-processing system has the goal of creating an even more efficient data-processing system. As soon as that is finished, the meaning of man has disappeared as well. P394: The highest good of dataism: freedom of information.

P399: What makes people valuable in dataism: they can contribute to the data stream.

P405: More and more algorithms, continuously sharpened and improved by AI and machine learning. “The basic algorithm will perhaps still be developed by humans…”

Harari passes over the fact, and many people do not know this, that the current AI and machine learning hype is not based on fundamentally new science. The only thing driving development now is computing power. The learning mechanisms, based on advanced stochastic calculations devised back in the 1990s, never saw large-scale application until now because computers lacked the power. Hardware developments (GPUs, among others) have made these algorithms usable at last.

The basis of these mechanisms is a stochastic learning process: by trying outcomes, an algorithm learns which outcomes are best. This is powerful, but one must realize that these processes do not invent new algorithms; they compute the existing ones more deeply, so that, with more data, certain things can be predicted better. The outcomes are probabilities. With more data and deeper computation, the probability of an answer can be determined more accurately. That is something fundamentally different from improving an algorithm, let alone creating one. So AI and machine learning do not yield smarter algorithms. For that, humans are still needed, and there is not even the beginning of an alternative in computer science.
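That point, learning tunes numbers inside a fixed procedure rather than inventing a new one, can be made concrete with a minimal sketch (my own illustration, not from Harari): gradient descent fitting the single parameter of y = w · x. Before and after training, the algorithm is identical; only the number w has changed.

```python
# Minimal sketch: "learning" adjusts a parameter, never the algorithm.
# We fit y = w * x to data generated with w = 3; the update rule is fixed.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0                # initial guess for the parameter
lr = 0.02              # learning rate
for _ in range(500):   # the same fixed procedure, repeated
    for x, y in data:
        error = w * x - y
        w -= lr * error * x   # gradient step on the squared error

print(round(w, 3))  # 3.0: the parameter has learned; the algorithm is unchanged
```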

What man, and his science, is good at is devising rules for processes: algorithms. The highest conceivable achievement in physics is a unified theory that holds on the large scale and on the small scale, for astronomy as well as for particle physics. An algorithm does not determine a probability; an algorithm computes an outcome exactly. A fundamental difference. Incidentally, algorithms too can show very unpredictable behavior, as chaos theory has demonstrated: a beautifully simple algorithm can exhibit very complex, sometimes even completely chaotic behavior. Wolfram, too, reported on this extensively in his “A New Kind of Science”.
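The chaos remark can be illustrated with the classic logistic map (my example, not the author’s or Wolfram’s): x_{n+1} = r · x_n · (1 − x_n). For r = 4, this one-line, fully deterministic algorithm behaves chaotically, and a tiny nudge of the starting value leads to a completely different trajectory.

```python
# The logistic map: one multiplication per step, fully deterministic,
# yet chaotic for r = 4. Two nearly identical starting points quickly
# end up on unrelated trajectories.
def logistic(x0: float, r: float = 4.0, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.2)
b = logistic(0.2000001)  # a one-in-ten-million nudge of the input
print(a, b)  # two unrelated values in [0, 1]; the tiny nudge has exploded
```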

Harari starts from the definition of three goals that man pursues and that form the driving force behind all the technological developments man has produced: immortality, happiness, and divinity. All the data streams that have been set in motion are means to achieve these goals.

Harari’s dataism holds that other intelligences could, at some point, develop these means further without any role left for man. What is missing from Harari’s picture, however, are the goals these intelligences would then pursue. He defines freedom of information as the highest good, but toward what should that freedom of information move?

P408: All that knowledge and information. “Nowadays you are powerful if you know what [which information] you can ignore.” Finally, one more remark: intelligence is not the same as sharpening an algorithm on the basis of data and stochastic processes. Harari gives no clear definition of intelligence, yet he speaks of possibly intelligent entities other than man. Can intelligence be decoupled from consciousness? And if another, non-human intelligence has consciousness, will it not succumb to the same challenges as man does in Harari’s story?

Machine, Platform, Crowd by McAfee and Brynjolfsson

Machine, platform, crowd

In Machine, Platform, Crowd, authors McAfee and Brynjolfsson describe three major developments that led to the enormous economic change we have seen over the past decades: the rapid developments in technology (machine), the powerful new layers these made possible that bring consumers and producers closer together (platforms), and the way these platforms thrive through the direct involvement of the consumer in the production and dissemination of the products and services provided through them (crowd).

How can companies like Uber, Facebook, and Amazon have become so big and influential, considering they are only thin layers? These platforms do not produce goods and have few or no assets (at least at the outset).

In the book, many aspects of these developments are brought together. The authors contrast the old world with the new: machine versus human intelligence, platform versus product, crowd versus core (core meaning something driven by an organizational structure).

McAfee and Brynjolfsson
Picture by New America

Machines have been developed that can crunch the large volumes of data the Internet era has enabled. Here we see that technological developments create their own new opportunities. The authors go into why these things are so hard to predict, and have no good answer: new technology enables things we cannot foresee. We can dream, but technology continues to surprise us.

The developments in AI have been an important factor. But why are computers better than humans at making (some) decisions? The book goes back to the literature of Kahneman and others. Kahneman has taught us that our decision-making is highly subjective and prone to errors. Fast decision-making is done by our System 1 thinking, which is impulsive and subjective. Our System 2 is more thoughtful and slow, but tends not to correct System 1 decisions so much as justify them. Our biases make us bad decision-makers. Computers can ignore all the subjective clutter in our decision-making, and of course they can go through vast piles of data very quickly.

Kahneman

McAfee and Brynjolfsson also show that if an AI is fed "biased" data, the computer will make biased decisions too. But a computer can be corrected easily, while for humans that is a lot more difficult.

In the end, the computer is better at doing specific things. (Worst of all are HiPPO-based decisions: the Highest Paid Person's Opinion, a problem common in organisations with narcissistic leaders.) AI is increasingly efficient at making decisions for "narrow" problems. Scientists, however, indicate that Artificial General Intelligence (AGI) is a stage we are not even getting close to.

The authors do not go into the hype created around AGI. People like Harari in Homo Deus write extensive and interesting perspectives on what the world may become when AI takes over, but these are, I believe, not based on realistic views of the state of AI, or even of what AI might bring us in the future.

McAfee and Brynjolfsson do not elaborate on this humbling perspective. They even ignore it later on, where they describe their belief that, given enough data, engineering knowledge, and requirements, computers will be able to figure out novel ways to do things. This statement remains unsubstantiated and even contradicts their earlier statements about AGI from an MIT scientist. It also contradicts the Polanyi paradox: we know more than we can tell. That engineering knowledge may very well remain buried in human brain mass.

Finally, to end this tangent, the claim itself seems somewhat circular. If I rephrase the statement: if we know what to do, how to do it, and have the right inputs, we can program a computer to do it. Well, of course, I would say, because that is pretty much the definition of automation.

So how come we see this rise of AI technology now? McAfee and Brynjolfsson summarize:

  • The availability of computing power. The power of CPUs and especially GPUs has reached a level that enabled and boosted the usability of neural networks.
  • The drastically decreased cost of computing.
  • The availability of large amounts of data.

When will robots be used, and when humans? Robots for Dull, Dirty, Dangerous work (DDD), and/or where Dear (expensive) resources are used. But where coordination, teamwork, problem solving, or very fine hand, foot, or sensory work is needed, computers and robots are not good. Creative and social jobs are safe from robotisation.

Platforms have appeared that killed or diminished existing, often large, industries. Where products become digital, the facts that they are free (zero cost to copy) and perfect (no loss of quality when copying) have radically changed economies. Two ways are left to make money with these products:

  • Unbundle products – like iTunes sells songs instead of albums.
  • Rebundle products – like Spotify creates subscriptions instead of selling albums/songs.

Complements increase the sales of goods, like apps increase the sales of iPhones. Free products can be bundled to make money out of them:

  • Freemium products
  • Put ads in free products
  • Add customer service (open source products)
  • Provide a public service (for public organisations)
  • Pairing with products

For platforms, curation of products and reputation systems become crucial to filter products and make them findable for clients. Characteristics of successful platforms:

  • Be early – attract a crowd before others do
  • Use the economics of complementary products
  • Open up the platform
  • Guarantee experience through curation and reputation-like mechanisms

Online-to-offline platforms have emerged. These bring together products and consumers in a market that optimises asset utilisation. In a two-sided market, the demand side wants low prices from multiple suppliers, and suppliers want their products to reach as many consumers as possible. Both sides want economies of scale. If a product is undifferentiated, prices will come down; such products are vulnerable to platform disruption. Less vulnerable are complex services and markets with few participants.

How to make successful use of crowd-sourced information?

  • Make information findable and organise it
  • Curate out bad content

Crowd-sourced platforms can only be successful when:

  • They are open
  • Everyone can contribute (no credentials needed)
  • Contributions can be verified and reversed (prevent destruction of the asset)
  • They are self organising (distributed trust)
  • They have a geeky leadership

The volume of the crowd knows more than a few experts. Crowd beats core. The core nowadays uses the crowd:

  • To get things done (Upwork)
  • For finding a resource
  • For market research
  • To acquire new customers
  • For acquiring innovation
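
The statistical intuition behind "crowd beats core" is that independent errors average out. A toy simulation (my illustration, not from the book; the numbers are arbitrary assumptions) makes this concrete: thousands of wildly inaccurate guesses, averaged, typically land closer to the truth than one well-informed expert.

```python
import random

random.seed(42)
TRUE_VALUE = 1000.0  # e.g. the number of jelly beans in a jar

# Each crowd member guesses with a large individual error (std dev 300)...
crowd = [TRUE_VALUE + random.gauss(0, 300) for _ in range(10_000)]
crowd_estimate = sum(crowd) / len(crowd)

# ...while a single "expert" has a much smaller, but nonzero, error (std dev 50).
expert_estimate = TRUE_VALUE + random.gauss(0, 50)

# The crowd's averaged error shrinks roughly as sigma / sqrt(n),
# here about 300 / 100 = 3, far below the expert's typical error.
print(f"crowd error:  {abs(crowd_estimate - TRUE_VALUE):.1f}")
print(f"expert error: {abs(expert_estimate - TRUE_VALUE):.1f}")
```

This only works when the guesses are independent and unbiased; a crowd that shares the same bias averages to the same wrong answer, which echoes the book's point about biased training data.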

Distrust in organisations leads to a wish for Decentralisation of Everything. But Coase's "The Nature of the Firm" describes why organisations exist and why there is always a place for them.

The cost of linking parts of the supply chain is higher when it needs to be done with different players all the time. In an organisation that handles larger parts of the supply chain, cheap communication drives down costs. More importantly, contracts are never complete.

There is always something called Residual Rights of Control over assets. The concept is not further elaborated, but in a distributed model the ownership of produced assets poses a problem: who owns the rights over the assets? This incompleteness drives the construction of firms.

Firms drive group work and management:

  • To coordinate more complex work: transmission belts for coordination and organisational problem solving
  • Human/social skills
  • People want to work together
  • Best way to get things done

They end with the question: what will we do with all that technology? That is the question we should answer, not: what will technology do to us?

Apply technology to solve real-world problems – in a combination of technology, humans, and other resources.

Sloppy Lifebox print, excellent read from Rudy Rucker


Annoying: pages 237 through 240 are shuffled in my print of The Lifebox, the Seashell and the Soul. Though not entirely surprising, the crumbly paper was an indicator of a sloppy edition. Maybe it is a collector’s item now. Mail me and you can have it.

I bought it about ten years ago, and recently reread Rudy Rucker's The Lifebox, the Seashell and the Soul. It is a very interesting book, although it smells odd here and there.

Rucker writes about Alan Turing that Turing 'apparently was given to bringing home sexual partners he met in the streets.' What does he mean by that phrasing? For historical facts on Turing, I prefer to rely on Turing's biographer Andrew Hodges, who wrote the respectable biography Alan Turing: The Enigma. Hodges pictures Turing as naive in confessing his homosexuality (for that time), but also describes him as rather restrained about getting involved in sexual relationships. What I am sure I haven't read anywhere is that Turing would go scouring the streets looking for 'sexual partners' (sexual partners – is that really correct English?).

Also, the idolizing references to Stephen Wolfram's A New Kind of Science are unnecessary; Wolfram generously takes care of that himself.

Also, Rucker's habit of regularly quoting his own science-fiction stories to illustrate his theories begins to annoy at some point. I understand his frame of reference, but this way of providing 'proof' is not very scientific.

But, as said, the book offers a number of very interesting ideas and visions. I realize I sound so negative, but I really enjoyed the book.

Rucker confirms that Artificial Intelligence – the discipline in Computer Science – has not achieved a lot. After my personal introduction to scientific AI, I became very suspicious. I found it odd that AI was dealing with decision rules, (fuzzy) logic, and the like, and used that awful programming language Prolog. Wasn't there anything better to focus on in Artificial Intelligence? Cognitive computing is hot these days, but is it really Artificial Intelligence? A step forward, probably.

Rucker describes the lifebox: a device that captures every aspect of your life. I'm not sure if he invented the concept, but it is the first time I have seen it described in such a realistic, predictive way.

This book has become history. The lifebox is here. People like Cathal Gurrin walk around with cameras and devices recording everything they do all day.