2023-W52 reading notes

Why AI Will Save the World

Source: https://a16z.com/ai-will-save-the-world/

This is wrong on so many levels I don't even know where to start…

General impression

The author doesn't seem to grasp the difference between AI and AGI. AGI would be capable of solving problems like curing diseases, but last time I checked, the AI we have now is only capable of remixing its training material.

Supposed AI benefits, but no cost?

Andreessen claims that in the future, every child would have an AI tutor, everyone would get an AI mentor/therapist/coach, and the economy would grow. However, he fails to give any hint as to who would pay for AI tutors for all children. Oh wait, maybe by "all" he meant "some" — children of the richest people on earth? What about those who can't afford a computer? Who would guarantee that those AI tutors wouldn't reproduce biases from their training material?

How is the economy expected to grow if so many people's jobs are replaced by AI? Teachers, coaches, therapists — they'd all need to go back to school and study something else. The author doesn't mention this topic, though.

Language of emotion

Whenever the author wants to criticize something, he appeals to emotion. Whenever he wants to encourage something, he makes it sound reasonable. So all those who promote and invest in AI are visionaries, while those who criticize AI are panicking, hysterical, and so on.

However, criticism is not panic — it's the mental activity of considering facts. The author doesn't seem to be capable of that activity.

Baptists and Bootleggers

The author fails to recognize that people are different and therefore hold many perspectives on AI. From his perspective there are mostly 'baptists', who want to use regulation to prevent AI from exterminating people, and 'bootleggers', who want to profit from that regulation.

In my opinion he just didn't invest in AI early enough to benefit from it now and that's why he states that AI development shouldn't be restricted.

Seriously flawed overview of AI risks

First of all, the author doesn't take four of these five risks seriously.

When addressing the they'd-kill-us-all risk, he claims, for example, that AI ethicist is a profession where one gets paid for being a 'doomer'. However, in the first section of the text he claims that ethics is something good and an outcome of intelligence. So ethics was good before, but now that it touches AI it's bad? Or is it just acceptable to the author that AI is biased?

Then he acknowledges that it would be hard to guarantee that AI doesn't spread hate, only to conclude that he doesn't want a "thought police" deciding what AI can or cannot generate. Of course: free speech, the sacred good of those who were never harassed because they happen to be white cis males.

When addressing the it'll-take-jobs risk, he claims AI wouldn't replace all jobs (sure, it wouldn't take his job — it would merely augment his own intelligence!). But the teachers, coaches, and therapists who wouldn't be able to work because AI would do it instead — they would lose their jobs.

However, the worst part was when he proved that he didn't understand Marx's work, claiming it has been disproved by reality and calling it a fallacy…

Finally, he suggests yet more technological pseudo-solutions to the problems with AI, because regulation would take freedom from him and give it to others — an unusual experience for him, I suppose.

Summary

While I do believe that AI could do a lot for humanity, we would first need to fix a lot of systemic issues. Without good public education, good public healthcare, and good wages, AI will only increase inequality. That is not a technological problem, so it can't be solved with technology.

Relevant reading