GPT-3, Artificial Intelligence, and what they are up to

This is an issue of my newsletter focusing on the psychological and technical aspects of the Internet, particularly remote work, the online economy, and cognitive load.

Hey!

The summer has crept up on us during the pandemic, and I hope you get a chance to disconnect and enjoy nature while socially distancing.
Yesterday, I managed to do a little wakeboarding, and I love this sport so much! It’s like riding a snowboard under a lift on a lake. You get to ride, and you get wet a lot.

In this week’s newsletter, I am going to dive deep into this thing called GPT-3.

What’s the deal with GPT-3?

In July 2020, OpenAI opened private beta API access to their newest machine learning model: GPT-3. It is a text prediction tool trained on pretty much the entirety of the Internet. You give it a sample, and it suggests text to complete that sample.
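To make this concrete, here is a minimal sketch of what a request to GPT-3 looks like, assuming access to the private beta and the openai Python package of that era; the API key placeholder, prompt, and parameter values are purely illustrative.

```python
# Minimal sketch of a GPT-3 completion request (assumes private beta access
# and the "openai" Python package available at the time).
import openai

openai.api_key = "YOUR_API_KEY"  # key issued to beta participants

response = openai.Completion.create(
    engine="davinci",     # the largest GPT-3 engine exposed in the beta
    prompt="The Internet has changed the way we work because",  # your sample text
    max_tokens=60,        # how much text to generate after the prompt
    temperature=0.7,      # higher values give more adventurous completions
)

# The model returns a continuation of your prompt.
print(response.choices[0].text)
```

That is the whole interface: you hand it the beginning of a text, and it hands you back a plausible continuation.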

„Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model.”

A quote that sounds impressive, but tells you nothing.

Since this model operates at an unparalleled scale, it delivers extremely good results. My Twitter feed filled up with people praising its performance and prophesying the end of human labor. But as Forbes points out, it has its limitations.

A “Private Beta” means that a select group of individuals got access and started playing with it.

Q: Is my job in danger?

Quite possibly. If your job consists of following the same pattern in a highly specialized task, it has been in danger for a while now. Legal work, banking, uncreative writing (such as producing listicles for clicks), and copy-paste coding are going to be hit pretty hard in the coming years.

As I wrote in „How to protect your job from automation”, the more „fuzzy” your job definition is, the more it needs a human in the loop. The safest careers are going to be the ones that don’t follow a path. Those that sound unsafe when you describe them to your grandparents.

If you want to not only survive but also thrive – think of GPT-3 (and its unstoppable successors) as collaborators. They can be your sounding board; they can do the tedious research and get the obvious ideas out of the way, so you can focus on that deep, human insight. That elusive spark that makes humans different.

If such a thing exists.

General Artificial Intelligence

The pop-culture take on Artificial Intelligence assumes it will take the form of what is called a „General Intelligence” – it will be conscious and good at anything.

It will be like a (benevolent/evil) human with infinite cognitive power, and it will do with us as it pleases.

I do not believe General Artificial Intelligence is possible in machines, because I do not believe General Intelligence is possible in humans.

Humans are collections of algorithms interacting with each other, much like machines are. 

The „General Intelligence” concept started with Charles Spearman’s research in the early 20th century on what he called the „g factor”. Spearman was an army captain-turned-psychologist searching for an inherent quality that would predict the success of recruits. The idea was to fast-track the careers of the more capable army men.

To this day, the working definition of intelligence is the result of an IQ test. The „domain” of General Intelligence is still restricted to a narrow set of tasks, because using that raw cognitive horsepower in the real world requires specialized algorithms and mental models.

Humans are not special snowflakes, and the type of intelligence that can rival them is already here. It just needs more training. The biggest threat is the same as it is in humans – who will do that training? What agenda will the „parents” of this General Intelligence have? What scars will they leave on it? How will it cope?

OpenAI

OpenAI is an even more interesting story than GPT-3 itself. It was founded by Elon Musk (no introduction needed), Sam Altman (previously president of Y Combinator, the most successful startup incubator in existence), and other experts, with a mission to “ensure that artificial general intelligence benefits all of humanity”. I highly recommend reading their charter document.

„Our primary fiduciary duty is to humanity.”

Said OpenAI, but no investment banker ever.

In a fascinating 2015 interview, Sam Altman pitched investors on a company with no monetization plans. Instead, his vision was to ask the AI to give the founders investing advice once it is capable enough. Even then, the returns would be capped at 100x, because the AI could “maybe capture the light cone of all future value in the universe, and that’s for sure not okay for one group of investors to have.”
