Their argument is not sound, but it is informative to pay attention to what they consider "evidence" for AGI. It is a nice instance of a problem that seems peculiar to AI: the field tries to define both its target phenomenon and the measure of progress towards it.
"
We assume, as we think Turing would have done, that humans have general intelligence. [...]
A common informal definition of general intelligence, and the starting point of our discussions, is a system that can do almost all cognitive tasks that a human can do.
"
So that's: "We assume humans have general intelligence, general intelligence is defined as what humans have."
How many experts did it take to produce this?