"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning.
"And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."
In the New York Times article, Dr Hinton also warned about "bad actors" who would try to use AI for "bad things".
"This is just a kind of worst-case scenario, kind of a nightmare scenario," he added to the BBC.
"You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals."
The scientist warned that this eventually might "create sub-goals like 'I need to get more power'".
He added: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have.
"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.
"And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."