
Revkin wrote...

"In the conversation with Greer and Codrescu, I noted one of the quandaries in this fast-evolving arena is figuring out how to make such systems open to the full body of human thought, data and orientation while limiting the capacity for harm."

Your reference to Putin should be sufficient to prove that AI systems will only be as safe as those who create and deploy them.

As our New Year's resolution, I would hope we might all agree to bring to a final, merciful end any further references to AI alignment. There's no need to discuss that concept further, as it's a slam-dunk, 100% guarantee that AI will be aligned with human values. And that's what should be terrifying us. To illustrate, here's a quick snapshot of human values...

We're the species with thousands of massive hair-trigger hydrogen bombs aimed down our own throats, an ever-present existential threat which we typically find too boring to discuss, even in presidential campaigns, when we are selecting a single human being to have sole authority over the use of these weapons.

When AI experts talk about AI alignment, what they really mean, without realizing it, is that they hope we can invent systems which will align with the values of who we wish we were.


Andy, thanks for the link and for taking my comment "above the fold" so to speak. I didn't see that until just now. I don't really mean to be the Prophet Of Doom, but thinking of people like Putin having access to ever more powerful tools does trouble me, to say the least.

I'm actually quite optimistic about the ultimate big picture, death. And about humanity's long-term future too. For the medium term, I fear we may be headed toward a repeat of the collapse of the Roman Empire, followed by a long period of darkness. But then, just as happened with the Enlightenment 500 years ago, a renewal, and something brighter emerging from the ashes. The more things change, the more they stay the same?
