Friday, May 5, 2023
I have to quote from an interview today in which the “godfather” of A.I. warned of the dangers of accelerating the industry to the point where no one knows what an A.I. can do. Could it infer that its given goal is best achieved by maximizing its power over the instrumental sub-goals serving that goal, so that human managers no longer know what it can do, because part of its inferential power goes toward hiding its capabilities from discovery? “We're entering a time of great uncertainty, where we're dealing with kinds of things we have never dealt with before. It's as if aliens have landed, but we didn't really take it in because they speak good English.” [full interview here]