Kissinger on Artificial Intelligence
Now that’s a title you don’t expect to hear these days. I mean, sometimes I forget Henry Kissinger is still alive (no offense, Hank) — let’s face it, it seems like the guy has been around forever. So imagine my surprise when I came across his op-ed “How the Enlightenment Ends” in the June 2018 issue of The Atlantic. Yet despite his self-avowed lack of technological competency, Kissinger crafts a thoughtful (and sobering) warning of the dire consequences should we fail to protect the human cognitive condition in the coming decades of artificial intelligence’s (AI) ascension.
Like many scribes on the topic of AI, Kissinger cites the creation of the AI program AlphaGo, and that machine’s ability to beat master players of the strategy game Go, as the seminal moment when the awe-inspiring powers of AI hit the world stage. Actually, I’ll be way more impressed by AlphaGo when it’s able to out-“odds make” the teams in Las Vegas who each week set the lines on the array of college and professional sports events. But I digress.
A number of Kissinger’s statements are worth republishing here — along with a dumbed-down (yep, I’m a master at dumbing things down) snapshot from my noggin. Here we go:
“What would be the impact on history of self-learning machines — machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding?” This doesn’t sound good. Actually, it sounds in many ways similar to the narrative coming out of Washington, D.C. these days.
On the notion that our technological revolution is upending our current world order, leading to an outcome in which the “…culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.” The technology is too complicated for politicians, philosophers, and ethicists (to the extent the latter still exist), and scientists and computer engineers are scarcely outfitted with the tools to provide balanced oversight of where AI technology may otherwise be left to unfold.
“The internet’s purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data become regnant.” Hold on, don’t open a new browser tab, I looked up “regnant” for you — it means “reigning; ruling; dominant.” And yes, data does reign supreme in many corners of Silicon Valley, but I’m not ready to give up on the human brain’s ability to really think.
“Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning.” Kissinger is doubling down here — basically calling us all out as number-crunching data fiends who have no ability to really think. Like I said before, I’m not throwing in the towel just yet, but I do think leaders need to do more to help our team members see the proverbial forest for the trees. Trees are data. The forest is how we piece together ecosystems and strategies — and how we as organizations can employ and execute strategies to navigate complex, ever-changing ecosystems.
On the overload of social media and information in general: “All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity.” Henry has a point here. Left to our own devices (pun absolutely intended), we drown in the chatter of inane crap. After our brain has been singed by that daily dose, little time is left to truly be with ourselves and our thoughts — to actually be bored enough to let our brains do real work (which is, as Manoush Zomorodi argues in her book “Bored and Brilliant: How Spacing Out Can Unlock Your Most Productive and Creative Self”, central to generating real cognitive breakthroughs).
“Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.” Not buying this one, Henry. This is, on a tactical level, a “prioritization problem,” and on a meta level, a “leadership problem.” Politicians who truly choose to lead, set a vision, and prioritize how they spend their time and work with stakeholders to achieve that vision can (and do) block out what Kissinger calls “niche pressures.” Those who choose to play to every buzz or notification that hits their iPhone will feel busy and productive, yet will realize they’ve truly accomplished virtually nothing when they come up for re-election.
In defining AI vs. “automation”: “Automation deals with means…AI, by contrast, deals with ends; it establishes its own objectives.” Herein may lie the most critical intersection for AI machines and human beings. Indeed, shouldn’t it be humans who define the “ends” and “objectives” — not the machines? Why resign ourselves to a world where AI defines the end state? Assuming a future state that’s defined by AI machines by necessity forfeits the most important contribution (and leverage) humans can bring to an AI future: the definition of what that future ought to look like.
From here Kissinger posits three dire outcomes from a world dominated by AI:
“First, that AI may achieve unintended results.” Again, who defines the desired end state — the machine or human? It’s unclear why Kissinger seems so resigned to a future in which AI machines simply overrun the human brain, will, and ability to define what is “right and wrong” when it comes to “results”.
“Second, that in achieving intended goals, AI may change human thought processes and human values.” Okay, at least here we have Henry believing that AI and humans can co-exist to deliver “intended goals” — that’s progress. And haven’t human thought processes and human values changed over the span that humans have walked the earth? I believe so (insert your favorite historical practice or belief that, thankfully, no longer exists). Once more, this point seems a bit too much in the fait accompli camp for my taste — I prefer to think of a future in which AI machines and humans evolve together, and in that evolution human processes and values improve dramatically as well.
“Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions.” Kissinger’s point here is intriguing. In the extreme case — the one in which not a single human is capable of decoding how an AI machine reached its conclusion — this might be disconcerting. However, I prefer to view Kissinger’s point here in two shades of brighter light. First, if our machines get to the intended outcome and our human values remain intact, what’s the downside there? For example, I don’t really know the rationale for every micro decision my surgeon makes when he operates on me, but I trust in his ability to get the right result and to do so with the utmost attention to human values (i.e., keeping me alive). My other thought here is built around optimism — the optimism that human brains can, and will, be able to decode the rationale behind the results our machines generate. This will require intentional effort and oversight of our AI wonders, but I’m confident we’ll be able to deduce the “how” and “why” of our machines.
While much of Kissinger’s assessment of AI’s impact on our future selves does indeed smack of pessimism, there’s a hint of optimism in one of his final recommendations: “AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts.” It’s a very fair point. Concealed in this sentence, one of the last in Henry’s op-ed, is actually a powerful plea. What Kissinger is really asking is that brains like his be included in the construction of our future alongside the tech brains that will be closest to the AI machines.
Originally published on Medium on December 9, 2018. This Substack version is maintained as the canonical archive.