I spend a lot of time talking with business executives about AI. One common challenge is determining what these systems are actually capable of, for two primary reasons:
In the face of both of these challenges it's worth pointing out something important: AI can do things that even the smartest person cannot do unassisted, and AI can't do things that a human toddler could tell you (shoes go on your feet, and so on).
So how should we think about AI?
From an anthropological view, AI is less like a creature with will and more like a reflective environment: an intelligent echo chamber that only “lives” when walked into. A mech suit, if you will.
If AI is reflective, then the interesting study isn’t the machine itself but what is possible to achieve with it. (And if you are interested in a more anthropological analysis, you can find that here.)
We tend to talk about AI in sweeping terms: how it will change industries, disrupt jobs, shift economies. But right now, businesses should be focused on what can be accomplished today and how they can best position their companies to accomplish it. I believe strongly that CEOs like Marc Benioff, who are firing humans to make room for AI, are doing exactly the wrong thing, both ethically and from a company-value standpoint. If AI functions like a mech suit, why on earth would you fire all your potential pilots?
Taken at scale, the push to eliminate human labor in favor of systems that aren't capable of operating independently is an existential risk to humanity. It is the snake of capitalism eating its own tail. But even through a single-company lens, it's not smart business.
Train your pilots, and get them fitted mech suits. This is how we win.