Machines are redefining what it means to be human.

When Alan Turing asked whether machines could think, his pragmatic intellectual brilliance lay in sidestepping metaphysics and focusing instead on tangible outcomes: if a machine’s behavior is indistinguishable from a human’s, treat it as intelligent. Although there are many other definitions of artificial intelligence (AI) today, Turing’s framing has proved remarkably prophetic.

Generative AI now drafts memos, summarizes meetings, writes code, tutors kids, and produces plausible images, voices, and music. For many everyday tasks, the output is good enough that the boundary between human and machine performance is as blurry as that between natural and lab-grown diamonds.

As I illustrate in my latest book, that blurriness between AI and human behavior also creates a modern identity crisis. For centuries we defined our intellectual uniqueness by comparison to animals. Today we are benchmarking ourselves against software. If systems can imitate much of what we proudly called “human,” what exactly remains distinct about us?

One vivid reminder comes from the Turing Test competitions where a human “confederate” competes alongside chatbots. As Brian Christian noted in a brilliant book, the human’s job is not to act naturally, but to act in ways a machine would not. That paradox is the new normal. As AI becomes more human-like, people feel pressure to become more deliberately human — which, ironically, means being less spontaneous and more intentional: feigning incompetence (inserting deliberate typos or grammatical errors into text created by generative AI) or imperfection (swearing, which polite and politically correct AI rarely attempts). Authenticity used to involve self-expression. Increasingly it involves self-selection.

Object Authenticity: When Reality Itself Is in Doubt

AI’s first big culture shock did not come from robot empathy. It came from realism so convincing that truth and forgery swapped places. Deepfakes now generate persuasive faces, voices, and videos. Text models can author confident nonsense. Recommendation algorithms flood us with content that feels hand-picked for our tastes, whether or not it is accurate. The result is a world where object authenticity — is this the original or a replica — is constantly in question.

We imagined that universal access to information would raise collective knowledge. In practice, abundant content makes it easier to be misinformed with great confidence. As Stephen Hawking observed, “the biggest enemy of knowledge is not ignorance, but the illusion of knowledge.” Most of us choose comfort over accuracy, which means we construct private realities that flatter our identities and punish dissent. That habit is human, not artificial, but AI scales and lubricates it.

If your assistant drafts most emails and the recipient’s assistant summarizes them, you are already halfway to AI-to-AI work. The pattern will repeat everywhere: students use AI to write essays, instructors use AI to grade; candidates use AI to optimize resumes, recruiters use AI to screen. Once behaviors become predictable, they become automatable. The risk is not only job displacement. It is personal dulling. If tools take over the interesting parts of thinking, we may unwittingly train ourselves to become standardized and replaceable.

A second-order effect follows. As synthetic media gets better, it will become harder for people to pass the reverse Turing Test — proving to other humans that they are not machines. That is already true for images. Many AI-generated faces look more “real” than real ones. Imperfection, once a liability, becomes a signature of humanity.

None of this is strictly dystopian. We could use AI to free time for analog experiences that make life richer: conversation without screens, deep work without notifications, learning for curiosity rather than credentialing. Companies are rediscovering the value of in-person time precisely because digital life flattens nuance. If we treat technology as a scaffold rather than a substitute, we can build workplaces with more creativity, more learning, and more meaning. That requires intention. Without it, the gravity well of efficiency pulls us toward a thinner version of ourselves.

How Technology Can Downgrade Us

The philosopher Martin Heidegger warned that technology encourages us to view the world — including ourselves — as resources to be optimized. That mindset rewards uniformity, predictability, and control. Social platforms make the dynamic unavoidable. They encourage us to craft a brand, chase engagement, and edit life into highlights. The result is a performative authenticity that pressures everyone to be simultaneously unique and algorithmically acceptable.

The psychological costs are visible. Anxiety, loneliness, and tribal anger correlate with heavy digital immersion. Online, disinhibition is common. Distance and anonymity reduce empathy and accountability, so people share more, attack more, and regret more. The behaviors most of us would suppress in a room full of colleagues can feel natural behind a screen. AI does not cause this. It accelerates it.

The antidote is not to “be yourself” more, especially online. It is to practice self-command. Bring your best self, not your whole self. Save your unfiltered reactions for private contexts where they will be received with care. Public spaces are not therapy rooms, and platforms are not friends.

The Vanishing Boundary Between Public and Private

Our digital exhaust offers a remarkably accurate portrait of personality, values, and future behavior. Language patterns signal traits. Likes and follows reveal preferences. Even metadata — when, where, and how we interact — is predictive. Recruiters, insurers, lenders, and partners do not need to read your diary to infer who you are. A model can do it from your footprint.

Surveillance capitalism raises familiar concerns, but the practical reality is simpler. Many people trade information for convenience every day. They know the bargain is uneven, but click accept anyway. The business challenge is trust. If employees and customers believe leaders will use data to help rather than exploit, they will cooperate. If not, they will disengage or leave.

Brain–computer interfaces, such as Elon Musk’s Neuralink, bring the boundary debate into sharper relief. If devices can read intent, nudge behavior, or translate thought into action, then privacy becomes not just a data question but a dignity question. We do not need to speculate about science fiction to prepare for this. The right principles already exist: consent, transparency, minimal collection, clear benefit, and the right to opt out without penalty.

Work, Leadership, and the Practice of Responsible Transparency

There is no question that AI represents the defining leadership challenge of our times, but what does a leader do in this environment? If you are a leader, start by redefining authenticity as responsible transparency rather than unfiltered expression. People do not trust you because you say everything you think. They trust you because you share what is necessary, admit what you do not know, and regulate yourself in service of the mission. That is the difference between psychological safety and “anything goes.” Safety invites candor, experimentation, and principled dissent. “Anything goes” invites chaos, cruelty, and fatigue.

With that, here are some practical recommendations to consider:

- Model precision and restraint. Say less, listen more, and reward others who do the same. Make it clear that ridicule, contempt, or performative outrage have no place on your team.
- Use AI as an EQ amplifier, not a mask. Let tools help you slow down before you react, choose better words, and tailor messages to real audiences. Do not outsource sincerity.
- Design for honest friction. Create rituals where teams challenge assumptions, debate trade-offs, and surface bad news without fear. Curate inputs that cut across echo chambers.
- Protect analog time. Schedule moments where laptops are closed and phones are parked. Creativity needs boredom and face-to-face nuance.
- Guard reputation deliberately. Coach people on digital self-presentation. The internet is permanent. Privacy settings help, but judgment helps more.
- Be transparent about data (and AI!). Explain what you collect and why. Share the benefits and the boundaries. Treat people’s information as you would want yours treated.
- Keep a human in the loop for human stakes. Use AI to prepare and to draft. Use humans to decide, especially where ethics, fairness, or meaning are involved.

Resisting Automation By Being More Human(e)

If you want to remain hard to automate, cultivate the qualities that algorithms find hardest to mimic: originality, taste, judgment, situational awareness, and the ability to unite people around a purpose. That does not require grand performances. It requires small daily acts of discipline. Ask better questions. Read beyond your feed. Seek disconfirming evidence. Write clearly. Show up for people when it is inconvenient. Accept that civility is not performative weakness but social infrastructure.

The uncomfortable truth is that many of us behave in ways that make automation easier. We repeat ourselves. We forward rather than think. We consume passively. We outsource our attention. If we run on rails, we should not be surprised when software replaces the train.

AI is not the end of truth, reality, or human connection. It is the end of taking them for granted. In the coming years the premium on trust will rise, the cost of laziness will grow, and the reward for restraint will compound. Leaders who treat authenticity as disciplined service to others — rather than indulgence of the self — will build teams that innovate and endure, and create organizational cultures that appeal not just to machines, but also humans.