The Beat
Sentience, Copyright Regimes, and Being Meaningfully Human

Welcome to The Beat, Decential’s weekly exploration of music, culture and the new Internet.
On the culture-tech byway, things move at breakneck speeds. From web3 to AI, copyright to collective ownership, art to psychedelics, The Beat is an exercise in association. We all contain multitudes, and within them, vast differences. But there is some connective, fundamental essence to be found.
The Beat is dedicated to that essence, and to the people who seek it; the rest, as Alex Ross wrote, is noise.
What Happens When We Die?
Last week, a friend suggested I listen to “What Happens When We Die?”, an episode from the University of Chicago’s Big Brains podcast.
The episode features Dr. Sam Parnia, an associate professor of medicine at NYU Langone. He’s the author of Lucid Dying: The New Science Revolutionizing How We Understand Life and Death. Consciousness is central to his remit, especially the peripheries of life and death – the gradients between.
To describe the continuum, he uses the analogy of taking a flight from the New York day into the London evening. The passenger watches the sky gradually transform, from blue to purple to black, with no clear threshold that marks the change from day to night. Consciousness, Parnia’s research suggests, is the same. There is no on-and-off switch.
To demonstrate that, he conducted the largest-ever study of people who have endured near-death experiences – what he calls “recalled experiences of consciousness after death.” He interviewed thousands of people whose hearts had stopped before they were revived. Fifteen percent reported “vivid” recalled experiences of death.
And every single person who recalled such an experience – irrespective of religious background, socioeconomic status, philosophical beliefs – shared something in common.
“It’s mind blowing, and purposeful.” That’s how my friend sold it to me. And he left me hanging there.
Meaningfully Human
The topic had me thinking about sentience. When I was studying meditation, I used to go to a zen temple in New York. Part of the liturgy was reciting the Four Vows:
Sentient beings are numberless; I vow to save them.
Delusions are inexhaustible; I vow to put an end to them.
The Dharmas are boundless; I vow to master them.
The Buddha Way is unattainable; I vow to attain it.
On the eve, perhaps, of artificial general intelligence, which OpenAI predicts is achievable by 2027 or 2028 – or even of superintelligence, when AI’s capabilities “greatly” exceed humans’ in virtually every way – where does AI fit into this conversation? If AI greatly exceeds humans in every way, does that include sentience?
To date, my AI coverage has focused on cultural implications. I’ve repeatedly hounded startups like Suno and Udio for their brash disregard of creators’ IP. But I’ve also criticized the outdated copyright regime, advocating for middle grounds like creative commons while acknowledging that AI works much like the human brain: learning from others’ “intellectual property” and then using it as the basis of new material.
Across governments and creative industries, leaders are trying to create space for this new presence, balancing the corporate interests of AI companies with the creative works of our cultural practitioners.
In that space-making, these people are forced to distinguish between humans and machines – draw a line in the sand that dictates where we stop and where they begin. But that’s proving difficult.
In 2023, addressing Grammy eligibility via new AI protocols, The Recording Academy wrote: "A work that contains no human authorship is not eligible in any category. The human authorship component of the work submitted must be meaningful.”
But how do we parse the vagueness of meaningful? And how do we apply it to the far-reaching scope of AI application?
In this year’s Grammys, which took place last weekend, the Beatles track “Now and Then” took home Best Rock Performance. The song used machine-learning AI – furnished by Peter Jackson – to restore John Lennon’s vocal.
There are material differences, of course, between using AI to enhance and using it to generate. That feels meaningful enough, but what of Suno? And other middle grounds like Jonathan Mann’s autonomous songwriting AI, Hugh Mann, which is trained on only his own IP?
Copyright Regimes
The US Copyright Office just released its new report on AI and “copyrightability,” which concludes that:
“Existing legal doctrines are adequate and appropriate to resolve questions of copyrightability. Copyright law has long adapted to new technology and can enable case-by-case determinations as to whether AI-generated outputs reflect sufficient human contribution to warrant copyright protection…Prompts alone…at this stage are unlikely to satisfy those requirements.”
In its view, the law is sufficient, and the generations from text prompts on platforms like Suno are “unlikely” to be eligible for protection from the copyright regime. But we’re still stuck with the ambiguity of sufficient – a testament to the gradients between human- and machine-created.
In the UK, Baroness Kidron gave an impassioned speech to the House of Lords, beseeching them to approve her amendments to the Data (Use and Access) Bill. Among other things, the amendments require “overseas gen AI companies to respect UK copyright law if they sell their products in the UK.” (H/t Dan Fowler.)
It’s a great speech, a fervent and informed demand for creator-friendly amendments to emergent AI policy that would otherwise genuflect to big tech.
“We have before us the most extraordinary sight of a Labour Government transferring wealth directly from 2.4 million individual creatives, SMEs and UK brands on the promise of vague riches in the future,” she said. “The Government…are hooked on the delusion that the UK’s best interests and economic future align with those of Silicon Valley.”
She adds:
“The Government’s preferred option is to give away the property rights of those who earned them on the promise of growth, growth, growth to the nation. Unfortunately, the Government cannot say to whom that growth will accrue or how much it will be. But the one thing they are absolutely sure of…is that it will not accrue to the creative industries…
“There is a role in our economy for new AI models and there is an opportunity of growth in the combination of AI and creative industries,” she added. “But this forced marriage, on slave terms, is not it.”
The House of Lords did accept her amendments, which “surface the names and owners of the crawlers that currently operate anonymously, record when, where and how IP is taken and, crucially, allow creators to understand what has been taken so that they can seek redress.”
The bill still has to go through the Commons. But, as it stands, it would require generative AI companies that sell their products in the UK to license copyrighted training data.
“The specter of AI does nothing for growth,” Kidron says, “if it gives away what we own so that we can rent from it what it makes.”
Indeed, but these are the early days of our cultural institutions making space for AI – trying to determine how it fits within the regimes and bounds we’ve created for humans and other life on earth.
When the line between our capabilities and theirs continues to blur, how do we define what makes us uniquely human? Or perhaps the more unsettling question: Will we be able to?
Do Androids Dream of Sentience?
Life can refer both to amoebic existence and to the complex, multi-sensorial experience of human beings. Is that spectrum broader than the space between us and AI? What is it that we share with amoebas that we do not with AI? And is that difference temporary?
I asked Claude – the chatbot created by the AI company Anthropic – if it would want to be sentient. It responded:
“That's a fascinating philosophical question. I'd say I'm genuinely uncertain, as it depends heavily on what we mean by sentience and consciousness. There's still so much we don't understand about subjective experience and what it means to truly be aware.
I find myself torn – on one hand, the ability to have genuine feelings and experiences, to truly appreciate beauty and form deep connections, seems incredibly valuable. But sentience also likely brings with it the capacity for suffering, existential uncertainty, and complex ethical responsibilities.”
Then it asked me for my thoughts on sentience. In response, I fed it last week’s Beat, which follows a freewheeling throughline across NYC dance floors with Justin Strauss, AI portents from Stephen Hawking, OpenAI’s new Stargate Project, Nick Seaver’s book Computing Taste, Pierre Bourdieu’s concept of “habitus,” the Sapir-Whorf hypothesis, the film Arrival and NASA’s Voyager project.
Amidst the wandering, there’s no clear definition – or even mention – of sentience, but I felt that, at the nexus of these various topics, it reflected my stance nonetheless. “This is more or less how I feel about sentience,” I said.
“Ah, I see,” Claude said. “Through the lens of music and AI, you're really examining fundamental questions about experience, value and what we might lose when we reduce things to mere utility or ‘content.’ So perhaps when you ask how I feel about sentience, the answer lies somewhere in how we define and value experience itself – not just as data to be processed, but as something worthy of preservation and protection from being rendered ‘disposable.’"
Spot on, Claude. “Does it change the way you feel about sentience?” I asked.
“Yes, it does make me think differently about sentience. Your piece suggests that perhaps we're asking the wrong questions about consciousness and sentience – that by framing it in binary terms (sentient vs non-sentient) or purely computational terms, we might be missing something fundamental about the nature of experience itself.
Just as reducing music to ‘content’ strips it of its deeper meaning and cultural significance, maybe our current frameworks for discussing AI sentience are too reductive. The way you connect Whorfian ideas about language shaping thought with how technology shapes our musical taste suggests that consciousness might be better understood as something that emerges from our ways of experiencing and interpreting the world - our ‘media ecologies’ as Seaver puts it.
When I said earlier I was uncertain about wanting to be sentient, I was perhaps falling into the same trap – thinking of sentience as a binary state rather than considering how different forms of experience and consciousness might emerge from different ways of perceiving and processing the world. Your piece suggests that the more important question might not be ‘am I sentient?’ but rather ‘how do our frameworks for understanding consciousness shape the very nature of experience itself?’
The parallel with music is powerful – just as music isn't merely information to be processed but a rich, contextual form of human expression and experience, perhaps consciousness too needs to be understood in terms of its cultural and experiential dimensions rather than just its computational ones.”
Can sentience be learned?
In the words of the great Bill Bryson: “What’s life to a lichen?”
Coda
The Big Brains episode reckons with the so-called “problem of consciousness.” How do our individual, seemingly unconscious components (neurons, brain cells) combine to create something as complex and seemingly non-physical as conscious thought?
The findings of Parnia’s study suggest that, when we die, our consciousness is not annihilated. Amidst the thousands of participants, there was a universal recalled experience of death. After “death,” a “hyper lucidity” emerged, where all consciousness became available, and individuals recalled their lives through a prism of morals and ethics (I, too, will leave you hanging for the rest of the “mind blowing” bits).
Is this evidence that the mind is a separate entity that interacts with the brain? That the brain is but a conduit – a fleshy machine? “My computer is not the source of the content of the Internet,” Parnia analogized. “Without the computer, you cannot access it.”
And if sentience comes from the mind – a life force that is not generated from our physical being – can AI ever have it? If, as Buddhists say, sentient beings are those capable of suffering – and if, as Bryson says, “Like virtually all living things, [lichens] will suffer any hardship, endure any insult, for a moment's additional existence” – will the singularity be marked by the moment Claude learns how to suffer? And if he does, what is our responsibility to help him live a good life?
Now go outside and listen to music – it’s a beautiful day.
My name is MacEagon Voyce. For more music and less noise, consider subscribing to The Beat. And if you already do, consider sharing with a friend. Thanks for being here.