The traditional dynamic between philosophy and technology is one of tension—between ideas and actions, theory and practice, “why” and “how”. Philosophers debate the perfect argument to refute our existence; technologists race to redefine it. Philosophers ponder the very essence of our being; technologists want to tinker with it. Philosophers think obsessively about the meaning of technology; technologists don’t think about philosophy at all.
This tension is only increasing. The accelerating pace of innovation is forcing us to confront the big questions of philosophy, those we’ve historically kept safely tucked away in the halls of academia or in the stories of science fiction. Technology seems intent on redefining what it means to be a human or a machine. Genetic wizardry is redefining life. Machines are simulating our own cognition. Along the way, we are willingly ceding more and more of our agency to our own inventions. What does this all mean?
This is the deep end of the philosophical pool that technology is forcing us to swim in. But most big questions seem to drown there, with little chance of ever getting a definitive answer. And most questions in the deep end can seem so grandiose that we are content to leave them there, rarely connecting them to the practical challenges we face in the real world of technology.
This is how the deep end of philosophy can be deceptive. Philosophy doesn’t just intersect with technology when confronting the big theoretical questions. Even practical questions are starting to feel suspiciously philosophical. For example:
Who should be held accountable for decisions made by AI systems—the developers, the users, or the AI?
When is it okay to experiment with geo-engineering to mitigate climate change?
How should autonomous vehicles be programmed to act in scenarios where harm cannot be avoided?
These questions still sound practical. They are about existing technologies and realistic scenarios. But focusing on the practical side of questions like these makes it easy to overlook the philosophical nature that looms below the surface. Our bias is to view the theoretical and practical as two separate things.
My contention is that this is a false dichotomy. Technology blurs the distinction between theory and practice. This is why our technological discourse can often feel so superficial. When theory is disconnected from practice, our stance towards technology will necessarily be incomplete.
The fact that this is becoming increasingly obvious is a good reason why we need more philosophers engaging with technology. The time to resolve the tensions between philosophy and technology is now. It’s our only chance to create a sane relationship to technology, one that can lead to a future we still want to live in.
Step one is realizing that technological questions are just philosophical questions in disguise.
To see why, imagine that you are an oracle known for your boundless technological wisdom. Every government on Earth seeks your counsel on all technological questions. Your wisdom is so absolute that your answers are instantly accepted as universal policies.
As such a technological sage, how would you answer the practical questions above?
Perhaps you would begin by requesting the relevant data and reviewing everything we know about the question. What has been tried so far? Has anything similar worked in the real world? Have any experiments been run?
This is a fine start, but you’ll quickly discover that the data is incomplete. That’s because each of these questions contains a degree of novelty. Perhaps it’s an entirely new technology, or a new scenario where technology is being applied. In either case, you can’t rely on data alone, because the relevant data doesn’t exist yet. It’s something we haven’t encountered before.
So you’ll need something more to construct a good answer. You will need to speculate about how this novelty might play out. You’ll need to imagine how different answers might lead to entirely different worlds, and which of these worlds would be better or worse.
In other words, these questions aren’t just empirical. They are also normative. They are forcing you to consider how the world ought to be. In the act of preferring one imagined world over another, you are claiming that preferred world is a better option.
All sorts of different criteria can be used to evaluate different possible worlds. Perhaps you want to prioritize certain human values like freedom or equality, or you might think technology should maximize economic growth. Each reason represents a different theory—something that can explain why this particular world is the one we should strive to make real.
But you’re still not done! It’s not enough to justify why your preferred world is the best. You need to figure out how to turn that imagined world into our real one. You need to connect your theory to practice so it can be applied to specific scenarios.
Eventually, these practices start to reveal patterns, which can be generalized into principles. As these principles get tested by reality and refined by more practice, you realize they can be further abstracted. Soon they can cover almost every new scenario that arises.
And eventually, the governments of the earth no longer need to consult with you about each new question. Instead, they can reliably use the principles you’ve developed to guide their practices.
Congratulations! You have just created a new philosophy of technology.
This blurring of theory and practice is baked into the very idea of technology. Technology isn’t about accepting our current reality. The entire premise of technology is that innovation helps us change our reality.
In fact, one way to define technology is the art of transforming theory into practice. You first have some notion of how the world could be a better place (this is your theory). Then you build new technology in the hope of bringing that world about (this is your practice). You are defining some version of the good and then trying to make that definition real.
Of course technology is never that simple. The consequences of innovation are almost always unforeseeable. Second order effects are too complex to be predicted. But any good theory should account for the complex reality of how innovation works.
Even technologies that are simply filling economic or government needs are connected to theory. It just happens to be the default one: the theory that says our current world is the best we can do, where innovation is largely driven by the market or the state.
We can fall into the trap of assuming that this default theory is the only viable one. Without different theories competing for the mindspace of innovators, theory itself begins to disappear from the discourse. Practice becomes disconnected from theory. Engineers focus on the “how”, to the exclusion of any “why”.
This realization explains why our technological discourse can seem so dysfunctional. We either make practical demands that don’t account for the realities of our default theory (democratic accountability! more regulation!) and thus go nowhere, or we suggest reactionary new theories that lack the practical basis to ground them in reality (abolish/accelerate capitalism! crypto will save us!).
When practical proposals are disconnected from larger theories, they become reactive and shallow. They can become either too marginal or too fantastic. They are more easily captured by politics and the culture wars. They are not capable of inciting any real change because they aren’t grounded in any larger theory that would make real change possible. They are incomplete.
The entire point of an effective philosophy of technology is to ground practice in theory. The test of whether a philosophy is capable of this is simple. Can it offer prescriptive insights and suggestions, based on practices that are logically connected to its theories? Will those practices lead to a world that anyone wants to live in?
This is why we need more philosophers engaging with technology. We need to reconnect theory and practice.
To see how theory can inform practice, consider a (hypothetical) philosophy based on the following: technology should be in service of human values like agency and autonomy. As you might imagine, this theory should translate into practices that prioritize these values when making decisions about technology.
Now let’s take a common question from the current discourse to see how this might work: What should be done about the possibility that AI will lead to more economic inequality?
Many pundits fear that outsized wealth and influence will accrue to a few key AI developers and owners. At the same time, as more of the economy is automated by AI, more of the population will have nothing left to contribute. The majority of economic activity will be left in the hands of just a few AI power brokers.
To rectify this imbalance, many proposed solutions include some form of universal basic income, or UBI. The idea is that since so much wealth will accrue to the AI owners, they should be heavily taxed so governments can redistribute much of the gains back to the general population.
This may sound like an attractive proposal—free money for everyone! But consider how UBI affects human values like sovereignty and agency. If you are dependent on a government for your income, what happens if that income is taken away? What happens if ideological commitments are required in order to qualify for that income? The relationship between government and citizens dramatically changes when an income dependency is introduced.
Prioritizing human values invites us to expand the solution space. It’s a different theory that forces us to brainstorm other ideas.
Consider something like a public data coalition. Given that current AI models are entirely dependent on public datasets for their training, it seems justified that AI companies should be licensing this data from some sort of public commons.
These coalitions could require licensing to access the data, charging fees that could be distributed to citizens, accomplishing some of the same goals as a UBI. That license could also put ethical restrictions on use, or even require an equity stake in the company that includes governance rights. Now citizens are owners, incentivized to align with AI companies to use that data both ethically and profitably.
A data coalition would have its own challenges, but it’s a viable option that is worth exploring. It provides similar solutions to UBI while retaining (and even increasing) the agency and sovereignty of the general population.
It’s a good example of how a philosophy can be guided by theory to expand our practical discourse. The justifications for these practical ideas aren’t reactive, personal, or subjective. They are logically connected to the broader theory they spring from.
This should make clear the crucial role that philosophy has to play. If we want our relationship to technology to conform to our deepest aspirations, then we need theories that can capture those aspirations, and practices to help make them real.
The question then becomes: can philosophy actually do this? It’s a big ask. Philosophy will have to overcome three significant challenges to be part of the solution that technology needs.
The first challenge is developing a theory big enough to answer the hard questions that technology is forcing us to consider. Philosophy is famous for eternally wrestling with the big questions. Much of what Socrates argued about remains unresolved to this day. Perhaps this is the best that philosophy can do—maybe definitive answers aren’t as important as asking the right questions. Technology is here to disagree. We have a whole host of challenging questions that need answers, and we need them now.
The second challenge is connecting theory to practice. At some point, even the grandest theory in the universe needs to cash out in practice. Philosophers traditionally prefer to stay in the land of theory. But as we’ve seen, technology doesn’t work this way. Technology needs something more than thought experiments, where carefully constructed hypotheticals are quarantined from the messy complexity of reality.
Finally, philosophy must account for the reality of innovation. The second that technology touches reality, it takes on a life of its own in ways that we cannot anticipate or plan for. Any philosophy of technology must account for this uncertainty. There is no simple translation of theory into practice that can guarantee the world we hope to achieve.
Can philosophy overcome these challenges? Our relationship to technology depends on it.
In this way, technology is the perfect forcing function for philosophy. If philosophy wants to be a part of the solution, the time to start actively engaging with technology is now. Technology is not going to wait around for philosophers to step up.
If philosophy can’t help answer the big questions, technology is going to answer them for us, whether we like those answers or not.
This is the first in a series exploring the philosophy of technology. In the next installment, we’ll explore the different ways that philosophy can positively impact the real world of technology.