This piece was published in Design World’s 10th Anniversary special issue for October 2016.
Is technology our friend or foe? Is it here to save us or to help finish us off? The questions seem urgent given growing anxiety over some recent directions in technological development, particularly the rise of automation.
Automation, in the form of both robotics and artificial intelligence (AI), along with the increasing decision-making power of these systems, has the potential to have a disruptive impact on society. And certainly the situation looks grim. With a growing world population and ever-increasing automation, the trend is toward fewer jobs and tasks for humans to do and more for machines and automation.
So if technology won’t save us, what will? A conscious decision to think and act differently, to adjust our values to align with common human and global goals, chief of which is survival. The problem isn’t a lack of technology but our all-too-human nature: favoring shortsighted interests, satisfying immediate needs, failing to think long term. But also our excessive pride, greed, pettiness, and so on through the long catalog of human folly.
As for AI, there is a budding interest in its long-term social and economic consequences, such as the impact on jobs for humans. In fact, some of Silicon Valley’s tech elites are even proposing something like a guaranteed basic income for everyone as a viable solution, an idea also championed by R. Buckminster Fuller back in the 1960s.
What’s more, tech giants Microsoft, Google, Amazon and others have recently formed a group to address ethical issues raised by AI. The goal is to work toward a standard of ethics for the creation of artificial intelligence based on the potential impact of AI on jobs, transportation, and warfare. Such efforts recognize the need for a balance between technological development and our human limitations.
To make matters worse, if authors like Robert Gordon are correct, our best technological developments (and thus our greatest period of sustained economic growth and expansion) may be well behind us, never to recur. In his recent book The Rise and Fall of American Growth, Gordon argues that the data point to just such a conclusion. His research suggests that we should not expect growth rates to match those of the unique moment of history he describes (1870 to 1970), and that future technological developments may not be anywhere near as revolutionary as the ones that occurred in this time frame, which built the foundation for our present-day world. If true, this will make social and political unrest all the more likely in the future.
Our basic problem is this: technology may change, rapidly and in surprising and unforeseen ways, but the humans who use it haven’t changed all that much over the years. Recognizing this basic fact protects against two illusions: technological utopianism on the one hand (a blind trust that technology is always wonderful and beneficial, with little to no thought given to consequences) and psychological fatalism on the other (the view that human nature is unchanging and that we are therefore doomed to repeat the same mistakes over and over again).
The broader point may simply be to not lose sight of the human. To not give ourselves over to fascination with technology. If we’re to remain human, we must be in charge of our technological development, not the other way around. And the way to do this is to plan and design and act with broader human and societal values in mind.
Our future depends on it.
Have a look at what the other Design World editors had to say:
Paul Heney on Looking to the future of AI… in 2026.
Leslie Langnau on Can you think differently?
Lee Teschler on how we’ll see robots building spacecraft in orbit.
Lisa Eitel on the coming age of driverless cars.
Mary Gannon on the future of STEM education.