Niels Bohr, famed physicist and Nobel laureate, once said that “Prediction is very difficult, especially if it’s about the future.” This puts strategists in an awkward position. Whether you are a communication, brand, design, or business strategist, your job isn’t just to understand society as it is now, but also to make educated guesses as to where it is going. Martin Weigel, Head of Planning at Wieden+Kennedy Amsterdam, wrote in his blog canalside view that one of the categories of books a planner should read is those which let “us peer into the near-future…because our task is creating new futures for our clients’ businesses.”
I have had some modest successes with such predictions. In one blog post, I foresaw a number of ways in which crowdfunding would evolve and cause disruptions in the market, including the rise of Kickstarter consultants who would specialize in launching successful funding campaigns and Hollywood’s use of the platform for mid-budget projects. In a follow-on post, I predicted the business climate that would lead to a success story like Oculus Rift’s funding, launch, and sale to Facebook. This kind of future-gazing also showed up in my work in ad school. While developing work for a mock Nikon pitch, my team and I believed that DSLRs should defend their turf by contrasting the art a high-end camera was capable of making with the ephemeral and forgettable snap-capturing of a cellphone camera. A few years later, Apple picked up the same insight, but from the opposite side, and launched the “Shot on iPhone” campaign to highlight the creative possibilities of their cellphone’s optics.
I’ve had some missteps too. In a blog post, I described an ideal world where media companies would make their content available to any online broadcaster who could pay the licensing fees. If anything, the media landscape has only grown more fractured. In another post, I underestimated how quickly competitive video gaming would grow its viewer base and influence in the West, even without the presence of large non-endemic sponsors and their revenue.
Realizing the importance of being able to predict roughly where things were headed, I began to focus on how I could improve my hit rate. I looked at my own results and thought about why I had been right when I succeeded and wrong when I failed. I studied the works of futurists from both past and present. I revisited my studies of business, economics, and history with a fresh eye for how trends in human behavior could be applied to predicting what’s to come. In essence, I studied futurism itself. Through this process, I learned quite a few things about what it takes to more effectively “peer into the future,” as Weigel said.
For starters, many amateur futurists, in particular tech journalists and media personalities, fail to consider how new technologies actually fare when they hit the market and are forced to interact with real people and institutions. As the Prussian general Helmuth von Moltke the Elder put it so succinctly: “No battle plan survives contact with the enemy.” In this context, the enemy isn’t an opponent, per se; it is the complex tangle of social norms, political institutions, corporate structures, and other aspects of the “real world” that creates roadblocks to any piece of clever design or technological leap being broadly accepted. As a result, many end up imagining futures whose existence they cannot realistically explain. Such advances don’t come from nowhere. Who pays for it? Who owns it? Who holds the liability if it fails? What are the financial or social incentives that allow it to happen? Does it have to rely on multiple companies working together or coordinating in any way? If so, what motivates them to do so? Does it require a monopoly to work? If so, how did that firm lock in its position?
Smart homes give us an opportunity to see how this works. It’s very easy to get excited by the prospect of a house where everything is connected by the Internet of Things. Imagine if your car could use GPS to tell your thermostat how far you were from home and begin heating or cooling the house based on your actual arrival time. Or imagine if your alarm clock could tell your coffee maker to start brewing. If your house had sensors that could tell who was in what room, it could raise and lower lights as people move, turn speakers on and off so each person’s music plays only in the room they are in, or have a movie someone is watching follow them from the living room entertainment center to the bedroom TV. We are already seeing the first wave of smart home accessories needed to make that dream a reality, from GE LED light bulbs that can change color on command to Nest’s smart thermostats. For now, though, each of these devices is still set directly by the user through a remote, an app, or a wall widget. The next step requires the devices to talk to each other.
Without getting into the technical details, for this to work there needs to be a shared communication language with universal commands. Unfortunately, we are seeing a move towards the exact opposite. So far, as each new company enters the space, it does so with its own siloed technologies and approaches. For example, infrared, Bluetooth, and Wi-Fi are all being used to control current smart home offerings. My bet is that this fracturing will continue as many large manufacturers aim for a product ecosystem model, like Apple has for computers, phones, and tablets. The benefits of such a product ecosystem will be too large to ignore. Even across industries, such as a car talking to an appliance, it is quite possible that companies will strike exclusivity deals with one another to cross-promote: e.g. Honeywell pays Ford to make it so its cars only talk to Honeywell thermostats. As a result, while a universal standard is good for consumers, and might even be better for the strength of the market as a whole, individuals in the system are not incentivized to go for a universal standard, since it would mean giving up a potentially large competitive advantage.
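To make the "shared communication language" idea concrete, here is a minimal sketch in Python of what a universal command message might look like. Everything here is hypothetical (the device names, the `set_target_temp` verb, the JSON framing); the point is that every vendor would have to agree on the same fields and vocabulary for a car to talk to a thermostat:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical universal smart-home command message. For a thermostat to
# understand a message sent by a car, every manufacturer would need to
# agree on these fields and on a shared vocabulary of actions.
@dataclass
class Command:
    sender: str   # e.g. "car.gps"
    target: str   # e.g. "home.thermostat"
    action: str   # a verb drawn from the agreed-upon vocabulary
    payload: dict # action-specific parameters

def encode(cmd: Command) -> str:
    """Serialize a command to JSON for transport (Wi-Fi, Bluetooth, etc.)."""
    return json.dumps(asdict(cmd))

def decode(raw: str) -> Command:
    """Parse an incoming message back into a Command."""
    return Command(**json.loads(raw))

# A car 20 minutes from home tells the thermostat to start pre-heating.
msg = encode(Command("car.gps", "home.thermostat",
                     "set_target_temp", {"celsius": 21, "eta_minutes": 20}))
```

The hard part, of course, is not the code; it is getting Honeywell, Ford, GE, and Nest to all agree on the schema, which is exactly the incentive problem described above.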
It is possible that government could provide a solution, by establishing a protocol and forcing companies to conform. This is unlikely to come from the United States, which has typically taken a laissez-faire attitude towards such industry standards, except when safety is an issue. For better or for worse, the US government does not consider it important to ensure all computers have USB 3.0 or that 16:9 be the aspect ratio for all high-definition TV broadcasts. Even electrical codes, which gave the US its standard plug shape among other things, were created by a private trade association and adopted by cities and states through their building codes, not through an act of the US federal government. European governments are more willing to get involved this way, as the EU did support 16:9 with something called the 16:9 Action Plan and an investment of €228 million. That kind of backing, from a large enough market, could be the tipping point for a standard. On the one hand, it is cheaper to make all products the same rather than customizing for each local economy, so if only one region sets a standard, in this global economy it could become everyone’s standard. On the other hand, there are plenty of examples where that doesn’t happen. European eco-diesel cars don’t conform to US emissions standards. NTSC, PAL, and SECAM all exist as television broadcast systems in different parts of the world.
Government, of course, can also be a source of roadblocks to the vision futurists imagine. Often people will consider how government funded R&D can push us forward or how tax incentives can encourage certain behaviors, but rarely do they consider the ways in which ingrained rules, regulations, and industry protections keep rapid development of future technologies at bay.
For example, the FDA has a very stringent set of regulations to ensure the safety of most ingestible things on the market, including prescription drugs. There is nothing wrong with this in theory. Making sure every prescription drug passes a rigorous testing protocol to identify and label side effects, prove effectiveness against a placebo, and more, prevents both doctors from mistakenly prescribing dangerous drugs to their patients and snake-oil salesmen from selling miracle solutions that don’t work. It is also very expensive. Pharmaceutical companies invest upwards of ten years and $2 billion in the process required to take an experimental drug through to full approval, according to a report from the US Department of Health and Human Services. Many in the medical profession believe we sit on the cusp of a revolution in personalized medicine. In the near future we could be using genetic information to perfectly tailor drug regimens for each patient. However, this can only happen if regulations are rethought. If each tailored solution has to be tested as its own drug, there is no way personalized medicine can ever be financially viable.
Another way government gets involved is through protectionist legislation. It is very common for lobbyists who represent established businesses to try to maintain the status quo. Direct sales are increasingly common in all business-to-consumer industries. One of the last holdouts is car dealerships. Tesla tried to implement a direct sale system starting with its Model S line, only to have car dealerships lobby to ban sales without a showroom. These laws now exist in Texas, Maryland, and Arizona and are currently in dispute in at least fourteen other states, according to CNN Money. Similarly, taxi drivers have been protesting, lobbying, and harassing rideshare drivers in an effort to force Uber and Lyft out of existence. Taxi medallions had created a non-competitive oligopoly on ride services in cities which, among many other market distortions, bred complacency among taxi dispatchers about quality of service, pricing structures, and technological adaptation. Despite the obvious advantages of a taxi-summoning smartphone app, such a service didn’t really exist until Uber and Lyft came out of left field and disrupted the industry. Uber and Lyft have been rewarded with legal battles that threaten their ability to survive.
Bureaucratic inertia and deference to the status quo don’t exist only in government, of course. They happen in the private sector as well. Often this manifests in the innate conservatism employees have with their routines and businesses have with their processes. Even as Kanban boards and just-in-time manufacturing were allowing Japanese car makers to steal huge market share from Detroit, American manufacturers resisted adopting the newer, more efficient methods from across the Pacific. This inertia can delay a predicted breakthrough for many years and is often only overcome when the incumbent is pressured by a disruptive outsider. More on the framework of this dynamic can be found in Clayton M. Christensen’s The Innovator’s Dilemma.
Businesses may also be resistant to change because the financials don’t make sense. It can be expensive to make the changes, in both the acquisition costs of the technology and the labor costs of retraining employees, and most businesses make such decisions by considering the return on investment. For a current example, robotic kitchens in fast food restaurants present an interesting case study. McDonald’s and others have threatened to shift their businesses over to robotic food preparation and cooking, but so far no one has pulled the trigger. Only point-of-sale systems in certain locations, usually airports, have been switched over to a person-less experience. This is because McDonald’s kitchens are already highly optimized for human workers. The layouts of the kitchens and the semi-autonomous cooking tools have all been specifically and carefully designed to eke every last bit of advantage out of their workers. A robotic kitchen would require tearing all of that out and replacing it with not one but multiple burger-making machines that can handle many different configurations of sandwich, all at great expense. Just how much does each McDonald’s or Burger King franchise save by switching to the new system, weighed against the probable multi-million dollar cost of implementing it?
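The franchise owner's question is, at heart, a payback-period calculation. Here is a back-of-envelope sketch in Python; the dollar figures are entirely illustrative assumptions, not real McDonald's numbers:

```python
def payback_years(upfront_cost: float, annual_savings: float) -> float:
    """Years until cumulative savings cover the upfront investment.

    A simple (undiscounted) payback period: the longer this is, the
    harder it is to justify ripping out a kitchen that already works.
    """
    return upfront_cost / annual_savings

# Hypothetical numbers: a $3M robotic retrofit that trims $150k/year
# in labor costs would take 20 years to pay for itself.
years = payback_years(3_000_000, 150_000)  # 20.0
```

A real analysis would discount future savings and factor in maintenance, downtime, and financing, all of which make the robotic kitchen look even worse, which is consistent with why no one has pulled the trigger.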
Public opinion can play a part in tanking ideas too. As Kay says in Men in Black: “A person is smart. People are dumb, panicky dangerous animals and you know it.” A good futurist keeps that in the back of their mind whenever they think about what the future might hold. For example, when Eisenhower was pushing nuclear technology in the 50s, it might have seemed logical to expect nuclear power to become the backbone of America’s energy strategy. Yet far fewer plants were built than that vision implied, and new construction in the US all but stopped after the 1970s. Gains in nuclear reactor technology were paired with increasing fears of nuclear annihilation at the hands of the USSR. Those fears were aggravated by high-profile reactor meltdowns, overshadowing the technology’s major successes: France derives 75% of its electricity from nuclear power and makes €3 billion a year as a net exporter of energy, without any major failures in four decades of operation. To this day, fears over nuclear energy’s inherent risks are limiting its adoption, even though LFTR reactors can’t melt down like Chernobyl or Fukushima, breeder reactors greatly reduce the amount of waste created, and concerns over global warming raise the stock of any energy technology not reliant on fossil fuels. Still, Americans are incredibly reluctant to support breaking ground on new nuclear power plants.
More recently, similar public opinion pushback can be seen in resistance to GMOs. There are some legitimate concerns about whether companies should be able to own patents on strains of DNA, but that’s not where most of the resistance to GMOs comes from. Rather, it seems to originate from fears that GMOs are unsafe or unnatural. In reality, we have been manipulating plants through cross breeding for thousands of years, which is why wild carrots look like brown, twiggy roots and cultivated carrots are orange, thick, and straight. Still, the fact that this is done in a corporate laboratory, instead of on a farm, makes some feel we might be playing God beyond an acceptable level.
So, in summary, there are a host of roadblocks that stand between innovative concepts and their real world implementations. There are systems that only work if every actor in them does the exact right thing without any incentive to do so. Governments can be an impediment: they are too slow to change outdated rules and regulations to address modern concerns, and they are lobbied by existing companies who feel threatened by the innovations. The practical financials of paying for the changes keep seemingly obvious upgrades delayed for years. Finally, public opinion can relegate good ideas to the dustbins of time, seemingly forever. The most common mistake futurists make, when predicting the future, is to ignore one or more of these factors in favor of a more interesting, compelling, and, in many cases, more fanciful view of what things will look like down the road.
Of course, there is more to being a good futurist than merely avoiding the common mistakes that I have outlined above. There are proactive things you can do to make sure that your predictions are both more accurate and more useful to your clients. Changing your mindset, not reaching too far into the future, understanding how different trends behave, and knowing how human nature makes some bets safer than others are all ways to take positive steps towards improving predictions.
Perhaps the most valuable change in mindset is learning to let go of the fear. Reading through the laundry list of mistakes I’ve given so far can make the task seem almost impossible. I found that a great way to let go of the fear is to reframe the work. Alvin Toffler, author of Future Shock, differentiated between being a futurist and predicting the future. In his own words: “No serious futurist deals in prediction. These are left for television oracles and newspaper astrologers.” As far as he was concerned, being a futurist is like being a meteorologist. You can look at the trends like you look at prevailing winds. You can do statistical analysis of demographics like you would chart weather patterns. You can make guesses about emergent systems as you would with developing cloud formations. But like rain that comes two days early, or a cloudy day when it was supposed to be sunny, the day to day weather of social trends isn’t always that predictable, and sometimes, major winds blow a storm you could have sworn was going to hit Florida off course into the Gulf of Mexico. With that in mind, as strategists we should shift our thinking away from predictions and towards forecasts.
Once in the forecasting state of mind, a strategist should be looking for insights that contain actionable information. While futurists who write speculative fiction can envision worlds that are many decades in the future, like they did with flying cars in the 50s, the AI singularity in the 70s, and the end of aging in the modern era, strategists need to constrain themselves to what is immediately useful for their clients. While we may eventually live in a world where intergalactic travel between colony planets is commonplace, it is of limited utility for how a client company chooses to operate today. In fact, the dream of such travel is more important to advertising than the reality of its implementation. As a result, I feel the best kinds of forecasts are those that look about three to five years into the future. Companies typically have extended R&D and production cycles, and established brand identities that can’t pivot instantaneously; three to five years hits a sweet spot: close enough to be useful, yet far enough out that companies have the time they need to prepare for what’s ahead. In addition, like a weather forecast, keeping things close makes extrapolating current trends less risky, with less variance in the system, which means forecasts are also more likely to be accurate.
This is why I am so hard on the futurist predictions that may happen eventually, but probably not anytime soon. While you might not be wrong to see a future where all labor is replaced with automation, the complete impact of that system may not be felt for fifty years. Not soon enough to be something companies in many sectors need to be actively preparing for, and far enough out that any number of other factors, like government intervention, could work to prevent it from having the level of impact many are predicting.
When possible, actionable insights should also provide a strategic edge through a unique perspective or prediction. If a client knows something that their competitors do not, they can leverage that for gains in the market. So, try to keep predictions from being too close, too obvious, or too vague, because otherwise what you are providing is essentially common knowledge. That isn’t to say there is anything wrong with common knowledge; it just isn’t necessarily worth paying for. A statement like “electronics are going to continue to be more and more integrated into our daily lives” is too close to the way we live now to warrant any change in a company’s tactics, too simplistic to be unique, and too vague to be useful. Businesses already plan around Moore’s law, the observation by Gordon Moore that the number of transistors on a chip doubles approximately every two years. A more interesting, and useful, observation is that we are rapidly approaching the day when processor manufacturing will reach its theoretical limit at an atomic level and Moore’s law will cease to be so reliable.
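The kind of extrapolation businesses do with Moore's law is simple compounding, which a few lines of Python make concrete (the baseline of 1.0 and the exact two-year doubling period are simplifying assumptions; the real cadence has always been approximate):

```python
def moore_projection(transistors_now: float, years: float,
                     doubling_period: float = 2.0) -> float:
    """Project transistor count forward assuming a fixed doubling period,
    as Moore's law describes. Returns the projected multiple of today's count."""
    return transistors_now * 2 ** (years / doubling_period)

# Ten years at a two-year doubling period is five doublings: a 32x increase.
growth = moore_projection(1.0, 10)  # 32.0
```

The "more interesting observation" in the paragraph above is precisely that the `doubling_period` in this model stops being a safe constant as manufacturing approaches atomic limits, which is what makes it a forecast with an edge rather than common knowledge.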
Some trends are unidirectional. Given a stable economic climate, science and technology march forward. Even in the so-called Dark Ages, when Greek and Roman scholarship had largely left Europe for Byzantium and western Persia, advances were being made in agriculture, wind and water mill design, military architecture, and more. Some trends are cyclical. Fashion choices tend to swing back and forth. During the 1900s, dress length cycled between short and long every decade or two. This may be because fashion thrives on feeling new, and old looks get stale, or it could be a result of social mores shifting back and forth between permissive and reserved. Understanding what kind of trend you are dealing with can help unlock better forecasts.
Finally, there are some overall safe bets to take. People will almost always favor advancements that take away a nuisance, inconvenience, or pain in their daily life. Things that save time, money, and headaches are sure to be winners. Minimalism, like the sleek unibody aluminum designs of Apple machines, may eventually be replaced with a different aesthetic, but intuitive design isn’t going anywhere because it saves people the trouble of having to read a complicated manual. No one wants to lose money or have their company fail, so it is also safe to bet on people and organizations working to protect what they’ve got. Whether or not they will be successful is where the nuance comes in. No one wants to die, or be miserable, so advances in medical technology are rarely halted for very long. Even the legislative war over stem cell research during Bush Jr.’s years ended in a $600 million research grant under Obama. Lastly, people are driven by emotions, especially fear, so it is a safe bet to predict that emotionally effective narratives driving change will outperform purely logical or reasoned ones.
Being an effective futurist isn’t easy. I fully admit it is easier for me to point out the common mistakes of others and describe in broad strokes how to improve than it is to be consistently good at predicting the future. If it were easy, everyone would be doing it. But when it works, it really works. In the 1960s, Howard Gossage formed a working friendship with Marshall McLuhan. McLuhan wrote The Medium is the Massage [sic], published in 1967, which popularized the term “global village” and predicted the rise of electronic/digital communication connecting and changing the world decades before the invention of the Internet. That friendship played a role in Gossage becoming, quite possibly, the most influential figure in American advertising west of the Mississippi. So look to the future, fellow strategists! Just make sure your future systems make sense, your models are built on realistic incentives, your predictions won’t be rebuffed by governmental regulations or public opinion, and your insights are within the scope of useful and usable information for your clients.