Using ethnographic field studies, technology research, trend data and even science fiction, Brian David Johnson helps Intel develop an actionable 10- to 15-year vision for the future of technology. We talked to him last fall while researching one of our 10 Trends for 2014, Rage Against the Machine—the idea that as we move further into the digital age, we’re starting to both fear and resent technology. Johnson talked to us about why technology is making consumers anxious and afraid, our evolving relationship to our data “second self,” and the process of working out how to use our devices in a socially acceptable way.

It seems like people are getting more worried about what’s been lost in our embrace of technology and are starting to question their immersion in it. To what extent are you seeing that?

We’re at a really interesting transition point because of what I’m calling the “screenification” of computers. It’s all about screens now. But as computational devices get smaller and smaller, they’re spreading out into our lives in really different ways.

The computer used to be a thing that had its own room, and then the computer was the thing that had its own desk. Now it’s under your pillow when you sleep, next to your bed, or you take it into the bathroom stall with you, which is fundamentally different. I don’t think we have come to grips with that yet.

Technology has been woven into our lives—and will continue to be—as devices get smaller and smaller and technologies become capable of being embedded into things like wearables, stickables and in-frontables. We’ll start living in “smart” cities with “smart” cars and “smart” buildings. That relationship will continue to evolve, and it is something we don’t really have a mental model for understanding yet. For many people, the first reaction to the unknown is fear and anxiety.

I wrote a blog post for Slate a while back on the four stages of fear as technology comes into our lives. First: It is going to kill us all; it is going to be the end of the world. Second: It is going to steal our children from us. Third: I don’t want anything to do with that; I don’t want that in my life at all. Then you pass that moment when you start to see its use, and it turns into No. 4: “Well, of course, it’s always been like that. We’ve always had the Internet, of course.”

When Sputnik was first launched, satellite technology was the end of the world, and we were all going to die. Now satellite TV is how we watch Monday Night Football—it’s no big deal.

Intel recently released the results of the “Intel Innovation Barometer,” in which we asked people globally about their attitudes toward technology innovation. We found very similar results [to JWT]. But one of the most surprising findings was that Millennials felt that technology was making them less human.

I saw that and wanted to ask about it. Why do you think people feel that technology is making them less human?

We see it as a moment of flux. People are working it out: what it means to have a digital life, what it means to sleep with a smartphone next to your bed, what it might mean to have a smart shoe or a smart Band-Aid. What does it mean to live in a smart building? I think we are really at the very beginning of working this out as a society.

A lot of what is going on is that we don’t have, from a cultural standpoint, the stories to tell us what that future looks like. And so there’s some anxiety around that. As we start to get those stories—through media, playing with technology, from new companies, living with different generations—that’s how we start to become comfortable. That’s how we get to that fourth phase where we’re like, “Oh, of course, it’s always been like this. I’ve always been able to get the Internet on my smartphone.”

We’ll get new anxieties, of course, because having fear and anxiety is what we do. It’s how we process what’s going on. But those new stories and new metaphors, we’re still developing those. We don’t have those yet.

Is it also just a matter of testing out how technology fits in our lives?

Sometimes we’ll get it really, really right as a culture and society, and sometimes we’ll get it really, really wrong. That’s also the natural ebb and flow, the back and forth of understanding.

The things we’re seeing around data protection and privacy—monitoring not only by governments but also by corporations—this is a needed conversation. What’s interesting to me right now is that, as a culture, we are starting to have that conversation and get a handle on what’s appropriate. A year ago that conversation really wasn’t happening.

A few years ago we started talking about de-teching, or taking a break from technology, leaving it behind on vacation and so on. Do you think that concept is extending to a broader questioning of how we use technology?

There’s a very specific parallel to how this has happened before. Look at TV. When I have conversations with people who are worried about technology and what it is doing to their children, I say, “Are you the type of family that watches TV while you eat dinner?” They’ll answer me; they have an opinion. Then I say, “That’s a decision you’ve made as a parent, as a family. You’ve made that decision. Technology doesn’t get to make the decision. The television doesn’t care if it is on or not.”

We’re just starting to make those decisions about the new technology. I’m a child of the ’70s and ’80s, back when TV was the “boob tube” that was “rotting our children’s brains.” Now we have a much better handle on television and what is socially acceptable for it. We need to figure out what is socially acceptable for these devices. We don’t know yet.

Privacy and anonymity have become big topics over the past year. Do you think that is also fueling some of this fearful mindset?

We’re having to have the conversation for the first time. For several years now, I’ve been talking about something called the “secret life of data.” This is the notion that as we look out to the future, data is going to have a life of its own. It’s going to go out and do things: algorithms talking to algorithms, machines talking to machines. Your data will go and do things for you. What the ongoing revelations have shown us is that we do have a second self.

For the longest time, people thought about their data as ones and zeros. They really didn’t understand that they had a second self, a representation of themselves. Especially this year, we have seen that we do have a second self and that it does live online. We can now have the conversation about, “What’s appropriate for you to know about that second self and not to know about that second self?” Data became really real this past year.

That conversation will continue, and it will continue at a business level, and it will continue at a policy level. I also think it will continue at a personal level. People now are having a more personal conversation about, “What does that mean to me?”

Do you see a lot of paranoia developing?

I think that now you have an opinion. Sometimes that opinion is paranoia, and sometimes that opinion is, “Oh, I’m fine with it.” For the first time, people are having a reaction. Certainly, the paranoia and the fear bubble up to the forefront because these are the things we feel we need to worry about, but I have gotten reactions across the board.

People are also more aware that their data is valuable, right?

Their data is them. This is the biggest thing. Now data is getting really real. All of a sudden it is like, “Whoa, wait a minute. You’re talking about me, right?” You’re getting to the point where this second self is me, literally. It’s me in a digital version that’s out on the Web. For the longest time, it was about that thing over there. There was that line between their data and themselves, and now that line is going away.

In the “Intel Innovation Barometer,” why do you think it was Millennials who were most apt to say they see technology as dehumanizing or feel that society relies too much on technology?

We see them in the middle of working it out—questioning and soul-searching—more than other folks, because they have only known a time when there was Internet. Millennials are just beginning to go through it and get an understanding of “What does it mean to be digital, what does it mean to have this second self, what does it mean to live in this world where technology is surrounding us?” This is the only world they have known, so their reaction is much stronger.

And how about Gen Z, the youngest generation?

For me, they’re more a generation of builders. They’re much more makers and hackers, because they’ve always known a time when you can take stuff apart and put it back together—and I mean that very metaphorically: you can hack apps together.

Science fiction often paints a dystopian vision of the future. Do you think that is making some people more fearful of the future, seeing the darker implications?

Science fiction gives us a language to talk about the future. Part of it is the intent of the story. There are two types. One is to entertain you, and if you lived in a future where everything was awesome and everything was great, it would be the most boring story ever. A story is about a person in a place with a problem. The other side is creating science fiction with intent. That’s what Intel’s “Tomorrow Project” looks to do. The goal is to get people to write science fiction based on science fact and to talk about the kind of futures they want to see.

Dystopian science fiction is incredibly important for me as a futurist because it tells us what we should avoid. The perfect example over the longer term is George Orwell’s 1984. It’s a pretty complex idea of what’s going on with technology in a totalitarian state, but when you say “Big Brother” to most people, they know it’s bad.

On the other side, we also should talk about the future that we want. We don’t have to be completely optimistic, but we can talk about things we want from technology. When you talk to people about gesture technology and new UIs, the words Minority Report will come out. When you talk to people about cell phones, most people of my generation can’t go more than a few sentences without saying Star Trek. We also use science fiction to show us the things that are possible and to get us really excited as well. It serves an incredible purpose to give us those stories.

That’s one of the ways we’re going to get through the moral panic. We’ll get through the anxiety, and then we’ll come up with some new stories about what the future could look like. Science fiction is a powerful tool to do that for us.

Finally, what are a few of the items on your list of things to watch for in 2014?

Certainly, thinking about robots in a very different way, as well as quadcopters and Baxter, the work that Rodney Brooks is doing at Rethink Robotics. I really do think you are beginning to see robots as mobile computational platforms, and that’s really interesting.

Also, I’m still watching as we move away from the screenification of computation, as computation starts to move into our lives in a way we haven’t seen before. Think about wearables, smart buildings and smart cars. Living in a computationally rich environment is very different. It’s just beginning to happen. As we move toward 2020, we’ll see it happen more. Computing becomes more relationship-based than command and control. What we do now is command and control; we don’t have a relationship with our technology.

Do you think that will reduce people’s fears?

I think fear will increase in the beginning. We don’t have the metaphors, and we don’t have the language. We’ve got a lot to talk about when it comes to living in this computationally rich environment I described. I’m an optimist, and I think the future’s going to be awesome, but we’ve got a lot of work to do.