First Pass: What’s Wrong with the Grand Challenges for Engineering

October 11, 2010 by Daniel W. Rasmus

At the risk of over-thinking the Grand Challenges for Engineering even further, I want to take a first pass at discussing, in a very specific way, what I think is wrong with them, and at honing the list into something more grand.

Here is the current list of 14 challenges:

  • Make solar energy economical
  • Provide energy from fusion
  • Develop carbon sequestration methods
  • Manage the nitrogen cycle
  • Provide access to clean water
  • Restore and improve urban infrastructure
  • Advance health informatics
  • Engineer better medicines
  • Reverse-engineer the brain
  • Prevent nuclear terror
  • Secure cyberspace
  • Enhance virtual reality
  • Advance personalized learning
  • Engineer the tools of scientific discovery

First, energy. These should all be combined. The challenge should be sustainable energy. I don’t think we should care what type of energy wins as long as it is sustainable in the broadest sense (no limited supply, not toxic to the environment, and no need to engineer something in the future to fix its problems or byproducts). So this turns “Make solar energy economical” and “Provide energy from fusion” into:

  • Develop sustainable energy

Second: Develop models for sustainable living. Several of the challenges fit here, from managing the nitrogen cycle, to clean water, to carbon sequestration. This one is also closely related to the first challenge, because sustainable energy is a part of sustainable living.

Restore and improve urban infrastructure is also part of sustainable living. Without the context of sustainable living, we cannot spell out the goals of an urban infrastructure. Infrastructure is part of an overall design, not a separate component engineered in a vacuum.

The next one, then, is:

  • Create models for sustainable living

Advance health informatics. Wow, that is really going to inspire people. Let’s not cure anything; let’s study the data. If instead we create a goal of maximizing healthy human lifespans, then informatics and medicines become part of it. It also touches on sustainable living, because many of our health care issues come from our consumer-driven approach to life, along with our short-term perspectives on the planet and ourselves. Get people to think sustainably about everything and some health issues are solved without informatics or new medicines. Sure, there are huge health issues that need to be solved, but again, rather than looking at data and medicine, let’s look at humans holistically and systematically so that we understand the causal relationships between disease and environment, between disease and the genome, between mutation and evolution.

So for number three:

  • Maximize healthy human lifespans

Reverse-engineer the brain. OK, but why? I have been a practitioner and advocate of artificial intelligence for a long time. I love this stuff; I think it is a solution to how we store information and make sense of the world. But making sense of the world and storing information better don’t make it necessary to reverse-engineer the brain, because we aren’t going to make artificial brains. Brains are meat machines. We are likely to make hardware, not wetware, so we need to learn from the brain, but not be limited by it as we translate our findings into hardware. We already know how poorly the brain does some things (like retrieving specific facts) and how well it does others (like building 3-D models from scattered light). Let us learn what we need to learn to solve specific representation, storage and retrieval issues, knowing already that the brain is an imperfect model from an engineering perspective. So I would argue that we need to engineer better ways to store and retrieve knowledge, and again, let the engineers figure out what they need to know to do that.

So four:

  • Create better ways to represent, store and retrieve knowledge

Prevent nuclear terror. Talk about putting the genie back in the bottle. This is a political issue, not a scientific one. Sure, the detection of fissionable material is something that can be engineered, but distribution and access are in the control of nation-states. It is a physical security issue, not an engineering issue. I would simply drop this from the list for engineers. This is a grand political challenge; I believe that once the politics are solved, the engineering is fairly mundane. Now, if we look at this as part of the energy equation and worry about sustainability, where to store nuclear waste becomes a huge issue, but that is not a terror issue; it is a sustainable living issue.

Next on the list is secure cyberspace. I would put this as a sub-bullet well down the list under storing and retrieving human knowledge, because ultimately the way to solve intellectual property issues and the way to secure state secrets are highly correlated: solve either and both are solved. So I would also remove this from the list and let it float among the other priorities in the human knowledge challenge.

Enhance virtual reality (VR). No. This is a tool, not a challenge in itself. It could be argued that VR is fundamental to all of these issues and so needs its own category, but I would argue that VR will have unique instances within these categories (being in a human body may require different representation and visualization technology than simulating a sustainable city). So this comes off the list and becomes a means to several ends, not an end in itself.

Advance personalized learning. Easy: a sub-task of number four. Interesting, but we need to understand learning models in light of new representation, storage and retrieval models. This becomes a bit of an analytics challenge once we have the representation, but getting the data to the point where it can be discovered and correlated is what matters. I already know what is important to me, and once information systems know that too, ordering knowledge to fit my schedule becomes a rather trivial, non-grand issue. A bigger issue is having the time to learn what I need to learn, and that, again, is a representation issue. If we end up with Matrix-like downloading, then it is all about representation and interface, not about personalized learning, and it becomes a given the second I jack in.

Finally, engineer the tools of scientific discovery. Another “just let people do what they need to do” category. We continue to create great instruments for things that aren’t grand challenges. Understanding cosmology may be a real human need, but the Hubble telescope is mostly a machine for exploring philosophy, not science. Yes, it both reveals new truths and raises new questions, but perhaps its biggest question is why the universe matters to people on Earth, and it won’t answer that question. People need degrees of movement to pursue intellectual and philosophical questions that matter because people say they matter, not because they have some specific practical purpose. Creating a grand challenge that couches our human need to know in industrial-age garb is intellectually dishonest. So this comes off the list. We will invent what needs to be invented to answer the questions we need to answer, regardless of whether the source of the need is practical or philosophical.

So this leaves me with four grand challenges.

  1. Develop sustainable energy
  2. Create models for sustainable living
  3. Maximize healthy human lifespans
  4. Create better ways to represent, store and retrieve knowledge

When we look back at the Moon launch, we don’t see leaders talking about building big rocket ships or small, thin, gangling landing craft with undersized computers. We hear and see vision with specific goals and time frames, not methods. NASA battled internally; Apollo was not the only way to reach the Moon, it was the way we kludged our way to the Moon. That is how engineering works. That is how evolution works. These challenges are over-thought because they aren’t grand and visionary; they are specific and boring, and they eliminate degrees of intellectual movement by their narrowness.

I want to see some vision from our leaders, some unshackling of human capital and financial capital that doesn’t contain micromanagement from the start, but rather a broad goal with a lot of room for invention and a lot of room for failure. If we don’t fail, we don’t learn. Perhaps it isn’t just the abstraction that is the problem with these grand challenges, but the sense that we don’t have room to fail. Who wants to sign up for a big challenge where efficiency and expediency are more important than breaking through a barrier to knowledge that takes us to the next level of human achievement?

From an e-mail I received from Ray Kurzweil, Oct. 7, 2010:

Thanks, Dan,

I agree with what you write. Larry Page and I pushed solar energy as an existence proof that a path to sustainable environmentally friendly (admittedly redundant) energy was feasible. I agree that a solution does not need to be solar and indeed we are likely to have a hybrid approach. Larry is personally enthusiastic about geothermal. There is, after all, a vast supply of heat energy not far from us under the ground. I am dubious about fusion, which was one of the recommendations of the report. For one thing, it is not a technology that is obviously amenable to exponential approaches. But it had its strong proponents on the committee.

The purpose of reverse-engineering the human brain is so that we understand its principles of operation with a view towards leveraging those principles to create better AI. In the same way that engineering took Bernoulli’s principle and created the whole world of aviation, we can do the same with the basic methods of the brain. So the idea is to expand our toolkit of algorithms. There are two other objectives as well, which are to find better ways of fixing the brain (treating it as a network rather than a chemical soup) and gaining more insight into ourselves, which is the ultimate objective of the arts and sciences.

My response to Ray:

Thanks for the note. As I wrote in my book, Rethinking Smart Objects, I agree with the idea of understanding the algorithms of the brain, but we risk simply creating a sub-par emulator in hardware. We should instead look at the combination of the evolutionary model and the engineering model, and not stop where evolution has taken us, but allow the machine, using artificial-life approaches, to evolve mechanisms that may be better than what the human brain has accomplished (and at a much faster rate). I discuss the color blue in the closing of the book, and say that the color blue has a unique meaning for each human. Who are we to define what blue means to an artificial intelligence? We should figure out how we can allow it to define blue for itself.

Provided by Daniel W. Rasmus: Put Your Future In Context