Ask an artificially intelligent question…

There was plenty for a philosophy major to sink his teeth into at ION’s January workshop on Cognizant Autonomous Systems for Safety Critical Applications (CASSCA).

What is knowledge? What is meaning? What is understanding? What is intelligence? What is learning? What is thinking?

These questions excited Plato and Kant, Buddha and Descartes, perhaps out of intellectual or spiritual curiosity. Who’s to say? But the people asking them now are driven, quite literally, by practicalities. They have come to realize that we cannot ride in driverless cars or fly in pilotless plane-taxis; we cannot live in an autonomous, artificially intelligent environment without knowing a bit more exactly what knowledge is, in this brave new world.

Without thinking about what thinking may be, for a machine.

Why does this matter to a GPS/GNSS/PNT readership? Because as positioning and navigation engage more deeply with artificial intelligence (AI) generally, and with autonomy in particular, these issues emerge as part of the environment that such solutions explore, and in which they must verify and validate themselves.

Welcome to the future, it’s yours. Now think about it.

Culture Club. Some of us may have believed that only technical obstacles remain in the path of a driverless car and an otherwise automated society, salted with a few regulatory wrinkles to iron out. But as build-a-robot R&D projects transform into full commercial partnerships, cultural challenges crop up as well: inertia, instability of requirements, unanticipated expectations, magical thinking (the development of empathetic attitudes toward robots), misplaced trust and misplaced distrust. All this according to Signe Redfield, roboticist and mission manager at the U.S. Naval Research Laboratory.

Joao Hespanha, professor of electrical and computer engineering at the University of California, Santa Barbara, outlined three key concepts for AI development: computation, perception and security. For computation, the critical questions are how much computing will be done onboard the platform, how much learning will be done onboard, and how much of each will be distributed to offboard resources. Perception, a crux for autonomy, is closely bound in a feedback loop with control: the platform must gather data to make autonomous decisions (control), and those decisions must in turn maximize the gathering of information (perception).
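
A minimal sketch of that loop, assuming a Kalman-style estimator (the model, the numbers and the helper predicted_trace are my illustration, not from Hespanha’s talk): at each step the controller picks the sensor look direction that most shrinks the predicted state uncertainty, so the control action directly serves perception.

```python
import numpy as np

# Hypothetical illustration of a perception-control feedback loop:
# choose the control (here, a look direction for a scalar sensor)
# that minimizes the predicted posterior covariance trace.

A = np.eye(2)               # state transition (static state, for simplicity)
Q = 0.01 * np.eye(2)        # process noise covariance
R = 0.5                     # scalar measurement noise variance

def predicted_trace(P, H):
    """Trace of the posterior covariance if we measured along H (1x2)."""
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R            # innovation covariance (1x1)
    K = P_pred @ H.T / S                # Kalman gain (2x1)
    return np.trace(P_pred - np.outer(K, H @ P_pred))

P = np.diag([1.0, 0.1])     # current uncertainty: large along x, small along y
candidates = [np.array([[np.cos(a), np.sin(a)]]) for a in np.linspace(0, np.pi, 8)]

# Control step: pick the action that maximizes expected information gain.
best_H = min(candidates, key=lambda H: predicted_trace(P, H))
print("chosen look direction:", best_H)   # points along x, the uncertain axis
```

In a real platform the chosen action would task an actual sensor, and the resulting measurement would update P, closing the loop Hespanha described.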

Security deserves ample consideration. All safety-critical systems must provide for, and prevent where possible, decisions based on compromised measurements, which may stem from system or environmental noise, sensor faults, hacked sensors, or other corruptions.
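
One standard guard from estimation practice (my example, not a workshop prescription) is to gate each measurement by its normalized innovation before it reaches the estimator: a fix wildly inconsistent with the filter’s own prediction, whether from a noise burst, a failed sensor or a spoofer, simply gets rejected. The function name innovation_gate and the numbers below are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def innovation_gate(z, z_pred, S, alpha=0.001):
    """Chi-square consistency test on a measurement.

    z      : received measurement (1-D array)
    z_pred : measurement predicted by the filter
    S      : innovation covariance
    alpha  : acceptable false-alarm rate
    Returns True if z is plausible, False if it should be rejected.
    """
    nu = z - z_pred                              # innovation
    nis = float(nu @ np.linalg.solve(S, nu))     # normalized innovation squared
    return nis < chi2.ppf(1.0 - alpha, df=len(z))

# A 2-D position fix consistent with the prediction passes; a fix 30 units
# off (say, a spoofed or faulted sensor) is flagged for exclusion.
S = np.diag([4.0, 4.0])
print(innovation_gate(np.array([1.0, 1.5]), np.zeros(2), S))   # True: accept
print(innovation_gate(np.array([30.0, 0.0]), np.zeros(2), S))  # False: reject
```

A slowly drifting attack can stay inside such a gate, so checks like this complement, rather than replace, the system-level protections Hespanha called for.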

Second Wave. We are in the second wave of AI, according to Steven Rogers, senior scientist for sensor fusion at the Air Force Research Laboratory. In the first wave, in the 1960s and ’70s, large, complex algorithms running on relatively little data drove new developments, but they hit real-world problems, hard. Since the mid-’80s, we have been in the “classify” stage, with relatively simpler programs generating and consuming lots of data. Intense statistical learning will eventually lead to the third wave of AI: Explain.

On a timeline yet to be determined, contextual adaptation will give rise to “explainable” AI, capable of answering unexpected queries. That is, it will have learned how to teach itself.

Some of this stuff gets pretty scary.

Most future knowledge will be machine-generated.

Let’s run through that one more time.

“Most future knowledge on Earth will come from machines extracting it from the environment,” said Rogers. “Machine generation of knowledge is key for autonomy.”

Here’s where the thought processes really started to levitate. “Current sense-making solutions are not keeping pace, not growing as knowledge is growing,” Rogers asserted. And he challenged us with the questions posed at the beginning of this column: in AI, the context we will use to explore much of the future, what is knowledge? What is meaning? And so on.

He gave us one of his answers: “Knowledge is what is used to generate the meaning of the observable for an autonomous system. Correspondingly, machine-generated knowledge is what is used to turn observables into machine-generated meaning.”

Slide from Steven “Cap” Rogers’ presentation at CASSCA.

He suggested a book by George Lakoff and Mark Johnson, Metaphors We Live By. Pretty heady stuff for a room full of engineers. I don’t know about you. I’m headed down to the library to check it out.

Requirements, Simple/Not. David Corman, program manager for Cyber-Physical Systems and Smart and Connected Communities at the National Science Foundation, brought us back to earth with some technical challenges we could actually chew on. Seemingly simple requirements for safety-critical applications break down into hundreds of requirements that no one has really thought about, Corman said, as he displayed a chart of “Some Example Research Problems.”

Precision agriculture and environmental monitoring are two sectors where he thought autonomous operations come closest to full realization, because their operational environments are sufficiently structured. In such constrained niches, which we understand more fully, we can implement autonomous operations. Elsewhere, “we don’t know how to specify what we want, so that we get only ‘good results’ and no ‘bad results.’ ”

He identified a looming Cambrian explosion in AI, analogous to the sudden diversification of plant and animal life in the fossil record, in which systems interact, gather data, sense the environment, learn, improve and multiply. He suggested we browse “The Seven Deadly Sins of Predicting the Future of AI,” an essay by Rodney Brooks.

The afternoon’s workshop talks followed, from experts in autonomous flight software, legal and insurance aspects of autonomy, the Ohio State University’s Center for Automotive Research, and the U.S. Department of Transportation. But I tell you, this morning did my brain in.

Before folding up, I must mention a short video on autonomous flying taxis displayed by Paul DeBitetto, VP of software engineering at Top Flight Technologies. It depicts Pop.Up, a modular ground and air passenger vehicle for megacities of the future. Check it out.

The CASSCA workshop was organized and moderated by Zak Kassas, an assistant professor at the University of California, Riverside and director of the Autonomous Systems Perception, Intelligence & Navigation (ASPIN) Laboratory. He is also co-author of two cover stories in GPS World, “LTE cellular steers UAV” and “Opportunity for Accuracy.”

ION president John Raquet expressed the hope that we may see a fully fledged conference on this topic in the near future: CASSCA 2019, perhaps, to join the rotating repertory of ION annual meetings.

Agreed. We need to think more.

Don’t look back, the machines may be gaining on us.

