Brilliant Machines

Are the machines of tomorrow really different from what we have now?

Closed loop control: something about the world is an input
I recently got a new car, and one of its features is a sensor that adjusts how fast the wipers move based on the amount of rain on the windshield. On my prior car, the wipers could be set to a specific intermittent frequency, but they wouldn’t adjust themselves based on what was actually happening. This is the difference between closed loop and open loop control – in the latter case, I was the “cognitive” part of the equation, adjusting the wiper speed myself so they didn’t run so often that they made a horrendous noise on the dry glass.

The closed loop control of my new wipers is a big improvement, but it reflects the assumptions of the designer – for instance, that the only time the windshield will get wet is when it’s raining. If the sensor were triggered by, say, a splash of paint, I’d quickly have a mess I couldn’t see through. No person would turn on the wipers if paint splashed on the windshield, of course, but the engineered system assumes that anything wet is rain, and thus that the wipers should be engaged.
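
To make the distinction concrete, here is a rough sketch in Python – every sensor value, threshold, and speed below is invented for illustration. The open loop version runs at whatever speed the driver picked; the closed loop version feeds a wetness measurement back into the speed, and quietly bakes in the designer’s assumption that anything wet is rain.

# Open loop: the driver is the "cognitive" part, picking a fixed speed.
def open_loop_wiper_speed(driver_setting: int) -> int:
    """Return wiper strokes per minute; ignores what's actually on the glass."""
    return driver_setting

# Closed loop: a (hypothetical) wetness sensor feeds back into the control.
def closed_loop_wiper_speed(wetness: float) -> int:
    """Map sensed wetness (0.0 = dry, 1.0 = downpour) to strokes per minute.

    The designer's assumption hides here: anything that wets the glass is
    treated as rain, whether it is drizzle or a splash of paint.
    """
    if wetness < 0.05:
        return 0    # dry glass: don't squeak the blades
    if wetness < 0.5:
        return 20   # light rain: intermittent wiping
    return 45       # heavy rain: continuous wiping

print(open_loop_wiper_speed(20))      # 20, rain or shine
print(closed_loop_wiper_speed(0.02))  # 0  - dry, wipers stay off
print(closed_loop_wiper_speed(0.8))   # 45 - downpour... or paint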

Moving beyond the closed loop
As a fighter pilot, John Boyd came up with the idea of the OODA Loop: that a pilot must observe, orient, decide and act, all in concert and faster than the enemy, in order to win the engagement. The OODA loop can be seen as a kind of “cognitive architecture” – a way to improve on simple closed loop control with something far more robust, and ultimately able to adapt to situations beyond those the designer planned for.

It’s important to recognize that even though it is a “loop,” it is not a loop construct in programming – everything happens continuously and in parallel, which is very different from the sequential, step-at-a-time way we usually program computers. It’s also worth noting that some parts of the loop, though intuitively appealing, are very hard to define precisely.
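
One rough way to express that difference in code is to run the stages as concurrent tasks sharing state, rather than as a single sequential loop. This is only a sketch – the stage names are Boyd’s, but the shared state, the timings, and the trivial “orientation” rule are all made up.

import asyncio
import random

state = {"observation": None, "orientation": None}

async def observe():
    while True:
        state["observation"] = random.random()  # stand-in for raw sensor data
        await asyncio.sleep(0.1)

async def orient():
    while True:
        obs = state["observation"]
        if obs is not None:
            # turn raw data into information in context (trivially, here)
            state["orientation"] = "wet" if obs > 0.5 else "dry"
        await asyncio.sleep(0.1)

async def decide_and_act():  # decide and act are folded together for brevity
    while True:
        if state["orientation"] == "wet":
            print("engage wipers")
        await asyncio.sleep(0.1)

async def main():
    # the stages run concurrently, not one after another; stop after a second
    tasks = [asyncio.create_task(stage()) for stage in (observe, orient, decide_and_act)]
    await asyncio.wait(tasks, timeout=1.0)

asyncio.run(main())

Cooperative tasks like these aren’t true parallelism, of course – a real system would want separate processes or dedicated hardware – but they show the shape: no stage waits for the others to finish.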

 

[Figure: Brilliant_Minds1]

“Observe,” for instance, is more than just sensing – somehow the data (the windshield is wet) has to be turned into appropriate information (wet… with paint), because that will make a difference in how the information is acted upon. “Orientation” is where the magic happens – where we understand the context of what is happening, comparing our current situation to our experience, and thinking fast or slow about it to guide a decision. And because nothing is certain, the outcome can be checked, further decisions made, theories revised – and “learning happens.”
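
In miniature, and back in wiper terms, that data-to-information step might look like the hypothetical sketch below. A real system would need far richer sensing, but the point is that the same raw datum, “the glass is wet,” should lead to different actions once it has been interpreted in context.

def interpret(wetness: float, looks_like_water: bool) -> str:
    """Turn raw sensor data into information: what is actually on the glass?"""
    if wetness < 0.05:
        return "dry"
    return "rain" if looks_like_water else "paint"  # context changes the meaning

def act(information: str) -> str:
    """Decide based on the information, not the raw data."""
    if information == "rain":
        return "run the wipers"
    if information == "paint":
        return "pull over, do not smear it"
    return "do nothing"

print(act(interpret(0.8, looks_like_water=True)))   # run the wipers
print(act(interpret(0.8, looks_like_water=False)))  # pull over, do not smear it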

[Figure: Brilliant_Minds2 – a generic decision tree, illustrating how our decisions influence outcomes. How can we build better theories of the uncertain events so we can predict the outcome more accurately?]
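
In the spirit of that question, here is a toy version of such a tree in Python, with made-up numbers: the “theory” is simply the probability we assign to the uncertain event, and a better-calibrated probability changes which branch we choose.

# A toy decision: run the wipers now, or wait? (All payoffs are made up.)
# Each branch is (probability, payoff); the "theory" is the probability we
# assign to the uncertain event "it is actually raining."
def expected_value(branches):
    return sum(p * payoff for p, payoff in branches)

def best_action(p_rain: float) -> str:
    wipe = expected_value([(p_rain, +10), (1 - p_rain, -5)])  # helps if rain, squeaks if dry
    wait = expected_value([(p_rain, -10), (1 - p_rain, 0)])   # blind if rain, fine if dry
    return "wipe" if wipe > wait else "wait"

print(best_action(0.10))  # a poorly calibrated estimate of rain says wait
print(best_action(0.60))  # a better estimate of the same sky says wipe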

 

 

While today we are in the midst of adding simple closed loop control to pretty much everything – hooking sensors up to actuators with minimal dynamic intelligence – the real promise lies in building machines that can do what we do: understand the context, revise their models of how the world works, and be generally skeptical. Their programmers, like parents, aren’t really gods – they do the best they can and then let their progeny go off to figure out how the world really works.

How do you think the world will change when machines are no longer designed, but raised?

