Welcome!

The background art you see is part of a stained glass depiction of The Creation by Marc Chagall. An unknowable reality (Reality 1) was filtered through the beliefs and sensibilities of Chagall (Reality 2) to become the art we appropriate into our own lives (third-hand reality). A subtext of this blog (one of several) will be that we each make our own reality by how we appropriate and use the opinions, "facts" and influences of others in our own lives. Here we can claim only our truths, not anyone else's. Otherwise, enjoy, be civil and be opinionated! You can comment by clicking on the blue "comments" button that follows the post, or recommend the blog by clicking the +1 button.

Tuesday, December 11, 2012

Building Better Machines

A recent article in Scientific American, entitled “The Wisdom of Psychopaths”, comments on how similar many of our society’s leaders – politicians, movie stars, major entrepreneurs – are to what is known as the “classic psychopath.”  They are ruthless, uncaring of consequences to others, superficially charming, overwhelmed by their own self-worth, driven to satisfy only their own desires and needs.  And we are busy building more of them all the time.  Sometimes we do it by the way we raise our children, sometimes by the entertainment we prefer, sometimes by how we vote on Election Day, sometimes by the values we elevate through our ideologies.  For example, Michael Sandel, in What Money Can’t Buy: The Moral Limits of Markets, reports that Larry Summers, then President of Harvard, spoke in Harvard Chapel about how Economics enables us to “economize on altruism.”  But sometimes, we just leave it to our engineers.
Case 1: Many of us are not aware that when gas prices skyrocket or our winter heating bills become outrageous, these are not the product of some malevolent decision by an evil plotter out to do us harm, but simply the calculation of a computer program, possibly thousands of miles away. Knowing only formulas that relate a change in the price of gas or electricity to the resulting change in demand, and uncaring about the possibility of leaving people stranded or freezing, the program sets prices or selects energy sources to maximize profit for the corporation involved.  The program is effectively autonomous in its decisions, and not at all altruistic.
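To make the point concrete, here is a minimal sketch (with invented numbers, not any real utility's software) of the kind of calculation such a program performs: given an assumed straight-line relationship between price and demand, it simply picks the price that maximizes revenue, with no term anywhere for human consequences.

```python
def revenue_maximizing_price(a, b):
    """For linear demand q = a - b*p, revenue p*(a - b*p) is a downward
    parabola in p; calculus gives its peak at p = a / (2*b).
    Nothing in this formula knows or cares who gets priced out."""
    return a / (2 * b)

# Hypothetical demand curve: 1000 units demanded at a price of $0,
# with each $1 increase in price cutting demand by 50 units.
price = revenue_maximizing_price(1000, 50)   # -> 10.0 (dollars)
demand_at_price = 1000 - 50 * price          # -> 500.0 units served
```

In a real pricing system the demand model would be estimated from data and the objective would include costs, but the structure is the same: an optimization with profit as the only goal.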
Case 2: General Electric and other major companies are proudly announcing their achievements in industrial robotics, which is maturing as a way to provide “on-demand” production of consumer goods to exact customer specifications.  The results are great: soon you’ll be spelling out exactly what your dream toaster will be, and it will arrive on your doorstep the next day.  Of course, other consequences will occur.  Millions may be out of work as they are replaced by robots, but hey, that’s capitalism, and for every winner there are multiple losers.  The robots won’t care.
Case 3: The current controversy over the use of “killer drones” for attacking specific terrorists is tempered by the knowledge that a human operator is “in the loop”, actually issuing the decision to strike.  But that’s not really efficient, so engineers are busily working on ways to enable the drone to perform autonomously, thereby opening up a whole new category of homicide: “oops, computer error.”
Case 4: The visionaries of the internet are looking forward to 2045, the year they estimate that all-wise intelligent machines will take over the nasty job of making all the decisions that run the world. The near-term project in that pursuit is to develop computers capable of designing new computers smarter and faster than they are; then the new computer will design its successor, and so on.  You may have seen one step in that direction recently on Jeopardy, when a computer developed by IBM trounced the greatest human Jeopardy champions.  But will the machines making the decisions for us have truly human values and emotional intelligence?
Another article in that same Scientific American noted that the three-pound human brain contains complex circuitry and computational capacity equivalent to the entire internet, so I’m not holding my breath for that 2045 dawn of a new age.  We’ll still outnumber the machines about 10 billion to one.  But that visionary goal illuminates a challenge we face in the near and far future: how to control the burgeoning technology we are so rapidly creating, to ensure its “built-in” values are truly human, not just the residues of defunct 19th-century philosophies, and that we ourselves stay human, too.  Economic Determinism and laissez-faire capitalism, for example, with their emphasis on “economizing” on altruism, reflect early understandings of Darwinian evolution and the human genome that are now being outgrown; we don’t want them embedded in our machines, and we need to get them out of our heads as well.  More generally, we have not yet faced up to the task of building ethical machines, or an ethical technology-dominant society.
Isaac Asimov solved the problem neatly with his conception of the Three Laws of Robotics; that was great for fiction, but real-life solutions are going to be a lot harder to arrive at.  It is interesting, though, that his First Law is equivalent to an age-old truth, the maxim long associated with the Hippocratic Oath: “First, do no harm.”  One step forward, a challenge some engineers might relish, would be to treat such ethical principles as “constraints” built into a linear programming algorithm seeking profit maximization.  Who knows? It might actually achieve something.  That would still not solve the altruism issue; that belongs to the second half of Asimov’s First Law, which forbids a robot to allow harm through inaction.  But it’s a step in the right direction, and many such steps are required before we can unleash autonomous machines on our society.
Of course, the fundamental problem we face is not simply building better machines.  It’s building better people.  We really need to outgrow the stage of human society where an objective observer can point out that our leaders are generally psychopaths.  And that will be the hardest job of all.
