Perhaps the greatest challenge of the algorithm revolution is that as machines and the algorithms that drive them have become ever more complex, we are rapidly losing our ability to understand how they work and to anticipate unexpected behaviors and weaknesses. From just 145,000 lines of code to place humans on the moon in 1969 to more than 2 billion lines of code to run Google in 2015, today’s software has grown into a labyrinth of interconnected systems.
Built into these systems are the human values of their developers, the commercial needs of their creators, and the paltry limits of human understanding.
Each of these systems rests on a layer of trust in other systems, so that an error, vulnerability, or mistaken understanding at any level can cascade across the whole. Moreover, modern systems are composed of so many components, any of which may be optional or disabled at a given moment, that, combined with mistaken human assumptions and the limits of human imagination in designing them, unexpected behaviors and vulnerabilities can emerge.
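This layered-trust failure mode can be sketched in a few lines. The example below is a toy illustration with hypothetical layer names, not any real system: each layer trusts the one beneath it, so a defect at the bottom surfaces only at the very top, far from its origin.

```python
def storage_layer(key):
    # Buggy bottom layer: silently returns a default record
    # instead of rejecting an unknown key.
    return {"balance": "1000"}

def cache_layer(key):
    # Trusts storage blindly; never checks whether the key is valid.
    return storage_layer(key)

def api_layer(key):
    # Trusts the cache; surfaces whatever value it receives.
    return cache_layer(key)["balance"]

# A request for a nonexistent account succeeds at every layer,
# because no layer validates the one below it.
print(api_layer("nonexistent-account"))
```

Run as written, the request for a nonexistent account returns a balance rather than an error: the bug propagates unchecked through every layer that trusted the one beneath it.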