AI is On Fire! Why Now?
One can't pick up a newspaper or a magazine these days without reading about how Artificial Intelligence (AI) is going to change the world — dramatically and fast. Maybe yes; maybe no. But in either case: Why now? Why is there this sudden shift from mobile-first to AI-first?
- "ARTIFICIAL INTELLIGENCE CAN SOLVE THE WORLD’S GRAND CHALLENGES" from the "IBM Watson AI XPRIZE Wild Card Round" announcement.
- "Here are five global problems that machine learning could help us solve:... healthcare, making driving safer, transforming how we learn, helping us be smarter about energy, helping wildlife...."
- "How Artificial Intelligence Can Solve Industry Challenges.... We’re at a turning point in the information age.... Artificial intelligence will eventually touch nearly every industry on the planet.... Cyber Security, reducing energy costs, healthcare, consumer goods and services, finance, government and environment, procurement...."
- "DeepMind has conquered games, London's Underground and now it wants to help save the planet."
Getting tired? Do you believe? Oh, heck, here’s a couple more:
- "AI can solve world's biggest problems..."
- "How AI is transforming healthcare and solving problems in 2017... New studies and new products are showing how artificial intelligence is transforming the industry."
- "Intelligence At Scale... Until recently, computers struggled to parse unstructured data like Facebook content and YouTube videos. But thanks to recent [advancements in AI] ... that’s changing."
- "Artificial intelligence can solve planet's 'hard problems'..."
Uncle, Uncle, Uncle!
Sorry, but not so fast. K-12 is not immune to the AI hyperbole:
- "... The area where AI will most profoundly affect our lives is in education."
- "Artificial intelligence technology will positively impact the nature of education"
One more note before we move on:
[We have addressed our assessment of the veracity of these predictions on K-12 in an earlier blog. Our focus, however, in this blog, is a different question: Why now? But, fear not — we will return to AI in ed in future blogs — absolutely!]
AI is not a particularly new technology — there has been considerable effort in the field of artificial intelligence since at least 1955, when John McCarthy, a mathematician turned computer scientist, coined the term "artificial intelligence." And there has been no shortage of predictions over the years that AI was going to be the ultimate problem-solving technology. But over the last two years there has been a hyper-exponential growth in those sorts of predictions. (N.B. The predictions, above, are all from 2016/2017.)
So, why are companies hyper-rapidly moving to being "AI-first" now?
<Cue the trumpets> Actually, the answer to the question "Why now?" takes only two words — Moore’s Law.
Moore’s Law? Yes! Moore’s Law explains how manufacturers such as Intel, AMD, Samsung, Micron and Toshiba have continually been able to create computing and memory chips that enable individuals and organizations to compute things that simply couldn’t have been computed before. Today, in 2017, businesses — and middle schoolers — have the computer power to process billions upon billions of pieces of data and find the patterns hidden in those billion-billion pieces of data. In 2000, in 2010, even in 2015, we simply didn’t have the Zorch — the computer horsepower — to do the computations necessary to understand the intricacies of self-driving cars, healthcare, etc., etc.
OK, so why is Artificial Intelligence on fire now? Because now we can really do AI! The "Great A.I. Awakening" is upon us because, finally, there is an abundance of low-cost, computational Zorch available!
Lots of hyperbole being batted about! So, catch your breath, and let’s back up.
Gordon Moore, one of the co-founders of Intel, the world’s leading manufacturer of computer chips, noticed a pattern that has come to be called "Moore’s Law."
- "On April 19, 1965 ... studying the trend he’d seen in the previous few years, Moore predicted that every year we’d double the number of transistors that could fit on a single chip of silicon so you’d get twice as much computing power for only slightly more money. When that came true, in 1975, he modified his prediction to a doubling roughly every two years. “Moore’s Law” has essentially held up ever since — and, despite the skeptics, keeps chugging along, making it probably the most remarkable example ever of sustained exponential growth of a technology." (From Thomas Friedman)
What does "doubling roughly every two years" really mean; how does one understand this exponential growth?
- "The performance of technology has increased exponentially over time. Floating operations per second (FLOPS) is a handy way to compare processors with different microarchitectures. We researched the FLOPS of supercomputers, smartphones, smart watches and video game consoles to show how we have seen a 1 trillion-fold increase in performance over the past 60 years [1956-2015]."
- "A single Apple iPhone 5 has 2.7 times the processing power than the 1985 Cray-2 supercomputer." (The bold emphasis is ours in both quotes.)
- "Today, your cell phone has more computer power than all of NASA back in 1969, when it placed two astronauts on the moon. Seems hard to believe, we know, but it is actually true — a hand-held apparatus on which we fling birds at pigs has greater computational capabilities than the arsenal of machines used for guiding crafts through outer space some 45 years ago." (The bold emphasis is ours in the above quote.)
- An infographic shows that there were 2,300 transistors, in 1971, in an Intel 4004 computer chip — which some of us actually programmed, ahem! — while, in 2010, there were 2,600,000,000 transistors in a 16-core SPARC T3 chip.
- "... Moore's Law has continued unabated for 50 years, with an overall advance of a factor of roughly 231, or 2 billion. That means memory chips today store around 2 billion times as much data as in 1965. Or, in more general terms, computer hardware today is around 2 billion times as powerful for the same cost." (Article from 2015)
- If automobile engineering had followed Moore’s Law, this would be the result: "[Today] you would be able to go ... 300,000 miles per hour. You would get two million miles per gallon of gas, and all that for the mere cost of 4 cents!" (From Thomas Friedman)
That is what exponential growth means!
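The arithmetic behind those quotes is easy to check for yourself. Here is a minimal Python sketch using the doubling cadence from the quotes above (the function name and the 50-year span are ours, purely for illustration):

```python
def moores_law_factor(years, doubling_period=2.0):
    """Total growth factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# At a strict two-year cadence, 50 years gives 2**25 doublings' worth of growth:
print(f"{moores_law_factor(50):,.0f}")  # 33,554,432

# The "2 billion" figure quoted above corresponds to 31 doublings over those
# 50 years -- one doubling roughly every 19 months, a bit faster than 2 years:
print(f"{2 ** 31:,}")  # 2,147,483,648
```

Note that the quoted "factor of 2 billion" and the textbook "doubling every two years" differ: 2 billion over 50 years works out to a doubling roughly every 19 months, which is why Moore's original one-year prediction had to be revised, not abandoned.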
The quotes above describe the growth in the power of a computing device. But Moore also pointed out that the benefit of the doubling can be taken in dollars instead: hold a chip's power constant, and its price drops by roughly half every two years. That point is very important as we move into the IoT (Internet of Things) era. Most IoT devices don’t need a powerful chip; indeed, a chip that is two, four, six, eight, even ten years old might do just fine in an "internet-connected lightbulb." And, because of Moore’s Law, that "old" chip literally costs pennies today.
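The same doubling arithmetic, run in reverse, gives the price of an old chip. A minimal sketch (the $5.00 starting price is a made-up figure, purely for illustration):

```python
def price_after(years, price_when_new=5.00, halving_period=2.0):
    """Price of a fixed chip design after `years` of Moore's-Law halving."""
    return price_when_new / (2 ** (years / halving_period))

# A chip that cost $5.00 when new costs pennies a decade later:
print(f"${price_after(10):.2f}")  # $0.16
```

Five halvings in ten years shrink the price by a factor of 32 — which is why a years-old chip is cheap enough to bury inside a lightbulb.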
How, you ask, do they pack 2,600,000,000 transistors into virtually the same space that once held 2,300? While a technical answer to that question is well beyond the scope of this blog post, the engineers have, somehow, figured out how to make the transistors' wires narrower and shorter, and the interconnections closer together — and still dissipate the heat that all those moving electrons generate!
- "Devices currently in production have feature sizes as small as 10 or 14 nanometers, and IBM has just announced chip with 7 nanometer features.... By comparison, a helical strand of DNA is 2.5 nanometers in diameter, thus commercial semiconductor technology is now entering the molecular and atomic realm." (Article from 2015)
Moore’s Law explains the "why now?" question. That was the easy one. Will AI actually be the universal problem solver that AI-First companies ... are betting on? Are there dangers to all that AI being let loose on our planet? And, will Moore’s Law continue to provide the needed low-cost, computational Zorch? Closer to home: how will AI impact K-12? BRB. :-)
Cathie Norris is a Regents Professor and Chair in the Department of Learning Technologies, School of Information at the University of North Texas. Visit her site at www.imlc.io.
Elliot Soloway is an Arthur F. Thurnau Professor in the Department of CSE, College of Engineering, at the University of Michigan. Visit his site at www.imlc.io.
Find more from Elliot Soloway and Cathie Norris at their Reinventing Curriculum blog at thejournal.com/rc.