Dawn of a new era

With the advent of ingenious algorithms and powerful microprocessors, the long-held dream of artificial intelligence has received a much-awaited boost. The advances are set to revolutionize everything – from the way we will drive cars to how we will develop and deliver medical treatments.

Text by Goran Mijuk, pictures by Mark Richards, courtesy of the Computer History Museum

A small differential analyzer built by theoretical physicist Arnold Nordsieck.

Published on 07/08/2020

A hundred years from now, historians might wonder when humanity fast-tracked its transition into the digital age. While it may be difficult to pin down an exact moment, the week of October 5 to 9, 2015, might be one of the places to look.

During this short period of time, Google-owned artificial intelligence firm DeepMind gained the upper hand over a professional human player in the highly complex Chinese board game of Go.

The contest was by no means a geek event for game freaks. It was a proof of concept that artificial intelligence could, at least partially, live up to its ultimate goal of reaching human-like qualities.

Experts had considered the ancient game, which was already played during the time of Chinese philosopher Confucius more than 2500 years ago, too difficult for a computer to handle. 

The number of possible board configurations in Go is bigger than the number of atoms in the observable universe. Searching through such an enormous space move by move would be beyond even the most powerful supercomputers.
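
A quick back-of-the-envelope calculation shows why brute force is hopeless. The figures below are commonly cited approximations used here purely for illustration: roughly 250 legal moves per turn and about 150 moves in a typical game.

```python
import math

# Illustrative assumptions, not exact counts: ~250 legal moves per turn,
# ~150 moves in a typical game, ~10^80 atoms in the observable universe.
branching_factor = 250
game_length = 150

game_tree_exponent = game_length * math.log10(branching_factor)
print(f"Possible Go game sequences: ~10^{game_tree_exponent:.0f}")  # ~10^360
print("Atoms in the observable universe: ~10^80")
```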

To overcome the gargantuan computing challenge, researchers at DeepMind devised an ingenious trick: rather than trying to compute every possible move on ever-faster microprocessors, the engineers built so-called deep neural networks with millions of connections, arranged much like neurons in the human brain.

The layered neural systems helped reduce computing needs as they narrowed down the search space, calculating only those moves deemed most likely to win the game. Through this approach, known as deep learning, the algorithm tackled the problem in a “much more human-like” manner, as DeepMind CEO Demis Hassabis described the technique.
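
The principle can be illustrated with a minimal, self-contained sketch in Python. This is not DeepMind’s code: the policy and value functions below are random stand-ins for trained networks, and all numbers are made up. The point is simply that expanding only a handful of promising moves per turn keeps the search tree tractable.

```python
import random

random.seed(0)

def policy_scores(state, moves):
    """Stand-in for a trained policy network: assigns each move a probability."""
    raw = [random.random() for _ in moves]
    total = sum(raw)
    return {m: r / total for m, r in zip(moves, raw)}

def value_estimate(state):
    """Stand-in for a trained value network: guesses how good a position is."""
    return random.random()

def narrowed_search(state, legal_moves, depth, top_k=5):
    """Toy look-ahead that expands only the top_k moves suggested by the policy."""
    if depth == 0:
        return value_estimate(state)
    scores = policy_scores(state, legal_moves)
    best_moves = sorted(legal_moves, key=scores.get, reverse=True)[:top_k]
    # Expanding 5 branches per turn instead of ~250 shrinks the tree from
    # roughly 250**4 (about 3.9 billion) positions to 5**4 = 625 at depth 4.
    return max(narrowed_search(state + (m,), legal_moves, depth - 1, top_k)
               for m in best_moves)

print(narrowed_search(state=(), legal_moves=list(range(250)), depth=4))
```

In AlphaGo itself, this narrowing was combined with Monte Carlo tree search and networks trained on expert games and self-play; the sketch only captures the pruning idea.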

The artificial intelligence community was aflame after Google published the results in January 2016. A few weeks later, after DeepMind’s system beat South Korean Go champion Lee Se-dol, an avalanche of media reports around the world touted the superiority of artificial intelligence systems over human ingenuity. 

AI, and with it the promises of data and digital, had re-awakened.

The SAGE director console from 1958 showed all air space activity. Operators could request information about objects and use the light gun to assign identification numbers to displayed aircraft.

A winter’s tale

DeepMind’s success gave fresh impetus to a sector which for years had captured the imagination of science fiction writers, but had repeatedly failed to live up to the extremely high hopes that some of the leading experts had pinned on it when the field began to emerge in the 1950s.

Right from the outset – taking inspiration from Alan Turing, who in 1950 devised the so-called Turing test, under which a machine can be called intelligent if it can convince humans that it is human – researchers were ready to take on the world. 

When mathematician John McCarthy, who coined the term artificial intelligence, organized the now world-famous Dartmouth Conference in 1956, he and his colleagues aimed to show that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

The initial euphoria led to some remarkable early breakthroughs, such as the first industrial robot in 1961, the construction of a chat-bot in 1964 and an “electronic person” that was able to reflect on its own actions. 

Before long, however, AI fell out of favor. Investors started to doubt the high-flying promises of leading figures such as Marvin Minsky, who wholeheartedly proclaimed in 1970 that within three to eight years of research, “we will have a machine with the general intelligence of a human being.”

What followed is generally known as the AI winter. The term describes a period of limited funding and low research activity, which held the sector back for more than a decade until IBM and others gave the field a new lease of life, leading to digital marvels such as the supercomputer Deep Blue, which beat reigning chess world champion Garry Kasparov in 1997.

Skepticism lingered, however: “From one perspective, we have made great progress. AI is a fairly well-defined field which has grown over the last 50 years and has helped solve many problems – be it adaptive spam blocking, image/voice recognition, high-performance searching, etc.,” computer researchers at the University of Washington wrote in a 2006 paper outlining the field’s history. 

“On the other hand,” the researchers noted, “the things the originators of AI, such as Turing and McCarthy, set out to do seem just as far away now as they were back then ... There is no computer that can pass the Turing test, there was no mass replacement of experts with software systems, and computers can’t beat humans at some simple but strategic games like Go.”

Replica of the Hollerith Electric Tabulating System

The 60 million cards punched in the 1890 United States census were fed manually into machines like this for processing. The dials counted the number of cards with holes in a particular position. The sorter on the right would be activated by certain hole combinations, allowing detailed statistics to be generated (for example, the number of married farmers over 40 years of age). An average operator could process about 7000 cards a day, at least 10 times faster than manual methods.

Eternity beckons

When DeepMind cracked the code of Go a decade later, many researchers and onlookers were naturally euphoric. The victory meant that artificial intelligence had taken a big step forward and solved a riddle that had seemed almost impossible to crack just 10 years earlier. Now, many believed it would only be a matter of time before computer systems were created that could pass the Turing test and replace highly specialized experts.

Figures such as futurist and data scientist Ray Kurzweil, who had long envisioned that life on earth would be completely transformed through powerful computers, rose to prominence with visions that seem inconceivable even to the most audacious science fiction writers.

Kurzweil, who joined Google as a director of engineering in 2012 after publishing his book How to Create a Mind, believes that by the year 2029 computers will have human-level intelligence and will pass the Turing test. By 2045, humanity will reach what he calls the singularity – the moment when machines will surpass humans.

His thinking has found a host of followers who consider themselves transhumanists. They believe that humanity can strive for eternal life thanks to new technologies. In their vision, humans will not only drive around in self-driving vehicles. We will be able to hook up to computers, cure diseases and replace body parts so that we can effectively live forever.

Reality check 

Such high-flown visions naturally attract critics, who say that Kurzweil and his followers are just too optimistic and blind to some of the more dangerous aspects of AI. 

The late Stephen Hawking and even entrepreneur Elon Musk, who believes that by 2020 he will be ready to put fully self-driving cars on American roads, have warned that AI could spiral out of control.

Others again are much more down-to-earth. In a 2015 article in the Wall Street Journal, Vasant Dhar, a professor at New York University’s Stern School of Business and the NYU Center for Data Science, said that, while new digital systems will get smarter over time, “the reality is a slow, steady march up.”

Even DeepMind’s victory has been put into perspective, as experts noted that its success with Go was partly due to the fact that it was focusing on a rules-based game, which was easier to tackle than more general problems such as fully self-driving cars or drug discovery. 

Meanwhile, others tried to dampen euphoria in order not to repeat an AI winter. Among the most vocal critics is Zachary Lipton from Carnegie Mellon, who describes current AI technology as a special form of old-fashioned machine learning algorithms that perform some form of pattern matching and are a far cry from passing the Turing test. Likewise, Gary Marcus from New York University told the Wall Street Journal that, despite its recent advances, “deep learning is just a statistical technique, and all statistical techniques suffer from deviation from their assumptions.”

Developed during the 1960s and early 1970s, Shakey the Robot was the first so-called general-purpose robot that could reason about its own actions.

Growing boom and divide

The concerns of some specialists have done little, however, to dampen the frenzy that currently prevails in the world of business. Indeed, the global economy is set to receive a fresh lifeline in the years ahead. 

Thanks to AI-driven technologies – including computer vision, natural language processing, virtual assistants, robotic process automation and advanced machine learning – the global economy could receive a boost of more than 10 trillion dollars by the year 2030, according to research done by consultancy PwC.

Investments are already growing fast, especially since DeepMind’s 2015 feat. According to a recent report by the OECD, private equity investments in AI start-ups have jumped from less than 2 billion dollars in 2011 to more than 16 billion in 2017, with the United States, China and Europe in the lead. “The surge in private investment suggests that investors are increasingly aware of the potential of AI, and are crafting their investment strategies accordingly,” the OECD said.

Meanwhile, consultancy McKinsey believes that the impact of AI will be similar to other transformative, general-purpose technologies like the steam engine, electricity and computing. “The economic impact of AI is likely to be large, comparing well with other general-purpose technologies in history,” McKinsey noted in its report.

The consultancy also urged companies to speed up development because “there is a risk that a widening AI divide could open up between those who move quickly to embrace these technologies and those who do not adopt them.”

This gap, McKinsey warned, is also set to widen “between workers who have the skills that match demand in the AI era and those who don’t. The benefits of AI are likely to be distributed unequally, and if the development and deployment of these technologies are not handled effectively, inequality could deepen, fueling conflict within societies.”

According to McKinsey, this fourth industrial revolution means that “up to 375 million workers, or 14 percent of the global workforce, may need to change occupations – and virtually all workers may need to adapt to work alongside machines in new ways.”

Ray Kurzweil has become a leading figure of AI research.

Healthcare picks up

Today, most industries have stepped up efforts to boost their data and digital capabilities, including the pharmaceuticals and healthcare industry, which has lagged behind other sectors due both to its complexity and to repeated failures in the field.

Industry blog BenchSci has listed more than 30 pharmaceutical companies that have recently concluded deals in the data and digital sphere and have enhanced their data analytics and AI capabilities, especially in the area of R&D. Among the companies that have boosted their digital footprint are AstraZeneca, BASF, Celgene, GSK and Pfizer. 

And more is yet to come. According to a survey sponsored by industry magazine Verdict AI, 62 percent of pharma companies are ready to invest in AI soon, while 72 percent believe that AI will be of paramount importance to how they do business in the future.

Academia is likewise embracing the power of data and digital, given its capacity for enhanced analysis and prediction of diseases. Imperial College London and the University of Melbourne have recently shown that AI can help in establishing the prognosis of ovarian cancer patients and identifying which treatment option would be most effective. Separately, researchers at New York University have developed an AI program that can read slides of tissue samples to diagnose two common types of lung cancer with 97 percent accuracy. 

Other studies have shown similarly impressive results, and doctors are generally positive about the power of data and digital.

In a recent survey by Cardinal Health Specialty Solutions, more than half of the 180 participating oncologists said they were “excited” about the future impact of AI on the industry and expected AI to help enhance the quality of care, improve clinical outcomes and drive operational efficiencies, with some consultancies expecting global efficiency gains in excess of 100 billion dollars by 2026.

The Cray-1, built in the 1970s, was one of the most successful supercomputers ever made. Its processing power was low by today’s standards: it had a memory capacity of 8.39 megabytes and a 64-bit processor.

Powered by data and digital

Novartis, which has been active in the digital health field since 2009 when it started its collaboration with smart pill producer Proteus Digital Health, is also powering ahead. 

After the Chief Digital Officer role was created in 2017, the digitization of Novartis gained even more momentum when Vas Narasimhan took up the CEO position in 2018 and put the company on a trajectory to become a leading medicines company powered by data and digital.

So far, Novartis has initiated a dozen lighthouse projects – some of which, such as Nerve Live and PharmaACT, are already well advanced – aimed at strengthening business areas ranging from R&D and process automation to manufacturing and sales.

Meanwhile, the company’s digital strategy also includes the boosting of AI capabilities and entails efforts to become a partner of choice for digital leaders.

Novartis, which has already worked with companies such as Google and Microsoft in the past, has over the past few quarters tied up with McKinsey’s QuantumBlack in the area of clinical trial overview, with IBM Watson in clinical recruitment and with Intel in the realm of high-content screening. In January 2019, the company also announced a partnership with the University of Oxford’s Big Data Institute to predict how patients respond to drugs.

More is yet to come as data and digital look like the next big opportunity to develop innovative and powerful therapies and pave the way for potential cures. But despite the industry hype, breakthroughs may not be just around the corner. Hard work and bold moves will be needed to catapult the pharmaceuticals industry and medicine into the new digital era.

As Vas Narasimhan wrote in a LinkedIn essay in 2019 after the company announced its start-up platform Biome: “It’s early days for the digital revolution in healthcare. We have so much to learn – in terms of the technologies, as well as new and agile ways of working. By working closely with the tech industry, I believe we can challenge conventional thinking, truly reimagine medicine and help further ‘bend the curve of life’ for billions of people around the world.”
