Artificial Intelligence

Your Future Doctor May Not Be Human
But It May Be Powered By HPE

I am a nerd.  My nerdiness embraces characters such as Marvel’s Avengers.  Not the “Infinity War” kind, but the “Age of Ultron” kind.  In “Age of Ultron,” the headliner is an archetype of artificial intelligence gone bad.  But then, what could one expect, when the Official Handbook of the Marvel Universe lists his occupation as “would-be conqueror, enslaver of men”?  Ultron thus represents the ultimate example of an artificial intelligence application gone wrong: an intelligence that seeks to overthrow the humans who created it.  Imagine, though, taking Ultron’s positives (genius intelligence, stamina, reflexes, subsonic flight speed, and demigod-like durability) and applying them to a new occupation: “a qualified practitioner of medicine.”

Today, we have boundless information; limitless connections between organizations, people, and things; pervasive technology; and infinite opportunities to generate many kinds of value for our organizations, our societies, our families, and ourselves.  Technology enhanced with artificial intelligence is all around us. You might have a robot vacuum cleaner ready to leap into action to clean up your kitchen floor. Maybe you asked Siri or Google—two apps using decent examples of artificial intelligence technology—for some help already today. Or, as was recently documented, Siri sent an email its owner never intended to send because it “thought” it heard certain keywords.  The continual enhancement of AI and its increased presence in our world speak to achievements in science and engineering that have tremendous potential to improve our lives.


What Is Artificial Intelligence, Really?

AI has been around in some incarnation since the 1950s, and its promise to revolutionize our lives has been raised frequently, with many of those promises remaining unfulfilled.  Fueled by the growth of computational hardware capabilities and associated algorithm development, as well as some degree of hype, AI research programs have ebbed and flowed.

Confusion surrounding AI – its applications in healthcare and even its definition – remains widespread in popular media. Today, AI is shorthand for any task a computer can perform just as well as, if not better than, humans.  AI is not defined by a single technology. Rather, it includes many areas of study and technologies behind capabilities like voice recognition, natural-language processing (NLP), image processing and others that benefit from advances in algorithms, abundant computation power and advanced analytical methods like machine learning and deep learning.

Most of the computer-generated solutions now emerging in healthcare do not rely on independent computer intelligence. Rather, they use human-created algorithms as the basis for analyzing data and recommending treatments.


Ex 1. Main tenets of Artificial Intelligence.



By contrast, “machine learning” relies on neural networks (a computer system modeled on the human brain). Such applications involve multilevel probabilistic analysis, allowing computers to simulate and even expand on the way the human mind processes data. As a result, not even the programmers can be sure how their computer programs will derive solutions.

Starting around 2010, the field of AI was jolted by the broad and unforeseen successes of a specific, decades-old technology: multi-layer neural networks (NNs). This phase-change reenergizing of a particular area of AI is the result of two evolutionary developments that together crossed a qualitative threshold:

    1. Fast graphics processing units (GPUs) allowing the training of much larger—and especially deeper (i.e., more layers)—networks, and
    2. Large labeled data sets (images, web queries, social networks, etc.) that could be used as training testbeds.

This combination has given rise to the “data-driven paradigm” of Deep Learning (DL) on deep neural networks (DNNs), especially with an architecture termed Convolutional Neural Networks (CNNs).  More to come on this shortly.
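To make the CNN idea concrete, here is a minimal, illustrative sketch of the convolution operation at the heart of a CNN layer: a small kernel slides across an image and produces a feature map that lights up where a pattern (here, a vertical edge) appears. This is a toy NumPy implementation for intuition only, not how production deep learning frameworks implement it; the image and kernel values are made up for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a 2-D image (valid padding, stride 1),
    producing one feature map -- the core operation in a CNN layer."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Each output value is a weighted sum of a local patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny synthetic "image": the left half is dark (0), the right half bright (1).
image = np.array([[0, 0, 1, 1]] * 4, dtype=float)
# A kernel that responds to left-to-right brightness jumps (a vertical edge).
kernel = np.array([[-1.0, 1.0]])
feature_map = conv2d(image, kernel)
# The feature map peaks exactly at the dark-to-bright boundary in each row.
```

In a real CNN, many such kernels are learned from labeled data rather than hand-designed, and their outputs are stacked layer upon layer, which is why the GPU and big-data developments above mattered so much.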


Diagnosing with “The Stethoscope of the 21st Century”

A new kind of doctor has entered the exam room and his name is not Dr. Ultron.  Artificial intelligence is making its way into hospitals around the world. Those wary of a robot takeover have nothing to fear; the introduction of AI into health care is not necessarily about pitting human minds against machines. AI is in the exam room to expand, sharpen, and at times ease the mind of the physician so that doctors are able to do the same for their patients.


Ex 2. Productivity Gains from Artificial Intelligence in Healthcare



Bertalan Meskó, better known as The Medical Futurist, has called artificial intelligence “the stethoscope of the 21st century.” His assessment may prove to be even more accurate than he expected. While various techniques and tests give physicians all the information they need to diagnose and treat patients, physicians are already overburdened with clinical and administrative responsibilities, and sorting through the massive amount of available information is a daunting, if not impossible, task.

That’s where having the 21st century stethoscope could make all the difference.

The applications for AI in medicine go beyond administrative drudge work, though. From powerful diagnostic algorithms to finely-tuned surgical robots, the technology is making its presence known across medical disciplines. Clearly, AI has a place in medicine; what we don’t know yet is its value. To imagine a future in which AI is an established part of a patient’s care team, we’ll first have to better understand how AI measures up to human doctors. How do they compare in terms of accuracy? What specific, or unique, contributions is AI able to make? In what way will AI be most helpful — and could it be potentially harmful — in the practice of medicine? Only once we’ve answered these questions can we begin to predict, then build, the AI-powered future that we want.


AI vs. Human Doctors

Although we are still in the early stages of its development, AI is already just as capable as (if not more capable than) doctors in diagnosing patients. Researchers at the John Radcliffe Hospital in Oxford, England, developed an AI diagnostics system that’s more accurate than doctors at diagnosing heart disease, at least 80 percent of the time. At Harvard University, researchers created a “smart” microscope that can detect potentially lethal blood infections: the AI-assisted tool was trained on a series of 100,000 images garnered from 25,000 slides treated with dye to make the bacteria more visible. The AI system can already sort those bacteria with a 95 percent accuracy rate. A study from Showa University in Yokohama, Japan revealed that a new computer-aided endoscopic system can reveal signs of potentially cancerous growths in the colon with 94 percent sensitivity, 79 percent specificity, and 86 percent accuracy.
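The three figures quoted for the endoscopy study measure different things: sensitivity is the share of true disease cases the system catches, specificity is the share of healthy cases it correctly clears, and accuracy is the overall hit rate. A quick sketch shows how they come out of a standard 2x2 confusion matrix; the patient counts below are hypothetical, chosen only to illustrate the formulas, and are not the study’s actual data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute sensitivity, specificity, and accuracy from a 2x2
    confusion matrix: tp/fp = true/false positives, tn/fn = true/false negatives."""
    sensitivity = tp / (tp + fn)          # caught cases / all real cases
    specificity = tn / (tn + fp)          # cleared healthy / all healthy
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # correct calls / all calls
    return sensitivity, specificity, accuracy

# Hypothetical counts for 200 screened patients (100 with growths, 100 without):
sens, spec, acc = diagnostic_metrics(tp=94, fp=21, tn=79, fn=6)
# sens = 0.94, spec = 0.79, acc = 0.865
```

Note that sensitivity and specificity usually trade off against each other: a system tuned to miss fewer cancers (higher sensitivity) will typically flag more healthy patients by mistake (lower specificity), which is why studies report both rather than accuracy alone.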

In some cases, researchers are also finding that AI can outperform human physicians in diagnostic challenges that require a quick judgment call, such as determining whether a lesion is cancerous. In one study, published in December 2017 in JAMA, deep learning algorithms were better able to diagnose metastatic breast cancer than human radiologists when under a time crunch. While human radiologists may do well when they have unrestricted time to review cases, in the real world (especially in high-volume, quick-turnaround environments like emergency rooms) a rapid diagnosis could make the difference between life and death for patients.

AI is also better than humans at predicting health events before they happen. In April, researchers from the University of Nottingham published a study that showed that, trained on extensive data from 378,256 patients, a self-taught AI predicted 7.6 percent more cardiovascular events in patients than the current standard of care. To put that figure in perspective, the researchers wrote: “In the test sample of about 83,000 records, that amounts to 355 additional patients whose lives could have been saved.” Perhaps most notably, the neural network also had 1.6 percent fewer “false alarms” — cases in which the risk was overestimated, possibly leading to patients having unnecessary procedures or treatments, many of which are very risky.

AI is perhaps most useful for making sense of huge amounts of data that would be overwhelming to humans. That’s exactly what’s needed in the growing field of precision medicine.  Hoping to fill that gap is The Human Diagnosis Project (Human Dx), which is combining machine learning with doctors’ real-life experience. The organization is compiling input from 7,500 doctors and 500 medical institutions in more than 80 countries in order to develop a system that anyone — patient, doctor, organization, device developer, or researcher — can access in order to make more informed clinical decisions.


Potential Pitfalls you need to consider.

There are practical actions IT leaders can take to cut through the confusion, complexity, and hype surrounding AI and to position their organizations to exploit AI for real business value.

    1. Do your homework, get calibrated, and keep up.
      While most executives won’t need to know the difference between convolutional and recurrent neural networks, you should have a general familiarity with the capabilities of today’s tools, a sense of where short-term advances are likely to occur, and a perspective on what lies further beyond the horizon. Tap your data-science and machine-learning experts for their knowledge, talk to some AI pioneers to get calibrated, and attend an AI conference or two to get the real facts; news outlets can be helpful, but they can also be part of the hype machine. Ongoing tracking studies by knowledgeable practitioners, such as the AI Index (a project of the Stanford-based One Hundred Year Study on Artificial Intelligence), are another helpful way to keep up.
    2. Adopt a sophisticated data strategy.
      AI algorithms need assistance to unlock the valuable insights lurking in the data your systems generate. You can help by developing a comprehensive data strategy that focuses not only on the technology required to pool data from disparate systems but also on data availability and acquisition, data labeling, and data governance.
    3. Understand the explainability problem.
      Explainability is not a new issue for AI systems, but it has grown along with the success and adoption of deep learning, which has given rise both to more diverse and advanced applications and to more opaqueness.  Larger and more complex models make it hard to explain, in human terms, why a certain decision was reached (and even harder when it was reached in real time). This is one reason adoption of some AI tools remains low in application areas where explainability is useful or indeed required.  Furthermore, as the application of AI expands, regulatory requirements could also drive the need for more explainable AI models.
    4. Watch for bias in data and algorithms.
      Many AI limitations can be overcome through technical solutions already in the works; bias is a different kind of challenge. Potentially devastating social repercussions can arise when human preferences (conscious or unconscious) are brought to bear in choosing which data points to use and which to disregard. Furthermore, when the process and frequency of data collection are uneven across groups and observed behaviors, it is easy for problems to arise in how algorithms analyze that data, learn, and make predictions. Negative consequences can include misrepresented scientific or medical prognoses or distorted financial models.  In many cases, these biases go unrecognized or are disregarded under the veil of “advanced data science,” “proprietary data and algorithms,” or “objective analysis.”  As machine learning and AI algorithms are deployed in new areas, there will probably be more instances in which these potential biases become baked into data sets and algorithms. Such biases tend to stay embedded, because recognizing them, and taking steps to address them, requires a deep mastery of data-science techniques, including data collection.
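One simple, concrete probe for the bias problem described above is to compare a model’s positive-prediction rate across patient subgroups: a large gap does not prove bias, but it flags where the data or algorithm deserves a closer look. The sketch below is illustrative only; the predictions and group labels are invented, and real bias auditing involves far more than this single check.

```python
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Compare the rate of positive predictions (e.g., 'flag for treatment')
    across subgroups. Large gaps between groups warrant investigation."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, grp in zip(predictions, groups):
        counts[grp][0] += pred
        counts[grp][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical model outputs (1 = flagged) for ten patients in two groups:
preds  = [1, 1, 1, 1, 0,  0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates = positive_rate_by_group(preds, groups)
# rates["A"] = 0.8 vs. rates["B"] = 0.2 -- a gap worth investigating
```

Whether such a gap reflects genuine clinical differences or skewed data collection is exactly the judgment call that requires the data-science mastery mentioned above.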


HPE is already there.

HPE is hyper-focused on delivering enterprise artificial intelligence breakthroughs that unlock new revenue streams and build competitive advantages for our partners and customers. HPE offers a comprehensive set of computing innovations specifically targeted to accelerate deep learning analytics and insights. Building on a strong track record of comprehensive, workload-optimized compute solutions for AI and deep learning with its purpose-built HPE Apollo portfolio, HPE introduced a portfolio of new deep learning solutions that maximize performance, scale, and efficiency. HPE now offers greater choice for larger scale, dense GPU environments and addresses key gaps in technology integration and expertise with integrated solutions and services offerings.

HPE last year introduced the HPE Deep Learning Cookbook, a set of tools and recommendations to help customers choose the right technology and configuration for their deep learning tasks. It now includes the HPE Deep Learning Performance Guide, which uses a massive knowledge base of benchmarking results and measurements in the customer’s environment to guide technology selection and configuration. By combining real measurements with analytical performance models, the HPE Deep Learning Performance Guide estimates the performance of any workload and makes recommendations for the optimal hardware and software stack for that workload.


Ex 4. Main Components of the HPE Deep Learning Cookbook


Capitalizing on the full range of AI and deep learning capabilities requires purpose-built computers capable of learning freely, reasoning, and determining the most appropriate course of action in real time. The new HPE Apollo 6500 Gen10 System best addresses the most important step of training the deep learning model, offering support for eight NVIDIA Volta GPUs and delivering a dramatic increase in application performance.

The HPE SGI 8600 server is the premier HPC platform for petaflops-scale deep learning environments. A liquid-cooled, tray-based, high-density clustered server, the 8600 now includes support for Tesla GPU accelerators with NVLink interconnect technology. With GPU-to-GPU communication enabling 10X the FLOPS per node compared to CPU-only systems, the 8600 is designed to scale efficiently and to serve the largest and most complex data center environments with unparalleled power efficiency.

The NVLink 2.0 GPU interconnect is particularly useful for deep learning workloads, characterized by heavy GPU-to-GPU communications. High-bandwidth, low-latency networking adapters (up to four high-speed Ethernet, Intel® Omni-Path Architecture, InfiniBand Enhanced Data Rate [EDR], and future InfiniBand HDR per server) are tightly coupled with the GPU accelerators, which allows the system to take full advantage of the network bandwidth.

Consider what HPE is already doing for healthcare with AI.   The German Center for Neurodegenerative Diseases (DZNE) is using HPE’s memory-driven computing architecture to quickly and accurately process massive amounts and diverse types of data. Generated by genomics, brain imaging, clinical trials and other research into Alzheimer’s disease, the vast data is too much for traditional computing methods; they are simply too slow. Our system’s single, huge pool of addressable memory is easing bottlenecks, and opening the door to a cure.

The United Kingdom is aiming to cut cancer deaths by 10% using artificial intelligence as the key driver of improved health outcomes.  The ambitious new plan calls for the National Health Service (NHS, whose EMR data is hosted on HPE servers), the AI industrial sector, and health charities to use data and AI to transform the diagnosis of chronic diseases, with the goal of seeing around 22,000 fewer people dying from cancer each year by 2033. The plans will see at least 50,000 people each year diagnosed at an early stage of prostate, ovarian, lung, or bowel cancer through the use of emerging technologies that will cross-reference people’s genetics, habits, and medical records with national data to spot those at an early stage of cancer.

HPE is also partnering with other research and clinical organizations seeking to advance discovery in neuroscience. Imagine the data we can capture and analyze when working with the human brain, which has 100 billion neurons and 100 trillion synapses!


This journey is just getting started…

The promise of AI in healthcare is enormous, and the technologies, tools, and processes needed to fulfill that promise haven’t yet fully arrived. But if you believe it’s better to let other pioneers take the arrows, you may also find it very difficult to leapfrog from a standing start if you choose not to explore what AI tools can and can’t do now. With researchers and AI pioneers poised to solve some of today’s most difficult conundrums, you may want to partner with a team that understands what is happening on that frontier.  HPE should be that partner.


Reference Support

“Artificial Intelligence Will Redesign Healthcare,” The Medical Futurist, 2017.
“What AI Can and Can’t Do (Yet) for Your Business,” McKinsey Quarterly, January 2018.
“Your Future Doctor May Not Be Human. This Is the Rise of AI in Medicine,” January 31, 2018.
The One Hundred Year Study on Artificial Intelligence, Stanford University.
