Artificial Intelligence in Medicine - Revolutionizing Healthcare

AI Tips Dec 08, 2023

Artificial Intelligence in Medicine Introduction

Artificial Intelligence (AI) is currently considered one of the most promising directions not only in the IT industry but also in various other human activities. AI solutions, in particular, are seen as a cornerstone for realizing the concept of the "Digital Economy."

Just as electricity brought about a new industrial revolution in the 19th century, artificial intelligence is becoming a key driver of profound societal and economic transformation in the 21st century. Unlike previous industrial revolutions, however, the primary catalyst for these tectonic shifts is not technology or IT alone; it is the transformation of society itself. Consumer behavior is being reshaped by digitization, making people more discerning and demanding. With the application of IT, management has gained qualitatively new professional tools for observation, control, and supervision. The policies of governments and investors are also evolving: there is growing reluctance to invest in professions and activities burdened by routines inherited from previous years and reliant on low-skilled manual labor, and a decisive shift toward replacing them with robots and AI-based services.

According to IDC data, the market volume of cognitive systems and AI technologies in 2016 amounted to approximately $7.9 billion. Analysts believe that the compound annual growth rate (CAGR) will reach 54% by the end of this decade. As a result, by 2020, the industry's volume will exceed $46 billion. The majority of this market will be dominated by cognitive applications that automatically learn from data and provide various assessments, recommendations, or forecasts. Investments in AI software platforms, providing tools, technologies, and services based on structured and unstructured information, are estimated to reach $2.5 billion annually. The artificial intelligence market in healthcare and life sciences, according to Frost & Sullivan, is also expected to grow by 40% annually, reaching $6.6 billion in 2021.

A Brief History

Artificial intelligence has a long history rooted in the theoretical work of Turing and the cyberneticists in the first half of the 20th century, although its conceptual foundations emerged even earlier in philosophical works such as René Descartes' "Discourse on the Method" (1637) and Thomas Hobbes' "Human Nature" (written in 1640).

In the 1830s, English mathematician Charles Babbage conceived the idea of a complex digital calculator—an analytical engine, which, as claimed by the developer, could calculate moves for chess. By 1914, Leonardo Torres y Quevedo, the director of a Spanish technical institute, had built an electromechanical device capable of playing simple endgame chess almost as well as a human.

From the mid-1930s onward, following Turing's publications on machines capable of independently solving complex tasks, the global scientific community began to pay close attention to the question of machine intelligence. In 1950, Turing proposed considering a machine intelligent if an interrogator could not distinguish it from a human during communication. The same work introduced the concept of the "child machine": training an artificial intelligence in the manner of a small child rather than creating an instantly "smart adult" robot, a precursor to what is now called machine learning.

In 1954, the American researcher Allen Newell set out to write a chess-playing program, involving analysts from the RAND Corporation. The theoretical basis for the program was the method proposed by information theory founder Claude Shannon, with Turing providing its precise formalization.


The summer of 1956 saw the first working conference on artificial intelligence at Dartmouth College in the USA, featuring renowned scientists such as McCarthy, Minsky, Shannon, and Rochester, later acknowledged as the pioneers of artificial intelligence. Over six weeks, the participants deliberated on the possibilities of implementing projects in the field of artificial intelligence, and it was during this conference that the term "Artificial Intelligence" (AI) was coined. This landmark event is widely regarded as the starting point of AI as a field.

It is worth noting that research on AI did not always proceed smoothly, and success was not guaranteed for its pioneers. Following the explosive interest from investors, technologists, and scientists in the 1950s, coupled with fantastical expectations that computers would soon replace the human brain, the 1960s and 1970s brought a period of profound disappointment. The capabilities of computers at that time were inadequate for complex computations, and scientific thought on the mathematical framework for AI also reached an impasse. Echoes of this pessimism can be found in many textbooks on applied informatics published up to the present day. In public culture, and even in governmental regulatory documents, the image took shape of a robot or cybernetic algorithm as a pitiful agent, unworthy of attention, able to perform its functions only under human control.

However, from the mid-90s onwards, interest in AI resurged, and technologies began to advance rapidly. Since then, there has been an explosion of research and patent activity in this field.

Development in the Present Day

The first examples of inspiring and impressive results in the application of AI developments were achieved in activities requiring the consideration of a large number of frequently changing factors and the flexible adaptive response of humans, such as in entertainment and games.

Interest in the ability to create an "intelligent machine" comparable to a human in its intellectual capabilities began to grow steadily in 1997, when the IBM Deep Blue supercomputer defeated the reigning world chess champion, Garry Kasparov.


Advancements in AI from 2005 to 2008

Between 2005 and 2008, there was a qualitative leap in AI research. The mathematical and scientific community discovered new theories and models for training multilayer neural networks, laying the foundation for deep machine learning theory. The IT industry began producing high-performance and, most importantly, affordable and accessible computing systems. The combined efforts of mathematicians and engineers led to remarkable achievements over the last 10 years, with practical results pouring in from AI projects as from a horn of plenty.

In 2011, IBM Watson, a cognitive self-learning system, triumphed over longstanding champions in the game show Jeopardy! (whose Russian counterpart is "Svoya Igra").

In early 2016, Google's AlphaGo program defeated Europe's Go champion, Fan Hui. Two months later, AlphaGo secured a victory with a score of 4:1 against Lee Sedol, one of the world's top Go players. This event marked a historical milestone for AI, challenging the belief that a computer could not defeat a player of such caliber due to the intricate abstraction and numerous possible scenarios for consideration. In a sense, the computer needed to "think" creatively in the game of Go.

In January 2017, the Libratus program, developed at Carnegie Mellon University, emerged victorious in a 20-day poker tournament called "Brains Vs. Artificial Intelligence: Upping the Ante," winning over $1.7 million in chips. The next triumph came from an enhanced version of the AI called Lengpudashi, facing off against World Series of Poker (WSOP) participant Alan Du and several scientists and engineers. Interestingly, in this match the human team planned to beat the AI by exploiting its weaknesses. The strategy failed, however, and the advanced version of Libratus secured another victory. Noam Brown, one of the developers of Libratus, stated that people underestimate artificial intelligence: "People think that bluffing is a human characteristic, but it's not. A computer can understand that if you bluff, the payoff can be greater."

Over the past few years, AI-based solutions have been successfully implemented in various fields, enhancing process efficiency not only in entertainment but also in other industries. Technological giants such as Facebook, Google, Amazon, Apple, Microsoft, Baidu, and several other companies are investing substantial resources in AI research and are already applying various developments in their practical operations. In May 2017, Microsoft announced plans to integrate AI mechanisms into every software product and make them available to every developer.

The reduction in the cost of AI platforms and increased accessibility has allowed not only large corporations but also specialized companies and even startups to work with them. In recent years, numerous small research teams with limited financial capabilities have emerged, offering new and promising ideas and concrete working solutions based on AI. One of the most notable examples is the startup behind the widely popular mobile app Prisma—a team of developers created a service for processing photos with stylization inspired by various artists.

Mass Development and Integration of AI in Various Directions

The widespread development and implementation of AI across multiple domains became possible due to several key factors in the IT industry: the penetration of high-speed internet, significant growth in the performance and accessibility of modern computers coupled with a simultaneous decrease in ownership costs, the advancement of "cloud" solutions and mobile technologies, and the expansion of the open-source software (OSS) market. Industries heavily engaged in mass and distributed consumer services, such as advertising, marketing, commerce, telecommunications, government services, insurance, banking, and fintech, are considered particularly receptive to AI utilization. The wave of change also reached traditionally conservative sectors like education and healthcare.

What is Artificial Intelligence?

In the early 1980s, the computer scientists Avron Barr and Edward Feigenbaum proposed the following definition of AI: "Artificial Intelligence is the field of computer science that deals with the development of intelligent computer systems, i.e., systems possessing capabilities traditionally associated with human intelligence: language understanding, learning, reasoning, problem-solving, etc." Jeff Bezos, CEO of Amazon, describes AI as follows: "Over the last decades, computers have automated many processes that programmers could describe using precise rules and algorithms. Modern machine learning techniques allow us to do the same for tasks where it is much harder to prescribe clear rules."

In essence, artificial intelligence now encompasses a variety of software systems together with the methods and algorithms applied in them. Their key feature is the ability to solve intellectual tasks in a manner similar to human thought processes, including language understanding, learning, reasoning, and problem-solving. Among the most popular applications of AI are forecasting various situations, evaluating digital information to draw conclusions, and analyzing diverse data to uncover hidden patterns (data mining).

It is essential to emphasize that, at present, computers cannot model the complex processes of higher cognitive activity in humans, such as expressing emotions, love, or creativity. That belongs to the domain of so-called "strong AI," where a breakthrough is expected no earlier than 2030-2050. Computers do, however, successfully handle tasks of "weak AI," acting as cybernetic devices operating according to rules prescribed by humans. The number of successfully implemented projects in the realm of "medium AI" is also growing, where IT systems incorporate elements of adaptive self-learning and improve as they accumulate primary data, reclassifying textual, graphical, photo/video, and audio data in novel ways.

Neural Networks and Machine Learning – Fundamental Concepts of AI

To date, a diverse array of approaches and mathematical algorithms for constructing AI systems has been accumulated and systematized, such as Bayesian methods, logistic regression, support vector machines, decision trees, algorithm ensembles, and more.

Recently, many experts have concluded that the majority of modern, truly successful implementations are built on deep neural networks and deep learning.

Neural Networks (NNs): 

Neural networks are based on an attempt to recreate a primitive model of the nervous systems of biological organisms. In living beings, a neuron is an electrically excitable cell that processes, stores, and transmits information through electrical and chemical signals across synaptic connections. Neurons have complex structures and narrow specialization. By connecting with each other to transmit signals through synapses, neurons create biological neural networks; the human brain, for example, contains roughly 86 billion neurons and on the order of 100 trillion synapses. Essentially, this is the fundamental mechanism of learning and brain activity in all living beings, constituting their intelligence. For instance, in Pavlov's classic experiment, a bell was rung just before feeding a dog each time, and the dog quickly learned to associate the bell's ring with food. Physiologically, the result of the experiment was the establishment of synaptic connections in the dog's brain between areas of the cortex responsible for hearing and those responsible for controlling the salivary glands. Consequently, when the dog's cortex was stimulated by the sound of the bell, salivation began. Thus, the dog learned to respond to signals (data) from the external world and draw the "correct" conclusion.

The ability of biological nervous systems to learn and correct their mistakes laid the foundation for research in the field of artificial intelligence. The initial task was to artificially reproduce the low-level structure of the brain—i.e., create a computerized "artificial brain." As a result, the concept of an "artificial neuron" was proposed—a mathematical function that transforms multiple input facts into one output, assigning weights of influence to them. Each artificial neuron can take the weighted sum of input signals and, if the cumulative input exceeds a certain threshold level, transmit a binary signal further.


Artificial neurons are combined into networks—connecting the outputs of some neurons to the inputs of others. Connected and interacting artificial neurons form an artificial neural network—a specific mathematical model that can be implemented in software or hardware. In simplified terms, a neural network is just a program—a "black box" that receives input data and produces outputs. Built from a very large number of simple elements, a neural network is capable of solving extremely complex tasks.
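To make the description above concrete, here is a minimal Python sketch of such an artificial neuron: it forms a weighted sum of its inputs and emits a binary signal when the sum exceeds a threshold. The weights and threshold are arbitrary illustrative values, not a trained model.

```python
# A single artificial neuron: weighted sum of inputs compared against a threshold.
# The weights and threshold below are arbitrary illustrative values.

def artificial_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs exceeds the threshold, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Three input "facts" with different weights of influence.
print(artificial_neuron([1.0, 0.5, 0.0], weights=[0.4, 0.3, 0.9], threshold=0.5))  # -> 1
```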

Functioning of Neural Networks Illustrated with the Example of Car Brand Recognition in an Image, Source: wccftech.com

The mathematical model of a single artificial neuron was first proposed in 1943 by the American neurophysiologist Warren McCulloch and the logician Walter Pitts, who also introduced the definition of an artificial neural network. The model was first simulated on a computer, as the perceptron, in 1957 by Frank Rosenblatt. It can be said that neural networks are one of the oldest ideas for the practical implementation of AI.

Currently, there are numerous models for implementing neural networks. There are "classic" single-layer neural networks used for solving simple tasks. A single-layer neural network is mathematically equivalent to a weighted sum of its inputs, the kind of scoring function traditionally used in expert models: the number of variables equals the number of network inputs, and the coefficients of the variables correspond to the synaptic weights.

There are also mathematical models in which the output of one neural network is directed to the input of another, creating cascades of connections known as multilayer neural networks (MNNs), including one of their most powerful variants, the convolutional neural network (CNN).

MNNs possess significant computational capabilities but require substantial computing resources. With the placement of IT systems in cloud infrastructure, multilayer neural networks have become accessible to a far larger number of users and are now the foundation of modern AI solutions. In 2016, the U.S. company Digital Reasoning, which specializes in cognitive computing technologies, created and trained a neural network consisting of 160 billion digital neurons, far larger than the networks reported by Google (11.2 billion neurons) and Lawrence Livermore National Laboratory (15 billion neurons).

Another interesting type of neural network is the recurrent neural network (RNN), where the output from one layer of the network is fed back to one of the inputs. Platforms with such feedback have a "memory effect" and can track the dynamics of changes in input factors. A simple example is a smile. A person begins to smile with subtle movements of facial muscles and eyes before explicitly showing their emotions. RNN allows detecting such movement at early stages, which is useful for predicting the behavior of a living object over time by analyzing a series of images or constructing a sequential flow of natural language speech.
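As an illustration of the "memory effect" described above, the following hedged sketch builds a tiny recurrent network in Keras that consumes a sequence of per-frame feature vectors and produces a single prediction (for example, "smile starting" versus "neutral"). The data here is random and purely synthetic; the shapes, layer sizes, and labels are assumptions made only for the example.

```python
# Tiny recurrent network: the SimpleRNN layer carries state from frame to frame,
# so the prediction can depend on the dynamics of the sequence, not just one frame.
import numpy as np
import tensorflow as tf

seq_len, n_features = 10, 8                                       # 10 frames, 8 features per frame (assumed)
x = np.random.rand(32, seq_len, n_features).astype("float32")     # synthetic sequences
y = np.random.randint(0, 2, size=(32, 1))                         # synthetic labels

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(16, input_shape=(seq_len, n_features)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, verbose=0)
print(model.predict(x[:1]))                                       # probability for the first sequence
```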

Machine Learning (ML)

Machine learning is the process of machine analysis of prepared statistical data to find patterns and create algorithms (adjustment of neural network parameters) based on these patterns, which will then be used for predictions.

Algorithms created during the machine learning stage will enable computer artificial intelligence to make correct conclusions based on the provided data.

There are three main approaches to machine learning:

  • Supervised learning
  • Reinforcement learning
  • Unsupervised learning (self-learning)

Supervised Learning:

In supervised learning, specially curated data with already known and reliably determined correct answers are used. The parameters of the neural network are adjusted to minimize errors. In this AI method, correct answers are associated with each input example, revealing potential dependencies between the response and input data. For instance, a collection of X-ray images with specified conclusions serves as the basis for AI training – its "teacher." From the series of models obtained, a human eventually selects the most suitable one, for example, based on the maximum accuracy of predictions.
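A hedged, purely synthetic illustration of this supervised workflow is sketched below: random feature vectors stand in for measurements derived from images, and the labels play the role of the "teacher's" known correct answers. Several candidate models are trained, and a human would pick the most accurate one; a real project would, of course, use curated clinical data rather than random numbers.

```python
# Supervised learning sketch on synthetic data: fit several candidate models on
# labeled examples, then compare their accuracy on held-out data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                    # 10 synthetic features per "image"
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # known "correct answers" (the teacher)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {"logistic": LogisticRegression(), "tree": DecisionTreeClassifier(max_depth=3)}
for name, model in candidates.items():
    model.fit(X_train, y_train)                   # adjust parameters to minimize error
    acc = accuracy_score(y_test, model.predict(X_test))
    print(name, round(acc, 3))                    # a human picks the most accurate model
```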

Often, the preparation of such data and retrospective responses requires significant human intervention and manual selection. The quality of the result is also influenced by the subjectivity of the human expert. If, for any reason, the expert does not consider the entire dataset and its attributes during training, or if their conceptual model is limited by the current level of science and technology, the resulting AI solution will inherit this "blindness." It is essential to note that neural networks involve nonlinear transformations and are highly specific to their training data: the outcome of an AI algorithm is unpredictable when inputs fall outside the bounds of the training dataset. It is therefore crucial to train the AI system on examples and frequencies representative of the real conditions in which it will later operate. Geographical and socio-demographic factors also have a strong influence, generally preventing mathematical models trained on population data from other countries and regions from being used without a loss of accuracy. The expert is likewise responsible for the representativeness of the training sample.

Self-learning:

Self-learning is applied where there are no predefined answers or classification algorithms. In this case, AI relies on independently identifying hidden dependencies and searching for ontologies. Machine self-learning makes it possible to categorize samples by analyzing hidden patterns and reconstructing the internal structure and nature of the information. This helps avoid the systemic "blindness" of a doctor or researcher. For example, when developing an AI model for predicting type 2 diabetes, a researcher might concentrate primarily on blood glucose levels or patient weight while being forced to ignore all the other information in the medical history that could also be useful. A deep learning approach enables training AI on an entire multimillion-patient database and analyzing every test ever recorded in a patient's electronic health record.
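For contrast, here is a minimal sketch of learning without a "teacher": a clustering algorithm groups synthetic patient records by hidden structure, with no labels provided. The feature values (standing in for, say, glucose level, weight, and age) are simulated placeholders chosen only for illustration.

```python
# Unsupervised ("self-learning") sketch: k-means groups records into clusters
# without any predefined answers.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
patients = np.vstack([
    rng.normal([5.0, 70, 40], [0.5, 8, 10], size=(100, 3)),   # one latent group (simulated)
    rng.normal([9.0, 95, 58], [1.0, 10, 8], size=(100, 3)),   # another latent group (simulated)
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(patients)
print(kmeans.labels_[:10])       # categories discovered without a "teacher"
```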

Mechanisms of deep machine learning typically use multilayer neural networks and a very large number of object instances to train the network. The number of records in the training sample should number in the hundreds of thousands or even millions of examples, or more when resources allow. To teach AI to recognize a person's face in a photo, the Facebook development team needed millions of images with metadata and tags indicating the presence of a face. Facebook's success in implementing facial recognition lay precisely in the vast amount of initial training information: the social network holds accounts of hundreds of millions of people who have uploaded a massive number of photos, marking faces and tagging (identifying) people. Deep machine learning on such a quantity of data made it possible to create reliable artificial intelligence, which now, in milliseconds, not only detects a person's face in an image but often accurately guesses who exactly is depicted in the photo.

Data Quantity and AI Learning Methods:

A substantial amount of training data is crucial for AI to establish the necessary classification rules. The more diverse the data loaded into the system during machine learning, the more accurately these rules will be identified, ultimately leading to more precise AI results. For instance, when processing X-rays and MRIs, multilayer neural networks can form a representation of human anatomy and organs based on the images. However, a computer's internal classification does not come with organ names matching classical medical terminology, so it initially requires a "translator" from its internal machine vocabulary into professional language.

It is essential to note that, due to the nonlinearity of multilayer neural networks, there is no "reverse function": in general, the computer cannot explain to a human why it arrived at a particular conclusion. To prepare a reasoned conclusion, a human expert is needed, or, paradoxically, another neural network trained to produce accurate interpretations and conclusions in natural human language.

The supervised learning method is more convenient and preferable in situations where accumulated and reliable retrospective source data exist: training on them requires less time and allows a working AI solution to be developed more quickly. Where a database of matched inputs and answers cannot be obtained, self-learning methods based on deep machine learning need to be applied; these do not rely on human-labeled answers.

For researchers and startups just getting acquainted with AI and exploring its applications in healthcare, it seems reasonable to start with supervised machine learning methods. This will require fewer resources (time, financial) to create a prototype of a working system and practically learn AI techniques. A functioning AI system for a specific task can be obtained more quickly in this case. Currently, there are many high-quality code libraries for artificial neural networks available on the market, such as TensorFlow (https://www.tensorflow.org/) for mathematical modeling, and OpenCV (http://opencv.org/) for image recognition tasks, both provided freely under open-source licenses.
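As a rough indication of how little code such libraries require for a first prototype, the sketch below trains a small Keras classifier on the standard MNIST digit dataset; the digits merely stand in for medical images, and the architecture and single training epoch are arbitrary choices made for the example.

```python
# Minimal TensorFlow/Keras prototype: a small fully connected classifier on MNIST.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0     # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```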

In addition to the practical effect of increased accuracy, which can reach 95% today, AI systems have a high processing speed when handling data. Numerous experiments have been conducted, for example, in pattern recognition from different perspectives, where a human and a computer competed. As long as the image presentation rate was low—1-2 frames per minute—humans undoubtedly outperformed machines. When analyzing pathology images, the human error rate was no more than 3.5%, while the computer made a diagnostic error of 7.5%. However, with an increased rate to 10 frames per minute and above, human reaction weakened, fatigue set in, leading to a complete breakdown in performance. The computer, on the other hand, continuously learned from its mistakes and only improved accuracy in the next series. A promising approach was the paired operation of a human and a computer, where it was possible to increase diagnostic accuracy to 85% at a relatively high image demonstration speed for humans.

Of course, one cannot speak of building effective and accurate AI models if the digitized information needed to train them is absent. It is therefore critically important to begin accumulating Russian banks of electronic medical data, even if this means keeping traditional paper document flow for a time and duplicating medical records in both paper and electronic form, and to make these data available in anonymized form, without disclosing patients' personal data, for the creation and improvement of domestic AI solutions.

Distinguishing AI Development from Regular Software Development:

The main difference between artificial intelligence (AI) development and regular software programming lies in the fact that when creating AI, a programmer doesn't need to know all the dependencies between input parameters and the desired outcome (answer). In cases where such dependencies are well-known or where there is a reliable mathematical model, such as calculating a statistical report or forming a registry for medical payment, applying artificial intelligence might not be necessary. Modern software products handle these tasks better, more reliably, and within an acceptable time frame.

Deep machine learning technology is effective in situations where clear rules, formulas, and algorithms cannot be defined for solving a problem, for example, "does the X-ray show pathology?" This technology assumes that instead of creating programs to calculate predetermined formulas, the machine is trained using a large amount of data and various methods. These methods enable the machine to identify the formula based on empirical data, thus learning to perform the task in the future. In this context, the development team focuses on data preparation and training rather than attempting to write a program that somehow analyzes an image using predefined algorithms to obtain an answer—whether there is an anomaly in it or not.
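The contrast can be shown in a few lines of hedged, synthetic code: when the rule is known, we simply program it; when it is not, a model recovers an equivalent rule from labeled examples. The "rule" and the data below are invented purely for illustration.

```python
# Conventional programming vs. machine learning on the same toy task.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def known_rule(measurement):
    # conventional programming: the formula is given to us in advance
    return 1 if measurement > 3.0 else 0

rng = np.random.default_rng(2)
X = rng.uniform(0, 6, size=(300, 1))                  # synthetic measurements
y = np.array([known_rule(v) for v in X[:, 0]])        # pretend these labels came from experts

learned = DecisionTreeClassifier(max_depth=2).fit(X, y)   # the machine infers the rule from data
print(learned.predict([[2.5], [4.2]]))                     # expected: [0 1]
```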

A whole class of information systems, designated "IT+DT+AI+IoT" or digital platforms, has emerged on this paradigm. "IT" denotes the universal digitization of processes and computerization of workplaces; "DT" involves data accumulation and the use of powerful information processing technologies; "AI" indicates that robotic AI algorithms will be created on the basis of the accumulated data, acting both in partnership with humans and autonomously; and "IoT" stands for the "Internet of Things," a computing network of physical objects ("things") equipped with embedded technologies for interacting with each other or the external environment. The development of digital platforms for healthcare is among the strategic priorities of advanced economies worldwide, including Russia.

Risks and Concerns Related to AI:

With the rapid growth of publications on the prospects of AI and the emergence of numerous examples of IT solutions based on it, the number of statements from experts concerned about the consequences of its implementation in the coming years and decades is also increasing.

Expert concerns revolve around the idea that, although artificial intelligence will bring a radical increase in efficiency in various industries, for ordinary people, it may lead to unemployment and career uncertainties. This is because their "human" jobs are being replaced by machines.

AI Concerns and Realities:

And these concerns are not unfounded. For instance, the American company Goldman Sachs has already replaced the traders who dealt in stocks on behalf of the bank's major clients with an AI-based automated bot. Of the 600 people employed in this role in 2000, only two remain; trading robots have replaced the others, and 200 engineers are involved in maintaining them.

On the Amazon electronic platform, the arbitration of mutual claims between buyers and sellers is handled by program robots. They process over 60 million claims per year, almost three times more than the number of all lawsuits filed through the traditional U.S. court system.

Arguments in favor of moderate AI and robot implementation, restraining the pace of their integration with a so-called "robot tax," seem reasonable. Taxes collected from each new robotic workplace could be used to finance programs for education, retraining, and employment of displaced employees.

Similar changes are likely to be expected in the healthcare sector, although for our country, this might even be considered a benefit, given the serious problem of staff shortages, vast territory, and low population density.

Tasks Suitable for AI:

Andrew Ng, who worked in the Google Brain team and the Stanford Artificial Intelligence Laboratory, points out that the media and hype surrounding AI sometimes attribute unrealistic power to these technologies. In reality, the practical applications of AI are quite limited: modern AI is currently only capable of providing accurate answers to simple questions.

In conjunction with a large volume of training data, a realistic and achievable formulation of the task is a crucial condition for the future success or failure of an AI project. Currently, AI cannot solve complex tasks that are beyond a doctor's capabilities, such as a fantastic device that scans a person on its own, diagnoses any condition, and prescribes effective treatment. At present, AI is better suited to simpler tasks: is a foreign object or pathology present in an X-ray or ultrasound image? Are cancer cells present in cytological material? And so on. Still, the steady increase in diagnostic accuracy achieved by AI modules is thought-provoking. Publications have already claimed accuracy of up to 93% in processing radiological images, MRIs, and mammograms; up to 93% in processing prenatal ultrasound; up to 94.5% in diagnosing tuberculosis; and up to 96.5% in predicting ulcer incidents.


Andrew Ng's Perspective on AI:

According to one of the world's gurus, Andrew Ng, the real capabilities of AI can be assessed by a simple rule: "If an ordinary person can perform a mental task in seconds, then we can probably automate it using AI, either now or in the near future."

Specific algorithms, or even ready-made solutions, are not the most crucial element for AI success in medicine. Successful ideas are openly published, and the software is already available under open-source models, for example DeepLearning4j (DL4J), Theano, Torch, Caffe, and several others.

An interesting approach to AI development is crowdsourcing through collective expert discussion. On the Kaggle platform, which had over 40,000 data scientists worldwide registered in 2017, experts solve AI problems posed by commercial and public organizations. The quality of the solutions obtained is sometimes higher than that of developments by commercial companies. Participants are often motivated not by a monetary prize (which may not even be offered) but by professional interest in the task and an increase in their personal rating as an expert. Crowdsourcing saves money and time for developers and customers who are just starting to work with AI.

In reality, only two things are the main barriers to wider AI application in healthcare: a large amount of training data and a professional, creative approach to AI training. Without well-prepared, high-quality data, AI will not work; assembling such data is the first serious implementation challenge. Without talented people, simply applying ready-made algorithms to prepared data will not yield results either, because AI must be tuned to understand the data for the specific applied task at hand.

Artificial Intelligence in Medicine Today:

The field of medicine and healthcare is already considered one of the strategic and promising areas for the effective implementation of AI. The use of AI can massively improve diagnostic accuracy, ease the lives of patients with various illnesses, and accelerate the development and release of new drugs, among other benefits.

Arguably, the largest and most discussed project applying AI in medicine is IBM's cognitive system IBM Watson. This solution was initially trained and then applied in oncology, where IBM Watson has long assisted in making accurate diagnoses and finding effective treatment methods for individual patients.

To train IBM Watson, 30 billion medical images were analyzed, requiring IBM to acquire the company Merge Healthcare for $1 billion. In addition, 50 million anonymous electronic medical records were added to this process, which IBM obtained by acquiring the startup Explorys.


IBM's Collaboration and AI Applications in Medicine:

In 2014, IBM announced collaborations with Johnson & Johnson and the pharmaceutical company Sanofi to train Watson to understand the results of scientific research and clinical trials. According to company representatives, this collaboration aims to significantly reduce the time required for clinical trials of new drugs and to enable doctors to prescribe the medications best suited to individual patients. In the same year, IBM announced the development of the Avicenna software, capable of interpreting both text and images, with separate algorithms for each type of data. Ultimately, Avicenna should be able to understand medical images and records, serving as an assistant to radiologists. Another IBM project addressing a similar task is Medical Sieve, which focuses on developing a "medical assistant" artificial intelligence that can quickly analyze hundreds of images for abnormalities, assisting radiologists and cardiologists in their work.

Recently, IBM developers, in collaboration with the American College of Cardiology, decided to expand Watson's capabilities by offering assistance to cardiologists. In this project, the cognitive cloud platform will analyze a vast amount of medical data related to individual patients. This includes ultrasound images, X-rays, and all other graphical information that helps refine a person's diagnosis. Initially, Watson's capabilities will be used to identify signs of aortic valve stenosis. Aortic stenosis occurs when the aortic valve's opening narrows due to the fusion of its leaflets, hindering the normal flow of blood from the left ventricle to the aorta. Detecting valve stenosis is challenging, despite being a common heart defect in adults (70-85% of cases among all heart defects). Watson will attempt to identify what it "sees" in medical images: stenosis, a tumor, a focus of infection, or simply an anatomical anomaly. It will provide the corresponding assessment to the treating physician to expedite and enhance the quality of their work.

Doctors at Boston Children’s Hospital, specializing in rare pediatric diseases, use IBM Watson to make more accurate diagnoses. Artificial intelligence searches for necessary information in clinical databases and scientific journals stored in the Watson Health Cloud, facilitating the diagnostic process.

Advancements in AI Applications in Medicine:

It's worth noting that the Watson project, like any innovative product, did not set explicit economic goals for its creators. Costs for its component development stages typically exceeded plans, and its maintenance is quite burdensome when compared to traditional healthcare budgets. It can be considered more as an experimental platform to test prospective IT technologies and inspire researchers. Subsequently, prototypes that are tested and proven are transitioned into mass production, aiming for higher cost-effectiveness and operational suitability under real conditions. At almost every AI conference today, researchers from around the world claim, "We are creating our own Watson, and it will be better than the original."

Using the Emergent artificial intelligence system, researchers identified five new biomarkers that new drugs for treating glaucoma could target. Scientists input information on more than 600,000 specific DNA sequences from 2,300 patients and data on gene interactions into the AI system.

The DeepMind Health project, led by the British company DeepMind within the Google (Alphabet) umbrella, has developed a system capable of processing hundreds of thousands of medical records in a few minutes and extracting relevant information. Although this project, based on data systematization and machine learning, is still in its early stages, DeepMind is already collaborating with Moorfields Eye Hospital (UK) to improve the quality of treatment. Using a million anonymized eye scans obtained with a tomograph, researchers are working on machine learning algorithms that can help detect early signs of two eye conditions: wet age-related macular degeneration and diabetic retinopathy. Another Google company, Verily, is engaged in a similar endeavor, using artificial intelligence and Google search algorithms to analyze what keeps a person healthy.

The Israeli company MedyMatch Technology, with just 20 employees, developed an AI and Big Data solution that enables doctors to more accurately diagnose strokes. In real-time, the MedyMatch system compares a patient's brain scan with hundreds of thousands of other images in its "cloud." Strokes can be caused by either bleeding in the brain or a clot. Each of these cases requires a different treatment approach. Despite advancements in CT, diagnostic error rates have remained around 30% over the past 30 years. MedyMatch's system can detect subtle deviations from the norm, minimizing the likelihood of diagnostic and treatment errors.

AI Applications for Patients and Supporting Processes in Healthcare:

In recent times, there has been an increasing focus on applying AI technologies not only to create solutions for healthcare professionals but also for patients. For instance, the mobile app from the British company Your.MD, launched in November 2015, uses AI, machine learning, and natural language processing. This allows a patient to simply say, for example, "I have a headache" and receive recommendations and expert advice from their smartphone. The Your.MD AI system is connected to the world's largest symptom map, created by Your.MD itself, which includes 1.4 million symptoms and required over 350,000 hours for identification. Each symptom was verified by a British healthcare system specialist. The AI selects the most relevant symptom based on the unique profile of the smartphone owner.

Another company, Medtronic, offers an app capable of predicting a critical drop in blood sugar three hours before it happens. Medtronic, in collaboration with IBM, utilizes cognitive analytics on glucose meter and insulin pump data. Through the app, individuals can better understand the impact of daily activity on diabetes. In another interesting project by IBM, this time in collaboration with the diagnostic company Pathway Genomics, the OME app combines cognitive and precision medicine with genetics. The goal is to provide users with personalized information to enhance their quality of life. The first version of the app includes diet and exercise recommendations, metabolism information based on the user's genetic data, a map of the user's habits, and health status information. Future updates will include electronic medical records, insurance information, and additional details.

In addition to direct clinical applications, AI elements can be used in the auxiliary processes of medical organizations. For example, AI is well suited to automatically diagnosing the quality of a medical information system's operation and to supporting information security. AI systems can recommend timely adjustments to directories and tariffs, or even detect abnormal employee behavior and recommend training when low professionalism or delayed reactions are suspected.
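As a hedged sketch of the "abnormal behavior" idea, an isolation forest below flags unusual records in synthetic activity logs (actions per hour and average response delay); the feature names, values, and contamination rate are all assumptions made for the example.

```python
# Anomaly detection sketch: an isolation forest marks records that look unlike the rest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
normal_activity = rng.normal([30, 2.0], [5, 0.5], size=(200, 2))   # typical workload (simulated)
odd_activity = np.array([[90, 0.2], [2, 9.0]])                     # clearly unusual records
logs = np.vstack([normal_activity, odd_activity])

detector = IsolationForest(contamination=0.02, random_state=0).fit(logs)
flags = detector.predict(logs)           # -1 marks suspected anomalies
print(np.where(flags == -1)[0])          # indices of flagged records
```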

Overview of the Most Promising Development Directions:

Summing up the above, we believe that in the near future, the following tasks in healthcare will be automated with the help of AI:

    1. Automated Diagnostic Methods:


    For example, analyzing X-rays or MRI scans for the automatic detection of pathology, microscopic analysis of biological material, automatic coding of ECGs and electroencephalograms, etc. Storing a large volume of interpreted diagnostic test results electronically, including both the data and formalized conclusions, makes it possible to create reliable and valuable software products that provide effective assistance to doctors: autonomously identifying and drawing attention to routine pathology, reducing the time and cost of examinations, and enabling outsourcing and remote diagnostics.


    2. Speech Recognition and Natural Language Understanding Systems:


    These can significantly assist both doctors and patients: from ordinary speech transcription into text as a more convenient interface for working with medical information systems (MIS), call centers, or voice assistants, to ideas such as automatic translation for foreign patients, speech synthesis when reading out MIS records, or a robot registrar in a hospital admission department or clinic registry capable of answering simple questions and routing patients.


    3. Big Data Analysis and Prediction Systems:


    These are also currently solvable AI tasks that can provide significant benefits. For example, real-time analysis of changes in morbidity allows for quickly predicting changes in patient visits to medical organizations or the need for medication, preventing epidemics, or providing an accurate forecast of health deterioration, which can sometimes save a patient's life.


    4. Automatic Classification and Verification Systems:

    Systems for automatic classification and verification aid in connecting patient information found in various forms across different information systems. For instance, they can construct an integrated electronic health record from individual episodes described with varying levels of detail and without clear or consistent structuring of information. A promising technology is the machine analysis of content from social networks and internet portals to quickly obtain sociological, demographic, and marketing information about the performance of the healthcare system and individual healthcare facilities.

    5. Automatic Chatbots for Patient Support:

    Automatic chatbots for patient support can significantly assist in improving patient adherence to a healthy lifestyle and prescribed treatment. Currently, chatbots can learn to respond to routine questions, provide guidance on patient behavior in simple situations, connect patients with the appropriate telemedicine doctor, and offer recommendations for diet, among other things. Such healthcare development towards self-service and greater patient involvement in managing their own health without visiting a doctor can save significant financial resources.

    6. Advancement in Robotics and Mechatronics:

    The development of robotics and mechatronics is another important aspect. The well-known Da Vinci surgical robot is just the first step toward, if not replacing doctors with machines, at least enhancing the quality of work for medical professionals. The integration of robotics with AI is currently considered one of the promising directions of development capable of delegating routine manipulations, including those in medicine, to machines.

    Certainly, when it comes to human health, the principle of "do no harm" is crucial, and it should be accompanied by the rigor of the regulatory framework and a careful evidentiary basis when introducing new technologies. At the same time, should we approach new technologies with skepticism and deny their potential future practical application, ignoring obvious successes?

    Regardless of which AI solutions in healthcare are successfully implemented and which are reasonably rejected, it should be acknowledged that in the 21st century AI will exert the most transformative influence of all the technologies we apply in the medical profession.
