Journal of Translational Critical Care Medicine

: 2020  |  Volume : 2  |  Issue : 4  |  Page : 78--82

Evolution of Clinical Medicine: From Expert Opinion to Artificial Intelligence

Antonio Barracca1, Mauro Contini2, Stefano Ledda3, Gianmaria Mancosu3, Giovanni Pintore4, Kianoush B Kashani5, Claudio Ronco6,  
1 abcGo Limited Liability Company (S.r.l), Cagliari, Italy
2 Machine Learning App Developer, Londra, Cagliari, Italy
3 Application Developer, Cagliari, Italy
4 Visual Computing Group, Center for Advanced Studies, Research and Development in Sardinia (CRS4), Sardinia, Italy
5 Department of Medicine, Division of Nephrology and Hypertension, Mayo Clinic; Department of Medicine, Division of Pulmonary and Critical Care Medicine, Mayo Clinic, Rochester, Minnesota, USA
6 University of Padova, Padova; Department of Nephrology Dialysis & Transplantation, International Renal Research Institute (IRRIV), San Bortolo Hospital, Vicenza, Italy

Correspondence Address:
Prof. Kianoush B Kashani
Division of Nephrology and Hypertension, Mayo Clinic, 200 First Street SW, Rochester, Minnesota 55905


Artificial intelligence offers a vast opportunity for the science of knowledge. Twenty-first-century medicine will be characterized by an extraordinary ability to access and process medical information to provide patient-specific, timely, and effective clinical decision support. The knowledge gained from patient care experience and clinicians' expertise has led to many advances in clinical care. Access to a large volume of data, along with ever-growing information and knowledge of diseases, can allow us to optimize diagnoses and management strategies by using advances in machine learning and artificial intelligence. Changing the medical culture from relying solely on experts to using medical informatics advances to improve experts' clinical judgment will be an uphill battle. It is necessary to move beyond clinicians' traditional training to empower them to enter the era of data science, statistics, and artificial intelligence. As the incorporation of artificial intelligence into clinical practice seems inevitable, a thorough understanding of its capacities and flaws is essential to the emergence of a new world of clinical practice. This review describes some of the nuances of past, current, and future clinical decision support systems and artificial intelligence's impact on this process.

How to cite this article:
Barracca A, Contini M, Ledda S, Mancosu G, Pintore G, Kashani KB, Ronco C. Evolution of Clinical Medicine: From Expert Opinion to Artificial Intelligence.J Transl Crit Care Med 2020;2:78-82

How to cite this URL:
Barracca A, Contini M, Ledda S, Mancosu G, Pintore G, Kashani KB, Ronco C. Evolution of Clinical Medicine: From Expert Opinion to Artificial Intelligence. J Transl Crit Care Med [serial online] 2020 [cited 2022 Jan 23 ];2:78-82
Available from:

Full Text


The volume of knowledge in health care and medicine is increasing overwhelmingly in extent and complexity. From the beginning of medical school to high-level specialty/subspecialty training, curricula are designed as a long journey from anatomy and the biochemistry of body composition to an understanding of the very complex physiology of the body and pathophysiology of disease. Diseases can be differentiated from each other by their specific symptoms and signs. Physical examination, further imaging, and/or laboratory tests allow appropriate diagnoses, which lead to disease-specific treatment or management strategies. Knowing organ physiology and disease pathophysiology is essential to the proper diagnosis of diseases. This seemingly simple process is far more complicated in practice, as many diseases share several features. This complexity, added to the ever-growing knowledge of diseases, has turned the landscape of learning into a very lengthy and difficult road.

Three scenarios occur daily in clinical settings: (1) clinicians know the underlying source of illness because it is common or has specific features, (2) care providers need to ask a colleague who recently faced a similar case, and (3) medical textbooks or other references are used to assemble the appropriate differential diagnosis for the underlying source.

If none of the scenarios mentioned earlier proves useful, search engines, cognitive tutors (e.g., Watson's “Medical Cognitive Tutor”), medical libraries, or other clinical decision support resources (e.g., UpToDate®) that continually update their knowledge base can be practical tools for reaching an appropriate conclusion.

During clinical problem solving, skillful clinicians work through complex clinical cases in logical steps, leading to the formulation of accurate diagnoses and appropriate assessments and plans. In this process, staying abreast of new medical progress remains essential. Among the available resources, free or open-access medical journals, including over 400 biomedical electronic journals with free access, and the Directory of Open Access Journals (DOAJ), an archive of over 2500 free online scientific journals, are worth noting. Keeping up with new developments requires significant effort and time. Despite all these efforts, real-time information during patient encounters still lags. Moreover, considering the many factors related to individual patients in decision-making processes can lead to different conclusions (e.g., while renal colic in a young patient raises suspicion of kidney stones, the same colic in an elderly patient may direct the clinician to different diagnoses). In recent decades, changes in patients' complexity, health services, and the volume of patient-generated information have added complexity to medical thinking processes. Teamwork is a growing aspect of medicine, highlighting the importance of appropriate communication skills among clinicians in providing the best clinical decisions and care.[1] All of the added complexities named above have created the need for new tools to enhance clinicians' performance and provide the best care to each patient individually. [Figure 1] describes the options clinicians have for decision-making processes.{Figure 1}

 Medicine and the Computer Advent

Research on computer-assisted diagnosis began in the 1960s with the idea that difficult clinical problems could be solved through mathematical computation. Substantial effort went into applying flowcharts, Boolean algebra, pattern matching, and decision analysis to diagnostic processes.[2] Except for a small number of minor clinical problems, each technique proved to have little or no practical value. Many believed that a program that could imitate experts' behavior would be beneficial, which may not be an entirely accurate understanding of how computers should be used in medicine. The early work on computer-aided diagnosis was therefore abandoned. In the early 1970s, the focus shifted to studying clinicians' actual behavior during clinical problem solving.[2],[3],[4] The resulting insights were used to build clinical problem-solving models that were eventually converted into so-called artificial intelligence programs or expert systems.[5] These programs were developed to emulate clinical expertise through two approaches. (1) The first was the “rule-based system,” which was oddly called “artificial intelligence.” One of these systems, MYCIN,[6] was based on the hypothesis that expert knowledge consists of many independent, situation-specific rules that can be simulated by associating them in a chain of contributions.[7],[8] Each rule consisted of an “If” statement followed by a “Then” conclusion: the first statement identifies a condition, which is followed by a specific decision or action. For example, if an individual with kidney failure has a pH of 7.26 and a plasma bicarbonate of 22 mEq/L, the patient likely has metabolic acidosis. Unfortunately, although these rule-based reasoning systems are logical, they are too simplistic for complex medical cases.[9] (2) In the 1970s, in parallel with the rule-based systems, a very different approach to structuring human clinical competence was developed.
In this second system, diagnostic acumen was considered the ability to construct and evaluate hypotheses by matching a patient's characteristics to stored profiles of specific diseases. One numerical value indicated how often a particular finding was encountered in a given disease; a second indicated how strongly a specific finding pointed to a specific disease. This process produced a general ranking of the potential differential diagnoses. Despite the appealing concept, these programs were unable to provide information about the path leading to their conclusions.[10] By the late 1970s, it was clear that intellectual and technical problems had to be solved before reliable, automated consultation programs could emerge.[9],[11]
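The If-Then rule structure described above can be sketched in a few lines. The rule names, thresholds, and chaining below are an illustrative toy, not rules taken from MYCIN:

```python
# Minimal sketch of a rule-based ("expert system") chain of If-Then rules.
# Rule conditions and thresholds are illustrative, not taken from MYCIN.

def assess_acid_base(findings):
    """Apply independent, situation-specific rules to patient findings."""
    conclusions = []
    # Rule 1: kidney failure with low pH and low-normal bicarbonate
    # suggests metabolic acidosis (the example given in the text).
    if (findings.get("kidney_failure")
            and findings["pH"] < 7.35
            and findings["bicarbonate_meq_l"] <= 22):
        conclusions.append("likely metabolic acidosis")
    # Rule 2: chained contribution -- a prior conclusion can trigger
    # further rules, forming the "chain of contributions."
    if "likely metabolic acidosis" in conclusions and findings.get("anion_gap", 0) > 12:
        conclusions.append("consider high anion gap causes")
    return conclusions

patient = {"kidney_failure": True, "pH": 7.26, "bicarbonate_meq_l": 22, "anion_gap": 18}
print(assess_acid_base(patient))
```

As the text notes, each rule is individually logical, but the approach scales poorly: complex cases would require an unmanageable number of hand-written rules.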

The contributions that PCs have made to medicine are not negligible. With the computational power of PCs, statistical tests for medical research have become more available than ever. This, in turn, has led to the development of large databases, which have changed the face of clinical medicine. As medical thinking has become very complex, owing to changes in patients, health services, and medical sciences, our ability to rely on human capabilities alone has come under scrutiny.[1] Nowadays, the spread of new technologies such as smartphones[12] may enhance decision-making abilities. Their ease of use provides real-time access to classifications, pathology databases, and clinical support tools (e.g., cardiovascular risk or prostate cancer calculators). They could also enhance patient compliance by providing appropriate information and reminders. Other technologies have been developed to advance the field; for example, mHealth includes wearable devices[13],[14] that monitor vital parameters and could lead to safer and more effective home care.[15] The National Health Service in the UK reported increasing use of digital technology by 2021. Currently, general practitioner (GP) at Hand (https://www.england.nhs.uk/london/our./gp-at-hand-fact-sheet/), run by Babylon and supported by the Hammersmith and Fulham Clinical Commissioning Group, is a program that provides such remote monitoring services. GP at Hand offers digital or video consultations throughout the day via smartphones.[16] “Ping An Good Doctor” is the world's leading one-stop health-care ecosystem platform in China, combining mobile health and AI technology. A research and development team of over 200 world-class AI experts developed the “AI Doctor,” which has now accumulated more than 300 million clinical data records. The “AI Doctor” collects the symptoms and history of the user's illness before providing a preliminary diagnostic suggestion.

In 1950, Alan Turing published a paper entitled “Computing Machinery and Intelligence.”[17] In this article, Turing offered definitions of “machine” and “thinking.” By Turing's description, an ideal machine has memory and can manipulate symbols according to predefined rules. Turing also introduced the “imitation game,” in which a human and a machine communicate through a teletype. If the machine can deceive the interlocutor into believing he or she is talking to a real person, one may assume that the machine can “think.” As time goes by, machines seem progressively more capable of thinking (e.g., unbeatable chess programs). Once machines learn the rules of a game, they can memorize billions of scenarios. In games where the rules are not as explicit as in chess (e.g., football), robot players have more difficulty. Machines are formidable at cataloging and mapping data that already exist; however, when computers encounter a new situation, they cannot improvise new rules. In other words, machines do not have social intelligence. [Figure 2] demonstrates the steps needed to implement an artificial intelligence-based clinical decision support system.{Figure 2}

 Big Data

Advances in genetics and biology have improved medical knowledge, which, in turn, has resulted in tremendous progress in diagnostic and therapeutic options. With the increasing age of the population and patients' growing comorbidities, the volume and depth of information required to provide appropriate medical management have drastically increased. Therefore, electronic health records (EHRs), as the repository of this information, continue to expand. The high velocity, veracity, volume, and variety of the EHR qualify it as big data. The EHR contains fragmented pieces of data on each patient's medical history. Linking this information is necessary to allow clinicians to use it for clinical purposes, including but not limited to predicting the risks of different diseases or the probabilities of different outcomes.
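The linking step can be illustrated with a minimal sketch that merges fragmented records by a shared patient identifier; the record sources and field names below are hypothetical:

```python
# Sketch: linking fragmented EHR entries by a shared patient identifier.
# Record sources and field names are hypothetical illustrations.
from collections import defaultdict

def link_records(*sources):
    """Merge fragments from several sources into one record per patient."""
    linked = defaultdict(dict)
    for source in sources:
        for fragment in source:
            # Each fragment contributes its fields to the patient's record.
            linked[fragment["patient_id"]].update(fragment)
    return dict(linked)

labs = [{"patient_id": "P001", "creatinine": 2.1}]
imaging = [{"patient_id": "P001", "mri_report": "uterine scar noted"}]
history = [{"patient_id": "P001", "comorbidities": ["diabetes"]}]

merged = link_records(labs, imaging, history)
print(merged["P001"])
```

Real EHR linkage must also handle conflicting entries, missing identifiers, and privacy constraints, which this sketch deliberately ignores.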

 Machine Learning

Machine learning (ML) is a tool used in the science of artificial intelligence (AI). ML includes computational statistics, pattern recognition, artificial neural networks, image processing, data mining, and adaptive algorithms. ML algorithms are designed to recognize patterns in data and progressively improve their predictive capabilities. ML can be considered a natural extension of traditional statistical approaches. Indeed, an ML model learns from case examples rather than rules. The examples serve as the ML inputs (classes), and the computed results are the outputs (labels). For instance, a biopsy sample read by a pathologist can be digitized and converted into classes, i.e., the set of pixels that make up the examination, and labels, i.e., the information that classifies the type of disease present in the sample. By using algorithms to assess the observations and learn from the patterns, ML can detect a mapping that links the classes to the labels and thereby create a generalizable model. Thus, ML devices can review and read the histological structures of new biopsy samples based on the recorded labels associated with the linked classes. ML models can be very simple or very complex. Deep learning, a class of ML technology, uses artificial neural networks to learn extremely complex relationships between classes and labels that exceed human capabilities.[18]
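The learn-from-examples idea can be sketched with a deliberately simple model, a 1-nearest-neighbor classifier: labeled examples are stored, and a new sample receives the label of its closest stored example. The toy feature values and labels below are hypothetical, not derived from real biopsy data:

```python
# Sketch: learning a mapping from examples (inputs) to labels with no
# hand-written rules -- a 1-nearest-neighbor classifier on toy data.
import math

def train(examples, labels):
    """'Training' here simply stores the labeled examples."""
    return list(zip(examples, labels))

def predict(model, sample):
    """Label a new sample with the label of its closest stored example."""
    nearest = min(model, key=lambda pair: math.dist(pair[0], sample))
    return nearest[1]

# Hypothetical inputs, e.g., two pixel-derived measurements per sample.
examples = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.8, 0.9)]
labels = ["benign", "malignant", "benign", "malignant"]

model = train(examples, labels)
print(predict(model, (0.85, 0.75)))
```

The mapping from classes to labels is discovered from the data themselves; no rule like “If pixel value exceeds X, Then malignant” is ever written by hand.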

 Deep Learning

Classic machine learning limits

Conventional ML has several limitations. We can consider ML a magic box that recognizes patterns based on previous experience, using logical and mathematical rules that connect specific inputs, for example, a list of symptoms, to particular outputs, such as the diagnosis of a syndrome. In general, the users of these systems are not aware of the internal structure of the algorithms used in ML techniques. Users or investigators can only provide inputs and expect outputs from ML tools, which are then used for further internal ML training. The primary problem with such systems is that their operation depends on how the data used for training are represented. For instance, consider an ML tool meant to decide between natural delivery and cesarean section that relies only on information available in EHRs, known as features or inputs (e.g., the presence of a uterine scar). These features direct the algorithms that ML uses to reach a conclusion, and they must be provided to the ML tool in a very well-defined and standardized way (e.g., a magnetic resonance imaging [MRI] scan showing a uterine scar cannot be used by this specific tool, but its report indicating a uterine scar can impact the label).

Defining the features for ML, i.e., feature engineering, is therefore essential. Engineering features requires a deep understanding of computer and mathematical systems, which, in turn, may add to the resource intensity and cost of the tool. With the advent of big data in the medical field and the growing dimensions of EHRs, the complexity of ML tools and AI systems becomes even more apparent.
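Feature engineering can be sketched as a function that maps raw, non-standardized data onto the well-defined inputs an ML tool expects. The extraction rule below is a hypothetical illustration, not a validated clinical parser:

```python
# Sketch: feature engineering -- turning raw, non-standardized data into
# well-defined, standardized ML inputs. The extraction rule below is a
# hypothetical illustration, not a clinically validated parser.

def extract_features(record):
    """Map a raw patient record onto the standardized features a
    hypothetical delivery-mode ML tool might expect."""
    report = record.get("mri_report", "").lower()
    return {
        # The MRI image itself cannot be fed to this tool, but a
        # standardized flag derived from its report can.
        "uterine_scar": "uterine scar" in report,
        "maternal_age": record.get("age", 0),
        "prior_cesarean": record.get("prior_cesarean", False),
    }

raw = {"mri_report": "Uterine scar noted in lower segment.", "age": 34}
print(extract_features(raw))
```

Every such rule must be designed, tested, and maintained by hand, which is precisely the cost that deep learning, discussed next, tries to remove.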

Deep learning

To mitigate the limitations of ML mentioned above, understanding the concept of deep learning is critical. In deep learning, feature engineering becomes completely automatic, which makes problem solving through machine learning tremendously easier. Indeed, although the concepts mentioned above have existed for a long time, they came to fruition only after the advent of supercomputers. Supercomputers, with their tremendous computational capacities, can analyze big data to achieve optimal network training.

In deep learning, the main complex problem is decomposed into a sequence of several elementary operations. This process resembles a living organism, in which incredibly complex organs function appropriately on the basis of many simple chemical and physiological processes. Returning to the earlier example of predicting delivery type: if the ML system comprises several simpler algorithms, including one that can transform MRI data, it can decompose the main question into several simpler features and reach a conclusion based on them. Some of these features may not be visible to the naked human eye, but algorithms can identify them.
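The decomposition into elementary operations can be sketched as a tiny two-level network, in which each level performs only a weighted sum plus a simple nonlinearity; the weights are arbitrary illustrative numbers, not trained values:

```python
# Sketch: a deep network decomposes one complex mapping into a chain of
# elementary operations (weighted sums and simple nonlinearities).
# Weights are arbitrary illustrative numbers, not trained values.

def layer(inputs, weights, biases):
    """One elementary step: weighted sum of inputs plus bias, then ReLU."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x):
    # Level 1: turn the raw inputs into two intermediate features.
    h = layer(x, weights=[[1.0, -1.0], [0.5, 0.5]], biases=[0.0, -0.2])
    # Level 2: combine the intermediate features into a single output.
    out = layer(h, weights=[[1.0, 2.0]], biases=[0.1])
    return out[0]

print(forward([0.8, 0.3]))
```

Each level alone is trivial; the complexity of the overall mapping comes only from stacking many such levels, just as complex organ function emerges from simple chemical steps.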

This process is not directly supervised by the users; the AI tool continues to learn, using appropriate features to minimize errors. The purpose of the various intermediate features is to find multiple simple, intermediate representations of the data. The network's depth is therefore defined by the number of levels and features between the system's primary inputs and its output. In other words, it is the data that define the system's rules, leading to a concatenated network of elementary operations.

In deep learning, network training is often segregated from its application. This is necessary because training a network with several levels requires lengthy computations and a massive volume of data. For instance, in a modern convolutional network, a typical deep learning system used to recognize objects in generic images, about 130 million parameters are adjusted simultaneously during training.[19] The result of this enormous process is a small file of about 90 MB, which can be carried on a USB stick or uploaded to a website. These final products can often make accurate predictions on any new image within a few milliseconds, recognizing at least 9,000 different objects. They can therefore be used by any user, even one without prior knowledge of the training process.
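The separation of training from application can be sketched as follows: an expensive training step ends by exporting a small parameter file, and prediction needs only that file. The threshold "model" here is a deliberately trivial stand-in for a real network:

```python
# Sketch: separating training from application. The (expensive) training
# step ends with a small exported parameter file; prediction only needs
# that file. The "model" is a deliberately trivial threshold, for
# illustration only.
import json
import os
import tempfile

def train_and_export(samples, path):
    """'Training': derive a threshold from labeled samples, then save it."""
    threshold = sum(x for x, _ in samples) / len(samples)
    with open(path, "w") as f:
        json.dump({"threshold": threshold}, f)  # the entire trained "network"

def load_and_predict(path, value):
    """Application: load the tiny exported file and predict instantly."""
    with open(path) as f:
        params = json.load(f)
    return "positive" if value > params["threshold"] else "negative"

path = os.path.join(tempfile.gettempdir(), "toy_model.json")
train_and_export([(0.2, "negative"), (0.8, "positive")], path)
print(load_and_predict(path, 0.9))
```

The analogy to the 90 MB convolutional network file is deliberate: once training is finished, the exported parameters are all a user needs, with no knowledge of the training process.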

An exciting feature of multilevel deep learning systems is that, although trained on specific problems, they are adaptable to different applications. This adaptation process, also known as re-training, is generally performed only on the system's final level, so the time-consuming and resource-intensive initial training is often unnecessary. For example, a generic network for recognizing objects within images, trained for weeks on supercomputers, can be specialized within a few hours to perform other tasks (e.g., recognizing uterine scars or cancer cells on MRI images).
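Re-training only the final level can be sketched as follows; the frozen feature extractor and the refitting rule below are hypothetical stand-ins for a real pretrained network:

```python
# Sketch: re-training (transfer learning). The frozen earlier levels are
# reused as a feature extractor; only the final level is refit on the new
# task. All numbers and functions are hypothetical illustrations.

def frozen_features(x):
    """Earlier levels: trained once at great cost and never touched again."""
    return [x * x, abs(x)]

def retrain_final_level(samples):
    """Refit only the last level: choose the feature-sum threshold that
    separates the new task's labels."""
    scores = [(sum(frozen_features(x)), label) for x, label in samples]
    positives = [s for s, label in scores if label == 1]
    negatives = [s for s, label in scores if label == 0]
    return (min(positives) + max(negatives)) / 2  # midpoint threshold

def predict(threshold, x):
    return 1 if sum(frozen_features(x)) > threshold else 0

# A small task-specific data set, instead of weeks of full training.
new_task = [(0.1, 0), (0.2, 0), (0.9, 1), (1.1, 1)]
threshold = retrain_final_level(new_task)
print(predict(threshold, 1.0))
```

Only the cheap final step depends on the new task; everything upstream is reused as-is, which is why re-training takes hours rather than weeks.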

Remaining issues

Based on current policies, it is not entirely clear who owns trained deep learning networks. The training of a deep learning network is done by the ML tool itself, following a previously described method, so applying intellectual property law may not be appropriate. On the one hand, ownership could be claimed by the companies that spent their resources to train the network; on the other, it could belong to the individuals who provided the data and context expertise.

In practice, deep learning models require significant resources and considerable amounts of data, which may be available only to major global companies (e.g., Google, Microsoft, Facebook, and Amazon). Companies with such resources pursue their business interests when they develop these networks. At the same time, these networks can serve as the basis for other applications that may benefit the community (e.g., specialized new applications, such as the Google AI cancer detection project[20]).

The role of data availability in the development of these technologies is undeniable. As the use of these systems gains momentum in the medical field, the participation of context experts (i.e., physicians, pharmacists, and other allied health staff) in their development and utilization is of paramount importance. The roles in which experts could participate include (1) validating the final process, (2) providing the data, and (3) supervising the training process. Context experts could provide appropriate data labeling (e.g., annotating clinical images with positive and negative cases) to make these models more accurate. These experts do not need computer science knowledge. One of the most important dilemmas is whether we can delegate diagnosis or management entirely to these systems, or whether clinicians still need to supervise the magic box heavily.


Considering the growing information and complexity of patient management, the use of clinical decision support systems may be inevitable. Artificial intelligence represents a formidable opportunity to assist with the intricacies of decision-making processes. The medical culture needs to change to accommodate this growing knowledge and the incorporation of AI systems. Full delegation of clinical decision-making to AI systems may not be ready for prime time; clinicians will still need to supervise and control these systems. Training clinicians with both clinical skills and medical informatics expertise is the next needed step.

Financial support and sponsorship


Conflicts of interest

There are no conflicts of interest.


1Obermeyer Z, Lee TH. Lost in thought-The limits of the human mind and the future of medicine. N Engl J Med 2017;377:1209-11.
2Reggia J, Tuhrim S, editors. Computer-Assisted Medical Decision Making. New York: Springer-Verlag; 1985.
3Kassirer JP, Gorry GA. Clinical problem solving: A behavioral analysis. Ann Intern Med 1978;89:245-55.
4Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, Massachusetts: Harvard University Press; 1978.
5Szolovits P, editor. Artificial Intelligence in Medicine. New York, NY: Routledge; 2018.
6Shortliffe EH, editor. Computer-Based Medical Consultations: MYCIN. New York, NY: Elsevier Scientific Publishing Company; 1976.
7Buchanan B, Shortliffe E, editors. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, Massachusetts: Addison-Wesley Publishing Company; 1984.
8Van Melle W, editor. System Aids in Constructing Consultation Programs. UMI Research Press; 1981.
9Schwartz WB, Patil RS, Szolovits P. Artificial intelligence in medicine. Where do we stand? N Engl J Med 1987;316:685-8.
10Szolovits P, Pauker SG. Categorical and probabilistic reasoning in medical diagnosis. Artif Intell 1978;11:115-44.
11Schwartz WB. Medicine and the computer. The promise and problems of change. N Engl J Med 1970;283:1257-64.
12Otis B, Parviz B. Introducing our smart contact lens project, in Alphabet; 2014. Available from: [Last accessed on 2021 May 04].
13Galloway CD, Valys AV, Shreibati JB, Treiman DL, Petterson FL, Gundotra VP, et al. Development and validation of a deep-learning model to screen for hyperkalemia from the electrocardiogram. JAMA Cardiol 2019;4:428-36.
14Tison GH, Sanchez JM, Ballinger B, Singh A, Olgin JE, Pletcher MJ, et al. Passive detection of atrial fibrillation using a commercially available smartwatch. JAMA Cardiol 2018;3:409-16.
15Pym H. Could the future hospital be in the home? BBC; 2014. Available from: [Last accessed on 2021 May 04].
16Shuren J, Califf RM. Need for a national evaluation system for health technology. JAMA 2016;316:1153-4.
17Turing AM. I – Computing machinery and intelligence. Mind 1950;LIX: 433-60.
18Hinton G. Deep learning – A technology with the potential to transform health care. JAMA 2018;320:1101-2.
19Convolutional Neural Networks (CNNs/ConvNets), in CS231n: Convolutional Neural Networks for Visual Recognition. Stanford CS Class; 2020. Available from: [Last accessed on 2021 May 04].
20Stumpe M. Applying Deep Learning to Metastatic Breast Cancer Detection, In Google AI Blog: The Latest News from Google AI; 2018. Available from: [Last accessed on 2021 May 04].