ARTIFICIAL INTELLIGENCE
“PRACTICE SCHOOL”
A Project Report Submitted for the Degree of
BACHELOR OF PHARMACY
Certificate
This is to certify that Mr. ANKIT GUPTA, a student of ARYAKUL COLLEGE OF PHARMACY & RESEARCH, LUCKNOW, affiliated to Dr. A.P.J. ABDUL KALAM TECHNICAL UNIVERSITY, LUCKNOW, bearing Roll No. 2003160500019, has completed his B.Pharm Project Report on “PRACTICE SCHOOL” under the supervision of Ms. SNEHA SINGH, Aryakul College of Pharmacy & Research, Lucknow, Uttar Pradesh, India.
ARYAKUL COLLEGE OF PHARMACY & RESEARCH
Natkur, P.O. Chandrawal, Gauri Road, Adj. CRPF Base Camp, Lucknow - 222602
Phone: 0522-2817724, Fax: 0522-2817725
H.O.: 130, Hind Nagar, Kanpur Road, Lucknow-02, Phone/Fax: 0522-4044406
Website: www.aryakulcollege.org, E-mail: [email protected]
Certificate
This is to certify that Mr. ANKIT GUPTA, a student of ARYAKUL COLLEGE OF PHARMACY & RESEARCH, LUCKNOW, affiliated to Dr. A.P.J. ABDUL KALAM TECHNICAL UNIVERSITY, LUCKNOW, bearing Roll No. 2003160500019, has completed his B.Pharm Project Report on “PRACTICE SCHOOL” under the supervision of Ms. SNEHA SINGH, Aryakul College of Pharmacy & Research, Lucknow, Uttar Pradesh, India.
(HEAD OF DEPARTMENT)
ARYAKUL COLLEGE OF PHARMACY & RESEARCH, LUCKNOW
Certificate
This is to certify that Mr. ANKIT GUPTA, a student of ARYAKUL COLLEGE OF PHARMACY & RESEARCH, LUCKNOW, affiliated to Dr. A.P.J. ABDUL KALAM TECHNICAL UNIVERSITY, LUCKNOW, bearing Roll No. 2003160500019, has completed his B.Pharm Project Report on “PRACTICE SCHOOL” under my supervision.
(LECTURER)
ARYAKUL COLLEGE OF PHARMACY & RESEARCH, LUCKNOW
ACKNOWLEDGEMENT
First of all, I thank God for the continued blessings that made it possible to complete this project work. Every piece of work requires a great deal of assistance and guidance from the people concerned, and this project is no exception.
I sincerely thank Dr. SASHAKT SINGH (Managing Director), Dr. DURGESH MANI TRIPATHI (Principal), Dr. AADITYA SINGH (Deputy Director), Prof. B.K. SINGH (Head of Department) and Dr. KASHIF SHAKEEL (Professor), Aryakul College of Pharmacy and Research, Lucknow, for their valuable suggestions and guidance, without which the completion of this project report would not have been possible.
A project of this nature is invariably the result of tremendous support, guidance, encouragement and help. I wish to place on record my sincere gratitude to Dr. SNEHA SINGH (Associate Professor), Mrs. MAMTA PANDEY (Associate Professor), Mrs. PRIYANKA KESARWANI (Associate Professor), Mrs. ANSHIKA SHUKLA (Associate Professor), Ms. SHWETA MISHRA (Lecturer) and the entire faculty of our department, without whose support and guidance this work would not have been possible.
I am grateful to my parents for their constant support, love and encouragement, and for helping me put my best foot forward in all my endeavours. I also wish to acknowledge the enthusiastic encouragement and support extended to me by my family members. Finally, I would like to thank all the faculty of the college of pharmacy and research for helping me complete this project, and my friends for their constant support and assistance.
Thanking you.
TABLE OF CONTENTS
Introduction
Healthcare Data
Disease Focus
Future Prospects
Fairness
Conclusion
References
INTRODUCTION
Artificial intelligence (AI) is defined as ‘a field of science and engineering concerned with
the computational understanding of what is commonly called intelligent behaviour, and with the
creation of artefacts that exhibit such behaviour’. Aristotle attempted to formalise ‘right thinking’
(logic) through his syllogisms (a three part deductive reasoning). Much of the work in the modern
era was inspired by this and the early studies on the operation of mind helped to establish
contemporary logical thinking. Programs that enable computers to function in ways that appear intelligent are called artificial intelligence systems. The British mathematician Alan Turing (1950) was one of the founders of modern computer science and AI. He defined intelligent behaviour in a computer as the ability to achieve human-level performance in cognitive tasks; this definition later became popular as the ‘Turing test’.2 Since the middle of the last century,
researchers have explored the potential applications of intelligent techniques in every field of
medicine.3,4 The application of AI technology in the field of surgery was first successfully investigated by Gunn in 1976, when he explored the possibility of diagnosing acute abdominal pain
with computer analysis.5 The last two decades have seen a surge in the interest in medical AI.
Modern medicine is faced with the challenge of acquiring, analysing and applying the large amount
of knowledge necessary to solve complex clinical problems. The development of medical artificial
intelligence has been related to the development of AI programs intended to help the clinician in the
formulation of a diagnosis, the making of therapeutic decisions and the prediction of outcome. They
are designed to support healthcare workers in their everyday duties, assisting with tasks that rely on the manipulation of data and knowledge. Such systems include artificial neural networks (ANNs),
fuzzy expert systems, evolutionary computation and hybrid intelligent systems.
Early AI was focused on the development of machines that had the ability to make inferences or
decisions that previously only a human could make. The first industrial robot arm (Unimate;
Unimation, Danbury, Conn, USA) joined the assembly line at General Motors in 1961 and
performed automated die casting.8 Unimate was able to follow step-by-step commands. A few
years later (1964), Eliza was introduced by Joseph Weizenbaum. Using natural language
processing, Eliza was able to communicate using pattern matching and substitution methodology
to mimic human conversation (superficial communication),9 serving as the framework for future
chatterbots.
In 1966, Shakey, “the first electronic person,” was developed. Created at Stanford
Research Institute, this was the first mobile robot to be able to interpret instructions.10 Rather
than simply following 1-step commands, Shakey was able to process more complex instructions
and carry out the appropriate actions.10 This was an important milestone in robotics and AI.
Despite these innovations in engineering, medicine was slow to adopt AI. This early
period, however, was an important time for digitizing data that later served as the foundation for
future growth and utilization of artificial intelligence in medicine (AIM). The development of the Medical Literature Analysis and
Retrieval System and the web-based search engine PubMed by the National Library of Medicine
in the 1960s became an important digital resource for the later acceleration of biomedicine.11
Clinical informatics databases and medical record systems were also first developed during this
time and helped establish the foundation for future developments of AIM.
Much of the period that followed is referred to as the “AI winter,” signifying a period of reduced funding
and interest and subsequently fewer significant developments.2 Many acknowledge 2 major
winters: the first in the late 1970s, driven by the perceived limitations of AI, and the second in
the late 1980s extending to the early 1990s, driven by the excessive cost in developing and
maintaining expert digital information databases. Despite the lack of general interest during this
time period, collaboration among pioneers in the field of AI continued. This fostered the
development of The Research Resource on Computers in Biomedicine by Saul Amarel in 1971
at Rutgers University. The Stanford University Medical Experimental–Artificial Intelligence in Medicine (SUMEX-AIM) system, a time-shared computer, was created in 1973 and enhanced networking
capabilities among clinical and biomedical researchers from several institutions.12 Largely as a
result of these collaborations, the first National Institutes of Health–sponsored AIM workshop
was held at Rutgers University in 1975.11 These events represent the initial collaborations
among the pioneers in AIM. One of the first prototypes to demonstrate feasibility of applying AI
to medicine was the development of a consultation program for glaucoma using the CASNET
model.13 The CASNET model is a causal–associational network that consists of 3 separate
programs: model-building, consultation, and a database that was built and maintained by the
collaborators. This model could apply information about a specific disease to individual patients
and provide physicians with advice on patient management.13 It was developed at Rutgers
University and was officially demonstrated at the Academy of Ophthalmology meeting in Las
Vegas, Nevada, in 1976. A “backward chaining” AI system, MYCIN, was developed in the early
1970s.14 Based on patient information input by physicians and a knowledge base of about 600
rules, MYCIN could provide a list of potential bacterial pathogens and then recommend
antibiotic treatment options adjusted appropriately for a patient’s body weight. MYCIN became
the framework for the later rule-based system, EMYCIN.11 INTERNIST-1 was later developed
using the same framework as EMYCIN and a larger medical knowledge base to assist the
primary care physician in diagnosis.11 In 1986, DXplain, a decision support system, was
released by the University of Massachusetts. This program uses inputted symptoms to generate a
differential diagnosis.3 It also serves as an electronic medical textbook, providing detailed
descriptions of diseases and additional references. When first released, DXplain was able to
provide information on approximately 500 diseases. Since then, it has expanded to over 2400
diseases.15 By the late 1990s, interest in machine learning (ML) was renewed, particularly in the medical world,
which along with the above technological developments set the stage for the modern era of AIM.
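To make the rule-chaining idea behind systems such as MYCIN more concrete, the sketch below shows a minimal backward-chaining engine over a handful of if-then rules. The rules, findings and recommendations are invented for illustration only and are not drawn from MYCIN's actual knowledge base; this is a toy sketch of the technique, not a reproduction of any clinical system.

```python
# A minimal backward-chaining sketch in the spirit of MYCIN-style rule systems.
# The rules, findings and drug classes below are hypothetical and purely
# illustrative; they are not taken from MYCIN's actual ~600-rule knowledge base.

RULES = [
    # (premises that must all hold, conclusion)
    ({"fever", "productive_cough"}, "suspect_bacterial_pneumonia"),
    ({"suspect_bacterial_pneumonia", "penicillin_allergy"}, "recommend_macrolide"),
    ({"suspect_bacterial_pneumonia", "no_penicillin_allergy"}, "recommend_beta_lactam"),
]

def backward_chain(goal, facts):
    """Try to prove `goal`: either it is a known finding, or some rule concludes it
    and every premise of that rule can itself be proven."""
    if goal in facts:
        return True
    return any(
        conclusion == goal and all(backward_chain(p, facts) for p in premises)
        for premises, conclusion in RULES
    )

if __name__ == "__main__":
    patient_findings = {"fever", "productive_cough", "penicillin_allergy"}
    for recommendation in ("recommend_beta_lactam", "recommend_macrolide"):
        print(recommendation, backward_chain(recommendation, patient_findings))
```

Running the sketch proves only the macrolide recommendation for this hypothetical patient, illustrating how a rule-based system works backwards from a candidate conclusion to the findings that support it.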
In 2007, IBM created an open-domain question-answering system, named Watson, that
competed with human participants and won first place on the television game show Jeopardy! in
2011. In contrast to traditional systems that used either forward reasoning (following rules from
data to conclusions), backward reasoning (following rules from conclusions to data), or hand-
crafted if-then rules, this technology, called DeepQA, used natural language processing and
various searches to analyze data over unstructured content to generate probable answers.16 This
system was more readily available for use, easier to maintain, and more cost-effective. By
drawing information from a patient’s electronic medical record and other electronic resources,
one could apply DeepQA technology to provide evidence-based medicine responses. As such, it
opened new possibilities in evidence-based clinical decision-making.16,17 In 2017, Bakkar et
al18 used IBM Watson to successfully identify new RNA-binding proteins that were altered in
amyotrophic lateral sclerosis. Given this momentum, along with improved computer hardware
and software programs, digitalized medicine became more readily available, and AIM started to
grow rapidly. Natural language processing transformed chatbots from superficial communication
(Eliza) to meaningful conversation-based interfaces. This technology was applied to Apple’s
virtual assistant, Siri, in 2011 and Amazon’s virtual assistant, Alexa, in 2014. Pharmabot was a
chatbot developed in 2015 to assist in medication education for pediatric patients and their
parents, and Mandy was created in 2017 as an automated patient intake process for a primary
care practice.19,20 Deep learning (DL) marked an important advancement in AIM. In contrast to ML, which
uses a set number of traits and requires human input, DL can be trained to classify data on its
own. Although DL was first studied in the 1950s, its application to medicine was limited by the
problem of “overfitting.” Overfitting occurs when ML is too focused on a specific dataset and
cannot accurately process new datasets, which can be a result of insufficient computing capacity
and lack of training data.21 These limitations were overcome in the 2000s with the availability
of larger datasets and significantly improved computing power. A convolutional neural network
(CNN) is a type of DL algorithm applied to image processing that simulates the behavior of
interconnected neurons of the human brain. A CNN is made up of several layers that analyze an
input image to recognize patterns and create specific filters. The final outcome is produced by
the combination of all features by the fully connected layers.5,21 Several CNN algorithms are
now available, including LeNet, AlexNet, VGG, GoogLeNet, and ResNet.
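As a concrete illustration of the layered CNN structure just described (convolutional layers that learn image filters, followed by fully connected layers that combine the extracted features), the following is a minimal sketch assuming PyTorch is available; the layer sizes and the 64x64 grayscale input are arbitrary illustrative choices, not a validated medical-imaging architecture.

```python
# Minimal CNN sketch (assumes PyTorch is installed). Layer sizes are arbitrary
# illustrative choices, not a clinically validated architecture.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Convolutional layers learn local image filters (edges, textures, ...).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected layers combine the extracted features into a prediction.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    # One batch of four 64x64 single-channel (grayscale) images.
    images = torch.randn(4, 1, 64, 64)
    logits = TinyCNN()(images)
    print(logits.shape)  # torch.Size([4, 2])
```

Real medical-imaging CNNs differ mainly in depth, input size and the amount of labelled training data, which is also where the overfitting problem discussed above becomes critical.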
Healthcare Data:-
Before AI systems can be deployed in healthcare applications, they need to be ‘trained’ through
data that are generated from clinical activities, such as screening, diagnosis, treatment
assignment and so on, so that they can learn similar groups of subjects, associations between
subject features and outcomes of interest. These clinical data often exist in, but are not limited to, the form of demographics, medical notes, electronic recordings from medical devices, physical examinations, clinical laboratory results and images.12 Specifically, in the diagnosis stage, a substantial proportion of the AI literature analyses data from diagnostic imaging, genetic testing and electrodiagnosis. For example, Jha and Topol urged radiologists to adopt AI technologies
when analysing diagnostic images that contain vast amounts of information.13 Li et al studied the use of abnormal genetic expression in long non-coding RNAs to diagnose gastric cancer.14 Shin et al developed an electrodiagnosis support system for localising neural injury.15 In addition, physical examination notes and clinical laboratory results are the other two major data sources.
We distinguish them from image, genetic and electrophysiological (EP) data because they contain large portions of unstructured narrative text, such as clinical notes, that is not directly analysable. As a consequence, the corresponding AI applications focus first on converting the unstructured text into machine-understandable electronic medical records (EMRs). For example, Karakülah et al used AI technologies to extract phenotypic features from case reports to enhance the diagnostic accuracy for congenital anomalies.
The above discussion suggests that AI devices mainly fall into two major categories. The first
category includes machine learning (ML) techniques that analyse structured data such as
imaging, genetic and EP data. In medical applications, the ML procedures attempt to cluster patients’ traits or to infer the probability of disease outcomes. The second category includes
natural language processing (NLP) methods that extract information from unstructured data such
as clinical notes and medical journals to supplement and enrich structured medical data. The NLP procedures aim to turn text into machine-readable structured data, which can then be analysed by ML techniques.
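To illustrate the two categories side by side, the following minimal sketch first uses a crude regular-expression rule to turn a short free-text note into structured fields (standing in for NLP), and then fits a standard classifier on structured features (the ML step). The note text, feature names and toy training labels are invented for illustration, and scikit-learn is assumed to be available.

```python
# Illustrative sketch of the two AI-device categories described above:
# (1) NLP turns unstructured clinical text into structured, machine-readable data;
# (2) ML analyses structured data to infer an outcome.
# The note, features and toy labels are invented; scikit-learn is assumed installed.
import re
from sklearn.linear_model import LogisticRegression

# --- (1) A crude NLP step: extract structured fields from a free-text note. ---
note = "58-year-old male, blood pressure 150/95, HbA1c 8.2%, complains of chest pain."

def extract_structured(text: str) -> dict:
    age = int(re.search(r"(\d+)-year-old", text).group(1))
    systolic = int(re.search(r"blood pressure (\d+)/\d+", text).group(1))
    hba1c = float(re.search(r"HbA1c ([\d.]+)%", text).group(1))
    return {"age": age, "systolic_bp": systolic, "hba1c": hba1c}

record = extract_structured(note)
print(record)  # {'age': 58, 'systolic_bp': 150, 'hba1c': 8.2}

# --- (2) An ML step: a classifier over structured features (toy training data). ---
X_train = [[45, 120, 5.4], [60, 160, 9.1], [50, 130, 5.9], [70, 170, 8.8]]
y_train = [0, 1, 0, 1]  # 0 = low risk, 1 = high risk (hypothetical labels)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_new = [[record["age"], record["systolic_bp"], record["hba1c"]]]
print("predicted risk class:", model.predict(X_new)[0])
```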
The overall road map runs from clinical data generation, through NLP-based data enrichment and ML-based data analysis, to clinical decision making; it starts and ends with clinical activities. As powerful as AI techniques can be, they have to be motivated by clinical problems and be applied to assist clinical practice in the end.
Disease focus:-
Despite the increasingly rich AI literature in healthcare, the research mainly concentrates on a few disease types: cancer, nervous system disease and cardiovascular disease. We discuss
several examples below.
1. Cancer: Somashekhar et al demonstrated, through a double-blinded validation study, that IBM Watson for Oncology is a reliable AI system for assisting the diagnosis of cancer.19 Esteva et al analysed clinical images to identify skin cancer subtypes.
2. Cardiology: Dilsizian and Siegel discussed the potential application of AI systems to diagnose heart disease through cardiac imaging.3 Arterys recently received clearance from the
US Food and Drug Administration (FDA) to market its Arterys Cardio DL application, which
uses AI to provide automated, editable ventricle segmentations based on conventional cardiac
MRI images.
The following are some of the most frequently used software tools in healthcare:-
1. OpenMRS:-
OpenMRS is a free, open-source medical record system, which enables it to be as widely accessible as possible to sites with limited funding. The system is based on open standards for medical data exchange such as HL7, allowing the exchange of patient data with other medical information systems (a simplified sketch of reading such a message appears after this list).
2. FreeCAD:-
FreeCAD is a free, open-source parametric 3D CAD modeller. In healthcare settings it can be used to design custom physical items, for example prosthetic parts, laboratory fixtures and device housings, which can then be manufactured or 3D printed.
3. 3D Slicer:-
3D Slicer is a free, open-source software platform for the analysis, processing and three-dimensional visualisation of medical images. It is widely used in research for tasks such as image segmentation, registration and surgical planning, and segmented models can be exported for 3D printing of anatomical replicas.
4. GNU Health:-
GNU Health is a free/libre health and hospital information system with a strong focus on public health and social medicine. Its functionality includes management of electronic health records and a laboratory information management system.
5. Oscar EMR:-
Oscar is an open-source electronic medical record (EMR) solution. Its primary features include patient records, billing, scheduling, medications, e-forms and messaging. Other features include chronic disease management tools, a prescription module, a health tracker and inventory management.
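Because OpenMRS's interoperability rests on HL7-style messages (item 1 above), the following is a minimal sketch of pulling a few fields out of a simplified HL7 v2 message string. The message content is invented, and a real integration would use a dedicated HL7 library rather than manual string splitting.

```python
# Minimal sketch of reading fields from a simplified HL7 v2 message.
# The message content is invented for illustration; production systems should use
# a dedicated HL7 library and handle escaping, optional fields and validation.
SAMPLE_HL7 = (
    "MSH|^~\\&|LAB|HOSPITAL|OpenMRS|CLINIC|202401151200||ADT^A01|12345|P|2.5\r"
    "PID|1||100245^^^HOSPITAL||DOE^JOHN||19990101|M\r"
)

def parse_segments(message: str) -> dict:
    """Split an HL7 v2 message into segments keyed by segment name (MSH, PID, ...)."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

segments = parse_segments(SAMPLE_HL7)
pid = segments["PID"]
patient_id = pid[3].split("^")[0]      # first component of the patient identifier
family, given = pid[5].split("^")[:2]  # name components (family^given)
print(patient_id, given, family)       # 100245 JOHN DOE
```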
FUTURE PROSPECTS
Research on smart healthcare systems can be divided into three major areas: health monitoring, disease diagnosis, and supportive devices in ambient assisted living (AAL), together with the software integration architectures that connect them. Health monitoring prototypes are typically built around wearable devices or smartphones. Machine learning-based detection frameworks have been demonstrated for major diseases such as COVID-19, heart disease and diabetes. Assistive work in AAL covers supportive tools in smart homes and social robots, while further directions concern software architectures for smart healthcare and guidelines for future work. Significant developments in health monitoring through the Internet of Things (IoT) are described below, digging deeper into the details of implementation and the technologies utilised.
The Internet of Health Things (IoHT) comprises various interlinked devices that can share and handle
data to enhance patient health. It has become a fast-growing area with numerous investments
associated with the development and use of IoT. Statistics from a McKinsey study indicate that the IoHT will have a financial impact of $11.1 trillion per year by 2025. Machine learning has
become a significant tool in the arsenal of artificial intelligence techniques used in healthcare. It
enables IoT devices with outstanding capabilities for information inference, data analytics, and
intelligence. Machine learning has become a powerful and effective solution for various IoHT
technology contexts, from big-data cloud computing to smart sensors. A typical system architecture for disease diagnosis using machine learning algorithms in the IoHT environment works as follows. The data used in these frameworks come from benchmark datasets or real-time
sensor data sent to the fog/edge/cloud for processing. Afterward, the data are preprocessed, and
necessary features are extracted to fit in the machine learning techniques. Finally, the decision is
transferred to the concerned person so that proper action can be taken. Significant developments in machine learning-based IoHT solutions target several major diseases that have become serious threats to human health in recent times.
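As a rough illustration of the pipeline just described (sensor data, preprocessing, feature extraction, classification, decision), the sketch below simulates windows of heart-rate readings, extracts simple summary features and applies a standard classifier. The data, features and labels are invented, and NumPy and scikit-learn are assumed to be available; this is not a clinical model.

```python
# Illustrative IoHT-style pipeline: simulated sensor readings -> preprocessing ->
# feature extraction -> ML classification -> decision. All data, features and
# thresholds are invented for illustration; this is not a clinical model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def extract_features(heart_rate_window: np.ndarray) -> list:
    """Summarise a window of heart-rate samples into simple statistical features."""
    return [heart_rate_window.mean(), heart_rate_window.std(), heart_rate_window.max()]

# Toy training set: windows of simulated heart-rate data labelled normal (0) / abnormal (1).
normal = [extract_features(rng.normal(72, 3, 60)) for _ in range(20)]
abnormal = [extract_features(rng.normal(110, 15, 60)) for _ in range(20)]
X_train = normal + abnormal
y_train = [0] * 20 + [1] * 20

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# A "real-time" window arriving from a wearable sensor (simulated here).
incoming = rng.normal(115, 12, 60)
decision = model.predict([extract_features(incoming)])[0]
print("alert caregiver" if decision == 1 else "no action needed")
```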
It is often advanced that algorithmic decision-making might lead to fairer and more inclusive outcomes than human judgment or decisions ‘based on ad hoc rules’. In a semantically neutral sense, ‘algorithms are designed to discriminate’ and to that end need to ‘give weight to some factors over others’. But as outlined above, the properties of ML systems carry the risk of reflecting and exacerbating existing bias, which might unfairly affect members of protected groups
based on sensitive categories like gender, race, age, sexual orientation, ability or belief.
To date, few cases have been described in the literature related to AI fairness in a specific
healthcare context. In a recent Nature article, however, Zou and Schiebinger discuss the
groundbreaking work of Esteva and colleagues that used ML to detect skin cancer. It is
highlighted that fewer than 5 per cent of the images this model was trained on were from
individuals with dark skin. Given the issues described above this seems problematic. Indeed, it
is safe to assume that medical AI applications are especially susceptible to bias and
discrimination. Rajkomar and colleagues discern four categories of possible bias in healthcare: bias in model design, in training data, in interactions with clinicians and in interactions with patients. Vayena and colleagues particularly emphasize cases in which the data sources
themselves do not reflect true epidemiology within a given demographic, such as population
data biased by the entrenched overdiagnosis of schizophrenia in African Americans. Arguably,
the most relevant case in this context relates to the issue of sample size disparity, where there is
not enough data on a particular group, as outlined above. Indeed, as the literature shows, major
health inequalities notoriously not only persist across but also within countries, tightly
intertwined with social inequalities. There is a 36-year gap in life expectancy between the
poorest and the richest countries in the world, but even within the city of London men in the
richest parts on average live 18 years longer than men in the most deprived neighbourhoods.
There are ten times fewer physicians in low-income countries than in high-income countries, and in
countries with non-state funded healthcare systems costs are often prohibitive. In many parts of
the world, persistent gender inequalities limit women’s access to healthcare. Disparities in
access to healthcare between urban and rural areas also exist in developed and high-income
countries. Stigma related to mental illness, addiction, certain diseases such as HIV, sexual preference
or gender identity, poverty, as well as ‘internalised stigma’ in ethnic minorities serve as further
barriers to access. Literature and government data also suggest that large populations are
underrepresented in clinical trials data, which seem to favour predominantly competent adult
white men. Clinical trials data on pregnant women is missing almost entirely.
Evidently, this overview does not claim completeness. However, it still supports the argument
that historical health data accommodates underrepresentation of and bias against large
populations. Naturally, where minorities and even whole populations are excluded from health
services, no health records of them exist. The deterrent effects of the so-called ‘digital divide’ in
health have long been documented. Unless health records are digitized, such data will remain
excluded from any future AI development. States and individuals who cannot afford the
necessary technologies or do not have the required ‘digital literacy’ will stay at the sidelines.
FAIRNESS:-
Virtually all public policy papers reviewed for this article identify potential bias in, and discrimination by, AI systems as a major ethical concern that might affect access to healthcare as well as its results. As Hardt holds, accuracy in automated decisions seems to be a
strong indicator for ‘fairness’. Hence, if a classifier is disproportionally inaccurate on minorities,
the decision-making is unfair towards these groups. Sensitive criteria like gender, age, race and
sexual orientation do not provide legitimate reasons to deviate from a formal understanding of
justice in the sense of equal treatment; rather they suggest the need for special protection. Given
the properties of AI systems, existing biases in healthcare might be deeply baked into the
technologies that are designed to play a central role in future care, which could exacerbate social
inequalities. ‘Feedback loops’ might perpetuate existing stigmatization and contribute to ‘self-
fulfilling prophecies’. For example, as advanced by Char and colleagues, given the tendency to withdraw care in cases of extreme prematurity or brain damage, algorithms could conclude that these situations are always fatal and adjust their predictions accordingly, with obviously lethal consequences for the patients concerned.
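To make the fairness criterion mentioned above measurable (a classifier that is disproportionately inaccurate on one group), the short sketch below computes accuracy separately for each group in a toy set of predictions; the groups, labels and predictions are invented purely to illustrate the calculation.

```python
# Sketch of a simple subgroup-accuracy audit: if a classifier is markedly less
# accurate for one group than another, that gap is one signal of unfairness.
# All labels, predictions and groups below are invented for illustration.
from collections import defaultdict

records = [
    # (group, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy = {accuracy:.2f}")
# A large gap between per-group accuracies (here 1.00 vs 0.25) flags the kind of
# disproportionate inaccuracy the fairness literature is concerned with.
```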
CONCLUSION
Artificial intelligence has moved from early rule-based systems such as CASNET and MYCIN to modern machine learning, natural language processing and deep learning methods that analyse both structured and unstructured healthcare data. These techniques already support diagnosis, therapeutic decision-making and health monitoring through IoHT devices, and open-source tools such as OpenMRS, 3D Slicer, GNU Health and Oscar make them more widely accessible. At the same time, issues of data quality, bias and fairness must be addressed so that AI assists, rather than undermines, equitable clinical practice.
REFERENCES
1. Murdoch TB, Detsky AS. The inevitable application of big data to health care. JAMA
2013;309:1351–2.
3. Dilsizian SE, Siegel EL. Artificial intelligence in medicine and cardiac imaging: harnessing big data and advanced computing to provide personalized medical diagnosis and treatment. Curr Cardiol Rep 2014;16:441.
4. Patel VL, Shortliffe EH, Stefanelli M, et al. The coming of age of artificial intelligence in
medicine. Artif Intell Med 2009;46:5–17.
6. Weingart SN, Wilson RM, Gibberd RW, et al. Epidemiology of medical error. BMJ
2000;320:774–7.
7. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med
2005;165:1493–9.
8. Winters B, Custer J, Galvagno SM, et al. Diagnostic errors in the intensive care unit: a
systematic review of autopsy studies. BMJ Qual Saf 2012;21:894–902.
9. Lee CS, Nagy PG, Weaver SJ, et al. Cognitive and system factors contributing to diagnostic
errors in radiology. AJR Am J Roentgenol 2013;201:611–7.
10. Neill DB. Using artificial intelligence to improve hospital inpatient care. IEEE Intell Syst
2013;28:92–5.
11. US Food and Drug Administration. Guidance for industry: electronic source data in clinical investigations. 2013. https://www.fda.gov/downloads/drugs/guidances/ucm328691.pdf (accessed 1 Jun 2017).
12. Gillies RJ, Kinahan PE, Hricak H. Radiomics: images are more than pictures, they are data.
Radiology 2016;278:563–77.
13. Li CY, Liang GY, Yao WZ, et al. Integrated analysis of long noncoding RNA competing
interactions reveals the potential role in progression of human gastric cancer. Int J Oncol
2016;48:1965–76.
15. Shin H, Kim KH, Song C, et al. Electrodiagnosis support system for localizing neural injury
in an upper limb. J Am Med Inform Assoc 2010;17:345–7.
THANK YOU