
LawBot: AI Legal Advisor

Abin Eldho1*, Adith Arun1, Mohamed Bilal K H1, Rohit Jaison1, Sheena Kurian K1
1Dept. of Computer Science and Engineering, KMEA Engineering College, Ernakulam, India
Email: [email protected], [email protected], [email protected], [email protected], [email protected]

ABSTRACT
The LawBot is a groundbreaking initiative that utilizes the power of artificial intelligence (AI) to
bridge the gap between legal knowledge and accessibility, specifically within the intricate realm
of Indian law and ethics. This ambitious project entails the meticulous development of an AI
chatbot, extensively trained on the Indian Constitution, the laws of India, and intricate ethical
dilemmas. The chatbot boasts a diverse array of features, beginning with its ability to predict case
outcomes. Users can input comprehensive case details to receive predictions about the likely
legal verdicts, empowering both individuals and legal professionals to make well-informed
decisions. Moreover, the chatbot facilitates legal information retrieval with unparalleled ease.
Users can effortlessly request information on specific Indian laws by providing the law’s name or
section number. The LawBot's capacity to deliver relevant legal information is what makes it
unique. It also has the ability to provide concise and understandable explanations, guaranteeing
that legal information is available to everyone who seeks it out. Beyond its practical legal
applications, the AI model is also trained to navigate the complexities of ethical and moral
dilemmas. Complementing its extensive capabilities, the LawBot boasts a user-friendly interface
that caters to a wide spectrum of users. By seamlessly merging AI capabilities with an expansive
knowledge base, this project endeavors to empower individuals and elevate legal literacy across
India, ultimately contributing to the advancement of a more enlightened and just society.

1 INTRODUCTION
LawBot provides a revolutionary answer to the pressing need for easily available and
knowledgeable legal guidance in the ever-changing world of quickly developing technology and
constantly changing legal frameworks. Carefully crafted to shed light on the complex areas
covered by the Indian Constitution, LawBot combines state-of-the-art technology with a deep
understanding of law, and is set to transform the way legal knowledge is accessed and understood.
This revolutionary platform is a beacon for people, companies, and society as a whole. It consists
of two unique chatbots: LawBot Info and LawBot Predict.
LawBot Info is armed with a vast collection of legal datasets covering the Indian Penal Code
(IPC), the Criminal Procedure Code (CrPC), the Constitution of India (COI), and the Civil
Procedure Code (CPC), making it a beacon for legal understanding and accessibility. Its strength is
in its user-friendly design, which enables users to easily traverse the complex legal environment.
Users can quickly and easily access a wealth of specialized legal information by answering
prompts or questions. For example, typing "IPC section 302" into the LawBot Info’s text input
field will cause it to gather and display pertinent information and facts about IPC Section 302. This
feature provides access to accurate and thorough legal knowledge beyond the simple retrieval of
facts. The seamless retrieval mechanism will be valuable for researchers, legal professionals,
students, and the general public alike, enhancing their capacity for detailed research and
understanding of legal content.
LawBot Predict will be trained and tested using the Indian Legal Document Corpus (ILDC)
dataset. The Supreme Court of India (SCI) case proceedings are included in ILDC, together with
original court rulings that provide context. Gold standard judgment decision explanations from
legal professionals are also annotated into a component of ILDC that is classified as a separate test
set. This set is used as an assessment tool to determine how well judgment prediction algorithms
explain things in-depth. A substantial backlog of court cases slows down the legal system in
populous nations like India, frequently as a result of issues like a lack of qualified judges.
Consequently, it may be possible to speed up the legal system by developing a system that can
advise judges on potential outcomes in cases that are currently pending. Nonetheless, an automated
decision system needs to be well-explained in language that people can comprehend, in order for it
to be accepted in court. Therefore, it becomes necessary to not only anticipate the outcome of a
court case but also to explain the reasoning for that outcome.

2 RELATED WORKS
Dharwadkar and Deshpande [1] explore the development of a medical chatbot using Natural
Language Processing (NLP) for health-related queries. The system employs the Support Vector
Machine (SVM) algorithm for disease prediction, integrates NLP for understanding user queries,
and utilizes word order similarity for analyzing sentence structure. Comparative analysis with
Naïve Bayes and KNN methods reveals the superior accuracy of SVM, particularly beneficial for
medical institutions. Leveraging a large dataset for enhanced performance, the chatbot predicts
diseases based on symptoms and proposes future integration of voice and face recognition
technologies for deeper patient interactions, enriching the user experience.
Baek et al. [2] introduce a cutting-edge smart policing system utilizing machine
learning to predict crime types and risk scores through the analysis of text-based criminal case
summaries. Implemented as a user-friendly GUI-based platform, it empowers field personnel with
rapid identification of crime types and risk assessment. The system's superiority over traditional
algorithms is validated through performance evaluations. The methodology involves constructing a
keyword dictionary, curating datasets, and developing prediction models. Real-time capabilities
are emphasized, achieved through the GUI application platform, showcasing deep learning's
versatility in addressing real-world challenges in crime prediction and risk assessment.
Nagamallika et al. [3] introduce a criminal identification system leveraging deep learning
algorithms, featuring MTCNN for face detection, FaceNet for embedding, and OpenCV for
image/video processing. Achieving an 86% accuracy rate, the system locates and matches criminal
faces, extracting data from a database to alert law enforcement. Emphasizing the system's role in
efficient identification, the procedure details the algorithms used for face detection, with future
improvements suggested. Continuous adaptation to evolving technologies is stressed, highlighting
the system's potential impact on law enforcement and paving the way for advancements in
criminal identification.
Cui et al. [4] perform a comprehensive survey on Legal Judgment Prediction (LJP)
by employing an exhaustive analysis of 31 datasets across six languages, evaluating metrics and
models. The process includes categorizing LJP tasks, legal systems, and law domains,
emphasizing the need for additional datasets for specific tasks. It involves machine and expert-
driven metadata extraction, annotating rationale sentences, and categorizing datasets. It explores
the use of pre-trained language models, multi-language corpora, and diverse learning frameworks.
It presents performance metrics for various NLP models, offering insights, recommendations, and
proposing future research directions, addressing challenges like legal reasoning and interpretability
in LJP tasks.
Mandalapu et al. [5] comprehensively assess over 150 articles on crime prediction
through machine learning and deep learning. The analysis focuses on 51 selected articles,
exploring diverse algorithms and datasets. They employed word cloud analysis, distribution
mapping, and literature trends to extract key insights. Researchers scrutinize the effectiveness of
regression and classification methods, emphasizing traditional models' efficacy. Ethical
considerations in predictive policing are discussed, and the review concludes with future research
directions.
Butt et al. [6] conduct a systematic literature review (SLR) on spatio-temporal
crime hotspot detection and prediction, focusing on data mining, machine learning, and time series
analysis. The methodology includes quality assessment, SLR validation, and performance
measures analysis. The paper categorizes crime forecasting approaches, highlighting dataset
challenges and proposing future research directions. Emphasis is placed on the importance of high-
quality, spatio-temporally labeled crime datasets for robust predictive models.
Queudot et al. [7] introduce a transformative solution to limited legal representation.
Focused on immigration and banking, the chatbots integrate legal data using NLP techniques such
as Bag-of-Words and TF-IDF. The bank employee chatbot employs grammatical analysis and
cosine similarity for intent classification. Rigorous testing with real interactions validates its
efficacy. The immigration chatbot, open-sourced for collaborative development, aims to empower
immigrants with precise legal information. Overall, it envisions a future where user-friendly
chatbots bridge access gaps, democratizing legal guidance and fostering inclusivity.
Ho et al. [8] explore the implementation of a Legal AI Bot for Sustainable
Development in Legal Advisory Institutions. The authors employ a Multicriteria Decision-Making
(MCDM) model and an Analytical Network Process (ANP) to address complexities in adopting
legal AI bots. The methodology integrates DEMATEL, DANP, and M-VIKOR models, providing
a robust framework to analyze user behavior intricacies. The study emphasizes the significance of
legal AI bots, introducing the DDANPV model for sustainable development, while acknowledging
the need for further research to enhance accuracy and understanding of interrelationships among
factors.

3 METHODOLOGY
3.1 Data Collection
The process of obtaining and putting together relevant information that will be utilized to test,
validate, or train a machine learning model is known as data collection. The model's performance
and capacity for generalization are strongly influenced by the caliber and volume of data that were
gathered. The aim is to create or gather a representative set of data that matches the kinds of real-
world situations the model is likely to face. The complexity of legal language, the diversity of
legal documents, data authenticity and quality, and legal annotation and labeling are a few of the
numerous difficulties encountered when collecting legal data. To produce solid and trustworthy
datasets for legal research and model development, legal professionals, domain experts, and
machine learning experts must work together to address these problems.

3.2 Data Preprocessing


Cleaning and converting raw data into a format that can be used for model evaluation and training
is known as data preprocessing. The objective is to improve the data's quality and usability by
tackling problems including absent values, anomalies, and irrelevant data. Data cleaning, text
cleaning and tokenization, legal language normalization, legal annotation and labeling, handling
imbalanced data, text or document vectorization, and feature scaling are some of the activities that
fall under the category of data preprocessing. A clean, organized, and accurate format of legal data
is ensured by efficient data preprocessing, which lays the groundwork for precise and insightful
analysis in machine learning applications used in the legal industry.
FIGURE 1 Methodology Flowchart

3.3 Feature Extraction


Feature extraction is the process of choosing, extracting, or transforming required characteristics
or information from raw data to provide a more informative and meaningful representation.
Reducing dimensionality, emphasizing significant patterns, and improving machine learning
algorithm performance are the objectives. Natural language inputs are often complex and
multidimensional, and chatbots interact with them on a regular basis. Feature extraction techniques
are essential for converting text data into a machine-learning-friendly format in order to address
this complexity.

3.4 Model Selection


Model selection is a critical aspect of the machine learning process that involves choosing the most
suitable algorithm or model for a given task. Selecting the best among a range of algorithms is
necessary in the field of supervised learning. Each method has its own distinct qualities, levels of
complexity, and advantages. Finding a model that produces precise and reliable predictions and
generalizes smoothly to new, untested data is the main objective of model selection. The chosen
model needs to exhibit strong generalization ability, proving the ability to find the ideal balance
between overfitting, in which the model becomes unduly complicated and memorizes the training set,
and underfitting, in which the model is unduly simplistic and misses underlying patterns.
Depending on the properties of the data and the underlying patterns, the performance of various
algorithms can vary greatly. Selecting an appropriate model is essential, as it affects important
performance measures like recall, accuracy, and precision, among others.

3.5 Model Training


In machine learning, the training phase is an essential stage where a model learns the relationships
and patterns from a labeled dataset. The procedure involves providing the model with input data
and matching, accurate outputs (labels). The learning algorithm begins with initial values for
the model parameters. We assess the model's
performance with a predetermined metric. Training data is fed into the model in batches, and the
model's predictions are compared to the real labels. In order to decrease the difference between its
anticipated outputs and the actual labels, the model iteratively modifies its parameters. The
difference between predictions and labels is measured using a loss function.
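For instance, with binary labels $y_i \in \{0,1\}$ and predicted probabilities $\hat{y}_i$, a commonly used loss of this kind is the binary cross-entropy (an illustrative example; the choice of loss depends on the task and model):

$$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\right]$$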

3.6 Model Testing and Evaluation


A trained model's effectiveness and ability to generalize are mostly determined during the
model testing and evaluation phase. This vital stage tests the model's predictive or classification
ability by subjecting it to new, unseen data. A unique test dataset that is typical of actual situations
is created in order to start this process. After that, the trained model receives the test dataset with
its input features and uses it to interpret the data and produce predictions or classifications. A
variety of performance metrics unique to each machine learning task are calculated as part of the
evaluation process. Thorough evaluation also makes it easier to discern the model's advantages and
disadvantages in detail, which helps developers make improvements for more reliable and efficient
real-world applications.

4 IMPLEMENTATION
4.1 Data Collection
In order to gather data, LawBot Info consults important legal sources, including the Indian Penal
Code (IPC), the Criminal Procedure Code (CrPC), the Civil Procedure Code (CPC), and the Indian
Constitution (COI). The IPC.csv, CPC.csv, CrPC.csv, and COI.csv files are carefully selected
from a GitHub repository [9] containing CSV files of legal textbooks and documents, to
include a broad spectrum of legal provisions and concepts. The Indian Legal Document Corpus, or
ILDC, was obtained during the LawBot Predict data collection process. The Supreme Court of
India (SCI) case proceedings are included in ILDC, together with original court rulings that
provide context. The information is acquired from fellow researchers who worked on the ILDC for
CJPE: Indian Legal Documents Corpus for Court Judgment Prediction and Explanation [10]. To
guarantee accuracy, it goes through a rigorous validation procedure. The gathered data forms the
groundwork for training the LawBots, empowering them to offer knowledgeable and contextually
appropriate legal perspectives to users and formulate predictions based on the provided
information about a case.

4.2 Data Preprocessing


Four files, namely IPC.csv, CrPC.csv, CPC.csv, and COI.csv, were selected for the
LawBot Info Model's training in the context of data preparation for the LawBot project. These files
are in line with the factual material contained in legal textbooks, covering the Indian Constitution
(COI), the Indian Penal Code (IPC), the Criminal Procedure Code (CrPC), and the Civil
Procedure Code (CPC). These four files were combined to create the Info_train.csv dataset, which enables effective
model training. The four primary attributes in the Info_train.csv file are chapter, section number,
section title, and section description. In the case of the Indian Constitution, the article number,
article title, and article description were mapped to these same attributes. Special focus was
given to eliminating any instances of missing values, anomalies, or extraneous data. An
appropriately labeled preprocessed ILDC dataset was used to train the LawBot Predict Model. The
ILDC dataset had four attributes: text, label, split, and name. The label attribute indicates the case's
outcome using 0 or 1, the name attribute indicates the case file's document name, and the split
attribute indicates whether the data will be used for training or testing. The text attribute includes
the annotated text of Supreme Court of India (SCI) case proceedings with original court rulings.
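A minimal sketch of this merging and cleaning step, assuming the four CSV files share the attribute layout described above (column names are assumptions, not taken from the project code):

```python
import pandas as pd

# The four source files named in the text; the column layout is assumed to
# match the attributes described above (chapter, section number, section
# title, section description).
files = ["IPC.csv", "CrPC.csv", "CPC.csv", "COI.csv"]
frames = [pd.read_csv(f) for f in files]

# Merge into a single training dataset, dropping rows with missing values
# and duplicates, mirroring the cleaning described in the text.
info_train = pd.concat(frames, ignore_index=True)
info_train = info_train.dropna().drop_duplicates()
info_train.to_csv("Info_train.csv", index=False)
```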

4.3 Feature Extraction


In this phase, a reduction in dimensionality was performed on the Info_train.csv dataset,
specifically transitioning from four attributes to two attributes. Since sections and articles are more
relevant in the legal context, the attribute "chapter" was removed from the dataset. The two
characteristics, "section number" and "section title," were combined into a single characteristic
called "section details." As a result, there are now two attributes in the dataset: "section details"
and "section description". The purpose of this reorganization was to concentrate on features that
were thought to be more relevant to the legal field and to simplify the dataset. Notably, the ILDC
dataset did not necessitate feature extraction, as it inherently satisfied the prerequisites for training
the LawBot Predict model. This judicious selection and transformation of attributes contribute to
the refinement and optimization of the datasets, aligning them with the specific requirements of
our project.
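This reduction from four attributes to two can be sketched with pandas as follows (the column names are assumptions; adjust them to the actual headers of Info_train.csv):

```python
import pandas as pd

info = pd.read_csv("Info_train.csv")

# Drop the "chapter" attribute and merge section number and title into a
# single "section details" field, as described above.
info = info.drop(columns=["chapter"])
info["section details"] = (
    info["section number"].astype(str) + " " + info["section title"]
)
info = info[["section details", "section description"]]
info.to_csv("Info_train.csv", index=False)
```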
FIGURE 2 ILDC dataset

FIGURE 3 Info_train.csv dataset

4.4 Model Selection


Decision tree, random forest, and SVM methods were selected in order to train the LawBot Predict
Model. Decision trees give users and legal experts an understandable and transparent structure that
helps them make decisions. Complex legal judgments are methodically deconstructed into a
number of clearly understood criteria using a sequential tree-like structure. In legal environments,
where stakeholders must understand the elements impacting the model's judgments, interpretability
is critical. Moreover, decision trees automatically measure the significance of elements, such as terms
or sentences in case summaries, providing useful information to attorneys who want to know what
factors are most important in determining the decisions reached.
As an ensemble learning technique, Random Forest works well at improving the model's
resilience and capacity for generalization by building several decision trees and combining their
results. Using ensemble approaches is crucial to minimizing overfitting and enhancing overall
prediction stability in the legal area, where complex and diverse case descriptions are common.
Random Forest enhances prediction reliability by reducing the significant variation that is
frequently linked to individual decision trees. This makes it an excellent choice for the complex
process of legal decision-making based on case descriptions.
SVM is a useful tool for analyzing legal case descriptions with plenty of unique words and
phrases since it works well with high-dimensional feature spaces. SVM fits very well with the
goals of the LawBot Predict Model, which uses binary classification to classify cases as accepted
or rejected appeals based on their descriptions. Additionally, by emphasizing margin maximization
across classes, SVM improves the model's capacity to generalize well to new data, which is crucial
in legal situations where predictions must hold up across a wide variety of instances. In summary,
the decision to employ Decision Tree, Random Forest, and SVM methods for training the LawBot
Predict Model stems from a combination of factors such as interpretability, ensemble learning
benefits, ability to handle high-dimensional spaces, and suitability for binary classification tasks.
This strategic selection aims to enhance the model's performance and applicability to the nuanced
nature of legal decision-making based on case descriptions.
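The three candidates could be instantiated with scikit-learn roughly as follows (a sketch with illustrative default hyperparameters; the paper does not report which SVM kernel or parameter settings were used):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Three candidate classifiers for the binary accepted/rejected prediction
# task. LinearSVC is a common choice for high-dimensional TF-IDF features;
# the actual kernel used by the authors is not stated.
models = {
    "svm": LinearSVC(),
    "decision_tree": DecisionTreeClassifier(random_state=42),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
}
```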

4.5 Model Training


Text preparation is the first of several phases in training the LawBots. Several crucial stages are
conducted in the process of text preprocessing to enhance unprocessed textual material in
preparation for further analysis or activities involving natural language processing. Tokenization is
the first step, in which the NLTK functions called ‘nltk.sent_tokenize’ and ‘nltk.word_tokenize’
carefully break the input phrase up into separate words and sentences, respectively. This
fundamental stage prepares the groundwork for a more in-depth examination of the text.
After tokenization, a crucial step is to lowercase the entire phrase so that consistency is
maintained in the analysis that follows. By preventing the model from distinguishing words
depending on their case, this normalization helps to produce a more reliable and consistent
linguistic analysis. Using list comprehension, an important step is made to further filter the
training dataset, which leads to the construction of a new list called new_words. This list
successfully eliminates non-alphanumeric characters from the words by keeping just alphanumeric
characters. By removing unnecessary symbols, this procedure improves the textual data's
consistency and intelligibility. One other crucial aspect of text preprocessing is the deliberate
elimination of stopwords, which are everyday words like "the" and "of" that frequently have little
semantic significance. Eliminating these stopwords from the word list results in a more targeted
dataset that highlights meaningful and pertinent words.
Lemmatization is the next step in this complex process, which makes use of the WordNet
lemmatizer. To provide a uniform portrayal of related phrases, words are transformed to their base
or root form. This helps to capture the natural meaning of the language while also streamlining the
dataset. All in all, these procedures help to provide an input that is more standardized, cohesive,
and viable from an analytical standpoint.
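A minimal sketch of this preprocessing pipeline, using the NLTK functions named above (the project's exact filtering rules may differ slightly):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

# One-time downloads of the NLTK resources used below.
nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def preprocess(sentence):
    # Tokenize and lowercase the input, as described above.
    words = nltk.word_tokenize(sentence.lower())
    # Keep only alphanumeric tokens (the new_words list comprehension).
    new_words = [w for w in words if w.isalnum()]
    # Drop stopwords and lemmatize each word to its base form.
    return [lemmatizer.lemmatize(w) for w in new_words if w not in stop_words]

print(preprocess("The punishment of murder is defined in IPC section 302."))
```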
The TF-IDF Vectorizer must be initialized before moving on to the next and most important
phase. This component is carefully configured to transform a set of raw documents into a matrix of
TF-IDF (Term Frequency-Inverse Document Frequency) characteristics by utilizing the scikit-
learn TfidfVectorizer. One stage is to merge the training sentences with the preprocessed train set
to create a combined collection of sentences, which will enable thorough analysis. This
combination is the input that the TF-IDF vectorizer uses to create a comprehensive representation
of the textual data.
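Assuming the vectorizer's default smoothing settings are kept, scikit-learn computes the TF-IDF weight of a term $t$ in document $d$, over a corpus of $n$ documents, as

$$\text{tf-idf}(t, d) = \text{tf}(t, d) \times \left( \ln\frac{1 + n}{1 + \text{df}(t)} + 1 \right)$$

where $\text{df}(t)$ is the number of documents containing $t$; each document vector is then normalized to unit Euclidean length.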
Lastly, the TF-IDF vectorizer is fitted using the fit_transform method on this combined set
of phrases, producing the TF-IDF matrix that is particularly designed for training. This matrix
encapsulates the significance of terms within the textual corpus, laying the groundwork for
subsequent machine learning model training. It is essential to save the learnt properties of the
TF-IDF vectorizer for later usage after it has been trained. This is accomplished
by using the pickle module to save the trained TF-IDF vectorizer to a file. This calculated move
guarantees that the vectorizer may be easily used to convert fresh data and preserve feature
extraction consistency, thanks to its acquired understanding of the meaning of words. The TF-IDF
scores connected to the phrases in the training sentences are also recorded in a file by means of the
TF-IDF matrix that is created during training. This two-step saving procedure makes it easier to
use the training matrix and the vectorizer together for future results or predictions.
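Sketched with scikit-learn and pickle, the fitting and two-step saving procedure might look like this (the file names and sample sentences are assumptions for illustration):

```python
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative stand-in for the combined training sentences described above.
combined_sentences = [
    "punishment murder imprisonment life",
    "theft dishonest movable property",
]

# Fit the vectorizer and build the training TF-IDF matrix.
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(combined_sentences)

# Persist both artifacts with pickle, as described in the text, so new
# queries can later be transformed with the same learnt vocabulary.
with open("tfidf_vectorizer.pkl", "wb") as f:
    pickle.dump(vectorizer, f)
with open("tfidf_matrix.pkl", "wb") as f:
    pickle.dump(tfidf_matrix, f)
```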
The above-mentioned steps are common to both LawBot Info and LawBot Predict. LawBot Info
is then trained iteratively until a high level of accuracy is obtained. LawBot Info takes the user
input and performs a cosine similarity check against the dataset. The section description
corresponding to the row of the best-matching section details is returned as the result. LawBot
Predict is trained on the ILDC dataset using the SVM, decision tree, and random forest algorithms.
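The retrieval step of LawBot Info can be sketched as follows, assuming a descriptions list aligned with the rows of the saved TF-IDF matrix (that list and the file names are assumptions):

```python
import pickle
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(query, descriptions):
    # Load the persisted vectorizer and training matrix (see above).
    with open("tfidf_vectorizer.pkl", "rb") as f:
        vectorizer = pickle.load(f)
    with open("tfidf_matrix.pkl", "rb") as f:
        tfidf_matrix = pickle.load(f)

    # Vectorize the user query and score it against every dataset row.
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, tfidf_matrix)[0]

    # Return the section description aligned with the best-matching row;
    # `descriptions` is assumed to be ordered like the matrix rows.
    return descriptions[scores.argmax()]
```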

4.6 Model Testing


The predictions of the LawBot Info and LawBot Predict models were compared against the actual
outcomes, and performance metrics such as accuracy, precision, and recall were computed to
quantify their effectiveness. To validate the efficacy of LawBot Info, testing was performed with
sample user sentences by invoking the train function with a specific input, such as "Kidnapping".
These input sentences served as representative user queries to evaluate LawBot Info's ability to
provide relevant and accurate responses. Subsequently, the obtained response was printed,
allowing for a visual inspection of LawBot Info's output. We tested the output of all three LawBot
Predict models with a test dataset of 100 inputs, which contained 80 samples labeled 1 and 20
samples labeled 0.
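The reported metrics can be computed per model with scikit-learn, along the following lines (a sketch; y_true and y_pred stand for the 100 test labels and one model's predictions):

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score)

def evaluate(y_true, y_pred):
    # y_true: the 100 test labels (80 ones and 20 zeros, per the text);
    # y_pred: one model's predictions on the same inputs.
    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred))
    print("recall   :", recall_score(y_true, y_pred))
    print(confusion_matrix(y_true, y_pred))
```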

5 RESULTS AND DISCUSSION


5.1 Law Info

LawBot's performance is greatly increased by a thorough data preprocessing step that uses
Python's JSON module to handle null values, combine section numbers and titles, and remove
chapter titles and numbers, among other tasks. Tokenization, lowercase conversion, elimination
of stopwords, non-alphanumeric character removal, and lemmatization are all steps in the feature
extraction process. The supervised learning model performs better when preprocessed phrases are
converted into features, which is made possible by the TF-IDF vectorization technique. To ensure
best performance during deployment, a labeled dataset is used for model training, and the TF-IDF
vectorizer and matrix are stored for later usage. LawBot responds to test statements accurately;
however, it is acknowledged that ongoing optimization is necessary. With the help of labeled
datasets and user input, the successful implementation highlights the potential of supervised
learning and natural language processing in broadening access to legal information.

5.2 Law Predict

Using techniques like SVM, Decision Tree, and Random Forest for the analysis, we ran tests on a
dataset of 100 inputs. The confusion matrices shown below illustrate how the performance of the
SVM, Decision Tree, and Random Forest algorithms was evaluated. It was clear from the performance
evaluation that the Random Forest algorithm performed better than the SVM and Decision Tree
algorithms in terms of accuracy. SVM performed better than the Decision Tree method, even
though it took longer to finish.
FIGURE 4 Confusion Matrices of SVM, Random Forest, Decision Tree

FIGURE 5 Performance Measures of Selected Model

6 CONCLUSION
LawBot is a revolutionary tool when it comes to accessibility and comprehension in the complex
world of India's legal system. Its two components, LawBot Info and LawBot Predict, converge to
offer immediate, cost-effective, and round-the-clock legal guidance. LawBot's relevance stems
from its capacity to dismantle obstacles to legal knowledge. By breaking down complicated legal
terminology, it gives people, companies, and marginalized communities more power and promotes
a culture in which legal knowledge is seen as a fundamental right rather than a privilege. The
ability of LawBot Predict to reduce delays and speed up decision-making processes is in line with
the urgent demand for quicker decisions inside the legal system. Furthermore,
LawBot's accessibility mitigates financial barriers by minimizing the need for frequent expert
consultations, possibly democratizing legal aid. LawBot is essentially a paradigm change that
bridges the gap between accessibility and legal knowledge. Its essential function in promoting
empowerment and well-informed decision-making creates the conditions for a society in which
legal understanding is available to all. This innovation reimagines legal aid, making it an essential
instrument that enables people to interact with the law with confidence and effectiveness. It also
breaks down barriers and promotes a fairer judicial system for all parties concerned. LawBot helps
people and communities move toward a more inclusive and enlightened legal system by fostering
a shared journey towards justice and legal understanding.

KEYWORDS
Artificial Intelligence (AI)
Machine Learning
Legal Knowledge

References
1. Dharwadkar, R., & Deshpande, N. (2018). A Medical ChatBot. International Journal of Computer Trends and Technology, 60, 41–45. https://doi.org/10.14445/22312803/IJCTT-V60P106
2. Baek, M.S., Park, W., Park, J., Jang, K.H., & Lee, Y.T. (2021). Smart Policing Technique With Crime Type and Risk Score Prediction Based on Machine Learning for Early Awareness of Risk Situation. IEEE Access. https://doi.org/10.1109/ACCESS.2021.3112682
3. Nagamallika, D., Vandana, P.G., Dakshayani, P., Manikanta, R.A., & Kumar, K.K. (2021). Criminal Identification System Using Deep Learning.
4. Cui, J., Shen, X., Nie, F., Wang, Z., Wang, J., & Chen, Y. (2022). A Survey on Legal Judgment Prediction: Datasets, Metrics, Models and Challenges.
5. Mandalapu, V., Elluri, L., Vyas, P., & Roy, N. (2023). Crime Prediction Using Machine Learning and Deep Learning: A Systematic Review and Future Directions. IEEE Access. https://doi.org/10.1109/ACCESS.2023.3286344
6. Butt, U., Letchmunan, S., Hassan, F.H., Ali, M., Baqir, A., & Sherazi, H. (2020). Spatio-Temporal Crime HotSpot Detection and Prediction: A Systematic Literature Review. IEEE Access, 8, 166553–166574. https://doi.org/10.1109/ACCESS.2020.3022808
7. Queudot, M., Charton, É., & Meurs, M.J. (2020). Improving Access to Justice with Legal Chatbots. Stats, 3, 356–375. https://doi.org/10.3390/stats3030023
8. Ho, J.H., Lee, G.G., & Lu, M.T. (2020). Exploring the Implementation of a Legal AI Bot for Sustainable Development in Legal Advisory Institutions. Sustainability, 12, 5991. https://doi.org/10.3390/su12155991
9. Indian Law Penal Code JSON repository: https://github.com/civictech-India/Indian-Law-Penal-Code-Json/tree/main
10. Malik, V., Sanjay, R., Kumar Nigam, S., Ghosh, K., Guha, S., Bhattacharya, A., & Modi, A. (2021). ILDC for CJPE: Indian Legal Documents Corpus for Court Judgment Prediction and Explanation. https://doi.org/10.18653/v1/2021.acl-long.313
