On
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
Submitted By
CERTIFICATE
This is to certify that the project entitled “GROCERY OFFER SUGGESTION SYSTEM FOR
RETAIL MARKETS” is a bonafide work carried out by
Jahnavi Adabala (178R1A0561)
Sowmya Chikkam (178R1A0569)
Namrata Rikkibe (178R1A05A3)
Sowmya Maale (178R1A0596)
in partial fulfillment of the requirement for the award of the degree of BACHELOR OF
TECHNOLOGY in COMPUTER SCIENCE AND ENGINEERING from CMR Engineering
College, affiliated to JNTU, Hyderabad, under our guidance and supervision.
The results presented in this project have been verified and are found to be satisfactory. The
results embodied in this project have not been submitted to any other university for the award of
any other degree or diploma.
This is to certify that the work reported in the present project entitled "GROCERY OFFER
SUGGESTION SYSTEM FOR RETAIL MARKETS" is a record of bonafide work done by us in
the Department of Computer Science and Engineering, CMR Engineering College, JNTU Hyderabad.
The reports are based on the project work done entirely by us and not copied from any other source. We
submit our project for further development by any interested students who share similar interests to
improve the project in the future.
The results embodied in this project report have not been submitted to any other University or Institute
for the award of any degree or diploma to the best of our knowledge and belief.
We are extremely grateful to Dr. A. Srinivasula Reddy, Principal and Dr. Sheo Kumar, HOD,
Department of CSE, CMR Engineering College for their constant support.
I am extremely thankful to Dr. M. Kumara Swamy, Professor, Internal Guide, Department of CSE,
for her constant guidance, encouragement and moral support throughout the project.
I would be failing in my duty if I did not acknowledge with grateful thanks the authors of the
references and other literature referred to in this project.
I express my thanks to all staff members and friends for all the help and coordination extended in
bringing out this project successfully and on time.
Finally, I am very much thankful to my parents, who guided me at every step.
1. INTRODUCTION
1.1 Introduction & Objectives
1.2 Purpose of the Project
1.3 Existing System & Disadvantages
1.4 Proposed System with Features
2. LITERATURE SURVEY
5. OUTPUT SCREENS
6. CONCLUSION
7. FUTURE ENHANCEMENTS
ABSTRACT
Face recognition technology enables offline grocery stores to do what online shopping
recommendation systems have been doing for years.
Using a combination of in-store cameras and facial recognition software, shops can now
easily assess the purchase data of their customers and provide offers on products based on their
purchase history.
We are designing a system in which cameras at the entry recognize the customer and link his
previous purchase history stored in the database.
1. INTRODUCTION
All online retail apps have proved that when a retailer knows its customers, it can serve them
more effectively. But how does that work in a physical, offline grocery retail environment? One
way is to integrate a facial recognition system. Face recognition technology enables offline
grocery stores to do what online shopping recommendation systems have been doing for years:
cameras in offline grocery shops identify their shoppers, link them to past purchases, and
generate personalized product recommendations based on that data.
Using a combination of in-store cameras and facial recognition software, shops can now easily
assess the purchase data of the customers and recommend products based on the history. This
core data can be collected at each stage of the customer journey, tracking shoppers (and how they
interact with a store) from entry to checkout. Not only can face recognition technology identify
and classify customers, it can also help retailers optimize and plan their product offerings.
E-commerce has already implemented recommendation systems, with benefits such as boosting
the level of customer interaction, increasing sales and the diversity of items sold, improving
customer satisfaction and loyalty, and understanding customer demand better. Such benefits are
also expected in traditional, offline retail stores.
We are designing a system where cameras at the entry recognize the customer and link his
previous purchase history stored in the database. Based on that purchase history, he receives
recommendations of offers on products he may wish to buy, and these are displayed on a screen
at the entrance as soon as he steps into the store.
Customer details are procured when he first visits the shop. Details such as his name and phone
number are taken at the entry point and his image is captured; purchase data is taken at the exit.
In grocery stores, large-scale transaction data with identification, such as point of sales (POS)
data, is being accumulated as a result of the introduction of frequent shopper programs (FSPs).
The accumulated POS data have been used to examine customer shopping behavior, especially
by professionals in the marketing field. Although the recommendations based on this data are
often adopted in e-commerce shopping stores, they are rarely introduced in face-to-face selling,
such as in brick-and-mortar grocery stores. Therefore, introducing a system based on these
recommendations to grocery stores could induce customers to visit the store to make a purchase.
We propose two recommendation systems based on stored POS data. The first system gathers the
customer's e-mail address during the registration procedure, directly determines recommended
products based on the stored POS data, and sends reminders with discount information to
customers by e-mail. When constructing this system for grocery stores, the sparsity of evaluation
values presents a problem, since evaluation values are constructed based on customers' purchase
frequency of products.
To summarize, when a customer first visits the shop, his details (name and phone number) and
image are captured at the entry point, and his purchase data is recorded at the exit. The overall
flow of the system is as follows:
● Customers enter the store and store cameras identify the face of the customer.
● The identified face is stored in the cloud object storage service and the respective image
URL is stored in Cloudant DB.
● This image is then sent to the custom model built for face recognition using the Visual
Recognition service.
2. LITERATURE SURVEY
Kolesar and Galbraith contend that e-retail encompasses three main activities: (1) a product
search activity that provides detailed information on the products under evaluation, which is
usually referred to as a product-evaluation or information-gathering (IG) facility; (2) an online
purchase function that facilitates consumer interaction by reducing the transaction costs; and (3)
a product delivery capability that facilitates the final product’s distribution to consumers. Darley
et al. present an overall review to understand to what extent the current marketing and consumer
behavior body of literature can be transferred to the analysis of online consumer behavior and
preferences. The paper uses the model proposed by Engel, Kollat and Blackwell as the analytical
framework to synthesize the findings from the literature. The Engel–Kollat–Blackwell (EKB)
model proposes five core stages of the decision-making process, as follows: (1) problem
recognition; (2) search; (3) alternative evaluation; (4) purchase or choice; and (5) outcomes. The
authors contend that the underlying alternative evaluation depends on internal and external
factors. The internal factors are categorized into three different classes: (1) cognitive or beliefs;
(2) affective or attitudes; and (3) conative or intentions. On the other hand, the external or
environmental factors are subdivided into four categories: (1) individual differences or
characteristics such as motives, values, lifestyle and personality; (2) socio-cultural factors, such
as culture, social class, reference groups and family; (3) situational and economic factors; and (4)
online attributes, such as a Web site’s quality, a Web site’s interface, Web site satisfaction and
Web site experience. The model is finally extended with the consequences of the decision-
making process, in which the relevance finally focuses on the satisfaction/dissatisfaction
construct. The inherent model recognizes the complexity and the multidimensionality of online
consumer behavior, and that—besides being a comprehensive approach—other interactions,
antecedents and consequences can be finally envisaged and included. In this respect, the authors
highlight that empirically tested constructs and relationships are preferred to conceptual
approaches. Two key constructs are found: (1) personal satisfaction and sustainability; and (2)
trust, security and company reputation. These two constructs are clearly related to three other
core constructs that have gained significant attention from both scholars and practitioners:
purchase, repurchase and product return. The authors conclude that the research agenda could
benefit from exploring the differences in consumers’ preferences between brick-and-mortar and
online outlets (brick to click).
As several studies indicate (e.g., Deloitte & Harrison Group, 2010; POPAI, 2011), grocery
shoppers have begun to find ways to spend less and reduce risk in the midst of the current
economic and financial crisis. They have learned new tactics to save money on supermarket
purchases and manage their household pantry, while shopping trips have also become more
careful and focused (Deloitte & Harrison Group, 2010). Consumers’ grocery shopping routine
now regularly includes strategic and tactical features like clarifying wants versus needs, delaying
gratification, lowering quality requirements, frequent channel, store and brand switching, an
intense use of coupons, loyalty cards, shopping lists and other promotional offers, stockpiling
and increasing purchase of private label products, among others (Deloitte & Harrison Group,
2010; POPAI, 2011). Furthermore, consumers are becoming increasingly less loyal to national
brands and also less likely to engage in impulse buying or new product trials, as the new aim
for grocery shopping is household gratification while maintaining quality but minimizing
expenditure (Deloitte & Harrison Group, 2010). Consumers are no longer afraid or ashamed to
be seen shopping for a bargain, often viewing price as the single most important factor in
choosing among retail brands and also a motive to patronize multiple stores, formats and retail
brands (POPAI, 2011). Shoppers are also increasingly synergizing between the off- and on-line
channels, in order to maximize the value of their purchases (POPAI, 2011; Deloitte & Harrison
Group, 2010). To the same end, they are also becoming more receptive to new electronic
shopping tools and savvier as to which fit better their purchase needs and plans, increasingly
seeking all sorts of information resources available to gain more control over their shopping
experience (POPAI, 2011). And while these new approaches and strategies are mostly based on
cutting down expenditure, most consumers still do not feel like they are sacrificing much, and
thus show no intention of returning to old shopping habits when the economy recovers (POPAI,
2011; Deloitte & Harrison Group, 2010). According to A.C. Nielsen’s annual report on consumer
confidence (2010), Portuguese consumers are no exception to this scenario. As fellow shoppers
worldwide, they are also changing their spending habits – e.g., eating out less, buying fewer
garments and more private-label products, and being more concerned about energy and gas spending.
3.1.1 Anaconda Navigator
Anaconda is a free and open-source distribution of the Python and R programming languages for
data science and machine learning applications, and Anaconda Navigator is its desktop graphical
user interface.
In order to run, many scientific packages depend on specific versions of other packages. Data
scientists often use multiple versions of many packages and use multiple environments to
separate these different versions.
The command-line program conda is both a package manager and an environment manager.
This helps data scientists ensure that each version of each package has all the dependencies it
requires and works correctly.
Navigator is an easy, point-and-click way to work with packages and environments without
needing to type conda commands in a terminal window. You can use it to find the packages you
want, install them in an environment, run the packages, and update them – all inside Navigator.
The following applications are available in Navigator:
● JupyterLab
● Jupyter Notebook
● Spyder
● VSCode
● Glueviz
● Orange 3 App
● RStudio
● To install, enter the download URL in a browser and download the latest version of Anaconda
Navigator.
3.1.2 Jupyter Notebook
You can also use Jupyter Notebooks the same way. Jupyter Notebooks are an increasingly
popular system that combine your code, descriptive text, output, images, and interactive
interfaces into a single notebook file that is edited, viewed, and used in a web browser.
The Jupyter Notebook is an open-source web application that allows you to create and share
documents that contain live code, equations, visualizations and narrative text. Uses include: data
cleaning and transformation, numerical simulation, statistical modeling, data visualization,
machine learning, and much more.
Jupyter is a free, open-source, interactive web tool known as a computational notebook, which
researchers can use to combine software code, computational output, explanatory text and
multimedia resources in a single document. Computational notebooks have been around for
decades, but Jupyter in particular has exploded in popularity over the past couple of years. This
rapid uptake has been aided by an enthusiastic community of user–developers and a redesigned
architecture that allows the notebook to speak dozens of programming languages — a fact
reflected in its name, which combines Julia (Ju), Python (Py) and R.
3.1.3 Spyder
Spyder is a free and open-source scientific environment written in Python, for Python, and
designed by and for scientists, engineers and data analysts. It features a unique combination of
the advanced editing, analysis, debugging, and profiling functionality of a comprehensive
development tool with the data exploration, interactive execution, deep inspection, and beautiful
visualization capabilities of a scientific package.
Components
Editor
Work efficiently in a multilanguage editor with a function/class browser, code analysis tools,
automatic code completion, horizontal/vertical splitting and go-to-definition.
IPython Console
Harness the power of as many IPython consoles as you like in one GUI. Run code by line, cell or
file; or work interactively with debugging, plots and magic commands.
Variable Explorer
Interact with and modify variables on the fly: plot a histogram or time series, edit a data frame or
numpy array, sort a collection, dig into nested objects, and more.
Plots
Browse, zoom, copy and save the figures and images you create.
Debugger
Trace each step of your code's execution interactively.
Help
Instantly view any object's documentation, or render your own.
3.1.4 OpenCV
OpenCV is used for image processing. OpenCV (Open Source Computer Vision Library) is a library of
programming functions mainly aimed at real-time computer vision. OpenCV-Python is a library
of Python bindings designed to solve computer vision problems. All the OpenCV array structures
are converted to and from NumPy arrays. This also makes it easier to integrate with other
libraries that use NumPy such as SciPy and Matplotlib.
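As a small illustration of this interoperability (the file name sample.jpg is only a placeholder, not part of the project), an image loaded with OpenCV can be manipulated directly with NumPy:

import cv2
import numpy as np

img = cv2.imread("sample.jpg")               # OpenCV returns the image as a NumPy ndarray (H x W x 3, BGR)
if img is None:
    raise FileNotFoundError("sample.jpg not found")
print(type(img), img.shape, img.dtype)       # <class 'numpy.ndarray'>, e.g. (480, 640, 3) uint8
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(np.mean(gray))                         # NumPy functions operate directly on OpenCV images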
OpenCV (Open Source Computer Vision Library) is an open source computer vision and
machine learning software library. OpenCV was built to provide a common infrastructure for
computer vision applications and to accelerate the use of machine perception in the commercial
products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and
modify the code.
The library has more than 2500 optimized algorithms, which include a comprehensive set of
both classic and state-of-the-art computer vision and machine learning algorithms. These
algorithms can be used, among other things, to detect and recognize faces, identify objects,
classify human actions in videos, and track camera movements and moving objects.
Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony,
Honda, Toyota that employ the library, there are many startups such as Applied Minds,
VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV’s deployed uses span the
range from stitching street view images together, detecting intrusions in surveillance video in
Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at
Willow Garage, detection of swimming pool drowning accidents in Europe, running interactive art
in Spain and New York, checking runways for debris in Turkey, inspecting labels on products in
factories around the world on to rapid face detection in Japan.
It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and
Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of
MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are
being actively developed. There are over 500 algorithms and about 10 times as many
functions that compose or support those algorithms. OpenCV is written natively in C++ and has
a templated interface that works seamlessly with STL containers.
3.1.5 NumPy
NumPy stands for Numerical Python. It is an open-source project and you can use it freely.
NumPy is a Python library used for working with arrays. It also has functions for working in the
domain of linear algebra, Fourier transforms, and matrices, and it offers comprehensive
mathematical functions and random number generators. NumPy was created in 2005 by Travis
Oliphant.
Even for the delete operation, the NumPy array is faster. Because the NumPy array is densely
packed in memory due to its homogeneous type, it also frees the memory faster. So overall, a task
executed with NumPy can be around 5 to 100 times faster than with a standard Python list, which
is a significant leap in terms of speed.
Python is by far one of the easiest programming languages to use, and NumPy is one of the
libraries that make it even more convenient. NumPy is mainly used for data manipulation and
processing in the form of arrays. Its high speed, coupled with easy-to-use functions, makes it a
favorite among data science and machine learning practitioners.
NumPy is a library for the Python programming language, adding support for large, multi-
dimensional arrays and matrices, along with a large collection of high-level mathematical
functions to operate on these arrays. Moreover, NumPy forms the foundation of the Machine
Learning stack.
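For illustration, a single vectorized NumPy expression replaces an explicit Python loop over a million elements (the numbers here are purely illustrative):

import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = a * 2.5 + 1.0            # vectorized arithmetic, no Python-level loop
print(b[:5])                 # [ 1.   3.5  6.   8.5 11. ]
print(np.linalg.norm(b))     # built-in linear algebra routine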
3.1.6 Keras
Keras is an open-source software library that provides a Python interface for artificial neural
networks. Keras acts as an interface for the TensorFlow library. Up until version 2.3, Keras
supported multiple backends, including TensorFlow, Microsoft Cognitive Toolkit, Theano, and
PlaidML.
Keras is a powerful and easy-to-use free open source Python library for developing and
evaluating deep learning models. It wraps the efficient numerical computation libraries Theano
and TensorFlow and allows you to define and train neural network models in just a few lines of
code.
Keras is a high-level neural networks API developed with a focus on enabling fast
experimentation. Being able to go from idea to result with the least possible delay is key to doing
good research. Keras has the following key features:
● User-friendly API which makes it easy to quickly prototype deep learning models.
Keras contains numerous implementations of commonly used neural-network building blocks such
as layers, objectives, activation functions, and optimizers, along with a host of tools that make
working with image and text data easier, simplifying the coding necessary for deep neural networks.
The code is hosted on GitHub, and community support forums include the GitHub issues page,
and a Slack channel.
In addition to standard neural networks, Keras has support for convolutional and recurrent neural
networks. It supports other common utility layers like dropout, batch normalization, and pooling.
Keras allows users to productize deep models on smartphones (iOS and Android), on the web, or
on the Java Virtual Machine. It also allows use of distributed training of deep-learning models on
clusters of Graphics processing units (GPU) and tensor processing units (TPU).
Keras is a minimalist Python library for deep learning that can run on top of Theano or
TensorFlow. It was developed to make implementing deep learning models as fast and easy as
possible for research and development.
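As a minimal sketch of "a few lines of code" (the toy data and layer sizes below are illustrative assumptions, not part of the project):

import numpy as np
from tensorflow import keras

# Toy data purely for illustration: 100 samples, 8 features, binary labels.
X = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=(100,))

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16, verbose=0)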
3. Create the yml file (for macOS users, TensorFlow is installed in this step).
6. Activate Anaconda.
3.1.7 TensorFlow
TensorFlow is used to create large-scale neural networks with many layers. TensorFlow is
mainly used for deep learning or machine learning problems such as classification, perception,
understanding, discovering, prediction and creation.
Just like you might have done with Keras, it's time to build up your neural network, layer by
layer. If you haven't done so already, import tensorflow into your workspace under the
conventional alias tf. Then, you can initialize the Graph with the help of Graph(). You use this
function to define the computation.
TensorFlow makes it easy for beginners and experts to create machine learning models for
desktop, mobile, web, and cloud.
TensorFlow is a popular deep learning framework. You'll also preprocess your data: you'll learn
how to visualize your images as a matrix, reshape your data and rescale the images between 0
and 1 if required. With all of this done, you are ready to construct the deep neural network
model.
For some researchers, TensorFlow is hard to learn and hard to use: research is all about flexibility,
and a lack of flexibility is baked into TensorFlow at a deep level. Some machine learning
practitioners hold a similar view.
TensorFlow makes it easy for beginners and experts to create machine learning models. See the
sections below to get started. Guides explain the concepts and components of TensorFlow.
It is hard because it is very powerful and very complex, and machine learning tutorials and
resources, more than in any other field, get outdated very quickly.
TensorFlow provides excellent functionalities and services when compared to other popular deep
learning frameworks. These high-level operations are essential for carrying out complex parallel
computations and for building advanced neural network models. TensorFlow is a low-level
library which provides more flexibility.
3.1.8 IBM Cloud
The IBM Cloud platform combines platform as a service (PaaS) with infrastructure as a service
(IaaS) to provide an integrated experience. The platform scales and supports both small
development teams and organizations, and large enterprise businesses. Globally deployed across
data centers around the world, the solution you build on IBM Cloud spins up fast and performs
reliably in a tested and supported environment you can trust!
IBM Cloud provides solutions that enable higher levels of compliance, security, and
management, with proven architecture patterns and methods for rapid delivery for running
mission-critical workloads. Available in data centers worldwide, across 19 countries with
multizone regions in North and South America, Europe, Asia, and Australia, IBM Cloud lets you
deploy locally with global scalability.
● With a public cloud, the resources are made available to you over the public internet. It
is a multi-tenant environment, and resources like hardware and infrastructure are
managed by IBM.
● A hybrid cloud solution is a combination of public and private, giving you the flexibility
to move workloads between the two based on your business and technological needs.
IBM uses Red Hat OpenShift on IBM Cloud, the market-leading hybrid cloud container
platform for hybrid solutions that enables you to build once and deploy anywhere. With
IBM Cloud Satellite, you can create a hybrid environment that brings the scalability and
on-demand flexibility of public cloud services to the applications and data that run in
your secure private cloud.
● Support for multicloud and hybrid multicloud solutions is also available, which makes
it easy for you to work with different vendors. IBM Cloud Paks are software products
for hybrid clouds that enable you to develop apps once and deploy them anywhere.
● Virtual Private Cloud (VPC) is available as a public cloud offering that lets you
establish your own private cloud-like computing environment on shared public cloud
infrastructure. With VPC, enterprises can define and control a virtual network that is
logically isolated from all other public cloud tenants, creating a private, secure place on
the public cloud.
With our open source technologies, such as Kubernetes, Red Hat OpenShift, and a full range of
compute options, including virtual machines, containers, bare metal, and serverless, you have the
control and flexibility that's required to support workloads in your hybrid environment. You can
deploy cloud-native apps while also ensuring workload portability.
Whether you need to migrate apps to the cloud, modernize your existing apps by using cloud
services, ensure data resiliency against regional failure, or adopt new paradigms and deployment
models, IBM Cloud supports these needs.
The IBM Cloud platform is composed of multiple components that work together to provide a
consistent and dependable cloud experience:
● A robust console that serves as the front end for creating, viewing, managing your
cloud resources
● An identity and access management component that securely authenticates users for
platform services and controls access to resources consistently across IBM Cloud
● A search and tagging mechanism for filtering and identifying your resources
● An account and billing management system that provides exact usage for pricing
plans and secure credit card fraud protection.
Whether you have existing code that you want to modernize and bring to the cloud or you're
developing a brand new application, your developers can tap into the rapidly growing ecosystem
of available services and runtime frameworks in IBM Cloud.
If you're a developer and you're just trying out IBM Cloud, you can go straight to the catalog and
browse the products that you'd like to explore and add to your Lite account. When you're ready
to get started with an environment and get apps running in production, consider setting up the
basics in your account:
● User access groups for organizing users and service IDs into one entity to make
assigning access a streamlined process.
● Resource groups for organizing your resources to make assigning access to a set of
resources quick and easy.
As a financial officer for your company, you might be interested in simplifying how you manage
billing and usage across multiple teams and departments. With a Subscription account, you can
create an IBM Cloud enterprise, which offers centralized account management, consolidated
billing, and top-down usage reporting. An enterprise consists of an enterprise account, account
groups, and individual accounts.
● The enterprise account is the parent account to all other accounts in the enterprise.
Billing for the entire enterprise is managed at the enterprise account level.
● Account groups provide a way to organize related accounts. And, you get a unified
view of resource usage costs across all accounts that are included in an account
group.
IBM Cloud provides a full-stack, public cloud platform with various products in the catalog,
including options for compute, storage, networking, end-to-end developer solutions for app
development, testing and deployment, security management services, traditional and open source
databases, and cloud-native services. You can find all of these services on the Services tab in the
catalog. The lifecycle and operations of these services are the responsibility of IBM.
The Software tab includes a growing catalog of software products, including Cloud Paks, starter
kits, Helm charts, Operators, and virtual server images. Even though you're responsible for the
lifecycle management, deployment, and configuration of these software products on your own
computer resources, you can take advantage of a simplified installation process to get up and
running quickly.
The catalog also supports command-line interfaces (CLIs) and a RESTful API for users to
retrieve information about existing products, and create, manage, and delete their resources.
The following table lists the filter options that you can use when you search the catalog.
AI/Machine Learning: Products that enable systems to learn from data rather than through
explicit programming.
Analytics: Products that facilitate the analysis of data, typically large sets of business data, by the
use of mathematics, statistics, and other means.
Blockchain: Products that facilitate the process of recording transactions and tracking assets in a
business network.
Compute: Infrastructure resources that serve as the basis for building apps in the cloud.
Containers: A standard unit of software that packages up code and all its dependencies so the app
runs quickly and reliably from one computing environment to another.
Databases: Products that provide some form of access to a database without the need for setting
up physical hardware, installing software, or configuring for performance.
Developer Tools: Products that support developing, testing, and debugging software.
Integration: Products that facilitate the connection of data, apps, APIs, and devices across an
organization to be more efficient, productive, and agile.
Internet of Things: Products that support receiving and transferring data over wireless networks
without human intervention.
Logging and Monitoring: Products that support storing, searching, analyzing, and monitoring log
data and events. And, products that support reviewing and managing the operational workflow
and processes being logged.
Mobile: Products with specific or special utility for users creating things to be used on mobile
devices.
Networking: Products that support or augment the linking of computers so they can operate
interactively.
Storage: Products that support data to be created, read, updated, and deleted.
You can view the pricing details for each service when you're browsing the catalog. If you
choose a service with a paid plan, you can estimate your costs by using the cost estimator tool.
IBM Cloud billing provides multiple services that ensure the IBM Cloud platform can securely
manage pricing, accounts, usage, and more.
Account management
Account management maintains the billing relationship with the customer. Each account is a
billing entity that represents a customer. This service controls account lifecycle, subscription,
user relationship, and organization.
Usage metering
With usage metering, service providers can submit metrics that are collected for resource
instances that are created by IBM Cloud users. Third-party service providers that deliver an
integrated billing service are required to submit usage for all active service instances every hour.
Usage reports
Usage reports return the summary for the account for the specified month. Account billing
managers are authorized to access the reports.
The IBM Cloud Security and Compliance Center offers a single location where you can validate
that your resources are meeting your continuous security and compliance requirements.
You can create profiles and config rules to ensure that specific areas of your business adhere to
your defined requirements or industry regulations. From the Security and Compliance Center
dashboard, you can download detailed reports that you can use to provide evidence of compliance
to auditors and stakeholders.
Creating resources
The resource controller is the next-generation IBM Cloud platform provisioning layer that
manages the lifecycle of IBM Cloud resources in your account. Resources are created globally in
an account scope. The resource controller supports the creation of resources both synchronously
and asynchronously. Examples of resources include databases, accounts, processors, memory,
and storage limits.
In general, resources that are tracked by the provisioning layer are intended to associate usage
metrics and billing, but that isn’t always the case. In some cases, the resource might be
associated with the provisioning layer to ensure that its lifecycle can be managed along with the
account lifecycle. The resource controller uses IBM Cloud Identity and Access Management
(IAM) for authentication and authorization of actions that are taken against the provisioning
layer.
The resource controller provides common APIs to control the lifecycle of resources from
creating an instance to creating access credentials to removing access to deleting an instance.
The search service is a global and shared resource properties repository that is integrated within
the IBM Cloud platform.
This service also manages tags that are associated with a resource. You can create, delete, search,
attach, or detach tags with the Tagging API. Tags are uniquely identified by a CRN identifier.
Tags have a name, which must be unique within a billing account. You can create tags in
key:value pairs or label format.
Observability offers a single location where you can monitor and observe your applications and
services in IBM Cloud.
With the IBM Log Analysis service, you can add log management capabilities to your IBM
Cloud architecture and you can manage system and application logs. It offers advanced features
to monitor and troubleshoot, define alerts, and design custom dashboards.
You can gain operational visibility into the performance and health of your applications,
services, and platforms with the IBM Cloud Monitoring service. It offers a full stack telemetry
with advanced features to monitor and troubleshoot, define alerts, and design custom dashboards.
Use the IBM Cloud Activity Tracker service to monitor the activity of your IBM Cloud account,
investigate abnormal activity and critical actions, and comply with regulatory audit requirements.
In addition, you can be alerted to actions as they happen. The events that are collected comply
with the Cloud Auditing Data Federation (CADF) standard.
Viewing status
The IBM Cloud Status page is the central place to find all unplanned incidents, planned
maintenance, announcements, and security bulletin notifications about key events that affect the
IBM Cloud platform. You can filter these categories by selecting specific locations, components,
types of ongoing events, or by using keyword searches.
Depending on your IBM Cloud account type, you can choose to receive email notifications about
IBM Cloud platform-related items and resource-related items from the Notification preferences
page. Platform-related items include announcements, billing and usage, and ordering. Resource-
related items include incidents, maintenance, security bulletins, and resource activity.
● Fill in the necessary information, verify your email, and then click Create account.
● Log in using the credentials provided while registering and click Proceed.
IBM Cloud Object Storage
Benefits
Save on associated storage costs that include server, power, and data center space requirements.
Reduce downtime
Maintain more streamlined storage environments to help lower day-to-day touch points for IT
storage teams.
Enable higher developer productivity from the increased agility in object-based storage
environments.
Increase scalability and performance with object-based storage environments to win more
business.
How is it used?
IBM Cloud Object Storage offers a scalable, secure destination to back up your critical data. It
reduces the cost of backups while still retaining immediate access.
Data archiving
Consolidate archive data and store it in IBM Cloud Object Storage, where it is cost-effective,
permanently available and protected.
Build integrated apps using compute runtimes and microservices and IBM Cloud Object Storage
services for data storage. SDKs support API functions.
Create a data lake in IBM Cloud Object Storage and extract actionable insights, using query-in-
place, analytics and machine-learning tools.
Features
Smart Tier
Smart Tier automates tier classification and cost optimization based on data activity. It’s ideal for
unknown or changing data usage patterns.
Built for 99.99999999% data durability. Select the resiliency option for the location, availability
and performance you need. Individual results vary.
Manage encryption keys with IBM Key Protect. Set role-based policies and access permission
with IBM Cloud Identity and Access Management Services.
Securely move data to IBM Cloud Object Storage with the natively integrated Aspera high-speed
data transfer option. Upload data at no cost.
Analyze data fast in IBM Cloud Object Storage with IBM SQL Query. Tap into your data simply
to extract, transform and load (ETL) data.
Quota management
Manage storage cost by limiting bucket usage. Once the limit is reached, you can’t add data
unless you extend the limit. Set a threshold-based alert.
Integration with IBM Cloud Hyper Protect Crypto Services offers enhanced data security,
encryption options, and more granular control and authority.
Accelerated archive
With a faster restore option of up to two hours, you can access your dormant data faster, while
saving on storage costs for your long-term data.
CloudantDB
A fully managed, distributed database optimized for heavy workloads and fast-growing web and
mobile apps, IBM Cloudant is available as an IBM Cloud service with a 99.99% SLA. Cloudant
elastically scales throughput and storage, and its API and replication protocols are compatible
with Apache CouchDB for hybrid or multi cloud architectures.
How is it used?
Here, we will create a serverless web application by hosting static website content on GitHub
Pages and implementing the application back end using IBM Cloud Functions.
In this tutorial, you'll learn how to use IBM Cloud Functions along with cognitive and data
services to build a serverless back end for a mobile app.
This tutorial shows how to set up an IoT device and gather data in the IBM Watson IoT
Platform. Create visualizations and use advanced ML services to analyze historical data and
detect anomalies.
It shows how to pair the API and powerful replication protocol of Cloudant with Apache
CouchDB in a hybrid cloud environment.
Features
Serverless
Instantly deploy an instance, create databases and independently scale throughput capacity and
data storage to meet your application requirements.
Secure
Encrypt all data, with optional user-defined encryption key management through IBM Key
Protect, and integrate with IBM Identity and Access Management.
Global availability
Get continuous availability as Cloudant distributes data across availability zones and six regions
for app performance and disaster recovery requirements.
3.1.9 Node-RED
Node-RED is a programming tool for wiring together hardware devices, APIs and online
services in new and interesting ways.
It provides a browser-based editor that makes it easy to wire together flows using the wide range
of nodes in the palette that can be deployed to its runtime in a single click.
Node-RED provides a browser-based flow editor that makes it easy to wire together flows using
the wide range of nodes in the palette. Flows can be then deployed to the runtime in a single-
click.
JavaScript functions can be created within the editor using a rich text editor.
A built-in library allows you to save useful functions, templates or flows for re-use.
Built on Node.js
The light-weight runtime is built on Node.js, taking full advantage of its event-driven, non-
blocking model. This makes it ideal to run at the edge of the network on low-cost hardware such
as the Raspberry Pi as well as in the cloud.
With over 225,000 modules in Node's package repository, it is easy to extend the range of palette
nodes to add new capabilities.
Social Development
The flows created in Node-RED are stored using JSON which can be easily imported and
exported for sharing with others.
An online flow library allows you to share your best flows with the world.
RAM: 8 GB
NIC: 4 × 10/100 Mbps
Module: tensorflow.keras.preprocessing.image
Classes
Class ImageDataGenerator: Generate batches of tensor image data with real-time data
augmentation.
Functions
We apply the ImageDataGenerator class to the training and test datasets.
● With width_shift_range=2 possible values are integers [-1, 0, +1], same as with
width_shift_range=[-1, 0, +1], while with width_shift_range=1.0 possible values are
floats in the interval [-1.0, +1.0).
We give the paths of both the train and test datasets. After this, it displays the number of
images and classes found in each.
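A sketch of this step is shown below. The directory names, target size, and augmentation parameters are illustrative assumptions; the report only states that separate train and test folders exist, each with four class sub-folders.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1.0 / 255,        # scale pixel values to [0, 1]
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)

x_train = train_datagen.flow_from_directory("dataset/train",
                                            target_size=(128, 128),
                                            batch_size=32,
                                            class_mode="categorical")
x_test = test_datagen.flow_from_directory("dataset/test",
                                          target_size=(128, 128),
                                          batch_size=32,
                                          class_mode="categorical")
# flow_from_directory prints, e.g., "Found 640 images belonging to 4 classes."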
Model groups layers into an object with training and inference features.
Sequential Model
A Sequential model is appropriate for a plain stack of layers where each layer has exactly one
input tensor and one output tensor.
Class Convolution2D
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor
of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if
activation is not None, it is applied to the outputs as well.
When using this layer as the first layer in a model, provide the keyword argument input_shape
(tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 3) for
128x128 RGB pictures in data_format="channels_last". You can use None when a dimension has
variable size.
Class MaxPooling2D
It downsamples the input along its spatial dimensions (height and width) by taking the
maximum value over an input window (of size defined by pool_size) for each channel of the
input. The window is shifted by strides along each dimension.
Class Flatten
Flatten flattens the input without affecting the batch size. If inputs are shaped (batch,) without a
feature axis, then flattening adds an extra channel dimension and the output shape is (batch, 1).
Class Dense
Dense implements the operation: output = activation(dot(input, kernel) + bias) where activation is the
element-wise activation function passed as the activation argument, kernel is a weights matrix
created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is
True). These are all attributes of Dense.
If the input to the layer has a rank greater than 2, then Dense computes the dot product between
the inputs and the kernel along the last axis of the inputs and axis 1 of the kernel (using
tf.tensordot). For example, if input has dimensions (batch_size, d0, d1), then we create a kernel
with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of
shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have
shape (batch_size, d0, units).
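The shape rule just described can be checked with a short, self-contained snippet (the tensor sizes are arbitrary illustrative values):

import tensorflow as tf

x = tf.random.normal((32, 10, 64))      # (batch_size, d0, d1)
y = tf.keras.layers.Dense(units=8)(x)   # kernel has shape (64, 8) and acts on the last axis
print(y.shape)                          # (32, 10, 8) -> (batch_size, d0, units)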
Besides, layer attributes cannot be modified after the layer has been called once (except the
trainable attribute). When the popular kwarg input_shape is passed, Keras will create an input
layer to insert before the current layer.
● We are adding the convolution layer and giving the dimensions of the input image. The
'relu' activation is used to introduce non-linearity into the hidden layers.
● We are adding the MaxPooling layer and giving the pool size, i.e., the size of the window
over which the maximum value is taken.
● We repeat the above two steps for two hidden convolutional blocks.
● Then, using the Flatten class, we convert the feature maps into a one-dimensional vector.
● Next, we add the dense layers. Dense is used for the fully connected layers, and the kernel
initializer sets their initial weights. We are using two hidden dense layers and one output layer.
● Units: the number of neurons (outputs) in the layer. Initializers define the way to set the
initial random weights of Keras layers.
● The softmax function is used as the activation function in the output layer of neural
network models that predict a multinomial probability distribution.
● Softmax is used as the activation function for multi-class classification problems where
class membership is required on more than two class labels.
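A sketch of the model described by these steps follows; the filter counts, dense-layer sizes, and the four output classes are assumptions based on the surrounding text and the dataset description, not the report's exact code.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense

model = Sequential()
# Convolution layer with the input image dimensions; 'relu' adds non-linearity.
model.add(Convolution2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Second convolution + pooling block.
model.add(Convolution2D(32, (3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Flatten the feature maps into a one-dimensional vector.
model.add(Flatten())
# Two fully connected hidden layers and one output layer.
model.add(Dense(units=128, kernel_initializer="random_uniform", activation="relu"))
model.add(Dense(units=64, kernel_initializer="random_uniform", activation="relu"))
model.add(Dense(units=4, activation="softmax"))   # four classes, per the dataset description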
Here, we are compiling the model using the compile function. The compilation is the final step in
creating a model.
Loss function is used to find error or deviation in the learning process. Keras requires a loss
function during the model compilation process.
Keras provides quite a few loss functions in the losses module; they are as follows:
● mean_squared_error
● mean_absolute_error
● mean_absolute_percentage_error
● mean_squared_logarithmic_error
● hinge
● categorical_hinge
● logcosh
● huber_loss
● categorical_crossentropy
● sparse_categorical_crossentropy
● binary_crossentropy
● kullback_leibler_divergence
● poisson
● cosine_proximity
● is_categorical_crossentropy
Optimization is an important process which optimizes the input weights by comparing the
prediction and the loss function.
A metric is used to evaluate the performance of your model. It is similar to a loss function, but it
is not used in the training process. Keras provides quite a few metrics in the metrics module; they
are as follows:
● accuracy
● binary_accuracy
● sparse_categorical_accuracy
● top_k_categorical_accuracy
● sparse_top_k_categorical_accuracy
● cosine_proximity
● clone_metric
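Continuing the model sketch above, one plausible compilation step for this multi-class problem is shown below; the specific optimizer choice is an assumption, since the report only lists the available options.

model.compile(loss="categorical_crossentropy",   # multi-class classification loss
              optimizer="adam",
              metrics=["accuracy"])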
fit_generator is used when either we have a dataset too large to fit into memory or when data
augmentation needs to be applied. Here we start by first initializing the number of epochs we are
going to train our network for, along with the batch size.
Performing data augmentation is a form of regularization, enabling our model to generalize better.
However, applying data augmentation implies that our training data is no longer “static” — the
data is constantly changing.
Each new batch of data is randomly adjusted according to the parameters supplied to
ImageDataGenerator .
Thus, we now need to utilize Keras’ fit_generator function to train our model.
As the name suggests, the fit_generator function assumes there is an underlying function that is
generating the data for it. The function itself is a Python generator.
Internally, Keras is using the following process when training a model with fit_generator :
1. Keras calls the generator function supplied to fit_generator (in this case, aug.flow).
2. The generator function yields a batch of the specified batch size to fit_generator.
3. The fit_generator function accepts the batch of data, performs backpropagation, and
updates the weights in our model.
4. This process is repeated until we have reached the desired number of epochs.
We compute the steps_per_epoch value as the total number of training data points divided by the
batch size. Once Keras hits this step count it knows that it’s a new epoch.
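A sketch of this training step, reusing the generators and model from the earlier sketches; the epoch count and batch size are illustrative values, not taken from the report.

EPOCHS = 25
BS = 32

# Note: in newer TensorFlow versions, model.fit accepts generators directly.
history = model.fit_generator(
    x_train,                                 # the augmented training generator
    steps_per_epoch=x_train.samples // BS,   # total training images / batch size
    validation_data=x_test,
    validation_steps=x_test.samples // BS,
    epochs=EPOCHS)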
NaNs and None will be converted to null, and datetime objects will be converted to UNIX
timestamps.
Here, we load our trained face recognition model and test it by giving it an image from our test set.
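A hedged sketch of this test step; the saved model file name and the test image path are assumptions, not values given in the report.

import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

model = load_model("face_model.h5")                          # assumed file name
img = image.load_img("dataset/test/class1/sample.jpg",       # assumed test image path
                     target_size=(128, 128))
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)  # shape (1, 128, 128, 3)
pred = int(np.argmax(model.predict(x), axis=1)[0])
print("Predicted class index:", pred)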
In this step, we will be creating a Cloud Object Storage (COS) bucket and a customer Cloudant
database. Once you log in to IBM Cloud, click on Storage and open the Cloud Object Storage service.
Resiliency: Regional
Location: jp-tok
Now that we have created the bucket, let's give public access to the bucket.
Once the bucket is created, we can upload documents, images, or video files to the bucket either
through the console or programmatically. In order to upload documents through a program, you
need the following credentials:
● API Key
● Authentication endpoint
● COS CRN
Authentication Endpoint:
As you may recall, we created our bucket in the jp-tok region; each bucket region has a different
endpoint, which you can get from the Endpoints option:
s3.jp-tok.cloud-object-storage.appdomain.cloud
Now let's create service credentials to save the other two necessary values. Click on the grocery
service instance and save the marked credentials for future use.
"apikey": "1-xLp8tyZtKw9owdRdlGBHorT9n98AmVr8SiN2-SJMsL",
"endpoints": "https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints",
"iam_apikey_name": "Grocery",
"iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
"resource_instance_id": "crn:v1:bluemix:public:cloud-object-
storage:global:a/8956b3127dfe48a5aa1b2fab1e8bf1bf:2d1c5aa1-5459-4ab9-9076-
f609ef602407::"
In order to store the image URLs and details of customers, you have to create databases.
We will be creating a database to store image URLs using programming, but let us create a
customer database to store details manually.
In order to store the product’s offers, you need to create a product offers database.
As we are going to save the image URL programmatically, we need some credentials to be
configured in the Python script. Let's save the credentials: from the IBM Cloud dashboard, click
on Services.
"username": "apikey-v2-23wv2y7u5lm5jsd32ecfdg8o4ij0pfw4g02q0wxre86i"
"password": "7d45bd6c69660ea483a269aafc26084a"
"url": "https://apikey-v2-
23wv2y7u5lm5jsd32ecfdg8o4ij0pfw4g02q0wxre86i:7d45bd6c69660ea483a269aafc26084a@b6
e4b265-79c5-47ca-ae29-b0013433eda4-bluemix.cloudantnosqldb.appdomain.cloud"
OpenCV Analysis
Let's build a Python script that captures video frames, detects the faces in them, and stores the
face image in Cloud Object Storage. The image URL is also stored in Cloudant DB.
Initialize the credentials such as the API key, authentication endpoint, and COS CRN as shown
below.
Copy the API key and paste it into the code at COS_API_KEY_ID
Initialize the Cloudant credentials: copy the username, password, and URL and paste them into
the code below.
● database_name: the database that stores the image URLs of detected faces
● pic: the name of the image, which gets appended to the image URL
cos: establishes the connection to the Cloud Object Storage service using the given credentials.
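A sketch of this connection using the ibm_boto3 SDK; the placeholder values stand for the credentials saved earlier, and the IAM auth endpoint shown is the commonly used default rather than a value given in the report.

import ibm_boto3
from ibm_botocore.client import Config

COS_API_KEY_ID = "<apikey from the service credentials>"
COS_INSTANCE_CRN = "<resource_instance_id from the service credentials>"
COS_AUTH_ENDPOINT = "https://iam.cloud.ibm.com/identity/token"   # assumed default IAM endpoint
COS_ENDPOINT = "https://s3.jp-tok.cloud-object-storage.appdomain.cloud"

cos = ibm_boto3.client("s3",
                       ibm_api_key_id=COS_API_KEY_ID,
                       ibm_service_instance_id=COS_INSTANCE_CRN,
                       ibm_auth_endpoint=COS_AUTH_ENDPOINT,
                       config=Config(signature_version="oauth"),
                       endpoint_url=COS_ENDPOINT)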
Let us create a function that stores an image in the Cloud Object Storage bucket.
We also configure the properties of the images that will be stored in COS.
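A minimal sketch of such a helper, assuming the bucket name userimages used later in this section:

def store_image(cos_client, bucket_name, item_name, image_bytes):
    """Store a JPEG-encoded frame in the given COS bucket."""
    cos_client.put_object(Bucket=bucket_name,
                          Key=item_name,
                          Body=image_bytes,
                          ContentType="image/jpeg")
    print("Stored", item_name, "in bucket", bucket_name)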
Once all the necessary initializations are done, let's write the script for storing the image in COS
and, in parallel, create a URL for the detected image and store it in Cloudant DB.
Loop over the frames: first read each frame and then apply the face detection algorithm, as shown
in the sketch below.
Show the detected face in the OpenCV window with a rectangle drawn around it, and call the
previously defined function to store the image in the bucket we created; in our case it is userimages.
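A sketch of the capture loop; the Haar cascade face detector is the standard OpenCV approach and is assumed here, together with the cos client and store_image helper from the earlier sketches.

import time
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        ok, jpeg = cv2.imencode(".jpg", frame[y:y + h, x:x + w])
        if ok:
            pic = "pic-{}.jpg".format(int(time.time()))
            store_image(cos, "userimages", pic, jpeg.tobytes())
    cv2.imshow("Face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to stop capturing
        break

cap.release()
cv2.destroyAllWindows()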
In the code below, we check whether the database already exists; if it does not, it is created with
the given name (customerimages).
Cloudant DB accepts JSON documents, so we create a JSON document that has the id and URL
of the image.
Once the URL is created, we store the JSON document in the created database.
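A sketch using the official cloudant Python library; the placeholder credentials stand for the username, password, and URL saved above, and the URL construction simply reuses the COS endpoint, bucket name, and pic variable from the earlier sketches.

from cloudant.client import Cloudant

client = Cloudant("<username>", "<password>", url="<url>", connect=True)

database_name = "customerimages"
if database_name in client.all_dbs():
    db = client[database_name]
else:
    db = client.create_database(database_name)      # created if it does not exist

image_url = COS_ENDPOINT + "/userimages/" + pic     # URL of the stored face image
json_document = {"_id": pic, "url": image_url}
db.create_document(json_document)                   # store the id and URL of the image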
You can terminate the video capture by pressing q on the window while the program is running.
The video capture stops, the camera is released, and the OpenCV window is closed.
Now that you have finished writing the code, let's run it and see whether the detected faces are
stored. Another window opens and you can see the live stream of the video and the detected face.
Navigate to the bucket created in COS to see the saved images.
Navigate to Cloudant DB, open the customerimages database, and see that the URL is stored.
● Drag an HTTP request node onto the flow, which fetches the image URL from Cloudant DB.
● Give the image to the Visual Recognition node to recognize the customer's name.
● Grab the previous purchase history of the customer based on the recognized name.
Now let's create a flow that fetches the image from Cloudant DB. Append the following query
parameters to the database URL so that only the most recent document is returned:
?include_docs=true&descending=true&limit=1
https://83d0fbd2-6007-46f4-bf6e-e34b4edbb4bb-bluemix.cloudant.com/customerimages/_all_docs?include_docs=true&descending=true&limit=1
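The same query can be tried outside Node-RED; this small sketch uses the requests library purely for illustration, with the Cloudant credentials shown earlier as placeholders.

import requests

url = ("https://<cloudant-host>/customerimages/_all_docs"
       "?include_docs=true&descending=true&limit=1")
resp = requests.get(url, auth=("<username>", "<password>"))
latest = resp.json()["rows"][0]["doc"]
print(latest["url"])          # URL of the most recently stored face image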
The dataset is divided into a train set and a test set. The train set contains four classes, each with
160 images; the test set contains the same four classes, each with 40 images.
5. OUTPUT SCREENS
This is how the output looks: in the dashboard, the image of the customer, along with the
customer details and the product offers, is displayed.
6. CONCLUSION
The “Grocery Offer Suggestion System for Retail Markets” project has been successfully
completed. The goal of the system is achieved, and the problem is solved.
The grocery offer system provides appropriate product offer suggestions to customers. By using
their previous purchase history, we provide offers to customers in a physical grocery store. This
induces customers to purchase products in the retail market, which also boosts the business of
the grocery store.
7. FUTURE ENHANCEMENTS
As seen earlier, recommendations are suggestions or lists of items that users might like, and these
recommendations are independent of the consumer. The field is continually evolving and
changing.
This helps in the creation of custom alternatives that meet the individual customer's preferences,
based on purchase history and domain knowledge.
REFERENCES
Pan, S.; Giannikas, V.; Han, Y.; Grover-Silva, E.; Qiao, B. Using customer-related data to
enhance e-grocery home delivery. Ind. Manag. Data Syst. 2017, 117, 1917–1933.
Darley, W.K.; Blankson, C.; Luethge, D.J. Toward an integrated framework for online consumer
behavior and decision making process: A review. Psychol. Mark. 2010, 27, 94–116.
Engel, J.F.; Blackwell, R.D.; Miniard, P.W. Consumer Behavior, 5th ed.; Dryden: Hinsdale, IL,
USA, 1986.
Nguyen, D.H.; de Leeuw, S.; Dullaert, W.E. Consumer behaviour and order fulfilment in online
retailing: A systematic review. Int. J. Manag. Rev. 2018, 20, 255–276.
Rotem-Mindali, O.C.; Salomon, I. Modeling consumers’ purchase and delivery choices in the
face of the information age. Environ. Plan. B Plan. Des.