
Design Thinking

Design Thinking is a design methodology that provides a solution-based approach
to solving problems. It’s extremely useful in tackling complex problems that are
ill-defined or unknown, by understanding the human needs involved, by re-framing
the problem in human-centric ways, by creating many ideas in brainstorming
sessions, and by adopting a hands-on approach in prototyping and testing.
Understanding these five stages of Design Thinking will empower anyone to apply
the Design Thinking methods in order to solve complex problems that occur around
us — in our companies, in our countries, and even on the scale of our planet.

We will focus on the five-stage Design Thinking model proposed by the Hasso-
Plattner Institute of Design at Stanford (d.school). The d.school is the leading
institution when it comes to teaching Design Thinking. The five stages of Design
Thinking, according to d.school, are as follows: Empathise, Define (the problem),
Ideate, Prototype, and Test. Let’s take a closer look at the five different stages of
Design Thinking.

1. Empathise


The first stage of the Design Thinking process is to gain an empathic
understanding of the problem you are trying to solve. This involves consulting
experts to find out more about the area of concern through observing, engaging and
empathizing with people to understand their experiences and motivations, as well
as immersing yourself in the physical environment so you can gain a deeper
personal understanding of the issues involved. Empathy is crucial to a human-
centered design process such as Design Thinking, and empathy allows design
thinkers to set aside their own assumptions about the world in order to gain insight
into users and their needs.

Depending on time constraints, a substantial amount of information is gathered at
this stage to use during the next stage and to develop the best possible
understanding of the users, their needs, and the problems that underlie the
development of that particular product.

2. Define (the Problem)


During the Define stage, you put together the information you have created and
gathered during the Empathise stage. This is where you will analyse your
observations and synthesise them in order to define the core problems that you and
your team have identified up to this point. You should seek to define the problem
as a problem statement in a human-centred manner.

To illustrate, instead of defining the problem as your own wish or a need of the
company such as, “We need to increase our food-product market share among
young teenage girls by 5%,” a much better way to define the problem would be,
“Teenage girls need to eat nutritious food in order to thrive, be healthy and grow.”

The Define stage will help the designers in your team gather great ideas to
establish features, functions, and any other elements that will allow them to solve
the problems or, at the very least, allow users to resolve issues themselves with the
minimum of difficulty. In the Define stage, you will start to progress to the third
stage, Ideate, by asking questions that can help you look for ideas for solutions,
such as: “How might we… encourage teenage girls to perform an action that
benefits them and also involves your company’s food product or service?”

3. Ideate


During the third stage of the Design Thinking process, designers are ready to start
generating ideas. You’ve grown to understand your users and their needs in the
Empathise stage, and you’ve analysed and synthesised your observations in the
Define stage, and ended up with a human-centered problem statement. With this
solid background, you and your team members can start to "think outside the box"
to identify new solutions to the problem statement you’ve created, and you can
start to look for alternative ways of viewing the problem. There are hundreds
of Ideation techniques such as Brainstorm, Brainwrite, Worst Possible Idea,
and SCAMPER. Brainstorm and Worst Possible Idea sessions are typically used to
stimulate free thinking and to expand the problem space. It is important to get as
many ideas or problem solutions as possible at the beginning of the Ideation phase.
You should pick some other Ideation techniques by the end of the Ideation phase to
help you investigate and test your ideas so you can find the best way to either solve
a problem or provide the elements required to circumvent it.

4. Prototype

The design team will now produce a number of inexpensive, scaled down versions
of the product or specific features found within the product, so they can investigate
the problem solutions generated in the previous stage. Prototypes may be shared
and tested within the team itself, in other departments, or on a small group of
people outside the design team. This is an experimental phase, and the aim is to
identify the best possible solution for each of the problems identified during the
first three stages. The solutions are implemented within the prototypes, and, one by
one, they are investigated and either accepted, improved and re-examined, or
rejected on the basis of the users’ experiences. By the end of this stage, the design
team will have a better idea of the constraints inherent to the product and the
problems that are present, and have a clearer view of how real users would behave,
think, and feel when interacting with the end product.

5. Test

Designers or evaluators rigorously test the complete product using the best
solutions identified during the prototyping phase. This is the final stage of the
five-stage model, but in an iterative process, the results generated during the testing
phase are often used to redefine one or more problems and inform
the understanding of the users, the conditions of use, how people think, behave,
and feel, and to empathise. Even during this phase, alterations and refinements are
made in order to rule out problem solutions and derive as deep an understanding of
the product and its users as possible.

The Non-Linear Nature of Design Thinking


We may have outlined a direct and linear Design Thinking process in which one
stage seemingly leads to the next with a logical conclusion at user testing.
However, in practice, the process is carried out in a more flexible and non-linear
fashion. For example, different groups within the design team may conduct more
than one stage concurrently, or the designers may collect information and
prototype during the entire project so as to enable them to bring their ideas to life
and visualise the problem solutions. Also, results from the testing phase may reveal
some insights about users, which in turn may lead to another brainstorming session
(Ideate) or the development of new prototypes (Prototype).

It is important to note that the five stages are not always sequential — they do not
have to follow any specific order and they can often occur in parallel and be
repeated iteratively. As such, the stages should be understood as different modes
that contribute to a project, rather than sequential steps. However, the amazing
thing about the five-stage Design Thinking model is that it systematises and
identifies the 5 stages/modes you would expect to carry out in a design project –
and in any innovative problem-solving project. Every project will involve activities
specific to the product under development, but the central idea behind each stage
remains the same.

SOME COST-EFFECTIVE SALES TECHNIQUES:

Pyramid Model – Referrals
Word of mouth
Freemium model
Sharing Economy models
Free model
Combined models (Freemium + Subscription)
Micropayment model

8 Ways the Internet of Things Will Change the Way We Work
The "Internet of Things" (IoT) may sound like the futuristic wave of talking
refrigerators and self-starting cars, but Internet-connected devices that communicate
with one another will affect our lives outside the "smart home" as well. For workers,
IoT will change the way we work by saving time and resources and opening new
opportunities for growth and innovation.

1. Even more data

The Internet of Things will be a data machine. This means that companies will have
to rethink how they collect and analyze information — not only will decision-makers
need to learn and adapt to a new form of data intelligence, but the amount and type
of information produced by IoT will also introduce new or expanded roles for data
analysts, strategists, and even customer service.

"Companies will have access to an enormous flood of data that all these connected
devices will generate," said Mary J. Cronin, professor at Boston College, Carroll
School of Management, and author of "Smart Products, Smarter Services: Strategies
for Embedded Control." "But that data needs to be analyzed to understand more
about customers and trends. Companies will need to start using IoT data as part of
their planning in order to stay competitive and to offer innovative new services and
products."

2. Know where everything is, all the time

"IoT has the potential to make the workplace life and business processes much more
productive and efficient," Cronin said.

One significant way IoT will increase productivity and efficiency is by making location
tracking much simpler and more seamless. As currently done in hospitals, Internet-
connected equipment and devices will all be geographically tagged, which will save
workers time hunting things down and save money by reducing the loss rate.  

"Companies can track every aspect of their business, from managing inventory and
fulfilling orders as quickly as possible to locating and deploying field service
staff. Tools and factories and vehicles will all be connected and reporting their
locations," Cronin said.

3. Get anywhere faster

IoT is the next big thing in your daily commute. The interconnectivity of mobile
devices, cars and the road you drive on will help reduce travel time, thus enabling
you to get to work faster or run errands in record time.

Today, the "connected car" is just the start of IoT capability. "AT&T, together with
automotive manufacturers such as GM and BMW, are adding LTE connectivity to the
car and creating new connected services, such as real-time traffic information and
real-time diagnostics for the front seat and infotainment for those in the back seat,"
said Macario Namie, vice president of marketing at Jasper Wireless, a machine-to-
machine (M2M) platform provider. 

In the future, IoT will integrate everything from streets to stoplights.

"Imagine a world in which a city’s infrastructure installed roadside sensors, whose


data could be used to analyze traffic patterns around the city and adjust traffic light
operations to minimize or perhaps eliminate traffic jams," Namie said. "This could
save a few minutes, if not hours of our day."

4. Cheaper, greener manufacturing

Thanks to IoT, device interconnectivity will facilitate the adoption of "smart grid"
technologies, which use meters, sensors and other digital tools to control the flow of
energy and can integrate alternative sources of power, such as solar and wind.

"The Internet of Things will drastically lower costs in the manufacturing business by
reducing wastage, consumption of fuel and the discarding of economically unviable
assets," Namie said. "IoT can also improve the efficiency of energy production and
transmission and can further reduce emissions by facilitating the switch to
renewables."

5. Completely remote mobile device management (MDM)

IT departments may have remote access to computers and mobile devices, but IoT
will also enable remote control of other Internet-connected devices, said Roy Bachar,
founder and chief executive officer of MNH Innovations and member of the Internet
of Things Council.

Bachar, who also works with CommuniTake, a startup that provides remote-access
technology, said that the cutting-edge technology that has given them full control
over smartphones and tablets now allows remote management over other devices,
including Android cameras and set-top boxes, among others.

Soon, MDM technologies will extend to the remote management of IoT devices,
which will introduce changes for IT departments and IoT-connected employees.

"It's clear that the telecommunication giants will play a major role in the IoT domain
and they are all introducing solutions. I believe that as early as 2014, we will see the
introduction of platforms for managing IoT applications as well as solutions
offered by companies, such as CommuniTake, for remote management of IoT
devices," Bachar said.

6. Increased device management complexity

According to Bachar, as the number of connected devices grows, so does the
complexity of managing them. For instance, today workers use smartphones for
communication, productivity and entertainment. With IoT, they will have an additional
function: controlling IoT-connected devices. "Many of the future IoT-connected
devices will not have a screen. The way to take control over the device will be via
smartphones," Bachar said.

"The complexity will also increase due to the variety of operating systems," he
added. Thus, employees and IT departments will have a broader range of platforms
to deal with, not just Android or iOS, Bachar said.

Both of these instances may require training for employees to learn how to control
and manage connected, cross-platform devices.

7. Save time and get more out of your day

Other than controlling other IoT devices, your smartphone will also be much like a
remote control for your life, said Brendan Richardson, co-founder and chief executive
officer of PsiKick, a Charlottesville, Va.-based startup that develops IoT wireless
sensors.

One of the most convenient aspects of IoT is that you have devices that "know" you
and will help save time by allowing you to get in and out of places and conduct
transactions faster using a mobile device.

"The iPhone or Android will increasingly interact with a whole range of sensors that
you never see and don't own, but which provide your smartphone with valuable
information and act on your behalf through an app," Richardson said.

With these sensors, even getting your morning coffee no longer requires waiting in
line, making for a less stressful start to your day. For instance, wireless sensors can
detect when you walk into a Starbucks, which alerts the barista of your likely order
based on your order history.  You can then confirm or choose a different order, then
pay for it using your phone, Richardson said.
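
As a purely hypothetical sketch of that flow, the snippet below wires together placeholder functions (none of them real Starbucks or vendor APIs) for detecting a customer via an in-store sensor, suggesting an order from their history, and taking a mobile payment.

# Hypothetical sketch of the sensor-driven coffee-ordering flow described above.
# All names and data here are made-up placeholders, not real Starbucks or vendor APIs.

ORDER_HISTORY = {"customer-42": "tall latte"}  # assumed per-customer order history

def notify_barista(customer_id, order):
    print(f"Barista alerted: {customer_id} will likely want a {order}")

def confirm_on_phone(customer_id, order):
    # Stand-in for a push notification the customer approves on their phone.
    return True

def charge_mobile_wallet(customer_id, order):
    print(f"Charged {customer_id}'s mobile wallet for a {order}")

def on_customer_detected(customer_id):
    # Called when an in-store wireless sensor reports that this customer walked in.
    likely_order = ORDER_HISTORY.get(customer_id, "house coffee")
    notify_barista(customer_id, likely_order)
    if confirm_on_phone(customer_id, likely_order):
        charge_mobile_wallet(customer_id, likely_order)

on_customer_detected("customer-42")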

8. You may actually have to work harder


IoT may make workers' lives easier on many levels, but Richardson said IoT also
means big changes in every industry.

"Every business and every industry will be disrupted over the next 30 years,"
Richardson said. "We're seeing this now beginning with the regular old Internet. It's
being driven by data and large-scale efficiencies when you convert something to bits
rather than atoms."

Richardson cited the evolution of movie rentals as an example.

"Netflix more or less destroyed Blockbuster by using the Internet to vastly improve
the logistics of exchanging DVDs and removing pesky late fees. Then they converted
the atoms of a DVD into bits and deliver 80 percent of their movies over broadband
now. [You get] more movies on demand and lower costs.  And an entire industry —
the DVD rental business — is consigned to the archive of history."

Richardson said such disruptions will happen in every industry, so companies and
their employees have to be prepared.

Cloud Computing
Cloud computing transforms IT infrastructure into a utility: It lets you 'plug into'
infrastructure via the internet, and use computing resources without installing and
maintaining them on-premises.

What is cloud computing?


Cloud computing is on-demand access, via the internet, to computing resources—
applications, servers (physical servers and virtual servers), data storage, development tools,
networking capabilities, and more—hosted at a remote data center managed by a cloud
services provider (or CSP). The CSP makes these resources available for a monthly
subscription fee or bills them according to usage.

Compared to traditional on-premises IT, and depending on the cloud services you select,
cloud computing helps do the following:

 Lower IT costs: Cloud lets you offload some or most of the costs and effort of
purchasing, installing, configuring, and managing your own on-premises
infrastructure. 
 Improve agility and time-to-value: With cloud, your organization can start using
enterprise applications in minutes, instead of waiting weeks or months for IT to
respond to a request, purchase and configure supporting hardware, and install
software. Cloud also lets you empower certain users—specifically developers and
data scientists—to help themselves to software and support infrastructure.
 Scale more easily and cost-effectively: Cloud provides elasticity—instead of
purchasing excess capacity that sits unused during slow periods, you can scale
capacity up and down in response to spikes and dips in traffic. You can also take
advantage of your cloud provider’s global network to spread your applications
closer to users around the world.
The term ‘cloud computing’ also refers to the technology that makes cloud work. This
includes some form of virtualized IT infrastructure—servers, operating system software,
networking, and other infrastructure that’s abstracted, using special software, so that it can be
pooled and divided irrespective of physical hardware boundaries. For example, a single
hardware server can be divided into multiple virtual servers.
Virtualization enables cloud providers to make maximum use of their data center resources.
Not surprisingly, many corporations have adopted the cloud delivery model for their on-
premises infrastructure so they can realize maximum utilization and cost savings vs.
traditional IT infrastructure and offer the same self-service and agility to their end-users.
If you use a computer or mobile device at home or at work, you almost certainly use some
form of cloud computing every day, whether it’s a cloud application like Google Gmail or
Salesforce, streaming media like Netflix, or cloud file storage like Dropbox. According to a
recent survey, 92% of organizations use cloud today, and most of them plan to use it
more within the next year.

Cloud computing services


IaaS (Infrastructure-as-a-Service), PaaS (Platform-as-a-Service), and SaaS (Software-as-a-
Service) are the three most common models of cloud services, and it’s not uncommon for an
organization to use all three. However, there is often confusion among the three and what’s
included with each:
SaaS (Software-as-a-Service)

SaaS—also known as cloud-based software or cloud applications—is application software
that’s hosted in the cloud and that you access and use via a web browser, a dedicated desktop
client, or an API that integrates with your desktop or mobile operating system. In most
cases, SaaS users pay a monthly or annual subscription fee; some may offer ‘pay-as-you-go’
pricing based on your actual usage.

In addition to the cost savings, time-to-value, and scalability benefits of cloud, SaaS offers
the following:

 Automatic upgrades: With SaaS, you take advantage of new features as soon as the
provider adds them, without having to orchestrate an on-premises upgrade.
 Protection from data loss: Because your application data is in the cloud, with the
application, you don’t lose data if your device crashes or breaks.

SaaS is the primary delivery model for most commercial software today—there are hundreds
of thousands of SaaS solutions available, from the most focused industry and departmental
applications, to powerful enterprise software such as databases and AI (artificial intelligence)
software.

PaaS (Platform-as-a-Service)

PaaS provides software developers with an on-demand platform—hardware, complete software
stack, infrastructure, and even development tools—for running, developing, and managing
applications without the cost, complexity, and inflexibility of maintaining that platform on-
premises.
With PaaS, the cloud provider hosts everything—servers, networks, storage, operating
system software, middleware, databases—at their data center. Developers simply pick from a
menu to ‘spin up’ servers and environments they need to run, build, test, deploy, maintain,
update, and scale applications.

Today, PaaS is often built around containers, a virtualized compute model one step removed
from virtual servers. Containers virtualize the operating system, enabling developers to
package the application with only the operating system services it needs to run on any
platform, without modification and without need for middleware.
Red Hat OpenShift is a popular PaaS built around Docker containers and Kubernetes, an
open source container orchestration solution that automates deployment, scaling, load
balancing, and more for container-based applications.
IaaS (Infrastructure-as-a-Service)

IaaS provides on-demand access to fundamental computing resources—physical and virtual
servers, networking, and storage—over the internet on a pay-as-you-go
basis. IaaS enables end users to scale and shrink resources on an as-needed basis, reducing
the need for high, up-front capital expenditures or unnecessary on-premises or ‘owned’
infrastructure and for overbuying resources to accommodate periodic spikes in usage.  
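
As a hedged illustration of what on-demand, pay-as-you-go infrastructure looks like in practice, the sketch below uses the AWS SDK for Python (boto3) to launch and later terminate a small virtual server; the AMI ID, region, and instance type are placeholder values, and a real deployment would also need credentials, networking, and error handling.

# Minimal IaaS self-service sketch using boto3 (AWS SDK for Python).
# The AMI ID, region, and instance type are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single small virtual server on demand (billed only while it runs).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")

# Later, release the capacity so you stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])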

In contrast to SaaS and PaaS (and even newer computing models such as containers and
serverless), IaaS provides users with the lowest-level control of computing resources in
the cloud.

IaaS was the most popular cloud computing model when it emerged in the early 2010s. While
it remains the cloud model for many types of workloads, use of SaaS and PaaS is growing at
a much faster rate.



Serverless computing 
Serverless computing (also called simply serverless) is a cloud computing model that
offloads all the backend infrastructure management tasks–provisioning, scaling, scheduling,
patching—to the cloud provider, freeing developers to focus all their time and effort on the
code and business logic specific to their applications.

What's more, serverless runs application code on a per-request basis only and scales the
supporting infrastructure up and down automatically in response to the number of requests.
With serverless, customers pay only for the resources being used when the application is
running—they never pay for idle capacity. 

FaaS, or Function-as-a-Service, is often confused with serverless computing when, in fact, it's
a subset of serverless. FaaS allows developers to execute portions of application code (called
functions) in response to specific events. Everything besides the code—physical
hardware, virtual machine operating system, and web server software management—is
provisioned automatically by the cloud service provider in real-time as the code executes and
is spun back down once the execution completes. Billing starts when execution starts and
stops when execution stops.
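
To make the per-request model concrete, here is a minimal sketch of a function as it might be written for a FaaS platform, using the event/context handler signature that AWS Lambda's Python runtime expects; the handler name and the shape of the event payload are assumptions for illustration.

# Minimal FaaS sketch: the provider invokes this handler once per incoming event
# and provisions (then tears down) all supporting infrastructure automatically.
# The event payload shape (a "name" key) is an assumption for this example.
import json

def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
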
Types of cloud computing
Public cloud
Public cloud is a type of cloud computing in which
a cloud service provider makes computing resources—anything from SaaS applications, to
individual virtual machines (VMs), to bare metal computing hardware, to complete
enterprise-grade infrastructures and development platforms—available to users over the
public internet. These resources might be accessible for free, or access might be sold
according to subscription-based or pay-per-usage pricing models.

The public cloud provider owns, manages, and assumes all responsibility for the data centers,
hardware, and infrastructure on which its customers’ workloads run, and it typically provides
high-bandwidth network connectivity to ensure high performance and rapid access to
applications and data. 

Public cloud is a multi-tenant environment—the cloud provider's data center infrastructure is
shared by all public cloud customers. In the leading public clouds—Amazon Web Services
(AWS), Google Cloud, IBM Cloud, Microsoft Azure, and Oracle Cloud—those customers
can number in the millions.
The global market for public cloud computing has grown rapidly over the past few years, and
analysts forecast that this trend will continue; industry analyst Gartner predicts
that worldwide public cloud revenues will exceed USD 330 billion by the end of 2022.

Many enterprises are moving portions of their computing infrastructure to the public cloud
because public cloud services are elastic and readily scalable, flexibly adjusting to meet
changing workload demands. Others are attracted by the promise of greater efficiency and
fewer wasted resources since customers pay only for what they use. Still others seek to
reduce spending on hardware and on-premises infrastructures.



Private cloud

Private cloud is a cloud environment in which all cloud infrastructure and computing
resources are dedicated to, and accessible by, one customer only. Private cloud combines
many of the benefits of cloud computing—including elasticity, scalability, and ease of service
delivery—with the access control, security, and resource customization of on-
premises infrastructure.

A private cloud is typically hosted on-premises in the customer's data center. But a private
cloud can also be hosted on an independent cloud provider’s infrastructure or built on rented
infrastructure housed in an offsite data center.

Many companies choose private cloud over public cloud because private cloud is an easier
way (or the only way) to meet their regulatory compliance requirements. Others
choose private cloud because their workloads deal with confidential documents, intellectual
property, personally identifiable information (PII), medical records, financial data, or
other sensitive data.

By building private cloud architecture according to cloud native principles, an organization
gives itself the flexibility to easily move workloads to public cloud or run them within
a hybrid cloud (see below) environment whenever they’re ready.
Hybrid cloud

Hybrid cloud is just what it sounds like—a combination of public and private cloud
environments. Specifically, and ideally, a hybrid cloud connects an organization's private
cloud services and public clouds into a single, flexible infrastructure for running the
organization’s applications and workloads.

The goal of hybrid cloud is to establish a mix of public and private cloud resources—and
with a level of orchestration between them—that gives an organization the flexibility to
choose the optimal cloud for each application or workload and to move workloads freely
between the two clouds as circumstances change. This enables the organization to meet its
technical and business objectives more effectively and cost-efficiently than it could with
public or private cloud alone.



Multicloud and hybrid multicloud

Multicloud is the use of two or more clouds from two or more different cloud providers.
Having a multicloud environment can be as simple as using email SaaS from one vendor and
image editing SaaS from another. But when enterprises talk about multicloud, they're
typically talking about using multiple cloud services—
including SaaS, PaaS, and IaaS services—from two or more of the
leading public cloud providers. In one survey, 85% of organizations reported using
multicloud environments.
Hybrid multicloud is the use of two or more public clouds together with a private cloud
environment. 

Organizations choose multicloud to avoid vendor lock-in, to have more services to choose
from, and to access more innovation. But the more clouds you use—each with its own set
of management tools, data transmission rates, and security protocols—the more difficult it
can be to manage your environment. Multicloud management platforms provide visibility
across multiple provider clouds through a central dashboard, where development teams can
see their projects and deployments, operations teams can keep an eye on clusters and nodes,
and the cybersecurity staff can monitor for threats.

What Is Proof of Work (PoW)?


Proof of work (PoW) describes a system that requires a not-insignificant but
feasible amount of effort in order to deter frivolous or malicious uses of
computing power, such as sending spam emails or launching denial of
service attacks. The concept was subsequently adapted to securing digital
money by Hal Finney in 2004 through the idea of "reusable proof of work"
using the SHA-256 hashing algorithm.

Following its introduction in 2009, Bitcoin became the first widely adopted
application of Finney's PoW idea (Finney was also the recipient of the first
bitcoin transaction). Proof of work forms the basis of many
other cryptocurrencies as well, allowing for secure, decentralized consensus.

KEY TAKEAWAYS

 Proof of work (PoW) is a decentralized consensus mechanism that
requires members of a network to expend effort solving an arbitrary
mathematical puzzle to prevent anybody from gaming the system.
 Proof of work is used widely in cryptocurrency mining, for validating
transactions and mining new tokens.
 Due to proof of work, Bitcoin and other cryptocurrency transactions can
be processed peer-to-peer in a secure manner without the need for a
trusted third party.
 Proof of work at scale requires huge amounts of energy, which only
increases as more miners join the network.
 Proof of stake (PoS) was one of several novel consensus mechanisms
created as an alternative to proof of work.

Understanding Proof of Work


This explanation will focus on proof of work as it functions in
the bitcoin network. Bitcoin is a digital currency that is underpinned by a kind
of distributed ledger known as a "blockchain." This ledger contains a record of
all bitcoin transactions, arranged in sequential "blocks," so that no user is
allowed to spend any of their holdings twice. In order to prevent tampering,
the ledger is public, or "distributed"; an altered version would quickly be
rejected by other users.

The way that users detect tampering in practice is through hashes, long
strings of numbers that serve as proof of work. Put a given set of data
through a hash function (bitcoin uses SHA-256), and it will only ever generate
one hash. Due to the "avalanche effect," however, even a tiny change to any
portion of the original data will result in a totally unrecognizable hash.
Whatever the size of the original data set, the hash generated by a given
function will be the same length. The hash is a one-way function: it cannot be
used to obtain the original data, only to check that the data that generated the
hash matches the original data.
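
A quick way to see these properties (determinism, fixed length, and the avalanche effect) is to hash two nearly identical inputs with Python's standard hashlib module; the input strings below are arbitrary examples.

# SHA-256 is deterministic and fixed-length, yet a one-character change to the
# input produces a completely unrelated digest (the "avalanche effect").
import hashlib

original = b"Alice pays Bob 10 BTC"
tampered = b"Alice pays Bob 11 BTC"  # a one-character change

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tampered).hexdigest())
# Both digests are 64 hex characters long, but they look entirely unrelated,
# so even a tiny edit to the underlying data is immediately detectable.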

Generating just any hash for a set of bitcoin transactions would be trivial for a
modern computer, so in order to turn the process into "work," the bitcoin
network sets a certain level of "difficulty." This setting is adjusted so that a
new block is "mined" – added to the blockchain by generating a valid hash –
approximately every 10 minutes. Setting difficulty is accomplished by
establishing a "target" for the hash: the lower the target, the smaller the set of
valid hashes, and the harder it is to generate one. In practice, this means a
hash that starts with a very long string of zeros.

Proof of work was initially created as a proposed solution to the growing
problem of spam email.

Special Considerations
Since a given set of data can only generate one hash, how do miners make
sure they generate a hash below the target? They alter the input by adding an
integer, called a nonce ("number used once"). Once a valid hash is found, it is
broadcast to the network, and the block is added to the blockchain.
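
As a rough sketch of that search, the loop below increments a nonce until the hash of the block data starts with a given number of hex zeros; the difficulty here is deliberately tiny so it finishes almost instantly, whereas real Bitcoin mining applies double SHA-256 to a structured block header against a vastly harder target.

# Toy proof-of-work loop: find a nonce so that SHA-256(block_data + nonce)
# begins with `difficulty` hex zeros. Illustrative only; real mining uses
# double SHA-256 over a block header and a far smaller target.
import hashlib

def mine(block_data, difficulty=4):
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("previous-hash + transactions")
print(f"nonce={nonce}, hash={digest}")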

Mining is a competitive process, but it is more of a lottery than a race. On
average, someone will generate acceptable proof of work every ten minutes,
but who it will be is anyone's guess. Miners pool together to increase their
chances of mining blocks, which generates transaction fees and, for a limited
time, a reward of newly-created bitcoins.

Proof of work makes it extremely difficult to alter any aspect of the blockchain,
since such an alteration would require re-mining all subsequent blocks. It also
makes it difficult for a user or pool of users to monopolize the network's
computing power, since the machinery and power required to complete the
hash functions are expensive.

 
If part of a mining network begins accepting an alternative proof of work, it is
known as a hard fork.

Example of Proof of Work


Proof of work requires a computer to randomly engage in hashing functions
until it arrives at an output with the correct minimum amount of leading
zeroes. For example, the hash for block #660000, mined on December 4, 2020, is
00000000000000000008eddcaf078f12c69a439dde30dbb5aac3d9d94e9c18f6.
The block reward for that successful hash was 6.25 BTC.

That block will always contain 745 transactions involving just over 1,666
bitcoins, as well as the header of the previous block. If somebody tried to
change a transaction amount by even 0.000001 bitcoin, the resultant hash
would be unrecognizable, and the network would reject the fraud attempt.

Proof of Work FAQs


What Does Proof of Work Mean?
PoW requires nodes on a network to provide evidence that they have
expended computational power (i.e. work) in order to achieve consensus in a
decentralized manner and to prevent bad actors from overtaking the network.

How Does Proof of Work Validate a Crypto Transaction?


The work itself is arbitrary. For Bitcoin, it involves iterations of SHA-256
hashing algorithms. The "winner" of a round of hashing, however, aggregates
and records transactions from the mempool into the next block. Because the
"winner" is chosen at random, with a probability proportional to the work done, it incentivizes
everybody on the network to act honestly and record only true transactions.

Why Do Cryptocurrencies Need Proof of Work?


Because they are decentralized and peer-to-peer by design, blockchains
such as cryptocurrency networks require some way of achieving both
consensus and security. Proof of work is one such method that makes it too
resource-intensive to try to overtake the network. Other proof mechanisms
also exist that are less resource-intensive, but which have other drawbacks or
flaws, such as proof of stake (PoS) and proof of burn. Without a proof
mechanism, the network and the data stored within it would be vulnerable to
attack or theft.

Does Bitcoin Use Proof of Work?


Yes. It uses a PoW algorithm based on the SHA-256 hashing function in
order to validate and confirm transactions as well as to issue new bitcoins into
circulation.
How Does Proof of Stake (PoS) Differ from PoW?
PoS is a consensus mechanism that randomly assigns the node that will mine
or validate block transactions according to how many coins that node holds.
The more tokens held in a wallet, the more mining power is effectively
granted to it. While PoS is far less resource-intensive, it has several other
flaws including a greater chance of a 51% attack in smaller altcoins and
incentives to hoard tokens and not use them.

Examples: Amazon, Netflix, Flipkart
