Empathise: Design Thinking Human Needs Brainstorming Prototyping
We will focus on the five-stage Design Thinking model proposed by the Hasso-
Plattner Institute of Design at Stanford (d.school). The d.school is widely regarded as the leading institution for teaching Design Thinking. The five stages of Design
Thinking, according to d.school, are as follows: Empathise, Define (the problem),
Ideate, Prototype, and Test. Let’s take a closer look at the five different stages of
Design Thinking.
2. Define (the Problem)
During the Define stage, you put together the information you have created and
gathered during the Empathise stage. This is where you will analyse your
observations and synthesise them in order to define the core problems that you and
your team have identified up to this point. You should seek to define the problem
as a problem statement in a human-centred manner.
To illustrate, instead of defining the problem as your own wish or a need of the
company such as, “We need to increase our food-product market share among
young teenage girls by 5%,” a much better way to define the problem would be,
“Teenage girls need to eat nutritious food in order to thrive, be healthy and grow.”
The Define stage will help the designers in your team gather great ideas to
establish features, functions, and any other elements that will allow them to solve
the problems or, at the very least, allow users to resolve issues themselves with the
minimum of difficulty. In the Define stage you will start to progress towards the third
stage, Ideate, by asking questions that can help you look for solution ideas: "How
might we… encourage teenage girls to perform an action that benefits them and also
involves your company's food product or service?"
3. Ideate
During the third stage of the Design Thinking process, designers are ready to start
generating ideas. You’ve grown to understand your users and their needs in the
Empathise stage, and you’ve analysed and synthesised your observations in the
Define stage, ending up with a human-centred problem statement. With this
solid background, you and your team members can start to "think outside the box"
to identify new solutions to the problem statement you’ve created, and you can
start to look for alternative ways of viewing the problem. There are hundreds
of Ideation techniques such as Brainstorm, Brainwrite, Worst Possible Idea,
and SCAMPER. Brainstorm and Worst Possible Idea sessions are typically used to
stimulate free thinking and to expand the problem space. It is important to get as
many ideas or problem solutions as possible at the beginning of the Ideation phase.
You should pick some other Ideation techniques by the end of the Ideation phase to
help you investigate and test your ideas so you can find the best way to either solve
a problem or provide the elements required to circumvent it.
4. Prototype
The design team will now produce a number of inexpensive, scaled down versions
of the product or specific features found within the product, so they can investigate
the problem solutions generated in the previous stage. Prototypes may be shared
and tested within the team itself, in other departments, or on a small group of
people outside the design team. This is an experimental phase, and the aim is to
identify the best possible solution for each of the problems identified during the
first three stages. The solutions are implemented within the prototypes, and, one by
one, they are investigated and either accepted, improved and re-examined, or
rejected on the basis of the users’ experiences. By the end of this stage, the design
team will have a better idea of the constraints inherent to the product and the
problems that are present, and have a clearer view of how real users would behave,
think, and feel when interacting with the end product.
5. Test
Designers or evaluators rigorously test the complete product using the best
solutions identified during the prototyping phase. This is the final stage of the
five-stage model but, in an iterative process, the results generated during the testing
phase are often used to redefine one or more problems and inform
the understanding of the users, the conditions of use, how people think, behave,
and feel, and to empathise. Even during this phase, alterations and refinements are
made in order to rule out problem solutions and derive as deep an understanding of
the product and its users as possible.
It is important to note that the five stages are not always sequential — they do not
have to follow any specific order and they can often occur in parallel and be
repeated iteratively. As such, the stages should be understood as different modes
that contribute to a project, rather than sequential steps. However, the amazing
thing about the five-stage Design Thinking model is that it systematises and
identifies the 5 stages/modes you would expect to carry out in a design project –
and in any innovative problem-solving project. Every project will involve activities
specific to the product under development, but the central idea behind each stage
remains the same.
The Internet of Things will be a data machine. This means that companies will have
to rethink how they collect and analyze information — not only will decision-makers
need to learn and adapt to a new form of data intelligence, but the amount and type
of information produced by IoT will also introduce new or expanded roles for data
analysts, strategists, and even customer service.
"Companies will have access to an enormous flood of data that all these connected
devices will generate," said Mary J. Cronin, professor at Boston College, Carroll
School of Management, and author of "Smart Products, Smarter Services: Strategies
for Embedded Control." "But that data needs to be analyzed to understand more
about customers and trends. Companies will need to start using IoT data as part of
their planning in order to stay competitive and to offer innovative new services and
products."
"IoT has the potential to make the workplace life and business processes much more
productive and efficient," Cronin said.
One significant way IoT will increase productivity and efficiency is by making location
tracking much simpler and more seamless. As currently done in hospitals, Internet-
connected equipment and devices will all be geographically tagged, which will save
workers time hunting things down and save money by reducing the loss rate.
"Companies can track every aspect of their business, from managing inventory and
fulfilling orders as quickly as possible to locating and deploying field service
staff. Tools and factories and vehicles will all be connected and reporting their
locations," Cronin said.
IoT is the next big thing in your daily commute. The interconnectivity of mobile
devices, cars and the road you drive on will help reduce travel time, enabling you to
get to work faster or run errands in record time.
Today, the "connected car" is just the start of IoT capability. "AT&T, together with
automotive manufacturers such as GM and BMW, are adding LTE connectivity to the
car and creating new connected services, such as real-time traffic information and
real-time diagnostics for the front seat and infotainment for those in the back seat,"
said Macario Namie, vice president of marketing at Jasper Wireless, a machine-to-
machine (M2M) platform provider.
Thanks to IoT, device interconnectivity will facilitate the adoption of "smart grid"
technologies, which use meters, sensors and other digital tools to control the flow of
energy and can integrate alternative sources of power, such as solar and wind.
"The Internet of Things will drastically lower costs in the manufacturing business by
reducing wastage, consumption of fuel and the discarding of economically unviable
assets," Namie said. "IoT can also improve the efficiency of energy production and
transmission and can further reduce emissions by facilitating the switch to
renewables."
IT departments may have remote access to computers and mobile devices, but IoT
will also enable remote control of other Internet-connected devices, said Roy Bachar,
founder and chief executive officer of MNH Innovations and member of the Internet
of Things Council.
Bachar, who also works with CommuniTake, a startup that provides remote-access
technology, said that the cutting-edge technology that has given them full control
over smartphones and tablets now allows remote management of other devices,
including Android cameras and set-top boxes.
Soon, mobile device management (MDM) technologies will extend to the remote management of IoT devices,
which will introduce changes for IT departments and IoT-connected employees.
"It's clear that the telecommunication giants will play a major role in the IoT domain
and they are all introducing solutions. I believe that as early as 2014, we will see the
introduction of platforms for managing IoT applications, as well as solutions
offered by companies such as CommuniTake for remote management of IoT
devices," Bachar said.
"The complexity will also increase due to the variety of operating systems," he
added. Thus, employees and IT departments will have a broader range of platforms
to deal with, not just Android or iOS, Bachar said.
Both of these instances may require training for employees to learn how to control
and manage connected, cross-platform devices.
Other than controlling other IoT devices, your smartphone will also be much like a
remote control for your life, said Brendan Richardson, co-founder and chief executive
officer of PsiKick, a Charlottesville, Va.-based startup that develops IoT wireless
sensors.
One of the most convenient aspects of IoT is that you have devices that "know" you
and will help save time by allowing you to get in and out of places and conduct
transactions faster using a mobile device.
"The iPhone or Android will increasingly interact with a whole range of sensors that
you never see and don't own, but which provide your smartphone with valuable
information and act on your behalf through an app," Richardson said.
With these sensors, even getting your morning coffee can skip the wait in line,
making for a less stressful start to your day. For instance, wireless sensors can
detect when you walk into a Starbucks, which alerts the barista of your likely order
based on your order history. You can then confirm or choose a different order, then
pay for it using your phone, Richardson said.
"Every business and every industry will be disrupted over the next 30 years,"
Richardson said. "We're seeing this now beginning with the regular old Internet. It's
being driven by data and large-scale efficiencies when you convert something to bits
rather than atoms."
"Netflix more or less destroyed Blockbuster by using the Internet to vastly improve
the logistics of exchanging DVDs and removing pesky late fees. Then they converted
the atoms of a DVD into bits and deliver 80 percent of their movies over broadband
now. [You get] more movies on demand and lower costs. And an entire industry —
the DVD rental business — is consigned to the archive of history."
Richardson said such disruptions will happen in every industry, so companies and
their employees have to be prepared.
Cloud Computing
Cloud computing transforms IT infrastructure into a utility: it lets you 'plug into'
infrastructure via the internet, and use computing resources without installing and
maintaining them on-premises.
Compared to traditional on-premises IT, and depending on the cloud services you select,
cloud computing helps do the following:
Lower IT costs: Cloud lets you offload some or most of the costs and effort of
purchasing, installing, configuring, and managing your own on-premises
infrastructure.
Improve agility and time-to-value: With cloud, your organization can start using
enterprise applications in minutes, instead of waiting weeks or months for IT to
respond to a request, purchase and configure supporting hardware, and install
software. Cloud also lets you empower certain users—specifically developers and
data scientists—to help themselves to software and support infrastructure.
Scale more easily and cost-effectively: Cloud provides elasticity—instead of
purchasing excess capacity that sits unused during slow periods, you can scale
capacity up and down in response to spikes and dips in traffic. You can also take
advantage of your cloud provider’s global network to spread your applications
closer to users around the world.
The term ‘cloud computing’ also refers to the technology that makes cloud work. This
includes some form of virtualized IT infrastructure—servers, operating system software,
networking, and other infrastructure that’s abstracted, using special software, so that it can be
pooled and divided irrespective of physical hardware boundaries. For example, a single
hardware server can be divided into multiple virtual servers.
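As a loose, hypothetical illustration (the class, resource figures, and VM names below are invented for this sketch and do not reflect any particular hypervisor's API), the following Python sketch models the idea of pooling one physical server's CPU and memory and carving it into several virtual servers:

from dataclasses import dataclass, field

@dataclass
class PhysicalServer:
    """A single hardware server whose CPU and memory are pooled and subdivided."""
    cpus: int
    memory_gb: int
    vms: list = field(default_factory=list)

    def allocate_vm(self, name: str, cpus: int, memory_gb: int) -> dict:
        """Carve a virtual server out of whatever physical capacity remains."""
        used_cpus = sum(vm["cpus"] for vm in self.vms)
        used_mem = sum(vm["memory_gb"] for vm in self.vms)
        if used_cpus + cpus > self.cpus or used_mem + memory_gb > self.memory_gb:
            raise ValueError("not enough physical capacity left on this host")
        vm = {"name": name, "cpus": cpus, "memory_gb": memory_gb}
        self.vms.append(vm)
        return vm

# One 32-core, 128 GB host divided into several independent virtual servers.
host = PhysicalServer(cpus=32, memory_gb=128)
host.allocate_vm("web-1", cpus=4, memory_gb=16)
host.allocate_vm("db-1", cpus=8, memory_gb=64)
print(len(host.vms), "virtual servers running on one physical host")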
Virtualization enables cloud providers to make maximum use of their data center resources.
Not surprisingly, many corporations have adopted the cloud delivery model for their on-
premises infrastructure so they can realize maximum utilization and cost savings vs.
traditional IT infrastructure and offer the same self-service and agility to their end-users.
If you use a computer or mobile device at home or at work, you almost certainly use some
form of cloud computing every day, whether it’s a cloud application like Google Gmail or
Salesforce, streaming media like Netflix, or cloud file storage like Dropbox. According to a
recent survey, 92% of organizations use cloud today, and most of
them plan to use it more within the next year.
SaaS (Software-as-a-Service)
In addition to the cost savings, time-to-value, and scalability benefits of cloud, SaaS offers
further advantages.
SaaS is the primary delivery model for most commercial software today—there are hundreds
of thousands of SaaS solutions available, from the most focused industry and departmental
applications to powerful enterprise database and AI (artificial intelligence) software.
PaaS (Platform-as-a-Service)
Today, PaaS is often built around containers, a virtualized compute model one step removed
from virtual servers. Containers virtualize the operating system, enabling developers to
package the application with only the operating system services it needs to run on any
platform, without modification and without need for middleware.
Red Hat OpenShift is a popular PaaS built around Docker containers and Kubernetes, an
open source container orchestration solution that automates deployment, scaling, load
balancing, and more for container-based applications.
IaaS (Infrastructure-as-a-Service)
IaaS was the most popular cloud computing model when it emerged in the early 2010s. While
it remains the cloud model for many types of workloads, use of SaaS and PaaS is growing at
a much faster rate.
Serverless computing
Serverless computing runs application code on a per-request basis only and scales the
supporting infrastructure up and down automatically in response to the number of requests.
With serverless, customers pay only for the resources being used when the application is
running—they never pay for idle capacity.
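As a rough sketch only (the handler, dispatcher, and scaling rule below are invented for illustration and are not any cloud provider's actual serverless API), the following Python toy shows the idea: application code is written as a per-request handler, and worker capacity is provisioned in proportion to the number of pending requests, so nothing runs, and nothing is billed, while the system is idle:

import math

def handler(event: dict) -> dict:
    """Application code invoked once per request; there is no server to manage."""
    name = event.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}!"}

def dispatch(requests: list, requests_per_worker: int = 10) -> list:
    """Toy dispatcher: worker capacity scales with the number of pending requests."""
    workers = math.ceil(len(requests) / requests_per_worker) if requests else 0
    print(f"{len(requests)} request(s) -> {workers} worker(s) provisioned")
    return [handler(r) for r in requests]

dispatch([{"name": "Ada"}] * 25)  # burst of traffic: 3 workers spun up
dispatch([])                      # idle: zero workers, nothing billed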
Many enterprises are moving portions of their computing infrastructure to the public cloud
because public cloud services are elastic and readily scalable, flexibly adjusting to meet
changing workload demands. Others are attracted by the promise of greater efficiency and
fewer wasted resources since customers pay only for what they use. Still others seek to
reduce spending on hardware and on-premises infrastructures.
Hybrid cloud is just what it sounds like—a combination of public and private cloud
environments. Specifically, and ideally, a hybrid cloud connects an organization's private
cloud services and public clouds into a single, flexible infrastructure for running the
organization’s applications and workloads.
Multicloud is the use of two or more clouds from two or more different cloud providers.
Having a multicloud environment can be as simple as using email SaaS from one vendor and
image editing SaaS from another. But when enterprises talk about multicloud, they're
typically talking about using multiple cloud services—
including SaaS, PaaS, and IaaS services—from two or more of the
leading public cloud providers. In one survey, 85% of organizations reported using
multicloud environments.
Hybrid multicloud is the use of two or more public clouds together with a private cloud
environment.
Proof of Work
Following its introduction in 2009, Bitcoin became the first widely adopted
application of Hal Finney's proof-of-work (PoW) idea (Finney was also the recipient
of the first bitcoin transaction). Proof of work forms the basis of many
other cryptocurrencies as well, allowing for secure, decentralized consensus.
Generating just any hash for a set of bitcoin transactions would be trivial for a
modern computer, so in order to turn the process into "work," the bitcoin
network sets a certain level of "difficulty." This setting is adjusted so that a
new block is "mined" – added to the blockchain by generating a valid hash –
approximately every 10 minutes. Setting difficulty is accomplished by
establishing a "target" for the hash: the lower the target, the smaller the set of
valid hashes, and the harder it is to generate one. In practice, this means a
hash that starts with a very long string of zeros.
Special Considerations
Since a given set of data can only generate one hash, how do miners make
sure they generate a hash below the target? They alter the input by adding an
integer, called a nonce ("number used once"). Once a valid hash is found, it is
broadcast to the network, and the block is added to the blockchain.
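A minimal Python sketch of this mining loop (simplified: real Bitcoin hashes a structured block header twice with SHA-256 and compares the result against a numeric target rather than a plain prefix of zeros, and the block data below is a made-up string):

import hashlib

def mine(block_data: str, difficulty: int):
    """Try successive nonces until the block's hash starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # a valid proof of work has been found
        nonce += 1                # alter the input and try again

nonce, digest = mine("prev_block_header|tx1,tx2,tx3", difficulty=4)
print(nonce, digest)  # digest begins with "0000"

In this simplified scheme, each extra leading zero makes a valid hash roughly sixteen times harder to find, which is how adjusting the difficulty keeps block times near ten minutes as total mining power grows.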
Proof of work makes it extremely difficult to alter any aspect of the blockchain,
since such an alteration would require re-mining all subsequent blocks. It also
makes it difficult for a user or pool of users to monopolize the network's
computing power, since the machinery and power required to complete the
hash functions are expensive.
If part of a mining network begins accepting an alternative proof of work, it is
known as a hard fork.
Once mined, a block's contents are fixed: for example, a block containing 745
transactions involving just over 1,666 bitcoins, plus the header of the previous
block, always hashes to the same value. If somebody tried to change a transaction
amount by even 0.000001 bitcoin, the resultant hash would be unrecognizable, and
the network would reject the fraud attempt.
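Continuing the same simplified sketch (toy transaction strings, not real block data), hashing the block contents with one amount nudged by 0.000001 shows why tampering is immediately detectable:

import hashlib

def block_hash(prev_header: str, transactions: str) -> str:
    """Hash a block's transactions together with the previous block's header."""
    return hashlib.sha256(f"{prev_header}|{transactions}".encode()).hexdigest()

original = block_hash("prev_header", "alice->bob:1.000000")
tampered = block_hash("prev_header", "alice->bob:1.000001")
print(original == tampered)  # False: even a 0.000001 change produces an unrecognizable hash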