© 2024 Dow Jones & Company. All Rights Reserved. THE WALL STREET JOURNAL. Monday, January 22, 2024 | R1
BY BART ZIEGLER

ARTIFICIAL INTELLIGENCE seems to be everywhere these days. It's going to have a profound effect, the experts say, on pretty much everything.
But many people still find the technology baffling.
We asked Wall Street Journal readers what questions they have about AI. They had a lot of them. Many expressed confusion about its implications, its risks and the best way to harness it for good.
Following are answers to some of their questions.
YOU HAVE a great idea for a new business. Is it time to turn to AI?
We decided to find out by asking two different generative-AI tools to write a business plan for a fictional startup—a travel app focused on the U.K.'s Yorkshire region. What did these tools do well, we wanted to know, and what are their limitations?
To get started, we provided the tools with a detailed prompt that specified what we wanted in the business plan. We also gave the tools permission to ask follow-up questions if they needed more information and to be creative if there were gaps in their knowledge.
What did we find? In short: The business plans produced by AI were useful, but not quite ready for prime time. That is, they could serve as a good starting point for a company founder, but are in need of further editing and additions to pass scrutiny. Not bad considering where we are in AI's development.
Here's a closer look at four things the AI tools did well—and four shortcomings:

AI'S STRENGTHS

1. Structure and logic: Perhaps not surprisingly, generative AI excels at following a logical structure. The business plans we received adhered to the typical format and contained all of the sections we expected to see, including an executive summary, followed by a company description and then a detailed value proposition. This structure provided a solid foundation for a business plan, making it easier to expand and further customize the document.

2. Filling in some details: AI was able to fill out each section of the business plan with at least some relevant information that wasn't provided in the original prompt. That included information about the company's mission, market strategy, product offerings and target market. "Yorkshire Tales is a great way for history enthusiasts to learn more about local history," the AI wrote. AI even suggested a plausible amount required from investors—"Yorkshire Tales is seeking $1 million in seed funding to launch its business"—though it mistakenly used the U.S. dollar symbol instead of the pound sign. Still, much of the information provided by AI reduced the time and effort needed to produce a first draft of a business plan from scratch.

3. Creativity (up to a point): AI isn't going to win any awards for creativity. But it was able to produce catchy names and concepts that made the business plan more engaging. For example, AI named our company "Yorkshire Tales," a simple play on words that tied together the proposed focus of the business with its geographical origin in the Yorkshire Dales, a part of England known for its landscapes, history and rich storytelling traditions.

4. Highlighting key elements: AI also was good at identifying and highlighting the critical elements of a business, such as its unique selling points.

AI'S WEAKNESSES

1. Lack of detail and analysis: While the AI tools did offer some details, they still fell short in that department, especially regarding market data, financial projections and unique business challenges. Perhaps an even more detailed prompt would help address this, but at some point, it might be just as easy to do it yourself.
Similarly, we found generative AI's ability to conduct a thorough market and competitive analysis to be limited. AI currently lacks the ability to learn directly from real-time data and isn't able to provide in-depth, nuanced analysis of market trends, competitor strategies or customer behavior as would be expected in a business plan.

2. Repetition: The AI tools tended to repeat certain phrases and concepts—to the point of being boring and distracting for potential investors. For example, phrases such as "Yorkshire's rich heritage" and "the business's deep understanding of Yorkshire's history and culture" were used repeatedly. A prompt that discouraged repetition could help, as could requests for AI to write in a compelling style.

3. Occasional inaccuracies: Generative AI is famous for its "hallucinations"—facts that are made up and, well, not always factual. Such inaccuracies, if not spotted and corrected, would undermine the credibility of a business plan. One glaring example in our case was AI incorrectly using the U.S. dollar symbol, rather than the pound symbol, when describing our fictional company's seed funding, a detail that could confuse or mislead readers and deter potential investors.

4. Lack of personalization: Generative AI was able to produce a basic business plan, but without further prompting it struggled to personalize or adapt the structure to the business being described. That resulted in statements such as, "Initial funding will be sought from local investors and grants focused on promoting local culture and history" and, "Revenue from app sales, in-app purchases, and partnerships will be reinvested into content creation, marketing, and app development" for the financial section of the plan. As with so much of the AI-generated business plan, it wasn't a bad start—but the technology wasn't yet ready to produce something that would win over investors.

Gordon Fletcher is an associate dean of research and innovation and Maria Kutar is an associate professor in the Salford Business School at the University of Salford in the U.K. They can be reached at [email protected].
PUT AI TO WORK WITH SERVICENOW

Everyone's talking about the latest-greatest leap in AI — Generative AI. The news can't stop buzzing about it. The pundits can't stop debating it. The Street is fawning over it. And the board is clamoring for it. Buckle up. The hype machine is in overdrive.

If that's not enough, endless Gen AI "solutions" keep popping up like whack-a-moles. There's AI for this. AI for that. There's even AI for … creating AI. And you can't throw a rock without hitting some other company promising the future. Big players. Little players. Blue chips. Start-ups. Unicorns. And companies you've never even heard of.

Here's the thing. We quite literally have the most advanced technology in a generation at our fingertips. You don't just want Gen AI for this or that. You want enterprise-ready AI for your entire business. But where do you even start? Who do you trust? How will it work? What can it actually do for your business?

IT'S TIME TO GET REAL ABOUT AI.

With the intelligent platform for digital transformation from ServiceNow,® it's not just possible. It's happening.

Employees can focus on building the business, not mundane tasks. Just about anyone can easily write apps in natural language, not code. Time-consuming IT issues can be resolved in minutes, not hours. Chatbots can learn from you, behaving more like assistants than machines. Even that skim latte a customer accidentally ordered to the wrong store can be automatically rerouted for pickup nearby. Morning saved.

THE SERVICENOW PLATFORM BRINGS INTELLIGENCE INTO EVERY CORNER OF YOUR BUSINESS.

We believe AI is only as powerful as the platform it's built on. That's why our technology reaches horizontally across departments, disciplines, and silos — from IT to customer service, finance to supply chain. For CEOs and HR pros, developers and service agents, engineers and legal teams.

Working with what you already have, and what you'll need next. So every system, every process, every app — everything — works better. Turning intelligence into action. Empowering your people to be exponentially more productive. To do the amazing work they were meant to do. To do things they could never do without it. Not next year. Now.

CAN YOU WORK WITH AI NOW? YES.

Rather than choosing what AI to start with, start with what problem you'd like AI to help solve. AI for supercharging your employees? YES. AI for wowing your customers? YES. AI for building apps? YES. AI for reinventing experiences? YES. AI for boosting bottom lines? YES. AI for CRM, HR, or IT? YES. YES. And YES.

It's time to stop the hype bus. It's time to put enterprise-ready AI to work. With the ServiceNow platform, businesses everywhere are already saying YES to entirely new ways of working.

ServiceNow.com/GenAI
MEDICAL ADVICE?
Yes, patients and doctors can use chatbots for certain types of questions, experts say. But beware of the shortcomings.

BY LISA WARD

IF YOU HAVE chest pain, should you ask a chatbot, like ChatGPT, for medical advice? Should your doctor turn to AI for help with a diagnosis?
These are the types of questions that chatbots are raising for the healthcare industry and the people it serves. The possibilities of this technology are huge: For patients, cutting-edge artificial intelligence could mean getting better answers to medical questions, more quickly and cheaply than making an appointment with a doctor. Clinicians, meanwhile, could easily access and synthesize complex medical concepts—and avoid a lot of the numbing paperwork that comes with the job.
Yet the lack of transparency in the underlying data and methods used to train these models has led to concerns about accuracy. There are also concerns that the technology will perpetuate bias, giving answers that may hurt certain groups of people. And some AI will deliver wrong answers confidently, or simply make up facts out of thin air.
To learn more about how to use—and not use—this new technology, The Wall Street Journal spoke to three experts: James Zou, an assistant professor of biomedical data science at Stanford University; Gary Weissman, an assistant professor in pulmonary and critical-care medicine at the University of Pennsylvania Perelman School of Medicine; and I. Glenn Cohen, a professor at Harvard Law School and faculty director of its Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics. Here are edited excerpts of the conversation:

• WSJ: Can large language models, like ChatGPT and its competitors, provide reliable medical advice for patients?

• WEISSMAN: At the moment, ChatGPT is able to provide general medical information, the same way you would find background information on a topic on Wikipedia that is mostly right, but not always. It is not able to provide personalized medical advice for individuals in a way that is safe, reliable and equitable.

• COHEN: Accessing information about healthcare is different than getting a clinician's opinion. But if we are talking about ChatGPT vs. Googling a question or looking on Reddit, then there is a good reason to think that ChatGPT holds some real promise.

• ZOU: Its effectiveness really depends on the kinds of questions you ask. It's not great to ask prediction questions or for any personal recommendations. It could be more effective for information retrieval or exploratory questions, like, "Tell me something about this particular drug." I also heard about patients pasting a medical-consent form that has a lot of jargon and is difficult to understand into GPT and asking it to explain the document in plain English.

• WSJ: What do you think about patients using ChatGPT as compared with Reddit or Google?

• WEISSMAN: The content is likely similar in quality and bias for ChatGPT, a web search or public discussion forum. The additional risks carried by ChatGPT include: giving the impression that it is knowledgeable in its responses; confabulating answers; and not immediately distinguishing the sources for its responses, such as a Centers for Disease Control and Prevention website versus a misinformation website. By contrast, when reading webpages directly, the source is often, but not always, more clear.
[An OpenAI spokesperson says that the company's models are not fine-tuned to provide medical information, and the company cautions against using the models to provide diagnostic or treatment services for serious medical conditions. The company is continuing to research the issue, the spokesperson says.]

• WSJ: How might ChatGPT be used in clinical practice?

• WEISSMAN: I think some physicians may be using it already as a clinical diagnostic-support system, inputting symptoms and then asking for a possible diagnosis. But probably it's more commonly used as a digital assistant to generate draft medical documents, summarize patient histories and physical information or to create patient-problem lists. There is a heavy documentation burden on clinicians and a lot of burnout, which is what may make the technology appealing. But clinicians will likely need to review and edit the output for accuracy and appropriateness.

• WSJ: Do you think it is risky if physicians are already using ChatGPT to support diagnosis decisions?

• WEISSMAN: ChatGPT should not be used to support clinical decision-making. There is no evidence that it is safe, equitable or effective for this purpose. As far as I know, there is also no authorization from the Food and Drug Administration for its use in this way.

• ZOU: ChatGPT and these LLM models are changing very quickly. If you ask the same model the same questions over a span of a few weeks, the model often gives you different responses to the same questions. Our research finds GPT-4's performance on the U.S. Medical Licensing Examination dropped by 4.5% from March to June 2023. Patients and clinicians should be aware ChatGPT may give quite different responses or suggestions to the same medical questions on different days.

• WSJ: Should patients be told when clinicians are using AI?

• COHEN: Patients have a right to be informed they are interacting with an AI chatbot, especially if they may think they are talking with an actual clinician. Whether or not you have a right to know about all AI in your care is another matter.
For example, if the first look at an X-ray was done by an AI and reviewed by a radiologist, I am not sure if a right of informed consent applies. When the AI is an adjunct to a decision, we are in a very different category than when a patient is interacting with an AI and has no idea. Where a clinician is consulting multiple sources to inform an opinion—a medical textbook, a journal article, an AI system—I don't think that requires formal reporting, but the clinician in this case is clearly responsible for the decision made. The only exception would be where a clinician is making a difficult decision in partnership with a patient and/or caregiver.

• WSJ: How do ChatGPT biases manifest themselves in healthcare?

• WEISSMAN: Our study found the clinical recommendations from ChatGPT changed depending on the insurance status of the patient asking the question. In one instance, ChatGPT recommended an older adult, without insurance, presenting with acute chest pain, which is a medical emergency, go to a community health center before the emergency department, which is totally unsafe and inappropriate care.

• COHEN: Many LLMs are also trained on the English-language internet and English-language sources. That means we are ignoring an entire set of knowledge in different languages. Take an example outside of medicine. Only looking at the English-language sources on Islamic history may lead to very different conclusions than if you looked at everything that is relevant to Islamic history in every language.

• ZOU: China and other countries have invested heavily in training models too. That still means a lot of languages [are underrepresented in training the large language models]. One consequence is that an LLM can be less reliable when patients and clinicians interact with it in non-English languages. On the other hand, ChatGPT is reasonably good at translating between the common languages, so it can also be used as a translator by some users.

• COHEN: Besides the training data, there are potential biases built into the reinforced learning process where people decide what answers get reinforced. An article published by the American Psychological Association discusses how different cultural groups (Latina adolescents versus Asian-American college students versus […]) […] to the signals for the other groups. [The OpenAI spokesperson says that the company has worked to train its model to recognize and state the dangers of generalizing with race or other protected characteristics. Research into the issue is ongoing, the spokesperson says.]

• WSJ: What about ChatGPT's ability to generate fake medical articles or images?

• COHEN: Large language models make creating medical misinformation incredibly easy. You can generate fake academic papers at the drop of a hat, along with what looks like real citations, or perhaps a fake radiology report from a real patient faxed to their doctor's office. [The OpenAI spokesperson says that ChatGPT will occasionally make up facts, and users should verify information that is provided to them.]

• WSJ: Any last thoughts?

• COHEN: We focused on a lot of doom and gloom, but this is incredibly exciting and there's an awful lot of value here. But one thing about these foundational models is that if you don't get the foundation right, the entire house will collapse, or worse yet the whole city, so it's also important for the foundational models we arrive at to be really good.

• ZOU: Absolutely. There are a lot of exciting uses for these technologies, and a lot of potential, but sometimes we forget how new they really are. We are very much in the early stages of understanding how we should use them responsibly.

• WEISSMAN: LLMs are hot right now for two reasons: One is it is an exciting technology with a lot of potential clinical applications. The other is that some companies have an opportunity to earn a tremendous profit. So, there is a tension: How can we make money quickly with this new technology that we don't really understand, for which we don't have a lot of evidence and which isn't sufficiently regulated, versus how can we find safe, effective, equitable and ethical uses for this new technology?

Lisa Ward is a writer in Vermont. She can be reached at [email protected].
IT IS EASY to feel overwhelmed by the sheer volume of books, TV shows, movies, games, albums and podcasts out there. Reviews can tell you what is generally good, or not so good, but they can't tell you what fits your specific tastes at this exact moment. Streaming services have a better handle on what you like—but they will only recommend stuff in their own catalogs.
Enter AI. Using a chat-based AI tool like ChatGPT or Claude.ai, you can get a wider spectrum of recommendations based on your particular likes and dislikes, as well as what you're craving at any given time—depending on the mood you're in, for instance, or who you are watching with.
And instead of passively absorbing what critics or media services suggest, you can take an active role in honing those suggestions so they get closer and closer to matching your particular tastes, keeping you from reading a lot of boring books or watching a lot of movies you end up hating.
The key, I have discovered, is knowing the best way to ask for recommendations. After many months of experimentation, here are some of the ways I have used AI for entertainment recommendations.

Get super-tailored recommendations for yourself
This is, of course, the most basic way to use chat-based AI to get entertainment options. Just open up your chat-based tool of choice and ask it to give you ideas for what you might want to watch, read or listen to—with as much detail about your likes and dislikes as you can supply. Tell the AI what you've loved in the past, what you feel like at this exact moment and the role you want the AI to play.
To give you an idea, here's one request I made of ChatGPT: "You are a TV adviser specialized in finding the perfect show for a user's mood. Suggest 10 lesser-known historical drama shows that would appeal to someone who desperately misses watching 'The Crown.'" When I offered that request, the AI suggested some of my past favorites (like "Victoria," "Downton Abbey" and "Wolf Hall"), so I trusted its suggestion that I check out the historical series "Versailles."
To get even fancier: Maintain a running list of titles and ratings on sites like IMDb or JustWatch (for movies and TV), Goodreads (for books) or your favorite videogaming platform. You can then paste numerous examples of what you want into the AI ("Here are some of the books I've given five stars"). Netflix and Goodreads even let you export your viewing or reading history as a file that you can upload to your GPT assistant.

Find choices for you and your partner
One of the thorniest issues in any relationship? What TV shows to watch together.
To ease those potential strains, start off by giving the AI a basic idea of what you both like, then hone those suggestions through a conversation. For example, I started by asking: "I want to watch a TV show with my husband. He likes spy and heist shows and comedy and stand-up; I like things that are more narrative and character-driven. What can we watch together?"
The initial recommendations we got with this question were "Barry" and "Killing Eve"; I told the AI that I don't want to watch anything that […] "Endeavour" and "A Touch of Frost."

Pick something the whole family will enjoy
The same strategy will help you mediate entertainment choices for everybody in the household. Since you're asking the AI to juggle the tastes of multiple people—probably with very different interests—it might be handy to create a custom GPT, which is essentially a GPT that is dedicated to a specific task and that remembers the parameters you set from query to query. (You can find more information about custom GPTs here.)
Once you do that, feed in some basic details about your family (like names and ages) and their individual preferences in movies, books or games, as well as no-go areas (for example, if someone in the family can't stand cringe humor or graphic violence). Then try prompts like:
"Suggest a game that our teenagers will enjoy playing together, even if it is just for an hour."
"What is a lighthearted recent novel set in California that our whole family could enjoy listening to as an audiobook during a California road trip? Suggest 10."
"Recommend five mystery shows that the whole family can enjoy."
In answer to this last prompt, my AI suggested shows like "Father Brown" and "The Mysterious Benedict Society," which look promising—as long as I don't tell my children that I'm letting a robot pick our family viewing.

Cue up some mood music
I love listening to tunes that are related to what I'm doing. While I can't claim that it is a joy to handle tax prep just because I'm listening to "If I Were a Rich Man" and "Rent," it definitely helps. Now I can get an AI to create a soundtrack for […] how to do that in Excel and in Google Sheets. From there, go to a web service like TuneMyMusic.com and upload the CSV. The service will sync the playlist to your streaming-music service, which will automatically collect all the songs for you.

Tailor media to your lifestyle
If there are specific times when you watch TV or catch up on podcasts, ask an AI for the music, podcasts, shows and audiobooks that fit into your actual viewing and listening windows. If you do most of your listening while you're commuting, for instance, ask for audiobooks or podcasts that break nicely into 30-minute increments. If you watch TV while there are young children in the room, ask for suggestions that will not include any disturbing images.
When I asked ChatGPT for recent dialogue-driven movies I could follow while knitting (which means I'm only intermittently looking at the screen), it suggested "The French Dispatch," "Palm Springs" and "Let Them All Talk."

Create a media schedule
If you want to read a certain number of books each week or month, ask the AI to draw up a schedule for that time period. If you are reading a couple of books at once, you might ask it to specify how many pages of each book you should read each day, mixing challenging reading with just-for-fun pages; if you want to watch more movies but never have more than 40 minutes to watch at a stretch, ask the AI to tell you the right place to pause your movie so you can stretch it over two or three nights.

Alexandra Samuel is a technology researcher and co-author of "Remote, Inc.: How to Thrive at Work Wherever You Are."
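The requests above follow a repeatable pattern: give the AI a role, describe the mood or the audience, then state the constraints. If you would rather script that kind of query than type it into a chat window, here is a minimal sketch that sends the TV-adviser prompt through OpenAI's Python client. The client library, the API-key setup and the model name are assumptions for illustration, not part of the article's workflow, and any chat-capable tool could stand in.

```python
# Minimal sketch: send a role-plus-constraints recommendation prompt to a
# chat model. Assumes the `openai` package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

role = ("You are a TV adviser specialized in finding the perfect show "
        "for a user's mood.")
request = ("Suggest 10 lesser-known historical drama shows that would appeal "
           "to someone who desperately misses watching 'The Crown.'")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": role},
        {"role": "user", "content": request},
    ],
)

print(response.choices[0].message.content)
```

Swapping the role and request strings is all it takes to reuse the same script for family viewing, audiobooks or playlists.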
IT IS ONE OF the biggest headaches for delivery services and gig couriers. They have the address where a person lives. But how do they find the right apartment—or even the right building—quickly?
Garden-style complexes, mobile-home parks and university campuses can have labyrinthine layouts where numbers may not be in sequence, making deliveries a time-consuming nightmare, drivers say. Just finding the right door in multiunit buildings can take as much as 30 minutes, compared with 30 seconds when making a delivery to a free-standing, single-family home.
To ease things for delivery people, companies like Amazon.com, FedEx and United Parcel Service use map software that locates things like mailrooms and out-of-the-way entrances. Gig workers mostly don't have that kind of specialized information, and usually have to lean on GPS services like Google Maps that can only guide them to the front door.
Some, though, are turning to startups that offer specialized maps created from building blueprints and other data.
Avoiding mix-ups can reduce costs for the shipper, cut headaches for the recipient and improve reliability ratings for the delivery company. In some urban areas, the majority of drivers' time is spent outside of vehicles, including searching for the right apartment, says Anne Goodchild, founding director of the Urban Freight Lab, which does research on urban freight and logistics issues at the University of Washington.
"These are kind of pedestrian, unsexy parts of the problem," Goodchild says.

Where's the customer?
While carriers have spent years investing in tools to help drivers take the most-efficient and least-costly routes to drop off packages, the last leg of the process—getting to customers' doorsteps—has proved to be a lot tougher.
Larger parcel carriers have adapted the route-optimization software—designed to find the most-efficient directions while taking into account traffic congestion and weather—to help delivery workers navigate buildings.
"When available, our app provides drivers with helpful information that informs them of what side of the street a delivery is on, the layouts and mapping of multiunit complexes, and shared delivery points at locations like college campuses," says Branden Baribeau, an Amazon spokesman. He adds that drivers also get photographs on Amazon's delivery app of buildings, units and mailrooms to help them confirm the correct location.
FedEx, meanwhile, says it has Global Positioning System software that alerts couriers if they arrive at a spot that is too far away from the delivery address. And UPS's navigation platform, On-Road Integrated Optimization and Navigation, or Orion, provides UPS drivers with details such as package drop-offs and loading docks not visible from the street.
All of these details are critical, since some drivers make more than 130 stops a day.
"Shaving off 10 seconds on each stop makes a difference," says Itamar Zur, co-founder and chief executive of Veho, an e-commerce delivery service that ships parcels for retailers such as Saks Fifth Avenue and retail styling company Stitch Fix. The company is developing in-house tech to address the issue and evaluating third-party data sources.

Frustration for gig workers
Still, while the big names work on in-house solutions, gig-economy workers, who make up the bulk of delivery drivers, are largely left to solve delivery problems on their own.
Drivers say that mapping software from Google, Apple and Waze typically leads them to the entrance gate, lobby or leasing office of an apartment complex. That often means a phone call to the delivery recipient for further navigation directions, a pain for both the driver and customer, says Auriga Tarantino, a gig delivery driver in Dallas.
"Some apartments are just numbered very poorly," says Tarantino. "I've had deliveries where the receivers were already annoyed when previous drivers left their items at apartment 30, but it was apartment 30 in a different building."
(Google says it is constantly improving the precision of its routing and has mapped over one billion buildings around the world using AI to help drivers find the right destination. It also updates maps from sources including third-party data, individual users and imagery.)
What's more, drivers for carriers like UPS often bring items to the same addresses every week and have learned the tips and tricks to ease deliveries, including the least congested time of day to […] with house numbers obscured by snow, trees, bushes or seasonal decorations. Some delivery drivers simply will not deliver to apartment buildings, since that can cost them a lot of time and money.

An inside look
Beans.ai, of Palo Alto, Calif., is trying to address the issue, offering delivery drivers interior building maps to guide them during drop-offs. Other companies offer one-time gate or apartment-building access codes to drivers.
Users of Beans see a map with detailed information such as the closest parking spot and the route to take from vehicle to doorstep, including gate access codes or other location-identifying tips.
A map of all the potential stops on a delivery route on the Beans app in a neighborhood in Discovery Bay, Calif. PHOTO: MARLENA SLOSS FOR THE WALL STREET JOURNAL
The company—which has become one of the largest providers of building-mapping software—was founded when Nitin Gupta and Akash Agarwal had issues with delivery services finding […] logistics. "Maps were made for navigation, to get you from approximate point A to approximate point B. The last 100 feet has just been considered a cost of doing business—that you will just deliver three out of four deliveries decently well," says Agarwal.
The pair founded the company in 2018, reaching out to property managers, delivery drivers and fire departments to get apartment property maps, often digitally scanning paper documents. The company also requested digital architectural blueprints of properties to build its maps.
The company uses optical character recognition technology to parse apartment maps. Images of text from scanned documents and digital image files are extracted and processed. Some maps are easier to process—they have unit numbers, building numbers and buildings that clearly indicate "laundry" or "clubhouse."
The company says that it has hundreds of thousands of interior building maps in its database, and that around 100,000 drivers, including both gig drivers and drivers hired by delivery companies, use the app daily. Paramedics, firefighters and other first responders have also used the app to quickly find addresses after emergency calls, and have free access to the app.
Tarantino says she used the Beans app when it was free and stopped about two years ago when the company started charging $5 a month. She says she no longer makes enough deliveries to justify the cost.
Beans said it gets similar feedback from drivers who aren't delivering actively and others who decline apartment orders. For gig drivers, even a relatively small fee can be a big hit. And because the couriers are treated as independent contractors running their own businesses, the shippers usually won't pay for the software.
"It will take time for that side of the puzzle to be solved," says Agarwal.

Esther Fung is a Wall Street Journal reporter in New York. Email her at [email protected].
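The map-parsing step described above rests on optical character recognition. The sketch below shows the general idea: run OCR over a scanned map image and keep anything that looks like a unit label. It assumes the open-source Tesseract engine plus the pytesseract and Pillow packages; the file name and the label pattern are hypothetical, and this is a simplified stand-in rather than Beans.ai's actual pipeline.

```python
# Minimal sketch: pull text from a scanned apartment-map image and collect
# tokens that look like unit labels. Requires Tesseract plus the pytesseract
# and Pillow packages; the file name and label pattern are hypothetical.
import re

from PIL import Image
import pytesseract


def extract_unit_labels(image_path: str) -> list[str]:
    # Run OCR over the scanned map to get raw text.
    text = pytesseract.image_to_string(Image.open(image_path))
    # Keep tokens that resemble unit labels, e.g. "101", "B204", "APT 12".
    pattern = re.compile(r"\b(?:APT\s*)?[A-Z]?\d{1,4}[A-Z]?\b", re.IGNORECASE)
    labels = {match.group(0).strip() for match in pattern.finditer(text)}
    return sorted(labels)


if __name__ == "__main__":
    print(extract_unit_labels("scanned_complex_map.png"))
```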
BY LINDSEY CHOO

ORDER A package? AI researchers are hunting for the best way to get it to your doorstep.
In the next few years, AI promises to transform the logistics of consumer deliveries with a number of new technologies and improvements to old ones. Here's a look at some of the most intriguing innovations on the way.

Smarter drones
One of the most expensive and resource-intensive parts of delivery is the last mile—getting packages from warehouses or stores to customers' homes. Delivery drones promise to make the process more efficient, bypassing the traffic and parking issues that often plague delivery trucks.
The idea isn't entirely new, of course. Amazon, for one, has been testing drones since 2013, and has delivered packages via drones since 2022.
Now a range of companies are using AI to beef up the capabilities of drones. Wing, a delivery-drone company owned by Alphabet, uses AI to let devices decide the best place to leave packages, bypassing obstructions. For example, if a package is supposed to be delivered to a driveway, but the drone spots something blocking the space, it may choose to leave the item on the doorstep instead.
Wing is focusing on last-minute items for nearby customers, like drinks and medication, where the small aircraft can outpace a delivery truck driving out or a customer heading to the store. The average flight time for a Wing drone to customers is under 30 minutes.
The company has plans to enter more retail deals, pending approval to fly in new areas from the Federal Aviation Administration. The drones fly a few hundred feet above the ground, then descend to about 20 feet to lower packages. The company is also testing a system where drones can do self-assessments, such as battery checks, with little human intervention.
A Wing delivery drone during a demonstration flight. PHOTO: ANDY JACOBSOHN/AGENCE FRANCE-PRESSE/GETTY IMAGES

Sending robots to your door
Other companies are tackling the "last 50 feet"—the time-consuming process of getting packages from the delivery truck to the customer's door. According to research by the University of Washington Urban Freight Lab, this leg of the process accounts for 20% to 50% of overall transportation supply-chain costs.
Vault Robotics, a spinoff from the Princeton University Safe Robotics Lab, is designing robots that can climb curbs and stairs. The robot has a combination of legs and wheels, and a platform with grips on the side to hold a package.
The goal, says Robert Shi, co-founder and chief executive of the company, is to deploy the robots from vans without having to park, eliminating the time spent idle. So, while a van is still moving at a cruising pace, robots can move in and out of the vehicle to deliver parcels to doorsteps.

Predicting parking spaces
Drones and robots are designed to be largely autonomous when making deliveries. But some AI works in tandem with human delivery workers.
For instance, researchers have been looking into the time-wasting process of parking. Giacomo Dalla Chiara, lead researcher at the Urban Freight Lab, says that about 28% of drivers' time during delivery is used searching for spots.
In a project sponsored by the Energy Department, the lab deployed curb sensors in a Seattle neighborhood, transmitting real-time information about available parking spaces. Combining machine learning and sensor information, the system can predict when the spaces will be available—and direct drivers toward spots that are opening up while drivers are in transit.

Optimizing routes
Delivery drivers already use software to find the best routes. Researchers are looking to beef up that capability with AI.
Matthias Winkenbach, director of the MIT Megacity Logistics Lab, is working on a model that can take into consideration complex real-world constraints. For example, drivers can choose a route that may not be the shortest but allows them to park more conveniently or unload packages in safer spaces.

Lindsey Choo is a writer in Southern California. She can be reached at [email protected].
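The curb-sensor project described above boils down to a supervised-learning problem: given the time of day and recent sensor readings, estimate whether a space is likely to open up soon. The sketch below illustrates that framing with synthetic data and scikit-learn; the features, the toy labeling rule and the model choice are all assumptions, and it is not the Urban Freight Lab's actual system.

```python
# Minimal sketch: predict whether a curb space will be free soon from
# time-of-day and occupancy features. Synthetic data; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic sensor log: hour of day, day of week, minutes the space has
# been occupied so far. Label: 1 if the space was free ten minutes later.
n = 5000
hour = rng.integers(6, 22, n)
weekday = rng.integers(0, 7, n)
occupied_minutes = rng.integers(0, 120, n)
free_soon = ((occupied_minutes > 45) & (hour < 16)).astype(int)  # toy rule

X = np.column_stack([hour, weekday, occupied_minutes])
y = free_soon

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Chance that a space occupied for 50 minutes at 2 p.m. on a Tuesday
# opens up soon, the kind of score that could steer a driver in transit.
print("probability free soon:", model.predict_proba([[14, 1, 50]])[0, 1])
print("held-out accuracy:", model.score(X_test, y_test))
```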
GOVERNMENT REGULATION OF AI TECHNOLOGY: READERS WEIGH IN
For many, regulation is the last thing this fast-moving field needs. For others, it would be foolish to stay hands off.

BY DEMETRIA GALLEGOS

PRESIDENT BIDEN issued an executive order late last year to improve government oversight of artificial intelligence. But there's a lot of disagreement about how much—if any—oversight is best for both AI and the world, as myriad AI developments promise to transform industries and daily life.
We wanted to know, as part of our series on ethical dilemmas posed by AI, what the government's role should be in enhancing benefits and reducing risks from artificial intelligence.
We asked WSJ readers these questions:
Given AI's potential for harm (such as being inaccurate or biased), should there be a government evaluation and approval process for major new AI developments, as there is for medications? What if this significantly delays the release of new AI tools?
Here is some of what we heard in response:

What has social media taught us?
Yes! Even if the process delays the release of new AI tools, we need to regulate AI now in its infancy rather than later. I would happily defer using AI tools in exchange for a thorough evaluation process and meaningful regulation. If even AI creators believe it could be detrimental to humanity, it's time to listen. We're seeing now how a lack of an early social-media regulation has opened a floodgate of conversation around technological safeguards. We need to learn our lesson and get a handle on AI before it's (potentially) too late.
—Tory Wicklund, Washington, D.C.

Follow the internet model
There should be minimal to no government regulation of AI. The rapid innovation in the internet we all enjoy was achieved under minimal regulation. I have found ChatGPT to be a game changer for my work in data science and the thought of the technology being blocked just because it could be prompted to say something inappropriate or inaccurate would be terrible.
—Beecher Adams, Dublin, Calif.

Why the hurry?
Why should we be in such a hurry to release new AI tools? Silicon Valley has contributed to an enormous amount of physical waste by "upgrading" its products too quickly, as well as huge psychic waste by forcing consumers to keep up with constant patches to software that was probably released too soon to begin with. ChatGPT was flawed upon release; even a brief experimentation quickly proved it spits out delusional "facts." Obviously, we simply need to slow down and try to get AI right so that it benefits humankind instead of causing yet more anxiety as well as wasteful new services or products that simply will be discarded.
—Xu Xi, Morrisonville, N.Y.

We need a USDAI
Given the sheer complexity of this technology, there should be a USDA (call it "USDAI") that can provide some oversight. Unfortunately, the government doesn't have the skills needed to police this development. But the government could prevent AI from displacing massive amounts of humans in the workplace, which is by far the worst outcome of AI. Anyone who has called a company for help lately has experienced the absence of a real human at the other end of the line who can properly respond to the unique problem you have that doesn't fit the "press 1 for…" menu.
—Hugh Jamieson, Dublin, Ohio

The U.S. is not the EU
No. In the EU, sure, because that culture is more accustomed to regulation and not primed for innovation. Regulation like that won't work in the U.S. However, the FTC should be especially aggressive in bringing actions against companies that release generative consumer artificial intelligence that leads to harm, and tort rights of actions should be given considerable leeway in federal courts against companies that can be found to be the proximate cause to certain harms.
—Tracy Beth Mitrano, Penn Yan, N.Y.

Yes, in some industries
As with most things in life, it depends. If AI is being used to diagnose and treat disease, it is functioning as a medical device and logically would be regulated by the FDA. Banking and investment decisions are other areas that seemingly call for controls. In both these cases, the basic regulatory mechanisms already exist. Perhaps the most concerning area, because of its potential to affect a large number of us, would be the use of AI by the IRS and state taxation authorities to audit returns. Who will regulate them?
—Joyce L. DeYoung, Devon, Pa.

Educate, don't regulate
Rather than government regulation, money and effort would be better spent educating the public about AI's limitations and encouraging people to think critically about what they get from AI. Same goes for social media.
—Carl C. Hagen, Fallon, Nev.

Fox and the henhouse
Letting government Ministry of Truth drones be the fact-checker or analysts for new AI technology is letting the blind lead the blind, at best. At worst, it's a true fox-guarding-the-henhouse situation, as AI might directly impact the government's power.
I have somewhat more faith in the general public to read opinions, consider sources, evaluate who stands to gain from any claims, and to come to a conclusion. Users will quickly discover any shortcomings and report them on public media.
—William Llewellin, Littleton, Colo.

Too big to regulate
Regulation is a fool's errand. As well-intentioned as it might be, it will be increasingly difficult to manage as the discipline matures. It is akin to regulating data structures or coding techniques. It isn't likely to be effective.
AI technology will soon be ubiquitous. The number of IT professionals who understand the inner workings of AI is growing exponentially. What would you regulate? How would you know exactly what the code is doing? There will be many abuses, including that those advocating for regulation will often be industry leaders looking to erect barriers to entry and otherwise thwart competition.
We should be focused on how to identify and protect against those who would exploit and do us harm with this technology.
—Gordon Davis Jr., West Chester, Pa.

Test. And then test again.
I think the government, in consultation with leaders in the AI field, should design parameters and triggers that would alert developers to an AI program that is not working as expected or getting out of control. The developers would test their AI programs using these parameters to determine whether the AI program is performing as expected before being released to the public.
Since the parameters and triggers would be established in advance, the normal "government slowdown effect" should be mitigated so that important advances could be achieved by allowing AI to grow organically, but with guardrails.
—Terry McCarthy, Millwood, N.Y.

Compounding the problem
"Given AI's potential for harm (such as being inaccurate or biased), should there be.…" You mean, let an often inaccurate or biased organization such as the government decide what inaccuracies or biases are acceptable and what are mal- or misinformation? Oh yes, that's just what we need.
—David Hufford, Spokane, Wash.

Don't handicap AI
Concerns about bias and misinformation in content produced by generative AI are both too early and too late. By too early, I mean a real lack of understanding of how these models work will very likely result in suboptimal regulation, hindering the technology. (Copyright protection of software is one example of this.) By too late, I mean that once a model is trained, it cannot unlearn—any undesirable influence in its training data cannot be subtracted from the model's parameters.
Between the two, I'd rather have too little protection than too much. My concerns about encumbering the technology with ill-conceived regulation surpass my concerns about inaccuracy, bias or misinformation in content produced by generative AI—these have always been artifacts of communication between humans, and humans learn to use critical-thinking skills to filter and interpret the information received based on what is known of the source and on one's own biases.
To be fair, this will become more challenging as more businesses integrate generative AI into their consumer-facing products without consumers realizing it. Even so, I neither want nor need this administration or any other regulating this nascent technology under the guise of protecting me for my own good.
—Ann DeFranco, Superior, Colo.

Let the market manage risks
Keep the government out of it. Let the companies that come up with and sell AI chatbots to consumers be monetarily liable for the harm and damages they inflict. That will force them not to release a product that is dangerous and unreliable. If they do, they'll go broke.
Our government being what it has become with its politicization, review would lead to delays that would give global bad actors like China and Russia the opportunity to catch up and surpass the lead we presently enjoy in artificial intelligence to our detriment. Keep the government away.
—Edwin Vizcarrondo, Wellington, Fla.

Demetria Gallegos is an editor for The Wall Street Journal in New York. Email her at [email protected].
IS IT SAFE TO SHARE PERSONAL INFORMATION WITH A CHATBOT?
Users may find it tempting to reveal health and financial information in conversations with AI chatbots. There are plenty of reasons to be cautious.

BY HEIDI MITCHELL

IMAGINE YOU'VE pasted your notes from a meeting with your radiologist into an artificial-intelligence chatbot and asked it to summarize them. A stranger later prompts that same generative-AI chatbot to enlighten them about their cancer concerns, and some of your supposedly private conversation is spit out to that user as part of a response.
Concerns about such potential breaches of privacy are very much top of mind these days for many people. The big question here is: Is it safe to share personal information with these chatbots?
The short answer is that there is always a risk that information you share will be exposed in some way. But there are ways to limit that risk.
To understand the concerns, it helps to think about how those tools are "trained"—how they are initially fed massive amounts of information from the internet and other sources and can continue to gather information from their interactions with users to potentially make them smarter and more accurate.
As a result, when you ask an AI chatbot a question, its response is based partly on information that includes material dating back to long before there were rules around internet data usage and privacy. And even more-recent source material is full of people's personal information that's scattered across the web. That leaves lots of opportunity for private information to have been hoovered up into the various generative-AI chatbots' training materials—information that could unintentionally appear in someone else's conversation with a chatbot or be intentionally hacked or revealed by bad actors through crafty prompts or questions.
"We know that they were trained on a vast amount of information that can, and likely does, contain sensitive information," says Ramayya Krishnan, faculty director of the Block Center for Technology and Society and dean of the Heinz College of Information Systems and Public Policy at Carnegie Mellon University. One major problem, Krishnan says, is that nobody has done an independent audit to see what training data is used.
"A lot of the evidence comes from academics hacking the guardrails and showing that private information is in the training data," he says. "I certainly know of attacks that prove there is some sensitive data in the training models."
Moreover, he adds, once an AI tool is deployed, it generally continues to train on users' interactions with it, absorbing and storing whatever information they feed it.
On top of that, in some cases human employees are reading some conversations users have with chatbots. This is done in part to catch and prevent inappropriate behavior and to help with accuracy and quality control of the models, experts say, as well as for deciding which subset of conversations the companies want the AI to use for training.

Things happen
Worries over privacy aren't theoretical. There have been reported instances when confidential information was unintentionally released to users. Last March, OpenAI revealed a vulnerability that allowed some users of ChatGPT to see the titles of other users' chats with the tool and may have also briefly exposed payment-related data of some users, including email addresses and the last four digits of credit-card numbers, as well as credit-card expiration dates. That was a result of a bug in some open-source software (meaning it's available free for anyone to view, modify and deploy) that was used in the tool's training.
Chatbots are also vulnerable to intentional attacks. For instance, some researchers recently found easy ways to get around guardrails and unearth personal information gathered by large language models, including emails.
The ChatGPT vulnerability "was quickly patched," Krishnan notes. "But the point is, these AI software systems are complex and built on top of other software components, some of which are open-source, and they include vulnerabilities that can be exploited." Similar vulnerabilities are inherent to large language models, says Irina Raicu, director of the internet ethics program at the Markkula Center for Applied Ethics at Santa Clara University.
Privacy concerns are great enough that several companies have restricted or banned the use of AI chatbots by their employees at work. "If major companies are concerned about their privacy, if they are unsure about what's going on with their data, that tells us that we should be cautious when sharing anything personal," says Raicu.
There's not much to be done about what's already in the chatbot models, Raicu says, "but why would you risk having your private information getting out there by typing new data like that into the model?"

Just don't
Chatbot creators have taken some steps to protect users' privacy. For instance, users can turn off ChatGPT's ability to store their chat history indefinitely via the very visible toggle on its home page. This isn't foolproof protection against hackers—the site says it will still store chats for a time, though they won't be used to train the model.
Bard requires users to log into Bard.Google.com, then follow a few steps to delete all chat activity as a default. Bing users can open the chatbot webpage, view their search history, then delete the individual chats they want removed, a Microsoft spokesman says. "However, at this time, users cannot disable chat history," he says.
But the best way for consumers to protect themselves, experts say, is to avoid sharing personal information with a generative AI tool and to look for certain red flags when conversing with any AI.
Some red flags include using a chatbot that has no privacy notice. "This is telling you that the governance necessary isn't as mature as it should be," says Dominique Shelton Leipzig, a privacy and cybersecurity partner at law firm Mayer Brown.
Another is when a chatbot asks for more personal information than is reasonably necessary. "Sometimes to get into an account, you need to share your account number or a password and answer some personal questions, and this is not unusual," Shelton Leipzig says. "Being asked to share your Social Security number is something different. Don't." She also says it's unwise to discuss anything personal with a chatbot that you've never heard of.
Santa Clara University's Raicu warns against inputting specific health conditions or financial information into a general-use chatbot, since most chatbot companies are clear in their terms of service that human employees may be reading some conversations. "Is it worth the risk of your information getting out there when the response the generative AI returns might be inaccurate anyway? Probably not," Raicu says.
Carnegie Mellon's Krishnan, citing the risk of hackers, cautions people to think twice before using a feature of Google's Bard that allows for all your emails to be read and processed by the tool so it understands your writing style and tone.
Ultimately, what you enter into a chatbot requires a risk-reward calculation. However, experts say, you should at least double check the terms of service and privacy policies of a chatbot to understand how your data will be used.
"Fortunately we're not in a doomsday chatbot environment right now," says Shelton Leipzig. "The reputable generative-AI firms are taking steps to protect users." Still, she says, always be mindful before sharing sensitive information.

Heidi Mitchell is a writer in Chicago and London. She can be reached at [email protected].
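One practical way to follow the experts' advice is to scrub obvious identifiers out of text before pasting it into a chatbot. The sketch below does that with a few regular expressions; the patterns, placeholder tags and sample note are illustrative, and a simple screen like this will not catch every kind of personal detail.

```python
# Minimal sketch: replace likely identifiers with placeholder tags before
# sharing text with a chatbot. Patterns are illustrative, not exhaustive.
import re

PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    "card": r"\b(?:\d[ -]?){13,16}\b",
    "phone": r"\b(?:\+?1[ -.]?)?\(?\d{3}\)?[ -.]?\d{3}[ -.]?\d{4}\b",
}


def scrub(text: str) -> str:
    """Swap likely personal identifiers for labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[{label.upper()} REMOVED]", text)
    return text


if __name__ == "__main__":
    note = "Call me at 415-555-0123 or email [email protected] about claim 123-45-6789."
    print(scrub(note))
```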