Security Blog
The latest news and insights from Google on security and safety on the Internet
CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow
February 16, 2016
Posted by Fermin J. Serna, Staff Security Engineer and Kevin Stadmeyer, Technical Program Manager
Have you ever been deep in the mines of debugging and suddenly realized that you were staring at something far more interesting than you were expecting? You are not alone! Recently a Google engineer noticed that their SSH client segfaulted every time they tried to connect to a specific host. That engineer filed a ticket to investigate the behavior and after an intense investigation we discovered the issue lay in glibc and not in SSH as we were expecting.
Thanks to this engineer’s keen observation, we were able to determine that the issue could result in remote code execution. We immediately began an in-depth analysis to determine whether the issue could be exploited, and to identify possible fixes. We saw this as a challenge, and after some intense hacking sessions, we were able to craft a full working exploit!
In the course of our investigation, and to our surprise, we learned that the glibc maintainers had previously been alerted of the issue via their bug tracker in July 2015 (bug). We couldn't immediately tell whether the bug fix was underway, so we worked hard to make sure we understood the issue and then reached out to the glibc maintainers. To our delight, Florian Weimer and Carlos O’Donell of Red Hat had also been studying the bug’s impact, albeit completely independently! Due to the sensitive nature of the issue, the investigation, patch creation, and regression tests performed primarily by Florian and Carlos had continued “off-bug.”
This was an amazing coincidence, and thanks to their hard work and cooperation, we were able to translate both teams’ knowledge into a comprehensive patch and regression test to protect glibc users. That patch is available here.
Issue Summary:
Our initial investigations showed that the issue affected all versions of glibc since 2.9; even if you are on an older version, you should still update. Machine owners who detect the vulnerability may wish to take steps to mitigate the risk of an attack.
The glibc DNS client side resolver is vulnerable to a stack-based buffer overflow when the getaddrinfo() library function is used. Software using this function may be exploited with attacker-controlled domain names, attacker-controlled DNS servers, or through a man-in-the-middle attack.
Google has found some mitigations that may help prevent exploitation if you are not able to immediately patch your instance of glibc. The vulnerability relies on an oversized (2048+ bytes) UDP or TCP response, which is followed by another response that will overwrite the stack. Our suggested mitigation is to limit the response sizes accepted by the DNS resolver locally (e.g., via DNSMasq or similar programs), and to ensure that DNS queries are sent only to DNS servers which limit the response size for UDP responses with the truncation bit set.
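To make the filtering idea concrete, here is a rough sketch in Python of the check a local forwarder could apply before passing a UDP answer along. This is an illustration only, not a drop-in tool; `MAX_ACCEPTED` and the function names are our own, and the 12-byte header layout follows the standard DNS message format:

```python
import struct

MAX_ACCEPTED = 2048  # matches the stack buffer size described in the advisory

def parse_header(payload: bytes):
    """Return (answer_count, truncated) from a raw DNS message, or None if malformed."""
    if len(payload) < 12:  # a DNS header is always 12 bytes
        return None
    _id, flags, _qd, an, _ns, _ar = struct.unpack("!6H", payload[:12])
    return an, bool(flags & 0x0200)  # 0x0200 is the TC (truncation) bit

def accept_udp_response(payload: bytes) -> bool:
    """Drop malformed or oversized UDP answers before they reach getaddrinfo().
    Oversized (2048+ byte) answers are the trigger for this vulnerability."""
    header = parse_header(payload)
    if header is None:
        return False
    return len(payload) <= MAX_ACCEPTED
```

A server that sets the TC bit on large answers instead of sending them whole over UDP lets clients retry safely over TCP, which is why the mitigation also asks you to use only such servers.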
Technical information:
glibc reserves 2048 bytes on the stack via alloca() in _nss_dns_gethostbyname4_r() to hold the response to a DNS query.
Later on, at send_dg() and send_vc(), if the response is larger than 2048 bytes, a new buffer is allocated from the heap and all the information (buffer pointer, new buffer size and response size) is updated.
Under certain conditions a mismatch between the stack buffer and the new heap allocation will happen. The final effect is that the stack buffer will be used to store the DNS response, even though the response is larger than the stack buffer and a heap buffer was allocated. This behavior leads to the stack buffer overflow.
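The bookkeeping mismatch can be modelled in a few lines. The following is a toy model of the behavior described above, not glibc source; the names and the simplifications are ours:

```python
ALLOCA_SIZE = 2048  # bytes reserved on the stack in _nss_dns_gethostbyname4_r()

def overflow_occurs(first_len: int, second_len: int) -> bool:
    """Toy model of the mismatch: after an oversized first response forces a
    heap reallocation, a retry path can pair the *stack* buffer with the
    *enlarged* size, so a large second response passes the size check but
    is written into the 2048-byte stack buffer."""
    buf_ptr = "stack"
    buf_size = ALLOCA_SIZE
    if first_len > buf_size:
        buf_size = first_len  # size bookkeeping is updated for the heap buffer
        # (the modelled bug: buf_ptr should now be "heap", but the retry
        # path leaves it pointing at the stack buffer)
    fits_check = second_len <= buf_size  # passes against the stale, larger size
    return fits_check and buf_ptr == "stack" and second_len > ALLOCA_SIZE
```

In the real code the conditions under which the pointer and size fall out of sync are more intricate, but the end state is the same: a response larger than 2048 bytes is copied into the 2048-byte stack buffer.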
The vectors to trigger this buffer overflow are very common and can include ssh, sudo, and curl. We are confident that the exploitation vectors are diverse and widespread; we have not attempted to enumerate these vectors further.
Exploitation:
Remote code execution is possible, but not straightforward. It requires bypassing the security mitigations present on the system, such as ASLR. We will not release our exploit code, but a non-weaponized Proof of Concept has been made available simultaneously with this blog post. With this Proof of Concept, you can verify if you are affected by this issue, and verify any mitigations you may wish to enact.
As you can see in the below debugging session we are able to reliably control EIP/RIP.
(gdb) x/i $rip
=> 0x7fe156f0ccce <_nss_dns_gethostbyname4_r+398>: retq
(gdb) x/a $rsp
0x7fff56fd8a48: 0x4242424242424242 0x4242424242420042
When code crashes unexpectedly, it can be a sign of something much more significant than it appears; ignore crashes at your peril!
Failed exploit indicators, due to ASLR, can include:
Crash on free(ptr) where ptr is controlled by the attacker.
Crash on free(ptr) where ptr is semi-controlled by the attacker since ptr has to be a valid readable address.
Crash reading from memory pointed by a local overwritten variable.
Crash writing to memory on an attacker-controlled pointer.
We would like to thank Neel Mehta, Thomas Garnier, Gynvael Coldwind, Michael Schaller, Tom Payne, Michael Haro, Damian Menscher, Matt Brown, Yunhong Gu, Florian Weimer, Carlos O’Donell and the rest of the glibc team for their help figuring out all details about this bug, exploitation, and patch development.
Building a safer web, for everyone
February 9, 2016
Posted by Gerhard Eschelbeck, VP, Security and Privacy
[Cross-posted from the Official Google Blog]
Today is Safer Internet Day, a moment for technology companies, nonprofit organizations, security firms, and people around the world to focus on online safety, together. To mark the occasion, we’re rolling out new tools, and some useful reminders, to help protect you from online dangers of all stripes—phishing, malware, and other threats to your personal information.
1. Keeping security settings simple
The Security Checkup is a quick way to control the security settings for your Google Account. You can add a recovery phone number so we can help if you’re ever locked out of your account, strengthen your password settings, see which devices are connected to your account, and more. If you complete the Security Checkup by February 11, you’ll also get 2GB of extra Google Drive storage, which can be used across Google Drive, Gmail, and Photos.
Safer Internet Day is a great time to do it, but you can—and should!—take a Security Checkup on a regular basis. Start your Security Checkup by visiting My Account.
2. Informing Gmail users about potentially unsafe messages
If you and your Grandpa both use Gmail to exchange messages, your connections are encrypted and authenticated. That means no prying eyes can read those emails as they zoom across the web, and you can be confident that the message from your Grandpa in size 48 font (with no punctuation and a few misspellings) is really from him!
However, as our Safer Email Transparency Report explains, these things are not always true when Gmail interacts with other mail services. Today, we’re introducing changes in Gmail on the web so that people will know when a received message was not encrypted, when they’re composing a message to a recipient whose email service doesn’t support TLS encryption, or when the sender’s domain couldn’t be authenticated.
Here’s the notice you’ll see in Gmail before you send a message to a service that doesn’t support TLS encryption. You’ll also see the broken lock icon if you receive a message that was sent without TLS encryption.
If you receive a message that can’t be authenticated, you’ll see a question mark where you might otherwise see a profile photo or logo:
3. Protecting you from bad apps
Dangerous apps that phish and steal your personal information, or hold your phone hostage and make you pay to unlock it, have no place on your smartphone—or any device, for that matter.
Google Play helps protect your Android device by rejecting bad apps that don’t comply with our Play policies. It also conducts more than 200 million daily security scans of devices, in tandem with our Safe Browsing system, for any signs of trouble. Last year, bad apps were installed on fewer than 0.13% of Android devices that install apps only from Google Play.
Learn more about these and other Android security features—like app sandboxing, monthly security updates for Nexus and other devices, and our Security Rewards Program—in new research we’ve made public on our Android blog.
4. Busting bad advertising practices
Malicious advertising “botnets” try to send phony visitors to websites to make money from online ads. Botnets threaten the businesses of honest advertisers and publishers, and because they’re often made up of devices infected with malware, they put users in harm’s way too.
We've worked to keep botnets out of our ads systems, cutting them out of advertising revenue, and making it harder to make money from distributing malware and Unwanted Software. Now, as part of our effort to fight bad ads online, we’re reinforcing our existing botnet defenses by automatically filtering traffic from three of the top ad fraud botnets, comprising more than 500,000 infected user machines. Learn more about this update on the Doubleclick blog.
5. Moving the security conversation forward
Recent events—Edward Snowden’s disclosures, the Sony Hack, the current conversation around encryption, and more—have made online safety a truly mainstream issue. This is reflected both in news headlines and popular culture: “Mr. Robot,” a TV series about hacking and cybersecurity, just won a Golden Globe for Best Drama, and @SwiftOnSecurity, a popular security commentator, is named after Taylor Swift.
But despite this shift, security remains a complex topic that lends itself to lively debates between experts...that are often unintelligible to just about everyone else. We need to simplify the way we talk about online security to enable everyone to understand its importance and participate in this conversation.
To that end, we’re teaming up with Medium to host a virtual roundtable about online security, present and future. Moderated by journalist and security researcher Kevin Poulsen, this project aims to present fresh perspectives about online security in a time when our attention is increasingly ruled by the devices we carry with us constantly. We hope you’ll tune in and check it out.
Online security and safety are being discussed more often, and with more urgency, than ever before. We hope you’ll take a few minutes today to learn how Google protects your data and how we can work toward a safer web, for everyone.
No More Deceptive Download Buttons
February 3, 2016
Posted by Lucas Ballard, Safe Browsing Team
In November, we announced that Safe Browsing would protect you from social engineering attacks - deceptive tactics that try to trick you into doing something dangerous, like installing unwanted software or revealing your personal information (for example, passwords, phone numbers, or credit cards). You may have encountered social engineering in a deceptive download button, or an image ad that falsely claims your system is out of date. Today, we’re expanding Safe Browsing protection to protect you from such deceptive embedded content, like social engineering ads.
Consistent with the social engineering policy we announced in November, embedded content (like ads) on a web page will be considered social engineering when it either:
Pretend to act, or look and feel, like a trusted entity — like your own device or browser, or the website itself.
Try to trick you into doing something you’d only do for a trusted entity — like sharing a password or calling tech support.
Below are some examples of deceptive content, shown via ads:
This image claims that your software is out-of-date to trick you into clicking “update”.
This image mimics a dialogue from the FLV software developer -- but it does not actually originate from this developer.
These buttons seem like they will produce content that relates to the site (like a TV show or sports video stream) by mimicking the site’s look and feel. They are often not distinguishable from the rest of the page.
Our fight against unwanted software and social engineering is still just beginning. We'll continue to improve Google's Safe Browsing protection to help more people stay safe online.
Will my site be affected?
If visitors to your web site consistently see social engineering content, Google Safe Browsing may warn users when they visit the site. If your site is flagged for containing social engineering content, you should troubleshoot with Search Console. Check out our social engineering help for webmasters.
Google Security Rewards - 2015 Year in Review
January 28, 2016
Posted by Eduardo Vela Nava, Google Security
We launched our Vulnerability Reward Program in 2010 because rewarding security researchers for their hard work benefits everyone. These financial rewards help make our services, and the web as a whole, safer and more secure.
With an open approach, we’re able to consider a broad diversity of expertise for individual issues. We can also offer incentives for external researchers to work on challenging, time-consuming projects that otherwise may not receive proper attention.
Last January, we summarized these efforts in our first ever Security Reward Program ‘Year in Review’. Now, at the beginning of another new year, we wanted to look back at 2015 and again show our appreciation for researchers’ important contributions.
2015 at a Glance
Once again, researchers from around the world—Great Britain, Poland, Germany, Romania, Israel, Brazil, the United States, China, Russia, and India, to name a few countries—participated in our program.
Here's an overview of the rewards they received and broader milestones for the program, as a whole.
Android Joins Security Rewards
Android joined the Security Rewards initiative in 2015 and made a significant, immediate impact.
We launched our Android VRP in June, and by the end of 2015, we had paid more than $200,000 to researchers for their work, including our largest single payment of $37,500 to an Android security researcher.
New Vulnerability Research Grants Pay Off
Last year, we began to provide researchers with Vulnerability Research Grants, lump sums of money that researchers receive before starting their investigations. The purpose of these grants is to ensure that researchers are rewarded for their hard work, even if they don’t find a vulnerability.
We’ve already seen positive results from this program; here’s one example. Kamil Hismatullin, a researcher from Kazan, Russia, received a VRP grant early last year. Shortly thereafter, he found an issue in YouTube Creator Studio which would have enabled anyone to delete any video from YouTube by simply changing a parameter in the URL. After the issue was reported, our teams quickly fixed it, and the researcher was rewarded $5,000 in addition to his initial research grant. Kamil detailed his findings on his personal blog in March.
Established Programs Continue to Grow
We continued to see important security research in our established programs in 2015. Here are just a few examples:
Tomasz Bojarski found 70 bugs on Google in 2015, and was our most prolific researcher of the year. He found a bug in our vulnerability submission form.
You may have read about Sanmay Ved, a researcher who was able to buy google.com for one minute on Google Domains. Our initial financial reward to Sanmay—$6,006.13—spelled out Google, numerically (squint a little and you’ll see it!). We then doubled this amount when Sanmay donated his reward to charity.
We also injected some new energy into these existing research programs and grants. In December, we announced that we'd be dedicating one million dollars specifically for security research related to Google Drive.
We’re looking forward to continuing the Security Reward Program’s growth in 2016. Stay tuned for more exciting reward program changes throughout the year.
Why attend USENIX Enigma?
January 11, 2016
Posted by Parisa Tabriz, Security Princess & Enigma Program Co-Chair
[Cross-posted from the Google Research Blog]
Last August, we announced USENIX Enigma, a new conference intended to shine a light on great, thought-provoking research in security, privacy, and electronic crime. With Enigma beginning in just a few short weeks, I wanted to share a couple of the reasons I’m personally excited about this new conference.
Enigma aims to bridge the divide that exists between experts working in academia, industry, and public service, explicitly bringing researchers from different sectors together to share their work. Our speakers include those spearheading the defense of digital rights (Electronic Frontier Foundation, Access Now), practitioners at a number of well known industry leaders (Akamai, Blackberry, Facebook, LinkedIn, Netflix, Twitter), and researchers from multiple universities in the U.S. and abroad. With the diverse session topics and organizations represented, I expect interesting—and perhaps spirited—coffee break and lunchtime discussions among the equally diverse list of conference attendees.
Of course, I’m very proud to have some of my Google colleagues speaking at Enigma:
Adrienne Porter Felt will talk about blending research and engineering to solve usable security problems. You’ll hear how Chrome’s usable security team runs user studies and experiments to motivate engineering and design decisions. Adrienne will share the challenges they’ve faced when trying to adapt existing usable security research to practice, and give insight into how they’ve achieved successes.
Ben Hawkes will be speaking about Project Zero, a security research team dedicated to the mission of “making 0day hard.” Ben will talk about why Project Zero exists, and some of the recent trends and technologies that make vulnerability discovery and exploitation fundamentally harder.
Kostya Serebryany will be presenting a three-pronged approach to securing C++ code, based on his many years of experience wrangling complex, buggy software. Kostya will survey the dynamic sanitizing tools he and his team have made publicly available, review control-flow- and data-flow-guided fuzzing, and explain a method to harden your code in the presence of any bugs that remain.
Elie Bursztein will go through key lessons the Gmail team learned over the past 11 years while protecting users from spam, phishing, malware, and web attacks. Illustrated with concrete numbers and examples from one of the largest email systems on the planet, attendees will gain insight into specific techniques and approaches useful in fighting abuse and securing their online services.
In addition to raw content, my Program Co-Chair, David Brumley, and I have prioritized talk quality. Researchers dedicate months or years of their time to thinking about a problem and conducting the technical work of research, but a common criticism of technical conferences is that the actual presentation of that research seems like an afterthought. Rather than be a regurgitation of a research paper in slide format, a presentation is an opportunity for a researcher to explain the context and impact of their work in their own voice; a chance to inspire the audience to want to learn more or dig deeper. Taking inspiration from the TED conference, Enigma will have shorter presentations, and the program committee has worked with each speaker to help them craft the best version of their talk.
Hope to see some of you at USENIX Enigma later this month!
An update on SHA-1 certificates in Chrome
December 18, 2015
Posted by Lucas Garron, Chrome security and David Benjamin, Chrome networking
As announced last September and supported by further recent research, Google Chrome no longer treats SHA-1 certificates as secure, and will completely stop supporting them over the next year. Chrome will discontinue support in two steps: first, blocking new SHA-1 certificates; and second, blocking all SHA-1 certificates.
Step 1: Blocking new SHA-1 certificates
Starting in early 2016 with Chrome version 48, Chrome will display a certificate error if it encounters a site with a leaf certificate that:
is signed with a SHA-1-based signature
is issued on or after January 1, 2016
chains to a public CA
We are hopeful that no one will encounter this error, since public CAs must stop issuing SHA-1 certificates in 2016 per the Baseline Requirements for SSL.
In addition, a later version of Chrome in 2016 may extend these criteria in order to help guard against SHA-1 collision attacks on older devices, by displaying a certificate error for sites with certificate chains that:
contain an intermediate or leaf certificate signed with a SHA-1-based signature
contain an intermediate or leaf certificate issued on or after January 1, 2016
chain to a public CA
(Note that the first two criteria can match different certificates.)
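For illustration only, the two rule sets above can be written as predicates over a simplified chain representation. The dict fields and function names here are hypothetical; this is not a real X.509 parser:

```python
from datetime import date

CUTOFF = date(2016, 1, 1)

def blocked_chrome48(chain, public_ca):
    """Step 1: error iff the *leaf* is SHA-1-signed, issued on or after
    2016-01-01, and the chain ends at a public CA.  'chain' is a list of
    {'sig', 'issued'} dicts, leaf first, root excluded."""
    leaf = chain[0]
    return public_ca and leaf["sig"] == "sha1" and leaf["issued"] >= CUTOFF

def blocked_later_in_2016(chain, public_ca):
    """Possible extension: the SHA-1 signature and the 2016 issuance date
    may be satisfied by *different* certificates in the chain."""
    return (public_ca
            and any(c["sig"] == "sha1" for c in chain)
            and any(c["issued"] >= CUTOFF for c in chain))
```

Note how, under the extended rule, a chain with a new SHA-256 leaf and an old SHA-1 intermediate would be blocked even though no single certificate meets both criteria.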
Note that sites using new SHA-1 certificates that chain to local trust anchors (rather than public CAs) will continue to work without a certificate error. However, they will still be subject to the UI downgrade specified in our original announcement.
Step 2: Blocking all SHA-1 certificates
Starting January 1, 2017 at the latest, Chrome will completely stop supporting SHA-1 certificates. At this point, sites that have a SHA-1-based signature as part of the certificate chain (not including the self-signature on the root certificate) will trigger a fatal network error. This includes certificate chains that end in a local trust anchor as well as those that end at a public CA.
In line with Microsoft Edge and Mozilla Firefox, the target date for this step is January 1, 2017, but we are considering moving it earlier, to July 1, 2016, in light of ongoing research. We therefore urge sites to replace any remaining SHA-1 certificates as soon as possible.
Note that Chrome uses the certificate trust settings of the host OS where possible, and that an update such as Microsoft’s planned change will cause a fatal network error in Chrome, regardless of Chrome’s intended target date.
Keeping your site safe and compatible
As individual TLS features are found to be too weak, browsers need to drop support for those features to keep users safe. Unfortunately, SHA-1 certificates are not the only feature that browsers will remove in the near future.
As we announced on our security-dev mailing list, Chrome 48 will also stop supporting RC4 cipher suites for TLS connections. This aligns with timelines for Microsoft Edge and Mozilla Firefox.
For security and interoperability in the face of upcoming browser changes, site operators should ensure that their servers use SHA-2 certificates, support non-RC4 cipher suites, and follow TLS best practices. In particular, we recommend that most sites support TLS 1.2 and prioritize the ECDHE_RSA_WITH_AES_128_GCM cipher suite. We also encourage site operators to use tools like the SSL Labs server test and Mozilla's SSL Configuration Generator.
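As one concrete example, with Python's standard-library `ssl` module a client-side context matching this advice (TLS 1.2 floor, RC4 excluded) might look like the following. The cipher string is a reasonable choice, not the only correct one, and the exact policy a given deployment needs may differ:

```python
import ssl

# Build a context with TLS 1.2 as the minimum protocol version and an
# ECDHE AES-GCM preference; RC4 and anonymous/null suites are excluded.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2        # refuse TLS 1.0/1.1
ctx.set_ciphers("ECDHE+AESGCM:!RC4:!aNULL:!eNULL")  # no RC4, prefer AES-GCM
```

Server-side configuration in Apache or nginx expresses the same policy through `SSLProtocol`/`SSLCipherSuite` or `ssl_protocols`/`ssl_ciphers` directives.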
Indexing HTTPS pages by default
December 17, 2015
Posted by Zineb Ait Bahajji, WTA, and the Google Security and Indexing teams
[Cross-posted from the Webmaster Central Blog]
At Google, user security has always been a top priority. Over the years, we’ve worked hard to promote a more secure web and to provide a better browsing experience for users. Gmail, Google search, and YouTube have had secure connections for some time, and we also started giving a slight ranking boost to HTTPS URLs in search results last year. Browsing the web should be a private experience between the user and the website, and must not be subject to eavesdropping, man-in-the-middle attacks, or data modification. This is why we’ve been strongly promoting HTTPS everywhere.
As a natural continuation of this, today we'd like to announce that we're adjusting our indexing system to look for more HTTPS pages. Specifically, we’ll start crawling HTTPS equivalents of HTTP pages, even when the former are not linked to from any page. When two URLs from the same domain appear to have the same content but are served over different protocol schemes, we’ll typically choose to index the HTTPS URL if:
It doesn’t contain insecure dependencies.
It isn’t blocked from crawling by robots.txt.
It doesn’t redirect users to or through an insecure HTTP page.
It doesn’t have a rel="canonical" link to the HTTP page.
It doesn’t contain a noindex robots meta tag.
It doesn’t have on-host outlinks to HTTP URLs.
The sitemap lists the HTTPS URL, or doesn’t list the HTTP version of the URL.
The server has a valid TLS certificate.
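Treating the checklist above as a single predicate makes the logic concrete. This sketch uses hypothetical boolean fields that a crawler would compute elsewhere; it is not a real indexing API:

```python
def prefer_https(page: dict) -> bool:
    """Mirror of the indexing checklist above; each key is a hypothetical
    flag describing the HTTPS URL."""
    return (not page["insecure_dependencies"]    # no mixed content
            and not page["blocked_by_robots"]    # crawlable per robots.txt
            and not page["redirects_via_http"]   # no hop through insecure HTTP
            and not page["canonical_to_http"]    # no rel="canonical" to HTTP
            and not page["noindex_meta"]         # no noindex robots meta tag
            and not page["onhost_http_outlinks"] # no on-host links to HTTP URLs
            and page["sitemap_consistent"]       # sitemap lists HTTPS, or omits HTTP
            and page["valid_tls_cert"])          # server presents a valid cert
```

A single failing condition is enough to keep the HTTP version as the indexed URL.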
Although our systems prefer the HTTPS version by default, you can also make this clearer for other search engines by redirecting your HTTP site to your HTTPS version and by implementing the HSTS header on your server.
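Both signals can be sketched in a minimal WSGI app. This is hypothetical illustration: the header value shown is a common choice, and real deployments usually handle the redirect at the load balancer or web server instead:

```python
def app(environ, start_response):
    """Redirect plain-HTTP requests to HTTPS; send HSTS on HTTPS responses."""
    host = environ.get("HTTP_HOST", "example.com")
    path = environ.get("PATH_INFO", "/")
    if environ.get("wsgi.url_scheme") != "https":
        # permanent redirect to the HTTPS version of the same URL
        start_response("301 Moved Permanently",
                       [("Location", "https://%s%s" % (host, path))])
        return [b""]
    start_response("200 OK", [
        ("Content-Type", "text/html"),
        # tell browsers to use HTTPS for the next year, including subdomains
        ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ])
    return [b"<!doctype html><p>hello over HTTPS</p>"]
```

After a browser has seen the HSTS header once, it rewrites future HTTP navigations to HTTPS itself, before any request leaves the machine.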
We’re excited about taking another step forward in making the web more secure. By showing users HTTPS pages in our search results, we’re hoping to decrease the risk of users browsing a website over an insecure connection and making themselves vulnerable to content injection attacks. As usual, if you have any questions or comments, please let us know in the comments section below or in our webmaster help forums.
Proactive measures in digital certificate security
December 11, 2015
Posted by Ryan Sleevi, Software Engineer
Over the course of the coming weeks, Google will be moving to distrust the “Class 3 Public Primary CA” root certificate operated by Symantec Corporation, across Chrome, Android, and Google products. We are taking this action in response to a notification by Symantec Corporation that, as of December 1, 2015, Symantec has decided that this root will no longer comply with the CA/Browser Forum’s Baseline Requirements. As these requirements reflect industry best practice and are the foundation for publicly trusted certificates, the failure to comply with them represents an unacceptable risk to users of Google products.
Symantec has informed us they intend to use this root certificate for purposes other than publicly-trusted certificates. However, as this root certificate will no longer adhere to the CA/Browser Forum’s Baseline Requirements, Google is no longer able to ensure that the root certificate, or certificates issued from this root certificate, will not be used to intercept, disrupt, or impersonate the secure communication of Google’s products or users. As Symantec is unwilling to specify the new purposes for these certificates, and as they are aware of the risk to Google’s users, they have requested that Google take preventative action by removing and distrusting this root certificate. This step is necessary because this root certificate is widely trusted on platforms such as Android, Windows, and versions of OS X prior to OS X 10.11, and thus certificates Symantec issues under this root certificate would otherwise be treated as trustworthy.
Symantec has indicated that they do not believe their customers, who are the operators of secure websites, will be affected by this removal. Further, Symantec has also indicated that, to the best of their knowledge, they do not believe customers who attempt to access sites secured with Symantec certificates will be affected by this. Users or site operators who encounter issues with this distrusting and removal should contact Symantec Technical Support.
Further Technical Details of Affected Root:
Friendly Name: Class 3 Public Primary Certification Authority
Subject: C=US, O=VeriSign, Inc., OU=Class 3 Public Primary Certification Authority
Public Key Hash (SHA-1): E2:7F:7B:D8:77:D5:DF:9E:0A:3F:9E:B4:CB:0E:2E:A9:EF:DB:69:77
Public Key Hash (SHA-256): B1:12:41:42:A5:A1:A5:A2:88:19:C7:35:34:0E:FF:8C:9E:2F:81:68:FE:E3:BA:18:7F:25:3B:C1:A3:92:D7:E2
MD2 Version
Fingerprint (SHA-1): 74:2C:31:92:E6:07:E4:24:EB:45:49:54:2B:E1:BB:C5:3E:61:74:E2
Fingerprint (SHA-256): E7:68:56:34:EF:AC:F6:9A:CE:93:9A:6B:25:5B:7B:4F:AB:EF:42:93:5B:50:A2:65:AC:B5:CB:60:27:E4:4E:70
SHA1 Version
Fingerprint (SHA-1): A1:DB:63:93:91:6F:17:E4:18:55:09:40:04:15:C7:02:40:B0:AE:6B
Fingerprint (SHA-256): A4:B6:B3:99:6F:C2:F3:06:B3:FD:86:81:BD:63:41:3D:8C:50:09:CC:4F:A3:29:C2:CC:F0:E2:FA:1B:14:03:05
Year one: progress in the fight against Unwanted Software
December 9, 2015
Posted by Moheeb Abu Rajab, Google Security Team
“At least 2 or 3 times a week I get a big blue warning screen with a loud voice telling me that I’ve a virus and to call the number at the end of the big blue warning.”
“I’m covered with ads and unwanted interruptions. what’s the fix?”
“I WORK FROM HOME AND THIS POPING [sic] UP AND RUNNING ALL OVER MY COMPUTER IS NOT RESPECTFUL AT ALL THANK YOU.”
Launched in 2007, Safe Browsing has long helped protect people across the web from well-known online dangers like phishing and malware. More recently, however, we’ve seen an increase in user complaints like the ones above. These issues and others—hijacked browser settings, software installed without users' permission that resists attempts to uninstall—have signaled the rise of a new type of malware that our systems haven’t been able to reliably detect.
More than a year ago, we began a broad fight against this category of badness that we now call “Unwanted Software”, or “UwS” (pronounced “ooze”). Today, we wanted to share some progress and outline the work that must happen in order to continue protecting users across the web.
What is UwS and how does it get on my computer?
In order to combat UwS, we first needed to define it. Despite lots of variety, our research enabled us to develop a defining list of characteristics that this type of software often displays:
It is deceptive, promising a value proposition that it does not meet.
It tries to trick users into installing it or it piggybacks on the installation of another program.
It doesn’t tell the user about all of its principal and significant functions.
It affects the user’s system in unexpected ways.
It is difficult to remove.
It collects or transmits private information without the user’s knowledge.
It is bundled with other software and its presence is not disclosed.
Next, we had to better understand how UwS is being disseminated. This varies quite a bit, but time and again, deception is at the heart of these tactics. Common UwS distribution tactics include: unwanted ad injection, misleading ads such as “trick-to-click”, ads disguised as ‘download’ or ‘play’ buttons, bad software downloader practices, misleading or missing disclosures about what the software does, hijacked browser default settings, annoying system pop-up messages, and more.
Here are a few specific examples:
Deceptive ads leading to UwS downloads
Ads from an unwanted ad injector taking over a New York Times page and sending the user to phone scams
Unwanted ad injector inserts ads on the Google search results page
New tab page is overridden by UwS
UwS hijacks Chrome navigations and directs users to a scam tech support website
One year of progress
Because UwS touches so many different parts of people’s online experiences, we’ve worked to fight it on many different fronts. Weaving UwS detection into Safe Browsing has been critical to this work, and we’ve pursued other efforts as well—here’s an overview:
We now include UwS in Safe Browsing and its API, enabling people who use Chrome and other browsers to see warnings before they go to sites that contain UwS. The red warning below appears in Chrome.
We launched the
Chrome Cleanup Tool
, a one-shot UwS removal tool that has helped clean more than 40 million devices. We shed more light on a common symptom of UwS—
unwanted ad injectors
. We outlined
how they make money
and
launched a new filter
in DoubleClick Bid Manager that removes impressions generated by unwanted ad injectors before bids are made.
We started using
UwS as a signal in search
to reduce the likelihood that sites with UwS would appear in search results.
We started
disabling
Google ads that lead to sites with UwS downloads.
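To make the Safe Browsing integration above concrete, here is a minimal sketch of how a client could ask the public Safe Browsing v4 Lookup API whether URLs are flagged as unwanted software. The endpoint and field names follow the v4 API; the `clientId`, API key, and URL being checked are placeholders, and the network call itself is omitted.

```python
import json

# Public endpoint for the Safe Browsing v4 Lookup API (threatMatches:find).
API_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def build_uws_lookup(urls, api_key="YOUR_API_KEY"):
    """Return (request_url, json_body) for a lookup that includes
    UNWANTED_SOFTWARE among the threat types to check."""
    body = {
        # clientId/clientVersion identify the caller; values here are placeholders.
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }
    return f"{API_ENDPOINT}?key={api_key}", json.dumps(body)

request_url, payload = build_uws_lookup(["http://example.com/"])
```

A real client would POST `payload` to `request_url`; an empty `matches` field in the response means none of the submitted URLs are currently flagged.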
It’s still early, but these changes have already begun to move the needle.
UwS-related Chrome user complaints have fallen. Last year, before we rolled out our new policies, these made up 40% of total complaints; now they're 20%.
We’re now showing more than 5 million Safe Browsing warnings per day on Chrome related to UwS to ensure users are aware of a site’s potential risks.
We helped more than 14 million users remove over 190 deceptive Chrome extensions from their devices.
We reduced the number of UwS warnings that users see via AdWords by 95% compared to last year. Even prior to last year, less than 1% of UwS downloads were due to AdWords.
However, there is still a long way to go. 20% of all feedback from Chrome users is related to UwS, and we believe 1 in 10 Chrome users have hijacked settings or unwanted ad injectors on their machines. We expect that users of other browsers continue to suffer from similar issues; there is still a lot of work to be done.
Looking ahead: broad industry participation is essential
Given the complexity of the UwS ecosystem, the involvement of players across the industry is key to making meaningful progress in this fight. This chain is only as strong as its weakest links: everyone must work to develop and enforce strict, clear policies related to major sources of UwS.
If we’re able, as an industry, to enforce these policies, then everyone will be able to provide better experiences for their users. With this in mind, we’re very pleased to see that the FTC recently warned consumers about UwS and characterized it as a form of malware. This is an important step toward uniting the online community and focusing good actors on the common goal of eliminating UwS.
We’re still in the earliest stages of the fight against UwS, but we’re moving in the right direction. We’ll continue our efforts to protect users from UwS and work across the industry to eliminate these bad practices.
A new version of Authenticator for Android
December 7, 2015
Posted by Alexei Czeskis, Software Engineer
Authenticator for Android is used by millions of users and, combined with 2-Step Verification, it provides an extra layer of protection for Google Accounts.
Our latest version has some cool new features. You will notice a new icon and a refreshed design. There's also support for Android Wear devices, so you'll be able to get verification codes from compatible devices, like your watch.
The new Authenticator also comes with a developer preview of support for NFC Security Key, based on the FIDO Universal 2nd Factor (U2F) protocol via NFC. Play Store will prompt for the NFC permission before you install this version of Authenticator.
Developers who want to learn more about U2F can refer to FIDO's specifications. Additionally, you can try it out at https://u2fdemo.appspot.com. Note that you'll need an Android device running the latest versions of Google Chrome and Authenticator and also a Security Key with NFC support.
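For a sense of what happens on the wire, here is a minimal sketch of the ISO 7816-4 APDUs a client sends to a U2F security key over NFC, based on the FIDO U2F NFC and raw-message specifications: first SELECT the U2F applet by its AID, then issue the U2F_VERSION command (INS 0x03), to which a conforming key answers "U2F_V2". The NFC transport itself (e.g. Android's IsoDep) is omitted.

```python
# AID of the U2F applet, as registered in the FIDO U2F NFC protocol spec.
U2F_AID = bytes.fromhex("A0000006472F0001")

def select_apdu(aid: bytes) -> bytes:
    """Build a SELECT-by-name APDU: CLA=00 INS=A4 P1=04 P2=00, Lc, AID, Le."""
    return bytes([0x00, 0xA4, 0x04, 0x00, len(aid)]) + aid + bytes([0x00])

def version_apdu() -> bytes:
    """Build the U2F_VERSION APDU: CLA=00 INS=03 P1=00 P2=00, Le=00."""
    return bytes([0x00, 0x03, 0x00, 0x00, 0x00])
```

On a real tag, the bytes from `select_apdu(U2F_AID)` and then `version_apdu()` would be passed to the transport's transceive call, and the version response checked for the ASCII string "U2F_V2" before proceeding to registration or authentication.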
You can find the latest Authenticator for Android on the Play Store.