
Friday, January 27, 2017

Protect your site from user generated spam

As a website owner, you might have come across auto-generated content in your comments sections or forum threads. When such content appears on your pages, not only does it disrupt your visitors, but it also exposes content to Google and other search engines that you may not want associated with your site.


In this blog post, we'll give you tips to help you deal with this type of spam on your site and forum.


Some spammers abuse sites owned by others by posting deceptive content and links in an attempt to drive more traffic to their own sites.


Comments and forum threads can be a really good source of information and an efficient way of engaging a site's users in discussions. This valuable content should not be buried by auto-generated keywords and links placed there by spammers.


There are many ways of securing your site’s forums and comment threads and making them unattractive to spammers:


  • Keep your forum software updated and patched. Take the time to keep your software up-to-date and pay special attention to important security updates. Spammers take advantage of security issues in older versions of blogs, bulletin boards, and other content management systems.

  • Add a CAPTCHA. CAPTCHAs require users to confirm that they're a human being and not an automated script before they can post. One way to do this is to use a service such as reCAPTCHA, Securimage, or Jcaptcha (see the verification sketch after this list).
  • Block suspicious behavior. Many forums allow you to set time limits between posts, and you can often find plugins that look for excessive traffic from individual IP addresses or proxies, or for other activity more typical of bots than of human beings. For example, phpBB, Simple Machines, myBB, and many other forum platforms support such configurations (a simple rate-limiting sketch follows this list).

  • Check your forum’s top posters on a daily basis. If a user joined recently and has an excessive number of posts, review their profile and make sure their posts and threads are not spammy.

  • Consider disabling some types of comments. For example, it’s good practice to close very old forum threads that are unlikely to get legitimate replies.
    If you don't plan to monitor your forum going forward and users are no longer interacting with it, turning off posting completely may prevent spammers from abusing it.

  • Make good use of moderation capabilities. Consider enabling moderation features that require users to build up a certain reputation before they can post links, or that hold comments containing links for review.
    If possible, change your settings to disallow anonymous posting and to require approval for posts from new users before they become publicly visible.
    Moderators, together with friends, colleagues, and other trusted users, can help you review and approve posts while spreading the workload. Keep an eye on your forum's new users by reviewing their posts and activity.

  • Consider blacklisting obviously spammy terms. Block clearly inappropriate comments with a blacklist of spammy terms (e.g., illegal streaming or pharma-related terms). Add inappropriate and off-topic terms that are only used by spammers, learning from the spam posts you see on your own forum and on others. Built-in features or plugins can delete or mark such comments as spam for you (a minimal term filter appears after this list).

  • Use the "nofollow" attribute for links in the comment field. This will deter spammers from targeting your site. By default, many blogging sites (such as Blogger) automatically add this attribute to any posted comments.

  • Use automated systems to defend your site. Comprehensive systems like Akismet, which has plugins for many blogs and forum systems, are easy to install and do most of the work for you.
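For those who want to wire up CAPTCHA verification themselves, here's a minimal server-side sketch against reCAPTCHA's siteverify endpoint. The secret key is a placeholder and the Python requests library is assumed; this is an illustrative sketch, not an official client.

```python
import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder; keep the real secret out of source control

def verify_recaptcha(response_token, remote_ip=None):
    """Return True if Google's siteverify endpoint accepts the user's token."""
    payload = {"secret": RECAPTCHA_SECRET, "response": response_token}
    if remote_ip:
        payload["remoteip"] = remote_ip  # optional, but helps the risk analysis
    result = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data=payload,
        timeout=10,
    ).json()
    return bool(result.get("success"))
```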

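To go with the time-limit advice above, here's a minimal in-memory rate-limiting sketch. The 30-second threshold is an assumption, and a real forum would persist this state and account for shared proxies:

```python
import time
from collections import defaultdict

MIN_SECONDS_BETWEEN_POSTS = 30  # assumed threshold; tune for your forum

_last_post_time = defaultdict(float)

def allow_post(ip):
    """Reject posts arriving faster than the configured interval per IP."""
    now = time.monotonic()
    if now - _last_post_time[ip] < MIN_SECONDS_BETWEEN_POSTS:
        return False
    _last_post_time[ip] = now
    return True
```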

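The blacklist idea can be as simple as a case-insensitive term match that flags comments for moderation. The terms below are placeholders; build your own list from the spam you actually see:

```python
import re

# Illustrative terms only; grow this list from spam observed on your forum.
SPAM_TERMS = ["free streaming", "cheap pharma", "casino bonus"]
SPAM_PATTERN = re.compile("|".join(map(re.escape, SPAM_TERMS)), re.IGNORECASE)

def looks_spammy(comment):
    """Flag a comment for moderation if it matches any blacklisted term."""
    return SPAM_PATTERN.search(comment) is not None
```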

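And here's a naive sketch of rewriting comment links to carry rel="nofollow". Regexes over HTML are fragile, so a real implementation should use an HTML parser or sanitizer; this only illustrates the transformation:

```python
import re

def add_nofollow(html):
    """Add rel="nofollow" to <a> tags that don't already carry a rel attribute."""
    # Naive: assumes well-formed tags; use a proper HTML sanitizer in production.
    return re.sub(r'<a\b(?![^>]*\brel=)([^>]*)>', r'<a rel="nofollow"\1>', html)

print(add_nofollow('Visit <a href="http://spam.example">here</a>!'))
# Visit <a rel="nofollow" href="http://spam.example">here</a>!
```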

For detailed information about these topics, check out our Help Center document on User Generated Spam and comment spam. You can also visit our Webmaster Central Help Forum if you need any help.



Friday, November 4, 2016

Here’s to more HTTPS on the web!

Cross-posted from the Google Security Blog.

Security has always been critical to the web, but challenges involved in site migration have inhibited HTTPS adoption for several years. In the interest of a safer web for all, at Google we’ve worked alongside many others across the online ecosystem to better understand and address these challenges, resulting in real change. A web with ubiquitous HTTPS is not the distant future. It’s happening now, with secure browsing becoming standard for users of Chrome.

Today, we’re adding a new section to the HTTPS Report Card in our Transparency Report that includes data about how HTTPS usage has been increasing over time. More than half of pages loaded and two-thirds of total time spent by Chrome desktop users occur via HTTPS, and we expect these metrics to continue their strong upward trajectory.
[Chart: Percentage of pages loaded over HTTPS in Chrome]



As the remainder of the web transitions to HTTPS, we’ll continue working to ensure that migrating to HTTPS is a no-brainer, providing business benefit beyond increased security. HTTPS currently enables the best performance the web offers and powerful features that benefit site conversions, including both new features such as service workers for offline support and web push notifications, and existing features such as credit card autofill and the HTML5 geolocation API that are too powerful to be used over non-secure HTTP. As with all major site migrations, there are certain steps webmasters should take to ensure that search ranking transitions are smooth when moving to HTTPS. To help with this, we’ve posted two FAQs to help sites transition correctly, and will continue to improve our web fundamentals guidance.


We’ve seen many sites successfully transition with negligible effect on their search ranking and traffic. Brian Wood, Director of Marketing SEO at Wayfair, a large retail site, commented: “We were able to migrate Wayfair.com to HTTPS with no meaningful impact to Google rankings or Google organic search traffic. We are very pleased to say that all Wayfair sites are now fully HTTPS.” CNET, a large tech news site, had a similar experience: “We successfully completed our move of CNET.com to HTTPS last month,” said John Sherwood, Vice President of Engineering & Technology at CNET. “Since then, there has been no change in our Google rankings or Google organic search traffic.”


Webmasters that include ads on their sites should also carefully monitor ad performance and revenue during large site migrations. The portion of Google ad traffic served over HTTPS has increased dramatically over the past three years. All ads that come from any Google source, including AdWords, AdSense, and DoubleClick Ad Exchange, already support HTTPS; ads sold directly, such as those through DoubleClick for Publishers, still need to be designed to be HTTPS-friendly. This means there will be no change to the Google-sourced ads that appear on a site after migrating to HTTPS. Many publishing partners have seen this in practice after a successful HTTPS transition. Jason Tollestrup, Director of Programmatic Advertising for the Washington Post, "saw no material impact to AdX revenue with the transition to SSL."


As migrating to HTTPS becomes even easier, we’ll continue working towards a web that’s secure by default. Don’t hesitate to start planning your HTTPS migration today!


Wednesday, September 7, 2016

More Safe Browsing Help for Webmasters

(Cross-posted from the Google Security Blog.)
For more than nine years, Safe Browsing has helped webmasters via Search Console with information about how to fix security issues with their sites. This includes relevant Help Center articles, example URLs to assist in diagnosing the presence of harmful content, and a process for webmasters to request reviews of their site after security issues are addressed. Over time, Safe Browsing has expanded its protection to cover additional threats to user safety such as Deceptive Sites and Unwanted Software.

To help webmasters be even more successful in resolving issues, we’re happy to announce that we’ve updated the information available in Search Console in the Security Issues report.


The updated information provides more specific explanations of six different security issues detected by Safe Browsing, including malware, deceptive pages, harmful downloads, and uncommon downloads. These explanations give webmasters more context and detail about what Safe Browsing found. We also offer tailored recommendations for each type of issue, including sample URLs that webmasters can check to identify the source of the issue, as well as specific remediation actions webmasters can take to resolve the issue.

We on the Safe Browsing team definitely recommend registering your site in Search Console even if it is not currently experiencing a security issue. We send notifications through Search Console so webmasters can address any issues that appear as quickly as possible.

Our goal is to help webmasters provide a safe and secure browsing experience for their users. We welcome any questions or feedback about the new features on the Google Webmaster Help Forum, where Top Contributors and Google employees are available to help.

For more information about Safe Browsing’s ongoing work to shine light on the state of web security and encourage safer web security practices, check out our summary of trends and findings on the Safe Browsing Transparency Report. If you’re interested in the tools Google provides for webmasters and developers dealing with hacked sites, this video provides a great overview.
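If you want to check URLs programmatically, the Safe Browsing Lookup API (v4) can be queried directly. Below is a minimal sketch; the API key and client ID are placeholders, the requests library is assumed, and this is illustrative rather than an official client:

```python
import requests

API_KEY = "your-api-key"  # placeholder; issued via the Google Developers Console

def safe_browsing_matches(url):
    """Return any threat matches the Safe Browsing v4 Lookup API reports for a URL."""
    body = {
        "client": {"clientId": "example-site", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(
        "https://safebrowsing.googleapis.com/v4/threatMatches:find",
        params={"key": API_KEY},
        json=body,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("matches", [])  # empty list means no known threats
```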

Tuesday, April 12, 2016

No More Deceptive Download Buttons

(Cross-posted from the Google Security Blog.)
In November, we announced that Safe Browsing would protect you from social engineering attacks - deceptive tactics that try to trick you into doing something dangerous, like installing unwanted software or revealing your personal information (for example, passwords, phone numbers, or credit cards). You may have encountered social engineering in a deceptive download button, or an image ad that falsely claims your system is out of date. Today, we’re expanding Safe Browsing protection to protect you from such deceptive embedded content, like social engineering ads.
Consistent with the social engineering policy we announced in November, embedded content (like ads) on a web page will be considered social engineering when it either:

  • Pretends to act, or look and feel, like a trusted entity, such as your own device or browser, or the website itself.
  • Tries to trick you into doing something you'd only do for a trusted entity, like sharing a password or calling tech support.

Below are some examples of deceptive content shown via ads:

[Image: An ad claiming your software is out of date, to trick you into clicking "update".]

[Image: An ad mimicking a dialog from the FLV software developer, though it does not actually originate from that developer.]

[Image: Fake buttons mimicking the site's look and feel, suggesting they lead to content related to the site (like a TV show or sports video stream); they are often indistinguishable from the rest of the page.]

Our fight against unwanted software and social engineering is still just beginning. We'll continue to improve Google's Safe Browsing protection to help more people stay safe online.

Will my site be affected?

If visitors to your website consistently see social engineering content, Google Safe Browsing may warn users when they visit the site. If your site is flagged for containing social engineering content, you should troubleshoot with Search Console. Check out our social engineering help for webmasters.

Thursday, December 17, 2015

Indexing HTTPS pages by default

At Google, user security has always been a top priority. Over the years, we’ve worked hard to promote a more secure web and to provide a better browsing experience for users. Gmail, Google search, and YouTube have had secure connections for some time, and we also started giving a slight ranking boost to HTTPS URLs in search results last year. Browsing the web should be a private experience between the user and the website, and must not be subject to eavesdropping, man-in-the-middle attacks, or data modification. This is why we’ve been strongly promoting HTTPS everywhere.

As a natural continuation of this, today we'd like to announce that we're adjusting our indexing system to look for more HTTPS pages. Specifically, we’ll start crawling HTTPS equivalents of HTTP pages, even when the former are not linked to from any page. When two URLs from the same domain appear to have the same content but are served over different protocol schemes, we’ll typically choose to index the HTTPS URL if it meets the following conditions (a spot-check sketch follows the list):

  • It doesn’t contain insecure dependencies.
  • It isn’t blocked from crawling by robots.txt.
  • It doesn’t redirect users to or through an insecure HTTP page.
  • It doesn’t have a rel="canonical" link to the HTTP page.
  • It doesn’t contain a noindex robots meta tag.
  • It doesn’t have on-host outlinks to HTTP URLs.
  • The sitemap lists the HTTPS URL, or doesn’t list the HTTP version of the URL.
  • The server has a valid TLS certificate.
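As an unofficial spot-check for a few of these conditions (a valid TLS certificate, no redirect through HTTP, no noindex robots meta tag, no canonical link back to HTTP), here's a rough sketch. The regexes are naive and HTML attribute order can vary, so treat it as illustrative only:

```python
import re
import requests

def https_spot_check(http_url):
    """Roughly check a few of the HTTPS-indexing conditions for an HTTP URL's twin."""
    https_url = http_url.replace("http://", "https://", 1)
    try:
        resp = requests.get(https_url, timeout=10)  # raises SSLError on a bad certificate
    except requests.exceptions.SSLError:
        return False
    if any(hop.url.startswith("http://") for hop in resp.history):
        return False  # redirected users through an insecure HTTP page
    html = resp.text
    if re.search(r'<meta[^>]+name=["\']robots["\'][^>]+noindex', html, re.I):
        return False  # carries a noindex robots meta tag
    if re.search(r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']http://', html, re.I):
        return False  # canonicalizes back to the HTTP page
    return resp.url.startswith("https://")
```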


Although our systems prefer the HTTPS version by default, you can also make this clearer for other search engines by redirecting your HTTP site to your HTTPS version and by implementing the HSTS header on your server.
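For example, here's a minimal Flask sketch of both steps, redirecting plain-HTTP requests and sending the HSTS header. The one-year max-age is a common choice rather than a requirement, and behind a proxy you'd need to trust forwarded-protocol headers for is_secure to be accurate:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Permanently redirect any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def set_hsts(response):
    # Tell browsers to use HTTPS for the next year, including subdomains.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```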

We’re excited about taking another step forward in making the web more secure. By showing users HTTPS pages in our search results, we’re hoping to decrease the risk of users browsing a website over an insecure connection and leaving themselves vulnerable to content injection attacks. As usual, if you have any questions or comments, please let us know in the comments section below or in our webmaster help forums.