IM (Internet Marketing) Optimizer | Search Engines Marketer India

Thursday, April 18, 2013

Google: Adding Too Many Pages Too Quickly May Flag A Site To Be Reviewed Manually


Google’s Matt Cutts answered a question submitted by another Googler, John Mueller, on YouTube asking, “Should I add an archive of hundreds of thousands of pages all at once or in stages?”
The question is: if you build out a new section of your website with tons of content, is it safe to launch it all at once, or should you release it in smaller chunks?

Matt said that Google can handle it either way, but he noted that if a site releases hundreds of thousands of pages overnight, it may raise a red flag and warrant a manual review by Google’s spam team. So if you do not want Google to manually review your site, and you don’t want to draw attention to it, you may not want to release too many pages too quickly.


Tuesday, April 16, 2013

Duplicate Content: How to solve the problem

What is the root of the duplicate content problem?
As we saw in the previous post, duplicate content comes from submitting the same content multiple times on different pages or websites, from incorrectly using multiple domain names, and from poor web development and SEO practices.
In the first case, the problem is usually caused by webmasters who try to promote their websites by posting the same articles, texts or press releases on multiple websites. It can also be the result of an incorrect link building strategy in which SEOs try to increase the number of backlinks by submitting the same content to multiple sources. In this case, then, the root of the duplicate content problem is the user who tries to promote his website with gray-hat or black-hat techniques.
In the second case, the problem is caused by companies that acquire and use multiple domain names for the same website. For example, by incorrectly pointing both example.com and example.co.uk to the same website, one is certain to face duplicate content issues. Here again, the root of the problem is the webmaster or the web development company that does not know how to set up 301 redirects correctly and does not follow best web practices.
The third case is much more interesting and technical. The root of the problem is that the HTTP protocol does not provide a standard way to identify the “best” URL of a particular page. This means that one page can be accessed by multiple valid URL addresses while no information is available about the canonical URL.
Examples:
  • http://example.com
  • http://www.Example.com
  • http://www.example.com
  • http://www.example.com/
  • http://www.example.com/index.html
  • http://www.example.com/index.htm
  • http://www.example.com/index.html?somevar=
All of the above URLs could lead to the same page, but the HTTP protocol will neither point out the “best” one nor guarantee that they all lead to the same page. So in the above example, http://www.example.com and http://www.example.com/index.html could lead either to the same page or to two completely different pages.
We also need to keep in mind that there are lots of different languages (PHP, ASP.NET, ASP, JSP, CGI, ColdFusion, etc.) and web technologies that can be used to build dynamic websites. Because these technologies support different features (default vs. index pages, case-sensitive vs. case-insensitive URLs, etc.), the situation gets even more complicated.
All of the above makes it easy for someone who does not understand how search engines work to make mistakes in the link structure of a website and hurt its SEO campaign. So the question is: how can we avoid those mistakes?

How to solve the duplicate content problem

In the first case the solution is relatively easy. All you need to do is avoid submitting the exact same content to multiple sources and always use white-hat SEO techniques. Prepare a different version of the same article or press release for each destination so that search engines will not consider it duplicate. This will help you build better links and generate more traffic.
In the second case, when there is a need to acquire multiple domain names for the same website, make sure you select only one primary domain and set up HTTP 301 redirects for the rest. Say, for example, that a particular website uses two domain names: example.com (primary) and example.co.uk (secondary). What you want to do is set up a 301 redirect on example.co.uk so that whenever someone types that domain, he or she is redirected to example.com. There are several ways to do this (domain forwarding from your domain provider’s control panel, .htaccess rules, a PHP/ASP/JSP redirect, etc.), but the most straightforward is usually the forwarding option in your domain provider’s control panel.
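For illustration, here is a minimal PHP sketch of the programming approach, assuming both domains point at the same server and www.example.com is the chosen primary host:

    <?php
    // Minimal 301 redirect sketch: send visitors who arrive on the
    // secondary domain (e.g., example.co.uk) to the primary domain,
    // preserving the requested path and query string.
    $primaryHost = 'www.example.com'; // assumed primary domain

    if (strcasecmp($_SERVER['HTTP_HOST'], $primaryHost) !== 0) {
        header('HTTP/1.1 301 Moved Permanently');
        header('Location: http://' . $primaryHost . $_SERVER['REQUEST_URI']);
        exit;
    }
    ?>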
The third case is a bit more complicated. As we said in the previous article, search engines do take steps to minimize the effect of the duplicate content problem by identifying the best URL for a particular page. They apply a set of rules to the URLs in order to identify the best possible version (for example, the trailing / is added after “.com”, the domain name is converted to lowercase, they determine whether to use the www or the non-www version, etc.). After that, they still have to visit the different URLs and analyze the pages to determine whether they are duplicates or unique. So even though search engines try to solve the issue, your SEO campaign can still be affected, and it is highly recommended to work on your link structure in order to eliminate these problems.
Working on your link structure
So what you want to do is make sure that all the links of your site point to the best URLs and that there is no situation where two different URLs lead to the same page.
Here is a list of the most important things you should look out for:
  1. Remove all unimportant URL variables (session IDs, empty variables, etc.) from all links.
  2. Decide whether to use the www or the non-www version for your site and set up a 301 redirect from the other version.
  3. Decide whether to use “index.html” in URLs that point to the default page of a folder.
  4. Add the trailing / at the end of each folder URL.
There is a great article by Ian Ring on this subject, so I am not going to discuss it further. Make sure you read his article “Be a Normalizer – a C14N Exterminator” and also the Wikipedia article on URL normalization. All these tips can help you optimize the link structure of your website, and that will solve the major duplicate content problems.
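To make the four rules above concrete, here is a rough PHP sketch of a normalizer you might run over internal URLs before printing them into links (the function name and the rule set are illustrative, not a standard API):

    <?php
    // Illustrative normalizer applying the four rules above.
    function normalize_url($url) {
        $parts = parse_url($url);
        $host  = strtolower($parts['host']);          // lowercase the host
        $path  = isset($parts['path']) ? $parts['path'] : '/';

        // Rule 2: standardize on one version (here, assuming www).
        if (strpos($host, 'www.') !== 0) {
            $host = 'www.' . $host;
        }

        // Rule 3: drop the default page name.
        $path = preg_replace('#/index\.html?$#i', '/', $path);

        // Rule 4: folder URLs (no file extension) get a trailing slash.
        if (substr($path, -1) !== '/' && strpos(basename($path), '.') === false) {
            $path .= '/';
        }

        // Rule 1: drop session IDs and empty variables.
        $query = '';
        if (isset($parts['query'])) {
            parse_str($parts['query'], $vars);
            unset($vars['PHPSESSID']);
            $vars  = array_filter($vars, 'strlen');   // remove empty values
            $query = $vars ? '?' . http_build_query($vars) : '';
        }

        return 'http://' . $host . $path . $query;
    }

    // Example: prints http://www.example.com/shop/?item=42
    echo normalize_url('http://www.Example.com/shop/index.html?item=42&PHPSESSID=abc&empty=');
    ?>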
301 redirections
Another great way to solve the problem is by using 301 redirects. Especially in cases where the incoming links of a particular page are divided between its various duplicate versions, it is highly recommended to use the above rules to select the “best” URL and then set up 301 redirects from the rest of the duplicate URLs to it. This can be done easily by using either the .htaccess file or a programming solution (e.g., a PHP redirect).
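As a small sketch of that programming solution, assuming you standardized on folder URLs without “index.html” (the host name is illustrative):

    <?php
    // Sketch: 301-redirect "index.html" duplicates to the folder URL,
    // e.g., /blog/index.html -> /blog/ (query string preserved).
    $uri       = $_SERVER['REQUEST_URI'];
    $canonical = preg_replace('#/index\.html?($|\?)#i', '/$1', $uri);

    if ($canonical !== $uri) {
        header('HTTP/1.1 301 Moved Permanently');
        header('Location: http://www.example.com' . $canonical);
        exit;
    }
    ?>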
Canonical Tag
When working on the link structure of the website is not an option, there is an alternative: the canonical tag. The canonical tag was proposed by Google in order to resolve the duplicate content issue.
To be precise, it is not a tag but a value of the rel attribute of the <link> tag:
<link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish" />
It is placed in the <head> of the page in order to notify search engines about the best URL for that page. Using the canonical tag is very useful: it can drastically reduce the amount of duplicate content within your site, and it is a great way to pass the link juice that would otherwise be lost to the duplicates back to the canonical pages.
Keep in mind that this tag is only a hint, not a directive, for the major search engines. To use it properly, the pages (both the canonical and the duplicates) must be almost identical.
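For dynamic pages the tag is usually generated by the template. A minimal sketch in PHP, assuming a product page whose only meaningful parameter is “item” (anything else, such as session or tracking variables, is dropped from the canonical):

    <?php
    // Sketch: emit a canonical link that points every parameter
    // variation of this product page back to one preferred URL.
    $canonical = 'http://www.example.com/product.php?item=' . urlencode($_GET['item']);
    ?>
    <head>
      <link rel="canonical" href="<?php echo htmlspecialchars($canonical, ENT_QUOTES); ?>" />
      <!-- ... the rest of the head ... -->
    </head>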
For more information about canonical URLs, check out Matt Cutts’ article “SEO advice: URL canonicalization” and the official Google blog post “Specify your canonical.”

The best methods to solve the duplicate content problem

As we said above, there are several ways to solve the problem. Here is the list of methods you should use (note that it is highly recommended to try solving the problem with the first three):
  1. Work on your link structure
  2. Use 301 redirections
  3. Use canonical tag
  4. Exclude parameters such as session IDs in Google Webmaster Tools
  5. As a last resort, block the duplicate content with robots.txt


Source: webseoanalytics

Friday, April 12, 2013

What Is Your Company’s Digital Marketing IQ?


There is a continuing disconnect between people’s personal use of the internet and how they then apply digital marketing to promote their businesses.

A company CEO will use Google to research and buy a new car, but when it comes to optimizing his website to rank high in Google, he struggles to see the value and assumes that building a website is all he needs to do to be “found online” and generate mountains of cash.

The Company With A Feeble Digital IQ

Only the other day I heard about the owner of a company who had recently won a business award; when they came to present it, the presenters mentioned that everything was great about his business except that, to their surprise, he didn’t even have a website!
This disconnect between time spent on media and use of media for marketing becomes quite apparent when you look at the hard data: organisations are lagging behind the consumer.
Companies’ Media Marketing Dollars Are Being Wasted

JP Morgan research shows that only 7% of media is consumed through newspapers, yet corporate America is allocating 20% of its media dollars to traditional newspaper marketing (a bad forward-looking indicator for shareholders of newspapers).
The internet figures reveal that only 12% of media marketing spend is allocated to online media, yet 26% of media time is spent online. If we dig a little deeper, the numbers are even more damning: for people under 40 earning over $50,000, the internet’s share of media consumption rises to over 50%. In essence, corporations are spending much more on media where the consumer is no longer hanging out.


Read more at www.jeffbullas.com

Wednesday, April 10, 2013

5 SEO Tools You Might Not Have… Yet


When it comes to lists of SEO tools, you typically see the same names bounced back and forth. Today, I’d like to share some tools I use on a regular basis that you may not have in your toolkit just yet. The best part: they’re all perfect for SEOs and businesses with smaller budgets.
iSpionage
You can’t begin any search marketing campaign without a list of great keywords to target. iSpionage is known for its PPC research tools, but it also has powerful keyword research features. When you enter a keyword into its search, you will first see the top 10 PPC competitors along with a graph showing the advertising trend for the keyword.
After a block of related keywords comes one of my favorite parts: a glimpse of the top-performing, longest-running ads in Google related to your keyword.
[Screenshot: top ads in Google]
What’s nice about this is you can see how other websites are using the keyword. When you click on the Ads tab, you can review even more ad copy and landing page combinations sorted conveniently by the number of days seen. It can be a great way to spark some ideas on content and calls to action.
Last but not least, you can see the top SEO competitors in Google based on organic search traffic. Some results will list the number of backlinks, pages, and total number of keywords each competitor has in the search index. This can help you gauge what it will take to beat them for a desired keyword phrase.
Other handy features offered to paid members are the Keyword Monitor Dashboard and Competitor Alert. With the Keyword Monitor, you can keep track of your current rankings for your chosen keywords. It also includes a historical analysis so you can see the progress of your search marketing campaign.
[Screenshot: Keyword Monitor summary]
With the Competitor Alert, you can enter your top competitors’ domains and be notified when they add new ad copy, PPC keywords, and SEO keywords. This way, you don’t have to keep checking up on your competitors to find out if they are targeting something new.
cognitiveSEO
Once you have your targeted keyword phrases organized, your next job will be researching the backlinks of your competitors. cognitiveSEO offers a visual approach to learning about a competitor’s backlink profile, with charts for deep link ratios, dofollow-to-nofollow ratios, website types, website categories, link trustworthiness, and much more.
[Screenshot: link trustworthiness chart]
You can also use the new Visual Link Explorer tool to see the link profile as a whole and drill down into the specifics.
[Screenshot: Visual Link Explorer]
You can also drill down into the traditional list of backlinks and filter the results.
[Screenshot: inbound links list]
cognitiveSEO also offers additional features to help you with your link building campaign, including a link management tool for uploading backlinks you have acquired or requested so you can monitor their status, and a rank tracking tool to make sure your SEO efforts are yielding results. There is even a handy to-do list feature to help you manage tasks assigned to various members of your team.
Broken Link Finder
Broken link building is a great strategy to employ, but it can be time-consuming. Fortunately, there is a faster way to do it. The Broken Link Finder tool allows paid members to search for a keyword or keyword phrase and get potential broken link opportunities.
[Screenshot: broken link opportunities]
You can then export the backlinks for these broken links to a CSV and start contacting webmasters, in the hope that your helpful tip about a broken link will result in your link being placed on their page. You can use your membership credits to reserve a particular search result so that no one else can discover it, or use them to find similar broken link opportunities.
Whitespark
If you are working with a local website, then Whitespark is a must-have resource. This citation finder will go out and find local search opportunities based on your website’s keywords and location.
You will get to see the domain, direct submit link, type of website, Majestic SEO AC Rank, Domain Authority, and date discovered. The results are based on backlinks/citations that other local competitors have gained. It’s a great way to get new link building opportunities for local and non-local websites alike.
Link Detox
Unfortunately, if your website has acquired any bad links in the past, it is susceptible to a link penalty from Google. If you think your website has been hit with a Google penalty based on your links, then the Link Detox report from Link Research Tools is a must-have. You simply enter your domain or upload your links from Google Webmaster Tools. It will then mark your links as healthy, suspicious, or toxic. This will give you a better idea of which links to remove first.
[Screenshot: Link Detox report]
When you export your report, you will see details about why a link was marked as suspicious or toxic. Along with the reasoning, you’ll see the URL linking to your website, the page it links to, the anchor text, dofollow/nofollow status, and other metrics.
Best of all, you don’t need a recurring membership plan with LRT to get access to this tool. You can sign up for a Daypass at $39 for 72 hours of access and 3 Link Detox reports (plus access to some of their other tools). Just be sure to export your reports before your Daypass expires.
These are some of the tools I use for keyword research, competitor research, local search, and link building. What are your favorite lesser-known tools for SEO?

Tuesday, April 9, 2013

Google: 5 common mistakes with rel=canonical

Including a rel=canonical link in your webpage is a strong hint to search engines about which version you prefer to have indexed among duplicate pages on the web. It’s supported by several search engines, including Yahoo!, Bing, and Google. The rel=canonical link consolidates indexing properties from the duplicates, like their inbound links, and specifies which URL you’d like displayed in search results. However, rel=canonical can be a bit tricky because it’s not very obvious when there’s a misconfiguration.


While the webmaster sees the “red velvet” page on the left in their browser, search engines notice the webmaster’s unintended “blue velvet” rel=canonical on the right.

We recommend the following best practices for using rel=canonical:
  • A large portion of the duplicate page’s content should be present on the canonical version.
  • One test is to imagine you don’t understand the language of the content: if you placed the duplicate side-by-side with the canonical, does a very large percentage of the words of the duplicate page appear on the canonical page? If you need to speak the language to understand that the pages are similar (for example, if they’re only topically similar but not extremely close in exact words), the canonical designation might be disregarded by search engines.
  • Double-check that your rel=canonical target exists (it’s not an error or “soft 404”).
  • Verify the rel=canonical target doesn’t contain a noindex robots meta tag.
  • Make sure you’d prefer the rel=canonical URL to be displayed in search results (rather than the duplicate URL).
  • Include the rel=canonical link in either the <head> of the page or the HTTP header.
  • Specify no more than one rel=canonical for a page. When more than one is specified, all rel=canonicals will be ignored.
Mistake 1: rel=canonical to the first page of a paginated series 

Imagine that you have an article that spans several pages:
  • example.com/article?story=cupcake-news&page=1
  • example.com/article?story=cupcake-news&page=2
  • and so on
Specifying a rel=canonical from page 2 (or any later page) to page 1 is not correct use of rel=canonical, as these are not duplicate pages. Using rel=canonical in this instance would result in the content on pages 2 and beyond not being indexed at all. 


Good content (e.g., “cookies are superior nutrition” and “to vegetables”) is lost when specifying rel=canonical from component pages to the first page of a series.

In cases of paginated content, we recommend either a rel=canonical from component pages to a single-page version of the article, or the use of rel="prev" and rel="next" pagination markup.


rel=canonical from component pages to the view-all page


If rel=canonical to a view-all page isn’t designated, paginated content can use rel="prev" and rel="next" markup.
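As an illustrative sketch (not part of the original post), the rel="prev"/rel="next" markup for a paginated article could be generated like this in PHP, with the URL pattern and page variables assumed:

    <?php
    // Sketch: emit rel="prev"/"next" links for one page of a series.
    $base       = 'http://example.com/article?story=cupcake-news&page=';
    $page       = 2;  // example values
    $totalPages = 5;

    if ($page > 1) {
        echo '<link rel="prev" href="' . htmlspecialchars($base . ($page - 1)) . '" />' . "\n";
    }
    if ($page < $totalPages) {
        echo '<link rel="next" href="' . htmlspecialchars($base . ($page + 1)) . '" />' . "\n";
    }
    ?>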

Mistake 2: Absolute URLs mistakenly written as relative URLs 


The <link> tag, like many HTML tags, accepts both relative and absolute URLs. Relative URLs include a path “relative” to the current page: for example, “images/cupcake.png” means “from the current directory, go to the images subdirectory, then to cupcake.png.” Absolute URLs specify the full path, including the scheme, like http://.

Specifying <link rel=canonical href="example.com/cupcake.html" /> (a relative URL since there’s no “http://”) implies that the desired canonical URL is http://example.com/example.com/cupcake.html, even though that is almost certainly not what was intended. In these cases, our algorithms may ignore the specified rel=canonical. Ultimately this means that whatever you had hoped to accomplish with this rel=canonical will not come to fruition.
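One defensive habit (our suggestion, not from the original post) is to generate the canonical link server-side and refuse anything that is not an absolute URL:

    <?php
    // Sketch: only emit the canonical link when the URL is absolute,
    // so a relative value like "example.com/cupcake.html" is caught.
    function emit_canonical($url) {
        if (!preg_match('#^https?://#i', $url)) {
            return; // no scheme: skip rather than emit a broken canonical
        }
        echo '<link rel="canonical" href="' . htmlspecialchars($url) . '" />';
    }

    emit_canonical('http://www.example.com/cupcake.html'); // emitted
    emit_canonical('example.com/cupcake.html');            // silently skipped
    ?>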

Mistake 3: Unintended or multiple declarations of rel=canonical 

Occasionally, we see rel=canonical designations that we believe are unintentional. In very rare circumstances we see simple typos, but more commonly a busy webmaster copies a page template without thinking to change the target of the rel=canonical. Now the site owner’s pages specify a rel=canonical to the template author’s site. 


If you use a template, check that you didn’t also copy the rel=canonical specification.

Another issue is when pages include multiple rel=canonical links to different URLs. This happens frequently in conjunction with SEO plugins that often insert a default rel=canonical link, possibly unbeknownst to the webmaster who installed the plugin. In cases of multiple declarations of rel=canonical, Google will likely ignore all the rel=canonical hints. Any benefit that a legitimate rel=canonical might have offered will be lost. 

In both these types of cases, double-checking the page’s source code will help correct the issue. Be sure to check the entire <head> section as the rel=canonical links may be spread apart. 


Check the behavior of plugins by looking at the page’s source code.

Mistake 4: Category or landing page specifies rel=canonical to a featured article

Let’s say you run a site about desserts. Your dessert site has useful category pages like “pastry” and “gelato.” Each day the category pages feature a unique article. For instance, your pastry landing page might feature “red velvet cupcakes.” Because the “pastry” category page has nearly all the same content as the “red velvet cupcake” page, you add a rel=canonical from the category page to the featured individual article. 

If we were to accept this rel=canonical, then your pastry category page would not appear in search results. That’s because the rel=canonical signals that you would prefer search engines display the canonical URL in place of the duplicate. However, if you want users to be able to find both the category page and featured article, it’s best to only have a self-referential rel=canonical on the category page, or none at all. 


Remember that the canonical designation also implies the preferred display URL. Avoid adding a rel=canonical from a category or landing page to a featured article.

Mistake 5: rel=canonical in the <body> 

The rel=canonical link tag should only appear in the <head> of an HTML document. Additionally, to avoid HTML parsing issues, it’s good to include the rel=canonical as early as possible in the <head>. When we encounter a rel=canonical designation in the <body>, it’s disregarded. 

This is an easy mistake to correct. Simply double-check that your rel=canonical links are always in the <head> of your page, and as early as possible if you can.


rel=canonical designations in the <head> are processed, not the <body>.

Conclusion 

To create valuable rel=canonical designations:
  • Verify that most of the main text content of a duplicate page also appears in the canonical page.
  • Check that rel=canonical is only specified once (if at all) and in the <head> of the page.
  • Check that rel=canonical points to an existent URL with good content (i.e., not a 404, or worse, a soft 404).
  • Avoid specifying rel=canonical from landing or category pages to featured articles as that will make the featured article the preferred URL in search results.

10 Things SEOs & SMBs Should Know About New Google Places Dashboard

Recently, Google announced some significant changes to their Google Places dashboard. The wires have been humming ever since, and the reaction has ranged from fall-off-your-seat excitement to ‘humph, is that it!?’

Whatever the opinion, the truth is that these changes signify a big development in the way Google handles ‘Local.’ Google has been talking up the importance of local for an age, and the increased real estate given to local results in SERPs backs this up. They have also updated and iterated their local product almost more than their main search product in the last 12 months.
But, despite this rhetoric and commitment, Google has given scant attention to how SMBs use and manage their data within maps/Google+ Local.
Tuesday’s announcement changes this. The new ‘Places for Business’ dashboard is all about making life easier and clearer for SMBs to manage their data and promotion within Google’s local products (Maps/Mobile/+Local), and they have really put some thought into solving the backend issues and providing a helpful, consolidated interface.
Hang on… before I go too far with the praise, let me make one thing very clear. This update is also designed to make it easy for SMBs to purchase AdWords Express. Google has struggled to monetize ‘Local,’ and this update puts AdWords Express front and center on the dashboard in the hope that SMBs will start to spend more with them. ‘Google the Benevolent?’ (Yeah, right…)
Listed below are ten things SEOs and SMBs should know about the new dashboard.

1.  Phased Rollout – Many Changes Still To Come

This update has addressed a number of issues which have confused and frustrated SEOs and SMBs for years, but it has not fixed every issue and niggle. This is very much a v1.0 of the new dashboard, and many more improvements are due over the coming months.

2.  Only Available To New Profiles Or Newly Verified Profiles

The new dashboard is only available for new listings (newly created or newly verified), not for existing, verified listings. As soon as a listing is verified, its owner will get access to the new dashboard. However, those of us with existing verified listings will have to wait until the rollout reaches us.
Also, the dashboard is currently only available in the US. Once the rollout is complete in the US, it will jump across to other territories. There’s no clear timeline on this, so it’s a case of carrying on doing what you’re doing and waiting till Christmas arrives!

3.  Easier, Faster, Clearer Update Route For Google+ Local Page

This change is a huge improvement – and a big thumbs-up to Google for sorting this out!
The current/old dashboard had a slow and tenuous link to the Google+ Local page, with changes made in the old dashboard taking a long time to show up on the visible Google+ listing.
The new dashboard feeds data directly into Google’s updated ‘knowledge graph’ data structure. This enhanced structure makes management of data within Google better, and Google puts more trust in this data.
The upshot for SMBs is that any changes made via the dashboard have greater trust and should go live on their Google+ Local page faster – within 48 hours, according to various sources.

4.  Verification Process Still The Same

Thumbs down on this one, I’m afraid!
The verification process for listings is still the same. Businesses still need to get a PIN via mail, SMS, or phone call and enter it into their listing so they can take control of it.
But there is now a clearer process for disputed listings. If you want to take control of a listing which is currently claimed by a different Google account, there is a clear, stepped appeal process. Listings can no longer be claimed into multiple accounts, which will greatly reduce confusion over listing ownership and administration.
It also appears (fingers firmly crossed) as if this process is going to be overseen by a dedicated customer support team, which would be a hugely welcome change.

Sunday, March 31, 2013

SEO Step Delivers Performance



Having worked with some of the biggest brands in India and abroad, SEO Step has the relevant experience and expertise to deliver high-value projects. A strong team of optimizers has contributed to the impeccable reputation that it commands. The portfolio of SEO Step features international brands and boasts some high-performing websites; the company has helped over 700 businesses take advantage of a site that acts as a lead magnet.

SEO Step provides value-added services that ensure the client’s online requirements are fulfilled. Apart from marketing, it also develops custom web applications that help organizations automate various business processes and generate revenue. All in all, SEO Step is an integrated online marketing agency in Mumbai that delivers high-quality web-based business solutions.

SEO Step is an online marketing agency based in Mumbai that has, for over a decade, consistently delivered performance-based SEO services. SEO Step has maintained high standards and has created landmark sites in terms of online marketing. Over the last few years, SEO Step has delivered some very challenging sites that are extensive and involve complicated functionality.

