Marketing Automation

Marketing automation is vital for any business. If you’re considering growth and expansion, as most organisations are, you need a marketing plan that helps you to grow your bottom line. So, it is imperative to create marketing strategies to attract potential customers. To attract new customers, it is helpful to reach out to them at multiple points during the sales cycle – to gauge their interest, to nurture their curiosity, to pique their desire, and to encourage them to convert. The AIDA model (Awareness, Interest, Desire, Action) springs to mind when considering this process. Connecting with customers at various points, on multiple occasions, in appropriate ways and with the right messaging becomes an extremely important factor in the sales conversion process. This is the essence of marketing automation. Of course, marketing automation works better for some organisations than others. You will need to consider a variety of factors, such as the quantity of customers, volume of transactions, potential for repurchase, cross-selling and up-selling, and so on, before deciding whether your organisation needs to invest in a marketing automation solution.

What is marketing automation?

Marketing automation is essentially a software tool that automatically communicates with your prospects and customers through a variety of different media, but mostly through email. It is often connected to a Customer Relationship Management (CRM) system, which helps companies to build up personalised communications with their customers in order to deliver relevant offers. This data also helps to build up customer personas that assist marketing departments with understanding how to target audiences in different ways to deliver the most effective results.

What are the advantages of marketing automation?

The key advantage of marketing automation is that you can programme the software to perform certain communication tasks at certain times, and then the hard work is done – although you will still need to manage it on an ongoing basis. Also, it helps companies to improve their marketing strategy and it is a great tool for tactical lead generation and sales nurturing. It can also reduce marketing costs and it provides measurable results and KPIs for both tactical and strategic campaigns.

What are the disadvantages of marketing automation?

Firstly, it requires a lot of effort and commitment to learn how to use it effectively in order to define the target audience and the appropriate messages to communicate to them along the sales pipeline journey. Also, it can be a significant investment, and it can’t fix everything. Despite the major benefits marketing automation offers, it is not a “cure all.” This is perhaps the number one issue we have seen in the marketplace, with some clients thinking that their marketing automation software is their marketing strategy, rather than a tool used to deliver their strategy. We think of this a little bit like the tail wagging the dog…

What is customer lifetime value (CLV)?

Customer lifetime value (CLV) is a prediction of the net profit of your entire future relationship with a customer. It informs you how to allocate your efforts towards the most profitable channels and audiences, thus resulting in a better ROI. Not all customers are equal, and gaining a thorough understanding of their differences allows you to gauge how much to invest in communicating with each one. After you have segmented your audience, the next task is determining how best to connect with target customers at a personal level. Having identified which high-value customers to address, defined their lifetime value and drawn a profile of their priorities, you can then make informed decisions on which media to use. Marketing automation, when executed correctly, allows companies to market to and nurture customers with personalised and useful content via a multitude of channels.
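As a rough illustration only (not part of any particular vendor’s methodology), one common simplified way to estimate CLV is to multiply average purchase value by purchase frequency and expected customer lifespan, then subtract the cost of acquiring and serving the customer. A minimal sketch, with invented example figures:

# A minimal sketch of a simplified CLV estimate; all figures are invented
# examples, and real models would use margins and discount future profit.
average_purchase_value = 50.0   # average spend per order
purchases_per_year = 4          # how often the customer buys
expected_years = 3              # expected length of the relationship
acquisition_and_service_cost = 120.0

clv = (average_purchase_value * purchases_per_year * expected_years
       - acquisition_and_service_cost)
print(f"Estimated customer lifetime value: {clv:.2f}")  # 480.00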

Some useful tips

Don’t confuse your audience with poorly defined communication channels. There is an abundance of communication channels available to marketers in this day and age. Communication between you and your audience should always be a welcome event (or at least not an unwelcome one). Don’t alienate or anger your audience by forcing correspondence to happen, or by reaching out too often. Always ask prospects to opt in. Not only is it the ethical thing to do, but you’ll also be able to steer clear of any legal issues and reinforce a positive image around your brand to new prospects and current customers alike.

Don’t initiate communication on a channel you cannot use for the entire correspondence. Communication is a two-way street. If you have implemented a well-devised marketing automation process – one that accounts for incoming and outgoing correspondence between your system and your customers with an ability to listen to the other party, you will have a clear picture of their needs and be one step closer to closing the deal. Not having the capability to listen to responses via channels used for customer engagement is a failure, but it is even more so if you lack the ability and the process in-house to follow up. Align sales with marketing. If you ask questions or want responses via the channels you use to engage with your audience, be sure to have the capabilities and process in place to receive them and lead them to the next step.

Don’t smother your audience with irrelevant and unwanted content. During the process of nurturing leads, great marketers uncover a host of intimate details that paint a picture of who they are doing business with. Marketing automation then kicks in and utilises this information to serve tailored and personal content that will push leads further down the sales funnel. Sending content to prospects, especially if you are initiating the action, is quite intrusive, so if this is a part of your workflow you must absolutely make sure that what you’re sending out aligns with their needs and interests. If you don’t know what these are, take a few steps back and review your process of collecting data.

Don’t send duplicate content or correspondence. Flawless marketing automation is difficult to achieve even for the best of brands. It requires a strong top-of-the-funnel base that produces a consistent flow of sales leads. Even if it takes a great deal of time, carefully review the workflows you have programmed to send out correspondence automatically, and always ensure you’re not sending out duplicates of any correspondence.

Marketing automation software companies

There are many reputable marketing automation software companies in the marketplace for you to consider, such as HubSpot, Campaign Monitor, Eloqua and InfusionSoft. We recommend you take time to consider which one best suits the needs of your business. Read reviews from other customers to see which ones best resonate with your business needs.

Conversation LAB wins Conosco account

Conversation LAB has welcomed client win Conosco to its London office to help boost its global digital credentials. This follows the agency’s recent South African wins – Markham and Kinky World of Hair.

Conversation LAB has been contracted as Conosco’s digital agency of record and is responsible for search and content management, UX, data, and analytics. The agency also manages all bought media with a strong focus on Google demand generation.

Conosco provides technology support, service, and strategy to United Kingdom-based businesses. Conosco says that IT belongs in the boardroom, and all its services are delivered with business goals in mind.

Speaking from London, Conosco director Max Mlinaric says, “Conversation LAB was recommended to us through our South African network, and we were impressed by their offering – a broad skillset all under one roof – as well as their competitive pricing. The fact that they now have an office in London is very exciting, and we would like to congratulate them on the expansion of their business.”

Kevin Power, group managing director of Conversation LAB, adds, “It is great to work with Conosco, one of the leading outsourced IT companies in London. They have a superior offering and are single-minded in creating the best possible technology solutions for London-based companies. Their focus and dedication to going ‘beyond IT’ really excited us about partnering with them on their next phase of growth.”

For more information, visit www.conversationlab.com. Alternatively, connect with them on Facebook or on Twitter.

 

Source: https://www.mediaupdate.co.za/marketing/142877/conversation-lab-wins-conosco-account

 


Facebook News Feed Experiments: Threat or Opportunity?

As Adam Mosseri, Head of News Feed at Facebook, noted in a post on Monday: “There have been a number of reports about a test we’re running in Sri Lanka, Bolivia, Slovakia, Serbia, Guatemala, and Cambodia.” The test he is referring to is that of moving all content posted by brand pages (not content shared by friends) from the main user News Feed into a separate tab named “Explore”.

What’s changed?

One of the first sources to write about the test was Filip Struhárik with the starkly titled “Biggest drop in Facebook organic reach we have ever seen”. The story has since been picked up by The Guardian (Facebook moving non-promoted posts out of News Feed in trial) and the BBC (Facebook explores, publishers panic). As you can tell from the titles, tensions are running high, which is understandable because the very people writing them could stand to be the hardest hit by another step of removal from their core audience. As you may also have gleaned, this trial applies to organic content only, not promoted posts. It is only a matter of time before someone comes up with an “-ageddon” nickname for the event (Explorageddon sounds like a tourist board advert), but as many have pointed out, the potential ramifications could be serious.

Purely from a publisher relations standpoint, this could perhaps have been handled better. As Mosseri mentions in his post, “It’s also important to know this test in these six countries is different than the version of Explore that has rolled out to most people”. While it’s understandable that Facebook wouldn’t want to panic publishers by warning them of this planned test in advance, rolling out something so controversial in a limited geography and making it easily confused with something else far more widespread wasn’t a fantastic exercise in concern management. It’s akin to starting annual review day by firing the first few employees you meet and leaving everyone else to stew. It’s also understandable that any new release will come with its bugs, but Struhárik has reported page posts being removed from the main News Feed for users that don’t yet have the Explore section, meaning that for those users all page posts are hidden in the Pages Feed section, which I certainly hadn’t visited before today.

Change isn’t always a bad thing

It’s true that some changes that Facebook implements can make us better writers, marketers, and entertainers. The much-maligned algorithm update which reduced pages’ ability to reach their followers felt like it made life harder, but it allowed good publishers to get far more for their money by engaging with their communities and learning from what they like, rather than just pumping out 50 posts a day to rack up those juicy clicks. Much like AMP, Instant Articles made us consider what we can pare back and peel away to give visitors only what actually matters with as little wait time as possible, and I’m actually quite interested in some of their plans on monetising chat bots discussed in this podcast, for instance sales messages being blocked if a user hasn’t actually engaged with your messages in the last 24 hours.

A step backwards

However, it’s not always the case that these changes improve content quality. Facebook has also announced in the past that users don’t like reaching the end of their timeline; in response, it allowed individual publishers to appear multiple times in a user’s News Feed, though whether this improved satisfaction is debatable. In the same announcement, Facebook described users not wanting to see notifications of friends’ likes in their feed – after Facebook removed these notifications, low-quality pages just pivoted to “tag a friend who” memes, some of which exemplified the worst side of us on social media. More recently, Facebook has gathered that users want to see more from friends and family, which is one of the reasons it has given for this latest test.

My concern is that moving this content to a separate (currently quite hidden) section and only allowing paid content into the News Feed won’t make publishers better; it’ll quarantine the terrible content but lump it in with the good. It stands to make Facebook success more like the deep-pockets-or-black-hat game that exists elsewhere and hampers the success of small but genuinely talented content producers. It’ll also mean that publishers have even more inaccurate figures about the value of a follow, making it harder still for community managers to argue the case for investing in a community.

What’s more, I still don’t see it reducing the torrent of “Tag a mate who is s**t at golf” posts coming up in my feed because the real low-quality publishers already know how to get their content past Facebook’s net – get my friends to deliver it to me. There is even a host of “Tag a friend to make them open their phone and look at this cucumber for no reason” content – that’s content that is basing its success on mocking Facebook’s aim of showing you only what you want to see.

Want more advice like this in your inbox? Join the monthly newsletter.

Of course, Facebook has to make money, but I am far happier with the current system, which stands to make companies pay through the nose to distribute uninteresting and unoriginal content. While it’s far from perfect, the current method of checking content popularity leaves more of a gap for intelligent, well-targeted, human content to run rings around generic uninspired posts, and even gives smaller publishers a better chance. It could be argued that users going to Explore will be primed to read and engage, but the number of times I open the “promotions” tab in Gmail speaks to the contrary, and that’s ignoring the fact that the Explore section currently won’t be limited to pages I’ve subscribed to, but will include any content that Facebook deems appropriate.

The outcome

As Ziad Ramley, former social lead for Al Jazeera, suggests, this could all just be a flash in the pan. After the testing period, Facebook could well kill this experiment dead, or it might even roll out and have nothing like the negative impact we’re envisioning. Even though Facebook explicitly prioritises users over publishers, a stance that Techcrunch describes as the reason Facebook has survived so much change, there are certainly reasons why it might want to reverse this course of action. As Struhárik observed to The Guardian: when we finally get a News Feed that’s just friends, we may find out just how boring our friends are. Maybe we’ll jump into the Explore section when we get sick of hearing about Clea’s “nightmare” mole operation, or maybe we’ll just stop logging in.

One thing’s for sure, moving publishers out of the News Feed, even if it is accompanied by a reduction in the quality of experience, is bound to be far more frictionless than attempts to move organic page posts back in. If Facebook makes this change and usage goes down I could imagine the smartest marketers playing News Feed exposure like the stock market, waiting for the drop in interest and investing heavily while Facebook tries to gain back its lost momentum.

Why you should work with micro influencers

Influencer marketing is the newest trend for marketing, taking over from programmatic. There’s been a huge shift towards making full use of this form of marketing, developing it into a fully-fledged channel over the past few years.

It’s not hard to see why. On average, businesses are said to generate 6.5x ROI on an influencer marketing campaign. Studies have also shown that marketers believe the quality of customers obtained through influencer marketing is better than that from traditional channels.

No wonder more and more brands are investing in influencer marketing. But are they doing it correctly? Too many brands are aiming to work with Zoella, or Deliciously Ella, when actually they should be aiming a bit lower – for good reason.

So, why is it that micro influencers are better than bigger names?

Authenticity 🕵

In a lot of cases, micro influencers will be a lot more relatable than bigger name influencers. Celebrity influencers can seem very detached from the average, everyday consumer. Their problems aren’t the same problems that most people will face in their day-to-day lives.

However, micro-influencers seem more attainable and more relatable. Their followings tend to be built on their direct approachability and everyday, down-to-earth persona. Many consumers trust these micro-influencers much more than they would trust larger, celebrity endorsements.

Priced out 💸

In the previous era of influencer marketing, when we were restricted to Zoella and Alfie Deyes, you either stumped up the huge amount of money required to work with them, or you didn’t do influencer marketing.

However, now that you can work directly with micro influencers, you’re no longer priced out of influencer marketing as a channel.

This means that your marketing budget not only stretches further but also that you can engage with multiple influencers during a campaign, instead of hinging all your focus on one influencer in particular.

Easier to work with 💘

Working with huge, borderline-celebrity influencers can be a real challenge. Influencers who have followers in the hundreds of thousands, if not millions, tend to think of themselves as celebrities, with all the negative connotations attached to that.

Demands can be made, egos can be inflated and handling can be difficult. All of this can lead to a less than ideal working situation, as well as friction on projects.

Micro-influencers tend to be free from these sorts of behaviours. They’re still at the point where they rely heavily on word-of-mouth and their reputations, so they’ll ensure that they’re pleasant to work with in order to get more work (obviously this isn’t true of everyone, but it is of the majority).

Better metrics 📊

One of the biggest reasons to work with micro influencers is that their metrics tend to be better than those of macro influencers.

Whilst micro influencers do have a smaller number of followers, many studies (here and here) have found that influencers with smaller followings actually have higher engagement rates.

This means that you’re getting more actual engagement with the posts that you’re spending your money on, and a bigger bang for your buck.

Tap into a niche 🍰

Most micro-influencers have smaller followings for a reason: they appeal to a specific niche. Identifying the correct micro influencer for your target market and for your niche is key.

When you engage with the followers of a specific niche, you can be confident that they all care about the particular topic, such as interior design. Even if the influencer isn’t a household name, you can be assured that their followers will all really care about interior design and therefore be more engaged with your design brand, for example.

By tapping into these extremely targeted follower bases, you’re more likely to drive better engagement and overall better results.

Brand safety 🔐

There are some arguments that say micro influencers also have an element of greater brand safety than macro influencers do.

Larger influencers have subsets of followers within their audience; not all of their followers follow them for the exact same reason.

However, micro influencers have smaller but, as previously said, more targeted audiences. Their expertise on a particular topic means that they will partner well with the brand, and typically they already produce safe, contextual content.

Overall, working with micro influencers is ideal for any brand, of any size. You’re not priced out, the content is more authentic and you get higher engagement. What’s not to love?



Room to Improve Shareholder Communications as Sustainability is Now Considered Smarter Business

The following post was written by Jane Madden, Managing Director, U.S. Corporate Responsibility, and Elizabeth Woodworth, Manager, U.S. Healthcare Practice.

In 2016, Burson-Marsteller’s Corporate Responsibility Practice, together with our sister firm PSB, published the research report, “Is Your ESG Report Getting Noticed by Investors?” which found that institutional investors say a company with strong ESG (Environmental, Social and Governance, the three main factors that measure the sustainability and ethical impact of a business) initiatives is a more attractive investment.

At last week’s Bloomberg Sustainable Business Summit in New York City, we found that investors were the topic of discussion. We heard from leaders that more institutional investors are now thinking of ESG less as a separate investment evaluation input and more as data that provides additional insights into how businesses operate and their growth trajectory. In fact, ESG is being viewed more closely in line with traditional financial data than ever before.

The theme of the summit was “Sustainability is Good Business,” a sentiment held by many in the business community. In fact, Burson-Marsteller and PSB’s research found that 77 percent of surveyed investors say building ESG initiatives into a company’s business model – the associated costs, risks and benefits of these issues – is a smart business decision. But surprisingly, leaders across various sectors noted there are gaps that hamper investors’ understanding of the value ESG initiatives bring to the business. As Morgan Stanley’s Chief Marketing Officer and Chief Sustainability Officer Audrey Choi noted, “There’s an epidemic of silent interest in ESG among investors.” So, why is the interest silent?

Three themes we identified throughout the two-day summit are, in our view, important in understanding how investors are thinking about sustainability and where the communications opportunities are to improve the dialogue between businesses and their shareholders.

1) Speaking the Same Language: While consumers now demand more “sustainability” efforts from companies, investors as a broader group do not yet speak the language of sustainability. There is a lot of room to improve communications around how ESG is related to financial performance. A more open and proactive dialogue with investors is critical to clarifying that many of the issues they are already thinking about, from the cost of carbon to board diversity to human rights issues, are also sustainability issues; they just aren’t described in the same terminology the Chief Sustainability Officer uses. In addition, using the right financial vocabulary helps investors understand material ESG data, risks and opportunities.

2) Combine Short and Long-Term Outlooks: Aligning the short and long-term views of standard financial and ESG reporting by coordinating Investor Relations and Sustainability departments is another key theme. While investors and IR teams usually think quarter to quarter and by fiscal year, sustainability and ESG operate on longer timelines. If you are lucky, as Jay Gould, President & CEO of Interface said, you have a board of directors who thinks on a 20-30-year timeline. But both short-term financial performance and longer-term sustainability should be communicated to investors in parallel to illustrate healthy performance, identify opportunities that lead to innovation and note risks that can be mitigated. Sustainability leaders ranging from Dave Stangis, VP-Corporate Responsibility and Chief Sustainability Officer of the Campbell Soup Company, to JetBlue’s President and CEO Robin Hayes spoke about a greater need to coordinate Investor Relations and Sustainability teams to align the short and long-term nature of these analyses, which can then improve value reporting to shareholders and other stakeholders.

3) Engage to Control Your Message: As Ingrid Dyott, Managing Director at Neuberger Berman, noted, ESG issues are business issues, which are issues investors should care about. But ESG is still a subtle concept for many, so you need to get ahead of your communications before someone else does it for you. From an institutional perspective, look closely at who is behind the investment. Dyott noted, for example, that pension funds are a burgeoning area for ESG investing because teachers, firefighters and police officers inherently care about sustainability issues and the impact of their investments. And now with a growing Millennial population making investments (by 2025 they will make up 75 percent of the workforce and are two times as likely to back an investment product that aligns with their values, says Morgan Stanley’s Audrey Choi), asset managers should be thinking of these investments not only according to financial risk and reward, but also impact, and what the nature of that impact is. These investor groups have certain expectations, and asset managers and IR teams should insert their POV into the narrative before investors do it for them.

There is a rapidly growing consensus that sustainability is good business and drives innovation. Now the opportunity is to further integrate investor relations, sustainability planning and ESG into standard financial reporting and cohesive investor communications. Doing so can help clarify the material and non-material value of these initiatives in the language your investors speak and illustrate the trend that purpose and performance are increasingly one and the same.

Effective Content Marketing: Hub content

In this series, we’re going to look more closely at the Hero, Hub and Hygiene model, which can be used as a strategic framework for creating content. We’ll look at each type of content in more depth, as well as some examples, where you should place it and how to make it work for your business.

Hub content is typically aligned with targeted marketing campaigns that are staggered throughout the year.

Hub content is defined by Google as a ‘push’ activity, and we agree. Hub content is designed to be actively pushed out to your audience. This can be done with social media or email, in order to activate your audience. It’s also worth considering having this hub content appear at regular intervals in your customer journey.

Hub content is exactly what it says on the tin, and should contain a ‘hub’ of knowledge about a topic. A series of blog posts can be a hub, as can a series of videos about one topic.

Hub content should be looking to educate your audience where possible. Expert opinions really come into their own in Hub content, so if you have experts, you should be making the most of them.

When should you produce Hub content?

Think about the content that you’ve been producing lately. Have you been producing a lot around a similar theme? Or is it in a series?

If the answer is yes, then you’ve got the good foundations for Hub content. Typically, you’ll just need to pull this all together into a centralised place – a hub.

Example of Hub Content

A great example of Hub content that many digital marketers know (or should know) of is Moz’s Whiteboard Fridays.


Whiteboard Friday (WBF) is a good example of Hub content as it hits all of Google’s Hub content best practices.

  • WBF, and Rand (Fishkin, the host), have a strong editorial voice and a strong, distinct style. Even if Rand is not hosting, which he sometimes isn’t, the clear style of drawing on a whiteboard makes it very obviously a WBF video.
  • Rand is a good example of a single, identifiable personality that appears in pretty much all of the WBF videos.
  • There is a consistent visual language across all the videos. The format is simple, yet easily identifiable.
  • Moz communicates with their audience about the release of the videos, and there is a clear release schedule (hint: the clue is in the name). Their promotion strategy is evident across channels.

All of this shows that Moz listens to Google’s best practices. This, combined with Rand’s love of 10x content, means that Moz has become the expert name for SEO.

If your hubs are getting a good amount of traffic, it’s time to step it up a notch and start to think about creating some Hero content.

Want to take your content to the next level? Find out how we can help create a content hub for you. Just drop us a line below.



11 Phrases That Can Ruin Your Performance Review

Don’t sabotage your review with poorly chosen words or phrases; the meeting is too important to risk verbal missteps.

Between the feeling of being thrust into the spotlight, the one-on-one setting with your manager, and the gravity of what’s at stake, performance reviews can feel pretty uncomfortable. And when you’re made to feel uncomfortable, sometimes you aren’t always the most conscious of (or careful with) your words. But if there’s one time that you want to communicate effectively, it’s then. After all, your performance review is often the one chance you get to push for a raise, secure a promotion, or even save your job.

Read Full Story

Winning Awards

The British have a very specific way of receiving accolades. We’re taught from a very young age to win well but lose better. Be humble. Be self-effacing. Don’t make a big deal of it. It’s the taking part that counts. All very Olympian and playing fields of Eton.

Bit tricky when you’re writing a blog about winning awards though.

Best be straight to the point.

Here’s the thing. Tonic’s had a great 12 months. Great clients. Great projects. Great team. Cracking results. And we’ve won a bunch of awards. The good ones.

  • Recruitment Business Awards 2017 – Grand Prix & Agency of the Year
  • CIPD Recruitment Marketing Awards 2017 – Grand Prix
  • Employer Brand Management Awards 2017 – Grand Prix
  • RAD Awards 2017 – Work of the Year
  • Recruitment Business Awards 2016 – Grand Prix & Agency of the Year

And more than 20 others with clients as diverse as The British Army, PoliceNow, RBS and FirstNames Group.

The question I’d be asking if I were you is how? What can I replicate? What’s the secret sauce? I could tell you, but I’m not going to. You’ll have to speak to us if you want to get the detail.  But – if you want to win some awards – here are some tips that might help:

Work with great clients

Critical. Find the people who share your values. The people that share your ambition. The people, not the business/organisation/authority, are the most important ingredient by a country mile. Dull businesses don’t lack in adventure because of who they are – but because of the people that work there. Find the right clients and collaborate as hard as you can. You can’t do this alone.

Get a great partner

See above. But change the terminology. Find the people who share your values. The people that share your ambition. The people, not the agency/assessment provider/RPO, are the most important ingredient by a country mile. Dull partners are not boring because of who they are – but because of the people that work there. Find the right partners and collaborate as hard as you can. You can’t do this alone.

Wonder

But not in a wishy-washy way. Fight dogma and convention. Always be curious, never be complacent. Look hard for the right idea and be prepared to battle for it. Use your imagination. Don’t settle for second best. Repeat.

Care

I mean really do care. Care about the relationships you have with your client/partner. Care about the goal you have to achieve. Care about the impact you’ll have on your business. Care about the people whose lives you’ll change. Care about the end product and care about why you’re doing this.

Don’t worry

Sometimes you’ll succeed, sometimes you won’t. Sometimes it’ll be easy, mainly it won’t. If a project/campaign goes well, celebrate. If it doesn’t, learn and don’t let it happen again. If you let worry consume you, if you let fear of failure get in the way, then you’ll kill your creativity and ambition.

Use the data & your gut instinct

In equal measure. Listen hard to reality. Data can help make the right decisions (and it will certainly help awards judges). Make sure it’s giving you the right metric rather than a red herring. Mitigate against disaster. Look for the deeper thought rather than the surface-scratching indicator. If it doesn’t feel right, it probably isn’t. Trust your judgement.

Tell a great story

Make the case. Not only in entering for awards. Also within your business. Why this rather than that? A rather than B? What’s the underlying problem that you’re working to fix? What’s at stake? What’s to be gained? What’s the impact on the people you hire or the people you retain? What’s the logical case for action? What’s the emotional rationale? How do you want people to feel?

Of course winning isn’t everything. But it feels fantastic. RAD Awards 2018 are in for judging now. Good luck.

Tom #Humblebrag Chesterton

October 2017


Proposing Better Ways to Think about Internal Linking

I’ve long thought that there was an opportunity to improve the way we think about internal links, and to make much more effective recommendations. I feel like, as an industry, we have done a decent job of making the case that internal links are important and that the information architecture of big sites, in particular, makes a massive difference to their performance in search (see: 30-minute IA audit and DistilledU IA module).

And yet we’ve struggled to dig deeper than finding particularly poorly-linked pages, and obviously-bad architectures, leading to recommendations that are hard to implement, with weak business cases.

I’m going to propose a methodology that:

  1. Incorporates external authority metrics into internal PageRank (what I’m calling “local PageRank”), building on pure internal PageRank, which is the best data-driven approach we’ve seen for evaluating internal links, while avoiding the issues that cause it to focus attention on the wrong areas

  2. Allows us to specify and evaluate multiple different changes in order to compare alternative approaches, figure out the scale of impact of a proposed change, and make better data-aware recommendations

Current information architecture recommendations are generally poor

Over the years, I’ve seen (and, ahem, made) many recommendations for improvements to internal linking structures and information architecture. In my experience, of all the areas we work in, this is an area of consistently weak recommendations.

I have often seen:

  • Vague recommendations – (“improve your information architecture by linking more to your product pages”) that don’t specify changes carefully enough to be actionable

  • No assessment of alternatives or trade-offs – does anything get worse if we make this change? Which page types might lose? How have we compared approach A and approach B?

  • Lack of a model – very limited assessment of the business value of making proposed changes – if everything goes to plan, what kind of improvement might we see? How do we compare the costs of what we are proposing to the anticipated benefits?

This is compounded in the case of internal linking changes because they are often tricky to specify (and to make at scale), hard to roll back, and very difficult to test (by now you know about our penchant for testing SEO changes – but internal architecture changes are among the trickiest to test because the anticipated uplift comes on pages that are not necessarily those being changed).

In my presentation at SearchLove London this year, I described different courses of action for factors in different areas of this grid:

It’s tough to make recommendations about internal links because while we have a fair amount of data about how links generally affect rankings, we have less information specifically focusing on internal links, and so while we have a high degree of control over them (in theory it’s completely within our control whether page A on our site links to page B) we need better analysis:

The current state of the art is powerful for diagnosis

If you want to get quickly up to speed on the latest thinking in this area, I’d strongly recommend reading these three articles and following their authors:

  1. Calculate internal PageRank by Paul Shapiro

  2. Using PageRank for internal link optimisation by Jan-Willem Bobbink

  3. Easy visualizations of PageRank and page groups by Patrick Stox

A load of smart people have done a ton of thinking on the subject and there are a few key areas where the state of the art is powerful:

There is no doubt that the kind of visualisations generated by techniques like those in the articles above are good for communicating problems you have found, and for convincing stakeholders of the need for action. Many people are highly visual thinkers, and it’s very often easier to explain a complex problem with a diagram. I personally find static visualisations difficult to analyse, however, and for discovering and diagnosing issues, you need data outputs and / or interactive visualisations:

But the state of the art has gaps:

The most obvious limitation is one that Paul calls out in his own article on calculating internal PageRank when he says:

“we see that our top page is our contact page. That doesn’t look right!”

This is a symptom of a wider problem which is that any algorithm looking at authority flow within the site that fails to take into account authority flow into the site from external links will be prone to getting misleading results. Less-relevant pages seem erroneously powerful, and poorly-integrated pages that have tons of external links seem unimportant in the pure internal PR calculation.

In addition, I hinted at this above, but I find visualisations very tricky – on large sites, they get too complex too quickly and have an element of the Rorschach to them:

My general attitude is to agree with O’Reilly that “Everything looks like a graph but almost nothing should ever be drawn as one”:

All of the best visualisations I’ve seen are nonetheless full link-graph visualisations – you will very often see crawl-depth charts, which are, in my opinion, even harder to read and obscure even more information than regular link graphs. It’s not only the sampling but also the inherent bias of only showing links in the order they are discovered from a single starting page – typically the homepage – which is useful only if that’s the only page on your site with any external links. This Sitebulb article talks about some of the challenges of drawing good crawl maps:

But by far the biggest gap I see is the almost total lack of any way of comparing current link structures to proposed ones, or for comparing multiple proposed solutions to see a) if they fix the problem, and b) which is better. The common focus on visualisations doesn’t scale well to comparisons – both because it’s hard to make a visualisation of a proposed change and because even if you can, the graphs will just look totally different because the layout is really sensitive to even fairly small tweaks in the underlying structure.

Our intuition is really bad when it comes to iterative algorithms

All of this wouldn’t be so much of a problem if our intuition was good. If we could just hold the key assumptions in our heads and make sensible recommendations from our many years of experience evaluating different sites.

Unfortunately, the same complexity that made PageRank such a breakthrough for Google in the early days makes for spectacularly hard problems for humans to evaluate. Even more unfortunately, not only are we clearly bad at calculating these things exactly, we’re surprisingly bad even at figuring them out directionally. [Long-time readers will no doubt see many parallels to the work I’ve done evaluating how bad (spoiler: really bad) SEOs are at understanding ranking factors generally].

I think that most people in the SEO field have a high-level understanding of at least the random surfer model of PR (and its extensions like reasonable surfer). Unfortunately, most of us are less good at having a mental model for the underlying eigenvector / eigenvalue problem and the infinite iteration / convergence of surfer models is troublesome to our intuition, to say the least.

I explored this intuition problem recently with a really simplified example and an unscientific poll:

The results were unsurprising – over 1 in 5 people got even a simple question wrong (the right answer is that a lot of the benefit of the link to the new page flows on to other pages in the site and it retains significantly less than an Nth of the PR of the homepage):

I followed this up with a trickier example and got a complete lack of consensus:

The right answer is that it loses (a lot) less than the PR of the new page except in some weird edge cases (I think only if the site has a very strange external link profile) where it can gain a tiny bit of PR. There is essentially zero chance that it doesn’t change, and no way for it to lose the entire PR of the new page.

Most of the wrong answers here are based on non-iterative understanding of the algorithm. It’s really hard to wrap your head around it all intuitively (I built a simulation to check my own answers – using the approach below).

All of this means that, since we don’t truly understand what’s going on, we are likely making very bad recommendations and certainly backing them up and arguing our case badly.

Doing better part 1: local PageRank solves the problems of internal PR

In order to be able to compare different proposed approaches, we need a way of re-running a data-driven calculation for different link graphs. Internal PageRank is one such re-runnable algorithm, but it suffers from the issues I highlighted above from having no concept of which pages it’s especially important to integrate well into the architecture because they have loads of external links, and it can mistakenly categorise pages as much stronger than they should be simply because they have links from many weak pages on your site.

In theory, you get a clearer picture of the performance of every page on your site – taking into account both external and internal links – by looking at internet-wide PageRank-style metrics. Unfortunately, we don’t have access to anything Google-scale here and the established link data providers have only sparse data for most websites – with data about only a fraction of all pages.

Even if they had dense data for all pages on your site, it wouldn’t solve the re-runnability problem – we wouldn’t be able to see how the metrics changed with proposed internal architecture changes.

What I’ve called “local” PageRank is an approach designed to attack this problem. It runs an internal PR calculation with what’s called a personalization vector designed to capture external authority weighting. This is not the same as re-running the whole PR calculation on a subgraph – that’s an extremely difficult problem that Google spent considerable resources to solve in their caffeine update. Instead, it’s an approximation, but it’s one that solves the major issues we had with pure internal PR of unimportant pages showing up among the most powerful pages on the site.

Here’s how to calculate it:
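As a rough sketch of the first step (the file name and column names below are illustrative, assuming a crawl export of all internal links with Source and Destination columns), you can load the crawl into a NetworkX directed graph:

import csv
import networkx as nx

# Build a directed graph of internal links from the crawl export
# (crawl_path and the column names are placeholders for your own export)
crawl_path = 'crawl_all_outlinks.csv'

site = nx.DiGraph()
with open(crawl_path, newline='') as crawl_file:
    for edge in csv.DictReader(crawl_file):
        site.add_edge(edge['Source'], edge['Destination'])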

The next stage requires data from an external provider – I used raw mozRank – you can choose whichever provider you prefer, but make sure you are working with a raw metric rather than a logarithmically-scaled one, and make sure you are using a PageRank-like metric rather than a raw link count or ML-based metric like Moz’s page authority:

You need to normalise the external authority metric – as it will be calibrated on the entire internet while we need it to be a probability vector over our crawl – in other words to sum to 1 across our site:
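A minimal sketch of that normalisation, assuming raw_authority maps each crawled URL to its raw external authority score (with zero for pages the provider has no data for):

# Normalise raw external authority scores into a probability vector
# that sums to 1 across the crawled pages
total_authority = sum(raw_authority.values())
external_authority = {url: score / total_authority
                      for url, score in raw_authority.items()}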

We then use the NetworkX PageRank library to calculate our local PageRank – here’s some outline code:
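A minimal sketch of that calculation (the variable names are mine, carried over from the snippets above):

# Local PageRank: internal PageRank with random jumps weighted by
# the normalised external authority of each page
local_pr = nx.pagerank(
    site,
    alpha=0.5,                           # damping parameter - see the note on alpha below
    personalization=external_authority,  # external authority as the jump distribution
)

# Inspect the strongest pages under this model
top_pages = sorted(local_pr.items(), key=lambda item: item[1], reverse=True)[:20]
for url, score in top_pages:
    print(f"{score:.5f}  {url}")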

What’s happening here is that by setting the personalization parameter to be the normalised vector of external authorities, we are saying that every time the random surfer “jumps”, instead of returning to a page on our site with uniform random chance, they return with probabilities proportional to the external authorities of those pages. This is roughly like saying that any time someone leaves your site in the random surfer model, they return via the weighted PageRank of the external links to your site’s pages. It’s fine that your external authority data might be sparse – you can just set values to zero for any pages without external authority data – one feature of this algorithm is that it’ll “fill in” appropriate values for those pages that are missing from the big data providers’ datasets.

In order to make this work, we also need to set the alpha parameter lower than we normally would (this is the damping parameter – normally set to 0.85 in regular PageRank – one minus alpha is the jump probability at each iteration). For much of my analysis, I set it to 0.5 – roughly representing the % of site traffic from external links – approximating the idea of a reasonable surfer.

There are a few things that I need to incorporate into this model to make it more useful – if you end up building any of this before I do, please do let me know:

  • Handle nofollow correctly (see Matt Cutts’ old PageRank sculpting post)

  • Handle redirects and rel canonical sensibly

  • Include top mR pages (or even all pages with mR) – even if they’re not in the crawl that starts at the homepage

    • You could even use each of these as a seed and crawl from these pages

  • Use the weight parameter in NetworkX to weight links by type to get closer to reasonable surfer model

    • The extreme version of this would be to use actual click-data for your own site to calibrate the behaviour to approximate an actual surfer!

Doing better part 2: describing and evaluating proposed changes to internal linking

After my frustration at trying to find a way of accurately evaluating internal link structures, my other major concern has been the challenges of comparing a proposed change to the status quo, or of evaluating multiple different proposed changes. As I said above, I don’t believe that this is easy to do visually as most of the layout algorithms used in the visualisations are very sensitive to the graph structure and just look totally different under even fairly minor changes. You can obviously drill into an interactive visualisation of the proposed change to look for issues, but that’s also fraught with challenges.

So my second proposed change to the methodology is to find ways to compare the local PR distribution we’ve calculated above between different internal linking structures. There are two major components to being able to do this:

  1. Efficiently describing or specifying the proposed change or new link structure; and

  2. Effectively comparing the distributions of local PR – across what is likely tens or hundreds of thousands of pages

How to specify a change to internal linking

I have three proposed ways of specifying changes:

1. Manually adding or removing small numbers of links

Although it doesn’t scale well, if you are just looking at changes to a limited number of pages, one option is simply to manipulate the spreadsheet of crawl data before loading it into your script:

2. Programmatically adding or removing edges as you load the crawl data

Your script will have a function that loads the data from the crawl file and builds the graph structure (a DiGraph in NetworkX terms, which stands for Directed Graph). At this point, if you want to simulate adding a sitewide link to a particular page, you can do that: if the following line sat inside the loop loading edges, for example, it would add a link from every page to our London SearchLove page:

# Inside the loop over crawl rows: add an edge from every source page
# to the SearchLove London page
site.add_edges_from([(edge['Source'],
                      'https://www.distilled.net/events/searchlove-london/')])

You don’t need to worry about adding duplicates (i.e. checking whether a page already links to the target) because a DiGraph has no concept of multiple edges in the same direction between the same nodes, so if it’s already there, adding it will do no harm.

Removing edges programmatically is a little trickier – because if you want to remove a link from global navigation, for example, you need logic that knows which pages have non-navigation links to the target, as you don’t want to remove those as well (you generally don’t want to remove all links to the target page). But in principle, you can make arbitrary changes to the link graph in this way.
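As an illustration of the kind of logic involved, here is a minimal sketch; the target URL and the set of pages with editorial links are made-up placeholders:

# Hypothetical example: remove a sitewide link to a target page, but keep
# it on pages known to link to it editorially (placeholder values)
target = 'https://www.example.com/campaign-page/'
editorial_linkers = {'https://www.example.com/blog/some-post/'}

for source in list(site.predecessors(target)):
    if source not in editorial_linkers:
        site.remove_edge(source, target)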

3. Crawl a staging site to capture more complex changes

As the changes get more complex, it can be tough to describe them in sufficient detail. For certain kinds of changes, it feels to me as though the best way to load the changed structure is to crawl a staging site with the new architecture. Of course, in general, this means having the whole thing implemented and ready to go, the effort of doing which negates a large part of the benefit of evaluating the change in advance. We have a secret weapon here which is that the “meta-CMS” nature of our ODN platform allows us to make certain changes incredibly quickly across site sections and create preview environments where we can see changes even for companies that aren’t customers of the platform yet.

For example, it looks like this to add a breadcrumb across a site section on one of our customers’ sites:

There are a few extra tweaks to the process if you’re going to crawl a staging or preview environment to capture internal link changes – because we need to make sure that the set of pages is identical in both crawls, we can’t just start at each homepage and crawl X levels deep. By definition we have changed the linking structure and therefore will discover a different set of pages. Instead, we need to take the steps below (a minimal code sketch follows the list):

  • Crawl both live and preview to X levels deep

  • Combine into a superset of all pages discovered on either crawl (noting that these pages exist on both sites – we haven’t created any new pages in preview)

  • Make lists of pages missing in each crawl and crawl those from lists
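A minimal sketch of the set logic, assuming live_urls and preview_urls are the sets of URLs discovered by crawling each environment to the same depth (both names are mine):

# Superset of every page discovered on either crawl
all_urls = live_urls | preview_urls

# Pages each crawl missed - crawl these from lists so both crawls
# end up covering the identical set of pages
missing_from_live = all_urls - live_urls
missing_from_preview = all_urls - preview_urls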

Once you have both crawls, and both include the same set of pages, you can re-run the algorithm described above to get the local PageRanks under each scenario and begin comparing them.

How to compare different internal link graphs

Sometimes you will have a specific problem you are looking to address (e.g. only y% of our product pages are indexed) – in which case you will likely want to check whether your change has improved the flow of authority to those target pages, compare their performance under proposed change A and proposed change B etc. Note that it is hard to evaluate losers with this approach – because the normalisation means that the local PR will always sum to 1 across your whole site so there always are losers if there are winners – in contrast to the real world where it is theoretically possible to have a structure that strictly dominates another.
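For the specific-problem case, here is a minimal sketch of the comparison, assuming local_pr_before and local_pr_after are the dictionaries returned by the local PageRank calculation for the current and proposed structures, and target_pages is the list of URLs you care about (all of these names are mine):

def total_local_pr(local_pr, pages):
    # Sum local PageRank over a set of pages (pages missing from the crawl count as 0)
    return sum(local_pr.get(page, 0.0) for page in pages)

before = total_local_pr(local_pr_before, target_pages)
after = total_local_pr(local_pr_after, target_pages)
print(f"Share of local PR on target pages: {before:.4f} -> {after:.4f}")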

In general, if you are simply evaluating how to make the internal link architecture “better”, you are less likely to jump to evaluating specific pages. In this case, you probably want to do some evaluation of different kinds of page on your site (a short labelling sketch follows the list below) – identified either by:

  1. Labelling them by URL – e.g. everything in /blog or with ?productId in the URL

  2. Labelling them as you crawl

    1. Either from crawl structure – e.g. all pages 3 levels deep from the homepage, all pages linked from the blog etc)

    2. Or based on the crawled HTML (all pages with more than x links on them, with a particular breadcrumb or piece of meta information labelling them)

  3. Using modularity to label them automatically by algorithmically grouping pages in similar “places” in the link structure
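A minimal sketch of option 1, labelling pages by URL pattern and comparing how much local PageRank each label captures under each scenario (the patterns and the local_pr_before / local_pr_after names are illustrative):

from collections import defaultdict

def label_page(url):
    # Example URL-based labels - adjust the patterns to your own site
    if '/blog' in url:
        return 'blog'
    if 'productId' in url:
        return 'product'
    return 'other'

def local_pr_by_label(local_pr):
    totals = defaultdict(float)
    for url, score in local_pr.items():
        totals[label_page(url)] += score
    return dict(totals)

print(local_pr_by_label(local_pr_before))
print(local_pr_by_label(local_pr_after))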

I’d like to be able to also come up with some overall “health” score for an internal linking structure – and have been playing around with scoring it based on some kind of equality metric, under the thesis that if you’ve chosen your indexable page set well, you want to distribute external authority as evenly throughout that set as possible. This thesis seems most likely to hold true for large long-tail-oriented sites that get links to pages which aren’t generally the ones looking to rank (e.g. e-commerce sites). It also builds on some of Tom Capper’s thinking (video, slides, blog post) about links being increasingly important for getting into Google’s consideration set for high-volume keywords, which is then reordered by usage metrics and ML proxies for quality.

I have more work to do here, but I hope to develop an effective metric – it’d be great if it could build on established equality metrics like the Gini Coefficient. If you’ve done any thinking about this, or have any bright ideas, I’d love to hear your thoughts in the comments, or on Twitter.
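As a starting point, here is a minimal sketch of a Gini coefficient over the local PageRank values of your indexable pages; it reuses the local_pr dictionary and label_page helper from the sketches above, both of which are my own illustrative names:

def gini(values):
    # Gini coefficient: 0 = authority spread perfectly evenly,
    # approaching 1 = almost all authority concentrated on one page
    values = sorted(values)
    n = len(values)
    total = sum(values)
    if n == 0 or total == 0:
        return 0.0
    rank_weighted = sum(rank * value for rank, value in enumerate(values, start=1))
    return (2 * rank_weighted) / (n * total) - (n + 1) / n

indexable_scores = [score for url, score in local_pr.items()
                    if label_page(url) != 'other']
print(f"Gini coefficient of local PR across indexable pages: {gini(indexable_scores):.3f}")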