Winning Awards

The British have a very specific way of receiving accolades. We’re taught from a very young age to win well but lose better. Be humble. Be self-effacing. Don’t make a big deal of it. It’s the taking part that counts. All very Olympian and playing fields of Eton.

Bit tricky when you’re writing a blog about winning awards though.

Best be straight to the point.

Here’s the thing. Tonic’s had a great 12 months. Great clients. Great projects. Great team. Cracking results. And we’ve won a bunch of awards. The good ones.

  • Recruitment Business Awards 2017 – Grand Prix & Agency of the Year
  • CIPD Recruitment Marketing Awards 2017 – Grand Prix
  • Employer Brand Management Awards 2017 – Grand Prix
  • RAD Awards 2017 – Work of the Year
  • Recruitment Business Awards 2016 – Grand Prix & Agency of the Year

And more than 20 others with clients as diverse as The British Army, PoliceNow, RBS and FirstNames Group.

The question I’d be asking if I were you is how? What can I replicate? What’s the secret sauce? I could tell you, but I’m not going to. You’ll have to speak to us if you want to get the detail.  But – if you want to win some awards – here are some tips that might help:

Work with great clients

Critical. Find the people who share your values. The people that share your ambition. The people, not the business/organisation/authority, are the most important ingredient by a country mile. Dull businesses don’t lack adventure because of who they are – but because of the people that work there. Find the right clients and collaborate as hard as you can. You can’t do this alone.

Get a great partner

See above. But change the terminology. Find the people who share your values. The people that share your ambition. The people, not the agency/assessment provider/RPO, are the most important ingredient by a country mile. Dull partners are not boring because of who they are – but because of the people that work there. Find the right partners and collaborate as hard as you can. You can’t do this alone.

Wonder

But not in a wishy-washy way. Fight dogma and convention. Always be curious, never be complacent. Look hard for the right idea and be prepared to battle for it. Use your imagination. Don’t settle for second best. Repeat.

Care

I mean really do care. Care about the relationships you have with your client/partner. Care about the goal you have to achieve. Care about the impact you’ll have on your business. Care about the people whose lives you’ll change. Care about the end product and care about why you’re doing this.

Don’t worry

Sometimes you’ll succeed, sometimes you won’t. Sometimes it’ll be easy, mainly it won’t. If a project/campaign goes well, celebrate. If it doesn’t, learn and don’t let it happen again. If you let worry consume you, if you let fear of failure get in the way, then you’ll kill your creativity and ambition.

Use the data & your gut instinct

In equal measure. Listen hard to reality. Data can help make the right decisions (and it will certainly help awards judges). Make sure it’s giving you the right metric rather than the red-herring. Mitigate against disaster. Look for the deeper thought rather than the surface-scratching indicator. If it doesn’t feel right, it probably isn’t. Trust your judgement.

Tell a great story

Make the case. Not only in entering for awards. Also within your business. Why this rather than that? A rather than B? What’s the underlying problem that you’re working to fix? What’s at stake? What’s to be gained? What’s the impact on the people you hire or the people you retain? What’s the logical case for action? What’s the emotional rationale? How do you want people to feel?

Of course winning isn’t everything. But it feels fantastic. RAD Awards 2018 are in for judging now. Good luck.

Tom #Humblebrag Chesterton

October 2017


Proposing Better Ways to Think about Internal Linking

I’ve long thought that there was an opportunity to improve the way we think about internal links, and to make much more effective recommendations. I feel like, as an industry, we have done a decent job of making the case that internal links are important and that the information architecture of big sites, in particular, makes a massive difference to their performance in search (see: 30-minute IA audit and DistilledU IA module).

And yet we’ve struggled to dig deeper than finding particularly poorly-linked pages, and obviously-bad architectures, leading to recommendations that are hard to implement, with weak business cases.

I’m going to propose a methodology that:

  1. Incorporates external authority metrics into internal PageRank (what I’m calling “local PageRank”) – taking pure internal PageRank, the best data-driven approach we’ve seen for evaluating internal links, and avoiding the issues that focus its attention on the wrong areas

  2. Allows us to specify and evaluate multiple different changes in order to compare alternative approaches, figure out the scale of impact of a proposed change, and make better data-aware recommendations

Current information architecture recommendations are generally poor

Over the years, I’ve seen (and, ahem, made) many recommendations for improvements to internal linking structures and information architecture. In my experience, of all the areas we work in, this is an area of consistently weak recommendations.

I have often seen:

  • Vague recommendations – (“improve your information architecture by linking more to your product pages”) that don’t specify changes carefully enough to be actionable

  • No assessment of alternatives or trade-offs – does anything get worse if we make this change? Which page types might lose? How have we compared approach A and approach B?

  • Lack of a model – very limited assessment of the business value of making proposed changes – if everything goes to plan, what kind of improvement might we see? How do we compare the costs of what we are proposing to the anticipated benefits?

This is compounded in the case of internal linking changes because they are often tricky to specify (and to make at scale), hard to roll back, and very difficult to test (by now you know about our penchant for testing SEO changes – but internal architecture changes are among the trickiest to test because the anticipated uplift comes on pages that are not necessarily those being changed).

In my presentation at SearchLove London this year, I described different courses of action for factors in different areas of this grid:

It’s tough to make recommendations about internal links because, while we have a fair amount of data about how links generally affect rankings, we have less information specifically focusing on internal links. So, although we have a high degree of control over them (in theory it’s completely within our control whether page A on our site links to page B), we need better analysis:

The current state of the art is powerful for diagnosis

If you want to get quickly up to speed on the latest thinking in this area, I’d strongly recommend reading these three articles and following their authors:

  1. Calculate internal PageRank by Paul Shapiro

  2. Using PageRank for internal link optimisation by Jan-Willem Bobbink

  3. Easy visualizations of PageRank and page groups by Patrick Stox

A load of smart people have done a ton of thinking on the subject and there are a few key areas where the state of the art is powerful:

There is no doubt that the kind of visualisations generated by techniques like those in the articles above are good for communicating problems you have found, and for convincing stakeholders of the need for action. Many people are highly visual thinkers, and it’s very often easier to explain a complex problem with a diagram. I personally find static visualisations difficult to analyse, however, and for discovering and diagnosing issues, you need data outputs and / or interactive visualisations:

But the state of the art has gaps:

The most obvious limitation is one that Paul calls out in his own article on calculating internal PageRank when he says:

“we see that our top page is our contact page. That doesn’t look right!”

This is a symptom of a wider problem which is that any algorithm looking at authority flow within the site that fails to take into account authority flow into the site from external links will be prone to getting misleading results. Less-relevant pages seem erroneously powerful, and poorly-integrated pages that have tons of external links seem unimportant in the pure internal PR calculation.

In addition, I hinted at this above, but I find visualisations very tricky – on large sites, they get too complex too quickly and have an element of the Rorschach to them:

My general attitude is to agree with O’Reilly that “Everything looks like a graph but almost nothing should ever be drawn as one”:

All of the best visualisations I’ve seen are nonetheless full link-graph visualisations – you will very often see crawl-depth charts, which are in my opinion even harder to read and obscure even more information than regular link graphs. It’s not only the sampling, but also the inherent bias of only showing links in the order they are discovered from a single starting page – typically the homepage – an approach which only really makes sense if that’s the only page on your site with any external links. This Sitebulb article talks about some of the challenges of drawing good crawl maps:

But by far the biggest gap I see is the almost total lack of any way of comparing current link structures to proposed ones, or for comparing multiple proposed solutions to see a) if they fix the problem, and b) which is better. The common focus on visualisations doesn’t scale well to comparisons – both because it’s hard to make a visualisation of a proposed change and because even if you can, the graphs will just look totally different because the layout is really sensitive to even fairly small tweaks in the underlying structure.

Our intuition is really bad when it comes to iterative algorithms

All of this wouldn’t be so much of a problem if our intuition was good. If we could just hold the key assumptions in our heads and make sensible recommendations from our many years of experience evaluating different sites.

Unfortunately, the same complexity that made PageRank such a breakthrough for Google in the early days makes for spectacularly hard problems for humans to evaluate. Even more unfortunately, not only are we clearly bad at calculating these things exactly, we’re surprisingly bad even at figuring them out directionally. [Long-time readers will no doubt see many parallels to the work I’ve done evaluating how bad (spoiler: really bad) SEOs are at understanding ranking factors generally].

I think that most people in the SEO field have a high-level understanding of at least the random surfer model of PR (and its extensions like reasonable surfer). Unfortunately, most of us are less good at having a mental model for the underlying eigenvector / eigenvalue problem and the infinite iteration / convergence of surfer models is troublesome to our intuition, to say the least.

I explored this intuition problem recently with a really simplified example and an unscientific poll:

The results were unsurprising – over 1 in 5 people got even a simple question wrong (the right answer is that a lot of the benefit of the link to the new page flows on to other pages in the site and it retains significantly less than an Nth of the PR of the homepage):

I followed this up with a trickier example and got a complete lack of consensus:

The right answer is that it loses (a lot) less than the PR of the new page except in some weird edge cases (I think only if the site has a very strange external link profile) where it can gain a tiny bit of PR. There is essentially zero chance that it doesn’t change, and no way for it to lose the entire PR of the new page.

Most of the wrong answers here are based on non-iterative understanding of the algorithm. It’s really hard to wrap your head around it all intuitively (I built a simulation to check my own answers – using the approach below).

All of this means that, since we don’t truly understand what’s going on, we are likely making very bad recommendations and certainly backing them up and arguing our case badly.

Doing better part 1: local PageRank solves the problems of internal PR

In order to be able to compare different proposed approaches, we need a way of re-running a data-driven calculation for different link graphs. Internal PageRank is one such re-runnable algorithm, but it suffers from the issues I highlighted above: it has no concept of which pages are especially important to integrate well into the architecture because they have loads of external links, and it can mistakenly categorise pages as much stronger than they should be simply because they have links from many weak pages on your site.

In theory, you get a clearer picture of the performance of every page on your site – taking into account both external and internal links – by looking at internet-wide PageRank-style metrics. Unfortunately, we don’t have access to anything Google-scale here and the established link data providers have only sparse data for most websites – with data about only a fraction of all pages.

Even if they had dense data for all pages on your site, it wouldn’t solve the re-runnability problem – we wouldn’t be able to see how the metrics changed with proposed internal architecture changes.

What I’ve called “local” PageRank is an approach designed to attack this problem. It runs an internal PR calculation with what’s called a personalization vector designed to capture external authority weighting. This is not the same as re-running the whole PR calculation on a subgraph – that’s an extremely difficult problem that Google spent considerable resources to solve in their Caffeine update. Instead, it’s an approximation, but one that solves the major issue we had with pure internal PR: unimportant pages showing up among the most powerful pages on the site.

Here’s how to calculate it:

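The first stage is to crawl the site and load the internal link graph into NetworkX. A minimal sketch, assuming the crawl has been exported to a CSV with 'Source' and 'Destination' columns (adjust the file and column names to whatever your crawler produces):

import networkx as nx
import pandas as pd

# Load the internal link crawl - one row per link
edges = pd.read_csv('crawl.csv')

# Build the directed link graph (a DiGraph in NetworkX terms)
site = nx.DiGraph()
site.add_edges_from(zip(edges['Source'], edges['Destination']))
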
The next stage requires data from an external provider – I used raw mozRank – you can choose whichever provider you prefer, but make sure you are working with a raw metric rather than a logarithmically-scaled one, and make sure you are using a PageRank-like metric rather than a raw link count or ML-based metric like Moz’s page authority:

You need to normalise the external authority metric – as it will be calibrated on the entire internet while we need it to be a probability vector over our crawl – in other words to sum to 1 across our site:

We then use the NetworkX PageRank library to calculate our local PageRank – here’s some outline code:

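A minimal sketch, continuing with the site graph built above and assuming the raw external metric has been exported to a CSV with 'URL' and 'mozRank' columns (adjust to whichever provider you used):

import networkx as nx
import pandas as pd

# Raw external authority for whichever pages the provider has data on
external = pd.read_csv('mozrank.csv')
raw = dict(zip(external['URL'], external['mozRank']))

# Normalise to a probability vector over our crawl: pages with no external
# data get zero, and the values sum to 1 across the site
values = {page: raw.get(page, 0.0) for page in site.nodes()}
total = sum(values.values())
external_authority = {page: value / total for page, value in values.items()}

# Local PageRank: the personalization vector means every "jump" returns to the
# site in proportion to each page's external authority
local_pr = nx.pagerank(site, alpha=0.5, personalization=external_authority)
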
What’s happening here is that by setting the personalization parameter to be the normalised vector of external authorities, we are saying that every time the random surfer “jumps”, instead of returning to a page on our site with uniform random chance, they return with probabilities proportional to the external authorities of those pages. This is roughly like saying that any time someone leaves your site in the random surfer model, they return via the weighted PageRank of the external links to your site’s pages. It’s fine that your external authority data might be sparse – you can just set values to zero for any pages without external authority data – one feature of this algorithm is that it’ll “fill in” appropriate values for those pages that are missing from the big data providers’ datasets.

In order to make this work, we also need to set the alpha parameter lower than we normally would (this is the damping parameter – normally set to 0.85 in regular PageRank – one minus alpha is the jump probability at each iteration). For much of my analysis, I set it to 0.5 – roughly representing the % of site traffic from external links – approximating the idea of a reasonable surfer.

There are a few things that I need to incorporate into this model to make it more useful – if you end up building any of this before I do, please do let me know:

  • Handle nofollow correctly (see Matt Cutts’ old PageRank sculpting post)

  • Handle redirects and rel canonical sensibly

  • Include top mR pages (or even all pages with mR) – even if they’re not in the crawl that starts at the homepage

    • You could even use each of these as a seed and crawl from these pages

  • Use the weight parameter in NetworkX to weight links by type to get closer to the reasonable surfer model (see the sketch after this list)

    • The extreme version of this would be to use actual click-data for your own site to calibrate the behaviour to approximate an actual surfer!

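To illustrate the weighting idea, here is a minimal sketch – the 'kind' edge attribute, the link-type labels and the weights are all invented for the example, and site and external_authority come from the earlier sketches:

# Illustrative weights by link type - tune these to approximate a reasonable surfer
link_weights = {'content': 3.0, 'navigation': 1.0, 'footer': 0.5}

# Assumes each edge was given a 'kind' attribute when the crawl was loaded;
# edges without one default to a weight of 1.0
for source, destination, data in site.edges(data=True):
    data['weight'] = link_weights.get(data.get('kind'), 1.0)

local_pr = nx.pagerank(site, alpha=0.5,
                       personalization=external_authority,
                       weight='weight')
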
Doing better part 2: describing and evaluating proposed changes to internal linking

After my frustration at trying to find a way of accurately evaluating internal link structures, my other major concern has been the challenges of comparing a proposed change to the status quo, or of evaluating multiple different proposed changes. As I said above, I don’t believe that this is easy to do visually as most of the layout algorithms used in the visualisations are very sensitive to the graph structure and just look totally different under even fairly minor changes. You can obviously drill into an interactive visualisation of the proposed change to look for issues, but that’s also fraught with challenges.

So my second proposed change to the methodology is to find ways to compare the local PR distribution we’ve calculated above between different internal linking structures. There are two major components to being able to do this:

  1. Efficiently describing or specifying the proposed change or new link structure; and

  2. Effectively comparing the distributions of local PR – across what is likely tens or hundreds of thousands of pages

How to specify a change to internal linking

I have three proposed ways of specifying changes:

1. Manually adding or removing small numbers of links

Although it doesn’t scale well, if you are just looking at changes to a limited number of pages, one option is simply to manipulate the spreadsheet of crawl data before loading it into your script:

2. Programmatically adding or removing edges as you load the crawl data

Your script will have a function that loads  the data from the crawl file – and as it builds the graph structure (a DiGraph in NetworkX terms – which stands for Directed Graph). At this point, if you want to simulate adding a sitewide link to a particular page, for example, you can do that – for example if this line sat inside the loop loading edges, it would add a link from every page to our London SearchLove page:

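# Inside the edge-loading loop: adds a link from the current source page to the SearchLove London page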
site.add_edges_from([(edge['Source'],
                      'https://www.distilled.net/events/searchlove-london/')])

You don’t need to worry about adding duplicates (i.e. checking whether a page already links to the target) because a DiGraph has no concept of multiple edges in the same direction between the same nodes, so if it’s already there, adding it will do no harm.

Removing edges programmatically is a little trickier – because if you want to remove a link from global navigation, for example, you need logic that knows which pages have non-navigation links to the target, as you don’t want to remove those as well (you generally don’t want to remove all links to the target page). But in principle, you can make arbitrary changes to the link graph in this way.
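
To give a flavour of what that logic might look like, here’s a sketch – the URLs and the set of body-copy linkers are invented for the example, and in reality you’d derive them from the crawled HTML:

import networkx as nx

# Toy graph standing in for the crawled site
site = nx.DiGraph()
site.add_edges_from([
    ('https://example.com/', 'https://example.com/widgets/'),
    ('https://example.com/blog/widget-guide/', 'https://example.com/widgets/'),
])

# Simulate removing a global navigation link to the target page, while keeping
# edges from pages known to link to it within their body copy
target = 'https://example.com/widgets/'
body_copy_linkers = {'https://example.com/blog/widget-guide/'}

for source in list(site.predecessors(target)):
    if source not in body_copy_linkers:
        site.remove_edge(source, target)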

3. Crawl a staging site to capture more complex changes

As the changes get more complex, it can be tough to describe them in sufficient detail. For certain kinds of changes, it feels to me as though the best way to load the changed structure is to crawl a staging site with the new architecture. Of course, in general, this means having the whole thing implemented and ready to go, the effort of doing which negates a large part of the benefit of evaluating the change in advance. We have a secret weapon here which is that the “meta-CMS” nature of our ODN platform allows us to make certain changes incredibly quickly across site sections and create preview environments where we can see changes even for companies that aren’t customers of the platform yet.

For example, it looks like this to add a breadcrumb across a site section on one of our customers’ sites:

There are a few extra tweaks to the process if you’re going to crawl a staging or preview environment to capture internal link changes – because we need to make sure that the set of pages is identical in both crawls so we can’t just start at each homepage and crawl X levels deep. By definition we have changed the linking structure and therefore will discover a different set of pages. Instead, we need to:

  • Crawl both live and preview to X levels deep

  • Combine into a superset of all pages discovered on either crawl (noting that these pages exist on both sites – we haven’t created any new pages in preview)

  • Make lists of pages missing in each crawl and crawl those from lists

Once you have both crawls, and both include the same set of pages, you can re-run the algorithm described above to get the local PageRanks under each scenario and begin comparing them.
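
A sketch of the superset step above (file and column names are assumptions, as before):

import pandas as pd

live = pd.read_csv('live_crawl.csv')
preview = pd.read_csv('preview_crawl.csv')

live_pages = set(live['Source']) | set(live['Destination'])
preview_pages = set(preview['Source']) | set(preview['Destination'])

# Pages discovered in one environment but not the other - crawl these from
# lists so both link graphs end up covering the same set of pages
missing_from_live = preview_pages - live_pages
missing_from_preview = live_pages - preview_pages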

How to compare different internal link graphs

Sometimes you will have a specific problem you are looking to address (e.g. only y% of our product pages are indexed) – in which case you will likely want to check whether your change has improved the flow of authority to those target pages, compare their performance under proposed change A and proposed change B etc. Note that it is hard to evaluate losers with this approach – because the normalisation means that the local PR will always sum to 1 across your whole site so there always are losers if there are winners – in contrast to the real world where it is theoretically possible to have a structure that strictly dominates another.

In general, if you are simply evaluating how to make the internal link architecture “better”, you are less likely to jump to evaluating specific pages. In this case, you probably want to do some evaluation of different kinds of page on your site (a sketch of the simplest, URL-based approach follows the list) – identified either by:

  1. Labelling them by URL – e.g. everything in /blog or with ?productId in the URL

  2. Labelling them as you crawl

    1. Either from crawl structure – e.g. all pages 3 levels deep from the homepage, all pages linked from the blog, etc.

    2. Or based on the crawled HTML (all pages with more than x links on them, with a particular breadcrumb or piece of meta information labelling them)

  3. Using modularity to label them automatically by algorithmically grouping pages in similar “places” in the link structure

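For the URL-based labelling option, a minimal sketch – the URL patterns are illustrative, and in practice local_pr would be the dict returned by the earlier local PageRank calculation rather than the toy values here:

from collections import defaultdict

# Toy values standing in for the dict returned by nx.pagerank earlier
local_pr = {'/blog/post-1': 0.05, '/blog/post-2': 0.03,
            '/shop?productId=1': 0.02, '/about': 0.01}

def label(url):
    # Illustrative URL patterns - adapt to your own site structure
    if '/blog' in url:
        return 'blog'
    if 'productId' in url:
        return 'product'
    return 'other'

# Sum local PageRank by page type to see how authority is distributed
pr_by_type = defaultdict(float)
for page, pr in local_pr.items():
    pr_by_type[label(page)] += pr

print(dict(pr_by_type))
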
I’d like to be able to also come up with some overall “health” score for an internal linking structure – and have been playing around with scoring it based on some kind of equality metric under the thesis that if you’ve chosen your indexable page set well, you want to distribute external authority as evenly as possible throughout that set. This thesis seems most likely to hold true for large long-tail-oriented sites that get links to pages which aren’t generally the ones looking to rank (e.g. e-commerce sites). It also builds on some of Tom Capper’s thinking (video, slides, blog post) about links being increasingly important for getting into Google’s consideration set for high-volume keywords, which is then reordered by usage metrics and ML proxies for quality.

I have more work to do here, but I hope to develop an effective metric – it’d be great if it could build on established equality metrics like the Gini Coefficient. If you’ve done any thinking about this, or have any bright ideas, I’d love to hear your thoughts in the comments, or on Twitter.
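
For what it’s worth, a Gini-based score over the local PageRank values only takes a few lines – toy values again, purely to illustrate the calculation:

# Toy local PageRank values - in practice, local_pr from the earlier sketch
local_pr = {'/': 0.4, '/category/': 0.3, '/product-a/': 0.2, '/product-b/': 0.1}

def gini(values):
    # Gini coefficient: 0 = authority spread perfectly evenly,
    # values near 1 = concentrated on a handful of pages
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    ranked_sum = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2 * ranked_sum) / (n * total) - (n + 1) / n

print(gini(local_pr.values()))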

The SEO Apprentice’s Toolbox: Gearing Up for Analysis

Being new to SEO is tricky. As a niche market within a niche market, there are many tools and resources unfamiliar to most new professionals. And with so much to learn it is nearly impossible to start real client work without first dedicating six months exclusively to industry training. Well…that’s how it may seem at first.

While it may be intimidating, investigating real-world problems is the best way to learn SEO. It exposes you to industry terminology, introduces you to valuable resources and gets you asking the right questions.

As a fairly new Analyst at Distilled, I know from experience how difficult it can be to get started. So here’s a list of common SEO analyses and supporting tools that may help you get off on the right foot.

Reviewing on-page elements

Page elements are essential building blocks of any web page. And pages with missing or incorrect elements risk not being eligible for search traffic. So checking these is necessary for identifying optimization opportunities and tracking changes. You can always go to the HTML source code and manually identify these problems yourself, but if you’re interested in saving a bit of time and hassle, Ayima’s Google Chrome extension Page Insights is a great resource.

This neat little tool identifies on-page problems by analyzing 24 common on-page issues for the current URL and comparing them against a set of rules and parameters. It then provides a list of all issues found, grouped into four priority levels: Errors, Warnings, Notices and Page Info. Descending from most to least severe, the first 3 categories (Errors, Warnings & Notices) identify all issues that could impact organic traffic for the page in question. The last category (Page Info) provides exact information about certain elements of the page.

For every page you visit Page Insights will give a warning next to its icon, indicating how many vulnerabilities were found on the page.

Clicking on the icon gives you a drop-down listing the vulnerabilities and page information found.

What makes this tool so useful is that it also provides details about each issue, like how it can cause harm to the page and correction opportunities. In this example, we can see that this web page is missing an H1 tag, which in this case could be corrected by adding an H1 tag around the page’s current heading (which is not coded as an H1).

In a practical setting, Page Insights is great for quickly identifying common on-page issues that should be fixed to ensure best SEO practice.

Additional tools for reviewing on-page elements:

Supplemental readings:

Analyzing page performance

Measuring the load functionality and speed of a page is an important and common practice, since both metrics are correlated with user experience and are highly valued by search engines. There are a handful of tools applicable to this task, but because of the breadth of metrics it includes, I recommend using WebPagetest.org.

Emulating various browsers, this site allows users to measure the performance of a web page from different locations. After sending a real-time page request, WebPagetest provides a sample of three tests containing request details, such as the complete load time, the load time breakdown of all page content, and a final image of the rendered page. There are various configuration settings and report types within this tool, but for most analyses, I have found that running a simple test and focusing on the metrics presented in the Performance Results supply ample information.

There are several metrics presented in this report, but data provided in Load Time and First Byte work great for most checks. Factoring in Google’s suggestion to have desktop load time no greater than 2 seconds and a time to first byte of 200ms or less, we can gauge whether or not a page’s speed is properly optimized.

Prioritizing page speed performance areas

Knowing if a page needs to improve its performance speed is important, but without knowing what areas need improving you can’t begin to make proper corrections. Using WebPagetest in tandem with Google’s PageSpeed Insights is a great solution for filling in this gap.

Free to use, this tool measures a page’s desktop and mobile performance to evaluate whether it has applied common performance best practices. Scored on a scale of 0-100, a page’s performance can fall into one of three categories: Good, Needs Work or Poor. However, the key feature of this tool, which makes it so useful for page speed performance analysis, is its optimization list.

Located below the review score, this list highlights details related to possible optimization areas and good optimization practices currently in place on the page. By clicking the “Show how to fix” drop down for each suggestion you will see information related to the type of optimization found, why to implement changes and specific elements to correct.

In the image above, for example, compressing two images to reduce the number of bytes that need to be loaded can improve this web page’s speed. By making this change, the page could expect a 28% reduction in image byte size.

Using WebPagetest and PageSpeed Insights together can give you a comprehensive view of a page’s speed performance and assist in identifying and executing on good optimization strategies.

Additional tools for analyzing page performance:

Supplemental readings:

Investigating rendering issues

How Googlebot (or Bingbot or MSNbot) crawls and renders a page can be completely different from what is intended, typically as a result of the crawler being blocked by a robots.txt file. If Google sees an incomplete or blank page, it assumes the user is having the same experience, which could affect how that page performs in the SERPs. In these instances, the Webmaster tool Fetch as Google is ideal for identifying how Google renders a page.

Located in Google Search Console, Fetch as Google allows you to test whether Googlebot can access pages of a site, identify how it renders them and determine whether any resources are blocked from the crawler.

When you look up a specific URL (or domain) Fetch as Google gives you two tabs of information: fetching, which displays the HTTP response of the specified URL; and rendering, which runs all resources on the page, provides a visual comparison of what Googlebot sees against what (Google estimates) the user sees and lists all resources Googlebot was not able to acquire.

For an analysis application, the rendering tab is where you need to look. Begin by checking the rendering images to ensure both Google and the user are seeing the same thing. Next, look at the list to see what resources were unreachable by Googlebot and why. If the visual elements are not displaying a complete page and/or important page elements are being blocked from Googlebot, that is an indication that the page is experiencing rendering issues and may perform poorly in the search engine.

Additional tools for investigating rendering issues:

Supplemental readings:

Checking backlink trends

Quality backlinks are extremely important for making a strong web page, as they indicate to search engines a page’s reliability and trustworthiness. Changes to a backlink profile could easily affect how a page is ranked in the SERPs, so checking this is important for any webpage/website analysis. As a testament to their importance, there are several tools dedicated to backlink analytics. However, I have a preference for the site Ahrefs due to its comprehensive yet simple layout, which makes it great for on-the-spot research.

An SEO tool well known for its backlink reporting capabilities, Ahrefs measures several backlink performance factors and displays them in a series of dashboards and graphs. While there is plenty to review, for most analysis purposes I find the “Backlinks” metric and “New & lost backlinks” graph to be the best places to focus.

Located under the Site Explorer tab, “Backlinks” identifies the total number of backlinks pointing to a target website or URL. It also shows the quantitative changes in these links over the past 7 days with the difference represented by either a red (negative growth) or green (positive growth) subscript. In a practical setting, this information is ideal for providing quick insight into current backlink trend changes.

Under the same tab, the “New & lost backlinks” graph provides details about the total number of backlinks gained and lost by the target URL over a period of time.

The combination of these particular features works very well for common backlink analytics, such as tracking backlink profile changes and identifying specific periods of link growth or decline.

Additional tools for checking backlink trends:

Supplemental readings:

Creating your toolbox

This is only a sample of tools you can use for your SEO analyses and there are plenty more, with their own unique strengths and capabilities, available to you. So make sure to do your research and play around to find what works.

And if you are to take away only one thing from this post, just remember this: as you work to build your own personal toolbox, what you choose to include should work best for your needs and the needs of your clients.

Theresa May Coughs Up New Housing and Energy Policies in Speech to Annual Conservative Party Conference

The UK Prime Minister’s speech to Conservative Party Conference

During a Conservative Party conference dominated by speculation over who is best suited to lead the Party in the future, Theresa May sought to use today’s speech as a platform to re-assert her own leadership credentials and to present her vision of a renewed “British dream”.

However, confronted by an intruder with a mocked-up P45 unemployment form and troubled by a persistent cough that not even the Chancellor’s throat sweets could remedy, this was undoubtedly a challenging experience for a Prime Minister under close scrutiny.

While the headlines tomorrow will focus on the series of unfortunate events that hampered the Prime Minister’s delivery, the speech itself contained several significant policy announcements aimed at progressing the Prime Minister’s ambition of leading a Government that offers a “voice to the voiceless”.

In a big shift away from the Cameron/Osborne focus on building homes for owner occupation, May promised a significant expansion in council housing with local authorities to be given new freedoms to build their own homes, while also being forced to assess local need and set targets to construct more housing in their area. Additionally, a further £2 billion will be invested to build affordable housing.

This policy demonstrates the importance that the Prime Minister places on reconnecting the Party to young voters, many of whom have struggled to afford housing and favoured Jeremy Corbyn’s Labour Party during the General Election. The eagle-eyed may also have spotted that the title for this year’s conference “Building a Country that Works for Everyone” contained a clue to the policy announcement to come – even if the slogan itself couldn’t make it to the end of the speech.

The Prime Minister confirmed that she would push ahead with the Conservative manifesto pledge to introduce legislation to cap energy prices, which many speculated had been set aside following the General Election. A draft Bill will be released next week setting out the Government’s framework for implementing this policy. This section of the speech was redacted in the version handed to journalists before the Prime Minister stood up, showing it was meant to be the ‘rabbit out of the hat moment’ that headline writers would focus on – sadly, for Theresa May, events ensured this was not meant to be.

Other policies announced include a review of the Mental Health Act by Professor Sir Simon Wessely aimed at addressing any injustices present in the current system, an extension of the free school programme and the introduction of an opt-out organ donation system in England.

By urging her Party to speak for “ordinary working people” and tailoring a policy platform to match, there are parallels between this speech and May’s initial address outside Downing Street last July. This was also evident in the tone of the speech, which was often of a personal nature.

In the later part of her speech, May received a standing ovation when arguing that “the test of a leader is how you react when tough times come upon you”. Faced with a challenging set of circumstances for a Prime Minister delivering a conference speech, May proved once again that she will continue to confront adversity head on.

If you would like any further information or detail, please do not hesitate to contact the Public and Corporate Affairs team.

MEANINGFUL WORK: BRINGING HOME THE IMPACT OF ADCOLOR

ADCOLOR exists to establish a community of diverse professionals to support and celebrate one another. Every year, those diverse professionals attend a conference full of the brightest, diverse and innovative minds in the industry. This year, a total of nine GSD&M employees attended, and they returned with meaningful, game-changing insights and inspiration. Along with our attendance, we were an incredibly proud sponsor and as such, wanted to create something as a little reminder of the change we have the power to make. These pins were sent home with every attendee:


I caught up with the folks who attended to see what they learned, so I’ll let the people at the forefront of diversity and inclusion do the talking.

How can the ad industry influence and inspire more work toward diversity in other industries and beyond?

  • Cara Maschler, account director: Our best efforts are those that strive for as many diverse voices as there are in the world. When we partner with related industries, it’s plain to see that a great idea can truly be cultivated from anywhere.
  • Max Rutherford, vendor diversity director: It is imperative to champion diversity and inclusion at our respective agencies and in work we do on behalf of our clients. It has to create an inclusive environment that embraces talent with diverse perspectives in order to deliver more groundbreaking solutions for clients.
  • Eric Knittel, associate creative director: The biggest thing the ad industry could do is lead by example. We are expert communicators, and we haven’t found a way to really start the conversation about unconscious bias.
  • Laura Guardalabene, designer: Advertising has a huge subconscious influence over the general population. The more we can reflect the diverse culture that is America, the more empathy we can create for disenfranchised communities.
  • Monica Vicens, strategy director: There is a great opportunity for us to educate our clients and push the envelope (ours and theirs) to embrace the people, lifestyles and attitudes that will drive brand growth.

What was your personal most important takeaway from ADCOLOR?

  • Ana Leen, account director: Rising stars in an organization are chosen by the leaders around them. If we want more diversity in leadership positions, we need to create the scaffolding for them to get there.
  • Kirya Francis, VP solutions/decision sciences: It is important not to leave your voice and experience at the door—it is critical in making better work for our clients as well as a better workplace. In general, the ad industry excels in branding diversity, but we have a little ways to go when it comes to embracing workforce and vendor diversity.
  • Shannon Moorman, VP talent acquisition: It’s incumbent upon us in the business to highlight the wins, the good and the bad, and create platforms of communication to galvanize the racial divide across this nation.
  • Candi Clem, analytics manager: ADCOLOR taught me a lesson that I will forever cherish: I am never alone. I have a tribe of brilliant, beautiful, diverse people who have my back. Even when I’m the only person in the room that looks like me, there are a legion of others with me in spirit. I don’t have to fight this fight on my own.


This industry has the power to cultivate change—and it must start where the work happens. These conversations must continue to take place inside and outside of agencies and brands, and although we have a ways to go, we should be incredibly proud and excited to have minds like these fighting for diversity in our industry.

Until next year, ADCOLOR. Here’s to progress.

Why This Feminist Weed Camp Isn’t Just For White Women

Marijuana cotton candy, flower crowns, and surprising diversity. Ganja Goddess Getaway is carving a niche in the $563 billion wellness tourism industry.

“The belly dancing class will start on the great lawn in five minutes,” announces a soothing female voice over the public address system. After a pause, she adds, “I love you.”

Read Full Story

Amazon reportedly building rival service to FedEx and UPS

The online retail giant is reportedly testing its own delivery service so it can reduce reliance on FedEx and UPS, reports Bloomberg. The trial program is said to be underway on the west coast before rolling out nationally. According to Bloomberg’s sources, Amazon is hoping its proprietary delivery services would mean it could make more of its products available for two-day delivery than it can using FedEx and UPS, as well as reduce congestion in its warehouses.

Read Full Story

The EU is taking Ireland to court over Apple tax deal worth $15.2 billion

The EU’s competition watchdog isn’t having it anymore when it comes to large multinationals using tax-minimization vehicles in EU countries to avoid paying taxes. Earlier today the European Commission levied a €250 million bill after it found that Amazon received illegal state aid from Luxembourg. Now the European Commission has announced it’s taking Ireland to court for “failure to recover illegal tax benefits from Apple” that are worth up to €13 billion (about $15.2 billion). In a statement EC Commissioner Margrethe Vestager said:

Read Full Story

Equifax’s former CEO just threw an employee under the bus

Today former Equifax CEO Richard Smith, who announced his retirement last week, is testifying before the House Energy and Commerce Committee. Lawmakers are grilling him about what exactly happened that led to the huge data breach that impacted 145.5 million people. In his opening statement, Smith said much of what we have already heard. He apologized for what happened and blamed a mixture of human and technological error.

Read Full Story