The company boasting the largest online collection of 3D models, Sketchfab, is taking a dive into virtual reality.
And to show off exactly what's possible with the new VR dynamic, the company has launched a showcase app called Sketchfab VR, which introduces users to a number of different 3D environments that can be explored in-depth using a VR headset.
But rather than act as a tool, the app is meant as a gateway into Sketchfab's bigger promise: to create a kind of YouTube of VR content created by users and embeddable on nearly any kind of site.
User-generated content (UGC), like customer photos and reviews, increases engagement, encourages shoppers to convert into buyers and boosts word-of-mouth marketing. While there are many ways to display UGC throughout your site to create interest and provide shoppers with real information from real buyers, here are four of the most effective and attention-grabbing uses of UGC in ecommerce (and actionable tips on how to emulate their impact).
BOOM! Shows Real Products on Real People
BOOM! cosmetics company gives customers the option to submit original photos with their product reviews. When a shopper sees these user-generated photos of real people using the product they are considering, they can quickly tell whether it meets their expectations.
User-generated photos coupled with customer reviews effectively communicate a comprehensive customer experience, allowing shoppers to connect on a personal level and imagine themselves using your product.
BOOM! also implements visual UGC throughout their site with a selfie contest and rotating customer photos on their homepage.
Encourage reviewers to get creative with their customer photos to showcase your product in engaging ways. If you are selling make-up, ask them to take a selfie before they go out. If you are selling arts and crafts materials, ask for pictures of a finished project.
MVMT Mixes UGC and Product Pictures
For jewellery and accessories, studio product pictures are great for highlighting design details, but they usually do not provide context. They leave shoppers wondering what the product will look like on them and how it will pair with other items in their closet.
MVMT addresses this concern by surrounding their studio product images with pictures from Instagram.
By mixing UGC in with product pictures and including an Instagram feed featuring relevant social on each product page, MVMT allows shoppers to better understand the proportions of the product and imagine how they would style it.
Use social pictures with relevant reference points such as a hand or additional accessories that help shoppers understand the actual size and color of the product.
Samuel Hubbard Features UGC in Home Page Design
Uncertainty is a big factor when seeing an online brand for the first time. Authentic customer reviews are a perfect way to reassure new shoppers about the quality of your products and the trustworthiness of your brand.
When displayed on the home page in a way that matches the overall site design and experience, they capture the attention of site visitors and nudge them to take the next step in the customer journey.
Samuel Hubbard features their customer reviews as a major part of their homepage design to showcase happy customers and encourage new site visitors to look further.
Feature detailed reviews that mention a specific use or benefit of the product such as gift giving or all-day use.
Campus Protein Measures Industry Pain Points
Each industry has its own unique customer pain points. Asking past buyers to rate specific metrics related to your industry as part of their product review provides shoppers with trustworthy and relevant information.
Campus Protein uses their review requests to get customers to weigh in on the flavor of their product. Flavor is a key component of purchase decisions when buying consumables online. By asking customers to rate the flavor of the product they bought, Campus Protein ensures that flavor is part of every customer review.
Industry-specific measurements enhance reviews and ensure that the most important information that can make or break a sale is communicated to new shoppers.
When deciding what quality you want reviewers to rate, consider your marketing needs as well as common customer pain points. What do you most want to know from past buyers? What do you most want to communicate to new shoppers?
User-generated content grabs shoppers’ attention, engages existing customers and increases sales. These are just four of the many ways to display UGC throughout your online store. Using these examples as inspiration will allow you to extract more value from your customers and make the online shopping experience more genuine.
When marketers like us create landing pages, write email copy, or design call-to-action buttons, it can be tempting to use our intuition to predict what will make people click and convert.
But basing marketing decisions off of a "feeling" can be pretty detrimental to results. Rather than relying on guesses or assumptions to make these decisions, you're much better off running conversion rate optimization (CRO) tests.
CRO testing can be valuable because different audiences behave, well, differently. Something that works for one company may not necessarily work for another. In fact, CRO experts hate the term "best practices" because it may not actually be the best practice for you.
But these tests can also be complex. If you're not careful, you could make incorrect assumptions about what people like and what makes them click -- decisions that could easily misinform other parts of your strategy.
One of the easier (and most common) types of CRO tests is called an A/B test. An A/B test simply tests one variable in a piece of marketing content against another, like a green call-to-action button versus a red one, to see which performs better.
So, what does it take to run an A/B test, exactly? Keep reading to learn what an A/B test is in a little more detail, followed by a full checklist for what marketers should do before, during, and after these tests. You'll want to bookmark this for your next one.
How A/B Tests Work
To run an A/B test, you need to create two different versions of one piece of content with changes to a single variable. Then, you'll show these two versions to two similarly sized audiences, and analyze which one performed better.
For example, let's say you want to see if moving a certain call-to-action button to the top of your homepage instead of keeping it in the sidebar will improve its conversion rate.
To A/B test this change, you'd create another, alternative web page that reflected that CTA placement change. The existing design -- or the "control" -- is Version A. Version B is the "challenger."
Image Credit: ConversionXL
Then, you'd test these two versions by showing each of them to a predetermined percentage of site visitors. (To learn more about A/B testing, download our free introductory guide here.)
Now, let's walk through the checklist for setting up, running, and measuring an A/B test.
Checklist for Running an A/B Test
Before the A/B Test
1) Pick one variable to test.
As you optimize your web pages and emails, you might find there are a number of variables you want to test. But to evaluate how effective a change is, you'll want to isolate one, single variable and measure its performance -- otherwise, you can't be sure which one was responsible for changes in performance. You can test more than one variable for a single web page or email -- just be sure you're testing them one at a time.
Look at the various elements in your marketing resources and their possible alternatives for design, wording, and layout. Other things you might test include email subject lines, sender names, and different ways to personalize your emails.
Keep in mind that even simple changes, like changing the image in your email or the words on your call-to-action button, can drive big improvements. In fact, these sorts of changes are usually easier to measure than the bigger ones.
Note: There are some times when it makes more sense to test multiple variables rather than a single variable. This is a process called multivariate testing. If you're wondering whether you should run an A/B test versus a multivariate test, here's a helpful article from Optimizely that compares the two.
2) Choose your goal.
Although you'll measure a number of metrics for every one test, choose a primary metric to focus on -- before you run the test. In fact, do it before you even set up the second variation. If you wait until afterward to think about which metrics are important to you, what your goals are, and how the changes you're proposing might affect user behavior, then you might not set up the test in the most effective way.
3) Set up your "control" and your "challenger."
Set up your unaltered version of whatever you're testing as your "control." If you're testing a web page, this is the unaltered web page as it exists already. If you're testing a landing page, this would be the landing page design and copy you would normally use.
From there, build a variation, or a "challenger" -- the website, landing page, or email you’ll test against your control. For example, if you're wondering whether including a testimonial on a landing page would make a difference, set up your control page with no testimonials. Then, create your variation with a testimonial.
4) Split your sample groups equally and randomly.
For tests where you have more control over the audience -- like with emails -- you need to test with two or more audiences that are equal in order to have conclusive results.
How you do this will vary depending on the A/B testing tool you use. If you're a HubSpot Enterprise customer conducting an A/B test on an email, for example, HubSpot will automatically split traffic to your variations so that each variation gets a random sampling of visitors.
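If you wanted to sketch the mechanics yourself, a random 50/50 split is straightforward. The following is a minimal illustration (the recipient addresses are hypothetical), showing why you shuffle before splitting: it makes each group a random sample, so a performance difference can be attributed to the variation rather than to who landed in which group.

```python
import random

def split_audience(recipients, seed=None):
    """Randomly split a recipient list into two equal-sized groups.

    Shuffling before splitting makes each group a random sample, so
    differences in results reflect the variation, not group makeup.
    """
    rng = random.Random(seed)
    shuffled = recipients[:]  # copy so the original list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]  # (control, challenger)

# Hypothetical 1,000-recipient list split into control and challenger.
recipients = [f"user{i}@example.com" for i in range(1000)]
control, challenger = split_audience(recipients, seed=42)
```

Fixing the seed keeps the split reproducible, which is handy when you need to audit which recipients saw which variation.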
5) Determine your sample size (if applicable).
How you determine your sample size will also vary depending on your A/B testing tool, as well as the type of A/B test you're running.
If you're A/B testing an email, you'll probably want to send an A/B test to a smaller portion of your list to get statistically significant results. Eventually, you'll pick a winner and send the winning variation on to the rest of the list. (Read this blog post for a more detailed guide on calculating an email A/B test's sample size.)
If you're a HubSpot Enterprise customer, you'll have some help determining the size of your sample group using a slider. It'll let you do a 50/50 A/B test of any sample size -- although all other sample splits require a list of at least 1,000 recipients.
If you're testing something that doesn't have a finite audience, like a web page, then how long you keep your test running will directly affect your sample size. You'll need to let your test run long enough to obtain a substantial number of views, otherwise it'll be hard to tell whether there was a statistically significant difference between the two variations.
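To make the sample-size question concrete, here is a sketch of the standard two-proportion formula, assuming you know your baseline conversion rate and the smallest absolute lift you care to detect. The default confidence and power values are common conventions, not requirements.

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, minimum_lift,
                              confidence=0.95, power=0.80):
    """Estimate recipients needed per variation to detect a given lift.

    baseline_rate: the control's expected conversion rate (e.g. 0.05).
    minimum_lift: the smallest absolute improvement worth detecting.
    Uses the standard two-proportion sample-size formula.
    """
    p1 = baseline_rate
    p2 = baseline_rate + minimum_lift
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 1-point lift over a 5% baseline takes thousands of
# recipients per variation; a 2-point lift takes far fewer.
n_small_lift = sample_size_per_variation(0.05, 0.01)
n_big_lift = sample_size_per_variation(0.05, 0.02)
```

The takeaway matches the text above: the smaller the effect you want to detect, the larger the sample you need before results mean anything.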
6) Decide how significant your results need to be.
Once you've picked your goal metric, think about how significant your results need to be to justify choosing one variation over another. Statistical significance is a super important part of A/B testing process that's often misunderstood. If you need a refresher on statistical significance from a marketing standpoint, I recommend reading this blog post.
The higher the percentage of your confidence level, the more sure you can be about your results. In most cases, you'll want a confidence level of 95% minimum -- preferably even 98% -- especially if it was a time-intensive experiment to set up. However, sometimes it might make sense to use a lower confidence level if you don't need the test to be as stringent.
Matt Rheault, a senior software engineer at HubSpot, likes to think of statistical significance like placing a bet. What odds are you comfortable placing a bet on? Saying "I'm 80% sure this is the right design and I'm willing to bet everything on it" is similar to running an A/B test to 80% significance and then declaring a winner.
Rheault also says you’ll likely want a higher confidence threshold when testing for something that only slightly improves conversion rate. Why? Because random variance is more likely to play a bigger role.
"An example where we could feel safer lowering our confidence threshold is an experiment that will likely improve conversion rate by 10% or more, such as a redesigned hero section,” he explained. "The takeaway here is that the more radical the change, the less scientific we need to be process-wise. The more uber-specific the change (button color, micro copy, etc.), the more scientific we should be because the change is less likely to have a large and noticeable impact on conversion rate."
7) Make sure you're only running one test at a time on any campaign.
Testing more than one thing for a single campaign -- even if it's not on the same exact asset -- can do a number on your results. For example, if you A/B test an email campaign that directs to a landing page at the same time that you’re A/B testing that landing page ... how can you know which change caused the increase in leads?
During the A/B Test
8) Use an A/B testing tool.
To run an A/B test on your website or in an email, you'll need to use an A/B testing tool. If you're a HubSpot Enterprise customer, the HubSpot software has features that let you A/B test emails (learn how here), calls-to-action (learn how here), and landing pages (learn how here).
For non-HubSpot Enterprise customers, other options include Google Analytics' Experiments, which lets you A/B test up to 10 full versions of a single web page and compare their performance using a random sample of users.
9) Test both variations simultaneously.
Timing plays a significant role in your marketing campaign’s results, whether it's time of day, day of the week, or month of the year. If you were to run Version A during one month and Version B a month later, how would you know whether the performance change was caused by the different design or the different month?
When you run A/B tests, you'll need to run the two variations at the same time, otherwise you may be left second-guessing your results.
The only exception here is if you're testing timing itself, like finding the optimal times for sending out emails. This is a great thing to test because depending on what your business offers and who your subscribers are, the optimal time for subscriber engagement can vary significantly by industry and target market.
10) Run the test long enough to get substantial results.
Again, you'll want to make sure that you let your test run long enough in order to obtain a substantial sample size. Otherwise, it'll be hard to tell whether there was a statistically significant difference between the two variations.
How long is long enough? Depending on your company and how you execute the A/B test, getting statistically significant results could happen in hours ... or days ... or weeks. A big part of how long it takes to get statistically significant results is how much traffic you get -- so if your business doesn't get a lot of traffic to your website, then it'll take much longer for you to run an A/B test. In theory, you shouldn't restrict the time in which you're gathering results. (Read this blog post to learn more about sample size and timing.)
11) Ask for feedback from real users.
A/B testing has a lot to do with quantitative data ... but that won't necessarily help you understand why people take certain actions over others. While you're running your A/B test, why not collect qualitative feedback from real users?
One of the best ways to ask people for their opinions is through a survey or poll. You might add an exit survey on your site that asks visitors why they didn't click on a certain CTA, or one on your thank-you pages that asks visitors why they clicked a button or filled out a form.
You might find, for example, that a lot of people clicked on a call-to-action leading them to an ebook, but once they saw the price, they didn't convert. That kind of information will give you a lot of insight into why your users are behaving in certain ways.
After the A/B Test
12) Focus on your goal metric.
Again, although you'll be measuring multiple metrics, keep your focus on that primary goal metric when you do your analysis.
For example, if you tested two variations of an email and chose leads as your primary metric, don’t get caught up on open rate or clickthrough rate. You might see a high clickthrough rate and poor conversion rates, in which case you might end up choosing the variation that had a lower clickthrough rate in the end.
13) Measure the significance of your results using our A/B testing calculator.
Now that you've determined which variation performs the best, it's time to determine whether your results are statistically significant. In other words, are they enough to justify a change?
To find out, you'll need to conduct a test of statistical significance. You could do that manually ... or you could just plug in the results from your experiment to our free A/B testing calculator. For each variation you tested, you'll be prompted to input the total number of tries, like emails sent or impressions seen. Then, enter the number of goals it completed -- generally you'll look at clicks, but this could also be other types of conversions.
The calculator will spit out the confidence level your data produces for the winning variation. Then, measure that number against the value you chose to determine statistical significance.
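If you'd rather see what such a calculator does under the hood, the core is a plain two-proportion z-test. The sketch below (the counts are made up for illustration) converts each variation's tries and goals into a two-sided confidence level.

```python
from statistics import NormalDist

def ab_confidence(control_tries, control_goals, variant_tries, variant_goals):
    """Return the confidence level (0-1) that two conversion rates differ.

    A plain two-proportion z-test -- the same calculation an A/B
    testing calculator performs on tries and goals.
    """
    p1 = control_goals / control_tries
    p2 = variant_goals / variant_tries
    pooled = (control_goals + variant_goals) / (control_tries + variant_tries)
    se = (pooled * (1 - pooled) * (1 / control_tries + 1 / variant_tries)) ** 0.5
    z = abs(p2 - p1) / se
    return 1 - 2 * (1 - NormalDist().cdf(z))  # two-sided confidence

# Hypothetical results: control converts 100/1000, variant 130/1000.
confidence = ab_confidence(1000, 100, 1000, 130)
```

With these example numbers the confidence comes out just above the 95% bar discussed in step 6; shrink the gap or the sample and it drops quickly.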
14) Take action based on your results.
If one variation is statistically better than the other, then you have a winner. Complete your test by disabling the losing variation in your A/B testing tool.
If neither variation is statistically better, then you've just learned that the variable you tested didn't impact results, and you'll have to mark the test as inconclusive. In this case, stick with the original variation -- or run another test. You can use the failed data to help you figure out a new iteration on your new test.
While A/B tests help you impact results on a case-by-case basis, you can also take the lessons you learn from each test and apply them to future efforts. For example, if you've conducted A/B tests in your email marketing and have repeatedly found that using numbers in email subject lines generates better clickthrough rates, then you might want to consider using that tactic in more of your emails.
15) Plan your next test.
The A/B test you just finished may have helped you discover a new way to make your marketing content more effective -- but don't stop there. There’s always room for more optimization.
You can even try conducting an A/B test on another feature of the same web page or email you just did a test on. For example, if you just tested a headline on a landing page, why not do a new test on body copy? Or color scheme? Or images? Always keep an eye out for opportunities to increase conversion rates and leads.
What else would you add to this checklist for running an A/B test? Share with us in the comments.
Skill Up, Start Up & Scale Up: 5 Online Talks on Career-Building for Marketing & Sales Professionals [Video Series]
“Always be learning."
It's a simple yet powerful statement for anyone in today's workforce. Think about it: When you were 22, did you imagine that you'd be where you are today? What experiences have you had? What have you learned? And what do you still want to learn?
From the moment we enter the working world, we are faced with challenges -- solving complex problems, growing a team, improving how our companies do business -- that require us to continually grow our existing skill sets. And with business technologies continuing to evolve, what you knew yesterday might not be enough to get you through the challenges that arise tomorrow.
In the spirit of learning and growing, HubSpot Academy is running World Certification Week: A week of inspiring and educational talks broadcast on Google Hangouts, where you can tune into topics on:
- Career development in sales and marketing
- Growing from a startup to a scalable business
- Training and skilling up in a fast-moving world
- How to create and use educational content
- Understanding the full customer lifecycle
Join HubSpot CEO Brian Halligan and eight other global leaders over five days -- from May 23rd – 27th -- for a 30-minute Hangout a day.
Each hangout will only be available for 24 hours during the weeklong event, so don't miss out on all the knowledge. Click here to sign up.
P.S. Don’t forget to get certified while you wait!
It was supposed to be a high-profile showcase for new journalism from new technology with the new-media President.
Instead it became the most high-profile example yet of how live streaming can go wrong.
BuzzFeed's livestream interview on Monday with President Barack Obama froze on Facebook, leaving the site with only a short introductory clip of the interviewer.
BuzzFeed ended up re-directing Facebook viewers to watch the livestream on YouTube by pasting a link to the latter in the comments. That video has already logged more than 100,000 views.
The origin of the problem with Facebook's stream was not immediately clear, although YouTube's seemed to work well. The two companies have developed a strong rivalry as leaders in online video, arguing over viewing metrics and battling for advertiser money.
As a part of our process of determining how we will work with clients, we typically conduct an assessment of their overall demand generation strategy and execution. We look at what they’re doing to generate and nurture leads, how they’re utilizing their website and other digital (and non-digital) communication channels and how that all aligns with and connects to their sales approach.
For those that have not implemented inbound marketing or sales development initiatives, the assessment provides an opportunity to create a clear roadmap: what issues, if any, need to be addressed; how best to address them; and what the quickest path to impact would be.
For companies that have been implementing one or both of these approaches, it’s an opportunity for a nice check-up to identify opportunities to enhance their efforts.
It’s no surprise given the increasing popularity and maturity of inbound marketing that an increasing percentage of these assessments are taking place with companies that actively engage in inbound efforts.
Over the last couple of months, we’ve conducted several assessments with companies that have been engaged in inbound marketing for at least three years, with some that have been doing so for as long as six. In all of these cases, these companies were getting good results, but had found that these results were plateauing or declining.
In the process of reviewing their efforts, we identified some common themes that are contributing to declining results, despite continued investment. As one of our clients said entering into the assessment, “We wonder if we’ve gotten everything we can get from our inbound efforts, and if it’s time to find something else.”
My sense is that she is not alone. I’m increasingly hearing the grumbles of frustration from inbound practitioners. The early (and easy) results from being one of few have disappeared and the playing field is noisier than ever. If you’re finding your results plateauing, be sure that you’re not falling victim to one of these themes.
1) Buyer Personas
I have to admit that this one surprised me. I’m used to talking about buyer personas with companies that aren’t implementing inbound. I figured that for multi-year veterans, personas would be a given. The companies we assessed fell into two camps on this issue:
- They did not have written personas.
- The written personas they had were vague and had fallen out of date.
I get it, creating personas is hard. Keeping them up-to-date is even harder. But they are absolutely crucial if you want to gain and maintain traction.
Creating personas requires more than just a couple of conversations and writing out a paragraph or two describing who they are. Effective personas combine two elements: a clear ideal client profile and an in-depth review of the key people you want to talk with.
When we create personas for our clients, we work to identify three types of personas:
- Primary personas: These are the decision makers or key players involved in your sale.
- Secondary personas: These are the people who may or may not be directly involved in a sales/buying process, but exert significant influence.
- Negative personas: These are the people who you want to be sure are not in a lead position when dealing with your solutions. For example, we worked with a company that sold HR information systems and in their case, the IT manager was the negative persona. If the interaction was perceived as an IT issue, rather than an HR issue, it represented problems for their efforts.
Regardless of how you create personas, the objective should be to clearly define:
- What the clear identifiers are for each persona.
- The challenges they deal with (from their perspective).
- Their priorities.
- Their experience in dealing with your products/services.
- The important questions they seek to answer on an ongoing basis.
When completed, it’s easy to feel like you’re done with personas. Don’t make that mistake. Personas are never done. They should be constantly tweaked and updated. At a minimum, you should review your personas on an annual basis to ensure the information within them is still relevant and insightful.
2) Website Strategy

One of my favorite byproducts of talking about inbound marketing with businesses is that it naturally changes how executives think about their website. Rather than a static, digital brochure filled with we-do’s, the site's real value emerges.
For anyone who has implemented a new inbound effort, you know that there’s a high probability you’ll make significant changes or even completely redesign your site to support the effort.
As with personas, the danger is when you feel like you’re done with your website. A common theme we’ve seen with inbound veterans is that they fall back on old habits with their website.
As their companies and offerings evolved, they continued to add material to the website, without thinking about the strategy behind what they were doing. As a result, the sites became quite complicated and confusing.
We could see by looking at how the site was originally crafted that many best practices were supported. The layout was clean. The conversion paths were clear. But over time, the site became overloaded and confusing.
Please note, I am in no way saying that you shouldn’t change your website. Quite the contrary. You must be constantly making changes to your website. If you’re not changing something that matters on at least a monthly basis, you’re not doing enough.
It’s how you manage the changes that matters. Today, when considering how to manage your website going forward, you must build it with the assumption that it’s going to be constantly changing. You’ll want to test and adjust layouts, colors, and design elements; not to mention all of the changes you’ll need to make as your company and offerings evolve. That doesn’t mean your site should become the digital equivalent of a Rube Goldberg device.
Remember the battle cry of your website visitor: Don’t Make Me Think!
3) Content Not Aligned With Buyer’s Journey
If you’re looking to continually gain traction and enhance results of your inbound efforts, you must embrace the fact that the relationship between your website and your visitors needs to be highly personalized. That means the message on every page must align with the person visiting -- their persona and where they are in their journey.
One of my favorite tools for managing website content is what we call a content map. This map is a spreadsheet that lists every material page and asset (site pages, blogs, landing pages, graphics and CTAs) and identifies:
- Which persona(s) it is built for
- Which stage of the buyer’s journey it’s targeted to
- What questions it’s designed to answer or actions it's designed to stimulate
- What device (mobile/desktop) the visitor is most likely to be using
When content is mapped in such a manner, you can be sure that you are addressing the important points on your visitors’ minds, and you’ll have the data you need to lead them through a well thought out conversion path.
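A content map is usually kept in a spreadsheet, but the same idea can be sketched as structured data, which also makes gap-checking mechanical. The rows and field names below are purely illustrative, not a prescribed schema.

```python
# Hypothetical content-map rows; assets and field names are illustrative.
content_map = [
    {"asset": "/blog/choosing-a-crm", "type": "blog",
     "persona": "sales-manager", "journey_stage": "awareness",
     "question": "Do I need a CRM?", "device": "mobile"},
    {"asset": "/landing/crm-comparison-guide", "type": "landing page",
     "persona": "sales-manager", "journey_stage": "consideration",
     "question": "Which CRM fits my team?", "device": "desktop"},
]

def coverage_gaps(rows, personas, stages):
    """List (persona, stage) combinations with no mapped content."""
    covered = {(r["persona"], r["journey_stage"]) for r in rows}
    return [(p, s) for p in personas for s in stages if (p, s) not in covered]

gaps = coverage_gaps(content_map, ["sales-manager"],
                     ["awareness", "consideration", "decision"])
```

Run against the sample rows, this flags that the sales-manager persona has no decision-stage content -- exactly the kind of hole in a conversion path the map is meant to expose.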
4) Poor Nurturing Strategy
Inbound marketing is not a quick fix. Too often I see people utilizing inbound strategies to generate leads and then apply old school sales tactics to a bunch of people who aren’t ready or in a position to buy anything. Then they complain that inbound doesn’t work.
Our assessments were no exception. Effective content gives you the advantage to be relevant to your market before they’re in the market to buy. This is a HUGE advantage, if only you capitalize on it.
Noted marketing expert Seth Godin often talks about how attention from your desired market is the most valuable asset any business can have (it’s too bad there’s no spot on the balance sheet to report on it). Great content is the vehicle for building that attention.
But remember that people download things for their reasons, not yours. They most often download because they're seeking information or knowledge on something that matters to them, not because they want or need to buy anything.
This is where nurturing comes in. An effective lead nurturing strategy cultivates the attention you’ve created, leads prospects to understand their problems better and highlights the value you create for them when they engage. Done correctly, nurturing accelerates the sales cycle, increases the average sale value and improves your win rates (now that’s what I call a real Triple Crown!).
Yet despite its clear value, very few do it well (if they do it at all). Nurturing is more than just sending emails shilling your webinars and other download offers. Lead nurturing requires a well-thought-out plan, a high degree of personalization and the discipline to sustain it.
5) Not Utilizing Data to Drive Decisions
My absolute favorite attribute of inbound marketing is the data you are able to collect and utilize to assess progress and to make decisions going forward. Yet despite the data available to them, our experience is that very few companies are actually utilizing data to drive decisions.
It is absolutely critical that you develop what we like to call data rhythms. When we’re managing an inbound program, we break metrics into weekly, monthly and quarterly checks (and certainly there are some companies that should have daily rhythms with some metrics).
On a quarterly basis, we’re using data to set our course. We think of these quarterly rhythms as waypoints on our journey for long-term scalable growth. We set our key objectives and themes, and we review and update our service level agreements (SLAs).
Every month, we use the data to track progress against those objectives. More importantly, we dig deep into the data to determine what tests or experiments we want to run. Which pages are getting good traffic, but aren’t converting? What’s converting, but not getting traffic? What can we learn from that? Additionally, we’ll run experiments like testing CTAs in a different location, running an off-beat PPC test and so on.
We’re always running tests and experiments. Some of these are designed specifically to improve performance. Other times, we’re just looking to gain insights. We may move a CTA, or where some key content is, so that we can watch how people interact. We then use that knowledge to drive other decisions.
On a weekly basis we’re watching for emerging trends and seeing how experiments are playing out. Not a week goes by that we aren’t tweaking or adjusting something that was done previously.
By taking such an approach we are able to truly capitalize on the fundamental value of inbound marketing. Every day, week and month we are building our marketing asset and optimizing performance.
By looking to constantly iterate and continuously make small progress, we build significant advances over time and avoid the plateaus and pitfalls associated with other approaches.