Nathaniel Ward

That’s not a story you’re telling

Marketers should tell stories. That’s the best way to engage our customers. But too often, we create content that’s not a story—there’s no protagonist, no drama.

Paul VanDeCar elaborates:

But the “stories” on their websites weren’t so much stories as statements of feeling or timelines or sequences of events. Those all have their place, but they’re not stories, and they typically don’t grab people in the way stories do.


Use psychology to help your customers relieve mental burdens

Here’s some fascinating behavioral research:

Through the experiments, the researchers homed in on a hypothesis: People appear wired to incur a significant physical cost to eliminate a mental burden.

In particular, Dr. Rosenbaum said, people are seeking ways to limit the burden to their “working memory,” a critical but highly limited mental resource that people use to perform immediate tasks. By picking up the bucket earlier, the subjects were eliminating the need to remember to do it later. In essence, they were freeing their brains to focus on other potential tasks.

Savvy marketers will use this knowledge to help their customers relieve mental burdens.



The case against marking form fields as required

To boost response to online forms, Anthony at UX Movement suggests indicating which fields are optional rather than which are required:

Marking required fields enable [sic] users to do the bare minimum to complete your form. They’re going to put more importance on required fields and fill those out first while ignoring the optional ones. Why would they spend time on optional fields if they can fill out what’s required and move on? However, if you use voluntary over-disclosure to your advantage and mark optional fields only, users won’t feel the need to take shortcuts.

Worth a test.
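If you want to see the idea in miniature, here’s a quick sketch. The field names and helper function are hypothetical, not from the article; the point is simply that only the optional fields get flagged:

```python
# Hypothetical sketch: flag only the optional fields, per the quoted advice.
# Field names are invented for illustration.
FIELDS = [
    ("Name", True),     # (label, required?)
    ("E-mail", True),
    ("Phone", False),
    ("Company", False),
]

def field_label(label: str, required: bool) -> str:
    """Mark optional fields explicitly; leave required fields unmarked."""
    return label if required else f"{label} (optional)"

for label, required in FIELDS:
    print(field_label(label, required))
# Name
# E-mail
# Phone (optional)
# Company (optional)
```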


The three things you should do before you run an online test

A/B testing is a powerful way to improve your online marketing.

But more important than any single improvement is what you learn from a test that you can apply in the future. That means the process behind your test is critical.

You should take these three basic steps before running any test:

1. Choose what element of your program you want to improve

For example, you might want to strengthen your e-mail newsletter or your donation page.

2. Identify how you will measure success

You don’t run a test just to see what happens. You’re trying to improve something. What is that something?

So if you’re optimizing your newsletter, what’s the goal of the newsletter? To drive someone to your website, perhaps? Or what’s the goal of your donation form? To capture the most gifts or to capture the most revenue?

You will then measure your test results based on this goal.

3. Develop a hypothesis about how you will improve that measure

This step is critical. With a clear hypothesis—“more links in my newsletter will drive more traffic to the site,” for example, or “less clutter on the donation page will lead to more gifts”—you have a testable proposition. Your test will either confirm or reject your hypothesis, and you can apply that lesson in the future.
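To make steps 2 and 3 concrete, here’s a minimal sketch in Python. All the figures are invented for illustration, and the two-proportion z-test is just one standard way to judge a conversion-rate test, not something prescribed above:

```python
# Minimal sketch: state the goal metric and hypothesis up front, then judge
# the test against that one metric. All figures below are invented.
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in two conversion rates (z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Step 2: goal metric = gifts per visitor on the donation page.
# Step 3: hypothesis = "less clutter on the donation page will lead to more gifts."
p = two_proportion_p_value(conv_a=120, n_a=4000, conv_b=155, n_b=4000)
print(f"p-value: {p:.3f}")  # ~0.032: the decluttered page converted better
# A small p-value supports the hypothesis; a large one means you can't confirm it.
```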


In defense of flexible office space

The debate between fully open and fully enclosed workspaces misses the point, David Craig argues. What really matters is flexibility:

The bigger flaw, though, in recent criticisms of open workplaces is the underlying idea that there’s only one choice: open or enclosed. Work is invariably a combination of individual work, collaboration, coördination, creativity, and other things, all of which can take a variety of forms, sometimes in just one person in one day. As research done by CannonDesign with 14 organizations over the past year has shown, the average employee does want fewer distractions, but they also want 35% more frequent interactions within their teams; they want more energy and buzz in the workplace than less, but they also want the flexibility to escape to a quiet place from time to time. What they definitely don’t want is one space that’s just open or just enclosed.

This is very much in line with what Jason Fried and David Heinemeier Hansson write in their book Remote.


Clarity trumps design

Jason Fried explains why he prefers simple, text-based web designs to slick interfaces:

None of which is to say that a text-heavy design is the right solution for everyone. But I’ve always found it interesting that some of the most popular sites on the Web–Amazon, eBay, Craigslist, Wikipedia, to name a few–are often very heavy on the text and very light on the imagery. These sites won’t win any design awards, but they seem to communicate very clearly to their intended audience.

At the end of the day, you’re designing for your customers, not for other designers.


How Big Data can lead you to make worse decisions

The New York Times:

Social media and Big Data, the term du jour for the collection of vast troves of information that can instantaneously be synthesized, are supposed to help us make smarter, faster decisions. It seems as if just about every C.E.O. of a global company these days is talking about how Big Data is going to transform their business. But with increasing frequency, it may be leading to flawed, panic-induced conclusions, often by ascribing too much value to a certain data point or by rushing to make a decision because the feedback is available so quickly. This digital river of information is turning normally level-headed decision-makers into hypersensitive, reactive neurotics.

One big danger is in using the canned reports that come with e-mail providers or services like Facebook. These reports may not align with your actual goals and can lead you astray. E-mail programs, for example, usually measure the success of a message by open rate—even though open rate usually isn’t the goal of your message. Facebook’s reports on the likes and shares generated by a link you posted also won’t be of much help if your goal is driving traffic to that link.
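As a concrete sketch (the figures are invented), compare what a canned report leads with against the metric that actually matches a traffic goal:

```python
# Hypothetical e-mail campaign figures, invented for illustration.
sent, opened, clicked = 10_000, 2_400, 180

open_rate = opened / sent    # what the canned report emphasizes
click_rate = clicked / sent  # what matches a goal of driving traffic
print(f"open rate:  {open_rate:.1%}")   # 24.0%
print(f"click rate: {click_rate:.1%}")  # 1.8%
# A message could "win" on open rate while losing on the metric you care about.
```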

Data should inform your decisions and allow you to confirm or disconfirm hypotheses. It shouldn’t make your decisions for you.

Have you ever been led astray by incomplete or poorly represented data?


Stop measuring yourself against industry benchmarks

Are you basing your online marketing plans on the latest benchmark study? What on earth for?

A.G. Lafley and Roger Martin explain why this is foolish in Playing to Win:

Every industry has tools and practices that become widespread and generic. Some organizations define strategy as benchmarking against competition and then doing the same set of activities but more effectively. Sameness isn’t strategy. It is a recipe for mediocrity.

Benchmark studies can be interesting sources of inspiration and ideas. But they’re not a how-to manual, and you certainly shouldn’t measure yourself against them. You should be doing what’s best for your audience, not aping what your competitors are doing with theirs.

As Flint McGlaughlin puts it, “best practices on the internet are typically pooled ignorance.”


I’ve probably made all of these A/B testing mistakes

Peep Laja identifies eleven common A/B testing mistakes (his headline says twelve, but the article is missing a ninth item):

  1. A/B tests are called early
  2. Tests are not run for full weeks
  3. A/B split testing is done even when they don’t have traffic (or conversions)
  4. Tests are not based on a hypothesis
  5. Test data is not sent to Google Analytics
  6. Precious time and traffic are wasted on stupid tests
  7. They give up after the first test fails
  8. They don’t understand false positives
  9. They’re running multiple tests at the same time with overlapping traffic
  10. They’re ignoring small gains
  11. They’re not running tests at all times

In my experience, the first, fourth, and tenth mistakes are easiest to make. I’ve made them myself in my impatience to get a result, my desire to just “try something,” or my desire for a big lift.

But cutting corners to get a big lift as quickly as possible doesn’t teach you anything you can use in the future—and learning is the most valuable takeaway from any test.
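One way to guard against the first mistake is to estimate the required sample size before launch and commit to running until you reach it (in full weeks, per the second mistake). Here’s a rough sketch using a standard two-proportion approximation; the baseline and target lift are invented:

```python
# Rough guard against calling tests early: size the test before it starts.
# Standard approximation; baseline and target lift below are invented.
from math import ceil

def sample_size_per_variant(base_rate: float, rel_lift: float) -> int:
    """Approximate visitors per variant for 5% two-sided alpha and 80% power."""
    z_alpha, z_power = 1.96, 0.84
    p1, p2 = base_rate, base_rate * (1 + rel_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(((z_alpha + z_power) ** 2 * variance) / (p1 - p2) ** 2)

# e.g., a 3% baseline conversion rate, hoping to detect a 20% relative lift:
print(sample_size_per_variant(0.03, 0.20))  # ~13,900 visitors per variant
```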

Read the whole thing.

What mistakes have you made when testing?