
Re: ONLINE-ADS>> Banner Ad Placement Study

Mark J. Welch, Esq. (markwelch_at_ca-probate.com)
Sat, 26 Apr 1997 15:37:21 -0700

Okay, I have been reading about the "Banner Ad Placement Study"
for a week now, and I have finally taken the time to go look at the
(scant) information posted at http://www.webreference.com/dev/banners/

My response: "hogwash." While the information from this
study raises some important questions and the raw result (an
improvement of 200%) should spark more serious study, this
is NOT a scientific analysis and the results should NOT be
accepted at face value.

Since the study results were apparently posted on the web by
someone OTHER than the authors, I am not even sure how
the authors believe the results should be viewed (indeed,
the web site describing their results does not include any
way for interested persons to respond to the authors, by email,
phone, or U.S. mail).

The study is fundamentally flawed for several reasons. First, the
"experiment" was done on sites that have traditionally carried a
468x60 banner ad at the top of the page. Second, two major
changes were combined: a change in ad size (from 468x60 to
125x125) and a change in placement (from top of the page to
lower right corner of the first screen). Third, the study
measured only a single outcome (click-throughs), ignoring the
most likely source of spurious variation (user error).

(1) Any change is likely to result in increased user response.
Measuring the effect of only ONE of many possible changes,
and measuring for only a week or two, is unlikely to
establish whether the effect is due to the specific change
or merely to "change in general."

(2) Changing the banner size dramatically -- in this case,
reducing it from 468x60 (28,080 pixels) to 125x125
(15,625 pixels) -- is likely to have a significant effect because of
the faster loading time for a smaller banner: it is more
likely to be seen, period. Apparently, no effort was made
to measure the change in click-through if a smaller ad
(such as 400x40 or 16,000 pixels) were displayed at the
top of the page (e.g. measuring ONLY a change in size),
nor was any effort made to measure the effect of ONLY
moving the ad (e.g. displaying the same 468x60 ad in the
lower right corner of the screen).
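
For illustration only, here is a minimal sketch -- in modern
Python, obviously not anything the study used -- of how the two
changes could be separated with a simple 2x2 test, so that size
and placement are each measured on their own. The serve_ad() and
record_click() hooks are hypothetical names of my own:

    import random

    # Hypothetical 2x2 factorial test: vary size and placement
    # independently so each effect can be measured on its own.
    SIZES = ["468x60", "125x125"]
    PLACEMENTS = ["top", "lower-right"]

    # Impression and click tallies for each (size, placement) cell.
    tallies = {(s, p): {"views": 0, "clicks": 0}
               for s in SIZES for p in PLACEMENTS}

    def serve_ad():
        """Randomly assign a page view to one of the four cells."""
        cell = (random.choice(SIZES), random.choice(PLACEMENTS))
        tallies[cell]["views"] += 1
        return cell

    def record_click(cell):
        """Credit a click-through to the cell that served the ad."""
        tallies[cell]["clicks"] += 1

    # Comparing "125x125 top" vs. "468x60 top" isolates the size
    # change; "468x60 lower-right" vs. "468x60 top" isolates the
    # placement change -- the two comparisons the study never made.
    for _ in range(4000):
        serve_ad()
    for cell in sorted(tallies):
        print(cell, tallies[cell]["views"])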

Also, given the likely propensity of users to scroll down
before a top-of-screen ad is fully loaded, it is obviously
important to measure the effect of simply moving the ad
down; indeed, the study's authors did examine this,
measuring the effect of moving the ad from the top
of the page to a position one-third of the way down the
monitor screen (so that the ad appears below some
"content" information on the page).

(3) Moving an ad to the lower right corner of the screen -- as
the study's authors note, "next to the right scroll bar" -- is
highly likely to result in unintended click-throughs, as users
who intended to scroll down accidentally click on the ad
a few pixels to the left. These spurious and unintended
click-throughs COULD easily be measured in two ways:
first, by using an "image map" in the ad and examining
whether an unusual ratio of clicks occurs at the right edge
of the image; and second (and more importantly) by
measuring something other than click-throughs --
specifically, the PERSISTENCE and DEPTH of the visit
to the site being advertised.
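
To make the first suggestion concrete: assuming the server logs
the x,y coordinate that an image map reports with each click (and
picking an arbitrary 10-pixel "edge" band -- my number, not the
study's), the check might look like this:

    # Clicks on a 125x125 image-map ad, logged as (x, y) coordinates;
    # x grows to the right, toward the scroll bar.
    AD_WIDTH = 125
    EDGE_BAND = 10  # arbitrary: the rightmost 10 pixels

    def right_edge_ratio(clicks):
        """Fraction of clicks in the band nearest the scroll bar.

        Evenly spread clicks would put roughly EDGE_BAND / AD_WIDTH
        (about 8%) of them here; a much higher ratio suggests users
        who aimed for the scroll bar and hit the ad by mistake.
        """
        edge = sum(1 for (x, y) in clicks if x >= AD_WIDTH - EDGE_BAND)
        return edge / len(clicks)

    # Example: 3 of 4 clicks hug the right edge -- far above the
    # ~8% expected by chance.
    sample = [(120, 30), (118, 90), (64, 50), (124, 12)]
    print(f"right-edge click ratio: {right_edge_ratio(sample):.0%}")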

I am NOT suggesting that this study is useless or
meaningless, nor am I suggesting that the authors did
not provide a very useful service. Clearly, a lot of work
was done and a very significant (even dramatic) change
was discovered. However, the meaning of that change
is open to substantial doubt, and therefore this study is
best viewed as a reason to examine more thoroughly
the effect of these changes.

I also note that the results of the study are based on
a very small pool of advertising impressions, well
under 10,000 impressions in each test and under 500
adviews in some tests. A handful of clicks, or a
minor distraction (or a hiccup in the Internet), could
cause dramatic swings when the sample is so small.
At the Web Advertising '97 conference, several
speakers suggested that at least 100,000 and perhaps
as many as 250,000 adviews are required before the
results of a test campaign can be viewed as reliable.
(Of course, one cannot fault the study's authors if
they simply did not have access to larger sites to
experiment with advertising -- few publishers or
advertisers will be willing to let students meddle
with their primary source of operating revenue.)
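
To see why such small pools are treacherous, here is a
back-of-the-envelope check, assuming clicks behave like
independent coin flips (a normal-approximation confidence
interval; the 2% base click rate is my own illustrative figure,
not the study's):

    import math

    def ctr_interval(clicks, views, z=1.96):
        """95% confidence interval for a click rate
        (normal approximation to the binomial)."""
        p = clicks / views
        half = z * math.sqrt(p * (1 - p) / views)
        return p - half, p + half

    # Assume a 2% underlying click rate (illustrative only).
    for views in (500, 10000, 250000):
        clicks = round(0.02 * views)
        lo, hi = ctr_interval(clicks, views)
        print(f"{views:>7} adviews: 95% interval "
              f"{lo:.1%} .. {hi:.1%}")

At 500 adviews the interval spans roughly 0.8% to 3.2% -- wide
enough that even a "doubled" click rate can be pure noise -- while
at 250,000 adviews it narrows to about 1.9% to 2.1%.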

-- Mark J. Welch (510) 847-2026 http://www.ca-probate.com/
-- Web Site Banner Ads (Networks, Brokers, Exchanges, Software, PSAs):
-- http://www.ca-probate.com/comm_net.htm
-- Web Counters: http://www.ca-probate.com/counter.htm

