How do users interact with SERPs on mobile devices?


As search becomes increasingly mobile, approaching a majority of monthly search visits, how we think about user engagement on search results pages needs to move beyond the old desktop paradigm of the Golden Triangle (which suggests attention concentrates in the upper left and decays down and to the right).
Mobile search use cases and device features change how users distribute their attention on a search results page. This not only changes CTR curves for mobile, and the relative value of each position, but also changes how search engines evaluate usage metrics and satisfaction. This is compounded by the presence of Knowledge Graph results (entity results, panels, and carousels) and Instant Answers (weather, scores, etc.), which may satisfy a user's query without logging a click or scroll.

Collecting Mobile Usage Data
Google conducted a study with Emory University to look at how gaze, viewport, and page metrics correlate with each other and with relevance and satisfaction. In particular, they looked at what could be inferred from viewport metrics alone.

User behavior was measured in three ways:

Gaze Metrics – Traditional eye tracking, monitoring where the eye looks and how long it dwells there. In particular, tracking how users look at Knowledge Graph and Instant Answer results, both relevant and irrelevant ones. This yields a gaze distribution by ranking position.
Viewport Metrics – The viewport is the visible portion of a webpage on the mobile device's screen. This measures how long, in seconds, a feature on the page remains visible, including Knowledge Graph and Instant Answer results, along with a distribution by rank position.
Page Metrics – Traditional analytics metrics, such as time on page and number of scrolls. A user's satisfaction rating can also be collected.
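To make the viewport metric concrete, here is a minimal sketch of how visible time per SERP feature could be computed from a scroll log. The `Feature` layout, the `(timestamp, scroll_y)` event format, and the function name are illustrative assumptions, not the instrumentation Google actually used.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    top: int      # px offset of the feature's top edge on the page
    height: int   # px height of the feature

def viewport_dwell(features, scroll_events, viewport_height):
    """Accumulate seconds each feature was visible in the viewport.

    scroll_events: list of (timestamp_sec, scroll_y) samples, sorted by time.
    A feature counts as visible if any part of it overlaps the viewport
    during the interval starting at that sample.
    """
    dwell = {f.name: 0.0 for f in features}
    for (t0, y), (t1, _) in zip(scroll_events, scroll_events[1:]):
        dt = t1 - t0
        view_top, view_bottom = y, y + viewport_height
        for f in features:
            # Overlap test: feature intersects the visible window
            if f.top < view_bottom and f.top + f.height > view_top:
                dwell[f.name] += dt
    return dwell
```

For example, with a knowledge panel at the top of the page and a result far below the fold, a user who lingers before scrolling produces long dwell for the panel and a shorter dwell for the lower result, which is exactly the per-feature, per-position distribution the study describes.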

Their data is summarized in the table below:

Comparing the differences highlights a few interesting takeaways.
(Figure: average values of NumberOfScrolls, TimeOnTask, TimeOnPage, and SatisfactionScore for four experimental conditions, with error bars indicating standard errors. Pairwise comparisons between groups are annotated as NS for not significant or * for a significant p-value.)

