As search becomes increasingly mobile, now approaching a majority of search visits each month, how we think about user engagement on search results pages needs to be updated from the old desktop paradigm of the Golden Triangle, which suggests attention concentrates in the upper left and decays down and to the right.
Mobile search use cases and device features change how users distribute their attention on a search results page. This not only changes mobile CTR curves and the relative value of each position; it also changes how search engines evaluate usage metrics and satisfaction. The effect is compounded by the presence of Knowledge Graph features (entity results, panels, and carousels) and Instant Answers (weather, scores, etc.), which may satisfy a user's query without logging a click or a scroll.
Collecting Mobile Usage Data
Google conducted a study with Emory University to look at how gaze, viewport metrics, and page metrics correlate with each other and with relevance and satisfaction metrics. In particular, they looked at what could be inferred using viewport metrics.
User behavior was measured in three ways:
Gaze Metrics – Traditional eye tracking: monitoring where the eye is looking and how long it dwells there. In particular, the study tracked how users look at Knowledge Graph and Instant Answer results, both relevant and irrelevant, which yields a gaze distribution by ranking position.
Viewport Metrics – The viewport is the portion of a webpage visible on the mobile device's screen. These metrics measure how long, in seconds, each feature on the page remains visible, including Knowledge Graph and Instant Answer elements, and can be broken down by rank position.
Page Metrics – Traditional analytics metrics, such as time on page and number of scrolls. A user's satisfaction rating can also be collected alongside these.
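To make the viewport idea concrete, here is a minimal sketch (not from the study) of how visibility time might be derived from scroll logs: given timestamped viewport positions and the page offsets of each SERP feature, accumulate the seconds each feature overlapped the visible screen. All names and the sample data below are illustrative assumptions.

```python
def visible_seconds(scroll_log, features):
    """scroll_log: list of (timestamp_s, viewport_top, viewport_height),
    ordered by time. features: dict name -> (page_top, height).
    Returns dict name -> total seconds the feature overlapped the viewport."""
    totals = {name: 0.0 for name in features}
    for (t0, top, vh), (t1, _, _) in zip(scroll_log, scroll_log[1:]):
        dt = t1 - t0
        for name, (f_top, f_h) in features.items():
            # Any vertical overlap between the feature and the viewport
            # counts as visible for this interval.
            if f_top < top + vh and f_top + f_h > top:
                totals[name] += dt
    return totals

# Toy example: a Knowledge Graph panel at the top of the page,
# and an organic result further down; the user scrolls once at t=2s.
features = {"knowledge_panel": (0, 400), "result_3": (900, 200)}
log = [(0.0, 0, 600), (2.0, 0, 600), (2.0, 700, 600), (5.0, 700, 600)]
print(visible_seconds(log, features))  # → {'knowledge_panel': 2.0, 'result_3': 3.0}
```

Aggregating these per-feature times across sessions, bucketed by rank position, is one plausible way to produce the viewport-by-position distributions the study describes.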
Their data is summarized in the table below:
Comparing the differences highlights a few interesting takeaways.
(Figure: average values of NumberOfScrolls, TimeOnTask, TimeOnPage, and SatisfactionScore for four experimental conditions, with error bars indicating standard errors. Pairwise comparisons between groups are annotated for statistical significance: NS – not significant; asterisks – significant p-values.)
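For readers unfamiliar with the error bars in such figures, a quick sketch of the underlying computation: the standard error is the sample standard deviation divided by the square root of the number of observations. The values below are made-up placeholders, not the study's data.

```python
import math

def mean_and_se(samples):
    """Return (mean, standard error), where SE = sample std dev / sqrt(n)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# Hypothetical NumberOfScrolls observations for one condition (toy data):
m, se = mean_and_se([3, 5, 4, 6, 2])
print(round(m, 2), round(se, 2))  # → 4.0 0.71
```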