Research into Inline validation

Inline validation may seem useful at first sight, but it should be implemented carefully and only where appropriate. If you are still sceptical about what its costs and benefits are, read on.

In this first article of the series I weigh the pros and cons, point to existing research and provide a checklist. In my other posts you can read about the design and implementation challenges you might come across.

Inline validation is generally useful where users would expect it, like when filling in a username to see if it is still available, or when typing a new password to check whether it is long and secure enough. If the system is not forgiving and is strict about input formats, the user might start to wonder whether their input will be accepted. Also, most users want to complete the form as quickly and accurately as possible. If they click Submit and expect to go to the next step, they will be less pleased to see the same form again (with errors communicated), to have to correct those errors, and to wait for the next step once more.

What is inline validation?

Inline, real-time, or instant validation is an error communication strategy in electronic form design. The system responds immediately to an error, constantly validating user input and providing immediate feedback about input errors while the user is filling out the form.

Two variants of inline validation exist:
a) On-the-fly: any error is communicated in real time, that is, as soon as the user has typed an invalid character in a text field.
b) OnBlur: any error is communicated only after the user leaves a field by clicking outside it or clicking another field (i.e. the field loses focus). This is generally the preferred method.

The opposite of inline validation is afterwards validation or on-submit (onSubmit) validation. That means that errors are communicated only after the user has clicked the submit button.
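
To make the distinction concrete, here is a minimal sketch of how the three strategies could be wired up for a single e-mail field, written in TypeScript against the plain DOM. The element ids and the isValidEmail and showError helpers are my own illustrative assumptions, not taken from any of the studies discussed below.

    // Minimal sketch: one field, three error-communication strategies.
    // Element ids, isValidEmail() and showError() are illustrative assumptions.
    const email = document.querySelector<HTMLInputElement>('#email')!;
    const form = document.querySelector<HTMLFormElement>('#signup')!;

    function isValidEmail(value: string): boolean {
      return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value); // deliberately simplistic
    }

    function showError(field: HTMLInputElement, message: string): void {
      let hint = field.parentElement!.querySelector<HTMLElement>('.error');
      if (!hint) {
        hint = document.createElement('span');
        hint.className = 'error';
        field.parentElement!.appendChild(hint);
      }
      hint.textContent = message;
    }

    // a) On-the-fly: validate on every keystroke.
    email.addEventListener('input', () => {
      showError(email, isValidEmail(email.value) ? '' : 'Please enter a valid e-mail address.');
    });

    // b) OnBlur: validate only when the field loses focus.
    email.addEventListener('blur', () => {
      if (!isValidEmail(email.value)) {
        showError(email, 'Please enter a valid e-mail address.');
      }
    });

    // c) OnSubmit ("afterwards" validation): validate only when the form is submitted.
    form.addEventListener('submit', (event) => {
      if (!isValidEmail(email.value)) {
        event.preventDefault(); // keep the user on the form so the error can be corrected
        showError(email, 'Please enter a valid e-mail address.');
      }
    });

In practice you would pick one of the three (or combine onBlur for most fields with onSubmit as a safety net) rather than attach all three handlers at once.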

Just show me the numbers (or research)

Does inline validation increase conversion? When using inline validation on sign-up and ordering forms, do we really win more confidence from users, or do we only disturb their mental flow while they fill in the form? What about those green check marks: do they really make filling in a check-out form more pleasant? Research about inline validation and its advantages for conversion is scarce and contradictory. There are no hard sales figures or proof of increased conversion, because the research is not yet conclusive about the link between form type (sign-up form, order form or other) and design (content, error behaviour) on the one hand, and the added value of inline validation on the other.

Research case A: inline validation didn’t help

One research paper on 'usable error messages on the web' [1] claims (not confirmed) that afterwards validation is more 'effective' than inline validation. I'm not sure what they mean by 'effective', but I deduce from the table of contents of the article that they define 'effective' as:

  • lower error rates.
  • lower time to complete.
  • higher subjective ratings.

Summarizing their findings:

When users are completing online forms, present the errors after the user has completed the form.

  • When completing an online form, users have two flows or modes: Completion Mode and Revision Mode.
  • Users tend to ignore immediate error messages when they are in Completion Mode.
  • Of the six possible ways to present error messages, three proved to be more 'effective' than the others:
      o Present the errors afterwards, embedded in the form, all at once.
      o Present the errors afterwards, embedded in the form, one by one.
      o Present the errors afterwards, in dialogues, one by one.

When presented with inline validation, users often simply ignored the messages on the screen and continued completing the form as if nothing had happened. These results led to the postulation of the "Modal Theory of Form Completion": users are in either "Completion" or "Revision" mode when filling out online forms. These modes affect the way users interact with the system: during Completion Mode the user's disposition to correct mistakes is reduced, and therefore error messages are often ignored.

Interestingly, most users don't actually look at the inline validation messages unless they are worried their answer might be wrong. As soon as the user hesitates, they look at the form and can see straight away whether their answer is right or not.

Research case B: inline validation helps

On the other hand, an aListApart.com article by Luke Wroblewski reports opposite, positive findings on inline validation:

The inline validation version had:

  • a 22% increase in success rates,
  • a 22% decrease in errors made,
  • a 31% increase in satisfaction rating,
  • a 42% decrease in completion times, and
  • a 47% decrease in the number of eye fixations.

A discussion on ixda.org summarises:

“His research suggested that:

  • validating data that shouldn’t be validated (e.g. first name, last name) was regarded as weird and confused users.
  • speedy, immediate checking of data that should be validated (e.g. is my choice of username available?) is welcomed by users and helpful
  • attempting to validate data before the user has finished typing is intrusive and disliked by users (a point we explore further in our book: it’s all about interrupting the user’s turn in the conversation). “
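
The last two points of that summary translate fairly directly into code: only send data to the server when it genuinely needs checking there (such as username availability), and wait until the user has paused before doing so. Below is a minimal sketch in TypeScript; the /api/username-available endpoint, the element ids and the 500 ms delay are assumptions of mine and do not come from any of the cited studies.

    // Minimal sketch: debounced username-availability check.
    // The endpoint, element ids and 500 ms delay are illustrative assumptions.
    const username = document.querySelector<HTMLInputElement>('#username')!;
    const status = document.querySelector<HTMLElement>('#username-status')!;

    let timer: number | undefined;

    username.addEventListener('input', () => {
      // Don't validate while the user is still typing: restart the timer on every keystroke.
      window.clearTimeout(timer);
      timer = window.setTimeout(checkAvailability, 500);
    });

    async function checkAvailability(): Promise<void> {
      const value = username.value.trim();
      if (value.length === 0) {
        status.textContent = '';
        return;
      }
      status.textContent = 'Checking…';
      try {
        const response = await fetch(`/api/username-available?name=${encodeURIComponent(value)}`);
        const result: { available: boolean } = await response.json();
        status.textContent = result.available
          ? `"${value}" is available.`
          : `"${value}" is already taken.`;
      } catch {
        // Fail quietly here; availability is checked again on submit anyway.
        status.textContent = '';
      }
    }

The same pattern with the timer removed and the check attached to the blur event instead gives the onBlur variant described earlier.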

Conclusion

Inline validation can either harm or benefit your customer satisfaction and conversion. As every form differs in content and design, so will each study differ in its findings. Not all design research is universally applicable; we should learn from the context rather than generalise conclusions.

What does this mean for me?

A/B test inline validation on your site. Be wary of blindly applying something that gave positive conversion results on one site to another site: different sites have different audiences, and even on the same site things may change over time. Conversion rate is an important monetary measure, but, just as the case studies above have done, measure other KPIs as well: usability, customer satisfaction, aesthetic appeal and similar emotional factors should not suffer (too) much as a result of increased conversion. As always, mind the development costs, and make sure the extra server load and any performance decrease remain acceptable.
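
If you do run such a test, the mechanics can stay simple: assign each visitor to a variant once, remember that choice, and log the same events for both groups. The sketch below is only an illustration; the cookie name, the 50/50 split and the logEvent helper with its /analytics endpoint are hypothetical and should be replaced by whatever analytics tooling you already use.

    // Minimal sketch of a 50/50 A/B split between inline and on-submit validation.
    // Cookie name, logEvent() and the /analytics endpoint are illustrative assumptions.
    type Variant = 'inline' | 'on-submit';

    function getVariant(): Variant {
      const match = document.cookie.match(/(?:^|; )validationVariant=([^;]+)/);
      if (match) {
        return match[1] as Variant;
      }
      const variant: Variant = Math.random() < 0.5 ? 'inline' : 'on-submit';
      document.cookie = `validationVariant=${variant}; path=/; max-age=${60 * 60 * 24 * 30}`;
      return variant;
    }

    function logEvent(name: string, variant: Variant): void {
      navigator.sendBeacon('/analytics', JSON.stringify({ name, variant, ts: Date.now() }));
    }

    const variant = getVariant();
    logEvent('form_viewed', variant);

    if (variant === 'inline') {
      // attach the onBlur / on-the-fly handlers sketched earlier
    } else {
      // rely on on-submit validation only
    }

    document.querySelector<HTMLFormElement>('#signup')?.addEventListener('submit', () => {
      logEvent('form_submitted', variant);
    });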

What’s next?

Read my related articles on inline validation:
Pros and cons of inline validation
Design and implementation challenges
When to use inline validation

Preparing for inline validation
Lies, damn lies and A/B tests

References

[1] Usable error message presentation in the World Wide Web: Do not show errors right away
by Javier A. Bargas-Avila, Glenn Oberholzer, Peter Schmutz, Marco de Vito and Klaus Opwis

[2] http://www.cxpartners.co.uk/thoughts/web_forms_design_guidelines_an_eyetracking_study.htm
[3] http://www.getelastic.com/real-time-inline-validation/#more-5612

Examples

Sign-up form examples:
Yahoo! mail sign-up page   (UPDATE: Yahoo has recently slightly redesigned their form and removed inline validation!)
Mint.com sign-up form
TypePad sign-up form    (Last checked 5/5/11)
Audible.com sign-up   (username onBlur, others onSubmit)    
Eventful sign-up form (uses inline hints. Did they get rid of their JavaScript alert dialog boxes?)

Order form examples:
Do you know of any e-commerce examples? Let us know.

Books

A short chapter devoted to inline validation in Robert Hoekman Jr's Designing the Moment


4 Responses to Research into Inline validation

  1. Thank you for this interesting topic. I was particularly pleased to find out
    about the paper by Bargas-Avila, Oberholzer, Schmutz, de Vito and Opwis
    which was new to me. I’ll call it ‘Bargas-Avila et al’ for short here.

    That paper perfectly illustrates the problems I was discussing in my essay
    “Problems of reading research papers for practitioner purposes”.
    http://www.usabilityprofessionals.org/upa_publications/jus/2007november/jarrett.html

    Bargas-Avila et al. have put a lot of careful effort into researching
    something *that would never happen in the real world*. They deliberately
    inserted specific validations into a form in a way that would immediately
    be condemned as bad practice by any competent forms designer. For example:
    they required that ‘Name’ be entered in capital letters without any
    indication that this was necessary. Real world: who would do that? And if
    for some crazy reason there was a really important requirement for capital
    letters, why not just fix it in the back-end programming? And all their
    other validations were like this.

    So we have an unnatural situation – very strange and unusual validations.
    Then they varied the ways that these validations were presented. That part of
    the experiment seems to be quite good. But if you give users weird things to
    do, they react in weird ways. Therefore: I can’t rely on any conclusions
    taken from the weird stuff.

    Had these researchers looked at typical errors on competently-designed
    forms, we might have learned something of value. As it is, the research is
    irrelevant to practice – i.e. to any designer who is working on a real-world
    form.

    Luke Wroblewski’s research was done on the basis of a competently-designed,
    real-world form that had validations that are representative of current
    practice in web design. This is definitely worth reading – and then thinking
    about some more. Is the form that *you* want to design similar to Luke’s?
    Easier? Harder? Are your users typical web users? Or not? If not, how is
    that likely to affect their reactions to your form?

    Best
    Caroline Jarrett
    http://www.formsthatwork.com


  2. Rob Gillham says:

    Thanks Lucy,

    At the risk of taking on Caroline – who is an acknowledged authority – I’m on the side of Bargas-Avila et al. This is a proper study, conducted in robust research conditions, subject to peer review and published in a respected journal: http://portal.acm.org/citation.cfm?id=1235978

    The others, whilst very interesting and written by well-known practitioners, are not subject to the same rigour and are therefore not really comparable.

    So, whilst I understand and agree with many of Caroline’s points about academic research, sources such as blog posts and proprietary research DO NOT stand up to the same level of scrutiny as published research and cannot be considered the same thing for the purposes of a discussion such as this one.

    • Hi Rob

      Just goes to show that even the most respected journals can slip up.

      Have you read the Bargas-Avila et al paper? If so, you’ll see what I mean. And my point is that research is not always relevant to practice. Particularly so in this case, in my view.

      In the end, are you really going to disagree with me about my key point? To be specific: “Think about your form, your users, and your context. And only then consider whether to follow any set of recommendations: mine, Luke’s or published papers”.

      And I’m sure that you’ll find it hard to disagree with this: Whatever else you do, test your form with your users and then make changes based on what you learn. Repeat until the form works.

      best
      Caroline
