How non-standard language affects the credibility of your online review
Say you need a break from the daily hustle and bustle. A short city trip or a lavish seven-course meal, it doesn’t matter, as long as it takes little preparation and the quality is guaranteed. How would you go about it? One of your steps would probably involve browsing the web for hotels and restaurants and taking a quick look at what people have to say about them in their reviews, in addition to the commercial information and advertising you get from the hotels and restaurants themselves. Have you ever stopped to wonder which variables determine the credibility you attach to the reviews you read and, perhaps more importantly, do you take those variables into account when you write reviews yourself? Surely you want people (both consumers and hotel and restaurant management) to believe you, but how can you make that happen, and what determines your own assessment when you read other people’s comments?
Existing research has already shown that the expert status of the reviewer has a significant impact, as do the online medium used and the presence or absence of a profile picture (preferably not the one with the hideous Christmas jumper). Message polarity also has an impact and, believe it or not, negative reviews are deemed less credible than positive ones (despite our innate curiosity to find out what those very negative reviews have to say about the service provided). In the remainder of this blog post, however, we’d like to focus on yet another important and obvious variable: the way the message is wrapped language-wise.
Three specific manifestations of language will be focussed on, based on research carried out at Ghent University:
- The impact of language mistakes (e.g. verb conjugation and spelling)
- The impact of substandard uses (e.g. informal spoken language rendered in divergent orthographic representations, such as English wanna and gonna)
- The impact of flooding (i.e. the lengthening of letters, as in “the food was sooooo bad”)
These criteria were selected because of their attested frequency and spread in online environments, further fuelled by a general informalisation of language, with increased leniency and laxity towards divergent patterns. The question remains, however, whether the same degree of tolerance holds in contexts where writers themselves assume a critical attitude towards the quality of the services provided. In other words, can your language be yolo when you complain about yoghurt? The experiments were based on an existing negative restaurant review in Dutch that contained errors, substandard language and flooding. This version was compared to a standard-language version of the same review, a version with errors only, a version with substandard language only and a version with flooding only. The questionnaire that accompanied the review probed perceptions of credibility, reliability, professionalism and usefulness, as well as the inclination to follow the advice given.
One of the main conclusions is that despite their abundance on the Internet and the coolness factor they have recently acquired in spoken language (see Lybaert 2017), substandard features are not condoned in review contexts. In this respect, substandard uses are no different from outright language mistakes: both have a negative impact on all of the perceptions involved. Apparently, people just don’t buy negative quality assessments if your own language does not live up to the Standard Dutch quality benchmark, not even in laymen’s online reviews. What about the boosting function of flooding? Is soooo bad more convincing than so bad? Not really: the data showed only a very slight increase in content credibility in the standard version, at the cost of perceived text quality. In fact, in combination with errors and/or substandard features, flooding seemed to make matters worse, reinforcing the perception of the reviewer as overly emotional and not quite the sharpest tool in the box.
In sum, in online review environments, standard language stands out in terms of credibility, despite the spread of substandard uses in many other domains and manifestations of language. Even the creative use of flooding to support opinions does not seem to be very effective, at least not in very negative reviews. The question remains, however, whether such negative attitudes also prevail in positive reviews, which are deemed more credible anyway. Maybe a positive attitude towards the services provided also translates into a more positive attitude towards creativity in the compliments that are given? Stay tuned for further research, and in the meantime: in case of complaints, keep it simple, keep it standard, or go to better restaurants ;).
Mathias Seghers and Bernard De Clerck, Department of Translation, Interpreting and Communication, Ghent University.
This article is based on: Halsberghe, G. (2018), ‘The assessment of non-standard Dutch deviations in online reviews: errors, tussentaal and flooding. Empirical research on reader perceptions in a review-based context’; Depovere, S. (2018), ‘The impact of language and gender on the credibility of online reviews’; Heymans, G. (2009), ‘Hoe “schoon” vinden taalgebruikers het Schoon Vlaams? Een perceptie- en attitudeonderzoek over tussentaal’; Loete, T. (2018), ‘The impact of language and gender on the credibility of online reviews’; Lybaert, C. (2017), ‘A direct discourse-based approach to the study of language attitudes: the case of tussentaal in Flanders’; and Vancompernolle, H. (2012), ‘Normgevoeligheid: attitude van Vlaamse jongeren ten aanzien van het Standaardnederlands, de tussentaal en het dialect’.