Tropedia

This is not one of the Kissing Tropes. Just getting that out of the way.

When something gets dubbed into a language it wasn't originally in, that's when the trouble starts for this trope. The actors have to lip-sync to the existing footage, which is tricky if they don't want to turn it into a Hong Kong Dub. Contrary to popular belief, the people who have to deal with Lip Lock are not the voice actors (who only act out what is written in the script) but the people who translate and write the dub script (sometimes the translator and the script writer are the same person, sometimes not). Script writers usually read lines out loud while writing to make sure that they fit the mouth flaps.

If the script writer doesn't take care to match the lip flaps, the new actors are forced to speak at strange tempos to fit them: stretching syllables, inserting awkward pauses, or rushing through lines.

In Anime, the Japanese studios create the animation first and then record the voices. This means that characters' mouths often just move up and down; however, the larger the animation budget, the more effort the studio puts into making the lip flaps match the dialogue (Honey and Clover is a good example, with mouth flaps that match the lines perfectly). With American cartoons, the voices are recorded first and the animation is built around them. This means the mouths move in a manner much more consistent with the dialogue, at the cost of making the show more difficult to translate into another language. The difference can be seen very clearly in the English dub of Akira, a Japanese animated movie which, unusually, recorded the voices before the animation and took pains to make the mouth flaps match the dialogue; as a result, the English version looks distinctly off. Ironically, live-action dub scripts are easier to write, because the natural movements of the mouth while speaking are a lot more vague than in cartoons.

Due to the nature of dubs, no script writer can avoid the curse of Lip Lock. A skilled writer can make it a lot less noticeable, but can't do away with it entirely.

There are ways of avoiding it, but all have their own disadvantages:

  • Editing the footage so that the Mouth Flaps match the new dialogue. This is expensive in animation, so it's very rarely used, and it's never used in live-action dubbing, where it's well-nigh impossible. It's a lot cheaper in video games, where all you need to do is edit the facial animation instructions (which can be handled by software), so it tends to be used more often there. Doing it for in-engine scenes is simple; redoing a pre-rendered cutscene is effectively just like redoing an animated production. It is, however, used in abridged series like Team Four Star's Dragon Ball Abridged.
  • The translator/writer getting creative with the translation so that the lines fit the Mouth Flaps better. This is what usually happens, sometimes leading to meaning being lost, or useless fluff being gained. The degree to which it's done varies but most translators try to find a compromise to match the lip flaps but also get the meaning across. (Unless they just don't care.)
    • Note though that this is an unavoidable product of every translation, be it dub or subtitle. The only exceptions are voiceovers, and only to a certain extent.
  • Filming for Easy Dub: when there's no lip flap to mouth to, say the character is talking off-screen or standing with his back to the camera, the script writer's work is a lot easier. Forcing dialogue off-screen, however, isn't really workable; even when it can be done, it screams of cost-cutting regardless of the intention, and 4Kids and '90s American anime dubs demonstrate that it doesn't always work, and why you shouldn't add dialogue to these scenes just because you can.
    • A related and generally more effective method is when the character has No Mouth, so the translator only needs to match the length of the line. (Lord Zedd from Power Rangers is an example of this, and was voiced by a scriptwriter.)
    • Also often used to great effect in movie dubs when the camera is focused on something written. Instead of just subtitling the text, sometimes the line will be voiced, in translated form, by a character off-screen when it makes sense in context. If done well, you won't even notice the voiced line wasn't in the original version.
  • Subtitles.

Considering the flak companies tend to get for playing with the original footage, it's best to avoid doing so as much as possible and instead rely on good translators and script writers to keep the worst of the effects at bay. Of course, this depends on the companies' ability to find good script writers, which is the hard part.

Examples of Lip Lock include:


Anime


 Keiichi Casey: GET out — my way! I'm — going — to — a — FEE-ey-staaaa!

Luffy: You and your NAVY...are ruiningCoby'slifelongDREAM!

  • Besides the normal edits to the dialog necessary for timing, the North American dub of Ranma ½ used a video editing system (WordFit) to tweak the mouth-flaps.
    • To be fair, most of Ocean Group's dubs have this editing system in place. Including Gundam Wing and their version of Dragon Ball Z (which itself has an example below).
  • The 1986 movie dub of Fist of the North Star suffered from this a lot, though not as much as some of the other titles.

 Raoh: See... It's different now... I'm a king... and a king... must demand respect from everyone.

  • The dub of Bobobo-bo Bo-bobo actually engages in some Lampshade Hanging regarding this. In episode 53, Bobobo states, "Now I'm going to tell all of ya where we're...going. I just hope by the time we arrive I can speak without weird pauses."
  • As mentioned above, the Ocean Group dub of Dragon Ball Z has quite a few notable examples, the first being the (in)famous scene where Vegeta, voiced by Brian Drummond, is asked by Nappa what his scouter says about Goku's growing power level, at which point he takes the scouter off and growls "It's over nine thousa-aaaaaand!" before crushing it in his hands. This has since become an internet meme. Another instance of this elongated delivery is when Vegeta has Gohan by the scruff of his neck and says "I'm going to crush you like a grape in the palm of my hand, you understa-aaaaaand?!" in an especially raspy tone.
  • In Gankutsuou, this actually resulted in the somewhat trite "Wait and hope!" of the original The Count of Monte Cristo being rendered into a memorable Catch Phrase uttered at the end of each "On the Next..." Week's Episode teaser: "Bide your time, and hold out hope!"
  • The dub of Death Note had the very Narm-inducing line (which was also both of the original creators' favorite part in the series) of L whispering "I-wanted-to-tell-you, I'm L!", translated from a considerably shorter Japanese sentence. Thankfully, Alessandro Juliani made it sound creepily intimate.
    • The original line was "Watashi wa Eru desu", meaning "I am L". The issue was that a straight translation would have left about five extra syllables' worth of mouth flaps over, so they had to add something to fill them.
  • The dubbing of Transformers Energon was notably bad about this; whenever an additional syllable was needed, the dub had the characters say various things that sound like they were made up on the spot, causing a constant stream of "what?", "uh?", etc. (or as tfwiki.net calls it, "The Pain Count"). Transformers Cybertron suffered less from this and had a better dub script overall.
  • Gash Bell suffered from this immensely during the musical numbers, in which the dubbers would insist on having the VAs sing along to the lip flaps at the expense of any sense of harmony and timing. A good example is the infamous "Very Melon" tune, where the lip flaps did not match at all in the original, but in the VIZ dub they painstakingly made the voices match, which ruined the rhythm of what might have been a nice tune.
    • Most of the "Very Melon" song was in Gratuitous English anyway, so what exactly ruined the rhythm? The 'Yeah!'s being added fit the rhythm better, I thought, and the few lines that weren't in English originally didn't have any lip flaps to be locked into.
  • Spider Riders (of which an actual Japanese version may or may not exist) appears to feature this in spades, to the point where it takes several full episodes to get over the fact that most of the characters come off as having serious mental illnesses. Luckily, it seems like the actors (or the sound editors) get better and better as the series rolls on, so gratuitous pauses grow more and more rare. Strangely, some characters seem almost entirely exempt from this throughout the show.
    • "Will you be...the Inner World's savior or...its destruction?" The line is awkward enough with the strange pauses, even without being so horribly acted.
  • Heroic Age has a rather hilarious example in the third episode when Age says that he likes to paint then enunciates it, so the dub has to act like "paint" has three syllables ("pa-ain-tu").
  • Digimon (the original version) usually uses only one or two voice actors to say something in crowd shots; the rest of the scene is completely silent. In the dub, Saban usually got a handful of voice actors into those scenes, which makes the crowd scenes sound more natural, although it makes the "important" lines harder to hear.
  • The infamous Mega Man death scene from Mega Man NT Warrior. As he dies, Rockman's original final words are "Ne...tto...kun...". This was a tricky line for the dubbers to adapt: first, because Netto's English name is the monosyllabic "Lan", and second, because the lips were carefully animated in that scene. The dub opted for "De...le...ted...", which just seems random and loses most of the emotion.
    • "Good...bye...Lan" may have worked.
  • Pokémon got it easy with the titular Mons. Since they only say their names or unintelligible noises (generally the former in the dub), all the dubbers had to do was play with how the Mons used the syllables of their names. Of course, some of them have the same name in both languages (Pikachu being the best-known example), which means no dubbing of their lines has to be done at all.
    • Notably, the 4Kids dub didn't make much of an attempt to make the lips match, which was even lampshaded in a dub-only scene of one episode. There WERE a few renamings here and there (Lorelei in the games became Prima in the dub) which were mostly done to avoid making a couple lines too much longer than the original.
    • A particularly Narm-ful example shows up early in the series, when Ash's response to all the Joys looking alike is "Yeah, it's a Joy-ful... world" but the pause, combined with the actress' unenthusiastic delivery, makes it sound like he's doing a Who Writes This Crap? in his head.
  • Speaking of 4Kids, Yu-Gi-Oh! sometimes had a hard time with one of its renamings: Jounouchi to Joey. It usually wasn't a problem, since the dub was more of a rewrite than a translation, but whenever Yugi said "Jounouchi-kun" isolated (which happened a lot), 4Kids had to think of new, inventive ways of filling the flaps. Sometimes it worked, sometimes it didn't ("Be careful, Joey!").
  • Axis Powers Hetalia has some problems with this. Specifically, the very first scene of the very first episode, which consists of Loads and Loads of Characters all trying to have their Establishing Character Moment at the same time. No, they're literally all talking at the same time. Needless to say, Hilarity Ensues. Can you keep up?

Film

  • Ignored and spoofed in Kung Pow! Enter the Fist, where the writer/director/main actor went out of his way to write joke lines for the actors to speak so he could dub over them later. For instance, the main character calmly says "I implore you to reconsider", even though it's very obvious on screen that he's shouting.
    • A bonus audio track on the DVD reveals that the line being dubbed over was "I'M SOMEBODY'S MOMMY!!"
  • The English dub of Godzilla Raids Again (as Gigantis the Fire Monster) went to extreme lengths to make the English dialogue match the mouth movements of the Japanese actors, which has the unfortunate side effect of making the actual content of the dialogue almost incomprehensible.
    • An example of which was a Japanese word translating to 'stupid fool'. As the lips still had to be in sync, it was replaced by 'banana oil', which makes for a very nonsensical insult.
    • And Mothra vs. Godzilla involves the great line "Yeswealwayskeepourpromises." Apparently, the equivalent Japanese word is really really short.
  • Can be noticed a few times in the otherwise excellent English dub of Sky Blue, where Korean sounds don't match well with their English equivalents.
  • Djinn in Wishmaster does this.
  • A rare example in an English film, Pazuzu speaking through Regan in The Exorcist was done by Mercedes McCambridge in post-production rather than Regan's actress Linda Blair, due to the former having a deeper, androgynous, and more demonic voice. As such, there are moments when the voice doesn't match up with Blair's lip movements.
  • Final Fantasy VII: Advent Children has this because it's all CGI with accurate mouth movements for Japanese. I quote: "Dilly dally shilly shally."
  • Italian productions are often filmed in English, with American or British lead actors. Sometimes, the supporting actors deliver their lines in Italian, with the English dubbed in later. This is much less noticeable than if all of the dialogue is dubbed, but it can lead to awkward situations such as in Suspiria, which includes a scene with an American, an Italian, and a German, each speaking in their own native language.
    • Barbara Steele complained that the production company's policy of dubbing all the voices meant that her own dialogue was dubbed over in Mario Bava's Black Sunday, even though she was speaking English.

Video Games

  • Calling is an extremely chronic offender. At the end of the opening cutscene, when Rin answers her phone, she clearly mouths "Moshi moshi", but in the dub it was changed to a drawn-out "Hellloooo?" Half of what the characters say doesn't even sync up with their mouth flaps.
  • Final Fantasy X occasionally suffered from this because a fixed, 'rhubarb-rhubarb' mouth-flap loop was used. Ignoring serendipitous cheats like Auron's face-obscuring collar letting all his lines sound natural and smooth, Yuna in particular suffered badly, as her voice was already soft and shy. Thankfully, the sequel smartened up the lip sync.
    • But the sequel only re-synced the lip flaps on certain shots of certain scenes. 90% of the time, you're watching the "rhubarb-rhubarb" flaps (and, more worrying, the glazed expression on the characters' faces while doing it). However, Yuna's voice actress does spend less time trying to fit the lip flaps exactly.
    • FFX actually uses technology to speed up the voice clips if they're a little too long. Most of the time this isn't really noticeable, but if I quote the words "WithYunabymyside" I'm sure someone will recognize it.
    • It's also blatantly obvious when in the original Japanese script a character — usually Yuna — just said "hai", because it's usually replaced with a quick/mangled "okay" (or, at least once for Tidus, "oi"). This is presumably because the "s" at the end of "yes" is a fricative, but makes for some awkward scenes.
  • Since Dirge of Cerberus uses the same cutscene engine from Final Fantasy X, this was inevitable. There are many examples, but a particularly Narm-ish one comes to mind — when in the Japanese version Vincent, witnessing Azul's demon form, says "Nanda", in the English version he gives us "Whatthehell?"
    • Makes one wonder why they didn't just go with "What the...?"
  • One pivotal scene in Final Fantasy XII is rather derailed by the ringing declaration that a certain individual "Is not thetypetotakeBASEREVENGE!"
  • The game Yakuza suffers from this really badly towards the end, possibly because they ran out of budget. Without warning, characters will suddenly start using every trick in the book: emphasizing random syllables, pausing in the middle of lines, and speeding up and slowing down their speech at random. It's even more painful because the game has a high-quality voice cast, rendered unable to act by the insanely strict lip lock.
  • The Codec dialogues in the original Metal Gear Solid were surprisingly well-synced to whoever was talking. The remake, The Twin Snakes, however, suffered from a lazy fix of making the character's mouth move based on how many letters were displayed on the screen, paying no attention to pauses. Most egregiously, Visible Silence caused the characters' mouths to jabber meaninglessly while they said nothing.
  • Almost all early PlayStation games with voiced cutscenes suffered from this. With No Budget or space to modify the scenes, and no budget to hire experienced voice actors, Lip Lock was either ignored (leading to a Hong Kong Dub) or the delivery was completely ruined. Examples include Zero's "WhatamIfightingFOOOOOOOOOOOOR!" in Mega Man X4 and Xenogears (all of it).
    • To give you some idea of how bad this was, the bar was set so low that the above-mentioned Mega Man X4 was generally considered to have unusually good voice acting for a video game when it came out.
  • Since There Is No Such Thing as Notability, this otherwise surprisingly good Kingdom Hearts Fan Dub suffers from this in a few places.

 Ven: We're friends. Therefore, I wanted to ask you...something.

  • Jeanne d'Arc contains quite a few anime sequences, but nearly all the dialogue sounds ridiculously rushed. It's even worse than usual because most characters speak with French accents of varying strength.
  • In the Ghostbusters game for the PlayStation 3, none of the characters' mouths ever synced with what they were saying. Considering that they managed to get almost the entire cast of the movies involved in the voice acting, it's really disappointing that the animators couldn't have done a better job.
  • Dissidia Final Fantasy suffers heavily from this. Every other cutscene has the characters talking with random punctuation ("I mustn't ruin. Everybody's hopes."), making some of the dialogue sound uncomfortably awkward.
    • They did better with certain characters: Garland, Golbez, Exdeath, Gabranth, and Dark Knight Cecil all have closed helms, making it easier to make good-sounding sentences, due to the lack of lips to sync.
      • Though there's still the scene with Dark Knight Cecil saying, "I must. (Minute-long pause.) Do this."
      • Cloud gets hit with this particularly badly. It ends up with him stopping mid sentence several times with very obvious pauses, and practically everything he says is a variant on "I just... (rest of sentence)." Considering Advent Children had much more complex lip sync, it's surprising how bad Steve Burton sounds at some points in comparison.
      • And compare everyone's dialogue in the cutscenes to the pre- and post- battle lines, where there is no lip sync to deal with.
      • This can also be a problem with some scenes in the Japanese dub (the scene with Cosmos and the Warrior of Light in Ultimecia's Castle is a good example) due to how the mouths are animated.
  • Kingdom Hearts went from being an example at the top of this page in the two main installments to a lip-locked mess in Re:Chain of Memories and 358/2 Days.
    • Both were the result of having pre-rendered cutscenes instead of the usual game-rendered ones; redoing the lip sync for the former is more expensive. And yet they did it in some rare instances in 358/2 Days anyway: namely, when DiZ says "She?" (referring to Xion), which in Japanese had the three-syllable lip movements of "Kanojo?"
    • For Kingdom Hearts II they didn't bother to change the lip movements for the flashbacks to the first game.
  • The English dub of Sakura Wars: So Long, My Love makes no attempt to match the voices to the Mouth Flaps outside of animated cutscenes. This makes the dialogue more natural at the expense of agreement between the visuals and spoken dialogue.
  • Digital Devil Saga. Even though the voice actors are very experienced in voicing foreign animation, the dialogue has as many awkward pauses and speed variations as the other examples on this page; the care put into matching the lips to the Japanese dialogue certainly doesn't help. It's less noticeable when the characters are in their demon forms, but the rest of the time... yeah.
  • Onimusha, oh so, so much. Especially in Dawn of Dreams.
  • Infinite Undiscovery suffered from this trope hard. There are multiple scenes where characters are talking to each other, yet their lips rarely, and in some cases never, move at all, giving the impression that everyone is either speaking telepathically or a skilled ventriloquist.
  • Starcraft and Warcraft III had "cinematics" (non pre-rendered in-game cutscenes) that suffered from this. While the in-game models you were looking at had no speech animations, the "portraits" of the characters displayed at the bottom of the screen did. Which is to say, they had one speaking animation each. You'd see the same generic lip movements from the characters if they were delivering highly emotional dialog, or if they were making funny pop culture references because you clicked them too many times.
    • That's due to engine limitations. The talking animations are all prerecorded; the animation and the audio playback are activated by separate triggers.
  • Sly Cooper, being one of the very rare series fully translated into loads and loads of other languages, inevitably runs into this problem. Just play any of the games in any non-English mode and you're guaranteed to find at least one or two lines that run ten seconds shorter or longer than the original dialogue.

Exceptions

Film

  • In countries with a long dubbing tradition, such as Spain, Germany, France, or the Latin American countries, the translators, script adapters, and voice actors are particularly well trained and experienced in dubbing, so they know how to deal with these situations efficiently most of the time. Especially when English is the original language, since most imported media comes from the US.
  • Pixar has handled the dubbing of some anime imports, such as the films of Miyazaki. They tend to be meticulous in reworking the dialog to fit the lips and the meaning of the original script, even doing several takes in dubbing to see what works. Getting good voice actors doesn't hurt. Or the fact that the original creator has told Disney in no uncertain terms that gratuitous changes to the movies were to be avoided.
    • Neil Gaiman, who wrote the English script for Princess Mononoke, said in an interview, "People have been asking if we reanimated it. There are two schools of thought coming out from the film. School of Thought #1 is that we reanimated the mouth movements. School #2 is that they must have made two different versions at the same time."
    • In the special features of Howl's Moving Castle, there is a clip of Christian Bale desperately trying to speak his line fast enough to match the animation, then commenting on how it's a lot of words. The script editors changed it on the spot.
  • The translator who worked with Sergio Leone on the Dollars Trilogy was given basically free rein to rewrite dialogue to make it fit better with the Italian and Spanish lip movements, to the movies' infinite benefit (consider Leone's abuse of extreme closeups and now consider what might have been). He describes in the DVD special features of The Good, the Bad and the Ugly how he spent a whole day trying to figure out how to translate the line "più forte" ("louder"), eventually opting for "more feeling".

Video Games

  • Half-Life 2 and other games based on Valve's Source engine have a phoneme editor built in. It takes the written script and the recorded dialogue and generates a near-perfect set of facial animation instructions for the character. So changing languages, at least for languages written in the Latin alphabet, is as painless as feeding the game new scripts and new sounds.
  • Shadow Hearts II eliminates lip lock in cutscenes by having the characters ad-lib or grumble before or in between certain lines. The effect makes the dub sound much more natural than in most video games.
    • And yet many lines still seem to come just before or after the mouths move.
  • Despite most of the Final Fantasy English dubs having to cope with severe cases of Lip Lock as stated above, Final Fantasy XIII completely averts the issue: Square Enix went through the effort of reanimating the lip movements, both in-game and for every cutscene, so as to fit the English dialogue. Whilst re-syncing in-game animations isn't particularly uncommon, doing it for pre-rendered CG cutscenes certainly is. This fortunately results in what can be considered a rather good dub (although Your Mileage May Vary for characters such as Vanille).
    • Another Square Enix game, The Last Remnant, was also given this treatment in its cutscenes. There were exceptions, as some cutscenes used generic animations, which included mouth movements and facial expressions. Unfortunately, switching to the Japanese voices does not change the lip movements, resulting in occasional parts of the dialogue not being properly synced.
    • Even before then, Square Enix had already begun averting this trope. There are the already-mentioned Kingdom Hearts games for the PlayStation 2 (except for the Chain of Memories remake), as well as the PSP game Birth by Sleep. Besides them, Crisis Core, with a cutscene engine reminiscent of Kingdom Hearts', also redid the lip movements for the English version.
    • They first encountered this problem with Final Fantasy X. The localization team left the original Japanese lip movements intact, which made it very difficult for the English-speaking actors (Hedy Burress in particular) to read the lines in a cadence that sounded natural (Yuna's labored pauses became a minor meme). Final Fantasy X-2, however, scrapped the original mouth movements completely and re-animated the mouths for the American dub, freeing the actors to finally speak their lines like they weren't choking on their own tongues.
  • The old Wing Commander games had a built-in cutscene lip-syncer that worked from the subtitled script text. It even took into account and lip-synced the name you chose for the protagonist — see it for yourself the next time you try out the original! This from a game that didn't even have digital speech.
  • Similar to the Wing Commander example above, Working Designs' localization of Popful Mail actually animated the lip sync for the in-game dialogue sequences based on the actual spoken dialogue — though the developers themselves admitted in the manual it occasionally resulted in a Hong Kong Dub.
  • Trinity Universe and Hyperdimension Neptunia also use separate lip animations for their English- and Japanese-language voice tracks. In the latter game, cutscenes that only have Japanese-language voice tracks (such as the ones for DLC characters Red and 5pb) don't use any lip movements when playing the English-language track.
  • Tales of Vesperia and Tales of Symphonia: Dawn of the New World had their skits mostly reanimated to match the English voice actors' performances.
  • Team Fortress 2: Valve re-"shot" the machinima "Meet the [Class]" videos when creating the other language versions, using software to help them lip-sync perfectly down to the last syllable.
  • Catherine likewise averts this by having its animation edited to match the dub for its English release.
  • Devil May Cry 4's character facial animations are motion-captured from the voice actors themselves while they speak their lines. They redid the motion capture for each of the different languages. Talk about dedication.

Web Original
